
Solving the 2D Navier-Stokes Equation with FNO


Overview

Computational fluid dynamics (CFD) is one of the key technologies of 21st-century fluid mechanics: it solves the governing equations of fluid dynamics numerically on a computer in order to analyze, predict, and control flows. Traditional methods such as the finite element method (FEM) and the finite difference method (FDM) involve a complex simulation pipeline (physical modeling, mesh generation, numerical discretization, iterative solving, and so on) and incur high computational cost, and are therefore often inefficient. Using AI to accelerate fluid simulation is thus well motivated.

In recent years, the rapid development of neural networks has provided a new paradigm for scientific computing. Classical neural networks map between finite-dimensional spaces and can only learn solutions tied to a particular discretization. Unlike classical neural networks, the Fourier Neural Operator (FNO) is a new deep-learning architecture that learns mappings between infinite-dimensional function spaces. It learns the mapping from arbitrary function parameters directly to the solution, so a single trained model can solve a whole class of partial differential equations, and it generalizes better. For more information, see Fourier Neural Operator for Parametric Partial Differential Equations.

This tutorial shows how to solve the Navier-Stokes equation with a Fourier Neural Operator.

Navier-Stokes Equation

The Navier-Stokes equation, abbreviated N-S equation, is a classic equation of computational fluid dynamics: a set of partial differential equations describing the conservation of fluid momentum. Its vorticity form for two-dimensional incompressible flow is:

\[\partial_t w(x, t)+u(x, t) \cdot \nabla w(x, t)=\nu \Delta w(x, t)+f(x), \quad x \in(0,1)^2, t \in(0, T]\]

\[\nabla \cdot u(x, t)=0, \quad x \in(0,1)^2, t \in[0, T]\]

\[w(x, 0)=w_0(x), \quad x \in(0,1)^2\]

where \(u\) is the velocity field, \(w=\nabla \times u\) is the vorticity, \(w_0(x)\) is the initial vorticity field, \(\nu\) is the viscosity coefficient, and \(f(x)\) is the forcing term.

Problem Description

This case uses the Fourier Neural Operator to learn the mapping from the vorticity at one time step to the vorticity at the next, thereby solving the two-dimensional incompressible N-S equation:

\[w_t \mapsto w(\cdot, t+1)\]

Technology Path

MindSpore Flow solves this problem in the following steps:

Creating the dataset.

Model construction.

Optimizer and loss function.

Model training.

Fourier Neural Operator

The Fourier Neural Operator architecture is shown in the figure below. In the figure, \(w_0(x)\) denotes the initial vorticity. The Lifting Layer maps the input vector to a higher-dimensional space; the result is passed through the Fourier Layers, which apply a nonlinear transform to the frequency-domain information; finally, the Decoding Layer maps the transformed result to the final prediction \(w_1(x)\).

The Lifting Layer, the Fourier Layers, and the Decoding Layer together form the Fourier Neural Operator.

Figure: Fourier Neural Operator architecture

The structure of a Fourier Layer is shown in the figure below. In the figure, V denotes the input vector. In the upper branch, the vector undergoes a Fourier transform, then a linear transform R that filters out the high-frequency modes, and then an inverse Fourier transform; the other branch applies a linear transform W; the two branches are summed and passed through an activation function to produce the Fourier Layer output (see the sketch after the figure).

Figure: Fourier Layer structure
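To make the two branches concrete, the following is a minimal NumPy sketch of a single Fourier Layer forward pass. It is an illustration only, not the FNO2D implementation used below: the truncation keeps only the lowest `modes` frequencies (a real implementation also keeps the mirrored negative frequencies along the first axis), and ReLU stands in for the actual activation.

import numpy as np

def fourier_layer(v, r_weights, w_weights, modes):
    # v: feature map of shape (H, W, C); returns sigma(IFFT(R * FFT(v)) + v W)
    h, w, _ = v.shape
    # upper branch: FFT, keep the lowest `modes` frequencies, apply linear transform R
    v_hat = np.fft.rfft2(v, axes=(0, 1))                      # complex, (H, W//2 + 1, C)
    out_hat = np.zeros_like(v_hat)
    out_hat[:modes, :modes] = np.einsum(
        "xyi,xyio->xyo", v_hat[:modes, :modes], r_weights)    # per-mode channel mixing
    spectral = np.fft.irfft2(out_hat, s=(h, w), axes=(0, 1))  # back to physical space
    # lower branch: pointwise linear transform W
    local = v @ w_weights
    return np.maximum(spectral + local, 0.0)                  # ReLU as the activation

# toy example: 64x64 grid, 20 channels, 12 retained modes per dimension
rng = np.random.default_rng(0)
v = rng.standard_normal((64, 64, 20))
r = rng.standard_normal((12, 12, 20, 20)) + 1j * rng.standard_normal((12, 12, 20, 20))
w = rng.standard_normal((20, 20))
print(fourier_layer(v, r, w, modes=12).shape)                 # (64, 64, 20)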

[1]:

import os
import time

import numpy as np

import mindspore
from mindspore import nn, ops, Tensor, jit, set_seed

The src package used below can be downloaded from applications/data_driven/navier_stokes/fno2d/src. Configuration parameters can be modified in config.

[2]:

from mindflow.cell import FNO2D
from mindflow.common import get_warmup_cosine_annealing_lr
from mindflow.loss import RelativeRMSELoss
from mindflow.utils import load_yaml_config
from mindflow.pde import UnsteadyFlowWithLoss

from src import calculate_l2_error, create_training_dataset

set_seed(0)
np.random.seed(0)

[3]:

# set context for training: using graph mode for high performance training with GPU acceleration
mindspore.set_context(mode=mindspore.GRAPH_MODE, device_target='GPU', device_id=2)
use_ascend = mindspore.get_context(attr_key='device_target') == "Ascend"

config = load_yaml_config('navier_stokes_2d.yaml')
data_params = config["data"]
model_params = config["model"]
optimizer_params = config["optimizer"]
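For orientation, here is a hypothetical excerpt of navier_stokes_2d.yaml showing the keys read by the code in this tutorial; the values are placeholders, and the actual file ships with the sample code linked above.

# hypothetical excerpt of navier_stokes_2d.yaml; values are placeholders
data:
  path: "./dataset"            # directory containing train/ and test/
model:
  name: "FNO2D"
  in_channels: 1
  out_channels: 1
  input_resolution: 64
  modes: 12
  width: 20
  depth: 4
optimizer:
  initial_lr: 0.001
  train_epochs: 150
  warmup_epochs: 1
summary_dir: "./summary"
save_checkpoint_epoches: 10
eval_interval: 10
test_batch_size: 1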

Creating the Dataset

Download the training and test data: data_driven/navier_stokes/dataset.

This case generates the training and test datasets following the setup used by Zongyi Li in Fourier Neural Operator for Parametric Partial Differential Equations, as follows:

Under periodic boundary conditions, initial conditions \(w_0(x)\) are sampled from the distribution:

\[w_0 \sim \mu, \mu=\mathcal{N}\left(0,7^{3 / 2}(-\Delta+49 I)^{-2.5}\right)\]

The forcing term is fixed as:

\[f(x)=0.1\left(\sin \left(2 \pi\left(x_1+x_2\right)\right)+\cos \left(2 \pi\left(x_1+x_2\right)\right)\right)\]

The data are generated with the Crank-Nicolson method using a time step of \(10^{-4}\), and the solution is recorded every \(t = 1\) time units. All data are generated on a 256×256 grid and downsampled to 64×64. This case uses a viscosity coefficient \(\nu = 10^{-5}\), with 19,000 training samples and 3,800 test samples.
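To illustrate the one-step learning task, the hypothetical helper below shows how vorticity trajectories could be sliced into (input, label) pairs; it is for intuition only, since create_training_dataset in src builds the actual MindSpore dataset.

import numpy as np

def make_one_step_pairs(vorticity):
    # vorticity: trajectories of shape (num_trajectories, T, 64, 64)
    inputs = vorticity[:, :-1].reshape(-1, 64, 64, 1)  # w(., t), NHWC layout
    labels = vorticity[:, 1:].reshape(-1, 64, 64, 1)   # w(., t+1)
    return inputs, labels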

[4]:

train_dataset = create_training_dataset(data_params,
                                        input_resolution=model_params["input_resolution"],
                                        shuffle=True)
test_input = np.load(os.path.join(data_params["path"], "test/inputs.npy"))
test_label = np.load(os.path.join(data_params["path"], "test/label.npy"))

Data preparation finished

Model Construction

The network consists of one Lifting layer, multiple stacked Fourier Layers, and one Decoding layer:

The Lifting layer corresponds to FNO2D.fc0 in the sample code and maps the input data \(x\) to a higher dimension;

The stacked Fourier Layers correspond to FNO2D.fno_seq in the sample code; this case uses the discrete Fourier transform to convert between the physical domain and the frequency domain;

The Decoding layer corresponds to FNO2D.fc1 and FNO2D.fc2 in the code and produces the final prediction.

[5]:

model = FNO2D(in_channels=model_params["in_channels"],
              out_channels=model_params["out_channels"],
              resolution=model_params["input_resolution"],
              modes=model_params["modes"],
              channels=model_params["width"],
              depths=model_params["depth"])

model_params_list = []
for k, v in model_params.items():
    model_params_list.append(f"{k}-{v}")
model_name = "_".join(model_params_list)

Optimizer and Loss Function

The relative root-mean-square error is used as the training loss function:
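Assuming RelativeRMSELoss implements the relative L2 loss used in the FNO paper, for a batch of \(N\) samples it averages the sample-wise relative error

\[\mathcal{L}=\frac{1}{N} \sum_{i=1}^{N} \frac{\left\|\hat{w}_i-w_i\right\|_2}{\left\|w_i\right\|_2}\]

where \(\hat{w}_i\) is the predicted vorticity field and \(w_i\) the corresponding label.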

[6]:

steps_per_epoch = train_dataset.get_dataset_size()
lr = get_warmup_cosine_annealing_lr(lr_init=optimizer_params["initial_lr"],
                                    last_epoch=optimizer_params["train_epochs"],
                                    steps_per_epoch=steps_per_epoch,
                                    warmup_epochs=optimizer_params["warmup_epochs"])
optimizer = nn.Adam(model.trainable_params(), learning_rate=Tensor(lr))

problem = UnsteadyFlowWithLoss(model, loss_fn=RelativeRMSELoss(), data_format="NHWC")

Model Training

With MindSpore >= 2.0.0, neural networks can be trained using the functional programming paradigm.

[7]:

def train():
    if use_ascend:
        from mindspore.amp import DynamicLossScaler, auto_mixed_precision, all_finite
        loss_scaler = DynamicLossScaler(1024, 2, 100)
        auto_mixed_precision(model, 'O3')

    def forward_fn(train_inputs, train_label):
        loss = problem.get_loss(train_inputs, train_label)
        if use_ascend:
            loss = loss_scaler.scale(loss)
        return loss

    grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=False)

    @jit
    def train_step(train_inputs, train_label):
        loss, grads = grad_fn(train_inputs, train_label)
        if use_ascend:
            loss = loss_scaler.unscale(loss)
            if all_finite(grads):
                grads = loss_scaler.unscale(grads)
                loss = ops.depend(loss, optimizer(grads))
        else:
            loss = ops.depend(loss, optimizer(grads))
        return loss

    sink_process = mindspore.data_sink(train_step, train_dataset, sink_size=1)
    summary_dir = os.path.join(config["summary_dir"], model_name)

    for cur_epoch in range(optimizer_params["train_epochs"]):
        local_time_beg = time.time()
        model.set_train()
        cur_loss = 0.0
        for _ in range(steps_per_epoch):
            cur_loss = sink_process()
        print("epoch: %s, loss is %s" % (cur_epoch + 1, cur_loss), flush=True)
        local_time_end = time.time()
        epoch_seconds = (local_time_end - local_time_beg) * 1000
        step_seconds = epoch_seconds / steps_per_epoch
        print("Train epoch time: {:5.3f} ms, per step time: {:5.3f} ms".format(
            epoch_seconds, step_seconds), flush=True)

        if (cur_epoch + 1) % config["save_checkpoint_epoches"] == 0:
            ckpt_dir = os.path.join(summary_dir, "ckpt")
            if not os.path.exists(ckpt_dir):
                os.makedirs(ckpt_dir)
            mindspore.save_checkpoint(model, os.path.join(ckpt_dir, model_params["name"]))

        if (cur_epoch + 1) % config['eval_interval'] == 0:
            calculate_l2_error(model, test_input, test_label, config["test_batch_size"])

[8]:

train()

epoch: 1, loss is 1.7631323
Train epoch time: 50405.954 ms, per step time: 50.406 ms
epoch: 2, loss is 1.9283392
Train epoch time: 36591.429 ms, per step time: 36.591 ms
epoch: 3, loss is 1.4265916
Train epoch time: 35085.079 ms, per step time: 35.085 ms
epoch: 4, loss is 1.8609437
Train epoch time: 34407.280 ms, per step time: 34.407 ms
epoch: 5, loss is 1.5222052
Train epoch time: 34596.965 ms, per step time: 34.597 ms
epoch: 6, loss is 1.3424721
Train epoch time: 33847.209 ms, per step time: 33.847 ms
epoch: 7, loss is 1.607729
Train epoch time: 33106.981 ms, per step time: 33.107 ms
epoch: 8, loss is 1.3308442
Train epoch time: 33051.339 ms, per step time: 33.051 ms
epoch: 9, loss is 1.3169765
Train epoch time: 33901.816 ms, per step time: 33.902 ms
epoch: 10, loss is 1.4149593
Train epoch time: 33908.748 ms, per step time: 33.909 ms
================================Start Evaluation================================
mean rel_rmse_error: 0.15500953359901906
=================================End Evaluation=================================
...
epoch: 141, loss is 0.777328
Train epoch time: 32549.911 ms, per step time: 32.550 ms
epoch: 142, loss is 0.7008966
Train epoch time: 32522.572 ms, per step time: 32.523 ms
epoch: 143, loss is 0.72377646
Train epoch time: 32566.685 ms, per step time: 32.567 ms
epoch: 144, loss is 0.72175145
Train epoch time: 32435.932 ms, per step time: 32.436 ms
epoch: 145, loss is 0.6235678
Train epoch time: 32463.707 ms, per step time: 32.464 ms
epoch: 146, loss is 0.9351083
Train epoch time: 32448.413 ms, per step time: 32.448 ms
epoch: 147, loss is 0.9283789
Train epoch time: 32472.401 ms, per step time: 32.472 ms
epoch: 148, loss is 0.7655642
Train epoch time: 32604.642 ms, per step time: 32.605 ms
epoch: 149, loss is 0.7233772
Train epoch time: 32649.832 ms, per step time: 32.650 ms
epoch: 150, loss is 0.86825275
Train epoch time: 32589.243 ms, per step time: 32.589 ms
================================Start Evaluation================================
mean rel_rmse_error: 0.07437102290522307
=================================End Evaluation=================================
predict total time: 15.212349653244019 s
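Once training finishes, the saved checkpoint can be restored for inference. The following is a minimal sketch, assuming the checkpoint path produced by the training loop above (mindspore.save_checkpoint appends a .ckpt suffix when the name lacks one) and the NHWC test layout used earlier:

# restore the trained weights and predict the next-step vorticity for one test sample
ckpt_file = os.path.join(config["summary_dir"], model_name, "ckpt",
                         model_params["name"] + ".ckpt")
param_dict = mindspore.load_checkpoint(ckpt_file)
mindspore.load_param_into_net(model, param_dict)
model.set_train(False)
pred = model(Tensor(test_input[:1], mindspore.float32))
print(pred.shape)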

