
Commit 811285d

Added PT diagram of the training data
1 parent 7d1a6bb commit 811285d

File tree

2 files changed: +4 −1 lines changed


_tutorials/compressible_flow/NICFD_nozzle/NICFD_nozzle_datadriven.md

Lines changed: 4 additions & 1 deletion
@@ -53,7 +53,10 @@ The thermodynamic state data used to train the network for the data-driven fluid
 Running the script [1:generate_fluid_data.py](https://github.com/su2code/Tutorials/tree/master/compressible_flow/NICFD_nozzle/PhysicsInformed/1:generate_fluid_data.py) generates the thermodynamic state data used for training the network, along with contour plots of the temperature, pressure, and speed of sound. The complete set of thermodynamic state data is stored in the file titled *fluid_data_full.csv*. Of the randomly sampled fluid data, 80% is used to update the weights of the network during training, 10% is used to monitor the convergence of the training process, and the remaining 10% is used to validate the accuracy of the network upon completion of training. The complete data set contains approximately 2.3e5 unique data points.
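The 80/10/10 split described above can be sketched as follows. This is a minimal illustration only; the variable names and the random-shuffle approach are assumptions, and the actual `1:generate_fluid_data.py` script may implement the split differently.

```python
import numpy as np

# Hypothetical sketch of the 80/10/10 data split described in the text;
# the real generate_fluid_data.py script may differ.
rng = np.random.default_rng(seed=0)
n_points = 230_000  # roughly the 2.3e5 unique state points mentioned above

indices = rng.permutation(n_points)
n_train = int(0.8 * n_points)  # 80%: update the network weights
n_val = int(0.1 * n_points)    # 10%: monitor training convergence

train_idx = indices[:n_train]
val_idx = indices[n_train:n_train + n_val]
test_idx = indices[n_train + n_val:]  # remaining 10%: final validation
```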

-IMAGE: training data plot
+![PT_diagram_trainingdata](../../tutorials_files/compressible_flow/NICFD_nozzle_datadriven/images/PT_diagram.png)
+Figure (1): Section of the training data set near the critical point ('cp').
### 3. Train physics-informed neural network
The network used in this tutorial has two hidden layers with 12 nodes each. The exponential function is used as the hidden-layer activation function. This is an unusual choice, but it is motivated by the fact that it reduces the computational cost of calculating the network Jacobian and Hessian during the CFD solution process.
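A minimal NumPy sketch of the forward pass for this architecture is shown below. The input and output dimensions and the weight initialization are assumptions for illustration; the tutorial's actual implementation may differ.

```python
import numpy as np

# Sketch of the architecture described above: two hidden layers of 12
# nodes with exponential activations and a linear output layer.
# Layer sizes other than 12 and the weight values are placeholders.
rng = np.random.default_rng(1)

def mlp_forward(x, weights, biases):
    """Forward pass with exp() hidden activations and a linear output."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.exp(h @ W + b)  # exponential hidden-layer activation
    return h @ weights[-1] + biases[-1]

# Assumed shapes: two inputs, one output.
sizes = [2, 12, 12, 1]
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

y = mlp_forward(np.array([[1.0, 0.5]]), weights, biases)
```

One reason the exponential activation keeps derivative evaluations cheap is that the derivative of exp is exp itself, so layer activations can be reused when assembling the Jacobian and Hessian.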
The training process uses an exponential decay function for the learning rate, with an initial value of 1e-3. During each update step, the weights and biases of the network are adjusted according to the value of the loss function evaluated on a batch of 64 training data points.
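The learning-rate schedule can be sketched as below. The initial value of 1e-3 and the batch size of 64 come from the text; the decay rate and decay interval are placeholder assumptions, not the tutorial's actual values.

```python
# Sketch of the exponential learning-rate decay described above.
initial_lr = 1e-3   # initial learning rate, from the text
decay_rate = 0.96   # assumed value
decay_steps = 1000  # assumed value

def learning_rate(step):
    """Exponentially decayed learning rate at a given update step."""
    return initial_lr * decay_rate ** (step / decay_steps)

# Each update step evaluates the loss on a batch of 64 training points
# (from the text) before adjusting the weights and biases.
batch_size = 64
```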
