</div><!-- fragment --><h4 class="doxsection"><a class="anchor" id="autotoc_md10"></a>
Hard Coded Patches</h4>
<p>Some patch configurations are not adequately handled by the analytic variable definitions above. In these cases, a hard-coded patch can be used. Hard-coded patches are added by defining additional hard-coded patch identifiers in <span class="tt">src/common/include/1[2,3]dHardcodedIC.fpp</span>. When using a hard-coded patch, <span class="tt">patch_icpp(patch_id)%hcid</span> must be set to the hard-coded patch ID. For example, to add a 2D hard-coded patch with an ID of 200, one would add the following to <span class="tt">src/common/include/2dHardcodedIC.fpp</span></p>
</div><!-- fragment --><p>and use <span class="tt">patch_icpp(i)%hcid = 200</span> in the input file. Additional variables can be declared in <span class="tt">Hardcoded1[2,3]DVariables</span> and used in <span class="tt">hardcoded1[2,3]D</span>. By convention, any hard-coded patches that are part of the MFC master branch should be identified as 1[2,3]xx, where the first digit indicates the number of dimensions.</p>
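<p>As a hedged illustration, a <span class="tt">case(200)</span> entry in such a file might look like the following sketch. The variable names (<span class="tt">q_prim_vf</span>, <span class="tt">x_cc</span>, <span class="tt">contxb</span>, the loop indices, and the perturbation itself) are illustrative assumptions, not part of MFC's documented interface:</p>

```fortran
case(200)
    ! Hypothetical 2D patch: a uniform unit-density state with a small
    ! sinusoidal density perturbation along x. The amplitude (0.1) and
    ! wavenumber are chosen for illustration only.
    q_prim_vf(contxb)%sf(i, j, 0) = 1.0_wp &
        + 0.1_wp*sin(2.0_wp*3.14159265_wp*x_cc(i))
```

<p>The body of each <span class="tt">case</span> block is evaluated per grid cell, so the assignment above runs once for every <span class="tt">(i, j)</span> pair in the patch.</p>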
<li><span class="tt">case(270)</span>: Extrude 1D data to 2D domain</li>
<li><span class="tt">case(370)</span>: Extrude 2D data to 3D domain</li>
</ul>
<p>Setup: only requires specifying <span class="tt">init_dir</span> and the filename pattern via <span class="tt">zeros_default</span>; grid dimensions are detected automatically from the data files. Implementation: all variables and file handling are managed in <span class="tt">src/common/include/ExtrusionHardcodedIC.fpp</span>, with no manual grid configuration needed. Usage: ideal for initializing simulations from lower-dimensional solutions, enabling users to add perturbations or modifications to the extruded base fields for flow-instability studies.</p>
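<p>As a hedged sketch, the corresponding entries in a Python case file might look like the following. The patch index, directory path, and padding value are illustrative assumptions, not MFC defaults; only the key names <span class="tt">patch_icpp(i)%hcid</span>, <span class="tt">init_dir</span>, and <span class="tt">zeros_default</span> come from the text above:</p>

```python
# Hypothetical excerpt from an MFC Python case dictionary using the
# hard-coded extrusion patch: case(270) extrudes saved 1D data to 2D.
import json

case = {
    # Select the hard-coded extrusion patch on patch 1.
    "patch_icpp(1)%hcid": 270,
    # Directory containing the saved 1D solution files to extrude
    # (path is a placeholder).
    "init_dir": "./restart_1d/",
    # Zero-padding pattern used when locating the data files
    # (value is a placeholder).
    "zeros_default": 6,
}

# MFC case files conventionally print the case dictionary as JSON.
print(json.dumps(case))
```

<p>Because the grid dimensions are read from the data files themselves, no grid keys need to be repeated here.</p>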
<p>You can navigate Docker entirely from the command line. From a bash-like shell, pull from the <a href="https://hub.docker.com/r/sbryngelson/mfc">sbryngelson/mfc</a> repository and run the latest MFC container: </p><div class="fragment"><div class="line">docker run -it --rm --entrypoint bash sbryngelson/mfc:latest-cpu</div>
</div><!-- fragment --><p><b>Selecting OS/ARCH:</b> Docker selects a compatible architecture by default when pulling and running a container. If something goes wrong, you can specify your platform manually, as Docker may suggest. For example, <span class="tt">linux/amd64</span> covers most x86-64 *nix systems, and <span class="tt">linux/arm64</span> covers Apple Silicon and other Arm-based *nix devices. You can specify it like this: </p><div class="fragment"><div class="line">docker run -it --rm --entrypoint bash --platform linux/amd64 sbryngelson/mfc:latest-cpu</div>
</div><!-- fragment --><p><b>What's Next?</b></p>
<p>Once a container has started, the primary working directory is <span class="tt">/opt/MFC</span>, and all necessary files are located there. You can check out the usual MFC documentation, such as the <a class="el" href="examples.html" title="Example Cases">Example Cases</a>, to get familiar with running cases. Then, review the <a class="el" href="case.html" title="Case Files">Case Files</a> to write a custom case file.</p>
<p>Let's take a closer look at running MFC within a container. Kick off a CPU container: </p><div class="fragment"><div class="line">docker run -it --rm --entrypoint bash sbryngelson/mfc:latest-cpu</div>
</div><!-- fragment --><p> Or, start a GPU container: </p><div class="fragment"><div class="line">docker run -it --rm --gpus all --entrypoint bash sbryngelson/mfc:latest-gpu</div>
</div><!-- fragment --><p><b>Shared Memory</b></p>
<p>If you run a job with multiple MPI ranks, you may encounter <em>MPI memory binding errors</em>. These can manifest as failed tests (launched via <span class="tt">./mfc.sh test</span>) or failed case runs with <span class="tt">./mfc.sh run -n X <path/to/case.py></span> where <span class="tt">X > 1</span>. To avoid this, increase the container's shared memory size so MPI keeps working: </p><div class="fragment"><div class="line">docker run -it --rm --entrypoint bash --shm-size=<e.g., 4gb> sbryngelson/mfc:latest-cpu</div>
</div><!-- fragment --><p> or avoid MPI altogether via <span class="tt">./mfc.sh <your commands> --no-mpi</span>.</p>
<p>On the source machine, pull and save the image: </p><div class="fragment"><div class="line">docker pull sbryngelson/mfc:latest-cpu</div>
<divclass="line">docker save -o mfc:latest-cpu.tar sbryngelson/mfc:latest-cpu</div>
</div><!-- fragment --><p> On the target machine, load and run the image: </p><div class="fragment"><div class="line">docker load -i mfc:latest-cpu.tar</div>
<divclass="line">docker run -it --rm mfc:latest-cpu</div>
</div><!-- fragment --><h2 class="doxsection"><a class="anchor" id="autotoc_md157"></a>
Using Supercomputers/Clusters via Apptainer/Singularity</h2>
<p>On the source machine, pull and translate the image into <span class="tt">.sif</span> format: </p><div class="fragment"><div class="line">apptainer build mfc:latest-gpu.sif docker://sbryngelson/mfc:latest-gpu</div>
</div><!-- fragment --><p> On the target machine, load and start an interactive shell: </p><div class="fragment"><div class="line">apptainer shell --nv --fakeroot --writable-tmpfs --bind "$PWD":/mnt mfc:latest-gpu.sif</div>
</div><!-- fragment --><h3 class="doxsection"><a class="anchor" id="autotoc_md160"></a>
Slurm Job</h3>
<p>Below is an example Slurm batch job script. Refer to your machine's user guide for instructions on properly loading and using Apptainer. </p><div class="fragment"><div class="line">#!/bin/bash</div>
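<div class="line"># Illustrative sketch only: the resource values, module name, and case</div>
<div class="line"># path below are placeholders; adapt them to your cluster's user guide.</div>
<div class="line">#SBATCH --nodes=1</div>
<div class="line">#SBATCH --ntasks-per-node=4</div>
<div class="line">#SBATCH --time=01:00:00</div>
<div class="line">module load apptainer</div>
<div class="line"># Run a case from the bound host directory inside the GPU container.</div>
<div class="line">srun apptainer exec --nv mfc:latest-gpu.sif ./mfc.sh run /mnt/case.py -n 4</div>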
</div><!-- fragment --><h3 class="doxsection"><a class="anchor" id="autotoc_md165"></a>
Architecture Support</h3>
<p>You can specify your architecture with <span class="tt">--platform <os>/<arch></span>, typically either <span class="tt">linux/amd64</span> or <span class="tt">linux/arm64</span>. If you are unsure, Docker automatically selects the image compatible with your system architecture. If native support isn't available, QEMU emulation is used for the following architectures, albeit with degraded performance. </p><div class="fragment"><div class="line">linux/amd64</div>