<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://fluid.colorado.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jrwrigh</id>
		<title>PHASTA Wiki - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://fluid.colorado.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jrwrigh"/>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php/Special:Contributions/Jrwrigh"/>
		<updated>2026-04-28T00:04:02Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.30.0</generator>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Build_HONEE_on_Summit&amp;diff=2141</id>
		<title>Build HONEE on Summit</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Build_HONEE_on_Summit&amp;diff=2141"/>
				<updated>2026-02-18T20:34:12Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: fix -march=native flag location&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Building HONEE on Summit nodes=&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
This page describes a Summit-specific workflow to build and test '''HONEE''' (main branch) on Summit compute nodes. HONEE depends on '''libCEED''' and '''PETSc'''.&lt;br /&gt;
&lt;br /&gt;
* HONEE repo: https://gitlab.com/phypid/honee&lt;br /&gt;
* libCEED repo: https://github.com/CEED/libCEED&lt;br /&gt;
* PETSc repo: https://gitlab.com/petsc/petsc&lt;br /&gt;
&lt;br /&gt;
'''Note:''' This guide assumes you build inside an interactive PBS job on a Summit compute node.&lt;br /&gt;
&lt;br /&gt;
==1) Start an interactive Summit node==&lt;br /&gt;
From &amp;lt;code&amp;gt;jumpgate&amp;lt;/code&amp;gt;, request a node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
qsub -I -l select=1:ncpus=24:mpiprocs=24 -l walltime=72:00:00 -l place=scatter:exclhost -q workq&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should land on a Summit node.&lt;br /&gt;
&lt;br /&gt;
==2) Create a single workspace for all repositories==&lt;br /&gt;
Create a folder to hold all three code bases:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mkdir honee_stack&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;cd honee_stack&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Recommended layout:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~/honee_stack/&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;libCEED/&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;petsc/&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;honee/&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==3) Clone the repositories==&lt;br /&gt;
Navigate into &amp;lt;code&amp;gt;honee_stack&amp;lt;/code&amp;gt; and clone the three repositories.&lt;br /&gt;
&lt;br /&gt;
===libCEED===&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
git clone https://github.com/CEED/libCEED&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===PETSc (main branch)===&lt;br /&gt;
'''Important:''' Use PETSc &amp;lt;code&amp;gt;main&amp;lt;/code&amp;gt;, not the release branch.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
git clone https://gitlab.com/petsc/petsc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===HONEE (main branch)===&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
git clone https://gitlab.com/phypid/honee&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==4) Build libCEED==&lt;br /&gt;
Navigate into the libCEED directory: &amp;lt;code&amp;gt;cd ~/honee_stack/libCEED&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
make -j&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This produces &amp;lt;code&amp;gt;libCEED&amp;lt;/code&amp;gt; libraries needed by HONEE.&lt;br /&gt;
&lt;br /&gt;
==5) Build PETSc ==&lt;br /&gt;
Navigate into the PETSc directory: &amp;lt;code&amp;gt;cd ~/honee_stack/petsc&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Example configuration===&lt;br /&gt;
Below is a good baseline: MPI enabled, optimized build, and CGNS/HDF5 support via downloads.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
./configure \&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;--with-cc=mpicc \&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;--with-cxx=mpicxx \&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;--with-fc=0 \&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;--with-mpi=1 \&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;--with-mpiexec=mpirun \&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;--download-f2cblaslapack \&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;--download-hdf5 \&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;--download-cgns \&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;--with-debugging=0 \&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;COPTFLAGS='-O3 -g -march=native' \&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;CXXOPTFLAGS='-O3 -g -march=native'&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
make -j all&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Notes on the example PETSc settings===&lt;br /&gt;
* &amp;lt;code&amp;gt;--with-debugging=0&amp;lt;/code&amp;gt; is recommended for performance runs.&lt;br /&gt;
* &amp;lt;code&amp;gt;-O3&amp;lt;/code&amp;gt; is appropriate for speed; keeping &amp;lt;code&amp;gt;-g&amp;lt;/code&amp;gt; helps debugging without major runtime cost.&lt;br /&gt;
* Downloading &amp;lt;code&amp;gt;hdf5&amp;lt;/code&amp;gt; + &amp;lt;code&amp;gt;cgns&amp;lt;/code&amp;gt; is recommended for CGNS output support.&lt;br /&gt;
&lt;br /&gt;
==6) Set environment variables for HONEE==&lt;br /&gt;
HONEE requires &amp;lt;code&amp;gt;CEED_DIR&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;PETSC_DIR&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;PETSC_ARCH&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Example setup:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
export CEED_DIR=$HOME/honee_stack/libCEED&amp;lt;br&amp;gt;&lt;br /&gt;
export PETSC_DIR=$HOME/honee_stack/petsc&amp;lt;br&amp;gt;&lt;br /&gt;
export PETSC_ARCH=arch-linux-c-opt&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Important runtime note (recommended while using the example PETSc config):'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
export LD_LIBRARY_PATH=$PETSC_DIR/$PETSC_ARCH/lib:$LD_LIBRARY_PATH&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==7) Build HONEE==&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
cd ~/honee_stack/honee&amp;lt;br&amp;gt;&lt;br /&gt;
make -j&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==8) Run a sample problem==&lt;br /&gt;
Run the Gaussian wave example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
cd ~/honee_stack/honee&amp;lt;br&amp;gt;&lt;br /&gt;
build/navierstokes -options_file examples/gaussianwave.yaml&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==9) Test the installation==&lt;br /&gt;
Run the test suite:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
cd ~/honee_stack/honee&amp;lt;br&amp;gt;&lt;br /&gt;
make test -j&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Troubleshooting==&lt;br /&gt;
&lt;br /&gt;
===Missing PETSc or libCEED===&lt;br /&gt;
Verify environment variables:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
echo $CEED_DIR&amp;lt;br&amp;gt;&lt;br /&gt;
echo $PETSC_DIR&amp;lt;br&amp;gt;&lt;br /&gt;
echo $PETSC_ARCH&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
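If you prefer a scriptable check, a small Python sketch (illustrative only; the variable names match this page) can report which variables are missing:&lt;br /&gt;

```python
import os

def check_env(names):
    """Map each environment variable name to True if it is set and non-empty."""
    return {n: bool(os.environ.get(n)) for n in names}

# Report the variables HONEE's build expects:
print(check_env(["CEED_DIR", "PETSC_DIR", "PETSC_ARCH"]))
```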
===Runtime linker errors (libpetsc.so not found)===&lt;br /&gt;
Ensure:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
export LD_LIBRARY_PATH=$PETSC_DIR/$PETSC_ARCH/lib:$LD_LIBRARY_PATH&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===CGNS/HDF5 issues===&lt;br /&gt;
Reconfigure PETSc with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
--download-hdf5 --download-cgns&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
Official HONEE installation documentation:&amp;lt;br&amp;gt;&lt;br /&gt;
https://gitlab.com/phypid/honee&lt;br /&gt;
&lt;br /&gt;
Official PETSc configuration documentation:&amp;lt;br&amp;gt;&lt;br /&gt;
https://petsc.org/main/install/&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2074</id>
		<title>FDSI Summer Program/HONEE</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2074"/>
				<updated>2024-07-02T16:07:37Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Degree-based settings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Information for tutorial in HONEE, to be updated as needed.''&lt;br /&gt;
&lt;br /&gt;
'''HONEE''', or the '''High-Order Navier-Stokes Equation Evaluator''', is a CFD program that combines [https://libceed.org/en/latest/ libCEED] and [https://petsc.org/release/ PETSc].&lt;br /&gt;
&lt;br /&gt;
== Building HONEE for Cisco nodes ==&lt;br /&gt;
&lt;br /&gt;
First, start a job on the Cisco nodes. All the below commands should be run within a terminal on a Cisco node.&lt;br /&gt;
&lt;br /&gt;
=== Setup Environment ===&lt;br /&gt;
 source /projects/tools/Spackv0.23/share/spack/setup-env.sh&lt;br /&gt;
 spack load gcc@12.3&lt;br /&gt;
 module load openmpi&lt;br /&gt;
&lt;br /&gt;
=== Create new directory ===&lt;br /&gt;
 mkdir honee_build&lt;br /&gt;
 cd honee_build&lt;br /&gt;
&lt;br /&gt;
=== Build PETSc===&lt;br /&gt;
Ordinarily, I'd simply recommend doing a &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;, e.g. &amp;lt;code&amp;gt;git clone https://gitlab.com/petsc/petsc.git&amp;lt;/code&amp;gt;.&lt;br /&gt;
However, the &amp;lt;code&amp;gt;/nobackup/&amp;lt;/code&amp;gt; server is quite slow and the clone takes around 20 minutes there (versus about a minute on my laptop), so downloading and extracting a tarball may be quicker.&lt;br /&gt;
I may cover that later, but for now I'll just cover it via &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
 git clone https://gitlab.com/petsc/petsc.git&lt;br /&gt;
 cd petsc&lt;br /&gt;
 cp /nobackup/uncompressed/jrwrigh/HONEE_Setup/petsc/reconfigure.py .&lt;br /&gt;
 ./reconfigure.py&lt;br /&gt;
 make&lt;br /&gt;
 export PETSC_DIR=$(pwd) PETSC_ARCH=arch-32&lt;br /&gt;
 cd ..&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;reconfigure.py&amp;lt;/code&amp;gt; file has the following:&lt;br /&gt;
 #!/bin/python3&lt;br /&gt;
 if __name__ == '__main__':&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   sys.path.insert(0, os.path.abspath('config'))&lt;br /&gt;
   import configure&lt;br /&gt;
   configure_options = [&lt;br /&gt;
     '--with-64-bit-indices=0',&lt;br /&gt;
     '--download-hdf5',&lt;br /&gt;
     '--download-cgns',&lt;br /&gt;
     '--download-ctetgen=1',&lt;br /&gt;
     '--download-parmetis=1',&lt;br /&gt;
     '--download-metis=1',&lt;br /&gt;
     '--download-ptscotch=1',&lt;br /&gt;
     '--with-debugging=0',&lt;br /&gt;
     '--with-fortran-bindings=0',&lt;br /&gt;
     '--with-fc=0',&lt;br /&gt;
     'PETSC_ARCH=arch-32',&lt;br /&gt;
     'COPTFLAGS=-g -O3',&lt;br /&gt;
     'CXXOPTFLAGS=-g -O3',&lt;br /&gt;
   ]&lt;br /&gt;
   configure.petsc_configure(configure_options)&lt;br /&gt;
&lt;br /&gt;
=== Building HONEE ===&lt;br /&gt;
Ensure that &amp;lt;code&amp;gt;PETSC_DIR&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PETSC_ARCH&amp;lt;/code&amp;gt; are set to the desired PETSc installation.&lt;br /&gt;
&lt;br /&gt;
 git clone https://github.com/CEED/libCEED.git&lt;br /&gt;
 cd libCEED&lt;br /&gt;
 make build/fluids-navierstokes -j&lt;br /&gt;
&lt;br /&gt;
The executable &amp;lt;code&amp;gt;build/fluids-navierstokes&amp;lt;/code&amp;gt; is then built and ready for running.&lt;br /&gt;
&lt;br /&gt;
== Changing HONEE Inputs ==&lt;br /&gt;
HONEE uses PETSc's input argument system for handling inputs.&lt;br /&gt;
They can be provided either via command-line flags or inside a YAML file.&lt;br /&gt;
So &lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -ts_dt 1e-3&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml&lt;br /&gt;
&lt;br /&gt;
if &amp;lt;code&amp;gt;test.yaml&amp;lt;/code&amp;gt; has this in it:&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
&lt;br /&gt;
Note that PETSc also allows hierarchical flags within a YAML file.&lt;br /&gt;
So instead of writing&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
 ts_type: alpha&lt;br /&gt;
 ts_max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
You can write:&lt;br /&gt;
&lt;br /&gt;
 ts:&lt;br /&gt;
   dt: 1e-3&lt;br /&gt;
   type: alpha&lt;br /&gt;
   max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
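To make the equivalence concrete, here is a small Python sketch (illustrative only; PETSc performs this flattening internally) of how the hierarchical form maps back to flat option names:&lt;br /&gt;

```python
def flatten(options, prefix=""):
    """Flatten nested option dicts into PETSc-style flat names (e.g. ts_dt)."""
    flat = {}
    for key, value in options.items():
        name = prefix + key
        if isinstance(value, dict):
            flat.update(flatten(value, name + "_"))
        else:
            flat[name] = value
    return flat

print(flatten({"ts": {"dt": "1e-3", "type": "alpha", "max_time": "2.3"}}))
# → {'ts_dt': '1e-3', 'ts_type': 'alpha', 'ts_max_time': '2.3'}
```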
=== Documentation for Specific Flags ===&lt;br /&gt;
* [https://libceed.org/en/latest/examples/fluids/ HONEE specific flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/TS/TSSetFromOptions/ Time Stepping (TS) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/SNES/SNESSetFromOptions/ Non-linear solver (SNES) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/KSP/KSPSetFromOptions/ Linear solver (KSP) flags]&lt;br /&gt;
&lt;br /&gt;
=== Notable Input Flags ===&lt;br /&gt;
* &amp;lt;code&amp;gt;-ts_monitor_solution&amp;lt;/code&amp;gt;: This will save the results of a simulation to a file. Example: &amp;lt;code&amp;gt;-ts_monitor_solution cgns:flow_visualization.cgns&amp;lt;/code&amp;gt; will save the results to a file called &amp;lt;code&amp;gt;flow_visualization.cgns&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
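Putting these together, an options file (hypothetical; values are illustrative, using only flags shown on this page) might look like:&lt;br /&gt;

```yaml
# Hypothetical options file; values are illustrative
ts:
  dt: 1e-3
  max_time: 2.3
  monitor_solution: cgns:flow_visualization.cgns
```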
=== Restarting Simulations ===&lt;br /&gt;
To restart a simulation from a previous simulation's result, the results need to be saved to a &amp;lt;code&amp;gt;*.bin&amp;lt;/code&amp;gt; file.&lt;br /&gt;
Restarting is then done using the &amp;lt;code&amp;gt;-continue&amp;lt;/code&amp;gt; flag, which will load the file set by &amp;lt;code&amp;gt;-continue_filename&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note that restarting from a binary file can only be done if the binary file was written by a simulation running at the same part count.&lt;br /&gt;
&lt;br /&gt;
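As an options-file sketch (the filename here is hypothetical), a restart might be requested with:&lt;br /&gt;

```yaml
# Hypothetical restart fragment; previous_result.bin is illustrative
continue: true
continue_filename: previous_result.bin
```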
== HONEE and Gmsh ==&lt;br /&gt;
&lt;br /&gt;
=== GMSH/Grid requirements: ===&lt;br /&gt;
# The grid must be hexahedron based (so deformed cubes). HONEE can run with tetrahedron grids, but that will require some minor code modifications which we've done before (IIRC, it's a single line of code to change). If anyone is interested in this, let me know and we can work on it.&lt;br /&gt;
#* Note that HONEE cannot handle mixed meshes, i.e. meshes containing both hexahedra and tetrahedra. Supporting these is not a minor code change and is probably out of scope for this Summer Program.&lt;br /&gt;
# In the geo file, you need to specify &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt;s to identify the domain boundaries to apply boundary conditions to.&lt;br /&gt;
# In the geo file, you need to specify a single &amp;lt;code&amp;gt;Physical Volume&amp;lt;/code&amp;gt; that has the entire domain. So if you split the domain into multiple pieces for meshing purposes (as is done with the vortex shedding grids), then you need the &amp;lt;code&amp;gt;Physical Volume&amp;lt;/code&amp;gt; to identify all the pieces.&lt;br /&gt;
&lt;br /&gt;
=== YAML Setup: ===&lt;br /&gt;
In the YAML file, you need to associate each &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt; with a boundary condition. I describe this briefly in the video, but each &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt; has an ID number associated with it; in the YAML file, you associate that ID number with a BC choice.&lt;br /&gt;
For example, in the &amp;lt;code&amp;gt;vortexshedding.yaml&amp;lt;/code&amp;gt; file there is:&lt;br /&gt;
 # Boundary Settings&lt;br /&gt;
 bc_slip_z: 6&lt;br /&gt;
 bc_wall: 5&lt;br /&gt;
 bc_freestream: 1&lt;br /&gt;
 bc_outflow: 2&lt;br /&gt;
 bc_slip_y: 3,4&lt;br /&gt;
&lt;br /&gt;
and the &amp;lt;code&amp;gt;cylinder.geo&amp;lt;/code&amp;gt; has:&lt;br /&gt;
&lt;br /&gt;
 Physical Surface(&amp;quot;inlet&amp;quot;) = {102};&lt;br /&gt;
 Physical Surface(&amp;quot;outlet&amp;quot;) = {116};&lt;br /&gt;
 Physical Surface(&amp;quot;top&amp;quot;) = {80, 120};&lt;br /&gt;
 Physical Surface(&amp;quot;bottom&amp;quot;) = {36, 112};&lt;br /&gt;
 Physical Surface(&amp;quot;cylinderwalls&amp;quot;) = {94, 28, 50, 72};&lt;br /&gt;
 Physical Surface(&amp;quot;frontandback&amp;quot;) = {37, 1, 4, 103, 3, 81, 2, 59, 5, 125};&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;inlet&amp;quot; face is the first identified boundary, so it gets the ID number 1.&lt;br /&gt;
In the YAML file, you see &amp;lt;code&amp;gt;bc_freestream: 1&amp;lt;/code&amp;gt;, which sets this inlet boundary to be controlled by a freestream BC.&lt;br /&gt;
Similar with the &amp;quot;outlet&amp;quot; face; it's the second specified boundary in the geo file, so we set &amp;lt;code&amp;gt;bc_outflow: 2&amp;lt;/code&amp;gt;.&lt;br /&gt;
And so on with the other faces.&lt;br /&gt;
&lt;br /&gt;
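The order-based numbering can be sketched in Python (surface names taken from the &amp;lt;code&amp;gt;cylinder.geo&amp;lt;/code&amp;gt; listing above):&lt;br /&gt;

```python
# Physical Surfaces in the order they appear in cylinder.geo;
# the Nth surface gets boundary ID N (1-based), as described above.
surfaces = ["inlet", "outlet", "top", "bottom", "cylinderwalls", "frontandback"]
ids = {name: n for n, name in enumerate(surfaces, start=1)}
print(ids["inlet"], ids["outlet"])  # → 1 2
```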
&lt;br /&gt;
'''NOTE:''' When using the GMSH GUI to identify the Physical Surfaces, it will actually put something like this in the geo file:&lt;br /&gt;
 Physical Surface(&amp;quot;inlet&amp;quot;, 3010) = {3005};&lt;br /&gt;
 Physical Surface(&amp;quot;outlet&amp;quot;, 3011) = {3003};&lt;br /&gt;
 Physical Surface(&amp;quot;top&amp;quot;, 3012) = {3004};&lt;br /&gt;
 Physical Surface(&amp;quot;bottom&amp;quot;, 3013) = {3002};&lt;br /&gt;
 Physical Surface(&amp;quot;nacawalls&amp;quot;, 3014) = {3008, 3006, 3007};&lt;br /&gt;
 Physical Surface(&amp;quot;frontandback&amp;quot;, 3015) = {3009, 3001};&lt;br /&gt;
&lt;br /&gt;
Note the extra &amp;lt;code&amp;gt;3010&amp;lt;/code&amp;gt; in the &amp;quot;inlet&amp;quot; definition. I'm not sure whether this means the correct ID number for it should be 3010 instead of 1, but that's a possibility.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Recommendations for HONEE performance ==&lt;br /&gt;
&lt;br /&gt;
=== Compilation Flags ===&lt;br /&gt;
When compiling HONEE, you can pass optimization flags to the compiler to try to make it run faster.&lt;br /&gt;
For the Cisco nodes, this can be done by running the following in your libCEED directory:&lt;br /&gt;
&lt;br /&gt;
 make configure OPT='-O3 -march=native -g -ffp-contract=fast -fopenmp-simd'&lt;br /&gt;
 make build/fluids-navierstokes -Bj&lt;br /&gt;
&lt;br /&gt;
=== General Running Settings ===&lt;br /&gt;
A general setting that helps performance is to loosen the solver tolerances.&lt;br /&gt;
Specifically &amp;lt;code&amp;gt;-snes_rtol 1e-4 -ksp_rtol 1e-4&amp;lt;/code&amp;gt; have been found to be pretty good choices.&lt;br /&gt;
These may require tweaking if you run into divergence issues.&lt;br /&gt;
&lt;br /&gt;
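As an options-file fragment, those tolerances (values from above) are:&lt;br /&gt;

```yaml
# Loosened tolerances suggested above; tighten if you hit divergence
snes_rtol: 1e-4
ksp_rtol: 1e-4
```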
=== Degree-based settings ===&lt;br /&gt;
Optimal solver settings can change based on what degree elements you work with (determined by the &amp;lt;code&amp;gt;-degree&amp;lt;/code&amp;gt; flag).&lt;br /&gt;
For linear elements (&amp;lt;code&amp;gt;-degree 1&amp;lt;/code&amp;gt;), I've had success with:&lt;br /&gt;
&lt;br /&gt;
 -amat_type -snes_lag_jacobian 5 -snes_lag_jacobian_persists false -snes_lag_preconditioner 20 -snes_lag_preconditioner_persists true -pc_type asm -sub_pc_type lu&lt;br /&gt;
&lt;br /&gt;
While for cubic elements (&amp;lt;code&amp;gt;-degree 3&amp;lt;/code&amp;gt;) I've seen better performance with:&lt;br /&gt;
&lt;br /&gt;
 -amat_type shell -snes_lag_jacobian 15 -snes_lag_jacobian_persists true -pc_type asm -sub_pc_type lu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that these flags were tuned while running the vortex shedding example, so the optimal options may change for other problems.&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2073</id>
		<title>FDSI Summer Program/HONEE</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2073"/>
				<updated>2024-07-02T16:03:34Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Degree-based settings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Information for tutorial in HONEE, to be updated as needed.''&lt;br /&gt;
&lt;br /&gt;
'''HONEE''', or the '''High-Order Navier-Stokes Equation Evaluator''', is a CFD program that combines [https://libceed.org/en/latest/ libCEED] and [https://petsc.org/release/ PETSc].&lt;br /&gt;
&lt;br /&gt;
== Building HONEE for Cisco nodes ==&lt;br /&gt;
&lt;br /&gt;
First, start a job on the Cisco nodes. All the below commands should be run within a terminal on a Cisco node.&lt;br /&gt;
&lt;br /&gt;
=== Setup Environment ===&lt;br /&gt;
 source /projects/tools/Spackv0.23/share/spack/setup-env.sh&lt;br /&gt;
 spack load gcc@12.3&lt;br /&gt;
 module load openmpi&lt;br /&gt;
&lt;br /&gt;
=== Create new directory ===&lt;br /&gt;
 mkdir honee_build&lt;br /&gt;
 cd honee_build&lt;br /&gt;
&lt;br /&gt;
=== Build PETSc===&lt;br /&gt;
Ordinarily, I'd simply recommend doing a &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;, e.g. &amp;lt;code&amp;gt;git clone https://gitlab.com/petsc/petsc.git&amp;lt;/code&amp;gt;.&lt;br /&gt;
However, the &amp;lt;code&amp;gt;/nobackup/&amp;lt;/code&amp;gt; server is quite slow and the clone takes around 20 minutes there (versus about a minute on my laptop), so downloading and extracting a tarball may be quicker.&lt;br /&gt;
I may cover that later, but for now I'll just cover it via &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
 git clone https://gitlab.com/petsc/petsc.git&lt;br /&gt;
 cd petsc&lt;br /&gt;
 cp /nobackup/uncompressed/jrwrigh/HONEE_Setup/petsc/reconfigure.py .&lt;br /&gt;
 ./reconfigure.py&lt;br /&gt;
 make&lt;br /&gt;
 export PETSC_DIR=$(pwd) PETSC_ARCH=arch-32&lt;br /&gt;
 cd ..&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;reconfigure.py&amp;lt;/code&amp;gt; file has the following:&lt;br /&gt;
 #!/bin/python3&lt;br /&gt;
 if __name__ == '__main__':&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   sys.path.insert(0, os.path.abspath('config'))&lt;br /&gt;
   import configure&lt;br /&gt;
   configure_options = [&lt;br /&gt;
     '--with-64-bit-indices=0',&lt;br /&gt;
     '--download-hdf5',&lt;br /&gt;
     '--download-cgns',&lt;br /&gt;
     '--download-ctetgen=1',&lt;br /&gt;
     '--download-parmetis=1',&lt;br /&gt;
     '--download-metis=1',&lt;br /&gt;
     '--download-ptscotch=1',&lt;br /&gt;
     '--with-debugging=0',&lt;br /&gt;
     '--with-fortran-bindings=0',&lt;br /&gt;
     '--with-fc=0',&lt;br /&gt;
     'PETSC_ARCH=arch-32',&lt;br /&gt;
     'COPTFLAGS=-g -O3',&lt;br /&gt;
     'CXXOPTFLAGS=-g -O3',&lt;br /&gt;
   ]&lt;br /&gt;
   configure.petsc_configure(configure_options)&lt;br /&gt;
&lt;br /&gt;
=== Building HONEE ===&lt;br /&gt;
Ensure that &amp;lt;code&amp;gt;PETSC_DIR&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PETSC_ARCH&amp;lt;/code&amp;gt; are set to the desired PETSc installation.&lt;br /&gt;
&lt;br /&gt;
 git clone https://github.com/CEED/libCEED.git&lt;br /&gt;
 cd libCEED&lt;br /&gt;
 make build/fluids-navierstokes -j&lt;br /&gt;
&lt;br /&gt;
The executable &amp;lt;code&amp;gt;build/fluids-navierstokes&amp;lt;/code&amp;gt; is then built and ready for running.&lt;br /&gt;
&lt;br /&gt;
== Changing HONEE Inputs ==&lt;br /&gt;
HONEE uses PETSc's input argument system for handling inputs.&lt;br /&gt;
They can be provided either via command-line flags or inside a YAML file.&lt;br /&gt;
So &lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -ts_dt 1e-3&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml&lt;br /&gt;
&lt;br /&gt;
if &amp;lt;code&amp;gt;test.yaml&amp;lt;/code&amp;gt; has this in it:&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
&lt;br /&gt;
Note that PETSc also allows hierarchical flags within a YAML file.&lt;br /&gt;
So instead of writing&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
 ts_type: alpha&lt;br /&gt;
 ts_max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
You can write:&lt;br /&gt;
&lt;br /&gt;
 ts:&lt;br /&gt;
   dt: 1e-3&lt;br /&gt;
   type: alpha&lt;br /&gt;
   max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
=== Documentation for Specific Flags ===&lt;br /&gt;
* [https://libceed.org/en/latest/examples/fluids/ HONEE specific flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/TS/TSSetFromOptions/ Time Stepping (TS) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/SNES/SNESSetFromOptions/ Non-linear solver (SNES) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/KSP/KSPSetFromOptions/ Linear solver (KSP) flags]&lt;br /&gt;
&lt;br /&gt;
=== Notable Input Flags ===&lt;br /&gt;
* &amp;lt;code&amp;gt;-ts_monitor_solution&amp;lt;/code&amp;gt;: This will save the results of a simulation to a file. Example: &amp;lt;code&amp;gt;-ts_monitor_solution cgns:flow_visualization.cgns&amp;lt;/code&amp;gt; will save the results to a file called &amp;lt;code&amp;gt;flow_visualization.cgns&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Restarting Simulations ===&lt;br /&gt;
To restart a simulation from a previous simulation's result, the results need to be saved to a &amp;lt;code&amp;gt;*.bin&amp;lt;/code&amp;gt; file.&lt;br /&gt;
Restarting is then done using the &amp;lt;code&amp;gt;-continue&amp;lt;/code&amp;gt; flag, which will load the file set by &amp;lt;code&amp;gt;-continue_filename&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note that restarting from a binary file can only be done if the binary file was written by a simulation running at the same part count.&lt;br /&gt;
&lt;br /&gt;
== HONEE and Gmsh ==&lt;br /&gt;
&lt;br /&gt;
=== GMSH/Grid requirements: ===&lt;br /&gt;
# The grid must be hexahedron based (so deformed cubes). HONEE can run with tetrahedron grids, but that will require some minor code modifications which we've done before (IIRC, it's a single line of code to change). If anyone is interested in this, let me know and we can work on it.&lt;br /&gt;
#* Note that HONEE cannot handle mixed meshes, i.e. meshes containing both hexahedra and tetrahedra. Supporting these is not a minor code change and is probably out of scope for this Summer Program.&lt;br /&gt;
# In the geo file, you need to specify &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt;s to identify the domain boundaries to apply boundary conditions to.&lt;br /&gt;
# In the geo file, you need to specify a single &amp;lt;code&amp;gt;Physical Volume&amp;lt;/code&amp;gt; that has the entire domain. So if you split the domain into multiple pieces for meshing purposes (as is done with the vortex shedding grids), then you need the &amp;lt;code&amp;gt;Physical Volume&amp;lt;/code&amp;gt; to identify all the pieces.&lt;br /&gt;
&lt;br /&gt;
=== YAML Setup: ===&lt;br /&gt;
In the YAML file, you need to associate each &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt; with a boundary condition. I describe this briefly in the video, but each &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt; has an ID number associated with it; in the YAML file, you associate that ID number with a BC choice.&lt;br /&gt;
For example, in the &amp;lt;code&amp;gt;vortexshedding.yaml&amp;lt;/code&amp;gt; file there is:&lt;br /&gt;
 # Boundary Settings&lt;br /&gt;
 bc_slip_z: 6&lt;br /&gt;
 bc_wall: 5&lt;br /&gt;
 bc_freestream: 1&lt;br /&gt;
 bc_outflow: 2&lt;br /&gt;
 bc_slip_y: 3,4&lt;br /&gt;
&lt;br /&gt;
and the &amp;lt;code&amp;gt;cylinder.geo&amp;lt;/code&amp;gt; has:&lt;br /&gt;
&lt;br /&gt;
 Physical Surface(&amp;quot;inlet&amp;quot;) = {102};&lt;br /&gt;
 Physical Surface(&amp;quot;outlet&amp;quot;) = {116};&lt;br /&gt;
 Physical Surface(&amp;quot;top&amp;quot;) = {80, 120};&lt;br /&gt;
 Physical Surface(&amp;quot;bottom&amp;quot;) = {36, 112};&lt;br /&gt;
 Physical Surface(&amp;quot;cylinderwalls&amp;quot;) = {94, 28, 50, 72};&lt;br /&gt;
 Physical Surface(&amp;quot;frontandback&amp;quot;) = {37, 1, 4, 103, 3, 81, 2, 59, 5, 125};&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;inlet&amp;quot; face is the first identified boundary, so it gets the ID number 1.&lt;br /&gt;
In the YAML file, you see &amp;lt;code&amp;gt;bc_freestream: 1&amp;lt;/code&amp;gt;, which sets this inlet boundary to be controlled by a freestream BC.&lt;br /&gt;
Similar with the &amp;quot;outlet&amp;quot; face; it's the second specified boundary in the geo file, so we set &amp;lt;code&amp;gt;bc_outflow: 2&amp;lt;/code&amp;gt;.&lt;br /&gt;
And so on with the other faces.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' When using the GMSH GUI to identify the Physical Surfaces, it will actually put something like this in the geo file:&lt;br /&gt;
 Physical Surface(&amp;quot;inlet&amp;quot;, 3010) = {3005};&lt;br /&gt;
 Physical Surface(&amp;quot;outlet&amp;quot;, 3011) = {3003};&lt;br /&gt;
 Physical Surface(&amp;quot;top&amp;quot;, 3012) = {3004};&lt;br /&gt;
 Physical Surface(&amp;quot;bottom&amp;quot;, 3013) = {3002};&lt;br /&gt;
 Physical Surface(&amp;quot;nacawalls&amp;quot;, 3014) = {3008, 3006, 3007};&lt;br /&gt;
 Physical Surface(&amp;quot;frontandback&amp;quot;, 3015) = {3009, 3001};&lt;br /&gt;
&lt;br /&gt;
Note the extra &amp;lt;code&amp;gt;3010&amp;lt;/code&amp;gt; in the &amp;quot;inlet&amp;quot; definition. I'm not sure whether this means the correct ID number for it should be 3010 instead of 1, but that's a possibility.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Recommendations for HONEE performance ==&lt;br /&gt;
&lt;br /&gt;
=== Compilation Flags ===&lt;br /&gt;
When compiling HONEE, you can pass optimization flags to the compiler to try to make it run faster.&lt;br /&gt;
For the Cisco nodes, this can be done by running the following in your libCEED directory:&lt;br /&gt;
&lt;br /&gt;
 make configure OPT='-O3 -march=native -g -ffp-contract=fast -fopenmp-simd'&lt;br /&gt;
 make build/fluids-navierstokes -Bj&lt;br /&gt;
&lt;br /&gt;
=== General Running Settings ===&lt;br /&gt;
A general setting that helps performance is to loosen the solver tolerances.&lt;br /&gt;
Specifically &amp;lt;code&amp;gt;-snes_rtol 1e-4 -ksp_rtol 1e-4&amp;lt;/code&amp;gt; have been found to be pretty good choices.&lt;br /&gt;
These may require tweaking if you run into divergence issues.&lt;br /&gt;
&lt;br /&gt;
=== Degree-based settings ===&lt;br /&gt;
Optimal solver settings can change based on what degree elements you work with (determined by the &amp;lt;code&amp;gt;-degree&amp;lt;/code&amp;gt; flag).&lt;br /&gt;
For linear elements (&amp;lt;code&amp;gt;-degree 1&amp;lt;/code&amp;gt;), I've had success with:&lt;br /&gt;
&lt;br /&gt;
 -amat_type -snes_lag_jacobian 5 -snes_lag_jacobian_persists false -snes_lag_preconditioner 20 -snes_lag_preconditioner_persists true -pc_type asm -sub_pc_type lu&lt;br /&gt;
&lt;br /&gt;
While for cubic elements (&amp;lt;code&amp;gt;-degree 3&amp;lt;/code&amp;gt;) I've seen better performance with:&lt;br /&gt;
&lt;br /&gt;
 -amat_type shell -snes_lag_jacobian 15 -snes_lag_jacobian_persists true -pc_type asm -sub_pc_type lu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that these flags were tuned while running the vortex shedding example, so the optimal options may change for other problems.&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2071</id>
		<title>FDSI Summer Program/HONEE</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2071"/>
				<updated>2024-06-21T15:52:01Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Recommendations for HONEE performance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Information for tutorial in HONEE, to be updated as needed.''&lt;br /&gt;
&lt;br /&gt;
'''HONEE''', or the '''High-Order Navier-stokes Equation Evaluator''', is a CFD program that combines [https://libceed.org/en/latest/ libCEED] and [https://petsc.org/release/ PETSc].&lt;br /&gt;
&lt;br /&gt;
== Building HONEE for Cisco nodes ==&lt;br /&gt;
&lt;br /&gt;
First, start a job on the Cisco nodes. All the below commands should be run within a terminal on a Cisco node.&lt;br /&gt;
&lt;br /&gt;
=== Setup Environment ===&lt;br /&gt;
 source /projects/tools/Spackv0.23/share/spack/setup-env.sh&lt;br /&gt;
 spack load gcc@12.3&lt;br /&gt;
 module load openmpi&lt;br /&gt;
&lt;br /&gt;
=== Create new directory ===&lt;br /&gt;
 mkdir honee_build&lt;br /&gt;
 cd honee_build&lt;br /&gt;
&lt;br /&gt;
=== Build PETSc===&lt;br /&gt;
Ordinarily, I'd simply recommend doing a &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;, e.g. &amp;lt;code&amp;gt;git clone https://gitlab.com/petsc/petsc.git&amp;lt;/code&amp;gt;.&lt;br /&gt;
However, the &amp;lt;code&amp;gt;/nobackup/&amp;lt;/code&amp;gt; server is quite slow and this process takes around 20 minutes (normally about a minute on my laptop), so downloading and extracting a tarball may be quicker.&lt;br /&gt;
I may cover that later, but for now I'll just cover it via &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
 git clone https://gitlab.com/petsc/petsc.git&lt;br /&gt;
 cd petsc&lt;br /&gt;
 cp /nobackup/uncompressed/jrwrigh/HONEE_Setup/petsc/reconfigure.py .&lt;br /&gt;
 ./reconfigure.py&lt;br /&gt;
 make&lt;br /&gt;
 export PETSC_DIR=$(pwd) PETSC_ARCH=arch-32&lt;br /&gt;
 cd ..&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;reconfigure.py&amp;lt;/code&amp;gt; file has the following:&lt;br /&gt;
 #!/bin/python3&lt;br /&gt;
 if __name__ == '__main__':&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   sys.path.insert(0, os.path.abspath('config'))&lt;br /&gt;
   import configure&lt;br /&gt;
   configure_options = [&lt;br /&gt;
     '--with-64-bit-indices=0',&lt;br /&gt;
     '--download-hdf5',&lt;br /&gt;
     '--download-cgns',&lt;br /&gt;
     '--download-ctetgen=1',&lt;br /&gt;
     '--download-parmetis=1',&lt;br /&gt;
     '--download-metis=1',&lt;br /&gt;
     '--download-ptscotch=1',&lt;br /&gt;
     '--with-debugging=0',&lt;br /&gt;
     '--with-fortran-bindings=0',&lt;br /&gt;
     '--with-fc=0',&lt;br /&gt;
     'PETSC_ARCH=arch-32',&lt;br /&gt;
     'COPTFLAGS=-g -O3',&lt;br /&gt;
     'CXXOPTFLAGS=-g -O3',&lt;br /&gt;
   ]&lt;br /&gt;
   configure.petsc_configure(configure_options)&lt;br /&gt;
&lt;br /&gt;
=== Building HONEE ===&lt;br /&gt;
Ensure that &amp;lt;code&amp;gt;PETSC_DIR&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PETSC_ARCH&amp;lt;/code&amp;gt; are set to the desired PETSc installation.&lt;br /&gt;
&lt;br /&gt;
 git clone https://github.com/CEED/libCEED.git&lt;br /&gt;
 cd libCEED&lt;br /&gt;
 make build/fluids-navierstokes -j&lt;br /&gt;
&lt;br /&gt;
The executable &amp;lt;code&amp;gt;build/fluids-navierstokes&amp;lt;/code&amp;gt; is then built and ready for running.&lt;br /&gt;
&lt;br /&gt;
== Changing HONEE Inputs ==&lt;br /&gt;
HONEE uses PETSc's input argument system for handling inputs.&lt;br /&gt;
They can be provided either via command-line flags or inside a YAML file.&lt;br /&gt;
So &lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -ts_dt 1e-3&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml&lt;br /&gt;
&lt;br /&gt;
if &amp;lt;code&amp;gt;test.yaml&amp;lt;/code&amp;gt; has this in it:&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
&lt;br /&gt;
Note that PETSc also allows hierarchical flags within a YAML file.&lt;br /&gt;
So instead of writing&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
 ts_type: alpha&lt;br /&gt;
 ts_max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
You can write:&lt;br /&gt;
&lt;br /&gt;
 ts:&lt;br /&gt;
   dt: 1e-3&lt;br /&gt;
   type: alpha&lt;br /&gt;
   max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
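This hierarchical form is just a flattening convention: nested keys are joined with underscores. A rough Python sketch of that rule (my own illustration; &amp;lt;code&amp;gt;flatten_options&amp;lt;/code&amp;gt; is not a real PETSc function):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of the hierarchical-YAML convention described above: nested option
# names are joined with underscores, so ts: {dt: 1e-3} becomes the flat
# option ts_dt. (Illustration only; PETSc does this internally in C.)
def flatten_options(options, prefix=""):
    flat = {}
    for key, value in options.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten_options(value, prefix=f"{name}_"))
        else:
            flat[name] = value
    return flat

nested = {"ts": {"dt": "1e-3", "type": "alpha", "max_time": "2.3"}}
print(flatten_options(nested))
# {'ts_dt': '1e-3', 'ts_type': 'alpha', 'ts_max_time': '2.3'}
```
&lt;br /&gt;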
=== Documentation for Specific Flags ===&lt;br /&gt;
* [https://libceed.org/en/latest/examples/fluids/ HONEE specific flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/TS/TSSetFromOptions/ Time Stepping (TS) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/SNES/SNESSetFromOptions/ Non-linear solver (SNES) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/KSP/KSPSetFromOptions/ Linear solver (KSP) flags]&lt;br /&gt;
&lt;br /&gt;
=== Notable Input Flags ===&lt;br /&gt;
* &amp;lt;code&amp;gt;-ts_monitor_solution&amp;lt;/code&amp;gt;: This will save the results of a simulation to a file. Example: &amp;lt;code&amp;gt;-ts_monitor_solution cgns:flow_visualization.cgns&amp;lt;/code&amp;gt; will save the results to a file called &amp;lt;code&amp;gt;flow_visualization.cgns&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Restarting Simulations ===&lt;br /&gt;
To restart a simulation from a previous simulation's results, those results need to be saved to a &amp;lt;code&amp;gt;*.bin&amp;lt;/code&amp;gt; file.&lt;br /&gt;
This is done using the &amp;lt;code&amp;gt;-continue&amp;lt;/code&amp;gt; flag, which will load the file set by &amp;lt;code&amp;gt;-continue_filename&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note that restarting from a binary file can only be done if the binary file was written by a simulation run with the same partition count (number of MPI ranks).&lt;br /&gt;
&lt;br /&gt;
== HONEE and Gmsh ==&lt;br /&gt;
&lt;br /&gt;
=== GMSH/Grid requirements: ===&lt;br /&gt;
# The grid must be hexahedron based (so deformed cubes). HONEE can run with tetrahedron grids, but that will require some minor code modifications which we've done before (IIRC, it's a single line of code to change). If anyone is interested in this, let me know and we can work on it.&lt;br /&gt;
#* Note that HONEE cannot handle mixed meshes, i.e. meshes containing both hexahedral and tetrahedral elements. This is not a minor code change and is probably out of scope for this Summer Program.&lt;br /&gt;
# In the geo file, you need to specify &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt;s to identify the domain boundaries to apply boundary conditions to.&lt;br /&gt;
# In the geo file, you need to specify a single &amp;lt;code&amp;gt;Physical Volume&amp;lt;/code&amp;gt; that has the entire domain. So if you split the domain into multiple pieces for meshing purposes (as is done with the vortex shedding grids), then you need the &amp;lt;code&amp;gt;Physical Volume&amp;lt;/code&amp;gt; to identify all the pieces.&lt;br /&gt;
&lt;br /&gt;
=== YAML Setup: ===&lt;br /&gt;
In the YAML file, you need to associate each &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt; with a boundary condition. I describe this briefly in the video, but each &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt; has an ID number associated with it. In the YAML file, you associate that ID number with a BC choice.&lt;br /&gt;
For example, in the &amp;lt;code&amp;gt;vortexshedding.yaml&amp;lt;/code&amp;gt; file there is:&lt;br /&gt;
 # Boundary Settings&lt;br /&gt;
 bc_slip_z: 6&lt;br /&gt;
 bc_wall: 5&lt;br /&gt;
 bc_freestream: 1&lt;br /&gt;
 bc_outflow: 2&lt;br /&gt;
 bc_slip_y: 3,4&lt;br /&gt;
&lt;br /&gt;
and the &amp;lt;code&amp;gt;cylinder.geo&amp;lt;/code&amp;gt; has:&lt;br /&gt;
&lt;br /&gt;
 Physical Surface(&amp;quot;inlet&amp;quot;) = {102};&lt;br /&gt;
 Physical Surface(&amp;quot;outlet&amp;quot;) = {116};&lt;br /&gt;
 Physical Surface(&amp;quot;top&amp;quot;) = {80, 120};&lt;br /&gt;
 Physical Surface(&amp;quot;bottom&amp;quot;) = {36, 112};&lt;br /&gt;
 Physical Surface(&amp;quot;cylinderwalls&amp;quot;) = {94, 28, 50, 72};&lt;br /&gt;
 Physical Surface(&amp;quot;frontandback&amp;quot;) = {37, 1, 4, 103, 3, 81, 2, 59, 5, 125};&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;inlet&amp;quot; face is the first identified boundary, so it gets the ID number 1.&lt;br /&gt;
In the YAML file, you see &amp;lt;code&amp;gt;bc_freestream: 1&amp;lt;/code&amp;gt;, which sets this inlet boundary to be controlled by a freestream BC.&lt;br /&gt;
Similar with the &amp;quot;outlet&amp;quot; face; it's the second specified boundary in the geo file, so we set &amp;lt;code&amp;gt;bc_outflow: 2&amp;lt;/code&amp;gt;.&lt;br /&gt;
And so on with the other faces.&lt;br /&gt;
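&lt;br /&gt;
The ordering rule above (first declared &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt; gets ID 1, the second gets ID 2, and so on) can be sketched as a small script. &amp;lt;code&amp;gt;surface_ids&amp;lt;/code&amp;gt; is my own helper for checking a geo file, not part of HONEE or Gmsh:&lt;br /&gt;
&lt;br /&gt;
```python
import re

# Illustrates the ID-numbering rule described above: each Physical Surface
# gets an ID equal to its order of appearance in the .geo file.
# (My own helper for sanity-checking a geo file; not part of HONEE or Gmsh.)
def surface_ids(geo_text):
    names = re.findall(r'Physical Surface\("([^"]+)"', geo_text)
    return {name: i + 1 for i, name in enumerate(names)}

geo = '''Physical Surface("inlet") = {102};
Physical Surface("outlet") = {116};
Physical Surface("top") = {80, 120};'''
print(surface_ids(geo))
# {'inlet': 1, 'outlet': 2, 'top': 3}
```
&lt;br /&gt;
Run over &amp;lt;code&amp;gt;cylinder.geo&amp;lt;/code&amp;gt;, this should reproduce the mapping assumed by &amp;lt;code&amp;gt;bc_freestream: 1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;bc_outflow: 2&amp;lt;/code&amp;gt; above.&lt;br /&gt;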
&lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' When using the GMSH GUI to identify the Physical Surfaces, it will actually put something like this in the geo file:&lt;br /&gt;
 Physical Surface(&amp;quot;inlet&amp;quot;, 3010) = {3005};&lt;br /&gt;
 Physical Surface(&amp;quot;outlet&amp;quot;, 3011) = {3003};&lt;br /&gt;
 Physical Surface(&amp;quot;top&amp;quot;, 3012) = {3004};&lt;br /&gt;
 Physical Surface(&amp;quot;bottom&amp;quot;, 3013) = {3002};&lt;br /&gt;
 Physical Surface(&amp;quot;nacawalls&amp;quot;, 3014) = {3008, 3006, 3007};&lt;br /&gt;
 Physical Surface(&amp;quot;frontandback&amp;quot;, 3015) = {3009, 3001};&lt;br /&gt;
&lt;br /&gt;
Note the extra &amp;lt;code&amp;gt;3010&amp;lt;/code&amp;gt; in the &amp;quot;inlet&amp;quot; definition. I'm not sure if this means the correct ID number for it should be 3010 instead of 1, but that's a possibility.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Recommendations for HONEE performance ==&lt;br /&gt;
&lt;br /&gt;
=== Compilation Flags ===&lt;br /&gt;
When compiling, you can pass certain optimization flags to make HONEE run faster.&lt;br /&gt;
For the Cisco nodes, this can be done by running the following in your libCEED directory:&lt;br /&gt;
&lt;br /&gt;
 make configure OPT='-O3 -march=native -g -ffp-contract=fast -fopenmp-simd'&lt;br /&gt;
 make build/fluids-navierstokes -Bj&lt;br /&gt;
&lt;br /&gt;
=== General Running Settings ===&lt;br /&gt;
A general way to improve performance is to loosen the solver tolerances.&lt;br /&gt;
Specifically, &amp;lt;code&amp;gt;-snes_rtol 1e-4 -ksp_rtol 1e-4&amp;lt;/code&amp;gt; have been found to be good choices.&lt;br /&gt;
These may require tweaking if you run into divergence issues.&lt;br /&gt;
&lt;br /&gt;
=== Degree-based settings ===&lt;br /&gt;
Optimal solver settings can change based on what degree elements you work with (determined by the &amp;lt;code&amp;gt;-degree&amp;lt;/code&amp;gt; flag).&lt;br /&gt;
For linear elements (&amp;lt;code&amp;gt;-degree 1&amp;lt;/code&amp;gt;), I've had success with:&lt;br /&gt;
&lt;br /&gt;
 -amat_type -snes_lag_jacobian 5 -snes_lag_jacobian_persists false -snes_lag_preconditioner 20 -snes_lag_preconditioner_persists true -pc_type asm -sub_pc_type lu&lt;br /&gt;
&lt;br /&gt;
While for cubic elements (&amp;lt;code&amp;gt;-degree 3&amp;lt;/code&amp;gt;) I've seen better performance with:&lt;br /&gt;
&lt;br /&gt;
 -amat_type shell -snes_lag_jacobian 15 -snes_lag_jacobian_persists true -pc_type asm -sub_pc_type lu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that these flags were tuned while running the vortex shedding example, so the optimal options may differ for other problems.&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2070</id>
		<title>FDSI Summer Program/HONEE</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2070"/>
				<updated>2024-06-21T15:51:16Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Recommendations for HONEE performance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Information for tutorial in HONEE, to be updated as needed.''&lt;br /&gt;
&lt;br /&gt;
'''HONEE''', or the '''High-Order Navier-stokes Equation Evaluator''', is a CFD program that combines [https://libceed.org/en/latest/ libCEED] and [https://petsc.org/release/ PETSc].&lt;br /&gt;
&lt;br /&gt;
== Building HONEE for Cisco nodes ==&lt;br /&gt;
&lt;br /&gt;
First, start a job on the Cisco nodes. All the below commands should be run within a terminal on a Cisco node.&lt;br /&gt;
&lt;br /&gt;
=== Setup Environment ===&lt;br /&gt;
 source /projects/tools/Spackv0.23/share/spack/setup-env.sh&lt;br /&gt;
 spack load gcc@12.3&lt;br /&gt;
 module load openmpi&lt;br /&gt;
&lt;br /&gt;
=== Create new directory ===&lt;br /&gt;
 mkdir honee_build&lt;br /&gt;
 cd honee_build&lt;br /&gt;
&lt;br /&gt;
=== Build PETSc===&lt;br /&gt;
Ordinarily, I'd simply recommend doing a &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;, e.g. &amp;lt;code&amp;gt;git clone https://gitlab.com/petsc/petsc.git&amp;lt;/code&amp;gt;.&lt;br /&gt;
However, the &amp;lt;code&amp;gt;/nobackup/&amp;lt;/code&amp;gt; server is quite slow and this process takes around 20 minutes (normally about a minute on my laptop), so downloading and extracting a tarball may be quicker.&lt;br /&gt;
I may cover that later, but for now I'll just cover it via &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
 git clone https://gitlab.com/petsc/petsc.git&lt;br /&gt;
 cd petsc&lt;br /&gt;
 cp /nobackup/uncompressed/jrwrigh/HONEE_Setup/petsc/reconfigure.py .&lt;br /&gt;
 ./reconfigure.py&lt;br /&gt;
 make&lt;br /&gt;
 export PETSC_DIR=$(pwd) PETSC_ARCH=arch-32&lt;br /&gt;
 cd ..&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;reconfigure.py&amp;lt;/code&amp;gt; file has the following:&lt;br /&gt;
 #!/bin/python3&lt;br /&gt;
 if __name__ == '__main__':&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   sys.path.insert(0, os.path.abspath('config'))&lt;br /&gt;
   import configure&lt;br /&gt;
   configure_options = [&lt;br /&gt;
     '--with-64-bit-indices=0',&lt;br /&gt;
     '--download-hdf5',&lt;br /&gt;
     '--download-cgns',&lt;br /&gt;
     '--download-ctetgen=1',&lt;br /&gt;
     '--download-parmetis=1',&lt;br /&gt;
     '--download-metis=1',&lt;br /&gt;
     '--download-ptscotch=1',&lt;br /&gt;
     '--with-debugging=0',&lt;br /&gt;
     '--with-fortran-bindings=0',&lt;br /&gt;
     '--with-fc=0',&lt;br /&gt;
     'PETSC_ARCH=arch-32',&lt;br /&gt;
     'COPTFLAGS=-g -O3',&lt;br /&gt;
     'CXXOPTFLAGS=-g -O3',&lt;br /&gt;
   ]&lt;br /&gt;
   configure.petsc_configure(configure_options)&lt;br /&gt;
&lt;br /&gt;
=== Building HONEE ===&lt;br /&gt;
Ensure that &amp;lt;code&amp;gt;PETSC_DIR&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PETSC_ARCH&amp;lt;/code&amp;gt; are set to the desired PETSc installation.&lt;br /&gt;
&lt;br /&gt;
 git clone https://github.com/CEED/libCEED.git&lt;br /&gt;
 cd libCEED&lt;br /&gt;
 make build/fluids-navierstokes -j&lt;br /&gt;
&lt;br /&gt;
The executable &amp;lt;code&amp;gt;build/fluids-navierstokes&amp;lt;/code&amp;gt; is then built and ready for running.&lt;br /&gt;
&lt;br /&gt;
== Changing HONEE Inputs ==&lt;br /&gt;
HONEE uses PETSc's input argument system for handling inputs.&lt;br /&gt;
They can be provided either via command-line flags or inside a YAML file.&lt;br /&gt;
So &lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -ts_dt 1e-3&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml&lt;br /&gt;
&lt;br /&gt;
if &amp;lt;code&amp;gt;test.yaml&amp;lt;/code&amp;gt; has this in it:&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
&lt;br /&gt;
Note that PETSc also allows hierarchical flags within a YAML file.&lt;br /&gt;
So instead of writing&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
 ts_type: alpha&lt;br /&gt;
 ts_max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
You can write:&lt;br /&gt;
&lt;br /&gt;
 ts:&lt;br /&gt;
   dt: 1e-3&lt;br /&gt;
   type: alpha&lt;br /&gt;
   max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
=== Documentation for Specific Flags ===&lt;br /&gt;
* [https://libceed.org/en/latest/examples/fluids/ HONEE specific flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/TS/TSSetFromOptions/ Time Stepping (TS) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/SNES/SNESSetFromOptions/ Non-linear solver (SNES) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/KSP/KSPSetFromOptions/ Linear solver (KSP) flags]&lt;br /&gt;
&lt;br /&gt;
=== Notable Input Flags ===&lt;br /&gt;
* &amp;lt;code&amp;gt;-ts_monitor_solution&amp;lt;/code&amp;gt;: This will save the results of a simulation to a file. Example: &amp;lt;code&amp;gt;-ts_monitor_solution cgns:flow_visualization.cgns&amp;lt;/code&amp;gt; will save the results to a file called &amp;lt;code&amp;gt;flow_visualization.cgns&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Restarting Simulations ===&lt;br /&gt;
To restart a simulation from a previous simulation's results, those results need to be saved to a &amp;lt;code&amp;gt;*.bin&amp;lt;/code&amp;gt; file.&lt;br /&gt;
This is done using the &amp;lt;code&amp;gt;-continue&amp;lt;/code&amp;gt; flag, which will load the file set by &amp;lt;code&amp;gt;-continue_filename&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note that restarting from a binary file can only be done if the binary file was written by a simulation run with the same partition count (number of MPI ranks).&lt;br /&gt;
&lt;br /&gt;
== HONEE and Gmsh ==&lt;br /&gt;
&lt;br /&gt;
=== GMSH/Grid requirements: ===&lt;br /&gt;
# The grid must be hexahedron based (so deformed cubes). HONEE can run with tetrahedron grids, but that will require some minor code modifications which we've done before (IIRC, it's a single line of code to change). If anyone is interested in this, let me know and we can work on it.&lt;br /&gt;
#* Note that HONEE cannot handle mixed meshes, i.e. meshes containing both hexahedral and tetrahedral elements. This is not a minor code change and is probably out of scope for this Summer Program.&lt;br /&gt;
# In the geo file, you need to specify &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt;s to identify the domain boundaries to apply boundary conditions to.&lt;br /&gt;
# In the geo file, you need to specify a single &amp;lt;code&amp;gt;Physical Volume&amp;lt;/code&amp;gt; that has the entire domain. So if you split the domain into multiple pieces for meshing purposes (as is done with the vortex shedding grids), then you need the &amp;lt;code&amp;gt;Physical Volume&amp;lt;/code&amp;gt; to identify all the pieces.&lt;br /&gt;
&lt;br /&gt;
=== YAML Setup: ===&lt;br /&gt;
In the YAML file, you need to associate each &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt; with a boundary condition. I describe this briefly in the video, but each &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt; has an ID number associated with it. In the YAML file, you associate that ID number with a BC choice.&lt;br /&gt;
For example, in the &amp;lt;code&amp;gt;vortexshedding.yaml&amp;lt;/code&amp;gt; file there is:&lt;br /&gt;
 # Boundary Settings&lt;br /&gt;
 bc_slip_z: 6&lt;br /&gt;
 bc_wall: 5&lt;br /&gt;
 bc_freestream: 1&lt;br /&gt;
 bc_outflow: 2&lt;br /&gt;
 bc_slip_y: 3,4&lt;br /&gt;
&lt;br /&gt;
and the &amp;lt;code&amp;gt;cylinder.geo&amp;lt;/code&amp;gt; has:&lt;br /&gt;
&lt;br /&gt;
 Physical Surface(&amp;quot;inlet&amp;quot;) = {102};&lt;br /&gt;
 Physical Surface(&amp;quot;outlet&amp;quot;) = {116};&lt;br /&gt;
 Physical Surface(&amp;quot;top&amp;quot;) = {80, 120};&lt;br /&gt;
 Physical Surface(&amp;quot;bottom&amp;quot;) = {36, 112};&lt;br /&gt;
 Physical Surface(&amp;quot;cylinderwalls&amp;quot;) = {94, 28, 50, 72};&lt;br /&gt;
 Physical Surface(&amp;quot;frontandback&amp;quot;) = {37, 1, 4, 103, 3, 81, 2, 59, 5, 125};&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;inlet&amp;quot; face is the first identified boundary, so it gets the ID number 1.&lt;br /&gt;
In the YAML file, you see &amp;lt;code&amp;gt;bc_freestream: 1&amp;lt;/code&amp;gt;, which sets this inlet boundary to be controlled by a freestream BC.&lt;br /&gt;
Similar with the &amp;quot;outlet&amp;quot; face; it's the second specified boundary in the geo file, so we set &amp;lt;code&amp;gt;bc_outflow: 2&amp;lt;/code&amp;gt;.&lt;br /&gt;
And so on with the other faces.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' When using the GMSH GUI to identify the Physical Surfaces, it will actually put something like this in the geo file:&lt;br /&gt;
 Physical Surface(&amp;quot;inlet&amp;quot;, 3010) = {3005};&lt;br /&gt;
 Physical Surface(&amp;quot;outlet&amp;quot;, 3011) = {3003};&lt;br /&gt;
 Physical Surface(&amp;quot;top&amp;quot;, 3012) = {3004};&lt;br /&gt;
 Physical Surface(&amp;quot;bottom&amp;quot;, 3013) = {3002};&lt;br /&gt;
 Physical Surface(&amp;quot;nacawalls&amp;quot;, 3014) = {3008, 3006, 3007};&lt;br /&gt;
 Physical Surface(&amp;quot;frontandback&amp;quot;, 3015) = {3009, 3001};&lt;br /&gt;
&lt;br /&gt;
Note the extra &amp;lt;code&amp;gt;3010&amp;lt;/code&amp;gt; in the &amp;quot;inlet&amp;quot; definition. I'm not sure if this means the correct ID number for it should be 3010 instead of 1, but that's a possibility.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Recommendations for HONEE performance ==&lt;br /&gt;
&lt;br /&gt;
=== Compilation Flags ===&lt;br /&gt;
When compiling, you can pass certain optimization flags to make HONEE run faster.&lt;br /&gt;
For the Cisco nodes, this can be done by running the following in your libCEED directory:&lt;br /&gt;
&lt;br /&gt;
 make configure OPT='-O3 -march=native -g -ffp-contract=fast -fopenmp-simd'&lt;br /&gt;
 make build/fluids-navierstokes -Bj&lt;br /&gt;
&lt;br /&gt;
=== General Running Settings ===&lt;br /&gt;
A general way to improve performance is to loosen the solver tolerances.&lt;br /&gt;
Specifically, &amp;lt;code&amp;gt;-snes_rtol 1e-4 -ksp_rtol 1e-4&amp;lt;/code&amp;gt; have been found to be good choices.&lt;br /&gt;
These may require tweaking if you run into divergence issues.&lt;br /&gt;
&lt;br /&gt;
=== Degree-based settings ===&lt;br /&gt;
Optimal solver settings can change based on what degree elements you work with (determined by the &amp;lt;code&amp;gt;-degree&amp;lt;/code&amp;gt; flag).&lt;br /&gt;
For linear elements (&amp;lt;code&amp;gt;-degree 1&amp;lt;/code&amp;gt;), I've had success with:&lt;br /&gt;
&lt;br /&gt;
 -amat_type -snes_lag_jacobian 5 -snes_lag_jacobian_persists false -snes_lag_preconditioner 20 -snes_lag_preconditioner_persists true -pc_type asm -sub_pc_type lu&lt;br /&gt;
&lt;br /&gt;
While for cubic elements (&amp;lt;code&amp;gt;-degree 3&amp;lt;/code&amp;gt;) I've seen better performance with:&lt;br /&gt;
&lt;br /&gt;
 -amat_type shell -snes_lag_jacobian 15 -snes_lag_jacobian_persists true -pc_type asm -sub_pc_type lu&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2069</id>
		<title>FDSI Summer Program/HONEE</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2069"/>
				<updated>2024-06-21T15:49:38Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Recommendations for HONEE performance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Information for tutorial in HONEE, to be updated as needed.''&lt;br /&gt;
&lt;br /&gt;
'''HONEE''', or the '''High-Order Navier-stokes Equation Evaluator''', is a CFD program that combines [https://libceed.org/en/latest/ libCEED] and [https://petsc.org/release/ PETSc].&lt;br /&gt;
&lt;br /&gt;
== Building HONEE for Cisco nodes ==&lt;br /&gt;
&lt;br /&gt;
First, start a job on the Cisco nodes. All the below commands should be run within a terminal on a Cisco node.&lt;br /&gt;
&lt;br /&gt;
=== Setup Environment ===&lt;br /&gt;
 source /projects/tools/Spackv0.23/share/spack/setup-env.sh&lt;br /&gt;
 spack load gcc@12.3&lt;br /&gt;
 module load openmpi&lt;br /&gt;
&lt;br /&gt;
=== Create new directory ===&lt;br /&gt;
 mkdir honee_build&lt;br /&gt;
 cd honee_build&lt;br /&gt;
&lt;br /&gt;
=== Build PETSc===&lt;br /&gt;
Ordinarily, I'd simply recommend doing a &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;, e.g. &amp;lt;code&amp;gt;git clone https://gitlab.com/petsc/petsc.git&amp;lt;/code&amp;gt;.&lt;br /&gt;
However, the &amp;lt;code&amp;gt;/nobackup/&amp;lt;/code&amp;gt; server is quite slow and this process takes around 20 minutes (normally about a minute on my laptop), so downloading and extracting a tarball may be quicker.&lt;br /&gt;
I may cover that later, but for now I'll just cover it via &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
 git clone https://gitlab.com/petsc/petsc.git&lt;br /&gt;
 cd petsc&lt;br /&gt;
 cp /nobackup/uncompressed/jrwrigh/HONEE_Setup/petsc/reconfigure.py .&lt;br /&gt;
 ./reconfigure.py&lt;br /&gt;
 make&lt;br /&gt;
 export PETSC_DIR=$(pwd) PETSC_ARCH=arch-32&lt;br /&gt;
 cd ..&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;reconfigure.py&amp;lt;/code&amp;gt; file has the following:&lt;br /&gt;
 #!/bin/python3&lt;br /&gt;
 if __name__ == '__main__':&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   sys.path.insert(0, os.path.abspath('config'))&lt;br /&gt;
   import configure&lt;br /&gt;
   configure_options = [&lt;br /&gt;
     '--with-64-bit-indices=0',&lt;br /&gt;
     '--download-hdf5',&lt;br /&gt;
     '--download-cgns',&lt;br /&gt;
     '--download-ctetgen=1',&lt;br /&gt;
     '--download-parmetis=1',&lt;br /&gt;
     '--download-metis=1',&lt;br /&gt;
     '--download-ptscotch=1',&lt;br /&gt;
     '--with-debugging=0',&lt;br /&gt;
     '--with-fortran-bindings=0',&lt;br /&gt;
     '--with-fc=0',&lt;br /&gt;
     'PETSC_ARCH=arch-32',&lt;br /&gt;
     'COPTFLAGS=-g -O3',&lt;br /&gt;
     'CXXOPTFLAGS=-g -O3',&lt;br /&gt;
   ]&lt;br /&gt;
   configure.petsc_configure(configure_options)&lt;br /&gt;
&lt;br /&gt;
=== Building HONEE ===&lt;br /&gt;
Ensure that &amp;lt;code&amp;gt;PETSC_DIR&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PETSC_ARCH&amp;lt;/code&amp;gt; are set to the desired PETSc installation.&lt;br /&gt;
&lt;br /&gt;
 git clone https://github.com/CEED/libCEED.git&lt;br /&gt;
 cd libCEED&lt;br /&gt;
 make build/fluids-navierstokes -j&lt;br /&gt;
&lt;br /&gt;
The executable &amp;lt;code&amp;gt;build/fluids-navierstokes&amp;lt;/code&amp;gt; is then built and ready for running.&lt;br /&gt;
&lt;br /&gt;
== Changing HONEE Inputs ==&lt;br /&gt;
HONEE uses PETSc's input argument system for handling inputs.&lt;br /&gt;
They can be provided either via command-line flags or inside a YAML file.&lt;br /&gt;
So &lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -ts_dt 1e-3&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml&lt;br /&gt;
&lt;br /&gt;
if &amp;lt;code&amp;gt;test.yaml&amp;lt;/code&amp;gt; has this in it:&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
&lt;br /&gt;
Note that PETSc also allows hierarchical flags within a YAML file.&lt;br /&gt;
So instead of writing&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
 ts_type: alpha&lt;br /&gt;
 ts_max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
You can write:&lt;br /&gt;
&lt;br /&gt;
 ts:&lt;br /&gt;
   dt: 1e-3&lt;br /&gt;
   type: alpha&lt;br /&gt;
   max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
=== Documentation for Specific Flags ===&lt;br /&gt;
* [https://libceed.org/en/latest/examples/fluids/ HONEE specific flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/TS/TSSetFromOptions/ Time Stepping (TS) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/SNES/SNESSetFromOptions/ Non-linear solver (SNES) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/KSP/KSPSetFromOptions/ Linear solver (KSP) flags]&lt;br /&gt;
&lt;br /&gt;
=== Notable Input Flags ===&lt;br /&gt;
* &amp;lt;code&amp;gt;-ts_monitor_solution&amp;lt;/code&amp;gt;: This will save the results of a simulation to a file. Example: &amp;lt;code&amp;gt;-ts_monitor_solution cgns:flow_visualization.cgns&amp;lt;/code&amp;gt; will save the results to a file called &amp;lt;code&amp;gt;flow_visualization.cgns&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Restarting Simulations ===&lt;br /&gt;
To restart a simulation from a previous simulation's results, those results need to be saved to a &amp;lt;code&amp;gt;*.bin&amp;lt;/code&amp;gt; file.&lt;br /&gt;
This is done using the &amp;lt;code&amp;gt;-continue&amp;lt;/code&amp;gt; flag, which will load the file set by &amp;lt;code&amp;gt;-continue_filename&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note that restarting from a binary file can only be done if the binary file was written by a simulation run with the same partition count (number of MPI ranks).&lt;br /&gt;
&lt;br /&gt;
== HONEE and Gmsh ==&lt;br /&gt;
&lt;br /&gt;
=== GMSH/Grid requirements: ===&lt;br /&gt;
# The grid must be hexahedron based (so deformed cubes). HONEE can run with tetrahedron grids, but that will require some minor code modifications which we've done before (IIRC, it's a single line of code to change). If anyone is interested in this, let me know and we can work on it.&lt;br /&gt;
#* Note that HONEE cannot handle mixed meshes, i.e. meshes containing both hexahedral and tetrahedral elements. This is not a minor code change and is probably out of scope for this Summer Program.&lt;br /&gt;
# In the geo file, you need to specify &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt;s to identify the domain boundaries to apply boundary conditions to.&lt;br /&gt;
# In the geo file, you need to specify a single &amp;lt;code&amp;gt;Physical Volume&amp;lt;/code&amp;gt; that has the entire domain. So if you split the domain into multiple pieces for meshing purposes (as is done with the vortex shedding grids), then you need the &amp;lt;code&amp;gt;Physical Volume&amp;lt;/code&amp;gt; to identify all the pieces.&lt;br /&gt;
&lt;br /&gt;
=== YAML Setup: ===&lt;br /&gt;
In the YAML file, you need to associate each &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt; with a boundary condition. I describe this briefly in the video, but each &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt; has an ID number associated with it. In the YAML file, you associate that ID number with a BC choice.&lt;br /&gt;
For example, in the &amp;lt;code&amp;gt;vortexshedding.yaml&amp;lt;/code&amp;gt; file there is:&lt;br /&gt;
 # Boundary Settings&lt;br /&gt;
 bc_slip_z: 6&lt;br /&gt;
 bc_wall: 5&lt;br /&gt;
 bc_freestream: 1&lt;br /&gt;
 bc_outflow: 2&lt;br /&gt;
 bc_slip_y: 3,4&lt;br /&gt;
&lt;br /&gt;
and the &amp;lt;code&amp;gt;cylinder.geo&amp;lt;/code&amp;gt; has:&lt;br /&gt;
&lt;br /&gt;
 Physical Surface(&amp;quot;inlet&amp;quot;) = {102};&lt;br /&gt;
 Physical Surface(&amp;quot;outlet&amp;quot;) = {116};&lt;br /&gt;
 Physical Surface(&amp;quot;top&amp;quot;) = {80, 120};&lt;br /&gt;
 Physical Surface(&amp;quot;bottom&amp;quot;) = {36, 112};&lt;br /&gt;
 Physical Surface(&amp;quot;cylinderwalls&amp;quot;) = {94, 28, 50, 72};&lt;br /&gt;
 Physical Surface(&amp;quot;frontandback&amp;quot;) = {37, 1, 4, 103, 3, 81, 2, 59, 5, 125};&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;inlet&amp;quot; face is the first identified boundary, so it gets the ID number 1.&lt;br /&gt;
In the YAML file, you see &amp;lt;code&amp;gt;bc_freestream: 1&amp;lt;/code&amp;gt;, which sets this inlet boundary to be controlled by a freestream BC.&lt;br /&gt;
Similar with the &amp;quot;outlet&amp;quot; face; it's the second specified boundary in the geo file, so we set &amp;lt;code&amp;gt;bc_outflow: 2&amp;lt;/code&amp;gt;.&lt;br /&gt;
And so on with the other faces.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' When using the GMSH GUI to identify the Physical Surfaces, it will actually put something like this in the geo file:&lt;br /&gt;
 Physical Surface(&amp;quot;inlet&amp;quot;, 3010) = {3005};&lt;br /&gt;
 Physical Surface(&amp;quot;outlet&amp;quot;, 3011) = {3003};&lt;br /&gt;
 Physical Surface(&amp;quot;top&amp;quot;, 3012) = {3004};&lt;br /&gt;
 Physical Surface(&amp;quot;bottom&amp;quot;, 3013) = {3002};&lt;br /&gt;
 Physical Surface(&amp;quot;nacawalls&amp;quot;, 3014) = {3008, 3006, 3007};&lt;br /&gt;
 Physical Surface(&amp;quot;frontandback&amp;quot;, 3015) = {3009, 3001};&lt;br /&gt;
&lt;br /&gt;
Note the extra &amp;lt;code&amp;gt;3010&amp;lt;/code&amp;gt; in the &amp;quot;inlet&amp;quot; definition. I'm not sure whether this means the correct ID number for it should be 3010 instead of 1, but that's a possibility.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Recommendations for HONEE performance ==&lt;br /&gt;
&lt;br /&gt;
=== Compilation Flags ===&lt;br /&gt;
When compiling, you can pass certain optimization flags to make HONEE run faster.&lt;br /&gt;
For the Cisco nodes, this can be done by running the following in your libCEED directory:&lt;br /&gt;
&lt;br /&gt;
 make configure OPT='-O3 -march=native -g -ffp-contract=fast -fopenmp-simd'&lt;br /&gt;
 make build/fluids-navierstokes -Bj&lt;br /&gt;
&lt;br /&gt;
=== General Running Settings ===&lt;br /&gt;
Some general settings that will help improve performance are to lower the solver tolerances.&lt;br /&gt;
Specifically &amp;lt;code&amp;gt;-snes_rtol 1e-4 -ksp_rtol 1e-4&amp;lt;/code&amp;gt; have been found to be pretty good choices.&lt;br /&gt;
These may require tweaking if you run into divergence issues.&lt;br /&gt;
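&lt;br /&gt;
For example, a run combining these tolerances with a YAML options file might look like the following (the YAML filename here is just illustrative):&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file vortexshedding.yaml -snes_rtol 1e-4 -ksp_rtol 1e-4&lt;br /&gt;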
&lt;br /&gt;
=== Degree-based settings ===&lt;br /&gt;
Optimal solver settings can change based on what degree elements you work with (determined by the &amp;lt;code&amp;gt;-degree&amp;lt;/code&amp;gt; flag).&lt;br /&gt;
For linear elements (&amp;lt;code&amp;gt;-degree 1&amp;lt;/code&amp;gt;), I've had success with:&lt;br /&gt;
&lt;br /&gt;
 -amat_type -snes_lag_jacobian 5 -snes_lag_jacobian_persists false -snes_lag_preconditioner 20 -snes_lag_preconditioner_persists true -pc_type asm -sub_pc_type lu&lt;br /&gt;
&lt;br /&gt;
While for cubic elements (&amp;lt;code&amp;gt;-degree 3&amp;lt;/code&amp;gt;) I've seen better performance with:&lt;br /&gt;
&lt;br /&gt;
 -amat_type shell -snes_lag_jacobian 20 -snes_lag_jacobian_persists true -pc_type asm -sub_pc_type lu&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2068</id>
		<title>FDSI Summer Program/HONEE</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2068"/>
				<updated>2024-06-21T15:35:48Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Information for tutorial in HONEE, to be updated as needed.''&lt;br /&gt;
&lt;br /&gt;
'''HONEE''', or the '''High-Order Navier-stokes Equation Evaluator''', is a CFD program that combines [https://libceed.org/en/latest/ libCEED] and [https://petsc.org/release/ PETSc].&lt;br /&gt;
&lt;br /&gt;
== Building HONEE for Cisco nodes ==&lt;br /&gt;
&lt;br /&gt;
First, start a job on the Cisco nodes. All the below commands should be run within a terminal on a Cisco node.&lt;br /&gt;
&lt;br /&gt;
=== Setup Environment ===&lt;br /&gt;
 source /projects/tools/Spackv0.23/share/spack/setup-env.sh&lt;br /&gt;
 spack load gcc@12.3&lt;br /&gt;
 module load openmpi&lt;br /&gt;
&lt;br /&gt;
=== Create new directory ===&lt;br /&gt;
 mkdir honee_build&lt;br /&gt;
 cd honee_build&lt;br /&gt;
&lt;br /&gt;
=== Build PETSc===&lt;br /&gt;
Ordinarily, I'd recommend simply doing a &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;, e.g. &amp;lt;code&amp;gt;git clone https://gitlab.com/petsc/petsc.git&amp;lt;/code&amp;gt;.&lt;br /&gt;
However, the &amp;lt;code&amp;gt;/nobackup/&amp;lt;/code&amp;gt; server is quite slow, so this process takes around 20 minutes (normally about a minute on my laptop); downloading and extracting a tarball may be quicker.&lt;br /&gt;
I may cover that later, but for now I'll just cover the &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt; route.&lt;br /&gt;
&lt;br /&gt;
 git clone https://gitlab.com/petsc/petsc.git&lt;br /&gt;
 cd petsc&lt;br /&gt;
 cp /nobackup/uncompressed/jrwrigh/HONEE_Setup/petsc/reconfigure.py .&lt;br /&gt;
 ./reconfigure.py&lt;br /&gt;
 make&lt;br /&gt;
 export PETSC_DIR=$(pwd) PETSC_ARCH=arch-32&lt;br /&gt;
 cd ..&lt;br /&gt;
&lt;br /&gt;
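Optionally, before leaving the &amp;lt;code&amp;gt;petsc&amp;lt;/code&amp;gt; directory, you can run PETSc's built-in sanity check to confirm the build works:&lt;br /&gt;
&lt;br /&gt;
 make check&lt;br /&gt;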
&lt;br /&gt;
The &amp;lt;code&amp;gt;reconfigure.py&amp;lt;/code&amp;gt; file has the following:&lt;br /&gt;
 #!/bin/python3&lt;br /&gt;
 if __name__ == '__main__':&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   sys.path.insert(0, os.path.abspath('config'))&lt;br /&gt;
   import configure&lt;br /&gt;
   configure_options = [&lt;br /&gt;
     '--with-64-bit-indices=0',&lt;br /&gt;
     '--download-hdf5',&lt;br /&gt;
     '--download-cgns',&lt;br /&gt;
     '--download-ctetgen=1',&lt;br /&gt;
     '--download-parmetis=1',&lt;br /&gt;
     '--download-metis=1',&lt;br /&gt;
     '--download-ptscotch=1',&lt;br /&gt;
     '--with-debugging=0',&lt;br /&gt;
     '--with-fortran-bindings=0',&lt;br /&gt;
     '--with-fc=0',&lt;br /&gt;
     'PETSC_ARCH=arch-32',&lt;br /&gt;
     'COPTFLAGS=-g -O3',&lt;br /&gt;
     'CXXOPTFLAGS=-g -O3',&lt;br /&gt;
   ]&lt;br /&gt;
   configure.petsc_configure(configure_options)&lt;br /&gt;
&lt;br /&gt;
=== Building HONEE ===&lt;br /&gt;
Ensure that &amp;lt;code&amp;gt;PETSC_DIR&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PETSC_ARCH&amp;lt;/code&amp;gt; are set to the desired PETSc installation.&lt;br /&gt;
&lt;br /&gt;
 git clone https://github.com/CEED/libCEED.git&lt;br /&gt;
 cd libCEED&lt;br /&gt;
 make build/fluids-navierstokes -j&lt;br /&gt;
&lt;br /&gt;
The executable &amp;lt;code&amp;gt;build/fluids-navierstokes&amp;lt;/code&amp;gt; is then built and ready for running.&lt;br /&gt;
&lt;br /&gt;
== Changing HONEE Inputs ==&lt;br /&gt;
HONEE uses PETSc's input argument system for handling inputs.&lt;br /&gt;
They can be provided either via command-line flags or inside a YAML file.&lt;br /&gt;
So &lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -ts_dt 1e-3&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml&lt;br /&gt;
&lt;br /&gt;
if &amp;lt;code&amp;gt;test.yaml&amp;lt;/code&amp;gt; has this in it:&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
&lt;br /&gt;
Note that PETSc also supports hierarchical flags within a YAML file.&lt;br /&gt;
So instead of writing&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
 ts_type: alpha&lt;br /&gt;
 ts_max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
You can write:&lt;br /&gt;
&lt;br /&gt;
 ts:&lt;br /&gt;
   dt: 1e-3&lt;br /&gt;
   type: alpha&lt;br /&gt;
   max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
=== Documentation for Specific Flags ===&lt;br /&gt;
* [https://libceed.org/en/latest/examples/fluids/ HONEE specific flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/TS/TSSetFromOptions/ Time Stepping (TS) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/SNES/SNESSetFromOptions/ Non-linear solver (SNES) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/KSP/KSPSetFromOptions/ Linear solver (KSP) flags]&lt;br /&gt;
&lt;br /&gt;
=== Notable Input Flags ===&lt;br /&gt;
* &amp;lt;code&amp;gt;-ts_monitor_solution&amp;lt;/code&amp;gt;: This will save the results of a simulation to a file. Example: &amp;lt;code&amp;gt;-ts_monitor_solution cgns:flow_visualization.cgns&amp;lt;/code&amp;gt; will save the results to a file called &amp;lt;code&amp;gt;flow_visualization.cgns&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Restarting Simulations ===&lt;br /&gt;
To restart a simulation from a previous simulation's result, they need to be saved to a &amp;lt;code&amp;gt;*.bin&amp;lt;/code&amp;gt; file.&lt;br /&gt;
This is done using the &amp;lt;code&amp;gt;-continue&amp;lt;/code&amp;gt; flag, which will load the file set by &amp;lt;code&amp;gt;-continue_filename&amp;lt;/code&amp;gt;.&lt;br /&gt;
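&lt;br /&gt;
For example, using the flags described above (the &amp;lt;code&amp;gt;restart.bin&amp;lt;/code&amp;gt; filename here is hypothetical):&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml -continue -continue_filename restart.bin&lt;br /&gt;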
&lt;br /&gt;
Note that restarting from a binary file is only possible if the binary file was written by a simulation running with the same partition count (number of parallel processes).&lt;br /&gt;
&lt;br /&gt;
== HONEE and Gmsh ==&lt;br /&gt;
&lt;br /&gt;
=== GMSH/Grid requirements: ===&lt;br /&gt;
# The grid must be hexahedron-based (i.e., deformed cubes). HONEE can run with tetrahedral grids, but that requires some minor code modifications, which we've done before (IIRC, it's a single line of code to change). If anyone is interested in this, let me know and we can work on it.&lt;br /&gt;
#* Note that HONEE cannot handle mixed meshes, i.e. meshes containing both hexahedral and tetrahedral elements. This is not a minor code change and is probably out of scope for this Summer Program.&lt;br /&gt;
# In the geo file, you need to specify &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt;s to identify the domain boundaries to apply boundary conditions to.&lt;br /&gt;
# In the geo file, you need to specify a single &amp;lt;code&amp;gt;Physical Volume&amp;lt;/code&amp;gt; that has the entire domain. So if you split the domain into multiple pieces for meshing purposes (as is done with the vortex shedding grids), then you need the &amp;lt;code&amp;gt;Physical Volume&amp;lt;/code&amp;gt; to identify all the pieces.&lt;br /&gt;
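&lt;br /&gt;
As a sketch (the volume IDs here are placeholders, not taken from a real grid), a domain split into two pieces would collect both in a single &amp;lt;code&amp;gt;Physical Volume&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 Physical Volume(&amp;quot;fluid&amp;quot;) = {1, 2};&lt;br /&gt;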
&lt;br /&gt;
=== YAML Setup: ===&lt;br /&gt;
In the YAML file, you need to associate each &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt; with a boundary condition. I describe this briefly in the video, but each &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt; has an ID number associated with it. In the YAML file, you associate that ID number with a BC choice.&lt;br /&gt;
For example, in the &amp;lt;code&amp;gt;vortexshedding.yaml&amp;lt;/code&amp;gt; file there is:&lt;br /&gt;
 # Boundary Settings&lt;br /&gt;
 bc_slip_z: 6&lt;br /&gt;
 bc_wall: 5&lt;br /&gt;
 bc_freestream: 1&lt;br /&gt;
 bc_outflow: 2&lt;br /&gt;
 bc_slip_y: 3,4&lt;br /&gt;
&lt;br /&gt;
and the &amp;lt;code&amp;gt;cylinder.geo&amp;lt;/code&amp;gt; has:&lt;br /&gt;
&lt;br /&gt;
 Physical Surface(&amp;quot;inlet&amp;quot;) = {102};&lt;br /&gt;
 Physical Surface(&amp;quot;outlet&amp;quot;) = {116};&lt;br /&gt;
 Physical Surface(&amp;quot;top&amp;quot;) = {80, 120};&lt;br /&gt;
 Physical Surface(&amp;quot;bottom&amp;quot;) = {36, 112};&lt;br /&gt;
 Physical Surface(&amp;quot;cylinderwalls&amp;quot;) = {94, 28, 50, 72};&lt;br /&gt;
 Physical Surface(&amp;quot;frontandback&amp;quot;) = {37, 1, 4, 103, 3, 81, 2, 59, 5, 125};&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;inlet&amp;quot; face is the first identified boundary, so it gets the ID number 1.&lt;br /&gt;
In the YAML file, you see &amp;lt;code&amp;gt;bc_freestream: 1&amp;lt;/code&amp;gt;, which sets this inlet boundary to be controlled by a freestream BC.&lt;br /&gt;
Similarly for the &amp;quot;outlet&amp;quot; face: it's the second specified boundary in the geo file, so we set &amp;lt;code&amp;gt;bc_outflow: 2&amp;lt;/code&amp;gt;.&lt;br /&gt;
And so on with the other faces.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' When using the GMSH GUI to identify the Physical Surfaces, it will actually put something like this in the geo file:&lt;br /&gt;
 Physical Surface(&amp;quot;inlet&amp;quot;, 3010) = {3005};&lt;br /&gt;
 Physical Surface(&amp;quot;outlet&amp;quot;, 3011) = {3003};&lt;br /&gt;
 Physical Surface(&amp;quot;top&amp;quot;, 3012) = {3004};&lt;br /&gt;
 Physical Surface(&amp;quot;bottom&amp;quot;, 3013) = {3002};&lt;br /&gt;
 Physical Surface(&amp;quot;nacawalls&amp;quot;, 3014) = {3008, 3006, 3007};&lt;br /&gt;
 Physical Surface(&amp;quot;frontandback&amp;quot;, 3015) = {3009, 3001};&lt;br /&gt;
&lt;br /&gt;
Note the extra &amp;lt;code&amp;gt;3010&amp;lt;/code&amp;gt; in the &amp;quot;inlet&amp;quot; definition. I'm not sure whether this means the correct ID number for it should be 3010 instead of 1, but that's a possibility.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Recommendations for HONEE performance ==&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=VNC&amp;diff=2067</id>
		<title>VNC</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=VNC&amp;diff=2067"/>
				<updated>2024-06-12T20:05:18Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Killing a VNC Server */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Virtual Network Computing (VNC)''' is a tool which projects a GUI session over the network. It may be useful if you want to use GUI tools remotely when X forwarding performs poorly. &lt;br /&gt;
&lt;br /&gt;
VNC works on a client-server architecture, where a remote system (server) runs the programs and sends the virtual screen output to the local system (client). Here, the local system would be your laptop/desktop, while the server is located on the [[PHASTA Group Machines]].&lt;br /&gt;
&lt;br /&gt;
'''For On Ramp purposes, you only need to follow the subsections identified as steps 1 through 3: Start VNC Server, Clients, and Starting a VNC Viewer'''&lt;br /&gt;
&lt;br /&gt;
== VNC on PHASTA Machines ==&lt;br /&gt;
'''Warning: The VNC password is transmitted in clear text over the network and should not be considered secure'''&lt;br /&gt;
&lt;br /&gt;
== Server-Side Setup for PHASTA Machines ==&lt;br /&gt;
=== Start VNC Server (Step 1) ===&lt;br /&gt;
&amp;lt;code&amp;gt;portal1&amp;lt;/code&amp;gt; is designated to host VNC sessions.&lt;br /&gt;
&lt;br /&gt;
First connect to jumpgate via ssh in a terminal on your personal machine (If following the On Ramp you should already have this terminal opened). To start a VNC session on portal1 type the following in your jumpgate terminal with a return after each line:&lt;br /&gt;
 ssh portal1&lt;br /&gt;
 source /etc/profile.d/vncscript.sh&lt;br /&gt;
 start_vnc.sh&lt;br /&gt;
&lt;br /&gt;
This will set up the VNC server for your session on the PHASTA machine side of things. Read through the output from &amp;lt;code&amp;gt;start_vnc.sh&amp;lt;/code&amp;gt;, as it contains crucial information and some tips which are elaborated on in this wiki. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT: Make sure to remember the generated password and port number (&amp;quot;You should connect to 59zw on portal1&amp;quot;, &amp;quot;Your password is abcd1234ExamplePW&amp;quot;) so that you can reuse the session you just started. Hint: take a screenshot of the output of &amp;lt;code&amp;gt;start_vnc.sh&amp;lt;/code&amp;gt; and store it somewhere safe.&lt;br /&gt;
&lt;br /&gt;
It is common practice to leave your VNC session running on &amp;lt;code&amp;gt;portal1&amp;lt;/code&amp;gt;. Next time you want to access your desktop, just ssh into jumpgate with a tunnel between portal1's VNC port (59**) and some port on your machine. Then use a VNC client to connect to the port on your machine. See the Client (Step 2) section below for an explanation of this process.&lt;br /&gt;
&lt;br /&gt;
=== Killing a VNC Server ===&lt;br /&gt;
If for some reason you want to end your session and kill your virtual desktop, run&lt;br /&gt;
&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
 stop_vnc.sh&lt;br /&gt;
&lt;br /&gt;
ONLY run this if you want to kill your virtual desktop. Most users will never need to do this as the idea is to create one session and continue to use that one for all future usage.&lt;br /&gt;
&lt;br /&gt;
NOTE this will kill ''all'' your VNC sessions.&lt;br /&gt;
&lt;br /&gt;
=== Changing the VNC Password ===&lt;br /&gt;
While ssh'd into your session on &amp;lt;code&amp;gt;portal1&amp;lt;/code&amp;gt;, type the following command in your terminal:&lt;br /&gt;
  /opt/tigervnc/bin/vncpasswd&lt;br /&gt;
&lt;br /&gt;
Follow the on screen instructions.&lt;br /&gt;
&lt;br /&gt;
=== View Only Mode ===&lt;br /&gt;
&lt;br /&gt;
To share your desktop with another user in view only mode set a view only password &lt;br /&gt;
by running&lt;br /&gt;
  vncpasswd&lt;br /&gt;
&lt;br /&gt;
Have the other user connect in the same way you would but have them set their viewer to be in view only mode and use your view only password. Typically this is done as follows:&lt;br /&gt;
  vncviewer -viewonly&lt;br /&gt;
&lt;br /&gt;
=== Changing the Size (Resolution) of an Existing Session ===&lt;br /&gt;
''Note: In most modern VNC clients, this can (possibly should) be done via the locally running client. See your VNC client's documentation for details. The below is kept for posterity''&lt;br /&gt;
&lt;br /&gt;
You can usually use the &amp;lt;code&amp;gt;xrandr&amp;lt;/code&amp;gt; tool to change the resolution of a running VNC session. First you'll need to know your session's display number (this should be the last digit or two of the port number). For example, if your VNC session is running on port 5902, then your screen number should be :2. For this example, we'll use screen 2. &lt;br /&gt;
&lt;br /&gt;
Once you know your screen number, you can see the list of supported modes as follows:&lt;br /&gt;
  xrandr -display :2&lt;br /&gt;
&lt;br /&gt;
Once you pick the one you want (generally the same size or smaller than the native resolution of your client), you can choose it by running a command like&lt;br /&gt;
  xrandr -s 1400x1050 -display :2&lt;br /&gt;
&lt;br /&gt;
(this example will set the resolution to 1400 pixels by 1050 pixels)&lt;br /&gt;
&lt;br /&gt;
You'll probably be disconnected at this point, but when you reconnect your screen size should be changed (hopefully without crashing your running programs).&lt;br /&gt;
&lt;br /&gt;
=== Finding an Existing Session ===&lt;br /&gt;
SSH to portal1 and then run:&lt;br /&gt;
  /opt/vnc_script/findsession.sh&lt;br /&gt;
&lt;br /&gt;
Which will return the shortened port number of each of your currently running sessions.&lt;br /&gt;
&lt;br /&gt;
== Client-Side Setup for PHASTA Machines ==&lt;br /&gt;
=== Clients (Step 2) === &lt;br /&gt;
&lt;br /&gt;
First you need to have a VNC viewer installed on your personal machine. &amp;lt;code&amp;gt;portal1&amp;lt;/code&amp;gt; uses TurboVNC from the VirtualGL project, available from [http://www.virtualgl.org/Downloads/TurboVNC their website]&lt;br /&gt;
&lt;br /&gt;
Other VNC viewers will also work, such as [https://www.tightvnc.com/download.php TightVNC] and [https://www.realvnc.com/en/connect/download/viewer/ RealVNC]. Download and install the one you like best for your personal machine (RealVNC seems to be the go to for Mac users in the group).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The instructions given when you started the session on &amp;lt;code&amp;gt;portal1&amp;lt;/code&amp;gt; (the ones you should have taken a screenshot of in Step 1 above) are OK, but they always tell you to start a session (in a terminal on your mac or linux machine command line) with the &amp;lt;code&amp;gt;ssh -L5905:portal1:59zw jumpgate-phasta.colorado.edu&amp;lt;/code&amp;gt; command, where &amp;lt;code&amp;gt;zw&amp;lt;/code&amp;gt; will be different for each user. That suggestion is OK if you never plan to connect to anyone else's session, but since we often collaborate by sharing VNC sessions, the better practice we adopt is to use &amp;lt;code&amp;gt;zw&amp;lt;/code&amp;gt; in place of the suggested 05 (which is just an arbitrary local port on your laptop). For example, when I created a session the last four numbers of the connection were &amp;lt;code&amp;gt;5923&amp;lt;/code&amp;gt;, thus &amp;lt;code&amp;gt;zw&amp;lt;/code&amp;gt; for me is &amp;lt;code&amp;gt;23&amp;lt;/code&amp;gt;, and the best practice is to ignore the script suggestion in favor of &amp;lt;code&amp;gt;ssh -L5923:portal1:5923 jumpgate-phasta.colorado.edu&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The easiest way to set up the portal from the get-go (after you have created the VNC session on the PHASTA machine side of things and left it open, AKA Step 1 above) is to close out of the terminal on your personal machine, reopen it, and run only this command where &amp;lt;code&amp;gt;zw&amp;lt;/code&amp;gt; is your assigned port number:&lt;br /&gt;
&lt;br /&gt;
 ssh -L59zw:portal1:59zw USERNAME@jumpgate-phasta.colorado.edu&lt;br /&gt;
&lt;br /&gt;
 '''Windows 8'''&lt;br /&gt;
 - Same concept as above... close the terminal that was established with PuTTY. Start a new one using PuTTY and entering the following in the Host Name:&lt;br /&gt;
 &amp;lt;code&amp;gt;USERNAME@jumpgate-phasta.colorado.edu&amp;lt;/code&amp;gt;&lt;br /&gt;
 Again ensuring connection type is set to SSH.&lt;br /&gt;
 - Next, look to the panel on the left side of the PuTTY window, find the &amp;quot;Connection -&amp;gt; SSH -&amp;gt; Tunnels&amp;quot; menu option and click on &amp;quot;Tunnels&amp;quot;. This will &lt;br /&gt;
 open a settings page for the tunnel we are trying to establish. &lt;br /&gt;
 - In the &amp;quot;Source Port&amp;quot; field enter &amp;lt;code&amp;gt;59zw&amp;lt;/code&amp;gt;, and in the &amp;quot;Destination&amp;quot; field enter &lt;br /&gt;
 &amp;lt;code&amp;gt;portal1:59zw&amp;lt;/code&amp;gt;, again replacing &amp;lt;code&amp;gt;zw&amp;lt;/code&amp;gt; with your assigned port number. &lt;br /&gt;
 - Click the &amp;quot;Add&amp;quot; button. &lt;br /&gt;
 - Click on &amp;quot;Open&amp;quot;. You now have an established VNC connection with &amp;lt;code&amp;gt;portal1&amp;lt;/code&amp;gt; and it will ask you to enter your (jumpgate) password. &lt;br /&gt;
 - Remaining steps are the same as non-Windows 8.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You will then be prompted to enter your password (not the one that was just created for the VNC, but the one that you usually use to connect to jumpgate). After entering your password you will be ready for &amp;quot;Starting a VNC Viewer (Step 3)&amp;quot; in the next section. The reason that you must exit and reopen the command prompt is because you cannot connect to the VNC portal zw if you are already connected to jumpgate or portal1; you must connect to jumpgate and your session zw simultaneously for the VNC portal to work correctly.&lt;br /&gt;
&lt;br /&gt;
=== Starting a VNC viewer (Step 3) ===&lt;br /&gt;
&lt;br /&gt;
Whether you followed the Mac or Linux or Windows instructions above, successful completion will have established a tunnel from your laptop to &amp;lt;code&amp;gt;portal1&amp;lt;/code&amp;gt;. The last step is to start a VNC viewer (graphical windowing program) that uses this tunnel to display your &amp;lt;code&amp;gt;portal1&amp;lt;/code&amp;gt; session on the screen of your laptop. There are a variety of choices for each platform and they evolve but their operation seems, thankfully, pretty universal. &lt;br /&gt;
&lt;br /&gt;
The most basic input is that they provide a box that says &amp;quot;VNC Server&amp;quot;. You will want to type &amp;lt;code&amp;gt;localhost:zw&amp;lt;/code&amp;gt; in that box and then hit start (or whatever the particular application's action button is). The VNC viewer will then prompt you for the password that was generated for this &amp;quot;zw&amp;quot; numbered session (your version of the generated &amp;quot;abcd1234ExamplePW&amp;quot; from previous steps; note that if you changed that VNC password already, use the one you created). &lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;zw&amp;lt;/code&amp;gt; here will need to be whatever you used for the &amp;lt;code&amp;gt;-L59zw:portal1:59zw&amp;lt;/code&amp;gt; part of the tunnel. What the number is actually doing is telling this local VNC viewer on which port you have established a tunnel with the session for which you are starting the VNC viewer. Note it is possible to have several different simultaneous sessions. Each needs its own tunnel (established by repeating the process above with a unique &amp;lt;code&amp;gt;zw&amp;lt;/code&amp;gt;) and then a corresponding &amp;quot;new&amp;quot; VNC viewer. In this way we might continue to work in our own session but then take a break to look at a collaborator's session without having to close out the tunnel and viewer pair.&lt;br /&gt;
&lt;br /&gt;
Now that you've connected to the PHASTA machines, it's time to learn a bit more about using a UNIX based system (Linux). To continue with the On Ramp click [[UNIX|here]]&lt;br /&gt;
&lt;br /&gt;
=== Web Based Viewer ===&lt;br /&gt;
&lt;br /&gt;
If you can't or don't want to install a VNC viewer, you can use a Java-based one. You will need a JVM and a Java browser plugin. You will also need the port that the start_vnc script assigned you to be free on your local computer.&lt;br /&gt;
&lt;br /&gt;
Forward your session through jumpgate as before, adding a second port, 580n. For example, if the script tells you to run&lt;br /&gt;
&lt;br /&gt;
  ssh -L5905:portal1:5902 jumpgate-phasta.colorado.edu&lt;br /&gt;
&lt;br /&gt;
you should instead run&lt;br /&gt;
  ssh -L5902:portal1:5902 -L5802:portal1:5802 jumpgate-phasta.colorado.edu&lt;br /&gt;
Then point your browser to http://localhost:5802 and log in with the password specified by the script when prompted. (Replace 2 with the value specified by the script)&lt;br /&gt;
&lt;br /&gt;
== OpenGL == &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;portal1&amp;lt;/code&amp;gt; is equipped with a VirtualGL install which will allow you to use OpenGL programs (which do not use pthreads)&lt;br /&gt;
&lt;br /&gt;
Simply wrap your OpenGL program with the &amp;lt;code&amp;gt;vglrun&amp;lt;/code&amp;gt; command&lt;br /&gt;
  vglrun glxgears&lt;br /&gt;
&lt;br /&gt;
Our lab has 2 VirtualGL servers you can connect to from &amp;lt;code&amp;gt;portal1&amp;lt;/code&amp;gt;. You must connect to one of them for large memory and/or computationally intensive processes.&lt;br /&gt;
The names of the servers are &amp;lt;code&amp;gt;viz002&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;viz003&amp;lt;/code&amp;gt; (&amp;lt;code&amp;gt;viz001&amp;lt;/code&amp;gt; is probably never coming back to life). &lt;br /&gt;
&amp;lt;code&amp;gt;portal1&amp;lt;/code&amp;gt; doesn't have a particularly fast graphics processor and MUST NOT be used for large-memory or computationally intensive processes.&lt;br /&gt;
&lt;br /&gt;
  vglconnect -s viz002&lt;br /&gt;
or&lt;br /&gt;
  vglconnect -s viz003&lt;br /&gt;
&lt;br /&gt;
from this connection you will want to run graphic applications (e.g., SimModeler or ParaView) prefaced by the command &amp;lt;code&amp;gt;vglrun&amp;lt;/code&amp;gt;.  You can test that you have it setup right&lt;br /&gt;
with the toy-app &amp;lt;code&amp;gt;glxgears&amp;lt;/code&amp;gt; as follows&lt;br /&gt;
&lt;br /&gt;
  vglrun glxgears&lt;br /&gt;
&lt;br /&gt;
Note that VGL uses a number of threads. If you have trouble with &amp;lt;code&amp;gt;vglrun&amp;lt;/code&amp;gt; crashing with a message about &amp;lt;code&amp;gt;Thread::Start()&amp;lt;/code&amp;gt; make sure you haven't set your stack size too large (remove any &amp;lt;code&amp;gt;ulimit -s&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;ulimit -n&amp;lt;/code&amp;gt; calls from your shell start scripts).&lt;br /&gt;
&lt;br /&gt;
NOTE ALSO:  The primary purpose for &amp;lt;code&amp;gt;viz00x&amp;lt;/code&amp;gt; is for visualization and for debugging.  Production runs should be done elsewhere.&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting == &lt;br /&gt;
&lt;br /&gt;
If you have used vncserver (it doesn't matter which version) before, you will need to clear your VNC settings for the script to work. You can do this by running &amp;lt;code&amp;gt;rm -rf ~/.vnc&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
stop_vnc.sh may display some errors; this is normal.&lt;br /&gt;
&lt;br /&gt;
If you have trouble deleting ~/.vnc send an email to Benjamin.A.Matthews@colorado.edu&lt;br /&gt;
&lt;br /&gt;
If any of these commands fail, you may need to &amp;lt;code&amp;gt;source /etc/profile&amp;lt;/code&amp;gt; to get the necessary environment variables (this should be fixed soon)&lt;br /&gt;
&lt;br /&gt;
VirtualGL has trouble with some threaded programs. If your OpenGL program exhibits segmentation faults or other issues, this could be the problem. Check back for the solution later. &lt;br /&gt;
&lt;br /&gt;
If the given password is rejected you can run stop_vnc.sh and restart to get a new one. Occasionally the random password generator may generate passwords which VNC doesn't like.&lt;br /&gt;
&lt;br /&gt;
If VirtualGL complains about not being able to get a 24bit FB config either vglconnect to another VirtualGL enabled server or complain to Benjamin.A.Matthews@Colorado.edu&lt;br /&gt;
&lt;br /&gt;
If your VNC connection is very slow, you might want to try changing the compression and encoding options. See your vncviewer's documentation or try this&lt;br /&gt;
  vncviewer -encodings tight -quality 6 -compresslevel 6&lt;br /&gt;
If you have trouble with text distortion try adding &lt;br /&gt;
  -nojpeg&lt;br /&gt;
&lt;br /&gt;
If you're running OSX and see an error about Zlib, try changing your compression settings (maximum quality usually works) or use a different client. RealVNC and certain versions of ChickenOfTheVNC both exhibit this issue. The latest build of TigerVNC should work reliably, as does the Java based TightVNC client.&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2066</id>
		<title>FDSI Summer Program/HONEE</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2066"/>
				<updated>2024-06-05T23:05:14Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Information for tutorial in HONEE, to be updated as needed.''&lt;br /&gt;
&lt;br /&gt;
'''HONEE''', or the '''High-Order Navier-stokes Equation Evaluator''', is a CFD program that combines [https://libceed.org/en/latest/ libCEED] and [https://petsc.org/release/ PETSc].&lt;br /&gt;
&lt;br /&gt;
== Building HONEE for Cisco nodes ==&lt;br /&gt;
&lt;br /&gt;
First, start a job on the Cisco nodes. All the below commands should be run within a terminal on a Cisco node.&lt;br /&gt;
&lt;br /&gt;
=== Setup Environment ===&lt;br /&gt;
 source /projects/tools/Spackv0.23/share/spack/setup-env.sh&lt;br /&gt;
 spack load gcc@12.3&lt;br /&gt;
 module load openmpi&lt;br /&gt;
&lt;br /&gt;
=== Create new directory ===&lt;br /&gt;
 mkdir honee_build&lt;br /&gt;
 cd honee_build&lt;br /&gt;
&lt;br /&gt;
=== Build PETSc===&lt;br /&gt;
Ordinarily, I'd recommend simply doing a &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;, e.g. &amp;lt;code&amp;gt;git clone https://gitlab.com/petsc/petsc.git&amp;lt;/code&amp;gt;.&lt;br /&gt;
However, the &amp;lt;code&amp;gt;/nobackup/&amp;lt;/code&amp;gt; server is quite slow, so this process takes around 20 minutes (normally about a minute on my laptop); downloading and extracting a tarball may be quicker.&lt;br /&gt;
I may cover that later, but for now I'll just cover the &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt; route.&lt;br /&gt;
&lt;br /&gt;
 git clone https://gitlab.com/petsc/petsc.git&lt;br /&gt;
 cd petsc&lt;br /&gt;
 cp /nobackup/uncompressed/jrwrigh/HONEE_Setup/petsc/reconfigure.py .&lt;br /&gt;
 ./reconfigure.py&lt;br /&gt;
 make&lt;br /&gt;
 export PETSC_DIR=$(pwd) PETSC_ARCH=arch-32&lt;br /&gt;
 cd ..&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;reconfigure.py&amp;lt;/code&amp;gt; file has the following:&lt;br /&gt;
 #!/bin/python3&lt;br /&gt;
 if __name__ == '__main__':&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   sys.path.insert(0, os.path.abspath('config'))&lt;br /&gt;
   import configure&lt;br /&gt;
   configure_options = [&lt;br /&gt;
     '--with-64-bit-indices=0',&lt;br /&gt;
     '--download-hdf5',&lt;br /&gt;
     '--download-cgns',&lt;br /&gt;
     '--download-ctetgen=1',&lt;br /&gt;
     '--download-parmetis=1',&lt;br /&gt;
     '--download-metis=1',&lt;br /&gt;
     '--download-ptscotch=1',&lt;br /&gt;
     '--with-debugging=0',&lt;br /&gt;
     '--with-fortran-bindings=0',&lt;br /&gt;
     '--with-fc=0',&lt;br /&gt;
     'PETSC_ARCH=arch-32',&lt;br /&gt;
     'COPTFLAGS=-g -O3',&lt;br /&gt;
     'CXXOPTFLAGS=-g -O3',&lt;br /&gt;
   ]&lt;br /&gt;
   configure.petsc_configure(configure_options)&lt;br /&gt;
&lt;br /&gt;
=== Building HONEE ===&lt;br /&gt;
Ensure that &amp;lt;code&amp;gt;PETSC_DIR&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PETSC_ARCH&amp;lt;/code&amp;gt; are set to the desired PETSc installation.&lt;br /&gt;
&lt;br /&gt;
 git clone https://github.com/CEED/libCEED.git&lt;br /&gt;
 cd libCEED&lt;br /&gt;
 make build/fluids-navierstokes -j&lt;br /&gt;
&lt;br /&gt;
The executable &amp;lt;code&amp;gt;build/fluids-navierstokes&amp;lt;/code&amp;gt; is then built and ready for running.&lt;br /&gt;
&lt;br /&gt;
== Changing HONEE Inputs ==&lt;br /&gt;
HONEE uses PETSc's input argument system for handling inputs.&lt;br /&gt;
They can be provided either via command-line flags or inside a YAML file.&lt;br /&gt;
So &lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -ts_dt 1e-3&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml&lt;br /&gt;
&lt;br /&gt;
if &amp;lt;code&amp;gt;test.yaml&amp;lt;/code&amp;gt; has this in it:&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
&lt;br /&gt;
Note also that PETSc allows hierarchical flags within a YAML file.&lt;br /&gt;
So instead of writing&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
 ts_type: alpha&lt;br /&gt;
 ts_max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
You can write:&lt;br /&gt;
&lt;br /&gt;
 ts:&lt;br /&gt;
   dt: 1e-3&lt;br /&gt;
   type: alpha&lt;br /&gt;
   max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
=== Documentation for Specific Flags ===&lt;br /&gt;
* [https://libceed.org/en/latest/examples/fluids/ HONEE specific flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/TS/TSSetFromOptions/ Time Stepping (TS) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/SNES/SNESSetFromOptions/ Non-linear solver (SNES) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/KSP/KSPSetFromOptions/ Linear solver (KSP) flags]&lt;br /&gt;
&lt;br /&gt;
=== Notable Input Flags ===&lt;br /&gt;
* &amp;lt;code&amp;gt;-ts_monitor_solution&amp;lt;/code&amp;gt;: This will save the results of a simulation to a file. Example: &amp;lt;code&amp;gt;-ts_monitor_solution cgns:flow_visualization.cgns&amp;lt;/code&amp;gt; will save the results to a file called &amp;lt;code&amp;gt;flow_visualization.cgns&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Restarting Simulations ===&lt;br /&gt;
To restart a simulation from a previous simulation's results, those results need to be saved to a &amp;lt;code&amp;gt;*.bin&amp;lt;/code&amp;gt; file.&lt;br /&gt;
This is done using the &amp;lt;code&amp;gt;-continue&amp;lt;/code&amp;gt; flag, which will load the file set by &amp;lt;code&amp;gt;-continue_filename&amp;lt;/code&amp;gt;.&lt;br /&gt;
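&lt;br /&gt;
Putting the two flags together, a restart invocation might look like this sketch (&amp;lt;code&amp;gt;previous_result.bin&amp;lt;/code&amp;gt; is a hypothetical filename):&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml -continue -continue_filename previous_result.bin&lt;br /&gt;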
&lt;br /&gt;
Note that restarting from a binary file can only be done if the binary file was written by a simulation running at the same part count.&lt;br /&gt;
&lt;br /&gt;
== HONEE and Gmsh ==&lt;br /&gt;
&lt;br /&gt;
=== GMSH/Grid requirements: ===&lt;br /&gt;
# The grid must be hexahedron-based (i.e. deformed cubes). HONEE can run with tetrahedral grids, but that requires some minor code modifications which we've done before (IIRC, it's a single line of code to change). If anyone is interested in this, let me know and we can work on it.&lt;br /&gt;
#* Note that HONEE cannot handle mixed meshes, i.e. meshes containing both hexahedra and tetrahedra. This is not a minor code change and is probably out of scope for this Summer Program.&lt;br /&gt;
# In the geo file, you need to specify &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt;s to identify the domain boundaries to apply boundary conditions to.&lt;br /&gt;
# In the geo file, you need to specify a single &amp;lt;code&amp;gt;Physical Volume&amp;lt;/code&amp;gt; that contains the entire domain. So if you split the domain into multiple pieces for meshing purposes (as is done with the vortex shedding grids), then the &amp;lt;code&amp;gt;Physical Volume&amp;lt;/code&amp;gt; needs to identify all the pieces.&lt;br /&gt;
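&lt;br /&gt;
As a minimal sketch of those two requirements (the entity numbers here are hypothetical, not from a real grid), the declarations in the geo file look like:&lt;br /&gt;
&lt;br /&gt;
 Physical Surface(&amp;quot;inlet&amp;quot;) = {1};&lt;br /&gt;
 Physical Volume(&amp;quot;domain&amp;quot;) = {1, 2, 3}; // all volume pieces belong to the single Physical Volume&lt;br /&gt;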
&lt;br /&gt;
=== YAML Setup: ===&lt;br /&gt;
In the YAML file, you need to associate each &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt; with a boundary condition. I describe this briefly in the video, but each &amp;lt;code&amp;gt;Physical Surface&amp;lt;/code&amp;gt; has an ID number associated with it, and it's that ID number you associate with a BC choice in the YAML file.&lt;br /&gt;
For example, in the &amp;lt;code&amp;gt;vortexshedding.yaml&amp;lt;/code&amp;gt; file there is:&lt;br /&gt;
 # Boundary Settings&lt;br /&gt;
 bc_slip_z: 6&lt;br /&gt;
 bc_wall: 5&lt;br /&gt;
 bc_freestream: 1&lt;br /&gt;
 bc_outflow: 2&lt;br /&gt;
 bc_slip_y: 3,4&lt;br /&gt;
&lt;br /&gt;
and the &amp;lt;code&amp;gt;cylinder.geo&amp;lt;/code&amp;gt; has:&lt;br /&gt;
&lt;br /&gt;
 Physical Surface(&amp;quot;inlet&amp;quot;) = {102};&lt;br /&gt;
 Physical Surface(&amp;quot;outlet&amp;quot;) = {116};&lt;br /&gt;
 Physical Surface(&amp;quot;top&amp;quot;) = {80, 120};&lt;br /&gt;
 Physical Surface(&amp;quot;bottom&amp;quot;) = {36, 112};&lt;br /&gt;
 Physical Surface(&amp;quot;cylinderwalls&amp;quot;) = {94, 28, 50, 72};&lt;br /&gt;
 Physical Surface(&amp;quot;frontandback&amp;quot;) = {37, 1, 4, 103, 3, 81, 2, 59, 5, 125};&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;inlet&amp;quot; face is the first identified boundary, so it gets the ID number 1.&lt;br /&gt;
In the YAML file, you see &amp;lt;code&amp;gt;bc_freestream: 1&amp;lt;/code&amp;gt;, which sets this inlet boundary to be controlled by a freestream BC.&lt;br /&gt;
The same goes for the &amp;quot;outlet&amp;quot; face; it's the second specified boundary in the geo file, so we set &amp;lt;code&amp;gt;bc_outflow: 2&amp;lt;/code&amp;gt;.&lt;br /&gt;
And so on with the other faces.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' When using the GMSH GUI to identify the Physical Surfaces, it will actually put something like this in the geo file:&lt;br /&gt;
 Physical Surface(&amp;quot;inlet&amp;quot;, 3010) = {3005};&lt;br /&gt;
 Physical Surface(&amp;quot;outlet&amp;quot;, 3011) = {3003};&lt;br /&gt;
 Physical Surface(&amp;quot;top&amp;quot;, 3012) = {3004};&lt;br /&gt;
 Physical Surface(&amp;quot;bottom&amp;quot;, 3013) = {3002};&lt;br /&gt;
 Physical Surface(&amp;quot;nacawalls&amp;quot;, 3014) = {3008, 3006, 3007};&lt;br /&gt;
 Physical Surface(&amp;quot;frontandback&amp;quot;, 3015) = {3009, 3001};&lt;br /&gt;
&lt;br /&gt;
Note the extra &amp;lt;code&amp;gt;3010&amp;lt;/code&amp;gt; in the &amp;quot;inlet&amp;quot; definition. I'm not sure if this means the correct ID number for it should be 3010 instead of 1, but that's a possibility.&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2065</id>
		<title>FDSI Summer Program/HONEE</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2065"/>
				<updated>2024-06-05T22:58:44Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Restarting Simulations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Information for tutorial in HONEE, to be updated as needed.''&lt;br /&gt;
&lt;br /&gt;
'''HONEE''', or the '''High-Order Navier-stokes Equation Evaluator''', is a CFD program that combines [https://libceed.org/en/latest/ libCEED] and [https://petsc.org/release/ PETSc].&lt;br /&gt;
&lt;br /&gt;
== Building HONEE for Cisco nodes ==&lt;br /&gt;
&lt;br /&gt;
First, start a job on the Cisco nodes. All the below commands should be run within a terminal on a Cisco node.&lt;br /&gt;
&lt;br /&gt;
=== Setup Environment ===&lt;br /&gt;
 source /projects/tools/Spackv0.23/share/spack/setup-env.sh&lt;br /&gt;
 spack load gcc@12.3&lt;br /&gt;
 module load openmpi&lt;br /&gt;
&lt;br /&gt;
=== Create new directory ===&lt;br /&gt;
 mkdir honee_build&lt;br /&gt;
 cd honee_build&lt;br /&gt;
&lt;br /&gt;
=== Build PETSc===&lt;br /&gt;
Ordinarily, I'd simply recommend doing a &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;, e.g. &amp;lt;code&amp;gt;git clone https://gitlab.com/petsc/petsc.git&amp;lt;/code&amp;gt;.&lt;br /&gt;
However, the &amp;lt;code&amp;gt;/nobackup/&amp;lt;/code&amp;gt; server is quite slow and cloning takes around 20 minutes (normally about a minute on my laptop), so downloading and extracting a tarball may be quicker.&lt;br /&gt;
I may cover that later, but for now I'll just use &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
 git clone https://gitlab.com/petsc/petsc.git&lt;br /&gt;
 cd petsc&lt;br /&gt;
 cp /nobackup/uncompressed/jrwrigh/HONEE_Setup/petsc/reconfigure.py .&lt;br /&gt;
 ./reconfigure.py&lt;br /&gt;
 make&lt;br /&gt;
 export PETSC_DIR=$(pwd) PETSC_ARCH=arch-32&lt;br /&gt;
 cd ..&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;reconfigure.py&amp;lt;/code&amp;gt; file has the following:&lt;br /&gt;
 #!/bin/python3&lt;br /&gt;
 if __name__ == '__main__':&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   sys.path.insert(0, os.path.abspath('config'))&lt;br /&gt;
   import configure&lt;br /&gt;
   configure_options = [&lt;br /&gt;
     '--with-64-bit-indices=0',&lt;br /&gt;
     '--download-hdf5',&lt;br /&gt;
     '--download-cgns',&lt;br /&gt;
     '--download-ctetgen=1',&lt;br /&gt;
     '--download-parmetis=1',&lt;br /&gt;
     '--download-metis=1',&lt;br /&gt;
     '--download-ptscotch=1',&lt;br /&gt;
     '--with-debugging=0',&lt;br /&gt;
     '--with-fortran-bindings=0',&lt;br /&gt;
     '--with-fc=0',&lt;br /&gt;
     'PETSC_ARCH=arch-32',&lt;br /&gt;
     'COPTFLAGS=-g -O3',&lt;br /&gt;
     'CXXOPTFLAGS=-g -O3',&lt;br /&gt;
   ]&lt;br /&gt;
   configure.petsc_configure(configure_options)&lt;br /&gt;
&lt;br /&gt;
=== Building HONEE ===&lt;br /&gt;
Ensure that &amp;lt;code&amp;gt;PETSC_DIR&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PETSC_ARCH&amp;lt;/code&amp;gt; are set to the desired PETSc installation.&lt;br /&gt;
&lt;br /&gt;
 git clone https://github.com/CEED/libCEED.git&lt;br /&gt;
 cd libCEED&lt;br /&gt;
 make build/fluids-navierstokes -j&lt;br /&gt;
&lt;br /&gt;
The executable &amp;lt;code&amp;gt;build/fluids-navierstokes&amp;lt;/code&amp;gt; is then built and ready for running.&lt;br /&gt;
&lt;br /&gt;
== Changing HONEE Inputs ==&lt;br /&gt;
HONEE uses PETSc's input argument system for handling inputs.&lt;br /&gt;
They can be provided either via command-line flags or inside a YAML file.&lt;br /&gt;
So &lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -ts_dt 1e-3&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml&lt;br /&gt;
&lt;br /&gt;
if &amp;lt;code&amp;gt;test.yaml&amp;lt;/code&amp;gt; has this in it:&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
&lt;br /&gt;
Note also that PETSc allows hierarchical flags within a YAML file.&lt;br /&gt;
So instead of writing&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
 ts_type: alpha&lt;br /&gt;
 ts_max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
You can write:&lt;br /&gt;
&lt;br /&gt;
 ts:&lt;br /&gt;
   dt: 1e-3&lt;br /&gt;
   type: alpha&lt;br /&gt;
   max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
=== Documentation for Specific Flags ===&lt;br /&gt;
* [https://libceed.org/en/latest/examples/fluids/ HONEE specific flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/TS/TSSetFromOptions/ Time Stepping (TS) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/SNES/SNESSetFromOptions/ Non-linear solver (SNES) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/KSP/KSPSetFromOptions/ Linear solver (KSP) flags]&lt;br /&gt;
&lt;br /&gt;
=== Notable Input Flags ===&lt;br /&gt;
* &amp;lt;code&amp;gt;-ts_monitor_solution&amp;lt;/code&amp;gt;: This will save the results of a simulation to a file. Example: &amp;lt;code&amp;gt;-ts_monitor_solution cgns:flow_visualization.cgns&amp;lt;/code&amp;gt; will save the results to a file called &amp;lt;code&amp;gt;flow_visualization.cgns&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Restarting Simulations ===&lt;br /&gt;
To restart a simulation from a previous simulation's results, those results need to be saved to a &amp;lt;code&amp;gt;*.bin&amp;lt;/code&amp;gt; file.&lt;br /&gt;
This is done using the &amp;lt;code&amp;gt;-continue&amp;lt;/code&amp;gt; flag, which will load the file set by &amp;lt;code&amp;gt;-continue_filename&amp;lt;/code&amp;gt;.&lt;br /&gt;
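&lt;br /&gt;
Putting the two flags together, a restart invocation might look like this sketch (&amp;lt;code&amp;gt;previous_result.bin&amp;lt;/code&amp;gt; is a hypothetical filename):&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml -continue -continue_filename previous_result.bin&lt;br /&gt;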
&lt;br /&gt;
Note that restarting from a binary file can only be done if the binary file was written by a simulation running at the same part count.&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2064</id>
		<title>FDSI Summer Program/HONEE</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2064"/>
				<updated>2024-06-04T20:47:24Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Information for tutorial in HONEE, to be updated as needed.''&lt;br /&gt;
&lt;br /&gt;
'''HONEE''', or the '''High-Order Navier-stokes Equation Evaluator''', is a CFD program that combines [https://libceed.org/en/latest/ libCEED] and [https://petsc.org/release/ PETSc].&lt;br /&gt;
&lt;br /&gt;
== Building HONEE for Cisco nodes ==&lt;br /&gt;
&lt;br /&gt;
First, start a job on the Cisco nodes. All the below commands should be run within a terminal on a Cisco node.&lt;br /&gt;
&lt;br /&gt;
=== Setup Environment ===&lt;br /&gt;
 source /projects/tools/Spackv0.23/share/spack/setup-env.sh&lt;br /&gt;
 spack load gcc@12.3&lt;br /&gt;
 module load openmpi&lt;br /&gt;
&lt;br /&gt;
=== Create new directory ===&lt;br /&gt;
 mkdir honee_build&lt;br /&gt;
 cd honee_build&lt;br /&gt;
&lt;br /&gt;
=== Build PETSc===&lt;br /&gt;
Ordinarily, I'd simply recommend doing a &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;, e.g. &amp;lt;code&amp;gt;git clone https://gitlab.com/petsc/petsc.git&amp;lt;/code&amp;gt;.&lt;br /&gt;
However, the &amp;lt;code&amp;gt;/nobackup/&amp;lt;/code&amp;gt; server is quite slow and cloning takes around 20 minutes (normally about a minute on my laptop), so downloading and extracting a tarball may be quicker.&lt;br /&gt;
I may cover that later, but for now I'll just use &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
 git clone https://gitlab.com/petsc/petsc.git&lt;br /&gt;
 cd petsc&lt;br /&gt;
 cp /nobackup/uncompressed/jrwrigh/HONEE_Setup/petsc/reconfigure.py .&lt;br /&gt;
 ./reconfigure.py&lt;br /&gt;
 make&lt;br /&gt;
 export PETSC_DIR=$(pwd) PETSC_ARCH=arch-32&lt;br /&gt;
 cd ..&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;reconfigure.py&amp;lt;/code&amp;gt; file has the following:&lt;br /&gt;
 #!/bin/python3&lt;br /&gt;
 if __name__ == '__main__':&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   sys.path.insert(0, os.path.abspath('config'))&lt;br /&gt;
   import configure&lt;br /&gt;
   configure_options = [&lt;br /&gt;
     '--with-64-bit-indices=0',&lt;br /&gt;
     '--download-hdf5',&lt;br /&gt;
     '--download-cgns',&lt;br /&gt;
     '--download-ctetgen=1',&lt;br /&gt;
     '--download-parmetis=1',&lt;br /&gt;
     '--download-metis=1',&lt;br /&gt;
     '--download-ptscotch=1',&lt;br /&gt;
     '--with-debugging=0',&lt;br /&gt;
     '--with-fortran-bindings=0',&lt;br /&gt;
     '--with-fc=0',&lt;br /&gt;
     'PETSC_ARCH=arch-32',&lt;br /&gt;
     'COPTFLAGS=-g -O3',&lt;br /&gt;
     'CXXOPTFLAGS=-g -O3',&lt;br /&gt;
   ]&lt;br /&gt;
   configure.petsc_configure(configure_options)&lt;br /&gt;
&lt;br /&gt;
=== Building HONEE ===&lt;br /&gt;
Ensure that &amp;lt;code&amp;gt;PETSC_DIR&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PETSC_ARCH&amp;lt;/code&amp;gt; are set to the desired PETSc installation.&lt;br /&gt;
&lt;br /&gt;
 git clone https://github.com/CEED/libCEED.git&lt;br /&gt;
 cd libCEED&lt;br /&gt;
 make build/fluids-navierstokes -j&lt;br /&gt;
&lt;br /&gt;
The executable &amp;lt;code&amp;gt;build/fluids-navierstokes&amp;lt;/code&amp;gt; is then built and ready for running.&lt;br /&gt;
&lt;br /&gt;
== Changing HONEE Inputs ==&lt;br /&gt;
HONEE uses PETSc's input argument system for handling inputs.&lt;br /&gt;
They can be provided either via command-line flags or inside a YAML file.&lt;br /&gt;
So &lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -ts_dt 1e-3&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml&lt;br /&gt;
&lt;br /&gt;
if &amp;lt;code&amp;gt;test.yaml&amp;lt;/code&amp;gt; has this in it:&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
&lt;br /&gt;
Note also that PETSc allows hierarchical flags within a YAML file.&lt;br /&gt;
So instead of writing&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
 ts_type: alpha&lt;br /&gt;
 ts_max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
You can write:&lt;br /&gt;
&lt;br /&gt;
 ts:&lt;br /&gt;
   dt: 1e-3&lt;br /&gt;
   type: alpha&lt;br /&gt;
   max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
=== Documentation for Specific Flags ===&lt;br /&gt;
* [https://libceed.org/en/latest/examples/fluids/ HONEE specific flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/TS/TSSetFromOptions/ Time Stepping (TS) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/SNES/SNESSetFromOptions/ Non-linear solver (SNES) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/KSP/KSPSetFromOptions/ Linear solver (KSP) flags]&lt;br /&gt;
&lt;br /&gt;
=== Notable Input Flags ===&lt;br /&gt;
* &amp;lt;code&amp;gt;-ts_monitor_solution&amp;lt;/code&amp;gt;: This will save the results of a simulation to a file. Example: &amp;lt;code&amp;gt;-ts_monitor_solution cgns:flow_visualization.cgns&amp;lt;/code&amp;gt; will save the results to a file called &amp;lt;code&amp;gt;flow_visualization.cgns&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Restarting Simulations ===&lt;br /&gt;
To restart a simulation from a previous simulation's results, those results need to be saved to a &amp;lt;code&amp;gt;*.bin&amp;lt;/code&amp;gt; file.&lt;br /&gt;
This is done using the &amp;lt;code&amp;gt;-continue&amp;lt;/code&amp;gt; flag, which will load the file set by &amp;lt;code&amp;gt;-continue_filename&amp;lt;/code&amp;gt;.&lt;br /&gt;
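&lt;br /&gt;
Putting the two flags together, a restart invocation might look like this sketch (&amp;lt;code&amp;gt;previous_result.bin&amp;lt;/code&amp;gt; is a hypothetical filename):&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml -continue -continue_filename previous_result.bin&lt;br /&gt;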
&lt;br /&gt;
Note that restarting from a binary file can only be done if the binary file was written by a simulation running at the same part count.&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2063</id>
		<title>FDSI Summer Program/HONEE</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2063"/>
				<updated>2024-06-03T16:31:20Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Information for tutorial in HONEE, to be updated as needed.''&lt;br /&gt;
&lt;br /&gt;
'''HONEE''', or the '''High-Order Navier-stokes Equation Evaluator''', is a CFD program that combines [https://libceed.org/en/latest/ libCEED] and [https://petsc.org/release/ PETSc].&lt;br /&gt;
&lt;br /&gt;
== Building HONEE for Cisco nodes ==&lt;br /&gt;
&lt;br /&gt;
First, start a job on the Cisco nodes. All the below commands should be run within a terminal on a Cisco node.&lt;br /&gt;
&lt;br /&gt;
=== Setup Environment ===&lt;br /&gt;
 source /projects/tools/Spackv0.23/share/spack/setup-env.sh&lt;br /&gt;
 spack load gcc@12.3&lt;br /&gt;
 module load openmpi&lt;br /&gt;
&lt;br /&gt;
=== Create new directory ===&lt;br /&gt;
 mkdir honee_build&lt;br /&gt;
 cd honee_build&lt;br /&gt;
&lt;br /&gt;
=== Build PETSc===&lt;br /&gt;
Ordinarily, I'd simply recommend doing a &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;, e.g. &amp;lt;code&amp;gt;git clone https://gitlab.com/petsc/petsc.git&amp;lt;/code&amp;gt;.&lt;br /&gt;
However, the &amp;lt;code&amp;gt;/nobackup/&amp;lt;/code&amp;gt; server is quite slow and cloning takes around 20 minutes (normally about a minute on my laptop), so downloading and extracting a tarball may be quicker.&lt;br /&gt;
I may cover that later, but for now I'll just use &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
 git clone https://gitlab.com/petsc/petsc.git&lt;br /&gt;
 cd petsc&lt;br /&gt;
 cp /nobackup/uncompressed/jrwrigh/HONEE_Setup/petsc/reconfigure.py .&lt;br /&gt;
 ./reconfigure.py&lt;br /&gt;
 make&lt;br /&gt;
 export PETSC_DIR=$(pwd) PETSC_ARCH=arch-32&lt;br /&gt;
 cd ..&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;reconfigure.py&amp;lt;/code&amp;gt; file has the following:&lt;br /&gt;
 #!/bin/python3&lt;br /&gt;
 if __name__ == '__main__':&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   sys.path.insert(0, os.path.abspath('config'))&lt;br /&gt;
   import configure&lt;br /&gt;
   configure_options = [&lt;br /&gt;
     '--with-64-bit-indices=0',&lt;br /&gt;
     '--download-hdf5',&lt;br /&gt;
     '--download-cgns',&lt;br /&gt;
     '--download-ctetgen=1',&lt;br /&gt;
     '--download-parmetis=1',&lt;br /&gt;
     '--download-metis=1',&lt;br /&gt;
     '--download-ptscotch=1',&lt;br /&gt;
     '--with-debugging=0',&lt;br /&gt;
     '--with-fortran-bindings=0',&lt;br /&gt;
     '--with-fc=0',&lt;br /&gt;
     'PETSC_ARCH=arch-32',&lt;br /&gt;
     'COPTFLAGS=-g -O3',&lt;br /&gt;
     'CXXOPTFLAGS=-g -O3',&lt;br /&gt;
   ]&lt;br /&gt;
   configure.petsc_configure(configure_options)&lt;br /&gt;
&lt;br /&gt;
=== Building HONEE ===&lt;br /&gt;
Ensure that &amp;lt;code&amp;gt;PETSC_DIR&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PETSC_ARCH&amp;lt;/code&amp;gt; are set to the desired PETSc installation.&lt;br /&gt;
&lt;br /&gt;
 git clone https://github.com/CEED/libCEED.git&lt;br /&gt;
 cd libCEED&lt;br /&gt;
 make build/fluids-navierstokes -j&lt;br /&gt;
&lt;br /&gt;
The executable &amp;lt;code&amp;gt;build/fluids-navierstokes&amp;lt;/code&amp;gt; is then built and ready for running.&lt;br /&gt;
&lt;br /&gt;
== Changing HONEE Inputs ==&lt;br /&gt;
HONEE uses PETSc's input argument system for handling inputs.&lt;br /&gt;
They can be provided either via command-line flags or inside a YAML file.&lt;br /&gt;
So &lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -ts_dt 1e-3&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml&lt;br /&gt;
&lt;br /&gt;
if &amp;lt;code&amp;gt;test.yaml&amp;lt;/code&amp;gt; has this in it:&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
&lt;br /&gt;
Note also that PETSc allows hierarchical flags within a YAML file.&lt;br /&gt;
So instead of writing&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
 ts_type: alpha&lt;br /&gt;
 ts_max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
You can write:&lt;br /&gt;
&lt;br /&gt;
 ts:&lt;br /&gt;
   dt: 1e-3&lt;br /&gt;
   type: alpha&lt;br /&gt;
   max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
=== Documentation for Specific Flags ===&lt;br /&gt;
* [https://libceed.org/en/latest/examples/fluids/ HONEE specific flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/TS/TSSetFromOptions/ Time Stepping (TS) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/SNES/SNESSetFromOptions/ Non-linear solver (SNES) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/KSP/KSPSetFromOptions/ Linear solver (KSP) flags]&lt;br /&gt;
&lt;br /&gt;
=== Notable Input Flags ===&lt;br /&gt;
* &amp;lt;code&amp;gt;-ts_monitor_solution&amp;lt;/code&amp;gt;: This will save the results of a simulation to a file. Example: &amp;lt;code&amp;gt;-ts_monitor_solution cgns:flow_visualization.cgns&amp;lt;/code&amp;gt; will save the results to a file called &amp;lt;code&amp;gt;flow_visualization.cgns&amp;lt;/code&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2062</id>
		<title>FDSI Summer Program/HONEE</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2062"/>
				<updated>2024-06-03T16:28:52Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Building HONEE */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Information for tutorial in HONEE, to be updated as needed.''&lt;br /&gt;
&lt;br /&gt;
'''HONEE''', or the '''High-Order Navier-stokes Equation Evaluator''', is a CFD program that combines [https://libceed.org/en/latest/ libCEED] and [https://petsc.org/release/ PETSc].&lt;br /&gt;
&lt;br /&gt;
== Building HONEE for Cisco nodes ==&lt;br /&gt;
&lt;br /&gt;
First, start a job on the Cisco nodes. All the below commands should be run within a terminal on a Cisco node.&lt;br /&gt;
&lt;br /&gt;
=== Setup Environment ===&lt;br /&gt;
 source /projects/tools/Spackv0.23/share/spack/setup-env.sh&lt;br /&gt;
 spack load gcc@12.3&lt;br /&gt;
 module load openmpi&lt;br /&gt;
&lt;br /&gt;
=== Create new directory ===&lt;br /&gt;
 mkdir honee_build&lt;br /&gt;
 cd honee_build&lt;br /&gt;
&lt;br /&gt;
=== Build PETSc===&lt;br /&gt;
Ordinarily, I'd simply recommend doing a &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;, e.g. &amp;lt;code&amp;gt;git clone https://gitlab.com/petsc/petsc.git&amp;lt;/code&amp;gt;.&lt;br /&gt;
However, the &amp;lt;code&amp;gt;/nobackup/&amp;lt;/code&amp;gt; server is quite slow and cloning takes around 20 minutes (normally about a minute on my laptop), so downloading and extracting a tarball may be quicker.&lt;br /&gt;
I may cover that later, but for now I'll just use &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
 git clone https://gitlab.com/petsc/petsc.git&lt;br /&gt;
 cd petsc&lt;br /&gt;
 cp /nobackup/uncompressed/jrwrigh/HONEE_Setup/petsc/reconfigure.py .&lt;br /&gt;
 ./reconfigure.py&lt;br /&gt;
 make&lt;br /&gt;
 export PETSC_DIR=$(pwd) PETSC_ARCH=arch-32&lt;br /&gt;
 cd ..&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;reconfigure.py&amp;lt;/code&amp;gt; file has the following:&lt;br /&gt;
 #!/bin/python3&lt;br /&gt;
 if __name__ == '__main__':&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   sys.path.insert(0, os.path.abspath('config'))&lt;br /&gt;
   import configure&lt;br /&gt;
   configure_options = [&lt;br /&gt;
     '--with-64-bit-indices=0',&lt;br /&gt;
     '--download-hdf5',&lt;br /&gt;
     '--download-cgns',&lt;br /&gt;
     '--download-ctetgen=1',&lt;br /&gt;
     '--download-parmetis=1',&lt;br /&gt;
     '--download-metis=1',&lt;br /&gt;
     '--download-ptscotch=1',&lt;br /&gt;
     '--with-debugging=0',&lt;br /&gt;
     '--with-fortran-bindings=0',&lt;br /&gt;
     '--with-fc=0',&lt;br /&gt;
     'PETSC_ARCH=arch-32',&lt;br /&gt;
     'COPTFLAGS=-g -O3',&lt;br /&gt;
     'CXXOPTFLAGS=-g -O3',&lt;br /&gt;
   ]&lt;br /&gt;
   configure.petsc_configure(configure_options)&lt;br /&gt;
&lt;br /&gt;
=== Building HONEE ===&lt;br /&gt;
Ensure that &amp;lt;code&amp;gt;PETSC_DIR&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PETSC_ARCH&amp;lt;/code&amp;gt; are set to the desired PETSc installation.&lt;br /&gt;
&lt;br /&gt;
 git clone https://github.com/CEED/libCEED.git&lt;br /&gt;
 cd libCEED&lt;br /&gt;
 make build/fluids-navierstokes -j&lt;br /&gt;
&lt;br /&gt;
The executable &amp;lt;code&amp;gt;build/fluids-navierstokes&amp;lt;/code&amp;gt; is then built and ready for running.&lt;br /&gt;
&lt;br /&gt;
== Changing HONEE Inputs ==&lt;br /&gt;
HONEE uses PETSc's input argument system for handling inputs.&lt;br /&gt;
They can be provided either via command-line flags or inside a YAML file.&lt;br /&gt;
So &lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -ts_dt 1e-3&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml&lt;br /&gt;
&lt;br /&gt;
if &amp;lt;code&amp;gt;test.yaml&amp;lt;/code&amp;gt; has this in it:&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
&lt;br /&gt;
Note also that PETSc allows hierarchical flags within a YAML file.&lt;br /&gt;
So instead of writing&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
 ts_type: alpha&lt;br /&gt;
 ts_max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
You can write:&lt;br /&gt;
&lt;br /&gt;
 ts:&lt;br /&gt;
   dt: 1e-3&lt;br /&gt;
   type: alpha&lt;br /&gt;
   max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
=== Documentation for Specific Flags ===&lt;br /&gt;
* [https://libceed.org/en/latest/examples/fluids/ HONEE specific flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/TS/TSSetFromOptions/ Time Stepping (TS) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/SNES/SNESSetFromOptions/ Non-linear solver (SNES) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/KSP/KSPSetFromOptions/ Linear solver (KSP) flags]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2061</id>
		<title>FDSI Summer Program/HONEE</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2061"/>
				<updated>2024-06-03T14:53:49Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Information for tutorial in HONEE, to be updated as needed.''&lt;br /&gt;
&lt;br /&gt;
'''HONEE''', or the '''High-Order Navier-stokes Equation Evaluator''', is a CFD program that combines [https://libceed.org/en/latest/ libCEED] and [https://petsc.org/release/ PETSc].&lt;br /&gt;
&lt;br /&gt;
== Building HONEE for Cisco nodes ==&lt;br /&gt;
&lt;br /&gt;
First, start a job on the Cisco nodes. All the below commands should be run within a terminal on a Cisco node.&lt;br /&gt;
&lt;br /&gt;
=== Setup Environment ===&lt;br /&gt;
 source /projects/tools/Spackv0.23/share/spack/setup-env.sh&lt;br /&gt;
 spack load gcc@12.3&lt;br /&gt;
 module load openmpi&lt;br /&gt;
&lt;br /&gt;
=== Create new directory ===&lt;br /&gt;
 mkdir honee_build&lt;br /&gt;
 cd honee_build&lt;br /&gt;
&lt;br /&gt;
=== Build PETSc===&lt;br /&gt;
Ordinarily, I'd simply recommend doing a &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;, e.g. &amp;lt;code&amp;gt;git clone https://gitlab.com/petsc/petsc.git&amp;lt;/code&amp;gt;.&lt;br /&gt;
However, the &amp;lt;code&amp;gt;/nobackup/&amp;lt;/code&amp;gt; server is quite slow and cloning takes around 20 minutes (normally about a minute on my laptop), so downloading and extracting a tarball may be quicker.&lt;br /&gt;
I may cover that later, but for now I'll just use &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
 git clone https://gitlab.com/petsc/petsc.git&lt;br /&gt;
 cd petsc&lt;br /&gt;
 cp /nobackup/uncompressed/jrwrigh/HONEE_Setup/petsc/reconfigure.py .&lt;br /&gt;
 ./reconfigure.py&lt;br /&gt;
 make&lt;br /&gt;
 export PETSC_DIR=$(pwd) PETSC_ARCH=arch-32&lt;br /&gt;
 cd ..&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;reconfigure.py&amp;lt;/code&amp;gt; file has the following:&lt;br /&gt;
 #!/bin/python3&lt;br /&gt;
 if __name__ == '__main__':&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   sys.path.insert(0, os.path.abspath('config'))&lt;br /&gt;
   import configure&lt;br /&gt;
   configure_options = [&lt;br /&gt;
     '--with-64-bit-indices=0',&lt;br /&gt;
     '--download-hdf5',&lt;br /&gt;
     '--download-cgns',&lt;br /&gt;
     '--download-ctetgen=1',&lt;br /&gt;
     '--download-parmetis=1',&lt;br /&gt;
     '--download-metis=1',&lt;br /&gt;
     '--download-ptscotch=1',&lt;br /&gt;
     '--with-debugging=0',&lt;br /&gt;
     '--with-fortran-bindings=0',&lt;br /&gt;
     '--with-fc=0',&lt;br /&gt;
     'PETSC_ARCH=arch-32',&lt;br /&gt;
     'COPTFLAGS=-g -O3',&lt;br /&gt;
     'CXXOPTFLAGS=-g -O3',&lt;br /&gt;
   ]&lt;br /&gt;
   configure.petsc_configure(configure_options)&lt;br /&gt;
&lt;br /&gt;
=== Building HONEE ===&lt;br /&gt;
Ensure that &amp;lt;code&amp;gt;PETSC_DIR&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PETSC_ARCH&amp;lt;/code&amp;gt; are set to the desired PETSc installation.&lt;br /&gt;
&lt;br /&gt;
 git clone https://github.com/CEED/libCEED.git&lt;br /&gt;
 cd libCEED&lt;br /&gt;
 make build/fluids-navierstokes -j&lt;br /&gt;
&lt;br /&gt;
The executable &amp;lt;code&amp;gt;build/fluids-navierstokes&amp;lt;/code&amp;gt; is then built and ready for running.&lt;br /&gt;
&lt;br /&gt;
== Changing HONEE Inputs ==&lt;br /&gt;
HONEE uses PETSc's options system for handling inputs.&lt;br /&gt;
Options can be provided either as command-line flags or inside a YAML file.&lt;br /&gt;
So&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -ts_dt 1e-3&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml&lt;br /&gt;
&lt;br /&gt;
if &amp;lt;code&amp;gt;test.yaml&amp;lt;/code&amp;gt; has this in it:&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
&lt;br /&gt;
Note that PETSc also allows hierarchical flags within a YAML file.&lt;br /&gt;
So instead of writing&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
 ts_type: alpha&lt;br /&gt;
 ts_max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
you can write:&lt;br /&gt;
&lt;br /&gt;
 ts:&lt;br /&gt;
   dt: 1e-3&lt;br /&gt;
   type: alpha&lt;br /&gt;
   max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
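The flattening that turns the hierarchical form into flat flags can be sketched in a few lines of Python (this is an illustration only, not PETSc's actual parsing code):&lt;br /&gt;

```python
# Minimal sketch of how hierarchical YAML options map to flat PETSc-style flags.
# Illustration only; PETSc's real options database does the actual parsing.
def flatten(options, prefix=""):
    """Join nested keys with underscores: {'ts': {'dt': v}} -> {'ts_dt': v}."""
    flat = {}
    for key, value in options.items():
        name = prefix + key
        if isinstance(value, dict):
            flat.update(flatten(value, name + "_"))
        else:
            flat[name] = value
    return flat

print(flatten({"ts": {"dt": "1e-3", "type": "alpha", "max_time": "2.3"}}))
# {'ts_dt': '1e-3', 'ts_type': 'alpha', 'ts_max_time': '2.3'}
```

Both forms therefore produce the same option set that HONEE reads at startup.&lt;br /&gt;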
=== Documentation for Specific Flags ===&lt;br /&gt;
* [https://libceed.org/en/latest/examples/fluids/ HONEE specific flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/TS/TSSetFromOptions/ Time Stepping (TS) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/SNES/SNESSetFromOptions/ Non-linear solver (SNES) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/KSP/KSPSetFromOptions/ Linear solver (KSP) flags]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2060</id>
		<title>FDSI Summer Program/HONEE</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2060"/>
				<updated>2024-06-03T14:45:44Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Information for tutorial in HONEE, to be updated as needed.''&lt;br /&gt;
&lt;br /&gt;
'''HONEE''', or the '''High-Order Navier-stokes Equation Evaluator''', is a CFD program that combines libCEED and PETSc.&lt;br /&gt;
&lt;br /&gt;
== Building HONEE for Cisco nodes ==&lt;br /&gt;
&lt;br /&gt;
First, start a job on the Cisco nodes. All the below commands should be run within a terminal on a Cisco node.&lt;br /&gt;
&lt;br /&gt;
=== Setup Environment ===&lt;br /&gt;
 source /projects/tools/Spackv0.23/share/spack/setup-env.sh&lt;br /&gt;
 spack load gcc@12.3&lt;br /&gt;
 module load openmpi&lt;br /&gt;
&lt;br /&gt;
=== Create new directory ===&lt;br /&gt;
 mkdir honee_build&lt;br /&gt;
 cd honee_build&lt;br /&gt;
&lt;br /&gt;
=== Build PETSc===&lt;br /&gt;
Ordinarily, I'd recommend simply doing a &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;, e.g. &amp;lt;code&amp;gt;git clone https://gitlab.com/petsc/petsc.git&amp;lt;/code&amp;gt;.&lt;br /&gt;
However, the &amp;lt;code&amp;gt;/nobackup/&amp;lt;/code&amp;gt; server is quite slow and this process takes around 20 minutes (normally about a minute on my laptop), so downloading and extracting a tarball may be quicker.&lt;br /&gt;
I may cover that later, but for now I'll just cover it via &amp;lt;code&amp;gt;git clone&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
 git clone https://gitlab.com/petsc/petsc.git&lt;br /&gt;
 cd petsc&lt;br /&gt;
 cp /nobackup/uncompressed/jrwrigh/HONEE_Setup/petsc/reconfigure.py .&lt;br /&gt;
 ./reconfigure.py&lt;br /&gt;
 make&lt;br /&gt;
 export PETSC_DIR=$(pwd) PETSC_ARCH=arch-32&lt;br /&gt;
 cd ..&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;reconfigure.py&amp;lt;/code&amp;gt; file has the following:&lt;br /&gt;
 #!/bin/python3&lt;br /&gt;
 if __name__ == '__main__':&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   sys.path.insert(0, os.path.abspath('config'))&lt;br /&gt;
   import configure&lt;br /&gt;
   configure_options = [&lt;br /&gt;
     '--with-64-bit-indices=0',&lt;br /&gt;
     '--download-hdf5',&lt;br /&gt;
     '--download-cgns',&lt;br /&gt;
     '--download-ctetgen=1',&lt;br /&gt;
     '--download-parmetis=1',&lt;br /&gt;
     '--download-metis=1',&lt;br /&gt;
     '--download-ptscotch=1',&lt;br /&gt;
     '--with-debugging=0',&lt;br /&gt;
     '--with-fortran-bindings=0',&lt;br /&gt;
     '--with-fc=0',&lt;br /&gt;
     'PETSC_ARCH=arch-32',&lt;br /&gt;
     'COPTFLAGS=-g -O3',&lt;br /&gt;
     'CXXOPTFLAGS=-g -O3',&lt;br /&gt;
   ]&lt;br /&gt;
   configure.petsc_configure(configure_options)&lt;br /&gt;
&lt;br /&gt;
=== Building HONEE ===&lt;br /&gt;
Ensure that &amp;lt;code&amp;gt;PETSC_DIR&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PETSC_ARCH&amp;lt;/code&amp;gt; are set to the desired PETSc installation.&lt;br /&gt;
&lt;br /&gt;
 git clone https://github.com/CEED/libCEED.git&lt;br /&gt;
 cd libCEED&lt;br /&gt;
 make build/fluids-navierstokes -j&lt;br /&gt;
&lt;br /&gt;
The executable &amp;lt;code&amp;gt;build/fluids-navierstokes&amp;lt;/code&amp;gt; is then built and ready for running.&lt;br /&gt;
&lt;br /&gt;
== Changing HONEE Inputs ==&lt;br /&gt;
HONEE uses PETSc's options system for handling inputs.&lt;br /&gt;
Options can be provided either as command-line flags or inside a YAML file.&lt;br /&gt;
So&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -ts_dt 1e-3&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml&lt;br /&gt;
&lt;br /&gt;
if &amp;lt;code&amp;gt;test.yaml&amp;lt;/code&amp;gt; has this in it:&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
&lt;br /&gt;
Note that PETSc also allows hierarchical flags within a YAML file.&lt;br /&gt;
So instead of writing&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
 ts_type: alpha&lt;br /&gt;
 ts_max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
you can write:&lt;br /&gt;
&lt;br /&gt;
 ts:&lt;br /&gt;
   dt: 1e-3&lt;br /&gt;
   type: alpha&lt;br /&gt;
   max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
=== Documentation for Specific Flags ===&lt;br /&gt;
* [https://libceed.org/en/latest/examples/fluids/ HONEE specific flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/TS/TSSetFromOptions/ Time Stepping (TS) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/SNES/SNESSetFromOptions/ Non-linear solver (SNES) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/KSP/KSPSetFromOptions/ Linear solver (KSP) flags]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2059</id>
		<title>FDSI Summer Program/HONEE</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=FDSI_Summer_Program/HONEE&amp;diff=2059"/>
				<updated>2024-06-03T14:30:29Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Created page with &amp;quot;''Information for tutorial in HONEE, to be updated as needed.''  '''HONEE''', or the '''High-Order Navier-stokes Equation Evaluator''', is a CFD program that combines libCEED...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Information for tutorial in HONEE, to be updated as needed.''&lt;br /&gt;
&lt;br /&gt;
'''HONEE''', or the '''High-Order Navier-stokes Equation Evaluator''', is a CFD program that combines libCEED and PETSc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing HONEE Inputs ==&lt;br /&gt;
HONEE uses PETSc's options system for handling inputs.&lt;br /&gt;
Options can be provided either as command-line flags or inside a YAML file.&lt;br /&gt;
So&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -ts_dt 1e-3&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&lt;br /&gt;
 ./build/fluids-navierstokes -options_file test.yaml&lt;br /&gt;
&lt;br /&gt;
if &amp;lt;code&amp;gt;test.yaml&amp;lt;/code&amp;gt; has this in it:&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
&lt;br /&gt;
Note that PETSc also allows hierarchical flags within a YAML file.&lt;br /&gt;
So instead of writing&lt;br /&gt;
&lt;br /&gt;
 ts_dt: 1e-3&lt;br /&gt;
 ts_type: alpha&lt;br /&gt;
 ts_max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
you can write:&lt;br /&gt;
&lt;br /&gt;
 ts:&lt;br /&gt;
   dt: 1e-3&lt;br /&gt;
   type: alpha&lt;br /&gt;
   max_time: 2.3&lt;br /&gt;
&lt;br /&gt;
=== Documentation for Specific Flags ===&lt;br /&gt;
* [https://libceed.org/en/latest/examples/fluids/ HONEE specific flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/TS/TSSetFromOptions/ Time Stepping (TS) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/SNES/SNESSetFromOptions/ Non-linear solver (SNES) flags]&lt;br /&gt;
* [https://petsc.org/main/manualpages/KSP/KSPSetFromOptions/ Linear solver (KSP) flags]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Exporting_Parasolid_from_SolidWorks&amp;diff=2049</id>
		<title>Exporting Parasolid from SolidWorks</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Exporting_Parasolid_from_SolidWorks&amp;diff=2049"/>
				<updated>2024-04-11T15:06:32Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Save out your model as a parasolid from [[SolidWorks]]. Note that you will want the geometry as close to ready for meshing as possible, as performing model &amp;quot;surgery&amp;quot; in SimModeler is not always straightforward. The file output from SolidWorks will have the format &amp;lt;code&amp;gt;&amp;lt;file_name&amp;gt;.x_t&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you do not have a parasolid model of your own, you may use the On Ramp example file located at:&lt;br /&gt;
&lt;br /&gt;
 /projects/tutorials/OnRamp/example_geom.x_t&lt;br /&gt;
&lt;br /&gt;
Ensure that you are on one of the viznodes and not portal1. You may tunnel to viz003 by opening a terminal and running:&lt;br /&gt;
&lt;br /&gt;
 vglconnect -s viz003&lt;br /&gt;
&lt;br /&gt;
Navigate to and copy your file into your working directory. Typically, we create a folder where all the simulation files are stored. For example, after opening a terminal I could run the commands:&lt;br /&gt;
&lt;br /&gt;
 mkdir Demo&lt;br /&gt;
 cd Demo&lt;br /&gt;
&lt;br /&gt;
This would place me in my working directory 'Demo'. To copy all the files for the On Ramp demo, run this in your 'Demo' directory:&lt;br /&gt;
&lt;br /&gt;
 cp /projects/tutorials/OnRamp/* .&lt;br /&gt;
&lt;br /&gt;
Next, you'll want to change the parasolid file extension from &amp;lt;code&amp;gt;.x_t&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt;. To do this, run &amp;lt;code&amp;gt;mv &amp;lt;file_name&amp;gt;.x_t &amp;lt;file_name&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; from your terminal. From here, you are ready for the ''convert'' step. The convert step is documented here: https://fluid.colorado.edu/tutorials/tutorialVideos/Convert2Sim_Tutorial.mp4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Summary of video:''' &lt;br /&gt;
 1. Ensure &amp;lt;code&amp;gt;convertParasolid2Sim.sh&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;&amp;lt;file name&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; are in your working directory. &lt;br /&gt;
&lt;br /&gt;
 2. Set environment with soft adds found in &amp;lt;code&amp;gt;more ~kjansen/soft-core.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 3. Run &amp;lt;code&amp;gt;./convertParasolid2sim.sh &amp;lt;file name&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; in your terminal&lt;br /&gt;
&lt;br /&gt;
 4. Convert step is complete and your directory now contains 3 new files: &amp;lt;code&amp;gt;model.smd&amp;lt;/code&amp;gt; &amp;lt;code&amp;gt;relations.log&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt;. The &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file is the one we need moving forward in this tutorial.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the convert step is complete, you are ready to move onto the next step and use [[Getting Started with Simmodeler| SimModeler]] to create a mesh for the new &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file we created!&lt;br /&gt;
&lt;br /&gt;
== Geometries with Standalone Surfaces ==&lt;br /&gt;
&lt;br /&gt;
There are sometimes cases where models contain not only three-dimensional solid bodies but also surfaces. Surfaces are treated differently from solid bodies by SolidWorks and parasolid files, so they need to be exported separately. To do this, select all entities except the surface and save as a parasolid; after you hit save, SolidWorks will ask whether you want to save all of the geometry or only the selected geometry, and we want the second option. Now do the same, but select only the surface. In newer versions of SimModeler, only the domain needs to be run through the conversion, and the surfaces can be added as parasolid files. In older versions, these two separate parasolid files needed to be converted to &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; separately (remember to rename the files after they are converted to avoid overwriting) and recombined in SimModeler. The toolchain seems to fail for multiple unconnected surfaces, so if this is the case, take care to export them separately.&lt;br /&gt;
&lt;br /&gt;
To recombine, simply open the solid body file in SimModeler, under the &amp;quot;Modeling&amp;quot; tab select &amp;quot;Add Parts&amp;quot;, add the surfaces file, then select &amp;quot;Make New Manifold Model&amp;quot;, which will combine the files into one model that is suitable for PHASTA.&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=PHASTA/Restart_Ordering&amp;diff=2046</id>
		<title>PHASTA/Restart Ordering</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=PHASTA/Restart_Ordering&amp;diff=2046"/>
				<updated>2024-03-12T20:45:32Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Optional Outputs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Knowledge of the ordering of restarts is most useful for the creation of &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; files (used to tell ParaView what to read in from restarts in POSIX or SyncIO format respectively). There are both standard outputs from PHASTA that will be written no matter what, and optional ones that may depend on the simulation type or what type of analysis is planned on being done. &lt;br /&gt;
&lt;br /&gt;
In the following, index numbering will be in index-by-1 format, or Fortran format. When creating a &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; file and filling in the &amp;lt;code&amp;gt;start_index_in_phasta_array=&amp;quot; &amp;quot;&amp;lt;/code&amp;gt; section for each field, note that you should subtract one as ParaView reads this file using index-by-0.&lt;br /&gt;
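As a concrete illustration of that off-by-one shift (field names here follow the pressure-primitive ordering described below):&lt;br /&gt;

```python
# Hypothetical sketch: shifting 1-based (Fortran-style) field positions to the
# 0-based start_index_in_phasta_array values that ParaView expects.
fields_1based = {"pressure": 1, "velocity": 2, "temperature": 5}
start_index = {name: pos - 1 for name, pos in fields_1based.items()}
print(start_index)
# {'pressure': 0, 'velocity': 1, 'temperature': 4}
```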
&lt;br /&gt;
NOTE: &lt;br /&gt;
This page is a work in progress and none of this information should be taken as fully accurate while this warning persists&lt;br /&gt;
&lt;br /&gt;
== Standard Outputs ==&lt;br /&gt;
&lt;br /&gt;
The output flow variables written to restarts depend on the choice of variables used to solve a given setup. These output flow variables are stored under the header&lt;br /&gt;
&amp;lt;code&amp;gt;solution&amp;lt;/code&amp;gt;&lt;br /&gt;
and, for pressure-primitive variables, have the ordering:&lt;br /&gt;
* Pressure&lt;br /&gt;
* Velocity (vector quantity, ordered x, y, z)&lt;br /&gt;
* Temperature (if solving the compressible equations, this field can be ignored if incompressible)&lt;br /&gt;
&lt;br /&gt;
In the case where turbulence models are being used or species are being solved for, &amp;lt;code&amp;gt;solution&amp;lt;/code&amp;gt; will also populate scalar fields starting in the 6th field in sequential ordering:&lt;br /&gt;
* Scalar 1&lt;br /&gt;
* Scalar 2&lt;br /&gt;
&lt;br /&gt;
The final ordering is then:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Pressure Primitive Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1       || Pressure     ||&lt;br /&gt;
|-&lt;br /&gt;
| 2:4     || Velocity     || Vector quantity, ordered u, v, w&lt;br /&gt;
|-&lt;br /&gt;
| 5       || Temperature  || This field can be ignored if incompressible&lt;br /&gt;
|-&lt;br /&gt;
| 6       || Scalar 1     || Only written if scalar 1 exists&lt;br /&gt;
|-&lt;br /&gt;
| 7       || Scalar 2     || Only written if scalar 2 exists&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The exact quantity that any scalar represents depends on the models being used. As an example, for the Spalart-Allmaras (SA) 1-equation RANS model or DES/DDES formulations using the SA model, scalar 1 will be &amp;amp;nu;&amp;lt;sub&amp;gt;t&amp;lt;/sub&amp;gt;. Some branches of the code may have the ability to solve more scalar equations, and those extra scalars will be appended, in order, to the end of the &amp;lt;code&amp;gt;solution&amp;lt;/code&amp;gt; field.&lt;br /&gt;
&lt;br /&gt;
=== Time Derivatives ===&lt;br /&gt;
The time derivatives of all of the above fields are also present in the restarts, under the header &amp;lt;code&amp;gt;time derivative of solution&amp;lt;/code&amp;gt;. These fields are in the exact same order as in &amp;lt;code&amp;gt;solution&amp;lt;/code&amp;gt;, and are subject to the same caveats about how many scalars there may be and what they actually represent for any given branch of the code and simulation setup.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Pressure Primitive Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1       || Time derivative of pressure     ||&lt;br /&gt;
|-&lt;br /&gt;
| 2:4     || Time derivative of velocity     || Vector quantity, ordered u, v, w&lt;br /&gt;
|-&lt;br /&gt;
| 5       || Time derivative of temperature  || This field can be ignored if incompressible&lt;br /&gt;
|-&lt;br /&gt;
| 6       || Time derivative of scalar 1     || Only written if scalar 1 exists&lt;br /&gt;
|-&lt;br /&gt;
| 7       || Time derivative of scalar 2     || Only written if scalar 2 exists&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Optional Outputs ==&lt;br /&gt;
&lt;br /&gt;
=== Wall Distance ===&lt;br /&gt;
This quantity is simply the distance from any given node to the nearest wall point in the simulation. It is mostly used in turbulence models but can be useful for post-processing of complex domains.&lt;br /&gt;
*Header: &amp;lt;code&amp;gt;dwal&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;placeholder&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1     || Wall Distance || &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Vorticity ===&lt;br /&gt;
*Header: &amp;lt;code&amp;gt;vorticity&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;Print vorticity: True&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1:3     || Vorticity              || Vector quantity, ordered &amp;amp;omega;&amp;lt;sub&amp;gt;x&amp;lt;/sub&amp;gt;, &amp;amp;omega;&amp;lt;sub&amp;gt;y&amp;lt;/sub&amp;gt;, &amp;amp;omega;&amp;lt;sub&amp;gt;z&amp;lt;/sub&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| 4       || Magnitude of vorticity || &lt;br /&gt;
|-&lt;br /&gt;
| 5       || Q                      || Defined as the second invariant of the velocity gradient tensor&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Time Averaged Statistics (point-wise) ===&lt;br /&gt;
Point-wise time averaged statistics are useful for problems where there are no homogeneous directions in the flow to accumulate an average along. Instead, PHASTA can accumulate averages at each individual node. Note that this formulation only accumulates per-run, so an average is only computed from the start step of the current run, and total averages must be computed by adding successive averages with the appropriate weighting. &lt;br /&gt;
*Header: &amp;lt;code&amp;gt;ybar&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;Print ybar: True&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Fields (incompressible)&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1:3   || Average velocity                     || Vector quantity, ordered u, v, w&lt;br /&gt;
|-&lt;br /&gt;
| 4     || Average pressure                     || &lt;br /&gt;
|-&lt;br /&gt;
| 5     || Average speed                        || &lt;br /&gt;
|-&lt;br /&gt;
| 6:8   || Average of the square of velocity    || Vector quantity, ordered u&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;, v&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;, w&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| 9     || Average of the square of pressure    ||&lt;br /&gt;
|-&lt;br /&gt;
| 10:12 || Average of velocity cross-components || Vector quantity, ordered uv, uw, vw&lt;br /&gt;
|-&lt;br /&gt;
| 13    || Average of scalar 1                  ||&lt;br /&gt;
|-&lt;br /&gt;
| 14:16 || Average of vorticity                 || Vector quantity, ordered &amp;amp;omega;&amp;lt;sub&amp;gt;x&amp;lt;/sub&amp;gt;, &amp;amp;omega;&amp;lt;sub&amp;gt;y&amp;lt;/sub&amp;gt;, &amp;amp;omega;&amp;lt;sub&amp;gt;z&amp;lt;/sub&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| 17    || Average vorticity magnitude          ||&lt;br /&gt;
|-&lt;br /&gt;
| 18    || Average of scalar 2                  ||&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Average of Q may be in position 19, depending on the branch.&lt;br /&gt;
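Because &amp;lt;code&amp;gt;ybar&amp;lt;/code&amp;gt; accumulates per-run, successive runs must be combined with weights proportional to the number of steps each average covers. A minimal sketch of that weighting, with made-up numbers:&lt;br /&gt;

```python
# Hypothetical sketch: combining per-run time averages into a total average.
# n1 and n2 are the step (or time-interval) counts each average accumulated over.
def combine_averages(avg1, n1, avg2, n2):
    return (n1 * avg1 + n2 * avg2) / (n1 + n2)

# e.g. a run averaging 1.0 over 10 steps followed by 2.0 over 30 steps:
print(combine_averages(1.0, 10, 2.0, 30))
# 1.75
```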
&lt;br /&gt;
&lt;br /&gt;
=== Wall Shear Stress ===&lt;br /&gt;
This field is only defined at wall points and is used to get the most accurate measure of the wall shear stress possible; otherwise, a finite gradient using the first point off of the wall must be used, and that point may or may not be normal to the wall point underneath it. &lt;br /&gt;
*Header: &amp;lt;code&amp;gt;wss&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;Print Wall Fluxes: True&amp;lt;/code&amp;gt; (check this)&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1:3     || Wall shear stress || Vector quantity, ordered &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;x&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;, &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;y&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;, &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;z&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
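The fallback described above, estimating the stress from a one-sided finite difference at the first off-wall point, can be sketched as follows (all numbers are made-up illustrative values):&lt;br /&gt;

```python
# Hypothetical sketch of the finite-gradient estimate the text describes:
# tau_wall ~= mu * (du/dy), evaluated with the first point off the wall.
mu = 1.8e-5   # dynamic viscosity (made-up value)
u1 = 0.12     # tangential velocity at the first off-wall node
y1 = 1.0e-4   # wall-normal distance of that node
tau_wall = mu * u1 / y1
print(tau_wall)  # approximately 0.0216
```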
&lt;br /&gt;
=== Time Averaged Wall Shear Stress ===&lt;br /&gt;
*Header: &amp;lt;code&amp;gt;wssbar&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;Print Wall Fluxes: True&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Print ybar: True&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1:3     || Average wall shear stress || Vector quantity, ordered &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;x&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;, &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;y&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;, &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;z&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Pressure Projection Vectors ===&lt;br /&gt;
*Header: &amp;lt;code&amp;gt;pressure projection vectors&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;placeholder&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=UNIX&amp;diff=1955</id>
		<title>UNIX</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=UNIX&amp;diff=1955"/>
				<updated>2023-08-08T14:49:33Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Command Line Basics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Most of our systems (and general HPC resources) run some UNIX derivative. Much of the software is command line based, so it's worthwhile to learn the basics. &lt;br /&gt;
&lt;br /&gt;
There are tons of free resources on the web for getting started ([https://www.tipsandtricks-hq.com/basic-unix-commands-list-366 Basic UNIX Commands]). There should also be a &amp;quot;for dummies&amp;quot; book in the lab. &lt;br /&gt;
&lt;br /&gt;
As you find resources that are helpful, please update this page.&lt;br /&gt;
&lt;br /&gt;
== Connecting (SSH) ==&lt;br /&gt;
Windows:&lt;br /&gt;
[http://www.chiark.greenend.org.uk/~sgtatham/putty PuTTY SSH Client]&lt;br /&gt;
[http://winscp.net/eng/index.php WinSCP file transfer tool]&lt;br /&gt;
&lt;br /&gt;
MacOS and Linux users can use [http://openssh.org/ OpenSSH] on the command line (it generally comes with the OS).&lt;br /&gt;
&lt;br /&gt;
=== Resources for setting up ssh keys ===&lt;br /&gt;
&lt;br /&gt;
Visual guide to how ssh-keys and &amp;lt;code&amp;gt;ssh-agent&amp;lt;/code&amp;gt; work: [http://www.unixwiz.net/techtips/ssh-agent-forwarding.html An Illustrated Guide to SSH Agent Forwarding]&lt;br /&gt;
&lt;br /&gt;
For setting up &amp;lt;code&amp;gt;ssh-agent&amp;lt;/code&amp;gt; (so you don't have to type your password over and over): [http://blog.joncairns.com/2013/12/understanding-ssh-agent-and-ssh-add/ Understanding ssh-agent and ssh-add]&lt;br /&gt;
&lt;br /&gt;
Script for automatically starting &amp;lt;code&amp;gt;ssh-agent&amp;lt;/code&amp;gt; on login of a machine (place in your &amp;lt;code&amp;gt;.profile&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;.bash_profile&amp;lt;/code&amp;gt;): [https://stackoverflow.com/a/18915067/7564988 StackOverflow: Start ssh-agent on login]&lt;br /&gt;
&lt;br /&gt;
=== Visual guide to SSH tunnels (i.e. port forwarding) ===&lt;br /&gt;
&lt;br /&gt;
A good resource to understand port-forwarding, and more advanced uses of port-forwarding: [https://robotmoon.com/ssh-tunnels/ A visual guide to SSH tunnels]&lt;br /&gt;
&lt;br /&gt;
== Command Line Basics ==&lt;br /&gt;
&lt;br /&gt;
[https://explainshell.com/# Explain Shell]: Copy/paste a CLI command, and it will tell you what all the flags mean&lt;br /&gt;
&lt;br /&gt;
[https://arstechnica.com/gadgets/2021/08/linux-bsd-command-line-101-using-awk-sed-and-grep-in-the-terminal/ How to ''think'' when using &amp;lt;code&amp;gt;grep&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;sed&amp;lt;/code&amp;gt;]: Article going over the basic uses of &amp;lt;code&amp;gt;grep&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;sed&amp;lt;/code&amp;gt; and how to think through their use cases.&lt;br /&gt;
&lt;br /&gt;
[https://www.funtoo.org/Awk_by_Example,_Part_1 Learning Awk by Example]: Tutorials going over how to use awk, while actually using examples.&lt;br /&gt;
&lt;br /&gt;
[https://www.tldp.org/LDP/abs/html/ Advanced Bash-Scripting Guide]&lt;br /&gt;
&lt;br /&gt;
== Graphical Sessions (VNC) ==&lt;br /&gt;
See [[VNC]]&lt;br /&gt;
&lt;br /&gt;
== File Permissions and ACL ==&lt;br /&gt;
See [[File_Permissions_Basics_and_ACL|File Permissions Basics and ACL]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1954</id>
		<title>Paraview Trace</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1954"/>
				<updated>2023-06-17T14:34:35Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Basic Changes to the Python Script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A Paraview trace records a set of actions as a Python script that can later be applied to different datasets, or over a loop of datasets, automatically. The following will briefly explain the creation, cleaning, and running of a Python trace in Paraview (often shortened to pvTrace or something similar).&lt;br /&gt;
&lt;br /&gt;
=== Creation ===&lt;br /&gt;
General steps are to start a trace, do whatever you're wanting to record, then stop the trace.&lt;br /&gt;
&lt;br /&gt;
'''1. Start Trace'''&lt;br /&gt;
&lt;br /&gt;
Tools &amp;gt; Start Trace&lt;br /&gt;
&lt;br /&gt;
This will bring up a dialog with different options for what to record. Of note is &amp;quot;Skip Rendering Components&amp;quot; (useful if your trace doesn't involve any visual capturing). You can also select &amp;quot;Show Incremental Trace&amp;quot; to verify what commands are being recorded as you perform them.&lt;br /&gt;
&lt;br /&gt;
'''2. Do something'''&lt;br /&gt;
&lt;br /&gt;
Do whatever you were wanting to have recorded in Paraview.&lt;br /&gt;
&lt;br /&gt;
'''3. Stop Trace'''&lt;br /&gt;
&lt;br /&gt;
Tools &amp;gt; Stop Trace&lt;br /&gt;
&lt;br /&gt;
This will then bring up the resulting Python script, which you can then save to a file, edit, and rerun.&lt;br /&gt;
&lt;br /&gt;
=== Basic Changes to the Python Script ===&lt;br /&gt;
&lt;br /&gt;
When rerunning a trace script, it's just run via a normal Python interpreter in a Paraview-specific environment. All Python standard library packages are available to the script.&lt;br /&gt;
&lt;br /&gt;
This allows for significant flexibility in what can be done in and during the script. For example, in [https://github.com/PHASTA/utilities/blob/609efbe8879b7c341bc2dc29d14df4e90ad1084d/general/ParaView_Interpolation/BatchSync.py#L131 this Paraview script file], originally created from a trace, we use a for loop to do the same action multiple times. Additionally, the &amp;lt;code&amp;gt;os&amp;lt;/code&amp;gt; package is used to symlink and delete files so that the data read in during the for loop changes. We've even added &amp;lt;code&amp;gt;print&amp;lt;/code&amp;gt; statements to log what the script is doing.&lt;br /&gt;
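A stripped-down sketch of that symlink-swap pattern, using only the standard library (the file names are made up, and the ParaView calls themselves are omitted):&lt;br /&gt;

```python
# Hypothetical sketch: re-point a fixed filename at each dataset in turn so a
# recorded trace that always reads "current_restart" can loop over many files.
import os
import tempfile

workdir = tempfile.mkdtemp()
datasets = ["restart.100.1", "restart.200.1"]  # made-up restart file names
link = os.path.join(workdir, "current_restart")

for name in datasets:
    target = os.path.join(workdir, name)
    open(target, "w").close()      # stand-in for a real restart file
    if os.path.islink(link):
        os.remove(link)            # delete the previous link
    os.symlink(target, link)       # point the fixed name at the next dataset
    print("processing", os.path.basename(os.readlink(link)))
```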
&lt;br /&gt;
==== Python Versions ====&lt;br /&gt;
'''Note: The version of Python used by Paraview is often different than the version installed on your system.'''&lt;br /&gt;
&lt;br /&gt;
To determine the version of Python used by Paraview, you can select &amp;quot;View &amp;gt; Python Shell&amp;quot; in Paraview, which will show the Python interpreter. The version of Python used should be printed on the shell at the top. If not, run &amp;lt;code&amp;gt;import sys; print(sys.version)&amp;lt;/code&amp;gt; to display the Python version information. If you're using Paraview through the server/client mode, you ''MUST'' do this while attached to a &amp;lt;code&amp;gt;pvserver&amp;lt;/code&amp;gt;; the version of Python on your local Paraview client is not guaranteed to match the version on the server.&lt;br /&gt;
&lt;br /&gt;
The Python Shell in Paraview can also be used to verify if certain functions will work and what libraries are available (such as [https://numpy.org/ numpy]).&lt;br /&gt;
&lt;br /&gt;
=== Running a Trace ===&lt;br /&gt;
Running a pvTrace, whether on the same or a different dataset, requires only a few key steps. The following assumes the trace is being run on Cooley at ALCF, though many of the steps carry over to other machines.&lt;br /&gt;
&lt;br /&gt;
If you are running the trace on a dataset that is not the same as the one for which you created the trace, it is good practice to always check your script and inputs. Make sure you have all of the restart and geombc files that you will need, and that you are pointing to the correct locations in the python script (it is recommended that you use absolute paths to reduce the chances for error). Also check your output file name and location.&lt;br /&gt;
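One way to guard against path mistakes is a short pre-flight check at the top of the trace script; the absolute paths below are made-up placeholders for your own restart and geombc files:&lt;br /&gt;

```python
import os

# Hypothetical pre-flight check: list every input file the trace
# script reads, then report any that are missing before running.
inputs = ["/abs/path/geombc.dat.1", "/abs/path/restart.100.1"]
missing = [p for p in inputs if not os.path.isfile(p)]
print("missing inputs:", missing)
```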
&lt;br /&gt;
Once your python script is ready, you need to get an interactive allocation on Cooley and load the same version of paraview with which the trace was created:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;soft add +paraview-5.5.2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This loads &amp;lt;code&amp;gt;pvpython&amp;lt;/code&amp;gt;, which is used to run the Python trace script. Running the script is a simple modification of a standard Python command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;pvpython Trace_Script.py&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1953</id>
		<title>Paraview Trace</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1953"/>
				<updated>2023-06-17T14:33:45Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Basic Changes to the Python Script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Paraview traces are a way to record a set of actions as a Python script that can later be applied to different datasets, or over a loop of datasets, automatically. The following will briefly explain the creation, cleaning, and running of a Python trace in Paraview (often shortened to pvTrace or similar).&lt;br /&gt;
&lt;br /&gt;
=== Creation ===&lt;br /&gt;
General steps are to start a trace, do whatever you're wanting to record, then stop the trace.&lt;br /&gt;
&lt;br /&gt;
'''1. Start Trace'''&lt;br /&gt;
&lt;br /&gt;
Tools &amp;gt; Start Trace&lt;br /&gt;
&lt;br /&gt;
This will bring up a dialog with different options for which things to record. Of note is &amp;quot;Skip Rendering Components&amp;quot; (useful if your trace doesn't involve any visual capturing). You can also select &amp;quot;Show Incremental Trace&amp;quot; to verify what commands are being recorded as you do them.&lt;br /&gt;
&lt;br /&gt;
'''2. Do something'''&lt;br /&gt;
&lt;br /&gt;
Do whatever you were wanting to have recorded in Paraview.&lt;br /&gt;
&lt;br /&gt;
'''3. Stop Trace'''&lt;br /&gt;
&lt;br /&gt;
Tools &amp;gt; Stop Trace&lt;br /&gt;
&lt;br /&gt;
This will then bring up the resulting Python script, which you can then save to a file, edit, and rerun.&lt;br /&gt;
&lt;br /&gt;
=== Basic Changes to the Python Script ===&lt;br /&gt;
&lt;br /&gt;
When rerunning a trace script, it's just run via a normal Python interpreter in a Paraview-specific environment. All Python standard library packages are available to the script.&lt;br /&gt;
&lt;br /&gt;
This allows for significant flexibility in what can be done in and during the script. For example, in [https://github.com/PHASTA/utilities/blob/609efbe8879b7c341bc2dc29d14df4e90ad1084d/general/ParaView_Interpolation/BatchSync.py#L131 this Paraview script file], originally created from a trace, we use a for loop to do the same action multiple times. Additionally, the &amp;lt;code&amp;gt;os&amp;lt;/code&amp;gt; package is used to symlink and delete files so that the data read in during the for loop changes.&lt;br /&gt;
&lt;br /&gt;
==== Python Versions ====&lt;br /&gt;
'''Note: The version of Python used by Paraview is often different than the version installed on your system.'''&lt;br /&gt;
&lt;br /&gt;
To determine the version of Python used by Paraview, you can select &amp;quot;View &amp;gt; Python Shell&amp;quot; in Paraview, which will show the Python interpreter. The version of Python used should be printed on the shell at the top. If not, run &amp;lt;code&amp;gt;import sys; print(sys.version)&amp;lt;/code&amp;gt; to display the Python version information. If you're using Paraview through the server/client mode, you ''MUST'' do this while attached to a &amp;lt;code&amp;gt;pvserver&amp;lt;/code&amp;gt;; the version of Python on your local Paraview client is not guaranteed to match the version on the server.&lt;br /&gt;
&lt;br /&gt;
The Python Shell in Paraview can also be used to verify if certain functions will work and what libraries are available (such as [https://numpy.org/ numpy]).&lt;br /&gt;
&lt;br /&gt;
=== Running a Trace ===&lt;br /&gt;
Running a pvTrace, whether on the same or a different dataset, requires only a few key steps. The following assumes the trace is being run on Cooley at ALCF, though many of the steps carry over to other machines.&lt;br /&gt;
&lt;br /&gt;
If you are running the trace on a dataset that is not the same as the one for which you created the trace, it is good practice to always check your script and inputs. Make sure you have all of the restart and geombc files that you will need, and that you are pointing to the correct locations in the python script (it is recommended that you use absolute paths to reduce the chances for error). Also check your output file name and location.&lt;br /&gt;
&lt;br /&gt;
Once your python script is ready, you need to get an interactive allocation on Cooley and load the same version of paraview with which the trace was created:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;soft add +paraview-5.5.2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This loads &amp;lt;code&amp;gt;pvpython&amp;lt;/code&amp;gt;, which is used to run the Python trace script. Running the script is a simple modification of a standard Python command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;pvpython Trace_Script.py&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1952</id>
		<title>Paraview Trace</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1952"/>
				<updated>2023-06-17T14:29:25Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Basic Changes to the Python Script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Paraview traces are a way to record a set of actions as a Python script that can later be applied to different datasets, or over a loop of datasets, automatically. The following will briefly explain the creation, cleaning, and running of a Python trace in Paraview (often shortened to pvTrace or similar).&lt;br /&gt;
&lt;br /&gt;
=== Creation ===&lt;br /&gt;
General steps are to start a trace, do whatever you're wanting to record, then stop the trace.&lt;br /&gt;
&lt;br /&gt;
'''1. Start Trace'''&lt;br /&gt;
&lt;br /&gt;
Tools &amp;gt; Start Trace&lt;br /&gt;
&lt;br /&gt;
This will bring up a dialog with different options for which things to record. Of note is &amp;quot;Skip Rendering Components&amp;quot; (useful if your trace doesn't involve any visual capturing). You can also select &amp;quot;Show Incremental Trace&amp;quot; to verify what commands are being recorded as you do them.&lt;br /&gt;
&lt;br /&gt;
'''2. Do something'''&lt;br /&gt;
&lt;br /&gt;
Do whatever you were wanting to have recorded in Paraview.&lt;br /&gt;
&lt;br /&gt;
'''3. Stop Trace'''&lt;br /&gt;
&lt;br /&gt;
Tools &amp;gt; Stop Trace&lt;br /&gt;
&lt;br /&gt;
This will then bring up the resulting Python script, which you can then save to a file, edit, and rerun.&lt;br /&gt;
&lt;br /&gt;
=== Basic Changes to the Python Script ===&lt;br /&gt;
&lt;br /&gt;
When rerunning a trace script, it's just run via a normal Python interpreter in a Paraview-specific environment. All Python standard libraries are available to the script. &lt;br /&gt;
&lt;br /&gt;
==== Python Versions ====&lt;br /&gt;
'''Note: The version of Python used by Paraview is often different than the version installed on your system.'''&lt;br /&gt;
&lt;br /&gt;
To determine the version of Python used by Paraview, you can select &amp;quot;View &amp;gt; Python Shell&amp;quot; in Paraview, which will show the Python interpreter. The version of Python used should be printed on the shell at the top. If not, run &amp;lt;code&amp;gt;import sys; print(sys.version)&amp;lt;/code&amp;gt; to display the Python version information. If you're using Paraview through the server/client mode, you ''MUST'' do this while attached to a &amp;lt;code&amp;gt;pvserver&amp;lt;/code&amp;gt;; the version of Python on your local Paraview client is not guaranteed to match the version on the server.&lt;br /&gt;
&lt;br /&gt;
The Python Shell in Paraview can also be used to verify if certain functions will work and what libraries are available (such as [https://numpy.org/ numpy]).&lt;br /&gt;
&lt;br /&gt;
=== Running a Trace ===&lt;br /&gt;
Running a pvTrace, whether on the same or a different dataset, requires only a few key steps. The following assumes the trace is being run on Cooley at ALCF, though many of the steps carry over to other machines.&lt;br /&gt;
&lt;br /&gt;
If you are running the trace on a dataset that is not the same as the one for which you created the trace, it is good practice to always check your script and inputs. Make sure you have all of the restart and geombc files that you will need, and that you are pointing to the correct locations in the python script (it is recommended that you use absolute paths to reduce the chances for error). Also check your output file name and location.&lt;br /&gt;
&lt;br /&gt;
Once your python script is ready, you need to get an interactive allocation on Cooley and load the same version of paraview with which the trace was created:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;soft add +paraview-5.5.2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This loads &amp;lt;code&amp;gt;pvpython&amp;lt;/code&amp;gt;, which is used to run the Python trace script. Running the script is a simple modification of a standard Python command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;pvpython Trace_Script.py&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1951</id>
		<title>Paraview Trace</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1951"/>
				<updated>2023-06-17T14:28:50Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Basic Changes to the Python Script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Paraview traces are a way to record a set of actions as a Python script that can later be applied to different datasets, or over a loop of datasets, automatically. The following will briefly explain the creation, cleaning, and running of a Python trace in Paraview (often shortened to pvTrace or similar).&lt;br /&gt;
&lt;br /&gt;
=== Creation ===&lt;br /&gt;
General steps are to start a trace, do whatever you're wanting to record, then stop the trace.&lt;br /&gt;
&lt;br /&gt;
'''1. Start Trace'''&lt;br /&gt;
&lt;br /&gt;
Tools &amp;gt; Start Trace&lt;br /&gt;
&lt;br /&gt;
This will bring up a dialog with different options for which things to record. Of note is &amp;quot;Skip Rendering Components&amp;quot; (useful if your trace doesn't involve any visual capturing). You can also select &amp;quot;Show Incremental Trace&amp;quot; to verify what commands are being recorded as you do them.&lt;br /&gt;
&lt;br /&gt;
'''2. Do something'''&lt;br /&gt;
&lt;br /&gt;
Do whatever you were wanting to have recorded in Paraview.&lt;br /&gt;
&lt;br /&gt;
'''3. Stop Trace'''&lt;br /&gt;
&lt;br /&gt;
Tools &amp;gt; Stop Trace&lt;br /&gt;
&lt;br /&gt;
This will then bring up the resulting Python script, which you can then save to a file, edit, and rerun.&lt;br /&gt;
&lt;br /&gt;
=== Basic Changes to the Python Script ===&lt;br /&gt;
&lt;br /&gt;
When rerunning a trace script, it's just run via a normal Python interpreter with a slightly modified path. All Python standard libraries are available to the script. &lt;br /&gt;
&lt;br /&gt;
==== Python Versions ====&lt;br /&gt;
'''Note: The version of Python used by Paraview is often different than the version installed on your system.'''&lt;br /&gt;
&lt;br /&gt;
To determine the version of Python used by Paraview, you can select &amp;quot;View &amp;gt; Python Shell&amp;quot; in Paraview, which will show the Python interpreter. The version of Python used should be printed on the shell at the top. If not, run &amp;lt;code&amp;gt;import sys; print(sys.version)&amp;lt;/code&amp;gt; to display the Python version information. If you're using Paraview through the server/client mode, you ''MUST'' do this while attached to a &amp;lt;code&amp;gt;pvserver&amp;lt;/code&amp;gt;; the version of Python on your local Paraview client is not guaranteed to match the version on the server.&lt;br /&gt;
&lt;br /&gt;
The Python Shell in Paraview can also be used to verify if certain functions will work and what libraries are available (such as [https://numpy.org/ numpy]).&lt;br /&gt;
&lt;br /&gt;
=== Running a Trace ===&lt;br /&gt;
Running a pvTrace, whether on the same or a different dataset, requires only a few key steps. The following assumes the trace is being run on Cooley at ALCF, though many of the steps carry over to other machines.&lt;br /&gt;
&lt;br /&gt;
If you are running the trace on a dataset that is not the same as the one for which you created the trace, it is good practice to always check your script and inputs. Make sure you have all of the restart and geombc files that you will need, and that you are pointing to the correct locations in the python script (it is recommended that you use absolute paths to reduce the chances for error). Also check your output file name and location.&lt;br /&gt;
&lt;br /&gt;
Once your python script is ready, you need to get an interactive allocation on Cooley and load the same version of paraview with which the trace was created:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;soft add +paraview-5.5.2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This loads &amp;lt;code&amp;gt;pvpython&amp;lt;/code&amp;gt;, which is used to run the Python trace script. Running the script is a simple modification of a standard Python command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;pvpython Trace_Script.py&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1950</id>
		<title>Paraview Trace</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1950"/>
				<updated>2023-06-17T14:19:21Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Creation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Paraview traces are a way to record a set of actions as a Python script that can later be applied to different datasets, or over a loop of datasets, automatically. The following will briefly explain the creation, cleaning, and running of a Python trace in Paraview (often shortened to pvTrace or similar).&lt;br /&gt;
&lt;br /&gt;
=== Creation ===&lt;br /&gt;
General steps are to start a trace, do whatever you're wanting to record, then stop the trace.&lt;br /&gt;
&lt;br /&gt;
'''1. Start Trace'''&lt;br /&gt;
&lt;br /&gt;
Tools &amp;gt; Start Trace&lt;br /&gt;
&lt;br /&gt;
This will bring up a dialog with different options for which things to record. Of note is &amp;quot;Skip Rendering Components&amp;quot; (useful if your trace doesn't involve any visual capturing). You can also select &amp;quot;Show Incremental Trace&amp;quot; to verify what commands are being recorded as you do them.&lt;br /&gt;
&lt;br /&gt;
'''2. Do something'''&lt;br /&gt;
&lt;br /&gt;
Do whatever you were wanting to have recorded in Paraview.&lt;br /&gt;
&lt;br /&gt;
'''3. Stop Trace'''&lt;br /&gt;
&lt;br /&gt;
Tools &amp;gt; Stop Trace&lt;br /&gt;
&lt;br /&gt;
This will then bring up the resulting Python script, which you can then save to a file, edit, and rerun.&lt;br /&gt;
&lt;br /&gt;
=== Basic Changes to the Python Script ===&lt;br /&gt;
&lt;br /&gt;
=== Running a Trace ===&lt;br /&gt;
Running a pvTrace, whether on the same or a different dataset, requires only a few key steps. The following assumes the trace is being run on Cooley at ALCF, though many of the steps carry over to other machines.&lt;br /&gt;
&lt;br /&gt;
If you are running the trace on a dataset that is not the same as the one for which you created the trace, it is good practice to always check your script and inputs. Make sure you have all of the restart and geombc files that you will need, and that you are pointing to the correct locations in the python script (it is recommended that you use absolute paths to reduce the chances for error). Also check your output file name and location.&lt;br /&gt;
&lt;br /&gt;
Once your python script is ready, you need to get an interactive allocation on Cooley and load the same version of paraview with which the trace was created:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;soft add +paraview-5.5.2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This loads &amp;lt;code&amp;gt;pvpython&amp;lt;/code&amp;gt;, which is used to run the Python trace script. Running the script is a simple modification of a standard Python command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;pvpython Trace_Script.py&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1949</id>
		<title>Paraview Trace</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1949"/>
				<updated>2023-06-17T14:19:01Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Creation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Paraview traces are a way to record a set of actions as a Python script that can later be applied to different datasets, or over a loop of datasets, automatically. The following will briefly explain the creation, cleaning, and running of a Python trace in Paraview (often shortened to pvTrace or similar).&lt;br /&gt;
&lt;br /&gt;
=== Creation ===&lt;br /&gt;
General steps are to start a trace, do whatever you're wanting to record, then stop the trace.&lt;br /&gt;
&lt;br /&gt;
'''1. Start Trace'''&lt;br /&gt;
&lt;br /&gt;
Tools &amp;gt; Start Trace&lt;br /&gt;
&lt;br /&gt;
This will bring up a dialog with different options for which things to record. Of note is &amp;quot;Skip Rendering Components&amp;quot; (useful if your trace doesn't involve any visual capturing). You can also select &amp;quot;Show Incremental Trace&amp;quot; to verify what commands are being recorded as you do them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''2. Do something'''&lt;br /&gt;
Do whatever you were wanting to have recorded in Paraview.&lt;br /&gt;
&lt;br /&gt;
'''3. Stop Trace'''&lt;br /&gt;
&lt;br /&gt;
Tools &amp;gt; Stop Trace&lt;br /&gt;
&lt;br /&gt;
This will then bring up the resulting Python script, which you can then save to a file, edit, and rerun.&lt;br /&gt;
&lt;br /&gt;
=== Basic Changes to the Python Script ===&lt;br /&gt;
&lt;br /&gt;
=== Running a Trace ===&lt;br /&gt;
Running a pvTrace, whether on the same or a different dataset, requires only a few key steps. The following assumes the trace is being run on Cooley at ALCF, though many of the steps carry over to other machines.&lt;br /&gt;
&lt;br /&gt;
If you are running the trace on a dataset that is not the same as the one for which you created the trace, it is good practice to always check your script and inputs. Make sure you have all of the restart and geombc files that you will need, and that you are pointing to the correct locations in the python script (it is recommended that you use absolute paths to reduce the chances for error). Also check your output file name and location.&lt;br /&gt;
&lt;br /&gt;
Once your python script is ready, you need to get an interactive allocation on Cooley and load the same version of paraview with which the trace was created:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;soft add +paraview-5.5.2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This loads &amp;lt;code&amp;gt;pvpython&amp;lt;/code&amp;gt;, which is used to run the Python trace script. Running the script is a simple modification of a standard Python command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;pvpython Trace_Script.py&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=UNIX&amp;diff=1946</id>
		<title>UNIX</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=UNIX&amp;diff=1946"/>
				<updated>2023-03-07T05:34:36Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Add &amp;quot;visual guide to ssh tunnels&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Most of our systems (and general HPC resources) run some UNIX derivative. Much of the software is command line based, so it's worthwhile to learn the basics. &lt;br /&gt;
&lt;br /&gt;
There are tons of free resources on the web for getting started ([https://www.tipsandtricks-hq.com/basic-unix-commands-list-366#:~:text=File%2FDirectory%20operation%20related%20Unix%20Commands.%201%20cp%20%E2%80%93,which%20is%20the%20pathname%20of%20a%20directory.%20 Basic UNIX Commands]). There should also be a &amp;quot;for dummies&amp;quot; book in the lab. &lt;br /&gt;
&lt;br /&gt;
As you find resources that are helpful, please update this page.&lt;br /&gt;
&lt;br /&gt;
== Connecting (SSH) ==&lt;br /&gt;
Windows:&lt;br /&gt;
[http://www.chiark.greenend.org.uk/~sgtatham/putty PuTTY SSH Client]&lt;br /&gt;
[http://winscp.net/eng/index.php WinSCP file transfer tool]&lt;br /&gt;
&lt;br /&gt;
MacOS and Linux users can use [http://openssh.org/ OpenSSH] on the command line (it generally comes with the OS).&lt;br /&gt;
&lt;br /&gt;
=== Resources for setting up ssh keys ===&lt;br /&gt;
&lt;br /&gt;
Visual guide to how ssh-keys and &amp;lt;code&amp;gt;ssh-agent&amp;lt;/code&amp;gt; work: [http://www.unixwiz.net/techtips/ssh-agent-forwarding.html An Illustrated Guide to SSH Agent Forwarding]&lt;br /&gt;
&lt;br /&gt;
For setting up &amp;lt;code&amp;gt;ssh-agent&amp;lt;/code&amp;gt; (so you don't have to type your password over and over): [http://blog.joncairns.com/2013/12/understanding-ssh-agent-and-ssh-add/ Understanding ssh-agent and ssh-add]&lt;br /&gt;
&lt;br /&gt;
Script for automatically starting &amp;lt;code&amp;gt;ssh-agent&amp;lt;/code&amp;gt; on login of a machine (place in your &amp;lt;code&amp;gt;.profile&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;.bash_profile&amp;lt;/code&amp;gt;): [https://stackoverflow.com/a/18915067/7564988 StackOverflow: Start ssh-agent on login]&lt;br /&gt;
&lt;br /&gt;
=== Visual guide to ssh tunnels (i.e. port forwarding) ===&lt;br /&gt;
&lt;br /&gt;
A good resource to understand port-forwarding, and more advanced uses of port-forwarding: [https://robotmoon.com/ssh-tunnels/ A visual guide to SSH tunnels]&lt;br /&gt;
&lt;br /&gt;
== Command Line Basics ==&lt;br /&gt;
&lt;br /&gt;
[https://explainshell.com/# Explain Shell]: Copy/paste a CLI command, and it will tell you what all the flags mean&lt;br /&gt;
&lt;br /&gt;
[https://arstechnica.com/gadgets/2021/08/linux-bsd-command-line-101-using-awk-sed-and-grep-in-the-terminal/ How to ''think'' when using &amp;lt;code&amp;gt;grep&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;sed&amp;lt;/code&amp;gt;]: Article going over the basic uses of &amp;lt;code&amp;gt;grep&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;sed&amp;lt;/code&amp;gt; and how to think through their use cases.&lt;br /&gt;
&lt;br /&gt;
[https://www.tldp.org/LDP/abs/html/ Advanced Bash-Scripting Guide]&lt;br /&gt;
&lt;br /&gt;
== Graphical Sessions (VNC) ==&lt;br /&gt;
See [[VNC]]&lt;br /&gt;
&lt;br /&gt;
== File Permissions and ACL ==&lt;br /&gt;
See [[File_Permissions_Basics_and_ACL|File Permissions Basics and ACL]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=1917</id>
		<title>NAS</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=1917"/>
				<updated>2022-11-09T16:32:33Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Add mention of using MPICC_CC&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ Category: Compute Facilities]]&lt;br /&gt;
Wiki for information related to the '''NASA Advanced Supercomputing''' ('''NAS''') facility.&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
! Key&lt;br /&gt;
! Value&lt;br /&gt;
! Notes&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Machines&lt;br /&gt;
| Pleiades&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Lou&lt;br /&gt;
| Storage and Analysis&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Electra&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Endeavour&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Merope&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Aitken&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Job Submission System&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/portable-batch-system-(pbs)-overview_126.html PBS]&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Facility Documentation&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/ Support Knowledgebase]&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== How-To's ==&lt;br /&gt;
&lt;br /&gt;
=== How-To's in Separate Wikis ===&lt;br /&gt;
&lt;br /&gt;
* [[Setting_Default_File_Permissions#NAS|Setting Default File Permissions]]&lt;br /&gt;
&lt;br /&gt;
=== Backup Data from Scratch Directories===&lt;br /&gt;
&lt;br /&gt;
This is done simply by copying data from the &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories to your home directory on Lou (&amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;). The &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories are mounted onto &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;, so transfers should be done on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It is recommended to mirror the directory structure of your &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directory on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt; so the data can easily be restored to its original state. This is especially important if you use symlinks, as they are path dependent and will break if either the source file or the symlink itself is not in the correct location.&lt;br /&gt;
&lt;br /&gt;
This can be done with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt;, but it is recommended to use NASA's in-house utility &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; will automatically perform parallel file transfers, data integrity checks and repairs, and syncing features similar to &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Commands:'''&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc -r -d --sync /nobackup/jrwrigh7/models/STGFlatPlate/STFM_Tet_dz4-10_dx15 .&lt;br /&gt;
&lt;br /&gt;
This will copy the directory &amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt; to the current location (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;). The flags are as follows:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-r&amp;lt;/code&amp;gt;: Recursively copy files from the source directory&lt;br /&gt;
* &amp;lt;code&amp;gt;-d&amp;lt;/code&amp;gt;: Create required directories that don't already exist. Equivalent of the &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; flag for &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;--sync&amp;lt;/code&amp;gt;: Only copy over &amp;quot;new&amp;quot; files, where &amp;quot;new&amp;quot; means any change to the modification time or file size.&lt;br /&gt;
** If a file exists on destination (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;), but not source (&amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt;), it will not be copied back to source nor will it be deleted to match the state of source.&lt;br /&gt;
&lt;br /&gt;
Once this command is submitted, the transfer process is backgrounded. Progress can be viewed by running &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. Additionally, you will receive an email when the transfer job is completed.&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc --stop --id [shiftc job ID]&lt;br /&gt;
&lt;br /&gt;
This will stop the given shiftc job. The &amp;lt;code&amp;gt;[shiftc job ID]&amp;lt;/code&amp;gt; is the same number that appears beside the output of &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
More documentation for &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; can be found in its man page (&amp;lt;code&amp;gt;man shiftc&amp;lt;/code&amp;gt;) and on [https://www.nas.nasa.gov/hecc/support/kb/shift-transfer-tool-overview_300.html NAS's documentation website].&lt;br /&gt;
&lt;br /&gt;
=== Control MPI Rank Placement ===&lt;br /&gt;
==== Rank 1 Solo Node ====&lt;br /&gt;
To make the rank 1 MPI process take a node on its own, put this in the PBS directives:&lt;br /&gt;
&lt;br /&gt;
 #PBS -l select=1:mpiprocs=1:model=sky_ele+1:mpiprocs=40:model=sky_ele&lt;br /&gt;
&lt;br /&gt;
This will request 2 nodes: One will have the rank 1 process all by itself, and the other will have 40 MPI Processes (for all 40 CPU cores available on &amp;lt;code&amp;gt;sky_ele&amp;lt;/code&amp;gt; nodes). &lt;br /&gt;
&lt;br /&gt;
====Distribute Non-First Rank MPI Processes====&lt;br /&gt;
For controlling the placement of non-first rank MPI processes, use the &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; utility.&lt;br /&gt;
&lt;br /&gt;
For example, if we have requested 4 nodes and want 10 MPI processes per node, the &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; command needs to be modified to the following:&lt;br /&gt;
&lt;br /&gt;
 mpiexec -np 40 /u/scicon/tools/bin/mbind.x -n10 [executable]&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; is also socket aware, so it will distribute processes evenly between nodes ''and'' between CPUs in each node (NAS nodes have 2 CPUs per node).&lt;br /&gt;
For more information on &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt;, see its help flag (&amp;lt;code&amp;gt;mbind.x -help&amp;lt;/code&amp;gt;) or [https://www.nas.nasa.gov/hecc/support/kb/using-the-mbind-tool-for-pinning_288.html NAS's documentation website].&lt;br /&gt;
&lt;br /&gt;
=== Common commands ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;node_stats.sh&amp;lt;/code&amp;gt;: Displays how many nodes are available or actively running jobs&lt;br /&gt;
* &amp;lt;code&amp;gt;tracejobssh&amp;lt;/code&amp;gt;: Helps to answer &amp;quot;Why isn't my job running?&amp;quot;. Part of the [https://github.com/PHASTA/utilities git repo].&lt;br /&gt;
&lt;br /&gt;
=== See Priority &amp;quot;Score&amp;quot; in Queue ===&lt;br /&gt;
&lt;br /&gt;
To see your priority &amp;quot;score&amp;quot; in PBS, use &amp;lt;code&amp;gt;qstat -W o=+pri&amp;lt;/code&amp;gt; to add the &amp;quot;Priority&amp;quot; column to the output of &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== Priority Scoring (as of 2021-01-22) ====&lt;br /&gt;
&lt;br /&gt;
* Job priority score grows by 1 every 12 hours&lt;br /&gt;
* We are capped at a max score of 20 per job&lt;br /&gt;
** Note that other users/groups using NAS may start with higher priority and grow higher than 20&lt;br /&gt;
** The result is that it's quite difficult to get large jobs running&lt;br /&gt;
* If you don't have any jobs running, you get an additional +10 to the score&lt;br /&gt;
** This score bump is removed as soon as you have a running job&lt;br /&gt;
&lt;br /&gt;
=== Compiling ===&lt;br /&gt;
* You will generally want to use &amp;lt;code&amp;gt;module load hpe-mpi/mpt comp-intel&amp;lt;/code&amp;gt; for compiling&lt;br /&gt;
* Sometimes, &amp;lt;code&amp;gt;mpi{cc,cxx,f90}&amp;lt;/code&amp;gt; will not pick the Intel compilers by default. You can check this by running &amp;lt;code&amp;gt;mpi{cc,cxx,f90} --version&amp;lt;/code&amp;gt; to verify the compiler it links to.&lt;br /&gt;
** To fix this, you can set &amp;lt;code&amp;gt;export MPICC_CC=icc MPICXX_CXX=icpc MPIF90_F90=ifort&amp;lt;/code&amp;gt; to force it to use the Intel compilers&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=File_Permissions_Basics_and_ACL&amp;diff=1916</id>
		<title>File Permissions Basics and ACL</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=File_Permissions_Basics_and_ACL&amp;diff=1916"/>
				<updated>2022-10-24T14:37:54Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* &amp;quot;Mode&amp;quot; Parameter */ Syntax fix&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Access-control Lists (ACL)''' are how POSIX-based file systems control '''file permissions'''. This article forms a primer to understanding how exactly the actions in [[Setting Default File Permissions]] work and how to debug them.&lt;br /&gt;
&lt;br /&gt;
In short, ACL helps to control which users can read, write, or execute certain file objects, which are defined as anything in the file system (regular files, directories, symlinks, etc.). Note that ACL alone does ''not'' fully determine the permissions given to newly ''created'' file objects. &lt;br /&gt;
In analogy, ACL is akin to the guest list at a club. Continuing the analogy, the OS kernel is the bouncer of the club, determining who can get in to which section of the club based on the ACL.&lt;br /&gt;
&lt;br /&gt;
Much of the information presented here is written out in the [https://linux.die.net/man/5/acl ACL manpage]. This wiki hopefully serves as a primer for that manpage, as the terms it uses are different from the ones commonly presented when dealing with normal file permissions. I recommend reading the manpage if you're having to debug a permissions issue; it's not too long.&lt;br /&gt;
&lt;br /&gt;
== Basics of Unix/POSIX File Permissions ==&lt;br /&gt;
&lt;br /&gt;
=== What are they? ===&lt;br /&gt;
&lt;br /&gt;
* All files and directories have permissions assigned to them via ACL entries&lt;br /&gt;
* There are three different &amp;quot;levels&amp;quot; of file permissions in a standard POSIX file system: read (&amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;), write (&amp;lt;code&amp;gt;w&amp;lt;/code&amp;gt;), and execute (&amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt;). &lt;br /&gt;
** Read allows viewing the contents of the file/directory, and copying the files&lt;br /&gt;
** Write allows rewriting and deleting files. For a directory with write permissions, it also allows creation of subdirectories and creation of new files&lt;br /&gt;
** Execute allows files to be executed directly. &lt;br /&gt;
*** Note that for script files (such as &amp;lt;code&amp;gt;bash&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;python&amp;lt;/code&amp;gt;), they can still be run by passing the file to its interpreter if the file is readable (i.e. &amp;lt;code&amp;gt;bash non_executableScript.sh&amp;lt;/code&amp;gt; is still possible if &amp;lt;code&amp;gt;non_executableScript.sh&amp;lt;/code&amp;gt; has &amp;lt;code&amp;gt;rw-&amp;lt;/code&amp;gt; permissions).&lt;br /&gt;
* ACL entries apply the three different &amp;quot;levels&amp;quot; of file permission onto 6 possible tags. From the ACL manpage, they are:&lt;br /&gt;
&lt;br /&gt;
 ACL_USER_OBJ    The ACL_USER_OBJ entry denotes access rights for the file owner.&lt;br /&gt;
 &lt;br /&gt;
 ACL_USER        ACL_USER entries denote access rights for users identified by the entry's qualifier.&lt;br /&gt;
 &lt;br /&gt;
 ACL_GROUP_OBJ   The ACL_GROUP_OBJ entry denotes access rights for the file group.&lt;br /&gt;
 &lt;br /&gt;
 ACL_GROUP       ACL_GROUP entries denote access rights for groups identified by the entry's qualifier.&lt;br /&gt;
 &lt;br /&gt;
 ACL_MASK        The ACL_MASK entry denotes the maximum access rights that can be granted by entries of type ACL_USER, ACL_GROUP_OBJ, or ACL_GROUP.&lt;br /&gt;
 &lt;br /&gt;
 ACL_OTHER       The ACL_OTHER entry denotes access rights for processes that do not match any other entry in the ACL.&lt;br /&gt;
&lt;br /&gt;
:*&amp;lt;code&amp;gt;ACL_USER_OBJ&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;ACL_GROUP_OBJ&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;ACL_OTHER&amp;lt;/code&amp;gt; correspond to the standard three user categories: file owner, file group, and &amp;quot;other&amp;quot;. These three categories are the ones listed by &amp;lt;code&amp;gt;ls -l&amp;lt;/code&amp;gt;. &lt;br /&gt;
:**The file owner and file group are set to the user that originally created the file and their respective user group (though this can be changed using &amp;lt;code&amp;gt;chown&amp;lt;/code&amp;gt;).&lt;br /&gt;
:*&amp;quot;Others&amp;quot; simply refers to all other users that are not the file owner and are not members of the file group. &lt;br /&gt;
:*The other three tags, &amp;lt;code&amp;gt;ACL_MASK&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;ACL_GROUP&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;ACL_USER&amp;lt;/code&amp;gt;, are only used for [[File_Permissions_Basics_and_ACL#Custom Permissions|custom ACL entries]] that are set by a user.&lt;br /&gt;
:** Additionally, &amp;lt;code&amp;gt;ACL_MASK&amp;lt;/code&amp;gt; is only required if &amp;lt;code&amp;gt;ACL_GROUP&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;ACL_USER&amp;lt;/code&amp;gt; entries are utilized.&lt;br /&gt;
&lt;br /&gt;
* The file permissions for &amp;lt;code&amp;gt;ACL_USER_OBJ&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;ACL_GROUP_OBJ&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;ACL_OTHER&amp;lt;/code&amp;gt; together form the file &amp;lt;code&amp;gt;mode&amp;lt;/code&amp;gt; parameter&lt;br /&gt;
** This parameter is used by programs to set the permissions of files they create. This is discussed further in [[File_Permissions_Basics_and_ACL#Custom Permissions|the Custom Permissions section]].&lt;br /&gt;
&lt;br /&gt;
==== Viewing Them ====&lt;br /&gt;
&lt;br /&gt;
===== Using ls =====&lt;br /&gt;
The simplest and most common way of viewing the basic permissions of a file is by using &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;. As an example, if you run &amp;lt;code&amp;gt;ls -l&amp;lt;/code&amp;gt; on a directory, you might see:&lt;br /&gt;
&lt;br /&gt;
 drwxr-x---+ 2 jrwrigh7 a1983 4.0K 2020-07-04 08:09 test2&lt;br /&gt;
 -rw-r-x---+ 1 jrwrigh7 a1983   38 2020-07-02 12:38 test2file&lt;br /&gt;
 lrwxrwxrwx  1 jrwrigh7 a1983    9 2020-07-04 08:40 test2fileLink -&amp;gt; test2file&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The first block (&amp;lt;code&amp;gt;-rw-r-x---+&amp;lt;/code&amp;gt;) shows the permissions of the file for the three primary ACL tags (&amp;lt;code&amp;gt;ACL_USER_OBJ&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;ACL_GROUP_OBJ&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;ACL_OTHER&amp;lt;/code&amp;gt;), described below. The file owner is shown as &amp;lt;code&amp;gt;jrwrigh7&amp;lt;/code&amp;gt; and the file group is &amp;lt;code&amp;gt;a1983&amp;lt;/code&amp;gt;. These correspond to &amp;lt;code&amp;gt;ACL_USER_OBJ&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;ACL_GROUP_OBJ&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Permissions Block:'''&lt;br /&gt;
* First character displays what kind of file it is, be it a link (&amp;lt;code&amp;gt;l&amp;lt;/code&amp;gt;), directory (&amp;lt;code&amp;gt;d&amp;lt;/code&amp;gt;), regular file (&amp;lt;code&amp;gt;-&amp;lt;/code&amp;gt;), etc.&lt;br /&gt;
* The next 9 characters show the permissions for the &amp;lt;code&amp;gt;ACL_USER_OBJ&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;ACL_GROUP_OBJ&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;ACL_OTHER&amp;lt;/code&amp;gt;&lt;br /&gt;
** So for &amp;lt;code&amp;gt;test2file&amp;lt;/code&amp;gt; in the above output:&lt;br /&gt;
*** File Owner: &amp;lt;code&amp;gt;rw-&amp;lt;/code&amp;gt;&lt;br /&gt;
*** File Group: &amp;lt;code&amp;gt;r-x&amp;lt;/code&amp;gt;&lt;br /&gt;
*** Others: &amp;lt;code&amp;gt;---&amp;lt;/code&amp;gt;&lt;br /&gt;
* The last character is optional. A &amp;lt;code&amp;gt;+&amp;lt;/code&amp;gt; means that there are other permission rules not displayed. This is where [[Setting_Default_File_Permissions#Setting ACL Rules|ACL rules]] come into play. &lt;br /&gt;
&lt;br /&gt;
See [https://www.gnu.org/software/coreutils/manual/html_node/What-information-is-listed.html#What-information-is-listed the ls coreutils manual] for more information on the 'long' format for &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Using getfacl =====&lt;br /&gt;
&lt;br /&gt;
You can also use &amp;lt;code&amp;gt;getfacl&amp;lt;/code&amp;gt; to get a more detailed look at the permissions of a file or directory. Simply running &amp;lt;code&amp;gt;getfacl&amp;lt;/code&amp;gt; with a file object as its argument will show the permissions for that file object:&lt;br /&gt;
&lt;br /&gt;
 $ getfacl test2file&lt;br /&gt;
 # file: test2file&lt;br /&gt;
 # owner: jrwrigh7&lt;br /&gt;
 # group: a1983&lt;br /&gt;
 user::rw-&lt;br /&gt;
 group::r-x&lt;br /&gt;
 group:a1983:r-x&lt;br /&gt;
 mask::r-x&lt;br /&gt;
 other::---&lt;br /&gt;
&lt;br /&gt;
This command will show all permissions for a file object, including custom ones. &amp;lt;code&amp;gt;user::&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;group::&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;other::&amp;lt;/code&amp;gt; refer to the permissions set for the &amp;lt;code&amp;gt;ACL_USER_OBJ&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;ACL_GROUP_OBJ&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;ACL_OTHER&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
To set more specific permissions, a group or user can be inserted between the colons. This can be seen in the line &amp;lt;code&amp;gt;group:a1983:&amp;lt;/code&amp;gt; where the members of the group have &amp;lt;code&amp;gt;r-x&amp;lt;/code&amp;gt; permissions to the file. These permissions fall under the &amp;lt;code&amp;gt;ACL_USER&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;ACL_GROUP&amp;lt;/code&amp;gt; tags.&lt;br /&gt;
&lt;br /&gt;
If default ACL entries are used, they will also be displayed using &amp;lt;code&amp;gt;getfacl&amp;lt;/code&amp;gt;. An example is presented below for the directory &amp;lt;code&amp;gt;test&amp;lt;/code&amp;gt; (note that default ACL entries can ''only'' be applied to directories):&lt;br /&gt;
&lt;br /&gt;
 $ getfacl test&lt;br /&gt;
 # file: test&lt;br /&gt;
 # owner: jrwrigh7&lt;br /&gt;
 # group: a1983&lt;br /&gt;
 user::rwx&lt;br /&gt;
 group::r-x&lt;br /&gt;
 group:a1983:r-x&lt;br /&gt;
 mask::r-x&lt;br /&gt;
 other::---&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x&lt;br /&gt;
 default:group:a1983:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
 default:other::---&lt;br /&gt;
&lt;br /&gt;
=== Permissions as Octal Numbers ===&lt;br /&gt;
The file &amp;lt;code&amp;gt;mode&amp;lt;/code&amp;gt; is often conveyed in the form of three octal numbers (i.e. base-8 numbers). It is very similar to how PHASTA handles specifying boundary conditions using bitwise logic.&lt;br /&gt;
Within each octal digit, the read bit has value 4, the write bit has value 2, and the execute bit has value 1. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot; | Permissions&lt;br /&gt;
! Execute bit&lt;br /&gt;
! Write Bit&lt;br /&gt;
! Read Bit&lt;br /&gt;
! Octal Number equivalent&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | &amp;lt;code&amp;gt;---&amp;lt;/code&amp;gt;&lt;br /&gt;
| 0&lt;br /&gt;
| 0&lt;br /&gt;
| 0&lt;br /&gt;
| 0&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | &amp;lt;code&amp;gt;--x&amp;lt;/code&amp;gt;&lt;br /&gt;
| 1&lt;br /&gt;
| 0&lt;br /&gt;
| 0&lt;br /&gt;
| 1&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | &amp;lt;code&amp;gt;-w-&amp;lt;/code&amp;gt;&lt;br /&gt;
| 0&lt;br /&gt;
| 1&lt;br /&gt;
| 0&lt;br /&gt;
| 2&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | &amp;lt;code&amp;gt;-wx&amp;lt;/code&amp;gt;&lt;br /&gt;
| 1&lt;br /&gt;
| 1&lt;br /&gt;
| 0&lt;br /&gt;
| 3&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | &amp;lt;code&amp;gt;r--&amp;lt;/code&amp;gt;&lt;br /&gt;
| 0&lt;br /&gt;
| 0&lt;br /&gt;
| 1&lt;br /&gt;
| 4&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | &amp;lt;code&amp;gt;r-x&amp;lt;/code&amp;gt;&lt;br /&gt;
| 1&lt;br /&gt;
| 0&lt;br /&gt;
| 1&lt;br /&gt;
| 5&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | &amp;lt;code&amp;gt;rw-&amp;lt;/code&amp;gt;&lt;br /&gt;
| 0&lt;br /&gt;
| 1&lt;br /&gt;
| 1&lt;br /&gt;
| 6&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | &amp;lt;code&amp;gt;rwx&amp;lt;/code&amp;gt;&lt;br /&gt;
| 1&lt;br /&gt;
| 1&lt;br /&gt;
| 1&lt;br /&gt;
| 7&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;code&amp;gt;test2file&amp;lt;/code&amp;gt; as the example, its permissions block (&amp;lt;code&amp;gt;rw-r-x---&amp;lt;/code&amp;gt;) is stored as &amp;lt;code&amp;gt;110 101 000&amp;lt;/code&amp;gt;. Translating each triplet into an octal digit gives &amp;lt;code&amp;gt;6 5 0&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Another way to think about it is that the octal number equivalent of a permission set = 1*x + 2*w + 4*r, where r, w, and x equal 1 or 0 depending on whether they're set or not.&lt;br /&gt;
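&lt;br /&gt;
As a quick sanity check of the table above (using a throwaway file; &amp;lt;code&amp;gt;stat -c&amp;lt;/code&amp;gt; is the GNU coreutils syntax):&lt;br /&gt;
&lt;br /&gt;
```shell
# Set owner=rw- (6), group=r-x (5), other=--- (0), then read the mode back
f=$(mktemp)                  # throwaway scratch file
chmod 650 "$f"
stat -c '%A %a' "$f"         # prints "-rw-r-x--- 650"
rm "$f"
```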
&lt;br /&gt;
== Custom Permissions ==&lt;br /&gt;
&lt;br /&gt;
The other two tags, &amp;lt;code&amp;gt;ACL_GROUP&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;ACL_USER&amp;lt;/code&amp;gt;, are for more custom permissions and are not set by default on a blank system. &lt;br /&gt;
Different sets of permissions can be set on a per user and per group basis. &lt;br /&gt;
For more information on complex interactions (i.e. if a user has custom permissions set and is also a member of a group that has its own custom permissions), I recommend the ACL manpage's section on the [https://linux.die.net/man/5/acl Access Check Algorithm].&lt;br /&gt;
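&lt;br /&gt;
A minimal sketch of setting such an entry with &amp;lt;code&amp;gt;setfacl&amp;lt;/code&amp;gt; (the group name &amp;lt;code&amp;gt;a1983&amp;lt;/code&amp;gt; from the &amp;lt;code&amp;gt;getfacl&amp;lt;/code&amp;gt; examples above is reused purely as an illustration; this requires an ACL-enabled file system):&lt;br /&gt;
&lt;br /&gt;
```shell
# Grant group a1983 read/execute on a file via an ACL_GROUP entry
f=$(mktemp)
setfacl -m g:a1983:r-x "$f"   # group name is only an example
getfacl "$f"                  # now shows "group:a1983:r-x" plus an auto-created mask
rm "$f"
```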
&lt;br /&gt;
=== What determines the permissions for a new file? ===&lt;br /&gt;
&lt;br /&gt;
When a new file object is created, a new set of ACL entries must be created for that file object. There are three sources that determine what ACL entries will be set: &amp;lt;code&amp;gt;umask&amp;lt;/code&amp;gt;, the &amp;quot;mode&amp;quot; parameter used by the program creating the file, and the ACL entries of the parent directory of the file object being created.&lt;br /&gt;
&lt;br /&gt;
==== umask ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;umask&amp;lt;/code&amp;gt; is a per-user setting that masks out permission bits on any file created by that user. The user can change it at any time using the &amp;lt;code&amp;gt;umask&amp;lt;/code&amp;gt; command. To see what it is set to, simply run &amp;lt;code&amp;gt;umask&amp;lt;/code&amp;gt; with no arguments. The output is usually in octal notation, and represents the bits that will be ''blocked'', not the bits that will be set. So an output of &amp;lt;code&amp;gt;022&amp;lt;/code&amp;gt; will allow all owner permission bits to be set, but will block the write bit for the group and &amp;quot;others&amp;quot; categories.&lt;br /&gt;
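&lt;br /&gt;
A minimal demonstration (run in a subshell so your login &amp;lt;code&amp;gt;umask&amp;lt;/code&amp;gt; is left untouched):&lt;br /&gt;
&lt;br /&gt;
```shell
(
  umask 022                  # block the write bit for group and others
  f=$(mktemp -u)             # generate a scratch path without creating the file
  touch "$f"                 # touch requests mode 666; umask reduces it to 644
  stat -c '%a' "$f"          # prints "644"
  rm "$f"
)
```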
&lt;br /&gt;
==== &amp;quot;Mode&amp;quot; Parameter ====&lt;br /&gt;
&lt;br /&gt;
When a new file is created, the &amp;lt;code&amp;gt;open()&amp;lt;/code&amp;gt; syscall (among others) is used, and a file mode parameter must be chosen by the calling program. For example, &amp;lt;code&amp;gt;touch&amp;lt;/code&amp;gt; will request the mode &amp;lt;code&amp;gt;666&amp;lt;/code&amp;gt;, which (before &amp;lt;code&amp;gt;umask&amp;lt;/code&amp;gt; is applied) would give the file owner, file group, and &amp;quot;others&amp;quot; all &amp;lt;code&amp;gt;rw-&amp;lt;/code&amp;gt; permissions.&lt;br /&gt;
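&lt;br /&gt;
This is also why new directories come out executable while new files do not: under the same &amp;lt;code&amp;gt;umask&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt; requests mode 777 while &amp;lt;code&amp;gt;touch&amp;lt;/code&amp;gt; requests 666. A short sketch:&lt;br /&gt;
&lt;br /&gt;
```shell
(
  umask 022
  d=$(mktemp -d -u)             # scratch path only; let mkdir create the directory
  mkdir "$d"                    # mkdir requests 777; umask reduces it to 755
  touch "$d/file"               # touch requests 666; umask reduces it to 644
  stat -c '%a' "$d" "$d/file"   # prints "755" then "644"
  rm -r "$d"
)
```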
&lt;br /&gt;
==== ACL Defaults ====&lt;br /&gt;
&lt;br /&gt;
ACL default entries are set for a directory and are used for files and subdirectories created inside that directory. Note that ACL default entries can ''only'' be set for directories. &lt;br /&gt;
&lt;br /&gt;
=== How are new file permissions set? ===&lt;br /&gt;
This is simply a paraphrasing/rewording of the &amp;quot;Object Creation and Default ACLs&amp;quot; section of the [https://linux.die.net/man/5/acl ACL manpage].&lt;br /&gt;
&lt;br /&gt;
When creating a file/subdirectory in a parent directory:&lt;br /&gt;
&lt;br /&gt;
* If the parent directory '''does have default ACL rules''', only the ACL default entries of the parent directory and &amp;quot;mode&amp;quot; parameter are used:&lt;br /&gt;
** The new file/subdirectory first inherits the ACL default entries of its parent directory as its normal ACL entries&lt;br /&gt;
** The new file/subdirectory has its ACL entries adjusted such that no permissions exceed the &amp;quot;mode&amp;quot; parameter. &lt;br /&gt;
*** The &amp;lt;code&amp;gt;ACL_USER_OBJ&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;ACL_OTHER&amp;lt;/code&amp;gt; are changed directly&lt;br /&gt;
*** The &amp;lt;code&amp;gt;ACL_GROUP&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;ACL_GROUP_OBJ&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;ACL_USER&amp;lt;/code&amp;gt; are changed ''through'' adjusting the &amp;lt;code&amp;gt;ACL_MASK&amp;lt;/code&amp;gt;&lt;br /&gt;
** Additionally, if a new subdirectory is created, it will inherit its parent's default ACL rules. Note that a file ''cannot'' have default ACL rules set.&lt;br /&gt;
&lt;br /&gt;
* If the parent directory '''does not have default ACL rules''', only &amp;lt;code&amp;gt;umask&amp;lt;/code&amp;gt; and &amp;quot;mode&amp;quot; parameter are used:&lt;br /&gt;
** The new file/subdirectory's &amp;lt;code&amp;gt;ACL_USER_OBJ&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;ACL_GROUP_OBJ&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;ACL_OTHER&amp;lt;/code&amp;gt; are set from the &amp;quot;mode&amp;quot; parameter with the bits in &amp;lt;code&amp;gt;umask&amp;lt;/code&amp;gt; cleared (a bitwise AND with the complement of &amp;lt;code&amp;gt;umask&amp;lt;/code&amp;gt;)&lt;br /&gt;
** The new file/subdirectory's permissions are therefore never looser than the &amp;quot;mode&amp;quot; parameter&lt;br /&gt;
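&lt;br /&gt;
The default-ACL branch above can be observed directly (a sketch assuming &amp;lt;code&amp;gt;setfacl&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;getfacl&amp;lt;/code&amp;gt; are available, an ACL-enabled file system, and the example group &amp;lt;code&amp;gt;a1983&amp;lt;/code&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
```shell
d=$(mktemp -d)
setfacl -d -m g:a1983:rwx "$d"   # set a default ACL entry on the directory
touch "$d/newfile"               # the new file inherits the default entries,
getfacl "$d/newfile"             # but touch's mode (666) limits the mask, so the
                                 # inherited rwx is shown with "#effective:rw-"
rm -r "$d"
```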
&lt;br /&gt;
Note that this all means that if you have a default ACL rule that gives execute permissions to a group, the group will ''not'' have execute permissions by default ''unless'' the &amp;quot;mode&amp;quot; parameter also has execute permissions. Most compilers will set the execute permission bit in the &amp;quot;mode&amp;quot; parameter of the executables they produce, but other files will not.&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=1915</id>
		<title>NAS</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=1915"/>
				<updated>2022-10-12T22:35:25Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Add Aitken to machine list&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ Category: Compute Facilities]]&lt;br /&gt;
Wiki for information related to the '''NASA Advanced Supercomputing''' ('''NAS''') facility.&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
! Key&lt;br /&gt;
! Value&lt;br /&gt;
! Notes&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Machines&lt;br /&gt;
| Pleiades&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Lou&lt;br /&gt;
| Storage and Analysis&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Electra&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Endeavour&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Merope&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Aitken&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Job Submission System&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/portable-batch-system-(pbs)-overview_126.html PBS]&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Facility Documentation&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/ Support Knowledgebase]&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== How-To's ==&lt;br /&gt;
&lt;br /&gt;
=== How-To's in Separate Wiki's ===&lt;br /&gt;
&lt;br /&gt;
* [[Setting_Default_File_Permissions#NAS|Setting Default File Permissions]]&lt;br /&gt;
&lt;br /&gt;
=== Backup Data from Scratch Directories===&lt;br /&gt;
&lt;br /&gt;
This is done simply by copying data from the &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories to your home directory on Lou (&amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;). The &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories are mounted onto &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;, so transfers should be done on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It is recommended to mirror the directory structure of your &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directory on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt; to allow for the data to be easily recovered back to its original state. This is especially important if you use symlinks (as they are path dependent and will break if either the source file or the symlink itself is not in the correct location).&lt;br /&gt;
&lt;br /&gt;
This can be done with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt;, but it is recommended to use NASA's in-house utility &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; will automatically perform parallel file transfers, data integrity checks and repairs, and syncing features similar to &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Commands:'''&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc -r -d --sync /nobackup/jrwrigh7/models/STGFlatPlate/STFM_Tet_dz4-10_dx15 .&lt;br /&gt;
&lt;br /&gt;
This will copy the directory &amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt; to the current location (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;). The flags do as follows&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-r&amp;lt;/code&amp;gt;: Recursively copy files from the source&lt;br /&gt;
* &amp;lt;code&amp;gt;-d&amp;lt;/code&amp;gt;: Create required directories that don't already exist. Equivalent of the &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; flag for &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;--sync&amp;lt;/code&amp;gt;: Only copy over &amp;quot;new&amp;quot; files, where &amp;quot;new&amp;quot; means any change to the modification time or file size. &lt;br /&gt;
** If a file exists on destination (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;), but not source (&amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt;), it will not be copied back to source nor will it be deleted to match the state of source.&lt;br /&gt;
&lt;br /&gt;
Once this command is submitted, the transfer process will be backgrounded. Progress can be viewed by running &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. Additionally, you will receive an email when the transfer job is completed.&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc --stop --id [shiftc job ID]&lt;br /&gt;
&lt;br /&gt;
This will stop the given shiftc job. The &amp;lt;code&amp;gt;[shiftc job ID]&amp;lt;/code&amp;gt; is the same number that appears beside the output of &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
More documentation for &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; can be found in its man page (&amp;lt;code&amp;gt;man shiftc&amp;lt;/code&amp;gt;) and on [https://www.nas.nasa.gov/hecc/support/kb/shift-transfer-tool-overview_300.html NAS's documentation website].&lt;br /&gt;
&lt;br /&gt;
=== Control MPI Rank Placement ===&lt;br /&gt;
==== Rank 1 Solo Node ====&lt;br /&gt;
To make the rank 1 MPI process take a node on its own, put this in the PBS directives:&lt;br /&gt;
&lt;br /&gt;
 #PBS -l select=1:mpiprocs=1:model=sky_ele+1:mpiprocs=40:model=sky_ele&lt;br /&gt;
&lt;br /&gt;
This will request 2 nodes: One will have the rank 1 process all by itself, and the other will have 40 MPI Processes (for all 40 CPU cores available on &amp;lt;code&amp;gt;sky_ele&amp;lt;/code&amp;gt; nodes). &lt;br /&gt;
&lt;br /&gt;
====Distribute Non-First Rank MPI Processes====&lt;br /&gt;
For controlling the placement of non-first rank MPI processes, use the &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; utility.&lt;br /&gt;
&lt;br /&gt;
For example, if we have requested 4 nodes and want 10 MPI processes per node, the &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; command needs to be modified to the following:&lt;br /&gt;
&lt;br /&gt;
 mpiexec -np 40 /u/scicon/tools/bin/mbind.x -n10 [executable]&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; is also socket aware, so it will distribute processes evenly between nodes ''and'' between the CPUs in each node (NAS nodes have 2 CPUs per node).&lt;br /&gt;
For more information on &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt;, see its help flag (&amp;lt;code&amp;gt;mbind.x -help&amp;lt;/code&amp;gt;) or [https://www.nas.nasa.gov/hecc/support/kb/using-the-mbind-tool-for-pinning_288.html NAS's documentation website].&lt;br /&gt;
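&lt;br /&gt;
As an illustrative job-script fragment tying the two pieces together (the &amp;lt;code&amp;gt;sky_ele&amp;lt;/code&amp;gt; model, walltime, and executable name are placeholders):&lt;br /&gt;
&lt;br /&gt;
```shell
#PBS -l select=4:ncpus=40:mpiprocs=10:model=sky_ele
#PBS -l walltime=02:00:00

# 4 nodes x 10 ranks per node = 40 total ranks, pinned by mbind.x
mpiexec -np 40 /u/scicon/tools/bin/mbind.x -n10 ./a.out
```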
&lt;br /&gt;
=== Common commands ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;node_stats.sh&amp;lt;/code&amp;gt;: Displays how many nodes are available or actively running jobs&lt;br /&gt;
* &amp;lt;code&amp;gt;tracejobssh&amp;lt;/code&amp;gt;: Helps to answer &amp;quot;Why isn't my job running?&amp;quot;. Part of the [https://github.com/PHASTA/utilities git repo].&lt;br /&gt;
&lt;br /&gt;
=== See Priority &amp;quot;Score&amp;quot; in Queue ===&lt;br /&gt;
&lt;br /&gt;
To see your priority &amp;quot;score&amp;quot; in PBS, use &amp;lt;code&amp;gt;qstat -W o=+pri&amp;lt;/code&amp;gt; to add the &amp;quot;Priority&amp;quot; column to the output of &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== Priority Scoring (as of 2021-01-22) ====&lt;br /&gt;
&lt;br /&gt;
* Job priority score grows by 1 every 12 hours&lt;br /&gt;
* We are capped at a max score of 20 per job&lt;br /&gt;
** Note that other users/groups using NAS may start with higher priority and grow higher than 20&lt;br /&gt;
** The result is that it's quite difficult to get large jobs running&lt;br /&gt;
* If you don't have any jobs running, you get an additional +10 to the score&lt;br /&gt;
** This score bump is removed as soon as you have a running job&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1914</id>
		<title>ALCF/Archiving Data at ALCF</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1914"/>
				<updated>2022-10-04T15:44:17Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Add further documentation section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ALCF]]'s High Performance Storage System (HPSS) is a robotic tape drive system used for large amounts of archival data storage that will not be accessed often. The system has two interfaces listed in the [https://www.alcf.anl.gov/support/user-guides/data-management/filesystem-and-storage/hpss/index.html ALCF documentation] that can be used, &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;htar&amp;lt;/code&amp;gt;. This wiki focuses on the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; interface. &lt;br /&gt;
&lt;br /&gt;
== HSI Basics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; is a utility to interface with the HPSS system. It looks and operates much like the typical bash command lines that we are used to, but with some added complexities. When you enter &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt;, the system will place you into your home HPSS space at &amp;lt;code&amp;gt;/home/username&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; keeps track of both your location in this HPSS space and also your location in the &amp;quot;local&amp;quot; system that you are running &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; from. The &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; system will automatically set the &amp;quot;local&amp;quot; directory location to be the location that you entered the utility from. &lt;br /&gt;
&lt;br /&gt;
Navigation through HPSS operates the same as a normal command line, with &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;, among others, being valid commands for navigation through HPSS. If you need to navigate through the &amp;quot;local&amp;quot; directories, this can still be done by appending an &amp;quot;l&amp;quot; to the front of these standard commands (i.e. &amp;lt;code&amp;gt;lls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;lcd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;lmkdir&amp;lt;/code&amp;gt;). This can be useful if you entered &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; at the incorrect point or wish to archive data from multiple locations.&lt;br /&gt;
&lt;br /&gt;
It should be noted that &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; does ''not'' support tab-complete or up-arrowing for past commands. It is recommended that you enter the utility with a defined plan in order to reduce the amount of annoyance that the lack of these luxuries can cause.&lt;br /&gt;
&lt;br /&gt;
== Archiving of Data ==&lt;br /&gt;
&lt;br /&gt;
Once the destination directory for the data has been created and/or navigated to, there are a few options and considerations to actually archive the data. These are most notably:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; is the most basic archiving tool, and will overwrite any versions of the files being archived already on HPSS. &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; is a conditional version of &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; that will only overwrite files if there is a newer version &amp;quot;locally&amp;quot; compared to the file already on HPSS. This makes &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; the tool of choice for updating partially archived datasets, but due to its otherwise similar functionality to &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;, it is also the recommended default command to use.&lt;br /&gt;
&lt;br /&gt;
Both &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; have similar syntax, and the following will cover both, but &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; will be used as an example. It is assumed that the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; command has already been run to enter the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; utility before attempting the following.&lt;br /&gt;
&lt;br /&gt;
Simple usage to store a single file is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change the name of a file as it is archived:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt; : &amp;lt;newFilename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whole directories can be stored using:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;lt;dirName&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you wish to keep the parent directory intact, or with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;quot;*&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you simply want to archive the contents of a directory and everything beneath it. Be mindful of your &amp;quot;local&amp;quot; directory location when choosing between these options.&lt;br /&gt;
&lt;br /&gt;
If you need to retrieve data from tape and put it back on the &amp;quot;local&amp;quot; system, the &amp;lt;code&amp;gt;get&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cget&amp;lt;/code&amp;gt; commands act in the same way as &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; but in reverse.&lt;br /&gt;
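&lt;br /&gt;
For example, to pull a previously archived directory back into your current &amp;quot;local&amp;quot; directory (the directory name is illustrative), run the following from within &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 cget -R run_archive&lt;br /&gt;
&lt;br /&gt;
As with &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt;, the conditional variant avoids re-transferring files that are already present at the destination.&lt;br /&gt;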
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
If you are a first time user of HPSS, you will likely get an error regarding a key file. This is something that must be taken care of by ALCF support (support@alcf.anl.gov). Simply email them with your ALCF username and state that you need access set up for HPSS.&lt;br /&gt;
&lt;br /&gt;
== Further Documentation ==&lt;br /&gt;
While standard use cases of &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; are covered above, more thorough documentation is [https://docs.nersc.gov/filesystems/archive/ available from NERSC].&lt;br /&gt;
&lt;br /&gt;
There is also a [https://www.hpss-collaboration.org/documents/HSI_8.3_Reference_Manual.pdf pdf reference manual] (backup saved here: [[File:HSI 8.3 Reference Manual.pdf]]) that goes into more detail. Note that it is for version 8.3, while ALCF is currently (as of 2022-10-04) running 7.4. &lt;br /&gt;
&lt;br /&gt;
[[Category:Compute Facilities]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=File:HSI_8.3_Reference_Manual.pdf&amp;diff=1913</id>
		<title>File:HSI 8.3 Reference Manual.pdf</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=File:HSI_8.3_Reference_Manual.pdf&amp;diff=1913"/>
				<updated>2022-10-04T15:39:42Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: HSI reference manual for ALCF archiving. Found at https://www.hpss-collaboration.org/documents/HSI_8.3_Reference_Manual.pdf&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;HSI reference manual for ALCF archiving. Found at https://www.hpss-collaboration.org/documents/HSI_8.3_Reference_Manual.pdf&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1912</id>
		<title>ALCF/Archiving Data at ALCF</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1912"/>
				<updated>2022-10-04T15:27:16Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Update alcf documentation link&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ALCF]]'s High Performance Storage System (HPSS) is a robotic tape drive system used for archiving large amounts of data that will not be accessed often. The system has two interfaces listed in the [https://www.alcf.anl.gov/support/user-guides/data-management/filesystem-and-storage/hpss/index.html ALCF documentation], &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;htar&amp;lt;/code&amp;gt;. This wiki focuses on the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; interface.&lt;br /&gt;
&lt;br /&gt;
== HSI Basics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; is a utility to interface with the HPSS system. It looks and operates much like the typical bash command lines that we are used to, but with some added complexities. When you enter &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt;, the system will place you into your home HPSS space at &amp;lt;code&amp;gt;/home/username&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; keeps track of both your location in this HPSS space and also your location in the &amp;quot;local&amp;quot; system that you are running &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; from. The &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; system will automatically set the &amp;quot;local&amp;quot; directory location to be the location that you entered the utility from. &lt;br /&gt;
&lt;br /&gt;
Navigation through HPSS works much like a normal command line: &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;, among others, are valid commands within HPSS. If you need to navigate through the &amp;quot;local&amp;quot; directories instead, prepend an &amp;quot;l&amp;quot; to these standard commands (i.e. &amp;lt;code&amp;gt;lls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;lcd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;lmkdir&amp;lt;/code&amp;gt;). This can be useful if you entered &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; from the wrong location or wish to archive data from multiple locations.&lt;br /&gt;
&lt;br /&gt;
It should be noted that &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; does ''not'' support tab completion or recalling past commands with the up arrow. It is best to enter the utility with a defined plan to reduce the annoyance that the lack of these conveniences can cause.&lt;br /&gt;
&lt;br /&gt;
While standard use cases of &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; are covered below, more thorough documentation (more helpful than the ALCF documentation) is available from [https://docs.nersc.gov/filesystems/archive/ NERSC].&lt;br /&gt;
&lt;br /&gt;
== Archiving of Data ==&lt;br /&gt;
&lt;br /&gt;
Once the destination directory for the data has been created and/or navigated to, there are a few options for actually archiving the data, most notably:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; is the most basic archiving tool, and will overwrite any versions of the files being archived already on HPSS. &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; is a conditional version of &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; that will only overwrite files if there is a newer version &amp;quot;locally&amp;quot; compared to the file already on HPSS. This makes &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; the tool of choice for updating partially archived datasets, but due to its otherwise similar functionality to &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;, it is also the recommended default command to use.&lt;br /&gt;
&lt;br /&gt;
Both &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; have similar syntax, and the following will cover both, but &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; will be used as an example. It is assumed that the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; command has already been run to enter the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; utility before attempting the following.&lt;br /&gt;
&lt;br /&gt;
Simple usage to store a single file is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change the name of a file as it is archived:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt; : &amp;lt;newFilename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whole directories can be stored using:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;lt;dirName&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you wish to keep the parent directory intact, or with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;quot;*&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you simply want to archive the contents of a directory and everything beneath it. Be mindful of your &amp;quot;local&amp;quot; directory location when choosing between these options.&lt;br /&gt;
&lt;br /&gt;
If you need to retrieve data from tape and put it back on the &amp;quot;local&amp;quot; system, the &amp;lt;code&amp;gt;get&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cget&amp;lt;/code&amp;gt; commands act in the same way as &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; but in reverse.&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
If you are a first time user of HPSS, you will likely get an error regarding a key file. This is something that must be taken care of by ALCF support (support@alcf.anl.gov). Simply email them with your ALCF username and state that you need access set up for HPSS.&lt;br /&gt;
&lt;br /&gt;
[[Category:Compute Facilities]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1910</id>
		<title>ALCF/Archiving Data at ALCF</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1910"/>
				<updated>2022-09-29T18:13:05Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* HSI Basics */ Fix Typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ALCF]]'s High Performance Storage System (HPSS) is a robotic tape drive system used for archiving large amounts of data that will not be accessed often. The system has two interfaces listed in the ALCF documentation ([https://www.alcf.anl.gov/support-center/theta/using-hpss-theta]), &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;htar&amp;lt;/code&amp;gt;. This wiki focuses on the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; interface.&lt;br /&gt;
&lt;br /&gt;
== HSI Basics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; is a utility to interface with the HPSS system. It looks and operates much like the typical bash command lines that we are used to, but with some added complexities. When you enter &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt;, the system will place you into your home HPSS space at &amp;lt;code&amp;gt;/home/username&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; keeps track of both your location in this HPSS space and also your location in the &amp;quot;local&amp;quot; system that you are running &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; from. The &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; system will automatically set the &amp;quot;local&amp;quot; directory location to be the location that you entered the utility from. &lt;br /&gt;
&lt;br /&gt;
Navigation through HPSS works much like a normal command line: &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;, among others, are valid commands within HPSS. If you need to navigate through the &amp;quot;local&amp;quot; directories instead, prepend an &amp;quot;l&amp;quot; to these standard commands (i.e. &amp;lt;code&amp;gt;lls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;lcd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;lmkdir&amp;lt;/code&amp;gt;). This can be useful if you entered &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; from the wrong location or wish to archive data from multiple locations.&lt;br /&gt;
&lt;br /&gt;
It should be noted that &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; does ''not'' support tab completion or recalling past commands with the up arrow. It is best to enter the utility with a defined plan to reduce the annoyance that the lack of these conveniences can cause.&lt;br /&gt;
&lt;br /&gt;
While standard use cases of &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; are covered below, more thorough documentation (more helpful than the ALCF documentation) is available from [https://docs.nersc.gov/filesystems/archive/ NERSC].&lt;br /&gt;
&lt;br /&gt;
== Archiving of Data ==&lt;br /&gt;
&lt;br /&gt;
Once the destination directory for the data has been created and/or navigated to, there are a few options for actually archiving the data, most notably:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; is the most basic archiving tool, and will overwrite any versions of the files being archived already on HPSS. &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; is a conditional version of &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; that will only overwrite files if there is a newer version &amp;quot;locally&amp;quot; compared to the file already on HPSS. This makes &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; the tool of choice for updating partially archived datasets, but due to its otherwise similar functionality to &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;, it is also the recommended default command to use.&lt;br /&gt;
&lt;br /&gt;
Both &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; have similar syntax, and the following will cover both, but &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; will be used as an example. It is assumed that the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; command has already been run to enter the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; utility before attempting the following.&lt;br /&gt;
&lt;br /&gt;
Simple usage to store a single file is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change the name of a file as it is archived:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt; : &amp;lt;newFilename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whole directories can be stored using:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;lt;dirName&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you wish to keep the parent directory intact, or with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;quot;*&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you simply want to archive the contents of a directory and everything beneath it. Be mindful of your &amp;quot;local&amp;quot; directory location when choosing between these options.&lt;br /&gt;
&lt;br /&gt;
If you need to retrieve data from tape and put it back on the &amp;quot;local&amp;quot; system, the &amp;lt;code&amp;gt;get&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cget&amp;lt;/code&amp;gt; commands act in the same way as &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; but in reverse.&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
If you are a first time user of HPSS, you will likely get an error regarding a key file. This is something that must be taken care of by ALCF support (support@alcf.anl.gov). Simply email them with your ALCF username and state that you need access set up for HPSS.&lt;br /&gt;
&lt;br /&gt;
[[Category:Compute Facilities]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TotalView&amp;diff=1906</id>
		<title>TotalView</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TotalView&amp;diff=1906"/>
				<updated>2022-09-18T18:51:56Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We currently have an evaluation license of the TotalView graphical/parallel debugger. Please give it a try and let Ben know if you have any problems.&lt;br /&gt;
&lt;br /&gt;
==Running==&lt;br /&gt;
  soft add +totalview-8.13.0&lt;br /&gt;
  mpirun -tv -np 1 your_program&lt;br /&gt;
&lt;br /&gt;
==ReplayEngine/Reverse Debugging==&lt;br /&gt;
&lt;br /&gt;
  mpirun -mca mpool_rdma_rcache_size_limit 1 -x IBV_FORK_SAFE=1 -x LD_PRELOAD=/usr/local/toolworks/totalview.8.13.0-0/linux-x86-64/lib/undodb_infiniband_preload_x64.so -tv -np 1 your_program&lt;br /&gt;
&lt;br /&gt;
(see the TotalView user's guide for more information)&lt;br /&gt;
&lt;br /&gt;
==Documentation==&lt;br /&gt;
http://www.roguewave.com/support/product-documentation/totalview.aspx&lt;br /&gt;
&lt;br /&gt;
[[Category:Software Engineering]] [[Category:Software]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp&amp;diff=1905</id>
		<title>The On Ramp</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp&amp;diff=1905"/>
				<updated>2022-09-18T18:51:04Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Level 1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the '''PHASTA On Ramp'''! This page is meant to organize information for getting up to speed with the overall PHASTA workflow, as well as help you up your game as a PHASTA developer.&lt;br /&gt;
&lt;br /&gt;
== Level 0 ==&lt;br /&gt;
Ready to get your feet wet? Maybe you need to brush up on some fundamental skills. Level 0 provides tutorials and information on basic developer skills you will need to work with PHASTA.&lt;br /&gt;
* [[PHASTA_Group_Machines|Logging in]]&lt;br /&gt;
* [[VNC|Setting up your VNC]]&lt;br /&gt;
* [[UNIX|Getting started with Linux]]&lt;br /&gt;
* [[Git | Getting Git]]&lt;br /&gt;
* [[Vim|Getting comfortable with Vim]]&lt;br /&gt;
* [[Fortran|New to Fortran?]]&lt;br /&gt;
* [[Making A New Wiki | Editing this Wiki]]&lt;br /&gt;
&lt;br /&gt;
== Level 1 ==&lt;br /&gt;
&lt;br /&gt;
[[File:Picture5.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So you think you know how to &amp;lt;code&amp;gt;grep&amp;lt;/code&amp;gt; for keywords and &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt; some folders? Now you're probably ready to jump into PHASTA. &lt;br /&gt;
&lt;br /&gt;
* [[Level 1 Model/Mesh| Model/Mesh Workflow]]&lt;br /&gt;
* [[Level 1 Partition| Partition Workflow]]&lt;br /&gt;
* [[Level 1 Solve| Solve Workflow]]&lt;br /&gt;
* [[Level 1 Post-Process| Post-Process Workflow]]&lt;br /&gt;
&lt;br /&gt;
See the [[The_On_Ramp/Level_1|Level 1 base page]] as well.&lt;br /&gt;
&lt;br /&gt;
== Level 2 ==&lt;br /&gt;
Now that you've plotted some &amp;quot;Colorful Fluid Dynamics&amp;quot; (CFD), you're probably interested in digging into the guts of PHASTA and getting it running on some more interesting hardware.&lt;br /&gt;
* [https://github.com/PHASTA/phasta Building PHASTA from scratch] (Follow the README at the bottom of the page)&lt;br /&gt;
* [https://github.com/SCOREC/core/wiki/General-Build-instructions#Manual_Install Building Chef and other SCOREC core tools]&lt;br /&gt;
* [[ Building ParaView 5.8.1 and Shoreline Plug-ins on viz003 | Building ParaView with Shoreline Plug-ins ]]&lt;br /&gt;
&lt;br /&gt;
== Level 3 ==&lt;br /&gt;
ParaView isn't cutting it for your research? Now it's time to start writing your own analysis code. Custom pre- and post-processing is described here, along with more complex ways of using PHASTA for interesting CFD simulations.&lt;br /&gt;
* [[Synthetic Turbulence Inflow Generator | Synthetic Turbulence Generator]]&lt;br /&gt;
* [[VTKpytools| Python + VTK using vtkpytools]]&lt;br /&gt;
&lt;br /&gt;
== Level 4 ==&lt;br /&gt;
You're probably close to defending at this point, so before you go get that sweet industry job, you should drop some of your lofty knowledge here!&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1/Post-Process&amp;diff=1904</id>
		<title>The On Ramp/Level 1/Post-Process</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1/Post-Process&amp;diff=1904"/>
				<updated>2022-09-18T18:42:50Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: typo fix for category&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this section we will learn how to set up the visualization file which [[ParaView]] (a flow visualization software) reads in, and how to use Paraview to analyze the flow field solutions generated by [[PHASTA]].&lt;br /&gt;
&lt;br /&gt;
== Visualization File ==&lt;br /&gt;
In order for Paraview to know which restart and geombc files to process, we need to give it this information by means of a &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; meta-data file. In the directory where the PHASTA case was run, &amp;lt;code&amp;gt;.../8-1-Chef/Run/&amp;lt;/code&amp;gt; for our example, you will need to create/copy a meta-data file with a &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; file extension. Common practice is to name this file &amp;lt;code&amp;gt;flow.pht&amp;lt;/code&amp;gt; and to only include the file in your &amp;lt;code&amp;gt;Run&amp;lt;/code&amp;gt; directory when you are ready to launch and work in Paraview. You can copy over an example &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; file from the tutorials folder called &amp;lt;code&amp;gt;flow.pht&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
'''A detailed explanation of the variables inside of the &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; file is provided in this  [https://fluid.colorado.edu/tutorials/tutorialVideos/ParaviewWithMark_meta_data_description.mp4 video].'''&lt;br /&gt;
 &lt;br /&gt;
The following is an example &amp;lt;code&amp;gt;flow.pht&amp;lt;/code&amp;gt; file for our &amp;lt;code&amp;gt;8-1-Chef&amp;lt;/code&amp;gt; case, where we ran 10 time steps in PHASTA, saved every 5th time step, and we want to visualize the 5th and 10th time step in Paraview. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
 &amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;8&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;GeometryFileNamePattern pattern=&amp;quot;8-procs_case/geombc.dat.%d&amp;quot; ''#8-procs_case here matches the folder name where our geombc files are located.''  &lt;br /&gt;
                             has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                             has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;FieldFileNamePattern pattern=&amp;quot;8-procs_case/restart.%d.%d&amp;quot; ''#8-procs_case here matches the folder name where our restart files are located.''&lt;br /&gt;
                          has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                          has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;TimeSteps number_of_steps=&amp;quot;2&amp;quot; ''#Needs to be 2 as we want to visualize 2 time steps.''&lt;br /&gt;
               auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
               start_index=&amp;quot;5&amp;quot; ''#Index of the first time step we want to visualize. Make sure this time step is available by checking the files in''       &lt;br /&gt;
                                ''8-procs_case''&lt;br /&gt;
               increment_index_by=&amp;quot;5&amp;quot; ''#5 + 5 = 10. If we had more time steps, we could visualize the 15th, 20th, 25th, 30th, etc.'' &lt;br /&gt;
               start_value=&amp;quot;0.&amp;quot;&lt;br /&gt;
               increment_value_by=&amp;quot;0.5&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
    &amp;lt;Fields number_of_fields=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
      &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
             paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
             start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
             number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
             data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
             data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
      &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
             paraview_field_tag=&amp;quot;pressure&amp;quot;&lt;br /&gt;
             start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
             number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
             data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
             data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
      &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
             paraview_field_tag=&amp;quot;eddy viscosity&amp;quot;&lt;br /&gt;
             start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
             number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
             data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
             data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;/Fields&amp;gt;&lt;br /&gt;
 &amp;lt;/PhastaMetaFile&amp;gt;&lt;br /&gt;
&lt;br /&gt;
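The &amp;lt;code&amp;gt;start_index_in_phasta_array&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;number_of_components&amp;lt;/code&amp;gt; attributes select columns out of the per-node solution array. A minimal Python sketch of the column mapping implied by the example above (the layout is inferred from this example only, not from general PHASTA documentation):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch: map each <Field> entry from the example .pht file to the
# solution-array columns it reads. The column layout here is taken from the
# example's start indices; it is not a general PHASTA specification.
fields = [
    {"paraview_field_tag": "pressure", "start_index": 0, "number_of_components": 1},
    {"paraview_field_tag": "velocity", "start_index": 1, "number_of_components": 3},
    {"paraview_field_tag": "eddy viscosity", "start_index": 5, "number_of_components": 1},
]

def solution_columns(field):
    """Return the solution-array column indices a field occupies."""
    start = field["start_index"]
    return list(range(start, start + field["number_of_components"]))

for f in fields:
    print(f["paraview_field_tag"], "->", solution_columns(f))
```
&lt;br /&gt;
So in this example, pressure reads column 0, velocity reads columns 1 through 3, and eddy viscosity reads column 5 of the solution array.&lt;br /&gt;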
==Visualizing Fields and Computing Quantities==&lt;br /&gt;
&lt;br /&gt;
As always, set the environment on viz003 (if you have not done so already) by using the soft adds located in:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;code&amp;gt;more ~kjansen/soft-core.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To open the visualization tool Paraview, run the command:&lt;br /&gt;
&lt;br /&gt;
 vglrun paraview&lt;br /&gt;
&lt;br /&gt;
It is common practice to be in the directory where your &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; file is located when you run the &amp;lt;code&amp;gt;vglrun paraview&amp;lt;/code&amp;gt; command. This sets the working directory in Paraview to the one which contains your &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; file, and you can then quickly open that file without having to search for it in the &amp;quot;Open File&amp;quot; Paraview GUI. &lt;br /&gt;
&lt;br /&gt;
A tutorial on how to navigate the GUI to visualize solution fields as well as compute other solution fields is given in this [https://fluid.colorado.edu/tutorials/tutorialVideos/ParaviewWithMark_viz_fields.mp4 video].&lt;br /&gt;
&lt;br /&gt;
'''Note:''' You can also open a specific Paraview build by setting the path to the executable file. For example, a Paraview v5.7.0 executable file was built in the following directory: &amp;lt;code&amp;gt;/users/jeffhadley/Builds/build-paraview-v5.7.0/bin/&amp;lt;/code&amp;gt;. To run this specific build of paraview, you would run the command: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;vglrun /users/jeffhadley/Builds/build-paraview-v5.7.0/bin/paraview&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Post-processing]] [[Category:Paraview]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1/Post-Process&amp;diff=1903</id>
		<title>The On Ramp/Level 1/Post-Process</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1/Post-Process&amp;diff=1903"/>
				<updated>2022-09-18T18:42:14Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Misc edits&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this section we will learn how to set up the visualization file which [[ParaView]] (a flow visualization software) reads in, and how to use Paraview to analyze the flow field solutions generated by [[PHASTA]].&lt;br /&gt;
&lt;br /&gt;
== Visualization File ==&lt;br /&gt;
In order for Paraview to know which restart and geombc files to process, we need to give it this information by means of a &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; meta-data file. In the directory where the PHASTA case was run, &amp;lt;code&amp;gt;.../8-1-Chef/Run/&amp;lt;/code&amp;gt; for our example, you will need to create/copy a meta-data file with a &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; file extension. Common practice is to name this file &amp;lt;code&amp;gt;flow.pht&amp;lt;/code&amp;gt; and to only include the file in your &amp;lt;code&amp;gt;Run&amp;lt;/code&amp;gt; directory when you are ready to launch and work in Paraview. You can copy over an example &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; file from the tutorials folder called &amp;lt;code&amp;gt;flow.pht&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
'''A detailed explanation of the variables inside of the &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; file is provided in this  [https://fluid.colorado.edu/tutorials/tutorialVideos/ParaviewWithMark_meta_data_description.mp4 video].'''&lt;br /&gt;
 &lt;br /&gt;
The following is an example &amp;lt;code&amp;gt;flow.pht&amp;lt;/code&amp;gt; file for our &amp;lt;code&amp;gt;8-1-Chef&amp;lt;/code&amp;gt; case, where we ran 10 time steps in PHASTA, saved every 5th time step, and we want to visualize the 5th and 10th time step in Paraview. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
 &amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;8&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;GeometryFileNamePattern pattern=&amp;quot;8-procs_case/geombc.dat.%d&amp;quot; ''#8-procs_case here matches the folder name where our geombc files are located.''  &lt;br /&gt;
                             has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                             has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;FieldFileNamePattern pattern=&amp;quot;8-procs_case/restart.%d.%d&amp;quot; ''#8-procs_case here matches the folder name where our restart files are located.''&lt;br /&gt;
                          has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                          has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;TimeSteps number_of_steps=&amp;quot;2&amp;quot; ''#Needs to be 2 as we want to visualize 2 time steps.''&lt;br /&gt;
               auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
               start_index=&amp;quot;5&amp;quot; ''#Index of the first time step we want to visualize. Make sure this time step is available by checking the files in''       &lt;br /&gt;
                                ''8-procs_case''&lt;br /&gt;
               increment_index_by=&amp;quot;5&amp;quot; ''#5 + 5 = 10. If we had more time steps, we could visualize the 15th, 20th, 25th, 30th, etc.'' &lt;br /&gt;
               start_value=&amp;quot;0.&amp;quot;&lt;br /&gt;
               increment_value_by=&amp;quot;0.5&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
    &amp;lt;Fields number_of_fields=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
      &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
             paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
             start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
             number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
             data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
             data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
      &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
             paraview_field_tag=&amp;quot;pressure&amp;quot;&lt;br /&gt;
             start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
             number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
             data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
             data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
      &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
             paraview_field_tag=&amp;quot;eddy viscosity&amp;quot;&lt;br /&gt;
             start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
             number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
             data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
             data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;/Fields&amp;gt;&lt;br /&gt;
 &amp;lt;/PhastaMetaFile&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Visualizing Fields and Computing Quantities==&lt;br /&gt;
&lt;br /&gt;
As always, set the environment on viz003 (if you have not done so already) by using the soft adds listed in:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;code&amp;gt;more ~kjansen/soft-core.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To open the visualization tool Paraview, run the command:&lt;br /&gt;
&lt;br /&gt;
 vglrun paraview&lt;br /&gt;
&lt;br /&gt;
It is common practice to be in the directory where your &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; file is located when you run the &amp;lt;code&amp;gt;vglrun paraview&amp;lt;/code&amp;gt; command. This sets the working directory in Paraview to the one which contains your &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; file, and you can then quickly open that file without having to search for it in the &amp;quot;Open File&amp;quot; Paraview GUI. &lt;br /&gt;
&lt;br /&gt;
A tutorial on how to navigate the GUI to visualize solution fields as well as compute other solution fields is given in this [https://fluid.colorado.edu/tutorials/tutorialVideos/ParaviewWithMark_viz_fields.mp4 video].&lt;br /&gt;
&lt;br /&gt;
'''Note:''' You can also open a specific Paraview build by setting the path to the executable file. For example, a Paraview v5.7.0 executable file was built in the following directory: &amp;lt;code&amp;gt;/users/jeffhadley/Builds/build-paraview-v5.7.0/bin/&amp;lt;/code&amp;gt;. To run this specific build of paraview, you would run the command: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;vglrun /users/jeffhadley/Builds/build-paraview-v5.7.0/bin/paraview&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Post-Processing]] [[Category:Paraview]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1/Post-Process&amp;diff=1901</id>
		<title>The On Ramp/Level 1/Post-Process</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1/Post-Process&amp;diff=1901"/>
				<updated>2022-09-18T18:40:28Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Jrwrigh moved page Level 1 Post-Process to The On Ramp/Level 1/Post-Process: Move to a subpage organization&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this section we will learn how to set up the visualization file that Paraview (our flow visualization software) reads in, and how to use Paraview to analyze the flow field solutions generated by PHASTA. &lt;br /&gt;
&lt;br /&gt;
== Visualization File ==&lt;br /&gt;
In order for Paraview to know which restart and geombc files to process, we need to give it this information by means of a &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; meta-data file. In the directory where the PHASTA case was run, &amp;lt;code&amp;gt;.../8-1-Chef/Run/&amp;lt;/code&amp;gt; for our example, you will need to create/copy a meta-data file with a &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; file extension. Common practice is to name this file &amp;lt;code&amp;gt;flow.pht&amp;lt;/code&amp;gt; and to only include the file in your &amp;lt;code&amp;gt;Run&amp;lt;/code&amp;gt; directory when you are ready to launch and work in Paraview. You can copy over an example &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; file from the tutorials folder called &amp;lt;code&amp;gt;flow.pht&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
'''A detailed explanation of the variables inside of the &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; file is provided in this  [https://fluid.colorado.edu/tutorials/tutorialVideos/ParaviewWithMark_meta_data_description.mp4 video].'''&lt;br /&gt;
 &lt;br /&gt;
The following is an example &amp;lt;code&amp;gt;flow.pht&amp;lt;/code&amp;gt; file for our &amp;lt;code&amp;gt;8-1-Chef&amp;lt;/code&amp;gt; case, where we ran 10 time steps in PHASTA, saved every 5th time step, and we want to visualize the 5th and 10th time step in Paraview. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
 &amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;8&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;GeometryFileNamePattern pattern=&amp;quot;8-procs_case/geombc.dat.%d&amp;quot; ''#8-procs_case here matches the folder name where our geombc files are located.''  &lt;br /&gt;
                             has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                             has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;FieldFileNamePattern pattern=&amp;quot;8-procs_case/restart.%d.%d&amp;quot; ''#8-procs_case here matches the folder name where our restart files are located.''&lt;br /&gt;
                          has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                          has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;TimeSteps number_of_steps=&amp;quot;2&amp;quot; ''#Needs to be 2 as we want to visualize 2 time steps.''&lt;br /&gt;
               auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
               start_index=&amp;quot;5&amp;quot; ''#Index of the first time step we want to visualize. Make sure this time step is available by checking the files in''       &lt;br /&gt;
                                ''8-procs_case''&lt;br /&gt;
               increment_index_by=&amp;quot;5&amp;quot; ''#5 + 5 = 10. If we had more time steps, we could visualize the 15th, 20th, 25th, 30th, etc.'' &lt;br /&gt;
               start_value=&amp;quot;0.&amp;quot;&lt;br /&gt;
               increment_value_by=&amp;quot;0.5&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
    &amp;lt;Fields number_of_fields=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
      &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
             paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
             start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
             number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
             data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
             data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
      &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
             paraview_field_tag=&amp;quot;pressure&amp;quot;&lt;br /&gt;
             start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
             number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
             data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
             data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
      &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
             paraview_field_tag=&amp;quot;eddy viscosity&amp;quot;&lt;br /&gt;
             start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
             number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
             data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
             data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;/Fields&amp;gt;&lt;br /&gt;
 &amp;lt;/PhastaMetaFile&amp;gt;&lt;br /&gt;
&lt;br /&gt;
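As a quick sanity check on the TimeSteps settings above, the restart indices Paraview will look for can be computed by hand. This is a minimal shell sketch using the example's values, not part of the workflow itself:&lt;br /&gt;

```shell
# Indices generated by auto_generate_indices with the example settings:
# start_index=5, increment_index_by=5, number_of_steps=2 -> 5 and 10,
# i.e. Paraview will read restart.5.* and restart.10.*.
number_of_steps=2
start_index=5
increment_index_by=5
for (( i = 0; i < number_of_steps; i++ )); do
  echo $(( start_index + i * increment_index_by ))
done
```
&lt;br /&gt;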
==Visualizing Fields and Computing Quantities==&lt;br /&gt;
&lt;br /&gt;
As always, set the environment on viz003 (if you have not done so already) by using the soft adds listed in:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;code&amp;gt;more ~kjansen/soft-core.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To open the visualization tool Paraview, run the command:&lt;br /&gt;
&lt;br /&gt;
 vglrun paraview&lt;br /&gt;
&lt;br /&gt;
It is common practice to be in the directory where your &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; file is located when you run the &amp;lt;code&amp;gt;vglrun paraview&amp;lt;/code&amp;gt; command. This sets the working directory in Paraview to the one which contains your &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; file, and you can then quickly open that file without having to search for it in the &amp;quot;Open File&amp;quot; Paraview GUI. &lt;br /&gt;
&lt;br /&gt;
A tutorial on how to navigate the GUI to visualize solution fields as well as compute other solution fields is given in this [https://fluid.colorado.edu/tutorials/tutorialVideos/ParaviewWithMark_viz_fields.mp4 video].&lt;br /&gt;
&lt;br /&gt;
'''Note:''' You can also open a specific Paraview build by setting the path to the executable file. For example, a Paraview v5.7.0 executable file was built in the following directory: &amp;lt;code&amp;gt;/users/jeffhadley/Builds/build-paraview-v5.7.0/bin/&amp;lt;/code&amp;gt;. To run this specific build of paraview, you would run the command: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;vglrun /users/jeffhadley/Builds/build-paraview-v5.7.0/bin/paraview&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Level_1_Post-Process&amp;diff=1902</id>
		<title>Level 1 Post-Process</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Level_1_Post-Process&amp;diff=1902"/>
				<updated>2022-09-18T18:40:28Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Jrwrigh moved page Level 1 Post-Process to The On Ramp/Level 1/Post-Process: Move to a subpage organization&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[The On Ramp/Level 1/Post-Process]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1/Solve_(Incompressible)&amp;diff=1899</id>
		<title>The On Ramp/Level 1/Solve (Incompressible)</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1/Solve_(Incompressible)&amp;diff=1899"/>
				<updated>2022-09-18T18:40:00Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Jrwrigh moved page Level 1 Solve to The On Ramp/Level 1/Solve: Move to a subpage organization&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Exporting to PHASTA ===&lt;br /&gt;
After the partitioning performed via Chef in the last steps, we now have the problem domain in a form that the PHASTA executable can read. In your 8-1-Chef directory, create a sub-directory named &amp;quot;Run&amp;quot;. This will contain all of the simulation data from this case, which we will use in our PHASTA run. Remember that for this On Ramp tutorial we have partitioned our case of interest into 8 parts. Therefore, we need to create a sub-directory named &amp;quot;8-procs_case&amp;quot; within the /8-1-Chef/Run/ directory. Once you have mkdir'd and cd'd into this new Run/8-procs_case/ subdirectory, make softlinks (&amp;quot;ln -s &amp;lt;path/file*&amp;gt;&amp;quot;) to the N=8 restart and geombc &amp;quot;checkpoint&amp;quot; files that were constructed by Chef, located in the 8-1-Chef/8-procs_case/ directory. When in the Run/8-procs_case sub-directory, a good command to do this is the following:&lt;br /&gt;
&lt;br /&gt;
 ln -s ../../8-procs_case/restart* .&lt;br /&gt;
 ln -s ../../8-procs_case/geombc* .&lt;br /&gt;
&lt;br /&gt;
Also create a numstart.dat file. This file will specify the time and timestep that the simulation has completed thus far. For our case, we have not yet run the simulation, thus our time and timesteps are 0 0. Use the following command in the &amp;quot;Run/8-procs_case&amp;quot; directory to create the numstart.dat file:&lt;br /&gt;
&lt;br /&gt;
 echo 0 0 &amp;gt; numstart.dat&lt;br /&gt;
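&lt;br /&gt;
A minimal sketch of creating the file and confirming what it contains (assuming you are in the Run/8-procs_case directory):&lt;br /&gt;

```shell
# Create numstart.dat for a fresh run (nothing completed yet) and
# confirm its contents read back as "0 0".
echo 0 0 > numstart.dat
cat numstart.dat   # prints: 0 0
```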
&lt;br /&gt;
=== Build the executable/specify runtime parameters===&lt;br /&gt;
What remains is to determine the version of PHASTA to build and run. Since there are a bunch of researchers working on PHASTA at any given time, there are many branches/versions of the main code.&lt;br /&gt;
&lt;br /&gt;
'''Note:''' You may not have access to the phasta-next repo yet. If that is the case, ask Dr. Jansen to have you added to the repo. In the meantime, you can just use the regular phasta repo (by replacing all &amp;quot;phasta-next&amp;quot; with &amp;quot;phasta&amp;quot; in these instructions).&lt;br /&gt;
&lt;br /&gt;
==== Retrieve/build a version of PHASTA code ====&lt;br /&gt;
To have your first run of PHASTA, you will need an executable of the PHASTA code. There are many ways to do this, and the steps below are intended to get you a generic executable. The many nuances to this process can be found on the Level 2 page [[Compiling_PHASTA_With_CMake|here]]. Navigate to your home directory. Create and enter a directory here named &amp;quot;git-phasta&amp;quot;. In a web browser, navigate to the online git repository for phasta-next and select the &amp;quot;clone&amp;quot; or &amp;quot;code&amp;quot; icon. The result should be similar to the following picture, where a pop-up gives a web address:&lt;br /&gt;
&lt;br /&gt;
 [[File:GitClone.png]]&lt;br /&gt;
&lt;br /&gt;
Copy this address and within the &amp;quot;git-phasta&amp;quot; directory execute the following command and enter your github credentials:&lt;br /&gt;
&lt;br /&gt;
 git clone https://github.com/PHASTA/phasta-next.git&lt;br /&gt;
&lt;br /&gt;
After this is finished there will be a subdirectory created named &amp;quot;phasta-next&amp;quot; that contains the code tree that you wish to build. &lt;br /&gt;
Back within the git-phasta directory, create another subdirectory named &amp;quot;build_phasta-next&amp;quot;. Now we must set the environment so that the compiler has the necessary libraries. The following command displays the environment libraries that Ken often uses, which are often sufficient:&lt;br /&gt;
&lt;br /&gt;
 more ~kjansen/soft-core.sh &lt;br /&gt;
&lt;br /&gt;
At the time that this page is created, the relevant commands to load the needed environment are the following:&lt;br /&gt;
&lt;br /&gt;
 soft add +gcc-6.3.0&lt;br /&gt;
 soft add +openmpi-gnu-1.10.6-gnu49-thread&lt;br /&gt;
 soft add +simmodeler-6.0-171202&lt;br /&gt;
&lt;br /&gt;
If the user needs additional libraries they can often be found with the following command:&lt;br /&gt;
&lt;br /&gt;
 softenv&lt;br /&gt;
&lt;br /&gt;
Let's build this code! Navigate into the build_phasta-next subdirectory and create a file named &amp;quot;unpack_buildFiles.sh&amp;quot; with the following content:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 Target=Example_Build&lt;br /&gt;
 mkdir -p $Target&lt;br /&gt;
 rm -r $Target/*&lt;br /&gt;
 cd $Target&lt;br /&gt;
 &lt;br /&gt;
 export PKG_CONFIG_PATH=/users/skinnerr/tools/git-petsc/build_ompi210_gnu63/lib/pkgconfig/&lt;br /&gt;
 &lt;br /&gt;
 cmake \&lt;br /&gt;
 -DCMAKE_C_COMPILER=gcc \&lt;br /&gt;
 -DCMAKE_CXX_COMPILER=g++ \&lt;br /&gt;
 -DCMAKE_Fortran_COMPILER=gfortran \&lt;br /&gt;
 -DCMAKE_BUILD_TYPE=Debug \&lt;br /&gt;
 -DPHASTA_INCOMPRESSIBLE=ON \&lt;br /&gt;
 -DPHASTA_COMPRESSIBLE=OFF \&lt;br /&gt;
 -DPHASTA_USE_LESLIB=ON \&lt;br /&gt;
 -DLESLIB=/users/matthb2/libles1.5/libles-debianjessie-gcc-ompi.a \&lt;br /&gt;
 -DCASES=/home/mrasquin/develop-phasta/phastaChefTests \&lt;br /&gt;
 -DPHASTA_TESTING=OFF \&lt;br /&gt;
 ../../phasta-next/&lt;br /&gt;
 &lt;br /&gt;
 make -j8&lt;br /&gt;
 echo &amp;quot;Target: $Target&amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
&lt;br /&gt;
'''Note:''' You can create this file by typing &amp;lt;code&amp;gt;vi unpack_buildFiles.sh&amp;lt;/code&amp;gt; into the command line. This will create an empty shell script file named unpack_buildFiles.sh and enter you into the Vim editor mode, where you can practice your recently adopted Vim commands to copy the above script into the file, making sure to save and quit after you're done. Enter the following command to make sure your shell script was saved successfully with all the required text:&lt;br /&gt;
&lt;br /&gt;
 more unpack_buildFiles.sh&lt;br /&gt;
&lt;br /&gt;
Now, we must turn the above .sh file into an executable by typing the following command:&lt;br /&gt;
&lt;br /&gt;
 chmod +x unpack_buildFiles.sh&lt;br /&gt;
&lt;br /&gt;
Finally, we are ready to run the executable and build the PHASTA code!&lt;br /&gt;
&lt;br /&gt;
 ./unpack_buildFiles.sh&lt;br /&gt;
&lt;br /&gt;
You can check that your executable has been built by locating:&lt;br /&gt;
&lt;br /&gt;
 Example_Build/bin/phastaIC.exe&lt;br /&gt;
&lt;br /&gt;
===== Additional notes =====&lt;br /&gt;
If there is a specific branch off of phasta-next that you'd like to build, navigate to phasta-next and use the following command:&lt;br /&gt;
 git checkout &amp;quot;branchname&amp;quot;&lt;br /&gt;
If this is a branch that I will be working on for a while, I tend to alter the build and code directories according to the branch name. Note that the respective pointer in the &amp;quot;unpack_buildFiles.sh&amp;quot; file (i.e. the last line) will have to be set accordingly.&lt;br /&gt;
&lt;br /&gt;
=== Create Solver.inp === &lt;br /&gt;
The &amp;lt;code&amp;gt;input.config&amp;lt;/code&amp;gt; file in the newly created &amp;lt;code&amp;gt;build_phasta/Example_Build/&amp;lt;/code&amp;gt; directory contains all possible options that could be set in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; file (which we will use for our PHASTA run). Create the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; file in the &amp;lt;code&amp;gt;../PrepAndRun/8-1-Chef/Run/&amp;lt;/code&amp;gt; directory, and then specify all of the parameters that you wish to change for your run case. '''NEVER EVER modify the input.config file itself'''. The rest of the parameters need not be specified. For example, my solver.inp looks as follows:&lt;br /&gt;
&lt;br /&gt;
 # ibksiz flmpl flmpr itwmod wmodts dmodts fwr taucfct&lt;br /&gt;
 # PHASTA Version 1.5 Input File&lt;br /&gt;
 #&lt;br /&gt;
 #  Basic format is&lt;br /&gt;
 #&lt;br /&gt;
 #    Key Phrase  :  Acceptable Value (integer, double, logical, or phrase&lt;br /&gt;
 #                                     list of integers, list of doubles )&lt;br /&gt;
 #&lt;br /&gt;
 #&lt;br /&gt;
 #SOLUTION CONTROL &lt;br /&gt;
 #{                &lt;br /&gt;
     Equation of State: Incompressible&lt;br /&gt;
     Number of Timesteps: 10&lt;br /&gt;
     Time Step Size: 1e-1  # Delt(1)&lt;br /&gt;
     Turbulence Model: RANS  # No-Model # DES97 # DDES  iturb=0, RANS =-1  LES=1 #}&lt;br /&gt;
 #}&lt;br /&gt;
 &lt;br /&gt;
 #MATERIAL PROPERTIES&lt;br /&gt;
 #{&lt;br /&gt;
     Viscosity: 1.50e-5      # fills datmat (2 values REQUIRED if iLset=1)&lt;br /&gt;
     Density: 1.0           # ditto&lt;br /&gt;
     Body Force Option: None # ibody=0 =&amp;gt; matflag(5,n)&lt;br /&gt;
     Body Force: 0 0.0 0.0    # (datmat(i,5,n),i=1,nsd)&lt;br /&gt;
     Thermal Conductivity: 27.6e-1  # ditto&lt;br /&gt;
     Scalar Diffusivity: 27.6e-1    # fills scdiff(1:nsclrS)&lt;br /&gt;
 #}&lt;br /&gt;
 &lt;br /&gt;
 OUTPUT CONTROL&lt;br /&gt;
 {&lt;br /&gt;
     Number of Timesteps between Restarts: 5 #replaces nout/ntout&lt;br /&gt;
     Number of SyncIO Files: 0&lt;br /&gt;
     Print Error Indicators: False&lt;br /&gt;
     Number of Error Smoothing Iterations: 0 # ierrsmooth&lt;br /&gt;
 #    Print ybar: True &lt;br /&gt;
 #    Print vorticity: True&lt;br /&gt;
 #    Print Wall Fluxes: False&lt;br /&gt;
 #    Print Statistics: False&lt;br /&gt;
     Number of Force Surfaces: 1&lt;br /&gt;
     Surface ID's for Force Calculation: 1&lt;br /&gt;
 #     Ranks per core: 4 # for varts only&lt;br /&gt;
 #     Cores per node: 16 # for varts only&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 #LINEAR SOLVER&lt;br /&gt;
 #    Solver Type: GMRES sparse&lt;br /&gt;
     Solver Type: ACUSIM with P Projection&lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1      # replaces nGMRES&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 200           # replaces Kspace&lt;br /&gt;
     Scalar 1 Solver Tolerance : 1.0e-4&lt;br /&gt;
     Tolerance on Momentum Equations: 0.05                   # epstol(1)&lt;br /&gt;
     Tolerance on ACUSIM Pressure Projection: 0.01           # prestol &lt;br /&gt;
     Number of Solves per Left-hand-side Formation: 1  #nupdat/LHSupd(1)&lt;br /&gt;
     ACUSIM Verbosity Level               : 0   #iverbose&lt;br /&gt;
     Minimum Number of ACUSIM Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Maximum Number of ACUSIM Iterations per Nonlinear Iteration: 200 # maxIter&lt;br /&gt;
 #}&lt;br /&gt;
 &lt;br /&gt;
 #DISCRETIZATION CONTROL&lt;br /&gt;
 #{&lt;br /&gt;
     Time Integration Rule: First Order    # 1st Order sets rinf(1) -1&lt;br /&gt;
 #    Time Integration Rule: Second Order    # Second Order sets rinf next&lt;br /&gt;
 #    Time Integration Rho Infinity: 0.0     # rinf(1) Only used for 2nd order&lt;br /&gt;
     Tau Matrix: Diagonal-Shakib               #itau=1&lt;br /&gt;
     Tau Time Constant: 1.0                      #dtsfct&lt;br /&gt;
     Include Viscous Correction in Stabilization: True    # if p=1 idiff=1&lt;br /&gt;
                                                          # if p=2 idiff=2  &lt;br /&gt;
     Lumped Mass Fraction on Left-hand-side: 0.0           # flmpl&lt;br /&gt;
     Lumped Mass Fraction on Right-hand-side: 0.0          # flmpr&lt;br /&gt;
     Tau C Scale Factor: 1.0                    # taucfct  best value depends  &lt;br /&gt;
     Number of Elements Per Block: 64 # switch to &amp;gt;250 if sgi&lt;br /&gt;
 #} &lt;br /&gt;
 &lt;br /&gt;
 TURBULENCE MODELING PARAMETERS&lt;br /&gt;
 {&lt;br /&gt;
    Turbulence Wall Model Type: None  #itwmod=2 RANSorLES&lt;br /&gt;
     }&lt;br /&gt;
 &lt;br /&gt;
 #STEP SEQUENCE &lt;br /&gt;
 #{&lt;br /&gt;
      Step Construction  : 0 1 10 11 0 1 10 11&lt;br /&gt;
 #}&lt;br /&gt;
&lt;br /&gt;
=== Running the Solver ===&lt;br /&gt;
Create the runPHASTA.sh bash script in your &amp;lt;code&amp;gt;8-1-Chef/Run&amp;lt;/code&amp;gt; directory. Where you see &amp;lt;pathtoBuildDir&amp;gt;, include the path to the build directory where you retrieved the &amp;lt;code&amp;gt;input.config&amp;lt;/code&amp;gt; file. The #export line is an example that I have used myself; yours should look similar:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 rm $1-procs_case/doubleRun-check&lt;br /&gt;
 &lt;br /&gt;
 #export PHASTA_CONFIG=/users/jopa6460/git-phasta/build_phasta/Example_Build&lt;br /&gt;
 export PHASTA_CONFIG=&amp;lt;pathtoBuildDir&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun -np $1 $PHASTA_CONFIG/bin/phastaIC.exe 2&amp;gt;&amp;amp;1 | tee $1.out&lt;br /&gt;
&lt;br /&gt;
Remember to turn the file into an executable as was done for the build script above. Hint: &amp;lt;code&amp;gt;chmod +x&amp;lt;/code&amp;gt; command. &lt;br /&gt;
&lt;br /&gt;
NOTE: Before running PHASTA, it is important to check that nobody else is running important jobs on the computer you're connected to. Run &amp;lt;code&amp;gt;top&amp;lt;/code&amp;gt; in the command line to check what other processes are running, and if you see other people running large jobs you should ask them if you can run your job as well, or else switch to a different computer.&lt;br /&gt;
&lt;br /&gt;
Now you have everything you need to run your first PHASTA simulation! In the Run directory execute the following line:&lt;br /&gt;
&lt;br /&gt;
 ./runPHASTA.sh 8&lt;br /&gt;
&lt;br /&gt;
Output of a proper run of PHASTA contains information about 1) the step number of the simulation; 2) the relative residual (convergence relative to this run's initial residual); and 3) various other outputs based on runtime parameters like BC's and IC's. As the simulation continues it will produce successive checkpoint restart files that contain the solution data at that timestep. For our example, these restart files will be located in the &amp;lt;code&amp;gt;.../Run/8-procs_case&amp;lt;/code&amp;gt; directory. These restart files and the geombc files are what Paraview post-processes to give us a visualization of flow field solutions.&lt;br /&gt;
&lt;br /&gt;
Now it's time to plot the results in the [[Level 1 Post-Process]] step.&lt;br /&gt;
&lt;br /&gt;
=== A few helpful video tutorials===&lt;br /&gt;
&lt;br /&gt;
[https://fluid.colorado.edu/tutorials/tutorialVideos/CloneFromGithubAndBranch.mov		CloneFromGithubAndBranch.mov		]&lt;br /&gt;
&lt;br /&gt;
[[Tutorial_Video_Overviews#CloneFromGithubAndBranch.mov|Video Notes]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://fluid.colorado.edu/tutorials/tutorialVideos/BlancoBuidingPHASTA.mov	BlancoBuidingPHASTA.mov	]&lt;br /&gt;
&lt;br /&gt;
[[Tutorial_Video_Overviews#BlancoBuidingPHASTA.mov|Video Notes]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://fluid.colorado.edu/tutorials/tutorialVideos/PHASTA_workflow_RB.mkv	PHASTA_workflow_RB.mkv	]&lt;br /&gt;
&lt;br /&gt;
[[Tutorial_Video_Overviews#PHASTA_workflow_RB.mkv|Video Notes]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://fluid.colorado.edu/tutorials/tutorialVideos/RajDemoPrepSolvePost.mov	RajDemoPrepSolvePost.mov	]&lt;br /&gt;
&lt;br /&gt;
[[Tutorial_Video_Overviews#RajDemoPrepSolvePost.mov|Video Notes]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://fluid.colorado.edu/tutorials/tutorialVideos/PrepSolvePostBLandC.mp4	PrepSolvePostBLandC.mp4	]&lt;br /&gt;
&lt;br /&gt;
[[Tutorial_Video_Overviews#PrepSolvePostBLandC.mp4|Video Notes]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Level_1_Solve&amp;diff=1900</id>
		<title>Level 1 Solve</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Level_1_Solve&amp;diff=1900"/>
				<updated>2022-09-18T18:40:00Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Jrwrigh moved page Level 1 Solve to The On Ramp/Level 1/Solve: Move to a subpage organization&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[The On Ramp/Level 1/Solve]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1/Partition&amp;diff=1898</id>
		<title>The On Ramp/Level 1/Partition</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1/Partition&amp;diff=1898"/>
				<updated>2022-09-18T18:39:27Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: /* Chef */ Link to Chef wiki page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Chef==&lt;br /&gt;
[[Chef]] is an open source SCOREC tool that is this group's primary tool for partitioning a problem domain into many subdomains. This is done to allow an array of compute nodes to each focus on solving the problem in parallel. That is, we divide the problem domain into subdomains to give to different computational workers. When we perform this step, the only inputs needed are 1) the SCOREC mesh constructed from the output of the conversion and meshing steps, and 2) the number of parts (subdomains) to divide the problem into. A generic workflow for this step is described below and is a good place to start when completing the workflow for the first time. As your studies continue, many aspects of the layout and modifiers will not be consistent with this On Ramp documentation. &lt;br /&gt;
=== Creating the serial case via Viz nodes===&lt;br /&gt;
Create a &amp;lt;code&amp;gt;1-1-Chef&amp;lt;/code&amp;gt; subdirectory inside the &amp;lt;code&amp;gt;PrepAndRun&amp;lt;/code&amp;gt; folder. Enter the new folder and copy over the files &amp;lt;code&amp;gt;adapt.inp&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;runChef.sh&amp;lt;/code&amp;gt; from the &amp;lt;code&amp;gt;/projects/tutorials/OnRamp&amp;lt;/code&amp;gt; folder. To learn more about the &amp;lt;code&amp;gt;adapt.inp&amp;lt;/code&amp;gt; file, enter the command:&lt;br /&gt;
&lt;br /&gt;
 more adapt.inp&lt;br /&gt;
&lt;br /&gt;
Pay particular attention to the splitFactor, attributeFileName, modelFileName, meshFileName bz2, and outMeshFileName bz2 specifications. These are often the first knobs that the new user learns to control when first creating the &amp;quot;checkpoint&amp;quot; files that are needed for the PHASTA executable. splitFactor tells the Chef executable how many parts each incoming part should be split into. Immediately after the conversion step we want this to be set to 1 because we desire a single &amp;quot;checkpoint&amp;quot; file. attributeFileName tells Chef where the file that contains the attributes of each geometry entity (region, face, line, and vertex info) is. Likewise, modelFileName tells Chef where the file that contains the geometry entity information is. The meshFileName bz2 and outMeshFileName bz2 specifications determine the directory where Chef will look to gather the SCOREC mesh and where to put the partitioned SCOREC mesh. Notice that this pre-written &amp;lt;code&amp;gt;adapt.inp&amp;lt;/code&amp;gt; file points the meshFileName bz2 attribute to the mdsMesh_bz2 folder located in the simMeshToMdsMesh directory. &lt;br /&gt;
Run the Chef bash script by entering the command &amp;lt;code&amp;gt;./runChef.sh 1&amp;lt;/code&amp;gt;. This should output a &amp;lt;code&amp;gt;1-procs_case&amp;lt;/code&amp;gt; directory, an &amp;lt;code&amp;gt;mdsMesh_bz2&amp;lt;/code&amp;gt; directory, and a &amp;lt;code&amp;gt;chef.log&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&lt;br /&gt;
=== Partitioning to 8 processes via Viz nodes===&lt;br /&gt;
&lt;br /&gt;
Like the creation of the serial case above, we will make another subdirectory, named 8-1-Chef. Copy the adapt.inp and the runChef.sh bash script that were used from the 1-1-Chef directory into this 8-1-Chef subdirectory. We will alter the meshFileName bz2 and splitFactor specifications within the newly copied adapt.inp. Set the specification meshFileName bz2 to &amp;quot;../1-1-Chef/mdsMesh_bz2&amp;quot;. Set the splitFactor to 8. Run Chef via the bash script &amp;lt;code&amp;gt;./runChef.sh 8&amp;lt;/code&amp;gt;. The executable should be able to read the mesh from the 1-1-Chef directory as well as the geom files as before.&lt;br /&gt;
&lt;br /&gt;
Now that you have successfully partitioned the mesh, it's time to use PHASTA in the [[Level 1 Solve]] step.&lt;br /&gt;
&lt;br /&gt;
=== Further Partitioning to N processes===&lt;br /&gt;
By now you may be able to guess how to further partition your case into more and more parts. If the user desired an end part count of 32 processes (call that N), the next step would mimic the previous one, with the subdirectory named 32-8-Chef and the splitFactor set to 4. &lt;br /&gt;
There are a few things to consider when partitioning an actual case you care about. It is tempting to make the successive split factors large so as to have fewer repeated partitionings. However, the splitFactor should not often exceed 8, especially on an unstructured grid. If it does, Chef will have a more difficult time splitting the elements evenly amongst the parts, causing an imbalance in the workloads from process to process and therefore reducing the effectiveness of the PHASTA run. Secondly, when the mesh is large, the partitioning that is performed on the viz node should not exceed 32 parts. The successive partitionings past this amount should be done on larger machines. Sometimes, if a mesh is large enough, the preliminary serial partition should be performed on a fatter node than what is available on the viz nodes (i.e. CU's Summit resource).&lt;br /&gt;
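&lt;br /&gt;
The part counts multiply through the chain of Chef runs, so the per-step split factors must multiply out to the target part count. A minimal arithmetic sketch for the 32-part example (the directory names only illustrate the N-M-Chef naming convention):&lt;br /&gt;

```shell
# 1-1-Chef -> 8-1-Chef -> 32-8-Chef: one part split by 8, then by 4.
start_parts=1
split_1=8   # splitFactor used for 8-1-Chef
split_2=4   # splitFactor used for 32-8-Chef
echo $(( start_parts * split_1 * split_2 ))   # final part count: 32
```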
&lt;br /&gt;
=== A few helpful video tutorials===&lt;br /&gt;
&lt;br /&gt;
[https://fluid.colorado.edu/tutorials/tutorialVideos/PHASTA_workflow_RB.mkv	PHASTA_workflow_RB.mkv	]&lt;br /&gt;
&lt;br /&gt;
[[Tutorial_Video_Overviews#PHASTA_workflow_RB.mkv|Video Notes]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://fluid.colorado.edu/tutorials/tutorialVideos/RajDemoPrepSolvePost.mov	RajDemoPrepSolvePost.mov	]&lt;br /&gt;
&lt;br /&gt;
[[Tutorial_Video_Overviews#RajDemoPrepSolvePost.mov|Video Notes]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://fluid.colorado.edu/tutorials/tutorialVideos/PrepSolvePostBLandC.mp4	PrepSolvePostBLandC.mp4	]&lt;br /&gt;
&lt;br /&gt;
[[Tutorial_Video_Overviews#PrepSolvePostBLandC.mp4|Video Notes]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Chef]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1/Partition&amp;diff=1896</id>
		<title>The On Ramp/Level 1/Partition</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1/Partition&amp;diff=1896"/>
				<updated>2022-09-18T18:38:32Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Jrwrigh moved page Level 1 Partition to The On Ramp/Level 1/Partition: Move to a subpage organization&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Chef==&lt;br /&gt;
Chef is an open source SCOREC tool that is this group's primary tool to partition a problem domain to many subdomains. This is done to allow any array of compute nodes to each focus on solving the problem in parallel. That is, to divide the problem domain into subdomains to give to different computational workers. When we perform this step, all that is needed for inputs are the, 1) SCOREC mesh constructed from the output of the conversion and meshing steps, and 2) the number of parts (subdomains) to divide the problem up into. A generic workflow for this step is described below and is a good place to start when completing the workflow for the first time. As your studies continue, many aspects of the layout and modifiers will not be consistent with this On Ramp documentation. &lt;br /&gt;
=== Creating the serial case via Viz nodes===&lt;br /&gt;
Create a &amp;lt;code&amp;gt;1-1-Chef&amp;lt;/code&amp;gt; subdirectory inside the &amp;lt;code&amp;gt;PrepAndRun&amp;lt;/code&amp;gt; folder. Enter the new folder and copy over the files &amp;lt;code&amp;gt;adapt.inp&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;runChef.sh&amp;lt;/code&amp;gt; from the &amp;lt;code&amp;gt;/projects/tutorials/OnRamp&amp;lt;/code&amp;gt; folder. To learn more about the &amp;lt;code&amp;gt;adapt.inp&amp;lt;/code&amp;gt; file, enter the command:&lt;br /&gt;
&lt;br /&gt;
 more adapt.inp&lt;br /&gt;
&lt;br /&gt;
Pay particular attention to the splitFactor, attributeFileName, modelFileName, meshFileName bz2, and outMeshFileName bz2 specifications. These are often the first knobs that the new user learns to control when first creating the &amp;quot;checkpoint&amp;quot; files needed by the PHASTA executable. splitFactor tells the Chef executable how many partitions each incoming part should be split into. Immediately after the conversion step we want this set to 1 because we desire a single &amp;quot;checkpoint&amp;quot; file. attributeFileName tells Chef where to find the file containing the attributes of each geometry entity (region, face, line, and vertex info). Likewise, modelFileName tells Chef where to find the file containing the geometry entity information. The meshFileName bz2 and outMeshFileName bz2 specifications determine the directory where Chef will look to gather the SCOREC mesh and where to put the partitioned SCOREC mesh. Notice that this pre-written &amp;lt;code&amp;gt;adapt.inp&amp;lt;/code&amp;gt; file points the meshFileName bz2 attribute to the mdsMesh_bz2 folder located in the simMeshToMdsMesh directory. &lt;br /&gt;
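As a concrete illustration of those knobs, here is a minimal stand-in adapt.inp fragment together with a quick way to list just the specifications of interest. The file names, values, and exact key spellings below are illustrative assumptions for this sketch, not the tutorial's actual file:

```shell
# Hypothetical stand-in for the tutorial's adapt.inp, reduced to the
# specifications discussed above (all values are illustrative assumptions).
printf '%s\n' \
  'splitFactor 1' \
  'attributeFileName geom.smd' \
  'modelFileName geom_nat.x_t' \
  'meshFileName bz2 ../simMeshToMdsMesh/mdsMesh_bz2' \
  'outMeshFileName bz2 mdsMesh_bz2' > adapt.inp
# List only the knobs of interest instead of paging the whole file:
grep -E 'splitFactor|FileName' adapt.inp
```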
Run the Chef bash script by entering the command &amp;lt;code&amp;gt;./runChef.sh 1&amp;lt;/code&amp;gt;. This should output a &amp;lt;code&amp;gt;1-procs_case&amp;lt;/code&amp;gt; directory, an &amp;lt;code&amp;gt;mdsMesh_bz2&amp;lt;/code&amp;gt; directory, and a &amp;lt;code&amp;gt;chef.log&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&lt;br /&gt;
=== Partitioning to 8 processes via Viz nodes===&lt;br /&gt;
&lt;br /&gt;
As with the creation of the serial case above, we will make another subdirectory named 8-1-Chef. Copy the adapt.inp file and the runChef.sh bash script that were used from the 1-1-Chef directory into this 8-1-Chef subdirectory. We will alter the meshFileName bz2 and splitFactor specifications within the newly copied adapt.inp. Set the specification meshFileName bz2 to &amp;quot;../1-1-Chef/mdsMesh_bz2&amp;quot; and set the splitFactor to 8. Run Chef via the bash script &amp;lt;code&amp;gt;./runChef.sh 8&amp;lt;/code&amp;gt;. The executable should be able to read the mesh from the 1-1-Chef directory, as well as the geom files, as before.&lt;br /&gt;
&lt;br /&gt;
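The 8-1-Chef setup can be sketched as the shell commands below. These use stand-in files so the sketch is self-contained; the 1-1-Chef contents shown are assumptions, and Chef itself is not invoked here:

```shell
# Sketch of the 8-1-Chef setup (stand-in files; Chef itself is not run).
# Assume 1-1-Chef already holds the adapt.inp and runChef.sh used for the
# serial case; create illustrative stand-ins for them first.
mkdir -p 1-1-Chef 8-1-Chef
printf 'splitFactor 1\nmeshFileName bz2 ../simMeshToMdsMesh/mdsMesh_bz2\n' > 1-1-Chef/adapt.inp
touch 1-1-Chef/runChef.sh
cp 1-1-Chef/adapt.inp 1-1-Chef/runChef.sh 8-1-Chef/
# Point the mesh at the serial output and split each part 8 ways:
sed -i 's|^meshFileName bz2 .*|meshFileName bz2 ../1-1-Chef/mdsMesh_bz2|' 8-1-Chef/adapt.inp
sed -i 's|^splitFactor .*|splitFactor 8|' 8-1-Chef/adapt.inp
# One would then run, inside 8-1-Chef: ./runChef.sh 8
```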
Now that you have successfully partitioned the mesh, it's time to use PHASTA in the [[Level 1 Solve]] step.&lt;br /&gt;
&lt;br /&gt;
=== Further Partitioning to N processes===&lt;br /&gt;
By now you may be able to guess how to partition your case into more and more parts. If the desired final part count were 32 processes (call that N), the next step would mimic the one above, with the subdirectory named 32-8-Chef and the splitFactor set to 4, since the 8 existing parts split 4 ways each yield 32 parts. &lt;br /&gt;
There are a few things to consider when partitioning an actual case you care about. It is tempting to make the successive split factors large so as to have fewer repeated partitionings. However, the splitFactor should rarely exceed 8, especially on an unstructured grid. If it does, Chef will have a more difficult time splitting the elements evenly amongst the parts, causing an imbalance in the workload from process to process and therefore reducing the effectiveness of the PHASTA run. Secondly, when the mesh is large, the partitioning performed on the viz node should not exceed 32 parts; the successive partitionings past this amount should be done on larger machines. If a mesh is large enough, the preliminary serial partition may even need to be performed on a fatter node than what is available on the viz nodes (i.e., CU's Summit resource).&lt;br /&gt;
&lt;br /&gt;
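As a sanity check when planning a sequence of Chef runs, note that the final part count is simply the product of the successive split factors:

```shell
# The final part count is the product of the split factors applied in
# sequence; for the 1 -> 8 -> 32 example above, the splits are 8 then 4.
parts=1
for split in 8 4; do
  parts=$((parts * split))
done
echo "$parts"   # prints 32
```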
=== A few helpful video tutorials===&lt;br /&gt;
&lt;br /&gt;
[https://fluid.colorado.edu/tutorials/tutorialVideos/PHASTA_workflow_RB.mkv	PHASTA_workflow_RB.mkv	]&lt;br /&gt;
&lt;br /&gt;
[[Tutorial_Video_Overviews#PHASTA_workflow_RB.mkv|Video Notes]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://fluid.colorado.edu/tutorials/tutorialVideos/RajDemoPrepSolvePost.mov	RajDemoPrepSolvePost.mov	]&lt;br /&gt;
&lt;br /&gt;
[[Tutorial_Video_Overviews#RajDemoPrepSolvePost.mov|Video Notes]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://fluid.colorado.edu/tutorials/tutorialVideos/PrepSolvePostBLandC.mp4	PrepSolvePostBLandC.mp4	]&lt;br /&gt;
&lt;br /&gt;
[[Tutorial_Video_Overviews#PrepSolvePostBLandC.mp4|Video Notes]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Chef]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Level_1_Partition&amp;diff=1897</id>
		<title>Level 1 Partition</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Level_1_Partition&amp;diff=1897"/>
				<updated>2022-09-18T18:38:32Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Jrwrigh moved page Level 1 Partition to The On Ramp/Level 1/Partition: Move to a subpage organization&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[The On Ramp/Level 1/Partition]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1/Model/Mesh&amp;diff=1894</id>
		<title>The On Ramp/Level 1/Model/Mesh</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1/Model/Mesh&amp;diff=1894"/>
				<updated>2022-09-18T18:38:04Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Jrwrigh moved page Level 1 Model/Mesh to The On Ramp/Level 1/Model/Mesh: Move to a subpage organization&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
* [[Exporting Parasolid from SolidWorks |Exporting Parasolid from SolidWorks ]]&lt;br /&gt;
* [[Getting Started with Simmodeler|Getting Started with Simmodeler]]&lt;br /&gt;
* [[Prepping the Grid for Chef|Prepping the Grid for Chef]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Level_1_Model/Mesh&amp;diff=1895</id>
		<title>Level 1 Model/Mesh</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Level_1_Model/Mesh&amp;diff=1895"/>
				<updated>2022-09-18T18:38:04Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Jrwrigh moved page Level 1 Model/Mesh to The On Ramp/Level 1/Model/Mesh: Move to a subpage organization&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[The On Ramp/Level 1/Model/Mesh]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1&amp;diff=1893</id>
		<title>The On Ramp/Level 1</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=The_On_Ramp/Level_1&amp;diff=1893"/>
				<updated>2022-09-18T18:37:32Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Created page with &amp;quot;This is the base page for Level 1 of The On Ramp. This contains a basic tutorial on how to create a PHASTA simulation from scratch.  This diagram covers the steps of t...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the base page for Level 1 of [[The On Ramp]]. This contains a basic tutorial on how to create a [[PHASTA]] simulation from scratch.&lt;br /&gt;
&lt;br /&gt;
This diagram covers the steps of the workflow, the files involved, and how they all relate to each other:&lt;br /&gt;
[[File:Picture5.png]]&lt;br /&gt;
&lt;br /&gt;
The steps of the tutorial are:&lt;br /&gt;
* [[Level 1 Model/Mesh| Model/Mesh Workflow]]&lt;br /&gt;
* [[Level 1 Partition| Partition Workflow]]&lt;br /&gt;
* [[Level 1 Solve| Solve Workflow]]&lt;br /&gt;
* [[Level 1 Post-Process| Post-Process Workflow]]&lt;br /&gt;
&lt;br /&gt;
== Subpages ==&lt;br /&gt;
{{Special:PrefixIndex/The On Ramp/Level 1/}}&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=PhParAdapt&amp;diff=1892</id>
		<title>PhParAdapt</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=PhParAdapt&amp;diff=1892"/>
				<updated>2022-09-18T18:27:06Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Add subpages.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''phParAdapt''' is a program for doing mesh adaptation in PHASTA. This is either done using [[SCOREC Core]] utilties, or though Simmetrix. &lt;br /&gt;
&lt;br /&gt;
[[phParAdapt-SCOREC]]&lt;br /&gt;
&lt;br /&gt;
[[phParAdapt-Simmetrix]]&lt;br /&gt;
&lt;br /&gt;
== Subpages ==&lt;br /&gt;
{{Special:PrefixIndex/PhParAdapt/}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Software]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=PhParAdapt/Simmetrix&amp;diff=1890</id>
		<title>PhParAdapt/Simmetrix</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=PhParAdapt/Simmetrix&amp;diff=1890"/>
				<updated>2022-09-18T18:26:31Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Jrwrigh moved page PhParAdapt-Simmetrix to PhParAdapt/Simmetrix: Move to a subpage organization&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides details for performing mesh adaptation using the phParAdapt tool with Simmetrix routines. &lt;br /&gt;
&lt;br /&gt;
== Initial Notes to User ==&lt;br /&gt;
* The solution migration feature is currently broken and thus requires other means for transferring the solution to new meshes that are created from phParAdapt (e.g. solution interpolation in Paraview)&lt;br /&gt;
* To date, the steps described herein have only been tried with the following restrictions:&lt;br /&gt;
** Tetrahedral elements&lt;br /&gt;
*** Extruded element types apparently can be used but require removal of any extrusion constraints in the mesh. Below are some incomplete details regarding that process for reference. See  [[#Removal_of_Extrusion_Constraint |Removal of Extrusion Constraint]].&lt;br /&gt;
** Serial case&lt;br /&gt;
** Geometries created using SimModeler7.0-190604&lt;br /&gt;
** The version of phParAdapt noted here uses only the 6th entry of the error field for identifying regions for adaptation&lt;br /&gt;
&lt;br /&gt;
== Adaptation Process ==&lt;br /&gt;
&lt;br /&gt;
=== Creating Initial Restart and Error Files ===&lt;br /&gt;
Beginning inside of the 1-1-Chef directory,&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;mkdir 1-1-phParAdapt&lt;br /&gt;
cd 1-1-phParAdapt &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create soft links to the geom files above the 1-1-Chef directory,&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;ln -s ../../geom.smd&lt;br /&gt;
ln -s ../../geom.sms&lt;br /&gt;
ln -s ../../geom_nat.x_t &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create an &amp;lt;code&amp;gt;adapt.inp&amp;lt;/code&amp;gt; file using the template [[#Adapt.inp_(non-adaptation_step) | Adapt.inp (non-adaptation step)]]. Keep &amp;lt;code&amp;gt;adaptFlag&amp;lt;/code&amp;gt; set to zero, as the purpose of running phParAdapt here is to create a restart.&amp;lt;timeStepNumber&amp;gt;.1 file associated with a new mesh format that can be used for adaptation. &lt;br /&gt;
&lt;br /&gt;
Run phParAdapt in this directory, which for this specific example was:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mpirun -np 1 /projects/tools/Simmetrix.develop_19/phParAdapt-Sim/phParAdapt-Sim19/bin/x86_64_linux/phParAdapt-parasolid-openmpi-O 2&amp;gt;&amp;amp;1 | tee phParAdapt.log&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the above step fails, review the 'MeshSim.#.log' file that is produced to identify the error. One previously encountered error relates to analysis attributes: under the 'Analysis attributes' section, the name on the 'problem definition' line must be set to 'geom'.&lt;br /&gt;
&lt;br /&gt;
After completion, the 1-procs_case and mesh_parts.sms directories should now be present. If a solution has already been generated for a different mesh on the same geometry, it can now be interpolated onto the new restart.&amp;lt;#&amp;gt;.1 file that was generated. If no solution exists, run Phasta using this restart file until the desired solution state has been achieved. Be sure to set &amp;lt;code&amp;gt;Print Error Indicators: True&amp;lt;/code&amp;gt; in the solver.inp file so the error fields are saved in the final restart file. Note, error fields are not printed in intermediate restart files. Before proceeding, check that the error fields exist using &amp;lt;code&amp;gt;grep -a &amp;quot; : &amp;lt;&amp;quot; restart.&amp;lt;#&amp;gt;.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Performing the Adaptation Step ===&lt;br /&gt;
Within the 1-1-phParAdapt directory, &lt;br /&gt;
 &amp;lt;nowiki&amp;gt;mkdir A1-phParAdapt&lt;br /&gt;
cd A1-phParAdapt&lt;br /&gt;
ln -s ../geom.smd&lt;br /&gt;
ln -s ../geom_nat.x_t&lt;br /&gt;
ln -s ../mesh_parts.sms geom.sms&lt;br /&gt;
ln -s geom.sms parts.sms&lt;br /&gt;
ln -s ../1-procs_case/restart.1.1&lt;br /&gt;
ln -s restart.1.1 errors.1.1 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above step, it is assumed that the solution was interpolated onto restart.1.1. If that was not the case and the solution was advanced until a desired step was reached, replace restart.1.1 &amp;amp; errors.1.1 with restart.&amp;lt;#&amp;gt;.1 &amp;amp; errors.&amp;lt;#&amp;gt;.1 corresponding to the desired solution step number with error fields.&lt;br /&gt;
&lt;br /&gt;
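That relinking can be parameterized by the step number. A minimal sketch follows, where TS=10 is a made-up example value (substitute the step number of the restart file that actually contains the error fields); note that ln -s happily creates the links even before the target files exist:

```shell
# Hypothetical sketch: link restart/error files for an arbitrary time step.
# TS=10 is an assumed example value, not from the tutorial.
TS=10
mkdir -p A1-phParAdapt
ln -sf "../1-procs_case/restart.${TS}.1" "A1-phParAdapt/restart.${TS}.1"
ln -sf "restart.${TS}.1" "A1-phParAdapt/errors.${TS}.1"
```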
Create an adapt.inp file using [[#Adapt.inp_(adaptation step) |Adapt.inp_(adaptation step)]] as a template. Change the &amp;lt;code&amp;gt;timeStepNumber&amp;lt;/code&amp;gt; parameter to correspond to the restart file that was linked into the current directory. Make sure the &amp;lt;code&amp;gt;adaptFlag&amp;lt;/code&amp;gt; parameter is set to one and change additional threshold flags to control the result of the refinement.&lt;br /&gt;
&lt;br /&gt;
Run phParAdapt using the mpirun command from the previous section. A directory corresponding to the &amp;lt;code&amp;gt;timeStepNumber&amp;lt;/code&amp;gt; parameter should have been created with a mesh_parts.sms folder inside upon successful completion.&lt;br /&gt;
&lt;br /&gt;
=== Generate Restart File for Refined Mesh ===&lt;br /&gt;
From within the A1-phParAdapt directory, execute the following (replacing &amp;lt;#&amp;gt; with the appropriate time step number):&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;cd &amp;lt;#&amp;gt;&lt;br /&gt;
mkdir 1-1-phParAdapt&lt;br /&gt;
cd 1-1-phParAdapt&lt;br /&gt;
ln -s ../../../geom.smd&lt;br /&gt;
ln -s ../../../geom_nat.x_t&lt;br /&gt;
ln -s ../mesh_parts.sms geom.sms&lt;br /&gt;
cp ../../../adapt.inp .&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above list of commands, the adapt.inp file from the non-adaptation step was copied into the current directory. Run phParAdapt using the same mpirun command as above to generate the 1-procs_case folder and the new restart file with the refined mesh. Again, since the solution migration feature is not working, the restart file will contain the initial solution and no error fields. A previously obtained solution can now be interpolated onto this new restart file, or PHASTA can be run in this directory to obtain a solution on the new mesh. &lt;br /&gt;
&lt;br /&gt;
In order to perform a second (or more) adaptation step, create a restart file in the 1-procs_case folder in the current directory (either by interpolation or running PHASTA) and make sure it contains the error fields. Create a new directory &amp;lt;code&amp;gt;A2-phParAdapt&amp;lt;/code&amp;gt; at this level and repeat the above steps. Be sure to use the time step number moving forward that is associated with the restart file just mentioned.&lt;br /&gt;
&lt;br /&gt;
== File Examples ==&lt;br /&gt;
&lt;br /&gt;
The file examples below are what were used for this specific example and contain parameter flags that may need to be changed or that are not used at all.&lt;br /&gt;
&lt;br /&gt;
=== Adapt.inp (non-adaptation step) ===&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;numberSolutionVars 6&lt;br /&gt;
numberErrorVars 10 &lt;br /&gt;
refWeights 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0&lt;br /&gt;
refThreshold 0.01&lt;br /&gt;
globalP 1&lt;br /&gt;
timeStepNumber 1 &lt;br /&gt;
numelX 0&lt;br /&gt;
NSFTAG -1&lt;br /&gt;
ensa_dof 6&lt;br /&gt;
attributeFileName geom.smd&lt;br /&gt;
meshFileName geom.sms&lt;br /&gt;
modelFileName geom_nat.x_t&lt;br /&gt;
Idirection 0&lt;br /&gt;
BYPASS 0&lt;br /&gt;
zScale 0&lt;br /&gt;
adaptFlag 0 &lt;br /&gt;
errorName 0&lt;br /&gt;
SONFATH 0&lt;br /&gt;
lStart 0&lt;br /&gt;
rRead 6&lt;br /&gt;
rStart 0 &lt;br /&gt;
AdaptStrategy 5&lt;br /&gt;
AdaptFactor 0.00001&lt;br /&gt;
AdaptOption 11&lt;br /&gt;
hmax 0.04&lt;br /&gt;
hmin 0.000052083&lt;br /&gt;
multipleRestarts 0&lt;br /&gt;
Periodic 0&lt;br /&gt;
prCD 0&lt;br /&gt;
timing 0&lt;br /&gt;
wGraph 0&lt;br /&gt;
phastaVersion 1.9.5&lt;br /&gt;
old_format 0&lt;br /&gt;
FortFormFlag 0&lt;br /&gt;
outputFormat binary&lt;br /&gt;
CUBES 0&lt;br /&gt;
internalBCNodes 0&lt;br /&gt;
version UNKNOWN&lt;br /&gt;
WRITEASC 0&lt;br /&gt;
phastaIO 1&lt;br /&gt;
numTotParts 1&lt;br /&gt;
SolutionMigration 0&lt;br /&gt;
DisplacementMigration 0&lt;br /&gt;
isReorder 0 &lt;br /&gt;
isMCOPI 0&lt;br /&gt;
isBLAdapt 0&lt;br /&gt;
isThickAdapt 0&lt;br /&gt;
isSizeLimit 1&lt;br /&gt;
MaxLimitFact 2&lt;br /&gt;
MinLimitFact 2&lt;br /&gt;
rho 1.225&lt;br /&gt;
mu 1.7825e-5&lt;br /&gt;
dwalMigration 0&lt;br /&gt;
buildMapping 1 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Adapt.inp (adaptation step) ===&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;eviscScal 0&lt;br /&gt;
pgrad 0&lt;br /&gt;
ub 1.45&lt;br /&gt;
lb 0.70&lt;br /&gt;
numSplit 5&lt;br /&gt;
sizeRatio 0.35 # confirmed that 0.25 does a 4x ref&lt;br /&gt;
numSmooth  0  #confirmed that this gets used  &lt;br /&gt;
ratioThresh 0.75   #  this is h_new/h_orig from isotropic days&lt;br /&gt;
localAdapt 1    # makes setLocal force adaptivity only around nodes set size&lt;br /&gt;
AnisoSimmetrix 4 # &amp;gt; 0 use Simmetrix, else ours; 1 does largest length reduction by sizeRatio, 2 does both medium and largest reduction by sizeRatio&lt;br /&gt;
coarsenMode 0 # disables coarsening see coarsenMode docs for 1, and 2&lt;br /&gt;
numberSolutionVars 6&lt;br /&gt;
numberErrorVars 10 &lt;br /&gt;
refWeights 1 1 1 1 1 1 1 1 1 1&lt;br /&gt;
refThreshold 0.01&lt;br /&gt;
globalP 1&lt;br /&gt;
timeStepNumber 1 &lt;br /&gt;
numelX 0&lt;br /&gt;
NSFTAG -1&lt;br /&gt;
ensa_dof 6&lt;br /&gt;
attributeFileName geom.smd&lt;br /&gt;
meshFileName  geom.sms&lt;br /&gt;
modelFileName geom_nat.x_t&lt;br /&gt;
Idirection 0&lt;br /&gt;
BYPASS 0&lt;br /&gt;
zScale 0&lt;br /&gt;
adaptFlag 1 &lt;br /&gt;
errorName 0&lt;br /&gt;
SONFATH 0&lt;br /&gt;
lStart 0&lt;br /&gt;
rRead 6&lt;br /&gt;
rStart 0 &lt;br /&gt;
AdaptStrategy 5&lt;br /&gt;
AdaptFactor 0.5e-4&lt;br /&gt;
AdaptOption 11&lt;br /&gt;
hmax 1e6&lt;br /&gt;
hmin 1e-5&lt;br /&gt;
multipleRestarts 0&lt;br /&gt;
Periodic 0&lt;br /&gt;
prCD 0&lt;br /&gt;
timing 0&lt;br /&gt;
wGraph 0&lt;br /&gt;
phastaVersion 1.9.5&lt;br /&gt;
old_format 0&lt;br /&gt;
FortFormFlag 0&lt;br /&gt;
outputFormat binary&lt;br /&gt;
CUBES 0&lt;br /&gt;
internalBCNodes 0&lt;br /&gt;
version UNKNOWN&lt;br /&gt;
WRITEASC 0&lt;br /&gt;
phastaIO 0&lt;br /&gt;
numTotParts 1&lt;br /&gt;
SolutionMigration 1&lt;br /&gt;
DisplacementMigration 0&lt;br /&gt;
isReorder 0 &lt;br /&gt;
isMCOPI 0&lt;br /&gt;
isBLAdapt 0&lt;br /&gt;
isThickAdapt 0&lt;br /&gt;
isSizeLimit 0&lt;br /&gt;
MaxLimitFact 2&lt;br /&gt;
MinLimitFact 2&lt;br /&gt;
rho 1.225&lt;br /&gt;
mu 1.7825e-5&lt;br /&gt;
dwalMigration 0&lt;br /&gt;
buildMapping 1 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Notes on Debugging ==&lt;br /&gt;
If debugging of phParAdapt is desired, replace the normal &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; command with the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mpirun -np 1 -tv /projects/tools/Simmetrix.develop_19/phParAdapt-Sim/phParAdapt-Sim19/bin/x86_64_linux/phParAdapt-parasolid-openmpi&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &amp;lt;code&amp;gt;-O&amp;lt;/code&amp;gt; has been removed from the path to the executable to call the build version that was not optimized. Note that this requires phParAdapt to have been built at some point prior with the &amp;lt;code&amp;gt;Debug&amp;lt;/code&amp;gt; flag.&lt;br /&gt;
&lt;br /&gt;
== Notes on Folder Structure ==&lt;br /&gt;
Below is an overview of the folder structure to reference during the adaptation process:&lt;br /&gt;
# [ geom.smd, geom.sms, geom_nat.x_t ] &lt;br /&gt;
# 1-1-Chef&lt;br /&gt;
## 1-1-phParAdapt&lt;br /&gt;
### adapt.inp&lt;br /&gt;
### run_phParAdapt.sh&lt;br /&gt;
### 1-procs_case&lt;br /&gt;
#### geombc.dat.1&lt;br /&gt;
#### restart.&amp;lt;#&amp;gt;.1&lt;br /&gt;
### mesh_parts.sms&lt;br /&gt;
### A1-phParAdapt&lt;br /&gt;
#### restart.&amp;lt;#&amp;gt;.1&lt;br /&gt;
#### errors.&amp;lt;#&amp;gt;.1&lt;br /&gt;
#### [geom.smd, geom.sms, geom_nat.x_t, parts.sms]&lt;br /&gt;
#### adapt.inp&lt;br /&gt;
#### &amp;lt;#&amp;gt;     ( step number directory )&lt;br /&gt;
##### mesh_parts.sms&lt;br /&gt;
##### 1-1-phParAdapt&lt;br /&gt;
###### [geom.smd, geom.sms, geom_nat.x_t]&lt;br /&gt;
###### adapt.inp&lt;br /&gt;
###### 1-procs_case ( restart files with adapted mesh )&lt;br /&gt;
&lt;br /&gt;
== Removal of Extrusion Constraint ==&lt;br /&gt;
Riccardo has in the past used a procedure to remove the extrusion constraint from a mesh, which enabled adaptation to be performed on extrusion-type elements. &lt;br /&gt;
&lt;br /&gt;
The following directory contains an example of this process:&lt;br /&gt;
&amp;lt;code&amp;gt; /projects/tools/Models/NASAWingBodyJunction/RajMeshFine/Mesh/1-1-phParAdapt/RemExtrusion/ &amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The executable that is called for removal of the constraint is a part of SCOREC-core, found at &amp;lt;code&amp;gt; &amp;lt;path/to/SCOREC-core/build/dir&amp;gt;/test/rm_extrusion &amp;lt;/code&amp;gt; and the source code, in case edits are required, is found at &amp;lt;code&amp;gt; &amp;lt;path/to/SCOREC-core/source/dir&amp;gt;/core/test/rm_extrusion.cc &amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=PhParAdapt-Simmetrix&amp;diff=1891</id>
		<title>PhParAdapt-Simmetrix</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=PhParAdapt-Simmetrix&amp;diff=1891"/>
				<updated>2022-09-18T18:26:31Z</updated>
		
		<summary type="html">&lt;p&gt;Jrwrigh: Jrwrigh moved page PhParAdapt-Simmetrix to PhParAdapt/Simmetrix: Move to a subpage organization&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[PhParAdapt/Simmetrix]]&lt;/div&gt;</summary>
		<author><name>Jrwrigh</name></author>	</entry>

	</feed>