<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://fluid.colorado.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Conrad54418</id>
		<title>PHASTA Wiki - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://fluid.colorado.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Conrad54418"/>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php/Special:Contributions/Conrad54418"/>
		<updated>2026-05-11T20:13:14Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.30.0</generator>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2158</id>
		<title>Interpolate Solution from Different Mesh</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2158"/>
				<updated>2026-04-02T17:53:35Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Alternative to Using MATLAB for Reformatting */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a general page describing how to interpolate solutions from different meshes onto a new mesh. &lt;br /&gt;
The meshes are assumed to cover the same domain. &lt;br /&gt;
&lt;br /&gt;
The generic terms for the two meshes are &amp;quot;source&amp;quot; and &amp;quot;target&amp;quot;: the source mesh holds the desired solution data, and the target mesh will receive it. &lt;br /&gt;
&lt;br /&gt;
== Laminar, Incompressible, Semi-structured Mesh ==&lt;br /&gt;
This section will assume that the source mesh is in a structured ijk form. In the future, this may be expanded to meshes created by MGEN.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview ===&lt;br /&gt;
&lt;br /&gt;
# A CSV of the solution must be created. This must be done with ParaView running in serial '''and''' after a MergeBlocks filter has been applied.&lt;br /&gt;
# A special solution file is created via &amp;lt;code&amp;gt;Sort2StructuredGrid&amp;lt;/code&amp;gt;, which will take in the csv from step 1 and the ordered coordinates of the source file from Matlab. &lt;br /&gt;
#* Effectively, it creates a single solution file in the same ordering as the Matlab points. This ordering matters because the executable used in the next step expects a particular ordering; if the ordering differs, a new executable will need to be used/created. &lt;br /&gt;
# The interpolation is performed onto the new grid via &amp;lt;code&amp;gt;par3DInterp3&amp;lt;/code&amp;gt;, which creates &amp;lt;code&amp;gt;solInterp.&amp;lt;1-nparts&amp;gt;&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
# The interpolation is then used by PHASTA by setting &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;. &lt;br /&gt;
#*This should only be done for 1 timestep, as it will otherwise keep resetting the IC on every subsequent timestep. The &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory needs to be symlinked into the &amp;lt;code&amp;gt;-procs_case&amp;lt;/code&amp;gt; directory for this to work.&lt;br /&gt;
&lt;br /&gt;
==== 1. Create CSV ====&lt;br /&gt;
&lt;br /&gt;
* Created using ParaView&lt;br /&gt;
* PV must be running in Serial mode&lt;br /&gt;
** Otherwise the CSV will not be in the correct order and possibly have duplicated points&lt;br /&gt;
# Load in source dataset&lt;br /&gt;
# Apply the MergeBlocks filter&lt;br /&gt;
# Save dataset as a csv with 12 digits of precision in scientific notation&lt;br /&gt;
#* Make sure the csv is in &amp;quot;pressure, u0, u1, u2, x0, x1, x2&amp;quot; format&lt;br /&gt;
#* This can be done by only loading the pressure and velocity fields into ParaView (either by editing the &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; file or in the data load menu in ParaView).&lt;br /&gt;
# Replace the commas with spaces&lt;br /&gt;
#* Can use &amp;lt;code&amp;gt;vim&amp;lt;/code&amp;gt; or run &amp;lt;code&amp;gt;sed -i 's/,/\ /g' test.csv&amp;lt;/code&amp;gt;&lt;br /&gt;
#* Though the next step looks for a .csv extension, it performs a Fortran formatted read and actually needs those commas replaced by spaces&lt;br /&gt;
# Remove the first line of the csv file&lt;br /&gt;
#* Done in vi or sed (&amp;lt;code&amp;gt;sed -i 1,1d test.csv&amp;lt;/code&amp;gt;) or tail (&amp;lt;code&amp;gt;tail -n +2 test.csv &amp;gt; trimmedLine1.csv&amp;lt;/code&amp;gt;) &lt;br /&gt;
#* Needed for the next program&lt;br /&gt;
#* Better yet, we should change the next code to read past that header line and then delete this line when that is complete. We should also consider the solution in [https://stackoverflow.com/a/46451049/7564988 this StackOverflow answer] as it shows how to make a data structure that could read the csv lines directly in the next program and avoid ALL this file manipulation with modern fortran (see HighPerformanceMark's answer).&lt;br /&gt;
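The header-removal and comma-replacement steps above can be run together; a minimal sketch on a tiny sample file (substitute your actual ParaView export for &amp;lt;code&amp;gt;sample.csv&amp;lt;/code&amp;gt;):

```shell
# The cleanup steps above as two in-place edits, demonstrated on a
# dummy file standing in for the real ParaView export.
printf 'p,u0,u1\n1.0,2.0,3.0\n' | tee sample.csv
sed -i '1d' sample.csv        # remove the header line
sed -i 's/,/ /g' sample.csv   # replace commas with spaces
cat sample.csv
```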
&lt;br /&gt;
==== 2. Create Structured Solution File ====&lt;br /&gt;
&lt;br /&gt;
'''Note:''' These instructions will be for the &amp;lt;code&amp;gt;parallelSortDNSzBinJames&amp;lt;/code&amp;gt; executable, which has some highly specific requirements and command inputs. &lt;br /&gt;
&lt;br /&gt;
This step will take the data from the source solution file and put it in a format/order that will make the interpolation process work much faster.&lt;br /&gt;
&lt;br /&gt;
# Symlink the source mesh's ordered coordinate file as &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may come from the files used to create the mesh (e.g. for [[Tutorial_Video_Overviews#MatLabMeshAndConvert.mov|matchedNodeElementReader]])&lt;br /&gt;
#* ''(Untested)'' This may also be created using the coordinates from the solution file&lt;br /&gt;
# Rename/symlink csv to be the correct file name (in my specific case, it was &amp;lt;code&amp;gt;dnsSolution1procLongFort.csv&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Create an interactive job on the machine you need to run on (ALCF Cooley in this case)&lt;br /&gt;
# Load the appropriate MPI environment variables (&amp;lt;code&amp;gt;soft add +mvapich2&amp;lt;/code&amp;gt; for Cooley)&lt;br /&gt;
# Run the executable via &amp;lt;code&amp;gt;mpirun -np [nprocs] [executable path] [executable inputs]&amp;lt;/code&amp;gt; &lt;br /&gt;
#* This will produce &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Concatenate &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files '''in order''' into single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file&lt;br /&gt;
#* The individual &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files need to be concatenated into a single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file, which can be done (in zsh at least) via &amp;lt;code&amp;gt;echo source.sln.{1..[MPIRanks]} | xargs cat &amp;gt; source.sln&amp;lt;/code&amp;gt; (or the probably equivalent &amp;lt;code&amp;gt;cat source.sln.{1..[MPIRanks]} &amp;gt; source.sln&amp;lt;/code&amp;gt;). Note these files '''must''' be concatenated in order of rank, otherwise the result will be out of sequence.&lt;br /&gt;
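For shells without zsh-style brace ranges, a plain loop gives the same rank-ordered concatenation; sketched here with three dummy rank files standing in for real &amp;lt;code&amp;gt;source.sln.N&amp;lt;/code&amp;gt; output:

```shell
# Shell-agnostic ordered concatenation; seq guarantees rank order.
nprocs=3
for i in $(seq 1 "$nprocs"); do
  printf 'rank %d data\n' "$i" | tee "source.sln.$i"   # dummy rank files
done
for i in $(seq 1 "$nprocs"); do cat "source.sln.$i"; done | tee source.sln
```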
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 24 /lus/theta-fs0/projects/PHASTA_aesp/Utilities/Sort2StructuredGrid/parallelSortDNSzBinJames 47822547 47822547 212 0.0291&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[nlines of csv] [nlines of ordered.crd] [number of elements in z] [z domain width]&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Note:''' These inputs are specific to this executable; a different executable may expect different inputs&lt;br /&gt;
* Also note that the &amp;lt;code&amp;gt;[number of elements in z]&amp;lt;/code&amp;gt; is equivalent to &amp;lt;code&amp;gt;nsons&amp;lt;/code&amp;gt; - 1, i.e. the number of nodes in z minus 1.&lt;br /&gt;
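The first two inputs are just line counts, which &amp;lt;code&amp;gt;wc&amp;lt;/code&amp;gt; can supply directly; the dummy files below stand in for your real csv and &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt;:

```shell
# Line counts for the first two executable inputs, via wc
# (dummy two-line files stand in for the real ones).
printf '1 2 3\n4 5 6\n' | tee test.csv
cp test.csv ordered.crd
ncsv=$(cat test.csv | wc -l | tr -d ' ')
ncrd=$(cat ordered.crd | wc -l | tr -d ' ')
echo "csv lines: $ncsv, crd lines: $ncrd"
```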
&lt;br /&gt;
==== 3. Create Interpolated files ====&lt;br /&gt;
&lt;br /&gt;
This step will create the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files used by PHASTA to perform the interpolation.&lt;br /&gt;
&lt;br /&gt;
# Create &amp;lt;code&amp;gt;Interpolate.../[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory in the target's Chef directory and move to that directory&lt;br /&gt;
# Symlink the target's POSIX &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files (that were created by Chef) to the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory&lt;br /&gt;
#* The &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files should be copied in the exact fashion that they are in the Chef created &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory, including if they're &amp;quot;fanned out&amp;quot;&lt;br /&gt;
# Create a directory called &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may be corrected in the future, but currently if &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; is not present the job will fail&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; to the directory and the &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt; file as &amp;lt;code&amp;gt;source.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;phInterp&amp;lt;/code&amp;gt; via mpirun on an interactive job.&lt;br /&gt;
# This creates a series of &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory&lt;br /&gt;
&lt;br /&gt;
The file format for &amp;lt;code&amp;gt;solInterp.N&amp;lt;/code&amp;gt; is quite simple. Each line corresponds to a node in the partition, and the file has 7 columns:&lt;br /&gt;
&lt;br /&gt;
 coord_x coord_y coord_z pressure velocity_x velocity_y velocity_z&lt;br /&gt;
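A quick awk sanity check that every line has exactly the 7 expected columns; a one-line dummy file is used here, but in practice you would point it at &amp;lt;code&amp;gt;solnTarget/solInterp.N&amp;lt;/code&amp;gt;:

```shell
# Verify the 7-column solInterp format; prints any offending lines
# and exits nonzero if one is found (dummy one-line file shown).
printf '0.1 0.2 0.3 101325.0 1.0 0.0 0.0\n' | tee solInterp.1
awk 'NF != 7 { bad = 1; print "line " NR " has " NF " columns" }
     END { exit bad }' solInterp.1 && echo columns-ok
```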
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 64 /path/to/phInterp 16 799 281 213 0.452&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[target parts per MPIProc] [source nx] [source ny] [source nz] [z Length]&amp;lt;/code&amp;gt;&lt;br /&gt;
* Note that the number of processes given to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; times the &amp;lt;code&amp;gt;[target parts per MPIProc]&amp;lt;/code&amp;gt; must equal the number of target partitions. In this case, the target had 1024 partitions, so 64*16 = 1024&lt;br /&gt;
* The way it works is that each of the 64 MPIProcs is given 16 partitions to interpolate.&lt;br /&gt;
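That constraint is easy to check before launching; a small sketch using the example's values (64 ranks, 16 parts per rank, 1024 target partitions):

```shell
# Pre-launch consistency check: MPI ranks times parts-per-rank must
# equal the target partition count (values from the example command).
ranks=64
parts_per_rank=16
target_parts=1024
if [ $((ranks * parts_per_rank)) -eq "$target_parts" ]; then
  echo "ok: $ranks ranks x $parts_per_rank parts = $target_parts partitions"
else
  echo "mismatch: adjust -np or parts per rank"
fi
```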
&lt;br /&gt;
==== 4. Interpolate the solution in PHASTA ====&lt;br /&gt;
&lt;br /&gt;
This step will take the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files and load them as initial conditions.&lt;br /&gt;
&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory into the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory. &lt;br /&gt;
# Add/uncomment &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; &lt;br /&gt;
# Run PHASTA for a few timesteps and write out &amp;lt;code&amp;gt;restart-dat.[target nprocs]&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Remove/comment out the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; line from the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;&lt;br /&gt;
# The new restart files have the interpolated solution&lt;br /&gt;
#* Note that if you forget to remove the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; statement from &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;, PHASTA will overwrite the existing solution in the restart files&lt;br /&gt;
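The add/remove toggle in steps 2 and 4 can be scripted with sed; this sketch assumes '#' is the comment character in &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; (check your file's convention first) and uses a one-line dummy file:

```shell
# Scripted comment/uncomment of the IC flag; assumes '#' comments.
printf '#Load and set 3D IC: True\n' | tee solver.inp
sed -i 's/^#\(Load and set 3D IC: True\)/\1/' solver.inp   # uncomment for the IC step
cat solver.inp
sed -i 's/^\(Load and set 3D IC: True\)/#\1/' solver.inp   # comment it back out afterwards
cat solver.inp
```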
&lt;br /&gt;
== Turbulent, Compressible, Unstructured Mesh ==&lt;br /&gt;
&lt;br /&gt;
Currently, the only version of PHASTA that is set up to handle this type of solution transfer is the &amp;lt;code&amp;gt;conrad54418/connor_primitive&amp;lt;/code&amp;gt; branch (as of 6/22/22). Creating the &amp;lt;code&amp;gt;solInterp.1&amp;lt;/code&amp;gt; file can be automated via a script or done manually (see below).&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (scripted) ===&lt;br /&gt;
&lt;br /&gt;
Scripts have been made for use with the compressible, but not turbulent, version of the code. An example folder with the scripts included can be found at &amp;lt;code&amp;gt;/project/tutorials/ParaviewSolutionTransfer&amp;lt;/code&amp;gt;. The three scripts needed are &amp;lt;code&amp;gt;interpolateSol.py&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pvCSV2customSLN_Nproc_prim.m&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;parRunAll.sh&amp;lt;/code&amp;gt;. Of those three, the only script you need to run is &amp;lt;code&amp;gt;parRunAll.sh&amp;lt;/code&amp;gt;, which can be found in &amp;lt;code&amp;gt;targetFolder/solutionInterp&amp;lt;/code&amp;gt;. You then need to run PHASTA via the &amp;lt;code&amp;gt;runPhasta.sh&amp;lt;/code&amp;gt; script, which will produce a &amp;lt;code&amp;gt;restart.1.1&amp;lt;/code&amp;gt; file containing the transferred solution on the new mesh. The manual section below provides insight into what the scripts are doing.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (manual) ===&lt;br /&gt;
&lt;br /&gt;
# Load the existing solution into ParaView. Use the 'MergeBlocks' filter to merge it into a single serial dataset.&lt;br /&gt;
# Load the target case .pht file into ParaView&lt;br /&gt;
# Use the 'Resample With Dataset' filter and select the source (new mesh file) and input (MergeBlocks) blocks accordingly&lt;br /&gt;
# Save the output as a .csv file. Write a single time step and select scientific notation to 12 decimals. NOTE: sometimes ParaView cannot quite find the closest point during the solution transfer and writes zeros instead. In this case, the .csv file needs to be manually edited to replace the zeros for pressure and temperature with realistic values.&lt;br /&gt;
# The .csv file can now be reformatted and renamed with MATLAB to match the expected form of solInterp.1.&lt;br /&gt;
# Advance PHASTA one step in serial, then convert to desired number of processors using Chef.&lt;br /&gt;
&lt;br /&gt;
==== Alternative to Using MATLAB for Reformatting ====&lt;br /&gt;
&lt;br /&gt;
For those wanting to skip the MATLAB step, &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt; can also do the necessary column manipulation.&lt;br /&gt;
&lt;br /&gt;
Copy your file (in case you goof up):&lt;br /&gt;
 cp PVinterp0.csv test.dat&lt;br /&gt;
Remove the header line:&lt;br /&gt;
 sed -i 1,1d test.dat&lt;br /&gt;
Replace the commas with spaces:&lt;br /&gt;
 sed -i 's/,/\ /g' test.dat&lt;br /&gt;
Rearrange the columns to be what solInterp.1 wants. It is probably a good idea to do a &amp;lt;code&amp;gt;head -1 PVinterp0.csv&amp;lt;/code&amp;gt; to be sure, as you might have more or fewer fields than I did; use that header to find the column numbers (starting from 1, not 0) to write x, y, z, p, u, v, w, T, sclr. NOTE: Even with the same data files, different versions of PV will produce different ordering, so '''CHECK'''.&lt;br /&gt;
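Numbering the header fields makes the awk column indices easy to read off; the header below is only an illustrative example, since your ParaView export may order fields differently:

```shell
# Number the header fields so each field name maps to its awk column.
printf 'sclr,p,T,u,v,w,x,y,z\n' | tee PVinterp0.csv   # assumed example header
head -1 PVinterp0.csv | tr ',' '\n' | cat -n
```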
&lt;br /&gt;
For primitive code turbulent:&lt;br /&gt;
 awk '{print $8,$9,$10,$2,$4,$5,$6,$3,$1}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
for entropy code laminar:&lt;br /&gt;
 awk '{print $12,$13,$14,$1,$2,$3,$4,$5,$8,$9,$10,$7,$6}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
for entropy code with eddy viscosity:&lt;br /&gt;
 awk '{print $13,$14,$15,$2,$3,$4,$5,$6,$9,$10,$11,$8,$7,$1}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
and don't forget to put this into a directory called solnTarget (which is inside 1-procs_case) and also to turn on the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; flag in solver.inp for the 1 step that Joe mentioned. Finally, if you are worried about that one step messing up your solution, recent versions of the code can take&lt;br /&gt;
 iexec: 0&lt;br /&gt;
or&lt;br /&gt;
 Number of Timesteps: 0&lt;br /&gt;
to not take any actual steps. Note, however, that only very recent versions of the code have the iexec conditional moved AFTER the loading of the interpolated solution, but it should be pretty easy to figure out where to move that conditional. Alternatively, the second option also skips the time stepping and writes the solution AFTER applying the boundary conditions, which can be useful to confirm you have the intended BCs set (iexec: 0 won't detect this).&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>


	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2105</id>
		<title>Interpolate Solution from Different Mesh</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2105"/>
				<updated>2025-03-29T16:22:01Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Turbulent, Compressible, Unstructured Mesh */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This will be a general page for how to interpolate solutions from different meshes onto a new mesh. &lt;br /&gt;
Those meshes are assumed to be of the same domain. &lt;br /&gt;
&lt;br /&gt;
The generic terms for the two meshes are &amp;quot;source&amp;quot; and &amp;quot;target&amp;quot;, where source has the desired solution data and target is the mesh that will be receiving. &lt;br /&gt;
&lt;br /&gt;
== Laminar, Incompressible, Semi-structured Mesh ==&lt;br /&gt;
This section will assume that the source mesh is in a structured ijk form. In the future, this may be expanded to meshes created by MGEN.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview ===&lt;br /&gt;
&lt;br /&gt;
# A CSV of the solution must be created. This must be done on a serial running version of ParaView '''and''' must be done after a MergeBlocks filter is done.&lt;br /&gt;
# A special solution file is created via &amp;lt;code&amp;gt;Sort2StructuredGrid&amp;lt;/code&amp;gt;, which will take in the csv from step 1 and the ordered coordinates of the source file from Matlab. &lt;br /&gt;
#* Effectively, it creates a single solution file in the same ordering as the Matlab points. The importance of this is significant as the executable used in the next step depends on some expected ordering. If the ordering is different, a new executable will need to be used/created. &lt;br /&gt;
# The interpolation is performed onto the new grid via &amp;lt;code&amp;gt;par3DInterp3&amp;lt;/code&amp;gt;, which creates &amp;lt;code&amp;gt;solInterp.&amp;lt;1-nparts&amp;gt;&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
# The interpolation is then used by PHASTA by setting &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;. &lt;br /&gt;
#*This should only be done for 1 timestep, as it will continue to reset the IC on every subsequent timestep. The &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory needs to be symlinked into the &amp;lt;code&amp;gt;-procs_case&amp;lt;/code&amp;gt; directory for this to work.&lt;br /&gt;
&lt;br /&gt;
==== 1. Create CSV ====&lt;br /&gt;
&lt;br /&gt;
* Created using ParaView&lt;br /&gt;
* PV must be running in Serial mode&lt;br /&gt;
** Otherwise the CSV will not be in the correct order and possibly have duplicated points&lt;br /&gt;
# Load in source dataset&lt;br /&gt;
# Apply the MergeBlocks filter&lt;br /&gt;
# Save dataset as a csv with 12 digits of precision in scientific notation&lt;br /&gt;
#* Make sure the csv is in &amp;quot;pressure, u0, u1, u2, x0, x1, x2&amp;quot; format&lt;br /&gt;
#* This can be done by only loading the pressure and velocity fields into Paraview (either by editing the &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; or in the data load menu in Paraview).&lt;br /&gt;
# Replace the commas with spaces&lt;br /&gt;
#* Can use &amp;lt;code&amp;gt;vim&amp;lt;/code&amp;gt; or run &amp;lt;code&amp;gt;sed -i 's/,/\ /g' test.csv&amp;lt;/code&amp;gt;&lt;br /&gt;
#* Though the next step looks for a .csv extension, it is a fortran formatted read and actually needs those commas replaced by spaces&lt;br /&gt;
# Remove the first line of the csv file&lt;br /&gt;
#* Done in vi or sed (&amp;lt;code&amp;gt;sed -i 1,1d test.csv&amp;lt;/code&amp;gt;) or tail (&amp;lt;code&amp;gt;tail -n +2 test.csv &amp;gt; trimmedLine1.csv&amp;lt;/code&amp;gt;) &lt;br /&gt;
#* Needed for the next program&lt;br /&gt;
#* Better yet, we should change the next code to read past that header line and then delete this line when that is complete. We should also consider the solution in [https://stackoverflow.com/a/46451049/7564988 this StackOverflow answer] as it shows how to make a data structure that could read the csv lines directly in the next program and avoid ALL this file manipulation with modern fortran (see HighPerformanceMark's answer).&lt;br /&gt;
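The header-strip and comma-replacement steps above can be sketched as a short shell sequence (the file names are examples, not fixed requirements):&lt;br /&gt;

```shell
# Hypothetical two-line stand-in for the ParaView CSV export.
printf 'p,u0,u1,u2,x0,x1,x2\n1.0,2.0,3.0,4.0,5.0,6.0,7.0\n' > test.csv
tail -n +2 test.csv > trimmed.csv   # drop the header line
sed -i 's/,/ /g' trimmed.csv        # commas to spaces for the Fortran read
cat trimmed.csv
# Prints: 1.0 2.0 3.0 4.0 5.0 6.0 7.0
```

(GNU sed is assumed; BSD/macOS sed requires an extension argument after &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt;.)&lt;br /&gt;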
&lt;br /&gt;
==== 2. Create Structured Solution File ====&lt;br /&gt;
&lt;br /&gt;
'''Note:''' These instructions will be for the &amp;lt;code&amp;gt;parallelSortDNSzBinJames&amp;lt;/code&amp;gt; executable, which has some highly specific requirements and command inputs. &lt;br /&gt;
&lt;br /&gt;
This step will take the data from the source solution file and put it into a format/order that makes the interpolation process much faster.&lt;br /&gt;
&lt;br /&gt;
# Symlink the source mesh's ordered coordinate file as &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may come from the files used to create the mesh (ie. for [[Tutorial_Video_Overviews#MatLabMeshAndConvert.mov|matchedNodeElementReader]])&lt;br /&gt;
#* ''(Untested)''This may also be created using the coordinates from the solution file&lt;br /&gt;
# Rename/symlink csv to be the correct file name (in my specific case, it was &amp;lt;code&amp;gt;dnsSolution1procLongFort.csv&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Create an interactive job on whatever machine you're needing to run on (ALCF Cooley in this case)&lt;br /&gt;
# Load appropriate MPI environment variables (&amp;lt;code&amp;gt;soft add +mvapich2&amp;lt;/code&amp;gt; for Cooley)&lt;br /&gt;
# Run the executable via &amp;lt;code&amp;gt;mpirun -np [nprocs] [executable path] [executable inputs]&amp;lt;/code&amp;gt; &lt;br /&gt;
#* This will produce &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Concatenate &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files '''in order''' into single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file&lt;br /&gt;
#* The individual &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files need to be concatenated into a single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file, which can be done (in zsh at least) via &amp;lt;code&amp;gt;cat source.sln.{1..[MPIRanks]} &amp;gt; source.sln&amp;lt;/code&amp;gt; (or equivalently &amp;lt;code&amp;gt;echo source.sln.{1..[MPIRanks]} | xargs cat &amp;gt; source.sln&amp;lt;/code&amp;gt;). Note these files '''must''' be concatenated in order of rank, otherwise the result will be out of sequence.&lt;br /&gt;
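A minimal demonstration of the ordered concatenation (assuming bash or zsh, where the brace range expands in numeric order):&lt;br /&gt;

```shell
# Create three hypothetical per-rank files.
for i in 1 2 3; do echo "rank $i data" > source.sln.$i; done
# Brace expansion keeps numeric rank order:
cat source.sln.{1..3} > source.sln
cat source.sln
# Prints the three lines in rank order 1, 2, 3
```

By contrast, a glob such as &amp;lt;code&amp;gt;cat source.sln.*&amp;lt;/code&amp;gt; sorts names lexically (e.g. &amp;lt;code&amp;gt;.10&amp;lt;/code&amp;gt; before &amp;lt;code&amp;gt;.2&amp;lt;/code&amp;gt;), which would scramble the sequence.&lt;br /&gt;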
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 24 /lus/theta-fs0/projects/PHASTA_aesp/Utilities/Sort2StructuredGrid/parallelSortDNSzBinJames 47822547 47822547 212 0.0291&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[nlines of csv] [nlines of ordered.crd] [number of elements in z] [z domain width]&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Note:''' These inputs are specific to this executable; a different executable may expect different inputs&lt;br /&gt;
* Also note that the &amp;lt;code&amp;gt;[number of elements in z]&amp;lt;/code&amp;gt; is equivalent to &amp;lt;code&amp;gt;nsons&amp;lt;/code&amp;gt; - 1 or the number of nodes in z - 1.&lt;br /&gt;
&lt;br /&gt;
==== 3. Create Interpolated files ====&lt;br /&gt;
&lt;br /&gt;
This step will create the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files used by PHASTA to perform the interpolation.&lt;br /&gt;
&lt;br /&gt;
# Create &amp;lt;code&amp;gt;Interpolate.../[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory in the target's Chef directory and move to that directory&lt;br /&gt;
# Symlink the target's POSIX &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files (that were created by Chef) to the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory&lt;br /&gt;
#* The &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files should be copied in the exact fashion that they are in the Chef created &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory, including if they're &amp;quot;fanned out&amp;quot;&lt;br /&gt;
# Create a directory called &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may be corrected in the future, but currently if &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; is not present the job will fail&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; to the directory and the &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt; file as &amp;lt;code&amp;gt;source.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;phInterp&amp;lt;/code&amp;gt; via mpirun on an interactive job.&lt;br /&gt;
# This creates a series of &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory&lt;br /&gt;
&lt;br /&gt;
The file format for &amp;lt;code&amp;gt;solInterp.N&amp;lt;/code&amp;gt; is quite simple. Each line corresponds to the node number in the partition and the file itself has 7 columns:&lt;br /&gt;
&lt;br /&gt;
 coord_x coord_y coord_z pressure velocity_x velocity_y velocity_z&lt;br /&gt;
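Since the downstream reads are strict about this layout, a quick check that every line of a generated file really has 7 fields can save a failed run (the file name is an example):&lt;br /&gt;

```shell
# Count lines whose field count is not 7; prints 0 for a well-formed file.
awk 'NF != 7 { bad++ } END { print bad+0, "malformed lines" }' solInterp.1
```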
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 64 /path/to/phInterp 16 799 281 213 0.452&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[target parts per MPIProc] [source nx] [source ny] [source nz] [z Length]&amp;lt;/code&amp;gt;&lt;br /&gt;
* Note that the number of processes given to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; times the &amp;lt;code&amp;gt;[target parts per MPIProc]&amp;lt;/code&amp;gt; must be equal to the number of target partitions. In this case, the target partition was 1024, so 64*16 = 1024&lt;br /&gt;
* The way it works is that each of the 64 MPIProcs is given 16 partitions to interpolate.&lt;br /&gt;
&lt;br /&gt;
==== 4. Interpolate the solution in PHASTA ====&lt;br /&gt;
&lt;br /&gt;
This step will take the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files and load them as initial conditions.&lt;br /&gt;
&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory into the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory. &lt;br /&gt;
# Add/uncomment &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; &lt;br /&gt;
# Run PHASTA for a few timesteps and write out &amp;lt;code&amp;gt;restart-dat.[target nprocs]&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Remove/comment out the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; line from the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;&lt;br /&gt;
# The new restart files have the interpolated solution&lt;br /&gt;
#* Note that if you forget to remove the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; statement from &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;, PHASTA will overwrite the existing solution in the restart files&lt;br /&gt;
&lt;br /&gt;
== Turbulent, Compressible, Unstructured Mesh ==&lt;br /&gt;
&lt;br /&gt;
Currently, the only version of PHASTA that is set up to handle this type of solution transfer is the &amp;lt;code&amp;gt;conrad54418/connor_primitive&amp;lt;/code&amp;gt; branch (as of 6/22/22). Creating the &amp;lt;code&amp;gt;solInterp.1&amp;lt;/code&amp;gt; file can be automated via a script or done manually (see below).&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (scripted) ===&lt;br /&gt;
&lt;br /&gt;
Scripts have been made for use with the compressible, but not turbulent, version of the code. An example folder with the scripts included can be found at &amp;lt;code&amp;gt;/project/tutorials/ParaviewSolutionTransfer&amp;lt;/code&amp;gt;. The three scripts needed are called interpolateSol.py, pvCSV2customSLN_Nproc_prim.m, and parRunAll.sh. The only script of those three you need to run is parRunAll.sh, which can be found in &amp;lt;code&amp;gt;targetFolder/solutionInterp&amp;lt;/code&amp;gt;. You then need to run PHASTA via the runPhasta.sh script, which will produce a restart.1.1 file that contains the transferred solution on the new mesh. The manual section below provides insight about what the scripts are doing.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (manual) ===&lt;br /&gt;
&lt;br /&gt;
# Load existing solution into ParaView. Use the 'MergeBlocks' filter to convert to serial case.&lt;br /&gt;
# Load the target case .pht file into ParaView&lt;br /&gt;
# Use the 'Resample from dataset' filter and select the source (new mesh file) and target (MergeBlocks) blocks accordingly&lt;br /&gt;
# Save the output as a .csv file. Write a single time step and select scientific notation with 12 digits. NOTE: sometimes ParaView writes zeros at points where it cannot quite find the closest point during the solution transfer. In this case, the .csv file needs to be edited by hand to replace the zero pressure and temperature values with realistic ones.&lt;br /&gt;
# The .csv file can now be reformatted and renamed with MATLAB to match the expected form of solInterp.1.&lt;br /&gt;
# Advance PHASTA one step in serial, then convert to desired number of processors using Chef.&lt;br /&gt;
&lt;br /&gt;
==== Alternative to Using MATLAB for Reformatting ====&lt;br /&gt;
&lt;br /&gt;
For those wanting to skip the MATLAB step, &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt; can also do the necessary column manipulation.&lt;br /&gt;
&lt;br /&gt;
Copy your file (in case you goof up):&lt;br /&gt;
 cp PVinterp0.csv test.dat&lt;br /&gt;
Remove the header line:&lt;br /&gt;
 sed -i 1,1d test.dat&lt;br /&gt;
Replace the commas with spaces:&lt;br /&gt;
 sed -i 's/,/\ /g' test.dat&lt;br /&gt;
Rearrange the columns to match what solInterp.1 expects. It is a good idea to run &amp;lt;code&amp;gt;head -1 PVinterp0.csv&amp;lt;/code&amp;gt; first, since you may have more or fewer fields than shown here; use that header to find the column numbers (counting from 1, not 0) for x, y, z, p, u, v, w, T, sclr. For the primitive code, turbulent:&lt;br /&gt;
 awk '{print $9,$10,$11,$3,$5,$6,$7,$4,$2}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
for entropy code laminar:&lt;br /&gt;
 awk '{print $12,$13,$14,$1,$2,$3,$4,$5,$8,$9,$10,$7,$6}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
for entropy code with eddy viscosity:&lt;br /&gt;
 awk '{print $13,$14,$15,$2,$3,$4,$5,$6,$9,$10,$11,$8,$7,$1}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
Don't forget to put this file into a directory called solnTarget (which is inside 1-procs_case), and to turn on the flag in solver.inp &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; for the single step mentioned above. Finally, if you are worried about that one step altering your solution, recent versions of the code can take&lt;br /&gt;
      iexec : 0&lt;br /&gt;
or&lt;br /&gt;
     Number of Timesteps: 0&lt;br /&gt;
to avoid taking any actual steps. Note, however, that only very recent versions of the code have the iexec conditional moved AFTER the loading of the interpolated solution; if yours does not, it should be straightforward to find and move that conditional. Alternatively, the second option also skips the time stepping but writes the solution AFTER the boundary conditions are applied, which is useful for confirming that the intended BCs are set (iexec: 0 will not detect this).&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2104</id>
		<title>Interpolate Solution from Different Mesh</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2104"/>
				<updated>2025-02-20T16:42:35Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Alternative to Using MATLAB for Reformatting */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This will be a general page for how to interpolate solutions from different meshes onto a new mesh. &lt;br /&gt;
Those meshes are assumed to be of the same domain. &lt;br /&gt;
&lt;br /&gt;
The generic terms for the two meshes are &amp;quot;source&amp;quot; and &amp;quot;target&amp;quot;, where source has the desired solution data and target is the mesh that will be receiving. &lt;br /&gt;
&lt;br /&gt;
== Laminar, Incompressible, Semi-structured Mesh ==&lt;br /&gt;
This section will assume that the source mesh is in a structured ijk form. In the future, this may be expanded to meshes created by MGEN.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview ===&lt;br /&gt;
&lt;br /&gt;
# A CSV of the solution must be created. This must be done on a serial running version of ParaView '''and''' must be done after a MergeBlocks filter is done.&lt;br /&gt;
# A special solution file is created via &amp;lt;code&amp;gt;Sort2StructuredGrid&amp;lt;/code&amp;gt;, which will take in the csv from step 1 and the ordered coordinates of the source file from Matlab. &lt;br /&gt;
#* Effectively, it creates a single solution file in the same ordering as the Matlab points. The importance of this is significant as the executable used in the next step depends on some expected ordering. If the ordering is different, a new executable will need to be used/created. &lt;br /&gt;
# The interpolation is performed onto the new grid via &amp;lt;code&amp;gt;par3DInterp3&amp;lt;/code&amp;gt;, which creates &amp;lt;code&amp;gt;solInterp.&amp;lt;1-nparts&amp;gt;&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
# The interpolation is then used by PHASTA by setting &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;. &lt;br /&gt;
#*This should only be done for 1 timestep, as it will continue to reset the IC on every subsequent timestep. The &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory needs to be symlinked into the &amp;lt;code&amp;gt;-procs_case&amp;lt;/code&amp;gt; directory for this to work.&lt;br /&gt;
&lt;br /&gt;
==== 1. Create CSV ====&lt;br /&gt;
&lt;br /&gt;
* Created using ParaView&lt;br /&gt;
* PV must be running in Serial mode&lt;br /&gt;
** Otherwise the CSV will not be in the correct order and possibly have duplicated points&lt;br /&gt;
# Load in source dataset&lt;br /&gt;
# Apply the MergeBlocks filter&lt;br /&gt;
# Save dataset as a csv with 12 digits of precision in scientific notation&lt;br /&gt;
#* Make sure the csv is in &amp;quot;pressure, u0, u1, u2, x0, x1, x2&amp;quot; format&lt;br /&gt;
#* This can be done by only loading the pressure and velocity fields into Paraview (either by editing the &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; or in the data load menu in Paraview).&lt;br /&gt;
# Replace the commas with spaces&lt;br /&gt;
#* Can use &amp;lt;code&amp;gt;vim&amp;lt;/code&amp;gt; or run &amp;lt;code&amp;gt;sed -i 's/,/\ /g' test.csv&amp;lt;/code&amp;gt;&lt;br /&gt;
#* Though the next step looks for a .csv extension, it is a fortran formatted read and actually needs those commas replaced by spaces&lt;br /&gt;
# Remove the first line of the csv file&lt;br /&gt;
#* Done in vi or sed (&amp;lt;code&amp;gt;sed -i 1,1d test.csv&amp;lt;/code&amp;gt;) or tail (&amp;lt;code&amp;gt;tail -n +2 test.csv &amp;gt; trimmedLine1.csv&amp;lt;/code&amp;gt;) &lt;br /&gt;
#* Needed for the next program&lt;br /&gt;
#* Better yet, we should change the next code to read past that header line and then delete this line when that is complete. We should also consider the solution in [https://stackoverflow.com/a/46451049/7564988 this StackOverflow answer] as it shows how to make a data structure that could read the csv lines directly in the next program and avoid ALL this file manipulation with modern fortran (see HighPerformanceMark's answer).&lt;br /&gt;
&lt;br /&gt;
==== 2. Create Structured Solution File ====&lt;br /&gt;
&lt;br /&gt;
'''Note:''' These instructions will be for the &amp;lt;code&amp;gt;parallelSortDNSzBinJames&amp;lt;/code&amp;gt; executable, which has some highly specific requirements and command inputs. &lt;br /&gt;
&lt;br /&gt;
This step will take the data from the source solution file and put it into a format/order that makes the interpolation process much faster.&lt;br /&gt;
&lt;br /&gt;
# Symlink the source mesh's ordered coordinate file as &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may come from the files used to create the mesh (ie. for [[Tutorial_Video_Overviews#MatLabMeshAndConvert.mov|matchedNodeElementReader]])&lt;br /&gt;
#* ''(Untested)''This may also be created using the coordinates from the solution file&lt;br /&gt;
# Rename/symlink csv to be the correct file name (in my specific case, it was &amp;lt;code&amp;gt;dnsSolution1procLongFort.csv&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Create an interactive job on whatever machine you're needing to run on (ALCF Cooley in this case)&lt;br /&gt;
# Load appropriate MPI environment variables (&amp;lt;code&amp;gt;soft add +mvapich2&amp;lt;/code&amp;gt; for Cooley)&lt;br /&gt;
# Run the executable via &amp;lt;code&amp;gt;mpirun -np [nprocs] [executable path] [executable inputs]&amp;lt;/code&amp;gt; &lt;br /&gt;
#* This will produce &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Concatenate &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files '''in order''' into single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file&lt;br /&gt;
#* The individual &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files need to be concatenated into a single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file, which can be done (in zsh at least) via &amp;lt;code&amp;gt;cat source.sln.{1..[MPIRanks]} &amp;gt; source.sln&amp;lt;/code&amp;gt; (or equivalently &amp;lt;code&amp;gt;echo source.sln.{1..[MPIRanks]} | xargs cat &amp;gt; source.sln&amp;lt;/code&amp;gt;). Note these files '''must''' be concatenated in order of rank, otherwise the result will be out of sequence.&lt;br /&gt;
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 24 /lus/theta-fs0/projects/PHASTA_aesp/Utilities/Sort2StructuredGrid/parallelSortDNSzBinJames 47822547 47822547 212 0.0291&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[nlines of csv] [nlines of ordered.crd] [number of elements in z] [z domain width]&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Note:''' These inputs are specific to this executable; a different executable may expect different inputs&lt;br /&gt;
* Also note that the &amp;lt;code&amp;gt;[number of elements in z]&amp;lt;/code&amp;gt; is equivalent to &amp;lt;code&amp;gt;nsons&amp;lt;/code&amp;gt; - 1 or the number of nodes in z - 1.&lt;br /&gt;
&lt;br /&gt;
==== 3. Create Interpolated files ====&lt;br /&gt;
&lt;br /&gt;
This step will create the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files used by PHASTA to perform the interpolation.&lt;br /&gt;
&lt;br /&gt;
# Create &amp;lt;code&amp;gt;Interpolate.../[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory in the target's Chef directory and move to that directory&lt;br /&gt;
# Symlink the target's POSIX &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files (that were created by Chef) to the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory&lt;br /&gt;
#* The &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files should be copied in the exact fashion that they are in the Chef created &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory, including if they're &amp;quot;fanned out&amp;quot;&lt;br /&gt;
# Create a directory called &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may be corrected in the future, but currently if &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; is not present the job will fail&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; to the directory and the &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt; file as &amp;lt;code&amp;gt;source.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;phInterp&amp;lt;/code&amp;gt; via mpirun on an interactive job.&lt;br /&gt;
# This creates a series of &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory&lt;br /&gt;
&lt;br /&gt;
The file format for &amp;lt;code&amp;gt;solInterp.N&amp;lt;/code&amp;gt; is quite simple. Each line corresponds to the node number in the partition and the file itself has 7 columns:&lt;br /&gt;
&lt;br /&gt;
 coord_x coord_y coord_z pressure velocity_x velocity_y velocity_z&lt;br /&gt;
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 64 /path/to/phInterp 16 799 281 213 0.452&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[target parts per MPIProc] [source nx] [source ny] [source nz] [z Length]&amp;lt;/code&amp;gt;&lt;br /&gt;
* Note that the number of processes given to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; times the &amp;lt;code&amp;gt;[target parts per MPIProc]&amp;lt;/code&amp;gt; must be equal to the number of target partitions. In this case, the target partition was 1024, so 64*16 = 1024&lt;br /&gt;
* The way it works is that each of the 64 MPIProcs is given 16 partitions to interpolate.&lt;br /&gt;
&lt;br /&gt;
==== 4. Interpolate the solution in PHASTA ====&lt;br /&gt;
&lt;br /&gt;
This step will take the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files and load them as initial conditions.&lt;br /&gt;
&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory into the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory. &lt;br /&gt;
# Add/uncomment &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; &lt;br /&gt;
# Run PHASTA for a few timesteps and write out &amp;lt;code&amp;gt;restart-dat.[target nprocs]&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Remove/comment out the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; line from the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;&lt;br /&gt;
# The new restart files have the interpolated solution&lt;br /&gt;
#* Note that if you forget to remove the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; statement from &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;, PHASTA will overwrite the existing solution in the restart files&lt;br /&gt;
&lt;br /&gt;
== Turbulent, Compressible, Unstructured Mesh ==&lt;br /&gt;
&lt;br /&gt;
Currently, the only version of PHASTA that is set up to handle this type of solution transfer is the &amp;lt;code&amp;gt;conrad54418/connor_primitive&amp;lt;/code&amp;gt; branch (as of 6/22/22). Creating the &amp;lt;code&amp;gt;solInterp.1&amp;lt;/code&amp;gt; file can be automated via a script or done manually (see below).&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (scripted) ===&lt;br /&gt;
&lt;br /&gt;
Scripts have been made for use with the compressible, but not turbulent, version of the code. An example folder with the scripts included can be found at &amp;lt;code&amp;gt;/project/tutorials/ParaviewSolutionTransfer&amp;lt;/code&amp;gt;. The three scripts needed are called interpolateSol.py, pvCSV2customSLN_Nproc_prim.m, and parRunAll.sh. The only script of those three you need to run is parRunAll.sh, which can be found in &amp;lt;code&amp;gt;targetFolder/solutionInterp&amp;lt;/code&amp;gt;. You then need to run PHASTA via the runPhasta.sh script, which will produce a restart.1.1 file that contains the transferred solution on the new mesh. The manual section below provides insight about what the scripts are doing.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (manual) ===&lt;br /&gt;
&lt;br /&gt;
# Load existing solution into ParaView. Use the 'MergeBlocks' filter to convert to serial case.&lt;br /&gt;
# Load the target case .pht file into ParaView&lt;br /&gt;
# Use the 'Resample from dataset' filter and select the source (new mesh file) and target (MergeBlocks) blocks accordingly&lt;br /&gt;
# Save the output as a .csv file. Write a single time step and select scientific notation with 12 digits. NOTE: sometimes ParaView writes zeros at points where it cannot quite find the closest point during the solution transfer. In this case, the .csv file needs to be edited by hand to replace the zero pressure and temperature values with realistic ones.&lt;br /&gt;
# The .csv file can now be reformatted and renamed with MATLAB to match the expected form of solInterp.1.&lt;br /&gt;
# Advance PHASTA one step in serial, then convert to desired number of processors using Chef.&lt;br /&gt;
&lt;br /&gt;
==== Alternative to Using MATLAB for Reformatting ====&lt;br /&gt;
&lt;br /&gt;
For those wanting to skip the MATLAB step, &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt; can also do the necessary column manipulation.&lt;br /&gt;
&lt;br /&gt;
Copy your file (in case you goof up):&lt;br /&gt;
 cp PVinterp0.csv test.dat&lt;br /&gt;
Remove the header line:&lt;br /&gt;
 sed -i 1,1d test.dat&lt;br /&gt;
Replace the commas with spaces:&lt;br /&gt;
 sed -i 's/,/\ /g' test.dat&lt;br /&gt;
Rearrange the columns to match what solInterp.1 expects. It is a good idea to run &amp;lt;code&amp;gt;head -1 PVinterp0.csv&amp;lt;/code&amp;gt; first, since you may have more or fewer fields than shown here; use that header to find the column numbers (counting from 1, not 0) for x, y, z, p, u, v, w, T, sclr. For the primitive code, turbulent:&lt;br /&gt;
 awk '{print $9,$10,$11,$3,$5,$6,$7,$4,$2}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
for entropy code laminar:&lt;br /&gt;
 awk '{print $12,$13,$14,$1,$2,$3,$4,$5,$8,$9,$10,$7,$6}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
for entropy code with eddy viscosity:&lt;br /&gt;
 awk '{print $13,$14,$15,$2,$3,$4,$5,$6,$9,$10,$11,$8,$7,$1}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
Don't forget to put this file into a directory called solnTarget, and to turn on the flag in solver.inp &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; for the single step mentioned above. Finally, if you are worried about that one step altering your solution, recent versions of the code can take&lt;br /&gt;
      iexec : 0&lt;br /&gt;
or&lt;br /&gt;
     Number of Timesteps: 0&lt;br /&gt;
to avoid taking any actual steps. Note, however, that only very recent versions of the code have the iexec conditional moved AFTER the loading of the interpolated solution; if yours does not, it should be straightforward to find and move that conditional. Alternatively, the second option also skips the time stepping but writes the solution AFTER the boundary conditions are applied, which is useful for confirming that the intended BCs are set (iexec: 0 will not detect this).&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2103</id>
		<title>Interpolate Solution from Different Mesh</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2103"/>
				<updated>2025-02-02T13:56:27Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Alternative to Using MATLAB for Reformatting */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a general page on how to interpolate solutions from different meshes onto a new mesh. &lt;br /&gt;
Those meshes are assumed to be of the same domain. &lt;br /&gt;
&lt;br /&gt;
The generic terms for the two meshes are &amp;quot;source&amp;quot; and &amp;quot;target&amp;quot;: the source has the desired solution data and the target is the mesh that will receive it. &lt;br /&gt;
&lt;br /&gt;
== Laminar, Incompressible, Semi-structured Mesh ==&lt;br /&gt;
This section will assume that the source mesh is in a structured ijk form. In the future, this may be expanded to meshes created by MGEN.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview ===&lt;br /&gt;
&lt;br /&gt;
# A CSV of the solution must be created. This must be done on a serial running version of ParaView '''and''' must be done after a MergeBlocks filter is done.&lt;br /&gt;
# A special solution file is created via &amp;lt;code&amp;gt;Sort2StructuredGrid&amp;lt;/code&amp;gt;, which will take in the csv from step 1 and the ordered coordinates of the source file from Matlab. &lt;br /&gt;
#* Effectively, it creates a single solution file in the same ordering as the Matlab points. The importance of this is significant as the executable used in the next step depends on some expected ordering. If the ordering is different, a new executable will need to be used/created. &lt;br /&gt;
# The interpolation is performed onto the new grid via &amp;lt;code&amp;gt;par3DInterp3&amp;lt;/code&amp;gt;, which creates &amp;lt;code&amp;gt;solInterp.&amp;lt;1-nparts&amp;gt;&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
# The interpolation is then used by PHASTA by setting &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;. &lt;br /&gt;
#*This should only be done for 1 timestep, as it will continue to reset the IC on all subsequent timesteps. The &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory needs to be symlinked into the &amp;lt;code&amp;gt;-procs_case&amp;lt;/code&amp;gt; directory for this to work.&lt;br /&gt;
&lt;br /&gt;
==== 1. Create CSV ====&lt;br /&gt;
&lt;br /&gt;
* Created using ParaView&lt;br /&gt;
* PV must be running in Serial mode&lt;br /&gt;
** Otherwise the CSV will not be in the correct order and possibly have duplicated points&lt;br /&gt;
# Load in source dataset&lt;br /&gt;
# Apply the MergeBlocks filter&lt;br /&gt;
# Save dataset as a csv with 12 digits of precision in scientific notation&lt;br /&gt;
#* Make sure the csv is in &amp;quot;pressure, u0, u1, u2, x0, x1, x2&amp;quot; format&lt;br /&gt;
#* This can be done by only loading the pressure and velocity fields into Paraview (either by editing the &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; or in the data load menu in Paraview).&lt;br /&gt;
# Replace the commas with spaces&lt;br /&gt;
#* Can use &amp;lt;code&amp;gt;vim&amp;lt;/code&amp;gt; or run &amp;lt;code&amp;gt;sed -i 's/,/\ /g' test.csv&amp;lt;/code&amp;gt;&lt;br /&gt;
#* Though the next step looks for a .csv extension, it is a fortran formatted read and actually needs those commas replaced by spaces&lt;br /&gt;
# Remove the first line of the csv file&lt;br /&gt;
#* Done in vi or sed (&amp;lt;code&amp;gt;sed -i 1,1d test.csv&amp;lt;/code&amp;gt;) or tail (&amp;lt;code&amp;gt;tail -n +2 test.csv &amp;gt; trimmedLine1.csv&amp;lt;/code&amp;gt;) &lt;br /&gt;
#* Needed for the next program&lt;br /&gt;
#* Better yet, we should change the next code to read past that header line and then delete this line when that is complete. We should also consider the solution in [https://stackoverflow.com/a/46451049/7564988 this StackOverflow answer] as it shows how to make a data structure that could read the csv lines directly in the next program and avoid ALL this file manipulation with modern fortran (see HighPerformanceMark's answer).&lt;br /&gt;
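The comma-replacement and header-removal steps above can also be combined into one non-destructive pipeline. A minimal sketch, using a made-up one-row file named &lt;code&gt;demo.csv&lt;/code&gt; in place of the real ParaView export:&lt;br /&gt;

```shell
# Stand-in for the ParaView export: one header line plus one data row
printf 'pressure,u0,u1,u2\n1.0e+00,2.0e+00,3.0e+00,4.0e+00\n' > demo.csv
# Drop the header (tail -n +2) and turn every comma into a space (tr)
tail -n +2 demo.csv | tr ',' ' ' > demo.dat
cat demo.dat   # -> 1.0e+00 2.0e+00 3.0e+00 4.0e+00
```

Unlike the in-place sed commands, this leaves the original export untouched.&lt;br /&gt;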
&lt;br /&gt;
==== 2. Create Structured Solution File ====&lt;br /&gt;
&lt;br /&gt;
'''Note:''' These instructions will be for the &amp;lt;code&amp;gt;parallelSortDNSzBinJames&amp;lt;/code&amp;gt; executable, which has some highly specific requirements and command inputs. &lt;br /&gt;
&lt;br /&gt;
This step takes the data from the source solution file and puts it in a format/order that makes the interpolation process much faster.&lt;br /&gt;
&lt;br /&gt;
# Symlink the source mesh's ordered coordinate file as &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may come from the files used to create the mesh (ie. for [[Tutorial_Video_Overviews#MatLabMeshAndConvert.mov|matchedNodeElementReader]])&lt;br /&gt;
#* ''(Untested)'' This may also be created using the coordinates from the solution file&lt;br /&gt;
# Rename/symlink csv to be the correct file name (in my specific case, it was &amp;lt;code&amp;gt;dnsSolution1procLongFort.csv&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Create an interactive job on the machine you need to run on (ALCF Cooley in this case)&lt;br /&gt;
# Load the appropriate MPI environment variables (&amp;lt;code&amp;gt;soft add +mvapich2&amp;lt;/code&amp;gt; for Cooley)&lt;br /&gt;
# Run the executable via &amp;lt;code&amp;gt;mpirun -np [nprocs] [executable path] [executable inputs]&amp;lt;/code&amp;gt; &lt;br /&gt;
#* This will produce &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Concatenate &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files '''in order''' into single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file&lt;br /&gt;
#* The individual &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files need to be concatenated into a single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file, which can be done (in zsh at least) via &amp;lt;code&amp;gt;echo source.sln.{1..[MPIRanks]} | xargs cat &amp;gt; source.sln&amp;lt;/code&amp;gt; (or the equivalent &amp;lt;code&amp;gt;cat source.sln.{1..[MPIRanks]} &amp;gt; source.sln&amp;lt;/code&amp;gt;). Note these files '''must''' be concatenated in order of rank, otherwise the result will be out of sequence.&lt;br /&gt;
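The in-order concatenation can be sketched as follows; the three tiny &lt;code&gt;demo.sln.N&lt;/code&gt; files here are placeholders for the real per-rank output:&lt;br /&gt;

```shell
# Simulate per-rank output files (3 ranks stand in for the real [MPIRanks])
for i in 1 2 3; do echo "rank $i data" > demo.sln.$i; done
# Concatenate strictly in rank order; a numeric loop avoids glob-ordering
# surprises such as demo.sln.10 sorting before demo.sln.2
rm -f demo.sln
for i in $(seq 1 3); do cat demo.sln.$i >> demo.sln; done
head -1 demo.sln   # -> rank 1 data
```

The numeric loop form works in any POSIX shell, not just zsh.&lt;br /&gt;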
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 24 /lus/theta-fs0/projects/PHASTA_aesp/Utilities/Sort2StructuredGrid/parallelSortDNSzBinJames 47822547 47822547 212 0.0291&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[nlines of csv] [nlines of ordered.crd] [number of elements in z] [z domain width]&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Note:''' These inputs are specific to this executable. A different executable will take different inputs&lt;br /&gt;
* Also note that &amp;lt;code&amp;gt;[number of elements in z]&amp;lt;/code&amp;gt; is &amp;lt;code&amp;gt;nsons&amp;lt;/code&amp;gt; - 1, i.e. the number of nodes in z minus 1.&lt;br /&gt;
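The two line-count arguments can be read off with wc rather than counted by hand. A sketch with small stand-in files (&lt;code&gt;demo.csv&lt;/code&gt; and &lt;code&gt;demo.crd&lt;/code&gt; are placeholders; the &lt;code&gt;./&lt;/code&gt; executable path, element count, and z width are taken from the example above):&lt;br /&gt;

```shell
# Stand-in files; in practice these are the exported csv and ordered.crd
printf '1\n2\n3\n' > demo.csv
printf '1\n2\n3\n' > demo.crd
NCSV=$(wc -l demo.csv | awk '{print $1}')
NCRD=$(wc -l demo.crd | awk '{print $1}')
# Echo (rather than run) the command with the counts filled in; the element
# count and z width (212, 0.0291 here) still come from the mesh setup
echo mpirun -np 24 ./parallelSortDNSzBinJames "$NCSV" "$NCRD" 212 0.0291
```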
&lt;br /&gt;
==== 3. Create Interpolated files ====&lt;br /&gt;
&lt;br /&gt;
This step will create the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files used by PHASTA to perform the interpolation.&lt;br /&gt;
&lt;br /&gt;
# Create &amp;lt;code&amp;gt;Interpolate.../[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory in the target's Chef directory and move to that directory&lt;br /&gt;
# Symlink the target's POSIX &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files (that were created by Chef) to the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory&lt;br /&gt;
#* The &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files should be copied in the exact fashion that they are in the Chef created &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory, including if they're &amp;quot;fanned out&amp;quot;&lt;br /&gt;
# Create a directory called &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may be corrected in the future, but currently if &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; is not present the job will fail&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; to the directory and the &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt; file as &amp;lt;code&amp;gt;source.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;phInterp&amp;lt;/code&amp;gt; via mpirun on an interactive job.&lt;br /&gt;
# This creates a series of &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory&lt;br /&gt;
&lt;br /&gt;
The file format for &amp;lt;code&amp;gt;solInterp.N&amp;lt;/code&amp;gt; is quite simple. Each line corresponds to a node in the partition, and the file has 7 columns:&lt;br /&gt;
&lt;br /&gt;
 coord_x coord_y coord_z pressure velocity_x velocity_y velocity_z&lt;br /&gt;
&lt;br /&gt;
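A quick way to sanity-check a generated file is to count fields per line; every line should have exactly 7. A sketch with a single made-up node in a scratch file (&lt;code&gt;demo.interp&lt;/code&gt; stands in for a real solInterp file):&lt;br /&gt;

```shell
# One-node stand-in for a real solInterp file (x y z p u v w)
printf '0.0 0.1 0.2 101325.0 1.0 0.0 0.0\n' > demo.interp
# Count lines whose field count is not 7; 0 means the format is consistent
awk 'NF != 7 { bad++ } END { print (bad ? bad : 0), "bad lines" }' demo.interp
```

With a correctly formatted file this prints 0 bad lines.&lt;br /&gt;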
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 64 /path/to/phInterp 16 799 281 213 0.452&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[target parts per MPIProc] [source nx] [source ny] [source nz] [z Length]&amp;lt;/code&amp;gt;&lt;br /&gt;
* Note that the number of processes given to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; times the &amp;lt;code&amp;gt;[target parts per MPIProc]&amp;lt;/code&amp;gt; must equal the number of target partitions. In this case, the target had 1024 partitions, so 64*16 = 1024&lt;br /&gt;
* The way it works is that each of the 64 MPIProcs is given 16 partitions to interpolate.&lt;br /&gt;
&lt;br /&gt;
==== 4. Interpolate the solution in PHASTA ====&lt;br /&gt;
&lt;br /&gt;
This step will take the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files and load them as initial conditions.&lt;br /&gt;
&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory into the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory. &lt;br /&gt;
# Add/uncomment &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; &lt;br /&gt;
# Run PHASTA for a few timesteps and write out &amp;lt;code&amp;gt;restart-dat.[target nprocs]&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Remove/comment out the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; line from the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;&lt;br /&gt;
# The new restart files have the interpolated solution&lt;br /&gt;
#* Note that if you forget to remove the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; statement from &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;, PHASTA will overwrite the existing solution in the restart files&lt;br /&gt;
&lt;br /&gt;
== Turbulent, Compressible, Unstructured Mesh ==&lt;br /&gt;
&lt;br /&gt;
Currently, the only version of PHASTA that is set up to handle this type of solution transfer is the &amp;lt;code&amp;gt;conrad54418/connor_primitive&amp;lt;/code&amp;gt; branch (as of 6/22/22). Creating the &amp;lt;code&amp;gt;solInterp.1&amp;lt;/code&amp;gt; file can be automated via a script or done manually (see below).&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (scripted) ===&lt;br /&gt;
&lt;br /&gt;
Scripts have been made for use with the compressible, but not turbulent, version of the code. An example folder with the scripts included can be found at &amp;lt;code&amp;gt;/project/tutorials/ParaviewSolutionTransfer&amp;lt;/code&amp;gt;. The three scripts needed are &amp;lt;code&amp;gt;interpolateSol.py&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pvCSV2customSLN_Nproc_prim.m&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;parRunAll.sh&amp;lt;/code&amp;gt;. The only one of the three you need to run is &amp;lt;code&amp;gt;parRunAll.sh&amp;lt;/code&amp;gt;, which can be found in &amp;lt;code&amp;gt;targetFolder/solutionInterp&amp;lt;/code&amp;gt;. You then need to run PHASTA via the &amp;lt;code&amp;gt;runPhasta.sh&amp;lt;/code&amp;gt; script, which will produce a &amp;lt;code&amp;gt;restart.1.1&amp;lt;/code&amp;gt; file containing the transferred solution on the new mesh. The manual section below explains what the scripts are doing.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (manual) ===&lt;br /&gt;
&lt;br /&gt;
# Load existing solution into ParaView. Use the 'MergeBlocks' filter to convert to serial case.&lt;br /&gt;
# Load the target case .pht file into ParaView&lt;br /&gt;
# Use the 'Resample from dataset' filter and select the source (new mesh file) and target (MergeBlocks) blocks accordingly&lt;br /&gt;
# Save the output as a .csv file. Write a single time step and select scientific notation with 12 decimals. NOTE: sometimes ParaView cannot quite find the closest point during the solution transfer and writes zeros instead. In this case, the .csv file needs to be manually edited to replace the zeros for pressure and temperature with realistic values.&lt;br /&gt;
# The .csv file can now be reformatted and renamed with MATLAB to match the expected form of solInterp.1.&lt;br /&gt;
# Advance PHASTA one step in serial, then convert to desired number of processors using Chef.&lt;br /&gt;
&lt;br /&gt;
==== Alternative to Using MATLAB for Reformatting ====&lt;br /&gt;
&lt;br /&gt;
For those wanting to skip the MATLAB step, &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt; can also do the necessary column manipulation.&lt;br /&gt;
&lt;br /&gt;
Copy your file (in case you goof up):&lt;br /&gt;
 cp PVinterp0.csv test.dat&lt;br /&gt;
Remove the header line:&lt;br /&gt;
 sed -i 1,1d test.dat&lt;br /&gt;
Replace the commas with spaces:&lt;br /&gt;
 sed -i 's/,/\ /g' test.dat&lt;br /&gt;
Rearrange the columns into the order that solInterp.1 expects. It is a good idea to run &amp;lt;code&amp;gt;head -1 PVinterp0.csv&amp;lt;/code&amp;gt; first, since your file may have more or fewer fields than in the examples below; use that header to find the column numbers (starting from 1, not 0) for x, y, z, p, u, v, w, T, sclr. For the primitive code, turbulent:&lt;br /&gt;
 awk '{print $9,$10,$11,$3,$5,$6,$7,$4,$2}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
for entropy code laminar:&lt;br /&gt;
 awk '{print $12,$13,$14,$1,$2,$3,$4,$5,$8,$9,$10,$7,$6}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
for entropy code with eddy viscosity:&lt;br /&gt;
 awk '{print $14,$15,$16,$3,$4,$5,$6,$7,$10,$11,$12,$9,$8,$2}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
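As a sanity check, the primitive-code reordering can be tried on a one-row stand-in file first. The field names below are placeholders marking each column, not real data, and &lt;code&gt;demo.dat&lt;/code&gt; is a scratch file so the real test.dat is not touched:&lt;br /&gt;

```shell
# 11 labeled fields in the csv order assumed above:
# $2=sclr $3=p $4=T $5-$7=u,v,w $9-$11=x,y,z (others unused)
printf 'f1 sclr p T u v w f8 x y z\n' > demo.dat
awk '{print $9,$10,$11,$3,$5,$6,$7,$4,$2}' demo.dat   # -> x y z p u v w T sclr
```

If the printed order does not match the target order, re-read the header and adjust the field numbers.&lt;br /&gt;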
Don’t forget to put this file into a directory called solnTarget and to turn on the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; flag in solver.inp for the single loading step mentioned above. Finally, if you are worried about that one step disturbing your solution, recent versions of the code accept&lt;br /&gt;
      iexec : 0&lt;br /&gt;
or&lt;br /&gt;
     Number of Timesteps: 0&lt;br /&gt;
to take no actual steps. Note, however, that only very recent versions of the code have the iexec conditional moved ''after'' the loading of the interpolated solution, but it should be fairly easy to figure out where to move that conditional. Alternatively, the second option also skips the time stepping and writes the solution ''after'' applying the boundary conditions, which can be useful for confirming that you have the intended BCs set (&amp;lt;code&amp;gt;iexec : 0&amp;lt;/code&amp;gt; will not detect this).&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2102</id>
		<title>Interpolate Solution from Different Mesh</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2102"/>
				<updated>2025-02-02T13:54:24Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Alternative to Using MATLAB for Reformatting */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a general page on how to interpolate solutions from different meshes onto a new mesh. &lt;br /&gt;
Those meshes are assumed to be of the same domain. &lt;br /&gt;
&lt;br /&gt;
The generic terms for the two meshes are &amp;quot;source&amp;quot; and &amp;quot;target&amp;quot;: the source has the desired solution data and the target is the mesh that will receive it. &lt;br /&gt;
&lt;br /&gt;
== Laminar, Incompressible, Semi-structured Mesh ==&lt;br /&gt;
This section will assume that the source mesh is in a structured ijk form. In the future, this may be expanded to meshes created by MGEN.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview ===&lt;br /&gt;
&lt;br /&gt;
# A CSV of the solution must be created. This must be done on a serial running version of ParaView '''and''' must be done after a MergeBlocks filter is done.&lt;br /&gt;
# A special solution file is created via &amp;lt;code&amp;gt;Sort2StructuredGrid&amp;lt;/code&amp;gt;, which will take in the csv from step 1 and the ordered coordinates of the source file from Matlab. &lt;br /&gt;
#* Effectively, it creates a single solution file in the same ordering as the Matlab points. The importance of this is significant as the executable used in the next step depends on some expected ordering. If the ordering is different, a new executable will need to be used/created. &lt;br /&gt;
# The interpolation is performed onto the new grid via &amp;lt;code&amp;gt;par3DInterp3&amp;lt;/code&amp;gt;, which creates &amp;lt;code&amp;gt;solInterp.&amp;lt;1-nparts&amp;gt;&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
# The interpolation is then used by PHASTA by setting &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;. &lt;br /&gt;
#*This should only be done for 1 timestep, as it will continue to reset the IC on all subsequent timesteps. The &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory needs to be symlinked into the &amp;lt;code&amp;gt;-procs_case&amp;lt;/code&amp;gt; directory for this to work.&lt;br /&gt;
&lt;br /&gt;
==== 1. Create CSV ====&lt;br /&gt;
&lt;br /&gt;
* Created using ParaView&lt;br /&gt;
* PV must be running in Serial mode&lt;br /&gt;
** Otherwise the CSV will not be in the correct order and possibly have duplicated points&lt;br /&gt;
# Load in source dataset&lt;br /&gt;
# Apply the MergeBlocks filter&lt;br /&gt;
# Save dataset as a csv with 12 digits of precision in scientific notation&lt;br /&gt;
#* Make sure the csv is in &amp;quot;pressure, u0, u1, u2, x0, x1, x2&amp;quot; format&lt;br /&gt;
#* This can be done by only loading the pressure and velocity fields into Paraview (either by editing the &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; or in the data load menu in Paraview).&lt;br /&gt;
# Replace the commas with spaces&lt;br /&gt;
#* Can use &amp;lt;code&amp;gt;vim&amp;lt;/code&amp;gt; or run &amp;lt;code&amp;gt;sed -i 's/,/\ /g' test.csv&amp;lt;/code&amp;gt;&lt;br /&gt;
#* Though the next step looks for a .csv extension, it is a fortran formatted read and actually needs those commas replaced by spaces&lt;br /&gt;
# Remove the first line of the csv file&lt;br /&gt;
#* Done in vi or sed (&amp;lt;code&amp;gt;sed -i 1,1d test.csv&amp;lt;/code&amp;gt;) or tail (&amp;lt;code&amp;gt;tail -n +2 test.csv &amp;gt; trimmedLine1.csv&amp;lt;/code&amp;gt;) &lt;br /&gt;
#* Needed for the next program&lt;br /&gt;
#* Better yet, we should change the next code to read past that header line and then delete this line when that is complete. We should also consider the solution in [https://stackoverflow.com/a/46451049/7564988 this StackOverflow answer] as it shows how to make a data structure that could read the csv lines directly in the next program and avoid ALL this file manipulation with modern fortran (see HighPerformanceMark's answer).&lt;br /&gt;
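The comma-replacement and header-removal steps above can also be combined into one non-destructive pipeline. A minimal sketch, using a made-up one-row file named &lt;code&gt;demo.csv&lt;/code&gt; in place of the real ParaView export:&lt;br /&gt;

```shell
# Stand-in for the ParaView export: one header line plus one data row
printf 'pressure,u0,u1,u2\n1.0e+00,2.0e+00,3.0e+00,4.0e+00\n' > demo.csv
# Drop the header (tail -n +2) and turn every comma into a space (tr)
tail -n +2 demo.csv | tr ',' ' ' > demo.dat
cat demo.dat   # -> 1.0e+00 2.0e+00 3.0e+00 4.0e+00
```

Unlike the in-place sed commands, this leaves the original export untouched.&lt;br /&gt;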
&lt;br /&gt;
==== 2. Create Structured Solution File ====&lt;br /&gt;
&lt;br /&gt;
'''Note:''' These instructions will be for the &amp;lt;code&amp;gt;parallelSortDNSzBinJames&amp;lt;/code&amp;gt; executable, which has some highly specific requirements and command inputs. &lt;br /&gt;
&lt;br /&gt;
This step takes the data from the source solution file and puts it in a format/order that makes the interpolation process much faster.&lt;br /&gt;
&lt;br /&gt;
# Symlink the source mesh's ordered coordinate file as &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may come from the files used to create the mesh (ie. for [[Tutorial_Video_Overviews#MatLabMeshAndConvert.mov|matchedNodeElementReader]])&lt;br /&gt;
#* ''(Untested)'' This may also be created using the coordinates from the solution file&lt;br /&gt;
# Rename/symlink csv to be the correct file name (in my specific case, it was &amp;lt;code&amp;gt;dnsSolution1procLongFort.csv&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Create an interactive job on the machine you need to run on (ALCF Cooley in this case)&lt;br /&gt;
# Load the appropriate MPI environment variables (&amp;lt;code&amp;gt;soft add +mvapich2&amp;lt;/code&amp;gt; for Cooley)&lt;br /&gt;
# Run the executable via &amp;lt;code&amp;gt;mpirun -np [nprocs] [executable path] [executable inputs]&amp;lt;/code&amp;gt; &lt;br /&gt;
#* This will produce &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Concatenate &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files '''in order''' into single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file&lt;br /&gt;
#* The individual &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files need to be concatenated into a single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file, which can be done (in zsh at least) via &amp;lt;code&amp;gt;echo source.sln.{1..[MPIRanks]} | xargs cat &amp;gt; source.sln&amp;lt;/code&amp;gt; (or the equivalent &amp;lt;code&amp;gt;cat source.sln.{1..[MPIRanks]} &amp;gt; source.sln&amp;lt;/code&amp;gt;). Note these files '''must''' be concatenated in order of rank, otherwise the result will be out of sequence.&lt;br /&gt;
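The in-order concatenation can be sketched as follows; the three tiny &lt;code&gt;demo.sln.N&lt;/code&gt; files here are placeholders for the real per-rank output:&lt;br /&gt;

```shell
# Simulate per-rank output files (3 ranks stand in for the real [MPIRanks])
for i in 1 2 3; do echo "rank $i data" > demo.sln.$i; done
# Concatenate strictly in rank order; a numeric loop avoids glob-ordering
# surprises such as demo.sln.10 sorting before demo.sln.2
rm -f demo.sln
for i in $(seq 1 3); do cat demo.sln.$i >> demo.sln; done
head -1 demo.sln   # -> rank 1 data
```

The numeric loop form works in any POSIX shell, not just zsh.&lt;br /&gt;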
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 24 /lus/theta-fs0/projects/PHASTA_aesp/Utilities/Sort2StructuredGrid/parallelSortDNSzBinJames 47822547 47822547 212 0.0291&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[nlines of csv] [nlines of ordered.crd] [number of elements in z] [z domain width]&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Note:''' These inputs are specific to this executable. A different executable will take different inputs&lt;br /&gt;
* Also note that &amp;lt;code&amp;gt;[number of elements in z]&amp;lt;/code&amp;gt; is &amp;lt;code&amp;gt;nsons&amp;lt;/code&amp;gt; - 1, i.e. the number of nodes in z minus 1.&lt;br /&gt;
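The two line-count arguments can be read off with wc rather than counted by hand. A sketch with small stand-in files (&lt;code&gt;demo.csv&lt;/code&gt; and &lt;code&gt;demo.crd&lt;/code&gt; are placeholders; the &lt;code&gt;./&lt;/code&gt; executable path, element count, and z width are taken from the example above):&lt;br /&gt;

```shell
# Stand-in files; in practice these are the exported csv and ordered.crd
printf '1\n2\n3\n' > demo.csv
printf '1\n2\n3\n' > demo.crd
NCSV=$(wc -l demo.csv | awk '{print $1}')
NCRD=$(wc -l demo.crd | awk '{print $1}')
# Echo (rather than run) the command with the counts filled in; the element
# count and z width (212, 0.0291 here) still come from the mesh setup
echo mpirun -np 24 ./parallelSortDNSzBinJames "$NCSV" "$NCRD" 212 0.0291
```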
&lt;br /&gt;
==== 3. Create Interpolated files ====&lt;br /&gt;
&lt;br /&gt;
This step will create the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files used by PHASTA to perform the interpolation.&lt;br /&gt;
&lt;br /&gt;
# Create &amp;lt;code&amp;gt;Interpolate.../[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory in the target's Chef directory and move to that directory&lt;br /&gt;
# Symlink the target's POSIX &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files (that were created by Chef) to the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory&lt;br /&gt;
#* The &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files should be copied in the exact fashion that they are in the Chef created &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory, including if they're &amp;quot;fanned out&amp;quot;&lt;br /&gt;
# Create a directory called &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may be corrected in the future, but currently if &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; is not present the job will fail&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; to the directory and the &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt; file as &amp;lt;code&amp;gt;source.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;phInterp&amp;lt;/code&amp;gt; via mpirun on an interactive job.&lt;br /&gt;
# This creates a series of &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory&lt;br /&gt;
&lt;br /&gt;
The file format for &amp;lt;code&amp;gt;solInterp.N&amp;lt;/code&amp;gt; is quite simple. Each line corresponds to a node in the partition, and the file has 7 columns:&lt;br /&gt;
&lt;br /&gt;
 coord_x coord_y coord_z pressure velocity_x velocity_y velocity_z&lt;br /&gt;
&lt;br /&gt;
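A quick way to sanity-check a generated file is to count fields per line; every line should have exactly 7. A sketch with a single made-up node in a scratch file (&lt;code&gt;demo.interp&lt;/code&gt; stands in for a real solInterp file):&lt;br /&gt;

```shell
# One-node stand-in for a real solInterp file (x y z p u v w)
printf '0.0 0.1 0.2 101325.0 1.0 0.0 0.0\n' > demo.interp
# Count lines whose field count is not 7; 0 means the format is consistent
awk 'NF != 7 { bad++ } END { print (bad ? bad : 0), "bad lines" }' demo.interp
```

With a correctly formatted file this prints 0 bad lines.&lt;br /&gt;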
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 64 /path/to/phInterp 16 799 281 213 0.452&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[target parts per MPIProc] [source nx] [source ny] [source nz] [z Length]&amp;lt;/code&amp;gt;&lt;br /&gt;
* Note that the number of processes given to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; times the &amp;lt;code&amp;gt;[target parts per MPIProc]&amp;lt;/code&amp;gt; must equal the number of target partitions. In this case, the target had 1024 partitions, so 64*16 = 1024&lt;br /&gt;
* The way it works is that each of the 64 MPIProcs is given 16 partitions to interpolate.&lt;br /&gt;
&lt;br /&gt;
==== 4. Interpolate the solution in PHASTA ====&lt;br /&gt;
&lt;br /&gt;
This step will take the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files and load them as initial conditions.&lt;br /&gt;
&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory into the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory. &lt;br /&gt;
# Add/uncomment &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; &lt;br /&gt;
# Run PHASTA for a few timesteps and write out &amp;lt;code&amp;gt;restart-dat.[target nprocs]&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Remove/comment out the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; line from the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;&lt;br /&gt;
# The new restart files have the interpolated solution&lt;br /&gt;
#* Note that if you forget to remove the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; statement from &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;, PHASTA will overwrite the existing solution in the restart files&lt;br /&gt;
&lt;br /&gt;
== Turbulent, Compressible, Unstructured Mesh ==&lt;br /&gt;
&lt;br /&gt;
Currently, the only version of PHASTA that is set up to handle this type of solution transfer is the &amp;lt;code&amp;gt;conrad54418/connor_primitive&amp;lt;/code&amp;gt; branch (as of 6/22/22). Creating the &amp;lt;code&amp;gt;solInterp.1&amp;lt;/code&amp;gt; file can be automated via a script or done manually (see below).&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (scripted) ===&lt;br /&gt;
&lt;br /&gt;
Scripts have been made for use with the compressible, but not turbulent, version of the code. An example folder with the scripts included can be found at &amp;lt;code&amp;gt;/project/tutorials/ParaviewSolutionTransfer&amp;lt;/code&amp;gt;. The three scripts needed are &amp;lt;code&amp;gt;interpolateSol.py&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pvCSV2customSLN_Nproc_prim.m&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;parRunAll.sh&amp;lt;/code&amp;gt;. The only one of the three you need to run is &amp;lt;code&amp;gt;parRunAll.sh&amp;lt;/code&amp;gt;, which can be found in &amp;lt;code&amp;gt;targetFolder/solutionInterp&amp;lt;/code&amp;gt;. You then need to run PHASTA via the &amp;lt;code&amp;gt;runPhasta.sh&amp;lt;/code&amp;gt; script, which will produce a &amp;lt;code&amp;gt;restart.1.1&amp;lt;/code&amp;gt; file containing the transferred solution on the new mesh. The manual section below explains what the scripts are doing.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (manual) ===&lt;br /&gt;
&lt;br /&gt;
# Load existing solution into ParaView. Use the 'MergeBlocks' filter to convert to serial case.&lt;br /&gt;
# Load the target case .pht file into ParaView&lt;br /&gt;
# Use the 'Resample from dataset' filter and select the source (new mesh file) and target (MergeBlocks) blocks accordingly&lt;br /&gt;
# Save the output as a .csv file. Write a single time step and select scientific notation with 12 decimals. NOTE: sometimes ParaView cannot quite find the closest point during the solution transfer and writes zeros instead. In this case, the .csv file needs to be manually edited to replace the zeros for pressure and temperature with realistic values.&lt;br /&gt;
# The .csv file can now be reformatted and renamed with MATLAB to match the expected form of solInterp.1.&lt;br /&gt;
# Advance PHASTA one step in serial, then convert to desired number of processors using Chef.&lt;br /&gt;
&lt;br /&gt;
==== Alternative to Using MATLAB for Reformatting ====&lt;br /&gt;
&lt;br /&gt;
For those wanting to skip the MATLAB step, &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt; can also do the necessary column manipulation.&lt;br /&gt;
&lt;br /&gt;
Copy your file (in case you goof up):&lt;br /&gt;
 cp PVinterp0.csv test.dat&lt;br /&gt;
Remove the header line:&lt;br /&gt;
 sed -i 1,1d test.dat&lt;br /&gt;
Replace the commas with spaces:&lt;br /&gt;
 sed -i 's/,/\ /g' test.dat&lt;br /&gt;
Rearrange the columns into the order that solInterp.1 expects. It is a good idea to run &amp;lt;code&amp;gt;head -1 PVinterp0.csv&amp;lt;/code&amp;gt; first, since your file may have more or fewer fields than in the examples below; use that header to find the column numbers (starting from 1, not 0) for x, y, z, p, u, v, w, T, sclr. For the primitive code:&lt;br /&gt;
 awk '{print $9,$10,$11,$3,$5,$6,$7,$4,$2}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
for entropy code laminar:&lt;br /&gt;
 awk '{print $12,$13,$14,$1,$2,$3,$4,$5,$8,$9,$10,$7,$6}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
for entropy code with eddy viscosity:&lt;br /&gt;
 awk '{print $14,$15,$16,$3,$4,$5,$6,$7,$10,$11,$12,$9,$8,$2}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
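As a sanity check, the first (primitive-code) reordering above can be tried on a one-row stand-in file. The field names below are placeholders marking each column, not real data, and &lt;code&gt;demo.dat&lt;/code&gt; is a scratch file so the real test.dat is not touched:&lt;br /&gt;

```shell
# 11 labeled fields in the csv order assumed above:
# $2=sclr $3=p $4=T $5-$7=u,v,w $9-$11=x,y,z (others unused)
printf 'f1 sclr p T u v w f8 x y z\n' > demo.dat
awk '{print $9,$10,$11,$3,$5,$6,$7,$4,$2}' demo.dat   # -> x y z p u v w T sclr
```

If the printed order does not match the target order, re-read the header and adjust the field numbers.&lt;br /&gt;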
Don’t forget to put this file into a directory called solnTarget and to turn on the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; flag in solver.inp for the single loading step mentioned above. Finally, if you are worried about that one step disturbing your solution, recent versions of the code accept&lt;br /&gt;
      iexec : 0&lt;br /&gt;
or&lt;br /&gt;
     Number of Timesteps: 0&lt;br /&gt;
to take no actual steps. Note, however, that only very recent versions of the code have the iexec conditional moved ''after'' the loading of the interpolated solution, but it should be fairly easy to figure out where to move that conditional. Alternatively, the second option also skips the time stepping and writes the solution ''after'' applying the boundary conditions, which can be useful for confirming that you have the intended BCs set (&amp;lt;code&amp;gt;iexec : 0&amp;lt;/code&amp;gt; will not detect this).&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=SimModeler&amp;diff=2101</id>
		<title>SimModeler</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=SimModeler&amp;diff=2101"/>
				<updated>2025-01-29T20:42:32Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Compressible */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
SimModeler is a model creation program from Simmetrix.  It takes the mesh and geometric model and creates the input files for PHASTA.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Running ==&lt;br /&gt;
To run SimModeler, first connect via VNC, then use vglconnect to connect to one of the compute machines:&lt;br /&gt;
&lt;br /&gt;
 vglconnect -s viz001&lt;br /&gt;
&lt;br /&gt;
Add the desired version of SimModeler to your environment (the below example will get the &amp;quot;default&amp;quot; version):&lt;br /&gt;
&lt;br /&gt;
 soft add +simmodeler&lt;br /&gt;
&lt;br /&gt;
and launch the GUI:&lt;br /&gt;
&lt;br /&gt;
 vglrun simmodeler&lt;br /&gt;
&lt;br /&gt;
== Converting old files ==&lt;br /&gt;
This is a guide for converting old files (parasolid and .spj) to the new format (.smd).&lt;br /&gt;
&lt;br /&gt;
After connecting to one of the compute machines, add the suite of tools for SimModeler to your environment:&lt;br /&gt;
&lt;br /&gt;
 soft add +simmodsuite&lt;br /&gt;
&lt;br /&gt;
From your case, make a new directory and copy your parasolid (.x_t or .xmt_txt), and .spj file into it. Rename the parasolid file to geom.xmt_txt and the .spj file to geom.spj, if they aren't already named that way. Then from the directory just created (now holds geom.xmt_txt and geom.spj) run: &lt;br /&gt;
&lt;br /&gt;
 /users/matthb2/simmodelerconvert/testConvert &lt;br /&gt;
&lt;br /&gt;
Your directory now contains two new files: model.smd and model.x_t&lt;br /&gt;
&lt;br /&gt;
== Creating new files ==&lt;br /&gt;
&lt;br /&gt;
Loading in geometry is about as intuitive as it possibly can be. Go to File -&amp;gt; Import Geometry, browse to the appropriate model, and select Open. Once open, it is possible both to mesh the model and to create boundary conditions for it. Because BLMesher is presently the primary meshing tool, it is only necessary to use SimModeler to create boundary conditions. Go to Analysis -&amp;gt; Select Solver, and select phasta. After selecting phasta, the Analysis Attributes option under Analysis becomes valid. Clicking it brings up the corresponding window. From this window, it is possible to apply boundary conditions and initial conditions by clicking the small button next to the drop-down menu [add picture]. Note that you must also double-click on &amp;quot;problem definition&amp;quot;, which allows you to name the case. Later post-processing expects the name &amp;quot;geom&amp;quot;, so always use that name.&lt;br /&gt;
&lt;br /&gt;
== Boundary conditions ==&lt;br /&gt;
&lt;br /&gt;
Common boundary conditions include:&lt;br /&gt;
&lt;br /&gt;
*comp3 - Specifies a 3D velocity vector&lt;br /&gt;
*comp1 - Specifies a 3D vector in which the velocity is constrained. Velocity normal to this vector is not directly affected. This is useful for creating slip walls and mimicking free stream regions. &lt;br /&gt;
*temperature - Sets the temperature of the wall. This is only needed for compressible cases. &lt;br /&gt;
*scalar_1 - Sets the scalar_1 / eddy viscosity to apply at a wall. For the Spalart Allmaras models, scalar_1 should be zero at physical walls where a boundary layer develops and 3 to 5 times the molecular viscosity at free stream boundaries (http://turbmodels.larc.nasa.gov/spalart.html)&lt;br /&gt;
*surf ID - Associates a number with one or more faces. This can then be read by Phasta and used to apply more complicated boundary conditions in software. &lt;br /&gt;
*natural pressure - Apply a mean pressure over a surface. The pressure at any particular point is still allowed to vary (someone verify). &lt;br /&gt;
*traction vector - ??. The zero vector is typically applied at outlet. &lt;br /&gt;
*heat flux - Specifies the rate at which heat is injected / removed (not sure which one) into / from the fluid domain. The value is almost always set to zero to create a perfectly insulated boundary. &lt;br /&gt;
*scalar_1 flux - set the flux of scalar_1 / eddy viscosity into / out of the domain (not sure which one). This is typically only used at outlets where high values of eddy viscosity have been convected downstream of turbulent walls. The value is almost always set to zero. &lt;br /&gt;
*turbulence wall - Indicates that a surface is to be included in the calculation of d2wall files (verify) which are then used by the Spalart Allmaras turbulence model to generate more physical turbulent kinetic energy production / dissipation budgets.&lt;br /&gt;
&lt;br /&gt;
=== Incompressible ===&lt;br /&gt;
&lt;br /&gt;
Common BCs used for an incompressible case with the S-A turbulence model&lt;br /&gt;
&lt;br /&gt;
*Initial conditions&lt;br /&gt;
**initial velocity (nonzero, typically small)&lt;br /&gt;
**initial scalar_1 (3-5 times free-stream kinematic viscosity)&lt;br /&gt;
*Inflow&lt;br /&gt;
**Comp 3&lt;br /&gt;
**scalar_1 (also 3-5 times free-stream kinematic viscosity)&lt;br /&gt;
*Outflow&lt;br /&gt;
**natural pressure (zero)&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
*Solid physical walls&lt;br /&gt;
**Comp 3 (zero vector)&lt;br /&gt;
**scalar_1 (zero)&lt;br /&gt;
**turbulence wall (value unimportant; use zero)&lt;br /&gt;
*Impermeable slip walls&lt;br /&gt;
**Comp 1 (zero in wall-normal direction)&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
&lt;br /&gt;
=== Compressible ===&lt;br /&gt;
&lt;br /&gt;
Common BCs used for a compressible case with the S-A turbulence model&lt;br /&gt;
&lt;br /&gt;
*Initial conditions&lt;br /&gt;
**initial velocity (nonzero, typically small)&lt;br /&gt;
**initial scalar_1 (3-5 times free-stream kinematic viscosity)&lt;br /&gt;
**initial pressure&lt;br /&gt;
**initial temperature&lt;br /&gt;
&lt;br /&gt;
*Inflow&lt;br /&gt;
**Comp 3&lt;br /&gt;
**scalar_1 (also 3-5 times free-stream kinematic viscosity)&lt;br /&gt;
**temperature&lt;br /&gt;
**pressure&lt;br /&gt;
&lt;br /&gt;
*Outflow&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
**heat flux (zero)&lt;br /&gt;
&lt;br /&gt;
*Solid physical walls&lt;br /&gt;
**Comp 3 (zero vector)&lt;br /&gt;
**scalar_1 (zero)&lt;br /&gt;
**turbulence wall (value unimportant; use zero)&lt;br /&gt;
**temperature or heat flux&lt;br /&gt;
&lt;br /&gt;
*Impermeable slip walls&lt;br /&gt;
**Comp 1 (zero in wall-normal direction)&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
**heat flux (zero)&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=SimModeler&amp;diff=2100</id>
		<title>SimModeler</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=SimModeler&amp;diff=2100"/>
				<updated>2025-01-29T20:33:34Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Compressible */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
SimModeler is a model creation program from Simmetrix.  It takes the mesh and geometric model and creates the input files for PHASTA.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Running ==&lt;br /&gt;
To run SimModeler, first connect via VNC, then use vglconnect to connect to one of the compute machines:&lt;br /&gt;
&lt;br /&gt;
 vglconnect -s viz001&lt;br /&gt;
&lt;br /&gt;
Add the desired version of SimModeler to your environment (the below example will get the &amp;quot;default&amp;quot; version):&lt;br /&gt;
&lt;br /&gt;
 soft add +simmodeler&lt;br /&gt;
&lt;br /&gt;
and launch the GUI:&lt;br /&gt;
&lt;br /&gt;
 vglrun simmodeler&lt;br /&gt;
&lt;br /&gt;
== Converting old files ==&lt;br /&gt;
This is a guide for converting old files (parasolid and .spj) to the new format (.smd).&lt;br /&gt;
&lt;br /&gt;
After connecting to one of the compute machines, add the suite of tools for SimModeler to your environment:&lt;br /&gt;
&lt;br /&gt;
 soft add +simmodsuite&lt;br /&gt;
&lt;br /&gt;
From your case, make a new directory and copy your parasolid (.x_t or .xmt_txt), and .spj file into it. Rename the parasolid file to geom.xmt_txt and the .spj file to geom.spj, if they aren't already named that way. Then from the directory just created (now holds geom.xmt_txt and geom.spj) run: &lt;br /&gt;
&lt;br /&gt;
 /users/matthb2/simmodelerconvert/testConvert &lt;br /&gt;
&lt;br /&gt;
Your directory now contains two new files: model.smd and model.x_t&lt;br /&gt;
&lt;br /&gt;
== Creating new files ==&lt;br /&gt;
&lt;br /&gt;
Loading in geometry is about as intuitive as it possibly can be. Go to File -&amp;gt; Import Geometry, browse to the appropriate model, and select Open. Once open, it is possible both to mesh the model and to create boundary conditions for it. Because BLMesher is presently the primary meshing tool, it is only necessary to use SimModeler to create boundary conditions. Go to Analysis -&amp;gt; Select Solver, and select phasta. After selecting phasta, the Analysis Attributes option under Analysis becomes valid. Clicking it brings up the corresponding window. From this window, it is possible to apply boundary conditions and initial conditions by clicking the small button next to the drop-down menu [add picture]. Note that you must also double-click on &amp;quot;problem definition&amp;quot;, which allows you to name the case. Later post-processing expects the name &amp;quot;geom&amp;quot;, so always use that name.&lt;br /&gt;
&lt;br /&gt;
== Boundary conditions ==&lt;br /&gt;
&lt;br /&gt;
Common boundary conditions include:&lt;br /&gt;
&lt;br /&gt;
*comp3 - Specifies a 3D velocity vector&lt;br /&gt;
*comp1 - Specifies a 3D vector in which the velocity is constrained. Velocity normal to this vector is not directly affected. This is useful for creating slip walls and mimicking free stream regions. &lt;br /&gt;
*temperature - Sets the temperature of the wall. This is only needed for compressible cases. &lt;br /&gt;
*scalar_1 - Sets the scalar_1 / eddy viscosity to apply at a wall. For the Spalart Allmaras models, scalar_1 should be zero at physical walls where a boundary layer develops and 3 to 5 times the molecular viscosity at free stream boundaries (http://turbmodels.larc.nasa.gov/spalart.html)&lt;br /&gt;
*surf ID - Associates a number with one or more faces. This can then be read by Phasta and used to apply more complicated boundary conditions in software. &lt;br /&gt;
*natural pressure - Apply a mean pressure over a surface. The pressure at any particular point is still allowed to vary (someone verify). &lt;br /&gt;
*traction vector - ??. The zero vector is typically applied at outlet. &lt;br /&gt;
*heat flux - Specifies the rate at which heat is injected / removed (not sure which one) into / from the fluid domain. The value is almost always set to zero to create a perfectly insulated boundary. &lt;br /&gt;
*scalar_1 flux - set the flux of scalar_1 / eddy viscosity into / out of the domain (not sure which one). This is typically only used at outlets where high values of eddy viscosity have been convected downstream of turbulent walls. The value is almost always set to zero. &lt;br /&gt;
*turbulence wall - Indicates that a surface is to be included in the calculation of d2wall files (verify) which are then used by the Spalart Allmaras turbulence model to generate more physical turbulent kinetic energy production / dissipation budgets.&lt;br /&gt;
&lt;br /&gt;
=== Incompressible ===&lt;br /&gt;
&lt;br /&gt;
Common BCs used for an incompressible case with the S-A turbulence model&lt;br /&gt;
&lt;br /&gt;
*Initial conditions&lt;br /&gt;
**initial velocity (nonzero, typically small)&lt;br /&gt;
**initial scalar_1 (3-5 times free-stream kinematic viscosity)&lt;br /&gt;
*Inflow&lt;br /&gt;
**Comp 3&lt;br /&gt;
**scalar_1 (also 3-5 times free-stream kinematic viscosity)&lt;br /&gt;
*Outflow&lt;br /&gt;
**natural pressure (zero)&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
*Solid physical walls&lt;br /&gt;
**Comp 3 (zero vector)&lt;br /&gt;
**scalar_1 (zero)&lt;br /&gt;
**turbulence wall (value unimportant; use zero)&lt;br /&gt;
*Impermeable slip walls&lt;br /&gt;
**Comp 1 (zero in wall-normal direction)&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
&lt;br /&gt;
=== Compressible ===&lt;br /&gt;
&lt;br /&gt;
Common BCs used for a compressible case with the S-A turbulence model&lt;br /&gt;
&lt;br /&gt;
*Initial conditions&lt;br /&gt;
**initial velocity (nonzero, typically small)&lt;br /&gt;
**initial scalar_1 (3-5 times free-stream kinematic viscosity)&lt;br /&gt;
**initial pressure&lt;br /&gt;
**initial temperature&lt;br /&gt;
&lt;br /&gt;
*Inflow&lt;br /&gt;
**Comp 3&lt;br /&gt;
**scalar_1 (also 3-5 times free-stream kinematic viscosity)&lt;br /&gt;
**temperature&lt;br /&gt;
**pressure&lt;br /&gt;
&lt;br /&gt;
*Outflow&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
**heat flux (zero)&lt;br /&gt;
&lt;br /&gt;
*Solid physical walls&lt;br /&gt;
**Comp 3 (zero vector)&lt;br /&gt;
**scalar_1 (zero)&lt;br /&gt;
**turbulence wall (value unimportant; use zero)&lt;br /&gt;
**temperature or heat flux&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=SimModeler&amp;diff=2099</id>
		<title>SimModeler</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=SimModeler&amp;diff=2099"/>
				<updated>2025-01-29T20:33:23Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Incompressible */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
SimModeler is a model creation program from Simmetrix.  It takes the mesh and geometric model and creates the input files for PHASTA.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Running ==&lt;br /&gt;
To run SimModeler, first connect via VNC, then use vglconnect to connect to one of the compute machines:&lt;br /&gt;
&lt;br /&gt;
 vglconnect -s viz001&lt;br /&gt;
&lt;br /&gt;
Add the desired version of SimModeler to your environment (the below example will get the &amp;quot;default&amp;quot; version):&lt;br /&gt;
&lt;br /&gt;
 soft add +simmodeler&lt;br /&gt;
&lt;br /&gt;
and launch the GUI:&lt;br /&gt;
&lt;br /&gt;
 vglrun simmodeler&lt;br /&gt;
&lt;br /&gt;
== Converting old files ==&lt;br /&gt;
This is a guide for converting old files (parasolid and .spj) to the new format (.smd).&lt;br /&gt;
&lt;br /&gt;
After connecting to one of the compute machines, add the suite of tools for SimModeler to your environment:&lt;br /&gt;
&lt;br /&gt;
 soft add +simmodsuite&lt;br /&gt;
&lt;br /&gt;
From your case, make a new directory and copy your parasolid (.x_t or .xmt_txt), and .spj file into it. Rename the parasolid file to geom.xmt_txt and the .spj file to geom.spj, if they aren't already named that way. Then from the directory just created (now holds geom.xmt_txt and geom.spj) run: &lt;br /&gt;
&lt;br /&gt;
 /users/matthb2/simmodelerconvert/testConvert &lt;br /&gt;
&lt;br /&gt;
Your directory now contains two new files: model.smd and model.x_t&lt;br /&gt;
&lt;br /&gt;
== Creating new files ==&lt;br /&gt;
&lt;br /&gt;
Loading in geometry is about as intuitive as it possibly can be. Go to File -&amp;gt; Import Geometry, browse to the appropriate model, and select Open. Once open, it is possible both to mesh the model and to create boundary conditions for it. Because BLMesher is presently the primary meshing tool, it is only necessary to use SimModeler to create boundary conditions. Go to Analysis -&amp;gt; Select Solver, and select phasta. After selecting phasta, the Analysis Attributes option under Analysis becomes valid. Clicking it brings up the corresponding window. From this window, it is possible to apply boundary conditions and initial conditions by clicking the small button next to the drop-down menu [add picture]. Note that you must also double-click on &amp;quot;problem definition&amp;quot;, which allows you to name the case. Later post-processing expects the name &amp;quot;geom&amp;quot;, so always use that name.&lt;br /&gt;
&lt;br /&gt;
== Boundary conditions ==&lt;br /&gt;
&lt;br /&gt;
Common boundary conditions include:&lt;br /&gt;
&lt;br /&gt;
*comp3 - Specifies a 3D velocity vector&lt;br /&gt;
*comp1 - Specifies a 3D vector in which the velocity is constrained. Velocity normal to this vector is not directly affected. This is useful for creating slip walls and mimicking free stream regions. &lt;br /&gt;
*temperature - Sets the temperature of the wall. This is only needed for compressible cases. &lt;br /&gt;
*scalar_1 - Sets the scalar_1 / eddy viscosity to apply at a wall. For the Spalart Allmaras models, scalar_1 should be zero at physical walls where a boundary layer develops and 3 to 5 times the molecular viscosity at free stream boundaries (http://turbmodels.larc.nasa.gov/spalart.html)&lt;br /&gt;
*surf ID - Associates a number with one or more faces. This can then be read by Phasta and used to apply more complicated boundary conditions in software. &lt;br /&gt;
*natural pressure - Apply a mean pressure over a surface. The pressure at any particular point is still allowed to vary (someone verify). &lt;br /&gt;
*traction vector - ??. The zero vector is typically applied at outlet. &lt;br /&gt;
*heat flux - Specifies the rate at which heat is injected / removed (not sure which one) into / from the fluid domain. The value is almost always set to zero to create a perfectly insulated boundary. &lt;br /&gt;
*scalar_1 flux - set the flux of scalar_1 / eddy viscosity into / out of the domain (not sure which one). This is typically only used at outlets where high values of eddy viscosity have been convected downstream of turbulent walls. The value is almost always set to zero. &lt;br /&gt;
*turbulence wall - Indicates that a surface is to be included in the calculation of d2wall files (verify) which are then used by the Spalart Allmaras turbulence model to generate more physical turbulent kinetic energy production / dissipation budgets.&lt;br /&gt;
&lt;br /&gt;
=== Incompressible ===&lt;br /&gt;
&lt;br /&gt;
Common BCs used for an incompressible case with the S-A turbulence model&lt;br /&gt;
&lt;br /&gt;
*Initial conditions&lt;br /&gt;
**initial velocity (nonzero, typically small)&lt;br /&gt;
**initial scalar_1 (3-5 times free-stream kinematic viscosity)&lt;br /&gt;
*Inflow&lt;br /&gt;
**Comp 3&lt;br /&gt;
**scalar_1 (also 3-5 times free-stream kinematic viscosity)&lt;br /&gt;
*Outflow&lt;br /&gt;
**natural pressure (zero)&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
*Solid physical walls&lt;br /&gt;
**Comp 3 (zero vector)&lt;br /&gt;
**scalar_1 (zero)&lt;br /&gt;
**turbulence wall (value unimportant; use zero)&lt;br /&gt;
*Impermeable slip walls&lt;br /&gt;
**Comp 1 (zero in wall-normal direction)&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
&lt;br /&gt;
=== Compressible ===&lt;br /&gt;
&lt;br /&gt;
Common BCs used for a compressible case with the S-A turbulence model&lt;br /&gt;
&lt;br /&gt;
*Initial conditions&lt;br /&gt;
**initial velocity (nonzero, typically small)&lt;br /&gt;
**initial scalar_1 (3-5 times free-stream molecular viscosity)&lt;br /&gt;
**initial pressure&lt;br /&gt;
**initial temperature&lt;br /&gt;
&lt;br /&gt;
*Inflow&lt;br /&gt;
**Comp 3&lt;br /&gt;
**scalar_1 (also 3-5 times free-stream molecular viscosity)&lt;br /&gt;
**temperature&lt;br /&gt;
**pressure&lt;br /&gt;
&lt;br /&gt;
*Outflow&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
**heat flux (zero)&lt;br /&gt;
&lt;br /&gt;
*Solid physical walls&lt;br /&gt;
**Comp 3 (zero vector)&lt;br /&gt;
**scalar_1 (zero)&lt;br /&gt;
**turbulence wall (value unimportant; use zero)&lt;br /&gt;
**temperature or heat flux&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2098</id>
		<title>Interpolate Solution from Different Mesh</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2098"/>
				<updated>2025-01-15T23:31:07Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Alternative to Using MATLAB for Reformatting */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This will be a general page for how to interpolate solutions from different meshes onto a new mesh. &lt;br /&gt;
Those meshes are assumed to be of the same domain. &lt;br /&gt;
&lt;br /&gt;
The generic terms for the two meshes are &amp;quot;source&amp;quot; and &amp;quot;target&amp;quot;, where source has the desired solution data and target is the mesh that will be receiving. &lt;br /&gt;
&lt;br /&gt;
== Laminar, Incompressible, Semi-structured Mesh ==&lt;br /&gt;
This section will assume that the source mesh is in a structured ijk form. In the future, this may be expanded to meshes created by MGEN.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview ===&lt;br /&gt;
&lt;br /&gt;
# A CSV of the solution must be created. This must be done on a serial running version of ParaView '''and''' must be done after a MergeBlocks filter is done.&lt;br /&gt;
# A special solution file is created via &amp;lt;code&amp;gt;Sort2StructuredGrid&amp;lt;/code&amp;gt;, which will take in the csv from step 1 and the ordered coordinates of the source file from Matlab. &lt;br /&gt;
#* Effectively, it creates a single solution file in the same ordering as the Matlab points. The importance of this is significant as the executable used in the next step depends on some expected ordering. If the ordering is different, a new executable will need to be used/created. &lt;br /&gt;
# The interpolation is performed onto the new grid via &amp;lt;code&amp;gt;par3DInterp3&amp;lt;/code&amp;gt;, which creates &amp;lt;code&amp;gt;solInterp.&amp;lt;1-nparts&amp;gt;&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
# The interpolation is then used by PHASTA by setting &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;. &lt;br /&gt;
#*This should only be done for 1 timestep, as it will continue to reset the IC for all proceeding timesteps. The &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory needs to be symlinked into the &amp;lt;code&amp;gt;-procs_case&amp;lt;/code&amp;gt; directory for this to work.&lt;br /&gt;
&lt;br /&gt;
==== 1. Create CSV ====&lt;br /&gt;
&lt;br /&gt;
* Created using ParaView&lt;br /&gt;
* PV must be running in Serial mode&lt;br /&gt;
** Otherwise the CSV will not be in the correct order and may contain duplicated points&lt;br /&gt;
# Load in source dataset&lt;br /&gt;
# Apply Mergeblocks filter&lt;br /&gt;
# Save dataset as a csv with 12 digits of precision in scientific notation&lt;br /&gt;
#* Make sure the csv is in &amp;quot;pressure, u0, u1, u2, x0, x1, x2&amp;quot; format&lt;br /&gt;
#* This can be done by only loading the pressure and velocity fields into ParaView (either by editing the &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; or in the data load menu in ParaView).&lt;br /&gt;
# Replace the commas with spaces&lt;br /&gt;
#* Can use &amp;lt;code&amp;gt;vim&amp;lt;/code&amp;gt; or run &amp;lt;code&amp;gt;sed -i 's/,/\ /g' test.csv&amp;lt;/code&amp;gt;&lt;br /&gt;
#* Though the next step looks for a .csv extension, it is a fortran formatted read and actually needs those commas replaced by spaces&lt;br /&gt;
# Remove the first line of the csv file&lt;br /&gt;
#* Done in vi or sed (&amp;lt;code&amp;gt;sed -i 1,1d test.csv&amp;lt;/code&amp;gt;) or tail (&amp;lt;code&amp;gt;tail -n +2 test.csv &amp;gt; trimmedLine1.csv&amp;lt;/code&amp;gt;) &lt;br /&gt;
#* Needed for the next program&lt;br /&gt;
#* Better yet, we should change the next code to read past that header line and then delete this line when that is complete. We should also consider the solution in [https://stackoverflow.com/a/46451049/7564988 this StackOverflow answer] as it shows how to make a data structure that could read the csv lines directly in the next program and avoid ALL this file manipulation with modern fortran (see HighPerformanceMark's answer).&lt;br /&gt;
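In the spirit of that suggestion, the header removal and comma replacement above can also be done in one pass with a short Python script (a sketch; the file names are only examples):&lt;br /&gt;

```python
# Sketch: convert a ParaView CSV export into the space-separated,
# header-free form the downstream Fortran reader expects, in one pass.
import csv

def reformat_csv(src, dst):
    with open(src, newline="") as fin, open(dst, "w") as fout:
        reader = csv.reader(fin)
        next(reader)                     # drop the header line
        for row in reader:
            fout.write(" ".join(row) + "\n")
```

Calling reformat_csv('test.csv', 'test.dat') then replaces the sed and tail steps above.&lt;br /&gt;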
&lt;br /&gt;
==== 2. Create Structured Solution File ====&lt;br /&gt;
&lt;br /&gt;
'''Note:''' These instructions will be for the &amp;lt;code&amp;gt;parallelSortDNSzBinJames&amp;lt;/code&amp;gt; executable, which has some highly specific requirements and command inputs. &lt;br /&gt;
&lt;br /&gt;
This step will take the data from the source solution file and put it in a format/order that makes the interpolation process much faster.&lt;br /&gt;
&lt;br /&gt;
# Symlink the source mesh's ordered coordinate file as &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may come from the files used to create the mesh (ie. for [[Tutorial_Video_Overviews#MatLabMeshAndConvert.mov|matchedNodeElementReader]])&lt;br /&gt;
#* ''(Untested)'' This may also be created using the coordinates from the solution file&lt;br /&gt;
# Rename/symlink csv to be the correct file name (in my specific case, it was &amp;lt;code&amp;gt;dnsSolution1procLongFort.csv&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Create an interactive job on whatever machine you're needing to run on (ALCF Cooley in this case)&lt;br /&gt;
# Load the appropriate MPI environment variables (&amp;lt;code&amp;gt;soft add +mvapich2&amp;lt;/code&amp;gt; for Cooley)&lt;br /&gt;
# Run the executable via &amp;lt;code&amp;gt;mpirun -np [nprocs] [executable path] [executable inputs]&amp;lt;/code&amp;gt; &lt;br /&gt;
#* This will produce &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Concatenate &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files '''in order''' into single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file&lt;br /&gt;
#* The individual &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files must be concatenated in order of rank into a single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file, which can be done (in zsh at least) via &amp;lt;code&amp;gt;echo source.sln.{1..[MPIRanks]} | xargs cat &amp;gt; source.sln&amp;lt;/code&amp;gt; (or the probably equivalent &amp;lt;code&amp;gt;cat source.sln.{1..[MPIRanks]} &amp;gt; source.sln&amp;lt;/code&amp;gt;). If they are concatenated out of rank order, the result will be out of sequence.&lt;br /&gt;
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 24 /lus/theta-fs0/projects/PHASTA_aesp/Utilities/Sort2StructuredGrid/parallelSortDNSzBinJames 47822547 47822547 212 0.0291&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[nlines of csv] [nlines of ordered.crd] [number of element in z] [z domain width]&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Note:''' These inputs are specific to this executable; a different executable will expect different inputs.&lt;br /&gt;
* Also note that the &amp;lt;code&amp;gt;[number of elements in z]&amp;lt;/code&amp;gt; is equivalent to &amp;lt;code&amp;gt;nsons&amp;lt;/code&amp;gt; - 1 or the number of nodes in z - 1.&lt;br /&gt;
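The two line-count inputs can be obtained with &amp;lt;code&amp;gt;wc -l&amp;lt;/code&amp;gt;; a Python equivalent, useful if you are scripting the command assembly (a sketch):&lt;br /&gt;

```python
# Sketch: count the lines of the CSV and of ordered.crd, the first
# two inputs to the sort executable (same result as wc -l).
def count_lines(path):
    with open(path, "rb") as f:
        return sum(1 for _ in f)
```

count_lines('test.csv') and count_lines('ordered.crd') give the first two arguments.&lt;br /&gt;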
&lt;br /&gt;
==== 3. Create Interpolated files ====&lt;br /&gt;
&lt;br /&gt;
This step will create the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files used by PHASTA to perform the interpolation.&lt;br /&gt;
&lt;br /&gt;
# Create &amp;lt;code&amp;gt;Interpolate.../[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory in the target's Chef directory and move to that directory&lt;br /&gt;
# Symlink the target's POSIX &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files (that were created by Chef) to the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory&lt;br /&gt;
#* The &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files should be copied in the exact fashion that they are in the Chef created &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory, including if they're &amp;quot;fanned out&amp;quot;&lt;br /&gt;
# Create a directory called &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may be corrected in the future, but currently if &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; is not present the job will fail&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; to the directory and the &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt; file as &amp;lt;code&amp;gt;source.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;phInterp&amp;lt;/code&amp;gt; via mpirun on an interactive job.&lt;br /&gt;
# This creates a series of &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory&lt;br /&gt;
&lt;br /&gt;
The file format for &amp;lt;code&amp;gt;solInterp.N&amp;lt;/code&amp;gt; is quite simple. Each line corresponds to the node number in the partition and the file itself has 7 columns:&lt;br /&gt;
&lt;br /&gt;
 coord_x coord_y coord_z pressure velocity_x velocity_y velocity_z&lt;br /&gt;
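A quick sanity check that every line of a generated file follows that 7-column layout (a sketch; adjust the file name as needed):&lt;br /&gt;

```python
# Sketch: verify a solInterp.N file has 7 whitespace-separated
# columns (x, y, z, pressure, u, v, w) on every line.
def check_solinterp(path):
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            n = len(line.split())
            if n != 7:
                raise ValueError(f"{path} line {lineno}: expected 7 columns, got {n}")
```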
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 64 /path/to/phInterp 16 799 281 213 0.452&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[target parts per MPIProc] [source nx] [source ny] [source nz] [z Length]&amp;lt;/code&amp;gt;&lt;br /&gt;
* Note that the number of processes given to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; times &amp;lt;code&amp;gt;[target parts per MPIProc]&amp;lt;/code&amp;gt; must equal the number of target partitions. In this case, the target had 1024 partitions, so 64*16 = 1024&lt;br /&gt;
* The way it works is that each of the 64 MPIProcs is given 16 partitions to interpolate.&lt;br /&gt;
&lt;br /&gt;
==== 4. Interpolate the solution in PHASTA ====&lt;br /&gt;
&lt;br /&gt;
This step will take the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files and load them as initial conditions.&lt;br /&gt;
&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory into the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory. &lt;br /&gt;
# Add/uncomment &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; &lt;br /&gt;
# Run PHASTA for a few timesteps and write out &amp;lt;code&amp;gt;restart-dat.[target nprocs]&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Remove/comment out the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; line from the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;&lt;br /&gt;
# The new restart files have the interpolated solution&lt;br /&gt;
#* Note that if you forget to remove the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; statement from &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;, PHASTA will overwrite the existing solution in the restart files&lt;br /&gt;
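For reference, the relevant &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; lines for this step might look like the following (a sketch only; the rest of solver.inp is case-specific, and the flag is removed again once the new restart files are written):&lt;br /&gt;

```
Load and set 3D IC: True
Number of Timesteps: 1
```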
&lt;br /&gt;
== Turbulent, Compressible, Unstructured Mesh ==&lt;br /&gt;
&lt;br /&gt;
Currently, the only version of PHASTA that is set up to handle this type of solution transfer is the &amp;lt;code&amp;gt;conrad54418/connor_primitive&amp;lt;/code&amp;gt; branch (as of 6/22/22). Creating the &amp;lt;code&amp;gt;solInterp.1&amp;lt;/code&amp;gt; file can be automated via a script or done manually (see below).&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (scripted) ===&lt;br /&gt;
&lt;br /&gt;
Scripts have been made for use with the compressible, but not turbulent, version of the code. An example folder with the scripts included can be found at &amp;lt;code&amp;gt;/project/tutorials/ParaviewSolutionTransfer&amp;lt;/code&amp;gt;. The three scripts needed are interpolateSol.py, pvCSV2customSLN_Nproc_prim.m, and parRunAll.sh. Of those three, the only script you need to run is parRunAll.sh, which can be found in &amp;lt;code&amp;gt;targetFolder/solutionInterp&amp;lt;/code&amp;gt;. You then run PHASTA via the runPhasta.sh script, which produces a restart.1.1 file containing the transferred solution on the new mesh. The manual section below provides insight into what the scripts are doing.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (manual) ===&lt;br /&gt;
&lt;br /&gt;
# Load the existing solution into ParaView. Use the 'MergeBlocks' filter to convert it to a serial case.&lt;br /&gt;
# Load the target case .pht file into ParaView&lt;br /&gt;
# Use the 'Resample from dataset' filter and select the source (new mesh file) and target (MergeBlocks) blocks accordingly&lt;br /&gt;
# Save the output as a .csv file. Write a single time step and select scientific notation to 12 decimals. NOTE: sometimes ParaView will make an error and write zeros where it can't quite find the closest point when doing the solution transfer. In this case, the .csv file needs to be manually edited to replace the zeros for pressure and temperature with realistic values.&lt;br /&gt;
# The .csv file can now be reformatted and renamed with MATLAB to match the expected form of solInterp.1.&lt;br /&gt;
# Advance PHASTA one step in serial, then convert to desired number of processors using Chef.&lt;br /&gt;
&lt;br /&gt;
==== Alternative to Using MATLAB for Reformatting ====&lt;br /&gt;
&lt;br /&gt;
For those wanting to skip the MATLAB step, &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt; can also do the necessary column manipulation.&lt;br /&gt;
&lt;br /&gt;
Copy your file (in case you goof up):&lt;br /&gt;
 cp PVinterp0.csv test.dat&lt;br /&gt;
Remove the header line:&lt;br /&gt;
 sed -i 1,1d test.dat&lt;br /&gt;
Replace the commas with spaces:&lt;br /&gt;
 sed -i 's/,/\ /g' test.dat&lt;br /&gt;
Rearrange the columns to match what solInterp.1 expects. It is a good idea to run &amp;lt;code&amp;gt;head -1 PVinterp0.csv&amp;lt;/code&amp;gt; first, since your file may have more or fewer fields than this example; use that header to find the column numbers (counting from 1, not 0) for x, y, z, p, u, v, w.&lt;br /&gt;
 awk '{print $6,$7,$8,$1,$2,$3,$4}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
For the entropy code (laminar):&lt;br /&gt;
 awk '{print $12,$13,$14,$1,$2,$3,$4,$5,$8,$9,$10,$7,$6}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
For the entropy code with eddy viscosity:&lt;br /&gt;
 awk '{print $14,$15,$16,$3,$4,$5,$6,$7,$10,$11,$12,$9,$8,$2}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
Don't forget to put this file into a directory called solnTarget, and to turn on the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; flag in solver.inp for the single step mentioned above. Finally, if you are worried about that one step disturbing your solution, recent versions of the code accept&lt;br /&gt;
 iexec : 0&lt;br /&gt;
or&lt;br /&gt;
 Number of Timesteps: 0&lt;br /&gt;
so that no actual steps are taken. Note, however, that only very recent versions of the code have the iexec conditional moved AFTER the loading of the interpolated solution; it should be easy to find where to move that conditional. The second option also skips the time stepping but writes the solution AFTER applying the boundary conditions, which is useful for confirming that the intended BCs are set (iexec : 0 won't detect this).&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2096</id>
		<title>Interpolate Solution from Different Mesh</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2096"/>
				<updated>2025-01-15T23:09:11Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Process Overview (manual) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a general page on how to interpolate solutions from different meshes onto a new mesh. &lt;br /&gt;
Those meshes are assumed to be of the same domain. &lt;br /&gt;
&lt;br /&gt;
The generic terms for the two meshes are &amp;quot;source&amp;quot; and &amp;quot;target&amp;quot;: the source has the desired solution data, and the target is the mesh that will receive it. &lt;br /&gt;
&lt;br /&gt;
== Laminar, Incompressible, Semi-structured Mesh ==&lt;br /&gt;
This section will assume that the source mesh is in a structured ijk form. In the future, this may be expanded to meshes created by MGEN.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview ===&lt;br /&gt;
&lt;br /&gt;
# A CSV of the solution must be created. This must be done on a serial running version of ParaView '''and''' must be done after a MergeBlocks filter is done.&lt;br /&gt;
# A special solution file is created via &amp;lt;code&amp;gt;Sort2StructuredGrid&amp;lt;/code&amp;gt;, which will take in the csv from step 1 and the ordered coordinates of the source file from Matlab. &lt;br /&gt;
#* Effectively, it creates a single solution file in the same ordering as the Matlab points. The importance of this is significant as the executable used in the next step depends on some expected ordering. If the ordering is different, a new executable will need to be used/created. &lt;br /&gt;
# The interpolation is performed onto the new grid via &amp;lt;code&amp;gt;par3DInterp3&amp;lt;/code&amp;gt;, which creates &amp;lt;code&amp;gt;solInterp.&amp;lt;1-nparts&amp;gt;&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
# The interpolation is then used by PHASTA by setting &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;. &lt;br /&gt;
#*This should only be done for 1 timestep, as it will continue to reset the IC for all subsequent timesteps. The &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory needs to be symlinked into the &amp;lt;code&amp;gt;-procs_case&amp;lt;/code&amp;gt; directory for this to work.&lt;br /&gt;
&lt;br /&gt;
==== 1. Create CSV ====&lt;br /&gt;
&lt;br /&gt;
* Created using ParaView&lt;br /&gt;
* PV must be running in Serial mode&lt;br /&gt;
** Otherwise the CSV will not be in the correct order and possibly have duplicated points&lt;br /&gt;
# Load in source dataset&lt;br /&gt;
# Apply the MergeBlocks filter&lt;br /&gt;
# Save dataset as a csv with 12 digits of precision in scientific notation&lt;br /&gt;
#* Make sure the csv is in &amp;quot;pressure, u0, u1, u2, x0, x1, x2&amp;quot; format&lt;br /&gt;
#* This can be done by only loading the pressure and velocity fields into Paraview (either by editing the &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; or in the data load menu in Paraview).&lt;br /&gt;
# Replace the commas with spaces&lt;br /&gt;
#* Can use &amp;lt;code&amp;gt;vim&amp;lt;/code&amp;gt; or run &amp;lt;code&amp;gt;sed -i 's/,/\ /g' test.csv&amp;lt;/code&amp;gt;&lt;br /&gt;
#* Though the next step looks for a .csv extension, it performs a Fortran formatted read and actually needs those commas replaced by spaces&lt;br /&gt;
# Remove the first line of the csv file&lt;br /&gt;
#* Done in vi or sed (&amp;lt;code&amp;gt;sed -i 1,1d test.csv&amp;lt;/code&amp;gt;) or tail (&amp;lt;code&amp;gt;tail -n +2 test.csv &amp;gt; trimmedLine1.csv&amp;lt;/code&amp;gt;) &lt;br /&gt;
#* Needed for the next program&lt;br /&gt;
#* Better yet, we should change the next code to read past that header line and then delete this line when that is complete. We should also consider the solution in [https://stackoverflow.com/a/46451049/7564988 this StackOverflow answer] as it shows how to make a data structure that could read the csv lines directly in the next program and avoid ALL this file manipulation with modern fortran (see HighPerformanceMark's answer).&lt;br /&gt;
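&lt;br /&gt;
The two clean-up steps above can also be combined into a single pipeline that writes a new file (here called clean.csv, name purely illustrative) instead of editing in place:&lt;br /&gt;
 tail -n +2 test.csv | sed 's/,/ /g' &amp;gt; clean.csv&lt;br /&gt;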
&lt;br /&gt;
==== 2. Create Structured Solution File ====&lt;br /&gt;
&lt;br /&gt;
'''Note:''' These instructions will be for the &amp;lt;code&amp;gt;parallelSortDNSzBinJames&amp;lt;/code&amp;gt; executable, which has some highly specific requirements and command inputs. &lt;br /&gt;
&lt;br /&gt;
This step will take the data from the source solution file and put it in a format/order that makes the interpolation process much faster.&lt;br /&gt;
&lt;br /&gt;
# Symlink the source mesh's ordered coordinate file as &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may come from the files used to create the mesh (e.g. for [[Tutorial_Video_Overviews#MatLabMeshAndConvert.mov|matchedNodeElementReader]])&lt;br /&gt;
#* ''(Untested)'' This may also be created using the coordinates from the solution file&lt;br /&gt;
# Rename/symlink csv to be the correct file name (in my specific case, it was &amp;lt;code&amp;gt;dnsSolution1procLongFort.csv&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Create an interactive job on whatever machine you're needing to run on (ALCF Cooley in this case)&lt;br /&gt;
# Load the appropriate MPI environment variables (&amp;lt;code&amp;gt;soft add +mvapich2&amp;lt;/code&amp;gt; for Cooley)&lt;br /&gt;
# Run the executable via &amp;lt;code&amp;gt;mpirun -np [nprocs] [executable path] [executable inputs]&amp;lt;/code&amp;gt; &lt;br /&gt;
#* This will produce &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Concatenate &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files '''in order''' into single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file&lt;br /&gt;
#* The individual &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files need to be concatenated into a single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file, which can be done (in zsh at least) via &amp;lt;code&amp;gt;echo source.sln.{1..[MPIRanks]} | xargs cat &amp;gt; source.sln&amp;lt;/code&amp;gt; (or the probably equivalent &amp;lt;code&amp;gt;cat source.sln.{1..[MPIRanks]} &amp;gt; source.sln&amp;lt;/code&amp;gt;). Note these files '''must''' be concatenated in order of rank, otherwise the result will be out of sequence.&lt;br /&gt;
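&lt;br /&gt;
As a quick sanity check (assuming the sorted output has one line per node, as the matching line counts in the example command below suggest), the concatenated file should have the same number of lines as &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt;:&lt;br /&gt;
 wc -l source.sln ordered.crd&lt;br /&gt;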
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 24 /lus/theta-fs0/projects/PHASTA_aesp/Utilities/Sort2StructuredGrid/parallelSortDNSzBinJames 47822547 47822547 212 0.0291&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[nlines of csv] [nlines of ordered.crd] [number of elements in z] [z domain width]&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Note:''' These inputs are specific to this executable. A different executable may expect different inputs.&lt;br /&gt;
* Also note that the &amp;lt;code&amp;gt;[number of elements in z]&amp;lt;/code&amp;gt; is equivalent to &amp;lt;code&amp;gt;nsons&amp;lt;/code&amp;gt; - 1 or the number of nodes in z - 1.&lt;br /&gt;
&lt;br /&gt;
==== 3. Create Interpolated files ====&lt;br /&gt;
&lt;br /&gt;
This step will create the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files used by PHASTA to perform the interpolation.&lt;br /&gt;
&lt;br /&gt;
# Create &amp;lt;code&amp;gt;Interpolate.../[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory in the target's Chef directory and move to that directory&lt;br /&gt;
# Symlink the target's POSIX &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files (that were created by Chef) to the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory&lt;br /&gt;
#* The &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files should be copied in the exact fashion that they are in the Chef created &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory, including if they're &amp;quot;fanned out&amp;quot;&lt;br /&gt;
# Create a directory called &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may be corrected in the future, but currently if &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; is not present the job will fail&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; to the directory and the &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt; file as &amp;lt;code&amp;gt;source.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;phInterp&amp;lt;/code&amp;gt; via mpirun on an interactive job.&lt;br /&gt;
# This creates a series of &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory&lt;br /&gt;
&lt;br /&gt;
The file format for &amp;lt;code&amp;gt;solInterp.N&amp;lt;/code&amp;gt; is quite simple: each line corresponds to a node in the partition, and the file has 7 columns:&lt;br /&gt;
&lt;br /&gt;
 coord_x coord_y coord_z pressure velocity_x velocity_y velocity_z&lt;br /&gt;
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 64 /path/to/phInterp 16 799 281 213 0.452&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[target parts per MPIProc] [source nx] [source ny] [source nz] [z Length]&amp;lt;/code&amp;gt;&lt;br /&gt;
* Note that the number of processes given to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; times &amp;lt;code&amp;gt;[target parts per MPIProc]&amp;lt;/code&amp;gt; must equal the number of target partitions. In this case the target had 1024 partitions, so 64*16 = 1024&lt;br /&gt;
* The way it works is that each of the 64 MPIProcs is given 16 partitions to interpolate.&lt;br /&gt;
&lt;br /&gt;
==== 4. Interpolate the solution in PHASTA ====&lt;br /&gt;
&lt;br /&gt;
This step will take the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files and load them as initial conditions.&lt;br /&gt;
&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory into the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory. &lt;br /&gt;
# Add/uncomment &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; &lt;br /&gt;
# Run PHASTA for a few timesteps and write out &amp;lt;code&amp;gt;restart-dat.[target nprocs]&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Remove/comment out the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; line from the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;&lt;br /&gt;
# The new restart files have the interpolated solution&lt;br /&gt;
#* Note that if you forget to remove the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; statement from &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;, PHASTA will overwrite the existing solution in the restart files&lt;br /&gt;
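&lt;br /&gt;
As a sketch, the IC-loading toggle in &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; looks like the following (assuming &amp;lt;code&amp;gt;#&amp;lt;/code&amp;gt; starts a comment in your input deck):&lt;br /&gt;
 Load and set 3D IC: True&lt;br /&gt;
for the interpolation run, and afterwards&lt;br /&gt;
 # Load and set 3D IC: True&lt;br /&gt;
so that subsequent runs do not reset the IC from &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt;.&lt;br /&gt;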
&lt;br /&gt;
== Turbulent, Compressible, Unstructured Mesh ==&lt;br /&gt;
&lt;br /&gt;
Currently, the only version of PHASTA that is set up to handle this type of solution transfer is the &amp;lt;code&amp;gt;conrad54418/connor_primitive&amp;lt;/code&amp;gt; branch (as of 6/22/22). Creating the &amp;lt;code&amp;gt;solInterp.1&amp;lt;/code&amp;gt; file can be automated via a script or done manually (see below).&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (scripted) ===&lt;br /&gt;
&lt;br /&gt;
Scripts have been made for the compressible (but not turbulent) version of the code. An example folder containing them can be found at &amp;lt;code&amp;gt;/project/tutorials/ParaviewSolutionTransfer&amp;lt;/code&amp;gt;. The three scripts needed are interpolateSol.py, pvCSV2customSLN_Nproc_prim.m, and parRunAll.sh. The only one you need to run yourself is parRunAll.sh, found in &amp;lt;code&amp;gt;targetFolder/solutionInterp&amp;lt;/code&amp;gt;. Then run PHASTA via the runPhasta.sh script, which produces a restart.1.1 file containing the transferred solution on the new mesh. The manual section below explains what the scripts are doing.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (manual) ===&lt;br /&gt;
&lt;br /&gt;
# Load existing solution into ParaView. Use the 'MergeBlocks' filter to convert to serial case.&lt;br /&gt;
# Load the target case .pht file into ParaView&lt;br /&gt;
# Use the 'Resample from dataset' filter and select the source (new mesh file) and target (MergeBlocks) blocks accordingly&lt;br /&gt;
# Save the output as a .csv file. Write a single time step and select scientific notation to 12 decimals. NOTE: sometimes ParaView will make an error and write zeros where it can't quite find the closest point when doing the solution transfer. In this case, the .csv file needs to be manually edited to replace the zeros for pressure and temperature with realistic values.&lt;br /&gt;
# The .csv file can now be reformatted and renamed with MATLAB to match the expected form of solInterp.1.&lt;br /&gt;
# Advance PHASTA one step in serial, then convert to desired number of processors using Chef.&lt;br /&gt;
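The zero-replacement fix mentioned in step 4 can be scripted. Below is a minimal Python sketch; the column names and freestream fallback values are assumptions, so check them against your own CSV header:&lt;br /&gt;

```python
import csv

# Hypothetical freestream values used to patch points ParaView zeroed out.
P_INF = 101325.0   # pressure fallback [Pa]
T_INF = 300.0      # temperature fallback [K]

def patch_zeros(in_path, out_path, p_col="Pressure", t_col="Temperature"):
    """Replace exact zeros in the pressure/temperature columns with
    freestream values; all other fields pass through unchanged."""
    with open(in_path, newline="") as fin, open(out_path, "w", newline="") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if float(row[p_col]) == 0.0:
                row[p_col] = f"{P_INF:.12e}"
            if float(row[t_col]) == 0.0:
                row[t_col] = f"{T_INF:.12e}"
            writer.writerow(row)
```

Only exact zeros are touched, so legitimate near-zero values elsewhere in the field survive.&lt;br /&gt;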
&lt;br /&gt;
==== Alternative to Using MATLAB for Reformatting ====&lt;br /&gt;
&lt;br /&gt;
For those wanting to skip the MATLAB step, &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt; can also do the necessary column manipulation.&lt;br /&gt;
&lt;br /&gt;
Copy your file (in case you goof up):&lt;br /&gt;
 cp PVinterp0.csv test.dat&lt;br /&gt;
Remove the header line:&lt;br /&gt;
 sed -i 1,1d test.dat&lt;br /&gt;
Replace the commas with spaces:&lt;br /&gt;
 sed -i 's/,/\ /g' test.dat&lt;br /&gt;
Rearrange the columns into the order solInterp.1 expects. It is a good idea to first run &amp;lt;code&amp;gt;head -1 PVinterp0.csv&amp;lt;/code&amp;gt;, since you might have more or fewer fields than shown here; use that header to find the column numbers (starting from 1, not 0) to write x, y, z, p, u, v, w.&lt;br /&gt;
 awk '{print $6,$7,$8,$1,$2,$3,$4}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
for entropy code:&lt;br /&gt;
 awk '{print $14,$15,$16,$3,$4,$5,$6,$7,$10,$11,$12,$9,$8,$2}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
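If you would rather do the reformatting in Python than awk, the same header strip and column shuffle fits in a few lines. A sketch; the default order matches the incompressible awk example above ($6,$7,$8,$1,$2,$3,$4) and must be checked against your own header:&lt;br /&gt;

```python
def csv_to_solinterp(in_path, out_path, order=(5, 6, 7, 0, 1, 2, 3)):
    """Drop the CSV header, split on commas, and write the selected
    columns (0-based indices) space-separated, mirroring the awk step.
    The default order corresponds to awk's $6,$7,$8,$1,$2,$3,$4."""
    with open(in_path) as fin, open(out_path, "w") as fout:
        next(fin)  # skip the header line
        for line in fin:
            cols = line.strip().split(",")
            fout.write(" ".join(cols[i] for i in order) + "\n")
```

For the entropy code, pass the 0-based equivalents of the awk indices above as &amp;lt;code&amp;gt;order&amp;lt;/code&amp;gt;.&lt;br /&gt;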
and don't forget to put this file into a directory called solnTarget, and to turn on the flag &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in solver.inp for the 1 step that Joe mentioned. Finally, if you are worried about that one step disturbing your solution, recent versions of the code can take&lt;br /&gt;
 iexec : 0&lt;br /&gt;
or&lt;br /&gt;
 Number of Timesteps: 0&lt;br /&gt;
to avoid taking any actual steps. Note, however, that only very recent versions of the code have the iexec conditional moved AFTER the loading of the interpolated solution, but it should be easy to see where to move that conditional. Alternatively, the second option also skips the time stepping and writes the solution AFTER applying the boundary conditions, which is useful for confirming that the intended BCs are set (&amp;lt;code&amp;gt;iexec : 0&amp;lt;/code&amp;gt; won't detect this).&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2095</id>
		<title>Interpolate Solution from Different Mesh</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2095"/>
				<updated>2024-10-26T16:42:59Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Alternative to Using MATLAB for Reformatting */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This will be a general page for how to interpolate solutions from different meshes onto a new mesh. &lt;br /&gt;
Those meshes are assumed to be of the same domain. &lt;br /&gt;
&lt;br /&gt;
The generic terms for the two meshes are &amp;quot;source&amp;quot; and &amp;quot;target&amp;quot;, where the source has the desired solution data and the target is the mesh that will receive it. &lt;br /&gt;
&lt;br /&gt;
== Laminar, Incompressible, Semi-structured Mesh ==&lt;br /&gt;
This section will assume that the source mesh is in a structured ijk form. In the future, this may be expanded to meshes created by MGEN.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview ===&lt;br /&gt;
&lt;br /&gt;
# A CSV of the solution must be created. This must be done on a serial running version of ParaView '''and''' must be done after a MergeBlocks filter is done.&lt;br /&gt;
# A special solution file is created via &amp;lt;code&amp;gt;Sort2StructuredGrid&amp;lt;/code&amp;gt;, which will take in the csv from step 1 and the ordered coordinates of the source file from Matlab. &lt;br /&gt;
#* Effectively, it creates a single solution file in the same ordering as the Matlab points. This ordering matters because the executable used in the next step depends on it; if the ordering differs, a new executable will need to be used or created. &lt;br /&gt;
# The interpolation is performed onto the new grid via &amp;lt;code&amp;gt;par3DInterp3&amp;lt;/code&amp;gt;, which creates &amp;lt;code&amp;gt;solInterp.&amp;lt;1-nparts&amp;gt;&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
# The interpolation is then used by PHASTA by setting &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;. &lt;br /&gt;
#*This should only be done for 1 timestep, as it will continue to reset the IC on all subsequent timesteps. The &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory needs to be symlinked into the &amp;lt;code&amp;gt;-procs_case&amp;lt;/code&amp;gt; directory for this to work.&lt;br /&gt;
&lt;br /&gt;
==== 1. Create CSV ====&lt;br /&gt;
&lt;br /&gt;
* Created using ParaView&lt;br /&gt;
* PV must be running in Serial mode&lt;br /&gt;
** Otherwise the CSV will not be in the correct order and possibly have duplicated points&lt;br /&gt;
# Load in source dataset&lt;br /&gt;
# Apply Mergeblocks filter&lt;br /&gt;
# Save dataset as a csv with 12 digits of precision in scientific notation&lt;br /&gt;
#* Make sure the csv is in &amp;quot;pressure, u0, u1, u2, x0, x1, x2&amp;quot; format&lt;br /&gt;
#* This can be done by only loading the pressure and velocity fields into Paraview (either by editing the &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; or in the data load menu in Paraview).&lt;br /&gt;
# Replace the commas with spaces&lt;br /&gt;
#* Can use &amp;lt;code&amp;gt;vim&amp;lt;/code&amp;gt; or run &amp;lt;code&amp;gt;sed -i 's/,/\ /g' test.csv&amp;lt;/code&amp;gt;&lt;br /&gt;
#* Though the next step looks for a .csv extension, it performs a Fortran formatted read and actually needs those commas replaced by spaces&lt;br /&gt;
# Remove the first line of the csv file&lt;br /&gt;
#* Done in vi or sed (&amp;lt;code&amp;gt;sed -i 1,1d test.csv&amp;lt;/code&amp;gt;) or tail (&amp;lt;code&amp;gt;tail -n +2 test.csv &amp;gt; trimmedLine1.csv&amp;lt;/code&amp;gt;) &lt;br /&gt;
#* Needed for the next program&lt;br /&gt;
#* Better yet, we should change the next code to read past that header line, and then delete this step once that is done. We should also consider the approach in [https://stackoverflow.com/a/46451049/7564988 this StackOverflow answer], which shows how to build a data structure that reads the csv lines directly in the next program and avoids ALL of this file manipulation with modern Fortran (see HighPerformanceMark's answer).&lt;br /&gt;
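Until the Fortran reader is changed, steps 4 and 5 above (comma-to-space conversion and header removal) can be collapsed into a single pass. A minimal Python sketch with illustrative filenames:&lt;br /&gt;

```python
def prep_csv_for_fortran(in_path, out_path):
    """Strip the header line and replace commas with spaces so the
    Fortran formatted read in the next step can parse the file."""
    with open(in_path) as fin, open(out_path, "w") as fout:
        next(fin)                      # drop the header row
        for line in fin:
            fout.write(line.replace(",", " "))
```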
&lt;br /&gt;
==== 2. Create Structured Solution File ====&lt;br /&gt;
&lt;br /&gt;
'''Note:''' These instructions will be for the &amp;lt;code&amp;gt;parallelSortDNSzBinJames&amp;lt;/code&amp;gt; executable, which has some highly specific requirements and command inputs. &lt;br /&gt;
&lt;br /&gt;
This step will take the data from the source solution file and put it in a format/order that will make the interpolation process run much faster.&lt;br /&gt;
&lt;br /&gt;
# Symlink the source mesh's ordered coordinate file as &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may come from the files used to create the mesh (i.e. for [[Tutorial_Video_Overviews#MatLabMeshAndConvert.mov|matchedNodeElementReader]])&lt;br /&gt;
#* ''(Untested)'' This may also be created using the coordinates from the solution file&lt;br /&gt;
# Rename/symlink csv to be the correct file name (in my specific case, it was &amp;lt;code&amp;gt;dnsSolution1procLongFort.csv&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Create an interactive job on whatever machine you need to run on (ALCF Cooley in this case)&lt;br /&gt;
# Load the appropriate MPI environment variables (&amp;lt;code&amp;gt;soft add +mvapich2&amp;lt;/code&amp;gt; for Cooley)&lt;br /&gt;
# Run the executable via &amp;lt;code&amp;gt;mpirun -np [nprocs] [executable path] [executable inputs]&amp;lt;/code&amp;gt; &lt;br /&gt;
#* This will produce &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Concatenate the &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files '''in order''' into a single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file&lt;br /&gt;
#* This can be done (in zsh at least) via &amp;lt;code&amp;gt;echo source.sln.{1..[MPIRanks]}| xargs cat &amp;gt; source.sln&amp;lt;/code&amp;gt; (or the probably equivalent &amp;lt;code&amp;gt;cat source.sln.{1..[MPIRanks]} &amp;gt; source.sln&amp;lt;/code&amp;gt;). Note these files '''must''' be concatenated in order of rank, otherwise the result will be out of sequence.&lt;br /&gt;
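The rank-ordered concatenation is easy to get wrong with shell globbing (which sorts 1, 10, 11, ..., 2), so it can also be scripted explicitly. A Python sketch, where the file stem and rank count are assumptions:&lt;br /&gt;

```python
def concat_sln(nprocs, stem="source.sln", out_path="source.sln"):
    """Concatenate source.sln.1 .. source.sln.nprocs in strict rank
    order into a single file, avoiding glob-order surprises."""
    with open(out_path, "wb") as out:
        for rank in range(1, nprocs + 1):
            with open(f"{stem}.{rank}", "rb") as part:
                out.write(part.read())
```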
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 24 /lus/theta-fs0/projects/PHASTA_aesp/Utilities/Sort2StructuredGrid/parallelSortDNSzBinJames 47822547 47822547 212 0.0291&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[nlines of csv] [nlines of ordered.crd] [number of elements in z] [z domain width]&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Note:''' These inputs are specific to this executable. Changing the executable will change which inputs are used&lt;br /&gt;
* Also note that &amp;lt;code&amp;gt;[number of elements in z]&amp;lt;/code&amp;gt; is equivalent to &amp;lt;code&amp;gt;nsons&amp;lt;/code&amp;gt; - 1, i.e. the number of nodes in z - 1.&lt;br /&gt;
&lt;br /&gt;
==== 3. Create Interpolated files ====&lt;br /&gt;
&lt;br /&gt;
This step will create the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files used by PHASTA to perform the interpolation.&lt;br /&gt;
&lt;br /&gt;
# Create &amp;lt;code&amp;gt;Interpolate.../[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory in the target's Chef directory and move to that directory&lt;br /&gt;
# Symlink the target's POSIX &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files (that were created by Chef) to the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory&lt;br /&gt;
#* The &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files should be laid out exactly as they are in the Chef-created &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory, including if they're &amp;quot;fanned out&amp;quot;&lt;br /&gt;
# Create a directory called &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may be corrected in the future, but currently if &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; is not present the job will fail&lt;br /&gt;
# Symlink &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; into the directory, and symlink the &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt; file as &amp;lt;code&amp;gt;source.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;phInterp&amp;lt;/code&amp;gt; via mpirun on an interactive job.&lt;br /&gt;
# This creates a series of &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory&lt;br /&gt;
&lt;br /&gt;
The file format for &amp;lt;code&amp;gt;solInterp.N&amp;lt;/code&amp;gt; is quite simple: each line corresponds to a node number in the partition, and the file has 7 columns:&lt;br /&gt;
&lt;br /&gt;
 coord_x coord_y coord_z pressure velocity_x velocity_y velocity_z&lt;br /&gt;
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 64 /path/to/phInterp 16 799 281 213 0.452&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[target parts per MPIProc] [source nx] [source ny] [source nz] [z Length]&amp;lt;/code&amp;gt;&lt;br /&gt;
* Note that the number of processes given to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; times &amp;lt;code&amp;gt;[target parts per MPIProc]&amp;lt;/code&amp;gt; must equal the number of target partitions. In this case the target had 1024 partitions, so 64*16 = 1024&lt;br /&gt;
* The way it works is that each of the 64 MPIProcs is given 16 partitions to interpolate.&lt;br /&gt;
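Two quick sanity checks can be automated here: that the number of mpirun ranks times the parts-per-rank input equals the target partition count, and that every line of a resulting solInterp file has the expected 7 columns. A Python sketch with hypothetical values:&lt;br /&gt;

```python
def check_part_count(mpi_ranks, parts_per_rank, target_parts):
    """The product of ranks and parts-per-rank must equal the number
    of target partitions, e.g. 64 * 16 == 1024."""
    return mpi_ranks * parts_per_rank == target_parts

def check_solinterp(path, ncols=7):
    """Every line of a solInterp.N file should hold x y z p u v w."""
    with open(path) as f:
        return all(len(line.split()) == ncols for line in f)
```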
&lt;br /&gt;
==== 4. Interpolate the solution in PHASTA ====&lt;br /&gt;
&lt;br /&gt;
This step will take the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files and load them as initial conditions.&lt;br /&gt;
&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory into the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory. &lt;br /&gt;
# Add/uncomment &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; &lt;br /&gt;
# Run PHASTA for a few timesteps and write out &amp;lt;code&amp;gt;restart-dat.[target nprocs]&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Remove/comment out the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; line from the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;&lt;br /&gt;
# The new restart files have the interpolated solution&lt;br /&gt;
#* Note that if you forget to remove the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; statement from &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;, PHASTA will overwrite the existing solution in the restart files&lt;br /&gt;
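Since forgetting to remove the flag overwrites the restart files, it may be worth toggling it with a small script rather than by hand. A Python sketch, which assumes the flag appears (commented or uncommented) at the start of a line in solver.inp:&lt;br /&gt;

```python
def set_3d_ic(path, enabled):
    """Comment out or restore the 'Load and set 3D IC: True' line in
    a solver.inp file, leaving every other line untouched."""
    key = "Load and set 3D IC: True"
    out = []
    for line in open(path):
        stripped = line.lstrip("# ").rstrip("\n")
        if stripped.startswith(key.split(":")[0]):
            out.append(f"{key}\n" if enabled else f"# {key}\n")
        else:
            out.append(line)
    with open(path, "w") as f:
        f.writelines(out)
```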
&lt;br /&gt;
== Turbulent, Compressible, Unstructured Mesh ==&lt;br /&gt;
&lt;br /&gt;
Currently, the only version of PHASTA that is set up to handle this type of solution transfer is the &amp;lt;code&amp;gt;conrad54418/connor_primitive&amp;lt;/code&amp;gt; branch (as of 6/22/22). Creating the &amp;lt;code&amp;gt;solInterp.1&amp;lt;/code&amp;gt; file can be automated via a script or done manually (see below).&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (scripted) ===&lt;br /&gt;
&lt;br /&gt;
Scripts have been made for the compressible (but not turbulent) version of the code. An example folder containing them can be found at &amp;lt;code&amp;gt;/project/tutorials/ParaviewSolutionTransfer&amp;lt;/code&amp;gt;. The three scripts needed are &amp;lt;code&amp;gt;interpolateSol.py&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pvCSV2customSLN_Nproc_prim.m&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;parRunAll.sh&amp;lt;/code&amp;gt;. The only one of the three you need to run is &amp;lt;code&amp;gt;parRunAll.sh&amp;lt;/code&amp;gt;, found in &amp;lt;code&amp;gt;targetFolder/solutionInterp&amp;lt;/code&amp;gt;. Then run PHASTA via the &amp;lt;code&amp;gt;runPhasta.sh&amp;lt;/code&amp;gt; script, which will produce a &amp;lt;code&amp;gt;restart.1.1&amp;lt;/code&amp;gt; file containing the transferred solution on the new mesh. The manual section below explains what the scripts are doing.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (manual) ===&lt;br /&gt;
&lt;br /&gt;
# Load the existing solution into ParaView. Use the 'MergeBlocks' filter to convert it to a serial case.&lt;br /&gt;
# Load the target case .pht file into ParaView.&lt;br /&gt;
# Use the 'Resample from dataset' filter and select the source and target blocks accordingly. (ParaView's naming is unclear: its &amp;quot;source&amp;quot; is the set of coordinates where you need the solution, i.e. the new mesh, and its &amp;quot;input&amp;quot; is the mesh carrying the solution values you want to interpolate from, which in this case is the MergeBlocks output. This is confusing because it is exactly backwards from the terminology we use for solution interpolation, where the mesh with a solution is the source and the new mesh is the target.)&lt;br /&gt;
# Save the output as a .csv file. Write a single time step and select scientific notation to 12 decimals. NOTE: sometimes ParaView cannot quite find the closest point during the solution transfer and writes zeros instead. In that case, the .csv file needs to be edited to replace the zeros for pressure and temperature with realistic values.&lt;br /&gt;
# The .csv file can now be reformatted and renamed with MATLAB to match the expected form of solInterp.1.&lt;br /&gt;
# Advance PHASTA one step in serial, then convert to desired number of processors using Chef.&lt;br /&gt;
&lt;br /&gt;
==== Alternative to Using MATLAB for Reformatting ====&lt;br /&gt;
&lt;br /&gt;
For those wanting to skip the MATLAB step, &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt; can also do the necessary column manipulation.&lt;br /&gt;
&lt;br /&gt;
Copy your file (in case you goof up):&lt;br /&gt;
 cp PVinterp0.csv test.dat&lt;br /&gt;
Remove the header line:&lt;br /&gt;
 sed -i 1,1d test.dat&lt;br /&gt;
Replace the commas with spaces:&lt;br /&gt;
 sed -i 's/,/\ /g' test.dat&lt;br /&gt;
Rearrange the columns into the order solInterp.1 expects. It is a good idea to first run &amp;lt;code&amp;gt;head -1 PVinterp0.csv&amp;lt;/code&amp;gt;, since you might have more or fewer fields than shown here; use that header to find the column numbers (starting from 1, not 0) to write x, y, z, p, u, v, w.&lt;br /&gt;
 awk '{print $6,$7,$8,$1,$2,$3,$4}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
for entropy code:&lt;br /&gt;
 awk '{print $14,$15,$16,$3,$4,$5,$6,$7,$10,$11,$12,$9,$8,$2}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
and don't forget to put this file into a directory called solnTarget, and to turn on the flag &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in solver.inp for the 1 step that Joe mentioned. Finally, if you are worried about that one step disturbing your solution, recent versions of the code can take&lt;br /&gt;
 iexec : 0&lt;br /&gt;
or&lt;br /&gt;
 Number of Timesteps: 0&lt;br /&gt;
to avoid taking any actual steps. Note, however, that only very recent versions of the code have the iexec conditional moved AFTER the loading of the interpolated solution, but it should be easy to see where to move that conditional. Alternatively, the second option also skips the time stepping and writes the solution AFTER applying the boundary conditions, which is useful for confirming that the intended BCs are set (&amp;lt;code&amp;gt;iexec : 0&amp;lt;/code&amp;gt; won't detect this).&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2094</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2094"/>
				<updated>2024-09-06T17:07:44Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Boundary Conditions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
In this section, details of the meshing and model attributes are provided. Currently, capability exists to simulate a gas with number of species (''nsp'') &amp;amp;le; 5.&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the Simmodeler utility, the mesh can either be created or loaded from an existing .cas file. Below are steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch Simmodeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs (see input.config in source code)&lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity, [Pa-s]&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures in conjunction with reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - set to zero for an adiabatic wall boundary condition&lt;br /&gt;
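As a minimal illustration of the Dalton's-law relationship mentioned above (not the solver's actual routine): each species' partial pressure is its mole fraction times the specified static pressure.&lt;br /&gt;

```python
def partial_pressures(p_total, mole_fractions):
    """Dalton's law of partial pressures: p_i = x_i * p_total.
    mole_fractions are the reference values from solver.inp and
    should sum to approximately 1, so the partial pressures sum
    back to the total static pressure."""
    return [x * p_total for x in mole_fractions]
```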
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set such that flow is initialized Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature (related to initial vibrational temperature, see input.config in source code), [K]&lt;br /&gt;
* initial scalar_1 - Initial value of turbulent eddy viscosity, [Pa-s]&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with solver.inp values to compute the mole fraction of each species&lt;br /&gt;
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Capability is included for handling multi-species flows up to number of species (''nsp'') equal to 5.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 1000&lt;br /&gt;
     Time Step Size: 1e-8&lt;br /&gt;
     Turbulence Model: No-Model # RANS-SA&lt;br /&gt;
&lt;br /&gt;
#       Limit instructions : switch min max -- change switch from zero to activate&lt;br /&gt;
	Limit Temperature : 0 0 0 # also limits vibrational temperature&lt;br /&gt;
	Limit u1 : 0 0 0 &lt;br /&gt;
	Limit u2 : 0 0 0 &lt;br /&gt;
	Limit u3 : 0 -1 1&lt;br /&gt;
	Limit rho1 : 0 1e-20 0 &lt;br /&gt;
	Limit rho2 : 0 1e-20 0 &lt;br /&gt;
	Limit rho3 : 0 1e-20 0 &lt;br /&gt;
	Limit rho4 : 0 1e-20 0 &lt;br /&gt;
        Limit rho5 : 0 1e-20 0 &lt;br /&gt;
        Limit Scalar 1 : 0 0 0 &lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 100  &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscous Control: Viscous   #None   &lt;br /&gt;
     Shear Law: Wilke's Mixing Rule  # ishear=1  =&amp;gt; matflag(2,n)&lt;br /&gt;
     Bulk Viscosity Law: Constant Bulk Viscosity # ibulk=0 =&amp;gt; matflag(3,n)&lt;br /&gt;
     Conductivity Law: Wilke's Mixing Rule    # icond=1 =&amp;gt; matflag(4,n)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
     Species IDs: 1 2 3 4 5             # ispecIDs&lt;br /&gt;
                                        # IDs numbered in order: N2=1,O2=2,NO=3,N=4,O=5&lt;br /&gt;
     Inflow Concentrations: 0.78997 0.21 1e-5 1e-5 1e-5 # concinf&lt;br /&gt;
     Allow reactions: False             # ichem = 1 if True. right now can only be used for nsp=5&lt;br /&gt;
     Temperature threshold: 2000        # Tth--below which, reactions ignored. 2000 K is from Chalot 1990&lt;br /&gt;
     Equilibrium Tolerance: 1e-5        # chemtol (max species production rate for reactions to equilibrium in IC's/BC's)&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Vibrational Temperature BC: 1      # TvibBC. must be a positive number. at any BC with T set:&lt;br /&gt;
                                        # if greater than 5, then Tvib = TvibBC&lt;br /&gt;
                                        # else, then Tvib = TvibBC*T&lt;br /&gt;
     Vibrational Temperature IC: -1     # TvibIC (if negative, Tvib = T initially. if positive, value here is used)&lt;br /&gt;
     Restart from primitive file: 0     # 1 if restarting from file from primitive code&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG 		             # alternate is Galerkin only for compressible&lt;br /&gt;
     Time Integration Rule: First Order      # 1st Order sets rinf(1) -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct  best value depends&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Shock Sensor : False # i_ss_on = 1 if True&lt;br /&gt;
     Shock Sensor Value Full Off : 0.05  # % change of temperature across element above which we ramp mu until&lt;br /&gt;
     Shock Sensor Value Full On : 0.2    # percent change after which we hold  at scale factor * mu&lt;br /&gt;
     Shock Sensor Scale Factor : 100.0     # scale factor on mu or other sensor&lt;br /&gt;
     Wall Distance to Shield Shock Sensor : -1     # The above won't be applied within this wall distance (-1 ignores this condition)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1 # laminar&lt;br /&gt;
       # Step Construction  : 0 1 0 1 10 11 10 11 # turbulent&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in Paraview. Note that both the ''evisc'' and ''dwal'' fields would require turbulence modeling to be turned on for the example below to work. This is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;40&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;ent_from_NAS/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;ent_from_NAS/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;9&amp;quot;&lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;65800&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;1000&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;10&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_3&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_4&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;3&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_5&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;4&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;8&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
	    data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;evisc&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;10&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;dwal&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;dwal&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2093</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2093"/>
				<updated>2024-09-06T17:07:34Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Initial Conditions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
In this section, details of the meshing and model attributes are provided. Currently, the code supports a gas with up to five species (''nsp'' &amp;amp;le; 5).&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the SimModeler utility, the mesh can either be created directly or loaded from an existing .cas file. Below are the steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch SimModeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs (see input.config in source code)&lt;br /&gt;
* surfID - When the value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall-normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity, [Pa-s]&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures in conjunction with reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - set to zero for adiabatic wall boundary condition&lt;br /&gt;
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set such that flow is initialized Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature (related to initial vibrational temperature, see input.config in source code), [K]&lt;br /&gt;
* initial scalar_1 - Initial value of turbulent eddy viscosity, [Pa-s]&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with solver.inp values to compute the mole fraction of each species&lt;br /&gt;
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Multi-species flows with up to five species (''nsp'' = 5) are supported.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 1000&lt;br /&gt;
     Time Step Size: 1e-8&lt;br /&gt;
     Turbulence Model: No-Model # RANS-SA&lt;br /&gt;
&lt;br /&gt;
#       Limit instructions : switch min max -- change switch from zero to activate&lt;br /&gt;
	Limit Temperature : 0 0 0 # also limits vibrational temperature&lt;br /&gt;
	Limit u1 : 0 0 0 &lt;br /&gt;
	Limit u2 : 0 0 0 &lt;br /&gt;
	Limit u3 : 0 -1 1&lt;br /&gt;
	Limit rho1 : 0 1e-20 0 &lt;br /&gt;
	Limit rho2 : 0 1e-20 0 &lt;br /&gt;
	Limit rho3 : 0 1e-20 0 &lt;br /&gt;
	Limit rho4 : 0 1e-20 0 &lt;br /&gt;
	Limit rho5 : 0 1e-20 0 &lt;br /&gt;
	Limit Scalar 1 : 0 0 0 &lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 100  &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscous Control: Viscous   #None   &lt;br /&gt;
     Shear Law: Wilke's Mixing Rule  # ishear=1  =&amp;gt; matflag(2,n)&lt;br /&gt;
     Bulk Viscosity Law: Constant Bulk Viscosity # ibulk=0 =&amp;gt; matflag(3,n)&lt;br /&gt;
     Conductivity Law: Wilke's Mixing Rule    # icond=1 =&amp;gt; matflag(4,n)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
     Species IDs: 1 2 3 4 5             # ispecIDs&lt;br /&gt;
                                        # IDs numbered in order: N2=1,O2=2,NO=3,N=4,O=5&lt;br /&gt;
     Inflow Concentrations: 0.78997 0.21 1e-5 1e-5 1e-5 # concinf&lt;br /&gt;
     Allow reactions: False             # ichem = 1 if True. right now can only be used for nsp=5&lt;br /&gt;
     Temperature threshold: 2000        # Tth--below which, reactions ignored. 2000 K is from Chalot 1990&lt;br /&gt;
     Equilibrium Tolerance: 1e-5        # chemtol (max species production rate for reactions to equilibrium in IC's/BC's)&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Vibrational Temperature BC: 1      # TvibBC. must be a positive number. at any BC with T set:&lt;br /&gt;
                                        # if greater than 5, then Tvib = TvibBC&lt;br /&gt;
                                        # else, then Tvib = TvibBC*T&lt;br /&gt;
     Vibrational Temperature IC: -1     # TvibIC (if negative, Tvib = T initially. if positive, value here is used)&lt;br /&gt;
     Restart from primitive file: 0     # 1 if restarting from file from primitive code&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#{&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG 		             # alternate is Galerkin only for compressible&lt;br /&gt;
     Time Integration Rule: First Order      # 1st Order sets rinf(1) -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct; the best value is problem-dependent&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Shock Sensor : False # i_ss_on = 1 if True&lt;br /&gt;
     Shock Sensor Value Full Off : 0.05  # percent change of temperature across an element above which mu is ramped up&lt;br /&gt;
     Shock Sensor Value Full On : 0.2    # percent change above which mu is held at scale factor * mu&lt;br /&gt;
     Shock Sensor Scale Factor : 100.0     # scale factor on mu or other sensor&lt;br /&gt;
     Wall Distance to Shield Shock Sensor : -1     # The above won't be applied within this wall distance (-1 ignores this condition)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1 # laminar&lt;br /&gt;
       # Step Construction  : 0 1 0 1 10 11 10 11 # turbulent&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''evisc'' and ''dwal'' fields require turbulence modeling to be turned on for the example below to work; this is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;40&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;ent_from_NAS/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;ent_from_NAS/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;9&amp;quot;&lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;65800&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;1000&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;10&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_3&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_4&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;3&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_5&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;4&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;8&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
	    data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;evisc&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;10&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;dwal&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;dwal&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2092</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2092"/>
				<updated>2024-09-06T17:02:52Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Boundary Conditions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
In this section, details of the meshing and model attributes are provided. Currently, capability exists to simulate a gas with number of species (''nsp'') &amp;amp;le; 5.&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the Simmodeler utility, the mesh can either be created or loaded from an existing .cas file. Below are steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch Simmodeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs (see input.config in source code)&lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity, [Pa-s]&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures in conjunction with reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - set to zero for adiabatic wall boundary condition&lt;br /&gt;
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set such that flow is initialized Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of turbulent eddy viscosity&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with solver.inp values to compute the mole fraction of each species&lt;br /&gt;
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Capability is included for handling multi-species flows up to number of species (''nsp'') equal to 5.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 1000&lt;br /&gt;
     Time Step Size: 1e-8&lt;br /&gt;
     Turbulence Model: No-Model # RANS-SA&lt;br /&gt;
&lt;br /&gt;
#       Limit instructions : switch min max -- change switch from zero to activate&lt;br /&gt;
	Limit Temperature : 0 0 0 # also limits vibrational temperature&lt;br /&gt;
	Limit u1 : 0 0 0 &lt;br /&gt;
	Limit u2 : 0 0 0 &lt;br /&gt;
	Limit u3 : 0 -1 1&lt;br /&gt;
	Limit rho1 : 0 1e-20 0 &lt;br /&gt;
	Limit rho2 : 0 1e-20 0 &lt;br /&gt;
	Limit rho3 : 0 1e-20 0 &lt;br /&gt;
	Limit rho4 : 0 1e-20 0 &lt;br /&gt;
        Limit rho5 : 0 1e-20 0 &lt;br /&gt;
        Limit Scalar 1 : 0 0 0 &lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 100  &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscous Control: Viscous   #None   &lt;br /&gt;
     Shear Law: Wilke's Mixing Rule  # ishear=1  =&amp;gt; matflag(2,n)&lt;br /&gt;
     Bulk Viscosity Law: Constant Bulk Viscosity # ibulk=0 =&amp;gt; matflag(3,n)&lt;br /&gt;
     Conductivity Law: Wilke's Mixing Rule    # icond=1 =&amp;gt; matflag(4,n)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
     Species IDs: 1 2 3 4 5             # ispecIDs&lt;br /&gt;
                                        # IDs numbered in order: N2=1,O2=2,NO=3,N=4,O=5&lt;br /&gt;
     Inflow Concentrations: 0.78997 0.21 1e-5 1e-5 1e-5 # concinf&lt;br /&gt;
     Allow reactions: False             # ichem = 1 if True. right now can only be used for nsp=5&lt;br /&gt;
     Temperature threshold: 2000        # Tth--below which, reactions ignored. 2000 K is from Chalot 1990&lt;br /&gt;
     Equilibrium Tolerance: 1e-5        # chemtol (max species production rate for reactions to equilibrium in IC's/BC's)&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Vibrational Temperature BC: 1      # TvibBC. must be a positive number. at any BC with T set:&lt;br /&gt;
                                        # if greater than 5, then Tvib = TvibBC&lt;br /&gt;
                                        # else, then Tvib = TvibBC*T&lt;br /&gt;
     Vibrational Temperature IC: -1     # TvibIC (if negative, Tvib = T initially. if positive, value here is used)&lt;br /&gt;
     Restart from primitive file: 0     # 1 if restarting from file from primitive code&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#{&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG 		             # alternate is Galerkin only for compressible&lt;br /&gt;
     Time Integration Rule: First Order      # 1st Order sets rinf(1) -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct  best value depends&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Shock Sensor : False # i_ss_on = 1 if True&lt;br /&gt;
     Shock Sensor Value Full Off : 0.05  # % change of temperature across element above which we ramp mu until&lt;br /&gt;
     Shock Sensor Value Full On : 0.2    # percent change after which we hold  at scale factor * mu&lt;br /&gt;
     Shock Sensor Scale Factor : 100.0     # scale factor on mu or other sensor&lt;br /&gt;
     Wall Distance to Shield Shock Sensor : -1     # The above won't be applied within this wall distance (-1 ignores this condition)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1 # laminar&lt;br /&gt;
       # Step Construction  : 0 1 0 1 10 11 10 11 # turbulent&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''evisc'' and ''dwal'' fields require turbulence modeling to be turned on for the example below to work; this is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;40&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;ent_from_NAS/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;ent_from_NAS/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;9&amp;quot;&lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;65800&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;1000&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;10&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_3&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_4&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;3&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_5&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;4&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;8&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;evisc&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;10&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;dwal&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;dwal&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2091</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2091"/>
				<updated>2024-08-29T14:07:33Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Example of solver.inp file: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
In this section, details of the meshing and model attributes are provided. Currently, capability exists to simulate a gas with number of species (''nsp'') &amp;amp;le; 5.&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the SimModeler utility, the mesh can either be created or loaded from an existing .cas file. Below are steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch SimModeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures in conjunction with reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - set to zero for adiabatic wall boundary condition&lt;br /&gt;
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set so that the flow is initialized at Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of turbulent eddy viscosity&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with solver.inp values to compute the mole fraction of each species&lt;br /&gt;
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Capability is included for handling multi-species flows up to number of species (''nsp'') equal to 5.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 1000&lt;br /&gt;
     Time Step Size: 1e-8&lt;br /&gt;
     Turbulence Model: No-Model # RANS-SA&lt;br /&gt;
&lt;br /&gt;
#       Limit instructions : switch min max -- change switch from zero to activate&lt;br /&gt;
	Limit Temperature : 0 0 0 # also limits vibrational temperature&lt;br /&gt;
	Limit u1 : 0 0 0 &lt;br /&gt;
	Limit u2 : 0 0 0 &lt;br /&gt;
	Limit u3 : 0 -1 1&lt;br /&gt;
	Limit rho1 : 0 1e-20 0 &lt;br /&gt;
	Limit rho2 : 0 1e-20 0 &lt;br /&gt;
	Limit rho3 : 0 1e-20 0 &lt;br /&gt;
	Limit rho4 : 0 1e-20 0 &lt;br /&gt;
	Limit rho5 : 0 1e-20 0 &lt;br /&gt;
	Limit Scalar 1 : 0 0 0 &lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 100  &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscous Control: Viscous   #None   &lt;br /&gt;
     Shear Law: Wilke's Mixing Rule  # ishear=1  =&amp;gt; matflag(2,n)&lt;br /&gt;
     Bulk Viscosity Law: Constant Bulk Viscosity # ibulk=0 =&amp;gt; matflag(3,n)&lt;br /&gt;
     Conductivity Law: Wilke's Mixing Rule    # icond=1 =&amp;gt; matflag(4,n)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
     Species IDs: 1 2 3 4 5             # ispecIDs&lt;br /&gt;
                                        # IDs numbered in order: N2=1,O2=2,NO=3,N=4,O=5&lt;br /&gt;
     Inflow Concentrations: 0.78997 0.21 1e-5 1e-5 1e-5 # concinf&lt;br /&gt;
     Allow reactions: False             # ichem = 1 if True; currently only supported for nsp=5&lt;br /&gt;
     Temperature threshold: 2000        # Tth; below this temperature, reactions are ignored. 2000 K is from Chalot 1990&lt;br /&gt;
     Equilibrium Tolerance: 1e-5        # chemtol (max species production rate for reactions to equilibrium in IC's/BC's)&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Vibrational Temperature BC: 1      # TvibBC; must be a positive number. At any BC with T set:&lt;br /&gt;
                                        # if greater than 5, then Tvib = TvibBC&lt;br /&gt;
                                        # otherwise, Tvib = TvibBC*T&lt;br /&gt;
     Vibrational Temperature IC: -1     # TvibIC (if negative, Tvib = T initially; if positive, this value is used)&lt;br /&gt;
     Restart from primitive file: 0     # 1 if restarting from a file produced by the primitive-variable code&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#{&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG                      # alternative is Galerkin (compressible only)&lt;br /&gt;
     Time Integration Rule: First Order      # 1st Order sets rinf(1) -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct; best value is problem-dependent&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Shock Sensor : False # i_ss_on = 1 if True&lt;br /&gt;
     Shock Sensor Value Full Off : 0.05  # percent change of temperature across an element above which mu is ramped up&lt;br /&gt;
     Shock Sensor Value Full On : 0.2    # percent change above which viscosity is held at scale factor * mu&lt;br /&gt;
     Shock Sensor Scale Factor : 100.0     # scale factor applied to mu&lt;br /&gt;
     Wall Distance to Shield Shock Sensor : -1     # The above won't be applied within this wall distance (-1 ignores this condition)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1 # laminar&lt;br /&gt;
       # Step Construction  : 0 1 0 1 10 11 10 11 # turbulent&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''evisc'' and ''dwal'' fields require turbulence modeling to be enabled through the solver.inp options for the example below to work.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;40&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;ent_from_NAS/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;ent_from_NAS/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;9&amp;quot;&lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;65800&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;1000&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;10&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_3&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_4&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;3&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_5&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;4&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;8&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;evisc&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;10&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;dwal&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;dwal&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2090</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2090"/>
				<updated>2024-08-29T14:06:47Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Example of solver.inp file: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
In this section, details of the meshing and model attributes are provided. Currently, capability exists to simulate a gas with number of species (''nsp'') &amp;amp;le; 5.&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the SimModeler utility, the mesh can either be created or loaded from an existing .cas file. Below are steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch SimModeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures in conjunction with reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - set to zero for adiabatic wall boundary condition&lt;br /&gt;
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set so that the flow is initialized at Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of turbulent eddy viscosity&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with solver.inp values to compute the mole fraction of each species&lt;br /&gt;
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Capability is included for handling multi-species flows up to number of species (''nsp'') equal to 5.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 1000&lt;br /&gt;
     Time Step Size: 1e-8&lt;br /&gt;
     Turbulence Model: No-Model # RANS-SA&lt;br /&gt;
&lt;br /&gt;
#       Limit instructions : switch min max -- change switch from zero to activate&lt;br /&gt;
	Limit Temperature : 0 0 0 # also limits vibrational temperature&lt;br /&gt;
	Limit u1 : 0 0 0 &lt;br /&gt;
	Limit u2 : 0 0 0 &lt;br /&gt;
	Limit u3 : 0 -1 1&lt;br /&gt;
	Limit rho1 : 0 1e-20 0 &lt;br /&gt;
	Limit rho2 : 0 1e-20 0 &lt;br /&gt;
	Limit rho3 : 0 1e-20 0 &lt;br /&gt;
	Limit rho4 : 0 1e-20 0 &lt;br /&gt;
	Limit rho5 : 0 1e-20 0 &lt;br /&gt;
	Limit Scalar 1 : 0 0 0 &lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 100  &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscous Control: Viscous   #None   &lt;br /&gt;
     Shear Law: Wilke's Mixing Rule  # ishear=1  =&amp;gt; matflag(2,n)&lt;br /&gt;
     Bulk Viscosity Law: Constant Bulk Viscosity # ibulk=0 =&amp;gt; matflag(3,n)&lt;br /&gt;
     Conductivity Law: Wilke's Mixing Rule    # icond=1 =&amp;gt; matflag(4,n)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
     Species IDs: 1 2 3 4 5             # ispecIDs&lt;br /&gt;
                                        # IDs numbered in order: N2=1,O2=2,NO=3,N=4,O=5&lt;br /&gt;
     Inflow Concentrations: 0.78997 0.21 1e-5 1e-5 1e-5 # concinf&lt;br /&gt;
     Allow reactions: False             # ichem = 1 if True; currently only supported for nsp=5&lt;br /&gt;
     Temperature threshold: 2000        # Tth; below this temperature, reactions are ignored. 2000 K is from Chalot 1990&lt;br /&gt;
     Equilibrium Tolerance: 1e-5        # chemtol (max species production rate for reactions to equilibrium in IC's/BC's)&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Vibrational Temperature BC: 1      # TvibBC; must be a positive number. At any BC with T set:&lt;br /&gt;
                                        # if greater than 5, then Tvib = TvibBC&lt;br /&gt;
                                        # otherwise, Tvib = TvibBC*T&lt;br /&gt;
     Vibrational Temperature IC: -1     # TvibIC (if negative, Tvib = T initially; if positive, this value is used)&lt;br /&gt;
     Restart from primitive file: 0     # 1 if restarting from a file produced by the primitive-variable code&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#{&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG                      # alternative is Galerkin (compressible only)&lt;br /&gt;
     Time Integration Rule: First Order      # 1st Order sets rinf(1) -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct; best value is problem-dependent&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Shock Sensor : False # i_ss_on = 1 if True&lt;br /&gt;
     Shock Sensor Value Full Off : 0.05  # percent change of temperature across an element above which mu is ramped up&lt;br /&gt;
     Shock Sensor Value Full On : 0.2    # percent change above which viscosity is held at scale factor * mu&lt;br /&gt;
     Shock Sensor Scale Factor : 100.0     # scale factor applied to mu&lt;br /&gt;
     Wall Distance to Shield Shock Sensor : -1     # The above won't be applied within this wall distance (-1 ignores this condition)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1 # 10 11 10 11&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''evisc'' and ''dwal'' fields require turbulence modeling to be enabled through the solver.inp options for the example below to work.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;40&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;ent_from_NAS/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;ent_from_NAS/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;9&amp;quot;&lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;65800&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;1000&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;10&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_3&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_4&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;3&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_5&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;4&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;8&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;evisc&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;10&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;dwal&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;dwal&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2089</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2089"/>
				<updated>2024-08-29T14:05:27Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Post-Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
In this section, details of the meshing and model attributes are provided. Currently, capability exists to simulate a gas with number of species (''nsp'') &amp;amp;le; 5.&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the SimModeler utility, the mesh can either be created or loaded from an existing .cas file. Below are the steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch SimModeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures in conjunction with reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - set to zero for adiabatic wall boundary condition&lt;br /&gt;
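The pressure bullet above can be illustrated numerically. Below is a minimal sketch of how a boundary pressure and reference mole fractions combine via Dalton's law under an ideal-gas assumption; the function name, variable names, and molar-mass table are mine for illustration, not PHASTA's internals:

```python
# Hypothetical sketch: species partial densities from total pressure,
# temperature, and reference mole fractions via Dalton's law (ideal gas).
R_U = 8.314462618  # universal gas constant, J/(mol*K)

# molar masses, kg/mol, in the species order used on this page: N2, O2, NO, N, O
MOLAR_MASS = [28.0134e-3, 31.9988e-3, 30.0061e-3, 14.0067e-3, 15.9994e-3]

def species_densities(p_total, T, mole_fractions):
    """Return rho_i for each species: rho_i = x_i * p * M_i / (R_u * T)."""
    return [x * p_total * M / (R_U * T)
            for x, M in zip(mole_fractions, MOLAR_MASS)]

# e.g. the solver.inp reference fractions at 101325 Pa and 300 K
rhos = species_densities(101325.0, 300.0, [0.78997, 0.21, 1e-5, 1e-5, 1e-5])
```

The sum of the returned partial densities is the mixture density, which is how a single specified pressure determines the full species state at the boundary.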
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set such that flow is initialized Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of turbulent eddy viscosity&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with solver.inp values to compute the mole fraction of each species&lt;br /&gt;
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Capability is included for handling multi-species flows up to number of species (''nsp'') equal to 5.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 1000&lt;br /&gt;
     Time Step Size: 1e-8&lt;br /&gt;
&lt;br /&gt;
#       Limit instructions : switch min max -- change switch from zero to activate&lt;br /&gt;
	Limit Temperature : 0 0 0 # also limits vibrational temperature&lt;br /&gt;
	Limit u1 : 0 0 0 &lt;br /&gt;
	Limit u2 : 0 0 0 &lt;br /&gt;
	Limit u3 : 0 -1 1&lt;br /&gt;
	Limit rho1 : 0 1e-20 0 &lt;br /&gt;
	Limit rho2 : 0 1e-20 0 &lt;br /&gt;
	Limit rho3 : 0 1e-20 0 &lt;br /&gt;
	Limit rho4 : 0 1e-20 0 &lt;br /&gt;
        Limit rho5 : 0 1e-20 0 &lt;br /&gt;
        Limit Scalar 1 : 0 0 0 &lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 100  &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscous Control: Viscous   #None   &lt;br /&gt;
     Shear Law: Wilke's Mixing Rule  # ishear=1  =&amp;gt; matflag(2,n)&lt;br /&gt;
     Bulk Viscosity Law: Constant Bulk Viscosity # ibulk=0 =&amp;gt; matflag(3,n)&lt;br /&gt;
     Conductivity Law: Wilke's Mixing Rule    # icond=1 =&amp;gt; matflag(4,n)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
     Species IDs: 1 2 3 4 5             # ispecIDs&lt;br /&gt;
                                        # IDs numbered in order: N2=1,O2=2,NO=3,N=4,O=5&lt;br /&gt;
     Inflow Concentrations: 0.78997 0.21 1e-5 1e-5 1e-5 # concinf&lt;br /&gt;
     Allow reactions: False             # ichem = 1 if True; currently supported only for nsp=5&lt;br /&gt;
     Temperature threshold: 2000        # Tth: below this, reactions are ignored; the 2000 K value is from Chalot 1990&lt;br /&gt;
     Equilibrium Tolerance: 1e-5        # chemtol (max species production rate for reactions to equilibrium in IC's/BC's)&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Vibrational Temperature BC: 1      # TvibBC. must be a positive number. at any BC with T set:&lt;br /&gt;
                                        # if greater than 5, then Tvib = TvibBC&lt;br /&gt;
                                        # else, then Tvib = TvibBC*T&lt;br /&gt;
     Vibrational Temperature IC: -1     # TvibIC (if negative, Tvib = T initially. if positive, value here is used)&lt;br /&gt;
     Restart from primitive file: 0     # 1 if restarting from file from primitive code&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#{&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG 		             # alternate is Galerkin only for compressible&lt;br /&gt;
     Time Integration Rule: First Order      # First Order sets rinf(1) = -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct; best value is problem-dependent&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Shock Sensor : False # i_ss_on = 1 if True&lt;br /&gt;
     Shock Sensor Value Full Off : 0.05  # fractional temperature change across an element above which mu is ramped up&lt;br /&gt;
     Shock Sensor Value Full On : 0.2    # fractional change at and beyond which mu is held at scale factor * mu&lt;br /&gt;
     Shock Sensor Scale Factor : 100.0     # scale factor applied to mu&lt;br /&gt;
     Wall Distance to Shield Shock Sensor : -1     # The above won't be applied within this wall distance (-1 ignores this condition)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
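The ''Vibrational Temperature BC'' convention commented in the example above (a single input that is either an absolute temperature or a ratio, depending on magnitude) can be sketched as a small helper. The function name is hypothetical; only the positive-value requirement and the greater-than-5 branching rule come from the solver.inp comments:

```python
def vib_temp_bc(TvibBC, T_wall):
    """Interpret the TvibBC input per the solver.inp comments.

    TvibBC must be positive. At any boundary where T is set:
      - TvibBC > 5 : treat it as an absolute temperature, Tvib = TvibBC
      - otherwise  : treat it as a ratio,                 Tvib = TvibBC * T
    """
    if TvibBC <= 0:
        raise ValueError("TvibBC must be a positive number")
    return TvibBC if TvibBC > 5 else TvibBC * T_wall
```

So the default of 1 keeps the vibrational temperature equal to the set translational-rotational temperature, while a value like 1000 pins it at 1000 K.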
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''evisc'' and ''dwal'' fields require turbulence modeling to be turned on for the example below to work. This is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;40&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;ent_from_NAS/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;ent_from_NAS/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;9&amp;quot;&lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;65800&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;1000&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;10&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_3&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_4&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;3&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_5&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;4&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;8&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
	    data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;evisc&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;10&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;dwal&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;dwal&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2088</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2088"/>
				<updated>2024-08-29T14:04:30Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Example of flow.pht file */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
In this section, details of the meshing and model attributes are provided. Currently, capability exists to simulate a gas with number of species (''nsp'') &amp;amp;le; 5.&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the SimModeler utility, the mesh can either be created or loaded from an existing .cas file. Below are the steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch SimModeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures in conjunction with reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - set to zero for adiabatic wall boundary condition&lt;br /&gt;
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set such that flow is initialized Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of turbulent eddy viscosity&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with solver.inp values to compute the mole fraction of each species&lt;br /&gt;
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Capability is included for handling multi-species flows up to number of species (''nsp'') equal to 5.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 1000&lt;br /&gt;
     Time Step Size: 1e-8&lt;br /&gt;
&lt;br /&gt;
#       Limit instructions : switch min max -- change switch from zero to activate&lt;br /&gt;
	Limit Temperature : 0 0 0 # also limits vibrational temperature&lt;br /&gt;
	Limit u1 : 0 0 0 &lt;br /&gt;
	Limit u2 : 0 0 0 &lt;br /&gt;
	Limit u3 : 0 -1 1&lt;br /&gt;
	Limit rho1 : 0 1e-20 0 &lt;br /&gt;
	Limit rho2 : 0 1e-20 0 &lt;br /&gt;
	Limit rho3 : 0 1e-20 0 &lt;br /&gt;
	Limit rho4 : 0 1e-20 0 &lt;br /&gt;
        Limit rho5 : 0 1e-20 0 &lt;br /&gt;
        Limit Scalar 1 : 0 0 0 &lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 100  &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscous Control: Viscous   #None   &lt;br /&gt;
     Shear Law: Wilke's Mixing Rule  # ishear=1  =&amp;gt; matflag(2,n)&lt;br /&gt;
     Bulk Viscosity Law: Constant Bulk Viscosity # ibulk=0 =&amp;gt; matflag(3,n)&lt;br /&gt;
     Conductivity Law: Wilke's Mixing Rule    # icond=1 =&amp;gt; matflag(4,n)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
     Species IDs: 1 2 3 4 5             # ispecIDs&lt;br /&gt;
                                        # IDs numbered in order: N2=1,O2=2,NO=3,N=4,O=5&lt;br /&gt;
     Inflow Concentrations: 0.78997 0.21 1e-5 1e-5 1e-5 # concinf&lt;br /&gt;
     Allow reactions: False             # ichem = 1 if True; currently supported only for nsp=5&lt;br /&gt;
     Temperature threshold: 2000        # Tth: below this, reactions are ignored; the 2000 K value is from Chalot 1990&lt;br /&gt;
     Equilibrium Tolerance: 1e-5        # chemtol (max species production rate for reactions to equilibrium in IC's/BC's)&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Vibrational Temperature BC: 1      # TvibBC. must be a positive number. at any BC with T set:&lt;br /&gt;
                                        # if greater than 5, then Tvib = TvibBC&lt;br /&gt;
                                        # else, then Tvib = TvibBC*T&lt;br /&gt;
     Vibrational Temperature IC: -1     # TvibIC (if negative, Tvib = T initially. if positive, value here is used)&lt;br /&gt;
     Restart from primitive file: 0     # 1 if restarting from file from primitive code&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#{&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG 		             # alternate is Galerkin only for compressible&lt;br /&gt;
     Time Integration Rule: First Order      # First Order sets rinf(1) = -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct; best value is problem-dependent&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Shock Sensor : False # i_ss_on = 1 if True&lt;br /&gt;
     Shock Sensor Value Full Off : 0.05  # fractional temperature change across an element above which mu is ramped up&lt;br /&gt;
     Shock Sensor Value Full On : 0.2    # fractional change at and beyond which mu is held at scale factor * mu&lt;br /&gt;
     Shock Sensor Scale Factor : 100.0     # scale factor applied to mu&lt;br /&gt;
     Wall Distance to Shield Shock Sensor : -1     # The above won't be applied within this wall distance (-1 ignores this condition)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
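The discontinuity-capturing thresholds above describe a ramp on the viscosity multiplier: no augmentation below the full-off value, a hold at the scale factor above the full-on value, and a transition in between. A sketch under the assumption that the transition is linear (the solver may interpolate differently; the function name is mine):

```python
def shock_mu_factor(dT_frac, full_off=0.05, full_on=0.2, scale=100.0):
    """Multiplier on mu from the element-wise fractional temperature change.

    Below full_off: no augmentation (factor 1).  At and above full_on: held
    at `scale`.  In between: assumed linear ramp from 1 up to `scale`.
    """
    if dT_frac <= full_off:
        return 1.0
    if dT_frac >= full_on:
        return scale
    frac = (dT_frac - full_off) / (full_on - full_off)
    return 1.0 + frac * (scale - 1.0)
```

With the defaults above, an element seeing a 12.5% temperature change sits halfway up the ramp and receives roughly half the full augmentation.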
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''evisc'' and ''dwal'' fields require turbulence modeling to be turned on for the example below to work. This is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;40&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;ent_from_NAS/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;ent_from_NAS/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;9&amp;quot;&lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;65800&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;1000&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;10&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
    &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_3&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_4&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;3&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_5&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;4&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;8&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
	    data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;evisc&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;10&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;dwal&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;dwal&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2087</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2087"/>
				<updated>2024-08-29T14:01:51Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Simulation Inputs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
In this section, details of the meshing and model attributes are provided. Currently, capability exists to simulate a gas with number of species (''nsp'') &amp;amp;le; 5.&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the SimModeler utility, the mesh can either be created or loaded from an existing .cas file. Below are the steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch SimModeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures in conjunction with reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - Set to zero for an adiabatic wall boundary condition&lt;br /&gt;
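The role of the pressure boundary condition in setting the gas composition can be sketched with a short ideal-gas calculation. This is an illustrative sketch, not PHASTA source code; the function name, molar masses, and flow conditions below are assumed values, and the mole fractions match the ''Inflow Concentrations'' line in the solver.inp example.

```python
# Illustrative sketch (not PHASTA source): given a total-pressure BC, a
# temperature, and reference mole fractions, Dalton's law of partial
# pressures plus the ideal-gas law determines each species' partial density.
R_UNIV = 8.314462618  # universal gas constant, J/(mol K)

def partial_densities(p_total, T, mole_fractions, molar_masses):
    """rho_s = x_s * p_total * M_s / (R * T) for each species s."""
    return [x * p_total * M / (R_UNIV * T)
            for x, M in zip(mole_fractions, molar_masses)]

# 5-species air model (N2, O2, NO, N, O); molar masses in kg/mol
M = [0.028014, 0.031998, 0.030006, 0.014007, 0.015999]
x = [0.78997, 0.21, 1e-5, 1e-5, 1e-5]  # reference mole fractions
rho = partial_densities(101325.0, 300.0, x, M)  # assumed: 1 atm, 300 K
```

At these assumed conditions the partial densities sum to roughly the density of air at standard conditions, which is a quick sanity check on the composition.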
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set these values such that the flow is initialized at Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of turbulent eddy viscosity&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with solver.inp values to compute the mole fraction of each species&lt;br /&gt;
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Capability is included for handling multi-species flows up to number of species (''nsp'') equal to 5.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 1000&lt;br /&gt;
     Time Step Size: 1e-8&lt;br /&gt;
&lt;br /&gt;
#       Limit instructions : switch min max -- change switch from zero to activate&lt;br /&gt;
	Limit Temperature : 0 0 0 # also limits vibrational temperature&lt;br /&gt;
	Limit u1 : 0 0 0 &lt;br /&gt;
	Limit u2 : 0 0 0 &lt;br /&gt;
	Limit u3 : 0 -1 1&lt;br /&gt;
	Limit rho1 : 0 1e-20 0 &lt;br /&gt;
	Limit rho2 : 0 1e-20 0 &lt;br /&gt;
	Limit rho3 : 0 1e-20 0 &lt;br /&gt;
	Limit rho4 : 0 1e-20 0 &lt;br /&gt;
        Limit rho5 : 0 1e-20 0 &lt;br /&gt;
        Limit Scalar 1 : 0 0 0 &lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 100  &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscous Control: Viscous   # alternative: None&lt;br /&gt;
     Shear Law: Wilke's Mixing Rule  # ishear=1  =&amp;gt; matflag(2,n)&lt;br /&gt;
     Bulk Viscosity Law: Constant Bulk Viscosity # ibulk=0 =&amp;gt; matflag(3,n)&lt;br /&gt;
     Conductivity Law: Wilke's Mixing Rule    # icond=1 =&amp;gt; matflag(4,n)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
     Species IDs: 1 2 3 4 5             # ispecIDs&lt;br /&gt;
                                        # IDs numbered in order: N2=1,O2=2,NO=3,N=4,O=5&lt;br /&gt;
     Inflow Concentrations: 0.78997 0.21 1e-5 1e-5 1e-5 # concinf&lt;br /&gt;
     Allow reactions: False             # ichem = 1 if True; currently only supported for nsp=5&lt;br /&gt;
     Temperature threshold: 2000        # Tth--below which, reactions ignored. 2000 K is from Chalot 1990&lt;br /&gt;
     Equilibrium Tolerance: 1e-5        # chemtol (max species production rate for reactions to equilibrium in IC's/BC's)&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Vibrational Temperature BC: 1      # TvibBC. must be a positive number. at any BC with T set:&lt;br /&gt;
                                        # if greater than 5, then Tvib = TvibBC&lt;br /&gt;
                                        # else, then Tvib = TvibBC*T&lt;br /&gt;
     Vibrational Temperature IC: -1     # TvibIC (if negative, Tvib = T initially. if positive, value here is used)&lt;br /&gt;
     Restart from primitive file: 0     # 1 if restarting from file from primitive code&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#{&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG                         # alternative: Galerkin (compressible flow only)&lt;br /&gt;
     Time Integration Rule: First Order      # First Order sets rinf(1) = -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct; best value is problem-dependent&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Shock Sensor : False # i_ss_on = 1 if True&lt;br /&gt;
     Shock Sensor Value Full Off : 0.05  # fractional temperature change across an element above which mu is ramped up&lt;br /&gt;
     Shock Sensor Value Full On : 0.2    # fractional change at and above which mu is held at scale factor * mu&lt;br /&gt;
     Shock Sensor Scale Factor : 100.0     # scale factor on mu or other sensor&lt;br /&gt;
     Wall Distance to Shield Shock Sensor : -1     # The above won't be applied within this wall distance (-1 ignores this condition)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
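The discontinuity-capturing inputs above describe a ramp of the added viscosity between the Full Off and Full On sensor values. A minimal sketch of that behavior, assuming a linear blend between the two thresholds (the exact ramp shape used inside the solver may differ):

```python
def dc_scale(sensor, full_off=0.05, full_on=0.2, scale_factor=100.0):
    """Assumed linear ramp for the discontinuity-capturing viscosity:
    no scaling below full_off, held at scale_factor at/above full_on,
    linear blend in between. 'sensor' is the fractional change of
    temperature across the element. Function name is illustrative."""
    if sensor <= full_off:
        return 1.0
    if sensor >= full_on:
        return scale_factor
    frac = (sensor - full_off) / (full_on - full_off)
    return 1.0 + frac * (scale_factor - 1.0)
```

With the example values, a smooth region (sensor below 0.05) leaves mu untouched, while a strong shock (sensor at or above 0.2) multiplies mu by 100.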
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''errors'' and ''DCqpt'' fields must be output to the restart file for the example below to work. Saving these two fields is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;24&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;24-procs_case/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;24-procs_case/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;1&amp;quot; &lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;2010&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;50&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_N2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_O2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temperature&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;6&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;errors&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;nu&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;DCqpt&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;DCqpt1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2086</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2086"/>
				<updated>2024-08-29T14:01:27Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Example of solver.inp file: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
In this section, details of the meshing and model attributes are provided. Currently, capability exists to simulate a gas with number of species (''nsp'') &amp;amp;le; 5.&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the SimModeler utility, the mesh can either be created or loaded from an existing .cas file. Below are the steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch SimModeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures in conjunction with reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - Set to zero for an adiabatic wall boundary condition&lt;br /&gt;
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set these values such that the flow is initialized at Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of turbulent eddy viscosity&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with solver.inp values to compute the mole fraction of each species&lt;br /&gt;
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Capability is included for handling multi-species flows up to number of species ''nsp'' equal to 5.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 1000&lt;br /&gt;
     Time Step Size: 1e-8&lt;br /&gt;
&lt;br /&gt;
#       Limit instructions : switch min max -- change switch from zero to activate&lt;br /&gt;
	Limit Temperature : 0 0 0 # also limits vibrational temperature&lt;br /&gt;
	Limit u1 : 0 0 0 &lt;br /&gt;
	Limit u2 : 0 0 0 &lt;br /&gt;
	Limit u3 : 0 -1 1&lt;br /&gt;
	Limit rho1 : 0 1e-20 0 &lt;br /&gt;
	Limit rho2 : 0 1e-20 0 &lt;br /&gt;
	Limit rho3 : 0 1e-20 0 &lt;br /&gt;
	Limit rho4 : 0 1e-20 0 &lt;br /&gt;
        Limit rho5 : 0 1e-20 0 &lt;br /&gt;
        Limit Scalar 1 : 0 0 0 &lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 100  &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscous Control: Viscous   # alternative: None&lt;br /&gt;
     Shear Law: Wilke's Mixing Rule  # ishear=1  =&amp;gt; matflag(2,n)&lt;br /&gt;
     Bulk Viscosity Law: Constant Bulk Viscosity # ibulk=0 =&amp;gt; matflag(3,n)&lt;br /&gt;
     Conductivity Law: Wilke's Mixing Rule    # icond=1 =&amp;gt; matflag(4,n)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
     Species IDs: 1 2 3 4 5             # ispecIDs&lt;br /&gt;
                                        # IDs numbered in order: N2=1,O2=2,NO=3,N=4,O=5&lt;br /&gt;
     Inflow Concentrations: 0.78997 0.21 1e-5 1e-5 1e-5 # concinf&lt;br /&gt;
     Allow reactions: False             # ichem = 1 if True; currently only supported for nsp=5&lt;br /&gt;
     Temperature threshold: 2000        # Tth--below which, reactions ignored. 2000 K is from Chalot 1990&lt;br /&gt;
     Equilibrium Tolerance: 1e-5        # chemtol (max species production rate for reactions to equilibrium in IC's/BC's)&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Vibrational Temperature BC: 1      # TvibBC. must be a positive number. at any BC with T set:&lt;br /&gt;
                                        # if greater than 5, then Tvib = TvibBC&lt;br /&gt;
                                        # else, then Tvib = TvibBC*T&lt;br /&gt;
     Vibrational Temperature IC: -1     # TvibIC (if negative, Tvib = T initially. if positive, value here is used)&lt;br /&gt;
     Restart from primitive file: 0     # 1 if restarting from file from primitive code&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#{&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG                         # alternative: Galerkin (compressible flow only)&lt;br /&gt;
     Time Integration Rule: First Order      # First Order sets rinf(1) = -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct; best value is problem-dependent&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Shock Sensor : False # i_ss_on = 1 if True&lt;br /&gt;
     Shock Sensor Value Full Off : 0.05  # fractional temperature change across an element above which mu is ramped up&lt;br /&gt;
     Shock Sensor Value Full On : 0.2    # fractional change at and above which mu is held at scale factor * mu&lt;br /&gt;
     Shock Sensor Scale Factor : 100.0     # scale factor on mu or other sensor&lt;br /&gt;
     Wall Distance to Shield Shock Sensor : -1     # The above won't be applied within this wall distance (-1 ignores this condition)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
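The ''Vibrational Temperature BC'' rule described in the comments above (an absolute temperature when TvibBC exceeds 5, otherwise a multiplier on T) can be sketched as follows; the function name is illustrative and this is an assumed restatement of the documented rule, not PHASTA source code:

```python
def vib_temperature_bc(T, TvibBC):
    """Assumed interpretation of the TvibBC input at a BC where T is
    set: values greater than 5 are absolute vibrational temperatures
    in K; values of 5 or less scale the translational-rotational
    temperature T. TvibBC must be a positive number."""
    if TvibBC <= 0:
        raise ValueError("TvibBC must be a positive number")
    return TvibBC if TvibBC > 5 else TvibBC * T
```

For example, the default TvibBC of 1 simply enforces Tvib = T at the boundary, while TvibBC = 1000 would pin the boundary vibrational temperature at 1000 K.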
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''errors'' and ''DCqpt'' fields must be output to the restart file for the example below to work. Saving these two fields is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;24&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;24-procs_case/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;24-procs_case/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;1&amp;quot; &lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;2010&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;50&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_N2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_O2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temperature&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;6&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;errors&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;nu&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;DCqpt&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;DCqpt1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2085</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2085"/>
				<updated>2024-08-29T14:01:13Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: Undo revision 2084 by Conrad54418 (talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
In this section, details of the meshing and model attributes are provided. Currently, capability exists to simulate a gas with number of species (''nsp'') &amp;amp;le; 5.&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the SimModeler utility, the mesh can either be created or loaded from an existing .cas file. Below are the steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch SimModeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures in conjunction with reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - Set to zero for an adiabatic wall boundary condition&lt;br /&gt;
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set these values such that the flow is initialized at Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of turbulent eddy viscosity&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with solver.inp values to compute the mole fraction of each species&lt;br /&gt;
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Capability is included for handling multi-species flows up to number of species ''nsp'' equal to 5.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;# PHASTA_HYP Version 1 Input File&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 1000&lt;br /&gt;
     Time Step Size: 1e-8&lt;br /&gt;
&lt;br /&gt;
#       Limit instructions : switch min max -- change switch from zero to activate&lt;br /&gt;
	Limit Temperature : 0 0 0 # also limits vibrational temperature&lt;br /&gt;
	Limit u1 : 0 0 0 &lt;br /&gt;
	Limit u2 : 0 0 0 &lt;br /&gt;
	Limit u3 : 0 -1 1&lt;br /&gt;
	Limit rho1 : 0 1e-20 0 &lt;br /&gt;
	Limit rho2 : 0 1e-20 0 &lt;br /&gt;
	Limit rho3 : 0 1e-20 0 &lt;br /&gt;
	Limit rho4 : 0 1e-20 0 &lt;br /&gt;
        Limit rho5 : 0 1e-20 0 &lt;br /&gt;
        Limit Scalar 1 : 0 0 0 &lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 100  &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscous Control: Viscous   #None   &lt;br /&gt;
     Shear Law: Wilke's Mixing Rule  # ishear=1  =&amp;gt; matflag(2,n)&lt;br /&gt;
     Bulk Viscosity Law: Constant Bulk Viscosity # ibulk=0 =&amp;gt; matflag(3,n)&lt;br /&gt;
     Conductivity Law: Wilke's Mixing Rule    # icond=1 =&amp;gt; matflag(4,n)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
     Species IDs: 1 2 3 4 5             # ispecIDs&lt;br /&gt;
                                        # IDs numbered in order: N2=1,O2=2,NO=3,N=4,O=5&lt;br /&gt;
     Inflow Concentrations: 0.78997 0.21 1e-5 1e-5 1e-5 # concinf&lt;br /&gt;
     Allow reactions: False             # ichem = 1 if True. right now can only be used for nsp=5&lt;br /&gt;
     Temperature threshold: 2000        # Tth--below which, reactions ignored. 2000 K is from Chalot 1990&lt;br /&gt;
     Equilibrium Tolerance: 1e-5        # chemtol (max species production rate for reactions to equilibrium in IC's/BC's)&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Vibrational Temperature BC: 1      # TvibBC. must be a positive number. at any BC with T set:&lt;br /&gt;
                                        # if greater than 5, then Tvib = TvibBC&lt;br /&gt;
                                        # else, then Tvib = TvibBC*T&lt;br /&gt;
     Vibrational Temperature IC: -1     # TvibIC (if negative, Tvib = T initially. if positive, value here is used)&lt;br /&gt;
     Restart from primitive file: 0     # 1 if restarting from file from primitive code&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#{&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG 		             # alternate is Galerkin only for compressible&lt;br /&gt;
     Time Integration Rule: First Order      # 1st Order sets rinf(1) -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct -- best value is problem-dependent&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Shock Sensor : False # i_ss_on = 1 if True&lt;br /&gt;
     Shock Sensor Value Full Off : 0.05  # percent change of temperature across an element above which mu begins to ramp&lt;br /&gt;
     Shock Sensor Value Full On : 0.2    # percent change above which mu is held at scale factor * mu&lt;br /&gt;
     Shock Sensor Scale Factor : 100.0     # scale factor on mu or other sensor&lt;br /&gt;
     Wall Distance to Shield Shock Sensor : -1     # The above won't be applied within this wall distance (-1 ignores this condition)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
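Two of the options in the solver.inp example above encode simple piecewise rules: the vibrational temperature BC interpretation and the shock sensor viscosity ramp. The Python sketch below restates those rules as described in the comments; the function names are hypothetical, and the linear shape of the ramp between the two thresholds is an assumption.&lt;br /&gt;

```python
# Python restatement of two piecewise rules from the comments in the
# solver.inp example above (function names are hypothetical, not PHASTA's).

def vib_temp_bc(TvibBC, T):
    # 'Vibrational Temperature BC': values above 5 are read as an
    # absolute vibrational temperature [K]; values of 5 or below are
    # read as a ratio applied to the set temperature T.
    assert TvibBC > 0, "TvibBC must be a positive number"
    return TvibBC if TvibBC > 5 else TvibBC * T

def shock_mu_factor(dT_frac, full_off=0.05, full_on=0.2, scale=100.0):
    # Shock sensor ramp: the multiplier on mu is 1 below 'Full Off',
    # increases (assumed linearly here) between 'Full Off' and
    # 'Full On', and is held at the scale factor above 'Full On'.
    if dT_frac <= full_off:
        return 1.0
    if dT_frac >= full_on:
        return scale
    t = (dT_frac - full_off) / (full_on - full_off)
    return 1.0 + t * (scale - 1.0)
```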
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''errors'' and ''DCqpt'' fields must be output to the restart file for the example below to work; saving these two fields is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;24&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;24-procs_case/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;24-procs_case/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;1&amp;quot; &lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;2010&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;50&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_N2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_O2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temperature&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;6&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;errors&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;nu&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;DCqpt&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;DCqpt1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2084</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2084"/>
				<updated>2024-08-29T14:00:35Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Example of solver.inp file: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
In this section, details of the meshing and model attributes are provided. Currently, capability exists to simulate a gas with number of species (''nsp'') &amp;amp;le; 5.&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the SimModeler utility, the mesh can either be created or loaded from an existing .cas file. Below are the steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch SimModeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures in conjunction with reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - set to zero for adiabatic wall boundary condition&lt;br /&gt;
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set these values so that the flow is initialized at Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of turbulent eddy viscosity&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with solver.inp values to compute the mole fraction of each species&lt;br /&gt;
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Capability is included for handling multi-species flows up to number of species ''nsp'' equal to 5.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;# PHASTA_HYP Version 1 Input File&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 1000&lt;br /&gt;
     Time Step Size: 1e-8&lt;br /&gt;
&lt;br /&gt;
#       Limit instructions : switch min max -- change switch from zero to activate&lt;br /&gt;
	Limit Temperature : 0 0 0 # also limits vibrational temperature&lt;br /&gt;
	Limit u1 : 0 0 0 &lt;br /&gt;
	Limit u2 : 0 0 0 &lt;br /&gt;
	Limit u3 : 0 -1 1&lt;br /&gt;
	Limit rho1 : 0 1e-20 0 &lt;br /&gt;
	Limit rho2 : 0 1e-20 0 &lt;br /&gt;
	Limit rho3 : 0 1e-20 0 &lt;br /&gt;
	Limit rho4 : 0 1e-20 0 &lt;br /&gt;
        Limit rho5 : 0 1e-20 0 &lt;br /&gt;
        Limit Scalar 1 : 0 0 0 &lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 100  &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscous Control: Viscous   #None   &lt;br /&gt;
     Shear Law: Wilke's Mixing Rule  # ishear=1  =&amp;gt; matflag(2,n)&lt;br /&gt;
     Bulk Viscosity Law: Constant Bulk Viscosity # ibulk=0 =&amp;gt; matflag(3,n)&lt;br /&gt;
     Conductivity Law: Wilke's Mixing Rule    # icond=1 =&amp;gt; matflag(4,n)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
     Species IDs: 1 2 3 4 5             # ispecIDs&lt;br /&gt;
                                        # IDs numbered in order: N2=1,O2=2,NO=3,N=4,O=5&lt;br /&gt;
     Inflow Concentrations: 0.78997 0.21 1e-5 1e-5 1e-5 # concinf&lt;br /&gt;
     Allow reactions: False             # ichem = 1 if True. right now can only be used for nsp=5&lt;br /&gt;
     Temperature threshold: 2000        # Tth--below which, reactions ignored. 2000 K is from Chalot 1990&lt;br /&gt;
     Equilibrium Tolerance: 1e-5        # chemtol (max species production rate for reactions to equilibrium in IC's/BC's)&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Vibrational Temperature BC: 1      # TvibBC. must be a positive number. at any BC with T set:&lt;br /&gt;
                                        # if greater than 5, then Tvib = TvibBC&lt;br /&gt;
                                        # else, then Tvib = TvibBC*T&lt;br /&gt;
     Vibrational Temperature IC: -1     # TvibIC (if negative, Tvib = T initially. if positive, value here is used)&lt;br /&gt;
     Restart from primitive file: 0     # 1 if restarting from file from primitive code&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#{&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG 		             # alternate is Galerkin only for compressible&lt;br /&gt;
     Time Integration Rule: First Order      # 1st Order sets rinf(1) -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct -- best value is problem-dependent&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Shock Sensor : False # i_ss_on = 1 if True&lt;br /&gt;
     Shock Sensor Value Full Off : 0.05  # percent change of temperature across an element above which mu begins to ramp&lt;br /&gt;
     Shock Sensor Value Full On : 0.2    # percent change above which mu is held at scale factor * mu&lt;br /&gt;
     Shock Sensor Scale Factor : 100.0     # scale factor on mu or other sensor&lt;br /&gt;
     Wall Distance to Shield Shock Sensor : -1     # The above won't be applied within this wall distance (-1 ignores this condition)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''errors'' and ''DCqpt'' fields must be output to the restart file for the example below to work; saving these two fields is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;24&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;24-procs_case/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;24-procs_case/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;1&amp;quot; &lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;2010&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;50&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_N2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_O2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temperature&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;6&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;errors&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;nu&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;DCqpt&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;DCqpt1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2083</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2083"/>
				<updated>2024-08-29T14:00:21Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Simulation Inputs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
In this section, details of the meshing and model attributes are provided. Currently, capability exists to simulate a gas with number of species (''nsp'') &amp;amp;le; 5.&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the SimModeler utility, the mesh can either be created or loaded from an existing .cas file. Below are the steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch SimModeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures in conjunction with reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - set to zero for adiabatic wall boundary condition&lt;br /&gt;
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set these values so that the flow is initialized at Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of turbulent eddy viscosity&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with solver.inp values to compute the mole fraction of each species&lt;br /&gt;
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Capability is included for handling multi-species flows up to number of species ''nsp'' equal to 5.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;# PHASTA_HYP Version 1 Input File&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 1000&lt;br /&gt;
     Time Step Size: 1e-8&lt;br /&gt;
&lt;br /&gt;
#       Limit instructions : switch min max -- change switch from zero to activate&lt;br /&gt;
	Limit Temperature : 0 0 0 # also limits vibrational temperature&lt;br /&gt;
	Limit u1 : 0 0 0 &lt;br /&gt;
	Limit u2 : 0 0 0 &lt;br /&gt;
	Limit u3 : 0 -1 1&lt;br /&gt;
	Limit rho1 : 0 1e-20 0 &lt;br /&gt;
	Limit rho2 : 0 1e-20 0 &lt;br /&gt;
	Limit rho3 : 0 1e-20 0 &lt;br /&gt;
	Limit rho4 : 0 1e-20 0 &lt;br /&gt;
        Limit rho5 : 0 1e-20 0 &lt;br /&gt;
        Limit Scalar 1 : 0 0 0 &lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 100  &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscous Control: Viscous   #None   &lt;br /&gt;
     Shear Law: Wilke's Mixing Rule  # ishear=1  =&amp;gt; matflag(2,n)&lt;br /&gt;
     Bulk Viscosity Law: Constant Bulk Viscosity # ibulk=0 =&amp;gt; matflag(3,n)&lt;br /&gt;
     Conductivity Law: Wilke's Mixing Rule    # icond=1 =&amp;gt; matflag(4,n)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
     Species IDs: 1 2 3 4 5             # ispecIDs&lt;br /&gt;
                                        # IDs numbered in order: N2=1,O2=2,NO=3,N=4,O=5&lt;br /&gt;
     Inflow Concentrations: 0.78997 0.21 1e-5 1e-5 1e-5 # concinf&lt;br /&gt;
     Allow reactions: False             # ichem = 1 if True. right now can only be used for nsp=5&lt;br /&gt;
     Temperature threshold: 2000        # Tth--below which, reactions ignored. 2000 K is from Chalot 1990&lt;br /&gt;
     Equilibrium Tolerance: 1e-5        # chemtol (max species production rate for reactions to equilibrium in IC's/BC's)&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Vibrational Temperature BC: 1      # TvibBC. must be a positive number. at any BC with T set:&lt;br /&gt;
                                        # if greater than 5, then Tvib = TvibBC&lt;br /&gt;
                                        # else, then Tvib = TvibBC*T&lt;br /&gt;
     Vibrational Temperature IC: -1     # TvibIC (if negative, Tvib = T initially. if positive, value here is used)&lt;br /&gt;
     Restart from primitive file: 0     # 1 if restarting from file from primitive code&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#{&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG 		             # alternate is Galerkin only for compressible&lt;br /&gt;
     Time Integration Rule: First Order      # 1st Order sets rinf(1) -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct, best value is problem-dependent&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Shock Sensor : False # i_ss_on = 1 if True&lt;br /&gt;
     Shock Sensor Value Full Off : 0.05  # % change of temperature across element above which we ramp mu until&lt;br /&gt;
     Shock Sensor Value Full On : 0.2    # percent change after which we hold  at scale factor * mu&lt;br /&gt;
     Shock Sensor Scale Factor : 100.0     # scale factor on mu or other sensor&lt;br /&gt;
     Wall Distance to Shield Shock Sensor : -1     # The above won't be applied within this wall distance (-1 ignores this condition)&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
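The Vibrational Temperature BC rule in the REACTING FLOW block above can be sketched as follows. This is a minimal Python illustration of the branch logic described in the comments; the function name is hypothetical, not a PHASTA routine.&lt;br /&gt;

```python
# Hedged sketch of the TvibBC rule: at any boundary with T set, TvibBC
# acts as an absolute temperature when it exceeds 5, and as a ratio on
# the set temperature otherwise. Names here are illustrative only.
def vibrational_bc(T, TvibBC):
    if TvibBC > 5.0:
        return TvibBC       # treated as an absolute Tvib in kelvin
    return TvibBC * T       # treated as a ratio on the set temperature

print(vibrational_bc(300.0, 1.0))    # ratio branch: 1.0 * 300.0 = 300.0
print(vibrational_bc(300.0, 800.0))  # absolute branch: 800.0
```

With the default TvibBC of 1, the boundary vibrational temperature simply equals the set translational-rotational temperature.&lt;br /&gt;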
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided below to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''errors'' and ''DCqpt'' fields must be written to the restart file for the example to work; saving these two fields is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;24&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;24-procs_case/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;24-procs_case/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;1&amp;quot; &lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;2010&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;50&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_N2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_O2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temperature&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;6&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;errors&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;nu&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;DCqpt&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;DCqpt1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2082</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2082"/>
				<updated>2024-08-29T13:52:25Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Simulation Inputs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
In this section, details of the meshing and model attributes are provided. Currently, capability exists to simulate a gas with number of species (''nsp'') &amp;amp;le; 5.&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the Simmodeler utility, the mesh can either be created or loaded from an existing .cas file. Below are steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch Simmodeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures in conjunction with reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - set to zero for adiabatic wall boundary condition&lt;br /&gt;
&lt;br /&gt;
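The pressure boundary condition's use of Dalton's Law can be sketched as follows. This is a hedged illustration (not PHASTA source); the variable names are chosen for clarity and the mole fractions are example reference values of the kind specified in solver.inp.&lt;br /&gt;

```python
# Sketch: combining a static pressure BC with reference mole fractions
# via Dalton's law of partial pressures. Illustrative names only.
import math

mole_frac = [0.78997, 0.21, 1e-5, 1e-5, 1e-5]   # [N2, O2, NO, N, O]
p_static = 1.0e3                                # pressure BC value, Pa

# Dalton's law: the total pressure is the sum of the partial pressures,
# so each species contributes p_i = x_i * p.
partial = [x * p_static for x in mole_frac]

# the partial pressures recover the total since the fractions sum to one
assert math.isclose(sum(partial), p_static)
```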
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set such that flow is initialized Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of turbulent eddy viscosity&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with solver.inp values to compute the mole fraction of each species&lt;br /&gt;
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Capability is included for handling multi-species flows up to number of species ''nsp'' equal to 5.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;# PHASTA_HYP Version 1 Input File&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 100  &lt;br /&gt;
     Time Step Size: 3.2339656949004e-08&lt;br /&gt;
&lt;br /&gt;
     Limit Density: 0 0.01 0.1         # solution limiting on variables [switch, min, max]&lt;br /&gt;
     Limit u1: 0 0. 2.8e3&lt;br /&gt;
     Limit u2: 0 0 0&lt;br /&gt;
     Limit u3: 0 0 0&lt;br /&gt;
     Limit Temperature: 0 230 3500     # also limits vibrational temperature&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 250   &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscosity: 1.95508431704028e-5                 # dynamic viscosity (only used if nsp.eq.99 for air mixture)&lt;br /&gt;
     Thermal Conductivity: 26.6843390135759e-3      # only used if nsp.eq.99 for air mixture&lt;br /&gt;
     Viscous Control: None    #Viscous   #None      &lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
#.......currently only allowing 1&amp;lt;=nsp&amp;lt;=5. &lt;br /&gt;
#.........and make sure scalars passed from simmodeler are in correct slots&lt;br /&gt;
#&lt;br /&gt;
     Species IDs: 1                     # specIDs, array length must equal nsp&lt;br /&gt;
                                        # (IDs numbered in order: N2,O2,NO,N,O;&lt;br /&gt;
                                        #  ID 99 for an air mixture)&lt;br /&gt;
     # e.g. Species IDs: 1 3      &amp;lt;&amp;lt;&amp;lt; would give N2 and NO&lt;br /&gt;
     Ref Entropy Conditions: 1e3 230 230 0   #[P0,T0,T0vib,S0]&lt;br /&gt;
     Ref Entropy Mole Frac: 1 0 0 0 0   # composition of gas used as reference entropy condition&lt;br /&gt;
                                        #  &amp;gt;&amp;gt; must sum to one&lt;br /&gt;
     Allow reactions: False             # chemical reactions, ichem = 1 if True&lt;br /&gt;
     Chemical heat release: False       # chemical heat release, iqtot = 1 if True&lt;br /&gt;
     Limit on reaction step: 0.00001    # rlim (limits change in species cs per step)&lt;br /&gt;
     Tolerance to global time: 0.01     # ttol, chem solver is advanced in time until time diff &amp;lt; ttol*dt_global&lt;br /&gt;
     Temperature threshold: 500         # Tth (below which, reactions ignored)&lt;br /&gt;
     Reaction solver MIN steps: 5       # nstepmin, minimum number of time steps&lt;br /&gt;
     Reaction solver MAX steps: 100     # nstepmax, maximum number of time steps&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Exclude vib energy: True           # ivib0 = 1 if True&lt;br /&gt;
     Exclude vib source: True           # ivibS0 = 1 if True&lt;br /&gt;
     Tvib BC Ratio: 1.0			# at any BC with T set, Tvib = tvibBC * T&lt;br /&gt;
     Vibrational Temperature IC: -1     # set negative value to force Tvib = T, otherwise positive value set as IC value&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#{&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG 		             # alternate is Galerkin only for compressible&lt;br /&gt;
     Time Integration Rule: First Order      # 1st Order sets rinf(1) -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct, best value is problem-dependent&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Discontinuity Capturing: DC-quadratic    # Current Options: DC-mallet, DC-minimum, DC-quadratic, DC-yzbeta&lt;br /&gt;
     Multiplier for DC factor: 1              # scales DC variable in e3DC&lt;br /&gt;
     Discontinuity Capturing Scheme: 1        # 0: discontinuous, 1: continuous (L2 projection)&lt;br /&gt;
     Include Source Term in DC: 0             # 1: sets idcSRC to 1&lt;br /&gt;
     Write DCqpt: 0                           # if &amp;gt; 0, writes out data at a quadrature point iDCqpt&lt;br /&gt;
&lt;br /&gt;
#----Parameters for YZBeta DC operator ----&lt;br /&gt;
     Beta Value: 1                       # 1: smoother , 2: sharper, 12: compromise between 1 and 2&lt;br /&gt;
     YZB Farfield Conditions: 1e5 2119 10 10 300 # [Pressure, X-Vel, Y-Vel, Z-Vel, Temperature] &lt;br /&gt;
     YZB Farfield Mole Frac: 1 0 0 0 0   # mole fractions at reference condition&lt;br /&gt;
                                         # [xN2,xO2,xNO,xN,xO] &lt;br /&gt;
                                         # must sum to 1, must be length 5 &lt;br /&gt;
     Include Umod Term: 1                # 0: no, 1: yes&lt;br /&gt;
     Mach Adjustment Bm Value: 1         # 0: off,1: smoother shock, 2: sharper shock&lt;br /&gt;
     Mach Adjustment Bj Value: 6         # 0: off,1: smoother shock, 2: sharper shock&lt;br /&gt;
     Include Time Term in Z: 1           # 0: no, 1: yes&lt;br /&gt;
#------------------------------------------&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
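The two-temperature blend named in the qta comment above (Tvib**qta*T**(1-qta)) can be illustrated with a short sketch. The function name is hypothetical, not a PHASTA routine.&lt;br /&gt;

```python
# Hedged sketch of the two-temperature coefficient qta: the effective
# temperature blends Tvib and T geometrically. Illustrative names only.
import math

def blended_temperature(T, Tvib, qta=0.5):
    # qta = 0 recovers T, qta = 1 recovers Tvib; the default of 0.5
    # gives the geometric mean of the two temperatures
    return Tvib**qta * T**(1.0 - qta)

# in thermal equilibrium (Tvib = T) the blend reduces to T itself
assert math.isclose(blended_temperature(300.0, 300.0), 300.0)
```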
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided below to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''errors'' and ''DCqpt'' fields must be written to the restart file for the example to work; saving these two fields is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
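The start_index_in_phasta_array values in the example below follow from the cumulative component counts of the preceding solution fields. A short sketch makes the bookkeeping explicit; the field list is copied from the example, while the helper itself is illustrative rather than part of the PHASTA/ParaView reader.&lt;br /&gt;

```python
# Sketch: each solution field starts at the running total of components
# consumed by the fields before it in the restart array.
solution_fields = [("rho_N2", 1), ("rho_O2", 1), ("velocity", 3),
                   ("temp_vib", 1), ("temperature", 1)]

offsets = {}
start = 0
for name, ncomp in solution_fields:
    offsets[name] = start   # value used for start_index_in_phasta_array
    start += ncomp

# velocity occupies slots 2-4, so temp_vib lands at 5 and temperature at 6,
# matching the example flow.pht entries
assert offsets["velocity"] == 2 and offsets["temperature"] == 6
```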
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;24&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;24-procs_case/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;24-procs_case/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;1&amp;quot; &lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;2010&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;50&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_N2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_O2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temperature&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;6&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;errors&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;nu&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;DCqpt&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;DCqpt1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2081</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2081"/>
				<updated>2024-08-29T13:51:10Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Initial Conditions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
In this section, details of the meshing and model attributes are provided. Currently, capability exists to simulate a gas with number of species (''nsp'') &amp;amp;le; 5.&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the Simmodeler utility, the mesh can either be created or loaded from an existing .cas file. Below are steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch Simmodeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures in conjunction with reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - set to zero for adiabatic wall boundary condition&lt;br /&gt;
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set such that flow is initialized Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of turbulent eddy viscosity&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with solver.inp values to compute the mole fraction of each species&lt;br /&gt;
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Capability is included for handling multi-species flows up to number of species ''nsp'' equal to 5. This feature has been tested and shown to work for five species mass conservation equations at one time. However, when chemical species production governed by the finite-rate chemistry module is active, the solution becomes unstable. Additional work on this feature is needed. &lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;# PHASTA_HYP Version 1 Input File&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 100  &lt;br /&gt;
     Time Step Size: 3.2339656949004e-08&lt;br /&gt;
&lt;br /&gt;
     Limit Density: 0 0.01 0.1         # solution limiting on variables [switch, min, max]&lt;br /&gt;
     Limit u1: 0 0. 2.8e3&lt;br /&gt;
     Limit u2: 0 0 0&lt;br /&gt;
     Limit u3: 0 0 0&lt;br /&gt;
     Limit Temperature: 0 230 3500     # also limits vibrational temperature&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 250   &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscosity: 1.95508431704028e-5                 # dynamic viscosity (only used if nsp.eq.99 for air mixture)&lt;br /&gt;
     Thermal Conductivity: 26.6843390135759e-3      # only used if nsp.eq.99 for air mixture&lt;br /&gt;
     Viscous Control: None    #Viscous   #None      &lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
#.......currently only allowing 1&amp;lt;=nsp&amp;lt;=5. &lt;br /&gt;
#.........and make sure scalars passed from simmodeler are in correct slots&lt;br /&gt;
#&lt;br /&gt;
     Species IDs: 1                     # specIDs, array length must equal nsp&lt;br /&gt;
                                        # (IDs numbered in order: N2,O2,NO,N,O;&lt;br /&gt;
                                        #  ID 99 for an air mixture)&lt;br /&gt;
     # e.g. Species IDs: 1 3      &amp;lt;&amp;lt;&amp;lt; would give N2 and NO&lt;br /&gt;
     Ref Entropy Conditions: 1e3 230 230 0   #[P0,T0,T0vib,S0]&lt;br /&gt;
     Ref Entropy Mole Frac: 1 0 0 0 0   # composition of gas used as reference entropy condition&lt;br /&gt;
                                        #  &amp;gt;&amp;gt; must sum to one&lt;br /&gt;
     Allow reactions: False             # chemical reactions, ichem = 1 if True&lt;br /&gt;
     Chemical heat release: False       # chemical heat release, iqtot = 1 if True&lt;br /&gt;
     Limit on reaction step: 0.00001    # rlim (limits change in species cs per step)&lt;br /&gt;
     Tolerance to global time: 0.01     # ttol, chem solver is advanced in time until time diff &amp;lt; ttol*dt_global&lt;br /&gt;
     Temperature threshold: 500         # Tth (below which, reactions ignored)&lt;br /&gt;
     Reaction solver MIN steps: 5       # nstepmin, minimum number of time steps&lt;br /&gt;
     Reaction solver MAX steps: 100     # nstepmax, maximum number of time steps&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Exclude vib energy: True           # ivib0 = 1 if True&lt;br /&gt;
     Exclude vib source: True           # ivibS0 = 1 if True&lt;br /&gt;
     Tvib BC Ratio: 1.0			# at any BC with T set, Tvib = tvibBC * T&lt;br /&gt;
     Vibrational Temperature IC: -1     # set negative value to force Tvib = T, otherwise positive value set as IC value&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#{&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG 		             # alternate is Galerkin only for compressible&lt;br /&gt;
     Time Integration Rule: First Order      # 1st Order sets rinf(1) -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct, best value is problem-dependent&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Discontinuity Capturing: DC-quadratic    # Current Options: DC-mallet, DC-minimum, DC-quadratic, DC-yzbeta&lt;br /&gt;
     Multiplier for DC factor: 1              # scales DC variable in e3DC&lt;br /&gt;
     Discontinuity Capturing Scheme: 1        # 0: discontinuous, 1: continuous (L2 projection)&lt;br /&gt;
     Include Source Term in DC: 0             # 1: sets idcSRC to 1&lt;br /&gt;
     Write DCqpt: 0                           # if &amp;gt; 0, writes out data at a quadrature point iDCqpt&lt;br /&gt;
&lt;br /&gt;
#----Parameters for YZBeta DC operator ----&lt;br /&gt;
     Beta Value: 1                       # 1: smoother , 2: sharper, 12: compromise between 1 and 2&lt;br /&gt;
     YZB Farfield Conditions: 1e5 2119 10 10 300 # [Pressure, X-Vel, Y-Vel, Z-Vel, Temperature] &lt;br /&gt;
     YZB Farfield Mole Frac: 1 0 0 0 0   # mole fractions at reference condition&lt;br /&gt;
                                         # [xN2,xO2,xNO,xN,xO] &lt;br /&gt;
                                         # must sum to 1, must be length 5 &lt;br /&gt;
     Include Umod Term: 1                # 0: no, 1: yes&lt;br /&gt;
     Mach Adjustment Bm Value: 1         # 0: off,1: smoother shock, 2: sharper shock&lt;br /&gt;
     Mach Adjustment Bj Value: 6         # 0: off,1: smoother shock, 2: sharper shock&lt;br /&gt;
     Include Time Term in Z: 1           # 0: no, 1: yes&lt;br /&gt;
#------------------------------------------&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
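To make the two vibrational-temperature options above concrete, here is a minimal sketch (a hypothetical illustration, not PHASTA source; the function names are invented): the Two Temperature coefficient ''qta'' blends the vibrational and translational-rotational temperatures as Tvib**qta * T**(1-qta), and Tvib BC Ratio scales Tvib at any boundary where T is set.&lt;br /&gt;

```python
# Hypothetical illustration of two solver.inp options (not PHASTA source).

def blended_temperature(T, Tvib, qta=0.5):
    """Two-temperature blend from the comment above: Tvib**qta * T**(1-qta)."""
    return Tvib ** qta * T ** (1.0 - qta)

def tvib_at_boundary(T_bc, tvib_bc_ratio=1.0):
    """At any BC with T set, Tvib = tvibBC * T."""
    return tvib_bc_ratio * T_bc

# With qta = 0.5 the blend is the geometric mean of the two temperatures:
print(blended_temperature(400.0, 900.0, qta=0.5))  # geometric mean of 400 and 900
print(tvib_at_boundary(300.0))                     # equals T_bc when the ratio is 1
```

With the default Tvib BC Ratio of 1.0, the vibrational temperature simply tracks the imposed wall temperature.&lt;br /&gt;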
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''errors'' and ''DCqpt'' fields must be written to the restart file for the example below to work; saving these two fields is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;24&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;24-procs_case/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;24-procs_case/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;1&amp;quot; &lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;2010&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;50&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_N2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_O2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temperature&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;6&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;errors&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;nu&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;DCqpt&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;DCqpt1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2080</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2080"/>
				<updated>2024-08-29T13:49:39Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Boundary Conditions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
This section provides details of the meshing and model attributes. Currently, the code can simulate a gas with up to five species (''nsp'' &amp;amp;le; 5).&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the SimModeler utility, the mesh can either be created directly or loaded from an existing .cas file. Below are the steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch SimModeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures in conjunction with reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - set to zero for adiabatic wall boundary condition&lt;br /&gt;
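The role of the pressure BC in setting the gas composition can be sketched as follows (an illustrative example, not PHASTA source; the function name is invented): by Dalton's law, each species contributes a partial pressure equal to its mole fraction times the total pressure.&lt;br /&gt;

```python
# Illustrative sketch (not PHASTA source): Dalton's law of partial pressures.
# The static pressure set on the boundary is split among species using the
# reference mole fractions from solver.inp, ordered [xN2, xO2, xNO, xN, xO].

def partial_pressures(p_total, mole_fractions):
    """Return p_i = x_i * p_total for each species; fractions must sum to one."""
    assert round(sum(mole_fractions), 9) == 1.0
    return [x * p_total for x in mole_fractions]

# Pure N2 at 1 kPa:
print(partial_pressures(1.0e3, [1.0, 0.0, 0.0, 0.0, 0.0]))  # [1000.0, 0.0, 0.0, 0.0, 0.0]
```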
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set these values so that the flow is initialized at Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of species 2 mole fraction&lt;br /&gt;
* initial scalar_2 - Initial value of species 3 mole fraction&lt;br /&gt;
* initial scalar_3 - Initial value of species 4 mole fraction&lt;br /&gt;
* initial scalar_4 - Initial value of species 5 mole fraction&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with the initial scalar values to compute the mole fraction of species 1&lt;br /&gt;
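The relationship between the scalar ICs and the species 1 mole fraction can be sketched as follows (a hypothetical illustration, not PHASTA source):&lt;br /&gt;

```python
# Hypothetical sketch (not PHASTA source): scalar_1..scalar_4 set the mole
# fractions of species 2..5 directly; species 1 receives the remainder so
# that the mole fractions sum to one.

def species1_mole_fraction(scalar_ics):
    x1 = 1.0 - sum(scalar_ics)
    assert x1 >= 0.0  # the scalar ICs must not sum to more than one
    return x1

# e.g. 25% of species 2, none of species 3-5:
print(species1_mole_fraction([0.25, 0.0, 0.0, 0.0]))  # 0.75
```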
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. The code can handle multi-species flows with up to ''nsp'' = 5 species, and solving five species mass conservation equations simultaneously has been tested and shown to work. However, when chemical species production governed by the finite-rate chemistry module is active, the solution becomes unstable; additional work on this feature is needed.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;# PHASTA_HYP Version 1 Input File&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 100  &lt;br /&gt;
     Time Step Size: 3.2339656949004e-08&lt;br /&gt;
&lt;br /&gt;
     Limit Density: 0 0.01 0.1         # solution limiting on variables [switch, min, max]&lt;br /&gt;
     Limit u1: 0 0. 2.8e3&lt;br /&gt;
     Limit u2: 0 0 0&lt;br /&gt;
     Limit u3: 0 0 0&lt;br /&gt;
     Limit Temperature: 0 230 3500     # also limits vibrational temperature&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 250   &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscosity: 1.95508431704028e-5                 # dynamic viscosity (only used if nsp.eq.99 for air mixture)&lt;br /&gt;
     Thermal Conductivity: 26.6843390135759e-3      # only used if nsp.eq.99 for air mixture&lt;br /&gt;
     Viscous Control: None    #Viscous   #None      &lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
#.......currently only allowing 1&amp;lt;=nsp&amp;lt;=5. &lt;br /&gt;
#.........and make sure scalars passed from simmodeler are in correct slots&lt;br /&gt;
#&lt;br /&gt;
     Species IDs: 1             # specIDs, length of array must equal&lt;br /&gt;
                                        #    nsp (IDs numbered in order:&lt;br /&gt;
                                        #            N2,O2,NO,N,O&lt;br /&gt;
                                        #   ID:99 for Air molecule&lt;br /&gt;
     # e.g. Species IDs: 1 3      &amp;lt;&amp;lt;&amp;lt; would give N2 and NO&lt;br /&gt;
     Ref Entropy Conditions: 1e3 230 230 0   #[P0,T0,T0vib,S0]&lt;br /&gt;
     Ref Entropy Mole Frac: 1 0 0 0 0   # composition of gas used as reference entropy condition&lt;br /&gt;
                                        #  &amp;gt;&amp;gt; must sum to one&lt;br /&gt;
     Allow reactions: False             # chemical reactions, ichem = 1 if True&lt;br /&gt;
     Chemical heat release: False       # chemical heat release, iqtot = 1 if True&lt;br /&gt;
     Limit on reaction step: 0.00001    # rlim (limits change in species cs per step)&lt;br /&gt;
     Tolerance to global time: 0.01     # ttol, chem solver is advanced in time until time diff &amp;lt; ttol*dt_global&lt;br /&gt;
     Temperature threshold: 500         # Tth (below which, reactions ignored)&lt;br /&gt;
     Reaction solver MIN steps: 5       # nstepmin, minimum number of time steps&lt;br /&gt;
     Reaction solver MAX steps: 100     # nstepmax, maximum number of time steps&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Exclude vib energy: True           # ivib0 = 1 if True&lt;br /&gt;
     Exclude vib source: True           # ivibS0 = 1 if True&lt;br /&gt;
     Tvib BC Ratio: 1.0			# at any BC with T set, Tvib = tvibBC * T&lt;br /&gt;
     Vibrational Temperature IC: -1     # set negative value to force Tvib = T, otherwise positive value set as IC value&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG 		             # alternate is Galerkin only for compressible&lt;br /&gt;
     Time Integration Rule: First Order      # 1st Order sets rinf(1) -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct  best value depends&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Discontinuity Capturing: DC-quadratic    # Current Options: DC-mallet, DC-minimum, DC-quadratic, DC-yzbeta&lt;br /&gt;
     Multiplier for DC factor: 1              # scales DC variable in e3DC&lt;br /&gt;
     Discontinuity Capturing Scheme: 1        # 0: discontinuous, 1: continuous (L2 projection)&lt;br /&gt;
     Include Source Term in DC: 0             # 1: sets idcSRC to 1&lt;br /&gt;
     Write DCqpt: 0                           # if &amp;gt; 0, writes out data at a quadrature point iDCqpt&lt;br /&gt;
&lt;br /&gt;
#----Parameters for YZBeta DC operator ----&lt;br /&gt;
     Beta Value: 1                       # 1: smoother , 2: sharper, 12: compromise between 1 and 2&lt;br /&gt;
     YZB Farfield Conditions: 1e5 2119 10 10 300 # [Pressure, X-Vel, Y-Vel, Z-Vel, Temperature] &lt;br /&gt;
     YZB Farfield Mole Frac: 1 0 0 0 0   # mole fractions at reference condition&lt;br /&gt;
                                         # [xN2,xO2,xNO,xN,xO] &lt;br /&gt;
                                         # must sum to 1, must be length 5 &lt;br /&gt;
     Include Umod Term: 1                # 0: no, 1: yes&lt;br /&gt;
     Mach Adjustment Bm Value: 1         # 0: off,1: smoother shock, 2: sharper shock&lt;br /&gt;
     Mach Adjustment Bj Value: 6         # 0: off,1: smoother shock, 2: sharper shock&lt;br /&gt;
     Include Time Term in Z: 1           # 0: no, 1: yes&lt;br /&gt;
#------------------------------------------&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
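The refinement flagging controlled by ''Error Indicator Threshold'' in the OUTPUT CONTROL block above can be sketched as follows (a hypothetical illustration, not PHASTA source):&lt;br /&gt;

```python
# Hypothetical sketch (not PHASTA source): nodes whose error indicator
# exceeds thresh * err_max are flagged as 1, i.e. identified for refinement.

def flag_for_refinement(errors, thresh=0.01):
    err_max = max(errors)
    return [1 if err > thresh * err_max else 0 for err in errors]

print(flag_for_refinement([0.0, 0.002, 0.5, 1.0]))  # [0, 0, 1, 1]
```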
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''errors'' and ''DCqpt'' fields must be written to the restart file for the example below to work; saving these two fields is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;24&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;24-procs_case/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;24-procs_case/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;1&amp;quot; &lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;2010&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;50&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_N2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_O2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temperature&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;6&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;errors&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;nu&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;DCqpt&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;DCqpt1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2079</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2079"/>
				<updated>2024-08-29T13:49:25Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Boundary Conditions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
This section provides details of the meshing and model attributes. Currently, the code can simulate a gas with up to five species (''nsp'' &amp;amp;le; 5).&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the SimModeler utility, the mesh can either be created directly or loaded from an existing .cas file. Below are the steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch SimModeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute mole fractions of each species of the gas with Dalton's Law of partial pressures using reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - set to zero for adiabatic wall boundary condition&lt;br /&gt;
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set these values so that the flow is initialized at Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of species 2 mole fraction&lt;br /&gt;
* initial scalar_2 - Initial value of species 3 mole fraction&lt;br /&gt;
* initial scalar_3 - Initial value of species 4 mole fraction&lt;br /&gt;
* initial scalar_4 - Initial value of species 5 mole fraction&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with the initial scalar values to compute the mole fraction of species 1&lt;br /&gt;
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. The code can handle multi-species flows with up to ''nsp'' = 5 species, and solving five species mass conservation equations simultaneously has been tested and shown to work. However, when chemical species production governed by the finite-rate chemistry module is active, the solution becomes unstable; additional work on this feature is needed.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;# PHASTA_HYP Version 1 Input File&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 100  &lt;br /&gt;
     Time Step Size: 3.2339656949004e-08&lt;br /&gt;
&lt;br /&gt;
     Limit Density: 0 0.01 0.1         # solution limiting on variables [switch, min, max]&lt;br /&gt;
     Limit u1: 0 0. 2.8e3&lt;br /&gt;
     Limit u2: 0 0 0&lt;br /&gt;
     Limit u3: 0 0 0&lt;br /&gt;
     Limit Temperature: 0 230 3500     # also limits vibrational temperature&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 250   &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscosity: 1.95508431704028e-5                 # dynamic viscosity (only used if nsp.eq.99 for air mixture)&lt;br /&gt;
     Thermal Conductivity: 26.6843390135759e-3      # only used if nsp.eq.99 for air mixture&lt;br /&gt;
     Viscous Control: None    #Viscous   #None      &lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
#.......currently only allowing 1&amp;lt;=nsp&amp;lt;=5. &lt;br /&gt;
#.........and make sure scalars passed from simmodeler are in correct slots&lt;br /&gt;
#&lt;br /&gt;
     Species IDs: 1             # specIDs, length of array must equal&lt;br /&gt;
                                        #    nsp (IDs numbered in order:&lt;br /&gt;
                                        #            N2,O2,NO,N,O&lt;br /&gt;
                                        #   ID:99 for Air molecule&lt;br /&gt;
     # e.g. Species IDs: 1 3      &amp;lt;&amp;lt;&amp;lt; would give N2 and NO&lt;br /&gt;
     Ref Entropy Conditions: 1e3 230 230 0   #[P0,T0,T0vib,S0]&lt;br /&gt;
     Ref Entropy Mole Frac: 1 0 0 0 0   # composition of gas used as reference entropy condition&lt;br /&gt;
                                        #  &amp;gt;&amp;gt; must sum to one&lt;br /&gt;
     Allow reactions: False             # chemical reactions, ichem = 1 if True&lt;br /&gt;
     Chemical heat release: False       # chemical heat release, iqtot = 1 if True&lt;br /&gt;
     Limit on reaction step: 0.00001    # rlim (limits change in species cs per step)&lt;br /&gt;
     Tolerance to global time: 0.01     # ttol, chem solver is advanced in time until time diff &amp;lt; ttol*dt_global&lt;br /&gt;
     Temperature threshold: 500         # Tth (below which, reactions ignored)&lt;br /&gt;
     Reaction solver MIN steps: 5       # nstepmin, minimum number of time steps&lt;br /&gt;
     Reaction solver MAX steps: 100     # nstepmax, maximum number of time steps&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Exclude vib energy: True           # ivib0 = 1 if True&lt;br /&gt;
     Exclude vib source: True           # ivibS0 = 1 if True&lt;br /&gt;
     Tvib BC Ratio: 1.0			# at any BC with T set, Tvib = tvibBC * T&lt;br /&gt;
     Vibrational Temperature IC: -1     # set negative value to force Tvib = T, otherwise positive value set as IC value&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG 		             # alternate is Galerkin only for compressible&lt;br /&gt;
     Time Integration Rule: First Order      # 1st Order sets rinf(1) -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct  best value depends&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Discontinuity Capturing: DC-quadratic    # Current Options: DC-mallet, DC-minimum, DC-quadratic, DC-yzbeta&lt;br /&gt;
     Multiplier for DC factor: 1              # scales DC variable in e3DC&lt;br /&gt;
     Discontinuity Capturing Scheme: 1        # 0: discontinuous, 1: continuous (L2 projection)&lt;br /&gt;
     Include Source Term in DC: 0             # 1: sets idcSRC to 1&lt;br /&gt;
     Write DCqpt: 0                           # if &amp;gt; 0, writes out data at a quadrature point iDCqpt&lt;br /&gt;
&lt;br /&gt;
#----Parameters for YZBeta DC operator ----&lt;br /&gt;
     Beta Value: 1                       # 1: smoother , 2: sharper, 12: compromise between 1 and 2&lt;br /&gt;
     YZB Farfield Conditions: 1e5 2119 10 10 300 # [Pressure, X-Vel, Y-Vel, Z-Vel, Temperature] &lt;br /&gt;
     YZB Farfield Mole Frac: 1 0 0 0 0   # mole fractions at reference condition&lt;br /&gt;
                                         # [xN2,xO2,xNO,xN,xO] &lt;br /&gt;
                                         # must sum to 1, must be length 5 &lt;br /&gt;
     Include Umod Term: 1                # 0: no, 1: yes&lt;br /&gt;
     Mach Adjustment Bm Value: 1         # 0: off,1: smoother shock, 2: sharper shock&lt;br /&gt;
     Mach Adjustment Bj Value: 6         # 0: off,1: smoother shock, 2: sharper shock&lt;br /&gt;
     Include Time Term in Z: 1           # 0: no, 1: yes&lt;br /&gt;
#------------------------------------------&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''errors'' and ''DCqpt'' fields must be written to the restart file for the example below to work; saving these two fields is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;24&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;24-procs_case/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;24-procs_case/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;1&amp;quot; &lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;2010&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;50&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_N2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_O2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temperature&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;6&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;errors&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;nu&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;DCqpt&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;DCqpt1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2078</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2078"/>
				<updated>2024-08-29T13:49:06Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Boundary Conditions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
This section provides details of the meshing and model attributes. Currently, the code can simulate a gas mixture with up to five species (''nsp'' &amp;amp;le; 5).&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the SimModeler utility, the mesh can be either created directly or loaded from an existing .cas file. Below are the steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch SimModeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used with Dalton's Law of partial pressures to compute the species mole fractions of the gas from the reference mole fractions specified in solver.inp&lt;br /&gt;
* heat flux - set to zero for adiabatic wall boundary condition&lt;br /&gt;
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set these values such that the flow is initialized at Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of species 2 mole fraction&lt;br /&gt;
* initial scalar_2 - Initial value of species 3 mole fraction&lt;br /&gt;
* initial scalar_3 - Initial value of species 4 mole fraction&lt;br /&gt;
* initial scalar_4 - Initial value of species 5 mole fraction&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with the initial scalar values to compute the mole fraction of species 1&lt;br /&gt;
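&lt;br /&gt;
As an illustrative sketch of that computation (the scalar values here are hypothetical), the species 1 mole fraction follows from requiring the mole fractions to sum to 1:&lt;br /&gt;
&lt;br /&gt;
 x1 = 1 - (x2 + x3 + x4 + x5)&lt;br /&gt;
    = 1 - (0.20 + 0.05 + 0.01 + 0.01)&lt;br /&gt;
    = 0.73&lt;br /&gt;
&lt;br /&gt;
where x2 through x5 correspond to initial scalar_1 through scalar_4.&lt;br /&gt;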
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Capability is included for handling multi-species flows with up to ''nsp'' = 5 species. This feature has been tested and shown to work when solving five species mass conservation equations simultaneously. However, when chemical species production governed by the finite-rate chemistry module is active, the solution becomes unstable, so additional work on this feature is needed. &lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;# PHASTA_HYP Version 1 Input File&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 100  &lt;br /&gt;
     Time Step Size: 3.2339656949004e-08&lt;br /&gt;
&lt;br /&gt;
     Limit Density: 0 0.01 0.1         # solution limiting on variables [switch, min, max]&lt;br /&gt;
     Limit u1: 0 0. 2.8e3&lt;br /&gt;
     Limit u2: 0 0 0&lt;br /&gt;
     Limit u3: 0 0 0&lt;br /&gt;
     Limit Temperature: 0 230 3500     # also limits vibrational temperature&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 250   &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscosity: 1.95508431704028e-5                 # dynamic viscosity (only used if nsp.eq.99 for air mixture)&lt;br /&gt;
     Thermal Conductivity: 26.6843390135759e-3      # only used if nsp.eq.99 for air mixture&lt;br /&gt;
     Viscous Control: None    #Viscous   #None      &lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
#.......currently only allowing 1&amp;lt;=nsp&amp;lt;=5. &lt;br /&gt;
#.........and make sure scalars passed from simmodeler are in correct slots&lt;br /&gt;
#&lt;br /&gt;
     Species IDs: 1             # specIDs, length of array must equal&lt;br /&gt;
                                        #    nsp (IDs numbered in order:&lt;br /&gt;
                                        #            N2,O2,NO,N,O)&lt;br /&gt;
                                        #   ID:99 for Air molecule&lt;br /&gt;
     # e.g. Species IDs: 1 3      &amp;lt;&amp;lt;&amp;lt; would give N2 and NO&lt;br /&gt;
     Ref Entropy Conditions: 1e3 230 230 0   #[P0,T0,T0vib,S0]&lt;br /&gt;
     Ref Entropy Mole Frac: 1 0 0 0 0   # composition of gas used as reference entropy condition&lt;br /&gt;
                                        #  &amp;gt;&amp;gt; must sum to one&lt;br /&gt;
     Allow reactions: False             # chemical reactions, ichem = 1 if True&lt;br /&gt;
     Chemical heat release: False       # chemical heat release, iqtot = 1 if True&lt;br /&gt;
     Limit on reaction step: 0.00001    # rlim (limits change in species cs per step)&lt;br /&gt;
     Tolerance to global time: 0.01     # ttol, chem solver is advanced in time until time diff &amp;lt; ttol*dt_global&lt;br /&gt;
     Temperature threshold: 500         # Tth (below which, reactions ignored)&lt;br /&gt;
     Reaction solver MIN steps: 5       # nstepmin, minimum number of time steps&lt;br /&gt;
     Reaction solver MAX steps: 100     # nstepmax, maximum number of time steps&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Exclude vib energy: True           # ivib0 = 1 if True&lt;br /&gt;
     Exclude vib source: True           # ivibS0 = 1 if True&lt;br /&gt;
     Tvib BC Ratio: 1.0			# at any BC with T set, Tvib = tvibBC * T&lt;br /&gt;
     Vibrational Temperature IC: -1     # set negative value to force Tvib = T, otherwise positive value set as IC value&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#{&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG 		             # alternate is Galerkin only for compressible&lt;br /&gt;
     Time Integration Rule: First Order      # 1st Order sets rinf(1) = -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct  best value depends&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Discontinuity Capturing: DC-quadratic    # Current Options: DC-mallet, DC-minimum, DC-quadratic, DC-yzbeta&lt;br /&gt;
     Multiplier for DC factor: 1              # scales DC variable in e3DC&lt;br /&gt;
     Discontinuity Capturing Scheme: 1        # 0: discontinuous, 1: continuous (L2 projection)&lt;br /&gt;
     Include Source Term in DC: 0             # 1: sets idcSRC to 1&lt;br /&gt;
     Write DCqpt: 0                           # if &amp;gt; 0, writes out data at a quadrature point iDCqpt&lt;br /&gt;
&lt;br /&gt;
#----Parameters for YZBeta DC operator ----&lt;br /&gt;
     Beta Value: 1                       # 1: smoother , 2: sharper, 12: compromise between 1 and 2&lt;br /&gt;
     YZB Farfield Conditions: 1e5 2119 10 10 300 # [Pressure, X-Vel, Y-Vel, Z-Vel, Temperature] &lt;br /&gt;
     YZB Farfield Mole Frac: 1 0 0 0 0   # mole fractions at reference condition&lt;br /&gt;
                                         # [xN2,xO2,xNO,xN,xO] &lt;br /&gt;
                                         # must sum to 1, must be length 5 &lt;br /&gt;
     Include Umod Term: 1                # 0: no, 1: yes&lt;br /&gt;
     Mach Adjustment Bm Value: 1         # 0: off,1: smoother shock, 2: sharper shock&lt;br /&gt;
     Mach Adjustment Bj Value: 6         # 0: off,1: smoother shock, 2: sharper shock&lt;br /&gt;
     Include Time Term in Z: 1           # 0: no, 1: yes&lt;br /&gt;
#------------------------------------------&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''errors'' and ''DCqpt'' fields must be written to the restart file for the example below to work; saving these two fields is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;24&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;24-procs_case/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;24-procs_case/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;1&amp;quot; &lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;2010&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;50&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_N2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_O2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temperature&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;6&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;errors&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;nu&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;DCqpt&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;DCqpt1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2077</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2077"/>
				<updated>2024-08-29T13:48:08Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Boundary Conditions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
This section provides details of the meshing and model attributes. Currently, the code can simulate a gas mixture with up to five species (''nsp'' &amp;amp;le; 5).&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the SimModeler utility, the mesh can be either created directly or loaded from an existing .cas file. Below are the steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch SimModeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Turbulent eddy viscosity&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used with Dalton's Law of partial pressures to compute the mole fraction of species 1, obtained by subtracting the sum of the other species' mole fractions from 1&lt;br /&gt;
* heat flux - set to zero for adiabatic wall boundary condition&lt;br /&gt;
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set these values such that the flow is initialized at Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of species 2 mole fraction&lt;br /&gt;
* initial scalar_2 - Initial value of species 3 mole fraction&lt;br /&gt;
* initial scalar_3 - Initial value of species 4 mole fraction&lt;br /&gt;
* initial scalar_4 - Initial value of species 5 mole fraction&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with the initial scalar values to compute the mole fraction of species 1&lt;br /&gt;
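&lt;br /&gt;
As an illustrative sketch of that computation (the scalar values here are hypothetical), the species 1 mole fraction follows from requiring the mole fractions to sum to 1:&lt;br /&gt;
&lt;br /&gt;
 x1 = 1 - (x2 + x3 + x4 + x5)&lt;br /&gt;
    = 1 - (0.20 + 0.05 + 0.01 + 0.01)&lt;br /&gt;
    = 0.73&lt;br /&gt;
&lt;br /&gt;
where x2 through x5 correspond to initial scalar_1 through scalar_4.&lt;br /&gt;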
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Capability is included for handling multi-species flows with up to ''nsp'' = 5 species. This feature has been tested and shown to work when solving five species mass conservation equations simultaneously. However, when chemical species production governed by the finite-rate chemistry module is active, the solution becomes unstable, so additional work on this feature is needed. &lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;# PHASTA_HYP Version 1 Input File&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 100  &lt;br /&gt;
     Time Step Size: 3.2339656949004e-08&lt;br /&gt;
&lt;br /&gt;
     Limit Density: 0 0.01 0.1         # solution limiting on variables [switch, min, max]&lt;br /&gt;
     Limit u1: 0 0. 2.8e3&lt;br /&gt;
     Limit u2: 0 0 0&lt;br /&gt;
     Limit u3: 0 0 0&lt;br /&gt;
     Limit Temperature: 0 230 3500     # also limits vibrational temperature&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 250   &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscosity: 1.95508431704028e-5                 # dynamic viscosity (only used if nsp.eq.99 for air mixture)&lt;br /&gt;
     Thermal Conductivity: 26.6843390135759e-3      # only used if nsp.eq.99 for air mixture&lt;br /&gt;
     Viscous Control: None    #Viscous   #None      &lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
#.......currently only allowing 1&amp;lt;=nsp&amp;lt;=5. &lt;br /&gt;
#.........and make sure scalars passed from simmodeler are in correct slots&lt;br /&gt;
#&lt;br /&gt;
     Species IDs: 1             # specIDs, length of array must equal&lt;br /&gt;
                                        #    nsp (IDs numbered in order:&lt;br /&gt;
                                        #            N2,O2,NO,N,O)&lt;br /&gt;
                                        #   ID:99 for Air molecule&lt;br /&gt;
     # e.g. Species IDs: 1 3      &amp;lt;&amp;lt;&amp;lt; would give N2 and NO&lt;br /&gt;
     Ref Entropy Conditions: 1e3 230 230 0   #[P0,T0,T0vib,S0]&lt;br /&gt;
     Ref Entropy Mole Frac: 1 0 0 0 0   # composition of gas used as reference entropy condition&lt;br /&gt;
                                        #  &amp;gt;&amp;gt; must sum to one&lt;br /&gt;
     Allow reactions: False             # chemical reactions, ichem = 1 if True&lt;br /&gt;
     Chemical heat release: False       # chemical heat release, iqtot = 1 if True&lt;br /&gt;
     Limit on reaction step: 0.00001    # rlim (limits change in species cs per step)&lt;br /&gt;
     Tolerance to global time: 0.01     # ttol, chem solver is advanced in time until time diff &amp;lt; ttol*dt_global&lt;br /&gt;
     Temperature threshold: 500         # Tth (below which, reactions ignored)&lt;br /&gt;
     Reaction solver MIN steps: 5       # nstepmin, minimum number of time steps&lt;br /&gt;
     Reaction solver MAX steps: 100     # nstepmax, maximum number of time steps&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Exclude vib energy: True           # ivib0 = 1 if True&lt;br /&gt;
     Exclude vib source: True           # ivibS0 = 1 if True&lt;br /&gt;
     Tvib BC Ratio: 1.0			# at any BC with T set, Tvib = tvibBC * T&lt;br /&gt;
     Vibrational Temperature IC: -1     # set negative value to force Tvib = T, otherwise positive value set as IC value&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#{&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG 		             # alternate is Galerkin only for compressible&lt;br /&gt;
     Time Integration Rule: First Order      # 1st Order sets rinf(1) = -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct  best value depends&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Discontinuity Capturing: DC-quadratic    # Current Options: DC-mallet, DC-minimum, DC-quadratic, DC-yzbeta&lt;br /&gt;
     Multiplier for DC factor: 1              # scales DC variable in e3DC&lt;br /&gt;
     Discontinuity Capturing Scheme: 1        # 0: discontinuous, 1: continuous (L2 projection)&lt;br /&gt;
     Include Source Term in DC: 0             # 1: sets idcSRC to 1&lt;br /&gt;
     Write DCqpt: 0                           # if &amp;gt; 0, writes out data at a quadrature point iDCqpt&lt;br /&gt;
&lt;br /&gt;
#----Parameters for YZBeta DC operator ----&lt;br /&gt;
     Beta Value: 1                       # 1: smoother , 2: sharper, 12: compromise between 1 and 2&lt;br /&gt;
     YZB Farfield Conditions: 1e5 2119 10 10 300 # [Pressure, X-Vel, Y-Vel, Z-Vel, Temperature] &lt;br /&gt;
     YZB Farfield Mole Frac: 1 0 0 0 0   # mole fractions at reference condition&lt;br /&gt;
                                         # [xN2,xO2,xNO,xN,xO] &lt;br /&gt;
                                         # must sum to 1, must be length 5 &lt;br /&gt;
     Include Umod Term: 1                # 0: no, 1: yes&lt;br /&gt;
     Mach Adjustment Bm Value: 1         # 0: off,1: smoother shock, 2: sharper shock&lt;br /&gt;
     Mach Adjustment Bj Value: 6         # 0: off,1: smoother shock, 2: sharper shock&lt;br /&gt;
     Include Time Term in Z: 1           # 0: no, 1: yes&lt;br /&gt;
#------------------------------------------&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''errors'' and ''DCqpt'' fields would need to be output to the restart file for the example below to work. Saving these two fields is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;24&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;24-procs_case/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;24-procs_case/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;1&amp;quot; &lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;2010&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;50&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_N2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_O2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temperature&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;6&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;errors&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;nu&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;DCqpt&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;DCqpt1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2076</id>
		<title>TCNEQ Version</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=TCNEQ_Version&amp;diff=2076"/>
				<updated>2024-08-29T13:46:51Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Pre-Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
The following information relates to the use of the thermochemical nonequilibrium (TCNEQ) version of PHASTA written in terms of entropy variables. The reader is referred to the following for additional information.&lt;br /&gt;
&lt;br /&gt;
* F. Chalot, T.J.R. Hughes, and F. Shakib, '''&amp;quot;Symmetrization of Conservation Laws with Entropy for High-Temperature Hypersonic Computations,&amp;quot;''' Computing Systems in Engineering, 1(2-4):495–521, 1990.&lt;br /&gt;
&lt;br /&gt;
* J. Pointer, '''&amp;quot;Influence of Interpolation Variables and Discontinuity Capturing Operators on Inviscid Hypersonic Flow Simulations Using a Stabilized Continuous Galerkin Solver,&amp;quot;''' Ph.D. dissertation, University of Colorado, Boulder, CO, 2022.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pre-Processing ==&lt;br /&gt;
In this section, details of the meshing and model attributes are provided. Currently, capability exists to simulate a gas with number of species (''nsp'') &amp;amp;le; 5.&lt;br /&gt;
&lt;br /&gt;
=== Meshing ===&lt;br /&gt;
Within the Simmodeler utility, the mesh can either be created or loaded from an existing .cas file. Below are steps for loading a mesh from a .cas file:&lt;br /&gt;
# Launch Simmodeler (for this example, SimModeler7.0-190604 is used)&lt;br /&gt;
# File &amp;gt; Import Discrete Data &amp;gt; (select .cas file to import) &amp;gt; (keep defaults and click OK) &amp;gt; (select YES to keep volume mesh)&lt;br /&gt;
# Save .sms and .smd files &lt;br /&gt;
# Attributes can now be assigned to the model as normal&lt;br /&gt;
&lt;br /&gt;
=== Boundary Conditions ===&lt;br /&gt;
Below are the recognized boundary conditions that can be applied for the current version:&lt;br /&gt;
* comp1/comp2/comp3 - Specification of one/two/three components of velocity, [m/s]&lt;br /&gt;
* temperature - Specification of translational-rotational temperature, [K]. By default, vibrational temperature is held in equilibrium with this value and nonequilibrium is controlled through simulation inputs. &lt;br /&gt;
* surfID - When value is set to 702, the boundary is treated as a slip wall. If using this option, include a boundary layer mesh along the surface to ensure the wall normal direction is accurately computed.&lt;br /&gt;
* scalar_1 - Mole fraction of species 2 of the gas&lt;br /&gt;
* scalar_2 - Mole fraction of species 3 of the gas&lt;br /&gt;
* scalar_3 - Mole fraction of species 4 of the gas&lt;br /&gt;
* scalar_4 - Mole fraction of species 5 of the gas&lt;br /&gt;
* pressure - Specification of static pressure over a surface, [Pa]&lt;br /&gt;
** Used to compute the mole fraction of species 1 of the gas via Dalton's law of partial pressures, i.e. by subtracting the sum of the other mole fractions from 1&lt;br /&gt;
* heat flux - set to zero for adiabatic wall boundary condition&lt;br /&gt;
&lt;br /&gt;
=== Initial Conditions ===&lt;br /&gt;
Below are the required initial conditions for the current version:&lt;br /&gt;
* initial velocity - Components and magnitude of flow velocity, [m/s]&lt;br /&gt;
** If a supersonic outlet condition is used, set such that flow is initialized Mach &amp;gt; 1&lt;br /&gt;
* initial temperature - Value used to set translational-rotational temperature, [K]&lt;br /&gt;
* initial scalar_1 - Initial value of species 2 mole fraction&lt;br /&gt;
* initial scalar_2 - Initial value of species 3 mole fraction&lt;br /&gt;
* initial scalar_3 - Initial value of species 4 mole fraction&lt;br /&gt;
* initial scalar_4 - Initial value of species 5 mole fraction&lt;br /&gt;
* initial pressure - Static pressure of the gas, [Pa]&lt;br /&gt;
** For multi-species flows, this value is used in combination with the initial scalar values to compute the mole fraction of species 1&lt;br /&gt;
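&lt;br /&gt;
The species-1 computation above can be sketched with a quick shell one-liner; the scalar values below are hypothetical, chosen only for illustration:&lt;br /&gt;

```shell
# Hypothetical mole fractions for species 2-5 (the scalar_1..scalar_4 values).
x2=0.21; x3=0; x4=0; x5=0
# Species 1 takes the remainder so that the mole fractions sum to 1.
awk -v x2="$x2" -v x3="$x3" -v x4="$x4" -v x5="$x5" \
    'BEGIN { printf "%.2f\n", 1 - (x2 + x3 + x4 + x5) }'
```

With these example values the one-liner prints 0.79, which would be the species-1 (e.g. N2) mole fraction.&lt;br /&gt;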
&lt;br /&gt;
== Simulation Inputs ==&lt;br /&gt;
&lt;br /&gt;
Below is an example of the input script for the current version of the code. Multi-species flows are supported up to a number of species ''nsp'' of 5, and this capability has been tested and shown to work with five species mass-conservation equations solved at once. However, when chemical species production governed by the finite-rate chemistry module is active, the solution becomes unstable; additional work on this feature is needed.&lt;br /&gt;
&lt;br /&gt;
=== Example of solver.inp file: ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;# PHASTA_HYP Version 1 Input File&lt;br /&gt;
&lt;br /&gt;
#SOLUTION CONTROL &lt;br /&gt;
#{                &lt;br /&gt;
     Equation of State: Compressible&lt;br /&gt;
     Number of Timesteps: 100  &lt;br /&gt;
     Time Step Size: 3.2339656949004e-08&lt;br /&gt;
&lt;br /&gt;
     Limit Density: 0 0.01 0.1         # solution limiting on variables [switch, min, max]&lt;br /&gt;
     Limit u1: 0 0. 2.8e3&lt;br /&gt;
     Limit u2: 0 0 0&lt;br /&gt;
     Limit u3: 0 0 0&lt;br /&gt;
     Limit Temperature: 0 230 3500     # also limits vibrational temperature&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#OUTPUT CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Number of Timesteps between Restarts: 250   &lt;br /&gt;
     Print Error Indicators: True                   # shock error stored in column 6, DC factor \nu stored in column 10&lt;br /&gt;
     Error Indicator Threshold: 0.01                # err &amp;gt; thresh*err_max is flagged as 1 (i.e. identified for refinement)&lt;br /&gt;
                                                    #   --&amp;gt; smaller values = narrower flagged region along shock&lt;br /&gt;
     Number of Error Smoothing Iterations: 0        # ierrsmooth&lt;br /&gt;
     Load and set 3D IC: False                      # load the flowfield from a file as the initial condition&lt;br /&gt;
     Position Tolerance on IC Load: 1e-7            # sets the tolerance for matching node locations while loading the initial condition&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MATERIAL CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Viscosity: 1.95508431704028e-5                 # dynamic viscosity (only used if nsp.eq.99 for air mixture)&lt;br /&gt;
     Thermal Conductivity: 26.6843390135759e-3      # only used if nsp.eq.99 for air mixture&lt;br /&gt;
     Viscous Control: None    #Viscous   #None      &lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#REACTING FLOW&lt;br /&gt;
#{&lt;br /&gt;
     Number of species: 1               # nsp&lt;br /&gt;
#.......currently only allowing 1&amp;lt;=nsp&amp;lt;=5. &lt;br /&gt;
#.........and make sure scalars passed from simmodeler are in correct slots&lt;br /&gt;
#&lt;br /&gt;
     Species IDs: 1             # specIDs, length of array must equal&lt;br /&gt;
                                        #    nsp (IDs numbered in order:&lt;br /&gt;
                                        #            N2,O2,NO,N,O&lt;br /&gt;
                                        #   ID:99 for Air molecule&lt;br /&gt;
     # e.g. Species IDs: 1 3      &amp;lt;&amp;lt;&amp;lt; would give N2 and NO&lt;br /&gt;
     Ref Entropy Conditions: 1e3 230 230 0   #[P0,T0,T0vib,S0]&lt;br /&gt;
     Ref Entropy Mole Frac: 1 0 0 0 0   # composition of gas used as reference entropy condition&lt;br /&gt;
                                        #  &amp;gt;&amp;gt; must sum to one&lt;br /&gt;
     Allow reactions: False             # chemical reactions, ichem = 1 if True&lt;br /&gt;
     Chemical heat release: False       # chemical heat release, iqtot = 1 if True&lt;br /&gt;
     Limit on reaction step: 0.00001    # rlim (limits change in species cs per step)&lt;br /&gt;
     Tolerance to global time: 0.01     # ttol, chem solver is advanced in time until time diff &amp;lt; ttol*dt_global&lt;br /&gt;
     Temperature threshold: 500         # Tth (below which, reactions ignored)&lt;br /&gt;
     Reaction solver MIN steps: 5       # nstepmin, minimum number of time steps&lt;br /&gt;
     Reaction solver MAX steps: 100     # nstepmax, maximum number of time steps&lt;br /&gt;
     Two Temperature coefficient: 0.5   # qta (Tvib**qta*T**(1-qta))&lt;br /&gt;
     Exclude vib energy: True           # ivib0 = 1 if True&lt;br /&gt;
     Exclude vib source: True           # ivibS0 = 1 if True&lt;br /&gt;
     Tvib BC Ratio: 1.0			# at any BC with T set, Tvib = tvibBC * T&lt;br /&gt;
     Vibrational Temperature IC: -1     # set negative value to force Tvib = T, otherwise positive value set as IC value&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#LINEAR SOLVER&lt;br /&gt;
#&lt;br /&gt;
     Solver Type: GMRES sparse      &lt;br /&gt;
     Number of GMRES Sweeps per Solve: 1                       # replaces nGMRES&lt;br /&gt;
     Minimum Number of Iterations per Nonlinear Iteration: 10  # minIters&lt;br /&gt;
     Number of Krylov Vectors per GMRES Sweep: 100	       # replaces Kspace    &lt;br /&gt;
     Tolerance on Momentum Equations: 0.01                     # epstol(1), affects etol for Hessenberg problem&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCRETIZATION CONTROL&lt;br /&gt;
#{&lt;br /&gt;
     Weak Form: SUPG 		             # alternate is Galerkin only for compressible&lt;br /&gt;
     Time Integration Rule: First Order      # 1st Order sets rinf(1) = -1&lt;br /&gt;
     Tau Matrix: Matrix-Ent-Adv&lt;br /&gt;
     Include Viscous Correction in Stabilization: False    # if p=1 idiff=1&lt;br /&gt;
                                                           # if p=2 idiff=2  &lt;br /&gt;
     Tau Time Constant: 1.0&lt;br /&gt;
     Tau C Scale Factor: 1.0                 # taucfct  best value depends&lt;br /&gt;
     Number of Elements Per Block: 64        #ibksiz&lt;br /&gt;
&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#DISCONTINUITY CAPTURING&lt;br /&gt;
#{&lt;br /&gt;
     Discontinuity Capturing: DC-quadratic    # Current Options: DC-mallet, DC-minimum, DC-quadratic, DC-yzbeta&lt;br /&gt;
     Multiplier for DC factor: 1              # scales DC variable in e3DC&lt;br /&gt;
     Discontinuity Capturing Scheme: 1        # 0: discontinuous, 1: continuous (L2 projection)&lt;br /&gt;
     Include Source Term in DC: 0             # 1: sets idcSRC to 1&lt;br /&gt;
     Write DCqpt: 0                           # if &amp;gt; 0, writes out data at a quadrature point iDCqpt&lt;br /&gt;
&lt;br /&gt;
#----Parameters for YZBeta DC operator ----&lt;br /&gt;
     Beta Value: 1                       # 1: smoother , 2: sharper, 12: compromise between 1 and 2&lt;br /&gt;
     YZB Farfield Conditions: 1e5 2119 10 10 300 # [Pressure, X-Vel, Y-Vel, Z-Vel, Temperature] &lt;br /&gt;
     YZB Farfield Mole Frac: 1 0 0 0 0   # mole fractions at reference condition&lt;br /&gt;
                                         # [xN2,xO2,xNO,xN,xO] &lt;br /&gt;
                                         # must sum to 1, must be length 5 &lt;br /&gt;
     Include Umod Term: 1                # 0: no, 1: yes&lt;br /&gt;
     Mach Adjustment Bm Value: 1         # 0: off,1: smoother shock, 2: sharper shock&lt;br /&gt;
     Mach Adjustment Bj Value: 6         # 0: off,1: smoother shock, 2: sharper shock&lt;br /&gt;
     Include Time Term in Z: 1           # 0: no, 1: yes&lt;br /&gt;
#------------------------------------------&lt;br /&gt;
#}&lt;br /&gt;
&lt;br /&gt;
#STEP SEQUENCE &lt;br /&gt;
#{&lt;br /&gt;
       Step Construction  : 0 1 0 1&lt;br /&gt;
#}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Post-Processing ==&lt;br /&gt;
&lt;br /&gt;
An example of the ''flow.pht'' file is provided to demonstrate the ordering of the variables that can be viewed in ParaView. Note that both the ''errors'' and ''DCqpt'' fields would need to be output to the restart file for the example below to work. Saving these two fields is controlled through the solver.inp options.&lt;br /&gt;
&lt;br /&gt;
=== Example of flow.pht file ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; ?&amp;gt;&lt;br /&gt;
&amp;lt;PhastaMetaFile number_of_pieces=&amp;quot;24&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;GeometryFileNamePattern pattern=&amp;quot;24-procs_case/geombc.dat.%d&amp;quot; &lt;br /&gt;
                            has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                            has_time_entry=&amp;quot;0&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;FieldFileNamePattern pattern=&amp;quot;24-procs_case/restart.%d.%d&amp;quot;&lt;br /&gt;
                         has_piece_entry=&amp;quot;1&amp;quot;&lt;br /&gt;
                         has_time_entry=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;TimeSteps number_of_steps=&amp;quot;1&amp;quot; &lt;br /&gt;
	      auto_generate_indices=&amp;quot;1&amp;quot;&lt;br /&gt;
              start_index=&amp;quot;2010&amp;quot;&lt;br /&gt;
	      increment_index_by=&amp;quot;50&amp;quot;&lt;br /&gt;
              start_value=&amp;quot;0&amp;quot;&lt;br /&gt;
              increment_value_by=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;/TimeSteps&amp;gt;&lt;br /&gt;
   &amp;lt;Fields number_of_fields=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_N2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;rho_O2&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;1&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;velocity&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;2&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;3&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temp_vib&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;5&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;solution&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;temperature&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;6&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;errors&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;nu&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;9&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;0&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;Field phasta_field_tag=&amp;quot;DCqpt&amp;quot;&lt;br /&gt;
            paraview_field_tag=&amp;quot;DCqpt1&amp;quot;&lt;br /&gt;
            start_index_in_phasta_array=&amp;quot;0&amp;quot;&lt;br /&gt;
            number_of_components=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_dependency=&amp;quot;1&amp;quot;&lt;br /&gt;
            data_type=&amp;quot;double&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/Fields&amp;gt;&lt;br /&gt;
&amp;lt;/PhastaMetaFile&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=PHASTA_Group_Machines&amp;diff=2075</id>
		<title>PHASTA Group Machines</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=PHASTA_Group_Machines&amp;diff=2075"/>
				<updated>2024-08-21T19:36:29Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Machines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page documents the local machines owned by the group, logging in, and two factor authentication.&lt;br /&gt;
&lt;br /&gt;
== Logging In ==&lt;br /&gt;
&lt;br /&gt;
The entry point for the group machines is &amp;lt;code&amp;gt;jumpgate&amp;lt;/code&amp;gt;, which is accessed publicly via &amp;lt;code&amp;gt;jumpgate-phasta.colorado.edu&amp;lt;/code&amp;gt;. We access this entry point by creating an SSH connection between it and our personal machines. To access the system from the command line (terminal, command prompt, etc.), run &amp;lt;code&amp;gt;ssh USERNAME@jumpgate-phasta.colorado.edu&amp;lt;/code&amp;gt;. Note: if you are using Windows 8 or older, you will need to download and install an SSH client such as [https://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY], and follow the directions specified as '''Windows 8'''. &lt;br /&gt;
&lt;br /&gt;
This [https://fluid.colorado.edu/tutorials/tutorialVideos/VNC_tutorial.mov video] walks you through the next steps you will take to access and interact with the PHASTA machines. The steps in the video are documented more thoroughly in the remainder of this page and on the [[VNC]] page of the wiki.&lt;br /&gt;
&lt;br /&gt;
To get started, open a command line terminal, enter &amp;lt;code&amp;gt;ssh USERNAME@jumpgate-phasta.colorado.edu&amp;lt;/code&amp;gt;, and the login process will look like the following:&lt;br /&gt;
&lt;br /&gt;
 ➜ ssh USERNAME@jumpgate-phasta.colorado.edu &lt;br /&gt;
 Password: &lt;br /&gt;
 Verification code:&lt;br /&gt;
&lt;br /&gt;
 '''Windows 8''' &lt;br /&gt;
 Open your PuTTY application and enter the following in the field that says &amp;quot;Host name:&amp;quot;&lt;br /&gt;
  &amp;lt;code&amp;gt;USERNAME@jumpgate-phasta.colorado.edu&amp;lt;/code&amp;gt;&lt;br /&gt;
 Next make sure that Connection type is set to &amp;quot;SSH&amp;quot;. Click on &amp;quot;Open&amp;quot;. This will start a terminal window session connected to jumpgate that will have the &lt;br /&gt;
 same output as the non-Windows 8 instructions above.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;code&amp;gt;Password:&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Verification code:&amp;lt;/code&amp;gt; are prompts for you to enter your password and 2FA passcode. Note that the request for a verification code will not start occurring until after you set up 2FA as described below.&lt;br /&gt;
&lt;br /&gt;
Very little can or should be done on &amp;lt;code&amp;gt;jumpgate&amp;lt;/code&amp;gt;. Its most common use is establishing a tunnel for a VNC session; the second, required to set that up, is connecting to &amp;lt;code&amp;gt;portal1&amp;lt;/code&amp;gt;. This is done via &amp;lt;code&amp;gt;ssh portal1&amp;lt;/code&amp;gt; while on &amp;lt;code&amp;gt;jumpgate&amp;lt;/code&amp;gt;.&lt;br /&gt;
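&lt;br /&gt;
The hop pattern above (your machine, then jumpgate, then the inner machines) can also be captured in a client-side SSH configuration so that a single command reaches an inner machine. The stanza below is only a sketch: it assumes your OpenSSH client supports ProxyJump (version 7.3 or newer), and USERNAME is a placeholder.&lt;br /&gt;

```
# ~/.ssh/config on your personal machine (sketch; USERNAME is a placeholder)
Host jumpgate
    HostName jumpgate-phasta.colorado.edu
    User USERNAME

# Reach the inner machines through jumpgate in one step, e.g. "ssh portal1"
Host portal1 viz003
    ProxyJump jumpgate
    User USERNAME
```

You will still be prompted for your password and verification code on the jumpgate hop.&lt;br /&gt;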
&lt;br /&gt;
=== Setting Up Two-Factor Authentication ===&lt;br /&gt;
Due to recent brute-force ssh attacks we are moving to two-factor authentication (2FA). Existing users will have one week to switch over to this process; new users are expected to do so within 24 hours. Setup is straightforward from a terminal on your Mac or Linux laptop (or Windows, if new enough), or using PuTTY. All commands to be run will be shown in a &amp;lt;code&amp;gt;code&amp;lt;/code&amp;gt; block. If you have not already established a connection with jumpgate (from the previous steps above), open a new terminal on your personal computer and enter the following:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;code&amp;gt;ssh USERNAME@jumpgate-phasta.colorado.edu&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will prompt for your ''password'' (the private password you set or, if this is your first login, the one in the account creation email). Enter your password to log in. We will refer to this terminal as the &amp;quot;primary terminal&amp;quot; for the remainder of these instructions. &lt;br /&gt;
&lt;br /&gt;
Next you need to download and install an authenticator application for your computer or phone. There are several from Google, Microsoft, Twilio, etc. (Google Authenticator works great). Launch the application, then create a new token generator in whatever mode it provides (often it opens with a QR code scanner enabled, since scanning is the easiest way to link the phone application to the QR code generated on the machine you are trying to access).&lt;br /&gt;
&lt;br /&gt;
Before moving forward, start a second terminal connection to &amp;lt;code&amp;gt;jumpgate&amp;lt;/code&amp;gt; by repeating the process above for establishing an ssh connection (PuTTY steps for Windows 8 users). We will refer to this second terminal connection as our &amp;quot;backup terminal&amp;quot;. If at any point you want or need to reset, simply run &amp;lt;code&amp;gt;rm -rf ~/.google_authenticator&amp;lt;/code&amp;gt; in the backup terminal.&lt;br /&gt;
&lt;br /&gt;
Now, in your primary &amp;lt;code&amp;gt;jumpgate&amp;lt;/code&amp;gt; terminal on your laptop type and enter:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;code&amp;gt;google-authenticator&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your terminal window is big enough, it should display a QR code that you can scan with the authenticator app on your phone. At this point it will ask you some questions about options (I answered yes to all). '''You MUST answer yes to the time-based question'''; otherwise you will not be able to copy files onto ALCF (learned from experience).&lt;br /&gt;
&lt;br /&gt;
Now open a third terminal and log on to &amp;lt;code&amp;gt;jumpgate&amp;lt;/code&amp;gt; with ssh just as before. Because we have created a 2FA token, in addition to prompting for your ''password'', it will also prompt for a &amp;quot;Verification code:&amp;quot;. In your authenticator app, find the auto-generated 6-digit code associated with the jumpgate machine and enter it in the &amp;quot;Verification code:&amp;quot; field. If you've logged on successfully, you are done and can move on to setting up your VNC. Otherwise, attempt to troubleshoot or reset the process with the &amp;lt;code&amp;gt;rm -rf ~/.google_authenticator&amp;lt;/code&amp;gt; command in the &amp;quot;backup&amp;quot; terminal you opened previously. If you have reset the process, close your primary and third terminals, keep the backup terminal open, then open a new primary terminal and start the process over.&lt;br /&gt;
&lt;br /&gt;
=== VNC - Viewing your PHASTA Machine Sessions ===&lt;br /&gt;
Most members of the group interact with the PHASTA machines via a VNC tool (Virtual Network Computing), which provides a graphical user interface (GUI) link between the PHASTA machines (server side) and your personal machine (client side). This connection permits you to interact with the PHASTA machines from your personal machine at any location with a secure internet connection. Setting up the VNC server is documented on the [[VNC]] page, and for the purpose of the On Ramp, follow the sections identified with steps 1 through 3. The remainder of the sections on the VNC page are options you can explore at a later time. Click [[VNC|here]] to set up your VNC and continue with the On Ramp.&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;jumpgate&amp;lt;/code&amp;gt; ===&lt;br /&gt;
This is the machine that allows you to &amp;quot;jump&amp;quot; to the other machines in the local network via &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt;. It is simply the public-facing machine and should only be used as such.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;portal1&amp;lt;/code&amp;gt; ===&lt;br /&gt;
This is where most of the non-computationally intensive tasks are done, such as text editing, moving files, etc. Effectively, if it takes longer than 5 seconds to run, you should probably think about running it on one of the &amp;lt;code&amp;gt;viz*&amp;lt;/code&amp;gt; nodes.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;viz003&amp;lt;/code&amp;gt; ===&lt;br /&gt;
This is where most computationally intensive tasks are done; however, these should only be run for debugging or post-processing. Production runs should be run on servers outside of the group's local machines ([[CU Boulder RC|Summit]], [[NAS]], [[ALCF]], etc.)&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;lab3&amp;lt;/code&amp;gt; ===&lt;br /&gt;
Windows machine for using Windows programs, such as [[SolidWorks]]. Accessing &amp;lt;code&amp;gt;lab3&amp;lt;/code&amp;gt; is different from the other machines; see [[Access Lab3]].&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;ciscoX&amp;lt;/code&amp;gt; ===&lt;br /&gt;
These machines (&amp;lt;code&amp;gt;cisco1&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cisco2&amp;lt;/code&amp;gt;, etc.) are meant for computing and are accessed by submitting a job via PBS.&lt;br /&gt;
&lt;br /&gt;
== Using PBS for the cisco machines ==&lt;br /&gt;
&lt;br /&gt;
Jobs submitted to PBS can either be scripted or interactive.&lt;br /&gt;
&lt;br /&gt;
=== Interactive Job Example ===&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
matthb2@portal1:~$ soft add +pbs&lt;br /&gt;
matthb2@portal1:~$ qsub -I -l select=2:ncpus=24:mpiprocs=4 -q workq&lt;br /&gt;
qsub: waiting for job 1008.pbs to start&lt;br /&gt;
qsub: job 1008.pbs ready&lt;br /&gt;
&lt;br /&gt;
matthb2@cisco1:~$ module load openmpi&lt;br /&gt;
matthb2@cisco1:~$ mpicc mpihello/mpihello.c&lt;br /&gt;
matthb2@cisco1:~$ mpirun ./a.out&lt;br /&gt;
Hello Parallel World&lt;br /&gt;
Rank: 1 Number is: 1&lt;br /&gt;
Rank: 2 Number is: 1&lt;br /&gt;
Rank: 3 Number is: 1&lt;br /&gt;
Rank: 0 Number is: 1&lt;br /&gt;
Rank: 5 Number is: 1&lt;br /&gt;
Rank: 4 Number is: 1&lt;br /&gt;
Rank: 6 Number is: 1&lt;br /&gt;
Rank: 7 Number is: 1&lt;br /&gt;
matthb2@cisco1:~$&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
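&lt;br /&gt;
=== Scripted Job Example ===&lt;br /&gt;
&lt;br /&gt;
The same request can be expressed as a batch script instead of an interactive session. This is a minimal sketch, not a tested script: the job name, walltime, and executable path are placeholders to adapt.&lt;br /&gt;

```shell
#!/bin/bash
#PBS -N mpihello                       # job name (placeholder)
#PBS -q workq                          # same queue as the interactive example
#PBS -l select=2:ncpus=24:mpiprocs=4   # same resource request as above
#PBS -l walltime=00:10:00              # walltime (placeholder)

cd $PBS_O_WORKDIR                      # run from the submission directory
module load openmpi
mpirun ./a.out
```

Submit with &amp;lt;code&amp;gt;qsub script.pbs&amp;lt;/code&amp;gt; (after &amp;lt;code&amp;gt;soft add +pbs&amp;lt;/code&amp;gt;) and check its status with &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt;.&lt;br /&gt;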
&lt;br /&gt;
&lt;br /&gt;
[[Category:Compute Facilities]]&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2072</id>
		<title>Interpolate Solution from Different Mesh</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2072"/>
				<updated>2024-07-01T20:25:13Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This will be a general page for how to interpolate solutions from different meshes onto a new mesh. &lt;br /&gt;
Those meshes are assumed to be of the same domain. &lt;br /&gt;
&lt;br /&gt;
The generic terms for the two meshes are &amp;quot;source&amp;quot; and &amp;quot;target&amp;quot;, where source has the desired solution data and target is the mesh that will be receiving. &lt;br /&gt;
&lt;br /&gt;
== Laminar, Incompressible, Semi-structured Mesh ==&lt;br /&gt;
This section will assume that the source mesh is in a structured ijk form. In the future, this may be expanded to meshes created by MGEN.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview ===&lt;br /&gt;
&lt;br /&gt;
# A CSV of the solution must be created. This must be done on a serial running version of ParaView '''and''' must be done after a MergeBlocks filter is done.&lt;br /&gt;
# A special solution file is created via &amp;lt;code&amp;gt;Sort2StructuredGrid&amp;lt;/code&amp;gt;, which will take in the csv from step 1 and the ordered coordinates of the source file from Matlab. &lt;br /&gt;
#* Effectively, it creates a single solution file in the same ordering as the Matlab points. The importance of this is significant as the executable used in the next step depends on some expected ordering. If the ordering is different, a new executable will need to be used/created. &lt;br /&gt;
# The interpolation is performed onto the new grid via &amp;lt;code&amp;gt;par3DInterp3&amp;lt;/code&amp;gt;, which creates &amp;lt;code&amp;gt;solInterp.&amp;lt;1-nparts&amp;gt;&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
# The interpolation is then used by PHASTA by setting &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;. &lt;br /&gt;
#* This should only be done for 1 timestep, as it will continue to reset the IC for all subsequent timesteps. The &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory needs to be symlinked into the &amp;lt;code&amp;gt;-procs_case&amp;lt;/code&amp;gt; directory for this to work.&lt;br /&gt;
&lt;br /&gt;
==== 1. Create CSV ====&lt;br /&gt;
&lt;br /&gt;
* Created using ParaView&lt;br /&gt;
* PV must be running in Serial mode&lt;br /&gt;
** Otherwise the CSV will not be in the correct order and possibly have duplicated points&lt;br /&gt;
# Load in source dataset&lt;br /&gt;
# Apply MergeBlocks filter&lt;br /&gt;
# Save dataset as a csv with 12 digits of precision in scientific notation&lt;br /&gt;
#* Make sure the csv is in &amp;quot;pressure, u0, u1, u2, x0, x1, x2&amp;quot; format&lt;br /&gt;
#* This can be done by only loading the pressure and velocity fields into Paraview (either by editing the &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; or in the data load menu in Paraview).&lt;br /&gt;
# Replace the commas with spaces&lt;br /&gt;
#* Can use &amp;lt;code&amp;gt;vim&amp;lt;/code&amp;gt; or run &amp;lt;code&amp;gt;sed -i 's/,/\ /g' test.csv&amp;lt;/code&amp;gt;&lt;br /&gt;
#* Though the next step looks for a .csv extension, it is a fortran formatted read and actually needs those commas replaced by spaces&lt;br /&gt;
# Remove the first line of the csv file&lt;br /&gt;
#* Done in vi or sed (&amp;lt;code&amp;gt;sed -i 1,1d test.csv&amp;lt;/code&amp;gt;) or tail (&amp;lt;code&amp;gt;tail -n +2 test.csv &amp;gt; trimmedLine1.csv&amp;lt;/code&amp;gt;) &lt;br /&gt;
#* Needed for the next program&lt;br /&gt;
#* Better yet, we should change the next code to read past that header line and then delete this line when that is complete. We should also consider the solution in [https://stackoverflow.com/a/46451049/7564988 this StackOverflow answer] as it shows how to make a data structure that could read the csv lines directly in the next program and avoid ALL this file manipulation with modern fortran (see HighPerformanceMark's answer).&lt;br /&gt;
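&lt;br /&gt;
The cleanup steps above (drop the header, replace commas with spaces) can also be combined into a single pipeline. A minimal sketch, using a tiny stand-in file named &amp;lt;code&amp;gt;test.csv&amp;lt;/code&amp;gt; in place of the real ParaView export:&lt;br /&gt;

```shell
# Stand-in for the ParaView export: header row plus one data row.
printf 'p,u0,u1,u2,x0,x1,x2\n1.0,2.0,3.0,4.0,5.0,6.0,7.0\n' > test.csv

# Drop the header line and replace commas with spaces in one pass.
tail -n +2 test.csv | sed 's/,/ /g' > test_clean.csv

cat test_clean.csv   # -> 1.0 2.0 3.0 4.0 5.0 6.0 7.0
```

This avoids editing the export in place, so the original csv survives if something goes wrong.&lt;br /&gt;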
&lt;br /&gt;
==== 2. Create Structured Solution File ====&lt;br /&gt;
&lt;br /&gt;
'''Note:''' These instructions will be for the &amp;lt;code&amp;gt;parallelSortDNSzBinJames&amp;lt;/code&amp;gt; executable, which has some highly specific requirements and command inputs. &lt;br /&gt;
&lt;br /&gt;
This step will take the data from the source solution file and put it into a format and order that will make the interpolation process much faster.&lt;br /&gt;
&lt;br /&gt;
# Symlink the source mesh's ordered coordinate file as &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may come from the files used to create the mesh (i.e. for [[Tutorial_Video_Overviews#MatLabMeshAndConvert.mov|matchedNodeElementReader]])&lt;br /&gt;
#* ''(Untested)'' This may also be created using the coordinates from the solution file&lt;br /&gt;
# Rename/symlink csv to be the correct file name (in my specific case, it was &amp;lt;code&amp;gt;dnsSolution1procLongFort.csv&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Create an interactive job on whatever machine you need to run on (ALCF Cooley in this case)&lt;br /&gt;
# Load the appropriate MPI environment variables (&amp;lt;code&amp;gt;soft add +mvapich2&amp;lt;/code&amp;gt; for Cooley)&lt;br /&gt;
# Run the executable via &amp;lt;code&amp;gt;mpirun -np [nprocs] [executable path] [executable inputs]&amp;lt;/code&amp;gt; &lt;br /&gt;
#* This will produce &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Concatenate &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files '''in order''' into single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file&lt;br /&gt;
#* The individual &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files need to be concatenated into a single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file, which can be done (in zsh at least) via &amp;lt;code&amp;gt;echo source.sln.{1..[MPIRanks]}| xargs cat &amp;gt; source.sln&amp;lt;/code&amp;gt; (or the equivalent &amp;lt;code&amp;gt;cat source.sln.{1..[MPIRanks]} &amp;gt; source.sln&amp;lt;/code&amp;gt;). Note these files '''must''' be concatenated in order of rank, otherwise the result will be out of sequence.&lt;br /&gt;
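&lt;br /&gt;
One ordering pitfall worth sketching: a bare shell glob expands lexically, so &amp;lt;code&amp;gt;source.sln.10&amp;lt;/code&amp;gt; sorts before &amp;lt;code&amp;gt;source.sln.2&amp;lt;/code&amp;gt;. An explicit numeric loop sidesteps this. The rank count and file contents below are stand-ins:&lt;br /&gt;

```shell
NP=4                                          # placeholder MPI rank count
for i in $(seq 1 $NP); do
    printf 'rank %d\n' "$i" > source.sln.$i   # stand-in per-rank part files
done

# seq guarantees numeric order; a glob like source.sln.* would sort
# lexically and scramble two-digit ranks.
for i in $(seq 1 $NP); do cat source.sln.$i; done > source.sln

head -n 1 source.sln   # -> rank 1
```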
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 24 /lus/theta-fs0/projects/PHASTA_aesp/Utilities/Sort2StructuredGrid/parallelSortDNSzBinJames 47822547 47822547 212 0.0291&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[nlines of csv] [nlines of ordered.crd] [number of elements in z] [z domain width]&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Note:''' These inputs are specific to this executable; changing the executable will change which inputs are used.&lt;br /&gt;
* Also note that the &amp;lt;code&amp;gt;[number of elements in z]&amp;lt;/code&amp;gt; is equivalent to &amp;lt;code&amp;gt;nsons&amp;lt;/code&amp;gt; - 1, i.e. the number of nodes in z minus 1.&lt;br /&gt;
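&lt;br /&gt;
The two line-count inputs can be read off with &amp;lt;code&amp;gt;wc&amp;lt;/code&amp;gt; rather than counted by hand. A sketch with a stand-in file (substitute your cleaned csv and &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt;):&lt;br /&gt;

```shell
printf 'a\nb\nc\n' > example.csv   # stand-in for the cleaned csv

# Redirecting the input makes wc print only the count, with no file name.
NLINES=$(wc -l < example.csv)
echo "$NLINES"
```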
&lt;br /&gt;
==== 3. Create Interpolated files ====&lt;br /&gt;
&lt;br /&gt;
This step will create the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files used by PHASTA to perform the interpolation.&lt;br /&gt;
&lt;br /&gt;
# Create &amp;lt;code&amp;gt;Interpolate.../[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory in the target's Chef directory and move to that directory&lt;br /&gt;
# Symlink the target's POSIX &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files (that were created by Chef) to the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory&lt;br /&gt;
#* The &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files should be copied in the exact fashion that they are in the Chef created &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory, including if they're &amp;quot;fanned out&amp;quot;&lt;br /&gt;
# Create a directory called &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may be corrected in the future, but currently if &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; is not present the job will fail&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; to the directory and the &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt; file as &amp;lt;code&amp;gt;source.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;phInterp&amp;lt;/code&amp;gt; via mpirun on an interactive job.&lt;br /&gt;
# This creates a series of &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory&lt;br /&gt;
&lt;br /&gt;
The file format for &amp;lt;code&amp;gt;solInterp.N&amp;lt;/code&amp;gt; is quite simple. Each line corresponds to the node number in the partition and the file itself has 7 columns:&lt;br /&gt;
&lt;br /&gt;
 coord_x coord_y coord_z pressure velocity_x velocity_y velocity_z&lt;br /&gt;
&lt;br /&gt;
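Before handing the files to PHASTA, it is cheap to verify that every line has exactly 7 fields. A sketch against a stand-in file (the real files live in &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt;):&lt;br /&gt;

```shell
# Stand-in solInterp.1 in the "x y z p u v w" layout described above.
printf '0.1 0.2 0.3 101325.0 1.0 0.0 0.0\n' > solInterp.1

# Print any line whose field count is not 7; no output means well-formed.
awk 'NF != 7 {print FILENAME ": line " NR " has " NF " fields"}' solInterp.1
```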
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 64 /path/to/phInterp 16 799 281 213 0.452&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[target parts per MPIProc] [source nx] [source ny] [source nz] [z Length]&amp;lt;/code&amp;gt;&lt;br /&gt;
* Note that the number of processes given to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; times the &amp;lt;code&amp;gt;[target parts per MPIProc]&amp;lt;/code&amp;gt; must equal the number of target partitions. In this case, the target had 1024 partitions, so 64*16 = 1024.&lt;br /&gt;
* The way it works is that each of the 64 MPIProcs is given 16 partitions to interpolate.&lt;br /&gt;
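&lt;br /&gt;
That partition arithmetic is easy to get wrong, so a one-line check before submitting is worthwhile. The values below mirror the example above:&lt;br /&gt;

```shell
NP=64; PARTS_PER_PROC=16; TARGET_NPARTS=1024   # example values from above

# The product of mpirun ranks and parts-per-rank must equal the
# number of target partitions, or the interpolation run will fail.
if [ $((NP * PARTS_PER_PROC)) -eq "$TARGET_NPARTS" ]; then
    echo "partition count ok"
else
    echo "mismatch: $((NP * PARTS_PER_PROC)) != $TARGET_NPARTS"
fi
```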
&lt;br /&gt;
==== 4. Interpolate the solution in PHASTA ====&lt;br /&gt;
&lt;br /&gt;
This step will take the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files and load them as initial conditions.&lt;br /&gt;
&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory into the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory. &lt;br /&gt;
# Add/uncomment &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; &lt;br /&gt;
# Run PHASTA for a few timesteps and write out &amp;lt;code&amp;gt;restart-dat.[target nprocs]&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Remove/comment out the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; line from the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;&lt;br /&gt;
# The new restart files have the interpolated solution&lt;br /&gt;
#* Note that if you forget to remove the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; statement from &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;, PHASTA will overwrite the existing solution in the restart files&lt;br /&gt;
&lt;br /&gt;
== Turbulent, Compressible, Unstructured Mesh ==&lt;br /&gt;
&lt;br /&gt;
Currently, the only version of PHASTA that is set up to handle this type of solution transfer is the &amp;lt;code&amp;gt;conrad54418/connor_primitive&amp;lt;/code&amp;gt; branch (as of 6/22/22). Creating the &amp;lt;code&amp;gt;solInterp.1&amp;lt;/code&amp;gt; file can be automated via a script or done manually (see below).&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (scripted) ===&lt;br /&gt;
&lt;br /&gt;
Scripts have been made for use with the compressible, but not turbulent, version of the code. An example folder with the scripts included can be found at &amp;lt;code&amp;gt;/project/tutorials/ParaviewSolutionTransfer&amp;lt;/code&amp;gt;. The three scripts needed are &amp;lt;code&amp;gt;interpolateSol.py&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pvCSV2customSLN_Nproc_prim.m&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;parRunAll.sh&amp;lt;/code&amp;gt;. The only one of the three you need to run is &amp;lt;code&amp;gt;parRunAll.sh&amp;lt;/code&amp;gt;, which can be found in &amp;lt;code&amp;gt;targetFolder/solutionInterp&amp;lt;/code&amp;gt;. You then need to run PHASTA via the &amp;lt;code&amp;gt;runPhasta.sh&amp;lt;/code&amp;gt; script, which will produce a &amp;lt;code&amp;gt;restart.1.1&amp;lt;/code&amp;gt; file containing the transferred solution on the new mesh. The manual section below explains what the scripts are doing.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (manual) ===&lt;br /&gt;
&lt;br /&gt;
# Load existing solution into ParaView. Use the 'merge blocks' filter to convert to serial case.&lt;br /&gt;
# Load the target case .pht file into ParaView&lt;br /&gt;
# Use the 'Resample from dataset' filter and select the source and target blocks accordingly. Note that ParaView's naming is the reverse of this page's: ParaView's &amp;quot;source&amp;quot; is the mesh whose coordinates will receive the solution (our target, the new mesh), while its &amp;quot;input&amp;quot; is the mesh that already holds the solution values to interpolate from (here, the MergeBlocks output, which this page calls the source).&lt;br /&gt;
# Save the output as a .csv file. Write a single time step and select scientific notation with 12 decimals. NOTE: sometimes ParaView will write zeros where it cannot find a sufficiently close point during the solution transfer. In that case, manually edit the .csv file to replace the zero pressure and temperature values with realistic ones.&lt;br /&gt;
# The .csv file can now be reformatted and renamed with MATLAB to match the expected form of solInterp.1.&lt;br /&gt;
# Advance PHASTA one step in serial, then convert to desired number of processors using Chef.&lt;br /&gt;
&lt;br /&gt;
==== Alternative to Using MATLAB for Reformatting ====&lt;br /&gt;
&lt;br /&gt;
For those wanting to skip the MATLAB step, &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt; can also do the necessary column manipulation.&lt;br /&gt;
&lt;br /&gt;
Copy your file (in case you goof up):&lt;br /&gt;
 cp PVinterp0.csv test.dat&lt;br /&gt;
Remove the header line:&lt;br /&gt;
 sed -i 1,1d test.dat&lt;br /&gt;
Replace the commas with spaces:&lt;br /&gt;
 sed -i 's/,/\ /g' test.dat&lt;br /&gt;
Rearrange the columns to be what solInterp.1 expects. It is a good idea to run &amp;lt;code&amp;gt;head -1 PVinterp0.csv&amp;lt;/code&amp;gt; first, as you may have more or fewer fields than shown here; use that header to find the column numbers (starting from 1, not 0) for x, y, z, p, u, v, w.&lt;br /&gt;
 awk '{print $6,$7,$8,$1,$2,$3,$4}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
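&lt;br /&gt;
End to end, the reformatting above can be scripted in one go. A sketch using a stand-in export whose header happens to match the column order assumed by the &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt; command; check your own header first, since your column numbers may differ:&lt;br /&gt;

```shell
# Stand-in for PVinterp0.csv: header, then one row in p,u,v,w,?,x,y,z order.
printf 'p,u,v,w,vtk,x,y,z\n9.0,1.0,2.0,3.0,0,4.0,5.0,6.0\n' > PVinterp0.csv

cp PVinterp0.csv test.dat    # work on a copy, keep the original safe
sed -i 1,1d test.dat         # drop the header line
sed -i 's/,/ /g' test.dat    # commas -> spaces
# Reorder to x y z p u v w using column numbers read off the header above.
awk '{print $6,$7,$8,$1,$2,$3,$4}' test.dat > solInterp.1

cat solInterp.1   # -> 4.0 5.0 6.0 9.0 1.0 2.0 3.0
```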
Don't forget to put the result into a directory called &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; and to turn on the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; flag in &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; for the single step described above. Finally, if you are worried about that one step altering your solution, recent versions of the code can take&lt;br /&gt;
      iexec : 0&lt;br /&gt;
or&lt;br /&gt;
     Number of Timesteps: 0&lt;br /&gt;
to avoid taking any actual steps. Note, however, that only very recent versions of the code have the &amp;lt;code&amp;gt;iexec&amp;lt;/code&amp;gt; conditional moved after the loading of the interpolated solution, but it should be easy to figure out where to move that conditional. Alternatively, the second option also skips the time stepping and writes the solution after applying the boundary conditions, which is useful for confirming that the intended BCs are set (&amp;lt;code&amp;gt;iexec: 0&amp;lt;/code&amp;gt; will not detect this).&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=2056</id>
		<title>NAS</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=2056"/>
				<updated>2024-05-02T18:39:05Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Running TotalView on NAS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ Category: Compute Facilities]]&lt;br /&gt;
Wiki for information related to the '''NASA Advanced Supercomputing''' ('''NAS''') facility.&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
! Key&lt;br /&gt;
! Value&lt;br /&gt;
! Notes&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Machines&lt;br /&gt;
| Pleiades&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Lou&lt;br /&gt;
| Storage and Analysis&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Electra&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Endeavour&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Merope&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Aitken&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Job Submission System&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/portable-batch-system-(pbs)-overview_126.html PBS]&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Facility Documentation&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/ Support Knowledgebase]&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== How-To's ==&lt;br /&gt;
&lt;br /&gt;
=== How-To's in Separate Wiki's ===&lt;br /&gt;
&lt;br /&gt;
* [[Setting_Default_File_Permissions#NAS|Setting Default File Permissions]]&lt;br /&gt;
&lt;br /&gt;
=== Backup Data from Scratch Directories===&lt;br /&gt;
&lt;br /&gt;
This is done simply by copying data from the &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories to your home directory on Lou (&amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;). The &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories are mounted onto &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;, so transfers should be done on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It is recommended to mirror the directory structure of your &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directory on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt; so the data can be easily restored to its original state. This is especially important if you use symlinks, as they are path-dependent and will break if either the source file or the symlink itself is not in the correct location.&lt;br /&gt;
&lt;br /&gt;
This can be done with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt;, but it is recommended to use NASA's in-house utility &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; will automatically perform parallel file transfers, data integrity checks and repairs, and syncing features similar to &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Commands:'''&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc -r -d --sync /nobackup/jrwrigh7/models/STGFlatPlate/STFM_Tet_dz4-10_dx15 .&lt;br /&gt;
&lt;br /&gt;
This will copy the directory &amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt; to the current location (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;). The flags are as follows:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-r&amp;lt;/code&amp;gt;: Recursively copy the source directory&lt;br /&gt;
* &amp;lt;code&amp;gt;-d&amp;lt;/code&amp;gt;: Create required directories that don't already exist. Equivalent of the &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; flag for &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;--sync&amp;lt;/code&amp;gt;: Only copy over &amp;quot;new&amp;quot; files, where &amp;quot;new&amp;quot; are any changes to the modification time or file size. &lt;br /&gt;
** If a file exists on destination (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;), but not source (&amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt;), it will not be copied back to source nor will it be deleted to match the state of source.&lt;br /&gt;
&lt;br /&gt;
Once this command is submitted, the transfer process runs in the background. Progress can be viewed by running &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. Additionally, you will receive an email when the transfer job is completed.&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc --stop --id [shiftc job ID]&lt;br /&gt;
&lt;br /&gt;
This will stop the given shiftc job. The &amp;lt;code&amp;gt;[shiftc job ID]&amp;lt;/code&amp;gt; is the same number that appears beside the output of &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
More documentation for &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; can be found in its man page (&amp;lt;code&amp;gt;man shiftc&amp;lt;/code&amp;gt;) and on [https://www.nas.nasa.gov/hecc/support/kb/shift-transfer-tool-overview_300.html NAS's documentation website].&lt;br /&gt;
&lt;br /&gt;
=== Control MPI Rank Placement ===&lt;br /&gt;
==== Rank 1 Solo Node ====&lt;br /&gt;
To make the rank 1 MPI process take a node on its own, put this in the PBS directives:&lt;br /&gt;
&lt;br /&gt;
 #PBS -l select=1:mpiprocs=1:model=sky_ele+1:mpiprocs=40:model=sky_ele&lt;br /&gt;
&lt;br /&gt;
This will request 2 nodes: One will have the rank 1 process all by itself, and the other will have 40 MPI Processes (for all 40 CPU cores available on &amp;lt;code&amp;gt;sky_ele&amp;lt;/code&amp;gt; nodes). &lt;br /&gt;
&lt;br /&gt;
====Distribute Non-First Rank MPI Processes====&lt;br /&gt;
For controlling the placement of non-first rank MPI processes, use the &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; utility.&lt;br /&gt;
&lt;br /&gt;
For example, if we have requested 4 nodes and want 10 MPI processes per node, the &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; command needs to be modified to the following:&lt;br /&gt;
&lt;br /&gt;
 mpiexec -np 40 /u/scicon/tools/bin/mbind.x -n10 [executable]&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; is also socket aware, so it will distribute processes evenly between nodes ''and'' between CPUs in each node (NAS nodes have 2 CPUs per node).&lt;br /&gt;
For more information on &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt;, see its help flag (&amp;lt;code&amp;gt;mbind.x -help&amp;lt;/code&amp;gt;) or [https://www.nas.nasa.gov/hecc/support/kb/using-the-mbind-tool-for-pinning_288.html NAS's documentation website].&lt;br /&gt;
&lt;br /&gt;
=== Common commands ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;node_stats.sh&amp;lt;/code&amp;gt;: Displays how many nodes are available or actively running jobs&lt;br /&gt;
* &amp;lt;code&amp;gt;tracejobssh&amp;lt;/code&amp;gt;: Helps to answer &amp;quot;Why isn't my job running?&amp;quot;. Part of the [https://github.com/PHASTA/utilities git repo].&lt;br /&gt;
&lt;br /&gt;
=== See Priority &amp;quot;Score&amp;quot; in Queue ===&lt;br /&gt;
&lt;br /&gt;
To see your priority &amp;quot;score&amp;quot; in PBS, use &amp;lt;code&amp;gt;qstat -W o=+pri&amp;lt;/code&amp;gt; to add the &amp;quot;Priority&amp;quot; column to the output of &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== Priority Scoring (as of 2021-01-22) ====&lt;br /&gt;
&lt;br /&gt;
* Job priority score grows by 1 every 12 hours&lt;br /&gt;
* We are capped at a max score of 20 per job&lt;br /&gt;
** Note that other users/groups using NAS may start with higher priority and grow higher than 20&lt;br /&gt;
** Result is that it's quite difficult to get large jobs running&lt;br /&gt;
* If you don't have any jobs running, you get an additional +10 to the score&lt;br /&gt;
** This score bump is removed as soon as you have a running job&lt;br /&gt;
&lt;br /&gt;
=== Compiling ===&lt;br /&gt;
* Generally, you will want to use &amp;lt;code&amp;gt;module load hpe-mpi/mpt comp-intel&amp;lt;/code&amp;gt; for compiling&lt;br /&gt;
* Sometimes, &amp;lt;code&amp;gt;mpi{cc,cxx,f90}&amp;lt;/code&amp;gt; will not pick the Intel compilers by default. You can check this by running &amp;lt;code&amp;gt;mpi{cc,cxx,f90} --version&amp;lt;/code&amp;gt; to verify the compiler it links to.&lt;br /&gt;
** To fix this, you can set &amp;lt;code&amp;gt;export MPICC_CC=icc MPICXX_CXX=icpc MPIF90_F90=ifort&amp;lt;/code&amp;gt; to force it to use the Intel compilers&lt;br /&gt;
&lt;br /&gt;
=== Running TotalView on NAS ===&lt;br /&gt;
&lt;br /&gt;
Using TotalView on NAS starts with adding a configuration file which will enable port forwarding on the viz nodes. On the viz nodes, run these commands:&lt;br /&gt;
&lt;br /&gt;
 cd ~/.ssh/&lt;br /&gt;
 cp ~kjansen/sshconfig config&lt;br /&gt;
&lt;br /&gt;
Now, login to NAS sfe with X forwarding:&lt;br /&gt;
&lt;br /&gt;
 ssh -X [nas username here]@sfe6.nas.nasa.gov&lt;br /&gt;
&lt;br /&gt;
You'll need to enter your password and NAS secure passcode. Next transfer to NAS pfe, also with X forwarding:&lt;br /&gt;
&lt;br /&gt;
 ssh -X pfe&lt;br /&gt;
&lt;br /&gt;
You'll need to again enter a NAS secure passcode. The next step is optional. If you want to use the older TotalView interface that looks like the one on the viz nodes, run this command:&lt;br /&gt;
&lt;br /&gt;
 echo false &amp;gt; ~/.totalview/.tvnewui&lt;br /&gt;
&lt;br /&gt;
Now start an interactive job on NAS pfe with the following command:&lt;br /&gt;
&lt;br /&gt;
 qsub -X -I -q devel -lselect=1:ncpus=8:model=sky_ele,walltime=2:00:00&lt;br /&gt;
&lt;br /&gt;
The number of CPUs (8) should match the number of processes for the case you'll be debugging. Now, in the interactive session on NAS pfe, load the environment that matches the build of your code. In my case it was:&lt;br /&gt;
&lt;br /&gt;
 module load mpi-hpe/mpt&lt;br /&gt;
 module load comp-intel/2020.4.304&lt;br /&gt;
&lt;br /&gt;
It is important to note that the build of PHASTA on NAS that you want to debug must be built as the debug version. This will allow you to place breakpoints in the code using TotalView. PHASTA can be built as the debug version by setting the &amp;quot;-DCMAKE_BUILD_TYPE=Debug \&amp;quot; flag in the cmake file. Now, navigate to the directory in nobackup where you have your case setup to run. Define the path to the PHASTA build by executing this command:&lt;br /&gt;
&lt;br /&gt;
 export PHASTA_CONFIG=[path to PHASTA build]&lt;br /&gt;
&lt;br /&gt;
Now load the TotalView module:&lt;br /&gt;
&lt;br /&gt;
 module load totalview/2023.4.16&lt;br /&gt;
&lt;br /&gt;
You may need to manually remove the &amp;quot;doubleRun-check&amp;quot; folder from the &amp;quot;procs_case&amp;quot; folder before running PHASTA. Now, to run PHASTA and debug with TotalView, execute this command:&lt;br /&gt;
&lt;br /&gt;
 totalview mpiexec_mpt.real -a -np 8 $PHASTA_CONFIG/bin/phastaC.exe&lt;br /&gt;
&lt;br /&gt;
A TotalView feature that is useful while running on NAS is &amp;quot;rescan libraries&amp;quot; which can be found in the &amp;quot;File&amp;quot; drop-down menu. You can select this option after recompiling the PHASTA code which will allow you to restart the job in TotalView without having to close and reopen the application, saving time.&lt;br /&gt;
&lt;br /&gt;
With that, you should be all set! For more information about using TotalView on NAS you can visit this website: https://www.nas.nasa.gov/hecc/support/kb/totalview_95.html&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=2055</id>
		<title>NAS</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=2055"/>
				<updated>2024-05-02T18:38:41Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Running TotalView on NAS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ Category: Compute Facilities]]&lt;br /&gt;
Wiki for information related to the '''NASA Advanced Supercomputing''' ('''NAS''') facility.&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
! Key&lt;br /&gt;
! Value&lt;br /&gt;
! Notes&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Machines&lt;br /&gt;
| Pleiades&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Lou&lt;br /&gt;
| Storage and Analysis&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Electra&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Endeavour&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Merope&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Aitken&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Job Submission System&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/portable-batch-system-(pbs)-overview_126.html PBS]&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Facility Documentation&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/ Support Knowledgebase]&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== How-To's ==&lt;br /&gt;
&lt;br /&gt;
=== How-To's in Separate Wiki's ===&lt;br /&gt;
&lt;br /&gt;
* [[Setting_Default_File_Permissions#NAS|Setting Default File Permissions]]&lt;br /&gt;
&lt;br /&gt;
=== Backup Data from Scratch Directories===&lt;br /&gt;
&lt;br /&gt;
This is done simply by copying data from the &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories to your home directory on Lou (&amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;). The &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories are mounted onto &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;, so transfers should be done on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It is recommended to mirror the directory structure of your &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directory on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt; so that the data can easily be restored to its original state. This is especially important if you use symlinks (as they are path dependent and will break if either the source file or the symlink itself is not in the correct location).&lt;br /&gt;
&lt;br /&gt;
This can be done with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt;, but it is recommended to use NASA's in-house utility &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; will automatically perform parallel file transfers, data integrity checks and repairs, and syncing features similar to &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Commands:'''&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc -r -d --sync /nobackup/jrwrigh7/models/STGFlatPlate/STFM_Tet_dz4-10_dx15 .&lt;br /&gt;
&lt;br /&gt;
This will copy the directory &amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt; to the current location (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;). The flags are as follows:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-r&amp;lt;/code&amp;gt;: Recursively copy files and directories from the source&lt;br /&gt;
* &amp;lt;code&amp;gt;-d&amp;lt;/code&amp;gt;: Create required directories that don't already exist. Equivalent of the &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; flag for &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;--sync&amp;lt;/code&amp;gt;: Only copy over &amp;quot;new&amp;quot; files, where &amp;quot;new&amp;quot; means any change to the modification time or file size. &lt;br /&gt;
** If a file exists on destination (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;), but not source (&amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt;), it will not be copied back to source nor will it be deleted to match the state of source.&lt;br /&gt;
&lt;br /&gt;
Once this command is submitted, the transfer process will be backgrounded. Progress can be viewed by running &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. Additionally, you will receive an email when the transfer job is completed.&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc --stop --id [shiftc job ID]&lt;br /&gt;
&lt;br /&gt;
This will stop the given shiftc job. The &amp;lt;code&amp;gt;[shiftc job ID]&amp;lt;/code&amp;gt; is the same number that appears beside the output of &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
More documentation for &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; can be found in its man page (&amp;lt;code&amp;gt;man shiftc&amp;lt;/code&amp;gt;) and on [https://www.nas.nasa.gov/hecc/support/kb/shift-transfer-tool-overview_300.html NAS's documentation website].&lt;br /&gt;
&lt;br /&gt;
=== Control MPI Rank Placement ===&lt;br /&gt;
==== Rank 1 Solo Node ====&lt;br /&gt;
To make the rank 1 MPI process take a node on its own, put this in the PBS directives:&lt;br /&gt;
&lt;br /&gt;
 #PBS -l select=1:mpiprocs=1:model=sky_ele+1:mpiprocs=40:model=sky_ele&lt;br /&gt;
&lt;br /&gt;
This will request 2 nodes: One will have the rank 1 process all by itself, and the other will have 40 MPI Processes (for all 40 CPU cores available on &amp;lt;code&amp;gt;sky_ele&amp;lt;/code&amp;gt; nodes). &lt;br /&gt;
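As a rough sketch, the directive above might sit in a complete PBS script like the following (the queue name, walltime, and executable name are placeholders, not taken from this page):&lt;br /&gt;

```shell
# Hypothetical PBS script sketch: the first rank gets a node to itself,
# and the remaining 40 ranks fill a second sky_ele node (1 + 40 = 41 total).
#PBS -l select=1:mpiprocs=1:model=sky_ele+1:mpiprocs=40:model=sky_ele
#PBS -l walltime=2:00:00
#PBS -q normal

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR
mpiexec -np 41 ./your_executable
```
&lt;br /&gt;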
&lt;br /&gt;
====Distribute Non-First Rank MPI Processes====&lt;br /&gt;
For controlling the placement of non-first rank MPI processes, use the &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; utility.&lt;br /&gt;
&lt;br /&gt;
For example, if we have requested 4 nodes and want 10 MPI processes per node, the &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; command needs to be modified to the following:&lt;br /&gt;
&lt;br /&gt;
 mpiexec -np 40 /u/scicon/tools/bin/mbind.x -n10 [executable]&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; is also socket aware, so it will distribute processes evenly between nodes ''and'' between CPUs in each node (NAS nodes have 2 CPUs per node).&lt;br /&gt;
For more information on &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt;, see its help flag (&amp;lt;code&amp;gt;mbind.x -help&amp;lt;/code&amp;gt;) or [https://www.nas.nasa.gov/hecc/support/kb/using-the-mbind-tool-for-pinning_288.html NAS's documentation website].&lt;br /&gt;
&lt;br /&gt;
=== Common commands ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;node_stats.sh&amp;lt;/code&amp;gt;: Displays how many nodes are available or actively running jobs&lt;br /&gt;
* &amp;lt;code&amp;gt;tracejobssh&amp;lt;/code&amp;gt;: Helps to answer &amp;quot;Why isn't my job running?&amp;quot;. Part of the [https://github.com/PHASTA/utilities git repo].&lt;br /&gt;
&lt;br /&gt;
=== See Priority &amp;quot;Score&amp;quot; in Queue ===&lt;br /&gt;
&lt;br /&gt;
To see your priority &amp;quot;score&amp;quot; in PBS, use &amp;lt;code&amp;gt;qstat -W o=+pri&amp;lt;/code&amp;gt; to add the &amp;quot;Priority&amp;quot; column to the output of &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== Priority Scoring (as of 2021-01-22) ====&lt;br /&gt;
&lt;br /&gt;
* Job priority score grows by 1 every 12 hours&lt;br /&gt;
* We are capped at a max score of 20 per job&lt;br /&gt;
** Note that other users/groups using NAS may start with higher priority and grow higher than 20&lt;br /&gt;
** Result is that it's quite difficult to get large jobs running&lt;br /&gt;
* If you don't have any jobs running, you get an additional +10 to the score&lt;br /&gt;
** This score bump is removed as soon as you have a running job&lt;br /&gt;
&lt;br /&gt;
=== Compiling ===&lt;br /&gt;
* You will generally want to use &amp;lt;code&amp;gt;module load hpe-mpi/mpt comp-intel&amp;lt;/code&amp;gt; for compiling&lt;br /&gt;
* Sometimes, &amp;lt;code&amp;gt;mpi{cc,cxx,f90}&amp;lt;/code&amp;gt; will not pick the Intel compilers by default. You can check this by running &amp;lt;code&amp;gt;mpi{cc,cxx,f90} --version&amp;lt;/code&amp;gt; to verify the compiler it links to.&lt;br /&gt;
** To fix this, you can set &amp;lt;code&amp;gt;export MPICC_CC=icc MPICXX_CXX=icpc MPIF90_F90=ifort&amp;lt;/code&amp;gt; to force it to use the Intel compilers&lt;br /&gt;
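The checks described in the bullets above can be combined into a short session sketch (module names follow this page; the exact module versions on NAS may differ):&lt;br /&gt;

```shell
# Sketch of setting up and verifying the compile environment on NAS.
module load hpe-mpi/mpt comp-intel

# Check which compiler the MPI wrappers actually invoke.
mpicc --version
mpif90 --version

# If the wrappers report GCC instead of Intel, force the Intel compilers:
export MPICC_CC=icc MPICXX_CXX=icpc MPIF90_F90=ifort
```
&lt;br /&gt;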
&lt;br /&gt;
=== Running TotalView on NAS ===&lt;br /&gt;
&lt;br /&gt;
Using TotalView on NAS starts with adding a configuration file which will enable port forwarding on the viz nodes. On the viz nodes, run these commands:&lt;br /&gt;
&lt;br /&gt;
 cd ~/.ssh/&lt;br /&gt;
 cp ~kjansen/sshconfig config&lt;br /&gt;
&lt;br /&gt;
Now, login to NAS sfe with X forwarding:&lt;br /&gt;
&lt;br /&gt;
 ssh -X [nas username here]@sfe6.nas.nasa.gov&lt;br /&gt;
&lt;br /&gt;
You'll need to enter your password and NAS secure passcode. Next transfer to NAS pfe, also with X forwarding:&lt;br /&gt;
&lt;br /&gt;
 ssh -X pfe&lt;br /&gt;
&lt;br /&gt;
You'll need to again enter a NAS secure passcode. The next step is optional. If you want to use the older TotalView interface that looks like the one on the viz nodes, run this command:&lt;br /&gt;
&lt;br /&gt;
 echo false &amp;gt; ~/.totalview/.tvnewui&lt;br /&gt;
&lt;br /&gt;
Now start an interactive job on NAS pfe with the following command:&lt;br /&gt;
&lt;br /&gt;
 qsub -X -I -q devel -lselect=1:ncpus=8:model=sky_ele,walltime=2:00:00&lt;br /&gt;
&lt;br /&gt;
The number of cpus (8) should match the number of processes for the case you'll be debugging. Now on the interactive session on NAS pfe, load the correct environment that matches the build of your code. In my case it was:&lt;br /&gt;
&lt;br /&gt;
 module load mpi-hpe/mpt&lt;br /&gt;
 module load comp-intel/2020.4.304&lt;br /&gt;
&lt;br /&gt;
It is important to note that the build of PHASTA on NAS that you want to debug must be a debug build; this allows you to place breakpoints in the code using TotalView. PHASTA can be built as a debug version by setting &amp;lt;code&amp;gt;-DCMAKE_BUILD_TYPE=Debug&amp;lt;/code&amp;gt; in the CMake configuration. Now, navigate to the directory in nobackup where you have your case set up to run. Define the path to the PHASTA build by executing this command:&lt;br /&gt;
&lt;br /&gt;
 export PHASTA_CONFIG=[path to PHASTA build]&lt;br /&gt;
&lt;br /&gt;
Now load the TotalView module:&lt;br /&gt;
&lt;br /&gt;
 module load totalview/2023.4.16&lt;br /&gt;
&lt;br /&gt;
You may need to manually remove the &amp;quot;doubleRun-check&amp;quot; folder from the &amp;quot;procs_case&amp;quot; folder before running PHASTA. Now, to run PHASTA and debug with TotalView, execute this command:&lt;br /&gt;
&lt;br /&gt;
 totalview mpiexec_mpt.real -a -np 8 $PHASTA_CONFIG/bin/phastaC.exe&lt;br /&gt;
&lt;br /&gt;
One useful TotalView feature to use while running on NAS is &amp;quot;rescan libraries&amp;quot; which can be found in the &amp;quot;File&amp;quot; drop-down menu. You can select this option after recompiling the PHASTA code which will allow you to restart the job in TotalView without having to close and reopen the application, saving time.&lt;br /&gt;
&lt;br /&gt;
With that, you should be all set! For more information about using TotalView on NAS you can visit this website: https://www.nas.nasa.gov/hecc/support/kb/totalview_95.html&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=2054</id>
		<title>NAS</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=2054"/>
				<updated>2024-05-02T18:35:56Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Running TotalView on NAS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ Category: Compute Facilities]]&lt;br /&gt;
Wiki for information related to the '''NASA Advanced Supercomputing''' ('''NAS''') facility.&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
! Key&lt;br /&gt;
! Value&lt;br /&gt;
! Notes&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Machines&lt;br /&gt;
| Pleiades&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Lou&lt;br /&gt;
| Storage and Analysis&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Electra&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Endeavour&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Merope&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Aitken&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Job Submission System&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/portable-batch-system-(pbs)-overview_126.html PBS]&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Facility Documentation&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/ Support Knowledgebase]&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== How-To's ==&lt;br /&gt;
&lt;br /&gt;
=== How-To's in Separate Wikis ===&lt;br /&gt;
&lt;br /&gt;
* [[Setting_Default_File_Permissions#NAS|Setting Default File Permissions]]&lt;br /&gt;
&lt;br /&gt;
=== Backup Data from Scratch Directories===&lt;br /&gt;
&lt;br /&gt;
This is done simply by copying data from the &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories to your home directory on Lou (&amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;). The &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories are mounted onto &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;, so transfers should be done on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It is recommended to mirror the directory structure of your &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directory on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt; so that the data can easily be restored to its original state. This is especially important if you use symlinks (as they are path dependent and will break if either the source file or the symlink itself is not in the correct location).&lt;br /&gt;
&lt;br /&gt;
This can be done with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt;, but it is recommended to use NASA's in-house utility &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; will automatically perform parallel file transfers, data integrity checks and repairs, and syncing features similar to &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Commands:'''&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc -r -d --sync /nobackup/jrwrigh7/models/STGFlatPlate/STFM_Tet_dz4-10_dx15 .&lt;br /&gt;
&lt;br /&gt;
This will copy the directory &amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt; to the current location (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;). The flags are as follows:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-r&amp;lt;/code&amp;gt;: Recursively copy files and directories from the source&lt;br /&gt;
* &amp;lt;code&amp;gt;-d&amp;lt;/code&amp;gt;: Create required directories that don't already exist. Equivalent of the &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; flag for &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;--sync&amp;lt;/code&amp;gt;: Only copy over &amp;quot;new&amp;quot; files, where &amp;quot;new&amp;quot; means any change to the modification time or file size. &lt;br /&gt;
** If a file exists on destination (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;), but not source (&amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt;), it will not be copied back to source nor will it be deleted to match the state of source.&lt;br /&gt;
&lt;br /&gt;
Once this command is submitted, the transfer process will be backgrounded. Progress can be viewed by running &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. Additionally, you will receive an email when the transfer job is completed.&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc --stop --id [shiftc job ID]&lt;br /&gt;
&lt;br /&gt;
This will stop the given shiftc job. The &amp;lt;code&amp;gt;[shiftc job ID]&amp;lt;/code&amp;gt; is the same number that appears beside the output of &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
More documentation for &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; can be found in its man page (&amp;lt;code&amp;gt;man shiftc&amp;lt;/code&amp;gt;) and on [https://www.nas.nasa.gov/hecc/support/kb/shift-transfer-tool-overview_300.html NAS's documentation website].&lt;br /&gt;
&lt;br /&gt;
=== Control MPI Rank Placement ===&lt;br /&gt;
==== Rank 1 Solo Node ====&lt;br /&gt;
To make the rank 1 MPI process take a node on its own, put this in the PBS directives:&lt;br /&gt;
&lt;br /&gt;
 #PBS -l select=1:mpiprocs=1:model=sky_ele+1:mpiprocs=40:model=sky_ele&lt;br /&gt;
&lt;br /&gt;
This will request 2 nodes: One will have the rank 1 process all by itself, and the other will have 40 MPI Processes (for all 40 CPU cores available on &amp;lt;code&amp;gt;sky_ele&amp;lt;/code&amp;gt; nodes). &lt;br /&gt;
&lt;br /&gt;
====Distribute Non-First Rank MPI Processes====&lt;br /&gt;
For controlling the placement of non-first rank MPI processes, use the &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; utility.&lt;br /&gt;
&lt;br /&gt;
For example, if we have requested 4 nodes and want 10 MPI processes per node, the &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; command needs to be modified to the following:&lt;br /&gt;
&lt;br /&gt;
 mpiexec -np 40 /u/scicon/tools/bin/mbind.x -n10 [executable]&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; is also socket aware, so it will distribute processes evenly between nodes ''and'' between CPUs in each node (NAS nodes have 2 CPUs per node).&lt;br /&gt;
For more information on &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt;, see its help flag (&amp;lt;code&amp;gt;mbind.x -help&amp;lt;/code&amp;gt;) or [https://www.nas.nasa.gov/hecc/support/kb/using-the-mbind-tool-for-pinning_288.html NAS's documentation website].&lt;br /&gt;
&lt;br /&gt;
=== Common commands ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;node_stats.sh&amp;lt;/code&amp;gt;: Displays how many nodes are available or actively running jobs&lt;br /&gt;
* &amp;lt;code&amp;gt;tracejobssh&amp;lt;/code&amp;gt;: Helps to answer &amp;quot;Why isn't my job running?&amp;quot;. Part of the [https://github.com/PHASTA/utilities git repo].&lt;br /&gt;
&lt;br /&gt;
=== See Priority &amp;quot;Score&amp;quot; in Queue ===&lt;br /&gt;
&lt;br /&gt;
To see your priority &amp;quot;score&amp;quot; in PBS, use &amp;lt;code&amp;gt;qstat -W o=+pri&amp;lt;/code&amp;gt; to add the &amp;quot;Priority&amp;quot; column to the output of &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== Priority Scoring (as of 2021-01-22) ====&lt;br /&gt;
&lt;br /&gt;
* Job priority score grows by 1 every 12 hours&lt;br /&gt;
* We are capped at a max score of 20 per job&lt;br /&gt;
** Note that other users/groups using NAS may start with higher priority and grow higher than 20&lt;br /&gt;
** Result is that it's quite difficult to get large jobs running&lt;br /&gt;
* If you don't have any jobs running, you get an additional +10 to the score&lt;br /&gt;
** This score bump is removed as soon as you have a running job&lt;br /&gt;
&lt;br /&gt;
=== Compiling ===&lt;br /&gt;
* You will generally want to use &amp;lt;code&amp;gt;module load hpe-mpi/mpt comp-intel&amp;lt;/code&amp;gt; for compiling&lt;br /&gt;
* Sometimes, &amp;lt;code&amp;gt;mpi{cc,cxx,f90}&amp;lt;/code&amp;gt; will not pick the Intel compilers by default. You can check this by running &amp;lt;code&amp;gt;mpi{cc,cxx,f90} --version&amp;lt;/code&amp;gt; to verify the compiler it links to.&lt;br /&gt;
** To fix this, you can set &amp;lt;code&amp;gt;export MPICC_CC=icc MPICXX_CXX=icpc MPIF90_F90=ifort&amp;lt;/code&amp;gt; to force it to use the Intel compilers&lt;br /&gt;
&lt;br /&gt;
=== Running TotalView on NAS ===&lt;br /&gt;
&lt;br /&gt;
Using TotalView on NAS starts with adding a configuration file which will enable port forwarding on the viz nodes. On the viz nodes, run these commands:&lt;br /&gt;
&lt;br /&gt;
 cd ~/.ssh/&lt;br /&gt;
 cp ~kjansen/sshconfig config&lt;br /&gt;
&lt;br /&gt;
Now, login to NAS sfe with X forwarding:&lt;br /&gt;
&lt;br /&gt;
 ssh -X [nas username here]@sfe6.nas.nasa.gov&lt;br /&gt;
&lt;br /&gt;
You'll need to enter your password and NAS secure passcode. Next transfer to NAS pfe, also with X forwarding:&lt;br /&gt;
&lt;br /&gt;
 ssh -X pfe&lt;br /&gt;
&lt;br /&gt;
You'll need to again enter a NAS secure passcode. The next step is optional. If you want to use the older TotalView interface that looks like the one on the viz nodes, run this command:&lt;br /&gt;
&lt;br /&gt;
 echo false &amp;gt; ~/.totalview/.tvnewui&lt;br /&gt;
&lt;br /&gt;
Now start an interactive job on NAS pfe with the following command:&lt;br /&gt;
&lt;br /&gt;
 qsub -X -I -q devel -lselect=1:ncpus=8:model=sky_ele,walltime=2:00:00&lt;br /&gt;
&lt;br /&gt;
The number of cpus (8) should match the number of processes for the case you'll be debugging. Now on the interactive session on NAS pfe, load the correct environment that matches the build of your code. In my case it was:&lt;br /&gt;
&lt;br /&gt;
 module load mpi-hpe/mpt&lt;br /&gt;
 module load comp-intel/2020.4.304&lt;br /&gt;
&lt;br /&gt;
It is important to note that the build of PHASTA on NAS that you want to debug must be a debug build; this allows you to place breakpoints in the code using TotalView. PHASTA can be built as a debug version by setting &amp;lt;code&amp;gt;-DCMAKE_BUILD_TYPE=Debug&amp;lt;/code&amp;gt; in the CMake configuration. Now, navigate to the directory in nobackup where you have your case set up to run. Define the path to the PHASTA build by executing this command:&lt;br /&gt;
&lt;br /&gt;
 export PHASTA_CONFIG=[path to PHASTA build]&lt;br /&gt;
&lt;br /&gt;
Now load the TotalView module:&lt;br /&gt;
&lt;br /&gt;
 module load totalview/2023.4.16&lt;br /&gt;
&lt;br /&gt;
You may need to manually remove the &amp;quot;doubleRun-check&amp;quot; folder from the &amp;quot;procs_case&amp;quot; folder before running PHASTA. Now, to run PHASTA and debug with TotalView, execute this command:&lt;br /&gt;
&lt;br /&gt;
 totalview mpiexec_mpt.real -a -np 8 $PHASTA_CONFIG/bin/phastaC.exe&lt;br /&gt;
&lt;br /&gt;
With that, you should be all set! For more information about using TotalView on NAS you can visit this website: https://www.nas.nasa.gov/hecc/support/kb/totalview_95.html&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=2053</id>
		<title>NAS</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=2053"/>
				<updated>2024-05-02T18:30:37Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Running TotalView on NAS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ Category: Compute Facilities]]&lt;br /&gt;
Wiki for information related to the '''NASA Advanced Supercomputing''' ('''NAS''') facility.&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
! Key&lt;br /&gt;
! Value&lt;br /&gt;
! Notes&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Machines&lt;br /&gt;
| Pleiades&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Lou&lt;br /&gt;
| Storage and Analysis&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Electra&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Endeavour&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Merope&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Aitken&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Job Submission System&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/portable-batch-system-(pbs)-overview_126.html PBS]&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Facility Documentation&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/ Support Knowledgebase]&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== How-To's ==&lt;br /&gt;
&lt;br /&gt;
=== How-To's in Separate Wikis ===&lt;br /&gt;
&lt;br /&gt;
* [[Setting_Default_File_Permissions#NAS|Setting Default File Permissions]]&lt;br /&gt;
&lt;br /&gt;
=== Backup Data from Scratch Directories===&lt;br /&gt;
&lt;br /&gt;
This is done simply by copying data from the &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories to your home directory on Lou (&amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;). The &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories are mounted onto &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;, so transfers should be done on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It is recommended to mirror the directory structure of your &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directory on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt; so that the data can easily be restored to its original state. This is especially important if you use symlinks (as they are path dependent and will break if either the source file or the symlink itself is not in the correct location).&lt;br /&gt;
&lt;br /&gt;
This can be done with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt;, but it is recommended to use NASA's in-house utility &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; will automatically perform parallel file transfers, data integrity checks and repairs, and syncing features similar to &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Commands:'''&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc -r -d --sync /nobackup/jrwrigh7/models/STGFlatPlate/STFM_Tet_dz4-10_dx15 .&lt;br /&gt;
&lt;br /&gt;
This will copy the directory &amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt; to the current location (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;). The flags are as follows:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-r&amp;lt;/code&amp;gt;: Recursively copy files and directories from the source&lt;br /&gt;
* &amp;lt;code&amp;gt;-d&amp;lt;/code&amp;gt;: Create required directories that don't already exist. Equivalent of the &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; flag for &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;--sync&amp;lt;/code&amp;gt;: Only copy over &amp;quot;new&amp;quot; files, where &amp;quot;new&amp;quot; means any change to the modification time or file size. &lt;br /&gt;
** If a file exists on destination (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;), but not source (&amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt;), it will not be copied back to source nor will it be deleted to match the state of source.&lt;br /&gt;
&lt;br /&gt;
Once this command is submitted, the transfer process will be backgrounded. Progress can be viewed by running &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. Additionally, you will receive an email when the transfer job is completed.&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc --stop --id [shiftc job ID]&lt;br /&gt;
&lt;br /&gt;
This will stop the given shiftc job. The &amp;lt;code&amp;gt;[shiftc job ID]&amp;lt;/code&amp;gt; is the same number that appears beside the output of &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
More documentation for &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; can be found in its man page (&amp;lt;code&amp;gt;man shiftc&amp;lt;/code&amp;gt;) and on [https://www.nas.nasa.gov/hecc/support/kb/shift-transfer-tool-overview_300.html NAS's documentation website].&lt;br /&gt;
&lt;br /&gt;
=== Control MPI Rank Placement ===&lt;br /&gt;
==== Rank 1 Solo Node ====&lt;br /&gt;
To make the rank 1 MPI process take a node on its own, put this in the PBS directives:&lt;br /&gt;
&lt;br /&gt;
 #PBS -l select=1:mpiprocs=1:model=sky_ele+1:mpiprocs=40:model=sky_ele&lt;br /&gt;
&lt;br /&gt;
This will request 2 nodes: One will have the rank 1 process all by itself, and the other will have 40 MPI Processes (for all 40 CPU cores available on &amp;lt;code&amp;gt;sky_ele&amp;lt;/code&amp;gt; nodes). &lt;br /&gt;
&lt;br /&gt;
====Distribute Non-First Rank MPI Processes====&lt;br /&gt;
For controlling the placement of non-first rank MPI processes, use the &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; utility.&lt;br /&gt;
&lt;br /&gt;
For example, if we have requested 4 nodes and want 10 MPI processes per node, the &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; command needs to be modified to the following:&lt;br /&gt;
&lt;br /&gt;
 mpiexec -np 40 /u/scicon/tools/bin/mbind.x -n10 [executable]&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; is also socket aware, so it will distribute processes evenly between nodes ''and'' between CPUs in each node (NAS nodes have 2 CPUs per node).&lt;br /&gt;
For more information on &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt;, see its help flag (&amp;lt;code&amp;gt;mbind.x -help&amp;lt;/code&amp;gt;) or [https://www.nas.nasa.gov/hecc/support/kb/using-the-mbind-tool-for-pinning_288.html NAS's documentation website].&lt;br /&gt;
&lt;br /&gt;
=== Common commands ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;node_stats.sh&amp;lt;/code&amp;gt;: Displays how many nodes are available or actively running jobs&lt;br /&gt;
* &amp;lt;code&amp;gt;tracejobssh&amp;lt;/code&amp;gt;: Helps to answer &amp;quot;Why isn't my job running?&amp;quot;. Part of the [https://github.com/PHASTA/utilities git repo].&lt;br /&gt;
&lt;br /&gt;
=== See Priority &amp;quot;Score&amp;quot; in Queue ===&lt;br /&gt;
&lt;br /&gt;
To see your priority &amp;quot;score&amp;quot; in PBS, use &amp;lt;code&amp;gt;qstat -W o=+pri&amp;lt;/code&amp;gt; to add the &amp;quot;Priority&amp;quot; column to the output of &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== Priority Scoring (as of 2021-01-22) ====&lt;br /&gt;
&lt;br /&gt;
* Job priority score grows by 1 every 12 hours&lt;br /&gt;
* We are capped at a max score of 20 per job&lt;br /&gt;
** Note that other users/groups using NAS may start with higher priority and grow higher than 20&lt;br /&gt;
** Result is that it's quite difficult to get large jobs running&lt;br /&gt;
* If you don't have any jobs running, you get an additional +10 to the score&lt;br /&gt;
** This score bump is removed as soon as you have a running job&lt;br /&gt;
&lt;br /&gt;
=== Compiling ===&lt;br /&gt;
* You will generally want to use &amp;lt;code&amp;gt;module load hpe-mpi/mpt comp-intel&amp;lt;/code&amp;gt; for compiling&lt;br /&gt;
* Sometimes, &amp;lt;code&amp;gt;mpi{cc,cxx,f90}&amp;lt;/code&amp;gt; will not pick the Intel compilers by default. You can check this by running &amp;lt;code&amp;gt;mpi{cc,cxx,f90} --version&amp;lt;/code&amp;gt; to verify the compiler it links to.&lt;br /&gt;
** To fix this, you can set &amp;lt;code&amp;gt;export MPICC_CC=icc MPICXX_CXX=icpc MPIF90_F90=ifort&amp;lt;/code&amp;gt; to force it to use the Intel compilers&lt;br /&gt;
&lt;br /&gt;
=== Running TotalView on NAS ===&lt;br /&gt;
&lt;br /&gt;
Using TotalView on NAS starts with adding a configuration file which will enable port forwarding on the viz nodes. On the viz nodes, run these commands:&lt;br /&gt;
&lt;br /&gt;
 cd ~/.ssh/&lt;br /&gt;
 cp ~kjansen/sshconfig config&lt;br /&gt;
&lt;br /&gt;
Now, login to NAS sfe with X forwarding:&lt;br /&gt;
&lt;br /&gt;
 ssh -X [nas username here]@sfe6.nas.nasa.gov&lt;br /&gt;
&lt;br /&gt;
You'll need to enter your password and NAS secure passcode. Next transfer to NAS pfe, also with X forwarding:&lt;br /&gt;
&lt;br /&gt;
 ssh -X pfe&lt;br /&gt;
&lt;br /&gt;
You'll need to again enter a NAS secure passcode. The next step is optional. If you want to use the older TotalView interface that looks like the one on the viz nodes, run this command:&lt;br /&gt;
&lt;br /&gt;
 echo false &amp;gt; ~/.totalview/.tvnewui&lt;br /&gt;
&lt;br /&gt;
Now start an interactive job on NAS pfe with the following command:&lt;br /&gt;
&lt;br /&gt;
 qsub -X -I -q devel -lselect=1:ncpus=8:model=sky_ele,walltime=2:00:00&lt;br /&gt;
&lt;br /&gt;
The number of cpus (8) should match the number of processes for the case you'll be debugging. Now on the interactive session on NAS pfe, load the correct environment that matches the build of your code. In my case it was:&lt;br /&gt;
&lt;br /&gt;
 module load mpi-hpe/mpt&lt;br /&gt;
 module load comp-intel/2020.4.304&lt;br /&gt;
&lt;br /&gt;
It is important to note that the build of PHASTA on NAS that you want to debug must be a debug build; this allows you to place breakpoints in the code using TotalView. PHASTA can be built as a debug version by setting &amp;lt;code&amp;gt;-DCMAKE_BUILD_TYPE=Debug&amp;lt;/code&amp;gt; in the CMake configuration. Now, navigate to the directory in nobackup where you have your case set up to run. Define the path to the PHASTA build by executing this command:&lt;br /&gt;
&lt;br /&gt;
 export PHASTA_CONFIG=[path to PHASTA build]&lt;br /&gt;
&lt;br /&gt;
Now load the TotalView module:&lt;br /&gt;
&lt;br /&gt;
 module load totalview/2023.4.16&lt;br /&gt;
&lt;br /&gt;
You may need to manually remove the &amp;quot;doubleRun-check&amp;quot; folder from the &amp;quot;procs_case&amp;quot; folder before running PHASTA. Now, to run PHASTA and debug with TotalView, execute this command:&lt;br /&gt;
&lt;br /&gt;
 totalview mpiexec_mpt.real -a -np 8 $PHASTA_CONFIG/bin/phastaC.exe&lt;br /&gt;
&lt;br /&gt;
With that, you should be all set! For more information about using TotalView on NAS, you can visit this website: https://www.nas.nasa.gov/hecc/support/kb/totalview_95.html&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=2052</id>
		<title>NAS</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=2052"/>
				<updated>2024-05-02T18:29:18Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Running TotalView on NAS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ Category: Compute Facilities]]&lt;br /&gt;
Wiki for information related to the '''NASA Advanced Supercomputing''' ('''NAS''') facility.&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
! Key&lt;br /&gt;
! Value&lt;br /&gt;
! Notes&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Machines&lt;br /&gt;
| Pleiades&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Lou&lt;br /&gt;
| Storage and Analysis&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Electra&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Endeavour&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Merope&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Aitken&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Job Submission System&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/portable-batch-system-(pbs)-overview_126.html PBS]&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Facility Documentation&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/ Support Knowledgebase]&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== How-Tos ==&lt;br /&gt;
&lt;br /&gt;
=== How-Tos in Separate Wikis ===&lt;br /&gt;
&lt;br /&gt;
* [[Setting_Default_File_Permissions#NAS|Setting Default File Permissions]]&lt;br /&gt;
&lt;br /&gt;
=== Backup Data from Scratch Directories===&lt;br /&gt;
&lt;br /&gt;
This is done simply by copying data from the &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories to your home directory on Lou (&amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;). The &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories are mounted onto &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;, so transfers should be done on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It is recommended to mirror the directory structure of your &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directory on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt; so the data can easily be restored to its original state. This is especially important if you use symlinks (as they are path-dependent and will break if either the source file or the symlink itself is not in the correct location).&lt;br /&gt;
&lt;br /&gt;
This can be done with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt;, but it is recommended to use NASA's in-house utility &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; will automatically perform parallel file transfers, data integrity checks and repairs, and syncing features similar to &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Commands:'''&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc -r -d --sync /nobackup/jrwrigh7/models/STGFlatPlate/STFM_Tet_dz4-10_dx15 .&lt;br /&gt;
&lt;br /&gt;
This will copy the directory &amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt; to the current location (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;). The flags are as follows:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-r&amp;lt;/code&amp;gt;: Recursively copy files from the source&lt;br /&gt;
* &amp;lt;code&amp;gt;-d&amp;lt;/code&amp;gt;: Create required directories that don't already exist. Equivalent of the &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; flag for &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;--sync&amp;lt;/code&amp;gt;: Only copy over &amp;quot;new&amp;quot; files, where &amp;quot;new&amp;quot; means any change to the modification time or file size.&lt;br /&gt;
** If a file exists on the destination (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;) but not the source (&amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt;), it will not be copied back to the source, nor will it be deleted to match the state of the source.&lt;br /&gt;
&lt;br /&gt;
Once this command is submitted, the transfer process will be backgrounded. Progress can be viewed by running &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. Additionally, you will receive an email when the transfer job is completed.&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc --stop --id [shiftc job ID]&lt;br /&gt;
&lt;br /&gt;
This will stop the given shiftc job. The &amp;lt;code&amp;gt;[shiftc job ID]&amp;lt;/code&amp;gt; is the same number that appears beside the output of &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
More documentation for &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; can be found in its man page (&amp;lt;code&amp;gt;man shiftc&amp;lt;/code&amp;gt;) and on [https://www.nas.nasa.gov/hecc/support/kb/shift-transfer-tool-overview_300.html NAS's documentation website].&lt;br /&gt;
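The backup recipe above can be condensed into a small sketch. Everything below is illustrative: the case path is a hypothetical placeholder (not a real NAS location), and the assembled command is only echoed so it can be reviewed before being run on lfe:

```shell
# Sketch of the scratch-backup recipe above. The case path below is a
# hypothetical placeholder, not a real NAS location.
SRC="/nobackup/$USER/models/MyCase"   # scratch directory to back up
DEST="."                              # mirror location under the Lou home dir
CMD="shiftc -r -d --sync $SRC $DEST"
echo "$CMD"     # review the assembled command before launching it on lfe
# After launching, watch progress with: shiftc --monitor
```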
&lt;br /&gt;
=== Control MPI Rank Placement ===&lt;br /&gt;
==== Rank 1 Solo Node ====&lt;br /&gt;
To make the rank 1 MPI process take a node on its own, put this in the PBS directives:&lt;br /&gt;
&lt;br /&gt;
 #PBS -l select=1:mpiprocs=1:model=sky_ele+1:mpiprocs=40:model=sky_ele&lt;br /&gt;
&lt;br /&gt;
This will request 2 nodes: One will have the rank 1 process all by itself, and the other will have 40 MPI Processes (for all 40 CPU cores available on &amp;lt;code&amp;gt;sky_ele&amp;lt;/code&amp;gt; nodes). &lt;br /&gt;
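In context, the directive might appear in a job script like this minimal sketch (the job name, queue, walltime, and executable are placeholders, not values from this wiki):

```shell
#PBS -N solo_rank_sketch
#PBS -q normal
#PBS -l walltime=2:00:00
#PBS -l select=1:mpiprocs=1:model=sky_ele+1:mpiprocs=40:model=sky_ele

cd $PBS_O_WORKDIR
# 41 ranks total: rank 1 alone on the first node, 40 on the second.
mpiexec -np 41 ./your_executable
```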
&lt;br /&gt;
====Distribute Non-First Rank MPI Processes====&lt;br /&gt;
For controlling the placement of non-first rank MPI processes, use the &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; utility.&lt;br /&gt;
&lt;br /&gt;
For example, if we have requested 4 nodes and want 10 MPI processes per node, the &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; command needs to be modified to the following:&lt;br /&gt;
&lt;br /&gt;
 mpiexec -np 40 /u/scicon/tools/bin/mbind.x -n10 [executable]&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; is also socket aware, so it will distribute processes evenly between nodes ''and'' between CPUs in each node (NAS nodes have 2 CPUs per node).&lt;br /&gt;
For more information on &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt;, see its help flag (&amp;lt;code&amp;gt;mbind.x -help&amp;lt;/code&amp;gt;) or [https://www.nas.nasa.gov/hecc/support/kb/using-the-mbind-tool-for-pinning_288.html NAS's documentation website].&lt;br /&gt;
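The arithmetic in the example above (4 nodes with 10 processes per node, giving 40 total ranks) can be sketched as follows; the executable name is a placeholder:

```shell
# Sketch: derive the mpiexec/mbind.x arguments from a node count and a
# per-node process count. The executable path is a placeholder.
NODES=4
PER_NODE=10
TOTAL=$((NODES * PER_NODE))   # -np must equal nodes * procs-per-node
CMD="mpiexec -np $TOTAL /u/scicon/tools/bin/mbind.x -n$PER_NODE ./your_executable"
echo "$CMD"
```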
&lt;br /&gt;
=== Common commands ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;node_stats.sh&amp;lt;/code&amp;gt;: Displays how many nodes are available or actively running jobs&lt;br /&gt;
* &amp;lt;code&amp;gt;tracejobssh&amp;lt;/code&amp;gt;: Helps to answer &amp;quot;Why isn't my job running?&amp;quot;. Part of the [https://github.com/PHASTA/utilities git repo].&lt;br /&gt;
&lt;br /&gt;
=== See Priority &amp;quot;Score&amp;quot; in Queue ===&lt;br /&gt;
&lt;br /&gt;
To see your priority &amp;quot;score&amp;quot; in PBS, use &amp;lt;code&amp;gt;qstat -W o=+pri&amp;lt;/code&amp;gt; to add the &amp;quot;Priority&amp;quot; column to the output of &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== Priority Scoring (as of 2021-01-22) ====&lt;br /&gt;
&lt;br /&gt;
* Job priority score grows by 1 every 12 hours&lt;br /&gt;
* We are capped at a max score of 20 per job&lt;br /&gt;
** Note that other users/groups using NAS may start with higher priority and grow higher than 20&lt;br /&gt;
** Result is that it's quite difficult to get large jobs running&lt;br /&gt;
* If you don't have any jobs running, you get an additional +10 to the score&lt;br /&gt;
** This score bump is removed as soon as you have a running job&lt;br /&gt;
&lt;br /&gt;
=== Compiling ===&lt;br /&gt;
* Generally, you will want to use &amp;lt;code&amp;gt;module load mpi-hpe/mpt comp-intel&amp;lt;/code&amp;gt; for compiling&lt;br /&gt;
* Sometimes, &amp;lt;code&amp;gt;mpi{cc,cxx,f90}&amp;lt;/code&amp;gt; will not pick the Intel compilers by default. You can check this by running &amp;lt;code&amp;gt;mpi{cc,cxx,f90} --version&amp;lt;/code&amp;gt; to verify which compiler they link to.&lt;br /&gt;
** To fix this, you can set &amp;lt;code&amp;gt;export MPICC_CC=icc MPICXX_CXX=icpc MPIF90_F90=ifort&amp;lt;/code&amp;gt; to force it to use the Intel compilers&lt;br /&gt;
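A quick way to act on the bullets above is to check what each wrapper reports before exporting the override variables. This is only a sketch; the wrapper names are the common ones, not anything guaranteed by NAS:

```shell
# Sketch: report which backend compiler each MPI wrapper resolves to.
check_wrapper() {
  if command -v "$1" >/dev/null; then
    echo "$1 -> $("$1" --version 2>/dev/null | head -n 1)"
  else
    echo "$1 not found in PATH"
  fi
}
for w in mpicc mpicxx mpif90; do check_wrapper "$w"; done
# If a wrapper reports GCC instead of Intel:
#   export MPICC_CC=icc MPICXX_CXX=icpc MPIF90_F90=ifort
```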
&lt;br /&gt;
=== Running TotalView on NAS ===&lt;br /&gt;
&lt;br /&gt;
Using TotalView on NAS starts with adding a configuration file which will enable port forwarding on the viz nodes. On the viz nodes, run these commands:&lt;br /&gt;
&lt;br /&gt;
 cd ~/.ssh/&lt;br /&gt;
 cp ~kjansen/sshconfig config&lt;br /&gt;
&lt;br /&gt;
Now, login to NAS sfe with X forwarding:&lt;br /&gt;
&lt;br /&gt;
 ssh -X [nas username here]@sfe6.nas.nasa.gov&lt;br /&gt;
&lt;br /&gt;
You'll need to enter your password and NAS secure passcode. Next transfer to NAS pfe, also with X forwarding:&lt;br /&gt;
&lt;br /&gt;
 ssh -X pfe&lt;br /&gt;
&lt;br /&gt;
You'll need to enter a NAS secure passcode again. The next step is optional. If you want to use the older TotalView interface that looks like the one on the viz nodes, run this command:&lt;br /&gt;
&lt;br /&gt;
 echo false &amp;gt; ~/.totalview/.tvnewui&lt;br /&gt;
&lt;br /&gt;
Now start an interactive job on NAS pfe with the following command:&lt;br /&gt;
&lt;br /&gt;
 qsub -X -I -q devel -lselect=1:ncpus=8:model=sky_ele,walltime=2:00:00&lt;br /&gt;
&lt;br /&gt;
The number of CPUs (8) should match the number of processes for the case you'll be debugging. Now, in the interactive session on NAS pfe, load the environment that matches the build of your code. In my case it was:&lt;br /&gt;
&lt;br /&gt;
 module load mpi-hpe/mpt&lt;br /&gt;
 module load comp-intel/2020.4.304&lt;br /&gt;
&lt;br /&gt;
It is important to note that the build of PHASTA on NAS that you want to debug must be a debug build; this allows you to place breakpoints in the code using TotalView. PHASTA can be built as the debug version by setting the &amp;quot;-DCMAKE_BUILD_TYPE=Debug&amp;quot; flag when configuring with CMake. Now, navigate to the directory in nobackup where you have your case set up to run. Define the path to the PHASTA build by executing this command:&lt;br /&gt;
&lt;br /&gt;
 export PHASTA_CONFIG=[path to PHASTA build]&lt;br /&gt;
&lt;br /&gt;
Now load the TotalView module:&lt;br /&gt;
&lt;br /&gt;
 module load totalview/2023.4.16&lt;br /&gt;
&lt;br /&gt;
You may need to manually remove the &amp;quot;doubleRun-check&amp;quot; folder from the &amp;quot;procs_case&amp;quot; folder before running PHASTA. Now, to run PHASTA and debug with TotalView, run this command:&lt;br /&gt;
&lt;br /&gt;
 totalview mpiexec_mpt.real -a -np 8 $PHASTA_CONFIG/bin/phastaC.exe&lt;br /&gt;
&lt;br /&gt;
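For quick reference, the whole session above condenses to the following sequence (all values are the examples from this walkthrough; the bracketed items must be filled in for your case):

```shell
ssh -X [nas username here]@sfe6.nas.nasa.gov     # log in to sfe
ssh -X pfe                                       # hop to pfe
qsub -X -I -q devel -lselect=1:ncpus=8:model=sky_ele,walltime=2:00:00
module load mpi-hpe/mpt comp-intel/2020.4.304 totalview/2023.4.16
export PHASTA_CONFIG=[path to PHASTA build]
totalview mpiexec_mpt.real -a -np 8 $PHASTA_CONFIG/bin/phastaC.exe
```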
With that, you should be all set! For more information about using TotalView on NAS, you can visit this website: https://www.nas.nasa.gov/hecc/support/kb/totalview_95.html&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=2051</id>
		<title>NAS</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=2051"/>
				<updated>2024-05-02T18:25:38Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ Category: Compute Facilities]]&lt;br /&gt;
Wiki for information related to the '''NASA Advanced Supercomputing''' ('''NAS''') facility.&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
! Key&lt;br /&gt;
! Value&lt;br /&gt;
! Notes&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Machines&lt;br /&gt;
| Pleiades&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Lou&lt;br /&gt;
| Storage and Analysis&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Electra&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Endeavour&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Merope&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Aitken&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Job Submission System&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/portable-batch-system-(pbs)-overview_126.html PBS]&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Facility Documentation&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/ Support Knowledgebase]&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== How-Tos ==&lt;br /&gt;
&lt;br /&gt;
=== How-Tos in Separate Wikis ===&lt;br /&gt;
&lt;br /&gt;
* [[Setting_Default_File_Permissions#NAS|Setting Default File Permissions]]&lt;br /&gt;
&lt;br /&gt;
=== Backup Data from Scratch Directories===&lt;br /&gt;
&lt;br /&gt;
This is done simply by copying data from the &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories to your home directory on Lou (&amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;). The &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories are mounted onto &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;, so transfers should be done on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It is recommended to mirror the directory structure of your &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directory on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt; so the data can easily be restored to its original state. This is especially important if you use symlinks (as they are path-dependent and will break if either the source file or the symlink itself is not in the correct location).&lt;br /&gt;
&lt;br /&gt;
This can be done with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt;, but it is recommended to use NASA's in-house utility &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; will automatically perform parallel file transfers, data integrity checks and repairs, and syncing features similar to &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Commands:'''&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc -r -d --sync /nobackup/jrwrigh7/models/STGFlatPlate/STFM_Tet_dz4-10_dx15 .&lt;br /&gt;
&lt;br /&gt;
This will copy the directory &amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt; to the current location (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;). The flags are as follows:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-r&amp;lt;/code&amp;gt;: Recursively copy files from the source&lt;br /&gt;
* &amp;lt;code&amp;gt;-d&amp;lt;/code&amp;gt;: Create required directories that don't already exist. Equivalent of the &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; flag for &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;--sync&amp;lt;/code&amp;gt;: Only copy over &amp;quot;new&amp;quot; files, where &amp;quot;new&amp;quot; means any change to the modification time or file size.&lt;br /&gt;
** If a file exists on the destination (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;) but not the source (&amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt;), it will not be copied back to the source, nor will it be deleted to match the state of the source.&lt;br /&gt;
&lt;br /&gt;
Once this command is submitted, the transfer process will be backgrounded. Progress can be viewed by running &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. Additionally, you will receive an email when the transfer job is completed.&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc --stop --id [shiftc job ID]&lt;br /&gt;
&lt;br /&gt;
This will stop the given shiftc job. The &amp;lt;code&amp;gt;[shiftc job ID]&amp;lt;/code&amp;gt; is the same number that appears beside the output of &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
More documentation for &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; can be found in its man page (&amp;lt;code&amp;gt;man shiftc&amp;lt;/code&amp;gt;) and on [https://www.nas.nasa.gov/hecc/support/kb/shift-transfer-tool-overview_300.html NAS's documentation website].&lt;br /&gt;
&lt;br /&gt;
=== Control MPI Rank Placement ===&lt;br /&gt;
==== Rank 1 Solo Node ====&lt;br /&gt;
To make the rank 1 MPI process take a node on its own, put this in the PBS directives:&lt;br /&gt;
&lt;br /&gt;
 #PBS -l select=1:mpiprocs=1:model=sky_ele+1:mpiprocs=40:model=sky_ele&lt;br /&gt;
&lt;br /&gt;
This will request 2 nodes: One will have the rank 1 process all by itself, and the other will have 40 MPI Processes (for all 40 CPU cores available on &amp;lt;code&amp;gt;sky_ele&amp;lt;/code&amp;gt; nodes). &lt;br /&gt;
&lt;br /&gt;
====Distribute Non-First Rank MPI Processes====&lt;br /&gt;
For controlling the placement of non-first rank MPI processes, use the &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; utility.&lt;br /&gt;
&lt;br /&gt;
For example, if we have requested 4 nodes and want 10 MPI processes per node, the &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; command needs to be modified to the following:&lt;br /&gt;
&lt;br /&gt;
 mpiexec -np 40 /u/scicon/tools/bin/mbind.x -n10 [executable]&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; is also socket aware, so it will distribute processes evenly between nodes ''and'' between CPUs in each node (NAS nodes have 2 CPUs per node).&lt;br /&gt;
For more information on &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt;, see its help flag (&amp;lt;code&amp;gt;mbind.x -help&amp;lt;/code&amp;gt;) or [https://www.nas.nasa.gov/hecc/support/kb/using-the-mbind-tool-for-pinning_288.html NAS's documentation website].&lt;br /&gt;
&lt;br /&gt;
=== Common commands ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;node_stats.sh&amp;lt;/code&amp;gt;: Displays how many nodes are available or actively running jobs&lt;br /&gt;
* &amp;lt;code&amp;gt;tracejobssh&amp;lt;/code&amp;gt;: Helps to answer &amp;quot;Why isn't my job running?&amp;quot;. Part of the [https://github.com/PHASTA/utilities git repo].&lt;br /&gt;
&lt;br /&gt;
=== See Priority &amp;quot;Score&amp;quot; in Queue ===&lt;br /&gt;
&lt;br /&gt;
To see your priority &amp;quot;score&amp;quot; in PBS, use &amp;lt;code&amp;gt;qstat -W o=+pri&amp;lt;/code&amp;gt; to add the &amp;quot;Priority&amp;quot; column to the output of &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== Priority Scoring (as of 2021-01-22) ====&lt;br /&gt;
&lt;br /&gt;
* Job priority score grows by 1 every 12 hours&lt;br /&gt;
* We are capped at a max score of 20 per job&lt;br /&gt;
** Note that other users/groups using NAS may start with higher priority and grow higher than 20&lt;br /&gt;
** Result is that it's quite difficult to get large jobs running&lt;br /&gt;
* If you don't have any jobs running, you get an additional +10 to the score&lt;br /&gt;
** This score bump is removed as soon as you have a running job&lt;br /&gt;
&lt;br /&gt;
=== Compiling ===&lt;br /&gt;
* Generally, you will want to use &amp;lt;code&amp;gt;module load mpi-hpe/mpt comp-intel&amp;lt;/code&amp;gt; for compiling&lt;br /&gt;
* Sometimes, &amp;lt;code&amp;gt;mpi{cc,cxx,f90}&amp;lt;/code&amp;gt; will not pick the Intel compilers by default. You can check this by running &amp;lt;code&amp;gt;mpi{cc,cxx,f90} --version&amp;lt;/code&amp;gt; to verify which compiler they link to.&lt;br /&gt;
** To fix this, you can set &amp;lt;code&amp;gt;export MPICC_CC=icc MPICXX_CXX=icpc MPIF90_F90=ifort&amp;lt;/code&amp;gt; to force it to use the Intel compilers&lt;br /&gt;
&lt;br /&gt;
=== Running TotalView on NAS ===&lt;br /&gt;
&lt;br /&gt;
Using TotalView on NAS starts with adding a configuration file which will enable port forwarding on the viz nodes. On the viz nodes, run these commands:&lt;br /&gt;
&lt;br /&gt;
 cd ~/.ssh/&lt;br /&gt;
 cp ~kjansen/sshconfig config&lt;br /&gt;
&lt;br /&gt;
Now, login to NAS sfe with X forwarding:&lt;br /&gt;
&lt;br /&gt;
 ssh -X [nas username here]@sfe6.nas.nasa.gov&lt;br /&gt;
&lt;br /&gt;
You'll need to enter your password and NAS secure passcode. Next transfer to NAS pfe, also with X forwarding:&lt;br /&gt;
&lt;br /&gt;
 ssh -X pfe&lt;br /&gt;
&lt;br /&gt;
You'll need to enter a NAS secure passcode again. The next step is optional. If you want to use the older TotalView interface that looks like the one on the viz nodes, run this command:&lt;br /&gt;
&lt;br /&gt;
 echo false &amp;gt; ~/.totalview/.tvnewui&lt;br /&gt;
&lt;br /&gt;
Now start an interactive job on NAS pfe with the following command:&lt;br /&gt;
&lt;br /&gt;
 qsub -X -I -q devel -lselect=1:ncpus=8:model=sky_ele,walltime=2:00:00&lt;br /&gt;
&lt;br /&gt;
The number of CPUs (8) should match the number of processes for the case you'll be debugging. Now, in the interactive session on NAS pfe, load the environment that matches the build of your code. In my case it was:&lt;br /&gt;
&lt;br /&gt;
 module load mpi-hpe/mpt&lt;br /&gt;
 module load comp-intel/2020.4.304&lt;br /&gt;
&lt;br /&gt;
It is important to note that the build of PHASTA on NAS that you want to debug must be a debug build; this allows you to place breakpoints in the code using TotalView. PHASTA can be built as the debug version by setting the &amp;quot;-DCMAKE_BUILD_TYPE=Debug&amp;quot; flag when configuring with CMake. Now, navigate to the directory in nobackup where you have your case set up to run. Define the path to the PHASTA build by running this command:&lt;br /&gt;
&lt;br /&gt;
 export PHASTA_CONFIG=[path to PHASTA build]&lt;br /&gt;
&lt;br /&gt;
Now load the TotalView module:&lt;br /&gt;
&lt;br /&gt;
 module load totalview/2023.4.16&lt;br /&gt;
&lt;br /&gt;
You may need to manually remove the &amp;quot;doubleRun-check&amp;quot; folder from the &amp;quot;procs_case&amp;quot; folder before running PHASTA. Now, to run PHASTA and debug with TotalView, run this command:&lt;br /&gt;
&lt;br /&gt;
 totalview mpiexec_mpt.real -a -np 8 $PHASTA_CONFIG/bin/phastaC.exe&lt;br /&gt;
&lt;br /&gt;
With that, you should be all set! For more information about using TotalView on NAS, you can watch the tutorial video that this page is based on or visit this website: https://www.nas.nasa.gov/hecc/support/kb/totalview_95.html&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=2050</id>
		<title>NAS</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=NAS&amp;diff=2050"/>
				<updated>2024-05-02T18:24:58Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ Category: Compute Facilities]]&lt;br /&gt;
Wiki for information related to the '''NASA Advanced Supercomputing''' ('''NAS''') facility.&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
! Key&lt;br /&gt;
! Value&lt;br /&gt;
! Notes&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Machines&lt;br /&gt;
| Pleiades&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Lou&lt;br /&gt;
| Storage and Analysis&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Electra&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Endeavour&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Merope&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Aitken&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Job Submission System&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/portable-batch-system-(pbs)-overview_126.html PBS]&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Facility Documentation&lt;br /&gt;
| [https://www.nas.nasa.gov/hecc/support/kb/ Support Knowledgebase]&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== How-Tos ==&lt;br /&gt;
&lt;br /&gt;
=== How-Tos in Separate Wikis ===&lt;br /&gt;
&lt;br /&gt;
* [[Setting_Default_File_Permissions#NAS|Setting Default File Permissions]]&lt;br /&gt;
&lt;br /&gt;
=== Backup Data from Scratch Directories===&lt;br /&gt;
&lt;br /&gt;
This is done simply by copying data from the &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories to your home directory on Lou (&amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;). The &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directories are mounted onto &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;, so transfers should be done on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It is recommended to mirror the directory structure of your &amp;lt;code&amp;gt;/nobackup/$USER&amp;lt;/code&amp;gt; directory on &amp;lt;code&amp;gt;lfe&amp;lt;/code&amp;gt; so the data can easily be restored to its original state. This is especially important if you use symlinks (as they are path-dependent and will break if either the source file or the symlink itself is not in the correct location).&lt;br /&gt;
&lt;br /&gt;
This can be done with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt;, but it is recommended to use NASA's in-house utility &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; will automatically perform parallel file transfers, data integrity checks and repairs, and syncing features similar to &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Commands:'''&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc -r -d --sync /nobackup/jrwrigh7/models/STGFlatPlate/STFM_Tet_dz4-10_dx15 .&lt;br /&gt;
&lt;br /&gt;
This will copy the directory &amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt; to the current location (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;). The flags are as follows:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-r&amp;lt;/code&amp;gt;: Recursively copy files from the source&lt;br /&gt;
* &amp;lt;code&amp;gt;-d&amp;lt;/code&amp;gt;: Create required directories that don't already exist. Equivalent of the &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; flag for &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;--sync&amp;lt;/code&amp;gt;: Only copy over &amp;quot;new&amp;quot; files, where &amp;quot;new&amp;quot; means any change to the modification time or file size.&lt;br /&gt;
** If a file exists on the destination (&amp;lt;code&amp;gt;.&amp;lt;/code&amp;gt;) but not the source (&amp;lt;code&amp;gt;STFM_Tet_dz4-10_dx15&amp;lt;/code&amp;gt;), it will not be copied back to the source, nor will it be deleted to match the state of the source.&lt;br /&gt;
&lt;br /&gt;
Once this command is submitted, the transfer process will be backgrounded. Progress can be viewed by running &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. Additionally, you will receive an email when the transfer job is completed.&lt;br /&gt;
&lt;br /&gt;
 jrwrigh7@lfe7: shiftc --stop --id [shiftc job ID]&lt;br /&gt;
&lt;br /&gt;
This will stop the given shiftc job. The &amp;lt;code&amp;gt;[shiftc job ID]&amp;lt;/code&amp;gt; is the same number that appears beside the output of &amp;lt;code&amp;gt;shiftc --monitor&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
More documentation for &amp;lt;code&amp;gt;shiftc&amp;lt;/code&amp;gt; can be found in its man page (&amp;lt;code&amp;gt;man shiftc&amp;lt;/code&amp;gt;) and on [https://www.nas.nasa.gov/hecc/support/kb/shift-transfer-tool-overview_300.html NAS's documentation website].&lt;br /&gt;
&lt;br /&gt;
=== Control MPI Rank Placement ===&lt;br /&gt;
==== Rank 1 Solo Node ====&lt;br /&gt;
To make the rank 1 MPI process take a node on its own, put this in the PBS directives:&lt;br /&gt;
&lt;br /&gt;
 #PBS -l select=1:mpiprocs=1:model=sky_ele+1:mpiprocs=40:model=sky_ele&lt;br /&gt;
&lt;br /&gt;
This will request 2 nodes: One will have the rank 1 process all by itself, and the other will have 40 MPI Processes (for all 40 CPU cores available on &amp;lt;code&amp;gt;sky_ele&amp;lt;/code&amp;gt; nodes). &lt;br /&gt;
&lt;br /&gt;
====Distribute Non-First Rank MPI Processes====&lt;br /&gt;
For controlling the placement of non-first rank MPI processes, use the &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; utility.&lt;br /&gt;
&lt;br /&gt;
For example, if we have requested 4 nodes and want 10 MPI processes per node, the &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; command needs to be modified to the following:&lt;br /&gt;
&lt;br /&gt;
 mpiexec -np 40 /u/scicon/tools/bin/mbind.x -n10 [executable]&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt; is also socket aware, so it will distribute processes evenly between nodes ''and'' between CPUs in each node (NAS nodes have 2 CPUs per node).&lt;br /&gt;
For more information on &amp;lt;code&amp;gt;mbind.x&amp;lt;/code&amp;gt;, see its help flag (&amp;lt;code&amp;gt;mbind.x -help&amp;lt;/code&amp;gt;) or [https://www.nas.nasa.gov/hecc/support/kb/using-the-mbind-tool-for-pinning_288.html NAS's documentation website].&lt;br /&gt;
&lt;br /&gt;
=== Common commands ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;node_stats.sh&amp;lt;/code&amp;gt;: Displays how many nodes are available or actively running jobs&lt;br /&gt;
* &amp;lt;code&amp;gt;tracejobssh&amp;lt;/code&amp;gt;: Helps to answer &amp;quot;Why isn't my job running?&amp;quot;. Part of the [https://github.com/PHASTA/utilities git repo].&lt;br /&gt;
&lt;br /&gt;
=== See Priority &amp;quot;Score&amp;quot; in Queue ===&lt;br /&gt;
&lt;br /&gt;
To see your priority &amp;quot;score&amp;quot; in PBS, use &amp;lt;code&amp;gt;qstat -W o=+pri&amp;lt;/code&amp;gt; to add the &amp;quot;Priority&amp;quot; column to the output of &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== Priority Scoring (as of 2021-01-22) ====&lt;br /&gt;
&lt;br /&gt;
* Job priority score grows by 1 every 12 hours&lt;br /&gt;
* We are capped at a max score of 20 per job&lt;br /&gt;
** Note that other users/groups using NAS may start with higher priority and grow higher than 20&lt;br /&gt;
** Result is that it's quite difficult to get large jobs running&lt;br /&gt;
* If you don't have any jobs running, you get an additional +10 to the score&lt;br /&gt;
** This score bump is removed as soon as you have a running job&lt;br /&gt;
&lt;br /&gt;
=== Compiling ===&lt;br /&gt;
* Generally, you will want to use &amp;lt;code&amp;gt;module load mpi-hpe/mpt comp-intel&amp;lt;/code&amp;gt; for compiling&lt;br /&gt;
* Sometimes, &amp;lt;code&amp;gt;mpi{cc,cxx,f90}&amp;lt;/code&amp;gt; will not pick the Intel compilers by default. You can check this by running &amp;lt;code&amp;gt;mpi{cc,cxx,f90} --version&amp;lt;/code&amp;gt; to verify which compiler they link to.&lt;br /&gt;
** To fix this, you can set &amp;lt;code&amp;gt;export MPICC_CC=icc MPICXX_CXX=icpc MPIF90_F90=ifort&amp;lt;/code&amp;gt; to force it to use the Intel compilers&lt;br /&gt;
&lt;br /&gt;
=== Running TotalView on NAS ===&lt;br /&gt;
&lt;br /&gt;
Using TotalView on NAS starts with adding a configuration file which will enable port forwarding on the viz nodes. On the viz nodes, run these commands:&lt;br /&gt;
&lt;br /&gt;
cd ~/.ssh/&lt;br /&gt;
cp ~kjansen/sshconfig config&lt;br /&gt;
&lt;br /&gt;
Now, login to NAS sfe with X forwarding:&lt;br /&gt;
&lt;br /&gt;
ssh -X [nas username here]@sfe6.nas.nasa.gov&lt;br /&gt;
&lt;br /&gt;
You'll need to enter your password and NAS secure passcode. Next transfer to NAS pfe, also with X forwarding:&lt;br /&gt;
&lt;br /&gt;
ssh -X pfe&lt;br /&gt;
&lt;br /&gt;
You'll need to again enter a NAS secure passcode. The next step is optional: if you want to use the older TotalView interface that looks like the one on the viz nodes, run this command:&lt;br /&gt;
&lt;br /&gt;
echo false &amp;gt; ~/.totalview/.tvnewui&lt;br /&gt;
&lt;br /&gt;
Now start an interactive job on NAS pfe with the following command:&lt;br /&gt;
&lt;br /&gt;
qsub -X -I -q devel -lselect=1:ncpus=8:model=sky_ele,walltime=2:00:00&lt;br /&gt;
&lt;br /&gt;
The number of CPUs (8) should match the number of processes for the case you'll be debugging. Now, in the interactive session on NAS pfe, load the environment that matches the build of your code. In my case it was:&lt;br /&gt;
&lt;br /&gt;
module load mpi-hpe/mpt&lt;br /&gt;
module load comp-intel/2020.4.304&lt;br /&gt;
&lt;br /&gt;
It is important to note that the build of PHASTA on NAS that you want to debug must be a debug build; this allows you to place breakpoints in the code using TotalView. PHASTA can be built in debug mode by setting the &amp;lt;code&amp;gt;-DCMAKE_BUILD_TYPE=Debug&amp;lt;/code&amp;gt; flag in the CMake configuration. Now, navigate to the directory in nobackup where you have your case set up to run. Define the path to the PHASTA build by running this command:&lt;br /&gt;
&lt;br /&gt;
export PHASTA_CONFIG=[path to PHASTA build]&lt;br /&gt;
&lt;br /&gt;
Now load the TotalView module:&lt;br /&gt;
&lt;br /&gt;
module load totalview/2023.4.16&lt;br /&gt;
&lt;br /&gt;
You may need to manually remove the &amp;quot;doubleRun-check&amp;quot; folder from the &amp;quot;procs_case&amp;quot; folder before running PHASTA. Now, to run PHASTA and debug with TotalView, run this command:&lt;br /&gt;
&lt;br /&gt;
totalview mpiexec_mpt.real -a -np 8 $PHASTA_CONFIG/bin/phastaC.exe&lt;br /&gt;
&lt;br /&gt;
With that, you should be all set! For more information about using TotalView on NAS, you can watch the tutorial video this page is based on, or visit this website:&lt;br /&gt;
&lt;br /&gt;
https://www.nas.nasa.gov/hecc/support/kb/totalview_95.html&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ALCF&amp;diff=2048</id>
		<title>ALCF</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ALCF&amp;diff=2048"/>
				<updated>2024-04-05T16:56:43Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Wiki page for information related to the '''Argonne Leadership Computing Facility''' ('''ALCF''') located at Argonne National Labs.&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
! Key&lt;br /&gt;
! Value&lt;br /&gt;
! Notes&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Machines&lt;br /&gt;
| Cooley&lt;br /&gt;
| Visualization&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | &lt;br /&gt;
| Theta&lt;br /&gt;
| Compute&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Job Submission System&lt;br /&gt;
| [https://xgitlab.cels.anl.gov/aig-public/cobalt Cobalt]&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; text-align:left;&amp;quot; | Facility Documentation&lt;br /&gt;
| [https://www.alcf.anl.gov/support-center ALCF Support Center]&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The availability of the Cooley nodes can be found [https://status.alcf.anl.gov/cooley/activity here]. The first sixteen nodes are available for checkout by PHASTA users.&lt;br /&gt;
&lt;br /&gt;
The availability of the Theta nodes can be found [https://status.alcf.anl.gov/theta/activity here].&lt;br /&gt;
&lt;br /&gt;
== How-To's ==&lt;br /&gt;
&lt;br /&gt;
=== Login ===&lt;br /&gt;
From the Viz nodes, connect to either Theta or Cooley at ALCF with the command(s):&lt;br /&gt;
&lt;br /&gt;
 ssh &amp;lt;username&amp;gt;@theta.alcf.anl.gov&lt;br /&gt;
&lt;br /&gt;
 ssh &amp;lt;username&amp;gt;@cooley.alcf.anl.gov&lt;br /&gt;
&lt;br /&gt;
It asks for a password; this is the 8-digit code in the MobilePASS+ app on your phone (instructions on how to set that up are given by ALCF when you create an account).&lt;br /&gt;
&lt;br /&gt;
=== Start Interactive Session on Cooley ===&lt;br /&gt;
&lt;br /&gt;
To start an interactive session, run this script:&lt;br /&gt;
&lt;br /&gt;
  /projects/cfdml_aesp/Viz/cooley/Viz-SyncIO/subIntNodesTime.sh 1 120&lt;br /&gt;
&lt;br /&gt;
This will reserve you one node for 120 minutes (this is the maximum time allowed by ALCF). You may need to edit the script to include the correct filesystem you are working with. This script includes &amp;quot;home&amp;quot;, &amp;quot;grand&amp;quot;, and &amp;quot;theta-fs0&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Copy Files From Viz Nodes To Cooley===&lt;br /&gt;
&lt;br /&gt;
  scp -r [your username here]@jumpgate-phasta.colorado.edu:[insert file(s)/folder here] .&lt;br /&gt;
&lt;br /&gt;
It will ask you for your password and verification; this is the same password used to login to jumpgate, not the 8-digit code from MobilePASS+.&lt;br /&gt;
&lt;br /&gt;
=== Copy Files From Cooley To Viz Nodes ===&lt;br /&gt;
&lt;br /&gt;
  scp -r [insert file(s)/folder here] [your username here]@jumpgate-phasta.colorado.edu:[insert destination folder here]&lt;br /&gt;
&lt;br /&gt;
It will ask you for your password and verification; this is the same password used to login to jumpgate, not the 8-digit code from MobilePASS+.&lt;br /&gt;
&lt;br /&gt;
=== Archiving Data to ALCF's Tape Drives ===&lt;br /&gt;
See [[ALCF/Archiving_Data_at_ALCF|Archiving Data at ALCF ]].&lt;br /&gt;
&lt;br /&gt;
== Subpages ==&lt;br /&gt;
{{Special:PrefixIndex/ALCF/}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Compute Facilities]]&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2023</id>
		<title>Interpolate Solution from Different Mesh</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Interpolate_Solution_from_Different_Mesh&amp;diff=2023"/>
				<updated>2024-01-21T21:32:59Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This will be a general page for how to interpolate solutions from different meshes onto a new mesh. &lt;br /&gt;
Those meshes are assumed to be of the same domain. &lt;br /&gt;
&lt;br /&gt;
The generic terms for the two meshes are &amp;quot;source&amp;quot; and &amp;quot;target&amp;quot;, where the source has the desired solution data and the target is the mesh that will be receiving it. &lt;br /&gt;
&lt;br /&gt;
== Laminar, Incompressible, Semi-structured Mesh ==&lt;br /&gt;
This section will assume that the source mesh is in a structured ijk form. In the future, this may be expanded to meshes created by MGEN.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview ===&lt;br /&gt;
&lt;br /&gt;
# A CSV of the solution must be created. This must be done on a serial running version of ParaView '''and''' must be done after a MergeBlocks filter is done.&lt;br /&gt;
# A special solution file is created via &amp;lt;code&amp;gt;Sort2StructuredGrid&amp;lt;/code&amp;gt;, which will take in the csv from step 1 and the ordered coordinates of the source file from Matlab. &lt;br /&gt;
#* Effectively, it creates a single solution file in the same ordering as the Matlab points. This matters because the executable used in the next step depends on a particular expected ordering; if the ordering is different, a new executable will need to be used or created. &lt;br /&gt;
# The interpolation is performed onto the new grid via &amp;lt;code&amp;gt;par3DInterp3&amp;lt;/code&amp;gt;, which creates &amp;lt;code&amp;gt;solInterp.&amp;lt;1-nparts&amp;gt;&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
# The interpolation is then used by PHASTA by setting &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;. &lt;br /&gt;
#* This should only be done for 1 timestep, as it will continue to reset the IC for all subsequent timesteps. The &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory needs to be symlinked into the &amp;lt;code&amp;gt;-procs_case&amp;lt;/code&amp;gt; directory for this to work.&lt;br /&gt;
&lt;br /&gt;
==== 1. Create CSV ====&lt;br /&gt;
&lt;br /&gt;
* Created using ParaView&lt;br /&gt;
* PV must be running in Serial mode&lt;br /&gt;
** Otherwise the CSV will not be in the correct order and possibly have duplicated points&lt;br /&gt;
# Load in source dataset&lt;br /&gt;
# Apply the MergeBlocks filter&lt;br /&gt;
# Save the dataset as a CSV with 12 digits of precision in scientific notation&lt;br /&gt;
#* Make sure the csv is in &amp;quot;pressure, u0, u1, u2, x0, x1, x2&amp;quot; format&lt;br /&gt;
#* This can be done by only loading the pressure and velocity fields into Paraview (either by editing the &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; or in the data load menu in Paraview).&lt;br /&gt;
# Replace the commas with spaces&lt;br /&gt;
#* Can use &amp;lt;code&amp;gt;vim&amp;lt;/code&amp;gt; or run &amp;lt;code&amp;gt;sed -i 's/,/\ /g' test.csv&amp;lt;/code&amp;gt;&lt;br /&gt;
#* Though the next step looks for a .csv extension, it is a Fortran formatted read and actually needs those commas replaced by spaces&lt;br /&gt;
# Remove the first line of the csv file&lt;br /&gt;
#* Done in vi or sed (&amp;lt;code&amp;gt;sed -i 1,1d test.csv&amp;lt;/code&amp;gt;) or tail (&amp;lt;code&amp;gt;tail -n +2 test.csv &amp;gt; trimmedLine1.csv&amp;lt;/code&amp;gt;) &lt;br /&gt;
#* Needed for the next program&lt;br /&gt;
#* Better yet, we should change the next code to read past that header line, and then delete this step once that is complete. We should also consider the solution in [https://stackoverflow.com/a/46451049/7564988 this StackOverflow answer] (HighPerformanceMark's answer), which shows how to build a data structure that could read the CSV lines directly in the next program and avoid all of this file manipulation with modern Fortran.&lt;br /&gt;
&lt;br /&gt;
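The header-removal and comma-replacement steps above can be combined into one short shell sequence; the two-line CSV here is just a stand-in for the real ParaView export:&lt;br /&gt;

```shell
# Stand-in for the ParaView export (header line plus one data row)
printf 'pressure,u0,u1,u2,x0,x1,x2\n1.0,2.0,3.0,4.0,0.1,0.2,0.3\n' > test.csv

# Drop the header line (same effect as: sed -i 1,1d test.csv)
tail -n +2 test.csv > trimmed.csv

# Replace commas with spaces for the Fortran formatted read
sed 's/,/ /g' trimmed.csv > spaced.csv

cat spaced.csv   # 1.0 2.0 3.0 4.0 0.1 0.2 0.3
```

Running this on a toy file first is a cheap way to verify the cleanup before touching the real export.&lt;br /&gt;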
==== 2. Create Structured Solution File ====&lt;br /&gt;
&lt;br /&gt;
'''Note:''' These instructions will be for the &amp;lt;code&amp;gt;parallelSortDNSzBinJames&amp;lt;/code&amp;gt; executable, which has some highly specific requirements and command inputs. &lt;br /&gt;
&lt;br /&gt;
This step will take the data from the source solution file and put it in a format/order that makes the interpolation process work much faster.&lt;br /&gt;
&lt;br /&gt;
# Symlink the source mesh's ordered coordinate file as &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may come from the files used to create the mesh (ie. for [[Tutorial_Video_Overviews#MatLabMeshAndConvert.mov|matchedNodeElementReader]])&lt;br /&gt;
#* ''(Untested)'' This may also be created using the coordinates from the solution file&lt;br /&gt;
# Rename/symlink csv to be the correct file name (in my specific case, it was &amp;lt;code&amp;gt;dnsSolution1procLongFort.csv&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Create an interactive job on whatever machine you're needing to run on (ALCF Cooley in this case)&lt;br /&gt;
# Load the appropriate MPI environment variables (&amp;lt;code&amp;gt;soft add +mvapich2&amp;lt;/code&amp;gt; for Cooley)&lt;br /&gt;
# Run the executable via &amp;lt;code&amp;gt;mpirun -np [nprocs] [executable path] [executable inputs]&amp;lt;/code&amp;gt; &lt;br /&gt;
#* This will produce &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Concatenate &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files '''in order''' into single &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; file&lt;br /&gt;
#* The individual &amp;lt;code&amp;gt;source.sln.{1..nprocs}&amp;lt;/code&amp;gt; files can be combined (in zsh at least) via &amp;lt;code&amp;gt;echo source.sln.{1..[MPIRanks]}| xargs cat &amp;gt; source.sln&amp;lt;/code&amp;gt; (or the probably equivalent &amp;lt;code&amp;gt;cat source.sln.{1..[MPIRanks]} &amp;gt; source.sln&amp;lt;/code&amp;gt;). Note these files '''must''' be concatenated in order of rank, otherwise the result will be out of sequence.&lt;br /&gt;
&lt;br /&gt;
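The rank-ordered concatenation can be sketched with small stand-in files (4 ranks here purely for illustration; the real &amp;lt;code&amp;gt;source.sln.N&amp;lt;/code&amp;gt; files come from the executable above):&lt;br /&gt;

```shell
# Create stand-ins for the per-rank output files source.sln.1 .. source.sln.4
for i in 1 2 3 4; do
  echo "data from rank $i" > source.sln.$i
done

# Concatenate in rank order; in bash/zsh, cat source.sln.{1..4} is equivalent
cat source.sln.1 source.sln.2 source.sln.3 source.sln.4 > source.sln

head -1 source.sln   # data from rank 1
```

With many ranks, be careful how the names expand: brace expansion ({1..N}) preserves numeric order, while a glob like source.sln.* sorts lexically (rank 10 before rank 2) and would scramble the file.&lt;br /&gt;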
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 24 /lus/theta-fs0/projects/PHASTA_aesp/Utilities/Sort2StructuredGrid/parallelSortDNSzBinJames 47822547 47822547 212 0.0291&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[nlines of csv] [nlines of ordered.crd] [number of elements in z] [z domain width]&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Note:''' These inputs are specific to this executable. Changing the executable will change which inputs are used&lt;br /&gt;
* Also note that the &amp;lt;code&amp;gt;[number of elements in z]&amp;lt;/code&amp;gt; is equivalent to &amp;lt;code&amp;gt;nsons&amp;lt;/code&amp;gt; - 1, i.e. the number of nodes in z minus 1.&lt;br /&gt;
&lt;br /&gt;
==== 3. Create Interpolated files ====&lt;br /&gt;
&lt;br /&gt;
This step will create the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files used by PHASTA to perform the interpolation.&lt;br /&gt;
&lt;br /&gt;
# Create &amp;lt;code&amp;gt;Interpolate.../[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory in the target's Chef directory and move to that directory&lt;br /&gt;
# Symlink the target's POSIX &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files (that were created by Chef) to the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory&lt;br /&gt;
#* The &amp;lt;code&amp;gt;geombc.[target nprocs]&amp;lt;/code&amp;gt; files should be laid out in exactly the fashion that they are in the Chef-created &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory, including if they're &amp;quot;fanned out&amp;quot;&lt;br /&gt;
# Create a directory called &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This may be corrected in the future, but currently if &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; is not present the job will fail&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;source.sln&amp;lt;/code&amp;gt; to the directory and the &amp;lt;code&amp;gt;ordered.crd&amp;lt;/code&amp;gt; file as &amp;lt;code&amp;gt;source.crd&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;phInterp&amp;lt;/code&amp;gt; via mpirun on an interactive job.&lt;br /&gt;
# This creates a series of &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files in the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory&lt;br /&gt;
&lt;br /&gt;
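Steps 1-4 might look like the following shell sketch; every path, the 4-partition count, and the &amp;lt;code&amp;gt;geombc.dat.N&amp;lt;/code&amp;gt; naming are hypothetical placeholders for your own Chef output:&lt;br /&gt;

```shell
# Hypothetical stand-ins for the Chef output and the sorted source data
mkdir -p chef-output
for i in 1 2 3 4; do touch chef-output/geombc.dat.$i; done
touch chef-output/source.sln chef-output/ordered.crd

# Step 1: create the interpolation case directory and move into it
mkdir -p Interpolate/4-procs_case/solnTarget   # step 3: solnTarget must exist
cd Interpolate/4-procs_case

# Step 2: symlink the POSIX geombc files from the Chef case
for i in 1 2 3 4; do ln -s ../../chef-output/geombc.dat.$i .; done

# Step 4: symlink source.sln as-is, and ordered.crd as source.crd
ln -s ../../chef-output/source.sln .
ln -s ../../chef-output/ordered.crd source.crd
```

After running &amp;lt;code&amp;gt;phInterp&amp;lt;/code&amp;gt; (step 5), the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory is where the &amp;lt;code&amp;gt;solInterp.N&amp;lt;/code&amp;gt; files should appear.&lt;br /&gt;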
The file format for &amp;lt;code&amp;gt;solInterp.N&amp;lt;/code&amp;gt; is quite simple. Each line corresponds to the node number in the partition and the file itself has 7 columns:&lt;br /&gt;
&lt;br /&gt;
 coord_x coord_y coord_z pressure velocity_x velocity_y velocity_z&lt;br /&gt;
&lt;br /&gt;
'''Example Command:''' &amp;lt;code&amp;gt;mpirun -np 64 /path/to/phInterp 16 799 281 213 0.452&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The inputs for this command are &amp;lt;code&amp;gt;[target parts per MPIProc] [source nx] [source ny] [source nz] [z Length]&amp;lt;/code&amp;gt;&lt;br /&gt;
* Note that the number of processes given to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; times the &amp;lt;code&amp;gt;[target parts per MPIProc]&amp;lt;/code&amp;gt; must be equal to the number of target partitions. In this case, the target partition was 1024, so 64*16 = 1024&lt;br /&gt;
* Each of the 64 MPI processes is given 16 partitions to interpolate.&lt;br /&gt;
&lt;br /&gt;
==== 4. Interpolate the solution in PHASTA ====&lt;br /&gt;
&lt;br /&gt;
This step will take the &amp;lt;code&amp;gt;solInterp.[target nprocs]&amp;lt;/code&amp;gt; files and load them as initial conditions.&lt;br /&gt;
&lt;br /&gt;
# Symlink the &amp;lt;code&amp;gt;solnTarget&amp;lt;/code&amp;gt; directory into the &amp;lt;code&amp;gt;[target nprocs]-procs_case&amp;lt;/code&amp;gt; directory. &lt;br /&gt;
# Add/uncomment &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; &lt;br /&gt;
# Run PHASTA for a few timesteps and write out &amp;lt;code&amp;gt;restart-dat.[target nprocs]&amp;lt;/code&amp;gt; files&lt;br /&gt;
# Remove/comment out the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; line from the &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;&lt;br /&gt;
# The new restart files have the interpolated solution&lt;br /&gt;
#* Note that if you forget to remove the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; statement from &amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt;, PHASTA will overwrite the existing solution in the restart files&lt;br /&gt;
&lt;br /&gt;
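Toggling the flag in steps 2 and 4 can be scripted; the one-line solver.inp below is a minimal stand-in rather than a full input file, and the &amp;lt;code&amp;gt;#&amp;lt;/code&amp;gt; comment prefix is an assumption (deleting the line works just as well):&lt;br /&gt;

```shell
# Minimal stand-in solver.inp containing just the flag of interest
printf 'Load and set 3D IC: True\n' > solver.inp

# Step 4: comment the flag out after the interpolation run so later
# restarts are not overwritten with the initial condition again
sed -i 's/^Load and set 3D IC: True/#Load and set 3D IC: True/' solver.inp

cat solver.inp   # #Load and set 3D IC: True
```

Scripting the toggle makes it harder to forget the step-4 cleanup that the note above warns about.&lt;br /&gt;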
== Turbulent, Compressible, Unstructured Mesh ==&lt;br /&gt;
&lt;br /&gt;
Currently, the only version of PHASTA that is set up to handle this type of solution transfer is the &amp;lt;code&amp;gt;conrad54418/ViscousHypersonics&amp;lt;/code&amp;gt; branch (as of 6/22/22). Creating the &amp;lt;code&amp;gt;solInterp.1&amp;lt;/code&amp;gt; file can be automated via a script or done manually (see below).&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (scripted) ===&lt;br /&gt;
&lt;br /&gt;
Scripts have been made for use with the compressible, but not turbulent, version of the code. An example folder with the scripts included can be found at &amp;lt;code&amp;gt;/project/tutorials/ParaviewSolutionTransfer&amp;lt;/code&amp;gt;. The three scripts needed are &amp;lt;code&amp;gt;interpolateSol.py&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;pvCSV2customSLN_Nproc_prim.m&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;parRunAll.sh&amp;lt;/code&amp;gt;. The only one of the three you need to run is &amp;lt;code&amp;gt;parRunAll.sh&amp;lt;/code&amp;gt;, which can be found in &amp;lt;code&amp;gt;targetFolder/solutionInterp&amp;lt;/code&amp;gt;. You then need to run PHASTA via the &amp;lt;code&amp;gt;runPhasta.sh&amp;lt;/code&amp;gt; script, which will produce a &amp;lt;code&amp;gt;restart.1.1&amp;lt;/code&amp;gt; file containing the transferred solution on the new mesh. The manual section below provides insight into what the scripts are doing.&lt;br /&gt;
&lt;br /&gt;
=== Process Overview (manual) ===&lt;br /&gt;
&lt;br /&gt;
# Load existing solution into ParaView. Use the 'merge blocks' filter to convert to serial case.&lt;br /&gt;
# Load the target case .pht file into ParaView&lt;br /&gt;
# Use the 'Resample from dataset' filter and select the source and target blocks accordingly. Note that ParaView's naming here is unclear: ParaView's &amp;quot;source&amp;quot; is the set of coordinates where you need the solution (the new mesh), and its &amp;quot;input&amp;quot; is the mesh that carries the solution values you want to interpolate from (in this case, the MergeBlocks output). This is exactly backwards from the terminology we use for solution interpolation, where we call the mesh with a solution the source and the new mesh the target.&lt;br /&gt;
# Save the output as a .csv file. Write a single time step and select scientific notation to 12 decimals. NOTE: sometimes ParaView will write zeros where it cannot quite find the closest point when doing the solution transfer. In this case, the .csv file needs to be manually edited to replace the zeros for pressure and temperature with realistic values.&lt;br /&gt;
# The .csv file can now be reformatted and renamed with MATLAB to match the expected form of solInterp.1.&lt;br /&gt;
# Advance PHASTA one step in serial, then convert to desired number of processors using Chef.&lt;br /&gt;
&lt;br /&gt;
==== Alternative to Using MATLAB for Reformatting ====&lt;br /&gt;
&lt;br /&gt;
For those wanting to skip the MATLAB step, &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt; can also do the necessary column manipulation.&lt;br /&gt;
&lt;br /&gt;
Copy your file (in case you goof up):&lt;br /&gt;
 cp PVinterp0.csv test.dat&lt;br /&gt;
Remove the header line:&lt;br /&gt;
 sed -i 1,1d test.dat&lt;br /&gt;
Replace the commas with spaces:&lt;br /&gt;
 sed -i 's/,/\ /g' test.dat&lt;br /&gt;
Rearrange the columns to match what solInterp.1 wants. It is probably a good idea to do a &amp;lt;code&amp;gt;head -1 PVinterp0.csv&amp;lt;/code&amp;gt; first to be sure, as you might have more or fewer fields than I did; use that header to find the column numbers (starting from 1, not 0) to write x, y, z, p, u, v, w.&lt;br /&gt;
 awk '{print $6,$7,$8,$1,$2,$3,$4}' test.dat &amp;gt; solInterp.1&lt;br /&gt;
Don't forget to put this into a directory called solnTarget, and to turn on the &amp;lt;code&amp;gt;Load and set 3D IC: True&amp;lt;/code&amp;gt; flag in solver.inp for the 1 step that Joe mentioned. Finally, if you are worried about that one step messing up your solution, recent versions of the code can take&lt;br /&gt;
      iexec : 0&lt;br /&gt;
or&lt;br /&gt;
     Number of Timesteps: 0&lt;br /&gt;
to not take any actual steps. Note, however, that only very recent versions of the code have the iexec conditional moved AFTER the loading of the interpolated solution, but it should be pretty easy to figure out where to move that conditional. Alternatively, the second option also skips over the time stepping and writes the solution AFTER applying the boundary conditions, which can be useful to confirm you have the intended BCs set (&amp;lt;code&amp;gt;iexec : 0&amp;lt;/code&amp;gt; won't detect this)&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=SimModeler&amp;diff=1956</id>
		<title>SimModeler</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=SimModeler&amp;diff=1956"/>
				<updated>2023-08-14T16:21:55Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
SimModeler is a model creation program from Simmetrix.  It takes the mesh and geometric model and creates the input files for PHASTA.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Running ==&lt;br /&gt;
To run SimModeler, first connect via VNC, then use vglconnect to connect to one of the compute machines:&lt;br /&gt;
&lt;br /&gt;
 vglconnect -s viz001&lt;br /&gt;
&lt;br /&gt;
Add the desired version of SimModeler to your environment (the below example will get the &amp;quot;default&amp;quot; version):&lt;br /&gt;
&lt;br /&gt;
 soft add +simmodeler&lt;br /&gt;
&lt;br /&gt;
and launch the GUI:&lt;br /&gt;
&lt;br /&gt;
 vglrun simmodeler&lt;br /&gt;
&lt;br /&gt;
== Converting old files ==&lt;br /&gt;
This is a guide for converting old files (parasolid and .spj) to the new format (.smd).&lt;br /&gt;
&lt;br /&gt;
After connecting to one of the compute machines, add the suite of tools for SimModeler to your environment:&lt;br /&gt;
&lt;br /&gt;
 soft add +simmodsuite&lt;br /&gt;
&lt;br /&gt;
From your case, make a new directory and copy your parasolid (.x_t or .xmt_txt), and .spj file into it. Rename the parasolid file to geom.xmt_txt and the .spj file to geom.spj, if they aren't already named that way. Then from the directory just created (now holds geom.xmt_txt and geom.spj) run: &lt;br /&gt;
&lt;br /&gt;
 /users/matthb2/simmodelerconvert/testConvert &lt;br /&gt;
&lt;br /&gt;
Your directory now contains two new files: model.smd and model.x_t&lt;br /&gt;
&lt;br /&gt;
== Creating new files ==&lt;br /&gt;
&lt;br /&gt;
Loading in geometry is about as intuitive as it possibly can be. Go to File -&amp;gt; Import Geometry, browse to the appropriate model, and select Open. Once open, it is possible both to mesh the model and to create boundary conditions for it. Because BLMesher is presently the primary meshing tool, it is only necessary to use SimModeler to create boundary conditions. Go to Analysis -&amp;gt; Select Solver, and select phasta. After selecting phasta, the Analysis Attributes option under Analysis becomes valid. Clicking it brings up the corresponding window. From this new window, it is possible to apply boundary conditions and initial conditions by clicking the small button next to the drop-down menu [add picture]. Note you must also double click on &amp;quot;problem definition&amp;quot;, which will allow you to name the case. Later post-processing expects the name &amp;quot;geom&amp;quot;, so always name it so.&lt;br /&gt;
&lt;br /&gt;
== Boundary conditions ==&lt;br /&gt;
&lt;br /&gt;
Commonly boundary conditions include:&lt;br /&gt;
&lt;br /&gt;
*comp3 - Specifies a 3D velocity vector&lt;br /&gt;
*comp1 - Specifies a 3D vector along which the velocity is constrained. Velocity normal to this vector is not directly affected. This is useful for creating slip walls and mimicking free stream regions. &lt;br /&gt;
*temperature - Sets the temperature of the wall. This is only needed for compressible cases. &lt;br /&gt;
*scalar_1 - Sets the scalar_1 / eddy viscosity to apply at a wall. For the Spalart Allmaras models, scalar_1 should be zero at physical walls where a boundary layer develops and 3 to 5 times the molecular viscosity at free stream boundaries (http://turbmodels.larc.nasa.gov/spalart.html)&lt;br /&gt;
*surf ID - Associates a number with one or more faces. This can then be read by Phasta and used to apply more complicated boundary conditions in software. &lt;br /&gt;
*natural pressure - Apply a mean pressure over a surface. The pressure at any particular point is still allowed to vary (someone verify). &lt;br /&gt;
*traction vector - ??. The zero vector is typically applied at outlet. &lt;br /&gt;
*heat flux - Specifies the rate at which heat is injected / removed (not sure which one) into / from the fluid domain. The value is almost always set to zero to create a perfectly insulated boundary. &lt;br /&gt;
*scalar_1 flux - set the flux of scalar_1 / eddy viscosity into / out of the domain (not sure which one). This is typically only used at outlets where high values of eddy viscosity have been convected downstream of turbulent walls. The value is almost always set to zero. &lt;br /&gt;
*turbulence wall - Indicates that a surface is to be included in the calculation of d2wall files (verify) which are then used by the Spalart Allmaras turbulence model to generate more physical turbulent kinetic energy production / dissipation budgets.&lt;br /&gt;
&lt;br /&gt;
=== Incompressible ===&lt;br /&gt;
&lt;br /&gt;
Common BCs used for an incompressible case with the S-A turbulence model&lt;br /&gt;
&lt;br /&gt;
*Initial conditions&lt;br /&gt;
**initial velocity (nonzero, typically small)&lt;br /&gt;
**initial scalar_1 (3-5 times free-stream molecular viscosity)&lt;br /&gt;
*Inflow&lt;br /&gt;
**Comp 3&lt;br /&gt;
**scalar_1 (also 3-5 times free-stream molecular viscosity)&lt;br /&gt;
*Outflow&lt;br /&gt;
**natural pressure (zero)&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
*Solid physical walls&lt;br /&gt;
**Comp 3 (zero vector)&lt;br /&gt;
**scalar_1 (zero)&lt;br /&gt;
**turbulence wall (value unimportant; use zero)&lt;br /&gt;
*Impermeable slip walls&lt;br /&gt;
**Comp 1 (zero in wall-normal direction)&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
&lt;br /&gt;
=== Compressible ===&lt;br /&gt;
&lt;br /&gt;
Common BCs used for a compressible case with the S-A turbulence model&lt;br /&gt;
&lt;br /&gt;
*Initial conditions&lt;br /&gt;
**initial velocity (nonzero, typically small)&lt;br /&gt;
**initial scalar_1 (3-5 times free-stream molecular viscosity)&lt;br /&gt;
**initial pressure&lt;br /&gt;
**initial temperature&lt;br /&gt;
&lt;br /&gt;
*Inflow&lt;br /&gt;
**Comp 3&lt;br /&gt;
**scalar_1 (also 3-5 times free-stream molecular viscosity)&lt;br /&gt;
**temperature&lt;br /&gt;
**pressure&lt;br /&gt;
&lt;br /&gt;
*Outflow&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
**heat flux (zero)&lt;br /&gt;
&lt;br /&gt;
*Solid physical walls&lt;br /&gt;
**Comp 3 (zero vector)&lt;br /&gt;
**scalar_1 (zero)&lt;br /&gt;
**turbulence wall (value unimportant; use zero)&lt;br /&gt;
**temperature or heat flux&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Getting_Started_with_Simmodeler&amp;diff=1945</id>
		<title>Getting Started with Simmodeler</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Getting_Started_with_Simmodeler&amp;diff=1945"/>
				<updated>2023-02-22T03:20:11Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Boundary Layer Mesh Height Grows in X */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Goal of using SimModeler==&lt;br /&gt;
In this section you will learn how to create a mesh and apply boundary and initial conditions to your &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; model file from the previous step.  You will save both the model and mesh containing the boundary/initial conditions under new file names, which we will later prepare to be passed into '''Chef'''.&lt;br /&gt;
&lt;br /&gt;
==Launching the Software ==&lt;br /&gt;
First, tunnel to viz002 or viz003 using &amp;lt;code&amp;gt;vglconnect -s viz00x&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Then, set your environment as shown in the [https://fluid.colorado.edu/tutorials/tutorialVideos/Convert2Sim_Tutorial.mp4 convert step video].&lt;br /&gt;
&lt;br /&gt;
Finally, run &amp;lt;code&amp;gt;vglrun simmodeler&amp;lt;/code&amp;gt; in your terminal to launch simmodeler. Once SimModeler is opened, select File &amp;gt; Open Model. If you ran &amp;lt;code&amp;gt;vglrun simmodeler&amp;lt;/code&amp;gt; while in your working directory, the &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file should be immediately available to select and open.&lt;br /&gt;
&lt;br /&gt;
==Accessing the User Manual==&lt;br /&gt;
When launching SimModeler, there is a blue question mark at the top right of the GUI. Click it, then click &amp;quot;launch manual&amp;quot; to open the user manual for the version of SimModeler you are using. It gives detailed descriptions of the various attributes and how they are defined to generate the desired mesh.&lt;br /&gt;
&lt;br /&gt;
==Surface Meshing==&lt;br /&gt;
&lt;br /&gt;
===2D and 1D Boundary Layers===&lt;br /&gt;
&lt;br /&gt;
To generate a proper surface mesh, it is important that both 2D and 1D boundary layers are implemented. Note that a 2D Boundary layer is defined on a surface and a 1D Boundary layer is defined on a line. The linked [https://fluid.colorado.edu/tutorials/tutorialVideos/MeshingWingInRoom_1D_2D_BLs.mp4 video] shows both 2D and 1D boundary layers being applied to an airfoil and the mesh that results from these applied attributes.&lt;br /&gt;
&lt;br /&gt;
===Boundary Layer Mesh Height Grows in X===&lt;br /&gt;
&lt;br /&gt;
An advanced technique for boundary layer meshing utilizes the &amp;quot;Specific Thicknesses&amp;quot; option of the 2D Boundary Layer attribute in SimModeler. This option can prescribe the boundary layer mesh to grow as it develops along the surface, mimicking the behavior of the boundary layer itself.&lt;br /&gt;
&lt;br /&gt;
# Create a 2D Boundary Layer attribute on the edge where you expect the boundary layer to grow. Select the &amp;quot;Specific Thicknesses&amp;quot; type.&lt;br /&gt;
# Select and set the face into which you want the boundary layer mesh to propagate (it should be connected to the selected edge).&lt;br /&gt;
# Edit &amp;quot;Layer Thicknesses&amp;quot; and select &amp;quot;Import&amp;quot; in the pop-up window.&lt;br /&gt;
#* The imported file should give the thickness of each layer of the mesh as a function of x. The example below shows the first and last few lines of such a setup. In this case, the first layer of the mesh is 5e-6 m tall at the beginning of the selected edge, which starts at x = 0.25 meters in the geometry file of this example. As you traverse in x, each thickness is scaled by the linear factor (1.0+($x-0.25)*20). This pattern can be easily extrapolated in an Excel file to grow the mesh at a desired rate.&lt;br /&gt;
&lt;br /&gt;
(1)  5.00E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(2)  5.88E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(3)  6.92E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
... &amp;lt;br&amp;gt;&lt;br /&gt;
(49) 1.00E-04*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(50) 1.00E-04*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mesh Size and Face Extrusions===&lt;br /&gt;
&lt;br /&gt;
Two more useful attributes when generating proper surface meshes are defining Mesh Sizes and Face Extrusions on your desired surfaces. Face Extrusions are useful when adding refinement over curved surfaces. This process is well covered in this [https://fluid.colorado.edu/tutorials/tutorialVideos/MeshingWingInRoom_MeshSize_FaceExt.mp4 video].&lt;br /&gt;
&lt;br /&gt;
==Volume Meshing==&lt;br /&gt;
&lt;br /&gt;
===3D Boundary Layers===&lt;br /&gt;
One of the most important aspects of volume mesh development is generating proper 3D boundary layers. This process is well outlined [https://fluid.colorado.edu/tutorials/tutorialVideos/RajDemo.mp4 here] from around 8:30 to 16:00. Note that all mesh attributes are set up under the &amp;quot;Meshing&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
===Mesh Refinement Zones===&lt;br /&gt;
Mesh refinement zones are useful for increasing the grid density within certain volumes of your simulation domain. The linked [https://fluid.colorado.edu/tutorials/tutorialVideos/Mesh_refinement_zone_tutorial.mp4 mesh refinement tutorial] will briefly walk you through how these are implemented.&lt;br /&gt;
&lt;br /&gt;
==Applying Boundary and Initial Conditions==&lt;br /&gt;
A nice description of each of the common BCs used in both the Compressible and Incompressible builds of PHASTA is provided [[SimModeler|here]]. The same video as the [https://fluid.colorado.edu/tutorials/tutorialVideos/RajDemo.mp4 volume meshing step] shows how to apply boundary conditions for an incompressible case starting at around 16:20. This linked [https://fluid.colorado.edu/tutorials/tutorialVideos/BC_tutorial.mp4 video] also shows how the BCs are applied for an incompressible case. Note that the case needs to be called ''geom'' as is done and explained in the tutorial video. Once you are done applying the BCs, it is very important that you save out the model &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; file with these applied boundary conditions.&lt;br /&gt;
&lt;br /&gt;
==Saving Out the Mesh==&lt;br /&gt;
Once you are satisfied with your mesh refinement and have applied boundary/initial conditions to your model, you are ready to generate and save out the final mesh. In the meshing tab, you will want to click ''Generate Mesh''. The default settings will suffice for proper mesh generation. If your grid is large, this step can take some time. Once the grid is finished generating, you will want to save out the generated &amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file as you will need it for the next step in the process. &lt;br /&gt;
&lt;br /&gt;
==Next Steps - Prepping for Chef==&lt;br /&gt;
Once the BCs are successfully implemented, you have saved out a model file with these BCs (&amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt;), and have generated and saved your mesh grid (&amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file), you are ready to prepare the grid for Chef. To get started on this step, head over to [[Prepping the Grid for Chef]].&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Getting_Started_with_Simmodeler&amp;diff=1944</id>
		<title>Getting Started with Simmodeler</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Getting_Started_with_Simmodeler&amp;diff=1944"/>
				<updated>2023-02-22T03:19:15Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Boundary Layer Mesh Height Grows in X */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Goal of using SimModeler==&lt;br /&gt;
In this section you will learn how to create a mesh and apply boundary and initial conditions to your &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; model file from the previous step.  You will save both the model and mesh containing the boundary/initial conditions under new file names, which we will later prepare to be passed into '''Chef'''.&lt;br /&gt;
&lt;br /&gt;
==Launching the Software ==&lt;br /&gt;
First, tunnel to viz002 or viz003 using &amp;lt;code&amp;gt;vglconnect -s viz00x&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Then, set your environment as shown in the [https://fluid.colorado.edu/tutorials/tutorialVideos/Convert2Sim_Tutorial.mp4 convert step video].&lt;br /&gt;
&lt;br /&gt;
Finally, run &amp;lt;code&amp;gt;vglrun simmodeler&amp;lt;/code&amp;gt; in your terminal to launch SimModeler. Once SimModeler opens, select File &amp;gt; Open Model. If you ran &amp;lt;code&amp;gt;vglrun simmodeler&amp;lt;/code&amp;gt; while in your working directory, the &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file should be immediately available to select and open.&lt;br /&gt;
&lt;br /&gt;
==Accessing the User Manual==&lt;br /&gt;
After launching SimModeler, click the blue question mark at the top right of the GUI, then click &amp;quot;launch manual&amp;quot; to open the user manual for the version of SimModeler you are running. The manual gives detailed descriptions of the various attributes and how to define them to generate the desired mesh.&lt;br /&gt;
&lt;br /&gt;
==Surface Meshing==&lt;br /&gt;
&lt;br /&gt;
===2D and 1D Boundary Layers===&lt;br /&gt;
&lt;br /&gt;
To generate a proper surface mesh, it is important that both 2D and 1D boundary layers are implemented. Note that a 2D Boundary layer is defined on a surface and a 1D Boundary layer is defined on a line. The linked [https://fluid.colorado.edu/tutorials/tutorialVideos/MeshingWingInRoom_1D_2D_BLs.mp4 video] shows both 2D and 1D boundary layers being applied to an airfoil and the mesh that results from these applied attributes.&lt;br /&gt;
&lt;br /&gt;
===Boundary Layer Mesh Height Grows in X===&lt;br /&gt;
&lt;br /&gt;
An advanced technique for boundary layer meshing utilizes the &amp;quot;Specific Thicknesses&amp;quot; option of the 2D Boundary Layer attribute in SimModeler. This option can prescribe the boundary layer mesh to grow as it develops along the surface, mimicking the behavior of the boundary layer itself.&lt;br /&gt;
&lt;br /&gt;
# Create a 2D Boundary Layer attribute on the edge where you expect the boundary layer to grow. Select the &amp;quot;Specific Thicknesses&amp;quot; type.&lt;br /&gt;
# Select and set the face into which you want the boundary layer mesh to propagate (it should be connected to the selected edge).&lt;br /&gt;
# Edit &amp;quot;Layer Thicknesses&amp;quot; and select &amp;quot;Import&amp;quot; in the pop-up window.&lt;br /&gt;
#* The imported file should give the thickness of each layer of the mesh as a function of x. The example below shows the first and last few lines of such a setup. In this case, the first layer of the mesh is 5e-6 m tall at the beginning of the selected edge, which starts at x = 0.25 meters in the geometry file of this example. As you traverse in x, each thickness is scaled by the linear factor (1.0+($x-0.25)*20). This pattern can be easily extrapolated in an Excel file to grow the mesh at a desired rate.&lt;br /&gt;
&lt;br /&gt;
(1)  5.00E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(2)  5.88E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(3)  6.92E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
... &amp;lt;br&amp;gt;&lt;br /&gt;
(49) 1.00E-04*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(50) 1.00E-04*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mesh Size and Face Extrusions===&lt;br /&gt;
&lt;br /&gt;
Two more useful attributes when generating proper surface meshes are defining Mesh Sizes and Face Extrusions on your desired surfaces. Face Extrusions are useful when adding refinement over curved surfaces. This process is well covered in this [https://fluid.colorado.edu/tutorials/tutorialVideos/MeshingWingInRoom_MeshSize_FaceExt.mp4 video].&lt;br /&gt;
&lt;br /&gt;
==Volume Meshing==&lt;br /&gt;
&lt;br /&gt;
===3D Boundary Layers===&lt;br /&gt;
One of the most important aspects of volume mesh development is generating proper 3D boundary layers. This process is well outlined [https://fluid.colorado.edu/tutorials/tutorialVideos/RajDemo.mp4 here] from around 8:30 to 16:00. Note that all mesh attributes are set up under the &amp;quot;Meshing&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
===Mesh Refinement Zones===&lt;br /&gt;
Mesh refinement zones are useful for increasing the grid density within certain volumes of your simulation domain. The linked [https://fluid.colorado.edu/tutorials/tutorialVideos/Mesh_refinement_zone_tutorial.mp4 mesh refinement tutorial] will briefly walk you through how these are implemented.&lt;br /&gt;
&lt;br /&gt;
==Applying Boundary and Initial Conditions==&lt;br /&gt;
A nice description of each of the common BCs used in both the Compressible and Incompressible builds of PHASTA is provided [[SimModeler|here]]. The same video as the [https://fluid.colorado.edu/tutorials/tutorialVideos/RajDemo.mp4 volume meshing step] shows how to apply boundary conditions for an incompressible case starting at around 16:20. This linked [https://fluid.colorado.edu/tutorials/tutorialVideos/BC_tutorial.mp4 video] also shows how the BCs are applied for an incompressible case. Note that the case needs to be called ''geom'' as is done and explained in the tutorial video. Once you are done applying the BCs, it is very important that you save out the model &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; file with these applied boundary conditions.&lt;br /&gt;
&lt;br /&gt;
==Saving Out the Mesh==&lt;br /&gt;
Once you are satisfied with your mesh refinement and have applied boundary/initial conditions to your model, you are ready to generate and save out the final mesh. In the meshing tab, you will want to click ''Generate Mesh''. The default settings will suffice for proper mesh generation. If your grid is large, this step can take some time. Once the grid is finished generating, you will want to save out the generated &amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file as you will need it for the next step in the process. &lt;br /&gt;
&lt;br /&gt;
==Next Steps - Prepping for Chef==&lt;br /&gt;
Once the BCs are successfully implemented, you have saved out a model file with these BCs (&amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt;), and have generated and saved your mesh grid (&amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file), you are ready to prepare the grid for Chef. To get started on this step, head over to [[Prepping the Grid for Chef]].&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Getting_Started_with_Simmodeler&amp;diff=1943</id>
		<title>Getting Started with Simmodeler</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Getting_Started_with_Simmodeler&amp;diff=1943"/>
				<updated>2023-02-22T03:18:31Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Boundary Layer Mesh Height Grows in X */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Goal of using SimModeler==&lt;br /&gt;
In this section you will learn how to create a mesh and apply boundary and initial conditions to your &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; model file from the previous step.  You will save both the model and mesh containing the boundary/initial conditions under new file names, which we will later prepare to be passed into '''Chef'''.&lt;br /&gt;
&lt;br /&gt;
==Launching the Software ==&lt;br /&gt;
First, tunnel to viz002 or viz003 using &amp;lt;code&amp;gt;vglconnect -s viz00x&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Then, set your environment as shown in the [https://fluid.colorado.edu/tutorials/tutorialVideos/Convert2Sim_Tutorial.mp4 convert step video].&lt;br /&gt;
&lt;br /&gt;
Finally, run &amp;lt;code&amp;gt;vglrun simmodeler&amp;lt;/code&amp;gt; in your terminal to launch SimModeler. Once SimModeler opens, select File &amp;gt; Open Model. If you ran &amp;lt;code&amp;gt;vglrun simmodeler&amp;lt;/code&amp;gt; while in your working directory, the &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file should be immediately available to select and open.&lt;br /&gt;
&lt;br /&gt;
==Accessing the User Manual==&lt;br /&gt;
After launching SimModeler, click the blue question mark at the top right of the GUI, then click &amp;quot;launch manual&amp;quot; to open the user manual for the version of SimModeler you are running. The manual gives detailed descriptions of the various attributes and how to define them to generate the desired mesh.&lt;br /&gt;
&lt;br /&gt;
==Surface Meshing==&lt;br /&gt;
&lt;br /&gt;
===2D and 1D Boundary Layers===&lt;br /&gt;
&lt;br /&gt;
To generate a proper surface mesh, it is important that both 2D and 1D boundary layers are implemented. Note that a 2D Boundary layer is defined on a surface and a 1D Boundary layer is defined on a line. The linked [https://fluid.colorado.edu/tutorials/tutorialVideos/MeshingWingInRoom_1D_2D_BLs.mp4 video] shows both 2D and 1D boundary layers being applied to an airfoil and the mesh that results from these applied attributes.&lt;br /&gt;
&lt;br /&gt;
===Boundary Layer Mesh Height Grows in X===&lt;br /&gt;
&lt;br /&gt;
An advanced technique for boundary layer meshing utilizes the &amp;quot;Specific Thicknesses&amp;quot; option of the 2D Boundary Layer attribute in SimModeler. This option can prescribe the boundary layer mesh to grow as it develops along the surface, mimicking the behavior of the boundary layer itself.&lt;br /&gt;
&lt;br /&gt;
# Create a 2D Boundary Layer attribute on the edge where you expect the boundary layer to grow. Select the &amp;quot;Specific Thicknesses&amp;quot; type.&lt;br /&gt;
# Select and set the face into which you want the boundary layer mesh to propagate (it should be connected to the selected edge).&lt;br /&gt;
# Edit &amp;quot;Layer Thicknesses&amp;quot; and select &amp;quot;Import&amp;quot; in the pop-up window.&lt;br /&gt;
#* The imported file should give the thickness of each layer of the mesh as a function of x. The example below shows the first and last few lines of such a setup. In this case, the first layer of the mesh is 5e-6 m tall at the beginning of the selected edge, which starts at x = 0.25 meters in the geometry file of this example. As you traverse in x, each thickness is scaled by the linear factor (1.0+($x-0.25)*20). This pattern can be easily extrapolated in an Excel file.&lt;br /&gt;
&lt;br /&gt;
(1)  5.00E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(2)  5.88E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(3)  6.92E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
... &amp;lt;br&amp;gt;&lt;br /&gt;
(49) 1.00E-04*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(50) 1.00E-04*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mesh Size and Face Extrusions===&lt;br /&gt;
&lt;br /&gt;
Two more useful attributes when generating proper surface meshes are defining Mesh Sizes and Face Extrusions on your desired surfaces. Face Extrusions are useful when adding refinement over curved surfaces. This process is well covered in this [https://fluid.colorado.edu/tutorials/tutorialVideos/MeshingWingInRoom_MeshSize_FaceExt.mp4 video].&lt;br /&gt;
&lt;br /&gt;
==Volume Meshing==&lt;br /&gt;
&lt;br /&gt;
===3D Boundary Layers===&lt;br /&gt;
One of the most important aspects of volume mesh development is generating proper 3D boundary layers. This process is well outlined [https://fluid.colorado.edu/tutorials/tutorialVideos/RajDemo.mp4 here] from around 8:30 to 16:00. Note that all mesh attributes are set up under the &amp;quot;Meshing&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
===Mesh Refinement Zones===&lt;br /&gt;
Mesh refinement zones are useful for increasing the grid density within certain volumes of your simulation domain. The linked [https://fluid.colorado.edu/tutorials/tutorialVideos/Mesh_refinement_zone_tutorial.mp4 mesh refinement tutorial] will briefly walk you through how these are implemented.&lt;br /&gt;
&lt;br /&gt;
==Applying Boundary and Initial Conditions==&lt;br /&gt;
A nice description of each of the common BCs used in both the Compressible and Incompressible builds of PHASTA is provided [[SimModeler|here]]. The same video as the [https://fluid.colorado.edu/tutorials/tutorialVideos/RajDemo.mp4 volume meshing step] shows how to apply boundary conditions for an incompressible case starting at around 16:20. This linked [https://fluid.colorado.edu/tutorials/tutorialVideos/BC_tutorial.mp4 video] also shows how the BCs are applied for an incompressible case. Note that the case needs to be called ''geom'' as is done and explained in the tutorial video. Once you are done applying the BCs, it is very important that you save out the model &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; file with these applied boundary conditions.&lt;br /&gt;
&lt;br /&gt;
==Saving Out the Mesh==&lt;br /&gt;
Once you are satisfied with your mesh refinement and have applied boundary/initial conditions to your model, you are ready to generate and save out the final mesh. In the meshing tab, you will want to click ''Generate Mesh''. The default settings will suffice for proper mesh generation. If your grid is large, this step can take some time. Once the grid is finished generating, you will want to save out the generated &amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file as you will need it for the next step in the process. &lt;br /&gt;
&lt;br /&gt;
==Next Steps - Prepping for Chef==&lt;br /&gt;
Once the BCs are successfully implemented, you have saved out a model file with these BCs (&amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt;), and have generated and saved your mesh grid (&amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file), you are ready to prepare the grid for Chef. To get started on this step, head over to [[Prepping the Grid for Chef]].&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Getting_Started_with_Simmodeler&amp;diff=1942</id>
		<title>Getting Started with Simmodeler</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Getting_Started_with_Simmodeler&amp;diff=1942"/>
				<updated>2023-02-22T03:17:37Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Boundary Layer Mesh Height Grows in X */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Goal of using SimModeler==&lt;br /&gt;
In this section you will learn how to create a mesh and apply boundary and initial conditions to your &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; model file from the previous step.  You will save both the model and mesh containing the boundary/initial conditions under new file names, which we will later prepare to be passed into '''Chef'''.&lt;br /&gt;
&lt;br /&gt;
==Launching the Software ==&lt;br /&gt;
First, tunnel to viz002 or viz003 using &amp;lt;code&amp;gt;vglconnect -s viz00x&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Then, set your environment as shown in the [https://fluid.colorado.edu/tutorials/tutorialVideos/Convert2Sim_Tutorial.mp4 convert step video].&lt;br /&gt;
&lt;br /&gt;
Finally, run &amp;lt;code&amp;gt;vglrun simmodeler&amp;lt;/code&amp;gt; in your terminal to launch SimModeler. Once SimModeler opens, select File &amp;gt; Open Model. If you ran &amp;lt;code&amp;gt;vglrun simmodeler&amp;lt;/code&amp;gt; while in your working directory, the &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file should be immediately available to select and open.&lt;br /&gt;
&lt;br /&gt;
==Accessing the User Manual==&lt;br /&gt;
After launching SimModeler, click the blue question mark at the top right of the GUI, then click &amp;quot;launch manual&amp;quot; to open the user manual for the version of SimModeler you are running. The manual gives detailed descriptions of the various attributes and how to define them to generate the desired mesh.&lt;br /&gt;
&lt;br /&gt;
==Surface Meshing==&lt;br /&gt;
&lt;br /&gt;
===2D and 1D Boundary Layers===&lt;br /&gt;
&lt;br /&gt;
To generate a proper surface mesh, it is important that both 2D and 1D boundary layers are implemented. Note that a 2D Boundary layer is defined on a surface and a 1D Boundary layer is defined on a line. The linked [https://fluid.colorado.edu/tutorials/tutorialVideos/MeshingWingInRoom_1D_2D_BLs.mp4 video] shows both 2D and 1D boundary layers being applied to an airfoil and the mesh that results from these applied attributes.&lt;br /&gt;
&lt;br /&gt;
===Boundary Layer Mesh Height Grows in X===&lt;br /&gt;
&lt;br /&gt;
An advanced technique for boundary layer meshing utilizes the &amp;quot;Specific Thicknesses&amp;quot; option of the 2D Boundary Layer attribute in SimModeler. This option can prescribe the boundary layer mesh to grow as it develops along the surface, mimicking the behavior of the boundary layer itself.&lt;br /&gt;
&lt;br /&gt;
# Create a 2D Boundary Layer attribute on the edge where you expect the boundary layer to grow. Select the &amp;quot;Specific Thicknesses&amp;quot; type.&lt;br /&gt;
# Select and set the face into which you want the boundary layer mesh to propagate (it should be connected to the selected edge).&lt;br /&gt;
# Edit &amp;quot;Layer Thicknesses&amp;quot; and select &amp;quot;Import&amp;quot; in the pop-up window.&lt;br /&gt;
#* The imported file should give the thickness of each layer of the mesh as a function of x. The example below shows the first and last few lines of such a setup. In this case, the first layer of the mesh is 5e-6 m tall at the beginning of the selected edge, which starts at x = 0.25 meters in the geometry file. As you traverse in x, each thickness is scaled by the linear factor (1.0+($x-0.25)*20). This pattern can be easily extrapolated in an Excel file.&lt;br /&gt;
&lt;br /&gt;
(1)  5.00E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(2)  5.88E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(3)  6.92E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
... &amp;lt;br&amp;gt;&lt;br /&gt;
(49) 1.00E-04*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(50) 1.00E-04*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mesh Size and Face Extrusions===&lt;br /&gt;
&lt;br /&gt;
Two more useful attributes when generating proper surface meshes are defining Mesh Sizes and Face Extrusions on your desired surfaces. Face Extrusions are useful when adding refinement over curved surfaces. This process is well covered in this [https://fluid.colorado.edu/tutorials/tutorialVideos/MeshingWingInRoom_MeshSize_FaceExt.mp4 video].&lt;br /&gt;
&lt;br /&gt;
==Volume Meshing==&lt;br /&gt;
&lt;br /&gt;
===3D Boundary Layers===&lt;br /&gt;
One of the most important aspects of volume mesh development is generating proper 3D boundary layers. This process is well outlined [https://fluid.colorado.edu/tutorials/tutorialVideos/RajDemo.mp4 here] from around 8:30 to 16:00. Note that all mesh attributes are set up under the &amp;quot;Meshing&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
===Mesh Refinement Zones===&lt;br /&gt;
Mesh refinement zones are useful for increasing the grid density within certain volumes of your simulation domain. The linked [https://fluid.colorado.edu/tutorials/tutorialVideos/Mesh_refinement_zone_tutorial.mp4 mesh refinement tutorial] will briefly walk you through how these are implemented.&lt;br /&gt;
&lt;br /&gt;
==Applying Boundary and Initial Conditions==&lt;br /&gt;
A nice description of each of the common BCs used in both the Compressible and Incompressible builds of PHASTA is provided [[SimModeler|here]]. The same video as the [https://fluid.colorado.edu/tutorials/tutorialVideos/RajDemo.mp4 volume meshing step] shows how to apply boundary conditions for an incompressible case starting at around 16:20. This linked [https://fluid.colorado.edu/tutorials/tutorialVideos/BC_tutorial.mp4 video] also shows how the BCs are applied for an incompressible case. Note that the case needs to be called ''geom'' as is done and explained in the tutorial video. Once you are done applying the BCs, it is very important that you save out the model &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; file with these applied boundary conditions.&lt;br /&gt;
&lt;br /&gt;
==Saving Out the Mesh==&lt;br /&gt;
Once you are satisfied with your mesh refinement and have applied boundary/initial conditions to your model, you are ready to generate and save out the final mesh. In the meshing tab, you will want to click ''Generate Mesh''. The default settings will suffice for proper mesh generation. If your grid is large, this step can take some time. Once the grid is finished generating, you will want to save out the generated &amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file as you will need it for the next step in the process. &lt;br /&gt;
&lt;br /&gt;
==Next Steps - Prepping for Chef==&lt;br /&gt;
Once the BCs are successfully implemented, you have saved out a model file with these BCs (&amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt;), and have generated and saved your mesh grid (&amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file), you are ready to prepare the grid for Chef. To get started on this step, head over to [[Prepping the Grid for Chef]].&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Getting_Started_with_Simmodeler&amp;diff=1941</id>
		<title>Getting Started with Simmodeler</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Getting_Started_with_Simmodeler&amp;diff=1941"/>
				<updated>2023-02-22T03:17:19Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Boundary Layer Mesh Height Grows in X */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Goal of using SimModeler==&lt;br /&gt;
In this section you will learn how to create a mesh and apply boundary and initial conditions to your &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; model file from the previous step.  You will save both the model and mesh containing the boundary/initial conditions under new file names, which we will later prepare to be passed into '''Chef'''.&lt;br /&gt;
&lt;br /&gt;
==Launching the Software ==&lt;br /&gt;
First, tunnel to viz002 or viz003 using &amp;lt;code&amp;gt;vglconnect -s viz00x&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Then, set your environment as shown in the [https://fluid.colorado.edu/tutorials/tutorialVideos/Convert2Sim_Tutorial.mp4 convert step video].&lt;br /&gt;
&lt;br /&gt;
Finally, run &amp;lt;code&amp;gt;vglrun simmodeler&amp;lt;/code&amp;gt; in your terminal to launch SimModeler. Once SimModeler opens, select File &amp;gt; Open Model. If you ran &amp;lt;code&amp;gt;vglrun simmodeler&amp;lt;/code&amp;gt; while in your working directory, the &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file should be immediately available to select and open.&lt;br /&gt;
&lt;br /&gt;
==Accessing the User Manual==&lt;br /&gt;
After launching SimModeler, click the blue question mark at the top right of the GUI, then click &amp;quot;launch manual&amp;quot; to open the user manual for the version of SimModeler you are running. The manual gives detailed descriptions of the various attributes and how to define them to generate the desired mesh.&lt;br /&gt;
&lt;br /&gt;
==Surface Meshing==&lt;br /&gt;
&lt;br /&gt;
===2D and 1D Boundary Layers===&lt;br /&gt;
&lt;br /&gt;
To generate a proper surface mesh, it is important that both 2D and 1D boundary layers are implemented. Note that a 2D Boundary layer is defined on a surface and a 1D Boundary layer is defined on a line. The linked [https://fluid.colorado.edu/tutorials/tutorialVideos/MeshingWingInRoom_1D_2D_BLs.mp4 video] shows both 2D and 1D boundary layers being applied to an airfoil and the mesh that results from these applied attributes.&lt;br /&gt;
&lt;br /&gt;
===Boundary Layer Mesh Height Grows in X===&lt;br /&gt;
&lt;br /&gt;
An advanced technique for boundary layer meshing utilizes the &amp;quot;Specific Thicknesses&amp;quot; option of the 2D Boundary Layer attribute in SimModeler. This option can prescribe the boundary layer mesh to grow as it develops along the surface, mimicking the behavior of the boundary layer itself.&lt;br /&gt;
&lt;br /&gt;
# Create a 2D Boundary Layer attribute on the edge where you expect the boundary layer to grow. Select the &amp;quot;Specific Thicknesses&amp;quot; type.&lt;br /&gt;
# Select and set the face into which you want the boundary layer mesh to propagate (it should be connected to the selected edge).&lt;br /&gt;
# Edit &amp;quot;Layer Thicknesses&amp;quot; and select &amp;quot;Import&amp;quot; in the pop-up window.&lt;br /&gt;
#* The imported file should give the thickness of each layer of the mesh as a function of x. The example below shows the first and last few lines of such a setup. In this case, the first layer of the mesh is 5e-6 m tall at the beginning of the selected edge, which starts at x = 0.25 meters in the geometry file. As you traverse in x, each thickness is scaled by the linear factor (1.0+($x-0.25)*20). This pattern can be easily extrapolated in an Excel file.&lt;br /&gt;
&lt;br /&gt;
(1)  5.00E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(2)  5.88E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(3)  6.92E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
... &amp;lt;br&amp;gt;&lt;br /&gt;
(49) 1.00E-04*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(50) 1.00E-04*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mesh Size and Face Extrusions===&lt;br /&gt;
&lt;br /&gt;
Two more useful attributes when generating proper surface meshes are defining Mesh Sizes and Face Extrusions on your desired surfaces. Face Extrusions are useful when adding refinement over curved surfaces. This process is well covered in this [https://fluid.colorado.edu/tutorials/tutorialVideos/MeshingWingInRoom_MeshSize_FaceExt.mp4 video].&lt;br /&gt;
&lt;br /&gt;
==Volume Meshing==&lt;br /&gt;
&lt;br /&gt;
===3D Boundary Layers===&lt;br /&gt;
One of the most important aspects of volume mesh development is generating proper 3D boundary layers. This process is well outlined [https://fluid.colorado.edu/tutorials/tutorialVideos/RajDemo.mp4 here] from around 8:30 to 16:00. Note that all mesh attributes are set up under the &amp;quot;Meshing&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
===Mesh Refinement Zones===&lt;br /&gt;
Mesh refinement zones are useful for increasing the grid density within certain volumes of your simulation domain. The linked [https://fluid.colorado.edu/tutorials/tutorialVideos/Mesh_refinement_zone_tutorial.mp4 mesh refinement tutorial] will briefly walk you through how these are implemented.&lt;br /&gt;
&lt;br /&gt;
==Applying Boundary and Initial Conditions==&lt;br /&gt;
A nice description of each of the common BCs used in both the Compressible and Incompressible builds of PHASTA is provided [[SimModeler|here]]. The same video as the [https://fluid.colorado.edu/tutorials/tutorialVideos/RajDemo.mp4 volume meshing step] shows how to apply boundary conditions for an incompressible case starting at around 16:20. This linked [https://fluid.colorado.edu/tutorials/tutorialVideos/BC_tutorial.mp4 video] also shows how the BCs are applied for an incompressible case. Note that the case needs to be called ''geom'', as is done and explained in the tutorial video. Once you are done applying the BCs, it is very important that you save out the model &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; file with these applied boundary conditions.&lt;br /&gt;
&lt;br /&gt;
==Saving Out the Mesh==&lt;br /&gt;
Once you are satisfied with your mesh refinement and have applied boundary/initial conditions to your model, you are ready to generate and save out the final mesh. In the ''Meshing'' tab, click ''Generate Mesh''. The default settings will suffice for proper mesh generation. If your grid is large, this step can take some time. Once the grid is finished generating, save out the generated &amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file, as you will need it for the next step in the process. &lt;br /&gt;
&lt;br /&gt;
==Next Steps - Prepping for Chef==&lt;br /&gt;
Once you have successfully implemented the BCs, saved out a model file with these BCs (&amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt;), and generated and saved your mesh grid (&amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file), you are ready to prepare the grid for Chef. To get started on this step, head over to [[Prepping the Grid for Chef]].&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Getting_Started_with_Simmodeler&amp;diff=1940</id>
		<title>Getting Started with Simmodeler</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Getting_Started_with_Simmodeler&amp;diff=1940"/>
				<updated>2023-02-22T03:17:00Z</updated>
		
		<summary type="html">&lt;p&gt;Conrad54418: /* Boundary Layer Mesh Height Grows in X */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Goal of using SimModeler==&lt;br /&gt;
In this section you will learn how to create a mesh and apply boundary and initial conditions to your &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; model file from the previous step.  You will save both the model and mesh containing the boundary/initial conditions under new file names, which we will later prepare to be passed into '''Chef'''.&lt;br /&gt;
&lt;br /&gt;
==Launching the Software ==&lt;br /&gt;
First, tunnel to viz002 or viz003 using &amp;lt;code&amp;gt;vglconnect -s viz00x&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Then, set your environment as shown in the [https://fluid.colorado.edu/tutorials/tutorialVideos/Convert2Sim_Tutorial.mp4 convert step video].&lt;br /&gt;
&lt;br /&gt;
Finally, run &amp;lt;code&amp;gt;vglrun simmodeler&amp;lt;/code&amp;gt; in your terminal to launch SimModeler. Once SimModeler is opened, select File &amp;gt; Open Model. If you ran &amp;lt;code&amp;gt;vglrun simmodeler&amp;lt;/code&amp;gt; while in your working directory, the &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file should be immediately available to select and open.&lt;br /&gt;
&lt;br /&gt;
==Accessing the User Manual==&lt;br /&gt;
When you launch SimModeler, there is a blue question mark at the top right of the GUI. Click it, then click &amp;quot;launch manual&amp;quot; to open the user manual for the version of SimModeler you are using. It gives detailed descriptions of the various attributes and how they are defined to generate the desired mesh.&lt;br /&gt;
&lt;br /&gt;
==Surface Meshing==&lt;br /&gt;
&lt;br /&gt;
===2D and 1D Boundary Layers===&lt;br /&gt;
&lt;br /&gt;
To generate a proper surface mesh, it is important that both 2D and 1D boundary layers are implemented. Note that a 2D boundary layer is defined on a surface and a 1D boundary layer is defined on a line. The linked [https://fluid.colorado.edu/tutorials/tutorialVideos/MeshingWingInRoom_1D_2D_BLs.mp4 video] shows both 2D and 1D boundary layers being applied to an airfoil and the mesh that results from these applied attributes.&lt;br /&gt;
&lt;br /&gt;
===Boundary Layer Mesh Height Grows in X===&lt;br /&gt;
&lt;br /&gt;
An advanced technique for boundary layer meshing utilizes the &amp;quot;Specific Thicknesses&amp;quot; option of the 2D Boundary Layer attribute in SimModeler. This option can prescribe the boundary layer mesh to grow as it develops along the surface, mimicking the behavior of the boundary layer itself.&lt;br /&gt;
&lt;br /&gt;
# Create a 2D Boundary Layer attribute on the edge where you expect the boundary layer to grow. Select the &amp;quot;Specific Thicknesses&amp;quot; type.&lt;br /&gt;
# Select the face where you want the boundary layer mesh to propagate into (it should be connected to the selected edge).&lt;br /&gt;
# Edit &amp;quot;Layer Thicknesses&amp;quot; and select &amp;quot;Import&amp;quot; in the pop-up window.&lt;br /&gt;
#* The imported file should contain the thickness of each layer of the mesh, growing as a function of x. The example below shows the first and last few lines of such a setup. In this case, the first layer of the mesh is 5e-6 m tall at the beginning of the selected edge, which starts at x = 0.25 m in the geometry file. As you traverse in x, each layer thickness is scaled linearly by the factor (1.0+($x-0.25)*20). This pattern can easily be extended in an Excel file:&lt;br /&gt;
&lt;br /&gt;
(1)  5.00E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(2)  5.88E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(3)  6.92E-06*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
. &amp;lt;br&amp;gt;&lt;br /&gt;
. &amp;lt;br&amp;gt;&lt;br /&gt;
. &amp;lt;br&amp;gt;&lt;br /&gt;
(49) 1.00E-04*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
(50) 1.00E-04*(1.0+($x-0.25)*20) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mesh Size and Face Extrusions===&lt;br /&gt;
&lt;br /&gt;
Two more useful attributes for generating proper surface meshes are Mesh Size and Face Extrusion, defined on your desired surfaces. Face Extrusions are useful when adding refinement over curved surfaces. This process is well covered in this [https://fluid.colorado.edu/tutorials/tutorialVideos/MeshingWingInRoom_MeshSize_FaceExt.mp4 video].&lt;br /&gt;
&lt;br /&gt;
==Volume Meshing==&lt;br /&gt;
&lt;br /&gt;
===3D Boundary Layers===&lt;br /&gt;
One of the most important aspects of volume mesh development is generating proper 3D boundary layers. This process is well outlined [https://fluid.colorado.edu/tutorials/tutorialVideos/RajDemo.mp4 here] from around 8:30 to 16:00. Note that all mesh attributes are set up under the &amp;quot;Meshing&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
===Mesh Refinement Zones===&lt;br /&gt;
Mesh refinement zones are useful for increasing the grid density within certain volumes of your simulation domain. The linked [https://fluid.colorado.edu/tutorials/tutorialVideos/Mesh_refinement_zone_tutorial.mp4 mesh refinement tutorial] will briefly walk you through how these are implemented.&lt;br /&gt;
&lt;br /&gt;
==Applying Boundary and Initial Conditions==&lt;br /&gt;
A nice description of each of the common BCs used in both the Compressible and Incompressible builds of PHASTA is provided [[SimModeler|here]]. The same video as the [https://fluid.colorado.edu/tutorials/tutorialVideos/RajDemo.mp4 volume meshing step] shows how to apply boundary conditions for an incompressible case starting at around 16:20. This linked [https://fluid.colorado.edu/tutorials/tutorialVideos/BC_tutorial.mp4 video] also shows how the BCs are applied for an incompressible case. Note that the case needs to be called ''geom'', as is done and explained in the tutorial video. Once you are done applying the BCs, it is very important that you save out the model &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; file with these applied boundary conditions.&lt;br /&gt;
&lt;br /&gt;
==Saving Out the Mesh==&lt;br /&gt;
Once you are satisfied with your mesh refinement and have applied boundary/initial conditions to your model, you are ready to generate and save out the final mesh. In the ''Meshing'' tab, click ''Generate Mesh''. The default settings will suffice for proper mesh generation. If your grid is large, this step can take some time. Once the grid is finished generating, save out the generated &amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file, as you will need it for the next step in the process. &lt;br /&gt;
&lt;br /&gt;
==Next Steps - Prepping for Chef==&lt;br /&gt;
Once you have successfully implemented the BCs, saved out a model file with these BCs (&amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt;), and generated and saved your mesh grid (&amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file), you are ready to prepare the grid for Chef. To get started on this step, head over to [[Prepping the Grid for Chef]].&lt;/div&gt;</summary>
		<author><name>Conrad54418</name></author>	</entry>

	</feed>