<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://fluid.colorado.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Prte0550</id>
		<title>PHASTA Wiki - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://fluid.colorado.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Prte0550"/>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php/Special:Contributions/Prte0550"/>
		<updated>2026-04-29T03:14:08Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.30.0</generator>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=PHASTA/Restart_Ordering&amp;diff=2047</id>
		<title>PHASTA/Restart Ordering</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=PHASTA/Restart_Ordering&amp;diff=2047"/>
				<updated>2024-03-20T19:39:11Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: Change to update to what solution phasta writes under non-pressure primitive solves&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Knowledge of the ordering of restarts is most useful for the creation of &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; files (used to tell ParaView what to read in from restarts in POSIX or SyncIO format, respectively). There are standard outputs from PHASTA that are always written, and optional ones that depend on the simulation type or the analysis planned. &lt;br /&gt;
&lt;br /&gt;
In the following, index numbering will be in index-by-1 format, or Fortran format. When creating a &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; file and filling in the &amp;lt;code&amp;gt;start_index_in_phasta_array=&amp;quot; &amp;quot;&amp;lt;/code&amp;gt; section for each field, note that you should subtract one as ParaView reads this file using index-by-0.&lt;br /&gt;
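The subtract-one conversion described above can be sketched as a small helper. This is only an illustration; the field names and 1-based positions follow the solution ordering documented on this page.

```python
# Hypothetical helper: convert the 1-based (Fortran-style) field positions
# documented on this page into the 0-based start indices that ParaView
# expects in the start_index_in_phasta_array attribute of a .pht/.phts file.
FIELD_START_1BASED = {
    "pressure": 1,
    "velocity": 2,      # occupies fields 2:4
    "temperature": 5,
    "scalar_1": 6,      # present only if scalar 1 is solved
    "scalar_2": 7,      # present only if scalar 2 is solved
}

def start_index_in_phasta_array(field):
    """Return the 0-based start index ParaView expects for a field."""
    return FIELD_START_1BASED[field] - 1
```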
&lt;br /&gt;
== Standard Outputs ==&lt;br /&gt;
&lt;br /&gt;
The output flow variables written to restarts are NOT dependent on the choice of variables used to solve a given setup. PHASTA will always write pressure, velocity, and temperature (in that order) for both compressible and incompressible for the sake of consistency. If the incompressible formulation is being used without a temperature solve, the temperature field will still exist but will simply be all zeros. These output flow variables are stored under the header&lt;br /&gt;
&amp;lt;code&amp;gt;solution&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the case where turbulence models are being used or species are being solved for, &amp;lt;code&amp;gt;solution&amp;lt;/code&amp;gt; will also populate scalar fields starting in the 6th field in sequential ordering:&lt;br /&gt;
* Scalar 1&lt;br /&gt;
* Scalar 2&lt;br /&gt;
&lt;br /&gt;
The final ordering is then:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Pressure Primitive Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1       || Pressure     ||&lt;br /&gt;
|-&lt;br /&gt;
| 2:4     || Velocity     || Vector quantity, ordered u, v, w&lt;br /&gt;
|-&lt;br /&gt;
| 5       || Temperature  || This field can be ignored if incompressible&lt;br /&gt;
|-&lt;br /&gt;
| 6       || Scalar 1     || Only written if scalar 1 exists&lt;br /&gt;
|-&lt;br /&gt;
| 7       || Scalar 2     || Only written if scalar 2 exists&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The exact quantity that any scalar represents depends on the models being used; as an example, for the Spalart-Allmaras (SA) 1-equation RANS model or DES/DDES formulations using the SA model, scalar 1 will be &amp;amp;nu;&amp;lt;sub&amp;gt;t&amp;lt;/sub&amp;gt;. Some branches of the code may have the ability to solve more scalar equations, and those extra scalars will be appended, in order, to the end of the &amp;lt;code&amp;gt;solution&amp;lt;/code&amp;gt; field.&lt;br /&gt;
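As a sketch of how this layout maps onto array slices, assuming a solution array of shape (num_nodes, num_fields) in the order documented above (the variable names are illustrative):

```python
import numpy as np

# Illustrative solution array: 10 nodes, 7 fields (pressure, u, v, w,
# temperature, scalar 1, scalar 2), laid out as in the table above.
solution = np.zeros((10, 7))

pressure    = solution[:, 0]     # field 1 (1-based) is column 0
velocity    = solution[:, 1:4]   # fields 2:4, ordered u, v, w
temperature = solution[:, 4]     # field 5; all zeros if not solved
scalar_1    = solution[:, 5]     # field 6, present only if solved
scalar_2    = solution[:, 6]     # field 7, present only if solved
```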
&lt;br /&gt;
=== Time Derivatives ===&lt;br /&gt;
The time derivatives of all of the above fields are also present in the restarts, under the header &amp;lt;code&amp;gt;time derivative of solution&amp;lt;/code&amp;gt;. These fields are in the exact same order as in &amp;lt;code&amp;gt;solution&amp;lt;/code&amp;gt;, and are subject to the same caveats about how many scalars there may be and what they actually represent for any given branch of the code and simulation setup.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Time Derivatives of Pressure Primitive Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1       || Time derivative of pressure     ||&lt;br /&gt;
|-&lt;br /&gt;
| 2:4     || Time derivative of velocity     || Vector quantity, ordered u, v, w&lt;br /&gt;
|-&lt;br /&gt;
| 5       || Time derivative of temperature  || This field can be ignored if incompressible&lt;br /&gt;
|-&lt;br /&gt;
| 6       || Time derivative of scalar 1     || Only written if scalar 1 exists&lt;br /&gt;
|-&lt;br /&gt;
| 7       || Time derivative of scalar 2     || Only written if scalar 2 exists&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Optional Outputs ==&lt;br /&gt;
&lt;br /&gt;
=== Wall Distance ===&lt;br /&gt;
This quantity is simply the distance from any given node to the nearest wall point in the simulation. It is mostly used in turbulence models but can also be useful for post-processing complex domains.&lt;br /&gt;
*Header: &amp;lt;code&amp;gt;dwal&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;placeholder&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1     || Wall Distance || &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Vorticity ===&lt;br /&gt;
*Header: &amp;lt;code&amp;gt;vorticity&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;Print vorticity: True&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1:3     || Vorticity              || Vector quantity, ordered &amp;amp;omega;&amp;lt;sub&amp;gt;x&amp;lt;/sub&amp;gt;, &amp;amp;omega;&amp;lt;sub&amp;gt;y&amp;lt;/sub&amp;gt;, &amp;amp;omega;&amp;lt;sub&amp;gt;z&amp;lt;/sub&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| 4       || Magnitude of vorticity || &lt;br /&gt;
|-&lt;br /&gt;
| 5       || Q                      || Defined as the second invariant of the velocity gradient tensor&lt;br /&gt;
|}&lt;br /&gt;
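A minimal sketch of how these quantities relate to the velocity-gradient tensor, using the definition of Q as the second invariant from the table above (the function name is illustrative, not from the PHASTA source):

```python
import numpy as np

def vorticity_and_q(G):
    """Vorticity vector, its magnitude, and Q from a 3x3 velocity-gradient
    tensor G[i, j] = du_i/dx_j. Q is the second invariant of G, which is
    0.5*(trace(G)**2 - trace(G @ G)); the first term vanishes for
    incompressible flow."""
    omega = np.array([G[2, 1] - G[1, 2],    # omega_x = dw/dy - dv/dz
                      G[0, 2] - G[2, 0],    # omega_y = du/dz - dw/dx
                      G[1, 0] - G[0, 1]])   # omega_z = dv/dx - du/dy
    q = 0.5 * (np.trace(G) ** 2 - np.trace(G @ G))
    return omega, np.linalg.norm(omega), q
```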
&lt;br /&gt;
&lt;br /&gt;
=== Time Averaged Statistics (point-wise) ===&lt;br /&gt;
Point-wise time averaged statistics are useful for problems where there are no homogeneous directions in the flow to accumulate an average along. Instead, PHASTA can accumulate averages at each individual node. Note that this formulation only accumulates per-run, so an average is only computed from the start step of the current run, and total averages must be computed by adding successive averages with the appropriate weighting. &lt;br /&gt;
*Header: &amp;lt;code&amp;gt;ybar&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;Print ybar: True&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Fields (incompressible)&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1:3   || Average velocity                     || Vector quantity, ordered u, v, w&lt;br /&gt;
|-&lt;br /&gt;
| 4     || Average pressure                     || &lt;br /&gt;
|-&lt;br /&gt;
| 5     || Average speed                        || &lt;br /&gt;
|-&lt;br /&gt;
| 6:8   || Average of the square of velocity    || Vector quantity, ordered u&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;, v&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;, w&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| 9     || Average of the square of pressure    ||&lt;br /&gt;
|-&lt;br /&gt;
| 10:12 || Average of velocity cross-components || Vector quantity, ordered uv, uw, vw&lt;br /&gt;
|-&lt;br /&gt;
| 13    || Average of scalar 1                  ||&lt;br /&gt;
|-&lt;br /&gt;
| 14:16 || Average of vorticity                 || Vector quantity, ordered &amp;amp;omega;&amp;lt;sub&amp;gt;x&amp;lt;/sub&amp;gt;, &amp;amp;omega;&amp;lt;sub&amp;gt;y&amp;lt;/sub&amp;gt;, &amp;amp;omega;&amp;lt;sub&amp;gt;z&amp;lt;/sub&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| 17    || Average vorticity magnitude          ||&lt;br /&gt;
|-&lt;br /&gt;
| 18    || Average of scalar 2                  ||&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Average of Q may be in position 19, depending on the branch.&lt;br /&gt;
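The weighting of successive per-run averages described above can be sketched as follows. This assumes a fixed time-step size so that runs can be weighted by step count; the function and variable names are illustrative.

```python
def combine_averages(avg1, n1, avg2, n2):
    """Combine two per-run ybar averages into a running total average.
    n1 and n2 are the number of steps accumulated in each run; each field
    value is weighted by its run's step count."""
    total = n1 + n2
    combined = [(n1 * a + n2 * b) / total for a, b in zip(avg1, avg2)]
    return combined, total
```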
&lt;br /&gt;
&lt;br /&gt;
=== Wall Shear Stress ===&lt;br /&gt;
This field is only defined at wall points and gives the most accurate measure of the wall shear stress possible; otherwise a finite-difference gradient using the first point off the wall must be used, and the line to that point may not be normal to the wall point beneath it. &lt;br /&gt;
*Header: &amp;lt;code&amp;gt;wss&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;Print Wall Fluxes: True&amp;lt;/code&amp;gt; (check this)&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1:3     || Wall shear stress || Vector quantity, ordered &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;x&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;, &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;y&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;, &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;z&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Time Averaged Wall Shear Stress ===&lt;br /&gt;
*Header: &amp;lt;code&amp;gt;wssbar&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;Print Wall Fluxes: True&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Print ybar: True&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1:3     || Average wall shear stress || Vector quantity, ordered &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;x&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;, &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;y&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;, &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;z&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Pressure Projection Vectors ===&lt;br /&gt;
*Header: &amp;lt;code&amp;gt;pressure projection vectors&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;placeholder&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=PHASTA/Restart_Ordering&amp;diff=2045</id>
		<title>PHASTA/Restart Ordering</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=PHASTA/Restart_Ordering&amp;diff=2045"/>
				<updated>2024-03-12T17:29:17Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: Initial creation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Knowledge of the ordering of restarts is most useful for the creation of &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; files (used to tell ParaView what to read in from restarts in POSIX or SyncIO format, respectively). There are standard outputs from PHASTA that are always written, and optional ones that depend on the simulation type or the analysis planned. &lt;br /&gt;
&lt;br /&gt;
In the following, index numbering will be in index-by-1 format, or Fortran format. When creating a &amp;lt;code&amp;gt;.pht&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;.phts&amp;lt;/code&amp;gt; file and filling in the &amp;lt;code&amp;gt;start_index_in_phasta_array=&amp;quot; &amp;quot;&amp;lt;/code&amp;gt; section for each field, note that you should subtract one as ParaView reads this file using index-by-0.&lt;br /&gt;
&lt;br /&gt;
NOTE: &lt;br /&gt;
This page is a work in progress and none of this information should be taken as fully accurate while this warning persists&lt;br /&gt;
&lt;br /&gt;
== Standard Outputs ==&lt;br /&gt;
&lt;br /&gt;
The output flow variables written to restarts are dependent on the choice of variables used to solve a given setup. These output flow variables are stored under the header&lt;br /&gt;
&amp;lt;code&amp;gt;solution&amp;lt;/code&amp;gt;&lt;br /&gt;
And for pressure primitive, have ordering:&lt;br /&gt;
* Pressure&lt;br /&gt;
* Velocity (vector quantity, ordered x, y, z)&lt;br /&gt;
* Temperature (if solving the compressible equations, this field can be ignored if incompressible)&lt;br /&gt;
&lt;br /&gt;
In the case where turbulence models are being used or species are being solved for, &amp;lt;code&amp;gt;solution&amp;lt;/code&amp;gt; will also populate scalar fields starting in the 6th field in sequential ordering:&lt;br /&gt;
* Scalar 1&lt;br /&gt;
* Scalar 2&lt;br /&gt;
&lt;br /&gt;
The final ordering is then:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Pressure Primitive Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1       || Pressure     ||&lt;br /&gt;
|-&lt;br /&gt;
| 2:4     || Velocity     || Vector quantity, ordered u, v, w&lt;br /&gt;
|-&lt;br /&gt;
| 5       || Temperature  || This field can be ignored if incompressible&lt;br /&gt;
|-&lt;br /&gt;
| 6       || Scalar 1     || Only written if scalar 1 exists&lt;br /&gt;
|-&lt;br /&gt;
| 7       || Scalar 2     || Only written if scalar 2 exists&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The exact quantity that any scalar represents depends on the models being used; as an example, for the Spalart-Allmaras (SA) 1-equation RANS model or DES/DDES formulations using the SA model, scalar 1 will be &amp;amp;nu;&amp;lt;sub&amp;gt;t&amp;lt;/sub&amp;gt;. Some branches of the code may have the ability to solve more scalar equations, and those extra scalars will be appended, in order, to the end of the &amp;lt;code&amp;gt;solution&amp;lt;/code&amp;gt; field.&lt;br /&gt;
&lt;br /&gt;
=== Time Derivatives ===&lt;br /&gt;
The time derivatives of all of the above fields are also present in the restarts, under the header &amp;lt;code&amp;gt;time derivative of solution&amp;lt;/code&amp;gt;. These fields are in the exact same order as in &amp;lt;code&amp;gt;solution&amp;lt;/code&amp;gt;, and are subject to the same caveats about how many scalars there may be and what they actually represent for any given branch of the code and simulation setup.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Time Derivatives of Pressure Primitive Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1       || Time derivative of pressure     ||&lt;br /&gt;
|-&lt;br /&gt;
| 2:4     || Time derivative of velocity     || Vector quantity, ordered u, v, w&lt;br /&gt;
|-&lt;br /&gt;
| 5       || Time derivative of temperature  || This field can be ignored if incompressible&lt;br /&gt;
|-&lt;br /&gt;
| 6       || Time derivative of scalar 1     || Only written if scalar 1 exists&lt;br /&gt;
|-&lt;br /&gt;
| 7       || Time derivative of scalar 2     || Only written if scalar 2 exists&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Optional Outputs ==&lt;br /&gt;
&lt;br /&gt;
=== Wall Distance ===&lt;br /&gt;
This quantity is simply the distance from any given node to the nearest wall point in the simulation. It is mostly used in turbulence models but can also be useful for post-processing complex domains.&lt;br /&gt;
*Header: &amp;lt;code&amp;gt;dwal&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;placeholder&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1     || Wall Distance || &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Vorticity ===&lt;br /&gt;
*Header: &amp;lt;code&amp;gt;vorticity&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;Print vorticity: True&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1:3     || Vorticity              || Vector quantity, ordered &amp;amp;omega;&amp;lt;sub&amp;gt;x&amp;lt;/sub&amp;gt;, &amp;amp;omega;&amp;lt;sub&amp;gt;y&amp;lt;/sub&amp;gt;, &amp;amp;omega;&amp;lt;sub&amp;gt;z&amp;lt;/sub&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| 4       || Magnitude of vorticity || &lt;br /&gt;
|-&lt;br /&gt;
| 5       || Q                      || Defined as the second invariant of the velocity gradient tensor&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Time Averaged Statistics (point-wise) ===&lt;br /&gt;
Point-wise time averaged statistics are useful for problems where there are no homogeneous directions in the flow to accumulate an average along. Instead, PHASTA can accumulate averages at each individual node. Note that this formulation only accumulates per-run, so an average is only computed from the start step of the current run, and total averages must be computed by adding successive averages with the appropriate weighting. &lt;br /&gt;
*Header: &amp;lt;code&amp;gt;ybar&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;Print ybar: True&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Fields (incompressible)&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1:3   || Average velocity                     || Vector quantity, ordered u, v, w&lt;br /&gt;
|-&lt;br /&gt;
| 4     || Average pressure                     || &lt;br /&gt;
|-&lt;br /&gt;
| 5     || Average speed                        || &lt;br /&gt;
|-&lt;br /&gt;
| 6:8   || Average of the square of velocity    || Vector quantity, ordered u&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;, v&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;, w&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| 9     || Average of the square of pressure    ||&lt;br /&gt;
|-&lt;br /&gt;
| 10:12 || Average of velocity cross-components || Vector quantity, ordered uv, uw, vw&lt;br /&gt;
|-&lt;br /&gt;
| 13    || Average of scalar 1                  ||&lt;br /&gt;
|-&lt;br /&gt;
| 14:16 || Average of vorticity                 || Vector quantity, ordered &amp;amp;omega;&amp;lt;sub&amp;gt;x&amp;lt;/sub&amp;gt;, &amp;amp;omega;&amp;lt;sub&amp;gt;y&amp;lt;/sub&amp;gt;, &amp;amp;omega;&amp;lt;sub&amp;gt;z&amp;lt;/sub&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| 17    || Average vorticity magnitude          ||&lt;br /&gt;
|-&lt;br /&gt;
| 18    || Average of scalar 2                  ||&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Average of Q may be in position 19, depending on the branch.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Wall Shear Stress ===&lt;br /&gt;
This field is only defined at wall points and gives the most accurate measure of the wall shear stress possible; otherwise a finite-difference gradient using the first point off the wall must be used, and the line to that point may not be normal to the wall point beneath it. &lt;br /&gt;
*Header: &amp;lt;code&amp;gt;wss&amp;lt;/code&amp;gt;&lt;br /&gt;
*&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;Print Wall Fluxes: True&amp;lt;/code&amp;gt; (check this)&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1:3     || Wall shear stress || Vector quantity, ordered &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;x&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;, &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;y&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;, &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;z&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Time Averaged Wall Shear Stress ===&lt;br /&gt;
Header: &amp;lt;code&amp;gt;wssbar&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;Print Wall Fluxes: True&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Print ybar: True&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Fields&lt;br /&gt;
|-&lt;br /&gt;
! Field Number !! Description !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1:3     || Average wall shear stress || Vector quantity, ordered &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;x&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;, &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;y&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;, &amp;amp;tau;&amp;lt;sub&amp;gt;wall&amp;lt;sub&amp;gt;z&amp;lt;/sub&amp;gt;&amp;lt;/sub&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Pressure Projection Vectors ===&lt;br /&gt;
Header: &amp;lt;code&amp;gt;pressure projection vectors&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;solver.inp&amp;lt;/code&amp;gt; flag: &amp;lt;code&amp;gt;placeholder&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=MGEN_Extrude&amp;diff=1957</id>
		<title>MGEN Extrude</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=MGEN_Extrude&amp;diff=1957"/>
				<updated>2023-08-18T21:32:58Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: Updated location of latest version&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;MGEN is a tool in the meshing workflow that takes a 2D source mesh and extrudes it in the third dimension based on user input. The tool was originally created for use on structured grids on the Boeing bump, but has since been generalized for use in unstructured setups.&lt;br /&gt;
&lt;br /&gt;
== Basic Overview ==&lt;br /&gt;
&lt;br /&gt;
MGEN code is stored in &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; and written in FORTRAN. The code takes in a source 2D mesh, z-coordinates to extrude between, the number of elements to populate the extrusion with, and the number of partitions to write the mesh to. &lt;br /&gt;
&lt;br /&gt;
Partitioning in MGEN is simply a method to reduce the cost of initial runs of Chef; it is not a replacement for the initial configuring that Chef does (via 1-1-Chef). Partitioning in MGEN simply allows the first run of Chef to be in parallel (i.e. 8-8-Chef). Starting Chef from parallel is most important on large grids that would take prohibitively long to run through Chef in serial.&lt;br /&gt;
&lt;br /&gt;
The most current copy of the code is available at &amp;lt;code&amp;gt;/nobackup/uncompressed/Models/GustWing/2dOTS/SymmetricRoom/Mesh/MGEN_NR&amp;lt;/code&amp;gt; as of January 2023. This version can read model tags, handle multiple model regions, and support arbitrary domain widths.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Basic Usage ==&lt;br /&gt;
&lt;br /&gt;
Once a suitable version of &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; has been located and moved to a working directory, it first needs to be compiled if this has not already been done. To reduce the risk of complications, &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; should be compiled with the same FORTRAN compiler version that was (or will be) used to compile the version of Chef used later in the meshing pipeline.&lt;br /&gt;
&lt;br /&gt;
Once a compiler version is selected and added using &amp;lt;code&amp;gt;soft add&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; (depending on the system), the code can be compiled. As an example, if using &amp;lt;code&amp;gt;gcc-6.3.0&amp;lt;/code&amp;gt; on Cooley, compiling would look like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
soft add +gcc-6.3.0&lt;br /&gt;
&lt;br /&gt;
gfortran -O3 tm3Extrude.f -o tm3Extrude&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the code is compiled, the working directory needs to be prepared to run MGEN. MGEN needs the source 2D mesh in the form of &amp;lt;code&amp;gt;geom.crd&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;geom.cnn&amp;lt;/code&amp;gt; files in the same directory as the compiled code. These source files can be produced from scratch with MATLAB for structured grids, or through the use of [[Getting Started with Simmodeler|Simmetrix]] and the [[Convert]] tool for unstructured grids.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the mesh files are in place, MGEN can be run with &amp;lt;code&amp;gt;./tm3Extrude&amp;lt;/code&amp;gt; as usual. The code will ask for inputs for zmin, zmax, numelz, and npart. These should be entered on a single line with spaces between the values before hitting enter to continue code execution.&lt;br /&gt;
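The prompt sequence above can also be driven non-interactively, e.g. by piping the input line on stdin. This is only a sketch; the values shown are illustrative, and the executable path is assumed to be the working directory as described above.

```python
import subprocess

def mgen_input(zmin, zmax, numelz, npart):
    """Build the single space-separated input line that tm3Extrude
    expects at its interactive prompt."""
    return "{} {} {} {}\n".format(zmin, zmax, numelz, npart)

# Example (values illustrative): extrude from z=0 to z=1 with 32
# elements, writing the mesh to 8 parts.
# subprocess.run(["./tm3Extrude"], input=mgen_input(0.0, 1.0, 32, 8),
#                text=True, check=True)
```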
&lt;br /&gt;
== Advanced Usage ==&lt;br /&gt;
For more complex geometries, complete information about the model cannot be assumed and must instead be given to MGEN. In order to prepare for this, we will need an additional form of the mesh and to make changes to the MGEN code itself in order to tell the program where geometric features are. &lt;br /&gt;
&lt;br /&gt;
A model first needs to be converted into a &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; file. This can be created with &amp;lt;code&amp;gt;mdlConvert&amp;lt;/code&amp;gt;. Example usage of this is &amp;lt;code&amp;gt;/projects/tools/SCOREC-core/build-14-190604dev_omp110/test/mdlConvert &amp;lt;simmetrixMesh&amp;gt;.xmt_txt outModel.dmg&amp;lt;/code&amp;gt;. This captures only information about model points, edges, and faces and their relationships to each other, but does not capture information about physical location.&lt;br /&gt;
&lt;br /&gt;
== Outputs ==&lt;br /&gt;
MGEN will write its outputs to the same working directory that the executable and source mesh files are in. There are multiple file types written, most with a suffix of a number to denote the part number of that file. The different parted files and their purposes are as follows:&lt;br /&gt;
&lt;br /&gt;
;geom3D.class :Classification file describing what type of geometric entity each point lies on (vertex, edge, face, volume)&lt;br /&gt;
;geom3D.cnndt :Connectivity of the elements &lt;br /&gt;
;geom3D.coord :Node coordinates&lt;br /&gt;
;geom3D.fathr :Parent vertex from the 2D source mesh&lt;br /&gt;
;geom3D.match :Contains periodic partners&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There is also one more file:&lt;br /&gt;
&lt;br /&gt;
; geom3DHead.cnn&lt;br /&gt;
&lt;br /&gt;
which lists the headers containing the size information for each of the above files.&lt;br /&gt;
&lt;br /&gt;
== Using the outputted files ==&lt;br /&gt;
The files output by MGEN now need to be prepared for Chef; this is done via &amp;lt;code&amp;gt;matchedNodeElmReader&amp;lt;/code&amp;gt;. The provided example will be for a build on Cooley.&lt;br /&gt;
&lt;br /&gt;
First, the environment needs to be prepared by setting &amp;lt;code&amp;gt;SIM_LICENSE_FILE&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt;. Examples of this are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
export SIM_LICENSE_FILE=/eagle/PHASTA_aesp/SCOREC-CORE/deps/Simmetrix/UCBoulder&lt;br /&gt;
&lt;br /&gt;
export LD_LIBRARY_PATH=/eagle/PHASTA_aesp/SCOREC-CORE/deps/16.0-220326/lib/x64_rhel_gcc48/psKrnl/:$LD_LIBRARY_PATH&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From here, &amp;lt;code&amp;gt;matchedNodeElmReader&amp;lt;/code&amp;gt; can be run with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
mpirun -f /var/tmp/cobalt.2137783 -np &amp;lt;np&amp;gt; -genvall /eagle/PHASTA_aesp/SCOREC-CORE/build_gtvertCorruption/test/matchedNodeElmReader ../geom3D.cnndt ../geom3D.coord ../geom3D.match ../geom3D.class ../geom3D.fathr NULL ../geom3DHead.cnn outModel.dmg outModel/&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;np&amp;gt; should be replaced by the same number used for npart when running MGEN.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1948</id>
		<title>Paraview Trace</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1948"/>
				<updated>2023-06-16T18:38:32Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Paraview traces are a way to create a python script of a set of actions that can later be applied to different datasets, or over a loop of datasets, automatically. The following will briefly explain the creation, cleaning, and running of a python trace in Paraview (which will often be shortened to pvTrace or something similar).&lt;br /&gt;
&lt;br /&gt;
=== Creation ===&lt;br /&gt;
&lt;br /&gt;
=== Basic Changes to the Python Script ===&lt;br /&gt;
&lt;br /&gt;
=== Running a Trace ===&lt;br /&gt;
Running a pvTrace, whether on the same or a different dataset, requires only a few key steps. For now, it will be assumed that the trace is being run on Cooley at ALCF, though many of the steps should be shared for other machines. &lt;br /&gt;
&lt;br /&gt;
If you are running the trace on a dataset that is not the same as the one for which you created the trace, it is good practice to always check your script and inputs. Make sure you have all of the restart and geombc files that you will need, and that you are pointing to the correct locations in the python script (it is recommended that you use absolute paths to reduce the chances for error). Also check your output file name and location.&lt;br /&gt;
&lt;br /&gt;
Once your python script is ready, you need to get an interactive allocation on Cooley and load the same version of paraview with which the trace was created:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;soft add +paraview-5.5.2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This loads &amp;lt;code&amp;gt;pvpython&amp;lt;/code&amp;gt;, which is what is used to run the python trace script. Running this script is a simple modification of a standard python run command. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;pvpython Trace_Script.py&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1947</id>
		<title>Paraview Trace</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Paraview_Trace&amp;diff=1947"/>
				<updated>2023-06-16T18:37:48Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: Initial creation, only added running for now because I need to do other things and those are the steps I just forgot and had to re-learn. Hoping to circle back later&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Paraview traces are ways to record a set of actions as a python script that can later be applied to different datasets, or over a loop of datasets, automatically. The following will briefly explain the creation, cleaning, and running of a python trace in Paraview (often shortened to pvTrace or something similar). ==&lt;br /&gt;
&lt;br /&gt;
=== Creation ===&lt;br /&gt;
&lt;br /&gt;
=== Basic Changes to the Python Script ===&lt;br /&gt;
&lt;br /&gt;
=== Running a Trace ===&lt;br /&gt;
Running a pvTrace, whether on the same or a different dataset, requires only a few key steps. For now, it will be assumed that the trace is being run on Cooley at ALCF, though many of the steps should be shared with other machines. &lt;br /&gt;
&lt;br /&gt;
If you are running the trace on a dataset that is not the same as the one for which you created the trace, it is good practice to always check your script and inputs. Make sure you have all of the restart and geombc files that you will need, and that you are pointing to the correct locations in the python script (it is recommended that you use absolute paths to reduce the chances for error). Also check your output file name and location.&lt;br /&gt;
&lt;br /&gt;
Once your python script is ready, you need to get an interactive allocation on Cooley and load the same version of paraview with which the trace was created:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;soft add +paraview-5.5.2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This loads &amp;lt;code&amp;gt;pvpython&amp;lt;/code&amp;gt;, which is used to run the python trace script. Running the script is a simple modification of a standard python run command. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;pvpython Trace_Script.py&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=1935</id>
		<title>Chef/Mesh Partitioning</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=1935"/>
				<updated>2023-02-03T20:37:34Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: /* Documentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This webpage was first inspired by a tutorial provided to Igor and his team at NCSU in order to set up two-phase flow test cases on a local cluster named Firebird at NCSU and on Cetus/Mira at ALCF. At this time, this tutorial includes copy-paste materials from old emails. &lt;br /&gt;
&lt;br /&gt;
The code has evolved since then! If you scroll down, you will also find critical updates since the first tutorial was written. Please do not ignore them, or your mesh partitioning/refinement is guaranteed to fail.&lt;br /&gt;
&lt;br /&gt;
Please update this page for our viz nodes when you get a chance. &lt;br /&gt;
&lt;br /&gt;
Thanks, &lt;br /&gt;
&lt;br /&gt;
- Michel&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is a tutorial about how to respectively partition the initial mesh and generate the phasta files on firebird (and other platforms including Cetus/Mira) using Chef. This tutorial is rather long but should include everything you need.&lt;br /&gt;
The testcase to demonstrate the workflow is the familiar 3-way subchannel flow. The root path of this test case is	/sgidata2/mrasquin/Models/subchannel. The parasolid model is located in /sgidata2/mrasquin/Models/subchannel/convertParasolid2ParasolidNative/geomFromSimmodeler_nat.xmt_txt.&lt;br /&gt;
The workflow that describes how to use Chef is now explained in the next sections.&lt;br /&gt;
&lt;br /&gt;
= Documentation =&lt;br /&gt;
General documentation of Chef that may be used in supplement to this page is available [https://github.com/SCOREC/core/wiki/chef-partition-control here]&lt;br /&gt;
&lt;br /&gt;
= Initial tutorial =&lt;br /&gt;
&lt;br /&gt;
== Env variables==&lt;br /&gt;
&lt;br /&gt;
All the subsequent tools need&lt;br /&gt;
* The fresh version of openmpi I built on firebird&lt;br /&gt;
* The latest Simmetrix library I installed in /Install on firebird.&lt;br /&gt;
&lt;br /&gt;
To update your paths, source the following file:&lt;br /&gt;
&amp;lt;code&amp;gt;/Install/SCOREC.develop/envLinux2014.sh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The env variables defined or updated in this env script include PATH and LD_LIBRARY_PATH. What is defined in this script should prevail over your settings, but I strongly suggest removing any redundancy that you may have, for instance, in your .bashrc. Note that I actually source this env file directly in my .bashrc so that I do not have to do it manually every time I log in to firebird. When you source it, it will also print the versions of gcc, openmpi and the simmodsuite lib that are set up.&lt;br /&gt;
&lt;br /&gt;
== BLMesherParallel ==&lt;br /&gt;
&lt;br /&gt;
Note that Simmetrix only supports matched faces for a single-part mesh, so the mesh must be built with one core. However, the initial mesh must already include some information related to the partitioning, even if the mesh only includes a single part, for format reasons. This additional information about the partitioning is required for conversion of the mesh file from the Simmetrix format to the SCOREC MDS format that Chef can read.&lt;br /&gt;
&lt;br /&gt;
The initial mesh for the 3-way subchannel was built in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0&amp;lt;/code&amp;gt;. Check the script named &amp;lt;code&amp;gt;runBLMesherParallel.sh&amp;lt;/code&amp;gt; in this directory.&lt;br /&gt;
&lt;br /&gt;
Running &amp;lt;code&amp;gt;./runBLMesherParallel.sh&amp;lt;/code&amp;gt; with no arguments will tell you the usage, that is:&lt;br /&gt;
 Usage: ./runBLMesherParallel.sh &amp;lt;X&amp;gt; &amp;lt;Y&amp;gt; &amp;lt;Z&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The arguments are as follows.&lt;br /&gt;
* &amp;lt;X&amp;gt; (geometric model) should be the parasolid model geomFromSimmodeler_nat.xmt_txt.&lt;br /&gt;
* &amp;lt;Y&amp;gt; (attribute file) should be BLattr.inp.&lt;br /&gt;
* &amp;lt;Z&amp;gt; (number of processors) should be 1 here since we need to generate a single part mesh using a single core.&lt;br /&gt;
&lt;br /&gt;
The BLattr.inp input file is the same as the one read by the old serial version of BLMesher. But BLMesherParallel can do whatever the old version of BLMesher can do. In addition, if your test case does not include any matched face, you may try to mesh in parallel by specifying &amp;lt;Z&amp;gt; to be larger than 1. However, some meshing features are available only when BLMesherParallel is used with a single core so it is always important to check the resulting mesh.&lt;br /&gt;
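&lt;br /&gt;
For the subchannel case above, the single-core invocation described by this usage would then be (a sketch assembled from the arguments listed, not a verified transcript):&lt;br /&gt;
 ./runBLMesherParallel.sh geomFromSimmodeler_nat.xmt_txt BLattr.inp 1&lt;br /&gt;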
&lt;br /&gt;
BLMesherParallel outputs the following files.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;mesh.sms&amp;lt;/code&amp;gt; --- The resulting mesh is stored in a directory named mesh.sms, which is a parameter hardcoded in the runBLMesherParallel.sh script.&lt;br /&gt;
* &amp;lt;code&amp;gt;BLMesher.log&amp;lt;/code&amp;gt; --- The log from BLMesherParallel is saved in BLMesher.log, whereas the Simmetrix log is saved in mesh.log. Both filenames are also hardcoded in the script.&lt;br /&gt;
&lt;br /&gt;
I also mentioned in previous discussions that Simmetrix has developed its own model format called geomsim. However, the boundary layer collapses near matched faces with this model format, which is not the case when we use the parasolid format. This issue has been reported to Simmetrix but until they can provide a fix, we are forced to start with the parasolid format when our test cases include matched faces.&lt;br /&gt;
&lt;br /&gt;
== Mesh conversion==&lt;br /&gt;
&lt;br /&gt;
Chef can read only the MDS format developed at SCOREC. Therefore, the Simmetrix mesh must first be converted to this format.&lt;br /&gt;
&lt;br /&gt;
This operation was carried out for the 3-way channel in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/simMeshToMdsMesh&amp;lt;/code&amp;gt;. Simply run the script &amp;lt;code&amp;gt;./simMeshToMdsMesh.sh&amp;lt;/code&amp;gt;, which executes the &amp;quot;convert&amp;quot; executable. In the script, you can see that the convert executable reads 3 arguments:&lt;br /&gt;
# The '''input parasolid model''' named geom.xmt_txt, which points to geomFromSimmodeler_nat.x_t. Note that convert expects an .xmt_txt extension (or an .smd extension for the complete geomsim format).&lt;br /&gt;
# The '''input Simmetrix mesh''' named here parts.sms (for historical reasons; it can be renamed).&lt;br /&gt;
# The '''name of the output mds mesh directory''', which is mdsMesh_bz2 here. Note that this name is prepended with &amp;quot;bz2:&amp;quot;, which means that the output mds mesh file is compressed using bzip2. &amp;quot;bz2:&amp;quot; will not be part of the name of the output directory. If you do not specify &amp;quot;bz2:&amp;quot;, the mds mesh file will be saved in ascii format, which is a waste of space, so I suggest always prepending your directory name with &amp;quot;bz2:&amp;quot;. This will also apply later to the output mesh directory generated by Chef (see below).&lt;br /&gt;
&lt;br /&gt;
Note that convert needs to run with a number of processes (-np ##) equal to the number of input parts in the Simmetrix mesh. For cases that include matched faces, the Simmetrix mesh must include only one part, which is the reason why convert runs here with -np 1. But in other circumstances, convert can run in parallel if the Simmetrix mesh has already been partitioned into n parts with n&amp;gt;1 (for instance a mesh generated in parallel with BLMesherParallel and/or partitioned with phParAdapt-Simmetrix).&lt;br /&gt;
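&lt;br /&gt;
Putting this together, the single-part conversion performed by simMeshToMdsMesh.sh amounts to something like (a sketch, not the literal contents of the script):&lt;br /&gt;
 mpirun -np 1 convert geom.xmt_txt parts.sms bz2:mdsMesh_bz2&lt;br /&gt;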
&lt;br /&gt;
== Boundary and initial conditions (spj file)==&lt;br /&gt;
&lt;br /&gt;
Before running Chef for mesh operations such as uniform refinement, tetrahedronization and partitioning, we need to define the BCs and ICs for the generation of the phasta files. These BCs and ICs are defined in an spj file, which is in ASCII to facilitate scripting of BCs/ICs. Most of the attributes you are familiar with from the Simmodeler GUI can be specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
For the 3-way channel flow, see the spj file located in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Simplified_SPJ_file/geom.spj&amp;lt;/code&amp;gt;. Each line corresponds to one attribute that applies to one face.&lt;br /&gt;
&lt;br /&gt;
The structure of the spj file is:&lt;br /&gt;
 # Optional comments anywhere preceded by the pound symbol (#).&lt;br /&gt;
 # For each boundary or initial condition a line as follows:&lt;br /&gt;
 &amp;lt;attribute_name&amp;gt;: &amp;lt;face_id&amp;gt; &amp;lt;dimension&amp;gt; &amp;lt;attribute list&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note the following.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;dimension&amp;gt;&amp;lt;/code&amp;gt;: 2 for a face attribute in 2D, 3 for the initial conditions that applies to the 3D domain. 1D and 0D attributes are also allowed for lines and vertices if needed.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;attribute list&amp;gt;&amp;lt;/code&amp;gt;: typically magnitude and direction, if this applies.&lt;br /&gt;
&lt;br /&gt;
Syntax is strict.&lt;br /&gt;
* No empty lines. Each line should be either a comment, which starts with the # character, or an attribute.&lt;br /&gt;
* There must be one single space after the colon character.&lt;br /&gt;
* There must be one single space between any numbers.&lt;br /&gt;
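&lt;br /&gt;
Putting these rules together, a minimal spj fragment could look as follows. The face ids, the attribute name &amp;quot;initial velocity&amp;quot;, and the value lists are purely illustrative, not taken from the actual subchannel model:&lt;br /&gt;
 # zero traction vector on face 12&lt;br /&gt;
 traction vector: 12 2 0.0 0.0 0.0&lt;br /&gt;
 # initial condition applied to the whole 3D domain&lt;br /&gt;
 initial velocity: 1 3 1.0 0.0 0.0&lt;br /&gt;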
&lt;br /&gt;
In this example, a zero &amp;quot;traction vector&amp;quot; attribute is specified on the periodic faces parallel to the length of the channel. It is wrong to specify such an attribute on these periodic faces for a 3-way channel, but this was inherited from the 1-way periodic channel where these faces were slip walls instead of periodic faces. I will try to update my test cases in the future. But because we now have continuous integration tools that run every night to verify the Chef code, I would need to update all the cases if I modified the spj file now. So double-check the attributes that you need for this model and consider the existing spj file as a source of inspiration rather than the correct spj file for production runs.&lt;br /&gt;
&lt;br /&gt;
== Chef ==&lt;br /&gt;
&lt;br /&gt;
A few rules must be followed to run Chef.&lt;br /&gt;
&lt;br /&gt;
First, the number of mpi processes must be equal to the number of input parts (''this has changed in the newest version of Chef, as described below'').&lt;br /&gt;
&lt;br /&gt;
Second, Chef is threaded with openmp, and the total number of output parts after partitioning should be at most equal to the total number of available hardware threads of your machine/allocation. On BGQ, there are 4 hardware threads per core. On Linux platforms such as firebird, the number of hardware threads corresponds to the number of available cores. That said, we have observed that if the number of output parts is equal to the total number of available hardware threads, Chef can hang. It is therefore safer to limit the number of output parts to fewer than the number of available hardware threads. Consequently, on firebird, we should not try to partition a mesh into more than 16 parts.&lt;br /&gt;
&lt;br /&gt;
The next mesh operations will have to take place on Tukey and Cetus/Mira.&lt;br /&gt;
&lt;br /&gt;
The first example of a partitioning with Chef can be found in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch&amp;lt;/code&amp;gt;. With my naming convention, &amp;lt;code&amp;gt;4-1-Chef-PartLocal-Scratch&amp;lt;/code&amp;gt; can be decomposed as follows:&lt;br /&gt;
* The first number (4) corresponds to the number of output parts&lt;br /&gt;
* The second number (1) corresponds to the number of input parts&lt;br /&gt;
* &amp;quot;Chef&amp;quot; means this mesh was treated with this program (as opposed to phParAdapt, phTest, etc., which are previous executables that we used for a similar purpose).&lt;br /&gt;
* &amp;quot;PartLocal&amp;quot; means the mesh is partitioned locally.&lt;br /&gt;
* &amp;quot;Scratch&amp;quot; means that the initial solution in the resulting phasta files is generated entirely from the spj file defined in a previous section of this tutorial. That is, we are starting a simulation &amp;quot;from scratch,&amp;quot; using the spj file's initial conditions as opposed to a solution migrated from a previous run.&lt;br /&gt;
&lt;br /&gt;
In summary, Chef was used in this directory to partition a single part mesh into 4 parts and the solution in the phasta files was generated directly from scratch using the spj file.&lt;br /&gt;
&lt;br /&gt;
=== Chef's input files ===&lt;br /&gt;
&lt;br /&gt;
The script to run Chef is named runChef.sh in this directory and simply calls the executable. Chef reads all it needs from two input files called numstart.dat and adapt.inp.&lt;br /&gt;
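&lt;br /&gt;
For this first example with one input part, the threaded version therefore boils down to something like (a sketch, not the literal contents of runChef.sh):&lt;br /&gt;
 mpirun -np 1 chef&lt;br /&gt;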
&lt;br /&gt;
==== numstart.dat ====&lt;br /&gt;
&lt;br /&gt;
Instead of building the initial solution from scratch using the initial conditions defined in the spj file, the user can migrate an existing solution stored in a set of restart files that were saved from a previous phasta simulation. Numstart.dat contains the time step stamp of the input restart files to read in order to migrate a solution.&lt;br /&gt;
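&lt;br /&gt;
For example, if the restart files to read are stamped with time step 200, numstart.dat would simply contain that stamp (200 is an illustrative value):&lt;br /&gt;
 200&lt;br /&gt;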
&lt;br /&gt;
==== adapt.inp ====&lt;br /&gt;
&lt;br /&gt;
This input file contains all the other parameters Chef expects. Note that many of these parameters have been inherited from the old phParAdapt, and are currently obsolete or unused. In what follows, all the parameters available in adapt.inp are listed and the critical parameters are in bold. Any line that starts with # is ignored.&lt;br /&gt;
&lt;br /&gt;
* '''globalP''': obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''timeStepNumber''': this is the time step of the output phasta files that will be generated by Chef. This stamp can be different from the number specified in numstart.dat which can be practical in some situations. But most of the time, this number is set equal to what is specified in numstart.dat&lt;br /&gt;
&lt;br /&gt;
* '''ensa_dof''': this corresponds to the number of degrees of freedom in the solution field of the output restart file. Note that it should correspond to the number of initial conditions specified in the spj file if the solution is built from scratch. When the solution is migrated from existing restart files, it should also correspond to the number of dof in the existing solution field. Here, this number is set to 5 for single phase flow with no turbulence model.&lt;br /&gt;
&lt;br /&gt;
* '''attributeFileName''': path to the spj file for the boundary and potentially initial conditions&lt;br /&gt;
&lt;br /&gt;
* '''modelFileName''': path to the geometric model (can be a parasolid or geomsim model on Linux but only geomsim is available on BGQ).&lt;br /&gt;
&lt;br /&gt;
* '''meshFileName''': path to the directory that includes the input mesh files under the SCOREC MDS format. Note that the path must end with a /. This path can also be prepended by &amp;quot;bz2:&amp;quot; to tell the mesh file reader that the files have been compressed. This follows the same convention as mentioned in 3)&lt;br /&gt;
&lt;br /&gt;
* '''outMeshFileName''': obviously the name of the directory that will include the resulting output mesh files. Note again the trailing / character. The same convention with &amp;quot;bz2:&amp;quot; keyword applies.&lt;br /&gt;
&lt;br /&gt;
* '''restartFileName''': this gives the path to the restart files that need to be read in when solution migration is activated. In this case, the path should look, for instance, like &amp;quot;../4-procs_case/restart&amp;quot;. The phasta reader will then add the time step stamp to the name of this restartFileName variable, as well as the file #. When there is no solution migration, as in this example, this parameter can be commented out for the sake of clarity.&lt;br /&gt;
&lt;br /&gt;
* '''adaptFlag''': if 0, no mesh adaptation will take place. But if set to 1 and if AdaptStrategy is set to 7, then the mesh will be uniformly refined. Note that adaptation only works with a mixed mesh (with wedges in the BL) and not with an all-tet mesh. Tetrahedronization should therefore take place after uniform refinement. Right now, the mixed mesh gets uniformly refined everywhere including the BL, but it is possible to refine uniformly outside the BL only with some light modifications of the code. In the future, we hope to have other adaptation strategies in place in Chef based on a local error indicator. If interested in these strategies, then phParAdapt-Simmetrix must be used. If adaptFlag is set to 1, note also that SolutionMigration must also be set to 1 (see below for this parameter) and the path to the restart files specified.&lt;br /&gt;
&lt;br /&gt;
* rRead: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* rStart: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''AdaptStrategy''': This parameter is read if adaptFlag is 1. When set to 7, uniform refinement of a mixed mesh can take place. This is currently the only strategy tested in Chef. If interested in other, more sophisticated adaptation strategies, phParAdapt-Simmetrix must be used for now.&lt;br /&gt;
&lt;br /&gt;
* '''RecursiveUR''': if AdaptStrategy is set to 7, Chef offers the possibility to do recursive uniform refinement within the same job. Beware of the memory consumption if you set this value to more than 1, since the mesh can grow quickly.&lt;br /&gt;
&lt;br /&gt;
* Periodic: obsolete. Periodicity in the mesh and in the solution is now treated automatically as long as i) the mesh built with BLMesher is periodic (i.e. the location of the mesh vertices on periodic faces is the same) and ii) the spj file contains the correct &amp;quot;periodic slave&amp;quot; attributes.&lt;br /&gt;
&lt;br /&gt;
* prCD: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* timing: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* outputFormat: obsolete. Phasta files are saved by default in binary format.&lt;br /&gt;
&lt;br /&gt;
* internalBCNodes: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* WRITEASC: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* phastaIO: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''numTotParts''': Final number of parts. If numTotParts is larger than the number of Chef processes which is equal to the number of input parts, the mesh will be partitioned.&lt;br /&gt;
&lt;br /&gt;
* '''elementsPerMigration''': In order to reduce the memory footprint of Chef, the user can reduce the default number of elements that can be migrated at a time during partitioning or partition improvement.&lt;br /&gt;
&lt;br /&gt;
* '''SolutionMigration''': Activates the migration of the solution from an existing set of restart files. In this case, the path to the phasta files that contain the solution to migrate must be specified through the restartFileName parameter (see above). If the mesh is refined, the solution that is migrated will be interpolated to the new vertices of the mesh. Note also that if the solution is migrated, then the spj file should contain NO information about the initial condition. Indeed any information mentioned in the spj file will prevail. Therefore, if the spj file contains information about the initial conditions, the solution migrated from existing restart files will be overwritten and the resulting phasta files will include again the scratch solution specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
* '''DisplacementMigration''': Also migrates the displacement field along with the solution field for other adaptation strategies. Not used for AdaptStrategy 7, so it can be ignored for now.&lt;br /&gt;
&lt;br /&gt;
* isReorder: obsolete/unused. Reordering for better cache performance is now applied by default to both the phasta files and mesh files.&lt;br /&gt;
&lt;br /&gt;
* '''Tetrahedronize''': tetrahedronizes a mixed mesh if set to 1. Note that if both AdaptFlag and Tetrahedronize are set to 1, adaptation of the input mixed mesh will take place before tetrahedronization. In all cases, partitioning is always the last mesh operation. But again, an all-tet mesh cannot be further refined, so tetrahedronization should not take place too early in the partitioning workflow, in order to keep enough aggregated memory for potential future adaptation.&lt;br /&gt;
&lt;br /&gt;
* numSplit: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''LocalPtn''': local partitioning if set to 1, global partitioning if set to 0. Currently, only local partitioning is implemented in Chef and it has been shown to be sufficient so far.&lt;br /&gt;
&lt;br /&gt;
* '''RecursivePtn''': should always be set to 1. In the past, this parameter allowed recursive partitioning steps in phParAdapt. The code will stop or crash if this parameter is not 1.&lt;br /&gt;
&lt;br /&gt;
* RecursivePtnStep: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''partitionMethod''': Currently, the GRAPH method for local partitioning is hardcoded in one of the Chef routines.&lt;br /&gt;
&lt;br /&gt;
* '''ParmaPtn''': If set to 1, the load balance in terms of both elements and vertices per part is improved further after the partitioning with Parma. It is strongly suggested to keep ParmaPtn set to 1.&lt;br /&gt;
&lt;br /&gt;
* '''dwalMigration''': This parameter is useful in case the distance to the wall for a turbulence model such as RANS or DDES has already been computed by phasta. In this case, it is possible to migrate also this field along with the solution field. SolutionMigration must therefore be set to 1 for that purpose, since the dwal field cannot be migrated alone without the solution field.&lt;br /&gt;
&lt;br /&gt;
* '''buildMapping''': This computes the vertex mapping between the input and output mesh. It is strongly suggested to keep this parameter always set to 1. Otherwise, you will not be able to reduce your solution from your final partitioning down to the initial or any intermediate mesh (we have developed a tool for that purpose), which can be catastrophic if you are interested in local adaptation based on an error indicator. Note that building the mapping does not make sense if the mesh is uniformly refined so it should be set to 0 in this case.&lt;br /&gt;
&lt;br /&gt;
* '''initBubbles''': Chef will use the external bubble information file 'bubbles.inp' to initialize the level set distance field if this flag is activated.&lt;br /&gt;
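&lt;br /&gt;
As an illustration, a minimal adapt.inp for the 4-1-Chef-PartLocal-Scratch example could look as follows. This is only a sketch assuming a simple &amp;quot;keyword value&amp;quot; layout; the file names are illustrative and the adapt.inp in that directory remains the authoritative reference.&lt;br /&gt;
 # partition a single-part mesh into 4 parts, solution built from scratch&lt;br /&gt;
 timeStepNumber 0&lt;br /&gt;
 ensa_dof 5&lt;br /&gt;
 attributeFileName geom.spj&lt;br /&gt;
 modelFileName geomFromSimmodeler_nat.xmt_txt&lt;br /&gt;
 meshFileName bz2:mdsMesh_bz2/&lt;br /&gt;
 outMeshFileName bz2:outMesh4_bz2/&lt;br /&gt;
 adaptFlag 0&lt;br /&gt;
 numTotParts 4&lt;br /&gt;
 SolutionMigration 0&lt;br /&gt;
 Tetrahedronize 0&lt;br /&gt;
 LocalPtn 1&lt;br /&gt;
 RecursivePtn 1&lt;br /&gt;
 ParmaPtn 1&lt;br /&gt;
 buildMapping 1&lt;br /&gt;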
&lt;br /&gt;
The second example of a partitioning with Chef can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-Tet-PartLocal-SolMgr. For this case, based on the naming convention of 8-4-Chef-Tet-PartLocal-SolMgr (and the parameters specified in adapt.inp and numstart.dat),&lt;br /&gt;
* the number of output parts requested is 8, &lt;br /&gt;
* the number of input parts is 4 (note &amp;quot;-np 4&amp;quot; in the runChef.sh script),&lt;br /&gt;
* the input mixed mesh is first tetrahedronized before being partitioned. &lt;br /&gt;
* the solution in the resulting phasta files is migrated from the previous Chef run. &lt;br /&gt;
Note that the spj file is different for this second example and the initial conditions have been commented out in order not to overwrite the solution that is migrated from the previous Chef run.&lt;br /&gt;
&lt;br /&gt;
The third and final example can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-UR2-Tet-PartLocal-SolMgr. In this directory 8-4-Chef-UR2-Tet-PartLocal-SolMgr, Chef &lt;br /&gt;
* reads a four part mesh, &lt;br /&gt;
* applies a double recursive uniform refinement, &lt;br /&gt;
* tetrahedronizes the resulting mixed mesh that has been uniformly refined twice, &lt;br /&gt;
* partitions the resulting 4-part all-tet uniformly refined mesh into 8 parts,&lt;br /&gt;
* migrates and interpolates the solution read from existing restart files coming from the first example.&lt;br /&gt;
&lt;br /&gt;
As a final comment, note that the restart files are always read directly from a procs_case directory. However, when the number of output restart files exceeds 2048, the restart files are then saved in subdirectories of the root procs_case directory in order to reduce file contention, in the same (but still different) way as what you have implemented at some point in your version of phasta. The best strategy would be to write phasta files using mpi_io for instance so that we can store more than one part in a single file and avoid large number of phasta files.&lt;br /&gt;
&lt;br /&gt;
For further partitioning on BG/Q machines a conversion to the native Parasolid model is required. The tool is located in: /Install/SCOREC.develop/scorec/test/cadToSim/cadToSim &lt;br /&gt;
and should be run from [Case directory]/convertParasolid2ParasolidNative/ on firebird.&lt;br /&gt;
&lt;br /&gt;
= Updated Chef version (2015/03/26)=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) MPI implementation&lt;br /&gt;
&lt;br /&gt;
A new version of chef has been implemented and does not rely on threads any more.&lt;br /&gt;
Instead, it is now based on a pure MPI implementation. &lt;br /&gt;
That means that there is an important change in how chef is called at runtime.&lt;br /&gt;
&lt;br /&gt;
With the previous threaded version, the number of MPI processes had to be equal to the number of input parts. &lt;br /&gt;
Chef was then in charge of starting a number of threads equal to the number of output parts, which was automatic.&lt;br /&gt;
&lt;br /&gt;
Since the pure MPI version of chef does not start threads anymore, it now requires a number of MPI processes equal to the final number of output parts, not input parts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2) adapt.inp&lt;br /&gt;
&lt;br /&gt;
In the new version of chef, &amp;quot;numTotParts&amp;quot; in adapt.inp (which was used to specify the final number of output parts) has been replaced by &amp;quot;splitFactor&amp;quot;, which corresponds to the ratio of the number of output parts to the number of input parts. &lt;br /&gt;
If you set this parameter to 1, the mesh will not be split and the number of output parts will be equal to the number of input parts. &lt;br /&gt;
If you set this parameter to 2, each part of your input mesh will be split into 2 new sub-parts, etc.&lt;br /&gt;
Keep in mind that the number of MPI processes that needs to be requested for chef must therefore be equal to (number of input parts) * (splitFactor).&lt;br /&gt;
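&lt;br /&gt;
The process-count rule above can be checked mechanically; a small shell sketch with illustrative values:&lt;br /&gt;

```shell
# pure-MPI chef: MPI processes = (number of input parts) * (splitFactor)
INPARTS=4        # parts in the input mesh (illustrative)
SPLITFACTOR=2    # splitFactor requested in adapt.inp (illustrative)
NPROCS=$((INPARTS * SPLITFACTOR))
echo "request $NPROCS MPI processes"
```

With 4 input parts and a splitFactor of 2, chef must therefore be launched with 8 MPI processes.&lt;br /&gt;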
&lt;br /&gt;
I have also removed the obsolete parameters in adapt.inp and saved a representative version of this file in /projects/tools/SCOREC.develop/runscripts/adapt.inp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3) Paths&lt;br /&gt;
&lt;br /&gt;
I have updated chef on the Viz nodes, Mira and Tukey so that it only relies on the more robust pure MPI implementation.&lt;br /&gt;
&lt;br /&gt;
On the viz nodes, use /projects/tools/SCOREC.develop/build-chefMPI-GNU-*/test/chef&lt;br /&gt;
For simplicity, this is the default version of the master branch coming directly from our github repository.&lt;br /&gt;
&lt;br /&gt;
On Tukey, use /home/mrasquin/SCOREC.develop/build-tukey-GNU-OptG-c2c360bc-mpi-*&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35-noblsnap means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is off during uniform refinement (UR).&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35 means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is on during UR.&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol33 means that the target imbalance for both the vtx and elem is 3%, and BL snapping is on during UR.&lt;br /&gt;
Note that these versions have been slightly modified w.r.t. the master branch. In particular, the imbalance target is not a parameter yet. Also, in Parma, HPS (Heavy Part Splitting) and FixDisconnectedPart are not called here, because the latest version of the diffusion algorithm, with improved selection of (i) target parts for element exchange and (ii) elements to exchange, is used instead.&lt;br /&gt;
&lt;br /&gt;
On Mira, use /home/mrasquin/SCOREC.develop/build-XL-OptG-c2c360bc-mpi-*&lt;br /&gt;
Similar comments apply to build-XL-OptG-c2c360bc-mpi-tol33, build-XL-OptG-c2c360bc-mpi-tol35 and build-XL-OptG-c2c360bc-mpi-tol35-noblsnap.&lt;br /&gt;
&lt;br /&gt;
Note that BL snapping is not called for a repartitioning of the mesh. It can only play a role during uniform refinement.&lt;br /&gt;
Consequently, if you do not request a UR in adapt.inp, then build-*-tol35 and build-*-tol35-noblsnap will behave the same way.&lt;br /&gt;
&lt;br /&gt;
In case you are wondering what the weird numbers in the name of the build directory are, they come from the git commit hash, which is a unique identifier associated with a git commit (making it easier to couple an executable with a version of the code).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Updated Chef version (2015/05/29 and 2016/04/05)=&lt;br /&gt;
&lt;br /&gt;
Updated list of useful parameters in adapt.inp&lt;br /&gt;
&lt;br /&gt;
* '''timeStepNumber''': this is the time step of the output phasta files that will be generated by Chef. This stamp can be different from the number specified in numstart.dat which can be practical in some situations. But most of the time, this number is set equal to what is specified in numstart.dat&lt;br /&gt;
&lt;br /&gt;
* '''ensa_dof''': this corresponds to the number of degrees of freedom in the solution field of the output restart file. Note that it should correspond to the number of initial conditions specified in the spj file if the solution is built from scratch. When the solution is migrated from existing restart files, it should also correspond to the number of dof in the existing solution field. Here, this number is set to 5 for single phase flow with no turbulence model.&lt;br /&gt;
&lt;br /&gt;
* '''attributeFileName''': path to the spj file for the boundary and potentially initial conditions&lt;br /&gt;
&lt;br /&gt;
* '''modelFileName''': path to the geometric model (can be a parasolid or geomsim model on Linux but only geomsim is available on BGQ).&lt;br /&gt;
&lt;br /&gt;
* '''meshFileName''': path to the directory that includes the input mesh files under the SCOREC MDS format. Note that the path must end with a /. This path can also be prepended with &amp;quot;bz2:&amp;quot; to tell the mesh file reader that the files have been compressed. This is the same &amp;quot;bz2:&amp;quot; convention used by the convert tool.&lt;br /&gt;
&lt;br /&gt;
* '''outMeshFileName''': the name of the directory that will include the resulting output mesh files. Note again the trailing / character. The same convention with the &amp;quot;bz2:&amp;quot; keyword applies.&lt;br /&gt;
&lt;br /&gt;
* '''restartFileName''': this gives the path to the restart files that need to be read in when solution migration is activated. In this case, the path should look, for instance, like &amp;quot;../4-procs_case/restart&amp;quot;. The phasta reader will then add the time step stamp to the name of this restartFileName variable, as well as the file number. When there is no solution migration, as in this example, this parameter can be commented out for the sake of clarity.&lt;br /&gt;
&lt;br /&gt;
* '''adaptFlag''': if 0, no mesh adaptation will take place. If set to 1 and AdaptStrategy is set to 7, the mesh will be uniformly refined. A mixed mesh can now be refined uniformly either everywhere including the BL, or only outside the BL (see the SplitAllLayerEdges parameter below). Other adaptation strategies based on local error indicators are being developed in Chef and will be complementary to the existing strategies available in phParAdapt-Simmetrix. If adaptFlag is set to 1, note also that SolutionMigration must also be set to 1 (see below for this parameter) and the path to the restart files must be specified.&lt;br /&gt;
&lt;br /&gt;
* '''AdaptStrategy''': This parameter is read if adaptFlag is 1. When set to 7, uniform refinement of a mixed mesh takes place. This is currently the only strategy tested and validated in Chef.&lt;br /&gt;
&lt;br /&gt;
* '''RecursiveUR''': if AdaptStrategy is set to 7, Chef offers the possibility to do recursive uniform refinement within the same job. Beware of the memory consumption if you set this value to more than 1, since the mesh can grow quickly.&lt;br /&gt;
&lt;br /&gt;
* '''SplitAllLayerEdges''': This parameter is only applicable to mixed meshes during uniform refinement. If set to 1, all mesh edges are refined uniformly, including the edges along the normal growth curves (wedges) of the BL. If set to 0, all edges are refined except those along the normal growth curves (tets only). For all-tet meshes, this parameter is ignored and all the tets get split.&lt;br /&gt;
&lt;br /&gt;
* '''Snap''': If set to 1 during a uniform refinement, Chef will attempt snapping to the model surface. Use with caution and check the resulting mesh: if the input mixed mesh is too coarse, snapping can be partially ignored. Invalid meshes have also occasionally been observed (phasta crash).&lt;br /&gt;
&lt;br /&gt;
* '''splitFactor''': ratio of the number of output parts to the number of input parts. If you set this parameter to 1, the mesh will not be split and the number of output parts will be equal to the number of input parts.&lt;br /&gt;
&lt;br /&gt;
* '''elementsPerMigration''': In order to reduce the memory footprint of Chef, the user can reduce the default number of elements that are migrated at a time during partitioning or partition improvement.&lt;br /&gt;
&lt;br /&gt;
* '''SolutionMigration''': Activates the migration of the solution from an existing set of restart files. In this case, the path to the phasta files that contain the solution to migrate must be specified through the restartFileName parameter (see above). If the mesh is refined, the migrated solution will be interpolated to the new vertices of the mesh. Note also that if the solution is migrated, the spj file should contain NO information about the initial conditions. Indeed, any initial condition mentioned in the spj file will prevail: the solution migrated from existing restart files would be overwritten and the resulting phasta files would again contain the scratch solution specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
* '''DisplacementMigration''': Also migrates the displacement field along with the solution field for other adaptation strategies. Not used for AdaptStrategy 7, so it can be ignored for now.&lt;br /&gt;
&lt;br /&gt;
* '''Tetrahedronize''': tetrahedronizes a mixed mesh if set to 1. Note that if both adaptFlag and Tetrahedronize are set to 1, adaptation of the input mixed mesh will take place before tetrahedronization. In all cases, partitioning is always the last mesh operation. But again, an all-tet mesh cannot be further refined, so tetrahedronization should not take place too early in the partitioning workflow, in order to preserve the possibility of future adaptation once enough aggregated memory is available.&lt;br /&gt;
&lt;br /&gt;
* '''partitionMethod''': graph or zrib (Zoltan Recursive Inertial Bisection) are the available options so far. Graph should be the preferred choice for now.&lt;br /&gt;
&lt;br /&gt;
* '''LocalPtn''':  0 for global partitioning, 1 for local partitioning. Global partitioning coupled with graph as the partition method requires a lot of memory and time and is limited to coarse meshes with a small number of mesh parts. Global RIB is more robust but can lead to larger vertex imbalance. If starting from a well balanced mesh with few or no disconnected parts, local graph is the recommended choice so far.&lt;br /&gt;
&lt;br /&gt;
* '''ParmaPtn''': If set to 1, the load balance in terms of both elements and vertices per part is further improved with Parma after the partitioning. It is strongly suggested to keep ParmaPtn set to 1.&lt;br /&gt;
&lt;br /&gt;
* '''elementImbalance''': target element imbalance for Parma. Use 1.01 to 1.05, which correspond to 1% and 5% imbalance, respectively.&lt;br /&gt;
&lt;br /&gt;
* '''vertexImbalance''': target vertex imbalance for Parma. Use 1.01 to 1.05, which correspond to 1% and 5% imbalance, respectively.&lt;br /&gt;
&lt;br /&gt;
* '''dwalMigration''': This parameter is useful in case the distance to the wall for a turbulence model such as RANS or DDES has already been computed by phasta. In this case, it is possible to migrate also this field along with the solution field. SolutionMigration must therefore be set to 1 for that purpose, since the dwal field cannot be migrated alone without the solution field.&lt;br /&gt;
&lt;br /&gt;
* '''buildMapping''': This computes the vertex mapping between the input and output mesh. It is strongly suggested to keep this parameter always set to 1. Otherwise, you will not be able to reduce your solution from your final partitioning down to the initial or any intermediate mesh (we have developed a tool for that purpose), which can be catastrophic if you are interested in local adaptation based on an error indicator. Note that building the mapping does not make sense if the mesh is uniformly refined so it should be set to 0 in this case.&lt;br /&gt;
&lt;br /&gt;
* '''initBubbles''': Chef will use the external bubble information file 'bubbles.inp' to initialize the level set distance field if this flag is activated.&lt;br /&gt;
&lt;br /&gt;
* '''filterMatches''': If set to 0, the matching between periodic entities is imposed from the mesh file, regardless of the periodic faces defined in the attributes, which could potentially differ. When enabled, the code derives the periodic associations between all model entities based on the &amp;quot;periodic slave&amp;quot; attribute set in the attribute file. Note that the mesh must be periodic to support this feature. This can be useful, for instance, for a three-way periodic channel mesh that can support both 1-way and 3-way periodic attributes without the need to build two distinct meshes.&lt;br /&gt;
&lt;br /&gt;
* '''axisymmetry''': When enabled, this parameter supports axisymmetric periodic meshes and attributes, for instance for an annular flow.&lt;br /&gt;
&lt;br /&gt;
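As an illustration, the parameters above can be combined into a minimal adapt.inp sketch for a uniform refinement with solution migration, followed by a 4x local graph split. All values below are examples only; double-check the exact key/value layout against a working adapt.inp from one of the example directories before using it.&lt;br /&gt;
 # illustrative adapt.inp sketch; all values are examples only&lt;br /&gt;
 timeStepNumber 0&lt;br /&gt;
 ensa_dof 5&lt;br /&gt;
 attributeFileName geom.spj&lt;br /&gt;
 modelFileName geomFromSimmodeler_nat.xmt_txt&lt;br /&gt;
 meshFileName bz2:mdsMesh_bz2/&lt;br /&gt;
 outMeshFileName bz2:mdsMeshUR_bz2/&lt;br /&gt;
 restartFileName ../4-procs_case/restart&lt;br /&gt;
 adaptFlag 1&lt;br /&gt;
 AdaptStrategy 7&lt;br /&gt;
 RecursiveUR 1&lt;br /&gt;
 SplitAllLayerEdges 1&lt;br /&gt;
 Snap 0&lt;br /&gt;
 splitFactor 4&lt;br /&gt;
 partitionMethod graph&lt;br /&gt;
 LocalPtn 1&lt;br /&gt;
 ParmaPtn 1&lt;br /&gt;
 elementImbalance 1.05&lt;br /&gt;
 vertexImbalance 1.05&lt;br /&gt;
 SolutionMigration 1&lt;br /&gt;
 buildMapping 0&lt;br /&gt;
Note that, consistent with the rules above, adaptFlag 1 goes together with SolutionMigration 1 and a valid restartFileName, while buildMapping is set to 0 because the mesh is uniformly refined.&lt;br /&gt;
&lt;br /&gt;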
= Words of caution=&lt;br /&gt;
When working with large meshes, the executable will occasionally crash through no fault of the user. This has happened in the past when (ensa_dof as set in adapt.inp)*(number of nodes in the mesh) &amp;gt; (maximum value of a 4-byte integer, 2^31). This was debugged using the addr2line command. Information about this command can be found at https://fluid.colorado.edu/wiki/index.php/Debugging.&lt;br /&gt;
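&lt;br /&gt;
For instance, with ensa_dof set to 5, the 4-byte integer limit is already exceeded for a mesh of about 430 million nodes:&lt;br /&gt;
 5 dof * 430,000,000 nodes = 2,150,000,000 &amp;gt; 2^31 = 2,147,483,648&lt;br /&gt;
It is therefore worth doing this quick multiplication by hand before launching a large job.&lt;br /&gt;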
&lt;br /&gt;
[[Category:Chef]]&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=1934</id>
		<title>Chef/Mesh Partitioning</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=1934"/>
				<updated>2023-02-03T20:35:56Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: /* Documentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This webpage is first inspired from a tutorial provided to Igor and his team at NCSU in order to set up two phase flow test cases on a local cluster named Firebird at NCSU and Cetus/Mira at ALCF. At this time, this tutorial includes copy-paste materials from old emails. &lt;br /&gt;
&lt;br /&gt;
The code has evolved since then! If you scroll down, you will also find critical updates since the first tutorial was written. Please do not ignore them or there is a 100% chance your mesh partitioning/refinement will fail.&lt;br /&gt;
&lt;br /&gt;
Please update this page for our viz nodes when you get a chance. &lt;br /&gt;
&lt;br /&gt;
Thanks, &lt;br /&gt;
&lt;br /&gt;
- Michel&lt;br /&gt;
&lt;br /&gt;
Here is a tutorial about how to respectively partition the initial mesh and generate the phasta files on firebird (and other platforms including Cetus/Mira) using Chef. This tutorial is rather long but should include everything you need.&lt;br /&gt;
The testcase used to demonstrate the workflow is the familiar 3-way subchannel flow. The root path of this test case is /sgidata2/mrasquin/Models/subchannel. The parasolid model is located in /sgidata2/mrasquin/Models/subchannel/convertParasolid2ParasolidNative/geomFromSimmodeler_nat.xmt_txt.&lt;br /&gt;
The workflow that describes how to use Chef is now explained in the next sections.&lt;br /&gt;
&lt;br /&gt;
= Documentation =&lt;br /&gt;
General documentation of Chef that may be used as a supplement to this page is available [https://github.com/SCOREC/core/wiki/chef-partition-control here]&lt;br /&gt;
&lt;br /&gt;
= Initial tutorial =&lt;br /&gt;
&lt;br /&gt;
== Env variables==&lt;br /&gt;
&lt;br /&gt;
All the subsequent tools need&lt;br /&gt;
* The fresh version of openmpi I built on firebird&lt;br /&gt;
* The latest Simmetrix library I installed in /Install on firebird.&lt;br /&gt;
&lt;br /&gt;
To update your paths, source the following file:&lt;br /&gt;
&amp;lt;code&amp;gt;/Install/SCOREC.develop/envLinux2014.sh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The env variables defined or updated in this env script include PATH and LD_LIBRARY_PATH. What is defined in this script should prevail over your settings, but I strongly suggest removing any redundancy that you may have, for instance, in your .bashrc. Note that I actually source this env file directly in my .bashrc so that I do not have to do it manually every time I log in to firebird. When you source it, it will also print the versions of gcc, openmpi and the simmodsuite lib that are set up.&lt;br /&gt;
&lt;br /&gt;
== BLMesherParallel ==&lt;br /&gt;
&lt;br /&gt;
Note that Simmetrix only supports matched faces for single-part meshes, so the mesh must be built with one core. However, the initial mesh must already include some information related to partitioning, even if the mesh only includes a single part, for format reasons. This additional partitioning information is required for conversion of the mesh file from the Simmetrix format to the SCOREC MDS format that Chef can read.&lt;br /&gt;
&lt;br /&gt;
The initial mesh for the 3-way subchannel was built in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0&amp;lt;/code&amp;gt;. Check the script named &amp;lt;code&amp;gt;runBLMesherParallel.sh&amp;lt;/code&amp;gt; in this directory.&lt;br /&gt;
&lt;br /&gt;
Running &amp;lt;code&amp;gt;./runBLMesherParallel.sh&amp;lt;/code&amp;gt; with no arguments will tell you the usage, that is:&lt;br /&gt;
 Usage: ./runBLMesherParallel.sh &amp;lt;X&amp;gt; &amp;lt;Y&amp;gt; &amp;lt;Z&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The arguments are as follows.&lt;br /&gt;
* &amp;lt;X&amp;gt; (geometric model) should be the parasolid model geomFromSimmodeler_nat.xmt_txt.&lt;br /&gt;
* &amp;lt;Y&amp;gt; (attribute file) should be BLattr.inp.&lt;br /&gt;
* &amp;lt;Z&amp;gt; (number of processors) should be 1 here since we need to generate a single part mesh using a single core.&lt;br /&gt;
&lt;br /&gt;
The BLattr.inp input file is the same as the one read by the old serial version of BLMesher. But BLMesherParallel can do whatever the old version of BLMesher can do. In addition, if your test case does not include any matched face, you may try to mesh in parallel by specifying &amp;lt;Z&amp;gt; to be larger than 1. However, some meshing features are available only when BLMesherParallel is used with a single core so it is always important to check the resulting mesh.&lt;br /&gt;
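&lt;br /&gt;
With the files of this test case at hand, the invocation therefore looks like the following (assuming the model and attribute files sit in the working directory):&lt;br /&gt;
 ./runBLMesherParallel.sh geomFromSimmodeler_nat.xmt_txt BLattr.inp 1&lt;br /&gt;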
&lt;br /&gt;
BLMesherParallel outputs the following files.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;mesh.sms&amp;lt;/code&amp;gt; --- The resulting mesh is stored in a directory named mesh.sms, which is a parameter hardcoded in the runBLMesherParallel.sh script.&lt;br /&gt;
* &amp;lt;code&amp;gt;BLMesher.log&amp;lt;/code&amp;gt; --- The log from BLMesherParallel is saved in BLMesher.log, whereas the Simmetrix log is saved in mesh.log. Both filenames are also hardcoded in the script.&lt;br /&gt;
&lt;br /&gt;
I also mentioned in previous discussions that Simmetrix has developed its own model format called geomsim. However, the boundary layer collapses near matched faces with this model format, which is not the case when we use the parasolid format. This issue has been reported to Simmetrix but until they can provide a fix, we are forced to start with the parasolid format when our test cases include matched faces.&lt;br /&gt;
&lt;br /&gt;
== Mesh conversion==&lt;br /&gt;
&lt;br /&gt;
Chef can only read the MDS format developed at SCOREC. Therefore, the Simmetrix mesh must first be converted to this format.&lt;br /&gt;
&lt;br /&gt;
This operation was carried out for the 3-way channel in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/simMeshToMdsMesh&amp;lt;/code&amp;gt;. Simply run the script &amp;lt;code&amp;gt;./simMeshToMdsMesh.sh&amp;lt;/code&amp;gt;, which executes the &amp;quot;convert&amp;quot; executable. In the script, you can see that the convert executable reads 3 arguments:&lt;br /&gt;
# The '''input parasolid model''' named geom.xmt_txt, which points to geomFromSimmodeler_nat.x_t. Note that convert expects an .xmt_txt extension (or an .smd extension for the complete geomsim format).&lt;br /&gt;
# The '''input Simmetrix mesh''' named here parts.sms (for historical reasons, but it can be renamed).&lt;br /&gt;
# The '''name of the output mds mesh directory''', which is mdsMesh_bz2 here. Note that this name is prepended with &amp;quot;bz2:&amp;quot;, which means that the output mds mesh file is compressed using bzip2. &amp;quot;bz2:&amp;quot; will not be part of the name of the output directory. If you do not specify &amp;quot;bz2:&amp;quot;, the mds mesh file will be saved in ascii format, which is a waste of space, so I suggest always prepending your directory name with &amp;quot;bz2:&amp;quot;. This will also apply later to the output mesh directory generated by Chef (see below).&lt;br /&gt;
&lt;br /&gt;
Note that convert needs to run with a number of processes (-np ##) equal to the number of input parts in the Simmetrix mesh. For cases that include matched faces, the Simmetrix mesh must include only one part, which is the reason why convert runs here with -np 1. But in other circumstances, convert can run in parallel if the Simmetrix mesh has already been partitioned in n parts with n&amp;gt;1 (for instance a mesh generated in parallel with BLMesherParallel and/or partitioned with phParAdapt-Simmetrix).&lt;br /&gt;
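&lt;br /&gt;
In other words, the conversion step boils down to something like the following (the exact mpirun syntax may vary with your MPI installation):&lt;br /&gt;
 mpirun -np 1 convert geom.xmt_txt parts.sms bz2:mdsMesh_bz2&lt;br /&gt;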
&lt;br /&gt;
== Boundary and initial conditions (spj file)==&lt;br /&gt;
&lt;br /&gt;
Before running Chef for mesh operations such as uniform refinement, tetrahedronization and partitioning, we need to define the BCs and ICs for the generation of the phasta files. These BCs and ICs are defined in an spj file, which is in ASCII to facilitate scripting of BCs/ICs. Most of the attributes you are familiar with from the Simmodeler GUI can be specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
For the 3-way channel flow, see the spj file located in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Simplified_SPJ_file/geom.spj&amp;lt;/code&amp;gt;. Each line corresponds to one attribute that applies to one face.&lt;br /&gt;
&lt;br /&gt;
The structure of the spj file is:&lt;br /&gt;
 # Optional comments anywhere preceded by the pound symbol (#).&lt;br /&gt;
 # For each boundary or initial condition a line as follows:&lt;br /&gt;
 &amp;lt;attribute_name&amp;gt;: &amp;lt;face_id&amp;gt; &amp;lt;dimension&amp;gt; &amp;lt;attribute list&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note the following.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;dimension&amp;gt;&amp;lt;/code&amp;gt;: 2 for a face attribute in 2D, 3 for the initial conditions that apply to the 3D domain. 1D and 0D attributes are also allowed for lines and vertices if needed.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;attribute list&amp;gt;&amp;lt;/code&amp;gt;: typically a magnitude and a direction, if applicable.&lt;br /&gt;
&lt;br /&gt;
Syntax is strict.&lt;br /&gt;
* No empty line. Each line should be either a comment which starts with the # character, or an attribute.&lt;br /&gt;
* There must be one single space after the colon character.&lt;br /&gt;
* There must be one single space between any numbers.&lt;br /&gt;
&lt;br /&gt;
In this example, a zero &amp;quot;traction vector&amp;quot; attribute is specified on the periodic faces parallel to the length of the channel. It is wrong to specify such an attribute on these periodic faces for a 3-way channel, but this was inherited from the 1-way periodic channel where these faces were slip walls instead of periodic faces. I will try to update my test cases in the future. But because we now have continuous integration tools that run every night to verify the Chef code, I will need to update all the cases if I modify the spj file now. So double check the attributes that you need for this model and consider the existing spj file as a source of inspiration rather than the correct spj file for production runs.&lt;br /&gt;
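&lt;br /&gt;
Putting the structure and the syntax rules together, an spj fragment could look like the sketch below. The face ids and attribute values here are purely illustrative, and the number and meaning of the values in the attribute list depend on the attribute type, so always copy real lines from the reference geom.spj rather than this sketch.&lt;br /&gt;
 # hypothetical fragment: face ids and values are examples only&lt;br /&gt;
 traction vector: 83 2 0.0 1.0 0.0 0.0&lt;br /&gt;
 initial velocity: 1 3 1.0 1.0 0.0 0.0&lt;br /&gt;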
&lt;br /&gt;
== Chef ==&lt;br /&gt;
&lt;br /&gt;
A few rules must be followed to run Chef.&lt;br /&gt;
&lt;br /&gt;
First, the number of mpi processes must be equal to the number of input parts (''this has changed in the newest version of Chef, as described below'').&lt;br /&gt;
&lt;br /&gt;
Second, Chef is threaded with openmp and the total number of output parts after partitioning should be at most equal to the total number of available hardware threads of your machine/allocation. On BGQ, there are 4 hardware threads per core; on a Linux platform such as firebird, the number of hardware threads corresponds to the number of available cores. That said, we have observed that Chef can hang when the number of output parts is equal to the total number of available hardware threads, so it is safer to limit the number of output parts to fewer than the number of available hardware threads. On firebird, for instance, we should not try to partition a mesh to more than 16 parts.&lt;br /&gt;
&lt;br /&gt;
The next mesh operations will have to take place on Tukey and Cetus/Mira.&lt;br /&gt;
&lt;br /&gt;
The first example of a partitioning with Chef can be found in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch&amp;lt;/code&amp;gt;. With my naming convention, &amp;lt;code&amp;gt;4-1-Chef-PartLocal-Scratch&amp;lt;/code&amp;gt; can be decomposed as follows:&lt;br /&gt;
* The first number (4) corresponds to the number of output parts&lt;br /&gt;
* The second number (1) corresponds to the number of input parts&lt;br /&gt;
* &amp;quot;Chef&amp;quot; means this mesh was treated with this program (as opposed to phParAdapt, phTest, etc., which are previous executables that we used for a similar purpose).&lt;br /&gt;
* &amp;quot;PartLocal&amp;quot; means the mesh is partitioned locally.&lt;br /&gt;
* &amp;quot;Scratch&amp;quot; means that the initial solution in the resulting phasta files is generated entirely from the spj file defined in a previous section of this tutorial. That is, we are starting a simulation &amp;quot;from scratch,&amp;quot; using the spj file's initial conditions as opposed to a solution migrated from a previous run.&lt;br /&gt;
&lt;br /&gt;
In summary, Chef was used in this directory to partition a single part mesh into 4 parts and the solution in the phasta files was generated directly from scratch using the spj file.&lt;br /&gt;
&lt;br /&gt;
=== Chef's input files ===&lt;br /&gt;
&lt;br /&gt;
The script to run Chef is named runChef.sh in this directory and simply calls the executable. Chef reads everything it needs from two input files called numstart.dat and adapt.inp.&lt;br /&gt;
&lt;br /&gt;
==== numstart.dat ====&lt;br /&gt;
&lt;br /&gt;
Instead of building the initial solution from scratch using the initial conditions defined in the spj file, the user can migrate an existing solution stored in a set of restart files that were saved from a previous phasta simulation. Numstart.dat contains the time step stamp of the input restart files to read in order to migrate a solution.&lt;br /&gt;
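&lt;br /&gt;
For example, to migrate the solution saved at time step 400, numstart.dat would typically contain just that single integer:&lt;br /&gt;
 400&lt;br /&gt;
(The value 400 is illustrative; use the time step stamp of your own restart files.)&lt;br /&gt;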
&lt;br /&gt;
==== adapt.inp ====&lt;br /&gt;
&lt;br /&gt;
This input file contains all the other parameters Chef expects. Note that many of these parameters have been inherited from the old phParAdapt, and are currently obsolete or unused. In what follows, all the parameters available in adapt.inp are listed and the critical parameters are in bold. Any line that starts with # is ignored.&lt;br /&gt;
&lt;br /&gt;
* '''globalP''': obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''timeStepNumber''': this is the time step of the output phasta files that will be generated by Chef. This stamp can be different from the number specified in numstart.dat, which can be practical in some situations. Most of the time, however, this number is set equal to what is specified in numstart.dat.&lt;br /&gt;
&lt;br /&gt;
* '''ensa_dof''': this corresponds to the number of degrees of freedom in the solution field of the output restart file. Note that it should correspond to the number of initial conditions specified in the spj file if the solution is built from scratch. When the solution is migrated from existing restart files, it should also correspond to the number of dof in the existing solution field. Here, this number is set to 5 for single phase flow with no turbulence model.&lt;br /&gt;
&lt;br /&gt;
* '''attributeFileName''': path to the spj file for the boundary and potentially initial conditions&lt;br /&gt;
&lt;br /&gt;
* '''modelFileName''': path to the geometric model (can be a parasolid or geomsim model on Linux but only geomsim is available on BGQ).&lt;br /&gt;
&lt;br /&gt;
* '''meshFileName''': path to the directory that includes the input mesh files under the SCOREC MDS format. Note that the path must end with a /. This path can also be prepended with &amp;quot;bz2:&amp;quot; to tell the mesh file reader that the files have been compressed. This follows the same convention as mentioned in point 3 of the mesh conversion section.&lt;br /&gt;
&lt;br /&gt;
* '''outMeshFileName''': the name of the directory that will include the resulting output mesh files. Note again the trailing / character. The same convention with the &amp;quot;bz2:&amp;quot; keyword applies.&lt;br /&gt;
&lt;br /&gt;
* '''restartFileName''': this gives the path to the restart files that need to be read in when solution migration is activated. In this case, the path should look, for instance, like &amp;quot;../4-procs_case/restart&amp;quot;. The phasta reader will then add the time step stamp to the name of this restartFileName variable, as well as the file number. When there is no solution migration, as in this example, this parameter can be commented out for the sake of clarity.&lt;br /&gt;
&lt;br /&gt;
* '''adaptFlag''': if 0, no mesh adaptation will take place. If set to 1 and AdaptStrategy is set to 7, the mesh will be uniformly refined. Note that adaptation only works with a mixed mesh (with wedges in the BL) and not with an all-tet mesh; tetrahedronization should therefore take place after uniform refinement. Right now, the mixed mesh gets uniformly refined everywhere including the BL, but it is possible to refine uniformly outside the BL only with some light modifications of the code. In the future, we hope to have other adaptation strategies in place in Chef based on local error indicators. If interested in these strategies, phParAdapt-Simmetrix must be used. If adaptFlag is set to 1, note also that SolutionMigration must also be set to 1 (see below for this parameter) and the path to the restart files must be specified.&lt;br /&gt;
&lt;br /&gt;
* rRead: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* rStart: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''AdaptStrategy''': This parameter is read if adaptFlag is 1. When set to 7, uniform refinement of a mixed mesh can take place. This is currently the only strategy tested in Chef. If interested in other, more sophisticated adaptation strategies, phParAdapt-Simmetrix must be used for now.&lt;br /&gt;
&lt;br /&gt;
* '''RecursiveUR''': if AdaptStrategy is set to 7, Chef offers the possibility to do recursive uniform refinement within the same job. Beware of the memory consumption if you set this value to more than 1, since the mesh can grow quickly.&lt;br /&gt;
&lt;br /&gt;
* Periodic: obsolete. Periodicity in the mesh and in the solution is now treated automatically as long as i) the mesh built with BLMesher is periodic (i.e. the location of the mesh vertices on periodic faces is the same) and ii) the spj file contains the correct &amp;quot;periodic slave&amp;quot; attributes.&lt;br /&gt;
&lt;br /&gt;
* prCD: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* timing: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* outputFormat: obsolete. Phasta files are saved by default in binary format.&lt;br /&gt;
&lt;br /&gt;
* internalBCNodes: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* WRITEASC: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* phastaIO: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''numTotParts''': Final number of parts. If numTotParts is larger than the number of Chef processes which is equal to the number of input parts, the mesh will be partitioned.&lt;br /&gt;
&lt;br /&gt;
* '''elementsPerMigration''': In order to reduce the memory footprint of Chef, the user can reduce the default number of elements that are migrated at a time during partitioning or partition improvement.&lt;br /&gt;
&lt;br /&gt;
* '''SolutionMigration''': Activates the migration of the solution from an existing set of restart files. In this case, the path to the phasta files that contain the solution to migrate must be specified through the restartFileName parameter (see above). If the mesh is refined, the migrated solution will be interpolated to the new vertices of the mesh. Note also that if the solution is migrated, the spj file should contain NO information about the initial conditions. Indeed, any initial condition mentioned in the spj file will prevail: the solution migrated from existing restart files would be overwritten and the resulting phasta files would again contain the scratch solution specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
* '''DisplacementMigration''': Also migrates the displacement field along with the solution field for other adaptation strategies. Not used for AdaptStrategy 7, so it can be ignored for now.&lt;br /&gt;
&lt;br /&gt;
* isReorder: obsolete/unused. Reordering for better cache performance is now applied by default to both the phasta files and mesh files.&lt;br /&gt;
&lt;br /&gt;
* '''Tetrahedronize''': tetrahedronizes a mixed mesh if set to 1. Note that if both adaptFlag and Tetrahedronize are set to 1, adaptation of the input mixed mesh will take place before tetrahedronization. In all cases, partitioning is always the last mesh operation. But again, an all-tet mesh cannot be further refined, so tetrahedronization should not take place too early in the partitioning workflow, in order to preserve the possibility of future adaptation once enough aggregated memory is available.&lt;br /&gt;
&lt;br /&gt;
* numSplit: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''LocalPtn''': local partitioning if set to 1, global partitioning if set to 0. Currently, only local partitioning is implemented in Chef and it has been shown to be sufficient so far.&lt;br /&gt;
&lt;br /&gt;
* '''RecursivePtn''': should always be set to 1. In the past, this parameter allowed recursive partitioning steps in phParAdapt. The code will stop or crash if this parameter is not 1.&lt;br /&gt;
&lt;br /&gt;
* RecursivePtnStep: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''partitionMethod''': Currently, the GRAPH method for local partitioning is hardcoded in one of the Chef routines.&lt;br /&gt;
&lt;br /&gt;
* '''ParmaPtn''': If set to 1, the load balance in terms of both elements and vertices per part is further improved with Parma after the partitioning. It is strongly suggested to keep ParmaPtn set to 1.&lt;br /&gt;
&lt;br /&gt;
* '''dwalMigration''': This parameter is useful in case the distance to the wall for a turbulence model such as RANS or DDES has already been computed by phasta. In this case, it is possible to migrate also this field along with the solution field. SolutionMigration must therefore be set to 1 for that purpose, since the dwal field cannot be migrated alone without the solution field.&lt;br /&gt;
&lt;br /&gt;
* '''buildMapping''': This computes the vertex mapping between the input and output mesh. It is strongly suggested to keep this parameter always set to 1. Otherwise, you will not be able to reduce your solution from your final partitioning down to the initial or any intermediate mesh (we have developed a tool for that purpose), which can be catastrophic if you are interested in local adaptation based on an error indicator. Note that building the mapping does not make sense if the mesh is uniformly refined so it should be set to 0 in this case.&lt;br /&gt;
&lt;br /&gt;
* '''initBubbles''': Chef will use the external bubble information file 'bubbles.inp' to initialize the level set distance field if this flag is activated.&lt;br /&gt;
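&lt;br /&gt;
To make this concrete, the critical parameters for the first example (a single-part mesh partitioned into 4 parts with the solution built from scratch) could be sketched as below. All values are illustrative; check the exact layout against the adapt.inp file present in that directory.&lt;br /&gt;
 # illustrative sketch for 4-1-Chef-PartLocal-Scratch; values are examples only&lt;br /&gt;
 timeStepNumber 0&lt;br /&gt;
 ensa_dof 5&lt;br /&gt;
 attributeFileName geom.spj&lt;br /&gt;
 modelFileName geomFromSimmodeler_nat.xmt_txt&lt;br /&gt;
 meshFileName bz2:mdsMesh_bz2/&lt;br /&gt;
 outMeshFileName bz2:mdsMesh4_bz2/&lt;br /&gt;
 adaptFlag 0&lt;br /&gt;
 numTotParts 4&lt;br /&gt;
 SolutionMigration 0&lt;br /&gt;
 Tetrahedronize 0&lt;br /&gt;
 LocalPtn 1&lt;br /&gt;
 RecursivePtn 1&lt;br /&gt;
 ParmaPtn 1&lt;br /&gt;
 buildMapping 1&lt;br /&gt;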
&lt;br /&gt;
The second example of a partitioning with Chef can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-Tet-PartLocal-SolMgr. For this case, based on the naming convention of 8-4-Chef-Tet-PartLocal-SolMgr (and the parameters specified in adapt.inp and numstart.dat),&lt;br /&gt;
* the number of output parts requested is 8, &lt;br /&gt;
* the number of input parts is 4 (note &amp;quot;-np 4&amp;quot; in the runChef.sh script),&lt;br /&gt;
* the input mixed mesh is first tetrahedronized before being partitioned. &lt;br /&gt;
* the solution in the resulting phasta files is migrated from the previous Chef run. &lt;br /&gt;
Note that the spj file is different for this second example and the initial conditions have been commented out in order not to overwrite the solution that is migrated from the previous Chef run.&lt;br /&gt;
&lt;br /&gt;
The third and final example can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-UR2-Tet-PartLocal-SolMgr. In this directory 8-4-Chef-UR2-Tet-PartLocal-SolMgr, Chef &lt;br /&gt;
* reads a four part mesh, &lt;br /&gt;
* applies a double recursive uniform refinement, &lt;br /&gt;
* tetrahedronizes the resulting mixed mesh that has been uniformly refined twice, &lt;br /&gt;
* partitions the resulting 4-part all-tet uniformly refined mesh into 8 parts,&lt;br /&gt;
* migrates and interpolates the solution read from the existing restart files coming from the first example.&lt;br /&gt;
&lt;br /&gt;
As a final comment, note that the restart files are always read directly from a procs_case directory. However, when the number of output restart files exceeds 2048, the restart files are saved in subdirectories of the root procs_case directory in order to reduce file contention, in a similar (but still different) way to what you have implemented at some point in your version of phasta. The best strategy would be to write phasta files using mpi_io, for instance, so that we can store more than one part in a single file and avoid a large number of phasta files.&lt;br /&gt;
&lt;br /&gt;
For further partitioning on BG/Q machines a conversion to the native Parasolid model is required. The tool is located in: /Install/SCOREC.develop/scorec/test/cadToSim/cadToSim &lt;br /&gt;
and should be run from [Case directory]/convertParasolid2ParasolidNative/ on firebird.&lt;br /&gt;
&lt;br /&gt;
= Updated Chef version (2015/03/26)=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) MPI implementation&lt;br /&gt;
&lt;br /&gt;
A new version of chef has been implemented that no longer relies on threads.&lt;br /&gt;
Instead, it is now based on a pure MPI implementation. &lt;br /&gt;
That means that there is an important change in how chef is called at runtime.&lt;br /&gt;
&lt;br /&gt;
With the previous threaded version, the number of MPI processes had to be equal to the number of input parts. &lt;br /&gt;
Chef was then in charge of starting a number of threads equal to the number of output parts, which was automatic.&lt;br /&gt;
&lt;br /&gt;
Since the pure MPI version of chef does not start threads anymore, it now requires a number of MPI processes equal to the final number of output parts, not the number of input parts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2) adapt.inp&lt;br /&gt;
&lt;br /&gt;
In the new version of chef, &amp;quot;numTotParts&amp;quot; in adapt.inp (which was used to specify the final number of output parts) has been replaced by &amp;quot;splitFactor&amp;quot;, which corresponds to the ratio of the number of output parts with the number of input parts. &lt;br /&gt;
If you set this parameter to 1, the mesh will not be split and the number of output parts will be equal to the number of input parts. &lt;br /&gt;
If you set this parameter to 2, each part of your input mesh will be split into 2 new sub-parts, etc.&lt;br /&gt;
Keep in mind that the number of MPI processes that needs to be requested for chef must therefore be equal to (number of input parts) * (splitFactor).&lt;br /&gt;
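As a quick sanity check, the required MPI rank count can be computed before submitting the job. The sketch below only prints the command line; the part count and split factor are hypothetical values for illustration.&lt;br /&gt;

```shell
# Hypothetical values: a 4-part input mesh split by a factor of 2.
INPUT_PARTS=4
SPLIT_FACTOR=2                        # splitFactor from adapt.inp
NP=$((INPUT_PARTS * SPLIT_FACTOR))    # chef needs one MPI rank per output part
echo "mpirun -np ${NP} chef"
```

With these values, chef would be launched on 8 ranks and would write an 8-part mesh.&lt;br /&gt;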
&lt;br /&gt;
I have also removed the obsolete parameter in adapt.inp and saved a representative version of this file in /projects/tools/SCOREC.develop/runscripts/adapt.inp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3) Paths&lt;br /&gt;
&lt;br /&gt;
I have updated chef on the Viz nodes, Mira and Tukey so that it only relies on the more robust pure MPI implementation.&lt;br /&gt;
&lt;br /&gt;
On the viz nodes, use /projects/tools/SCOREC.develop/build-chefMPI-GNU-*/test/chef&lt;br /&gt;
For simplicity, this is the default version of the master branch coming directly from our github repository.&lt;br /&gt;
&lt;br /&gt;
On Tukey, use /home/mrasquin/SCOREC.develop/build-tukey-GNU-OptG-c2c360bc-mpi-*&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35-noblsnap means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is off during uniform refinement (UR).&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35 means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is on during UR.&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol33 means that the target imbalance for both the vtx and elem is 3%, and BL snapping is on during UR.&lt;br /&gt;
Note that these versions have been slightly modified w.r.t. the master branch. In particular, the imbalance target is not a parameter yet. Also, in Parma, HPS (Heavy Part Splitting) and FixDisconnectedPart are not called here, because the latest version of the diffusion algorithm with improved selection of (i) target parts for element exchange and (ii) elements is used instead.&lt;br /&gt;
&lt;br /&gt;
On Mira, use /home/mrasquin/SCOREC.develop/build-XL-OptG-c2c360bc-mpi-*&lt;br /&gt;
Similar comments apply to build-XL-OptG-c2c360bc-mpi-tol33, build-XL-OptG-c2c360bc-mpi-tol35 and build-XL-OptG-c2c360bc-mpi-tol35-noblsnap.&lt;br /&gt;
&lt;br /&gt;
Note that BL snapping is not called for a repartitioning of the mesh. It can only play a role during uniform refinement.&lt;br /&gt;
Consequently, if you do not request a UR in adapt.inp, then build-*-tol35 and build-*-tol35-noblsnap will behave the same way.&lt;br /&gt;
&lt;br /&gt;
In case you are wondering what the weird numbers are in the name of the build directory, this comes from the git log hash, which is a unique number associated with a git commit (easier to couple an executable with a version of the code).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Updated Chef version (2015/05/29 and 2016/04/05)=&lt;br /&gt;
&lt;br /&gt;
Updated list of useful parameters in adapt.inp&lt;br /&gt;
&lt;br /&gt;
* '''timeStepNumber''': this is the time step of the output phasta files that will be generated by Chef. This stamp can be different from the number specified in numstart.dat, which can be practical in some situations. But most of the time, this number is set equal to what is specified in numstart.dat.&lt;br /&gt;
&lt;br /&gt;
* '''ensa_dof''': this corresponds to the number of degrees of freedom in the solution field of the output restart file. Note that it should correspond to the number of initial conditions specified in the spj file if the solution is built from scratch. When the solution is migrated from existing restart files, it should also correspond to the number of dof in the existing solution field. Here, this number is set to 5 for single phase flow with no turbulence model.&lt;br /&gt;
&lt;br /&gt;
* '''attributeFileName''': path to the spj file for the boundary and potentially initial conditions&lt;br /&gt;
&lt;br /&gt;
* '''modelFileName''': path to the geometric model (can be a parasolid or geomsim model on Linux but only geomsim is available on BGQ).&lt;br /&gt;
&lt;br /&gt;
* '''meshFileName''': path to the directory that includes the input mesh files under the SCOREC MDS format. Note that the path must end with a /. This path can also be prepended with &amp;quot;bz2:&amp;quot; to tell the mesh file reader that the files have been compressed. This follows the same convention as described in the mesh conversion section.&lt;br /&gt;
&lt;br /&gt;
* '''outMeshFileName''': obviously the name of the directory that will include the resulting output mesh files. Note again the trailing / character. The same convention with &amp;quot;bz2:&amp;quot; keyword applies.&lt;br /&gt;
&lt;br /&gt;
* '''restartFileName''': this gives the path to the restart files that need to be read in when solution migration is activated. In this case, the path should look, for instance, like &amp;quot;../4-procs_case/restart&amp;quot;. The phasta reader will then append the time step stamp to this restartFileName variable, as well as the file number. When there is no solution migration, as in this example, this parameter can be commented out for the sake of clarity.&lt;br /&gt;
&lt;br /&gt;
* '''adaptFlag''': if 0, no mesh adaptation will take place. But if set to 1 and if AdaptStrategy is set to 7, then the mesh will be uniformly refined. Now, a mixed mesh can be refined uniformly either everywhere including the BL, or only outside the BL (see parameter SplitAllLayerEdges below). Other adaptation strategies based on local error indicators are being developed in Chef and will be complementary to the existing strategies available in phParAdapt-Simmetrix. If adaptFlag is set to 1, note also that SolutionMigration must also be set to 1 (see below for this parameter) and the path to the restart files specified.&lt;br /&gt;
&lt;br /&gt;
* '''AdaptStrategy''': This parameter is read if adaptFlag is 1. When set to 7, uniform refinement of a mixed mesh takes place. This is currently the only strategy tested and validated in Chef. &lt;br /&gt;
&lt;br /&gt;
* '''RecursiveUR''': if AdaptStrategy is set to 7, Chef offers the possibility to do recursive uniform refinement within the same job. Beware of the memory consumption if you set this value to more than 1, since the mesh can grow quickly.&lt;br /&gt;
&lt;br /&gt;
* '''SplitAllLayerEdges''': This parameter is only applicable to mixed meshes during uniform refinement. If set to 1, all mesh edges are refined uniformly, including the edges along the normal growth curves (wedges) of the BL. If set to 0, all edges are refined uniformly except those along the normal growth curves (tets only). For all-tet meshes, this parameter is ignored and all the tets get split.&lt;br /&gt;
&lt;br /&gt;
* '''Snap''': If set to 1 during a uniform refinement, Chef will attempt snapping to the model surface. Use with caution and check the resulting mesh: if the input mixed mesh is too coarse, snapping can be partially ignored. Invalid meshes have also sometimes been observed (phasta crashes).&lt;br /&gt;
&lt;br /&gt;
* '''splitFactor''': ratio of the number of output parts to the number of input parts. If you set this parameter to 1, the mesh will not be split and the number of output parts will be equal to the number of input parts. &lt;br /&gt;
&lt;br /&gt;
* '''elementsPerMigration''': In order to reduce the memory footprint of Chef, the user can reduce the default number of elements that can be migrated at a time during partitioning or partition improvement.&lt;br /&gt;
&lt;br /&gt;
* '''SolutionMigration''': Activates the migration of the solution from an existing set of restart files. In this case, the path to the phasta files that contain the solution to migrate must be specified through the restartFileName parameter (see above). If the mesh is refined, the solution that is migrated will be interpolated to the new vertices of the mesh. Note also that if the solution is migrated, then the spj file should contain NO information about the initial conditions. Indeed, any information mentioned in the spj file prevails: if the spj file contains information about the initial conditions, the solution migrated from existing restart files will be overwritten and the resulting phasta files will again include the scratch solution specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
* '''DisplacementMigration''': Migrates the displacement field as well, along with the solution field, for other adaptation strategies. Not used for AdaptStrategy 7, so it can be ignored for now.&lt;br /&gt;
&lt;br /&gt;
* '''Tetrahedronize''': tetrahedronizes a mixed mesh if set to 1. Note that if both AdaptFlag and Tetrahedronize are set to 1, adaptation of the input mixed mesh will take place before tetrahedronization. In all cases, partitioning is always the last mesh operation. But again, an all-tet mesh cannot be further refined, so tetrahedronization should not take place too early in the partitioning workflow, in order to preserve enough aggregated memory for potential future adaptation.&lt;br /&gt;
&lt;br /&gt;
* '''partitionMethod''':  graph or zrib (for Zoltan Recursive Inertial Bisection) are the only available options so far. Graph should be the preferred choice for now.&lt;br /&gt;
&lt;br /&gt;
* '''LocalPtn''':  0 for global partitioning, 1 for local partitioning. Global partitioning coupled with graph as the partition method requires a lot of memory and time and is limited to coarse meshes with a small number of mesh parts. Global RIB is more robust but can lead to larger vertex imbalance. If starting from a well balanced mesh with few or no disconnected parts, local graph is the recommended choice so far.&lt;br /&gt;
&lt;br /&gt;
* '''ParmaPtn''': If set to 1, the load balance in terms of both elements and vertices per part is improved further after the partitioning with Parma. It is strongly suggested to keep ParmaPtn set to 1.&lt;br /&gt;
&lt;br /&gt;
* '''elementImbalance''': target element imbalance for Parma. Use 1.01 to 1.05, which correspond to 1% and 5% respectively.&lt;br /&gt;
&lt;br /&gt;
* '''vertexImbalance''': target vertex imbalance for Parma. Use 1.01 to 1.05, which correspond to 1% and 5% respectively.&lt;br /&gt;
&lt;br /&gt;
* '''dwalMigration''': This parameter is useful when the distance to the wall for a turbulence model such as RANS or DDES has already been computed by phasta. In this case, it is possible to migrate this field as well, along with the solution field. SolutionMigration must therefore be set to 1 for that purpose, since the dwal field cannot be migrated alone without the solution field.&lt;br /&gt;
&lt;br /&gt;
* '''buildMapping''': This computes the vertex mapping between the input and output mesh. It is strongly suggested to keep this parameter always set to 1. Otherwise, you will not be able to reduce your solution from your final partitioning down to the initial or any intermediate mesh (we have developed a tool for that purpose), which can be catastrophic if you are interested in local adaptation based on an error indicator. Note that building the mapping does not make sense if the mesh is uniformly refined so it should be set to 0 in this case.&lt;br /&gt;
&lt;br /&gt;
* '''initBubbles''': Chef will use the external bubble information file 'bubbles.inp' to initialize the level set distance field if this flag is activated.&lt;br /&gt;
&lt;br /&gt;
* '''filterMatches''': If set to 0, the matching between periodic entities is imposed from the mesh file, regardless of the periodic faces defined in the attributes, which could potentially differ. When enabled, this code derives the periodic associations between all model entities based on the &amp;quot;periodic slave&amp;quot; attribute set in the attribute file. Note that the mesh must be periodic to support this feature. This can be useful, for instance, for a three-way periodic channel mesh that can support both 1-way and 3-way periodic attributes without the need to build two distinct meshes for that purpose.&lt;br /&gt;
&lt;br /&gt;
* '''axisymmetry''': When enabled, this parameter supports axisymmetric periodic meshes and attributes, for instance for an annular flow.&lt;br /&gt;
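To tie the critical parameters above together, here is an illustrative adapt.inp fragment. The values and file names below are examples only, not recommendations for any particular case, and obsolete parameters are omitted.&lt;br /&gt;

```
# illustrative adapt.inp fragment -- values and paths are examples only
timeStepNumber 0
ensa_dof 5
attributeFileName geom.spj
modelFileName geom.xmt_txt
meshFileName bz2:mdsMesh_bz2/
outMeshFileName bz2:outMesh_bz2/
adaptFlag 0
splitFactor 2
Tetrahedronize 1
partitionMethod graph
LocalPtn 1
ParmaPtn 1
elementImbalance 1.05
vertexImbalance 1.03
SolutionMigration 0
buildMapping 1
```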
&lt;br /&gt;
= Words of caution=&lt;br /&gt;
When working with large meshes, the executable will occasionally crash through no fault of the user. This has happened in the past when (adapt.inp parameter ensa_ndof)*(number of nodes in the mesh) &amp;gt; (maximum value of the 4-byte integer, 2^31). This was debugged using the addr2line command. Information about this command can be found at https://fluid.colorado.edu/wiki/index.php/Debugging.&lt;br /&gt;
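A back-of-the-envelope check of this product before launching a large case can save a crash. The node count below is a hypothetical value for illustration.&lt;br /&gt;

```shell
# Check whether ensa_ndof * num_nodes overflows a signed 4-byte integer.
# num_nodes here is hypothetical; bash arithmetic is 64-bit, so the
# product itself is computed exactly.
ensa_ndof=5
num_nodes=500000000
limit=$((2**31))
if [ $((ensa_ndof * num_nodes)) -ge "$limit" ]; then
  echo "WARNING: product exceeds 2^31, expect 4-byte integer overflow"
fi
```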
&lt;br /&gt;
[[Category:Chef]]&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=1933</id>
		<title>Chef/Mesh Partitioning</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=1933"/>
				<updated>2023-02-03T19:11:01Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: Added link to documentation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This webpage was first inspired by a tutorial provided to Igor and his team at NCSU in order to set up two-phase flow test cases on a local cluster named Firebird at NCSU and on Cetus/Mira at ALCF. At this time, this tutorial includes copy-paste material from old emails. &lt;br /&gt;
&lt;br /&gt;
The code has evolved since then! If you scroll down, you will also find critical updates since the first tutorial was written. Please do not ignore them, or there is a 100% chance your mesh partitioning/refinement will fail.&lt;br /&gt;
&lt;br /&gt;
Please update this page for our viz nodes when you get a chance. &lt;br /&gt;
&lt;br /&gt;
Thanks, &lt;br /&gt;
&lt;br /&gt;
- Michel&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is a tutorial about how to partition the initial mesh and then generate the phasta files on firebird (and other platforms including Cetus/Mira) using Chef. This tutorial is rather long but should include everything you need.&lt;br /&gt;
The testcase to demonstrate the workflow is the familiar 3-way subchannel flow. The root path of this test case is	/sgidata2/mrasquin/Models/subchannel. The parasolid model is located in /sgidata2/mrasquin/Models/subchannel/convertParasolid2ParasolidNative/geomFromSimmodeler_nat.xmt_txt.&lt;br /&gt;
The workflow that describes how to use Chef is now explained in the next sections.&lt;br /&gt;
&lt;br /&gt;
= Documentation =&lt;br /&gt;
General documentation of Chef that may be used as a supplement to this page is available [https://github.com/SCOREC/core/wiki/chef-partition-control here]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Initial tutorial =&lt;br /&gt;
&lt;br /&gt;
== Env variables==&lt;br /&gt;
&lt;br /&gt;
All the subsequent tools need&lt;br /&gt;
* The fresh version of openmpi I built on firebird&lt;br /&gt;
* The latest Simmetrix library I installed in /Install on firebird.&lt;br /&gt;
&lt;br /&gt;
To update your paths, source the following file:&lt;br /&gt;
&amp;lt;code&amp;gt;/Install/SCOREC.develop/envLinux2014.sh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The env variables defined or updated in this env script include PATH and LD_LIBRARY_PATH. What is defined in this script should prevail over your settings, but I strongly suggest removing any redundancy that you may have, for instance, in your .bashrc. Note that I actually source this env file directly in my .bashrc so that I do not have to do it manually every time I log in to firebird. When you source it, it will also print the versions of gcc, openmpi and the simmodsuite lib that are set up.&lt;br /&gt;
&lt;br /&gt;
== BLMesherParallel ==&lt;br /&gt;
&lt;br /&gt;
Note that Simmetrix only supports matched faces for a single-part mesh, so the mesh must be built with one core. However, the initial mesh must already include some information related to the partitioning, even if the mesh only includes a single part, for format reasons. This additional information about the partitioning is required for conversion of the mesh file from the Simmetrix format to the SCOREC MDS format that Chef can read.&lt;br /&gt;
&lt;br /&gt;
The initial mesh for the 3-way subchannel was built in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0&amp;lt;/code&amp;gt;. Check the script named &amp;lt;code&amp;gt;runBLMesherParallel.sh&amp;lt;/code&amp;gt; in this directory.&lt;br /&gt;
&lt;br /&gt;
Running &amp;lt;code&amp;gt;./runBLMesherParallel.sh&amp;lt;/code&amp;gt; with no arguments will tell you the usage, that is:&lt;br /&gt;
 Usage: ./runBLMesherParallel.sh &amp;lt;X&amp;gt; &amp;lt;Y&amp;gt; &amp;lt;Z&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The arguments are as follows.&lt;br /&gt;
* &amp;lt;X&amp;gt; (geometric model) should be the parasolid model geomFromSimmodeler_nat.xmt_txt.&lt;br /&gt;
* &amp;lt;Y&amp;gt; (attribute file) should be BLattr.inp.&lt;br /&gt;
* &amp;lt;Z&amp;gt; (number of processors) should be 1 here since we need to generate a single part mesh using a single core.&lt;br /&gt;
&lt;br /&gt;
The BLattr.inp input file is the same as the one read by the old serial version of BLMesher. But BLMesherParallel can do whatever the old version of BLMesher can do. In addition, if your test case does not include any matched face, you may try to mesh in parallel by specifying &amp;lt;Z&amp;gt; to be larger than 1. However, some meshing features are available only when BLMesherParallel is used with a single core so it is always important to check the resulting mesh.&lt;br /&gt;
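Putting the three arguments together for the subchannel case, the single-core invocation would look like the sketch below. The echo only prints the command line; the file names are the ones used in this tutorial.&lt;br /&gt;

```shell
# Arguments: model, attribute file, number of processors.
MODEL=geomFromSimmodeler_nat.xmt_txt
ATTR=BLattr.inp
NPROCS=1    # matched faces require a single-part, single-core mesh
echo "./runBLMesherParallel.sh ${MODEL} ${ATTR} ${NPROCS}"
```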
&lt;br /&gt;
BLMesherParallel outputs the following files.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;mesh.sms&amp;lt;/code&amp;gt; --- The resulting mesh is stored in a directory named mesh.sms, which is a parameter hardcoded in the runBLMesherParallel.sh script.&lt;br /&gt;
* &amp;lt;code&amp;gt;BLMesher.log&amp;lt;/code&amp;gt; --- The log from BLMesherParallel is saved in BLMesher.log, whereas the Simmetrix log is saved in mesh.log. Both filenames are also hardcoded in the script.&lt;br /&gt;
&lt;br /&gt;
I also mentioned in previous discussions that Simmetrix has developed its own model format called geomsim. However, the boundary layer collapses near matched faces with this model format, which is not the case when we use the parasolid format. This issue has been reported to Simmetrix but until they can provide a fix, we are forced to start with the parasolid format when our test cases include matched faces.&lt;br /&gt;
&lt;br /&gt;
== Mesh conversion==&lt;br /&gt;
&lt;br /&gt;
Chef can only read the MDS format developed at SCOREC. Therefore, the Simmetrix mesh must first be converted to this format.&lt;br /&gt;
&lt;br /&gt;
This operation was carried out for the 3-way channel in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/simMeshToMdsMesh&amp;lt;/code&amp;gt;. Simply run the script &amp;lt;code&amp;gt;./simMeshToMdsMesh.sh&amp;lt;/code&amp;gt;, which executes the &amp;quot;convert&amp;quot; executable. In the script, you can see that the convert executable reads 3 arguments:&lt;br /&gt;
# The '''input parasolid model''' named geom.xmt_txt, which points to geomFromSimmodeler_nat.x_t. Note that convert expects an .xmt_txt extension (or an .smd extension for the complete geomsim format).&lt;br /&gt;
# The '''input Simmetrix mesh''', named parts.sms here (for historical reasons, but it can be renamed).&lt;br /&gt;
# The '''name of the output mds mesh directory''', which is mdsMesh_bz2 here. Note that this name is prepended with &amp;quot;bz2:&amp;quot;, which means that the output mds mesh file is compressed using bzip2. &amp;quot;bz2:&amp;quot; will not be part of the name of the output directory. If you do not specify &amp;quot;bz2:&amp;quot;, the mds mesh file will be saved in ascii format, which is a waste of space, so I suggest always prepending your directory name with &amp;quot;bz2:&amp;quot;. This will also apply later to the output mesh directory generated by Chef (see below).&lt;br /&gt;
&lt;br /&gt;
Note that convert needs to run with a number of processes (-np ##) equal to the number of input parts in the Simmetrix mesh. For cases that include matched faces, the Simmetrix mesh must include only one part, which is the reason why convert runs here with -np 1. But in other circumstances, convert can run in parallel if the Simmetrix mesh has already been partitioned into n parts with n&amp;gt;1 (for instance, a mesh generated in parallel with BLMesherParallel and/or partitioned with phParAdapt-Simmetrix).&lt;br /&gt;
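For reference, the single-part conversion described above boils down to a command of the following shape. This is a sketch only (the echo just prints the command line); the rank count must match the number of parts in the Simmetrix mesh.&lt;br /&gt;

```shell
# One rank because the matched-face Simmetrix mesh has a single part.
NPARTS=1
echo "mpirun -np ${NPARTS} convert geom.xmt_txt parts.sms bz2:mdsMesh_bz2"
```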
&lt;br /&gt;
== Boundary and initial conditions (spj file)==&lt;br /&gt;
&lt;br /&gt;
Before running Chef for mesh operations such as uniform refinement, tetrahedronization and partitioning, we need to define the BCs and ICs for the generation of the phasta files. These BCs and ICs are defined in an spj file, which is in ASCII to facilitate scripting of BCs/ICs. Most of the attributes you are familiar with from the Simmodeler GUI can be specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
For the 3-way channel flow, see the spj file located in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Simplified_SPJ_file/geom.spj&amp;lt;/code&amp;gt;. Each line corresponds to one attribute that applies to one face.&lt;br /&gt;
&lt;br /&gt;
The structure of the spj file is:&lt;br /&gt;
 # Optional comments anywhere preceded by the pound symbol (#).&lt;br /&gt;
 # For each boundary or initial condition a line as follows:&lt;br /&gt;
 &amp;lt;attribute_name&amp;gt;: &amp;lt;face_id&amp;gt; &amp;lt;dimension&amp;gt; &amp;lt;attribute list&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note the following.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;dimension&amp;gt;&amp;lt;/code&amp;gt;: 2 for a face attribute in 2D, 3 for the initial conditions that applies to the 3D domain. 1D and 0D attributes are also allowed for lines and vertices if needed.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;attribute list&amp;gt;&amp;lt;/code&amp;gt;: typically magnitude and direction, if applicable.&lt;br /&gt;
&lt;br /&gt;
Syntax is strict.&lt;br /&gt;
* No empty line. Each line should be either a comment which starts with the # character, or an attribute.&lt;br /&gt;
* There must be one single space after the colon character.&lt;br /&gt;
* There must be one single space between any numbers.&lt;br /&gt;
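A minimal fragment following these rules is shown below; the attribute names, face IDs and values are purely hypothetical and must be adapted to your model.&lt;br /&gt;

```
# hypothetical spj fragment: attribute_name: face_id dimension attribute_list
traction vector: 82 2 0.0 1.0 0.0 0.0
initial velocity: 1 3 1.0 1.0 0.0 0.0
```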
&lt;br /&gt;
In this example, a zero &amp;quot;traction vector&amp;quot; attribute is specified on the periodic faces parallel to the length of the channel. It is wrong to specify such an attribute on these periodic faces for a 3-way channel, but this was inherited from the 1-way periodic channel where these faces were slip walls instead of periodic faces. I will try to update my test cases in the future. But because we have now continuous integration tools that run every night to verify the Chef code, I will need to update all the cases if I modify the spj file now. So double check the attributes that you need for this model and consider the existing spj file as a source of inspiration rather than the correct spj file for production runs.&lt;br /&gt;
&lt;br /&gt;
== Chef ==&lt;br /&gt;
&lt;br /&gt;
A few rules must be followed to run Chef.&lt;br /&gt;
&lt;br /&gt;
First, the number of mpi processes must be equal to the number of input parts (''this has changed in the newest version of Chef, as described below'').&lt;br /&gt;
&lt;br /&gt;
Second, Chef is threaded with openmp and the total number of output parts after partitioning should be at most equal to the total number of available hardware threads of your machine/allocation. On BGQ, there are 4 hardware threads per core. On a Linux platform such as firebird, the number of hardware threads corresponds to the number of available cores. That said, we have observed that if the number of output parts is equal to the total number of available hardware threads, Chef can hang. It is therefore safer to limit the number of output parts to fewer than the number of available hardware threads. On firebird, this means we should not try to partition a mesh into more than 16 parts.&lt;br /&gt;
&lt;br /&gt;
The next mesh operations will have to take place on Tukey and Cetus/Mira.&lt;br /&gt;
&lt;br /&gt;
The first example of a partitioning with Chef can be found in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch&amp;lt;/code&amp;gt;. With my naming convention, &amp;lt;code&amp;gt;4-1-Chef-PartLocal-Scratch&amp;lt;/code&amp;gt; can be decomposed as follows:&lt;br /&gt;
* The first number (4) corresponds to the number of output parts&lt;br /&gt;
* The second number (1) corresponds to the number of input parts&lt;br /&gt;
* &amp;quot;Chef&amp;quot; means this mesh was treated with this program (in opposition to phParAdapt, phTest, etc which are previous executables that we used for similar purpose).&lt;br /&gt;
* &amp;quot;PartLocal&amp;quot; means the mesh is partitioned locally.&lt;br /&gt;
* &amp;quot;Scratch&amp;quot; means that the initial solution in the resulting phasta files is generated entirely from the spj file defined in a previous section of this tutorial. That is, we are starting a simulation &amp;quot;from scratch,&amp;quot; using the spj file's initial conditions as opposed to a solution migrated from a previous run.&lt;br /&gt;
&lt;br /&gt;
In summary, Chef was used in this directory to partition a single part mesh into 4 parts and the solution in the phasta files was generated directly from scratch using the spj file.&lt;br /&gt;
&lt;br /&gt;
=== Chef's input files ===&lt;br /&gt;
&lt;br /&gt;
The script to run Chef is named runChef.sh in this directory and simply calls the executable. Chef reads everything it needs from two input files called numstart.dat and adapt.inp.&lt;br /&gt;
&lt;br /&gt;
==== numstart.dat ====&lt;br /&gt;
&lt;br /&gt;
Instead of building the initial solution from scratch using the initial conditions defined in the spj file, the user can migrate an existing solution stored in a set of restart files that were saved from a previous phasta simulation. Numstart.dat contains the time step stamp of the input restart files to read in order to migrate a solution.&lt;br /&gt;
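For instance, to migrate the solution stored in restart files stamped with time step 500, numstart.dat would simply contain that number (500 is a hypothetical value here):&lt;br /&gt;

```
500
```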
&lt;br /&gt;
==== adapt.inp ====&lt;br /&gt;
&lt;br /&gt;
This input file contains all the other parameters Chef expects. Note that many of these parameters have been inherited from the old phParAdapt, and are currently obsolete or unused. In what follows, all the parameters available in adapt.inp are listed and the critical parameters are in bold. Any line that starts with # is ignored.&lt;br /&gt;
&lt;br /&gt;
* '''globalP''': obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''timeStepNumber''': this is the time step of the output phasta files that will be generated by Chef. This stamp can be different from the number specified in numstart.dat, which can be practical in some situations. But most of the time, this number is set equal to what is specified in numstart.dat.&lt;br /&gt;
&lt;br /&gt;
* '''ensa_dof''': this corresponds to the number of degrees of freedom in the solution field of the output restart file. Note that it should correspond to the number of initial conditions specified in the spj file if the solution is built from scratch. When the solution is migrated from existing restart files, it should also correspond to the number of dof in the existing solution field. Here, this number is set to 5 for single phase flow with no turbulence model.&lt;br /&gt;
&lt;br /&gt;
* '''attributeFileName''': path to the spj file for the boundary and potentially initial conditions&lt;br /&gt;
&lt;br /&gt;
* '''modelFileName''': path to the geometric model (can be a parasolid or geomsim model on Linux but only geomsim is available on BGQ).&lt;br /&gt;
&lt;br /&gt;
* '''meshFileName''': path to the directory that includes the input mesh files under the SCOREC MDS format. Note that the path must end with a /. This path can also be prepended with &amp;quot;bz2:&amp;quot; to tell the mesh file reader that the files have been compressed. This follows the same convention as described in the mesh conversion section above.&lt;br /&gt;
&lt;br /&gt;
* '''outMeshFileName''': obviously the name of the directory that will include the resulting output mesh files. Note again the trailing / character. The same convention with &amp;quot;bz2:&amp;quot; keyword applies.&lt;br /&gt;
&lt;br /&gt;
* '''restartFileName''': the path to the restart files that need to be read in when solution migration is activated. In this case, the path should look, for instance, like &amp;quot;../4-procs_case/restart&amp;quot;. The phasta reader will then append the time step stamp and the file number to this restartFileName. When there is no solution migration, as in this example, this parameter can be commented out for the sake of clarity.&lt;br /&gt;
&lt;br /&gt;
* '''adaptFlag''': if 0, no mesh adaptation will take place. If set to 1 and AdaptStrategy is set to 7, the mesh will be uniformly refined. Note that adaptation only works with a mixed mesh (with wedges in the BL) and not with an all-tet mesh; tetrahedronization should therefore take place after uniform refinement. Right now, the mixed mesh gets uniformly refined everywhere including the BL, but it is possible to refine uniformly outside the BL only with some light modifications of the code. In the future, we hope to have other adaptation strategies in place in Chef based on local error indicators. If interested in these strategies now, phParAdapt-Simmetrix must be used. If adaptFlag is set to 1, note also that SolutionMigration must also be set to 1 (see below for this parameter) and the path to the restart files specified.&lt;br /&gt;
&lt;br /&gt;
* rRead: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* rStart: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''AdaptStrategy''': This parameter is read if adaptFlag is 1. When set to 7, uniform refinement of a mixed mesh takes place. This is currently the only strategy tested in Chef. If interested in other, more sophisticated adaptation strategies, phParAdapt-Simmetrix must be used for now.&lt;br /&gt;
&lt;br /&gt;
* '''RecursiveUR''': if AdaptStrategy is set to 7, Chef offers the possibility to do recursive uniform refinement within the same job. Beware of the memory consumption if you set this value to more than 1, since the mesh can grow quickly.&lt;br /&gt;
&lt;br /&gt;
* Periodic: obsolete. Periodicity in the mesh and in the solution is now treated automatically as long as i) the mesh built with BLMesher is periodic (i.e. the location of the mesh vertices on periodic faces is the same) and ii) the spj file contains the correct &amp;quot;periodic slave&amp;quot; attributes.&lt;br /&gt;
&lt;br /&gt;
* prCD: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* timing: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* outputFormat: obsolete. Phasta files are saved by default in binary format.&lt;br /&gt;
&lt;br /&gt;
* internalBCNodes: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* WRITEASC: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* phastaIO: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''numTotParts''': Final number of parts. If numTotParts is larger than the number of Chef processes which is equal to the number of input parts, the mesh will be partitioned.&lt;br /&gt;
&lt;br /&gt;
* '''elementsPerMigration''': In order to reduce the memory footprint of Chef, the user can reduce the default number of elements that are migrated at a time during partitioning or partition improvement.&lt;br /&gt;
&lt;br /&gt;
* '''SolutionMigration''': Activates the migration of the solution from an existing set of restart files. In this case, the path to the phasta files that contain the solution to migrate must be specified through the restartFileName parameter (see above). If the mesh is refined, the solution that is migrated will be interpolated to the new vertices of the mesh. Note also that if the solution is migrated, then the spj file should contain NO information about the initial condition. Indeed any information mentioned in the spj file will prevail. Therefore, if the spj file contains information about the initial conditions, the solution migrated from existing restart files will be overwritten and the resulting phasta files will include again the scratch solution specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
* '''DisplacementMigration''': Also migrates the displacement field along with the solution field for other adaptation strategies. Not used for AdaptStrategy 7, so it can be ignored for now.&lt;br /&gt;
&lt;br /&gt;
* isReorder: obsolete/unused. Reordering for better cache performance is now applied by default to both the phasta files and mesh files.&lt;br /&gt;
&lt;br /&gt;
* '''Tetrahedronize''': tetrahedronize a mixed mesh if set to 1. Note that if both AdaptFlag and Tetrahedronize are set to 1, adaptation of the input mixed mesh will take place before tetrahedronization. In all cases, partitioning is always the last mesh operation. But again, an all tet mesh cannot be further refined so tetrahedronization should not take place too early in the partitioning workflow in order to get enough aggregated memory for potential future adaptation.&lt;br /&gt;
&lt;br /&gt;
* numSplit: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''LocalPtn''': local partitioning if set to 1, global partitioning if set to 0. Currently, only local partitioning is implemented in Chef, and it has been shown to be sufficient so far.&lt;br /&gt;
&lt;br /&gt;
* '''RecursivePtn''': should always be set to 1. In the past, this parameter allowed recursive partitioning steps in phParAdapt. The code will stop or crash if this parameter is not 1.&lt;br /&gt;
&lt;br /&gt;
* RecursivePtnStep: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''partitionMethod''': Currently, the GRAPH method for local partitioning is hard-coded in one of the Chef routines.&lt;br /&gt;
&lt;br /&gt;
* '''ParmaPtn''': If set to 1, the load balance in terms of both elements and vertices per part is improved further after the partitioning with Parma. It is strongly suggested to keep ParmaPtn set to 1.&lt;br /&gt;
&lt;br /&gt;
* '''dwalMigration''': This parameter is useful in case the distance to the wall for a turbulence model such as RANS or DDES has already been computed by phasta. In this case, it is possible to migrate also this field along with the solution field. SolutionMigration must therefore be set to 1 for that purpose, since the dwal field cannot be migrated alone without the solution field.&lt;br /&gt;
&lt;br /&gt;
* '''buildMapping''': This computes the vertex mapping between the input and output mesh. It is strongly suggested to keep this parameter always set to 1. Otherwise, you will not be able to reduce your solution from your final partitioning down to the initial or any intermediate mesh (we have developed a tool for that purpose), which can be catastrophic if you are interested in local adaptation based on an error indicator. Note that building the mapping does not make sense if the mesh is uniformly refined so it should be set to 0 in this case.&lt;br /&gt;
&lt;br /&gt;
* '''initBubbles''': Chef will use the external bubble information file 'bubbles.inp' to initialize the level set distance field if this flag is activated.&lt;br /&gt;
&lt;br /&gt;
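Putting the critical parameters above together, a minimal adapt.inp for a from-scratch case (no adaptation, no solution migration) might look as follows. This is only a sketch: the values are hypothetical, and the exact key/value syntax should be checked against an existing working adapt.inp such as the representative one saved in the runscripts directory.&lt;br /&gt;

```
# hypothetical minimal adapt.inp sketch -- verify syntax against a working example
timeStepNumber 0
ensa_dof 5
attributeFileName geom.spj
modelFileName geom.xmt_txt
meshFileName mdsMesh/
outMeshFileName outMesh/
adaptFlag 0
numTotParts 8
SolutionMigration 0
Tetrahedronize 1
LocalPtn 1
RecursivePtn 1
ParmaPtn 1
buildMapping 1
```
&lt;br /&gt;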
The second example of a partitioning with Chef can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-Tet-PartLocal-SolMgr. For this case, based on the naming convention of 8-4-Chef-Tet-PartLocal-SolMgr (and the parameters specified in adapt.inp and numstart.dat),&lt;br /&gt;
* the number of output parts requested is 8, &lt;br /&gt;
* the number of input parts is 4 (note &amp;quot;-np 4&amp;quot; in the runChef.sh script),&lt;br /&gt;
* the input mixed mesh is first tetrahedronized before being partitioned. &lt;br /&gt;
* the solution in the resulting phasta files is migrated from the previous Chef run. &lt;br /&gt;
Note that the spj file is different for this second example and the initial conditions have been commented out in order not to overwrite the solution that is migrated from the previous Chef run.&lt;br /&gt;
&lt;br /&gt;
The third and final example can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-UR2-Tet-PartLocal-SolMgr. In this directory 8-4-Chef-UR2-Tet-PartLocal-SolMgr, Chef &lt;br /&gt;
* reads a four part mesh, &lt;br /&gt;
* applies a double recursive uniform refinement, &lt;br /&gt;
* tetrahedronizes the resulting mixed mesh that has been uniformly refined twice, &lt;br /&gt;
* partitions the resulting 4-part all-tet uniformly refined mesh into 8 parts,&lt;br /&gt;
* migrates and interpolates the solution read from existing restart files coming from the first example.&lt;br /&gt;
&lt;br /&gt;
As a final comment, note that the restart files are always read directly from a procs_case directory. However, when the number of output restart files exceeds 2048, the restart files are saved in subdirectories of the root procs_case directory in order to reduce file contention, in a similar (but still different) way to what you may have implemented at some point in your version of phasta. The best strategy would be to write phasta files using MPI-IO, for instance, so that more than one part can be stored in a single file, avoiding a large number of phasta files.&lt;br /&gt;
&lt;br /&gt;
For further partitioning on BG/Q machines a conversion to the native Parasolid model is required. The tool is located in: /Install/SCOREC.develop/scorec/test/cadToSim/cadToSim &lt;br /&gt;
and should be run from [Case directory]/convertParasolid2ParasolidNative/ on firebird.&lt;br /&gt;
&lt;br /&gt;
= Updated Chef version (2015/03/26)=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) MPI implementation&lt;br /&gt;
&lt;br /&gt;
A new version of chef has been implemented that no longer relies on threads.&lt;br /&gt;
Instead, it is now based on a pure MPI implementation. &lt;br /&gt;
That means that there is an important change in how chef is called at runtime.&lt;br /&gt;
&lt;br /&gt;
With the previous threaded version, the number of MPI processes had to be equal to the number of input parts. &lt;br /&gt;
Chef was then in charge of starting a number of threads equal to the number of output parts, which was automatic.&lt;br /&gt;
&lt;br /&gt;
Since the pure MPI version of chef no longer starts threads, it now requires a number of MPI processes equal to the final number of output parts, not input parts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2) adapt.inp&lt;br /&gt;
&lt;br /&gt;
In the new version of chef, &amp;quot;numTotParts&amp;quot; in adapt.inp (which was used to specify the final number of output parts) has been replaced by &amp;quot;splitFactor&amp;quot;, which corresponds to the ratio of the number of output parts to the number of input parts. &lt;br /&gt;
If you set this parameter to 1, the mesh will not be split and the number of output parts will be equal to the number of input parts. &lt;br /&gt;
If you set this parameter to 2, each part of your input mesh will be split into 2 new sub-parts, and so on.&lt;br /&gt;
Keep in mind that the number of MPI processes requested for chef must therefore be equal to (number of input parts) * (splitFactor).&lt;br /&gt;
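The rule above can be sketched with a short shell computation (the part counts here are hypothetical, and &amp;quot;chef&amp;quot; stands for the path to your chef executable):&lt;br /&gt;

```shell
# MPI processes for chef = (number of input parts) * (splitFactor)
input_parts=4
split_factor=2
nprocs=$((input_parts * split_factor))
echo "mpirun -np $nprocs chef"   # with these values: mpirun -np 8 chef
```
&lt;br /&gt;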
&lt;br /&gt;
I have also removed the obsolete parameter in adapt.inp and saved a representative version of this file in /projects/tools/SCOREC.develop/runscripts/adapt.inp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3) Paths&lt;br /&gt;
&lt;br /&gt;
I have updated chef on the Viz nodes, Mira and Tukey so that it only relies on the more robust pure MPI implementation.&lt;br /&gt;
&lt;br /&gt;
On the viz nodes, use /projects/tools/SCOREC.develop/build-chefMPI-GNU-*/test/chef&lt;br /&gt;
For simplicity, this is the default version of the master branch coming directly from our github repository.&lt;br /&gt;
&lt;br /&gt;
On Tukey, use /home/mrasquin/SCOREC.develop/build-tukey-GNU-OptG-c2c360bc-mpi-*&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35-noblsnap means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is off during uniform refinement (UR).&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35 means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is on during UR.&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol33 means that the target imbalance for both the vtx and elem is 3%, and BL snapping is on during UR.&lt;br /&gt;
Note that these versions have been slightly modified w.r.t. the master branch. In particular, the imbalance target is not a parameter yet. Also, in Parma, HPS (Heavy Part Splitting) and FixDisconnectedPart are not called here because the latest version of the diffusion algorithm, with improved selection of (i) target parts for element exchange and (ii) elements to exchange, is used instead.&lt;br /&gt;
&lt;br /&gt;
On Mira, use /home/mrasquin/SCOREC.develop/build-XL-OptG-c2c360bc-mpi-*&lt;br /&gt;
Similar comments apply to build-XL-OptG-c2c360bc-mpi-tol33, build-XL-OptG-c2c360bc-mpi-tol35 and build-XL-OptG-c2c360bc-mpi-tol35-noblsnap.&lt;br /&gt;
&lt;br /&gt;
Note that BL snapping is not called for a repartitioning of the mesh. It can only play a role during uniform refinement.&lt;br /&gt;
Consequently, if you do not request a UR in adapt.inp, then build-*-tol35 and build-*-tol35-noblsnap will behave the same way.&lt;br /&gt;
&lt;br /&gt;
In case you are wondering about the numbers in the name of the build directory, they come from the git commit hash, a unique identifier associated with a git commit (which makes it easier to couple an executable with a version of the code).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Updated Chef version (2015/05/29 and 2016/04/05)=&lt;br /&gt;
&lt;br /&gt;
Updated list of useful parameters in adapt.inp&lt;br /&gt;
&lt;br /&gt;
* '''timeStepNumber''': this is the time step stamp of the output phasta files that will be generated by Chef. This stamp can differ from the number specified in numstart.dat, which can be practical in some situations, but most of the time it is set equal to the value in numstart.dat.&lt;br /&gt;
&lt;br /&gt;
* '''ensa_dof''': this corresponds to the number of degrees of freedom in the solution field of the output restart file. Note that it should correspond to the number of initial conditions specified in the spj file if the solution is built from scratch. When the solution is migrated from existing restart files, it should also correspond to the number of dof in the existing solution field. Here, this number is set to 5 for single phase flow with no turbulence model.&lt;br /&gt;
&lt;br /&gt;
* '''attributeFileName''': path to the spj file for the boundary and potentially initial conditions&lt;br /&gt;
&lt;br /&gt;
* '''modelFileName''': path to the geometric model (can be a parasolid or geomsim model on Linux but only geomsim is available on BGQ).&lt;br /&gt;
&lt;br /&gt;
* '''meshFileName''': path to the directory that includes the input mesh files under the SCOREC MDS format. Note that the path must end with a /. This path can also be prepended by &amp;quot;bz2:&amp;quot; to tell the mesh file reader that the files have been compressed. This follows the same convention as mentioned in 3)&lt;br /&gt;
&lt;br /&gt;
* '''outMeshFileName''': the name of the directory that will contain the resulting output mesh files. Note again the trailing / character. The same convention with the &amp;quot;bz2:&amp;quot; keyword applies.&lt;br /&gt;
&lt;br /&gt;
* '''restartFileName''': the path to the restart files that need to be read in when solution migration is activated. In this case, the path should look, for instance, like &amp;quot;../4-procs_case/restart&amp;quot;. The phasta reader will then append the time step stamp and the file number to this restartFileName. When there is no solution migration, as in this example, this parameter can be commented out for the sake of clarity.&lt;br /&gt;
&lt;br /&gt;
* '''adaptFlag''': if 0, no mesh adaptation will take place. If set to 1 and AdaptStrategy is set to 7, the mesh will be uniformly refined. A mixed mesh can now be refined uniformly either everywhere including the BL, or only outside the BL (see the parameter SplitAllLayerEdges below). Other adaptation strategies based on local error indicators are being developed in Chef and will be complementary to the existing strategies available in phParAdapt-Simmetrix. If adaptFlag is set to 1, note also that SolutionMigration must also be set to 1 (see below for this parameter) and the path to the restart files specified.&lt;br /&gt;
&lt;br /&gt;
* '''AdaptStrategy''': This parameter is read if adaptFlag is 1. When set to 7, uniform refinement of a mixed mesh takes place. This is currently the only strategy tested and validated in Chef.&lt;br /&gt;
&lt;br /&gt;
* '''RecursiveUR''': if AdaptStrategy is set to 7, Chef offers the possibility to do recursive uniform refinement within the same job. Beware of the memory consumption if you set this value to more than 1, since the mesh can grow quickly.&lt;br /&gt;
&lt;br /&gt;
* '''SplitAllLayerEdges''': This parameter is only applicable to mixed meshes during uniform refinement. If set to 1, all mesh edges are refined uniformly, including the edges along the normal growth curves (wedges) of the BL. If set to 0, all edges are refined uniformly except those along the normal growth curves (tets only). For all-tet meshes, this parameter is ignored and all the tets get split.&lt;br /&gt;
&lt;br /&gt;
* '''Snap''': If set to 1 during a uniform refinement, Chef will attempt snapping to the model surface. Use with caution and check the resulting mesh: if the input mixed mesh is too coarse, snapping can be partially ignored. Invalid meshes have also sometimes been observed (causing phasta to crash).&lt;br /&gt;
&lt;br /&gt;
* '''splitFactor''': ratio of the number of output parts to the number of input parts. If you set this parameter to 1, the mesh will not be split and the number of output parts will be equal to the number of input parts.&lt;br /&gt;
&lt;br /&gt;
* '''elementsPerMigration''': In order to reduce the memory footprint of Chef, the user can reduce the default number of elements that are migrated at a time during partitioning or partition improvement.&lt;br /&gt;
&lt;br /&gt;
* '''SolutionMigration''': Activates the migration of the solution from an existing set of restart files. In this case, the path to the phasta files that contain the solution to migrate must be specified through the restartFileName parameter (see above). If the mesh is refined, the solution that is migrated will be interpolated to the new vertices of the mesh. Note also that if the solution is migrated, then the spj file should contain NO information about the initial condition. Indeed any information mentioned in the spj file will prevail. Therefore, if the spj file contains information about the initial conditions, the solution migrated from existing restart files will be overwritten and the resulting phasta files will include again the scratch solution specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
* '''DisplacementMigration''': Also migrates the displacement field along with the solution field for other adaptation strategies. Not used for AdaptStrategy 7, so it can be ignored for now.&lt;br /&gt;
&lt;br /&gt;
* '''Tetrahedronize''': tetrahedronize a mixed mesh if set to 1. Note that if both AdaptFlag and Tetrahedronize are set to 1, adaptation of the input mixed mesh will take place before tetrahedronization. In all cases, partitioning is always the last mesh operation. But again, an all tet mesh cannot be further refined so tetrahedronization should not take place too early in the partitioning workflow in order to get enough aggregated memory for potential future adaptation.&lt;br /&gt;
&lt;br /&gt;
* '''partitionMethod''': graph or zrib (for Zoltan Recursive Inertial Bisection) are the available options so far. Graph should be the preferred choice for now.&lt;br /&gt;
&lt;br /&gt;
* '''LocalPtn''':  0 for global partitioning, 1 for local partitioning. Global partitioning coupled with graph as the partition method requires a lot of memory and time and is limited to coarse meshes with a small number of mesh parts. Global RIB is more robust but can lead to larger vertex imbalance. If starting from a well balanced mesh with few or no disconnected parts, local graph is the recommended choice so far.&lt;br /&gt;
&lt;br /&gt;
* '''ParmaPtn''': If set to 1, the load balance in terms of both elements and vertices per part is improved further after the partitioning with Parma. It is strongly suggested to keep ParmaPtn set to 1.&lt;br /&gt;
&lt;br /&gt;
* '''elementImbalance''': target element imbalance for Parma. Use 1.01 to 1.05, which correspond to 1% and 5% respectively.&lt;br /&gt;
&lt;br /&gt;
* '''vertexImbalance''': target vertex imbalance for Parma. Use 1.01 to 1.05, which correspond to 1% and 5% respectively.&lt;br /&gt;
&lt;br /&gt;
* '''dwalMigration''': This parameter is useful in case the distance to the wall for a turbulence model such as RANS or DDES has already been computed by phasta. In this case, it is possible to migrate also this field along with the solution field. SolutionMigration must therefore be set to 1 for that purpose, since the dwal field cannot be migrated alone without the solution field.&lt;br /&gt;
&lt;br /&gt;
* '''buildMapping''': This computes the vertex mapping between the input and output mesh. It is strongly suggested to keep this parameter always set to 1. Otherwise, you will not be able to reduce your solution from your final partitioning down to the initial or any intermediate mesh (we have developed a tool for that purpose), which can be catastrophic if you are interested in local adaptation based on an error indicator. Note that building the mapping does not make sense if the mesh is uniformly refined so it should be set to 0 in this case.&lt;br /&gt;
&lt;br /&gt;
* '''initBubbles''': Chef will use the external bubble information file 'bubbles.inp' to initialize the level set distance field if this flag is activated.&lt;br /&gt;
&lt;br /&gt;
* '''filterMatches''': If set to 0, the matchings between periodic entities are imposed from the mesh file, regardless of the periodic faces defined in the attributes, which could potentially differ. When enabled, the code derives the periodic associations between all model entities based on the &amp;quot;periodic slave&amp;quot; attribute set in the attribute file. Note that the mesh must be periodic to support this feature. This can be useful, for instance, for a three-way periodic channel mesh that can support both 1-way and 3-way periodic attributes without the need to build two distinct meshes for that purpose.&lt;br /&gt;
&lt;br /&gt;
* '''axisymmetry''': When enabled, this parameter supports an axisymmetric periodic mesh and attributes, for instance for an annular flow.&lt;br /&gt;
&lt;br /&gt;
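As an illustrative sketch only (the values are hypothetical, and the exact syntax should be checked against a current working adapt.inp), the partitioning-related parameters above might be set as:&lt;br /&gt;

```
# hypothetical adapt.inp fragment for the updated chef
splitFactor 2
partitionMethod graph
LocalPtn 1
ParmaPtn 1
elementImbalance 1.05
vertexImbalance 1.03
```
&lt;br /&gt;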
= Words of caution=&lt;br /&gt;
When working with large meshes, the executable will occasionally crash through no fault of the user. This has happened in the past when (the adapt.inp parameter ensa_dof) * (number of nodes in the mesh) &amp;gt; (maximum value of a signed 4-byte integer, 2^31 - 1). This was debugged using the addr2line command. Information about this command can be found at https://fluid.colorado.edu/wiki/index.php/Debugging.&lt;br /&gt;
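The overflow condition above can be checked before launching a large job. The sketch below uses a hypothetical node count (bash does this arithmetic in 64-bit, so the product itself does not overflow here):&lt;br /&gt;

```shell
# Does ensa_dof * (number of mesh nodes) exceed a signed 4-byte integer?
ensa_dof=5
num_nodes=500000000        # hypothetical node count for illustration
product=$((ensa_dof * num_nodes))
limit=$((2**31 - 1))       # 2147483647
if [ "$product" -gt "$limit" ]; then
  echo "WARNING: $product exceeds the 32-bit limit $limit"
fi
```
&lt;br /&gt;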
&lt;br /&gt;
[[Category:Chef]]&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1932</id>
		<title>Convert</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1932"/>
				<updated>2023-02-02T22:24:43Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: /* Model Convert */ Know issue addition&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Convert is a tool used to convert geometric model files into a file type usable with the preprocessing tools utilized in this group. &lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
The main goal of Convert is to take a Simmetrix model and mesh and convert them to &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files, though there are also versions that will push the model to &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; format.&lt;br /&gt;
&lt;br /&gt;
For all of the options below, also note that you should always use a version of convert built for the specific version of Simmodeler used to generate the mesh. This is most easily achieved by using up-to-date versions.&lt;br /&gt;
&lt;br /&gt;
== Convert ==&lt;br /&gt;
Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt;, a &amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt;, and a &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; file and outputs &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files, and a &amp;lt;code&amp;gt;./mdsMesh&amp;lt;/code&amp;gt; directory. A specific invocation will look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt; mpirun -np 1 /projects/tools/SCOREC-core/buildTestMergeEWT/test/convert --model-face-root=4321 --native_model=geom.xmt_txt geom.smd geom.sms mdsMesh/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where the root face is the face which holds the original meshing attributes in Simmodeler (extrusion meshing from within simmodeler would originate from this face).&lt;br /&gt;
&lt;br /&gt;
=== Other Functionality ===&lt;br /&gt;
&lt;br /&gt;
Note that Convert can take in multiple model regions if that is required for a geometry. This is available in the main branch and simply takes more than one model root face argument, supplied in a separate file. This file should list all of the root faces in a newline-delimited format, and the call to convert changes to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt; mpirun -np 1 /projects/tools/SCOREC-core/buildTestMergeEWT/test/convert --model-face-root=ExtruRootID.txt --native_model=geom.xmt_txt geom.smd geom.sms mdsMesh/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;ExtruRootID.txt&amp;lt;/code&amp;gt; is the file containing the root face IDs.&lt;br /&gt;
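For example, a hypothetical &amp;lt;code&amp;gt;ExtruRootID.txt&amp;lt;/code&amp;gt; listing two root faces would simply contain one ID per line (the IDs below are illustrative; use the face IDs from your own model):&lt;br /&gt;

```
4321
4322
```
&lt;br /&gt;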
&lt;br /&gt;
&lt;br /&gt;
== Model Convert ==&lt;br /&gt;
Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; file and outputs a &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; file. This file type simply stores information about model faces, edges, and vertices, and their relationships to each other. This is needed to classify mesh points.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Model Convert is a part of Chef, and by default a simple version will be built in the process of building Chef. There are also standalone builds of the tool that are required to be built for unique geometries, for instance, for the Gust Wing project, a version of the tool for closed test section slices is available at &amp;lt;code&amp;gt;/projects/tools/SCOREC-core/build-14-190604dev_omp110/test/mdlConvert&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
See MgenExtru_MGENClassificationAirfoilPt2 video--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that there is a known issue with SimModeler where performing certain geometric transformations and adjustments can cause the &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;_nat.x_t&amp;lt;/code&amp;gt; files to have disagreeing model tags for the same model entity. In this scenario, other tools will lean on the information in the &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt;, so it is best to use mdlConvert with a &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; file as input.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1931</id>
		<title>Convert</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1931"/>
				<updated>2023-02-02T22:17:18Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: /* Model Convert */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Convert is a tool used to convert geometric model files into a file type usable with the preprocessing tools utilized in this group. &lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
The main goal of Convert is to take a Simmetrix model and mesh and convert them to &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files, though there are also versions that will push the model to &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; format.&lt;br /&gt;
&lt;br /&gt;
For all of the options below, also note that you should always use a version of convert built for the specific version of Simmodeler used to generate the mesh. This is most easily achieved by using up-to-date versions.&lt;br /&gt;
&lt;br /&gt;
== Convert ==&lt;br /&gt;
Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt;, a &amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt;, and a &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; file and outputs &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files, and a &amp;lt;code&amp;gt;./mdsMesh&amp;lt;/code&amp;gt; directory. A specific invocation will look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt; mpirun -np 1 /projects/tools/SCOREC-core/buildTestMergeEWT/test/convert --model-face-root=4321 --native_model=geom.xmt_txt geom.smd geom.sms mdsMesh/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where the root face is the face which holds the original meshing attributes in Simmodeler (extrusion meshing from within simmodeler would originate from this face).&lt;br /&gt;
&lt;br /&gt;
=== Other Functionality ===&lt;br /&gt;
&lt;br /&gt;
Note that Convert can take in multiple model regions if that is required for a geometry. This is available in the main branch and simply takes more than one model root face argument, supplied in a separate file. This file should list all of the root faces in a newline-delimited format, and the call to convert changes to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt; mpirun -np 1 /projects/tools/SCOREC-core/buildTestMergeEWT/test/convert --model-face-root=ExtruRootID.txt --native_model=geom.xmt_txt geom.smd geom.sms mdsMesh/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;ExtruRootID.txt&amp;lt;/code&amp;gt; is the file containing the root face IDs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Model Convert ==&lt;br /&gt;
Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; file and outputs a &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; file. This file type simply stores information about model faces, edges, and vertices, and their relationships to each other. This is needed to classify mesh points.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Model Convert is a part of Chef, and by default a simple version will be built in the process of building Chef. There are also standalone builds of the tool that are required to be built for unique geometries, for instance, for the Gust Wing project, a version of the tool for closed test section slices is available at &amp;lt;code&amp;gt;/projects/tools/SCOREC-core/build-14-190604dev_omp110/test/mdlConvert&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
See MgenExtru_MGENClassificationAirfoilPt2 video--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Exporting_Parasolid_from_SolidWorks&amp;diff=1930</id>
		<title>Exporting Parasolid from SolidWorks</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Exporting_Parasolid_from_SolidWorks&amp;diff=1930"/>
				<updated>2023-01-12T21:53:36Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Save your model as a parasolid from [[SolidWorks]]. Note that you will want the geometry as close to ready for meshing as possible, as performing model &amp;quot;surgery&amp;quot; in SimModeler is not always straightforward. The file output from SolidWorks will have the format &amp;lt;code&amp;gt;&amp;lt;file_name&amp;gt;.x_t&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you do not have a parasolid model of your own, you may use the On Ramp example file located at:&lt;br /&gt;
&lt;br /&gt;
 /projects/tutorials/OnRamp/example_geom.x_t&lt;br /&gt;
&lt;br /&gt;
Ensure that you are on one of the viznodes and not portal1. You may tunnel to viz003 by opening a terminal and running:&lt;br /&gt;
&lt;br /&gt;
 vglconnect -s viz003&lt;br /&gt;
&lt;br /&gt;
Navigate to and copy your file into your working directory. Typically, we create a folder where all the simulation files are stored. For example, after opening a terminal I could run the commands:&lt;br /&gt;
&lt;br /&gt;
 mkdir Demo&lt;br /&gt;
 cd Demo&lt;br /&gt;
&lt;br /&gt;
This would place me in my working directory 'Demo'. &lt;br /&gt;
&lt;br /&gt;
Next, you'll want to change the parasolid file extension from &amp;lt;code&amp;gt;.x_t&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt;. To do this, run &amp;lt;code&amp;gt;mv &amp;lt;file_name&amp;gt;.x_t &amp;lt;file_name&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; from your terminal. From here, you are ready for the ''convert'' step. Copy over the convert script by running:&lt;br /&gt;
&lt;br /&gt;
 cp /projects/tutorials/OnRamp/convertParasolid2Sim.sh .&lt;br /&gt;
&lt;br /&gt;
The convert step is documented here: https://fluid.colorado.edu/tutorials/tutorialVideos/Convert2Sim_Tutorial.mp4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Summary of video:''' &lt;br /&gt;
 1. Ensure &amp;lt;code&amp;gt;convertParasolid2Sim.sh&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;&amp;lt;file name&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; are in your working directory. &lt;br /&gt;
&lt;br /&gt;
 2. Set environment with soft adds found in &amp;lt;code&amp;gt;more ~kjansen/soft-core.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 3. Run &amp;lt;code&amp;gt;./convertParasolid2Sim.sh &amp;lt;file name&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; in your terminal&lt;br /&gt;
&lt;br /&gt;
 4. Convert step is complete and your directory now contains 3 new files: &amp;lt;code&amp;gt;model.smd&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;relations.log&amp;lt;/code&amp;gt;, &amp;amp; &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt;. The &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file is the one we need moving forward in this tutorial.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the convert step is complete, you are ready to move on to the next step and use [[Getting Started with Simmodeler| SimModeler]] to create a mesh for the new &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file we created!&lt;br /&gt;
&lt;br /&gt;
== Geometries with Standalone Surfaces ==&lt;br /&gt;
&lt;br /&gt;
There are sometimes cases where models will contain not only three-dimensional solid bodies but also surfaces. Surfaces are treated differently from solid bodies by SolidWorks and parasolid files, so they need to be exported separately. To do this, select all entities except the surface and save as a parasolid. After you hit save, SolidWorks will ask whether you want to save all of the geometry or only the selected geometry; the second option is the one we want. Now do the same but select only the surface. In newer versions of SimModeler, only the domain needs to be run through the conversion, and the surfaces can be added as parasolid files. In older versions, these two separate parasolid files needed to be converted to &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; separately (remember to rename files after they are converted to avoid overwriting) and recombined in SimModeler. The toolchain seems to fail for multiple unconnected surfaces, so if this is the case, take care to export them separately.&lt;br /&gt;
&lt;br /&gt;
To recombine, open the solid body file in SimModeler; under the &amp;quot;Modeling&amp;quot; tab, select &amp;quot;Add Parts&amp;quot;, add the surfaces file, then select &amp;quot;Make New Manifold Model&amp;quot;, which will combine the files into one model suitable for PHASTA.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Exporting_Parasolid_from_SolidWorks&amp;diff=1929</id>
		<title>Exporting Parasolid from SolidWorks</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Exporting_Parasolid_from_SolidWorks&amp;diff=1929"/>
				<updated>2023-01-12T21:50:43Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Save your model as a parasolid from [[SolidWorks]]. Note that you will want the geometry as close to ready for meshing as possible, as performing model &amp;quot;surgery&amp;quot; in SimModeler is not always straightforward. The file output from SolidWorks will have the format &amp;lt;code&amp;gt;&amp;lt;file_name&amp;gt;.x_t&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you do not have a parasolid model of your own, you may use the On Ramp example file located at:&lt;br /&gt;
&lt;br /&gt;
 /projects/tutorials/OnRamp/example_geom.x_t&lt;br /&gt;
&lt;br /&gt;
Ensure that you are on one of the viznodes and not portal1. You may tunnel to viz003 by opening a terminal and running:&lt;br /&gt;
&lt;br /&gt;
 vglconnect -s viz003&lt;br /&gt;
&lt;br /&gt;
Navigate to and copy your file into your working directory. Typically, we create a folder where all the simulation files are stored. For example, after opening a terminal I could run the commands:&lt;br /&gt;
&lt;br /&gt;
 mkdir Demo&lt;br /&gt;
 cd Demo&lt;br /&gt;
&lt;br /&gt;
This would place me in my working directory 'Demo'. &lt;br /&gt;
&lt;br /&gt;
Next, you'll want to change the parasolid file extension from &amp;lt;code&amp;gt;.x_t&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt;. To do this, run &amp;lt;code&amp;gt;mv &amp;lt;file_name&amp;gt;.x_t &amp;lt;file_name&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; from your terminal. From here, you are ready for the ''convert'' step. Copy over the convert script by running:&lt;br /&gt;
&lt;br /&gt;
 cp /projects/tutorials/OnRamp/convertParasolid2Sim.sh .&lt;br /&gt;
&lt;br /&gt;
The convert step is documented here: https://fluid.colorado.edu/tutorials/tutorialVideos/Convert2Sim_Tutorial.mp4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Summary of video:''' &lt;br /&gt;
 1. Ensure &amp;lt;code&amp;gt;convertParasolid2Sim.sh&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;&amp;lt;file name&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; are in your working directory. &lt;br /&gt;
&lt;br /&gt;
 2. Set environment with soft adds found in &amp;lt;code&amp;gt;more ~kjansen/soft-core.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 3. Run &amp;lt;code&amp;gt;./convertParasolid2Sim.sh &amp;lt;file name&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; in your terminal&lt;br /&gt;
&lt;br /&gt;
 4. Convert step is complete and your directory now contains 3 new files: &amp;lt;code&amp;gt;model.smd&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;relations.log&amp;lt;/code&amp;gt;, &amp;amp; &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt;. The &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file is the one we need moving forward in this tutorial.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the convert step is complete, you are ready to move on to the next step and use [[Getting Started with Simmodeler| SimModeler]] to create a mesh for the new &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file we created!&lt;br /&gt;
&lt;br /&gt;
== Geometries with Standalone Surfaces ==&lt;br /&gt;
&lt;br /&gt;
There are sometimes cases where models will contain not only three-dimensional solid bodies but also surfaces. Surfaces are treated differently from solid bodies by SolidWorks and parasolid files, so they need to be exported separately. To do this, select all entities except the surface and save as a parasolid. After you hit save, SolidWorks will ask whether you want to save all of the geometry or only the selected geometry; the second option is the one we want. Now do the same but select only the surface. These two separate files can then be converted to &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; separately (remember to rename files after they are converted to avoid overwriting) and recombined in SimModeler. The toolchain seems to fail for multiple unconnected surfaces, so if this is the case, take care to export them separately.&lt;br /&gt;
&lt;br /&gt;
To recombine, open the solid body file in SimModeler; under the &amp;quot;Modeling&amp;quot; tab, select &amp;quot;Add Parts&amp;quot;, add the surfaces file, then select &amp;quot;Make New Manifold Model&amp;quot;, which will combine the files into one model suitable for PHASTA.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1928</id>
		<title>Convert</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1928"/>
				<updated>2023-01-05T21:38:12Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Convert is a tool used to convert geometric model files into file types usable with the preprocessing tools used in this group. &lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
The main goal of Convert is to take a Simmetrix model and mesh and convert it to &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files, though there are also versions that will push the model to &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; format.&lt;br /&gt;
&lt;br /&gt;
For all of the options below, also note that you should always use a version of convert built for the specific version of Simmodeler used to generate the mesh. This is most easily achieved by using up-to-date versions.&lt;br /&gt;
&lt;br /&gt;
== Convert ==&lt;br /&gt;
Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt;, a &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt;, and a &amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file and outputs &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files and a &amp;lt;code&amp;gt;./mdsMesh&amp;lt;/code&amp;gt; directory. A specific invocation will look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt; mpirun -np 1 /projects/tools/SCOREC-core/buildTestMergeEWT/test/convert --model-face-root=4321 --native_model=geom.xmt_txt geom.smd geom.sms mdsMesh/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where the root face is the face that holds the original meshing attributes in SimModeler (extrusion meshing from within SimModeler would originate from this face).&lt;br /&gt;
&lt;br /&gt;
=== Other Functionality ===&lt;br /&gt;
&lt;br /&gt;
Note that Convert can take in multiple model regions if that is required for a geometry. This is available in the main branch and simply takes more than one model root face argument, supplied in a separate file. This file should list all of the root faces in a newline-delimited format, and the call to convert changes to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt; mpirun -np 1 /projects/tools/SCOREC-core/buildTestMergeEWT/test/convert --model-face-root=ExtruRootID.txt --native_model=geom.xmt_txt geom.smd geom.sms mdsMesh/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;ExtruRootID.txt&amp;lt;/code&amp;gt; is the file containing the root face IDs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Model Convert ==&lt;br /&gt;
Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; file and outputs a &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; file. This file type simply stores information about model faces, edges, and vertices, and their relationships to each other. This is needed to classify mesh points.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Model Convert is a part of Chef, and by default a simple version will be built in the process of building Chef. There are also standalone builds of the tool that are required to be built for unique geometries, for instance, for the Gust Wing project, a version of the tool for closed test section slices is available at &amp;lt;code&amp;gt;/projects/tools/SCOREC-core/build-14-190604dev_omp110/test/mdlConvert&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
See MgenExtru_MGENClassificationAirfoilPt2 video--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1927</id>
		<title>Convert</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1927"/>
				<updated>2023-01-05T21:37:32Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Convert is a tool used to convert geometric model files into file types usable with the preprocessing tools used in this group. &lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
The main goal of Convert is to take a Simmetrix model and mesh and convert it to &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files, though there are also versions that will push the model to &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; format.&lt;br /&gt;
&lt;br /&gt;
For all of the options below, also note that you should always use a version of convert built for the specific version of Simmodeler used to generate the mesh. This is most easily achieved by using up-to-date versions.&lt;br /&gt;
&lt;br /&gt;
== Basic Usage ==&lt;br /&gt;
Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt;, a &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt;, and a &amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file and outputs &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files and a &amp;lt;code&amp;gt;./mdsMesh&amp;lt;/code&amp;gt; directory. A specific invocation will look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt; mpirun -np 1 /projects/tools/SCOREC-core/buildTestMergeEWT/test/convert --model-face-root=4321 --native_model=geom.xmt_txt geom.smd geom.sms mdsMesh/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where the root face is the face that holds the original meshing attributes in SimModeler (extrusion meshing from within SimModeler would originate from this face).&lt;br /&gt;
&lt;br /&gt;
=== Other Functionality ===&lt;br /&gt;
&lt;br /&gt;
Note that Convert can take in multiple model regions if that is required for a geometry. This is available in the main branch and simply takes more than one model root face argument, supplied in a separate file. This file should list all of the root faces in a newline-delimited format, and the call to convert changes to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt; mpirun -np 1 /projects/tools/SCOREC-core/buildTestMergeEWT/test/convert --model-face-root=ExtruRootID.txt --native_model=geom.xmt_txt geom.smd geom.sms mdsMesh/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;ExtruRootID.txt&amp;lt;/code&amp;gt; is the file containing the root face IDs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Model Convert ==&lt;br /&gt;
Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; file and outputs a &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; file. This file type simply stores information about model faces, edges, and vertices, and their relationships to each other. This is needed to classify mesh points.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Model Convert is a part of Chef, and by default a simple version will be built in the process of building Chef. There are also standalone builds of the tool that are required to be built for unique geometries, for instance, for the Gust Wing project, a version of the tool for closed test section slices is available at &amp;lt;code&amp;gt;/projects/tools/SCOREC-core/build-14-190604dev_omp110/test/mdlConvert&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
See MgenExtru_MGENClassificationAirfoilPt2 video--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1926</id>
		<title>Convert</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1926"/>
				<updated>2023-01-05T21:37:08Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Convert is a tool used to convert geometric model files into file types usable with the preprocessing tools used in this group. &lt;br /&gt;
&lt;br /&gt;
== Basic Overview ==&lt;br /&gt;
The main goal of Convert is to take a Simmetrix model and mesh and convert it to &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files, though there are also versions that will push the model to &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; format.&lt;br /&gt;
&lt;br /&gt;
For all of the options below, also note that you should always use a version of convert built for the specific version of Simmodeler used to generate the mesh. This is most easily achieved by using up-to-date versions.&lt;br /&gt;
&lt;br /&gt;
== Basic Usage ==&lt;br /&gt;
Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt;, a &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt;, and a &amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file and outputs &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files and a &amp;lt;code&amp;gt;./mdsMesh&amp;lt;/code&amp;gt; directory. A specific invocation will look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt; mpirun -np 1 /projects/tools/SCOREC-core/buildTestMergeEWT/test/convert --model-face-root=4321 --native_model=geom.xmt_txt geom.smd geom.sms mdsMesh/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where the root face is the face that holds the original meshing attributes in SimModeler (extrusion meshing from within SimModeler would originate from this face).&lt;br /&gt;
&lt;br /&gt;
= Other Functionality =&lt;br /&gt;
&lt;br /&gt;
Note that Convert can take in multiple model regions if that is required for a geometry. This is available in the main branch and simply takes more than one model root face argument, supplied in a separate file. This file should list all of the root faces in a newline-delimited format, and the call to convert changes to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt; mpirun -np 1 /projects/tools/SCOREC-core/buildTestMergeEWT/test/convert --model-face-root=ExtruRootID.txt --native_model=geom.xmt_txt geom.smd geom.sms mdsMesh/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;ExtruRootID.txt&amp;lt;/code&amp;gt; is the file containing the root face IDs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Model Convert ==&lt;br /&gt;
Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; file and outputs a &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; file. This file type simply stores information about model faces, edges, and vertices, and their relationships to each other. This is needed to classify mesh points.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Model Convert is a part of Chef, and by default a simple version will be built in the process of building Chef. There are also standalone builds of the tool that are required to be built for unique geometries, for instance, for the Gust Wing project, a version of the tool for closed test section slices is available at &amp;lt;code&amp;gt;/projects/tools/SCOREC-core/build-14-190604dev_omp110/test/mdlConvert&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
See MgenExtru_MGENClassificationAirfoilPt2 video--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1925</id>
		<title>Convert</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1925"/>
				<updated>2023-01-05T21:36:56Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Convert is a tool used to convert geometric model files into file types usable with the preprocessing tools used in this group. &lt;br /&gt;
&lt;br /&gt;
== Basic Overview ==&lt;br /&gt;
The main goal of Convert is to take a Simmetrix model and mesh and convert it to &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files, though there are also versions that will push the model to &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; format.&lt;br /&gt;
&lt;br /&gt;
For all of the options below, also note that you should always use a version of convert built for the specific version of Simmodeler used to generate the mesh. This is most easily achieved by using up-to-date versions.&lt;br /&gt;
&lt;br /&gt;
== Basic Usage ==&lt;br /&gt;
Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt;, a &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt;, and a &amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file and outputs &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files and a &amp;lt;code&amp;gt;./mdsMesh&amp;lt;/code&amp;gt; directory. A specific invocation will look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt; mpirun -np 1 /projects/tools/SCOREC-core/buildTestMergeEWT/test/convert --model-face-root=4321 --native_model=geom.xmt_txt geom.smd geom.sms mdsMesh/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where the root face is the face that holds the original meshing attributes in SimModeler (extrusion meshing from within SimModeler would originate from this face).&lt;br /&gt;
&lt;br /&gt;
= Other Functionality =&lt;br /&gt;
&lt;br /&gt;
Note that Convert can take in multiple model regions if that is required for a geometry. This is available in the main branch and simply takes more than one model root face argument, supplied in a separate file. This file should list all of the root faces in a newline-delimited format, and the call to convert changes to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt; mpirun -np 1 /projects/tools/SCOREC-core/buildTestMergeEWT/test/convert --model-face-root=ExtruRootID.txt --native_model=geom.xmt_txt geom.smd geom.sms mdsMesh/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;code&amp;gt;ExtruRootID.txt&amp;lt;/code&amp;gt; is the file containing the root face IDs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Model Convert ==&lt;br /&gt;
Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; file and outputs a &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; file. This file type simply stores information about model faces, edges, and vertices, and their relationships to each other. This is needed to classify mesh points.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Model Convert is a part of Chef, and by default a simple version will be built in the process of building Chef. There are also standalone builds of the tool that are required to be built for unique geometries, for instance, for the Gust Wing project, a version of the tool for closed test section slices is available at &amp;lt;code&amp;gt;/projects/tools/SCOREC-core/build-14-190604dev_omp110/test/mdlConvert&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
See MgenExtru_MGENClassificationAirfoilPt2 video--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=MGEN_Extrude&amp;diff=1924</id>
		<title>MGEN Extrude</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=MGEN_Extrude&amp;diff=1924"/>
				<updated>2023-01-05T21:29:49Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;MGEN is a tool in the meshing workflow that takes a 2D source mesh and extrudes it in the third dimension based on user input. The tool was originally created for use on structured grids on the Boeing bump, but has since been generalized for use in unstructured setups.&lt;br /&gt;
&lt;br /&gt;
== Basic Overview ==&lt;br /&gt;
&lt;br /&gt;
MGEN code is stored in &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; and written in FORTRAN. The code takes in a source 2D mesh, z-coordinates to extrude between, the number of elements to populate the extrusion with, and the number of partitions to write the mesh to. &lt;br /&gt;
&lt;br /&gt;
Partitioning in MGEN is simply a method to reduce the cost of initial runs of Chef, but is not a replacement for the initial configuring that Chef does (via 1-1-Chef). Partitioning in MGEN simply allows the first run of Chef to be in parallel (i.e. 8-8-Chef). Starting Chef from parallel is most important on large grids that would take prohibitively long to run through Chef in serial.&lt;br /&gt;
&lt;br /&gt;
The most current copy of the code is available at &amp;lt;code&amp;gt;(location)&amp;lt;/code&amp;gt; as of (date)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Basic Usage ==&lt;br /&gt;
&lt;br /&gt;
Once a suitable version of &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; has been located and moved to a working directory, it first needs to be compiled if this has not already been done. The FORTRAN compiler used to compile &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; should be the same version that was/will be used to compile the version of Chef to be used later in the meshing pipeline, in order to reduce the risk of complications.&lt;br /&gt;
&lt;br /&gt;
Once a compiler version is selected and added using &amp;lt;code&amp;gt;soft add&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; (depending on the system), it can be compiled. As an example, if using &amp;lt;code&amp;gt;gcc-6.3.0&amp;lt;/code&amp;gt; on Cooley compiling would look like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
soft add +gcc-6.3.0&lt;br /&gt;
&lt;br /&gt;
gfortran -O3 tm3Extrude.f -o tm3Extrude&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the code is compiled, the working directory needs to be prepared to run MGEN. MGEN needs the source 2D mesh in the form of &amp;lt;code&amp;gt;geom.crd&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;geom.cnn&amp;lt;/code&amp;gt; files in the same directory as the compiled code. These source files can be produced from scratch with MATLAB for structured grids, or through the use of [[Getting Started with Simmodeler|Simmetrix]] and the [[Convert]] tool for unstructured grids.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the mesh files are in place, MGEN can be run with &amp;lt;code&amp;gt;./tm3Extrude&amp;lt;/code&amp;gt; as usual. The code will ask for inputs for zmin, zmax, numelz, and npart. These should be entered on a single line with spaces between the values; press enter to continue code execution.&lt;br /&gt;
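The four inputs above can also be prepared in a file and piped in non-interactively; the values below (zmin, zmax, numelz, npart) are illustrative placeholders only:&lt;br /&gt;

```shell
# zmin zmax numelz npart, space separated on one line.
# These values are placeholders; substitute your own extrusion parameters.
printf '0.0 1.0 32 8\n' > mgen.in
cat mgen.in
# Feed the prepared inputs to the executable (run where MGEN is built):
# cat mgen.in | ./tm3Extrude
```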
&lt;br /&gt;
== Advanced Usage ==&lt;br /&gt;
For more complex geometries, complete information about the model cannot be assumed and must instead be given to MGEN. In order to prepare for this, we will need an additional form of the mesh and to make changes to the MGEN code itself in order to tell the program where geometric features are. &lt;br /&gt;
&lt;br /&gt;
A model first needs to be converted into a &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; file. This can be created with &amp;lt;code&amp;gt;mdlConvert&amp;lt;/code&amp;gt;. Example usage of this is &amp;lt;code&amp;gt;/projects/tools/SCOREC-core/build-14-190604dev_omp110/test/mdlConvert &amp;lt;simmetrixMesh&amp;gt;.xmt_txt outModel.dmg&amp;lt;/code&amp;gt;. This captures only information about model points, edges, and faces and their relationships to each other, but does not capture information about physical location.&lt;br /&gt;
&lt;br /&gt;
== Outputs ==&lt;br /&gt;
MGEN will write its outputs to the same working directory that the executable and source mesh files are in. There are multiple file types written, most with a suffix of a number to denote the part number of that file. The different parted files and their purposes are as follows:&lt;br /&gt;
&lt;br /&gt;
;geom3D.class :Classification file describing what type of geometric entity each point lies on (vertex, edge, face, volume)&lt;br /&gt;
;geom3D.cnndt :Connectivity of the elements &lt;br /&gt;
;geom3D.coord :Node coordinates&lt;br /&gt;
;geom3D.fathr :Parent vertex from the 2D source mesh&lt;br /&gt;
;geom3D.match :Contains periodic partners&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There is also one more file:&lt;br /&gt;
&lt;br /&gt;
; geom3DHead.cnn&lt;br /&gt;
&lt;br /&gt;
which lists the headers containing size information for each of the above files.&lt;br /&gt;
&lt;br /&gt;
== Using the outputted files ==&lt;br /&gt;
The output files from MGEN now need to be prepared for Chef; this is done via &amp;lt;code&amp;gt;matchedNodeElmReader&amp;lt;/code&amp;gt;. The provided example will be for a build on Cooley.&lt;br /&gt;
&lt;br /&gt;
First, the environment needs to be prepared via setting &amp;lt;code&amp;gt;SIM_LICENSE_FILE&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt;. Examples of this are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
export SIM_LICENSE_FILE=/eagle/PHASTA_aesp/SCOREC-CORE/deps/Simmetrix/UCBoulder&lt;br /&gt;
&lt;br /&gt;
export LD_LIBRARY_PATH=/eagle/PHASTA_aesp/SCOREC-CORE/deps/16.0-220326/lib/x64_rhel_gcc48/psKrnl/:$LD_LIBRARY_PATH&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From here, &amp;lt;code&amp;gt;matchedNodeElmReader&amp;lt;/code&amp;gt; can be run with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
mpirun -f /var/tmp/cobalt.2137783 -np &amp;lt;np&amp;gt; -genvall /eagle/PHASTA_aesp/SCOREC-CORE/build_gtvertCorruption/test/matchedNodeElmReader ../geom3D.cnndt ../geom3D.coord ../geom3D.match ../geom3D.class ../geom3D.fathr NULL ../geom3DHead.cnn outModel.dmg outModel/&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;np&amp;gt; should be replaced by the same number as used for npart when running MGEN.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Exporting_Parasolid_from_SolidWorks&amp;diff=1923</id>
		<title>Exporting Parasolid from SolidWorks</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Exporting_Parasolid_from_SolidWorks&amp;diff=1923"/>
				<updated>2023-01-05T21:28:13Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Save your model as a parasolid from [[SolidWorks]]. Note that you will want the geometry as close to ready for meshing as possible, as performing model &amp;quot;surgery&amp;quot; in SimModeler is not always straightforward. The file output from SolidWorks will have the format &amp;lt;code&amp;gt;&amp;lt;file_name&amp;gt;.x_t&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you do not have a parasolid model of your own, you may use the On Ramp example file located at:&lt;br /&gt;
&lt;br /&gt;
 /projects/tutorials/OnRamp/example_geom.x_t&lt;br /&gt;
&lt;br /&gt;
Ensure that you are on one of the viznodes and not portal1. You may tunnel to viz003 by opening a terminal and running:&lt;br /&gt;
&lt;br /&gt;
 vglconnect -s viz003&lt;br /&gt;
&lt;br /&gt;
Navigate to and copy your file into your working directory. Typically, we create a folder where all the simulation files are stored. For example, after opening a terminal I could run the commands:&lt;br /&gt;
&lt;br /&gt;
 mkdir Demo&lt;br /&gt;
 cd Demo&lt;br /&gt;
&lt;br /&gt;
This would place me in my working directory 'Demo'. &lt;br /&gt;
&lt;br /&gt;
Next, you'll want to change the parasolid file extension from &amp;lt;code&amp;gt;.x_t&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt;. To do this, run &amp;lt;code&amp;gt;mv &amp;lt;file_name&amp;gt;.x_t &amp;lt;file_name&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; from your terminal. From here, you are ready for the ''convert'' step. Copy over the convert script by running:&lt;br /&gt;
&lt;br /&gt;
 cp /projects/tutorials/OnRamp/convertParasolid2Sim.sh .&lt;br /&gt;
&lt;br /&gt;
The convert step is documented here: https://fluid.colorado.edu/tutorials/tutorialVideos/Convert2Sim_Tutorial.mp4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Summary of video:''' &lt;br /&gt;
 1. Ensure &amp;lt;code&amp;gt;convertParasolid2Sim.sh&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;&amp;lt;file name&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; are in your working directory. &lt;br /&gt;
&lt;br /&gt;
 2. Set environment with soft adds found in &amp;lt;code&amp;gt;more ~kjansen/soft-core.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 3. Run &amp;lt;code&amp;gt;./convertParasolid2Sim.sh &amp;lt;file name&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; in your terminal&lt;br /&gt;
&lt;br /&gt;
 4. Convert step is complete and your directory now contains 3 new files: &amp;lt;code&amp;gt;model.smd&amp;lt;/code&amp;gt; &amp;lt;code&amp;gt;relations.log&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt;. The &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file is the one we need moving forward in this tutorial.&lt;br /&gt;
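The rename that precedes step 3 can be sketched as a small runnable snippet. The file name example_geom is a placeholder, and only the rename itself is exercised here, since the convert script is site-specific:

```shell
# Minimal sketch of the rename step: SolidWorks exports <file_name>.x_t,
# but the convert script expects the .xmt_txt extension.
# "example_geom" is a placeholder file name.
f=example_geom
touch "${f}.x_t"              # stand-in for the exported parasolid file
mv "${f}.x_t" "${f}.xmt_txt"  # rename to the extension convert expects
ls "${f}.xmt_txt"
```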
&lt;br /&gt;
&lt;br /&gt;
Once the convert step is complete, you are ready to move on to the next step and use [[Getting Started with Simmodeler| SimModeler]] to create a mesh for the new &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file we created!&lt;br /&gt;
&lt;br /&gt;
== Geometries with Standalone Surfaces ==&lt;br /&gt;
&lt;br /&gt;
Some models contain not only three-dimensional solid bodies but also standalone surfaces. SolidWorks and parasolid files treat surfaces differently from solid bodies, so the two must be exported separately. To do this, select all entities except the surfaces and save as a parasolid; after you hit save, SolidWorks will ask whether you want to save all of the geometry or only the selected geometry, and you want the second option. Then repeat the export with only the surfaces selected. The two files can then be converted to &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; separately (rename the files after they are converted to avoid overwriting) and recombined in SimModeler.&lt;br /&gt;
&lt;br /&gt;
To recombine, simply open the solid body file in Simmodeler, under the &amp;quot;Modeling&amp;quot; tab select &amp;quot;Add Parts&amp;quot;, add the surfaces file, then select &amp;quot;Make New Manifold Model&amp;quot; which will combine the files into one model that is suitable for PHASTA.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Exporting_Parasolid_from_SolidWorks&amp;diff=1922</id>
		<title>Exporting Parasolid from SolidWorks</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Exporting_Parasolid_from_SolidWorks&amp;diff=1922"/>
				<updated>2023-01-05T18:02:00Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: Added notes for mixed solid body / surface models&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Save out your model as a parasolid from [[SolidWorks]]. Note that you will want the geometry as close to ready for meshing as possible, as performing model &amp;quot;surgery&amp;quot; in SimModeler is not always straightforward. The output file from SolidWorks will have the format &amp;lt;code&amp;gt;&amp;lt;file_name&amp;gt;.x_t&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you do not have a parasolid model of your own, you may use the On Ramp example file located at:&lt;br /&gt;
&lt;br /&gt;
 /projects/tutorials/OnRamp/example_geom.x_t&lt;br /&gt;
&lt;br /&gt;
Ensure that you are on one of the viznodes and not portal1. You may tunnel to viz003 by opening a terminal and running:&lt;br /&gt;
&lt;br /&gt;
 vglconnect -s viz003&lt;br /&gt;
&lt;br /&gt;
Navigate to and copy your file into your working directory. Typically, we create a folder where all the simulation files are stored. For example, after opening a terminal I could run the commands:&lt;br /&gt;
&lt;br /&gt;
 mkdir Demo&lt;br /&gt;
 cd Demo&lt;br /&gt;
&lt;br /&gt;
This would place me in my working directory 'Demo'. &lt;br /&gt;
&lt;br /&gt;
Next, you'll want to change the parasolid file extension from &amp;lt;code&amp;gt;.x_t&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt;. To do this, run &amp;lt;code&amp;gt;mv &amp;lt;file_name&amp;gt;.x_t &amp;lt;file_name&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; from your terminal. From here, you are ready for the ''convert'' step. Copy over the convert script by running:&lt;br /&gt;
&lt;br /&gt;
 cp /projects/tutorials/OnRamp/convertParasolid2Sim.sh .&lt;br /&gt;
&lt;br /&gt;
The convert step is documented here: https://fluid.colorado.edu/tutorials/tutorialVideos/Convert2Sim_Tutorial.mp4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Summary of video:''' &lt;br /&gt;
 1. Ensure &amp;lt;code&amp;gt;convertParasolid2Sim.sh&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;&amp;lt;file name&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; are in your working directory. &lt;br /&gt;
&lt;br /&gt;
 2. Set environment with soft adds found in &amp;lt;code&amp;gt;more ~kjansen/soft-core.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 3. Run &amp;lt;code&amp;gt;./convertParasolid2Sim.sh &amp;lt;file name&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; in your terminal&lt;br /&gt;
&lt;br /&gt;
 4. Convert step is complete and your directory now contains 3 new files: &amp;lt;code&amp;gt;model.smd&amp;lt;/code&amp;gt; &amp;lt;code&amp;gt;relations.log&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt;. The &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file is the one we need moving forward in this tutorial.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the convert step is complete, you are ready to move on to the next step and use [[Getting Started with Simmodeler| SimModeler]] to create a mesh for the new &amp;lt;code&amp;gt;translated-model.smd&amp;lt;/code&amp;gt; file we created!&lt;br /&gt;
&lt;br /&gt;
== Geometries with Standalone Surfaces ==&lt;br /&gt;
&lt;br /&gt;
Some models contain not only three-dimensional solid bodies but also standalone surfaces. SolidWorks and parasolid files treat surfaces differently from solid bodies, so the two must be exported separately. To do this, select all entities except the surfaces and save as a parasolid; after you hit save, SolidWorks will ask whether you want to save all of the geometry or only the selected geometry, and you want the second option. Then repeat the export with only the surfaces selected. The two files can then be converted to &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt; separately (rename the files after they are converted to avoid overwriting) and recombined in SimModeler.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1921</id>
		<title>Convert</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1921"/>
				<updated>2023-01-04T19:11:44Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Convert is a tool used to convert geometric model files into a file type usable with the preprocessing tools utilized by this group. &lt;br /&gt;
&lt;br /&gt;
== Basic Overview ==&lt;br /&gt;
The main goal of Convert is to take a Simmetrix model and mesh and convert them to &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files for use with [[MGEN Extrude|MGEN]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Basic Usage ==&lt;br /&gt;
Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt;, a &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt;, and a &amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file and outputs &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files, and a &amp;lt;code&amp;gt;./mdsMesh&amp;lt;/code&amp;gt; directory. A specific invocation will look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt; mpirun -np 1 /projects/tools/SCOREC-core/build16_Opt/test/convert --model-face-root=4321 --native_model=geom.xmt_txt geom.smd geom.sms mdsMesh/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where the root face is the face that holds the original meshing attributes in SimModeler (extrusion meshing from within SimModeler would originate from this face).&lt;br /&gt;
&lt;br /&gt;
Note that there is a second version of convert which can take in multiple model regions if that is required for a geometry. This is available at &amp;lt;code&amp;gt;/projects/tools/SCOREC-core/build16_Opt/test/convert&amp;lt;/code&amp;gt; and simply accepts more than one (comma-delimited) model root face argument.&lt;br /&gt;
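As a sketch, the invocation above can be assembled from variables so that the root-face ID, binary path, and file names (all example values taken from the command shown) are easy to swap; the command is echoed rather than executed here, since mpirun and the convert binary are cluster-specific:

```shell
# Sketch: build the convert command from variables (paths and IDs are examples).
CONVERT=/projects/tools/SCOREC-core/build16_Opt/test/convert
ROOT_FACES=4321                # comma-delimited list for multi-region models
NATIVE_MODEL=geom.xmt_txt
cmd="mpirun -np 1 $CONVERT --model-face-root=$ROOT_FACES --native_model=$NATIVE_MODEL geom.smd geom.sms mdsMesh/"
echo "$cmd"                    # inspect before running on the cluster
```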
&lt;br /&gt;
&lt;br /&gt;
== Model Convert ==&lt;br /&gt;
Model Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; file and outputs a &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; file. This file type simply stores information about model faces, edges, and vertices, and their relationships to each other. This is needed to classify mesh points.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Model Convert is a part of Chef, and by default a simple version will be built in the process of building Chef. There are also standalone builds of the tool that are required to be built for unique geometries, for instance, for the Gust Wing project, a version of the tool for closed test section slices is available at &amp;lt;code&amp;gt;/projects/tools/SCOREC-core/build-14-190604dev_omp110/test/mdlConvert&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
See MgenExtru_MGENClassificationAirfoilPt2 video--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1920</id>
		<title>Convert</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1920"/>
				<updated>2023-01-04T18:33:52Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Convert is a tool used to convert geometric model files into a file type usable with the preprocessing tools utilized by this group. &lt;br /&gt;
&lt;br /&gt;
== Basic Overview ==&lt;br /&gt;
The main goal of Convert is to take a Simmetrix model and mesh and convert them to &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files for use with [[MGEN Extrude|MGEN]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Basic Usage ==&lt;br /&gt;
Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt;, a &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt;, and a &amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file and outputs &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files, and a &amp;lt;code&amp;gt;./mdsMesh&amp;lt;/code&amp;gt; directory. A specific invocation will look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt; mpirun -np 1 /projects/tools/SCOREC-core/build16_Opt/test/convert --model-face-root=4321 --native_model=geom.xmt_txt geom.smd geom.sms mdsMesh/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where the root face is the face that holds the original meshing attributes in SimModeler (extrusion meshing from within SimModeler would originate from this face).&lt;br /&gt;
&lt;br /&gt;
Note that there is a second version of convert which can take in multiple model regions if that is required for a geometry. This is available at &amp;lt;code&amp;gt;/projects/tools/SCOREC-core/build16_Opt/test/convert&amp;lt;/code&amp;gt; and simply accepts more than one (space-delimited) model root face argument.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Model Convert ==&lt;br /&gt;
Model Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; file and outputs a &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; file. This file type simply stores information about model faces, edges, and vertices, and their relationships to each other. This is needed to classify mesh points.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Model Convert is a part of Chef, and by default a simple version will be built in the process of building Chef. There are also standalone builds of the tool that are required to be built for unique geometries, for instance, for the Gust Wing project, a version of the tool for closed test section slices is available at &amp;lt;code&amp;gt;/projects/tools/SCOREC-core/build-14-190604dev_omp110/test/mdlConvert&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
See MgenExtru_MGENClassificationAirfoilPt2 video--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ParaView/Run_on_Remote_Machine&amp;diff=1919</id>
		<title>ParaView/Run on Remote Machine</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ParaView/Run_on_Remote_Machine&amp;diff=1919"/>
				<updated>2022-11-11T19:28:56Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are instructions for running ParaView on some selected remote machines. &lt;br /&gt;
&lt;br /&gt;
In general, this involves:&lt;br /&gt;
# Launching an interactive job using the machine's job scheduler &lt;br /&gt;
# Loading whatever software is required for the session&lt;br /&gt;
# Launching the ParaView server on the session&lt;br /&gt;
# Connecting to the remote ParaView server through a local ParaView instance&lt;br /&gt;
&lt;br /&gt;
== Current Machines ==&lt;br /&gt;
&lt;br /&gt;
=== Cooley (ALCF) ===&lt;br /&gt;
==== Connect to Cooley ====&lt;br /&gt;
*From the command line on the Viz Nodes,&lt;br /&gt;
   ssh '''''username'''''@cooley.alcf.anl.gov&lt;br /&gt;
*Enter the Mobile Pass + passcode from your phone&lt;br /&gt;
&lt;br /&gt;
==== Submit Interactive Job ====&lt;br /&gt;
There are multiple versions of interactive submission scripts used by the group, but most take two inputs: the number of nodes and the run time. It is recommended that you check the contents of any given &amp;lt;code&amp;gt;submitInteractive.sh&amp;lt;/code&amp;gt; script you receive to confirm the argument order. As an example, the script used for Gust AFOSR work (at &amp;lt;code&amp;gt;/projects/PHASTA_aesp/Models/GustWing/OTS/PastRuns&amp;lt;/code&amp;gt;) is run as: &amp;lt;code&amp;gt;./submitInteractive.sh &amp;lt;runtime in minutes&amp;gt; &amp;lt;nodes&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once the interactive job is started, you will need to start a ParaView server using a &amp;lt;code&amp;gt;pvserverLaunch.&amp;lt;version&amp;gt;.sh&amp;lt;/code&amp;gt; script. Note that the version of the server needs to match the version you will use on the Viz Nodes; ParaView 5.5.2 is the most recent version fully supported on both the Viz Nodes and Cooley. The script available in the same directory as above takes the number of processes per node as an input, which should be 12 to use all of Cooley's resources, and is run as: &amp;lt;code&amp;gt;./pvserverLaunch.5.5.2.sh 12&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Once the server is running and reports &amp;quot;Waiting for client...&amp;quot;, launch ParaView on the Viz Nodes and select the connect-to-server icon (just to the right of the open-file icon at the top of the screen). If you do not yet have a server connection configured for the listed &amp;lt;code&amp;gt;cc***&amp;lt;/code&amp;gt; number, create one by selecting &amp;quot;Add Server&amp;quot; and populating the &amp;quot;Edit Server Configuration&amp;quot; page. The name of the server should be the &amp;lt;code&amp;gt;cc***&amp;lt;/code&amp;gt; number, the host field should be &amp;lt;code&amp;gt;cc***.cooley.pub.alcf.anl.gov&amp;lt;/code&amp;gt;, and the port is &amp;lt;code&amp;gt;8000&amp;lt;/code&amp;gt;. From here you can select Configure and then Connect.&lt;br /&gt;
&lt;br /&gt;
If, upon connection, you need to load the SyncIO remote plugin, these can be found for multiple versions of ParaView at &amp;lt;code&amp;gt;/lus/theta-fs0/projects/PHASTA_aesp/ParaView/ParaViewSyncIOReaderPlugin&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Retired Machines ==&lt;br /&gt;
&lt;br /&gt;
=== Eureka (ALCF) ===&lt;br /&gt;
==== Connect to Eureka ====&lt;br /&gt;
*From the command line on the Colorado machine,&lt;br /&gt;
   ssh '''''username'''''@eureka.alcf.anl.gov&lt;br /&gt;
*Enter 4 digit pin, followed by the number on the CRYPTOCard&lt;br /&gt;
&lt;br /&gt;
==== Submit Interactive Job ====&lt;br /&gt;
*From the command line on Eureka, copy &amp;lt;code&amp;gt;/home/jmartin/qsub_interactive_command.sh&amp;lt;/code&amp;gt; to your home directory. Open &amp;lt;code&amp;gt;qsub_interactive_command.sh&amp;lt;/code&amp;gt; and set the total time you want to run ParaView (time) and the allocation your account is under (account).&lt;br /&gt;
   ./qsub_interactive_command.sh '''''nodes'''''&lt;br /&gt;
where '''''nodes''''' is the total number of nodes you want. Each node has 8 cores, and 32 GB of memory.&lt;br /&gt;
&lt;br /&gt;
=== Janus (CUBoulder Research Computing) ===&lt;br /&gt;
Video Tutorial (use the paths below, NOT the ones in the Video)&lt;br /&gt;
http://fluid.colorado.edu/~matthb2/janus/pv_on_janus.html&lt;br /&gt;
&lt;br /&gt;
==== Running the UI on Portal0 ====&lt;br /&gt;
  soft add @paraview-3.8.0&lt;br /&gt;
  soft add +paraview-3.8.0-gnu-ompi-covis&lt;br /&gt;
  vglrun paraview&lt;br /&gt;
&lt;br /&gt;
and then start and connect to the server:&lt;br /&gt;
&lt;br /&gt;
==== Starting the Server on Janus ====&lt;br /&gt;
  . /projects/jansenke/matthb2/env-gnu.sh&lt;br /&gt;
  qsub -q janus-debug /projects/jansenke/matthb2/pvserver-gnu_runscript-sysgl.sh&lt;br /&gt;
&lt;br /&gt;
and use checkjob or look at the output files to figure out which node your rank0 is on and connect ParaView to that (or change it to use reverse connections if you prefer).&lt;br /&gt;
&lt;br /&gt;
=== Tukey (ALCF) ===&lt;br /&gt;
==== ParaView GUI running on portal0 at Colorado ====&lt;br /&gt;
&lt;br /&gt;
Video Tutorial about how to run a pvserver-syncio in parallel on the Tukey visualization nodes and connect the pvserver to a ParaView Gui running on portal0 at Colorado &lt;br /&gt;
  http://fluid.colorado.edu/~mrasquin/Documents_HIDE/Tukey/ParaviewOnTukeyFromPortal0/index.html&lt;br /&gt;
&lt;br /&gt;
This video can be copied from /users/mrasquin/public_html/Documents_HIDE/Tukey/ParaviewOnTukeyFromPortal0 on the viz nodes.&lt;br /&gt;
&lt;br /&gt;
==== ParaView GUI running on the Tukey login node ====&lt;br /&gt;
Video Tutorial about how to run a pvserver-syncio in parallel on the Tukey visualization nodes and connect the pvserver to a ParaView Gui running on the Tukey login node&lt;br /&gt;
  https://fluid.colorado.edu/~mrasquin/phasta/ParaViewOnTukey/index.html&lt;br /&gt;
 &lt;br /&gt;
This video can be copied from /users/mrasquin/public_html/Tukey/ParaviewOnTukeyThroughVNC on the viz nodes.&lt;br /&gt;
&lt;br /&gt;
Note that because vncserver on the Tukey head node does not support OpenGL, this method does not allow the export of png pictures from the ParaView GUI; the result will be completely fuzzy. The first method is therefore strongly recommended.&lt;br /&gt;
&lt;br /&gt;
[[Category:Paraview]]&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ParaView/Run_on_Remote_Machine&amp;diff=1918</id>
		<title>ParaView/Run on Remote Machine</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ParaView/Run_on_Remote_Machine&amp;diff=1918"/>
				<updated>2022-11-11T17:43:37Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: Added Cooley documentation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are instructions for running ParaView on some selected remote machines. &lt;br /&gt;
&lt;br /&gt;
In general, this involves:&lt;br /&gt;
# Launching an interactive job using the machine's job scheduler &lt;br /&gt;
# Loading whatever software is required for the session&lt;br /&gt;
# Launching the ParaView server on the session&lt;br /&gt;
# Connecting to the remote ParaView server through a local ParaView instance&lt;br /&gt;
&lt;br /&gt;
== Current Machines ==&lt;br /&gt;
&lt;br /&gt;
=== Cooley (ALCF) ===&lt;br /&gt;
==== Connect to Cooley ====&lt;br /&gt;
*From the command line on the Viz Nodes,&lt;br /&gt;
   ssh '''''username'''''@cooley.alcf.anl.gov&lt;br /&gt;
*Enter the Mobile Pass + passcode from your phone&lt;br /&gt;
&lt;br /&gt;
==== Submit Interactive Job ====&lt;br /&gt;
There are multiple versions of interactive submission scripts used by the group, but most take two inputs: the number of nodes and the run time. It is recommended that you check the contents of any given &amp;lt;code&amp;gt;submitInteractive.sh&amp;lt;/code&amp;gt; script you receive to confirm the argument order. As an example, the script used for Gust AFOSR work (at &amp;lt;code&amp;gt;/projects/PHASTA_aesp/Models/GustWing/OTS/PastRuns&amp;lt;/code&amp;gt;) is run as: &amp;lt;code&amp;gt;./submitInteractive.sh &amp;lt;runtime in minutes&amp;gt; &amp;lt;nodes&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once the interactive job is started, you will need to start a ParaView server using a &amp;lt;code&amp;gt;pvserverLaunch.&amp;lt;version&amp;gt;.sh&amp;lt;/code&amp;gt; script. Note that the version of the server needs to match the version you will use on the Viz Nodes; ParaView 5.5.2 is the most recent version fully supported on both the Viz Nodes and Cooley. The script available in the same directory as above takes the number of processes per node as an input, which should be 12 to use all of Cooley's resources, and is run as: &amp;lt;code&amp;gt;./pvserverLaunch.5.5.2.sh 12&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Once the server is running and reports &amp;quot;Waiting for client...&amp;quot;, launch ParaView on the Viz Nodes and select the connect-to-server icon (just to the right of the open-file icon at the top of the screen). If you do not yet have a server connection configured for the listed &amp;lt;code&amp;gt;cc***&amp;lt;/code&amp;gt; number, create one by selecting &amp;quot;Add Server&amp;quot; and populating the &amp;quot;Edit Server Configuration&amp;quot; page. The name of the server should be the &amp;lt;code&amp;gt;cc***&amp;lt;/code&amp;gt; number, the host field should be &amp;lt;code&amp;gt;cc***.cooley.pub.alcf.anl.gov&amp;lt;/code&amp;gt;, and the port is &amp;lt;code&amp;gt;8000&amp;lt;/code&amp;gt;. From here you can select Configure and then Connect.&lt;br /&gt;
&lt;br /&gt;
If, upon connection, you need to load the SyncIO plugin, these can be found for multiple versions of ParaView at &amp;lt;code&amp;gt;/lus/theta-fs0/projects/PHASTA_aesp/ParaView/ParaViewSyncIOReaderPlugin&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Retired Machines ==&lt;br /&gt;
&lt;br /&gt;
=== Eureka (ALCF) ===&lt;br /&gt;
==== Connect to Eureka ====&lt;br /&gt;
*From the command line on the Colorado machine,&lt;br /&gt;
   ssh '''''username'''''@eureka.alcf.anl.gov&lt;br /&gt;
*Enter 4 digit pin, followed by the number on the CRYPTOCard&lt;br /&gt;
&lt;br /&gt;
==== Submit Interactive Job ====&lt;br /&gt;
*From the command line on Eureka, copy &amp;lt;code&amp;gt;/home/jmartin/qsub_interactive_command.sh&amp;lt;/code&amp;gt; to your home directory. Open &amp;lt;code&amp;gt;qsub_interactive_command.sh&amp;lt;/code&amp;gt; and set the total time you want to run ParaView (time) and the allocation your account is under (account).&lt;br /&gt;
   ./qsub_interactive_command.sh '''''nodes'''''&lt;br /&gt;
where '''''nodes''''' is the total number of nodes you want. Each node has 8 cores, and 32 GB of memory.&lt;br /&gt;
&lt;br /&gt;
=== Janus (CUBoulder Research Computing) ===&lt;br /&gt;
Video Tutorial (use the paths below, NOT the ones in the Video)&lt;br /&gt;
http://fluid.colorado.edu/~matthb2/janus/pv_on_janus.html&lt;br /&gt;
&lt;br /&gt;
==== Running the UI on Portal0 ====&lt;br /&gt;
  soft add @paraview-3.8.0&lt;br /&gt;
  soft add +paraview-3.8.0-gnu-ompi-covis&lt;br /&gt;
  vglrun paraview&lt;br /&gt;
&lt;br /&gt;
and then start and connect to the server:&lt;br /&gt;
&lt;br /&gt;
==== Starting the Server on Janus ====&lt;br /&gt;
  . /projects/jansenke/matthb2/env-gnu.sh&lt;br /&gt;
  qsub -q janus-debug /projects/jansenke/matthb2/pvserver-gnu_runscript-sysgl.sh&lt;br /&gt;
&lt;br /&gt;
and use checkjob or look at the output files to figure out which node your rank0 is on and connect ParaView to that (or change it to use reverse connections if you prefer).&lt;br /&gt;
&lt;br /&gt;
=== Tukey (ALCF) ===&lt;br /&gt;
==== ParaView GUI running on portal0 at Colorado ====&lt;br /&gt;
&lt;br /&gt;
Video Tutorial about how to run a pvserver-syncio in parallel on the Tukey visualization nodes and connect the pvserver to a ParaView Gui running on portal0 at Colorado &lt;br /&gt;
  http://fluid.colorado.edu/~mrasquin/Documents_HIDE/Tukey/ParaviewOnTukeyFromPortal0/index.html&lt;br /&gt;
&lt;br /&gt;
This video can be copied from /users/mrasquin/public_html/Documents_HIDE/Tukey/ParaviewOnTukeyFromPortal0 on the viz nodes.&lt;br /&gt;
&lt;br /&gt;
==== ParaView GUI running on the Tukey login node ====&lt;br /&gt;
Video Tutorial about how to run a pvserver-syncio in parallel on the Tukey visualization nodes and connect the pvserver to a ParaView Gui running on the Tukey login node&lt;br /&gt;
  https://fluid.colorado.edu/~mrasquin/phasta/ParaViewOnTukey/index.html&lt;br /&gt;
 &lt;br /&gt;
This video can be copied from /users/mrasquin/public_html/Tukey/ParaviewOnTukeyThroughVNC on the viz nodes.&lt;br /&gt;
&lt;br /&gt;
Note that because vncserver on the Tukey head node does not support OpenGL, this method does not allow the export of png pictures from the ParaView GUI; the result will be completely fuzzy. The first method is therefore strongly recommended.&lt;br /&gt;
&lt;br /&gt;
[[Category:Paraview]]&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1909</id>
		<title>Convert</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Convert&amp;diff=1909"/>
				<updated>2022-09-21T18:02:34Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: Created page with &amp;quot;Convert, often referred to simply as &amp;quot;Convert&amp;quot;, is a tool used to convert Geometric model files into a file type usable with the preprocessing tools utilized in this group....&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Convert is a tool used to convert geometric model files into a file type usable with the preprocessing tools utilized by this group. &lt;br /&gt;
&lt;br /&gt;
== Basic Overview ==&lt;br /&gt;
The main goal of Convert is to take a Simmetrix model and mesh and convert them to &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files for use with [[MGEN Extrude|MGEN]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Basic Usage ==&lt;br /&gt;
Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt;, a &amp;lt;code&amp;gt;.smd&amp;lt;/code&amp;gt;, and a &amp;lt;code&amp;gt;.sms&amp;lt;/code&amp;gt; file and outputs &amp;lt;code&amp;gt;.cnn&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.crd&amp;lt;/code&amp;gt; files, and a &amp;lt;code&amp;gt;./mdsMesh&amp;lt;/code&amp;gt; directory. A specific invocation will look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt; mpirun -np 1 /projects/tools/SCOREC-core/build16_Opt/test/convert --model-face-root=4321 --native_model=geom.xmt_txt geom.smd geom.sms mdsMesh/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where the root face is the face that holds the original meshing attributes in SimModeler (extrusion meshing from within SimModeler would originate from this face).&lt;br /&gt;
&lt;br /&gt;
== Model Convert ==&lt;br /&gt;
Model Convert takes in a &amp;lt;code&amp;gt;.xmt_txt&amp;lt;/code&amp;gt; file and outputs a &amp;lt;code&amp;gt;.dmg&amp;lt;/code&amp;gt; file. This file type simply stores information about model faces, edges, and vertices, and their relationships to each other. This is needed to classify mesh points when generating a mesh.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Model Convert is a part of Chef, and by default a simple version will be built in the process of building Chef. There are also standalone builds of the tool that are required to be built for unique geometries, for instance, for the Gust Wing project, a version of the tool for closed test section slices is available at &amp;lt;code&amp;gt;/projects/tools/SCOREC-core/build-14-190604dev_omp110/test/mdlConvert&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
See MgenExtru_MGENClassificationAirfoilPt2 video--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=MGEN_Extrude&amp;diff=1908</id>
		<title>MGEN Extrude</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=MGEN_Extrude&amp;diff=1908"/>
				<updated>2022-09-21T17:45:39Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;MGEN is a tool in the meshing workflow that takes a 2D source mesh and extrudes it in the third dimension based on user input. The tool was originally created for use on structured grids on the Boeing bump, but has since been generalized for use in unstructured setups.&lt;br /&gt;
&lt;br /&gt;
== Basic Overview ==&lt;br /&gt;
&lt;br /&gt;
MGEN code is stored in &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; and written in FORTRAN. The code takes in a source 2D mesh, z-coordinates to extrude between, the number of elements to populate the extrusion with, and the number of partitions to write the mesh to. &lt;br /&gt;
&lt;br /&gt;
Partitioning in MGEN is simply a method to reduce the cost of initial runs of Chef, but is not a replacement for the initial configuring that Chef does (via 1-1-Chef). Partitioning in MGEN merely allows the first run of Chef to be in parallel (i.e. 8-8-Chef). Starting Chef in parallel is most important on large grids that would take prohibitively long to run through Chef in serial.&lt;br /&gt;
&lt;br /&gt;
The most current copy of the code is available at &amp;lt;code&amp;gt;(location)&amp;lt;/code&amp;gt; as of (date)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&lt;br /&gt;
Once a suitable version of &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; has been located and moved to a working directory, it first needs to be compiled if this has not already been done. The FORTRAN compiler used to compile &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; should be the same version that was/will be used to compile the version of Chef used later in the meshing pipeline, in order to reduce the risk of complications.&lt;br /&gt;
&lt;br /&gt;
Once a compiler version is selected and added using &amp;lt;code&amp;gt;soft add&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; (depending on the system), the code can be compiled. As an example, if using &amp;lt;code&amp;gt;gcc-6.3.0&amp;lt;/code&amp;gt; on Cooley, compiling would look like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
soft add +gcc-6.3.0&lt;br /&gt;
&lt;br /&gt;
gfortran -O3 tm3Extrude.f -o tm3Extrude&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the code is compiled, the working directory needs to be prepared to run MGEN. MGEN needs the source 2D mesh in the form of &amp;lt;code&amp;gt;geom.crd&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;geom.cnn&amp;lt;/code&amp;gt; files in the same directory as the compiled code. These source files can be produced from scratch with MATLAB for structured grids, or through the use of [[Getting Started with Simmodeler|Simmetrix]] and the [[Convert]] tool for unstructured grids.&lt;br /&gt;
&lt;br /&gt;
Once the mesh files are in place, MGEN can be run with &amp;lt;code&amp;gt;./tm3Extrude&amp;lt;/code&amp;gt; as usual. The code will ask for inputs for zmin, zmax, numelz, and npart. These should be entered on a single line with spaces between the values, then press Enter to continue code execution.&lt;br /&gt;
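&lt;br /&gt;
Because the code reads these answers from standard input, they can likely also be piped in on one line for batch runs; the values below are illustrative placeholders for zmin, zmax, numelz, and npart, not recommendations:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;echo &amp;quot;0.0 1.0 32 8&amp;quot; | ./tm3Extrude&amp;lt;/code&amp;gt;&lt;br /&gt;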
&lt;br /&gt;
== Outputs ==&lt;br /&gt;
MGEN will write its outputs to the same working directory that the executable and source mesh files are in. There are multiple file types written, most with a suffix of a number to denote the part number of that file. The different parted files and their purposes are as follows:&lt;br /&gt;
&lt;br /&gt;
;geom3D.class :Classification file describing what type of geometric entity each point lies on (vertex, edge, face, volume)&lt;br /&gt;
;geom3D.cnndt :Connectivity of the elements &lt;br /&gt;
;geom3D.coord :Node coordinates&lt;br /&gt;
;geom3D.fathr :Parent vertex from the 2D source mesh&lt;br /&gt;
;geom3D.match :Contains periodic partners&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There is also one more file:&lt;br /&gt;
&lt;br /&gt;
; geom3DHead.cnn&lt;br /&gt;
&lt;br /&gt;
Which lists the headers containing information on the size of each of the above files.&lt;br /&gt;
&lt;br /&gt;
== Using the outputted files ==&lt;br /&gt;
The output files from MGEN now need to be prepared for Chef; this is done via &amp;lt;code&amp;gt;matchedNodeElmReader&amp;lt;/code&amp;gt;. The provided example is for a build on Cooley.&lt;br /&gt;
&lt;br /&gt;
First, the environment needs to be prepared by setting &amp;lt;code&amp;gt;SIM_LICENSE_FILE&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt;. Examples of this are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
export SIM_LICENSE_FILE=/eagle/PHASTA_aesp/SCOREC-CORE/deps/Simmetrix/UCBoulder&lt;br /&gt;
&lt;br /&gt;
export LD_LIBRARY_PATH=/eagle/PHASTA_aesp/SCOREC-CORE/deps/16.0-220326/lib/x64_rhel_gcc48/psKrnl/:$LD_LIBRARY_PATH&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From here, &amp;lt;code&amp;gt;matchedNodeElmReader&amp;lt;/code&amp;gt; can be run with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
mpirun -f /var/tmp/cobalt.2137783 -np &amp;lt;np&amp;gt; -genvall /eagle/PHASTA_aesp/SCOREC-CORE/build_gtvertCorruption/test/matchedNodeElmReader ../geom3D.cnndt ../geom3D.coord ../geom3D.match ../geom3D.class ../geom3D.fathr NULL ../geom3DHead.cnn outModel.dmg outModel/&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;np&amp;gt; should be replaced by the same number as used for npart when running MGEN.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=MGEN_Extrude&amp;diff=1907</id>
		<title>MGEN Extrude</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=MGEN_Extrude&amp;diff=1907"/>
				<updated>2022-09-21T17:23:46Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;MGEN is a tool in the meshing workflow that takes a 2D source mesh and extrudes it in the third dimension based off of user input. The tool was originally created for use on structured grids on the Boeing bump, but has since been generalized for use in unstructured setups.&lt;br /&gt;
&lt;br /&gt;
== Basic Overview ==&lt;br /&gt;
&lt;br /&gt;
MGEN code is stored in &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; and written in FORTRAN. The code takes in a source 2D mesh, z-coordinates to extrude between, the number of elements to populate the extrusion with, and the number of partitions to write the mesh to. &lt;br /&gt;
&lt;br /&gt;
Partitioning in MGEN is simply a method to reduce the cost of initial runs of Chef, but is not a replacement for the initial configuring that Chef does (via 1-1-Chef). Partitioning in MGEN merely allows the first run of Chef to be in parallel (i.e. 8-8-Chef). Starting Chef in parallel is most important on large grids that would take prohibitively long to run through Chef in serial.&lt;br /&gt;
&lt;br /&gt;
The most current copy of the code is available at &amp;lt;code&amp;gt;(location)&amp;lt;/code&amp;gt; as of (date)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&lt;br /&gt;
Once a suitable version of &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; has been located and moved to a working directory, it first needs to be compiled if this has not already been done. The FORTRAN compiler used to compile &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; should be the same version that was/will be used to compile the version of Chef used later in the meshing pipeline, in order to reduce the risk of complications.&lt;br /&gt;
&lt;br /&gt;
Once a compiler version is selected and added using &amp;lt;code&amp;gt;soft add&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; (depending on the system), the code can be compiled. As an example, if using &amp;lt;code&amp;gt;gcc-6.3.0&amp;lt;/code&amp;gt; on Cooley, compiling would look like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
soft add +gcc-6.3.0&lt;br /&gt;
&lt;br /&gt;
gfortran -O3 tm3Extrude.f -o tm3Extrude&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the code is compiled, the working directory needs to be prepared to run MGEN. MGEN needs the source 2D mesh in the form of &amp;lt;code&amp;gt;geom.crd&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;geom.cnn&amp;lt;/code&amp;gt; files in the same directory as the compiled code. These source files can be produced from scratch with MATLAB for structured grids, or through the use of [[Getting Started with Simmodeler|Simmetrix]] and the [[Model Convert]] tool for unstructured grids.&lt;br /&gt;
&lt;br /&gt;
Once the mesh files are in place, MGEN can be run with &amp;lt;code&amp;gt;./tm3Extrude&amp;lt;/code&amp;gt; as usual. The code will ask for inputs for zmin, zmax, numelz, and npart. These should be entered on a single line with spaces between the values, then press Enter to continue code execution.&lt;br /&gt;
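&lt;br /&gt;
Because the code reads these answers from standard input, they can likely also be piped in on one line for batch runs; the values below are illustrative placeholders for zmin, zmax, numelz, and npart, not recommendations:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;echo &amp;quot;0.0 1.0 32 8&amp;quot; | ./tm3Extrude&amp;lt;/code&amp;gt;&lt;br /&gt;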
&lt;br /&gt;
== Outputs ==&lt;br /&gt;
MGEN will write its outputs to the same working directory that the executable and source mesh files are in. There are multiple file types written, most with a suffix of a number to denote the part number of that file. The different parted files and their purposes are as follows:&lt;br /&gt;
&lt;br /&gt;
;geom3D.class :Classification file describing what type of geometric entity each point lies on (vertex, edge, face, volume)&lt;br /&gt;
;geom3D.cnndt :Connectivity of the elements &lt;br /&gt;
;geom3D.coord :Node coordinates&lt;br /&gt;
;geom3D.fathr :Parent vertex from the 2D source mesh&lt;br /&gt;
;geom3D.match :Contains periodic partners&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There is also one more file:&lt;br /&gt;
&lt;br /&gt;
; geom3DHead.cnn&lt;br /&gt;
&lt;br /&gt;
Which lists the headers containing information on the size of each of the above files.&lt;br /&gt;
&lt;br /&gt;
== Using the outputted files ==&lt;br /&gt;
The output files from MGEN now need to be prepared for Chef; this is done via &amp;lt;code&amp;gt;matchedNodeElmReader&amp;lt;/code&amp;gt;. The provided example is for a build on Cooley.&lt;br /&gt;
&lt;br /&gt;
First, the environment needs to be prepared by setting &amp;lt;code&amp;gt;SIM_LICENSE_FILE&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt;. Examples of this are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
export SIM_LICENSE_FILE=/eagle/PHASTA_aesp/SCOREC-CORE/deps/Simmetrix/UCBoulder&lt;br /&gt;
&lt;br /&gt;
export LD_LIBRARY_PATH=/eagle/PHASTA_aesp/SCOREC-CORE/deps/16.0-220326/lib/x64_rhel_gcc48/psKrnl/:$LD_LIBRARY_PATH&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From here, &amp;lt;code&amp;gt;matchedNodeElmReader&amp;lt;/code&amp;gt; can be run with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
mpirun -f /var/tmp/cobalt.2137783 -np &amp;lt;np&amp;gt; -genvall /eagle/PHASTA_aesp/SCOREC-CORE/build_gtvertCorruption/test/matchedNodeElmReader ../geom3D.cnndt ../geom3D.coord ../geom3D.match ../geom3D.class ../geom3D.fathr NULL ../geom3DHead.cnn outModel.dmg outModel/&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;np&amp;gt; should be replaced by the same number as used for npart when running MGEN.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1818</id>
		<title>ALCF/Archiving Data at ALCF</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1818"/>
				<updated>2022-08-23T17:46:25Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: Initial HPSS setup info&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ALCF's High Performance Storage System (HPSS) is a robotic tape drive system used for storing large amounts of archival data that will not be accessed often. The system has two interfaces listed in the ALCF documentation ([https://www.alcf.anl.gov/support-center/theta/using-hpss-theta]): &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;htar&amp;lt;/code&amp;gt;. This wiki focuses on the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; interface. &lt;br /&gt;
&lt;br /&gt;
== HSI Basics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; is a utility to interface with the HPSS system. It looks and operates much like the typical bash command lines that we are used to, but with some added complexities. When you enter &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt;, the system will place you into your home HPSS space at &amp;lt;code&amp;gt;/home/username&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; keeps track of both your location in this HPSS space and also your location in the &amp;quot;local&amp;quot; system that you are running &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; from. The &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; system will automatically set the &amp;quot;local&amp;quot; directory location to be the location that you entered the utility from. &lt;br /&gt;
&lt;br /&gt;
Navigation through HPSS operates the same as a normal command line, with &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;, among others, being valid commands for navigation through HPSS. If you need to navigate through the &amp;quot;local&amp;quot; directories, though, this can still be done by appending an &amp;quot;l&amp;quot; to the front of these standard commands (i.e. &amp;lt;code&amp;gt;lls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;lcd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;lmkdir&amp;lt;/code&amp;gt;). This can be useful if you entered &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; at the incorrect point or wish to archive data in multiple locations.&lt;br /&gt;
&lt;br /&gt;
It should be noted that &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; does not support tab-completion or up-arrowing for past commands. It is recommended that you enter the utility with a defined plan in order to reduce the amount of annoyance that the lack of these luxuries can cause.&lt;br /&gt;
&lt;br /&gt;
While standard use cases of &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; will be covered below, more documentation (that is more thorough and helpful than the ALCF documentation) is available here: [https://docs.nersc.gov/filesystems/archive/]. &lt;br /&gt;
&lt;br /&gt;
== Archiving of Data ==&lt;br /&gt;
&lt;br /&gt;
Once the destination directory for the data has been created and/or navigated to, there are a few options and considerations to actually archive the data. These are most notably:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; is the most basic archiving tool, and will overwrite any versions of the files being archived already on HPSS. &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; is a conditional version of &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; that will only overwrite files if there is a newer version &amp;quot;locally&amp;quot; compared to the file already on HPSS. This makes &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; the tool of choice for updating partially archived datasets, but due to its otherwise similar functionality to &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;, it is also the recommended default command to use.&lt;br /&gt;
&lt;br /&gt;
Both &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; share the same syntax; the following covers both, but &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; will be used in the examples. It is assumed that the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; command has already been run to enter the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; utility before attempting the following.&lt;br /&gt;
&lt;br /&gt;
Simple usage to store a single file is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change the name of a file as it is archived:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt; : &amp;lt;newFilename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whole directories can be stored using:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;lt;dirName&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you wish to keep the parent directory intact, or with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;quot;*&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you simply want to move the contents of a directory and everything beneath it. Simply be mindful of your &amp;quot;local&amp;quot; directory location when choosing between these options.&lt;br /&gt;
&lt;br /&gt;
If you need to retrieve data from tape and put it back on the &amp;quot;local&amp;quot; system, the &amp;lt;code&amp;gt;get&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cget&amp;lt;/code&amp;gt; commands act in the same way as &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; but in reverse.&lt;br /&gt;
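&lt;br /&gt;
As a sketch, a full archiving session combining the commands above might look like the following; all directory and file names here are illustrative placeholders, not real paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
hsi&lt;br /&gt;
mkdir myRunArchive&lt;br /&gt;
cd myRunArchive&lt;br /&gt;
lcd /projects/myProject/myRun&lt;br /&gt;
cput -R &amp;quot;*&amp;quot;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;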
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
If you are a first-time user of HPSS, you will likely get an error regarding a key file. This is something that must be taken care of by ALCF support (support@alcf.anl.gov). Simply email them with your ALCF username and state that you need access set up for HPSS.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1817</id>
		<title>ALCF/Archiving Data at ALCF</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1817"/>
				<updated>2022-08-22T21:04:34Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: Added get information&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ALCF's High Performance Storage System (HPSS) is a robotic tape drive system used for storing large amounts of archival data that will not be accessed often. The system has two interfaces listed in the ALCF documentation ([https://www.alcf.anl.gov/support-center/theta/using-hpss-theta]): &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;htar&amp;lt;/code&amp;gt;. This wiki focuses on the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; interface. &lt;br /&gt;
&lt;br /&gt;
== HSI Basics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; is a utility to interface with the HPSS system. It looks and operates much like the typical bash command lines that we are used to, but with some added complexities. When you enter &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt;, the system will place you into your home HPSS space at &amp;lt;code&amp;gt;/home/username&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; keeps track of both your location in this HPSS space and also your location in the &amp;quot;local&amp;quot; system that you are running &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; from. The &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; system will automatically set the &amp;quot;local&amp;quot; directory location to be the location that you entered the utility from. &lt;br /&gt;
&lt;br /&gt;
Navigation through HPSS operates the same as a normal command line, with &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;, among others, being valid commands for navigation through HPSS. If you need to navigate through the &amp;quot;local&amp;quot; directories, though, this can still be done by appending an &amp;quot;l&amp;quot; to the front of these standard commands (i.e. &amp;lt;code&amp;gt;lls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;lcd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;lmkdir&amp;lt;/code&amp;gt;). This can be useful if you entered &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; at the incorrect point or wish to archive data in multiple locations.&lt;br /&gt;
&lt;br /&gt;
It should be noted that &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; does not support tab-completion or up-arrowing for past commands. It is recommended that you enter the utility with a defined plan in order to reduce the amount of annoyance that the lack of these luxuries can cause.&lt;br /&gt;
&lt;br /&gt;
While standard use cases of &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; will be covered below, more documentation (that is more thorough and helpful than the ALCF documentation) is available here: [https://docs.nersc.gov/filesystems/archive/]. &lt;br /&gt;
&lt;br /&gt;
== Archiving of Data ==&lt;br /&gt;
&lt;br /&gt;
Once the destination directory for the data has been created and/or navigated to, there are a few options and considerations to actually archive the data. These are most notably:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; is the most basic archiving tool, and will overwrite any versions of the files being archived already on HPSS. &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; is a conditional version of &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; that will only overwrite files if there is a newer version &amp;quot;locally&amp;quot; compared to the file already on HPSS. This makes &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; the tool of choice for updating partially archived datasets, but due to its otherwise similar functionality to &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;, it is also the recommended default command to use.&lt;br /&gt;
&lt;br /&gt;
Both &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; share the same syntax; the following covers both, but &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; will be used in the examples. It is assumed that the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; command has already been run to enter the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; utility before attempting the following.&lt;br /&gt;
&lt;br /&gt;
Simple usage to store a single file is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change the name of a file as it is archived:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt; : &amp;lt;newFilename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whole directories can be stored using:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;lt;dirName&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you wish to keep the parent directory intact, or with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;quot;*&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you simply want to move the contents of a directory and everything beneath it. Simply be mindful of your &amp;quot;local&amp;quot; directory location when choosing between these options.&lt;br /&gt;
&lt;br /&gt;
If you need to retrieve data from tape and put it back on the &amp;quot;local&amp;quot; system, the &amp;lt;code&amp;gt;get&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cget&amp;lt;/code&amp;gt; commands act in the same way as &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; but in reverse.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1816</id>
		<title>ALCF/Archiving Data at ALCF</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1816"/>
				<updated>2022-08-22T21:03:02Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ALCF's High Performance Storage System (HPSS) is a robotic tape drive system used for storing large amounts of archival data that will not be accessed often. The system has two interfaces listed in the ALCF documentation ([https://www.alcf.anl.gov/support-center/theta/using-hpss-theta]): &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;htar&amp;lt;/code&amp;gt;. This wiki focuses on the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; interface. &lt;br /&gt;
&lt;br /&gt;
== HSI Basics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; is a utility to interface with the HPSS system. It looks and operates much like the typical bash command lines that we are used to, but with some added complexities. When you enter &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt;, the system will place you into your home HPSS space at &amp;lt;code&amp;gt;/home/username&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; keeps track of both your location in this HPSS space and also your location in the &amp;quot;local&amp;quot; system that you are running &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; from. The &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; system will automatically set the &amp;quot;local&amp;quot; directory location to be the location that you entered the utility from. &lt;br /&gt;
&lt;br /&gt;
Navigation through HPSS operates the same as a normal command line, with &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;, among others, being valid commands for navigation through HPSS. If you need to navigate through the &amp;quot;local&amp;quot; directories, though, this can still be done by appending an &amp;quot;l&amp;quot; to the front of these standard commands (i.e. &amp;lt;code&amp;gt;lls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;lcd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;lmkdir&amp;lt;/code&amp;gt;). This can be useful if you entered &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; at the incorrect point or wish to archive data in multiple locations.&lt;br /&gt;
&lt;br /&gt;
It should be noted that &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; does not support tab-completion or up-arrowing for past commands. It is recommended that you enter the utility with a defined plan in order to reduce the amount of annoyance that the lack of these luxuries can cause.&lt;br /&gt;
&lt;br /&gt;
While standard use cases of &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; will be covered below, more documentation (that is more thorough and helpful than the ALCF documentation) is available here: [https://docs.nersc.gov/filesystems/archive/]. &lt;br /&gt;
&lt;br /&gt;
== Archiving of Data ==&lt;br /&gt;
&lt;br /&gt;
Once the destination directory for the data has been created and/or navigated to, there are a few options and considerations to actually archive the data. These are most notably:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; is the most basic archiving tool, and will overwrite any versions of the files being archived already on HPSS. &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; is a conditional version of &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; that will only overwrite files if there is a newer version &amp;quot;locally&amp;quot; compared to the file already on HPSS. This makes &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; the tool of choice for updating partially archived datasets, but due to its otherwise similar functionality to &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;, it is also the recommended default command to use.&lt;br /&gt;
&lt;br /&gt;
Both &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; share the same syntax; the following covers both, but &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; will be used in the examples. It is assumed that the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; command has already been run to enter the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; utility before attempting the following.&lt;br /&gt;
&lt;br /&gt;
Simple usage to store a single file is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change the name of a file as it is archived:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt; : &amp;lt;newFilename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whole directories can be stored using:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;lt;dirName&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you wish to keep the parent directory intact, or with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;quot;*&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you simply want to move the contents of a directory and everything beneath it. Simply be mindful of your &amp;quot;local&amp;quot; directory location when choosing between these options.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1815</id>
		<title>ALCF/Archiving Data at ALCF</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1815"/>
				<updated>2022-08-22T21:02:42Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ALCF's High Performance Storage System (HPSS) is a robotic tape drive system used for storing large amounts of archival data that will not be accessed often. The system has two interfaces listed in the ALCF documentation ([https://www.alcf.anl.gov/support-center/theta/using-hpss-theta]): &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;htar&amp;lt;/code&amp;gt;. This wiki focuses on the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; interface. &lt;br /&gt;
&lt;br /&gt;
== HSI Basics ==&lt;br /&gt;
&lt;br /&gt;
HSI is a utility to interface with the HPSS system. It looks and operates much like the typical bash command lines that we are used to, but with some added complexities. When you enter &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt;, the system will place you into your home HPSS space at &amp;lt;code&amp;gt;/home/username&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; keeps track of both your location in this HPSS space and also your location in the &amp;quot;local&amp;quot; system that you are running &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; from. The &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; system will automatically set the &amp;quot;local&amp;quot; directory location to be the location that you entered the utility from. &lt;br /&gt;
&lt;br /&gt;
Navigation through HPSS operates the same as a normal command line, with &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;, among others, being valid commands for navigation through HPSS. If you need to navigate through the &amp;quot;local&amp;quot; directories, though, this can still be done by appending an &amp;quot;l&amp;quot; to the front of these standard commands (i.e. &amp;lt;code&amp;gt;lls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;lcd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;lmkdir&amp;lt;/code&amp;gt;). This can be useful if you entered &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; at the incorrect point or wish to archive data in multiple locations.&lt;br /&gt;
&lt;br /&gt;
It should be noted that &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; does not support tab-completion or up-arrowing for past commands. It is recommended that you enter the utility with a defined plan in order to reduce the amount of annoyance that the lack of these luxuries can cause.&lt;br /&gt;
&lt;br /&gt;
While standard use cases of &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; will be covered below, more documentation (that is more thorough and helpful than the ALCF documentation) is available here: [https://docs.nersc.gov/filesystems/archive/]. &lt;br /&gt;
&lt;br /&gt;
== Archiving of Data ==&lt;br /&gt;
&lt;br /&gt;
Once the destination directory for the data has been created and/or navigated to, there are a few options and considerations to actually archive the data. These are most notably:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; is the most basic archiving tool, and will overwrite any versions of the files being archived already on HPSS. &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; is a conditional version of &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; that will only overwrite files if there is a newer version &amp;quot;locally&amp;quot; compared to the file already on HPSS. This makes &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; the tool of choice for updating partially archived datasets, but due to its otherwise similar functionality to &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;, it is also the recommended default command to use.&lt;br /&gt;
&lt;br /&gt;
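As a concrete (illustrative) example of the difference, assuming a file named &amp;lt;code&amp;gt;restart.100.1&amp;lt;/code&amp;gt; exists both locally and on HPSS:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
put restart.100.1&lt;br /&gt;
&lt;br /&gt;
cput restart.100.1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first command overwrites the archived copy unconditionally, while the second transfers the file only if the local copy is newer than the one already on HPSS.&lt;br /&gt;
&lt;br /&gt;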
Both &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; have similar syntax, and the following will cover both, but &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; will be used as an example. It is assumed that the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; command has already been run to enter the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; utility before attempting the following.&lt;br /&gt;
&lt;br /&gt;
Simple usage to store a single file is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change the name of a file as it is archived:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt; : &amp;lt;newFilename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whole directories can be stored using:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;lt;dirName&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you wish to keep the parent directory intact, or with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;quot;*&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you simply want to move the contents of a directory and everything beneath it. Be mindful of your &amp;quot;local&amp;quot; directory location when choosing between these options.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1814</id>
		<title>ALCF/Archiving Data at ALCF</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1814"/>
				<updated>2022-08-22T21:02:17Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ALCF's High Performance Storage System (HPSS) is a robotic tape drive system used for storing large amounts of archival data that will not be accessed often. The system has two interfaces listed in ALCF documentation ([https://www.alcf.anl.gov/support-center/theta/using-hpss-theta]) that can be used, &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;htar&amp;lt;/code&amp;gt;. In this wiki, the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; interface will be focused on. &lt;br /&gt;
&lt;br /&gt;
== HSI Overview ==&lt;br /&gt;
&lt;br /&gt;
HSI is a utility to interface with the HPSS system. It looks and operates much like the typical bash command lines that we are used to, but with some added complexities. When you enter &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt;, the system will place you into your home HPSS space at &amp;lt;code&amp;gt;/home/username&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; keeps track of both your location in this HPSS space and also your location in the &amp;quot;local&amp;quot; system that you are running &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; from. The &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; system will automatically set the &amp;quot;local&amp;quot; directory location to be the location that you entered the utility from. &lt;br /&gt;
&lt;br /&gt;
Navigation through HPSS operates the same as a normal command line, with &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;, among others, being valid commands. If you need to navigate through the &amp;quot;local&amp;quot; directories, though, this can still be done by appending an &amp;quot;l&amp;quot; to the front of these standard commands (i.e. &amp;lt;code&amp;gt;lls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;lcd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;lmkdir&amp;lt;/code&amp;gt;). This can be useful if you entered &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; at the incorrect point or wish to archive data in multiple locations.&lt;br /&gt;
&lt;br /&gt;
It should be noted that &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; does not support tab-completion or recalling past commands with the up arrow. It is recommended that you enter the utility with a defined plan in order to reduce the annoyance that the lack of these conveniences can cause.&lt;br /&gt;
&lt;br /&gt;
While standard use cases of &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; will be covered below, more documentation (that is more thorough and helpful than the ALCF documentation) is available here: [https://docs.nersc.gov/filesystems/archive/]. &lt;br /&gt;
&lt;br /&gt;
== Archiving of Data ==&lt;br /&gt;
&lt;br /&gt;
Once the destination directory for the data has been created and/or navigated to, there are a few options and considerations to actually archive the data. These are most notably:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; is the most basic archiving tool, and will overwrite any versions of the files being archived already on HPSS. &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; is a conditional version of &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; that will only overwrite files if there is a newer version &amp;quot;locally&amp;quot; compared to the file already on HPSS. This makes &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; the tool of choice for updating partially archived datasets, but due to its otherwise similar functionality to &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;, it is also the recommended default command to use.&lt;br /&gt;
&lt;br /&gt;
Both &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; have similar syntax, and the following will cover both, but &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; will be used as an example. It is assumed that the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; command has already been run to enter the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; utility before attempting the following.&lt;br /&gt;
&lt;br /&gt;
Simple usage to store a single file is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change the name of a file as it is archived:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt; : &amp;lt;newFilename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whole directories can be stored using:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;lt;dirName&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you wish to keep the parent directory intact, or with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;quot;*&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you simply want to move the contents of a directory and everything beneath it. Be mindful of your &amp;quot;local&amp;quot; directory location when choosing between these options.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1813</id>
		<title>ALCF/Archiving Data at ALCF</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1813"/>
				<updated>2022-08-22T19:47:54Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ALCF's High Performance Storage System (HPSS) is a robotic tape drive system used for storing large amounts of archival data that will not be accessed often. The system has two interfaces listed in ALCF documentation ([https://www.alcf.anl.gov/support-center/theta/using-hpss-theta]) that can be used, &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;htar&amp;lt;/code&amp;gt;. In this wiki, the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; interface will be focused on. &lt;br /&gt;
&lt;br /&gt;
== HSI Overview ==&lt;br /&gt;
&lt;br /&gt;
HSI is a utility to interface with the HPSS system. It looks and operates much like the typical bash command lines that we are used to, but with some added complexities. When you enter &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt;, the system will place you into your home HPSS space at &amp;lt;code&amp;gt;/home/username&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; keeps track of both your location in this HPSS space and also your location in the &amp;quot;local&amp;quot; system that you are running &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; from. The &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; system will automatically set the &amp;quot;local&amp;quot; directory location to be the location that you entered the utility from. &lt;br /&gt;
&lt;br /&gt;
Navigation through HPSS operates the same as a normal command line, with &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;, among others, being valid commands. If you need to navigate through the &amp;quot;local&amp;quot; directories, though, this can still be done by appending an &amp;quot;l&amp;quot; to the front of these standard commands (i.e. &amp;lt;code&amp;gt;lls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;lcd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;lmkdir&amp;lt;/code&amp;gt;). This can be useful if you entered &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; at the incorrect point or wish to archive data in multiple locations.&lt;br /&gt;
&lt;br /&gt;
It should be noted that &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; does not support tab-completion or recalling past commands with the up arrow. It is recommended that you enter the utility with a defined plan in order to reduce the annoyance that the lack of these conveniences can cause.&lt;br /&gt;
&lt;br /&gt;
While standard use cases of &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; will be covered below, more documentation (that is more thorough and helpful than the ALCF documentation) is available here: [https://docs.nersc.gov/filesystems/archive/]. &lt;br /&gt;
&lt;br /&gt;
== Archiving of Data ==&lt;br /&gt;
&lt;br /&gt;
Once the destination directory for the data has been created and/or navigated to, there are a few options and considerations to actually archive the data. These are most notably:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; is the most basic archiving tool, and will overwrite any versions of the files being archived already on HPSS. &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; is a conditional version of &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; that will only overwrite files if there is a newer version &amp;quot;locally&amp;quot; compared to the file already on HPSS. This makes &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; the tool of choice for updating partially archived datasets, but due to its otherwise similar functionality to &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;, it is also the recommended default command to use.&lt;br /&gt;
&lt;br /&gt;
Both &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; have similar syntax, and the following will cover both, but &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; will be used as an example. It is assumed that the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; command has already been run to enter the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; utility before attempting the following.&lt;br /&gt;
&lt;br /&gt;
Simple usage to store a single file is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change the name of a file as it is archived:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt; : &amp;lt;newFilename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whole directories can be stored using:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;lt;dirName&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you wish to keep the parent directory intact, or with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;quot;*&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you simply want to move the contents of a directory and everything beneath it. Be mindful of your &amp;quot;local&amp;quot; directory location when choosing between these options.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1812</id>
		<title>ALCF/Archiving Data at ALCF</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1812"/>
				<updated>2022-08-22T17:41:13Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: List fix&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ALCF's High Performance Storage System (HPSS) is a robotic tape drive system used for storing large amounts of archival data that will not be accessed often. The system has two interfaces listed in ALCF documentation ([https://www.alcf.anl.gov/support-center/theta/using-hpss-theta]) that can be used, &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;htar&amp;lt;/code&amp;gt;. In this wiki, the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; interface will be focused on. &lt;br /&gt;
&lt;br /&gt;
== HSI Overview ==&lt;br /&gt;
&lt;br /&gt;
HSI is a utility to interface with the HPSS system. It looks and operates much like the typical bash command lines that we are used to, but with some added complexities. When you enter &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt;, the system will place you into your home HPSS space at &amp;lt;code&amp;gt;/home/username&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; keeps track of both your location in this HPSS space and also your location in the &amp;quot;local&amp;quot; system that you are running &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; from. The &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; system will automatically set the &amp;quot;local&amp;quot; directory location to be the location that you entered the utility from. &lt;br /&gt;
&lt;br /&gt;
Navigation through HPSS operates the same as a normal command line, with &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;, among others, being valid commands. If you need to navigate through the &amp;quot;local&amp;quot; directories, though, this can still be done by appending an &amp;quot;l&amp;quot; to the front of these standard commands (i.e. &amp;lt;code&amp;gt;lls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;lcd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;lmkdir&amp;lt;/code&amp;gt;). This can be useful if you entered &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; at the incorrect point or wish to archive data in multiple locations.&lt;br /&gt;
&lt;br /&gt;
It should be noted that &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; does not support tab-completion or recalling past commands with the up arrow. It is recommended that you enter the utility with a defined plan in order to reduce the annoyance that the lack of these conveniences can cause.&lt;br /&gt;
&lt;br /&gt;
While standard use cases of &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; will be covered below, more documentation (that is more thorough and helpful than the ALCF documentation) is available here: [https://docs.nersc.gov/filesystems/archive/]. &lt;br /&gt;
&lt;br /&gt;
== Archiving of Data ==&lt;br /&gt;
&lt;br /&gt;
Once the destination directory for the data has been created and/or navigated to, there are a few options and considerations to actually archive the data. These are most notably:&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; is the most basic archiving tool, and will overwrite any versions of the files being archived already on HPSS. &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; is a conditional version of &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; that will only overwrite files if there is a newer version &amp;quot;locally&amp;quot; compared to the file already on HPSS. This makes &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; the tool of choice for updating partially archived datasets, but due to its otherwise similar functionality to &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;, it is also the recommended default command to use.&lt;br /&gt;
&lt;br /&gt;
Both &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; have similar syntax, and the following will cover both, but &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; will be used as an example. It is assumed that the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; command has already been run to enter the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; utility before attempting the following.&lt;br /&gt;
&lt;br /&gt;
Simple usage to store a single file is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change the name of a file as it is archived:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt; : &amp;lt;newFilename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whole directories can be stored using:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;lt;dirName&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you wish to keep the parent directory intact, or with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;quot;*&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you simply want to move the contents of a directory and everything beneath it. Be mindful of your &amp;quot;local&amp;quot; directory location when choosing between these options.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1811</id>
		<title>ALCF/Archiving Data at ALCF</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1811"/>
				<updated>2022-08-22T17:40:54Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: Finalization of initial version&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ALCF's High Performance Storage System (HPSS) is a robotic tape drive system used for storing large amounts of archival data that will not be accessed often. The system has two interfaces listed in ALCF documentation ([https://www.alcf.anl.gov/support-center/theta/using-hpss-theta]) that can be used, &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;htar&amp;lt;/code&amp;gt;. In this wiki, the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; interface will be focused on. &lt;br /&gt;
&lt;br /&gt;
== HSI Overview ==&lt;br /&gt;
&lt;br /&gt;
HSI is a utility to interface with the HPSS system. It looks and operates much like the typical bash command lines that we are used to, but with some added complexities. When you enter &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt;, the system will place you into your home HPSS space at &amp;lt;code&amp;gt;/home/username&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; keeps track of both your location in this HPSS space and also your location in the &amp;quot;local&amp;quot; system that you are running &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; from. The &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; system will automatically set the &amp;quot;local&amp;quot; directory location to be the location that you entered the utility from. &lt;br /&gt;
&lt;br /&gt;
Navigation through HPSS operates the same as a normal command line, with &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;, among others, being valid commands. If you need to navigate through the &amp;quot;local&amp;quot; directories, though, this can still be done by appending an &amp;quot;l&amp;quot; to the front of these standard commands (i.e. &amp;lt;code&amp;gt;lls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;lcd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;lmkdir&amp;lt;/code&amp;gt;). This can be useful if you entered &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; at the incorrect point or wish to archive data in multiple locations.&lt;br /&gt;
&lt;br /&gt;
It should be noted that &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; does not support tab-completion or recalling past commands with the up arrow. It is recommended that you enter the utility with a defined plan in order to reduce the annoyance that the lack of these conveniences can cause.&lt;br /&gt;
&lt;br /&gt;
While standard use cases of &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; will be covered below, more documentation (that is more thorough and helpful than the ALCF documentation is available here: [https://docs.nersc.gov/filesystems/archive/]. &lt;br /&gt;
&lt;br /&gt;
== Archiving of Data ==&lt;br /&gt;
&lt;br /&gt;
Once the destination directory for the data has been created and/or navigated to, there are a few options and considerations to actually archive the data. These are most notably:&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; is the most basic archiving tool, and will overwrite any versions of the files being archived already on HPSS. &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; is a conditional version of &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; that will only overwrite files if there is a newer version &amp;quot;locally&amp;quot; compared to the file already on HPSS. This makes &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; the tool of choice for updating partially archived datasets, but due to its otherwise similar functionality to &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt;, it is also the recommended default command to use.&lt;br /&gt;
&lt;br /&gt;
Both &amp;lt;code&amp;gt;put&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; have similar syntax, and the following will cover both, but &amp;lt;code&amp;gt;cput&amp;lt;/code&amp;gt; will be used as an example. It is assumed that the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; command has already been run to enter the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; utility before attempting the following.&lt;br /&gt;
&lt;br /&gt;
Simple usage to store a single file is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change the name of a file as it is archived:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput &amp;lt;filename&amp;gt; : &amp;lt;newFilename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whole directories can be stored using:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;lt;dirName&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you wish to keep the parent directory intact, or with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cput -R &amp;quot;*&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you simply want to move the contents of a directory and everything beneath it. Be mindful of your &amp;quot;local&amp;quot; directory location when choosing between these options.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1810</id>
		<title>ALCF/Archiving Data at ALCF</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1810"/>
				<updated>2022-08-22T16:00:36Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: External link fix&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ALCF's High Performance Storage System (HPSS) is a robotic tape drive system used for storing large amounts of archival data that will not be accessed often. The system has two interfaces listed in ALCF documentation ([https://www.alcf.anl.gov/support-center/theta/using-hpss-theta]) that can be used, &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;htar&amp;lt;/code&amp;gt;. In this wiki, the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; interface will be focused on. &lt;br /&gt;
&lt;br /&gt;
== HSI Overview ==&lt;br /&gt;
&lt;br /&gt;
HSI is a utility to interface with the HPSS system. It looks and operates much like the typical bash command lines that we are used to, but with some added complexities. When you enter &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt;, the system will dump you into your &amp;quot;home&amp;quot; HPSS space at &amp;lt;code&amp;gt;/home/username&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; keeps track of both your location in this HPSS space and also your location in the &amp;quot;local&amp;quot; system that you are running &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; from. The &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; system will automatically set the &amp;quot;local&amp;quot; directory location to be the location that you entered the utility from. &lt;br /&gt;
&lt;br /&gt;
Navigation through HPSS operates the same as a normal command line, with &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;, among others, being valid commands. If you need to navigate through the &amp;quot;local&amp;quot; directories, though, this can still be done by appending an &amp;quot;l&amp;quot; to the front of these standard commands (i.e. &amp;lt;code&amp;gt;lls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;lcd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;lmkdir&amp;lt;/code&amp;gt;). This can be useful if you entered &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; at the incorrect point or wish to archive data in multiple locations.&lt;br /&gt;
&lt;br /&gt;
It should be noted that &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; does not support tab-completion or recalling past commands with the up arrow. It is recommended that you enter the utility with a defined plan in order to reduce the annoyance that the lack of these conveniences can cause.&lt;br /&gt;
&lt;br /&gt;
== Archiving of Data ==&lt;br /&gt;
&lt;br /&gt;
== Example Usage ==&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1809</id>
		<title>ALCF/Archiving Data at ALCF</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ALCF/Archiving_Data_at_ALCF&amp;diff=1809"/>
				<updated>2022-08-22T15:59:07Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: Initial page creation and outlining&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ALCF's High Performance Storage System (HPSS) is a robotic tape drive system used for storing large amounts of archival data that will not be accessed often. The system has two interfaces listed in ALCF documentation ([https://www.alcf.anl.gov/support-center/theta/using-hpss-theta]) that can be used, &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;htar&amp;lt;/code&amp;gt;. In this wiki, the &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; interface will be focused on. &lt;br /&gt;
&lt;br /&gt;
== HSI Overview ==&lt;br /&gt;
&lt;br /&gt;
HSI is a utility to interface with the HPSS system. It looks and operates much like the typical bash command lines that we are used to, but with some added complexities. When you enter &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt;, the system will dump you into your &amp;quot;home&amp;quot; HPSS space at &amp;lt;code&amp;gt;/home/username&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; keeps track of both your location in this HPSS space and also your location in the &amp;quot;local&amp;quot; system that you are running &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; from. The &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; system will automatically set the &amp;quot;local&amp;quot; directory location to be the location that you entered the utility from. &lt;br /&gt;
&lt;br /&gt;
Navigation through HPSS operates the same as a normal command line, with &amp;lt;code&amp;gt;ls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;mkdir&amp;lt;/code&amp;gt;, among others, being valid commands. If you need to navigate through the &amp;quot;local&amp;quot; directories, though, this can still be done by appending an &amp;quot;l&amp;quot; to the front of these standard commands (i.e. &amp;lt;code&amp;gt;lls&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;lcd&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;lmkdir&amp;lt;/code&amp;gt;). This can be useful if you entered &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; at the incorrect point or wish to archive data in multiple locations.&lt;br /&gt;
&lt;br /&gt;
It should be noted that &amp;lt;code&amp;gt;hsi&amp;lt;/code&amp;gt; does not support tab-completion or recalling past commands with the up arrow. It is recommended that you enter the utility with a defined plan in order to reduce the annoyance that the lack of these conveniences can cause.&lt;br /&gt;
&lt;br /&gt;
== Archiving of Data ==&lt;br /&gt;
&lt;br /&gt;
== Example Usage ==&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=MGEN_Extrude&amp;diff=1808</id>
		<title>MGEN Extrude</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=MGEN_Extrude&amp;diff=1808"/>
				<updated>2022-08-18T18:40:14Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: More initial info - output and afterwards oriented&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;MGEN is a tool in the meshing workflow that takes a 2D source mesh and extrudes it in the third dimension based off of user input. The tool was originally created for use on structured grids on the Boeing bump, but has since been generalized for use in unstructured setups.&lt;br /&gt;
&lt;br /&gt;
== Basic Overview ==&lt;br /&gt;
&lt;br /&gt;
MGEN code is stored in &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; and written in FORTRAN. The code takes in a source 2D mesh, z-coordinates to extrude between, the number of elements to populate the extrusion with, and the number of partitions to write the mesh to. &lt;br /&gt;
&lt;br /&gt;
Partitioning in MGEN is simply a method to reduce the cost of initial runs of Chef; it is not a replacement for the initial configuring that Chef does (via 1-1-Chef). Partitioning in MGEN merely allows the first run of Chef to be in parallel (e.g. 8-8-Chef). Starting Chef in parallel matters most on large grids that would take prohibitively long to run through Chef in serial.&lt;br /&gt;
&lt;br /&gt;
The most current copy of the code is available at &amp;lt;code&amp;gt;(location)&amp;lt;/code&amp;gt; as of (date)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&lt;br /&gt;
Once a suitable version of &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; has been located and moved to a working directory, it first needs to be compiled if this has not already been done. To reduce the risk of complications, the FORTRAN compiler used to compile &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; should be the same version that was (or will be) used to compile the version of Chef used later in the meshing pipeline.&lt;br /&gt;
&lt;br /&gt;
Once a compiler version is selected and added using &amp;lt;code&amp;gt;soft add&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; (depending on the system), the code can be compiled. As an example, with &amp;lt;code&amp;gt;gcc-6.3.0&amp;lt;/code&amp;gt; on Cooley, compiling would look like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
soft add +gcc-6.3.0&lt;br /&gt;
&lt;br /&gt;
gfortran -O3 tm3Extrude.f -o tm3Extrude&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the code is compiled, the working directory needs to be prepared to run MGEN. MGEN needs the source 2D mesh in the form of &amp;lt;code&amp;gt;geom.crd&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;geom.cnn&amp;lt;/code&amp;gt; files in the same directory as the compiled code. These source files can be produced from scratch with MATLAB for structured grids, or through the use of [[Getting Started with Simmodeler|Simmetrix]] and the [[Convert]] tool for unstructured grids.&lt;br /&gt;
&lt;br /&gt;
Once the mesh files are in place, MGEN can be run with &amp;lt;code&amp;gt;./tm3Extrude&amp;lt;/code&amp;gt; as usual. The code will ask for inputs for zmin, zmax, numelz, and npart. These should be entered on a single line, separated by spaces, before hitting Enter to continue code execution.&lt;br /&gt;
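Because the four answers are read from a single line of standard input, a run can also be scripted non-interactively. A sketch, with purely illustrative values (zmin 0.0, zmax 1.0, 32 elements, 8 parts):

```shell
# Assemble the single stdin line MGEN expects: zmin zmax numelz npart.
# These values are hypothetical; substitute your own.
MGEN_INPUT='0.0 1.0 32 8'
printf '%s\n' "$MGEN_INPUT"
# Pipe it into the executable for a non-interactive run:
# printf '%s\n' "$MGEN_INPUT" | ./tm3Extrude
```

Driving the prompts through a pipe like this makes the extrusion reproducible inside batch job scripts.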
&lt;br /&gt;
== Outputs ==&lt;br /&gt;
MGEN will write its outputs to the same working directory that the executable and source mesh files are in. Multiple file types are written, most with a numeric suffix denoting the part number of that file. The partitioned files and their purposes are as follows:&lt;br /&gt;
&lt;br /&gt;
;geom3D.class :Classification file describing what type of geometric entity each point lies on (vertex, edge, face, volume)&lt;br /&gt;
;geom3D.cnndt :Connectivity of the elements &lt;br /&gt;
;geom3D.coord :Node coordinates&lt;br /&gt;
;geom3D.fathr :Parent vertex from the 2D source mesh&lt;br /&gt;
;geom3D.match :Contains periodic partners&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There is also one more file:&lt;br /&gt;
&lt;br /&gt;
;geom3DHead.cnn :Lists the headers containing size information for each of the above files&lt;br /&gt;
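As a quick sanity check before moving on to Chef, the number of per-part files written can be compared against the npart value given to MGEN. This sketch assumes the per-part coordinate files carry the numeric suffix described above (e.g. geom3D.coord1); adjust the glob to match the actual naming on your system:

```shell
# Compare the number of per-part coordinate files against npart (here 8).
# The geom3D.coord* naming is an assumption; adjust to your output.
NPART=8
nfound=$(ls geom3D.coord* 2>/dev/null | wc -l | tr -d ' ')
echo "found $nfound coordinate files (expected $NPART)"
```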
&lt;br /&gt;
== Using the outputted files ==&lt;br /&gt;
The output files from MGEN now need to be prepared for Chef; this is done via &amp;lt;code&amp;gt;matchedNodeElmReader&amp;lt;/code&amp;gt;. The provided example is for a build on Cooley.&lt;br /&gt;
&lt;br /&gt;
First, the environment needs to be prepared by setting &amp;lt;code&amp;gt;SIM_LICENSE_FILE&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt;. Examples of this are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
export SIM_LICENSE_FILE=/eagle/PHASTA_aesp/SCOREC-CORE/deps/Simmetrix/UCBoulder&lt;br /&gt;
&lt;br /&gt;
export LD_LIBRARY_PATH=/eagle/PHASTA_aesp/SCOREC-CORE/deps/16.0-220326/lib/x64_rhel_gcc48/psKrnl/:$LD_LIBRARY_PATH&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From here, &amp;lt;code&amp;gt;matchedNodeElmReader&amp;lt;/code&amp;gt; can be run with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
mpirun -f /var/tmp/cobalt.2137783 -np &amp;lt;np&amp;gt; -genvall /eagle/PHASTA_aesp/SCOREC-CORE/build_gtvertCorruption/test/matchedNodeElmReader ../geom3D.cnndt ../geom3D.coord ../geom3D.match ../geom3D.class ../geom3D.fathr NULL ../geom3DHead.cnn outModel.dmg outModel/&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;np&amp;gt; should be replaced by the same number used for npart when running MGEN.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=MGEN_Extrude&amp;diff=1807</id>
		<title>MGEN Extrude</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=MGEN_Extrude&amp;diff=1807"/>
				<updated>2022-08-18T16:02:21Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: Initial edits&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;MGEN is a tool in the meshing workflow that takes a 2D source mesh and extrudes it in the third dimension based on user input. The tool was originally created for use on structured grids on the Boeing bump, but has since been generalized for use in unstructured setups.&lt;br /&gt;
&lt;br /&gt;
== Basic Overview ==&lt;br /&gt;
&lt;br /&gt;
MGEN code is stored in &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; and written in FORTRAN. The code takes in a source 2D mesh, z-coordinates to extrude between, the number of elements to populate the extrusion with, and the number of partitions to write the mesh to. &lt;br /&gt;
&lt;br /&gt;
Partitioning in MGEN is simply a method to reduce the cost of initial runs of Chef; it is not a replacement for the initial configuring that Chef does (via 1-1-Chef). Partitioning in MGEN merely allows the first run of Chef to be in parallel (e.g. 8-8-Chef). Starting Chef in parallel matters most on large grids that would take prohibitively long to run through Chef in serial.&lt;br /&gt;
&lt;br /&gt;
The most current copy of the code is available at &amp;lt;code&amp;gt;(location)&amp;lt;/code&amp;gt; as of (date)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&lt;br /&gt;
Once a suitable version of &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; has been located and moved to a working directory, it first needs to be compiled if this has not already been done. To reduce the risk of complications, the FORTRAN compiler used to compile &amp;lt;code&amp;gt;tm3Extrude.f&amp;lt;/code&amp;gt; should be the same version that was (or will be) used to compile the version of Chef used later in the meshing pipeline.&lt;br /&gt;
&lt;br /&gt;
Once a compiler version is selected and added using &amp;lt;code&amp;gt;soft add&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; (depending on the system), the code can be compiled. As an example, with &amp;lt;code&amp;gt;gcc-6.3.0&amp;lt;/code&amp;gt; on Cooley, compiling would look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
soft add +gcc-6.3.0&lt;br /&gt;
&lt;br /&gt;
gfortran -O3 tm3Extrude.f -o tm3Extrude&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the code is compiled, the working directory needs to be prepared to run MGEN. MGEN needs the source 2D mesh in the form of &amp;lt;code&amp;gt;geom.crd&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;geom.cnn&amp;lt;/code&amp;gt; files in the same directory as the compiled code. These source files can be produced from scratch with MATLAB for structured grids, or through the use of [[Getting Started with Simmodeler|Simmetrix]] and the [[Convert]] tool for unstructured grids.&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Chef&amp;diff=1600</id>
		<title>Chef</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Chef&amp;diff=1600"/>
				<updated>2021-06-17T21:58:31Z</updated>
		
		<summary type="html">&lt;p&gt;Prte0550: Adding a link to better introductory information for new users&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Chef''' is the mesh partitioning and file preparation program used to create the files read by [[PHASTA]]. It is a part of the [[SCOREC-core]] tools.&lt;br /&gt;
&lt;br /&gt;
Basic usage information is located in the [[Level 1 Partition]] page, while other related pages can be found in the [[:Category:Chef|Chef Category]].&lt;br /&gt;
&lt;br /&gt;
[[Category: Chef]]&lt;/div&gt;</summary>
		<author><name>Prte0550</name></author>	</entry>

	</feed>