<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://fluid.colorado.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Skinnerr</id>
		<title>PHASTA Wiki - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://fluid.colorado.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Skinnerr"/>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php/Special:Contributions/Skinnerr"/>
		<updated>2026-04-29T22:10:33Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.30.0</generator>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ParaView/Tricks&amp;diff=631</id>
		<title>ParaView/Tricks</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ParaView/Tricks&amp;diff=631"/>
				<updated>2018-11-10T22:17:41Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains a list of tips and tricks for ParaView, which may be useful for others.&lt;br /&gt;
&lt;br /&gt;
==Difference Between Two Timesteps==&lt;br /&gt;
To look at the &amp;quot;delta&amp;quot; between two timesteps, follow these steps.&lt;br /&gt;
# Load your data file twice, so your pipeline browser has two separate instances of &amp;quot;flow.pht&amp;quot; or whatever file you open.&lt;br /&gt;
# Apply one Force Time filter to each input file, and specify the physical time (not time index) for each. This will force the time for each and make them independent of the global ParaView time control in the upper toolbar.&lt;br /&gt;
# Select both Force Time filters simultaneously, and apply a Programmable Filter. Your pipeline browser will now show arrows representing multiple inputs to the single programmable filter.&lt;br /&gt;
# Populate the programmable filter with the following code:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;tmp = inputs[0].PointData['myAvailableFieldName'] - inputs[1].PointData['myAvailableFieldName']&amp;lt;br&amp;gt;output.PointData.append(tmp, 'delta of myAvailableFieldName')&amp;lt;/code&amp;gt;&lt;br /&gt;
# You can now view the field, &amp;quot;delta of myAvailableFieldName,&amp;quot; and change the time steps it uses to compute the time-delta by modifying the Force Time filter properties.&lt;br /&gt;
&lt;br /&gt;
Example script: relative change in eddy viscosity&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
s = 'EV'&lt;br /&gt;
a = inputs[0].PointData[s]&lt;br /&gt;
b = inputs[1].PointData[s]&lt;br /&gt;
tmp = (a - b) / b&lt;br /&gt;
output.PointData.append(tmp, 'delta')&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Stepping Forward in Time==&lt;br /&gt;
Let's say you load your ParaView file (for PHASTA, *.pht or *.phts) as &amp;lt;code&amp;gt;phtFile&amp;lt;/code&amp;gt;. The times associated with each timestep referenced by the file are given by &amp;lt;code&amp;gt;phtFile.TimestepValues&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To step forward in time, set the time of the current view. For example, &amp;lt;code&amp;gt;GetActiveView().ViewTime = myTime&amp;lt;/code&amp;gt;. When you save an image, all visible sources will be rendered using data from that time (&amp;lt;code&amp;gt;myTime&amp;lt;/code&amp;gt;).&lt;br /&gt;
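For example, a loop that renders every timestep (a sketch; it assumes the reader &amp;lt;code&amp;gt;phtFile&amp;lt;/code&amp;gt; from above is already loaded and that &amp;lt;code&amp;gt;paraview.simple&amp;lt;/code&amp;gt; is in scope, as in ParaView's built-in Python shell):&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
view = GetActiveView()&lt;br /&gt;
for i, t in enumerate(phtFile.TimestepValues):&lt;br /&gt;
    view.ViewTime = t&lt;br /&gt;
    SaveScreenshot('frame_%04d.png' % i, view)&amp;lt;/nowiki&amp;gt;&lt;br /&gt;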
&lt;br /&gt;
==Saving Data at a Given Time==&lt;br /&gt;
It turns out the data fields you access through Python are not updated simply by setting the active view's &amp;lt;code&amp;gt;ViewTime&amp;lt;/code&amp;gt; property. For example, if you're trying to extract data from a probe location or along a line, simply saving that pipeline object does not reflect the updated timestep. Instead, you need to call the &amp;lt;code&amp;gt;UpdatePipeline(time=myTime)&amp;lt;/code&amp;gt; method of your &amp;lt;code&amp;gt;ProbeLocation&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Slice&amp;lt;/code&amp;gt;, or other pipeline object. This method is inherited from &amp;lt;code&amp;gt;SourceProxy&amp;lt;/code&amp;gt;.&lt;br /&gt;
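For example, to write out probe data at each timestep (a sketch; it assumes a &amp;lt;code&amp;gt;ProbeLocation&amp;lt;/code&amp;gt; pipeline object named &amp;lt;code&amp;gt;probe&amp;lt;/code&amp;gt; already exists and that &amp;lt;code&amp;gt;paraview.simple&amp;lt;/code&amp;gt; is in scope):&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
for t in phtFile.TimestepValues:&lt;br /&gt;
    probe.UpdatePipeline(time=t)  # refresh the data; setting ViewTime alone is not enough&lt;br /&gt;
    SaveData('probe_t%g.csv' % t, proxy=probe)&amp;lt;/nowiki&amp;gt;&lt;br /&gt;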
&lt;br /&gt;
==Visualizing Vortices==&lt;br /&gt;
Two methods for visualizing vortices are taking iso-surfaces of either Q-criterion or the Omega-criterion (Liu 2016, &amp;quot;New omega vortex identification method&amp;quot;). To compute both, apply &amp;quot;Gradient of Unstructured Dataset&amp;quot; to the velocity field in ParaView, and name the result &amp;lt;code&amp;gt;g&amp;lt;/code&amp;gt;. Then enter one of the following expressions into a Calculator, and finally apply a Contour at the appropriate value:&lt;br /&gt;
; Q-criterion&lt;br /&gt;
: &amp;lt;code&amp;gt;-g_1*g_3-g_2*g_6-g_5*g_7+g_4*g_8+g_0*(g_4+g_8)&amp;lt;/code&amp;gt;&lt;br /&gt;
: Contour at whatever value shows the features you want; try 1e7 as a starting point, but Q has a huge range of magnitudes&lt;br /&gt;
; Omega-criterion&lt;br /&gt;
: &amp;lt;code&amp;gt;(g_1^2 + g_2^2 - 2*g_1*g_3 + g_3^2 + g_5^2 - 2*g_2*g_6 + g_6^2 - 2*g_5*g_7 + g_7^2) / (2 * (g_0^2 + g_1^2 + g_2^2 + g_3^2 + g_4^2 + g_5^2 + g_6^2 + g_7^2 + g_8^2 + 1e-14))&amp;lt;/code&amp;gt;&lt;br /&gt;
: Contour at 0.52 as recommended in the Liu (2016) paper to start; Omega is always between 0 and 1&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ParaView/Tricks&amp;diff=620</id>
		<title>ParaView/Tricks</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ParaView/Tricks&amp;diff=620"/>
				<updated>2017-11-28T01:00:49Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains a list of tips and tricks for ParaView, which may be useful for others.&lt;br /&gt;
&lt;br /&gt;
==Difference Between Two Timesteps==&lt;br /&gt;
To look at the &amp;quot;delta&amp;quot; between two timesteps, follow these steps.&lt;br /&gt;
# Load your data file twice, so your pipeline browser has two separate instances of &amp;quot;flow.pht&amp;quot; or whatever file you open.&lt;br /&gt;
# Apply one Force Time filter to each input file, and specify the physical time (not time index) for each. This will force the time for each and make them independent of the global ParaView time control in the upper toolbar.&lt;br /&gt;
# Select both Force Time filters simultaneously, and apply a Programmable Filter. Your pipeline browser will now show arrows representing multiple inputs to the single programmable filter.&lt;br /&gt;
# Populate the programmable filter with the following code:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;tmp = inputs[0].PointData['myAvailableFieldName'] - inputs[1].PointData['myAvailableFieldName']&amp;lt;br&amp;gt;output.PointData.append(tmp, 'delta of myAvailableFieldName')&amp;lt;/code&amp;gt;&lt;br /&gt;
# You can now view the field, &amp;quot;delta of myAvailableFieldName,&amp;quot; and change the time steps it uses to compute the time-delta by modifying the Force Time filter properties.&lt;br /&gt;
&lt;br /&gt;
Example script: relative change in eddy viscosity&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
s = 'EV'&lt;br /&gt;
a = inputs[0].PointData[s]&lt;br /&gt;
b = inputs[1].PointData[s]&lt;br /&gt;
tmp = (a - b) / b&lt;br /&gt;
output.PointData.append(tmp, 'delta')&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Stepping Forward in Time==&lt;br /&gt;
Let's say you load your ParaView file (for PHASTA, *.pht or *.phts) as &amp;lt;code&amp;gt;phtFile&amp;lt;/code&amp;gt;. The times associated with each timestep referenced by the file are given by &amp;lt;code&amp;gt;phtFile.TimestepValues&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To step forward in time, set the time of the current view. For example, &amp;lt;code&amp;gt;GetActiveView().ViewTime = myTime&amp;lt;/code&amp;gt;. When you save an image, all visible sources will be rendered using data from that time (&amp;lt;code&amp;gt;myTime&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
==Saving Data at a Given Time==&lt;br /&gt;
It turns out the data fields you access through Python are not updated simply by setting the active view's &amp;lt;code&amp;gt;ViewTime&amp;lt;/code&amp;gt; property. For example, if you're trying to extract data from a probe location or along a line, simply saving that pipeline object does not reflect the updated timestep. Instead, you need to call the &amp;lt;code&amp;gt;UpdatePipeline(time=myTime)&amp;lt;/code&amp;gt; method of your &amp;lt;code&amp;gt;ProbeLocation&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Slice&amp;lt;/code&amp;gt;, or other pipeline object. This method is inherited from &amp;lt;code&amp;gt;SourceProxy&amp;lt;/code&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ParaView/Tricks&amp;diff=619</id>
		<title>ParaView/Tricks</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ParaView/Tricks&amp;diff=619"/>
				<updated>2017-08-23T20:59:53Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Difference Between Two Timesteps */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains a list of tips and tricks for ParaView, which may be useful for others.&lt;br /&gt;
&lt;br /&gt;
==Difference Between Two Timesteps==&lt;br /&gt;
To look at the &amp;quot;delta&amp;quot; between two timesteps, follow these steps.&lt;br /&gt;
# Load your data file twice, so your pipeline browser has two separate instances of &amp;quot;flow.pht&amp;quot; or whatever file you open.&lt;br /&gt;
# Apply one Force Time filter to each input file, and specify the physical time (not time index) for each. This will force the time for each and make them independent of the global ParaView time control in the upper toolbar.&lt;br /&gt;
# Select both Force Time filters simultaneously, and apply a Programmable Filter. Your pipeline browser will now show arrows representing multiple inputs to the single programmable filter.&lt;br /&gt;
# Populate the programmable filter with the following code:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;tmp = inputs[0].PointData['myAvailableFieldName'] - inputs[1].PointData['myAvailableFieldName']&amp;lt;br&amp;gt;output.PointData.append(tmp, 'delta of myAvailableFieldName')&amp;lt;/code&amp;gt;&lt;br /&gt;
# You can now view the field, &amp;quot;delta of myAvailableFieldName,&amp;quot; and change the time steps it uses to compute the time-delta by modifying the Force Time filter properties.&lt;br /&gt;
&lt;br /&gt;
Example script: relative change in eddy viscosity&amp;lt;br&amp;gt;&lt;br /&gt;
s = 'EV'&amp;lt;br&amp;gt;&lt;br /&gt;
a = inputs[0].PointData[s]&amp;lt;br&amp;gt;&lt;br /&gt;
b = inputs[1].PointData[s]&amp;lt;br&amp;gt;&lt;br /&gt;
tmp = (a-b)/b&amp;lt;br&amp;gt;&lt;br /&gt;
output.PointData.append(tmp, 'delta')&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ParaView/Tricks&amp;diff=618</id>
		<title>ParaView/Tricks</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ParaView/Tricks&amp;diff=618"/>
				<updated>2017-08-23T20:59:13Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Difference Between Two Timesteps */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains a list of tips and tricks for ParaView, which may be useful for others.&lt;br /&gt;
&lt;br /&gt;
==Difference Between Two Timesteps==&lt;br /&gt;
To look at the &amp;quot;delta&amp;quot; between two timesteps, follow these steps.&lt;br /&gt;
# Load your data file twice, so your pipeline browser has two separate instances of &amp;quot;flow.pht&amp;quot; or whatever file you open.&lt;br /&gt;
# Apply one Force Time filter to each input file, and specify the physical time (not time index) for each. This will force the time for each and make them independent of the global ParaView time control in the upper toolbar.&lt;br /&gt;
# Select both Force Time filters simultaneously, and apply a Programmable Filter. Your pipeline browser will now show arrows representing multiple inputs to the single programmable filter.&lt;br /&gt;
# Populate the programmable filter with the following code:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;tmp = inputs[0].PointData['myAvailableFieldName'] - inputs[1].PointData['myAvailableFieldName']&amp;lt;br&amp;gt;output.PointData.append(tmp, 'delta of myAvailableFieldName')&amp;lt;/code&amp;gt;&lt;br /&gt;
# You can now view the field, &amp;quot;delta of myAvailableFieldName,&amp;quot; and change the time steps it uses to compute the time-delta by modifying the Force Time filter properties.&lt;br /&gt;
&lt;br /&gt;
Example script: relative change in eddy viscosity&lt;br /&gt;
s = 'EV'&lt;br /&gt;
a = inputs[0].PointData[s]&lt;br /&gt;
b = inputs[1].PointData[s]&lt;br /&gt;
tmp = (a-b)/b&lt;br /&gt;
output.PointData.append(tmp, 'delta')&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ParaView/Tricks&amp;diff=617</id>
		<title>ParaView/Tricks</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ParaView/Tricks&amp;diff=617"/>
				<updated>2017-08-15T17:25:21Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Difference Between Two Timesteps */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains a list of tips and tricks for ParaView, which may be useful for others.&lt;br /&gt;
&lt;br /&gt;
==Difference Between Two Timesteps==&lt;br /&gt;
To look at the &amp;quot;delta&amp;quot; between two timesteps, follow these steps.&lt;br /&gt;
# Load your data file twice, so your pipeline browser has two separate instances of &amp;quot;flow.pht&amp;quot; or whatever file you open.&lt;br /&gt;
# Apply one Force Time filter to each input file, and specify the physical time (not time index) for each. This will force the time for each and make them independent of the global ParaView time control in the upper toolbar.&lt;br /&gt;
# Select both Force Time filters simultaneously, and apply a Programmable Filter. Your pipeline browser will now show arrows representing multiple inputs to the single programmable filter.&lt;br /&gt;
# Populate the programmable filter with the following code:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;tmp = inputs[0].PointData['myAvailableFieldName'] - inputs[1].PointData['myAvailableFieldName']&amp;lt;br&amp;gt;output.PointData.append(tmp, 'delta of myAvailableFieldName')&amp;lt;/code&amp;gt;&lt;br /&gt;
# You can now view the field, &amp;quot;delta of myAvailableFieldName,&amp;quot; and change the time steps it uses to compute the time-delta by modifying the Force Time filter properties.&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ParaView/Tricks&amp;diff=616</id>
		<title>ParaView/Tricks</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ParaView/Tricks&amp;diff=616"/>
				<updated>2017-08-15T17:23:07Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: Created page with &amp;quot;This page contains a list of tips and tricks for ParaView, which may be useful for others.  ==Difference Between Two Timesteps== To look at the &amp;quot;delta&amp;quot; between two timesteps,...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains a list of tips and tricks for ParaView, which may be useful for others.&lt;br /&gt;
&lt;br /&gt;
==Difference Between Two Timesteps==&lt;br /&gt;
To look at the &amp;quot;delta&amp;quot; between two timesteps, follow these steps.&lt;br /&gt;
# Load your data file twice, so your pipeline browser has two separate instances of &amp;quot;flow.pht&amp;quot; or whatever file you open.&lt;br /&gt;
# Apply one Force Time filter to each input file, and specify the physical time (not time index) for each. This will force the time for each and make them independent of the global ParaView time control shown in the upper toolbar.&lt;br /&gt;
# Select both Force Time filters simultaneously, and apply a Programmable Filter. Your pipeline browser will now show arrows representing multiple inputs to the single programmable filter.&lt;br /&gt;
# Populate the programmable filter with the following code:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;tmp = inputs[0].PointData['myAvailableFieldName'] - inputs[1].PointData['myAvailableFieldName']&amp;lt;br&amp;gt;output.PointData.append(tmp, 'delta of myAvailableFieldName')&amp;lt;/code&amp;gt;&lt;br /&gt;
# You can now view the field, &amp;quot;delta of myAvailableFieldName,&amp;quot; and change the time steps it uses to compute the time-delta by modifying the Force Time filter properties.&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Building_SCOREC_Core&amp;diff=613</id>
		<title>Building SCOREC Core</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Building_SCOREC_Core&amp;diff=613"/>
				<updated>2017-04-26T17:18:15Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Acquiring Source Code */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== Acquiring Source Code ==&lt;br /&gt;
&lt;br /&gt;
RPI maintains SCOREC Core, a set of C/C++ libraries for unstructured meshing and simulation pre-processing. The code base lives on GitHub at https://github.com/SCOREC/core. To clone the source code, navigate to the desired directory and run&lt;br /&gt;
  git clone https://github.com/SCOREC/core&lt;br /&gt;
&lt;br /&gt;
General build instructions can be found at&lt;br /&gt;
  https://github.com/SCOREC/core/wiki/General-Build-instructions&lt;br /&gt;
&lt;br /&gt;
== Building on Viz Nodes ==&lt;br /&gt;
&lt;br /&gt;
As of 2017-04-26, SCOREC-core lives in&lt;br /&gt;
  /projects/tools/SCOREC-core/build-viz003&lt;br /&gt;
&lt;br /&gt;
It was built using&lt;br /&gt;
  soft add +openmpi-gnu-2.1.0-gnu49-thread&lt;br /&gt;
  soft add +simmodsuite-11.0-170405dev&lt;br /&gt;
  (no soft add for gcc, meaning the default gcc 4.9.2 is used)&lt;br /&gt;
&lt;br /&gt;
If you want to build your own, do the following.&lt;br /&gt;
# Go to the directory containing your SCOREC/core checkout (clone it first if needed) and work from the directory above it&lt;br /&gt;
# soft add the necessary dependencies (see above)&lt;br /&gt;
# cp -r ~kjansen/compilation .&lt;br /&gt;
# mkdir build (or whatever descriptive name you like)&lt;br /&gt;
# cd build&lt;br /&gt;
# . ../compilation/env_ucb_gnu_dev17_rhel7&lt;br /&gt;
# ../compilation/doConfigure-dev17-rhel7 (this step has OS version dependencies)&lt;br /&gt;
# make -j&lt;br /&gt;
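Collected as a shell sketch of the steps above (same paths as in the list; the first line is a placeholder, so adjust it to wherever your SCOREC/core checkout lives):&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
cd /path/above/your/core/checkout&lt;br /&gt;
soft add +openmpi-gnu-2.1.0-gnu49-thread&lt;br /&gt;
soft add +simmodsuite-11.0-170405dev&lt;br /&gt;
cp -r ~kjansen/compilation .&lt;br /&gt;
mkdir build&lt;br /&gt;
cd build&lt;br /&gt;
. ../compilation/env_ucb_gnu_dev17_rhel7&lt;br /&gt;
../compilation/doConfigure-dev17-rhel7&lt;br /&gt;
make -j&amp;lt;/nowiki&amp;gt;&lt;br /&gt;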
&lt;br /&gt;
=== Note ===&lt;br /&gt;
In general, you have to rebuild the partition wrapper to match the MPI version; Ben did that for openmpi210 on rhel7. This is a fairly easy process (e.g., cp -r /usr/local/simmetrix/simmodsuite/11.0-170405dev/code/PartitionWrapper tmpdir; cd tmpdir; make -f Makefile.custom PARALLEL=omp210 CC=mpicc CXX=mpicxx -j 8), but the tricky bit is that you have to copy the resulting wrapper to a place where the SCOREC cmake tools will find it. For now, on viz003, Ben has copied it to the location of the rest of the Simmetrix libraries and named it with an openmpi210 extension, which is matched on line 15 of ../compilation/doConfigure-dev17-rhel7. This works fine if you are the person who copied or set up the libraries, but less well if not.&lt;br /&gt;
&lt;br /&gt;
== Building on Summit ==&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Building_SCOREC_Core&amp;diff=612</id>
		<title>Building SCOREC Core</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Building_SCOREC_Core&amp;diff=612"/>
				<updated>2017-04-26T17:17:43Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Building on Viz Nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== Acquiring Source Code ==&lt;br /&gt;
&lt;br /&gt;
RPI maintains SCOREC Core, a set of C/C++ libraries for unstructured meshing and simulation pre-processing. The code base lives on GitHub at https://github.com/SCOREC/core. To clone the source code, navigate to the desired directory and run&lt;br /&gt;
  git clone https://github.com/SCOREC/core&lt;br /&gt;
&lt;br /&gt;
== Building on Viz Nodes ==&lt;br /&gt;
&lt;br /&gt;
As of 2017-04-26, SCOREC-core lives in&lt;br /&gt;
  /projects/tools/SCOREC-core/build-viz003&lt;br /&gt;
&lt;br /&gt;
It was built using&lt;br /&gt;
  soft add +openmpi-gnu-2.1.0-gnu49-thread&lt;br /&gt;
  soft add +simmodsuite-11.0-170405dev&lt;br /&gt;
  (no soft add for gcc, meaning the default gcc 4.9.2 is used)&lt;br /&gt;
&lt;br /&gt;
If you want to build your own, do the following.&lt;br /&gt;
# Go to the directory containing your SCOREC/core checkout (clone it first if needed) and work from the directory above it&lt;br /&gt;
# soft add the necessary dependencies (see above)&lt;br /&gt;
# cp -r ~kjansen/compilation .&lt;br /&gt;
# mkdir build (or whatever descriptive name you like)&lt;br /&gt;
# cd build&lt;br /&gt;
# . ../compilation/env_ucb_gnu_dev17_rhel7&lt;br /&gt;
# ../compilation/doConfigure-dev17-rhel7 (this step has OS version dependencies)&lt;br /&gt;
# make -j&lt;br /&gt;
&lt;br /&gt;
=== Note ===&lt;br /&gt;
In general, you have to rebuild the partition wrapper to match the MPI version; Ben did that for openmpi210 on rhel7. This is a fairly easy process (e.g., cp -r /usr/local/simmetrix/simmodsuite/11.0-170405dev/code/PartitionWrapper tmpdir; cd tmpdir; make -f Makefile.custom PARALLEL=omp210 CC=mpicc CXX=mpicxx -j 8), but the tricky bit is that you have to copy the resulting wrapper to a place where the SCOREC cmake tools will find it. For now, on viz003, Ben has copied it to the location of the rest of the Simmetrix libraries and named it with an openmpi210 extension, which is matched on line 15 of ../compilation/doConfigure-dev17-rhel7. This works fine if you are the person who copied or set up the libraries, but less well if not.&lt;br /&gt;
&lt;br /&gt;
== Building on Summit ==&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Building_SCOREC_core&amp;diff=611</id>
		<title>Building SCOREC core</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Building_SCOREC_core&amp;diff=611"/>
				<updated>2017-04-25T17:58:21Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: Skinnerr moved page Building SCOREC core to Building SCOREC Core: captialization&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Building SCOREC Core]]&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Building_SCOREC_Core&amp;diff=610</id>
		<title>Building SCOREC Core</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Building_SCOREC_Core&amp;diff=610"/>
				<updated>2017-04-25T17:58:21Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: Skinnerr moved page Building SCOREC core to Building SCOREC Core: captialization&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== Acquiring Source Code ==&lt;br /&gt;
&lt;br /&gt;
RPI maintains SCOREC Core, a set of C/C++ libraries for unstructured meshing and simulation pre-processing. The code base lives on GitHub at https://github.com/SCOREC/core. To clone the source code, navigate to the desired directory and run&lt;br /&gt;
  git clone https://github.com/SCOREC/core&lt;br /&gt;
&lt;br /&gt;
== Building on Viz Nodes ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Building on Summit ==&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Building_SCOREC_Core&amp;diff=609</id>
		<title>Building SCOREC Core</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Building_SCOREC_Core&amp;diff=609"/>
				<updated>2017-04-25T17:57:32Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: initial draft of page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== Acquiring Source Code ==&lt;br /&gt;
&lt;br /&gt;
RPI maintains SCOREC Core, a set of C/C++ libraries for unstructured meshing and simulation pre-processing. The code base lives on GitHub at https://github.com/SCOREC/core. To clone the source code, navigate to the desired directory and run&lt;br /&gt;
  git clone https://github.com/SCOREC/core&lt;br /&gt;
&lt;br /&gt;
== Building on Viz Nodes ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Building on Summit ==&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=SimModeler&amp;diff=605</id>
		<title>SimModeler</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=SimModeler&amp;diff=605"/>
				<updated>2017-03-10T20:31:17Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Boundary conditions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
SimModeler is a model creation program from Simmetrix.  It takes the mesh and geometric model and creates the input files for PHASTA.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Running ==&lt;br /&gt;
To run SimModeler, first connect via VNC, then use vglconnect to connect to one of the compute machines:&lt;br /&gt;
&lt;br /&gt;
 vglconnect -s viz001&lt;br /&gt;
&lt;br /&gt;
Add the desired version of SimModeler to your environment (the below example will get the &amp;quot;default&amp;quot; version):&lt;br /&gt;
&lt;br /&gt;
 soft add +simmodeler&lt;br /&gt;
&lt;br /&gt;
and launch the GUI:&lt;br /&gt;
&lt;br /&gt;
 vglrun simmodeler&lt;br /&gt;
&lt;br /&gt;
== Converting old files ==&lt;br /&gt;
This is a guide for converting old files (parasolid and .spj) to the new format (.smd).&lt;br /&gt;
&lt;br /&gt;
After connecting to one of the compute machines, add the suite of tools for SimModeler to your environment:&lt;br /&gt;
&lt;br /&gt;
 soft add +simmodsuite&lt;br /&gt;
&lt;br /&gt;
From your case, make a new directory and copy your parasolid file (.x_t or .xmt_txt) and your .spj file into it. Rename the parasolid file to geom.xmt_txt and the .spj file to geom.spj, if they aren't already named that way. Then, from the directory just created (which now holds geom.xmt_txt and geom.spj), run:&lt;br /&gt;
&lt;br /&gt;
 /users/matthb2/simmodelerconvert/testConvert &lt;br /&gt;
&lt;br /&gt;
Your directory now contains two new files: model.smd and model.x_t&lt;br /&gt;
&lt;br /&gt;
== Creating new files ==&lt;br /&gt;
&lt;br /&gt;
Loading in geometry is about as intuitive as it possibly can be. Go to File -&amp;gt; Import Geometry, browse to the appropriate model, and select Open. Once the model is open, it is possible both to mesh it and to create boundary conditions for it. Because BLMesher is presently the primary meshing tool, it is only necessary to use SimModeler to create boundary conditions. Go to Analysis -&amp;gt; Select Solver, and select phasta. After selecting phasta, the Analysis Attributes option under Analysis becomes valid; clicking it brings up the corresponding window. From this window, it is possible to apply boundary conditions and initial conditions by clicking the small button next to the drop-down menu [add picture]. Note that you must also double-click on &amp;quot;problem definition&amp;quot;, which allows you to name the case. Later post-processing expects the name &amp;quot;geom&amp;quot;, so always name it so.&lt;br /&gt;
&lt;br /&gt;
== Boundary conditions ==&lt;br /&gt;
&lt;br /&gt;
Common boundary conditions include:&lt;br /&gt;
&lt;br /&gt;
*comp3 - Specifies a 3D velocity vector&lt;br /&gt;
*comp1 - Specifies a 3D vector along which the velocity is constrained. Velocity normal to this vector is not directly affected. This is useful for creating slip walls and mimicking free-stream regions. &lt;br /&gt;
*temperature - Sets the temperature of the wall. This is only needed for compressible cases. &lt;br /&gt;
*scalar_1 - Sets the scalar_1 / eddy viscosity to apply at a wall. For the Spalart Allmaras models, scalar_1 should be zero at physical walls where a boundary layer develops and 3 to 5 times the molecular viscosity at free stream boundaries (http://turbmodels.larc.nasa.gov/spalart.html)&lt;br /&gt;
*surf ID - Associates a number with one or more faces. This can then be read by Phasta and used to apply more complicated boundary conditions in software. &lt;br /&gt;
*natural pressure - Apply a mean pressure over a surface. The pressure at any particular point is still allowed to vary (someone verify). &lt;br /&gt;
*traction vector - ??. The zero vector is typically applied at outlet. &lt;br /&gt;
*heat flux - Specifies the rate at which heat is injected / removed (not sure which one) into / from the fluid domain. The value is almost always set to zero to create a perfectly insulated boundary. &lt;br /&gt;
*scalar_1 flux - set the flux of scalar_1 / eddy viscosity into / out of the domain (not sure which one). This is typically only used at outlets where high values of eddy viscosity have been convected downstream of turbulent walls. The value is almost always set to zero. &lt;br /&gt;
*turbulence wall - Indicates that a surface is to be included in the calculation of d2wall files (verify) which are then used by the Spalart Allmaras turbulence model to generate more physical turbulent kinetic energy production / dissipation budgets.&lt;br /&gt;
&lt;br /&gt;
=== Incompressible ===&lt;br /&gt;
&lt;br /&gt;
Common BCs used for an incompressible case with the S-A turbulence model&lt;br /&gt;
&lt;br /&gt;
*Initial conditions&lt;br /&gt;
**initial velocity (nonzero, typically small)&lt;br /&gt;
**initial scalar_1 (3-5 times free-stream molecular viscosity)&lt;br /&gt;
*Inflow&lt;br /&gt;
**Comp 3&lt;br /&gt;
**scalar_1 (also 3-5 times free-stream molecular viscosity)&lt;br /&gt;
*Outflow&lt;br /&gt;
**natural pressure (zero)&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
*Solid physical walls&lt;br /&gt;
**Comp 3 (zero vector)&lt;br /&gt;
**scalar_1 (zero)&lt;br /&gt;
**turbulence wall (value unimportant; use zero)&lt;br /&gt;
*Impermeable slip walls&lt;br /&gt;
**Comp 1 (zero in wall-normal direction)&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
&lt;br /&gt;
=== Compressible ===&lt;br /&gt;
&lt;br /&gt;
Common BCs used for a compressible case with the S-A turbulence model&lt;br /&gt;
&lt;br /&gt;
*Initial conditions&lt;br /&gt;
**initial velocity (nonzero, typically small)&lt;br /&gt;
**initial scalar_1 (3-5 times free-stream molecular viscosity)&lt;br /&gt;
**initial pressure&lt;br /&gt;
**initial temperature&lt;br /&gt;
&lt;br /&gt;
*Inflow&lt;br /&gt;
**Comp 3&lt;br /&gt;
**scalar_1 (also 3-5 times free-stream molecular viscosity)&lt;br /&gt;
**temperature&lt;br /&gt;
&lt;br /&gt;
*Outflow&lt;br /&gt;
**(?) pressure or natural pressure (zero)&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
**heat flux (zero)&lt;br /&gt;
&lt;br /&gt;
*Solid physical walls&lt;br /&gt;
**Comp 3 (zero vector)&lt;br /&gt;
**scalar_1 (zero)&lt;br /&gt;
**turbulence wall (value unimportant; use zero)&lt;br /&gt;
**temperature or heat flux&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=SimModeler&amp;diff=604</id>
		<title>SimModeler</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=SimModeler&amp;diff=604"/>
				<updated>2017-03-10T20:20:59Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: add common boundary conditions for incompressible case&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
SimModeler is a model creation program from Simmetrix.  It takes the mesh and geometric model and creates the input files for PHASTA.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Running ==&lt;br /&gt;
To run SimModeler, first connect via VNC, then use vglconnect to connect to one of the compute machines:&lt;br /&gt;
&lt;br /&gt;
 vglconnect -s viz001&lt;br /&gt;
&lt;br /&gt;
Add the desired version of SimModeler to your environment (the below example will get the &amp;quot;default&amp;quot; version):&lt;br /&gt;
&lt;br /&gt;
 soft add +simmodeler&lt;br /&gt;
&lt;br /&gt;
and launch the GUI:&lt;br /&gt;
&lt;br /&gt;
 vglrun simmodeler&lt;br /&gt;
&lt;br /&gt;
== Converting old files ==&lt;br /&gt;
This is a guide for converting old files (parasolid and .spj) to the new format (.smd).&lt;br /&gt;
&lt;br /&gt;
After connecting to one of the compute machines, add the suite of tools for SimModeler to your environment:&lt;br /&gt;
&lt;br /&gt;
 soft add +simmodsuite&lt;br /&gt;
&lt;br /&gt;
From your case, make a new directory and copy your parasolid (.x_t or .xmt_txt) and .spj files into it. Rename the parasolid file to geom.xmt_txt and the .spj file to geom.spj, if they aren't already named that way. Then, from the directory you just created (which now holds geom.xmt_txt and geom.spj), run: &lt;br /&gt;
&lt;br /&gt;
 /users/matthb2/simmodelerconvert/testConvert &lt;br /&gt;
&lt;br /&gt;
Your directory now contains two new files: model.smd and model.x_t&lt;br /&gt;
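As a sketch, the file preparation above looks like the following shell session (the directory name and original file names are hypothetical placeholders; testConvert itself only exists on the lab machines, so it is left as a comment):&lt;br /&gt;

```shell
# Sketch of the file preparation described above.
# "mycase.x_t" and "mycase.spj" stand in for your real parasolid and .spj files.
set -e
mkdir -p convert_case
cd convert_case
touch mycase.x_t mycase.spj          # in practice, cp your real files here
mv mycase.x_t geom.xmt_txt           # testConvert expects these fixed names
mv mycase.spj geom.spj
ls geom.xmt_txt geom.spj
# now run the converter (lab machines only):
#   /users/matthb2/simmodelerconvert/testConvert
# it writes model.smd and model.x_t into the current directory
```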
&lt;br /&gt;
== Creating new files ==&lt;br /&gt;
&lt;br /&gt;
Loading in geometry is about as intuitive as it can be. Go to File -&amp;gt; Import Geometry, browse to the appropriate model, and select Open. Once the model is open, it is possible both to mesh it and to create boundary conditions for it. Because BLMesher is presently the primary meshing tool, it is only necessary to use SimModeler to create boundary conditions. Go to Analysis -&amp;gt; Select Solver, and select phasta. After selecting phasta, the Analysis Attributes option under Analysis becomes valid. Clicking it brings up the corresponding window. From this window, it is possible to apply boundary conditions and initial conditions by clicking the small button next to the drop-down menu [add picture]. Note that you must also double-click on &amp;quot;problem definition&amp;quot;, which allows you to name the case. Later post-processing expects the name &amp;quot;geom&amp;quot;, so always use that name.&lt;br /&gt;
&lt;br /&gt;
== Boundary conditions ==&lt;br /&gt;
&lt;br /&gt;
Common boundary conditions include:&lt;br /&gt;
&lt;br /&gt;
*comp3 - Specifies a 3D velocity vector&lt;br /&gt;
*comp1 - Specifies a 3D vector along which the velocity is constrained. Velocity normal to this vector is not directly affected. This is useful for creating slip walls and mimicking free-stream regions. &lt;br /&gt;
*temperature - Sets the temperature of the wall. This is only needed for compressible cases. &lt;br /&gt;
*scalar_1 - Sets the scalar_1 / eddy viscosity to apply at a wall. For the Spalart-Allmaras model, scalar_1 should be zero at physical walls where a boundary layer develops, and 3 to 5 times the molecular viscosity at free-stream boundaries (http://turbmodels.larc.nasa.gov/spalart.html)&lt;br /&gt;
*surf ID - Associates a number with one or more faces. This number can then be read by PHASTA and used to apply more complicated boundary conditions in software. &lt;br /&gt;
*natural pressure - Applies a mean pressure over a surface. The pressure at any particular point is still allowed to vary (someone verify). &lt;br /&gt;
*traction vector - ??. The zero vector is typically applied at the outlet. &lt;br /&gt;
*heat flux - Specifies the rate at which heat is injected into / removed from (not sure which one) the fluid domain. The value is almost always set to zero to create a perfectly insulated boundary. &lt;br /&gt;
*scalar_1 flux - Sets the flux of scalar_1 / eddy viscosity into / out of the domain (not sure which one). This is typically only used at outlets where high values of eddy viscosity have been convected downstream of turbulent walls. The value is almost always set to zero. &lt;br /&gt;
*turbulence wall - Indicates that a surface is to be included in the calculation of d2wall files (verify), which are then used by the Spalart-Allmaras turbulence model to generate more physical turbulent kinetic energy production / dissipation budgets.&lt;br /&gt;
&lt;br /&gt;
=== Incompressible ===&lt;br /&gt;
&lt;br /&gt;
Common BCs used for an incompressible case with the S-A turbulence model&lt;br /&gt;
&lt;br /&gt;
*Initial conditions&lt;br /&gt;
**initial velocity (nonzero, typically small)&lt;br /&gt;
**initial scalar_1 (3-5 times free-stream molecular viscosity)&lt;br /&gt;
*Inflow&lt;br /&gt;
**Comp 3&lt;br /&gt;
**scalar_1 (also 3-5 times free-stream molecular viscosity)&lt;br /&gt;
*Outflow&lt;br /&gt;
**natural pressure (zero)&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;br /&gt;
*Solid physical walls&lt;br /&gt;
**Comp 3 (zero vector)&lt;br /&gt;
**scalar_1 (zero)&lt;br /&gt;
**turbulence wall (value unimportant; use zero)&lt;br /&gt;
*Impermeable slip walls&lt;br /&gt;
**Comp 1 (zero in wall-normal direction)&lt;br /&gt;
**scalar_1 flux (zero)&lt;br /&gt;
**traction vector (zero vector)&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Opening_a_Simmetrix_ticket&amp;diff=603</id>
		<title>Opening a Simmetrix ticket</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Opening_a_Simmetrix_ticket&amp;diff=603"/>
				<updated>2016-12-01T21:42:53Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To open a ticket with Simmetrix, use the following support form.&lt;br /&gt;
&lt;br /&gt;
 ====Simmetrix Support Form====&lt;br /&gt;
 &lt;br /&gt;
 Customer ID: jan001&lt;br /&gt;
 Platform: linux&lt;br /&gt;
 Simmetrix Product: MeshSim&lt;br /&gt;
 Product Version: 7.2 (edit this to whichever version you are using)&lt;br /&gt;
 Type [Question,Bug,Feature Request]: Question/Bug/Feature request&lt;br /&gt;
 Priority [High,Medium,Low]: High/Medium/Low&lt;br /&gt;
 Summary: (e.g. BL gets destroyed in initial meshing at some places.)&lt;br /&gt;
 &lt;br /&gt;
 Description:&lt;br /&gt;
 &lt;br /&gt;
 Here, put a brief description of the bug or question you have for Simmetrix.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can put the Summary above as the subject in your email. The email should be sent to &amp;quot;support@simmetrix.com&amp;quot;. Remember, for the email to be registered as a support ticket, the delimiter (====Simmetrix Support Form====) has to appear before all the content of the email and has to be exactly &lt;br /&gt;
 ====Simmetrix Support Form==== &lt;br /&gt;
&lt;br /&gt;
If this is not followed, you will get an email from Simmetrix saying &amp;quot;Delimiter not found&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
SimModeler:&lt;br /&gt;
&lt;br /&gt;
Finally, for issues generated in SimModeler, you need to save the model with each meshing case, and then send each *.smd and *_nat.x_t, indicating which ones work and which do not. They will be able to reproduce the issue internally.&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Opening_a_Simmetrix_ticket&amp;diff=602</id>
		<title>Opening a Simmetrix ticket</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Opening_a_Simmetrix_ticket&amp;diff=602"/>
				<updated>2016-12-01T21:42:42Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To open a ticket with Simmetrix, use the following support form.&lt;br /&gt;
&lt;br /&gt;
 ====Simmetrix Support Form====&lt;br /&gt;
 &lt;br /&gt;
 Customer ID: jan001&lt;br /&gt;
 Platform: linux&lt;br /&gt;
 Simmetrix Product: MeshSim&lt;br /&gt;
 Product Version: 7.2 (edit this to whichever version you are using)&lt;br /&gt;
 Type [Question,Bug,Feature Request]: Question/Bug/Feature request&lt;br /&gt;
 Priority [High,Medium,Low]: High/Medium/Low&lt;br /&gt;
 Summary: (e.g. BL gets destroyed in initial meshing at some places.)&lt;br /&gt;
 &lt;br /&gt;
 Description:&lt;br /&gt;
 &lt;br /&gt;
 Here, put a brief description of the bug or question you have for Simmetrix.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can put the Summary above as the subject in your email. The email should be sent to &amp;quot;support@simmetrix.com&amp;quot;. Remember, for the email to be registered as a support ticket, the delimiter (====Simmetrix Support Form====) has to appear before all the content of the email and has to be exactly &lt;br /&gt;
 ====Simmetrix Support Form==== &lt;br /&gt;
&lt;br /&gt;
If this is not followed, you will get an email from Simmetrix saying &amp;quot;Delimiter not found&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
SimModeler:&lt;br /&gt;
&lt;br /&gt;
Finally, for issues generated in SimModeler, you need to save the model with each meshing case, and then send each *.smd and *_nat.x_t, indicating which ones work and which do not. They will be able to reproduce the issue internally.&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Opening_a_Simmetrix_ticket&amp;diff=601</id>
		<title>Opening a Simmetrix ticket</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Opening_a_Simmetrix_ticket&amp;diff=601"/>
				<updated>2016-12-01T21:42:02Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To open a ticket with Simmetrix, use the following support form.&lt;br /&gt;
&lt;br /&gt;
 ====Simmetrix Support Form====&lt;br /&gt;
 &lt;br /&gt;
 Customer ID: jan001&lt;br /&gt;
 Platform: linux&lt;br /&gt;
 Simmetrix Product: MeshSim&lt;br /&gt;
 Product Version: 7.2 (edit this to whichever version you are using)&lt;br /&gt;
 Type [Question,Bug,Feature Request]: Question/Bug/Feature request&lt;br /&gt;
 Priority [High,Medium,Low]: High/Medium/Low&lt;br /&gt;
 Summary: (e.g. BL gets destroyed in initial meshing at some places.)&lt;br /&gt;
 &lt;br /&gt;
 Description:&lt;br /&gt;
 &lt;br /&gt;
 Here, put a brief description of the bug or question you have for Simmetrix.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can put the Summary above as the subject in your email. The email should be sent to &amp;quot;support@simmetrix.com&amp;quot;. Remember, for the email to be registered as a support ticket, the delimiter (====Simmetrix Support Form====) has to appear before all the content of the email and has to be exactly &lt;br /&gt;
 ====Simmetrix Support Form==== &lt;br /&gt;
&lt;br /&gt;
If this is not followed, you will get an email from Simmetrix saying &amp;quot;Delimiter not found&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Finally, for issues generated in SimModeler, you need to save the model with each meshing case, and then send each *.smd and *_nat.x_t, indicating which ones work and which do not. They will be able to reproduce the issue internally.&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=MATLAB&amp;diff=596</id>
		<title>MATLAB</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=MATLAB&amp;diff=596"/>
				<updated>2015-11-23T04:46:46Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
=General=&lt;br /&gt;
&lt;br /&gt;
Matlab is installed only on viz001, viz002, and viz003 (since we only have node-locked licenses). Several recent versions are installed in /opt/matlab. The full path to the latest version will look like:&lt;br /&gt;
&lt;br /&gt;
  /opt/matlab/R2015b/bin/matlab&lt;br /&gt;
&lt;br /&gt;
The licenses expire periodically, but can be renewed. If you need an older version and find that the license has expired, please email Benjamin.A.Matthews@Colorado.edu&lt;br /&gt;
&lt;br /&gt;
=OpenGL=&lt;br /&gt;
&lt;br /&gt;
Recent versions of MATLAB, starting with R2014b, automatically enable hardware-accelerated OpenGL rendering when available. If MATLAB has trouble finding the display driver, it will fall back to a software-based OpenGL implementation, which performs noticeably worse.&lt;br /&gt;
&lt;br /&gt;
To use hardware-accelerated OpenGL on the viz nodes, you must have executed&lt;br /&gt;
&lt;br /&gt;
  vglconnect -s viz003&lt;br /&gt;
  vglrun matlab -nosoftwareopengl&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=UNIX&amp;diff=593</id>
		<title>UNIX</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=UNIX&amp;diff=593"/>
				<updated>2015-11-20T03:09:54Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: broken link removed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Most of our systems (and general HPC resources) run some UNIX derivative. Much of the software is command line based, so it's worthwhile to learn the basics. &lt;br /&gt;
&lt;br /&gt;
There are tons of free resources on the web for getting started, for example this [http://learncodethehardway.org/cli/book book]. There should also be a &amp;quot;for dummies&amp;quot; book in the lab. &lt;br /&gt;
&lt;br /&gt;
As you find resources that are helpful, please update this page.&lt;br /&gt;
&lt;br /&gt;
== Connecting (SSH) ==&lt;br /&gt;
Windows:&lt;br /&gt;
[http://www.chiark.greenend.org.uk/~sgtatham/putty PuTTY SSH Client]&lt;br /&gt;
[http://winscp.net/eng/index.php WinSCP file transfer tool]&lt;br /&gt;
&lt;br /&gt;
MacOS and Linux users can use [http://openssh.org/ OpenSSH] on the command line (it generally comes with the OS).&lt;br /&gt;
&lt;br /&gt;
== Command Line Basics ==&lt;br /&gt;
&lt;br /&gt;
[https://www.rc.colorado.edu/support/tutorials/linux Slides and Video from CU's Research Computing group]&lt;br /&gt;
&lt;br /&gt;
[http://www.nixsrv.com/llthw &amp;quot;Learn Linux the Hard Way&amp;quot; (online book)]&lt;br /&gt;
== Graphical Sessions (VNC) ==&lt;br /&gt;
See [[VNC]]&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=VNC&amp;diff=592</id>
		<title>VNC</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=VNC&amp;diff=592"/>
				<updated>2015-11-20T02:55:07Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;VNC is a tool which projects a GUI session over the network. It may be useful if you want to use GUI tools remotely when X forwarding performs poorly. &lt;br /&gt;
&lt;br /&gt;
'''Warning: This is still being tested and should NOT be considered stable (portal0 may be rebooted without warning)'''&lt;br /&gt;
'''Warning: The vnc password is transmitted in clear text over the network and should not be considered secure'''&lt;br /&gt;
&lt;br /&gt;
Portal0 is designated to host VNC sessions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To start a VNC session:&lt;br /&gt;
&lt;br /&gt;
  ssh jumpgate-phasta.colorado.edu&lt;br /&gt;
  ssh portal0&lt;br /&gt;
  source /etc/profile&lt;br /&gt;
  start_vnc.sh&lt;br /&gt;
&lt;br /&gt;
Then follow the directions from start_vnc.sh. Make sure to remember your password and port number (59**) so that you can reuse your session.&lt;br /&gt;
&lt;br /&gt;
It's okay to leave your VNC session running on portal0. Next time you want to access your desktop, just ssh into jumpgate with a tunnel between portal0's vnc port (59**) and some port on your machine. Then use a VNC client to connect to the port on your machine.&lt;br /&gt;
&lt;br /&gt;
If, for some reason, you want to end your session and kill your virtual desktop, run&lt;br /&gt;
&lt;br /&gt;
  source /etc/profile&lt;br /&gt;
  stop_vnc.sh     # ONLY run this if you want to kill your virtual desktop.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== OpenGL == &lt;br /&gt;
&lt;br /&gt;
Portal0 is equipped with a VirtualGL install which will allow you to use OpenGL programs (which do not use pthreads)&lt;br /&gt;
&lt;br /&gt;
Simply wrap your OpenGL program with the &amp;quot;vglrun&amp;quot; command&lt;br /&gt;
  vglrun glxgears&lt;br /&gt;
&lt;br /&gt;
If you have access to another VirtualGL server you can connect to it first (Portal0 doesn't have a particularly fast graphics processor)&lt;br /&gt;
  vglconnect server&lt;br /&gt;
  vglrun glxgears&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that VGL uses a number of threads. If you have trouble with vglrun crashing with a message about Thread::Start() make sure you haven't set your stack size too large (remove any ulimit -s or ulimit -n calls from your shell start scripts)&lt;br /&gt;
&lt;br /&gt;
== Clients == &lt;br /&gt;
&lt;br /&gt;
Portal0 uses TurboVNC from the VirtualGL project, available from http://www.virtualgl.org/Downloads/TurboVNC&lt;br /&gt;
&lt;br /&gt;
Other VNC viewers will also work, such as TightVNC and RealVNC&lt;br /&gt;
&lt;br /&gt;
== Changing the VNC Password ==&lt;br /&gt;
&lt;br /&gt;
  /opt/tigervnc/bin/vncpasswd&lt;br /&gt;
&lt;br /&gt;
== View Only Mode == &lt;br /&gt;
&lt;br /&gt;
To share your desktop with another user in view only mode set a view only password &lt;br /&gt;
by running&lt;br /&gt;
  vncpasswd&lt;br /&gt;
&lt;br /&gt;
Have the other user connect in the same way you would but have them set their viewer to be in view only mode and use your view only password. Typically this is done as follows:&lt;br /&gt;
  vncviewer -viewonly&lt;br /&gt;
&lt;br /&gt;
== Windows == &lt;br /&gt;
The PuTTY SSH client can handle ssh tunneling on Windows based machines. You can download it here: http://www.chiark.greenend.org.uk/~sgtatham/putty/&lt;br /&gt;
&lt;br /&gt;
When you open PuTTY, enter jumpgate-phasta.colorado.edu in the Host Name box. Then click the + button next to SSH in the left pane (to expand the SSH tree node). Choose the Tunnels page. The start_vnc.sh script should tell you to run &amp;quot;ssh -L????:portal0:???? jumpgate-phasta.colorado.edu&amp;quot; on your machine. Enter the number between the -L and the first colon in the &amp;quot;Source port&amp;quot; box. Enter the rest in the Destination box (starting with portal0) and '''click the Add button'''. Then click &amp;quot;Open&amp;quot; and log in as normal. You will then be able to use a vncviewer as instructed by the script.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
The script says:&lt;br /&gt;
ssh -L5905:portal0:5900 jumpgate-phasta.colorado.edu&lt;br /&gt;
enter 5905 in the Source port box&lt;br /&gt;
enter portal0:5900 in the destination box.&lt;br /&gt;
&lt;br /&gt;
Try using this viewer utility&lt;br /&gt;
http://www.tightvnc.com/download/1.3.10/tightvnc-1.3.10_x86_viewer.zip&lt;br /&gt;
&lt;br /&gt;
'''Connecting to your VNC with PuTTY'''&lt;br /&gt;
&lt;br /&gt;
Once we SSH to jumpgate (on the default SSH port 22), our main desktop on portal0 can be accessed via a VNC session as follows.&lt;br /&gt;
&lt;br /&gt;
# The VNC server should already be running on portal0 using port 59xx.&lt;br /&gt;
## To check the port, on portal0 run &amp;lt;code&amp;gt;/opt/vnc_script/findsession.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
## To confirm the VNC server is running (and see port), run &amp;lt;code&amp;gt;ps aux | grep vnc&amp;lt;/code&amp;gt;&lt;br /&gt;
# Open PuTTY on your local machine.&lt;br /&gt;
# Under &amp;quot;Session&amp;quot;, SSH to &amp;lt;code&amp;gt;x@jumpgate-phasta.colorado.edu:22&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt; is your username on jumpgate, and &amp;lt;code&amp;gt;22&amp;lt;/code&amp;gt; is the standard SSH port.&lt;br /&gt;
# Under &amp;quot;Session&amp;quot;&amp;gt;&amp;quot;SSH&amp;quot;&amp;gt;&amp;quot;Tunnels&amp;quot;, select source port &amp;lt;code&amp;gt;59xx&amp;lt;/code&amp;gt; and destination port &amp;lt;code&amp;gt;portal0:59xx&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;xx&amp;lt;/code&amp;gt; is the two-digit number of your VNC session. Select destination &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; and click &amp;quot;Add&amp;quot;. We select &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; because we have a service (VNC Server) running on a machine (portal0) that can be reached from the remote machine (jumpgate), and we want to access it directly from the &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; machine.&lt;br /&gt;
# Confirm the dialog by clicking &amp;quot;Open&amp;quot;, thus establishing an SSH connection between localhost and jumpgate, and tunneling localhost:59xx to portal0:59xx via this connection.&lt;br /&gt;
# Open RealVNC, and connect to &amp;lt;code&amp;gt;localhost:xx&amp;lt;/code&amp;gt;, which is shorthand for &amp;lt;code&amp;gt;localhost:59xx&amp;lt;/code&amp;gt;. VNC ports are enumerated starting with &amp;lt;code&amp;gt;5901&amp;lt;/code&amp;gt;, so any two digit port &amp;lt;code&amp;gt;xx&amp;lt;/code&amp;gt; is assumed to be port &amp;lt;code&amp;gt;59xx&amp;lt;/code&amp;gt;.&lt;br /&gt;
# You should now have access to your desktop on portal0.&lt;br /&gt;
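The 59xx numbering used above is plain port arithmetic: VNC display :N listens on TCP port 5900 + N. A quick sketch (the display number 5 is just an example):&lt;br /&gt;

```shell
# VNC maps display :N to TCP port 5900+N, so display :5 uses port 5905
display=5
port=$((5900 + display))
echo "$port"    # prints 5905
```

This is also why a viewer accepts the shorthand localhost:xx for localhost:59xx.&lt;br /&gt;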
&lt;br /&gt;
== Web Based Viewer ==&lt;br /&gt;
&lt;br /&gt;
If you can't or don't want to install a VNC viewer you can use a Java based one. You will need a JVM and a Java browser plugin. You will also need the port that the start_vnc script assigned you to be free on your local computer&lt;br /&gt;
&lt;br /&gt;
Forward your session through jumpgate as before, adding a second port, 580n. For example, if the script tells you to run&lt;br /&gt;
&lt;br /&gt;
  ssh -L5902:portal0:5902 jumpgate-phasta.colorado.edu&lt;br /&gt;
you should instead run&lt;br /&gt;
  ssh -L5902:portal0:5902 -L5802:portal0:5802 jumpgate-phasta.colorado.edu&lt;br /&gt;
Then point your browser to http://localhost:5802 and log in with the password specified by the script when prompted. (Replace 2 with the value specified by the script.)&lt;br /&gt;
&lt;br /&gt;
== Changing the Size (Resolution) of an Existing Session ==&lt;br /&gt;
&lt;br /&gt;
You can usually use the &amp;quot;xrandr&amp;quot; tool to change the resolution of a running vnc session. First you'll need to know your session's display number (this should be the last digit or two of the port number). For example, if your VNC session is running on port 5902, then your screen number should be :2. For this example, we'll use screen 2. &lt;br /&gt;
&lt;br /&gt;
Once you know your screen number, you can see the list of supported modes as follows:&lt;br /&gt;
  xrandr -display :2&lt;br /&gt;
&lt;br /&gt;
Once you pick the one you want (generally the same size or smaller than the native resolution of your client), you can choose it by running a command like&lt;br /&gt;
  xrandr -s 1400x1050 -display :2&lt;br /&gt;
&lt;br /&gt;
(this example will set the resolution to 1400 pixels by 1050 pixels)&lt;br /&gt;
&lt;br /&gt;
You'll probably be disconnected at this point, but when you reconnect your screen size should be changed (hopefully without crashing your running programs). &lt;br /&gt;
&lt;br /&gt;
== Finding an Existing Session ==&lt;br /&gt;
SSH to portal0 and then run:&lt;br /&gt;
  /opt/vnc_script/findsession.sh&lt;br /&gt;
&lt;br /&gt;
Which will return the shortened port number of each of your currently running sessions.&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting == &lt;br /&gt;
&lt;br /&gt;
If you have used vncserver on a SCOREC machine before (it doesn't matter which version), you will need to clear your VNC settings for the script to work. You can do this by running rm -rf ~/.vnc&lt;br /&gt;
&lt;br /&gt;
stop_vnc.sh may display some errors; this is normal.&lt;br /&gt;
&lt;br /&gt;
If you have trouble deleting ~/.vnc send an email to Benjamin.A.Matthews@colorado.edu&lt;br /&gt;
&lt;br /&gt;
If any of these commands fail, you may need to source /etc/profile to get the necessary environment variables (this should be fixed soon)&lt;br /&gt;
&lt;br /&gt;
VirtualGL has trouble with some threaded programs. If your OpenGL program exhibits segmentation faults or other issues, this could be the problem. Check back for the solution later. &lt;br /&gt;
&lt;br /&gt;
If the given password is rejected you can run stop_vnc.sh and restart to get a new one. Occasionally the random password generator may generate passwords which VNC doesn't like.&lt;br /&gt;
&lt;br /&gt;
If VirtualGL complains about not being able to get a 24bit FB config either vglconnect to another VirtualGL enabled server or complain to Benjamin.A.Matthews@Colorado.edu&lt;br /&gt;
&lt;br /&gt;
If your VNC connection is very slow, you might want to try changing the compression and encoding options. See your vncviewer's documentation or try this&lt;br /&gt;
  vncviewer -encodings tight -quality 6 -compresslevel 6&lt;br /&gt;
If you have trouble with text distortion try adding &lt;br /&gt;
  -nojpeg&lt;br /&gt;
&lt;br /&gt;
If you're running OSX and see an error about Zlib, try changing your compression settings (maximum quality usually works) or use a different client. RealVNC and certain versions of ChickenOfTheVNC both exhibit this issue. The latest build of TigerVNC should work reliably, as does the Java based TightVNC client.&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=VNC&amp;diff=591</id>
		<title>VNC</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=VNC&amp;diff=591"/>
				<updated>2015-09-15T22:23:35Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;VNC is a tool which projects a GUI session over the network. It may be useful if you want to use GUI tools remotely when X forwarding performs poorly. &lt;br /&gt;
&lt;br /&gt;
'''Warning: This is still being tested and should NOT be considered stable (portal0 may be rebooted without warning)'''&lt;br /&gt;
'''Warning: The vnc password is transmitted in clear text over the network and should not be considered secure'''&lt;br /&gt;
&lt;br /&gt;
Portal0 is designated to host VNC sessions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To start a VNC session:&lt;br /&gt;
&lt;br /&gt;
  ssh jumpgate-phasta.colorado.edu&lt;br /&gt;
  ssh portal0&lt;br /&gt;
  source /etc/profile&lt;br /&gt;
  start_vnc.sh&lt;br /&gt;
&lt;br /&gt;
Then follow the directions from start_vnc.sh.&lt;br /&gt;
&lt;br /&gt;
(You may want to remember your password and port number so that you can reuse your session)&lt;br /&gt;
&lt;br /&gt;
If, for some reason, you want to end your session and kill your virtual desktop, run&lt;br /&gt;
&lt;br /&gt;
  source /etc/profile&lt;br /&gt;
  stop_vnc.sh     # ONLY run this if you want to kill your virtual desktop.&lt;br /&gt;
                  # It's okay to leave your VNC session running on portal0.&lt;br /&gt;
                  # Next time you want to access your desktop, just ssh into jumpgate&lt;br /&gt;
                  #   with a tunnel between portal0's vnc port (59**) and some port&lt;br /&gt;
                  #   on your machine. Then use a VNC client to connect to the port&lt;br /&gt;
                  #   on your machine.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== OpenGL == &lt;br /&gt;
&lt;br /&gt;
Portal0 is equipped with a VirtualGL install which will allow you to use OpenGL programs (which do not use pthreads)&lt;br /&gt;
&lt;br /&gt;
Simply wrap your OpenGL program with the &amp;quot;vglrun&amp;quot; command&lt;br /&gt;
  vglrun glxgears&lt;br /&gt;
&lt;br /&gt;
If you have access to another VirtualGL server you can connect to it first (Portal0 doesn't have a particularly fast graphics processor)&lt;br /&gt;
  vglconnect server&lt;br /&gt;
  vglrun glxgears&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that VGL uses a number of threads. If you have trouble with vglrun crashing with a message about Thread::Start() make sure you haven't set your stack size too large (remove any ulimit -s or ulimit -n calls from your shell start scripts)&lt;br /&gt;
&lt;br /&gt;
== Clients == &lt;br /&gt;
&lt;br /&gt;
Portal0 uses TurboVNC from the VirtualGL project, available from http://www.virtualgl.org/Downloads/TurboVNC&lt;br /&gt;
&lt;br /&gt;
Other VNC viewers will also work, such as TightVNC and RealVNC&lt;br /&gt;
&lt;br /&gt;
== Changing the VNC Password ==&lt;br /&gt;
&lt;br /&gt;
  /opt/tigervnc/bin/vncpasswd&lt;br /&gt;
&lt;br /&gt;
== View Only Mode == &lt;br /&gt;
&lt;br /&gt;
To share your desktop with another user in view only mode set a view only password &lt;br /&gt;
by running&lt;br /&gt;
  vncpasswd&lt;br /&gt;
&lt;br /&gt;
Have the other user connect in the same way you would but have them set their viewer to be in view only mode and use your view only password. Typically this is done as follows:&lt;br /&gt;
  vncviewer -viewonly&lt;br /&gt;
&lt;br /&gt;
== Windows == &lt;br /&gt;
The PuTTY SSH client can handle ssh tunneling on Windows based machines. You can download it here: http://www.chiark.greenend.org.uk/~sgtatham/putty/&lt;br /&gt;
&lt;br /&gt;
When you open PuTTY, enter jumpgate-phasta.colorado.edu in the Host Name box. Then click the + button next to SSH in the left pane (to expand the SSH tree node). Choose the Tunnels page. The start_vnc.sh script should tell you to run &amp;quot;ssh -L????:portal0:???? jumpgate-phasta.colorado.edu&amp;quot; on your machine. Enter the number between the -L and the first colon in the &amp;quot;Source port&amp;quot; box. Enter the rest in the Destination box (starting with portal0) and '''click the Add button'''. Then click &amp;quot;Open&amp;quot; and log in as normal. You will then be able to use a vncviewer as instructed by the script.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
The script says:&lt;br /&gt;
ssh -L5905:portal0:5900 jumpgate-phasta.colorado.edu&lt;br /&gt;
enter 5905 in the Source port box&lt;br /&gt;
enter portal0:5900 in the destination box.&lt;br /&gt;
&lt;br /&gt;
Try using this viewer utility&lt;br /&gt;
http://www.tightvnc.com/download/1.3.10/tightvnc-1.3.10_x86_viewer.zip&lt;br /&gt;
&lt;br /&gt;
'''Connecting to your VNC with PuTTY'''&lt;br /&gt;
&lt;br /&gt;
Once we SSH to jumpgate (on the default SSH port 22), our main desktop on portal0 can be accessed via a VNC session as follows.&lt;br /&gt;
&lt;br /&gt;
# The VNC server should already be running on portal0 using port 59xx.&lt;br /&gt;
## To check the port, on portal0 run &amp;lt;code&amp;gt;/opt/vnc_script/findsession.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
## To confirm the VNC server is running (and see port), run &amp;lt;code&amp;gt;ps aux | grep vnc&amp;lt;/code&amp;gt;&lt;br /&gt;
# Open PuTTY on your local machine.&lt;br /&gt;
# Under &amp;quot;Session&amp;quot;, SSH to &amp;lt;code&amp;gt;x@jumpgate-phasta.colorado.edu:22&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt; is your username on jumpgate, and &amp;lt;code&amp;gt;22&amp;lt;/code&amp;gt; is the standard SSH port.&lt;br /&gt;
# Under &amp;quot;Session&amp;quot;&amp;gt;&amp;quot;SSH&amp;quot;&amp;gt;&amp;quot;Tunnels&amp;quot;, select source port &amp;lt;code&amp;gt;59xx&amp;lt;/code&amp;gt; and destination port &amp;lt;code&amp;gt;portal0:59xx&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;xx&amp;lt;/code&amp;gt; is the two-digit number of your VNC session. Select destination &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; and click &amp;quot;Add&amp;quot;. We select &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; because we have a service (VNC Server) running on a machine (portal0) that can be reached from the remote machine (jumpgate), and we want to access it directly from the &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; machine.&lt;br /&gt;
# Confirm the dialog by clicking &amp;quot;Open&amp;quot;, thus establishing an SSH connection between localhost and jumpgate, and tunneling localhost:59xx to portal0:59xx via this connection.&lt;br /&gt;
# Open RealVNC, and connect to &amp;lt;code&amp;gt;localhost:xx&amp;lt;/code&amp;gt;, which is shorthand for &amp;lt;code&amp;gt;localhost:59xx&amp;lt;/code&amp;gt;. VNC ports are enumerated starting with &amp;lt;code&amp;gt;5901&amp;lt;/code&amp;gt;, so any two-digit number &amp;lt;code&amp;gt;xx&amp;lt;/code&amp;gt; is assumed to mean port &amp;lt;code&amp;gt;59xx&amp;lt;/code&amp;gt;.&lt;br /&gt;
# You should now have access to your desktop on portal0.&lt;br /&gt;
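The port arithmetic in the last step can be sketched as a tiny shell snippet (the display number 2 here is only an example; substitute your own session number):&lt;br /&gt;

```shell
# Illustration of the VNC port convention described above:
# display :xx corresponds to TCP port 59xx.
display=2                      # example session number; use your own
port=$((5900 + display))
echo "display :${display} -> TCP port ${port}"
```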
&lt;br /&gt;
== Web Based Viewer ==&lt;br /&gt;
&lt;br /&gt;
If you can't or don't want to install a VNC viewer, you can use a Java-based one. You will need a JVM and a Java browser plugin. You will also need the port that the start_vnc script assigned you to be free on your local computer.&lt;br /&gt;
&lt;br /&gt;
Forward your session through jumpgate as before, adding a second port, 580n. For example, if the script tells you to run&lt;br /&gt;
&lt;br /&gt;
  ssh -L5905:portal0:5902 jumpgate-phasta.colorado.edu&lt;br /&gt;
you should instead run&lt;br /&gt;
  ssh -L5902:portal0:5902 -L5802:portal0:5802 jumpgate-phasta.colorado.edu&lt;br /&gt;
Then point your browser to http://localhost:5802 and log in with the password specified by the script when prompted. (Replace the 2 with the value specified by the script.)&lt;br /&gt;
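The two forwards follow the 590n / 580n pattern, which can be sketched as follows (n=2 is an example session number, and this simple string concatenation assumes a single-digit n):&lt;br /&gt;

```shell
# Sketch: build both -L flags for VNC session n
# (VNC on port 590n, web viewer on port 580n).
n=2                             # example session number from start_vnc.sh
cmd="ssh -L590${n}:portal0:590${n} -L580${n}:portal0:580${n} jumpgate-phasta.colorado.edu"
echo "$cmd"
```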
&lt;br /&gt;
== Changing the Size (Resolution) of an Existing Session ==&lt;br /&gt;
&lt;br /&gt;
You can usually use the &amp;quot;xrandr&amp;quot; tool to change the resolution of a running vnc session. First you'll need to know your session's display number (this should be the last digit or two of the port number). For example, if your VNC session is running on port 5902, then your screen number should be :2. For this example, we'll use screen 2. &lt;br /&gt;
&lt;br /&gt;
Once you know your screen number, you can see the list of supported modes as follows:&lt;br /&gt;
  xrandr -display :2&lt;br /&gt;
&lt;br /&gt;
Once you pick the one you want (generally the same size or smaller than the native resolution of your client), you can choose it by running a command like&lt;br /&gt;
  xrandr -s 1400x1050 -display :2&lt;br /&gt;
&lt;br /&gt;
(this example will set the resolution to 1400 pixels by 1050 pixels)&lt;br /&gt;
&lt;br /&gt;
You'll probably be disconnected at this point, but when you reconnect your screen size should be changed (hopefully without crashing your running programs). &lt;br /&gt;
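Putting the pieces together, deriving the display number from the VNC port (59xx maps to :xx) can be sketched like this; the port and mode are example values:&lt;br /&gt;

```shell
# Sketch: derive the X display number from the VNC port, then build the
# xrandr command. Port 5902 and mode 1400x1050 are illustrative.
port=5902
display=$((port - 5900))
echo "xrandr -s 1400x1050 -display :${display}"
```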
&lt;br /&gt;
== Finding an Existing Session ==&lt;br /&gt;
SSH to portal0 and then run:&lt;br /&gt;
  /opt/vnc_script/findsession.sh&lt;br /&gt;
&lt;br /&gt;
This will return the shortened port number of each of your currently running sessions.&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting == &lt;br /&gt;
&lt;br /&gt;
If you have used vncserver (any version) on a SCOREC machine before, you will need to clear your VNC settings for the script to work. You can do this by running &amp;lt;code&amp;gt;rm -rf ~/.vnc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
stop_vnc.sh may display some errors; this is normal.&lt;br /&gt;
&lt;br /&gt;
If you have trouble deleting ~/.vnc send an email to Benjamin.A.Matthews@colorado.edu&lt;br /&gt;
&lt;br /&gt;
If any of these commands fail, you may need to &amp;lt;code&amp;gt;source /etc/profile&amp;lt;/code&amp;gt; to get the necessary environment variables (this should be fixed soon).&lt;br /&gt;
&lt;br /&gt;
VirtualGL has trouble with some threaded programs. If your OpenGL program exhibits segmentation faults or other issues, this could be the problem. Check back for the solution later. &lt;br /&gt;
&lt;br /&gt;
If the given password is rejected you can run stop_vnc.sh and restart to get a new one. Occasionally the random password generator may generate passwords which VNC doesn't like.&lt;br /&gt;
&lt;br /&gt;
If VirtualGL complains about not being able to get a 24-bit FB config, either vglconnect to another VirtualGL-enabled server or complain to Benjamin.A.Matthews@Colorado.edu.&lt;br /&gt;
&lt;br /&gt;
If your VNC connection is very slow, you might want to try changing the compression and encoding options. See your vncviewer's documentation, or try this:&lt;br /&gt;
  vncviewer -encodings tight -quality 6 -compresslevel 6&lt;br /&gt;
If you have trouble with text distortion try adding &lt;br /&gt;
  -nojpeg&lt;br /&gt;
&lt;br /&gt;
If you're running OS X and see an error about Zlib, try changing your compression settings (maximum quality usually works) or use a different client. RealVNC and certain versions of ChickenOfTheVNC both exhibit this issue. The latest build of TigerVNC should work reliably, as does the Java-based TightVNC client.&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=VNC&amp;diff=590</id>
		<title>VNC</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=VNC&amp;diff=590"/>
				<updated>2015-09-15T22:19:47Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;VNC is a tool which projects a GUI session over the network. It may be useful if you want to use GUI tools remotely when X forwarding performs poorly.&lt;br /&gt;
&lt;br /&gt;
'''Warning: This is still being tested and should NOT be considered stable (portal0 may be rebooted without warning)'''&lt;br /&gt;
'''Warning: The vnc password is transmitted in clear text over the network and should not be considered secure'''&lt;br /&gt;
&lt;br /&gt;
Portal0 is designated to host VNC sessions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can start a session by running the commands&lt;br /&gt;
&lt;br /&gt;
  ssh jumpgate-phasta.colorado.edu&lt;br /&gt;
  ssh portal0&lt;br /&gt;
  source /etc/profile&lt;br /&gt;
  start_vnc.sh&lt;br /&gt;
&lt;br /&gt;
and then following the directions output by start_vnc.sh. This starts your virtual desktop.&lt;br /&gt;
&lt;br /&gt;
(You may want to remember your password and port number so that you can reuse your session)&lt;br /&gt;
&lt;br /&gt;
If, for some reason, you want to end your session and kill your virtual desktop, run&lt;br /&gt;
&lt;br /&gt;
  source /etc/profile&lt;br /&gt;
  stop_vnc.sh&lt;br /&gt;
&lt;br /&gt;
Most of the time, you '''do not need to do this''', as your VNC session will stay running on portal0, and you can then connect to it at any time.&lt;br /&gt;
&lt;br /&gt;
== OpenGL == &lt;br /&gt;
&lt;br /&gt;
Portal0 is equipped with a VirtualGL install, which allows you to use OpenGL programs (as long as they do not use pthreads).&lt;br /&gt;
&lt;br /&gt;
Simply wrap your OpenGL program with the &amp;quot;vglrun&amp;quot; command:&lt;br /&gt;
  vglrun glxgears&lt;br /&gt;
&lt;br /&gt;
If you have access to another VirtualGL server, you can connect to it first (Portal0 doesn't have a particularly fast graphics processor):&lt;br /&gt;
  vglconnect server&lt;br /&gt;
  vglrun glxgears&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that VGL uses a number of threads. If you have trouble with vglrun crashing with a message about Thread::Start(), make sure you haven't set your stack size too large (remove any &amp;lt;code&amp;gt;ulimit -s&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;ulimit -n&amp;lt;/code&amp;gt; calls from your shell startup scripts).&lt;br /&gt;
&lt;br /&gt;
== Clients == &lt;br /&gt;
&lt;br /&gt;
Portal0 uses TurboVNC from the VirtualGL project, available from http://www.virtualgl.org/Downloads/TurboVNC&lt;br /&gt;
&lt;br /&gt;
Other VNC viewers, such as TightVNC and RealVNC, will also work.&lt;br /&gt;
&lt;br /&gt;
== Changing the VNC Password ==&lt;br /&gt;
&lt;br /&gt;
  /opt/tigervnc/bin/vncpasswd&lt;br /&gt;
&lt;br /&gt;
== View Only Mode == &lt;br /&gt;
&lt;br /&gt;
To share your desktop with another user in view-only mode, set a view-only password by running&lt;br /&gt;
  vncpasswd&lt;br /&gt;
&lt;br /&gt;
Have the other user connect in the same way you would, but with their viewer set to view-only mode and using your view-only password. Typically this is done as follows:&lt;br /&gt;
  vncviewer -viewonly&lt;br /&gt;
&lt;br /&gt;
== Windows == &lt;br /&gt;
The PuTTY SSH client can handle SSH tunneling on Windows-based machines. You can download it here: http://www.chiark.greenend.org.uk/~sgtatham/putty/&lt;br /&gt;
&lt;br /&gt;
When you open PuTTY, enter jumpgate-phasta.colorado.edu in the Host Name box. Then click the + button next to SSH in the left pane (to expand the SSH tree node). Choose the Tunnels page. The start_vnc.sh script should tell you to run &amp;quot;ssh -L????:portal0:???? jumpgate-phasta.colorado.edu&amp;quot; on your machine. Enter the number between the -L and the first colon in the &amp;quot;Source port&amp;quot; box. Enter the rest in the Destination box (starting with portal0) and '''click the Add button'''. Then click &amp;quot;Open&amp;quot; and log in as normal. You will then be able to use a vncviewer as instructed by the script.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
The script says:&lt;br /&gt;
  ssh -L5905:portal0:5900 jumpgate-phasta.colorado.edu&lt;br /&gt;
Enter 5905 in the Source port box, and&lt;br /&gt;
enter portal0:5900 in the Destination box.&lt;br /&gt;
&lt;br /&gt;
Try using this viewer utility&lt;br /&gt;
http://www.tightvnc.com/download/1.3.10/tightvnc-1.3.10_x86_viewer.zip&lt;br /&gt;
&lt;br /&gt;
'''Connecting to your VNC with PuTTY'''&lt;br /&gt;
&lt;br /&gt;
Once we SSH to jumpgate (on the default SSH port 22), our main desktop on portal0 can be accessed via a VNC session as follows.&lt;br /&gt;
&lt;br /&gt;
# The VNC server should already be running on portal0 using port 59xx.&lt;br /&gt;
## To check the port, on portal0 run &amp;lt;code&amp;gt;/opt/vnc_script/findsession.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
## To confirm the VNC server is running (and see port), run &amp;lt;code&amp;gt;ps aux | grep vnc&amp;lt;/code&amp;gt;&lt;br /&gt;
# Open PuTTY on your local machine.&lt;br /&gt;
# Under &amp;quot;Session&amp;quot;, SSH to &amp;lt;code&amp;gt;x@jumpgate-phasta.colorado.edu:22&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt; is your username on jumpgate, and &amp;lt;code&amp;gt;22&amp;lt;/code&amp;gt; is the standard SSH port.&lt;br /&gt;
# Under &amp;quot;Session&amp;quot;&amp;gt;&amp;quot;SSH&amp;quot;&amp;gt;&amp;quot;Tunnels&amp;quot;, select source port &amp;lt;code&amp;gt;59xx&amp;lt;/code&amp;gt; and destination port &amp;lt;code&amp;gt;portal0:59xx&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;xx&amp;lt;/code&amp;gt; is the two-digit number of your VNC session. Select destination &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; and click &amp;quot;Add&amp;quot;. We select &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; because we have a service (VNC Server) running on a machine (portal0) that can be reached from the remote machine (jumpgate), and we want to access it directly from the &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; machine.&lt;br /&gt;
# Confirm the dialog by clicking &amp;quot;Open&amp;quot;, thus establishing an SSH connection between localhost and jumpgate, and tunneling localhost:59xx to portal0:59xx via this connection.&lt;br /&gt;
# Open RealVNC, and connect to &amp;lt;code&amp;gt;localhost:xx&amp;lt;/code&amp;gt;, which is shorthand for &amp;lt;code&amp;gt;localhost:59xx&amp;lt;/code&amp;gt;. VNC ports are enumerated starting with &amp;lt;code&amp;gt;5901&amp;lt;/code&amp;gt;, so any two-digit number &amp;lt;code&amp;gt;xx&amp;lt;/code&amp;gt; is assumed to mean port &amp;lt;code&amp;gt;59xx&amp;lt;/code&amp;gt;.&lt;br /&gt;
# You should now have access to your desktop on portal0.&lt;br /&gt;
&lt;br /&gt;
== Web Based Viewer ==&lt;br /&gt;
&lt;br /&gt;
If you can't or don't want to install a VNC viewer, you can use a Java-based one. You will need a JVM and a Java browser plugin. You will also need the port that the start_vnc script assigned you to be free on your local computer.&lt;br /&gt;
&lt;br /&gt;
Forward your session through jumpgate as before, adding a second port, 580n. For example, if the script tells you to run&lt;br /&gt;
&lt;br /&gt;
  ssh -L5905:portal0:5902 jumpgate-phasta.colorado.edu&lt;br /&gt;
you should instead run&lt;br /&gt;
  ssh -L5902:portal0:5902 -L5802:portal0:5802 jumpgate-phasta.colorado.edu&lt;br /&gt;
Then point your browser to http://localhost:5802 and log in with the password specified by the script when prompted. (Replace the 2 with the value specified by the script.)&lt;br /&gt;
&lt;br /&gt;
== Changing the Size (Resolution) of an Existing Session ==&lt;br /&gt;
&lt;br /&gt;
You can usually use the &amp;quot;xrandr&amp;quot; tool to change the resolution of a running vnc session. First you'll need to know your session's display number (this should be the last digit or two of the port number). For example, if your VNC session is running on port 5902, then your screen number should be :2. For this example, we'll use screen 2. &lt;br /&gt;
&lt;br /&gt;
Once you know your screen number, you can see the list of supported modes as follows:&lt;br /&gt;
  xrandr -display :2&lt;br /&gt;
&lt;br /&gt;
Once you pick the one you want (generally the same size or smaller than the native resolution of your client), you can choose it by running a command like&lt;br /&gt;
  xrandr -s 1400x1050 -display :2&lt;br /&gt;
&lt;br /&gt;
(this example will set the resolution to 1400 pixels by 1050 pixels)&lt;br /&gt;
&lt;br /&gt;
You'll probably be disconnected at this point, but when you reconnect your screen size should be changed (hopefully without crashing your running programs). &lt;br /&gt;
&lt;br /&gt;
== Finding an Existing Session ==&lt;br /&gt;
SSH to portal0 and then run:&lt;br /&gt;
  /opt/vnc_script/findsession.sh&lt;br /&gt;
&lt;br /&gt;
This will return the shortened port number of each of your currently running sessions.&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting == &lt;br /&gt;
&lt;br /&gt;
If you have used vncserver (any version) on a SCOREC machine before, you will need to clear your VNC settings for the script to work. You can do this by running &amp;lt;code&amp;gt;rm -rf ~/.vnc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
stop_vnc.sh may display some errors; this is normal.&lt;br /&gt;
&lt;br /&gt;
If you have trouble deleting ~/.vnc send an email to Benjamin.A.Matthews@colorado.edu&lt;br /&gt;
&lt;br /&gt;
If any of these commands fail, you may need to &amp;lt;code&amp;gt;source /etc/profile&amp;lt;/code&amp;gt; to get the necessary environment variables (this should be fixed soon).&lt;br /&gt;
&lt;br /&gt;
VirtualGL has trouble with some threaded programs. If your OpenGL program exhibits segmentation faults or other issues, this could be the problem. Check back for the solution later. &lt;br /&gt;
&lt;br /&gt;
If the given password is rejected you can run stop_vnc.sh and restart to get a new one. Occasionally the random password generator may generate passwords which VNC doesn't like.&lt;br /&gt;
&lt;br /&gt;
If VirtualGL complains about not being able to get a 24-bit FB config, either vglconnect to another VirtualGL-enabled server or complain to Benjamin.A.Matthews@Colorado.edu.&lt;br /&gt;
&lt;br /&gt;
If your VNC connection is very slow, you might want to try changing the compression and encoding options. See your vncviewer's documentation, or try this:&lt;br /&gt;
  vncviewer -encodings tight -quality 6 -compresslevel 6&lt;br /&gt;
If you have trouble with text distortion try adding &lt;br /&gt;
  -nojpeg&lt;br /&gt;
&lt;br /&gt;
If you're running OS X and see an error about Zlib, try changing your compression settings (maximum quality usually works) or use a different client. RealVNC and certain versions of ChickenOfTheVNC both exhibit this issue. The latest build of TigerVNC should work reliably, as does the Java-based TightVNC client.&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ParaView&amp;diff=589</id>
		<title>ParaView</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ParaView&amp;diff=589"/>
				<updated>2015-08-28T15:48:34Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* One viz nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
ParaView is a parallel, scalable, visualization package from Kitware. See http://paraview.org/&lt;br /&gt;
&lt;br /&gt;
== Running ==&lt;br /&gt;
&lt;br /&gt;
To launch a single threaded ParaView instance, first connect via [[VNC]], then use vglconnect to connect to one of the compute machines:&lt;br /&gt;
&lt;br /&gt;
  vglconnect -s viz001&lt;br /&gt;
&lt;br /&gt;
Add the desired version of ParaView to your environment (the below example will get the &amp;quot;default&amp;quot; version)&lt;br /&gt;
&lt;br /&gt;
  soft add @paraview&lt;br /&gt;
&lt;br /&gt;
and launch the GUI:&lt;br /&gt;
&lt;br /&gt;
  vglrun paraview&lt;br /&gt;
&lt;br /&gt;
== Viewing Serial Cases ==&lt;br /&gt;
&lt;br /&gt;
Within the GUI, open the .pht file that corresponds to your case.  The .pht file will appear in the &amp;quot;pipeline browser&amp;quot; within ParaView.  To actually see your model, click the &amp;quot;apply&amp;quot; button on the properties tab.  To visualize a particular flow property, choose that property from the dropdown menu in the &amp;quot;active variable controls&amp;quot; toolbar, and then click the button corresponding to the type of visualization you want (contour, slice, etc.) in the &amp;quot;common&amp;quot; toolbar.  The properties of the visualization element can then be controlled in the &amp;quot;properties&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
For more help with the ParaView GUI, see [http://www.paraview.org/Wiki/The_ParaView_Tutorial ParaView's tutorial].&lt;br /&gt;
&lt;br /&gt;
== Parallel (Client/Server) Mode ==&lt;br /&gt;
&lt;br /&gt;
=== One viz node ===&lt;br /&gt;
&lt;br /&gt;
To visualize cases in parallel on one viz node, start the ParaView server in parallel with&lt;br /&gt;
&lt;br /&gt;
  mpirun -np N pvserver&lt;br /&gt;
&lt;br /&gt;
* '''&amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt;''' is the number of processes, maximum of 8 on one viz node.&lt;br /&gt;
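One way to pick N while respecting the 8-core cap might look like the following sketch (the core count of 12 is an illustrative value; in practice it could come from nproc):&lt;br /&gt;

```shell
# Sketch: cap the pvserver process count at 8, one viz node's core count.
cores=12                        # illustrative; could come from `nproc`
np=$(( cores > 8 ? 8 : cores ))
echo "mpirun -np ${np} pvserver"
```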
&lt;br /&gt;
=== Multiple viz nodes ===&lt;br /&gt;
&lt;br /&gt;
To visualize using more than one viz node, start a ParaView server utilizing multiple viz nodes with&lt;br /&gt;
&lt;br /&gt;
  mpirun --prefix A -x DISPLAY=&amp;quot;:0&amp;quot; -x PATH -x LD_LIBRARY_PATH -hostfile ~matthb2/hostfile-ib -np 16 pvserver&lt;br /&gt;
&lt;br /&gt;
* '''&amp;lt;code&amp;gt;--prefix A&amp;lt;/code&amp;gt;''' mpirun is located at &amp;lt;code&amp;gt;A/bin/mpirun&amp;lt;/code&amp;gt;; the &amp;lt;code&amp;gt;which mpirun&amp;lt;/code&amp;gt; command will tell you this. Make sure to add one of the &amp;lt;code&amp;gt;@paraview-version-number&amp;lt;/code&amp;gt; macros from &amp;lt;code&amp;gt;softenv&amp;lt;/code&amp;gt; first, as it will set the best &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; path for that version. &amp;lt;code&amp;gt;A&amp;lt;/code&amp;gt; is the 'prefix' directory. In traditional Unix, the prefix directory contains the bin, etc, include, lib, and share directories associated with the program.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;-x&amp;lt;/code&amp;gt; flag means 'copy a variable' from the machine you run &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; from (the headnode) to those it's starting processes on (the slaves).&lt;br /&gt;
&lt;br /&gt;
* '''&amp;lt;code&amp;gt;-x DISPLAY=&amp;quot;:0&amp;quot;&amp;lt;/code&amp;gt;''' Location of the X-server. Assumes localhost. Could do &amp;lt;code&amp;gt;viz002:0&amp;lt;/code&amp;gt; to use graphics card on viz002 instead. 0 is the default graphics hardware, whereas 1, 2, etc could be VNC servers or other software display.&lt;br /&gt;
&lt;br /&gt;
* '''&amp;lt;code&amp;gt;-x PATH&amp;lt;/code&amp;gt;''' The lookup path for the binary. &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt; is an environment variable maintained by the shell.&lt;br /&gt;
&lt;br /&gt;
* '''&amp;lt;code&amp;gt;-x LD_LIBRARY_PATH&amp;lt;/code&amp;gt;''' The lookup path for shared libraries. Same as &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt;, but for dynamically-linked libraries.&lt;br /&gt;
&lt;br /&gt;
* '''&amp;lt;code&amp;gt;-hostfile ~matthb2/hostfile-ib&amp;lt;/code&amp;gt;''' File specifying list of hosts. Contents are lines that look like &amp;lt;code&amp;gt;172.18.4.11 slots=8&amp;lt;/code&amp;gt;. This is mpi-implementation specific. IP address of viz001, for instance, and the number of cores it has (slots=8).&lt;br /&gt;
&lt;br /&gt;
* '''&amp;lt;code&amp;gt;-np 16&amp;lt;/code&amp;gt;''' Total number of processes that mpirun should start. Maximum is 16 when using two viz nodes (8 per node).&lt;br /&gt;
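For illustration, a two-node hostfile in this format might contain lines like the following (the first IP is the viz001 example from above; the second is a hypothetical placeholder for a second node):&lt;br /&gt;

```
172.18.4.11 slots=8
172.18.4.12 slots=8
```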
&lt;br /&gt;
=== Connecting to server ===&lt;br /&gt;
&lt;br /&gt;
When the ParaView server starts, it will say it is accepting connections on some port. Connect to that port from a ParaView client.&lt;br /&gt;
&lt;br /&gt;
==CoProcessing==&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ParaView&amp;diff=588</id>
		<title>ParaView</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ParaView&amp;diff=588"/>
				<updated>2015-08-28T15:48:02Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Multiple viz nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
ParaView is a parallel, scalable, visualization package from Kitware. See http://paraview.org/&lt;br /&gt;
&lt;br /&gt;
== Running ==&lt;br /&gt;
&lt;br /&gt;
To launch a single threaded ParaView instance, first connect via [[VNC]], then use vglconnect to connect to one of the compute machines:&lt;br /&gt;
&lt;br /&gt;
  vglconnect -s viz001&lt;br /&gt;
&lt;br /&gt;
Add the desired version of ParaView to your environment (the below example will get the &amp;quot;default&amp;quot; version)&lt;br /&gt;
&lt;br /&gt;
  soft add @paraview&lt;br /&gt;
&lt;br /&gt;
and launch the GUI:&lt;br /&gt;
&lt;br /&gt;
  vglrun paraview&lt;br /&gt;
&lt;br /&gt;
== Viewing Serial Cases ==&lt;br /&gt;
&lt;br /&gt;
Within the GUI, open the .pht file that corresponds to your case.  The .pht file will appear in the &amp;quot;pipeline browser&amp;quot; within ParaView.  To actually see your model, click the &amp;quot;apply&amp;quot; button on the properties tab.  To visualize a particular flow property, choose that property from the dropdown menu in the &amp;quot;active variable controls&amp;quot; toolbar, and then click the button corresponding to the type of visualization you want (contour, slice, etc.) in the &amp;quot;common&amp;quot; toolbar.  The properties of the visualization element can then be controlled in the &amp;quot;properties&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
For more help with the ParaView GUI, see [http://www.paraview.org/Wiki/The_ParaView_Tutorial ParaView's tutorial].&lt;br /&gt;
&lt;br /&gt;
== Parallel (Client/Server) Mode ==&lt;br /&gt;
&lt;br /&gt;
=== One viz node ===&lt;br /&gt;
&lt;br /&gt;
To visualize cases in parallel on one viz node, start the ParaView server in parallel with&lt;br /&gt;
&lt;br /&gt;
  mpirun -np N pvserver&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; is the number of processes, and is no greater than 8.&lt;br /&gt;
&lt;br /&gt;
=== Multiple viz nodes ===&lt;br /&gt;
&lt;br /&gt;
To visualize using more than one viz node, start a ParaView server utilizing multiple viz nodes with&lt;br /&gt;
&lt;br /&gt;
  mpirun --prefix A -x DISPLAY=&amp;quot;:0&amp;quot; -x PATH -x LD_LIBRARY_PATH -hostfile ~matthb2/hostfile-ib -np 16 pvserver&lt;br /&gt;
&lt;br /&gt;
* '''&amp;lt;code&amp;gt;--prefix A&amp;lt;/code&amp;gt;''' mpirun is located at &amp;lt;code&amp;gt;A/bin/mpirun&amp;lt;/code&amp;gt;; the &amp;lt;code&amp;gt;which mpirun&amp;lt;/code&amp;gt; command will tell you this. Make sure to add one of the &amp;lt;code&amp;gt;@paraview-version-number&amp;lt;/code&amp;gt; macros from &amp;lt;code&amp;gt;softenv&amp;lt;/code&amp;gt; first, as it will set the best &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; path for that version. &amp;lt;code&amp;gt;A&amp;lt;/code&amp;gt; is the 'prefix' directory. In traditional Unix, the prefix directory contains the bin, etc, include, lib, and share directories associated with the program.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;-x&amp;lt;/code&amp;gt; flag means 'copy a variable' from the machine you run &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; from (the headnode) to those it's starting processes on (the slaves).&lt;br /&gt;
&lt;br /&gt;
* '''&amp;lt;code&amp;gt;-x DISPLAY=&amp;quot;:0&amp;quot;&amp;lt;/code&amp;gt;''' Location of the X-server. Assumes localhost. Could do &amp;lt;code&amp;gt;viz002:0&amp;lt;/code&amp;gt; to use graphics card on viz002 instead. 0 is the default graphics hardware, whereas 1, 2, etc could be VNC servers or other software display.&lt;br /&gt;
&lt;br /&gt;
* '''&amp;lt;code&amp;gt;-x PATH&amp;lt;/code&amp;gt;''' The lookup path for the binary. &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt; is an environment variable maintained by the shell.&lt;br /&gt;
&lt;br /&gt;
* '''&amp;lt;code&amp;gt;-x LD_LIBRARY_PATH&amp;lt;/code&amp;gt;''' The lookup path for shared libraries. Same as &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt;, but for dynamically-linked libraries.&lt;br /&gt;
&lt;br /&gt;
* '''&amp;lt;code&amp;gt;-hostfile ~matthb2/hostfile-ib&amp;lt;/code&amp;gt;''' File specifying list of hosts. Contents are lines that look like &amp;lt;code&amp;gt;172.18.4.11 slots=8&amp;lt;/code&amp;gt;. This is mpi-implementation specific. IP address of viz001, for instance, and the number of cores it has (slots=8).&lt;br /&gt;
&lt;br /&gt;
* '''&amp;lt;code&amp;gt;-np 16&amp;lt;/code&amp;gt;''' Total number of processes that mpirun should start. Maximum is 16 when using two viz nodes (8 per node).&lt;br /&gt;
&lt;br /&gt;
=== Connecting to server ===&lt;br /&gt;
&lt;br /&gt;
When the ParaView server starts, it will say it is accepting connections on some port. Connect to that port from a ParaView client.&lt;br /&gt;
&lt;br /&gt;
==CoProcessing==&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ParaView&amp;diff=587</id>
		<title>ParaView</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ParaView&amp;diff=587"/>
				<updated>2015-08-28T15:39:28Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* One viz nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
ParaView is a parallel, scalable, visualization package from Kitware. See http://paraview.org/&lt;br /&gt;
&lt;br /&gt;
== Running ==&lt;br /&gt;
&lt;br /&gt;
To launch a single threaded ParaView instance, first connect via [[VNC]], then use vglconnect to connect to one of the compute machines:&lt;br /&gt;
&lt;br /&gt;
  vglconnect -s viz001&lt;br /&gt;
&lt;br /&gt;
Add the desired version of ParaView to your environment (the below example will get the &amp;quot;default&amp;quot; version)&lt;br /&gt;
&lt;br /&gt;
  soft add @paraview&lt;br /&gt;
&lt;br /&gt;
and launch the GUI:&lt;br /&gt;
&lt;br /&gt;
  vglrun paraview&lt;br /&gt;
&lt;br /&gt;
== Viewing Serial Cases ==&lt;br /&gt;
&lt;br /&gt;
Within the GUI, open the .pht file that corresponds to your case.  The .pht file will appear in the &amp;quot;pipeline browser&amp;quot; within ParaView.  To actually see your model, click the &amp;quot;apply&amp;quot; button on the properties tab.  To visualize a particular flow property, choose that property from the dropdown menu in the &amp;quot;active variable controls&amp;quot; toolbar, and then click the button corresponding to the type of visualization you want (contour, slice, etc.) in the &amp;quot;common&amp;quot; toolbar.  The properties of the visualization element can then be controlled in the &amp;quot;properties&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
For more help with the ParaView GUI, see [http://www.paraview.org/Wiki/The_ParaView_Tutorial ParaView's tutorial].&lt;br /&gt;
&lt;br /&gt;
== Parallel (Client/Server) Mode ==&lt;br /&gt;
&lt;br /&gt;
=== One viz node ===&lt;br /&gt;
&lt;br /&gt;
To visualize cases in parallel on one viz node, start the ParaView server in parallel with&lt;br /&gt;
&lt;br /&gt;
  mpirun -np N pvserver&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;N&amp;lt;/code&amp;gt; is the number of processes, and is no greater than 8.&lt;br /&gt;
&lt;br /&gt;
=== Multiple viz nodes ===&lt;br /&gt;
&lt;br /&gt;
To visualize using more than one viz node, start a ParaView server utilizing multiple viz nodes with&lt;br /&gt;
&lt;br /&gt;
  mpirun --prefix A -x DISPLAY=&amp;quot;:0&amp;quot; -x PATH -x LD_LIBRARY_PATH -hostfile ~matthb2/hostfile-ib -np 16 pvserver&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;--prefix A&amp;lt;/code&amp;gt; mpirun is located at A/bin/mpirun; the which mpirun command will tell you this. Make sure to add one of the @paraview-version-number macros from softenv first, as it will set the best mpirun path for that version. A is the 'prefix' directory. In traditional Unix, the prefix directory contains the bin, etc, include, lib, and share directories associated with the program.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-x DISPLAY=&amp;quot;:0&amp;quot;&amp;lt;/code&amp;gt; The -x flag is 'copy a variable' from the machine you run mpirun from (the headnode) to those it's starting processes on (the slaves).&lt;br /&gt;
Location of the X-server. Assumes localhost. Could do viz002:0 to use the graphics card on viz002 instead. 0 is the default graphics hardware, whereas 1, 2, etc. could be VNC servers or other software displays.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-x PATH&amp;lt;/code&amp;gt; The lookup path for the binary. PATH is an environment variable maintained by the shell.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-x LD_LIBRARY_PATH&amp;lt;/code&amp;gt; The lookup path for shared libraries. Same as PATH, but for dynamically-linked libraries.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-hostfile ~matthb2/hostfile-ib&amp;lt;/code&amp;gt; File specifying list of hosts. Contents are lines that look like '172.18.4.11 slots=8'. This is mpi-implementation specific. IP address of viz001, for instance, and the number of cores it has (slots=8).&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-np 16&amp;lt;/code&amp;gt; Total number of processes that mpirun should start.&lt;br /&gt;
&lt;br /&gt;
=== Connecting to server ===&lt;br /&gt;
&lt;br /&gt;
When the ParaView server starts, it will say it is accepting connections on some port. Connect to that port from a ParaView client.&lt;br /&gt;
&lt;br /&gt;
==CoProcessing==&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ParaView&amp;diff=586</id>
		<title>ParaView</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ParaView&amp;diff=586"/>
				<updated>2015-08-28T15:35:23Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Parallel (Client/Server) Mode */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
ParaView is a parallel, scalable, visualization package from Kitware. See http://paraview.org/&lt;br /&gt;
&lt;br /&gt;
== Running ==&lt;br /&gt;
&lt;br /&gt;
To launch a single threaded ParaView instance, first connect via [[VNC]], then use vglconnect to connect to one of the compute machines:&lt;br /&gt;
&lt;br /&gt;
  vglconnect -s viz001&lt;br /&gt;
&lt;br /&gt;
Add the desired version of ParaView to your environment (the below example will get the &amp;quot;default&amp;quot; version)&lt;br /&gt;
&lt;br /&gt;
  soft add @paraview&lt;br /&gt;
&lt;br /&gt;
and launch the GUI:&lt;br /&gt;
&lt;br /&gt;
  vglrun paraview&lt;br /&gt;
&lt;br /&gt;
== Viewing Serial Cases ==&lt;br /&gt;
&lt;br /&gt;
Within the GUI, open the .pht file that corresponds to your case.  The .pht file will appear in the &amp;quot;pipeline browser&amp;quot; within ParaView.  To actually see your model, click the &amp;quot;apply&amp;quot; button on the properties tab.  To visualize a particular flow property, choose that property from the dropdown menu in the &amp;quot;active variable controls&amp;quot; toolbar, and then click the button corresponding to the type of visualization you want (contour, slice, etc.) in the &amp;quot;common&amp;quot; toolbar.  The properties of the visualization element can then be controlled in the &amp;quot;properties&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
For more help with the ParaView GUI, see [http://www.paraview.org/Wiki/The_ParaView_Tutorial ParaView's tutorial].&lt;br /&gt;
&lt;br /&gt;
== Parallel (Client/Server) Mode ==&lt;br /&gt;
&lt;br /&gt;
=== One viz node ===&lt;br /&gt;
&lt;br /&gt;
To visualize cases in parallel on one viz node, start the ParaView server in parallel with&lt;br /&gt;
&lt;br /&gt;
  mpirun -np N pvserver&lt;br /&gt;
&lt;br /&gt;
where the number of processes N is less than or equal to 8.&lt;br /&gt;
&lt;br /&gt;
=== Multiple viz nodes ===&lt;br /&gt;
&lt;br /&gt;
To visualize using more than one viz node, start a ParaView server utilizing multiple viz nodes with&lt;br /&gt;
&lt;br /&gt;
  mpirun --prefix A -x DISPLAY=&amp;quot;:0&amp;quot; -x PATH -x LD_LIBRARY_PATH -hostfile ~matthb2/hostfile-ib -np 16 pvserver&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;--prefix A&amp;lt;/code&amp;gt; mpirun is located at A/bin/mpirun; the &amp;lt;code&amp;gt;which mpirun&amp;lt;/code&amp;gt; command will tell you this. Make sure to add one of the @paraview-version-number macros from softenv first, as it will set the best mpirun path for that version. A is the 'prefix' directory; in traditional Unix, the prefix directory contains the bin, etc, include, lib, and share directories associated with the program.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-x DISPLAY=&amp;quot;:0&amp;quot;&amp;lt;/code&amp;gt; The -x flag copies a variable from the machine you run mpirun on (the head node) to the machines it starts processes on (the slaves).&lt;br /&gt;
DISPLAY gives the location of the X server; :0 assumes localhost. You could use viz002:0 to use the graphics card on viz002 instead. Display 0 is the default graphics hardware, whereas 1, 2, etc. could be VNC servers or other software displays.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-x PATH&amp;lt;/code&amp;gt; The lookup path for the binary. PATH is an environment variable maintained by the shell.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-x LD_LIBRARY_PATH&amp;lt;/code&amp;gt; The lookup path for shared libraries. Same as PATH, but for dynamically-linked libraries.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-hostfile ~matthb2/hostfile-ib&amp;lt;/code&amp;gt; File specifying the list of hosts. Contents are lines like '172.18.4.11 slots=8': the IP address of a host (viz001, for instance) and the number of cores it has (slots=8). The hostfile format is MPI-implementation specific.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;-np 16&amp;lt;/code&amp;gt; Total number of processes that mpirun should start.&lt;br /&gt;
&lt;br /&gt;
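A hostfile for two viz nodes might look like the following sketch (the first address is viz001 as noted above; the second address is illustrative):&lt;br /&gt;

```text
# one line per host: IP address and number of cores (slots)
172.18.4.11 slots=8
172.18.4.12 slots=8
```

&lt;br /&gt;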
=== Connecting to server ===&lt;br /&gt;
&lt;br /&gt;
When the ParaView server starts, it will say it is accepting connections on some port. Connect to that port from a ParaView client.&lt;br /&gt;
&lt;br /&gt;
==CoProcessing==&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ParaView&amp;diff=585</id>
		<title>ParaView</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ParaView&amp;diff=585"/>
				<updated>2015-08-28T15:28:58Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Parallel (Client/Server) Mode */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
ParaView is a parallel, scalable, visualization package from Kitware. See http://paraview.org/&lt;br /&gt;
&lt;br /&gt;
== Running ==&lt;br /&gt;
&lt;br /&gt;
To launch a single threaded ParaView instance, first connect via [[VNC]], then use vglconnect to connect to one of the compute machines:&lt;br /&gt;
&lt;br /&gt;
  vglconnect -s viz001&lt;br /&gt;
&lt;br /&gt;
Add the desired version of ParaView to your environment (the below example will get the &amp;quot;default&amp;quot; version)&lt;br /&gt;
&lt;br /&gt;
  soft add @paraview&lt;br /&gt;
&lt;br /&gt;
and launch the GUI:&lt;br /&gt;
&lt;br /&gt;
  vglrun paraview&lt;br /&gt;
&lt;br /&gt;
== Viewing Serial Cases ==&lt;br /&gt;
&lt;br /&gt;
Within the GUI, open the .pht file that corresponds to your case.  The .pht file will appear in the &amp;quot;pipeline browser&amp;quot; within ParaView.  To actually see your model, click the &amp;quot;apply&amp;quot; button on the properties tab.  To visualize a particular flow property, choose that property from the dropdown menu in the &amp;quot;active variable controls&amp;quot; toolbar, and then click the button corresponding to the type of visualization you want (contour, slice, etc.) in the &amp;quot;common&amp;quot; toolbar.  The properties of the visualization element can then be controlled in the &amp;quot;properties&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
For more help with the ParaView GUI, see [http://www.paraview.org/Wiki/The_ParaView_Tutorial ParaView's tutorial].&lt;br /&gt;
&lt;br /&gt;
== Parallel (Client/Server) Mode ==&lt;br /&gt;
&lt;br /&gt;
To visualize cases in parallel on one viz node, start the ParaView server in parallel with&lt;br /&gt;
&lt;br /&gt;
  mpirun -np N pvserver&lt;br /&gt;
&lt;br /&gt;
where the number of processes N is less than or equal to 8.&lt;br /&gt;
&lt;br /&gt;
To visualize using more than one viz node, start a ParaView server utilizing multiple viz nodes with&lt;br /&gt;
&lt;br /&gt;
  mpirun --prefix A -x DISPLAY=&amp;quot;:0&amp;quot; -x PATH -x LD_LIBRARY_PATH -hostfile ~matthb2/hostfile-ib -np 16 pvserver&lt;br /&gt;
&lt;br /&gt;
  --prefix A&lt;br /&gt;
mpirun is located at A/bin/mpirun; the &amp;lt;code&amp;gt;which mpirun&amp;lt;/code&amp;gt; command will tell you this. Make sure to add one of the @paraview-version-number macros from softenv first, as it will set the best mpirun path for that version. A is the 'prefix' directory; in traditional Unix, the prefix directory contains the bin, etc, include, lib, and share directories associated with the program.&lt;br /&gt;
&lt;br /&gt;
  -x DISPLAY=&amp;quot;:0&amp;quot;&lt;br /&gt;
The -x flag copies a variable from the machine you run mpirun on (the head node) to the machines it starts processes on (the slaves).&lt;br /&gt;
DISPLAY gives the location of the X server; :0 assumes localhost. You could use viz002:0 to use the graphics card on viz002 instead. Display 0 is the default graphics hardware, whereas 1, 2, etc. could be VNC servers or other software displays.&lt;br /&gt;
&lt;br /&gt;
  -x PATH&lt;br /&gt;
The lookup path for the binary. PATH is an environment variable maintained by the shell.&lt;br /&gt;
&lt;br /&gt;
  -x LD_LIBRARY_PATH&lt;br /&gt;
The lookup path for shared libraries. Same as PATH, but for dynamically-linked libraries.&lt;br /&gt;
&lt;br /&gt;
  -hostfile ~matthb2/hostfile-ib&lt;br /&gt;
File specifying the list of hosts. Contents are lines like '172.18.4.11 slots=8': the IP address of a host (viz001, for instance) and the number of cores it has (slots=8). The hostfile format is MPI-implementation specific.&lt;br /&gt;
&lt;br /&gt;
  -np 16&lt;br /&gt;
Total number of processes that mpirun should start.&lt;br /&gt;
&lt;br /&gt;
When pvserver starts, it will say it is accepting connections on some port. Connect to that port from a ParaView client.&lt;br /&gt;
&lt;br /&gt;
==CoProcessing==&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ParaView&amp;diff=584</id>
		<title>ParaView</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ParaView&amp;diff=584"/>
				<updated>2015-08-28T15:28:26Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Parallel (Client/Server) Mode */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
ParaView is a parallel, scalable, visualization package from Kitware. See http://paraview.org/&lt;br /&gt;
&lt;br /&gt;
== Running ==&lt;br /&gt;
&lt;br /&gt;
To launch a single threaded ParaView instance, first connect via [[VNC]], then use vglconnect to connect to one of the compute machines:&lt;br /&gt;
&lt;br /&gt;
  vglconnect -s viz001&lt;br /&gt;
&lt;br /&gt;
Add the desired version of ParaView to your environment (the below example will get the &amp;quot;default&amp;quot; version)&lt;br /&gt;
&lt;br /&gt;
  soft add @paraview&lt;br /&gt;
&lt;br /&gt;
and launch the GUI:&lt;br /&gt;
&lt;br /&gt;
  vglrun paraview&lt;br /&gt;
&lt;br /&gt;
== Viewing Serial Cases ==&lt;br /&gt;
&lt;br /&gt;
Within the GUI, open the .pht file that corresponds to your case.  The .pht file will appear in the &amp;quot;pipeline browser&amp;quot; within ParaView.  To actually see your model, click the &amp;quot;apply&amp;quot; button on the properties tab.  To visualize a particular flow property, choose that property from the dropdown menu in the &amp;quot;active variable controls&amp;quot; toolbar, and then click the button corresponding to the type of visualization you want (contour, slice, etc.) in the &amp;quot;common&amp;quot; toolbar.  The properties of the visualization element can then be controlled in the &amp;quot;properties&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
For more help with the ParaView GUI, see [http://www.paraview.org/Wiki/The_ParaView_Tutorial ParaView's tutorial].&lt;br /&gt;
&lt;br /&gt;
== Parallel (Client/Server) Mode ==&lt;br /&gt;
&lt;br /&gt;
To visualize cases in parallel on one viz node, start the ParaView server in parallel with&lt;br /&gt;
&lt;br /&gt;
  mpirun -np N pvserver&lt;br /&gt;
&lt;br /&gt;
where the number of processes N is less than or equal to 8.&lt;br /&gt;
&lt;br /&gt;
To visualize using more than one viz node, start a ParaView server utilizing multiple viz nodes with&lt;br /&gt;
&lt;br /&gt;
  mpirun --prefix A -x DISPLAY=&amp;quot;:0&amp;quot; -x PATH -x LD_LIBRARY_PATH -hostfile ~matthb2/hostfile-ib -np 16 pvserver&lt;br /&gt;
&lt;br /&gt;
  --prefix A&lt;br /&gt;
mpirun is located at A/bin/mpirun; the &amp;lt;code&amp;gt;which mpirun&amp;lt;/code&amp;gt; command will tell you this. Make sure to add one of the @paraview-version-number macros from softenv first, as it will set the best mpirun path for that version. A is the 'prefix' directory; in traditional Unix, the prefix directory contains the bin, etc, include, lib, and share directories associated with the program.&lt;br /&gt;
&lt;br /&gt;
  -x DISPLAY=&amp;quot;:0&amp;quot;&lt;br /&gt;
The -x flag copies a variable from the machine you run mpirun on (the head node) to the machines it starts processes on (the slaves).&lt;br /&gt;
DISPLAY gives the location of the X server; :0 assumes localhost. You could use viz002:0 to use the graphics card on viz002 instead. Display 0 is the default graphics hardware, whereas 1, 2, etc. could be VNC servers or other software displays.&lt;br /&gt;
&lt;br /&gt;
  -x PATH&lt;br /&gt;
The lookup path for the binary. PATH is an environment variable maintained by the shell.&lt;br /&gt;
&lt;br /&gt;
  -x LD_LIBRARY_PATH&lt;br /&gt;
The lookup path for shared libraries. Same as PATH, but for dynamically-linked libraries.&lt;br /&gt;
&lt;br /&gt;
  -hostfile ~matthb2/hostfile-ib&lt;br /&gt;
File specifying the list of hosts. Contents are lines like '172.18.4.11 slots=8': the IP address of a host (viz001, for instance) and the number of cores it has (slots=8). The hostfile format is MPI-implementation specific.&lt;br /&gt;
&lt;br /&gt;
  -np 16&lt;br /&gt;
Total number of processes that mpirun should start.&lt;br /&gt;
&lt;br /&gt;
When pvserver starts, it will say it is accepting connections on some port. Connect to that port from a ParaView client.&lt;br /&gt;
&lt;br /&gt;
==CoProcessing==&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=ParaView&amp;diff=583</id>
		<title>ParaView</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=ParaView&amp;diff=583"/>
				<updated>2015-08-28T15:27:00Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Parallel (Client/Server) Mode */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
ParaView is a parallel, scalable, visualization package from Kitware. See http://paraview.org/&lt;br /&gt;
&lt;br /&gt;
== Running ==&lt;br /&gt;
&lt;br /&gt;
To launch a single threaded ParaView instance, first connect via [[VNC]], then use vglconnect to connect to one of the compute machines:&lt;br /&gt;
&lt;br /&gt;
  vglconnect -s viz001&lt;br /&gt;
&lt;br /&gt;
Add the desired version of ParaView to your environment (the below example will get the &amp;quot;default&amp;quot; version)&lt;br /&gt;
&lt;br /&gt;
  soft add @paraview&lt;br /&gt;
&lt;br /&gt;
and launch the GUI:&lt;br /&gt;
&lt;br /&gt;
  vglrun paraview&lt;br /&gt;
&lt;br /&gt;
== Viewing Serial Cases ==&lt;br /&gt;
&lt;br /&gt;
Within the GUI, open the .pht file that corresponds to your case.  The .pht file will appear in the &amp;quot;pipeline browser&amp;quot; within ParaView.  To actually see your model, click the &amp;quot;apply&amp;quot; button on the properties tab.  To visualize a particular flow property, choose that property from the dropdown menu in the &amp;quot;active variable controls&amp;quot; toolbar, and then click the button corresponding to the type of visualization you want (contour, slice, etc.) in the &amp;quot;common&amp;quot; toolbar.  The properties of the visualization element can then be controlled in the &amp;quot;properties&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
For more help with the ParaView GUI, see [http://www.paraview.org/Wiki/The_ParaView_Tutorial ParaView's tutorial].&lt;br /&gt;
&lt;br /&gt;
== Parallel (Client/Server) Mode ==&lt;br /&gt;
&lt;br /&gt;
To visualize cases in parallel on one viz node, start the ParaView server in parallel with&lt;br /&gt;
  mpirun -np N pvserver&lt;br /&gt;
where the number of processes N is less than or equal to 8.&lt;br /&gt;
&lt;br /&gt;
To visualize using more than one viz node, run the following&lt;br /&gt;
  mpirun --prefix A -x DISPLAY=&amp;quot;:0&amp;quot; -x PATH -x LD_LIBRARY_PATH -hostfile ~matthb2/hostfile-ib -np 16 pvserver&lt;br /&gt;
&lt;br /&gt;
  --prefix A&lt;br /&gt;
mpirun is located at A/bin/mpirun; the &amp;lt;code&amp;gt;which mpirun&amp;lt;/code&amp;gt; command will tell you this. Make sure to add one of the @paraview-version-number macros from softenv first, as it will set the best mpirun path for that version. A is the 'prefix' directory; in traditional Unix, the prefix directory contains the bin, etc, include, lib, and share directories associated with the program.&lt;br /&gt;
&lt;br /&gt;
  -x DISPLAY=&amp;quot;:0&amp;quot;&lt;br /&gt;
The -x flag copies a variable from the machine you run mpirun on (the head node) to the machines it starts processes on (the slaves).&lt;br /&gt;
DISPLAY gives the location of the X server; :0 assumes localhost. You could use viz002:0 to use the graphics card on viz002 instead. Display 0 is the default graphics hardware, whereas 1, 2, etc. could be VNC servers or other software displays.&lt;br /&gt;
&lt;br /&gt;
  -x PATH&lt;br /&gt;
The lookup path for the binary. PATH is an environment variable maintained by the shell.&lt;br /&gt;
&lt;br /&gt;
  -x LD_LIBRARY_PATH&lt;br /&gt;
The lookup path for shared libraries. Same as PATH, but for dynamically-linked libraries.&lt;br /&gt;
&lt;br /&gt;
  -hostfile ~matthb2/hostfile-ib&lt;br /&gt;
File specifying the list of hosts. Contents are lines like '172.18.4.11 slots=8': the IP address of a host (viz001, for instance) and the number of cores it has (slots=8). The hostfile format is MPI-implementation specific.&lt;br /&gt;
&lt;br /&gt;
  -np 16&lt;br /&gt;
Total number of processes that mpirun should start.&lt;br /&gt;
&lt;br /&gt;
When pvserver starts, it will say it is accepting connections on some port. Connect to that port from a ParaView client.&lt;br /&gt;
&lt;br /&gt;
==CoProcessing==&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Git&amp;diff=555</id>
		<title>Git</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Git&amp;diff=555"/>
				<updated>2015-06-05T18:58:32Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Git is a distributed version control system (VCS), designed to work well with highly non-linear workflows. It was initially conceived to support development of the Linux kernel, and has since become the most widely-used VCS for software development.&lt;br /&gt;
&lt;br /&gt;
This page is essentially a cheat-sheet for git commands frequently used by our group.&lt;br /&gt;
&lt;br /&gt;
== Basic Git ==&lt;br /&gt;
&lt;br /&gt;
Basic information to get started with Git can be found at http://rogerdudler.github.io/git-guide/.&lt;br /&gt;
&lt;br /&gt;
; Git repository (repo)&lt;br /&gt;
: Is a directory that contains files of the current branch and a hidden &amp;lt;code&amp;gt;.git&amp;lt;/code&amp;gt; directory&lt;br /&gt;
: Contains an '''Index''' in &amp;lt;code&amp;gt;.git&amp;lt;/code&amp;gt; that acts as a staging area for all your branches&lt;br /&gt;
: Contains a '''HEAD''' in &amp;lt;code&amp;gt;.git&amp;lt;/code&amp;gt; that points to the last commit you made&lt;br /&gt;
: Your PHASTA repository should always be at &amp;lt;code&amp;gt;~/git-phasta/phasta&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; Create a new repository&lt;br /&gt;
: &amp;lt;code&amp;gt;git init&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; Checkout a repository&lt;br /&gt;
: &amp;lt;code&amp;gt;git clone /path/to/repo&amp;lt;/code&amp;gt;&lt;br /&gt;
: &amp;lt;code&amp;gt;git clone user@host:/path/to/repo&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; Add files to be tracked in the git index&lt;br /&gt;
: &amp;lt;code&amp;gt;git add &amp;lt;filename1&amp;gt; &amp;lt;filename2&amp;gt; ...&amp;lt;/code&amp;gt;&lt;br /&gt;
: &amp;lt;code&amp;gt;git add *&amp;lt;/code&amp;gt;&lt;br /&gt;
: &amp;lt;code&amp;gt;git add -u&amp;lt;/code&amp;gt; (all files updated since last commit)&lt;br /&gt;
&lt;br /&gt;
; Commit changes to HEAD&lt;br /&gt;
: &amp;lt;code&amp;gt;git commit -m &amp;quot;my_username: my comments about this commit&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; Push changes to remote repo&lt;br /&gt;
: &amp;lt;code&amp;gt;git push origin master&amp;lt;/code&amp;gt; (pushes the &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt; branch to the original/main repo)&lt;br /&gt;
&lt;br /&gt;
; Connect to a remote server&lt;br /&gt;
: &amp;lt;code&amp;gt;git remote add origin &amp;lt;server&amp;gt;&amp;lt;/code&amp;gt; (connects &amp;lt;code&amp;gt;&amp;lt;server&amp;gt;&amp;lt;/code&amp;gt; to your main, or &amp;lt;code&amp;gt;origin&amp;lt;/code&amp;gt; repository)&lt;br /&gt;
: &amp;lt;code&amp;gt;git remote add myrepo &amp;lt;server&amp;gt;&amp;lt;/code&amp;gt; (connects &amp;lt;code&amp;gt;&amp;lt;server&amp;gt;&amp;lt;/code&amp;gt; as a named remote called &amp;lt;code&amp;gt;myrepo&amp;lt;/code&amp;gt;)&lt;br /&gt;
: Note: &amp;lt;code&amp;gt;&amp;lt;server&amp;gt;&amp;lt;/code&amp;gt; could be &amp;lt;code&amp;gt;user@host:/path/to/repo&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; Create and delete branch&lt;br /&gt;
: &amp;lt;code&amp;gt;git checkout -b branch_name&amp;lt;/code&amp;gt; (creates and checks out a branch)&lt;br /&gt;
: &amp;lt;code&amp;gt;git checkout master&amp;lt;/code&amp;gt; (checks out the master branch)&lt;br /&gt;
: &amp;lt;code&amp;gt;git branch -d branch_name&amp;lt;/code&amp;gt; (deletes the branch we just created)&lt;br /&gt;
&lt;br /&gt;
; Push branch to a remote&lt;br /&gt;
: &amp;lt;code&amp;gt;git push origin branch&amp;lt;/code&amp;gt; (pushes &amp;lt;code&amp;gt;branch&amp;lt;/code&amp;gt; to your main &amp;lt;code&amp;gt;origin&amp;lt;/code&amp;gt; repo)&lt;br /&gt;
&lt;br /&gt;
; Update and merge&lt;br /&gt;
: &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt; (so all branches are up-to-date for the merge)&lt;br /&gt;
: &amp;lt;code&amp;gt;git merge origin/master&amp;lt;/code&amp;gt; (merges current branch with &amp;lt;code&amp;gt;origin&amp;lt;/code&amp;gt;'s &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt; branch)&lt;br /&gt;
: If a merge has conflicts, you need to edit the conflict files provided by Git, and then mark them as merged with &amp;lt;code&amp;gt;git add&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; Compare code in different branches&lt;br /&gt;
: &amp;lt;code&amp;gt;git diff &amp;lt;branch_a&amp;gt; &amp;lt;branch_b&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; View history of commits&lt;br /&gt;
: &amp;lt;code&amp;gt;git log&amp;lt;/code&amp;gt;&lt;br /&gt;
: &amp;lt;code&amp;gt;git log --author=foo&amp;lt;/code&amp;gt;&lt;br /&gt;
: &amp;lt;code&amp;gt;git log --pretty=oneline&amp;lt;/code&amp;gt; (show each commit as one line)&lt;br /&gt;
: &amp;lt;code&amp;gt;git log --graph --oneline --decorate --all&amp;lt;/code&amp;gt; (ASCII tree of branches, commits, merges)&lt;br /&gt;
: &amp;lt;code&amp;gt;git log --name-status&amp;lt;/code&amp;gt; (show which files changed in each commit)&lt;br /&gt;
: &amp;lt;code&amp;gt;git log --help&amp;lt;/code&amp;gt; (more information)&lt;br /&gt;
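As a sketch of how the commands above fit together, here is a minimal end-to-end session (paths and names such as /tmp/git-demo-repo and the feature branch are illustrative, not part of our setup):&lt;br /&gt;

```shell
# Minimal session: init, commit, branch, merge, inspect history.
set -e
rm -rf /tmp/git-demo-repo
git init -q /tmp/git-demo-repo
cd /tmp/git-demo-repo
git config user.email "demo@example.com"
git config user.name "Demo User"

echo "hello" > README
git add README
git commit -q -m "demo: initial commit"

# Remember the initial branch name (master or main, depending on git config)
start=$(git symbolic-ref --short HEAD)

git checkout -q -b feature          # create and check out a branch
echo "more" >> README
git add -u                          # stage all files updated since last commit
git commit -q -m "demo: extend README"

git checkout -q "$start"
git merge -q feature                # fast-forward merge
git branch -d feature               # delete the merged branch
git log --pretty=oneline            # shows both commits, one per line
```

&lt;br /&gt;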
&lt;br /&gt;
== Advanced Git ==&lt;br /&gt;
&lt;br /&gt;
== Git Prompt Statement (PS1) ==&lt;br /&gt;
&lt;br /&gt;
# Acquire &amp;lt;code&amp;gt;git-prompt.sh&amp;lt;/code&amp;gt; from https://github.com/git/git/blob/master/contrib/completion/git-prompt.sh, and save it somewhere like &amp;lt;code&amp;gt;~/.git-prompt.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
# Edit your &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; to say &amp;lt;code&amp;gt;source ~/.git-prompt.sh&amp;lt;/code&amp;gt; before the lines that set your &amp;lt;code&amp;gt;PS1&amp;lt;/code&amp;gt;&lt;br /&gt;
# Modify your &amp;lt;code&amp;gt;PS1&amp;lt;/code&amp;gt; string to include &amp;lt;code&amp;gt;$(__git_ps1 &amp;quot;(%s)&amp;quot;)&amp;lt;/code&amp;gt; where you want the current branch to appear in your prompt&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Git&amp;diff=554</id>
		<title>Git</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Git&amp;diff=554"/>
				<updated>2015-06-05T18:17:34Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: Created page with &amp;quot;== Introduction ==  Git is a distributed revision control system (VCS), designed to work well with highly non-linear workflows. It was initially conceived to support development ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Git is a distributed version control system (VCS), designed to work well with highly non-linear workflows. It was initially conceived to support development of the Linux kernel, and has since become the most widely-used VCS for software development.&lt;br /&gt;
&lt;br /&gt;
This page is essentially a cheat-sheet for git commands frequently used by our group.&lt;br /&gt;
&lt;br /&gt;
== Basic Git ==&lt;br /&gt;
&lt;br /&gt;
Basic information to get started with Git can be found at http://rogerdudler.github.io/git-guide/.&lt;br /&gt;
&lt;br /&gt;
; Git repository&lt;br /&gt;
: Is a directory that contains files of the current branch and a hidden &amp;lt;code&amp;gt;.git&amp;lt;/code&amp;gt; directory&lt;br /&gt;
: Contains an '''Index''' in &amp;lt;code&amp;gt;.git&amp;lt;/code&amp;gt; that acts as a staging area for all your branches&lt;br /&gt;
: Contains a '''HEAD''' in &amp;lt;code&amp;gt;.git&amp;lt;/code&amp;gt; that points to the last commit you made&lt;br /&gt;
: Your PHASTA repository should always be at &amp;lt;code&amp;gt;~/git-phasta/phasta&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; Create a new repository&lt;br /&gt;
: &amp;lt;code&amp;gt;git init&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; Checkout a repository&lt;br /&gt;
: &amp;lt;code&amp;gt;git clone /path/to/repository&amp;lt;/code&amp;gt;&lt;br /&gt;
: &amp;lt;code&amp;gt;git clone username@remote-host:/path/to/repository&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; Add files to be tracked in the git index&lt;br /&gt;
: &amp;lt;code&amp;gt;git add &amp;lt;filename1&amp;gt; &amp;lt;filename2&amp;gt; ...&amp;lt;/code&amp;gt;&lt;br /&gt;
: &amp;lt;code&amp;gt;git add *&amp;lt;/code&amp;gt;&lt;br /&gt;
: &amp;lt;code&amp;gt;git add -u&amp;lt;/code&amp;gt; (all files updated since last commit)&lt;br /&gt;
&lt;br /&gt;
; Commit changes to HEAD&lt;br /&gt;
: &amp;lt;code&amp;gt;git commit -m &amp;quot;my_username: my comments about this commit&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; Push changes to remote repo&lt;br /&gt;
: &amp;lt;code&amp;gt;git push origin master&amp;lt;/code&amp;gt; (pushes the &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt; branch to the original/main repo)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== More Advanced Material ==&lt;br /&gt;
&lt;br /&gt;
== Setting Up PHASTA ==&lt;br /&gt;
&lt;br /&gt;
== Modifying PHASTA ==&lt;br /&gt;
&lt;br /&gt;
==&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=542</id>
		<title>Chef/Mesh Partitioning</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=542"/>
				<updated>2015-03-29T06:22:48Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Chef */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is adapted from a tutorial provided to Igor and his team at NCSU in order to set up two-phase flow test cases on Firebird, a local cluster at NCSU, and on Cetus/Mira at ALCF.&lt;br /&gt;
At this time, do not expect anything but a series of copy-pastes from emails. &lt;br /&gt;
Please update this page for our viz nodes when you get a chance. &lt;br /&gt;
&lt;br /&gt;
Thanks, &lt;br /&gt;
&lt;br /&gt;
- Michel&lt;br /&gt;
&lt;br /&gt;
Here is a tutorial on how to partition the initial mesh and generate the phasta files on firebird (and other platforms, including Cetus/Mira) using Chef. This tutorial is rather long but should include everything you need.&lt;br /&gt;
The testcase to demonstrate the workflow is the familiar 3-way subchannel flow. The root path of this test case is	/sgidata2/mrasquin/Models/subchannel. The parasolid model is located in /sgidata2/mrasquin/Models/subchannel/convertParasolid2ParasolidNative/geomFromSimmodeler_nat.xmt_txt.&lt;br /&gt;
The workflow that describes how to use Chef is now explained in the next sections.&lt;br /&gt;
&lt;br /&gt;
== Env variables==&lt;br /&gt;
&lt;br /&gt;
All the subsequent tools need&lt;br /&gt;
* The fresh version of openmpi I built on firebird&lt;br /&gt;
* The latest Simmetrix library I installed in /Install on firebird.&lt;br /&gt;
&lt;br /&gt;
To update your paths, source the following file:&lt;br /&gt;
&amp;lt;code&amp;gt;/Install/SCOREC.develop/envLinux2014.sh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The env variables defined or updated in this env script include PATH and LD_LIBRARY_PATH. What is defined in this script should take precedence over your own settings, but I strongly suggest removing any redundancy that you may have, for instance, in your .bashrc. Note that I actually source this env file directly in my .bashrc so that I do not have to do it manually every time I log in to firebird. When you source it, it will also print the versions of gcc, openmpi, and the simmodsuite lib that are set up.&lt;br /&gt;
&lt;br /&gt;
== BLMesherParallel ==&lt;br /&gt;
&lt;br /&gt;
Note that Simmetrix only supports matched faces for a single-part mesh, so the mesh must be built on one core. However, the initial mesh must already include some information related to the partitioning, for format reasons, even though the mesh contains only a single part. This additional information about the partitioning is required to convert the mesh file from the Simmetrix format to the SCOREC MDS format that Chef can read.&lt;br /&gt;
&lt;br /&gt;
The initial mesh for the 3-way subchannel was built in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0&amp;lt;/code&amp;gt;. Check the script named &amp;lt;code&amp;gt;runBLMesherParallel.sh&amp;lt;/code&amp;gt; in this directory.&lt;br /&gt;
&lt;br /&gt;
Running &amp;lt;code&amp;gt;./runBLMesherParallel.sh&amp;lt;/code&amp;gt; with no arguments will tell you the usage, that is:&lt;br /&gt;
 Usage: ./runBLMesherParallel.sh &amp;lt;X&amp;gt; &amp;lt;Y&amp;gt; &amp;lt;Z&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The arguments are as follows.&lt;br /&gt;
* &amp;lt;X&amp;gt; (geometric model) should be the parasolid model geomFromSimmodeler_nat.xmt_txt.&lt;br /&gt;
* &amp;lt;Y&amp;gt; (attribute file) should be BLattr.inp.&lt;br /&gt;
* &amp;lt;Z&amp;gt; (number of processors) should be 1 here since we need to generate a single part mesh using a single core.&lt;br /&gt;
&lt;br /&gt;
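For the subchannel case above, a concrete invocation would look like the following sketch (run from the case directory, with the arguments as described above):&lt;br /&gt;

```text
./runBLMesherParallel.sh geomFromSimmodeler_nat.xmt_txt BLattr.inp 1
```

&lt;br /&gt;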
The BLattr.inp input file is the same as the one read by the old serial version of BLMesher, and BLMesherParallel can do everything the old version of BLMesher can do. In addition, if your test case does not include any matched faces, you may try to mesh in parallel by specifying &amp;lt;Z&amp;gt; larger than 1. However, some meshing features are available only when BLMesherParallel is used with a single core, so it is always important to check the resulting mesh.&lt;br /&gt;
&lt;br /&gt;
BLMesherParallel outputs the following files.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;mesh.sms&amp;lt;/code&amp;gt; --- The resulting mesh is stored in a directory named mesh.sms, which is a parameter hardcoded in the runBLMesherParallel.sh script.&lt;br /&gt;
* &amp;lt;code&amp;gt;BLMesher.log&amp;lt;/code&amp;gt; --- The log from BLMesherParallel is saved in BLMesher.log, whereas the Simmetrix log is saved in mesh.log. Both filenames are also hardcoded in the script.&lt;br /&gt;
&lt;br /&gt;
I also mentioned in previous discussions that Simmetrix has developed its own model format called geomsim. However, the boundary layer collapses near matched faces with this model format, which is not the case when we use the parasolid format. This issue has been reported to Simmetrix but until they can provide a fix, we are forced to start with the parasolid format when our test cases include matched faces.&lt;br /&gt;
&lt;br /&gt;
== Mesh conversion==&lt;br /&gt;
&lt;br /&gt;
Chef can read only the MDS format developed at SCOREC. Therefore, the Simmetrix mesh must first be converted to this format.&lt;br /&gt;
&lt;br /&gt;
This operation was carried out for the 3-way channel in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/simMeshToMdsMesh&amp;lt;/code&amp;gt;. Simply run the script &amp;lt;code&amp;gt;./simMeshToMdsMesh.sh&amp;lt;/code&amp;gt;, which executes the &amp;quot;convert&amp;quot; executable. In the script, you can see that the convert executable reads 3 arguments:&lt;br /&gt;
# The '''input parasolid model''' named geom.xmt_txt, which points to geomFromSimmodeler_nat.x_t. Note that convert expects an .xmt_txt extension (or an .smd extension for the complete geomsim format).&lt;br /&gt;
# The '''input Simmetrix mesh''' named here parts.sms (for historical reason but can be renamed).&lt;br /&gt;
# The '''name of the output mds mesh directory''', which is mdsMesh_bz2 here. Note that this name is prepended by &amp;quot;bz2:&amp;quot;, which means that the output mds mesh file is compressed using bzip2. &amp;quot;bz2:&amp;quot; will not be part of the name of the output directory. If you do not specify &amp;quot;bz2:&amp;quot;, the mds mesh file will be saved in ascii format, which is a waste of space so I suggest to always prepend your directory name by &amp;quot;bz2:&amp;quot;. This will also apply later to the output mesh directory generated by Chef (see below).&lt;br /&gt;
&lt;br /&gt;
Note that convert needs to run with a number of processes (-np ##) equal to the number of input parts in the Simmetrix mesh. For cases that include matched faces, the Simmetrix mesh must include only one part, which is why convert runs here with -np 1. In other circumstances, convert can run in parallel if the Simmetrix mesh has already been partitioned into n parts with n&amp;gt;1 (for instance, a mesh generated in parallel with BLMesherParallel and/or partitioned with phParAdapt-Simmetrix).&lt;br /&gt;
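Put together, a dry-run sketch of the conversion step looks as follows; the argument order follows the three arguments listed above, and the command is printed rather than executed:&lt;br /&gt;

```shell
# Dry-run sketch of the convert step: -np must equal the number of parts
# in the input Simmetrix mesh (1 here because the model has matched faces).
NP=1
echo "mpirun -np $NP convert geom.xmt_txt parts.sms bz2:mdsMesh_bz2"
```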
&lt;br /&gt;
== Boundary and initial conditions (spj file)==&lt;br /&gt;
&lt;br /&gt;
Before running Chef for mesh operations such as uniform refinement, tetrahedronization and partitioning, we need to define the BCs and ICs for the generation of the phasta files. These BCs and ICs are defined in an spj file, which is in ASCII to facilitate scripting of BCs/ICs. Most of the attributes you are familiar with from the Simmodeler GUI can be specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
For the 3-way channel flow, see the spj file located in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Simplified_SPJ_file/geom.spj&amp;lt;/code&amp;gt;. Each line corresponds to one attribute that applies to one face.&lt;br /&gt;
&lt;br /&gt;
The structure of the spj file is:&lt;br /&gt;
 # Optional comments anywhere preceded by the pound symbol (#).&lt;br /&gt;
 # For each boundary or initial condition a line as follows:&lt;br /&gt;
 &amp;lt;attribute_name&amp;gt;: &amp;lt;face_id&amp;gt; &amp;lt;dimension&amp;gt; &amp;lt;attribute list&amp;gt;&lt;br /&gt;
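For illustration only, a fragment following this structure might look like the sketch below. The attribute names, face ids and values are placeholders, not the exact strings Chef expects; check the geom.spj referenced above for real attribute names.&lt;br /&gt;

```text
# inlet velocity on face 82 (attribute name and values are placeholders)
inlet velocity: 82 2 1.0 0.0 0.0 1.0
# initial condition applied to the whole 3D region
initial velocity: 92 3 0.0 0.0 0.0
```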
&lt;br /&gt;
Note the following.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;dimension&amp;gt;&amp;lt;/code&amp;gt;: 2 for a face attribute in 2D, 3 for the initial conditions that apply to the 3D domain. 1D and 0D attributes are also allowed for lines and vertices if needed.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;attribute list&amp;gt;&amp;lt;/code&amp;gt;: typically a magnitude and a direction, where applicable.&lt;br /&gt;
&lt;br /&gt;
Syntax is strict.&lt;br /&gt;
* No empty lines. Each line must be either a comment, which starts with the # character, or an attribute.&lt;br /&gt;
* There must be one single space after the colon character.&lt;br /&gt;
* There must be one single space between any two numbers.&lt;br /&gt;
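As a sanity check, the line-format rules (comment or attribute, no empty lines, a single space after the colon) can be partially enforced with a small grep-based helper. This is an illustrative sketch, not part of Chef, and it does not check the spacing between numbers:&lt;br /&gt;

```shell
# Partial spj syntax check (a sketch): flags empty lines, lines that are
# neither comments nor attributes, and extra spaces after the colon.
check_spj() {
  if grep -nvE '^(#|[^:]+: [^ ].*)$' "$1"; then
    return 1   # offending lines were printed above
  else
    return 0
  fi
}
```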
&lt;br /&gt;
In this example, a zero &amp;quot;traction vector&amp;quot; attribute is specified on the periodic faces parallel to the length of the channel. It is wrong to specify such an attribute on these periodic faces for a 3-way channel; it was inherited from the 1-way periodic channel, where these faces were slip walls instead of periodic faces. I will try to update my test cases in the future. But because we now have continuous integration tools that run every night to verify the Chef code, I would need to update all the cases if I modified the spj file now. So double-check the attributes that you need for this model and consider the existing spj file as a source of inspiration rather than the correct spj file for production runs.&lt;br /&gt;
&lt;br /&gt;
== Chef ==&lt;br /&gt;
&lt;br /&gt;
A few rules must be followed to run Chef.&lt;br /&gt;
&lt;br /&gt;
First, the number of mpi processes must be equal to the number of input parts (''this has changed in the newest version of Chef, as described below'').&lt;br /&gt;
&lt;br /&gt;
Second, Chef is threaded with OpenMP, and the total number of output parts after partitioning should be at most equal to the total number of available hardware threads of your machine/allocation. On BGQ, there are 4 hardware threads per core. On Linux platforms such as firebird, the number of hardware threads corresponds to the number of available cores. That said, we have observed that Chef can hang if the number of output parts is equal to the total number of available hardware threads. It is therefore safer to limit the number of output parts to fewer than the number of available hardware threads; on firebird, we should not try to partition a mesh into more than 16 parts.&lt;br /&gt;
&lt;br /&gt;
The next mesh operations will have to take place on Tukey and Cetus/Mira.&lt;br /&gt;
&lt;br /&gt;
The first example of a partitioning with Chef can be found in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch&amp;lt;/code&amp;gt;. With my naming convention, &amp;lt;code&amp;gt;4-1-Chef-PartLocal-Scratch&amp;lt;/code&amp;gt; can be decomposed as follows:&lt;br /&gt;
* The first number (4) corresponds to the number of output parts.&lt;br /&gt;
* The second number (1) corresponds to the number of input parts.&lt;br /&gt;
* &amp;quot;Chef&amp;quot; means this mesh was treated with this program (as opposed to phParAdapt, phTest, etc., which are previous executables that we used for similar purposes).&lt;br /&gt;
* &amp;quot;PartLocal&amp;quot; means the mesh is partitioned locally.&lt;br /&gt;
* &amp;quot;Scratch&amp;quot; means that the initial solution in the resulting phasta files is generated entirely from the spj file defined in a previous section of this tutorial. That is, we are starting a simulation &amp;quot;from scratch,&amp;quot; using the spj file's initial conditions as opposed to a solution migrated from a previous run.&lt;br /&gt;
&lt;br /&gt;
In summary, Chef was used in this directory to partition a single part mesh into 4 parts and the solution in the phasta files was generated directly from scratch using the spj file.&lt;br /&gt;
&lt;br /&gt;
=== Chef's input files ===&lt;br /&gt;
&lt;br /&gt;
The script to run Chef is named runChef.sh in this directory and simply calls the executable. Chef reads everything it needs from two input files called numstart.dat and adapt.inp.&lt;br /&gt;
&lt;br /&gt;
==== numstart.dat ====&lt;br /&gt;
&lt;br /&gt;
Instead of building the initial solution from scratch using the initial conditions defined in the spj file, the user can migrate an existing solution stored in a set of restart files saved from a previous phasta simulation. The file numstart.dat contains the time step stamp of the input restart files to read in order to migrate a solution.&lt;br /&gt;
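For example, to point Chef at restart files saved at time step 200 (the step number is illustrative), numstart.dat holds that single number:&lt;br /&gt;

```shell
# numstart.dat contains a single number: the time step stamp of the
# restart files to read (200 is an illustrative value).
echo "200" > numstart.dat
cat numstart.dat
```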
&lt;br /&gt;
==== adapt.inp ====&lt;br /&gt;
&lt;br /&gt;
This input file contains all the other parameters Chef expects. Note that many of these parameters have been inherited from the old phParAdapt, and are currently obsolete or unused. In what follows, all the parameters available in adapt.inp are listed and the critical parameters are in bold. Any line that starts with # is ignored.&lt;br /&gt;
&lt;br /&gt;
* '''globalP''': obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''timeStepNumber''': the time step stamp of the output phasta files that will be generated by Chef. This stamp can be different from the number specified in numstart.dat, which can be practical in some situations, but most of the time this number is set equal to what is specified in numstart.dat.&lt;br /&gt;
&lt;br /&gt;
* '''ensa_dof''': this corresponds to the number of degrees of freedom in the solution field of the output restart file. Note that it should correspond to the number of initial conditions specified in the spj file if the solution is built from scratch. When the solution is migrated from existing restart files, it should also correspond to the number of dof in the existing solution field. Here, this number is set to 5 for single phase flow with no turbulence model.&lt;br /&gt;
&lt;br /&gt;
* '''attributeFileName''': path to the spj file for the boundary and potentially initial conditions&lt;br /&gt;
&lt;br /&gt;
* '''modelFileName''': path to the geometric model (can be a parasolid or geomsim model on Linux but only geomsim is available on BGQ).&lt;br /&gt;
&lt;br /&gt;
* '''meshFileName''': path to the directory that includes the input mesh files under the SCOREC MDS format. Note that the path must end with a /. This path can also be prepended with &amp;quot;bz2:&amp;quot; to tell the mesh file reader that the files have been compressed. This follows the same convention as mentioned in the Mesh conversion section above.&lt;br /&gt;
&lt;br /&gt;
* '''outMeshFileName''': obviously the name of the directory that will include the resulting output mesh files. Note again the trailing / character. The same convention with &amp;quot;bz2:&amp;quot; keyword applies.&lt;br /&gt;
&lt;br /&gt;
* '''restartFileName''': the path to the restart files that need to be read in when solution migration is activated. In this case, the path should look, for instance, like &amp;quot;../4-procs_case/restart&amp;quot;. The phasta reader will then add the time step stamp to the name of this restartFileName variable, as well as the file number. When there is no solution migration, as in this example, this parameter can be commented out for the sake of clarity.&lt;br /&gt;
&lt;br /&gt;
* '''adaptFlag''': if 0, no mesh adaptation will take place. If set to 1 and AdaptStrategy is set to 7, the mesh will be uniformly refined. Note that adaptation only works with a mixed mesh (with wedges in the BL) and not with an all-tet mesh. Tetrahedronization should therefore take place after uniform refinement. Right now, the mixed mesh gets uniformly refined everywhere, including the BL, but it is possible to refine uniformly outside the BL only with some light modifications of the code. In the future, we hope to have other adaptation strategies in place in Chef based on local error indicators. If interested in those strategies, phParAdapt-Simmetrix must be used. If adaptFlag is set to 1, note also that SolutionMigration must also be set to 1 (see below for this parameter) and the path to the restart files specified.&lt;br /&gt;
&lt;br /&gt;
* rRead: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* rStart: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''AdaptStrategy''': this parameter is read if adaptFlag is 1. When set to 7, uniform refinement of a mixed mesh takes place. This is currently the only strategy tested in Chef. If interested in other, more sophisticated adaptation strategies, phParAdapt-Simmetrix must be used for now.&lt;br /&gt;
&lt;br /&gt;
* '''RecursiveUR''': if AdaptStrategy is set to 7, Chef offers the possibility to do recursive uniform refinement within the same job. Beware of the memory consumption if you set this value to more than 1, since the mesh can grow quickly.&lt;br /&gt;
&lt;br /&gt;
* Periodic: obsolete. Periodicity in the mesh and in the solution is now treated automatically, as long as i) the mesh built with BLMesher is periodic (i.e. the location of the mesh vertices on periodic faces is the same) and ii) the spj file contains the correct &amp;quot;periodic slave&amp;quot; attributes.&lt;br /&gt;
&lt;br /&gt;
* prCD: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* timing: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* outputFormat: obsolete. Phasta files are saved by default in binary format.&lt;br /&gt;
&lt;br /&gt;
* internalBCNodes: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* WRITEASC: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* phastaIO: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''numTotParts''': Final number of parts. If numTotParts is larger than the number of Chef processes which is equal to the number of input parts, the mesh will be partitioned.&lt;br /&gt;
&lt;br /&gt;
* '''elementsPerMigration''': in order to reduce the memory footprint of Chef, the user can reduce the default number of elements that can be migrated at a time during partitioning or partition improvement.&lt;br /&gt;
&lt;br /&gt;
* '''SolutionMigration''': activates the migration of the solution from an existing set of restart files. In this case, the path to the phasta files that contain the solution to migrate must be specified through the restartFileName parameter (see above). If the mesh is refined, the migrated solution will be interpolated to the new vertices of the mesh. Note also that if the solution is migrated, then the spj file should contain NO information about the initial conditions, because any information mentioned in the spj file prevails: if the spj file contains initial conditions, the solution migrated from existing restart files will be overwritten and the resulting phasta files will again include the scratch solution specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
* '''DisplacementMigration''': also migrates the displacement field along with the solution field for other adaptation strategies. Not used for AdaptStrategy 7, so it can be ignored for now.&lt;br /&gt;
&lt;br /&gt;
* isReorder: obsolete/unused. Reordering for better cache performance is now applied by default to both the phasta files and mesh files.&lt;br /&gt;
&lt;br /&gt;
* '''Tetrahedronize''': tetrahedronizes a mixed mesh if set to 1. Note that if both adaptFlag and Tetrahedronize are set to 1, adaptation of the input mixed mesh will take place before tetrahedronization. In all cases, partitioning is always the last mesh operation. Again, an all-tet mesh cannot be further refined, so tetrahedronization should not take place too early in the partitioning workflow, in order to keep enough aggregated memory for potential future adaptation.&lt;br /&gt;
&lt;br /&gt;
* numSplit: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''LocalPtn''': local partitioning if set to 1, global partitioning if set to 0. Currently, only local partitioning is implemented in Chef, and it has been shown to be sufficient so far.&lt;br /&gt;
&lt;br /&gt;
* '''RecursivePtn''': should always be set to 1. In the past, this parameter allowed recursive partitioning steps in phParAdapt. The code will stop or crash if this parameter is not 1.&lt;br /&gt;
&lt;br /&gt;
* RecursivePtnStep: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''partitionMethod''': currently, the GRAPH method for local partitioning is hardcoded in one of the Chef routines.&lt;br /&gt;
&lt;br /&gt;
* '''ParmaPtn''': If set to 1, the load balance in terms of both elements and vertices per part is improved further after the partitioning with Parma. It is strongly suggested to keep ParmaPtn set to 1.&lt;br /&gt;
&lt;br /&gt;
* '''dwalMigration''': This parameter is useful in case the distance to the wall for a turbulence model such as RANS or DDES has already been computed by phasta. In this case, it is possible to migrate also this field along with the solution field. SolutionMigration must therefore be set to 1 for that purpose, since the dwal field cannot be migrated alone without the solution field.&lt;br /&gt;
&lt;br /&gt;
* '''buildMapping''': This computes the vertex mapping between the input and output mesh. It is strongly suggested to keep this parameter always set to 1. Otherwise, you will not be able to reduce your solution from your final partitioning down to the initial or any intermediate mesh (we have developed a tool for that purpose), which can be catastrophic if you are interested in local adaptation based on an error indicator. Note that building the mapping does not make sense if the mesh is uniformly refined so it should be set to 0 in this case.&lt;br /&gt;
&lt;br /&gt;
* '''initBubbles''': Chef will use the external bubble information file 'bubbles.inp' to initialize the level set distance field if this flag is activated.&lt;br /&gt;
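Pulling the critical parameters together, a minimal adapt.inp for the 4-1 from-scratch example might look roughly as follows. This is an unverified sketch: the parameter names come from the list above, the values are illustrative, and the exact key/value syntax should be checked against a real adapt.inp from one of the example directories.&lt;br /&gt;

```text
# Illustrative adapt.inp sketch (values not verified)
timeStepNumber 0
ensa_dof 5
attributeFileName geom.spj
modelFileName geom.xmt_txt
meshFileName bz2:mdsMesh_bz2/
outMeshFileName bz2:mdsMesh_4parts_bz2/
adaptFlag 0
SolutionMigration 0
Tetrahedronize 0
numTotParts 4
LocalPtn 1
RecursivePtn 1
ParmaPtn 1
buildMapping 1
```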
&lt;br /&gt;
The second example of a partitioning with Chef can be found in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-Tet-PartLocal-SolMgr&amp;lt;/code&amp;gt;. For this case, based on the naming convention of 8-4-Chef-Tet-PartLocal-SolMgr (and the parameters specified in adapt.inp and numstart.dat),&lt;br /&gt;
* the number of output parts requested is 8, &lt;br /&gt;
* the number of input parts is 4 (note &amp;quot;-np 4&amp;quot; in the runChef.sh script),&lt;br /&gt;
* the input mixed mesh is first tetrahedronized before being partitioned. &lt;br /&gt;
* the solution in the resulting phasta files is migrated from the previous Chef run. &lt;br /&gt;
Note that the spj file is different for this second example and the initial conditions have been commented out in order not to overwrite the solution that is migrated from the previous Chef run.&lt;br /&gt;
&lt;br /&gt;
The third and final example can be found in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-UR2-Tet-PartLocal-SolMgr&amp;lt;/code&amp;gt;. In this directory 8-4-Chef-UR2-Tet-PartLocal-SolMgr, Chef&lt;br /&gt;
* reads a four-part mesh,&lt;br /&gt;
* applies a double recursive uniform refinement,&lt;br /&gt;
* tetrahedronizes the resulting mixed mesh that has been uniformly refined twice,&lt;br /&gt;
* partitions the resulting 4-part all-tet uniformly refined mesh into 8 parts,&lt;br /&gt;
* migrates and interpolates the solution read from the existing restart files coming from the first example.&lt;br /&gt;
&lt;br /&gt;
As a final comment, note that the restart files are always read directly from a procs_case directory. However, when the number of output restart files exceeds 2048, the restart files are saved in subdirectories of the root procs_case directory in order to reduce file contention, in a similar (but still different) way to what you implemented at some point in your version of phasta. The best strategy would be to write phasta files using MPI-IO, for instance, so that we can store more than one part in a single file and avoid large numbers of phasta files.&lt;br /&gt;
&lt;br /&gt;
For further partitioning on BG/Q machines, a conversion to the native Parasolid model is required. The tool is located in &amp;lt;code&amp;gt;/Install/SCOREC.develop/scorec/test/cadToSim/cadToSim&amp;lt;/code&amp;gt;&lt;br /&gt;
and should be run from [Case directory]/convertParasolid2ParasolidNative/ on firebird.&lt;br /&gt;
&lt;br /&gt;
== Updated Chef version (2015/03/26)==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== MPI implementation ===&lt;br /&gt;
&lt;br /&gt;
A new version of chef has been implemented that does not rely on threads anymore.&lt;br /&gt;
Instead, it is now based on a pure MPI implementation.&lt;br /&gt;
This means there is an important change in how chef is called at runtime.&lt;br /&gt;
&lt;br /&gt;
With the previous threaded version, the number of MPI processes had to be equal to the number of input parts. &lt;br /&gt;
Chef was then in charge of starting a number of threads equal to the number of output parts, which was automatic.&lt;br /&gt;
&lt;br /&gt;
Since the pure MPI version of chef does not start threads anymore, it now requires a number of MPI processes equal to the final number of output parts, not input parts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== adapt.inp changes ===&lt;br /&gt;
&lt;br /&gt;
In the new version of chef, &amp;quot;numTotParts&amp;quot; in adapt.inp (which was used to specify the final number of output parts) has been replaced by &amp;quot;splitFactor&amp;quot;, which corresponds to the ratio of the number of output parts to the number of input parts.&lt;br /&gt;
If you set this parameter to 1, the mesh will not be split and the number of output parts will be equal to the number of input parts.&lt;br /&gt;
If you set this parameter to 2, each part of your input mesh will be split into 2 new sub-parts, etc.&lt;br /&gt;
Keep in mind that the number of MPI processes requested for chef must therefore be equal to (number of input parts) * (splitFactor).&lt;br /&gt;
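For instance, splitting a 4-part mesh with a splitFactor of 2 gives the following process count (a dry-run sketch; the command is printed, not executed):&lt;br /&gt;

```shell
# Pure-MPI chef: process count = (input parts) * (splitFactor).
INPARTS=4
SPLITFACTOR=2
NP=$((INPARTS * SPLITFACTOR))
echo "mpirun -np $NP chef"
```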
&lt;br /&gt;
I have also removed the obsolete parameters from adapt.inp and saved a representative version of this file in /projects/tools/SCOREC.develop/runscripts/adapt.inp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Paths ===&lt;br /&gt;
&lt;br /&gt;
I have updated chef on the viz nodes, Mira and Tukey so that it only relies on the more robust pure MPI implementation.&lt;br /&gt;
&lt;br /&gt;
On the viz nodes, use /projects/tools/SCOREC.develop/build-chefMPI-GNU-*/test/chef&lt;br /&gt;
For simplicity, this is the default version of the master branch coming directly from our github repository.&lt;br /&gt;
&lt;br /&gt;
On Tukey, use /home/mrasquin/SCOREC.develop/build-tukey-GNU-OptG-c2c360bc-mpi-*&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35-noblsnap means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is off during uniform refinement (UR).&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35 means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is on during UR.&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol33 means that the target imbalance for both the vtx and elem is 3%, and BL snapping is on during UR.&lt;br /&gt;
Note that these versions have been slightly modified w.r.t. the master branch. In particular, the imbalance target is not a parameter yet. Also, in Parma, HPS (Heavy Part Splitting) and FixDisconnectedPart are not called here, because the latest version of the diffusion algorithm features improved selection of (i) target parts for element exchange and (ii) the elements to exchange.&lt;br /&gt;
&lt;br /&gt;
On Mira, use /home/mrasquin/SCOREC.develop/build-XL-OptG-c2c360bc-mpi-*&lt;br /&gt;
Similar comments apply to build-XL-OptG-c2c360bc-mpi-tol33, build-XL-OptG-c2c360bc-mpi-tol35 and build-XL-OptG-c2c360bc-mpi-tol35-noblsnap.&lt;br /&gt;
&lt;br /&gt;
Note that BL snapping is not called for a repartitioning of the mesh. It can only play a role during uniform refinement.&lt;br /&gt;
Consequently, if you do not request a UR in adapt.inp, then build-*-tol35 and build-*-tol35-noblsnap will behave the same way.&lt;br /&gt;
&lt;br /&gt;
In case you are wondering what the odd numbers in the name of the build directory are: they come from the git commit hash, which is a unique identifier associated with a git commit (this makes it easier to couple an executable with a version of the code).&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=541</id>
		<title>Chef/Mesh Partitioning</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=541"/>
				<updated>2015-03-29T06:15:00Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Boundary and initial conditions (spj file) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This webpage is inspired from a tutorial provided to Igor and his team at NCSU in order to set up two phase flow test cases on a local cluster named Firebird at NCSU and Cetus/Mira at ALCF.&lt;br /&gt;
At this time, do not expect anything but a series of copy-pastes from emails.&lt;br /&gt;
Please update this page for our viz nodes when you get a chance. &lt;br /&gt;
&lt;br /&gt;
Thanks, &lt;br /&gt;
&lt;br /&gt;
- Michel&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is a tutorial about how to respectively partition the initial mesh and generate the phasta files on firebird (and other platforms including Cetus/Mira) using Chef. This tutorial is rather long but should include everything you need.&lt;br /&gt;
The test case used to demonstrate the workflow is the familiar 3-way subchannel flow. The root path of this test case is &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel&amp;lt;/code&amp;gt;. The parasolid model is located in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/convertParasolid2ParasolidNative/geomFromSimmodeler_nat.xmt_txt&amp;lt;/code&amp;gt;.&lt;br /&gt;
The workflow that describes how to use Chef is now explained in the next sections.&lt;br /&gt;
&lt;br /&gt;
== Env variables==&lt;br /&gt;
&lt;br /&gt;
All the subsequent tools need:&lt;br /&gt;
* the fresh version of openmpi I built on firebird,&lt;br /&gt;
* the latest Simmetrix library I installed in /Install on firebird.&lt;br /&gt;
&lt;br /&gt;
To update your paths, source the following file:&lt;br /&gt;
&amp;lt;code&amp;gt;/Install/SCOREC.develop/envLinux2014.sh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The env variables defined or updated in this env script include PATH and LD_LIBRARY_PATH. What is defined in this script should prevail over your settings, but I strongly suggest removing any redundancy that you may have, for instance, in your .bashrc. Note that I actually source this env file directly in my .bashrc so that I do not have to do it manually every time I log in to firebird. When you source it, it will also print the versions of gcc, openmpi and the simmodsuite lib that are set up.&lt;br /&gt;
&lt;br /&gt;
== BLMesherParallel ==&lt;br /&gt;
&lt;br /&gt;
Note that Simmetrix only supports matched faces for a single-part mesh, so the mesh must be built with one core. However, the initial mesh must already include some information related to the partitioning, even if the mesh only includes a single part, for format reasons. This additional information about the partitioning is required for conversion of the mesh file from the Simmetrix format to the SCOREC MDS format that Chef can read.&lt;br /&gt;
&lt;br /&gt;
The initial mesh for the 3-way subchannel was built in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0&amp;lt;/code&amp;gt;. Check the script named &amp;lt;code&amp;gt;runBLMesherParallel.sh&amp;lt;/code&amp;gt; in this directory.&lt;br /&gt;
&lt;br /&gt;
Running &amp;lt;code&amp;gt;./runBLMesherParallel.sh&amp;lt;/code&amp;gt; with no arguments will tell you the usage, that is:&lt;br /&gt;
 Usage: ./runBLMesherParallel.sh &amp;lt;X&amp;gt; &amp;lt;Y&amp;gt; &amp;lt;Z&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The arguments are as follows.&lt;br /&gt;
* &amp;lt;X&amp;gt; (geometric model) should be the parasolid model geomFromSimmodeler_nat.xmt_txt.&lt;br /&gt;
* &amp;lt;Y&amp;gt; (attribute file) should be BLattr.inp.&lt;br /&gt;
* &amp;lt;Z&amp;gt; (number of processors) should be 1 here since we need to generate a single part mesh using a single core.&lt;br /&gt;
&lt;br /&gt;
The BLattr.inp input file is the same as the one read by the old serial version of BLMesher, and BLMesherParallel can do whatever the old version of BLMesher can do. In addition, if your test case does not include any matched face, you may try to mesh in parallel by specifying &amp;lt;Z&amp;gt; to be larger than 1. However, some meshing features are available only when BLMesherParallel is used with a single core, so it is always important to check the resulting mesh.&lt;br /&gt;
&lt;br /&gt;
BLMesherParallel outputs the following files.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;mesh.sms&amp;lt;/code&amp;gt; --- The resulting mesh is stored in a directory named mesh.sms, which is a parameter hardcoded in the runBLMesherParallel.sh script.&lt;br /&gt;
* &amp;lt;code&amp;gt;BLMesher.log&amp;lt;/code&amp;gt; --- The log from BLMesherParallel is saved in BLMesher.log, whereas the Simmetrix log is saved in mesh.log. Both filenames are also hardcoded in the script.&lt;br /&gt;
&lt;br /&gt;
I also mentioned in previous discussions that Simmetrix has developed its own model format called geomsim. However, the boundary layer collapses near matched faces with this model format, which is not the case when we use the parasolid format. This issue has been reported to Simmetrix but until they can provide a fix, we are forced to start with the parasolid format when our test cases include matched faces.&lt;br /&gt;
&lt;br /&gt;
== Mesh conversion==&lt;br /&gt;
&lt;br /&gt;
Chef can read only the MDS format developed at SCOREC. Therefore, the Simmetrix mesh must first be converted to this format.&lt;br /&gt;
&lt;br /&gt;
This operation was carried out for the 3-way channel in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/simMeshToMdsMesh&amp;lt;/code&amp;gt;. Simply run the script &amp;lt;code&amp;gt;./simMeshToMdsMesh.sh&amp;lt;/code&amp;gt;, which executes the &amp;quot;convert&amp;quot; executable. In the script, you can see that the convert executable reads 3 arguments:&lt;br /&gt;
# The '''input parasolid model''' named geom.xmt_txt, which points to geomFromSimmodeler_nat.x_t. Note that convert expects an .xmt_txt extension (or a .smd extension for the complete geomsim format).&lt;br /&gt;
# The '''input Simmetrix mesh''', named parts.sms here (for historical reasons; it can be renamed).&lt;br /&gt;
# The '''name of the output mds mesh directory''', which is mdsMesh_bz2 here. Note that this name is prepended with &amp;quot;bz2:&amp;quot;, which means that the output mds mesh file is compressed using bzip2. &amp;quot;bz2:&amp;quot; will not be part of the name of the output directory. If you do not specify &amp;quot;bz2:&amp;quot;, the mds mesh file will be saved in ASCII format, which is a waste of space, so I suggest always prepending your directory name with &amp;quot;bz2:&amp;quot;. This will also apply later to the output mesh directory generated by Chef (see below).&lt;br /&gt;
&lt;br /&gt;
Note that convert needs to run with a number of processes (-np ##) equal to the number of input parts in the Simmetrix mesh. For cases that include matched faces, the Simmetrix mesh must include only one part, which is why convert runs here with -np 1. In other circumstances, convert can run in parallel if the Simmetrix mesh has already been partitioned into n parts with n&amp;gt;1 (for instance, a mesh generated in parallel with BLMesherParallel and/or partitioned with phParAdapt-Simmetrix).&lt;br /&gt;
&lt;br /&gt;
== Boundary and initial conditions (spj file)==&lt;br /&gt;
&lt;br /&gt;
Before running Chef for mesh operations such as uniform refinement, tetrahedronization and partitioning, we need to define the BCs and ICs for the generation of the phasta files. These BCs and ICs are defined in an spj file, which is in ASCII to facilitate scripting of BCs/ICs. Most of the attributes you are familiar with from the Simmodeler GUI can be specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
For the 3-way channel flow, see the spj file located in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Simplified_SPJ_file/geom.spj&amp;lt;/code&amp;gt;. Each line corresponds to one attribute that applies to one face.&lt;br /&gt;
&lt;br /&gt;
The structure of the spj file is:&lt;br /&gt;
 # Optional comments anywhere preceded by the pound symbol (#).&lt;br /&gt;
 # For each boundary or initial condition a line as follows:&lt;br /&gt;
 &amp;lt;attribute_name&amp;gt;: &amp;lt;face_id&amp;gt; &amp;lt;dimension&amp;gt; &amp;lt;attribute list&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note the following.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;dimension&amp;gt;&amp;lt;/code&amp;gt;: 2 for a face attribute in 2D, 3 for the initial conditions that apply to the 3D domain. 1D and 0D attributes are also allowed for lines and vertices if needed.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;attribute list&amp;gt;&amp;lt;/code&amp;gt;: typically a magnitude and a direction, where applicable.&lt;br /&gt;
&lt;br /&gt;
The syntax is strict.&lt;br /&gt;
* No empty lines. Each line must be either a comment, which starts with the # character, or an attribute.&lt;br /&gt;
* There must be exactly one space after the colon character.&lt;br /&gt;
* There must be exactly one space between any two numbers.&lt;br /&gt;
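Putting these rules together, a minimal hypothetical spj fragment might look as follows; the attribute names, face ids and values are purely illustrative and must be adapted to your model:&lt;br /&gt;

```text
# BC on face 12 (dimension 2): a zero traction vector (illustrative values)
traction vector: 12 2 0.0 1.0 0.0 0.0
# IC on the 3D domain (dimension 3): illustrative initial velocity
initial velocity: 1 3 1.0 1.0 0.0 0.0
```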
&lt;br /&gt;
In this example, a zero &amp;quot;traction vector&amp;quot; attribute is specified on the periodic faces parallel to the length of the channel. It is wrong to specify such an attribute on these periodic faces for a 3-way channel, but this was inherited from the 1-way periodic channel, where these faces were slip walls instead of periodic faces. I will try to update my test cases in the future. But because we now have continuous integration tools that run every night to verify the Chef code, I would need to update all the cases if I modified the spj file now. So double-check the attributes that you need for this model and consider the existing spj file as a source of inspiration rather than the correct spj file for production runs.&lt;br /&gt;
&lt;br /&gt;
== Chef==&lt;br /&gt;
&lt;br /&gt;
A few rules must be followed to run Chef. First, the number of MPI processes must be equal to the number of input parts. Second, Chef is threaded with OpenMP, and the total number of output parts after partitioning should be at most equal to the total number of available hardware threads on your machine/allocation. On BGQ, there are 4 hardware threads per core. On Linux platforms such as firebird, the number of hardware threads corresponds to the number of available cores. That said, we have observed that Chef can hang if the number of output parts equals the total number of available hardware threads, so it is safer to limit the number of output parts to fewer than the number of available hardware threads. On firebird, we should therefore not try to partition a mesh into more than 16 parts. The next mesh operations will have to take place on Tukey and Cetus/Mira.&lt;br /&gt;
The first example of a partitioning with Chef can be found in /sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch. With my naming convention, &amp;quot;4-1-Chef-PartLocal-Scratch&amp;quot; can be decomposed as follows:&lt;br /&gt;
* the first number corresponds to the number of output parts,&lt;br /&gt;
* the second number corresponds to the number of input parts,&lt;br /&gt;
* Chef means this mesh was treated with this program (as opposed to phParAdapt, phTest, etc., which are previous executables that we used for similar purposes),&lt;br /&gt;
* PartLocal means the mesh is partitioned locally,&lt;br /&gt;
* Scratch means that the initial solution in the resulting phasta files is generated entirely from the spj file defined above.&lt;br /&gt;
In summary, Chef was used in this directory to partition a single-part mesh into 4 parts, and the solution in the phasta files was generated from scratch using the spj file.&lt;br /&gt;
The script to run Chef is named runChef.sh in this directory and simply calls the executable. Chef reads everything it needs from two input files called numstart.dat and adapt.inp.&lt;br /&gt;
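The call inside runChef.sh can be sketched as follows; the executable path and the process count are assumptions for the 4-1 example, not the actual content of the script:&lt;br /&gt;

```shell
# Hypothetical sketch of runChef.sh: the threaded Chef runs with -np equal
# to the number of INPUT parts (1 here); the command is echoed, not executed.
INPUT_PARTS=1
CMD="mpirun -np $INPUT_PARTS ./chef"
echo "$CMD"
```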
&lt;br /&gt;
&lt;br /&gt;
'''a) numstart.dat'''&lt;br /&gt;
&lt;br /&gt;
Instead of building the initial solution from scratch using the initial conditions defined in the spj file, the user can migrate an existing solution stored in a set of restart files that were saved from a previous phasta simulation. Numstart.dat contains the time step stamp of the input restart files to read in order to migrate a solution.&lt;br /&gt;
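As an illustration, a numstart.dat requesting the restart files saved at time step 600 would contain just that stamp (a guessed layout; check an existing case for the exact format):&lt;br /&gt;

```text
600
```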
&lt;br /&gt;
'''b) adapt.inp'''&lt;br /&gt;
&lt;br /&gt;
This input file contains all the other parameters Chef expects. Note that many of these parameters have been inherited from the old phParAdapt and are currently obsolete or unused. In what follows, all the parameters available in adapt.inp are listed and the critical parameters are in bold. Any line that starts with # is ignored.&lt;br /&gt;
&lt;br /&gt;
* '''globalP''': obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''timeStepNumber''': this is the time step of the output phasta files that will be generated by Chef. This stamp can be different from the number specified in numstart.dat, which can be practical in some situations. But most of the time, this number is set equal to what is specified in numstart.dat.&lt;br /&gt;
&lt;br /&gt;
* '''ensa_dof''': this corresponds to the number of degrees of freedom in the solution field of the output restart file. Note that it should correspond to the number of initial conditions specified in the spj file if the solution is built from scratch. When the solution is migrated from existing restart files, it should also correspond to the number of dof in the existing solution field. Here, this number is set to 5 for single phase flow with no turbulence model.&lt;br /&gt;
&lt;br /&gt;
* '''attributeFileName''': path to the spj file for the boundary and potentially initial conditions&lt;br /&gt;
&lt;br /&gt;
* '''modelFileName''': path to the geometric model (can be a parasolid or geomsim model on Linux but only geomsim is available on BGQ).&lt;br /&gt;
&lt;br /&gt;
* '''meshFileName''': path to the directory that includes the input mesh files under the SCOREC MDS format. Note that the path must end with a /. This path can also be prepended by &amp;quot;bz2:&amp;quot; to tell the mesh file reader that the files have been compressed. This follows the same convention as mentioned in 3)&lt;br /&gt;
&lt;br /&gt;
* '''outMeshFileName''': obviously the name of the directory that will include the resulting output mesh files. Note again the trailing / character. The same convention with &amp;quot;bz2:&amp;quot; keyword applies.&lt;br /&gt;
&lt;br /&gt;
* '''restartFileName''': this gives the path to the restart files that need to be read in when solution migration is activated. In this case, the path should look for instance like &amp;quot;../4-procs_case/restart&amp;quot;. The phasta reader will then add the time step stamp to the name of this restartFileName variable, as well as the file #. When there is no solution migration, as in this example, this parameter can be commented out for the sake of clarity.&lt;br /&gt;
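The way the reader composes the filename can be sketched as follows; the restart.&amp;lt;step&amp;gt;.&amp;lt;part&amp;gt; pattern is an assumption based on the description above, not a verified convention:&lt;br /&gt;

```shell
# Sketch (assumed pattern): append the time step stamp and the file number
# to the restartFileName prefix.
PREFIX="../4-procs_case/restart"
STEP=600
PART=1
FNAME="${PREFIX}.${STEP}.${PART}"
echo "$FNAME"
```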
&lt;br /&gt;
* '''adaptFlag''': if 0, no mesh adaptation will take place. But if set to 1 and if AdaptStrategy is set to 7, then the mesh will be uniformly refined. Note that adaptation only works with a mixed mesh (with wedges in the BL) and not with an all-tet mesh. Tetrahedronization should therefore take place after uniform refinement. Right now, the mixed mesh gets uniformly refined everywhere, including the BL, but it is possible to refine uniformly outside the BL only with some light modifications of the code. In the future, we hope to have other adaptation strategies in place in Chef based on local error indicators. If interested in these strategies now, phParAdapt-Simmetrix must be used. If adaptFlag is set to 1, note also that SolutionMigration must be set to 1 as well (see below for this parameter) and the path to the restart files specified.&lt;br /&gt;
&lt;br /&gt;
* rRead: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* rStart: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''AdaptStrategy''': this parameter is read if adaptFlag is 1. When set to 7, uniform refinement of a mixed mesh takes place. This is currently the only strategy tested in Chef. If interested in other, more sophisticated adaptation strategies, phParAdapt-Simmetrix must be used for now.&lt;br /&gt;
&lt;br /&gt;
* '''RecursiveUR''': if AdaptStrategy is set to 7, Chef offers the possibility to do recursive uniform refinement within the same job. Beware of the memory consumption if you set this value to more than 1, since the mesh can grow quickly.&lt;br /&gt;
&lt;br /&gt;
* Periodic: obsolete. Periodicity in the mesh and in the solution is now treated automatically, as long as i) the mesh built with BLMesher is periodic (i.e. the location of the mesh vertices on periodic faces is the same) and ii) the spj file contains the correct &amp;quot;periodic slave&amp;quot; attributes.&lt;br /&gt;
&lt;br /&gt;
* prCD: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* timing: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* outputFormat: obsolete. Phasta files are saved by default in binary format.&lt;br /&gt;
&lt;br /&gt;
* internalBCNodes: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* WRITEASC: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* phastaIO: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''numTotParts''': final number of parts. If numTotParts is larger than the number of Chef processes (which is equal to the number of input parts), the mesh will be partitioned.&lt;br /&gt;
&lt;br /&gt;
* '''elementsPerMigration''': in order to reduce the memory footprint of Chef, the user can reduce the default number of elements that can be migrated at a time during partitioning or partition improvement.&lt;br /&gt;
&lt;br /&gt;
* '''SolutionMigration''': activates the migration of the solution from an existing set of restart files. In this case, the path to the phasta files that contain the solution to migrate must be specified through the restartFileName parameter (see above). If the mesh is refined, the migrated solution will be interpolated to the new vertices of the mesh. Note also that if the solution is migrated, the spj file should contain NO information about the initial conditions, because any information mentioned in the spj file prevails: if the spj file contains initial conditions, the solution migrated from existing restart files will be overwritten and the resulting phasta files will again include the scratch solution specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
* '''DisplacementMigration''': also migrates the displacement field along with the solution field for other adaptation strategies. Not used for AdaptStrategy 7, so it can be ignored for now.&lt;br /&gt;
&lt;br /&gt;
* isReorder: obsolete/unused. Reordering for better cache performance is now applied by default to both the phasta files and mesh files.&lt;br /&gt;
&lt;br /&gt;
* '''Tetrahedronize''': tetrahedronizes a mixed mesh if set to 1. Note that if both adaptFlag and Tetrahedronize are set to 1, adaptation of the input mixed mesh will take place before tetrahedronization. In all cases, partitioning is always the last mesh operation. But again, an all-tet mesh cannot be further refined, so tetrahedronization should not take place too early in the partitioning workflow, in order to keep enough aggregated memory for potential future adaptation.&lt;br /&gt;
&lt;br /&gt;
* numSplit: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''LocalPtn''': local partitioning if set to 1, global partitioning if set to 0. Currently, only local partitioning is implemented in Chef, and it has been shown to be sufficient so far.&lt;br /&gt;
&lt;br /&gt;
* '''RecursivePtn''': should always be set to 1. In the past, this parameter allowed recursive partitioning steps in phParAdapt. The code will stop or crash if this parameter is not 1.&lt;br /&gt;
&lt;br /&gt;
* RecursivePtnStep: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''partitionMethod''': currently, the GRAPH method for local partitioning is hard-coded in one of the Chef routines.&lt;br /&gt;
&lt;br /&gt;
* '''ParmaPtn''': If set to 1, the load balance in terms of both elements and vertices per part is improved further after the partitioning with Parma. It is strongly suggested to keep ParmaPtn set to 1.&lt;br /&gt;
&lt;br /&gt;
* '''dwalMigration''': This parameter is useful in case the distance to the wall for a turbulence model such as RANS or DDES has already been computed by phasta. In this case, it is possible to migrate also this field along with the solution field. SolutionMigration must therefore be set to 1 for that purpose, since the dwal field cannot be migrated alone without the solution field.&lt;br /&gt;
&lt;br /&gt;
* '''buildMapping''': This computes the vertex mapping between the input and output mesh. It is strongly suggested to keep this parameter always set to 1. Otherwise, you will not be able to reduce your solution from your final partitioning down to the initial or any intermediate mesh (we have developed a tool for that purpose), which can be catastrophic if you are interested in local adaptation based on an error indicator. Note that building the mapping does not make sense if the mesh is uniformly refined so it should be set to 0 in this case.&lt;br /&gt;
&lt;br /&gt;
* '''initBubbles''': Chef will use the external bubble information file 'bubbles.inp' to initialize the level set distance field if this flag is activated.&lt;br /&gt;
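Putting the critical parameters together, a minimal adapt.inp for the 4-1 from-scratch example might look as follows; the key/value layout, paths and values are illustrative only, not a verified production file:&lt;br /&gt;

```text
# illustrative adapt.inp (values are examples only)
timeStepNumber 0
ensa_dof 5
attributeFileName geom.spj
modelFileName geomFromSimmodeler_nat.xmt_txt
meshFileName bz2:mdsMesh_bz2/
outMeshFileName bz2:outMesh_bz2/
adaptFlag 0
numTotParts 4
SolutionMigration 0
Tetrahedronize 0
LocalPtn 1
RecursivePtn 1
ParmaPtn 1
buildMapping 1
```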
&lt;br /&gt;
The second example of a partitioning with Chef can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-Tet-PartLocal-SolMgr. For this case, based on the naming convention of 8-4-Chef-Tet-PartLocal-SolMgr (and the parameters specified in adapt.inp and numstart.dat),&lt;br /&gt;
* the number of output parts requested is 8, &lt;br /&gt;
* the number of input parts is 4 (note &amp;quot;-np 4&amp;quot; in the runChef.sh script),&lt;br /&gt;
* the input mixed mesh is first tetrahedronized before being partitioned. &lt;br /&gt;
* the solution in the resulting phasta files is migrated from the previous Chef run. &lt;br /&gt;
Note that the spj file is different for this second example and the initial conditions have been commented out in order not to overwrite the solution that is migrated from the previous Chef run.&lt;br /&gt;
&lt;br /&gt;
The third and final example can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-UR2-Tet-PartLocal-SolMgr. In this directory 8-4-Chef-UR2-Tet-PartLocal-SolMgr, Chef &lt;br /&gt;
* reads a four-part mesh, &lt;br /&gt;
* applies a double recursive uniform refinement, &lt;br /&gt;
* tetrahedronizes the resulting mixed mesh that has been uniformly refined twice, &lt;br /&gt;
* partitions the resulting 4-part all-tet uniformly refined mesh into 8 parts,&lt;br /&gt;
* migrates and interpolates the solution read from existing restart files coming from the first example.&lt;br /&gt;
&lt;br /&gt;
As a final comment, note that the restart files are always read directly from a procs_case directory. However, when the number of output restart files exceeds 2048, the restart files are saved in subdirectories of the root procs_case directory in order to reduce file contention, similarly to (but still differently from) what you implemented at some point in your version of phasta. The best strategy would be to write phasta files using mpi_io, for instance, so that we can store more than one part in a single file and avoid a large number of phasta files.&lt;br /&gt;
&lt;br /&gt;
For further partitioning on BG/Q machines a conversion to the native Parasolid model is required. The tool is located in: /Install/SCOREC.develop/scorec/test/cadToSim/cadToSim &lt;br /&gt;
and should be run from [Case directory]/convertParasolid2ParasolidNative/ on firebird.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Updated Chef version (2015/03/26)==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) MPI implementation&lt;br /&gt;
&lt;br /&gt;
A new version of chef has been implemented and does not rely on threads any more.&lt;br /&gt;
Instead, it is now based on a pure MPI implementation. &lt;br /&gt;
That means that there is an important change in how chef is called at runtime.&lt;br /&gt;
&lt;br /&gt;
With the previous threaded version, the number of MPI processes had to be equal to the number of input parts. &lt;br /&gt;
Chef was then in charge of starting a number of threads equal to the number of output parts, which was automatic.&lt;br /&gt;
&lt;br /&gt;
Since the pure MPI version of chef does not start threads any more, it now requires a number of MPI processes equal to the final number of output parts, not input parts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2) adapt.inp&lt;br /&gt;
&lt;br /&gt;
In the new version of chef, &amp;quot;numTotParts&amp;quot; in adapt.inp (which was used to specify the final number of output parts) has been replaced by &amp;quot;splitFactor&amp;quot;, which corresponds to the ratio of the number of output parts to the number of input parts. &lt;br /&gt;
If you set this parameter to 1, the mesh will not be split and the number of output parts will be equal to the number of input parts. &lt;br /&gt;
If you set this parameter to 2, each part of your input mesh will be split into 2 new sub-parts, and so on.&lt;br /&gt;
Keep in mind that the number of MPI processes that needs to be requested for chef must therefore be equal to (number of input parts) * (splitFactor).&lt;br /&gt;
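The process count arithmetic can be sketched in a couple of shell lines (the variable names are illustrative):&lt;br /&gt;

```shell
# Pure-MPI chef: the MPI process count equals input parts times splitFactor.
INPUT_PARTS=4
SPLIT_FACTOR=2            # each input part is split into 2 sub-parts
NP=$((INPUT_PARTS * SPLIT_FACTOR))
echo "mpirun -np $NP ./chef"
```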
&lt;br /&gt;
I have also removed the obsolete parameters in adapt.inp and saved a representative version of this file in /projects/tools/SCOREC.develop/runscripts/adapt.inp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3) Paths&lt;br /&gt;
&lt;br /&gt;
I have updated chef on the Viz nodes, Mira and Tukey so that it only relies on the more robust pure MPI implementation.&lt;br /&gt;
&lt;br /&gt;
On the viz nodes, use /projects/tools/SCOREC.develop/build-chefMPI-GNU-*/test/chef&lt;br /&gt;
For simplicity, this is the default version of the master branch coming directly from our github repository.&lt;br /&gt;
&lt;br /&gt;
On Tukey, use /home/mrasquin/SCOREC.develop/build-tukey-GNU-OptG-c2c360bc-mpi-*&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35-noblsnap means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is off during uniform refinement (UR).&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35 means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is on during UR.&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol33 means that the target imbalance for both the vtx and elem is 3%, and BL snapping is on during UR.&lt;br /&gt;
Note that these versions have been slightly modified w.r.t. the master branch. In particular, the imbalance target is not a parameter yet. Also, in Parma, HPS (Heavy Part Splitting) and FixDisconnectedPart are not called here, because the latest version of the diffusion algorithm, with its improved selection of (i) target parts for element exchange and (ii) elements, makes them unnecessary.&lt;br /&gt;
&lt;br /&gt;
On Mira, use /home/mrasquin/SCOREC.develop/build-XL-OptG-c2c360bc-mpi-*&lt;br /&gt;
Similar comments apply to build-XL-OptG-c2c360bc-mpi-tol33, build-XL-OptG-c2c360bc-mpi-tol35 and build-XL-OptG-c2c360bc-mpi-tol35-noblsnap.&lt;br /&gt;
&lt;br /&gt;
Note that BL snapping is not called for a repartitioning of the mesh. It can only play a role during uniform refinement.&lt;br /&gt;
Consequently, if you do not request a UR in adapt.inp, then build-*-tol35 and build-*-tol35-noblsnap will behave the same way.&lt;br /&gt;
&lt;br /&gt;
In case you are wondering what the weird numbers are in the name of the build directory, they come from the git log hash, which is a unique identifier associated with a git commit (making it easier to couple an executable with a version of the code).&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=540</id>
		<title>Chef/Mesh Partitioning</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=540"/>
				<updated>2015-03-29T06:05:31Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* BLMesherParallel */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This webpage is inspired by a tutorial provided to Igor and his team at NCSU in order to set up two-phase flow test cases on a local cluster named Firebird at NCSU and on Cetus/Mira at ALCF.&lt;br /&gt;
At this time, do not expect anything but a series of copy-paste from emails. &lt;br /&gt;
Please update this page for our viz nodes when you get a chance. &lt;br /&gt;
&lt;br /&gt;
Thanks, &lt;br /&gt;
&lt;br /&gt;
- Michel&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is a tutorial about how to respectively partition the initial mesh and generate the phasta files on firebird (and other platforms including Cetus/Mira) using Chef. This tutorial is rather long but should include everything you need.&lt;br /&gt;
The test case to demonstrate the workflow is the familiar 3-way subchannel flow. The root path of this test case is /sgidata2/mrasquin/Models/subchannel. The parasolid model is located in /sgidata2/mrasquin/Models/subchannel/convertParasolid2ParasolidNative/geomFromSimmodeler_nat.xmt_txt.&lt;br /&gt;
The workflow that describes how to use Chef is now explained in the next sections.&lt;br /&gt;
&lt;br /&gt;
== Env variables==&lt;br /&gt;
&lt;br /&gt;
All the subsequent tools need&lt;br /&gt;
* The fresh version of openmpi I built on firebird&lt;br /&gt;
* The latest Simmetrix library I installed in /Install on firebird.&lt;br /&gt;
&lt;br /&gt;
To update your paths, source the following file:&lt;br /&gt;
&amp;lt;code&amp;gt;/Install/SCOREC.develop/envLinux2014.sh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The env variables defined or updated in this env script include PATH and LD_LIBRARY_PATH. What is defined in this script should take precedence over your settings, but I strongly suggest removing any redundancy that you may have, for instance, in your .bashrc. Note that I actually source this env file directly in my .bashrc so that I do not have to do it manually every time I log in to firebird. When you source it, it will also print the versions of gcc, openmpi and the simmodsuite lib that are set up.&lt;br /&gt;
&lt;br /&gt;
== BLMesherParallel ==&lt;br /&gt;
&lt;br /&gt;
Note that Simmetrix only supports matched faces for a single-part mesh, so the mesh must be built on one core. However, the initial mesh must already include some information related to the partitioning, even if the mesh only includes a single part, for format reasons. This additional information about the partitioning is required for conversion of the mesh file from the Simmetrix format to the SCOREC MDS format that Chef can read.&lt;br /&gt;
&lt;br /&gt;
The initial mesh for the 3-way subchannel was built in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0&amp;lt;/code&amp;gt;. Check the script named &amp;lt;code&amp;gt;runBLMesherParallel.sh&amp;lt;/code&amp;gt; in this directory.&lt;br /&gt;
&lt;br /&gt;
Running &amp;lt;code&amp;gt;./runBLMesherParallel.sh&amp;lt;/code&amp;gt; with no arguments will tell you the usage, that is:&lt;br /&gt;
 Usage: ./runBLMesherParallel.sh &amp;lt;X&amp;gt; &amp;lt;Y&amp;gt; &amp;lt;Z&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The arguments are as follows.&lt;br /&gt;
* &amp;lt;X&amp;gt; (geometric model) should be the parasolid model geomFromSimmodeler_nat.xmt_txt.&lt;br /&gt;
* &amp;lt;Y&amp;gt; (attribute file) should be BLattr.inp.&lt;br /&gt;
* &amp;lt;Z&amp;gt; (number of processors) should be 1 here since we need to generate a single part mesh using a single core.&lt;br /&gt;
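For this 3-way subchannel case, the arguments above combine into the following call (sketched here as an echoed command rather than an actual run):&lt;br /&gt;

```shell
# Assemble the BLMesherParallel call for the matched-face case: one part,
# hence one process.
MODEL="geomFromSimmodeler_nat.xmt_txt"
ATTR="BLattr.inp"
NPROCS=1
CMD="./runBLMesherParallel.sh $MODEL $ATTR $NPROCS"
echo "$CMD"
```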
&lt;br /&gt;
The BLattr.inp input file is the same as the one read by the old serial version of BLMesher. But BLMesherParallel can do whatever the old version of BLMesher can do. In addition, if your test case does not include any matched face, you may try to mesh in parallel by specifying &amp;lt;Z&amp;gt; to be larger than 1. However, some meshing features are available only when BLMesherParallel is used with a single core so it is always important to check the resulting mesh.&lt;br /&gt;
&lt;br /&gt;
BLMesherParallel outputs the following files.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;mesh.sms&amp;lt;/code&amp;gt; --- The resulting mesh is stored in a directory named mesh.sms, which is a parameter hardcoded in the runBLMesherParallel.sh script.&lt;br /&gt;
* &amp;lt;code&amp;gt;BLMesher.log&amp;lt;/code&amp;gt; --- The log from BLMesherParallel is saved in BLMesher.log, whereas the Simmetrix log is saved in mesh.log. Both filenames are also hardcoded in the script.&lt;br /&gt;
&lt;br /&gt;
I also mentioned in previous discussions that Simmetrix has developed its own model format called geomsim. However, the boundary layer collapses near matched faces with this model format, which is not the case when we use the parasolid format. This issue has been reported to Simmetrix but until they can provide a fix, we are forced to start with the parasolid format when our test cases include matched faces.&lt;br /&gt;
&lt;br /&gt;
== Mesh conversion==&lt;br /&gt;
&lt;br /&gt;
Chef can only read the MDS format developed at SCOREC. Therefore, the Simmetrix mesh must first be converted to this format.&lt;br /&gt;
&lt;br /&gt;
This operation was carried out for the 3-way channel in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/simMeshToMdsMesh&amp;lt;/code&amp;gt;. Simply run the script &amp;lt;code&amp;gt;./simMeshToMdsMesh.sh&amp;lt;/code&amp;gt;, which executes the &amp;quot;convert&amp;quot; executable. In the script, you can see that the convert executable reads 3 arguments:&lt;br /&gt;
# The '''input parasolid model''' named geom.xmt_txt, which points to geomFromSimmodeler_nat.x_t. Note that convert expects an .xmt_txt extension (or an .smd extension for the complete geomsim format).&lt;br /&gt;
# The '''input Simmetrix mesh''' named here parts.sms (for historical reasons; it can be renamed).&lt;br /&gt;
# The '''name of the output mds mesh directory''', which is mdsMesh_bz2 here. Note that this name is prepended by &amp;quot;bz2:&amp;quot;, which means that the output mds mesh file is compressed using bzip2. &amp;quot;bz2:&amp;quot; will not be part of the name of the output directory. If you do not specify &amp;quot;bz2:&amp;quot;, the mds mesh file will be saved in ASCII format, which is a waste of space, so I suggest always prepending your directory name with &amp;quot;bz2:&amp;quot;. This will also apply later to the output mesh directory generated by Chef (see below).&lt;br /&gt;
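The three arguments combine into an invocation like the following sketch (echoed rather than executed; -np 1 because the matched-face mesh has a single part):&lt;br /&gt;

```shell
# Dry-run sketch of the convert call wrapped by simMeshToMdsMesh.sh.
NPARTS=1
CMD="mpirun -np $NPARTS convert geom.xmt_txt parts.sms bz2:mdsMesh_bz2"
echo "$CMD"
```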
&lt;br /&gt;
Note that convert needs to run with a number of processes (-np ##) equal to the number of input parts in the Simmetrix mesh. For cases that include match faces, the Simmetrix mesh must include only one part, which is the reason why convert runs here with -np 1. But in other circumstances, convert can run in parallel if the Simmetrix mesh has already been partitioned in n parts with n&amp;gt;1 (for instance mesh generated in parallel with BLMesherParallel and/or partitioned with phParAdapt-Simmetrix).&lt;br /&gt;
&lt;br /&gt;
== Boundary and initial conditions (spj file)==&lt;br /&gt;
&lt;br /&gt;
Before running Chef for mesh operations such as uniform refinement, tetrahedronization and partitioning, we need to define the BCs and ICs for the generation of the phasta files. Most of the attributes you are familiar with from the Simmodeler GUI can be specified in the spj file. For the 3-way channel flow, see the spj file located in /sgidata2/mrasquin/Models/subchannel/subchannel_3way/Simplified_SPJ_file/geom.spj. Each line corresponds to one attribute that applies to one face. The structure is the following: &amp;lt;attribute_name&amp;gt;: &amp;lt;face_id&amp;gt; &amp;lt;dimension: 2 for a face attribute in 2D, 3 for the initial conditions that apply to the 3D domain. 1D and 0D attributes are also allowed for lines and vertices if needed&amp;gt; &amp;lt;attribute list, typically magnitude and direction if this applies&amp;gt;. Note that the syntax is strict: - no empty lines; each line should be either a comment, which starts with the # character, or an attribute. - There must be exactly one space after the colon character. - There must be exactly one space between any two numbers.&lt;br /&gt;
Note that in this example, a zero &amp;quot;traction vector&amp;quot; attribute is specified on the periodic faces parallel to the length of the channel. It is wrong to specify such an attribute on these periodic faces for a 3-way channel, but this was inherited from the 1-way periodic channel, where these faces were slip walls instead of periodic faces. I will try to update my test cases in the future. But because we now have continuous integration tools that run every night to verify the Chef code, I would need to update all the cases if I modified the spj file now. So double-check the attributes that you need for this model and consider the existing spj file as a source of inspiration rather than the correct spj file for production runs.&lt;br /&gt;
&lt;br /&gt;
== Chef==&lt;br /&gt;
&lt;br /&gt;
A few rules must be followed to run Chef. First, the number of MPI processes must be equal to the number of input parts. Second, Chef is threaded with OpenMP, and the total number of output parts after partitioning should be at most equal to the total number of available hardware threads on your machine/allocation. On BGQ, there are 4 hardware threads per core. On Linux platforms such as firebird, the number of hardware threads corresponds to the number of available cores. That said, we have observed that Chef can hang if the number of output parts equals the total number of available hardware threads, so it is safer to limit the number of output parts to fewer than the number of available hardware threads. On firebird, we should therefore not try to partition a mesh into more than 16 parts. The next mesh operations will have to take place on Tukey and Cetus/Mira.&lt;br /&gt;
The first example of a partitioning with Chef can be found in /sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch. With my naming convention, &amp;quot;4-1-Chef-PartLocal-Scratch&amp;quot; can be decomposed as follows:&lt;br /&gt;
* the first number corresponds to the number of output parts,&lt;br /&gt;
* the second number corresponds to the number of input parts,&lt;br /&gt;
* Chef means this mesh was treated with this program (as opposed to phParAdapt, phTest, etc., which are previous executables that we used for similar purposes),&lt;br /&gt;
* PartLocal means the mesh is partitioned locally,&lt;br /&gt;
* Scratch means that the initial solution in the resulting phasta files is generated entirely from the spj file defined above.&lt;br /&gt;
In summary, Chef was used in this directory to partition a single-part mesh into 4 parts, and the solution in the phasta files was generated from scratch using the spj file.&lt;br /&gt;
The script to run Chef is named runChef.sh in this directory and simply calls the executable. Chef reads everything it needs from two input files called numstart.dat and adapt.inp.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''a) numstart.dat'''&lt;br /&gt;
&lt;br /&gt;
Instead of building the initial solution from scratch using the initial conditions defined in the spj file, the user can migrate an existing solution stored in a set of restart files that were saved from a previous phasta simulation. Numstart.dat contains the time step stamp of the input restart files to read in order to migrate a solution.&lt;br /&gt;
&lt;br /&gt;
'''b) adapt.inp'''&lt;br /&gt;
&lt;br /&gt;
This input file contains all the other parameters Chef expects. Note that many of these parameters have been inherited from the old phParAdapt and are currently obsolete or unused. In what follows, all the parameters available in adapt.inp are listed and the critical parameters are in bold. Any line that starts with # is ignored.&lt;br /&gt;
&lt;br /&gt;
* '''globalP''': obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''timeStepNumber''': this is the time step of the output phasta files that will be generated by Chef. This stamp can be different from the number specified in numstart.dat, which can be practical in some situations. But most of the time, this number is set equal to what is specified in numstart.dat.&lt;br /&gt;
&lt;br /&gt;
* '''ensa_dof''': this corresponds to the number of degrees of freedom in the solution field of the output restart file. Note that it should correspond to the number of initial conditions specified in the spj file if the solution is built from scratch. When the solution is migrated from existing restart files, it should also correspond to the number of dof in the existing solution field. Here, this number is set to 5 for single phase flow with no turbulence model.&lt;br /&gt;
&lt;br /&gt;
* '''attributeFileName''': path to the spj file for the boundary and potentially initial conditions&lt;br /&gt;
&lt;br /&gt;
* '''modelFileName''': path to the geometric model (can be a parasolid or geomsim model on Linux but only geomsim is available on BGQ).&lt;br /&gt;
&lt;br /&gt;
* '''meshFileName''': path to the directory that includes the input mesh files in the SCOREC MDS format. Note that the path must end with a /. This path can also be prepended with &amp;quot;bz2:&amp;quot; to tell the mesh file reader that the files have been compressed. This follows the same convention as mentioned in 3).&lt;br /&gt;
&lt;br /&gt;
* '''outMeshFileName''': the name of the directory that will contain the resulting output mesh files. Note again the trailing / character. The same convention with the &amp;quot;bz2:&amp;quot; keyword applies.&lt;br /&gt;
&lt;br /&gt;
* '''restartFileName''': this gives the path to the restart files that need to be read in when solution migration is activated. In this case, the path should look, for instance, like &amp;quot;../4-procs_case/restart&amp;quot;. The phasta reader will then add the time step stamp to the name of this restartFileName variable, as well as the file #. When there is no solution migration, like in this example, this parameter can be commented out for the sake of clarity.&lt;br /&gt;
&lt;br /&gt;
* '''adaptFlag''': if 0, no mesh adaptation will take place. But if set to 1 and if AdaptStrategy is set to 7, then the mesh will be uniformly refined. Note that adaptation only works with a mixed mesh (with wedges in the BL) and not with an all-tet mesh. Tetrahedronization should therefore take place after uniform refinement. Right now, the mixed mesh gets uniformly refined everywhere, including the BL, but it is possible to refine uniformly outside the BL only with some light modifications of the code. In the future, we hope to have other adaptation strategies in place in Chef based on local error indicators. If interested in these strategies, then phParAdapt-Simmetrix must be used. If adaptFlag is set to 1, note also that SolutionMigration must also be set to 1 (see below for this parameter) and the path to the restart files specified.&lt;br /&gt;
&lt;br /&gt;
* rRead: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* rStart: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''AdaptStrategy''': this parameter is read if adaptFlag is 1. When set to 7, uniform refinement of a mixed mesh takes place. This is currently the only strategy tested in Chef. If interested in other, more sophisticated adaptation strategies, phParAdapt-Simmetrix must be used for now.&lt;br /&gt;
&lt;br /&gt;
* '''RecursiveUR''': if AdaptStrategy is set to 7, Chef offers the possibility to do recursive uniform refinement within the same job. Beware of the memory consumption if you set this value to more than 1, since the mesh can grow quickly.&lt;br /&gt;
&lt;br /&gt;
* Periodic: obsolete. Periodicity in the mesh and in the solution is now treated automatically as long as i) the mesh built with BLMesher is periodic (i.e. the location of the mesh vertices on periodic faces is the same) and ii) the spj file contains the correct &amp;quot;periodic slave&amp;quot; attributes.&lt;br /&gt;
&lt;br /&gt;
* prCD: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* timing: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* outputFormat: obsolete. Phasta files are saved by default in binary format.&lt;br /&gt;
&lt;br /&gt;
* internalBCNodes: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* WRITEASC: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* phastaIO: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''numTotParts''': final number of parts. If numTotParts is larger than the number of Chef processes (which is equal to the number of input parts), the mesh will be partitioned.&lt;br /&gt;
&lt;br /&gt;
* '''elementsPerMigration''': in order to reduce the memory footprint of Chef, the user can reduce the default number of elements that can be migrated at a time during partitioning or partition improvement.&lt;br /&gt;
&lt;br /&gt;
* '''SolutionMigration''': activates the migration of the solution from an existing set of restart files. In this case, the path to the phasta files that contain the solution to migrate must be specified through the restartFileName parameter (see above). If the mesh is refined, the solution that is migrated will be interpolated to the new vertices of the mesh. Note also that if the solution is migrated, then the spj file should contain NO information about the initial conditions. Indeed, any information mentioned in the spj file will prevail. Therefore, if the spj file contains information about the initial conditions, the solution migrated from existing restart files will be overwritten and the resulting phasta files will again include the scratch solution specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
* '''DisplacementMigration''': also migrates the displacement field along with the solution field for other adaptation strategies. Not used for AdaptStrategy 7, so it can be ignored for now.&lt;br /&gt;
&lt;br /&gt;
* isReorder: obsolete/unused. Reordering for better cache performance is now applied by default to both the phasta files and mesh files.&lt;br /&gt;
&lt;br /&gt;
* '''Tetrahedronize''': tetrahedronizes a mixed mesh if set to 1. Note that if both adaptFlag and Tetrahedronize are set to 1, adaptation of the input mixed mesh will take place before tetrahedronization. In all cases, partitioning is always the last mesh operation. But again, an all-tet mesh cannot be further refined, so tetrahedronization should not take place too early in the partitioning workflow, in order to keep enough aggregated memory for potential future adaptation.&lt;br /&gt;
&lt;br /&gt;
* numSplit: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''LocalPtn''': local partitioning if set to 1, global partitioning if set to 0. Currently, only local partitioning is implemented in Chef, and it has been shown to be sufficient so far.&lt;br /&gt;
&lt;br /&gt;
* '''RecursivePtn''': should always be set to 1. In the past, this parameter allowed recursive partitioning steps in phParAdapt. The code will stop or crash if this parameter is not 1.&lt;br /&gt;
&lt;br /&gt;
* RecursivePtnStep: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''partitionMethod''': currently, the GRAPH method for local partitioning is hard-coded in one of the Chef routines.&lt;br /&gt;
&lt;br /&gt;
* '''ParmaPtn''': if set to 1, the load balance in terms of both elements and vertices per part is further improved with Parma after the partitioning. It is strongly suggested to keep ParmaPtn set to 1.&lt;br /&gt;
&lt;br /&gt;
* '''dwalMigration''': this parameter is useful in case the distance to the wall for a turbulence model such as RANS or DDES has already been computed by phasta. In this case, it is possible to also migrate this field along with the solution field. SolutionMigration must therefore be set to 1 for that purpose, since the dwal field cannot be migrated alone without the solution field.&lt;br /&gt;
&lt;br /&gt;
* '''buildMapping''': This computes the vertex mapping between the input and output mesh. It is strongly suggested to keep this parameter always set to 1. Otherwise, you will not be able to reduce your solution from your final partitioning down to the initial or any intermediate mesh (we have developed a tool for that purpose), which can be catastrophic if you are interested in local adaptation based on an error indicator. Note that building the mapping does not make sense if the mesh is uniformly refined so it should be set to 0 in this case.&lt;br /&gt;
&lt;br /&gt;
* '''initBubbles''': Chef will use the external bubble information file 'bubbles.inp' to initialize the level set distance field if this flag is activated.&lt;br /&gt;
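Putting the critical parameters together, a minimal adapt.inp for the first example (one input part partitioned into four parts, solution built from scratch) might look roughly as follows. This is a hedged sketch based on the parameter descriptions above, with illustrative values and hypothetical paths, not a verbatim copy of the file in that directory:&lt;br /&gt;

```text
# Sketch of adapt.inp: values illustrative, paths hypothetical
timeStepNumber 0
ensa_dof 5
attributeFileName geom.spj
modelFileName geomFromSimmodeler_nat.xmt_txt
meshFileName bz2:mdsMesh_bz2/
outMeshFileName bz2:outMesh_bz2/
adaptFlag 0
Tetrahedronize 0
numTotParts 4
SolutionMigration 0
LocalPtn 1
RecursivePtn 1
ParmaPtn 1
buildMapping 1
```

A representative adapt.inp from one of the example directories remains the authoritative reference for the exact syntax.&lt;br /&gt;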
&lt;br /&gt;
The second example of a partitioning with Chef can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-Tet-PartLocal-SolMgr. For this case, based on the naming convention of 8-4-Chef-Tet-PartLocal-SolMgr (and the parameters specified in adapt.inp and numstart.dat),&lt;br /&gt;
* the number of output parts requested is 8, &lt;br /&gt;
* the number of input parts is 4 (note &amp;quot;-np 4&amp;quot; in the runChef.sh script),&lt;br /&gt;
* the input mixed mesh is first tetrahedronized before being partitioned. &lt;br /&gt;
* the solution in the resulting phasta files is migrated from the previous Chef run. &lt;br /&gt;
Note that the spj file is different for this second example and the initial conditions have been commented out in order not to overwrite the solution that is migrated from the previous Chef run.&lt;br /&gt;
&lt;br /&gt;
The third and final example can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-UR2-Tet-PartLocal-SolMgr. In this directory 8-4-Chef-UR2-Tet-PartLocal-SolMgr, Chef &lt;br /&gt;
* reads a four-part mesh, &lt;br /&gt;
* applies a double recursive uniform refinement, &lt;br /&gt;
* tetrahedronizes the resulting mixed mesh that has been uniformly refined twice, &lt;br /&gt;
* partitions the resulting 4-part all-tet uniformly refined mesh into 8 parts,&lt;br /&gt;
* migrates and interpolates the solution read from existing restart files coming from the first example.&lt;br /&gt;
&lt;br /&gt;
As a final comment, note that the restart files are always read directly from a procs_case directory. However, when the number of output restart files exceeds 2048, the restart files are saved in subdirectories of the root procs_case directory in order to reduce file contention, in a similar (but still different) way to what you implemented at some point in your version of phasta. The best strategy would be to write phasta files using mpi_io, for instance, so that we can store more than one part in a single file and avoid a large number of phasta files.&lt;br /&gt;
&lt;br /&gt;
For further partitioning on BG/Q machines a conversion to the native Parasolid model is required. The tool is located in: /Install/SCOREC.develop/scorec/test/cadToSim/cadToSim &lt;br /&gt;
and should be run from [Case directory]/convertParasolid2ParasolidNative/ on firebird.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Updated Chef version (2015/03/26)==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) MPI implementation&lt;br /&gt;
&lt;br /&gt;
A new version of chef has been implemented and does not rely on threads any more.&lt;br /&gt;
Instead, it is now based on a pure MPI implementation. &lt;br /&gt;
That means that there is an important change in how chef is called at runtime.&lt;br /&gt;
&lt;br /&gt;
With the previous threaded version, the number of MPI processes had to be equal to the number of input parts. &lt;br /&gt;
Chef was then in charge of starting a number of threads equal to the number of output parts, which was automatic.&lt;br /&gt;
&lt;br /&gt;
Since the pure MPI version of chef does not start threads any more, it now requires a number of MPI processes equal to the final number of output parts, not input parts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2) adapt.inp&lt;br /&gt;
&lt;br /&gt;
In the new version of chef, &amp;quot;numTotParts&amp;quot; in adapt.inp (which was used to specify the final number of output parts) has been replaced by &amp;quot;splitFactor&amp;quot;, which corresponds to the ratio of the number of output parts to the number of input parts. &lt;br /&gt;
If you set this parameter to 1, the mesh will not be split and the number of output parts will be equal to the number of input parts. &lt;br /&gt;
If you set this parameter to 2, each part of your input mesh will be split into 2 new sub-parts, etc.&lt;br /&gt;
Keep in mind that the number of MPI processes that needs to be requested for chef must therefore be equal to (number of input parts) * (splitFactor).&lt;br /&gt;
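The process-count rule above can be sketched in a small launch script. The part counts and the commented mpirun line are purely illustrative:&lt;br /&gt;

```shell
# Illustrative: derive the MPI process count for the pure-MPI chef
N_INPUT_PARTS=4   # number of parts in the input mesh
SPLIT_FACTOR=2    # splitFactor from adapt.inp
N_PROCS=$((N_INPUT_PARTS * SPLIT_FACTOR))
echo "$N_PROCS"   # one MPI process per output part
# mpirun -np "$N_PROCS" ./chef
```

With 4 input parts and a splitFactor of 2, chef must be launched on 8 processes, one per output part.&lt;br /&gt;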
&lt;br /&gt;
I have also removed the obsolete parameters in adapt.inp and saved a representative version of this file in /projects/tools/SCOREC.develop/runscripts/adapt.inp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3) Paths&lt;br /&gt;
&lt;br /&gt;
I have updated chef on the Viz nodes, Mira and Tukey so that it only relies on the more robust pure MPI implementation.&lt;br /&gt;
&lt;br /&gt;
On the viz nodes, use /projects/tools/SCOREC.develop/build-chefMPI-GNU-*/test/chef&lt;br /&gt;
For simplicity, this is the default version of the master branch coming directly from our github repository.&lt;br /&gt;
&lt;br /&gt;
On Tukey, use /home/mrasquin/SCOREC.develop/build-tukey-GNU-OptG-c2c360bc-mpi-*&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35-noblsnap means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is off during uniform refinement (UR).&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35 means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is on during UR.&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol33 means that the target imbalance for both the vtx and elem is 3%, and BL snapping is on during UR.&lt;br /&gt;
Note that these versions have been slightly modified w.r.t. the master branch. In particular, the imbalance target is not a parameter yet. Also, in Parma, HPS (Heavy Part Splitting) and FixDisconnectedPart are not called here, because the latest version of the diffusion algorithm with improved selection of (i) target parts for element exchange and (ii) elements is used instead.&lt;br /&gt;
&lt;br /&gt;
On Mira, use /home/mrasquin/SCOREC.develop/build-XL-OptG-c2c360bc-mpi-*&lt;br /&gt;
Similar comments apply to build-XL-OptG-c2c360bc-mpi-tol33, build-XL-OptG-c2c360bc-mpi-tol35 and build-XL-OptG-c2c360bc-mpi-tol35-noblsnap.&lt;br /&gt;
&lt;br /&gt;
Note that BL snapping is not called for a repartitioning of the mesh. It can only play a role during uniform refinement.&lt;br /&gt;
Consequently, if you do not request a UR in adapt.inp, then build-*-tol35 and build-*-tol35-noblsnap will behave the same way.&lt;br /&gt;
&lt;br /&gt;
In case you are wondering what the weird numbers in the name of the build directory are, they come from the git commit hash, which is a unique identifier associated with a git commit (this makes it easier to couple an executable with a version of the code).&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=539</id>
		<title>Chef/Mesh Partitioning</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=539"/>
				<updated>2015-03-29T05:57:36Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Mesh conversion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This webpage was inspired by a tutorial provided to Igor and his team at NCSU in order to set up two-phase flow test cases on a local cluster named Firebird at NCSU and on Cetus/Mira at ALCF.&lt;br /&gt;
At this time, do not expect anything but a series of copy-pastes from emails. &lt;br /&gt;
Please update this page for our viz nodes when you get a chance. &lt;br /&gt;
&lt;br /&gt;
Thanks, &lt;br /&gt;
&lt;br /&gt;
- Michel&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is a tutorial about how to partition the initial mesh and generate the phasta files on firebird (and other platforms, including Cetus/Mira) using Chef. This tutorial is rather long but should include everything you need.&lt;br /&gt;
The testcase to demonstrate the workflow is the familiar 3-way subchannel flow. The root path of this test case is /sgidata2/mrasquin/Models/subchannel. The parasolid model is located in /sgidata2/mrasquin/Models/subchannel/convertParasolid2ParasolidNative/geomFromSimmodeler_nat.xmt_txt.&lt;br /&gt;
The workflow that describes how to use Chef is now explained in the next sections.&lt;br /&gt;
&lt;br /&gt;
== Env variables==&lt;br /&gt;
&lt;br /&gt;
All the subsequent tools need:&lt;br /&gt;
* The fresh version of openmpi I built on firebird&lt;br /&gt;
* The latest Simmetrix library I installed in /Install on firebird.&lt;br /&gt;
&lt;br /&gt;
To update your paths, source the following file:&lt;br /&gt;
&amp;lt;code&amp;gt;/Install/SCOREC.develop/envLinux2014.sh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The env variables defined or updated in this env script include PATH and LD_LIBRARY_PATH. What is defined in this script should prevail over your settings, but I strongly suggest removing any redundancy that you may have, for instance, in your .bashrc. Note that I actually source this env file directly in my .bashrc so that I do not have to do it manually every time I log in to firebird. When you source it, it will also print the versions of gcc, openmpi and the simmodsuite lib that are set up.&lt;br /&gt;
&lt;br /&gt;
== BLMesherParallel ==&lt;br /&gt;
&lt;br /&gt;
Note that Simmetrix only supports matched faces for a single-part mesh, so the mesh must be built with one core. However, the initial mesh must already include some information related to the partitioning, even if the mesh only includes a single part, for format reasons. This additional information about the partitioning is required for the conversion of the mesh file from the Simmetrix format to the SCOREC MDS format that Chef can read.&lt;br /&gt;
The initial mesh for the 3-way subchannel was built in /sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0. Check the script named runBLMesherParallel.sh in this directory. Running ./runBLMesherParallel.sh will tell you the usage, that is:&lt;br /&gt;
Usage: ./runBLMesherParallel.sh &amp;lt;X&amp;gt; &amp;lt;Y&amp;gt; &amp;lt;Z&amp;gt;&lt;br /&gt;
 &amp;lt;X&amp;gt;: geometric model&lt;br /&gt;
 &amp;lt;Y&amp;gt;: attribute file&lt;br /&gt;
 &amp;lt;Z&amp;gt;: number of processors&lt;br /&gt;
&amp;lt;X&amp;gt; should be the parasolid model geomFromSimmodeler_nat.xmt_txt. &amp;lt;Y&amp;gt; should be BLattr.inp &amp;lt;Z&amp;gt; should be 1 here since we need to generate a single part mesh using a single core.&lt;br /&gt;
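Concretely, the invocation for this test case would therefore look as follows (a sketch assembled from the usage above; the script and files are the ones already described):&lt;br /&gt;

```text
./runBLMesherParallel.sh geomFromSimmodeler_nat.xmt_txt BLattr.inp 1
```
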
The BLattr.inp input file is the same as the one read by the old serial version of BLMesher, and BLMesherParallel can do whatever the old version of BLMesher can do. In addition, if your test case does not include any matched faces, you may try to mesh in parallel by specifying &amp;lt;Z&amp;gt; to be larger than 1. However, some meshing features are available only when BLMesherParallel is used with a single core, so it is always important to check the resulting mesh.&lt;br /&gt;
The resulting mesh is then stored in the directory named mesh.sms, which is a parameter hardcoded in the runBLMesherParallel.sh script. The log from BLMesherParallel is saved in BMesher.log, whereas the Simmetrix log is saved in mesh.log. Both filenames are also hardcoded in the script.&lt;br /&gt;
I also mentioned in previous discussions that Simmetrix has developed its own model format called geomsim. However, the boundary layer collapses near matched faces with this model format, which is not the case when we use the parasolid format. This issue has been reported to Simmetrix but until they can provide a fix, we are forced to start with the parasolid format when our test cases include matched faces.&lt;br /&gt;
&lt;br /&gt;
== Mesh conversion==&lt;br /&gt;
&lt;br /&gt;
Chef can only read the MDS format developed at SCOREC. Therefore, the Simmetrix mesh must first be converted to this format.&lt;br /&gt;
&lt;br /&gt;
This operation was carried out for the 3-way channel in &amp;lt;code&amp;gt;/sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/simMeshToMdsMesh&amp;lt;/code&amp;gt;. Simply run the script &amp;lt;code&amp;gt;./simMeshToMdsMesh.sh&amp;lt;/code&amp;gt;, which executes the &amp;quot;convert&amp;quot; executable. In the script, you can see that the convert executable reads 3 arguments:&lt;br /&gt;
# The '''input parasolid model''' named geom.xmt_txt, which points to geomFromSimmodeler_nat.x_t. Note that convert expects an .xmt_txt extension (or a .smd extension for the complete geomsim format).&lt;br /&gt;
# The '''input Simmetrix mesh''', named here parts.sms (for historical reasons; it can be renamed).&lt;br /&gt;
# The '''name of the output mds mesh directory''', which is mdsMesh_bz2 here. Note that this name is prepended with &amp;quot;bz2:&amp;quot;, which means that the output mds mesh file is compressed using bzip2. &amp;quot;bz2:&amp;quot; will not be part of the name of the output directory. If you do not specify &amp;quot;bz2:&amp;quot;, the mds mesh file will be saved in ascii format, which is a waste of space, so I suggest always prepending your directory name with &amp;quot;bz2:&amp;quot;. This will also apply later to the output mesh directory generated by Chef (see below).&lt;br /&gt;
&lt;br /&gt;
Note that convert needs to run with a number of processes (-np ##) equal to the number of input parts in the Simmetrix mesh. For cases that include matched faces, the Simmetrix mesh must include only one part, which is the reason why convert runs here with -np 1. But in other circumstances, convert can run in parallel if the Simmetrix mesh has already been partitioned into n parts with n&amp;gt;1 (for instance, a mesh generated in parallel with BLMesherParallel and/or partitioned with phParAdapt-Simmetrix).&lt;br /&gt;
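Based on the three arguments listed above, the command inside simMeshToMdsMesh.sh would look roughly like this (a sketch assuming the filenames given above; check the actual script for the exact form):&lt;br /&gt;

```text
mpirun -np 1 convert geom.xmt_txt parts.sms bz2:mdsMesh_bz2/
```
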
&lt;br /&gt;
== Boundary and initial conditions (spj file)==&lt;br /&gt;
&lt;br /&gt;
Before running Chef for mesh operations such as uniform refinement, tetrahedronization and partitioning, we need to define the BCs and ICs for the generation of the phasta files. Most of the attributes you are familiar with from the Simmodeler GUI can be specified in the spj file. For the 3-way channel flow, see the spj file located in /sgidata2/mrasquin/Models/subchannel/subchannel_3way/Simplified_SPJ_file/geom.spj. Each line corresponds to one attribute that applies to one face. The structure is the following: &amp;lt;attribute_name&amp;gt;: &amp;lt;face_id&amp;gt; &amp;lt;dimension: 2 for a face attribute in 2D, 3 for the initial conditions that apply to the 3D domain. 1D and 0D attributes are also allowed for lines and vertices if needed&amp;gt; &amp;lt;attribute list, typically magnitude and direction if this applies&amp;gt;. Note that the syntax is strict: no empty lines; each line must be either a comment, which starts with the # character, or an attribute; there must be exactly one space after the colon character; and there must be exactly one space between any two numbers.&lt;br /&gt;
Note that in this example, a zero &amp;quot;traction vector&amp;quot; attribute is specified on the periodic faces parallel to the length of the channel. It is wrong to specify such an attribute on these periodic faces for a 3-way channel, but this was inherited from the 1-way periodic channel where these faces were slip walls instead of periodic faces. I will try to update my test cases in the future. But because we now have continuous integration tools that run every night to verify the Chef code, I would need to update all the cases if I modified the spj file now. So double check the attributes that you need for this model and consider the existing spj file as a source of inspiration rather than the correct spj file for production runs.&lt;br /&gt;
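To make the syntax concrete, here is a purely illustrative fragment following the structure described above; the attribute names, face ids, dimensions and values are hypothetical and are not taken from the actual geom.spj:&lt;br /&gt;

```text
# comment lines start with the # character
initial velocity: 1 3 1.0 1.0 0.0 0.0
natural pressure: 82 2 0.0
```

Each line gives the attribute name, a colon followed by exactly one space, the entity id, the dimension (3 for the domain, 2 for a face), and then the value list (here a magnitude optionally followed by a direction).&lt;br /&gt;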
&lt;br /&gt;
== Chef==&lt;br /&gt;
&lt;br /&gt;
A few rules must be followed to run Chef. First, the number of mpi processes must be equal to the number of input parts. Second, Chef is threaded with openmp, and the total number of output parts after partitioning should be at most equal to the total number of available hardware threads of your machine/allocation. On BGQ, there are 4 hardware threads per core. On Linux platforms such as firebird, the number of hardware threads corresponds to the number of available cores. That said, we have observed that if the number of output parts is equal to the total number of available hardware threads, Chef can hang. It is therefore safer to limit the number of output parts to fewer than the number of available hardware threads. Consequently, on firebird, we should not try to partition a mesh to more than 16 parts. The next mesh operations will have to take place on Tukey and Cetus/Mira.&lt;br /&gt;
The first example of a partitioning with Chef can be found in /sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch. With my naming convention, &amp;quot;4-1-Chef-PartLocal-Scratch&amp;quot; can be decomposed as follows: the first number corresponds to the number of output parts; the second number corresponds to the number of input parts; Chef means this mesh was treated with this program (as opposed to phParAdapt, phTest, etc., which are previous executables that we used for a similar purpose); PartLocal means the mesh is partitioned locally; and Scratch means that the initial solution in the resulting phasta files is generated entirely from the spj file defined in 4). In summary, Chef was used in this directory to partition a single-part mesh into 4 parts, and the solution in the phasta files was generated directly from scratch using the spj file.&lt;br /&gt;
The script to run Chef is named runChef.sh in this directory and simply calls the executable. Chef reads everything it needs from two input files called numstart.dat and adapt.inp.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''a) numstart.dat'''&lt;br /&gt;
&lt;br /&gt;
Instead of building the initial solution from scratch using the initial conditions defined in the spj file, the user can migrate an existing solution stored in a set of restart files that were saved from a previous phasta simulation. Numstart.dat contains the time step stamp of the input restart files to read in order to migrate a solution.&lt;br /&gt;
&lt;br /&gt;
'''b) adapt.inp'''&lt;br /&gt;
&lt;br /&gt;
This input file contains all the other parameters Chef expects. Note that many of these parameters have been inherited from the old phParAdapt and are currently obsolete or unused. In what follows, all the parameters available in adapt.inp are listed, and the critical parameters are in bold. Any line that starts with # is ignored.&lt;br /&gt;
&lt;br /&gt;
* '''globalP''': obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''timeStepNumber''': this is the time step of the output phasta files that will be generated by Chef. This stamp can be different from the number specified in numstart.dat, which can be practical in some situations. But most of the time, this number is set equal to what is specified in numstart.dat.&lt;br /&gt;
&lt;br /&gt;
* '''ensa_dof''': this corresponds to the number of degrees of freedom in the solution field of the output restart file. Note that it should correspond to the number of initial conditions specified in the spj file if the solution is built from scratch. When the solution is migrated from existing restart files, it should also correspond to the number of dof in the existing solution field. Here, this number is set to 5 for single phase flow with no turbulence model.&lt;br /&gt;
&lt;br /&gt;
* '''attributeFileName''': path to the spj file for the boundary and potentially initial conditions&lt;br /&gt;
&lt;br /&gt;
* '''modelFileName''': path to the geometric model (can be a parasolid or geomsim model on Linux but only geomsim is available on BGQ).&lt;br /&gt;
&lt;br /&gt;
* '''meshFileName''': path to the directory that includes the input mesh files in the SCOREC MDS format. Note that the path must end with a /. This path can also be prepended with &amp;quot;bz2:&amp;quot; to tell the mesh file reader that the files have been compressed. This follows the same convention as mentioned in 3).&lt;br /&gt;
&lt;br /&gt;
* '''outMeshFileName''': the name of the directory that will contain the resulting output mesh files. Note again the trailing / character. The same convention with the &amp;quot;bz2:&amp;quot; keyword applies.&lt;br /&gt;
&lt;br /&gt;
* '''restartFileName''': this gives the path to the restart files that need to be read in when solution migration is activated. In this case, the path should look, for instance, like &amp;quot;../4-procs_case/restart&amp;quot;. The phasta reader will then add the time step stamp to the name of this restartFileName variable, as well as the file #. When there is no solution migration, like in this example, this parameter can be commented out for the sake of clarity.&lt;br /&gt;
&lt;br /&gt;
* '''adaptFlag''': if 0, no mesh adaptation will take place. But if set to 1 and if AdaptStrategy is set to 7, then the mesh will be uniformly refined. Note that adaptation only works with a mixed mesh (with wedges in the BL) and not with an all-tet mesh. Tetrahedronization should therefore take place after uniform refinement. Right now, the mixed mesh gets uniformly refined everywhere, including the BL, but it is possible to refine uniformly outside the BL only with some light modifications of the code. In the future, we hope to have other adaptation strategies in place in Chef based on local error indicators. If interested in these strategies, then phParAdapt-Simmetrix must be used. If adaptFlag is set to 1, note also that SolutionMigration must also be set to 1 (see below for this parameter) and the path to the restart files specified.&lt;br /&gt;
&lt;br /&gt;
* rRead: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* rStart: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''AdaptStrategy''': this parameter is read if adaptFlag is 1. When set to 7, uniform refinement of a mixed mesh takes place. This is currently the only strategy tested in Chef. If interested in other, more sophisticated adaptation strategies, phParAdapt-Simmetrix must be used for now.&lt;br /&gt;
&lt;br /&gt;
* '''RecursiveUR''': if AdaptStrategy is set to 7, Chef offers the possibility to do recursive uniform refinement within the same job. Beware of the memory consumption if you set this value to more than 1, since the mesh can grow quickly.&lt;br /&gt;
&lt;br /&gt;
* Periodic: obsolete. Periodicity in the mesh and in the solution is now treated automatically as long as i) the mesh built with BLMesher is periodic (i.e. the location of the mesh vertices on periodic faces is the same) and ii) the spj file contains the correct &amp;quot;periodic slave&amp;quot; attributes.&lt;br /&gt;
&lt;br /&gt;
* prCD: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* timing: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* outputFormat: obsolete. Phasta files are saved by default in binary format.&lt;br /&gt;
&lt;br /&gt;
* internalBCNodes: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* WRITEASC: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* phastaIO: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''numTotParts''': final number of parts. If numTotParts is larger than the number of Chef processes (which is equal to the number of input parts), the mesh will be partitioned.&lt;br /&gt;
&lt;br /&gt;
* '''elementsPerMigration''': in order to reduce the memory footprint of Chef, the user can reduce the default number of elements that can be migrated at a time during partitioning or partition improvement.&lt;br /&gt;
&lt;br /&gt;
* '''SolutionMigration''': activates the migration of the solution from an existing set of restart files. In this case, the path to the phasta files that contain the solution to migrate must be specified through the restartFileName parameter (see above). If the mesh is refined, the solution that is migrated will be interpolated to the new vertices of the mesh. Note also that if the solution is migrated, then the spj file should contain NO information about the initial conditions. Indeed, any information mentioned in the spj file will prevail. Therefore, if the spj file contains information about the initial conditions, the solution migrated from existing restart files will be overwritten and the resulting phasta files will again include the scratch solution specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
* '''DisplacementMigration''': Also migrates the displacement field along with the solution field for other adaptation strategies. Not used for AdaptStrategy 7, so it can be ignored for now.&lt;br /&gt;
&lt;br /&gt;
* isReorder: obsolete/unused. Reordering for better cache performance is now applied by default to both the phasta files and mesh files.&lt;br /&gt;
&lt;br /&gt;
* '''Tetrahedronize''': tetrahedronizes a mixed mesh if set to 1. Note that if both AdaptFlag and Tetrahedronize are set to 1, adaptation of the input mixed mesh will take place before tetrahedronization. In all cases, partitioning is always the last mesh operation. But again, an all-tet mesh cannot be further refined, so tetrahedronization should not take place too early in the workflow in order to keep the option of future adaptation (for which enough aggregated memory is also needed).&lt;br /&gt;
&lt;br /&gt;
* numSplit: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''LocalPtn''': local partitioning if set to 1, global partitioning if set to 0. Currently, only local partitioning is implemented in Chef and has been shown to be sufficient so far.&lt;br /&gt;
&lt;br /&gt;
* '''RecursivePtn''': should always be set to 1. In the past, this parameter allowed recursive partitioning steps in phParAdapt. The code will stop or crash if this parameter is not 1.&lt;br /&gt;
&lt;br /&gt;
* RecursivePtnStep: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''partitionMethod''': Currently, the GRAPH method for local partitioning is hard-coded in one of the Chef routines.&lt;br /&gt;
&lt;br /&gt;
* '''ParmaPtn''': If set to 1, the load balance in terms of both elements and vertices per part is further improved with Parma after the partitioning. It is strongly suggested to keep ParmaPtn set to 1.&lt;br /&gt;
&lt;br /&gt;
* '''dwalMigration''': This parameter is useful in case the distance to the wall for a turbulence model such as RANS or DDES has already been computed by phasta. In this case, it is possible to also migrate this field along with the solution field. SolutionMigration must therefore be set to 1 for that purpose, since the dwal field cannot be migrated alone without the solution field.&lt;br /&gt;
&lt;br /&gt;
* '''buildMapping''': This computes the vertex mapping between the input and output mesh. It is strongly suggested to keep this parameter always set to 1. Otherwise, you will not be able to reduce your solution from your final partitioning down to the initial or any intermediate mesh (we have developed a tool for that purpose), which can be catastrophic if you are interested in local adaptation based on an error indicator. Note that building the mapping does not make sense if the mesh is uniformly refined so it should be set to 0 in this case.&lt;br /&gt;
&lt;br /&gt;
* '''initBubbles''': If this flag is set, Chef will use the external bubble information file 'bubbles.inp' to initialize the level set distance field.&lt;br /&gt;
&lt;br /&gt;
The second example of a partitioning with Chef can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-Tet-PartLocal-SolMgr. For this case, based on the naming convention of 8-4-Chef-Tet-PartLocal-SolMgr (and the parameters specified in adapt.inp and numstart.dat),&lt;br /&gt;
* the number of output parts requested is 8, &lt;br /&gt;
* the number of input parts is 4 (note &amp;quot;-np 4&amp;quot; in the runChef.sh script),&lt;br /&gt;
* the input mixed mesh is first tetrahedronized before being partitioned. &lt;br /&gt;
* the solution in the resulting phasta files is migrated from the previous Chef run. &lt;br /&gt;
Note that the spj file is different for this second example and the initial conditions have been commented out in order not to overwrite the solution that is migrated from the previous Chef run.&lt;br /&gt;
&lt;br /&gt;
The third and final example can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-UR2-Tet-PartLocal-SolMgr. In this directory 8-4-Chef-UR2-Tet-PartLocal-SolMgr, Chef &lt;br /&gt;
* reads a four part mesh, &lt;br /&gt;
* applies a double recursive uniform refinement, &lt;br /&gt;
* tetrahedronizes the resulting mixed mesh that has been uniformly refined twice, &lt;br /&gt;
* partitions the resulting 4-part all-tet uniformly refined mesh into 8 parts,&lt;br /&gt;
* migrates and interpolates the solution read from the existing restart files coming from the first example.&lt;br /&gt;
&lt;br /&gt;
As a final comment, note that the restart files are always read directly from a procs_case directory. However, when the number of output restart files exceeds 2048, the restart files are then saved in subdirectories of the root procs_case directory in order to reduce file contention, similarly to (though not identically with) what you have implemented at some point in your version of phasta. The best strategy would be to write phasta files using mpi_io, for instance, so that we can store more than one part in a single file and avoid a large number of phasta files.&lt;br /&gt;
&lt;br /&gt;
For further partitioning on BG/Q machines a conversion to the native Parasolid model is required. The tool is located in: /Install/SCOREC.develop/scorec/test/cadToSim/cadToSim &lt;br /&gt;
and should be run from [Case directory]/convertParasolid2ParasolidNative/ on firebird.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Updated Chef version (2015/03/26)==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) MPI implementation&lt;br /&gt;
&lt;br /&gt;
A new version of chef has been implemented and does not rely on threads any more.&lt;br /&gt;
Instead, it is now based on a pure MPI implementation. &lt;br /&gt;
That means that there is an important change in how chef is called at runtime.&lt;br /&gt;
&lt;br /&gt;
With the previous threaded version, the number of MPI processes had to be equal to the number of input parts. &lt;br /&gt;
Chef was then in charge of starting a number of threads equal to the number of output parts, which was automatic.&lt;br /&gt;
&lt;br /&gt;
Since the pure MPI version of chef does not start threads any more, it now requires a number of MPI processes equal to the final number of output parts, and not input parts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2) adapt.inp&lt;br /&gt;
&lt;br /&gt;
In the new version of chef, &amp;quot;numTotParts&amp;quot; in adapt.inp (which was used to specify the final number of output parts) has been replaced by &amp;quot;splitFactor&amp;quot;, which corresponds to the ratio of the number of output parts with the number of input parts. &lt;br /&gt;
If you set this parameter to 1, the mesh will not be split and the number of output parts will be equal to the number of input parts. &lt;br /&gt;
If you set this parameter to 2, each part of your input mesh will be split in 2 new sub-parts, etc&lt;br /&gt;
Keep in mind that the number of MPI processes that needs to be requested for chef must therefore be equal to (number of input parts) * (splitFactor).&lt;br /&gt;
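&lt;br /&gt;
For example (the part counts here are only illustrative): with 4 input parts and splitFactor set to 2, chef produces 4 * 2 = 8 output parts, so it must be launched with 8 MPI processes, e.g.&lt;br /&gt;
&amp;lt;code&amp;gt;mpirun -np 8 chef&amp;lt;/code&amp;gt;&lt;br /&gt;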
&lt;br /&gt;
I have also removed the obsolete parameters in adapt.inp and saved a representative version of this file in /projects/tools/SCOREC.develop/runscripts/adapt.inp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3) Paths&lt;br /&gt;
&lt;br /&gt;
I have updated chef on the Viz nodes, Mira and Tukey so that it only relies on the more robust pure MPI implementation.&lt;br /&gt;
&lt;br /&gt;
On the viz nodes, use /projects/tools/SCOREC.develop/build-chefMPI-GNU-*/test/chef&lt;br /&gt;
For simplicity, this is the default version of the master branch coming directly from our github repository.&lt;br /&gt;
&lt;br /&gt;
On Tukey, use /home/mrasquin/SCOREC.develop/build-tukey-GNU-OptG-c2c360bc-mpi-*&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35-noblsnap means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is off during uniform refinement (UR).&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35 means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is on during UR.&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol33 means that the target imbalance for both the vtx and elem is 3%, and BL snapping is on during UR.&lt;br /&gt;
Note that these versions have been slightly modified w.r.t. the master branch. In particular, the imbalance target is not a parameter yet. Also, in Parma, HPS (Heavy Part Splitting) and FixDisconnectedPart are not called here, because the latest version of the diffusion algorithm, with improved selection of (i) target parts for element exchange and (ii) elements to exchange, is used instead.&lt;br /&gt;
&lt;br /&gt;
On Mira, use /home/mrasquin/SCOREC.develop/build-XL-OptG-c2c360bc-mpi-*&lt;br /&gt;
Similar comments apply to build-XL-OptG-c2c360bc-mpi-tol33, build-XL-OptG-c2c360bc-mpi-tol35 and build-XL-OptG-c2c360bc-mpi-tol35-noblsnap.&lt;br /&gt;
&lt;br /&gt;
Note that BL snapping is not called for a repartitioning of the mesh. It can only play a role during uniform refinement.&lt;br /&gt;
Consequently, if you do not request a UR in adapt.inp, then build-*-tol35 and build-*-tol35-noblsnap will behave the same way.&lt;br /&gt;
&lt;br /&gt;
In case you are wondering what the numbers in the names of the build directories are, they come from the git commit hash, a unique identifier associated with a git commit (which makes it easier to couple an executable with a specific version of the code).&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=538</id>
		<title>Chef/Mesh Partitioning</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=538"/>
				<updated>2015-03-29T05:52:03Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Env variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This webpage is inspired from a tutorial provided to Igor and his team at NCSU in order to set up two phase flow test cases on a local cluster named Firebird at NCSU and Cetus/Mira at ALCF.&lt;br /&gt;
At this time, do not expect anything more than a series of copy-pastes from emails. &lt;br /&gt;
Please update this page for our viz nodes when you get a chance. &lt;br /&gt;
&lt;br /&gt;
Thanks, &lt;br /&gt;
&lt;br /&gt;
- Michel&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is a tutorial about how to partition the initial mesh and generate the phasta files on firebird (and other platforms, including Cetus/Mira) using Chef. This tutorial is rather long but should include everything you need.&lt;br /&gt;
The testcase to demonstrate the workflow is the familiar 3-way subchannel flow. The root path of this test case is	/sgidata2/mrasquin/Models/subchannel. The parasolid model is located in /sgidata2/mrasquin/Models/subchannel/convertParasolid2ParasolidNative/geomFromSimmodeler_nat.xmt_txt.&lt;br /&gt;
The workflow that describes how to use Chef is now explained in the next sections.&lt;br /&gt;
&lt;br /&gt;
== Env variables==&lt;br /&gt;
&lt;br /&gt;
All the subsequent tools need&lt;br /&gt;
* The fresh version of openmpi I built on firebird&lt;br /&gt;
* The latest Simmetrix library I installed in /Install on firebird.&lt;br /&gt;
&lt;br /&gt;
To update your paths, source the following file:&lt;br /&gt;
&amp;lt;code&amp;gt;/Install/SCOREC.develop/envLinux2014.sh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The env variables defined or updated in this env script include PATH and LD_LIBRARY_PATH. What is defined in this script should prevail over your settings, but I strongly suggest removing any redundancy that you may have, for instance, in your .bashrc. Note that I actually source this env file directly in my .bashrc so that I do not have to do it manually every time I log in to firebird. When you source it, it will also print the versions of gcc, openmpi and the simmodsuite lib that are set up.&lt;br /&gt;
&lt;br /&gt;
== BLMesherParallel ==&lt;br /&gt;
&lt;br /&gt;
Note that Simmetrix only supports matched faces for a single-part mesh, so the mesh must be built on one core. However, the initial mesh must already include some information related to the partitioning, even if the mesh only includes a single part, for format reasons. This additional information about the partitioning is required for the conversion of the mesh file from the Simmetrix format to the SCOREC MDS format that Chef can read.&lt;br /&gt;
The initial mesh for the 3-way subchannel was built in /sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0. Check the script named runBLMesherParallel.sh in this directory. Running ./runBLMesherParallel.sh will tell you the usage, that is:&lt;br /&gt;
Usage: ./runBLMesherParallel.sh &amp;lt;X&amp;gt; &amp;lt;Y&amp;gt; &amp;lt;Z&amp;gt;&lt;br /&gt;
 &amp;lt;X&amp;gt;: geometric model&lt;br /&gt;
 &amp;lt;Y&amp;gt;: attribute file&lt;br /&gt;
 &amp;lt;Z&amp;gt;: number of processors&lt;br /&gt;
&amp;lt;X&amp;gt; should be the parasolid model geomFromSimmodeler_nat.xmt_txt. &amp;lt;Y&amp;gt; should be BLattr.inp. &amp;lt;Z&amp;gt; should be 1 here since we need to generate a single-part mesh using a single core.&lt;br /&gt;
The BLattr.inp input file is the same as the one read by the old serial version of BLMesher. But BLMesherParallel can do whatever the old version of BLMesher can do. In addition, if your test case does not include any matched face, you may try to mesh in parallel by specifying &amp;lt;Z&amp;gt; to be larger than 1. However, some meshing features are available only when BLMesherParallel is used with a single core so it is always important to check the resulting mesh.&lt;br /&gt;
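Putting the arguments above together, the invocation for this single-part case looks like:&lt;br /&gt;
&amp;lt;code&amp;gt;./runBLMesherParallel.sh geomFromSimmodeler_nat.xmt_txt BLattr.inp 1&amp;lt;/code&amp;gt;&lt;br /&gt;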
The resulting mesh is then stored in the directory named mesh.sms, which is a parameter hardcoded in the runBLMesherParallel.sh script. The log from BLMesherParallel is saved in BMesher.log, whereas the Simmetrix log is saved in mesh.log. Both filenames are also hardcoded in the script.&lt;br /&gt;
I also mentioned in previous discussions that Simmetrix has developed its own model format called geomsim. However, the boundary layer collapses near matched faces with this model format, which is not the case when we use the parasolid format. This issue has been reported to Simmetrix but until they can provide a fix, we are forced to start with the parasolid format when our test cases include matched faces.&lt;br /&gt;
&lt;br /&gt;
== Mesh conversion==&lt;br /&gt;
&lt;br /&gt;
Chef can only read the MDS format developed at SCOREC. Therefore, the Simmetrix mesh must first be converted to this format.&lt;br /&gt;
This operation was carried out for the 3-way channel in /sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/simMeshToMdsMesh. Simply run the script ./simMeshToMdsMesh.sh, which executes the &amp;quot;convert&amp;quot; executable. In the script, you can see that the convert executable reads 3 arguments:&lt;br /&gt;
* the input parasolid model, named geom.xmt_txt, which points to geomFromSimmodeler_nat.x_t. Note that convert expects an .xmt_txt extension (or an .smd extension for the complete geomsim format),&lt;br /&gt;
* the input Simmetrix mesh, named here parts.sms (for historical reasons, but it can be renamed),&lt;br /&gt;
* the name of the output mds mesh directory, which is mdsMesh_bz2 here.&lt;br /&gt;
Note that the output directory name is prepended by &amp;quot;bz2:&amp;quot;, which means that the output mds mesh file is compressed using bzip2. &amp;quot;bz2:&amp;quot; will not be part of the name of the output directory. If you do not specify &amp;quot;bz2:&amp;quot;, the mds mesh file will be saved in ascii format, which is a waste of space, so I suggest always prepending your directory name with &amp;quot;bz2:&amp;quot;. This will also apply later to the output mesh directory generated by Chef (see below).&lt;br /&gt;
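Based on the three arguments described above, the invocation inside simMeshToMdsMesh.sh should look roughly like this (a sketch only; check the actual script for the exact form):&lt;br /&gt;
&amp;lt;code&amp;gt;mpirun -np 1 convert geom.xmt_txt parts.sms bz2:mdsMesh_bz2&amp;lt;/code&amp;gt;&lt;br /&gt;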
Note that convert needs to run with a number of processes (-np ##) equal to the number of input parts in the Simmetrix mesh. For cases that include matched faces, the Simmetrix mesh must include only one part, which is the reason why convert runs here with -np 1. But in other circumstances, convert can run in parallel if the Simmetrix mesh has already been partitioned in n parts with n&amp;gt;1 (for instance, a mesh generated in parallel with BLMesherParallel and/or partitioned with phParAdapt-Simmetrix).&lt;br /&gt;
&lt;br /&gt;
== Boundary and initial conditions (spj file)==&lt;br /&gt;
&lt;br /&gt;
Before running Chef for mesh operations such as uniform refinement, tetrahedronization and partitioning, we need to define the BCs and ICs for the generation of the phasta files. Most of the attributes you are familiar with from the Simmodeler GUI can be specified in the spj file. For the 3-way channel flow, see the spj file located in /sgidata2/mrasquin/Models/subchannel/subchannel_3way/Simplified_SPJ_file/geom.spj. Each line corresponds to one attribute that applies to one face. The structure is the following: &amp;lt;attribute_name&amp;gt;: &amp;lt;face_id&amp;gt; &amp;lt;dimension: 2 for a face attribute in 2D, 3 for the initial conditions that apply to the 3D domain; 1D and 0D attributes are also allowed for lines and vertices if needed&amp;gt; &amp;lt;attribute list, typically magnitude and direction if this applies&amp;gt;. Note that the syntax is strict:&lt;br /&gt;
* No empty lines. Each line should be either a comment, which starts with the # character, or an attribute.&lt;br /&gt;
* There must be one single space after the colon character.&lt;br /&gt;
* There must be one single space between any two numbers.&lt;br /&gt;
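As an illustration of this syntax only (the attribute names, face ids and values below are hypothetical and NOT taken from geom.spj):&lt;br /&gt;
&amp;lt;code&amp;gt;# hypothetical velocity attribute on face 42, magnitude then direction&amp;lt;br&amp;gt;velocity: 42 2 1.0 1.0 0.0 0.0&amp;lt;br&amp;gt;# hypothetical initial condition applied to the 3D domain&amp;lt;br&amp;gt;initial velocity: 1 3 1.0 1.0 0.0 0.0&amp;lt;/code&amp;gt;&lt;br /&gt;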
Note that in this example, a zero &amp;quot;traction vector&amp;quot; attribute is specified on the periodic faces parallel to the length of the channel. It is wrong to specify such an attribute on these periodic faces for a 3-way channel, but this was inherited from the 1-way periodic channel where these faces were slip walls instead of periodic faces. I will try to update my test cases in the future. But because we now have continuous integration tools that run every night to verify the Chef code, I would need to update all the cases if I modified the spj file now. So double-check the attributes that you need for this model and consider the existing spj file as a source of inspiration rather than the correct spj file for production runs.&lt;br /&gt;
&lt;br /&gt;
== Chef==&lt;br /&gt;
&lt;br /&gt;
A few rules must be followed to run Chef. First, the number of MPI processes must be equal to the number of input parts. Second, Chef is threaded with OpenMP and the total number of output parts after partitioning should be at most equal to the total number of available hardware threads of your machine/allocation. On BGQ, there are 4 hardware threads per core. On Linux platforms such as firebird, the number of hardware threads corresponds to the number of available cores. That said, we have observed that Chef can hang if the number of output parts is equal to the total number of available hardware threads. It is therefore safer to limit the number of output parts to fewer than the number of available hardware threads: on firebird, we should not try to partition a mesh into more than 16 parts. The next mesh operations will have to take place on Tukey and Cetus/Mira.&lt;br /&gt;
The first example of a partitioning with Chef can be found in /sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch. With my naming convention, &amp;quot;4-1-Chef-PartLocal-Scratch&amp;quot; can be decomposed as follows:&lt;br /&gt;
* the first number corresponds to the number of output parts,&lt;br /&gt;
* the second number corresponds to the number of input parts,&lt;br /&gt;
* Chef means this mesh was treated with this program (as opposed to phParAdapt, phTest, etc., which are previous executables that we used for a similar purpose),&lt;br /&gt;
* PartLocal means the mesh is partitioned locally,&lt;br /&gt;
* Scratch means that the initial solution in the resulting phasta files is generated entirely from the spj file described above.&lt;br /&gt;
In summary, Chef was used in this directory to partition a single-part mesh into 4 parts and the solution in the phasta files was generated directly from scratch using the spj file.&lt;br /&gt;
The script to run Chef is named runChef.sh in this directory and simply calls the executable. Chef reads all it needs from two input files called numstart.dat and adapt.inp.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''a) numstart.dat'''&lt;br /&gt;
&lt;br /&gt;
Instead of building the initial solution from scratch using the initial conditions defined in the spj file, the user can migrate an existing solution stored in a set of restart files that were saved from a previous phasta simulation. Numstart.dat contains the time step stamp of the input restart files to read in order to migrate a solution.&lt;br /&gt;
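For instance, to migrate the solution stored in restart files saved at time step 200 (a hypothetical stamp chosen for illustration), numstart.dat would typically contain just that stamp:&lt;br /&gt;
&amp;lt;code&amp;gt;200&amp;lt;/code&amp;gt;&lt;br /&gt;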
&lt;br /&gt;
'''b) adapt.inp'''&lt;br /&gt;
&lt;br /&gt;
This input file contains all the other parameters Chef expects. Note that many of these parameters have been inherited from the old phParAdapt and are currently obsolete or unused. In what follows, all the parameters available in adapt.inp are listed and the critical parameters are in bold. Any line that starts with # is ignored.&lt;br /&gt;
&lt;br /&gt;
* globalP: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''timeStepNumber''': this is the time step of the output phasta files that will be generated by Chef. This stamp can be different from the number specified in numstart.dat which can be practical in some situations. But most of the time, this number is set equal to what is specified in numstart.dat&lt;br /&gt;
&lt;br /&gt;
* '''ensa_dof''': this corresponds to the number of degrees of freedom in the solution field of the output restart file. Note that it should correspond to the number of initial conditions specified in the spj file if the solution is built from scratch. When the solution is migrated from existing restart files, it should also correspond to the number of dof in the existing solution field. Here, this number is set to 5 for single phase flow with no turbulence model.&lt;br /&gt;
&lt;br /&gt;
* '''attributeFileName''': path to the spj file for the boundary and potentially initial conditions&lt;br /&gt;
&lt;br /&gt;
* '''modelFileName''': path to the geometric model (can be a parasolid or geomsim model on Linux but only geomsim is available on BGQ).&lt;br /&gt;
&lt;br /&gt;
* '''meshFileName''': path to the directory that includes the input mesh files under the SCOREC MDS format. Note that the path must end with a /. This path can also be prepended by &amp;quot;bz2:&amp;quot; to tell the mesh file reader that the files have been compressed. This follows the same convention as mentioned in the Mesh conversion section above.&lt;br /&gt;
&lt;br /&gt;
* '''outMeshFileName''': the name of the directory that will include the resulting output mesh files. Note again the trailing / character. The same convention with the &amp;quot;bz2:&amp;quot; keyword applies.&lt;br /&gt;
&lt;br /&gt;
* '''restartFileName''': this gives the path to the restart files that need to be read in when solution migration is activated. In this case, the path should look, for instance, like &amp;quot;../4-procs_case/restart&amp;quot;. The phasta reader will then add the time step stamp to the name of this restartFileName variable, as well as the file number. When there is no solution migration, as in this example, this parameter can be commented out for the sake of clarity.&lt;br /&gt;
&lt;br /&gt;
* '''adaptFlag''': if 0, no mesh adaptation will take place. If set to 1 and if AdaptStrategy is set to 7, the mesh will be uniformly refined. Note that adaptation only works with a mixed mesh (with wedges in the BL) and not with an all-tet mesh. Tetrahedronization should therefore take place after uniform refinement. Right now, the mixed mesh gets uniformly refined everywhere, including the BL, but it is possible to refine uniformly outside the BL only, with some light modifications of the code. In the future, we hope to have other adaptation strategies in place in Chef based on a local error indicator. If interested in such strategies now, phParAdapt-Simmetrix must be used. If adaptFlag is set to 1, note also that SolutionMigration must also be set to 1 (see below for this parameter) and the path to the restart files specified.&lt;br /&gt;
&lt;br /&gt;
* rRead: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* rStart: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''AdaptStrategy''': This parameter is read if adaptFlag is 1. When set to 7, uniform refinement of a mixed mesh takes place. This is currently the only strategy tested in Chef. If interested in other, more sophisticated adaptation strategies, phParAdapt-Simmetrix must be used for now.&lt;br /&gt;
&lt;br /&gt;
* '''RecursiveUR''': if AdaptStrategy is set to 7, Chef offers the possibility to do recursive uniform refinement within the same job. Beware of the memory consumption if you set this value to more than 1, since the mesh can grow quickly.&lt;br /&gt;
&lt;br /&gt;
* Periodic: obsolete. Periodicity in the mesh and in the solution is now treated automatically, as long as i) the mesh built with BLMesher is periodic (i.e. the location of the mesh vertices on periodic faces is the same) and ii) the spj file contains the correct &amp;quot;periodic slave&amp;quot; attributes.&lt;br /&gt;
&lt;br /&gt;
* prCD: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* timing: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* outputFormat: obsolete. Phasta files are saved by default in binary format.&lt;br /&gt;
&lt;br /&gt;
* internalBCNodes: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* WRITEASC: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* phastaIO: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''numTotParts''': Final number of parts. If numTotParts is larger than the number of Chef processes, which is equal to the number of input parts, the mesh will be partitioned.&lt;br /&gt;
&lt;br /&gt;
* '''elementsPerMigration''': In order to reduce the memory footprint of Chef, the user can reduce the default number of elements that can be migrated at a time during partitioning or partition improvement.&lt;br /&gt;
&lt;br /&gt;
* '''SolutionMigration''': Activates the migration of the solution from an existing set of restart files. In this case, the path to the phasta files that contain the solution to migrate must be specified through the restartFileName parameter (see above). If the mesh is refined, the migrated solution will be interpolated to the new vertices of the mesh. Note also that if the solution is migrated, then the spj file should contain NO information about the initial conditions. Indeed, any information mentioned in the spj file will prevail: if the spj file contains information about the initial conditions, the solution migrated from existing restart files will be overwritten and the resulting phasta files will again include the scratch solution specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
* '''DisplacementMigration''': Also migrates the displacement field along with the solution field for other adaptation strategies. Not used for AdaptStrategy 7, so it can be ignored for now.&lt;br /&gt;
&lt;br /&gt;
* isReorder: obsolete/unused. Reordering for better cache performance is now applied by default to both the phasta files and mesh files.&lt;br /&gt;
&lt;br /&gt;
* '''Tetrahedronize''': tetrahedronizes a mixed mesh if set to 1. Note that if both AdaptFlag and Tetrahedronize are set to 1, adaptation of the input mixed mesh will take place before tetrahedronization. In all cases, partitioning is always the last mesh operation. But again, an all-tet mesh cannot be further refined, so tetrahedronization should not take place too early in the workflow in order to keep the option of future adaptation (for which enough aggregated memory is also needed).&lt;br /&gt;
&lt;br /&gt;
* numSplit: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''LocalPtn''': local partitioning if set to 1, global partitioning if set to 0. Currently, only local partitioning is implemented in Chef and has been shown to be sufficient so far.&lt;br /&gt;
&lt;br /&gt;
* '''RecursivePtn''': should always be set to 1. In the past, this parameter allowed recursive partitioning steps in phParAdapt. The code will stop or crash if this parameter is not 1.&lt;br /&gt;
&lt;br /&gt;
* RecursivePtnStep: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''partitionMethod''': Currently, the GRAPH method for local partitioning is hard-coded in one of the Chef routines.&lt;br /&gt;
&lt;br /&gt;
* '''ParmaPtn''': If set to 1, the load balance in terms of both elements and vertices per part is further improved with Parma after the partitioning. It is strongly suggested to keep ParmaPtn set to 1.&lt;br /&gt;
&lt;br /&gt;
* '''dwalMigration''': This parameter is useful in case the distance to the wall for a turbulence model such as RANS or DDES has already been computed by phasta. In this case, it is possible to also migrate this field along with the solution field. SolutionMigration must therefore be set to 1 for that purpose, since the dwal field cannot be migrated alone without the solution field.&lt;br /&gt;
&lt;br /&gt;
* '''buildMapping''': This computes the vertex mapping between the input and output mesh. It is strongly suggested to keep this parameter always set to 1. Otherwise, you will not be able to reduce your solution from your final partitioning down to the initial or any intermediate mesh (we have developed a tool for that purpose), which can be catastrophic if you are interested in local adaptation based on an error indicator. Note that building the mapping does not make sense if the mesh is uniformly refined so it should be set to 0 in this case.&lt;br /&gt;
&lt;br /&gt;
* '''initBubbles''': If this flag is set, Chef will use the external bubble information file 'bubbles.inp' to initialize the level set distance field.&lt;br /&gt;
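&lt;br /&gt;
To tie the critical parameters together, here is a sketch of what an adapt.inp for the first example above (4 output parts, no adaptation, solution built from scratch) could look like. The values are assumed from the descriptions above and the exact syntax may differ, so treat this as an illustration rather than a reference file:&lt;br /&gt;
&amp;lt;code&amp;gt;timeStepNumber 0&amp;lt;br&amp;gt;ensa_dof 5&amp;lt;br&amp;gt;attributeFileName geom.spj&amp;lt;br&amp;gt;modelFileName geom.xmt_txt&amp;lt;br&amp;gt;meshFileName bz2:mdsMesh_bz2/&amp;lt;br&amp;gt;outMeshFileName bz2:mdsMesh_4parts_bz2/&amp;lt;br&amp;gt;adaptFlag 0&amp;lt;br&amp;gt;SolutionMigration 0&amp;lt;br&amp;gt;Tetrahedronize 0&amp;lt;br&amp;gt;numTotParts 4&amp;lt;br&amp;gt;LocalPtn 1&amp;lt;br&amp;gt;RecursivePtn 1&amp;lt;br&amp;gt;ParmaPtn 1&amp;lt;br&amp;gt;buildMapping 1&amp;lt;/code&amp;gt;&lt;br /&gt;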
&lt;br /&gt;
The second example of a partitioning with Chef can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-Tet-PartLocal-SolMgr. For this case, based on the naming convention of 8-4-Chef-Tet-PartLocal-SolMgr (and the parameters specified in adapt.inp and numstart.dat),&lt;br /&gt;
* the number of output parts requested is 8, &lt;br /&gt;
* the number of input parts is 4 (note &amp;quot;-np 4&amp;quot; in the runChef.sh script),&lt;br /&gt;
* the input mixed mesh is first tetrahedronized before being partitioned. &lt;br /&gt;
* the solution in the resulting phasta files is migrated from the previous Chef run. &lt;br /&gt;
Note that the spj file is different for this second example and the initial conditions have been commented out in order not to overwrite the solution that is migrated from the previous Chef run.&lt;br /&gt;
&lt;br /&gt;
The third and final example can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-UR2-Tet-PartLocal-SolMgr. In this directory 8-4-Chef-UR2-Tet-PartLocal-SolMgr, Chef &lt;br /&gt;
* reads a four part mesh, &lt;br /&gt;
* applies a double recursive uniform refinement, &lt;br /&gt;
* tetrahedronizes the resulting mixed mesh that has been uniformly refined twice, &lt;br /&gt;
* partitions the resulting 4-part all-tet uniformly refined mesh into 8 parts,&lt;br /&gt;
* migrates and interpolates the solution read from the existing restart files coming from the first example.&lt;br /&gt;
&lt;br /&gt;
As a final comment, note that the restart files are always read directly from a procs_case directory. However, when the number of output restart files exceeds 2048, the restart files are then saved in subdirectories of the root procs_case directory in order to reduce file contention, similarly to (though not identically with) what you have implemented at some point in your version of phasta. The best strategy would be to write phasta files using mpi_io, for instance, so that we can store more than one part in a single file and avoid a large number of phasta files.&lt;br /&gt;
&lt;br /&gt;
For further partitioning on BG/Q machines a conversion to the native Parasolid model is required. The tool is located in: /Install/SCOREC.develop/scorec/test/cadToSim/cadToSim &lt;br /&gt;
and should be run from [Case directory]/convertParasolid2ParasolidNative/ on firebird.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Updated Chef version (2015/03/26)==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) MPI implementation&lt;br /&gt;
&lt;br /&gt;
A new version of chef has been implemented and does not rely on threads any more.&lt;br /&gt;
Instead, it is now based on a pure MPI implementation. &lt;br /&gt;
That means that there is an important change in how chef is called at runtime.&lt;br /&gt;
&lt;br /&gt;
With the previous threaded version, the number of MPI processes had to be equal to the number of input parts. &lt;br /&gt;
Chef was then in charge of starting a number of threads equal to the number of output parts, which was automatic.&lt;br /&gt;
&lt;br /&gt;
Since the pure MPI version of chef does not start threads any more, it now requires a number of MPI processes equal to the final number of output parts, not the number of input parts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2) adapt.inp&lt;br /&gt;
&lt;br /&gt;
In the new version of chef, &amp;quot;numTotParts&amp;quot; in adapt.inp (which was used to specify the final number of output parts) has been replaced by &amp;quot;splitFactor&amp;quot;, which corresponds to the ratio of the number of output parts to the number of input parts. &lt;br /&gt;
If you set this parameter to 1, the mesh will not be split and the number of output parts will be equal to the number of input parts. &lt;br /&gt;
If you set this parameter to 2, each part of your input mesh will be split into 2 new sub-parts, and so on.&lt;br /&gt;
Keep in mind that the number of MPI processes that needs to be requested for chef must therefore be equal to (number of input parts) * (splitFactor).&lt;br /&gt;
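As a quick sanity check of this arithmetic, the following sketch computes the MPI process count for illustrative part counts (the values are not taken from any specific case):&lt;br /&gt;

```shell
# Illustrative values: a 4-part input mesh split with splitFactor=2.
inputParts=4
splitFactor=2
# The pure MPI chef must be launched with one process per output part:
np=$((inputParts * splitFactor))
echo "mpirun -np $np chef"   # prints: mpirun -np 8 chef
```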
&lt;br /&gt;
I have also removed the obsolete parameters in adapt.inp and saved a representative version of this file in /projects/tools/SCOREC.develop/runscripts/adapt.inp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3) Paths&lt;br /&gt;
&lt;br /&gt;
I have updated chef on the Viz nodes, Mira and Tukey so that it only relies on the more robust pure MPI implementation.&lt;br /&gt;
&lt;br /&gt;
On the viz nodes, use /projects/tools/SCOREC.develop/build-chefMPI-GNU-*/test/chef&lt;br /&gt;
For simplicity, this is the default version of the master branch coming directly from our github repository.&lt;br /&gt;
&lt;br /&gt;
On Tukey, use /home/mrasquin/SCOREC.develop/build-tukey-GNU-OptG-c2c360bc-mpi-*&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35-noblsnap means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is off during uniform refinement (UR).&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35 means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is on during UR.&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol33 means that the target imbalance for both the vtx and elem is 3%, and BL snapping is on during UR.&lt;br /&gt;
Note that these versions have been slightly modified w.r.t. the master branch. In particular, the imbalance target is not a parameter yet. Also, in Parma, HPS (Heavy Part Splitting) and FixDisconnectedPart are not called here because of the latest version of the diffusion algorithm, which improves the selection of (i) target parts for element exchange and (ii) the elements to exchange.&lt;br /&gt;
&lt;br /&gt;
On Mira, use /home/mrasquin/SCOREC.develop/build-XL-OptG-c2c360bc-mpi-*&lt;br /&gt;
Similar comments apply to build-XL-OptG-c2c360bc-mpi-tol33, build-XL-OptG-c2c360bc-mpi-tol35 and build-XL-OptG-c2c360bc-mpi-tol35-noblsnap.&lt;br /&gt;
&lt;br /&gt;
Note that BL snapping is not called for a repartitioning of the mesh. It can only play a role during uniform refinement.&lt;br /&gt;
Consequently, if you do not request a UR in adapt.inp, then build-*-tol35 and build-*-tol35-noblsnap will behave the same way.&lt;br /&gt;
&lt;br /&gt;
In case you are wondering about the weird numbers in the name of the build directory, they come from the git commit hash, a unique identifier associated with a git commit (which makes it easier to couple an executable with a version of the code).&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=536</id>
		<title>Chef/Mesh Partitioning</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Chef/Mesh_Partitioning&amp;diff=536"/>
				<updated>2015-03-26T19:01:44Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: moved Mesh partitioning with Chef to Chef: Mesh Partitioning: easier to find in alphabetical index&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This webpage is inspired by a tutorial provided to Igor and his team at NCSU in order to set up two-phase flow test cases on a local cluster named Firebird at NCSU and on Cetus/Mira at ALCF.&lt;br /&gt;
At this time, do not expect anything but a series of copy-pastes from emails. &lt;br /&gt;
Please update this page for our viz nodes when you get a chance. &lt;br /&gt;
&lt;br /&gt;
Thanks, &lt;br /&gt;
&lt;br /&gt;
- Michel&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is a tutorial on how to partition the initial mesh and generate the phasta files on firebird (and other platforms, including Cetus/Mira) using Chef. This tutorial is rather long but should include everything you need.&lt;br /&gt;
The testcase to demonstrate the workflow is the familiar 3-way subchannel flow. The root path of this test case is	/sgidata2/mrasquin/Models/subchannel. The parasolid model is located in /sgidata2/mrasquin/Models/subchannel/convertParasolid2ParasolidNative/geomFromSimmodeler_nat.xmt_txt.&lt;br /&gt;
The workflow that describes how to use Chef is now explained in the next sections.&lt;br /&gt;
&lt;br /&gt;
== Env variables==&lt;br /&gt;
&lt;br /&gt;
All the subsequent tools need the fresh version of openmpi I built on firebird, as well as the latest Simmetrix library I installed in /Install on firebird. To update your paths, source the following file:&lt;br /&gt;
&amp;gt; . /Install/SCOREC.develop/envLinux2014.sh&lt;br /&gt;
The env variables defined or updated in this env script include PATH and LD_LIBRARY_PATH. What is defined in this script should take precedence over your settings, but I strongly suggest removing any redundancy that you may have in your .bashrc, for instance. Note that I actually source this env file directly in my .bashrc so that I do not have to do it manually every time I log in to firebird. When you source it, it will also print the versions of gcc, openmpi and the simmodsuite lib that are set up.&lt;br /&gt;
&lt;br /&gt;
== BLMesherParallel ==&lt;br /&gt;
&lt;br /&gt;
Note that Simmetrix only supports matched faces for single-part meshes, so the mesh must be built on a single core. However, the initial mesh must already include some information related to the partitioning, even if the mesh only includes a single part, for format reasons. This additional information about the partitioning is required for the conversion of the mesh file from the Simmetrix format to the SCOREC MDS format that Chef can read.&lt;br /&gt;
The initial mesh for the 3-way subchannel was built in /sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0. Check the script named runBLMesherParallel.sh in this directory. Running ./runBLMesherParallel.sh will tell you the usage, that is:&lt;br /&gt;
Usage: ./runBLMesherParallel.sh &amp;lt;X&amp;gt; &amp;lt;Y&amp;gt; &amp;lt;Z&amp;gt;&lt;br /&gt;
 &amp;lt;X&amp;gt;: geometric model&lt;br /&gt;
 &amp;lt;Y&amp;gt;: attribute file&lt;br /&gt;
 &amp;lt;Z&amp;gt;: number of processors&lt;br /&gt;
&amp;lt;X&amp;gt; should be the parasolid model geomFromSimmodeler_nat.xmt_txt. &amp;lt;Y&amp;gt; should be BLattr.inp &amp;lt;Z&amp;gt; should be 1 here since we need to generate a single part mesh using a single core.&lt;br /&gt;
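For this test case, the invocation therefore reads as follows (a single core because the model contains matched faces):&lt;br /&gt;

```shell
# Matched faces force a single-part mesh, hence 1 core.
./runBLMesherParallel.sh geomFromSimmodeler_nat.xmt_txt BLattr.inp 1
```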
The BLattr.inp input file is the same as the one read by the old serial version of BLMesher. But BLMesherParallel can do whatever the old version of BLMesher can do. In addition, if your test case does not include any matched face, you may try to mesh in parallel by specifying &amp;lt;Z&amp;gt; to be larger than 1. However, some meshing features are available only when BLMesherParallel is used with a single core so it is always important to check the resulting mesh.&lt;br /&gt;
The resulting mesh is then stored in the directory named mesh.sms, which is a parameter hardcoded in the runBLMesherParallel.sh script. The log from BLMesherParallel is saved in BMesher.log, whereas the Simmetrix log is saved in mesh.log. Both filenames are also hardcoded in the script.&lt;br /&gt;
I also mentioned in previous discussions that Simmetrix has developed its own model format called geomsim. However, the boundary layer collapses near matched faces with this model format, which is not the case when we use the parasolid format. This issue has been reported to Simmetrix but until they can provide a fix, we are forced to start with the parasolid format when our test cases include matched faces.&lt;br /&gt;
&lt;br /&gt;
== Mesh conversion==&lt;br /&gt;
&lt;br /&gt;
Chef can only read the MDS format developed at SCOREC. Therefore, the Simmetrix mesh must first be converted to this format.&lt;br /&gt;
This operation was carried out for the 3-way channel in /sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/simMeshToMdsMesh. Simply run the script ./simMeshToMdsMesh.sh, which executes the &amp;quot;convert&amp;quot; executable. In the script, you can see that the convert executable reads 3 arguments:&lt;br /&gt;
* the input parasolid model named geom.xmt_txt, which points to geomFromSimmodeler_nat.x_t. Note that convert expects an .xmt_txt extension (or an .smd extension for the complete geomsim format),&lt;br /&gt;
* the input Simmetrix mesh, named here parts.sms (for historical reasons; it can be renamed),&lt;br /&gt;
* the name of the output mds mesh directory, which is mdsMesh_bz2 here. Note that this name is prepended by &amp;quot;bz2:&amp;quot;, which means that the output mds mesh file is compressed using bzip2. &amp;quot;bz2:&amp;quot; will not be part of the name of the output directory. If you do not specify &amp;quot;bz2:&amp;quot;, the mds mesh file will be saved in ascii format, which is a waste of space, so I suggest always prepending your directory name with &amp;quot;bz2:&amp;quot;. This will also apply later to the output mesh directory generated by Chef (see below).&lt;br /&gt;
Note that convert needs to run with a number of processes (-np ##) equal to the number of input parts in the Simmetrix mesh. For cases that include matched faces, the Simmetrix mesh must include only one part, which is the reason why convert runs here with -np 1. But in other circumstances, convert can run in parallel if the Simmetrix mesh has already been partitioned into n parts with n&amp;gt;1 (for instance, a mesh generated in parallel with BLMesherParallel and/or partitioned with phParAdapt-Simmetrix).&lt;br /&gt;
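Putting this together, the invocation inside simMeshToMdsMesh.sh presumably boils down to something like the following sketch. The argument order is inferred from the description above, not copied from the actual script:&lt;br /&gt;

```shell
# -np must equal the number of parts in the input Simmetrix mesh
# (1 here, because matched faces force a single-part mesh).
mpirun -np 1 convert geom.xmt_txt parts.sms bz2:mdsMesh_bz2
```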
&lt;br /&gt;
== Boundary and initial conditions (spj file)==&lt;br /&gt;
&lt;br /&gt;
Before running Chef for mesh operations such as uniform refinement, tetrahedronization and partitioning, we need to define the BCs and ICs for the generation of the phasta files. Most of the attributes you are familiar with from the Simmodeler GUI can be specified in the spj file. For the 3-way channel flow, see the spj file located in /sgidata2/mrasquin/Models/subchannel/subchannel_3way/Simplified_SPJ_file/geom.spj. Each line corresponds to one attribute that applies to one face. The structure is the following: &amp;lt;attribute_name&amp;gt;: &amp;lt;face_id&amp;gt; &amp;lt;dimension: 2 for a face attribute in 2D, 3 for the initial conditions that apply to the 3D domain. 1D and 0D attributes are also allowed for lines and vertices if needed&amp;gt; &amp;lt;attribute list, typically magnitude and direction if this applies&amp;gt;. Note that the syntax is strict:&lt;br /&gt;
* no empty lines: each line must be either a comment, which starts with the # character, or an attribute,&lt;br /&gt;
* there must be one single space after the colon character,&lt;br /&gt;
* there must be one single space between any two numbers.&lt;br /&gt;
Note that in this example, a zero &amp;quot;traction vector&amp;quot; attribute is specified on the periodic faces parallel to the length of the channel. It is wrong to specify such an attribute on these periodic faces for a 3-way channel, but this was inherited from the 1-way periodic channel where these faces were slip walls instead of periodic faces. I will try to update my test cases in the future. But because we now have continuous integration tools that run every night to verify the Chef code, I would need to update all the cases if I modified the spj file now. So double-check the attributes that you need for this model and consider the existing spj file as a source of inspiration rather than the correct spj file for production runs.&lt;br /&gt;
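To make the syntax concrete, here is a sketch of a few spj lines. The attribute names, face ids and values below are purely illustrative (they are not taken from the geom.spj discussed above) and must be replaced by the attributes your model actually needs:&lt;br /&gt;

```
# comment lines start with the # character; no empty lines allowed
# hypothetical 2D face attribute on face 42 (no value list needed)
no slip: 42 2
# hypothetical 3D initial condition: magnitude followed by a direction
initial velocity: 7 3 1.0 1.0 0.0 0.0
```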
&lt;br /&gt;
== Chef==&lt;br /&gt;
&lt;br /&gt;
A few rules must be followed to run Chef. First, the number of MPI processes must be equal to the number of input parts. Second, Chef is threaded with OpenMP, and the total number of output parts after partitioning should be at most equal to the total number of available hardware threads of your machine/allocation. On BGQ, there are 4 hardware threads per core. On Linux platforms such as firebird, the number of hardware threads corresponds to the number of available cores. That said, we have observed that Chef can hang when the number of output parts is equal to the total number of available hardware threads, so it is safer to limit the number of output parts to fewer than the number of available hardware threads. On firebird, this means we should not try to partition a mesh into more than 16 parts. The next mesh operations will have to take place on Tukey and Cetus/Mira.&lt;br /&gt;
The first example of a partitioning with Chef can be found in /sgidata2/mrasquin/Models/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140927/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch. With my naming convention, &amp;quot;4-1-Chef-PartLocal-Scratch&amp;quot; can be decomposed as follows:&lt;br /&gt;
* the first number corresponds to the number of output parts,&lt;br /&gt;
* the second number corresponds to the number of input parts,&lt;br /&gt;
* Chef means this mesh was treated with this program (as opposed to phParAdapt, phTest, etc., which are previous executables that we used for similar purposes),&lt;br /&gt;
* PartLocal means the mesh is partitioned locally,&lt;br /&gt;
* Scratch means that the initial solution in the resulting phasta files is generated entirely from the spj file defined in 4).&lt;br /&gt;
In summary, Chef was used in this directory to partition a single-part mesh into 4 parts, and the solution in the phasta files was generated directly from scratch using the spj file.&lt;br /&gt;
The script to run Chef is named runChef.sh in this directory and simply calls the executable. Chef reads everything it needs from two input files called numstart.dat and adapt.inp.&lt;br /&gt;
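Based on the rules above, runChef.sh for this 4-1 example presumably reduces to something like the following sketch (for the threaded Chef described here, -np equals the number of input parts, i.e. 1; the actual script may differ):&lt;br /&gt;

```shell
# Chef reads numstart.dat and adapt.inp from the current directory.
mpirun -np 1 chef
```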
&lt;br /&gt;
&lt;br /&gt;
'''a) numstart.dat'''&lt;br /&gt;
&lt;br /&gt;
Instead of building the initial solution from scratch using the initial conditions defined in the spj file, the user can migrate an existing solution stored in a set of restart files that were saved from a previous phasta simulation. Numstart.dat contains the time step stamp of the input restart files to read in order to migrate a solution.&lt;br /&gt;
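Based on this description, numstart.dat only needs to carry the time step stamp. For instance, to migrate the solution saved at a hypothetical time step 800, numstart.dat would simply contain:&lt;br /&gt;

```
800
```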
&lt;br /&gt;
'''b) adapt.inp'''&lt;br /&gt;
&lt;br /&gt;
This input file contains all the other parameters Chef expects. Note that many of these parameters have been inherited from the old phParAdapt and are currently obsolete or unused. In what follows, all the parameters available in adapt.inp are listed and the critical parameters are in bold. Any line that starts with # is ignored.&lt;br /&gt;
&lt;br /&gt;
* '''globalP''': obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''timeStepNumber''': this is the time step of the output phasta files that will be generated by Chef. This stamp can be different from the number specified in numstart.dat, which can be practical in some situations. But most of the time, this number is set equal to what is specified in numstart.dat.&lt;br /&gt;
&lt;br /&gt;
* '''ensa_dof''': this corresponds to the number of degrees of freedom in the solution field of the output restart file. Note that it should correspond to the number of initial conditions specified in the spj file if the solution is built from scratch. When the solution is migrated from existing restart files, it should also correspond to the number of dof in the existing solution field. Here, this number is set to 5 for single phase flow with no turbulence model.&lt;br /&gt;
&lt;br /&gt;
* '''attributeFileName''': path to the spj file for the boundary and potentially initial conditions&lt;br /&gt;
&lt;br /&gt;
* '''modelFileName''': path to the geometric model (can be a parasolid or geomsim model on Linux but only geomsim is available on BGQ).&lt;br /&gt;
&lt;br /&gt;
* '''meshFileName''': path to the directory that includes the input mesh files under the SCOREC MDS format. Note that the path must end with a /. This path can also be prepended by &amp;quot;bz2:&amp;quot; to tell the mesh file reader that the files have been compressed. This follows the same convention as mentioned in 3)&lt;br /&gt;
&lt;br /&gt;
* '''outMeshFileName''': obviously the name of the directory that will include the resulting output mesh files. Note again the trailing / character. The same convention with &amp;quot;bz2:&amp;quot; keyword applies.&lt;br /&gt;
&lt;br /&gt;
* '''restartFileName''': this gives the path to the restart files that needs to be read in when solution migration is activated. In this case, the path should look for instance like &amp;quot;../4-procs_case/restart&amp;quot;. The phasta reader will then add the time step stamp to the name of this restartFileName variable, as well as the file #. When there is no solution migration like in this example, this parameter can be commented out for the sake of clarity.&lt;br /&gt;
&lt;br /&gt;
* '''adaptFlag''': if 0, no mesh adaptation will take place. But if set to 1 and if AdaptStrategy is set to 7, then the mesh will be uniformly refined. Note that adaptation only works with a mixed mesh (with wedges in the BL) and not with an all-tet mesh. Tetrahedronization should therefore take place after uniform refinement. Right now, the mixed mesh gets uniformly refined everywhere, including the BL, but it is possible to refine uniformly outside the BL only with some light modifications of the code. In the future, we hope to have other adaptation strategies in place in Chef based on local error indicators. If interested in these strategies now, phParAdapt-Simmetrix must be used. If adaptFlag is set to 1, note also that SolutionMigration must also be set to 1 (see below for this parameter) and the path to the restart files specified.&lt;br /&gt;
&lt;br /&gt;
* rRead: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* rStart: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''AdaptStrategy''': This parameter is read if adaptFlag is 1. When set to 7, uniform refinement of a mixed mesh takes place. This is currently the only strategy tested in Chef. If interested in other, more sophisticated adaptation strategies, phParAdapt-Simmetrix must be used for now.&lt;br /&gt;
&lt;br /&gt;
* '''RecursiveUR''': if AdaptStrategy is set to 7, Chef offers the possibility to do recursive uniform refinement within the same job. Beware of the memory consumption if you set this value to more than 1, since the mesh can grow quickly.&lt;br /&gt;
&lt;br /&gt;
* Periodic: obsolete. Periodicity in the mesh and in the solution is now treated automatically as long as i) the mesh built with BLMesher is periodic (i.e. the location of the mesh vertices on periodic faces is the same) and ii) the spj file contains the correct &amp;quot;periodic slave&amp;quot; attributes.&lt;br /&gt;
&lt;br /&gt;
* prCD: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* timing: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* outputFormat: obsolete. Phasta files are saved by default in binary format.&lt;br /&gt;
&lt;br /&gt;
* internalBCNodes: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* WRITEASC: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* phastaIO: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''numTotParts''': Final number of parts. If numTotParts is larger than the number of Chef processes which is equal to the number of input parts, the mesh will be partitioned.&lt;br /&gt;
&lt;br /&gt;
* '''elementsPerMigration''': In order to reduce the memory footprint of Chef, the user can reduce the default number of elements that can be migrated at a time during partitioning or partition improvement.&lt;br /&gt;
&lt;br /&gt;
* '''SolutionMigration''': Activates the migration of the solution from an existing set of restart files. In this case, the path to the phasta files that contain the solution to migrate must be specified through the restartFileName parameter (see above). If the mesh is refined, the solution that is migrated will be interpolated to the new vertices of the mesh. Note also that if the solution is migrated, then the spj file should contain NO information about the initial condition. Indeed any information mentioned in the spj file will prevail. Therefore, if the spj file contains information about the initial conditions, the solution migrated from existing restart files will be overwritten and the resulting phasta files will include again the scratch solution specified in the spj file.&lt;br /&gt;
&lt;br /&gt;
* '''DisplacementMigration''': Also migrates the displacement field along with the solution field for other adaptation strategies. Not used for AdaptStrategy 7, so it can be ignored for now.&lt;br /&gt;
&lt;br /&gt;
* isReorder: obsolete/unused. Reordering for better cache performance is now applied by default to both the phasta files and mesh files.&lt;br /&gt;
&lt;br /&gt;
* '''Tetrahedronize''': tetrahedronizes a mixed mesh if set to 1. Note that if both AdaptFlag and Tetrahedronize are set to 1, adaptation of the input mixed mesh will take place before tetrahedronization. In all cases, partitioning is always the last mesh operation. But again, an all-tet mesh cannot be further refined, so tetrahedronization should not take place too early in the partitioning workflow, in order to keep enough aggregated memory for potential future adaptation.&lt;br /&gt;
&lt;br /&gt;
* numSplit: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''LocalPtn''': local partitioning if set to 1, global partitioning if set to 0. Currently, only local partitioning is implemented in Chef, and it has been shown to be sufficient so far.&lt;br /&gt;
&lt;br /&gt;
* '''RecursivePtn''': should always be set to 1. In the past, this parameter allowed recursive partitioning steps in phParAdapt. The code will stop or crash if this parameter is not 1.&lt;br /&gt;
&lt;br /&gt;
* RecursivePtnStep: obsolete/unused.&lt;br /&gt;
&lt;br /&gt;
* '''partitionMethod''': Currently, the GRAPH method for local partitioning is hard-coded in one of the Chef routines.&lt;br /&gt;
&lt;br /&gt;
* '''ParmaPtn''': If set to 1, the load balance in terms of both elements and vertices per part is improved further after the partitioning with Parma. It is strongly suggested to keep ParmaPtn set to 1.&lt;br /&gt;
&lt;br /&gt;
* '''dwalMigration''': This parameter is useful in case the distance to the wall for a turbulence model such as RANS or DDES has already been computed by phasta. In this case, it is possible to migrate also this field along with the solution field. SolutionMigration must therefore be set to 1 for that purpose, since the dwal field cannot be migrated alone without the solution field.&lt;br /&gt;
&lt;br /&gt;
* '''buildMapping''': This computes the vertex mapping between the input and output mesh. It is strongly suggested to keep this parameter always set to 1. Otherwise, you will not be able to reduce your solution from your final partitioning down to the initial or any intermediate mesh (we have developed a tool for that purpose), which can be catastrophic if you are interested in local adaptation based on an error indicator. Note that building the mapping does not make sense if the mesh is uniformly refined so it should be set to 0 in this case.&lt;br /&gt;
&lt;br /&gt;
* '''initBubbles''': Chef will use the external bubble information file 'bubbles.inp' to initialize the level set distance field if this flag is activated.&lt;br /&gt;
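Pulling the critical parameters above together, here is a sketch of an adapt.inp for the first example (4 output parts, 1 input part, solution built from scratch). The key/value layout and the values are illustrative only; refer to the representative adapt.inp mentioned later for the exact syntax:&lt;br /&gt;

```
# critical parameters only; lines starting with # are ignored
timeStepNumber 0
ensa_dof 5
attributeFileName geom.spj
modelFileName geom.xmt_txt
meshFileName bz2:mdsMesh_bz2/
outMeshFileName bz2:outMesh/
adaptFlag 0
Tetrahedronize 0
numTotParts 4
LocalPtn 1
RecursivePtn 1
ParmaPtn 1
SolutionMigration 0
buildMapping 1
```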
&lt;br /&gt;
The second example of a partitioning with Chef can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-Tet-PartLocal-SolMgr. For this case, based on the naming convention of 8-4-Chef-Tet-PartLocal-SolMgr (and the parameters specified in adapt.inp and numstart.dat),&lt;br /&gt;
* the number of output parts requested is 8, &lt;br /&gt;
* the number of input parts is 4 (note &amp;quot;-np 4&amp;quot; in the runChef.sh script),&lt;br /&gt;
* the input mixed mesh is first tetrahedronized before being partitioned. &lt;br /&gt;
* the solution in the resulting phasta files is migrated from the previous Chef run. &lt;br /&gt;
Note that the spj file is different for this second example and the initial conditions have been commented out in order not to overwrite the solution that is migrated from the previous Chef run.&lt;br /&gt;
&lt;br /&gt;
The third and final example can be found in /sgidata2/mrasquin/Models/TwoPhase/subchannel/subchannel_3way/Mixed-Parallel1-parasolid-9.0-140906/2-A0/1PFPP-phPA/4-1-Chef-PartLocal-Scratch/8-4-Chef-UR2-Tet-PartLocal-SolMgr. In this directory 8-4-Chef-UR2-Tet-PartLocal-SolMgr, Chef &lt;br /&gt;
* reads a four part mesh, &lt;br /&gt;
* applies a double recursive uniform refinement, &lt;br /&gt;
* tetrahedronizes the resulting mixed mesh that has been uniformly refined twice, &lt;br /&gt;
* partitions the resulting 4-part all-tet uniformly refined mesh into 8 parts,&lt;br /&gt;
* migrates and interpolates the solution read from the existing restart files coming from the first example.&lt;br /&gt;
&lt;br /&gt;
As a final comment, note that the restart files are always read directly from a procs_case directory. However, when the number of output restart files exceeds 2048, the restart files are saved in subdirectories of the root procs_case directory in order to reduce file contention, in a similar (though not identical) way to what you implemented at some point in your version of phasta. The best strategy would be to write the phasta files using mpi_io, for instance, so that more than one part can be stored in a single file, avoiding a large number of phasta files.&lt;br /&gt;
&lt;br /&gt;
For further partitioning on BG/Q machines a conversion to the native Parasolid model is required. The tool is located in: /Install/SCOREC.develop/scorec/test/cadToSim/cadToSim &lt;br /&gt;
and should be run from [Case directory]/convertParasolid2ParasolidNative/ on firebird.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Updated Chef version (2015/03/26)==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) MPI implementation&lt;br /&gt;
&lt;br /&gt;
A new version of chef has been implemented and does not rely on threads any more.&lt;br /&gt;
Instead, it is now based on a pure MPI implementation. &lt;br /&gt;
That means that there is an important change in how chef is called at runtime.&lt;br /&gt;
&lt;br /&gt;
With the previous threaded version, the number of MPI processes had to be equal to the number of input parts. &lt;br /&gt;
Chef was then in charge of starting a number of threads equal to the number of output parts, which was automatic.&lt;br /&gt;
&lt;br /&gt;
Since the pure MPI version of chef does not start threads any more, it now requires a number of MPI processes equal to the final number of output parts, not the number of input parts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2) adapt.inp&lt;br /&gt;
&lt;br /&gt;
In the new version of chef, &amp;quot;numTotParts&amp;quot; in adapt.inp (which was used to specify the final number of output parts) has been replaced by &amp;quot;splitFactor&amp;quot;, which corresponds to the ratio of the number of output parts to the number of input parts. &lt;br /&gt;
If you set this parameter to 1, the mesh will not be split and the number of output parts will be equal to the number of input parts. &lt;br /&gt;
If you set this parameter to 2, each part of your input mesh will be split into 2 new sub-parts, and so on.&lt;br /&gt;
Keep in mind that the number of MPI processes that needs to be requested for chef must therefore be equal to (number of input parts) * (splitFactor).&lt;br /&gt;
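As a quick sanity check of this arithmetic, the following sketch computes the MPI process count for illustrative part counts (the values are not taken from any specific case):&lt;br /&gt;

```shell
# Illustrative values: a 4-part input mesh split with splitFactor=2.
inputParts=4
splitFactor=2
# The pure MPI chef must be launched with one process per output part:
np=$((inputParts * splitFactor))
echo "mpirun -np $np chef"   # prints: mpirun -np 8 chef
```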
&lt;br /&gt;
I have also removed the obsolete parameters in adapt.inp and saved a representative version of this file in /projects/tools/SCOREC.develop/runscripts/adapt.inp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3) Paths&lt;br /&gt;
&lt;br /&gt;
I have updated chef on the Viz nodes, Mira and Tukey so that it only relies on the more robust pure MPI implementation.&lt;br /&gt;
&lt;br /&gt;
On the viz nodes, use /projects/tools/SCOREC.develop/build-chefMPI-GNU-*/test/chef&lt;br /&gt;
For simplicity, this is the default version of the master branch coming directly from our github repository.&lt;br /&gt;
&lt;br /&gt;
On Tukey, use /home/mrasquin/SCOREC.develop/build-tukey-GNU-OptG-c2c360bc-mpi-*&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35-noblsnap means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is off during uniform refinement (UR).&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol35 means that the target imbalance for the vtx and elem is 3% and 5% respectively, and BL snapping is on during UR.&lt;br /&gt;
- build-tukey-GNU-OptG-c2c360bc-mpi-tol33 means that the target imbalance for both the vtx and elem is 3%, and BL snapping is on during UR.&lt;br /&gt;
Note that these versions have been slightly modified w.r.t. the master branch. In particular, the imbalance target is not a parameter yet. Also, in Parma, HPS (Heavy Part Splitting) and FixDisconnectedPart are not called here because of the latest version of the diffusion algorithm, which improves the selection of (i) target parts for element exchange and (ii) the elements to exchange.&lt;br /&gt;
&lt;br /&gt;
On Mira, use /home/mrasquin/SCOREC.develop/build-XL-OptG-c2c360bc-mpi-*&lt;br /&gt;
Similar comments apply to build-XL-OptG-c2c360bc-mpi-tol33, build-XL-OptG-c2c360bc-mpi-tol35 and build-XL-OptG-c2c360bc-mpi-tol35-noblsnap.&lt;br /&gt;
&lt;br /&gt;
Note that BL snapping is not called for a repartitioning of the mesh. It can only play a role during uniform refinement.&lt;br /&gt;
Consequently, if you do not request a UR in adapt.inp, then build-*-tol35 and build-*-tol35-noblsnap will behave the same way.&lt;br /&gt;
&lt;br /&gt;
In case you are wondering about the weird numbers in the name of the build directory, they come from the git commit hash, a unique identifier associated with a git commit (which makes it easier to couple an executable with a version of the code).&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=Mesh_partitioning_with_Chef&amp;diff=537</id>
		<title>Mesh partitioning with Chef</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=Mesh_partitioning_with_Chef&amp;diff=537"/>
				<updated>2015-03-26T19:01:44Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: moved Mesh partitioning with Chef to Chef: Mesh Partitioning: easier to find in alphabetical index&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Chef: Mesh Partitioning]]&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	<entry>
		<id>https://fluid.colorado.edu/wiki/index.php?title=VNC&amp;diff=532</id>
		<title>VNC</title>
		<link rel="alternate" type="text/html" href="https://fluid.colorado.edu/wiki/index.php?title=VNC&amp;diff=532"/>
				<updated>2015-03-06T20:32:54Z</updated>
		
		<summary type="html">&lt;p&gt;Skinnerr: /* Windows */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;VNC is a tool which projects a GUI session over the network. It may be useful if you want to use GUI tools remotely when X forwarding performs poorly.&lt;br /&gt;
&lt;br /&gt;
'''Warning: This is still being tested and should NOT be considered stable (portal0 may be rebooted without warning)'''&lt;br /&gt;
'''Warning: The vnc password is transmitted in clear text over the network and should not be considered secure'''&lt;br /&gt;
&lt;br /&gt;
Portal0 is designated to host VNC sessions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can start a session as follows:&lt;br /&gt;
&lt;br /&gt;
  ssh jumpgate-phasta.colorado.edu&lt;br /&gt;
  ssh portal0&lt;br /&gt;
  source /etc/profile&lt;br /&gt;
  start_vnc.sh&lt;br /&gt;
&lt;br /&gt;
Then follow the directions.&lt;br /&gt;
 &lt;br /&gt;
(You may want to remember your password and port number so that you can reuse your session)&lt;br /&gt;
&lt;br /&gt;
When you are done, end your session by running&lt;br /&gt;
  source /etc/profile&lt;br /&gt;
  stop_vnc.sh&lt;br /&gt;
&lt;br /&gt;
== OpenGL == &lt;br /&gt;
&lt;br /&gt;
Portal0 is equipped with a VirtualGL install, which allows you to use OpenGL programs (those that do not use pthreads).&lt;br /&gt;
&lt;br /&gt;
Simply wrap your OpenGL program with the &amp;quot;vglrun&amp;quot; command&lt;br /&gt;
  vglrun glxgears&lt;br /&gt;
&lt;br /&gt;
If you have access to another VirtualGL server, you can connect to it first (Portal0 doesn't have a particularly fast graphics processor):&lt;br /&gt;
  vglconnect server&lt;br /&gt;
  vglrun glxgears&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that VGL uses a number of threads. If vglrun crashes with a message about Thread::Start(), make sure you haven't set your stack size too large (remove any ulimit -s or ulimit -n calls from your shell startup scripts).&lt;br /&gt;
&lt;br /&gt;
== Clients == &lt;br /&gt;
&lt;br /&gt;
Portal0 uses TurboVNC from the VirtualGL project, available from http://www.virtualgl.org/Downloads/TurboVNC&lt;br /&gt;
&lt;br /&gt;
Other VNC viewers, such as TightVNC and RealVNC, will also work.&lt;br /&gt;
&lt;br /&gt;
== Changing the VNC Password ==&lt;br /&gt;
&lt;br /&gt;
  /opt/tigervnc/bin/vncpasswd&lt;br /&gt;
&lt;br /&gt;
== View Only Mode == &lt;br /&gt;
&lt;br /&gt;
To share your desktop with another user in view-only mode, set a view-only password by running&lt;br /&gt;
  vncpasswd&lt;br /&gt;
&lt;br /&gt;
Have the other user connect the same way you would, but with their viewer set to view-only mode, using your view-only password. Typically this is done as follows:&lt;br /&gt;
  vncviewer -viewonly&lt;br /&gt;
&lt;br /&gt;
== Windows == &lt;br /&gt;
The PuTTY SSH client can handle ssh tunneling on Windows based machines. You can download it here: http://www.chiark.greenend.org.uk/~sgtatham/putty/&lt;br /&gt;
&lt;br /&gt;
When you open PuTTY, enter jumpgate-phasta.colorado.edu in the Host Name box. Then click the + button next to SSH in the left pane (to expand the SSH tree node) and choose the Tunnels page. The start_vnc.sh script should tell you to run &amp;quot;ssh -L????:portal0:???? jumpgate-phasta.colorado.edu&amp;quot; on your machine. Enter the number between the -L and the first colon in the &amp;quot;Source port&amp;quot; box, enter the rest (starting with portal0) in the Destination box, and '''click the Add button'''. Then click &amp;quot;Open&amp;quot; and log in as normal. You will then be able to use a vncviewer as instructed by the script.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
The script says:&lt;br /&gt;
  ssh -L5905:portal0:5900 jumpgate-phasta.colorado.edu&lt;br /&gt;
Enter 5905 in the Source port box and portal0:5900 in the Destination box.&lt;br /&gt;
&lt;br /&gt;
Try using this viewer utility&lt;br /&gt;
http://www.tightvnc.com/download/1.3.10/tightvnc-1.3.10_x86_viewer.zip&lt;br /&gt;
&lt;br /&gt;
'''Connecting to your VNC with PuTTY'''&lt;br /&gt;
&lt;br /&gt;
Once we SSH to jumpgate (on the default SSH port 22), our main desktop on portal0 can be accessed via a VNC session as follows.&lt;br /&gt;
&lt;br /&gt;
# The VNC server should already be running on portal0 using port 59xx.&lt;br /&gt;
## To check the port, on portal0 run &amp;lt;code&amp;gt;/opt/vnc_script/findsession.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
## To confirm the VNC server is running (and see port), run &amp;lt;code&amp;gt;ps aux | grep vnc&amp;lt;/code&amp;gt;&lt;br /&gt;
# Open PuTTY on your local machine.&lt;br /&gt;
# Under &amp;quot;Session&amp;quot;, SSH to &amp;lt;code&amp;gt;x@jumpgate-phasta.colorado.edu:22&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt; is your username on jumpgate, and &amp;lt;code&amp;gt;22&amp;lt;/code&amp;gt; is the standard SSH port.&lt;br /&gt;
# Under &amp;quot;Session&amp;quot;&amp;gt;&amp;quot;SSH&amp;quot;&amp;gt;&amp;quot;Tunnels&amp;quot;, select source port &amp;lt;code&amp;gt;59xx&amp;lt;/code&amp;gt; and destination port &amp;lt;code&amp;gt;portal0:59xx&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;xx&amp;lt;/code&amp;gt; is the two-digit number of your VNC session. Select destination &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; and click &amp;quot;Add&amp;quot;. We select &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; because we have a service (VNC Server) running on a machine (portal0) that can be reached from the remote machine (jumpgate), and we want to access it directly from the &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; machine.&lt;br /&gt;
# Confirm the dialog by clicking &amp;quot;Open&amp;quot;, thus establishing an SSH connection between localhost and jumpgate, and tunneling localhost:59xx to portal0:59xx via this connection.&lt;br /&gt;
# Open RealVNC, and connect to &amp;lt;code&amp;gt;localhost:xx&amp;lt;/code&amp;gt;, which is shorthand for &amp;lt;code&amp;gt;localhost:59xx&amp;lt;/code&amp;gt;. VNC ports are enumerated starting with &amp;lt;code&amp;gt;5901&amp;lt;/code&amp;gt;, so any two digit port &amp;lt;code&amp;gt;xx&amp;lt;/code&amp;gt; is assumed to be port &amp;lt;code&amp;gt;59xx&amp;lt;/code&amp;gt;.&lt;br /&gt;
# You should now have access to your desktop on portal0.&lt;br /&gt;
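The port arithmetic in the steps above (display :xx corresponds to TCP port 59xx) can be sketched as a quick check; the display number 2 here is only an example:

```shell
# VNC port convention: display :xx listens on TCP port 59xx,
# i.e. 5900 + display number. So "localhost:2" means localhost:5902.
display=2
port=$((5900 + display))
echo "display :${display} -> port ${port}"
```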
&lt;br /&gt;
== Web Based Viewer ==&lt;br /&gt;
&lt;br /&gt;
If you can't or don't want to install a VNC viewer, you can use a Java-based one. You will need a JVM and a Java browser plugin. You will also need the port that the start_vnc.sh script assigned you to be free on your local computer.&lt;br /&gt;
&lt;br /&gt;
Forward your session through jumpgate as before, adding a second port, 580n. For example, if the script tells you to run&lt;br /&gt;
&lt;br /&gt;
  ssh -L5902:portal0:5902 jumpgate-phasta.colorado.edu&lt;br /&gt;
you should instead run&lt;br /&gt;
  ssh -L5902:portal0:5902 -L5802:portal0:5802 jumpgate-phasta.colorado.edu&lt;br /&gt;
Then point your browser to http://localhost:5802 and log in with the password specified by the script when prompted. (Replace 2 with the value specified by the script.)&lt;br /&gt;
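Following the convention above (session number n: VNC on port 590n, web viewer on port 580n), the full tunnel command can be assembled as a sketch; n=2 is only an example:

```shell
# Build the ssh command that forwards both the VNC port (590n) and the
# Java web viewer port (580n) for session number n through jumpgate.
n=2
cmd="ssh -L590${n}:portal0:590${n} -L580${n}:portal0:580${n} jumpgate-phasta.colorado.edu"
echo "$cmd"
```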
&lt;br /&gt;
== Changing the Size (Resolution) of an Existing Session ==&lt;br /&gt;
&lt;br /&gt;
You can usually use the &amp;quot;xrandr&amp;quot; tool to change the resolution of a running VNC session. First you'll need to know your session's display number (the last digit or two of the port number). For example, if your VNC session is running on port 5902, then your screen number is :2. For this example, we'll use screen 2.&lt;br /&gt;
&lt;br /&gt;
Once you know your screen number, you can see the list of supported modes as follows:&lt;br /&gt;
  xrandr -display :2&lt;br /&gt;
&lt;br /&gt;
Once you pick the one you want (generally the same size or smaller than the native resolution of your client), you can choose it by running a command like&lt;br /&gt;
  xrandr -s 1400x1050 -display :2&lt;br /&gt;
&lt;br /&gt;
(this example will set the resolution to 1400 pixels by 1050 pixels)&lt;br /&gt;
&lt;br /&gt;
You'll probably be disconnected at this point, but when you reconnect your screen size should be changed (hopefully without crashing your running programs). &lt;br /&gt;
&lt;br /&gt;
== Finding an Existing Session ==&lt;br /&gt;
SSH to portal0 and then run:&lt;br /&gt;
  /opt/vnc_script/findsession.sh&lt;br /&gt;
&lt;br /&gt;
This will return the shortened port number of each of your currently running sessions.&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting == &lt;br /&gt;
&lt;br /&gt;
If you have used vncserver (any version) on a SCOREC machine before, you will need to clear your VNC settings for the script to work. You can do this by running&lt;br /&gt;
  rm -rf ~/.vnc&lt;br /&gt;
&lt;br /&gt;
stop_vnc.sh may display some errors; this is normal.&lt;br /&gt;
&lt;br /&gt;
If you have trouble deleting ~/.vnc send an email to Benjamin.A.Matthews@colorado.edu&lt;br /&gt;
&lt;br /&gt;
If any of these commands fail, you may need to run &amp;quot;source /etc/profile&amp;quot; to get the necessary environment variables (this should be fixed soon).&lt;br /&gt;
&lt;br /&gt;
VirtualGL has trouble with some threaded programs. If your OpenGL program exhibits segmentation faults or other issues, this could be the problem. Check back for the solution later. &lt;br /&gt;
&lt;br /&gt;
If the given password is rejected you can run stop_vnc.sh and restart to get a new one. Occasionally the random password generator may generate passwords which VNC doesn't like.&lt;br /&gt;
&lt;br /&gt;
If VirtualGL complains about not being able to get a 24bit FB config either vglconnect to another VirtualGL enabled server or complain to Benjamin.A.Matthews@Colorado.edu&lt;br /&gt;
&lt;br /&gt;
If your VNC connection is very slow, you might want to try changing the compression and encoding options. See your vncviewer's documentation or try this&lt;br /&gt;
  vncviewer -encodings tight -quality 6 -compresslevel 6&lt;br /&gt;
If you have trouble with text distortion try adding &lt;br /&gt;
  -nojpeg&lt;br /&gt;
&lt;br /&gt;
If you're running OSX and see an error about Zlib, try changing your compression settings (maximum quality usually works) or use a different client. RealVNC and certain versions of ChickenOfTheVNC both exhibit this issue. The latest build of TigerVNC should work reliably, as does the Java based TightVNC client.&lt;/div&gt;</summary>
		<author><name>Skinnerr</name></author>	</entry>

	</feed>