Latest revision as of 09:42, 26 October 2024
This will be a general page for how to interpolate solutions from different meshes onto a new mesh. Those meshes are assumed to be of the same domain.
The generic terms for the two meshes are "source" and "target", where source has the desired solution data and target is the mesh that will be receiving.
Laminar, Incompressible, Semi-structured Mesh
This section will assume that the source mesh is in a structured ijk form. In the future, this may be expanded to meshes created by MGEN.
Process Overview
- A CSV of the solution must be created. This must be done on a serial-running version of ParaView, and only after a MergeBlocks filter has been applied.
- A special solution file is created via Sort2StructuredGrid, which takes in the CSV from step 1 and the ordered coordinates of the source file from Matlab.
  - Effectively, it creates a single solution file in the same ordering as the Matlab points. This is significant because the executable used in the next step depends on that expected ordering. If the ordering is different, a new executable will need to be used/created.
- The interpolation is performed onto the new grid via par3DInterp3, which creates solInterp.<1-nparts> files in the solnTarget directory.
- The interpolation is then used by PHASTA by setting Load and set 3D IC: True in the solver.inp.
  - This should only be done for one timestep, as it will otherwise keep resetting the IC on every subsequent timestep. The solnTarget directory needs to be symlinked into the -procs_case directory for this to work.
1. Create CSV
- Created using ParaView
  - PV must be running in Serial mode
  - Otherwise the CSV will not be in the correct order and may contain duplicated points
- Load in the source dataset
- Apply the MergeBlocks filter
- Save the dataset as a CSV with 12 digits of precision in scientific notation
- Make sure the CSV is in "pressure, u0, u1, u2, x0, x1, x2" format
  - This can be done by only loading the pressure and velocity fields into ParaView (either by editing the .phts or in the data load menu in ParaView).
- Replace the commas with spaces
  - Can use vim or run sed -i 's/,/\ /g' test.csv
  - Though the next step looks for a .csv extension, it is a Fortran formatted read and actually needs those commas replaced by spaces
- Remove the first line of the CSV file
  - Done in vi, sed (sed -i 1,1d test.csv), or tail (tail -n +2 test.csv > trimmedLine1.csv)
  - Needed for the next program
  - Better yet, we should change the next code to read past that header line, then delete this step once that is complete. We should also consider the solution in this StackOverflow answer, as it shows how to make a data structure that could read the CSV lines directly in the next program and avoid all this file manipulation with modern Fortran (see HighPerformanceMark's answer).
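The comma-replacement and header-removal steps can be chained into one short shell sequence (test.csv is the example filename used above; GNU sed is assumed for in-place editing):

```shell
# Make the ParaView CSV export readable by the Fortran sorter:
sed -i 's/,/ /g' test.csv   # replace commas with spaces (Fortran formatted read)
sed -i '1d' test.csv        # drop the CSV header line
```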
2. Create Structured Solution File
Note: These instructions are for the parallelSortDNSzBinJames executable, which has some highly specific requirements and command inputs.

This step takes the data from the source solution file and puts it in a format/order that makes the interpolation process much faster.
- Symlink the source mesh's ordered coordinate file as ordered.crd
  - This may come from the files used to create the mesh (i.e. for matchedNodeElementReader)
  - (Untested) This may also be created using the coordinates from the solution file
- Rename/symlink the CSV to the correct file name (in my specific case, it was dnsSolution1procLongFort.csv)
- Create an interactive job on whatever machine you need to run on (ALCF Cooley in this case)
- Load the appropriate MPI environment variables (soft add +mvapich2 for Cooley)
- Run the executable via mpirun -np [nprocs] [executable path] [executable inputs]
  - This will produce source.sln.{1..nprocs} files
- Concatenate the source.sln.{1..nprocs} files in order into a single source.sln file
  - This can be done (in zsh at least) via echo source.sln.{1..[MPIRanks]} | xargs cat > source.sln (or the probably equivalent cat source.sln.{1..[MPIRanks]} > source.sln). Note these files must be concatenated in order of rank; otherwise the result will be out of sequence.
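A loop guarantees rank order regardless of shell globbing or brace-expansion quirks (a sketch; set NPROCS to the number of MPI ranks actually used):

```shell
# Concatenate the per-rank outputs strictly in rank order
NPROCS=24   # number of MPI ranks used by the sort step
for i in $(seq 1 "$NPROCS"); do
  cat "source.sln.$i"
done > source.sln
```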
Example Command: mpirun -np 24 /lus/theta-fs0/projects/PHASTA_aesp/Utilities/Sort2StructuredGrid/parallelSortDNSzBinJames 47822547 47822547 212 0.0291

- The inputs for this command are [nlines of csv] [nlines of ordered.crd] [number of elements in z] [z domain width]
- Note: These inputs are specific to this executable. Changing the executable will change which inputs are used.
- Also note that [number of elements in z] is equivalent to nsons - 1, or the number of nodes in z - 1.
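Since the first two arguments are just line counts, they can be computed rather than typed by hand (a sketch using the filenames from this example; the last two arguments remain mesh-specific):

```shell
EXE=/lus/theta-fs0/projects/PHASTA_aesp/Utilities/Sort2StructuredGrid/parallelSortDNSzBinJames
NCSV=$(wc -l < dnsSolution1procLongFort.csv)   # [nlines of csv]
NCRD=$(wc -l < ordered.crd)                    # [nlines of ordered.crd]
mpirun -np 24 "$EXE" "$NCSV" "$NCRD" 212 0.0291
```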
3. Create Interpolated files
This step will create the solInterp.[target nprocs] files used by PHASTA to perform the interpolation.
- Create an Interpolate.../[target nprocs]-procs_case directory in the target's Chef directory and move to that directory
- Symlink the target's POSIX geombc.[target nprocs] files (that were created by Chef) into the [target nprocs]-procs_case directory
  - The geombc.[target nprocs] files should be copied in the exact fashion that they are in the Chef-created [target nprocs]-procs_case directory, including if they're "fanned out"
- Create a directory called solnTarget
  - This may be corrected in the future, but currently if solnTarget is not present the job will fail
- Symlink the source.sln into the directory, and the ordered.crd file as source.crd
- Run phInterp via mpirun on an interactive job.
- This creates a series of solInterp.[target nprocs] files in the solnTarget directory
The file format for solInterp.N is quite simple. Each line corresponds to a node number within the partition, and the file has 7 columns:

coord_x coord_y coord_z pressure velocity_x velocity_y velocity_z
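A quick sanity check that every line of an interpolated file has the expected 7 fields (the path assumes the solnTarget layout described above):

```shell
# Print any malformed lines; no output means the file looks consistent
awk 'NF != 7 { print FILENAME ":" FNR ": " NF " fields" }' solnTarget/solInterp.1
```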
Example Command: mpirun -np 64 /path/to/phInterp 16 799 281 213 0.452

- The inputs for this command are [target parts per MPIProc] [source nx] [source ny] [source nz] [z Length]
- Note that the number of processes given to mpirun times [target parts per MPIProc] must equal the number of target partitions. In this case, the target had 1024 partitions, so 64*16 = 1024.
- The way it works is that each of the 64 MPI processes is given 16 partitions to interpolate.
4. Interpolate the solution in PHASTA
This step will take the solInterp.[target nprocs] files and load them as initial conditions.

- Symlink the solnTarget directory into the [target nprocs]-procs_case directory.
- Add/uncomment Load and set 3D IC: True in the solver.inp
- Run PHASTA for a few timesteps and write out restart-dat.[target nprocs] files
- Remove/comment out the Load and set 3D IC: True line from the solver.inp
- The new restart files now contain the interpolated solution
  - Note that if you forget to remove the Load and set 3D IC: True statement from solver.inp, PHASTA will overwrite the existing solution in the restart files
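To guard against the overwrite pitfall in the last note, the flag can be commented out mechanically after the IC-loading run (a sketch; it assumes solver.inp uses '#' for comments and that the flag starts at the beginning of a line):

```shell
# Comment out the IC flag so subsequent runs don't reset the solution
sed -i 's/^Load and set 3D IC: True/#&/' solver.inp
```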
Turbulent, Compressible, Unstructured Mesh
Currently, the only version of PHASTA set up to handle this type of solution transfer is the conrad54418/connor_primitive branch (as of 6/22/22). Creating the solInterp.1 file can be automated via a script or done manually (see below).
Process Overview (scripted)
Scripts have been made for use with the compressible, but not turbulent, version of the code. An example folder with the scripts included can be found at /project/tutorials/ParaviewSolutionTransfer. The three scripts needed are interpolateSol.py, pvCSV2customSLN_Nproc_prim.m, and parRunAll.sh. The only one of the three you need to run is parRunAll.sh, which can be found in targetFolder/solutionInterp. You then run PHASTA via the runPhasta.sh script, which produces a restart.1.1 file containing the transferred solution on the new mesh. The manual section below explains what the scripts are doing.
Process Overview (manual)
- Load the existing solution into ParaView. Use the 'Merge Blocks' filter to convert it to a serial case.
- Load the target case's .pht file into ParaView.
- Apply the 'Resample With Dataset' filter and select the source and target blocks accordingly. ParaView's naming here is confusing: its "source" is the set of coordinates where you need the solution (the new mesh), and its "input" is the mesh that already carries the solution values you want to interpolate from (in this case, the MergeBlocks output). This is exactly backwards from the terminology on this page, where the mesh with the solution is the source and the new mesh is the target.
- Save the output as a .csv file. Write a single time step and select scientific notation with 12 decimals. NOTE: ParaView sometimes writes zeros where it cannot quite find the closest point during the transfer. In that case, the .csv file needs to be manually edited to replace the zero pressure and temperature values with realistic ones.
- The .csv file can now be reformatted and renamed with MATLAB to match the expected form of solInterp.1.
- Advance PHASTA one step in serial, then convert to the desired number of processors using Chef.
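The zero-value problem from the save step can at least be detected automatically; inspect the header first, since the pressure column position depends on your export (column 1 below is purely illustrative):

```shell
head -1 PVinterp0.csv   # find which columns hold pressure and temperature
# Flag rows whose (illustrative) pressure column is exactly zero
awk -F, 'NR > 1 && $1 + 0 == 0 { print "row " NR ": zero pressure" }' PVinterp0.csv
```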
Alternative to Using MATLAB for Reformatting
For those wanting to skip the MATLAB step, awk can also do the necessary column manipulation.
Copy your file (in case you goof up):
cp PVinterp0.csv test.dat
Remove the header line:
sed -i 1,1d test.dat
Replace the commas with spaces:
sed -i 's/,/\ /g' test.dat
Rearrange the columns to match what solInterp.1 expects. It is a good idea to run head -1 PVinterp0.csv first, since you may have more or fewer fields than in this example; use that header to find the column numbers (starting from 1, not 0) that hold x, y, z, p, u, v, w.
awk '{print $6,$7,$8,$1,$2,$3,$4}' test.dat > solInterp.1
For the entropy code:
awk '{print $14,$15,$16,$3,$4,$5,$6,$7,$10,$11,$12,$9,$8,$2}' test.dat > solInterp.1
Don't forget to put the result into a directory called solnTarget, and to turn on the Load and set 3D IC: True flag in solver.inp for the one step described above. Finally, if you are worried about that one step altering your solution, recent versions of the code accept
iexec : 0
or
Number of Timesteps: 0
to avoid taking any actual steps. Note, however, that only very recent versions of the code have the iexec conditional moved AFTER the loading of the interpolated solution, though it should be easy to figure out where to move that conditional. Alternatively, the second option also skips the time stepping but writes the solution AFTER applying the boundary conditions, which can be useful for confirming that the intended BCs are set (iexec : 0 will not show this).
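Putting the awk route together, the whole reformat is a handful of commands (the column indices are those from the 7-column example above and must match your own header):

```shell
cp PVinterp0.csv test.dat                 # work on a copy
sed -i '1d' test.dat                      # drop the header line
sed -i 's/,/ /g' test.dat                 # commas -> spaces
awk '{print $6,$7,$8,$1,$2,$3,$4}' test.dat > solInterp.1
mkdir -p solnTarget && mv solInterp.1 solnTarget/
```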