Interpolate Solution from Different Mesh

From PHASTA Wiki
Revision as of 15:58, 28 July 2020 by Jrwrigh (talk | contribs)

This is a general page on how to interpolate solutions from different meshes onto a new mesh. Both meshes are assumed to cover the same domain.

The generic terms for the two meshes are "source" and "target": the source mesh has the desired solution data, and the target mesh is the one that will receive it.


"Structured" Mesh Methodology

This section will assume that the source mesh is in a structured ijk form. In the future, this may be expanded to meshes created by MGEN.

Process Overview

First, a CSV of the solution must be created. This must be done in a serially-running instance of ParaView, and only after a MergeBlocks filter has been applied.

Second, a special solution file is created via Sort2StructuredGrid, which takes the CSV from step 1 and the ordered coordinates of the source mesh from Matlab. Effectively, it creates a single solution file in the same ordering as the Matlab points. This ordering is significant because the executable used in the next step depends on it; if the ordering is different, a new executable will need to be used or created.

Third, the interpolation is performed onto the new grid via par3DInterp3, which creates solInterp.<1-nparts> files in the solnTarget directory.

Fourth, the interpolation is used by PHASTA by setting Load and set 3D IC: True in solver.inp. This should only be done for one timestep, as it will otherwise continue to reset the IC for all subsequent timesteps. The solnTarget directory needs to be symlinked into the -procs_case directory for this to work.

1. Create CSV

  • Created using ParaView
  • PV must be running in Serial mode
    • Otherwise the CSV will not be in the correct order and possibly have duplicated points
  1. Load in source dataset
  2. Apply MergeBlocks filter
  3. Save the dataset as a CSV with 12 digits of precision in scientific notation
  4. Make sure the CSV is in "pressure, u0, u1, u2, x0, x1, x2" column order
  5. Though the next step looks for a .csv extension, it performs a Fortran formatted read and actually needs those commas replaced by spaces, so edit them out with vi or sed -i 's/,/ /g' test.csv
  6. Note that you also need to delete the first (header) line of this file for the next program. Again, this could be done in vi, sed (sed -i 1d test.csv), or tail (tail -n +2 test.csv > trimmedLine1.csv)
    • Better yet, the next code should be changed to read past that header line, and this manual step dropped once that is complete. We should also consider the solution in this StackOverflow answer (HighPerformanceMark's), which shows a data structure that could read the CSV lines directly in the next program and avoid ALL of this file manipulation with modern Fortran.
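The cleanup in steps 5 and 6 can be sketched in shell. The file name test.csv matches the commands above, but the sample rows written here are placeholders, not real solution data:

```shell
#!/bin/sh
# Build a small stand-in for the CSV that ParaView writes, then strip
# the header line and replace commas with spaces for the Fortran read.
cat > test.csv <<'EOF'
"pressure","u0","u1","u2","x0","x1","x2"
1.013250000000e+05,1.0,0.0,0.0,0.0,0.0,0.0
1.013250000000e+05,0.9,0.1,0.0,0.1,0.0,0.0
EOF

tail -n +2 test.csv > body.csv   # drop the header line
sed -i 's/,/ /g' body.csv        # commas -> spaces for the Fortran read
mv body.csv test.csv             # result replaces the original CSV

cat test.csv
```

The order of the two operations does not matter; what matters is that the final file has no header line and no commas.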

2. Create Structured Solution File

Note: These instructions will be for the parallelSortDNSzBinJames executable, which has some highly specific requirements and command inputs.

This step will take the data from the source solution file and put it in a format/order that makes the interpolation process work much faster.

  1. Symlink the source's ordered coordinate file as ordered.crd
  2. Rename/symlink csv to be the correct file name (in my specific case, it was dnsSolution1procLongFort.csv)
  3. Create an interactive job on whatever machine you need to run on (ALCF Cooley in this case)
  4. Load appropriate MPI environment variables (soft add +mvapich2 for Cooley)
  5. Run the executable via mpirun -np [nprocs] [executable path] [executable inputs]
    • This will produce source.sln.{1..nprocs} files
  6. Concatenate the source.sln.{1..nprocs} files, in rank order, into a single source.sln file
    • In zsh (at least) this can be done via echo source.sln.{1..[MPIRanks]} | xargs cat > source.sln, or equivalently cat source.sln.{1..[MPIRanks]} > source.sln. Note these files must be concatenated in order of rank, otherwise the solution will be out of sequence.
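The rank-ordered concatenation can be sketched as follows; the four dummy source.sln.N files stand in for the real per-rank output:

```shell
#!/bin/sh
# Create 4 dummy per-rank pieces standing in for the executable's output.
nprocs=4
for i in $(seq 1 $nprocs); do
  printf 'rank %d data\n' "$i" > source.sln.$i
done

# seq guarantees numeric (rank) order; a plain shell glob would sort
# lexically and put source.sln.10 before source.sln.2.
for i in $(seq 1 $nprocs); do
  cat source.sln.$i
done > source.sln

head -n 1 source.sln
```

The brace-expansion one-liners in the text do the same thing, since zsh expands {1..N} numerically.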


Example Command: mpirun -np 24 /lus/theta-fs0/projects/PHASTA_aesp/Utilities/Sort2StructuredGrid/parallelSortDNSzBinJames 47822547 47822547 212 0.0291

  • The inputs for this command are [nlines of csv] [nlines of ordered.crd] [number of elements in z] [z domain width]
  • Note: These inputs are specific to this executable; changing the executable will change which inputs are used.
  • Also note that the [number of elements in z] is equivalent to nsons - 1 or the number of nodes in z - 1.
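The two line-count arguments do not need to be counted by hand; wc -l reads them off directly. The three-line stand-in files below are placeholders for the real CSV and ordered coordinate files:

```shell
#!/bin/sh
# Stand-in inputs; in practice these are the cleaned CSV and ordered.crd.
printf 'a\nb\nc\n' > test.csv
printf 'a\nb\nc\n' > ordered.crd

# Line counts become the first two arguments of the sort executable.
ncsv=$(wc -l < test.csv)
ncrd=$(wc -l < ordered.crd)
echo "nlines: csv=$ncsv crd=$ncrd"
```

In the example command above, both counts happen to be 47822547 because the CSV and the coordinate file describe the same set of points.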

3. Create Interpolated files

This step will create the solInterp.[target nprocs] files used by PHASTA to perform the interpolation.

  1. Create Interpolate... directory in the target's Chef directory
  2. Symlink the coords.[target nprocs] files (that were created by Chef) into a directory called coordsTarget
    • Note that it's been common practice to put these files in a directory called Coords within a Chef directory. In that case, just symlink the whole directory
  3. Create a directory called solnTarget
    • This may be corrected in the future, but currently the job will fail if solnTarget is not present
  4. Symlink source.sln into the directory, and symlink the ordered.crd file as source.crd
  5. Run the interpolation executable via mpirun on an interactive job.
  6. This creates a series of solInterp.[target nprocs] files in the solnTarget directory
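Steps 1 through 4 above amount to the following directory setup. All of the paths here (demo/, ../Chef/Coords, ../sorted) are hypothetical placeholders for wherever your Chef output and sorted source files actually live:

```shell
#!/bin/sh
# Stand-in source tree: Chef coordinate output plus the sorted files.
mkdir -p demo/Chef/Coords demo/sorted demo/interp/solnTarget
touch demo/Chef/Coords/coords.1 demo/sorted/source.sln demo/sorted/ordered.crd

# Symlink the whole Coords directory as coordsTarget (step 2),
# and the sorted solution/coordinate files under their expected names (step 4).
ln -s ../Chef/Coords demo/interp/coordsTarget
ln -s ../sorted/source.sln demo/interp/source.sln
ln -s ../sorted/ordered.crd demo/interp/source.crd

ls -l demo/interp
```

The empty solnTarget directory is created up front because, as noted above, the job currently fails without it.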

Example Command: mpirun -np 64 $ph_aesp/Utilities/Intpolate/par3DInterp3 16 799 281 213

  • The inputs for this command are [target parts per MPIProc] [source nx] [source ny] [source nz]
  • Note that the number of processes given to mpirun times the [target parts per MPIProc] must be equal to the number of target partitions. In this case, the target partition was 1024, so 64*16 = 1024
  • The way it works is that each of the 64 MPIProcs is given 16 partitions to interpolate.
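A quick sanity check of the launch parameters, using the numbers from the example command, can catch a mismatched decomposition before the job is submitted:

```shell
#!/bin/sh
# The number of MPI ranks times parts-per-rank must equal the number
# of target partitions (64 ranks x 16 parts = 1024 in the example).
nranks=64
parts_per_rank=16
target_parts=1024

if [ $((nranks * parts_per_rank)) -eq "$target_parts" ]; then
  echo "OK: $nranks x $parts_per_rank = $target_parts"
else
  echo "MISMATCH: $nranks x $parts_per_rank != $target_parts" >&2
  exit 1
fi
```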

4. Interpolate the solution in PHASTA

This step will take the solInterp.[target nprocs] files and load them as initial conditions.

  1. Symlink the solnTarget directory into the [target nprocs]-procs_case directory.
  2. Add/uncomment Load and set 3D IC: True in the solver.inp
  3. Run PHASTA for a few timesteps and write out restart-dat.[target nprocs] files
  4. Remove/comment out the Load and set 3D IC: True line from the solver.inp
  5. The new restart files have the interpolated solution
    • Note that if you forget to remove the Load and set 3D IC: True statement from solver.inp, PHASTA will overwrite the existing solution in the restart files
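Toggling the flag with sed can be sketched as follows, assuming solver.inp uses # for commented-out lines; the two-line file written here is a stand-in, not a complete input file:

```shell
#!/bin/sh
# Stand-in solver.inp with the IC-loading flag commented out.
cat > solver.inp <<'EOF'
#Load and set 3D IC: True
Number of Timesteps: 5
EOF

# Uncomment the flag before the interpolation run (step 2)...
sed -i 's/^#Load and set 3D IC: True/Load and set 3D IC: True/' solver.inp
grep 'Load and set 3D IC' solver.inp

# ...and comment it back out once the restart files are written (step 4),
# so later runs do not keep overwriting the solution with the IC.
sed -i 's/^Load and set 3D IC: True/#Load and set 3D IC: True/' solver.inp
grep 'Load and set 3D IC' solver.inp
```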