FDSI Summer Program/HONEE
Information for the HONEE tutorial, to be updated as needed.
HONEE, or the High-Order Navier-Stokes Equation Evaluator, is a CFD program that combines libCEED and PETSc.
Building HONEE for Cisco nodes
First, start a job on the Cisco nodes. All the below commands should be run within a terminal on a Cisco node.
Setup Environment
source /projects/tools/Spackv0.23/share/spack/setup-env.sh
spack load gcc@12.3
module load openmpi
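To sanity-check that the environment loaded correctly, the following (optional) commands should show the Spack-provided gcc and an MPI compiler wrapper on your PATH; the exact version output will depend on the node:

# Optional: verify the toolchain picked up by spack/module
which gcc mpicc
gcc --version
mpicc --version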
Create new directory
mkdir honee_build
cd honee_build
Build PETSc
Ordinarily, I'd just recommend doing a git clone, e.g. git clone https://gitlab.com/petsc/petsc.git. However, the /nobackup/ server is quite slow and the clone takes around 20 minutes there (normally about a minute on my laptop), so downloading and extracting a tarball may be quicker (see the sketch at the end of this section). For now, I'll just cover the git clone route.
git clone https://gitlab.com/petsc/petsc.git
cd petsc
cp /nobackup/uncompressed/jrwrigh/HONEE_Setup/petsc/reconfigure.py ./reconfigure.py
python3 ./reconfigure.py  # configure PETSc with the options in reconfigure.py (shown below)
make
export PETSC_DIR=$(pwd) PETSC_ARCH=arch-32
cd ..
The reconfigure.py file has the following:
#!/bin/python3

if __name__ == '__main__':
    import sys
    import os

    sys.path.insert(0, os.path.abspath('config'))
    import configure

    configure_options = [
        '--with-64-bit-indices=0',
        '--download-hdf5',
        '--download-cgns',
        '--download-ctetgen=1',
        '--download-parmetis=1',
        '--download-metis=1',
        '--download-ptscotch=1',
        '--with-debugging=0',
        '--with-fortran-bindings=0',
        '--with-fc=0',
        'PETSC_ARCH=arch-32',
        'COPTFLAGS=-g -O3',
        'CXXOPTFLAGS=-g -O3',
    ]
    configure.petsc_configure(configure_options)
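As mentioned above, downloading and extracting a tarball may be quicker than cloning on /nobackup/. A rough sketch of that route, assuming GitLab's snapshot archive of the main branch (a tagged release tarball would work the same way); the reconfigure.py/make/export steps are unchanged:

# Sketch: download a snapshot tarball of PETSc instead of git cloning
curl -L -o petsc-main.tar.gz https://gitlab.com/petsc/petsc/-/archive/main/petsc-main.tar.gz
tar -xzf petsc-main.tar.gz
mv petsc-main petsc
# then continue with the reconfigure.py / make / export steps shown above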
Building HONEE
Ensure that PETSC_DIR and PETSC_ARCH are set to the desired PETSc installation.
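A quick way to check that they are still set in your current shell (they were exported in the PETSc step above; re-export them if you have opened a new terminal since):

echo "PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH"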
git clone https://github.com/CEED/libCEED.git
cd libCEED
make build/fluids-navierstokes -j
The executable build/fluids-navierstokes is then built and ready to run.
Changing HONEE Inputs
HONEE uses PETSc's input argument system for handling inputs, which can be provided either via command-line flags or inside a YAML file. So
./build/fluids-navierstokes -ts_dt 1e-3
is equivalent to
./build/fluids-navierstokes -options_file test.yaml
if test.yaml has this in it:
ts_dt: 1e-3
Note that PETSc also allows hierarchical flags within a YAML file. So instead of writing
ts_dt: 1e-3
ts_type: alpha
ts_max_time: 2.3
You can write:
ts:
  dt: 1e-3
  type: alpha
  max_time: 2.3
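For example, you could drop the hierarchical options into a file and point the executable at it (a minimal sketch; test.yaml is the example file name used above):

# Write the hierarchical options to a YAML file and run with it
cat > test.yaml <<'EOF'
ts:
  dt: 1e-3
  type: alpha
  max_time: 2.3
EOF
./build/fluids-navierstokes -options_file test.yaml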
Documentation for Specific Flags
- HONEE specific flags
- Time Stepping (TS) flags
- Non-linear solver (SNES) flags
- Linear solver (KSP) flags
Notable Input Flags
- -ts_monitor_solution: This will save the results of a simulation to a file. Example: -ts_monitor_solution cgns:flow_visualization.cgns will save the results to a file called flow_visualization.cgns.
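Putting that together with an options file, a run that writes a visualization file could look like the following (file names are just the examples used above):

# Run with an options file and save the solution in CGNS format for visualization
./build/fluids-navierstokes -options_file test.yaml -ts_monitor_solution cgns:flow_visualization.cgns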
Restarting Simulations
To restart a simulation from a previous simulation's results, those results need to be saved to a *.bin file. The restart itself is done using the -continue flag, which will load the file set by -continue_filename.
Note that restarting from a binary file can only be done if the binary file was written by a simulation running at the same part count (i.e., the same number of MPI ranks).
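As a rough sketch of what a restarted run might look like (this assumes -continue is used as a simple toggle and restart.bin stands in for the binary file written by the earlier run; check the HONEE flag documentation for the exact value -continue expects):

# Restart from a binary file written by an earlier run at the same part count
./build/fluids-navierstokes -options_file test.yaml -continue -continue_filename restart.bin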
HONEE and Gmsh
GMSH/Grid requirements:
- The grid must be hexahedron-based (so deformed cubes). HONEE can run with tetrahedral grids, but that will require some minor code modifications which we've done before (IIRC, it's a single line of code to change). If anyone is interested in this, let me know and we can work on it.
- Note that HONEE cannot handle mixed meshes, i.e. meshes with both hexahedral and tetrahedral elements. This is not a minor code change and is probably out of scope for this Summer Program.
- In the geo file, you need to specify Physical Surfaces to identify the domain boundaries to apply boundary conditions to.
- In the geo file, you need to specify a single Physical Volume that contains the entire domain. So if you split the domain into multiple pieces for meshing purposes (as is done with the vortex shedding grids), then you need the Physical Volume to identify all the pieces.
YAML Setup:
In the YAML file, you need to associate each Physical Surface with a boundary condition. I describe this briefly in the video, but each Physical Surface has an ID number associated with it. In the YAML file, you need to associate that ID number with a BC choice.
For example, in the vortexshedding.yaml file there is:
# Boundary Settings
bc_slip_z: 6
bc_wall: 5
bc_freestream: 1
bc_outflow: 2
bc_slip_y: 3,4
and the cylinder.geo has:
Physical Surface("inlet") = {102}; Physical Surface("outlet") = {116}; Physical Surface("top") = {80, 120}; Physical Surface("bottom") = {36, 112}; Physical Surface("cylinderwalls") = {94, 28, 50, 72}; Physical Surface("frontandback") = {37, 1, 4, 103, 3, 81, 2, 59, 5, 125};
The "inlet" face is the first identified boundary, so it gets the ID number 1.
In the YAML file, you see bc_freestream: 1, which sets this inlet boundary to be controlled by a freestream BC.
Similarly with the "outlet" face; it's the second specified boundary in the geo file, so we set bc_outflow: 2. And so on with the other faces.
NOTE: When using the GMSH GUI to identify the Physical Surfaces, it will actually put something like this in the geo file:
Physical Surface("inlet", 3010) = {3005}; Physical Surface("outlet", 3011) = {3003}; Physical Surface("top", 3012) = {3004}; Physical Surface("bottom", 3013) = {3002}; Physical Surface("nacawalls", 3014) = {3008, 3006, 3007}; Physical Surface("frontandback", 3015) = {3009, 3001};
Note the extra 3010 in the "inlet" definition. I'm not sure if this means the correct ID number for it should be 3010 instead of 1, but that's a possibility.
Recommendations for HONEE performance
Compilation Flags
When compiling HONEE, you can pass certain compiler flags to try to make it run faster. For the Cisco nodes, this can be done by running the following in your libCEED directory:
make configure OPT='-O3 -march=native -g -ffp-contract=fast -fopenmp-simd'
make build/fluids-navierstokes -Bj
General Running Settings
A general setting that helps improve performance is to loosen the solver tolerances. Specifically, -snes_rtol 1e-4 and -ksp_rtol 1e-4 have been found to be pretty good choices.
These may require tweaking if you run into divergence issues.
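For instance, added to a run command with the values suggested above:

# Loosened solver tolerances; tighten these again if the solve diverges
./build/fluids-navierstokes -options_file test.yaml -snes_rtol 1e-4 -ksp_rtol 1e-4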
Degree-based settings
Optimal solver settings can change based on what degree of elements you work with (determined by the -degree flag).
For linear elements (-degree 1), I've had success with:
-amat_type -snes_lag_jacobian 5 -snes_lag_jacobian_persists false -snes_lag_preconditioner 20 -snes_lag_preconditioner_persists true -pc_type asm -sub_pc_type lu
While for cubic elements (-degree 3), I've seen better performance with:
-amat_type shell -snes_lag_jacobian 15 -snes_lag_jacobian_persists true -pc_type asm -sub_pc_type lu
Note that these flags were found while running the vortex shedding example, so the optimal options may change for other problems.
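For concreteness, a full command line combining the cubic-element settings above with an options file might look like this (vortexshedding.yaml is the example file mentioned earlier; treat it as a starting point rather than a tuned recipe):

# Example: cubic elements with the shell (matrix-free) operator and lagged Jacobian
./build/fluids-navierstokes -options_file vortexshedding.yaml -degree 3 \
  -amat_type shell -snes_lag_jacobian 15 -snes_lag_jacobian_persists true \
  -pc_type asm -sub_pc_type lu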