| /petsc/doc/tutorials/ |
| handson.md |
   35  $ mpiexec -n 1 ./ex50 -da_grid_x 4 -da_grid_y 4 -mat_view
   48  …$ mpiexec -n 4 ./ex50 -da_grid_x 120 -da_grid_y 120 -pc_type lu -pc_factor_mat_solver_type superl…
   61  $ mpiexec -n 4 ./ex50 -da_grid_x 1025 -da_grid_y 1025 -pc_type mg -pc_mg_levels 9 -ksp_monitor
   99  $ mpiexec -n 1 ./ex2 -ts_max_steps 10 -ts_monitor
  112  $ mpiexec -n 4 ./ex2 -ts_max_steps 10 -ts_monitor -snes_monitor -ksp_monitor
  124  $ mpiexec -n 16 ./ex2 -ts_max_steps 10 -ts_monitor -M 128
  163  $ mpiexec -n 4 ./ex19 -da_refine 5 -snes_monitor -ksp_monitor -snes_view
  176  $ mpiexec -n 4 ./ex19 -da_refine 5 -snes_monitor -ksp_monitor -snes_view -pc_type mg
  192  $ mpiexec -n 4 ./ex19 -da_refine 5 -snes_monitor -ksp_monitor -snes_view -pc_type hypre
  209  $ mpiexec -n 4 ./ex19 -da_refine 5 -snes_monitor -ksp_monitor -snes_view -pc_type ml
  [all …]
|
| /petsc/config/examples/ |
| arch-ci-mswin-intel-cxx-cmplx.py |
   8  mpiexec=os.popen('cygpath -u '+os.popen('cygpath -ms '+mpiexecf).read()).read().strip()  (variable)
  26  '--with-mpiexec='+mpiexec,
|
| /petsc/config/BuildSystem/config/packages/ |
| MPI.py |
   60  self.mpiexec = None
   87  if self.mpiexec: output += ' mpiexec: '+self.mpiexec+'\n'
  207  …self.mpiexec = 'Not_appropriate_for_batch_systems_You_must_use_your_batch_system_to_submit_MPI_job…
  209  self.addMakeMacro('MPIEXEC', self.mpiexec)
  215  self.mpiexec = self.argDB['with-mpiexec']
  217  self.mpiexec = os.path.abspath(os.path.join('bin', 'mpiexec.poe'))
  251  …self.mpiexec = self.mpiexec.replace(' ', r'\\ ').replace('(', r'\\(').replace(')', r'\\)').replace…
  256  …(out, err, ret) = Configure.executeShellCommand(self.mpiexec+' -help all', checkCommand = noCheck,…
  260  self.getExecutable(self.mpiexec, getFullPath=1, resultName='mpiexecExecutable',setMakeMacro=0)
  279  …(out, err, ret) = Configure.executeShellCommand(self.mpiexec+' -n 1 printenv | grep -v KEY', check…
  [all …]
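The escaping seen around line 251 of MPI.py protects spaces and parentheses in a `--with-mpiexec` path (common on Windows installs such as `/C/Program Files/Microsoft MPI/Bin/mpiexec`) before the path is written into generated build files. A simplified standalone sketch of that idea follows; note the real configure code uses `r'\\ '` (a doubled backslash) so the escape survives a later round of make/shell expansion, while this sketch emits a single backslash for readability:

```python
def escape_mpiexec(path: str) -> str:
    """Backslash-escape characters that would break shell interpolation.

    Simplified sketch of the replace() chain at MPI.py line 251; the real
    code escapes with doubled backslashes and handles more characters.
    """
    for ch in (' ', '(', ')'):
        path = path.replace(ch, '\\' + ch)
    return path

print(escape_mpiexec('/C/Program Files/Microsoft MPI/Bin/mpiexec'))
# /C/Program\ Files/Microsoft\ MPI/Bin/mpiexec
```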
|
| /petsc/config/ |
| petsc_harness.sh |
  96  U ) mpiexec="petsc_mpiexec_cudamemcheck $mpiexec"
  99  V ) mpiexec="petsc_mpiexec_valgrind $mpiexec"
|
| /petsc/src/binding/petsc4py/demo/legacy/petsc-examples/ksp/ |
| makefile |
  3  MPIEXEC=mpiexec -n 2
|
| /petsc/src/sys/mpiuni/ |
| makefile |
  4  SCRIPTS = ../../../../lib/petsc/bin/petsc-mpiexec.uni
|
| /petsc/src/benchmarks/ |
| benchmarkExample.py |
   23  def mpiexec(self):  (member in PETSc)
   25  mpiexec = os.path.join(self.dir(), self.arch(), 'bin', 'mpiexec')
   26  if not os.path.isfile(mpiexec):
   28  return mpiexec
  100  if self.petsc.mpiexec() is not None:
  101  cmd += self.petsc.mpiexec() + ' '
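The pattern in benchmarkExample.py (lines 23–28 and 100–101) is: look for `mpiexec` inside the PETSc build tree, and prefix the run command with it only when it exists. A self-contained sketch, with `petsc_dir`/`petsc_arch` as hypothetical stand-ins for the script's `dir()`/`arch()` accessors:

```python
import os

def find_mpiexec(petsc_dir: str, petsc_arch: str):
    """Return the build-tree mpiexec, or None if this PETSc build has none
    (cf. benchmarkExample.py lines 25-28)."""
    mpiexec = os.path.join(petsc_dir, petsc_arch, 'bin', 'mpiexec')
    return mpiexec if os.path.isfile(mpiexec) else None

def run_command(executable: str, petsc_dir: str, petsc_arch: str) -> str:
    """Prefix the executable with mpiexec when available (cf. lines 100-101)."""
    cmd = ''
    mpiexec = find_mpiexec(petsc_dir, petsc_arch)
    if mpiexec is not None:
        cmd += mpiexec + ' '
    return cmd + executable
```

With no build-tree `mpiexec` present, `run_command('./ex10', ...)` falls back to running the executable directly.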
|
| /petsc/doc/install/ |
| windows.md |
  168  --with-mpiexec=/cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec \
  240  …han one available in a MSYS2 MinGW shell and 2) tell PETSc where MS-MPI `mpiexec` is. We recommend…
  243  $ /usr/bin/python ./configure --with-mpiexec='/C/Program\ Files/Microsoft\ MPI/Bin/mpiexec' \
|
| install.md |
   48  installed in default system/compiler locations and `mpicc`, `mpif90`, mpiexec are available
   64  …ch/bin/mpicc --with-mpi-f90=/usr/local/mpich/bin/mpif90 --with-mpiexec=/usr/local/mpich/bin/mpiexec
  267  `--with-mpiexec` for [MPICH]).
|
| /petsc/lib/petsc/bin/ |
| petscmpiexec |
  78  …OME "${PETSC_DIR}/${PETSC_ARCH}/lib/petsc/conf/petscvariables" | grep -v mpiexec | grep -v include…
|
| /petsc/doc/manual/ |
| streams.md |
   64  - MPI, options to `mpiexec`
   82  It is possible that the MPI initialization (including the use of `mpiexec`) can change the default …
   86  `ex69f` with four OpenMP threads without `mpiexec` and see almost perfect scaling.
   96  Running under `mpiexec` gives a very different wall clock time, indicating that all four threads ra…
   99  $ OMP_NUM_THREADS=4 mpiexec -n 1 ./ex69f
  105  If we add some binding/mapping options to `mpiexec` we obtain
  108  $ OMP_NUM_THREADS=4 mpiexec --bind-to numa -n 1 --map-by core ./ex69f
  114  Thus we conclude that this `mpiexec` implementation is, by default, binding the process (including …
  115  Consider also the `mpiexec` option `--map-by socket:pe=$OMP_NUM_THREADS` to ensure each thread gets…
  121  $ OMP_PROC_BIND=spread OMP_NUM_THREADS=4 mpiexec -n 1 ./ex69f
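The streams.md hits above reason that a bare `mpiexec -n 1` (line 99) can pin the whole process, including all its OpenMP threads, to a single core, while the binding options on line 108 widen the binding so the threads scale. A small launcher sketch condensing that guidance; the flag spellings (`--bind-to`, `--map-by`) are the ones shown in the snippet and may differ for other `mpiexec` implementations:

```python
import shlex

def openmp_mpiexec_cmd(exe: str, nthreads: int,
                       bind_to: str = 'numa', map_by: str = 'core') -> str:
    """Build a one-rank mpiexec command line that leaves room for OpenMP threads.

    Without binding options, some mpiexec implementations bind the process
    (and all its threads) to one core; --bind-to/--map-by widen the binding
    (cf. streams.md lines 108-115).
    """
    env = f'OMP_NUM_THREADS={nthreads}'
    return f'{env} mpiexec --bind-to {bind_to} -n 1 --map-by {map_by} {shlex.quote(exe)}'

print(openmp_mpiexec_cmd('./ex69f', 4))
# OMP_NUM_THREADS=4 mpiexec --bind-to numa -n 1 --map-by core ./ex69f
```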
|
| profiling.md |
  163  mpiexec -n 4 ./ex10 -f0 medium -f1 arco6 -ksp_gmres_classicalgramschmidt -log_view -mat_type baij \
  235  mpiexec -n 4 ./ex10 -f0 medium -f1 arco6 -ksp_gmres_classicalgramschmidt -log_view -mat_type baij \
  359  mpiexec -n 2 ./ex30 -log_view ::ascii_flamegraph | flamegraph | display
  695  mpiexec -n 1 nsys profile -t nvtx,cuda -o file_name --stats=true --force-overwrite true ./a.out : -…
  714  mpiexec -n 1 rocprofv3 --marker-trace -o file_name -- ./path/to/application -log_roctx
  736  mpiexec -n 4 tau_exec -T mpi ./ex56 -log_perfstubs <args>
|
| performance.md |
  143  `mpiexec`. Consider the hardware topology information returned by
  203  `--bind-to core --map-by socket` to `mpiexec`:
  206  $ mpiexec -n 6 --bind-to core --map-by socket ./stream
  225  $ mpiexec -n 6 --bind-to core --map-by core ./stream
  238  One must not assume that `mpiexec` uses good defaults. To
|
| /petsc/doc/tutorials/performance/ |
| guide_to_TAS.md |
  21  …mpiexec -n 2 ./ex13 -log_view :/home/<user name>/PETSC_DIR/lib/petsc/bin/ex_13_test.py:ascii_info_…
|
| /petsc/src/binding/petsc4py/docs/source/ |
| install.rst |
  60  $ mpiexec -n 4 python test/runtests.py
|
| /petsc/doc/developers/ |
| testing.md |
   52  - one or more `mpiexec` tests that run the executable
   61  mpiexec -n 1 ../ex1 1> ex1.tmp 2> ex1.err
  119  - This integer is passed to mpiexec; i.e., `mpiexec -n nsize`
  401  Assuming that this is `ex10.c`, there would be two mpiexec/diff
  716  $ /scratch/kruger/contrib/petsc-mpich-cxx/bin/mpiexec -n 1 arch-mpich-cxx-py3/tests/vec/is/sf/tests…
  978  …g_use_amat-0_pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_type-additive # mpiexec -n 1 ../ex9 -ksp…
  987  mpiexec -n 1 ../ex9 -ksp_converged_reason -ksp_error_if_not_converged -pc_fieldsplit_diag_use_ama…
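The testing.md hits describe how the harness composes a run: the test's `nsize` integer becomes `mpiexec -n nsize` (line 119), and stdout/stderr are redirected to `.tmp`/`.err` files for later diffing (line 61). A minimal sketch of that composition, with illustrative names only:

```python
def harness_cmd(exe: str, nsize: int, out: str, err: str) -> str:
    """Compose the mpiexec line the test harness runs
    (cf. testing.md lines 61 and 119)."""
    return f'mpiexec -n {nsize} {exe} 1> {out} 2> {err}'

print(harness_cmd('../ex1', 1, 'ex1.tmp', 'ex1.err'))
# mpiexec -n 1 ../ex1 1> ex1.tmp 2> ex1.err
```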
|
| /petsc/src/ksp/ksp/tests/benchmarkscatters/ |
| Baseline-Intel-8 |
  4  …nux/mkl --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpiifort --with-mpiexec="mpiexec.hydra -hosts …
|
| Baseline-Intel-16 |
  4  …nux/mkl --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpiifort --with-mpiexec="mpiexec.hydra -hosts …
|
| /petsc/src/binding/petsc4py/ |
| makefile |
  10  MPIEXEC = mpiexec
|
| /petsc/doc/changes/ |
| 233.md |
  25  - Changed the use of mpirun throughout the source to mpiexec; this
|
| 36.md |
  19  - Script for running MPIUni jobs is now bin/petsc-mpiexec.uni
|
| 322.md |
  12  - Add `-mpiuni-allow-multiprocess-launch` to allow mpiexec to launch multiple independent MPI-Uni job…
|
| /petsc/ |
| gmakefile.test |
  246  # ensure mpiexec and test executable is on firewall list
  261  macos-firewall-register-mpiexec:
  425  starttime: pre-clean $(libpetscall) macos-firewall-register-mpiexec
  460  -@echo " getmpiexec - print the mpiexec to use to run PETSc programs"
|
| /petsc/config/PETSc/ |
| petsc.py |
  311  …= libraries, initArgs = '&argc, &argv, 0, 0', boolType = 'PetscBool ', executor = self.mpi.mpiexec)
|
| /petsc/src/benchmarks/results/ |
| performance_arco1 |
  2  …mpiexec -np 1 ./ex10 -f0 ~/datafiles/matrices/medium -f1 ~/datafiles/matrices/arco1 -pc_type ilu -…
  3  …mpiexec -np 1 ./ex10 -f0 ~/datafiles/matrices/medium -f1 ~/datafiles/matrices/arco1 -pc_type ilu -…
|