Searched refs:MPI (Results 1 – 13 of 13) sorted by relevance
/libCEED/examples/nek/

  Makefile
    18: MPI ?= 1  [macro]
    20: ifeq ($(MPI),0)
    41: FC=$(FC) CC=$(CC) MPI=$(MPI) NEK5K_DIR=$(NEK5K_DIR) CEED_DIR=$(CEED_DIR) && ./nek-examples.sh -m
|
  README.md
    29: By default, the examples are built with MPI.
    30: To build the examples without MPI, set the environment variable `MPI=0`.
    50: -n|-np Specify number of MPI ranks for the run (optional, default: 1)
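Taken together, the `examples/nek` Makefile and README excerpts suggest the MPI toggle and rank count are plain environment variables and script flags. A hypothetical invocation, assuming a libCEED checkout with `NEK5K_DIR` pointing at a Nek5000 source tree (paths and example selection are illustrative; only `MPI` and `-n` are documented in the excerpts above):

```shell
# Build the Nek examples without MPI (MPI defaults to 1, i.e. enabled)
MPI=0 make -C examples/nek NEK5K_DIR=$HOME/Nek5000 CEED_DIR=$PWD

# Run via the helper script on 4 MPI ranks (default: 1);
# further flags for choosing a specific example are omitted here
./examples/nek/nek-examples.sh -n 4
```

This is a sketch of the variable-driven build pattern, not a verified command sequence.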
|
  nek-examples.sh
    45: : ${MPI:=1}
    163: CC=$CC FC=$FC MPI=$MPI NEK_SOURCE_ROOT="${NEK5K_DIR}" CFLAGS="$CFLAGS" \
|
/libCEED/examples/

  Makefile
    27: MPI ?= 1  [macro]
    46: +make CEED_DIR=$(CEED_DIR) NEK5K_DIR=$(NEK5K_DIR) MPI=$(MPI) -C nek bps
|
/libCEED/examples/deal.II/

  bps-kokkos.cc
    143: Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);  [in main()]
    149: ConditionalOStream pout(std::cout, Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0);  [in main()]
|
  bps-cpu.cc
    143: Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);  [in main()]
    149: ConditionalOStream pout(std::cout, Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0);  [in main()]
|
  bps-ceed.h
    158: std::make_shared<Utilities::MPI::Partitioner>(dof_handler.locally_owned_dofs(),  [in reinit()]
    491: std::make_shared<Utilities::MPI::Partitioner>(geo_dof_handler.locally_owned_dofs(),  [in compute_metric_data()]
    635: std::shared_ptr<Utilities::MPI::Partitioner> partitioner;
|
/libCEED/examples/petsc/

  README.md
    39: … which will run a problem-size sweep of 600, 1200, 2400, 4800, 9600, 19200 FEM nodes per MPI rank.
|
/libCEED/doc/sphinx/source/

  libCEEDapi.md
    175: Thus, a natural mapping of $\bm{A}$ on a parallel computer is to split the **T-vector** over MPI ra…
    177: …s $\bm{P}$, $\bm{\mathcal{E}}$, $\bm{B}$ and $\bm{D}$ clearly separate the MPI parallelism in the …
    182: Each MPI rank can use one or more {ref}`Ceed`s, and each {ref}`Ceed`, in turn, can represent one or…
    183: The idea is that every MPI rank can use any logical device it is assigned at runtime.
    184: For example, on a node with 2 CPU sockets and 4 GPUs, one may decide to use 6 MPI ranks (each using…
    185: Another choice could be to run 1 MPI rank on the whole node and use 5 {ref}`Ceed` objects: 1 managi…
    191: …tion to the original operator on **L-vector** level (i.e. independently on each device / MPI task).
|
/libCEED/

  Makefile
    267: MPI ?= 1  [macro]
    722: +$(MAKE) -C examples MPI=$(MPI) CEED_DIR=`pwd` NEK5K_DIR="$(abspath $(NEK5K_DIR))" nek
|
/libCEED/rust/libceed-sys/c-src/

  Makefile
    267: MPI ?= 1  [macro]
    722: +$(MAKE) -C examples MPI=$(MPI) CEED_DIR=`pwd` NEK5K_DIR="$(abspath $(NEK5K_DIR))" nek
|
/libCEED/doc/papers/joss/

  paper.bib
    344: title={Using {MPI}: Portable Parallel Programming with the Message-Passing Interface},
|
  paper.md
    95: …l P$ is an optional external operator, such as the parallel restriction in MPI-based [@gropp2014us…
|