
Searched refs:MPI (Results 1 – 13 of 13) sorted by relevance

/libCEED/examples/nek/
Makefile 18 MPI ?= 1 macro
20 ifeq ($(MPI),0)
41 FC=$(FC) CC=$(CC) MPI=$(MPI) NEK5K_DIR=$(NEK5K_DIR) CEED_DIR=$(CEED_DIR) && ./nek-examples.sh -m
README.md 29 By default, the examples are built with MPI.
30 To build the examples without MPI, set the environment variable `MPI=0`.
50 -n|-np Specify number of MPI ranks for the run (optional, default: 1)
nek-examples.sh 45 : ${MPI:=1}
163 CC=$CC FC=$FC MPI=$MPI NEK_SOURCE_ROOT="${NEK5K_DIR}" CFLAGS="$CFLAGS" \
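The hits above show the nek build's MPI switch: `MPI ?= 1` defaults the flag on, `ifeq ($(MPI),0)` falls back to a serial build, and `nek-examples.sh` mirrors the default with `: ${MPI:=1}`. A minimal sketch of that toggle logic (the serial/MPI compiler names here are illustrative assumptions, not copied from the Makefile):

```python
def pick_compilers(env):
    """Mimic the Makefile's `MPI ?= 1` default and `ifeq ($(MPI),0)` branch.

    env is a dict of environment variables; compiler names are assumptions
    chosen for illustration only.
    """
    mpi = env.get("MPI", "1")  # MPI ?= 1 — on unless explicitly disabled
    if mpi == "0":             # ifeq ($(MPI),0) — serial fallback
        return {"CC": "gcc", "FC": "gfortran"}
    return {"CC": "mpicc", "FC": "mpif77"}

print(pick_compilers({}))           # default: MPI wrappers
print(pick_compilers({"MPI": "0"})) # MPI=0: serial compilers
```

Setting `MPI=0` in the environment before invoking `make`, as the README describes, is what flips this branch.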
/libCEED/examples/
Makefile 27 MPI ?= 1 macro
46 +make CEED_DIR=$(CEED_DIR) NEK5K_DIR=$(NEK5K_DIR) MPI=$(MPI) -C nek bps
/libCEED/examples/deal.II/
bps-kokkos.cc 143 Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1); in main()
149 ConditionalOStream pout(std::cout, Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0); in main()
bps-cpu.cc 143 Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1); in main()
149 ConditionalOStream pout(std::cout, Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0); in main()
bps-ceed.h 158 std::make_shared<Utilities::MPI::Partitioner>(dof_handler.locally_owned_dofs(), in reinit()
491 std::make_shared<Utilities::MPI::Partitioner>(geo_dof_handler.locally_owned_dofs(), in compute_metric_data()
635 std::shared_ptr<Utilities::MPI::Partitioner> partitioner;
/libCEED/examples/petsc/
README.md 39 … which will run a problem-size sweep of 600, 1200, 2400, 4800, 9600, 19200 FEM nodes per MPI rank.
/libCEED/doc/sphinx/source/
libCEEDapi.md 175 Thus, a natural mapping of $\bm{A}$ on a parallel computer is to split the **T-vector** over MPI ra…
177 …s $\bm{P}$, $\bm{\mathcal{E}}$, $\bm{B}$ and $\bm{D}$ clearly separate the MPI parallelism in the …
182 Each MPI rank can use one or more {ref}`Ceed`s, and each {ref}`Ceed`, in turn, can represent one or…
183 The idea is that every MPI rank can use any logical device it is assigned at runtime.
184 For example, on a node with 2 CPU sockets and 4 GPUs, one may decide to use 6 MPI ranks (each using…
185 Another choice could be to run 1 MPI rank on the whole node and use 5 {ref}`Ceed` objects: 1 managi…
191 …tion to the original operator on **L-vector** level (i.e. independently on each device / MPI task).
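The documentation hits above describe the rank-to-device flexibility: each MPI rank picks its own {ref}`Ceed` resource at runtime, e.g. 6 ranks on a 2-socket/4-GPU node, with 4 ranks driving one GPU each and the rest on CPU. A sketch of that mapping logic (the resource strings follow libCEED's `/gpu/cuda/<n>` and `/cpu/self` conventions, but the helper function itself is hypothetical):

```python
def ceed_resource(rank, ranks_per_node=6, gpus_per_node=4):
    """Map an MPI rank to a libCEED resource specifier.

    Hypothetical helper illustrating the docs' example: on each node,
    the first gpus_per_node local ranks each drive one GPU; the
    remaining ranks fall back to the CPU backend.
    """
    local_rank = rank % ranks_per_node
    if local_rank < gpus_per_node:
        return f"/gpu/cuda/{local_rank}"
    return "/cpu/self"

# Six ranks on one node: four GPU-backed, two CPU-backed.
print([ceed_resource(r) for r in range(6)])
```

Each rank would then pass its string to `CeedInit()`; the alternative layout in the docs (1 rank per node holding 5 `Ceed` objects) just moves this same choice inside a single process.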
/libCEED/
Makefile 267 MPI ?= 1 macro
722 +$(MAKE) -C examples MPI=$(MPI) CEED_DIR=`pwd` NEK5K_DIR="$(abspath $(NEK5K_DIR))" nek
/libCEED/rust/libceed-sys/c-src/
Makefile 267 MPI ?= 1 macro
722 +$(MAKE) -C examples MPI=$(MPI) CEED_DIR=`pwd` NEK5K_DIR="$(abspath $(NEK5K_DIR))" nek
/libCEED/doc/papers/joss/
paper.bib 344 title={Using {MPI}: Portable Parallel Programming with the Message-Passing Interface},
paper.md 95 …l P$ is an optional external operator, such as the parallel restriction in MPI-based [@gropp2014us…