xref: /petsc/doc/manual/streams.md (revision b11d9968bc79904c690b122f9399be46447eb113)
---
html_theme.sidebar_secondary.remove: 'true'
---

(ch_streams)=

# STREAMS: Example Study

Most algorithms in PETSc are memory
bandwidth limited. The speed of a simulation depends more on the total achievable [^achievable-footnote] memory bandwidth of the computer than on the speed
(or number) of the floating point units.
The STREAMS benchmark measures achievable memory bandwidth and thus provides insight into the parallel performance (scaling) of such computations.
PETSc contains
multiple implementations of the `triad` STREAMS benchmark, including an
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/benchmarks/streams/OpenMPVersion.c.html">OpenMP version</a> and an
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/benchmarks/streams/MPIVersion.c.html">MPI version</a>.

```
for (int j = 0; j < n; ++j) a[j] = b[j] + scalar*c[j];
```

STREAMS measures the total memory bandwidth achievable when running `n` independent threads or processes on non-overlapping memory regions of an array of total length
`N` on a shared memory node.
The bandwidth is then computed as `3*N*sizeof(double)/min(time[])`, where `min(time[])` is the minimum time over several repetitions of the benchmark. The timing is done with `MPI_Wtime()`. A call to the timer takes less than 3e-08 seconds, far
less than the benchmark time. The STREAMS benchmark is intentionally embarrassingly parallel: each thread or process works on its own data, completely independently of the other threads' or processes' data.
Though real simulations have more complex memory access patterns, most computations for PDEs have large sections of private data and share only data along ghost (halo) regions. Thus the completely
independent non-overlapping memory STREAMS model still provides useful information.

As more threads or processes are added, the achieved bandwidth begins to saturate at some `n`, generally less than the number of cores on the node. How quickly the bandwidth
saturates, and the speedup (or parallel efficiency) obtained on a given system, indicates the likely performance of memory bandwidth-limited computations.

Fig. {any}`fig_gcc_streams` plots the total memory bandwidth achieved and the speedup for runs on an Intel system whose details are provided below. The achieved bandwidth
increases rapidly with the first few cores but less so as more cores are utilized. Also, note that the improvement may, unintuitively, be non-monotone when adding
more cores. This is due to the complex interconnect between the cores and their various levels of caches, and to how the threads or processes are assigned to cores.

:::{figure} /images/manual/gcc_streams.svg
:alt: STREAMS benchmark gcc
:name: fig_gcc_streams

STREAMS benchmark gcc
:::

There are three important concepts needed to understand memory bandwidth-limited computing.

- Thread or process **binding** to hardware subsets of the shared memory node. The Unix operating system allows threads and processes to migrate among the cores of a node
  during a computation. This migration is managed by the operating system (OS). [^memorymigration-footnote]
  A thread or process that is "near" some data may suddenly be far from that data when the thread or process is migrated.
  Binding the thread or process to a hardware unit prevents or limits the migration.
- Thread or process **mapping** (assignment) to hardware subsets when more threads or processes are used. Physical memory is divided into multiple distinct units, each of which can
  independently provide a certain memory bandwidth. Different cores may be more closely connected to different memory units. This results in
  non-uniform memory access (**NUMA**), meaning the memory latency or bandwidth for any particular core depends on the physical address of the requested memory.
  When increasing from one thread or process to two, one obviously would like the second thread
  or process to use a different memory unit
  and not share the same unit with the first thread or process.
  Mapping each new thread or process to cores that do not share the previously assigned cores' memory units ensures a higher total achievable bandwidth.
- In addition to mapping, one must ensure that each thread or process **uses data on the closest memory unit**. The OS selects the memory unit on which to place new pages
  of virtual memory based on **first touch**:
  the core of the first thread or process to touch (read or write to) a memory address determines to which memory unit the page of data is assigned. This is automatic for multiple processes since only one process (on a particular core) will ever touch its data. For threads, care must be taken that the data a thread is to compute on is first touched by that thread.
  For example, the performance will suffer if the first thread initializes an entire array that multiple threads will later access.
  For small data arrays that remain in the cache, first touch may produce no performance difference.

MPI and OpenMP provide ways to bind and map processes and threads to the hardware. They also provide ways to display the current binding and mapping.

- MPI, options to `mpiexec`

  - --bind-to hwthread | core | l1cache | l2cache | l3cache | socket | numa | board
  - --map-by hwthread | core | socket | numa | board | node
  - --report-bindings
  - --cpu-list list of cores
  - --cpu-set list of sets of cores

- OpenMP, environmental variables

  - OMP_NUM_THREADS=n
  - OMP_PROC_BIND=close | spread
  - OMP_PLACES="list of sets of cores" for example \{0:2},\{2:2},\{32:2},\{34:2}
  - OMP_DISPLAY_ENV=false | true
  - OMP_DISPLAY_AFFINITY=false | true

Providing appropriate values may be crucial to high performance; the defaults may produce poor results. The best bindings for the STREAMS benchmark are often the best bindings for large PETSc applications. The Linux commands `lscpu` and `numactl -H` provide useful information about the hardware configuration.

It is possible that the MPI initialization (including the use of `mpiexec`) can change the default OpenMP binding/mapping behavior and thus seriously affect the application runtime.
The <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/sys/tests/ex69.c.html">C</a> and <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/sys/tests/ex69f.F90.html">Fortran</a> examples demonstrate this.

We run
`ex69f` with four OpenMP threads without `mpiexec` and see almost perfect scaling.
The CPU time of the process, which is summed over the four threads in the process, is about four times the wall clock time, indicating that each thread ran on a different core as desired.

```
$ OMP_NUM_THREADS=4  ./ex69f
  CPU time reported by cpu_time()               6.1660000000000006E-002
  Wall clock time reported by system_clock()    1.8335562000000000E-002
  Wall clock time reported by omp_get_wtime()   1.8330062011955306E-002
```

Running under `mpiexec` gives a very different wall clock time, now roughly equal to the CPU time, indicating that all four threads ran on the same core.

```
$ OMP_NUM_THREADS=4 mpiexec -n 1  ./ex69f
  CPU time reported by cpu_time()               7.2290999999999994E-002
  Wall clock time reported by system_clock()    7.2356641999999999E-002
  Wall clock time reported by omp_get_wtime()   7.2353694995399565E-002
```

If we add some binding/mapping options to `mpiexec`, we obtain

```
$ OMP_NUM_THREADS=4 mpiexec --bind-to numa -n 1 --map-by core ./ex69f
  CPU time reported by cpu_time()               7.0021000000000000E-002
  Wall clock time reported by system_clock()    1.8489282999999999E-002
  Wall clock time reported by omp_get_wtime()   1.8486462999135256E-002
```

Thus we conclude that this `mpiexec` implementation is, by default, binding the process (including all of its threads) to a single core.
Consider also the `mpiexec` option `--map-by socket:pe=$OMP_NUM_THREADS` to ensure each thread gets its own core for computation.

Note that setting
`OMP_PROC_BIND=spread` alone does not resolve the problem, as the output below indicates.

```
$ OMP_PROC_BIND=spread OMP_NUM_THREADS=4 mpiexec -n 1  ./ex69f
  CPU time reported by cpu_time()               7.2841999999999990E-002
  Wall clock time reported by system_clock()    7.2946015000000003E-002
  Wall clock time reported by omp_get_wtime()   7.2942997998325154E-002
```

The Fortran routine `cpu_time()` can sometimes produce misleading results when run with multiple threads. Consider again the
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/sys/tests/ex69f.F90.html">Fortran</a> example. For an OpenMP parallel loop with enough available cores and the proper binding of threads
to cores, one expects the CPU time for the process to be roughly the number of threads times the wall clock time. For a loop that is not parallelized (like the second
loop in the Fortran example), one would expect the CPU time to match the wall clock time. However, this may not be the case; for example, we have run the Fortran example
on an Intel system with the Intel ifort compiler and observed the recorded CPU time for the second loop to be roughly the number of threads times the wall clock time even
though only a single thread is computing the loop. Thus, comparing the CPU time to the wall clock time of a computation with OpenMP does not give
a good measure of the speedup produced by OpenMP.

## Detailed STREAMS study for large arrays

We now present a detailed study of a particular Intel Icelake system, the Intel(R) Xeon(R) Platinum 8362 CPU @ 2.80GHz. It has 32 cores on each of two sockets
(each with a single NUMA region, so a total of two NUMA regions), a
48 Megabyte L3 cache, and 32 1.25 Megabyte L2 caches, each shared by 2 cores.
It is running the Rocky Linux 8.8 (Green Obsidian) distribution. The compilers
used are GNU 12.2, Intel(R) oneAPI Compiler 2023.0.0 with both icc and icx, and NVIDIA nvhpc/23.1. The MPI implementation is OpenMPI 4.0.7, except for nvhpc, which uses OpenMPI 3.1.5. The compiler options were

- gcc -O3 -march=native
- icc -O3 -march=native
- icx -O3 -ffinite-math-only (the -xHost option, which replaces -march=native, crashed the compiler so was not used)
- nvc -O3 -march=native

We first run the STREAMS benchmark with large double precision arrays of length $1.6\times10^8$; the size was selected to be large enough to eliminate cache effects.
Fig. {any}`fig_streams` shows the achieved bandwidth for gcc, icc, icx, and nvc using MPI and OpenMP with their default bindings and with the MPI binding of `--bind-to core --map-by numa`
and the OpenMP binding of `spread`.

:::{figure} /images/manual/streams.svg
:alt: STREAMS benchmark
:name: fig_streams

Comprehensive STREAMS performance on Intel system
:::

Note the two dips in the performance with OpenMP and gcc using binding in Fig. {any}`fig_gcc_streams`.
Requesting the `spread` binding produces better results for small core counts but poorer ones for larger counts.
These dips are the result of a bug in the gcc `spread` option, which places more threads in one NUMA domain than in the other.
For example, with gcc, `OMP_DISPLAY_AFFINITY` shows that for 28 threads, 12 are placed on NUMA region 1 and 16 are placed on the other NUMA region.
The other compilers spread the threads evenly.

Fig. {any}`fig_icc_streams` shows the performance with the icc compiler. Note that the icc compiler produces significantly faster code for
the benchmark than the other compilers,
so its STREAMS speedups are smaller
even though its absolute
performance is better. No significant dips occur with the OpenMP binding using icc, icx, and nvc;
using `OMP_DISPLAY_AFFINITY` confirms, for example, that 14 threads (out of 28) are assigned to each NUMA domain, unlike with gcc.
Reproducing with the OpenMP `OMP_PLACES` option the exact thread placement that icc uses removes most of the dip in the gcc OpenMP binding result.
Thus, we conclude that on this system, the `spread` option does not always give the best thread placement with gcc due to its bug.

:::{figure} /images/manual/icc_streams.svg
:alt: STREAMS benchmark icc
:name: fig_icc_streams

STREAMS benchmark icc
:::

Fig. {any}`fig_icx_streams` shows the performance with the icx compiler.

:::{figure} /images/manual/icx_streams.svg
:alt: STREAMS benchmark icx
:name: fig_icx_streams

STREAMS benchmark icx
:::

Fig. {any}`fig_nvc_streams` shows the performance with the nvc compiler.

:::{figure} /images/manual/nvc_streams.svg
:alt: STREAMS benchmark nvc
:name: fig_nvc_streams

STREAMS benchmark nvc
:::

To understand the disparity in the STREAMS performance with icc, we reran it with the highest optimization level that produced the same results as gcc and icx: `-O1` without `-march=native`.
The results are displayed in Fig. {any}`fig_icc_O1_streams`; sure enough, the results now match those of gcc and icx.

:::{figure} /images/manual/icc_O1_streams.svg
:alt: STREAMS benchmark icc -O1
:name: fig_icc_O1_streams

STREAMS benchmark icc -O1
:::

Next we display the STREAMS results using gcc with parallel efficiency instead of speedup in Fig. {any}`fig_streams_pe`.

:::{figure} /images/manual/gcc_streams_pe.svg
:alt: STREAMS parallel efficiency
:name: fig_streams_pe

STREAMS parallel efficiency gcc
:::

Observations:

- For MPI, the default binding and mapping on this system produce results that are as good as providing a specific binding and mapping. This is not true on many systems!
- For OpenMP with gcc, the default binding is better than using `spread` because of the `spread` bug. For the other compilers, using `spread` is crucial for good performance on more than 32 cores.
- We have no explanation for why the improvement in speedup for gcc, icx, and nvc slows down between 32 and 48 cores and then improves rapidly, since we believe appropriate bindings are being used.

We now present a limited version of the analysis above on an Apple MacBook Pro M2 Max using MPICH version 4.1, gcc version 13.2 (installed via Homebrew), Xcode 15.0.1,
and -O3 optimization flags with a smaller N of 80,000,000. macOS provides no public API for setting or controlling affinities, so it is not possible to set bindings for either MPI or OpenMP. In addition, the M2 has a combination of performance and efficiency cores whose use we cannot control.

Fig. {any}`fig_m2_gcc_streams` provides the results. Based on the plateau in the middle of the plot, we assume that the core numbering that
is used by MPICH does not produce the best
binding.

:::{figure} /images/manual/m2_gcc_streams.svg
:alt: STREAMS benchmark on Apple M2
:name: fig_m2_gcc_streams

STREAMS benchmark on Apple M2

OpenMPI (installed via Homebrew) produced similar results.
:::

## Detailed study with application

We now move on to a <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ksp/ksp/tutorials/ex45.c.html">PETSc application</a> that solves a three-dimensional Poisson problem on a unit
cube, discretized with
finite differences, whose linear system is solved with the PETSc algebraic multigrid preconditioner `PCGAMG` and the GMRES Krylov accelerator. A strong scaling study, comparable to the STREAMS benchmark, measures the time to construct the preconditioner,
the time to solve the linear system with the preconditioner, and the time for the matrix-vector products. These are displayed in Fig. {any}`fig_gamg`. The runtime options were
`-da_refine 6 -pc_type gamg -log_view`. This study did not attempt to tune the default `PCGAMG` parameters.
The speedups were very similar for all the
compilers, so we only display results for gcc.

:::{figure} /images/manual/gamg.svg
:alt: GAMG speedup
:name: fig_gamg

GAMG speedup
:::

:::{figure} /images/manual/gamg_pe.svg
:alt: GAMG parallel efficiency
:name: fig_gamg_pe

GAMG parallel efficiency
:::

The dips in the performance at certain core counts are consistent between compilers
and result from the amount of MPI communication required by the communication pattern, which in turn results from the different three-dimensional parallel
grid layouts at those core counts.

We now present GAMG on the Apple MacBook Pro M2 Max.
Fig. {any}`fig_m2_gamg` provides the results. The performance is better than predicted by the STREAMS benchmark for all portions of the solver.

:::{figure} /images/manual/m2_gamg.svg
:alt: GAMG speedup on Apple M2
:name: fig_m2_gamg

GAMG speedup Apple M2
:::

(sec_pcmpi_study)=

## Application with the MPI linear solver server

We now run the same PETSc application using the MPI linear solver server mode, enabled with `-mpi_linear_solver_server`.
All compilers deliver largely the same performance, so we only present results with gcc.
We plot the speedup in Fig. {any}`fig_gamg_server` and the parallel efficiency in Fig. {any}`fig_gamg_server_pe`.
Note that the performance is far below that of the parallel solve without the server, even though the distribution time for these runs was always less than three percent of the complete solution time.
The reason for the poorer performance is that in the pure MPI version, the vectors are partitioned directly from the three-dimensional grid; the cube is divided into (approximate)
sub-cubes, which minimizes the inter-process communication, especially in the matrix-vector product. In server mode, the vector is laid out using the cube's natural ordering, and then each MPI process is assigned a contiguous subset of the vector. As a result, the flop rate for the matrix-vector product is significantly lower than that of the pure MPI version.
This indicates that a naive use of the MPI linear solver server will not produce as much performance as a usage that considers the matrix/vector layouts by performing an
initial grid partitioning. For example, if OpenMP is used to generate the matrix, it would be appropriate to have each OpenMP thread assigned a contiguous
vector mapping to a sub-cube of the domain. This would require, of course, a far more complicated OpenMP code that is written using MPI-like parallelism and decomposition of the data.

`PCMPI` has two approaches for distributing the linear system. The first uses `MPI_Scatterv()` to communicate the matrix and vector entries from the initial compute process to all of the
server processes. Unfortunately, `MPI_Scatterv()` does not scale with more MPI processes; hence, the solution time is limited by the `MPI_Scatterv()` time. To remove this limitation,
the second communication mechanism uses Unix shared memory, `shmget()`. Here, `PCMPI` allocates shared memory
from which all the MPI processes in the server
can access the portions of the matrices and vectors that they need.
There is still a (now much smaller) server processing overhead since the sequential matrix (in `MATSEQAIJ` storage)
must still be converted to `MATMPIAIJ` storage. `VecPlaceArray()` is used to convert the sequential vector to an MPI vector, so there is
no overhead, not even a copy, for this operation.

:::{figure} /images/manual/gamg_server.svg
:alt: GAMG server speedup
:name: fig_gamg_server

GAMG server speedup
:::

:::{figure} /images/manual/gamg_server_pe.svg
:alt: GAMG server parallel efficiency
:name: fig_gamg_server_pe

GAMG server parallel efficiency
:::

:::{figure} /images/manual/gamg_server_pe_streams.svg
:alt: GAMG server parallel efficiency vs STREAMS
:name: fig_gamg_server_pe_streams

GAMG server parallel efficiency vs STREAMS
:::

In Fig. {any}`fig_gamg_server_pe_streams`, we plot the parallel efficiency of the linear solve and of the STREAMS benchmark, which track each other well.
This example demonstrates the **utility of the STREAMS benchmark to predict the speedup (parallel efficiency) of a memory bandwidth limited application** on a shared memory Linux system.

For the Apple M2, we present the results using Unix shared-memory communication of the matrix and vectors to the server processes
in Fig. {any}`fig_m2_gamg_server_shared_speedup`.
To run this, one must first set up the machine to use shared memory as described in `PetscShmgetAllocateArray()`.

:::{figure} /images/manual/m2_gamg_server_shared_speedup.svg
:alt: GAMG server solver speedup on Apple M2
:name: fig_m2_gamg_server_shared_speedup

GAMG server solver speedup on Apple M2
:::

This example demonstrates that the **MPI linear solver server feature of PETSc can generate a reasonable speedup in the linear solver** on machines that have significant
memory bandwidth. However, one should not expect the speedup to be near the total number of cores on the compute node.

```{rubric} Footnotes
```

[^achievable-footnote]: Achievable memory bandwidth is the actual bandwidth one can obtain
    as opposed to the theoretical peak that is calculated using the hardware specification.

[^memorymigration-footnote]: Data can also be migrated among different memory sockets during a computation by the OS, but we ignore this possibility in the discussion.
