---
html_theme.sidebar_secondary.remove: 'true'
---

(ch_streams)=

# STREAMS: Example Study

Most algorithms in PETSc are memory bandwidth limited. The speed of a simulation depends more on the total achievable [^achievable-footnote] memory bandwidth of the computer than on the speed
(or number) of floating point units.
The STREAMS benchmark measures achievable memory bandwidth and thus provides useful insight into parallel performance (scaling).
PETSc contains
multiple implementations of the `triad` STREAMS benchmark, including an
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/benchmarks/streams/OpenMPVersion.c.html">OpenMP version</a> and an
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/benchmarks/streams/MPIVersion.c.html">MPI version</a>.

```
for (int j = 0; j < n; ++j) a[j] = b[j] + scalar*c[j];
```

STREAMS measures the total memory bandwidth achievable when running `n` independent threads or processes on non-overlapping memory regions of an array of total length
`N` on a shared memory node.
The bandwidth is then computed as `3*N*sizeof(double)/min(time[])`. The timing is done with `MPI_Wtime()`. A call to the timer takes less than 3e-08 seconds, far less
than the benchmark time. The STREAMS benchmark is intentionally embarrassingly parallel: each thread or process works on its own data, completely independently of the other threads' or processes' data.
Though real simulations have more complex memory access patterns, most computations for PDEs have large sections of private data and share only data along ghost (halo) regions. Thus the completely
independent non-overlapping memory STREAMS model still provides useful information.

As more threads or processes are added, the achieved bandwidth begins to saturate at some `n`, generally less than the number of cores on the node. How quickly the bandwidth
saturates, and the speedup (or parallel efficiency) obtained on a given system, indicates the likely performance of memory bandwidth-limited computations.
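The structure of the MPI measurement can be sketched as follows. This is a simplified illustration, not the exact contents of `MPIVersion.c`; the array length, trial count, and the way the per-process times are combined are assumptions.

```
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NTIMES 10 /* number of trials; the fastest one is reported */

int main(int argc, char **argv)
{
  const long N = 160000000; /* total array length, chosen larger than the caches */
  int        rank, size;
  double     times[NTIMES];

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  long    n = N / size; /* each process works on its own non-overlapping slice */
  double *a = malloc(n * sizeof(double)), *b = malloc(n * sizeof(double)), *c = malloc(n * sizeof(double));
  double  scalar = 3.0;
  for (long j = 0; j < n; ++j) { a[j] = 0.0; b[j] = 2.0; c[j] = 0.5; } /* first touch by the owning process */

  for (int k = 0; k < NTIMES; ++k) {
    MPI_Barrier(MPI_COMM_WORLD); /* all processes start together */
    double t = MPI_Wtime();
    for (long j = 0; j < n; ++j) a[j] = b[j] + scalar * c[j]; /* triad */
    MPI_Barrier(MPI_COMM_WORLD); /* wait for the slowest process */
    times[k] = MPI_Wtime() - t;
  }
  if (rank == 0) {
    double tmin = times[0];
    for (int k = 1; k < NTIMES; ++k) tmin = times[k] < tmin ? times[k] : tmin;
    printf("Total bandwidth %g MB/s\n", 3.0 * N * sizeof(double) / tmin / 1.0e6);
  }
  free(a); free(b); free(c);
  MPI_Finalize();
  return 0;
}
```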
Fig. {any}`fig_gcc_streams` plots the total memory bandwidth achieved and the speedup for runs on an Intel system whose details are provided below. The achieved bandwidth
increases rapidly with more cores initially but then less so as more cores are utilized. Also, note that the improvement may, unintuitively, be non-monotone when adding
more cores. This is due to the complex interconnect between the cores and their various levels of caches and how the threads or processes are assigned to cores.

:::{figure} /images/manual/gcc_streams.svg
:alt: STREAMS benchmark gcc
:name: fig_gcc_streams

STREAMS benchmark gcc
:::

There are three important concepts needed to understand memory bandwidth-limited computing.

- Thread or process **binding** to hardware subsets of the shared memory node. The Unix operating system allows threads and processes to migrate among the cores of a node
  during a computation. This migration is managed by the operating system (OS). [^memorymigration-footnote]
  A thread or process that is "near" some data may suddenly be far from the data when the thread or process gets migrated.
  Binding the thread or process to a hardware unit prevents or limits the migration.
- Thread or process **mapping** (assignment) to hardware subsets when more threads or processes are used. Physical memory is divided into multiple distinct units, each of which can
  independently provide a certain memory bandwidth. Different cores may be more closely connected to different memory units. This results in
  non-uniform memory access (**NUMA**), meaning the memory latency or bandwidth for any particular core depends on the physical address of the requested memory.
  When increasing from one thread or process to two, one obviously would like the second thread
  or process to use a different memory unit
  and not share the same unit with the first thread or process.
  Mapping each new thread or process to cores that do not share the previously assigned core's memory unit ensures a higher total achievable bandwidth.
- In addition to mapping, one must ensure that each thread or process **uses data on the closest memory unit**. The OS selects the memory unit on which to place new pages
  of virtual memory based on **first touch**:
  the core of the first thread or process to touch (read or write) a memory address determines the memory unit to which the page containing that data is assigned. This is automatic for multiple processes since only one process (on a particular core) will ever touch its data. For threads, care must be taken that the data a thread is to compute on is first touched by that thread; see the sketch after this list.
  For example, the performance will suffer if the first thread initializes an entire array that multiple threads will later access.
  For small data arrays that remain in the cache, first touch may produce no performance difference.
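A minimal sketch of the first-touch issue for OpenMP follows; the array names and sizes are illustrative, and it assumes the same static loop schedule is used for initialization and computation so that each thread touches the data it will later use.

```
#include <stdlib.h>

int main(void)
{
  const long n = 160000000;
  double    *a = malloc(n * sizeof(double)), *b = malloc(n * sizeof(double)), *c = malloc(n * sizeof(double));

  /* Bad: if only the master thread initialized the arrays, every page would be placed
     on the memory unit nearest its core, and the other threads would later pull all
     their data across the NUMA interconnect. */

  /* Good: each thread first touches the entries it will later compute on, so the
     pages land on the memory unit closest to the core running that thread. */
  #pragma omp parallel for schedule(static)
  for (long j = 0; j < n; ++j) { a[j] = 0.0; b[j] = 2.0; c[j] = 0.5; }

  /* Same static schedule: each thread computes on the data it first touched. */
  #pragma omp parallel for schedule(static)
  for (long j = 0; j < n; ++j) a[j] = b[j] + 3.0 * c[j];

  free(a); free(b); free(c);
  return 0;
}
```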
MPI and OpenMP provide ways to bind and map processes and cores. They also provide ways to display the current mapping.

- MPI, options to `mpiexec`

  - --bind-to hwthread | core | l1cache | l2cache | l3cache | socket | numa | board
  - --map-by hwthread | core | socket | numa | board | node
  - --report-bindings
  - --cpu-list list of cores
  - --cpu-set list of sets of cores

- OpenMP, environment variables

  - OMP_NUM_THREADS=n
  - OMP_PROC_BIND=close | spread
  - OMP_PLACES="list of sets of cores" for example \{0:2},\{2:2},\{32:2},\{34:2}
  - OMP_DISPLAY_ENV=false | true
  - OMP_DISPLAY_AFFINITY=false | true

Providing appropriate values may be crucial to high performance; the defaults may produce poor results. The best bindings for the STREAMS benchmark are often the best bindings for large PETSc applications. The Linux commands `lscpu` and `numactl -H` provide useful information about the hardware configuration.

It is possible that the MPI initialization (including the use of `mpiexec`) can change the default OpenMP binding/mapping behavior and thus seriously affect the application runtime.
The <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/sys/tests/ex69.c.html">C</a> and <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/sys/tests/ex69f.F90.html">Fortran</a> examples demonstrate this.

We run
`ex69f` with four OpenMP threads without `mpiexec` and see almost perfect scaling.
The CPU time of the process, which is summed over the four threads in the process, is roughly four times the wall clock time, indicating that each thread runs on a different core, as desired.

```
$ OMP_NUM_THREADS=4 ./ex69f
 CPU time reported by cpu_time()               6.1660000000000006E-002
 Wall clock time reported by system_clock()    1.8335562000000000E-002
 Wall clock time reported by omp_get_wtime()   1.8330062011955306E-002
```
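One way to confirm where the threads actually ran is to have the OpenMP runtime report its placement decisions, for example as below (the output is not shown since its format is implementation dependent):

```
$ OMP_NUM_THREADS=4 OMP_DISPLAY_AFFINITY=true ./ex69f
```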
Running under `mpiexec` gives a very different wall clock time, indicating that all four threads ran on the same core.

```
$ OMP_NUM_THREADS=4 mpiexec -n 1 ./ex69f
 CPU time reported by cpu_time()               7.2290999999999994E-002
 Wall clock time reported by system_clock()    7.2356641999999999E-002
 Wall clock time reported by omp_get_wtime()   7.2353694995399565E-002
```

If we add some binding/mapping options to `mpiexec`, we obtain

```
$ OMP_NUM_THREADS=4 mpiexec --bind-to numa -n 1 --map-by core ./ex69f
 CPU time reported by cpu_time()               7.0021000000000000E-002
 Wall clock time reported by system_clock()    1.8489282999999999E-002
 Wall clock time reported by omp_get_wtime()   1.8486462999135256E-002
```

Thus we conclude that this `mpiexec` implementation is, by default, binding the process (including all of its threads) to a single core.
Consider also the `mpiexec` option `--map-by socket:pe=$OMP_NUM_THREADS` to ensure each thread gets its own core for computation.

Note that setting
`OMP_PROC_BIND=spread` alone does not resolve the problem, as the output below indicates.

```
$ OMP_PROC_BIND=spread OMP_NUM_THREADS=4 mpiexec -n 1 ./ex69f
 CPU time reported by cpu_time()               7.2841999999999990E-002
 Wall clock time reported by system_clock()    7.2946015000000003E-002
 Wall clock time reported by omp_get_wtime()   7.2942997998325154E-002
```

The Fortran routine `cpu_time()` can sometimes produce misleading results when run with multiple threads. Consider again the
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/sys/tests/ex69f.F90.html">Fortran</a> example. For an OpenMP parallel loop with enough available cores and the proper binding of threads
to cores, one expects the CPU time for the process to be roughly the number of threads times the wall clock time. However, for a loop that is not parallelized (like the second
loop in the Fortran example), one would expect the CPU time to match the wall clock time. This may not be the case; for example, we have run the Fortran example
on an Intel system with the Intel ifort compiler and observed the recorded CPU time for the second loop to be roughly the number of threads times the wall clock time even
though only a single thread computes the loop. Thus, comparing the CPU time to the wall clock time of a computation with OpenMP does not give
a good measure of the speedup produced by OpenMP.
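The same comparison can be sketched in C, using `clock()` for the process CPU time (summed over the threads) and `omp_get_wtime()` for the wall clock time. This is a small sketch, not one of the PETSc examples; with proper binding, the ratio of the two times for the parallel loop should be roughly the number of threads.

```
#include <omp.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
  const long n   = 100000000;
  double     sum = 0.0;
  clock_t    c0  = clock();         /* CPU time, summed over all threads of the process */
  double     w0  = omp_get_wtime(); /* wall clock time */

  #pragma omp parallel for reduction(+ : sum)
  for (long j = 0; j < n; ++j) sum += 1.0 / (double)(j + 1);

  printf("CPU time   %g\n", (double)(clock() - c0) / CLOCKS_PER_SEC);
  printf("Wall time  %g  (sum %g)\n", omp_get_wtime() - w0, sum);
  return 0;
}
```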
## Detailed STREAMS study for large arrays

We now present a detailed study of a particular Intel Icelake system, the Intel(R) Xeon(R) Platinum 8362 CPU @ 2.80GHz. It has 32 cores on each of two sockets
(each with a single NUMA region, so a total of two NUMA regions), a
48 megabyte L3 cache, and 32 L2 caches of 1.25 megabytes, each shared by 2 cores.
It is running the Rocky Linux 8.8 (Green Obsidian) distribution. The compilers
used are GNU 12.2, Intel(R) oneAPI Compiler 2023.0.0 with both icc and icx, and NVIDIA nvhpc/23.1. The MPI implementation is OpenMPI 4.0.7, except for nvhpc, which uses 3.15. The compiler options were

- gcc -O3 -march=native
- icc -O3 -march=native
- icx -O3 -ffinite-math-only (the -xHost option, which replaces -march=native, crashed the compiler so it was not used)
- nvc -O3 -march=native

We first run the STREAMS benchmark with large double precision arrays of length $1.6\times10^8$; the size was selected to be large enough to eliminate cache effects.
Fig. {any}`fig_streams` shows the achieved bandwidth for gcc, icc, icx, and nvc using MPI and OpenMP with their default bindings, with the MPI binding of `--bind-to core --map-by numa`,
and with the OpenMP binding of `spread`.

:::{figure} /images/manual/streams.svg
:alt: STREAMS benchmark
:name: fig_streams

Comprehensive STREAMS performance on Intel system
:::

Note the two dips in the performance with OpenMP and gcc using binding in Fig. {any}`fig_gcc_streams`.
Requesting the `spread` binding produces better results for small core counts but poorer ones at larger counts.
These dips are the result of a bug in the gcc `spread` option, which places more threads in one NUMA domain than in the other.
For example, with gcc, `OMP_DISPLAY_AFFINITY` shows that for 28 threads, 12 are placed on one NUMA region, and 16 are placed on the other NUMA region.
The other compilers spread the threads evenly across the cores.

Fig. {any}`fig_icc_streams` shows the performance with the icc compiler. Note that the icc compiler produces significantly faster code for
the benchmark than the other compilers,
so its STREAMS speedups are smaller,
even though it
provides better absolute performance. No significant dips occur with the OpenMP binding using icc, icx, and nvc;
using `OMP_DISPLAY_AFFINITY` confirms, for example, that 14 threads (out of 28) are assigned to each NUMA domain, unlike with gcc.
Reproducing with gcc, via the OpenMP `OMP_PLACES` option, the exact thread placement that icc uses removes most of the dip in the gcc OpenMP binding result.
Thus, we conclude that on this system, the `spread` option does not always give the best thread placement with gcc due to its bug.
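For instance, an even placement of 28 threads across the two NUMA domains can be requested explicitly along the following lines; the place numbering is illustrative (it depends on how the system numbers its cores), and the executable name of the OpenMP benchmark is assumed.

```
$ OMP_NUM_THREADS=28 OMP_PROC_BIND=true OMP_PLACES="{0}:14,{32}:14" OMP_DISPLAY_AFFINITY=true ./OpenMPVersion
```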
:::{figure} /images/manual/icc_streams.svg
:alt: STREAMS benchmark icc
:name: fig_icc_streams

STREAMS benchmark icc
:::

Figs. {any}`fig_icx_streams` and {any}`fig_nvc_streams` show the performance with the icx and nvc compilers.

:::{figure} /images/manual/icx_streams.svg
:alt: STREAMS benchmark icx
:name: fig_icx_streams

STREAMS benchmark icx
:::

:::{figure} /images/manual/nvc_streams.svg
:alt: STREAMS benchmark nvc
:name: fig_nvc_streams

STREAMS benchmark nvc
:::

To understand the disparity in the STREAMS performance with icc, we reran it with the highest optimization level that produced the same results as gcc and icx: `-O1` without `-march=native`.
The results are displayed in Fig. {any}`fig_icc_O1_streams`; sure enough, they now match those of gcc and icx.

:::{figure} /images/manual/icc_O1_streams.svg
:alt: STREAMS benchmark icc -O1
:name: fig_icc_O1_streams

STREAMS benchmark icc -O1
:::

Next we display the STREAMS results using gcc with parallel efficiency instead of speedup in Fig. {any}`fig_streams_pe`.

:::{figure} /images/manual/gcc_streams_pe.svg
:alt: STREAMS parallel efficiency
:name: fig_streams_pe

STREAMS parallel efficiency gcc
:::

Observations:

- For MPI, the default binding and mapping on this system produce results that are as good as providing a specific binding and mapping. This is not true on many systems!
- For OpenMP with gcc, the default binding is better than using `spread`, because `spread` has a bug. For the other compilers, using `spread` is crucial for good performance on more than 32 cores.
- We have no explanation for why the improvement in speedup for gcc, icx, and nvc slows down between 32 and 48 cores and then improves rapidly, since we believe appropriate bindings are being used.

We now present a limited version of the analysis above on an Apple MacBook Pro M2 Max using MPICH version 4.1, gcc version 13.2 (installed via Homebrew), Xcode 15.0.1,
and -O3 optimization flags, with a smaller N of 80,000,000. macOS contains no public API for setting or controlling affinities, so it is not possible to set bindings for either MPI or OpenMP. In addition, the M2 has a combination of performance and efficiency cores, over whose use we have no control.

Fig. {any}`fig_m2_gcc_streams` provides the results.
Based on the plateau in the middle of the plot, we assume that the core numbering that
is used by MPICH does not produce the best
binding.

:::{figure} /images/manual/m2_gcc_streams.svg
:alt: STREAMS benchmark on Apple M2
:name: fig_m2_gcc_streams

STREAMS benchmark on Apple M2

OpenMPI (installed via Homebrew) produced similar results.
:::

## Detailed study with application

We now move on to a <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ksp/ksp/tutorials/ex45.c.html">PETSc application</a> that solves a three-dimensional Poisson problem on a unit
cube, discretized with
finite differences, whose linear system is solved with the PETSc algebraic multigrid preconditioner `PCGAMG` and the Krylov accelerator GMRES. Strong scaling is used to compare with the STREAMS benchmark: we measure the time to construct the preconditioner,
the time to solve the linear system with the preconditioner, and the time for the matrix-vector products. These are displayed in Fig. {any}`fig_gamg`; the corresponding parallel efficiencies are shown in Fig. {any}`fig_gamg_pe`. The runtime options were
`-da_refine 6 -pc_type gamg -log_view`. This study did not attempt to tune the default `PCGAMG` parameters.
The speedups were very similar for all the
compilers, so we only display results for gcc.

:::{figure} /images/manual/gamg.svg
:alt: GAMG speedup
:name: fig_gamg

GAMG speedup
:::

:::{figure} /images/manual/gamg_pe.svg
:alt: GAMG parallel efficiency
:name: fig_gamg_pe

GAMG parallel efficiency
:::

The dips in the performance at certain core counts are consistent between compilers
and result from the amount of MPI communication required by the communication pattern, which in turn results from the different three-dimensional parallel
grid layouts at those core counts.

We now present GAMG on the Apple MacBook Pro M2 Max.
Fig. {any}`fig_m2_gamg` provides the results. The performance is better than predicted by the STREAMS benchmark for all portions of the solver.

:::{figure} /images/manual/m2_gamg.svg
:alt: GAMG speedup on Apple M2
:name: fig_m2_gamg

GAMG speedup Apple M2
:::

(sec_pcmpi_study)=

## Application with the MPI linear solver server

We now run the same PETSc application using the MPI linear solver server mode, set using `-mpi_linear_solver_server`.
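A server-mode run of the same application might look like the following (a sketch; the process count is illustrative):

```
$ mpiexec -n 32 ./ex45 -da_refine 6 -pc_type gamg -log_view -mpi_linear_solver_server
```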
All compilers deliver largely the same performance, so we only present results with gcc.
We plot the speedup in Fig. {any}`fig_gamg_server` and the parallel efficiency in Fig. {any}`fig_gamg_server_pe`.
Note that the speedup is far below that of the parallel solve without the server, even though the distribution time for these runs was always less than three percent of the complete solution time.
The reason for the poorer performance is that in the pure MPI version, the vectors are partitioned directly from the three-dimensional grid; the cube is divided into (approximate)
sub-cubes, which minimizes the inter-process communication, especially in the matrix-vector product. In server mode, the vector is laid out using the cube's natural ordering, and each MPI process is assigned a contiguous subset of the vector. As a result, the flop rate for the matrix-vector product is significantly lower than that of the pure MPI version.
This indicates that a naive use of the MPI linear solver server will not produce as much performance as a usage that takes the matrix/vector layouts into account by performing an
initial grid partitioning. For example, if OpenMP is used to generate the matrix, it would be appropriate to have each OpenMP thread assigned a contiguous
portion of the vector that maps to a sub-cube of the domain. This would, of course, require a far more complicated OpenMP code, written using MPI-like parallelism and decomposition of the data.

`PCMPI` has two approaches for distributing the linear system. The first uses `MPI_Scatterv()` to communicate the matrix and vector entries from the initial compute process to all of the
server processes. Unfortunately, `MPI_Scatterv()` does not scale with more MPI processes; hence, the solution time is limited by the `MPI_Scatterv()`. To remove this limitation,
the second communication mechanism uses Unix shared memory, `shmget()`. Here, `PCMPI` allocates shared memory
from which all the MPI processes in the server
can access the portions of the matrices and vectors that they need.
There is still a (now much smaller) server processing overhead, since the initial data storage of the sequential matrix (in `MATSEQAIJ` storage)
still must be converted to `MATMPIAIJ` storage. `VecPlaceArray()` is used to convert the sequential vector to an MPI vector, so there is
no overhead, not even a copy, for this operation.
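The no-copy wrapping rests on `VecPlaceArray()`, which makes a vector use caller-supplied memory. A minimal sketch of the idea follows; it is not the `PCMPI` code itself, and the locally allocated array here merely stands in for the memory a server process would obtain from shared memory.

```
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec          x;
  PetscScalar *array;
  PetscInt     nlocal = 100; /* this process's share of the vector (illustrative) */

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* memory that already holds this process's portion of the data, for example a
     region of shared memory provided by the originating process */
  PetscCall(PetscMalloc1(nlocal, &array));
  for (PetscInt i = 0; i < nlocal; i++) array[i] = (PetscScalar)i;

  PetscCall(VecCreateMPI(PETSC_COMM_WORLD, nlocal, PETSC_DETERMINE, &x));
  PetscCall(VecPlaceArray(x, array)); /* the vector now uses array directly; no copy */
  /* ... use x in a solve ... */
  PetscCall(VecResetArray(x));        /* restore the vector's own array before destroying it */
  PetscCall(VecDestroy(&x));
  PetscCall(PetscFree(array));
  PetscCall(PetscFinalize());
  return 0;
}
```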
:::{figure} /images/manual/gamg_server.svg
:alt: GAMG server speedup
:name: fig_gamg_server

GAMG server speedup
:::

:::{figure} /images/manual/gamg_server_pe.svg
:alt: GAMG server parallel efficiency
:name: fig_gamg_server_pe

GAMG server parallel efficiency
:::

:::{figure} /images/manual/gamg_server_pe_streams.svg
:alt: GAMG server parallel efficiency
:name: fig_gamg_server_pe_streams

GAMG server parallel efficiency vs STREAMS
:::

In {any}`fig_gamg_server_pe_streams`, we plot the parallel efficiency of the linear solve and of the STREAMS benchmark, which track each other well.
This example demonstrates the **utility of the STREAMS benchmark for predicting the speedup (parallel efficiency) of a memory bandwidth-limited application** on a shared memory Linux system.

For the Apple M2, we present the results using Unix shared-memory communication of the matrix and vectors to the server processes
in {any}`fig_m2_gamg_server_shared_speedup`.
To run this, one must first set up the machine to use shared memory as described in `PetscShmgetAllocateArray()`.

:::{figure} /images/manual/m2_gamg_server_shared_speedup.svg
:alt: GAMG solver speedup
:name: fig_m2_gamg_server_shared_speedup

GAMG server solver speedup on Apple M2
:::

This example demonstrates that the **MPI linear solver server feature of PETSc can generate a reasonable speedup in the linear solver** on machines that have significant
memory bandwidth. However, one should not expect the speedup to be near the total number of cores on the compute node.

```{rubric} Footnotes
```

[^achievable-footnote]: Achievable memory bandwidth is the actual bandwidth one can obtain,
    as opposed to the theoretical peak that is calculated using the hardware specification.

[^memorymigration-footnote]: Data can also be migrated among different memory sockets during a computation by the OS, but we ignore this possibility in the discussion.

```{eval-rst}
.. bibliography:: /petsc.bib
   :filter: docname in docnames
```