## Application with the MPI linear solver server

We now run the same PETSc application using the MPI linear solver server mode, set using `-mpi_line…
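As a sketch, such a run might be launched as follows. The executable name, rank count, and the extra solver options are illustrative assumptions, not taken from this example; the option spelled out here is PETSc's `-mpi_linear_solver_server` flag.

```shell
# Hypothetical launch: 8 MPI ranks. With the linear solver server enabled,
# the application code runs on rank 0 while the remaining ranks act as
# solver server processes. "./app" and the KSP/PC options are placeholders.
mpiexec -n 8 ./app -mpi_linear_solver_server -ksp_type cg -pc_type gamg
```

This is a command-line fragment; it requires a PETSc application built against an MPI installation to actually run.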
Note that it is far below the parallel solve without the server. However, the distribution time for…

…inter-process communication, especially in the matrix-vector product. In server mode, the vector i…
This indicates that a naive use of the MPI linear solver server will not produce as much performanc…
server processes. Unfortunately, `MPI_Scatterv()` does not scale with more MPI processes; hence, th…

from which all the MPI processes in the server

There is still a (now much smaller) server processing overhead since the initial data storage of th…
(Figure: GAMG server speedup)

(Figure: GAMG server parallel efficiency)

(Figure: GAMG server parallel efficiency vs STREAMS)
…the results using Unix shared-memory communication of the matrix and vectors to the server processes.
(Figure: GAMG server solver speedup on Apple M2)
This example demonstrates that the **MPI linear solver server feature of PETSc can generate a reaso…