/petsc/src/ts/tutorials/output/

ex30_restart_simplex_refonly_nsize-1.out: (4) Test parallel save, (10) Test parallel load from sequential save, (13) Test sequential load from parallel save, (16) Test parallel load from parallel save
ex30_restart_simplex_nsize-1_dm_refine_hierarchy-0.out: (4) Test parallel save, (10) Test parallel load from sequential save, (13) Test sequential load from parallel save, (16) Test parallel load from parallel save
ex30_restart_nsize-1_dm_refine_hierarchy-0.out: (4) Test parallel save, (10) Test parallel load from sequential save, (13) Test sequential load from parallel save, (16) Test parallel load from parallel save
ex30_restart_nsize-1_dm_refine_hierarchy-1.out: (4) Test parallel save, (10) Test parallel load from sequential save, (13) Test sequential load from parallel save, (16) Test parallel load from parallel save
ex30_restart_simplex_nsize-1_dm_refine_hierarchy-1.out: (4) Test parallel save, (10) Test parallel load from sequential save, (13) Test sequential load from parallel save, (16) Test parallel load from parallel save
ex30_restart_refonly_nsize-1.out: (4) Test parallel save, (10) Test parallel load from sequential save, (13) Test sequential load from parallel save, (16) Test parallel load from parallel save
ex30_restart_simplex_nsize-2_dm_refine_hierarchy-0.out: (6) Test parallel save, (14) Test parallel load from sequential save, (17) Test sequential load from parallel save, (22) Test parallel load from parallel save
ex30_restart_simplex_nsize-2_dm_refine_hierarchy-1.out: (6) Test parallel save, (14) Test parallel load from sequential save, (17) Test sequential load from parallel save, (22) Test parallel load from parallel save
ex30_restart_refonly_nsize-2.out: (6) Test parallel save, (14) Test parallel load from sequential save, (17) Test sequential load from parallel save, (22) Test parallel load from parallel save
ex30_restart_nsize-2_dm_refine_hierarchy-1.out: (6) Test parallel save, (14) Test parallel load from sequential save, (17) Test sequential load from parallel save, (22) Test parallel load from parallel save
ex30_restart_simplex_refonly_nsize-2.out: (6) Test parallel save, (14) Test parallel load from sequential save, (17) Test sequential load from parallel save, (22) Test parallel load from parallel save
ex30_restart_nsize-2_dm_refine_hierarchy-0.out: (6) Test parallel save, (14) Test parallel load from sequential save, (17) Test sequential load from parallel save, (22) Test parallel load from parallel save
|
/petsc/src/binding/petsc4py/docs/source/

overview.rst:
    (5) scalable (parallel) solution of scientific applications modeled by
    (18) solution process. PETSc includes an expanding suite of parallel linear
    (21) mechanisms needed within parallel application codes, such as simple
    (22) parallel matrix and vector assembly routines that allow the overlap of
    (57) required for many parallel solutions of PDEs.
    (61) easy-to-use parallel scatter and gather operations, as well as
    (66) of parallel sparse matrices. Includes several different parallel
    (70) :PC: A collection of sequential and parallel preconditioners,
    (72) parallel) block Jacobi, overlapping additive Schwarz methods.
|
/petsc/doc/overview/

index.md:
    (4) includes a large suite of scalable parallel **linear and nonlinear
    (7) managing parallel PDE discretizations including parallel matrix and vector assembly routines. {any}…
|
/petsc/src/dm/impls/plex/tests/

ex31.c:
    (9) PetscBool parallel; /* Use ParMetis or Metis */  [member]
    (17) options->parallel = PETSC_FALSE;  [in ProcessOptions()]
    (23) …arallel", "Use ParMetis instead of Metis", FILENAME, options->parallel, &options->parallel, NULL));  [in ProcessOptions()]
    (105) …PetscCall(DMPlexRebalanceSharedPoints(dm, user.entityDepth, user.useInitialGuess, user.parallel, &…  [in main()]
|
/petsc/doc/manualpages/MANSECHeaders/

DMDA:
    (3) … mesh, with interfaces for both topology and geometry. It is capable of parallel refinement and co…
    (4) Some support for parallel redistribution is available through the `PCTELESCOPE` object. A piecewise…

DMForest:
    (4) It is capable of parallel structured adaptive mesh refinement and coarsening and parallel redistrib…

DMPlex:
    (3) … mesh, with interfaces for both topology and geometry. It is capable of parallel refinement and co…
    (4) (using Pragmatic or ParMmg) and parallel redistribution for load balancing. It is designed to inter…
|
/petsc/doc/manual/

about_this_manual.md:
    (8) on parallel (and serial) computers. PETSc uses the MPI standard for all
    (11) PETSc/TAO includes a large suite of parallel **linear solvers**, **nonlinear
    (34) - PETSc can be used to provide a “MPI parallel linear
    (35) solver” in an otherwise sequential or OpenMP parallel code.

blas-lapack.md:
    (40) …ETSc simulations which do not use external packages there is generally no benefit to using parallel
    (41) …et the number of threads used by each MPI process for its shared memory parallel BLAS/LAPACK. The …
    (42) …ACES` may also need to be set appropriately for the system to obtain good parallel performance with
    (43) …re option `--with-openmp` will trigger PETSc to try to locate and use a parallel BLAS/LAPACK libra…
    (45) Certain external packages such as MUMPS may benefit from using parallel BLAS/LAPACK operations. See…
    (46) how one can restrict the number of MPI processes while running MUMPS to utilize parallel BLAS/LAPAC…
|
/petsc/src/dm/impls/plex/tests/output/

ex69_quad_3.out:
    (20) [0]BT for parallel flipped cells:
    (22) [1]BT for parallel flipped cells:
    (24) [2]BT for parallel flipped cells:
    (26) [3]BT for parallel flipped cells:
|
/petsc/src/benchmarks/streams/

OpenMPVersion.c:
    (45) #pragma omp parallel for schedule(static)  [in main()]
    (58) #pragma omp parallel for schedule(static)  [in main()]

OpenMPVersionLikeMPI.c:
    (48) #pragma omp parallel for schedule(static)  [in main()]
    (64) #pragma omp parallel for schedule(static)  [in main()]
|
/petsc/doc/changes/

2029.md:
    (66) - ISGetSize() now returns global parallel size, ISGetLocalSize()
    (133) - Added DAMGCreate() etc to help easily write parallel multigrid
    (156) - Added DAMGCreate() etc to help easily write parallel multigrid
    (183) local evaluation of the parallel nonlinear function required for
    (189) directly doing any parallel computing.
|
/petsc/doc/install/

external_software.md:
    (15) - [CUDA](https://developer.nvidia.com/cuda-toolkit) A parallel computing platform and application p…
    (27) …ab/METIS) and [ParMeTiS](https://github.com/KarypisLab/PARMETIS) serial/parallel graph partitioner…
    (29) - [PaStiX](https://gforge.inria.fr/projects/pastix/) A parallel LU and Cholesky solver package.
    (34) ….gov/~xiaoye/SuperLU/#superlu_dist) Robust and efficient sequential and parallel direct sparse sol…
    (49) - libtfs (the scalable parallel direct solver created and written by Henry Tufo and Paul
|