/petsc/src/ksp/pc/impls/tfs/
  gs.c
    385: PetscInt **reduce;   (in gsi_via_bit_mask(), local)
    409: reduce = gs->local_reduce;   (in gsi_via_bit_mask())
    410: for (i = 0, t1 = 0; i < gs->num_local; i++, reduce++) {   (in gsi_via_bit_mask())
    411: …if ((PCTFS_ivec_binary_search(**reduce, gs->pw_elm_list, gs->len_pw_list) >= 0) || PCTFS_ivec_bina…   (in gsi_via_bit_mask())
    416: **reduce = map[**reduce];   (in gsi_via_bit_mask())
    820: PetscInt *num, *map, **reduce;   (in PCTFS_gs_gop_local_out(), local)
    825: reduce = gs->gop_local_reduce;   (in PCTFS_gs_gop_local_out())
    826: while ((map = *reduce++)) {   (in PCTFS_gs_gop_local_out())
    849: PetscInt *num, *map, **reduce;   (in PCTFS_gs_gop_local_plus(), local)
    854: reduce = gs->local_reduce;   (in PCTFS_gs_gop_local_plus())
    [all …]
|
/petsc/config/BuildSystem/config/utilities/
  FPTrap.py
    3: from functools import reduce
    35: …if reduce(lambda x,y: x and y, map(self.functions.check, ['fp_sh_trap_info', 'fp_trap', 'fp_enable…
|
/petsc/config/BuildSystem/
  graph.py
    3: from functools import reduce
    19: …return 'DirectedGraph with '+str(len(self.vertices))+' vertices and '+str(reduce(lambda k,l: k+l, …
|
/petsc/src/binding/petsc4py/src/petsc4py/PETSc/
  SF.pyx
    464: """End a broadcast & reduce operation started with `bcastBegin`.
    501: Values to reduce.
    527: Values to reduce.
|
/petsc/doc/manual/
  advanced.md
    79: ordering for the matrix. The ordering generally is done to reduce fill
    209: for numerical stability. This is because trying to both reduce fill and
|
  regressor.md
    149: constructs a linear model to reduce the sum of squared differences
|
  ksp.md
    643: thus effectively "hiding" the time of the reductions. In addition, they may reduce the number of gl…
    1209: number of active processes on coarse grids to reduce communication costs
    1211: costs down. Most AMG solvers reduce to just one active process on the
    1213: the coarse grid on all processes to reduce communication
|
  mat.md
    1230: * - Ordering to reduce fill
    1366: reduce fill in sparse matrix factorizations.
|
  performance.md
    528: `MatILUFactorSymbolic()` can reduce greatly the number of mallocs and
|
  snes.md
    949: are often introduced that significantly reduce these expenses and yet
|
  vec.md
    1513: One may wish to gather the entries of the `leafdata` for each root but not reduce them to a single …
|
  tao.md
    1274: sufficiently reduce the nonlinear objective function, then the step is
|
/petsc/src/mat/impls/aij/seq/mkl_pardiso/
  mkl_pardiso.c
    353: …rSchur_Private(Mat_MKL_PARDISO *mpardiso, PetscScalar *whole, PetscScalar *schur, PetscBool reduce)   (in MatMKLPardisoScatterSchur_Private(), argument)
    356: if (reduce) { /* data given for the whole matrix */   (in MatMKLPardisoScatterSchur_Private())
|
/petsc/doc/changes/
  315.md
    146: which to reduce active processors on coarse grids in `PCGAMG` that
|
  2024.md
    355: `VecNormEnd()`, which reduce communication overhead in parallel;
|
/petsc/doc/install/
  install_tutorial.md
    23: Don't need Fortran? Use `--with-fortran-bindings=0` to reduce the build times. If you
|
/petsc/doc/developers/
  buildsystem.md
    13: are mechanical operations that reduce to applying a construction rule to
|
/petsc/config/BuildSystem/config/
  setCompilers.py
    6: from functools import reduce
    18: return reduce(lambda x,y:x or y,lst,False)
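The BuildSystem sources above (FPTrap.py, graph.py, setCompilers.py) all use `functools.reduce` to fold a list into a single value. A minimal standalone sketch of the two boolean folds, with illustrative helper names that are not PETSc's own:

```python
from functools import reduce

def any_true(flags):
    # setCompilers.py idiom: logical-OR fold with initializer False,
    # equivalent to the built-in any(flags).
    return reduce(lambda x, y: x or y, flags, False)

def all_true(flags):
    # FPTrap.py idiom: logical-AND fold; the initializer True makes the
    # fold well-defined on an empty list (like the built-in all(flags)).
    return reduce(lambda x, y: x and y, flags, True)

print(any_true([False, True, False]))  # True
print(all_true([True, False, True]))   # False
```

In modern Python these folds are usually written with the built-ins `any()` and `all()`; the `reduce` form here predates that convention in the configure code.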
|
/petsc/src/vec/vec/impls/seq/cupm/
  vecseqcupm_impl.hpp
    2098: …#define THRUST_MINMAX_REDUCE(s, b, e, real_part__, ...) THRUST_CALL(thrust::reduce, s, b, e, __VA_…   (in MinMax_())
    2187: PetscCallThrust(*sum = THRUST_CALL(thrust::reduce, stream, dptr, dptr + n, PetscScalar{0.0}););   (in Sum())
|
/petsc/src/ksp/ksp/tutorials/output/
  ex2_help.out
    287: …-pc_factor_mat_ordering_type <now natural : formerly natural>: Reordering to reduce nonzeros in fa…
|
/petsc/lib/petsc/bin/maint/
  toclapack.sh
    2522: /* If this is true, then we need to reduce EMAX by one because */
    4513: /* If this is true, then we need to reduce EMAX by one because */
|
/petsc/doc/faq/
  index.md
    126: ### Does all the PETSc error checking and logging reduce PETSc's efficiency?
    321: ### The PETSc distribution is SO Large. How can I reduce my disk space usage?
|
/petsc/share/petsc/datafiles/meshes/
  testcase3D.cas
    238: (mixing-plane/reduce-backflow? #f)
    5470: (morpher/reduce-bb-factor 0.8)
    7543: (cache-flush/target/reduce-by-mb 0)
|