Running the STREAMS benchmark (the `make streams` target of the PETSc build) estimates the attainable memory bandwidth and predicts a speedup for bandwidth-bound PETSc applications of at most about 4x when running multiple MPI ranks on the node. Most of the gains are already obtained when running with only a few ranks, because the memory bandwidth saturates well before all cores are in use.
Maximum memory bandwidth on the node is obtained when the memory controllers on each CPU socket (each NUMA domain) are all engaged, which requires spreading the MPI ranks across the sockets rather than filling one socket first. Data placement also matters: on systems with a first-touch policy, memory is not physically mapped at the point of issuing `malloc()`, but at the point when the respective memory page is first written to, and the page then resides in the NUMA domain of the core that wrote it. A first-touch-aware initialization is sketched below.
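The following minimal sketch illustrates first-touch initialization under the assumptions above (Linux-style first-touch semantics, an OpenMP build); the helper name `alloc_first_touch` is hypothetical. The point is that the loop that first writes the data should use the same thread layout as the later computation.

```c
#include <stdlib.h>

/* Hypothetical helper: allocate and initialize so that each page is
   first written by the thread that will later compute on it. */
double *alloc_first_touch(size_t n)
{
  double *a = malloc(n * sizeof(*a)); /* no physical pages placed yet */
  if (!a) return NULL;

  /* The first write maps each page into the NUMA domain of the writing
     core, so parallelize initialization like the compute loops. */
#pragma omp parallel for schedule(static)
  for (size_t i = 0; i < n; i++) a[i] = 0.0;
  return a;
}
```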
For the non-optimized version on the left, the speedup obtained when running multiple MPI ranks is clearly smaller than for the optimized version on the right.
Certain compiler transformations, such as reordering floating-point operations, are disabled by default because they can change numerical results; they may be turned on when using options for very aggressive optimization, in which case the computed results should be verified carefully.
- When possible, use `VecMDot()` rather than a series of calls to `VecDot()`, as in the sketch below.
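A minimal sketch (the helper name, vector array `y`, and count `nv` are illustrative): `VecMDot()` traverses `x` once and needs a single parallel reduction, instead of `nv` passes and `nv` reductions.

```c
#include <petscvec.h>

/* Hypothetical helper: compute val[i] = (x, y[i]) for i = 0..nv-1. */
PetscErrorCode inner_products(Vec x, PetscInt nv, const Vec y[], PetscScalar val[])
{
  PetscFunctionBeginUser;
  /* Instead of nv separate calls:
   *   for (PetscInt i = 0; i < nv; i++) PetscCall(VecDot(x, y[i], &val[i]));
   * use one fused call that reads x once and needs one MPI reduction: */
  PetscCall(VecMDot(x, nv, y, val));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```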
When symbolically factoring an AIJ matrix, PETSc has to guess how much fill there will be. Careful use of the fill parameter in the `MatFactorInfo` structure when calling `MatLUFactorSymbolic()` or `MatILUFactorSymbolic()` can greatly reduce the number of mallocs and copies required, and thus improve the performance of the factorization; see the sketch below.
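A minimal sketch, assuming an assembled AIJ matrix `A`; the fill estimate of 2.0 (expected factor nonzeros relative to matrix nonzeros) and the nested-dissection ordering are illustrative choices, not recommendations:

```c
#include <petscmat.h>

/* Hypothetical helper: LU-factor A with an explicit fill estimate. */
PetscErrorCode factor_with_fill(Mat A, Mat *F)
{
  IS            rowperm, colperm;
  MatFactorInfo info;

  PetscFunctionBeginUser;
  PetscCall(MatGetFactor(A, MATSOLVERPETSC, MAT_FACTOR_LU, F));
  PetscCall(MatGetOrdering(A, MATORDERINGND, &rowperm, &colperm));
  PetscCall(MatFactorInfoInitialize(&info));
  info.fill = 2.0; /* guessed ratio of factor nonzeros to matrix nonzeros */
  PetscCall(MatLUFactorSymbolic(*F, A, rowperm, colperm, &info));
  PetscCall(MatLUFactorNumeric(*F, A, &info));
  PetscCall(ISDestroy(&rowperm));
  PetscCall(ISDestroy(&colperm));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```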
PETSc performs its memory allocation through `PetscMalloc()` and `PetscFree()` so that it can track its memory usage and perform error checking. Users are urged to use these routines as well when allocating memory in application code (see the sketch below). Unfreed memory can then be reported at `PetscFinalize()` via the option `-malloc_dump`; this tracking is on by default, but only works when PETSc is configured with `--with-debugging` (the default configuration).
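A minimal sketch of PETSc-managed allocation; the helper name and the length `n` are illustrative. Allocations made this way appear in PETSc's memory logs and leak checks.

```c
#include <petscsys.h>

/* Hypothetical helper: allocate, use, and free a tracked array. */
PetscErrorCode work(PetscInt n)
{
  PetscReal *a;

  PetscFunctionBeginUser;
  PetscCall(PetscMalloc1(n, &a)); /* counted in PETSc's memory accounting */
  for (PetscInt i = 0; i < n; i++) a[i] = 0.0;
  PetscCall(PetscFree(a)); /* omitting this would be reported by -malloc_dump */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```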
- When finer granularity is desired, routines such as `PetscMallocGetCurrentUsage()` and `PetscMemoryGetCurrentUsage()` can be called at selected points in the code (see the sketch after this list).
- When running with `-log_view`, the additional option `-log_view_memory` prints, for each logged event, information about the memory allocated during that event.
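A minimal sketch of querying memory use at a chosen point; where to place the call is up to the application:

```c
#include <petscsys.h>

/* Hypothetical helper: report current memory use on rank 0. */
PetscErrorCode report_memory(void)
{
  PetscLogDouble mallocd, resident;

  PetscFunctionBeginUser;
  PetscCall(PetscMallocGetCurrentUsage(&mallocd));  /* bytes obtained via PetscMalloc() */
  PetscCall(PetscMemoryGetCurrentUsage(&resident)); /* process resident set size, bytes */
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "malloc'd %g bytes, resident %g bytes\n", mallocd, resident));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```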
If data structures are recreated rather than reused, performance will be degraded. For example, when solving a nonlinear problem or marching through time steps, reusing the matrix and its nonzero structure for many steps when appropriate can make the code run significantly faster; one way to express this is sketched below.
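As a hedged sketch of one such reuse mechanism: lagging the Jacobian keeps the same matrix, its nonzero structure, and its preconditioner in use across several Newton steps. The lag value 3 is illustrative; the same effect is available at runtime via `-snes_lag_jacobian`.

```c
#include <petscsnes.h>

/* Hypothetical helper: reuse the Jacobian across Newton steps. */
PetscErrorCode configure_reuse(SNES snes)
{
  PetscFunctionBeginUser;
  PetscCall(SNESSetLagJacobian(snes, 3)); /* rebuild the Jacobian only every 3 iterations */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```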
- **Problem too large for physical memory size**: When timing a program, one should always leave a margin (at least ten percent) between the total memory a process uses and the physical memory of the machine; otherwise the timings will be distorted by paging.
Running the code once to fault in all pages before the timed run is one way of overcoming paging overhead when profiling a code. We have found that timings obtained this way are far more reproducible; PETSc's preloading macros automate the pattern, as sketched below.
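A hedged sketch of the preloading macros (the stage name "Solve" and the helper are illustrative): the enclosed segment runs twice, and logging for the stage is disabled during the first pass, so one-time costs such as paging do not appear in the reported timings.

```c
#include <petscksp.h>

/* Hypothetical helper: time a solve without one-time paging costs. */
PetscErrorCode timed_solve(KSP ksp, Vec b, Vec x)
{
  PetscFunctionBeginUser;
  PetscPreLoadBegin(PETSC_TRUE, "Solve"); /* body runs twice; pass 1 is unlogged */
  PetscCall(KSPSolve(ksp, b, x));
  PetscPreLoadEnd(); /* only the second pass counts toward the logs */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```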