Lines Matching refs:be
18 techniques and their associated options can be selected at runtime.
20 `KSP` can also be used to solve least squares problems, using, for example, `KSPLSQR`. See
57 the matrix from which the preconditioner is to be constructed, `Pmat`,
63 Much of the power of `KSP` can be accessed through the single routine
80 process stopped can be obtained using
91 regarding convergence testing. Note that multiple linear solves can be
93 longer needed, it should be destroyed with the command
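The create/solve/destroy sequence these fragments describe can be sketched as follows. This is a minimal sketch, assuming an already-assembled `Mat` `Amat` and assembled `Vec`s `b` and `x`; error handling follows the `PetscCall()` convention of recent PETSc releases:

```c
#include <petscksp.h>

/* Minimal KSP lifecycle sketch: create the solver, attach the operators,
   solve, and destroy the solver when it is no longer needed. */
static PetscErrorCode SolveSketch(Mat Amat, Vec b, Vec x)
{
  KSP ksp;

  PetscFunctionBeginUser;
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, Amat, Amat)); /* here Amat also serves as Pmat */
  PetscCall(KSPSetFromOptions(ksp));           /* pick up runtime options */
  PetscCall(KSPSolve(ksp, b, x));
  PetscCall(KSPDestroy(&ksp));                 /* destroy once no longer needed */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```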
139 `PCFactorSetUseInPlace()`, discussed below, causes factorization to be
150 for ILU) will be done during the first call to `KSPSolve()` only; such
151 operations will *not* be repeated for successive solves.
155 still simply call `KSPSolve()`. In this case the preconditioner will be recomputed
159 old preconditioner can be more efficient.
170 be used, one calls the command
176 The type can be one of `KSPRICHARDSON`, `KSPCHEBYSHEV`, `KSPCG`,
180 The `KSP` method can also be set with the options database command
196 GMRES restart and Richardson damping factor can also be set with the
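Setting the method type and the per-method parameters mentioned in these fragments can be sketched as follows, assuming `ksp` was created as above:

```c
/* Select GMRES and its restart length (equivalent to -ksp_gmres_restart 50) */
PetscCall(KSPSetType(ksp, KSPGMRES));
PetscCall(KSPGMRESSetRestart(ksp, 50));

/* Or select Richardson and its damping factor (-ksp_richardson_scale 0.9) */
PetscCall(KSPSetType(ksp, KSPRICHARDSON));
PetscCall(KSPRichardsonSetScale(ksp, 0.9));
```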
201 GMRES is the unmodified (classical) Gram-Schmidt method, which can be
210 stability of the orthogonalization. This can be changed with the option
264 can be applied to the system {eq}`eq_axeqb` by
271 matrices from which the preconditioner is to be constructed). If
289 preconditioning by default. Right preconditioning can be activated for
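Right preconditioning, as noted above, can also be requested in code; a one-line sketch assuming an existing `ksp`:

```c
/* Use right preconditioning instead of the default (same as -ksp_pc_side right) */
PetscCall(KSPSetPCSide(ksp, PC_RIGHT));
```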
441 thus `KSPBICG` cannot always be used.
458 can be used by the options database command
489 can be set with the routine
499 initial values when the object's type was set. These parameters can also be set from the options
528 of the residuals should be displayed at each iteration by using `-ksp_monitor` with
566 formula $r = b - Ax$, the routine is slow and should be used only
568 be accessed with the command line options `-ksp_monitor`,
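Besides the `-ksp_monitor` family of command-line options mentioned above, a monitor can be installed programmatically. A hedged sketch (the printed format is illustrative, not a PETSc default) using the `KSPMonitorSet()` callback signature:

```c
/* User-defined monitor: called each iteration with the iteration number
   and the (preconditioned) residual norm. */
static PetscErrorCode MyMonitor(KSP ksp, PetscInt it, PetscReal rnorm, void *ctx)
{
  PetscFunctionBeginUser;
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "iter %" PetscInt_FMT " rnorm %g\n", it, (double)rnorm));
  PetscFunctionReturn(PETSC_SUCCESS);
}

/* later, after KSPCreate(): */
PetscCall(KSPMonitorSet(ksp, MyMonitor, NULL, NULL));
```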
606 compute its eigenvalues. It should only be used for matrices of size up
610 The eigenvalues may also be computed and displayed graphically with the
612 `-ksp_view_eigenvalues_explicit draw`. Or they can be dumped to the
620 Standard Krylov methods require that the preconditioner be a linear operator, thus, for example, a …
623 on memory) the preconditioner to be nonlinear. For example, they can be used with the `PCKSP` preco…
641 but can be above 10,000 processes) this synchronization is very time consuming and can significantl…
644 computations in a way that some of them can be collapsed, e.g., two or more calls to `MPI_Allreduce…
647 Special configuration of MPI may be necessary for reductions to make asynchronous progress, which i…
661 calculated or it may be stored in a different location. To access the
775 can be set with routines and options database commands provided for this
778 list can be found by consulting the `PCType` manual page; we discuss
800 in the first factorization to be reused for later factorizations.
805 besides natural and cannot be used with the drop tolerance
806 factorization. These options may be set in the database with
826 PETSc provides only a sequential SOR preconditioner; it can only be
844 sets the kind of SOR sweep, where the argument `type` can be one of
847 Setting the type to be `SOR_SYMMETRIC_SWEEP` produces the SSOR method.
851 variants can also be set with the options `-pc_sor_omega <omega>`,
857 preconditioning can be employed with the method `PCEISENSTAT`
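The SOR/SSOR options quoted above can be set in code as well; a sketch assuming `pc` was obtained from the solver with `KSPGetPC(ksp, &pc)`:

```c
/* Configure an SOR preconditioner with relaxation factor omega = 1.5
   and a symmetric sweep, i.e. SSOR (same as -pc_sor_omega 1.5). */
PetscCall(PCSetType(pc, PCSOR));
PetscCall(PCSORSetOmega(pc, 1.5));
PetscCall(PCSORSetSymmetric(pc, SOR_SYMMETRIC_SWEEP));
```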
879 causes the factorization to be performed in-place and hence destroys the
891 These orderings can also be set through the options database by
921 In fact, all of the `KSP` and `PC` options can be applied to the
968 also be set with the options database `-pc_asm_type [basic`,
1001 be computed beyond what may have been set with a call to
1003 `overlap` must be set to 0. In particular, if one does *not*
1005 would be computed internally by PETSc, and using an overlap of 0 would
1013 the user to specify subdomains that span multiple MPI processes. This can be
1015 To be effective, the multi-processor subproblems must be solved using a
1017 similar parallel direct solver could be used; other choices may include
1023 options have the same meaning as with `PCASM` and may also be set with
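The overlap and type options for `PCASM` referred to above have direct programmatic equivalents; a sketch assuming `pc` was obtained via `KSPGetPC()`:

```c
/* Additive Schwarz with two layers of overlap and restricted variant
   (equivalent to -pc_asm_overlap 2 -pc_asm_type restrict). */
PetscCall(PCSetType(pc, PCASM));
PetscCall(PCASMSetOverlap(pc, 2));
PetscCall(PCASMSetType(pc, PC_ASM_RESTRICT));
```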
1041 connected. In the future this will be addressed with a hierarchical
1051 constructing multi-rank subdomains that can then be used with
1065 Examples of the described `PCGASM` usage can be found in
1078 and *AMGx* (CUDA platforms only) that can be downloaded in the
1086 construction. `PCGAMG` is designed from the beginning to be modular, to
1087 allow for new components to be added easily and also populates a
1105 …rger value produces a faster preconditioner to create and solve, but the convergence may be slower.
1115 …roduces a preconditioner that is faster to create and solve with but the convergence may be slower.
1122 …> - `-pc_gamg_agg_nsmooths` \<n:int:1> Number of smoothing steps to be used in constructing the pr…
1123 …> generally, one or more is best. For some strongly nonsymmetric problems, 0 may be best. See `P…
1128 …> are used for the coarser mesh). A larger value will cause the coarser problems to be run on fe…
1143 …> (otherwise fewer MPI processes will be used). A larger value will cause the coarse problems …
1150 …> method for the entire multigrid solve has to be a flexible method such as `KSPFGMRES`. General…
1161 > the subdomains. This option automatically switches the smoother on the levels to be `PCASM`.
1172 for symmetric positive definite systems. Unsmoothed aggregation can be
1177 used for SA, can be set with `-pc_gamg_esteig_ksp_max_it` and
1188 Jacobi preconditioning. This can be overridden with
1194 `-mat_block_size bs` or `MatSetBlockSize(mat,bs)`. Equations must be
1214 costs potentially). However, this forcing to one process can be overridden if one
1227 until it is small enough to be solved with an exact solver (e.g., LU or
1286 of AMG solvers and often special purpose methods must be developed to
1288 performance degradation that may not be fixed with parameters in PETSc
1299 specifically, AMS needs the discrete gradient operator, which can be
1302 operator, which can be set using `PCHYPRESetDiscreteCurl()`.
1330 each level, which can be used to see if you are coarsening at an
1338 entries on the fine level). Grid complexity should be well under 2.0 and
1345 threshold) should be tried if convergence is slow.
1365 iterations of SOR), then you may be fairly close to textbook multigrid
1366 efficiency. However, you also need to check the setup costs. This can be
1387 …ion from the coarse space to the fine space. We would like this process to be accurate for the fun…
1393 …the coarse space need not be as accurate as the fine solution, for the same reason that updates in…
1413 Now we would like the interpolant of the coarse representer to the fine grid to be as close as poss…
1434 …tions in finite elements, and thus the naive sparsity pattern from local interpolation can be used.
1436 …here. Our general PETSc routine should work for both since the input would be the checking set (fi…
1486 which can be solved using LAPACK.
1494 …genvalue was not adequately represented by $P^l_{l+1}$, and the interpolator should be recomputed.
1505 domain decomposition method which can be easily adapted to different
1525 `-pc_bddc_neumann_approximate` should be used to inform `PCBDDC`. If
1527 operator should be attached to the local matrix via
1533 freedom can be supplied to `PCBDDC` by using the following functions:
1544 specification of the primal constraints to be imposed at the interface
1554 quadrature rules for different classes of the interface can be listed in
1556 corresponds to the maximum number of constraints that can be imposed for
1563 be beneficial for `PCBDDC`; use `PCBDDCSetChangeOfBasisMat()` to
1577 selection of constraints could be requested by specifying a threshold
1596 The latter can be requested by specifying the number of requested level
1599 the number of subdomains that will be generated at the next level; the
1644 can be used to obtain the operators with `PCGetOperators()` and the
1664 $B = B_1 + B_2$ can be obtained with
1676 behavior. An alternative can be set with the option
1703 the preconditioner can be done in two ways: using the original linear
1712 The individual preconditioners can be accessed (in order to set options)
1732 These various options can also be set via the options database. For
1734 causes the composite preconditioner to be used with two preconditioners:
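An additive composite of the kind $B = B_1 + B_2$ described in these fragments can be sketched as follows, assuming `pc` comes from `KSPGetPC()`; `PCCompositeAddPCType()` is the name in recent PETSc releases (older releases used `PCCompositeAddPC()`):

```c
/* Composite preconditioner applying Jacobi and ILU additively */
PetscCall(PCSetType(pc, PCCOMPOSITE));
PetscCall(PCCompositeAddPCType(pc, PCJACOBI));
PetscCall(PCCompositeAddPCType(pc, PCILU));
PetscCall(PCCompositeSetType(pc, PC_COMPOSITE_ADDITIVE)); /* B = B1 + B2 */
```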
1743 PETSc also allows a preconditioner to be a complete `KSPSolve()` linear solver. This
1757 matrix, `Pmat`, as the matrix to be solved in the linear system; to
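Using a complete `KSPSolve()` as the preconditioner, as described above, can be sketched as follows (assuming `pc` from `KSPGetPC()`); note that the outer method should then generally be a flexible one such as `KSPFGMRES`:

```c
KSP innerksp;

/* Inner linear solve (on Pmat) used as the preconditioner */
PetscCall(PCSetType(pc, PCKSP));
PetscCall(PCKSPGetKSP(pc, &innerksp));      /* access the inner solver */
PetscCall(KSPSetType(innerksp, KSPGMRES));  /* configure it like any KSP */
```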
1781 these components to be encapsulated within a PETSc-compliant
1808 For standard V or W-cycle multigrids, one sets the `mode` to be
1821 also be set from the options database. The option names are
1827 separate configuration of up and down smooths is required, it can be
1888 set, its transpose will be used for the other.
1890 It is possible for these interpolation operations to be matrix-free (see
1905 The `residual()` function normally does not need to be set if one’s
1928 `-mg_levels_ksp_type cg` will cause the CG method to be used as the
1931 ILU preconditioner to be used on each level with two levels of fill in
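The `-mg_levels_*` smoother options quoted above have a per-level programmatic analogue; a sketch assuming `pc` is a `PCMG` preconditioner and `l` a valid level number:

```c
KSP smooth;

/* Fetch the smoother on level l and set its method, similar in effect
   (for that level) to -mg_levels_ksp_type cg */
PetscCall(PCMGGetSmoother(pc, l, &smooth));
PetscCall(KSPSetType(smooth, KSPCG));
```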
1948 unknowns, the matrix can be written as
1961 process, the blocks may be stored in one block followed by another
1988 block must be the same size. Matrices obtained with `DMCreateMatrix()`
1990 matrices can also be stored using the `MATNEST` format, which holds
1994 better formats (e.g., `MATBAIJ` or `MATSBAIJ`) can be used for the
2002 interlaced then `PCFieldSplitSetFields()` can be called repeatedly to
2004 `PCFieldSplitSetIS()` can be used to indicate exactly which
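Defining splits with `PCFieldSplitSetIS()` can be sketched as follows, assuming `pc` comes from `KSPGetPC()` and `is0`, `is1` are previously created `IS` objects selecting each field's rows:

```c
/* Two named splits, each given by an index set over the global rows */
PetscCall(PCSetType(pc, PCFIELDSPLIT));
PetscCall(PCFieldSplitSetIS(pc, "0", is0));
PetscCall(PCFieldSplitSetIS(pc, "1", is1));
```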
2010 diagonal blocks to be found, one associated with all rows/columns that
2017 …lds, the first consists of all rows with a nonzero on the diagonal, and the second will be all rows
2033 …For symmetric problems where `KSPCG` is used `symmetric_multiplicative` must be used instead of `m…
2035 …Schur complement, but when it works well can be extremely effective. See `PCFieldSplitSetType()`. …
2192 These can be accessed with
2209 $A_{01},A_{10}$ etc. to be extracted out of `Amat`.
2314 obtained by dropping some of the terms; these can be obtained with
2351 cannot be built out of such matrices. Instead, you can *assemble* an
2401 be handled separately, since they are such a common case. Create a
2420 The `Amat` should be the *first* matrix argument used with
2587 be found by specifying `-help` at runtime.
2615 `MatConvert()` cannot be called on matrices that have already been
2622 to be run as both a single process or with multiple processes, depending
2625 The call for the incorrect type will simply be ignored without any harm
2634 MPI process (with or without OpenMP). The application code must be built with MPI and must call
2641 The program must then be launched using the standard approaches for launching MPI programs with the…
2643 …xample `-ksp_type cg -ksp_monitor -pc_type bjacobi -ksp_view`. The solver options cannot be set via
2649 matrix has 1,000 rows and columns, the solution will not be parallelized by default. One can use the…
2657 in the computation time; thus it is crucial to understand what phases of a computation must be para…