xref: /petsc/doc/manual/ksp.md (revision 4e8208cbcbc709572b8abe32f33c78b69c819375)
(ch_ksp)=

# KSP: Linear System Solvers

The `KSP` object is the heart of PETSc, because it provides uniform
and efficient access to all of the package’s linear system solvers,
including parallel and sequential, direct and iterative. `KSP` is
intended for solving systems of the form

$$
A x = b,
$$ (eq_axeqb)

where $A$ denotes the matrix representation of a linear operator,
$b$ is the right-hand-side vector, and $x$ is the solution
vector. `KSP` uses the same calling sequence for both direct and
iterative solution of a linear system. In addition, particular solution
techniques and their associated options can be selected at runtime.

`KSP` can also be used to solve least squares problems, using, for example, `KSPLSQR`. See
`PETSCREGRESSORLINEAR` for tools focusing on linear regression.

The combination of a Krylov subspace method and a preconditioner is at
the center of most modern numerical codes for the iterative solution of
linear systems. Many textbooks (e.g., {cite}`fgn`, {cite}`vandervorst2003`, or {cite}`saad2003`) provide an
overview of the theory of such methods.
The `KSP` package, discussed in
{any}`sec_ksp`, provides many popular Krylov subspace
iterative methods; the `PC` module, described in
{any}`sec_pc`, includes a variety of preconditioners.

(sec_usingksp)=

## Using KSP

To solve a linear system with `KSP`, one must first create a solver
context with the command

```
KSPCreate(MPI_Comm comm,KSP *ksp);
```

Here `comm` is the MPI communicator and `ksp` is the newly formed
solver context. Before actually solving a linear system with `KSP`,
the user must call the following routine to set the matrices associated
with the linear system:

```
KSPSetOperators(KSP ksp,Mat Amat,Mat Pmat);
```

The argument `Amat`, representing the matrix that defines the linear
system, is a symbolic placeholder for any kind of matrix or operator. In
particular, `KSP` *does* support matrix-free methods. The routine
`MatCreateShell()` in {any}`sec_matrixfree`
provides further information regarding matrix-free methods. Typically,
the matrix from which the preconditioner is to be constructed, `Pmat`,
is the same as the matrix that defines the linear system, `Amat`;
however, occasionally these matrices differ (for instance, when a
matrix used to compute the preconditioner is obtained from a lower order method than that
employed to form the linear system matrix).

Much of the power of `KSP` can be accessed through the single routine

```
KSPSetFromOptions(KSP ksp);
```

This routine accepts the option `-help` as well as any of
the `KSP` and `PC` options discussed below. To solve a linear
system, one provides the right-hand-side and solution vectors to the
command

```
KSPSolve(KSP ksp,Vec b,Vec x);
```

where `b` and `x` respectively denote the right-hand side and
solution vectors. On return, the iteration number at which the iterative
process stopped can be obtained using

```
KSPGetIterationNumber(KSP ksp, PetscInt *its);
```

Note that this does not state that the method converged at this
iteration: it can also have reached the maximum number of iterations, or
have diverged.

{any}`sec_convergencetests` gives more details
regarding convergence testing. Note that multiple linear solves can be
performed by the same `KSP` context. Once the `KSP` context is no
longer needed, it should be destroyed with the command

```
KSPDestroy(KSP *ksp);
```

The above procedure is sufficient for general use of the `KSP`
package. One additional step is required for users who wish to customize
certain preconditioners (e.g., see {any}`sec_bjacobi`) or
to log certain performance data using the PETSc profiling facilities (as
discussed in {any}`ch_profiling`). In this case, the user can
optionally explicitly call

```
KSPSetUp(KSP ksp);
```

before calling `KSPSolve()` to perform any setup required for the
linear solvers. The explicit call of this routine enables the separate
profiling of any computations performed during the set up phase, such
as incomplete factorization for the ILU preconditioner.

The default solver within `KSP` is restarted GMRES, `KSPGMRES`, preconditioned for
the uniprocess case with ILU(0), and for the multiprocess case with the
block Jacobi method (with one block per process, each of which is solved
with ILU(0)). A variety of other solvers and options are also available.
To allow application programmers to set any of the preconditioner or
Krylov subspace options directly within the code, we provide routines
that extract the `PC` and `KSP` contexts,

```
KSPGetPC(KSP ksp,PC *pc);
```

The application programmer can then directly call any of the `PC` or
`KSP` routines to modify the corresponding default options.

To solve a linear system with a direct solver (supported by
PETSc for sequential matrices, and by several external solvers through
PETSc interfaces, see {any}`sec_externalsol`) one may use
the options `-ksp_type` `preonly` (or the equivalent `-ksp_type` `none`)
`-pc_type` `lu` or `-pc_type` `cholesky` (see below).

By default, if a direct solver is used, the factorization is *not* done
in-place. This approach spares the user the unexpected surprise of a
corrupted matrix after a linear solve. The routine
`PCFactorSetUseInPlace()`, discussed below, causes factorization to be
done in-place.

## Solving Successive Linear Systems

When solving multiple linear systems of the same size with the same
method, several options are available. To solve successive linear
systems having the *same* matrix from which to construct the preconditioner (i.e., the same data
structure with exactly the same matrix elements) but different
right-hand-side vectors, the user should simply call `KSPSolve()`
multiple times. The preconditioner setup operations (e.g., factorization
for ILU) will be done during the first call to `KSPSolve()` only; such
operations will *not* be repeated for successive solves.

To solve successive linear systems that have *different* matrix values, because you
have changed the matrix values in the `Mat` objects you passed to `KSPSetOperators()`,
still simply call `KSPSolve()`. In this case the preconditioner will be recomputed
automatically. Use the option `-ksp_reuse_preconditioner true`, or call
`KSPSetReusePreconditioner()`, to reuse the previously computed preconditioner.
For many problems, if the matrix changes values only slightly, reusing the
old preconditioner can be more efficient.

If you wish to reuse the `KSP` with a differently sized matrix and vectors, you must
call `KSPReset()` before calling `KSPSetOperators()` with the new matrix.

(sec_ksp)=

## Krylov Methods

The Krylov subspace methods accept a number of options, many of which
are discussed below. First, to set the Krylov subspace method that is to
be used, one calls the command

```
KSPSetType(KSP ksp,KSPType method);
```

The type can be one of `KSPRICHARDSON`, `KSPCHEBYSHEV`, `KSPCG`,
`KSPGMRES`, `KSPTCQMR`, `KSPBCGS`, `KSPCGS`, `KSPTFQMR`,
`KSPCR`, `KSPLSQR`, `KSPBICG`, `KSPPREONLY` (or the equivalent `KSPNONE`), or others; see
{any}`tab-kspdefaults` or the `KSPType` man page for more.
The `KSP` method can also be set with the options database command
`-ksp_type`, followed by one of the options `richardson`,
`chebyshev`, `cg`, `gmres`, `tcqmr`, `bcgs`, `cgs`,
`tfqmr`, `cr`, `lsqr`, `bicg`, `preonly` (or the equivalent `none`), or others (see
{any}`tab-kspdefaults` or the `KSPType` man page). There are
method-specific options. For instance, for the Richardson, Chebyshev, and
GMRES methods:

```
KSPRichardsonSetScale(KSP ksp,PetscReal scale);
KSPChebyshevSetEigenvalues(KSP ksp,PetscReal emax,PetscReal emin);
KSPGMRESSetRestart(KSP ksp,PetscInt max_steps);
```

The default parameter values are
`scale=1.0, emax=0.01, emin=100.0`, and `max_steps=30`. The
GMRES restart and Richardson damping factor can also be set with the
options `-ksp_gmres_restart <n>` and
`-ksp_richardson_scale <factor>`.

The default technique for orthogonalization of the Krylov vectors in
GMRES is the unmodified (classical) Gram-Schmidt method, which can be
set with

```
KSPGMRESSetOrthogonalization(KSP ksp,KSPGMRESClassicalGramSchmidtOrthogonalization);
```

or the options database command `-ksp_gmres_classicalgramschmidt`. By
default this will *not* use iterative refinement to improve the
stability of the orthogonalization. This can be changed with the option

```
KSPGMRESSetCGSRefinementType(KSP ksp,KSPGMRESCGSRefinementType type)
```

or via the options database with

```
-ksp_gmres_cgs_refinement_type <refine_never,refine_ifneeded,refine_always>
```

The values for `KSPGMRESCGSRefinementType` are
`KSP_GMRES_CGS_REFINE_NEVER`, `KSP_GMRES_CGS_REFINE_IFNEEDED`,
and `KSP_GMRES_CGS_REFINE_ALWAYS`.

One can also use modified Gram-Schmidt, by using the orthogonalization
routine `KSPGMRESModifiedGramSchmidtOrthogonalization()` or by using
the command line option `-ksp_gmres_modifiedgramschmidt`.

For the conjugate gradient method with complex numbers, there are two
slightly different algorithms depending on whether the matrix is
Hermitian symmetric or truly symmetric (the default is to assume that it
is Hermitian symmetric). To indicate that it is symmetric, one uses the
command

```
KSPCGSetType(ksp,KSP_CG_SYMMETRIC);
```

Note that this option is not valid for all matrices.

Some `KSP` types do not support preconditioning. For instance,
the CGLS algorithm does not involve a preconditioner; any preconditioner
set to work with the `KSP` object is ignored if `KSPCGLS` was
selected.

By default, `KSP` assumes an initial guess of zero by zeroing the
initial value for the solution vector that is given; this zeroing is
done at the call to `KSPSolve()`. To use a nonzero initial guess, the
user *must* call

```
KSPSetInitialGuessNonzero(KSP ksp,PetscBool flg);
```

(sec_ksppc)=

### Preconditioning within KSP

Since the rate of convergence of Krylov projection methods for a
particular linear system is strongly dependent on its spectrum,
preconditioning is typically used to alter the spectrum and hence
accelerate the convergence rate of iterative techniques. Preconditioning
can be applied to the system {eq}`eq_axeqb` by

$$
(M_L^{-1} A M_R^{-1}) \, (M_R x) = M_L^{-1} b,
$$ (eq_prec)

where $M_L$ and $M_R$ indicate preconditioning matrices (or,
matrices from which the preconditioner is to be constructed). If
$M_L = I$ in {eq}`eq_prec`, right preconditioning
results, and the residual of {eq}`eq_axeqb`,

$$
r \equiv b - Ax = b - A M_R^{-1} \, M_R x,
$$

is preserved. In contrast, the residual is altered for left
($M_R = I$) and symmetric preconditioning, as given by

$$
r_L \equiv M_L^{-1} b - M_L^{-1} A x = M_L^{-1} r.
$$

By default, most KSP implementations use left preconditioning. Some more
naturally use other options, though. For instance, `KSPQCG` defaults
to use symmetric preconditioning and `KSPFGMRES` uses right
preconditioning by default. Right preconditioning can be activated for
some methods by using the options database command
`-ksp_pc_side right` or calling the routine

```
KSPSetPCSide(ksp,PC_RIGHT);
```

Attempting to use right preconditioning for a method that does not
currently support it results in an error message of the form

```none
KSPSetUp_Richardson:No right preconditioning for KSPRICHARDSON
```

```{eval-rst}
.. list-table:: KSP Objects
  :name: tab-kspdefaults
  :header-rows: 1

  * - Method
    - KSPType
    - Options Database
  * - Richardson
    - ``KSPRICHARDSON``
    - ``richardson``
  * - Chebyshev
    - ``KSPCHEBYSHEV``
    - ``chebyshev``
  * - Conjugate Gradient :cite:`hs:52`
    - ``KSPCG``
    - ``cg``
  * - Pipelined Conjugate Gradients :cite:`ghyselsvanroose2014`
    - ``KSPPIPECG``
    - ``pipecg``
  * - Pipelined Conjugate Gradients (Gropp)
    - ``KSPGROPPCG``
    - ``groppcg``
  * - Pipelined Conjugate Gradients with Residual Replacement
    - ``KSPPIPECGRR``
    - ``pipecgrr``
  * - Conjugate Gradients for the Normal Equations
    - ``KSPCGNE``
    - ``cgne``
  * - Flexible Conjugate Gradients :cite:`flexiblecg`
    - ``KSPFCG``
    - ``fcg``
  * - Pipelined, Flexible Conjugate Gradients :cite:`sananschneppmay2016`
    - ``KSPPIPEFCG``
    - ``pipefcg``
  * - Conjugate Gradients for Least Squares
    - ``KSPCGLS``
    - ``cgls``
  * - Conjugate Gradients with Constraint (1)
    - ``KSPNASH``
    - ``nash``
  * - Conjugate Gradients with Constraint (2)
    - ``KSPSTCG``
    - ``stcg``
  * - Conjugate Gradients with Constraint (3)
    - ``KSPGLTR``
    - ``gltr``
  * - Conjugate Gradients with Constraint (4)
    - ``KSPQCG``
    - ``qcg``
  * - BiConjugate Gradient
    - ``KSPBICG``
    - ``bicg``
  * - BiCGSTAB :cite:`v:92`
    - ``KSPBCGS``
    - ``bcgs``
  * - Improved BiCGSTAB
    - ``KSPIBCGS``
    - ``ibcgs``
  * - QMRCGSTAB :cite:`chan1994qmrcgs`
    - ``KSPQMRCGS``
    - ``qmrcgs``
  * - Flexible BiCGSTAB
    - ``KSPFBCGS``
    - ``fbcgs``
  * - Flexible BiCGSTAB (variant)
    - ``KSPFBCGSR``
    - ``fbcgsr``
  * - Enhanced BiCGSTAB(L)
    - ``KSPBCGSL``
    - ``bcgsl``
  * - Minimal Residual Method :cite:`paige.saunders:solution`
    - ``KSPMINRES``
    - ``minres``
  * - Generalized Minimal Residual :cite:`saad.schultz:gmres`
    - ``KSPGMRES``
    - ``gmres``
  * - Flexible Generalized Minimal Residual :cite:`saad1993`
    - ``KSPFGMRES``
    - ``fgmres``
  * - Deflated Generalized Minimal Residual
    - ``KSPDGMRES``
    - ``dgmres``
  * - Pipelined Generalized Minimal Residual :cite:`ghyselsashbymeerbergenvanroose2013`
    - ``KSPPGMRES``
    - ``pgmres``
  * - Pipelined, Flexible Generalized Minimal Residual :cite:`sananschneppmay2016`
    - ``KSPPIPEFGMRES``
    - ``pipefgmres``
  * - Generalized Minimal Residual with Accelerated Restart
    - ``KSPLGMRES``
    - ``lgmres``
  * - Conjugate Residual :cite:`eisenstat1983variational`
    - ``KSPCR``
    - ``cr``
  * - Generalized Conjugate Residual
    - ``KSPGCR``
    - ``gcr``
  * - Pipelined Conjugate Residual
    - ``KSPPIPECR``
    - ``pipecr``
  * - Pipelined, Flexible Conjugate Residual :cite:`sananschneppmay2016`
    - ``KSPPIPEGCR``
    - ``pipegcr``
  * - FETI-DP
    - ``KSPFETIDP``
    - ``fetidp``
  * - Conjugate Gradient Squared :cite:`so:89`
    - ``KSPCGS``
    - ``cgs``
  * - Transpose-Free Quasi-Minimal Residual (1) :cite:`f:93`
    - ``KSPTFQMR``
    - ``tfqmr``
  * - Transpose-Free Quasi-Minimal Residual (2)
    - ``KSPTCQMR``
    - ``tcqmr``
  * - Least Squares Method
    - ``KSPLSQR``
    - ``lsqr``
  * - Symmetric LQ Method :cite:`paige.saunders:solution`
    - ``KSPSYMMLQ``
    - ``symmlq``
  * - TSIRM
    - ``KSPTSIRM``
    - ``tsirm``
  * - Python Shell
    - ``KSPPYTHON``
    - ``python``
  * - Shell for no ``KSP`` method
    - ``KSPNONE``
    - ``none``

```

Note: the bi-conjugate gradient method requires application of both the
matrix and its transpose plus the preconditioner and its transpose.
Currently not all matrices and preconditioners provide this support and
thus `KSPBICG` cannot always be used.

Note: PETSc provides the FETI-DP (Finite Element Tearing and
Interconnecting Dual-Primal) method as a `KSP` implementation since it recasts the
original problem into a constrained minimization one with Lagrange
multipliers. The only matrix type supported is `MATIS`. Support for
saddle point problems is provided. See the man page for `KSPFETIDP` for
further details.

(sec_convergencetests)=

### Convergence Tests

The default convergence test, `KSPConvergedDefault()`, uses the $l_2$ norm of the preconditioned residual $B(b - A x)$ or of the unpreconditioned residual $b - Ax$, depending on the `KSPType` and the value of `KSPNormType` set with `KSPSetNormType()`. For `KSPCG` and `KSPGMRES` the default is the norm of the preconditioned residual.
The preconditioned residual is used by default for
convergence testing of all left-preconditioned `KSP` methods. For the
conjugate gradient, Richardson, and Chebyshev methods the true residual
can be used by the options database command
`-ksp_norm_type unpreconditioned` or by calling the routine

```
KSPSetNormType(ksp, KSP_NORM_UNPRECONDITIONED);
```

`KSPCG` also supports using the natural norm induced by the symmetric positive-definite
matrix that defines the linear system with the options database command `-ksp_norm_type natural` or by calling the routine

```
KSPSetNormType(ksp, KSP_NORM_NATURAL);
```

Convergence (or divergence) is decided
by three quantities: the decrease of the residual norm relative to the
norm of the right-hand side, `rtol`, the absolute size of the residual
norm, `atol`, and the relative increase in the residual, `dtol`.
Convergence is detected at iteration $k$ if

$$
\| r_k \|_2 < \max ( \text{rtol} * \| b \|_2, \text{atol}),
$$

where $r_k = b - A x_k$. Divergence is detected if

$$
\| r_k \|_2 > \text{dtol} * \| b \|_2.
$$

These parameters, as well as the maximum number of allowable iterations,
can be set with the routine

```
KSPSetTolerances(KSP ksp,PetscReal rtol,PetscReal atol,PetscReal dtol,PetscInt maxits);
```

The user can retain the current value of any of these parameters by
specifying `PETSC_CURRENT` as the corresponding tolerance; the
defaults are `rtol=1e-5`, `atol=1e-50`, `dtol=1e5`, and
`maxits=1e4`. Using `PETSC_DETERMINE` will set the parameters back to their
initial values from when the object's type was set. These parameters can also be set from the options
database with the commands `-ksp_rtol` `<rtol>`, `-ksp_atol`
`<atol>`, `-ksp_divtol` `<dtol>`, and `-ksp_max_it` `<its>`.

In addition to providing an interface to a simple convergence test,
`KSP` allows the application programmer the flexibility to provide
customized convergence-testing routines. The user can specify a
customized routine with the command

```
KSPSetConvergenceTest(KSP ksp, PetscErrorCode (*test)(KSP ksp, PetscInt it, PetscReal rnorm, KSPConvergedReason *reason, void *ctx), void *ctx, PetscCtxDestroyFn *destroy);
```

The final routine argument, `ctx`, is an optional context for private
data for the user-defined convergence routine, `test`. Other `test`
routine arguments are the iteration number, `it`, and the residual’s
norm, `rnorm`. The routine for detecting convergence,
`test`, should set `reason` to positive for convergence, 0 for no
convergence, and negative for failure to converge. A full list of
possible values is given in the `KSPConvergedReason` manual page.
You can use `KSPGetConvergedReason()` after
`KSPSolve()` to see why convergence/divergence was detected.

(sec_kspmonitor)=

### Convergence Monitoring

By default, the Krylov solvers, `KSPSolve()`, run silently without displaying
information about the iterations. The user can indicate that the norms
of the residuals should be displayed at each iteration by using `-ksp_monitor` in
the options database. To display the residual norms in a graphical
window (running under X Windows), one should use
`-ksp_monitor draw::draw_lg`. Application programmers can also
provide their own routines to perform the monitoring by using the
command

```
KSPMonitorSet(KSP ksp, PetscErrorCode (*mon)(KSP ksp, PetscInt it, PetscReal rnorm, void *ctx), void *ctx, PetscCtxDestroyFn *mondestroy);
5377f296bb3SBarry Smith```

The final routine argument, `ctx`, is an optional context for private
data for the user-defined monitoring routine, `mon`. Other `mon`
routine arguments are the iteration number (`it`) and the residual’s
norm (`rnorm`), as discussed above in {any}`sec_convergencetests`.
A helpful routine within user-defined
monitors is `PetscObjectGetComm((PetscObject)ksp,MPI_Comm *comm)`,
which returns in `comm` the MPI communicator for the `KSP` context.
See {any}`sec_writing` for more discussion of the use of
MPI communicators within PETSc.
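
As an illustration, a minimal user-defined monitor might print the iteration
number and residual norm on the first process of the `KSP` communicator (the
function name `MyMonitor` is, of course, the user's own):

```
PetscErrorCode MyMonitor(KSP ksp, PetscInt it, PetscReal rnorm, PetscCtx ctx)
{
  MPI_Comm comm;

  PetscFunctionBeginUser;
  PetscCall(PetscObjectGetComm((PetscObject)ksp, &comm));
  PetscCall(PetscPrintf(comm, "iteration %" PetscInt_FMT ": residual norm %g\n", it, (double)rnorm));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

which would be registered with `KSPMonitorSet(ksp, MyMonitor, NULL, NULL)`.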

Many monitoring routines are supplied with PETSc, including

```
KSPMonitorResidual(KSP, PetscInt, PetscReal, PetscCtx);
KSPMonitorSingularValue(KSP, PetscInt, PetscReal, PetscCtx);
KSPMonitorTrueResidual(KSP, PetscInt, PetscReal, PetscCtx);
```

The default monitor simply prints an estimate of a norm of
the residual at each iteration. The routine
`KSPMonitorSingularValue()` is appropriate only for use with the
conjugate gradient method or GMRES, since it prints estimates of the
extreme singular values of the preconditioned operator at each
iteration, computed via the Lanczos or Arnoldi algorithms.

Since `KSPMonitorTrueResidual()` prints the true
residual at each iteration by actually computing the residual using the
formula $r = b - Ax$, the routine is slow and should be used only
for testing or convergence studies, not for timing. These `KSPSolve()` monitors may
be accessed with the command line options `-ksp_monitor`,
`-ksp_monitor_singular_value`, and `-ksp_monitor_true_residual`.

To employ the default graphical monitor, one should use the command
`-ksp_monitor draw::draw_lg`.

One can cancel hardwired monitoring routines for KSP at runtime with
`-ksp_monitor_cancel`.

### Understanding the Operator’s Spectrum

Since the convergence of Krylov subspace methods depends strongly on the
spectrum (eigenvalues) of the preconditioned operator, PETSc has
specific routines for eigenvalue approximation via the Arnoldi or
Lanczos iteration. First, before the linear solve one must call

```
KSPSetComputeEigenvalues(ksp,PETSC_TRUE);
```

Then after the `KSP` solve one calls

```
KSPComputeEigenvalues(KSP ksp,PetscInt n,PetscReal *realpart,PetscReal *complexpart,PetscInt *neig);
```

Here, `n` is the size of the two arrays and the eigenvalues are
inserted into those two arrays. `neig` is the number of eigenvalues
computed; this number depends on the size of the Krylov space generated
during the linear system solution; for GMRES it is never larger than the
`restart` parameter. There is an additional routine

```
KSPComputeEigenvaluesExplicitly(KSP ksp, PetscInt n,PetscReal *realpart,PetscReal *complexpart);
```

that is useful only for very small problems. It explicitly computes the
full representation of the preconditioned operator and calls LAPACK to
compute its eigenvalues. It should only be used for matrices of size up
to a couple hundred. The `PetscDrawSP*()` routines are very useful for
drawing scatter plots of the eigenvalues.

The eigenvalues may also be computed and displayed graphically with the
options database commands `-ksp_view_eigenvalues draw` and
`-ksp_view_eigenvalues_explicit draw`. Alternatively, they can be printed
to the screen as ASCII text via `-ksp_view_eigenvalues` and
`-ksp_view_eigenvalues_explicit`.
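
Putting these calls together, a sketch of estimating the spectrum during a
solve might read as follows (the array length of 100 is an arbitrary upper
bound chosen for this example):

```
PetscReal realpart[100], complexpart[100];
PetscInt  neig, i;

PetscCall(KSPSetComputeEigenvalues(ksp, PETSC_TRUE));
PetscCall(KSPSolve(ksp, b, x));
PetscCall(KSPComputeEigenvalues(ksp, 100, realpart, complexpart, &neig));
for (i = 0; i < neig; i++) PetscCall(PetscPrintf(PETSC_COMM_WORLD, "eigenvalue %" PetscInt_FMT ": %g + %g i\n", i, (double)realpart[i], (double)complexpart[i]));
```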

(sec_flexibleksp)=

### Flexible Krylov Methods

Standard Krylov methods require that the preconditioner be a linear operator; thus, for example, a standard `KSP` method
cannot use a `KSP` in its preconditioner, as is common in the block Jacobi method `PCBJACOBI`.
Flexible Krylov methods are a subset of methods that allow (with modest additional requirements
on memory) the preconditioner to be nonlinear. For example, they can be used with the `PCKSP` preconditioner.
The flexible `KSP` methods have the label "Flexible" in {any}`tab-kspdefaults`.

One can use `KSPMonitorDynamicTolerance()` to control the tolerances used by inner `KSP` solvers in `PCKSP`, `PCBJACOBI`, and `PCDEFLATION`.

In addition to supporting `PCKSP`, the flexible methods support `KSPFlexibleSetModifyPC()` to
allow the user to provide a callback function that changes the preconditioner at each Krylov iteration. Its calling sequence is as follows.

```
PetscErrorCode f(KSP ksp, PetscInt total_its, PetscInt its_since_restart, PetscReal res_norm, PetscCtx ctx);
```
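
For instance, a hypothetical callback that refreshes the preconditioner at
each restart could be sketched as follows (the function name and the choice
of when to rebuild are illustrative only):

```
PetscErrorCode MyModifyPC(KSP ksp, PetscInt total_its, PetscInt its_since_restart, PetscReal res_norm, PetscCtx ctx)
{
  PC pc;

  PetscFunctionBeginUser;
  PetscCall(KSPGetPC(ksp, &pc));
  if (its_since_restart == 0) PetscCall(PCSetUp(pc)); /* rebuild the preconditioner at each restart */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```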

(sec_pipelineksp)=

### Pipelined Krylov Methods

Standard Krylov methods have one or more global reductions resulting from the computations of inner products or norms in each iteration.
These reductions need to block until all MPI processes have received the results. For a large number of MPI processes (this number is machine dependent
but can be above 10,000 processes) this synchronization is very time consuming and can significantly slow the computation. Pipelined Krylov
methods overlap the reduction operations with local computations (generally the application of the matrix-vector products and preconditioners),
thus effectively "hiding" the time of the reductions. In addition, they may reduce the number of global synchronizations by rearranging the
computations so that some of them can be collapsed, e.g., two or more calls to `MPI_Allreduce()` may be combined into one call.
The pipelined `KSP` methods have the label "Pipeline" in {any}`tab-kspdefaults`.

Special configuration of MPI may be necessary for reductions to make asynchronous progress, which is important for
performance of pipelined methods. See {any}`doc_faq_pipelined` for details.

### Other KSP Options

To obtain the solution vector and right-hand side from a `KSP`
context, one uses

```
KSPGetSolution(KSP ksp,Vec *x);
KSPGetRhs(KSP ksp,Vec *rhs);
```

During the iterative process the solution may not yet have been
calculated or it may be stored in a different location. To access the
approximate solution during the iterative process, one uses the command

```
KSPBuildSolution(KSP ksp,Vec w,Vec *v);
```

where the solution is returned in `v`. The user can optionally provide
a vector in `w` as the location to store the vector; however, if `w`
is `NULL`, space allocated by PETSc in the `KSP` context is used.
One should not destroy this vector. For certain `KSP` methods (e.g.,
GMRES), the construction of the solution is expensive, while for many
others it doesn’t even require a vector copy.

Access to the residual is done in a similar way with the command

```
KSPBuildResidual(KSP ksp,Vec t,Vec w,Vec *v);
```

Again, for GMRES and certain other methods this is an expensive
operation.

(sec_pc)=

## Preconditioners

As discussed in {any}`sec_ksppc`, Krylov subspace methods
are typically used in conjunction with a preconditioner. To employ a
particular preconditioning method, the user can either select it from
the options database using input of the form `-pc_type <methodname>`
or set the method with the command

```
PCSetType(PC pc,PCType method);
```

In {any}`tab-pcdefaults` we summarize the basic
preconditioning methods supported in PETSc. See the `PCType` manual
page for a complete list.

The `PCSHELL` preconditioner allows users to provide their own
application-specific custom preconditioner.

The direct
preconditioner, `PCLU`, is, in fact, a direct solver for the linear
system that uses LU factorization. `PCLU` is included as a
preconditioner so that PETSc has a consistent interface among direct and
iterative linear solvers.

PETSc provides several domain decomposition methods/preconditioners, including
`PCASM`, `PCGASM`, `PCBDDC`, and `PCHPDDM`. In addition, PETSc provides
multiple multigrid solvers/preconditioners, including `PCMG`, `PCGAMG`, `PCHYPRE`,
and `PCML`. See further discussion below.

```{eval-rst}
.. list-table:: PETSc Preconditioners (partial list)
   :name: tab-pcdefaults
   :header-rows: 1

   * - Method
     - PCType
     - Options Database
   * - Jacobi
     - ``PCJACOBI``
     - ``jacobi``
   * - Block Jacobi
     - ``PCBJACOBI``
     - ``bjacobi``
   * - SOR (and SSOR)
     - ``PCSOR``
     - ``sor``
   * - SOR with Eisenstat trick
     - ``PCEISENSTAT``
     - ``eisenstat``
   * - Incomplete Cholesky
     - ``PCICC``
     - ``icc``
   * - Incomplete LU
     - ``PCILU``
     - ``ilu``
   * - Additive Schwarz
     - ``PCASM``
     - ``asm``
   * - Generalized Additive Schwarz
     - ``PCGASM``
     - ``gasm``
   * - Algebraic Multigrid
     - ``PCGAMG``
     - ``gamg``
   * - Balancing Domain Decomposition by Constraints
     - ``PCBDDC``
     - ``bddc``
   * - Linear solver
     - ``PCKSP``
     - ``ksp``
   * - Combination of preconditioners
     - ``PCCOMPOSITE``
     - ``composite``
   * - LU
     - ``PCLU``
     - ``lu``
   * - Cholesky
     - ``PCCHOLESKY``
     - ``cholesky``
   * - No preconditioning
     - ``PCNONE``
     - ``none``
   * - Shell for user-defined ``PC``
     - ``PCSHELL``
     - ``shell``
```

Each preconditioner may have associated with it a set of options, which
can be set with routines and options database commands provided for this
purpose. Such routine names and commands are all of the form
`PC<TYPE><Option>` and `-pc_<type>_<option> [value]`. A complete
list can be found by consulting the `PCType` manual page; we discuss
just a few in the sections below.

(sec_ilu_icc)=

### ILU and ICC Preconditioners

Some of the options for the ILU preconditioner are

```
PCFactorSetLevels(PC pc,PetscInt levels);
PCFactorSetReuseOrdering(PC pc,PetscBool flag);
PCFactorSetDropTolerance(PC pc,PetscReal dt,PetscReal dtcol,PetscInt dtcount);
PCFactorSetReuseFill(PC pc,PetscBool flag);
PCFactorSetUseInPlace(PC pc,PetscBool flg);
PCFactorSetAllowDiagonalFill(PC pc,PetscBool flg);
```

When repeatedly solving linear systems with the same `KSP` context,
one can reuse some information computed during the first linear solve.
In particular, `PCFactorSetReuseOrdering()` causes the ordering (for
example, set with `-pc_factor_mat_ordering_type <order>`) computed
in the first factorization to be reused for later factorizations.
`PCFactorSetUseInPlace()` is often used with `PCASM` or
`PCBJACOBI` when zero fill is used; since it reuses the matrix space
to store the incomplete factorization, it saves memory and copying time.
Note that in-place factorization is not appropriate with any ordering
besides natural and cannot be used with the drop tolerance
factorization. These options may be set in the database with

- `-pc_factor_levels <levels>`
- `-pc_factor_reuse_ordering`
- `-pc_factor_reuse_fill`
- `-pc_factor_in_place`
- `-pc_factor_nonzeros_along_diagonal`
- `-pc_factor_diagonal_fill`

See {any}`sec_symbolfactor` for information on
preallocation of memory for anticipated fill during factorization. By
alleviating the considerable overhead for dynamic memory allocation,
such tuning can significantly enhance performance.

PETSc supports incomplete factorization preconditioners
for several sequential matrix types (for example
`MATSEQAIJ`, `MATSEQBAIJ`, and `MATSEQSBAIJ`).

### SOR and SSOR Preconditioners

PETSc provides only a sequential SOR preconditioner; it can only be
used with sequential matrices or as the subblock preconditioner when
using block Jacobi or ASM preconditioning (see below).

The options for SOR preconditioning with `PCSOR` are

```
PCSORSetOmega(PC pc,PetscReal omega);
PCSORSetIterations(PC pc,PetscInt its,PetscInt lits);
PCSORSetSymmetric(PC pc,MatSORType type);
```

The first of these commands sets the relaxation factor for successive
over (under) relaxation. The second command sets the number of inner
iterations `its` and local iterations `lits` (the number of
smoothing sweeps on a process before doing a ghost point update from the
other processes) to use between steps of the Krylov space method. The
total number of SOR sweeps is given by `its*lits`. The third command
sets the kind of SOR sweep, where the argument `type` can be one of
`SOR_FORWARD_SWEEP`, `SOR_BACKWARD_SWEEP` or
`SOR_SYMMETRIC_SWEEP`, the default being `SOR_FORWARD_SWEEP`.
Setting the type to be `SOR_SYMMETRIC_SWEEP` produces the SSOR method.
In addition, each process can locally and independently perform the
specified variant of SOR with the types `SOR_LOCAL_FORWARD_SWEEP`,
`SOR_LOCAL_BACKWARD_SWEEP`, and `SOR_LOCAL_SYMMETRIC_SWEEP`. These
variants can also be set with the options `-pc_sor_omega <omega>`,
`-pc_sor_its <its>`, `-pc_sor_lits <lits>`, `-pc_sor_backward`,
`-pc_sor_symmetric`, `-pc_sor_local_forward`,
`-pc_sor_local_backward`, and `-pc_sor_local_symmetric`.

The Eisenstat trick {cite}`eisenstat81` for SSOR
preconditioning can be employed with the method `PCEISENSTAT`
(`-pc_type eisenstat`). By using both left and right
preconditioning of the linear system, this variant of SSOR requires
about half of the floating-point operations of conventional SSOR. The
option `-pc_eisenstat_no_diagonal_scaling` (or the routine
`PCEisenstatSetNoDiagonalScaling()`) turns off diagonal scaling in
conjunction with the Eisenstat SSOR method, while the option
`-pc_eisenstat_omega <omega>` (or the routine
`PCEisenstatSetOmega(PC pc,PetscReal omega)`) sets the SSOR relaxation
coefficient, `omega`, as discussed above.

(sec_factorization)=

### LU Factorization

The LU preconditioner provides several options. The first, given by the
command

```
PCFactorSetUseInPlace(PC pc,PetscBool flg);
```

causes the factorization to be performed in-place and hence destroys the
original matrix. The options database variant of this command is
`-pc_factor_in_place`. Another direct preconditioner option is
selecting the ordering of equations with the command
`-pc_factor_mat_ordering_type <ordering>`. The possible orderings are

- `MATORDERINGNATURAL` - Natural
- `MATORDERINGND` - Nested Dissection
- `MATORDERING1WD` - One-way Dissection
- `MATORDERINGRCM` - Reverse Cuthill-McKee
- `MATORDERINGQMD` - Quotient Minimum Degree

These orderings can also be set through the options database by
specifying one of the following: `-pc_factor_mat_ordering_type`
`natural`, or `nd`, or `1wd`, or `rcm`, or `qmd`. In addition,
see `MatGetOrdering()`, discussed in {any}`sec_matfactor`.

The sparse LU factorizations provided in PETSc do not perform pivoting
for numerical stability (since they are designed to preserve nonzero
structure), and thus occasionally an LU factorization will fail with a
zero pivot when, in fact, the matrix is non-singular. The option
`-pc_factor_nonzeros_along_diagonal <tol>` will often help eliminate
the zero pivot, by preprocessing the column ordering to remove small
values from the diagonal. Here, `tol` is an optional tolerance to
decide if a value is nonzero; by default it is `1.e-10`.

In addition, {any}`sec_symbolfactor` provides information
on preallocation of memory for anticipated fill during factorization.
Such tuning can significantly enhance performance, since it eliminates
the considerable overhead for dynamic memory allocation.

(sec_bjacobi)=

### Block Jacobi and Overlapping Additive Schwarz Preconditioners

The block Jacobi and overlapping additive Schwarz (domain decomposition) methods in PETSc are
supported in parallel; however, only the uniprocess version of the block
Gauss-Seidel method is available. By default, the PETSc
implementations of these methods employ ILU(0) factorization on each
individual block (that is, the default solver on each subblock is
`PCType=PCILU`, `KSPType=KSPPREONLY`, or equivalently `KSPType=KSPNONE`); the user can set alternative
linear solvers via the options `-sub_ksp_type` and `-sub_pc_type`.
In fact, all of the `KSP` and `PC` options can be applied to the
subproblems by inserting the prefix `-sub_` at the beginning of the
option name. These options database commands set the particular options
for *all* of the blocks within the global problem. In addition, the
routines

```
PCBJacobiGetSubKSP(PC pc,PetscInt *n_local,PetscInt *first_local,KSP **subksp);
PCASMGetSubKSP(PC pc,PetscInt *n_local,PetscInt *first_local,KSP **subksp);
```

extract the `KSP` context for each local block. The argument
`n_local` is the number of blocks on the calling process, and
`first_local` indicates the global number of the first block on the
process. The blocks are numbered successively by processes from zero
through $b_g-1$, where $b_g$ is the number of global blocks.
The array of `KSP` contexts for the local blocks is given by
`subksp`. This mechanism enables the user to set different solvers for
the various blocks. To set the appropriate data structures, the user
*must* explicitly call `KSPSetUp()` before calling
`PCBJacobiGetSubKSP()` or `PCASMGetSubKSP()`. For further details,
see
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ksp/ksp/tutorials/ex7.c.html">KSP Tutorial ex7</a>
or
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ksp/ksp/tutorials/ex8.c.html">KSP Tutorial ex8</a>.
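
For illustration, a sketch that tightens the relative tolerance on every
local block of a block Jacobi preconditioner (assuming `ksp` has already
been configured with `-pc_type bjacobi`):

```
KSP     *subksp;
PetscInt nlocal, first, i;
PC       pc;

PetscCall(KSPGetPC(ksp, &pc));
PetscCall(KSPSetUp(ksp)); /* must be called before PCBJacobiGetSubKSP() */
PetscCall(PCBJacobiGetSubKSP(pc, &nlocal, &first, &subksp));
for (i = 0; i < nlocal; i++) PetscCall(KSPSetTolerances(subksp[i], 1.e-6, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT));
```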

The block Jacobi, block Gauss-Seidel, and additive Schwarz
preconditioners allow the user to set the number of blocks into which
the problem is divided. The options database commands to set this value
are `-pc_bjacobi_blocks n` and `-pc_bgs_blocks n`, and,
within a program, the corresponding routines are

```
PCBJacobiSetTotalBlocks(PC pc,PetscInt blocks,PetscInt *size);
PCASMSetTotalSubdomains(PC pc,PetscInt n,IS *is,IS *islocal);
PCASMSetType(PC pc,PCASMType type);
```

The optional argument `size` is an array indicating the size of each
block. Currently, for certain parallel matrix formats, only a single
block per process is supported. However, the `MATMPIAIJ` and
`MATMPIBAIJ` formats support the use of general blocks as long as no
blocks are shared among processes. The `is` argument contains the
index sets that define the subdomains.

The object `PCASMType` is one of `PC_ASM_BASIC`,
`PC_ASM_INTERPOLATE`, `PC_ASM_RESTRICT`, or `PC_ASM_NONE` and may
also be set with the options database command
`-pc_asm_type [basic, interpolate, restrict, none]`. The type `PC_ASM_BASIC` (or
`-pc_asm_type basic`) corresponds to the standard additive Schwarz
method that uses the full restriction and interpolation operators. The
type `PC_ASM_RESTRICT` (or `-pc_asm_type restrict`) uses a full
restriction operator, but during the interpolation process ignores the
off-process values. Similarly, `PC_ASM_INTERPOLATE` (or
`-pc_asm_type interpolate`) uses a limited restriction process in
conjunction with a full interpolation, while `PC_ASM_NONE` (or
`-pc_asm_type none`) ignores off-process values for both
restriction and interpolation. The ASM types with limited restriction or
interpolation were suggested by Xiao-Chuan Cai and Marcus Sarkis
{cite}`cs99`. `PC_ASM_RESTRICT` is the PETSc default, as
it saves substantial communication and for many problems has the added
benefit of requiring fewer iterations for convergence than the standard
additive Schwarz method.

The user can also set the number of blocks and sizes on a per-process
basis with the commands

```
PCBJacobiSetLocalBlocks(PC pc,PetscInt blocks,PetscInt *size);
PCASMSetLocalSubdomains(PC pc,PetscInt N,IS *is,IS *islocal);
```

For the ASM preconditioner one can use the following command to set the
overlap to compute in constructing the subdomains.

```
PCASMSetOverlap(PC pc,PetscInt overlap);
```

The overlap defaults to 1, so if one desires that no additional overlap
be computed beyond what may have been set with a call to
`PCASMSetTotalSubdomains()` or `PCASMSetLocalSubdomains()`, then
`overlap` must be set to 0. In particular, if one does *not*
explicitly set the subdomains in an application code, then all overlap
would be computed internally by PETSc, and using an overlap of 0 would
result in an ASM variant that is equivalent to the block Jacobi
preconditioner. Note that one can define initial index sets `is` with
*any* overlap via `PCASMSetTotalSubdomains()` or
`PCASMSetLocalSubdomains()`; the routine `PCASMSetOverlap()` merely
allows PETSc to extend that overlap further if desired.

`PCGASM` is a generalization of `PCASM` that allows
the user to specify subdomains that span multiple MPI processes. This can be
useful for problems where small subdomains result in poor convergence.
To be effective, the multi-processor subproblems must be solved using a
sufficiently strong subsolver, such as `PCLU`, for which `SuperLU_DIST` or a
similar parallel direct solver could be used; other choices may include
a multigrid solver on the subdomains.

The interface for `PCGASM` is similar to that of `PCASM`. In
particular, `PCGASMType` is one of `PC_GASM_BASIC`,
`PC_GASM_INTERPOLATE`, `PC_GASM_RESTRICT`, or `PC_GASM_NONE`. These
options have the same meaning as with `PCASM` and may also be set with
the options database command `-pc_gasm_type [basic, interpolate,
restrict, none]`.

10277f296bb3SBarry SmithUnlike `PCASM`, however, `PCGASM` allows the user to define
10287f296bb3SBarry Smithsubdomains that span multiple MPI processes. The simplest way to do this is
10297f296bb3SBarry Smithusing a call to `PCGASMSetTotalSubdomains(PC pc,PetscInt N)` with
10307f296bb3SBarry Smiththe total number of subdomains `N` that is smaller than the MPI
10317f296bb3SBarry Smithcommunicator `size`. In this case `PCGASM` will coalesce `size/N`
10327f296bb3SBarry Smithconsecutive single-rank subdomains into a single multi-rank subdomain.
10337f296bb3SBarry SmithThe single-rank subdomains contain the degrees of freedom corresponding
10347addb90fSBarry Smithto the locally-owned rows of the `PCGASM` matrix used to compute the preconditioner –
10357f296bb3SBarry Smiththese are the subdomains `PCASM` and `PCGASM` use by default.
10367f296bb3SBarry Smith
10377f296bb3SBarry SmithEach of the multirank subdomain subproblems is defined on the
10387f296bb3SBarry Smithsubcommunicator that contains the coalesced `PCGASM` processes. In general
10397f296bb3SBarry Smiththis might not result in a very good subproblem if the single-rank
10407f296bb3SBarry Smithproblems corresponding to the coalesced processes are not very strongly
10417f296bb3SBarry Smithconnected. In the future this will be addressed with a hierarchical
10427f296bb3SBarry Smithpartitioner that generates well-connected coarse subdomains first before
10437f296bb3SBarry Smithsubpartitioning them into the single-rank subdomains.
10447f296bb3SBarry Smith
10457f296bb3SBarry SmithIn the meantime the user can provide his or her own multi-rank
10467f296bb3SBarry Smithsubdomains by calling `PCGASMSetSubdomains(PC,IS[],IS[])` where each
10477f296bb3SBarry Smithof the `IS` objects on the list defines the inner (without the
10487f296bb3SBarry Smithoverlap) or the outer (including the overlap) subdomain on the
10497f296bb3SBarry Smithsubcommunicator of the `IS` object. A helper subroutine
10507f296bb3SBarry Smith`PCGASMCreateSubdomains2D()` is similar to PCASM’s but is capable of
10517f296bb3SBarry Smithconstructing multi-rank subdomains that can be then used with
10527f296bb3SBarry Smith`PCGASMSetSubdomains()`. An alternative way of creating multi-rank
10537f296bb3SBarry Smithsubdomains is by using the underlying `DM` object, if it is capable of
10547f296bb3SBarry Smithgenerating such decompositions via `DMCreateDomainDecomposition()`.
10557f296bb3SBarry SmithOrdinarily the decomposition specified by the user via
10567f296bb3SBarry Smith`PCGASMSetSubdomains()` takes precedence, unless
10577f296bb3SBarry Smith`PCGASMSetUseDMSubdomains()` instructs `PCGASM` to prefer
10587f296bb3SBarry Smith`DM`-created decompositions.
10597f296bb3SBarry Smith
10607f296bb3SBarry SmithCurrently there is no support for increasing the overlap of multi-rank
10617f296bb3SBarry Smithsubdomains via `PCGASMSetOverlap()` – this functionality works only
10627f296bb3SBarry Smithfor subdomains that fit within a single MPI process, exactly as in
10637f296bb3SBarry Smith`PCASM`.
10647f296bb3SBarry Smith
10657f296bb3SBarry SmithExamples of the described `PCGASM` usage can be found in
10667f296bb3SBarry Smith<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ksp/ksp/tutorials/ex62.c.html">KSP Tutorial ex62</a>.
10677f296bb3SBarry SmithIn particular, `runex62_superlu_dist` illustrates the use of
10687f296bb3SBarry Smith`SuperLU_DIST` as the subdomain solver on coalesced multi-rank
10697f296bb3SBarry Smithsubdomains. The `runex62_2D_*` examples illustrate the use of
10707f296bb3SBarry Smith`PCGASMCreateSubdomains2D()`.
10717f296bb3SBarry Smith
10727f296bb3SBarry Smith(sec_amg)=
10737f296bb3SBarry Smith
10747f296bb3SBarry Smith### Algebraic Multigrid (AMG) Preconditioners
10757f296bb3SBarry Smith
10767f296bb3SBarry SmithPETSc has a native algebraic multigrid preconditioner `PCGAMG` –
10777f296bb3SBarry Smith*gamg* – and interfaces to three external AMG packages: *hypre*, *ML*
10787f296bb3SBarry Smithand *AMGx* (CUDA platforms only) that can be downloaded in the
10797f296bb3SBarry Smithconfiguration phase (e.g., `--download-hypre` ) and used by
10807f296bb3SBarry Smithspecifying that command line parameter (e.g., `-pc_type hypre`).
10817f296bb3SBarry Smith*Hypre* is relatively monolithic in that a PETSc matrix is converted into a hypre
10827f296bb3SBarry Smithmatrix, and then *hypre* is called to solve the entire problem. *ML* is more
10837f296bb3SBarry Smithmodular because PETSc only has *ML* generate the coarse grid spaces
10847f296bb3SBarry Smith(columns of the prolongation operator), which is the core of an AMG method,
10857f296bb3SBarry Smithand then constructs a `PCMG` with Galerkin coarse grid operator
10867f296bb3SBarry Smithconstruction. `PCGAMG` is designed from the beginning to be modular, to
10877f296bb3SBarry Smithallow for new components to be added easily and also populates a
10887f296bb3SBarry Smithmultigrid preconditioner `PCMG` so generic multigrid parameters are
10897f296bb3SBarry Smithused (see {any}`sec_mg`). PETSc provides a fully supported (smoothed) aggregation AMG, but supports the addition of new methods
10907f296bb3SBarry Smith(`-pc_type gamg -pc_gamg_type agg` or `PCSetType(pc,PCGAMG)` and
10917f296bb3SBarry Smith`PCGAMGSetType(pc, PCGAMGAGG)`. Examples of extension are reference implementations of
10927f296bb3SBarry Smitha classical AMG method (`-pc_gamg_type classical`), a (2D) hybrid geometric
10937f296bb3SBarry SmithAMG method (`-pc_gamg_type geo`) that are not supported. A 2.5D AMG method DofColumns
10947f296bb3SBarry Smith{cite}`isaacstadlerghattas2015` supports 2D coarsenings extruded in the third dimension. `PCGAMG` does require the use
10957f296bb3SBarry Smithof `MATAIJ` matrices. For instance, `MATBAIJ` matrices are not supported. One
10967f296bb3SBarry Smithcan use `MATAIJ` instead of `MATBAIJ` without changing any code other than the
10977f296bb3SBarry Smithconstructor (or the `-mat_type` from the command line). For instance,
10987f296bb3SBarry Smith`MatSetValuesBlocked` works with `MATAIJ` matrices.
10997f296bb3SBarry Smith
11007f296bb3SBarry Smith**Important parameters for PCGAMGAGG**
11017f296bb3SBarry Smith
11027f296bb3SBarry Smith- Control the generation of the coarse grid
11037f296bb3SBarry Smith
11047f296bb3SBarry Smith  > - `-pc_gamg_aggressive_coarsening` \<n:int:1> Use aggressive coarsening on the finest n levels to construct the coarser mesh.
11057f296bb3SBarry Smith  >   See `PCGAMGAGGSetNSmooths()`. The larger value produces a faster preconditioner to create and solve, but the convergence may be slower.
11067f296bb3SBarry Smith  > - `-pc_gamg_low_memory_threshold_filter` \<bool:false> Filter small matrix entries before coarsening the mesh.
11077f296bb3SBarry Smith  >   See `PCGAMGSetLowMemoryFilter()`.
11087f296bb3SBarry Smith  > - `-pc_gamg_threshold` \<tol:real:0.0> The threshold of small values to drop when `-pc_gamg_low_memory_threshold_filter` is used. A
11097f296bb3SBarry Smith  >   negative value means keeping even the locations with 0.0. See `PCGAMGSetThreshold()`
11107f296bb3SBarry Smith  > - `-pc_gamg_threshold_scale` \<v>:real:1.0> Set a scale factor applied to each coarser level when `-pc_gamg_low_memory_threshold_filter` is used.
11117f296bb3SBarry Smith  >   See `PCGAMGSetThresholdScale()`.
11127f296bb3SBarry Smith  > - `-pc_gamg_mat_coarsen_type` \<mis|hem|misk:misk> Algorithm used to coarsen the matrix graph. See `MatCoarsenSetType()`.
11137f296bb3SBarry Smith  > - `-pc_gamg_mat_coarsen_max_it` \<it:int:4> Maximum HEM iterations to use. See `MatCoarsenSetMaximumIterations()`.
11147f296bb3SBarry Smith  > - `-pc_gamg_aggressive_mis_k` \<k:int:2> k distance in MIS coarsening (>2 is 'aggressive') to use in coarsening.
11157f296bb3SBarry Smith  >   See `PCGAMGMISkSetAggressive()`. The larger value produces a preconditioner that is faster to create and solve with but the convergence may be slower.
11167f296bb3SBarry Smith  >   This option and the previous option work to determine how aggressively the grids are coarsened.
1117aea0ef14SMark Adams  > - `-pc_gamg_mis_k_minimum_degree_ordering` \<bool:false> Use a minimum degree ordering in the greedy MIS algorithm used to coarsen.
11187f296bb3SBarry Smith  >   See `PCGAMGMISkSetMinDegreeOrdering()`
11197f296bb3SBarry Smith
11207f296bb3SBarry Smith- Control the generation of the prolongation for `PCGAMGAGG`
11217f296bb3SBarry Smith
11227f296bb3SBarry Smith  > - `-pc_gamg_agg_nsmooths` \<n:int:1> Number of smoothing steps to be used in constructing the prolongation. For symmetric problems,
11237f296bb3SBarry Smith  >   generally, one or more is best. For some strongly nonsymmetric problems, 0 may be best. See `PCGAMGSetNSmooths()`.
11247f296bb3SBarry Smith
11257f296bb3SBarry Smith- Control the amount of parallelism on the levels
11267f296bb3SBarry Smith
11277f296bb3SBarry Smith  > - `-pc_gamg_process_eq_limit` \<n:int:50> Sets the minimum number of equations allowed per process when coarsening (otherwise, fewer MPI processes
11287f296bb3SBarry Smith  >   are used for the coarser mesh). A larger value will cause the coarser problems to be run on fewer MPI processes, resulting
11297f296bb3SBarry Smith  >   in less communication and possibly a faster time to solution. See `PCGAMGSetProcEqLim()`.
11307f296bb3SBarry Smith  >
11317f296bb3SBarry Smith  > - `-pc_gamg_rank_reduction_factors` \<rn,rn-1,...,r1:int> Set a schedule for MPI rank reduction on coarse grids. `See PCGAMGSetRankReductionFactors()`
11327f296bb3SBarry Smith  >   This overrides the lessening of processes that would arise from `-pc_gamg_process_eq_limit`.
11337f296bb3SBarry Smith  >
11347f296bb3SBarry Smith  > - `-pc_gamg_repartition` \<bool:false> Run a partitioner on each coarser mesh generated rather than using the default partition arising from the
11357f296bb3SBarry Smith  >   finer mesh. See `PCGAMGSetRepartition()`. This increases the preconditioner setup time but will result in less time per
11367f296bb3SBarry Smith  >   iteration of the solver.
11377f296bb3SBarry Smith  >
11387f296bb3SBarry Smith  > - `-pc_gamg_parallel_coarse_grid_solver` \<bool:false> Allow the coarse grid solve to run in parallel, depending on the value of `-pc_gamg_coarse_eq_limit`.
11397f296bb3SBarry Smith  >   See `PCGAMGSetParallelCoarseGridSolve()`. If the coarse grid problem is large then this can
11407f296bb3SBarry Smith  >   improve the time to solution.
11417f296bb3SBarry Smith  >
11427f296bb3SBarry Smith  >   - `-pc_gamg_coarse_eq_limit` \<n:int:50> Sets the minimum number of equations allowed per process on the coarsest level when coarsening
11437f296bb3SBarry Smith  >     (otherwise fewer MPI processes will be used). A larger value will cause the coarse problems to be run on fewer MPI processes.
11447f296bb3SBarry Smith  >     This only applies if `-pc_gamg_parallel_coarse_grid_solver` is set to true. See `PCGAMGSetCoarseEqLim()`.
11457f296bb3SBarry Smith
11467f296bb3SBarry Smith- Control the smoothers
11477f296bb3SBarry Smith
11487f296bb3SBarry Smith  > - `-pc_mg_levels` \<n:int> Set the maximum number of levels to use.
11497f296bb3SBarry Smith  > - `-mg_levels_ksp_type` \<KSPType:chebyshev> If `KSPCHEBYSHEV` or `KSPRICHARDSON` is not used, then the Krylov
11507f296bb3SBarry Smith  >   method for the entire multigrid solve has to be a flexible method such as `KSPFGMRES`. Generally, the
11517f296bb3SBarry Smith  >   stronger the Krylov method the faster the convergence, but with more cost per iteration. See `KSPSetType()`.
11527f296bb3SBarry Smith  > - `-mg_levels_ksp_max_it` \<its:int:2> Sets the number of iterations to run the smoother on each level. Generally, the more iterations
11537f296bb3SBarry Smith  >   , the faster the convergence, but with more cost per multigrid iteration. See `PCMGSetNumberSmooth()`.
11547f296bb3SBarry Smith  > - `-mg_levels_ksp_xxx` Sets options for the `KSP` in the smoother on the levels.
11557f296bb3SBarry Smith  > - `-mg_levels_pc_type` \<PCType:jacobi> Sets the smoother to use on each level. See `PCSetType()`. Generally, the
11567f296bb3SBarry Smith  >   stronger the preconditioner the faster the convergence, but with more cost per iteration.
11577f296bb3SBarry Smith  > - `-mg_levels_pc_xxx` Sets options for the `PC` in the smoother on the levels.
11587f296bb3SBarry Smith  > - `-mg_coarse_ksp_type` \<KSPType:none> Sets the solver `KSPType` to use on the coarsest level.
11597f296bb3SBarry Smith  > - `-mg_coarse_pc_type` \<PCType:lu> Sets the solver `PCType` to use on the coarsest level.
11607f296bb3SBarry Smith  > - `-pc_gamg_asm_use_agg` \<bool:false> Use `PCASM` as the smoother on each level with the aggregates defined by the coarsening process are
11617f296bb3SBarry Smith  >   the subdomains. This option automatically switches the smoother on the levels to be `PCASM`.
11627f296bb3SBarry Smith  > - `-mg_levels_pc_asm_overlap` \<n:int:0> Use non-zero overlap with `-pc_gamg_asm_use_agg`. See `PCASMSetOverlap()`.
11637f296bb3SBarry Smith
11647f296bb3SBarry Smith- Control the multigrid algorithm
11657f296bb3SBarry Smith
11667f296bb3SBarry Smith  > - `-pc_mg_type` \<additive|multiplicative|full|kaskade:multiplicative> The type of multigrid to use. Usually, multiplicative is the fastest.
11677f296bb3SBarry Smith  > - `-pc_mg_cycle_type` \<v|w:v> Use V- or W-cycle with `-pc_mg_type` `multiplicative`
11687f296bb3SBarry Smith
11697f296bb3SBarry Smith`PCGAMG` provides unsmoothed aggregation (`-pc_gamg_agg_nsmooths 0`) and
11707f296bb3SBarry Smithsmoothed aggregation (`-pc_gamg_agg_nsmooths 1` or
11717f296bb3SBarry Smith`PCGAMGSetNSmooths(pc,1)`). Smoothed aggregation (SA), {cite}`vanek1996algebraic`, {cite}`vanek2001convergence`, is recommended
11727f296bb3SBarry Smithfor symmetric positive definite systems. Unsmoothed aggregation can be
11737f296bb3SBarry Smithuseful for asymmetric problems and problems where the highest eigenestimates are problematic. If poor convergence rates are observed using
11747f296bb3SBarry Smiththe smoothed version, one can test unsmoothed aggregation.
11757f296bb3SBarry Smith
11767f296bb3SBarry Smith**Eigenvalue estimates:** The parameters for the KSP eigen estimator,
11777f296bb3SBarry Smithused for SA, can be set with `-pc_gamg_esteig_ksp_max_it` and
11787f296bb3SBarry Smith`-pc_gamg_esteig_ksp_type`. For example, CG generally converges to the
11797f296bb3SBarry Smithhighest eigenvalue faster than GMRES (the default for KSP) if your problem
11807f296bb3SBarry Smithis symmetric positive definite. One can specify CG with
11817f296bb3SBarry Smith`-pc_gamg_esteig_ksp_type cg`. The default for
11827f296bb3SBarry Smith`-pc_gamg_esteig_ksp_max_it` is 10, which we have found is pretty safe
11837f296bb3SBarry Smithwith a (default) safety factor of 1.1. One can specify the range of real
11847f296bb3SBarry Smitheigenvalues in the same way as with Chebyshev KSP solvers
11857f296bb3SBarry Smith(smoothers), with `-pc_gamg_eigenvalues <emin,emax>`. GAMG sets the MG
11867f296bb3SBarry Smithsmoother type to chebyshev by default. By default, GAMG uses its eigen
11877f296bb3SBarry Smithestimate, if it has one, for Chebyshev smoothers if the smoother uses
11887f296bb3SBarry SmithJacobi preconditioning. This can be overridden with
11897f296bb3SBarry Smith`-pc_gamg_use_sa_esteig  <true,false>`.
11907f296bb3SBarry Smith
11917f296bb3SBarry SmithAMG methods require knowledge of the number of degrees of freedom per
11927f296bb3SBarry Smithvertex; the default is one (a scalar problem). Vector problems like
11937f296bb3SBarry Smithelasticity should set the block size of the matrix appropriately with
11947f296bb3SBarry Smith`-mat_block_size bs` or `MatSetBlockSize(mat,bs)`. Equations must be
11957f296bb3SBarry Smithordered in “vertex-major” ordering (e.g.,
11967f296bb3SBarry Smith$x_1,y_1,z_1,x_2,y_2,...$).
11977f296bb3SBarry Smith
11987f296bb3SBarry Smith**Near null space:** Smoothed aggregation requires an explicit
11997f296bb3SBarry Smithrepresentation of the (near) null space of the operator for optimal
12007f296bb3SBarry Smithperformance. One can provide an orthonormal set of null space vectors
12017f296bb3SBarry Smithwith `MatSetNearNullSpace()`. The vector of all ones is the default
12027f296bb3SBarry Smithfor each variable given by the block size (e.g., the translational rigid
12037f296bb3SBarry Smithbody modes). For elasticity, where rotational rigid body modes are
12047f296bb3SBarry Smithrequired to complete the near null-space you can use
12057f296bb3SBarry Smith`MatNullSpaceCreateRigidBody()` to create the null space vectors and
12067f296bb3SBarry Smiththen `MatSetNearNullSpace()`.
12077f296bb3SBarry Smith
12087f296bb3SBarry Smith**Coarse grid data model:** The GAMG framework provides for reducing the
12097f296bb3SBarry Smithnumber of active processes on coarse grids to reduce communication costs
12107f296bb3SBarry Smithwhen there is not enough parallelism to keep relative communication
12117f296bb3SBarry Smithcosts down. Most AMG solvers reduce to just one active process on the
12127f296bb3SBarry Smithcoarsest grid (the PETSc MG framework also supports redundantly solving
12137f296bb3SBarry Smiththe coarse grid on all processes to reduce communication
12147f296bb3SBarry Smithcosts potentially). However, this forcing to one process can be overridden if one
12157f296bb3SBarry Smithwishes to use a parallel coarse grid solver. GAMG generalizes this by
12167f296bb3SBarry Smithreducing the active number of processes on other coarse grids.
12177f296bb3SBarry SmithGAMG will select the number of active processors by fitting the desired
12187f296bb3SBarry Smithnumber of equations per process (set with
12197f296bb3SBarry Smith`-pc_gamg_process_eq_limit <50>,`) at each level given that size of
12207f296bb3SBarry Smitheach level. If $P_i < P$ processors are desired on a level
12217f296bb3SBarry Smith$i$, then the first $P_i$ processes are populated with the grid
12227f296bb3SBarry Smithand the remaining are empty on that grid. One can, and probably should,
12237f296bb3SBarry Smithrepartition the coarse grids with `-pc_gamg_repartition <true>`,
12247f296bb3SBarry Smithotherwise an integer process reduction factor ($q$) is selected
12257f296bb3SBarry Smithand the equations on the first $q$ processes are moved to process
12267f296bb3SBarry Smith0, and so on. As mentioned, multigrid generally coarsens the problem
12277f296bb3SBarry Smithuntil it is small enough to be solved with an exact solver (e.g., LU or
12287f296bb3SBarry SmithSVD) in a relatively short time. GAMG will stop coarsening when the
12297f296bb3SBarry Smithnumber of the equation on a grid falls below the threshold given by
12307f296bb3SBarry Smith`-pc_gamg_coarse_eq_limit <50>,`.
12317f296bb3SBarry Smith
12327f296bb3SBarry Smith**Coarse grid parameters:** There are several options to provide
12337f296bb3SBarry Smithparameters to the coarsening algorithm and parallel data layout. Run a
12347f296bb3SBarry Smithcode using `PCGAMG` with `-help` to get a full listing of GAMG
12357f296bb3SBarry Smithparameters with short descriptions. The rate of coarsening is
12367f296bb3SBarry Smithcritical in AMG performance – too slow coarsening will result in an
12377f296bb3SBarry Smithoverly expensive solver per iteration and too fast coarsening will
12387f296bb3SBarry Smithresult in decrease in the convergence rate. `-pc_gamg_threshold <-1>`
12397f296bb3SBarry Smithand `-pc_gamg_aggressive_coarsening <N>` are the primary parameters that
12407f296bb3SBarry Smithcontrol coarsening rates, which is very important for AMG performance. A
12417f296bb3SBarry Smithgreedy maximal independent set (MIS) algorithm is used in coarsening.
12427f296bb3SBarry SmithSquaring the graph implements MIS-2; the root vertex in an
12437f296bb3SBarry Smithaggregate is more than two edges away from another root vertex instead
12447f296bb3SBarry Smithof more than one in MIS. The threshold parameter sets a normalized
12457f296bb3SBarry Smiththreshold for which edges are removed from the MIS graph, thereby
12467f296bb3SBarry Smithcoarsening slower. Zero will keep all non-zero edges, a negative number
12477f296bb3SBarry Smithwill keep zero edges, and a positive number will drop small edges. Typical
12487f296bb3SBarry Smithfinite threshold values are in the range of $0.01 - 0.05$. There
12497f296bb3SBarry Smithare additional parameters for changing the weights on coarse grids.
12507f296bb3SBarry Smith
12517f296bb3SBarry SmithThe parallel MIS algorithms require symmetric weights/matrices. Thus `PCGAMG`
12527f296bb3SBarry Smithwill automatically make the graph symmetric if it is not symmetric. Since this
12537f296bb3SBarry Smithhas additional cost, users should indicate the symmetry of the matrices they
12547f296bb3SBarry Smithprovide by calling
12557f296bb3SBarry Smith
12567f296bb3SBarry Smith```
12577f296bb3SBarry SmithMatSetOption(mat,MAT_SYMMETRIC,PETSC_TRUE (or PETSC_FALSE))
12587f296bb3SBarry Smith```
12597f296bb3SBarry Smith
12607f296bb3SBarry Smithor
12617f296bb3SBarry Smith
12627f296bb3SBarry Smith```
12637f296bb3SBarry SmithMatSetOption(mat,MAT_STRUCTURALLY_SYMMETRIC,PETSC_TRUE (or PETSC_FALSE)).
12647f296bb3SBarry Smith```
12657f296bb3SBarry Smith
12667f296bb3SBarry SmithIf they know that the matrix will always have symmetry despite future changes
12677f296bb3SBarry Smithto the matrix (with, for example, `MatSetValues()`) then they should also call
12687f296bb3SBarry Smith
12697f296bb3SBarry Smith```
12707f296bb3SBarry SmithMatSetOption(mat,MAT_SYMMETRY_ETERNAL,PETSC_TRUE (or PETSC_FALSE))
12717f296bb3SBarry Smith```
12727f296bb3SBarry Smith
12737f296bb3SBarry Smithor
12747f296bb3SBarry Smith
12757f296bb3SBarry Smith```
12767f296bb3SBarry SmithMatSetOption(mat,MAT_STRUCTURAL_SYMMETRY_ETERNAL,PETSC_TRUE (or PETSC_FALSE)).
12777f296bb3SBarry Smith```
12787f296bb3SBarry Smith
12797f296bb3SBarry SmithUsing this information allows the algorithm to skip unnecessary computations.
12807f296bb3SBarry Smith
12817f296bb3SBarry Smith**Troubleshooting algebraic multigrid methods:** If `PCGAMG`, *ML*, *AMGx* or
12827f296bb3SBarry Smith*hypre* does not perform well; the first thing to try is one of the other
12837f296bb3SBarry Smithmethods. Often, the default parameters or just the strengths of different
12847f296bb3SBarry Smithalgorithms can fix performance problems or provide useful information to
12857f296bb3SBarry Smithguide further debugging. There are several sources of poor performance
12867f296bb3SBarry Smithof AMG solvers and often special purpose methods must be developed to
12877f296bb3SBarry Smithachieve the full potential of multigrid. To name just a few sources of
12887f296bb3SBarry Smithperformance degradation that may not be fixed with parameters in PETSc
12897f296bb3SBarry Smithcurrently: non-elliptic operators, curl/curl operators, highly stretched
12907f296bb3SBarry Smithgrids or highly anisotropic problems, large jumps in material
12917f296bb3SBarry Smithcoefficients with complex geometry (AMG is particularly well suited to
12927f296bb3SBarry Smithjumps in coefficients, but it is not a perfect solution), highly
12937f296bb3SBarry Smithincompressible elasticity, not to mention ill-posed problems and many
12947f296bb3SBarry Smithothers. For Grad-Div and Curl-Curl operators, you may want to try the
12957f296bb3SBarry SmithAuxiliary-space Maxwell Solver (AMS,
12967f296bb3SBarry Smith`-pc_type hypre -pc_hypre_type ams`) or the Auxiliary-space Divergence
12977f296bb3SBarry SmithSolver (ADS, `-pc_type hypre -pc_hypre_type ads`) solvers. These
12987f296bb3SBarry Smithsolvers need some additional information on the underlying mesh;
12997f296bb3SBarry Smithspecifically, AMS needs the discrete gradient operator, which can be
13007f296bb3SBarry Smithspecified via `PCHYPRESetDiscreteGradient()`. In addition to the
13017f296bb3SBarry Smithdiscrete gradient, ADS also needs the specification of the discrete curl
13027f296bb3SBarry Smithoperator, which can be set using `PCHYPRESetDiscreteCurl()`.
13037f296bb3SBarry Smith
13047f296bb3SBarry Smith**I am converging slowly, what do I do?** AMG methods are sensitive to
13057f296bb3SBarry Smithcoarsening rates and methods; for GAMG use `-pc_gamg_threshold <x>`
13067f296bb3SBarry Smithor `PCGAMGSetThreshold()` to regulate coarsening rates; higher values decrease
1307aea0ef14SMark Adamsthe coarsening rate. A high threshold (e.g., $x=0.08$) will result in an
13087f296bb3SBarry Smithexpensive but potentially powerful preconditioner, and a low threshold
13097f296bb3SBarry Smith(e.g., $x=0.0$) will result in faster coarsening, fewer levels,
13107f296bb3SBarry Smithcheaper solves, and generally worse convergence rates.
13117f296bb3SBarry Smith
1312aea0ef14SMark AdamsAggressive_coarsening is the second mechanism for
1313aea0ef14SMark Adamsincreasing the coarsening rate and thereby decreasing the cost of the
1314aea0ef14SMark Adamscoarse grids and generally decreasing the solver convergence rate.
1315aea0ef14SMark AdamsUse `-pc_gamg_aggressive_coarsening <N>`, or
1316aea0ef14SMark Adams`PCGAMGSetAggressiveLevels(pc,N)`, to aggressively coarsen the graph on the finest N
1317aea0ef14SMark Adamslevels. The default is $N=1$. There are two options for aggressive coarsening: 1) the
1318aea0ef14SMark Adamsdefault, square graph: use $A^T A$ in the MIS coarsening algorithm and 2) coarsen with MIS-2 (instead
1319aea0ef14SMark Adamsof the default of MIS-1). Use `-pc_gamg_aggressive_square_graph false`
1320aea0ef14SMark Adamsto use MIS-k coarsening and `-pc_gamg_aggressive_mis_k k` to select
1321aea0ef14SMark Adamsthe level of MIS other than the default $k=2$.
1322aea0ef14SMark AdamsThe square graph approach seems to coarsen slower, which results in
1323aea0ef14SMark Adamslarger coarse grids and is more expensive, but generally improves
1324aea0ef14SMark Adamsthe convergence rate.
1325aea0ef14SMark AdamsIf the coarse grids are expensive to compute, and use a lot of memory,
1326aea0ef14SMark Adamsusing MIS-2 is a good alternative (setting MIS-1 effectively turns
1327aea0ef14SMark Adamsaggressive coarsening off). Note that MIS-3 is also supported.
1328aea0ef14SMark Adams
13297f296bb3SBarry SmithOne can run with `-info :pc` and grep for `PCGAMG` to get statistics on
13307f296bb3SBarry Smitheach level, which can be used to see if you are coarsening at an
13317f296bb3SBarry Smithappropriate rate. With smoothed aggregation, you generally want to coarse
13327f296bb3SBarry Smithat about a rate of 3:1 in each dimension. Coarsening too slowly will
13337f296bb3SBarry Smithresult in large numbers of non-zeros per row on coarse grids (this is
13347f296bb3SBarry Smithreported). The number of non-zeros can go up very high, say about 300
13357f296bb3SBarry Smith(times the degrees of freedom per vertex) on a 3D hex mesh. One can also
13367f296bb3SBarry Smithlook at the grid complexity, which is also reported (the ratio of the
13377f296bb3SBarry Smithtotal number of matrix entries for all levels to the number of matrix
13387f296bb3SBarry Smithentries on the fine level). Grid complexity should be well under 2.0 and
13397f296bb3SBarry Smithpreferably around $1.3$ or lower. If convergence is poor and the
13407f296bb3SBarry SmithGalerkin coarse grid construction is much smaller than the time for each
13417f296bb3SBarry Smithsolve, one can safely decrease the coarsening rate.
13427f296bb3SBarry Smith`-pc_gamg_threshold` $-1.0$ is the simplest and most robust
13437f296bb3SBarry Smithoption and is recommended if poor convergence rates are observed, at
13447f296bb3SBarry Smithleast until the source of the problem is discovered. In conclusion, decreasing the coarsening rate (increasing the
13457f296bb3SBarry Smiththreshold) should be tried if convergence is slow.
13467f296bb3SBarry Smith
13477f296bb3SBarry Smith**A note on Chebyshev smoothers.** Chebyshev solvers are attractive as
13487f296bb3SBarry Smithmultigrid smoothers because they can target a specific interval of the
13497f296bb3SBarry Smithspectrum, which is the purpose of a smoother. The spectral bounds for
13507f296bb3SBarry SmithChebyshev solvers are simple to compute because they rely on the highest
13517f296bb3SBarry Smitheigenvalue of your (diagonally preconditioned) operator, which is
13527f296bb3SBarry Smithconceptually simple to compute. However, if this highest eigenvalue
13537f296bb3SBarry Smithestimate is not accurate (too low), the solvers can fail with an
13547f296bb3SBarry Smithindefinite preconditioner message. One can run with `-info` and grep
13557f296bb3SBarry Smithfor `PCGAMG` to get these estimates or use `-ksp_view`. These highest
13567f296bb3SBarry Smitheigenvalues are generally between 1.5-3.0. For symmetric positive
13577f296bb3SBarry Smithdefinite systems, CG is a better eigenvalue estimator
13587f296bb3SBarry Smith`-mg_levels_esteig_ksp_type cg`. Bad Eigen estimates often cause indefinite matrix messages. Explicitly damped Jacobi or Krylov
13597f296bb3SBarry Smithsmoothers can provide an alternative to Chebyshev, and *hypre* has
13607f296bb3SBarry Smithalternative smoothers.
13617f296bb3SBarry Smith
13627f296bb3SBarry Smith**Now, am I solving alright? Can I expect better?** If you find that you
13637f296bb3SBarry Smithare getting nearly one digit in reduction of the residual per iteration
13647f296bb3SBarry Smithand are using a modest number of point smoothing steps (e.g., 1-4
13657f296bb3SBarry Smithiterations of SOR), then you may be fairly close to textbook multigrid
13667f296bb3SBarry Smithefficiency. However, you also need to check the setup costs. This can be
13677f296bb3SBarry Smithdetermined by running with `-log_view` and check that the time for the
13687f296bb3SBarry SmithGalerkin coarse grid construction (`MatPtAP()`) is not (much) more than
13697f296bb3SBarry Smiththe time spent in each solve (`KSPSolve()`). If the `MatPtAP()` time is
13707f296bb3SBarry Smithtoo large, then one can increase the coarsening rate by decreasing the
13717f296bb3SBarry Smiththreshold and using aggressive coarsening
13727f296bb3SBarry Smith(`-pc_gamg_aggressive_coarsening <N>`, squares the graph on the finest N
13737f296bb3SBarry Smithlevels). Likewise, if your `MatPtAP()` time is short and your convergence
13747f296bb3SBarry SmithIf the rate is not ideal, you could decrease the coarsening rate.
13757f296bb3SBarry Smith
13767f296bb3SBarry SmithPETSc’s AMG solver is a framework for developers to
13777f296bb3SBarry Smitheasily add AMG capabilities, like new AMG methods or an AMG component
13787f296bb3SBarry Smithlike a matrix triple product. Contact us directly if you are interested
13797f296bb3SBarry Smithin contributing.
13807f296bb3SBarry Smith
13817f296bb3SBarry SmithUsing algebraic multigrid as a "standalone" solver is possible but not recommended, as it does not accelerate it with a Krylov method.
13827f296bb3SBarry SmithUse a `KSPType` of `KSPRICHARDSON`
13837f296bb3SBarry Smith(or equivalently `-ksp_type richardson`) to achieve this. Using `KSPPREONLY` will not work since it only applies a single multigrid cycle.

#### Adaptive Interpolation

**Interpolation** transfers a function from the coarse space to the fine space. We would like this process to be accurate for the functions resolved by the coarse grid, in particular the approximate solution computed there. By default, we create these matrices using local interpolation of the fine grid dual basis functions in the coarse basis. However, an adaptive procedure can optimize the coefficients of the interpolator to reproduce pairs of coarse/fine functions which should approximate the lowest modes of the generalized eigenproblem

$$
A x = \lambda M x
$$

where $A$ is the system matrix and $M$ is the smoother. Note that for defect-correction MG, the interpolated solution from the coarse space need not be as accurate as the fine solution, for the same reason that updates in iterative refinement can be less accurate. However, in FAS, or in the final interpolation step for each level of Full Multigrid, we must have interpolation as accurate as the fine solution since we are moving the entire solution itself.

**Injection** should accurately transfer the fine solution to the coarse grid. Accuracy here means that the action of a coarse dual function on either should produce approximately the same result. In the structured grid case, this means that we just use the same values on coarse points. This can result in aliasing.

**Restriction** is intended to transfer the fine residual to the coarse space. Here we use averaging (often the transpose of the interpolation operation) to damp out the fine space contributions. Thus, it is less accurate than injection, but avoids aliasing of the high modes.

For a multigrid cycle, the interpolator $P$ is intended to accurately reproduce "smooth" functions from the coarse space in the fine space, keeping the energy of the interpolant about the same. For the Laplacian on a structured mesh, it is easy to determine what these low-frequency functions are: they are the Fourier modes. However, an arbitrary operator $A$ will have different coarse modes that we want to resolve accurately on the fine grid, so that our coarse solve produces a good guess for the fine problem. How do we make sure that our interpolator $P$ can do this?
We first must decide what we mean by accurate interpolation of some functions. Suppose we know the continuum function $f$ that we care about, and we are only interested in a finite element description of discrete functions. Then the coarse function representing $f$ is given by

$$
f^C = \sum_i f^C_i \phi^C_i,
$$

and similarly the fine grid form is

$$
f^F = \sum_i f^F_i \phi^F_i.
$$

Now we would like the interpolant of the coarse representer to the fine grid to be as close as possible to the fine representer in a least squares sense, meaning we want to solve the minimization problem

$$
\min_{P} \| f^F - P f^C \|_2
$$

We can express $P$ as a matrix by looking at the matrix elements $P_{ij} = \phi^F_i P \phi^C_j$. Then we have

$$
\begin{aligned}
  &\phi^F_i f^F - \phi^F_i P f^C \\
= &f^F_i - \sum_j P_{ij} f^C_j
\end{aligned}
$$

so that our discrete optimization problem is

$$
\min_{P_{ij}} \| f^F_i - \sum_j P_{ij} f^C_j \|_2
$$

and we will treat each row of the interpolator as a separate optimization problem. We could allow an arbitrary sparsity pattern, or try to determine it adaptively, as is done in sparse approximate inverse preconditioning. However, we know the supports of the basis functions in finite elements, and thus the naive sparsity pattern from local interpolation can be used.

We note here that the BAMG framework of Brannick et al. {cite}`brandtbrannickkahllivshits2011` does not use fine and coarse function spaces, but rather a fine point/coarse point division, which we will not employ here. Our general PETSc routine should work for both, since the input would be the checking set (fine basis coefficients or fine space points) and the approximation set (coarse basis coefficients in the support or coarse points in the sparsity pattern).
We can easily solve the above problem using QR factorization. However, there are many smooth functions from the coarse space that we want interpolated accurately, and a single $f$ would not constrain the values $P_{ij}$ well. Therefore, we will use several functions $\{f_k\}$ in our minimization,

$$
\begin{aligned}
  &\min_{P_{ij}} \sum_k w_k \| f^{F,k}_i - \sum_j P_{ij} f^{C,k}_j \|_2 \\
= &\min_{P_{ij}} \sum_k \| \sqrt{w_k} f^{F,k}_i - \sqrt{w_k} \sum_j P_{ij} f^{C,k}_j \|_2 \\
= &\min_{P_{ij}} \| W^{1/2} \mathbf{f}^{F}_i - W^{1/2} \mathbf{f}^{C} p_i \|_2
\end{aligned}
$$

where

$$
\begin{aligned}
W         &= \begin{pmatrix} w_0 & & \\ & \ddots & \\ & & w_K \end{pmatrix} \\
\mathbf{f}^{F}_i &= \begin{pmatrix} f^{F,0}_i \\ \vdots \\ f^{F,K}_i \end{pmatrix} \\
\mathbf{f}^{C}   &= \begin{pmatrix} f^{C,0}_0 & \cdots & f^{C,0}_n \\ \vdots & \ddots &  \vdots \\ f^{C,K}_0 & \cdots & f^{C,K}_n \end{pmatrix} \\
p_i       &= \begin{pmatrix} P_{i0} \\ \vdots \\ P_{in} \end{pmatrix}
\end{aligned}
$$

or alternatively

$$
\begin{aligned}
[W]_{kk}     &= w_k \\
[f^{F}_i]_k  &= f^{F,k}_i \\
[f^{C}]_{kj} &= f^{C,k}_j \\
[p_i]_j      &= P_{ij}
\end{aligned}
$$

We thus have a standard least-squares problem

$$
\min_{P_{ij}} \| b - A x \|_2
$$

where

$$
\begin{aligned}
A &= W^{1/2} f^{C} \\
b &= W^{1/2} f^{F}_i \\
x &= p_i
\end{aligned}
$$

which can be solved using LAPACK.
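As a language-agnostic illustration of the per-row solve above (PETSc itself calls LAPACK), the following Python sketch solves $\min_x \| W^{1/2}(b - A x) \|_2$ for one interpolator row via the normal equations; the function values and weights are made up for illustration.

```python
import math

def weighted_lstsq(A, b, w):
    """Solve min_x || W^(1/2) (b - A x) ||_2 by the normal equations.

    A: K x n coarse function values f^C, b: length-K fine values f^F_i,
    w: length-K weights. Adequate for the tiny systems arising per
    interpolator row; QR (as in LAPACK) is preferred in production.
    """
    K, n = len(A), len(A[0])
    # Scale rows by sqrt(w_k): the problem becomes ordinary least squares.
    As = [[math.sqrt(w[k]) * A[k][j] for j in range(n)] for k in range(K)]
    bs = [math.sqrt(w[k]) * b[k] for k in range(K)]
    # Form the normal equations  (A^T A) x = A^T b.
    AtA = [[sum(As[k][i] * As[k][j] for k in range(K)) for j in range(n)]
           for i in range(n)]
    Atb = [sum(As[k][i] * bs[k] for k in range(K)) for i in range(n)]
    # Gaussian elimination with partial pivoting on the n x n system.
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(AtA[r][c]))
        AtA[c], AtA[p] = AtA[p], AtA[c]
        Atb[c], Atb[p] = Atb[p], Atb[c]
        for r in range(c + 1, n):
            f = AtA[r][c] / AtA[c][c]
            for j in range(c, n):
                AtA[r][j] -= f * AtA[c][j]
            Atb[r] -= f * Atb[c]
    x = [0.0] * n
    for c in reversed(range(n)):
        x[c] = (Atb[c] - sum(AtA[c][j] * x[j] for j in range(c + 1, n))) / AtA[c][c]
    return x

# Three test functions (K = 3) constraining two row entries (n = 2);
# this system happens to be consistent, so the residual is zero.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [0.5, 0.25, 0.75]
row = weighted_lstsq(A, b, [1.0, 1.0, 1.0])  # row ≈ [0.5, 0.25]
```

Each row of $P$ yields one such small dense solve, so the cost is proportional to the number of fine-grid unknowns.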

We will typically perform this optimization on a multigrid level $l$ when the change in eigenvalue from level $l+1$ is relatively large, meaning that

$$
\frac{|\lambda_l - \lambda_{l+1}|}{|\lambda_l|}
$$

exceeds a given tolerance. This indicates that the generalized eigenvector associated with that eigenvalue was not adequately represented by $P^l_{l+1}$, and the interpolator should be recomputed.
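The criterion above can be sketched in a few lines; the tolerance value here is a made-up illustrative choice, not a PETSc default.

```python
def should_recompute_interpolator(lam_l, lam_lp1, tol=0.1):
    """Flag level l for adaptive recomputation of P^l_{l+1} when the
    generalized eigenvalue changes by more than tol (illustrative value)
    between levels l and l+1."""
    return abs(lam_l - lam_lp1) / abs(lam_l) > tol
```

A 50% change in the eigenvalue would trigger recomputation under this tolerance, while a 1% change would not.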

```{raw} html
<hr>
```

### Balancing Domain Decomposition by Constraints

PETSc provides the Balancing Domain Decomposition by Constraints (`PCBDDC`) method for preconditioning parallel finite element problems stored in unassembled format (see `MATIS`). `PCBDDC` is a 2-level non-overlapping domain decomposition method which can be easily adapted to different problems and discretizations by means of a few user customizations. The application of the preconditioner to a vector consists of the static condensation of the residual at the interior of the subdomains by means of local Dirichlet solves, followed by an additive combination of Neumann local corrections and the solution of a global coupled coarse problem. Command line options for the underlying `KSP` objects are prefixed by `-pc_bddc_dirichlet`, `-pc_bddc_neumann`, and `-pc_bddc_coarse` respectively.

The implementation supports any kind of linear system, and assumes a one-to-one mapping between subdomains and MPI processes. Complex numbers are supported as well. For non-symmetric problems, use the runtime option `-pc_bddc_symmetric 0`.

Unlike conventional non-overlapping methods that iterate just on the degrees of freedom at the interface between subdomains, `PCBDDC` iterates on the whole set of degrees of freedom, allowing the use of approximate subdomain solvers. When using approximate solvers, the command line switches `-pc_bddc_dirichlet_approximate` and/or `-pc_bddc_neumann_approximate` should be used to inform `PCBDDC`. If any of the local problems is singular, the nullspace of the local operator should be attached to the local matrix via `MatSetNullSpace()`.
At the basis of the method is the analysis of the connected components of the interface, used to detect vertex, edge, and face equivalence classes. Additional information on the degrees of freedom can be supplied to `PCBDDC` by using the following functions:

- `PCBDDCSetDofsSplitting()`
- `PCBDDCSetLocalAdjacencyGraph()`
- `PCBDDCSetPrimalVerticesLocalIS()`
- `PCBDDCSetNeumannBoundaries()`
- `PCBDDCSetDirichletBoundaries()`
- `PCBDDCSetNeumannBoundariesLocal()`
- `PCBDDCSetDirichletBoundariesLocal()`

Crucial for the convergence of the iterative process is the specification of the primal constraints to be imposed at the interface between subdomains. `PCBDDC` uses by default vertex continuities and edge arithmetic averages, which are enough for the three-dimensional Poisson problem with constant coefficients. The user can switch the use of vertex, edge, or face constraints on and off with the command line switches `-pc_bddc_use_vertices`, `-pc_bddc_use_edges`, and `-pc_bddc_use_faces`. A customization of the constraints is available by attaching a `MatNullSpace` object to the matrix used to compute the preconditioner via `MatSetNearNullSpace()`. The vectors of the `MatNullSpace` object should represent the constraints in the form of quadrature rules; quadrature rules for different classes of the interface can be listed in the same vector. The number of vectors of the `MatNullSpace` object corresponds to the maximum number of constraints that can be imposed for each class. Once all the quadrature rules for a given interface class have been extracted, an SVD operation is performed to retain the non-singular modes. As an example, the rigid body modes represent an effective choice for elasticity, even in the almost incompressible case. For particular problems, e.g. edge-based discretization with Nedelec elements, a user defined change of basis of the degrees of freedom can be beneficial for `PCBDDC`; use `PCBDDCSetChangeOfBasisMat()` to customize the change of basis.

The `PCBDDC` method is usually robust with respect to jumps in the material parameters aligned with the interface; for PDEs with more than one material parameter you may also consider using the so-called deluxe scaling, available via the command line switch `-pc_bddc_use_deluxe_scaling`. Other scalings are available; see `PCISSetSubdomainScalingFactor()`, `PCISSetSubdomainDiagonalScaling()`, or `PCISSetUseStiffnessScaling()`. However, the convergence properties of the `PCBDDC` method degrade in the presence of large jumps in the material coefficients not aligned with the interface; for such cases, PETSc has the capability of adaptively computing the primal constraints. Adaptive selection of constraints can be requested by specifying a threshold value at the command line using `-pc_bddc_adaptive_threshold x`. Valid values for the threshold `x` range from 1 to infinity, with smaller values corresponding to more robust preconditioners. For SPD problems in 2D, or in 3D with only face degrees of freedom (as in the case of Raviart-Thomas or Brezzi-Douglas-Marini elements), such a threshold is a very accurate estimator of the condition number of the resulting preconditioned operator. Since the adaptive selection of constraints for `PCBDDC` methods is still an active topic of research, its implementation is currently limited to SPD problems; moreover, because the technique requires the explicit knowledge of the local Schur complements, it needs the external package MUMPS.

When solving problems decomposed into thousands of subdomains or more, the solution of the `PCBDDC` coarse problem can become a bottleneck; to overcome this issue, the user can either solve the parallel coarse problem on a subset of the communicator associated with `PCBDDC` by using the command line switch `-pc_bddc_coarse_redistribute`, or instead use a multilevel approach. The latter can be requested by specifying the number of requested levels at the command line (`-pc_bddc_levels`) or by using `PCBDDCSetLevels()`. An additional parameter (see `PCBDDCSetCoarseningRatio()`) controls the number of subdomains that will be generated at the next level; the larger the coarsening ratio, the lower the number of coarser subdomains.

For further details, see the example
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ksp/ksp/tutorials/ex59.c">KSP Tutorial ex59</a>
and the online documentation for `PCBDDC`.

### Shell Preconditioners

The shell preconditioner simply uses an application-provided routine to implement the preconditioner. That is, it allows users to write or wrap their own custom preconditioners as a `PC` and use it with `KSP`, etc.

To provide a custom preconditioner application, use

```
PCShellSetApply(PC pc,PetscErrorCode (*apply)(PC,Vec,Vec));
```

Often a preconditioner needs access to an application-provided data structure. For this, one should use

```
PCShellSetContext(PC pc, PetscCtx ctx);
```

to set this data structure and

```
PCShellGetContext(PC pc, PetscCtxRt ctx);
```

to retrieve it in `apply`. The three routine arguments of `apply()` are the `PC`, the input vector, and the output vector, respectively.

For a preconditioner that requires some sort of “setup” before being used, and a new setup every time the operator is changed, one can provide a routine that is called every time the operator is changed (usually via `KSPSetOperators()`).

```
PCShellSetSetUp(PC pc,PetscErrorCode (*setup)(PC));
```

The argument to the `setup` routine is the same `PC` object, which can be used to obtain the operators with `PCGetOperators()` and the application-provided data structure that was set with `PCShellSetContext()`.

(sec_combining_pcs)=

### Combining Preconditioners

The `PC` type `PCCOMPOSITE` allows one to form new preconditioners by combining already-defined preconditioners and solvers. Combining preconditioners usually requires some experimentation to find a combination that works better than any single method. It is a tricky business and is not recommended until your application code is complete and running and you are trying to improve performance. In many cases using a single preconditioner is better than a combination; an exception is the multigrid/multilevel preconditioners (solvers), which are always combinations of some sort, see {any}`sec_mg`.

Let $B_1$ and $B_2$ represent the application of two preconditioners of type `type1` and `type2`. The preconditioner $B = B_1 + B_2$ can be obtained with

```
PCSetType(pc,PCCOMPOSITE);
PCCompositeAddPCType(pc,type1);
PCCompositeAddPCType(pc,type2);
```

Any number of preconditioners may be added in this way.

This way of combining preconditioners is called additive, since the actions of the preconditioners are added together. This is the default behavior. An alternative can be set with the option

```
PCCompositeSetType(pc,PC_COMPOSITE_MULTIPLICATIVE);
```

In this form the residual is updated after the application of each preconditioner, and the next preconditioner is applied to the new residual. For example, with two composed preconditioners $B_1$ and $B_2$, $y = B x$ is obtained from

$$
\begin{aligned}
y    &= B_1 x \\
w_1  &= x - A y \\
y    &= y + B_2 w_1
\end{aligned}
$$

Loosely, this corresponds to a Gauss-Seidel iteration, while additive corresponds to a Jacobi iteration.
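The two update rules are easy to check on a small system. The following Python sketch (with a made-up 2×2 matrix and crude diagonal "solvers" standing in for $B_1$ and $B_2$) applies one additive and one multiplicative combination:

```python
def mat_vec(A, x):
    """Dense matrix-vector product on nested lists."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

def additive(B1, B2, A, x):
    """y = B1 x + B2 x: both preconditioners act on the same input (Jacobi-like)."""
    return [a + b for a, b in zip(B1(x), B2(x))]

def multiplicative(B1, B2, A, x):
    """y = B1 x; then correct with B2 applied to the updated residual (Gauss-Seidel-like)."""
    y = B1(x)
    w1 = [xi - ri for xi, ri in zip(x, mat_vec(A, y))]  # w1 = x - A y
    return [yi + ci for yi, ci in zip(y, B2(w1))]       # y = y + B2 w1

A = [[4.0, 1.0], [1.0, 3.0]]
B1 = lambda v: [v[0] / 4.0, 0.0]   # crude "solve" using only the first diagonal entry
B2 = lambda v: [0.0, v[1] / 3.0]   # crude "solve" using only the second diagonal entry
x = [1.0, 1.0]
ya = additive(B1, B2, A, x)        # [0.25, 0.3333...]
ym = multiplicative(B1, B2, A, x)  # [0.25, 0.25]
```

Note that the multiplicative result differs from the additive one precisely because the second correction sees the residual left after the first, which is why it requires an application of $A$.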

Under most circumstances, the multiplicative form requires one-half the number of iterations of the additive form; however, the multiplicative form does require the application of $A$ inside the preconditioner.

In the multiplicative version, the calculation of the residual inside the preconditioner can be done in two ways: using the original linear system matrix or using the matrix used to build the preconditioners $B_1$, $B_2$, etc. By default it uses the “preconditioner matrix”; to use the `Amat` matrix, use the option

```
PCSetUseAmat(PC pc);
```

The individual preconditioners can be accessed (in order to set options) via

```
PCCompositeGetPC(PC pc,PetscInt count,PC *subpc);
```

For example, to set the first sub-preconditioner to use ILU(1),

```
PC subpc;
PCCompositeGetPC(pc,0,&subpc);
PCFactorSetLevels(subpc,1);
```

One can also change the operator that is used to construct a particular `PC` in the composite `PC` by calling `PCSetOperators()` on the obtained `PC`. `PCFIELDSPLIT`, {any}`sec_block_matrices`, provides an alternative approach to defining composite preconditioners with a variety of pre-defined compositions.

These various options can also be set via the options database. For example, `-pc_type composite -pc_composite_pcs jacobi,ilu` causes the composite preconditioner to be used with two preconditioners: Jacobi and ILU. The option `-pc_composite_type multiplicative` initiates the multiplicative version of the algorithm, while `-pc_composite_type additive` the additive version. Using the `Amat` matrix is obtained with the option `-pc_use_amat`. One sets options for the sub-preconditioners with the extra prefix `-sub_N_` where `N` is the number of the sub-preconditioner. For example, `-sub_0_pc_ifactor_fill 0`.

PETSc also allows a preconditioner to be a complete `KSPSolve()` linear solver. This is achieved with the `PCKSP` type.

```
PCSetType(PC pc,PCKSP);
PCKSPGetKSP(pc,&ksp);
 /* set any KSP/PC options */
```

From the command line one can use 5 iterations of BiCGStab with ILU(0) preconditioning as the preconditioner with `-pc_type ksp -ksp_pc_type ilu -ksp_ksp_max_it 5 -ksp_ksp_type bcgs`.

By default the inner `KSP` solver uses the outer preconditioner matrix, `Pmat`, as the matrix to be solved in the linear system; to use the matrix that defines the linear system, `Amat`, use the option

```
PCSetUseAmat(PC pc);
```

or at the command line with `-pc_use_amat`.

Naturally, one can use a `PCKSP` preconditioner inside a composite preconditioner. For example, `-pc_type composite -pc_composite_pcs ilu,ksp -sub_1_pc_type jacobi -sub_1_ksp_max_it 10` uses two preconditioners: ILU(0) and 10 iterations of GMRES with Jacobi preconditioning. However, it is not clear whether one would ever wish to do such a thing.

(sec_mg)=

### Multigrid Preconditioners

A large suite of routines is available for using geometric multigrid as a preconditioner [^id3]. In the `PCMG` framework, the user is required to provide the coarse grid solver, smoothers, restriction and interpolation operators, and code to calculate residuals. The `PCMG` package allows these components to be encapsulated within a PETSc-compliant preconditioner. We fully support both matrix-free and matrix-based multigrid solvers.

A multigrid preconditioner is created with the four commands

```
KSPCreate(MPI_Comm comm,KSP *ksp);
KSPGetPC(KSP ksp,PC *pc);
PCSetType(PC pc,PCMG);
PCMGSetLevels(pc,PetscInt levels,MPI_Comm *comms);
```

If the number of levels is not set with `PCMGSetLevels()` or `-pc_mg_levels` and no `DM` has been attached to the `PCMG` with `KSPSetDM()` (or `SNESSetDM()` or `TSSetDM()`), then `PCMG` uses only one level! This is different from the algebraic multigrid methods such as `PCGAMG`, `PCML`, and `PCHYPRE`, which internally determine the number of levels to use.

A large number of parameters affect the multigrid behavior. The command

```
PCMGSetType(PC pc,PCMGType mode);
```

indicates which form of multigrid to apply {cite}`1sbg`.

For standard V or W-cycle multigrids, one sets the `mode` to be `PC_MG_MULTIPLICATIVE`; for the additive form (which in certain cases reduces to the BPX method, or additive multilevel Schwarz, or multilevel diagonal scaling), one uses `PC_MG_ADDITIVE` as the `mode`. For a variant of full multigrid, one can use `PC_MG_FULL`, and for the Kaskade algorithm `PC_MG_KASKADE`. For the multiplicative and full multigrid options, one can use a W-cycle by calling

```
PCMGSetCycleType(PC pc,PCMGCycleType ctype);
```

with a value of `PC_MG_CYCLE_W` for `ctype`. The commands above can also be set from the options database. The option names are `-pc_mg_type [multiplicative, additive, full, kaskade]`, and `-pc_mg_cycle_type <ctype>`.

The user can control the amount of smoothing by configuring the solvers on the levels. By default, the up and down smoothers are identical. If separate configuration of the up and down smoothers is required, it can be requested with the option `-pc_mg_distinct_smoothup` or the routine

```
PCMGSetDistinctSmoothUp(PC pc);
```

The multigrid routines, which determine the solvers and interpolation/restriction operators that are used, are mandatory. To set the coarse grid solver, one must call

```
PCMGGetCoarseSolve(PC pc,KSP *ksp);
```

and set the appropriate options in `ksp`. Similarly, the smoothers are controlled by first calling

```
PCMGGetSmoother(PC pc,PetscInt level,KSP *ksp);
```

and then setting the various options in the `ksp`. For example,

```
PCMGGetSmoother(pc,1,&ksp);
KSPSetOperators(ksp,A1,A1);
```

sets the matrix that defines the smoother on level 1 of the multigrid, while

```
PCMGGetSmoother(pc,1,&ksp);
KSPGetPC(ksp,&pc);
PCSetType(pc,PCSOR);
```

sets SOR as the smoother to use on level 1.

To use a different pre- or postsmoother, one should call the following routines instead.

```
PCMGGetSmootherUp(PC pc,PetscInt level,KSP *upksp);
PCMGGetSmootherDown(PC pc,PetscInt level,KSP *downksp);
```

Use

```
PCMGSetInterpolation(PC pc,PetscInt level,Mat P);
```

and

```
PCMGSetRestriction(PC pc,PetscInt level,Mat R);
```

to define the intergrid transfer operations. If only one of these is set, its transpose will be used for the other.

It is possible for these interpolation operations to be matrix-free (see {any}`sec_matrixfree`); one should then make sure that these operations are defined for the (matrix-free) matrices passed in. Note that this system is arranged so that if the interpolation is the transpose of the restriction, you can pass the same `mat` argument to both `PCMGSetRestriction()` and `PCMGSetInterpolation()`.
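For intuition about the transpose pair, here is a minimal 1D sketch (made up for illustration) of a linear-interpolation matrix $P$ from a 3-point coarse grid to a 5-point fine grid, with the corresponding restriction taken as $R = P^T$:

```python
# Linear interpolation from a 3-point coarse grid to a 5-point fine grid:
# fine points 0, 2, 4 coincide with coarse points; 1 and 3 are midpoints.
P = [
    [1.0, 0.0, 0.0],
    [0.5, 0.5, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.5, 0.5],
    [0.0, 0.0, 1.0],
]
R = [[P[i][j] for i in range(5)] for j in range(3)]  # R = P^T

def apply(M, v):
    """Dense matrix-vector product on nested lists."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

fine = apply(P, [0.0, 1.0, 0.0])               # coarse hat function -> [0, 0.5, 1, 0.5, 0]
coarse = apply(R, [1.0, 1.0, 1.0, 1.0, 1.0])   # averaged fine residual -> [1.5, 2, 1.5]
```

In PETSc one would pass a single `Mat` of this shape to `PCMGSetInterpolation()` (or `PCMGSetRestriction()`) and let the transpose action supply the other transfer.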
18977f296bb3SBarry Smith
18987f296bb3SBarry SmithOn each level except the coarsest, one must also set the routine to
18997f296bb3SBarry Smithcompute the residual. The following command suffices:
19007f296bb3SBarry Smith
19017f296bb3SBarry Smith```
19027f296bb3SBarry SmithPCMGSetResidual(PC pc,PetscInt level,PetscErrorCode (*residual)(Mat,Vec,Vec,Vec),Mat mat);
19037f296bb3SBarry Smith```
19047f296bb3SBarry Smith
19057f296bb3SBarry SmithThe `residual()` function normally does not need to be set if one’s
19067f296bb3SBarry Smithoperator is stored in `Mat` format. In certain circumstances, where it
19077f296bb3SBarry Smithis much cheaper to calculate the residual directly, rather than through
19087f296bb3SBarry Smiththe usual formula $b - Ax$, the user may wish to provide an
19097f296bb3SBarry Smithalternative.
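As an illustrative sketch (the function name and body are hypothetical), a user-provided residual routine matches this signature; a real implementation would replace the body with a cheaper, problem-specific computation:

```
PetscErrorCode MyResidual(Mat A,Vec b,Vec x,Vec r)
{
  PetscFunctionBeginUser;
  /* this sketch just computes the usual formula r = b - A x */
  PetscCall(MatMult(A,x,r));
  PetscCall(VecAYPX(r,-1.0,b));   /* r = b - r */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```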
19107f296bb3SBarry Smith
19117f296bb3SBarry SmithFinally, the user may provide three work vectors for each level (except
19127f296bb3SBarry Smithon the finest, where only the residual work vector is required). The
19137f296bb3SBarry Smithwork vectors are set with the commands
19147f296bb3SBarry Smith
19157f296bb3SBarry Smith```
19167f296bb3SBarry SmithPCMGSetRhs(PC pc,PetscInt level,Vec b);
19177f296bb3SBarry SmithPCMGSetX(PC pc,PetscInt level,Vec x);
19187f296bb3SBarry SmithPCMGSetR(PC pc,PetscInt level,Vec r);
19197f296bb3SBarry Smith```
19207f296bb3SBarry Smith
19217f296bb3SBarry SmithThe `PC` references these vectors, so you should call `VecDestroy()`
19227f296bb3SBarry Smithwhen you are finished with them. If any of these vectors are not
19237f296bb3SBarry Smithprovided, the preconditioner will allocate them.
19247f296bb3SBarry Smith
19257f296bb3SBarry SmithOne can control the `KSP` and `PC` options used on the various
19267f296bb3SBarry Smithlevels (as well as the coarse grid) using the prefix `mg_levels_`
19277f296bb3SBarry Smith(`mg_coarse_` for the coarse grid). For example,
19287f296bb3SBarry Smith`-mg_levels_ksp_type cg` will cause the CG method to be used as the
19297f296bb3SBarry SmithKrylov method for each level. Or
19307f296bb3SBarry Smith`-mg_levels_pc_type ilu -mg_levels_pc_factor_levels 2` will cause the
19317f296bb3SBarry SmithILU preconditioner to be used on each level with two levels of fill in
19327f296bb3SBarry Smiththe incomplete factorization.
19337f296bb3SBarry Smith
1934efba3485SBarry SmithIf `KSPSetDM()` (or `SNESSetDM()` or `TSSetDM()`) has been called, then `PCMG` will use geometric
1935efba3485SBarry Smithinformation from the `DM` to construct the multigrid hierarchy automatically. In this case one
1936efba3485SBarry Smithdoes not need to call the various `PCMGSet` routines listed above.
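A minimal sketch of this `DM`-based workflow on a structured grid (the grid sizes here are illustrative) is:

```
DM  da;
KSP ksp;
DMDACreate2d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DMDA_STENCIL_STAR,
             129,129,PETSC_DECIDE,PETSC_DECIDE,1,1,NULL,NULL,&da);
KSPCreate(PETSC_COMM_WORLD,&ksp);
KSPSetDM(ksp,da);
KSPSetFromOptions(ksp);   /* with -pc_type mg, the hierarchy is built from the DM */
```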
1937efba3485SBarry Smith
(sec_block_matrices)=

## Solving Block Matrices with PCFIELDSPLIT

Block matrices represent an important class of problems in numerical
linear algebra and offer the possibility of far more efficient iterative
solvers than just treating the entire matrix as a black box. In this
section, we use the common linear algebra definition of block matrices, where a matrix is divided into a small number (two,
three, or so, independent of the problem size) of very large blocks. These blocks arise naturally
from the underlying physics or discretization of the problem, such as the velocity and pressure. Under a certain numbering of
unknowns, the matrix can be written as

$$
\left( \begin{array}{cccc}
A_{00}   & A_{01} & A_{02} & A_{03} \\
A_{10}   & A_{11} & A_{12} & A_{13} \\
A_{20}   & A_{21} & A_{22} & A_{23} \\
A_{30}   & A_{31} & A_{32} & A_{33} \\
\end{array} \right),
$$

where each $A_{ij}$ is an entire block. The matrices on a parallel computer are not explicitly stored this way. Instead, each process will
own some rows of $A_{0*}$, $A_{1*}$ etc. On a
process, the blocks may be stored as one block followed by another

$$
\left( \begin{array}{ccccccc}
A_{{00}_{00}}   & A_{{00}_{01}} & A_{{00}_{02}} & ... & A_{{01}_{00}} & A_{{01}_{01}} & ...  \\
A_{{00}_{10}}   & A_{{00}_{11}} & A_{{00}_{12}} & ... & A_{{01}_{10}} & A_{{01}_{11}} & ... \\
A_{{00}_{20}}   & A_{{00}_{21}} & A_{{00}_{22}} & ... & A_{{01}_{20}} & A_{{01}_{21}}  & ...\\
... \\
A_{{10}_{00}}   & A_{{10}_{01}} & A_{{10}_{02}} & ... & A_{{11}_{00}} & A_{{11}_{01}}  & ... \\
A_{{10}_{10}}   & A_{{10}_{11}} & A_{{10}_{12}} & ... & A_{{11}_{10}} & A_{{11}_{11}}  & ... \\
... \\
\end{array} \right)
$$

or interlaced, for example, with four blocks

$$
\left( \begin{array}{ccccc}
A_{{00}_{00}}   & A_{{01}_{00}} &  A_{{00}_{01}} & A_{{01}_{01}} &  ... \\
A_{{10}_{00}}   & A_{{11}_{00}} &  A_{{10}_{01}} & A_{{11}_{01}} &  ... \\
A_{{00}_{10}}   & A_{{01}_{10}} & A_{{00}_{11}} & A_{{01}_{11}} & ...\\
A_{{10}_{10}}   & A_{{11}_{10}} & A_{{10}_{11}} & A_{{11}_{11}} & ...\\
...
\end{array} \right).
$$

Note that for interlaced storage, the number of rows/columns of each
block must be the same. Matrices obtained with `DMCreateMatrix()`
where the `DM` is a `DMDA` are always stored interlaced. Block
matrices can also be stored using the `MATNEST` format, which holds
separate assembled blocks. Each of these nested matrices is itself
distributed in parallel. It is more efficient to use `MATNEST` with
the methods described in this section because there are fewer copies and
better formats (e.g., `MATBAIJ` or `MATSBAIJ`) can be used for the
components, but it is not possible to use many other methods with
`MATNEST`. See {any}`sec_matnest` for more on assembling
block matrices without depending on a specific matrix format.

The PETSc `PCFIELDSPLIT` preconditioner implements the
“block” solvers in PETSc, {cite}`elman2008tcp`. There are three ways to provide the
information that defines the blocks. If the matrices are stored as
interlaced then `PCFieldSplitSetFields()` can be called repeatedly to
indicate which fields belong to each block. More generally
`PCFieldSplitSetIS()` can be used to indicate exactly which
rows/columns of the matrix belong to a particular block (field). You can provide
names for each block with these routines; if you do not, they are numbered from 0. With these two approaches, the blocks may
overlap (though in practice they rarely do). If only one block is defined,
then its complement is used to define the other block.
Finally, the option `-pc_fieldsplit_detect_saddle_point` causes two
diagonal blocks to be found: one containing all rows/columns that
have zero on the diagonal, and the other containing the rest.

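For instance, with two index sets `isu` and `isp` (assumed to have been constructed to select, say, the velocity and pressure unknowns), the splits could be defined as:

```
PCFieldSplitSetIS(pc,"u",isu);
PCFieldSplitSetIS(pc,"p",isp);
/* sub-solver options then use the prefixes -fieldsplit_u_ and -fieldsplit_p_ */
```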
**Important parameters for PCFIELDSPLIT**

- Control the fields used

  - `-pc_fieldsplit_detect_saddle_point` \<bool:false> Generate two fields: the first consists of all rows with a nonzero on the diagonal, and the second contains all rows
    with zero on the diagonal. See `PCFieldSplitSetDetectSaddlePoint()`.

  - `-pc_fieldsplit_dm_splits` \<bool:true> Use the `DM` attached to the preconditioner to determine the fields. See `PCFieldSplitSetDMSplits()` and
    `DMCreateFieldDecomposition()`.

  - `-pc_fieldsplit_%d_fields` \<f1,f2,...:int> Use f1, f2, .. to define field `d`. The `fn` are in the range of 0, ..., bs-1 where bs is the block size
    of the matrix or set with `PCFieldSplitSetBlockSize()`. See `PCFieldSplitSetFields()`.

    - `-pc_fieldsplit_default` \<bool:true> Automatically add any fields needed that have not been supplied explicitly by `-pc_fieldsplit_%d_fields`.

  - `PCFieldSplitSetIS()` Provide the `IS` that defines a particular field.

- Control the type of the block preconditioner

  - `-pc_fieldsplit_type` \<additive|multiplicative|symmetric_multiplicative|schur|gkb:multiplicative> The order in which the field solves are applied.
    For symmetric problems where `KSPCG` is used, `symmetric_multiplicative` must be used instead of `multiplicative`. `additive` is the least expensive
    to apply but provides the worst convergence. `schur` requires either a good preconditioner for the Schur complement or a naturally well-conditioned
    Schur complement, but when it works well it can be extremely effective. See `PCFieldSplitSetType()`. `gkb` is for symmetric saddle-point problems (where the lower-right
    block is zero).

  - `-pc_fieldsplit_diag_use_amat` \<bool:false> Use the first matrix that is passed to `KSPSetOperators()` to construct the block-diagonal sub-matrices used in the algorithms;
    by default, the second matrix is used.

  - Options for Schur preconditioner: `-pc_fieldsplit_type`
    `schur`

    - `-pc_fieldsplit_schur_fact_type` \<diag|lower|upper|full:diag> See `PCFieldSplitSetSchurFactType()`. `full` reduces the iterations but each iteration requires additional
      field solves.

    - `-pc_fieldsplit_schur_precondition` \<self|selfp|user|a11|full:a11> How the Schur complement is preconditioned. See `PCFieldSplitSetSchurPre()`.

      - `-fieldsplit_1_mat_schur_complement_ainv_type` \<diag|lump:diag> Use the lumped diagonal of $A_{00}$ when `-pc_fieldsplit_schur_precondition`
        `selfp` is used.

    - `-pc_fieldsplit_schur_scale` \<real:-1.0> Controls the sign flip of S for `-pc_fieldsplit_schur_fact_type` `diag`.
      See `PCFieldSplitSetSchurScale()`

    - `-fieldsplit_1_xxx` controls the solver for the Schur complement system.
      If a `DM` provided the fields, use the second field name set in the `DM` instead of 1.

      - `-fieldsplit_1_pc_type` `lsc` `-fieldsplit_1_lsc_pc_xxx` use
        the least squares commutators {cite}`elmanhowleshadidshuttleworthtuminaro2006` {cite}`silvester2001efficient`
        preconditioner for the Schur complement with any preconditioner for the least-squares matrix, see `PCLSC`.
        If a `DM` provided the fields, use the second field name set in the `DM` instead of 1.

    - `-fieldsplit_upper_xxx` Set options for the upper block solver when `-pc_fieldsplit_schur_fact_type`
      `upper` or `full` is used. Defaults to
      using the solver provided with `-fieldsplit_0_xxx`.

    - `-fieldsplit_1_inner_xxx` Set the options for the solver inside the application of the Schur complement;
      defaults to using the solver as provided with `-fieldsplit_0_xxx`. If a `DM` provides the fields, use the name of the second field set in the `DM` instead of 1.

  - Options for GKB preconditioner: `-pc_fieldsplit_type` `gkb`

    - `-pc_fieldsplit_gkb_tol` \<real:1e-5> See `PCFieldSplitSetGKBTol()`.
    - `-pc_fieldsplit_gkb_delay` \<int:5> See `PCFieldSplitSetGKBDelay()`.
    - `-pc_fieldsplit_gkb_nu` \<real:1.0> See `PCFieldSplitSetGKBNu()`.
    - `-pc_fieldsplit_gkb_maxit` \<int:100> See `PCFieldSplitSetGKBMaxit()`.
    - `-pc_fieldsplit_gkb_monitor` \<bool:false> Monitor the convergence of the inner solver.

- Options for additive and multiplicative field solvers:

  - `-fieldsplit_%d_xxx` Set options for the solver for field number `d`. For example, `-fieldsplit_0_pc_type`
    `jacobi`. When the fields are obtained from a `DM`, use the
    field name instead of `d`.

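Putting several of these options together, one possible Schur-complement configuration (the particular sub-solver choices here are illustrative, not a recommendation) is:

```
-pc_type fieldsplit -pc_fieldsplit_type schur
-pc_fieldsplit_schur_fact_type full -pc_fieldsplit_schur_precondition selfp
-fieldsplit_0_pc_type gamg -fieldsplit_1_ksp_rtol 1.e-4
```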
For simplicity, we restrict our matrices to two-by-two blocks in the rest of the section. So the matrix is

$$
\left( \begin{array}{cc}
A_{00}   & A_{01} \\
A_{10}   & A_{11} \\
\end{array} \right).
$$

On occasion, the user may provide another matrix that is used to
construct parts of the preconditioner

$$
\left( \begin{array}{cc}
Ap_{00}   & Ap_{01} \\
Ap_{10}   & Ap_{11} \\
\end{array} \right).
$$

For notational simplicity, define $\text{ksp}(A,Ap)$ to mean
approximately solving a linear system using `KSP` with the operator
$A$ and preconditioner built from matrix $Ap$.

For matrices defined with any number of blocks, there are three “block”
algorithms available: block Jacobi,

$$
\left( \begin{array}{cc}
  \text{ksp}(A_{00},Ap_{00})   & 0 \\
  0   & \text{ksp}(A_{11},Ap_{11}) \\
\end{array} \right)
$$

block Gauss-Seidel,

$$
\left( \begin{array}{cc}
I   & 0 \\
0 & A^{-1}_{11} \\
\end{array} \right)
\left( \begin{array}{cc}
I   & 0 \\
-A_{10} & I \\
\end{array} \right)
\left( \begin{array}{cc}
A^{-1}_{00}   & 0 \\
0 & I \\
\end{array} \right)
$$

which is implemented [^id4] as

$$
\left( \begin{array}{cc}
I   & 0 \\
  0 & \text{ksp}(A_{11},Ap_{11}) \\
\end{array} \right)
$$

$$
\left[
\left( \begin{array}{cc}
0   & 0 \\
0 & I \\
\end{array} \right) +
\left( \begin{array}{cc}
I   & 0 \\
-A_{10} & -A_{11} \\
\end{array} \right)
\left( \begin{array}{cc}
I   & 0 \\
0 & 0 \\
\end{array} \right)
\right]
$$

$$
\left( \begin{array}{cc}
  \text{ksp}(A_{00},Ap_{00})   & 0 \\
0 & I \\
\end{array} \right)
$$

and symmetric block Gauss-Seidel

$$
\left( \begin{array}{cc}
A_{00}^{-1}   & 0 \\
0 & I \\
\end{array} \right)
\left( \begin{array}{cc}
I   & -A_{01} \\
0 & I \\
\end{array} \right)
\left( \begin{array}{cc}
A_{00}   & 0 \\
0 & A_{11}^{-1} \\
\end{array} \right)
\left( \begin{array}{cc}
I   & 0 \\
-A_{10} & I \\
\end{array} \right)
\left( \begin{array}{cc}
A_{00}^{-1}   & 0 \\
0 & I \\
\end{array} \right).
$$

These can be accessed with
`-pc_fieldsplit_type <additive,multiplicative,symmetric_multiplicative>`
or the function `PCFieldSplitSetType()`. The option prefixes for the
internal KSPs are given by `-fieldsplit_name_`.

By default blocks $A_{00}, A_{01}$ and so on are extracted out of
`Pmat`, the matrix that the `KSP` uses to build the preconditioner,
and not out of `Amat` (i.e., $A$ itself). As discussed above, in
{any}`sec_combining_pcs`, however, it is
possible to use `Amat` instead of `Pmat` by calling
`PCSetUseAmat(pc)` or using `-pc_use_amat` on the command line.
Alternatively, you can have `PCFIELDSPLIT` extract the diagonal blocks
$A_{00}, A_{11}$ etc. out of `Amat` by calling
`PCFieldSplitSetDiagUseAmat(pc,PETSC_TRUE)` or supplying command-line
argument `-pc_fieldsplit_diag_use_amat`. Similarly,
`PCFieldSplitSetOffDiagUseAmat(pc,PETSC_TRUE)` or
`-pc_fieldsplit_off_diag_use_amat` will cause the off-diagonal blocks
$A_{01},A_{10}$ etc. to be extracted out of `Amat`.

For two-by-two blocks only, there is another family of solvers based on
Schur complements. The inverse of the Schur complement factorization is

$$
\left[
\left( \begin{array}{cc}
I   & 0 \\
A_{10}A_{00}^{-1} & I \\
\end{array} \right)
\left( \begin{array}{cc}
A_{00}  & 0 \\
0 & S \\
\end{array} \right)
\left( \begin{array}{cc}
I   & A_{00}^{-1} A_{01} \\
0 & I \\
\end{array} \right)
\right]^{-1} =
$$

$$
\left( \begin{array}{cc}
I   & A_{00}^{-1} A_{01} \\
0 & I \\
\end{array} \right)^{-1}
\left( \begin{array}{cc}
A_{00}^{-1}  & 0 \\
0 & S^{-1} \\
\end{array} \right)
\left( \begin{array}{cc}
I   & 0 \\
A_{10}A_{00}^{-1} & I \\
\end{array} \right)^{-1} =
$$

$$
\left( \begin{array}{cc}
I   & -A_{00}^{-1} A_{01} \\
0 & I \\
\end{array} \right)
\left( \begin{array}{cc}
A_{00}^{-1}  & 0 \\
0 & S^{-1} \\
\end{array} \right)
\left( \begin{array}{cc}
I   & 0 \\
-A_{10}A_{00}^{-1} & I \\
\end{array} \right) =
$$

$$
\left( \begin{array}{cc}
A_{00}^{-1}   & 0 \\
0 & I \\
\end{array} \right)
\left( \begin{array}{cc}
I   & -A_{01} \\
0 & I \\
\end{array} \right)
\left( \begin{array}{cc}
A_{00} & 0 \\
0 & S^{-1} \\
\end{array} \right)
\left( \begin{array}{cc}
I   & 0 \\
-A_{10} & I \\
\end{array} \right)
\left( \begin{array}{cc}
A_{00}^{-1}   & 0 \\
0 & I \\
\end{array} \right).
$$

The preconditioner is accessed with `-pc_fieldsplit_type` `schur` and is
implemented as

$$
\left( \begin{array}{cc}
  \text{ksp}(A_{00},Ap_{00})   & 0 \\
0 & I \\
\end{array} \right)
\left( \begin{array}{cc}
I   & -A_{01} \\
0 & I \\
\end{array} \right)
$$

$$
\left( \begin{array}{cc}
I  & 0 \\
  0 & \text{ksp}(\hat{S},\hat{S}p) \\
\end{array} \right)
\left( \begin{array}{cc}
I   & 0 \\
  -A_{10} \text{ksp}(A_{00},Ap_{00}) & I \\
\end{array} \right).
$$

Here,
$\hat{S} = A_{11} - A_{10} \text{ksp}(A_{00},Ap_{00}) A_{01}$ is
the approximate Schur complement.

There are several variants of the Schur complement preconditioner
obtained by dropping some of the terms; these can be obtained with
`-pc_fieldsplit_schur_fact_type <diag,lower,upper,full>` or the
function `PCFieldSplitSetSchurFactType()`. Note that the `diag` form
uses the preconditioner

$$
\left( \begin{array}{cc}
  \text{ksp}(A_{00},Ap_{00})   & 0 \\
  0 & -\text{ksp}(\hat{S},\hat{S}p) \\
\end{array} \right).
$$

This is done to ensure the preconditioner is positive definite for a
common class of problems, saddle points with a positive definite
$A_{00}$: for these, the Schur complement is negative definite.

The effectiveness of the Schur complement preconditioner depends on the
availability of a good preconditioner $\hat Sp$ for the Schur
complement matrix. In general, you are responsible for supplying
$\hat Sp$ via
`PCFieldSplitSetSchurPre(pc,PC_FIELDSPLIT_SCHUR_PRE_USER,Sp)`.
Without a good problem-specific $\hat Sp$, you can use
some built-in options.

Using `-pc_fieldsplit_schur_precondition user` on the command line
activates the matrix supplied programmatically, as explained above.

With `-pc_fieldsplit_schur_precondition a11` (default)
$\hat Sp = A_{11}$ is used to build a preconditioner for
$\hat S$.

Otherwise, `-pc_fieldsplit_schur_precondition self` will set
$\hat Sp = \hat S$ and use the Schur complement matrix itself to
build the preconditioner.

The problem with the last approach is that $\hat S$ is used in
the unassembled, matrix-free form, and many preconditioners (e.g., ILU)
cannot be built from such matrices. Instead, you can *assemble* an
approximation to $\hat S$ that uses only an approximate inverse of
$A_{00}$, chosen so that $\hat Sp$ remains as sparse as
possible. Specifically, using
`-pc_fieldsplit_schur_precondition selfp` will assemble
$\hat Sp = A_{11} - A_{10} \text{inv}(A_{00}) A_{01}$.

By default $\text{inv}(A_{00})$ is the inverse of the diagonal of
$A_{00}$, but using
`-fieldsplit_1_mat_schur_complement_ainv_type lump` will lump
$A_{00}$ first. Using
`-fieldsplit_1_mat_schur_complement_ainv_type blockdiag` will use the
inverse of the block diagonal of $A_{00}$. The option
`-mat_schur_complement_ainv_type` applies to any matrix of
`MatSchurComplement` type; here it is used with the prefix
`-fieldsplit_1` of the linear system in the second split.

Finally, you can use the `PCLSC` preconditioner for the Schur
complement with `-pc_fieldsplit_type schur -fieldsplit_1_pc_type lsc`.
This uses as the preconditioner for $\hat{S}$ the operator

$$
\text{ksp}(A_{10} A_{01},A_{10} A_{01}) A_{10} A_{00} A_{01} \text{ksp}(A_{10} A_{01},A_{10} A_{01})
$$

which, of course, introduces two additional inner solves for each
application of the Schur complement. The options prefix for this inner
`KSP` is `-fieldsplit_1_lsc_`. Instead of constructing the matrix
$A_{10} A_{01}$, users can provide their own matrix. This is
done by attaching the matrix/matrices to the $Sp$ matrix they
provide with

```
PetscObjectCompose((PetscObject)Sp,"LSC_L",(PetscObject)L);
PetscObjectCompose((PetscObject)Sp,"LSC_Lp",(PetscObject)Lp);
```

23887f296bb3SBarry Smith(sec_singular)=
23897f296bb3SBarry Smith
23907f296bb3SBarry Smith## Solving Singular Systems
23917f296bb3SBarry Smith
23927f296bb3SBarry SmithSometimes one is required to solver singular linear systems. In this
23937f296bb3SBarry Smithcase, the system matrix has a nontrivial null space. For example, the
23947f296bb3SBarry Smithdiscretization of the Laplacian operator with Neumann boundary
23957f296bb3SBarry Smithconditions has a null space of the constant functions. PETSc has tools
23967f296bb3SBarry Smithto help solve these systems. This approach is only guaranteed to work for left preconditioning (see `KSPSetPCSide()`); for example it
23977f296bb3SBarry Smithmay not work in some situations with `KSPFGMRES`.
23987f296bb3SBarry Smith
23997f296bb3SBarry SmithFirst, one must know what the null space is and store it using an
24007f296bb3SBarry Smithorthonormal basis in an array of PETSc Vecs. The constant functions can
24017f296bb3SBarry Smithbe handled separately, since they are such a common case. Create a
24027f296bb3SBarry Smith`MatNullSpace` object with the command
24037f296bb3SBarry Smith
24047f296bb3SBarry Smith```
24057f296bb3SBarry SmithMatNullSpaceCreate(MPI_Comm,PetscBool hasconstants,PetscInt dim,Vec *basis,MatNullSpace *nsp);
24067f296bb3SBarry Smith```
24077f296bb3SBarry Smith
24087f296bb3SBarry SmithHere, `dim` is the number of vectors in `basis` and `hasconstants`
24097f296bb3SBarry Smithindicates if the null space contains the constant functions. If the null
24107f296bb3SBarry Smithspace contains the constant functions you do not need to include it in
24117f296bb3SBarry Smiththe `basis` vectors you provide, nor in the count `dim`.
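For the common case where the null space consists only of the constant functions, this reduces to:

```
MatNullSpace nsp;
MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,NULL,&nsp);
```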
24127f296bb3SBarry Smith
24137f296bb3SBarry SmithOne then tells the `KSP` object you are using what the null space is
24147f296bb3SBarry Smithwith the call
24157f296bb3SBarry Smith
24167f296bb3SBarry Smith```
24177f296bb3SBarry SmithMatSetNullSpace(Mat Amat,MatNullSpace nsp);
24187f296bb3SBarry Smith```
24197f296bb3SBarry Smith
24207f296bb3SBarry SmithThe `Amat` should be the *first* matrix argument used with
24217f296bb3SBarry Smith`KSPSetOperators()`, `SNESSetJacobian()`, or `TSSetIJacobian()`.
24227f296bb3SBarry SmithThe PETSc solvers will now
24237f296bb3SBarry Smithhandle the null space during the solution process.
24247f296bb3SBarry Smith
If the right-hand side of the linear system is not in the range of `Amat`, that is, it is not
orthogonal to the null space of `Amat` transpose, then the residual
norm of the Krylov iteration will not converge to zero; it will converge to a non-zero value while the
solution converges to the least squares solution of the linear system. One can, if desired,
apply `MatNullSpaceRemove()` with the null space of `Amat` transpose to the right-hand side before calling
`KSPSolve()`. Then the residual norm will converge to zero.

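A minimal sketch of this projection, assuming `nsptrans` holds the null space of `Amat` transpose and that `ksp`, `b`, and `x` are already set up:

```c
/* Remove the component of b outside the range of Amat, then solve;
   the residual norm can now converge to zero */
PetscCall(MatNullSpaceRemove(nsptrans, b));
PetscCall(KSPSolve(ksp, b, x));
```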
If one chooses a direct solver (or an incomplete factorization) it may
still detect a zero pivot. You can run with the additional options
`-pc_factor_shift_type NONZERO` and
`-pc_factor_shift_amount <dampingfactor>` to prevent the zero pivot.
A good choice for the `dampingfactor` is 1.e-10.

If the matrix is non-symmetric and you wish to solve the transposed linear system
you must provide the null space of the transposed matrix with `MatSetTransposeNullSpace()`.

(sec_externalsol)=

## Using External Linear Solvers

PETSc interfaces to several external linear solvers (also see {any}`acknowledgements`).
To use these solvers, one may:

1. Run `configure` with the additional options
   `--download-packagename`, e.g. `--download-superlu_dist`
   `--download-parmetis` (SuperLU_DIST needs ParMetis) or
   `--download-mumps` `--download-scalapack` (MUMPS requires
   ScaLAPACK).
2. Build the PETSc libraries.
3. Use the runtime options `-ksp_type preonly` (or equivalently `-ksp_type none`) `-pc_type <pctype>`
   `-pc_factor_mat_solver_type <packagename>`. For example:
   `-ksp_type preonly` `-pc_type lu`
   `-pc_factor_mat_solver_type superlu_dist`.

```{eval-rst}
.. list-table:: Options for External Solvers
   :name: tab-externaloptions
   :header-rows: 1

   * - MatType
     - PCType
     - MatSolverType
     - Package
   * - ``seqaij``
     - ``lu``
     - ``MATSOLVERESSL``
     - ``essl``
   * - ``seqaij``
     - ``lu``
     - ``MATSOLVERLUSOL``
     - ``lusol``
   * - ``seqaij``
     - ``lu``
     - ``MATSOLVERMATLAB``
     - ``matlab``
   * - ``aij``
     - ``lu``
     - ``MATSOLVERMUMPS``
     - ``mumps``
   * - ``aij``
     - ``cholesky``
     - -
     - -
   * - ``sbaij``
     - ``cholesky``
     - -
     - -
   * - ``seqaij``
     - ``lu``
     - ``MATSOLVERSUPERLU``
     - ``superlu``
   * - ``aij``
     - ``lu``
     - ``MATSOLVERSUPERLU_DIST``
     - ``superlu_dist``
   * - ``seqaij``
     - ``lu``
     - ``MATSOLVERUMFPACK``
     - ``umfpack``
   * - ``seqaij``
     - ``cholesky``
     - ``MATSOLVERCHOLMOD``
     - ``cholmod``
   * - ``seqaij``
     - ``lu``
     - ``MATSOLVERKLU``
     - ``klu``
   * - ``dense``
     - ``lu``
     - ``MATSOLVERELEMENTAL``
     - ``elemental``
   * - ``dense``
     - ``cholesky``
     - -
     - -
   * - ``seqaij``
     - ``lu``
     - ``MATSOLVERMKL_PARDISO``
     - ``mkl_pardiso``
   * - ``aij``
     - ``lu``
     - ``MATSOLVERMKL_CPARDISO``
     - ``mkl_cpardiso``
   * - ``aij``
     - ``lu``
     - ``MATSOLVERPASTIX``
     - ``pastix``
   * - ``aij``
     - ``cholesky``
     - ``MATSOLVERBAS``
     - ``bas``
   * - ``aijcusparse``
     - ``lu``
     - ``MATSOLVERCUSPARSE``
     - ``cusparse``
   * - ``aijcusparse``
     - ``cholesky``
     - -
     - -
   * - ``aij``
     - ``lu``, ``cholesky``
     - ``MATSOLVERPETSC``
     - ``petsc``
   * - ``baij``
     - -
     - -
     - -
   * - ``aijcrl``
     - -
     - -
     - -
   * - ``aijperm``
     - -
     - -
     - -
   * - ``seqdense``
     - -
     - -
     - -
```

The default and available input options for each external package can
be found by specifying `-help` at runtime.

As an alternative to using runtime flags to employ these external
packages, procedural calls are provided for some packages. For example,
the following procedural calls are equivalent to the runtime options
`-ksp_type preonly` (or equivalently `-ksp_type none`) `-pc_type lu`
`-pc_factor_mat_solver_type mumps` `-mat_mumps_icntl_7 3`:

```
KSPSetType(ksp, KSPPREONLY); // (or equivalently KSPSetType(ksp, KSPNONE))
KSPGetPC(ksp, &pc);
PCSetType(pc, PCLU);
PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);
PCFactorSetUpMatSolverType(pc);
PCFactorGetMatrix(pc, &F);
icntl = 7;
ival  = 3;
MatMumpsSetIcntl(F, icntl, ival);
```

One can also create matrices with the appropriate capabilities by
calling `MatCreate()` followed by `MatSetType()` specifying the
desired matrix type from {any}`tab-externaloptions`. These
matrix types inherit capabilities from their PETSc matrix parents:
`MATSEQAIJ`, `MATMPIAIJ`, etc. As a result, the preallocation routines
`MatSeqAIJSetPreallocation()`, `MatMPIAIJSetPreallocation()`, etc.,
and any other type-specific routines of the base class are supported.
One can also call `MatConvert()` in-place to convert the matrix to and
from its base class without performing an expensive data copy.
`MatConvert()` cannot be called on matrices that have already been
factored.

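For example, an assembled `MATAIJ` matrix might be converted in-place so that a package-specific solver can factor it (a sketch; `A` is assumed already created and assembled):

```c
/* In-place conversion: A keeps its entries, only the type changes,
   and no separate copy of the matrix data is made */
PetscCall(MatConvert(A, MATAIJCUSPARSE, MAT_INPLACE_MATRIX, &A));
```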
In {any}`tab-externaloptions`, the base class `aij` refers
to the fact that inheritance is based on `MATSEQAIJ` when constructed
with a single process communicator, and on `MATMPIAIJ` otherwise.
The same holds for `baij` and `sbaij`. For codes that are intended
to be run either as a single process or with multiple processes, depending
on the `mpiexec` command, it is recommended that both sets of
preallocation routines be called for these communicator-morphing types.
The call for the incorrect type will simply be ignored without any harm
or message.

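A sketch of this dual-preallocation pattern (the nonzero counts are illustrative, and `n` is assumed to hold the global dimension):

```c
Mat A;
PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
PetscCall(MatSetType(A, MATAIJ));
/* Used when run on one process; ignored otherwise */
PetscCall(MatSeqAIJSetPreallocation(A, 5, NULL));
/* Used when run on multiple processes; ignored on one */
PetscCall(MatMPIAIJSetPreallocation(A, 5, NULL, 2, NULL));
```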
(sec_pcmpi)=

## Using PETSc's MPI parallel linear solvers from a non-MPI program

Using PETSc's MPI linear solver server it is possible to use multiple MPI processes to solve
a linear system when the application code, including the matrix generation, is run on a single
MPI process (with or without OpenMP). The application code must be built with MPI, must call
`PetscInitialize()` at the very beginning of the program, and must end with `PetscFinalize()`.
The code may create multiple matrices and `KSP` objects and call `KSPSolve()`; similarly, the
code may utilize the `SNES` nonlinear solvers, the `TS` ODE integrators, and the `Tao` optimization algorithms,
all of which use `KSP`.

The program must then be launched using the standard approaches for launching MPI programs, with the additional
PETSc option `-mpi_linear_solver_server`. The linear solves are controlled via the options database in the usual manner
(using any options prefix you may have provided via `KSPSetOptionsPrefix()`), for example `-ksp_type cg -ksp_monitor -pc_type bjacobi -ksp_view`. The solver options cannot be set via
the functional interface, for example `KSPSetType()`, etc.

The option `-mpi_linear_solver_server_view` will print
a summary of all the systems solved by the MPI linear solver server when the program completes. By default the linear solver server
will only parallelize the linear solve to the extent that it believes is appropriate to obtain speedup from the parallel solve; for example, if the
matrix has 1,000 rows and columns the solution will not be parallelized. One can use the option `-mpi_linear_solver_server_minimum_count_per_rank 5000`
to allow the linear solver server to use as few as 5,000 unknowns per MPI process in the parallel solve.

See `PCMPI`, `PCMPIServerBegin()`, and `PCMPIServerEnd()` for more details on the solvers.

For help when anything goes wrong with the MPI linear solver server, see `PCMPIServerBegin()`.

Amdahl's law makes clear that parallelizing only a portion of a numerical code can provide only a limited improvement
in the computation time; thus it is crucial to understand what phases of a computation must be parallelized (via MPI, OpenMP, or some other model)
to ensure a useful increase in performance. One of the crucial phases is likely the generation of the matrix entries; the
use of `MatSetPreallocationCOO()` and `MatSetValuesCOO()` in an OpenMP code allows parallelizing the generation of the matrix.

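A sketch of COO-style assembly, where the arrays `coo_i`, `coo_j`, and `coo_v` of length `ncoo` are assumed to be computed by the application; the value array can be filled inside an OpenMP parallel loop:

```c
/* Set the nonzero pattern once (i,j pairs may repeat; duplicates are summed) */
PetscCall(MatSetPreallocationCOO(A, ncoo, coo_i, coo_j));
/* ... fill coo_v[0..ncoo-1], e.g. under
   #pragma omp parallel for
   ... then insert all values in a single call */
PetscCall(MatSetValuesCOO(A, coo_v, INSERT_VALUES));
```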
See {any}`sec_pcmpi_study` for a study of the use of `PCMPI` on a specific PETSc application.

```{rubric} Footnotes
```

[^id3]: See {any}`sec_amg` for information on using algebraic multigrid.

[^id4]: This may seem an odd way to implement it, since it involves the "extra"
    multiply by $-A_{11}$. The reason it is implemented this
    way is that this approach works for any number of blocks that may
    overlap.

```{rubric} References
```

```{eval-rst}
.. bibliography:: /petsc.bib
   :filter: docname in docnames
```