xref: /petsc/doc/manual/advanced.md (revision d000e8ac2eed99d0cf14796aab94de0aed57fd8b)
(ch_advanced)=

# Advanced Features of Matrices and Solvers

This chapter introduces additional features of the PETSc matrices and
solvers.

(sec_matsub)=

## Extracting Submatrices

One can extract a (parallel) submatrix from a given (parallel) matrix using

```
MatCreateSubMatrix(Mat A,IS rows,IS cols,MatReuse call,Mat *B);
```

This extracts the `rows` and `cols` of the matrix `A` into
`B`. If `call` is `MAT_INITIAL_MATRIX` it will create the matrix
`B`. If `call` is `MAT_REUSE_MATRIX` it will reuse the `B` created
by a previous call. This function is used internally by `PCFIELDSPLIT`.

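As an illustrative sketch (not taken from the manual's examples), each process can select its locally owned rows and the corresponding columns; for a square `A` with matching row and column ownership this simply reproduces `A`, but the calling pattern is the same for any selection:

```
IS       rows, cols;
Mat      B;
PetscInt rstart, rend;

/* each process selects its locally owned rows (and the same global columns) */
PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
PetscCall(ISCreateStride(PETSC_COMM_WORLD, rend - rstart, rstart, 1, &rows));
PetscCall(ISCreateStride(PETSC_COMM_WORLD, rend - rstart, rstart, 1, &cols));
PetscCall(MatCreateSubMatrix(A, rows, cols, MAT_INITIAL_MATRIX, &B));
/* ... use B ... */
PetscCall(MatDestroy(&B));
PetscCall(ISDestroy(&rows));
PetscCall(ISDestroy(&cols));
```
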
One can also extract one or more submatrices per MPI process with

```
MatCreateSubMatrices(Mat A,PetscInt n,IS rows[],IS cols[],MatReuse call,Mat *B[]);
```

This extracts `n` (zero or more) matrices with the `rows[k]` and `cols[k]` of the matrix `A` into an array of
sequential matrices `B[k]` on this process. If `call` is `MAT_INITIAL_MATRIX` it will create the array of matrices
`B`. If `call` is `MAT_REUSE_MATRIX` it will reuse the `B` created
by a previous call. The `IS` arguments are sequential. The array of matrices should be destroyed with `MatDestroySubMatrices()`.
This function is used by `PCBJACOBI` and `PCASM`.

Each submatrix may be parallel, existing on an `MPI_Comm` associated with each pair of `IS` `rows[k]` and `cols[k]`,
using

```
MatCreateSubMatricesMPI(Mat A,PetscInt n,IS rows[],IS cols[],MatReuse call,Mat *B[]);
```

Finally, this version has a specialization,

```
MatGetMultiProcBlock(Mat A, MPI_Comm subComm, MatReuse scall,Mat *subMat);
```

in which collections of non-overlapping MPI processes share a single parallel matrix on their sub-communicator.
This function is used by `PCBJACOBI` and `PCASM`.

The routine

```
MatCreateRedundantMatrix(Mat A,PetscInt nsubcomm,MPI_Comm subcomm,MatReuse reuse,Mat *matredundant);
```

stores `nsubcomm` copies of the entire matrix, one on each `subcomm`. The routine `PetscSubcommCreate()` and its
`PetscSubcomm` object may, but need not, be used to construct the `subcomm`.

The routine

```
MatMPIAdjToSeq(Mat A,Mat *B);
```

is a specialization that duplicates an entire `MATMPIADJ` matrix on each MPI process.

(sec_matfactor)=

## Matrix Factorization

Normally, PETSc users will access the matrix solvers through the `KSP`
interface, as discussed in {any}`ch_ksp`, but the
underlying factorization and triangular solve routines are also directly
accessible to the user.

The ILU, LU, ICC, Cholesky, and QR matrix factorizations are split into two or three
stages depending on the user’s needs. The first stage is to calculate an
ordering for the matrix. The ordering generally is done to reduce fill
in a sparse factorization; it does not make much sense for a dense
matrix.

```
MatGetOrdering(Mat matrix,MatOrderingType type,IS* rowperm,IS* colperm);
```

The currently available alternatives for the ordering `type` are

- `MATORDERINGNATURAL` - Natural
- `MATORDERINGND` - Nested Dissection
- `MATORDERING1WD` - One-way Dissection
- `MATORDERINGRCM` - Reverse Cuthill-McKee
- `MATORDERINGQMD` - Quotient Minimum Degree

These orderings can also be set through the options database.

Certain matrix formats may support only a subset of these. All of
these orderings are symmetric at the moment; ordering routines that are
not symmetric may be added. Currently we support orderings only for
sequential matrices.

Users can add their own ordering routines by providing a function with
the calling sequence

```
PetscErrorCode reorder(Mat A,MatOrderingType type,IS* rowperm,IS* colperm);
```

Here `A` is the matrix for which we wish to generate a new ordering,
`type` may be ignored, and `rowperm` and `colperm` are the row and
column permutations generated by the ordering routine. The user
registers the ordering routine with the command

```
MatOrderingRegister(MatOrderingType ordname,char *path,char *sname,PetscErrorCode (*reorder)(Mat,MatOrderingType,IS*,IS*));
```

The input argument `ordname` is a string of the user’s choice,
either an ordering defined in `petscmat.h` or the name
of a new ordering introduced by the user. See the code in
`src/mat/impls/order/sorder.c` and other files in that
directory for examples of how the reordering routines may be written.

Once the reordering routine has been registered, it can be selected for
use at runtime with the command line option
`-pc_factor_mat_ordering_type` `ordname`. If reordering from the API, the
user should provide the `ordname` as the second input argument of
`MatGetOrdering()`.

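As a minimal sketch of the API usage, assuming `A` is an existing sequential matrix:

```
IS rowperm, colperm;

/* compute a Reverse Cuthill-McKee ordering of A */
PetscCall(MatGetOrdering(A, MATORDERINGRCM, &rowperm, &colperm));
/* ... pass rowperm and colperm to a factorization routine ... */
PetscCall(ISDestroy(&rowperm));
PetscCall(ISDestroy(&colperm));
```
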
PETSc matrices interface to a variety of external factorization/solver packages via the `MatSolverType`, which can be
`MATSOLVERSUPERLU_DIST`, `MATSOLVERMUMPS`, `MATSOLVERPASTIX`, `MATSOLVERMKL_PARDISO`, `MATSOLVERMKL_CPARDISO`,
`MATSOLVERUMFPACK`, `MATSOLVERCHOLMOD`, `MATSOLVERKLU`,
`MATSOLVERCUSPARSE`, and `MATSOLVERCUDA`.
The last three of these can run on GPUs, while `MATSOLVERSUPERLU_DIST` can partially run on GPUs.
See {any}`doc_linsolve` for a table of the factorization-based solvers in PETSc.

Most of these packages compute their own orderings and cannot use ones provided by the user, so calls to the following routines with those
packages may pass `NULL` for the `IS` permutations.

The following routines perform incomplete and complete, in-place, symbolic, and
numerical factorizations for symmetric and nonsymmetric matrices:

```
MatICCFactor(Mat matrix,IS permutation,const MatFactorInfo *info);
MatCholeskyFactor(Mat matrix,IS permutation,const MatFactorInfo *info);
MatILUFactor(Mat matrix,IS rowpermutation,IS columnpermutation,const MatFactorInfo *info);
MatLUFactor(Mat matrix,IS rowpermutation,IS columnpermutation,const MatFactorInfo *info);
MatQRFactor(Mat matrix, IS columnpermutation, const MatFactorInfo *info);
```

The argument `info->fill`, which should be greater than 1, is the predicted fill expected in the
factored matrix, as a ratio of the original fill. For example,
`info->fill = 2.0` would indicate that one expects the factored matrix
to have twice as many nonzeros as the original.

For sparse matrices it is very unlikely that the factorization is
actually done in-place. More likely, new space is allocated for the
factored matrix and the old space deallocated, but to the user it
appears in-place because the factored matrix replaces the unfactored
matrix.

The two factorization stages can also be performed separately, by using
the preferred out-of-place mode. First one obtains the matrix object that will
hold the factor using

```
MatGetFactor(Mat matrix,MatSolverType package,MatFactorType ftype,Mat *factor);
```

and then performs the factorization

```
MatICCFactorSymbolic(Mat factor,Mat matrix,IS perm,const MatFactorInfo *info);
MatCholeskyFactorSymbolic(Mat factor,Mat matrix,IS perm,const MatFactorInfo *info);
MatCholeskyFactorNumeric(Mat factor,Mat matrix,const MatFactorInfo *info);

MatILUFactorSymbolic(Mat factor,Mat matrix,IS rowperm,IS colperm,const MatFactorInfo *info);
MatLUFactorSymbolic(Mat factor,Mat matrix,IS rowperm,IS colperm,const MatFactorInfo *info);
MatLUFactorNumeric(Mat factor,Mat matrix,const MatFactorInfo *info);

MatQRFactorSymbolic(Mat factor,Mat matrix,IS perm,const MatFactorInfo *info);
MatQRFactorNumeric(Mat factor,Mat matrix,const MatFactorInfo *info);
```

In this case, the contents of the matrix `factor` are undefined between
the symbolic and numeric factorization stages. It is possible to reuse
the symbolic factorization. For the second and succeeding
factorizations, one simply calls the numerical factorization with a new
input `matrix` and the *same* factored `factor` matrix. It is
*essential* that the new input matrix have exactly the same nonzero
structure as the original factored matrix. (The numerical factorization
merely overwrites the numerical values in the factored matrix and does
not disturb the symbolic portion, thus enabling reuse of the symbolic
phase.) In general, calling `XXXFactorSymbolic` with a dense matrix
will do nothing except allocate the new matrix; the `XXXFactorNumeric`
routines will do all of the work.

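The out-of-place workflow can be sketched as follows, assuming an existing sequential matrix `A` and ordering index sets `rowperm` and `colperm` obtained from `MatGetOrdering()`:

```
Mat           F;
MatFactorInfo info;

PetscCall(MatFactorInfoInitialize(&info));
info.fill = 2.0; /* guess: the factor has twice the nonzeros of A */

/* obtain a matrix to hold the factor, using PETSc's built-in LU */
PetscCall(MatGetFactor(A, MATSOLVERPETSC, MAT_FACTOR_LU, &F));
PetscCall(MatLUFactorSymbolic(F, A, rowperm, colperm, &info));
PetscCall(MatLUFactorNumeric(F, A, &info));

/* if the values of A change but its nonzero structure does not,
   only the numeric stage need be repeated */
PetscCall(MatLUFactorNumeric(F, A, &info));
```
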
Why provide the plain `XXXFactor` routines when one could simply call
the two-stage routines? The answer is that if one desires in-place
factorization of a sparse matrix, the intermediate stage between the
symbolic and numeric phases cannot be stored in a `factor` matrix, and
it does not make sense to store the intermediate values inside the
original matrix that is being transformed. We originally made the
combined factor routines do either in-place or out-of-place
factorization, but then decided that this approach was not needed and
could easily lead to confusion.

We do not provide our own sparse matrix factorization with pivoting
for numerical stability. This is because trying to both reduce fill and
do pivoting can become quite complicated. Instead, we provide a poor
stepchild substitute. After one has obtained a reordering with
`MatGetOrdering(Mat A,MatOrderingType type,IS *row,IS *col)`, one may call

```
MatReorderForNonzeroDiagonal(Mat A,PetscReal tol,IS row, IS col);
```

which will try to reorder the columns to ensure that no values along the
diagonal are smaller than `tol` in absolute value. If small values
are detected and corrected for, a nonsymmetric permutation of the rows
and columns will result. This is not guaranteed to work, but may help if
one was simply unlucky in the original ordering. When using the `KSP`
solver interface the option `-pc_factor_nonzeros_along_diagonal <tol>`
may be used. Here, `tol` is an optional tolerance to decide if a value
is nonzero; by default it is `1.e-10`.

The external `MatSolverType`s `MATSOLVERSUPERLU_DIST` and `MATSOLVERMUMPS`
manage numerical pivoting internally.

The external factorization packages each provide a wide range of options to choose from;
details on these may be found by consulting the manual page for the solver package, such as
`MATSOLVERSUPERLU_DIST`. Most of the options can be easily set via the options database
even when the factorization solvers are accessed via `KSP`.

Once a matrix has been factored, it is natural to solve linear systems.
The following four routines enable this process:

```
MatSolve(Mat A,Vec x, Vec y);
MatSolveTranspose(Mat A, Vec x, Vec y);
MatSolveAdd(Mat A,Vec x, Vec y, Vec w);
MatSolveTransposeAdd(Mat A, Vec x, Vec y, Vec w);
```

The matrix `A` in these routines must have been obtained from a
factorization routine; otherwise, an error will be generated. In
general, the user should use the `KSP` solvers introduced in the next
chapter rather than using these factorization and solve routines
directly.

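As a sketch, assuming `F` is a matrix obtained from `MatGetFactor()` and subsequently factored, and `b` and `x` are conforming vectors:

```
/* compute x = F^{-1} b using the stored factors */
PetscCall(MatSolve(F, b, x));

/* compute x = F^{-T} b */
PetscCall(MatSolveTranspose(F, b, x));
```
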
Some of the factorizations also support solves with multiple right-hand sides stored in a `Mat` using

```
MatMatSolve(Mat A,Mat B,Mat X);
```

and

```
MatMatSolveTranspose(Mat A,Mat B,Mat X);
```

Finally, `MATSOLVERMUMPS` provides access to Schur complements obtained after partial factorizations, as well
as the inertia of a matrix via `MatGetInertia()`.

(sec_matmatproduct)=

## Matrix-Matrix Products

PETSc matrices provide code for computing various matrix-matrix products. This section will introduce the two sets of routines
available; for now, consult `MatCreateProduct()` and `MatMatMult()`.

## Creating PCs Directly

Users obtain their preconditioner contexts from the `KSP`
context with the command `KSPGetPC()`. It is possible to create,
manipulate, and destroy `PC` contexts directly, although this
capability should rarely be needed. To create a `PC` context, one uses
the command

```
PCCreate(MPI_Comm comm,PC *pc);
```

The routine

```
PCSetType(PC pc,PCType method);
```

sets the preconditioner method to be used. The routine

```
PCSetOperators(PC pc,Mat Amat,Mat Pmat);
```

sets the matrices that are to be used with the preconditioner. The
routine

```
PCGetOperators(PC pc,Mat *Amat,Mat *Pmat);
```

returns the values set with `PCSetOperators()`.

The preconditioners in PETSc can be used in several ways. The two most
basic routines simply apply the preconditioner or its transpose and are
given, respectively, by

```
PCApply(PC pc,Vec x,Vec y);
PCApplyTranspose(PC pc,Vec x,Vec y);
```

In particular, for a preconditioner matrix, `B`, that has been set via
`PCSetOperators(pc,Amat,Pmat)`, the routine `PCApply(pc,x,y)` computes
$y = B^{-1} x$ by solving the linear system $By = x$ with
the specified preconditioner method.

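A minimal sketch of direct `PC` usage, assuming an assembled matrix `A` and conforming vectors `x` and `y`:

```
PC pc;

PetscCall(PCCreate(PETSC_COMM_WORLD, &pc));
PetscCall(PCSetType(pc, PCJACOBI));
PetscCall(PCSetOperators(pc, A, A));
PetscCall(PCSetUp(pc));

/* apply the preconditioner: y = B^{-1} x */
PetscCall(PCApply(pc, x, y));
PetscCall(PCDestroy(&pc));
```
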
Additional preconditioner routines are

```
PCApplyBAorAB(PC pc,PCSide right,Vec x,Vec y,Vec work);
PCApplyBAorABTranspose(PC pc,PCSide right,Vec x,Vec y,Vec work);
PCApplyRichardson(PC pc,Vec x,Vec y,Vec work,PetscReal rtol,PetscReal atol, PetscReal dtol,PetscInt maxits,PetscBool zeroguess,PetscInt *outits,PCRichardsonConvergedReason*);
```

The first two routines apply the action of the matrix followed by the
preconditioner or the preconditioner followed by the matrix, depending on
whether `right` is `PC_LEFT` or `PC_RIGHT`. The final routine
applies up to `maxits` iterations of Richardson’s method. These three
routines are provided to improve efficiency for certain Krylov subspace
methods.

A `PC` context that is no longer needed can be destroyed with the
command

```
PCDestroy(PC *pc);
```
