xref: /petsc/doc/manual/snes.md (revision 7e1a0bbe36d2be40a00a95404ece00db4857f70d)
(ch_snes)=

# SNES: Nonlinear Solvers

The solution of large-scale nonlinear problems pervades many facets of
computational science and demands robust and flexible solution
strategies. The `SNES` library of PETSc provides a powerful suite of
data-structure-neutral numerical routines for such problems. Built on
top of the linear solvers and data structures discussed in preceding
chapters, `SNES` enables the user to easily customize the nonlinear
solvers according to the application at hand. Also, the `SNES`
interface is *identical* for the uniprocess and parallel cases; the only
difference in the parallel version is that each process typically forms
only its local contribution to various matrices and vectors.

The `SNES` class includes methods for solving systems of nonlinear
equations of the form

$$
\mathbf{F}(\mathbf{x}) = 0,
$$ (fx0)

where $\mathbf{F}: \, \Re^n \to \Re^n$. Newton-like methods provide the
core of the package, including both line search and trust region
techniques. A suite of nonlinear Krylov methods and methods based upon
problem decomposition are also included. The solvers are discussed
further in {any}`sec_nlsolvers`. Following the PETSc design
philosophy, the interfaces to the various solvers are all virtually
identical. In addition, the `SNES` software is completely flexible, so
that the user can at runtime change any facet of the solution process.

PETSc’s default method for solving the nonlinear equation is Newton’s
method with line search, `SNESNEWTONLS`. The general form of the $n$-dimensional Newton’s method
for solving {math:numref}`fx0` is

$$
\mathbf{x}_{k+1} = \mathbf{x}_k - \mathbf{J}(\mathbf{x}_k)^{-1} \mathbf{F}(\mathbf{x}_k), \;\; k=0,1, \ldots,
$$ (newton)

where $\mathbf{x}_0$ is an initial approximation to the solution and
$\mathbf{J}(\mathbf{x}_k) = \mathbf{F}'(\mathbf{x}_k)$, the Jacobian, is nonsingular at each
iteration. In practice, the Newton iteration {math:numref}`newton` is
implemented by the following two steps:

$$
\begin{aligned}
1. & \text{(Approximately) solve} & \mathbf{J}(\mathbf{x}_k) \Delta \mathbf{x}_k &= -\mathbf{F}(\mathbf{x}_k). \\
2. & \text{Update} & \mathbf{x}_{k+1} &\gets \mathbf{x}_k + \Delta \mathbf{x}_k.
\end{aligned}
$$

Other defect-correction algorithms can be implemented by using different
choices for $\mathbf{J}(\mathbf{x}_k)$.

(sec_snesusage)=

## Basic SNES Usage

In the simplest usage of the nonlinear solvers, the user must merely
provide a C, C++, Fortran, or Python routine to evaluate the nonlinear function
{math:numref}`fx0`. The corresponding Jacobian matrix
can be approximated with finite differences. For codes that are
typically more efficient and accurate, the user can provide a routine to
compute the Jacobian; details regarding these application-provided
routines are discussed below. To provide an overview of the use of the
nonlinear solvers, browse the concrete example in {ref}`ex1.c <snes_ex1>` or skip ahead to the discussion.

(snes_ex1)=

:::{admonition} Listing: `src/snes/tutorials/ex1.c`
```{literalinclude} /../src/snes/tutorials/ex1.c
:end-before: /*TEST
```
:::

To create a `SNES` solver, one must first call `SNESCreate()` as
follows:

```
SNESCreate(MPI_Comm comm,SNES *snes);
```

The user must then set routines for evaluating the residual function {math:numref}`fx0`
and, *possibly*, its associated Jacobian matrix, as
discussed in the following sections.

To choose a nonlinear solution method, the user can either call

```
SNESSetType(SNES snes,SNESType method);
```

or use the option `-snes_type <method>`, where details regarding the
available methods are presented in {any}`sec_nlsolvers`. The
application code can take complete control of the linear and nonlinear
techniques used in the Newton-like method by calling

```
SNESSetFromOptions(snes);
```

This routine provides an interface to the PETSc options database, so
that at runtime the user can select a particular nonlinear solver, set
various parameters and customized routines (e.g., specialized line
search variants), prescribe the convergence tolerance, and set
monitoring routines. With this routine the user can also control all
linear solver options in the `KSP` and `PC` modules, as discussed
in {any}`ch_ksp`.

After having set these routines and options, the user solves the problem
by calling

```
SNESSolve(SNES snes,Vec b,Vec x);
```

where `x` should be initialized to the initial guess before calling and contains the solution on return.
In particular, to employ an initial guess of
zero, the user should explicitly set this vector to zero by calling
`VecZeroEntries(x)`. Finally, after solving the nonlinear system (or several
systems), the user should destroy the `SNES` context with

```
SNESDestroy(SNES *snes);
```

(sec_snesfunction)=

### Nonlinear Function Evaluation

When solving a system of nonlinear equations, the user must provide a
residual function {math:numref}`fx0`, which is set using

```
SNESSetFunction(SNES snes,Vec f,PetscErrorCode (*FormFunction)(SNES snes,Vec x,Vec f,void *ctx),void *ctx);
```

The argument `f` is an optional vector for storing the computed function value (the residual); pass `NULL` to have `SNES` allocate it for you.
The argument `ctx` is an optional user-defined context, which can
store any private, application-specific data required by the function
evaluation routine; `NULL` should be used if such information is not
needed. In C and C++, a user-defined context is merely a structure in
which various objects can be stashed; in Fortran a user context can be
an integer array that contains both parameters and pointers to PETSc
objects.
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/snes/tutorials/ex5.c.html">SNES Tutorial ex5</a>
and
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/snes/tutorials/ex5f90.F90.html">SNES Tutorial ex5f90</a>
give examples of user-defined application contexts in C and Fortran,
respectively.

(sec_snesjacobian)=

### Jacobian Evaluation

The user may also specify a routine to form some approximation of the
Jacobian matrix, `A`, at the current iterate, `x`, as is typically
done with

```
SNESSetJacobian(SNES snes,Mat Amat,Mat Pmat,PetscErrorCode (*FormJacobian)(SNES snes,Vec x,Mat A,Mat B,void *ctx),void *ctx);
```

The arguments of the routine `FormJacobian()` are the current iterate,
`x`; the (approximate) Jacobian matrix, `Amat`; the matrix from
which the preconditioner is constructed, `Pmat` (which is usually the
same as `Amat`); and an optional user-defined Jacobian context,
`ctx`, for application-specific data. The `FormJacobian()`
callback is only invoked if the solver requires it, always
*after* `FormFunction()` has been called at the current iterate.

Note that the `SNES` solvers
are all data-structure neutral, so the full range of PETSc matrix
formats (including “matrix-free” methods) can be used.
{any}`ch_matrices` discusses information regarding
available matrix formats and options, while {any}`sec_nlmatrixfree` focuses on matrix-free methods in
`SNES`. We briefly touch on a few details of matrix usage that are
particularly important for efficient use of the nonlinear solvers.

A common usage paradigm is to assemble the problem Jacobian in the
preconditioner storage `B`, rather than `A`. In the case where they
are identical, as in many simulations, this makes no difference.
However, it allows us to check the analytic Jacobian we construct in
`FormJacobian()` by passing the `-snes_mf_operator` flag. This
causes PETSc to approximate the Jacobian using finite differencing of
the function evaluation (discussed in {any}`sec_fdmatrix`),
and the analytic Jacobian becomes merely the preconditioner. Even if the
analytic Jacobian is incorrect, it is likely that the finite difference
approximation will converge, and thus this is an excellent method to
verify the analytic Jacobian. Moreover, if the analytic Jacobian is
incomplete (some terms are missing or approximate),
`-snes_mf_operator` may be used to obtain the exact solution, where
the Jacobian approximation has been transferred to the preconditioner.

One such approximate Jacobian comes from “Picard linearization” (see `SNESSetPicard()`), which
writes the nonlinear system as

$$
\mathbf{F}(\mathbf{x}) := \mathbf{A}(\mathbf{x}) \mathbf{x} - \mathbf{b} = 0,
$$

where $\mathbf{A}(\mathbf{x})$ usually contains the lower-derivative parts of the
equation. For example, the nonlinear diffusion problem

$$
- \nabla\cdot(\kappa(u) \nabla u) = 0
$$

would be linearized as

$$
A(u) v \simeq -\nabla\cdot(\kappa(u) \nabla v).
$$

Usually this linearization is simpler to implement than Newton and the
linear problems are somewhat easier to solve. In addition to using
`-snes_mf_operator` with this approximation to the Jacobian, the
Picard iterative procedure can be performed by defining $\mathbf{J}(\mathbf{x})$
to be $\mathbf{A}(\mathbf{x})$. Sometimes this iteration exhibits better global
convergence than Newton linearization.

During successive calls to `FormJacobian()`, the user can either
insert new matrix contexts or reuse old ones, depending on the
application requirements. For many sparse matrix formats, reusing the
old space (and merely changing the matrix elements) is more efficient;
however, if the matrix nonzero structure completely changes, creating an
entirely new matrix context may be preferable. Upon subsequent calls to
the `FormJacobian()` routine, the user may wish to reinitialize the
matrix entries to zero by calling `MatZeroEntries()`. See
{any}`sec_othermat` for details on the reuse of the matrix
context.

The directory `$PETSC_DIR/src/snes/tutorials` provides a variety of
examples.

Sometimes a nonlinear solver may produce a step that is not within the domain
of a given function, for example one with a negative pressure. When this occurs,
one can call `SNESSetFunctionDomainError()` or `SNESSetJacobianDomainError()`
to indicate to `SNES` that the step is not valid. One must also use `SNESGetConvergedReason()`
and check the reason to confirm that the solver succeeded. See {any}`sec_vi` for how to
provide `SNES` with bounds on the variables to solve (differential) variational inequalities
and how to control properties of the line step computed.

(sec_nlsolvers)=

## The Nonlinear Solvers

As summarized in Table {any}`tab-snesdefaults`, `SNES` includes
several Newton-like nonlinear solvers based on line search techniques
and trust region methods. Also provided are several nonlinear Krylov
methods, as well as nonlinear methods involving decompositions of the
problem.

Each solver may have associated with it a set of options, which can be
set with routines and options database commands provided for this
purpose. A complete list can be found by consulting the manual pages or
by running a program with the `-help` option; we discuss just a few in
the sections below.

```{eval-rst}
.. list-table:: PETSc Nonlinear Solvers
   :name: tab-snesdefaults
   :header-rows: 1

   * - Method
     - SNESType
     - Options Name
     - Default Line Search
   * - Line Search Newton
     - ``SNESNEWTONLS``
     - ``newtonls``
     - ``SNESLINESEARCHBT``
   * - Trust region Newton
     - ``SNESNEWTONTR``
     - ``newtontr``
     - —
   * - Newton with Arc Length Continuation
     - ``SNESNEWTONAL``
     - ``newtonal``
     - —
   * - Nonlinear Richardson
     - ``SNESNRICHARDSON``
     - ``nrichardson``
     - ``SNESLINESEARCHSECANT``
   * - Nonlinear CG
     - ``SNESNCG``
     - ``ncg``
     - ``SNESLINESEARCHCP``
   * - Nonlinear GMRES
     - ``SNESNGMRES``
     - ``ngmres``
     - ``SNESLINESEARCHSECANT``
   * - Quasi-Newton
     - ``SNESQN``
     - ``qn``
     - see :any:`tab-qndefaults`
   * - Full Approximation Scheme
     - ``SNESFAS``
     - ``fas``
     - —
   * - Nonlinear ASM
     - ``SNESNASM``
     - ``nasm``
     - —
   * - ASPIN
     - ``SNESASPIN``
     - ``aspin``
     - ``SNESLINESEARCHBT``
   * - Nonlinear Gauss-Seidel
     - ``SNESNGS``
     - ``ngs``
     - —
   * - Anderson Mixing
     - ``SNESANDERSON``
     - ``anderson``
     - —
   * - Newton with constraints (1)
     - ``SNESVINEWTONRSLS``
     - ``vinewtonrsls``
     - ``SNESLINESEARCHBT``
   * - Newton with constraints (2)
     - ``SNESVINEWTONSSLS``
     - ``vinewtonssls``
     - ``SNESLINESEARCHBT``
   * - Multi-stage Smoothers
     - ``SNESMS``
     - ``ms``
     - —
   * - Composite
     - ``SNESCOMPOSITE``
     - ``composite``
     - —
   * - Linear solve only
     - ``SNESKSPONLY``
     - ``ksponly``
     - —
   * - Python Shell
     - ``SNESPYTHON``
     - ``python``
     - —
   * - Shell (user-defined)
     - ``SNESSHELL``
     - ``shell``
     - —

```

### Line Search Newton

The method `SNESNEWTONLS` (`-snes_type newtonls`) provides a
line search Newton method for solving systems of nonlinear equations. By
default, this technique employs cubic backtracking
{cite}`dennis:83`. Alternative line search techniques are
listed in Table {any}`tab-linesearches`.

```{eval-rst}
.. table:: PETSc Line Search Methods
   :name: tab-linesearches

   ==================== =========================== ================
   **Line Search**      **SNESLineSearchType**      **Options Name**
   ==================== =========================== ================
   Backtracking         ``SNESLINESEARCHBT``        ``bt``
   (damped) step        ``SNESLINESEARCHBASIC``     ``basic``
   identical to above   ``SNESLINESEARCHNONE``      ``none``
   Secant method        ``SNESLINESEARCHSECANT``    ``secant``
   Critical point       ``SNESLINESEARCHCP``        ``cp``
   Error-oriented       ``SNESLINESEARCHNLEQERR``   ``nleqerr``
   Bisection            ``SNESLINESEARCHBISECTION`` ``bisection``
   Shell                ``SNESLINESEARCHSHELL``     ``shell``
   ==================== =========================== ================
```

Every `SNES` has a line search context of type `SNESLineSearch` that
may be retrieved using

```
SNESGetLineSearch(SNES snes,SNESLineSearch *ls);
```

There are several default options for the line searches. The order of
polynomial approximation may be set with `-snes_linesearch_order` or

```
SNESLineSearchSetOrder(SNESLineSearch ls, PetscInt order);
```

for instance, 2 for quadratic or 3 for cubic. Sometimes, it may not be
necessary to monitor the progress of the nonlinear iteration. In this
case, `-snes_linesearch_norms` or

```
SNESLineSearchSetComputeNorms(SNESLineSearch ls,PetscBool norms);
```

may be used to turn off function, step, and solution norm computation at
the end of the line search.

The default line search for the line search Newton method,
`SNESLINESEARCHBT`, involves several parameters, which are set to
defaults that are reasonable for many applications. The user can
override the defaults by using the following options:

- `-snes_linesearch_alpha <alpha>`
- `-snes_linesearch_maxstep <max>`
- `-snes_linesearch_minlambda <tol>`

Besides the backtracking line search, there are `SNESLINESEARCHSECANT`,
which uses a polynomial secant minimization of $||F(x)||_2$ or an objective function
if set, and `SNESLINESEARCHCP`, which minimizes $F(x) \cdot Y$ where
$Y$ is the search direction. These are both potentially iterative
line searches, which may be used to find a better-fitted steplength in
the case where a single secant search is not sufficient. The number of
iterations may be set with `-snes_linesearch_max_it`. In addition, the
convergence criteria of the iterative line searches may be set using
function tolerances `-snes_linesearch_rtol` and
`-snes_linesearch_atol`, and steplength tolerance
`-snes_linesearch_ltol`.

For highly non-linear problems, the bisection line search `SNESLINESEARCHBISECTION`
may prove useful due to its robustness. Similar to the critical point line search
`SNESLINESEARCHCP`, it seeks to find the root of $F(x) \cdot Y$.
While the latter does so through a secant method, the bisection line search
does so by iteratively bisecting the step length interval.
It works as follows (with $f(\lambda)=F(x-\lambda Y) \cdot Y / ||Y||$ for brevity):

1. initialize: $j=1$, $\lambda_0 = \lambda_{\text{left}} = 0.0$, $\lambda_j = \lambda_{\text{right}} = \alpha$, compute $f(\lambda_0)$ and $f(\lambda_j)$

2. check whether there is a change of sign in the interval: $f(\lambda_{\text{left}}) f(\lambda_j) \leq 0$; if not, accept the full step length $\lambda_1$

3. if there is a change of sign, enter the iterative bisection procedure

   1. check convergence/exit criteria:

      - absolute tolerance $f(\lambda_j) < \mathtt{atol}$
      - relative tolerance $f(\lambda_j) < \mathtt{rtol} \cdot f(\lambda_0)$
      - change of step length $\lambda_j - \lambda_{j-1} < \mathtt{ltol}$
      - number of iterations $j < \mathtt{max\_it}$

   2. if $j > 1$, determine the direction of bisection

   $$
   \begin{aligned}\lambda_{\text{left}} &= \begin{cases}\lambda_{\text{left}} &f(\lambda_{\text{left}}) f(\lambda_j) \leq 0\\\lambda_{j} &\text{else}\\ \end{cases}\\ \lambda_{\text{right}} &= \begin{cases} \lambda_j &f(\lambda_{\text{left}}) f(\lambda_j) \leq 0\\\lambda_{\text{right}} &\text{else}\\ \end{cases}\\\end{aligned}
   $$

   3. bisect the interval: $\lambda_{j+1} = (\lambda_{\text{left}} + \lambda_{\text{right}})/2$, compute $f(\lambda_{j+1})$
   4. update variables for the next iteration: $\lambda_j \gets \lambda_{j+1}$, $f(\lambda_j) \gets f(\lambda_{j+1})$, $j \gets j+1$

Custom line search types may either be defined using
`SNESLineSearchShell`, or by creating a custom user line search type
in the model of the preexisting ones and registering it using

```
SNESLineSearchRegister(const char sname[],PetscErrorCode (*function)(SNESLineSearch));
```

### Trust Region Methods

The trust region method in `SNES` for solving systems of nonlinear
equations, `SNESNEWTONTR` (`-snes_type newtontr`), is similar to the one developed in the
MINPACK project {cite}`more84`. Several parameters can be
set to control the variation of the trust region size during the
solution process. In particular, the user can control the initial trust
region radius, computed by

$$
\Delta = \Delta_0 \| F_0 \|_2,
$$

by setting $\Delta_0$ via the option `-snes_tr_delta0 <delta0>`.

### Newton with Arc Length Continuation

The Newton method with arc length continuation reformulates the linearized system
$K\delta \mathbf x = -\mathbf F(\mathbf x)$ by introducing the load parameter
$\lambda$ and splitting the residual into two components, commonly
corresponding to internal and external forces:

$$
\mathbf F(\mathbf x, \lambda) = \mathbf F^{\mathrm{int}}(\mathbf x) - \mathbf F^{\mathrm{ext}}(\mathbf x, \lambda).
$$

Often, $\mathbf F^{\mathrm{ext}}(\mathbf x, \lambda)$ is linear in $\lambda$,
which can be thought of as applying the external force in proportional load
increments. By default, this is how the right-hand side vector is handled in the
implemented method. Generally, however, $\mathbf F^{\mathrm{ext}}(\mathbf x, \lambda)$
may depend non-linearly on $\lambda$ or $\mathbf x$, or both.
To accommodate this possibility, we provide the `SNESNewtonALGetLoadParameter()`
function, which allows for the current value of $\lambda$ to be queried in the
functions provided to `SNESSetFunction()` and `SNESSetJacobian()`.

Additionally, we split the solution update into two components:

$$
\delta \mathbf x = \delta s\delta\mathbf x^F + \delta\lambda\delta\mathbf x^Q,
$$

where $\delta s = 1$ unless partial corrections are used (discussed more
below). Each of $\delta \mathbf x^F$ and $\delta \mathbf x^Q$ is found by
solving a linear system with the Jacobian $K$:

- $\delta \mathbf x^F$ is the full Newton step for a given value of $\lambda$: $K \delta \mathbf x^F = -\mathbf F(\mathbf x, \lambda)$
- $\delta \mathbf x^Q$ is the variation in $\mathbf x$ with respect to $\lambda$, computed by $K \delta\mathbf x^Q = \mathbf Q(\mathbf x, \lambda)$, where $\mathbf Q(\mathbf x, \lambda) = -\partial \mathbf F (\mathbf x, \lambda) / \partial \lambda$ is the tangent load vector.

Often, the tangent load vector $\mathbf Q$ is constant within a load increment,
which corresponds to the case of proportional loading discussed above. By default,
$\mathbf Q$ is the full right-hand-side vector, if one was provided.
The user can also provide a function which computes $\mathbf Q$ to
`SNESNewtonALSetFunction()`. This function should have the same signature as for
`SNESSetFunction()`, and the user should use `SNESNewtonALGetLoadParameter()` to get
$\lambda$ if it is needed.

**The Constraint Surface.** Considering the $n+1$ dimensional space of
$\mathbf x$ and $\lambda$, we define the linearized equilibrium line to be
the set of points for which the linearized equilibrium equations are satisfied.
Given the previous iterative solution
$\mathbf t^{(j-1)} = [\mathbf x^{(j-1)}, \lambda^{(j-1)}]$,
this line is defined by the point $\mathbf t^{(j-1)} + [\delta\mathbf x^F, 0]$ and
the vector $\mathbf t^Q = [\delta\mathbf x^Q, 1]$.
The arc length method seeks the intersection of this linearized equilibrium line
with a quadratic constraint surface, defined by

$$
L^2 = \|\Delta\mathbf x\|^2 + \psi^2 (\Delta\lambda)^2,
$$

where $L$ is a user-provided step size corresponding to the radius of the
constraint surface, $\Delta\mathbf x$ and $\Delta\lambda$ are the
accumulated updates over the current load step, and $\psi^2$ is a
user-provided consistency parameter determining the shape of the constraint surface.
Generally, $\psi^2 > 0$ leads to a hyper-sphere constraint surface, while
$\psi^2 = 0$ leads to a hyper-cylinder constraint surface.

Since the solution will always fall on the constraint surface, the method will often
require multiple incremental steps to fully solve the non-linear problem.
This is necessary to accurately trace the equilibrium path.
Importantly, this is fundamentally different from time stepping.
While a similar process could be implemented as a `TS`, this method is
particularly designed to be used as a `SNES`, either standalone or within a `TS`.

To this end, by default, the load parameter is scaled such that the full external
forces are applied at $\lambda = 1$, although we allow the user to specify
a different value via `-snes_newtonal_lambda_max`.
To ensure that the solution corresponds exactly to the external force prescribed by
the user, i.e., that the load parameter is exactly $\lambda_{\text{max}}$ at the end
of the `SNES` solve, we clamp the value before computing the solution update.
As such, the final increment will likely be a hybrid of arc length continuation and
normal Newton iterations.

**Choosing the Continuation Step.** For the first iteration from an equilibrium
point, there is a single correct way to choose $\delta\lambda$, which follows
from the constraint equations. Specifically, the constraint equations yield the
quadratic equation $a\delta\lambda^2 + b\delta\lambda + c = 0$, where

$$
\begin{aligned}
a &= \|\delta\mathbf x^Q\|^2 + \psi^2,\\
b &= 2\delta\mathbf x^Q\cdot (\Delta\mathbf x + \delta s\delta\mathbf x^F) + 2\psi^2 \Delta\lambda,\\
c &= \|\Delta\mathbf x + \delta s\delta\mathbf x^F\|^2 + \psi^2 \Delta\lambda^2 - L^2.
\end{aligned}
$$

Since in the first iteration $\Delta\mathbf x = \delta\mathbf x^F = \mathbf 0$ and
$\Delta\lambda = 0$, we have $b = 0$ and the equation simplifies to a pair of
real roots:

$$
\delta\lambda = \pm\frac{L}{\sqrt{\|\delta\mathbf x^Q\|^2 + \psi^2}},
$$

where the sign is positive for the first increment and is determined by the previous
increment otherwise as

$$
\text{sign}(\delta\lambda) = \text{sign}\big(\delta\mathbf x^Q \cdot (\Delta\mathbf x)_{i-1} + \psi^2(\Delta\lambda)_{i-1}\big),
$$

where $(\Delta\mathbf x)_{i-1}$ and $(\Delta\lambda)_{i-1}$ are the
accumulated updates over the previous load step.

In subsequent iterations, there are different approaches to selecting
$\delta\lambda$, all of which have trade-offs.
The main difference is whether the iterative solution falls on the constraint
surface at every iteration, or only when fully converged.
PETSc implements two approaches, set via
`SNESNewtonALSetCorrectionType()` or
`-snes_newtonal_correction_type <normal|exact>` on the command line.

**Corrections in the Normal Hyperplane.** The `SNES_NEWTONAL_CORRECTION_NORMAL`
option is simpler and computationally less expensive, but may fail to converge, as
the constraint equation is not satisfied at every iteration.
The update $\delta \lambda$ is chosen such that the step lies within the
hyperplane normal to the quadratic constraint surface.
Mathematically, that is

$$
\delta \lambda = -\frac{\Delta \mathbf x \cdot \delta \mathbf x^F}{\Delta\mathbf x \cdot \delta\mathbf x^Q + \psi^2 \Delta\lambda}.
$$

This implementation is based on {cite}`LeonPaulinoPereiraMenezesLages_2011`.

**Exact Corrections.** The `SNES_NEWTONAL_CORRECTION_EXACT` option is far more
complex, but ensures that the constraint is exactly satisfied at every Newton
iteration. As such, it is generally more robust.
By evaluating the intersection of the constraint surface and equilibrium line at each
iteration, $\delta\lambda$ is chosen as one of the roots of the above
quadratic equation $a\delta\lambda^2 + b\delta\lambda + c = 0$.
This method encounters issues, however, if the linearized equilibrium line and
constraint surface do not intersect due to particularly large linearized error.
In this case, the roots are complex.
To continue progressing toward a solution, this method uses a partial correction by
choosing $\delta s$ such that the quadratic equation has a single real root.
Geometrically, this selects the point on the constraint surface closest to the
linearized equilibrium line. See the code or {cite}`Ritto-CorreaCamotim2008` for a
mathematical description of these partial corrections.

### Nonlinear Krylov Methods

A number of nonlinear Krylov methods are provided, including Nonlinear
Richardson (`SNESNRICHARDSON`), nonlinear conjugate gradient (`SNESNCG`), nonlinear GMRES (`SNESNGMRES`), and Anderson Mixing (`SNESANDERSON`). These
methods are described individually below. They are all instrumental to
PETSc’s nonlinear preconditioning.

**Nonlinear Richardson.** The nonlinear Richardson iteration, `SNESNRICHARDSON`, merely
takes the form of a line-search-damped fixed-point iteration of the form

$$
\mathbf{x}_{k+1} = \mathbf{x}_k - \lambda \mathbf{F}(\mathbf{x}_k), \;\; k=0,1, \ldots,
$$

where the default line search is `SNESLINESEARCHSECANT`. This simple solver
is mostly useful as a nonlinear smoother, or to provide line search
stabilization to an inner method.

**Nonlinear Conjugate Gradients.** Nonlinear CG, `SNESNCG`, is equivalent to linear
CG, but with the steplength determined by line search
(`SNESLINESEARCHCP` by default). Five variants (Fletcher-Reeves,
Hestenes-Stiefel, Polak-Ribiere-Polyak, Dai-Yuan, and Conjugate Descent)
are implemented in PETSc and may be chosen using

```
SNESNCGSetType(SNES snes, SNESNCGType btype);
```

**Anderson Mixing and Nonlinear GMRES Methods.** Nonlinear GMRES (`SNESNGMRES`) and
Anderson Mixing (`SNESANDERSON`) methods combine the last $m$ iterates, plus a new
fixed-point iteration iterate, into an approximate residual-minimizing new iterate.

All of the above methods have support for using a nonlinear preconditioner to compute the preliminary update step, rather than the default,
which is the nonlinear function's residual, $\mathbf{F}(\mathbf{x}_k)$. The different update is obtained by solving an inner nonlinear problem, which has its own
`SNES` object that may be obtained with `SNESGetNPC()`.

### Quasi-Newton Methods

Quasi-Newton methods store iterative rank-one updates to the Jacobian
instead of computing the Jacobian directly. Three limited-memory quasi-Newton
methods are provided, L-BFGS and the “good” and “bad” Broyden updates, which are described in
Table {any}`tab-qndefaults`. These are all encapsulated under
`-snes_type qn` and may be changed with `-snes_qn_type`. The default
is L-BFGS, which provides symmetric updates to an approximate Jacobian.
This iteration is similar to the line search Newton methods.

The quasi-Newton methods support the use of a nonlinear preconditioner that can be obtained with `SNESGetNPC()` and then configured; or that can be configured with
`SNES`, `KSP`, and `PC` options using the options database prefix `-npc_`.

```{eval-rst}
.. list-table:: PETSc quasi-Newton solvers
   :name: tab-qndefaults
   :header-rows: 1

   * - QN Method
     - ``SNESQNType``
     - Options Name
     - Default Line Search
   * - L-BFGS
     - ``SNES_QN_LBFGS``
     - ``lbfgs``
     - ``SNESLINESEARCHCP``
   * - “Good” Broyden
     - ``SNES_QN_BROYDEN``
     - ``broyden``
     - ``SNESLINESEARCHBASIC`` (or equivalently ``SNESLINESEARCHNONE``)
   * - “Bad” Broyden
     - ``SNES_QN_BADBROYDEN``
     - ``badbroyden``
     - ``SNESLINESEARCHSECANT``
```

One may also control the form of the initial Jacobian approximation with

```
SNESQNSetScaleType(SNES snes, SNESQNScaleType stype);
```

and the restart type with

```
SNESQNSetRestartType(SNES snes, SNESQNRestartType rtype);
```
698
699### The Full Approximation Scheme
700
701The Nonlinear Full Approximation Scheme (FAS) `SNESFAS`, is a nonlinear multigrid method. At
702each level, there is a recursive cycle control `SNES` instance, and
703either one or two nonlinear solvers that act as smoothers (up and down). Problems
704set up using the `SNES` `DMDA` interface are automatically
705coarsened. FAS, `SNESFAS`, differs slightly from linear multigrid `PCMG`, in that the hierarchy is
706constructed recursively. However, much of the interface is a one-to-one
707map. We describe the “get” operations here, and it can be assumed that
708each has a corresponding “set” operation. For instance, the number of
709levels in the hierarchy may be retrieved using
710
711```
712SNESFASGetLevels(SNES snes, PetscInt *levels);
713```
714
715There are four `SNESFAS` cycle types, `SNES_FAS_MULTIPLICATIVE`,
716`SNES_FAS_ADDITIVE`, `SNES_FAS_FULL`, and `SNES_FAS_KASKADE`. The
717type may be set with
718
719```
720SNESFASSetType(SNES snes,SNESFASType fastype);.
721```
722
and the number of cycles on each level, 1 for a V-cycle or 2 for a W-cycle, may be set with
724
725```
SNESFASSetCycles(SNES snes, PetscInt cycles);
727```
728
729Much like the interface to `PCMG` described in {any}`sec_mg`, there are interfaces to recover the
730various levels’ cycles and smoothers. The level smoothers may be
731accessed with
732
733```
734SNESFASGetSmoother(SNES snes, PetscInt level, SNES *smooth);
735SNESFASGetSmootherUp(SNES snes, PetscInt level, SNES *smooth);
736SNESFASGetSmootherDown(SNES snes, PetscInt level, SNES *smooth);
737```
738
739and the level cycles with
740
741```
SNESFASGetCycleSNES(SNES snes, PetscInt level, SNES *lsnes);
743```
744
745Also akin to `PCMG`, the restriction and prolongation at a level may
746be acquired with
747
748```
749SNESFASGetInterpolation(SNES snes, PetscInt level, Mat *mat);
750SNESFASGetRestriction(SNES snes, PetscInt level, Mat *mat);
751```
752
753In addition, FAS requires special restriction for solution-like
754variables, called injection. This may be set with
755
756```
SNESFASGetInjection(SNES snes, PetscInt level, Mat *mat);
758```
759
760The coarse solve context may be acquired with
761
762```
SNESFASGetCoarseSolve(SNES snes, SNES *coarse);
764```
765
766### Nonlinear Additive Schwarz
767
Nonlinear Additive Schwarz methods (NASM) take a number of local
nonlinear subproblems, solve them independently in parallel, and
combine those solutions into a new approximate solution.

```
SNESNASMSetSubdomains(SNES snes, PetscInt n, SNES subsnes[], VecScatter iscatter[], VecScatter oscatter[], VecScatter gscatter[]);
```

allows the user to create these local subdomains. Problems set up
using the `SNES` `DMDA` interface are automatically decomposed. The
types of subdomain update to the whole solution are currently limited to
two, borrowed from `PCASM`: `PC_ASM_BASIC`, in which the overlapping
updates are added together, and `PC_ASM_RESTRICT`, which applies the
updates in a nonoverlapping fashion. This may be set with
782
783```
SNESNASMSetType(SNES snes, PCASMType type);
785```
786
787`SNESASPIN` is a helper `SNES` type that sets up a nonlinearly
788preconditioned Newton’s method using NASM as the preconditioner.
789
790## General Options
791
792This section discusses options and routines that apply to all `SNES`
793solvers and problem classes. In particular, we focus on convergence
794tests, monitoring routines, and tools for checking derivative
795computations.
796
797(sec_snesconvergence)=
798
799### Convergence Tests
800
801Convergence of the nonlinear solvers can be detected in a variety of
802ways; the user can even specify a customized test, as discussed below.
Most of the nonlinear solvers use `SNESConvergenceTestDefault()`;
however, `SNESNEWTONTR` uses a method-specific additional convergence
test as well. The convergence tests involve several parameters, which
are set by default to values that should be reasonable for a wide range
of problems. The user can customize the parameters to the problem at
hand by using some of the following routines and options.
809
810One method of convergence testing is to declare convergence when the
811norm of the change in the solution between successive iterations is less
812than some tolerance, `stol`. Convergence can also be determined based
813on the norm of the function. Such a test can use either the absolute
814size of the norm, `atol`, or its relative decrease, `rtol`, from an
815initial guess. The following routine sets these parameters, which are
816used in many of the default `SNES` convergence tests:
817
818```
SNESSetTolerances(SNES snes, PetscReal atol, PetscReal rtol, PetscReal stol, PetscInt its, PetscInt fcts);
820```
821
822This routine also sets the maximum numbers of allowable nonlinear
823iterations, `its`, and function evaluations, `fcts`. The
824corresponding options database commands for setting these parameters are:
825
826- `-snes_atol <atol>`
827- `-snes_rtol <rtol>`
828- `-snes_stol <stol>`
829- `-snes_max_it <its>`
830- `-snes_max_funcs <fcts>` (use `unlimited` for no maximum)
831
832A related routine is `SNESGetTolerances()`. `PETSC_CURRENT` may be used
833for any parameter to indicate the current value should be retained; use `PETSC_DETERMINE` to restore to the default value from when the object was created.
834
835Users can set their own customized convergence tests in `SNES` by
836using the command
837
838```
SNESSetConvergenceTest(SNES snes, PetscErrorCode (*test)(SNES snes, PetscInt it, PetscReal xnorm, PetscReal gnorm, PetscReal f, SNESConvergedReason *reason, void *cctx), void *cctx, PetscErrorCode (*destroy)(void *cctx));
840```
841
842The final argument of the convergence test routine, `cctx`, denotes an
843optional user-defined context for private data. When solving systems of
844nonlinear equations, the arguments `xnorm`, `gnorm`, and `f` are
845the current iterate norm, current step norm, and function norm,
846respectively. `SNESConvergedReason` should be set positive for
847convergence and negative for divergence. See `include/petscsnes.h` for
848a list of values for `SNESConvergedReason`.
849
850(sec_snesmonitor)=
851
852### Convergence Monitoring
853
854By default the `SNES` solvers run silently without displaying
855information about the iterations. The user can initiate monitoring with
856the command
857
858```
SNESMonitorSet(SNES snes, PetscErrorCode (*mon)(SNES snes, PetscInt its, PetscReal norm, void *mctx), void *mctx, PetscCtxDestroyFn *monitordestroy);
860```
861
862The routine, `mon`, indicates a user-defined monitoring routine, where
863`its` and `mctx` respectively denote the iteration number and an
864optional user-defined context for private data for the monitor routine.
865The argument `norm` is the function norm.
866
867The routine set by `SNESMonitorSet()` is called once after every
868successful step computation within the nonlinear solver. Hence, the user
869can employ this routine for any application-specific computations that
870should be done after the solution update. The option `-snes_monitor`
871activates the default `SNES` monitor routine,
872`SNESMonitorDefault()`, while `-snes_monitor_lg_residualnorm` draws
873a simple line graph of the residual norm’s convergence.
874
875One can cancel hardwired monitoring routines for `SNES` at runtime
876with `-snes_monitor_cancel`.
877
As the Newton method converges so that the residual norm is small, say
$10^{-10}$, many of the final digits printed with the
`-snes_monitor` option are meaningless. Worse, they are different on
different machines due to different round-off rules used by, say, the
IBM RS6000 and the Sun SPARC. This makes testing between different
machines difficult. The option `-snes_monitor_short` causes PETSc to
print fewer digits of the residual norm as it gets smaller; thus
on most machines it will always print the same numbers, making
cross-machine testing easier.
887
888The routines
889
890```
891SNESGetSolution(SNES snes,Vec *x);
SNESGetFunction(SNES snes, Vec *r, PetscErrorCode (**func)(SNES, Vec, Vec, void *), void **ctx);
893```
894
895return the solution vector and function vector from a `SNES` context.
896These routines are useful, for instance, if the convergence test
897requires some property of the solution or function other than those
898passed with routine arguments.
899
900(sec_snesderivs)=
901
902### Checking Accuracy of Derivatives
903
904Since hand-coding routines for Jacobian matrix evaluation can be error
905prone, `SNES` provides easy-to-use support for checking these matrices
906against finite difference versions. In the simplest form of comparison,
907users can employ the option `-snes_test_jacobian` to compare the
908matrices at several points. Although not exhaustive, this test will
generally catch obvious problems. One can compare the elements of the
two matrices by using the option `-snes_test_jacobian_view`, which
causes the two matrices to be printed to the screen.
912
913Another means for verifying the correctness of a code for Jacobian
914computation is running the problem with either the finite difference or
915matrix-free variant, `-snes_fd` or `-snes_mf`; see {any}`sec_fdmatrix` or {any}`sec_nlmatrixfree`.
916If a
917problem converges well with these matrix approximations but not with a
918user-provided routine, the problem probably lies with the hand-coded
919matrix. See the note in {any}`sec_snesjacobian` about
assembling your Jacobian in the "preconditioner" slot `Pmat`.
921
The correctness of user-provided `MATSHELL` Jacobians in general can be
checked with `MatShellTestMultTranspose()` and `MatShellTestMult()`.

The correctness of user-provided `MATSHELL` Jacobians set via `TSSetRHSJacobian()`
can be checked with `TSRHSJacobianTestTranspose()` and `TSRHSJacobianTest()`,
which check the correctness of the matrix-transpose-vector product and the
matrix-vector product. From the command line, these can be checked with
929
930- `-ts_rhs_jacobian_test_mult_transpose`
931- `-mat_shell_test_mult_transpose_view`
932- `-ts_rhs_jacobian_test_mult`
933- `-mat_shell_test_mult_view`
934
935## Inexact Newton-like Methods
936
937Since exact solution of the linear Newton systems within {math:numref}`newton`
938at each iteration can be costly, modifications
939are often introduced that significantly reduce these expenses and yet
940retain the rapid convergence of Newton’s method. Inexact or truncated
941Newton techniques approximately solve the linear systems using an
942iterative scheme. In comparison with using direct methods for solving
943the Newton systems, iterative methods have the virtue of requiring
944little space for matrix storage and potentially saving significant
945computational work. Within the class of inexact Newton methods, of
946particular interest are Newton-Krylov methods, where the subsidiary
947iterative technique for solving the Newton system is chosen from the
948class of Krylov subspace projection methods. Note that at runtime the
949user can set any of the linear solver options discussed in {any}`ch_ksp`,
950such as `-ksp_type <ksp_method>` and
951`-pc_type <pc_method>`, to set the Krylov subspace and preconditioner
952methods.
953
954Two levels of iterations occur for the inexact techniques, where during
955each global or outer Newton iteration a sequence of subsidiary inner
956iterations of a linear solver is performed. Appropriate control of the
957accuracy to which the subsidiary iterative method solves the Newton
958system at each global iteration is critical, since these inner
959iterations determine the asymptotic convergence rate for inexact Newton
960techniques. While the Newton systems must be solved well enough to
retain fast local convergence of Newton's iterates, use of excessive
962inner iterations, particularly when $\| \mathbf{x}_k - \mathbf{x}_* \|$ is large,
963is neither necessary nor economical. Thus, the number of required inner
964iterations typically increases as the Newton process progresses, so that
965the truncated iterates approach the true Newton iterates.
966
967A sequence of nonnegative numbers $\{\eta_k\}$ can be used to
968indicate the variable convergence criterion. In this case, when solving
969a system of nonlinear equations, the update step of the Newton process
970remains unchanged, and direct solution of the linear system is replaced
971by iteration on the system until the residuals
972
973$$
974\mathbf{r}_k^{(i)} =  \mathbf{F}'(\mathbf{x}_k) \Delta \mathbf{x}_k + \mathbf{F}(\mathbf{x}_k)
975$$
976
977satisfy
978
979$$
980\frac{ \| \mathbf{r}_k^{(i)} \| }{ \| \mathbf{F}(\mathbf{x}_k) \| } \leq \eta_k \leq \eta < 1.
981$$
982
983Here $\mathbf{x}_0$ is an initial approximation of the solution, and
984$\| \cdot \|$ denotes an arbitrary norm in $\Re^n$ .
985
986By default a constant relative convergence tolerance is used for solving
987the subsidiary linear systems within the Newton-like methods of
988`SNES`. When solving a system of nonlinear equations, one can instead
989employ the techniques of Eisenstat and Walker {cite}`ew96`
990to compute $\eta_k$ at each step of the nonlinear solver by using
the option `-snes_ksp_ew`. In addition, by adding one's own
992`KSP` convergence test (see {any}`sec_convergencetests`), one can easily create one’s own,
993problem-dependent, inner convergence tests.
994
995(sec_nlmatrixfree)=
996
997## Matrix-Free Methods
998
999The `SNES` class fully supports matrix-free methods. The matrices
1000specified in the Jacobian evaluation routine need not be conventional
1001matrices; instead, they can point to the data required to implement a
1002particular matrix-free method. The matrix-free variant is allowed *only*
1003when the linear systems are solved by an iterative method in combination
with no preconditioning (`PCNONE` or `-pc_type none`), a
1005user-provided matrix from which to construct the preconditioner, or a user-provided preconditioner
1006shell (`PCSHELL`, discussed in {any}`sec_pc`); that
1007is, obviously matrix-free methods cannot be used with a direct solver,
1008approximate factorization, or other preconditioner which requires access
1009to explicit matrix entries.
1010
1011The user can create a matrix-free context for use within `SNES` with
1012the routine
1013
1014```
1015MatCreateSNESMF(SNES snes,Mat *mat);
1016```
1017
1018This routine creates the data structures needed for the matrix-vector
1019products that arise within Krylov space iterative
1020methods {cite}`brownsaad:90`.
1021The default `SNES`
1022matrix-free approximations can also be invoked with the command
`-snes_mf`. Alternatively, one can retain the user-provided Jacobian
preconditioner but replace the user-provided Jacobian matrix with the
default matrix-free variant with the option `-snes_mf_operator`.
1026
1027`MatCreateSNESMF()` uses
1028
1029```
1030MatCreateMFFD(Vec x, Mat *mat);
1031```
1032
1033which can also be used directly for users who need a matrix-free matrix but are not using `SNES`.
1034
1035The user can set one parameter to control the Jacobian-vector product
1036approximation with the command
1037
1038```
1039MatMFFDSetFunctionError(Mat mat,PetscReal rerror);
1040```
1041
1042The parameter `rerror` should be set to the square root of the
1043relative error in the function evaluations, $e_{rel}$; the default
1044is the square root of machine epsilon (about $10^{-8}$ in double
1045precision), which assumes that the functions are evaluated to full
floating-point precision accuracy. This parameter can also be set from
the options database with `-mat_mffd_err <err>`.
1048
1049In addition, PETSc provides ways to register new routines to compute
1050the differencing parameter ($h$); see the manual page for
1051`MatMFFDSetType()` and `MatMFFDRegister()`. We currently provide two
1052default routines accessible via `-mat_mffd_type <ds or wp>`. For
1053the default approach there is one “tuning” parameter, set with
1054
1055```
1056MatMFFDDSSetUmin(Mat mat,PetscReal umin);
1057```
1058
This parameter, `umin` (or $u_{min}$), is a bit involved; its
default is $10^{-6}$. Its command line form is `-mat_mffd_umin <umin>`.
1061
1062The Jacobian-vector product is approximated
1063via the formula
1064
1065$$
1066F'(u) a \approx \frac{F(u + h*a) - F(u)}{h}
1067$$
1068
1069where $h$ is computed via
1070
1071$$
1072h = e_{\text{rel}} \cdot \begin{cases}
1073u^{T}a/\lVert a \rVert^2_2                                 & \text{if $|u^T a| > u_{\min} \lVert a \rVert_{1}$} \\
1074u_{\min} \operatorname{sign}(u^{T}a) \lVert a \rVert_{1}/\lVert a\rVert^2_2  & \text{otherwise}.
1075\end{cases}
1076$$
1077
1078This approach is taken from Brown and Saad
1079{cite}`brownsaad:90`. The second approach, taken from Walker and Pernice,
1080{cite}`pw98`, computes $h$ via
1081
$$
h = \frac{\sqrt{1 + \lVert u \rVert}\, e_{\text{rel}}}{\lVert a \rVert}
$$
1086
This has no tunable parameters, but note that inside the nonlinear solve
$u$ does not change over the entire *linear* iterative process;
hence $\sqrt{1 + \lVert u \rVert}$ need be computed only once. This
1090information may be set with the options
1091
1092```
MatMFFDWPSetComputeNormU(Mat mat, PetscBool flag);
1094```
1095
or `-mat_mffd_compute_normu <true or false>`. This information is used
to eliminate the redundant computation of these parameters, thereby
reducing the number of collective operations and improving the
efficiency of the application code. This takes place automatically for the PETSc GMRES solver with left preconditioning.
1100
It is also possible to monitor the differencing parameters $h$ that are
computed via the routines
1103
1104```
1105MatMFFDSetHHistory(Mat,PetscScalar *,int);
1106MatMFFDResetHHistory(Mat,PetscScalar *,int);
1107MatMFFDGetH(Mat,PetscScalar *);
1108```
1109
1110We include an explicit example of using matrix-free methods in {any}`ex3.c <snes_ex3>`.
1111Note that by using the option `-snes_mf` one can
1112easily convert any `SNES` code to use a matrix-free Newton-Krylov
1113method without a preconditioner. As shown in this example,
1114`SNESSetFromOptions()` must be called *after* `SNESSetJacobian()` to
1115enable runtime switching between the user-specified Jacobian and the
1116default `SNES` matrix-free form.
1117
1118(snes_ex3)=
1119
1120:::{admonition} Listing: `src/snes/tutorials/ex3.c`
1121```{literalinclude} /../src/snes/tutorials/ex3.c
1122:end-before: /*TEST
1123```
1124:::
1125
1126Table {any}`tab-jacobians` summarizes the various matrix situations
1127that `SNES` supports. In particular, different linear system matrices
1128and preconditioning matrices are allowed, as well as both matrix-free
1129and application-provided preconditioners. If {any}`ex3.c <snes_ex3>` is run with
the options `-snes_mf` and `-user_precond`, then it uses a
matrix-free application of the matrix-vector product and a
user-provided matrix-free preconditioner.
1133
1134```{eval-rst}
1135.. list-table:: Jacobian Options
1136   :name: tab-jacobians
1137
1138   * - Matrix Use
1139     - Conventional Matrix Formats
1140     - Matrix-free versions
1141   * - Jacobian Matrix
1142     - Create matrix with ``MatCreate()``:math:`^*`.  Assemble matrix with user-defined routine :math:`^\dagger`
1143     - Create matrix with ``MatCreateShell()``.  Use ``MatShellSetOperation()`` to set various matrix actions, or use ``MatCreateMFFD()`` or ``MatCreateSNESMF()``.
1144   * - Matrix used to construct the preconditioner
1145     - Create matrix with ``MatCreate()``:math:`^*`.  Assemble matrix with user-defined routine :math:`^\dagger`
1146     - Use ``SNESGetKSP()`` and ``KSPGetPC()`` to access the ``PC``, then use ``PCSetType(pc, PCSHELL)`` followed by ``PCShellSetApply()``.
1147```
1148
1149$^*$ Use either the generic `MatCreate()` or a format-specific variant such as `MatCreateAIJ()`.
1150
1151$^\dagger$ Set user-defined matrix formation routine with `SNESSetJacobian()` or with a `DM` variant such as `DMDASNESSetJacobianLocal()`
1152
1153SNES also provides some less well-integrated code to apply matrix-free finite differencing using an automatically computed measurement of the
1154noise of the functions. This can be selected with `-snes_mf_version 2`; it does not use `MatCreateMFFD()` but has similar options that start with
1155`-snes_mf_` instead of `-mat_mffd_`. Note that this alternative prefix **only** works for version 2 differencing.
1156
1157(sec_fdmatrix)=
1158
1159## Finite Difference Jacobian Approximations
1160
1161PETSc provides some tools to help approximate the Jacobian matrices
1162efficiently via finite differences. These tools are intended for use in
1163certain situations where one is unable to compute Jacobian matrices
1164analytically, and matrix-free methods do not work well without a
1165preconditioner, due to very poor conditioning. The approximation
1166requires several steps:
1167
1168- First, one colors the columns of the (not yet built) Jacobian matrix,
1169  so that columns of the same color do not share any common rows.
1170- Next, one creates a `MatFDColoring` data structure that will be
1171  used later in actually computing the Jacobian.
1172- Finally, one tells the nonlinear solvers of `SNES` to use the
1173  `SNESComputeJacobianDefaultColor()` routine to compute the
1174  Jacobians.
1175
1176A code fragment that demonstrates this process is given below.
1177
1178```
1179ISColoring    iscoloring;
1180MatFDColoring fdcoloring;
1181MatColoring   coloring;
1182
1183/*
1184  This initializes the nonzero structure of the Jacobian. This is artificial
1185  because clearly if we had a routine to compute the Jacobian we wouldn't
1186  need to use finite differences.
1187*/
1188FormJacobian(snes,x, &J, &J, &user);
1189
1190/*
1191   Color the matrix, i.e. determine groups of columns that share no common
1192  rows. These columns in the Jacobian can all be computed simultaneously.
1193*/
1194MatColoringCreate(J, &coloring);
1195MatColoringSetType(coloring,MATCOLORINGSL);
1196MatColoringSetFromOptions(coloring);
1197MatColoringApply(coloring, &iscoloring);
1198MatColoringDestroy(&coloring);
1199/*
1200   Create the data structure that SNESComputeJacobianDefaultColor() uses
1201   to compute the actual Jacobians via finite differences.
1202*/
1203MatFDColoringCreate(J, iscoloring, &fdcoloring);
1204ISColoringDestroy(&iscoloring);
1205MatFDColoringSetFunction(fdcoloring, (MatFDColoringFn *)FormFunction, &user);
1206MatFDColoringSetFromOptions(fdcoloring);
1207
1208/*
1209  Tell SNES to use the routine SNESComputeJacobianDefaultColor()
1210  to compute Jacobians.
1211*/
1212SNESSetJacobian(snes,J,J,SNESComputeJacobianDefaultColor,fdcoloring);
1213```
1214
1215Of course, we are cheating a bit. If we do not have an analytic formula
1216for computing the Jacobian, then how do we know what its nonzero
1217structure is so that it may be colored? Determining the structure is
1218problem dependent, but fortunately, for most structured grid problems
1219(the class of problems for which PETSc was originally designed) if one
1220knows the stencil used for the nonlinear function one can usually fairly
1221easily obtain an estimate of the location of nonzeros in the matrix.
1222This is harder in the unstructured case, but one typically knows where the nonzero entries are from the mesh topology and distribution of degrees of freedom.
1223If using `DMPlex` ({any}`ch_unstructured`) for unstructured meshes, the nonzero locations will be identified in `DMCreateMatrix()` and the procedure above can be used.
1224Most external packages for unstructured meshes have similar functionality.
1225
1226One need not necessarily use a `MatColoring` object to determine a
1227coloring. For example, if a grid can be colored directly (without using
1228the associated matrix), then that coloring can be provided to
1229`MatFDColoringCreate()`. Note that the user must always preset the
1230nonzero structure in the matrix regardless of which coloring routine is
1231used.
1232
1233PETSc provides the following coloring algorithms, which can be selected using `MatColoringSetType()` or via the command line argument `-mat_coloring_type`.
1234
1235```{eval-rst}
1236.. list-table::
1237   :header-rows: 1
1238
1239   * - Algorithm
1240     - ``MatColoringType``
1241     - ``-mat_coloring_type``
1242     - Parallel
1243   * - smallest-last :cite:`more84`
1244     - ``MATCOLORINGSL``
1245     - ``sl``
1246     - No
1247   * - largest-first :cite:`more84`
1248     - ``MATCOLORINGLF``
1249     - ``lf``
1250     - No
1251   * - incidence-degree :cite:`more84`
1252     - ``MATCOLORINGID``
1253     - ``id``
1254     - No
1255   * - Jones-Plassmann :cite:`jp:pcolor`
1256     - ``MATCOLORINGJP``
1257     - ``jp``
1258     - Yes
1259   * - Greedy
1260     - ``MATCOLORINGGREEDY``
1261     - ``greedy``
1262     - Yes
1263   * - Natural (1 color per column)
1264     - ``MATCOLORINGNATURAL``
1265     - ``natural``
1266     - Yes
1267   * - Power (:math:`A^k` followed by 1-coloring)
1268     - ``MATCOLORINGPOWER``
1269     - ``power``
1270     - Yes
1271```
1272
1273As for the matrix-free computation of Jacobians ({any}`sec_nlmatrixfree`), two parameters affect the accuracy of the
1274finite difference Jacobian approximation. These are set with the command
1275
1276```
1277MatFDColoringSetParameters(MatFDColoring fdcoloring,PetscReal rerror,PetscReal umin);
1278```
1279
1280The parameter `rerror` is the square root of the relative error in the
1281function evaluations, $e_{rel}$; the default is the square root of
1282machine epsilon (about $10^{-8}$ in double precision), which
1283assumes that the functions are evaluated approximately to floating-point
1284precision accuracy. The second parameter, `umin`, is a bit more
1285involved; its default is $10^{-6}$. Column $i$ of the
Jacobian matrix (denoted by $F'_{:i}$) is approximated by the
1287formula
1288
1289$$
1290F'_{:i} \approx \frac{F(u + h*dx_{i}) - F(u)}{h}
1291$$
1292
1293where $h$ is computed via:
1294
1295$$
1296h = e_{\text{rel}} \cdot \begin{cases}
1297u_{i}             &    \text{if $|u_{i}| > u_{\min}$} \\
1298u_{\min} \cdot \operatorname{sign}(u_{i})  & \text{otherwise}.
1299\end{cases}
1300$$
1301
1302for `MATMFFD_DS` or:
1303
1304$$
1305h = e_{\text{rel}} \sqrt{\|u\|}
1306$$
1307
1308for `MATMFFD_WP` (default). These parameters may be set from the options
1309database with
1310
1311```
1312-mat_fd_coloring_err <err>
1313-mat_fd_coloring_umin <umin>
1314-mat_fd_type <htype>
1315```
1316
Note that the `MatColoring` types `MATCOLORINGSL`, `MATCOLORINGLF`, and
`MATCOLORINGID` are sequential algorithms. `MATCOLORINGJP` and
1319`MATCOLORINGGREEDY` are parallel algorithms, although in practice they
1320may create more colors than the sequential algorithms. If one computes
1321the coloring `iscoloring` reasonably with a parallel algorithm or by
1322knowledge of the discretization, the routine `MatFDColoringCreate()`
1323is scalable. An example of this for 2D distributed arrays is given below
1324that uses the utility routine `DMCreateColoring()`.
1325
1326```
DMCreateColoring(da, IS_COLORING_GHOSTED, &iscoloring);
MatFDColoringCreate(J, iscoloring, &fdcoloring);
MatFDColoringSetFromOptions(fdcoloring);
ISColoringDestroy(&iscoloring);
1331```
1332
Note that the routine `MatFDColoringCreate()` is currently supported
only for the AIJ and BAIJ matrix formats.
1335
1336(sec_vi)=
1337
1338## Variational Inequalities
1339
1340`SNES` can also solve (differential) variational inequalities with box (bound) constraints.
1341These are nonlinear algebraic systems with additional inequality
1342constraints on some or all of the variables:
1343$L_i \le u_i \le H_i$. For example, the pressure variable cannot be negative.
1344Some, or all, of the lower bounds may be
1345negative infinity (indicated to PETSc with `SNES_VI_NINF`) and some, or
1346all, of the upper bounds may be infinity (indicated by `SNES_VI_INF`).
1347The commands
1348
1349```
1350SNESVISetVariableBounds(SNES,Vec L,Vec H);
1351SNESVISetComputeVariableBounds(SNES snes, PetscErrorCode (*compute)(SNES,Vec,Vec))
1352```
1353
are used to indicate that one is solving a variational inequality. Problems with box constraints can be solved with
the reduced space, `SNESVINEWTONRSLS`, and semi-smooth, `SNESVINEWTONSSLS`, solvers.

The option `-snes_vi_monitor` turns on extra monitoring of the active set
associated with the bounds, and `-snes_vi_type` allows selecting from
several VI solvers; the default is preferred.
1361
1362`SNESLineSearchSetPreCheck()` and `SNESLineSearchSetPostCheck()` can also be used to control properties
1363of the steps selected by `SNES`.
1364
1365(sec_snespc)=
1366
1367## Nonlinear Preconditioning
1368
1369The mathematical framework of nonlinear preconditioning is explained in detail in {cite}`bruneknepleysmithtu15`.
1370Nonlinear preconditioning in PETSc involves the use of an inner `SNES`
1371instance to define the step for an outer `SNES` instance. The inner
1372instance may be extracted using
1373
1374```
1375SNESGetNPC(SNES snes,SNES *npc);
1376```
1377
1378and passed run-time options using the `-npc_` prefix. Nonlinear
1379preconditioning comes in two flavors: left and right. The side may be
1380changed using `-snes_npc_side` or `SNESSetNPCSide()`. Left nonlinear
1381preconditioning redefines the nonlinear function as the action of the
1382nonlinear preconditioner $\mathbf{M}$;
1383
1384$$
\mathbf{F}_{M}(\mathbf{x}) = \mathbf{M}(\mathbf{x},\mathbf{b}) - \mathbf{x}.
1386$$
1387
Right nonlinear preconditioning redefines the nonlinear function as the
original function applied to the action of the nonlinear preconditioner;
1390
1391$$
1392\mathbf{F}(\mathbf{M}(\mathbf{x},\mathbf{b})) = \mathbf{b},
1393$$
1394
1395which can be interpreted as putting the preconditioner into “striking
1396distance” of the solution by outer acceleration.
1397
1398In addition, basic patterns of solver composition are available with the
1399`SNESType` `SNESCOMPOSITE`. This allows for two or more `SNES`
instances to be combined additively or multiplicatively. On the command
line, a set of `SNES` types may be given as a comma-separated list
argument to `-snes_composite_sneses`. There are additive
1403(`SNES_COMPOSITE_ADDITIVE`), additive with optimal damping
1404(`SNES_COMPOSITE_ADDITIVEOPTIMAL`), and multiplicative
1405(`SNES_COMPOSITE_MULTIPLICATIVE`) variants which may be set with
1406
1407```
1408SNESCompositeSetType(SNES,SNESCompositeType);
1409```
1410
1411New subsolvers may be added to the composite solver with
1412
1413```
1414SNESCompositeAddSNES(SNES,SNESType);
1415```
1416
1417and accessed with
1418
1419```
1420SNESCompositeGetSNES(SNES,PetscInt,SNES *);
1421```
1422
1423```{eval-rst}
1424.. bibliography:: /petsc.bib
1425   :filter: docname in docnames
1426```
1427