(ch_ts)=

# TS: Scalable ODE and DAE Solvers

The `TS` library provides a framework for the scalable solution of
ODEs and DAEs arising from the discretization of time-dependent PDEs.

**Simple Example:** Consider the PDE

$$
u_t = u_{xx}
$$

discretized with centered finite differences in space yielding the
semi-discrete equation

$$
\begin{aligned}
          (u_i)_t & =  & \frac{u_{i+1} - 2 u_{i} + u_{i-1}}{h^2}, \\
           u_t      &  = & \tilde{A} u;\end{aligned}
$$

or with a piecewise linear finite element approximation in space
$u(x,t) \doteq \sum_i \xi_i(t) \phi_i(x)$ yielding the
semi-discrete equation

$$
B {\xi}'(t) = A \xi(t)
$$

Now applying the backward Euler method results in

$$
( B - dt^n A  ) u^{n+1} = B u^n,
$$

in which

$$
{u^n}_i = \xi_i(t_n) \doteq u(x_i,t_n),
$$

$$
{\xi}'(t_{n+1}) \doteq \frac{{u^{n+1}}_i - {u^{n}}_i }{dt^{n}},
$$

$A$ is the stiffness matrix, and $B$ is the identity for
finite differences or the mass matrix for the finite element method.

The PETSc interface for solving time-dependent problems assumes the
problem is written in the form

$$
F(t,u,\dot{u}) = G(t,u), \quad u(t_0) = u_0.
$$

In general, this is a differential algebraic equation (DAE) [^id5]. For
ODEs with nontrivial mass matrices, such as arise in FEM, the implicit/DAE
interface significantly reduces the overhead of preparing the system for
the algebraic solvers (`SNES`/`KSP`) by having the user assemble the
correctly shifted matrix. Therefore this interface is also useful for
ODE systems.

To solve an ODE or DAE one uses:

- Function $F(t,u,\dot{u})$

  ```
  TSSetIFunction(TS ts, Vec R, PetscErrorCode (*f)(TS, PetscReal, Vec, Vec, Vec, void *), void *funP);
  ```

  The vector `R` is an optional location to store the residual. The
  arguments to the function `f()` are the timestep context, current
  time, input state $u$, input time derivative $\dot{u}$,
  and the (optional) user-provided context `funP`. If
  $F(t,u,\dot{u}) = \dot{u}$, then one need not call this
  function.

- Function $G(t,u)$, if it is nonzero, is provided with the
  function

  ```
  TSSetRHSFunction(TS ts, Vec R, PetscErrorCode (*f)(TS, PetscReal, Vec, Vec, void *), void *funP);
  ```

- Jacobian
  $\sigma F_{\dot{u}}(t^n,u^n,\dot{u}^n) + F_u(t^n,u^n,\dot{u}^n)$

  If using a fully implicit or semi-implicit (IMEX) method, one can also
  provide an appropriate (approximate) Jacobian matrix of $F()$ with

  ```
  TSSetIJacobian(TS ts, Mat A, Mat B, PetscErrorCode (*fjac)(TS, PetscReal, Vec, Vec, PetscReal, Mat, Mat, void *), void *jacP);
  ```

  The arguments for the function `fjac()` are the timestep context,
  current time, input state $u$, input derivative
  $\dot{u}$, input shift $\sigma$, matrix $A$,
  matrix used to construct the preconditioner $B$, and the (optional) user-provided
  context `jacP`.

  The Jacobian needed for the nonlinear system is, by the chain rule,

  $$
  \begin{aligned}
      \frac{d F}{d u^n} &  = &  \frac{\partial F}{\partial \dot{u}}|_{u^n} \frac{\partial \dot{u}}{\partial u}|_{u^n} + \frac{\partial F}{\partial u}|_{u^n}.\end{aligned}
  $$

  For any ODE integration method the approximation of $\dot{u}$
  is linear in $u^n$, hence
  $\frac{\partial \dot{u}}{\partial u}|_{u^n} = \sigma$, where
  the shift $\sigma$ depends on the ODE integrator and time step
  but not on the function being integrated. Thus

  $$
  \begin{aligned}
      \frac{d F}{d u^n} &  = &    \sigma F_{\dot{u}}(t^n,u^n,\dot{u}^n) + F_u(t^n,u^n,\dot{u}^n).\end{aligned}
  $$

  This explains why the user-provided Jacobian has the given form for
  all integration methods. An equivalent way to derive the formula is
  to note that

  $$
  F(t^n,u^n,\dot{u}^n) = F(t^n,u^n,w+\sigma u^n)
  $$

  where $w$ is some linear combination of previous time solutions
  of $u$, so that

  $$
  \frac{d F}{d u^n} = \sigma F_{\dot{u}}(t^n,u^n,\dot{u}^n) + F_u(t^n,u^n,\dot{u}^n)
  $$

  again by the chain rule.

  For example, consider backward Euler’s method applied to the ODE
  $F(t, u, \dot{u}) = \dot{u} - f(t, u)$ with
  $\dot{u} = (u^n - u^{n-1})/\delta t$ and
  $\frac{\partial \dot{u}}{\partial u}|_{u^n} = 1/\delta t$,
  resulting in

  $$
  \begin{aligned}
      \frac{d F}{d u^n} & = &   (1/\delta t)F_{\dot{u}} + F_u(t^n,u^n,\dot{u}^n).\end{aligned}
  $$

  But $F_{\dot{u}} = 1$ in this special case, resulting in the
  expected Jacobian $I/\delta t - f_u(t,u^n)$.

- Jacobian $G_u$

  If using a fully implicit method and the function $G()$ is
  provided, one can also provide an appropriate (approximate)
  Jacobian matrix of $G()$ with

  ```
  TSSetRHSJacobian(TS ts, Mat A, Mat B, PetscErrorCode (*fjac)(TS, PetscReal, Vec, Mat, Mat, void *), void *jacP);
  ```

  The arguments for the function `fjac()` are the timestep context,
  current time, input state $u$, matrix $A$,
  matrix used to construct the preconditioner $B$, and the (optional) user-provided
  context `jacP`.
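
The backward Euler derivation above can be checked with a small standalone program. This is a hypothetical scalar illustration (plain C, not PETSc code): it applies Newton's method to $F(t,u,\dot{u}) = \dot{u} - \lambda u = 0$ using the shifted Jacobian $\sigma F_{\dot{u}} + F_u$ with $\sigma = 1/\delta t$, and reproduces the closed-form backward Euler recurrence $u^{n+1} = u^n/(1-\lambda\,\delta t)$.

```c
#include <math.h>

/* Backward Euler on F(t,u,udot) = udot - lambda*u = 0, solved per step
   with Newton using the shifted Jacobian sigma*F_udot + F_u, sigma = 1/dt. */
double be_solve(double lambda, double dt, int nsteps)
{
  double u = 1.0; /* initial condition u(0) = 1 */
  for (int n = 0; n < nsteps; n++) {
    double sigma = 1.0 / dt; /* shift: d(udot)/du for backward Euler */
    double unew  = u;        /* Newton iterate for u^{n+1} */
    for (int it = 0; it < 50; it++) {
      double udot = (unew - u) / dt;      /* BE approximation of udot */
      double F    = udot - lambda * unew; /* residual F(t,u,udot) */
      double J    = sigma - lambda;       /* sigma*F_udot + F_u, F_udot = 1 */
      double step = F / J;
      unew -= step;
      if (fabs(step) < 1e-14) break;
    }
    u = unew;
  }
  return u;
}
```

Since the residual is linear in `unew`, Newton converges in a single iteration here; for a nonlinear $f$ the same shifted-Jacobian structure is what the nonlinear solver uses at every stage.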

Providing appropriate $F()$ and $G()$ for your problem
allows easy runtime switching between explicit, semi-implicit
(IMEX), and fully implicit methods.

(sec_ts_basic)=

## Basic TS Options

The user first creates a `TS` object with the command

```
TSCreate(MPI_Comm comm,TS *ts);
```

The problem type is then set with

```
TSSetProblemType(TS ts,TSProblemType problemtype);
```

The `TSProblemType` is one of `TS_LINEAR` or `TS_NONLINEAR`.

To set up `TS` for solving an ODE, one must set the “initial
conditions” for the ODE with

```
TSSetSolution(TS ts, Vec initialsolution);
```

One can set the solution method with the routine

```
TSSetType(TS ts,TSType type);
```

Some of the currently supported types are `TSEULER`, `TSRK` (Runge-Kutta), `TSBEULER`, `TSCN` (Crank-Nicolson), `TSTHETA`, `TSGLLE` (generalized linear), and `TSPSEUDO`.
They can also be set with the options database option `-ts_type euler, rk, beuler, cn, theta, gl, pseudo, sundials, eimex, arkimex, rosw`.
A list of available methods is given in {any}`integrator_table`.

Set the initial time with the command

```
TSSetTime(TS ts,PetscReal time);
```

One can change the timestep with the command

```
TSSetTimeStep(TS ts,PetscReal dt);
```

and determine the current timestep with the routine

```
TSGetTimeStep(TS ts,PetscReal* dt);
```

Here, “current” refers to the timestep being used to attempt to promote
the solution from $u^n$ to $u^{n+1}.$

One sets the total number of timesteps to run or the total time to run
(whichever comes first) with the commands

```
TSSetMaxSteps(TS ts,PetscInt maxsteps);
TSSetMaxTime(TS ts,PetscReal maxtime);
```

and determines the behavior near the final time with

```
TSSetExactFinalTime(TS ts,TSExactFinalTimeOption eftopt);
```

where `eftopt` is one of
`TS_EXACTFINALTIME_STEPOVER`, `TS_EXACTFINALTIME_INTERPOLATE`, or
`TS_EXACTFINALTIME_MATCHSTEP`. One performs the requested number of
time steps with

```
TSSolve(TS ts,Vec U);
```

The solve call implicitly sets up the timestep context; this can be done
explicitly with

```
TSSetUp(TS ts);
```

One destroys the context with

```
TSDestroy(TS *ts);
```

and views it with

```
TSView(TS ts,PetscViewer viewer);
```

In place of `TSSolve()`, a single step can be taken using

```
TSStep(TS ts);
```

(sec_imex)=

## DAE Formulations

You can find a discussion of DAEs in {cite}`ascherpetzold1998` or [Scholarpedia](http://www.scholarpedia.org/article/Differential-algebraic_equations). In PETSc, TS deals with the semi-discrete form of the equations, so that space has already been discretized. If the DAE depends explicitly on the coordinate $x$, then this will just appear as any other data for the equation, not as an explicit argument. Thus we have

$$
F(t, u, \dot{u}) = 0
$$

In this form, only fully implicit solvers are appropriate. However, specialized solvers for restricted forms of DAE are supported by PETSc. Below we consider an ODE which is augmented with algebraic constraints on the variables.

### Hessenberg Index-1 DAE

> This is a Semi-Explicit Index-1 DAE which has the form

$$
\begin{aligned}
  \dot{u} &= f(t, u, z) \\
        0 &= h(t, u, z)
\end{aligned}
$$

where $z$ is a new constraint variable, and the Jacobian $\frac{dh}{dz}$ is non-singular everywhere. We have suppressed the $x$ dependence since it plays no role here. Using the non-singularity of the Jacobian and the Implicit Function Theorem, we can solve for $z$ in terms of $u$. This means we could, in principle, plug $z(u)$ into the first equation to obtain a simple ODE, even if this is not the numerical process we use. Below we show that this type of DAE can be used with IMEX schemes.
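
As a small numerical illustration (plain C, not PETSc; the functions and coefficients are hypothetical), the constraint of a semi-explicit index-1 DAE can be solved for $z$ at every step, reducing the system to an ODE in $u$; here $h(u,z) = z - u^2$, so $z = u^2$:

```c
/* Semi-explicit index-1 DAE: udot = f(u,z) = -u + z, 0 = h(u,z) = z - u*u.
   dh/dz = 1 is non-singular, so z can be eliminated at each step. */
double dae_index1(double u0, double dt, int nsteps)
{
  double u = u0;
  for (int n = 0; n < nsteps; n++) {
    double z = u * u;   /* solve 0 = h(u,z) for z (analytic here) */
    u += dt * (-u + z); /* explicit Euler on udot = f(u,z) */
  }
  return u;
}
```

By construction this produces exactly the same trajectory as integrating the reduced ODE $\dot{u} = -u + u^2$ directly, which is the point of the Implicit Function Theorem argument above.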

### Hessenberg Index-2 DAE

> This DAE has the form

$$
\begin{aligned}
  \dot{u} &= f(t, u, z) \\
        0 &= h(t, u)
\end{aligned}
$$

Notice that the constraint equation $h$ is not a function of the constraint variable $z$. This means that we cannot naively invert as we did in the index-1 case. Our strategy will be to convert this into an index-1 DAE using a time derivative, which loosely corresponds to the idea of the index being the number of derivatives necessary to get back to an ODE. If we differentiate the constraint equation with respect to time, we can use the ODE to simplify it,

$$
\begin{aligned}
        0 &= \dot{h}(t, u) \\
          &= \frac{dh}{du} \dot{u} + \frac{\partial h}{\partial t} \\
          &= \frac{dh}{du} f(t, u, z) + \frac{\partial h}{\partial t}
\end{aligned}
$$

If the Jacobian $\frac{dh}{du} \frac{df}{dz}$ is non-singular, then we have precisely a semi-explicit index-1 DAE, and we can once again use the PETSc IMEX tools to solve it. A common example of an index-2 DAE is the incompressible Navier-Stokes equations, since the continuity equation $\nabla\cdot u = 0$ does not involve the pressure. Using PETSc IMEX with the above conversion then corresponds to the Segregated Runge-Kutta method applied to this equation {cite}`colomesbadia2016`.

## Using Implicit-Explicit (IMEX) Methods

For “stiff” problems or those with multiple time scales, $F()$ will
be treated implicitly using a method suitable for stiff problems and
$G()$ will be treated explicitly when using an IMEX method like
TSARKIMEX. $F()$ is typically linear or weakly nonlinear while
$G()$ may have very strong nonlinearities, such as arise in
non-oscillatory methods for hyperbolic PDEs. The user provides three
pieces of information, the APIs for which have been described above.

- “Slow” part $G(t,u)$ using `TSSetRHSFunction()`.
- “Stiff” part $F(t,u,\dot u)$ using `TSSetIFunction()`.
- Jacobian $F_u + \sigma F_{\dot u}$ using `TSSetIJacobian()`.

The user needs to set `TSSetEquationType()` to `TS_EQ_IMPLICIT` or
higher if the problem is implicit, e.g.,
$F(t,u,\dot u) = M \dot u - f(t,u)$, where $M$ is not the
identity matrix; that is, if

- the problem is an implicit ODE (defined implicitly through
  `TSSetIFunction()`) or
- a DAE is being solved.

An IMEX problem representation can be made fully implicit by calling `TSARKIMEXSetFullyImplicit()`.
Note that multilevel preconditioners (e.g., `PCMG`) won't work in the fully implicit case; the
same holds true for any other `TS` type requiring a fully implicit formulation in case both
Jacobians are specified.

In PETSc, DAEs and ODEs are formulated as $F(t,u,\dot{u})=G(t,u)$, where $F()$ is meant to be integrated implicitly and $G()$ explicitly. An IMEX formulation such as $M\dot{u}=f(t,u)+g(t,u)$ requires the user to provide $M^{-1} g(t,u)$ or solve $g(t,u) - M x=0$ in place of $G(t,u)$. General cases such as $F(t,u,\dot{u})=G(t,u)$ are not amenable to IMEX Runge-Kutta, but can be solved by using fully implicit methods. Some use-case examples for `TSARKIMEX` are listed in {numref}`tab_DE_forms` and a list of methods with a summary of their properties is given in {any}`tab_IMEX_RK_PETSc`.

```{eval-rst}
.. list-table:: Use case examples for ``TSARKIMEX``
   :name: tab_DE_forms
   :widths: 40 40 80

   * - :math:`\dot{u} = g(t,u)`
     - nonstiff ODE
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= \dot{u} \\ G(t,u) &= g(t,u)\end{aligned}`
   * - :math:`M \dot{u} = g(t,u)`
     - nonstiff ODE with mass matrix
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= \dot{u} \\ G(t,u) &= M^{-1} g(t,u)\end{aligned}`
   * - :math:`\dot{u} = f(t,u)`
     - stiff ODE
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= \dot{u} - f(t,u) \\ G(t,u) &= 0\end{aligned}`
   * - :math:`M \dot{u} = f(t,u)`
     - stiff ODE with mass matrix
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= M \dot{u} - f(t,u) \\ G(t,u) &= 0\end{aligned}`
   * - :math:`\dot{u} = f(t,u) + g(t,u)`
     - stiff-nonstiff ODE
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= \dot{u} - f(t,u) \\ G(t,u) &= g(t,u)\end{aligned}`
   * - :math:`M \dot{u} = f(t,u) + g(t,u)`
     - stiff-nonstiff ODE with mass matrix
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= M\dot{u} - f(t,u) \\ G(t,u) &= M^{-1} g(t,u)\end{aligned}`
   * - :math:`\begin{aligned}\dot{u} &= f(t,u,z) + g(t,u,z)\\0 &= h(t,u,z)\end{aligned}`
     - semi-explicit index-1 DAE
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= \begin{pmatrix}\dot{u} - f(t,u,z)\\h(t, u, z)\end{pmatrix}\\G(t,u) &= g(t,u)\end{aligned}`
   * - :math:`f(t,u,\dot{u})=0`
     - fully implicit ODE/DAE
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= f(t,u,\dot{u})\\G(t,u) &= 0\end{aligned}`; the user needs to set ``TSSetEquationType()`` to ``TS_EQ_IMPLICIT`` or higher
```

{numref}`tab_IMEX_RK_PETSc` lists the currently available IMEX Runge-Kutta schemes. For each method, it gives the `-ts_arkimex_type` name, the reference, the total number of stages/implicit stages, the order/stage-order, the implicit stability properties (IM), stiff accuracy (SA), the existence of an embedded scheme, and dense output (DO).

```{eval-rst}
.. list-table:: IMEX Runge-Kutta schemes
  :name: tab_IMEX_RK_PETSc
  :header-rows: 1

  * - Name
    - Reference
    - Stages (IM)
    - Order (Stage)
    - IM
    - SA
    - Embed
    - DO
    - Remarks
  * - a2
    - based on CN
    - 2 (1)
    - 2 (2)
    - A-Stable
    - yes
    - yes (1)
    - yes (2)
    -
  * - l2
    - SSP2(2,2,2) :cite:`pareschi_2005`
    - 2 (2)
    - 2 (1)
    - L-Stable
    - yes
    - yes (1)
    - yes (2)
    - SSP SDIRK
  * - ars122
    - ARS122 :cite:`ascher_1997`
    - 2 (1)
    - 3 (1)
    - A-Stable
    - yes
    - yes (1)
    - yes (2)
    -
  * - 2c
    - :cite:`giraldo_2013`
    - 3 (2)
    - 2 (2)
    - L-Stable
    - yes
    - yes (1)
    - yes (2)
    - SDIRK
  * - 2d
    - :cite:`giraldo_2013`
    - 3 (2)
    - 2 (2)
    - L-Stable
    - yes
    - yes (1)
    - yes (2)
    - SDIRK
  * - 2e
    - :cite:`giraldo_2013`
    - 3 (2)
    - 2 (2)
    - L-Stable
    - yes
    - yes (1)
    - yes (2)
    - SDIRK
  * - prssp2
    - PRS(3,3,2) :cite:`pareschi_2005`
    - 3 (3)
    - 3 (1)
    - L-Stable
    - yes
    - no
    - no
    - SSP
  * - 3
    - :cite:`kennedy_2003`
    - 4 (3)
    - 3 (2)
    - L-Stable
    - yes
    - yes (2)
    - yes (2)
    - SDIRK
  * - bpr3
    - :cite:`boscarino_tr2011`
    - 5 (4)
    - 3 (2)
    - L-Stable
    - yes
    - no
    - no
    - SDIRK
  * - ars443
    - :cite:`ascher_1997`
    - 5 (4)
    - 3 (1)
    - L-Stable
    - yes
    - no
    - no
    - SDIRK
  * - 4
    - :cite:`kennedy_2003`
    - 6 (5)
    - 4 (2)
    - L-Stable
    - yes
    - yes (3)
    - yes
    - SDIRK
  * - 5
    - :cite:`kennedy_2003`
    - 8 (7)
    - 5 (2)
    - L-Stable
    - yes
    - yes (4)
    - yes (3)
    - SDIRK
```

ROSW methods are linearized implicit Runge-Kutta methods known as Rosenbrock
W-methods. They can accommodate inexact Jacobian matrices in their
formulation. The methods available in PETSc are listed in
{numref}`tab_IMEX_RosW_PETSc` below. For each method, the table gives the reference, the total number of stages and implicit stages, the scheme order and stage order, the implicit stability properties (IM), stiff accuracy (SA), the existence of an embedded scheme, dense output (DO), the capacity to use inexact Jacobian matrices (-W), and high-order integration of differential algebraic equations (PDAE).

```{eval-rst}
.. list-table:: Rosenbrock W-schemes
   :name: tab_IMEX_RosW_PETSc
   :header-rows: 1

   * - TS
     - Reference
     - Stages (IM)
     - Order (Stage)
     - IM
     - SA
     - Embed
     - DO
     - -W
     - PDAE
     - Remarks
   * - theta1
     - classical
     - 1(1)
     - 1(1)
     - L-Stable
     - -
     - -
     - -
     - -
     - -
     - -
   * - theta2
     - classical
     - 1(1)
     - 2(2)
     - A-Stable
     - -
     - -
     - -
     - -
     - -
     - -
   * - 2m
     - Zoltan
     - 2(2)
     - 2(1)
     - L-Stable
     - No
     - Yes(1)
     - Yes(2)
     - Yes
     - No
     - SSP
   * - 2p
     - Zoltan
     - 2(2)
     - 2(1)
     - L-Stable
     - No
     - Yes(1)
     - Yes(2)
     - Yes
     - No
     - SSP
   * - ra3pw
     - :cite:`rang_2005`
     - 3(3)
     - 3(1)
     - A-Stable
     - No
     - Yes
     - Yes(2)
     - No
     - Yes(3)
     - -
   * - ra34pw2
     - :cite:`rang_2005`
     - 4(4)
     - 3(1)
     - L-Stable
     - Yes
     - Yes
     - Yes(3)
     - Yes
     - Yes(3)
     - -
   * - rodas3
     - :cite:`sandu_1997`
     - 4(4)
     - 3(1)
     - L-Stable
     - Yes
     - Yes
     - No
     - No
     - Yes
     - -
   * - sandu3
     - :cite:`sandu_1997`
     - 3(3)
     - 3(1)
     - L-Stable
     - Yes
     - Yes
     - Yes(2)
     - No
     - No
     - -
   * - assp3p3s1c
     - unpub.
     - 3(2)
     - 3(1)
     - A-Stable
     - No
     - Yes
     - Yes(2)
     - Yes
     - No
     - SSP
   * - lassp3p4s2c
     - unpub.
     - 4(3)
     - 3(1)
     - L-Stable
     - No
     - Yes
     - Yes(3)
     - Yes
     - No
     - SSP
   * - ark3
     - unpub.
     - 4(3)
     - 3(1)
     - L-Stable
     - No
     - Yes
     - Yes(3)
     - Yes
     - No
     - IMEX-RK
```

## IMEX Methods for fast-slow systems

Consider a fast-slow ODE system

$$
\begin{aligned}
\dot{u}^{slow} & = f^{slow}(t, u^{slow},u^{fast}) \\
M \dot{u}^{fast} & = g^{fast}(t, u^{slow},u^{fast}) + f^{fast}(t, u^{slow},u^{fast})
\end{aligned}
$$

where $u^{slow}$ is the slow component and $u^{fast}$ is the
fast component. The fast component can be partitioned additively as
described above. Thus we want to treat $f^{slow}()$ and
$f^{fast}()$ explicitly and the other terms implicitly when using
TSARKIMEX. This is achieved by using the following APIs:

- `TSARKIMEXSetFastSlowSplit()` informs PETSc to use ARKIMEX to solve a fast-slow system.
- `TSRHSSplitSetIS()` specifies the index set for the slow/fast components.
- `TSRHSSplitSetRHSFunction()` specifies the parts to be handled explicitly, $f^{slow}()$ and $f^{fast}()$.
- `TSRHSSplitSetIFunction()` and `TSRHSSplitSetIJacobian()` specify the implicit part and its Jacobian.

Note that this ODE system can also be solved by padding the implicit part with zeros and using the standard IMEX methods. However, one then needs to provide the full-dimensional Jacobian, whereas only a partial Jacobian is needed for the fast-slow split, which is more efficient in storage and speed.

(sec_ts_glee)=

## GLEE methods

In this section, we describe explicit and implicit time stepping methods
with global error estimation that are introduced in
{cite}`constantinescu_tr2016b`. The solution vector for a
GLEE method is either \[$y$, $\tilde{y}$\] or
\[$y$, $\varepsilon$\], where $y$ is the solution,
$\tilde{y}$ is the “auxiliary solution,” and $\varepsilon$
is the error. The working vector that `TSGLEE` uses is $Y$ =
\[$y$, $\tilde{y}$\], or \[$y$, $\varepsilon$\]. A
GLEE method is defined by

- $(p,r,s)$: (order, steps, and stages),
- $\gamma$: factor representing the global error ratio,
- $A, U, B, V$: method coefficients,
- $S$: starting method to compute the working vector from the
  solution (say at the beginning of time integration) so that
  $Y = Sy$,
- $F$: finalizing method to compute the solution from the working
  vector, $y = FY$,
- $F_\text{embed}$: coefficients for computing the auxiliary
  solution $\tilde{y}$ from the working vector
  ($\tilde{y} = F_\text{embed} Y$),
- $F_\text{error}$: coefficients to compute the estimated error
  vector from the working vector
  ($\varepsilon = F_\text{error} Y$),
- $S_\text{error}$: coefficients to initialize the auxiliary
  solution ($\tilde{y}$ or $\varepsilon$) from a specified
  error vector ($\varepsilon$). It is currently implemented only
  for $r = 2$. We have $y_\text{aux} =
  S_\text{error}[0] \varepsilon + S_\text{error}[1] y$, where
  $y_\text{aux}$ is the second component of the working vector
  $Y$.

The methods can be described in two mathematically equivalent forms:
propagating two components (“$y\tilde{y}$ form”) and propagating the
solution and its estimated error (“$y\varepsilon$ form”). The two
forms are not explicitly specified in `TSGLEE`; rather, the specific
values of $B, U, S, F, F_\text{embed}$, and $F_\text{error}$
characterize whether the method is in $y\tilde{y}$ or
$y\varepsilon$ form.

The API used by this `TS` method includes:

- `TSGetSolutionComponents`: Get all the solution components of the
  working vector

  ```
  PetscCall(TSGetSolutionComponents(TS, int*, Vec*))
  ```

  Call with `NULL` as the last argument to get the total number of
  components in the working vector $Y$ (this is $r$ (not
  $r-1$)), then call to get the $i$-th solution component.

- `TSGetAuxSolution`: Returns the auxiliary solution
  $\tilde{y}$ (computed as $F_\text{embed} Y$)

  ```
  PetscCall(TSGetAuxSolution(TS, Vec*))
  ```

- `TSGetTimeError`: Returns the estimated error vector
  $\varepsilon$ (computed as $F_\text{error} Y$ if
  $n=0$ or restores the error estimate at the end of the previous
  step if $n=-1$)

  ```
  PetscCall(TSGetTimeError(TS, PetscInt n, Vec*))
  ```

- `TSSetTimeError`: Initializes the auxiliary solution
  ($\tilde{y}$ or $\varepsilon$) for a specified initial
  error.

  ```
  PetscCall(TSSetTimeError(TS, Vec))
  ```

The local error is estimated as $\varepsilon(n+1)-\varepsilon(n)$.
This is used for error control. The error in $y\tilde{y}$
GLEE is
$\varepsilon(n) = \frac{1}{1-\gamma} (\tilde{y}(n) - y(n))$.

Note that $y$ and $\tilde{y}$ are reported to `TSAdapt`
`basic` (`TSADAPTBASIC`), and thus it computes the local error as
$\varepsilon_{loc} = (\tilde{y} - y)$. However, the actual local error is
$\varepsilon_{loc} = \varepsilon_{n+1} - \varepsilon_n = \frac{1}{1-\gamma} [(\tilde{y} - y)_{n+1} - (\tilde{y} - y)_n]$.

{numref}`tab_IMEX_GLEE_PETSc` lists currently available GL schemes with global error estimation {cite}`constantinescu_tr2016b`.

```{eval-rst}
.. list-table:: GL schemes with global error estimation
   :name: tab_IMEX_GLEE_PETSc
   :header-rows: 1

   * - TS
     - Reference
     - IM/EX
     - :math:`(p,r,s)`
     - :math:`\gamma`
     - Form
     - Notes
   * - ``TSGLEEi1``
     - ``BE1``
     - IM
     - :math:`(1,3,2)`
     - :math:`0.5`
     - :math:`y\varepsilon`
     - Based on backward Euler
   * - ``TSGLEE23``
     - ``23``
     - EX
     - :math:`(2,3,2)`
     - :math:`0`
     - :math:`y\varepsilon`
     -
   * - ``TSGLEE24``
     - ``24``
     - EX
     - :math:`(2,4,2)`
     - :math:`0`
     - :math:`y\tilde{y}`
     -
   * - ``TSGLEE25I``
     - ``25i``
     - EX
     - :math:`(2,5,2)`
     - :math:`0`
     - :math:`y\tilde{y}`
     -
   * - ``TSGLEE35``
     - ``35``
     - EX
     - :math:`(3,5,2)`
     - :math:`0`
     - :math:`y\tilde{y}`
     -
   * - ``TSGLEEEXRK2A``
     - ``exrk2a``
     - EX
     - :math:`(2,6,2)`
     - :math:`0.25`
     - :math:`y\varepsilon`
     -
   * - ``TSGLEERK32G1``
     - ``rk32g1``
     - EX
     - :math:`(3,8,2)`
     - :math:`0`
     - :math:`y\varepsilon`
     -
   * - ``TSGLEERK285EX``
     - ``rk285ex``
     - EX
     - :math:`(2,9,2)`
     - :math:`0.25`
     - :math:`y\varepsilon`
     -
```

## Using fully implicit methods

To use a fully implicit method like `TSTHETA`, `TSBDF`, or `TSDIRK`, either
provide the Jacobian of $F()$ (and of $G()$ if $G()$ is
provided) or use a `DM` that provides a coloring so the Jacobian can
be computed efficiently via finite differences.

## Using the Explicit Runge-Kutta timestepper with variable timesteps

The explicit Euler and Runge-Kutta methods require the ODE be in the
form

$$
\dot{u} = G(u,t).
$$

The user can either call `TSSetRHSFunction()` and/or they can call
`TSSetIFunction()` (so long as the function provided to
`TSSetIFunction()` is equivalent to $\dot{u} + \tilde{F}(t,u)$)
but the Jacobians need not be provided. [^id6]

The Explicit Runge-Kutta timestepper with variable timesteps is an
implementation of the standard Runge-Kutta method with an embedded method. The
error in each timestep is calculated using the solutions from the
Runge-Kutta method and its embedded method (the 2-norm of the difference
is used). The default method is the $3$rd-order Bogacki-Shampine
method with a $2$nd-order embedded method (`TSRK3BS`). Other
available methods are the $5$th-order Fehlberg RK scheme with a
$4$th-order embedded method (`TSRK5F`), the
$5$th-order Dormand-Prince RK scheme with a $4$th-order
embedded method (`TSRK5DP`), the $5$th-order Bogacki-Shampine
RK scheme with a $4$th-order embedded method (`TSRK5BS`), and
the $6$th-, $7$th-, and $8$th-order robust Verner
RK schemes with a $5$th-, $6$th-, and $7$th-order
embedded method, respectively (`TSRK6VR`, `TSRK7VR`, `TSRK8VR`).
Variable timesteps cannot be used with RK schemes that do not have an
embedded method (`TSRK1FE` - $1$st-order, $1$-stage
forward Euler, `TSRK2A` - $2$nd-order, $2$-stage RK
scheme, `TSRK3` - $3$rd-order, $3$-stage RK scheme,
`TSRK4` - $4$th-order, $4$-stage RK scheme).
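
A single step of the default pair can be sketched in plain C (not PETSc code; the tableau is the standard Bogacki-Shampine 3(2) pair) to show how the embedded solution yields the error estimate used for step-size control:

```c
#include <math.h>

typedef double (*RHS)(double t, double y);

/* One Bogacki-Shampine 3(2) step: returns the 3rd-order solution and
   writes the embedded error estimate |y3 - y2| to *err. */
double rk3bs_step(RHS f, double t, double y, double dt, double *err)
{
  double k1 = f(t, y);
  double k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1);
  double k3 = f(t + 0.75 * dt, y + 0.75 * dt * k2);
  double y3 = y + dt * (2.0 / 9.0 * k1 + 1.0 / 3.0 * k2 + 4.0 / 9.0 * k3);
  double k4 = f(t + dt, y3); /* FSAL stage, reusable as k1 of the next step */
  double y2 = y + dt * (7.0 / 24.0 * k1 + 0.25 * k2 + 1.0 / 3.0 * k3 + 0.125 * k4);
  *err = fabs(y3 - y2);
  return y3;
}

/* Trivial right-hand side for checking: ydot = 1 is integrated exactly,
   so the embedded error estimate essentially vanishes. */
double rhs_const(double t, double y) { (void)t; (void)y; return 1.0; }
```

The adaptor then compares this estimate against the tolerances to accept or reject the step and to pick the next timestep.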

## Special Cases

- $\dot{u} = A u.$ First compute the matrix $A$ then call

  ```
  TSSetProblemType(ts,TS_LINEAR);
  TSSetRHSFunction(ts,NULL,TSComputeRHSFunctionLinear,NULL);
  TSSetRHSJacobian(ts,A,A,TSComputeRHSJacobianConstant,NULL);
  ```

  or

  ```
  TSSetProblemType(ts,TS_LINEAR);
  TSSetIFunction(ts,NULL,TSComputeIFunctionLinear,NULL);
  TSSetIJacobian(ts,A,A,TSComputeIJacobianConstant,NULL);
  ```

- $\dot{u} = A(t) u.$ Use

  ```
  TSSetProblemType(ts,TS_LINEAR);
  TSSetRHSFunction(ts,NULL,TSComputeRHSFunctionLinear,NULL);
  TSSetRHSJacobian(ts,A,A,YourComputeRHSJacobian, &appctx);
  ```

  where `YourComputeRHSJacobian()` is a function you provide that
  computes $A$ as a function of time. Or use

  ```
  TSSetProblemType(ts,TS_LINEAR);
  TSSetIFunction(ts,NULL,TSComputeIFunctionLinear,NULL);
  TSSetIJacobian(ts,A,A,YourComputeIJacobian, &appctx);
  ```

## Monitoring and visualizing solutions

- `-ts_monitor` - prints the time and timestep at each iteration.
- `-ts_adapt_monitor` - prints information about the timestep
  adaptation calculation at each iteration.
- `-ts_monitor_lg_timestep` - plots the size of each timestep,
  `TSMonitorLGTimeStep()`.
- `-ts_monitor_lg_solution` - for ODEs with only a few components
  (not arising from the discretization of a PDE) plots the solution as
  a function of time, `TSMonitorLGSolution()`.
- `-ts_monitor_lg_error` - for ODEs with only a few components plots
  the error as a function of time, only if `TSSetSolutionFunction()`
  is provided, `TSMonitorLGError()`.
- `-ts_monitor_draw_solution` - plots the solution at each iteration,
  `TSMonitorDrawSolution()`.
- `-ts_monitor_draw_error` - plots the error at each iteration, only
  if `TSSetSolutionFunction()` is provided,
  `TSMonitorDrawSolution()`.
- `-ts_monitor_solution binary[:filename]` - saves the solution at each
  iteration to a binary file, `TSMonitorSolution()`. Solution viewers work
  with other time-aware formats, e.g., `-ts_monitor_solution cgns:sol.cgns`,
  and can output one solution every 10 time steps by adding
  `-ts_monitor_solution_interval 10`. Use `-ts_monitor_solution_interval -1`
  to output data only at the end of the time loop.
- `-ts_monitor_solution_vtk <filename-%03D.vts>` - saves the solution
  at each iteration to a file in VTK format,
  `TSMonitorSolutionVTK()`.

(sec_ts_error_control)=

## Error control via variable time-stepping

Most of the time stepping methods available in PETSc have an error
estimation and error control mechanism. This mechanism is implemented by
changing the step size in order to maintain user-specified absolute and
relative tolerances. The PETSc object responsible for error control is
`TSAdapt`. The available `TSAdapt` types are listed in the following table.

```{eval-rst}
.. list-table:: ``TSAdapt``: available adaptors
   :name: tab_adaptors
   :header-rows: 1

   * - ID
     - Name
     - Notes
   * - ``TSADAPTNONE``
     - ``none``
     - no adaptivity
   * - ``TSADAPTBASIC``
     - ``basic``
     - the default adaptor
   * - ``TSADAPTGLEE``
     - ``glee``
     - extension of the basic adaptor to treat :math:`{\rm Tol}_{\rm A}` and :math:`{\rm Tol}_{\rm R}` as separate criteria. It can also control global errors if the integrator (e.g., ``TSGLEE``) provides this information
   * - ``TSADAPTDSP``
     - ``dsp``
     - adaptive controller for time-stepping based on digital signal processing
```
1004
1005When using `TSADAPTBASIC` (the default), the user typically provides a
1006desired absolute ${\rm Tol}_{\rm A}$ or a relative
1007${\rm Tol}_{\rm R}$ error tolerance by invoking
1008`TSSetTolerances()` or at the command line with options `-ts_atol`
1009and `-ts_rtol`. The error estimate is based on the local truncation
1010error, so for every step the algorithm verifies that the estimated local
1011truncation error satisfies the tolerances provided by the user and
1012computes a new step size to be taken. For multistage methods, the local
truncation error is obtained by comparing the solution $y$ to a lower
1014order $\widehat{p}=p-1$ approximation, $\widehat{y}$, where
1015$p$ is the order of the method and $\widehat{p}$ the order
1016of $\widehat{y}$.
1017
1018The adaptive controller at step $n$ computes a tolerance level
1019
1020$$
1021\begin{aligned}
1022Tol_n(i)&=&{\rm Tol}_{\rm A}(i) +  \max(y_n(i),\widehat{y}_n(i)) {\rm Tol}_{\rm R}(i)\,,\end{aligned}
1023$$
1024
and forms the weighted local truncation error

$$
\begin{aligned}
\rm wlte_n&=& \sqrt{\frac{1}{m} \sum_{i=1}^{m}\left(\frac{y_n(i)
  -\widehat{y}_n(i)}{Tol_n(i)}\right)^2}\,,\end{aligned}
$$
1032
1033where the errors are computed componentwise, $m$ is the dimension
1034of $y$ and `-ts_adapt_wnormtype` is `2` (default). If
1035`-ts_adapt_wnormtype` is `infinity` (max norm), then
1036
1037$$
1038\begin{aligned}
1039\rm wlte_n&=& \max_{1\dots m}\frac{\left\|y_n(i)
1040  -\widehat{y}_n(i)\right\|}{Tol(i)}\,.\end{aligned}
1041$$
1042
1043The error tolerances are satisfied when $\rm wlte\le 1.0$.
1044
1045The next step size is based on this error estimate, and determined by
1046
1047$$
1048\begin{aligned}
1049 \Delta t_{\rm new}(t)&=&\Delta t_{\rm{old}} \min(\alpha_{\max},
1050 \max(\alpha_{\min}, \beta (1/\rm wlte)^\frac{1}{\widehat{p}+1}))\,,\end{aligned}
1051$$ (hnew)
1052
where $\alpha_{\min}=$ `-ts_adapt_clip[0]` and
$\alpha_{\max}=$ `-ts_adapt_clip[1]` keep the change in
$\Delta t$ to within a certain factor, and $\beta<1$ is
chosen through `-ts_adapt_safety` so that there is some margin by
which the tolerances are satisfied and the probability of
rejection is decreased.
1059
1060This adaptive controller works in the following way. After completing
1061step $k$, if $\rm wlte_{k+1} \le 1.0$, then the step is
1062accepted and the next step is modified according to
1063{eq}`hnew`; otherwise, the step is rejected and retaken
1064with the step length computed in {eq}`hnew`.
1065
1066## Handling of discontinuities
1067
1068For problems that involve discontinuous right-hand sides, one can set an
1069“event” function $g(t,u)$ for PETSc to detect and locate the times
1070of discontinuities (zeros of $g(t,u)$). Events can be defined
1071through the event monitoring routine
1072
1073```
TSSetEventHandler(TS ts, PetscInt nevents, PetscInt *direction, PetscBool *terminate,
                  PetscErrorCode (*indicator)(TS, PetscReal, Vec, PetscScalar *, PetscCtx eventP),
                  PetscErrorCode (*postevent)(TS, PetscInt, PetscInt[], PetscReal, Vec, PetscBool, PetscCtx eventP),
                  PetscCtx eventP);
1075```
1076
Here, `nevents` denotes the number of events, `direction` sets the
type of zero crossing to be detected for an event (+1 for positive
zero-crossing, -1 for negative zero-crossing, and 0 for both),
`terminate` conveys whether the time-stepping should continue or halt
when an event is located, `indicator` is a user-defined routine
that specifies the event description, and `postevent` is an optional
user-defined routine to take specific actions following an event.
1084
1085The arguments to `indicator()` are the timestep context, current
time, input state $u$, array of event function values, and the
1087(optional) user-provided context `eventP`.
1088
The arguments to the `postevent()` routine are the timestep context,
the number of events that occurred, the indices of those events, current time,
input state $u$, a boolean flag indicating a forward solve (1) or adjoint
solve (0), and the (optional) user-provided context `eventP`.
1093
1094(sec_tchem)=
1095
1096## Explicit integrators with finite element mass matrices
1097
1098Discretized finite element problems often have the form $M \dot u = G(t, u)$ where $M$ is the mass matrix.
1099Such problems can be solved using `DMTSSetIFunction()` with implicit integrators.
1100When $M$ is nonsingular (i.e., the problem is an ODE, not a DAE), explicit integrators can be applied to $\dot u = M^{-1} G(t, u)$ or $\dot u = \hat M^{-1} G(t, u)$, where $\hat M$ is the lumped mass matrix.
While the true mass matrix generally has a dense inverse, so that systems with it must be solved iteratively, the lumped mass matrix is diagonal (e.g., computed via collocated quadrature or row sums of $M$).
1102To have PETSc create and apply a (lumped) mass matrix automatically, first use `DMTSSetRHSFunction()` to specify $G$ and set a `PetscFE` using `DMAddField()` and `DMCreateDS()`, then call either `DMTSCreateRHSMassMatrix()` or `DMTSCreateRHSMassMatrixLumped()` to automatically create the mass matrix and a `KSP` that will be used to apply $M^{-1}$.
1103This `KSP` can be customized using the `"mass_"` prefix.
1104
1105(section_sa)=
1106
1107## Performing sensitivity analysis with the TS ODE Solvers
1108
1109The `TS` library provides a framework based on discrete adjoint models
1110for sensitivity analysis for ODEs and DAEs. The ODE/DAE solution process
1111(henceforth called the forward run) can be obtained by using either
1112explicit or implicit solvers in `TS`, depending on the problem
1113properties. Currently supported method types are `TSRK` (Runge-Kutta)
1114explicit methods and `TSTHETA` implicit methods, which include
1115`TSBEULER` and `TSCN`.
1116
1117### Using the discrete adjoint methods
1118
1119Consider the ODE/DAE
1120
1121$$
1122F(t,y,\dot{y},p) = 0, \quad y(t_0)=y_0(p) \quad t_0 \le t \le t_F
1123$$
1124
1125and the cost function(s)
1126
1127$$
1128\Psi_i(y_0,p) = \Phi_i(y_F,p) + \int_{t_0}^{t_F} r_i(y(t),p,t)dt \quad i=1,...,n_\text{cost}.
1129$$
1130
1131The `TSAdjoint` routines of PETSc provide
1132
1133$$
1134\frac{\partial \Psi_i}{\partial y_0} = \lambda_i
1135$$
1136
1137and
1138
1139$$
1140\frac{\partial \Psi_i}{\partial p} = \mu_i + \lambda_i (\frac{\partial y_0}{\partial p}).
1141$$
1142
1143To perform the discrete adjoint sensitivity analysis one first sets up
1144the `TS` object for a regular forward run but with one extra function
1145call
1146
1147```
TSSetSaveTrajectory(TS ts);
1149```
1150
1151then calls `TSSolve()` in the usual manner.
1152
One must create two arrays of $n_\text{cost}$ vectors
$\lambda$ and $\mu$ (if there are no parameters $p$,
one can use `NULL` for the $\mu$ array). The
$\lambda$ vectors have the same dimension and parallel layout as
the solution vector for the ODE, while the $\mu$ vectors are of dimension
$p$; when $p$ is small, usually all its elements are on the
first MPI process and the vectors have no entries on the other
processes. $\lambda_i$ and $\mu_i$ should be initialized with
the values $d\Phi_i/dy|_{t=t_F}$ and $d\Phi_i/dp|_{t=t_F}$,
respectively. Then one calls
1163
1164```
1165TSSetCostGradients(TS ts,PetscInt numcost, Vec *lambda,Vec *mu);
1166```
1167
1168where `numcost` denotes $n_\text{cost}$.
1169If $F()$ is a function of $p$ one needs to also provide the
1170Jacobian $-F_p$ with
1171
1172```
1173TSSetRHSJacobianP(TS ts,Mat Amat,PetscErrorCode (*fp)(TS,PetscReal,Vec,Mat,PetscCtx),PetscCtx ctx)
1174```
1175
1176or
1177
1178```
1179TSSetIJacobianP(TS ts,Mat Amat,PetscErrorCode (*fp)(TS,PetscReal,Vec,Vec,PetscReal,Mat,PetscCtx),PetscCtx ctx)
1180```
1181
1182or both, depending on which form is used to define the ODE.
1183
1184The arguments for the function `fp()` are the timestep context,
1185current time, $y$, and the (optional) user-provided context.
1186
1187If there is an integral term in the cost function, i.e. $r$ is
1188nonzero, it can be transformed into another ODE that is augmented to the
1189original ODE. To evaluate the integral, one needs to create a child
`TS` object by calling
1191
1192```
1193TSCreateQuadratureTS(TS ts,PetscBool fwd,TS *quadts);
1194```
1195
1196and provide the ODE RHS function (which evaluates the integrand
1197$r$) with
1198
1199```
TSSetRHSFunction(TS quadts,Vec R,PetscErrorCode (*rf)(TS,PetscReal,Vec,Vec,PetscCtx),PetscCtx ctx)
1201```
1202
1203Similar to the settings for the original ODE, Jacobians of the integrand
1204can be provided with
1205
1206```
TSSetRHSJacobian(TS quadts,Mat DRDU,Mat DRDU,PetscErrorCode (*drdyf)(TS,PetscReal,Vec,Mat,Mat,PetscCtx),PetscCtx ctx)
TSSetRHSJacobianP(TS quadts,Mat DRDP,PetscErrorCode (*drdpf)(TS,PetscReal,Vec,Mat,PetscCtx),PetscCtx ctx)
1209```
1210
1211where $\mathrm{drdyf}= dr /dy$, $\mathrm{drdpf} = dr /dp$.
1212Since the integral term is additive to the cost function, its gradient
1213information will be included in $\lambda$ and $\mu$.
1214
1215Lastly, one starts the backward run by calling
1216
1217```
TSAdjointSolve(TS ts);
1219```
1220
1221One can obtain the value of the integral term by calling
1222
1223```
TSGetCostIntegral(TS ts,Vec *q);
1225```
1226
or by directly accessing the solution vector used by `quadts`.
1228
The second argument of `TSCreateQuadratureTS()` allows one to choose
whether the integral term is evaluated in the forward run (inside
`TSSolve()`) or in the backward run (inside `TSAdjointSolve()`) when
1232`TSSetCostGradients()` and `TSSetCostIntegrand()` are called before
1233`TSSolve()`. Note that this also allows for evaluating the integral
1234without having to use the adjoint solvers.
1235
1236To provide a better understanding of the use of the adjoint solvers, we
1237introduce a simple example, corresponding to
1238<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/power_grid/ex3sa.c.html">TS Power Grid Tutorial ex3sa</a>.
The problem is to study the dynamic security of a power system when there are
1240credible contingencies such as short-circuits or loss of generators,
1241transmission lines, or loads. The dynamic security constraints are
1242incorporated as equality constraints in the form of discretized
1243differential equations and inequality constraints for bounds on the
1244trajectory. The governing ODE system is
1245
1246$$
1247\begin{aligned}
1248    \phi' &= &\omega_B (\omega - \omega_S)  \\
    2H/\omega_S \, \omega' & =& p_m - p_{max} \sin(\phi) -D (\omega - \omega_S), \quad t_0 \leq t \leq t_F,\end{aligned}
1250$$
1251
1252where $\phi$ is the phase angle and $\omega$ is the
1253frequency.
1254
1255The initial conditions at time $t_0$ are
1256
1257$$
1258\begin{aligned}
1259\phi(t_0) &=& \arcsin \left( p_m / p_{max} \right), \\
\omega(t_0) & =& 1.\end{aligned}
1261$$
1262
1263$p_{max}$ is a positive number when the system operates normally.
1264At an event such as fault incidence/removal, $p_{max}$ will change
1265to $0$ temporarily and back to the original value after the fault
1266is fixed. The objective is to maximize $p_m$ subject to the above
1267ODE constraints and $\phi<\phi_S$ during all times. To accommodate
1268the inequality constraint, we want to compute the sensitivity of the
1269cost function
1270
1271$$
1272\Psi(p_m,\phi) = -p_m + c \int_{t_0}^{t_F} \left( \max(0, \phi - \phi_S ) \right)^2 dt
1273$$
1274
with respect to the parameter $p_m$. `numcost` is 1
since the cost function is scalar.
1277
1278For ODE solution, PETSc requires user-provided functions to evaluate the
1279system $F(t,y,\dot{y},p)$ (set by `TSSetIFunction()` ) and its
1280corresponding Jacobian $F_y + \sigma F_{\dot y}$ (set by
1281`TSSetIJacobian()`). Note that the solution state $y$ is
1282$[ \phi \;  \omega ]^T$ here. For sensitivity analysis, we need to
1283provide a routine to compute $\mathrm{f}_p=[0 \; 1]^T$ using
`TSSetRHSJacobianP()`, and three routines corresponding to the
1285integrand $r=c \left( \max(0, \phi - \phi_S ) \right)^2$,
1286$r_p = [0 \; 0]^T$ and
1287$r_y= [ 2 c \left( \max(0, \phi - \phi_S ) \right) \; 0]^T$ using
1288`TSSetCostIntegrand()`.
1289
1290In the adjoint run, $\lambda$ and $\mu$ are initialized as
1291$[ 0 \;  0 ]^T$ and $[-1]$ at the final time $t_F$.
1292After `TSAdjointSolve()`, the sensitivity of the cost function w.r.t.
1293initial conditions is given by the sensitivity variable $\lambda$
1294(at time $t_0$) directly. And the sensitivity of the cost function
1295w.r.t. the parameter $p_m$ can be computed (by users) as
1296
1297$$
1298\frac{\mathrm{d} \Psi}{\mathrm{d} p_m} = \mu(t_0) + \lambda(t_0)  \frac{\mathrm{d} \left[ \phi(t_0) \; \omega(t_0) \right]^T}{\mathrm{d} p_m}  .
1299$$
1300
For explicit methods, where one does not need to provide the Jacobian
$F_u$ for the forward solve, one still needs it for the
backward solve and thus must call
1304
1305```
TSSetRHSJacobian(TS ts,Mat Amat, Mat Pmat,PetscErrorCode (*f)(TS,PetscReal,Vec,Mat,Mat,PetscCtx),PetscCtx fP);
1307```
1308
1309Examples include:
1310
1311- discrete adjoint sensitivity using explicit and implicit time stepping methods for an ODE problem
1312  <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/ex20adj.c.html">TS Tutorial ex20adj</a>,
1313- an optimization problem using the discrete adjoint models of the ERK (for nonstiff ODEs)
1314  and the Theta methods (for stiff DAEs)
1315  <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/ex20opt_ic.c.html">TS Tutorial ex20opt_ic</a>
1316  and
1317  <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/ex20opt_p.c.html">TS Tutorial ex20opt_p</a>,
1318- an ODE-constrained optimization using the discrete adjoint models of the
1319  Theta methods for cost function with an integral term
1320  <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/power_grid/ex3opt.c.html">TS Power Grid Tutorial ex3opt</a>,
1321- discrete adjoint sensitivity using the Crank-Nicolson methods for DAEs with discontinuities
1322  <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/power_grid/stability_9bus/ex9busadj.c.html">TS Power Grid Stability Tutorial ex9busadj</a>,
1323- a DAE-constrained optimization problem using the discrete adjoint models of the Crank-Nicolson
1324  methods for cost function with an integral term
1325  <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/power_grid/stability_9bus/ex9busopt.c.html">TS Power Grid Tutorial ex9busopt</a>,
1326- discrete adjoint sensitivity using the Crank-Nicolson methods for a PDE problem
1327  <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/advection-diffusion-reaction/ex5adj.c.html">TS Advection-Diffusion-Reaction Tutorial ex5adj</a>.
1328
1329### Checkpointing
1330
1331The discrete adjoint model requires the states (and stage values in the
1332context of multistage timestepping methods) to evaluate the Jacobian
1333matrices during the adjoint (backward) run. By default, PETSc stores the
1334whole trajectory to disk as binary files, each of which contains the
1335information for a single time step including state, time, and stage
1336values (optional). One can also make PETSc store the trajectory to
memory with the option `-ts_trajectory_type memory`. However, memory
may be insufficient, especially for large-scale
problems and long-time integration.
1340
1341A so-called checkpointing scheme is needed to solve this problem. The
1342scheme stores checkpoints at selective time steps and recomputes the
1343missing information. The `revolve` library is used by PETSc
1344`TSTrajectory` to generate an optimal checkpointing schedule that
1345minimizes the recomputations given a limited number of available
1346checkpoints. One can specify the number of available checkpoints with
1347the option
1348`-ts_trajectory_max_cps_ram [maximum number of checkpoints in RAM]`.
1349Note that one checkpoint corresponds to one time step.
1350
1351The `revolve` library also provides an optimal multistage
1352checkpointing scheme that uses both RAM and disk for storage. This
1353scheme is automatically chosen if one uses both the option
1354`-ts_trajectory_max_cps_ram [maximum number of checkpoints in RAM]`
1355and the option
1356`-ts_trajectory_max_cps_disk [maximum number of checkpoints on disk]`.
1357
1358Some other useful options are listed below.
1359
1360- `-ts_trajectory_view` prints the total number of recomputations,
1361- `-ts_monitor` and `-ts_adjoint_monitor` allow users to monitor
  the progress of the adjoint workflow,
1363- `-ts_trajectory_type visualization` may be used to save the whole
1364  trajectory for visualization. It stores the solution and the time,
1365  but no stage values. The binary files generated can be read into
1366  MATLAB via the script
1367  `$PETSC_DIR/share/petsc/matlab/PetscReadBinaryTrajectory.m`.
1368
1369(sec_sundials)=
1370
1371## Using Sundials from PETSc
1372
1373Sundials is a parallel ODE solver developed by Hindmarsh et al. at LLNL.
1374The `TS` library provides an interface to use the CVODE component of
1375Sundials directly from PETSc. (To configure PETSc to use Sundials, see
1376the installation guide, `installation/index.htm`.)
1377
1378To use the Sundials integrators, call
1379
1380```
1381TSSetType(TS ts,TSType TSSUNDIALS);
1382```
1383
1384or use the command line option `-ts_type` `sundials`.
1385
1386Sundials’ CVODE solver comes with two main integrator families, Adams
1387and BDF (backward differentiation formula). One can select these with
1388
1389```
1390TSSundialsSetType(TS ts,TSSundialsLmmType [SUNDIALS_ADAMS,SUNDIALS_BDF]);
1391```
1392
1393or the command line option `-ts_sundials_type <adams,bdf>`. BDF is the
1394default.
1395
1396Sundials does not use the `SNES` library within PETSc for its
1397nonlinear solvers, so one cannot change the nonlinear solver options via
1398`SNES`. Rather, Sundials uses the preconditioners within the `PC`
1399package of PETSc, which can be accessed via
1400
1401```
1402TSSundialsGetPC(TS ts,PC *pc);
1403```
1404
1405The user can then directly set preconditioner options; alternatively,
1406the usual runtime options can be employed via `-pc_xxx`.
1407
1408Finally, one can set the Sundials tolerances via
1409
1410```
1411TSSundialsSetTolerance(TS ts,double abs,double rel);
1412```
1413
1414where `abs` denotes the absolute tolerance and `rel` the relative
1415tolerance.
1416
1417Other PETSc-Sundials options include
1418
1419```
1420TSSundialsSetGramSchmidtType(TS ts,TSSundialsGramSchmidtType type);
1421```
1422
1423where `type` is either `SUNDIALS_MODIFIED_GS` or
1424`SUNDIALS_UNMODIFIED_GS`. This may be set via the options data base
with `-ts_sundials_gramschmidt_type <modified,unmodified>`.
1426
1427The routine
1428
1429```
1430TSSundialsSetMaxl(TS ts,PetscInt restart);
1431```
1432
sets the number of vectors in the Krylov subspace used by GMRES. This
may be set in the options database with `-ts_sundials_maxl <maxl>`.
1435
1436## Using TChem from PETSc
1437
1438TChem [^id7] is a package originally developed at Sandia National
1439Laboratory that can read in CHEMKIN [^id8] data files and compute the
1440right-hand side function and its Jacobian for a reaction ODE system. To
1441utilize PETSc’s ODE solvers for these systems, first install PETSc with
1442the additional `configure` option `--download-tchem`. We currently
provide two examples of its use; one for a single-cell reaction and one
for an “artificial” one-dimensional problem with periodic boundary
conditions and diffusion of all species. The self-explanatory examples
are
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/extchem.c.html">the TS tutorial extchem</a>
and
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/extchemfield.c.html">the TS tutorial extchemfield</a>.
1450
1451[^id5]: If the matrix $F_{\dot{u}}(t) = \partial F
1452    / \partial \dot{u}$ is nonsingular then it is an ODE and can be
1453    transformed to the standard explicit form, although this
1454    transformation may not lead to efficient algorithms.
1455
1456[^id6]: PETSc will automatically translate the function provided to the
1457    appropriate form.
1458
1459[^id7]: [bitbucket.org/jedbrown/tchem](https://bitbucket.org/jedbrown/tchem)
1460
1461[^id8]: [en.wikipedia.org/wiki/CHEMKIN](https://en.wikipedia.org/wiki/CHEMKIN)
1462
1463```{raw} html
1464<hr>
1465```
1466
1467# Solving Steady-State Problems with Pseudo-Timestepping
1468
**Simple Example:** `TS` provides general code for performing
pseudo-timestepping with a variable timestep at each physical node point. For
1471example, instead of directly attacking the steady-state problem
1472
1473$$
1474G(u) = 0,
1475$$
1476
1477we can use pseudo-transient continuation by solving
1478
1479$$
1480u_t = G(u).
1481$$
1482
1483Using time differencing
1484
1485$$
1486u_t \doteq \frac{{u^{n+1}} - {u^{n}} }{dt^{n}}
1487$$
1488
1489with the backward Euler method, we obtain nonlinear equations at a
1490series of pseudo-timesteps
1491
1492$$
1493\frac{1}{dt^n} B (u^{n+1} - u^{n} ) = G(u^{n+1}).
1494$$
1495
1496For this problem the user must provide $G(u)$, the time steps
1497$dt^{n}$ and the left-hand-side matrix $B$ (or optionally,
1498if the timestep is position independent and $B$ is the identity
1499matrix, a scalar timestep), as well as optionally the Jacobian of
1500$G(u)$.
1501
1502More generally, this can be applied to implicit ODE and DAE for which
1503the transient form is
1504
1505$$
1506F(u,\dot{u}) = 0.
1507$$
1508
1509For solving steady-state problems with pseudo-timestepping one proceeds
1510as follows.
1511
1512- Provide the function `G(u)` with the routine
1513
1514  ```
  TSSetRHSFunction(TS ts,Vec r,PetscErrorCode (*f)(TS,PetscReal,Vec,Vec,PetscCtx),PetscCtx fP);
1516  ```
1517
1518  The arguments to the function `f()` are the timestep context, the
1519  current time, the input for the function, the output for the function
1520  and the (optional) user-provided context variable `fP`.
1521
1522- Provide the (approximate) Jacobian matrix of `G(u)` and a function
1523  to compute it at each Newton iteration. This is done with the command
1524
1525  ```
  TSSetRHSJacobian(TS ts,Mat Amat, Mat Pmat,PetscErrorCode (*f)(TS,PetscReal,Vec,Mat,Mat,PetscCtx),PetscCtx fP);
1527  ```
1528
1529  The arguments for the function `f()` are the timestep context, the
1530  current time, the location where the Jacobian is to be computed, the
1531  (approximate) Jacobian matrix, an alternative approximate Jacobian
1532  matrix used to construct the preconditioner, and the optional
1533  user-provided context, passed in as `fP`. The user must provide the
1534  Jacobian as a matrix; thus, if using a matrix-free approach, one must
1535  create a `MATSHELL` matrix.
1536
In addition, the user must provide a routine that computes the
pseudo-timestep. This is slightly different depending on whether one is
using a constant timestep over the entire grid or one that varies with location.
1540
1541- For location-independent pseudo-timestepping, one uses the routine
1542
1543  ```
  TSPseudoSetTimeStep(TS ts,PetscErrorCode (*dt)(TS,PetscReal*,PetscCtx),PetscCtx dtctx);
1545  ```
1546
1547  The function `dt` is a user-provided function that computes the
1548  next pseudo-timestep. As a default one can use
1549  `TSPseudoTimeStepDefault(TS,PetscReal*,PetscCtx)` for `dt`. This
1550  routine updates the pseudo-timestep with one of two strategies: the
1551  default
1552
1553  $$
1554  dt^{n} = dt_{\mathrm{increment}}*dt^{n-1}*\frac{|| F(u^{n-1}) ||}{|| F(u^{n})||}
1555  $$
1556
1557  or, the alternative,
1558
1559  $$
1560  dt^{n} = dt_{\mathrm{increment}}*dt^{0}*\frac{|| F(u^{0}) ||}{|| F(u^{n})||}
1561  $$
1562
1563  which can be set with the call
1564
1565  ```
1566  TSPseudoIncrementDtFromInitialDt(TS ts);
1567  ```
1568
1569  or the option `-ts_pseudo_increment_dt_from_initial_dt`. The value
1570  $dt_{\mathrm{increment}}$ is by default $1.1$, but can be
1571  reset with the call
1572
1573  ```
1574  TSPseudoSetTimeStepIncrement(TS ts,PetscReal inc);
1575  ```
1576
1577  or the option `-ts_pseudo_increment <inc>`.
1578
1579- For location-dependent pseudo-timestepping, the interface function
1580  has not yet been created.
1581
1582```{eval-rst}
1583.. bibliography:: /petsc.bib
1584   :filter: docname in docnames
1585
1586```
1587