(ch_ts)=

# TS: Scalable ODE and DAE Solvers

The `TS` library provides a framework for the scalable solution of
ODEs and DAEs arising from the discretization of time-dependent PDEs.

**Simple Example:** Consider the PDE

$$
u_t = u_{xx}
$$

discretized with centered finite differences in space yielding the
semi-discrete equation

$$
\begin{aligned}
          (u_i)_t & =  & \frac{u_{i+1} - 2 u_{i} + u_{i-1}}{h^2}, \\
           u_t      &  = & \tilde{A} u;\end{aligned}
$$

or with a piecewise linear finite element approximation in space
$u(x,t) \doteq \sum_i \xi_i(t) \phi_i(x)$ yielding the
semi-discrete equation

$$
B {\xi}'(t) = A \xi(t)
$$

Now applying the backward Euler method results in

$$
( B - dt^n A  ) u^{n+1} = B u^n,
$$

in which

$$
{u^n}_i = \xi_i(t_n) \doteq u(x_i,t_n),
$$

$$
{\xi}'(t_{n+1}) \doteq \frac{{u^{n+1}}_i - {u^{n}}_i }{dt^{n}},
$$

$A$ is the stiffness matrix, and $B$ is the identity for
finite differences or the mass matrix for the finite element method.

The PETSc interface for solving time-dependent problems assumes the
problem is written in the form

$$
F(t,u,\dot{u}) = G(t,u), \quad u(t_0) = u_0.
$$

In general, this is a differential algebraic equation (DAE) [^id5]. For
ODEs with nontrivial mass matrices, such as arise in FEM, the implicit/DAE
interface significantly reduces the overhead to prepare the system for
algebraic solvers (`SNES`/`KSP`) by having the user assemble the
correctly shifted matrix. Therefore this interface is also useful for
ODE systems.

To solve an ODE or DAE one uses:

- Function $F(t,u,\dot{u})$

  ```
  TSSetIFunction(TS ts,Vec R,PetscErrorCode (*f)(TS,PetscReal,Vec,Vec,Vec,void*),void *funP);
  ```

  The vector `R` is an optional location to store the residual. The
  arguments to the function `f()` are the timestep context, current
  time, input state $u$, input time derivative $\dot{u}$,
  and the (optional) user-provided context `funP`. If
  $F(t,u,\dot{u}) = \dot{u}$ then one need not call this
  function.

- Function $G(t,u)$, if it is nonzero, is provided with the
  function

  ```
  TSSetRHSFunction(TS ts,Vec R,PetscErrorCode (*f)(TS,PetscReal,Vec,Vec,void*),void *funP);
  ```

- Jacobian $\sigma F_{\dot{u}}(t^n,u^n,\dot{u}^n) + F_u(t^n,u^n,\dot{u}^n)$

  If using a fully implicit or semi-implicit (IMEX) method, one can
  also provide an appropriate (approximate) Jacobian matrix of $F()$.

  ```
  TSSetIJacobian(TS ts,Mat A,Mat B,PetscErrorCode (*fjac)(TS,PetscReal,Vec,Vec,PetscReal,Mat,Mat,void*),void *jacP);
  ```

  The arguments for the function `fjac()` are the timestep context,
  current time, input state $u$, input derivative
  $\dot{u}$, input shift $\sigma$, matrix $A$,
  preconditioning matrix $B$, and the (optional) user-provided
  context `jacP`.

  The Jacobian needed for the nonlinear system is, by the chain rule,

  $$
  \begin{aligned}
      \frac{d F}{d u^n} &  = &  \frac{\partial F}{\partial \dot{u}}|_{u^n} \frac{\partial \dot{u}}{\partial u}|_{u^n} + \frac{\partial F}{\partial u}|_{u^n}.\end{aligned}
  $$

  For any ODE integration method the approximation of $\dot{u}$
  is linear in $u^n$, hence
  $\frac{\partial \dot{u}}{\partial u}|_{u^n} = \sigma$, where
  the shift $\sigma$ depends on the ODE integrator and time step
  but not on the function being integrated. Thus

  $$
  \begin{aligned}
      \frac{d F}{d u^n} &  = &    \sigma F_{\dot{u}}(t^n,u^n,\dot{u}^n) + F_u(t^n,u^n,\dot{u}^n).\end{aligned}
  $$

  This explains why the user-provided Jacobian takes this form for
  all integration methods. An equivalent way to derive the formula is
  to note that

  $$
  F(t^n,u^n,\dot{u}^n) = F(t^n,u^n,w+\sigma u^n)
  $$

  where $w$ is some linear combination of previous time solutions
  of $u$, so that

  $$
  \frac{d F}{d u^n} = \sigma F_{\dot{u}}(t^n,u^n,\dot{u}^n) + F_u(t^n,u^n,\dot{u}^n)
  $$

  again by the chain rule.

  For example, consider backward Euler’s method applied to the ODE
  $F(t, u, \dot{u}) = \dot{u} - f(t, u)$ with
  $\dot{u} = (u^n - u^{n-1})/\delta t$ and
  $\frac{\partial \dot{u}}{\partial u}|_{u^n} = 1/\delta t$,
  resulting in

  $$
  \begin{aligned}
      \frac{d F}{d u^n} & = &   (1/\delta t)F_{\dot{u}} + F_u(t^n,u^n,\dot{u}^n).\end{aligned}
  $$

  But $F_{\dot{u}} = 1$ in this special case, resulting in the
  expected Jacobian $I/\delta t - f_u(t,u^n)$.

- Jacobian $G_u$

  If using a fully implicit method and the function $G()$ is
  provided, one also can provide an appropriate (approximate)
  Jacobian matrix of $G()$.

  ```
  TSSetRHSJacobian(TS ts,Mat A,Mat B,
  PetscErrorCode (*fjac)(TS,PetscReal,Vec,Mat,Mat,void*),void *jacP);
  ```

  The arguments for the function `fjac()` are the timestep context,
  current time, input state $u$, matrix $A$,
  preconditioning matrix $B$, and the (optional) user-provided
  context `jacP`.

Providing appropriate $F()$ and $G()$ for your problem
allows for easy runtime switching between explicit, semi-implicit
(IMEX), and fully implicit methods.
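
The role of the shift can be checked on a scalar problem. The sketch below takes one backward Euler step for $F(t,u,\dot u) = \dot u + u^2 = 0$ (i.e. $f(u) = -u^2$) with Newton's method, assembling the Newton matrix exactly in the form $\sigma F_{\dot u} + F_u$ with $\sigma = 1/\delta t$. The function and variable names are illustrative, not PETSc API.

```c
#include <assert.h>
#include <math.h>

/* One backward Euler step for the scalar implicit-form ODE
 *   F(t,u,udot) = udot + u^2 = 0,
 * solved with Newton's method using the shifted Jacobian
 * sigma*F_udot + F_u, the same form TSSetIJacobian() expects. */
double beuler_newton_step(double uprev, double dt)
{
    double u = uprev;                       /* initial Newton guess */
    for (int it = 0; it < 50; it++) {
        double udot = (u - uprev) / dt;     /* backward Euler derivative */
        double F    = udot + u * u;         /* residual F(t,u,udot) */
        double J    = 1.0 / dt + 2.0 * u;   /* sigma*F_udot + F_u, sigma = 1/dt */
        double du   = -F / J;
        u += du;
        if (fabs(du) < 1e-14) break;        /* converged */
    }
    return u;
}
```

For $u(0) = 1$ the exact solution of $\dot u = -u^2$ is $1/(1+t)$, and one step with $\delta t = 0.1$ lands close to it.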

(sec_ts_basic)=

## Basic TS Options

The user first creates a `TS` object with the command

```
int TSCreate(MPI_Comm comm,TS *ts);
```

One then sets the problem type with the command

```
int TSSetProblemType(TS ts,TSProblemType problemtype);
```

The `TSProblemType` is one of `TS_LINEAR` or `TS_NONLINEAR`.

To set up `TS` for solving an ODE, one must set the “initial
conditions” for the ODE with

```
TSSetSolution(TS ts, Vec initialsolution);
```

One can set the solution method with the routine

```
TSSetType(TS ts,TSType type);
```

Some of the currently supported types are `TSEULER`, `TSRK` (Runge-Kutta), `TSBEULER`, `TSCN` (Crank-Nicolson), `TSTHETA`, `TSGLLE` (generalized linear), and `TSPSEUDO`.
They can also be set with the options database option `-ts_type euler, rk, beuler, cn, theta, gl, pseudo, sundials, eimex, arkimex, rosw`.
A list of available methods is given in {any}`integrator_table`.

Set the initial time with the command

```
TSSetTime(TS ts,PetscReal time);
```

One can change the timestep with the command

```
TSSetTimeStep(TS ts,PetscReal dt);
```

and determine the current timestep with the routine

```
TSGetTimeStep(TS ts,PetscReal* dt);
```

Here, “current” refers to the timestep being used to attempt to promote
the solution from $u^n$ to $u^{n+1}.$

One sets the total number of timesteps to run or the total time to run
(whichever comes first) with the commands

```
TSSetMaxSteps(TS ts,PetscInt maxsteps);
TSSetMaxTime(TS ts,PetscReal maxtime);
```

and determines the behavior near the final time with

```
TSSetExactFinalTime(TS ts,TSExactFinalTimeOption eftopt);
```

where `eftopt` is one of
`TS_EXACTFINALTIME_STEPOVER`, `TS_EXACTFINALTIME_INTERPOLATE`, or
`TS_EXACTFINALTIME_MATCHSTEP`. One performs the requested number of
time steps with

```
TSSolve(TS ts,Vec U);
```

The solve call implicitly sets up the timestep context; this can be done
explicitly with

```
TSSetUp(TS ts);
```

One destroys the context with

```
TSDestroy(TS *ts);
```

and views it with

```
TSView(TS ts,PetscViewer viewer);
```

In place of `TSSolve()`, a single step can be taken using

```
TSStep(TS ts);
```

(sec_imex)=

## DAE Formulations

You can find a discussion of DAEs in {cite}`ascherpetzold1998` or [Scholarpedia](http://www.scholarpedia.org/article/Differential-algebraic_equations). In PETSc, TS deals with the semi-discrete form of the equations, so that space has already been discretized. If the DAE depends explicitly on the coordinate $x$, then this will just appear as any other data for the equation, not as an explicit argument. Thus we have

$$
F(t, u, \dot{u}) = 0
$$

In this form, only fully implicit solvers are appropriate. However, specialized solvers for restricted forms of DAE are supported by PETSc. Below we consider an ODE which is augmented with algebraic constraints on the variables.

### Hessenberg Index-1 DAE

> This is a Semi-Explicit Index-1 DAE which has the form

$$
\begin{aligned}
  \dot{u} &= f(t, u, z) \\
        0 &= h(t, u, z)
\end{aligned}
$$

where $z$ is a new constraint variable, and the Jacobian $\frac{dh}{dz}$ is non-singular everywhere. We have suppressed the $x$ dependence since it plays no role here. Using the non-singularity of the Jacobian and the Implicit Function Theorem, we can solve for $z$ in terms of $u$. This means we could, in principle, plug $z(u)$ into the first equation to obtain a simple ODE, even if this is not the numerical process we use. Below we show that this type of DAE can be used with IMEX schemes.
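
To make the index-1 structure concrete, here is a minimal sketch (not PETSc code) of one backward Euler step for the linear semi-explicit DAE $\dot u = -z$, $0 = z - u$, solved as a coupled system. Note how the shift $\sigma = 1/dt$ enters only the row that carries a time derivative; the constraint row is shift-free.

```c
#include <assert.h>
#include <math.h>

/* One backward Euler step for the semi-explicit index-1 DAE
 *     udot = -z,     0 = z - u,
 * written as F(t,(u,z),(udot,zdot)) = 0.  The shifted Jacobian is
 *     J = [ sigma   1 ]   (row 1: (u - u_n)/dt + z)
 *         [  -1     1 ]   (row 2: z - u, no time derivative, no shift)
 * The problem is linear, so a single Newton update solves the step
 * exactly.  Since the constraint forces z = u, the exact solution is
 * u(t) = e^{-t}. */
void dae_beuler_step(double *u, double *z, double dt)
{
    double sigma = 1.0 / dt;
    double un = *u;
    /* residuals at the initial guess (u,z) = (u_n, z_n) */
    double F1 = (*u - un) / dt + *z;
    double F2 = *z - *u;
    /* solve J * (du,dz) = -(F1,F2) by Cramer's rule */
    double det = sigma + 1.0;
    double du  = (-F1 + F2) / det;
    double dz  = (-F1 - sigma * F2) / det;
    *u += du;
    *z += dz;
}
```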

### Hessenberg Index-2 DAE

> This DAE has the form

$$
\begin{aligned}
  \dot{u} &= f(t, u, z) \\
        0 &= h(t, u)
\end{aligned}
$$

Notice that the constraint equation $h$ is not a function of the constraint variable $z$. This means that we cannot naively invert as we did in the index-1 case. Our strategy will be to convert this into an index-1 DAE using a time derivative, which loosely corresponds to the idea of the index being the number of derivatives necessary to get back to an ODE. If we differentiate the constraint equation with respect to time, we can use the ODE to simplify it,

$$
\begin{aligned}
        0 &= \dot{h}(t, u) \\
          &= \frac{dh}{du} \dot{u} + \frac{\partial h}{\partial t} \\
          &= \frac{dh}{du} f(t, u, z) + \frac{\partial h}{\partial t}
\end{aligned}
$$

If the Jacobian $\frac{dh}{du} \frac{df}{dz}$ is non-singular, then we have precisely a semi-explicit index-1 DAE, and we can once again use the PETSc IMEX tools to solve it. A common example of an index-2 DAE is the incompressible Navier-Stokes equations, since the continuity equation $\nabla\cdot u = 0$ does not involve the pressure. Using PETSc IMEX with the above conversion then corresponds to the Segregated Runge-Kutta method applied to this equation {cite}`colomesbadia2016`.

## Using Implicit-Explicit (IMEX) Methods

For “stiff” problems or those with multiple time scales, $F()$ will
be treated implicitly using a method suitable for stiff problems and
$G()$ will be treated explicitly when using an IMEX method like
TSARKIMEX. $F()$ is typically linear or weakly nonlinear, while
$G()$ may have very strong nonlinearities, such as arise in
non-oscillatory methods for hyperbolic PDEs. The user provides three
pieces of information, the APIs for which have been described above.

- “Slow” part $G(t,u)$ using `TSSetRHSFunction()`.
- “Stiff” part $F(t,u,\dot u)$ using `TSSetIFunction()`.
- Jacobian $F_u + \sigma F_{\dot u}$ using `TSSetIJacobian()`.

The user needs to set `TSSetEquationType()` to `TS_EQ_IMPLICIT` or
higher if the problem is implicit; e.g.,
$F(t,u,\dot u) = M \dot u - f(t,u)$, where $M$ is not the
identity matrix:

- the problem is an implicit ODE (defined implicitly through
  `TSSetIFunction()`) or
- a DAE is being solved.

An IMEX problem representation can be made fully implicit by calling `TSARKIMEXSetFullyImplicit()`.
Note that multilevel preconditioners (e.g., `PCMG`) won't work in the fully implicit case; the
same holds true for any other `TS` type requiring a fully implicit formulation in case both
Jacobians are specified.

In PETSc, DAEs and ODEs are formulated as $F(t,u,\dot{u})=G(t,u)$, where $F()$ is meant to be integrated implicitly and $G()$ explicitly. An IMEX formulation such as $M\dot{u}=f(t,u)+g(t,u)$ requires the user to provide $M^{-1} g(t,u)$ or to solve $g(t,u) - M x=0$ in place of $G(t,u)$. General cases such as $F(t,u,\dot{u})=G(t,u)$ are not amenable to IMEX Runge-Kutta, but can be solved with fully implicit methods. Some use-case examples for `TSARKIMEX` are listed in {numref}`tab_DE_forms` and a list of methods with a summary of their properties is given in {any}`tab_IMEX_RK_PETSc`.

```{eval-rst}
.. list-table:: Use case examples for ``TSARKIMEX``
   :name: tab_DE_forms
   :widths: 40 40 80

   * - :math:`\dot{u} = g(t,u)`
     - nonstiff ODE
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= \dot{u} \\ G(t,u) &= g(t,u)\end{aligned}`
   * - :math:`M \dot{u} = g(t,u)`
     - nonstiff ODE with mass matrix
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= \dot{u} \\ G(t,u) &= M^{-1} g(t,u)\end{aligned}`
   * - :math:`\dot{u} = f(t,u)`
     - stiff ODE
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= \dot{u} - f(t,u) \\ G(t,u) &= 0\end{aligned}`
   * - :math:`M \dot{u} = f(t,u)`
     - stiff ODE with mass matrix
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= M \dot{u} - f(t,u) \\ G(t,u) &= 0\end{aligned}`
   * - :math:`\dot{u} = f(t,u) + g(t,u)`
     - stiff-nonstiff ODE
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= \dot{u} - f(t,u) \\ G(t,u) &= g(t,u)\end{aligned}`
   * - :math:`M \dot{u} = f(t,u) + g(t,u)`
     - stiff-nonstiff ODE with mass matrix
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= M\dot{u} - f(t,u) \\ G(t,u) &= M^{-1} g(t,u)\end{aligned}`
   * - :math:`\begin{aligned}\dot{u} &= f(t,u,z) + g(t,u,z)\\0 &= h(t,u,z)\end{aligned}`
     - semi-explicit index-1 DAE
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= \begin{pmatrix}\dot{u} - f(t,u,z)\\h(t, u, z)\end{pmatrix}\\G(t,u) &= g(t,u)\end{aligned}`
   * - :math:`f(t,u,\dot{u})=0`
     - fully implicit ODE/DAE
     - :math:`\begin{aligned}F(t,u,\dot{u}) &= f(t,u,\dot{u})\\G(t,u) &= 0\end{aligned}`; the user needs to set ``TSSetEquationType()`` to ``TS_EQ_IMPLICIT`` or higher
```

{numref}`tab_IMEX_RK_PETSc` lists the currently available IMEX Runge-Kutta schemes. For each method, it gives the `-ts_arkimex_type` name, the reference, the total number of stages/implicit stages, the order/stage-order, the implicit stability properties (IM), stiff accuracy (SA), the existence of an embedded scheme, and dense output (DO).

```{eval-rst}
.. list-table:: IMEX Runge-Kutta schemes
  :name: tab_IMEX_RK_PETSc
  :header-rows: 1

  * - Name
    - Reference
    - Stages (IM)
    - Order (Stage)
    - IM
    - SA
    - Embed
    - DO
    - Remarks
  * - a2
    - based on CN
    - 2 (1)
    - 2 (2)
    - A-Stable
    - yes
    - yes (1)
    - yes (2)
    -
  * - l2
    - SSP2(2,2,2) :cite:`pareschi_2005`
    - 2 (2)
    - 2 (1)
    - L-Stable
    - yes
    - yes (1)
    - yes (2)
    - SSP SDIRK
  * - ars122
    - ARS122 :cite:`ascher_1997`
    - 2 (1)
    - 3 (1)
    - A-Stable
    - yes
    - yes (1)
    - yes (2)
    -
  * - 2c
    - :cite:`giraldo_2013`
    - 3 (2)
    - 2 (2)
    - L-Stable
    - yes
    - yes (1)
    - yes (2)
    - SDIRK
  * - 2d
    - :cite:`giraldo_2013`
    - 3 (2)
    - 2 (2)
    - L-Stable
    - yes
    - yes (1)
    - yes (2)
    - SDIRK
  * - 2e
    - :cite:`giraldo_2013`
    - 3 (2)
    - 2 (2)
    - L-Stable
    - yes
    - yes (1)
    - yes (2)
    - SDIRK
  * - prssp2
    - PRS(3,3,2) :cite:`pareschi_2005`
    - 3 (3)
    - 3 (1)
    - L-Stable
    - yes
    - no
    - no
    - SSP
  * - 3
    - :cite:`kennedy_2003`
    - 4 (3)
    - 3 (2)
    - L-Stable
    - yes
    - yes (2)
    - yes (2)
    - SDIRK
  * - bpr3
    - :cite:`boscarino_tr2011`
    - 5 (4)
    - 3 (2)
    - L-Stable
    - yes
    - no
    - no
    - SDIRK
  * - ars443
    - :cite:`ascher_1997`
    - 5 (4)
    - 3 (1)
    - L-Stable
    - yes
    - no
    - no
    - SDIRK
  * - 4
    - :cite:`kennedy_2003`
    - 6 (5)
    - 4 (2)
    - L-Stable
    - yes
    - yes (3)
    - yes
    - SDIRK
  * - 5
    - :cite:`kennedy_2003`
    - 8 (7)
    - 5 (2)
    - L-Stable
    - yes
    - yes (4)
    - yes (3)
    - SDIRK
```

ROSW are linearized implicit Runge-Kutta methods known as Rosenbrock
W-methods. They can accommodate inexact Jacobian matrices in their
formulation. The methods available in PETSc are listed in
{numref}`tab_IMEX_RosW_PETSc` below. For each method, the table gives the reference, the total number of stages and implicit stages, the scheme order and stage order, the implicit stability properties (IM), stiff accuracy (SA), the existence of an embedded scheme, dense output (DO), the capacity to use inexact Jacobian matrices (-W), and high order integration of differential algebraic equations (PDAE).

```{eval-rst}
.. list-table:: Rosenbrock W-schemes
   :name: tab_IMEX_RosW_PETSc
   :header-rows: 1

   * - TS
     - Reference
     - Stages (IM)
     - Order (Stage)
     - IM
     - SA
     - Embed
     - DO
     - -W
     - PDAE
     - Remarks
   * - theta1
     - classical
     - 1(1)
     - 1(1)
     - L-Stable
     - -
     - -
     - -
     - -
     - -
     - -
   * - theta2
     - classical
     - 1(1)
     - 2(2)
     - A-Stable
     - -
     - -
     - -
     - -
     - -
     - -
   * - 2m
     - Zoltan
     - 2(2)
     - 2(1)
     - L-Stable
     - No
     - Yes(1)
     - Yes(2)
     - Yes
     - No
     - SSP
   * - 2p
     - Zoltan
     - 2(2)
     - 2(1)
     - L-Stable
     - No
     - Yes(1)
     - Yes(2)
     - Yes
     - No
     - SSP
   * - ra3pw
     - :cite:`rang_2005`
     - 3(3)
     - 3(1)
     - A-Stable
     - No
     - Yes
     - Yes(2)
     - No
     - Yes(3)
     - -
   * - ra34pw2
     - :cite:`rang_2005`
     - 4(4)
     - 3(1)
     - L-Stable
     - Yes
     - Yes
     - Yes(3)
     - Yes
     - Yes(3)
     - -
   * - rodas3
     - :cite:`sandu_1997`
     - 4(4)
     - 3(1)
     - L-Stable
     - Yes
     - Yes
     - No
     - No
     - Yes
     - -
   * - sandu3
     - :cite:`sandu_1997`
     - 3(3)
     - 3(1)
     - L-Stable
     - Yes
     - Yes
     - Yes(2)
     - No
     - No
     - -
   * - assp3p3s1c
     - unpub.
     - 3(2)
     - 3(1)
     - A-Stable
     - No
     - Yes
     - Yes(2)
     - Yes
     - No
     - SSP
   * - lassp3p4s2c
     - unpub.
     - 4(3)
     - 3(1)
     - L-Stable
     - No
     - Yes
     - Yes(3)
     - Yes
     - No
     - SSP
   * - ark3
     - unpub.
     - 4(3)
     - 3(1)
     - L-Stable
     - No
     - Yes
     - Yes(3)
     - Yes
     - No
     - IMEX-RK
```

## IMEX Methods for fast-slow systems

Consider a fast-slow ODE system

$$
\begin{aligned}
\dot{u}^{slow} & = f^{slow}(t, u^{slow},u^{fast}) \\
M \dot{u}^{fast} & = g^{fast}(t, u^{slow},u^{fast}) + f^{fast}(t, u^{slow},u^{fast})
\end{aligned}
$$

where $u^{slow}$ is the slow component and $u^{fast}$ is the
fast component. The fast component can be partitioned additively as
described above. Thus we want to treat $f^{slow}()$ and
$f^{fast}()$ explicitly and the other terms implicitly when using
TSARKIMEX. This is achieved by using the following APIs:

- `TSARKIMEXSetFastSlowSplit()` informs PETSc to use ARKIMEX to solve a fast-slow system.
- `TSRHSSplitSetIS()` specifies the index set for the slow/fast components.
- `TSRHSSplitSetRHSFunction()` specifies the parts to be handled explicitly, $f^{slow}()$ and $f^{fast}()$.
- `TSRHSSplitSetIFunction()` and `TSRHSSplitSetIJacobian()` specify the implicit part and its Jacobian.

Note that this ODE system can also be solved by padding the implicit part with zeros and using the standard IMEX methods. However, one then needs to provide the full-dimensional Jacobian, whereas only a partial Jacobian is needed for the fast-slow split, which is more efficient in both storage and speed.

## GLEE methods

In this section, we describe explicit and implicit time stepping methods
with global error estimation that are introduced in
{cite}`constantinescu_tr2016b`. The solution vector for a
GLEE method is either \[$y$, $\tilde{y}$\] or
\[$y$, $\varepsilon$\], where $y$ is the solution,
$\tilde{y}$ is the “auxiliary solution,” and $\varepsilon$
is the error. The working vector that `TSGLEE` uses is $Y$ =
\[$y$, $\tilde{y}$\] or \[$y$, $\varepsilon$\]. A
GLEE method is defined by

- $(p,r,s)$: (order, steps, and stages),
- $\gamma$: factor representing the global error ratio,
- $A, U, B, V$: method coefficients,
- $S$: starting method to compute the working vector from the
  solution (say at the beginning of time integration) so that
  $Y = Sy$,
- $F$: finalizing method to compute the solution from the working
  vector, $y = FY$,
- $F_\text{embed}$: coefficients for computing the auxiliary
  solution $\tilde{y}$ from the working vector
  ($\tilde{y} = F_\text{embed} Y$),
- $F_\text{error}$: coefficients to compute the estimated error
  vector from the working vector
  ($\varepsilon = F_\text{error} Y$),
- $S_\text{error}$: coefficients to initialize the auxiliary
  solution ($\tilde{y}$ or $\varepsilon$) from a specified
  error vector ($\varepsilon$). It is currently implemented only
  for $r = 2$. We have $y_\text{aux} =
  S_\text{error}[0]\,\varepsilon + S_\text{error}[1]\,y$, where
  $y_\text{aux}$ is the 2nd component of the working vector
  $Y$.

The methods can be described in two mathematically equivalent forms:
propagating two components (“$y\tilde{y}$ form”) or propagating the
solution and its estimated error (“$y\varepsilon$ form”). The two
forms are not explicitly specified in `TSGLEE`; rather, the specific
values of $B, U, S, F, F_\text{embed}$, and $F_\text{error}$
characterize whether the method is in $y\tilde{y}$ or
$y\varepsilon$ form.

The API used by this `TS` method includes:

- `TSGetSolutionComponents`: Get all the solution components of the
  working vector

  ```
  ierr = TSGetSolutionComponents(TS,int*,Vec*)
  ```

  Call with `NULL` as the last argument to get the total number of
  components in the working vector $Y$ (this is $r$ (not
  $r-1$)), then call to get the $i$-th solution component.

- `TSGetAuxSolution`: Returns the auxiliary solution
  $\tilde{y}$ (computed as $F_\text{embed} Y$)

  ```
  ierr = TSGetAuxSolution(TS,Vec*)
  ```

- `TSGetTimeError`: Returns the estimated error vector
  $\varepsilon$ (computed as $F_\text{error} Y$ if
  $n=0$ or restores the error estimate at the end of the previous
  step if $n=-1$)

  ```
  ierr = TSGetTimeError(TS,PetscInt n,Vec*)
  ```

- `TSSetTimeError`: Initializes the auxiliary solution
  ($\tilde{y}$ or $\varepsilon$) for a specified initial
  error.

  ```
  ierr = TSSetTimeError(TS,Vec)
  ```

The local error is estimated as $\varepsilon(n+1)-\varepsilon(n)$.
This is to be used in the error control. The error in $y\tilde{y}$
GLEE is
$\varepsilon(n) = \frac{1}{1-\gamma} (\tilde{y}(n) - y(n))$.

Note that $y$ and $\tilde{y}$ are reported to `TSAdapt`
`basic` (`TSADAPTBASIC`), and thus it computes the local error as
$\varepsilon_{loc} = (\tilde{y} - y)$. However, the actual local
error is $\varepsilon_{loc} = \varepsilon_{n+1} - \varepsilon_n =
\frac{1}{1-\gamma} [(\tilde{y} - y)_{n+1} - (\tilde{y} - y)_n]$.

{numref}`tab_IMEX_GLEE_PETSc` lists currently available GL schemes with global error estimation {cite}`constantinescu_tr2016b`.

```{eval-rst}
.. list-table:: GL schemes with global error estimation
   :name: tab_IMEX_GLEE_PETSc
   :header-rows: 1

   * - TS
     - Reference
     - IM/EX
     - :math:`(p,r,s)`
     - :math:`\gamma`
     - Form
     - Notes
   * - ``TSGLEEi1``
     - ``BE1``
     - IM
     - :math:`(1,3,2)`
     - :math:`0.5`
     - :math:`y\varepsilon`
     - Based on backward Euler
   * - ``TSGLEE23``
     - ``23``
     - EX
     - :math:`(2,3,2)`
     - :math:`0`
     - :math:`y\varepsilon`
     -
   * - ``TSGLEE24``
     - ``24``
     - EX
     - :math:`(2,4,2)`
     - :math:`0`
     - :math:`y\tilde{y}`
     -
   * - ``TSGLEE25I``
     - ``25i``
     - EX
     - :math:`(2,5,2)`
     - :math:`0`
     - :math:`y\tilde{y}`
     -
   * - ``TSGLEE35``
     - ``35``
     - EX
     - :math:`(3,5,2)`
     - :math:`0`
     - :math:`y\tilde{y}`
     -
   * - ``TSGLEEEXRK2A``
     - ``exrk2a``
     - EX
     - :math:`(2,6,2)`
     - :math:`0.25`
     - :math:`y\varepsilon`
     -
   * - ``TSGLEERK32G1``
     - ``rk32g1``
     - EX
     - :math:`(3,8,2)`
     - :math:`0`
     - :math:`y\varepsilon`
     -
   * - ``TSGLEERK285EX``
     - ``rk285ex``
     - EX
     - :math:`(2,9,2)`
     - :math:`0.25`
     - :math:`y\varepsilon`
     -
```

## Using fully implicit methods

To use a fully implicit method like `TSTHETA`, `TSBDF` or `TSDIRK`, either
provide the Jacobian of $F()$ (and $G()$ if $G()$ is
provided) or use a `DM` that provides a coloring so the Jacobian can
be computed efficiently via finite differences.

## Using the Explicit Runge-Kutta timestepper with variable timesteps

The explicit Euler and Runge-Kutta methods require the ODE be in the
form

$$
\dot{u} = G(u,t).
$$

The user can either call `TSSetRHSFunction()` and/or they can call
`TSSetIFunction()` (so long as the function provided to
`TSSetIFunction()` is equivalent to $\dot{u} + \tilde{F}(t,u)$),
but the Jacobians need not be provided. [^id6]

The Explicit Runge-Kutta timestepper with variable timesteps is an
implementation of the standard Runge-Kutta method with an embedded method. The
error in each timestep is calculated using the solutions from the
Runge-Kutta method and its embedded method (the 2-norm of the difference
is used). The default method is the $3$rd-order Bogacki-Shampine
method with a $2$nd-order embedded method (`TSRK3BS`). Other
available methods are the $5$th-order Fehlberg RK scheme with a
$4$th-order embedded method (`TSRK5F`), the
$5$th-order Dormand-Prince RK scheme with a $4$th-order
embedded method (`TSRK5DP`), the $5$th-order Bogacki-Shampine
RK scheme with a $4$th-order embedded method (`TSRK5BS`), and
the $6$th-, $7$th-, and $8$th-order robust Verner
RK schemes with $5$th-, $6$th-, and $7$th-order
embedded methods, respectively (`TSRK6VR`, `TSRK7VR`, `TSRK8VR`).
Variable timesteps cannot be used with RK schemes that do not have an
embedded method (`TSRK1FE` - $1$st-order, $1$-stage
forward Euler, `TSRK2A` - $2$nd-order, $2$-stage RK
scheme, `TSRK3` - $3$rd-order, $3$-stage RK scheme,
`TSRK4` - $4$th-order, $4$-stage RK scheme).
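
A scalar sketch of the idea behind the default embedded pair: one Bogacki-Shampine 3(2) step that returns the third-order solution and an error estimate taken as the difference from the second-order embedded solution. The coefficients are the standard published ones; the function names are ours, not PETSc's.

```c
#include <assert.h>
#include <math.h>

/* One step of the Bogacki-Shampine 3(2) embedded pair (the scheme
 * behind TSRK3BS) for a scalar ODE ydot = f(t,y).  Returns the
 * 3rd-order solution and writes the local error estimate: the
 * difference between the 3rd-order and the embedded 2nd-order
 * solutions.  PETSc applies the same idea to vectors with a 2-norm. */
double bs23_step(double (*f)(double, double), double t, double y,
                 double dt, double *err)
{
    double k1 = f(t, y);
    double k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1);
    double k3 = f(t + 0.75 * dt, y + 0.75 * dt * k2);
    double y3 = y + dt * (2.0 / 9.0 * k1 + 1.0 / 3.0 * k2 + 4.0 / 9.0 * k3);
    double k4 = f(t + dt, y3);  /* FSAL: reusable as k1 of the next step */
    double y2 = y + dt * (7.0 / 24.0 * k1 + 0.25 * k2 + 1.0 / 3.0 * k3
                          + 0.125 * k4);
    *err = fabs(y3 - y2);  /* basis for accepting and adapting the step */
    return y3;             /* propagate the higher-order solution */
}

/* simple test right-hand side: ydot = -y, exact solution e^{-t} */
static double f_decay(double t, double y) { (void)t; return -y; }
```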

## Special Cases

- $\dot{u} = A u.$ First compute the matrix $A$ then call

  ```
  TSSetProblemType(ts,TS_LINEAR);
  TSSetRHSFunction(ts,NULL,TSComputeRHSFunctionLinear,NULL);
  TSSetRHSJacobian(ts,A,A,TSComputeRHSJacobianConstant,NULL);
  ```

  or

  ```
  TSSetProblemType(ts,TS_LINEAR);
  TSSetIFunction(ts,NULL,TSComputeIFunctionLinear,NULL);
  TSSetIJacobian(ts,A,A,TSComputeIJacobianConstant,NULL);
  ```

- $\dot{u} = A(t) u.$ Use

  ```
  TSSetProblemType(ts,TS_LINEAR);
  TSSetRHSFunction(ts,NULL,TSComputeRHSFunctionLinear,NULL);
  TSSetRHSJacobian(ts,A,A,YourComputeRHSJacobian, &appctx);
  ```

  where `YourComputeRHSJacobian()` is a function you provide that
  computes $A$ as a function of time. Or use

  ```
  TSSetProblemType(ts,TS_LINEAR);
  TSSetIFunction(ts,NULL,TSComputeIFunctionLinear,NULL);
  TSSetIJacobian(ts,A,A,YourComputeIJacobian, &appctx);
  ```

## Monitoring and visualizing solutions

- `-ts_monitor` - prints the time and timestep at each iteration.
- `-ts_adapt_monitor` - prints information about the timestep
  adaptation calculation at each iteration.
- `-ts_monitor_lg_timestep` - plots the size of each timestep,
  `TSMonitorLGTimeStep()`.
- `-ts_monitor_lg_solution` - for ODEs with only a few components
  (not arising from the discretization of a PDE) plots the solution as
  a function of time, `TSMonitorLGSolution()`.
- `-ts_monitor_lg_error` - for ODEs with only a few components plots
  the error as a function of time, only if `TSSetSolutionFunction()`
  is provided, `TSMonitorLGError()`.
- `-ts_monitor_draw_solution` - plots the solution at each iteration,
  `TSMonitorDrawSolution()`.
- `-ts_monitor_draw_error` - plots the error at each iteration, only
  if `TSSetSolutionFunction()` is provided,
  `TSMonitorDrawSolution()`.
- `-ts_monitor_solution binary[:filename]` - saves the solution at each
  iteration to a binary file, `TSMonitorSolution()`. Solution viewers work
  with other time-aware formats, e.g., `-ts_monitor_solution cgns:sol.cgns`,
  and can output one solution every 10 time steps by adding
  `-ts_monitor_solution_interval 10`. Use `-ts_monitor_solution_interval -1`
  to output data only at the end of the time loop.
- `-ts_monitor_solution_vtk <filename-%03D.vts>` - saves the solution
  at each iteration to a file in vtk format,
  `TSMonitorSolutionVTK()`.
970
971## Error control via variable time-stepping
972
973Most of the time stepping methods available in PETSc have an error
974estimation and error control mechanism. This mechanism is implemented by
975changing the step size in order to maintain user specified absolute and
976relative tolerances. The PETSc object responsible with error control is
977`TSAdapt`. The available `TSAdapt` types are listed in the following table.
978
979```{eval-rst}
980.. list-table:: ``TSAdapt``: available adaptors
981   :name: tab_adaptors
982   :header-rows: 1
983
984   * - ID
985     - Name
986     - Notes
987   * - ``TSADAPTNONE``
988     - ``none``
989     - no adaptivity
990   * - ``TSADAPTBASIC``
991     - ``basic``
992     - the default adaptor
993   * - ``TSADAPTGLEE``
994     - ``glee``
995     - extension of the basic adaptor to treat :math:`{\rm Tol}_{\rm A}` and :math:`{\rm Tol}_{\rm R}` as separate criteria. It can also control global errors if the integrator (e.g., ``TSGLEE``) provides this information
996   * - ``TSADAPTDSP``
997     - ``dsp``
998     - adaptive controller for time-stepping based on digital signal processing
999```
1000
1001When using `TSADAPTBASIC` (the default), the user typically provides a
1002desired absolute ${\rm Tol}_{\rm A}$ or a relative
1003${\rm Tol}_{\rm R}$ error tolerance by invoking
1004`TSSetTolerances()` or at the command line with options `-ts_atol`
1005and `-ts_rtol`. The error estimate is based on the local truncation
1006error, so for every step the algorithm verifies that the estimated local
1007truncation error satisfies the tolerances provided by the user and
computes a new step size. For multistage methods, the local truncation
error is obtained by comparing the solution $y$ to a lower-order
approximation $\widehat{y}$ of order $\widehat{p}=p-1$, where
$p$ is the order of the method.
1013
1014The adaptive controller at step $n$ computes a tolerance level
1015
1016$$
1017\begin{aligned}
1018Tol_n(i)&=&{\rm Tol}_{\rm A}(i) +  \max(y_n(i),\widehat{y}_n(i)) {\rm Tol}_{\rm R}(i)\,,\end{aligned}
1019$$
1020
1021and forms the acceptable error level
1022
1023$$
1024\begin{aligned}
\rm wlte_n&=& \frac{1}{m} \sum_{i=1}^{m}\sqrt{\frac{\left\|y_n(i)
  -\widehat{y}_n(i)\right\|}{Tol_n(i)}}\,,\end{aligned}
1027$$
1028
1029where the errors are computed componentwise, $m$ is the dimension
1030of $y$ and `-ts_adapt_wnormtype` is `2` (default). If
1031`-ts_adapt_wnormtype` is `infinity` (max norm), then
1032
1033$$
1034\begin{aligned}
\rm wlte_n&=& \max_{1\dots m}\frac{\left\|y_n(i)
  -\widehat{y}_n(i)\right\|}{Tol_n(i)}\,.\end{aligned}
1037$$
1038
1039The error tolerances are satisfied when $\rm wlte\le 1.0$.
1040
1041The next step size is based on this error estimate, and determined by
1042
1043$$
1044\begin{aligned}
1045 \Delta t_{\rm new}(t)&=&\Delta t_{\rm{old}} \min(\alpha_{\max},
1046 \max(\alpha_{\min}, \beta (1/\rm wlte)^\frac{1}{\widehat{p}+1}))\,,\end{aligned}
1047$$ (hnew)
1048
1049where $\alpha_{\min}=$`-ts_adapt_clip`[0] and
1050$\alpha_{\max}$=`-ts_adapt_clip`[1] keep the change in
$\Delta t$ to within a certain factor, and $\beta<1$ is
chosen through `-ts_adapt_safety` so that there is some margin by
which the tolerances are satisfied and the probability of rejection is
decreased.
1055
This adaptive controller works in the following way. After completing
step $k$, if $\rm wlte_{k} \le 1.0$, then the step is
accepted and the next step size is set according to
{eq}`hnew`; otherwise, the step is rejected and retaken
with the step length computed in {eq}`hnew`.
1061
1062`TSADAPTGLEE` is an extension of the basic
1063adaptor to treat ${\rm Tol}_{\rm A}$ and ${\rm Tol}_{\rm R}$
as separate criteria. It can also control global errors if the
integrator (e.g., `TSGLEE`) provides this information.
1066
1067## Handling of discontinuities
1068
1069For problems that involve discontinuous right-hand sides, one can set an
1070“event” function $g(t,u)$ for PETSc to detect and locate the times
1071of discontinuities (zeros of $g(t,u)$). Events can be defined
1072through the event monitoring routine
1073
1074```
1075TSSetEventHandler(TS ts,PetscInt nevents,PetscInt *direction,PetscBool *terminate,PetscErrorCode (*indicator)(TS,PetscReal,Vec,PetscScalar*,void* eventP),PetscErrorCode (*postevent)(TS,PetscInt,PetscInt[],PetscReal,Vec,PetscBool,void* eventP),void *eventP);
1076```
1077
Here, `nevents` denotes the number of events, `direction` sets the
type of zero crossing to be detected for an event (+1 for positive
zero-crossing, -1 for negative zero-crossing, and 0 for both),
`terminate` conveys whether the time-stepping should continue or halt
when an event is located, `indicator` is a user-defined routine
that evaluates the event indicator functions, and `postevent` is an
optional user-defined routine to take specific actions following an
event.
1085
The arguments to `indicator()` are the timestep context, current
time, input state $u$, array of event indicator function values, and the
(optional) user-provided context `eventP`.
1089
The arguments to the `postevent()` routine are the timestep context,
the number of events that occurred, the indices of those events, the
current time, the input state $u$, a boolean flag indicating a forward
solve (1) or an adjoint solve (0), and the (optional) user-provided
context `eventP`.
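
As a schematic sketch (the context struct, the threshold `u_max`, and the callback name `Indicator` are hypothetical, invented for illustration), an indicator for a single event that fires when the first solution component crosses a threshold might look like:

```
/* Schematic sketch only: AppCtx, u_max, and Indicator are hypothetical names */
typedef struct {
  PetscReal u_max;
} AppCtx;

PetscErrorCode Indicator(TS ts, PetscReal t, Vec U, PetscScalar fvalue[], void *eventP)
{
  AppCtx            *app = (AppCtx *)eventP;
  const PetscScalar *u;

  PetscFunctionBeginUser;
  PetscCall(VecGetArrayRead(U, &u));
  fvalue[0] = u[0] - app->u_max; /* zero exactly when u[0] reaches u_max */
  PetscCall(VecRestoreArrayRead(U, &u));
  PetscFunctionReturn(PETSC_SUCCESS);
}

/* Registration: detect positive crossings only, do not halt the solver */
PetscInt  direction[1] = {+1};
PetscBool terminate[1] = {PETSC_FALSE};
PetscCall(TSSetEventHandler(ts, 1, direction, terminate, Indicator, NULL, &app));
```

Passing `NULL` for the `postevent` argument means no post-event action is taken.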
1094
1095(sec_tchem)=
1096
1097## Explicit integrators with finite element mass matrices
1098
1099Discretized finite element problems often have the form $M \dot u = G(t, u)$ where $M$ is the mass matrix.
1100Such problems can be solved using `DMTSSetIFunction()` with implicit integrators.
1101When $M$ is nonsingular (i.e., the problem is an ODE, not a DAE), explicit integrators can be applied to $\dot u = M^{-1} G(t, u)$ or $\dot u = \hat M^{-1} G(t, u)$, where $\hat M$ is the lumped mass matrix.
1102While the true mass matrix generally has a dense inverse and thus must be solved iteratively, the lumped mass matrix is diagonal (e.g., computed via collocated quadrature or row sums of $M$).
1103To have PETSc create and apply a (lumped) mass matrix automatically, first use `DMTSSetRHSFunction()` to specify $G$ and set a `PetscFE` using `DMAddField()` and `DMCreateDS()`, then call either `DMTSCreateRHSMassMatrix()` or `DMTSCreateRHSMassMatrixLumped()` to automatically create the mass matrix and a `KSP` that will be used to apply $M^{-1}$.
1104This `KSP` can be customized using the `"mass_"` prefix.
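
As a hedged sketch of this call sequence (`MyRHSFunction` and the `user` context are placeholder names, and `dm` is assumed to already carry a `PetscFE` field and a discrete system):

```
/* Schematic sketch: dm is assumed set up with DMAddField() and DMCreateDS();
   MyRHSFunction and user are hypothetical names */
PetscCall(DMTSSetRHSFunction(dm, MyRHSFunction, &user)); /* provides G(t, u) */
PetscCall(DMTSCreateRHSMassMatrixLumped(dm));            /* lumped M and a KSP for M^{-1} */
PetscCall(TSSetDM(ts, dm));
PetscCall(TSSolve(ts, u));                               /* explicit integrator applies M^{-1} G */
```

The mass `KSP` can then be adjusted from the command line, e.g. with `-mass_ksp_type preonly`.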
1105
1106(section_sa)=
1107
1108## Performing sensitivity analysis with the TS ODE Solvers
1109
1110The `TS` library provides a framework based on discrete adjoint models
1111for sensitivity analysis for ODEs and DAEs. The ODE/DAE solution process
1112(henceforth called the forward run) can be obtained by using either
1113explicit or implicit solvers in `TS`, depending on the problem
1114properties. Currently supported method types are `TSRK` (Runge-Kutta)
1115explicit methods and `TSTHETA` implicit methods, which include
1116`TSBEULER` and `TSCN`.
1117
1118### Using the discrete adjoint methods
1119
1120Consider the ODE/DAE
1121
1122$$
1123F(t,y,\dot{y},p) = 0, \quad y(t_0)=y_0(p) \quad t_0 \le t \le t_F
1124$$
1125
1126and the cost function(s)
1127
1128$$
1129\Psi_i(y_0,p) = \Phi_i(y_F,p) + \int_{t_0}^{t_F} r_i(y(t),p,t)dt \quad i=1,...,n_\text{cost}.
1130$$
1131
1132The `TSAdjoint` routines of PETSc provide
1133
1134$$
1135\frac{\partial \Psi_i}{\partial y_0} = \lambda_i
1136$$
1137
1138and
1139
1140$$
1141\frac{\partial \Psi_i}{\partial p} = \mu_i + \lambda_i (\frac{\partial y_0}{\partial p}).
1142$$
1143
1144To perform the discrete adjoint sensitivity analysis one first sets up
1145the `TS` object for a regular forward run but with one extra function
1146call
1147
1148```
TSSetSaveTrajectory(TS ts);
1150```
1151
1152then calls `TSSolve()` in the usual manner.
1153
One must create two arrays of $n_\text{cost}$ vectors,
$\lambda$ and $\mu$ (if there are no parameters $p$,
one can use `NULL` for the $\mu$ array). The
$\lambda$ vectors have the same dimension and parallel layout as
the solution vector for the ODE, while the $\mu$ vectors have the
dimension of $p$; when $p$ is small, usually all its elements are
on the first MPI process and the vectors have no entries on the other
processes. $\lambda_i$ and $\mu_i$ should be initialized with
the values $d\Phi_i/dy|_{t=t_F}$ and $d\Phi_i/dp|_{t=t_F}$,
respectively. Then one calls
1164
1165```
1166TSSetCostGradients(TS ts,PetscInt numcost, Vec *lambda,Vec *mu);
1167```
1168
1169where `numcost` denotes $n_\text{cost}$.
1170If $F()$ is a function of $p$ one needs to also provide the
1171Jacobian $-F_p$ with
1172
1173```
1174TSSetRHSJacobianP(TS ts,Mat Amat,PetscErrorCode (*fp)(TS,PetscReal,Vec,Mat,void*),void *ctx)
1175```
1176
1177or
1178
1179```
1180TSSetIJacobianP(TS ts,Mat Amat,PetscErrorCode (*fp)(TS,PetscReal,Vec,Vec,PetscReal,Mat,void*),void *ctx)
1181```
1182
1183or both, depending on which form is used to define the ODE.
1184
The arguments for the function `fp()` are the timestep context,
current time, $y$ (plus $\dot{y}$ and the shift parameter in the
`TSSetIJacobianP()` case), the matrix to be computed, and the
(optional) user-provided context.
1187
If there is an integral term in the cost function, i.e. $r$ is
nonzero, it can be transformed into another ODE that augments the
original ODE. To evaluate the integral, one needs to create a child
`TS` object by calling
1192
1193```
1194TSCreateQuadratureTS(TS ts,PetscBool fwd,TS *quadts);
1195```
1196
1197and provide the ODE RHS function (which evaluates the integrand
1198$r$) with
1199
1200```
1201TSSetRHSFunction(TS quadts,Vec R,PetscErrorCode (*rf)(TS,PetscReal,Vec,Vec,void*),void *ctx)
1202```
1203
1204Similar to the settings for the original ODE, Jacobians of the integrand
1205can be provided with
1206
1207```
TSSetRHSJacobian(TS quadts,Mat DRDY,Mat DRDY,PetscErrorCode (*drdyf)(TS,PetscReal,Vec,Mat,Mat,void*),void *ctx)
TSSetRHSJacobianP(TS quadts,Mat DRDP,PetscErrorCode (*drdpf)(TS,PetscReal,Vec,Mat,void*),void *ctx)
1210```
1211
1212where $\mathrm{drdyf}= dr /dy$, $\mathrm{drdpf} = dr /dp$.
1213Since the integral term is additive to the cost function, its gradient
1214information will be included in $\lambda$ and $\mu$.
1215
1216Lastly, one starts the backward run by calling
1217
1218```
TSAdjointSolve(TS ts);
1220```
1221
1222One can obtain the value of the integral term by calling
1223
1224```
TSGetCostIntegral(TS ts,Vec *q);
1226```
1227
or by directly accessing the solution vector used by `quadts`.
1229
The second argument of `TSCreateQuadratureTS()` allows one to choose
whether the integral term is evaluated in the forward run (inside
`TSSolve()`) or in the backward run (inside `TSAdjointSolve()`) when
`TSSetCostGradients()` and `TSSetCostIntegrand()` are called before
`TSSolve()`. Note that this also allows for evaluating the integral
without having to use the adjoint solvers.
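
Putting these calls together, a forward-plus-adjoint run has roughly the following shape (a schematic sketch only; the callback names and the `user` context are hypothetical, and problem setup and error checking are elided):

```
/* Forward run with trajectory saving */
PetscCall(TSSetIFunction(ts, NULL, IFunction, &user));
PetscCall(TSSetIJacobian(ts, A, A, IJacobian, &user));
PetscCall(TSSetRHSJacobianP(ts, Jacp, RHSJacobianP, &user)); /* or TSSetIJacobianP(), per the form used */
PetscCall(TSSetSaveTrajectory(ts));
PetscCall(TSSolve(ts, U));

/* Initialize lambda[i] = dPhi_i/dy and mu[i] = dPhi_i/dp at t = t_F, then solve backward */
PetscCall(TSSetCostGradients(ts, numcost, lambda, mu));
PetscCall(TSAdjointSolve(ts)); /* gradients are returned in lambda and mu */
```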
1236
1237To provide a better understanding of the use of the adjoint solvers, we
1238introduce a simple example, corresponding to
1239<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/power_grid/ex3sa.c.html">TS Power Grid Tutorial ex3sa</a>.
The problem is to study the dynamic security of a power system when there are
1241credible contingencies such as short-circuits or loss of generators,
1242transmission lines, or loads. The dynamic security constraints are
1243incorporated as equality constraints in the form of discretized
1244differential equations and inequality constraints for bounds on the
1245trajectory. The governing ODE system is
1246
1247$$
1248\begin{aligned}
    \phi' &= &\omega_B (\omega - \omega_S)  \\
    2H/\omega_S \, \omega' & =& p_m - p_{max} \sin(\phi) -D (\omega - \omega_S), \quad t_0 \leq t \leq t_F,\end{aligned}
1251$$
1252
1253where $\phi$ is the phase angle and $\omega$ is the
1254frequency.
1255
1256The initial conditions at time $t_0$ are
1257
1258$$
1259\begin{aligned}
1260\phi(t_0) &=& \arcsin \left( p_m / p_{max} \right), \\
\omega(t_0) &=& 1.\end{aligned}
1262$$
1263
1264$p_{max}$ is a positive number when the system operates normally.
1265At an event such as fault incidence/removal, $p_{max}$ will change
1266to $0$ temporarily and back to the original value after the fault
1267is fixed. The objective is to maximize $p_m$ subject to the above
1268ODE constraints and $\phi<\phi_S$ during all times. To accommodate
1269the inequality constraint, we want to compute the sensitivity of the
1270cost function
1271
1272$$
1273\Psi(p_m,\phi) = -p_m + c \int_{t_0}^{t_F} \left( \max(0, \phi - \phi_S ) \right)^2 dt
1274$$
1275
with respect to the parameter $p_m$. Here `numcost` is $1$
since the cost function is a scalar.
1278
1279For ODE solution, PETSc requires user-provided functions to evaluate the
1280system $F(t,y,\dot{y},p)$ (set by `TSSetIFunction()` ) and its
1281corresponding Jacobian $F_y + \sigma F_{\dot y}$ (set by
1282`TSSetIJacobian()`). Note that the solution state $y$ is
1283$[ \phi \;  \omega ]^T$ here. For sensitivity analysis, we need to
1284provide a routine to compute $\mathrm{f}_p=[0 \; 1]^T$ using
`TSSetRHSJacobianP()`, and three routines corresponding to the
1286integrand $r=c \left( \max(0, \phi - \phi_S ) \right)^2$,
1287$r_p = [0 \; 0]^T$ and
1288$r_y= [ 2 c \left( \max(0, \phi - \phi_S ) \right) \; 0]^T$ using
1289`TSSetCostIntegrand()`.
1290
1291In the adjoint run, $\lambda$ and $\mu$ are initialized as
1292$[ 0 \;  0 ]^T$ and $[-1]$ at the final time $t_F$.
1293After `TSAdjointSolve()`, the sensitivity of the cost function w.r.t.
initial conditions is given directly by the sensitivity variable $\lambda$
(at time $t_0$). The sensitivity of the cost function
w.r.t. the parameter $p_m$ can then be computed (by the user) as
1297
1298$$
1299\frac{\mathrm{d} \Psi}{\mathrm{d} p_m} = \mu(t_0) + \lambda(t_0)  \frac{\mathrm{d} \left[ \phi(t_0) \; \omega(t_0) \right]^T}{\mathrm{d} p_m}  .
1300$$
1301
For explicit methods, where one does not need to provide the Jacobian
$F_u$ for the forward solve, one still needs it for the
backward solve and thus must call
1305
1306```
1307TSSetRHSJacobian(TS ts,Mat Amat, Mat Pmat,PetscErrorCode (*f)(TS,PetscReal,Vec,Mat,Mat,void*),void *fP);
1308```
1309
1310Examples include:
1311
1312- discrete adjoint sensitivity using explicit and implicit time stepping methods for an ODE problem
1313  <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/ex20adj.c.html">TS Tutorial ex20adj</a>,
1314- an optimization problem using the discrete adjoint models of the ERK (for nonstiff ODEs)
1315  and the Theta methods (for stiff DAEs)
1316  <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/ex20opt_ic.c.html">TS Tutorial ex20opt_ic</a>
1317  and
1318  <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/ex20opt_p.c.html">TS Tutorial ex20opt_p</a>,
1319- an ODE-constrained optimization using the discrete adjoint models of the
1320  Theta methods for cost function with an integral term
1321  <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/power_grid/ex3opt.c.html">TS Power Grid Tutorial ex3opt</a>,
1322- discrete adjoint sensitivity using the Crank-Nicolson methods for DAEs with discontinuities
1323  <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/power_grid/stability_9bus/ex9busadj.c.html">TS Power Grid Stability Tutorial ex9busadj</a>,
1324- a DAE-constrained optimization problem using the discrete adjoint models of the Crank-Nicolson
1325  methods for cost function with an integral term
1326  <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/power_grid/stability_9bus/ex9busopt.c.html">TS Power Grid Tutorial ex9busopt</a>,
1327- discrete adjoint sensitivity using the Crank-Nicolson methods for a PDE problem
1328  <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/advection-diffusion-reaction/ex5adj.c.html">TS Advection-Diffusion-Reaction Tutorial ex5adj</a>.
1329
1330### Checkpointing
1331
1332The discrete adjoint model requires the states (and stage values in the
1333context of multistage timestepping methods) to evaluate the Jacobian
1334matrices during the adjoint (backward) run. By default, PETSc stores the
1335whole trajectory to disk as binary files, each of which contains the
information for a single time step, including the state, the time, and
(optionally) the stage values. One can also make PETSc store the trajectory in
1338memory with the option `-ts_trajectory_type memory`. However, there
1339might not be sufficient memory capacity especially for large-scale
1340problems and long-time integration.
1341
1342A so-called checkpointing scheme is needed to solve this problem. The
scheme stores checkpoints at selected time steps and recomputes the
1344missing information. The `revolve` library is used by PETSc
1345`TSTrajectory` to generate an optimal checkpointing schedule that
1346minimizes the recomputations given a limited number of available
1347checkpoints. One can specify the number of available checkpoints with
1348the option
1349`-ts_trajectory_max_cps_ram [maximum number of checkpoints in RAM]`.
1350Note that one checkpoint corresponds to one time step.
1351
1352The `revolve` library also provides an optimal multistage
1353checkpointing scheme that uses both RAM and disk for storage. This
1354scheme is automatically chosen if one uses both the option
1355`-ts_trajectory_max_cps_ram [maximum number of checkpoints in RAM]`
1356and the option
1357`-ts_trajectory_max_cps_disk [maximum number of checkpoints on disk]`.
1358
1359Some other useful options are listed below.
1360
1361- `-ts_trajectory_view` prints the total number of recomputations,
- `-ts_monitor` and `-ts_adjoint_monitor` allow users to monitor
  the progress of the adjoint workflow,
1364- `-ts_trajectory_type visualization` may be used to save the whole
1365  trajectory for visualization. It stores the solution and the time,
1366  but no stage values. The binary files generated can be read into
1367  MATLAB via the script
1368  `$PETSC_DIR/share/petsc/matlab/PetscReadBinaryTrajectory.m`.
1369
1370(sec_sundials)=
1371
1372## Using Sundials from PETSc
1373
1374Sundials is a parallel ODE solver developed by Hindmarsh et al. at LLNL.
1375The `TS` library provides an interface to use the CVODE component of
1376Sundials directly from PETSc. (To configure PETSc to use Sundials, see
1377the installation guide, `installation/index.htm`.)
1378
1379To use the Sundials integrators, call
1380
1381```
1382TSSetType(TS ts,TSType TSSUNDIALS);
1383```
1384
1385or use the command line option `-ts_type` `sundials`.
1386
1387Sundials’ CVODE solver comes with two main integrator families, Adams
1388and BDF (backward differentiation formula). One can select these with
1389
1390```
1391TSSundialsSetType(TS ts,TSSundialsLmmType [SUNDIALS_ADAMS,SUNDIALS_BDF]);
1392```
1393
1394or the command line option `-ts_sundials_type <adams,bdf>`. BDF is the
1395default.
1396
1397Sundials does not use the `SNES` library within PETSc for its
1398nonlinear solvers, so one cannot change the nonlinear solver options via
1399`SNES`. Rather, Sundials uses the preconditioners within the `PC`
1400package of PETSc, which can be accessed via
1401
1402```
1403TSSundialsGetPC(TS ts,PC *pc);
1404```
1405
1406The user can then directly set preconditioner options; alternatively,
1407the usual runtime options can be employed via `-pc_xxx`.
1408
1409Finally, one can set the Sundials tolerances via
1410
1411```
1412TSSundialsSetTolerance(TS ts,double abs,double rel);
1413```
1414
1415where `abs` denotes the absolute tolerance and `rel` the relative
1416tolerance.
1417
1418Other PETSc-Sundials options include
1419
1420```
1421TSSundialsSetGramSchmidtType(TS ts,TSSundialsGramSchmidtType type);
1422```
1423
1424where `type` is either `SUNDIALS_MODIFIED_GS` or
1425`SUNDIALS_UNMODIFIED_GS`. This may be set via the options data base
with `-ts_sundials_gramschmidt_type <modified,unmodified>`.
1427
1428The routine
1429
1430```
1431TSSundialsSetMaxl(TS ts,PetscInt restart);
1432```
1433
sets the number of vectors in the Krylov subspace used by GMRES. This
may be set in the options database with `-ts_sundials_maxl` `maxl`.
1436
1437## Using TChem from PETSc
1438
1439TChem [^id7] is a package originally developed at Sandia National
1440Laboratory that can read in CHEMKIN [^id8] data files and compute the
1441right-hand side function and its Jacobian for a reaction ODE system. To
1442utilize PETSc’s ODE solvers for these systems, first install PETSc with
1443the additional `configure` option `--download-tchem`. We currently
provide two examples of its use: one for a single-cell reaction and one
for an “artificial” one-dimensional problem with periodic boundary
1446conditions and diffusion of all species. The self-explanatory examples
1447are the
1448<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/extchem.c.html">The TS tutorial extchem</a>
1449and
1450<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ts/tutorials/extchemfield.c.html">The TS tutorial extchemfield</a>.
1451
1452[^id5]: If the matrix $F_{\dot{u}}(t) = \partial F
1453    / \partial \dot{u}$ is nonsingular then it is an ODE and can be
1454    transformed to the standard explicit form, although this
1455    transformation may not lead to efficient algorithms.
1456
1457[^id6]: PETSc will automatically translate the function provided to the
1458    appropriate form.
1459
1460[^id7]: [bitbucket.org/jedbrown/tchem](https://bitbucket.org/jedbrown/tchem)
1461
1462[^id8]: [en.wikipedia.org/wiki/CHEMKIN](https://en.wikipedia.org/wiki/CHEMKIN)
1463
1464```{raw} html
1465<hr>
1466```
1467
1468# Solving Steady-State Problems with Pseudo-Timestepping
1469
**Simple Example:** `TS` provides general code for performing
pseudo-timestepping with a variable timestep at each physical node point.
For example, instead of directly attacking the steady-state problem
1473
1474$$
1475G(u) = 0,
1476$$
1477
1478we can use pseudo-transient continuation by solving
1479
1480$$
1481u_t = G(u).
1482$$
1483
1484Using time differencing
1485
1486$$
1487u_t \doteq \frac{{u^{n+1}} - {u^{n}} }{dt^{n}}
1488$$
1489
1490with the backward Euler method, we obtain nonlinear equations at a
1491series of pseudo-timesteps
1492
1493$$
1494\frac{1}{dt^n} B (u^{n+1} - u^{n} ) = G(u^{n+1}).
1495$$
1496
1497For this problem the user must provide $G(u)$, the time steps
1498$dt^{n}$ and the left-hand-side matrix $B$ (or optionally,
1499if the timestep is position independent and $B$ is the identity
1500matrix, a scalar timestep), as well as optionally the Jacobian of
1501$G(u)$.
1502
1503More generally, this can be applied to implicit ODE and DAE for which
1504the transient form is
1505
1506$$
1507F(u,\dot{u}) = 0.
1508$$
1509
1510For solving steady-state problems with pseudo-timestepping one proceeds
1511as follows.
1512
1513- Provide the function `G(u)` with the routine
1514
1515  ```
1516  TSSetRHSFunction(TS ts,Vec r,PetscErrorCode (*f)(TS,PetscReal,Vec,Vec,void*),void *fP);
1517  ```
1518
1519  The arguments to the function `f()` are the timestep context, the
1520  current time, the input for the function, the output for the function
1521  and the (optional) user-provided context variable `fP`.
1522
1523- Provide the (approximate) Jacobian matrix of `G(u)` and a function
1524  to compute it at each Newton iteration. This is done with the command
1525
1526  ```
1527  TSSetRHSJacobian(TS ts,Mat Amat, Mat Pmat,PetscErrorCode (*f)(TS,PetscReal,Vec,Mat,Mat,void*),void *fP);
1528  ```
1529
1530  The arguments for the function `f()` are the timestep context, the
1531  current time, the location where the Jacobian is to be computed, the
1532  (approximate) Jacobian matrix, an alternative approximate Jacobian
1533  matrix used to construct the preconditioner, and the optional
1534  user-provided context, passed in as `fP`. The user must provide the
1535  Jacobian as a matrix; thus, if using a matrix-free approach, one must
1536  create a `MATSHELL` matrix.
1537
In addition, the user must provide a routine that computes the
pseudo-timestep. This differs slightly depending on whether the
timestep is constant over the entire grid or varies with location.
1541
1542- For location-independent pseudo-timestepping, one uses the routine
1543
1544  ```
  TSPseudoSetTimeStep(TS ts,PetscErrorCode (*dt)(TS,PetscReal*,void*),void* dtctx);
1546  ```
1547
1548  The function `dt` is a user-provided function that computes the
1549  next pseudo-timestep. As a default one can use
1550  `TSPseudoTimeStepDefault(TS,PetscReal*,void*)` for `dt`. This
1551  routine updates the pseudo-timestep with one of two strategies: the
1552  default
1553
1554  $$
1555  dt^{n} = dt_{\mathrm{increment}}*dt^{n-1}*\frac{|| F(u^{n-1}) ||}{|| F(u^{n})||}
1556  $$
1557
1558  or, the alternative,
1559
1560  $$
1561  dt^{n} = dt_{\mathrm{increment}}*dt^{0}*\frac{|| F(u^{0}) ||}{|| F(u^{n})||}
1562  $$
1563
1564  which can be set with the call
1565
1566  ```
1567  TSPseudoIncrementDtFromInitialDt(TS ts);
1568  ```
1569
1570  or the option `-ts_pseudo_increment_dt_from_initial_dt`. The value
1571  $dt_{\mathrm{increment}}$ is by default $1.1$, but can be
1572  reset with the call
1573
1574  ```
1575  TSPseudoSetTimeStepIncrement(TS ts,PetscReal inc);
1576  ```
1577
1578  or the option `-ts_pseudo_increment <inc>`.
1579
1580- For location-dependent pseudo-timestepping, the interface function
1581  has not yet been created.
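
The two `TSPseudoTimeStepDefault()` update strategies above share one form and can be transcribed as a small standalone C helper (an illustrative sketch, not PETSc API; the reference values are $dt^{n-1}$ and $||F(u^{n-1})||$ for the default strategy, or $dt^{0}$ and $||F(u^{0})||$ when incrementing from the initial timestep):

```c
/* dt_n = inc * dt_ref * ||F(u_ref)|| / ||F(u_n)||; with the default
   increment inc = 1.1, dt grows as the residual norm ||F(u_n)|| shrinks */
double pseudo_next_dt(double inc, double dt_ref, double fnorm_ref, double fnorm_cur)
{
  return inc * dt_ref * fnorm_ref / fnorm_cur;
}
```

As the iterates approach steady state and $|| F(u^{n}) || \rightarrow 0$, the pseudo-timestep grows without bound and the iteration behaves increasingly like Newton's method on $G(u)=0$.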
1582
1583```{eval-rst}
1584.. bibliography:: /petsc.bib
1585   :filter: docname in docnames
1586
1587```
1588