(ch_vectors)=

# Vectors and Parallel Data

Vectors (denoted by `Vec`) are used to store discrete PDE solutions, right-hand sides for
linear systems, etc. Users can create and manipulate entries in vectors directly with a basic, low-level interface, or
they can use the PETSc `DM` objects to connect actions on vectors to the type of discretization and grid that they are
working with. These higher-level interfaces handle many of the details of the interactions with vectors and hence are preferred
in most situations. This chapter is organized as follows:

- {any}`sec_veccreate`

  - User managed
  - {any}`sec_struct`
  - {any}`sec_stag`
  - {any}`sec_unstruct`
  - {any}`sec_network`

- Setting vector values

  - For generic vectors
  - {any}`sec_struct_set`
  - {any}`sec_stag_set`
  - {any}`sec_unstruct_set`
  - {any}`sec_network_set`

- {any}`sec_vecbasic`

- {any}`sec_localglobal`

- {any}`sec_scatter`

  - {any}`sec_islocaltoglobalmap`
  - {any}`sec_vecghost`

- {any}`sec_ao`

(sec_veccreate)=

## Creating Vectors

PETSc provides many ways to create vectors: the most basic, where the user is responsible for managing the
parallel distribution of the vector entries, and a variety of higher-level approaches, based on `DM`, for classes of problems such
as structured grids, staggered grids, unstructured grids, networks, and particles.

The most basic way to create a vector with a local size of `m` and a global size of `M` is to
use

```
VecCreate(MPI_Comm comm, Vec *v);
VecSetSizes(Vec v, PetscInt m, PetscInt M);
VecSetFromOptions(Vec v);
```

which automatically generates the appropriate vector type (sequential or
parallel) over all processes in `comm`. The option `-vec_type <type>`
can be used in conjunction with
`VecSetFromOptions()` to specify the use of a particular type of vector. For example, to run on NVIDIA GPUs with CUDA, use `cuda`.
The GPU-based vectors allow
one to set values on either the CPU or GPU but do their computations on the GPU.

We emphasize that all processes in `comm` *must* call the vector
creation routines since these routines are collective on all
processes in the communicator. If you are unfamiliar with MPI
communicators, see the discussion in {any}`sec_writing`. In addition, if a sequence of creation routines is
used, they must be called in the same order on each process in the
communicator.

Instead of, or before, calling `VecSetFromOptions()`, one can call

```
VecSetType(Vec v, VecType <VECCUDA, VECHIP, VECKOKKOS, etc.>)
```

One can create vectors whose entries are stored on GPUs using the convenience routine

```
VecCreateMPICUDA(MPI_Comm comm, PetscInt m, PetscInt M, Vec *x);
```

There are convenience creation routines for almost all vector types; we recommend using the more verbose form because it allows
selecting CPU or GPU execution at runtime.

For applications running in parallel that involve multi-dimensional structured grids, unstructured grids, networks, etc., it is cumbersome for users to explicitly manage the needed local and global sizes of the vectors.
Hence, PETSc provides two powerful abstract objects, the (lower-level) `PetscSection` (see {any}`ch_petscsection`) and the (higher-level) `DM` (see {any}`ch_dmbase`), to help manage the vectors and matrices needed for such applications. Using `DM`, parallel vectors can be created easily with

```
DMCreateGlobalVector(DM dm, Vec *v)
```

The `DM` object manages creating the correctly sized parallel vectors efficiently; see {any}`sec_struct`, {any}`sec_stag`, and {any}`ch_unstructured` for more details on `DM` for structured grids, staggered
structured grids, and unstructured grids. One controls the type of vector that `DM` creates by calling

```
DMSetVecType(DM dm, VecType vt)
```

or by calling `DMSetFromOptions(DM dm)` and using the option `-dm_vec_type <standard or cuda or kokkos etc>`.

(sec_struct)=

### DMDA - Creating vectors for structured grids

Each `DM` type is suitable for a family of problems. The first of these, `DMDA`,
is intended for use with *logically structured rectangular grids*
when communication of nonlocal data is needed before certain local
computations can occur. `DMDA` is designed only for
the case in which data can be thought of as being stored in a standard
multidimensional array; thus, `DMDA` is *not* intended for
parallelizing staggered grids (use `DMSTAG` -- {any}`ch_stag`), unstructured grid problems (use `DMPLEX` -- {any}`ch_unstructured`), etc.

For example, a typical situation one encounters in solving PDEs in
parallel is that, to evaluate a local function, `f(x)`, each process
requires its local portion of the vector `x` as well as its ghost
points (the bordering portions of the vector that are owned by
neighboring processes). Figure {any}`fig_ghosts` illustrates the
ghost points for the seventh process of a two-dimensional, structured
parallel grid. Each box represents a process; the ghost points for the
seventh process’s local part of a parallel array are shown in gray.

:::{figure} /images/manual/ghost.*
:alt: Ghost Points for Two Stencil Types on the Seventh Process
:name: fig_ghosts

Ghost Points for Two Stencil Types on the Seventh Process
:::

The `DMDA` object
contains parallel data layout information and communication
information and is used to create vectors and matrices with
the proper layout.

One creates a `DMDA` in two
dimensions with the convenience routine

```
DMDACreate2d(MPI_Comm comm, DMBoundaryType xperiod, DMBoundaryType yperiod, DMDAStencilType st, PetscInt M, PetscInt N, PetscInt m, PetscInt n, PetscInt dof, PetscInt s, PetscInt *lx, PetscInt *ly, DM *da);
```

The arguments `M` and `N` indicate the global numbers of grid points
in each direction, while `m` and `n` denote the process partition in
each direction; `m*n` must equal the number of processes in the MPI
communicator, `comm`. Instead of specifying the process layout, one
may use `PETSC_DECIDE` for `m` and `n` so that PETSc will
select the partition. The type of periodicity of the array
is specified by `xperiod` and `yperiod`, which can be
`DM_BOUNDARY_NONE` (no periodicity), `DM_BOUNDARY_PERIODIC`
(periodic in that direction), `DM_BOUNDARY_TWIST` (periodic in that
direction, but identified in reverse order), `DM_BOUNDARY_GHOSTED`,
or `DM_BOUNDARY_MIRROR`. The argument `dof` indicates the number of
degrees of freedom at each array point, and `s` is the stencil width
(i.e., the width of the ghost point region). The optional arrays `lx`
and `ly` may contain the number of nodes along the x and y axes for
each process, i.e., the dimension of `lx` is `m` and the dimension of
`ly` is `n`; alternately, `NULL` may be passed in.

Two types of `DMDA` communication data structures can be
created, as specified by `st`. Star-type stencils that radiate outward
only in the coordinate directions are indicated by
`DMDA_STENCIL_STAR`, while box-type stencils are specified by
`DMDA_STENCIL_BOX`. For example, for the two-dimensional case,
`DMDA_STENCIL_STAR` with width 1 corresponds to the standard 5-point
stencil, while `DMDA_STENCIL_BOX` with width 1 denotes the standard
9-point stencil. In both instances, the ghost points are identical; the
only difference is that with star-type stencils, certain ghost points
are ignored, substantially decreasing the number of messages sent. Note
that the `DMDA_STENCIL_STAR` stencils can save interprocess
communication in two and three dimensions.

These `DMDA` stencils have nothing directly to do with a specific finite
difference stencil one might choose to use for discretization; they
only ensure that the correct values are in place for the application of a
user-defined finite difference stencil (or any other discretization
technique).

The commands for creating `DMDA`
in one and three dimensions are analogous:

```
DMDACreate1d(MPI_Comm comm, DMBoundaryType xperiod, PetscInt M, PetscInt w, PetscInt s, PetscInt *lc, DM *inra);
```

```
DMDACreate3d(MPI_Comm comm, DMBoundaryType xperiod, DMBoundaryType yperiod, DMBoundaryType zperiod, DMDAStencilType stencil_type, PetscInt M, PetscInt N, PetscInt P, PetscInt m, PetscInt n, PetscInt p, PetscInt w, PetscInt s, PetscInt *lx, PetscInt *ly, PetscInt *lz, DM *inra);
```

The routines to create a `DM` are collective, so all
processes in the communicator `comm` must call the same creation routines in the same order.

A `DM` may be created and its type set with

```
DMCreate(comm, &dm);
DMSetType(dm, DMDA);  // or another DMType, e.g., DMSTAG
```

Then `DMType`-specific operations can be performed to provide the information that determines the specifics of the
`DM`. For example,

```
DMSetDimension(dm, 1);
DMDASetSizes(dm, M, 1, 1);
DMDASetDof(dm, 1);
DMSetUp(dm);
```

We now very briefly introduce a few more `DMType`s.

(sec_stag)=

### DMSTAG - Creating vectors for staggered grids

For structured grids with staggered data (living on elements, faces, edges,
and/or vertices), the `DMSTAG` object is available. It behaves much
like `DMDA`.
See {any}`ch_stag` for a discussion of creating vectors with `DMSTAG`.

(sec_unstruct)=

### DMPLEX - Creating vectors for unstructured grids

See {any}`ch_unstructured` for a discussion of creating vectors with `DMPLEX`.

(sec_network)=

### DMNETWORK - Creating vectors for networks

See {any}`ch_network` for a discussion of creating vectors with `DMNETWORK`.

## Common vector functions and operations

One can examine (print out) a vector with the command

```
VecView(Vec x, PetscViewer v);
```

To print the vector to the screen, one can use the viewer
`PETSC_VIEWER_STDOUT_WORLD`, which ensures that parallel vectors are
printed correctly to `stdout`. To display the vector in an X-window,
one can use the default X-windows viewer `PETSC_VIEWER_DRAW_WORLD`, or
one can create a viewer with the routine `PetscViewerDrawOpen()`. A
variety of viewers are discussed further in
{any}`sec_viewers`.

To create a new vector of the same format and parallel layout as an existing vector,
use

```
VecDuplicate(Vec old, Vec *new);
```

To create several new vectors of the same format as an existing vector,
use

```
VecDuplicateVecs(Vec old, PetscInt n, Vec **new);
```

This routine creates an array of pointers to vectors. The two routines
are useful because they allow one to write library code that does
not depend on the particular format of the vectors being used. Instead,
the subroutines can automatically create work vectors based on
the specified existing vector.

When a vector is no longer needed, it should be destroyed with the
command

```
VecDestroy(Vec *x);
```

To destroy an array of vectors, use the command

```
VecDestroyVecs(PetscInt n, Vec **vecs);
```

It is also possible to create vectors that use an array the user provides rather than having PETSc internally allocate the array space. Such
vectors can be created with routines such as

```
VecCreateSeqWithArray(PETSC_COMM_SELF, PetscInt bs, PetscInt n, PetscScalar *array, Vec *V);
```

```
VecCreateMPIWithArray(MPI_Comm comm, PetscInt bs, PetscInt n, PetscInt N, PetscScalar *array, Vec *V);
```

```
VecCreateMPICUDAWithArray(MPI_Comm comm, PetscInt bs, PetscInt n, PetscInt N, PetscScalar *array, Vec *V);
```

The `array` pointer should be a GPU memory location for GPU vectors.

Note that here one must provide the value `n`; it cannot be
`PETSC_DECIDE`, and the user is responsible for providing enough space
in the array: `n*sizeof(PetscScalar)` bytes.

## Assembling (putting values in) vectors

One can assign a single value to all components of a vector with

```
VecSet(Vec x, PetscScalar value);
```

Assigning values to individual vector components is more
complicated, to make it possible to write efficient parallel
code. Assigning a set of components on a CPU is a two-step process: one first
calls

```
VecSetValues(Vec x, PetscInt n, PetscInt *indices, PetscScalar *values, INSERT_VALUES);
```

any number of times on any or all of the processes. The argument `n`
gives the number of components being set in this insertion. The integer
array `indices` contains the *global component indices*, and
`values` is the array of values to be inserted at those global component index locations. Any process can set
any vector components; PETSc ensures that they are automatically
stored in the correct location. Once all of the values have been
inserted with `VecSetValues()`, one must call

```
VecAssemblyBegin(Vec x);
```

followed by

```
VecAssemblyEnd(Vec x);
```

to perform any needed message passing of nonlocal components. In order
to allow the overlap of communication and calculation, the user’s code
can perform any series of other actions between these two calls while
the messages are in transition.

Example usage of `VecSetValues()` may be found in
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/vec/vec/tutorials/ex2.c.html">src/vec/vec/tutorials/ex2.c</a>
or <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/vec/vec/tutorials/ex2f.F90.html">src/vec/vec/tutorials/ex2f.F90</a>.

Rather than inserting elements in a vector, one may wish to add
values. This process is also done with the command

```
VecSetValues(Vec x, PetscInt n, PetscInt *indices, PetscScalar *values, ADD_VALUES);
```

Again, one must call the assembly routines `VecAssemblyBegin()` and
`VecAssemblyEnd()` after all of the values have been added. Note that
addition and insertion calls to `VecSetValues()` *cannot* be mixed.
Instead, one must add and insert vector elements in phases, with
intervening calls to the assembly routines. This phased assembly
procedure overcomes the nondeterministic behavior that would occur if
two different processes generated values for the same location, with one
process adding while the other is inserting its value. (In this case, the
addition and insertion actions could be performed in either order, thus
resulting in different values at the particular location. Since PETSc
does not allow the simultaneous use of `INSERT_VALUES` and
`ADD_VALUES`, this nondeterministic behavior will not occur in PETSc.)

You can call `VecGetValues()` to pull local values from a vector (but
not off-process values).

For vectors obtained with `DMCreateGlobalVector()`, one can use `VecSetValuesLocal()` to set values into
a global vector using the local (ghosted) indexing of the vector entries. See also {any}`sec_islocaltoglobalmap`,
which allows one to provide an arbitrary local-to-global mapping when not working with a `DM`.

It is also possible to interact directly with the arrays that the vector values are stored
in. The routine `VecGetArray()` returns a pointer to the elements local to
the process:

```
VecGetArray(Vec v, PetscScalar **array);
```

When access to the array is no longer needed, the user should call

```
VecRestoreArray(Vec v, PetscScalar **array);
```

For vectors that may also have the array data in GPU memory, for example, `VECCUDA`, this call ensures the CPU array has the
most recent array values by copying the data from the GPU memory if needed.

If the values do not need to be modified, the routines

```
VecGetArrayRead(Vec v, const PetscScalar **array);
VecRestoreArrayRead(Vec v, const PetscScalar **array);
```

should be used instead.

:::{admonition} Listing: <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/snes/tutorials/ex1.c.html">SNES Tutorial src/snes/tutorials/ex1.c</a>
```{literalinclude} /../src/snes/tutorials/ex1.c
:end-at: PetscFunctionReturn(PETSC_SUCCESS);
:name: snesex1
:start-at: PetscErrorCode FormFunction1(SNES snes, Vec x, Vec f, PetscCtx ctx)
```
:::

Minor differences exist in the Fortran interface for `VecGetArray()`
and `VecRestoreArray()`, as discussed in
{any}`sec_fortranarrays`. It is important to note that
`VecGetArray()` and `VecRestoreArray()` do *not* copy the vector
elements; they merely give users direct access to the vector elements.
Thus, these routines require essentially no time to call and can be used
efficiently.

For GPU vectors, one can access either the values on the CPU as described above or one
can call, for example,

```
VecCUDAGetArray(Vec v, PetscScalar **array);
```

:::{admonition} Listing: <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/snes/tutorials/ex47cu.cu.html">SNES Tutorial src/snes/tutorials/ex47cu.cu</a>
```{literalinclude} /../src/snes/tutorials/ex47cu.cu
:end-at: '}'
:name: snesex47
:start-at: PetscCall(VecCUDAGetArrayRead(xlocal, &xarray));
```
:::

or

```
VecGetArrayAndMemType(Vec v, PetscScalar **array, PetscMemType *mtype);
```

which, in the first case, returns a GPU memory address and, in the second case, returns either a CPU or GPU memory
address depending on the type of the vector. One can then launch a GPU kernel function that accesses the
vector's memory. When computing on GPUs, `VecSetValues()` is not used! One always accesses the vector's arrays and passes them
to the GPU code.

It can also be convenient to treat the vector entries as a Kokkos view. One first creates Kokkos vectors and then calls

```
VecGetKokkosView(Vec v, Kokkos::View<const PetscScalar*, MemorySpace> *kv)
```

to set or access the vector entries.

Of course, to provide the correct values to a vector, one must know what parts of the vector are owned by each MPI process.
For parallel vectors, either CPU- or GPU-based, it is possible to determine a process’s local range with the
routine

```
VecGetOwnershipRange(Vec vec, PetscInt *start, PetscInt *end);
```

The argument `start` indicates the first component owned by the local
process, while `end` specifies *one more than* the last owned by the
local process. This command is useful, for instance, in assembling
parallel vectors.

If the `Vec` was obtained from a `DM` with `DMCreateGlobalVector()`, then the range values are determined by the specific `DM`.
If the `Vec` was created directly, the range values are determined by the local size passed to `VecSetSizes()` or `VecCreateMPI()`.
If `PETSC_DECIDE` was passed as the local size, then the vector uses default values for the range computed with `PetscSplitOwnership()`.
For certain `DM`, such as `DMDA`, it is better to use `DM`-specific routines, such as `DMDAGetGhostCorners()`, to determine
the local values in the vector.

Very occasionally, all MPI processes need to know all the range values; these can be obtained with

```
VecGetOwnershipRanges(Vec vec, const PetscInt *ranges[]);
```

The number of elements stored locally can be accessed with

```
VecGetLocalSize(Vec v, PetscInt *size);
```

The global vector length can be determined by

```
VecGetSize(Vec v, PetscInt *size);
```

(sec_struct_set)=

### DMDA - Setting vector values

PETSc provides an easy way to set values into the `DMDA` vectors and
access them using the natural grid indexing. This is done with the
routines

```
DMDAVecGetArray(DM dm, Vec l, void *array);
... use the array indexing it with 1, 2, or 3 dimensions ...
... depending on the dimension of the DMDA ...
DMDAVecRestoreArray(DM dm, Vec l, void *array);
DMDAVecGetArrayRead(DM dm, Vec l, void *array);
... use the array indexing it with 1, 2, or 3 dimensions ...
... depending on the dimension of the DMDA ...
DMDAVecRestoreArrayRead(DM dm, Vec l, void *array);
```

where `array` is a multidimensional C array with the same dimension as the `DMDA`, and

```
DMDAVecGetArrayDOF(DM dm, Vec l, void *array);
... use the array indexing it with 2, 3, or 4 dimensions ...
... depending on the dimension of the DMDA ...
DMDAVecRestoreArrayDOF(DM dm, Vec l, void *array);
DMDAVecGetArrayDOFRead(DM dm, Vec l, void *array);
... use the array indexing it with 2, 3, or 4 dimensions ...
... depending on the dimension of the DMDA ...
DMDAVecRestoreArrayDOFRead(DM dm, Vec l, void *array);
```

where `array` is a multidimensional C array with one more dimension than
the `DMDA`. The vector `l` can be either a global vector or a local
vector. The `array` is accessed using the usual *global* indexing on
the entire grid, but the user may *only* refer to this array's local and ghost
entries, as all other entries are undefined. For example,
for a scalar problem in two dimensions, one could use

5247f296bb3SBarry Smith```
5257f296bb3SBarry SmithPetscScalar **f, **u;
5267f296bb3SBarry Smith...
DMDAVecGetArrayRead(dm, local, &u);
DMDAVecGetArray(dm, global, &f);
5297f296bb3SBarry Smith...
5307f296bb3SBarry Smith  f[i][j] = u[i][j] - ...
5317f296bb3SBarry Smith...
DMDAVecRestoreArrayRead(dm, local, &u);
DMDAVecRestoreArray(dm, global, &f);
5347f296bb3SBarry Smith```
5357f296bb3SBarry Smith
5367f296bb3SBarry Smith:::{admonition} Listing: <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/snes/tutorials/ex3.c.html">SNES Tutorial src/snes/tutorials/ex3.c</a>
5377f296bb3SBarry Smith```{literalinclude} /../src/snes/tutorials/ex3.c
5387f296bb3SBarry Smith:end-at: PetscFunctionReturn(PETSC_SUCCESS);
5397f296bb3SBarry Smith:name: snesex3
5402a8381b2SBarry Smith:start-at: PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, PetscCtx ctx)
5417f296bb3SBarry Smith```
5427f296bb3SBarry Smith:::
5437f296bb3SBarry Smith
5447f296bb3SBarry SmithThe recommended approach for multi-component PDEs is to declare a
5457f296bb3SBarry Smith`struct` representing the fields defined at each node of the grid,
5467f296bb3SBarry Smithe.g.
5477f296bb3SBarry Smith
5487f296bb3SBarry Smith```
5497f296bb3SBarry Smithtypedef struct {
5507f296bb3SBarry Smith  PetscScalar u, v, omega, temperature;
5517f296bb3SBarry Smith} Node;
5527f296bb3SBarry Smith```
5537f296bb3SBarry Smith
5547f296bb3SBarry Smithand write the residual evaluation using
5557f296bb3SBarry Smith
5567f296bb3SBarry Smith```
5577f296bb3SBarry SmithNode **f, **u;
DMDAVecGetArray(dm, local, &u);
DMDAVecGetArray(dm, global, &f);
5607f296bb3SBarry Smith ...
5617f296bb3SBarry Smith    f[i][j].omega = ...
5627f296bb3SBarry Smith ...
DMDAVecRestoreArray(dm, local, &u);
DMDAVecRestoreArray(dm, global, &f);
5657f296bb3SBarry Smith```
5667f296bb3SBarry Smith
Analogues of the `DMDAVecGetArray()` routines are also provided for GPU access with CUDA, HIP, and Kokkos. For example,
5687f296bb3SBarry Smith
5697f296bb3SBarry Smith```
DMDAVecGetKokkosOffsetView(DM dm, Vec vec, Kokkos::Experimental::OffsetView<const PetscScalar*XX*, MemorySpace> *ov)
5717f296bb3SBarry Smith```
5727f296bb3SBarry Smith
where `*XX*` can contain any number of `*`. This allows one to write very natural Kokkos multidimensional parallel-for kernels
5747f296bb3SBarry Smiththat act on the local portion of `DMDA` vectors.
5757f296bb3SBarry Smith
5767f296bb3SBarry Smith:::{admonition} Listing: <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/snes/tutorials/ex3k.kokkos.cxx.html">SNES Tutorial src/snes/tutorials/ex3k.kokkos.cxx</a>
5777f296bb3SBarry Smith:name: snes-ex3-kokkos
5787f296bb3SBarry Smith
5797f296bb3SBarry Smith```{literalinclude} /../src/snes/tutorials/ex3k.kokkos.cxx
5807f296bb3SBarry Smith:end-at: PetscFunctionReturn(PETSC_SUCCESS);
5812a8381b2SBarry Smith:start-at: PetscErrorCode KokkosFunction(SNES snes, Vec x, Vec r, PetscCtx ctx)
5827f296bb3SBarry Smith```
5837f296bb3SBarry Smith:::
5847f296bb3SBarry Smith
The global indices of the lower-left corner of the local portion of a vector obtained from a `DMDA`,
as well as the local array size, can be obtained with the commands
5877f296bb3SBarry Smith
5887f296bb3SBarry Smith```
589b5ef2b50SBarry SmithDMDAGetCorners(DM dm, PetscInt *x, PetscInt *y, PetscInt *z, PetscInt *m, PetscInt *n, PetscInt *p);
590b5ef2b50SBarry SmithDMDAGetGhostCorners(DM dm, PetscInt *x, PetscInt *y, PetscInt *z, PetscInt *m, PetscInt *n, PetscInt *p);
5917f296bb3SBarry Smith```
5927f296bb3SBarry Smith
5937f296bb3SBarry SmithThese values can then be used as loop bounds for local function evaluations as demonstrated in the function examples above.
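
Concretely, a loop over the owned portion of a two-dimensional `DMDA` might look like the following sketch (error checking and the residual computation are elided; note that two-dimensional arrays obtained from `DMDAVecGetArray()` are indexed `f[j][i]`, with `j` the index in the y direction):

```
PetscInt i, j, xs, ys, xm, ym;
DMDAGetCorners(dm, &xs, &ys, NULL, &xm, &ym, NULL);
for (j = ys; j < ys + ym; j++)
  for (i = xs; i < xs + xm; i++)
    f[j][i] = ...;
```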
5947f296bb3SBarry Smith
5957f296bb3SBarry SmithThe first version excludes ghost points, while the second
5967f296bb3SBarry Smithincludes them. The routine `DMDAGetGhostCorners()` deals with the fact
5977f296bb3SBarry Smiththat subarrays along boundaries of the problem domain have ghost points
5987f296bb3SBarry Smithonly on their interior edges, but not on their boundary edges.
5997f296bb3SBarry Smith
6007f296bb3SBarry SmithWhen either type of stencil is used, `DMDA_STENCIL_STAR` or
6017f296bb3SBarry Smith`DMDA_STENCIL_BOX`, the local vectors (with the ghost points)
6027f296bb3SBarry Smithrepresent rectangular arrays, including the extra corner elements in the
6037f296bb3SBarry Smith`DMDA_STENCIL_STAR` case. This configuration provides simple access to
the elements by employing two- (or three-) dimensional indexing. The
6057f296bb3SBarry Smithonly difference between the two cases is that when `DMDA_STENCIL_STAR`
6067f296bb3SBarry Smithis used, the extra corner components are *not* scattered between the
6077f296bb3SBarry Smithprocesses and thus contain undefined values that should *not* be used.
6087f296bb3SBarry Smith
6097f296bb3SBarry Smith(sec_stag_set)=
6107f296bb3SBarry Smith
6117f296bb3SBarry Smith### DMSTAG - Setting vector values
6127f296bb3SBarry Smith
6137f296bb3SBarry SmithFor structured grids with staggered data (living on elements, faces, edges,
6147f296bb3SBarry Smithand/or vertices), the `DMStag` object is available. It behaves
6157f296bb3SBarry Smithlike `DMDA`; see the `DMSTAG` manual page for more information.
6167f296bb3SBarry Smith
:::{admonition} Listing: <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/dm/impls/stag/tutorials/ex6.c.html">DMSTAG Tutorial src/dm/impls/stag/tutorials/ex6.c</a>
6187f296bb3SBarry Smith```{literalinclude} /../src/dm/impls/stag/tutorials/ex6.c
6197f296bb3SBarry Smith:end-at: /* Update x-velocity */
6207f296bb3SBarry Smith:name: stagex6
6217f296bb3SBarry Smith:start-at: static PetscErrorCode UpdateVelocity_2d(const Ctx *ctx, Vec velocity, Vec
6227f296bb3SBarry Smith:  stress, Vec buoyancy)
6237f296bb3SBarry Smith```
6247f296bb3SBarry Smith:::
6257f296bb3SBarry Smith
6267f296bb3SBarry Smith(sec_unstruct_set)=
6277f296bb3SBarry Smith
6287f296bb3SBarry Smith### DMPLEX - Setting vector values
6297f296bb3SBarry Smith
6307f296bb3SBarry SmithSee {any}`ch_unstructured` for a discussion on setting vector values with `DMPLEX`.
6317f296bb3SBarry Smith
6327f296bb3SBarry Smith(sec_network_set)=
6337f296bb3SBarry Smith
6347f296bb3SBarry Smith### DMNETWORK - Setting vector values
6357f296bb3SBarry Smith
6367f296bb3SBarry SmithSee {any}`ch_network` for a discussion on setting vector values with `DMNETWORK`.
6377f296bb3SBarry Smith
6387f296bb3SBarry Smith(sec_vecbasic)=
6397f296bb3SBarry Smith
6407f296bb3SBarry Smith## Basic Vector Operations
6417f296bb3SBarry Smith
6427f296bb3SBarry Smith% Should make the table more attractive by using, for example, cloud_sptheme.ext.table_styling and the lines below
6437f296bb3SBarry Smith% :column-alignment: left left
6447f296bb3SBarry Smith% :widths: 72 28
6457f296bb3SBarry Smith
6467f296bb3SBarry Smith:::{container}
6477f296bb3SBarry Smith:name: fig_vectorops
6487f296bb3SBarry Smith
6497f296bb3SBarry Smith```{eval-rst}
6507f296bb3SBarry Smith.. table:: PETSc Vector Operations
6517f296bb3SBarry Smith
652eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6537f296bb3SBarry Smith   | **Function Name**                                            | **Operation**                     |
654eb9bb3e1SBarry Smith   +==============================================================+===================================+
6557f296bb3SBarry Smith   | ``VecAXPY(Vec y, PetscScalar a, Vec x);``                    | :math:`y = y + a*x`               |
656eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6577f296bb3SBarry Smith   | ``VecAYPX(Vec y, PetscScalar a, Vec x);``                    | :math:`y = x + a*y`               |
658eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6597f296bb3SBarry Smith   | ``VecWAXPY(Vec  w, PetscScalar a, Vec x, Vec y);``           | :math:`w = a*x + y`               |
660eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6617f296bb3SBarry Smith   | ``VecAXPBY(Vec y, PetscScalar a, PetscScalar b, Vec x);``    | :math:`y = a*x + b*y`             |
662eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6637f296bb3SBarry Smith   | ``VecAXPBYPCZ(Vec z, PetscScalar a, PetscScalar b,           | :math:`z = a*x + b*y + c*z`       |
6647f296bb3SBarry Smith   | PetscScalar c, Vec x, Vec y);``                              |                                   |
665eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6667f296bb3SBarry Smith   | ``VecScale(Vec x, PetscScalar a);``                          | :math:`x = a*x`                   |
667eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6687f296bb3SBarry Smith   | ``VecDot(Vec x, Vec y, PetscScalar *r);``                    | :math:`r = \bar{x}^T*y`           |
669eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
   | ``VecTDot(Vec x, Vec y, PetscScalar *r);``                   | :math:`r = x^T*y`                 |
671eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6727f296bb3SBarry Smith   | ``VecNorm(Vec x, NormType type, PetscReal *r);``             | :math:`r = ||x||_{type}`          |
673eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6747f296bb3SBarry Smith   | ``VecSum(Vec x, PetscScalar *r);``                           | :math:`r = \sum x_{i}`            |
675eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6767f296bb3SBarry Smith   | ``VecCopy(Vec x, Vec y);``                                   | :math:`y = x`                     |
677eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6787f296bb3SBarry Smith   | ``VecSwap(Vec x, Vec y);``                                   | :math:`y = x` while               |
6797f296bb3SBarry Smith   |                                                              | :math:`x = y`                     |
680eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6817f296bb3SBarry Smith   | ``VecPointwiseMult(Vec w, Vec x, Vec y);``                   | :math:`w_{i} = x_{i}*y_{i}`       |
682eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6837f296bb3SBarry Smith   | ``VecPointwiseDivide(Vec w, Vec x, Vec y);``                 | :math:`w_{i} = x_{i}/y_{i}`       |
684eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6857f296bb3SBarry Smith   | ``VecMDot(Vec x, PetscInt n, Vec y[], PetscScalar *r);``     | :math:`r[i] = \bar{x}^T*y[i]`     |
686eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6877f296bb3SBarry Smith   | ``VecMTDot(Vec x, PetscInt n, Vec y[], PetscScalar *r);``    | :math:`r[i] = x^T*y[i]`           |
688eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6897f296bb3SBarry Smith   | ``VecMAXPY(Vec y, PetscInt n, PetscScalar *a, Vec x[]);``    | :math:`y = y + \sum_i a_{i}*x[i]` |
690eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6917f296bb3SBarry Smith   | ``VecMax(Vec x, PetscInt *idx, PetscReal *r);``              | :math:`r = \max x_{i}`            |
692eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6937f296bb3SBarry Smith   | ``VecMin(Vec x, PetscInt *idx, PetscReal *r);``              | :math:`r = \min x_{i}`            |
694eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6957f296bb3SBarry Smith   | ``VecAbs(Vec x);``                                           | :math:`x_i = |x_{i}|`             |
696eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6977f296bb3SBarry Smith   | ``VecReciprocal(Vec x);``                                    | :math:`x_i = 1/x_{i}`             |
698eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
6997f296bb3SBarry Smith   | ``VecShift(Vec x, PetscScalar s);``                          | :math:`x_i = s + x_{i}`           |
700eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
7017f296bb3SBarry Smith   | ``VecSet(Vec x, PetscScalar alpha);``                        | :math:`x_i = \alpha`              |
702eb9bb3e1SBarry Smith   +--------------------------------------------------------------+-----------------------------------+
7037f296bb3SBarry Smith```
7047f296bb3SBarry Smith:::
7057f296bb3SBarry Smith
7067f296bb3SBarry SmithAs the table lists, we have chosen certain
7077f296bb3SBarry Smithbasic vector operations to support within the PETSc vector library.
7087f296bb3SBarry SmithThese operations were selected because they often arise in application
7097f296bb3SBarry Smithcodes. The `NormType` argument to `VecNorm()` is one of `NORM_1`,
7107f296bb3SBarry Smith`NORM_2`, or `NORM_INFINITY`. The 1-norm is $\sum_i |x_{i}|$,
the 2-norm is $( \sum_{i} |x_{i}|^{2})^{1/2}$ and the infinity norm
7127f296bb3SBarry Smithis $\max_{i} |x_{i}|$.
7137f296bb3SBarry Smith
In addition to `VecDot()`, `VecMDot()`, and `VecNorm()`, PETSc
7157f296bb3SBarry Smithprovides split phase versions of this functionality that allow several independent
7167f296bb3SBarry Smithinner products and/or norms to share the same communication (thus
7177f296bb3SBarry Smithimproving parallel efficiency). For example, one may have code such as
7187f296bb3SBarry Smith
7197f296bb3SBarry Smith```
7207f296bb3SBarry SmithVecDot(Vec x, Vec y, PetscScalar *dot);
7217f296bb3SBarry SmithVecMDot(Vec x, PetscInt nv, Vec y[], PetscScalar *dot);
7227f296bb3SBarry SmithVecNorm(Vec x, NormType NORM_2, PetscReal *norm2);
7237f296bb3SBarry SmithVecNorm(Vec x, NormType NORM_1, PetscReal *norm1);
7247f296bb3SBarry Smith```
7257f296bb3SBarry Smith
7267f296bb3SBarry SmithThis code works fine, but it performs four separate parallel
7277f296bb3SBarry Smithcommunication operations. Instead, one can write
7287f296bb3SBarry Smith
7297f296bb3SBarry Smith```
7307f296bb3SBarry SmithVecDotBegin(Vec x, Vec y, PetscScalar *dot);
7317f296bb3SBarry SmithVecMDotBegin(Vec x, PetscInt nv, Vec y[], PetscScalar *dot);
7327f296bb3SBarry SmithVecNormBegin(Vec x, NormType NORM_2, PetscReal *norm2);
7337f296bb3SBarry SmithVecNormBegin(Vec x, NormType NORM_1, PetscReal *norm1);
7347f296bb3SBarry SmithVecDotEnd(Vec x, Vec y, PetscScalar *dot);
7357f296bb3SBarry SmithVecMDotEnd(Vec x,  PetscInt nv, Vec y[], PetscScalar *dot);
7367f296bb3SBarry SmithVecNormEnd(Vec x, NormType NORM_2, PetscReal *norm2);
7377f296bb3SBarry SmithVecNormEnd(Vec x, NormType NORM_1, PetscReal *norm1);
7387f296bb3SBarry Smith```
7397f296bb3SBarry Smith
With this code, the communication is delayed until the first call to
`VecxxxEnd()`, at which point a single MPI reduction is used to communicate
all the values. It is required that the calls to the
7437f296bb3SBarry Smith`VecxxxEnd()` are performed in the same order as the calls to the
7447f296bb3SBarry Smith`VecxxxBegin()`; however, if you mistakenly make the calls in the
7457f296bb3SBarry Smithwrong order, PETSc will generate an error informing you of this. There
are additional analogous routines `VecTDotBegin()`/`VecTDotEnd()` and
`VecMTDotBegin()`/`VecMTDotEnd()`.
7487f296bb3SBarry Smith
For GPU vectors (such as CUDA vectors), the numerical computations will, by default, run on the GPU. Any
scalar output, such as the result of a `VecDot()`, is placed in CPU memory.
7517f296bb3SBarry Smith
7527f296bb3SBarry Smith(sec_localglobal)=
7537f296bb3SBarry Smith
7547f296bb3SBarry Smith## Local/global vectors and communicating between vectors
7557f296bb3SBarry Smith
7567f296bb3SBarry SmithMany PDE problems require ghost (or halo) values in each MPI process or even more general parallel communication
7577f296bb3SBarry Smithof vector values. These values are needed
7587f296bb3SBarry Smithto perform function evaluation on that MPI process. The exact structure of the ghost values needed
7597f296bb3SBarry Smithdepends on the type of grid being used. `DM` provides a uniform API for communicating the needed
7607f296bb3SBarry Smithvalues. We introduce the concept in detail for `DMDA`.
7617f296bb3SBarry Smith
7627f296bb3SBarry SmithEach `DM` object defines the layout of two vectors: a distributed
7637f296bb3SBarry Smithglobal vector and a local vector that includes room for the appropriate
7647f296bb3SBarry Smithghost points. The `DM` object provides information about the size
7657f296bb3SBarry Smithand layout of these vectors. The user can create
7667f296bb3SBarry Smithvector objects that use the `DM` layout information with the
7677f296bb3SBarry Smithroutines
7687f296bb3SBarry Smith
7697f296bb3SBarry Smith```
770b5ef2b50SBarry SmithDMCreateGlobalVector(DM dm, Vec *g);
771b5ef2b50SBarry SmithDMCreateLocalVector(DM dm, Vec *l);
7727f296bb3SBarry Smith```
7737f296bb3SBarry Smith
7747f296bb3SBarry SmithThese vectors will generally serve as the building blocks for local and
7757f296bb3SBarry Smithglobal PDE solutions, etc. If additional vectors with such layout
7767f296bb3SBarry Smithinformation are needed in a code, they can be obtained by duplicating
7777f296bb3SBarry Smith`l` or `g` via `VecDuplicate()` or `VecDuplicateVecs()`.
7787f296bb3SBarry Smith
7797f296bb3SBarry SmithWe emphasize that a `DM` provides the information needed to
7807f296bb3SBarry Smithcommunicate the ghost value information between processes. In most
7817f296bb3SBarry Smithcases, several different vectors can share the same communication
7827f296bb3SBarry Smithinformation (or, in other words, can share a given `DM`). The design
7837f296bb3SBarry Smithof the `DM` object makes this easy, as each `DM` operation may
7847f296bb3SBarry Smithoperate on vectors of the appropriate size, as obtained via
7857f296bb3SBarry Smith`DMCreateLocalVector()` and `DMCreateGlobalVector()` or as produced
7867f296bb3SBarry Smithby `VecDuplicate()`.
7877f296bb3SBarry Smith
7887f296bb3SBarry SmithAt certain stages of many applications, there is a need to work on a
7897f296bb3SBarry Smithlocal portion of the vector that includes the ghost points. This may be
7907f296bb3SBarry Smithdone by scattering a global vector into its local parts by using the
7917f296bb3SBarry Smithtwo-stage commands
7927f296bb3SBarry Smith
7937f296bb3SBarry Smith```
794b5ef2b50SBarry SmithDMGlobalToLocalBegin(DM dm, Vec g, InsertMode iora, Vec l);
795b5ef2b50SBarry SmithDMGlobalToLocalEnd(DM dm, Vec g, InsertMode iora, Vec l);
7967f296bb3SBarry Smith```
7977f296bb3SBarry Smith
7987f296bb3SBarry Smithwhich allows the overlap of communication and computation. Since the
global and local vectors, given by `g` and `l`, respectively, must
be compatible with the `DM`, `dm`, they should be
8017f296bb3SBarry Smithgenerated by `DMCreateGlobalVector()` and `DMCreateLocalVector()`
8027f296bb3SBarry Smith(or be duplicates of such a vector obtained via `VecDuplicate()`). The
8037f296bb3SBarry Smith`InsertMode` can be `ADD_VALUES` or `INSERT_VALUES` among other possible values.
8047f296bb3SBarry Smith
8057f296bb3SBarry SmithOne can scatter the local vectors into the distributed global vector with the
8067f296bb3SBarry Smithcommand
8077f296bb3SBarry Smith
8087f296bb3SBarry Smith```
809b5ef2b50SBarry SmithDMLocalToGlobal(DM dm, Vec l, InsertMode mode, Vec g);
8107f296bb3SBarry Smith```
8117f296bb3SBarry Smith
8127f296bb3SBarry Smithor the commands
8137f296bb3SBarry Smith
8147f296bb3SBarry Smith```
815b5ef2b50SBarry SmithDMLocalToGlobalBegin(DM dm, Vec l, InsertMode mode, Vec g);
8167f296bb3SBarry Smith/* (Computation to overlap with communication) */
817b5ef2b50SBarry SmithDMLocalToGlobalEnd(DM dm, Vec l, InsertMode mode, Vec g);
8187f296bb3SBarry Smith```
8197f296bb3SBarry Smith
8207f296bb3SBarry SmithIn general this is used with an `InsertMode` of `ADD_VALUES`,
8217f296bb3SBarry Smithbecause if one wishes to insert values into the global vector, they
8227f296bb3SBarry Smithshould access the global vector directly and put in the values.
8237f296bb3SBarry Smith
8247f296bb3SBarry SmithA third type of `DM` scatter is from a local vector
8257f296bb3SBarry Smith(including ghost points that contain irrelevant values) to a local
8267f296bb3SBarry Smithvector with correct ghost point values. This scatter may be done with
8277f296bb3SBarry Smiththe commands
8287f296bb3SBarry Smith
8297f296bb3SBarry Smith```
830b5ef2b50SBarry SmithDMLocalToLocalBegin(DM dm, Vec l1, InsertMode iora, Vec l2);
831b5ef2b50SBarry SmithDMLocalToLocalEnd(DM dm, Vec l1, InsertMode iora, Vec l2);
8327f296bb3SBarry Smith```
8337f296bb3SBarry Smith
Since both local vectors, `l1` and `l2`, must be compatible with the `DM`, `dm`, they should be generated by
8357f296bb3SBarry Smith`DMCreateLocalVector()` (or be duplicates of such vectors obtained via
8367f296bb3SBarry Smith`VecDuplicate()`). The `InsertMode` can be either `ADD_VALUES` or
8377f296bb3SBarry Smith`INSERT_VALUES`.
8387f296bb3SBarry Smith
8397f296bb3SBarry SmithIn most applications, the local ghosted vectors are only needed temporarily during
user “function evaluations”. PETSc provides an easy, lightweight
(requiring essentially no CPU time) way to obtain these work vectors temporarily and
return them when they are no longer needed. This is done with the
8437f296bb3SBarry Smithroutines
8447f296bb3SBarry Smith
8457f296bb3SBarry Smith```
846b5ef2b50SBarry SmithDMGetLocalVector(DM dm, Vec *l);
8477f296bb3SBarry Smith... use the local vector l ...
848b5ef2b50SBarry SmithDMRestoreLocalVector(DM dm, Vec *l);
8497f296bb3SBarry Smith```
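
Combining these pieces, a typical ghost-point update inside a user function evaluation follows this pattern (a sketch only; `X` denotes the incoming global vector, and error checking is omitted):

```
Vec localX;
DMGetLocalVector(dm, &localX);
DMGlobalToLocalBegin(dm, X, INSERT_VALUES, localX);
DMGlobalToLocalEnd(dm, X, INSERT_VALUES, localX);
... access the ghosted values of localX, e.g., with DMDAVecGetArrayRead() ...
DMRestoreLocalVector(dm, &localX);
```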
8507f296bb3SBarry Smith
8517f296bb3SBarry Smith(sec_scatter)=
8527f296bb3SBarry Smith
8537f296bb3SBarry Smith## Low-level Vector Communication
8547f296bb3SBarry Smith
8557f296bb3SBarry SmithMost users of PETSc who can utilize a `DM` will not need to utilize the lower-level routines discussed in the rest of this section
8567f296bb3SBarry Smithand should skip ahead to {any}`ch_matrices`.
8577f296bb3SBarry Smith
To facilitate creating general vector scatters and gathers used, for example, in
updating ghost points for problems for which no `DM` currently exists,
PETSc employs the concept of an *index set*, via the `IS` class. An
8617f296bb3SBarry Smithindex set, a generalization of a set of integer indices, is
8627f296bb3SBarry Smithused to define scatters, gathers, and similar operations on vectors and
8637f296bb3SBarry Smithmatrices. Much of the underlying code that implements `DMGlobalToLocal` communication is built
8647f296bb3SBarry Smithon the infrastructure discussed below.
8657f296bb3SBarry Smith
8667f296bb3SBarry SmithThe following command creates an index set based on a list of integers:
8677f296bb3SBarry Smith
8687f296bb3SBarry Smith```
8697f296bb3SBarry SmithISCreateGeneral(MPI_Comm comm, PetscInt n, PetscInt *indices, PetscCopyMode mode, IS *is);
8707f296bb3SBarry Smith```
8717f296bb3SBarry Smith
8727f296bb3SBarry SmithWhen `mode` is `PETSC_COPY_VALUES`, this routine copies the `n`
8737f296bb3SBarry Smithindices passed to it by the integer array `indices`. Thus, the user
8747f296bb3SBarry Smithshould be sure to free the integer array `indices` when it is no
8757f296bb3SBarry Smithlonger needed, perhaps directly after the call to `ISCreateGeneral()`.
8767f296bb3SBarry SmithThe communicator, `comm`, should include all processes
8777f296bb3SBarry Smithusing the `IS`.
8787f296bb3SBarry Smith
8797f296bb3SBarry SmithAnother standard index set is defined by a starting point (`first`)
8807f296bb3SBarry Smithand a stride (`step`), and can be created with the command
8817f296bb3SBarry Smith
8827f296bb3SBarry Smith```
8837f296bb3SBarry SmithISCreateStride(MPI_Comm comm, PetscInt n, PetscInt first, PetscInt step, IS *is);
8847f296bb3SBarry Smith```
8857f296bb3SBarry Smith
The meanings of `n`, `first`, and `step` correspond to the MATLAB notation
`first:step:first+(n-1)*step`.
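
In plain C terms (the helper name below is illustrative, not a PETSc routine), a stride index set contains the following `n` entries:

```c
/* Fill indices[0 .. n-1] with the stride pattern first, first+step, first+2*step, ... */
static void stride_indices(int indices[], int n, int first, int step)
{
  for (int i = 0; i < n; i++) indices[i] = first + i * step;
}
```

For example, `n = 5`, `first = 3`, `step = 2` yields the indices 3, 5, 7, 9, 11.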
8887f296bb3SBarry Smith
8897f296bb3SBarry SmithIndex sets can be destroyed with the command
8907f296bb3SBarry Smith
8917f296bb3SBarry Smith```
ISDestroy(IS *is);
8937f296bb3SBarry Smith```
8947f296bb3SBarry Smith
8957f296bb3SBarry SmithOn rare occasions, the user may need to access information directly from
8967f296bb3SBarry Smithan index set. Several commands assist in this process:
8977f296bb3SBarry Smith
8987f296bb3SBarry Smith```
8997f296bb3SBarry SmithISGetSize(IS is, PetscInt *size);
9007f296bb3SBarry SmithISStrideGetInfo(IS is, PetscInt *first, PetscInt *stride);
ISGetIndices(IS is, const PetscInt *indices[]);
9027f296bb3SBarry Smith```
9037f296bb3SBarry Smith
9047f296bb3SBarry SmithThe function `ISGetIndices()` returns a pointer to a list of the
9057f296bb3SBarry Smithindices in the index set. For certain index sets, this may be a
9067f296bb3SBarry Smithtemporary array of indices created specifically for the call.
9077f296bb3SBarry SmithThus, once the user finishes using the array of indices, the routine
9087f296bb3SBarry Smith
9097f296bb3SBarry Smith```
ISRestoreIndices(IS is, const PetscInt *indices[]);
9117f296bb3SBarry Smith```
9127f296bb3SBarry Smith
9137f296bb3SBarry Smithshould be called to ensure that the system can free the space it may
9147f296bb3SBarry Smithhave used to generate the list of indices.
9157f296bb3SBarry Smith
9167f296bb3SBarry SmithA blocked version of index sets can be created with the command
9177f296bb3SBarry Smith
9187f296bb3SBarry Smith```
9197f296bb3SBarry SmithISCreateBlock(MPI_Comm comm, PetscInt bs, PetscInt n, PetscInt *indices, PetscCopyMode mode, IS *is);
9207f296bb3SBarry Smith```
9217f296bb3SBarry Smith
9227f296bb3SBarry SmithThis version is used for defining operations in which each element of
9237f296bb3SBarry Smiththe index set refers to a block of `bs` vector entries. Related
9247f296bb3SBarry Smithroutines analogous to those described above exist as well, including
9257f296bb3SBarry Smith`ISBlockGetIndices()`, `ISBlockGetSize()`,
9267f296bb3SBarry Smith`ISBlockGetLocalSize()`, `ISGetBlockSize()`.
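
Each block index `b` with block size `bs` refers to the `bs` contiguous vector entries `bs*b` through `bs*b + bs - 1`, so a block index set is equivalent to an expanded general index set. The expansion can be sketched in plain C (the helper is illustrative, not a PETSc routine):

```c
/* Expand n block indices (block size bs) into the n*bs point indices they denote */
static void expand_block_indices(const int bidx[], int n, int bs, int pidx[])
{
  for (int i = 0; i < n; i++)
    for (int j = 0; j < bs; j++) pidx[i * bs + j] = bs * bidx[i] + j;
}
```

For example, with `bs = 2` the block indices `{1, 3}` denote the vector entries `{2, 3, 6, 7}`.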
9277f296bb3SBarry Smith
9287f296bb3SBarry SmithMost PETSc applications use a particular `DM` object to manage the communication details needed for their grids.
9297f296bb3SBarry SmithIn some rare cases, however, codes need to directly set up their required communication patterns. This is done using
9307f296bb3SBarry SmithPETSc's `VecScatter` and `PetscSF` (for more general data than vectors). One
9317f296bb3SBarry Smithcan select any subset of the components of a vector to insert or add to
9327f296bb3SBarry Smithany subset of the components of another vector. We refer to these
9337f296bb3SBarry Smithoperations as *generalized scatters*, though they are a
9347f296bb3SBarry Smithcombination of scatters and gathers.
9357f296bb3SBarry Smith
9367f296bb3SBarry SmithTo copy selected components from one vector to another, one uses the
9377f296bb3SBarry Smithfollowing set of commands:
9387f296bb3SBarry Smith
9397f296bb3SBarry Smith```
9407f296bb3SBarry SmithVecScatterCreate(Vec x, IS ix, Vec y, IS iy, VecScatter *ctx);
9417f296bb3SBarry SmithVecScatterBegin(VecScatter ctx, Vec x, Vec y, INSERT_VALUES, SCATTER_FORWARD);
9427f296bb3SBarry SmithVecScatterEnd(VecScatter ctx, Vec x, Vec y, INSERT_VALUES, SCATTER_FORWARD);
9437f296bb3SBarry SmithVecScatterDestroy(VecScatter *ctx);
9447f296bb3SBarry Smith```
9457f296bb3SBarry Smith
9467f296bb3SBarry SmithHere `ix` denotes the index set of the first vector, while `iy`
9477f296bb3SBarry Smithindicates the index set of the destination vector. The vectors can be
9487f296bb3SBarry Smithparallel or sequential. The only requirements are that the number of
9497f296bb3SBarry Smithentries in the index set of the first vector, `ix`, equals the number
9507f296bb3SBarry Smithin the destination index set, `iy`, and that the vectors be long
9517f296bb3SBarry Smithenough to contain all the indices referred to in the index sets. If both
9527f296bb3SBarry Smith`x` and `y` are parallel, their communicator must have the same set
9537f296bb3SBarry Smithof processes, but their process order can differ. The argument
9547f296bb3SBarry Smith`INSERT_VALUES` specifies that the vector elements will be inserted
9557f296bb3SBarry Smithinto the specified locations of the destination vector, overwriting any
9567f296bb3SBarry Smithexisting values. To add the components, rather than insert them, the
9577f296bb3SBarry Smithuser should select the option `ADD_VALUES` instead of
9587f296bb3SBarry Smith`INSERT_VALUES`. One can also use `MAX_VALUES` or `MIN_VALUES` to
9597f296bb3SBarry Smithreplace the destination with the maximal or minimal of its current value and
9607f296bb3SBarry Smiththe scattered values.
9617f296bb3SBarry Smith
9627f296bb3SBarry SmithTo perform a conventional gather operation, the user makes the
9637f296bb3SBarry Smithdestination index set, `iy`, be a stride index set with a stride of
9647f296bb3SBarry Smithone. Similarly, a conventional scatter can be done with an initial
9657f296bb3SBarry Smith(sending) index set consisting of a stride. The scatter routines are
9667f296bb3SBarry Smithcollective operations (i.e. all processes that own a parallel vector
9677f296bb3SBarry Smith*must* call the scatter routines). When scattering from a parallel
9687f296bb3SBarry Smithvector to sequential vectors, each process has its own sequential vector
9697f296bb3SBarry Smiththat receives values from locations as indicated in its own index set.
9707f296bb3SBarry SmithSimilarly, in scattering from sequential vectors to a parallel vector,
9717f296bb3SBarry Smitheach process has its own sequential vector that contributes to
9727f296bb3SBarry Smiththe parallel vector.
9737f296bb3SBarry Smith
9747f296bb3SBarry Smith*Caution*: When `INSERT_VALUES` is used, if two different processes
9757f296bb3SBarry Smithcontribute different values to the same component in a parallel vector,
9767f296bb3SBarry Smitheither value may be inserted. When `ADD_VALUES` is used, the
9777f296bb3SBarry Smithcorrect sum is added to the correct location.
9787f296bb3SBarry Smith
9797f296bb3SBarry SmithIn some cases, one may wish to “undo” a scatter, that is, perform the
9807f296bb3SBarry Smithscatter backward, switching the roles of the sender and receiver. This
9817f296bb3SBarry Smithis done by using
9827f296bb3SBarry Smith
9837f296bb3SBarry Smith```
9847f296bb3SBarry SmithVecScatterBegin(VecScatter ctx, Vec y, Vec x, INSERT_VALUES, SCATTER_REVERSE);
9857f296bb3SBarry SmithVecScatterEnd(VecScatter ctx, Vec y, Vec x, INSERT_VALUES, SCATTER_REVERSE);
9867f296bb3SBarry Smith```
9877f296bb3SBarry Smith
9887f296bb3SBarry SmithNote that the roles of the first two arguments to these routines must be
9897f296bb3SBarry Smithswapped whenever the `SCATTER_REVERSE` option is used.
9907f296bb3SBarry Smith
9917f296bb3SBarry SmithOnce a `VecScatter` object has been created, it may be used with any
9927f296bb3SBarry Smithvectors that have the same parallel data layout. That is, one can
9937f296bb3SBarry Smithcall `VecScatterBegin()` and `VecScatterEnd()` with different
9947f296bb3SBarry Smithvectors than used in the call to `VecScatterCreate()` as long as they
have the same parallel layout (the number of elements on each process is
the same). Usually, these “different” vectors would have been obtained
9977f296bb3SBarry Smithvia calls to `VecDuplicate()` from the original vectors used in the
9987f296bb3SBarry Smithcall to `VecScatterCreate()`.
9997f296bb3SBarry Smith
`VecGetValues()` can only access
local values from the vector. To get off-process values, the user should
create a new vector where the components will be stored and then
perform the appropriate vector scatter. For example, to
obtain the values of the 100th and 200th entries of a parallel vector
`p`, one could use code such as that below, which places those two
components in the array `values`.
Afterwards, each process has the 100th and 200th components, but
each process could equally gather any entries it needs, or none at all by
creating an index set with no entries.
10107f296bb3SBarry Smith
10117f296bb3SBarry Smith```
10127f296bb3SBarry SmithVec         p, x;         /* initial vector, destination vector */
10137f296bb3SBarry SmithVecScatter  scatter;      /* scatter context */
10147f296bb3SBarry SmithIS          from, to;     /* index sets that define the scatter */
10157f296bb3SBarry SmithPetscScalar *values;
10167f296bb3SBarry SmithPetscInt    idx_from[] = {100, 200}, idx_to[] = {0, 1};
10177f296bb3SBarry Smith
10187f296bb3SBarry SmithVecCreateSeq(PETSC_COMM_SELF, 2, &x);
10197f296bb3SBarry SmithISCreateGeneral(PETSC_COMM_SELF, 2, idx_from, PETSC_COPY_VALUES, &from);
10207f296bb3SBarry SmithISCreateGeneral(PETSC_COMM_SELF, 2, idx_to, PETSC_COPY_VALUES, &to);
10217f296bb3SBarry SmithVecScatterCreate(p, from, x, to, &scatter);
10227f296bb3SBarry SmithVecScatterBegin(scatter, p, x, INSERT_VALUES, SCATTER_FORWARD);
10237f296bb3SBarry SmithVecScatterEnd(scatter, p, x, INSERT_VALUES, SCATTER_FORWARD);
VecGetArray(x, &values);
/* ... use values[0] and values[1] ... */
VecRestoreArray(x, &values);
ISDestroy(&from);
ISDestroy(&to);
VecScatterDestroy(&scatter);
10287f296bb3SBarry Smith```
10297f296bb3SBarry Smith
10307f296bb3SBarry SmithThe scatter comprises two stages to allow for the overlap of
10317f296bb3SBarry Smithcommunication and computation. The introduction of the `VecScatter`
10327f296bb3SBarry Smithcontext allows the communication patterns for the scatter to be computed
10337f296bb3SBarry Smithonce and reused repeatedly. Generally, even setting up the
10347f296bb3SBarry Smithcommunication for a scatter requires communication; hence, it is best to
10357f296bb3SBarry Smithreuse such information when possible.
10367f296bb3SBarry Smith
10377f296bb3SBarry SmithScatters provide a very general method for managing the
10387f296bb3SBarry Smithcommunication of required ghost values for unstructured grid
10397f296bb3SBarry Smithcomputations. One scatters the global vector into a local “ghosted” work
10407f296bb3SBarry Smithvector, performs the computation on the local work vectors, and then
10417f296bb3SBarry Smithscatters back into the global solution vector. In the simplest case, this
10427f296bb3SBarry Smithmay be written as
10437f296bb3SBarry Smith
10447f296bb3SBarry Smith```
10457f296bb3SBarry SmithVecScatterBegin(VecScatter scatter, Vec globalin, Vec localin, InsertMode INSERT_VALUES, ScatterMode SCATTER_FORWARD);
10467f296bb3SBarry SmithVecScatterEnd(VecScatter scatter, Vec globalin, Vec localin, InsertMode INSERT_VALUES, ScatterMode SCATTER_FORWARD);
10477f296bb3SBarry Smith/* For example, do local calculations from localin to localout */
10487f296bb3SBarry Smith...
10497f296bb3SBarry SmithVecScatterBegin(VecScatter scatter, Vec localout, Vec globalout, InsertMode ADD_VALUES, ScatterMode SCATTER_REVERSE);
10507f296bb3SBarry SmithVecScatterEnd(VecScatter scatter, Vec localout, Vec globalout, InsertMode ADD_VALUES, ScatterMode SCATTER_REVERSE);
10517f296bb3SBarry Smith```
10527f296bb3SBarry Smith
10537f296bb3SBarry SmithIn this case, the scatter is used in a way similar to the usage of `DMGlobalToLocal()` and `DMLocalToGlobal()` discussed above.
10547f296bb3SBarry Smith
10557f296bb3SBarry Smith(sec_islocaltoglobalmap)=
10567f296bb3SBarry Smith
10577f296bb3SBarry Smith### Local to global mappings
10587f296bb3SBarry Smith
One often works with both a global representation of a vector
(usually obtained with `DMCreateGlobalVector()`) and a local
representation of the same vector that includes the ghost points required
for local computation (obtained with `DMCreateLocalVector()`). PETSc provides routines to help map indices from
a local numbering scheme to the PETSc global numbering scheme; recall their use with the routine `VecSetValuesLocal()` introduced above.
This is done via the following routines
10657f296bb3SBarry Smith
10667f296bb3SBarry Smith```
10677f296bb3SBarry SmithISLocalToGlobalMappingCreate(MPI_Comm comm, PetscInt bs, PetscInt N, PetscInt* globalnum, PetscCopyMode mode, ISLocalToGlobalMapping* ctx);
10687f296bb3SBarry SmithISLocalToGlobalMappingApply(ISLocalToGlobalMapping ctx, PetscInt n, PetscInt *in, PetscInt *out);
10697f296bb3SBarry SmithISLocalToGlobalMappingApplyIS(ISLocalToGlobalMapping ctx, IS isin, IS* isout);
10707f296bb3SBarry SmithISLocalToGlobalMappingDestroy(ISLocalToGlobalMapping *ctx);
10717f296bb3SBarry Smith```
10727f296bb3SBarry Smith
10737f296bb3SBarry SmithHere `N` denotes the number of local indices, `globalnum` contains
10747f296bb3SBarry Smiththe global number of each local number, and `ISLocalToGlobalMapping`
10757f296bb3SBarry Smithis the resulting PETSc object that contains the information needed to
10767f296bb3SBarry Smithapply the mapping with either `ISLocalToGlobalMappingApply()` or
10777f296bb3SBarry Smith`ISLocalToGlobalMappingApplyIS()`.
10787f296bb3SBarry Smith
10797f296bb3SBarry SmithNote that the `ISLocalToGlobalMapping` routines serve a different
10807f296bb3SBarry Smithpurpose than the `AO` routines. In the former case, they provide a
10817f296bb3SBarry Smithmapping from a local numbering scheme (including ghost points) to a
10827f296bb3SBarry Smithglobal numbering scheme, while in the latter, they provide a mapping
10837f296bb3SBarry Smithbetween two global numbering schemes. Many applications may use
10847f296bb3SBarry Smithboth `AO` and `ISLocalToGlobalMapping` routines. The `AO` routines
10857f296bb3SBarry Smithare first used to map from an application global ordering (that has no
10867f296bb3SBarry Smithrelationship to parallel processing, etc.) to the PETSc ordering scheme
10877f296bb3SBarry Smith(where each process has a contiguous set of indices in the numbering).
10887f296bb3SBarry SmithThen, to perform function or Jacobian evaluations locally on
10897f296bb3SBarry Smitheach process, one works with a local numbering scheme that includes
10907f296bb3SBarry Smithghost points. The mapping from this local numbering scheme back to the
10917f296bb3SBarry Smithglobal PETSc numbering can be handled with the
10927f296bb3SBarry Smith`ISLocalToGlobalMapping` routines.
10937f296bb3SBarry Smith
10947f296bb3SBarry SmithIf one is given a list of block indices in a global numbering, the
10957f296bb3SBarry Smithroutine
10967f296bb3SBarry Smith
10977f296bb3SBarry Smith```
10987f296bb3SBarry SmithISGlobalToLocalMappingApplyBlock(ISLocalToGlobalMapping ctx, ISGlobalToLocalMappingMode type, PetscInt nin, PetscInt idxin[], PetscInt *nout, PetscInt idxout[]);
10997f296bb3SBarry Smith```
11007f296bb3SBarry Smith
11017f296bb3SBarry Smithwill provide a new list of indices in the local numbering. Again,
11027f296bb3SBarry Smithnegative values in `idxin` are left unmapped. But in addition, if
11037f296bb3SBarry Smith`type` is set to `IS_GTOLM_MASK` , then `nout` is set to `nin`
11047f296bb3SBarry Smithand all global values in `idxin` that are not represented in the local
11057f296bb3SBarry Smithto global mapping are replaced by -1. When `type` is set to
11067f296bb3SBarry Smith`IS_GTOLM_DROP`, the values in `idxin` that are not represented
11077f296bb3SBarry Smithlocally in the mapping are not included in `idxout`, so that
11087f296bb3SBarry Smithpotentially `nout` is smaller than `nin`. One must pass in an array
11097f296bb3SBarry Smithlong enough to hold all the indices. One can call
11107f296bb3SBarry Smith`ISGlobalToLocalMappingApplyBlock()` with `idxout` equal to `NULL`
11117f296bb3SBarry Smithto determine the required length (returned in `nout`) and then
11127f296bb3SBarry Smithallocate the required space and call
11137f296bb3SBarry Smith`ISGlobalToLocalMappingApplyBlock()` a second time to set the values.
11147f296bb3SBarry Smith
11157f296bb3SBarry SmithOften it is convenient to set elements into a vector using the local
11167f296bb3SBarry Smithnode numbering rather than the global node numbering (e.g., each process
11177f296bb3SBarry Smithmay maintain its own sublist of vertices and elements and number them
11187f296bb3SBarry Smithlocally). To set values into a vector with the local numbering, one must
11197f296bb3SBarry Smithfirst call
11207f296bb3SBarry Smith
11217f296bb3SBarry Smith```
11227f296bb3SBarry SmithVecSetLocalToGlobalMapping(Vec v, ISLocalToGlobalMapping ctx);
11237f296bb3SBarry Smith```
11247f296bb3SBarry Smith
11257f296bb3SBarry Smithand then call
11267f296bb3SBarry Smith
11277f296bb3SBarry Smith```
11287f296bb3SBarry SmithVecSetValuesLocal(Vec x, PetscInt n, const PetscInt indices[], const PetscScalar values[], INSERT_VALUES);
11297f296bb3SBarry Smith```
11307f296bb3SBarry Smith
11317f296bb3SBarry SmithNow the `indices` use the local numbering rather than the global,
11327f296bb3SBarry Smithmeaning the entries lie in $[0,n)$ where $n$ is the local
size of the vector. Global vectors obtained from a `DM` already have the local to global mapping
provided by the `DM`.
11357f296bb3SBarry Smith
One can use global indices
with `MatSetValues()` or `MatSetValuesStencil()` to assemble global stiffness matrices. Alternatively, the
11387f296bb3SBarry Smithglobal node number of each local node, including the ghost nodes, can be
11397f296bb3SBarry Smithobtained by calling
11407f296bb3SBarry Smith
11417f296bb3SBarry Smith```
1142b5ef2b50SBarry SmithDMGetLocalToGlobalMapping(DM dm, ISLocalToGlobalMapping *map);
11437f296bb3SBarry Smith```
11447f296bb3SBarry Smith
11457f296bb3SBarry Smithfollowed by
11467f296bb3SBarry Smith
11477f296bb3SBarry Smith```
11487f296bb3SBarry SmithVecSetLocalToGlobalMapping(Vec v, ISLocalToGlobalMapping map);
11497f296bb3SBarry SmithMatSetLocalToGlobalMapping(Mat A, ISLocalToGlobalMapping rmapping, ISLocalToGlobalMapping cmapping);
11507f296bb3SBarry Smith```
11517f296bb3SBarry Smith
11527f296bb3SBarry SmithNow, entries may be added to the vector and matrix using the local
11537f296bb3SBarry Smithnumbering and `VecSetValuesLocal()` and `MatSetValuesLocal()`.
11547f296bb3SBarry Smith
11557f296bb3SBarry SmithThe example
11567f296bb3SBarry Smith<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/snes/tutorials/ex5.c.html">SNES Tutorial ex5</a>
11577f296bb3SBarry Smithillustrates the use of a `DMDA` in the solution of a
11587f296bb3SBarry Smithnonlinear problem. The analogous Fortran program is
11597f296bb3SBarry Smith<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/snes/tutorials/ex5f90.F90.html">SNES Tutorial ex5f90</a>;
11607f296bb3SBarry Smithsee {any}`ch_snes` for a discussion of the
11617f296bb3SBarry Smithnonlinear solvers.
11627f296bb3SBarry Smith
11637f296bb3SBarry Smith(sec_vecghost)=
11647f296bb3SBarry Smith
11657f296bb3SBarry Smith### Global Vectors with locations for ghost values
11667f296bb3SBarry Smith
11677f296bb3SBarry SmithThere are two minor drawbacks to the basic approach described above for unstructured grids:
11687f296bb3SBarry Smith
- the extra memory requirement for the local work vector, `localin`,
  which duplicates the local values already stored in `globalin`, and
11717f296bb3SBarry Smith- the extra time required to copy the local values from `localin` to
11727f296bb3SBarry Smith  `globalin`.
11737f296bb3SBarry Smith
11747f296bb3SBarry SmithAn alternative approach is to allocate global vectors with space
11757f296bb3SBarry Smithpreallocated for the ghost values.
11767f296bb3SBarry Smith
11777f296bb3SBarry Smith```
11787f296bb3SBarry SmithVecCreateGhost(MPI_Comm comm, PetscInt n, PetscInt N, PetscInt nghost, PetscInt *ghosts, Vec *vv)
11797f296bb3SBarry Smith```
11807f296bb3SBarry Smith
11817f296bb3SBarry Smithor
11827f296bb3SBarry Smith
11837f296bb3SBarry Smith```
11847f296bb3SBarry SmithVecCreateGhostWithArray(MPI_Comm comm, PetscInt n, PetscInt N, PetscInt nghost, PetscInt *ghosts, PetscScalar *array, Vec *vv)
11857f296bb3SBarry Smith```
11867f296bb3SBarry Smith
Here `n` is the number of local vector entries, `N` is the number of
global entries (or `PETSC_DETERMINE`), and `nghost` is the number of ghost
11897f296bb3SBarry Smithentries. The array `ghosts` is of size `nghost` and contains the
11907f296bb3SBarry Smithglobal vector location for each local ghost location. Using
11917f296bb3SBarry Smith`VecDuplicate()` or `VecDuplicateVecs()` on a ghosted vector will
11927f296bb3SBarry Smithgenerate additional ghosted vectors.
11937f296bb3SBarry Smith
11947f296bb3SBarry SmithIn many ways, a ghosted vector behaves like any other MPI vector
11957f296bb3SBarry Smithcreated by `VecCreateMPI()`. The difference is that the ghosted vector
11967f296bb3SBarry Smithhas an additional “local” representation that allows one to access the
11977f296bb3SBarry Smithghost locations. This is done through the call to
11987f296bb3SBarry Smith
11997f296bb3SBarry Smith```
VecGhostGetLocalForm(Vec g, Vec *l);
12017f296bb3SBarry Smith```
12027f296bb3SBarry Smith
The vector `l` is a sequential representation of the parallel vector
`g` that shares the same array space (and hence numerical values) but
allows one to access the “ghost” values stored past the end of the owned
portion of the array.
Note that one accesses the entries in `l` using the local numbering of
elements and ghosts, while the entries in `g` are accessed using the global
numbering.
12097f296bb3SBarry Smith
12107f296bb3SBarry SmithA common usage of a ghosted vector is given by
12117f296bb3SBarry Smith
12127f296bb3SBarry Smith```
12137f296bb3SBarry SmithVecGhostUpdateBegin(Vec globalin, InsertMode INSERT_VALUES, ScatterMode SCATTER_FORWARD);
12147f296bb3SBarry SmithVecGhostUpdateEnd(Vec globalin, InsertMode INSERT_VALUES, ScatterMode SCATTER_FORWARD);
12157f296bb3SBarry SmithVecGhostGetLocalForm(Vec globalin, Vec *localin);
12167f296bb3SBarry SmithVecGhostGetLocalForm(Vec globalout, Vec *localout);
12177f296bb3SBarry Smith...  Do local calculations from localin to localout ...
12187f296bb3SBarry SmithVecGhostRestoreLocalForm(Vec globalin, Vec *localin);
12197f296bb3SBarry SmithVecGhostRestoreLocalForm(Vec globalout, Vec *localout);
12207f296bb3SBarry SmithVecGhostUpdateBegin(Vec globalout, InsertMode ADD_VALUES, ScatterMode SCATTER_REVERSE);
12217f296bb3SBarry SmithVecGhostUpdateEnd(Vec globalout, InsertMode ADD_VALUES, ScatterMode SCATTER_REVERSE);
12227f296bb3SBarry Smith```
12237f296bb3SBarry Smith
12247f296bb3SBarry SmithThe routines `VecGhostUpdateBegin()` and `VecGhostUpdateEnd()` are
12257f296bb3SBarry Smithequivalent to the routines `VecScatterBegin()` and `VecScatterEnd()`
12267f296bb3SBarry Smithabove, except that since they are scattering into the ghost locations,
12277f296bb3SBarry Smiththey do not need to copy the local vector values, which are already in
12287f296bb3SBarry Smithplace. In addition, the user does not have to allocate the local work
12297f296bb3SBarry Smithvector since the ghosted vector already has allocated slots to contain
12307f296bb3SBarry Smiththe ghost values.
12317f296bb3SBarry Smith
12327f296bb3SBarry SmithThe input arguments `INSERT_VALUES` and `SCATTER_FORWARD` cause the
12337f296bb3SBarry Smithghost values to be correctly updated from the appropriate process. The
12347f296bb3SBarry Smitharguments `ADD_VALUES` and `SCATTER_REVERSE` update the “local”
12357f296bb3SBarry Smithportions of the vector from all the other processes’ ghost values. This
12367f296bb3SBarry Smithwould be appropriate, for example, when performing a finite element
12377f296bb3SBarry Smithassembly of a load vector. One can also use `MAX_VALUES` or
12387f296bb3SBarry Smith`MIN_VALUES` with `SCATTER_REVERSE`.
12397f296bb3SBarry Smith
12407f296bb3SBarry Smith`DMPLEX` does not yet support ghosted vectors sharing memory with the global representation.
12417f296bb3SBarry SmithThis is a work in progress; if you are interested in this feature, please contact the PETSc community members.
12427f296bb3SBarry Smith
12437f296bb3SBarry Smith{any}`sec_partitioning` discusses the important topic of
12447f296bb3SBarry Smithpartitioning an unstructured grid.
12457f296bb3SBarry Smith
12467f296bb3SBarry Smith(sec_ao)=
12477f296bb3SBarry Smith
12487f296bb3SBarry Smith## Application Orderings
12497f296bb3SBarry Smith
12507f296bb3SBarry SmithWhen writing parallel PDE codes, there is extra complexity caused by
12517f296bb3SBarry Smithhaving multiple ways of indexing (numbering) and ordering objects such
12527f296bb3SBarry Smithas vertices and degrees of freedom. For example, a grid generator or
12537f296bb3SBarry Smithpartitioner may renumber the nodes, requiring adjustment of the other
12547f296bb3SBarry Smithdata structures that refer to these objects; see Figure
12557f296bb3SBarry Smith{any}`fig_daao`.
12567f296bb3SBarry SmithPETSc provides various tools to help manage the mapping amongst
12577f296bb3SBarry Smiththe various numbering systems. The most basic is the `AO`
12587f296bb3SBarry Smith(application ordering), which enables mapping between different global
12597f296bb3SBarry Smith(cross-process) numbering schemes.
12607f296bb3SBarry Smith
12617f296bb3SBarry SmithIn many applications, it is desirable to work with one or more
12627f296bb3SBarry Smith“orderings” (or numberings) of degrees of freedom, cells, nodes, etc.
12637f296bb3SBarry SmithDoing so in a parallel environment is complicated by the fact that each
12647f296bb3SBarry Smithprocess cannot keep complete lists of the mappings between different
12657f296bb3SBarry Smithorderings. In addition, the orderings used in the PETSc linear algebra
12667f296bb3SBarry Smithroutines (often contiguous ranges) may not correspond to the “natural”
12677f296bb3SBarry Smithorderings for the application.
12687f296bb3SBarry Smith
12697f296bb3SBarry SmithPETSc provides certain utility routines that allow one to deal cleanly
12707f296bb3SBarry Smithand efficiently with the various orderings. To define a new application
12717f296bb3SBarry Smithordering (called an `AO` in PETSc), one can call the routine
12727f296bb3SBarry Smith
12737f296bb3SBarry Smith```
12747f296bb3SBarry SmithAOCreateBasic(MPI_Comm comm, PetscInt n, const PetscInt apordering[], const PetscInt petscordering[], AO *ao);
12757f296bb3SBarry Smith```
12767f296bb3SBarry Smith
12777f296bb3SBarry SmithThe arrays `apordering` and `petscordering`, respectively, contain a
12787f296bb3SBarry Smithlist of integers in the application ordering and their corresponding
12797f296bb3SBarry Smithmapped values in the PETSc ordering. Each process can provide whatever
12807f296bb3SBarry Smithsubset of the ordering it chooses, but multiple processes should never
12817f296bb3SBarry Smithcontribute duplicate values. The argument `n` indicates the number of
12827f296bb3SBarry Smithlocal contributed values.
12837f296bb3SBarry Smith
12847f296bb3SBarry SmithFor example, consider a vector of length 5, where node 0 in the
12857f296bb3SBarry Smithapplication ordering corresponds to node 3 in the PETSc ordering. In
12867f296bb3SBarry Smithaddition, nodes 1, 2, 3, and 4 of the application ordering correspond,
12877f296bb3SBarry Smithrespectively, to nodes 2, 1, 4, and 0 of the PETSc ordering. We can
12887f296bb3SBarry Smithwrite this correspondence as
12897f296bb3SBarry Smith
12907f296bb3SBarry Smith$$
12917f296bb3SBarry Smith\{ 0, 1, 2, 3, 4 \}  \to  \{ 3, 2, 1, 4, 0 \}.
12927f296bb3SBarry Smith$$
12937f296bb3SBarry Smith
12947f296bb3SBarry SmithThe user can create the PETSc `AO` mappings in several ways. For
12957f296bb3SBarry Smithexample, if using two processes, one could call
12967f296bb3SBarry Smith
12977f296bb3SBarry Smith```
AOCreateBasic(PETSC_COMM_WORLD, 2, {0, 3}, {3, 4}, &ao);
12997f296bb3SBarry Smith```
13007f296bb3SBarry Smith
13017f296bb3SBarry Smithon the first process and
13027f296bb3SBarry Smith
13037f296bb3SBarry Smith```
13047f296bb3SBarry SmithAOCreateBasic(PETSC_COMM_WORLD, 3, {1, 2, 4}, {2, 1, 0}, &ao);
13057f296bb3SBarry Smith```
13067f296bb3SBarry Smith
13077f296bb3SBarry Smithon the other process.
13087f296bb3SBarry Smith
13097f296bb3SBarry SmithOnce the application ordering has been created, it can be used with
13107f296bb3SBarry Smitheither of the commands
13117f296bb3SBarry Smith
13127f296bb3SBarry Smith```
13137f296bb3SBarry SmithAOPetscToApplication(AO ao, PetscInt n, PetscInt *indices);
13147f296bb3SBarry SmithAOApplicationToPetsc(AO ao, PetscInt n, PetscInt *indices);
13157f296bb3SBarry Smith```
13167f296bb3SBarry Smith
13177f296bb3SBarry SmithUpon input, the `n`-dimensional array `indices` specifies the
13187f296bb3SBarry Smithindices to be mapped, while upon output, `indices` contains the mapped
13197f296bb3SBarry Smithvalues. Since we, in general, employ a parallel database for the `AO`
13207f296bb3SBarry Smithmappings, it is crucial that all processes that called
13217f296bb3SBarry Smith`AOCreateBasic()` also call these routines; these routines *cannot* be
13227f296bb3SBarry Smithcalled by just a subset of processes in the MPI communicator that was
13237f296bb3SBarry Smithused in the call to `AOCreateBasic()`.
13247f296bb3SBarry Smith
13257f296bb3SBarry SmithAn alternative routine to create the application ordering, `AO`, is
13267f296bb3SBarry Smith
13277f296bb3SBarry Smith```
13287f296bb3SBarry SmithAOCreateBasicIS(IS apordering, IS petscordering, AO *ao);
13297f296bb3SBarry Smith```
13307f296bb3SBarry Smith
13317f296bb3SBarry Smithwhere index sets are used
13327f296bb3SBarry Smithinstead of integer arrays.
13337f296bb3SBarry Smith
13347f296bb3SBarry SmithThe mapping routines
13357f296bb3SBarry Smith
13367f296bb3SBarry Smith```
13377f296bb3SBarry SmithAOPetscToApplicationIS(AO ao, IS indices);
13387f296bb3SBarry SmithAOApplicationToPetscIS(AO ao, IS indices);
13397f296bb3SBarry Smith```
13407f296bb3SBarry Smith
13417f296bb3SBarry Smithwill map index sets (`IS` objects) between orderings. Both the
13427f296bb3SBarry Smith`AOXxxToYyy()` and `AOXxxToYyyIS()` routines can be used regardless
of whether the `AO` was created with `AOCreateBasic()` or
13447f296bb3SBarry Smith`AOCreateBasicIS()`.
13457f296bb3SBarry Smith
13467f296bb3SBarry SmithThe `AO` context should be destroyed with `AODestroy(AO *ao)` and
13477f296bb3SBarry Smithviewed with `AOView(AO ao,PetscViewer viewer)`.
13487f296bb3SBarry Smith
13497f296bb3SBarry SmithAlthough we refer to the two orderings as “PETSc” and “application”
13507f296bb3SBarry Smithorderings, the user is free to use them both for application orderings
13517f296bb3SBarry Smithand to maintain relationships among a variety of orderings by employing
13527f296bb3SBarry Smithseveral `AO` contexts.
13537f296bb3SBarry Smith
The `AOXxxToYyy()` routines allow negative entries in the input integer
13557f296bb3SBarry Smitharray. These entries are not mapped; they remain unchanged. This
13567f296bb3SBarry Smithfunctionality enables, for example, mapping neighbor lists that use
13577f296bb3SBarry Smithnegative numbers to indicate nonexistent neighbors due to boundary
13587f296bb3SBarry Smithconditions, etc.
13597f296bb3SBarry Smith
13607f296bb3SBarry SmithSince the global ordering that PETSc uses to manage its parallel vectors
13617f296bb3SBarry Smith(and matrices) does not usually correspond to the “natural” ordering of
13627f296bb3SBarry Smitha two- or three-dimensional array, the `DMDA` structure provides an
13637f296bb3SBarry Smithapplication ordering `AO` (see {any}`sec_ao`) that maps
13647f296bb3SBarry Smithbetween the natural ordering on a rectangular grid and the ordering
13657f296bb3SBarry SmithPETSc uses to parallelize. This ordering context can be obtained with
13667f296bb3SBarry Smiththe command
13677f296bb3SBarry Smith
13687f296bb3SBarry Smith```
1369b5ef2b50SBarry SmithDMDAGetAO(DM dm, AO *ao);
13707f296bb3SBarry Smith```
13717f296bb3SBarry Smith
13727f296bb3SBarry SmithIn Figure {any}`fig_daao`, we indicate the orderings for a
13737f296bb3SBarry Smithtwo-dimensional `DMDA`, divided among four processes.
13747f296bb3SBarry Smith
13757f296bb3SBarry Smith:::{figure} /images/manual/danumbering.*
13767f296bb3SBarry Smith:alt: Natural Ordering and PETSc Ordering for a 2D Distributed Array (Four Processes)
13777f296bb3SBarry Smith:name: fig_daao
13787f296bb3SBarry Smith
13797f296bb3SBarry SmithNatural Ordering and PETSc Ordering for a 2D Distributed Array (Four
13807f296bb3SBarry SmithProcesses)
13817f296bb3SBarry Smith:::
13822f04c522SBarry Smith
13832f04c522SBarry Smith
13842f04c522SBarry Smith(sec_petscsf)=
13852f04c522SBarry Smith
13862f04c522SBarry Smith# PetscSF - an alternative to low-level MPI calls for data communication
13872f04c522SBarry Smith
As discussed above, the `VecScatter` object allows one to define parallel communication between vectors by listing, with `IS` objects, which vector entries from one vector
are to be communicated to another vector and where in the second vector they are to be inserted. `PetscSF` provides similar but more general functionality for arrays of any MPI datatype.
13902f04c522SBarry Smith
13912f04c522SBarry Smith`PetscSF` communicates between `rootdata` and `leafdata` arrays. `rootdata` is distributed across the MPI processes and its entries are indicated by a `PetscSFNode` pair consisting of the MPI `rank` the
13922f04c522SBarry Smithentry is located on and the `index` in the array on that MPI process.
13932f04c522SBarry Smith
13942f04c522SBarry Smith```
13952f04c522SBarry Smithtypedef struct {
13962f04c522SBarry Smith  PetscInt rank;  /* MPI rank of owner */
13972f04c522SBarry Smith  PetscInt index; /* Index of node on rank */
13982f04c522SBarry Smith} PetscSFNode;
13992f04c522SBarry Smith```
14002f04c522SBarry Smith
Each entry is uniquely owned at that location, in the same way that a PETSc global vector has unique MPI process ownership of each entry.
14022f04c522SBarry Smith
14032f04c522SBarry Smith`leafdata` is similar to PETSc local vectors; each MPI process's `leafdata` array can contain "ghost values" that match values in other locations of the `leafdata` (on the same or different
14042f04c522SBarry SmithMPI processes). All these matching ghost values share a common root value in `rootdata`.
14052f04c522SBarry Smith
14062f04c522SBarry Smith
We begin to explain the use of `PetscSF` with an example. First, on each MPI process, we construct an array that gives, for each leaf entry, the location of its root entry:
14082f04c522SBarry Smith
14092f04c522SBarry Smith```
14102f04c522SBarry Smith  PetscInt    nroots, nleaves;
14112f04c522SBarry Smith  PetscSFNode *roots;
14122f04c522SBarry Smith
  if (rank == 0) {
    // the number of entries in rootdata on this MPI process
    nroots = 2;
    // provide the matching location in rootdata for each entry in leafdata
    nleaves = 3;
    PetscMalloc1(nleaves, &roots);
    roots[0].rank = 0; roots[0].index = 0;
    roots[1].rank = 1; roots[1].index = 0;
    roots[2].rank = 0; roots[2].index = 1;
  } else {
    nroots = 1;
    nleaves = 3;
    PetscMalloc1(nleaves, &roots);
    roots[0].rank = 0; roots[0].index = 0;
    roots[1].rank = 0; roots[1].index = 1;
    roots[2].rank = 1; roots[2].index = 0;
  }
14282f04c522SBarry Smith```
14292f04c522SBarry Smith
14302f04c522SBarry SmithNext, we construct the `PetscSF` that encapsulates this information needed for communication:
14312f04c522SBarry Smith
14322f04c522SBarry Smith```
14332f04c522SBarry Smith  PetscSF sf;
14342f04c522SBarry Smith
14352f04c522SBarry Smith  PetscSFCreate(PETSC_COMM_WORLD, &sf);
14362f04c522SBarry Smith  PetscSFSetFromOptions(sf);
14372f04c522SBarry Smith  PetscSFSetGraph(sf, nroots, nleaves, NULL, PETSC_OWN_POINTER, roots, PETSC_OWN_POINTER);
14382f04c522SBarry Smith  PetscSFSetUp(sf);
14392f04c522SBarry Smith```
14402f04c522SBarry Smith
14412f04c522SBarry SmithNext we fill `rootdata`:
14422f04c522SBarry Smith
14432f04c522SBarry Smith```
14442f04c522SBarry Smith  PetscInt    *rootdata, *leafdata;
14452f04c522SBarry Smith
  PetscMalloc2(nroots, &rootdata, nleaves, &leafdata);
  if (rank == 0) {
    rootdata[0] = 1;
    rootdata[1] = 2;
  } else {
    rootdata[0] = 3;
  }
14522f04c522SBarry Smith```
14532f04c522SBarry Smith
14542f04c522SBarry SmithFinally, we use the `PetscSF` to communicate `rootdata` to `leafdata`:
14552f04c522SBarry Smith
14562f04c522SBarry Smith```
14572f04c522SBarry Smith  PetscSFBcastBegin(sf, MPIU_INT, rootdata, leafdata, MPI_REPLACE);
14582f04c522SBarry Smith  PetscSFBcastEnd(sf, MPIU_INT, rootdata, leafdata, MPI_REPLACE);
14592f04c522SBarry Smith```
14602f04c522SBarry Smith
14612f04c522SBarry SmithNow `leafdata` on MPI rank 0 contains (1, 3, 2) and on MPI rank 1 contains (1, 2, 3).
14622f04c522SBarry Smith
14632f04c522SBarry SmithIt is also possible to move `leafdata` to `rootdata` using
14642f04c522SBarry Smith
14652f04c522SBarry Smith```
14662f04c522SBarry Smith  PetscSFReduceBegin(sf, MPIU_INT, leafdata, rootdata, MPIU_SUM);
14672f04c522SBarry Smith  PetscSFReduceEnd(sf, MPIU_INT, leafdata, rootdata, MPIU_SUM);
14682f04c522SBarry Smith```
14692f04c522SBarry Smith
In this case, since the reduction operation performed (the final argument of `PetscSFReduceBegin()`) is `MPIU_SUM`, the final result in each entry of `rootdata` is the sum
of the previous value at that location plus the values of all the leaves that reference that entry. So `rootdata` on MPI rank 0 contains (3, 6), while on MPI rank
1 it contains (9).

As shown in the example above, `PetscSFBcastBegin()` and `PetscSFBcastEnd()` (as well as other `PetscSF` functions) also take an `MPI_Op` reduction argument, though for a broadcast it is almost always `MPI_REPLACE`.

## Non-contiguous storage of leafdata

In the example above we treated `leafdata` as a contiguous array with entries from 0 to one less than `nleaves`. This is indicated by the `NULL` fourth argument in the call to `PetscSFSetGraph()`. More generally, the `leafdata` array can contain entries that are not accessed by the `PetscSF` operations. For example,

```
  PetscInt *leaves;

  PetscMalloc1(nleaves, &leaves); /* PETSC_OWN_POINTER requires memory obtained with PetscMalloc1() */
  if (rank == 0) {
    leaves[0] = 1;
    leaves[1] = 2;
    leaves[2] = 4;
  } else {
    leaves[0] = 2;
    leaves[1] = 1;
    leaves[2] = 0;
  }
  PetscSFSetGraph(sf, nroots, nleaves, leaves, PETSC_OWN_POINTER, roots, PETSC_OWN_POINTER);
```

means that the three entries of `leafdata` affected by `PetscSF` communication on MPI rank 0 are the array locations (1, 2, 4); this also means that `leafdata` must have length at least 5.
On MPI rank 1, the arriving values from the three roots listed in `roots` are placed backwards in `leafdata`. Note that providing the `leaves` permutation array on MPI rank 1 is
equivalent to listing the three values in `roots` in the opposite order.

If we reran the initial communication with `PetscSFBcastBegin()` and `PetscSFBcastEnd()` using the modified `sf`, the resulting values in `leafdata` would be (x, 1, 3, x, 2) on MPI rank 0 and
(3, 2, 1) on MPI rank 1, where x indicates a previous value in `leafdata` that is unchanged.


## GPU usage

`rootdata` and `leafdata` can live in either CPU or GPU memory; the `PetscSF` routines automatically detect the memory type. However, the calls to the CUDA or HIP
routines that make this determination (`cudaPointerGetAttributes()` or `hipPointerGetAttributes()`) are not free. To avoid the cost of this check,
`PetscSF` provides the routines `PetscSFBcastWithMemTypeBegin()` and `PetscSFReduceWithMemTypeBegin()`, to which the user provides the memory type information directly.

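For example, when both arrays are known to reside in GPU memory, the broadcast from the earlier example could be written roughly as follows (a sketch, not a tested fragment; consult the manual pages for the exact argument order, and note that the operation is still completed with the usual `PetscSFBcastEnd()`):

```
  PetscSFBcastWithMemTypeBegin(sf, MPIU_INT, PETSC_MEMTYPE_DEVICE, rootdata, PETSC_MEMTYPE_DEVICE, leafdata, MPI_REPLACE);
  PetscSFBcastEnd(sf, MPIU_INT, rootdata, leafdata, MPI_REPLACE);
```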
## Gathering leafdata but not reducing it

One may wish to gather the entries of `leafdata` for each root without reducing them to a single value. This is done with

```
  PetscSFGatherBegin(sf, MPIU_INT, leafdata, multirootdata);
  PetscSFGatherEnd(sf, MPIU_INT, leafdata, multirootdata);
```

Here `multirootdata` is (generally) an array larger than `rootdata`, with enough locations to store the value of each leaf of each local root. The values are stored contiguously
for each root; that is, `multirootdata` will contain

```
(first leaf of first root, second leaf of first root, third leaf of first root, ..., last leaf of first root, first leaf of second root, second leaf of second root, ...)
```

The number of leaves of each local root (sometimes called the degree of the root) can be obtained with calls to `PetscSFComputeDegreeBegin()` and `PetscSFComputeDegreeEnd()`.

The data in `multirootdata` can be communicated to `leafdata` using

```
  PetscSFScatterBegin(sf, MPIU_INT, multirootdata, leafdata);
  PetscSFScatterEnd(sf, MPIU_INT, multirootdata, leafdata);
```

## Optimized communication patterns

A performance drawback of `PetscSFSetGraph()` is that it requires explicitly listing all the entries of `roots` in arrays.
`PetscSFSetGraphWithPattern()` provides an alternative way to indicate the communication graph for specific communication patterns.