(doc_config_faq)=

# Configuring PETSc

:::{important}
Obtain PETSc via the repository or download the latest tarball: {ref}`download documentation <doc_download>`.

See the {ref}`quick-start tutorial <tut_install>` for a step-by-step walk-through of the installation process.
:::

```{contents} Table of Contents
:backlinks: entry
:depth: 1
:local: true
```

## Common Example Usages

:::{attention}
There are many example `configure` scripts at `config/examples/*.py`. These cover a
wide variety of systems, and we use some of these scripts locally for testing. One can
modify these files and run them in lieu of writing one yourself. For example:

```console
$ ./config/examples/arch-ci-osx-dbg.py
```

If there is a system for which we do not yet have such a `configure` script, or if
the script in the examples directory is outdated, we welcome your feedback: submit
your recommendations to <mailto:petsc-maint@mcs.anl.gov>. See the bug report {ref}`documentation
<doc_creepycrawly>` for more information.
:::

- If you do not have a Fortran compiler or [MPICH](https://www.mpich.org/) installed
  locally (and want to use PETSc from C only):

  ```console
  $ ./configure --with-cc=gcc --with-cxx=0 --with-fc=0 --download-f2cblaslapack --download-mpich
  ```

- Same as above, but install in a user-specified (prefix) location:

  ```console
  $ ./configure --prefix=/home/user/soft/petsc-install --with-cc=gcc --with-cxx=0 --with-fc=0 --download-f2cblaslapack --download-mpich
  ```

- If [BLAS/LAPACK] and MPI development files (the "-devel" packages in most Linux distributions) are already
  installed in default system/compiler locations, and `mpicc`, `mpif90`, and `mpiexec` are available
  via `$PATH`, `configure` does not require any additional options:

  ```console
  $ ./configure
  ```

- If [BLAS/LAPACK] and MPI are already installed in a known user location, use:

  ```console
  $ ./configure --with-blaslapack-dir=/usr/local/blaslapack --with-mpi-dir=/usr/local/mpich
  ```

  or

  ```console
  $ ./configure --with-blaslapack-dir=/usr/local/blaslapack --with-cc=/usr/local/mpich/bin/mpicc --with-mpi-f90=/usr/local/mpich/bin/mpif90 --with-mpiexec=/usr/local/mpich/bin/mpiexec
  ```

:::{admonition} Note
:class: yellow

The configure options `CFLAGS`, `CXXFLAGS`, and `FFLAGS` overwrite most of the flags that PETSc would use by default. This is generally undesirable. To
add to the default flags instead, use `COPTFLAGS`, `CXXOPTFLAGS`, and `FOPTFLAGS` (these work for all uses of `./configure`). The same holds for
`CUDAFLAGS`, `HIPFLAGS`, and `SYCLFLAGS`.
:::

:::{admonition} Note
:class: yellow

Do not specify `--with-cc`, `--with-fc`, etc. for the above when using
`--with-mpi-dir`, so that `mpicc`/`mpif90` will be picked up from the MPI directory!
:::

- Build a complex-number version of PETSc (using the C++ compiler):

  ```console
  $ ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --with-clanguage=cxx --download-fblaslapack --download-mpich --with-scalar-type=complex
  ```

- Install two variants of PETSc, one with GNU compilers and the other with Intel compilers. Specify a
  different `$PETSC_ARCH` for each build. See the multiple PETSc install {ref}`documentation
  <doc_multi>` for further recommendations:

  ```console
  $ ./configure PETSC_ARCH=linux-gnu --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich
  $ make PETSC_ARCH=linux-gnu all test
  $ ./configure PETSC_ARCH=linux-gnu-intel --with-cc=icc --with-cxx=icpc --with-fc=ifort --download-mpich --with-blaslapack-dir=/usr/local/mkl
  $ make PETSC_ARCH=linux-gnu-intel all test
  ```

(doc_config_compilers)=

## Compilers

:::{important}
If no compilers are specified, `configure` will automatically look for available MPI or
regular compilers in the user's `$PATH`, in the following order:

1. `mpicc`/`mpicxx`/`mpif90`
2. `gcc`/`g++`/`gfortran`
3. `cc`/`CC`, etc.
:::

- Specify compilers using the options `--with-cc`/`--with-cxx`/`--with-fc` for the C,
  C++, and Fortran compilers, respectively:

  ```console
  $ ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran
  ```

:::{important}
It is best to use MPI compiler wrappers [^id9]. This can be done by specifying either
`--with-cc=mpicc` or `--with-mpi-dir` (and not `--with-cc=gcc`):

```console
$ ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90
```

or the following (but **without** `--with-cc=gcc`):

```console
$ ./configure --with-mpi-dir=/opt/mpich2-1.1
```

See {any}`doc_config_mpi` for details on how to select specific MPI compiler wrappers or the
specific compiler used by an MPI compiler wrapper.
:::

- If a Fortran compiler is not available or not needed, disable it using:

  ```console
  $ ./configure --with-fc=0
  ```

- If a C++ compiler is not available or not needed, disable it using:

  ```console
  $ ./configure --with-cxx=0
  ```

`configure` defaults to building PETSc in debug mode. One can switch to optimized
mode with the `configure` option `--with-debugging=0`. We suggest using a different
`$PETSC_ARCH` for debug and optimized builds, for example `arch-debug` and `arch-opt`; this
way you can switch between debugging your code and running for performance by simply
changing the value of `$PETSC_ARCH`. See the multiple install {ref}`documentation
<doc_multi>` for further details.

Additionally, one can specify more suitable optimization flags with the options
`COPTFLAGS`, `FOPTFLAGS`, and `CXXOPTFLAGS`. For example, when using GNU compilers with
corresponding optimization flags:

```console
$ ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --with-debugging=0 COPTFLAGS='-O3 -march=native -mtune=native' CXXOPTFLAGS='-O3 -march=native -mtune=native' FOPTFLAGS='-O3 -march=native -mtune=native' --download-mpich
```

:::{warning}
`configure` cannot detect compiler libraries for certain sets of compilers. In this
case one can specify additional system/compiler libraries using the `LIBS` option:

```console
$ ./configure --LIBS='-ldl /usr/lib/libm.a'
```
:::

(doc_config_externalpack)=

## External Packages

:::{admonition} Note
:class: yellow

[BLAS/LAPACK] is the only **required** {ref}`external package <doc_externalsoftware>`
(other than, of course, build tools such as compilers and `make`). PETSc may be built
and run without MPI support if only serial processing is needed.

For any {ref}`external packages <doc_externalsoftware>` used with PETSc we highly
recommend you have PETSc download and install the packages, rather than installing
them separately first. This ensures that:

- The packages are installed with the same compilers and compiler options as PETSc,
  so that they can work together.
- A **compatible** version of the package is installed. A generic install of this
  package might not be compatible with PETSc (perhaps due to version differences, or
  perhaps due to the requirement of additional patches for it to work with PETSc).
- Some packages have bug fixes, portability patches, and upgrades for dependent
  packages that have not yet been included in an upstream release, and hence may not
  work well with PETSc.
:::

PETSc provides interfaces to various {ref}`external packages <doc_externalsoftware>`. One
can optionally use external solvers like [HYPRE], [MUMPS], and others from within PETSc
applications.

PETSc `configure` has the ability to download and install these {ref}`external packages
<doc_externalsoftware>`. Alternatively, if these packages are already installed, then
`configure` can detect and use them.

If you are behind a firewall and cannot use a proxy for the downloads, or have a very slow
network, use the additional option `--with-packages-download-dir=/path/to/dir`. This
will trigger `configure` to print the URLs of all the packages you must download. You
may then download the packages to some directory (do not uncompress or untar the files)
and then point `configure` to these copies of the packages instead of having it download
them directly from the internet.

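A sketch of this two-pass workflow (the directory path is illustrative):

```console
$ ./configure --download-mpich --with-packages-download-dir=/home/user/tarballs
```

`configure` prints the URLs of the required package tarballs; download them (leaving them compressed) into `/home/user/tarballs` by whatever means are available, then rerun the same `configure` command so it picks up the local copies.
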
The following modes can be used to download/install {ref}`external packages
<doc_externalsoftware>` with `configure`.

- `--download-PACKAGENAME`: Download the specified package and install it, enabling PETSc to
  use this package. **This is the recommended method to couple any external packages with PETSc**:

  ```console
  $ ./configure --download-fblaslapack --download-mpich
  ```

- `--download-PACKAGENAME=/path/to/PACKAGENAME.tar.gz`: If `configure` cannot
  automatically download the package (due to network/firewall issues), one can download
  the package by alternative means (perhaps `wget`, `curl`, or `scp` via some other
  machine). Once the tarfile is downloaded, the path to this file can be specified to
  `configure` with this option. `configure` will proceed to install this package and then
  configure PETSc with it:

  ```console
  $ ./configure --download-mpich=/home/petsc/mpich2-1.0.4p1.tar.gz
  ```

- `--with-PACKAGENAME-dir=/path/to/dir`: If the external package is already installed,
  specify its location to `configure` (it will attempt to detect and include relevant
  library files from this location). Normally this corresponds to the top-level
  installation directory for the package:

  ```console
  $ ./configure --with-mpi-dir=/home/petsc/software/mpich2-1.0.4p1
  ```

- `--with-PACKAGENAME-include=/path/to/include/dir` and
  `--with-PACKAGENAME-lib=LIBRARYLIST`: Usually a package is defined completely by its
  include file location and library list. If the package is already installed, one can use
  these two options to specify the package to `configure`. For example:

  ```console
  $ ./configure --with-superlu-include=/home/petsc/software/superlu/include --with-superlu-lib=/home/petsc/software/superlu/lib/libsuperlu.a
  ```

  or

  ```console
  $ ./configure --with-parmetis-include=/sandbox/balay/parmetis/include --with-parmetis-lib="-L/sandbox/balay/parmetis/lib -lparmetis -lmetis"
  ```

  or

  ```console
  $ ./configure --with-parmetis-include=/sandbox/balay/parmetis/include --with-parmetis-lib=[/sandbox/balay/parmetis/lib/libparmetis.a,libmetis.a]
  ```

:::{note}
- Run `./configure --help` to get the list of {ref}`external packages
  <doc_externalsoftware>` and corresponding additional options (for example,
  `--with-mpiexec` for [MPICH]).
- Generally one should use only one of the above installation modes for any given
  package, and not mix them (i.e., combining `--with-mpi-dir` and
  `--with-mpi-include`, etc. should be avoided).
- Some packages might not support certain options like `--download-PACKAGENAME` or
  `--with-PACKAGENAME-dir`. Architectures like Microsoft Windows might have issues
  with these options. In these cases, the `--with-PACKAGENAME-include` and
  `--with-PACKAGENAME-lib` options should be preferred.
:::

- `--with-packages-build-dir=PATH`: By default, external packages are unpacked and
  their build process is run in `$PETSC_DIR/$PETSC_ARCH/externalpackages`. However, one
  can choose a different location where these packages are unpacked and the build process
  is run.

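For example, to unpack and build external packages on a hypothetical scratch filesystem:

```console
$ ./configure --download-mpich --with-packages-build-dir=/scratch/username/petsc-externalpackages
```
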
(doc_config_blaslapack)=

## BLAS/LAPACK

These packages provide some basic numeric kernels used by PETSc. `configure` will
automatically look for [BLAS/LAPACK] in certain standard locations; on most systems you
should not need to provide any information about [BLAS/LAPACK] in the `configure`
command.

One can use the following options to let `configure` download/install [BLAS/LAPACK]
automatically:

- When a Fortran compiler is present:

  ```console
  $ ./configure --download-fblaslapack
  ```

- Or when configuring without a Fortran compiler, i.e. `--with-fc=0`:

  ```console
  $ ./configure --download-f2cblaslapack
  ```

Alternatively, one can use other options like one of the following:

```console
$ ./configure --with-blaslapack-lib=libsunperf.a
$ ./configure --with-blas-lib=libblas.a --with-lapack-lib=liblapack.a
$ ./configure --with-blaslapack-dir=/soft/com/packages/intel/13/079/mkl
```

### Intel MKL

Intel provides [BLAS/LAPACK] via the [MKL] library. One can specify it
to PETSc `configure` with `--with-blaslapack-dir=$MKLROOT` or
`--with-blaslapack-dir=/soft/com/packages/intel/13/079/mkl`. If the above option does
not work, one can determine the correct library list for your compilers using the Intel
[MKL Link Line Advisor] and specify it with the `configure` option
`--with-blaslapack-lib`.

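For example, a sketch of such a link line (the exact library list depends on your MKL version, compilers, and threading model; treat this one, for a sequential LP64 build, as illustrative):

```console
$ ./configure --with-blaslapack-lib="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm"
```
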
### IBM ESSL

Sadly, IBM's [ESSL] does not have all the routines of [BLAS/LAPACK] that some
packages, such as [SuperLU], expect; in particular `slamch`, `dlamch`, and `xerbla`. In this
case, instead of using [ESSL] we suggest `--download-fblaslapack`. If you really want
to use [ESSL], see <https://www.pdc.kth.se/hpc-services>.

(doc_config_mpi)=

## MPI

The Message Passing Interface (MPI) provides the parallel functionality for PETSc.

MPI might already be installed. IBM, Intel, NVIDIA, and Cray provide their own, and Linux and macOS package
managers also provide the open-source implementations MPICH and Open MPI. If MPI is not already installed, use
the following options to let PETSc's `configure` download and install MPI.

- For [MPICH]:

  ```console
  $ ./configure --download-mpich
  ```

  If `--with-cuda` or `--with-hip` is also specified, MPICH will automatically be built to be GPU-aware.

- For [Open MPI]:

  ```console
  $ ./configure --download-openmpi
  ```

  If `--with-cuda` is also specified, then Open MPI will automatically be built to be CUDA-aware. However, to build ROCm-aware Open MPI, specify `--with-hip` and `--download-ucx` (or specify a prebuilt ROCm-enabled UCX with `--with-ucx-dir=<DIR>`).

- To not use MPI:

  ```console
  $ ./configure --with-mpi=0
  ```

- To use an installed version of MPI:

  ```console
  $ ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90
  ```

- The Intel MPI library provides MPI compiler wrappers with compiler-specific names.

  GNU compilers `gcc`, `g++`, and `gfortran`:

  ```console
  $ ./configure --with-cc=mpigcc --with-cxx=mpigxx --with-fc=mpif90
  ```

  "Old" Intel compilers `icc`, `icpc`, and `ifort`:

  ```console
  $ ./configure --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort
  ```

  These might not work with some Intel MPI library versions. In those cases, use

  ```console
  $ export I_MPI_CC=icc && export I_MPI_CXX=icpc && export I_MPI_F90=ifort
  $ ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90
  ```

- "New" oneAPI Intel compilers `icx`, `icpx`, and `ifx`:

  ```console
  $ ./configure --with-cc=mpiicx --with-cxx=mpiicpx --with-fc=mpiifx
  ```

  These might not work with some Intel MPI library versions. In those cases, use

  ```console
  $ export I_MPI_CC=icx && export I_MPI_CXX=icpx && export I_MPI_F90=ifx
  $ ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90
  ```

- On Cray systems, after loading the appropriate MPI module, the regular compilers `cc`, `CC`, and `ftn`
  automatically become MPI compiler wrappers:

  ```console
  $ ./configure --with-cc=cc --with-cxx=CC --with-fc=ftn
  ```

- Instead of providing the MPI compiler wrappers, one can provide the MPI installation directory, where the MPI compiler wrappers are available in the `bin` subdirectory
  (without additionally specifying `--with-cc`, etc.), using

  ```console
  $ ./configure --with-mpi-dir=/absolute/path/to/mpi/install/directory
  ```

- To control the compilers selected by `mpicc`, `mpicxx`, and `mpif90`, one may use the environment
  variables appropriate for the MPI library. For Intel MPI, MPICH, and Open MPI they are

  ```console
  $ export I_MPI_CC=c_compiler && export I_MPI_CXX=c++_compiler && export I_MPI_F90=fortran_compiler
  $ export MPICH_CC=c_compiler && export MPICH_CXX=c++_compiler && export MPICH_FC=fortran_compiler
  $ export OMPI_CC=c_compiler && export OMPI_CXX=c++_compiler && export OMPI_FC=fortran_compiler
  ```

  Then, use

  ```console
  $ ./configure --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx
  ```

  We recommend avoiding these environment variables unless absolutely necessary.
  They are easy to forget, or they may be set and then forgotten, resulting in unexpected behavior.

  Also avoid the syntax `--with-cc="mpicc -cc=icx"`; this can break some builds (for example, external packages that use CMake).

  :::{note}
  The Intel environment variables `I_MPI_CC`, `I_MPI_CXX`, and `I_MPI_F90` also change the
  behavior of the compiler-specific MPI compiler wrappers `mpigcc`, `mpigxx`, `mpif90`, `mpiicx`,
  `mpiicpx`, `mpiifx`, `mpiicc`, `mpiicpc`, and `mpiifort`. These variables may be automatically
  set by certain modules, so one must be careful to ensure the desired compilers are being used.
  :::

### Installing Open MPI With Shared MPI Libraries

[Open MPI] defaults to building shared libraries for MPI. However, the binaries generated
by the MPI compiler wrappers `mpicc`/`mpif90`, etc. require `$LD_LIBRARY_PATH` to be set to the
location of these libraries.

Due to this [Open MPI] restriction, one has to set `$LD_LIBRARY_PATH` correctly (per the [Open MPI] [installation instructions]) before running PETSc `configure`. If you do not set this environment variable, you will get messages when running `configure` such as:

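For example, for an Open MPI installed under a hypothetical `/usr/local/openmpi` prefix:

```console
$ export LD_LIBRARY_PATH=/usr/local/openmpi/lib:$LD_LIBRARY_PATH
$ ./configure --with-mpi-dir=/usr/local/openmpi
```
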
```text
UNABLE to EXECUTE BINARIES for config/configure.py
-------------------------------------------------------------------------------
Cannot run executables created with C. If this machine uses a batch system
to submit jobs you will need to configure using ./configure.py with the additional option --with-batch.
Otherwise there is problem with the compilers. Can you compile and run code with your C/C++ (and maybe Fortran) compilers?
```

or when running a code compiled with [Open MPI]:

```text
error while loading shared libraries: libmpi.so.0: cannot open shared object file: No such file or directory
```

(doc_macos_install)=

## Installing On macOS

For development on macOS we recommend installing **both** the Apple Xcode GUI development system (install from the Apple macOS App Store) and the Xcode Command Line Tools [^id10], which can be installed with

```console
$ xcode-select --install
```

The Apple compilers are `clang` and `clang++` [^id11]. Apple also provides `/usr/bin/gcc`, which is, confusingly, a link to the `clang` compiler, not the GNU compiler.

We also recommend installing the package manager [Homebrew](https://brew.sh/). To install `gfortran` one can use

```console
$ brew update
$ brew list            # Show all packages installed through brew
$ brew upgrade         # Update packages already installed through brew
$ brew install gcc
```

This installs `gfortran`, `gcc`, and `g++` with the compiler names
`gfortran-version` (also available as `gfortran`), `gcc-version`, and `g++-version`; for example `gfortran-12`, `gcc-12`, and `g++-12`.

After upgrading macOS, you generally need to update the Xcode GUI development system (using the standard Apple software update system)
and the Xcode Command Line Tools (run `xcode-select --install` again).

It's best to update `brew` after all macOS or Xcode upgrades (use `brew upgrade`). Sometimes `gfortran` will not work correctly after an upgrade. If this happens,
it is best to reinstall all `brew` packages using, for example,

```console
$ brew leaves > list.txt         # save list of formulae to re-install
$ brew list --casks >> list.txt  # save list of casks to re-install
$ emacs list.txt                 # edit list.txt to remove any unneeded formulae or casks
$ brew uninstall `brew list`     # delete all installed formulae and casks
$ brew cleanup
$ brew update
$ brew install `cat list.txt`    # install needed formulae and casks
```

(doc_config_install)=

## Installation Location: In-place or Out-of-place

By default, PETSc does an in-place installation, meaning the libraries are kept in the
same directories used to compile PETSc. This is particularly useful for those application
developers who follow the PETSc git repository main or release branches, since rebuilds
for updates are very quick and painless.

:::{note}
The libraries and include files are located in `$PETSC_DIR/$PETSC_ARCH/lib` and
`$PETSC_DIR/$PETSC_ARCH/include`.
:::

### Out-of-place Installation With `--prefix`

To install the libraries and include files in another location, use the `--prefix` option:

```console
$ ./configure --prefix=/home/userid/my-petsc-install --some-other-options
```

The libraries and include files will be located in `/home/userid/my-petsc-install/lib`
and `/home/userid/my-petsc-install/include`.

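When building applications against such a `--prefix` install with PETSc's makefiles, one typically points `PETSC_DIR` at the install location and leaves `PETSC_ARCH` empty, for example:

```console
$ export PETSC_DIR=/home/userid/my-petsc-install
$ export PETSC_ARCH=""
```
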
### Installation in Root Location, **Not Recommended** (Uncommon)

:::{warning}
One should never run `configure` or `make` on any package using root access. **Do so at
your own risk**.
:::

If one wants to install PETSc in a common system location like `/usr/local` or `/opt`
that requires root access, we suggest creating a directory for PETSc with user privileges
and then doing the PETSc install as a **regular/non-root** user:

```console
$ sudo mkdir /opt/petsc
$ sudo chown user:group /opt/petsc
$ cd /home/userid/petsc
$ ./configure --prefix=/opt/petsc/my-root-petsc-install --some-other-options
$ make
$ make install
```

### Installs For Package Managers: Using `DESTDIR` (Very uncommon)

```console
$ ./configure --prefix=/opt/petsc/my-root-petsc-install
$ make
$ make install DESTDIR=/tmp/petsc-pkg
```

Package up `/tmp/petsc-pkg`. The package should then be installed at
`/opt/petsc/my-root-petsc-install`.

### Multiple Installs Using `--prefix` (See `DESTDIR`)

Specify a different `--prefix` location for each build with different `configure` options. For example:

```console
$ ./configure --prefix=/opt/petsc/petsc-3.24.0-mpich --with-mpi-dir=/opt/mpich
$ make
$ make install [DESTDIR=/tmp/petsc-pkg]
$ ./configure --prefix=/opt/petsc/petsc-3.24.0-openmpi --with-mpi-dir=/opt/openmpi
$ make
$ make install [DESTDIR=/tmp/petsc-pkg]
```

### In-place Installation

The PETSc libraries and generated include files are placed in the subdirectory
`$PETSC_ARCH` of the current directory, which is either provided by the user with, for example:

```console
$ export PETSC_ARCH=arch-debug
$ ./configure
$ make
$ export PETSC_ARCH=arch-opt
$ ./configure --some-optimization-options
$ make
```

or

```console
$ ./configure PETSC_ARCH=arch-debug
$ make
$ ./configure --some-optimization-options PETSC_ARCH=arch-opt
$ make
```

If not provided, `configure` will generate a unique value automatically (for in-place,
non-`--prefix` configurations only):

```console
$ ./configure
$ make
$ ./configure --with-debugging=0
$ make
```

This produces the directories (on an Apple macOS machine) `$PETSC_DIR/arch-darwin-c-debug` and
`$PETSC_DIR/arch-darwin-c-opt`.

## Installing On Machines Requiring A Cross Compiler Or A Job Scheduler

On systems where you need to use a job scheduler or batch submission to run jobs, use the
`configure` option `--with-batch`. **On such systems `make check` will not
work**.

- You must first ensure you have loaded the appropriate modules for the compilers, etc. that you
  wish to use. Often the compilers are provided automatically for you, and you do not need
  to provide `--with-cc=XXX`, etc. Consult the documentation and local support for
  such systems for information on these topics.
- On such systems you generally should not use `--with-blaslapack-dir` or
  `--download-fblaslapack`, since the systems provide those automatically (sometimes
  appropriate modules must be loaded first).
- Some packages' `--download-package` options do not work on these systems, for example
  [HDF5]. Thus you must use modules to load those packages and `--with-package` to
  configure with the package.
- Since building {ref}`external packages <doc_externalsoftware>` on these systems is often
  troublesome and slow, we recommend only installing PETSc with those configuration
  packages that you need for your work, not extras.

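Putting the points above together, a `configure` invocation on such a system might look like the following sketch (the module name and compiler wrappers are illustrative and site-specific):

```console
$ module load hdf5
$ ./configure --with-batch --with-cc=cc --with-cxx=CC --with-fc=ftn --with-hdf5
```
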
(doc_config_tau)=

## Installing With The TAU Instrumentation Package

The [TAU] package and the prerequisite [PDT] package need to be installed separately (perhaps with MPI). Then use `tau_cc.sh` as the compiler for PETSc `configure`:

```console
$ export TAU_MAKEFILE=/home/balay/soft/linux64/tau-2.20.3/x86_64/lib/Makefile.tau-mpi-pdt
$ ./configure CC=/home/balay/soft/linux64/tau-2.20.3/x86_64/bin/tau_cc.sh --with-fc=0 PETSC_ARCH=arch-tau
```

(doc_config_accel)=

## Installing PETSc To Use GPUs And Accelerators

PETSc is able to take advantage of GPUs and certain accelerator libraries; however, some require additional `configure` options.

(doc_config_accel_openmp)=

### `OpenMP`

Use `--with-openmp` to allow PETSc to be used within an OpenMP application; this also turns on OpenMP for all the packages that
PETSc builds using `--download-xxx`. If your application calls PETSc from within OpenMP threads, then also use `--with-threadsafety`.

Use `--with-openmp-kernels` to have some PETSc numerical routines use OpenMP to speed up their computations. This requires `--with-openmp`.

Note that using OpenMP within MPI code must be done carefully, to avoid oversubscribing the available cores with too many OpenMP threads.

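For example, a thread-safe OpenMP build with OpenMP-accelerated kernels might be configured as:

```console
$ ./configure --with-openmp --with-threadsafety --with-openmp-kernels
```
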
(doc_config_accel_cuda)=

### [CUDA]

:::{important}
An NVIDIA GPU is **required** to use [CUDA]-accelerated code. Check that your machine
has a [CUDA]-enabled GPU by consulting <https://developer.nvidia.com/cuda-gpus>.
:::

On Linux, verify [^id12] that a CUDA-compatible [NVIDIA driver](https://www.nvidia.com/en-us/drivers) is installed.

On Microsoft Windows, use either [Cygwin] or [WSL], the latter of which is entirely untested right
now. If you have experience with [WSL] and/or have successfully built PETSc on Microsoft Windows
for use with [CUDA], we welcome your input at <mailto:petsc-maint@mcs.anl.gov>. See the
bug-reporting {ref}`documentation <doc_creepycrawly>` for more details.

In most cases you need only pass the configure option `--with-cuda`; check
`config/examples/arch-ci-linux-cuda-double.py` for example usage.

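For example, a minimal CUDA-enabled build, with `configure` downloading a GPU-aware MPICH, might look like:

```console
$ ./configure --with-cuda --download-mpich
```
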
The CUDA build of PETSc currently works on macOS, Linux, and Microsoft Windows (with [Cygwin]).

Examples that use CUDA have the suffix `.cu`; see `$PETSC_DIR/src/snes/tutorials/ex47cu.cu`.

(doc_config_accel_kokkos)=

### [Kokkos]

In most cases you need only pass the configure options `--download-kokkos` `--download-kokkos-kernels`
and one of `--with-cuda`, `--with-hip`, `--with-sycl`, `--with-openmp`, or `--with-pthread` (or nothing, to use sequential
[Kokkos]). See the {ref}`CUDA installation documentation <doc_config_accel_cuda>` and
{ref}`OpenMP installation documentation <doc_config_accel_openmp>` for further reference on the
respective requirements of some installations.

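For example, a [Kokkos] build with the CUDA backend might be configured as the following sketch (pick the backend option matching your hardware):

```console
$ ./configure --download-kokkos --download-kokkos-kernels --with-cuda
```
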
Examples that use [Kokkos] at user level have the suffix `.kokkos.cxx`; see
`src/snes/tutorials/ex3k.kokkos.cxx`. More examples use [Kokkos] through the options database;
search for them with `grep -r -l "requires:.*kokkos_kernels" src/`.

(doc_config_accel_opencl)=

### [OpenCL]/[ViennaCL]

Requires the [OpenCL] shared library, which is shipped with the vendor graphics driver, and
the [OpenCL] headers; if needed, you can download the headers from the Khronos Group
directly. Package managers on Linux provide these headers through a package named
`opencl-headers` or similar. On Apple systems the [OpenCL] drivers and headers are always
available and do not need to be downloaded.

Always make sure you have the latest GPU driver installed. There are several known issues
with older driver versions.

Run `configure` with `--download-viennacl`; check
`config/examples/arch-ci-linux-viennacl.py` for example usage.

[OpenCL]/[ViennaCL] builds of PETSc currently work on macOS, Linux, and Microsoft Windows.

(doc_emcc)=

## Installing To Run in the Browser with Emscripten

PETSc can be used to run applications in the browser using [Emscripten](https://emscripten.org); see <https://emscripten.org/docs/getting_started/downloads.html>
for instructions on installing Emscripten. Run

```console
$ ./configure --with-cc=emcc --with-cxx=0 --with-fc=0 --with-ranlib=emranlib --with-ar=emar --with-shared-libraries=0 --download-f2cblaslapack=1 --with-mpi=0 --with-batch
```

Applications may be compiled with, for example,

```console
$ make ex19.html
```

The rule for linking may be found in <a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/lib/petsc/conf/rules">lib/petsc/conf/rules</a>.

(doc_config_hpc)=

## Installing On Large Scale DOE Systems

There are some notes on our [GitLab Wiki](https://gitlab.com/petsc/petsc/-/wikis/Installing-and-Running-on-Large-Scale-Systems)
which may be helpful in installing and running PETSc on large scale
systems. Also note the configuration examples in `config/examples`.

```{rubric} Footnotes
```

[^id9]: All MPI implementations provide convenience scripts for compiling MPI codes that internally call the regular compilers; they are commonly named `mpicc`, `mpicxx`, and `mpif90`. We call these "MPI compiler wrappers".

[^id10]: The two packages provide slightly different (though largely overlapping) functionality, which can only be fully used if both packages are installed.

[^id11]: Apple provides customized `clang` and `clang++` for its system. To use the unmodified LLVM project `clang` and `clang++`,
    install them with `brew`.

[^id12]: To verify a CUDA-compatible NVIDIA driver on Linux, run the utility `nvidia-smi`; it reports the version of the NVIDIA driver currently installed and the maximum CUDA version it supports.

[blas/lapack]: https://www.netlib.org/lapack/lug/node11.html
[cuda]: https://developer.nvidia.com/cuda-toolkit
[cygwin]: https://www.cygwin.com/
[essl]: https://www.ibm.com/support/knowledgecenter/en/SSFHY8/essl_welcome.html
[hdf5]: https://www.hdfgroup.org/solutions/hdf5/
[hypre]: https://computing.llnl.gov/projects/hypre-scalable-linear-solvers-multigrid-methods
[installation instructions]: https://www.open-mpi.org/faq/?category=building
[kokkos]: https://github.com/kokkos/kokkos
[metis]: http://glaros.dtc.umn.edu/gkhome/metis/metis/overview
[mkl]: https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/onemkl.html
[mkl link line advisor]: https://software.intel.com/content/www/us/en/develop/articles/intel-mkl-link-line-advisor.html
[modules]: https://www.alcf.anl.gov/support-center/theta/compiling-and-linking-overview-theta-thetagpu
[mpich]: https://www.mpich.org/
[mumps]: https://mumps-solver.org/
[open mpi]: https://www.open-mpi.org/
[opencl]: https://www.khronos.org/opencl/
[parmetis]: http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview
[pdt]: https://www.cs.uoregon.edu/research/pdt/home.php
[superlu]: https://portal.nersc.gov/project/sparse/superlu/
[superlu_dist]: https://github.com/xiaoyeli/superlu_dist
[tau]: https://www.cs.uoregon.edu/research/tau/home.php
[viennacl]: http://viennacl.sourceforge.net/
[wsl]: https://docs.microsoft.com/en-us/windows/wsl/install-win10
