(acknowledgements)=

# Acknowledgements

We thank all PETSc/Tao/petsc4py users for their many suggestions, bug reports, and encouragement.

Recent contributors to PETSc are listed in the [repository system](https://gitlab.com/petsc/petsc). The history can be visualized at
[github.com/petsc/petsc/graphs/contributors](https://github.com/petsc/petsc/graphs/contributors).

Earlier contributors to PETSc that are not captured in the repository system include:

- Asbjorn Hoiland Aarrestad, the explicit Runge-Kutta implementations, `TSRK`.
- Guillaume Anciaux and Jose E. Roman, the interfaces to the partitioning packages PTScotch, Chaco, and Party.
- Allison Baker, `KSPFGMRES` and `KSPLGMRES`.
- Chad Carroll, the Win32 graphics.
- Ethan Coon, `PetscBag` and many bug fixes.
- Cameron Cooper, portions of the `VecScatter` routines.
- Patrick Farrell and Florian Wechsung, `PCPATCH` and `SNESPATCH`.
- Paulo Goldfeld, early versions of the balancing Neumann-Neumann preconditioner `PCNN`.
- Matt Hille.
- Joel Malard, `KSPBCGS`.
- Paul Mullowney, enhancements to portions of the original CUDA GPU interface.
- Dave May, `KSPGCR`.
- Peter Mell, portions of the `DMDA` routines.
- Richard Mills, the `MATAIJPERM` matrix format for the Cray X1; the universal F90 array
  interface; enhancements to `KSPIBCGS`; and the `MATAIJMKL` matrix subclass.
- Victor Minden, the original CUDA GPU interface.
- Todd Munson, `MATSOLVERLUSOL` as well as `KSPNASH`, `KSPSTCG`, and
  `KSPGLTR`.
- Robert Scheichl, the original `KSPMINRES` implementation.
- Karen Toonen, who designed and implemented most of the original PETSc web pages.
- Desire Nuentsa Wakam, `KSPDGMRES`.
- Liyang Xu, the interface to PVODE (now SUNDIALS/CVODE), `TSSUNDIALS`.

The Toolkit for Advanced Optimization (Tao) developers especially thank Jorge Moré
for his leadership, vision, and effort on previous versions of Tao.
Tao has
also benefited from the work of various researchers who have provided solvers, test problems,
and interfaces. In particular, we acknowledge Adam Denchfield, Elizabeth Dolan, Evan Gawlik,
Michael Gertz, Xiang Huang, Lisa Grignon, Manojkumar Krishnan, Gabriel Lopez-Calva,
Jarek Nieplocha, Boyana Norris, Hansol Suh, Stefan Wild, Limin Zhang, and
Yurii Zinchenko.

PETSc source code contains modified routines from the following public
domain software packages:

- LINPACK - dense matrix factorization and solve; converted to C using
  `f2c` and then hand-optimized for the small matrix sizes used in the block
  matrix data structures;
- MINPACK - sequential matrix coloring routines for finite
  difference Jacobian evaluations; converted to C using `f2c`;
- SPARSPAK - matrix reordering routines; converted to C
  using `f2c`;
- libtfs - the efficient, parallel direct solver developed by Henry
  Tufo and Paul Fischer for the direct solution of a coarse grid
  problem (a linear system with very few degrees of freedom per
  processor).

PETSc interfaces to many external software packages, including:

- BLAS and LAPACK - numerical linear algebra;
- Chaco - a graph partitioning package;
  <http://www.cs.sandia.gov/CRF/chac.html>
- Elemental - Jack Poulson's parallel dense matrix solver package;
  <http://libelemental.org/>
- HDF5 - a data model, library, and file format for storing and
  managing data;
  <https://support.hdfgroup.org/HDF5/>
- hypre - the LLNL preconditioner library;
  <https://computation.llnl.gov/projects/hypre-scalable-linear-solvers-multigrid-methods>
- LUSOL - sparse LU factorization code (part of MINOS) developed by
  Michael Saunders, Systems Optimization Laboratory, Stanford
  University;
  <http://www.sbsi-sol-optimize.com/>
- MATLAB
- METIS/ParMETIS - parallel graph partitioners;
  <https://www-users.cs.umn.edu/~karypis/metis/>
- MUMPS - the MUltifrontal Massively Parallel sparse direct
  Solver, developed by Patrick Amestoy, Iain Duff, Jacko Koster, and
  Jean-Yves L'Excellent;
  <https://mumps-solver.org/>
- Party - a graph partitioning package;
- PaStiX - parallel sparse LU and Cholesky solvers;
  <http://pastix.gforge.inria.fr/>
- PTScotch - a graph partitioning package;
  <http://www.labri.fr/Perso/~pelegrin/scotch/>
- SPAI - parallel sparse approximate inverse preconditioning;
  <https://cccs.unibas.ch/lehre/software-packages/>
- SuiteSparse - sequential sparse solvers developed by
  Timothy A. Davis;
  <http://faculty.cse.tamu.edu/davis/suitesparse.html>
- SUNDIALS/CVODE - parallel ODE integrator (PETSc interfaces to a now-outdated version);
  <https://computation.llnl.gov/projects/sundials>
- SuperLU and SuperLU_Dist - efficient sparse LU codes
  developed by Jim Demmel, Xiaoye S. Li, and John Gilbert;
  <https://crd-legacy.lbl.gov/~xiaoye/SuperLU>
- STRUMPACK - the STRUctured Matrix PACKage;
  <https://portal.nersc.gov/project/sparse/strumpack/>
- Triangle and TetGen - mesh generation packages;
  <https://www.cs.cmu.edu/~quake/triangle.html>
  <http://wias-berlin.de/software/tetgen/>
- Trilinos/ML - Sandia's main multigrid preconditioning package;
  <https://trilinos.github.io/>
- Zoltan - graph partitioners from Sandia National Laboratories;
  <http://www.cs.sandia.gov/zoltan/>

These packages are all optional and need not be installed to use
PETSc.

PETSc software is developed and maintained using

- the [Git](https://git-scm.com/) revision control system.

PETSc documentation has been generated using

- [Sphinx](https://www.sphinx-doc.org)
- the [Sowing text processing tools developed by Bill Gropp](http://wgropp.cs.illinois.edu/projects/software/sowing/)
- c2html