# libCEED: Efficient Extensible Discretization

[![GitHub Actions][github-badge]][github-link]
[![GitLab-CI][gitlab-badge]][gitlab-link]
[![Code coverage][codecov-badge]][codecov-link]
[![BSD-2-Clause][license-badge]][license-link]
[![Documentation][doc-badge]][doc-link]
[![JOSS paper][joss-badge]][joss-link]
[![Binder][binder-badge]][binder-link]

## Summary and Purpose

libCEED provides fast algebra for element-based discretizations, designed for performance portability, run-time flexibility, and clean embedding in higher-level libraries and applications.
It offers a C99 interface as well as bindings for Fortran, Python, Julia, and Rust.
While our focus is on high-order finite elements, the approach is mostly algebraic and thus applicable to other discretizations in factored form, as explained in the [user manual](https://libceed.org/en/latest/) and API implementation portion of the [documentation](https://libceed.org/en/latest/api/).

One of the challenges with high-order methods is that a global sparse matrix is no longer a good representation of a high-order linear operator, both with respect to the FLOPs needed for its evaluation and the memory transfer needed for a matvec.
Thus, high-order methods require a new "format" that still represents a linear (or more generally non-linear) operator, but not through a sparse matrix.

The goal of libCEED is to propose such a format, as well as supporting implementations and data structures, that enable efficient operator evaluation on a variety of computational device types (CPUs, GPUs, etc.).
This new operator description is based on an algebraically [factored form](https://libceed.org/en/latest/libCEEDapi/#finite-element-operator-decomposition), which is easy to incorporate in a wide variety of applications, without significant refactoring of their own discretization infrastructure.
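
Schematically, and following the factored-form description in the linked documentation, the element-local action of such an operator is composed of an element restriction *G*, basis actions *B* (values and derivatives at quadrature points), and a pointwise operation *D* defined by a user QFunction:

```math
A_L = G^T B^T D B G
```

The parallel prolongation/restriction across processes (often written *P*) is left to the calling application or library.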

The repository is part of the [CEED software suite](http://ceed.exascaleproject.org/software/), a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods.
See <http://github.com/ceed> for more information and source code availability.

The CEED research is supported by the [Exascale Computing Project](https://exascaleproject.org/exascale-computing-project) (17-SC-20-SC), a collaborative effort of two U.S. Department of Energy organizations (Office of Science and the National Nuclear Security Administration) responsible for the planning and preparation of a [capable exascale ecosystem](https://exascaleproject.org/what-is-exascale), including software, applications, hardware, advanced system engineering and early testbed platforms, in support of the nation’s exascale computing imperative.

For more details on the CEED API, see the [user manual](https://libceed.org/en/latest/).
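
To give a flavor of the API, the pointwise operation at quadrature points is supplied as a user QFunction. Below is a hypothetical sketch (not one of the shipped QFunctions), assuming mass-like quadrature data has already been computed:

```c
#include <ceed.h>

// Hypothetical QFunction sketch: scale each quadrature-point value of the
// input field "u" by precomputed quadrature data "qdata" and store it in "v"
CEED_QFUNCTION(apply_mass)(void *ctx, const CeedInt Q, const CeedScalar *const *in, CeedScalar *const *out) {
  const CeedScalar *u = in[0], *qdata = in[1];
  CeedScalar       *v = out[0];

  for (CeedInt i = 0; i < Q; i++) v[i] = qdata[i] * u[i];
  return 0;
}
```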

<!-- getting-started-inclusion -->

## Building

The CEED library, `libceed`, is a C99 library with no required dependencies, and with Fortran, Python, Julia, and Rust interfaces.
It can be built using:

```console
$ make
```

or, with optimization flags:

```console
$ make OPT='-O3 -march=skylake-avx512 -ffp-contract=fast'
```

These optimization flags are used by all languages (C, C++, Fortran), and this Makefile variable can also be set for testing and examples (below).

The library attempts to automatically detect support for the AVX instruction set using gcc-style compiler options for the host.
Support may need to be manually specified via:

```console
$ make AVX=1
```

or:

```console
$ make AVX=0
```

if your compiler does not support gcc-style options, if you are cross-compiling, etc.

To enable CUDA support, add `CUDA_DIR=/opt/cuda` or an appropriate directory to your `make` invocation.
To enable HIP support, add `ROCM_DIR=/opt/rocm` or an appropriate directory.
To enable SYCL support, add `SYCL_DIR=/opt/sycl` or an appropriate directory.
Note that SYCL backends require building with oneAPI compilers as well:

```console
$ . /opt/intel/oneapi/setvars.sh
$ make SYCL_DIR=/opt/intel/oneapi/compiler/latest/linux SYCLCXX=icpx CC=icx CXX=icpx
```

The library can be configured for host applications that use OpenMP parallelism via:

```console
$ make OPENMP=1
```

which allows operators to be created and applied from different threads inside an `omp parallel` region.
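
For example, one possible pattern (a minimal sketch, not taken from the libCEED examples) is for each thread to initialize and use its own `Ceed` context and objects:

```c
#include <ceed.h>
#include <omp.h>

int main(void) {
  #pragma omp parallel
  {
    Ceed       ceed;
    CeedVector vec;

    // Each thread owns an independent context, so the objects created here
    // can be built and used concurrently with those of other threads
    CeedInit("/cpu/self/opt/blocked", &ceed);
    CeedVectorCreate(ceed, 1024, &vec);
    CeedVectorSetValue(vec, (CeedScalar)omp_get_thread_num());
    CeedVectorDestroy(&vec);
    CeedDestroy(&ceed);
  }
  return 0;
}
```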

To store these or other arguments as defaults for future invocations of `make`, use:

```console
$ make configure CUDA_DIR=/usr/local/cuda ROCM_DIR=/opt/rocm OPT='-O3 -march=znver2'
```

which stores these variables in `config.mk`.

### WebAssembly

libCEED can be built for WebAssembly (WASM) using [Emscripten](https://emscripten.org). For example, one can build the library and run a standalone WASM executable using:

```console
$ emmake make build/ex2-surface.wasm
$ wasmer build/ex2-surface.wasm -- -s 200000
```

## Additional Language Interfaces

The Fortran interface is built alongside the library automatically.

Python users can install using:

```console
$ pip install libceed
```

or in a clone of the repository via `pip install .`.

Julia users can install using:

```console
$ julia
julia> ]
pkg> add LibCEED
```

See the [LibCEED.jl documentation](http://ceed.exascaleproject.org/libCEED-julia-docs/dev/) for more information.

Rust users can include libCEED via `Cargo.toml`:

```toml
[dependencies]
libceed = "0.12.0"
```

See the [Cargo documentation](https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html#specifying-dependencies-from-git-repositories) for details.

## Testing

The test suite produces [TAP](https://testanything.org) output and is run by:

```console
$ make test
```

or, using the `prove` tool distributed with Perl (recommended):

```console
$ make prove
```

## Backends

There are multiple supported backends, which can be selected at runtime in the examples:

| CEED resource              | Backend                                           | Deterministic Capable |
| :---                       | :---                                              | :---:                 |
||
| **CPU Native**             |
| `/cpu/self/ref/serial`     | Serial reference implementation                   | Yes                   |
| `/cpu/self/ref/blocked`    | Blocked reference implementation                  | Yes                   |
| `/cpu/self/opt/serial`     | Serial optimized C implementation                 | Yes                   |
| `/cpu/self/opt/blocked`    | Blocked optimized C implementation                | Yes                   |
| `/cpu/self/avx/serial`     | Serial AVX implementation                         | Yes                   |
| `/cpu/self/avx/blocked`    | Blocked AVX implementation                        | Yes                   |
||
| **CPU Valgrind**           |
| `/cpu/self/memcheck/*`     | Memcheck backends, undefined value checks         | Yes                   |
||
| **CPU LIBXSMM**            |
| `/cpu/self/xsmm/serial`    | Serial LIBXSMM implementation                     | Yes                   |
| `/cpu/self/xsmm/blocked`   | Blocked LIBXSMM implementation                    | Yes                   |
||
| **CUDA Native**            |
| `/gpu/cuda/ref`            | Reference pure CUDA kernels                       | Yes                   |
| `/gpu/cuda/shared`         | Optimized pure CUDA kernels using shared memory   | Yes                   |
| `/gpu/cuda/gen`            | Optimized pure CUDA kernels using code generation | No                    |
||
| **HIP Native**             |
| `/gpu/hip/ref`             | Reference pure HIP kernels                        | Yes                   |
| `/gpu/hip/shared`          | Optimized pure HIP kernels using shared memory    | Yes                   |
| `/gpu/hip/gen`             | Optimized pure HIP kernels using code generation  | No                    |
||
| **SYCL Native**            |
| `/gpu/sycl/ref`            | Reference pure SYCL kernels                       | Yes                   |
| `/gpu/sycl/shared`         | Optimized pure SYCL kernels using shared memory   | Yes                   |
||
| **MAGMA**                  |
| `/gpu/cuda/magma`          | CUDA MAGMA kernels                                | No                    |
| `/gpu/cuda/magma/det`      | CUDA MAGMA kernels                                | Yes                   |
| `/gpu/hip/magma`           | HIP MAGMA kernels                                 | No                    |
| `/gpu/hip/magma/det`       | HIP MAGMA kernels                                 | Yes                   |
||
| **OCCA**                   |
| `/*/occa`                  | Selects backend based on available OCCA modes     | Yes                   |
| `/cpu/self/occa`           | OCCA backend with serial CPU kernels              | Yes                   |
| `/cpu/openmp/occa`         | OCCA backend with OpenMP kernels                  | Yes                   |
| `/cpu/dpcpp/occa`          | OCCA backend with DPC++ kernels                   | Yes                   |
| `/gpu/cuda/occa`           | OCCA backend with CUDA kernels                    | Yes                   |
| `/gpu/hip/occa`            | OCCA backend with HIP kernels                     | Yes                   |

The `/cpu/self/*/serial` backends process one element at a time and are intended for meshes with a smaller number of high-order elements.
The `/cpu/self/*/blocked` backends process blocked batches of eight interlaced elements and are intended for meshes with higher numbers of elements.

The `/cpu/self/ref/*` backends are written in pure C and provide basic functionality.

The `/cpu/self/opt/*` backends are written in pure C and use partial e-vectors to improve performance.

The `/cpu/self/avx/*` backends rely upon AVX instructions to provide vectorized CPU performance.

The `/cpu/self/memcheck/*` backends rely upon the [Valgrind](https://valgrind.org/) Memcheck tool to help verify that user QFunctions have no undefined values.
To use, run your code with Valgrind and the Memcheck backends, e.g., `valgrind ./build/ex1 -ceed /cpu/self/memcheck`.
A 'development' or 'debugging' version of Valgrind with headers is required to use this backend.
This backend can be run in serial or blocked mode and defaults to running in the serial mode if `/cpu/self/memcheck` is selected at runtime.

The `/cpu/self/xsmm/*` backends rely upon the [LIBXSMM](https://github.com/libxsmm/libxsmm) package to provide vectorized CPU performance.
If linking MKL and LIBXSMM is desired but the Makefile is not detecting `MKLROOT`, linking libCEED against MKL can be forced by setting the environment variable `MKL=1`.
The LIBXSMM `main` development branch from 7 April 2024 or newer is required.

The `/gpu/cuda/*` backends provide GPU performance strictly using CUDA.

The `/gpu/hip/*` backends provide GPU performance strictly using HIP.
They are based on the `/gpu/cuda/*` backends.
ROCm version 4.2 or newer is required.

The `/gpu/sycl/*` backends provide GPU performance strictly using SYCL.
They are based on the `/gpu/cuda/*` and `/gpu/hip/*` backends.

The `/gpu/*/magma/*` backends rely upon the [MAGMA](https://bitbucket.org/icl/magma) package.
To enable the MAGMA backends, the environment variable `MAGMA_DIR` must point to the top-level MAGMA directory, with the MAGMA library located in `$(MAGMA_DIR)/lib/`.
By default, `MAGMA_DIR` is set to `../magma`; to build the MAGMA backends with a MAGMA installation located elsewhere, create a link to `magma/` in libCEED's parent directory, or set `MAGMA_DIR` to the proper location.
MAGMA version 2.5.0 or newer is required.
Currently, each MAGMA library installation is only built for either CUDA or HIP.
The corresponding set of libCEED backends (`/gpu/cuda/magma/*` or `/gpu/hip/magma/*`) will automatically be built for the version of the MAGMA library found in `MAGMA_DIR`.

Users can specify a device for all CUDA, HIP, and MAGMA backends by adding `:device_id=#` after the resource name.
For example:

> - `/gpu/cuda/gen:device_id=1`

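For instance, a minimal sketch of selecting the second CUDA device at initialization (assuming the CUDA backends are built):

```c
#include <ceed.h>

int main(void) {
  Ceed ceed;

  // Request the code-generation CUDA backend on device 1
  CeedInit("/gpu/cuda/gen:device_id=1", &ceed);
  CeedDestroy(&ceed);
  return 0;
}
```
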
The `/*/occa` backends rely upon the [OCCA](http://github.com/libocca/occa) package to provide cross-platform performance.
To enable the OCCA backend, the environment variable `OCCA_DIR` must point to the top-level OCCA directory, with the OCCA library located in `${OCCA_DIR}/lib` (by default, `OCCA_DIR` is set to `../occa`).
OCCA version 1.6.0 or newer is required.

Users can pass specific OCCA device properties after setting the CEED resource.
For example:

> - `"/*/occa:mode='CUDA',device_id=0"`

Bit-for-bit reproducibility is important in some applications.
However, some libCEED backends use non-deterministic operations, such as `atomicAdd`, for increased performance.
The backends which are capable of generating reproducible results, with the proper compilation options, are highlighted in the list above.

<!-- getting-started-exclusion -->

## Examples

libCEED comes with several examples of its usage, ranging from standalone C codes in the `/examples/ceed` directory to examples based on external packages, such as MFEM, PETSc, and Nek5000.
Nek5000 v18.0 or greater is required.

To build the examples, set the `MFEM_DIR`, `PETSC_DIR` (and optionally `PETSC_ARCH`), and `NEK5K_DIR` variables and run:

```console
$ cd examples/
```

<!-- running-examples-inclusion -->

```console
# libCEED examples on CPU and GPU
$ cd ceed/
$ make
$ ./ex1-volume -ceed /cpu/self
$ ./ex1-volume -ceed /gpu/cuda
$ ./ex2-surface -ceed /cpu/self
$ ./ex2-surface -ceed /gpu/cuda
$ cd ..

# MFEM+libCEED examples on CPU and GPU
$ cd mfem/
$ make
$ ./bp1 -ceed /cpu/self -no-vis
$ ./bp3 -ceed /gpu/cuda -no-vis
$ cd ..

# Nek5000+libCEED examples on CPU and GPU
$ cd nek/
$ make
$ ./nek-examples.sh -e bp1 -ceed /cpu/self -b 3
$ ./nek-examples.sh -e bp3 -ceed /gpu/cuda -b 3
$ cd ..

# PETSc+libCEED examples on CPU and GPU
$ cd petsc/
$ make
$ ./bps -problem bp1 -ceed /cpu/self
$ ./bps -problem bp2 -ceed /gpu/cuda
$ ./bps -problem bp3 -ceed /cpu/self
$ ./bps -problem bp4 -ceed /gpu/cuda
$ ./bps -problem bp5 -ceed /cpu/self
$ ./bps -problem bp6 -ceed /gpu/cuda
$ cd ..

$ cd petsc/
$ make
$ ./bpsraw -problem bp1 -ceed /cpu/self
$ ./bpsraw -problem bp2 -ceed /gpu/cuda
$ ./bpsraw -problem bp3 -ceed /cpu/self
$ ./bpsraw -problem bp4 -ceed /gpu/cuda
$ ./bpsraw -problem bp5 -ceed /cpu/self
$ ./bpsraw -problem bp6 -ceed /gpu/cuda
$ cd ..

$ cd petsc/
$ make
$ ./bpssphere -problem bp1 -ceed /cpu/self
$ ./bpssphere -problem bp2 -ceed /gpu/cuda
$ ./bpssphere -problem bp3 -ceed /cpu/self
$ ./bpssphere -problem bp4 -ceed /gpu/cuda
$ ./bpssphere -problem bp5 -ceed /cpu/self
$ ./bpssphere -problem bp6 -ceed /gpu/cuda
$ cd ..

$ cd petsc/
$ make
$ ./area -problem cube -ceed /cpu/self -degree 3
$ ./area -problem cube -ceed /gpu/cuda -degree 3
$ ./area -problem sphere -ceed /cpu/self -degree 3 -dm_refine 2
$ ./area -problem sphere -ceed /gpu/cuda -degree 3 -dm_refine 2
$ cd ..

$ cd fluids/
$ make
$ ./navierstokes -ceed /cpu/self -degree 1
$ ./navierstokes -ceed /gpu/cuda -degree 1
$ cd ..

$ cd solids/
$ make
$ ./elasticity -ceed /cpu/self -mesh [.exo file] -degree 2 -E 1 -nu 0.3 -problem Linear -forcing mms
$ ./elasticity -ceed /gpu/cuda -mesh [.exo file] -degree 2 -E 1 -nu 0.3 -problem Linear -forcing mms
$ cd ..
```

For the last example shown, sample meshes to be used in place of `[.exo file]` can be found at <https://github.com/jeremylt/ceedSampleMeshes>.

The above code assumes a GPU-capable machine with the CUDA backends enabled.
Depending on the available backends, other CEED resource specifiers can be provided with the `-ceed` option.
Other command-line arguments can be found in [examples/petsc](https://github.com/CEED/libCEED/blob/main/examples/petsc/README.md).

<!-- running-examples-exclusion -->

## Benchmarks

A sequence of benchmarks for all enabled backends can be run using:

```console
$ make benchmarks
```

The results from the benchmarks are stored in the `benchmarks/` directory and can be viewed using the following commands (requires Python with Matplotlib):

```console
$ cd benchmarks
$ python postprocess-plot.py petsc-bps-bp1-*-output.txt
$ python postprocess-plot.py petsc-bps-bp3-*-output.txt
```

The `benchmarks` target runs a comprehensive set of benchmarks, which may take some time.
Subsets of the benchmarks can be run using the scripts in the `benchmarks` folder.

For more details about the benchmarks, see the `benchmarks/README.md` file.

## Install

To install libCEED, run:

```console
$ make install prefix=/path/to/install/dir
```

or (e.g., if creating packages):

```console
$ make install prefix=/usr DESTDIR=/packaging/path
```

To build and install in separate steps, run:

```console
$ make for_install=1 prefix=/path/to/install/dir
$ make install prefix=/path/to/install/dir
```

The usual variables like `CC` and `CFLAGS` are used, and optimization flags for all languages can be set using, for example, `OPT='-O3 -march=native'`.
Use `STATIC=1` to build static libraries (`libceed.a`).

To install libCEED for Python, run:

```console
$ pip install libceed
```

with the desired pip options, such as `--user`.

### pkg-config

In addition to the library and headers, libCEED provides a [pkg-config](https://en.wikipedia.org/wiki/Pkg-config) file that can be used to easily compile and link.
[For example](https://people.freedesktop.org/~dbn/pkg-config-guide.html#faq), if `$prefix` is a standard location or you set the environment variable `PKG_CONFIG_PATH`:

```console
$ cc `pkg-config --cflags --libs ceed` -o myapp myapp.c
```

will build `myapp` with libCEED.
This can be used with the source or installed directories.
Most build systems have support for pkg-config.
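
As an illustration, a minimal, hypothetical `myapp.c` that the command above could compile might look like:

```c
#include <ceed.h>
#include <stdio.h>

int main(void) {
  Ceed        ceed;
  const char *resource;

  // Initialize a CPU reference context and report the resource it resolved to
  CeedInit("/cpu/self", &ceed);
  CeedGetResource(ceed, &resource);
  printf("Using CEED resource: %s\n", resource);
  CeedDestroy(&ceed);
  return 0;
}
```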

## Contact

You can reach the libCEED team by emailing [ceed-users@llnl.gov](mailto:ceed-users@llnl.gov) or by leaving a comment in the [issue tracker](https://github.com/CEED/libCEED/issues).

## How to Cite

If you utilize libCEED, please cite:

```bibtex
@article{libceed-joss-paper,
  author       = {Jed Brown and Ahmad Abdelfattah and Valeria Barra and Natalie Beams and Jean Sylvain Camier and Veselin Dobrev and Yohann Dudouit and Leila Ghaffari and Tzanio Kolev and David Medina and Will Pazner and Thilina Ratnayaka and Jeremy Thompson and Stan Tomov},
  title        = {{libCEED}: Fast algebra for high-order element-based discretizations},
  journal      = {Journal of Open Source Software},
  year         = {2021},
  publisher    = {The Open Journal},
  volume       = {6},
  number       = {63},
  pages        = {2945},
  doi          = {10.21105/joss.02945}
}
```

The archival copy of the libCEED user manual is maintained on [Zenodo](https://doi.org/10.5281/zenodo.4302736).
To cite the user manual:

```bibtex
@misc{libceed-user-manual,
  author       = {Abdelfattah, Ahmad and
                  Barra, Valeria and
                  Beams, Natalie and
                  Brown, Jed and
                  Camier, Jean-Sylvain and
                  Dobrev, Veselin and
                  Dudouit, Yohann and
                  Ghaffari, Leila and
                  Grimberg, Sebastian and
                  Kolev, Tzanio and
                  Medina, David and
                  Pazner, Will and
                  Ratnayaka, Thilina and
                  Shakeri, Rezgar and
                  Thompson, Jeremy L and
                  Tomov, Stanimire and
                  Wright III, James},
  title        = {{libCEED} User Manual},
  month        = nov,
  year         = 2023,
  publisher    = {Zenodo},
  version      = {0.12.0},
  doi          = {10.5281/zenodo.10062388}
}
```

For libCEED's Python interface, please cite:

```bibtex
@InProceedings{libceed-paper-proc-scipy-2020,
  author    = {{V}aleria {B}arra and {J}ed {B}rown and {J}eremy {T}hompson and {Y}ohann {D}udouit},
  title     = {{H}igh-performance operator evaluations with ease of use: lib{C}{E}{E}{D}'s {P}ython interface},
  booktitle = {{P}roceedings of the 19th {P}ython in {S}cience {C}onference},
  pages     = {85 - 90},
  year      = {2020},
  editor    = {{M}eghann {A}garwal and {C}hris {C}alloway and {D}illon {N}iederhut and {D}avid {S}hupe},
  doi       = {10.25080/Majora-342d178e-00c}
}
```

The BibTeX entries for these references can be found in the `doc/bib/references.bib` file.

## Copyright

The following copyright applies to each file in the CEED software suite, unless otherwise stated in the file:

> Copyright (c) 2017-2025, Lawrence Livermore National Security, LLC and other CEED contributors.
> All rights reserved.

See files LICENSE and NOTICE for details.

[github-badge]: https://github.com/CEED/libCEED/workflows/C/Fortran/badge.svg
[github-link]: https://github.com/CEED/libCEED/actions
[gitlab-badge]: https://gitlab.com/libceed/libCEED/badges/main/pipeline.svg?key_text=GitLab-CI
[gitlab-link]: https://gitlab.com/libceed/libCEED/-/pipelines?page=1&scope=all&ref=main
[codecov-badge]: https://codecov.io/gh/CEED/libCEED/branch/main/graphs/badge.svg
[codecov-link]: https://codecov.io/gh/CEED/libCEED/
[license-badge]: https://img.shields.io/badge/License-BSD%202--Clause-orange.svg
[license-link]: https://opensource.org/licenses/BSD-2-Clause
[doc-badge]: https://readthedocs.org/projects/libceed/badge/?version=latest
[doc-link]: https://libceed.org/en/latest/?badge=latest
[joss-badge]: https://joss.theoj.org/papers/10.21105/joss.02945/status.svg
[joss-link]: https://doi.org/10.21105/joss.02945
[binder-badge]: http://mybinder.org/badge_logo.svg
[binder-link]: https://mybinder.org/v2/gh/CEED/libCEED/main?urlpath=lab/tree/examples/python/tutorial-0-ceed.ipynb
