# MAGMA Backend Autotuning (Non-tensor Basis)

The `magma` backend uses specialized GPU kernels for non-tensor bases with
`P` and `Q` below a prescribed cutoff, and above this cutoff falls back to a
standard library GEMM implementation. The specialized kernels have a single
tunable blocking-factor parameter, `NB`, which varies with `P` and `Q` as well
as with the number of elements `N`. This folder contains the tuning data, in
header files named `<ARCH>_rtc.h`, where `<ARCH>` is the GPU name, as well as a
simple C++ program (`tuning.cpp`) and Python driver (`generate_tuning.py`) to
generate the optimal `NB` selections for a new target architecture.

## Generating Autotuning Data

A sample run to generate the tuning data for an A100 GPU, considering values of
`NB` from 1 to 32 and saving the results to `a100_rtc.h`, is:

```sh
python generate_tuning.py -arch a100 -max-nb 32 -build-cmd "make" -ceed "/gpu/cuda/magma"
```

The `-build-cmd` parameter specifies the command used to compile the libCEED
library. For example, this may be a build script that calls `make` internally
with the desired parameters, or simply `make` if a previous call to
`make configure` has already configured the build. Finally, the `-ceed`
parameter specifies the backend to use, typically one of `/gpu/cuda/magma` or
`/gpu/hip/magma`.

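For a HIP build, an analogous invocation might look like the following sketch; the architecture name `mi250x` and the build script `./build.sh` are placeholders here, not names taken from the repository, so substitute your own:

```sh
# Hypothetical example: tune for an AMD GPU via the HIP MAGMA backend.
# "mi250x" and "./build.sh" are placeholders for your architecture name
# and build command.
python generate_tuning.py -arch mi250x -max-nb 32 -build-cmd "./build.sh" -ceed "/gpu/hip/magma"
```

Following the `<ARCH>_rtc.h` naming convention described above, this run would write its tuning data to `mi250x_rtc.h`.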
Alternatively, the `tuning` program can be built and run on its own to benchmark
the basis application for a given backend. Run `make tuning` from this directory
and call the program as:

```sh
./tuning "/gpu/cuda/magma"
```

Note that in order for the benchmarks to be meaningful for the `magma` backends,
`ceed_magma_queue_sync` in `ceed-magma.h` should be set to
`cudaDeviceSynchronize()` or `hipDeviceSynchronize()`.
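
As a rough sketch of what that change could look like, assuming `ceed_magma_queue_sync` is a macro (check the actual definition and argument list in `ceed-magma.h` before editing):

```c
/* Sketch only: the real macro in ceed-magma.h may take different arguments;
 * match the existing definition. For a CUDA build, force a full device sync: */
#define ceed_magma_queue_sync(...) cudaDeviceSynchronize()
/* For a HIP build, use the HIP equivalent instead: */
/* #define ceed_magma_queue_sync(...) hipDeviceSynchronize() */
```

Synchronizing the whole device rather than a single queue ensures the timer in the benchmark measures completed kernel work rather than just kernel launch.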