xref: /petsc/doc/developers/testing.md (revision 3f02e49b19195914bf17f317a25cb39636853415)
(test_harness)=

# PETSc Testing System

The PETSc test system consists of

- Formatted comments at the bottom of tutorial and test source files that describe the tests to be run.
- The *test generator* (`config/gmakegentest.py`), which parses the tutorial and test source files and generates the makefiles and shell scripts. It is run
  automatically by the make system and is rarely run directly.
- The *PETSc test harness*, consisting of makefiles and shell scripts, which runs the executables with several logging and reporting features.

Details on using the harness may be found in the {ref}`user's manual <sec_runningtests>`. The testing system is used by {any}`pipelines`.

## PETSc Test Description Language

PETSc tests and tutorials contain, at the bottom of their source files, a simple language to
describe the tests and subtests required to run executables associated with the
compilation of that file. The general skeleton of the file is

```
static const char help[] = "A simple MOAB example\n";

...
<source code>
...

/*TEST
   build:
     requires: moab
   testset:
     suffix: 1
     requires: !complex
   testset:
     suffix: 2
     args: -debug -fields v1,v2,v3
     test:
     test:
       args: -foo bar
TEST*/
```

For our language, a *test* is associated with the following:

- A single shell script

- A single makefile

- An output file that represents the *expected results*. It is also possible -- though unusual -- to have multiple output files for a single test

- Two or more commands, usually:

  - one or more `mpiexec` commands that run the executable
  - one or more `diff` commands that compare the output with the expected result

Our language also supports a *testset* that specifies either a new test
entirely or multiple executable/diff tests within a single test. At its
core, the executable/diff combination looks something like
this:

```sh
mpiexec -n 1 ../ex1 1> ex1.tmp 2> ex1.err
diff ex1.tmp output/ex1.out 1> diff-ex1.tmp 2> diff-ex1.err
```

In practice, the test harness also performs various logging and counting,
as explained further below. The input language supports
simple yet flexible test control.

(test_harness_data)=

### Datafiles needed for some tests

Some tests require matrices or meshes that are too large for the primary PETSc Git repository.
The repository [datafiles](https://gitlab.com/petsc/datafiles) contains all the test files needed for the test suite.
To run these tests, one must first clone the datafiles repository and then set the environment variable `DATAFILESPATH`.
For these tests, `requires: datafilespath` should be specified.
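
For example, one possible setup is the following (the clone destination here is arbitrary; `DATAFILESPATH` just needs to point at the clone):

```console
$ git clone https://gitlab.com/petsc/datafiles.git $HOME/petsc-datafiles
$ export DATAFILESPATH=$HOME/petsc-datafiles
$ make test query='requires' queryval='datafilespath'
```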

### Runtime Language Options

At the end of each test file, a marked comment block
describes the test(s) to be run. The elements of the test are
specified with a set of supported keywords.

The goals of the language are to be

- as minimal as possible, with the simplest test requiring only one keyword,
- independent of the filename, so that a file can be renamed without rewriting its tests, and
- intuitive.

In order to enable the second goal, the *basestring* of the filename is
defined as the filename without its extension; for example, if the
filename is `ex1.c`, then `basestring=ex1`.

With this background, the keywords are as follows.
- **testset** or **test**: (*Required*)

  - At the top level, either a single test or a test set must be
    specified. All other keywords are sub-entries of this keyword.

- **suffix**: (*Optional*; *Default:* `suffix=""`)

  - The test name is given by `testname = basestring` if the suffix
    is set to an empty string, and by
    `testname = basestring + "_" + suffix` otherwise.
  - This can be specified only for top-level test nodes.

- **output_file**: (*Optional*; *Default:*
  `output_file = "output/" + testname + ".out"`)

  - A reference file containing the *expected output of the test run*;
    the output of the test run is compared against it (with a diff tool).
  - This file is specified relative to the directory of the
    source file and should be in the `output` subdirectory (for example,
    `output/ex1.out`).

- **nsize**: (*Optional*; *Default:* `nsize=1`)

  - This integer is passed to mpiexec; i.e., `mpiexec -n nsize`.

- **args**: (*Optional*; *Default:* `""`)

  - These arguments are passed to the executable.

- **diff_args**: (*Optional*; *Default:* `""`)

  - These arguments are passed to the `lib/petsc/bin/petscdiff` script
    used in the diff part of the test. For example, `-j` enables testing
    of the floating-point numbers.

- **TODO**: (*Optional*; *Default:* `False`)

  - Setting this Boolean to True tells the harness to include the test
    but report only TODO, per the TAP standard. Optionally,
    provide a string indicating why it is todo.
  - A runscript is still generated and can easily be modified by hand
    to run.

- **filter**: (*Optional*; *Default:* `""`)

  - Sometimes only a subset of the output is meant to be tested
    against the expected result. If this keyword is used, it filters
    the executable output before
    comparison with `output_file`.
  - The value is the command to be run, for example,
    `grep foo` or `sort -nr`.
  - **NOTE: this method of testing error output is NOT recommended. See the section on**
    {ref}`testing errors <sec_testing_error_testing>` **instead.** If the filter begins
    with `Error:`, then the test is assumed to be testing the `stderr` output, and the
    error code and output are set up to be tested.

- **filter_output**: (*Optional*; *Default:* `""`)

  - Sometimes filtering the expected-output file as well is useful for standardizing
    tests. For example, to handle the issues related to
    parallel output, both the output from the test example and the
    expected-output file need to be sorted (since sort does not produce the
    same output on all machines). This keyword works the same way as `filter`
    but is applied to `output_file`.

- **localrunfiles**: (*Optional*; *Default:* `""`)

  - Some tests require runtime files that are maintained in the source tree.
    Files in this (space-delimited) list will be copied over to the
    testing directory so they will be found by the executable. If you
    list a directory instead of files, the entire directory is copied
    (this is currently limited to a single directory).
  - The copying is done by the test generator, not by creating
    makefile dependencies.

- **temporaries**: (*Optional*; *Default:* `""`)

  - Some tests produce temporary files that are read by the filter
    and compared to the expected results.
    Files in this (space-delimited) list will be cleared before
    the test is run to ensure that stale temporary files are not read.

- **requires**: (*Optional*; *Default:* `""`)

  - This is a space-delimited list of run requirements (not build
    requirements; see Build Language Options below).
  - In general, the language supports `and` and `not` constructs,
    using `! => not` and `, => and`.
  - MPIUNI should work for all `-n 1` examples, so this need not be in
    the requirements list.
  - Some tests require matrices or meshes contained in the
    directory given by the environment variable `DATAFILESPATH`.
    For these tests, `requires: datafilespath` is
    specified. See {any}`test harness data<test_harness_data>`.
  - Packages are indicated with a lower-case specification, for example,
    `requires: superlu_dist`.
  - Any variable defined in petscconf.h can be specified with the
    `defined(...)` syntax, for example, `defined(PETSC_USE_INFO)`.
  - Any definition of the form `PETSC_HAVE_FOO` can simply use
    `requires: foo`, similar to how third-party packages are handled.

- **timeoutfactor**: (*Optional*; *Default:* `"1"`)

  - This parameter allows you to extend the default timeout for an
    individual test such that the new timeout is
    `timeout = (default timeout) x (timeoutfactor)`.
  - Tests are limited to a set time that is found at the top of
    `config/petsc_harness.sh` and can be overridden by passing
    the `TIMEOUT` argument to `gmakefile`.

- **env**: (*Optional*; *Default:* `env=""`)

  - Allows you to set environment variables for the test. Values are copied verbatim to
    the runscript and defined and exported prior to all other variables.

  - Variables defined within `env:` blocks are expanded and processed by the shell that
    runs the runscript. No prior preprocessing (other than splitting the lines into
    separate declarations) is done. This means that any escaping of special characters
    must be done in the text of the `TEST` block.

  - Defining the `env:` keyword more than once is allowed. Subsequent declarations are
    appended to the prior list of declarations. Multiple environment variables may also
    be defined in the same `env:` block; i.e., given a test `ex1.c`, the following
    spec:

    ```yaml
    test:
      env: FOO=1 BAR=1

    # equivalently
    test:
      env: FOO=1
      env: BAR=1
    ```

    results in

    ```console
    $ export FOO=1; export BAR=1; ./ex1
    ```

  - Variables defined in an `env:` block are evaluated by the runscript in the order in
    which they are defined in the `TEST` block. Thus it is possible for later variables
    to refer to previously defined ones:

    ```yaml
    test:
      env: FOO='hello' BAR=${FOO}
    ```

    results in

    ```console
    $ export FOO='hello'; export BAR=${FOO}; ./ex1
    # expanded by the shell to
    $ export FOO='hello'; export BAR='hello'; ./ex1
    ```

    Note this also implies that

    ```yaml
    test:
      env: FOO=1 FOO=0
    ```

    results in

    ```console
    $ export FOO=1; export FOO=0; ./ex1
    ```
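
As a combined illustration of several of these keywords, a testset might look like the following sketch (the option `-print_per_rank_data` is a hypothetical argument understood by the example, not a real PETSc option):

```yaml
testset:
  suffix: sorted
  nsize: 4
  # hypothetical example-specific option
  args: -print_per_rank_data
  # sort both the actual and the expected output so rank ordering does not matter
  filter: sort -b
  filter_output: sort -b
  requires: !complex
```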

### Additional Specifications

In addition to the above keywords, other language features are
supported.

- **for loops**: Specifying `{{list of values}}` generates a loop over
  an enclosed space-delimited list of values.
  It is supported within `nsize` and `args`. For example,

  ```
  nsize: {{1 2 4}}
  args: -matload_block_size {{2 3}shared output}
  ```

  Here the output for each `-matload_block_size` value is assumed to be
  the same, so only one output file is needed.

  If the loop causes different output for each iteration, then `separate output` must be used:

  ```
  args: -matload_block_size {{2 3}separate output}
  ```

  In this case, each loop value generates a separate script
  and uses a separate output file for comparison.

  Note that `{{...}}` is equivalent to `{{...}shared output}`.
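
Conceptually, a shared-output loop expands into one invocation per value, all compared against the same output file. A minimal shell sketch of the expansion of `nsize: {{1 2 4}}` (the real generated scripts also add logging, timeouts, and the diff step; `../ex1` is a placeholder executable name):

```sh
# Sketch: expand "nsize: {{1 2 4}}" into one mpiexec command per value.
for np in 1 2 4; do
  echo "mpiexec -n $np ../ex1"
done
```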

(sec_testing_error_testing)=

### Testing Errors And Exceptional Code

It is possible (and encouraged!) to test error conditions within the test harness. Since
error messages produced by `SETERRQ()` and friends are not portable between systems,
additional arguments must be passed to tests to modify error handling, specifically:

```yaml
args: -petsc_ci_portable_error_output -error_output_stdout
```

These arguments have the following effects:

- `-petsc_ci_portable_error_output`: Strips system- or configuration-specific information
  from error messages. Specifically, this:

  - Removes all path components except the file name from the traceback
  - Removes line and column numbers from the traceback
  - Removes PETSc version information
  - Removes `configure` options used
  - Removes the system name
  - Removes the hostname
  - Removes the date

  With this option, error messages will be identical across systems, runs, and PETSc
  configurations (barring, of course, configurations in which the error is not raised).

  Furthermore, this option also changes the default behavior of the error handler to
  exit **gracefully** where possible. For single-rank runs this means returning with
  exit code `0` and calling `MPI_Finalize()` instead of `MPI_Abort()`. Multi-rank
  tests will call `MPI_Abort()` on errors raised on `PETSC_COMM_SELF`, but will call
  `MPI_Finalize()` otherwise.

- `-error_output_stdout`: Forces `SETERRQ()` and friends to dump error messages to
  `stdout` instead of `stderr`. While using `stderr` (alongside the `Error:`
  sub-directive under `filter:`) also works, it appears to be unstable under heavy
  load, especially in CI.

Using both options in tandem allows one to use the normal `output_file:` mechanism to compare
expected and actual error outputs.

When writing ASCII output that may not be portable, and you want `-petsc_ci_portable_error_output`
to cause that output to be skipped, guard the output with code such as

```
if (!PetscCIEnabledPortableErrorOutput)
```

to prevent it from being produced when the CI test harness is running.
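
Putting the pieces together, an error test might look like the following sketch (`-n -1` stands in for whatever hypothetical bad input triggers the `SETERRQ()` in the example):

```yaml
/*TEST
  test:
    suffix: bad_input
    # hypothetical invalid input that raises an error
    args: -n -1
    args: -petsc_ci_portable_error_output -error_output_stdout
TEST*/
```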

### Test Block Examples

The following is the simplest test block:

```yaml
/*TEST
  test:
TEST*/
```

If this block is in `src/a/b/examples/tutorials/ex1.c`, then it
creates the test `a_b_tutorials-ex1`, which runs on one
process with no arguments and diffs the resultant output against
`src/a/b/examples/tutorials/output/ex1.out`.

For Fortran, the equivalent is

```fortran
!/*TEST
!  test:
!TEST*/
```

A more complete example, showing just the lines between `/*TEST` and `TEST*/`:

```yaml
test:
test:
  suffix: 1
  nsize: 2
  args: -t 2 -pc_type jacobi -ksp_monitor_short -ksp_type gmres
  args: -ksp_gmres_cgs_refinement_type refine_always -s2_ksp_type bcgs
  args: -s2_pc_type jacobi -s2_ksp_monitor_short
  requires: x
```

This creates two tests. Assuming that this is
`src/a/b/examples/tutorials/ex1.c`, the tests would be
`a_b_tutorials-ex1` and `a_b_tutorials-ex1_1`.

Following is an example of how to test a permutation of arguments
against the same output file:

```yaml
testset:
  suffix: 19
  requires: datafilespath
  args: -f0 ${DATAFILESPATH}/matrices/poisson1
  args: -ksp_type cg -pc_type icc -pc_factor_levels 2
  test:
  test:
    args: -mat_type seqsbaij
```

Assuming that this is `ex10.c`, there would be two mpiexec/diff
invocations in `runex10_19.sh`.
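
Conceptually (omitting the redirection, diffing, and logging that the generated script actually performs), those two invocations correspond to:

```sh
mpiexec -n 1 ../ex10 -f0 ${DATAFILESPATH}/matrices/poisson1 -ksp_type cg -pc_type icc -pc_factor_levels 2
mpiexec -n 1 ../ex10 -f0 ${DATAFILESPATH}/matrices/poisson1 -ksp_type cg -pc_type icc -pc_factor_levels 2 -mat_type seqsbaij
```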

Here is a similar example, but the permutation of arguments creates
different output:

```yaml
testset:
  requires: datafilespath
  args: -f0 ${DATAFILESPATH}/matrices/medium
  args: -ksp_type bicg
  test:
    suffix: 4
    args: -pc_type lu
  test:
    suffix: 5
```

Assuming that this is `ex10.c`, two shell scripts will be created:
`runex10_4.sh` and `runex10_5.sh`.

An example using a for loop is:

```yaml
testset:
  suffix: 1
  args: -f ${DATAFILESPATH}/matrices/small -mat_type aij
  requires: datafilespath
testset:
  suffix: 2
  output_file: output/ex138_1.out
  args: -f ${DATAFILESPATH}/matrices/small
  args: -mat_type baij -matload_block_size {{2 3}shared output}
  requires: datafilespath
```

In this example, `runex138_2.sh` invokes the executable twice with
the two different arguments, but both runs are diffed against the same file.

Following is an example showing the hierarchical nature of the test
specification:

```yaml
testset:
  suffix: 2
  output_file: output/ex138_1.out
  args: -f ${DATAFILESPATH}/matrices/small -mat_type baij
  test:
    args: -matload_block_size 2
  test:
    args: -matload_block_size 3
```

This is functionally equivalent to the for loop shown above.

Here is a more complex example using for loops:

```yaml
testset:
  suffix: 19
  requires: datafilespath
  args: -f0 ${DATAFILESPATH}/matrices/poisson1
  args: -ksp_type cg -pc_type icc
  args: -pc_factor_levels {{0 2 4}separate output}
  test:
  test:
    args: -mat_type seqsbaij
```

If this is in `ex10.c`, then the shell scripts generated would be

- `runex10_19_pc_factor_levels-0.sh`
- `runex10_19_pc_factor_levels-2.sh`
- `runex10_19_pc_factor_levels-4.sh`

Each shell script would invoke the executable/diff combination twice.

### Build Language Options

You can specify issues related to the compilation of the source file
with the `build:` block. The language is as follows.

- **requires:** (*Optional*; *Default:* `""`)

  - Same as the runtime requirements (for example, it can include
    `requires: fftw`) but also supports requirements related to types:

    1. Precision types: `single`, `double`, `quad`, `int32`
    2. Scalar types: `complex` (and `!complex`)

  - In addition, `TODO` is available to allow you to skip the build
    of this file but still maintain it in the source tree.

- **depends:** (*Optional*; *Default:* `""`)

  - List any dependencies required to compile the file.

A typical example for compiling only for real numbers is

```
/*TEST
  build:
    requires: !complex
  test:
TEST*/
```

## Running the tests

The make rules for running tests are contained in `gmakefile.test` in the PETSc root directory. They can usually be accessed
simply by using commands such as

```console
$ make test
```

or, for a list of test options,

```console
$ make help-test
```

### Determining the failed jobs of a given run

Running the test harness shows which tests fail, but you may not have
logged the output or may have run without showing the full error. The best way of
examining the errors is with this command:

```console
$ $EDITOR $PETSC_DIR/$PETSC_ARCH/tests/test*err.log
```

This method can also be used for the PETSc continuous integration (CI) pipeline jobs. For failed jobs, you can download the
log files from the `artifacts download` tab on the right side:

:::{figure} /images/developers/test-artifacts.png
:alt: Test Artifacts at GitLab

Test artifacts can be downloaded from GitLab.
:::

To see the list of all tests that failed in the last run, you can also run this command:

```console
$ make print-test test-fail=1
```

To print it out in a column format:

```console
$ make print-test test-fail=1 | tr ' ' '\n' | sort
```

Once you know which tests failed, the question is how to debug them.

### Introduction to debugging workflows

Here, two different workflows for developing with the test harness are presented,
and then the language for adding a new test is described. Before describing the
workflows, we first discuss the output of the test harness and how it maps onto
makefile targets and shell scripts.

Consider this line from running the PETSc test system:

```
TEST arch-ci-linux-uni-pkgs/tests/counts/vec_is_sf_tests-ex1_basic_1.counts
```

The string `vec_is_sf_tests-ex1_basic_1` gives the following information:

- The file generating the tests is `$PETSC_DIR/src/vec/is/sf/tests/ex1.c`
- The makefile target for the *test* is `vec_is_sf_tests-ex1_basic_1`
- The makefile target for the *executable* is `$PETSC_ARCH/tests/vec/is/sf/tests/ex1`
- The shell script running the test is located at `$PETSC_DIR/$PETSC_ARCH/tests/vec/is/sf/tests/runex1_basic_1.sh`

Let's say that you want to debug a single test as part of development. There
are two basic methods of doing this: 1) use the generated shell script directly in the test
directory, or 2) use `gmakefile.test` from the top-level directory. We present both
workflows.

### Debugging a test using the generated shell scripts

First, look at the working directory and the options for the
scripts:

```console
$ cd $PETSC_ARCH/tests/vec/is/sf/tests
$ ./runex1_basic_1.sh -h
Usage: ./runex1_basic_1.sh [options]

OPTIONS
  -a <args> ......... Override default arguments
  -c ................ Cleanup (remove generated files)
  -C ................ Compile
  -d ................ Launch in debugger
  -e <args> ......... Add extra arguments to default
  -f ................ force attempt to run test that would otherwise be skipped
  -h ................ help: print this message
  -n <integer> ...... Override the number of processors to use
  -j ................ Pass -j to petscdiff (just use diff)
  -J <arg> .......... Pass -J to petscdiff (just use diff with arg)
  -m ................ Update results using petscdiff
  -M ................ Update alt files using petscdiff
  -o <arg> .......... Output format: 'interactive', 'err_only'
  -p ................ Print command: Print first command and exit
  -t ................ Override the default timeout (default=60 sec)
  -U ................ run cUda-memcheck
  -V ................ run Valgrind
  -v ................ Verbose: Print commands
```

We will be using the `-C`, `-m`, `-V`, and `-p` flags.

A basic workflow looks something like:

```console
$ <edit>
$ ./runex1_basic_1.sh -C
$ <edit>
$ ...
$ ./runex1_basic_1.sh -m # If you need to update the results
$ ...
$ ./runex1_basic_1.sh -V # Make sure it is valgrind clean
$ cd $PETSC_DIR
$ git commit -a
```

For tests with loops, it can sometimes be onerous to run the whole test.
In this case, you can use the `-p` flag to print just the first
command. It prints a command suitable for running from
`$PETSC_DIR`, but it is easy to modify for execution in the test
directory:

```console
$ ./runex1_basic_1.sh -p
```

### Debugging a PETSc test using gmakefile.test

First, recall how to find help for the options:

```console
$ make help-test
Test usage:
   /usr/bin/gmake --no-print-directory test <options>

Options:
  NO_RM=1           Do not remove the executables after running
  REPLACE=1         Replace the output in PETSC_DIR source tree (-m to test scripts)
  OUTPUT=1          Show only the errors on stdout
  ALT=1             Replace 'alt' output in PETSC_DIR source tree (-M to test scripts)
  DIFF_NUMBERS=1    Diff the numbers in the output (-j to test scripts and petscdiff)
  CUDAMEMCHECK=1    Execute the tests using cuda-memcheck (-U to test scripts)
                    Use PETSC_CUDAMEMCHECK_COMMAND to change the executable to run and
                    PETSC_CUDAMEMCHECK_ARGS to change the arguments (note: both
                    cuda-memcheck and compute-sanitizer are supported)
  VALGRIND=1        Execute the tests using valgrind (-V to test scripts)
  DEBUG=1           Launch tests in the debugger (-d to the scripts)
  NP=<num proc>     Set a number of processors to pass to scripts.
  FORCE=1           Force SKIP or TODO tests to run
  PRINTONLY=1       Print the command, but do not run.  For loops print first command
  TIMEOUT=<time>    Test timeout limit in seconds (default in config/petsc_harness.sh)
  TESTDIR='tests'   Subdirectory where tests are run ($PETSC_DIR/$PETSC_ARCH
                    or /
                    or /share/petsc/examples/)
  TESTBASE='tests'   Subdirectory where tests are run ($PETSC_DIR/$PETSC_ARCH)
  OPTIONS='<args>'  Override options to scripts (-a to test scripts)
  EXTRA_OPTIONS='<args>'  Add options to scripts (-e to test scripts)

Special options for macOS:
  MACOS_FIREWALL=1  Add each built test to the macOS firewall list to prevent popups. Configure --with-macos-firewall-rules to make this default

Tests can be generated by searching with multiple methods
  For general searching (using config/query_tests.py):
    /usr/bin/gmake --no-print-directory test search='sys*ex2*'
   or the shortcut using s
    /usr/bin/gmake --no-print-directory test s='sys*ex2*'
  You can also use the full path to a file directory
    /usr/bin/gmake --no-print-directory test s='src/sys/tests/'
   or a file
    /usr/bin/gmake --no-print-directory test s='src/sys/tests/ex1.c'

  To search for fields from the original test definitions:
    /usr/bin/gmake --no-print-directory test query='requires' queryval='*MPI_PROCESS_SHARED_MEMORY*'
   or the shortcut using q and qv
    /usr/bin/gmake --no-print-directory test q='requires' qv='*MPI_PROCESS_SHARED_MEMORY*'
  To filter results from other searches, use searchin
    /usr/bin/gmake --no-print-directory test s='src/sys/tests/' searchin='*options*'

  To re-run the last tests which failed:
    /usr/bin/gmake --no-print-directory test test-fail='1'

  To see which targets match a given pattern (useful for doing a specific target):
    /usr/bin/gmake --no-print-directory print-test search=sys*

  To build an executable, give full path to location:
    /usr/bin/gmake --no-print-directory ${PETSC_ARCH}/tests/sys/tests/ex1
  or make the test with NO_RM=1
```

To compile the test and run it:

```console
$ make test search=vec_is_sf_tests-ex1_basic_1
```

This can serve as your basic workflow. However,
for the normal compile-and-edit cycle, running the entire harness with a search can be
cumbersome. So first get the command:

```console
$ make vec_is_sf_tests-ex1_basic_1 PRINTONLY=1
<copy command>
<edit>
$ make $PETSC_ARCH/tests/vec/is/sf/tests/ex1
$ /scratch/kruger/contrib/petsc-mpich-cxx/bin/mpiexec -n 1 arch-mpich-cxx-py3/tests/vec/is/sf/tests/ex1
...
$ cd $PETSC_DIR
$ git commit -a
```

### Advanced searching

When forming a search, it is recommended always to use `print-test` instead of
`test`, to make sure the search returns the tests that you want.

The three basic and recommended arguments are:

- `search` (or `s`)

  - Searches based on the name of the test target (see above)

  - Uses the familiar glob syntax (like the Unix `ls` command). Example:

    ```console
    $ make print-test search='vec_is*ex1*basic*1'
    ```

    Equivalently:

    ```console
    $ make print-test s='vec_is*ex1*basic*1'
    ```

  - It also takes full paths. Examples:

    ```console
    $ make print-test s='src/vec/is/tests/ex1.c'
    ```

    ```console
    $ make print-test s='src/dm/impls/plex/tests/'
    ```

    ```console
    $ make print-test s='src/dm/impls/plex/tests/ex1.c'
    ```

- `query` and `queryval` (or `q` and `qv`)

  - `query` corresponds to a test harness keyword, `queryval` to its value. Example:

    ```console
    $ make print-test query='suffix' queryval='basic_1'
    ```

  - Invokes `config/query_tests.py` to query the tests (see
    `config/query_tests.py --help` for more information).

  - See below for details on how to use it, as it has many features

- `searchin` (or `i`)

  - Filters the results of the above searches. Example:

    ```console
    $ make print-test s='src/dm/impls/plex/tests/ex1.c' i='*refine_overlap_2d*'
    ```

Searching using GNU make's native pattern functionality is kept for people who like it, but most developers will likely prefer the above methods:

- `gmakesearch`

  - Uses GNU make's own filter capability.

  - Fast, but requires knowing GNU make's pattern syntax, which uses `%` instead of `*`

  - Also very limited (it cannot use two `%`'s, for example)

  - Example:

    ```console
    $ make test gmakesearch='vec_is%ex1_basic_1'
    ```

- `gmakesearchin`

  - Uses GNU make's own filter capability to search within previous results. Example:

    ```console
    $ make test gmakesearch='vec_is%1' gmakesearchin='basic'
    ```

### Query-based searching

Note that glob-style matching is also accepted in the value field:

```console
$ make print-test query='suffix' queryval='basic_1'
```

```console
$ make print-test query='requires' queryval='cuda'
```

```console
$ make print-test query='requires' queryval='defined(PETSC_HAVE_MPI_GPU_AWARE)'
```

```console
$ make print-test query='requires' queryval='*GPU_AWARE*'
```

Using the `name` field is equivalent to the `search` above:

- Example:

  ```console
  $ make print-test query='name' queryval='vec_is*ex1*basic*1'
  ```

- This can be combined with union/intersection queries as discussed below

Arguments are tricky to search for. Consider

```none
args: -ksp_monitor_short -pc_type ml -ksp_max_it 3
```

The search terms are

```none
ksp_monitor, pc_type ml, ksp_max_it
```

Certain items are ignored:

- Numbers (see `ksp_max_it` above); floats are ignored as well.
- Loops: `args: -pc_fieldsplit_diag_use_amat {{0 1}}` gives `pc_fieldsplit_diag_use_amat` as the search term
- Input files: `-f *`

Examples of argument searching:

```console
$ make print-test query='args' queryval='ksp_monitor'
```

```console
$ make print-test query='args' queryval='*monitor*'
```

```console
$ make print-test query='args' queryval='pc_type ml'
```

Multiple simultaneous queries can be performed with the union (`,`) and intersection
(`|`) operators in the `query` field. One may also use their alternate spellings
(`%AND%` and `%OR%`, respectively). The alternate spellings are useful in cases where
one cannot avoid (possibly multiple) shell expansions that might otherwise interpret the
`|` operator as a shell pipe. Examples:

- All examples using `cuda` and all examples using `hip`:

  ```console
  $ make print-test query='requires,requires' queryval='cuda,hip'
  # equivalently
  $ make print-test query='requires%AND%requires' queryval='cuda%AND%hip'
  ```

- Examples that require both `triangle` and `ctetgen` (intersection of tests):

  ```console
  $ make print-test query='requires|requires' queryval='ctetgen,triangle'
  # equivalently
  $ make print-test query='requires%OR%requires' queryval='ctetgen%AND%triangle'
  ```

- Tests that require either `ctetgen` or `triangle`:

  ```console
  $ make print-test query='requires,requires' queryval='ctetgen,triangle'
  # equivalently
  $ make print-test query='requires%AND%requires' queryval='ctetgen%AND%triangle'
  ```

- Find `cuda` examples in the `dm` package:

  ```console
  $ make print-test query='requires|name' queryval='cuda,dm*'
  # equivalently
  $ make print-test query='requires%OR%name' queryval='cuda%AND%dm*'
  ```
903
904Here is a way of getting a feel for how the union and intersect operators work:
905
906```console
907$ make print-test query='requires' queryval='ctetgen' | tr ' ' '\n' | wc -l
908170
909$ make print-test query='requires' queryval='triangle' | tr ' ' '\n' | wc -l
910330
911$ make print-test query='requires,requires' queryval='ctetgen,triangle' | tr ' ' '\n' | wc -l
912478
913$ make print-test query='requires|requires' queryval='ctetgen,triangle' | tr ' ' '\n' | wc -l
91422
915```

The `ctetgen` and `triangle` queries match 170 and 330 entries respectively, or 500 in
total. Since 22 tests require both packages, the union contains 478 distinct tests.
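
The counting above is ordinary inclusion–exclusion, and you can reproduce it with plain shell tools. A minimal sketch, using made-up test names in place of the real query results:

```shell
# Two hypothetical name lists standing in for the 'ctetgen' and 'triangle'
# query results (the real lists come from 'make print-test').
printf '%s\n' ex1 ex2 ex3 | sort > ctetgen.list
printf '%s\n' ex3 ex4     | sort > triangle.list

# Intersection ('|' query): names present in both lists.
common=$(comm -12 ctetgen.list triangle.list | wc -l | tr -d ' ')

# Union (',' query): distinct names present in either list.
distinct=$(sort -u ctetgen.list triangle.list | wc -l | tr -d ' ')

# |A| + |B| - |A intersect B| = |A union B|:  3 + 2 - 1 = 4
echo "common=$common distinct=$distinct"
rm -f ctetgen.list triangle.list
```

With the real lists, the same arithmetic gives 170 + 330 - 22 = 478.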

The union and intersection operators have fixed grouping. So this string argument

```none
query='requires,requires|args' queryval='cuda,hip,*log*'
# equivalently
query='requires%AND%requires%OR%args' queryval='cuda%AND%hip%AND%*log*'
```

can be read as

```none
requires:cuda && (requires:hip || args:*log*)
```

which is probably not what is intended.
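
The difference between the two groupings can be seen with plain shell conditionals (an illustration only; the harness does not evaluate queries this way):

```shell
# Suppose requires:cuda is false, requires:hip is false, args:*log* is true.
cuda=false; hip=false; log=true

# Fixed grouping: cuda && (hip || log) -- how the query string is read.
right=$(if $cuda && { $hip || $log; }; then echo match; else echo no-match; fi)

# Likely intended grouping: (cuda && hip) || log.
left=$(if { $cuda && $hip; } || $log; then echo match; else echo no-match; fi)

echo "fixed-grouping=$right intended-grouping=$left"
```

The two groupings disagree for this input, which is exactly why the fixed grouping can surprise you.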

`query`/`queryval` also support negation (`!`, alternate spelling `%NEG%`), but it is
limited. The negation only matches tests that explicitly contain the negated
requirement. So, for example, the arguments

```console
query=requires queryval='!cuda'
# equivalently
query=requires queryval='%NEG%cuda'
```

will only match tests that explicitly have:

```
requires: !cuda
```

It does not match all tests that do not require `cuda`.
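
This literal-match behavior can be illustrated with `grep` over some made-up test descriptions (the file contents below are hypothetical, not real PETSc tests):

```shell
# Three hypothetical test descriptions.
cat > reqs.txt <<'EOF'
ex1 requires: !cuda
ex2 requires: cuda
ex3
EOF

# queryval='!cuda' behaves like a literal search: only ex1 matches.
neg_matches=$(grep -c '!cuda' reqs.txt)

# That is not the same as "every test that does not require cuda",
# which would also include ex3.
not_cuda=$(grep -c -v 'requires: cuda$' reqs.txt)

echo "negation-matches=$neg_matches not-requiring-cuda=$not_cuda"
rm -f reqs.txt
```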

### Debugging for loops

One of the more difficult issues is debugging `for` loops when only a subset of the
argument combinations causes a crash. The default naming scheme is not always helpful
for figuring out which combination is responsible.

For example:

```console
$ make test s='src/ksp/ksp/tests/ex9.c' i='*1'
Using MAKEFLAGS: i=*1 s=src/ksp/ksp/tests/ex9.c
        TEST arch-osx-pkgs-opt-new/tests/counts/ksp_ksp_tests-ex9_1.counts
 ok ksp_ksp_tests-ex9_1+pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_type-additive
 not ok diff-ksp_ksp_tests-ex9_1+pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_type-additive
 ok ksp_ksp_tests-ex9_1+pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_type-multiplicative
 ...
```

In this case, the trick is to use the verbose option `V=1` (or, for the shell script workflows, `-v`) to have it show the commands:

```console
$ make test s='src/ksp/ksp/tests/ex9.c' i='*1' V=1
Using MAKEFLAGS: V=1 i=*1 s=src/ksp/ksp/tests/ex9.c
arch-osx-pkgs-opt-new/tests/ksp/ksp/tests/runex9_1.sh  -v
 ok ksp_ksp_tests-ex9_1+pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_type-additive # mpiexec  -n 1 ../ex9 -ksp_converged_reason -ksp_error_if_not_converged  -pc_fieldsplit_diag_use_amat 0 -pc_fieldsplit_diag_use_amat 0 -pc_fieldsplit_type additive > ex9_1.tmp 2> runex9_1.err
...
```
981
982This can still be hard to read and pick out what you want. So use the fact that you want `not ok`
983combined with the fact that `#` is the delimiter:
984
985```console
986$ make test s='src/ksp/ksp/tests/ex9.c' i='*1' v=1 | grep 'not ok' | cut -d# -f2
987mpiexec  -n 1 ../ex9 -ksp_converged_reason -ksp_error_if_not_converged  -pc_fieldsplit_diag_use_amat 0 -pc_fieldsplit_diag_use_amat 0 -pc_fieldsplit_type multiplicative > ex9_1.tmp 2> runex9_1.err
988```
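
You can try the same `grep`/`cut` filter on canned TAP output (the lines below are fabricated for illustration):

```shell
# Fabricated TAP lines standing in for verbose 'make test' output.
cat > tap.log <<'EOF'
ok 1 ex9_1+type-additive # mpiexec -n 1 ../ex9 -pc_fieldsplit_type additive
not ok 2 diff-ex9_1+type-multiplicative # mpiexec -n 1 ../ex9 -pc_fieldsplit_type multiplicative
EOF

# Keep only failing tests and print the command after the '#' delimiter.
failing=$(grep 'not ok' tap.log | cut -d# -f2)
echo "$failing"
rm -f tap.log
```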

## PETSc Test Harness

The goals of the PETSc test harness are threefold.

1. Provide standard output used by other testing tools
2. Be as lightweight as possible and easily fit within the PETSc build chain
3. Provide information on all tests, even those that are not built or run because they do not meet the configuration requirements

Before digging into the test harness itself, you should first understand the
requirements for reporting and logging.

### Testing the Parsing

After inserting the test description language into a file, you can check the parsing by
running the parser, `config/testparse.py`, on that file. A dictionary will be
pretty-printed. From this printout, any problems in the parsing are usually obvious.
This Python module is used by `config/gmakegentest.py` in generating the test harness.

## Test Output Standards: TAP

The PETSc test system is designed to be compliant with the [Test Anything Protocol (TAP)](https://testanything.org/tap-specification.html).

This is a simple standard designed to allow testing tools to work
together easily. There are libraries that make it easy to consume this
output, including sharness, which is used by the Git project. However, the
simplicity of the PETSc tests and the TAP specification means that we use
our own simple harness, given by a single shell script that each test script
sources: `$PETSC_DIR/config/petsc_harness.sh`.

As an example, consider this test input:

```yaml
test:
  suffix: 2
  output_file: output/ex138.out
  args: -f ${DATAFILESPATH}/matrices/small -mat_type {{aij baij sbaij}} -matload_block_size {{2 3}}
  requires: datafilespath
```

A sample output from this would be:

```
ok 1 In mat...tests: "./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type aij -matload_block_size 2"
ok 2 In mat...tests: "Diff of ./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type aij -matload_block_size 2"
ok 3 In mat...tests: "./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type aij -matload_block_size 3"
ok 4 In mat...tests: "Diff of ./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type aij -matload_block_size 3"
ok 5 In mat...tests: "./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type baij -matload_block_size 2"
ok 6 In mat...tests: "Diff of ./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type baij -matload_block_size 2"
...

ok 11 In mat...tests: "./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type sbaij -matload_block_size 3"
ok 12 In mat...tests: "Diff of ./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type sbaij -matload_block_size 3"
```
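
The `{{...}}` loop arguments expand to the Cartesian product of their values. A sketch of that expansion with nested shell loops (this mimics, but is not, the code the generator emits):

```shell
# Expand {{aij baij sbaij}} x {{2 3}} into the six argument combinations.
count=0
for mt in aij baij sbaij; do
  for bs in 2 3; do
    count=$((count + 1))
    echo "run $count: -mat_type $mt -matload_block_size $bs"
  done
done
echo "total=$count"
```

Each of the six combinations then produces an execution entry plus a diff entry, giving the twelve TAP lines above.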

## Test Harness Implementation

Most of the requirements for being TAP-compliant lie in the shell
scripts, so we focus on describing those.

The following is a sample shell script:

```sh
#!/bin/sh
. petsc_harness.sh

petsc_testrun ./ex1 ex1.tmp ex1.err
petsc_testrun 'diff ex1.tmp output/ex1.out' diff-ex1.tmp diff-ex1.err

petsc_testend
```

`petsc_harness.sh` is a small shell script that provides the logging and reporting
functions `petsc_testrun` and `petsc_testend`.
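
Their behavior can be pictured with a minimal sketch (the function names follow the document, but the bodies and the simplified one-argument signature are assumptions; the real `petsc_harness.sh` does much more, including error files and logging):

```shell
# TAP-style helpers: petsc_testrun runs a command and reports ok/not ok;
# petsc_testend prints a summary of the failures.
tnum=0
failed=""

petsc_testrun() {    # $1 = command to run (simplified signature)
  tnum=$((tnum + 1))
  if eval "$1" > /dev/null 2>&1; then
    echo "ok $tnum $1"
  else
    echo "not ok $tnum $1"
    failed="$failed $tnum"
  fi
}

petsc_testend() {
  if [ -n "$failed" ]; then
    echo "# FAILED $failed"
  fi
  echo "# ran $tnum tests"
}

petsc_testrun 'true'
petsc_testrun 'false'
petsc_testend
```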

A small sample of the output from the test harness is as follows.

```none
ok 1 ./ex1
ok 2 diff ex1.tmp output/ex1.out
not ok 4 ./ex2
#   ex2: Error: cannot read file
not ok 5 diff ex2.tmp output/ex2.out
ok 7 ./ex3 -f /matrices/small -mat_type aij -matload_block_size 2
ok 8 diff ex3.tmp output/ex3.out
ok 9 ./ex3 -f /matrices/small -mat_type aij -matload_block_size 3
ok 10 diff ex3.tmp output/ex3.out
ok 11 ./ex3 -f /matrices/small -mat_type baij -matload_block_size 2
ok 12 diff ex3.tmp output/ex3.out
ok 13 ./ex3 -f /matrices/small -mat_type baij -matload_block_size 3
ok 14 diff ex3.tmp output/ex3.out
ok 15 ./ex3 -f /matrices/small -mat_type sbaij -matload_block_size 2
ok 16 diff ex3.tmp output/ex3.out
ok 17 ./ex3 -f /matrices/small -mat_type sbaij -matload_block_size 3
ok 18 diff ex3.tmp output/ex3.out
# FAILED   4 5
# failed 2/16 tests; 87.500% ok
```

For developers, the lines that get written to these files can be changed by
modifying `$PETSC_DIR/config/example_template.py`.

To modify the test harness itself, edit `$PETSC_DIR/config/petsc_harness.sh`.

### Additional Tips

To rerun just the reporting, use

```console
$ config/report_tests.py
```

To see the full options, use

```console
$ config/report_tests.py -h
```

To see the full timing information for the five most expensive tests, use

```console
$ config/report_tests.py -t 5
```
1116