xref: /petsc/doc/developers/testing.md (revision 4c2e35b6396e22c1c2cca8135aae69ecaa138bb7)
(test_harness)=

# PETSc Testing System

The PETSc test system consists of

- Formatted comments at the bottom of the tutorial and test source files that describe the tests to be run.
- The *test generator* (`config/gmakegentest.py`), which parses the tutorial and test source files and generates the makefiles and shell scripts. It is run
  automatically by the make system and rarely needs to be run directly.
- The *PETSc test harness*, which consists of makefiles and shell scripts that run the executables with several logging and reporting features.

Details on using the harness may be found in the {ref}`user's manual <sec_runningtests>`. The testing system is used by {any}`pipelines`.

## PETSc Test Description Language

PETSc tests and tutorials contain, at the bottom of their source files, a simple language to
describe the tests and subtests required to run executables associated with
compilation of that file. The general skeleton of the file is

```
static const char help[] = "A simple MOAB example\n";

...
<source code>
...

/*TEST
   build:
     requires: moab
   testset:
     suffix: 1
     requires: !complex
   testset:
     suffix: 2
     args: -debug -fields v1,v2,v3
     test:
     test:
       args: -foo bar
TEST*/
```

For our language, a *test* is associated with the following:

- A single shell script

- A single makefile

- An output file that represents the *expected results*. It is also possible -- though unusual -- to have multiple output files for a single test

- Two or more command tests, usually:

  - one or more `mpiexec` tests that run the executable
  - one or more `diff` tests to compare output with the expected result

Our language also supports a *testset* that specifies either a new test
entirely or multiple executable/diff tests within a single test. At the
core, the executable/diff test combination will look something like
this:

```sh
mpiexec -n 1 ../ex1 1> ex1.tmp 2> ex1.err
diff ex1.tmp output/ex1.out 1> diff-ex1.tmp 2> diff-ex1.err
```

In practice, the test harness performs various logging and counting,
as explained further below. The input language supports
simple yet flexible test control.

### Runtime Language Options

At the end of each test file, a marked comment block is
inserted to describe the test(s) to be run. The elements of the test are
specified with a set of supported keywords that set up the test.

The goals of the language are to be

- as minimal as possible, with the simplest test requiring only one keyword,
- independent of the filename, such that a file can be renamed without rewriting the tests, and
- intuitive.

In order to enable the second goal, the *basestring* of the filename is
defined as the filename without the extension; for example, if the
filename is `ex1.c`, then `basestring=ex1`.

With this background, these keywords are as follows.

- **testset** or **test**: (*Required*)

  - At the top level either a single test or a test set must be
    specified. All other keywords are sub-entries of this keyword.

- **suffix**: (*Optional*; *Default:* `suffix=""`)

  - The test name is given by `testname = basestring` if the suffix
    is set to an empty string, and by
    `testname = basestring + "_" + suffix` otherwise.
  - This can be specified only for top-level test nodes.

- **output_file**: (*Optional*; *Default:*
  `output_file = "output/" + testname + ".out"`)

  - The output of the test is to be compared with an *expected result*
    whose name is given by `output_file`.
  - This file is specified relative to the source directory of the
    source file and should be in the output subdirectory (for example,
    `output/ex1.out`).

- **nsize**: (*Optional*; *Default:* `nsize=1`)

  - This integer is passed to `mpiexec`; i.e., `mpiexec -n nsize`.

- **args**: (*Optional*; *Default:* `""`)

  - These arguments are passed to the executable.

- **diff_args**: (*Optional*; *Default:* `""`)

  - These arguments are passed to the `lib/petsc/bin/petscdiff` script that
    is used in the diff part of the test. For example, `-j` enables diffing
    of floating-point numbers.

- **TODO**: (*Optional*; *Default:* `False`)

  - Setting this Boolean to True will tell the test to appear in the
    test harness but report only TODO per the TAP standard. Optionally
    provide a string indicating why it is todo.
  - A runscript will still be generated and can easily be modified by
    hand to run.

- **filter**: (*Optional*; *Default:* `""`)

  - Sometimes only a subset of the output is meant to be tested
    against the expected result. If this keyword is used, it filters
    the executable output before
    comparing with `output_file`.
  - The value of this is the command to be run, for example,
    `grep foo` or `sort -nr`.
  - **NOTE: this method of testing error output is NOT recommended. See the section on**
    {ref}`testing errors <sec_testing_error_testing>` **instead.** If the filter begins
    with `Error:`, then the test is assumed to be testing the `stderr` output, and the
    error code and output are set up to be tested.

- **filter_output**: (*Optional*; *Default:* `""`)

  - Sometimes filtering the expected output file is useful for standardizing
    tests. For example, in order to handle the issues related to
    parallel output, both the output from the test example and the
    expected output file need to be sorted (since sort does not produce the
    same output on all machines). This keyword works the same way as
    `filter` but is applied to the expected output file.

- **localrunfiles**: (*Optional*; *Default:* `""`)

  - Some tests require runtime files that are maintained in the
    source tree.
    Files in this (space-delimited) list will be copied over to the
    testing directory so they will be found by the executable. If you
    list a directory instead of files, the entire directory will be
    copied (this is currently limited to a single directory).
  - The copying is done by the test generator and not by creating
    makefile dependencies.

- **temporaries**: (*Optional*; *Default:* `""`)

  - Some tests produce temporary files that are read by the filter
    to compare to expected results.
    Files in this (space-delimited) list will be cleared before
    the test is run to ensure that stale temporary files are not read.

- **requires**: (*Optional*; *Default:* `""`)

  - This is a space-delimited list of run requirements (not build
    requirements; see Build requirements below).
  - In general, the language supports `and` and `not` constructs
    using `! => not` and `, => and`.
  - MPIUNI should work for all `-n 1` examples, so this need not be in
    the requirements list.
  - Inputs sometimes require external matrices that are found in the
    directory given by the environment variable `DATAFILESPATH`.
    The repository [datafiles](https://gitlab.com/petsc/datafiles)
    contains all the test files needed for the test suite.
    For these tests, `requires: datafilespath` can be
    specified.
  - Packages are indicated with a lower-case specification, for example,
    `requires: superlu_dist`.
  - Any defined variable in `petscconf.h` can be specified with the
    `defined(...)` syntax, for example, `defined(PETSC_USE_INFO)`.
  - Any definition of the form `PETSC_HAVE_FOO` can just use
    `requires: foo`, similar to how third-party packages are handled.

- **timeoutfactor**: (*Optional*; *Default:* `"1"`)

  - This parameter allows you to extend the default timeout for an
    individual test such that the new timeout time is
    `timeout = (default timeout) x (timeoutfactor)`.
  - Tests are limited to a set time that is found at the top of
    `config/petsc_harness.sh` and can be overridden by passing the
    `TIMEOUT` argument to `gmakefile`.

- **env**: (*Optional*; *Default:* `env=""`)

  - Allows you to set environment variables for the test. Values are copied verbatim to
    the runscript and defined and exported prior to all other variables.

  - Variables defined within `env:` blocks are expanded and processed by the shell that
    runs the runscript. No prior preprocessing (other than splitting the lines into
    separate declarations) is done. This means that any escaping of special characters
    must be done in the text of the `TEST` block.

  - Defining the `env:` keyword more than once is allowed. Subsequent declarations are
    then appended to the prior list of declarations. Multiple environment variables may also
    be defined in the same `env:` block, i.e., given a test `ex1.c` with the following
    spec:

    ```yaml
    test:
      env: FOO=1 BAR=1

    # equivalently
    test:
      env: FOO=1
      env: BAR=1
    ```

    results in

    ```console
    $ export FOO=1; export BAR=1; ./ex1
    ```

  - Variables defined in an `env:` block are evaluated by the runscript in the order in
    which they are defined in the `TEST` block. Thus it is possible for later variables
    to refer to previously defined ones:

    ```yaml
    test:
      env: FOO='hello' BAR=${FOO}
    ```

    results in

    ```console
    $ export FOO='hello'; export BAR=${FOO}; ./ex1
    # expanded by shell to
    $ export FOO='hello'; export BAR='hello'; ./ex1
    ```

    Note this also implies that

    ```yaml
    test:
      env: FOO=1 FOO=0
    ```

    results in

    ```console
    $ export FOO=1; export FOO=0; ./ex1
    ```
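
The way `env:` declarations end up in the runscript can be sketched in Python (a simplified illustration, not the generator's actual code; values containing spaces are not handled by this sketch):

```python
def env_exports(env_blocks):
    """Turn a list of env: values into the export prefix of the runscript:
    each block is split on whitespace into VAR=value declarations, which
    are exported in order with values copied verbatim."""
    decls = []
    for block in env_blocks:
        decls.extend(block.split())
    return "; ".join("export " + d for d in decls)

# one env: line with two variables equals two separate env: lines
assert env_exports(["FOO=1 BAR=1"]) == env_exports(["FOO=1", "BAR=1"])
print(env_exports(["FOO=1 BAR=1"]))  # export FOO=1; export BAR=1
```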

### Additional Specifications

In addition to the above keywords, other language features are
supported.

- **for loops**: Specifying `{{list of values}}` will generate a loop over
  an enclosed space-delimited list of values.
  It is supported within `nsize` and `args`. For example,

  ```
  nsize: {{1 2 4}}
  args: -matload_block_size {{2 3}shared output}
  ```

  Here the output for each `-matload_block_size` value is assumed to be
  the same, so only one output file is needed.

  If the loop causes different output for each loop iteration, then `separate output` needs to be used:

  ```
  args: -matload_block_size {{2 3}separate output}
  ```

  In this case, each loop value generates a separate script
  and uses a separate output file for comparison.

  Note that `{{...}}` is equivalent to `{{...}shared output}`.
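
The loop expansion can be sketched with a small Python helper (a hypothetical illustration; the actual expansion is done by the test generator):

```python
import itertools
import re

# matches {{v1 v2 ...}} with an optional 'shared output'/'separate output'
# qualifier between the closing braces
LOOP = re.compile(r"\{\{([^}]*)\}(?:[a-z ]*)?\}")

def expand_loops(args):
    """Expand {{...}} constructs into one argument string per combination
    of loop values; the output qualifier is ignored in this sketch."""
    loops = [m.group(1).split() for m in LOOP.finditer(args)]
    template = LOOP.sub("{}", args)
    return [template.format(*combo) for combo in itertools.product(*loops)]

print(expand_loops("-matload_block_size {{2 3}shared output}"))
# ['-matload_block_size 2', '-matload_block_size 3']
```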

(sec_testing_error_testing)=

### Testing Errors And Exceptional Code

It is possible (and encouraged!) to test error conditions within the test harness. Since
error messages produced by `SETERRQ()` and friends are not portable between systems,
additional arguments must be passed to tests to modify error handling, specifically:

```yaml
args: -petsc_ci_portable_error_output -error_output_stdout
```

These arguments have the following effect:

- `-petsc_ci_portable_error_output`: Strips system- or configuration-specific information
  from error messages. Specifically, this:

  - Removes all path components except the file name from the traceback
  - Removes line and column numbers from the traceback
  - Removes PETSc version information
  - Removes `configure` options used
  - Removes system name
  - Removes hostname
  - Removes date

  With this option, error messages will be identical across systems, runs, and PETSc
  configurations (barring, of course, configurations in which the error is not raised).

  Furthermore, this option also changes the default behavior of the error handler to
  **gracefully** exit where possible. For single-rank runs this means returning with
  exit code `0` and calling `MPI_Finalize()` instead of `MPI_Abort()`. Multi-rank
  tests will call `MPI_Abort()` on errors raised on `PETSC_COMM_SELF`, but will call
  `MPI_Finalize()` otherwise.

- `-error_output_stdout`: Forces `SETERRQ()` and friends to dump error messages to
  `stdout` instead of `stderr`. While using `stderr` (alongside the `Error:`
  sub-directive under `filter:`) also works, it appears to be unstable under heavy
  load, especially in CI.

Using both options in tandem allows one to use the normal `output:` mechanism to compare
expected and actual error outputs.

When writing ASCII output that may not be portable, and you want `-petsc_ci_portable_error_output`
to cause that output to be skipped, enclose the output in code such as

```
if (!PetscCIEnabledPortableErrorOutput)
```

to prevent it from being printed when the CI test harness is running.
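
The path and line-number stripping can be mimicked with a scrub function like the following (an illustration of the effect only; the real stripping happens inside PETSc's error handler):

```python
import re

def scrub_traceback_line(line):
    """Mimic part of -petsc_ci_portable_error_output on a traceback-like
    line: keep only the last path component and drop line/column numbers."""
    line = re.sub(r"(\S*/)+(\S+\.c)", r"\2", line)  # strip directory components
    line = re.sub(r"\.c:\d+(:\d+)?", ".c", line)    # strip line/column numbers
    return line

print(scrub_traceback_line("#1 foo() at /home/user/petsc/src/sys/ex1.c:42"))
# #1 foo() at ex1.c
```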

### Test Block Examples

The following is the simplest test block:

```yaml
/*TEST
  test:
TEST*/
```

If this block is in `src/a/b/examples/tutorials/ex1.c`, then it will
create the test `a_b_tutorials-ex1`, which requires only one
process, runs with no arguments, and diffs the resulting output with
`src/a/b/examples/tutorials/output/ex1.out`.
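
The path-to-target mapping can be approximated in Python (a sketch; `config/gmakegentest.py` implements the real rule):

```python
def target_name(relpath, suffix=""):
    """Approximate the target-naming rule: directory components after
    'src', minus 'examples', joined by underscores, then '-' plus the
    basestring and optional suffix."""
    dirs, _, filename = relpath.rpartition("/")
    base = filename.rsplit(".", 1)[0]
    parts = [p for p in dirs.split("/") if p not in ("src", "examples")]
    name = base if suffix == "" else base + "_" + suffix
    return "_".join(parts) + "-" + name

print(target_name("src/a/b/examples/tutorials/ex1.c"))       # a_b_tutorials-ex1
print(target_name("src/a/b/examples/tutorials/ex1.c", "1"))  # a_b_tutorials-ex1_1
```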

For Fortran, the equivalent is

```fortran
!/*TEST
!  test:
!TEST*/
```

A more complete example, showing just the lines between `/*TEST` and `TEST*/`:

```yaml
test:
test:
  suffix: 1
  nsize: 2
  args: -t 2 -pc_type jacobi -ksp_monitor_short -ksp_type gmres
  args: -ksp_gmres_cgs_refinement_type refine_always -s2_ksp_type bcgs
  args: -s2_pc_type jacobi -s2_ksp_monitor_short
  requires: x
```

This creates two tests. Assuming that this is
`src/a/b/examples/tutorials/ex1.c`, the tests would be
`a_b_tutorials-ex1` and `a_b_tutorials-ex1_1`.

Following is an example of how to test a permutation of arguments
against the same output file:

```yaml
testset:
  suffix: 19
  requires: datafilespath
  args: -f0 ${DATAFILESPATH}/matrices/poisson1
  args: -ksp_type cg -pc_type icc -pc_factor_levels 2
  test:
  test:
    args: -mat_type seqsbaij
```

Assuming that this is `ex10.c`, there would be two mpiexec/diff
invocations in `runex10_19.sh`.

Here is a similar example, but the permutation of arguments creates
different output:

```yaml
testset:
  requires: datafilespath
  args: -f0 ${DATAFILESPATH}/matrices/medium
  args: -ksp_type bicg
  test:
    suffix: 4
    args: -pc_type lu
  test:
    suffix: 5
```

Assuming that this is `ex10.c`, two shell scripts will be created:
`runex10_4.sh` and `runex10_5.sh`.

An example using a for loop is:

```yaml
testset:
  suffix: 1
  args: -f ${DATAFILESPATH}/matrices/small -mat_type aij
  requires: datafilespath
testset:
  suffix: 2
  output_file: output/ex138_1.out
  args: -f ${DATAFILESPATH}/matrices/small
  args: -mat_type baij -matload_block_size {{2 3}shared output}
  requires: datafilespath
```

In this example, `runex138_2.sh` will invoke the executable twice with
two different arguments, but both results are diffed with the same file.

Following is an example showing the hierarchical nature of the test
specification:

```yaml
testset:
  suffix: 2
  output_file: output/ex138_1.out
  args: -f ${DATAFILESPATH}/matrices/small -mat_type baij
  test:
    args: -matload_block_size 2
  test:
    args: -matload_block_size 3
```

This is functionally equivalent to the for loop shown above.

Here is a more complex example using for loops:

```yaml
testset:
  suffix: 19
  requires: datafilespath
  args: -f0 ${DATAFILESPATH}/matrices/poisson1
  args: -ksp_type cg -pc_type icc
  args: -pc_factor_levels {{0 2 4}separate output}
  test:
  test:
    args: -mat_type seqsbaij
```

If this is in `ex10.c`, then the shell scripts generated would be

- `runex10_19_pc_factor_levels-0.sh`
- `runex10_19_pc_factor_levels-2.sh`
- `runex10_19_pc_factor_levels-4.sh`

Each shell script would invoke the mpiexec/diff combination twice.
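
The script names produced by a `separate output` loop follow a regular pattern; a hypothetical helper makes that pattern explicit:

```python
def separate_output_scripts(basename, suffix, option, values):
    """Build run<base>_<suffix>_<option>-<value>.sh names for a
    {{...}separate output} loop (hypothetical helper, naming sketch only)."""
    return ["run{}_{}_{}-{}.sh".format(basename, suffix, option, v)
            for v in values]

for name in separate_output_scripts("ex10", "19", "pc_factor_levels", [0, 2, 4]):
    print(name)
# runex10_19_pc_factor_levels-0.sh
# runex10_19_pc_factor_levels-2.sh
# runex10_19_pc_factor_levels-4.sh
```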

### Build Language Options

You can specify requirements related to the compilation of the source file
with the `build:` block. The language is as follows.

- **requires:** (*Optional*; *Default:* `""`)

  - Same as the runtime requirements (for example, can include
    `requires: fftw`) but also requirements related to types:

    1. Precision types: `single`, `double`, `quad`, `int32`
    2. Scalar types: `complex` (and `!complex`)

  - In addition, `TODO` is available to allow you to skip the build
    of this file but still maintain it in the source tree.

- **depends:** (*Optional*; *Default:* `""`)

  - List any dependencies required to compile the file.

A typical example for compiling only for real numbers is

```
/*TEST
  build:
    requires: !complex
  test:
TEST*/
```

## Running the tests

The make rules for running tests are contained in `gmakefile.test` in the PETSc root directory. They can usually be accessed by
simply using commands such as

```console
$ make test
```

or, for a list of test options,

```console
$ make help-test
```

### Determining the failed jobs of a given run

Running the test harness shows which tests fail, but you may not have
logged the output or may have run without showing the full error. The best way of
examining the errors is with this command:

```console
$ $EDITOR $PETSC_DIR/$PETSC_ARCH/tests/test*err.log
```

This method can also be used for the PETSc continuous integration (CI) pipeline jobs. For failed jobs you can download the
log files from the `artifacts download` tab on the right side:

:::{figure} /images/developers/test-artifacts.png
:alt: Test Artifacts at Gitlab

Test artifacts can be downloaded from GitLab.
:::

To see the list of all tests that failed from the last run, you can also run this command:

```console
$ make print-test test-fail=1
```

To print it out in a column format:

```console
$ make print-test test-fail=1 | tr ' ' '\n' | sort
```

Once you know which tests failed, the question is how to debug them.

### Introduction to debugging workflows

Here, two different workflows for developing with the test harness are presented,
and then the language for adding a new test is described. Before describing the
workflows, we first discuss the output of the test harness and how it maps onto
makefile targets and shell scripts.

Consider this line from running the PETSc test system:

```
TEST arch-ci-linux-uni-pkgs/tests/counts/vec_is_sf_tests-ex1_basic_1.counts
```

The string `vec_is_sf_tests-ex1_basic_1` gives the following information:

- The file generating the tests is found in `$PETSC_DIR/src/vec/is/sf/tests/ex1.c`
- The makefile target for the *test* is `vec_is_sf_tests-ex1_basic_1`
- The makefile target for the *executable* is `$PETSC_ARCH/tests/vec/is/sf/tests/ex1`
- The shell script running the test is located at `$PETSC_DIR/$PETSC_ARCH/tests/vec/is/sf/tests/runex1_basic_1.sh`
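
Decoding a target name back into these locations can be sketched as follows (an illustration assuming the `<dir_components>-<base>_<testsuffix>` form above; it breaks for directory names that themselves contain underscores):

```python
def decode_test_name(target):
    """Decode a makefile test target into source file, executable target,
    and runscript (paths relative to $PETSC_DIR); sketch only."""
    path_part, test_part = target.split("-", 1)
    subdir = path_part.replace("_", "/")  # e.g. vec/is/sf/tests
    exe = test_part.split("_", 1)[0]      # e.g. ex1
    return {
        "source": "src/" + subdir + "/" + exe + ".c",
        "executable": "$PETSC_ARCH/tests/" + subdir + "/" + exe,
        "script": "$PETSC_ARCH/tests/" + subdir + "/run" + test_part + ".sh",
    }

info = decode_test_name("vec_is_sf_tests-ex1_basic_1")
print(info["source"])  # src/vec/is/sf/tests/ex1.c
print(info["script"])  # $PETSC_ARCH/tests/vec/is/sf/tests/runex1_basic_1.sh
```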

Let's say that you want to debug a single test as part of development. There
are two basic methods of doing this: 1) use the generated shell script directly in the test
directory, or 2) use `gmakefile.test` from the top-level directory. We present both
workflows.

### Debugging a test using the generated shell scripts

First, look at the working directory and the options for the
scripts:

```console
$ cd $PETSC_ARCH/tests/vec/is/sf/tests
$ ./runex1_basic_1.sh -h
Usage: ./runex1_basic_1.sh [options]

OPTIONS
  -a <args> ......... Override default arguments
  -c ................ Cleanup (remove generated files)
  -C ................ Compile
  -d ................ Launch in debugger
  -e <args> ......... Add extra arguments to default
  -f ................ force attempt to run test that would otherwise be skipped
  -h ................ help: print this message
  -n <integer> ...... Override the number of processors to use
  -j ................ Pass -j to petscdiff (just use diff)
  -J <arg> .......... Pass -J to petscdiff (just use diff with arg)
  -m ................ Update results using petscdiff
  -M ................ Update alt files using petscdiff
  -o <arg> .......... Output format: 'interactive', 'err_only'
  -p ................ Print command: Print first command and exit
  -t ................ Override the default timeout (default=60 sec)
  -U ................ run cUda-memcheck
  -V ................ run Valgrind
  -v ................ Verbose: Print commands
```

We will be using the `-C`, `-V`, and `-p` flags.

A basic workflow is similar to:

```console
$ <edit>
$ runex1_basic_1.sh -C
$ <edit>
$ ...
$ runex1_basic_1.sh -m # If you need to update the results
$ ...
$ runex1_basic_1.sh -V # Make sure the run is valgrind clean
$ cd $PETSC_DIR
$ git commit -a
```

For loops, it can sometimes become onerous to run the whole test.
In this case, you can use the `-p` flag to print just the first
command. It will print a command suitable for running from
`$PETSC_DIR`, but it is easy to modify for execution in the test
directory:

```console
$ runex1_basic_1.sh -p
```

### Debugging a PETSc test using the gmakefile.test

First recall how to find help for the options:

```console
$ make help-test
Test usage:
   /usr/bin/gmake --no-print-directory test <options>

Options:
  NO_RM=1           Do not remove the executables after running
  REPLACE=1         Replace the output in PETSC_DIR source tree (-m to test scripts)
  OUTPUT=1          Show only the errors on stdout
  ALT=1             Replace 'alt' output in PETSC_DIR source tree (-M to test scripts)
  DIFF_NUMBERS=1    Diff the numbers in the output (-j to test scripts and petscdiff)
  CUDAMEMCHECK=1    Execute the tests using cuda-memcheck (-U to test scripts)
                    Use PETSC_CUDAMEMCHECK_COMMAND to change the executable to run and
                    PETSC_CUDAMEMCHECK_ARGS to change the arguments (note: both
                    cuda-memcheck and compute-sanitizer are supported)
  VALGRIND=1        Execute the tests using valgrind (-V to test scripts)
  DEBUG=1           Launch tests in the debugger (-d to the scripts)
  NP=<num proc>     Set a number of processors to pass to scripts.
  FORCE=1           Force SKIP or TODO tests to run
  PRINTONLY=1       Print the command, but do not run.  For loops print first command
  TIMEOUT=<time>    Test timeout limit in seconds (default in config/petsc_harness.sh)
  TESTDIR='tests'   Subdirectory where tests are run ($PETSC_DIR/$PETSC_ARCH
                    or /
                    or /share/petsc/examples/)
  TESTBASE='tests'   Subdirectory where tests are run ($PETSC_DIR/$PETSC_ARCH)
  OPTIONS='<args>'  Override options to scripts (-a to test scripts)
  EXTRA_OPTIONS='<args>'  Add options to scripts (-e to test scripts)

Special options for macOS:
  MACOS_FIREWALL=1  Add each built test to the macOS firewall list to prevent popups. Configure --with-macos-firewall-rules to make this default

Tests can be generated by searching with multiple methods
  For general searching (using config/query_tests.py):
    /usr/bin/gmake --no-print-directory test search='sys*ex2*'
   or the shortcut using s
    /usr/bin/gmake --no-print-directory test s='sys*ex2*'
  You can also use the full path to a file directory
    /usr/bin/gmake --no-print-directory test s='src/sys/tests/'
   or a file
    /usr/bin/gmake --no-print-directory test s='src/sys/tests/ex1.c'

  To search for fields from the original test definitions:
    /usr/bin/gmake --no-print-directory test query='requires' queryval='*MPI_PROCESS_SHARED_MEMORY*'
   or the shortcut using q and qv
    /usr/bin/gmake --no-print-directory test q='requires' qv='*MPI_PROCESS_SHARED_MEMORY*'
  To filter results from other searches, use searchin
    /usr/bin/gmake --no-print-directory test s='src/sys/tests/' searchin='*options*'

  To re-run the last tests which failed:
    /usr/bin/gmake --no-print-directory test test-fail='1'

  To see which targets match a given pattern (useful for doing a specific target):
    /usr/bin/gmake --no-print-directory print-test search=sys*

  To build an executable, give full path to location:
    /usr/bin/gmake --no-print-directory ${PETSC_ARCH}/tests/sys/tests/ex1
  or make the test with NO_RM=1
```

To compile the test and run it:

```console
$ make test search=vec_is_sf_tests-ex1_basic_1
```

This can form your basic workflow. However,
for the normal compile-and-edit cycle, running the entire harness with search can be
cumbersome. So first get the command:

```console
$ make vec_is_sf_tests-ex1_basic_1 PRINTONLY=1
<copy command>
<edit>
$ make $PETSC_ARCH/tests/vec/is/sf/tests/ex1
$ /scratch/kruger/contrib/petsc-mpich-cxx/bin/mpiexec -n 1 arch-mpich-cxx-py3/tests/vec/is/sf/tests/ex1
...
$ cd $PETSC_DIR
$ git commit -a
```

### Advanced searching

When forming a search, it is recommended to always use `print-test` instead of
`test` to make sure it is returning the values that you want.

The three basic and recommended arguments are:

- `search` (or `s`)

  - Searches based on the name of the test target (see above)

  - Use the familiar glob syntax (like the Unix `ls` command). Example:

    ```console
    $ make print-test search='vec_is*ex1*basic*1'
    ```

    Equivalently:

    ```console
    $ make print-test s='vec_is*ex1*basic*1'
    ```

  - It also takes full paths. Examples:

    ```console
    $ make print-test s='src/vec/is/tests/ex1.c'
    ```

    ```console
    $ make print-test s='src/dm/impls/plex/tests/'
    ```

    ```console
    $ make print-test s='src/dm/impls/plex/tests/ex1.c'
    ```

- `query` and `queryval` (or `q` and `qv`)

  - `query` corresponds to a test harness keyword, `queryval` to the value. Example:

    ```console
    $ make print-test query='suffix' queryval='basic_1'
    ```

  - Invokes `config/query_tests.py` to query the tests (see
    `config/query_tests.py --help` for more information).

  - See below for how to use it, as it has many features

- `searchin` (or `i`)

  - Filters results of the above searches. Example:

    ```console
    $ make print-test s='src/dm/impls/plex/tests/ex1.c' i='*refine_overlap_2d*'
    ```

Searching using GNU make's native regex functionality is kept for people who like it, but most developers will likely prefer the above methods:

- `gmakesearch`

  - Use GNU make's own filter capability.

  - Fast, but requires knowing GNU make regex syntax, which uses `%` instead of `*`

  - Also very limited (cannot use two `%`'s, for example)

  - Example:

    ```console
    $ make test gmakesearch='vec_is%ex1_basic_1'
    ```

- `gmakesearchin`

  - Use GNU make's own filter capability to search in previous results. Example:

    ```console
    $ make test gmakesearch='vec_is%1' gmakesearchin='basic'
    ```

### Query-based searching

Note that glob-style matching is also accepted in the value field:

```console
$ make print-test query='suffix' queryval='basic_1'
```

```console
$ make print-test query='requires' queryval='cuda'
```

```console
$ make print-test query='requires' queryval='defined(PETSC_HAVE_MPI_GPU_AWARE)'
```

```console
$ make print-test query='requires' queryval='*GPU_AWARE*'
```

Using the `name` field is equivalent to the search above:

- Example:

  ```console
  $ make print-test query='name' queryval='vec_is*ex1*basic*1'
  ```

- This can be combined with union/intersect queries as discussed below

Arguments are tricky to search for. Consider

```none
args: -ksp_monitor_short -pc_type ml -ksp_max_it 3
```

The search terms are

```none
ksp_monitor, pc_type ml, ksp_max_it
```

Certain items are ignored:

- Numbers (see `ksp_max_it` above); floats are ignored as well.
- Loops: `args: -pc_fieldsplit_diag_use_amat {{0 1}}` gives `pc_fieldsplit_diag_use_amat` as the search term
- Input files: `-f *`

Examples of argument searching:

```console
$ make print-test query='args' queryval='ksp_monitor'
```

```console
$ make print-test query='args' queryval='*monitor*'
```

```console
$ make print-test query='args' queryval='pc_type ml'
```

Multiple simultaneous queries can be performed with the union (`,`) and intersection
(`|`) operators in the `query` field. One may also use their alternate spellings
(`%AND%` and `%OR%`, respectively). The alternate spellings are useful in cases where
one cannot avoid (possibly multiple) shell expansions that might otherwise interpret the
`|` operator as a shell pipe. Examples:

- All examples using `cuda` and all examples using `hip`:

  ```console
  $ make print-test query='requires,requires' queryval='cuda,hip'
  # equivalently
  $ make print-test query='requires%AND%requires' queryval='cuda%AND%hip'
  ```

- Examples that require both `triangle` and `ctetgen` (intersection of tests):

  ```console
  $ make print-test query='requires|requires' queryval='ctetgen,triangle'
  # equivalently
  $ make print-test query='requires%OR%requires' queryval='ctetgen%AND%triangle'
  ```

- Tests that require either `ctetgen` or `triangle`:

  ```console
  $ make print-test query='requires,requires' queryval='ctetgen,triangle'
  # equivalently
  $ make print-test query='requires%AND%requires' queryval='ctetgen%AND%triangle'
  ```

- Find `cuda` examples in the `dm` package:

  ```console
  $ make print-test query='requires|name' queryval='cuda,dm*'
  # equivalently
  $ make print-test query='requires%OR%name' queryval='cuda%AND%dm*'
  ```

Here is a way of getting a feel for how the union and intersect operators work:

```console
$ make print-test query='requires' queryval='ctetgen' | tr ' ' '\n' | wc -l
170
$ make print-test query='requires' queryval='triangle' | tr ' ' '\n' | wc -l
330
$ make print-test query='requires,requires' queryval='ctetgen,triangle' | tr ' ' '\n' | wc -l
478
$ make print-test query='requires|requires' queryval='ctetgen,triangle' | tr ' ' '\n' | wc -l
22
```

The ctetgen and triangle queries return 500 results in total. The two sets have 22 tests
in common, so their union contains 478 distinct tests (170 + 330 = 478 + 22).
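
These counts follow ordinary inclusion-exclusion set arithmetic, which can be checked quickly:

```python
# counts reported by the queries above
ctetgen, triangle = 170, 330     # |A|, |B|
union, intersection = 478, 22    # |A or B|, |A and B|

# inclusion-exclusion: |A or B| = |A| + |B| - |A and B|
assert union == ctetgen + triangle - intersection

# tests that require exactly one of the two packages
print(union - intersection)  # 456
```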
912
The union and intersection operators have fixed grouping, so the string argument

```none
query='requires,requires|args' queryval='cuda,hip,*log*'
# equivalently
query='requires%OR%requires%AND%args' queryval='cuda%OR%hip%AND%*log*'
```

will be read as

```none
requires:cuda || (requires:hip && args:*log*)
```

which is probably not what is intended.

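Fixed grouping matters: with small, made-up result sets you can verify that a right-grouped expression differs from the left-grouped one a user might expect:

```python
# Made-up result sets, purely for illustration
cuda = {"t1", "t2"}
hip = {"t2", "t3"}
log = {"t3", "t4"}

right_grouped = cuda | (hip & log)  # A u (B n C)
left_grouped = (cuda | hip) & log   # (A u B) n C
print(sorted(right_grouped))  # ['t1', 't2', 't3']
print(sorted(left_grouped))   # ['t3']
```
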
`query`/`queryval` also support negation (`!`, alternately `%NEG%`), but it is limited:
a negated value matches only tests whose descriptions explicitly contain it. So, for
example, the arguments

```none
query=requires queryval='!cuda'
# equivalently
query=requires queryval='%NEG%cuda'
```

will only match tests that explicitly contain

```
requires: !cuda
```

in their descriptions; they do not match all tests that simply do not require `cuda`.

### Debugging for loops

One of the more difficult issues is debugging for loops when only a subset of the
argument combinations causes a crash. The default naming scheme is not always helpful
for figuring out which combination that is.

For example:

```console
$ make test s='src/ksp/ksp/tests/ex9.c' i='*1'
Using MAKEFLAGS: i=*1 s=src/ksp/ksp/tests/ex9.c
        TEST arch-osx-pkgs-opt-new/tests/counts/ksp_ksp_tests-ex9_1.counts
 ok ksp_ksp_tests-ex9_1+pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_type-additive
 not ok diff-ksp_ksp_tests-ex9_1+pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_type-additive
 ok ksp_ksp_tests-ex9_1+pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_type-multiplicative
 ...
```

In this case, the trick is to use the verbose option, `V=1` (or, for the shell script workflows, `-v`), to have it show the commands:

```console
$ make test s='src/ksp/ksp/tests/ex9.c' i='*1' V=1
Using MAKEFLAGS: V=1 i=*1 s=src/ksp/ksp/tests/ex9.c
arch-osx-pkgs-opt-new/tests/ksp/ksp/tests/runex9_1.sh  -v
 ok ksp_ksp_tests-ex9_1+pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_type-additive # mpiexec  -n 1 ../ex9 -ksp_converged_reason -ksp_error_if_not_converged  -pc_fieldsplit_diag_use_amat 0 -pc_fieldsplit_diag_use_amat 0 -pc_fieldsplit_type additive > ex9_1.tmp 2> runex9_1.err
...
```

This output can still be hard to scan. Since failing runs are marked `not ok` and `#`
delimits the command, you can filter with:

```console
$ make test s='src/ksp/ksp/tests/ex9.c' i='*1' V=1 | grep 'not ok' | cut -d# -f2
mpiexec  -n 1 ../ex9 -ksp_converged_reason -ksp_error_if_not_converged  -pc_fieldsplit_diag_use_amat 0 -pc_fieldsplit_diag_use_amat 0 -pc_fieldsplit_type multiplicative > ex9_1.tmp 2> runex9_1.err
```

## PETSc Test Harness

The goals of the PETSc test harness are threefold.

1. Provide standard output used by other testing tools
2. Be as lightweight as possible and easily fit within the PETSc build chain
3. Provide information on all tests, even those that are not built or run because they do not meet the configuration requirements

Before examining the test harness itself, you should first understand the requirements
for reporting and logging.

### Testing the Parsing

After inserting the test language into a source file, you can check the parsing by
running the parser, `config/testparse.py`, on that file. A dictionary will be
pretty-printed, and from this printout any problems in the parsing are usually obvious.
The same Python module is used by `config/gmakegentest.py` when generating the test
harness.

## Test Output Standards: TAP

The PETSc test system is designed to be compliant with the [Test Anything Protocol (TAP)](https://testanything.org/tap-specification.html).

TAP is a simple standard designed to allow testing tools to work together. Libraries
exist for consuming TAP output, including Sharness, which is used by the Git project.
However, the simplicity of both the PETSc tests and the TAP specification means that we
use our own minimal harness, given by a single shell script that each test script
sources: `$PETSC_DIR/config/petsc_harness.sh`.

As an example, consider this test input:

```yaml
test:
  suffix: 2
  output_file: output/ex138.out
  args: -f ${DATAFILESPATH}/matrices/small -mat_type {{aij baij sbaij}} -matload_block_size {{2 3}}
  requires: datafilespath
```

A sample output from this would be:

```
ok 1 In mat...tests: "./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type aij -matload_block_size 2"
ok 2 In mat...tests: "Diff of ./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type aij -matload_block_size 2"
ok 3 In mat...tests: "./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type aij -matload_block_size 3"
ok 4 In mat...tests: "Diff of ./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type aij -matload_block_size 3"
ok 5 In mat...tests: "./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type baij -matload_block_size 2"
ok 6 In mat...tests: "Diff of ./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type baij -matload_block_size 2"
...

ok 11 In mat...tests: "./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type sbaij -matload_block_size 3"
ok 12 In mat...tests: "Diff of ./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type sbaij -matload_block_size 3"
```

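Each `{{...}}` loop contributes one factor to a Cartesian product of argument combinations. A rough Python sketch of the expansion (illustrative only; the real logic lives in `config/gmakegentest.py`):

```python
import itertools

# Hypothetical sketch of {{...}} loop expansion into one command per combination
base = "./ex138 -f ${DATAFILESPATH}/matrices/small"
loops = {"-mat_type": ["aij", "baij", "sbaij"], "-matload_block_size": ["2", "3"]}

commands = [
    " ".join([base] + [f"{flag} {val}" for flag, val in zip(loops, combo)])
    for combo in itertools.product(*loops.values())
]
for cmd in commands:  # 3 x 2 = 6 commands, one per argument combination
    print(cmd)
```
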
## Test Harness Implementation

Most of the requirements for being TAP-compliant lie in the shell
scripts, so we focus on that description.

A sample shell script follows.

```sh
#!/bin/sh
. petsc_harness.sh

petsc_testrun ./ex1 ex1.tmp ex1.err
petsc_testrun 'diff ex1.tmp output/ex1.out' diff-ex1.tmp diff-ex1.err

petsc_testend
```

`petsc_harness.sh` is a small shell script that provides the logging and reporting
functions `petsc_testrun` and `petsc_testend`.

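A heavily simplified sketch of what those two functions do (illustrative only; the real `petsc_harness.sh` handles many more options, timeouts, and output filters):

```sh
# Illustrative simplification of petsc_harness.sh, not the real implementation
total=0; failed=0; failures=""

petsc_testrun () {  # $1: command, $2: stdout file, $3: stderr file
  total=$((total + 1))
  if eval "$1" > "$2" 2> "$3"; then
    printf 'ok %d %s\n' "$total" "$1"
  else
    failed=$((failed + 1)); failures="$failures $total"
    printf 'not ok %d %s\n' "$total" "$1"
    sed 's/^/#   /' "$3"  # report stderr as TAP comment lines
  fi
}

petsc_testend () {
  if [ "$failed" -gt 0 ]; then
    printf '# FAILED %s\n' "$failures"
  fi
  printf '# failed %d/%d tests\n' "$failed" "$total"
}
```

With definitions along these lines, the sample script above produces the numbered `ok`/`not ok` lines and the final summary comments.
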
A small sample of the output from the test harness is as follows.

```none
ok 1 ./ex1
ok 2 diff ex1.tmp output/ex1.out
not ok 4 ./ex2
#   ex2: Error: cannot read file
not ok 5 diff ex2.tmp output/ex2.out
ok 7 ./ex3 -f /matrices/small -mat_type aij -matload_block_size 2
ok 8 diff ex3.tmp output/ex3.out
ok 9 ./ex3 -f /matrices/small -mat_type aij -matload_block_size 3
ok 10 diff ex3.tmp output/ex3.out
ok 11 ./ex3 -f /matrices/small -mat_type baij -matload_block_size 2
ok 12 diff ex3.tmp output/ex3.out
ok 13 ./ex3 -f /matrices/small -mat_type baij -matload_block_size 3
ok 14 diff ex3.tmp output/ex3.out
ok 15 ./ex3 -f /matrices/small -mat_type sbaij -matload_block_size 2
ok 16 diff ex3.tmp output/ex3.out
ok 17 ./ex3 -f /matrices/small -mat_type sbaij -matload_block_size 3
ok 18 diff ex3.tmp output/ex3.out
# FAILED   4 5
# failed 2/16 tests; 87.500% ok
```

Developers can change the lines written to these shell scripts by modifying
`$PETSC_DIR/config/example_template.py`.

To modify the test harness itself, edit `$PETSC_DIR/config/petsc_harness.sh`.

### Additional Tips

To rerun just the reporting, use

```console
$ config/report_tests.py
```

To see the full set of options, use

```console
$ config/report_tests.py -h
```

To see the full timing information for the five most expensive tests, use

```console
$ config/report_tests.py -t 5
```