(test_harness)=

# PETSc Testing System

The PETSc test system consists of

- Formatted comments at the bottom of the tutorial and test source files that describe the tests to be run.
- The *test generator* (`config/gmakegentest.py`), which parses the tutorial and test source files and generates the makefiles and shell scripts. It is run
  automatically by the make system and is rarely run directly.
- The *PETSc test harness*, consisting of makefiles and shell scripts, which runs the executables with several logging and reporting features.

Details on using the harness may be found in the {ref}`user's manual <sec_runningtests>`. The testing system is used by {any}`pipelines`.

## PETSc Test Description Language

PETSc tests and tutorials contain at the bottom of their source files a simple language to
describe the tests and subtests required to run executables associated with
the compilation of that file. The general skeleton of the file is

```
static const char help[] = "A simple MOAB example\n";

...
<source code>
...

/*TEST
   build:
     requires: moab
   testset:
     suffix: 1
     requires: !complex
   testset:
     suffix: 2
     args: -debug -fields v1,v2,v3
     test:
     test:
       args: -foo bar
TEST*/
```

For our language, a *test* is associated with the following:

- A single shell script

- A single makefile

- An output file that represents the *expected results*. It is also possible -- though unusual -- to have multiple output files for a single test

- Two or more commands, usually:

  - one or more `mpiexec` commands that run the executable
  - one or more `diff` commands to compare output with the expected result

Our language also supports a *testset* that specifies either a new test
entirely or multiple executable/diff tests within a single test. At its
core, the executable/diff combination looks something like
this:

```sh
mpiexec -n 1 ../ex1 1> ex1.tmp 2> ex1.err
diff ex1.tmp output/ex1.out 1> diff-ex1.tmp 2> diff-ex1.err
```

In practice, the test harness also performs various logging and counting,
as explained further below. The input language supports
simple yet flexible test control.

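Conceptually, the harness wraps each executable/diff pair with TAP-style bookkeeping. The following stripped-down shell sketch is purely illustrative (it is not the actual `config/petsc_harness.sh` code):

```shell
#!/bin/sh
# Minimal sketch of harness bookkeeping: run each command, then count and
# report pass/fail in a TAP-like format.
total=0
pass=0
run_check () {
  total=$((total + 1))
  if "$@" > /dev/null 2>&1; then
    pass=$((pass + 1))
    echo "ok ${total}"
  else
    echo "not ok ${total}"
  fi
}
run_check true    # stands in for an mpiexec step that succeeds
run_check false   # stands in for a diff step that fails
echo "# ${pass}/${total} tests passed"
```
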
### Runtime Language Options

At the end of each test file, a marked comment block is
inserted to describe the test(s) to be run. The elements of the test are
specified with a set of supported keywords.

The goals of the language are to be

- as minimal as possible, with the simplest test requiring only one keyword,
- independent of the filename, such that a file can be renamed without rewriting the tests, and
- intuitive.

To enable the second goal, the *basestring* of the filename is
defined as the filename without the extension; for example, if the
filename is `ex1.c`, then `basestring=ex1`.

With this background, the keywords are as follows.

- **testset** or **test**: (*Required*)

  - At the top level, either a single test or a test set must be
    specified. All other keywords are sub-entries of this keyword.

- **suffix**: (*Optional*; *Default:* `suffix=""`)

  - The test name is given by `testname = basestring` if the suffix
    is set to an empty string, and by
    `testname = basestring + "_" + suffix` otherwise.
  - This can be specified only for top-level test nodes.

- **output_file**: (*Optional*; *Default:*
  `output_file = "output/" + testname + ".out"`)

  - The output of the test is compared with an *expected result*
    whose name is given by `output_file`.
  - This file is specified relative to the source directory of the
    source file and should be in the output subdirectory (for example,
    `output/ex1.out`).

- **nsize**: (*Optional*; *Default:* `nsize=1`)

  - This integer is passed to mpiexec; i.e., `mpiexec -n nsize`.

- **args**: (*Optional*; *Default:* `""`)

  - These arguments are passed to the executable.

- **diff_args**: (*Optional*; *Default:* `""`)

  - These arguments are passed to the `lib/petsc/bin/petscdiff` script that
    is used in the diff part of the test. For example, `-j` enables testing
    of floating-point numbers.

- **TODO**: (*Optional*; *Default:* `False`)

  - Setting this Boolean to True tells the test to appear in the
    test harness but report only TODO per the TAP standard. Optionally,
    provide a string indicating why it is marked TODO.
  - A runscript is still generated and can easily be modified by hand
    to run.

- **filter**: (*Optional*; *Default:* `""`)

  - Sometimes only a subset of the output is meant to be tested
    against the expected result. If this keyword is used, it filters
    the executable output before the comparison with `output_file`.
  - The value is the command to be run, for example,
    `grep foo` or `sort -nr`.
  - **NOTE: this method of testing error output is NOT recommended. See the section on**
    {ref}`testing errors <sec_testing_error_testing>` **instead.** If the filter begins
    with `Error:`, then the test is assumed to be testing the `stderr` output, and the
    error code and output are set up to be tested.

- **filter_output**: (*Optional*; *Default:* `""`)

  - Sometimes filtering the expected output file is useful for standardizing
    tests. For example, to handle the issues related to
    parallel output, both the output from the test example and the
    expected output file need to be sorted (since parallel output does not
    arrive in a deterministic order). This keyword works the same way as
    `filter` but is applied to the expected output file.

- **localrunfiles**: (*Optional*; *Default:* `""`)

  - Some tests
    require runtime files that are maintained in the source tree.
    Files in this (space-delimited) list will be copied over to the
    testing directory so they will be found by the executable. If you
    list a directory instead of files, the entire directory is copied
    (this is currently limited to a single directory).
  - The copying is done by the test generator and not by creating
    makefile dependencies.

- **temporaries**: (*Optional*; *Default:* `""`)

  - Some tests produce temporary files that are read by the filter
    to compare to expected results.
    Files in this (space-delimited) list will be cleared before
    the test is run to ensure that stale temporary files are not read.

- **requires**: (*Optional*; *Default:* `""`)

  - This is a space-delimited list of run requirements (not build
    requirements; see Build Language Options below).
  - In general, the language supports `and` and `not` constructs
    using `! => not` and `, => and`.
  - MPIUNI should work for all `-n 1` examples, so this need not be in
    the requirements list.
  - Inputs sometimes require external matrices that are found in the
    directory given by the environment variable `DATAFILESPATH`.
    The repository [datafiles](https://gitlab.com/petsc/datafiles)
    contains all the test files needed for the test suite.
    For these tests, `requires: datafilespath` can be
    specified.
  - Packages are indicated with a lower-case specification, for example,
    `requires: superlu_dist`.
  - Any defined variable in `petscconf.h` can be specified with the
    `defined(...)` syntax, for example, `defined(PETSC_USE_INFO)`.
  - Any definition of the form `PETSC_HAVE_FOO` can just use
    `requires: foo`, similar to how third-party packages are handled.

- **timeoutfactor**: (*Optional*; *Default:* `"1"`)

  - This parameter allows you to extend the default timeout for an
    individual test such that the new timeout is
    `timeout = (default timeout) x (timeoutfactor)`.
  - Tests are limited to a set time that is found at the top of
    `config/petsc_harness.sh` and can be overridden by passing the
    `TIMEOUT` argument to `gmakefile`.

- **env**: (*Optional*; *Default:* `env=""`)

  - Allows you to set environment variables for the test. Values are copied verbatim to
    the runscript and defined and exported prior to all other variables.

  - Variables defined within `env:` blocks are expanded and processed by the shell that
    runs the runscript. No prior preprocessing (other than splitting the lines into
    separate declarations) is done. This means that any escaping of special characters
    must be done in the text of the `TEST` block.

  - Defining the `env:` keyword more than once is allowed. Subsequent declarations are
    appended to the prior list of declarations. Multiple environment variables may also
    be defined in the same `env:` block; i.e., given a test `ex1.c` with the following
    spec:

    ```yaml
    test:
      env: FOO=1 BAR=1

    # equivalently
    test:
      env: FOO=1
      env: BAR=1
    ```

    results in

    ```console
    $ export FOO=1; export BAR=1; ./ex1
    ```

  - Variables defined in an `env:` block are evaluated by the runscript in the order in
    which they are defined in the `TEST` block. Thus it is possible for later variables
    to refer to previously defined ones:

    ```yaml
    test:
      env: FOO='hello' BAR=${FOO}
    ```

    results in

    ```console
    $ export FOO='hello'; export BAR=${FOO}; ./ex1
    # expanded by shell to
    $ export FOO='hello'; export BAR='hello'; ./ex1
    ```

    Note this also implies that

    ```yaml
    test:
      env: FOO=1 FOO=0
    ```

    results in

    ```console
    $ export FOO=1; export FOO=0; ./ex1
    ```

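Putting several of these keywords together, a hypothetical test block (illustrative only; not from the PETSc source tree, and the filter command is an assumption) combines run requirements, parallelism, filtering, and an extended timeout:

```yaml
/*TEST
  testset:
    suffix: 3
    nsize: 2
    requires: !complex datafilespath
    args: -f ${DATAFILESPATH}/matrices/small
    filter: grep -v Norm
    timeoutfactor: 2
TEST*/
```

This requests a two-rank run of a real-numbers build with `DATAFILESPATH` set, strips lines containing `Norm` before the diff, and doubles the default timeout.
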
### Additional Specifications

In addition to the above keywords, other language features are
supported.

- **for loops**: Specifying `{{list of values}}` will generate a loop over
  an enclosed space-delimited list of values.
  It is supported within `nsize` and `args`. For example,

  ```
  nsize: {{1 2 4}}
  args: -matload_block_size {{2 3}shared output}
  ```

  Here the output for each `-matload_block_size` value is assumed to be
  the same, so only one output file is needed.

  If the loop causes different output for each loop iteration, then `separate output` needs to be used:

  ```
  args: -matload_block_size {{2 3}separate output}
  ```

  In this case, each loop value generates a separate script
  and uses a separate output file for comparison.

  Note that `{{...}}` is equivalent to `{{...}shared output}`.

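The effect of a `{{...}}` loop can be pictured as a shell loop over the listed values. Here is a stripped-down, self-contained sketch (illustrative only; the generated scripts are more elaborate) of what `args: -matload_block_size {{2 3}shared output}` expands to:

```shell
#!/bin/sh
# Sketch: expand the loop values into the commands that would be run.
# With "shared output", every iteration is diffed against the same file.
cmds=""
for bs in 2 3; do
  cmds="${cmds}mpiexec -n 1 ../ex1 -matload_block_size ${bs} > ex1.tmp
diff ex1.tmp output/ex1.out
"
done
printf '%s' "${cmds}"
```
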
(sec_testing_error_testing)=

### Testing Errors And Exceptional Code

It is possible (and encouraged!) to test error conditions within the test harness. Since
error messages produced by `SETERRQ()` and friends are not portable between systems,
additional arguments must be passed to tests to modify error handling, specifically:

```yaml
args: -petsc_ci_portable_error_output -error_output_stdout
```

These arguments have the following effect:

- `-petsc_ci_portable_error_output`: Strips system- or configuration-specific information
  from error messages. Specifically, this:

  - Removes all path components except the file name from the traceback
  - Removes line and column numbers from the traceback
  - Removes PETSc version information
  - Removes `configure` options used
  - Removes the system name
  - Removes the hostname
  - Removes the date

  With this option, error messages will be identical across systems, runs, and PETSc
  configurations (barring, of course, configurations in which the error is not raised).

  Furthermore, this option also changes the default behavior of the error handler to
  **gracefully** exit where possible. For single-rank runs this means returning with
  exit code `0` and calling `MPI_Finalize()` instead of `MPI_Abort()`. Multi-rank
  tests will call `MPI_Abort()` on errors raised on `PETSC_COMM_SELF`, but will call
  `MPI_Finalize()` otherwise.

- `-error_output_stdout`: Forces `SETERRQ()` and friends to dump error messages to
  `stdout` instead of `stderr`. While using `stderr` (alongside the `Error:`
  sub-directive under `filter:`) also works, it appears to be unstable under heavy
  load, especially in CI.

Using both options in tandem allows one to use the normal `output:` mechanism to compare
expected and actual error outputs.

When writing ASCII output that may not be portable, and you want
`-petsc_ci_portable_error_output` to cause that output to be skipped, enclose the output
in a guard such as

```
if (!PetscCIEnabledPortableErrorOutput)
```

to prevent it from being printed when the CI test harness is running.

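For example, a guard around non-portable output might look like the following sketch (the printed message is hypothetical):

```
if (!PetscCIEnabledPortableErrorOutput) {
  /* skipped when the CI test harness runs with -petsc_ci_portable_error_output */
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "Machine-specific diagnostic\n"));
}
```
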
### Test Block Examples

The following is the simplest test block:

```yaml
/*TEST
  test:
TEST*/
```

If this block is in `src/a/b/examples/tutorials/ex1.c`, then it will
create the `a_b_tutorials-ex1` test, which runs on one
process with no arguments and diffs the resultant output with
`src/a/b/examples/tutorials/output/ex1.out`.

For Fortran, the equivalent is

```fortran
!/*TEST
!  test:
!TEST*/
```

A more complete example, showing just the lines between `/*TEST` and `TEST*/`:

```yaml
test:
test:
  suffix: 1
  nsize: 2
  args: -t 2 -pc_type jacobi -ksp_monitor_short -ksp_type gmres
  args: -ksp_gmres_cgs_refinement_type refine_always -s2_ksp_type bcgs
  args: -s2_pc_type jacobi -s2_ksp_monitor_short
  requires: x
```

This creates two tests. Assuming that this is
`src/a/b/examples/tutorials/ex1.c`, the tests would be
`a_b_tutorials-ex1` and `a_b_tutorials-ex1_1`.

Following is an example of how to test a permutation of arguments
against the same output file:

```yaml
testset:
  suffix: 19
  requires: datafilespath
  args: -f0 ${DATAFILESPATH}/matrices/poisson1
  args: -ksp_type cg -pc_type icc -pc_factor_levels 2
  test:
  test:
    args: -mat_type seqsbaij
```

Assuming that this is `ex10.c`, there would be two mpiexec/diff
invocations in `runex10_19.sh`.

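In sketch form, mirroring the executable/diff pair shown earlier, `runex10_19.sh` would perform the following (the actual generated script adds logging and bookkeeping):

```sh
mpiexec -n 1 ../ex10 -f0 ${DATAFILESPATH}/matrices/poisson1 -ksp_type cg -pc_type icc -pc_factor_levels 2 > ex10_19.tmp
diff ex10_19.tmp output/ex10_19.out
mpiexec -n 1 ../ex10 -f0 ${DATAFILESPATH}/matrices/poisson1 -ksp_type cg -pc_type icc -pc_factor_levels 2 -mat_type seqsbaij > ex10_19.tmp
diff ex10_19.tmp output/ex10_19.out
```
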
Here is a similar example, but the permutation of arguments creates
different output:

```yaml
testset:
  requires: datafilespath
  args: -f0 ${DATAFILESPATH}/matrices/medium
  args: -ksp_type bicg
  test:
    suffix: 4
    args: -pc_type lu
  test:
    suffix: 5
```

Assuming that this is `ex10.c`, two shell scripts will be created:
`runex10_4.sh` and `runex10_5.sh`.

An example using a for loop is:

```yaml
testset:
  suffix: 1
  args: -f ${DATAFILESPATH}/matrices/small -mat_type aij
  requires: datafilespath
testset:
  suffix: 2
  output_file: output/ex138_1.out
  args: -f ${DATAFILESPATH}/matrices/small
  args: -mat_type baij -matload_block_size {{2 3}shared output}
  requires: datafilespath
```

In this example, `runex138_2.sh` will invoke the executable twice with
two different arguments, but both results are diffed against the same file.

Following is an example showing the hierarchical nature of the test
specification:

```yaml
testset:
  suffix: 2
  output_file: output/ex138_1.out
  args: -f ${DATAFILESPATH}/matrices/small -mat_type baij
  test:
    args: -matload_block_size 2
  test:
    args: -matload_block_size 3
```

This is functionally equivalent to the for loop shown above.

Here is a more complex example using for loops:

```yaml
testset:
  suffix: 19
  requires: datafilespath
  args: -f0 ${DATAFILESPATH}/matrices/poisson1
  args: -ksp_type cg -pc_type icc
  args: -pc_factor_levels {{0 2 4}separate output}
  test:
  test:
    args: -mat_type seqsbaij
```

If this is in `ex10.c`, then the shell scripts generated would be

- `runex10_19_pc_factor_levels-0.sh`
- `runex10_19_pc_factor_levels-2.sh`
- `runex10_19_pc_factor_levels-4.sh`

Each shell script would invoke the executable/diff pair twice.

### Build Language Options

You can specify issues related to the compilation of the source file
with the `build:` block. The language is as follows.

- **requires:** (*Optional*; *Default:* `""`)

  - Same as the runtime requirements (for example, can include
    `requires: fftw`) but also requirements related to types:

    1. Precision types: `single`, `double`, `quad`, `int32`
    2. Scalar types: `complex` (and `!complex`)

  - In addition, `TODO` is available to allow you to skip the build
    of this file but still maintain it in the source tree.

- **depends:** (*Optional*; *Default:* `""`)

  - List any dependencies required to compile the file.

A typical example for compiling only for real numbers is

```
/*TEST
  build:
    requires: !complex
  test:
TEST*/
```

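A hypothetical block combining both build keywords (the dependency file name is illustrative only) might be:

```
/*TEST
  build:
    requires: fftw !complex
    depends: helper.c
  test:
    nsize: 2
TEST*/
```

This compiles the file only for real-number builds configured with fftw, and records `helper.c` as a compile-time dependency.
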
## Running the tests

The make rules for running tests are contained in `gmakefile.test` in the PETSc root directory. They can usually be accessed by
simply using commands such as

```console
$ make test
```

or, for a list of test options,

```console
$ make help-test
```

### Determining the failed jobs of a given run

Running the test harness will show which tests fail, but you may not have
logged the output or may have run without showing the full error. The best way of
examining the errors is with this command:

```console
$ $EDITOR $PETSC_DIR/$PETSC_ARCH/tests/test*err.log
```

This method can also be used for the PETSc continuous integration (CI) pipeline jobs. For failed jobs, you can download the
log files from the `artifacts download` tab on the right side:

:::{figure} /images/developers/test-artifacts.png
:alt: Test Artifacts at Gitlab

Test artifacts can be downloaded from GitLab.
:::

To see the list of all tests that failed from the last run, you can also run this command:

```console
$ make print-test test-fail=1
```

To print it out in a column format:

```console
$ make print-test test-fail=1 | tr ' ' '\n' | sort
```

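The `tr`/`sort` pipeline is ordinary text processing; its effect can be previewed on a hypothetical space-separated list of failed-test names:

```shell
#!/bin/sh
# Split a space-separated list onto separate lines, then sort it
names="vec_tests-ex3_2 ksp_tests-ex1_1 mat_tests-ex2_1"
echo "${names}" | tr ' ' '\n' | sort
```
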
Once you know which tests failed, the question is how to debug them.

### Introduction to debugging workflows

Here, two different workflows for developing with the test harness are presented,
and then the language for adding a new test is described. Before describing the
workflows, we first discuss the output of the test harness and how it maps onto
makefile targets and shell scripts.

Consider this line from running the PETSc test system:

```
TEST arch-ci-linux-uni-pkgs/tests/counts/vec_is_sf_tests-ex1_basic_1.counts
```

The string `vec_is_sf_tests-ex1_basic_1` gives the following information:

- The file generating the tests is found in `$PETSC_DIR/src/vec/is/sf/tests/ex1.c`
- The makefile target for the *test* is `vec_is_sf_tests-ex1_basic_1`
- The makefile target for the *executable* is `$PETSC_ARCH/tests/vec/is/sf/tests/ex1`
- The shell script running the test is located at `$PETSC_DIR/$PETSC_ARCH/tests/vec/is/sf/tests/runex1_basic_1.sh`

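The mapping from a test target name back to its source file can be reproduced with ordinary shell string manipulation. This sketch is illustrative only and assumes the directory components themselves contain no underscores:

```shell
#!/bin/sh
target="vec_is_sf_tests-ex1_basic_1"
# Directory part: everything before the dash, with underscores as separators
dir=$(echo "${target%%-*}" | tr '_' '/')
# Executable base name: text after the dash, up to the first underscore
exe=$(echo "${target#*-}" | cut -d_ -f1)
echo "src/${dir}/${exe}.c"
```
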
Let's say that you want to debug a single test as part of development. There
are two basic methods of doing this: (1) use the generated shell script directly in the
test directory, or (2) use `gmakefile.test` from the top-level directory. We present both
workflows.

### Debugging a test using the generated shell scripts

First, look at the working directory and the options for the
scripts:

```console
$ cd $PETSC_ARCH/tests/vec/is/sf/tests
$ ./runex1_basic_1.sh -h
Usage: ./runex1_basic_1.sh [options]

OPTIONS
  -a <args> ......... Override default arguments
  -c ................ Cleanup (remove generated files)
  -C ................ Compile
  -d ................ Launch in debugger
  -e <args> ......... Add extra arguments to default
  -f ................ force attempt to run test that would otherwise be skipped
  -h ................ help: print this message
  -n <integer> ...... Override the number of processors to use
  -j ................ Pass -j to petscdiff (just use diff)
  -J <arg> .......... Pass -J to petscdiff (just use diff with arg)
  -m ................ Update results using petscdiff
  -M ................ Update alt files using petscdiff
  -o <arg> .......... Output format: 'interactive', 'err_only'
  -p ................ Print command: Print first command and exit
  -t ................ Override the default timeout (default=60 sec)
  -U ................ run cUda-memcheck
  -V ................ run Valgrind
  -v ................ Verbose: Print commands
```

We will be using the `-C`, `-m`, `-V`, and `-p` flags.

A basic workflow looks something like:

```console
$ <edit>
$ ./runex1_basic_1.sh -C
$ <edit>
$ ...
$ ./runex1_basic_1.sh -m # if you need to update the expected results
$ ...
$ ./runex1_basic_1.sh -V # make sure the run is Valgrind clean
$ cd $PETSC_DIR
$ git commit -a
```

For loops, it can sometimes become onerous to run the whole test.
In this case, you can use the `-p` flag to print just the first
command. It will print a command suitable for running from
`$PETSC_DIR`, but it is easy to modify for execution in the test
directory:

```console
$ ./runex1_basic_1.sh -p
```

### Debugging a PETSc test using gmakefile.test

First, recall how to find help for the options:

```console
$ make help-test
```

To compile the test and run it:

```console
$ make test search=vec_is_sf_tests-ex1_basic_1
```

This can form your basic workflow. However,
for a normal compile-and-edit cycle, running the entire harness with a search can be
cumbersome. So first get the command:

```console
$ make vec_is_sf_tests-ex1_basic_1 PRINTONLY=1
<copy command>
<edit>
$ make $PETSC_ARCH/tests/vec/is/sf/tests/ex1
$ /scratch/kruger/contrib/petsc-mpich-cxx/bin/mpiexec -n 1 arch-mpich-cxx-py3/tests/vec/is/sf/tests/ex1
...
$ cd $PETSC_DIR
$ git commit -a
```

### Advanced searching

When forming a search, it is recommended to always use `print-test` instead of
`test` to make sure it is returning the values that you want.

The three basic and recommended arguments are:

- `search` (or `s`)

  - Searches based on the name of the test target (see above)

  - Uses the familiar glob syntax (like the Unix `ls` command). Example:

    ```console
    $ make print-test search='vec_is*ex1*basic*1'
    ```

    Equivalently:

    ```console
    $ make print-test s='vec_is*ex1*basic*1'
    ```

  - It also takes full paths. Examples:

    ```console
    $ make print-test s='src/vec/is/tests/ex1.c'
    ```

    ```console
    $ make print-test s='src/dm/impls/plex/tests/'
    ```

    ```console
    $ make print-test s='src/dm/impls/plex/tests/ex1.c'
    ```

- `query` and `queryval` (or `q` and `qv`)

  - `query` corresponds to a test harness keyword, `queryval` to its value. Example:

    ```console
    $ make print-test query='suffix' queryval='basic_1'
    ```

  - Invokes `config/query_tests.py` to query the tests (see
    `config/query_tests.py --help` for more information).

  - See below for how to use it, as it has many features.

- `searchin` (or `i`)

  - Filters the results of the above searches. Example:

    ```console
    $ make print-test s='src/dm/impls/plex/tests/ex1.c' i='*refine_overlap_2d*'
    ```

Searching using GNU make's native filtering functionality is kept for people who like it, but most developers will likely prefer the above methods:

- `gmakesearch`

  - Uses GNU make's own filter capability.

  - Fast, but requires knowing GNU make's pattern syntax, which uses `%` instead of `*`

  - Also very limited (cannot use two `%`'s, for example)

  - Example:

    ```console
    $ make test gmakesearch='vec_is%ex1_basic_1'
    ```

- `gmakesearchin`

  - Uses GNU make's own filter capability to search within previous results. Example:

    ```console
    $ make test gmakesearch='vec_is%1' gmakesearchin='basic'
    ```

### Query-based searching

Note that glob-style matching is also accepted in the value field:

```console
$ make print-test query='suffix' queryval='basic_1'
```

```console
$ make print-test query='requires' queryval='cuda'
```

```console
$ make print-test query='requires' queryval='defined(PETSC_HAVE_MPI_GPU_AWARE)'
```

```console
$ make print-test query='requires' queryval='*GPU_AWARE*'
```

Using the `name` field is equivalent to the search above:

- Example:

  ```console
  $ make print-test query='name' queryval='vec_is*ex1*basic*1'
  ```

- This can be combined with union/intersection queries as discussed below

Arguments are tricky to search for. Consider

```none
args: -ksp_monitor_short -pc_type ml -ksp_max_it 3
```

The search terms are

```none
ksp_monitor, pc_type ml, ksp_max_it
```

Certain items are ignored:

- Numbers (see `ksp_max_it` above, where the value 3 is dropped); floats are ignored as well.
- Loops: `args: -pc_fieldsplit_diag_use_amat {{0 1}}` gives `pc_fieldsplit_diag_use_amat` as the search term
- Input files: `-f *`

Examples of argument searching:

```console
$ make print-test query='args' queryval='ksp_monitor'
```

```console
$ make print-test query='args' queryval='*monitor*'
```

```console
$ make print-test query='args' queryval='pc_type ml'
```

Multiple simultaneous queries can be performed with the union (`,`) and intersection
(`|`) operators in the `query` field. One may also use their alternate spellings
(`%AND%` and `%OR%`, respectively). The alternate spellings are useful in cases where
one cannot avoid (possibly multiple) shell expansions that might otherwise interpret the
`|` operator as a shell pipe. Examples:

- All examples using `cuda` and all examples using `hip`:

  ```console
  $ make print-test query='requires,requires' queryval='cuda,hip'
  # equivalently
  $ make print-test query='requires%AND%requires' queryval='cuda%AND%hip'
  ```

- Examples that require both `triangle` and `ctetgen` (intersection of tests):

  ```console
  $ make print-test query='requires|requires' queryval='ctetgen,triangle'
  # equivalently
  $ make print-test query='requires%OR%requires' queryval='ctetgen%AND%triangle'
  ```

- Tests that require either `ctetgen` or `triangle`:

  ```console
  $ make print-test query='requires,requires' queryval='ctetgen,triangle'
  # equivalently
  $ make print-test query='requires%AND%requires' queryval='ctetgen%AND%triangle'
  ```

- Find `cuda` examples in the `dm` package:

  ```console
  $ make print-test query='requires|name' queryval='cuda,dm*'
  # equivalently
  $ make print-test query='requires%OR%name' queryval='cuda%AND%dm*'
  ```

Here is a way of getting a feel for how the union and intersection operators work:

```console
$ make print-test query='requires' queryval='ctetgen' | tr ' ' '\n' | wc -l
170
$ make print-test query='requires' queryval='triangle' | tr ' ' '\n' | wc -l
330
$ make print-test query='requires,requires' queryval='ctetgen,triangle' | tr ' ' '\n' | wc -l
478
$ make print-test query='requires|requires' queryval='ctetgen,triangle' | tr ' ' '\n' | wc -l
22
```

Running the ctetgen tests and the triangle tests separately executes 500 tests in total
(170 + 330); the two sets have 22 tests in common, so the union query reports 478
distinct tests.

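These counts obey simple inclusion-exclusion, which can be checked directly:

```python
# Counts reported by the queries above
ctetgen = 170       # tests requiring ctetgen
triangle = 330      # tests requiring triangle
both = 22           # tests requiring both packages (the '|' query)

# Distinct tests requiring at least one of the two (the ',' query)
union = ctetgen + triangle - both
print(union)  # prints 478
```
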
The union and intersection operators have fixed grouping. So this string argument

```none
query='requires,requires|args' queryval='cuda,hip,*log*'
# equivalently
query='requires%AND%requires%OR%args' queryval='cuda%AND%hip%AND%*log*'
```

can be read as

```none
requires:cuda && (requires:hip || args:*log*)
```

which is probably not what is intended.

`query`/`queryval` also support negation (`!`, alternate spelling `%NEG%`), but it is limited.
The negation only applies to tests that contain the related field. So, for example, the
arguments

```console
query=requires queryval='!cuda'
# equivalently
query=requires queryval='%NEG%cuda'
```

will only match tests that explicitly have:

```
requires: !cuda
```

It does not match all cases that do not require cuda.

### Debugging for loops

One of the more difficult issues is how to debug for loops when a particular
combination of arguments causes a code crash. The default naming scheme is
not always helpful for figuring out the argument combination.

For example:

```console
$ make test s='src/ksp/ksp/tests/ex9.c' i='*1'
Using MAKEFLAGS: i=*1 s=src/ksp/ksp/tests/ex9.c
        TEST arch-osx-pkgs-opt-new/tests/counts/ksp_ksp_tests-ex9_1.counts
 ok ksp_ksp_tests-ex9_1+pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_type-additive
 not ok diff-ksp_ksp_tests-ex9_1+pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_type-additive
 ok ksp_ksp_tests-ex9_1+pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_type-multiplicative
 ...
```

In this case, the trick is to use the verbose option, `V=1` (or, for the shell script workflows, `-v`), to have it show the commands:

```console
$ make test s='src/ksp/ksp/tests/ex9.c' i='*1' V=1
Using MAKEFLAGS: V=1 i=*1 s=src/ksp/ksp/tests/ex9.c
arch-osx-pkgs-opt-new/tests/ksp/ksp/tests/runex9_1.sh  -v
 ok ksp_ksp_tests-ex9_1+pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_diag_use_amat-0_pc_fieldsplit_type-additive # mpiexec  -n 1 ../ex9 -ksp_converged_reason -ksp_error_if_not_converged  -pc_fieldsplit_diag_use_amat 0 -pc_fieldsplit_diag_use_amat 0 -pc_fieldsplit_type additive > ex9_1.tmp 2> runex9_1.err
...
```

This can still be hard to read and pick out what you want. So use the fact that you want `not ok`
combined with the fact that `#` is the delimiter:

```console
$ make test s='src/ksp/ksp/tests/ex9.c' i='*1' V=1 | grep 'not ok' | cut -d# -f2
mpiexec  -n 1 ../ex9 -ksp_converged_reason -ksp_error_if_not_converged  -pc_fieldsplit_diag_use_amat 0 -pc_fieldsplit_diag_use_amat 0 -pc_fieldsplit_type multiplicative > ex9_1.tmp 2> runex9_1.err
```

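The `grep`/`cut` step is plain text processing and can be previewed on a captured line (the line below is a shortened, hypothetical sample of harness output):

```shell
#!/bin/sh
# Extract the command after the '#' delimiter from a "not ok" line
line='not ok ksp_ksp_tests-ex9_1+pc_fieldsplit_type-additive # mpiexec -n 1 ../ex9 -pc_fieldsplit_type additive > ex9_1.tmp'
cmd=$(echo "${line}" | grep 'not ok' | cut -d'#' -f2)
echo "${cmd}"
```
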
## PETSc Test Harness

The goals of the PETSc test harness are threefold:

1. Provide standard output used by other testing tools
2. Be as lightweight as possible and easily fit within the PETSc build chain
3. Provide information on all tests, even those that are not built or run because they do not meet the configuration requirements

Before understanding the test harness, you should first understand the
desired requirements for reporting and logging.

### Testing the Parsing

After inserting the language into the file, you can test the parsing by
running the parser, `config/testparse.py`, directly on the file.

A dictionary will be pretty-printed. From this dictionary printout, any
problems in the parsing are usually obvious. This Python file is used
by `config/gmakegentest.py` in generating the test harness.

## Test Output Standards: TAP

The PETSc test system is designed to be compliant with the [Test Anything Protocol (TAP)](https://testanything.org/tap-specification.html).

This is a simple standard designed to allow testing tools to work
together easily. Libraries exist to make consuming TAP output easy,
including sharness, which is used by the Git team. However, the
simplicity of the PETSc tests and the TAP specification means that we use
our own simple harness, a single shell script that each test script
sources: `$PETSC_DIR/config/petsc_harness.sh`.

As an example, consider this test input:

```yaml
test:
  suffix: 2
  output_file: output/ex138.out
  args: -f ${DATAFILESPATH}/matrices/small -mat_type {{aij baij sbaij}} -matload_block_size {{2 3}}
  requires: datafilespath
```
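The `{{...}}` brace lists multiply: the generator takes the Cartesian product of all lists, so this single `test:` block yields six runs. Conceptually (a sketch of the expansion, not the generator's actual code):

```shell
# Sketch: how {{aij baij sbaij}} x {{2 3}} expands into six command lines.
for mat_type in aij baij sbaij; do
  for bs in 2 3; do
    echo "./ex138 -f \${DATAFILESPATH}/matrices/small -mat_type $mat_type -matload_block_size $bs"
  done
done
```

Each expanded run is paired with a `diff` against `output/ex138.out`, giving twelve TAP entries in total.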

A sample output from this would be:

```
ok 1 In mat...tests: "./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type aij -matload_block_size 2"
ok 2 In mat...tests: "Diff of ./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type aij -matload_block_size 2"
ok 3 In mat...tests: "./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type aij -matload_block_size 3"
ok 4 In mat...tests: "Diff of ./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type aij -matload_block_size 3"
ok 5 In mat...tests: "./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type baij -matload_block_size 2"
ok 6 In mat...tests: "Diff of ./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type baij -matload_block_size 2"
...

ok 11 In mat...tests: "./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type sbaij -matload_block_size 3"
ok 12 In mat...tests: "Diff of ./ex138 -f ${DATAFILESPATH}/matrices/small -mat_type sbaij -matload_block_size 3"
```

## Test Harness Implementation

Most of the requirements for being TAP-compliant lie in the shell
scripts, so we focus on that description.

A sample shell script is given in the following.

```sh
#!/bin/sh
. petsc_harness.sh

petsc_testrun ./ex1 ex1.tmp ex1.err
petsc_testrun 'diff ex1.tmp output/ex1.out' diff-ex1.tmp diff-ex1.err

petsc_testend
```

`petsc_harness.sh` is a small shell script that provides the logging and reporting
functions `petsc_testrun` and `petsc_testend`.
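The real functions also manage error logs, output filters, and alternate output files, but their core logic can be sketched in a few lines of POSIX sh (a simplified illustration, not the actual contents of `petsc_harness.sh`):

```shell
# Simplified sketch of the harness logic (illustration only; the real
# petsc_harness.sh also manages error logs, filters, and alternate outputs).
total=0; failed=0; failures=""

petsc_testrun () {   # $1: command  $2: stdout file  $3: stderr file
  total=$((total + 1))
  if eval "$1" > "$2" 2> "$3"; then
    printf 'ok %d %s\n' "$total" "$1"
  else
    printf 'not ok %d %s\n' "$total" "$1"
    failed=$((failed + 1))
    failures="$failures $total"
  fi
}

petsc_testend () {   # print the trailing TAP-style summary
  if [ -n "$failures" ]; then printf '# FAILED%s\n' "$failures"; fi
  printf '# failed %d/%d tests\n' "$failed" "$total"
}

petsc_testrun 'true'  t1.tmp t1.err   # succeeds: prints "ok 1 true"
petsc_testrun 'false' t2.tmp t2.err   # fails:    prints "not ok 2 false"
petsc_testend
```

Each `petsc_testrun` call redirects stdout and stderr to the named files, emits one TAP line per command, and accumulates the failure list that `petsc_testend` reports at the end.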

A small sample of the output from the test harness is as follows.

```none
ok 1 ./ex1
ok 2 diff ex1.tmp output/ex1.out
not ok 4 ./ex2
#   ex2: Error: cannot read file
not ok 5 diff ex2.tmp output/ex2.out
ok 7 ./ex3 -f /matrices/small -mat_type aij -matload_block_size 2
ok 8 diff ex3.tmp output/ex3.out
ok 9 ./ex3 -f /matrices/small -mat_type aij -matload_block_size 3
ok 10 diff ex3.tmp output/ex3.out
ok 11 ./ex3 -f /matrices/small -mat_type baij -matload_block_size 2
ok 12 diff ex3.tmp output/ex3.out
ok 13 ./ex3 -f /matrices/small -mat_type baij -matload_block_size 3
ok 14 diff ex3.tmp output/ex3.out
ok 15 ./ex3 -f /matrices/small -mat_type sbaij -matload_block_size 2
ok 16 diff ex3.tmp output/ex3.out
ok 17 ./ex3 -f /matrices/small -mat_type sbaij -matload_block_size 3
ok 18 diff ex3.tmp output/ex3.out
# FAILED   4 5
# failed 2/16 tests; 87.500% ok
```

Developers can change the lines that get written to the generated shell scripts by
modifying `$PETSC_DIR/config/example_template.py`.

To modify the test harness itself, edit `$PETSC_DIR/config/petsc_harness.sh`.

### Additional Tips

To rerun just the reporting use

```console
$ config/report_tests.py
```

To see the full options use

```console
$ config/report_tests.py -h
```

To see the full timing information for the five most expensive tests use

```console
$ config/report_tests.py -t 5
```
1054