2 changes: 1 addition & 1 deletion doc/conf.py
@@ -75,7 +75,7 @@

# General information about the project.
project = u'Clawpack'
copyright = u'CC-BY 2024, The Clawpack Development Team'
copyright = u'CC-BY 2026, The Clawpack Development Team'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
1 change: 1 addition & 0 deletions doc/contents.rst
@@ -63,6 +63,7 @@ Examples and Applications
fvmbook
contribute_apps
testing
testing_refactor
sphinxdoc

.. _contents_fortcodes:
23 changes: 14 additions & 9 deletions doc/fortran_compilers.rst
@@ -77,6 +77,20 @@ and some testing abilities. The `PPFLAGS` environment variable is meant to
provide further control of the pre-processor.


.. _fortran_NETCDF:

Compiling with NetCDF Support
-----------------------------

For NetCDF we provide convenience flags for compiling with the NetCDF library::

FFLAGS = -DNETCDF $(NETCDF_FFLAGS)
LFLAGS = $(NETCDF_LFLAGS)

These flags are determined using the utilities `nf-config` and `pkg-config`.
If these are not available, the older `NETCDF4_DIR` environment variable is
used and remains supported.
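
As an illustration of this discovery order, the following sketch queries
`nf-config` when it is on the `PATH` and falls back to `NETCDF4_DIR`
otherwise. It is illustrative only; the real logic lives in the Clawpack
Makefiles, and the function name `netcdf_flags` is hypothetical:

.. code-block:: python

    import os
    import shutil
    import subprocess

    def netcdf_flags():
        """Return (FFLAGS, LFLAGS) additions for NetCDF, preferring nf-config."""
        if shutil.which("nf-config"):
            # nf-config reports the compile and link flags for the installed library
            fflags = subprocess.run(["nf-config", "--fflags"],
                                    capture_output=True, text=True).stdout.strip()
            lflags = subprocess.run(["nf-config", "--flibs"],
                                    capture_output=True, text=True).stdout.strip()
            return fflags, lflags
        # Fall back to the older NETCDF4_DIR environment variable
        root = os.environ.get("NETCDF4_DIR", "")
        return ("-I" + root + "/include" if root else "", "-lnetcdf")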


.. _fortran_gfortran:

gfortran compiler
@@ -102,15 +116,6 @@ gfortran compiler

**Note:** Versions of gfortran before 4.6 are known to have OpenMP bugs.

* For using NetCDF::

FFLAGS = -DNETCDF -lnetcdf -I$(NETCDF4_DIR)/include
LFLAGS = -lnetcdf

The `FFLAGS` can also be put into `PPFLAGS`. Note that the variable
`NETCDF4_DIR` should be defined in the environment.


.. _fortran_intel:

Intel fortran compiler
186 changes: 184 additions & 2 deletions doc/testing.rst
@@ -5,6 +5,189 @@
Testing your installation
===================================================================

Clawpack has switched from using `nose` tests to
`pytest <https://docs.pytest.org/>`_.

See :ref:`testing_refactor` for more information about the switch,
and :ref:`legacy_testing` for some notes on using `nose`.

PyClaw Tests
------------

You can exercise all of the tests in PyClaw by running the following commands
from the base of the `pyclaw` directory:

.. code-block:: console

cd $CLAW/pyclaw
pytest


Fortran Regression Tests
-------------------------

The Fortran code in Clawpack has a suite of regression tests that can be run to
check that the code is working properly. In each of the Fortran packages there
is a series of regression tests alongside some of the examples, as well as some
tests of Python functionality. All of these tests can be run by going to the
base directory of the corresponding package and running:

.. code-block:: console

pytest

The most useful option for debugging a failing test is to use:

.. code-block:: console

pytest --basetemp=./test_output

which will save the output from the tests into the directory `test_output`. The
package `pytest` also has a number of additional debugging options that you can
use. See the `pytest documentation <https://docs.pytest.org/>`_ for more
details.

Hints
^^^^^
- Often the output from a failing test will overwhelm the console. In this
  case, you can use the following to pipe the output into the file `log.txt`
  and inspect it directly:

.. code-block:: console

pytest --basetemp=./test_output > log.txt 2>&1

- If you would like to use a different `setrun.py` file for testing, you can
  modify the test script to point to it.
- If you would like to plot the output of a test, you can use the same plotting
  tools that are used for the examples. If you used the `\--basetemp` option
  above, the output of the test can be found in the `test_output` directory.
  For example, this code will run the test, save the output into a
  subdirectory of `test_output`, and then plot the output from the specified
  subdirectory:

.. code-block:: console

cd $CLAW/classic/examples/acoustics_1d_example1
pytest --basetemp=./test_output .
python plotclaw.py test_output/test_acoustics_1d_example1/ ./_plots ./setplot.py

- If you would like to plot output from a test whose output was saved, e.g.
  with `\--basetemp=./test_output`, you can use the same plotting commands.
  For example, this code will plot the output from the test
  `test_acoustics_1d_example1`:

.. code-block:: console

python plotclaw.py test_output/test_acoustics_1d_example1#/ ./_plots ./setplot.py

  Note that the `#` in the command above stands for the numeric suffix of the
  subdirectory of `test_output` that contains the output from the test. You
  can use this same command to plot the output from any test whose output you
  have saved. The script `plotclaw.py` is in VisClaw.

Adding Regression Tests
-----------------------

If you want to add a new regression test using the new `pytest` framework, you
can follow along with this example for `acoustics_1d_example1`. If something
more complicated is needed, take a look at the other tests available in the
packages, or reach out to the developers for help.

Adding a Test for `acoustics_1d_example1`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. Create a new file in the `examples/acoustics_1d_example1` directory called `test_acoustics_1d_example1.py` by:

.. code-block:: console

touch examples/acoustics_1d_example1/test_acoustics_1d_example1.py

and place the following content in it:

.. code-block:: python
:linenos:

#!/usr/bin/env python

from pathlib import Path
import pytest

import clawpack.classic.test as test


def test_acoustics_1d_example1(tmp_path: Path, save: bool):
runner = test.ClassicTestRunner(tmp_path,
test_path=Path(__file__).parent)

runner.set_data()

runner.rundata.clawdata.num_output_times = 2
runner.rundata.clawdata.tfinal = 1.0
runner.rundata.clawdata.output_t0 = False

runner.write_data()

runner.executable_name = "xclaw"
runner.build_executable()
runner.run_code()

runner.check_frame(1, indices=(0, 1), save=save)
runner.check_frame(2, indices=(0, 1), save=save)

if __name__ == "__main__":
pytest.main([__file__])

This file is executable from the command line. The middle section modifies the
settings from the local `setrun.py` file to make the test small and
deterministic, and the final section runs the test when the file is executed
directly. You can run this test with:

.. code-block:: console

python test_acoustics_1d_example1.py

or with:

.. code-block:: console

pytest test_acoustics_1d_example1.py


2. We now need to generate the expected results for this test. To do this, run the test with the `\--save` option:

.. code-block:: console

pytest test_acoustics_1d_example1.py --save

This will run the test and save the results in a directory called
`regression_data` in the same directory as the test. This data contains the
expected results for the test and will be used for comparison in future runs.
If you would like to see the full output of the test, add
`\--basetemp=./test_output` to the command above, which will save the output
from the test into the directory `test_output`.
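
Conceptually, `check_frame` with the `save` flag follows a save-or-compare
pattern. The sketch below is illustrative only; the actual `ClassicTestRunner`
API may differ, and `check_values` is a hypothetical name:

.. code-block:: python

    import json
    import math
    from pathlib import Path

    def check_values(name, values, data_dir, save=False, rel_tol=1e-12):
        """Save ``values`` as the regression baseline when ``save`` is
        True; otherwise compare them against the stored baseline."""
        baseline = Path(data_dir) / (name + ".json")
        if save:
            # Regenerate the baseline intentionally (the --save path)
            baseline.parent.mkdir(parents=True, exist_ok=True)
            baseline.write_text(json.dumps(values))
            return
        # Normal run: compare against the stored baseline within tolerance
        expected = json.loads(baseline.read_text())
        assert len(expected) == len(values)
        for e, v in zip(expected, values):
            assert math.isclose(e, v, rel_tol=rel_tol)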


3. Now you can run the test without the `\--save` option to check that it is working properly. If the test passes, you should see output similar to this:

.. code-block:: console

============================= test session starts ==============================
platform darwin -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /path/to/clawpack/classic/examples/acoustics_1d_example1
collected 1 item

test_acoustics_1d_example1.py . [100%]

============================== 1 passed in 5.00s ===============================

To complete the test, you will want to add both the test script
`test_acoustics_1d_example1.py` and the regression data to the repository.

.. _legacy_testing:

Legacy Testing
-------------------------


Tests via `nose` are no longer supported, but if you have an older version of
Clawpack installed and `nosetests` available, you can still run the old tests.
These are not as comprehensive as the new `pytest` tests, but they can be
useful for checking that your installation is working properly.


PyClaw
------
If you downloaded Clawpack manually, you can test your :ref:`pyclaw`
@@ -27,7 +210,7 @@ As a first test of the Fortran code, try the following::


This will run several tests and compare a few numbers from the solution with
archived results. The tests should run in a few seconds and
you should see output similar to this::

runTest (tests.acoustics_1d_heterogeneous.regression_tests.Acoustics1DHeterogeneousTest) ... ok
@@ -46,4 +229,3 @@ There are similar `tests` subdirectories of `$CLAW/amrclaw` and
More extensive tests can be performed by running all of the examples in the
`examples` directory and comparing the resulting plots against those
archived in the :ref:`galleries`. See also :ref:`regression`.

120 changes: 120 additions & 0 deletions doc/testing_refactor.rst
@@ -0,0 +1,120 @@
.. _testing_refactor:

=========================
Clawpack Testing Refactor
=========================

.. seealso::
- :ref:`testing`

Overview
--------

Clawpack is moving to a pytest-based testing model built around example-local regression tests and shared test infrastructure in clawutil.

This refactor is motivated by the need to:
- simplify test authoring
- reduce custom test scaffolding
- better match pytest conventions
- improve CI integration
- support incremental migration from the legacy regression framework

Current reference implementations include:

- https://github.com/clawpack/clawutil/issues/187
- https://github.com/clawpack/classic/issues/96
- https://github.com/clawpack/amrclaw/issues/310

Design decisions
----------------

1. **Pytest is the system-wide test runner** - All new tests should be written
for pytest.
2. **Example-based regression tests are the primary solver test model** - For
   solver-heavy code, the canonical test is a small example that:

   - writes input data
   - builds using the example Makefile
   - runs in a temporary directory
   - compares output to saved regression data

3. **Shared testing infrastructure lives in clawutil** - Common runner logic and
helpers should be centralized rather than duplicated across repositories.
4. **Tests should use the real build workflow** - Tests should exercise the same
example Makefile workflow that users rely on.
5. **Fresh builds should be explicit** - Tests should request a fresh build
through the runner or build target, rather than relying on import-time
cleanup or hidden state mutation.
6. **Legacy test infrastructure is transitional** - Existing legacy tests may
remain temporarily, but new tests should follow the pytest model and old
tests should be migrated over time.

Test layout
-----------

A typical migrated example should contain::

example_name/
Makefile
setrun.py
test_example_name.py
regression_data/
frame0001.txt
frame0002.txt

Typical test workflow
---------------------

A typical example test:

1. creates or modifies rundata
2. writes data files
3. builds the executable
4. runs in `tmp_path`
5. compares selected frames or diagnostics

Regression data policy
----------------------

Regression data should be:

- small
- reviewable in a PR
- deterministic
- specific to the example

Use `--save` to regenerate baselines intentionally.
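
The `\--save` flag can be wired up through a `conftest.py`. The following is a
minimal sketch of how the option and the `save` fixture used by the example
tests could be defined; the actual Clawpack `conftest.py` may differ:

.. code-block:: python

    # conftest.py
    import pytest

    def pytest_addoption(parser):
        # Register the --save flag used to regenerate regression baselines
        parser.addoption("--save", action="store_true", default=False,
                         help="regenerate regression baselines instead of comparing")

    @pytest.fixture
    def save(request):
        # Expose the flag to tests as a boolean fixture argument
        return request.config.getoption("--save")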

CI policy
---------

CI should:

- run pytest directly
- store test artifacts in a predictable directory
- prefer fast, stable examples in PR checks
- allow broader coverage in scheduled or extended workflows

Data Included in the Repository for CI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Example regression tests should avoid external downloads when possible. Small,
stable input files should be checked into the repository. Download and
conversion logic should be tested separately in focused utility tests.

Compiler Flags and Numerical Reproducibility
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Regression tests are sensitive to floating-point roundoff and compiler
optimizations. To ensure stable and reproducible results across platforms,
CI uses conservative optimization flags (e.g., `-O1`).

Higher optimization levels may produce small numerical differences and are
not currently used for regression validation.

Migration guidance
------------------

When migrating an old test:
- prefer example-local placement
- move shared behavior into clawutil
- remove hidden setup side effects
- keep the test close to the user-facing workflow

Reference example
-----------------
`$CLAW/classic/examples/acoustics_1d_heterogeneous/test_acoustics_1d_heterogeneous.py`
is intended to serve as an example setup.