
Regression Testing

Cameron Smith edited this page Oct 9, 2018 · 8 revisions

Testing is crucial to developing reliable software. SCOREC/core testing comes in three main forms:

  1. Continuous integration tests using Travis CI.
  2. Regression tests that developers are encouraged to run locally on their own machines after making changes.
  3. Nightly automation which runs those same regression tests and automatically merges changes if tests pass.

CTest and CDash

We use CTest as our testing framework, which is closely related to the CMake build system that we use.

CMake is a build system with its own programming language in which one specifies how software is compiled. CTest comes in two parts:

  • A set of commands in the CMake language that allow describing your tests and how to run them.
  • A program ctest which runs the specified tests and reports results.

CDash is a server system for collecting and displaying testing results. It is a program that runs on a web server, receiving reports from CTest and displaying them as web pages.

Finally, my.cdash.org is a free hosting service that runs many instances of CDash. People can sign up and request their own CDash instance, which will run on Kitware servers. Then all they have to do is run CTest locally, which uploads their results to this CDash instance, and everyone else can view the results on a web page.

Here is SCOREC's CDash web page, and Albany's.

SCOREC Regression tests

The SCOREC regression tests are specified in this file. It is a CMake language file using CTest commands. Specifically, the add_test command creates a test. We have also created our own mpi_test command which handles two things:

  • MPI parallel tests
  • Running the Valgrind memory bug detector
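As a sketch of how such a wrapper can be written (the exact signature of mpi_test in the testing file may differ; the function name, argument order, and the MPIRUN and VALGRIND variables here are illustrative assumptions):

```cmake
# Hypothetical sketch of an mpi_test-style wrapper around add_test.
# MPIRUN and VALGRIND are assumed cache variables; the real testing file may differ.
function(mpi_test testname procs exe)
  if(VALGRIND)
    # Run the executable under Valgrind to catch memory bugs
    add_test(NAME ${testname}
             COMMAND ${MPIRUN} -np ${procs} ${VALGRIND} ${exe} ${ARGN})
  else()
    add_test(NAME ${testname}
             COMMAND ${MPIRUN} -np ${procs} ${exe} ${ARGN})
  endif()
endfunction()

# Example use: run a (hypothetical) "verify" executable on 4 MPI ranks
mpi_test(verify_cube 4 ./verify cube.smb)
```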

Test Meshes

Most codes that do something with meshes require a mesh file as input, and ours is no exception. One of the key dilemmas in software engineering today is where to place such input files. Here are the reasons why mesh files should be in something like a repository:

  • Mesh files should be readily available for download by developers and users who intend to run tests.
  • We would like to keep careful track of mesh files which define expected regression test results, i.e. there should be no ambiguity about which files to use.
  • With proper access controls, users should be able to change the mesh files as needed to reflect changes in the software.

However, here are some reasons why it would be bad to put them in today's source code repositories:

  • Mesh files are large, typically consuming orders of magnitude more disk space than the source code. If mesh files are placed in the same repository as the source code, it may force all users to download all mesh files even if they only need to download the source code (this is especially true for a Git repository). Due to their size, they would slow downloads significantly.
  • Mesh files are typically modified in a wholesale manner, meaning that if we replace one with another there is little in common between the two files. Version control systems tend to store differences between versions of a file, which saves significant space in the case of source code, which is modified a little at a time. This does not help in the case of mesh files, exacerbating the size issue.
  • Git additionally forces users to download all history by default, doubly exacerbating the size issue.
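For repositories that do carry large files and long history, a shallow clone mitigates the download cost somewhat; for example (the --depth option is standard Git):

```shell
# Clone only the latest commit of each branch, skipping old history
git clone --depth 1 https://github.com/SCOREC/core.git
```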

Our current approach leaves mesh files outside the source code repository to ensure fast and small downloads for users who do not need to run tests. They are currently kept as a single archive file on the SCOREC web server, which can be downloaded here. Because it is not a repository, it is somewhat inconvenient to make modifications, but the process is documented here and here.

When looking at the testing file, one sees many references to a directory variable MESHES. This is a user-specified CMake variable which users should set to be the location where they have extracted the test meshes from the archive, if they intend to run tests. In addition, the IS_TESTING variable should be set to true:

cmake /path/to/core/source \
-DMESHES="/lore/dibanez/meshes" \
-DIS_TESTING=True

Running regression tests

After obtaining test meshes as described in the above section, and compiling with the described CMake options, one should be able to run the ctest command in the build directory and see tests run and get a report of the results. Note that results will not be reported to CDash, only to the local terminal.

Developers are strongly encouraged to run CTest locally before pushing changes.
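A typical local run looks like the following (the pumi pattern below is illustrative; use ctest -N to list the actual test names in your build):

```shell
cd build                      # the CMake build directory
ctest                         # run every registered test
ctest -N                      # list tests without running them
ctest -R pumi                 # run only tests whose names match a regex
ctest --output-on-failure     # print a failing test's output to the terminal
```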

Nightly Testing

The Nightly testing system does a more thorough job of testing our software on a daily basis, and automates merging as described here. It is composed of several pieces, and as far as we know is a fairly typical setup, similar to what is done even at Kitware.

Cron Job

The tool for scheduling something to run periodically on Unix/Linux systems is called [cron](https://en.wikipedia.org/wiki/Cron). So our process begins with a so-called crontab file that schedules a job. You can see this file here. In order to schedule the nightly job, one should input the contents of that file via the crontab editor, which is run like so:

crontab -e
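A crontab entry has five scheduling fields followed by the command to run. For instance, an entry like the following (the script path is a placeholder, not the actual path from the repository) would run a nightly script at 1:00 AM every day:

```
# minute hour day-of-month month day-of-week  command
0 1 * * * /path/to/nightly.sh
```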

The cron job runs a shell script which can be found here. This shell script does several things:

  • Load the proper environment
  • Run CTest several ways and upload the results to CDash
  • Update the online documentation for our software, which involves reading the latest annotated comments in the source code. This is done by the Doxygen tool.
  • Prepare information for Coverity analysis of our source code.

CTest script

As noted earlier, a regular CTest run by a developer will not upload results to CDash, but the Nightly job will do that and more. The difference is contained in this script. It indicates to CTest where to upload all results, where to store intermediate files, and what configurations of our repository to test. It tests the develop and master branches of both core and core-sim, as well as possible merges from one of these branches to another, and pushes the merges for which the build and tests pass. This process is described in more detail here.
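The general shape of such a CTest driver script (run via ctest -S) uses the standard CTest dashboard commands; the site, build name, and paths below are placeholders, not those of the real script:

```cmake
# Minimal sketch of a CTest -S dashboard script; the real script sets many more options
set(CTEST_SITE "my-machine")
set(CTEST_BUILD_NAME "linux-gcc-develop")
set(CTEST_SOURCE_DIRECTORY "/path/to/core")
set(CTEST_BINARY_DIRECTORY "/path/to/build")
set(CTEST_CMAKE_GENERATOR "Unix Makefiles")

ctest_start(Nightly)          # begin a Nightly dashboard run
ctest_update()                # pull the latest changes
ctest_configure()             # run CMake
ctest_build()                 # compile
ctest_test()                  # run the regression tests
ctest_submit()                # upload all results to CDash
```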

Coverity

Coverity is a product of the Synopsys company that reads source code and tries to spot bugs or bad style. Coverity Scan is an online service where open-source projects can upload a distilled version of their source code and Coverity will look for bugs in it. The results can be viewed in a web browser.

The nightly shell script distills our source code into a form that Coverity will accept. However, the Coverity Scan servers are currently fairly backed up, meaning that an analysis result has a turnaround time of approximately four days. For this reason, we only upload our distilled source code once a week (on Wednesday), and get the result back sometime near the weekend.
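The distillation step uses Coverity's cov-build tool, which wraps the normal build and records a capture directory that is then archived for upload (the capture directory and archive names below are conventional examples, not the ones our script necessarily uses):

```shell
# Wrap the build so Coverity captures the compilation; cov-int is the capture directory
cov-build --dir cov-int make -j4
# Archive the capture directory for upload to Coverity Scan
tar czf core-coverity.tgz cov-int
```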