Testing Infrastructure¶
This section documents the decisions made regarding the EasyNMEA testing infrastructure.
Testing Framework¶
The EasyNMEA testing framework has to cope with the following requirements:
Easy to integrate with CMake
Easy to integrate with GitHub actions
Wide adoption, so that new contributors can write tests effortlessly
Mocking capabilities, since at least Asio will have to be mocked
Extensive documentation
Easy to find answers to common problems
Can be used to create tests for the documentation
To satisfy these requirements, EasyNMEA uses GTest as its testing framework. This decision was made for a number of reasons:
Wide adoption
A very large community, which means plenty of Q&A material everywhere
Very good documentation with examples
Out-of-the-box mock support
Direct integration with CMake
GitHub integration merely consists of an action that installs GTest.
Other testing frameworks, such as Catch and Boost.Test, were also considered; however, they were discarded:
Catch seemed very promising, especially as a header-only library, but its lack of mocking support is unfortunately a no-go for EasyNMEA.
Boost.Test also offers a header-only version, but again, it does not have built-in mocking support.
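As an illustration of the out-of-the-box mocking support and direct CMake integration mentioned above, below is a minimal sketch of how a GTest-based test could be wired into CMake. The target and file names are illustrative, not taken from the actual EasyNMEA build.

# Minimal sketch of wiring a GTest-based test into CMake.
# Target and file names are illustrative, not EasyNMEA's actual ones.
find_package(GTest REQUIRED)  # Locates GTest and GMock on the system
include(GoogleTest)           # Provides gtest_discover_tests()

add_executable(example_unit_test ExampleTest.cpp)
target_link_libraries(example_unit_test
    PRIVATE
        GTest::gtest_main  # Supplies main(), so the test file does not need one
        GTest::gmock       # Out-of-the-box mocking, e.g. to mock Asio
)

# Registers each TEST()/TEST_F() case with CTest individually
gtest_discover_tests(example_unit_test)

The registered tests can then be run with ctest --output-on-failure from the build directory.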
Build Tests¶
The EasyNMEA tests can be divided into two large categories:
Library tests: Unit and system tests for the EasyNMEA library itself.
Documentation tests: Automated tests for the documentation.
Although none of these tests are built by default, they can be built separately; this is so that, for instance, contributors who do not build the documentation can still build and run the library tests. To that end, three CMake options are provided:
BUILD_LIBRARY_TESTS: Builds the library tests.
BUILD_DOCUMENTATION_TESTS: Builds the documentation tests. This entails building the documentation.
BUILD_TESTS: Builds all the EasyNMEA tests, meaning both library and documentation tests.
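A plausible sketch of how these options could be wired together follows. The option names are the ones documented above, but the exact logic in EasyNMEA's CMakeLists.txt may differ: the idea is simply that BUILD_TESTS switches the other two on.

# Sketch only: option names as documented above; the exact
# wiring in EasyNMEA's CMakeLists.txt may differ.
option(BUILD_LIBRARY_TESTS "Build the EasyNMEA library tests" OFF)
option(BUILD_DOCUMENTATION_TESTS "Build the documentation tests" OFF)
option(BUILD_TESTS "Build all the EasyNMEA tests" OFF)

if(BUILD_TESTS)
    # BUILD_TESTS implies both test categories
    set(BUILD_LIBRARY_TESTS ON)
    set(BUILD_DOCUMENTATION_TESTS ON)
endif()

if(BUILD_LIBRARY_TESTS OR BUILD_DOCUMENTATION_TESTS)
    enable_testing()  # Activate CTest for this build tree
endif()

With wiring of this kind, configuring with, e.g., cmake -DBUILD_LIBRARY_TESTS=ON builds only the library tests.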
Furthermore, the system tests within the library tests require the installation of some extra Python dependencies, which are listed in <path_to_repo>/test/system/requirements.txt. These are necessary to simulate a serial connection and an NMEA device. They can be installed with:
python3 -m pip install -r <path_to_repo>/test/system/requirements.txt
Directories¶
The EasyNMEA tests are held in the following directory structure:
<repo-root>/test/unit: For unit tests
<repo-root>/test/system: For system tests
<repo-root>/docs/test: For documentation tests
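Continuing the hedged sketch from the previous section, these directories could be hooked into the build conditionally; again, the actual EasyNMEA wiring may differ.

# Sketch: add the documented test directories to the build,
# guarded by the options described above.
if(BUILD_LIBRARY_TESTS)
    add_subdirectory(test/unit)    # Unit tests
    add_subdirectory(test/system)  # System tests
endif()

if(BUILD_DOCUMENTATION_TESTS)
    add_subdirectory(docs/test)    # Documentation tests (entails building the docs)
endif()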
Automated Testing Jobs¶
All the EasyNMEA tests run automatically once a day for the main
branch, as well as for the supported
versions’ branches.
Furthermore, all the tests are run whenever a pull request is opened and with every commit pushed to an open pull
request.
Since the public repository is hosted on GitHub, GitHub Actions is used to automate these tasks.
This tool makes it possible to create as many workflows, with as many jobs in them, as desired, which makes it ideal for test automation.
Moreover, the jobs run on GitHub-maintained servers, so the only thing left to do is to define those workflows.
This is done in <repo-root>/.github/workflows.
EasyNMEA contains the following workflows and jobs:
automated_testing, defined in <repo-root>/.github/workflows/automated_testing.yml. This workflow runs on pushes to main and any other maintained branch, on pull request creation or update, and once a day. It contains the following jobs:
ubuntu-build-test, which runs on the latest Ubuntu distribution available. This job installs all the necessary dependencies, builds all the tests and the documentation, runs all the tests, and uploads the Sphinx-generated HTML documentation so that reviewers can check it.
Code Coverage Reporting¶
As stated in Automated Testing Jobs, EasyNMEA tests are run with every push to main and supported version branches, as well as with every push to any open pull request.
This is done to make sure that every aspect of the library works as expected, as well as to guarantee that new changes
do not break any established behaviour.
Code coverage reporting takes this a step further, not only guaranteeing that all the tests pass at all times, but also
checking whether those tests reach every possible source code outcome.
This is done using compiler-specific flags that report every branch generated by the compiler and reached by the tests. These reports are then gathered into one single human-readable code coverage report, which is uploaded to an online platform that in turn keeps track of the coverage progress as changes are made.
Presently, the coverage reports are generated in the ubuntu-build-test job by passing specific flags to GCC: --coverage, -fprofile-arcs, and -ftest-coverage.
To ease the compilation, a CMake option GCC_CODE_COVERAGE
has been created, which enables the code coverage
flags if the compiler used is indeed GCC.
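A minimal sketch of what that option could look like follows; the option name is the documented one, but the implementation shown is an assumption rather than EasyNMEA's exact code.

# Sketch: enable GCC coverage instrumentation on demand.
option(GCC_CODE_COVERAGE "Compile with GCC code coverage flags" OFF)

if(GCC_CODE_COVERAGE AND CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
    # --coverage is GCC shorthand for -fprofile-arcs -ftest-coverage
    add_compile_options(--coverage -fprofile-arcs -ftest-coverage)
    add_link_options(--coverage)  # Also link against the gcov runtime
endif()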
Then, the job uses gcovr to generate a report that is uploaded to Codecov. In turn, Codecov checks the code coverage of the changes proposed in the pull request, as well as the overall coverage. If either of the two decreases, the code coverage check fails and the pull request cannot be merged.
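For illustration, such a gcovr invocation could also be exposed as a CMake convenience target. This is only a hedged sketch, and the CI job may well invoke gcovr directly instead; the target name is hypothetical.

# Sketch: a convenience target that produces the XML report
# in the format consumed by Codecov.
find_program(GCOVR_EXECUTABLE gcovr)

if(GCOVR_EXECUTABLE)
    add_custom_target(coverage_report
        COMMAND ${GCOVR_EXECUTABLE}
                --root ${PROJECT_SOURCE_DIR}  # Resolve sources against the repo root
                --xml --output coverage.xml   # XML report for Codecov
        WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
        COMMENT "Generating code coverage report with gcovr"
    )
endif()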
Code Quality Analysis¶
With every push to main, and with every pull request targeting it, an automated job is run to check for code vulnerabilities using CodeQL.
This job presents vulnerabilities in the form of code scanning alerts (see
About code scanning with CodeQL).