GNU Build System - Testing
Testing in a GNU Build System project can be done in several ways. In this article we are going to set up a basic testing workflow. We will use the protocol-less approach, which relies only on the test programs' exit statuses to determine the outcome of each test (success, fail, skipped…).
Setting up the project
In order to get started quickly, we will make use of the same project we created in the Getting Started article. If you are new to the GNU Build System you might want to go through that article first. Otherwise you can just clone and use the getting-started branch from the support repository.
First off, we are going to create a function to test. To do so, we will create two files, the header file lib.h and the implementation file lib.c, and put the following code in them.
lib.h
#ifndef LIB_H
#define LIB_H
int foo ();
#endif /* LIB_H */
lib.c
#include "lib.h"
int
foo ()
{
  return 0;
}
Now to the test code. To demonstrate the three possible test outcomes, we are going to create three tests: foo_test.c, a test that checks that the foo () function returns 0; foo_skipped_test.c, a test that is going to be skipped; and foo_xfail_test.c, a test that is expected to fail.
Protocol-less testing works like this: if the test program returns 0 as its exit status, the test is considered successful; 77 means the test is skipped; 99 signals a hard error; and any other value is considered a failure.
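The tests in this article only use 0, 1 and 77, but for completeness here is a minimal sketch of how the hard-error status could be used. The fixture file name is made up for illustration; a hard error tells the test harness that the test could not run at all, as opposed to running and failing.

#include <stdio.h>

int
main ()
{
  /* Hypothetical setup step: if the test cannot even be set up
     (here, a made-up fixture file is missing), report a hard
     error (99) rather than a plain failure.  */
  FILE *fixture = fopen ("fixture.txt", "r");
  if (fixture == NULL)
    return 99;  /* hard error: the test could not run */

  /* ... exercise the code under test here ... */

  fclose (fixture);
  return 0;     /* success */
}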
foo_test.c
#include "lib.h"
int
main (int argc, char** argv)
{
  if (foo () == 0)
    {
      return 0;
    }
  else
    {
      return 1;
    }
}
Here we return 0 if foo () returns 0, and 1 otherwise.
foo_skipped_test.c
#include "lib.h"
int
main (int argc, char** argv)
{
  return 77;
}
This test will be skipped since we return 77.
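In a real project the skip is usually conditional. Here is a rough sketch of that idea; the FOO_TEST_DATA environment variable is invented for this example and is not part of the project. The test checks a prerequisite at run time and returns 77 only when it is missing.

#include <stdlib.h>
#include "lib.h"

int
main ()
{
  /* Hypothetical prerequisite: skip the test when FOO_TEST_DATA
     is not set in the environment (name invented for this sketch).  */
  if (getenv ("FOO_TEST_DATA") == NULL)
    return 77;  /* skip */

  return foo () == 0 ? 0 : 1;
}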
foo_xfail_test.c
#include "lib.h"
int
main (int argc, char** argv)
{
  if (foo () == 1)
    {
      return 0;
    }
  else
    {
      return 1;
    }
}
This one is going to fail since it expects foo () to return 1 while it actually returns 0.
Now, we have to tell Automake to compile those three programs and use them as the test programs. Our Makefile.am would look like this after the modification.
bin_PROGRAMS = hello
hello_SOURCES = main.c
lib_SOURCES = lib.c
TESTS = foo_test foo_xfail_test foo_skipped_test
XFAIL_TESTS = foo_xfail_test
check_PROGRAMS = $(TESTS)
foo_test_SOURCES = foo_test.c $(lib_SOURCES)
foo_xfail_test_SOURCES = foo_xfail_test.c $(lib_SOURCES)
foo_skipped_test_SOURCES = foo_skipped_test.c $(lib_SOURCES)
The first two lines are from the Getting Started article; we didn't change them. The TESTS variable should contain the names of all the test programs. XFAIL_TESTS contains the names of the test programs that are expected to fail. The check_PROGRAMS variable lists the programs that should be built before the tests run. And the last three variables, foo_test_SOURCES, foo_xfail_test_SOURCES and foo_skipped_test_SOURCES, tell Automake which sources need to be compiled to build each test program. lib_SOURCES is just a variable that contains the file where the function is defined. I added it to make things cleaner; we could have used lib.c directly.
Running the tests
Since we added tests to the project for the first time, Autotools needs to add something called a test driver. To do so, we have to run the autoreconf -i command in order to regenerate a configure script that supports testing.
$ autoreconf -i
configure.ac:3: installing './compile'
configure.ac:2: installing './install-sh'
configure.ac:2: installing './missing'
Makefile.am: installing './depcomp'
parallel-tests: installing './test-driver'
Time to run the configure script.
$ ./configure
checking for ...
...
config.status: creating Makefile
config.status: executing depfiles commands
To run the tests, the command is make check. Go ahead and run it.
$ make check
...
PASS: foo_test
XFAIL: foo_xfail_test
SKIP: foo_skipped_test
=============================================
Testsuite summary for Hello World 1.0
=============================================
# TOTAL: 3
# PASS: 1
# SKIP: 1
# XFAIL: 1
# FAIL: 0
# XPASS: 0
# ERROR: 0
=============================================
...
As expected, you can see in the test results that foo_test passed, foo_xfail_test was expected to fail and did, and foo_skipped_test was skipped.
If you want you can go ahead and break the function and run the tests again to make sure the tests fail.
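For example, you could make foo () in lib.c return a nonzero value:

int
foo ()
{
  return 1;  /* deliberately broken for demonstration */
}

After running make check again, foo_test should now be reported as FAIL, and foo_xfail_test, which expected a failure, should now pass unexpectedly and be reported as XPASS. Once you are done, restore the function.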
You can retrieve the code from the testing branch in the support repository.
Over and out,
AA