Re: Generating Test Suites with Autotest
From: Noah Misch
Subject: Re: Generating Test Suites with Autotest
Date: Mon, 27 Jun 2005 19:43:16 -0700
User-agent: Mutt/1.5.5.1i

On Mon, Jun 27, 2005 at 02:43:20PM +0200, Stepan Kasal wrote:
> actually, there is a connected question: why should one use Autotest
> for automated testing?
>
> Even though I like m4 and Autoconf internals, Autotest still scares me.
> The simple *.test files used by the Automake test suite seem much
> simpler.
>
> What are the advantages of Autotest? Perhaps the Automake system is more
> relaxed? Does it compare stdout and stderr with the expected ones?
>
> If someone knowledgeable could produce a quick comparison, I'd be interested.
> (Noah? or someone from the team publishing under the name of Alexandre? ;-)
The Automake simple test suite support (`info Automake Tests') runs programs
named in the `TESTS' Make variable and observes their exit status: 0->success,
77->skip, other->failure. That's all. In the two such test suites with which I
have some familiarity, those of Automake and Libtool 1.5, each test sources a
`defs' script that performs some common initialization like redirecting output
and creating a sandbox directory for the test.
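For illustration, a minimal such setup might look like this (the test script
name and the `frobnicate' tool are hypothetical; details vary by project):

    ## Makefile.am (fragment)
    TESTS = frobnicate.test
    EXTRA_DIST = $(TESTS) defs

    ## frobnicate.test
    #! /bin/sh
    . $srcdir/defs || exit 1     # shared setup: sandbox directory, redirection
    frobnicate --version >/dev/null || exit 1     # nonzero exit = failure
    exit 0                       # 0 = success; `exit 77' would report a skip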
With Autotest, one writes tests in an Autoconf-like M4 language and uses
Autom4te to generate a monolithic test suite script. The framework provides
core services like output redirection and sandboxing, and the suite script
provides facilities for viewing the list of tests and selecting a slice thereof
based on ordinals or keywords. Autotest ships a macro for checking the exit
status, standard output, and standard error of a command against expected
values.
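A test group in that language looks roughly like this (a made-up example,
omitting the `package.m4' boilerplate that names the package):

    AT_INIT
    AT_SETUP([echo reports success])
    AT_KEYWORDS([echo basic])
    dnl AT_CHECK arguments: command, expected exit status, expected stdout,
    dnl and expected stderr; the output arguments accept literal text or
    dnl `ignore'.
    AT_CHECK([echo foo], [0], [ignore], [])
    AT_CLEANUP

Autom4te then generates a `testsuite' script that supports listing and
slicing:

    ./testsuite --list     # show test groups with their ordinals and keywords
    ./testsuite 3 5-7      # run a slice of the suite by ordinal
    ./testsuite -k echo    # run the groups matching a keyword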
Autotest effectively captures information about failed tests. One sees only
terse information on standard output: a line indicating the outcome of each
test group. When finished, the test suite script produces a log file that
contains the exit status, stdout, and stderr of all failed commands. One can
specify additional files to capture from the working directories of failed test
groups; Autoconf uses this to capture the `config.log' from failed `configure'
runs. This `testsuite.log' usually contains all the information we need to
handle a bug report.
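For example, a test group registers such a file with AT_CAPTURE_FILE; a
sketch (the configure invocation and `$abs_srcdir' here are illustrative):

    AT_SETUP([configure detects the C compiler])
    AT_CAPTURE_FILE([config.log])    dnl logged only if this group fails
    AT_CHECK([$abs_srcdir/configure], [0], [ignore], [ignore])
    AT_CLEANUP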
Compare the Automake-style test suite of Libtool 1.5. Users would post the
output of a failing `make check', but that only indicated the outcome of each
test. Developers would request the output of `VERBOSE=1 make check' to acquire
the information needed to identify the underlying problem. To be sure, this is
not an inherent limitation of Automake; the `defs' script discards output in the
absence of VERBOSE, and one could modify `defs' to log everything to a file.
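A sketch of such a change, keeping the quiet default while preserving the
output:

    # In `defs': log each test's output instead of discarding it.
    if test -z "$VERBOSE"; then
      exec > `basename $0`.log 2>&1
    fi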
With an Autotest suite, one regenerates the test suite script with every change
to the suite. That is time-consuming; generating the Autoconf test suite takes
17 s on my system, 1% to 2% of the time needed to run the suite. That is not
bad in the context of running the entire suite, but it makes debugging a single
test case tiring; I sometimes accelerate things by temporarily commenting out
the test files in which I have no immediate interest.
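Concretely, the top-level input normally pulls in one file per topic, so
disabling a file takes one `dnl' (the file names are illustrative):

    dnl testsuite.at
    m4_include([base.at])
    dnl m4_include([semantics.at])    dnl disabled while debugging base.at
    m4_include([torture.at])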
Automake probably scales more smoothly; Autotest suites can become very large
due to M4-induced code replication.
> And then there is dejagnu: what's its role here?
DejaGnu is built on Tcl and Expect, so you normally install those and DejaGnu
itself before running a DejaGnu test suite. DejaGnu supports copying programs
to and running them on remote systems via a number of channels (ftp, kermit,
rsh/ssh, tip). It can test interactive programs like GDB. It can run tests on
a simulator. It scales fine for suites with tens of thousands of tests.
DejaGnu is primarily the test harness of the GNU toolchain components housed at
sourceware.org:/cvs/src.
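For flavor, a minimal DejaGnu test file might look like this (illustrative;
real suites lean on tool-specific support code under `lib/'):

    # hello.exp: drive a program with Expect and record the outcome.
    set test "hello greets the world"
    spawn ./hello
    expect {
        -re "Hello, world" { pass $test }
        timeout            { fail $test }
    }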