guix-devel

Re: Treating tests as special case


From: Pjotr Prins
Subject: Re: Treating tests as special case
Date: Thu, 5 Apr 2018 16:59:29 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Thu, Apr 05, 2018 at 04:14:19PM +0200, Ludovic Courtès wrote:
> I sympathize with what you write about the inconvenience of running
> tests, when substitutes aren’t available.  However, I do think running
> tests has real value.
> 
> Of course sometimes we just spend time fiddling with the tests so they
> would run in the isolated build environment, and they do run flawlessly
> once we’ve done the usual adjustments (no networking, no /bin/sh, etc.)
> 
> However, in many packages we found integration issues that we would just
> have missed had we not run the tests; that in turn can lead to very bad
> user experience.  In other cases we found real upstream bugs and were
> able to report them
> (cf. <https://github.com/TaylanUB/scheme-bytestructures/issues/30> for
> an example from today.)  Back when I contributed to Nixpkgs, tests were
> not run by default and I think that it had a negative impact on QA.
> 
> So to me, not running tests is not an option.

I am *not* suggesting we stop testing and stop writing tests. They are
extremely important for integration (though we could do with far fewer
and more focused integration tests - ref Hickey). What I am saying is
that we don't have to rerun tests for everyone *once* they have
succeeded *somewhere*. If you have a successful reproducible build and
passing tests on one platform, there is really no point in rerunning
the tests everywhere for the exact same setup. That is a nice property
of our FP approach. Proof that rerunning is not necessary is the fact
that we already distribute substitute binaries without running the
tests on the receiving machine. What I am proposing, in essence, is
'substitute tests'.

Ricardo is suggesting an implementation. I think it can be even
simpler. When building a derivation we know its hash. If we keep a
list of hashes of derivations whose tests succeeded
(hash-tests-passed) in the database, it is essentially one query and
we are done. Even when the substitute gets removed, that entry can
remain at almost no cost.
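
Roughly what I have in mind - a minimal sketch in Guile, where
hash-tests-passed, record-tests-passed! and tests-already-passed? are
names I am making up for illustration, not existing daemon code:

    ;; Hypothetical in-memory stand-in for a table keyed by derivation
    ;; hash; in practice this would live in the store database or
    ;; behind a substitute-server query.
    (define hash-tests-passed (make-hash-table))

    (define (record-tests-passed! drv-hash)
      "Remember that the test suite of DRV-HASH succeeded."
      (hash-set! hash-tests-passed drv-hash #t))

    (define (tests-already-passed? drv-hash)
      "Return #t when DRV-HASH is known to have passed its tests."
      (hash-ref hash-tests-passed drv-hash #f))

Each entry is just a hash, so keeping it around costs next to nothing.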

Ludo, I think we need to do this. There is no point in running tests
that have already been run. Hickey is right. I have reached
enlightenment. Almost everything I thought about testing is wrong. If
all the inputs are the same, the test will *always* pass. There is no
point in rerunning it! The only way such a test won't pass is by
divine intervention or real hardware problems - neither of which we
want to test for.

If tests are so important to rerun, then tell me: why are we not
running the tests when substituting binaries?

> The problem I’m more interested in is: can we provide substitutes more
> quickly?  Can we grow an infrastructure such that ‘master’, by default,
> contains software that has already been built?

Sure, that is another challenge and an important one.

> Ricardo Wurmus <address@hidden> skribis:
> 
> > An idea that came up on #guix several months ago was to separate the
> > building of packages from testing.  Testing would be a continuation of
> > the build, like grafts could be envisioned as a continuation of the
> > build.
> 
> I agree it would be nice, but I think there’s a significant technical
> issue: test suites usually expect to run from the build tree.

As I understand it, Nix already does something like this: they have
split testing out to allow for network access. I don't propose to
split the process; I propose to cache the test results as part of the
build.
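
Again only a sketch of what caching the tests as part of the build
could look like, reusing the made-up tests-already-passed? and
record-tests-passed! procedures from above around a check phase:

    ;; Hypothetical wrapper around a build's check step: skip the test
    ;; suite when this derivation is already known to have passed it,
    ;; otherwise run it and record the outcome.
    (define (check-with-cache drv-hash run-check)
      (if (tests-already-passed? drv-hash)
          (begin
            (format #t "tests already passed for ~a; skipping~%" drv-hash)
            #t)
          (let ((ok? (run-check)))
            (when ok? (record-tests-passed! drv-hash))
            ok?)))

    ;; For example, with a thunk standing in for the real check phase:
    ;; (check-with-cache "...-hello-2.10.drv"
    ;;                   (lambda () (zero? (system* "make" "check"))))

The build itself stays exactly as it is; only the decision whether to
run the test suite consults the cache.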

Pj.


