Re: [Bug-wget] [Bug-Wget] Issues with Perl-based test suite
From: Darshit Shah
Subject: Re: [Bug-wget] [Bug-Wget] Issues with Perl-based test suite
Date: Sun, 28 Sep 2014 09:33:30 +0530
User-agent: Mutt/1.5.23 (2014-03-12)
On 09/27, Tim Rühsen wrote:
Hi Darshit,
I am answering inline...
Am Sonntag, 28. September 2014, 01:23:08 schrieb Darshit Shah:
There are a few issues that I've been facing with the old Perl-based test
suite that I'd like to highlight and discuss here.
1. The way the test suite has been written, I was unable to hack together a
patch that would allow the various tests to be run under valgrind. I'd
like functionality similar to the new Python-based test suite, where the
environment variable VALGRIND_TESTS causes the Wget executable to be
invoked under valgrind for memory leak checks.
If anyone could help by contributing a patch / sharing their wisdom on
how this can be achieved, I'd be very grateful.
This is pretty easy once we have the parallel test suite up and running.
I already did this in configure.ac for the Mget project, so there is no
secrecy about it. I'll make a patch when the time comes...
I haven't seen how this is done through configure.ac; I'll have to look it up
in your Mget sources. I set up the valgrind testing from within the test suite
itself.
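The hook I set up in the Python suite boils down to something like the
following sketch. The VALGRIND_TESTS variable name is the one described
above; the particular valgrind flags shown are assumptions for
illustration, not necessarily the suite's actual options.

```shell
# Build the command used to invoke wget in a test. When VALGRIND_TESTS
# is set (to anything non-empty), prefix the invocation with valgrind
# so the run fails on detected leaks/errors.
wget_cmd () {
    if [ -n "${VALGRIND_TESTS:-}" ]; then
        echo "valgrind --error-exitcode=301 --leak-check=full wget $*"
    else
        echo "wget $*"
    fi
}

wget_cmd --version
```

A test runner would then execute `$(wget_cmd ...)` instead of calling
wget directly, so the wrapping is invisible to the individual tests.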
2. Race conditions: The test suite seems to have some races somewhere in it.
Over the last year or so, I've often seen Test-proxied-https.px fail and
then pass on the second invocation. This seemed like some race, but
occurred too infrequently to be a major pain point. However, Tim's recent
patch for using the parallel test harness seems to be causing more tests to
fail for me. Now all the Test-iri* tests are also failing very randomly
and erratically. A second/third/nth invocation of make check will generally
see them pass successfully. Without Tim's patch, these tests always passed
without issues. I'm loath to believe that the patch itself is the cause of
the failures. My understanding is that it merely triggers the underlying
issue more often, leading to a very high rate of false positives.
Believe me, I made many test runs with the parallel test suite, with and
without -jN. The Test-iri* tests *always* succeeded with LC_ALL=C. They
*never* succeeded when using TESTS_ENVIRONMENT to set a Turkish locale.
I started investigating on Friday and will continue next week.
My environment only has LANG=en_US.utf-8 exported, but no LC_ALL variable.
Maybe I should export that one too.
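For anyone who wants to reproduce Tim's locale runs, the Automake hook he
mentions can be used like this (a sketch; it assumes the tr_TR.UTF-8
locale is generated on the system):

```shell
# TESTS_ENVIRONMENT is prepended to each test invocation by Automake's
# test harness, so this forces every test to run under a Turkish locale.
make check TESTS_ENVIRONMENT='LC_ALL=tr_TR.UTF-8; export LC_ALL;'
```

Comparing a run like this against one with LC_ALL=C should show whether a
failure is locale-dependent rather than a timing race.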
I'm not seeing consistent failures, but the tests fail about 6 times out of
10, which is very painful. I'll also take a look into this in a while.
But I never saw spurious failures which might indicate races (though I'll keep
an eye on that). How much work would it be for you to install a Debian
unstable into a virtual machine? Just for comparison with my machine. If these
races persist in the virtual machine, something would likely be wrong with
your hardware (RAM?). If not, something is wrong with your environment (I
remember you have a cutting-edge Arch Linux box).
Shouldn't be too hard to find out...
With Test-proxied-https.px, I remember first encountering the random
failure sometime around March '13. It's been happening very randomly for me
since then. The Test-iri* failures I saw for the first time yesterday, after
your patch. In fact, an interesting observation is that the failures occur
less often when I run make check from the root of the repository and far more
often when I run make check from the tests/ directory.
I'll set up a Debian unstable virtual machine sometime next week and test it
out. I'm currently running a Void Linux virtual machine, which is bleeding
edge too; I will run the tests through that as well.
My hardware might be at fault; it's a little old now. And with Arch, hitting a
bad environment configuration is not unthinkable. I'll look into it.
Again, I urge everyone reading this to share their insights on what /
where the issue lies and how we can fix it.
In general, I'd like to see the Perl-based test suite deprecated in the near
future. The Python-based test suite suffers from none of the above-mentioned
issues and is highly flexible and extensible. The HTTP server in that test
suite is feature-complete, and all tests can now be ported to it. The FTP
module, however, does not yet exist, and hence we must keep the Perl-based
tests around until that requirement is fulfilled.
I agree in general, but I still have some problems with the Python test suite
(e.g. I don't speak Python so far, the Python test suite seems slower than the
Perl test suite, and setting up a new test seems more complex). But I promise
to dig into it in the near future and give you more detailed feedback and/or
a helping hand.
I've tried to make setting up new tests easier. If you have any gripes,
please let me know and I'll try to fix them.
In general, I too have noticed that the Python-based tests run a lot slower,
much slower than when I originally wrote the test suite. I'll profile the
whole thing and try to optimize the code when I can.
I've attempted to keep the codebase of the Python tests simple and readable,
so that extending it is easy for someone new. Oftentimes I traded a little
inefficiency in the code for readability. I'll work on it.
Tim
--
Thanking You,
Darshit Shah