Re: Mauve test question
From: Michael Koch
Subject: Re: Mauve test question
Date: Tue, 28 Dec 2004 21:18:57 +0100
User-agent: KMail/1.6.2
On Tuesday, 28 December 2004 20:43, Archie Cobbs wrote:
> Thomas Zander wrote:
> >>>Huh? Why is adding broken tests the right thing to do? And
> >>> besides, if a broken test is added, this way there will be
> >>> motivation to resolve the discrepancy. With a whitelist, a
> >>> broken test can get added but no one will notice and then it
> >>> just sits there getting stale.
> >>
> >> It's common practice to add new code to one implementation, e.g.
> >> GNU Classpath or libgcj, test it for a while, and merge it into
> >> kaffe later. According to you, the Mauve tests shouldn't be added
> >> before the code is included in all implementations, because
> >> nothing may be broken.
> >
> > Ehm, just being a bystander: a broken test in Archie's email is a
> > test that does not work properly (harness.check(1 == 2)).
> > The broken test we are talking about, and what Michael seems to
> > imply, is a test that is fully correct and will (probably) run
> > correctly on Sun's JVM, but fails on another.
> >
> > Let's call the former a broken test, and the latter a failing
> > test, please :)
> >
> > Mauve is not supposed to hold _any_ broken tests, right?
>
> Thanks, I was misreading the original comment. We all agree Mauve
> may contain "failing" tests but should never contain "broken"
> tests.
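[Editorial aside: to make the broken/failing distinction above concrete, here is a minimal sketch in the style of a Mauve testlet. The tiny `TestHarness` class below is a simplified stand-in for Mauve's real `gnu.testlet.TestHarness`, which has a richer API; the class and method names here are illustrative assumptions, not Mauve's actual code.]

```java
// Simplified stand-in for Mauve's TestHarness (assumption: the real
// harness in gnu.testlet has more methods; this only counts results).
class TestHarness {
    int passed, failed;

    void check(boolean result) {
        if (result) passed++; else failed++;
    }
}

public class MauveExample {
    // A "broken" test: wrong by construction, fails on every JVM.
    static void brokenTest(TestHarness harness) {
        harness.check(1 == 2);  // can never pass anywhere
    }

    // A "failing" test: correct per the Java spec, so it passes on a
    // conforming JVM but may fail on an incomplete implementation.
    static void failingTest(TestHarness harness) {
        harness.check("abc".indexOf("b") == 1);  // spec-mandated result
    }

    public static void main(String[] args) {
        TestHarness h = new TestHarness();
        brokenTest(h);
        failingTest(h);
        System.out.println("passed=" + h.passed + " failed=" + h.failed);
    }
}
```

A broken test belongs nowhere; a failing test belongs in Mauve with its failure recorded against the implementation, not the test.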
>
> Now, back to the original point... I've made a proposal for
> cleaning up the Mauve mess. For those who don't like it, please
> explain your proposal and most importantly exactly how this
> "whitelist" (or whatver) will be maintained. Who is going to do the
> maintenance work? When are they going to do it? Etc.
>
> I don't really care how we do it, but these seem to be reasonable
> requirements. Maybe we should try to agree on these first:
>
> - It should be possible to test any JVM using some "official" set
> of tests which a Classpath JVM should pass and see clearly and
> obviously any failures; a "perfect" JVM would have no failures.
>
> - When a new test is added to Mauve, by default it is automatically
> added to the set of Classpath tests. I.e., it's not possible for
> newly added tests to not actually run against Classpath without
> explicitly configuring things that way.
>
> The point is that every test has a known state, one of:
>
> (a) don't even bother trying it (doesn't compile or we know it
>     fails)
> (b) it is a known failure (i.e., a Classpath bug)
> (c) it should pass (i.e., if it fails, it must be a JVM bug)
> (d) it should pass, but there can be false negatives depending on
>     the JVM
>
> and most importantly the maintenance of these states is simple
> enough that we don't lose track and get back into the mess we're
> in now.
>
> By the way, saying "fix ./batch_run to respect xfails" or "fix the
> normal mauve run to not ignore tests that don't compile" is fair
> game (as long as you are willing to do the work :-)
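[Editorial aside: the per-test states (a)-(d) above could be tracked with something as simple as a plain-text exclusion list consulted before each run. The sketch below is a hypothetical illustration of that idea in Java; the file name `xfails` and its one-test-per-line format are assumptions, not Mauve's actual batch_run mechanism.]

```java
import java.util.List;
import java.util.Set;

public class XfailFilter {
    // Hypothetical xfail list: testlets with a known state (a) or (b)
    // above. In practice this would be read from a file such as
    // "xfails" (assumed name), one testlet per line.
    static final Set<String> XFAILS = Set.of(
        "gnu.testlet.java.lang.String.someKnownFailure");

    // Decide whether a testlet should actually be run: anything not on
    // the xfail list is expected to pass (states (c)/(d)).
    static boolean shouldRun(String testlet) {
        return !XFAILS.contains(testlet);
    }

    public static void main(String[] args) {
        List<String> testlets = List.of(
            "gnu.testlet.java.lang.String.indexOf",
            "gnu.testlet.java.lang.String.someKnownFailure");
        for (String t : testlets) {
            System.out.println((shouldRun(t) ? "RUN:  " : "XFAIL: ") + t);
        }
    }
}
```

The maintenance burden then reduces to keeping one list current: a newly added Mauve test runs by default, and only an explicit xfail entry can exclude it, which matches the two requirements listed above.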
Perhaps both should be done. Then everyone would be happy.
Note that Thomas Fitzsimmons wanted to work on this too. I don't know
if he has already done something in this direction.
Michael
--
Homepage: http://www.worldforge.org/