
[Bug-gnubg] Gnubg's Cache and Plies > 3 - problem?


From: Michael Petch
Subject: [Bug-gnubg] Gnubg's Cache and Plies > 3 - problem?
Date: Tue, 01 Sep 2009 02:16:15 -0600
User-agent: Microsoft-Entourage/12.20.0.090605


Hi Guys,

We seem to have a problem with the cache, and it appears to be related to 4 ply. The
issue Michael Depreli found shifts around depending on the cache size.
From my tests, results are correct when the cache size is 0, and the larger the
cache gets, the more likely problems become.

I began reviewing cache.c and have a major concern: 4 ply exceeds our
ability to represent a cache key uniquely. I am thinking out loud here, so
feel free to correct me. eval.c has this:

 /*
   * Bit 00-01: nPlies
   * Bit 02   : fCubeful
   * Bit 03-10: rNoise
   * Bit 11   : fMove
   * Bit 12   : fUsePrune
   * Bit 13-17: anScore[ 0 ]
   * Bit 18-22: anScore[ 1 ]
   * Bit 23-26: log2(nCube)
   * Bit 27-28: fCubeOwner
   * Bit 29   : fCrawford
   */

  iKey = (
           ( nPlies ) |
           ( pec->fCubeful << 2 ) |
           ( ( ( (int) ( pec->rNoise * 1000 ) ) & 0x00FF ) << 3 ) |
           ( pci->fMove << 11 ) );

Clearly nPlies is forced to the 2 low bits.

The problem is that 2 bits can represent only 4 ply values:
00, 01, 10, 11 (0-3). nPlies = 4 is 100 (3 bits), so OR'ing it with
( pec->fCubeful << 2 ) inadvertently turns the cubeful flag on even when
fCubeful is 0, and 4 ply now looks like 0 ply. One might say the code
could be revised to:

 iKey = (
           ( nPlies & 0x03 ) |
           ( pec->fCubeful << 2 ) |
           ( ( ( (int) ( pec->rNoise * 1000 ) ) & 0x00FF ) << 3 ) |
           ( pci->fMove << 11 ) );

The code also implicitly assumes the other variables don't exceed their
expected ranges (except rNoise, which is masked in the snippet above).
Masking nPlies would stop the plies from colliding with the other flags,
but 0 ply and 4 ply would still produce the same key, 1 ply and 5 ply the
same, and so on.
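To make the arithmetic concrete, here is a tiny standalone sketch (not gnubg
code, just the packing arithmetic from the snippet above) showing both effects:

  #include <stdio.h>

  /* Current eval.c packing of the two low fields */
  static int old_key(int nPlies, int fCubeful)
  {
      return nPlies | (fCubeful << 2);
  }

  /* Packing with the proposed nPlies mask */
  static int masked_key(int nPlies, int fCubeful)
  {
      return (nPlies & 0x03) | (fCubeful << 2);
  }

  int main(void)
  {
      /* 4 ply cubeless sets bit 2, so it looks exactly like 0 ply cubeful */
      printf("old: 4 ply cubeless = %d\n", old_key(4, 0));       /* 4 */
      printf("old: 0 ply cubeful  = %d\n", old_key(0, 1));       /* 4, same key */

      /* With the mask the cubeful flag is safe, but 0 ply and 4 ply collide */
      printf("masked: 0 ply cubeless = %d\n", masked_key(0, 0)); /* 0 */
      printf("masked: 4 ply cubeless = %d\n", masked_key(4, 0)); /* 0, still collides */
      return 0;
  }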

To make things worse, cache.c also uses these keys in its hashing function,
and comparisons are done on the raw keys, e.g.:

    if ((pc->entries[l].nd_primary.nEvalContext != e->nEvalContext ||
         memcmp(pc->entries[l].nd_primary.key.auch, e->key.auch,
                sizeof(e->key.auch)) != 0))
    {    /* Not in primary slot */
        if ((pc->entries[l].nd_secondary.nEvalContext != e->nEvalContext ||
             memcmp(pc->entries[l].nd_secondary.key.auch, e->key.auch,
                    sizeof(e->key.auch)) != 0))
        {    /* Cache miss */
So the raw bits of the keys are compared, but we know ply levels above 3
can't be stored uniquely. The result is false hits in the cache (4 ply
requests returning 0 ply results). This seems to jibe with what Michael
Depreli and I are seeing when a large cache size is used.
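For illustration, a stripped-down, hypothetical version of that comparison
(the struct and names are made up; only the logic mirrors the snippet above)
shows why a 4 ply probe can match a stored 0 ply entry:

  #include <stdio.h>
  #include <string.h>

  typedef struct {
      unsigned char auch[10];   /* position key: identical for the same position */
      int nEvalContext;         /* the packed iKey from eval.c */
  } entry;

  static int pack_key(int nPlies, int fCubeful)
  {
      return nPlies | (fCubeful << 2);   /* current packing */
  }

  int main(void)
  {
      entry stored, probe;

      memset(stored.auch, 0xAB, sizeof stored.auch);   /* some position */
      stored.nEvalContext = pack_key(0, 1);            /* cached at 0 ply, cubeful */

      probe = stored;                                  /* same position ... */
      probe.nEvalContext = pack_key(4, 1);             /* ... requested at 4 ply */

      /* The same test the cache lookup performs: both fields compare equal,
         so the cached 0 ply numbers come back for the 4 ply request. */
      if (probe.nEvalContext == stored.nEvalContext &&
          memcmp(probe.auch, stored.auch, sizeof probe.auch) == 0)
          printf("false hit: 4 ply probe returns the cached 0 ply result\n");

      return 0;
  }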

Basically, 2 bits isn't enough to describe the plies accurately, and that is
potentially cascading into cache issues. There appear to be an extra 3 bits
in the structure above that aren't used. Coincidentally, there has been
discussion of changing this key (for performance reasons, I believe). I think
we need to do something immediately to deal with the ply issue and the
cache.
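Purely as a sketch of one possible direction (hypothetical bit layout, not a
patch): give nPlies a third bit and shift the later fields up by one, so
0-7 plies pack uniquely:

  #include <stdio.h>

  static int widened_key(int nPlies, int fCubeful)
  {
      return (nPlies & 0x07) |      /* bits 0-2: room for up to 7 plies        */
             (fCubeful << 3);       /* bit 3 (rNoise, fMove etc. would follow) */
  }

  int main(void)
  {
      /* With three ply bits, 0 ply and 4 ply no longer share a key */
      printf("0 ply cubeless = %d\n", widened_key(0, 0));   /* 0 */
      printf("4 ply cubeless = %d\n", widened_key(4, 0));   /* 4 */
      printf("0 ply cubeful  = %d\n", widened_key(0, 1));   /* 8 */
      return 0;
  }

Any change along these lines would of course have to carry through to the
rest of the fields documented in the comment above.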

When the cache size is set to 0 there is no problem, because nothing gets
stored and all evals are done from scratch. The bigger the cache, the more
likely 4 ply collides with 0 ply, 5 ply with 1 ply, etc. And of course when
plies > 3 one can't trust that the cubeful flag isn't being obliterated, and
who knows what that is doing to the results.

I consider this a serious issue. Any thoughts or feedback would be welcome. I
may also be missing something, so let me know.

Michael

