[gnugo-devel] possible reading.c speedups


From: Arend Bayer
Subject: [gnugo-devel] possible reading.c speedups
Date: Thu, 7 Feb 2002 10:32:01 +0100 (CET)

I wondered which search engine algorithms from chess programs might be
adaptable to the reading code in GNU Go. I browsed a little through the
web, and here is a sample of what I found with comments.

With the techniques applied in current chess programs, the search
depth is apparently approximately twice as great as with a pure
minimax/alpha-beta search at the same speed. I would not expect similar
results for the reading code; however, even a reduction of the branching
factor by just 10% would already mean a drastic speed improvement.

Here is a quote from the WWW pages of DarkThought (apparently the most
successful non-commercial chess program today):

http://supertech.lcs.mit.edu/~heinz/dt/node9.html
>DARKTHOUGHT is a sophisticated alpha-beta searcher using PVS/NEGASCOUT [51,174]
>with state-of-the-art enhancements like aggressive futility pruning [95,182],
>internal iterative deepening [7,184], dynamic move ordering (history+killer
>heuristic) [3,76,180,183,191], recursive null-move pruning [20,62,77],
>selective extensions [7,17], interior-node recognizers [94,190], and an
>extended transposition table [161,191]. On average, all enhancements taken
>together reduce the effective branching factor of DARKTHOUGHT to 2-3 and its
>search-tree size to roughly 55% of that of the according minimal tree [124].

I found a nice introduction to a couple of these algorithms in
http://www.gamedev.net/reference/programming/features/chess4/page4.asp.

Another list with short descriptions is at:
http://www.xs4all.nl/~verhelst/chess/search.html

I follow this list and the methods mentioned in the above quote, trying
to guess what might be useful for GNU Go:
(All remarks are intended for do_attack/do_find_defense in reading.c.)

                        implemented     possible        effort needed
                        in GNU Go       usefulness      for implementation

alpha-beta cut          not for ko      small gain (?)  some

PVS/NegaScout/
Aspiration Search/      -               -
Memory Enhanced Test

Move ordering:
killer heuristic        X
history table           -               yes             reasonably small

Iterative deepening     -               big             fairly big

Transposition table     X

Enhanced Transposition  -               20-25% (?)      small
Cutoff

Null move heuristic     -               -
Quiescence search       -               -
Futility pruning        something similar


Comments:

- alpha-beta cut:
This would only give a refinement for ko situations. I had already tried
to implement it; however, I later realized that my implementation was
very incomplete. It did save ~8% of the reading nodes for reading.tst,
but for owl.tst there was hardly any gain. With a complete
implementation, I might expect a small gain.

- PVS (Principal Variation Search)/NegaScout/Aspiration Search/Memory
Enhanced Test:
These are all sophisticated uses of the alpha-beta cut and will hardly
be useful with only four possible results, as in the reading code.

- Dynamic move ordering: killer/history heuristic.
Killer heuristic means trying obvious captures etc. first to try to refute
a variation. I think the algorithm used in GNU Go is already fairly
sophisticated. However, a history heuristic (i.e. applying moves first that
have been successful in other positions in the same search tree) might
be worth experimenting with. This might be easy to implement.
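Such a history table might be sketched as follows. This is only an
illustration of the idea; BOARDMAX, the 1-D move encoding, and all
function names here are assumptions, not existing GNU Go code:

```c
#include <stdlib.h>

#define BOARDMAX 441   /* assumed 1-D board array size */

static int history[BOARDMAX];

/* Credit a move that refuted a variation; deeper refutations
   (more remaining depth) get a larger bonus. */
static void
history_credit(int move, int depth)
{
  history[move] += depth * depth;
}

/* qsort callback: higher history score first. */
static int
compare_history(const void *a, const void *b)
{
  return history[*(const int *) b] - history[*(const int *) a];
}

/* Sort the candidate moves by descending history score before
   the search tries them. */
static void
order_moves(int moves[], int num_moves)
{
  qsort(moves, num_moves, sizeof moves[0], compare_history);
}
```

A cutoff at remaining depth d would then call history_credit(move, d),
and every node would call order_moves() on its candidate list before
descending.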

- Iterative deepening:
This means the following: the search algorithm is called repeatedly with
increasing search depth until the default depth is reached (or a time
limit or similar is exceeded). Due to the exponential growth of the
tree, the additional cost is small, and the results of the previous
searches can be used for improved move ordering. I think this idea could
prove very useful if the reading functions returned not just WIN or LOSE
but the number of liberties during the "test searches". This takes some
effort to implement, however.
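The driver loop can be sketched with a toy depth-limited negamax over a
hardcoded tree. Everything here (the tree, the leaf values, the crude
interior evaluation) is an illustrative assumption, not reading.c code;
the point is only that the deepest pass dominates the total cost, so the
shallow passes are nearly free and their results can seed move ordering:

```c
#define BRANCHING 2
#define LEAVES 8

/* Leaf scores of a complete binary tree of height 3, from the
   point of view of the player to move at the leaves. */
static const int leaf_value[LEAVES] = {3, 5, 6, 9, 1, 2, 0, 7};

/* Depth-limited negamax; `node` indexes the subtree within its
   level, `height` is its distance from the leaves. */
static int
negamax(int node, int height, int depth)
{
  int i, best, v;
  if (height == 0)
    return leaf_value[node];
  if (depth == 0)
    return 0;   /* crude static evaluation of an interior node */
  best = -1000;
  for (i = 0; i < BRANCHING; i++) {
    v = -negamax(node * BRANCHING + i, height - 1, depth - 1);
    if (v > best)
      best = v;
  }
  return best;
}

/* Iterative deepening driver: search at depths 1..max_depth.
   A real implementation would also hand each pass's best line
   to the next pass for move ordering. */
int
deepen(int max_depth)
{
  int depth, result = 0;
  for (depth = 1; depth <= max_depth; depth++)
    result = negamax(0, 3, depth);
  return result;
}
```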

- Transposition table:
This means result caching.
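That is, roughly the following toy cache keyed by a full-position hash.
The entry layout and table size are illustrative assumptions; GNU Go's
actual hashing lives in hash.c and is more elaborate:

```c
#define TABLE_SIZE 4096

struct tt_entry {
  unsigned long key;    /* full hash, to detect index collisions */
  int result;           /* cached reading result */
  int valid;
};

static struct tt_entry table[TABLE_SIZE];

static void
tt_store(unsigned long key, int result)
{
  struct tt_entry *e = &table[key % TABLE_SIZE];
  e->key = key;
  e->result = result;
  e->valid = 1;
}

/* Returns 1 and fills in *result if the position is cached. */
static int
tt_lookup(unsigned long key, int *result)
{
  struct tt_entry *e = &table[key % TABLE_SIZE];
  if (e->valid && e->key == key) {
    *result = e->result;
    return 1;
  }
  return 0;
}
```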

- Enhanced Transposition cut off:
BEFORE we start searching from a given position, we check whether we can
find a cached result for any of the positions that would be reached by
the moves we want to try from here. This should be very easy to implement
with an enhancement of the ADD_CANDIDATE_MOVE macro. In chess, this
allegedly saves 20-25% of tree size. My guess is that this would be
similar in go.
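A minimal sketch of the probe, with a toy cache and a toy incremental
hash standing in for GNU Go's hashing machinery (WIN, hash_after_move(),
and the cache layout are all illustrative assumptions, not actual GNU Go
code):

```c
#define WIN 3            /* assumed "attacker wins" return code */
#define CACHE_SIZE 1024

static unsigned long cache_key[CACHE_SIZE];
static int cache_result[CACHE_SIZE];
static int cache_valid[CACHE_SIZE];

static void
cache_store(unsigned long key, int result)
{
  unsigned long i = key % CACHE_SIZE;
  cache_key[i] = key;
  cache_result[i] = result;
  cache_valid[i] = 1;
}

static int
cache_lookup(unsigned long key, int *result)
{
  unsigned long i = key % CACHE_SIZE;
  if (cache_valid[i] && cache_key[i] == key) {
    *result = cache_result[i];
    return 1;
  }
  return 0;
}

/* Toy incremental hash update for playing `move` in the position
   hashed by `key` (a real program would XOR a Zobrist value). */
static unsigned long
hash_after_move(unsigned long key, int move)
{
  return key ^ (0x9e3779b9ul * (unsigned long) (move + 1));
}

/* Enhanced Transposition Cutoff: before searching from `key`,
   probe the cache for every successor position; if some candidate
   move is already known to win, cut off without searching. */
int
etc_probe(unsigned long key, const int moves[], int num_moves)
{
  int i, cached;
  for (i = 0; i < num_moves; i++)
    if (cache_lookup(hash_after_move(key, moves[i]), &cached)
        && cached == WIN)
      return 1;
  return 0;
}
```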

- Null move heuristic:
This idea seems very weird and, given its proven success, quite cool.
(Try a pass first; if the position you obtain that way is good enough,
you assume that there will be a move that achieves an even better result,
and you save the effort of finding out which one that might be.)
However, this is certainly not useful in go.

- Quiescence search, selective extensions, interior-node recognizers:
I don't think these apply to the reading code. These should be left for
a global alpha-beta reader once GNU Go is dan level :-) 

- Futility pruning:
If I understood correctly, this summarizes a couple of heuristics that
cut off the search one or two levels before the set search depth is
reached, if the position is egregiously good or bad. The variety of
search depth limits used in GNU Go seems to be something analogous.


Some of these techniques should be applicable to owl as well, where
they might enable us to increase the maximal number of moves tried.

Arend



