gnugo-devel
[gnugo-devel] parallel GNU Go (was: TODO revisions)


From: David G Doshay
Subject: [gnugo-devel] parallel GNU Go (was: TODO revisions)
Date: Mon, 6 Sep 2004 19:42:16 -0700

On Sep 6, 2004, at 10:38 AM, Arend Bayer wrote:

On Wed, 25 Aug 2004, David G Doshay wrote:

As per my previous offer, if anyone wants to think about how to
parallelize the engine we may be able to offer part of our cluster
for testing and development. We are open to suggestions and
proposals.

I have been thinking about this a little. While the biggest gain would
probably come from parallelizing the owl code (our life and death
analysis) itself, this would also involve substantial changes.

An easier approach would be to distribute complete owl readings. There
are a few places where GNU Go mostly starts one owl analysis after
another; see e.g. the loop in the function "make_dragon()" in dragon.c
containing the calls to owl_attack() and owl_defense(). One could just
run this loop twice: the first time only to collect all owl calls; a
controller would then distribute them among GNU Go engines running
across the cluster.
Once all the other engines are finished, we re-run the loop, this time
using the results computed by the other processes.

There are maybe 3 or 4 loops that would have to be run twice in this
manner. (See the function "find_more_owl_attack_and_defense_moves()" in
value_moves.c for another example.)
Plus one would have to intercept the calls to owl_attack/defense.

There would be no changes necessary for the other GNU Go engines, as
all we would need to do there can already be done via GTP.
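For illustration, the controller's conversation with one worker engine might look like the following GTP exchange. GNU Go does expose owl analysis through GTP extension commands such as owl_attack and owl_defend, but the particular position, vertices, and responses below are made up:

```
loadsgf position.sgf
= 

owl_attack C3
= 1 E5

owl_defend C3
= 1 D4
```

The controller would first transmit the current position (here via a hypothetical loadsgf, though a sequence of play commands would also work), then issue one owl query per collected call and cache the answers for the second pass.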

The CS professor in the SlugGo project has started talking about trying
to do some things that would speed up SlugGo. We have 2 ideas, one
of which is only good in our SlugGo layer above GNU Go. The other is
to do something inside of GNU Go that would also spread things over
the available cluster nodes. So, we are tempted. The "dispatch and
collect" model you discuss above works easily with the rest of the cluster
infrastructure we developed.

Prior to this point we have tried our best to avoid changes inside of GNU
Go, partly to minimize delays due to the learning curve and partly to
minimize the changes we need to do whenever we update to a newer
version of GNU Go.

Right now I am committed to cleaning up SlugGo. We can wait until I
have finished that, or we can wait for a new crop of graduate students
to show some interest, or we can talk with some of you who already
understand the code and figure out how to collaborate. I have no
problem at all with any resulting code getting the normal GPL treatment.
While I doubt that the cluster-infrastructure code we have developed
is ready for public release, I have not yet decided what its eventual
fate will be.

Cheers,
David





