
Re: [Bug-gnubg] Re: How fast can you cheat??


From: Roy A. Crabtree
Subject: Re: [Bug-gnubg] Re: How fast can you cheat??
Date: Fri, 21 Aug 2009 14:26:21 -0400

Must have caught some of what you missed, I guess.

I do not have further resources to chase down the avoidance and change-the-definition style of rhetorical gambits being used, whether conscious and intentional, or reflexive and unintentional.

Cheers to when you see your points overtaken by public scientific recognition that you missed the point.

See more below.

On Fri, Aug 21, 2009 at 05:45, Michael Petch <address@hidden> wrote:
Howdy Roy, et al.

As a follow up to my original post I have a couple of things to add. I thank Christopher Yep for chatting with me tonight regarding an assumption I may have made about how Roy perceives the operation of GnuBG’s neural net and game play. He also directed me to a number of posts that were on this mailing list and elsewhere.

I’m going to keep this brief. When I wrote my original post, I assumed that it was known that the Neural Net is static – meaning that it is not self-learning while you play. Given the same set of rolls played over and over again, the Bot will play the same way every time, with one exception.
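
A minimal sketch of what "static" implies here, using toy weights and toy candidate positions rather than GnuBG's real gnubg.weights data: with frozen weights, evaluation is a pure function of the position, so the same rolls plus the same human replies always produce the same bot moves.

def evaluate(position, weights):
    # Feed-forward pass over frozen weights: nothing is written back,
    # so no information can carry over from one match to the next.
    return sum(w * f for w, f in zip(weights, position))

def best_move(candidate_positions, weights):
    # The bot's choice is a deterministic argmax over the evaluations.
    return max(candidate_positions, key=lambda p: evaluate(p, weights))

WEIGHTS = [0.3, -1.2, 0.7]             # fixed at training time, never updated
first = best_move([[1, 0, 2], [0, 1, 1]], WEIGHTS)
replay = best_move([[1, 0, 2], [0, 1, 1]], WEIGHTS)
assert first == replay                 # same positions -> same move, every time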

Sorry, the learning that is relevant, Michael, is the learning that went into the DB PRIOR to that point.

Avoiding the point only makes it obvious you are avoiding the point.



First of all Roy, in 2006 you had an email exchange with Albert Silver. See: http://lists.gnu.org/archive/html/bug-gnubg/2006-09/msg00079.html . One particular question you asked Albert was:

Roy asked: “Second kvetch: Am I incorrect in assuming that the net is not locked during successive plays? That it learns in the current match as well?”
Albert Silver responded: “No, it doesn't learn during the match, so your assumption is correct.”
The point I was raising then was that it would be easier to make the case when the DB is unlocked during the current game.

But it does not defeat it.

When you convolve one state space into a smaller one, you still get part of the information.
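
As a toy illustration of that claim (the 8-bit space and the mod-16 projection are hypothetical, nothing to do with GnuBG's actual encoding): projecting 256 equally likely states onto 16 buckets still preserves 4 of the original 8 bits, part but not all of the information.

import math
from collections import Counter

states = range(256)                          # toy "large" state space, uniform
buckets = Counter(s % 16 for s in states)    # convolved into a 16-state space

n = len(states)
# For a deterministic projection f, I(S; f(S)) = H(f(S)).
h = -sum((c / n) * math.log2(c / n) for c in buckets.values())
print(f"{h:.1f} of 8.0 bits survive the projection")   # -> 4.0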

And "what is learnable" into "what is learned" is not always obvious to proof OR prove.

But that was obvious from the first statement, which is why you tried to redefine the meanings I chose to use.

Basically, you have an emotional reaction to the term "cheating", and nine tenths of the fluff and fury here is due to that bias on your part.



First of all I have to apologize. Albert was correct in saying “No, it doesn't learn during the match,” but was incorrect in saying “so your assumption is correct.” I think Albert meant to say “so your assumption is incorrect,” and this confusion may have been because of the double negative in your assumption. I am unsure if this is why you believe the Bot seems to be self-learning as it goes – your post suggests you may believe this to be the case.

No, I was not confused on that point.  You are confused in believing it.



The GnuBG Neural Net is static (training is done independently of the product you download). It doesn’t learn from previous moves and cube play, and doesn’t base any decision making on player patterns while playing against you during matches.

Already covered this in my second response.



During the training phase to generate the static gnubg.weights file, the bot did play against itself and humans, but only during that training.

Which is where the information unassessed and unaddressed in your arguments creeps in.


If you download any copy of GnuBG from the website, install it on two virgin computers (ones that have never seen GnuBG before), and then use the same copy of GnuBG on each, you can verify that a clean system and one that has been playing matches ultimately play the same.

You so assert.  Do you have any proof of this?

If you do, how is it that it disproves the point I am making?

In that you are not addressing all four quadrants of the Venn diagram: only one of the four here.


On one virgin computer – install GnuBG but don’t play any matches on it. On the second system – install GnuBG and play matches against it for a period of time (for example a month). Then, using the process that I described in http://lists.gnu.org/archive/html/bug-gnubg/2009-08/msg00239.html, set up a match with the same seed on both computers (you can choose whatever seed you wish, as long as you use the same one on both). Start playing a match. On each computer you should get the same dice. Enter the SAME moves for yourself on each system. GnuBG should respond with the same moves on each computer. If GnuBG had been learning, the potential moves would have changed and the game outcome would have been altered.
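
The property this test relies on can be sketched on a single machine (Python's random.Random is itself a Mersenne Twister; the seed value and roll count here are arbitrary): two generators started from the same seed emit identical dice, no matter what either machine did beforehand.

import random

SEED = 12345                            # any agreed-upon seed works

def dice_stream(seed, n):
    rng = random.Random(seed)           # a Mersenne Twister under the hood
    return [(rng.randint(1, 6), rng.randint(1, 6)) for _ in range(n)]

machine_a = dice_stream(SEED, 20)       # the "virgin" install
machine_b = dice_stream(SEED, 20)       # the install that played for a month
assert machine_a == machine_b           # identical rolls on both computers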

Well, see, you just bypassed the initial learning: the built-in DB.

Otherwise, GNUBG would never improve its play by learning at ANY time.

You have to measure at the level of the critical mass of the hysteresis already inherent in the eigenstate of the scenario you convolve to decry my point.

A DB with millions of games played is a high hysteresis.

Otherwise you will not be playing "fair",

since the scenario you denote would take potentially a few million games to overcome the inherent learning present:

you present a task that would take years to complete.

And you avoid dealing with the initial baby learning steps already done and present in the hysteretic phase-shift mass needed to cause a cascade showing the falsity of your argument.


There is only one nondeterministic factor that I know of in the Neural Net that will alter the outcome of the Bot’s play. It is not previous moves by a player or learned knowledge – it is the “Noise” feature you can set for the Computerized Player (go to Settings/Players and select the Bot player). You will notice that there is a noise option, in deterministic and nondeterministic variants. If you use deterministic noise, the noise generated for the Neural Net is always the same given the same position. If you use this noise and play matches with the same seed, the bot will make the same plays. If you use nondeterministic noise then the noise is random, and not reproducible. If you have this option set, the bot will appear to play differently, AND in doing so the match will unfold quite differently.
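
A minimal sketch of the distinction, not GnuBG's actual noise code (the position ID shown and the sigma value are placeholders): deterministic noise is derived from the position itself, so the same position always gets the same perturbation; nondeterministic noise is fresh randomness on every call, so matches are not reproducible.

import hashlib
import random

def deterministic_noise(position_id, sigma=0.05):
    # Seed a private PRNG from a digest of the position, so the same
    # position always receives the same perturbation.
    seed = int.from_bytes(hashlib.md5(position_id.encode()).digest()[:8], "big")
    return random.Random(seed).gauss(0.0, sigma)

def nondeterministic_noise(sigma=0.05):
    # Fresh randomness every call: not reproducible across matches.
    return random.gauss(0.0, sigma)

pos = "4HPwATDgc/ABMA"                  # a position ID, for illustration
assert deterministic_noise(pos) == deterministic_noise(pos)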

With all this being said, during training many years ago it’s quite conceivable that the

Here you admit to bypassing the training stage where the hysteretic mass is low enough to practicably run the test you suggest.

PRNG used was not Mersenne Twister. It was likely something much simpler (and

So: you do not actually know.

It appears that your earlier post said that you DID know.

Were you simply mistaken, are you lying here, or something else?
sometimes not the same on each platform it was built on – this is based on a code review of the original 0.0 and 0.1 releases with the Training function of the day). If there was any bias because of PRNG biases or patterns, then that is set static in the neural net. However, since the neural net can use a multitude of PRNGs now and is NOT self-learning while you play, it is not plausible for the Bot to get an advantage by using potential PRNG biases while playing. The way it plays is fixed, based on static constructs within the engine itself: the weights file, the bearoff database, and the match equity table.
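
To see how a weak generator could leave a bias for training to absorb, here is a toy illustration (not a claim about whatever PRNG the early training actually used): reducing a uniform 8-bit value to a die face with a bare modulo over-represents the low faces, and dice drawn that way during training would freeze that skew into the weights.

from collections import Counter

# 256 is not a multiple of 6, so a bare modulo favours faces 1-4.
counts = Counter((x % 6) + 1 for x in range(256))
print(counts)   # faces 1-4: 43/256 each; faces 5-6: only 42/256 each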

So: what you have stated is that a full replay of the DB build phase has never been done.


Occasionally over the years bugs are fixed between releases that change or improve the Neural Net engine. The weights file itself has not been changed since 2006. It’s possible for different versions of GnuBG to produce differing results because of changes in the code, but not because of on-the-fly learning. Take two copies of the

the"on the fly learning" hysteresis at this point is IN THE CODE changes, and you are avoiding the point I am making.
 
same code and run them on different computers with the same seeds (and no nondeterministic noise), and the way the Bot plays against a human will be the same no matter how many games were played previously.

Only when the DB is preloaded.  And only to the level you are measuring.

And more correctly, asserting.

Because you have never actually done this for all possible games.

If you have, give me the log of the output.  You know that it has not been done or analyzed for completeness when PARTIALLY run.

This is a common form of simplistic prevarication typically accepted as valid scientific reasoning.



If you follow the steps to reproduce the rolls for a match as stated above, and you can get the bot to play differently starting with the same seed (and the human making all the same plays), the GnuBG team would like to see it – because likely there is a bug, or the product is not being used properly.

Again, you avoid the point I _was_ making maladroitly here.


Michael





--
Use Reply-To: & thread your email
after the first: or it may take a while, as
I get 2000+ emails per day.
--

Roy A. Crabtree
UNC '76 gaa.lifer#  11086

(mail, residence)
Roy A. Crabtree
3322 Wheeler Road SE
Oak Hill Apartments #T-4
Washington, DC 20032-4166
202-562-1909 US no voicemail
   (try after 2100EST)

(secondary mail)
Roy A. Crabtree
USPS POB 58097
Washington, DC 20034-8097
703-318-2106
(msgs only, use my name)
(best effort next day M-F pickup)

[When you hear/read/see/feel what a y*ehudi plays/writes/sculpts/holds]
[(n)either violinist {Menuhin} (n)or writer {"The Y*ehudi Principle"} (n)or molder (n)or older]
[you must strive/think/look/sense all of it, or you will miss the meanings of it all]

address@hidden Forwards only to:
address@hidden
address@hidden CC: auto to ^

http://musings-roy-crabtree.blogspot.com [& others]
http://www.authorsden.com/royacrabtree
http://skyscraper.fortunecity.com/activex/720/resume/full.doc
--
(c) RAC/IP, ARE,PRO,PAST
(Copyright) Roy Andrew Crabtree/In Perpetuity
   All Rights/Reserved Explicitly
   Public Reuse Only
   Profits Always Safe Traded
