Computer Chess Club Archives


Subject: Re: confronting Bob with an old posting from him

Author: Vincent Diepeveen

Date: 17:06:52 06/28/98




On June 28, 1998 at 14:16:03, Robert Hyatt wrote:

>On June 28, 1998 at 10:44:29, Robert Hyatt wrote:
>
>>On June 28, 1998 at 07:02:15, Vincent Diepeveen wrote:
>>
>>
>>I'm going to add my notes here at the top rather than below, as this
>>is a long post with lots of quoted material.
>>
>>Two points:  first, my original claim about a 20 ply search for DB was
>>and still is correct:  they can't do it.  And the math I gave below
>>certainly supports that (since they don't do null-move and must live
>>with a branching factor of 5.5 or so).
>>
>>Second, my math below was based on a branching factor of 3.  I am now
>>doing better than that after the heavy pruning I added in the q-search
>>a year or so ago.  2.5 vs 3 is a big change when you are cmputing
>>logarithms.
>>
>>My current analysis says that "current crafty" can do a 19 ply search,
>>if it could search 200M nodes per second exactly as I search now.  This
>>isn't completely possible due to the architecture of deep blue (hash is
>>not shared across all chess processors, they don't prune in q-search
>>because they use MVV/LVA and generate one capture (or out of check) move
>>at a time, etc.)
>>
>>So, after reading this old analysis, it is still correct.  20 plies is
>>still out of reach, although one more hardware redesign, or quadrupling
>>the number of processors might bring that within reach.  *IF* we could
>>make the deep blue hardware do all the things I currently do (I do R=2
>>null move, their hardware can't do this at all, I prune losers in the
>>capture search, they can't even estimate whether a move seems to lose
>>material or not in the current hardware) it would be possible to do 20
>>plies.
>>
>>*as* the hardware exists, 20 was, and still is, not doable.  Maybe that
>>is a clearer statement; if you read my most recent post and the one
>>Vincent quoted, this wasn't so clear.  My most recent post simply said that
>>*if* crafty could do 200M, it could hit 19 plies deep.  But not on the
>>current DB hardware due to the above limitations.
>>
>>I was a little careless; I didn't explain everything clearly enough
>>that it couldn't be interpreted in a way I didn't intend.
>>
>>Bob
>>
>>
>
>
>
>Even that didn't sound too clear.  Here's a simpler version...
>
>in my original post, I factored everything known about DB and their
>hardware into the equation, and found that 20 plies was unlikely.  In my
>more recent analysis, I took a "better" branching factor that I am now seeing
>in most cases (2.5 or so, sometimes better) and re-did the calculations, but
>with no regard to what their hardware *can't* do (i.e., no null-move in the
>hardware, no pruning captures in the q-search).  So my second set of
>calculations were off by at least a couple of plies, maybe more.  *IF* crafty
>could run 200M+ nodes per second, *in its present form* it could get close
>to 20 plies:  at least 19 where it does 12 now, 17 where it does
>10 now, and so forth.  200M doesn't seem difficult when Cray Blitz could hit
>10M.  It seems daunting on a PC of course, unless you factor in a bunch of
>processors and a parallel search.
>
>So, I think my original post was more accurate.  My "20 plies is possible" is
>probably way too optimistic at present.
>
>my mistake...
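
A quick sketch of the depth arithmetic in the quoted posts: if each
extra ply multiplies the tree by the effective branching factor b, a
speedup of S buys log(S)/log(b) extra plies.  The ~200K nodes a second
PC baseline below is an assumption for illustration; the branching
factors and the 200M figure come from the posts above.

    import math

    def extra_plies(speedup, branching_factor):
        # Each ply multiplies the tree size by ~branching_factor, so
        # a speedup of S buys log(S) / log(branching_factor) plies.
        return math.log(speedup) / math.log(branching_factor)

    # Assumed baseline: a PC program at ~200K nodes/sec, scaled up to
    # 200M nodes/sec -- a 1000x speedup.
    for bf in (2.5, 3.0, 5.5):
        print(f"bf {bf}: +{extra_plies(1000, bf):.1f} plies")
    # bf 2.5: +7.5 plies (12 -> ~19, matching the claim above)
    # bf 3.0: +6.3 plies
    # bf 5.5: +4.1 plies (why DB's hardware stays short of 20)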

In those days the issue was: suppose my program gets so many nodes
a second, how deep can I search, no matter how stupidly the DB team does it!

So when I said 18-20 ply is easy to do, people laughed at me.

Right now, Diep already gets 18-20 ply after a day of search.
It needs around 10K nodes a second * 3600 * 24 = 864M nodes.

That's with R=3 null move (for most programs R=2 vs R=3 makes no
difference, but in Diep it does).  Nevertheless, this was considered
*undoable* 2.5 years ago.

Note that this is just with 60MB for hash, and at those slow levels
a doubling of hash gives another ply because of the huge load factor.

How opinions change.  So 20+ ply for Diep is easily doable with 200M
nodes a second.  In fact, with say 1 GB for hashtables instead of the
60MB I'm using now, I'll get 20 within a few seconds.
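
A quick check using only the figures in this post: ~864M nodes buys
18-20 ply, and at 200M nodes a second that tree takes about 4.3
seconds, consistent with "20 within a few seconds".

    # Sanity check, using only the numbers from this post.
    nodes_for_20_ply = 10_000 * 3600 * 24   # one day at 10K nodes/sec
    print(nodes_for_20_ply / 200_000_000)   # ~4.32 seconds at 200M nps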

Further, we know from the printouts that DB got just 11 ply.
200M nodes a second * 180 seconds = 36 billion nodes.
So their effective branching factor is 36e9^(1/11) = 9.11.  That's
not near the Porsche 911, but it's quite huge actually.

It's more like minimax.
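
The branching-factor arithmetic above, as a sketch (only the 200M
nodes a second, the 180 seconds, and the 11 plies come from the post):

    # Effective branching factor implied by a node count and a depth:
    #   bf = total_nodes ** (1 / depth)
    total_nodes = 200_000_000 * 180    # 36 billion nodes in 3 minutes
    print(total_nodes ** (1 / 11))     # ~9.1, the figure for DB above
    print(total_nodes ** (1 / 19))     # ~3.6 would be needed for 19 ply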

Vincent



