Computer Chess Club Archives


Subject: Re: More correct analysis here...

Author: Ed Schröder

Date: 12:19:38 01/31/02


On January 31, 2002 at 10:33:22, Robert Hyatt wrote:

>On January 31, 2002 at 03:35:50, Ed Schröder wrote:
>
>>
>>
>>Is this really you Bob?
>>
>>I have seen Cray Blitz playing, with Mike Valvo in ecstasy calling through the
>>microphone to the participants and spectators, "Cray Blitz is hitting the 9th
>>ply, folks!". And all the programmers trembled, myself included: gee, 9 plies,
>>who can win against that hardware monster?
>>
>>We are talking about Munich 1986, where Cray Blitz was considered somewhat
>>faster than Hans Berliner's Hi-Tech. Hi-Tech searched 8 plies on average in the
>>middle game and so did Cray Blitz. Been there, seen it.
>
>Not in Cologne it didn't.  I still have the logs.  Cray Blitz searched
>to 9 plies on occasion and 10 plies many times.  I can certainly post one
>if you want to see it.  We were searching 8 plies in 1983 at 40K nodes per
>second on a dual-CPU X-MP...
>
>>
>>Hi-Tech was able to get 100K NPS, you somewhat higher, period!
>
>We were doing 200K-300K as I said.  If you are talking about the summer
>when the WCCC was held, we were doing 200K.  If you are talking about the fall,
>Cray had a faster machine and we were doing 300K.  I was talking about
>the latter...
>
>
>
>>
>>With 100K NPS you typically search 8 plies (brute force!) in the middle game
>>and not 10-12 plies as you imply.
>
>I'm not going to make this a big argument, as I wrote the thing, and in Cologne
>you could _not_ see Cray Blitz's output, because I was operating Cray Blitz
>in Birmingham and relaying just the best move to Harry...  I have no idea
>what you thought you saw, but it wasn't _my_ program.  As I said, in 1983 we
>were doing 8 plies, _just_ like Belle, which was running at 160K nodes per
>second with a somewhat less efficient hardware search.  In 1986 we were
>hitting 9 all the time and saw 10 about every third search or so.  Deep Thought
>in 1989 was a ply or two deeper than us...  and in 1989 we were doing 10 all
>the time at about 500K nodes per second...


Okay, 9 plies it is; it does not matter.

Here is a snip from the "IEEE MICRO" journal from 1999. It says the hardware
searches 4 plies AS PART OF the iteration, thus not ADDED TO the iteration. The
text is below.

Reading the June 2001 article, I seriously doubt DB has shared memory for its
hash table, although it's not entirely clear to me. If true, that is about the
biggest search killer you can imagine, which makes a 16-18 ply (brute force)
search claim even more ridiculous. Text below.

>Did you see the email from the DB team?  Is there any misunderstanding about that?

Please post.

Ed

=======================================================

Deep Blue is a massively parallel system designed for carrying out chess game
tree searches. The system is composed of a 30-node (30-processor) IBM RS/6000
SP computer and 480 single-chip chess search engines, with 16 chess chips per SP
processor. The SP system consists of 28 nodes with 120 MHz P2SC processors,
and 2 nodes with 135 MHz P2SC processors. The nodes communicate with each
other via a high-speed switch. All nodes have 1 GB of RAM and 4 GB of disk.

=======================================================

The search occurs in parallel on two levels, one distributed over the IBM
RS/6000 SP switching network and the other over the Micro Channel bus inside a
workstation node. For, say, a 12-ply search, one of the workstation nodes,
working as the master for the entire system, would search the first four plies
in software. (A ply represents a move by either player.)

After four plies from the current game position, the number of positions
increases about a thousand times. All 30 workstation nodes, including the
master node, then search these new positions in software for four more plies.
The number of positions increases by another thousand times.

At this point, the chess chips jump in and finish the last four plies of the
search, including quiescence search.

Partitioning the search into the (two-level) software search and the hardware
search permitted a great deal of design flexibility, yet maintained overall
search speed. The software handled less than one percent of the total positions
searched, but it controlled about two thirds of the search depth. The software
portion of the search can be arbitrarily selective without slowing down the
system.

The eight plies of software search performed on the RS/6000 SP included many
complicated search extensions, which extended the search deeper along lines the
computer considered “forcing.” Some experimental evidence suggested that the
playing strength would increase significantly if the search extensions went all
the way down to quiescence search.

Implementing the full software search extensions on the chess chip seemed too
risky a proposition, given the design time constraint. During the 1997 match,
the software search extended the search to about 40 plies along the forcing
lines, even though the nonextended search reached only about 12 plies.

DEEP BLUE IEEE MICRO

=================================================
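To make the depth accounting in the excerpt concrete, here is a minimal Python
sketch. It is purely illustrative, not DB's code: the 4+4+4 split and the
roughly 1000x growth per four-ply software stage come straight from the
article; everything else is assumed.

# Rough depth accounting for the partition the article describes (illustrative only).
MASTER_PLIES      = 4   # first four plies, software, on the master SP node
DISTRIBUTED_PLIES = 4   # next four plies, software, across all 30 SP nodes
HARDWARE_PLIES    = 4   # last four plies, finished by the chess chips

nominal_depth = MASTER_PLIES + DISTRIBUTED_PLIES + HARDWARE_PLIES
print(nominal_depth)         # 12 -- the hardware plies are PART OF the iteration,
                             # not added on top of it

growth_per_stage = 1_000     # ~1000x more positions per four-ply software stage
print(growth_per_stage)      # positions after the master node's four plies
print(growth_per_stage ** 2) # positions after the 30-node software stage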








>>Say 200K is good for 8 plies average; being 1000x faster with a branching
>>factor of 4 gives: 4x4x4x4x4 = 1024 -> 5 extra plies.
>>
>>So with 200M NPS you might be able to search 13 plies brute force in the best case.
>>
>>Subtract a couple of plies (1 to 3) for the way DB did singular extensions and
>>the picture fits, that is: DB was searching 10-12 plies as the log files
>>confirm.
>>
>>This 12(6) isn't 18, you must have misunderstood its meaning.
>>
>>Ed
>
>
>Did you see the email from the DB team?  Is there any misunderstanding about that?
>
>It seems pretty clear to me.  And although I busted the math yesterday, here
>is a better analysis:
>
>Their branching factor was roughly 4, obtained from their logs.  That means
>that they multiply the time by 4 for each iteration.  Looking at their logs,
>they typically searched to 10(6) or 11(6).  On occasion they got to 12(6), but
>it seemed to time out before finishing, so I didn't count those.
>
>10(6) is 16 plies according to Hsu.
>
>I tried Crafty on several opening, middlegame, and endgame positions.  I averaged
>the total nodes searched for a 1-ply search and got roughly 100.
>
>16 plies requires 4^15 more nodes than 1 ply...  4^15 is 2^30, which is about
>one billion.  They need to search 100 billion nodes to get to depth=16, if
>we assume their q-search looks something like mine.  100 billion nodes only
>needs 1000 seconds if they searched 100M nodes per second.  But we know they
>averaged 200M according to Hsu/Campbell, which drops that to 500 seconds.
>And we also know that deeper searches might not always need that many nodes to
>complete when move ordering is good and hashing is lucky.
>
>I don't see _anything_ that says they can't reach 16-17 plies on normal
>searches, and go beyond that in special cases.  Crafty seems to search about
>12 plies or so in the 60 10 time controls we used in CCT, but on occasion it
>will run out to 15 or 16 in certain types of tactically obvious positions...
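
Since both estimates above turn on the same branching-factor arithmetic, here
is a quick back-of-the-envelope check in Python. The branching factor of 4, the
~100 nodes for a 1-ply search, and the 200M NPS figure are all taken from the
posts above; nothing here is measured.

# Rough check of the branching-factor arithmetic quoted above.
bf = 4                     # effective branching factor (from the logs, per Bob)
nodes_1ply = 100           # ~nodes for a 1-ply Crafty search (Bob's figure)
nps = 200_000_000          # Deep Blue's average speed per Hsu/Campbell

# Ed's estimate: 1000x more speed buys about 5 extra plies, since 4^5 = 1024.
print(bf ** 5)             # 1024, roughly the 1000x speed difference

# Bob's estimate: a 16-ply search needs 4^15 times the nodes of a 1-ply search.
nodes_16ply = nodes_1ply * bf ** 15
print(nodes_16ply)         # 107,374,182,400 -- about 100 billion nodes
print(nodes_16ply / nps)   # ~537 seconds, in the ballpark of the quoted 500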


