Computer Chess Club Archives



Subject: Re: IBM's latest monster

Author: Vincent Diepeveen

Date: 17:38:22 12/07/99



On December 07, 1999 at 16:07:36, Robert Hyatt wrote:

>On December 07, 1999 at 15:25:11, Vincent Diepeveen wrote:
>
>>On December 07, 1999 at 14:31:17, Robert Hyatt wrote:
>>
>>>On December 07, 1999 at 09:02:37, Vincent Diepeveen wrote:
>>>
>>>>On December 06, 1999 at 15:33:03, Robert Hyatt wrote:
>>>>
>>>>>On December 06, 1999 at 13:00:56, Georg v. Zimmermann wrote:
>>>>>
>>>>>>>A thousand fold increase would be
>>>>>>>what, an additional 6 ply search in the same time?
>>>>>>
>>>>>>Let's do some math: 40^x = 1000, so x = log(1000)/log(40) = 3/1.6 ≈ 1.9.
>>>>>>
>>>>>>I think it gets you "1.9 ply" deeper if you do brute force. Now we need someone
>>>>>>to tell us how much that is if you add HT and other modern wonder drugs.
>>>>>>But I would be very surprised if you reached +6 ply.
>>>>>
>>>>>
>>>>>DB has an effective branching factor of roughly 6, about the same as Cray
>>>>>Blitz, which didn't use R=2/recursive null move.  Log6(1000) is at most 4,
>>>>>so it would get about 4 plies deeper.  Certainly nothing to sneeze at...
>>>>
>>>>See my other post. DB can count itself lucky with a b.f. of 10.33.
>>>>
>>>>>But then again, this math is really wrong, because for each cpu, DB used
>>>>>16 chess processors.  Each chess processor could search about 2.4M nodes per
>>>>>second (they used almost 500 for DB2 the last match).  With one million
>>>>>processors, they would then have 16M chess processors, and would be
>>>>>searching about 40,000,000,000,000 nodes per second.  At about 1 billion
>>>>>(max) for DB2, this would be 40,000 times faster.  and log6(40000) is 6,
>>>>>so they could hit about 6 plies deeper.  Very dangerous box...
>>>>
>>>>The more processors, the smaller the speedup. Just attaching all the processors
>>>>to the search might take a few minutes.
>>>>
>>>>Note that Hsu writes that they got very close to 1 billion positions per
>>>>second but never hit that magic 1 billion number.
>>>>
>>>>Vincent
>>>
>>>
>>>Sure....  hitting 1B is not easy when you have _just enough_ chess processors
>>>to peak at 1B.  But to hit 1B requires perfect speed-matching between the
>>>chess processors and the SP, which doesn't happen.  I think he said that the
>>>chess processors were running at about 70% of max speed because of this.  And
>>>he also claims 30% efficiency (in a linear way) in his parallel search.  Which
>>>means that no matter how many processors he adds, he gets about 30% of each one.
>>>
>>>As far as branching factor, he uses normal alpha/beta, so I have no idea where
>>>you would get 10+.
>>
>>See a post higher up.
>>
>>axb5 was a fail low, taking way over 3 minutes.
>>
>>800M * 180 seconds = 144 * 10^9 nodes.
>>The 11th root of that is 10.33.
>>
>>Simple, no?
>>
>>But the reason why is obvious:
>>   - normal alpha-beta without good move ordering is a crime
>>   - no hashtables
>>   - in the normal search DB did a lot of extensions,
>>     blowing up the search. Extensions especially blow up the
>>     search if you don't nullmove.
>>   - I don't believe his 30% claim unless he was minimaxing.
>>
>>Vincent
>
>
>1.  Your math doesn't work, because you have _no_ idea how many nodes it takes
>him to search a 10 ply tree.  Effective branching factor = 11 ply time / 10 ply
>time.  Anything else is a pure guess.  I see nothing that they do that would
>drive the EBF beyond sqrt(38), which is roughly what alpha/beta is supposed to
>be.

All the measurements I do give an average number of legal moves of about 40
at depths above 10 ply.

Now there are two options:
  - no extensions
  - a lot of extensions

Secondly, sorting moves in the last 4 plies is next to impossible without
hashtable hits, so we can't compare it with our own well-ordered searches.

No matter what you do, you always end up very close to a b.f. of 10.
At 8 to 11 ply you can still be lucky, but then Deep Blue's extensions
will blow it up, unless it does far fewer extensions than you claim it does.
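
Just to put numbers on that, here is a quick back-of-envelope check in C.
It is only my own sketch: the 800M nodes per second, the 180 seconds and the
11 plies are the figures from earlier in this thread, sqrt(38) is the
alpha/beta optimum you refer to, and the variable names are mine:

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      double nps     = 800e6;   /* claimed nodes per second for DB        */
      double seconds = 180.0;   /* axb5 fail low, way over 3 minutes      */
      double depth   = 11.0;    /* nominal depth of that iteration        */

      double total = nps * seconds;                  /* 1.44 * 10^11 nodes */
      printf("total nodes       : %.3g\n", total);
      printf("11th root (b.f.)  : %.2f\n", pow(total, 1.0 / depth)); /* ~10.3 */
      printf("sqrt(38) baseline : %.2f\n", sqrt(38.0));              /* ~6.2  */
      return 0;
  }

Of course this assumes the whole 180 seconds went into a single 11 ply
iteration, which is exactly the part you call a pure guess.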

We see this search blow-up very clearly in older versions of The King.
I've had it in DIEP too. Genius also used to have big problems at depths
above 11 ply, though I didn't check how Genius 6 searches; I can't imagine
it changed.

Zarkov does threat extensions and also gets blown up above 10 or 11 ply.

With the extensions you describe Deep Blue doing, its branching factor is
going to blow up its search.

I know you don't have that experience yourself, but it's easy to experiment
with if you're willing to burn a few billion nodes per search to test it out.

If the opposite is true, and they don't have such a bad b.f. above depths of
say 11 ply, then DB doesn't extend much.
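
To make the blow-up argument concrete, here is a crude toy model in C. This
is purely my own assumption of a uniform tree, not anything from Hsu: take an
effective branching factor of 6 after alpha/beta, and let a fraction f of the
moves at every node be extended one ply, so the extended child is searched to
the same remaining depth as its parent:

  #include <stdio.h>

  int main(void)
  {
      double b = 6.0;                        /* effective b.f. with no extensions   */
      double fracs[] = { 0.0, 0.05, 0.10 };  /* fraction of moves that get extended */

      for (int i = 0; i < 3; i++) {
          double f = fracs[i];
          double n[12];                      /* n[r] = nodes at remaining depth r   */
          n[0] = 1.0;
          /* N(r) = 1 + b*((1-f)*N(r-1) + f*N(r)), solved for N(r);
             only finite while b*f < 1 */
          for (int r = 1; r <= 11; r++)
              n[r] = (1.0 + b * (1.0 - f) * n[r - 1]) / (1.0 - b * f);
          printf("f = %.2f : N(11)/N(10) = %.1f\n", f, n[11] / n[10]);
      }
      return 0;
  }

In this toy model f = 0 gives the plain b.f. of 6, f = 0.05 already pushes
the per-ply growth past 8, and f = 0.10 pushes it past 13; once b*f reaches
1, i.e. one extended move per node on average, the node count has no finite
limit at all. That is what I mean by extensions blowing up a search that has
no nullmove to prune them back.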

Personally I would bet they didn't experiment much with the full-blown machine
at searches deeper than tournament level either... it was said by the PR
department of Big Brother (IBM) to have been assembled ONLY to play
Kasparov... and that because of this hasty assembly it crashed once during
the games against Kasparov...

Make your choice...

Vincent


>2.  Move ordering that they do is very similar to ours...  Particularly in the
>software (first 8 plies + all the extensions).  Move ordering in the hardware is
>more simplistic of course, using MVV/LVA to sort captures.
>
>3.  They have hashing in software... but not in hardware.  The hardware supports
>hashing, but he lacked time to design/build a big multi-port memory for each
>group of 16 cpus...

30 SP nodes x 16 chess processors each = 480

>4.  I believe anything he says until I see evidence that he is misleading
>everyone.  So far it hasn't happened.  They did a lot of testing and the
>30% seemed pretty accurate.  Not good, of course...  but 30% of 512 is still
>a huge speed-up...

480 chess processors, not 512.
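
For what it is worth, if you take your own figures at face value (2.4M nodes
per second per chip, about 70% of max because of the speed matching to the
SP), the numbers line up with the 800M figure used above:

  480 chips * 2.4M nodes/second each  ~  1.15 * 10^9 nodes/second peak
  1.15 * 10^9 * 0.70                  ~  0.8  * 10^9 nodes/second sustained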



