Computer Chess Club Archives



Subject: Re: DB vs Kasparov Game 2 35. axb5

Author: Eugene Nalimov

Date: 12:15:01 11/21/98



On November 21, 1998 at 13:03:07, Robert Hyatt wrote:

>On November 21, 1998 at 11:55:18, Amir Ban wrote:
>
>>On November 21, 1998 at 10:34:55, Robert Hyatt wrote:
>>
>>>On November 20, 1998 at 23:25:37, James Robertson wrote:
>>>
>>
>>>>
>>>>Deep blue was searching 250,000,000 nps right? Would it take roughly 160 seconds
>>>>(40,000,000,000 / 250,000,000) for Deep Blue to search the same number of nodes?
>>>>
>>
>>You have to divide the Deep Blue NPS by some factor due to the unavoidable loss
>>when doing parallel search. This factor depends on the number of processors and
>>the efficiency of the algorithm. I think we can safely assume a factor of 5, at
>>least.
>
>It's not that bad.  Hsu reported roughly a 30% loss... i.e. if he searches 250M
>nodes per second, that is something close to a single processor searching
>about 170M nodes per second.  I see similar numbers in Crafty between 1 and 8
>processors, and also saw similar (actually maybe a little better) numbers
>with Cray Blitz...


I believe that Murray Campbell said that their "effective" node
rate is about 25-30% of "nominal" because of parallel overhead.
They have hundreds of processors, not just 8 or 16...
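The arithmetic in this thread (the 160-second figure upthread, and the competing overhead estimates) can be sketched as follows. This is only an illustration of the numbers quoted here: the 250M nps nominal speed and 40 billion node count come from James Robertson's question, and the efficiency figures are Hyatt's ~70% and the 25-30% attributed to Campbell; the function name is mine.

```python
# Sketch: how long a parallel searcher takes to cover a fixed node count,
# once nominal speed is discounted by parallel-search efficiency.

NOMINAL_NPS = 250_000_000       # Deep Blue's raw speed, as quoted in the thread
NODES = 40_000_000_000          # node count from the question upthread

def time_to_search(nodes: int, nominal_nps: int, efficiency: float) -> float:
    """Seconds to search `nodes`, where `efficiency` scales nominal speed
    (1.0 = perfect parallel speedup, no overhead)."""
    effective_nps = nominal_nps * efficiency
    return nodes / effective_nps

# Naive figure with no overhead (the "160 seconds" upthread):
print(time_to_search(NODES, NOMINAL_NPS, 1.00))   # 160 s
# Hyatt's estimate: ~30% loss, i.e. ~70% efficiency:
print(time_to_search(NODES, NOMINAL_NPS, 0.70))   # ~229 s
# The 25% effective-rate figure attributed to Campbell:
print(time_to_search(NODES, NOMINAL_NPS, 0.25))   # 640 s
```

The spread between the last two numbers is exactly the disagreement in this subthread: whether hundreds of processors cost you a ~1.4x factor or a ~4x factor over the nominal rate.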

Eugene

>
>>
>>
>>>
>>>just don't overlook the *huge* difference in the "shape" of the trees.  The
>>>tree by Dark Thought is basically shallow and wide, when compared to Deep
>>>Blue.  Because at this point Dark Thought has searched 20 plies, about
>>>*double* the depth of Deep Blue... Yet apparently DB went far deeper along
>>>the critical lines (seems singular extensions and other things they do work
>>>very well here)...
>>>
>>
>>It's true that Deep Blue did brute force (no forward pruning like null move),
>>but you'd expect a 20-ply null-move search to find what an (incomplete) 11-ply
>>full-width search supposedly found.
>>
>>One way to compare DB to other search engines is to consider the following: DB
>>gets a fail high on Qb6 after 1 sec., and this resolves after 5 sec.
>>
>>Amir
>
>
>I think their search is difficult to understand.  IE I'll point back to the
>position I posted last year on r.g.c.c about the c5 move in a game against
>Cray Blitz, in Orlando at the 88 or 89 ACM event.  They played c5 after
>failing high to +2.x, the game went *10* full moves further before *we*
>failed low to -2.x...  I was looking right at their output and they had
>this incredibly long PV showing that the bishop was going to be lost.  They
>saw it 20 full plies before we did.  Lots of micros tried this position last
>year, and almost all would play c5 (as we expected that reply ourselves in
>the real game).  But *none* had any clue that it was winning material.. even
>when they went far into the variation...
>
>The stuff they do with singular extensions and threats really shines in some
>positions... and probably costs them dearly in others...  But they have the
>horsepower to pay the price when it doesn't work, and then they kill us when
>it does..
>
>In Cape May (94 ACM) we ran with the full singular-extension algorithm
>enabled and promptly lost the first game we played, because we searched out
>the bottom of our 60 ply limit, and we didn't detect that...



