Computer Chess Club Archives



Subject: Re: Mathematical impossibilities regarding Deep Blue statements by Bob

Author: Vincent Diepeveen

Date: 08:37:05 01/30/02



On January 30, 2002 at 11:19:25, Gian-Carlo Pascutto wrote:

>On January 30, 2002 at 11:06:56, Vincent Diepeveen wrote:
>
>>>The branching factor is _directly_ related to the number
>>>of nodes needed to search to a certain depth.
>>
>>branching factor is the time needed to get to the next ply n+1,
>>this says nothing about the total number of nodes needed to
>>get to ply n.
>
>*Sigh*
>
>>Secondly there are 480 processors which all idle at the start,
>>and after playing a move it is a hell of a job to get them all
>>searching, and they had in the software part (first 5 plies)
>>already a lot of nonsense stored which helps relatively more
>>than it does for you and me, because we search more efficiently.
>
>I'm not aware of the exact details of their hardware and configuration.
>And neither are you as far as I know. What I do know is that the massively
>parallel machines I have seen had no big problems in utilizing all CPU's
>in a standard time control match.

There are publications by Hsu about this in IEEE 1999; did you never get
those? If not, get them.

He also mentions a search depth of 12 there.
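As a rough sanity check of that depth-12 figure (all numbers below are my own assumptions for illustration, not Hsu's), the nodes-per-second rate and an effective branching factor pin down the reachable depth, since total nodes to depth d grow roughly as b**d:

```python
import math

# Rough sanity check (assumed numbers, not Hsu's): how deep can a
# searcher get in one move?  Nodes to reach depth d grow roughly as
# b**d, so d ~ log(total_nodes) / log(b).
nps = 200_000_000          # the sustained speed claimed for Deep Blue
seconds_per_move = 180     # ~3 minutes/move at classical controls (assumed)
total_nodes = nps * seconds_per_move

for b in (4, 6, 8):
    depth = math.log(total_nodes) / math.log(b)
    print(f"EBF {b}: ~depth {depth:.1f}")
```

With an effective branching factor around 8 (plausible for plain alpha-beta without null-move pruning), 3.6e10 nodes lands near depth 12, which is at least consistent with the number above.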

Please ask Rainer Feldmann (of Zugzwang) how long it takes to get
all 512 of his processors working.

When I played it at IPCC 1999 it took 30 seconds, and that wasn't 512
processors but fewer, if I remember correctly (256 or so).

Now there is another problem Deep Blue was faced with: Hsu describes
that he could not do dangerous extensions in hardware. The reason is
the hardware timeout: every chess processor must deliver a result
within a certain amount of time.
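The timeout constraint can be sketched as follows; this is hypothetical illustration code (a node budget standing in for the real-time limit), not Hsu's design:

```python
# Minimal sketch (hypothetical, not Hsu's design): a fixed-function
# chess chip must deliver a result within a hard real-time budget, so
# it cannot do open-ended ("dangerous") extensions.  The timeout is
# modelled here as a node budget; when it runs out, fall back to a
# static evaluation and return *something*.

def children(node):
    return [node + 1, node + 2]   # toy tree: every node has two children

def evaluate(node):
    return 0                      # toy static evaluation

def search(node, depth, budget):
    """Negamax that never uses more than `budget` nodes."""
    if depth == 0 or budget <= 1:
        return evaluate(node), 1  # forced cutoff: deliver a result now
    used = 1
    best = -10**9
    for child in children(node):
        score, n = search(child, depth - 1, budget - used)
        used += n
        best = max(best, -score)
        if used >= budget:        # hard timeout reached: stop extending
            break
    return best, used

print(search(0, 8, 100))          # stays within 100 nodes at any depth
```

Any extension scheme that could blow past the budget is simply unavailable to such a chip, which is the constraint being described.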

DB used two different processor speeds, 480 chips in total (25 MHz and
33 MHz or something; there are not many statements on this). On paper it
could search over a billion nodes a second, but in reality they searched
200 million nodes a second, which was the figure they made big PR with.
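The arithmetic behind that factor can be written out; the per-chip rate below is my assumption for illustration, since IBM's published figures vary:

```python
# Back-of-the-envelope (per-chip rate is an assumption, not an IBM
# figure): 480 chess chips at roughly 2.1 million nodes/sec each give
# a paper peak near a billion nodes/sec; the sustained number used in
# the PR was 200 million.
chips = 480
nps_per_chip = 2_100_000                  # assumed ~2.1M nps per chip
peak = chips * nps_per_chip               # ~1.0e9 nps on paper
sustained = 200_000_000                   # the figure actually claimed
print(f"peak ~{peak/1e9:.2f}B nps, lost factor ~{peak/sustained:.1f}")
```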

So Hsu only lost a factor of 5 there because of the hardware problems.

Effectively he 'estimated' that of those 200 million nodes about 20%
were doing effective work for the alpha-beta search (plain alpha-beta,
no PVS or anything). In IEEE99 it is noted that other forms of search
(PVS and such) did not work better for them than plain alpha-beta. The
reason is guesswork; my guess is that the SP processors were already
handing jobs to the hardware processors without having alpha-beta
bounds yet, because instead of YBW they used a direct 'start to search'
scheme. Note that in DIEP I use YBW in principle, but also not always;
in fact I already split up work much sooner at the start of the search
so as not to let processors idle. On a dual this is much less of a
problem than with 4 processors; 4 processors is hell in this sense.
Nevertheless, imagine having 32 SP processors, with each single SP
processor controlling 32 hardware processors: keeping all those
processors busy is nearly impossible. Losing only a factor of 6 here is
*really* good.
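The difference between the two splitting policies mentioned above can be sketched in a few lines; this is illustrative code, not DIEP's or Deep Blue's:

```python
# Sketch (illustrative, not DIEP's or Deep Blue's code) of the Young
# Brothers Wait criterion vs. a direct 'start to search' split: under
# YBW a node may only hand sibling moves to other processors after its
# eldest brother has been searched, so a real alpha bound exists; the
# direct scheme hands out work immediately, with no bound yet.

def may_split_ybw(moves_searched):
    """YBW: split only after the first (eldest) move is fully done."""
    return moves_searched >= 1

def may_split_direct(moves_searched):
    """Direct scheme: give work to idle processors immediately."""
    return True

# Before the first move is searched, only the direct scheme splits.
print(may_split_ybw(0), may_split_direct(0))    # False True
print(may_split_ybw(1), may_split_direct(1))    # True True
```

The trade-off is exactly the one in the text: the direct scheme keeps processors busy sooner, but they search without useful bounds, so much of that work is wasted.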

So a parallel speedup of 20% (efficiency) over those 200 million nodes,
knowing it is in fact 480 processors, is realistic too, whereas the 50%
speedup claimed for Zugzwang I take with a grain of salt. But do not
forget that Zugzwang could afford to slow itself down a lot in order to
get a better speedup, whereas Deep Blue could accept a very bad speedup
as long as it was searching as fast as possible. :)
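Spelling out the bookkeeping behind those two efficiency claims (numbers taken from the post itself):

```python
# Speedup bookkeeping with the numbers from the post: effective
# speedup = parallel efficiency * processor count.
db_eff, db_procs = 0.20, 480    # Deep Blue: ~20% efficiency claimed
zz_eff, zz_procs = 0.50, 512    # Zugzwang: ~50% efficiency claimed
print(f"Deep Blue effective speedup: {db_eff * db_procs:.0f}x")
print(f"Zugzwang  effective speedup: {zz_eff * zz_procs:.0f}x")
```

That is, even at 20% efficiency the 480-processor machine behaves like roughly 96 processors working flat out, which is the sense in which the lower efficiency figure is still a large absolute speedup.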

Best regards,
Vincent

>--
>GCP




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.