Computer Chess Club Archives


Subject: Re: More correct analysis here...

Author: Vincent Diepeveen

Date: 05:35:03 02/02/02

On February 01, 2002 at 00:24:07, Robert Hyatt wrote:

>On January 31, 2002 at 15:19:38, Ed Schröder wrote:
>
>>
>>
>>Okay 9 plies it is, it does not matter.
>>
>>Here is a snip from the "IEEE MICRO" journal from 1999. It says 4 plies in the
>>hardware AS PART of the iteration, thus not ADD TO the iteration. The text is
>>below.
>>
>>Reading the June 2001 article I seriously doubt DB has shared memory for its
>>hash table although it's not entirely clear to me. If true that is about the
>>biggest search killer there you can imagine which makes a 16-18 ply (brute
>>force) search claim even more ridiculous. Text below.
>
>
>DB definitely did _not_ have shared memory.  The SP2 has no shared memory, it
>is a very fast message-passing system.
>
>The 4 plies makes no sense to me in any context, as Deep Thought searched

As shown in the paper I emailed you, it did 4 plies.

The 5-7 plies is their 'expected maximum depth', as shown in the paper.

The 12(6) notation is very clearly shown to mean 8 plies in software
plus 4 plies in hardware. On paper it could do up to 23 plies in
software, and they expected the hardware to add about 6 plies to that.

Amazingly, the hardware processors only keep a bound AND do singular
extensions on the first ply of the 4-ply hardware search. So in case
of a fail high at beta this produces an extension in a great many
positions, with massive search overhead, as there are no hash tables
in which to store information for reuse, no killer tables, nothing.
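Since the hardware searchers cache nothing, an extended re-search after
a fail high cannot reuse a single node from the first pass. A toy C
sketch of what that costs (my own illustration over a synthetic tree,
not Deep Blue's actual hardware logic):

/* Toy sketch, NOT Deep Blue's hardware: a fixed-depth negamax over a
   synthetic tree, with no transposition table and no killer table.
   When a fail high triggers an extended re-search, every node below
   is visited again from scratch. */
#include <stdio.h>

static unsigned long nodes;

/* toy "position": an integer id; toy eval derived from it */
static int eval(unsigned long p) { return (int)(p % 201) - 100; }

static int search(unsigned long p, int depth, int alpha, int beta) {
    nodes++;
    if (depth == 0) return eval(p);
    for (int m = 1; m <= 8; m++) {            /* 8 toy "moves" per node */
        int score = -search(p * 8u + m, depth - 1, -beta, -alpha);
        if (score >= beta) return beta;       /* fail high */
        if (score > alpha) alpha = score;
    }
    return alpha;
}

int main(void) {
    search(1, 4, -1000, 1000);                /* the 4-ply hardware search */
    unsigned long base = nodes;

    nodes = 0;                                /* fail high on the first ply: */
    search(1, 5, -1000, 1000);                /* re-search one ply deeper    */
    printf("4-ply search: %lu nodes; extended re-search: %lu nodes,\n"
           "all recomputed because nothing was cached\n", base, nodes);
    return 0;
}

With a hash table much of the re-search could be answered from stored
bounds and better move ordering; here every extension pays the full
subtree cost again.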

>4-5 plies in hardware, while deep blue searched 5-7 according to Hsu.  This
>depth is pretty-well fixed by the speed of the SP2.  The deeper the hardware
>goes, the slower a search goes and the host software ends up waiting on the

With so many processors the opposite should be true: the deeper the
search, the faster the thing goes, because you get better parallelism.
At least, that is how it works for us. Not for Deep Blue, as the
system, as the paper shows very clearly, was obviously optimized for
nodes per second.

DB2 aborted hardware processors when they hit a hardware timeout or
when more than 8000 nodes had been searched in the software part. At
deeper searches all programs usually search more nonsense, especially
with as many singular extensions as they did. In short, this means
many processors timed out, which brought them down to an average of
126 million nodes a second for searches longer than one minute
against Kasparov.
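A back-of-envelope check of what that average implies, assuming the
roughly 2.5 million nodes per second per chip that Hsu's papers quote
(an assumed peak figure, for illustration only):

#include <stdio.h>

int main(void) {
    double chips = 480.0;              /* chess processors in DB2 */
    double per_chip_peak = 2.5e6;      /* assumed peak nodes/sec per chip */
    double observed = 126e6;           /* the long-search average above */
    double peak = chips * per_chip_peak;
    printf("peak %.0fM nps, observed %.0fM nps -> %.1f%% utilization\n",
           peak / 1e6, observed / 1e6, 100.0 * observed / peak);
    return 0;
}

At those assumed rates the 480 chips would peak near 1.2 billion nodes
a second, so a sustained 126 million means the chips sat idle most of
the time.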

The way they 'estimated' the speedup of the 480 hardware processors is
NOT accurate. They ran a test with 1 processor versus a few, and then
extrapolated that to 480, which is not a good idea.
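To see why such an extrapolation is unreliable, here is a minimal C
sketch using Amdahl's law with an assumed 2% serial fraction
(illustrative numbers, not DB measurements): even a tiny serial share
flattens the curve long before 480 processors.

#include <stdio.h>

/* Amdahl's law: speedup on n processors given a serial fraction */
static double amdahl(double serial, int n) {
    return 1.0 / (serial + (1.0 - serial) / n);
}

int main(void) {
    double serial = 0.02;               /* assumed 2% serial work */
    double s4 = amdahl(serial, 4);      /* measured on a few processors... */
    double naive = s4 / 4.0 * 480.0;    /* ...then scaled linearly to 480 */
    double curve = amdahl(serial, 480); /* what the law actually predicts */
    printf("4 procs: %.2fx  linear extrapolation: %.0fx  Amdahl at 480: %.1fx\n",
           s4, naive, curve);
    return 0;
}

With these assumed numbers a near-linear 4-processor measurement
extrapolates to roughly 450x, while the actual curve tops out around
45x.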

Because the SP2 is not a shared-memory machine, the parallel speedup
was obviously horrible: a hash probe into another node's memory takes
a message round trip, so transpositions found on one node simply get
re-searched on another. Most likely a single node with 16 hardware
processors would have searched nearly as deep for them, as was shown
in Kasparov-Kramnik.

>chess hardware.  If the chess hardware searches too shallowly, say 4 plies,
>then the host software can't keep up.  For a given position/depth, there is
>a very precise "hardware depth" that optimizes performance...

>All explained by Hsu several times of course...

It is very clearly described now, yes.

>>
>>>Did you see the email from the DB team?  Is there any misunderstanding that?
>>
>>Please post.
>>
>>Ed
>>
>
>I did, twice, in response to Vincent...


