Computer Chess Club Archives



Subject: Re: IBM's latest monster

Author: Robert Hyatt

Date: 20:07:36 12/07/99



On December 07, 1999 at 20:38:22, Vincent Diepeveen wrote:

>On December 07, 1999 at 16:07:36, Robert Hyatt wrote:
>
>>On December 07, 1999 at 15:25:11, Vincent Diepeveen wrote:
>>
>>>On December 07, 1999 at 14:31:17, Robert Hyatt wrote:
>>>
>>>>On December 07, 1999 at 09:02:37, Vincent Diepeveen wrote:
>>>>
>>>>>On December 06, 1999 at 15:33:03, Robert Hyatt wrote:
>>>>>
>>>>>>On December 06, 1999 at 13:00:56, Georg v. Zimmermann wrote:
>>>>>>
>>>>>>>>A thousand fold increase would be
>>>>>>>>what, an additional 6 ply search in the same time?
>>>>>>>
>>>>>>>Let's do some math. 40^x = 1000, so x = log(1000) / log(40)
>>>>>>>= 3 / 1.602 = 1.9
>>>>>>>
>>>>>>>I think it gets you "1.9 ply" deeper if you do brute force. Now we need
>>>>>>>someone to tell us how much that is if you add HT and other modern wonder
>>>>>>>drugs. But I would be very, very surprised if you'd reach +6 ply.
>>>>>>
>>>>>>
>>>>>>DB has an effective branching factor of roughly 6, about the same as Cray
>>>>>>Blitz, which didn't use R=2/recursive null move.  log6(1000) is just under
>>>>>>4, so it would get about 4 plies deeper.  Certainly nothing to sneeze at...
>>>>>
>>>>>See my other post.  DB may be happy with a b.f. of 10.33.
>>>>>
>>>>>>But then again, this math is really wrong, because for each CPU, DB used
>>>>>>16 chess processors.  Each chess processor could search about 2.4M nodes per
>>>>>>second (they used almost 500 in total for DB2 in the last match).  With one
>>>>>>million processors, they would then have 16M chess processors, and would be
>>>>>>searching about 40,000,000,000,000 nodes per second.  At about 1 billion
>>>>>>(max) for DB2, this would be 40,000 times faster.  And log6(40000) is about
>>>>>>6, so they could hit about 6 plies deeper.  Very dangerous box...
>>>>>
>>>>>The more processors, the smaller the speedup.  Just attaching all the
>>>>>processors to the search might take a few minutes.
>>>>>
>>>>>Note that Hsu writes that they got very close to 1 billion positions per
>>>>>second but never hit that magic number.
>>>>>
>>>>>Vincent
>>>>
>>>>
>>>>Sure....  hitting 1B is not easy when you have _just enough_ chess processors
>>>>to peak at 1B.  But to hit 1B requires perfect speed-matching between the
>>>>chess processors and the SP, which doesn't happen.  I think he said that the
>>>>chess processors were running at about 70% of max speed because of this.  And
>>>>he also claims 30% efficiency (in a linear way) in his parallel search.  Which
>>>>means that no matter how many processors he adds, he gets about 30% of each one.
>>>>
>>>>As far as branching factor goes, he uses normal alpha/beta, so I have no
>>>>idea where you would get 10+.
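
For what it's worth, those two percentages line up with the node rates
mentioned elsewhere in this thread.  A minimal sketch, assuming 480 chess
processors ("almost 500" as stated above); the exact counts are assumptions:

    chips = 480           # chess processors in DB2, per the post above
    nps_per_chip = 2.4e6  # peak nodes per second per chip
    utilization = 0.70    # Hsu's ~70% speed-matching figure

    peak = chips * nps_per_chip     # ~1.15e9: why exactly 1B is hard to hit
    sustained = peak * utilization  # ~8.1e8: close to the 800M figure below
    print(peak, sustained)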
>>>
>>>See my post higher up in this thread.
>>>
>>>axb5 was a fail low, taking way over 3 minutes.
>>>
>>>800M nodes/sec * 180 seconds = 144 * 10^9 nodes.
>>>The 11th root of that is 10.33.
>>>
>>>Simple, nah?
>>>
>>>but the reason why is obvious:
>>>   - normal alpha-beta without good move ordering is a crime
>>>   - no hashtables
>>>   - in the normal search DB did a lot of extensions,
>>>     blowing up the search.  Extensions especially blow up the
>>>     search if you don't nullmove.
>>>   - I don't believe his 30% claim unless he was minimaxing.
>>>
>>>Vincent
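
Vincent's 10.33 is the depth-th root of the total node count, i.e.
(nodes)^(1/depth).  A minimal sketch reproducing it, with the 800M
nodes/sec and 3-minute figures taken from his post above:

    nps = 800e6      # assumed sustained search speed
    seconds = 180    # "way over 3 minutes"
    depth = 11

    total_nodes = nps * seconds        # 1.44e11
    print(total_nodes ** (1 / depth))  # ~10.33

Note this measures average nodes per ply of one search, which is not the
same quantity as the time ratio between successive depths used below.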
>>
>>
>>1.  Your math doesn't work, because you have _no_ idea how many nodes it
>>takes him to search a 10 ply tree.  Effective branching factor = 11 ply time /
>>10 ply time.  Anything else is a pure guess.  I see nothing that they do that
>>would drive the EBF beyond sqrt(38), which is roughly what alpha/beta is
>>supposed to give.
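
Hyatt's definition is a ratio of successive iteration times, not a root of
a total node count, and the two can disagree badly for the same search.  A
minimal sketch, where the 120s/20s timings are purely hypothetical:

    def effective_branching_factor(time_d, time_d_minus_1):
        # EBF as defined above: time to finish depth d over depth d-1
        return time_d / time_d_minus_1

    print(effective_branching_factor(120.0, 20.0))  # 6.0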
>
>In all measurements I do, I get an average number of legal moves of about 40
>at depths above 10 ply.
>
>Now there are 2 options:
>  - no extensions
>  - a lot of extensions
>
>Secondly, sorting moves in the last 4 ply is nearly impossible (no hashtable
>hits), so we can't compare it with our own good searches.
>
>No matter what you do, you're always getting very close to b.f. = 10.
>At 8 to 11 ply you can still be lucky, but then the extensions of Deep Blue
>will blow it up, unless it does a lot fewer extensions than you claim it
>does.

I just _love_ that kind of nonsense statement.  I don't claim _anything_
about DB's extensions.  I report what has been written in multiple ICCA
articles, in multiple books, and what was discussed at several ACM tournaments
with several others listening in.  So forget this "it does a lot less than I
claim" unless you want to claim that Hsu and company are making this stuff up.
Of course, then they somehow beat Kasparov with only a simple 11 ply search,
if we listen to you?  And that is _poppycock_.  _NOBODY_ is going to beat
Kasparov with an 11 ply search like the ones we are doing.  And I mean _NOBODY_.






>
>We see this blowing up of the search very clearly in older King
>versions.  I've had it in DIEP too.  Genius also used to have big
>problems at depths above 11 ply, though I didn't check how Genius 6
>is searching; I can't imagine it changed.


Just because it doesn't work for you doesn't mean it doesn't work for
them.  I used singular extensions in CB and didn't see this branching
factor increase.  Bruce uses a version he developed himself.  He isn't
seeing any serious "blow-up".  Maybe I (and they) spent more time studying
how it works than you have?

It makes no sense to dismiss what you don't understand or can't get to
work.  Just because _you_ can't get it to work doesn't mean others can't...
An important lesson to remember...
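
For reference, the singular-extension idea (per the Deep Thought / ICCA
literature) is to extend a move that is the only one whose reduced-depth
score beats a bound just below the best score.  A minimal sketch of the
test; the names, the margin, and the depth reduction are all hypothetical,
not CB's or DB's actual code:

    MARGIN = 50  # centipawns below the best score; real programs tune this

    def is_singular(pos, moves, best_move, best_score, depth, search):
        # best_move is singular if every alternative fails low against
        # (best_score - MARGIN) in a reduced-depth, zero-window search.
        bound = best_score - MARGIN
        for move in moves:
            if move == best_move:
                continue
            pos.make(move)
            score = -search(pos, -bound, -bound + 1, depth // 2)
            pos.unmake(move)
            if score >= bound:
                return False   # a second move is nearly as good
        return True            # caller extends best_move by one ply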



>
>Zarkov does threat extensions and also gets blown up above 10 or
>11 ply.
>
>With the extensions you described Deep Blue as doing, its branching
>factor is going to blow up its search.

Didn't for CB, didn't for Ferret.  Didn't for DB either, based on their
results...



>
>I know you don't have that experience yourself, but it's easy to
>experiment with if you are ready to burn a few billion nodes per
>search to test it out.

What on earth do you mean?  I have run _huge_ searches on the Cray in years
past, with SE and without.  I know _exactly_ how it affected my search: it
cost me about 1 ply.  My branching factor stayed fairly constant...




>
>If the opposite is true, that they don't have such a bad b.f. above
>depths of, say, 11 ply, then DB doesn't extend much.

You are arguing "A implies B" and "X implies Y", therefore "A implies X".
I don't see how you can make that leap of faith.

In particular: I_cant_do_it -> nobody_can_do_it.

I don't follow that deduction either...



>
>Personally I would bet they didn't experiment much with the full-blown
>machine either, at searches deeper than tournament level... it was said
>by the PR department of Big Brother to have been assembled ONLY to play
>Kasparov... and that because of this hasty assembly they had problems
>during the games, with it crashing once against Kasparov...


This is definitely a problem... but they ran DB Junior for _long_ searches,
so they had little unexpected search behavior...


>
>Make your choice...
>
>Vincent