Computer Chess Club Archives


Subject: Re: The King's New Clothes (Re: DB vs)

Author: Robert Hyatt

Date: 16:07:31 11/24/98


On November 24, 1998 at 18:54:58, blass uri wrote:

>
>On November 24, 1998 at 17:43:11, Robert Hyatt wrote:
>
>>On November 24, 1998 at 09:59:02, Amir Ban wrote:
>>
>>>On November 24, 1998 at 08:40:01, Robert Hyatt wrote:
>>>
>>>>On November 24, 1998 at 03:14:02, Christophe Theron wrote:
>>>>
>>>>>On November 23, 1998 at 22:38:17, Robert Hyatt wrote:
>>>>>
>>>>>>If they take (say) 5 minutes to do a 10 ply search, at 250M+ nodes per second,
>>>>>>that is over 300X the number of nodes a full-width search to depth=10 should
>>>>>>search.  If you factor in a q-search that is the same size as the full-width
>>>>>>part, we have a missing factor of 150 to account for.  I say that is *all*
>>>>>>search extensions.  And I say that is *far* more than any of the rest of us do
>>>>>>in terms of extensions.  How *else* would you characterize this?
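
A rough check of that arithmetic, as a standalone C snippet (the branching
factor of 38 and the minimal alpha-beta tree estimate of about 2*b^(d/2) are
textbook approximations, not figures from this thread; the result lands in
the same ballpark as the "over 300X" and "factor of 150" above):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double nps = 250e6;              /* Deep Blue, nodes per second      */
        double seconds = 5.0 * 60.0;     /* the 5-minute search quoted above */
        double searched = nps * seconds; /* ~75 billion nodes                */

        /* minimal alpha-beta tree, roughly 2*b^(d/2) for even depth d;
           b = 38 is the usual chess branching-factor estimate (assumption) */
        double minimal = 2.0 * pow(38.0, 10.0 / 2.0);

        printf("nodes searched:      %.3g\n", searched);   /* ~7.5e+10 */
        printf("minimal 10-ply tree: %.3g\n", minimal);    /* ~1.6e+08 */
        printf("ratio %.0fx; halved for an equal-size q-search: %.0fx\n",
               searched / minimal, searched / minimal / 2.0);
        return 0;
    }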
>>>>>
>>>>>I don't want to go into a heated discussion, but I notice:
>>>>>
>>>>>A program that does no forward pruning has a branching factor of (roughly) 5.
>>>>>
>>>>>A program that uses null move has a branching factor of (roughly) 3.
>>>>>
>>>>>  (5^10) / (3^10) = 165.38
>>>>>
>>>>>Weren't you looking for a factor of 150 or so ?
>>>>>
>>>>>If the IBM team is interested, I can provide some help with their null move
>>>>>implementation. This way we could have the power of Deeper Blue with only one
>>>>>of their chips stuffed into a simple PC. :)
>>>>>
>>>>>
>>>>>    Christophe
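
Christophe's ratio is easy to reproduce; a minimal C check, taking his rough
branching factors of 5 and 3 as given:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double full_width = pow(5.0, 10.0); /* b ~ 5, no forward pruning */
        double null_move  = pow(3.0, 10.0); /* b ~ 3, with null move     */
        printf("5^10 / 3^10 = %.2f\n", full_width / null_move); /* 165.38 */
        return 0;
    }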
>>>>
>>>>
>>>>One minor flaw in this discussion.  :)  Neither Junior *nor* Deep Blue uses
>>>>null-move.  *Now* how would you explain that factor of 1,000?
>>>>
>>>>Bob
>>>
>>>
>>>It seems you agree with Christophe's math. Strange, because 10 minutes earlier
>>>in another post you said the opposite:
>>>
>>>Quote:
>>>
>>>>>1. I'm not an expert on null-move, but I got the impression that it buys much
>>>>>more than a factor of 10 at great depths. The factor should be an exponential
>>>>>anyway, no ?
>>>>
>>
>>
>>Sorry... didn't mean to imply that... I was responding to his null-move
>>point only.  *no*, null move is *not* exponential... and it is quite easy
>>to figure this out.  If your branching factor is 3.0, null move makes you
>>about 10X faster... because you lop off 2 plies here and there with R=2.
>>Which is why I said "about 10X".  I can certainly provide some real data if
>>you'd like to see this... but otherwise wouldn't we keep getting *faster* as
>>we go deeper?  It doesn't work like that.  And that factor of 165 would be
>>*very* useful, because log base 3 of 165 is nearly five... that would mean
>>almost *five* more plies?  Sounds good to me.  *too* good.  :)
>>
>>
>>I.e., I can run Crafty with null off and null on to depth=5, then 6, then 7,
>>and so forth... and show that the rough factor of 10x or so carries from
>>iteration to iteration.
>>
>>Here's a sample, from a position about 15 moves into a normal game.
>>With null on:  8 plies in 8 seconds, 9 plies in 40 seconds.  Null off:
>>8 plies in 110 seconds, 9 plies in 410 seconds.  It may not be a good case,
>>as odd things happen with null on and off... but that is the "trend"...
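
Taking those timings at face value, a small C snippet contrasts the measured
speedup with what a truly exponential gain (5^d vs 3^d, Christophe's figures)
would predict; that is the "about 10X, not exponential" point:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* measured speedups from the sample above */
        printf("depth 8: %.1fx with null move\n", 110.0 / 8.0);  /* 13.8x */
        printf("depth 9: %.1fx with null move\n", 410.0 / 40.0); /* 10.3x */

        /* an exponential model would instead keep growing with depth */
        printf("exponential, depth 8: %.0fx\n", pow(5.0 / 3.0, 8.0)); /* 60x */
        printf("exponential, depth 9: %.0fx\n", pow(5.0 / 3.0, 9.0)); /* 99x */
        return 0;
    }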
>>
>>
>>>>yes, but so is the tree... the result is linear.  It (R=2, recursive) is worth
>>>>about 2 plies (maybe less) in the opening/middlegame, more in the endgame.  But
>>>>not a factor of 10.  At least for my implementation.
>>>
>>>End quote.
>>>
>>>But this doesn't concern me. In fact I no longer understand what the problem is
>>>here. What factor of 1000 about Junior needs to be explained, and why ?
>>
>>Here, again:  If we take Rebel, or any program without null-move, they are
>>currently searching to about 10 plies in 3-5 minutes.  At least that is
>>what I see running Rebel on my P6/200.  And it seems to be what Crafty might
>>do with a decent hash table (the above numbers were with the default, which
>>is small; I didn't think about that, so the non-null search got zapped a bit
>>by overrunning the hash table).  So... 10 plies at 80-100K nodes per second.
>>DB is doing 10-11 at 250M nodes per second.  *that* is the factor of 1,000
>>(actually 2,500) that I am looking at.  How come they don't get any deeper
>>than Rebel in nominal depth (which I assume is somehow equivalent to the
>>depth you reach, although you count things differently), yet they are
>>searching 2500 times faster?
>>
>>*that* is the question.  I propose that they *must* be doing far more extensions
>>than we are doing, to consume that many nodes to reach that nominal search
>>depth...
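
The factor in question, spelled out (the 80-100K and 250M nodes-per-second
figures are the ones quoted above):

    #include <stdio.h>

    int main(void) {
        double pc_nps = 100e3;      /* Rebel/Crafty class on a P6/200 */
        double db_nps = 250e6;      /* Deep Blue                      */
        double secs   = 5.0 * 60.0; /* a 5-minute search              */

        printf("speed ratio: %.0fx\n", db_nps / pc_nps);     /* 2500x    */
        printf("PC nodes in 5 min: %.3g\n", pc_nps * secs);  /* ~3e+07   */
        printf("DB nodes in 5 min: %.3g\n", db_nps * secs);  /* ~7.5e+10 */
        return 0;
    }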
>>
>>
>>>
>>>To recap: the argument was whether the search trees of PC programs, and Junior
>>>in particular, are deeper and thinner in comparison to the Deep Blue search
>>>tree.
>>>
>>>Bob said that the fact that DB reaches 10-11 plies is proof that this is not
>>>so. I believe he said that because he knows that null-movers can match this
>>>nominal depth in about equal time (but at about 1000x lower NPS). Christophe's
>>>math shows why, and incidentally confirms the point for null-movers.
>>
>>
>>Nope... his math doesn't work, and a trivial inspection shows why.  You
>>lose two plies here and there.  But I certainly don't get a factor of 100
>>faster.  The numbers above support that.  And Rebel/Deep Blue/Junior don't
>>use null-move, so I don't see the connection anyway...
>>
>>Note that I changed to "Rebel" because Ed reports "traditional plies", which
>>can be directly compared.  Your programs seem similar in strength, so I assume
>>your "different" numbers are somehow similar.
>>
>>>
>>>Junior doesn't match this nominal depth (it would be depth 19-21 for Junior),
>>>so there is no factor to explain. Junior is more aggressively extended than
>>>most PC programs, and therefore more than DB.
>>>
>>>Amir
>>
>>
>>How deep would you go if you were 1000 times faster?  And compare *that* to
>>Deep Blue's numbers that you have there.  I say their number will be much
>>*less* than yours, even after you correct for half-plies.  And if that is
>>true, then *they* must be extending more than you.  My complete point...
>
>Their number is similar, not much less.
>
>It depends on the position, but on my hardware (Pentium 200 MMX) it sometimes
>needs hours to finish depth 18 (9 plies).
>
>They search 2000 times faster than Junior 5 on a Pentium 200 MMX, so I would
>need to search for 100 hours to match them.
>
>I did not try to see what happens after 100 hours, but if I assume that I need
>one hour to get 9 plies, and I assume a factor of 5 per ply, then I get 12
>plies after 125 hours... and they got 11-12 plies.
>
>Uri
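
Uri's extrapolation as a loop (the one-hour baseline for 9 plies and the
factor of 5 per ply are his stated assumptions):

    #include <stdio.h>

    int main(void) {
        double hours = 1.0;                  /* assumed time for 9 plies    */
        for (int ply = 10; ply <= 12; ply++) {
            hours *= 5.0;                    /* assumed factor of 5 per ply */
            printf("%d plies: ~%.0f hours\n", ply, hours);
        }
        return 0;                            /* prints 5, 25, and 125 hours */
    }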


Try Rebel...  Amir's numbers are too difficult to compare, since he doesn't
count plies like everybody else...  On my machine Rebel gets to depth=10 with
little trouble in 5 minutes...  in the position for axb5.  DB hit 11 plies.
Close enough... yet Rebel is about 2500 times slower...

Which was my point...  DB is 2500 times faster, with a similar search (no null
move), yet no deeper?  Where do you suppose all those nodes are going?
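
In numbers: with an effective branching factor of about 5 (the rough figure
used earlier in the thread for programs without null move, not a measurement),
a 2500x speed advantage should buy roughly log base 5 of 2500 extra plies:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double speedup = 2500.0;  /* DB vs Rebel, from above          */
        double ebf     = 5.0;     /* rough effective branching factor */
        printf("extra plies the speed alone should buy: %.1f\n",
               log(speedup) / log(ebf));   /* prints ~4.9 */
        return 0;
    }

DB shows roughly one extra ply over Rebel here; the nodes behind the other
four or so "missing" plies going somewhere else is the extensions argument.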


