Computer Chess Club Archives



Subject: Re: Crafty modified to Deep Blue - Crafty needs testers to produce outputs

Author: Robert Hyatt

Date: 21:46:25 06/18/01



On June 18, 2001 at 19:06:05, Vincent Diepeveen wrote:

>
>This is utter nonsense:
>
>   a) their hash table was already filled
>   b) for the first 30 seconds or so, their parallel search is
>      very inefficient. For the first 10 seconds or so they can
>      only get a few processors working together efficiently;
>      only after perhaps a minute are all the processors slowly
>      working somewhat efficiently.
>      I already have this problem with 4 processors, Zugzwang
>      clearly had this problem, etcetera.


_you_ might have this problem. I don't.  I will be quite happy to
post some speedup numbers for 1-16 processors with Cray Blitz, where
the search time limit is 1 second max.  That speedup is no different
from the speedup produced for 60-second searches.  Ditto for Crafty,
which you can verify easily enough.
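For reference, that speedup figure is nothing more than the one-CPU
search time divided by the N-CPU time on the same position. A minimal
sketch of the computation (the timings here are placeholders, not
actual Cray Blitz or Crafty measurements):

#include <stdio.h>

/* Speedup for N processors is T(1)/T(N) on the same position.
 * The timings below are made-up placeholders, NOT measured
 * Cray Blitz or Crafty numbers. */
int main(void) {
    int    cpus[] = { 1, 2, 4, 8, 16 };
    double time[] = { 60.0, 31.0, 16.5, 9.0, 5.2 };  /* seconds */

    for (int i = 0; i < 5; i++)
        printf("%2d cpus: speedup %.2f, efficiency %.0f%%\n",
               cpus[i], time[0] / time[i],
               100.0 * time[0] / (time[i] * cpus[i]));
    return 0;
}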



>   c) Crafty has an even better branching factor than Deep Blue had
>      when it searches the same way Deep Blue did.
>      If Crafty dies because of extensions, then probably DB did too,
>      but the searches were simply too short (3 minutes at most)
>      to show this. Older versions of The King also simply die
>      from extensions and full-width search once they reach around
>      6 plies of full-width depth (the same depth DB got). DIEP had
>      the same problem in my tests. You simply die around 11 to 13
>      plies full-width; extensions completely blow up the tree then.
>      There is too little data to suggest DB's branching factor was
>      good. I never saw hours of their outputs. Just calculate the
>      error margin: the first few plies come quickly out of the hash
>      table (because of better sorting), and more processors start
>      to join the search tree.
>      So in short there is nothing to base your conclusion on,
>      whereas both Uri and I have given loads of proof and insight
>      that the opposite is true!



You say "first few ply out of hash table" but then you say "they can't hash in
last 6-7 plies which is killing performance."  Which is it?  It can _not_ be
both.

But we can solve this easier.  Lets take the first move out of book for all 6
games.  And compute the branching factor for those.  No hash there.
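The computation itself is trivial once you have the logs. A minimal
sketch, with made-up node counts standing in for what the real game
outputs would provide:

#include <stdio.h>
#include <math.h>

/* Effective branching factor: the ratio of node counts (or times)
 * between successive iterations, or the per-iteration growth rate
 * over a whole iterative-deepening search.  Node counts below are
 * made up; real ones would come from the game logs. */
int main(void) {
    double nodes[] = { 1.0e3, 3.6e3, 1.3e4, 4.5e4, 1.6e5, 5.5e5 };
    int n = 6;

    for (int d = 1; d < n; d++)
        printf("iteration %d -> %d: ebf %.2f\n",
               d, d + 1, nodes[d] / nodes[d - 1]);
    printf("overall ebf: %.2f\n",
           pow(nodes[n - 1] / nodes[0], 1.0 / (n - 1)));
    return 0;
}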



>
>>Knowing that, how can you compare the two?  Answer: you can't...  not without
>>a lot more information that we simply don't have.
>
>What I know is that DB most certainly had no hash table in the
>hardware search. This is easy to prove.
>
>What I know 100% sure is that I completely outsearch Deep Blue at
>3 minutes a move, both positionally and strategically; perhaps only
>tactically does it see the same.

<sigh>.....  Perhaps one day you will get to tangle with DB Jr and we can
put this nonsense to rest, once and for all.



>
>Perhaps I'm not much stronger tactically; it seems they extended a
>load of things in the first few plies. But when my new machine
>arrives (a dual 1.2GHz K7), I'm pretty sure I'll also be tactically
>stronger than DB.
>
>This is, however, *so hard* to prove that it's irrelevant to the
>discussion.
>
>Objective analysis by strong chess players clearly indicates that DB
>made huge numbers of bad moves that today's chess programs do not
>make, while the good moves all get reproduced by those same programs.
>
>'Saving your ass' is something today's programs are very good at
>compared to 1997.
>
>In general, the weakest link programs had in 1997 is completely gone.
>
>I think it more than logical that *everything* about DB sucks by
>today's standards, just like everything about today's programs will
>suck compared to DIEP version 2004.
>
>Schach 3.0 from 1997 was the tactical standard in those days
>(null move, hash table, singular extensions, etcetera); I now outgun
>it tactically *everywhere* and on nearly *every* trick.
>
>Its branching factor sucks.
>
>What we all forget in all these discussions is that in 1997 I was
>laughed to hell in rgcc, especially when I said that branching
>factors could get much better above 10 plies when using null move
>and loads of hash table.
>
>It was even posted, by a guy called Robert H., that getting a
>branching factor under 4.0 would be *impossible*.
>
>With 200M nodes a second, a search depth of 11 to 13 plies, a load
>of extensions, and an evaluation considering at least mobility a bit
>(DB clearly did a simplistic count of how many squares a queen
>reaches, for example), 11 to 13 plies was *GREAT* then.
>
>Especially knowing that the chip was designed not in 1997 but in
>the years before that.
>
>Now we are all USED to null move and used to big hash tables.
>DB lacked both.
>
>Logically, we outclass it by today's standards now.
>
>Just for fun, someone might want to play a match:
>
>Rebel 8 on a 200MHz MMX machine (the fastest hardware available for
>Rebel at the start of 1997; the K6 was only released later that year)
>versus a top program of today at 1.33GHz.
>
>Of course, both should use the books from their release dates, so
>Rebel using the 1997 book and today's program using today's book.
>
>Of course, all games at the 40-moves-in-2-hours level. Other levels
>are no fun and would of course be an even clearer walkover.
>
>I can tell you an even funnier story. Jan Louwman matched DIEP
>against Nimzo 98 with the '98 book, both on K7-900s, 3 hours a game.
>
>A *complete* walkover.
>
>In 1998 Nimzo 98 was the best program on the SSDF list; not world
>champion, but definitely one of the stronger programs.
>
>Played on today's hardware, it gets completely destroyed by today's
>software.
>
>In 1997 and 1998, outsearching the opponent still dominated the
>commercial programmers' thinking; quality in other areas was,
>relatively speaking, not so relevant.
>
>Now a program must be strong *everywhere*.
>
>  - a good book (so, for example, either playing 1.e4 well prepared
>                 or never playing 1.e4)
>  - good development of pieces and quick castling (Shredder!!!)
>  - a reasonable buildup in the middlegame
>  - a good endgame
>  - EGTBs
>
>And during play, nothing weird in the pruning that causes weird
>scores to get backed up to the root.
>
>So the search must be very 'pure' and undisturbed.
>
>In 1997 the common belief of most scientists on rgcc was that null
>move was very dubious. Of course, most programmers already believed
>in it.
>
>Some simply couldn't believe in it, as their search was built upon
>pruning and full-width search.
>
>One of them simply skipped all those pruning problems (which would
>have taken another 10 years of design) and searched 11 to 13 plies
>full-width.
>
>Can't blame him for doing that.
>
>But what was wise in 1997 is now, of course, completely outdated.
>
>Best regards,
>Vincent
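For anyone who didn't follow the rgcc debates: null-move pruning, the
technique at the center of all this, gives the opponent a free move
and uses a reduced-depth search as a cutoff test. A minimal sketch of
the recursive form with an R=2 reduction (Position and the engine
routines are hypothetical stand-ins, not Crafty's or DIEP's actual
code):

/* Null-move pruning in a negamax alpha-beta search.  Position and
 * the routines below are hypothetical stand-ins, not code from any
 * real engine. */
typedef struct Position Position;

extern int  evaluate(const Position *pos);   /* static evaluation */
extern int  in_check(const Position *pos);
extern void make_null_move(Position *pos);   /* pass the move     */
extern void unmake_null_move(Position *pos);

#define R 2  /* null-move depth reduction */

int search(Position *pos, int depth, int alpha, int beta) {
    if (depth <= 0)
        return evaluate(pos);

    /* Give the opponent a free move and search to reduced depth
     * with a zero-width window.  If the score still reaches beta,
     * the real moves will almost certainly fail high too, so cut
     * off immediately.  Skipped in check, where the null move is
     * illegal and zugzwang effects are worst. */
    if (!in_check(pos) && depth > R) {
        make_null_move(pos);
        int score = -search(pos, depth - 1 - R, -beta, -beta + 1);
        unmake_null_move(pos);
        if (score >= beta)
            return beta;
    }

    /* ... generate and search the real moves here, raising alpha
     * as better ones are found ... */
    return alpha;
}

That reduced-depth cutoff search is what pushed effective branching
factors below the 4.0 figure being argued about above.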


