Computer Chess Club Archives



Subject: Re: "Deep Blue ..." in 1995

Author: Robert Hyatt

Date: 13:26:08 10/14/02


On October 14, 2002 at 12:00:09, Vincent Diepeveen wrote:

>
>But still, 12 plies was good in 1997, and getting 126 million nodes a second
>was something the marketing department of IBM could do something with.
>
>In fact it was design criterion #1: get more nodes a second than
>the others. The rest was irrelevant.
>
>The best proof that they were maximizing nodes per second is found in
>all kinds of idiotic conclusions they drew.
>
>For example:
>  - plain alpha-beta is just as good as other forms of alpha-beta
>    (IEEE99)

It is within 10% of other approaches, and it eliminates a few issues that are
sometimes troubling, such as a fail-high immediately followed by a fail-low on
the re-search...
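
To make the "failhi/faillow in succession" point concrete, here is a minimal
sketch of an aspiration-window driver in C.  The search() interface and the
window size are illustrative assumptions, not Crafty's or Deep Blue's actual
code:

#define INF    32000
#define WINDOW 50   /* aspiration half-width in centipawns -- assumed value */

extern int search(int alpha, int beta, int depth);   /* assumed interface */

/* Aspiration search: start with a narrow window around the previous
   iteration's score and widen it whenever the search fails high or
   low.  A fail-high followed by a fail-low (or vice versa) forces
   repeated re-searches -- the instability mentioned above.  Plain
   alpha-beta over (-INF, INF) never has this problem. */
int aspiration_root(int prev_score, int depth) {
  int alpha = prev_score - WINDOW;
  int beta = prev_score + WINDOW;
  for (;;) {
    int score = search(alpha, beta, depth);
    if (score <= alpha)            /* fail low: widen downward, re-search */
      alpha = -INF;
    else if (score >= beta)        /* fail high: widen upward, re-search */
      beta = INF;
    else
      return score;                /* inside the window: done */
  }
}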

>  - processors get timed out after a while, and then other processors

What is the problem there?  That is a design decision.

>    resplit and more processors get joined in
>    (2001 document)


No document of theirs said that.  They said resplit _deeper_, not necessarily
_with more processors_.  They wanted to keep the processors doing short tasks
only, not deep searches that held _everything_ up.  If you had read his thesis
you would understand this point without commenting...



>  - null-move is not useful

What does that have to do with NPS?  You once said null-move was the reason we
can't get good parallel speedup.  That was false, and it was proven false by
experimental testing.  Now you say null-move slows down the NPS?  Not for me,
and that is easily proven...

In fact, you are arguing with yourself, since you say NPS and speedup increase
_without_ null-move.  We just finished _that_ argument a few weeks ago.
Remember?  Now you are on the other side of the fence?




>
>In short, everything was done to get as many processors as possible
>running, without worrying about efficiency.
>
>Even in 1997, everyone on this planet who had a reasonable chess program
>realized that plain alpha-beta was inferior to, for example, PVS,
>aspiration search, and all kinds of other variants.
>

PVS is a 10% improvement over plain-vanilla alpha-beta.  10% is not "inferior"
in any significant way.  It is just 10%.
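
For concreteness, the difference is small in code as well.  A minimal PVS
sketch in C (the position/move interface is an assumption for illustration,
not any particular engine's API):

typedef struct Position Position;   /* engine-specific type -- assumed */
typedef int Move;
#define MAX_MOVES 256
extern int  evaluate(Position *pos);                 /* assumed interface */
extern int  generate_moves(Position *pos, Move *m);
extern void make_move(Position *pos, Move m);
extern void unmake_move(Position *pos, Move m);

/* Principal Variation Search: the first move is searched with the full
   (alpha, beta) window, every later move with a null window
   (alpha, alpha+1); if a null-window probe fails high, re-search with
   the full window.  On a well-ordered tree this saves roughly the 10%
   mentioned above versus plain alpha-beta. */
int pvs(Position *pos, int alpha, int beta, int depth) {
  Move moves[MAX_MOVES];
  int i, n, score;
  if (depth == 0) return evaluate(pos);
  n = generate_moves(pos, moves);
  for (i = 0; i < n; i++) {
    make_move(pos, moves[i]);
    if (i == 0)
      score = -pvs(pos, -beta, -alpha, depth - 1);
    else {
      score = -pvs(pos, -alpha - 1, -alpha, depth - 1);  /* null window */
      if (score > alpha && score < beta)                 /* probe failed high */
        score = -pvs(pos, -beta, -alpha, depth - 1);     /* re-search */
    }
    unmake_move(pos, moves[i]);
    if (score >= beta) return beta;     /* fail-hard cutoff */
    if (score > alpha) alpha = score;
  }
  return alpha;
}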




>They spent no time on algorithmic optimizations of even, say, 10-20%.
>
>Incredible, but really true.
>
>Null-move was of course not used because it was brand-new in 1997, and
>some even said it was dubious. In fact the popular test set in 1997 was,
>if I remember correctly, a test set with some zugzwangs in it. Of course
>completely anti-null-move...
>



Null-move was _not_ "brand-new" in 1997.  It was used in Cray Blitz in the
1980s, and Crafty and Ferret (and probably Fritz and others) were using R=2
in 1995...
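
The R=2 recipe is only a few lines of code.  A sketch in C, using the same
assumed engine interface as the PVS sketch above (is_zugzwang_risk() is an
invented name for whatever zugzwang guard an engine uses):

#define R 2   /* null-move depth reduction, as used by 1995 */

typedef struct Position Position;                 /* assumed engine type */
extern int  evaluate(Position *pos);
extern int  in_check(Position *pos);
extern int  is_zugzwang_risk(Position *pos);      /* invented name */
extern void make_null_move(Position *pos);        /* pass the turn */
extern void unmake_null_move(Position *pos);

/* Null-move pruning: give the opponent a free move.  If a reduced-depth
   search still fails high, the position is almost certainly good enough
   for a cutoff.  It is skipped when in check or at zugzwang risk (e.g.
   pawn-and-king endings), which is why the zugzwang test set mentioned
   above is "anti-null-move". */
int search(Position *pos, int alpha, int beta, int depth) {
  if (depth == 0) return evaluate(pos);
  if (depth > R && !in_check(pos) && !is_zugzwang_risk(pos)) {
    int score;
    make_null_move(pos);
    score = -search(pos, -beta, -beta + 1, depth - 1 - R);
    unmake_null_move(pos);
    if (score >= beta) return beta;     /* null-move cutoff */
  }
  /* ... normal move loop goes here, as in the PVS sketch above ... */
  return alpha;
}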





>But more important is that when you use null-move, your nodes per second
>go down. Full-width you can search *much* faster.
>

Crafty with sel=0/0, on a single 550MHz CPU, searches 152K nodes per second
from the opening position.  Normal Crafty searches 151K.  That is "slower"?
"Significantly slower"?  Less than 1% slower is significant???



>The resplitting of the processors is the really sick thing it did.
>It is a pathetic design decision. If you let the hardware processors
>search a bigger depth, say 4 plies instead of 2 or 3, then of course
>you get less communication between the slow 100MHz RS/6000 processors
>and the hardware processors. So you get more nodes a second. However,
>it is very inefficient to do searches deeper than 2 plies with the
>hardware processors.
>
>In fact, a statement from Chrilly was that the move ordering in hardware
>is so bad that he could not believe Deep Blue did 4 to 6 plies in the
>hardware processors unless they were just out for nodes a second.
>
>The move ordering gets really horrible, comparable to a random move
>ordering. No killer moves, no hash table info to use, etcetera!
>
>Brutus does 2-ply searches in hardware, and 3-ply in the endgame. That's
>WITH null-move.
>
>They did 4 to 6 plies. No null-move. Even full-width. Just some forward
>pruning in the last few plies. Most likely 1 to 2 plies.
>
>There is another load of design decisions taken to get a bigger NPS.
>
>Take for example the slow eval and the quick eval.
>
>That's a form of lazy evaluation. It won't be long before no commercial
>programs use lazy eval anymore. I do not want to mention names here, but
>I know of 2 more commercial programs that stopped doing lazy evaluation.
>
>There is a big difference between forward pruning in the last few plies
>and lazy evaluation, as far as correctness of the search tree goes.
>
>You can see forward pruning as something you do not see now, but which
>you perhaps see at a later stage. So you weaken your search with it, but
>in the end the same moves should get played.
>
>On the other hand, lazy evaluation CHANGES the evaluation and limits
>its capabilities.
>
>There is no difference between lazy evaluation and calling it a quick
>evaluation; the principle is the same.
>
>In fact, I experimented with a quick evaluation that can approach the
>big evaluation within an accuracy of 3 pawns in 99% of the positions.
>
>It's horrible if I use it.
>
>Deep Blue did use it, of course. It would be 2 times slower if it didn't.
>
>What is interesting to see about Deep Blue is how rarely it changes its
>best move. If it locks onto a certain move, it is most likely going to
>play it. Usually that happens only if you search very dubiously, and
>even then only when the evaluation is also very consistent.
>
>Best regards,
>Vincent
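
For reference, the lazy-eval / quick-eval idea criticized above is just a
short-circuit in the evaluation.  A minimal sketch in C, reusing the assumed
engine interface from the sketches above (the 300-centipawn margin echoes the
3-pawn accuracy figure quoted; the function names are illustrative):

#define LAZY_MARGIN 300   /* 3 pawns, matching the accuracy quoted above */

typedef struct Position Position;               /* assumed engine type */
extern int quick_evaluate(Position *pos);       /* material + cheap terms */
extern int slow_evaluate_terms(Position *pos);  /* expensive positional terms */

/* Lazy evaluation: if the quick score is so far outside the (alpha, beta)
   window that even LAZY_MARGIN worth of positional terms could not bring
   it back, return it at once and skip the slow terms.  This is exactly
   where the correctness complaint comes from: whenever the margin is
   wrong, the score returned differs from the full evaluation. */
int evaluate(Position *pos, int alpha, int beta) {
  int score = quick_evaluate(pos);
  if (score + LAZY_MARGIN <= alpha || score - LAZY_MARGIN >= beta)
    return score;                 /* cutoff: skip the expensive part */
  return score + slow_evaluate_terms(pos);
}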


