Computer Chess Club Archives




Subject: Re: LCT II Fin4, Deep Thought, and Deep Blue (was Re: LCT II results...)

Author: Don Dailey

Date: 12:58:46 01/07/98


Hi Bob,

A lot of your response I can chalk up to a difference of opinion,
which is a good thing if learning takes place.   I just want to
address a few issues:

I mentioned something about Deep Blue not doing repetition tests in
the hardware part of the search.   You assumed I said the quiescence
part of the search.   My program also limits the rep tests, but not
in the last 4 ply of the MAIN search.   I think you just misread my
section; you were probably too eager to respond.
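To be concrete, the sort of limited rep test I'm describing can be
sketched like this (a minimal illustration in C; the names, the path
stack, and the 4-ply cutoff are mine for illustration, not anyone's
actual engine code):

```c
/* Sketch: a repetition test that is skipped near the leaves.
   Assumes hash_stack[] holds the Zobrist keys of the positions
   on the current search path, indexed by ply. */
#include <stdint.h>

#define MAX_PLY    64
#define REP_CUTOFF 4   /* illustrative: no rep tests in the last 4 ply */

uint64_t hash_stack[MAX_PLY];

/* Returns 1 if the position at 'ply' repeats an earlier position
   on the current path; skips the scan close to the leaves. */
int is_repetition(int ply, int depth_remaining)
{
    if (depth_remaining < REP_CUTOFF)   /* near the tips: don't bother */
        return 0;
    /* same side to move => matching keys are an even number of plies back */
    for (int i = ply - 2; i >= 0; i -= 2)
        if (hash_stack[i] == hash_stack[ply])
            return 1;
    return 0;
}
```

The point is just that the test is cheap but not free, so skipping it
where repetitions are rare (or impossible) is a normal engineering call.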

The cute anecdote about Deep Blue and a private match would not
impress any scientist.   I don't mind hearing these kinds of
stories when I'm bullshitting around (I do it too.)   But
presenting this as some kind of refutation is ridiculous and
I cannot let it pass.   First of all, 10 games is way too small
a sample to draw a solid conclusion from.  Some piece of hardware
doing 100 thousand nodes per second still sounds like a mismatch
to me (depending, of course, on what hardware the micros were
running on, which you didn't specify), and what was the time
control, who were the witnesses, etc.

Tell me your stories because they are fun to hear, but please
don't expect me to assign a great deal of weight to the conclusions
you draw from them.

Next time I will want thorough documentation describing who
did the experiment, the exact conditions of the experiment,
who the witnesses were, and so on.   If this already exists
or was written up somewhere, I will take a look and then I'll
have a basis for changing my mind.  But I already know in
advance that a 10 game sample is not enough.

You also made it sound like a micro has no chance against
a GM.   When is the last time you got out of the house?
It would be good for you to get some air.

You mentioned in a later post that parallel programs are
not that slow.  Now this is something I know about and
you do too.  But you admitted a 3/4 slowdown for 16
processors.  This is all I need to make my point that
NPS for a parallel machine is not equivalent to serial
NPS.  It should be noted that these are YOUR numbers
(mine are similar), but they are not DEEP BLUE's numbers.
They do much worse than this.

You claimed micros make more compromises than Deep Blue.
Please don't insult my intelligence.  Don't assume I
know nothing about the issues they deal with.  I'll
give you and the readers an example of the type of
compromise they must make.  The hash table implementation
is different in the hardware portion (last few plies)
of the search.   They use small local tables.  This
is not what you would like to do (global tables are best),
but it's a good compromise for them; otherwise they
would have to give up too much speed.   I talked to
them at the Deep Blue match and they were not even
using it at the time; I think there was a bug or
something, I don't really know why.   This is a typical
tradeoff you're forced to make with hardware.  The
end result is good; that's why you do it in the first
place.  But saying they didn't have to compromise anything
is simply inaccurate.
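For readers who haven't seen one, a small direct-mapped local table
looks something like this (a sketch under my own assumptions about
entry layout and size, not Deep Blue's actual design):

```c
/* Sketch: a tiny, direct-mapped per-searcher transposition table.
   A global shared table lets every searcher reuse every result, but
   in dedicated hardware that means off-chip traffic on every probe;
   a small on-chip table trades hit rate for raw speed. */
#include <stdint.h>

#define LOCAL_TT_BITS 10                    /* tiny: 1024 entries */
#define LOCAL_TT_SIZE (1u << LOCAL_TT_BITS)

typedef struct {
    uint64_t key;    /* Zobrist key of the stored position */
    int16_t  score;  /* score from the earlier search */
    uint8_t  depth;  /* depth that score was searched to */
} TTEntry;

static TTEntry local_tt[LOCAL_TT_SIZE];

void tt_store(uint64_t key, int score, int depth)
{
    TTEntry *e = &local_tt[key & (LOCAL_TT_SIZE - 1)];
    e->key   = key;                /* always-replace scheme */
    e->score = (int16_t)score;
    e->depth = (uint8_t)depth;
}

/* Returns 1 (and fills *score) on a usable hit, else 0. */
int tt_probe(uint64_t key, int depth, int *score)
{
    TTEntry *e = &local_tt[key & (LOCAL_TT_SIZE - 1)];
    if (e->key == key && e->depth >= depth) {
        *score = e->score;
        return 1;
    }
    return 0;
}
```

With only 1024 entries the hit rate is poor compared to a big global
table, but every probe stays on-chip, which is exactly the kind of
speed-for-flexibility compromise I'm talking about.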

The final point I want to make is that I'm not in the
"Deep Blue sucks" camp.   I got the idea you perceived
my post as an attack on YOU.   I don't know why; I
was talking about Deep Blue, didn't even mention your
name, and even then I wasn't attacking it.

It was as if your post was designed to make it seem like
I was attacking Deep Blue (and you.)

- Don

On January 06, 1998 at 20:10:07, Robert Hyatt wrote:

>On January 06, 1998 at 17:30:40, Don Dailey wrote:
>>Hi guys,
>>Here is my take on Deep Blue and its algorithms.  First of all, their
>>approach is based on lots of hardware which gives them a HUGE problem.
>>If something is wrong with our software we quickly fix it.  If something
>>is wrong with their hardware they have a huge problem that will take
>>months to get to the next version.   So there is a lot of flexibility
>>we take for granted that they do not get.  What they have done is very
>>impressive indeed and took a great deal of engineering talent.
>don't take this "too" far.  Much of what they do was microcode, and I
>think this has changed, so that many things can be changed.  Also, they use
>chess hardware out near the tips.  The main search is done on the IBM SP.
>IE they look a lot like Crafty, the hardware handles the last N plies in
>a simplistic manner, plus the capture search.  The SP (software) handles
>the first M plies, does the singular extension stuff and everything else
>they do.
>They wouldn't need to modify the search in the hardware very often.  The
>evaluation is already programmable, so that can be changed easily and
>without modifying the hardware at all...
>>As far as the classic question about how they would do against the
>>best micros on equal hardware?   First of all, it's not easy to define
>>what equal hardware is at all.   But I'll take a stab and give you
>>my sense of the issues involved.
>>Let's use REBEL as representative of the best software available.
>>If you scaled Rebel up to do the same Nodes per second as Deep Blue
>>there would be no contest, Rebel would be a HUGE favorite.
>I *totally* disagree.  I'd peg DB's evaluation at being approximately
>100X more complex than Rebel's based on Ed's current NPS rate.  Don't
>forget the match that caused so much discussion where a single DB
>running at 100K nodes per second really smacked Rebel and Genius 10-0 in
>Hsu's lab.  So you are *way* off base here.  *way* off...
>>But this is hardly a fair comparison, Rebel is a SERIAL program and
>>is clearly more efficient than a parallel program which tends to look
>>at many extra nodes to do the same amount of effective processing.
>>So let's "pretend" we can run the pure Deep Blue algorithm in SERIAL
>>mode and match up both Rebel and Deep Blue, let's say 2 million nodes
>>per second (and equal hash tables.)
>>The winner?   REBEL wins again!  But we are still being quite unfair.
>>Deep Blue is forced to accept compromises and inflexibilities that
>>REBEL does not have to deal with.  It's quite certain that many design
>>choices were optimized for the exact approach each side was using.
>>From Deep Blue's point of view, the stuff in Rebel would be wrong to
>>attempt to implement in Deep Blue.
>So far as I know, there are *no* compromises in DB.  They have done
>*everything* they wanted...  evaluation, attack detection, threat
>detection, I mean *everything*...
>Again, you are taking your knowledge of Rebel, but comparing it to
>almost no knowledge about DB.  That gadget is *far* more sophisticated
>than anything else currently playing chess.  Not just faster, but smarter
>as well.
>>An example of this will suffice.  Until recently Deep Blue could not
>>even pick up repetition in the hardware portion of the search.  No micro
>>program would dare leave this out, it's a bad idea.  But at the time
>>choosing to leave it out seemed right for Deep Blue because it added
>>too much complexity to the chips that did the end node searching.
>>When we played them in Hong Kong they were quite afraid we might get
>>a draw (we did not) because there were long checking lines for us.
>>They were noticeably disturbed by the possibility.
>Guess again.  Crafty doesn't catch repetitions in the q-search, because
>they are *impossible* in my q-search, which only includes captures and
>promotions.  Ditto for Ferret.  We're both doing reasonably well.  They
>have handled repetitions correctly since Deep Blue started playing.  The
>older Deep Thought and "deep blue prototype" had that bug, if you want to
>call it that... but it wasn't a serious issue.
>>Well since then they have corrected this problem but there was no
>>easy fix.  It took a complete re-engineering of the chip and probably
>>at least a YEAR or more to go through the whole cycle.
>>The real bottom line here is that it is almost silly to compare the
>>two programs except on absolute strength.   Deep Blue could probably
>>not hold up against MOST of the top micros if you tried to equalize everything
>>in this manner but it's no reflection on the Deep Blue team.   In
>>every way (except raw speed) the Deep Blue team is handicapped so you
>>can not expect them to compete with the highly tuned micro programs.
>Don, your lack of hardware design experience shows here, no insult
>intended.  They can do *anything* they want, and with "silicon compilers"
>it is trivial for them to do.  Hardware design is now more like programming
>than designing.  But there are fewer compromises in DB than in the micro
>programs.  IE I'd *love* to design such a chip, because my rotated bitmaps
>would be
>perfect for that type of hardware, because I could do the rotation in 0
>cycles.  In fact, a "crafty on a chip" would not be difficult to do, if
>there was funding to pay the bill.  But your "DB is full of compromises"
>is simply off-base.  You ought to poke Hsu over the phone or via email to
>get a better feel for what they have done.  It's most impressive...  and
>not just because it is fast...  They have it *all*...
>>Would you compare a world class human sprinter to a cheetah and say
>>how fast would the Cheetah be if it were only human?
>>So does Deep Blue suck?   In rating points per node searched, YES.
>>In absolute strength, of course NOT.  It's unclear (to me) if they
>>are much better than the very best micros, but I'm pretty sure it
>>would win a long match against any of them (this year anyway.)
>>Deep Blue's performance seems to be about as good as the top micros'
>>based on the few tournaments it's played in, and the close (but very
>>short) match against Kasparov is a good indication that it's quite strong.
>This I don't follow.  What micro has beaten a GM in 40/2?  In a match
>of 40/2?  What micro has beaten as many GM's as DB in anything (except
>blitz, where most micros do ok at times)...
>>Sorry Bruce, I know you didn't want to hear about this!   I carefully
>>avoided singing their praises or saying they sucked!
>>- Don


Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.