Computer Chess Club Archives




Subject: Re: LCT II Fin4, Deep Thought, and Deep Blue (was Re: LCT II results...)

Author: Ed Schröder

Date: 16:03:08 01/06/98


>Hi guys,

>Here is my take on Deep Blue and its algorithms.  First of all, their
>approach is based on lots of hardware, which gives them a HUGE problem.
>If something is wrong with our software we quickly fix it.  If something
>is wrong with their hardware they have a huge problem that will take
>months to get to the next version.   So there is a lot of flexibility
>we take for granted that they do not get.  What they have done is very
>impressive indeed and took a great deal of engineering talent.

>As far as the classic question about how would they do against the
>best micros on equal hardware?   First of all it's not easy to define
>what equal hardware is at all.   But I'll take a stab and give you
>my sense of the issues involved.

>Let's use REBEL as representative of the best software available.
>If you scaled Rebel up to do the same Nodes per second as Deep Blue
>there would be no contest, Rebel would be a HUGE favorite.

>But this is hardly a fair comparison, Rebel is a SERIAL program and
>is clearly more efficient than a parallel program which tends to look
>at many extra nodes to do the same amount of effective processing.

>So let's "pretend" we can run the pure Deep Blue algorithm in SERIAL
>mode and match up both Rebel and Deep Blue, let's say 2 million nodes
>per second (and equal hash tables.)

>The winner?   REBEL wins again!  But we are still being quite unfair.

One remark, although I like all of the above very much :)

I have never tested a Rebel running at 2 million NPS. I might find out
that the search or hash table technique I use fails. It doesn't fail at
100,000 NPS because that is fully tested, but it could be inefficient
at 2 million NPS and would need new tuning, or in the worst case a
rewrite.

In fact it is quite likely a program needs new tuning after such a
huge speed-up (a factor of 20) in order to get the maximum out of it.
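A back-of-the-envelope sketch (my own toy numbers, not a measurement)
of why a hash table that works fine at 100,000 NPS can fail at 2
million NPS:

```python
# Toy sketch with assumed numbers: roughly one transposition-table
# store attempt per node searched, and a fixed table size.  At 20x
# the speed, the same table fills up (and starts overwriting useful
# entries) 20x sooner -- so table size and replacement scheme that
# were fine at the old speed may need retuning.

TABLE_ENTRIES = 2_000_000  # hypothetical fixed-size hash table

for nps in (100_000, 2_000_000):
    seconds_to_fill = TABLE_ENTRIES / nps
    print(f"{nps:>9,} NPS -> table full after ~{seconds_to_fill:g} s")
```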

When I worked with the 6502 processor I hardly used any extensions, as
at the slow 4-5 MHz this was quite dangerous. I tried more extensions
in the Milano program, which should have been the main improvement over
Polgar. It didn't work: 500-600 NPS doesn't justify them.

Then I changed to the ChessMachine, running between 3,000-5,000 NPS,
and suddenly extensions did a much better job.

Then the Pentium came and NPS rose to 30,000-40,000. Again new tuning,
more extensions, more chess knowledge, and it all works better than
leaving the program unchanged. It's clear to me that with much faster
hardware you can improve your program by tuning the search, hash
tables and extensions, and gain extra strength.
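A rough calculation shows why raw speed alone buys only a few extra
plies, and why the tuning has to do the rest (the effective branching
factor of 3 here is my assumption, not a measured number):

```python
import math

# Sketch with an assumed effective branching factor (EBF); real
# programs vary.  Search depth grows only logarithmically with node
# count, so a 20x speed-up adds about log(20)/log(EBF) plies.

EBF = 3.0                      # hypothetical effective branching factor
speedup = 2_000_000 / 100_000  # factor 20, as in the text

extra_plies = math.log(speedup) / math.log(EBF)
print(f"factor {speedup:g} speed-up -> ~{extra_plies:.1f} extra plies")
```

With these numbers the factor-20 jump is worth under 3 plies by itself;
the rest of the gain has to come from retuned extensions and knowledge.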

In other words, if Rebel ran at 2 million NPS I am sure I would
rewrite major parts, like the selective search and the selection
rules, all tuned for THAT specific speed.

I have also wondered HOW the DB team has tested their program. I mean,
if your program runs at 200 million NPS and searches to a depth of
13-17 plies, how in the world can you understand the actual move played?

I often feel lost already once Rebel reaches 10 plies or more, as it
becomes more and more complicated to judge whether the move played is
good or bad, or simply good enough.

>Deep Blue is forced to accept compromises and inflexibilities that
>REBEL does not have to deal with.  It's quite certain that many design
>choices were optimized for the exact approach each side was using.
>From Deep Blue's point of view, the stuff in Rebel would be wrong to
>attempt to implement in Deep Blue.

>An example of this will suffice.  Until recently Deep Blue could not
>even pick up repetition in the hardware portion of the search.  No micro
>program would dare leave this out; it's a bad idea.  But at the time
>choosing to leave it out seemed right for Deep Blue because it added
>too much complexity to the chips that did the end node searching.
>When we played them in Hong Kong they were quite afraid we might get
>a draw (we did not) because there were long checking lines for us.
>They were noticeably disturbed by the possibility.
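For comparison, the repetition check every software program does is
trivial; a minimal sketch (illustrative only, not Rebel's or Deep
Blue's actual code) using the hash keys of the positions on the
current search path:

```python
# Minimal sketch: detect a repetition by scanning the Zobrist keys of
# the positions already on the current path.  Real programs use the
# same idea with incrementally updated keys and reset the list on
# irreversible moves (captures and pawn moves).

def is_repetition(path_keys, current_key):
    """True if the current position already occurred on this path."""
    return current_key in path_keys

path = [0x1A2B, 0x3C4D, 0x5E6F]         # hypothetical Zobrist keys
assert is_repetition(path, 0x3C4D)      # position seen before
assert not is_repetition(path, 0x7788)  # new position
```

Cheap in software; the quoted point is that doing even this inside the
end-node search chips meant re-engineering the hardware.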

>Well since then they have corrected this problem but there was no
>easy fix.  It took a complete re-engineering of the chip and probably
>at least a YEAR or more to go through the whole cycle.

>The real bottom line here is that it is almost silly to compare the
>two programs except on absolute strength.   Deep Blue could probably
>not hold up against MOST of the top micros if you tried to equalize
>everything
>in this manner but it's no reflection on the Deep Blue team.   In
>every way (except raw speed) the Deep Blue team is handicapped so you
>can not expect them to compete with the highly tuned micro programs.

>Would you compare a world class human sprinter to a cheetah and ask
>how fast the cheetah would be if it were only human?

>So does Deep Blue suck?   In rating points per node searched, YES.
>In absolute strength of course NOT.  It's unclear (to me) if they
>are much better than the very best micros, but I'm pretty sure it
>would win a long match against any of them (this year anyway.)

>Deep Blue's performance seems to be about as good as the top micros'
>based on the few tournaments it's played in and the close (but very
>short) match against Kasparov is a good indication that it's quite

Yes, I was impressed.
DB played a lot better than in the first match.

And perhaps the DB team NEEDED the extra year to tune for the new
speed when they went up from 2-3-4 million NPS to first 100 million
and later to 200 million.

>Sorry Bruce, I know you didn't want to hear about this!   I carefully
>avoided singing their praises or saying they sucked!

We have opinions.
An opinion could be wrong or right.
Nothing wrong with opinions IMO.
Why not keep writing them?
And allow us to change our wrong opinions.

- Ed -


Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.