Computer Chess Club Archives



Subject: Re: Wanted: Deep Blue vs. today's top programs recap

Author: Tom Kerrigan

Date: 16:40:42 08/27/01


On August 27, 2001 at 14:43:34, Robert Hyatt wrote:

>On August 27, 2001 at 13:35:59, Tom Kerrigan wrote:
>
>>On August 27, 2001 at 08:59:00, Robert Hyatt wrote:
>>
>>>On August 27, 2001 at 04:14:33, Tom Kerrigan wrote:
>>>
>>>>There are some issues here that have not received due attention.
>>>>
>>>>First, [as most of you already know,] part of DB's search algorithms and all of
>>>>DB's evaluation function algorithms were implemented in custom VLSI chips. This
>>>>made it phenomenally fast and also means that it can't exist as a PC program
>>>>(because you don't have the chips). However, PCs have general purpose
>>>>processors, which means they can run any algorithm you can think of, so the idea
>>>>of running DB on a PC isn't quite as stupid as most people seem to think, if
>>>>you're talking about the algorithms. There are two issues at play when
>>>>discussing implementing DB as PC software:
>>>>
>>>>1) Work involved. Speaking from experience, the time-consuming part of writing
>>>>an evaluation function is not the actual coding, but instead deciding which
>>>>terms to include and what their weights should be. If you already know _exactly_
>>>>what an evaluation function is supposed to do (and the DB team does), I bet
>>>>implementing even the most complicated one would only take a couple of weeks.
>>>>Moreover, I believe that most, if not all, of DB's evaluation function already
>>>>exists as software. It takes months, if not years, to design a chip of this
>>>>sophistication, and more months and millions of dollars to get it manufactured.
>>>>It's absurd to think that anybody would put this much effort and money into a
>>>>project without first experimenting with the algorithms to see if they were any
>>>>good. Additionally, it has been reported that the evaluation function was being
>>>>tuned by the DB team long before they had any of their chips manufactured.
>>>>Exactly what were they tuning, then, if not a software implementation?
>>>
>>>According to what they have published, they were tuning the "weights".  There
>>>is a _huge_ difference between having code that will take weights and produce a
>>>score based on a position, and having code that will do this in a chess engine.
>>>The difference is the speed requirement.  If the hardware can do over 1B
>>>evaluations per second, and software can do 1000, then the two are certainly
>>>not comparable if they are going to be used in a chess engine.
>>
>>I'm glad to see that we now agree they had a software implementation of the
>>evaluation function (which you categorically denied in earlier conversations).
>
>I do not call this a "software implementation of the evaluation function."  Not
>in any form.  It was a software "emulation" (for lack of a better word) that in
...
>I have the _code_.  It was on my web site.  It was (or still is) on Tim Mann's
>web site (the eval tuner for deep thought, anyway).  As far as the 1000, I got

Exactly. I've heard you say over and over that DB is vastly different from DT.
And the code that's on Tim Mann's page is for DT. And it's a program for doing
automatic tuning against GM games anyway, not the kind of tuning that was
reportedly done for DB. Is it safe to assume, because this is the best code
you can produce, that you don't have _any_ actual DB-related code? And, because
you have to guess at the speed of DB code from Cray Blitz speeds, that you don't
know _any_ specifics of the code they used? If that's the case, and it seems
like it is, I don't see what business you have making the guesses you've been
making and passing them off as informed estimates.
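
For the benefit of anyone following along: the "code that takes weights and
produces a score" being argued about above is conceptually very simple. Here
is a minimal sketch of that shape of evaluator in C. Every term and weight
below is invented for illustration; nobody outside the DB team knows the real
ones.

  /* Hypothetical linear evaluator: score = sum(weight[i] * term[i]).
     The terms and weights are made up; they are NOT Deep Blue's. */
  typedef struct {
      int material;        /* material balance, in centipawns */
      int mobility;        /* legal-move-count difference     */
      int king_safety;     /* attackers near the king, etc.   */
      int pawn_structure;  /* doubled/isolated/passed pawns   */
  } EvalTerms;

  /* The tunable part: this is what a "weight tuner" adjusts. */
  static int weights[4] = { 100, 4, 8, 6 };

  int evaluate(const EvalTerms *t)
  {
      return weights[0] * t->material
           + weights[1] * t->mobility
           + weights[2] * t->king_safety
           + weights[3] * t->pawn_structure;
  }

The hard part, as I said at the start of this thread, is knowing which terms
to compute and what the weights should be, not writing code of this shape.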

>There is no "knee-jerk".  Hsu says "XXX".  You say "I don't believe XXX".
>There is little to justify that when _you_ don't _know_.

I said "I don't believe this" to the idea that a software implementation of DB
would be "so slow as to be worthless." When did Hsu say that a software
implementation of DB would be so slow as to be worthless? In fact, when did Hsu
say anything? I did some web searching, and all I could find of his were some open
letters about unrelated issues and an early paper on DB, with the estimate that
a general purpose CPU would have to run at 10k MIPS to do what the DB chip does.
Well, CPUs aren't THAT far away from 10k MIPS these days, so if you want to read
anything into Hsu's words, it seems like he's siding with me.
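
(Some back-of-the-envelope arithmetic on that, with the obvious caveat that
these are round numbers: a current ~1.5 GHz PC processor retiring roughly one
to two instructions per clock gives

  1.5 GHz x 1-2 instructions/clock = ~1,500-3,000 MIPS

which is within a single-digit factor of Hsu's 10k MIPS figure, not orders of
magnitude away.)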

(BTW, if you're interested, the same paper says that the DB chip took three
years to create. This is a far cry from the 9 months that you stated in another
post.)

>>You may think the cost is too high, but I know for a fact that there are a ton
>>of extremely strong programs out there that have these terms.
>
>Name that "ton".  I've seen Rebel play.  It doesn't.  I have seen most every
>micro play, and fall victim to attacks that say "I don't understand how all
>those pieces are attacking my king-side..."

I won't name the programs because I don't know if the authors would want me to.
And I wasn't thinking of Rebel.

>What is there to understand?  A potentially open file is a very concrete
>thing, just like an open file or a half-open file is.  No confusing definitions.
>No multiple meanings.

Okay, so what is it? Is it one with a pawn lever? Or one without a pawn ram?
Seems like both of those could be considered potentially open files, and they
aren't exactly expensive to evaluate.
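
To make the "not expensive" part concrete, here is a sketch in C of how a
program that keeps pawn bitboards could test this. The definition of
"potentially open" here is my guess (our pawn on the file is not rammed, so
the file might still open via a lever or exchange), and the board layout
(a1 = bit 0, one rank = 8 bits) is an assumption, not anybody's actual code:

  typedef unsigned long long u64;

  static const u64 FILE_A = 0x0101010101010101ULL;

  /* file_mask(0) = a-file ... file_mask(7) = h-file */
  static u64 file_mask(int f) { return FILE_A << f; }

  /* Fully open: no pawn of either color anywhere on the file. */
  int is_open_file(u64 wpawns, u64 bpawns, int f)
  {
      return ((wpawns | bpawns) & file_mask(f)) == 0;
  }

  /* One guessed reading of "potentially open" for White: either no
     white pawn on the file at all, or no white pawn on it is rammed
     (directly blocked by a black pawn one rank ahead). */
  int is_potentially_open_white(u64 wpawns, u64 bpawns, int f)
  {
      u64 wp = wpawns & file_mask(f);
      if (wp == 0)
          return 1;
      return ((wp << 8) & bpawns) == 0;  /* wp << 8 = one rank up */
  }

A handful of ANDs, ORs, and shifts per file. Whatever DB's real definition
was, this class of term is cheap.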

>Not "difficult to do".  I believe I said "impossibly slow".  There _is_ a
>difference.  Everything they do in parallel, you would get to do serially.
>All the special-purpose things they do in a circuit, you get to use lots of
>code to emulate.  I estimated a slow-down of 1M.  I don't think I would change
>this.  Cray Blitz lost a factor of 7,000 from a Cray to a PC of the same
>time period.  Solely because of vectors and memory bandwidth.  Crafty on a Cray
>gets population count, leading zeros, all for free.  Because there are special-
>purpose instructions to do these quickly.  DB was full of those sorts of
>special-purpose gates.

No, you're completely confusing the entire issue. Was DB written in Fortran, or
Cray assembly? Did it run on a Cray? Does it have anything to do with a Cray?
Does it even implement the same evaluation function? How about the same search?
There are enough variables in your "estimation" here to make any legitimate
scientist puke.
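
And just to put a number on one of those "special-purpose gates": here is the
standard portable-C population count (textbook bit-twiddling, nothing DB- or
Cray-specific). Emulating that "free" hardware instruction costs about a dozen
operations, not thousands:

  typedef unsigned long long u64;

  /* Classic divide-and-conquer popcount: count bits in pairs, then
     nibbles, then bytes, then sum the bytes with one multiply. */
  int popcount(u64 x)
  {
      x = x - ((x >> 1) & 0x5555555555555555ULL);
      x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
      x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
      return (int)((x * 0x0101010101010101ULL) >> 56);
  }

That obviously doesn't settle the overall slowdown question by itself, but it
shows the kind of constant factor at stake for one emulated instruction.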

>>You've spent years building up DB's evaluation function. Surely you can see some
>>benefits (even aside from commercial) of having this thing run on widely
>>available hardware.
>
>at 1/1,000,000th the speed of the real McCoy?  Again, what would one learn from
>such a thing?  What could I learn from working with a 1nps version of Crafty,
>when it is going to run at 1M nodes per second when I get ready to play a real
>game?

Again, assuming your 1M figure is anywhere near accurate. You're claiming that a
DB node is worth about five thousand (5,000) (!!) "regular" PC program nodes.
What on EARTH can POSSIBLY take 5,000 nodes worth of computation to figure out?
You're going to have to do way better than your lame "potentially open file"
thing to sell that to anyone.
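
(For anyone who wants to check that 5,000: DB's hardware speed was reported as
roughly 200M positions/sec, so a 1,000,000x slowdown puts a software DB at
about 200 nodes/sec. Bob's own figure for Crafty is 1M nodes/sec, and
1,000,000 / 200 = 5,000 PC nodes of work per DB node. That is the claim on
the table.)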

>We know how DB (single-chip) did when slowed to 1/10th its nominal speed
>and played against top commercial programs.  That was reported by me first,
>then others asked about it at lectures by the DB team and we got even more
>information from those reports.

No, we don't "know" that. Where are the reports? Where are the game scores?

>I am _certain_ that taking DB from hardware to software would cost a lot.
>You would lose a factor of 480 because of the chess chips.  You would lose
>a factor of 32 because of the SP.  You would lose a factor of something due
>to the cost of doing Make/UnMake/Generate/Evaluate in software during the
>software part of the search, rather than getting to use the hardware they
>had to handle these mundane parts of the software search.  32 X 500 is over
>10,000 already.  And it is only going to get worse.

10k is a _really_ far cry from 1M. Besides, if you think that DB's algorithms
are completely worthless if they aren't running on their fast hardware, why
doesn't that apply to any other PC program? Are they all worthless because they
don't search 200M NPS? Or because they can be run on slower PCs? Or because they
will be run on faster PCs in the future? What you're saying is basically, "why
have a chess program?" I'm surprised you haven't thought of any reasons by now.

>When your data is flawed, you need more.  Crafty lost one game at a time
>handicap.  Ed then played more games with Crafty at the same time control,
>but with Rebel at that time limit also.  And the result was much different.
>Which suggests that the first (and only) handicap game was a fluke, which
>is certainly the most likely truth.

Changing the experiment does not magically invalidate data. If you want to call
all of your losses "flukes," fine.

>I won't try to speculate why they reported 200M.  Hsu was a scientist.  With

Why is there any need to speculate? I think I posted a perfectly legitimate
potential explanation for the number. There are probably more possible
explanations. Why in the world do you refuse to take his number at face value?

-Tom


