Computer Chess Club Archives



Subject: Re: Tiger against Deep Blue Junior: what really happened.

Author: Robert Hyatt

Date: 18:58:03 07/26/00



On July 26, 2000 at 18:35:57, Tom Kerrigan wrote:

>On July 26, 2000 at 18:04:58, Robert Hyatt wrote:
>
>>>"At 2 to 2.5 million chess positions per second, one chess chip is equivalent >If you take 2 to 2.5 (actually 2 to 2.4 according to Hsu's numbers) and
>
>Ah, I see. Hsu was trying to fake out IEEE.

I don't see any faking.  He has specifically stated, in more than one article,
that some of the chess processors ran at 20mhz, and others ran at 24mhz.  I
believe his book says the same thing, although I will check when I get to the
office tomorrow.  He has _always_ explained that his chip takes 10 clocks per
node.  20mhz/10 = 2M nodes per second.  24mhz/10 = 2.4M nodes per second.

The math isn't challenging.
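
To spell it out, here is that arithmetic as a minimal Python sketch (the
clock rates and the 10-clocks-per-node figure are the ones from Hsu's
articles; nothing else is assumed):

  # Per-chip NPS from clock rate and clocks-per-node.
  CLOCKS_PER_NODE = 10
  for clock_hz in (20e6, 24e6):        # the 20mhz and 24mhz chip variants
      nps = clock_hz / CLOCKS_PER_NODE
      print("%.0fmhz chip -> %.1fM nodes/sec" % (clock_hz / 1e6, nps / 1e6))
  # prints:
  #   20mhz chip -> 2.0M nodes/sec
  #   24mhz chip -> 2.4M nodes/sec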

I am not certain that the split was 50-50 between 20mhz and 24mhz chips.  I
actually asked that at one point, but I don't remember his exact answer (in
fact, I think it was somewhat vague, something like "about 1/2").




>
>>>So who is manufacturing data? It sure doesn't look like Chris.
>>Does it look like me?  Is 480 * 2.2M pretty close to 1B (remember that I
>
>Yes, it does look like you. You are going through amazing contortions to
>fabricate a number that's directly contradicted by _everything_. You're the hero
>of "evidence" and "academia" yet you prefer some stupid multiplication problem
>to data that was published by the creators of the machine itself. Hopefully you
>have more integrity when you're doing research.



I suppose you have some idea of what you are talking about?  Because you are
certainly missing the boat in the above calculations.  But I think that is
intentional, so there is nothing to be done about it.  My numbers are _right_
out of the IEEE article, and they can easily be derived from the snippet you
quoted above...  if you want to, of course...
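
Spelled out, the derivation looks like this (the even 20mhz/24mhz split is
the vague "about 1/2" answer mentioned earlier, an assumption rather than a
confirmed figure):

  # Peak aggregate NPS for 480 chess chips.  The 50-50 split between
  # 2.0M and 2.4M chips is an assumption, not a confirmed figure.
  chips = 480
  avg_per_chip = (2.0e6 + 2.4e6) / 2   # 2.2M nodes/sec per chip
  print("peak = %.3fB nodes/sec" % (chips * avg_per_chip / 1e9))
  # prints: peak = 1.056B nodes/sec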




>
>>said this was the theoretical _peak_ NPS for DB.  PEAK.  Not typical.  I
>
>Your multiplication problem assumes that all of the processors are active and
>every single one is crunching a position that allows absolute maximum
>throughput. How many times do you think this has happened? I'm willing to bet
>"never." I'm much more willing to believe a PEAK rate of 555M NPS, which has
>actually been published somewhere. (IEEE)

I happen to know _exactly_ how his SMP search works.  Why?  Because I happen
to have read his thesis.  Did you?  Of course not.  Otherwise you would
understand my reference to his claim of running the chess processors at a 70%
duty cycle.  That was his number, and it is an _average_ number, which is
simply a product of the mismatch between the chess hardware speed and the SP2
search speed.  I'll be happy to explain why this happens, since you haven't
seen the details.  Or you might try to grab a copy of his thesis and get out
of the dark on how it operates...

There is absolutely no doubt that 480 processors _can_ reach 1B nodes per
second.  There is also no doubt that in some positions they run at well under
a 50% duty cycle.  That is why everyone (that knows beans about parallel
programming as it applies to chess) quotes _average_ numbers.

Parallel search is non-deterministic in its behavior.  And when you do a
two-level search as done in DB, there are hardware mismatches that hurt.  So
there are peak numbers, typical numbers, and pessimistic numbers.  I have
certainly given all three of _my_ numbers here many times.
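
In numbers, a sketch of all three (the 70% duty cycle is Hsu's published
average; the 50% low end is illustrative, standing in for the bad positions
described above):

  # Peak vs. typical vs. pessimistic aggregate NPS.
  peak = 480 * 2.2e6                   # theoretical peak, every chip busy
  for label, duty in (("typical", 0.70), ("pessimistic", 0.50)):
      print("%s: %.0fM nodes/sec" % (label, peak * duty / 1e6))
  # prints:
  #   typical: 739M nodes/sec
  #   pessimistic: 528M nodes/sec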



>
>>>Remember not long ago when I was quoting Hsu's estimate of how many general
>>>purpose instructions it would take to search a DB node? And you told me that Hsu
>>>obviously didn't know what he was talking about and the estimate was worthless?
>>>Well, the estimate was published in the IEEE journal by the man who built the
>>>chip. It was staring you right in the face, and against all common sense you
>>>chose to ignore it. So who is being academic here? It doesn't look like you.
>>I said that it is very hard to decide how many GP instructions it would take
>>to emulate the DB hardware.  As the GP instruction set for the pentiums has
>>changed significantly...
>
>Right, you said that it was very hard and that Hsu was wrong. It was a pretty
>slick job of contradicting evidence.
>
>-Tom


_I_ would be hard-pressed to guess how many instructions some complex piece of
hardware would require to emulate it, _particularly_ since I have programmed
in various assembly languages: SPARC, X86, IBM (big iron), XEROX, CRAY, Data
General, VAX, and others.  The number of instructions could vary by a factor
of 10 quite easily.  Comparing a VAX to a SPARC, it might vary by a factor of
50, easily.

That was my point.  Being off by a factor of 10, or even 50, is a _huge_
margin of error when you are talking about 2000 or 10000 instructions per
node.  Just multiplying by 10 changes things significantly.
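
A sketch of why the factor dominates the whole estimate (the 1B
instructions-per-second figure for the general-purpose processor is a
hypothetical stand-in, not a measurement):

  # How the instructions-per-node guess swings an emulation estimate.
  gp_ips = 1e9                         # hypothetical GP speed, assumed
  for instr_per_node in (2000, 20000, 100000):   # base, x10, x50
      print("%6d instr/node -> %3.0fK emulated nodes/sec"
            % (instr_per_node, gp_ips / instr_per_node / 1e3))
  # prints:
  #     2000 instr/node -> 500K emulated nodes/sec
  #    20000 instr/node ->  50K emulated nodes/sec
  #   100000 instr/node ->  10K emulated nodes/sec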

Sorry it went over your head...


