Author: Robert Hyatt
Date: 21:25:52 05/21/99
On May 21, 1999 at 18:21:27, James B. Shearer wrote:
>On May 21, 1999 at 15:37:15, Robert Hyatt wrote:
>
>>On May 21, 1999 at 12:26:12, James B. Shearer wrote:
>
>>> Well as a reality check, I would suggest rereading Hsu's 1990
>>>Scientific American article ("A Grandmaster Chess Machine", by F. Hsu, T.
>>>Anantharaman, M. Campbell and A. Nowatzyk, Scientific American, October 1990, p.
>>>44-50). Some quotes from the last page:
>>> "... The machine we have in mind will therefore examine more than a
>>>billion positions per second, ... . If the observed relation between processing
>>>speed and playing strength holds, the next generation machine will play at a
>>>3400 level, about 800 points above today's Deep Thought and 500 points above
>>>Kasparov's rating record."
>>> "We believe the system will be strong enough, by virtue of its speed
>>>alone, to mount a serious challenge to the world champion. We further believe
>>>that the addition of other planned improvements will enable the machine to
>>>prevail, perhaps as soon as 1992."
>>> Obviously with hindsight this was optimistic.
>>> James B. Shearer
>>
>>
>>Looks like the last quoted sentence was dead on, although the date was missed
>>a bit. But he didn't say 'guaranteed to prevail no later than 1992' he said
>>" we believe ... perhaps as soon as..." Which I would say ended up pretty
>>prophetic?
>
> Well the year 2000 date in the IEEE micro article is similarly hedged.
>Missing by the same amount would delay to 2005. Such delays can easily kill a
>start-up.
>
>>In 1997 they 'delivered'. They were able to peak at over 1B nodes per
>>second (480 chess processors X 2.0~2.4M nodes per second per processor is
>>well beyond that). They did beat the world champion, not "just mount a serious
>>challenge to him."
>
> You are assuming no parallel search loss. To quote some more from the
>Scientific American article.
> "To achieve this speed, Hsu is designing a chess-specific processor
>chip that is projected to search at least three million moves per second - more
>than three times faster than the current Deep Thought. He is also designing a
>highly parallel computing system that will combine the power of 1,000 such
>chips, for a further gain of at least 300-fold. ... "
I'm not really assuming anything at all, because search loss is _not_
constant. It is easily possible (in fact, certain) that many searches
with N processors run N times faster than with 1; I see this regularly.
Yes, there are cases where it is less. But in terms of NPS, they 'delivered'
what they said. When I quote NPS figures for Crafty (as does everyone
with a parallel search), I give "raw" NPS numbers, because "effective" NPS
is impossible to calculate.
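
To make the raw-NPS arithmetic concrete, here is a minimal sketch in C
(the names and structure are mine, not from Crafty or the Deep Blue code)
of the figures quoted above, and of why an "effective" number would need
an unobtainable one-processor baseline:

    #include <stdio.h>

    int main(void) {
        double per_chip_low  = 2.0e6;   /* nodes/sec per chess processor */
        double per_chip_high = 2.4e6;
        int    chips         = 480;

        /* "Raw" NPS is the simple sum over all processors: what the
           hardware actually searches, duplicated work included.      */
        printf("raw NPS: %.2fB to %.2fB\n",
               chips * per_chip_low  / 1e9,
               chips * per_chip_high / 1e9);

        /* "Effective" NPS would discount raw NPS by parallel efficiency,
           i.e. time(1 cpu) / (N * time(N cpus)) -- but nobody can run
           the one-processor search needed for time(1 cpu), so raw NPS
           is what everyone reports.                                   */
        return 0;
    }
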
>
> This makes it clear that even in 1997 they missed on the per chip
>performance and on the number of chips they could use. (I believe the parallel
>efficiency achieved was also less than the 30% assumed but I am not sure about
>that.) Factoring in the general advances in technology expected during a 5 year
>schedule slip it is clear the original projections were optimistic to say the
>least. Also the article is referring to the "next generation of the machine,
>expected to play its first game some time in 1992". The 1997 machine is
>arguably one or more generations beyond this.
> James B. Shearer
I don't interpret it like that at all. The first re-design of the hardware
was not finished until 1996 (the first Kasparov match). Until that time, they
used the older Deep Thought 2 hardware and played with the software a lot to
decide _what_ they wanted to do in the next generation. I think they decided
to 'slow down' the design cycle a bit, since mistakes made in the hardware
design are only repairable by doing another design/fab cycle, which the 'bean
counters' in the accounting department doubtless watch carefully.
As far as their 1992 prediction goes, I wouldn't be surprised if they could
have built a much faster DT machine, had they had a hard deadline. But saying
what you think can be done and then doing it are two different things. I
sensed no 'hurry' from the IBM folks. In fact, it was probably quite the
contrary, as they would want to be 'sure' of what they were doing.
As far as chess chips go, they can use as many as they can build, so long as
IBM allows the SP to be scaled equally, which it pretty well can. I.e., a
1024-chip machine would easily be doable, assuming someone had the money to
fab another 500+ chips and then provide an SP with 2x the number of processors
to use. He has always referenced "30%" as his efficiency value, which is a
number I don't like personally (I've never had a chess program stuck at 30%
efficiency, but then again, I have never tried to use 480 chess processors on
top of a set of general-purpose processors either, so our approaches are quite
different).
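
For a feel of what a 30% figure implies, here is a small illustrative
calculation using the standard definition (efficiency = speedup / N; the
480-chip count is from the discussion above, the code itself is mine):

    #include <stdio.h>

    int main(void) {
        int    n          = 480;   /* chess processors, as above     */
        double efficiency = 0.30;  /* Hsu's quoted efficiency figure */

        /* effective speedup = efficiency * N */
        double speedup = efficiency * n;
        printf("%d chips at %.0f%% efficiency search like %.0f chips "
               "at 100%%\n", n, efficiency * 100.0, speedup);
        return 0;
    }
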
But obviously they did deliver a machine that plays chess at a level that no
other program has come close to, and at a level that is beyond all but a very
few human players as well (even Kasparov said this). And if Hsu does deliver
a single-processor chess engine for the PC, running at 36M nodes per second,
it will definitely be frightening, because there will be _no_ loss of
efficiency in a one-processor implementation. And with 4 processors, a 75%
efficiency level is hard to beat, as that is what I am getting. It will
definitely be a beast...
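
A back-of-the-envelope check on that last point, using the same efficiency
definition as above (the 36M NPS figure and the 75% level are from this
paragraph; the scaling to 4 processors is my own arithmetic):

    #include <stdio.h>

    int main(void) {
        double single_nps = 36.0e6; /* hypothetical one-chip PC engine */
        int    n          = 4;
        double efficiency = 0.75;   /* the 75% level mentioned above   */

        /* One processor: raw equals effective, no parallel loss.      */
        printf("1 cpu : %.0fM NPS, raw == effective\n", single_nps / 1e6);

        /* Four processors: raw NPS is 4x, but effective speed scales
           by efficiency * N, i.e. 3x here.                            */
        printf("%d cpus: %.0fM raw, %.0fM effective\n", n,
               n * single_nps / 1e6, efficiency * n * single_nps / 1e6);
        return 0;
    }
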