Author: Robert Hyatt
Date: 11:35:33 07/26/00
On July 26, 2000 at 02:40:26, Ed Schröder wrote:

>On July 25, 2000 at 17:52:02, Robert Hyatt wrote:
>
>>On July 25, 2000 at 15:51:30, Ed Schröder wrote:
>>
>>>On July 25, 2000 at 14:39:59, Robert Hyatt wrote:
>>>
>>>>On July 25, 2000 at 11:15:45, Ed Schröder wrote:
>>>>
>>>>>On July 25, 2000 at 10:44:20, Chris Carson wrote:
>>>>>
>>>>>>On July 25, 2000 at 10:19:10, Ed Schröder wrote:
>>>>>>
>>>>>>>On July 25, 2000 at 08:44:57, Dave Gomboc wrote:
>>>>>>>
>>>>>>>>- the "1 million nodes/sec" figure is a peak figure, not an average
>>>>>>>>  - average is 200k nodes/sec
>>>>>>>
>>>>>>>From the IBM site (May 1997):
>>>>>>>
>>>>>>>  "Deep Blue was now capable of examining and
>>>>>>>  evaluating an average of 100 million chess
>>>>>>>  positions per second."
>>>>>>>
>>>>>>>Ed
>>>>>>
>>>>>>Thanks Ed! Accurate and factual as always. :)
>>>>>
>>>>>Somewhere else the 200M is mentioned (as a peak?). The text also mentions
>>>>>DB doing some pre-processor stuff (I think).
>>>>
>>>>This is all scrambled. Here are the right numbers:
>>>>
>>>>single chip: 2M or 2.4M nodes per second.
>>>>
>>>>DB2 (1997 Kasparov match):
>>>>
>>>>480 chess chips, half at 2M, half at 2.4M nodes per second. 1B nodes per
>>>>second peak, 700M nodes per second actually searched; roughly 70% of those
>>>>nodes are often referred to as "search overhead," reducing the effective NPS
>>>>for DB2 to 200M. DB1 (1996 Kasparov match) searched 100M effective nodes per
>>>>second...
>>>>
>>>>Those are straight from Hsu, so I feel pretty sure they are right... The
>>>>others are smeared across a timeline that contains DB1 _and_ DB2, where DB2
>>>>was 2x faster plus move evaluation.
>>>
>>>The IBM pages say 256 processors and not 480. How come Hsu's
>>>information doesn't correlate with IBM's all the time?
>>
>>I have no idea where you are looking. 256 was the 1996 DB. The 1997 DB had
>>480. This number is on IBM's web site...
>
>http://www.research.ibm.com/deepblue/meet/html/d.3.2.html
>
>It says 256 processors.
I don't know what you are looking at, but the last time I probed around up
there, I found the reference to 30 SP2 nodes, each with two boards holding 8
chess processors each. 30 * 16 would be what? When I saw it I reported it here,
because until that point there had been lots of speculation about the number of
processors: Hsu had said about 500, while others were saying 256, which was the
number used in the 1996 DB machine.

>
>>Maybe you are reading something written in 1997, but prior to the DB2 match
>>being played. Until right before the match, I didn't know that DB2 had 480
>>processors either, until I heard it from the horse's mouth...
>
>If you look at the logo of the IBM page you see it is about the re-match.

Yes... but was it written _before_ or _after_ the new DB2 machine was rolled
out? I.e., the new machine wasn't ready until just prior to the match, so that
text was probably borrowed from the year before, since Hsu kept it pretty quiet
that he had completely redesigned the chess chip for the 1997 match. Again,
email him if you think 480 is wrong. That is the easiest way to get factual
answers. That is what _I_ do...

>
>>>And now we have a new item. It was not 200M nodes but suddenly it is
>>>1000M nodes said by Hsu. Again it contradicts the IBM pages you know.
>>>
>>>Maybe you should not use the name of Hsu so much, speaking on his behalf.
>>
>>I have answered this already. If you multiply 480 * 2.2M, you get the
>>theoretical peak NPS that DB can search. Hsu said that he keeps the chess
>>processors running at about 70% of capacity due to the speed of the
>>processors vs. the speed of the SP nodes. After that, he claims 30%
>>efficiency on the parallel search. If you compute 480 * 2.2M * .7 * .3, you
>>get 200M, which is the efficiency figure he has _always_ quoted. But that is
>>not the same thing: everybody else is reporting RAW NPS in parallel programs.
>>My raw NPS in Crafty is 1M. My actual efficiency is roughly .8.
>>But since the efficiency varies, I don't try to correct the NPS I report,
>>because I can't do that very easily. Hsu simply reports the "pessimistic
>>typical value" and lets it go at that.
>>
>>Does that explain this???
>
>Your math is fine as long as you don't want to lift the 200M nps average into
>a 1000M nps average, because this is what you were trying to do in "your math"
>posting 2-3 days ago, which was a misleading try to save a lost argument of
>yours.

I am saying that _everybody_ else is claiming an NPS that is equivalent to 1B
in DB nodes. _Nobody_ is factoring out search-overhead nodes. Hsu is. However,
1B vs. 200M is within one order of magnitude, so it isn't a significant
difference compared to the multiple orders of difference between PC machines
and DB2.

>
>You said DT/DB had improved by a factor of 3.33 over the micros concerning
>hardware since 1988, and you used 1000M nps as the base for your calculations,
>which is misleading and you know it. Based on your own math the micros
>improved more than the DT/DB hardware. Factor 3.5, to be exact.

If you pick the right year, you can make any statement you want. You picked
1988. I would have picked 1986, which was the very first version of the
machine; it played at the ACM event that year...

>
>It's known from multiprocessors that the higher the number of processors, the
>lower the efficiency. Calling efficiency a "pessimistic typical value" is
>beyond the truth. Efficiency = average NPS, and that is all that counts.

Tell Amir, Conners, Feldman, (Hyatt), Moreland, Diepeveen, and anybody else
doing a parallel search to adjust their NPS _downward_ to reflect efficiency.
_Nobody_ but Hsu does that. If you want to compare apples with oranges, that
is fine... I would prefer one uniform measure that everybody could use, and
Hsu has pretty well defined that within reason. Nobody else is doing it just
yet, however. I don't even like NPS as a comparison of anything. I prefer
search depth, or time to solution, or whatever...
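The NPS arithmetic argued over above can be sketched as a few lines of code. All figures are the ones quoted in the thread (480 chips at an average 2.2M nps, ~70% chip utilization, ~30% parallel-search efficiency as attributed to Hsu); the variable names are my own, added only for illustration.

```python
# Effective-NPS arithmetic as described in the post (figures from the thread).
CHIPS = 480                 # chess chips in the 1997 Deep Blue
AVG_CHIP_NPS = 2.2e6        # half at 2.0M nps, half at 2.4M nps -> 2.2M average
CHIP_UTILIZATION = 0.7      # chips kept ~70% busy relative to the SP nodes
PARALLEL_EFFICIENCY = 0.3   # ~30% parallel-search efficiency claimed

peak_nps = CHIPS * AVG_CHIP_NPS                      # theoretical peak (~1B)
searched_nps = peak_nps * CHIP_UTILIZATION           # actually searched (~700M)
effective_nps = searched_nps * PARALLEL_EFFICIENCY   # quoted as "200M"

print(f"peak:      {peak_nps / 1e6:.0f}M nps")
print(f"searched:  {searched_nps / 1e6:.0f}M nps")
print(f"effective: {effective_nps / 1e6:.0f}M nps")
```

Note that the ~222M result is simply rounded down to the "200M" figure quoted throughout the thread; the point of the argument is that other engine authors report the raw (peak-style) number instead.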
>
>A friendly piece of advice: I think that you should stop speaking on behalf
>of Hsu. Your way of reasoning fires back on him in a negative way. I do hope
>this is an item for you.

I don't see how it does that. I have been crystal clear in how I compared NPS
values, and in how his 200M is different from my 1M, or from Deep Junior's
2.5M. I don't think clarity can backfire...

>>>>>Quote:
>>>>>
>>>>>  "Deep Blue uses 'live' software that can actually generate up
>>>>>  to 200,000,000 positions per second when searching for
>>>>>  the optimum move. The software begins this process by
>>>>>  taking a strategic look at the board. It then computes
>>>>>  everything it knows about the current position, integrates
>>>>>  the chess information pre-programmed by the development
>>>>>  team, and then generates a multitude of new possible
>>>>>  arrangements. From these, it then chooses its best possible
>>>>>  next move."
>>>>>
>>>>>Ed
>>>>
>>>>Sounds like something written for the general public, by someone who didn't
>>>>have any idea of how a computer plays chess in general. I.e., someone in a
>>>>P/R department writing about something he "thinks" he understands. The
>>>>words sound good. The paragraph is nearly meaningless...
>>>
>>>"Sounds like..."
>>>
>>>"The paragraph is nearly meaningless..."
>>>
>>>"IBM P/R people are stupid..."
>>>
>>>Be careful, IBM might sue you one day :)
>>>
>>>Ed
>>
>>I didn't say they were "stupid". I said it was written by a P/R type
>>person, intending it for the masses, not the technical folks. To me
>>it reads like gibberish. To a non-computer person, it probably sounds
>>great (if meaningless).
>
>Writing stuff for the masses doesn't mean posting incorrect information.
>P/R is about numbers. Numbers can't be interpreted wrongly. Even an 80
>IQ person understands numbers. There are so many contradicting numbers
>between the IBM pages and what you call the horse's mouth.
>
>Ed

Not really.
The web page you are apparently looking at is talking about the 1996 hardware,
obviously. IEEE Micro reports the number 480, as do several other publications,
and as did several people here who went to Hsu's talks. So it is not _me_ that
is making the number up, and I am sure Hsu is not. The only other explanation
is that old data got into the web page you are seeing... 480 _is_ the right
number. Each chip _does_ search either 2.0M or 2.4M nodes per second. Some are
clocked at 20MHz, others at 24MHz, with 10 clocks per node. All from IEEE Micro
and other sources (not to mention private emails, of course, which give the
same numbers). I am _certain_ those are right, web site or not...

>>>>>>Best Regards,
>>>>>>Chris Carson
>>>>>>
>>>>>>>>- you will have to verify for yourself if that figure is for one chip
>>>>>>>>  or more
>>>>>>>>- whether DB uses forward pruning or not is obviously not clear
>>>>>>>>  - Bob says it doesn't
>>>>>>>>  - the article I read implies it does
>>>>>>>>  - the DB logs also imply it, according to Ed
>>>>>>>>
>>>>>>>>Dave
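The chip-count and per-chip speed figures cited above (30 SP2 nodes, two boards of 8 chips each, 20/24 MHz clocks at 10 clocks per node) reduce to simple arithmetic. This sketch only restates the numbers from the post; the variable names are mine.

```python
# Chip count: 30 SP2 nodes, each with 2 boards of 8 chess processors.
sp2_nodes = 30
boards_per_node = 2
chips_per_board = 8
total_chips = sp2_nodes * boards_per_node * chips_per_board   # 30 * 16 = 480

# Per-chip speed: clock rate divided by clocks needed per node searched.
clocks_per_position = 10
nps_at_20mhz = 20e6 / clocks_per_position   # 2.0M nodes/sec
nps_at_24mhz = 24e6 / clocks_per_position   # 2.4M nodes/sec

print(f"{total_chips} chips, {nps_at_20mhz/1e6}M-{nps_at_24mhz/1e6}M nps each")
```

This is why 480 chips at a 2.2M-nps average give the ~1B-nps theoretical peak discussed earlier in the thread.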
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.