Author: Robert Hyatt
Date: 22:02:12 08/24/02
On August 23, 2002 at 10:57:15, Uri Blass wrote:

>On August 22, 2002 at 20:18:38, Robert Hyatt wrote:
>
>>On August 22, 2002 at 18:29:38, Uri Blass wrote:
>>
>>>On August 22, 2002 at 17:20:00, Robert Hyatt wrote:
>>>
>>>>On August 22, 2002 at 14:15:54, Gian-Carlo Pascutto wrote:
>>>>
>>>>>On August 22, 2002 at 13:47:46, Robert Hyatt wrote:
>>>>>
>>>>>>Doesn't it depend on the definition of "ply"?
>>>>>
>>>>>If they use a nonstandard definition of 'ply', then it's meaningless
>>>>>to say that they did 18 ply and therefore must have been great.
>>>>>
>>>>>None of the papers imply they do anything like that.
>>>>>
>>>>>There is a very simple explanation that makes everything come
>>>>>out logical: they didn't do 18 ply but 12. But then again, that's
>>>>>not an acceptable idea to some people.
>>>>>
>>>>>--
>>>>>GCP
>>>>
>>>>It simply isn't _reasonable_. Based on having watched them search 10-11
>>>>plies on Deep Thought, to assume that they get nothing from going 100X
>>>>faster? Do you _really_ believe that? Then why not stick with the
>>>>original Deep Thought hardware???
>>>
>>>Explanations:
>>>1) The assumption of 100X faster was wrong.
>>
>>That isn't an assumption. We know the average speed of Deep Thought was
>>2M nodes per second, directly from Hsu. We also know that the average
>>speed of DB was 200M from the same source...
>
>We know 126M nodes directly from Hsu.
>See page 4 of the paper.

We _really_ don't know that. I suspect it is right, but with multiple
authors, with Hsu having left IBM years ago, and with their carelessness in
using multiple NPS figures depending on who you talked to (I explained the
two different types of NPS figures in another post a few minutes back), I
would say "nothing can be assumed here..."

>We also know that the estimate for the efficiency of Deeper Blue was also
>8-12%. See page 18.
>
>I do not know if they mean 8-12% of 126M nodes or 8-12% of a different
>number.

This is a measure of _total_ performance, i.e., take 8-12% of the aggregate
speed of the 480 chess processors. According to Hsu, half of them ran at
20MHz and the other half ran at 24MHz, which means they searched an average
of 2.2M nodes per second each (10 clocks per node was the way they worked).
So for the worst case, take .08 * 480 * 2.2M, and for the best case, take
.12 * 480 * 2.2M, and you will be spot-on for their "effective serial NPS"
number (a quick sketch of this arithmetic follows at the end of this post).
The latter number is remarkably close to 126M, of course.

>>>2) They used more extensions.
>>
>>That could certainly be possible...
>>
>>>I remember that you claimed that 2 is not correct but
>>>I did not see it in the paper.
>>
>>I didn't say it wasn't correct. I said that Deep Blue's search was
>>definitely derived from Deep Thought's search.
>
>You said that the paper says exactly the opposite of the claim that they
>did more extensions, and the paper does not say it.

The paper clearly says that the last Deep Thought search was the search used
in Deep Blue. They might have added a few things. But the DT search was the
basis, according to the paper, and Hsu, and others at IBM. As I mentioned,
this was their justification for the name "deep blue prototype"... which
they specifically explained as "a new search designed for deep blue's
hardware, but using the deep thought hardware." They used this name for 3-4
years prior to 1996, which means the base search was also used for 3-4 years
prior to the first Deep Blue. That was my point. The extensions were
probably very _close_...

>See http://www.talkchess.com/forums/1/message.html?246929
>
>Uri
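For anyone who wants to check the numbers, here is a minimal sketch of that
"effective serial NPS" arithmetic in C. The 480-processor count, the
20/24MHz split, the 10-clocks-per-node figure, and the 8-12% efficiency
range are all taken from the discussion above; everything else (names,
structure) is just illustration, not anything from the paper itself.

#include <stdio.h>

int main(void) {
    const double processors      = 480.0;               /* chess chips */
    const double avg_clock_mhz   = (20.0 + 24.0) / 2.0; /* 22MHz average */
    const double clocks_per_node = 10.0;                /* per Hsu */

    /* 22MHz / 10 clocks per node = 2.2M nodes/sec per chip */
    const double nps_per_chip_m = avg_clock_mhz / clocks_per_node;

    /* 480 chips * 2.2M = 1056M nodes/sec peak aggregate speed */
    const double peak_m = processors * nps_per_chip_m;

    printf("peak aggregate: %.1fM nodes/sec\n", peak_m);
    printf("at 8%%:  %.1fM effective serial NPS\n", 0.08 * peak_m);
    printf("at 12%%: %.1fM effective serial NPS\n", 0.12 * peak_m);
    return 0;
}

At 8% efficiency that comes out to about 84.5M, and at 12% to about 126.7M,
which is right on top of the 126M figure Uri quoted from page 4 of the
paper.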