Author: Uri Blass
Date: 09:10:37 08/27/02
On August 27, 2002 at 11:42:35, Robert Hyatt wrote:

>On August 27, 2002 at 06:14:03, Uri Blass wrote:
>
>>On August 27, 2002 at 04:34:16, Dave Gomboc wrote:
>>
>>>On August 25, 2002 at 23:39:29, Robert Hyatt wrote:
>>>
>>>>On August 25, 2002 at 21:56:03, Dave Gomboc wrote:
>>>>
>>>>>On August 22, 2002 at 16:09:07, Uri Blass wrote:
>>>>>
>>>>>>On August 22, 2002 at 15:51:28, Jeremiah Penery wrote:
>>>>>>
>>>>>>>On August 22, 2002 at 06:47:34, Gian-Carlo Pascutto wrote:
>>>>>>>
>>>>>>>>That does not make sense - it only does when you take the first number as
>>>>>>>>the nominal ply depth and the second number as the part of that that was
>>>>>>>>done by the hardware searches.
>>>>>>>
>>>>>>>So what does it mean when you have searches like this,
>>>>>>>
>>>>>>>--> 17. Be3 <-- 23/113:12
>>>>>>>---------------------------------------
>>>>>>>Guessing Qc7
>>>>>>> 3(4) 25 T=0
>>>>>>>qd1d2 Pc5c4 pb3c4P
>>>>>>> 4(5) 25 T=0
>>>>>>>qd1d2 Pc5c4 pb3c4P
>>>>>>> 5(5)[Qd2](25) 25 T=1
>>>>>>>qd1d2 Pc5c4 pb3c4P
>>>>>>> 6(5)[Qd2](25) 25 T=2
>>>>>>>qd1d2 Pc5c4 pb3c4P Qc7c4p
>>>>>>> 7(5) #[Qd2](28)##################################### 28 T=4
>>>>>>>qd1d2 Re8b8 nf3e5P Pd6e5n
>>>>>>> 8(6) #[Qd2](28)##################################### 28 T=12
>>>>>>>qd1d2 Re8b8 bc2d3 Pa6a5 pc3c4
>>>>>>> 9(6)<ch> 'ng6'
>>>>>>>---------------------------------------
>>>>>>>--> Ne7g6 <--
>>>>>>>---------------------------------------
>>>>>>> 28 T=19
>>>>>>>qd1d2
>>>>>>> 3(4)[Qd2](30) 30^ T=1
>>>>>>>qd1d2 Pc5c4 pb3c4P Pb5c4p
>>>>>>> 3(5) 35 T=1
>>>>>>>qd1d2 Qd8c7 pb3b4 Pc5c4 be3h6P
>>>>>>> 4(5) 35 T=1
>>>>>>>qd1d2 Pa6a5 pa2a3
>>>>>>>
>>>>>>>where you have depths like 3(4)? They can't have 3 nominal plies, where 4 of
>>>>>>>those plies come from the hardware, because obviously that's impossible.
>>>>>>
>>>>>>A good question.
>>>>>>
>>>>>>I do not understand the meaning of the second number,
>>>>>>but the first number is clearly the brute force depth based on their paper.
>>>>>>
>>>>>>Maybe the second number is about some limit on the extensions, but I do not
>>>>>>know.
>>>>>>
>>>>>>Uri
>>>>>
>>>>>Uh, is that what you guys are all discussing _again_?
>>>>>
>>>>>Sheesh.
>>>>>
>>>>>The first number is the depth of the software search. The second number is the
>>>>>depth of the hardware search. I posted this _years_ ago after asking a member
>>>>>of the DB team directly: check the archives.
>>>>>
>>>>>Dave
>>>>
>>>>That is what I was told also. However, a fairly new paper really clouds the
>>>>issue in that they mix depths between DB2 in the 1997 match, DB Jr on slower
>>>>hardware, etc...
>>>>
>>>>I think that the only explanation for the (x) number is the one given by the
>>>>team to me. And apparently to you as well, and probably others that simply
>>>>don't post here...
>>>
>>>Often when they refer to their search tree they refer to the software depth
>>>only. Which paper is causing the kerfuffle?
>>>
>>>Dave
>>
>>A paper by Murray Campbell, Joseph Hoane Jr and Feng-hsiung Hsu (August 1, 2001).
>>
>>In that paper they said the following on page 5:
>>
>>"A three minute search on Deep Blue would reach a full width depth of 12.2 on
>>average."
>>
>>"The estimate is based on a linear least squares fit on all the iteration, log
>>time data points from the 1997 match against Kasparov."
>>
>>They also say the following on page 13 about Deep Blue Jr:
>>"For a given iteration i, the software is assigned i-4 ply, which represents the
>>minimum depth search in software."
>>
>>They never said that iteration means different things in Deep Blue Jr and Deep
>>Blue, so it is logical to assume that if iteration means software+hardware in
>>Deep Blue Junior it is also the case in Deep Blue.
>>
>>I have some questions for them that I did not understand from the paper.
>>I will be happy if they answer only by yes/no, and they can answer the last
>>question by a number.
>>
>>1) Does iteration mean the minimal depth at which they could not miss tactics
>>(in other words, in the worst case they could miss a tactical line of 13 plies
>>when they searched iteration 12)?
>>
>>2) Does iteration mean the software search in Deep Blue II?
>>
>>3) Did they use only selective search in the hardware (they say in comment 3 on
>>page 5 that their experiment showed that Deep Blue sacrificed 2 plies of full
>>width search in order to execute the selective search algorithm)?
>
>You are misunderstanding things here.
>
>There are two kinds of selective search:
>
>1. forward pruning, which they seem to do in hardware, as Hsu had mentioned
>futility pruning in the hardware several times.
>
>2. selective extensions. I found in Cray Blitz that implementing SE as
>they reported in the SE paper costs at least a ply. But then they later
>added to this by doing the special cases of only two good replies or only
>three good replies. That is likely what they mean, but that is _not_ done
>only in hardware, which makes the above pretty much impossible to interpret.
>
>I have no idea what the above sentence means, however. Perhaps it is the
>futility pruning, which seems to be only in the hardware. But it is ambiguous,
>as are several other parts of the paper.

I thought it is possible that what they did in the hardware of Deeper Blue is a
real selective search from the first ply, and that they sacrificed 2 plies of
brute force search in order to gain plies of selective search.

>
>>
>>4) Did they do test games to prove that the sacrifice of 2 plies was productive
>>(I can understand if they assumed it without testing because they did not have
>>enough time to test)?
>
>Yes. This was reported in their initial paper on SE. They both played
>games with and without, which exaggerated the improvement, and they tried
>test suites which suggested the improvement was more modest. This was
>published in the JICCA somewhere in the late 1980's...

The question is whether they mean the plies that they sacrificed for the
singular extensions.

>
>>
>>5) They give an average number of 126M nodes per second.
>>Is it the number of nodes (not only effective nodes) that they searched?
>
>That I can answer. It is, at best, an approximation, since they didn't count
>nodes. As I said before, I can think of two ways they could produce such a
>number. One would be to measure the effective duty cycle of each chess
>processor as a percentage from 0% to 100%, and multiply each of those by the
>speed of that chess processor. Sum 'em up and you get raw nodes searched.
>
>But Hsu generally used his "efficiency" value to report effective nodes
>instead, something he started in his thesis. I.e., I can't imagine any way
>possible where he would take a machine capable of searching > 1 billion nodes
>per second, and be happy if it only searched at 10% of its rated speed. Which
>leads me to believe that 126 is "effective serial nodes" so it could be compared
>to a normal machine someone might use.

I prefer to hear what the Deep Blue team says.
They said in the paper that they thought the evaluation is more important than
the speed, so I can understand if they did not care about the fact that the
machine could search more nodes.

>
>>
>>6) They give an estimate of efficiency of 8-12%.
>>I am not sure about the meaning of it.
>
>This is from his dissertation. And it is the same value we all report,
>although I try to do it in the inverse way.
>
>It means that his search acts like it is running on a machine that is only
>8-12% of the speed of the real machine. Due to search overhead and idle
>chess processor time. For Crafty, this number is about 75% on a quad, since
>I typically run about 3x faster, rather than the 4x that would be optimal.
>
>Anybody that uses this kind of number implies "x% efficiency means that I
>effectively use x% of the total hardware."

In Crafty the processors always work, and in Deeper Blue it was different.
The Deeper Blue team needs to explain whether 8-12% efficiency means
effectively using 8-12% of the total hardware or effectively using 8-12% of
the total nodes.

Uri
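
The paper's depth estimate quoted earlier in this thread ("a full width depth of
12.2" in a three minute search) is described as a linear least squares fit of
iteration number against log search time. The following is only a minimal sketch
of that kind of fit; the (time, iteration) pairs are made-up placeholders, since
the 1997 match data is not given here.

# Fit iteration depth as a linear function of log(search time), then read off
# the predicted depth for a 180-second (three-minute) search.
import math

# Placeholder data points, NOT the 1997 match data.
samples = [
    (20.0, 10),    # hypothetical: a 20-second search finished iteration 10
    (45.0, 11),
    (110.0, 12),
    (240.0, 13),
]

# Ordinary least squares for depth = a * ln(time) + b.
xs = [math.log(t) for t, _ in samples]
ys = [d for _, d in samples]
n = len(samples)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
a = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
    sum((x - x_mean) ** 2 for x in xs)
b = y_mean - a * x_mean

print("estimated depth after a 180-second search: %.1f" % (a * math.log(180.0) + b))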
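
Hyatt's description of how a 126M nodes-per-second figure could be produced
(per-processor duty cycle times rated speed, summed over the chess processors)
and of the 8-12% efficiency number amounts to the arithmetic sketched below. The
processor count and per-chip speed are illustrative assumptions, not the actual
Deep Blue hardware figures.

# Effective speed = sum over processors of (rated speed * duty cycle);
# efficiency = effective speed / aggregate peak speed.
processors = [
    # (rated nodes per second, duty cycle between 0.0 and 1.0)
    (2500000, 0.10),
] * 480  # assume 480 identical chess processors for this sketch

effective_nps = sum(speed * duty for speed, duty in processors)
peak_nps = sum(speed for speed, _ in processors)
efficiency = effective_nps / peak_nps

print("effective speed: %dM nodes/sec" % (effective_nps / 1e6))
print("peak speed:      %dM nodes/sec" % (peak_nps / 1e6))
print("efficiency:      %.0f%%" % (efficiency * 100))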