Author: Robert Hyatt
Date: 10:18:23 08/21/02
On August 21, 2002 at 12:05:12, Uri Blass wrote:

>On August 21, 2002 at 11:21:18, Robert Hyatt wrote:
>
>>On August 21, 2002 at 10:35:13, Vincent Diepeveen wrote:
>>
>>>On August 21, 2002 at 10:26:57, Terry McCracken wrote:
>>>
>>>hello,
>>>
>>>Another thing.
>>>
>>>In 1998/1999 Hyatt claimed that Deep Blue searched 11 to 12 plies,
>>>but that their *extensions* were better than everyone else's.
>>>
>>>They used singular extensions.
>>>
>>>In 2000/2001, I was also using singular extensions, as were
>>>several other programs.
>>
>>
>>And as I said, your implementation is pitiful compared to _real_ SE
>>as implemented in Deep Thought/Deep Blue, Cray Blitz and HiTech. You
>>totally fail to handle the FH-singular case, which is complex and expensive.
>>
>>And you _also_ fail to remember that you have said _many_ times "singular
>>extensions suck and what I do is much better". Only later did you discover that
>>your _implementation_ sucked, and when you finally got it right, it
>>did work pretty well.
>>
>>Which is typical for you, of course...
>>
>>
>>> Then suddenly, when we searched beyond 11 to 12 ply
>>>depths, it was said that the 12(6) of the machine, which means 12 ply
>>>nominal search depth, was excluding a 6 ply hardware search depth.
>>
>>Directly from the team, remember. I posted the email _right here_ to
>>make it public...
>>
>>
>>>
>>>Which is bloody idiocy in itself, because the thing had no hash table.
>>>
>>>The great theoretician Knuth has written a lemma for game tree search.
>>>
>>>The minimum tree to search at 18 ply search depth using alpha-beta
>>>(without null move, which wasn't used by Deep Blue):
>>>
>>>2 * (squareroot(number of legal moves) ^ depth)
>>>
>>>or:
>>>
>>>2 * sqrt(40)^18 = 524288000000000 nodes needed to search it, *minimum*.
>>>
>>>As you can imagine, getting an 11 to 12 ply full-width search was already a
>>>very good achievement in 1997.
>>
>>
>>Again, your theoretical explanation is wrong. How do you explain that Knuth's
>>"lemma" (as you wrongly call it) predicts a branching factor of > 6, when we can
>>_prove_ that DB's branching factor was under 4?
>
>I do not intend to continue arguing here, but one point only:
>we cannot prove that the branching factor was less than 4.
>
>The output is not a proof, because people can choose not to believe that 12(6)
>means 18.
>
>If someone can make a free program with a branching factor of less than 4 in spite
>of no pruning (except futility pruning), then it may be interesting to see.
>
>Uri

It doesn't matter how you interpret 12(6). All you have to do is take the
iteration-by-iteration time values and subtract them to get the raw time for
completing each iteration, then do the division. You definitely see things like
9(5) 10(5) 11(6); just take those three iterations and compute the branching
factor, without making any assumption about what the 9(5) means, other than that
a particular "depth iteration" was completed...
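
As a small illustration of both calculations, here is a minimal C sketch: part (a)
recomputes the Knuth/Moore minimal-tree figure quoted above for 40 legal moves and
18 plies, and part (b) derives an effective branching factor from cumulative
iteration times the way described in the last paragraph. The cumulative times and
depth labels in the sketch are hypothetical, invented purely for illustration; they
are not taken from any Deep Blue log.

/* Sketch only.  Times and depths below are hypothetical. */
#include <math.h>
#include <stdio.h>

int main(void) {
    /* (a) Knuth/Moore minimal alpha-beta tree, approximated as 2 * b^(d/2) */
    double b = 40.0, d = 18.0;
    printf("minimal tree, b=40, d=18: %.0f nodes\n", 2.0 * pow(sqrt(b), d));

    /* (b) Hypothetical cumulative times (seconds) at the end of four
     * consecutive iterations.  Subtracting consecutive values gives the
     * raw time spent on each iteration; the ratio of consecutive raw
     * times is the effective branching factor. */
    int    depth[]      = { 8, 9, 10, 11 };
    double cumulative[] = { 20.0, 75.0, 290.0, 1100.0 };
    int n = sizeof cumulative / sizeof cumulative[0];

    double prev = 0.0;
    for (int i = 1; i < n; i++) {
        double raw = cumulative[i] - cumulative[i - 1];
        if (prev > 0.0)
            printf("depth %d: %.0f s, effective branching factor %.2f\n",
                   depth[i], raw, raw / prev);
        prev = raw;
    }
    return 0;
}

Part (a) prints 524288000000000, the same number quoted above, and part (b) shows
that the branching-factor estimate needs no interpretation of the 12(6) notation
at all, only the per-iteration times.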