Author: Vincent Diepeveen
Date: 06:43:49 01/30/02
Hello, I would like to address some major nonsense about Deep Blue's search depth. First of all, we know that Deep Blue searched full-width with loads of extensions.

a) 12(6) does not mean they searched 18 ply. It means they searched 12 plies full-width, of which 6 ply in hardware. Nevertheless Bob again claims it was 18 ply full-width. Note that he bases that claim on an EMAILED, not PUBLISHED, statement from a non-programmer, who said it was Deep Thought printing out stuff like that. Note that the only published depth is 12 ply, in the IEEE99 paper, an OFFICIAL document.

Let's assume Bob, as IBM's only defender, is right, and do some math on the MINIMUM number of nodes an 18-ply search would need. We also know they did not use a hash table in the hardware.

It is not amazing to see that in the complex middlegame Deep Blue searched just as deep as in the far endgame: about 11(6) to 12(6). That means Knuth's lemma was of course working for them; otherwise they would have searched many plies deeper in endgames. Knuth's lemma says that, for even depths, the minimal tree needs about the square root of the full tree: roughly b^(d/2) leaves for an average of b moves per position.

Now everyone can easily figure out what the average number of legal moves is (positions where checks get extended do not count). I measured it at 40. I am the only person here who has recently measured this; the previous figure of 35 INCLUDED checks, is about 50 years old, and only described positions from GM games. It is obviously logical that 40 is pretty accurate in the middlegame.

Note that in this math we assume PERFECT move ordering, we do not count the huge number of nodes singular extensions need, and we forget the fact that in the games Deep Blue proved tactically weaker than Crafty would be even at 15 ply, whereas the truth is that they DID do some extensions of course; it is even mentioned that the software part did 'dangerous extensions'.
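The "square root" rule of thumb above can be sketched in a few lines of Python. This is a minimal illustration, not anything from Deep Blue itself: `minimal_tree_leaves` is my own name for the Knuth–Moore minimal alpha-beta tree formula, b^ceil(d/2) + b^floor(d/2) - 1 under perfect move ordering, which for even d is about twice sqrt(b^d) (the factor of two does not change the argument):

```python
import math

def minimal_tree_leaves(b: int, d: int) -> int:
    """Leaves of the minimal alpha-beta tree under perfect move
    ordering: b^ceil(d/2) + b^floor(d/2) - 1 (Knuth & Moore).
    For even d this is roughly 2 * sqrt(b**d)."""
    return b ** math.ceil(d / 2) + b ** (d // 2) - 1

b = 40  # average number of legal moves, as measured in this post
for d in (12, 18):
    print(f"depth {d}: {minimal_tree_leaves(b, d):,} leaves")
```

Even this lower bound, which ignores extensions entirely, already separates a 12-ply claim from an 18-ply claim by a factor of 40^3 = 64,000.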
For an 18-ply full-width search: sqrt(40^18) = 40^9 = 262,144,000,000,000 nodes.

Now, we know Deep Blue ran at 200M nodes a second, which from a hardware viewpoint is an incredible achievement and the only thing IBM cared about. IBM *never* said anything about searching deeper. So for an 18-ply search the theoretical minimum, assuming a 100% parallel speedup for Deep Blue (whereas in reality Hsu published in IEEE99 that they got about 20% out of it), would have been:

262,144,000,000,000 nodes / 200,000,000 nodes a second = 1,310,720 seconds to get 18 ply.

We definitely need to note that in 1995 not a single program with mobility got *near* 11 or 12 ply at 3 minutes a move, not even WITH nullmove. Even in 1997 it was a pain to reach 11 to 12 ply with just nullmove for programs with mobility. Things only started moving fast years after that. Yet a single email from a person who didn't make the chip is quoted by Bob and believed by some here who are bad at math.

18 ply full-width without a hash table is unreachable, CCC audience; with nullmove things change rapidly, however. This is why nullmove earns the Nobel prize, and the defenders of Deep Blue must admit that in 2002 it is hard to claim that a program which just searched full-width, and which was promoted only on nodes per second, got 17 to 18 ply rather than 11 to 12.

12 ply is in fact already a really good full-width search depth: if I let DIEP search full-width WITH hash tables and singular extensions, I do not even get over 10 ply. The reason is real easy. I apply my singular extensions up to 4 ply from the leaves, whereas Deep Blue 'only' applied them 7 ply from the leaves. For someone who never uses forward pruning (like the nullmove R=3 I use in DIEP), it is simply impossible to explain the huge problems singular extensions cause.

Best regards,
Vincent Diepeveen
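[Editor's note: the arithmetic in the post above can be checked directly. This is a sanity check only, using the post's own assumptions (branching factor 40, the sqrt(b^d) lower bound, 200M nodes/second):

```python
# Lower bound on leaves for an 18-ply full-width search at
# branching factor 40, per the sqrt(b**d) rule used in the post.
leaves = 40 ** 9                 # sqrt(40**18)
nps = 200_000_000                # Deep Blue's published speed
seconds = leaves // nps
days = seconds / 86_400

print(f"{leaves:,} leaves")      # 262,144,000,000,000
print(f"{seconds:,} seconds")    # 1,310,720
print(f"about {days:.1f} days")  # about 15 days per move
```

So even this best-case bound puts an 18-ply full-width search at roughly 15 days per move, against the 3 minutes actually available.]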