Author: Keith Evans
Date: 20:41:08 02/01/03
On February 01, 2003 at 21:57:36, Robert Hyatt wrote:

>On February 01, 2003 at 20:52:11, Keith Evans wrote:
>
>>On February 01, 2003 at 14:57:42, Jorge Pichard wrote:
>>
>>>How much better is the Deep Junior program in comparison to Deeper Blue,
>>>considering that Deeper Blue was about 67.8 times faster than Deep Junior
>>>and Kasparov is a much better player now than he was back in 1997? I must
>>>also ask: if Deep Junior or Deep Fritz were running on special-purpose
>>>parallel hardware like Deeper Blue, what would be the chance of Kramnik or
>>>Kasparov winning at least one game out of 6 at a standard time control?
>>>
>>>http://www.ishipress.com/manmach2.htm
>>>
>>>pichard
>>
>>I have a dumb question...
>>
>>If you do a 6-ply full-width search without hash tables, and then do the
>>same search with hash tables, how will the node counts compare?
>
>In the middlegame, they will be close. Very few "EXACT" matches occur. In
>endgames, this is different.

So are hash tables basically useless in the middlegame? Or is there some
reason that they still help? Am I missing something? Has anybody tried
disabling them in the middlegame to get rid of the memory access latency? Or
are the entries small enough that this is a negligible amount of traffic?
Does everybody's branching factor suffer in the middlegame because you can't
use hash-table moves? (A probe sketch at the end of this post shows what a
table can return besides an EXACT score.)

Would Hsu's chips really suffer in the endgame due to the lack of hash tables
and tablebases? I guess that once he really got into an endgame he could
defer to a software solution, but when he was starting a search from, say,
the late middlegame this probably hurt. (I know that he had some small
endgame tables in the chips.)

>>Deep Blue didn't have hardware hash tables, and I believe that the 200M
>>number includes a bunch of repeated nodes. (I use 6 plies because I believe
>>that's what the chips did in the full-width part of the search.)
>>
>>Then there's the question of how the lack of a value stack for alpha-beta
>>contributes to re-searches - the chips implemented a minimum-window search.
>>How does this affect node counts? As Hsu says "...when the new move is
>>better than the current best move, we may need to research the new move. We
>>can ... repeat the minimum-window search multiple times, raising the test
>>value slightly each time." (But then he says that it's just as efficient as
>>regular alpha-beta.)
>
>And it is similar to PVS as well, where almost all of the tree is searched
>with a simple null-window (x,x+1) search.
>
>>Also I believe that the 200M number doesn't account for inefficiencies of
>>the parallel search. The two previous items that I mentioned would be true
>>even for a single chess chip - if you have multiple chips working in
>>parallel then there's obviously going to be additional overlap.
>
>DJ's 3M number doesn't account for this either. Neither does my 2.5M number
>on my dual Xeon...
>
>Not any easy way to measure this "loss"...

I thought that it might not be as bad for SMP systems, since they can get
some benefit from shared hash tables. Every one of Hsu's chess chips was
completely independent at this level - so any search overlap would be
completely wasted effort. I understand that it would be difficult to
quantify. (The PVS sketch at the end of this post shows where the null-window
re-searches come from.)

>>I'm sure that this has been discussed before, but it seems like these DB
>>node counts are especially misleading. I doubt that there's any chance of
>>getting any consensus on how they should be scaled for comparison.
>>
>>Regards,
>>Keith
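[Probe sketch] A minimal C sketch of the kind of hash-table probe under
discussion. This is my own illustration, not Deep Blue's or Crafty's actual
layout; the entry fields and the names tt_entry, tt_probe, TT_SIZE are
invented for the example. The point it shows: even when few EXACT entries
match, a stored bound can still produce a cutoff, and the stored best move is
available for ordering either way.

#include <stdint.h>

enum { TT_EXACT, TT_LOWER, TT_UPPER };

typedef struct {
    uint64_t key;    /* Zobrist hash of the position */
    int16_t  score;
    uint8_t  depth;  /* remaining depth this entry was searched to */
    uint8_t  flag;   /* TT_EXACT, TT_LOWER, or TT_UPPER */
    uint16_t move;   /* best move found here, kept for move ordering */
} tt_entry;

#define TT_SIZE (1u << 20)        /* hypothetical table size (power of two) */
static tt_entry table[TT_SIZE];

/* Returns 1 and fills *score when the entry allows an immediate cutoff;
   returns 0 otherwise, but *move may still be usable for ordering. */
int tt_probe(uint64_t key, int depth, int alpha, int beta,
             int *score, uint16_t *move)
{
    tt_entry *e = &table[key & (TT_SIZE - 1)];
    if (e->key != key)
        return 0;                  /* empty slot or a different position */
    *move = e->move;               /* useful even without a score cutoff */
    if (e->depth < depth)
        return 0;                  /* entry too shallow to trust the score */
    if (e->flag == TT_EXACT ||
        (e->flag == TT_LOWER && e->score >= beta) ||
        (e->flag == TT_UPPER && e->score <= alpha)) {
        *score = e->score;
        return 1;
    }
    return 0;
}

Even in the middlegame, where EXACT hits are rare, the LOWER/UPPER bound
cutoffs and the ordering move are the benefits people usually cite.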
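[Minimum-window sketch] One possible reading of the stepwise scheme Hsu
describes in the quote above, as a sketch only. null_window_search() is a
hypothetical routine answering "is the score greater than test?" (a
(test, test+1) search); Position, STEP, and SCORE_MAX are likewise invented
names, not Deep Blue's actual interface.

typedef struct Position Position;  /* opaque board state (stub) */

#define STEP      1                /* raise the test value this much per pass */
#define SCORE_MAX 32000

/* stub: returns nonzero iff the score of pos at this depth is > test */
int null_window_search(Position *pos, int depth, int test);

int minimum_window_value(Position *pos, int depth, int start)
{
    int test = start;
    /* While the position still beats the test value, raise the bar and
       re-search, per the quote. Assumes the true score is at least start;
       if the first test fails, this just reports "no better than start",
       which is all a move loop comparing against the current best needs. */
    while (test < SCORE_MAX && null_window_search(pos, depth, test))
        test += STEP;
    return test;   /* with STEP == 1 this converges on the exact score */
}

If this reading is right, no value stack is needed in hardware: each pass is
an independent yes/no search, at the cost of the occasional re-search.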
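[PVS sketch] For comparison, the PVS idea Bob mentions, where almost all of
the tree gets a null-window (x,x+1) search. A minimal C sketch; Position,
Move, gen_moves(), make(), unmake(), and evaluate() are hypothetical stubs,
not any real engine's API.

typedef int Move;                              /* encoded move (stub) */

int  gen_moves(Position *pos, Move *out);      /* fills out[], returns count */
void make(Position *pos, Move m);
void unmake(Position *pos, Move m);
int  evaluate(const Position *pos);

int pvs(Position *pos, int depth, int alpha, int beta)
{
    if (depth == 0)
        return evaluate(pos);

    Move moves[256];
    int n = gen_moves(pos, moves);

    for (int i = 0; i < n; i++) {
        int score;
        make(pos, moves[i]);
        if (i == 0) {
            /* first move: full (alpha,beta) window */
            score = -pvs(pos, depth - 1, -beta, -alpha);
        } else {
            /* later moves: null window (alpha,alpha+1) just asks
               "is this move better than the current best?" */
            score = -pvs(pos, depth - 1, -alpha - 1, -alpha);
            if (score > alpha && score < beta)
                /* it failed high: re-search with the full window */
                score = -pvs(pos, depth - 1, -beta, -alpha);
        }
        unmake(pos, moves[i]);
        if (score >= beta)
            return score;          /* cutoff */
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}

The difference from the stepwise sketch is that PVS does one full-window
re-search after a fail-high instead of repeating the null-window search at a
raised test value, which is presumably why the node counts come out similar.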