Author: Dave Gomboc
Date: 09:40:35 05/14/99
On May 14, 1999 at 10:00:03, Robert Hyatt wrote:

>On May 13, 1999 at 23:00:53, Eelco de Groot wrote:
>
>>Robert, Mr. Hyatt, thanks for all the new info on the 'Deep Blue for consumers'
>>chip! Does Mr. Hsu already have a name for it? I suppose you could call it 'Baby
>>Blue', but maybe that is too innocent a name for this monster... (A topic for
>>the polls, maybe, choosing a good name?). Regarding your thoughts on 'guts', I
>>am not a programmer, but does not the 'soul' of a program reside for a large
>>part in its positional understanding also? Since the chip can be operated in
>>parallel to a software program, could it not be used mainly for a deep tactical
>>evaluation? Letting the program do a 1 ply search on all the positional features
>>Deep Blue is not very good at, while the chip does a 4 ply mainly tactical
>>search? It would be up to the programmer then to decide how much weight each of
>>the two evaluations must get to retain the original character of the program. Am
>>I making any sense here?
>
>yes... but the problem here is that this is what programs like Fritz/Nimzo/etc
>do to an extent. They do a lot of work at the root of the tree, and then have
>a very primitive evaluation at the tips. And they make gross positional
>mistakes as a result. The _right_ way to search is a good search, followed by
>a _full_ positional evaluation. And that is _very_ slow (which is why the fast
>programs don't do this). DB _does_ however, because they do the eval in
>hardware and the cost is minimal compared to our cost.

"_Right_" depends on what works best. If you find assumptions that carry over to
all of the leaf positions that matter, and save yourself the cost of a full
evaluation at each one of them, you will be much faster. Sometimes a leaf
position that matters will get hit, and you get toasted. Tough one. :) Zobrist
hashing is no different. I don't think it is categorically an error to do such a
thing.
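To make the tradeoff concrete, here is a minimal sketch of the kind of shortcut being described: skip the expensive positional terms when a cheap score is already far outside the alpha-beta window. All names and the margin value are hypothetical illustrations, not Rebel's or Deep Blue's actual code.

```python
# Sketch of "lazy evaluation" at search leaves (hypothetical names and values).
# The assumption that carries over to the leaves is: positional terms never
# swing the score by more than LAZY_MARGIN. When that assumption fails at a
# leaf that matters, the search gets a wrong bound -- the "you get toasted"
# case Dave mentions.

LAZY_MARGIN = 150  # centipawns; assumed bound on the positional terms

def material(pos):
    # Cheap term: material balance (kept incrementally in a real engine).
    return pos["material"]

def positional(pos):
    # Expensive term: king safety, pawn structure, mobility, ...
    return pos["positional"]

def evaluate(pos, alpha, beta):
    fast = material(pos)
    # If even a maximal positional swing cannot bring the score back inside
    # the (alpha, beta) window, return the cheap score and skip full eval.
    if fast - LAZY_MARGIN >= beta or fast + LAZY_MARGIN <= alpha:
        return fast
    return fast + positional(pos)
```

With a piece up and a narrow window, the positional terms are never computed; near the window, the full evaluation still runs.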
The argument is that the bigger the subtrees get, the more likely you are to
screw yourself over, though I've not seen this demonstrated formally. I have
observed a small effect with Rebel (v8 and v9) when doing analysis -- or at
least, I think this is what is going on... of course, I can't be sure. Have it
search a root position to depth N, with the best move being a capture and the
best response a recapture. Now have it search the position after those two
moves to depth N-2. From my recollection, there is often a bump in the score
one way or the other. It is usually most severe when the queens come off.
AFAIK Rebel clears its hash tables after each move, but I suppose there could
be some hash-table residue causing different pruning/extension decisions or
something.

>IE you don't want to search 1 ply deep, and find that move "x" is positionally
>the best, then let the hardware search 20 plies deep and not notice that if you
>play "x", your opponent gets two connected passed pawns on the 6th, absolute
>control of the 7th rank, and the only open file on the board to boot... That
>is the mistake that 'root processors' make regularly... too much changes
>between the root, where the evaluation parameters are set, and the tips where
>they are applied...
>
>>Kind regards,
>>
>>Eelco de Groot.

Dave
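The consistency check Dave describes could be scripted against any engine. This is a sketch against a made-up engine interface (the `Engine.search`/`Engine.play` methods and the toy engine are assumptions for illustration, not Rebel's actual API):

```python
# Sketch of the score-consistency check described in the post, against a
# hypothetical engine interface. Search the root to depth N; play the capture
# and the recapture from the principal variation; search the resulting
# position to depth N-2. With a perfectly consistent search the two scores
# would match; the "bump" is their difference.

def consistency_bump(engine, root, n):
    score_root, pv = engine.search(root, depth=n)
    pos = engine.play(root, pv[0])   # the capture
    pos = engine.play(pos, pv[1])    # the recapture
    score_after, _ = engine.search(pos, depth=n - 2)
    # Scores are from the side to move; two plies flip the sign twice,
    # so score_after should equal score_root if the searches agree.
    return score_after - score_root

# Toy stand-in engine so the sketch is runnable; its score drifts with depth
# on purpose, standing in for whatever causes the real bump.
class ToyEngine:
    def search(self, pos, depth):
        return (100 + depth, ["cxd5", "exd5"])
    def play(self, pos, move):
        return pos + [move]

print(consistency_bump(ToyEngine(), [], 8))  # -> -2 with this toy engine
```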
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.