Author: Yngvi Bjornsson
Date: 11:14:26 11/18/99
Don't get me wrong, I'm not objecting to the claim that there is a negative effect with increased depth; I was simply questioning the explanations you gave ("increased chance of choosing the incorrect branch" and "the uncertainty increases as the values back up the tree"). They didn't sound right to me. On the other hand, I think you may be right that such negative effects exist (although observing that a D-ply search occasionally beats a D+1 search is not by itself an indicator that they do).

I do agree with you that the quality of the evaluation function can worsen with increased depth, and I think this is a far more plausible explanation. Like you said, Bob has mentioned this. Also, I know Jonathan Schaeffer observed similar behavior when automatically tuning the evaluation function in Chinook. He found it very important to tune the evaluation function using the same search depth as used in actual play. (On the other hand, the Cilkchess team used very shallow searches in their temporal-difference learning, with good results.)

I think it is important to realize that the sole purpose of the evaluation function is to provide a ranking (a total order) of the leaf nodes. Based on this ranking, the minimax-based search tells us the best "rank" we can provably reach. The ability to rank the leaf nodes correctly may worsen with increased depth, because:

- The positions evaluated become more disparate the deeper we search, so the evaluation function can no longer rank them as reliably (with shallow searches the leaf positions differ only in the placement of a few pieces and are therefore easy to compare, whereas with deep searches some fundamental characteristics may more often differ).

- The positions become more "obscure" (at least from the human perspective).
  For example, we may need to evaluate a position where all the major/minor pieces are still on the board but three or more pawns have been exchanged off for each side. In actual play such positions are unlikely to occur, but deep searches can still produce them. This is not a problem in itself, except that when humans tune the evaluation function they do so with "human-like" positions in mind. The evaluation function is not built with such positions in mind, and it doesn't handle them very well.

- There are more positions to rank. However, I'm not sure this matters at all; many of the positions are lop-sided and easy to evaluate as either good or bad (highly or lowly ranked).

I think this is an interesting topic. As I see it, search is simply a substitute for a lack of knowledge, and it is difficult to determine where the best tradeoff lies. Some static long-term positional features (e.g. pawn structure) that carry from the middlegame (or even the opening phase) into the endgame cannot be substituted for by a search and definitely need to be in the evaluation function. Others the search can reveal (e.g. tactics). The deeper we search, the more we rely on the search to discover some of the positional merits of the position. Therefore, in theory at least, we need less positional knowledge in the evaluation function itself to maintain the same standard of play. However, the deeper we go, the more we demand of the evaluation function to correctly rank possibly disparate positions, and for that some additional knowledge might be needed! Maybe one can conclude that with deeper searches one needs less positional knowledge, but one has to make sure that this knowledge is more finely tuned. Just a thought.

-Yngvi
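The "ranking" point above can be made concrete with a toy sketch: minimax only ever compares backed-up leaf values, so any strictly increasing transform of the evaluation (same ordering, different magnitudes) yields the same move choice. The tree values below are hypothetical illustration, not anything from the discussion:

```python
# Minimax over an explicit game tree: a leaf is a number (its static
# evaluation), an internal node is a list of children. Note that the
# algorithm only ever compares values, so only their ranking matters.

def minimax(node, maximizing=True):
    if not isinstance(node, list):      # leaf: return its static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def best_move(root):
    # Root is the maximizing side; pick the child with the best backed-up value.
    values = [minimax(child, maximizing=False) for child in root]
    return max(range(len(root)), key=lambda i: values[i])

# A small hypothetical 2-ply tree: three root moves, each with three replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(best_move(tree))                  # -> 0

# Squaring these (positive) leaf values preserves their ordering, and the
# chosen move is unchanged: the magnitudes carry no extra information.
squared = [[v * v for v in row] for row in tree]
print(best_move(squared))               # -> 0
```

This is why an evaluation function that mis-ranks even a few disparate leaf positions can hurt a deep search: the backed-up choice depends entirely on that ordering.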