Author: Robert Hyatt
Date: 07:59:57 08/24/04
On August 24, 2004 at 09:57:45, Tord Romstad wrote:

>On August 24, 2004 at 06:14:32, Uri Blass wrote:
>
>>On August 24, 2004 at 04:57:02, Tord Romstad wrote:
>>
>>>Making the static eval aware of its limitations offers many interesting
>>>possibilities, and I think there are many valuable and important ideas
>>>waiting to be found by the adventurous programmer here. The basic
>>>idea is to extend in positions where the static eval is likely to be
>>>highly inaccurate, and to reduce in positions where it is likely to
>>>be very accurate (internal node recognizers are an extreme special case).
>>
>>The idea is simple, but the problem is writing an evaluation that can
>>estimate the variance of its own score.
>
>In general, it is of course very difficult. But a few simple special
>cases can be implemented rather easily. One case was the one I described
>in my previous message: a winning material advantage for one side, a very
>strong attack for the other side. A similar case is when a huge material
>advantage is compensated by dangerous passed pawns. In both of these
>cases, the static eval is likely to be highly unreliable, and it makes
>sense to extend. For reductions, consider a simple endgame where one
>side is ahead by a rook, the other side has no passed pawns, and the
>stronger side has no hanging pieces. In such situations, the winning
>score returned by the static eval is almost certainly correct, and
>reducing the depth is relatively safe.
>
>Perhaps it would be possible to improve and generalize such techniques
>by statistical methods. Start with a huge set of positions from real
>games, along with the results of all the games. Let your chess engine
>evaluate all the positions, and look at the values of all the components
>of the evaluation function (material, pawn structure, mobility, centre
>control, king safety, hanging/pinned pieces, etc.). By studying the
>data, it is possible that we could find formulas to make crude estimates
>of the probability distribution of the three possible results based on
>the "evaluation vector".
>
>Tord

_Years_ ago (circa the mid-70s) I (and others) did something close to
this. My evaluation returned two numbers, one the "score" and the other
the "uncertainty" associated with that score. The uncertainty was based
on the usual suspects, i.e., unsafe king position, advanced passed pawn,
more than one piece apparently hanging, etc. I then used the depth and
the uncertainty to make search extension decisions.

The problem with this today is that I am afraid the "uncertainty" would
always be high. For example, weak pawns are complex creatures to
recognize, and the uncertainty attached to them has to be high, since it
depends on lots of pawn moves that have not yet been made and may never
be playable at all.

I suspect that the more complex the eval, the higher the overall
uncertainty, and the less useful this concept will be, perhaps to the
point where it is better to spend the effort writing code to reduce the
uncertainty rather than writing lots of code to use the uncertainty to
force the search deeper...
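
To make the two-value idea concrete, here is a minimal C sketch of an
evaluation that returns both a score and an uncertainty estimate, with
the search adjusting its depth from that estimate. Everything in it is a
hypothetical illustration: the Position type, the feature-test helpers,
and all weights and thresholds are invented for this sketch and do not
come from Crafty or any other actual engine.

    #include <stdlib.h>

    /* Two-value evaluation: a score plus a crude "uncertainty" error
     * bar. All names, weights, and thresholds below are illustrative
     * assumptions, not code from a real engine. */
    typedef struct {
        int score;        /* centipawns, from the side to move's view */
        int uncertainty;  /* rough error estimate, also in centipawns */
    } EvalResult;

    /* Stand-in position type and feature tests (assumed helpers,
     * not a real engine API). side is 0 or 1. */
    typedef struct Position Position;
    extern int material_balance(const Position *p);
    extern int king_is_unsafe(const Position *p, int side);
    extern int advanced_passed_pawns(const Position *p, int side);
    extern int hanging_pieces(const Position *p, int side);

    EvalResult evaluate(const Position *pos, int side)
    {
        EvalResult r;
        r.score = material_balance(pos);  /* positional terms elided */
        r.uncertainty = 0;

        /* The "usual suspects" from the post: each risky feature
         * widens the error bar on the static score. */
        if (king_is_unsafe(pos, side) || king_is_unsafe(pos, !side))
            r.uncertainty += 150;
        r.uncertainty += 100 * advanced_passed_pawns(pos, !side);
        if (hanging_pieces(pos, side) + hanging_pieces(pos, !side) > 1)
            r.uncertainty += 120;
        return r;
    }

    /* Combine depth and uncertainty into an extension/reduction
     * decision: extend when the static score cannot be trusted,
     * reduce when a large score looks reliable (e.g. a clean rook
     * up with no counterplay). */
    int adjusted_depth(int depth, const EvalResult *r)
    {
        if (r->uncertainty > 200)
            return depth + 1;                        /* extend */
        if (r->uncertainty < 50 && abs(r->score) > 500)
            return depth - 1;                        /* reduce */
        return depth;
    }

As the post observes, the practical question is whether such
feature-based uncertainty terms stay small enough, in a complex eval,
to be informative rather than uniformly high.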