Author: Ratko V Tomic
Date: 20:53:30 10/27/99
> Call it selection or not, but the name you put on it has little impact
> on what it is doing. It is "selective", because the potential tree
> of this algorithm is smaller than a "non-selective" version.

I think a more accurate characterization, one which captures both the old style "static/fixed selective" and the newer "dynamic selective" algorithms, is inexact minimaxing. Namely, pure alpha-beta (which has a lower computational bound of about sqrt(N), where N is the total number of leaf nodes, visited or not) will always return exactly the same value for the root as full minimaxing (which visits all N leaf nodes) would. If your algorithm achieves an effective branching factor smaller than sqrt(AverageWidth), it is inexact minimaxing, i.e. it will not always return the same value (or the same "best" move) as full minimaxing would.

In the context of chess algorithms, inexact minimaxing doesn't mean the program will pick a worse move than exact minimaxing of the _same tree_ (which would also take much longer to compute) would. Since the evaluation functions are inherently inexact (not only the positional terms but even the points for material are inexact, a hypothesis; e.g. there are positions where a knight is worth more than a queen), there is no useful purpose in expending the effort (via pure alpha-beta) on finding the exact PV which has the best inexact value. It would be as wasteful as measuring a room for a carpet to micron precision, even though the carpet store will measure and charge you for the carpet using, say, 1/4 inch precision.

The most economical use of the search time is achieved if one can arrange for the inexactness of the minimaxing to be of roughly the same size as the inexactness of the evaluation function. Of course, someone writing a program doesn't usually know either figure, the inexactness of the evaluation function or the inexactness of the approximate minimaxing. But indirectly, by varying the evaluation terms and, say, by varying the ways cutoffs or traversals are done (thus varying the approximation of the minimaxing), one can probe for the strongest combination and thus indirectly obtain an indication that the two sources of error are roughly equal (making that particular empirical combination the most efficient user of computing time), or at least as close as they can get within the particular class of variations.
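To make the exactness point concrete, here is a toy sketch in Python (purely illustrative, not taken from any engine; the branching factor, depth and random leaf values are invented for the example): plain alpha-beta over a small uniform tree returns exactly the same root value as full minimaxing while visiting fewer of the N leaves, whereas anything that cuts more than alpha-beta does can in general change the root value or the chosen move.

import random

BRANCHING, DEPTH = 4, 5

def make_tree(depth):
    # Build a uniform game tree; leaves get random "inexact" scores in [0, 1).
    if depth == 0:
        return random.random()
    return [make_tree(depth - 1) for _ in range(BRANCHING)]

def minimax(node, maximizing, visited):
    # Full minimaxing: visits every one of the N leaf nodes.
    if not isinstance(node, list):
        visited[0] += 1
        return node
    values = [minimax(child, not maximizing, visited) for child in node]
    return max(values) if maximizing else min(values)

def alphabeta(node, maximizing, alpha, beta, visited):
    # Pure alpha-beta: prunes, yet returns exactly the full-minimax value.
    if not isinstance(node, list):
        visited[0] += 1
        return node
    best = -1.0 if maximizing else 2.0
    for child in node:
        value = alphabeta(child, not maximizing, alpha, beta, visited)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, value)
        else:
            best = min(best, value)
            beta = min(beta, value)
        if beta <= alpha:           # safe cutoff; the root value is unchanged
            break
    return best

tree = make_tree(DEPTH)
mm_count, ab_count = [0], [0]
exact = minimax(tree, True, mm_count)
pruned = alphabeta(tree, True, -1.0, 2.0, ab_count)
print(exact == pruned)              # True: identical root value
print(mm_count[0], ab_count[0])     # 1024 leaves visited vs. noticeably fewer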
Another related issue with minimaxing is whether the inexact values of the leaf nodes (which are in effect equivalent to estimates of the probability of a win, except that they're not normalized to 1 as conventional probabilities are) should be minimaxed at all, i.e. whether minimaxing (exactly or inexactly) such estimates actually maximizes the probability of a win, which is the real objective. For some artificially constructed games (built for ease of mathematical treatment) it can be shown that minimaxing inexact (probabilistic) leaf evaluations does not produce the maximum chance of a win. While there is no such mathematically exact result for chess, empirical observation of the current programs suggests that their minimaxing leads them to greedily take additional material (thus optimizing their minimax value within the horizon) even though they may already be in a winning position which could be converted to a whole point without taking any risk; instead they grab irrelevant pawns and give the opponent chances he didn't have.

Similarly, they are reluctant to give back material when well ahead materially, even though that would convert a somewhat double-edged endgame into a much simpler, risk-free win. The programs behave as if final checkmate (which may be beyond the horizon) with an extra pawn were worth more than checkmate while equal in material. The underlying cause of this self-destructive behavior is the absence of any estimate of the evaluation error range: even though grabbing a pawn in a winning position has a nominally greater (inexact) minimax value than picking a quieter move, the latter may have a smaller error range (e.g. because it doesn't give the opponent a tempo to start some desperate attack and thus open chances for himself that he didn't have before the program's pawn grab). So the worst possibility behind the "best" move may actually be worse than the worst possibility behind the apparently "suboptimal" move.
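To put a rough number on the error-range point (the figures below are invented purely for illustration, not measured from any program): suppose the "greedy" move is nominally +3.2 pawns with an evaluation error of about +/-2 pawns, while the "quiet" move is nominally +2.8 with an error of about +/-0.5, and we crudely count any true value above +1 as a converted win. Then the quiet move is the better bet, even though plain minimaxing of the nominal values prefers the greedy one.

import random

def win_chance(nominal, error_sigma, threshold=1.0, trials=100_000):
    # Treat the "true" value as nominal + Gaussian evaluation noise and count
    # how often it stays above a crude "still winning" threshold of +1 pawn.
    wins = sum(1 for _ in range(trials)
               if nominal + random.gauss(0.0, error_sigma) > threshold)
    return wins / trials

greedy = win_chance(3.2, 2.0)   # higher nominal value, wide error range
quiet  = win_chance(2.8, 0.5)   # lower nominal value, narrow error range
print(f"greedy move: {greedy:.3f}   quiet move: {quiet:.3f}")
# Typically prints something like: greedy move: 0.864   quiet move: 1.000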