Computer Chess Club Archives




Subject: Re: The Limits of Positional Knowledge

Author: Bella Freud

Date: 13:38:41 11/18/99


On November 17, 1999 at 20:56:51, Ratko V Tomic wrote:

>>>demonstration not only of the existence of some phenomena (or mechanisms or
>>>causes) which act negatively on the strength of the program with increasing
>>>depth, but also that the magnitude of their effects may be large enough to
>>>overcome, and not infrequently so, the effects of all the positive
>>>phenomena of the increased depth. Otherwise, if the net effect of
>>>the increased depth, when
>all pluses and minuses are tallied, is exclusively positive, the D ply program
>would never win against the D+1 ply program.
>>It is entirely possible for a program searching D ply deep to win against a
>>program that searches D+1 ply, and it has nothing to do with an unexplained
>>phenomenon that acts negatively on the program's strength with
>>increasing search depth.
>I didn't call them "unexplained" phenomena. They were merely the phenomena
>unacknowledged to even exist by Amir, i.e. the above was pointing out the
>obvious, that the D+1 ply search can "worsen" the choice relative to the D ply
>search (whether D-ply opponent can take advantage of it is another matter; in
>enough cases it can make the right choice for the wrong reason to win 20% of
>>It is simply a consequence of the fact that
>>the programs are using heuristic based evaluation functions to approximate
>>the true merit of the terminal nodes, and these evaluations are
>>sometimes incorrect. For example, a D+1 ply search sees a way to win a
>>pawn and grabs it. The program searching D ply deep didn't realize that
>>it was losing the pawn because it didn't see deep
>>enough. However, the pawn happens to be poisoned and taking it loses by force,
>>but neither program realizes this at the time. When the D+1 deep
>>searcher realizes this a move or two later, it is already too late to do anything
>>about it.
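To make the quoted poisoned-pawn scenario concrete, here is a toy sketch (the numbers and the forced line are entirely my own invention, not from the post). Down the "grab" line every reply is forced, so the backed-up minimax score at depth d reduces to the static evaluation at the horizon: the won pawn becomes visible at 3 plies, the refutation only at 5.

```python
# Toy model of the poisoned-pawn scenario (assumed numbers, purely
# illustrative).  All scores are in pawns, from the root player's view.

# Static eval after 0..5 plies down the forced "grab" line: the won pawn
# shows up at ply 3 (+1), the losing refutation only at ply 5 (-9).
GRAB_LINE_EVALS = [0, 0, 0, 1, 1, -9]
QUIET_SCORE = 0  # the quiet alternative holds the balance at every depth

def backed_up_score(depth):
    """Backed-up score of the 'grab' move for a `depth`-ply search.
    With every reply forced, minimax just reads the horizon evaluation."""
    return GRAB_LINE_EVALS[min(depth, len(GRAB_LINE_EVALS) - 1)]

def choose_move(depth):
    """Pick 'grab' only if it appears to score better than the quiet move."""
    return "grab" if backed_up_score(depth) > QUIET_SCORE else "quiet"

for d in range(1, 6):
    print(d, choose_move(d))  # depths 3-4 grab the pawn; depth 5 avoids it
```

The shallow searcher (depth 1-2) avoids the pawn for the "wrong" reason, the depth 3-4 searcher grabs it, and only depth 5 sees the refutation, which is exactly the inversion described in the quote.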

I think there is a counter to your argument about this.

I agree that with a sequence

a x b y c z d

that if d is a bad move whose badness only manifests itself beyond the horizon, then it
is dangerous to have backed that line and its score up to the root.

If d loses then there is a danger that a loses.

The shorter the line, the greater the danger; the longer the line, the less the
danger, since side 'a' can still deviate at its own later moves (b, c, d) in the sequence.

Put another way: the deeper the line, the smaller the chance that the position at d
matters enough to cost a the game.

Depth therefore helps a bad evaluation function play better chess, in the
sense that the search applied over the next few moves allows escapes from the
consequent trouble. Such a system is just playing obstructionist chess.
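The "longer line, less danger" intuition can be put in rough numbers. Under an idealized assumption (mine, not a claim about real chess) that side 'a' gets an independent chance p of a playable deviation at each of its k decision points before reaching the bad position d, the probability of being forced all the way into d shrinks geometrically:

```python
def p_forced_into_trap(k, p):
    """Probability that no acceptable deviation exists at any of the k
    decision points before the bad position d (independence assumed)."""
    return (1.0 - p) ** k

# Illustrative: a 30% chance of a deviation at each decision point.
for k in (1, 2, 4, 8):
    print(k, round(p_forced_into_trap(k, 0.3), 4))
```

The real chances are of course neither independent nor constant, but the direction of the effect matches the argument above: deeper lines offer more exits.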


>The two ways of describing the situation are not mutually exclusive. Namely, one
>can in any discussion on chess strategy shift the perspective, as you did above,
>and speak from the position of omniscience, claiming that, since the chess tree
>is finite, every node always has a definite value 1, 1/2 or 0, and any choice
>switches it among those 3 values, and that's all there is to it, and all the
>other discussion on, say, center control, king safety, pawn and square
>weaknesses, bishop pairs, open lines, initiative, mobility, or any other
>positive and negative forces, etc is meaningless. Well, while such god-like
>perspective might be error-proof, it is an entirely sterile approach for any
>sub-godlike creature, humans or computers.
>In the same vein, one could in principle dismiss any discussion of, say, social
>or political issues, by shifting perspective to the atomic structure which any
>social creature does indeed have, and claim that it's all different arrangements
>of atoms, and there is nothing else to it. Well, it may be so (from that angle),
>but nothing useful will come out of it when discussing how to set up a tax
>system, organize parliament, etc.
>So, in our discussion, you can say that all that happened was that D+1 program
>had stepped into node with value 0 (from its perspective), which neither program
>evaluated as such at the time, and D+1 program lost.
>But the same occurrence can be viewed as different types of phenomena (or forces
>or causes) acting in the program's algorithm. Namely, when certain parameters in
>the algorithm (e.g. the D parameter) are changed, that causes certain effects on
>the quality of move choice. Some terms in the final evaluation clearly benefit
>from the D->D+1 change (such as detection of early termination nodes), i.e. they
>do produce better choice with D+1 than with D. I called such terms (or causes or
>phenomena), positive phenomena. Since the D->D+1 change does lead in some nonzero
>percentage of cases to a worse choice, one has to conclude that (within
>this perspective) there are other causes or forces which are negative. One can
>then look at which evaluation terms behaved worst in such cases
>and try some depth-dependent tune-up.
>In fact, if you check back this thread, you'll see Bob explaining his tune-ups
>for the increased depth. He was saying how at larger depths the problematic
>situation was how to combine multiple evaluation terms, and the simple addition,
>which may work at shallower depths doesn't work, unless re-tuned for the new
>region of depths. This is an example of, what I called earlier, a "negative
>phenomenon". The basic cause here, in terms of the evaluation functions, is that
>a more accurate evaluation function is non-linear, so that a linear
>approximation (which adds up different positional contributions) breaks down
>once the positions being evaluated and compared (at the root, ultimately)
>diverge enough from each other (as it happens with increasing depth). The only
>reasonably linear terms are the material values (which is how they got
>discovered and used in human play from early on), so it is in most cases
>reasonable to add up material. But adding up other positional terms, found to
>work at shallower depths, when searching deeper will mislead the program. So, the
>nonlinearity (or non-additivity) of the evaluation function, in the linear
>approximation (as it is normally used in chess programs) results in negative
>"force," worsening the move choice with increased depth.
>>Again, I'm not convinced that there is such a phenomenon in chess, on the
>>contrary all the evidence I've seen suggests it isn't. Pearl did demonstrate such
>>pathology in his artificial game. Like I pointed out in an earlier email, when
>>backing up erroneous terminal evaluations using minimax operators the
>>probability of an incorrect move decision at the root may indeed increase with
>>additional search depth. Whether it increases or decreases is simply a function
>>of the branching factor of the tree and the ratio of correct vs. incorrect
>>terminal evaluations.  In chess, I've seen no evidence that such pathology is
>There is no exact result of either kind above for chess, those are all results
>for simplified game trees with some additional assumptions (e.g. about how
>values of parent and child nodes are related, in general), which amount to
>little beyond putting in by hand (via such assumptions) what one wishes to
>obtain at the output of such derivation.
>As to chess, the nonlinearity phenomena certainly produce negative effects as
>depth increases, that's the effect Bob was describing.
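The disagreement here — whether backing up noisy evaluations makes the root decision better or worse with depth — can at least be probed empirically on exactly the kind of simplified tree both sides mention. Below is a small Monte Carlo sketch (my own construction; the uniform win/loss tree with independent leaf values is the standard Pearl/Nau-style model, with all the artificial assumptions Tomic objects to): the "static evaluation" reports the true game value of a frontier node but flips it with probability e, and we measure how often the backed-up root value is wrong as the search depth grows.

```python
import random

def build_tree(depth, b, p_win):
    """Random uniform game tree, negamax convention: value 1 means the side
    to move at that node wins.  levels[d] holds all b**d node values at
    depth d; leaves are independent wins with probability p_win."""
    levels = [None] * (depth + 1)
    levels[depth] = [1 if random.random() < p_win else 0
                     for _ in range(b ** depth)]
    for d in range(depth - 1, -1, -1):
        kids = levels[d + 1]
        # the side to move wins iff some child is a loss for the opponent
        levels[d] = [1 if any(kids[i * b + j] == 0 for j in range(b)) else 0
                     for i in range(b ** d)]
    return levels

def searched_root_value(levels, d, b, e):
    """Back up noisy frontier evaluations from depth d to the root.
    Each frontier value is the true value, flipped with probability e."""
    vals = [(1 - v) if random.random() < e else v for v in levels[d]]
    for k in range(d - 1, -1, -1):
        vals = [1 if any(vals[i * b + j] == 0 for j in range(b)) else 0
                for i in range(b ** k)]
    return vals[0]

def root_error_rates(max_depth=10, b=2, p_win=0.5, e=0.15, trials=300):
    """Fraction of trials in which the backed-up root value is wrong,
    for each search depth 1..max_depth."""
    wrong = [0] * (max_depth + 1)
    for _ in range(trials):
        levels = build_tree(max_depth, b, p_win)
        for d in range(1, max_depth + 1):
            if searched_root_value(levels, d, b, e) != levels[0][0]:
                wrong[d] += 1
    return [w / trials for w in wrong[1:]]

random.seed(42)
for d, r in enumerate(root_error_rates(), start=1):
    print(f"depth {d:2d}: root value wrong {r:.0%} of the time")
```

Whether the curve rises or falls depends on the branching factor, the leaf win probability, and the error rate e, which is precisely the point at issue in the quoted exchange: the model exhibits or avoids pathology by construction, not by anything chess-specific.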
>>>doubt that following a branching path with some nonzero chance of picking a
>>>wrong (relative to the exact knowledge) branch at each branching point,
>>> results in the increase of chance of picking the wrong branch somewhere
>>>as the number of branching points (the depth) increases.
>>I can understand your argument if you are talking about the game as a whole.
>>There each move decision is made independently of each other and there is a
>>nonzero chance of picking a wrong move each time.  So, for example,
>>when playing against a perfect opponent the probability of making a
>>critical error along the way increases the longer (and narrower) the
>>game-path is you have to take to ensure the optimal result. But what I
>>cannot see is how this applies to the alpha-beta search.
>> I cannot possibly see
>> how increasing the length of the paths will *necessarily* introduce
>> additional chance of an error (as I think you are suggesting). Like
>>mentioned above, it can, but will not necessarily do so. If I'm missing
>>something, please explain.
>It depends how accurate various evaluation terms at the leaf nodes are, and how
>well they are approximated by a sum when their amplitude grows. As Bob pointed
>out, the evaluation function terms A and B which worked well at lower depths,
>when you compared them to each other there, can't be safely added together at
>higher depths, when they both grow apart (or away from their root node values).
>Material value is one general exception, which does work reasonably well across
>depths (provided one computes it in a quiescent position). But other positional
>terms are not as well behaved when combined (compared or added) with others.
>The increase of depth will not *necessarily* make the _net_ evaluation worse. We
>know of an obvious mechanism by which it can make the evaluation better (i.e. the better
>detection of short termination nodes or otherwise [materially] decisive nodes).
>In that case error propagation has no effect on move choice, since the node is known
>either with perfect accuracy (a checkmate) or with high degree of certainty (a
>decisive material gain), which removes the error region overlaps (which may be
>due to other, positional, terms). It is the overlapped error regions which allow
>the wrong move choice to be taken as the "best" from a given node.
>But some other evaluation terms may not improve (with increased depth) by much,
>or not at all, and may even worsen the evaluation. The game result is the net
>outcome of all such positive and negative aspects, combined over many moves.
>Thus, the mere game result is not enough to separate what was helped and what
>not by the depth. Within the typical depths the programs are using at present,
>the net outcome is overall improvement with depth. But to find out how each
>component of the evaluation process behaved, one would need to peel off, as it
>were, those effects which are clear, specifically the obvious D+1 tactical
>advantage over D, discard those games won on that account, and look at the
>remaining ones (which would obviously tilt the previous score toward the D-ply
>program). It may well be that the anticipated overall plateauing of strength
>with depth would show first quite clearly on such subset of games.
>One more detail from above:
>> But what I
>>cannot see is how this applies to the alpha-beta search.  The values
>>that are backed up *only* depend on the evaluation of the terminal
>>nodes (on the so called minimal- or critical-tree).
>It is true that the numbers that are backed up depend only on the numbers
>computed at the leaf nodes. But which leaf node will finally win-out as the one
>whose value is assigned to the root (and thus determining the move choice),
>depends on all other nodes as well, on evaluation errors and overlapped error
>intervals, playing a role whenever any two "fuzzy" values are compared. The leaf
>value propagates up not because of its absolute numeric worth but because of its
>relative worth to the other similar values coming from other competing leaf nodes.
>At each level of such up-propagation it wins based on comparisons with other
>inexact values (other than for early termination cases, which I excluded already
>from error propagation effects). The ultimately victorious leaf node, accepted
>as the end point of the PV, has had as many of such "fuzzy" victories as there
>were levels in the tree. That's exactly the random walk model I was using
>earlier. While the number you get at the root is exactly the same as its
>originating leaf node evaluation, no matter how many levels you go, the chance
>of such leaf node being the truly best (and thus its implied root move choice
>being the best) may decrease with depth (depending on degree of error overlaps
>in its matchups during the up-propagation).
>If the victorious leaf evaluation was sharp enough to separate well the
>victorious leaf node from the competitors (in such up-propagation), such as
>checkmate or a decisive material gain, there is no problem with depth, the only
>surviving effect is the positive one (the discovery of such short termination
>node, which could only be found at sufficient depth). But when there is no
>sharp gain (and there certainly are such positions), and the decision comes down
>to dozens (or hundreds, as in DB) of positional terms, which have nonlinear
>depth effects, with heavily overlapped error intervals, the outcome may be a
>tossup (of random walk type). In any given position, the two effects, the error
>spread (with depth) from the overlapped error intervals and the increase in
>probability of finding a decisive short termination node, compete. The first one
>is negative, the second one positive. It is an open question whether the second
>effect will remain ahead (and by how much) as the depths increase.
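Tomic's "as many fuzzy victories as there were levels" picture can be caricatured numerically (a deliberately crude model of mine, not his): give the truly best leaf a fixed true-score advantage `gap`, add Gaussian noise of width `sigma` to every evaluation, and require the best leaf's single noisy score (the same number is what propagates up) to beat an independent noisy rival at each of the k levels of up-propagation.

```python
import random

def p_best_survives(k, gap, sigma, trials=20000):
    """Probability that the truly best leaf wins all k fuzzy comparisons.
    Its noisy score is drawn once and reused, since the same backed-up
    number is compared at every level; each level supplies a fresh noisy
    rival whose true score is 0 (i.e. worse by `gap`)."""
    wins = 0
    for _ in range(trials):
        score = gap + random.gauss(0.0, sigma)
        if all(score > random.gauss(0.0, sigma) for _ in range(k)):
            wins += 1
    return wins / trials

random.seed(7)
for k in (1, 2, 4, 8, 16):
    print(f"{k:2d} levels: best leaf survives "
          f"{p_best_survives(k, gap=0.1, sigma=0.5):.0%} of the time")
```

With sigma = 0 (a "sharp" evaluation such as a forced mate or decisive material gain) the survival probability is 1 at any depth, which matches the carve-out made above for early-termination nodes; with heavily overlapped error intervals (sigma much larger than gap) the survival probability decays with the number of levels.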


Last modified: Thu, 07 Jul 11 08:48:38 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.