Computer Chess Club Archives



Subject: Re: The Limits of Positional Knowledge

Author: Ratko V Tomic

Date: 09:53:35 11/11/99



> I believe that the deeper you go, the more accurate your 'scores' have
> to be (by scores I mean weights for each positional thing you recognize).

The reason for this is the error propagation effect: as you propagate the
uncertain scores up the tree, the uncertainty of the backed-up score increases
(unless the leaf uncertainty was 0, such as when a checkmate was found, in which
case the error remains 0). Although there are artificial "pathological" games
where the error grows exponentially with depth (Judea Pearl's book "Heuristics"
has a chapter on this error propagation and gives such examples), in chess and
other "normal" strategy games the error band, as it propagates, behaves like a
random walk, spreading as sqrt(Depth).
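To make that picture concrete, here is a small Monte Carlo sketch in Python
(my own illustration, not from any engine; the branching factor, noise level
and random tree model are made-up assumptions). It builds a random negamax
tree, adds Gaussian noise to the leaf evaluations, and measures the RMS error
of the backed-up root score at each depth:

    import random
    import statistics

    BRANCH = 4     # assumed branching factor
    NOISE = 0.05   # assumed std-dev of the leaf evaluation error
    TRIALS = 200   # Monte Carlo trials per depth

    def true_and_noisy(depth):
        """Return (true negamax value, negamax value backed up from noisy leaves)."""
        if depth == 0:
            true_val = random.uniform(-1.0, 1.0)
            return true_val, true_val + random.gauss(0.0, NOISE)
        true_children, noisy_children = zip(
            *(true_and_noisy(depth - 1) for _ in range(BRANCH))
        )
        # Negamax: the side to move takes the maximum of the negated child values.
        return max(-t for t in true_children), max(-n for n in noisy_children)

    for depth in range(1, 8):
        errors = [abs(n - t)
                  for t, n in (true_and_noisy(depth) for _ in range(TRIALS))]
        rms = statistics.mean(e * e for e in errors) ** 0.5
        print(f"depth {depth}: RMS root error {rms:.4f}")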

Namely, when you have several uncertain values v1, v2,... from the child nodes
and pick one as the "best," then, due to the overlap of error intervals among
moves of similar quality, there is (depending on the amount of overlap) a
non-vanishing chance that you will pick the wrong move, whose true value is
actually worse than that of the one you ignored. In the worst case, if this
happens several times up the tree, you can systematically (linearly with depth)
drift away from the true best move (and the true best value). On average,
though, you are performing a random walk within these overlapping error
intervals as you cross the depth levels, which will sometimes make a worsening
choice, sometimes an improving one (the right move for the wrong reason). The
general outcome of such a random walk is a slowly expanding error, growing like
sqrt(#Steps), where the "steps" here are the depth levels of the tree.

This error propagation is the underlying reason why searchers with imperfect
evaluation, as they increase the depth, eventually hit the region of
diminishing returns. (Of course, if you had a perfect evaluation, you wouldn't
need to search a tree; hence all searchers do have an imperfect evaluation.)
Another way to see these diminishing returns, as suggested in our earlier
discussion, is to realize that the proportion of non-junk nodes evaluated drops
to 0 exponentially with depth (since even the otherwise good moves which follow
a junk move create junk variations and evaluations in the entire subtree below
the junk move, while to get a non-junk variation all moves from the root to the
leaf have to be non-junk moves). Hence, as depth increases, even the smallest
leaf-evaluation uncertainty will be more likely to cause a miss of the sparser
and sparser sparks of non-junk in the exponentially growing dark sea of junk.
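As a back-of-the-envelope illustration of that last point (the fraction p_good
below is an assumed number, not a measured one): if a fraction p_good of the
moves in a typical position is non-junk, a variation of `depth` plies is
entirely non-junk with probability p_good^depth, which dies off exponentially:

    # Assumed fraction of non-junk moves per position; purely illustrative.
    p_good = 0.3

    for depth in (4, 8, 12, 16):
        frac = p_good ** depth   # every ply of the variation must be non-junk
        print(f"depth {depth:2d}: fraction of fully non-junk variations ~ {frac:.2e}")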



