Computer Chess Club Archives


Subject: Re: The Limits of Positional Knowledge

Author: Robert Hyatt

Date: 08:47:21 11/12/99


On November 11, 1999 at 12:53:35, Ratko V Tomic wrote:

>> I believe that the deeper you go, the more accurate your 'scores' have
>> to be (by scores I mean weights for each positional thing you recognize).
>
>The reason for this is the error-propagation effect: as you propagate the
>uncertain scores up the tree, the uncertainty of the backed-up score
>increases (unless the leaf uncertainty was 0, such as when a checkmate was
>found, in which case the error remains 0).  Although there are artificial
>"pathological" games where the error grows exponentially with depth (Judea
>Pearl's book "Heuristics" has a chapter on this error propagation and gives
>such examples), in chess, or other "normal" strategy games, the error band,
>as it propagates, behaves like a random walk, spreading as sqrt(Depth).
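
That spreading is easy to watch in a toy experiment.  Below is a minimal
Monte Carlo sketch of the effect, not anyone's real code; the branching
factor of 4, the +/-0.10 uniform leaf noise, and the random leaf values are
all assumptions made purely for illustration.  It backs both the exact and
the noise-corrupted leaf scores up the same tree and prints how far the
score the program believes drifts from the exact value of the line it chose:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define BRANCH 4           /* assumed uniform branching factor  */
#define NOISE  0.10        /* assumed +/- leaf evaluation error */
#define TRIALS 1000

typedef struct { double exact, noisy; } Pair;

static double frand(double lo, double hi) {
    return lo + (hi - lo) * (rand() / (double)RAND_MAX);
}

/* Back up exact and noise-corrupted leaf values through the same
   tree; the search can only maximize the score it actually sees. */
static Pair negamax(int depth) {
    if (depth == 0) {
        double v = frand(-1.0, 1.0);                /* "true" leaf score */
        Pair leaf = { v, v + frand(-NOISE, NOISE) };
        return leaf;
    }
    Pair best = { 0.0, -1e9 };
    for (int i = 0; i < BRANCH; i++) {
        Pair child = negamax(depth - 1);
        double ce = -child.exact, cn = -child.noisy;
        if (cn > best.noisy) {      /* pick by the (noisy) seen score */
            best.noisy = cn;
            best.exact = ce;        /* exact value of the chosen line */
        }
    }
    return best;
}

int main(void) {
    srand(12345);
    for (int d = 1; d <= 8; d++) {
        double sumsq = 0.0;
        for (int t = 0; t < TRIALS; t++) {
            Pair r = negamax(d);
            double e = r.noisy - r.exact;   /* believed minus actual */
            sumsq += e * e;
        }
        printf("depth %d: rms root error %.4f\n", d, sqrt(sumsq / TRIALS));
    }
    return 0;
}

If the random-walk picture is right, the printed rms error should creep up
slowly with depth, roughly like sqrt(Depth), rather than blowing up
exponentially the way Pearl's pathological games do.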

There is another issue.  If you search to shallow depths, you only have to
choose among A, B, and C.  One positional 'asset' may be all you can get.

If you search deeper, you have to decide whether A+B is better or worse than
C, whether A+C is better or worse than B, and so on.  I.e., at shallow
depths, just recognizing the key positional features is enough.  At deeper
depths you have to be able to differentiate between the various combinations
of those features, which makes it a good bit harder (the toy sketch below
makes this concrete).
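
To make the combination problem concrete, here is a toy evaluation fragment.
The feature names and centipawn values are hypothetical, made up for this
example only, not taken from any real program:

/* Hypothetical feature weights, in centipawns (illustration only). */
static const int bishop_pair = 30;   /* feature A */
static const int rook_on_7th = 25;   /* feature B */
static const int passed_pawn = 40;   /* feature C */

int eval_positional(int has_pair, int has_rook7, int has_passer) {
    int score = 0;
    if (has_pair)   score += bishop_pair;
    if (has_rook7)  score += rook_on_7th;
    if (has_passer) score += passed_pawn;
    /* A shallow search mostly compares single assets: 30 vs 25 vs 40.
       A deep search reaches positions holding combinations, so the sums
       must be ordered correctly too: 30+25 vs 40, 25+40 vs 30, etc.   */
    return score;
}

Getting three individual weights ordered right is easy; getting every subset
sum ordered right is what gets harder as the search gets deeper.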

I.e., an important point (to me) is that a program tuned on hardware that
searches to depth=D is probably _not_ optimally tuned to run on hardware that
can search to depth=D+n, where n=1,2,...  I ran into this many times working
on Cray Blitz, where we tested/tuned on a VAX and found that the tuning was
wrong when we moved to the Cray for competition.




>
>Namely, when you have several uncertain values v1, v2, ... from the child
>nodes and pick one as the "best," then, due to the overlap of error
>intervals among moves of similar quality, there is (depending on the amount
>of overlap) a non-vanishing chance that you will pick the wrong move, one
>whose true value is actually worse than that of the move you ignored.  In
>the worst case, if this happens several times up the tree, you can
>systematically (linearly with depth) drift away from the true best move
>(and the true best value).  On average, though, you are performing a random
>walk within these overlapping error intervals as you cross the depth
>levels, sometimes making a worsening choice, sometimes an improving one
>(the right move for the wrong reason).  The general outcome of such a
>random walk is a slowly expanding error, growing like sqrt(#Steps), where
>the "steps" here are the depth levels of the tree.
>
>This error propagation is the underlying reason why searchers with
>imperfect evaluation eventually hit the region of diminishing returns as
>they increase the depth.  (Of course, if you had a perfect evaluation, you
>wouldn't need to search a tree; hence all searchers do have an imperfect
>evaluation.)  Another way to see these diminishing returns, as suggested in
>our earlier discussion, is to realize that the proportion of non-junk nodes
>evaluated drops to 0 exponentially with depth (since even otherwise good
>moves which follow a junk move create junk variations and evaluations in
>the entire subtree below it, while to get a non-junk variation all moves
>from the root to the leaf have to be non-junk moves).  Hence, as depth
>increases, even the smallest leaf-node evaluation uncertainty becomes more
>likely to miss the sparser and sparser sparks of non-junk in the
>exponentially growing dark sea of junk.
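
Here is a quick sketch of the wrong-pick chance described in the overlap
paragraph above, for just two moves.  The 0.05 gap in true value, the
+/-0.10 uniform evaluation noise, and the million trials are all assumed
numbers, chosen only to illustrate the point:

#include <stdio.h>
#include <stdlib.h>

static double frand(double lo, double hi) {
    return lo + (hi - lo) * (rand() / (double)RAND_MAX);
}

int main(void) {
    const double gap = 0.05;      /* true margin of the better move     */
    const double err = 0.10;      /* +/- half-width of evaluation error */
    const int trials = 1000000;
    int wrong = 0;
    for (int t = 0; t < trials; t++) {
        double seen_best  = gap + frand(-err, err);  /* truly better move */
        double seen_other = 0.0 + frand(-err, err);  /* truly worse move  */
        if (seen_other > seen_best)
            wrong++;                    /* the inferior move looked best */
    }
    printf("P(wrong pick) = %.3f\n", (double)wrong / trials);
    return 0;
}

With these numbers the difference of the two noise terms is triangular on
(-0.2, 0.2), so the inferior move should win about 28% of the comparisons.
Repeat a comparison like that at every level of the tree and you get exactly
the random walk described above.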
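
And the "sparks of non-junk" point is plain exponential decay.  If q is the
fraction of non-junk moves at each node (the q=0.90 below is an assumed
number, purely for illustration), then a whole d-ply line is non-junk with
probability q^d:

#include <stdio.h>
#include <math.h>

int main(void) {
    const double q = 0.90;  /* assumed fraction of non-junk moves per ply */
    for (int d = 4; d <= 40; d += 4)
        printf("depth %2d: non-junk lines = %5.1f%%\n", d, 100.0 * pow(q, d));
    return 0;
}

Even with 90% of moves being reasonable at every node, only about 1.5% of
40-ply lines avoid junk entirely, which is the "dark sea" described above.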


