Computer Chess Club Archives




Subject: Re: The Limits of Positional Knowledge

Author: Yngvi Bjornsson

Date: 10:23:07 11/12/99


On November 11, 1999 at 12:53:35, Ratko V Tomic wrote:

>> I believe that the deeper you go, the more accurate your 'scores' have
>> to be (by scores I mean weights for each positional thing you recognize).
>The reason for this is the error propagation effect, which as you propagate the
>uncertain scores up the tree, increases the uncertainty of the backed up score
>(unless the leaf uncertainty was 0, such as if checkmate was found, in which
>case the error remains 0).

I don't think this is correct. When backing up minimax values, the error
*either* reduces or increases, depending on the ratio of incorrectly evaluated
leaf nodes (see, e.g., several papers by Nau in AI).
In fact, there is some evidence that in chess programs minimax-based
searchers (like alpha-beta) do indeed reduce the backed-up error with
increasing search depth. There was an article dealing with this issue published
in AI recently (about two years back). Unfortunately, I do not have the exact
reference here, but the title was something like "Benefits of using multi-valued
evaluation functions in minimax" by H. Horacheck. If I remember correctly, the
(over-simplified) main conclusion was that the deeper the program goes, the
higher the ratio of terminal nodes that are evaluated "correctly" (because so many
of the positions become lop-sided), and thus the propagated error does indeed decrease.

Although these experiments were mainly based on simulated game trees (there
were also some results from an actual chess program), they still give some
useful insight into the behaviour of minimax. In my opinion, the true pros and cons
of minimaxing in chess have not been fully explained yet.



Last modified: Thu, 07 Jul 11 08:48:38 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.