Computer Chess Club Archives


Subject: Re: late move reductions

Author: Arek Paterek

Date: 15:31:01 03/02/06

On March 01, 2006 at 14:52:06, Robert Hyatt wrote:
>2.  I then decided to try what appeared to be the direct and obvious approach,
>namely measuring how often a move fails high during the search.  What I decided
>to do was to extend the fh% I have been calculating, by doing the following:  I
>have two counters, one the number of times this particular move has failed high,
>and the other the number of times this move has been searched and failed high,
>plus the number of times this move was searched and did not fail high but
>another did at the same ply.  I increment the FH counter by 100 on each fail
>high, and increment the other counter by 1.  If I have searched other moves
>prior to this fail high, I cycle thru and increment their tried counter but not
>their fail high counter since they didn't fail high.  The reasoning behind
>incrementing by 100 is this gives me a simple percentage when I divide #fh by
>#tries, since each fh is incremented by 100.  For example, I try the move 100
>times at a ply where a fail-high occurs.  60 times this move failed high, 40
>times it did not but another move did.  The fh counter becomes 6000 (60 * 100)
>while the tried counter is 100 total.  6000 / 100 = 60, the percent of the time
>that when a fail high happened, this was the move that produced it.
>
>To go along with that, I allow the operator to enter a fail-high percentage to
>use as a cutoff, for example 60.  When making the decision "to reduce, or not
>to reduce, that is the question...", if the fh % is < 60% I reduce; if it is >=
>60% I do not.  I am currently running a test to see what the optimal number is
>here, but that requires a bunch of test suite runs plus a lot of games.  So the
>jury is out.  I've started at 50% and tried numbers up to 100%, but that level
>is obviously beyond risky.  So far, 50-75% have looked decent, but I am now
>trying to quantify the range as 50, 60, 70, 80, 90 first, then I will refine it
>once I find the best upper/lower bound out of that group.
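
(Just to restate the bookkeeping described above in code form -- a rough
sketch in Python for illustration only, with invented names, certainly not
the actual engine code:)

  # One pair of counters per history entry, as described in the quote.
  fh_counter    = {}   # move -> 100 * number of times it failed high
  tried_counter = {}   # move -> number of times it was searched at a ply
                       #         where some move (it or another) failed high

  def record_fail_high(cutoff_move, moves_searched_before_it):
      # The move that produced the cutoff gets credit in both counters ...
      fh_counter[cutoff_move]    = fh_counter.get(cutoff_move, 0) + 100
      tried_counter[cutoff_move] = tried_counter.get(cutoff_move, 0) + 1
      # ... the moves searched earlier at this ply only get a "try".
      for m in moves_searched_before_it:
          tried_counter[m] = tried_counter.get(m, 0) + 1

  def fail_high_percent(move):
      # Each fail high added 100, so integer division already yields a
      # percentage, e.g. 6000 / 100 = 60 (%).
      return fh_counter.get(move, 0) // max(1, tried_counter.get(move, 0))

  def reduce_ok(move, cutoff_percent=60):
      # Reduce only the moves that rarely produce the cutoff themselves.
      return fail_high_percent(move) < cutoff_percent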

Using #fh/#tries as the only criterion for when to reduce seems risky
if #tries is low. Did you consider using Bayesian statistics here?
I mean, we could assume that for each entry in the history table there is
one unknown parameter Theta, the value that fh% = #fh/#tries would converge
to after an infinite number of tries. If we choose a uniform prior for Theta,
then P(Theta | #fh, #tries) = Beta(1 + #fh, 1 + #tries - #fh), where Beta(a,b)
is the Beta distribution. So, given #tries, we could compute the highest #fh
that still gives a desired confidence c% (say we want to be 95% sure) that
Theta is less than a fixed proportion p% (say p% = 60%).
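
A sketch of that test (in Python, using scipy's Beta CDF just for
illustration; the function names are made up, and p = 60%, c = 95% are the
example values from above):

  from scipy.stats import beta

  def prob_theta_below(p, fh, tries):
      # With a uniform prior, the posterior of Theta after #fh fail highs
      # in #tries tries is Beta(1 + fh, 1 + tries - fh), so
      # P(Theta <= p | fh, tries) is just the Beta CDF evaluated at p.
      return beta.cdf(p, 1 + fh, 1 + tries - fh)

  def reduce_ok_bayes(fh, tries, p=0.60, c=0.95):
      # Reduce only when we are at least c sure that the long-run
      # fail-high rate of this move is below p.
      return prob_theta_below(p, fh, tries) >= c

The two examples below come straight out of this rule.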

Example 1: with p% = 60%, c% = 95%, #fh = 0, #tries = 2, we get #fh/#tries = 0%,
but P(Theta <= p% | #fh=0, #tries=2) ~= 0.936 < c%, so we do not reduce.

Example 2: with p% = 60%, c% = 95%, #fh = 565, #tries = 1000, we get
#fh/#tries = 56.5%, and P(Theta <= p% | #fh=565, #tries=1000) ~= 0.99 > c%,
so we do reduce.
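
Since #fh and #tries are small integers, the test does not have to be done
during the search: for each #tries one can precompute the largest #fh that
still allows a reduction, so the decision becomes a table lookup. A sketch
(reusing prob_theta_below from the previous sketch; names again made up):

  def build_threshold_table(max_tries, p=0.60, c=0.95):
      # table[tries] = largest #fh for which we would still reduce,
      # or -1 if no #fh gives enough confidence at that #tries
      # (e.g. #tries = 2, as in Example 1).
      table = []
      for tries in range(max_tries + 1):
          best = -1
          for fh in range(tries + 1):
              if prob_theta_below(p, fh, tries) >= c:
                  best = fh
              else:
                  break   # P(Theta <= p) only drops as #fh grows
          table.append(best)
      return table

  # during the search:  reduce if  (#fh of the move) <= table[#tries of the move]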

I am curious about opinions on this approach.


