Computer Chess Club Archives



Subject: late move reductions

Author: Robert Hyatt

Date: 11:52:06 03/01/06


Thought I would post a more detailed follow-up to what I have been playing with,
to prompt further discussion and maybe even additional ideas.

1.  I started off trying to use the raw history scores, but they were not
useful, for a simple reason: they continually grow throughout the search.
Prior to CCT I had noticed this and even experimented with "aging" the values
by dividing them all by 2 at some interval.  But regardless, the numbers get
very large, and chopping them back suddenly increases the reduction level,
whereas longer searches would decrease it.  I punted on this pretty quickly and
left the normal history mechanism in place solely for move ordering.
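The aging looked something like this (a rough sketch with made-up names, not
the actual Crafty code):

  #include <stdint.h>

  #define HISTORY_SIZE 4096            /* e.g. indexed by from/to squares */

  static uint32_t history[HISTORY_SIZE];

  /* halve every history counter at some interval so the values stay
     bounded -- the sudden drop is what made reduction decisions based
     on these numbers jump around */
  void AgeHistory(void) {
    int i;
    for (i = 0; i < HISTORY_SIZE; i++)
      history[i] >>= 1;                /* divide each counter by 2 */
  }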

2.  I then decided to try what appeared to be the direct and obvious approach,
namely measuring how often a move fails high during the search.  What I decided
to do was to extend the fh% I have been calculating, as follows:  I keep two
counters per move.  One is the number of times this particular move has failed
high.  The other is the number of times this move has been searched at a ply
where a fail high occurred, whether this move produced the cutoff or another
move did.  I increment the FH counter by 100 on each fail high, and increment
the tried counter by 1.  If I have searched other moves prior to this fail
high, I cycle through them and increment their tried counters but not their
fail-high counters, since they didn't fail high.  The reasoning behind
incrementing by 100 is that it gives me a simple percentage when I divide #fh
by #tries.  For example, suppose I try the move 100 times at a ply where a
fail high occurs.  60 times this move failed high, 40 times it did not but
another move did.  The fh counter becomes 6000 (60 * 100) while the tried
counter is 100 total.  6000 / 100 = 60, the percent of the time that when a
fail high happened, this was the move that produced it.
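Here is a quick sketch of that bookkeeping (the names fh_count, tried_count
and MoveIndex are made up for this post, not the actual code; counters are
indexed per move by from/to square):

  #include <stdint.h>

  #define MOVE_INDEX_SIZE 4096    /* 64 from-squares * 64 to-squares */

  static uint32_t fh_count[MOVE_INDEX_SIZE];    /* +100 per fail high        */
  static uint32_t tried_count[MOVE_INDEX_SIZE]; /* +1 per search at a FH ply */

  static int MoveIndex(int from, int to) { return (from << 6) | to; }

  /* moves 0..fh-1 were searched at this ply without failing high, and
     move fh produced the cutoff */
  void UpdateFailHighStats(const int *from, const int *to, int fh) {
    int i, idx;
    for (i = 0; i < fh; i++)
      tried_count[MoveIndex(from[i], to[i])]++;  /* tried, no cutoff   */
    idx = MoveIndex(from[fh], to[fh]);
    tried_count[idx]++;                          /* tried...           */
    fh_count[idx] += 100;                        /* ...and failed high */
  }

  /* since each fail high adds 100, fh_count / tried_count is already a
     percentage (0..100) */
  int FailHighPercent(int from, int to) {
    int idx = MoveIndex(from, to);
    return tried_count[idx] ? (int)(fh_count[idx] / tried_count[idx]) : 0;
  }

With the numbers from the example, fh_count ends up at 6000 and tried_count at
100, so FailHighPercent() returns 60.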

To go along with that, I allow the operator to enter a fail-high percentage to
use as a cutoff, for example 60.  When making the decision "to reduce, or not
to reduce, that is the question...", if the fh% is < 60 I reduce; if it is >=
60 I do not.  I am currently running a test to see what the optimal number is
here, but that requires a bunch of test suite runs plus a lot of games, so the
jury is still out.  I've started at 50% and tried numbers up to 100%, but that
level is obviously beyond risky.  So far, 50-75% have looked decent, but I am
now trying to quantify the range as 50, 60, 70, 80, 90 first, then I will
refine it once I find the best upper/lower bound out of that group.
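In code the decision looks something like this, using the FailHighPercent()
sketch above (fh_threshold being whatever the operator entered):

  static int fh_threshold = 60;   /* operator-entered cutoff percentage */

  /* reduce a late move only when its fail-high percentage is below the
     threshold */
  int ReduceThisMove(int from, int to) {
    return FailHighPercent(from, to) < fh_threshold;
  }

  /* then, in the search, before recursing on a late move:
       new_depth = depth - 1 - (ReduceThisMove(from, to) ? 1 : 0);    */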

My next question:  Is anyone else making the "to reduce or not to reduce"
decision based on something similar?  Or are there even better ideas waiting to
be discovered?  If you are using this (I would be surprised if someone has not
at least tried this scheme, whether they improved it further or not), have you
found any particularly good threshold values?  While I plan on arriving at mine
through my usual testing, it would be interesting to see whether my final
number is close to others' (Tord?  Vas?  Fabien?  anyone else trying this?).

I'll post my "best percentage" after the testing is done.  As always, it is
possible there are bugs left lying around, although the code is pretty simple
and all I have been changing is the decision-making part, not the actual
reduction part, which is working pretty well...  I have a few tweaks to do for
the parallel search, but those won't be horribly complicated; incrementing all
the non-fail-high moves at a split ply is just a bit harder to do...

More as I get more data or come up with additional ideas...



