Computer Chess Club Archives



Subject: Re: History pruning

Author: Robert Hyatt

Date: 19:22:55 02/27/06



On February 27, 2006 at 19:36:10, Tom Likens wrote:

>On February 27, 2006 at 13:41:57, Robert Hyatt wrote:
>
>>On February 27, 2006 at 12:46:40, Frank Phillips wrote:
>>
>>>So you do this at only (expected) cut nodes?
>>>Tord seems to imply at anything other than pvNodes.
>>>
>>>Frank
>>
>>
>>I do it at _all_ nodes.  I think Tord does as well.  The problem is it is
>>impossible to predict with high accuracy whether a node is CUT or ALL (btw, this
>>is only useful at ALL nodes, since we have to search all moves and reducing the
>>depth reduces the effort required to accomplish that).
>
>Bob/Tord,
>
>I just got to my hotel (I'm on a business trip for the next few days) and I
>see CCT8 has sparked a number of interesting threads.  Reductions are
>especially fertile ground.
>
>I'd be careful reducing at PV nodes.  I saw a significant drop in djinn's
>positional strength when I applied this at PV nodes.  At the very least
>you might want to skip it at nodes where alpha/beta == RootAlpha/RootBeta.
>Ideally, as you mentioned, you only want to apply this at ALL nodes.
>
>I've also experimented with "flipping" CUT nodes to ALL nodes if we search
>more than 'x' moves at a CUT node without a fail-high or an improvement
>in the score.  Once the flip occurs all the nodes below are toggled in the
>normal CUT -> ALL -> CUT etc., and these nodes become eligible for reduction.
>
>Also, do you allow multiple recursive reductions, or do you limit them?  I
>applied the adaptive-reduction idea a while back, with mixed results.  It's
>likely I didn't test it enough because I was in the middle of a major
>project at work and could only give it a small percentage of my attention :-(
>
>regards,
>--tom

My answers; Tord's might differ.

1.  I don't do the all/cut thing at all.  I never found that it worked very
well in Crafty, even though it was pretty good in Cray Blitz.  No idea why.  My
guess has always been that it's due to Crafty's much more aggressive null-move
search compared to what was used in CB.

2.  I allow multiple reductions.  Logically they will never happen on two
consecutive plies, but otherwise I don't restrict them at all.
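One way to make that "never on two consecutive plies" property explicit is to pass a flag down the search; in a real engine it can also simply fall out of the reduction criteria.  A minimal sketch (names and thresholds here are illustrative assumptions, not Crafty's actual code):

```c
/* Hedged sketch: decide whether a move may be reduced at this ply.
   All names and thresholds are hypothetical, not Crafty's criteria. */
#include <assert.h>

#define REDUCTION 1          /* reduce by one ply */

static int reduction_for(int move_number, int depth, int parent_reduced)
{
    if (parent_reduced)      /* previous ply was reduced: don't stack them */
        return 0;
    if (depth <= 2)          /* too shallow to reduce safely */
        return 0;
    if (move_number < 4)     /* leave the first few moves at full depth */
        return 0;
    return REDUCTION;
}
```

With this shape, reductions can recur freely deeper in the tree as long as they skip a ply in between.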

3.  I don't notice any bad effect from doing them even when the original
alpha/beta window is in effect.  Since the main issue is failing high, and
since on a fail-high I re-search at the original depth anyway, I don't see any
problems such as shorter-than-normal PVs or anything like that.  I'll test this
idea, however.  But I also do null-moves _everywhere_ and always found that
restricting that slowed the program down as well...
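The reduce-then-verify pattern described in point 3 — search the move at reduced depth first, and repeat at the original depth only when the reduced search fails high — can be sketched like this.  Here toy_search() is a stand-in for a real negamax search, all names are hypothetical, and scores are from the mover's point of view for simplicity:

```c
/* Sketch of reduce-then-verify; not any engine's actual code. */
#include <assert.h>

static int researches;  /* counts full-depth verification re-searches */

/* Toy stand-in for a real negamax search: returns a canned score.
   A real search() would recurse over the move list. */
static int toy_search(int alpha, int beta, int depth, int true_score)
{
    (void)alpha; (void)beta; (void)depth;
    return true_score;
}

static int search_move(int alpha, int beta, int depth, int reduce,
                       int true_score)
{
    /* First pass: possibly at reduced depth. */
    int score = toy_search(alpha, beta, depth - 1 - reduce, true_score);
    if (reduce && score > alpha) {
        /* Reduced search failed high: verify at the original depth. */
        researches++;
        score = toy_search(alpha, beta, depth - 1, true_score);
    }
    return score;
}
```

The point is that a reduction can never directly cause a bogus fail-high to survive: anything that beats alpha at reduced depth gets checked again at full depth.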

4.  The biggest thing I want to play with is the history values.  I am
currently "aging" the values between iterations, but the values climb faster as
the search gets deeper, which tends to make the search more conservative at
higher depths because the history threshold limits when reductions are done.
Either the history values need to be stabilized, or perhaps the reduction
threshold needs to climb along with the iteration number... or something in
between...
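The between-iteration aging mentioned in point 4 can be as simple as halving every counter, so contributions from earlier iterations decay instead of letting the values climb without bound.  Table size and names below are assumptions for illustration, not Crafty's actual layout:

```c
/* Hedged sketch of between-iteration history aging (halve everything).
   HISTORY_SIZE and the indexing scheme are hypothetical. */
#include <assert.h>

#define HISTORY_SIZE 4096    /* e.g. indexed by piece/to-square */

static unsigned history[HISTORY_SIZE];

/* Called once per iteration: right-shift decays old data by half,
   so recent iterations dominate and values stop growing unboundedly. */
static void age_history(void)
{
    for (int i = 0; i < HISTORY_SIZE; i++)
        history[i] >>= 1;
}
```

A fixed reduction threshold against counters aged this way still drifts as depth grows, which is exactly the instability described above; scaling the threshold with the iteration number is the other knob mentioned.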





Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.