Computer Chess Club Archives



Subject: Re: PHR (Peak History Reduction) idea

Author: Klaus Friedel

Date: 12:17:09 03/03/06



I use almost the same method in Snitch. The history table values are increased
constantly, and an additional value stores the current maximum.
Every time I use the history values I scale them by the current maximum. Works
quite well for me.
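
In C this looks roughly like the following (an illustrative sketch with
made-up names, not the actual Snitch code):

static unsigned int history[64][64];   /* indexed by [from][to]        */
static unsigned int history_max = 1;   /* running maximum, never zero  */

void history_good(int from, int to, int depth) {
    history[from][to] += depth * depth;        /* values only grow     */
    if (history[from][to] > history_max)
        history_max = history[from][to];       /* keep the peak current */
}

int history_scaled(int from, int to) {
    /* every read is normalized against the maximum, so the result
       stays in 0..255 no matter how large the raw values get */
    return (int)((unsigned long long)history[from][to] * 255 / history_max);
}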

Regards,

Klaus

On March 02, 2006 at 15:43:35, Harald Lüßen wrote:

>On March 01, 2006 at 23:26:05, Robert Hyatt wrote:
>
>>On March 01, 2006 at 23:12:54, Uri Blass wrote:
>>
>>>On March 01, 2006 at 22:52:40, Uri Blass wrote:
>>>
>>>>On March 01, 2006 at 22:22:09, Robert Hyatt wrote:
>>>>
>>>>>On March 01, 2006 at 21:41:45, Daniel Mehrmann wrote:
>>>>>
>>>>>>On March 01, 2006 at 14:52:06, Robert Hyatt wrote:
>>>>>>
>>>>>>>Thought I would post a more detailed follow-up to what I have been playing with,
>>>>>>>to prompt further discussion and maybe even additional ideas.
>>>>>>>
>>>>>>>1.  I started off trying to use the raw history scores, but they really were not
>>>>>>>useful for a simple reason: they continually grow throughout the search.
>>>>>>>Prior to CCT I had noticed this and even experimented with "aging" the values by
>>>>>>>dividing all by 2 at some interval.  But regardless, the numbers get very large,
>>>>>>>and chopping them back suddenly increases the reduction level, whereas longer
>>>>>>>searches would decrease the level.  I punted on this pretty quickly and left the
>>>>>>>normal history mechanism in place solely for move ordering.
>>>>>>>
>>>>>>[...]
>>>>>>
>>>>>>Hello Bob,
>>>>>>
>>>>>>nice to see you're testing history reduction :)
>>>>>>I can't agree with point one, that it's no good to work with history scores
>>>>>>because they grow during the search and decrease the possibility of reduction.
>>>>>>
>>>>>>I found an easy way to use the history scores natively for reduction without
>>>>>>needing a new table or other extra machinery. I call it "PHR" (Peak History Reduction).
>>>>>>
>>>>>>The idea is to use the already-known tables and take the easiest way possible.
>>>>>>Basically, we search for the highest history score among our current moves.
>>>>>>This is our "peak value" at this node, so we know this move produced many
>>>>>>cutoffs in the past. Other moves must reach at least half of that score to
>>>>>>_not_ be reduced; otherwise we reduce the depth.
>>>>>>
>>>>>>What are the benefits of this idea?
>>>>>>- No extra table needed
>>>>>>- If many moves are close to the peak, all of them are searched to full depth
>>>>>>- If no other move is over 50% of the peak, we reduce them all
>>>>>>- No static values that cause problems in long searches
>>>>>>
>>>>>>Code example from Homer:
>>>>>>
>>>>>>if (sgdPtr->EnableHistPrune && currentExtend == 0)  {
>>>>>>    if ((mPtr->flags & 4)) {                /* history-scored move */
>>>>>>        if (HistoryBest == DROP_MOVE) {
>>>>>>            /* first such move: its score is the peak,
>>>>>>               remember half of it as the threshold */
>>>>>>            HistoryBest = score / 2;
>>>>>>            goto PVS;
>>>>>>        } else if (reachedMoves > 3 && (score <= HistoryBest || score == 0)) {
>>>>>>            /* below half the peak: reduce the depth by one ply */
>>>>>>            NextDepth--;
>>>>>>            history_ext++;
>>>>>>            HistoryPrune = TRUE;
>>>>>>            flags |= PruneMove;
>>>>>>            goto PVS;
>>>>>>        }
>>>>>>    }
>>>>>>}
>>>>>>
>>>>>>Very simple, but it works :) I have been using this since June 2003 without any
>>>>>>problems.
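>>>>>>
>>>>>>Finding the peak itself is just one pass over the move list. A
>>>>>>hypothetical sketch (not the real Homer code; moveCount, moves[] and
>>>>>>history_score() are made-up names):
>>>>>>
>>>>>>int i, peak = 0;
>>>>>>for (i = 0; i < moveCount; i++)
>>>>>>    if (history_score(moves[i]) > peak)
>>>>>>        peak = history_score(moves[i]);
>>>>>>/* moves scoring below peak / 2 are the reduction candidates */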
>>>>>>
>>>>>>Best
>>>>>>Daniel
>>>>>
>>>>>
>>>>>OK... I tried something similar.  The problem I found was that as the history
>>>>>values built up, pruning went away.  At the speeds I saw this past weekend, the
>>>>>2-3-4 minute searches slowed down significantly.  I assume you are using some
>>>>>sort of aging process to decay the history values so that they don't rise to
>>>>>near 2^32-1 and sit there???  This weekend I was using actual history values,
>>>>>and when they reached some max point, I was aging them all by dividing by 2.  But I
>>>>>only did the aging between iterations to avoid suddenly opening the pruning
>>>>>floodgates in mid-iteration, and the longer iterations did less and less
>>>>>pruning...
>>>>>
>>>>>The current approach works the same whether a cutoff happened 10 times or 10
>>>>>million...
>>>>
>>>>I divide all the history values by 2 every time the remaining depth is more
>>>>than 8 plies.
>>>
>>>To be more precise, I do not do exactly that, but this is the idea (I divide
>>>something else, not the history values that you mean).
>>>
>>>Looking at my code, I first divide when the remaining depth is bigger than 9.
>>>I also have two counters for every move (one counting the number of fail highs
>>>and one counting the number of cases when the move did not fail high).
>>>
>>>I use the ratio between these numbers in my decision whether to prune, and I
>>>divide both of them by 2 only if the number of cases when the move was tried
>>>and failed low is bigger than 1000.
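>>>
>>>In C the two counters could look like this (an illustrative sketch, not
>>>my actual code; the names and the ratio threshold are made up):
>>>
>>>typedef struct {
>>>    unsigned int fail_high; /* cutoffs caused by this move         */
>>>    unsigned int fail_low;  /* times it was tried without a cutoff */
>>>} HistCount;
>>>
>>>static HistCount hist[64][64];
>>>
>>>int prune_candidate(int from, int to) {
>>>    HistCount *h = &hist[from][to];
>>>    /* decide on the ratio between the two counters */
>>>    int prune = (h->fail_high * 4 < h->fail_low);
>>>    /* halve both counters so old statistics fade away */
>>>    if (h->fail_low > 1000) {
>>>        h->fail_high /= 2;
>>>        h->fail_low  /= 2;
>>>    }
>>>    return prune;
>>>}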
>>>
>>>Uri
>>
>>
>>My worry is the usual worry seen with priority computations in operating systems
>>that are based on accumulated CPU time.  If you don't "age" often enough, the
>>values climb to the top and you can't tell the difference between priorities.
>>If you age too often, the values all tend toward zero, and once again you can't
>>tell the difference between the priorities.  It would seem that something very
>>similar happens if one is not _extremely_ cautious...
>>
>>I am letting the values climb until one "touches" the upper range I want to see,
>>then I divide all by 2 and start the process again, except this time they get
>>back to the same "level" 2x faster since the biggest starts off at the half-way
>>point rather than at zero...
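>>
>>In C that cap-and-halve aging might look like this (an illustrative
>>sketch; the table size and the cap are placeholders):
>>
>>#define HISTORY_CAP (1 << 20)      /* the "upper range" to touch  */
>>
>>static unsigned int history[4096]; /* e.g. indexed by from*64+to  */
>>
>>void history_hit(int index, int depth) {
>>    int i;
>>    history[index] += depth * depth;
>>    if (history[index] >= HISTORY_CAP) {
>>        /* one value touched the top: halve everything, so the
>>           biggest restarts at the half-way point, not at zero */
>>        for (i = 0; i < 4096; i++)
>>            history[i] >>= 1;
>>    }
>>}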
>
>I would like to get some comments on the method I posted a few days ago:
>
>When I read my history values I always get values between 0 (not good)
>and 256 (most successful). I do this with stored history values
>between 0 and 65536 and an additional maximum value which is always
>kept up to date. Then I can return (history * 256 / (maximum + 1)). When maximum
>gets too big, all values and the maximum are divided by a factor.
>This is not very precise, but you can easily fix the flaws and
>find good tuning values for "too big" and the factor.
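>
>In C the read/write side might look like this (a rough sketch with
>made-up names and tuning values, not my actual engine code):
>
>static unsigned int history[64][64]; /* stored values, 0..65536  */
>static unsigned int maximum = 0;     /* always kept up to date   */
>
>void history_update(int from, int to, int bonus) {
>    int f, t;
>    history[from][to] += bonus;
>    if (history[from][to] > maximum)
>        maximum = history[from][to];
>    if (maximum > 65536) {           /* "too big"                */
>        for (f = 0; f < 64; f++)
>            for (t = 0; t < 64; t++)
>                history[f][t] /= 4;  /* the "factor"             */
>        maximum /= 4;
>    }
>}
>
>int history_read(int from, int to) {
>    return history[from][to] * 256 / (maximum + 1); /* always in 0..255 */
>}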
>
>I do this because I always want to be able to use the history
>as an input value for formulas like reductions or margins.
>Since my engine is not very good, this might be a bad idea. ;-)
>
>Harald


