Computer Chess Club Archives



Subject: Re: Search instabilities when re-using hashtables

Author: Robert Hyatt

Date: 08:30:28 06/17/02



On June 17, 2002 at 02:03:10, Uri Blass wrote:

>On June 17, 2002 at 00:32:32, Christophe Theron wrote:
>
>>On June 16, 2002 at 16:44:23, Uri Blass wrote:
>>
>>>On June 16, 2002 at 15:33:48, Christophe Theron wrote:
>>>
>>>>On June 16, 2002 at 14:11:25, Uri Blass wrote:
>>>>
>>>>>On June 16, 2002 at 12:53:19, Gian-Carlo Pascutto wrote:
>>>>>
>>>>>>Hello all,
>>>>>>
>>>>>>My program has been suffering serious search instabilities
>>>>>>since the day I started re-using my hash tables between searches
>>>>>>(i.e. not clearing them). I dissected the problem down to a
>>>>>>pathological case this afternoon:
>>>>>>
>>>>>>1) We are pondering, do a deep 20-ply search, and see that we have
>>>>>>a combination to promote a pawn or three. We start failing high
>>>>>>big time.
>>>>>>
>>>>>>2) The opponent plays a different move than expected, but because
>>>>>>this is an endgame, it transposes into the variation we were pondering
>>>>>>anyway, so we get a lot of hash hits.
>>>>>>
>>>>>>3) We start our normal search at ply 3 with a window of [-50,50].
>>>>>>We get a hash hit at one of the bottom nodes. It indicates a lower
>>>>>>bound of +2000. This is enough to get a cutoff, and we fail high.
>>>>>>
>>>>>>4) We restart our normal search at ply 3 with a window of [50, inf].
>>>>>>We get a hash hit at one of the bottom nodes, but this time the bound
>>>>>>is _not_ enough to get a cutoff. We search this node, and (because
>>>>>>we're only 3 plies deep) only see an advantage of 0.01. We fail low.
>>>>>>
>>>>>>5) We see a fail high followed by a fail low in the search driver,
>>>>>>and need a way to resolve this. In this case I do a full-window
>>>>>>re-search with [-inf,inf], which gets a score of +0.01 back.
>>>>>>
>>>>>>6) We start our search at ply 4 (same thing all over again)
>>>>>>
>>>>>>7) This repeats until the search starts to see the 20-pawn
>>>>>>advantage. We did 3 searches at each ply, two of them with an open
>>>>>>window.
>>>>>>
>>>>>>Thank God I don't get this worst-case scenario often, but I
>>>>>>do see a lot of fail high/fail low combinations caused by this
>>>>>>effect.
>>>>>>
>>>>>>I'm wondering if anyone knows a solution, or what other people
>>>>>>who reuse hash tables do.
>>>>>
>>>>>The simplest thing to try is clearing the hash tables after the first
>>>>>fail high/fail low combination.
>>>>>
>>>>>Note that I decided to clear the hash tables in Movei after every move because
>>>>>I found that not doing it is probably counterproductive.
>>>>>
>>>>>I do not understand the reason, but I found one case where not clearing them
>>>>>caused a different score that led to a worse positional move at the same depth.
>>>>>
>>>>>After that I decided to test Movei with clearing the hash tables and found
>>>>>better results.
>>>>>
>>>>>Note that Movei does not use hash tables in an effective way.
>>>>>
>>>>>It does not use hash tables to prune the tree, and it almost always uses the
>>>>>always-replace method.
>>>>>
>>>>>The only case when I do not replace is when Movei already remembers a move at
>>>>>a bigger remaining depth for a position with the same 64-bit key (the same
>>>>>address in the hash tables is not enough).
>>>>>
>>>>>The reason is that in this case, if the move that Movei remembers is the same
>>>>>move, replacing is a waste of time, and if it is a different move, replacing
>>>>>may be counterproductive, because Movei may end up remembering a move that
>>>>>works only at small depth, and when it gets to bigger depth it may start with
>>>>>the wrong move.
>>>>
>>>>
>>>>So you are not using the "always replace" strategy. You are using the strategy
>>>>used by most chess programs: replace with better quality info only.
>>>>
>>>>Always replace is when you replace the hash table entry without looking at
>>>>what's already there.
>>>
>>>I do not think that I do what most programmers do.
>>>
>>>I believe that most programmers decide whether to replace even when the entry
>>>is the same but the 64-bit key is different.
>>>
>>>I consider not replacing the best move only when the entry is the same and the
>>>64-bit key is also the same.
>>>
>>>Uri
>>
>>
>>
>>Yes, you are right. I had missed the important part about the hash key being
>>the same.
>>
>>So you are basically using an always-replace scheme. But it is inferior to the
>>other scheme (replace only when the new info is of better quality, i.e. bigger
>>depth than the existing entry).
>>
>>Why don't you use the better scheme? You are only one test away from it.
>>
>>
>>
>>    Christophe
>
>The problem with replacing only when the new information has bigger depth is
>that it is possible to be left with a deep entry that is never used again.
>
>I also think it is possible that some positions are reached often at small depth
>through different move orders, and not replacing them only because of some deep
>entries may be a mistake.

Which do you want to avoid:  (a) a small search of just a few nodes?  Or
(b) a large search of millions of nodes?  That is what depth-preferred
replacement buys you.  You keep the entries that save you the most work, which
makes the search far faster.
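
A minimal sketch of that test in C (the struct and the names here are
illustrative, not Crafty's actual code):

    /* Hypothetical hash entry; field names are illustrative. */
    typedef struct {
        unsigned long long key;  /* full 64-bit Zobrist key               */
        int depth;               /* remaining depth (draft) when stored   */
        int score;
        int bound;               /* EXACT_SCORE, LOWER_BOUND, UPPER_BOUND */
        int best_move;
        int age;                 /* search counter, used further below    */
    } HashEntry;

    /* Depth-preferred replacement: overwrite only when the new search
       went at least as deep, so the entries that save the largest
       subtrees are the ones that survive. */
    void store_depth_preferred(HashEntry *slot, const HashEntry *e) {
        if (e->depth >= slot->depth)
            *slot = *e;
    }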


>
>It is also obvious that if I do not clear the hash tables after every move,
>replacing only when you have bigger depth is a mistake, because you are going
>to be left with old entries that are not important. My first implementation of
>hash tables was without clearing them.


This can be fixed with an "age" field.  Use the depth-preferred test only if
the stored position is from the current search; otherwise keep the entry for
probing, but overwrite it whenever you want to store, regardless of its depth.
I do this in Crafty and it works well.
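
A sketch of that rule, reusing the hypothetical HashEntry above (this is
the idea, not Crafty's source): bump a counter once per game move and
treat entries with a different stamp as fair game for overwriting.

    /* 'search_id' is incremented once per actual game move.  An entry
       with a different stamp is from an earlier search: it can still
       satisfy probes, but it loses every store conflict. */
    extern int search_id;

    void store_with_age(HashEntry *slot, HashEntry *e) {
        e->age = search_id;
        if (slot->age != search_id || e->depth >= slot->depth)
            *slot = *e;   /* stale entry, or new result is deeper */
    }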

I also use two tables in Crafty: one is "depth-preferred" as you explain,
the other is "always store".  I _always_ store a hash entry in one or the
other, most often the latter.
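
A sketch of how the two tables might fit together (again hypothetical
names, with each table indexed by the low bits of the key):

    /* If the new entry loses the depth-preferred conflict, it still
       goes into the always-store table, so every result is kept
       somewhere. */
    void store_two_tables(HashEntry *depth_table, HashEntry *always_table,
                          unsigned long long mask, HashEntry *e) {
        HashEntry *d = &depth_table[e->key & mask];
        e->age = search_id;
        if (d->age != search_id || e->depth >= d->depth)
            *d = *e;                          /* depth-preferred table */
        else
            always_table[e->key & mask] = *e; /* always-store table    */
    }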




>
>I remember that I did not find a clear advantage for replacing only when the
>depth is bigger, but that may also be a result of other problems.
>
>I decided to record moves in the hash tables only when the score is bigger than
>beta, because my tests showed that it is better than the other options I tried,
>in spite of knowing that in theory it should not be the case.

You should store a best move when score > alpha.  If score == alpha, you
won't have a best move, but in all other cases you will.
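
In code this is one test at the end of the node (a sketch, using the
bound names from the hypothetical entry above):

    enum { UPPER_BOUND, LOWER_BOUND, EXACT_SCORE };

    /* score > alpha means some move raised the bound, so there is a
       best move worth remembering; score == alpha (a fail low) means
       no move did, so there is nothing to store. */
    void record_result(HashEntry *e, int score, int alpha, int beta,
                       int best_move) {
        e->score     = score;
        e->best_move = (score > alpha) ? best_move : 0;
        e->bound     = (score >= beta)  ? LOWER_BOUND
                     : (score >  alpha) ? EXACT_SCORE
                     : UPPER_BOUND;
    }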


>
>In spite of all the problems, and the fact that the hash tables are used only
>for better move ordering, test suites showed that hash tables made Movei
>clearly faster (about 1.5 times faster relative to Movei without hash tables).

That is normal for middlegame positions.  You don't get many "EXACT" type
hash hits.  But try Fine #70 and this will change.
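
The reason is on the probe side: an EXACT hit of sufficient draft ends
the node outright, while bound hits only cut when they clear the window.
A sketch with the hypothetical names above:

    /* Returns 1 and sets *score when the entry terminates the node.
       In a transposition-heavy endgame like Fine #70, EXACT hits
       become common and whole subtrees disappear. */
    int probe_cutoff(const HashEntry *e, unsigned long long key,
                     int depth, int alpha, int beta, int *score) {
        if (e->key != key || e->depth < depth)
            return 0;                 /* wrong position or too shallow */
        if (e->bound == EXACT_SCORE ||
            (e->bound == LOWER_BOUND && e->score >= beta) ||
            (e->bound == UPPER_BOUND && e->score <= alpha)) {
            *score = e->score;
            return 1;
        }
        return 0;                     /* bound does not clear the window */
    }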

>
>I am not sure if it helps the public Movei, because the public Movei does not
>clear hash tables between moves and for some reason that may create some
>problems, but it helps the latest Movei.
>
>Uri


