Computer Chess Club Archives



Subject: Re: Handling repetitions?

Author: Robert Hyatt

Date: 09:43:40 12/22/05



On December 21, 2005 at 15:48:36, Frederik Tack wrote:

>On December 20, 2005 at 20:30:41, Robert Hyatt wrote:
>
>>On December 20, 2005 at 17:11:41, Frederik Tack wrote:
>>
>>>I'm looking for the best way to implement repetition detection in the search
>>>tree for my engine without losing performance.
>>>
>>>I've found some articles on the internet that suggest creating a repetition
>>>table using Zobrist keys. I have now implemented this and it works fine, but
>>>I've lost about 10% in performance.
>>>
>>>I have now come up with a new idea that should have no impact on performance.
>>>However, I'm not sure whether what I'm about to do is completely correct.
>>>Here's the idea. Before starting the analysis, I store every position from the
>>>game's move history in the transposition table with an evaluation of 0 and a
>>>depth of maxdepth + 1. During the analysis, I use a replacement scheme in the
>>>transposition table that will not replace an entry if the depth in the tree is
>>>smaller than the depth in the table. Since I've stored the historic positions
>>>with maxdepth + 1, they will never get replaced. This should score every
>>>position that has already been on the board as a draw.
>>>
>>>The fact is I haven't found any article on the internet discussing this way of
>>>working, so I fear it may have drawbacks I'm not aware of. Has anyone tried
>>>implementing repetition detection this way, or does anyone have an idea why it
>>>wouldn't work?
>>
>>
>>There are issues, but several have used this approach.  Bruce Moreland (Ferret)
>>uses a separate table, but hashes it just like normal.  Problem is that the old
>>positions are a tiny part of the problem.  The real problem is positions reached
>>during the current search.
>>
>>The solution to this is that each time you reach a new position, you have to
>>store the position in the table if it is not already there.  If it is, increment
>>a counter so you can tell whether it is a 2nd or 3rd repetition later.  Then as
>>you unmake that move, you have to decrement the counter, and if it goes to zero,
>>remove the entry.  This becomes time-consuming in itself.
>
>I could try to do the detection just after making the move, like you suggest,
>instead of doing it at the beginning of my negamax function. That would indeed
>improve performance.
>Thanks for the advice.
>
>>  And then when you get
>>ready to parallelize your search, you have to handle the problem of replicating
>>this stuff, since positions reached in one part of the parallel search are not
>>necessarily repetitions of positions in another part.  If you get a hit on a
>>position stored from another branch of the parallel search, it isn't a draw at
>>all.  This takes some effort to handle, usually by replicating the table or
>>else storing a "thread ID" in each entry.
>>
>>The list is just as good, and if you use the 50-move counter to limit how far
>>back you search the list, the performance loss is not that bad...
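
To make that concrete, here is a rough sketch of the counter scheme quoted
above.  The names (rep_entry, rep_add, rep_remove, REP_SIZE) are made up for
illustration and the probing is simplistic; this is not code from any
particular program.

#define REP_SIZE 4096       /* power of two, larger than any search path */

typedef struct {
  unsigned long long key;   /* Zobrist key of the position, 0 = empty slot */
  int count;                /* occurrences of this position on the path   */
} rep_entry;

static rep_entry rep_table[REP_SIZE];

/* Called right after make_move().  Returns how many times this position
   has now occurred, so the caller can score a repetition as a draw. */
int rep_add(unsigned long long key) {
  int i = (int) (key & (REP_SIZE - 1));
  while (rep_table[i].key != 0 && rep_table[i].key != key)
    i = (i + 1) & (REP_SIZE - 1);           /* linear probing */
  rep_table[i].key = key;
  return ++rep_table[i].count;
}

/* Called just before unmake_move() to undo the matching rep_add(). */
void rep_remove(unsigned long long key) {
  int i = (int) (key & (REP_SIZE - 1));
  while (rep_table[i].key != key)
    i = (i + 1) & (REP_SIZE - 1);
  if (--rep_table[i].count == 0)
    rep_table[i].key = 0;                   /* free the slot */
}

Clearing a slot in rep_remove() is only safe because make/unmake gives the
adds and removes strict stack order.  If the positions from the game history
are loaded into the table before the search starts, they simply stay there
for the whole iteration.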


The main point was that if you store a position in this "repetition hash table"
when you reach it in the search, you have to remove it from the table when you
back up the tree and unmake the move that led to this position.  Otherwise you
get lots of false matches.  This has some overhead, because now _every_ node
requires an add and remove operation for this table.  The repetition list
eliminates the "remove operation"...
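
For comparison, a rough sketch of the list version (again the names are made
up, and in a real program the ply index would be offset so that the game
history sits below the search path):

#define MAX_PLIES 1024

static unsigned long long hist_key[MAX_PLIES]; /* game history + search path */

/* Called after make_move(): just record this position's key at this ply. */
void rep_record(int ply, unsigned long long key) {
  hist_key[ply] = key;
}

/* 'fifty' is the number of plies since the last capture or pawn move;
   nothing older than that can possibly repeat, so the scan stays short.
   Step by 2 because only positions with the same side to move can match. */
int rep_detect(int ply, int fifty, unsigned long long key) {
  int i;
  for (i = ply - 4; i >= ply - fifty && i >= 0; i -= 2)
    if (hist_key[i] == key)
      return 1;
  return 0;
}

Here make_move() only has to write one entry, and unmake_move() does nothing
at all; the entry is simply overwritten the next time that ply is reached.
Most programs score the first repetition inside the tree as a draw rather
than waiting for a true threefold, but that decision is independent of which
of the two structures you use.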


