Computer Chess Club Archives


Subject: Re: Handling repetitions?

Author: Stuart Cracraft

Date: 21:28:53 12/23/05


On December 22, 2005 at 17:19:03, Robert Hyatt wrote:

>On December 22, 2005 at 16:17:57, Stuart Cracraft wrote:
>
>>On December 22, 2005 at 12:43:40, Robert Hyatt wrote:
>>
>>>On December 21, 2005 at 15:48:36, Frederik Tack wrote:
>>>
>>>>On December 20, 2005 at 20:30:41, Robert Hyatt wrote:
>>>>
>>>>>On December 20, 2005 at 17:11:41, Frederik Tack wrote:
>>>>>
>>>>>>I'm looking for the best way to implement repetition detection in the search
>>>>>>tree for my engine without losing performance.
>>>>>>
>>>>>>I've found some articles on the internet suggesting creating a repetition table
>>>>>>using Zobrist keys. I have now implemented this and it works fine, but I've lost
>>>>>>about 10% performance.
>>>>>>
>>>>>>I have now come up with a new idea that should have no impact on performance.
>>>>>>However, I'm not sure whether what I'm about to do is completely correct. Here's
>>>>>>the idea. Before starting the analysis, I store every position that has been on
>>>>>>the board (from the move history) in the transposition table with an evaluation
>>>>>>of 0 and a depth of maxdepth + 1. During the analysis, I use a replacement
>>>>>>scheme that will not overwrite a table entry if the depth in the tree is
>>>>>>smaller than the depth in the table. Since the historic positions are stored
>>>>>>with maxdepth + 1, they will never get replaced. This should score every
>>>>>>position that has been on the board before as a draw.
>>>>>>
>>>>>>I haven't found any article on the internet discussing this approach, so I fear
>>>>>>it may have drawbacks I'm not aware of. Has anyone tried implementing
>>>>>>repetition detection this way, or does anyone see why it wouldn't work?
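The seeding scheme described above can be sketched roughly as follows. This is a minimal illustration, not anyone's actual engine code; the dict-based table, `MAX_DEPTH`, and the helper names are all hypothetical, assuming a depth-preferred replacement rule keyed on the Zobrist hash.

```python
# Sketch of the idea above: seed the transposition table with every position
# from the game history at an unbeatable depth, so the normal depth-preferred
# replacement rule never overwrites those entries during the search.
MAX_DEPTH = 64
DRAW_SCORE = 0

def seed_history(tt, history_hashes):
    """Store each game-history position with depth MAX_DEPTH + 1 and score 0."""
    for h in history_hashes:
        tt[h] = {"depth": MAX_DEPTH + 1, "score": DRAW_SCORE}

def tt_store(tt, h, depth, score):
    """Depth-preferred replacement: keep the deeper entry."""
    entry = tt.get(h)
    if entry is None or depth >= entry["depth"]:
        tt[h] = {"depth": depth, "score": score}
```

Because no search ever stores at a depth above MAX_DEPTH, the seeded entries survive every replacement attempt, which is exactly the property the poster relies on.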
>>>>>
>>>>>
>>>>>There are issues, but several have used this approach.  Bruce Moreland (Ferret)
>>>>>uses a separate table, but hashes it just like normal.  Problem is that the old
>>>>>positions are a tiny part of the problem.  The real problem is positions reached
>>>>>during the current search.
>>>>>
>>>>>The solution to this is that each time you reach a new position, you have to
>>>>>store the position in the table if it is not already there.  If it is, increment
>>>>>a counter so you can tell whether it is a 2nd or 3rd repetition later.  Then
>>>>>as you unmake that move, you have to decrement the counter and, if it goes to
>>>>>zero, remove the entry.  This becomes time-consuming itself.
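The store-and-unstore scheme just described can be sketched like this. A simplified sketch using a plain dict keyed by the Zobrist hash; the function names are hypothetical, not from any particular engine.

```python
# Sketch of the counter scheme above: bump a count for a position when it is
# reached in the search, and undo the bump when the move is unmade, deleting
# the entry once its count returns to zero.
def enter_position(counts, h):
    """Called after make_move(); returns the occurrence number (1 = first time)."""
    counts[h] = counts.get(h, 0) + 1
    return counts[h]

def leave_position(counts, h):
    """Called before unmake_move(); removes the entry when its count hits zero."""
    counts[h] -= 1
    if counts[h] == 0:
        del counts[h]
```

The cost Hyatt points out is visible here: every single node pays for one `enter_position` and one `leave_position`, whereas the list approach below needs no removal step at all.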
>>>>
>>>>I could try doing the detection just after making the move, as you suggest,
>>>>instead of at the beginning of my negamax function. That would indeed improve
>>>>performance.
>>>>Thanks for the advice.
>>>>
>>>>>  And then when you get
>>>>>ready to parallelize your search, you have to handle the problem of replicating
>>>>>this stuff, since positions from one part of the search cannot legitimately
>>>>>repeat positions from another part.  If you get a hit on a position
>>>>>stored from another branch of the parallel search, it isn't a draw at all.  This
>>>>>takes some effort to handle, usually replicating the table or else storing a
>>>>>"thread ID" for each entry.
>>>>>
>>>>>The list is just as good, and if you use the 50-move counter to limit how far
>>>>>back you search the list, the performance loss is not that bad...
>>>
>>>
>>>The main point was that if you store a position in this "repetition hash table"
>>>when you reach it in the search, you have to remove it from the table when you
>>>back up the tree and unmake the move that led to this position.  Otherwise you
>>>get lots of false matches.  This has some overhead, because now _every_ node
>>>requires an add and remove operation for this table.  The repetition list
>>>eliminates the "remove operation"...
>>
>>
>>Why bother with this kind of approach?
>>
>>Simply keeping the game-history list, with each item holding the hash of the
>>position at that time, and scanning backwards to see whether the current
>>position being tested for threefold repetition appears in that list N times,
>>seems a better solution.
>>
>>Am I missing something? Why all the complexity?
>>
>>Stuart
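The list approach Cracraft describes, with the fifty-move counter bounding the backward scan as Hyatt suggested earlier, might look roughly like this. A sketch under stated assumptions: the function name is hypothetical, `history` is assumed to hold the Zobrist hash after every ply, and `fifty_move_counter` the number of plies since the last irreversible move.

```python
# Sketch of the history-list scan: walk backwards over the hashes of earlier
# positions, stopping at the last irreversible move (fifty-move counter), and
# count how many times the current position has occurred.  Only positions with
# the same side to move can repeat, hence the step of 2.
def count_repetitions(history, fifty_move_counter):
    """history[-1] is the current position's hash; returns its occurrence count."""
    current = history[-1]
    count = 1
    # Earliest index that can still hold a repetition of the current position.
    start = len(history) - 1 - fifty_move_counter
    for i in range(len(history) - 3, max(start, 0) - 1, -2):
        if history[i] == current:
            count += 1
    return count
```

Because an irreversible move (capture or pawn push) makes all earlier positions unreachable, the scan is short in practice, which is why the performance loss Hyatt mentions "is not that bad."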
>

Thanks -- on a long list of things to do...

>
>Several reasons.  If you don't do the "2 times is a draw" trick, you will miss a
>great many repetitions that are simply too deep to see in a normal search, even
>though you noticed the second repetition easily enough. So that is a problem.
>
>Second, the "twice in the history, once in the search" rule solves a problem
>that the trick above might hit.  The second repetition is _not_ a draw, and
>scoring it as
>such can turn a win into a draw.  For example, suppose you try chasing the king
>one way only to discover that it can "get back" to the original position.  But
>if you play on, and force him the other way, now you can win.  But if you take
>the second repetition as a draw, you will not search through it to see the win.
>
>That's the reason for the complexity.  I want to recognize the most repetitions
>possible, without overlooking something possibly better.  There are other more
>subtle reasons, such as not having a ponder move if the first search move is a
>two-fold repetition, yet you have to continue playing.  This solves that.
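The distinction Hyatt draws, where a single repeat inside the current search is enough to call a draw while occurrences that lie entirely in the game history need the full threefold count, could be sketched as below. The names and the split into two hash lists are hypothetical, a simplified sketch rather than any engine's actual rule.

```python
# Sketch of the "twice in the history, once in the search" rule: a position
# repeated anywhere on the current search path is scored as a draw at once
# (it could be forced again), but a position seen only in the played game
# needs two prior occurrences so this one completes a genuine threefold.
def is_draw_by_repetition(game_hashes, search_path_hashes, current):
    if current in search_path_hashes:
        return True                        # twofold inside the search -> draw
    return game_hashes.count(current) >= 2  # threefold counting the current one
```

Scoring an in-search repeat as an immediate draw is what lets the engine "see" repetitions far beyond the horizon, which is the first reason Hyatt gives above.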


