Author: Frederik Tack
Date: 15:51:18 12/20/05
On December 20, 2005 at 17:59:20, Dann Corbit wrote:
>On December 20, 2005 at 17:42:40, Dann Corbit wrote:
>
>>On December 20, 2005 at 17:11:41, Frederik Tack wrote:
>>
>>>I'm looking for the best way to implement repetition detection in the search
>>>tree for my engine without losing performance.
>>>
>>>I've found some articles on the internet suggesting to create a repetition table
>>>using Zobrist keys. I have now implemented this and it works fine, but I've lost
>>>about 10% performance.
>>>
>>>I have now come up with a new idea that should have no impact on performance.
>>>However, I'm not sure whether what I'm about to do is completely correct. Here's
>>>the idea. Before starting the analysis, I store every position I've had on the
>>>board from the move history in the transposition table with an evaluation of 0
>>>and a depth of maxdepth + 1. During the analysis, I use a replacement scheme in
>>>the transposition table that will not replace an entry if the depth in the tree
>>>is smaller than the depth in the table. Since I've stored the historic positions
>>>with maxdepth + 1, they will never get replaced. This should score every
>>>position that has been on the board before as a draw.
>>>
>>>I haven't found any article on the internet discussing this way of working, so I
>>>fear it may have some drawbacks I'm not aware of. Has anyone tried implementing
>>>repetition detection this way, or does anyone have an idea why it wouldn't work?
>>
>>That sounds similar to what most everyone does. Usually, you keep a list of all
>>the positions you have played [or are planning to play enough to shove it in the
>>PV] OTB up to now (usually called the move_stack or thereabouts). Then, you
>>just scan the move stack to see if there are any strange feelings of déjà vu.
>>
>>That way, you can clear your hash table if you want to.
>
>If you are scanning your whole move stack, you can shorten it up a lot by only
>going back to the last non-reversible move. If (for instance) there is a pawn
>push, a capture, a promotion, etc., then we know for sure that the positions
>from there backwards are no longer in the realm of repeat candidates.
>
>A ten percent performance hit seems REALLY high. How are you checking for
>repeats? I would expect it to be hard even to measure it.
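(A minimal sketch, in Delphi-style Pascal, of the move-stack scan described above, including the shortcut of stopping at the last non-reversible move. The names HistoryKeys, HistoryTop, HalfMoveClock and IsRepetition are illustrative assumptions, not code from any engine mentioned here:)

const
  MaxGamePlies = 1024;

var
  HistoryKeys: array[0..MaxGamePlies - 1] of Int64; // Zobrist key of every position reached so far
  HistoryTop: Integer;                              // index of the current position in HistoryKeys
  HalfMoveClock: Integer;                           // plies since the last non-reversible move

// Returns True if the current position has already occurred since the last
// non-reversible move (pawn push, capture, promotion, ...).
function IsRepetition: Boolean;
var
  i: Integer;
begin
  Result := False;
  i := HistoryTop - 4;                              // a position can recur at the earliest four plies back
  while (i >= 0) and (i >= HistoryTop - HalfMoveClock) do
  begin
    if HistoryKeys[i] = HistoryKeys[HistoryTop] then
    begin
      Result := True;
      Exit;
    end;
    Dec(i, 2);                                      // only positions with the same side to move can match
  end;
end;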
Thanks for the reply.
The way I've currently implemented it is by using a repetition table (similar to
a transposition table): a table with a 16-bit index, a 64-bit Zobrist key and a
counter for the number of times the position has occurred. The problem with this
approach is that I lose performance because I have to access this table in every
node in the tree. The first lines of my negamax function look like this:
function Negamax(const PositionPtr: PPosition; Depth: TSearchDepth;
                 Alpha, Beta: TEvaluation): TEvaluation;
begin
  inc(Nodes);
  if RepTable[PositionPtr^.ZobristKey and RepetitionTableSize].Counter > 1
    then Result := evDraw // 3-fold repetition
    else ...
Concerning the new implementation I suggested:
If I understand correctly, you suggest scanning the move stack during the
analysis. However, my suggestion does not involve any scanning during analysis.
Before starting the analysis, I would just store all the positions from the move
stack in the transposition table with an evaluation of 0 (indeed, I could
optimize this so I store only from the last non-reversible move on). Then I just
start the analysis and don't do any scan or anything special.
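(A minimal sketch of that pre-seeding step, including the last-non-reversible-move optimization. StoreTTEntry stands for whatever routine writes a transposition-table entry; it, HistoryKeys, HistoryTop, LastIrreversiblePly, MaxSearchDepth and hfExact are assumed names, while evDraw comes from the snippet above:)

// Run once before the search starts: every position already played since the
// last non-reversible move is stored as an exact draw with a depth no node in
// the tree can reach, so a depth-preferred replacement scheme never overwrites it.
procedure SeedHistoryIntoTT;
var
  i: Integer;
begin
  for i := LastIrreversiblePly to HistoryTop do
    StoreTTEntry(HistoryKeys[i],       // Zobrist key of a position from the game history
                 evDraw,               // evaluation of 0
                 MaxSearchDepth + 1,   // deeper than anything the search itself stores
                 hfExact);             // exact score, not a bound
end;

(Note that with this loop the current root position itself ends up in the table too, so the root must not simply return a hash score; most engines search the root moves unconditionally anyway.)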
I also don't quite understand what you mean by 'That way, you can clear your
hash table if you want to'.
Fred