Computer Chess Club Archives

Subject: Re: Repetition Checks

Author: Omid David Tabibi

Date: 09:37:31 07/31/03

On July 30, 2003 at 23:38:50, Uri Blass wrote:

>On July 30, 2003 at 17:34:55, Gerd Isenberg wrote:
>
>>On July 30, 2003 at 16:39:57, Bas Hamstra wrote:
>>
>>>On July 30, 2003 at 12:17:36, Gerd Isenberg wrote:
>>>
>>>>On July 29, 2003 at 14:34:41, Bas Hamstra wrote:
>>>>
>>>>>On July 29, 2003 at 00:16:53, macaroni wrote:
>>>>>
>>>>>>I have been having great trouble finding an efficient routine for repetition
>>>>>>checks. I can get the Zobrist keys of the current line being searched, but what
>>>>>>is the best (if there is a best) method of comparing them to check for three the
>>>>>>same? It seems crazy to loop through all of them for each one, looking for like
>>>>>>positions. Is there something really simple and nice I'm completely missing? Or
>>>>>>is that the only way?
>>>>>>Thanks all :)
>>>>>
>>>>>How about this: char RepCheck[64000]. Now in each position you take the last
>>>>>16 bits of the hash key. If RepCheck[Last16] is nonzero, it *might* be a
>>>>>repetition and you do the extensive check of comparing hash keys all the way
>>>>>back to the root. For this to work, you do RepCheck[Last16]++ in your Make(Move)
>>>>>code and decrement it at Unmake(). So now you have 1/64000th of the cost...
>>>>>
>>>>>Bas.
>>>>
>>>>Hi Bas,
>>>>
>>>>The "Ronald de Man" trick works well,
>>>>except Last16 can get a bit larger than 64000 in your example ;-)
>>>>Maybe last13 or last14 with a smaller table is already enough.
>>>
>>>Maybe. But as the game proceeds, the probability of a "hit" becomes higher and
>>>higher, even if there was no real repetition at all. Suppose you are at move 30
>>>in a game and there is no real repetition. What is the probability of NOT getting
>>>a hit in this case? Probably smaller than you would think, says my gut feeling.
>>>It's like throwing a 16-bit die 60 times in a row and never once getting that one
>>>number ;-) Therefore I prefer a table that is not too small.
>>>
>>>Bas.
>>
>>I see, I checked my code and only use 2^12 entries, 4 KByte.
>>Maybe that's the reason IsiChess lost to Tao ;-)
>>
>>First of all I use gameMoveCount50 + 4 <= gameMoveCount, of course.
>>I guess the collision rate is so low that the table clearly pays off. There are
>>N entries set in the 4K table, where N is the number of all reversible (half)
>>moves played in the game and in the current search, backward from the current
>>search position, <= 100.
>
>How much speed can you get from faster repetition detection?
>Do you use more than 1% of your time for repetition detection?

I guess even less. I use a very primitive method of repetition detection (running
through the history since the last capture and comparing the hash keys). Testing
it with a profiler, I found that the time spent in repetition detection is
negligible, so I don't intend to spend any time trying to optimize it.
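
A minimal sketch of this kind of scan, in C. The names (keyHistory, histPly,
fiftyCounter) are only illustrative and do not come from any engine mentioned
in this thread:

#include <stdint.h>

#define MAX_PLY 1024

uint64_t keyHistory[MAX_PLY]; /* Zobrist key after every half-move           */
int      histPly;             /* index of the current position in keyHistory */
int      fiftyCounter;        /* half-moves since the last irreversible move */

/* Returns 1 if the current position already occurred since the last capture
   or pawn move.  Only positions with the same side to move can repeat, so it
   is enough to step back two plies at a time. */
int isRepetition(void)
{
    int i;
    for (i = histPly - 4; i >= histPly - fiftyCounter && i >= 0; i -= 2)
        if (keyHistory[i] == keyHistory[histPly])
            return 1;
    return 0;
}

In the search, finding a single earlier occurrence like this is usually enough
to score the position as a draw; for a strict threefold claim over the whole
game you would count the occurrences instead.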
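
And a hedged sketch of the counter-table filter Bas and Gerd discuss in the
quoted part of the thread (table size and names are again illustrative; Bas
indexes with 16 low bits of the key, Gerd with 12):

#include <stdint.h>

#define REP_BITS 16                      /* 2^16 buckets of one byte each */
#define REP_MASK ((1 << REP_BITS) - 1)

unsigned char repCount[1 << REP_BITS];   /* bucket counters, initially 0  */

void repEnter(uint64_t key) { repCount[key & REP_MASK]++; }  /* in Make()   */
void repLeave(uint64_t key) { repCount[key & REP_MASK]--; }  /* in Unmake() */

/* Cheap pre-test before the exact scan.  A repetition needs at least four
   reversible half-moves (Gerd's gameMoveCount50 + 4 <= gameMoveCount test).
   The current position has already been entered by repEnter() in Make(), so
   a count above 1 means some earlier position - or one that merely collides
   on the low bits - maps to the same bucket; only then is the exact scan
   (e.g. isRepetition() above) worth running. */
int mightBeRepetition(uint64_t key, int fifty)
{
    if (fifty < 4)
        return 0;
    return repCount[key & REP_MASK] > 1;
}

Provided every game move and every search move passes through repEnter(), a
count of 1 can never hide a real repetition; a false positive only costs one
exact scan, which is Bas's argument above for not making the table too small.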


>
>>
>>There are still a lot of blanks inside the table. OK, 64K is not so huge today,
>>but that is sixteen 4K pages instead of one, hopefully at least in the second-level
>>cache - if you have more of these tables, it gets tight in the cache. Remember the
>>thread about TLB misses ;-)
>
>I do not understand all the cache considerations.
>I only know that if you access arrays often, it is better to do it with small
>arrays and not with big tables.
>
>I do not understand the comparison between 16 4K and 64K.
>
>Where do you have 16 4K in this thread?
>
>Uri
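
On the "16 4K" comparison in the quoted text: a char table indexed by 16 bits
of the key occupies 64 KByte, i.e. sixteen 4 KiB virtual-memory pages, while
Gerd's 2^12-entry table fits in a single page:

enum {
    PAGE_SIZE = 4096,                   /* 4 KiB page              */
    PAGES_64K = (1 << 16) / PAGE_SIZE,  /* char[65536] -> 16 pages */
    PAGES_4K  = (1 << 12) / PAGE_SIZE   /* char[4096]  ->  1 page  */
};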


