Computer Chess Club Archives



Subject: Re: Repetition Checks

Author: Gerd Isenberg

Date: 23:58:47 07/30/03



On July 30, 2003 at 23:38:50, Uri Blass wrote:

>On July 30, 2003 at 17:34:55, Gerd Isenberg wrote:
>
>>On July 30, 2003 at 16:39:57, Bas Hamstra wrote:
>>
>>>On July 30, 2003 at 12:17:36, Gerd Isenberg wrote:
>>>
>>>>On July 29, 2003 at 14:34:41, Bas Hamstra wrote:
>>>>
>>>>>On July 29, 2003 at 00:16:53, macaroni wrote:
>>>>>
>>>>>>I have been having great trouble finding an efficient routine for repetition
>>>>>>checks. I can get the Zobrist keys of the current line being searched, but what
>>>>>>is the best (if there is a best) method of comparing them to check for three of
>>>>>>the same? It seems crazy to loop through all of them for each one, looking for
>>>>>>like positions. Is there something really simple and nice I'm completely
>>>>>>missing? Or is that the only way?
>>>>>>Thanks all :)
>>>>>
>>>>>How about this: char RepCheck[64000]. Now in each position you take the last
>>>>>16 bits of the hashkey. If RepCheck[Last16] is nonzero, it *might* be a
>>>>>repetition, and only then you do the extensive check of comparing all hashkeys
>>>>>back to the root. For this to work, you increment RepCheck[Last16] in your
>>>>>Make(Move) code and decrement it at Unmake(). So now you have 1/64000th of the
>>>>>cost...
>>>>>
>>>>>Bas.
>>>>
>>>>Hi Bas,
>>>>
>>>>The "Ronald de Man" trick works well,
>>>>except Last16 becomes a bit larger than 64000 in your sample ;-)
>>>>May be last13 or last14 with smaller tables is even enough.
>>>
>>>Maybe. But as the game proceeds the probability of a "hit" becomes higher and
>>>higher, even if there was no real repetition at all. Suppose you are at move 30
>>>in a game and there is no real repetition. What's the probability of NOT getting
>>>a hit in this case? Probably smaller than you would think, says my gut feeling.
>>>Like throwing a 16-bit die 60 times in a row and not ONCE getting a given number ;-)
>>>Therefore I prefer a table that is not too small.
>>>
>>>Bas.
>>
>>I see, I checked my code and only use 2^12 entries, 4 KByte.
>>Maybe that is the reason IsiChess lost to Tao ;-)
>>
>>First of all I check gameMoveCount50 + 4 <= gameMoveCount, of course.
>>I guess the collision rate is so low that the table clearly pays off. There are
>>N entries set in 4K, where N is the number of all reversible (half) moves played
>>in the game and in the current search, backward from the current search
>>position, <= 100.
>
>How much speed can you get from faster repetition detection?
>Do you use more than 1% of your time for repetition detection?


It is about in this range. If you have long chains of reversible moves it may be
a bit more. In tactical positions there is almost no difference.
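
For reference, a minimal sketch of the counter-table scheme described above, in
plain C (a hedged illustration, not code from any engine mentioned here; the
names hashKey, repEnter and repLeave are assumed, while gameMoveCount and
gameMoveCount50 follow the names used in this thread):

  /* one byte counter per slot; 1 << 16 = 65536 entries, so every
     16-bit index fits (slightly more than the 64000 in the example) */
  static unsigned char repCheck[1 << 16];

  extern unsigned long long hashKey;          /* zobrist key of current position */
  extern int gameMoveCount, gameMoveCount50;  /* plies played / plies at the last
                                                 irreversible move                */

  #define REP_INDEX(key) ((unsigned)((key) & 0xFFFF))

  /* call from Make(move), after the board and hashKey are updated */
  void repEnter(void)  { repCheck[REP_INDEX(hashKey)]++; }

  /* call from Unmake(move), before the move is taken back */
  void repLeave(void)  { repCheck[REP_INDEX(hashKey)]--; }

  /* cheap test at a node: the current position itself contributes 1,
     so only a count of 2 or more can indicate a repetition; only then
     compare the full hash keys back to the last irreversible move     */
  int maybeRepetition(void)
  {
      if (gameMoveCount50 + 4 > gameMoveCount)   /* too few reversible plies */
          return 0;
      return repCheck[REP_INDEX(hashKey)] >= 2;  /* possible hit, verify it  */
  }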


>
>>
>>There are still a lot of blanks inside the table. OK, 64K is not so huge today,
>>but it is 16 4K pages instead of one, hopefully at least in second level cache -
>>if you have more of these tables, cache gets tight. Remember the thread
>>about TLB misses ;-)
>
>I do not understand all the cache considerations.

Fast cache is a limited resource. Short arrays/tables are in general more cache
friendly. If code/data is not in first or second level cache, there may be delays
of about 140 ns (memory latency) up to 400 ns in the worst case to get the data
from main memory via the cache into the processor.

>I only know that if you use arrays often then it is better to do it with small
>arrays and not with big tables.
>
>I do not understand the comparison between 16 4K and 64K.

4 KByte (0x1000) is a common memory page size. These pages are managed by the TLB
(Translation Lookaside Buffer) inside the processor. The TLB maps effective
addresses of a process to real physical addresses. There are first and second
level TLBs. A TLB miss occurs if an effective address is not already mapped in
the TLB. This is likely with huge random hash accesses, and the additional delays
cost much more time than pure memory latency.
That was the point where Vincent came up with his 400 ns latency issue.
64 KByte requires up to 16 TLB entries.

>
>Where do you have 16 4K in this thread?

16 * 4 KByte page size == 64 KByte if you use 16 bits of the hashkey, like Bas.
I use 12 bits as index and need a 4 KByte table, one page.
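
In code terms the only difference is the index mask and the table size (an
illustrative sketch, with the sizes from the numbers above):

  unsigned char repTable16[1 << 16];  /* index = hashKey & 0xFFFF: 64 KByte,
                                         i.e. 16 pages of 4 KByte            */
  unsigned char repTable12[1 << 12];  /* index = hashKey & 0xFFF:  4 KByte,
                                         exactly one page                    */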

Gerd

>
>Uri


