Author: Gerd Isenberg
Date: 14:09:21 12/28/02
On December 28, 2002 at 15:58:55, Uri Blass wrote:
>On December 28, 2002 at 15:37:27, Gerd Isenberg wrote:
>
>>On December 28, 2002 at 15:01:24, Uri Blass wrote:
>>
>>>On December 28, 2002 at 13:13:39, Gerd Isenberg wrote:
>>>
>>>>On December 27, 2002 at 21:03:24, Dan Andersson wrote:
>>>>
>>>>>Ugh! Talk about destroying a nice run-time behaviour. Using a 4K hash and a
>>>>>rehashing scheme would get you a mean almost identical to one. The algorithm you
>>>>>describe would probably have a mean close to one also, but the standard
>>>>>deviation will be horrible to behold, and the missed-probe behaviour will be
>>>>>real bad. Iterating over the move list raises the cost of an error linearly, or
>>>>>very nearly so. Real stupid. There is no excuse whatsoever to use that
>>>>>algorithm.
>>>>>
>>>>>MvH Dan Andersson
>>>>
>>>>Hi Dan,
>>>>
>>>>Thanks for the hint. In some endgame positions I got up to 20% collisions; in
>>>>openings or the early middlegame it is < 1%-4%, so about 5% on average.
>>>>
>>>>Before that I used the even more stupid approach of iterating back over the
>>>>move list, comparing Zobrist keys, first 4 plies back and then in 2-ply
>>>>decrements as long as the moves are reversible. So the 4KB table was a nice
>>>>improvement for me.
>>>
>>>How much faster do you get from not using the more stupid approach?
>>>
>>>I call every improvement of less than 5% in speed a small improvement.
>>>
>>>Uri
>>
>>Hi Uri,
>>
>>it's a small improvement according to your definition, especially in tactical
>>positions. It depends on how often the leading condition, looking for the last
>>irreversible move, is true. For Fine #70 it is about 2.5%.
>>
>>  if ( Count50 + 4 <= GameCount ) {           // repetition possible at all?
>>     if ( RepetitionHash[RepeHashIdx()] > 1 )  // the improvement: cheap probe
>>        if ( Repetition() > 0 )                // expensive: iterate over move list
>>           ...
>>  }
>>
>>The 4KByte table saves about 90%, most often > 95%, of all calls to the rather
>>expensive Repetition() function. In makeMove I increment, in unmakeMove I
>>decrement RepetitionHash.
>>
>>Gerd
>
>I can even imagine cases where it is counterproductive.
>The point is that you have to update RepetitionHash even when
>Count50+4 > GameCount, and if you almost always get Count50+4 > GameCount then
>it is counterproductive.
Yes, but that's very rare. Even in Leonid's positions, where almost all forced
moves are irreversible, it's not slower. One unconditional increment/decrement
per node doesn't seem so expensive. And obviously there are always some
sidelines with reversible moves, like repeated checks with queen or rook.
Repetition() may be a rather expensive function, with a loop over a long chain
of reversible moves.
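
For illustration, here is a minimal sketch of how the pieces could fit
together. The names Count50, GameCount, RepetitionHash, RepeHashIdx and
Repetition come from the snippet above; the table size, the index function,
the key-history array and the exact call sites are my assumptions, not the
actual implementation:

   #include <cstdint>

   enum { REP_HASH_SIZE = 4096 };            // 4 KByte table of counters

   uint8_t  RepetitionHash[REP_HASH_SIZE];   // one counter per slot
   uint64_t ZobristKey[1024];                // assumed: key history, one per ply
   int      GameCount;                       // current game/search ply
   int      Count50;                         // ply of the last irreversible move

   int RepeHashIdx(int ply) {                // assumed: low bits of the Zobrist key
      return (int)(ZobristKey[ply] & (REP_HASH_SIZE - 1));
   }

   void makeMoveBookkeeping() {              // at the end of makeMove;
      ++GameCount;                           // assumes makeMove already stored the
      ++RepetitionHash[RepeHashIdx(GameCount)];  // new key in ZobristKey[GameCount]
   }

   void unmakeMoveBookkeeping() {            // mirrored in unmakeMove
      --RepetitionHash[RepeHashIdx(GameCount)];
      --GameCount;
   }

   int Repetition() {                        // the expensive fallback: walk back
      int reps = 0;                          // over the move list in 2-ply steps,
      for (int ply = GameCount - 4; ply >= Count50; ply -= 2)
         if (ZobristKey[ply] == ZobristKey[GameCount])
            ++reps;                          // full-key match: a true repetition
      return reps;
   }

   bool isRepetition() {
      if (Count50 + 4 <= GameCount)                       // repetition possible at all?
         if (RepetitionHash[RepeHashIdx(GameCount)] > 1)  // cheap probe
            return Repetition() > 0;                      // verify with full keys
      return false;
   }

The probe can give false positives (two different positions sharing a slot),
which is why Repetition() still compares the full keys; it can never give a
false negative, since every position on the path was counted in makeMove.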
>
>I am not even sure how to test whether it is productive, because
>it is possible to get small improvements from some random compiler
>optimization, so testing based on whether the program runs faster or
>slower can be misleading.
>
>Uri
Yes, especially with recursive functions like (q)search. Cache effects, data or
code size exceeding some multiple of the cache-line size, additional latencies,
dependencies, stalls ...
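
One way to guard against that, sketched below in modern C++ with
searchFixedDepth() as an assumed hook into the engine (not from the actual
program), is to time the same fixed-depth search several times and compare the
spread against the improvement being measured:

   #include <algorithm>
   #include <chrono>
   #include <cstdio>

   extern long long searchFixedDepth();    // assumed: runs a fixed-depth search
                                           // and returns the node count
   int main() {
      const int runs = 10;
      double minNps = 1e300, maxNps = 0.0, sum = 0.0;
      for (int i = 0; i < runs; ++i) {
         auto t0 = std::chrono::steady_clock::now();
         long long nodes = searchFixedDepth();
         auto t1 = std::chrono::steady_clock::now();
         double nps = nodes / std::chrono::duration<double>(t1 - t0).count();
         minNps = std::min(minNps, nps);
         maxNps = std::max(maxNps, nps);
         sum += nps;
      }
      // If the min-max spread is comparable to the claimed speedup,
      // the run cannot distinguish the improvement from noise.
      std::printf("nps: min %.0f  avg %.0f  max %.0f\n", minNps, sum / runs, maxNps);
      return 0;
   }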
Gerd