Author: Gerd Isenberg
Date: 12:37:27 12/28/02
On December 28, 2002 at 15:01:24, Uri Blass wrote:
>On December 28, 2002 at 13:13:39, Gerd Isenberg wrote:
>
>>On December 27, 2002 at 21:03:24, Dan Andersson wrote:
>>
>>>Ugh! Talk about destroying a nice run time behaviour. Using a 4K hash and a
>>>rehashing scheme would get you a mean almost identical to one. The algorithm you
>>>describe would probably have a mean close to one also, but the standard
>>>deviation will be horrible to behold. But the missed probe behaviour will be
>>>real bad. Iterating over the move list raises the cost of an error linearly, or
>>>very nearly so. Real stupid. There is no excuse whatsoever to use that
>>>algorithm.
>>>
>>>MvH Dan Andersson
>>
>>Hi Dan,
>>
>>Thanks for the hint. In some endgame positions I got up to 20% collisions. In
>>openings or early middlegame < 1%-4%. So about 5% on average.
>>
>>Before that I used the even more stupid approach of iterating back over the move
>>list, comparing Zobrist keys, first 4 plies back and then in 2-ply decrements
>>as long as there are reversible moves. So the 4KB table was a nice improvement
>>for me.
>
>How much faster do you get from not using the more stupid approach?
>
>I call every improvement that is less than being 5% faster a small improvement.
>
>Uri
Hi Uri,
it's a small improvement according to your definition, especially in tactical
positions. It depends on how often the leading condition, looking for the last
irreversible move, is true. For Fine 70 it is about 2.5%.
if ( Count50 + 4 <= GameCount ) {
    if ( RepetitionHash[RepeHashIdx()] > 1 )   // the improvement
        if ( Repetition() > 0 )                // iterate over move list
The 4KByte table saves about 90%, most often > 95%, of all calls to the rather
expensive Repetition() routine. In makeMove I increment, in unmakeMove I
decrement RepetitionHash.
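
For illustration, a minimal sketch of such a counter table in C. The names, the
4096-byte size and the indexing (here RepeHashIdx takes the Zobrist key as an
argument) are assumptions for the sketch, not the actual implementation:

    #include <stdint.h>

    #define REPE_HASH_SIZE 4096        /* 4 KByte of one-byte counters */

    static uint8_t RepetitionHash[REPE_HASH_SIZE];

    /* hypothetical: map the 64-bit Zobrist key of the current position
       to a table index via its low bits */
    static unsigned RepeHashIdx(uint64_t zobristKey) {
        return (unsigned)(zobristKey & (REPE_HASH_SIZE - 1));
    }

    /* called from makeMove for the position just reached */
    static void repeHashEnter(uint64_t zobristKey) {
        RepetitionHash[RepeHashIdx(zobristKey)]++;
    }

    /* called from unmakeMove when the position is left again */
    static void repeHashLeave(uint64_t zobristKey) {
        RepetitionHash[RepeHashIdx(zobristKey)]--;
    }

    /* pre-filter: only if at least two positions fell into this slot
       can the current position be a repetition, so the expensive scan
       over the move list is skipped otherwise */
    static int mayBeRepetition(uint64_t zobristKey) {
        return RepetitionHash[RepeHashIdx(zobristKey)] > 1;
    }

With this, mayBeRepetition() guards the call to Repetition() exactly as in the
snippet above; a false positive from a table collision only costs one extra
scan, never a wrong result.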
Gerd