Computer Chess Club Archives


Subject: Re: double hash tables

Author: Chris Whittington

Date: 06:32:00 10/01/97




On October 01, 1997 at 08:50:17, Robert Hyatt wrote:

>On October 01, 1997 at 06:32:46, Chris Whittington wrote:
>
>>
>>CSTal always used to have one depth-based hash table with five bucket
>>attempts on clashes. After reading the threads in rgcc, I tried it out
>>with two hash tables, an always-replace and a depth-priority table.
>>
>>Some test positions get faster, some get slower, but, on the whole,
>>there's an improvement.
>>
>>Until, that is, the hash gets overloaded, when the old depth-only table
>>starts to come out on top.
>>
>>Is this a generally observed phenomenon ?
>>
>>Chris
>
>yes.  I have two hash algorithms for Crafty.  I use Ken's two-tiered
>approach normally, because it is very fast execution-wise, and simple
>to understand.  I include this version in the "released" source code,
>and use this version on ICC exclusively.
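As an aside, the two-tiered idea is simple enough to sketch. The following is only an illustrative toy in Python (names and details are mine, not Crafty's actual code): each index has a depth-preferred slot, displaced only by an entry searched at least as deep, plus an always-replace slot that catches everything else.

```python
# Illustrative sketch of a two-tier transposition table (not Crafty's code):
# tier 1 keeps the deepest entry seen for a slot, tier 2 always replaces.

class TwoTierTable:
    def __init__(self, size):
        self.size = size
        self.depth_tier = [None] * size   # replaced only if new depth >= stored depth
        self.always_tier = [None] * size  # unconditionally replaced

    def store(self, key, depth, value):
        i = key % self.size
        entry = (key, depth, value)
        old = self.depth_tier[i]
        if old is None or depth >= old[1]:
            self.depth_tier[i] = entry    # new entry wins the depth-preferred slot
        else:
            self.always_tier[i] = entry   # shallower entries still survive in tier 2

    def probe(self, key):
        i = key % self.size
        for entry in (self.depth_tier[i], self.always_tier[i]):
            if entry is not None and entry[0] == key:
                return entry[2]
        return None
```

Probing costs at most two lookups at one fixed index, which is why the scheme is fast and simple.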

Can we assume from this that there are two versions of Crafty, one that
gets released, and one that doesn't ?

>
>The other approach, which goes by multiple names depending on exactly
>how it is implemented (rehashing as I used in Cray Blitz, which is
>likely the best of the best approaches), buckets, and chains, all
>offer advantages when the table becomes saturated.  Don Beal tested
>this in an ICCA article and found exactly the same thing.
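The multi-probe family can be sketched in the same toy style (again my own illustration, not Cray Blitz): a position may live in any of a handful of consecutive slots, and a store evicts the shallowest occupant, so deep entries persist as the table saturates.

```python
# Illustrative sketch of an N-way multi-probe (bucket-style) table:
# on a store, the shallowest of N candidate slots is evicted, so entries
# with large search depth survive longer under heavy load.

PROBES = 5  # e.g. CSTal's five bucket attempts

class MultiProbeTable:
    def __init__(self, size):
        self.size = size
        self.slots = [None] * size

    def _indices(self, key):
        base = key % self.size
        return [(base + j) % self.size for j in range(PROBES)]

    def store(self, key, depth, value):
        victim = None
        for i in self._indices(key):
            e = self.slots[i]
            if e is None or e[0] == key:
                victim = i                # empty slot, or same position: reuse it
                break
            if victim is None or e[1] < self.slots[victim][1]:
                victim = i                # otherwise remember the shallowest occupant
        self.slots[victim] = (key, depth, value)

    def probe(self, key):
        for i in self._indices(key):
            e = self.slots[i]
            if e is not None and e[0] == key:
                return e[2]
        return None
```

The extra probes cost a little time per node, which is the trade-off against the two-lookup scheme above when memory is plentiful.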
>
>The only saving grace for me is that I have not hashed q-search nodes
>for way over a year.  When I compared hashing in the q-search with
>hashing the normal search only, I found it to be a few percent faster.
>The tree got a little bigger, but the search speeded up enough to
>offset this and even give me a couple of percent more in speed.  A
>side-effect of this is that since I don't hash q-search nodes, my hash
>table is much less stressed than the programs that do.  I'd guess that
>for middlegame positions, at least 50% of the nodes would be
>classified as "leaf" or "quiescence".  And they never make it into
>the table.
>
>So I get by with smaller tables.
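The arithmetic behind "smaller tables" is worth spelling out. With invented round numbers (not Crafty's measurements): if about half the nodes are quiescence nodes and they are never stored, the table sees half the write traffic, so it behaves roughly like a table twice the size.

```python
# Toy illustration with invented numbers (not Crafty's measurements):
# skipping hash stores in the quiescence search cuts table traffic by
# the quiescence fraction of the tree.

def tt_stores(total_nodes, qsearch_fraction, hash_qsearch):
    """Number of nodes written to the transposition table."""
    if hash_qsearch:
        return total_nodes
    return int(total_nodes * (1 - qsearch_fraction))
```

At 50% quiescence nodes, a million-node search stores only half a million entries, which is the "less stressed" effect described above.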
>
>But the idea of multiple probes is significantly superior to this
>two-tier approach, *if* you don't have enough memory and the table
>becomes over-subscribed.

My guess is that, given enough memory, the value of the 'second'
always-replace table is in finding hash matches (and continuation
moves) in the deeper parts of the search that would normally never get
hashed, through failing the depth requirement of the depth-priority
table. These matches would presumably be occurring in relatively
adjacent parts of the tree ? I.e. a sort of short-term memory in the
deep regions .... ?

Presumably, then, the search stalling earlier with the second table in
place comes about from more overwriting hits in the depth-priority
table, losing important guidance info sooner than if the depth-priority
table were alone and therefore larger.

Hmmm, this concentrating on the search now is going to tax my brain ....
:)

Chris




Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.