Computer Chess Club Archives



Subject: Re: Hash collisions with deep searches

Author: Robert Hyatt

Date: 18:29:07 12/22/02



On December 22, 2002 at 06:44:04, Frank Phillips wrote:

>On December 21, 2002 at 13:57:09, Vincent Diepeveen wrote:
>
>>On December 20, 2002 at 22:05:14, Walter Faxon wrote:
>>
>>>On December 20, 2002 at 05:40:35, Gerd Isenberg wrote:
>>>
>>>>On December 19, 2002 at 16:50:32, Dann Corbit wrote:
>>>>
>>>>>Suppose that we are searching at a long time control, and the hash table rapidly
>>>>>fills up.  This can throw move ordering in the toilet if we can't find
>>>>>transpositions any more.
>>>>>
>>>>>What sort of replacement scheme is best for very long searches (several
>>>>>minutes)?
>>>>
>>>>Hi Dann,
>>>>
>>>>I use a two table approach.
>>>>
>>>>A big primary one, where I replace one of eight entries with the lowest
>>>>timestamp/draft/someFlags priority.  Exact entries become a bit more resistant
>>>>to overwriting, the same for the leftmost successors of every root move (PV and
>>>>refutation lines).
>>>>
>>>>A smaller secondary one, with an always-replace scheme.
>>>>
>>>>In general I don't store or probe in qsearch.  I use IID in __every__ interior
>>>>node without a move hint from the hash table.
>>>>
>>>>Regards,
>>>>Gerd
>>>
>>>
>>>Hi, Gerd.
>>>
>>>Good idea about IID!
>>>
>>>I have no chess program, but it seems to me that one uses hashing for two
>>>reasons:  (1) to save information that was expensive to compute but may not be
>>>immediately useful (closer to the root), and (2) to save information that wasn't
>>>very expensive to compute but may be useful very soon (close to the current
>>>node).  That's why a two-table approach (or a one-table, two-types-of-entry
>>>approach) is so often preferred.
>>
>>Not really.
>>
>>First there was the concept most programs used: a single probe into one
>>transposition table.  Then the two-table concept was invented, but long before
>>that, eight-probe schemes already existed.
>>
>>Yet recently cache lines (starting with DDR RAM and RDRAM) have grown so much
>>in length that you can implement, for free and without risk, a scheme of
>>4 probes in a row.
>>
>>That obviously works a lot better than 2 slow accesses to RAM at
>>random positions.
>>
>
>I am confused.  Memory access for hash tables should be random, yes?
>Nevertheless, won't the following be together in memory?  There could be more
>elements within the structure, of course.
>
>
>typedef struct trans_record {
>	TransRecordT top;	/* primary entry */
>#if defined TWO_LEVEL_HASH_TABLE
>	TransRecordT bottom;	/* second entry, used only with the two-level scheme */
>#endif
>} HashRecordT;

Of course it will.  In fact, this is my current hash table "entry":

typedef struct {
  TABLE_ENTRY prefer;      /* depth-preferred entry */
  TABLE_ENTRY always[2];   /* two always-store entries */
} HASH_ENTRY;

One entry "depth-preferred" as always, two entries "always store"

They are in 48 consecutive bytes of memory.
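
For illustration, a probe over a bucket like this might look something like the
sketch below.  This is not the code from the post; the entry fields and the
HashProbe() helper are hypothetical, and only the bucket layout (one
depth-preferred slot plus two always-store slots in adjacent memory) follows
the description above.  The point is that after the first random access the
whole 48-byte bucket is normally sitting in cache, so checking the extra slots
costs almost nothing, which is the same observation Vincent makes above about
several probes in a row being nearly free.

/* hypothetical 16-byte entry: signature, draft, score, flags */
typedef struct {
  unsigned long long key;      /* full hash signature               */
  short depth;                 /* draft of the stored search        */
  short value;                 /* score (exact value or bound)      */
  int flags;                   /* bound type, age, etc.             */
} TABLE_ENTRY;

typedef struct {
  TABLE_ENTRY prefer;          /* depth-preferred slot              */
  TABLE_ENTRY always[2];       /* two always-store slots            */
} HASH_ENTRY;                  /* 48 consecutive bytes              */

/* hypothetical probe: one random access, then two nearly free checks */
static TABLE_ENTRY *HashProbe(HASH_ENTRY *table, unsigned long long mask,
                              unsigned long long hashkey) {
  HASH_ENTRY *bucket = table + (hashkey & mask);
  if (bucket->prefer.key == hashkey)
    return &bucket->prefer;
  if (bucket->always[0].key == hashkey)
    return &bucket->always[0];
  if (bucket->always[1].key == hashkey)
    return &bucket->always[1];
  return 0;                    /* miss */
}

On the store side, the usual rule for such a layout is to overwrite the
depth-preferred slot only if the new draft is at least as large as the one
already stored there, and otherwise to fall back to one of the always-store
slots.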


>
>
>>>Reason (2) implies that some sort of hashing should be done even in quiescence
>>>search.  The problem with qsearch hashing nowadays is the memory caching
>>>penalty: if you have to wait on a memory access that might only help
>>>a little, you might as well do the search directly.
>>>
>>>Maybe one might use a very small hash table (10K entries, say), reserved for
>>>qsearch nodes?  (Or nodes beyond a certain depth or with reduced material.)
>>>Particularly if you could freeze it all into the cache...
>>>
>>>I think an older ICCAJ paper detailed experiments with hash node type
>>>allocation, but without reference to memory caching penalties.
>>>
>>>-- Walter
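
Walter's idea of a tiny, cache-resident table reserved for qsearch nodes could
be sketched roughly as follows.  The names, sizes and fields here are
hypothetical, chosen only to show the shape of the idea: a small, power-of-two
sized, always-replace table (here 8192 entries of 16 bytes, 128KB) that is
probed and stored only inside the quiescence search, and is small enough to
stay largely resident in cache.

#define QHASH_SIZE 8192                  /* power of two, ~128KB total */

typedef struct {
  unsigned long long key;                /* hash signature             */
  short value;                           /* qsearch score              */
  unsigned char flag;                    /* bound type                 */
} QHASH_ENTRY;

static QHASH_ENTRY qhash[QHASH_SIZE];

/* hypothetical probe, called only from qsearch */
static int QHashProbe(unsigned long long hashkey, int *value, int *flag) {
  QHASH_ENTRY *e = &qhash[hashkey & (QHASH_SIZE - 1)];
  if (e->key != hashkey)
    return 0;                            /* miss */
  *value = e->value;
  *flag = e->flag;
  return 1;
}

/* hypothetical store: always replace whatever is there */
static void QHashStore(unsigned long long hashkey, int value, int flag) {
  QHASH_ENTRY *e = &qhash[hashkey & (QHASH_SIZE - 1)];
  e->key = hashkey;
  e->value = (short) value;
  e->flag = (unsigned char) flag;
}

Whether this wins anything depends on exactly the caching trade-off Walter
mentions: the table only pays off if its hit rate buys back more than the
cache pressure it adds.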


