Computer Chess Club Archives



Subject: Re: Hash collisions with deep searches

Author: Frank Phillips

Date: 09:29:38 12/23/02



On December 23, 2002 at 12:04:03, Robert Hyatt wrote:

>On December 23, 2002 at 04:25:22, Frank Phillips wrote:
>
>>On December 22, 2002 at 21:29:07, Robert Hyatt wrote:
>>
>>>On December 22, 2002 at 06:44:04, Frank Phillips wrote:
>>>
>>>>On December 21, 2002 at 13:57:09, Vincent Diepeveen wrote:
>>>>
>>>>>On December 20, 2002 at 22:05:14, Walter Faxon wrote:
>>>>>
>>>>>>On December 20, 2002 at 05:40:35, Gerd Isenberg wrote:
>>>>>>
>>>>>>>On December 19, 2002 at 16:50:32, Dann Corbit wrote:
>>>>>>>
>>>>>>>>Suppose that we are searching at long time control, and the hash table rapidly
>>>>>>>>fills up.  This can throw move ordering in the toilet, if we can't find
>>>>>>>>transpositions any more.
>>>>>>>>
>>>>>>>>What sort of replacement scheme is best for very long searches (several
>>>>>>>>minutes)?
>>>>>>>
>>>>>>>Hi Dann,
>>>>>>>
>>>>>>>I use a two table approach.
>>>>>>>
>>>>>>>A big primary one, where I replace one of eight entries with the lowest
>>>>>>>timestamp/draft/someFlags priority. Exact entries become a bit more resistant
>>>>>>>to overwriting, and the same goes for the left-most successors of every root
>>>>>>>move (the PV and refutation lines).
>>>>>>>
>>>>>>>A smaller secondary one, with an always-replace scheme.
>>>>>>>
>>>>>>>In general I don't store or probe in qsearch. I use IID in __every__ interior
>>>>>>>node that has no move hint from the hash table.
>>>>>>>
>>>>>>>Regards,
>>>>>>>Gerd
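
(A minimal C sketch of how I read the two-table scheme above - the field names,
table sizes and the priority() helper are my own guesses, not Gerd's code:)

#define PRIMARY_SLOTS   (1 << 18)   /* sizes picked arbitrarily for the sketch  */
#define SECONDARY_SLOTS (1 << 14)
#define BUCKET_SIZE     8

typedef struct {
    unsigned long long key;    /* zobrist signature                             */
    short draft;               /* remaining depth when the entry was stored     */
    short age;                 /* timestamp of the search that stored it        */
    short flags;               /* EXACT / LOWER / UPPER, pv-ness, ...           */
    short move;
    int   score;
} TTEntry;

static TTEntry primary[PRIMARY_SLOTS][BUCKET_SIZE]; /* big table, 8 per bucket  */
static TTEntry secondary[SECONDARY_SLOTS];          /* small always-replace one */

/* hypothetical priority: newer, deeper and exact entries rank higher */
static int priority(const TTEntry *e)
{
    return (e->age << 16) + (e->draft << 4) + e->flags;
}

static void tt_store(const TTEntry *e)
{
    TTEntry *bucket = primary[e->key % PRIMARY_SLOTS];
    TTEntry *victim = &bucket[0];
    int i;

    for (i = 1; i < BUCKET_SIZE; i++)            /* replace the lowest-priority */
        if (priority(&bucket[i]) < priority(victim))   /* of the eight entries  */
            victim = &bucket[i];
    *victim = *e;

    secondary[e->key % SECONDARY_SLOTS] = *e;    /* secondary: always replace   */
}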
>>>>>>
>>>>>>
>>>>>>Hi, Gerd.
>>>>>>
>>>>>>Good idea about IID!
>>>>>>
>>>>>>I have no chess program but it seems to me that one uses hashing for two
>>>>>>reasons:  (1) to save information that was expensive to compute but may not be
>>>>>>immediately useful (closer to the root), and (2) to save information that wasn't
>>>>>>very expensive to compute but may be useful very soon (close to the current
>>>>>>node).  That's why a two-table approach (or a one-table approach with two
>>>>>>types of entry) is so often preferred.
>>>>>
>>>>>Not really.
>>>>>
>>>>>First there was the concept most people used, with a single transposition-table
>>>>>entry per slot. Then the two-table concept was invented, though 8-probe schemes
>>>>>existed long before that.
>>>>>
>>>>>More recently, cache lines (starting with DDR RAM and RDRAM) have grown so much
>>>>>in length that you can, for free and without risk, implement a scheme of 4
>>>>>probes in a row.
>>>>>
>>>>>That obviously works a lot better than 2 slow accesses to RAM at random
>>>>>positions.
>>>>>
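
(I read the "4 probes in a row" as something like the sketch below - assuming a
16-byte entry, so that four entries share one 64-byte cache line; the aligned
attribute is GCC syntax and the names are mine:)

typedef struct {
    unsigned long long key;    /* 8 bytes: zobrist signature                 */
    unsigned long long data;   /* 8 bytes: packed move/score/draft/flags     */
} TTEntry4;                    /* 16 bytes per entry (assumed)               */

#define CLUSTER_COUNT (1 << 20)
#define CLUSTER_SIZE  4

/* each cluster of four entries is exactly 64 bytes; aligning the table keeps
   a cluster inside a single cache line, so the three extra probes cost no
   additional memory latency once the line has been loaded                   */
static TTEntry4 table4[CLUSTER_COUNT][CLUSTER_SIZE] __attribute__((aligned(64)));

static TTEntry4 *tt_probe4(unsigned long long key)
{
    TTEntry4 *cluster = table4[key & (CLUSTER_COUNT - 1)];
    int i;

    for (i = 0; i < CLUSTER_SIZE; i++)
        if (cluster[i].key == key)
            return &cluster[i];
    return 0;
}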
>>>>
>>>>I am confused.  Memory access for hash tables should be random - yes?
>>>>Nevertheless, will the following not be together in memory?  There could be more
>>>>elements within the structure, of course.
>>>>
>>>>
>>>>typedef struct trans_record {
>>>>	TransRecordT top;
>>>>#if defined TWO_LEVEL_HASH_TABLE
>>>>	TransRecordT bottom;
>>>>#endif
>>>>} HashRecordT;
>>>
>>>Of course it will.  In fact, this is my current hash table "entry":
>>>
>>>typedef struct {
>>>  TABLE_ENTRY prefer;
>>>  TABLE_ENTRY always[2];
>>>} HASH_ENTRY;
>>>
>>>One "depth-preferred" entry as always, and two "always store" entries.
>>>
>>>They are in 48 consecutive bytes of memory.
>>>
>>>
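
(With a 16-byte TABLE_ENTRY - my assumption, the post does not show its layout -
that struct is indeed 3 * 16 = 48 bytes, e.g.:)

typedef struct {
    unsigned long long word1;  /* packed move / draft / bound type / score   */
    unsigned long long word2;  /* signature used to verify the hit           */
} TABLE_ENTRY;                 /* 16 bytes (assumed layout)                  */

typedef struct {
    TABLE_ENTRY prefer;        /* the depth-preferred slot                   */
    TABLE_ENTRY always[2];     /* the two always-store slots                 */
} HASH_ENTRY;                  /* 3 * 16 = 48 consecutive bytes              */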
>>
>>Bob
>>
>>Does your testing indicate that 3-level tables are better?  I used to do it to
>>use more of the available memory, since I use a table size that is a power of
>>two (2^n) to avoid using the mod operator, but I was never convinced it was
>>better - table lookup and store are more complicated and can take slightly longer.
>>
>>Frank
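
(On the power-of-two point: with a table size of 2^n the slot index can be
computed with a mask instead of an integer divide, e.g.:)

#define TABLE_BITS 20
#define TABLE_SIZE (1u << TABLE_BITS)              /* a power of two */

/* with a power-of-two size the modulo reduces to a cheap AND;
   an arbitrary size would need the (slower) % operator instead */
static unsigned int tt_index(unsigned long long key)
{
    return (unsigned int)(key & (TABLE_SIZE - 1)); /* same as key % TABLE_SIZE */
}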
>
>
>I am not using a 3-level table.  I am doing _exactly_ what I have been
>doing for years, two tables.  I use the lower N bits to access the three
>entries as a single struct above.  I use the NEXT single bit to choose
>between the two always-store entries.  My approach produces _exactly_ the
>same node counts as the older separate-table approach, for obvious reasons,
>since the addressing is _identical_.  The only difference is that this is a
>couple of percent faster, since everything is in one or two cache lines rather
>than always being in two cache lines as in the older approach.

Nice - very clever.
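
If I follow the addressing correctly, it amounts to something like this sketch
(my own names, reusing the HASH_ENTRY/TABLE_ENTRY types from the struct you
quoted above; N is the table-size exponent):

#define N 20                                      /* log2(number of buckets) */

static HASH_ENTRY trans_ref[1 << N];              /* the single combined table */

static void tt_lookup(unsigned long long key,
                      TABLE_ENTRY **depth_pref, TABLE_ENTRY **always)
{
    /* the lower N bits select the 48-byte bucket ...                        */
    HASH_ENTRY *h = &trans_ref[key & ((1 << N) - 1)];

    *depth_pref = &h->prefer;

    /* ... and the next single bit selects which always-store slot to use,
       so the addressing - and hence the node counts - matches the old
       separate-tables scheme exactly; only the memory layout has changed    */
    *always = &h->always[(key >> N) & 1];
}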


