Computer Chess Club Archives


Subject: Re: Efficient hash algorithm?

Author: Roberto Waldteufel

Date: 17:42:46 09/21/98



On September 21, 1998 at 20:32:51, Dan Newman wrote:

>On September 21, 1998 at 19:44:55, James Robertson wrote:
>
>>On September 21, 1998 at 19:33:18, John Coffey wrote:
>>
>>>>You should also have two hash tables, one with an "always overwrite" replacement
>>>>policy and another with a replacement policy based on depth. Almost everybody I
>>>>know agrees that this is the best way to do things on a PC.
>>>
>>>When I studied hashing for other types of applications, the rule would be that
>>>if you have two items with the same hash key, then you would store the second
>>>in the next available empty slot.
>>
>>It is just a waste of time. Most tables will hold so few positions that there
>>will be a lot of positions that could fill the same slot (i.e. have the same
>>index), and since no position is of any more importance than any other
>>position, there are no tangible rewards at all for doing this.
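
(As an aside, the scheme John describes is plain open addressing with
linear probing. A minimal sketch in C of the store side, with made-up
names and sizes, just to illustrate the mechanism:)

    /* Linear probing: on a collision, scan forward for the next
       empty slot, wrapping around at the end of the table. */
    typedef unsigned long long u64;         /* assumed 64-bit type */
    #define TABLE_SIZE 1048576              /* must be a power of two */

    typedef struct {
        u64 key;                            /* Zobrist key; 0 = empty */
        int score;
        int depth;
    } Entry;

    Entry table[TABLE_SIZE];

    void store(u64 key, int score, int depth)
    {
        unsigned long i = (unsigned long)(key & (TABLE_SIZE - 1));
        while (table[i].key != 0 && table[i].key != key)
            i = (i + 1) & (TABLE_SIZE - 1); /* try the next slot */
        table[i].key   = key;
        table[i].score = score;
        table[i].depth = depth;
    }
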
>
>There have been all sorts of different replacement policies tried, all with
>the notion that the information just gained about a position might well be
>more valuable than that currently stored in its corresponding hash table
>entry.  For instance, if a hash table entry is for a position that was
>searched to a shallow depth, we might well want to replace it with one that
>was searched to a greater depth--because its score is probably more accurate
>and the information was probably obtained at a higher cost.
>
>Many of us believe that a two-level hash table (each entry having two
>slots, one always replace, the other replace if the draft (depth searched)
>is greater) works well on the PC--perhaps better than anything else tried
>so far...
>
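
(To make the two-level idea concrete, here is a rough C sketch. The
names and sizes are mine, not from any particular program: each table
entry carries a depth-preferred slot and an always-replace slot, and a
probe checks both.)

    /* Two slots per entry: "deep" is kept unless the new search is at
       least as deep; "always" is overwritten unconditionally. */
    typedef unsigned long long u64;         /* assumed 64-bit type */

    typedef struct {
        u64 key;
        int score;
        int depth;
    } Slot;

    typedef struct {
        Slot deep;                          /* depth-preferred slot */
        Slot always;                        /* always-replace slot */
    } Entry;

    #define TABLE_SIZE 524288               /* power of two */
    Entry table[TABLE_SIZE];

    void store(u64 key, int score, int depth)
    {
        Entry *e = &table[key & (TABLE_SIZE - 1)];
        if (depth >= e->deep.depth) {       /* new draft deep enough? */
            e->deep.key = key; e->deep.score = score; e->deep.depth = depth;
        } else {                            /* otherwise always replace */
            e->always.key = key; e->always.score = score; e->always.depth = depth;
        }
    }

    int probe(u64 key, int *score, int *depth)
    {
        Entry *e = &table[key & (TABLE_SIZE - 1)];
        Slot *s = (e->deep.key == key)   ? &e->deep :
                  (e->always.key == key) ? &e->always : 0;
        if (!s) return 0;                   /* miss */
        *score = s->score; *depth = s->depth;
        return 1;                           /* hit */
    }
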
>Re-hashing can also work well depending on the architecture of the machine.
>I think Bob Hyatt has said that Cray Blitz "re-hashes" up to 7x, for
>instance (he's able to grab all 8 entries in one go).
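
(If I understand the Cray scheme, that amounts to treating eight
consecutive entries as one bucket: probe all eight, and on a store
evict the entry with the shallowest draft. A hedged sketch in C,
again with invented names:)

    /* Eight-way bucket: the low bits of the key choose a bucket of
       eight consecutive entries; a store replaces the shallowest. */
    typedef unsigned long long u64;         /* assumed 64-bit type */
    #define BUCKET_SIZE 8
    #define NUM_BUCKETS 131072              /* power of two */

    typedef struct {
        u64 key;
        int score;
        int depth;
    } Entry;

    Entry table[NUM_BUCKETS * BUCKET_SIZE];

    void store(u64 key, int score, int depth)
    {
        Entry *b = &table[(key & (NUM_BUCKETS - 1)) * BUCKET_SIZE];
        Entry *victim = &b[0];
        int i;
        for (i = 0; i < BUCKET_SIZE; i++) {
            if (b[i].key == key) { victim = &b[i]; break; } /* same position */
            if (b[i].depth < victim->depth) victim = &b[i]; /* shallowest so far */
        }
        victim->key = key; victim->score = score; victim->depth = depth;
    }

On hardware that can fetch all eight entries in one go, as Dan says the
Cray can, the extra probes would cost little more than a single one.
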
>
>-Dan.
>
>>
>>>This requires more data to keep track of
>>>where the next item is (could be a byte offset).
>>>
>>>John Coffey

Do you know if any systematic research or experimentation has been done on
replacement strategies? If each position gets a double entry, that halves the
number of positions the table can hold, and it doubles the access time unless
the first probe finds the position being searched for. Presumably the
advantages more than offset these negative effects. Do you know what kind of
speedup might be expected in going, say, from one table of 1,000,000 entries to
two tables of 500,000 entries each? I am interested because I may well try to
implement this in place of my present "one big table" approach.

Best wishes,
Roberto


