Computer Chess Club Archives



Subject: Re: Has anyone tried this in their hash table?

Author: Heiner Marxen

Date: 07:17:11 08/17/01



On August 17, 2001 at 01:57:33, Artem Pyatakov wrote:

>I read somewhere a while ago about a technique that was used for a different
>tree-searching game (I am not sure which one it was) and was wondering if anyone
>here has tried to apply it to chess.
>
>Here is the idea that was presented:
>
>As far as I know, most people use two hash tables in their programs - one is
>"always replace" the second one is "depth-is-greater-or-equal replace".
>
>Instead of using the "depth" as a criterion for replacing things in that second
>table, why not use the number of nodes searched as the criterion (i.e. replace
>if the number of nodes is greater or this is an OLD hash entry).

In my program "chest" (a problem solver) I did it that way at first, since it
appeared to me to be the right thing to do.
Later I changed that: now I estimate the larger number of nodes by the
larger depth, simply because I must store the depth anyhow, and would have
to use extra memory to also store the number of nodes.
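The depth-preferred replacement just described can be sketched roughly as
follows (the entry layout and field names are illustrative assumptions, not
taken from "chest" or any particular engine):

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical hash entry layout; fields are illustrative only. */
typedef struct {
    uint64_t key;    /* full hash key, 0 meaning "empty slot" here */
    int16_t  depth;  /* remaining search depth of the stored result */
    int16_t  score;
} HashEntry;

/* Depth-preferred replacement: the stored depth stands in as a
   proxy for the (unstored) node count of the subtree, so deeper
   entries are kept, and equal depth lets the newer entry win. */
int should_replace(const HashEntry *old, int new_depth)
{
    if (old->key == 0)              /* empty slot: always store */
        return 1;
    return new_depth >= old->depth; /* keep the deeper entry */
}
```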

I use a hash search & replacement scheme that is a bit different from the
prominent 2-table scheme: I use a single table, but I probe more than one
place.  That part of the hash code which is not used for indexing the first
(main) hash slot (i.e. the high bits of the hash) is used as an offset from
the main index, forming a secondary hash slot index.  From there
I do some more steps (4) of "quadratic rehashing", with steps 1, 1+2, 1+2+3,
etc.  From these candidate slots I take the first free slot, or, if there
is no free slot, I replace the one with the smaller depth.
(In fact, I decide between the first (main) and the last probed slot.)
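A minimal sketch of that probe sequence in C (table size and probe count
are illustrative assumptions; "chest" itself may differ in the details):

```c
#include <stdint.h>
#include <assert.h>

#define TABLE_BITS 20
#define TABLE_SIZE (1u << TABLE_BITS)
#define NUM_PROBES 6   /* main slot + secondary slot + 4 rehash steps */

/* Low bits of the hash index the main slot; the high bits give an
   offset to a secondary slot; from there quadratic rehashing adds
   cumulative steps 1, 1+2, 1+2+3, 1+2+3+4.
   (If the high bits happen to be 0, the secondary slot coincides
   with the main one; a real implementation might force a nonzero
   offset.) */
void probe_sequence(uint64_t hash, uint32_t slots[NUM_PROBES])
{
    uint32_t main_idx = (uint32_t)(hash & (TABLE_SIZE - 1));
    uint32_t offset   = (uint32_t)(hash >> TABLE_BITS);
    uint32_t sec      = (main_idx + offset) % TABLE_SIZE;

    slots[0] = main_idx;
    slots[1] = sec;
    uint32_t step = 0;
    for (int i = 1; i <= NUM_PROBES - 2; i++) {
        step += i;                            /* 1, 3, 6, 10 */
        slots[1 + i] = (sec + step) % TABLE_SIZE;
    }
}
```

The search then scans these candidates for a matching key or a free slot;
on a store with no free slot, the shallower of the candidates considered
for replacement is overwritten.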

According to my statistics this works quite well.  Probing multiple entries
helps to fill free slots (instead of replacing useful data), and does not
cost a significant amount of time (other overhead costs more).
Choosing among at least 2 candidates for replacement helps to retain
the more valuable nodes, and the guarantee that something is always
replaced helps with small localities.

>This idea makes sense to me because what we want the hash to do is save us from
>searching the MOST nodes. I think if one combines this with an aging mechanism
>this idea might fly.

( An aging mechanism is important to have, but I have no experience with that,
  since "chest" does only one big search, and then is done. )
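For completeness, a common form of such an aging mechanism can be sketched
as below.  This is a hedged illustration, not something from "chest" (which,
as said, does only one search): each entry records the generation of the
search that stored it, and stale entries are replaced regardless of depth.

```c
#include <stdint.h>
#include <assert.h>

/* Illustrative entry with a generation ("age") field. */
typedef struct {
    uint64_t key;
    uint8_t  age;    /* generation counter at store time */
    int16_t  depth;
} AgedEntry;

static uint8_t current_age;    /* bumped once per new root search */

void new_search(void) { current_age++; }

/* Entries left over from an earlier search are replaced outright;
   within the current search, fall back to depth-preferred. */
int replace_with_aging(const AgedEntry *old, int new_depth)
{
    if (old->age != current_age)
        return 1;
    return new_depth >= old->depth;
}
```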

Yes.  We want to save the most nodes.  But this is a somewhat fuzzy count:
we do not want to save those nodes which we searched when we _did_create_
the node, but those nodes which we would have to search if we had to
_recreate_ it.  Those two counts may differ quite significantly, since
both searches involve hash probes and hash hits, and those are bound to
differ.

E.g. imagine an empty hash table.  Do some (recursive) search of depth, say, 3.
After it we have stored one node of depth 3, some of depth 2, and even more
of depth 1.  Now imagine the top node, depth 3, is replaced by other data.
Now let's try a re-search: that node of depth 3 is not found, so it has
to be recomputed.  Now, how much work is this, measured in nodes to search?
Since the depth=2 nodes are still there, the new re-search stops quite early,
and is cheap compared with the initial search.
Here, the initial node number would misguide us about the amount of work
necessary to recreate the node.  And if the initial search found some
sub-nodes which the re-search does not find, the second search can be much
larger than the first.

>Has anyone tried this? What were the results?

Yes.  I prefer to use the memory for more nodes.  YMMV, of course.

>Thank you.
>
>Artem

You are welcome!

Heiner




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.