Computer Chess Club Archives



Subject: Re: hash entry replacement schemes (more)

Author: José Carlos

Date: 08:35:54 09/15/02



On September 15, 2002 at 10:18:45, Robert Hyatt wrote:

>On September 15, 2002 at 07:46:10, José Carlos wrote:
>
>>On September 14, 2002 at 23:37:59, Robert Hyatt wrote:
>>
>>>On September 14, 2002 at 23:34:58, Robert Hyatt wrote:
>>>
>>>>On September 14, 2002 at 19:19:43, martin fierz wrote:
>>>>
>>>>>On September 14, 2002 at 18:27:13, Miguel A. Ballicora wrote:
>>>>>
>>>>>>On September 14, 2002 at 15:39:54, martin fierz wrote:
>>>>>>
>>>>>>>On September 14, 2002 at 13:27:12, Tony Werten wrote:
>>>>>>>
>>>>>>>>On September 14, 2002 at 13:00:09, scott farrell wrote:
>>>>>>>>
>>>>>>>>>I have been reading plenty on hash replacement schemes.
>>>>>>>>>
>>>>>>>>>I have read Breuker's thesis, and done some (slow) searching through the CCC
>>>>>>>>>archives.
>>>>>>>>>
>>>>>>>>>I am going over my code and redoing the hashing parts, only to realise I have
>>>>>>>>>broken IID (internal iterative deepening).
>>>>>>>>>
>>>>>>>>>Now I am unsure of what I should be aiming for in my hashing code.
>>>>>>>>>
>>>>>>>>>I am also unsure of exactly what multi-probing the hash tables is. I am guessing
>>>>>>>>>there is still only one hash table, and different equations to go from
>>>>>>>>>key/checkkey to the hash table index, so that there are several alternative
>>>>>>>>>slots in which a hash entry for a position could appear. Is multi-probing just
>>>>>>>>>aimed at reducing collisions, or is the idea to store more than one hash entry
>>>>>>>>>for popular positions (say, one "depth based" and one "replace always")?
>>>>>>>>>
>>>>>>>>>I think I am correct that I need at least one of the probes to be "replace
>>>>>>>>>always" to get IID to work. What else do I need to guarantee that the best
>>>>>>>>>moves from IID get backed up to the main search?
>>>>>>>>>
>>>>>>>>>When people say "replace always", do they mean that literally: just store it
>>>>>>>>>all the time? It seems overly simplistic, but I tried it, and it works pretty
>>>>>>>>>much the same as my code below that tries to keep the deeper entries.
>>>>>>>>>
>>>>>>>>>Any help appreciated.
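
For concreteness: one common reading of "multi-probing" is exactly the guess
above, a single table in which a position may sit in any of a few slots derived
from the same key, often just consecutive ones. A minimal sketch in C, with all
names and sizes illustrative rather than taken from any particular engine:

  /* A minimal multi-probe sketch: one table, a few candidate slots per
     position.  All names and sizes are illustrative. */
  #include <stdint.h>

  #define TABLE_SIZE (1 << 20)          /* must be a power of two */
  #define NUM_PROBES 4

  typedef struct {
      uint64_t key;                     /* full key, used to verify a hit */
      int16_t  draft;                   /* depth the entry was searched to */
      int16_t  score;
      uint16_t move;
  } HashEntry;

  static HashEntry table1[TABLE_SIZE];

  HashEntry *multi_probe(uint64_t key) {
      uint64_t base = key & (TABLE_SIZE - 1);
      for (int i = 0; i < NUM_PROBES; i++) {
          HashEntry *e = &table1[(base + i) & (TABLE_SIZE - 1)];
          if (e->key == key)
              return e;                 /* found this position */
      }
      return 0;                         /* miss; a store would pick one of
                                           these slots to overwrite */
  }
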
>>>>>>>>
>>>>>>>>What most people do is have 2 tables. In the first you only store if the
>>>>>>>>search depth >= table depth (and you move the former entry to the second
>>>>>>>>table). If not, then store in the 2nd table.
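
In code, that store rule looks something like the following; a minimal sketch
in C, with illustrative names (dp = depth-preferred, ar = always-replace) that
are not taken from any actual engine:

  /* Store rule for the two-table scheme described above.  Illustrative
     names and sizes only. */
  #include <stdint.h>

  #define DP_SIZE (1 << 20)             /* power of two */
  #define AR_SIZE (1 << 20)

  typedef struct {
      uint64_t key;
      int16_t  draft;                   /* depth the entry was searched to */
      int16_t  score;
      uint16_t move;
  } HashEntry;

  static HashEntry dp_table[DP_SIZE];   /* depth-preferred */
  static HashEntry ar_table[AR_SIZE];   /* always-replace  */

  void store(uint64_t key, int draft, int score, unsigned move) {
      HashEntry new_e = { key, (int16_t)draft, (int16_t)score,
                          (uint16_t)move };
      HashEntry *dp = &dp_table[key & (DP_SIZE - 1)];
      if (draft >= dp->draft) {
          /* Deep enough for the depth-preferred slot; the displaced
             entry moves to the always-replace table (indexed by its
             own key) instead of being thrown away. */
          ar_table[dp->key & (AR_SIZE - 1)] = *dp;
          *dp = new_e;
      } else {
          /* Too shallow: overwrite the always-replace slot
             unconditionally. */
          ar_table[key & (AR_SIZE - 1)] = new_e;
      }
  }
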
>>>>>>>
>>>>>>>
>>>>>>>i don't know about most people. crafty does that. i tried it in my checkers
>>>>>>>program, and it was always worse than using a single table. this is just an
>>>>>>>observation, not an explanation :-)
>>>>>>>i remember asking about this and IIRC dieter buerssner replied that he had also
>>>>>>>toyed with 2 tables and it didnt work for him either.
>>>>>>
>>>>>>Do you probe more than once? If you probe twice, it is similar to having two
>>>>>>tables.
>>>>>
>>>>>i probe twice. why would that be similar to the two-table approach?
>>>>>i always thought that the point of having two tables was to make sure that in
>>>>>the table for the important entries (close to the root) nothing ever gets
>>>>>overwritten. and in the second table you can use a simple scheme like always
>>>>>overwrite, and it won't hurt you because there are no important entries there.
>>>>>i'm always overwriting the less important of the two entries i probe. which can
>>>>>still be important - although it's rather unlikely, i admit. BTW, my single
>>>>>table needs 5-10% less nodes than the double table for my test set. and it's
>>>>>much simpler...
>>>
>>>This suggests a bug in your two-table approach.
>>>
>>>They really should be pretty close.  And if you copy the approach I use,
>>>it should be exactly as efficient.  If I replace the entry in the
>>>depth-preferred table, then I move the replaced entry to the always store
>>>table.  If not, I just store it in the always store.  In effect, I have a
>>>set of two entries, plus a third I need to add.  I keep the third plus the
>>>best of the other two.
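
The probe side of such a scheme simply checks both tables in turn; a sketch,
reusing the illustrative declarations from the store sketch earlier in the
thread:

  /* Probe both tables: the depth-preferred table first, then the
     always-replace table.  Uses the declarations from the store
     sketch above (illustrative names, not from Crafty itself). */
  HashEntry *probe(uint64_t key) {
      HashEntry *e = &dp_table[key & (DP_SIZE - 1)];
      if (e->key == key)
          return e;                     /* deep entry available */
      e = &ar_table[key & (AR_SIZE - 1)];
      if (e->key == key)
          return e;                     /* recent shallow entry */
      return 0;                         /* miss in both tables */
  }
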
>>
>>  I've never tried it that way (I will), but I don't understand why it should
>>be better. The always-replace table entry will probably soon be replaced, so it
>>won't help much to keep a depth-preferred node there. The always-replace table
>>is mainly useful for locality; I mean, it's well known that in nearby branches
>>at low (remaining) depth many transpositions happen, and that's where the a.r.
>>table helps, I think. So I would say the old entry you had in the d.p. table
>>will be replaced long before it has a chance to be useful.
>>  Anyway, I'll try it and report the difference here.
>>
>>  José C.
>
>The reason it seems to be better (for me) is that you can implement this two
>ways:
>
>1.  If the new draft is greater than the draft in the "depth preferred" table,
>you replace, losing the entry in the depth-preferred table and keeping the
>probably worse entry in the always-replace table.
>
>2.  Replace the least useful entry of the two in the table.
>
>It seems pretty obvious to me that (2) is going to be more efficient than
>(1).  It's the same principle that is used in a 2-way set-associative cache.
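
Pictured that way, option (2) is a single table of two-entry buckets. A sketch
in C, where "least useful" is taken to mean "shallower draft", which is only
one possible criterion:

  /* Option (2) as one table of two-entry buckets, analogous to a
     2-way set-associative cache.  "Least useful" is taken here to
     mean "shallower draft", which is only one possible criterion. */
  #include <stdint.h>

  #define BUCKETS (1 << 20)

  typedef struct {
      uint64_t key;
      int16_t  draft;
      int16_t  score;
      uint16_t move;
  } HashEntry;

  static struct { HashEntry slot[2]; } table2[BUCKETS];

  void store2(uint64_t key, int draft, int score, unsigned move) {
      HashEntry *s = table2[key & (BUCKETS - 1)].slot;
      HashEntry *victim;
      if (s[0].key == key)              /* same position: overwrite it */
          victim = &s[0];
      else if (s[1].key == key)
          victim = &s[1];
      else                              /* evict the shallower entry */
          victim = (s[0].draft <= s[1].draft) ? &s[0] : &s[1];
      victim->key   = key;
      victim->draft = (int16_t)draft;
      victim->score = (int16_t)score;
      victim->move  = (uint16_t)move;
  }
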

  Under that definition, it's obvious that (2) is more efficient. But how do you
know which entry is "the least useful entry"? That's what I fail to understand.
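
  One answer I can imagine, for what it's worth: fold a search "age" into the
comparison, so that an entry left over from a previous search is considered
less useful than a current one, and among entries of the same age the shallower
draft loses. A sketch, where the age field and the counter are assumptions on
my part rather than anything stated above:

  /* One possible "usefulness" comparison: entries from an earlier
     search lose to current ones; otherwise the shallower draft
     loses.  The age field and counter are illustrative assumptions. */
  #include <stdint.h>

  typedef struct {
      uint64_t key;
      int16_t  draft;
      int16_t  score;
      uint16_t move;
      uint8_t  age;                     /* search number the entry is from */
  } AgedEntry;

  static uint8_t current_age;           /* bumped once per search */

  /* Returns nonzero if a is less useful than b. */
  int less_useful(const AgedEntry *a, const AgedEntry *b) {
      int a_stale = (a->age != current_age);
      int b_stale = (b->age != current_age);
      if (a_stale != b_stale)
          return a_stale;               /* stale entries lose first  */
      return a->draft < b->draft;       /* then shallower drafts lose */
  }
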

  José C.


