Computer Chess Club Archives



Subject: Re: question about hash tables

Author: Uri Blass

Date: 12:30:48 05/07/02



On May 07, 2002 at 13:34:34, Robert Hyatt wrote:

>On May 07, 2002 at 06:24:19, Uri Blass wrote:
>
>>I read in bruce moreland's site about hash tables
>>see http://www.seanet.com/~brucemo/topics/hashing.htm
>>
>>I try to use them first in my program only for better order of moves
>>and I try to use the algorithm that is in that site
>>
>>I think that there is some misleading information there
>>
>>The function RecordHash includes recording the best move but when the depth is 0
>>there is no best move that was leading to the position.
>>
>>When I use hash tables only for better order of moves then it seems to be
>>useless to record hash tables when the remaining depth is 0.
>
>This is wrong.  Depth=0 simply means you are at the frontier of the tree.
>You can store such position or you can not do so.  It depends on whether you
>like one or the other better.  If depth==0, I call Quiesce() and I _never_
>store hash stuff in the q-search...  I did at once, then tried it without doing
>so and found no real difference.  I therefore went for simpler is better and
>removed storing/probing from the q-search, which does lower the "strain" on
>memory and table entries when searches are long and tables are small...
>
>
>
>
>>
>>I also think that recording hash tables in the last plies is relatively
>>unimportant at long time control and it may be even better not to try it if I
>>use the scheme "always replace" because it is more important to remember the
>>best moves in the first plies.
>>
>>Am i right?
>
>Not necessarily.  Particularly if by "last few plies" you mean plies where
>remaining depth > 0 but less than some value N.  The _next_ iteration those
>hash entries will still help your move ordering significantly.

I agree that in the first iterations it is good to record hash entries at every
positive depth, but at a long time control most of the search time is not spent
in the first iterations, so those are not the iterations where it is important
to save time.
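To make the "hash tables only for better move ordering" idea concrete, here is a minimal sketch in C, loosely following the entry layout on Bruce Moreland's page. The names (`HashEntry`, `probe_best_move`, `record_hash`, `TABLE_SIZE`) and the tiny table size are my own assumptions for illustration, not anyone's actual engine code:

```c
#include <stdint.h>

/* Hypothetical entry layout: only what is needed for move ordering. */
typedef struct {
    uint64_t key;       /* Zobrist key of the stored position */
    int      depth;     /* remaining depth when the entry was stored */
    int      best_move; /* move to try first next time (0 = none) */
} HashEntry;

#define TABLE_SIZE 1024            /* small power of two for the sketch */
static HashEntry table[TABLE_SIZE];

/* Return the remembered best move for this position, or 0 if the slot
   is empty or holds a different position (key mismatch). */
int probe_best_move(uint64_t key) {
    HashEntry *e = &table[key & (TABLE_SIZE - 1)];
    return (e->key == key) ? e->best_move : 0;
}

/* "Always replace": overwrite whatever currently occupies the slot. */
void record_hash(uint64_t key, int depth, int best_move) {
    HashEntry *e = &table[key & (TABLE_SIZE - 1)];
    e->key = key;
    e->depth = depth;
    e->best_move = best_move;
}
```

The search would call `probe_best_move` at each node and, if it returns a move, search that move first; no score or bound is stored, so there is nothing to get wrong about fail-high/fail-low semantics while debugging.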

My point is that when searches are long and the tables are small, I expect
different positions with different keys to compete for the same hash entries if
I write too much. That is why I am considering not hashing nodes when the
remaining depth is less than some value N, where N should depend on the time
control and on the size of the hash tables.
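The gate being proposed above is just a depth test before storing; a sketch, where `min_depth` is the hypothetical N that would be tuned against the time control and table size:

```c
/* Store only when enough depth remains, so that cheap shallow entries
   do not evict deep ones in a small always-replace table.  min_depth
   is an assumed tunable (the "N" discussed above). */
int worth_storing(int remaining_depth, int min_depth) {
    return remaining_depth >= min_depth;
}
```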


>
>
>>
>>I prefer to start with something relatively simple and to check that I have no
>>bugs and only later to develop it to something more complicated and this is the
>>reason that I use "always replace"
>>
>>changing "always replace" to "only replace" when the remaining depth is big
>>enough seems to me simple to do when there is no problem to use information in
>>the hash tables about the best move for future search but replace only when the
>>depth is the same or bigger can cause problems later if I want to use
>>information of previous search about the best move because the depth in the
>>hashtables is wrong.
>>
>>Uri
>
>
>Nothing wrong with starting simple.  Always store entries in the normal search,
>don't in the q-search.

I already do that; I have never hashed nodes in the qsearch.
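For contrast with always-replace, here is a sketch of the "replace only when the depth is the same or bigger" scheme discussed above, again with an assumed entry layout. The comment notes the caveat raised earlier in the thread: depths left over from a previous search can block replacement, which is why engines that use this scheme typically add an age/generation field and always replace entries from older searches:

```c
#include <stdint.h>

/* Hypothetical entry with a depth field. */
typedef struct {
    uint64_t key;
    int      depth;
    int      best_move;
} Entry;

/* Depth-preferred store: keep the existing entry if it came from a
   deeper (more expensive) search.  Caveat: without an age field, a
   stale deep entry from a previous search can block new data. */
void store_depth_preferred(Entry *slot, uint64_t key, int depth, int best_move) {
    if (slot->depth > depth)
        return;                 /* deeper data already occupies the slot */
    slot->key = key;
    slot->depth = depth;
    slot->best_move = best_move;
}
```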

Uri



