Computer Chess Club Archives



Subject: Re: Hashtables: is larger always better?

Author: Robert Hyatt

Date: 18:21:07 09/26/01

On September 26, 2001 at 15:38:49, Andrew Dados wrote:

>On September 26, 2001 at 14:56:02, Robert Hyatt wrote:
>
>>On September 26, 2001 at 14:40:53, Gian-Carlo Pascutto wrote:
>>
>>>On September 26, 2001 at 13:05:43, Robert Hyatt wrote:
>>>
>>>>I really don't want to test with smaller keys.  When I tried 32 bits in the
>>>>tests that Stanback, I, and others ran, it was horrible: collisions every
>>>>second.  I didn't think the search could stand that.  However, I have never
>>>>tried to determine how many collisions (read: bogus scores) the search can
>>>>tolerate with no ill side-effects.  That would be a _very_ good paper, which
>>>>I suppose I will write if nobody else does...
>>>
>>>For some anecdotal data:
>>>
>>>Sjeng has been using 32-bit keys for normal chess for quite some time
>>>and I don't seem to crash & burn (*). It didn't seem to change much going
>>>from the Cyrix 120 to the Athlon 1000 either.
>>>
>>>However! If I use a large opening book and do not disable probing
>>>it after the opening, I _have_ gotten collisions, and several times
>>>at that! (And unfortunately, in that case a _single_ collision will
>>>absolutely kill you.)
>>>
>>>(*) I discovered recently that in about 5-15% of cases I was
>>>getting bogus evaluations back in crazyhouse chess due to a hashing
>>>error. It _was_ producing bogus scores in the search, but 'fixing'
>>>it doesn't seem to have affected the strength of my program. Amazing,
>>>isn't it?
>>>
>>>--
>>>GCP
>>
>>
>>The last thing is interesting.  It still has my curiosity up to see just how
>>many "errors" are required before the search falls apart.
>
>Two loose remarks here:
>
>- Assuming roughly equal distributions of fail-high and fail-low scores in the
>hash table, a single error will amount to 'no error' about 50% of the time.
>
>- A false fail-low or fail-high whose score is backed up to the root will
>result in a re-search with different bounds, which will quite likely weed out
>hash entries whose bounds are no longer adequate. So maybe simply counting
>hash entries with 'inadequate bounds' during the re-search can give some
>insight into how many nodes in total are 'lethal nodes' that change the score
>on a hash hit alone.
>
>-Andrew-
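
For a rough back-of-envelope on the collision rates quoted above: suppose the
search visits about N positions per second and hash signatures are b bits
wide.  Treating every visited position as a potential collision partner for
every other (which somewhat overstates things, since the table only holds a
bounded number of entries), the birthday bound puts the expected number of
colliding pairs at roughly

    E[\text{collisions}] \approx \binom{N}{2}\,2^{-b} \approx \frac{N^2}{2^{b+1}}

With a hypothetical N = 10^6 and b = 32 that is about 10^12 / 2^33, i.e. some
120 collisions per second, consistent with "collisions every second" above.
With b = 64 the same figure drops to about 3 x 10^-8 per second, on the order
of one collision per year of continuous searching.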


That is an interesting question.  I can see collisions causing a fail-high
that backs up to the root and forces a re-search with new bounds, which then
fails low.  That would cause me problems.  I can also see collisions causing
a fail-low to be erroneously backed up to the root, so that the search misses
a change to a new best move.
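
A minimal sketch of the counting Andrew suggests, in C with a hypothetical
table layout (names and sizes here are illustrative, not Crafty's or Sjeng's
actual code): the probe classifies each hash hit by whether its stored bound
can resolve the current window, and counts the hits it has to ignore, the
same population a re-search with shifted bounds would weed out.

    #include <stdint.h>

    enum { BOUND_EXACT, BOUND_LOWER, BOUND_UPPER };  /* fail-high stores LOWER,
                                                        fail-low stores UPPER */
    typedef struct {
        uint64_t key;     /* full 64-bit signature; a 32-bit field is what
                             invites collisions */
        int16_t  score;
        uint8_t  depth;
        uint8_t  bound;
    } TTEntry;

    #define TT_BITS 20                      /* 2^20 entries, illustrative size */
    static TTEntry tt[1u << TT_BITS];

    static unsigned long inadequate_hits;   /* hits whose bounds could not
                                               answer the current window */

    /* Probe: returns 1 and sets *score if the entry resolves (alpha, beta). */
    int tt_probe(uint64_t key, int depth, int alpha, int beta, int *score)
    {
        TTEntry *e = &tt[key & ((1u << TT_BITS) - 1)];

        if (e->key != key || e->depth < depth)
            return 0;                       /* miss, or too shallow to trust */

        if (e->bound == BOUND_EXACT ||
            (e->bound == BOUND_LOWER && e->score >= beta) ||
            (e->bound == BOUND_UPPER && e->score <= alpha)) {
            *score = e->score;
            return 1;                       /* usable hit */
        }

        inadequate_hits++;                  /* a hit with 'inadequate bounds' */
        return 0;
    }

Comparing inadequate_hits between the original search and the re-search would
give a first estimate of how many 'lethal' entries the new bounds expose.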

Either case is problematic for me.  Note that in the PVS (null-window)
search, a fail-high is not accepted unless a real score (rather than a
fail-low) is found on the re-search.  But if the root move fails high on the
initial beta value, I will play that move no matter what happens on the
re-search, unless yet another move fails high on top of it...
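
A sketch of that root policy in C (hypothetical names throughout; search(),
make_move(), and unmake_move() are assumed externals rather than any engine's
real API, and this is not Crafty's actual code): a move that fails high on
the null window is promoted only if the full-window re-search returns a real
score, so a one-off bogus fail-high caused by a collision gets discarded
rather than played.

    /* Assumed externals: declarations only, not any engine's real API. */
    typedef struct Position Position;
    int  search(Position *pos, int alpha, int beta, int depth);
    void make_move(Position *pos, int move);
    void unmake_move(Position *pos, int move);

    /* Root PVS sketch of the policy described above. */
    int root_pvs(Position *pos, const int *moves, int nmoves,
                 int alpha, int beta, int depth, int *best_move)
    {
        int best = alpha;

        for (int i = 0; i < nmoves; i++) {
            int score;

            make_move(pos, moves[i]);
            if (i == 0) {
                score = -search(pos, -beta, -alpha, depth - 1);
            } else {
                /* Null-window probe around the best score so far. */
                score = -search(pos, -best - 1, -best, depth - 1);
                if (score > best) {
                    /* Fail-high on the null window: re-search with the full
                       window.  Only a real score (> best) promotes the move;
                       if the re-search fails low, the earlier fail-high is
                       dropped as noise (e.g. a collision). */
                    score = -search(pos, -beta, -best, depth - 1);
                }
            }
            unmake_move(pos, moves[i]);

            if (score > best) {
                best = score;
                *best_move = moves[i];
                if (best >= beta)
                    break;  /* fail-high on the initial beta: this move gets
                               played regardless of any later re-search,
                               unless another move fails high on top of it */
            }
        }
        return best;
    }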


