Computer Chess Club Archives



Subject: Re: How do you insure that Hash Table entries are correct?

Author: Dan Newman

Date: 13:51:34 09/17/00



On September 17, 2000 at 12:26:20, José Carlos wrote:

>On September 16, 2000 at 19:10:01, Dan Newman wrote:
>
>>On September 16, 2000 at 18:30:21, José Carlos wrote:
>>
>>>On September 16, 2000 at 17:04:23, Dan Newman wrote:
>>>
>>>>On September 16, 2000 at 12:25:27, Larry Griffiths wrote:
>>>>
>>>>>I have tried using a 64-bit hash key and storing it in each hash entry to insure
>>>>>that different positions that point to the same hash table entry can be
>>>>>verified.
>>>>>
>>>>>I also have a 32 byte smallboard that I use for debugging.  This smallboard has
>>>>>4 bits for each piece color and type, so I know for sure if the current position
>>>>>matches the hash table entry.
>>>>>
>>>>>The 64-bit hash key works most of the time, but sometimes different board
>>>>>positions produce the same hash key.
>>>>>
>>>>>Is there a fail-safe method other than my 32 byte smallboard for insuring that
>>>>>different board positions with the same hash keys can be resolved?
>>>>>
>>>>>Larry.
>>>>
>>>>I once tried to measure the error rate and got no errors at all with 64-bit
>>>>hashcodes over several hours of testing.  I was able to measure the error rate
>>>>for 32-bit hashcodes--that was about 1 false match/second (at perhaps 100k
>>>>probes/s).  I think someone came up with an estimate of (very approximately)
>>>>one error/day with a 64-bit hashcode at 100 knps--or was it 1 Mnps?  Anyway,
>>>>the error rate is very low and can mostly be ignored.  I do try to make sure
>>>>that such an error won't crash my program though...
>>>>
>>>>-Dan.
>>>
>>>  If you got 1 false match per second with 32-bit hashcodes, then you should
>>>have a false match every 2^32 seconds with 64-bit hashcodes, which is IMHO not
>>>a problem at all.
>>
>>2^32 seconds is ~136 years.  With an error rate that low I wouldn't bother
>>checking the move :).  I too have thought that adding a bit should reduce the
>>false match rate by a factor of two.  It seems obvious.  But there is a
>>problem.  If you start out with one bit (two entries), then the error rate will
>>be quite high, almost the probe rate, R.  So if we add 31 bits to this we
>>should get R errors in 2^31 seconds.  If the probe rate is 100k probes/s, that
>>would give us 4.6 x 10^-5 errors/s (with a 32-bit hashcode), but I get one
>>error/s.  So at least at that extreme the factor of two per bit doesn't hold.
>>It may well hold at 32 bits and higher though...
>
>  Well, note that your 1 per second rate is just an approximation. The exact rate
>would take a long time to calculate to a given precision.

Yes.  One per second is a very crude approximation.  It is highly dependent
on the position tested.  I think I ran it on WAC, which of course isn't a set
of ordinary positions.
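
(In case it's any use, one way such a measurement could be instrumented is
sketched below.  This is not my actual code, and the names -- HashEntry,
probe_debug(), the counters -- are made up: while debugging, keep a second,
independent 64-bit check key in each entry and count the probes where the
primary key matches but the check key doesn't.)

#include <stdint.h>

typedef struct {
    uint64_t key;        /* primary hash key stored with the entry     */
    uint64_t check_key;  /* independent second key, debugging use only */
    /* ... move, score, depth, flags ...                               */
} HashEntry;

static unsigned long probes, matches, false_matches;

/* Count a false match whenever the primary key agrees but the independent
   check key does not.  With 128 bits in total, the chance of both keys
   colliding at once is negligible, so this gives a usable estimate of the
   primary key's false-match rate. */
int probe_debug(HashEntry *table, unsigned long mask,
                uint64_t key, uint64_t check_key)
{
    HashEntry *e = &table[key & mask];

    probes++;
    if (e->key != key)
        return 0;                        /* miss */
    matches++;
    if (e->check_key != check_key)
        false_matches++;                 /* same key, different position */
    return 1;
}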

>  Anyway, I don't quite follow your starting point. I can't figure out what the error
>rate for a single bit is, but if you do 100k probes/s, then you have 1 error every
>100000 probes with 32 bits. So, with 1 bit you'd have 1 error every
>100000^(1/32), that is, every 1.4 probes or 0.000014 seconds. This number looks
>too high anyway, but it's difficult to guess what is the exact number.

Well, if there are only two hashcodes, 0 and 1, then about half the positions
will hash to each of them.  It seems to me that most of the time you will get
false matches, with the occasional correct match.  But it would be very
difficult to calculate with any great exactness.  I think it would be very
position dependent too.

I don't understand the 100000^(1/32) calculation.  I would have calculated this
as 2^31 errors / second.  That is if we start with 32 bits and get one error
per second, then we should get two errors/s with 31 bits, and so on down to
2^31 errors/s with one bit, which is of course impossible with only 100k
probes/s.  (I'm sure there's something wrong here :).)
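
Spelling out the arithmetic of that (admittedly naive) model: assume roughly R
errors/s with a 1-bit code and halve the rate for every bit added after that.
A throwaway program like the following (purely illustrative) prints the
predicted rates:

#include <stdio.h>

int main(void)
{
    const double R = 100000.0;       /* probe rate, probes per second */
    int bits;

    for (bits = 1; bits <= 64; bits++) {
        /* R errors/s at 1 bit, halved for every extra bit; a real rate
           could of course never exceed R itself. */
        double errors = R / (double)(1ULL << (bits - 1));
        printf("%2d bits: %g errors/s\n", bits, errors);
    }
    return 0;
}

At 32 bits it prints the ~4.6 x 10^-5 figure above, a long way below the ~1
error/s I actually measured, hence the suspicion that the simple halve-per-bit
extrapolation breaks down somewhere at the low end.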

>  About your estimation, after thinking about it for a while, I think I have
>the answer. We both have considered in our calculations "perfect" hash codes,
>I mean, hash codes such that they give you the best possible results. Neither
>your nor my hash codes are that good :( so our calculations look correct after
>all.

I think you're probably right.  Our calculations would probably work with
ideal hash codes and perfectly random positions.  Of course if the positions
were really random, we would get much higher error rates, since each hashcode
is oversubscribed by a huge factor (there are vastly more possible positions
than 2^64 codes).  Even with 64-bit hashcodes, almost every probe that matched
would be false.
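
For what it's worth, the sort of belt-and-braces check we've been talking about
(store the full 64-bit key in each entry, compare it on every probe, and don't
trust the hash move until it has been checked against the current position)
might look something like the sketch below.  Position, Move and
is_pseudo_legal() are stand-ins for whatever the engine actually uses:

#include <stddef.h>
#include <stdint.h>

struct Position;                 /* the engine's board, whatever it is */
typedef uint32_t Move;           /* 0 meaning "no move"                */

typedef struct {
    uint64_t key;                /* full 64-bit key of the stored position */
    Move     move;
    int16_t  score;
    uint8_t  depth;
    uint8_t  flags;
} HashEntry;

/* Assumed helper: does the move at least look playable in this position
   (right piece on the from-square, destination reachable, and so on)? */
int is_pseudo_legal(const struct Position *pos, Move move);

/* Return the entry on a verified hit, NULL otherwise.  A 64-bit collision
   can still slip through, but rejecting a nonsensical hash move keeps a
   false match from crashing the search. */
const HashEntry *probe(const HashEntry *table, unsigned long mask,
                       const struct Position *pos, uint64_t key)
{
    const HashEntry *e = &table[key & mask];

    if (e->key != key)
        return NULL;             /* different position stored here: miss */
    if (e->move != 0 && !is_pseudo_legal(pos, e->move))
        return NULL;             /* same key, wrong position: treat as a miss */
    return e;
}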

-Dan.

>
>  José C.
>
>
>>-Dan.
>>
>>>  I simply ignore that problem in Averno, and the only thing I do is test if the
>>>HT move is legal in the current position before trying it.
>>>
>>>  José C.


