Computer Chess Club Archives


Subject: Re: How important are hashtables?

Author: Dann Corbit

Date: 12:56:11 12/28/05


On December 28, 2005 at 14:12:03, Stuart Cracraft wrote:

>On December 27, 2005 at 15:35:24, Dann Corbit wrote:
>
>>On December 27, 2005 at 13:34:08, Stuart Cracraft wrote:
>>
>>>On December 27, 2005 at 04:49:51, Vasik Rajlich wrote:
>>>
>>>>On December 26, 2005 at 16:52:23, Jonas Cohonas wrote:
>>>>
>>>>>For long analysis that is, i mean if you have an engine running for say 2 days
>>>>>on a position, will it come to another conclusion whether you use 32Mb hash or
>>>>>2Gb hash?
>>>>>
>>>>>In other words will it, at that time control/analysis matter what hash size you
>>>>>use?
>>>>
>>>>I vaguely remember some tests which suggest that every doubling of the hash size
>>>>gives 5 or 6 rating points.
>>>>
>>>>Vas
>>>
>>>Yes - I recall something from Gordon Goetsch and Hans Berliner at CMU quite
>>>some time ago that gave 2x = 8 rating points. It may have been Carl Ebeling.
>>>I believe that figure was from the 1970s or 1980s. And it may have been
>>>USCF rating points instead of Elo points.
>>>
>>>Here is a more recent commentary, from the SSDF in 1998, which gave 4-5 rating
>>>points per doubling.
>>>
>>>http://www.geocities.com/CapeCanaveral/Launchpad/2640/ssdf/1998/ssdf9804.htm
>>>
>>>And here:
>>>
>>>http://www.chessassistance.com/Articles/020_Hash_size.html
>>>
>>>It is no quick way to an improved rating unless, of course, the table is
>>>already horribly small.
>>>
>>>I like to size the table commensurate with the size of searches I'm doing,
>>>since I size statically at compile time rather than dynamically. I am sure
>>>most programs size dynamically at run time for flexibility. I haven't done
>>>this yet.
>>
>>I think that Shredder benefits from large hash more than other programs.
>>
>>I have seen dramatic differences in solution times (much faster) with big hash
>>(512 MB or larger) compared to smaller hash sizes.
>
>Thanks.
>
>I need to experiment with larger hash tables and move from static to dynamic
>sizing: keep retrying the allocation at program startup until malloc() stops
>failing, then take somewhat less than that for the hash areas and use that
>amount.
>
>Right now, I do the allocs but they are based on preset parameters for
>the # of entries.
>
>Dynamic sizing is (I hope) what the commercials do.

Generally speaking, the interface (or an ini file, or whatever) will tell you
the maximum amount of RAM to use for hash tables and then you can suballocate
that memory any way you like.

Some programs fine-grain it (e.g. split the hash allocation into main hash, pawn
hash, eval hash, and even other kinds).

It's probably better to present the user with fewer questions, so give them just
one parameter.
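
As a rough illustration, here is how a single user-supplied hash budget might be
suballocated in C. This is only a sketch: the structure name, the
main/pawn/eval split, and the ratios are arbitrary choices for the example, not
what any particular engine actually does.

#include <stdlib.h>

/* Sketch only: suballocate one user-supplied hash budget into a main
   transposition table, a pawn hash, and an eval cache.  The names and
   the split ratios are arbitrary illustrations. */
typedef struct {
    void  *main_hash, *pawn_hash, *eval_hash;
    size_t main_bytes, pawn_bytes, eval_bytes;
} hash_tables;

static int allocate_hash(hash_tables *ht, size_t total_bytes)
{
    /* Most of the budget goes to the main table, small slices to the rest. */
    ht->main_bytes = total_bytes / 16 * 14;                          /* ~87.5% */
    ht->pawn_bytes = total_bytes / 16;                               /* ~6.25% */
    ht->eval_bytes = total_bytes - ht->main_bytes - ht->pawn_bytes;  /* rest   */

    ht->main_hash = malloc(ht->main_bytes);
    ht->pawn_hash = malloc(ht->pawn_bytes);
    ht->eval_hash = malloc(ht->eval_bytes);

    if (!ht->main_hash || !ht->pawn_hash || !ht->eval_hash) {
        free(ht->main_hash);   /* free(NULL) is harmless */
        free(ht->pawn_hash);
        free(ht->eval_hash);
        return 0;              /* caller can retry with a smaller budget */
    }
    return 1;
}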

I would suggest that if you get a parameter that is too large (e.g. malloc or
new fails), you write an error message to a log file and allocate the largest
chunk that you can.  Typically, the way to find the largest possible size is by
binary search.
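
A minimal sketch in C of that fallback, assuming you already have a log file
handle; the function name, the 1 MB granularity, and the log message are made
up for the example:

#include <stdio.h>
#include <stdlib.h>

/* Sketch only: if the requested hash size cannot be allocated, note it in a
   log file and binary-search downward for the largest chunk malloc() will
   grant. */
static void *allocate_largest(size_t requested, size_t *actual, FILE *log)
{
    void *p = malloc(requested);
    if (p) {
        *actual = requested;
        return p;
    }

    if (log)
        fprintf(log, "hash: %zu bytes refused, searching for a smaller size\n",
                requested);

    size_t lo = 0, hi = requested;
    const size_t grain = (size_t)1 << 20;   /* stop when within 1 MB */

    while (hi - lo > grain) {
        size_t mid = lo + (hi - lo) / 2;
        void *trial = malloc(mid);
        if (trial) {
            free(trial);        /* mid fits: remember it and try larger */
            lo = mid;
        } else {
            hi = mid;           /* mid is still too big: try smaller */
        }
    }

    *actual = lo;
    /* Re-allocate the size we settled on (this could still fail if memory
       conditions changed between the trial and this call). */
    return lo ? malloc(lo) : NULL;
}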


