Computer Chess Club Archives



Subject: Re: Hash table usefulness

Author: Christophe Theron

Date: 18:49:55 11/24/01

On November 24, 2001 at 21:04:34, Jesper Antonsson wrote:

>On November 24, 2001 at 20:35:34, Robert Hyatt wrote:
>
>>On November 24, 2001 at 18:02:59, Jesper Antonsson wrote:
>>
>>>This is speculation, as I'm not a chess program author myself, but I wonder if
>>>anyone has experimented recently and could explain things to me.
>>>
>>>Machines of today have a tremendous memory bottleneck, and I asked myself the
>>>other day how come the large hash tables used today are beneficial. I remember
>>>figures from long ago when hash tables were said to give a speedup of perhaps
>>>3%, but today, when processors are 10-20 times faster than main memory, they
>>>should give less, perhaps even be detrimental? If you turn off hash tables
>>>entirely, how much of an increase does this give in NPS on a 1 GHz+ processor?
>>>Nothing? A lot?
>>
>>Hash tables have _nothing_ to do with NPS.
>
>You know better. Of course memory references cost.


Robert is right.

In a chess program, even the fastest, there will be ONE access to the hash
table every 1000 processor cycles or so.

In an average-speed chess program like mine, that means one access to the hash
table every 2000 processor cycles.

Accessing the hash table costs almost nothing because its latency is hidden by
the large number of instructions executed between two accesses.
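
To make it concrete, here is a minimal sketch of what a search does with the
table. The names and sizes are made up (this is not my code): one probe when a
node is entered, one store when it is left, and thousands of instructions of
move generation, make/unmake and evaluation in between.

/* Minimal hash table sketch.  The names and layout are made up; this is
   not code from any real engine. */

typedef unsigned long long u64;    /* 64 bit zobrist key */

typedef struct {
    u64   key;                     /* signature of the position             */
    short depth;                   /* draft the stored score is valid for   */
    short flag;                    /* exact score, lower bound, upper bound */
    int   score;
} HashEntry;

#define HASH_ENTRIES (1 << 20)     /* number of entries, a power of two */

static HashEntry hash_table[HASH_ENTRIES];

/* One memory access when the node is entered... */
static HashEntry *hash_probe(u64 key)
{
    HashEntry *e = &hash_table[key & (HASH_ENTRIES - 1)];
    return e->key == key ? e : 0;
}

/* ...and one when it is left.  The thousands of instructions in between
   (move generation, make/unmake, evaluation) work on data that is already
   in cache, which is why the cost of the probe is hidden. */
static void hash_store(u64 key, int depth, int score, int flag)
{
    HashEntry *e = &hash_table[key & (HASH_ENTRIES - 1)];
    e->key   = key;
    e->depth = (short)depth;
    e->flag  = (short)flag;
    e->score = score;
}

With a 16-byte entry the table above is 16MB, far too big for any cache, and
it does not matter: the one access per node is simply overlapped with the rest
of the work.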




>>They have a _lot_ to do with
>>the size of the tree that is searched.  I.e. try a fixed-depth search (say to
>>12 plies) and vary the size of the hash table from small to large.  The size
>>of the tree will vary by 2-3X, which is a _significant_ advantage in terms of
>>speed, even though the raw NPS stays pretty much constant.
>
>Ok, if the tree size decreases that much, I guess hash tables are ok for
>now. You may continue. :-)
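
If you want to see it for yourself, the fixed-depth test Robert describes is
easy to set up around any program that reports its node count. A rough sketch
in C; the two engine calls are placeholders that do not exist anywhere, you
have to replace them with your own interface:

#include <stdio.h>

/* Placeholders for whatever interface your own program has. */
extern void set_hash_size_mb(int megabytes);
extern long search_to_depth(const char *position, int depth);

int main(void)
{
    int sizes_mb[] = { 1, 4, 16, 64, 256 };
    int i;

    for (i = 0; i < 5; i++) {
        long nodes;

        set_hash_size_mb(sizes_mb[i]);
        nodes = search_to_depth("startpos", 12);   /* fixed 12 ply search */
        printf("%4d MB hash: %ld nodes\n", sizes_mb[i], nodes);
    }
    return 0;
}

Same position, same depth, different hash sizes: the node counts tell you
directly how much of the tree the table is saving you.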
>
>>>Has anyone experimented with small hash tables, carefully tuned to fit in cache,
>>>and used perhaps only in shallow parts of the tree, and compared the results to
>>>the standard "use as much as you have" approach? *Especially* in lightning
>>>games, where a huge hash table won't be filled anyway, a cache-tuned table could
>>>perhaps perform better?
>>
>>
>>It would be way too small and get overwritten at a ridiculous rate.
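
Robert's point is easy to put a number on: a hash entry is typically 12 to 16
bytes, so a table that fits in a 32K cache holds only about 2000 entries. A
program storing, say, a few hundred thousand positions per second would recycle
the whole table many times per second, so an entry would almost never survive
long enough to be found again, and being found again is the whole point of the
table.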
>>
>>>And by the way, does anyone bother to try to make sure his/her engine itself
>>>fits in instruction cache and that the search function is so aligned that it
>>>won't get pushed out of instruction cache by more seldom used functions?
>>>
>>>br,
>>>Jesper
>>
>>Most likely everybody does this...
>
>Oh, cool. I haven't heard anything about this, so I thought otherwise, but if
>you say so...



I have done this by putting all of the most frequently executed code in one C
source file. The binary generated from this source is close to 32K, which fits
in the K6-2's instruction cache, for example (I am not sure about the P2, P3
and P4).
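
If you want to check your own program, compile that source file to an object
file and look at it with the usual 'size' tool: it reports the size of the
text (code) segment, which is what has to stay below the instruction cache
size of your target processor.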



    Christophe


