Computer Chess Club Archives


Subject: Hash tables and data-cache, some programmer stuff...

Author: Ed Schröder

Date: 02:22:36 01/17/98


>Posted by Bruce Moreland on January 17, 1998 at 03:25:31:

>On January 17, 1998 at 02:21:54, Ed Schröder wrote:

>>Every programmer has his own view of how to program the hash table.
>>Typical of Rebel is that it stores and uses the q-search results in the
>>hash table (most programs don't do that, true?). Another typical Rebel
>>hash table behavior is that it doesn't store positions in the hash
>>table that have no "best move". The latter speeds up Rebel a little.

>I don't do this q-search thing because our searches are probably a lot
>different.  I wouldn't have much use for it the way I do things.

>But I haven't spent much time trying to figure out how to make the most
>efficient use of hash memory.  I use the Thompson dual-table scheme
>described in numerous places, which has always worked fine for me.

>I like the idea of not storing elements that don't have a "best" move
>attached.  You lose the ability to fail low on a node based upon the
>value in the hash table, but a little bit of statistical collection
>seems to indicate that this is pretty rare.

Ok, some programmer stuff...

The reason for the "little" speedup in Rebel (leaving entries with no
best move out of the hash table) is that it saves entries in the PC's
data cache. Of course this may not work in every program, but I can
imagine it speeding up a chess program by 5-15% for free. And there is
no risk.

A few years ago I rewrote a very time-consuming part of Rebel's
evaluation function. Instead of massive code, I pre-calculated all
possibilities and stored the results in a 4 MB table in memory. Then a
simple table lookup replaced all the massive code.
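
In outline, the rewrite followed a pattern like the sketch below. The
function names, the indexing and the placeholder computation are all
invented; only the shape of the change is the point:

    #include <stddef.h>

    #define EVAL_TABLE_BYTES (4u * 1024 * 1024)
    #define EVAL_TABLE_SIZE  (EVAL_TABLE_BYTES / sizeof(short))

    static short eval_table[EVAL_TABLE_SIZE];

    short slow_eval_term(size_t i)        /* stands in for the old massive code */
    {
        return (short)(i & 0xFF);         /* placeholder computation            */
    }

    void init_eval_table(void)            /* run once at startup */
    {
        for (size_t i = 0; i < EVAL_TABLE_SIZE; i++)
            eval_table[i] = slow_eval_term(i);
    }

    short eval_term(size_t i)             /* the "simple table lookup" */
    {
        /* 4 MB is far bigger than any L1/L2 cache, so this probe  */
        /* misses often and evicts other useful data as it goes    */
        return eval_table[i];
    }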

I expected a huge speedup of 30-35%, but it didn't happen.
In fact it was a little bit slower.

The reason is the limited PC data cache. I assume lots of useful data
that would normally stay cached was simply evicted from the data cache
by all the extensive 4 MB table lookups. I have no other explanation.

So besides all the "pros" and "cons" of the different hash table
approaches, I think one should also take the machine's data-cache
behavior into account for a final judgement.

If I test Rebel with a 512 KB hash table and then the same position
with a 28 MB hash table, the "average NPS" with 512 KB is obviously
higher. It's not because the program has to execute less code, but
because important data is lost from the PC's data cache.

End of programmer stuff...

Of course using a 28 MB hash table will give much faster results than
using a 512 KB hash table, but that's another story.

- Ed -



>For those who don't know what you mean, the idea here is that a hash
>entry is good for two things:

>1) Cutting off based upon the bound or score contained in it.
>2) Remembering the best move in the position so you can search it first
>next iteration.

>This second reason is the most important one in almost every case.  A
>lot of times,  you are not going to *have* a best move to store in your
>hash element, since you fail low when you search this node.  So in these
>cases, you get less benefit from hashing this node, and you might get
>more from leaving what was there before alone.
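
A minimal sketch of those two uses, reusing the HashEntry layout and
table from the earlier sketch; the bound flags and names are again
invented:

    enum { FLAG_EXACT, FLAG_LOWER, FLAG_UPPER };

    /* Returns 1 if the entry allows an immediate cutoff; either way,
       *best_move is filled in for move ordering when available. */
    int hash_probe(unsigned long long key, int depth, int alpha, int beta,
                   int *score, int *best_move)
    {
        HashEntry *e = &table[key & (TABLE_SIZE - 1)];
        if (e->key != key)
            return 0;                              /* miss                   */

        *best_move = e->best_move;                 /* use 2: move ordering   */

        if (e->depth >= depth &&                   /* use 1: bound/score cut */
            (e->flags == FLAG_EXACT ||
             (e->flags == FLAG_LOWER && e->score >= beta) ||
             (e->flags == FLAG_UPPER && e->score <= alpha))) {
            *score = e->score;
            return 1;
        }
        return 0;                                  /* only the move helped   */
    }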

>Thompson's dual table method would make this optimization a little hard,
>since you write into your table in order to remember repetitions, but
>perhaps it can be salvaged.
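
For reference, the replacement policy of such a dual table is roughly
the sketch below, continuing the declarations from the sketches above;
it is only an illustration:

    static HashEntry deep_table[TABLE_SIZE];    /* keeps the deepest draft */
    static HashEntry always_table[TABLE_SIZE];  /* always takes the newest */

    void dual_store(const HashEntry *n)
    {
        size_t i = (size_t)(n->key & (TABLE_SIZE - 1));
        if (n->depth >= deep_table[i].depth)
            deep_table[i] = *n;      /* deep results survive overwrites */
        else
            always_table[i] = *n;    /* shallow ones are still recorded */
        /* a probe then checks both tables for a matching key */
    }

If the always-replace half also records the current search path for
repetition detection, every node has to be written, which is presumably
why skipping no-best-move entries is hard to combine with it.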

>>The story: take 40 chess programs and you will notice 40 different hash
>>table behaviors.

>No kidding.

>bruce


