Author: Bas Hamstra
Date: 16:32:58 08/17/99
On August 17, 1999 at 10:29:36, Robert Hyatt wrote:

>On August 17, 1999 at 05:15:06, David Blackman wrote:
>
>>On August 16, 1999 at 12:05:01, Robert Hyatt wrote:
>>
>>>it is one method of handling collisions (collision here means two different
>>>entries want to be stored in the same table location.) 8 probe means you try
>>>8 different table entries (this is sometimes called a 'bucket approach') and
>>>replace the least-useful one of those. It works well on machines with lots of
>>>memory bandwidth, but less well on a PC. It can be very effective if you have
>>>a table that is too small, for example...
>>>
>>>On large-memory machines, it loses its edge and can actually slow you down,
>>>since 8 probes into a hash table for every store/lookup call is not exactly
>>>free...
>>
>>If you make your 8 probes to consecutive entries in the hash table, starting
>>with one that is divisible by 8, and you make your hash table entry size so
>>that 8 entries is exactly the size of one first-level cache line, then the
>>penalty will most likely be too small to measure. This is because each hash
>>access costs exactly one main memory access (which is probably what it would
>>cost anyway), and for most programs that's a lot more than the cost of what
>>you do to the data once you've got it.
>
>The problem is that a cache line is 32 bytes long. A hash entry is 16 bytes
>(for me) and probably longer for others. You run through 4 cache misses to get
>those 8 entries.
>
>And then you run into something called 'clustering' because you are probing
>8 consecutive entries. It is better to do rehashing, so that the first entry
>is a random address, and then you use another part of the hash signature to
>obtain a random offset from this position for each of the other 7 entries
>that go along with this 'bucket'.
>
>Either way hurts quite a bit. You could do rehashing where you pick up pairs
>of entries to minimize cache misses, of course...

What's all this stuff about hash table efficiency? Since when is that a factor
to reckon with? Personally I am not eager to do a lot of work to gain 0.5%
speed. Suppose your hash table is 2x faster. It gives you... what? I'm working
on fast capture, attack, and SEE generation. *That's* a factor.

Regards,
Bas Hamstra

>
>>Of course the trade-offs on a Cray were different. No cache, and most
>>odd-stride access patterns were about as fast as sequential access?
>
>correct...
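[For anyone trying to picture the scheme being argued about, here is a minimal C
sketch of the 8-probe "bucket" lookup and store that Hyatt and Blackman describe:
the bucket starts at an index divisible by 8, the lookup scans the 8 consecutive
16-byte entries, and the store replaces the least useful of them (here, the one
with the shallowest depth). All names (HashEntry, TT_SIZE, tt_probe, tt_store)
and the replacement policy are assumptions for illustration, not Crafty's or
anyone else's actual code.]

    /* Sketch of an 8-entry bucket transposition table (assumed names). */
    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint64_t key;    /* full hash signature, used to detect collisions */
        int16_t  score;
        int8_t   depth;
        uint8_t  flags;  /* bound type, age, ... (simplified)              */
        uint32_t move;
    } HashEntry;         /* 16 bytes, as in the post                       */

    #define TT_SIZE (1u << 20)   /* number of entries, power of two        */
    #define BUCKET  8            /* probes per lookup/store                */

    static HashEntry table[TT_SIZE];

    /* Map the key to the first entry of its bucket: low bits give an index,
     * then rounding down to a multiple of 8 makes the 8 consecutive probes
     * start on an 8-entry boundary (the alignment idea discussed above). */
    static HashEntry *bucket_of(uint64_t key) {
        return &table[(key & (TT_SIZE - 1)) & ~(uint64_t)(BUCKET - 1)];
    }

    /* Lookup: scan the 8 consecutive entries for a matching signature. */
    HashEntry *tt_probe(uint64_t key) {
        HashEntry *b = bucket_of(key);
        for (int i = 0; i < BUCKET; i++)
            if (b[i].key == key)
                return &b[i];
        return NULL;
    }

    /* Store: overwrite the "least useful" of the 8 entries; shallowest depth
     * is used here, which is only one of several possible policies. */
    void tt_store(uint64_t key, int16_t score, int8_t depth,
                  uint8_t flags, uint32_t move) {
        HashEntry *b = bucket_of(key);
        HashEntry *victim = &b[0];
        for (int i = 0; i < BUCKET; i++) {
            if (b[i].key == key) { victim = &b[i]; break; } /* reuse same slot */
            if (b[i].depth < victim->depth) victim = &b[i];
        }
        victim->key = key; victim->score = score; victim->depth = depth;
        victim->flags = flags; victim->move = move;
    }

[Hyatt's rehashing alternative would instead derive the other seven probe
positions from a different part of the hash signature, so the probes land at
scattered addresses; that avoids the clustering he mentions, at the cost of up
to eight cache misses instead of four with 16-byte entries and 32-byte lines.]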