Author: Amir Ban
Date: 14:55:34 01/20/98
On January 20, 1998 at 16:00:31, Bruce Moreland wrote:

>On January 20, 1998 at 15:27:01, Robert Hyatt wrote:
>
>>yes. it's old news. And a data structures course would call this
>>the "hash collision chain problem". rehashing eliminated chains, but
>>produces more cache thrashing... the latter is more important on a PC
>>that already has poor memory bandwidth...
>
>Assuming hash keys are perfectly random, there should be no discernible
>relation between distinct positions, no matter how similar they are.
>
>So if you hash to index I in the table, there should be no more
>likelihood that there is an occupied entry at index I+1 than there is at
>index I+K or I+rand(), right? If there is complete entropy here,
>chaining should be uniform, regardless of how you rehash. So you should
>be able to use I+1, since that is easiest, if you really want to rehash.
>
>The fact that the hash elements are next to each other in memory, and
>can be accessed with an index, has nothing to do with anything, does it?
>There is no more relation between I and I+1 than there is between I and
>J.
>
>The above is my intuition. Intuition also tells me that the world is
>flat, though.
>
>bruce

I agree with that. Intuitively I also think that there is no scope for
improvement on the linear method. Your argument makes the point.

Maybe a stronger way to say it is this: Complex probing methods are
equivalent to selecting a different hash function, or to rehashing the
hashed value. If the hashing is good to start with, this doesn't achieve
anything.

A different analogy: When you encrypt a message with some encryption
method, some people feel that the message would be REALLY secret if at
the end you multiply everything by 7 and add 510. That's not true, of
course.

Amir
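[The "use I+1" scheme discussed above can be sketched as a short linear reprobe over an open-addressed transposition table. This is a minimal illustration, not code from the post; the table size, probe limit, and function names are all illustrative assumptions.]

```python
# Sketch of a transposition table probed linearly at I, I+1, ..., I+K-1,
# i.e. Bruce's "use I+1" rehashing. Sizes and names are illustrative.

TABLE_BITS = 16
TABLE_SIZE = 1 << TABLE_BITS
MAX_PROBES = 4                       # K: maximum length of a collision chain

table = [None] * TABLE_SIZE          # each slot holds (key, data) or None

def store(key, data):
    i = key & (TABLE_SIZE - 1)       # index I from the key's low bits
    for k in range(MAX_PROBES):
        slot = (i + k) & (TABLE_SIZE - 1)   # linear reprobe: I, I+1, I+2, ...
        if table[slot] is None or table[slot][0] == key:
            table[slot] = (key, data)
            return True
    return False   # chain full; a real engine would apply a replacement rule

def probe(key):
    i = key & (TABLE_SIZE - 1)
    for k in range(MAX_PROBES):
        slot = (i + k) & (TABLE_SIZE - 1)
        if table[slot] is not None and table[slot][0] == key:
            return table[slot][1]
    return None                      # miss within the probe limit
```

[Consecutive slots are adjacent in memory, which is why the linear walk is cache-friendly compared with reprobing at a second hashed index, as Hyatt notes above.]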