Author: Russell Reagan
Date: 17:37:57 09/07/02
In their book The Practice of Programming, Brian Kernighan and Rob Pike present an implementation of a hash table that differs from the hash tables we use for the transposition table in chess engines. Their approach is to keep a table of linked lists, so that if two things hash to the same value, no data is lost; the new item is simply added to that bucket's linked list. I am curious how this would work in a chess engine. Would the speed hit be enough to hurt the engine's performance, or would it be negligible?

Obviously, with this method nothing ever gets overwritten, which means every node you visit gets saved in the table, so eventually you run out of memory (probably very quickly on a lot of machines). So there must be some way of limiting the size of the table. What kind of method could be used to limit the table's size with this chaining approach?

Lastly, I would like to know where in the tree the search benefits most from the transposition table. For example, if the benefit comes mostly from the shallow nodes, it might be possible to limit the table's size with a depth parameter: by not storing the deeper levels of nodes (which contain the vast majority of nodes), you would cut down the memory needed for the table and solve the problem of it growing without bound.

Any thoughts on using this alternative form of a hash table in a chess program? I find it an interesting data structure as described, mainly because I have only known a hash table as the thing we use in chess, which has the drawback of being theoretically unstable.

Thanks, Russell
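To make the idea concrete, here is a minimal sketch of what a chained transposition table with a depth cutoff might look like. This is not the code from the book or from any particular engine; the bucket count, the MIN_STORE_DEPTH threshold, and the entry fields are all illustrative assumptions.

/* Sketch of a chained transposition table: each bucket holds a linked
 * list, so colliding positions are appended rather than overwritten.
 * A hypothetical MIN_STORE_DEPTH cutoff shows one way to bound growth:
 * results from the deepest part of the tree are simply not stored. */
#include <stdint.h>
#include <stdlib.h>

#define TABLE_BITS      20                    /* 1M buckets (assumption) */
#define TABLE_SIZE      (1u << TABLE_BITS)
#define MIN_STORE_DEPTH 3                     /* skip near-leaf results  */

typedef struct Entry {
    uint64_t      key;     /* full Zobrist key, to detect collisions   */
    int           depth;   /* remaining search depth of stored result  */
    int           score;
    struct Entry *next;    /* chain of positions sharing this bucket   */
} Entry;

static Entry *table[TABLE_SIZE];

/* Look up a position; return NULL if it has never been stored. */
Entry *tt_probe(uint64_t key)
{
    Entry *e = table[key & (TABLE_SIZE - 1)];
    while (e && e->key != key)
        e = e->next;
    return e;
}

/* Store a result.  Nothing is ever overwritten; entries with little
 * remaining depth (i.e. deep in the tree) are not stored at all,
 * which caps how fast the table grows. */
void tt_store(uint64_t key, int depth, int score)
{
    if (depth < MIN_STORE_DEPTH)
        return;

    Entry *e = malloc(sizeof *e);
    if (!e)
        return;                               /* out of memory: drop it */
    e->key   = key;
    e->depth = depth;
    e->score = score;
    e->next  = table[key & (TABLE_SIZE - 1)]; /* push onto the chain    */
    table[key & (TABLE_SIZE - 1)] = e;
}

The probe cost is one extra pointer chase per colliding entry, so the speed question above largely comes down to how long the chains get before the depth cutoff (or some other limit) stops the table from filling up.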