Author: Tom Kerrigan
Date: 21:03:51 03/29/00
On March 29, 2000 at 18:58:14, James Flanagan wrote:

>The hash table stores information about positions that have already been
>evaluated by the program. Each position requires several words of RAM to
>describe its characteristics and evaluation score. Assume that each position
>takes 48 bytes of memory. Then the storage capacity of a 48Mb hashtable is

Programs do not store the entire position, so a hash table entry typically takes 16 bytes.

>When you run out of hashtable memory, old data is dumped to make way for the
>new. If you revisit a position that has been dumped, it has to be reevaluated
>from scratch. Nothing terrible happens -- your program just runs a little
>slower.

The thing to note here is that old data is not dumped in a FIFO manner, or a random manner. There is a system for deciding what gets dumped (the replacement scheme), whereby the important data is kept in the table. This minimizes the effect of the hash table size.

>This raises an interesting question: how hard would it be to output these hash
>table statistics?

Not very. Which ones were you thinking of, specifically?

In your case, you can try doing another 18-hour run (same position, of course) with 24MB of hash instead of 48MB. See if the search is significantly slower. If not, I doubt increasing the size to 96MB (or whatever) would make it significantly faster.

-Tom
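To make the two points above concrete (a 16-byte entry, and a replacement scheme that keeps the important data), here is a minimal sketch in C. The field layout and the depth-preferred rule are illustrative assumptions, not any particular program's design; real engines pack their entries in many different ways.

```c
#include <stdint.h>

/* Hypothetical 16-byte hash table entry. The position itself is not
   stored -- only its 64-bit Zobrist key plus the search result. */
typedef struct {
    uint64_t key;    /* 8 bytes: Zobrist hash of the position          */
    int16_t  score;  /* 2 bytes: evaluation/search score               */
    uint16_t move;   /* 2 bytes: packed best move                      */
    uint8_t  depth;  /* 1 byte:  depth the stored result was searched  */
    uint8_t  flags;  /* 1 byte:  exact score, lower bound, upper bound */
    uint16_t age;    /* 2 bytes: search generation, used for eviction  */
} HashEntry;         /* total: 16 bytes, no padding on common ABIs     */

/* One common (assumed, not universal) replacement scheme: always evict
   entries left over from an earlier search, otherwise prefer to keep
   the deeper result, since it was more expensive to compute. */
static int should_replace(const HashEntry *old_e, const HashEntry *new_e,
                          uint16_t current_age)
{
    if (old_e->age != current_age)
        return 1;                        /* stale entry: always replace */
    return new_e->depth >= old_e->depth; /* keep the deeper search      */
}
```

With entries this size, a 48MB table holds about 3 million of them; a deep overnight search visits far more positions than that, which is exactly why the replacement scheme, not raw size, dominates how useful the table is.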