Author: Roberto Waldteufel
Date: 19:09:30 05/20/98
Yes, I see that there is the potential for erroneous scoring. The implementation I am currently using, although quite extravagant in its memory requirements, seems to solve the problem. Instead of using the same hash table both for score bounds and for repetition detection, I maintain a separate table for repetitions, accessed by the same hash function as the main table; the same 64-bit checksum is used to access both tables. The main difference is that entries in the repetition table *never* get replaced while a game is in progress. Instead of reserving a bit to indicate the draw, I simply check for a match with the checksum at the address indicated by the hash function. This has the drawback of using 64 bits instead of one for each position stored, but it eliminates the overhead of maintaining the integrity of a single dual-purpose table. When a node is exited, it is removed from the table by resetting its checksum to zero.

Both the main table and the repetition table have space for 2M entries (1M with White to play, 1M with Black to play), so the hash index is a 21-bit integer, one bit of which indicates the side to move. The repetition table therefore requires 16MB of RAM. The main table is of course much larger, because it stores more information.

As a matter of interest, I made a surprising discovery about hash tables some while ago when I was writing a Connect-4 program built around the NegaScout algorithm. During debugging it was necessary to be able to turn off selected parts of the hash table routines. By the time it was all sorted out I had located and fixed the bug, which I had previously determined to be connected with the hash table, but I had forgotten to "turn on" the part of the code that processed upper bounds, so only fail-high results were being stored. The result: the program actually ran faster with only half its hash table!

On reflection, I suspect that whenever the search reached a position that would normally have caused an alpha cutoff (i.e. the move is too poor) had its upper bound been stored, it simply found a fail-high entry in the table one ply deeper. I know that some programs only hash positions that are not too near the leaves, on the grounds that a shallow search is cheaper than the hashing overhead. Thus I was effectively trading half the table for a 1-ply search extension, compared with full hashing of both upper and lower bounds. Do you know if this has been tried before? It may not be so good in chess, because the overhead of searching is higher (Connect-4 has a maximum branching factor of only 7, and its moves are trivially simple to generate).

Roberto
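P.S. To make the repetition scheme concrete, here is a minimal C sketch of the bookkeeping described above. The identifiers and exact layout are illustrative rather than taken from my program, and it assumes 64-bit Zobrist-style checksums in which zero never occurs as a valid key (a zero entry marks an empty slot):

#include <stdint.h>

#define REP_BITS 21                    /* 2M entries: 1M per side to move */
#define REP_SIZE (1u << REP_BITS)

static uint64_t rep_table[REP_SIZE];   /* 64-bit checksums, 16MB in all */

/* Index: 20 bits from the checksum plus 1 bit for the side to move. */
static unsigned rep_index(uint64_t key, int stm)
{
    return (unsigned)(key & ((REP_SIZE >> 1) - 1))
         | ((unsigned)stm << (REP_BITS - 1));
}

/* Called when a node is entered.  Returns 1 if the position is already
   in the table (a repetition); otherwise it is recorded.  A slot that
   already holds a *different* checksum is never replaced. */
int rep_enter(uint64_t key, int stm)
{
    unsigned i = rep_index(key, stm);
    if (rep_table[i] == key)
        return 1;                      /* full 64-bit match: repetition */
    if (rep_table[i] == 0)
        rep_table[i] = key;            /* empty slot: record the position */
    return 0;
}

/* Called when a node is exited: the entry is removed by resetting its
   checksum to zero.  Positions actually played in the game are entered
   but never exited, so they persist until the game is over. */
void rep_exit(uint64_t key, int stm)
{
    unsigned i = rep_index(key, stm);
    if (rep_table[i] == key)
        rep_table[i] = 0;
}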
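And here is a similarly rough sketch of what probing with only lower bounds (fail-high entries) looks like, as in the Connect-4 accident. Again the names are illustrative, the table uses a simple always-replace scheme, and the checksum is assumed to encode the side to move:

#include <stdint.h>

enum { TT_BITS = 20, TT_SIZE = 1 << TT_BITS };

typedef struct {
    uint64_t key;    /* verification checksum */
    int16_t  depth;  /* draft of the stored search */
    int16_t  score;  /* lower bound: the search failed high at >= score */
} TTEntry;

static TTEntry tt[TT_SIZE];

/* With only lower bounds stored, the table can prove a beta cutoff but
   never an alpha cutoff.  An alpha cutoff that full hashing would give
   here instead shows up as a fail-high entry one ply deeper - which is
   why the effect resembles a 1-ply search extension. */
int tt_probe(uint64_t key, int depth, int beta, int *score)
{
    TTEntry *e = &tt[key & (TT_SIZE - 1)];
    if (e->key == key && e->depth >= depth && e->score >= beta) {
        *score = e->score;
        return 1;                      /* cutoff: position is >= beta */
    }
    return 0;
}

/* Store only results that failed high (score >= beta). */
void tt_store(uint64_t key, int depth, int score)
{
    TTEntry *e = &tt[key & (TT_SIZE - 1)];
    e->key = key;
    e->depth = (int16_t)depth;
    e->score = (int16_t)score;
}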