Author: Andrew Williams
Date: 10:46:08 06/13/01
On June 12, 2001 at 16:40:01, Landon Rabern wrote:

>On June 12, 2001 at 16:27:04, Andrew Williams wrote:
>
>>On June 12, 2001 at 15:54:06, Landon Rabern wrote:
>>
>>>I am going to rewrite my opening book code. Right now it has all positions
>>>stored in a file; when the program is run, it loads them all up and puts
>>>them into a hashtable that uses chaining to resolve collisions. I am not
>>>satisfied with this because it eats a lot of memory for large books.
>>>I would appreciate it if others would tell me what they do. I think it would be
>>>better to just leave the whole thing on disk and read from it when necessary. I
>>>guess I could just write the hashtable to disk and still use chaining, but that
>>>seems overly nasty; open addressing might work better. Thoughts?
>>>
>>>Regards,
>>>
>>>Landon W. Rabern
>>
>>My book file is just a file of my book records. The first thing in each
>>record is the hash key (all 64 bits). The book on the disk is sorted and
>>I access it using a binary-chop algorithm. This is rather slow for a
>>large book. I expect that one day I'll introduce an indexing scheme to
>>fix this problem.
>>
>>I build the book in chunks of 1 million entries. I sort each chunk before
>>writing it out. When all the chunks have been written, I merge them so that
>>the book file is in order; entries which refer to the same position are simply
>>aggregated.
>>
>>Andrew
>
>OK, so you just do a binary search on disk, without loading it into memory? I
>thought about doing this, but assumed it would be too slow. How fast is it for
>moderately sized books?
>
>Regards,
>
>Landon

During CCT3, I used a book with 1.5 million positions in it. It seemed to take
about half a second to find a move. This is *very* approximate.

Andrew
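The on-disk lookup described above (a sorted file of fixed-size records, each starting with a 64-bit hash key, probed with a binary chop) can be sketched roughly as below. This is an illustrative sketch, not Andrew's actual code: the 16-byte record layout (8-byte key plus 8 bytes of move/weight data) and the function name `probe_book` are assumptions.

```python
import os
import struct

RECORD_SIZE = 16  # hypothetical layout: 8-byte hash key + 8 bytes of move data

def probe_book(path, key):
    """Binary-chop a sorted on-disk book file for a 64-bit hash key.

    Each record is assumed to begin with the key as an unsigned
    little-endian 64-bit integer; returns the raw record or None.
    """
    with open(path, "rb") as f:
        lo = 0
        hi = os.path.getsize(path) // RECORD_SIZE
        while lo < hi:
            mid = (lo + hi) // 2
            f.seek(mid * RECORD_SIZE)       # one disk seek per probe
            rec = f.read(RECORD_SIZE)
            (k,) = struct.unpack_from("<Q", rec, 0)
            if k < key:
                lo = mid + 1
            elif k > key:
                hi = mid
            else:
                return rec                  # found: caller decodes move data
    return None
```

Each probe costs one seek, so a 1.5-million-entry book needs about 21 seeks per lookup, which is consistent with the "roughly half a second" figure on the disks of that era; an index over the first levels of the tree would cut that dramatically.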
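The build step Andrew describes (sort each chunk of entries, spill it to disk, then merge the sorted chunks while aggregating entries for the same position) is a classic external merge sort. A minimal sketch under assumed names and a hypothetical (key, count) record format:

```python
import heapq
import os
import struct
import tempfile

def sort_chunk(entries, dir_):
    """Sort one in-memory chunk of (key, count) pairs and spill it to disk."""
    fd, path = tempfile.mkstemp(dir=dir_)
    with os.fdopen(fd, "wb") as f:
        for key, count in sorted(entries):
            f.write(struct.pack("<QQ", key, count))
    return path

def read_records(path):
    """Yield (key, count) pairs from a sorted chunk file, in order."""
    with open(path, "rb") as f:
        while True:
            rec = f.read(16)
            if not rec:
                break
            yield struct.unpack("<QQ", rec)

def merge_chunks(chunk_paths, out_path):
    """K-way merge of sorted chunks; entries sharing a key are aggregated."""
    with open(out_path, "wb") as out:
        cur_key, cur_count = None, 0
        for key, count in heapq.merge(*(read_records(p) for p in chunk_paths)):
            if key == cur_key:
                cur_count += count          # same position: aggregate
            else:
                if cur_key is not None:
                    out.write(struct.pack("<QQ", cur_key, cur_count))
                cur_key, cur_count = key, count
        if cur_key is not None:
            out.write(struct.pack("<QQ", cur_key, cur_count))
```

Because every chunk is sorted before it is written, the merge is a single sequential pass over all chunks, and memory use is bounded by the chunk size (1 million entries in Andrew's case) rather than the full book.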