Author: Andrew Williams
Date: 10:47:52 06/13/01
On June 12, 2001 at 16:54:18, David Rasmussen wrote:

>On June 12, 2001 at 16:27:04, Andrew Williams wrote:
>
>>My book file is just a file of my book records. The first thing in each
>>record is the hash-key (all 64 bits). The book on the disk is sorted and
>>I access it using a binary-chop algorithm. This is rather slow for a
>>large book. I expect that one day I'll introduce an indexing scheme to
>>fix this problem.
>>
>>I build the book in chunks of 1 million entries. I sort each chunk before
>>writing it out. When all the chunks have been written, I merge them so that
>>the book file is in order; entries which refer to the same position are
>>simply aggregated.
>
>That's exactly the same way I'm doing it right now, except that I keep an
>in-memory index of every nth record. A fast binary search over that index
>determines which block of the file must contain a given key; that block is
>then loaded into memory and binary-searched for the key in question. But I
>guess that is what you meant by an indexing scheme.

I never really considered an implementation. This is one of those things that
I'll turn to when I've temporarily run out of ideas for improving PM in other
ways.

Andrew
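The chunked build described in the quote above (sort each chunk, write it out, then merge the sorted chunks while aggregating entries for the same position) is a classic external merge sort. Here is a minimal sketch of that idea; the record layout (64-bit hash key plus a 32-bit count), the function names, and the aggregation rule (summing counts) are my assumptions, not the actual PM code:

```python
# Hypothetical sketch of the chunk-sort-and-merge book build.
# Assumed record layout: 64-bit hash key + 32-bit count.
import heapq
import struct

RECORD = struct.Struct("<QI")  # key, count (assumed layout)

def write_chunk(entries, path):
    """Sort one in-memory chunk by hash key and write it to disk."""
    with open(path, "wb") as f:
        for key, count in sorted(entries):
            f.write(RECORD.pack(key, count))

def read_records(path):
    """Yield (key, count) records from a chunk file in order."""
    with open(path, "rb") as f:
        while rec := f.read(RECORD.size):
            yield RECORD.unpack(rec)

def merge_chunks(chunk_paths, out_path):
    """Merge sorted chunks; aggregate entries sharing a hash key."""
    merged = heapq.merge(*(read_records(p) for p in chunk_paths))
    with open(out_path, "wb") as f:
        prev_key, total = None, 0
        for key, count in merged:
            if key == prev_key:
                total += count          # same position: aggregate
            else:
                if prev_key is not None:
                    f.write(RECORD.pack(prev_key, total))
                prev_key, total = key, count
        if prev_key is not None:
            f.write(RECORD.pack(prev_key, total))
```

Because `heapq.merge` streams the chunk files lazily, only one record per chunk needs to be in memory during the merge, which is what makes the 1-million-entry chunk size workable for books far larger than RAM.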
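David's sparse-index scheme can be sketched as well: hold every nth key in memory, binary-search that index to find the one block that could contain the key, read just that block, and binary-search it. Again the record layout, the block size, and all names are illustrative assumptions:

```python
# Hypothetical sketch of a sparse in-memory index over a sorted book file.
# Assumed record layout: 64-bit hash key + 32-bit count.
import bisect
import struct

RECORD = struct.Struct("<QI")  # key, count (assumed layout)
BLOCK = 4  # records per block; a real book would use a much larger n

def build_index(path):
    """Keep every BLOCKth key in memory."""
    index, i = [], 0
    with open(path, "rb") as f:
        while rec := f.read(RECORD.size):
            if i % BLOCK == 0:
                index.append(RECORD.unpack(rec)[0])
            i += 1
    return index

def lookup(path, index, key):
    """One index search, one block read, one in-block search."""
    b = bisect.bisect_right(index, key) - 1
    if b < 0:
        return None  # key smaller than anything in the book
    with open(path, "rb") as f:
        f.seek(b * BLOCK * RECORD.size)
        data = f.read(BLOCK * RECORD.size)
    block = [RECORD.unpack_from(data, o)
             for o in range(0, len(data), RECORD.size)]
    keys = [k for k, _ in block]
    j = bisect.bisect_left(keys, key)
    if j < len(keys) and keys[j] == key:
        return block[j]
    return None
```

The appeal over a plain binary chop on the file is that every probe of the on-disk search becomes a seek, while here a lookup costs a single seek and a single block read regardless of book size, at the price of one key in memory per block.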