Author: Robert Hyatt
Date: 21:06:09 05/07/01
On May 07, 2001 at 13:29:22, Dann Corbit wrote:

>On May 06, 2001 at 22:15:47, James Swafford wrote:
>
>>On May 06, 2001 at 11:25:43, mike schoonover wrote:
>>
>>>yes folks it's the new chessmaster 1,000,000!!
>>>now with 16 man egtb's
>>
>>You're kidding about the 16 man egtb's, right?
>>:-)
>>
>>Has anyone done the math to figure out how much space
>>such tablebases would require? I'm sure it's unbelievably
>>huge.
>
>At some point, it will cost so much to search the tables that your time will be
>exhausted before you can ever find it. Suppose (for instance) that you have
>10^20 bytes stored in the table and you can read one billion bytes per second.
>How long will it take you to read it? 10^11 seconds to read the whole table
>(3000 years).

Fortunately we don't read a whole table. We read just the block with the
current position's score, and access that... And we only do this after a
capture takes us down to the right number of pieces to probe a table.

>On the other hand, a multi-level indexing scheme (or perhaps octree-type indexes
>where board directions are viewed as dimensions) might be used to make finding
>things feasible.

Tables are not "searched" at all. They are direct-access to mate-in-N scores
for each position. We don't search for a position. The position is used as a
direct index into the file to the right byte.

>My brother-in-law's dad has a patent for a technology that will store a terabyte
>on a square centimeter (conservatively). So information density may not be the
>ultimate bottleneck. But the ability to find something in an ocean of data like
>that will require some clever thinking.
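To make the "direct index, no search" point concrete, here is a minimal C sketch.
The file name "kqk.tb", the Position struct, and index_position() are hypothetical
stand-ins; real probing code (the Nalimov tables Crafty uses, for instance) has a
more elaborate indexing function plus compression and block caching, but the
principle is the same: the position itself is the address.

    #include <stdio.h>

    /* Hypothetical KQ vs K table: byte i of "kqk.tb" holds the score
     * (e.g. mate-in-N encoded in one byte) for the position whose
     * index is i.  No search anywhere, just a seek and a read. */

    typedef struct {
        int wk, wq, bk;            /* piece squares, 0..63 */
    } Position;

    /* Stand-in indexing function: map the piece squares to a unique offset. */
    static long index_position(const Position *p) {
        return ((long)p->wk * 64 + p->wq) * 64 + p->bk;
    }

    int probe_table(const Position *p, int *score) {
        FILE *tb = fopen("kqk.tb", "rb");
        if (tb == NULL)
            return 0;                          /* table not on disk */
        if (fseek(tb, index_position(p), SEEK_SET) != 0) {   /* jump straight to the byte */
            fclose(tb);
            return 0;
        }
        int c = fgetc(tb);                     /* read that position's score */
        fclose(tb);
        if (c == EOF)
            return 0;
        *score = c;
        return 1;
    }

The cost per probe is one seek plus one small read, independent of how large the
table is, which is why the "3000 years to read it all" figure never comes into play.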