Author: Angrim
Date: 14:07:34 10/24/01
On October 23, 2001 at 18:09:50, Ratko V Tomic wrote:

>>1 bit per position value, your savings are not that good. The
>>current tablebase method takes around 1.3 bits per position
>>after compression.
>
>Is the 1.3 bits/position valid only for archiving the database
>on disk, or is it a working size (used directly during
>search)? I.e., to look up the data in the search tree, does
>one have to expand the blocks of tables? If it does need
>block-level expansion, then that refutes Bob's 1-lookup-per-leaf-node
>assertion, since if you need to decompress an entire block with
>thousands of distance-to-mate values for one lookup, that's no
>different than having to follow a sequence of, say, 100 best moves.

It does need block-level decompression, just like the method you were suggesting would (since you were talking about applying arithmetic compression). This does not in any way refute Bob's 1-lookup-per-leaf-node assertion, since an entire block of compressed data is read in with a single IO operation. In contrast, your method would require reading up to 100 blocks of data scattered throughout the two relevant tablebase files.

Angrim
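The distinction being argued above is that a block-compressed table costs one block fetch plus one decompression per probe, not one fetch per value. A minimal sketch of that idea (the block size, the toy value function, and the use of zlib are all illustrative assumptions, not the actual tablebase format):

```python
import zlib

BLOCK_SIZE = 4096  # positions per block; a hypothetical choice

# Toy "tablebase": one distance-to-mate byte per position, stored as
# independently compressed fixed-size blocks, so any probe touches
# exactly one block.
values = bytes((i * 7) % 50 for i in range(3 * BLOCK_SIZE))
blocks = [zlib.compress(values[i:i + BLOCK_SIZE])
          for i in range(0, len(values), BLOCK_SIZE)]

def probe(index):
    """One probe = one block fetch + one decompression.

    Against a real tablebase file the fetch is a single IO operation;
    here the in-memory block list stands in for the file."""
    block = zlib.decompress(blocks[index // BLOCK_SIZE])
    return block[index % BLOCK_SIZE]

assert probe(5000) == values[5000]
```

The point of the fixed block size is that `index // BLOCK_SIZE` locates the block directly, so the per-leaf cost stays at one read regardless of how many values the block holds; following a 100-move line instead would scatter up to 100 such reads across the files.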
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.