Computer Chess Club Archives



Subject: Re: Nalimov EGTB compression

Author: Robert Hyatt

Date: 21:28:24 08/14/05



On August 14, 2005 at 22:03:21, Joshua Shriver wrote:

>Anyone familiar with the compression algorithm used in the Nalimov EGTBs?
>Was wondering if bzip2 could be a possible alternative.
>
>I'm about to download the 6-men TBs, and at a gig each it would be nice if bzip2
>could crunch them down even further.
>
>Just a thought, since doing file I/O w/ bzip2 is pretty simple.
>
>-Josh


There is a big problem.  Suppose you want to read the byte at offset
1,000,000,000 from the beginning of the file.  Serial compression requires
serial decompression: you have to read and decompress all of the data in front
of the byte you want just to reach it.
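
To make that concrete, here is a minimal sketch (mine, not anything from the
tablebase code) of what fetching one byte out of a plain bzip2 file has to
look like, assuming libbz2's streaming read API.  Note the loop: everything
in front of the target byte gets decompressed and thrown away.

  /* Minimal sketch: fetch the byte at 'offset' from a bzip2-compressed
     file.  There is no way to seek in the compressed stream, so we must
     decompress and discard every byte before the one we want.
     Compile with -lbz2. */
  #include <stdio.h>
  #include <bzlib.h>

  int read_byte_at(const char *path, long long offset, unsigned char *out) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    int bzerr;
    BZFILE *bz = BZ2_bzReadOpen(&bzerr, f, 0, 0, NULL, 0);
    if (bzerr != BZ_OK) { fclose(f); return -1; }
    unsigned char buf[65536];
    long long seen = 0;
    int rc = -1;
    for (;;) {
      int n = BZ2_bzRead(&bzerr, bz, buf, (int)sizeof buf);
      if (bzerr != BZ_OK && bzerr != BZ_STREAM_END) break;
      if (seen + n > offset) {            /* target byte is in this chunk */
        *out = buf[offset - seen];
        rc = 0;
        break;
      }
      seen += n;                          /* discard and keep going */
      if (bzerr == BZ_STREAM_END) break;  /* hit EOF before the offset */
    }
    BZ2_bzReadClose(&bzerr, bz);
    fclose(f);
    return rc;
  }

At a billion-byte offset that loop decompresses a gigabyte per probe, which
is why a stock stream compressor is a non-starter inside a search.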

The EGTB compression uses "blocks" so that we can read a single block from
anywhere in the file and decompress just that block, rather than all the
preceding data.  You can make the block size bigger to improve the
compression ratio, but it costs you at decompression time, which happens,
unfortunately, right in the middle of your tree search.  NPS goes into the
toilet.
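
In sketch form, a probe looks something like the following.  This is a
hypothetical illustration, not Nalimov's actual code: the struct, the 8K
block size, and the decompress_block() stand-in are all my own names.  The
point is just that one probe costs one seek plus one block's worth of
decompression.

  /* Hypothetical block-probe sketch (names and layout are mine, not the
     real Nalimov decoder).  Blocks are compressed independently, and an
     index of nblocks+1 compressed-file offsets locates each one. */
  #include <stdio.h>
  #include <stdlib.h>

  #define BLOCK_SIZE 8192        /* uncompressed bytes per block (assumed) */

  typedef struct {
    FILE *f;
    long *block_offset;          /* nblocks+1 entries: offset of each block */
    long  nblocks;
  } TBFile;

  /* Stand-in for whatever per-block codec the file actually uses. */
  extern int decompress_block(const unsigned char *in, int in_len,
                              unsigned char *out, int out_len);

  int probe_byte(TBFile *tb, long long offset, unsigned char *out) {
    long blk = (long)(offset / BLOCK_SIZE);  /* which block holds the byte */
    if (blk >= tb->nblocks) return -1;
    long start = tb->block_offset[blk];
    long clen  = tb->block_offset[blk + 1] - start;
    unsigned char *cbuf = malloc((size_t)clen);
    unsigned char ubuf[BLOCK_SIZE];
    if (!cbuf) return -1;
    if (fseek(tb->f, start, SEEK_SET) != 0 ||            /* one seek...   */
        fread(cbuf, 1, (size_t)clen, tb->f) != (size_t)clen) {
      free(cbuf);
      return -1;
    }
    int n = decompress_block(cbuf, (int)clen, ubuf, BLOCK_SIZE); /* ...one
                                                                    block */
    free(cbuf);
    if (n <= (int)(offset % BLOCK_SIZE)) return -1;
    *out = ubuf[offset % BLOCK_SIZE];
    return 0;
  }

Real probe code also keeps a cache of recently decompressed blocks, since
successive probes tend to hit the same region of the file.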

When we started this stuff, I ran a _bunch_ of tests for Eugene to choose the
best compression block size from a tree-search-speed vs. compression-efficiency
point of view...


