Computer Chess Club Archives


Subject: Re: EGTB - dumb questions?

Author: Pham Minh Tri

Date: 00:31:27 08/08/01



On August 07, 2001 at 10:45:47, Robert Hyatt wrote:

>On August 07, 2001 at 01:02:51, Pham Minh Tri wrote:
>
>>[snip]
>>>>4) Could someone explain the technique of compressing TBs (how good/fast, what
>>>>kind and how different from normal one)?
>>>>Many thanks in advance.
>>>
>>>
>>>
>>>Pretty similar to normal compression.  But if you know you are compressing
>>>bytes, particularly when you have lots of "zeroes" (draw scores) then you can
>>>compress more efficiently than if you are trying to compress other types of
>>>data (say ASCII which has many zero bits).
>>
>>But normally compressed data must be decompressed before it can be used. On
>>the other hand, I know that we can use compressed TBs while computing. Maybe
>>we need a more efficient method (fast or partial decompression) for TBs?
>
>
>The nalimov (.emd) tables are compressed using a tablebase-specific compression
>algorithm.  It is different from normal compression in two distinct ways.
>
>(1) it is specific to the type of data stored in tablebases, which lets it do a
>better job of compression than a general-purpose compression algorithm.
>
>(2) it compresses in "chunks" so that a single chunk can be decompressed as
>needed without having to decompress the entire file. This is why most find that
>using the compressed (.emd) files is actually faster than using the files that
>have been previously uncompressed and saved on disk.  The compressed versions
>require less total I/O bandwidth since when you read in an 8K block, you get
>way more than 8K of real table data.
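
If I follow this correctly, the probe path is roughly the sketch below. This is
only my guess at the mechanism; the chunk size, the index layout, and the toy
run-length decoder are made up by me, not the real Nalimov/Kadatch code:

#include <stdio.h>
#include <string.h>

#define CHUNK_SIZE 8192   /* uncompressed bytes per chunk -- my guess, not the real value */

/* Toy run-length decoder standing in for the real (much smarter) scheme:
   each (count, value) pair expands to `count` copies of `value`, so a long
   run of identical draw scores costs only two bytes in the file. */
static void toy_decompress(unsigned char *out, const unsigned char *in, long in_len)
{
    long i, j = 0;
    for (i = 0; i + 1 < in_len; i += 2) {
        memset(out + j, in[i + 1], in[i]);
        j += in[i];
    }
}

/* Look up one score.  chunk_offset[] would hold the file offset of each
   compressed chunk, so chunk_offset[i+1] - chunk_offset[i] is its stored size. */
unsigned char probe_compressed(FILE *tb, const long *chunk_offset, long pos_index)
{
    unsigned char packed[CHUNK_SIZE];          /* compressed chunk (<= CHUNK_SIZE here) */
    unsigned char plain[CHUNK_SIZE];           /* its decompressed contents */
    long chunk  = pos_index / CHUNK_SIZE;      /* which chunk holds this position */
    long within = pos_index % CHUNK_SIZE;      /* where the score sits inside it */
    long clen   = chunk_offset[chunk + 1] - chunk_offset[chunk];

    fseek(tb, chunk_offset[chunk], SEEK_SET);  /* one seek ...            */
    fread(packed, 1, (size_t)clen, tb);        /* ... one small read ...  */
    toy_decompress(plain, packed, clen);       /* ... one chunk expanded  */
    return plain[within];                      /* the single score wanted */
}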

I still do not understand this point. Why is the compressed version faster than
the uncompressed one when we only need to read a few bytes? We only need to look
at one cell of the TB at a time, so reading just those bytes directly from the
uncompressed version seems simpler and quicker than reading a whole chunk,
decompressing it, and then retrieving a few bytes (the rest will probably go
unused). Besides, all operating systems have very good disk buffers, which
already reduce disk accesses a great deal without any further effort. Could you
explain more?
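
To show what I am assuming, the direct lookup from an uncompressed file would be
just this (a hypothetical sketch, not anyone's actual code):

#include <stdio.h>

/* My mental model: one probe of an uncompressed table is a seek plus a
   one-byte read, and the OS disk cache absorbs repeated nearby probes. */
unsigned char probe_uncompressed(FILE *tb, long pos_index)
{
    unsigned char score;
    fseek(tb, pos_index, SEEK_SET);   /* jump straight to the cell */
    fread(&score, 1, 1, tb);          /* read exactly the byte we need */
    return score;
}

That looks like less work per probe to me, which is why the chunked scheme being
faster surprises me.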

>I don't think there is a faster or more efficient way of doing this than what is
>already being done.  We played with the "chunk size" quite a bit, with me
>running lots of test games, to find the "optimal value".


