Computer Chess Club Archives



Subject: Re: Tablebase cache = LRU cache?

Author: Robert Hyatt

Date: 12:12:26 12/13/01



On December 13, 2001 at 05:41:28, guy haworth wrote:

>
>I think the tablebase cache is just used to keep a list of tablebase'd positions
>previously accessed, together with their values and depths.
>
>I have no stats but would not be surprised if positions turned up repeatedly -
>in which case Eugene Nalimov is taking advantage of this.
>
>Memory suggests that Bob Hyatt had something to do with the optimum
>runtime format of Eugene's tablebases, discovering (e.g.) that 8KB was the best
>block size for runtime access, though not for file size, for which the biggest
>possible block size is best.
>
>Maybe Rob had something to do with the cache size as well.
>
>G


When Eugene did the first cut at compressing/decompressing on the fly, he
asked me to run some tests to find the optimum blocksize.  It turns out that
this is not as easy as you might think, because the speed of the disk (transfer
rate) figures into the equation.  I ran the tests on multiple machines (my quad
Xeon 400, my quad Pentium Pro 200, and a couple of IDE-disk machines).

I discovered that on my machine, using 10K-RPM 80 MB/sec SCSI drives, no
compression was best, but only by a small margin.  The best performance across
all machines came from the 8K block size, which is the number I reported to
Eugene, along with some raw NPS data to support it.

That's where the 8K came from.  I currently use compressed files simply to
conserve disk space, even though non-compressed files are slightly faster on
my fast disk drives.

The cache size is a different issue.  It directly affects performance: the
cache is an LRU replacement cache that holds old blocks of TBs so that if
they are needed again, no I/O is required.  Bigger is always better, so long
as you don't eat into the normal hash table size, because (for Crafty, anyway)
once an EGTB hit occurs, the result goes into the normal hash table to avoid
even calling the EGTB probe code later.

I typically run with 32 MB for the EGTB cache size, unless I am in something
like a CCT-type event, where I might run it up to 64 MB.





Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.