Computer Chess Club Archives


Subject: Re: run-length-encoding question (slightly off-topic)

Author: Dan Andersson

Date: 14:05:09 03/27/02

It might be so. But Huffman decoding can be sped up quite a bit by spending a
tad more memory on a decoding cache and lookup table. As long as they fit in
the CPU cache it might be feasible. Whether it is depends on A) the compression
ratios achieved and B) the penalty for memory accesses, where A depends on the
nature of the data and B depends heavily on A.
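For concreteness, here is a minimal sketch in C of the kind of table-driven
decoder meant here. It assumes no code is longer than LOOKUP_BITS bits and
that decode_table has already been built from the Huffman code; the names and
layout are mine, not from any particular engine:

#include <stdint.h>
#include <stddef.h>

#define LOOKUP_BITS 8                 /* table has 2^8 = 256 entries     */

typedef struct {
    uint8_t symbol;                   /* decoded symbol                  */
    uint8_t length;                   /* true code length in bits        */
} LookupEntry;

/* decode_table[w] answers: "if the next LOOKUP_BITS bits of input are w,
   which symbol do they begin with and how many bits does it occupy?"
   A code of length L fills 2^(LOOKUP_BITS - L) consecutive slots.      */
static LookupEntry decode_table[1 << LOOKUP_BITS];

/* Decode in_bits bits from 'in' into 'out', MSB-first.  Assumes the
   caller pads 'in' with one extra readable byte so the final window
   peek never reads past the buffer.                                    */
size_t huff_decode(const uint8_t *in, size_t in_bits,
                   uint8_t *out, size_t out_cap)
{
    size_t bitpos = 0, n = 0;

    while (bitpos < in_bits && n < out_cap) {
        size_t   byte  = bitpos >> 3;     /* current byte index         */
        unsigned shift = bitpos & 7;      /* bits already consumed      */

        /* Peek the next LOOKUP_BITS bits; they may straddle a byte.    */
        unsigned window = ((unsigned)in[byte] << 8) | in[byte + 1];
        window = (window >> (16 - LOOKUP_BITS - shift))
                 & ((1u << LOOKUP_BITS) - 1);

        LookupEntry e = decode_table[window];
        out[n++] = e.symbol;
        bitpos += e.length;               /* consume only the bits used */
    }
    return n;                             /* symbols produced           */
}

One table lookup per symbol replaces a bit-by-bit tree walk, which is exactly
why keeping the table inside the CPU cache matters so much.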
Modern CPUs have far outstripped the ability of main memory to supply them with
data, and hard disks are dog slow, so what was true a few years ago must be
re-examined. A hybrid scheme might be envisioned (all depending on the data
being compressed) where the files on disk use heavier compression. (There are
systems that manage to compile source code, or intermediate representations,
and start the program faster than the pre-compiled program could be loaded,
due to the greater size of the executable.)
The gain in loading time could then be used to decompress to RLE or even
Huffman code in memory, and the resulting memory block might then be decoded
further in the cache, gaining from the smaller amount of memory transferred.
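A sketch of what that in-memory RLE tier could look like, again in C. The
simple (count, value) pair format is my assumption for illustration; real
schemes usually add an escape mechanism so literal runs are not inflated:

#include <stdint.h>
#include <stddef.h>

/* Expand (count, value) byte pairs.  The sequential writes stream well
   through the cache, which is the point of doing this stage last.      */
size_t rle_decode(const uint8_t *in, size_t in_len,
                  uint8_t *out, size_t out_cap)
{
    size_t n = 0;
    for (size_t i = 0; i + 1 < in_len; i += 2) {
        uint8_t count = in[i];
        uint8_t value = in[i + 1];
        if (n + count > out_cap)
            return n;                 /* output buffer full: stop early */
        for (uint8_t k = 0; k < count; k++)
            out[n++] = value;
    }
    return n;
}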
Such a multi-tiered approach might be possible, and maybe even able to supply
the CPU with the data it needs, but the tuning and scheduling of it might be a
nightmare. Or a pure joy. This is all fever-driven speculation from me, and I
know just enough compression theory to be dangerous :) But the main point is
that algorithmic approaches that were infeasible a few years ago might fit
today's or tomorrow's architectures better. Boy, this is interesting!
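Purely to make the speculation concrete, the tiers might be glued together
like this, with zlib standing in for the "heavier" on-disk codec and
huff_decode being the sketch above; the function shape and buffer handling
are hypothetical, not a real API:

#include <zlib.h>   /* uncompress(); stand-in for the heavy disk codec  */

/* Tier 1: inflate the heavily compressed file image from disk into a
   Huffman-coded block in main memory (an RLE block would work the same
   way).  Tier 2: that block is much smaller than the raw data, so less
   crosses the memory bus.  Tier 3: table-decode it near the cache.     */
size_t load_tiered(const unsigned char *disk, unsigned long disk_len,
                   unsigned char *huff, unsigned long huff_cap,
                   unsigned char *out,  size_t out_cap)
{
    unsigned long huff_len = huff_cap;
    if (uncompress(huff, &huff_len, disk, disk_len) != Z_OK)
        return 0;                            /* disk tier failed        */

    return huff_decode(huff, huff_len * 8, out, out_cap);
}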

Best regards, Dan Andersson


