Computer Chess Club Archives


Subject: Re: Checkers: Las Vegas and Chinook

Author: Ren Wu

Date: 17:37:07 09/10/02



On September 10, 2002 at 18:09:33, martin fierz wrote:

>On September 10, 2002 at 15:41:45, Ren Wu wrote:
>
>>Hi, martin
>>
>>>e.g. i generated the 8-piece db myself, it is
>>>smaller than the chinook db, the access code is faster
>>
>>I wonder if you can give us some details on the methods you used to reduce the
>>database size? It seems that Chinook's index function is already pretty much
>>minimal. Did you throw away all positions where a capture is possible?
>
>just to put this in the right context: i just copied what i found in the chinook
>paper and found some slight improvements. but mainly i copied what they had done
>before me.
>throwing away capture positions is standard in checkers. it takes about 75% of
>all positions out, and they are typically "offbeat" values. chinook did that and
>i do that.
>my db is smaller than the chinook db, but not much, ~20-25%. i had no time to
>work on it at all, because i had to get an opening book generator going for the
>las vegas tournament, so i just modified the chinook scheme. if you already know
>what the chinook compression is doing, then the short answer is i only store 4
>values per byte instead of 5, but have a lot more skip-values.

Thanks, I see. I thought you had found a way to reduce the uncompressed db size
(or index range). But of course, a smaller compressed db also makes a lot of
sense.
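To make the trade-off above concrete: with win/loss/draw values there are 3^5 = 243 ways to pack 5 values into one byte, leaving only 13 spare byte codes for run-length "skips", whereas 3^4 = 81 codes for 4 values leaves 175 spare codes. A minimal sketch of the 4-per-byte packing (my own function names, not martin's actual code):

```python
# Each position's value is win/loss/draw, encoded as 0/1/2.
# 5 values per byte: 3**5 = 243 codes used, 13 left for skips.
# 4 values per byte: 3**4 = 81 codes used, 175 left for skips,
# so far more run lengths fit into a single byte.

def pack4(values):
    """Pack 4 ternary (0/1/2) values into one byte code 0..80."""
    assert len(values) == 4 and all(v in (0, 1, 2) for v in values)
    code = 0
    for v in values:
        code = code * 3 + v
    return code

def unpack4(code):
    """Inverse of pack4: recover the 4 ternary values."""
    values = []
    for _ in range(4):
        values.append(code % 3)
        code //= 3
    return values[::-1]
```

Codes 81..255 would then be free for skip/run-length markers, which is where the extra compression comes from despite storing fewer values per byte.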

>if i had had more time, maybe i would have tried something different. i don't
>know if RLE is the best way to go - if i compress my endgame databases with
>datacomp, which is used for chess EGTBs, then i get ~40% better compression. i
>don't know how fast decompression is with that approach, and i never tried
>because i don't really know what datacomp does :-(
>but maybe that would give better overall performance, if it is not much slower
>than RLE. the problem is it really has to be fast - the database is accessed
>very often, unlike chess (like 50% of the nodes are db lookups once you're out
>of book).

It would be very interesting to learn how these two compare, if you ever have
time to try it out.
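The speed concern above is easy to see from the shape of an RLE lookup. A minimal sketch (my own function name; real implementations index into fixed-size blocks so the scan stays short):

```python
def rle_lookup(runs, i):
    """Return the value at index i in run-length-encoded data.
    'runs' is a list of (value, length) pairs. Cost is linear in
    the number of runs scanned, which is why RLE blocks are kept
    small and a side table points at block boundaries."""
    for value, length in runs:
        if i < length:
            return value
        i -= length
    raise IndexError(i)
```

Decompression here is just integer subtraction per run, which is hard to beat when half the search nodes are db probes; a general-purpose compressor would have to decode a whole block before answering one probe.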

>my db indexing functions are different from chinook's in that i don't have a 1-1
>correspondence of index <-> position. i have some index values which map to
>impossible positions. first, i set up black men, then white men, disregarding
>the black men, then the kings of each side, this time taking the other pieces
>into account. this makes for a slightly larger index range, and for some gaps
>with impossible positions. the advantage is that it simplifies the index
>calculation greatly. ed gilbert who uses the 8-piece chinook db and mine, tells
>me that the chinook indexing scheme builds a huge table to compute its indices,
>because it's much more complicated than my scheme. that table takes 40MB ram. so
>i'm not sure if my indexing with gaps isn't more efficient overall.
>for comparison, the compressed 4kings-4kings slice in my db is 30% smaller than
>the corresponding chinook slice. it has no gaps in my scheme. the 4men-4men
>slice is 15% smaller. that's the slice which has the largest problem with my
>indexing with gaps. uncompressed, my 4men-4men slice is 70% larger than the
>chinook 4men-4men. my general estimate is that my gaps cost me ~5% on average.
>which, compared to 40MB ram is not that much unless you happen to have 1Gig or
>more.

It is very interesting to learn that a db with a wider index range can end up
smaller after compression.
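The gap-tolerant scheme described above can be sketched as follows (my own function names, men only for simplicity, using a colexicographic combination rank; not martin's actual code). Because the white men are ranked over all 32 squares while ignoring the black men, index values where the two sets overlap map to impossible positions, i.e. the gaps:

```python
from math import comb

def rank_combination(squares, n_squares):
    """Colex rank of a set of occupied squares among all
    C(n_squares, k) placements of k pieces."""
    return sum(comb(sq, i + 1) for i, sq in enumerate(sorted(squares)))

def index_position(black_men, white_men, n_squares=32):
    """Index = rank of black men over all 32 squares, combined
    with rank of white men over all 32 squares *ignoring* the
    black men. Simple arithmetic, but indices where a white man
    sits on a black man's square are gaps (impossible positions)."""
    white_range = comb(n_squares, len(white_men))
    return (rank_combination(black_men, n_squares) * white_range
            + rank_combination(white_men, n_squares))
```

For example, `index_position((0, 1), (0, 3))` is a perfectly valid index even though the position (both sides on square 0) is impossible; a 1-1 scheme like Chinook's avoids such indices at the cost of a much more complicated (table-driven) calculation.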


Thanks.



