Computer Chess Club Archives



Subject: Re: Any programs besides Yace and Patzer that can use bitbase files

Author: Dieter Buerssner

Date: 17:00:18 06/19/04



On June 18, 2004 at 18:04:02, Dann Corbit wrote:

>On June 18, 2004 at 11:49:50, Dieter Buerssner wrote:
>
>>On June 18, 2004 at 10:10:55, Fabien Letouzey wrote:
>>
>>>On June 17, 2004 at 21:05:21, Dann Corbit wrote:
>>>
>>>>On June 17, 2004 at 19:32:17, Dieter Buerssner wrote:
>>>
>>>>>If anybody wants to implement "bitbases", and really doesn't want to do it more
>>>>>or less from ideas of their own, the articles of Ernst Heinz show everything
>>>>>needed. After that, it should only be a programming exercise. The articles are
>>>>>available at http://supertech.lcs.mit.edu/~heinz/node17.html
>>>
>>>>>Especially interesting in this context:
>>>
>>>>>Space-efficient indexing of chess endgame tables.
>>>>>Knowledgeable encoding and querying of endgame databases.
>>>>>Endgame databases and efficient index schemes.
>>>>>Efficient interior-node recognition.
>>>
>>>>Yes, I have those papers.
>>>
>>>Do I correctly remember they don't address compression at all?
>>
>>Correct (from my memory).
>>
>>>Uncompressed is
>>>OK only for 4-man bases.
>>
>>All 4-men use about 14 MB here. For a typical modern computer, loading one or a
>>few 5-men bitbases into RAM is no big problem. Yace does this for KPPKP now
>>(around 40 MB). One could dynamically load an interesting bitbase depending on
>>the position. In long time-control games, it would even be possible to create
>>them when needed from (say) Nalimov TBs.
>>
>>For example KRPKR would need
>>
>>  3612*62*61*24/5 = ~65 MB
>>
>>for one side to move (one side could be enough, giving cutoffs at least one ply
>>later than both-sides-to-move tables would) with a typical, not too
>>sophisticated indexing scheme. Actually, for KRPKR, W/D only would probably
>>work well, giving a divisor of 8 instead of 5 above.
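The arithmetic above can be checked quickly (using the post's factors: 3612 king-pair placements under pawnful symmetry, 62 and 61 free squares for the two rooks, 24 pawn squares, and 5 resp. 8 results per byte):

```python
# Rough size of a one-side-to-move KRPKR bitbase with the simple
# indexing scheme from the post.  3612 = king-pair placements (pawn
# symmetry), 62 * 61 = squares for the two rooks, 24 = pawn squares
# (files a-d after mirroring, ranks 2-7).  Five W/L/D results fit in
# one byte; W/D only needs one bit per position, eight per byte.
positions = 3612 * 62 * 61 * 24
print(positions // 5)   # W/L/D: ~65.6 million bytes, i.e. ~65 MB
print(positions // 8)   # W/D only: ~41 MB
```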
>
>I think the best way to store the records is in a database with a unique index
>for tables of more than 5 chessmen.  For smaller tables, just hold it in memory.

Dann, what do you mean by "records" here? The material constellation? There
are not too many of those, and I can imagine several rather efficient and
easy-to-program methods to come up with the table corresponding to the material
without the use of a database. If you mean the results of the positions, I
cannot imagine that a database could come anywhere close to the typical
indexing scheme used, either in space or in access time.

>>>I suggest you read articles about other (converging) games.  Checkers/Draughts
>>>of course, but also Awari and even Nine Men's Morris (I can give you pointers
>>>if you don't find them).  They usually use RLE/LZ/LZH to compress fixed-size
>>>blocks.
>>
>>Using compression may slow things down significantly, making it unattractive
>>to probe at each node. Also, some memory overcommitment might not really hurt:
>>unused parts of the bitbases will be swapped out. For material constellations
>>with pawns, one could make the pawns (especially the columns of the pawns) the
>>most significant part of the index. Because pawns don't change columns without
>>changing the material constellation, large parts of the RAM used by the
>>bitbase will be unused (in many situations).
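The pawn-column idea can be sketched as an index layout where the pawn's file is the most significant component (the sub-index sizes here are illustrative, not Yace's actual scheme):

```python
# Hypothetical one-pawn index layout: the pawn's file (0..3 after
# horizontal mirroring) is the most significant component, so all
# positions sharing that file occupy one contiguous slice of the
# table, and the slices for other files can stay swapped out.
SUB = 3612 * 62 * 61 * 6   # king pair * two other pieces * 6 pawn ranks

def index(pawn_file, sub_index):
    return pawn_file * SUB + sub_index
```

Since a pawn can only leave its file by capturing (which changes the material constellation and hence the table), a game touches at most one slice per table.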
>>
>>>More ambitious approaches to compression are knowledge-based; store only the
>>>exceptions to some rules.  Apparently the author of KnightDreamer
>>>(http://www3.tripnet.se/~owemelin/johan/KnightDreamer.html) has had some success
>>>with machine learning (unfortunately also only private stuff).  Bear in mind he
>>>had to provide the rules in the first place, of course.
>>
>>I think Delphi also uses some knowledge-based approach (perhaps even combined
>>with compression).
>>
>>>>>But with knowledge of the basic ideas, such as how to use symmetry to index
>>>>>TB-positions, many people will be able to come up with similar ideas that
>>>>>should also work well (and perhaps identically). It might be more fun.
>>>
>>>>I sometimes wonder if there is a fundamentally better way to compress.  I have
>>>>thought about a minimal perfect hash for every legal move on a sparse board.  We
>>>>could encode it in a minimum number of bits, amounting to the number of possible
>>>>total moves for a set (as reduced by reflections, rotations, etc).
>>>
>>>>Of course, you would have to compute a new hash for each new set.
>>>
>>>How does this relate to TBs, which are position-based?
>>>
>>>Do you mean storing the best move for each position?  It can't be smaller than a
>>>W/L/D result.
>>
>>Like Fabien, I really don't understand this, either. One can consider the index
>>calculation of TBs as a perfect hash function, a very efficient and dense one,
>>too.
>>
>>Compared to typical distance to mate/conversion/... formats (using at least one
>>byte per position), storing moves (in a very compact way) might be an
>>alternative, using a similar amount of space (uncompressed).
>
>You have some position, and you want to find the answer.  The chess engine does
>this:
>compute the unique key for this position.  For instance, this:
>6nk/6pp/8/2R5/2R5/R1p1RR2/2R3PP/2R3NK w - -
>is equivalent to all of these:
>6nk/6pp/8/2R5/2R5/R1p1RR2/2R3PP/2R3NK w - -
>kn6/pp6/8/5R2/5R2/2RR1p1R/PP3R2/KN3R2 w - -
>2r3nk/2r3pp/r1P1rr2/2r5/2r5/8/6PP/6NK b - -
>kn3r2/pp3r2/2rr1P1r/5r2/5r2/8/PP6/KN6 b - -
>
>So we form the permutations, sort them, and use the lexically least as the key.
>Now, we do not use the key as is.  Instead, we feed it to our perfect hash
>function.  Suppose that in a 4-man tablebase there are 500,000 legal
>possibilities after all redundancies are removed.  Then the perfect hash
>function we generated will return a number between 0 and 499,999.  It will take
>19 bits to encode this number, but we will use an int to hold it.
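The symmetry reduction behind those equivalent FENs can be sketched like this (a hedged illustration, not Dann's code: side-to-move and en-passant handling are omitted, and the (piece, square) encoding is made up for the example):

```python
# Canonicalize a position under two board symmetries: the horizontal
# mirror flips files (sq ^ 7); the vertical flip swaps ranks (sq ^ 56)
# and, to remain an equivalence for chess, must be combined with a
# colour swap (upper/lower case here).  The lexically least variant
# serves as the key for the whole equivalence class.
def mirror_h(pos):                    # pos: tuple of (piece, square)
    return tuple(sorted((p, sq ^ 7) for p, sq in pos))

def flip_v_colour(pos):               # flip ranks, swap piece colours
    return tuple(sorted((p.swapcase(), sq ^ 56) for p, sq in pos))

def canonical(pos):
    pos = tuple(sorted(pos))
    variants = {pos, mirror_h(pos), flip_v_colour(pos),
                mirror_h(flip_v_colour(pos))}
    return min(variants)              # lexically least member is the key
```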

The normal indexing functions already have very little redundancy (and are very
fast to calculate).
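Such an indexing function can be sketched as follows (illustrative only): place the pieces one by one and rank each later piece only among the still-free squares, which yields the 64, 63, 62, ... factors and a collision-free, nearly gap-free index:

```python
# Dense index for pieces on distinct squares: each later piece is
# ranked only among the squares still free, so distinct placements map
# injectively into a range of size 64 * 63 * 62 * ... with no slots
# wasted on coincident squares.  (King symmetries, which shrink the
# range further, are omitted for brevity.)
def dense_index(squares):
    idx = 0
    placed = []
    for sq in squares:
        # rank of sq among the squares not yet occupied
        free_rank = sq - sum(1 for p in placed if p < sq)
        idx = idx * (64 - len(placed)) + free_rank
        placed.append(sq)
    return idx
```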

>This int will
>be multiplied by 2 to give us the offset into a bit string.  We collect the two
>bits addressed by our number and they will mean:
>00 lost for side to move
>11 won for side to move
>10 drawn for side to move
>01 this is what the bit string is initialized to, and so we have a coding error.

Or forget about the coding error, and use 1.6 bits per position (which I do in
the bitbases).
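The 1.6 bits come from base-3 packing: five W/L/D trits fit in one byte, since 3^5 = 243 <= 256, giving 8/5 = 1.6 bits per position, close to the log2(3) ~ 1.585-bit limit. A minimal sketch:

```python
# Pack five win/loss/draw results (one trit each, values 0..2) into a
# single byte via base-3, and read one result back out.
def pack5(trits):                 # trits: five values in {0, 1, 2}
    b = 0
    for t in reversed(trits):
        b = b * 3 + t
    return b                      # always in 0..242, fits one byte

def unpack(byte, i):              # result of position i in the byte
    return (byte // 3**i) % 3
```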

>So for the two million positions, boiled down to 500,000 distinct entries, we
>need a total of 1,000,000 bits = 125,000 bytes of storage.
>
>>>>The reason I brought it up was that I think it would be nice if there were a
>>>>standard way to do it.  Imagine if every chess program author made his own
>>>>tablebase files.  Sounds like a clever thing that would encourage innovation.
>>>>That's on the one hand.  On the other hand, there would be 250+ redundant 7 gig
>>>>file sets on my computer.  That would be a bit annoying.
>>
>>Sure, I agree. But with bitbases used from RAM, the space needed will not be
>>that much. It will be comparable to typical RAM sizes, not to typical disk
>>sizes.
>
>What about my ten-man bitbase files?

I think I will never have that problem :-)

Regards,
Dieter





Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.