Computer Chess Club Archives



Subject: Re: Crafty and EGTB Question

Author: Dann Corbit

Date: 14:56:44 06/17/04



On June 17, 2004 at 17:35:59, Ed Trice wrote:

>
>>
>>Take a disk position, and transform it into an EGTB index.
>>
>>Take a disk head.  Seek to a sector.  Load some blocks into memory.  Decompress
>>the blocks.  Get the answer from the decompressed pages and return it.
>>
>>How many evaluations do you think you can perform in this much time?  Hundreds
>>for sure, maybe thousands.
>>
>>It is a mistake to probe every possible position.  For sure, it will make your
>>chess engine play much worse.
>>
>>If you have bitbase files loaded into RAM, it is a good idea to query them on
>>every position.  But an EGTB probe for every possible position will cost you
>>100 Elo, at least.  And the faster the CPU and the more time allocated for the
>>search, the bigger the penalty.
>
>Dan,
>
>I understand the mechanism of how this works. The checkers program I worked on
>with Gil Dodgen has a few trillion game-theoretical value positions, spanning
>something like 120 GB, and with a 2 GB buffer the performance hit is not too
>bad.
>
>Why?
>
>Most-recently-seen position buffering.

Eugene thought of this; most programs call it the EGTB cache.
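
For illustration, a "most recently seen block" cache can be as simple as the
sketch below.  This is not Eugene's actual code; the names, sizes, and hash
scheme are made up.

/*
 * Sketch of a block cache for decompressed tablebase data.  All names,
 * sizes, and the hash scheme here are hypothetical.
 */
#define CACHE_SLOTS 4096                /* 4096 * 8 KB = 32 MB of cache     */
#define BLOCK_SIZE  8192                /* size of one decompressed block   */

struct cache_slot {
    int           valid;
    int           file_id;              /* which tablebase file             */
    long          block_no;             /* which block within that file     */
    unsigned char data[BLOCK_SIZE];
};

static struct cache_slot cache[CACHE_SLOTS];

/* Stand-in for the slow path: seek, read, decompress (as described above). */
extern void load_and_decompress(int file_id, long block_no, unsigned char *out);

/* Return the decompressed block, hitting the disk only on a cache miss. */
unsigned char *get_block(int file_id, long block_no)
{
    unsigned long h = (unsigned long)file_id * 2654435761UL
                    + (unsigned long)block_no;
    struct cache_slot *s = &cache[h % CACHE_SLOTS];

    if (!s->valid || s->file_id != file_id || s->block_no != block_no) {
        load_and_decompress(file_id, block_no, s->data);    /* slow path  */
        s->file_id  = file_id;
        s->block_no = block_no;
        s->valid    = 1;
    }
    return s->data;                                         /* fast path  */
}

A repeated position in an already-resident block then costs only a hash and a
compare, which is why a 2 GB buffer over 120 GB of data can still keep the CPU
busy.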

>With less than 2% of the dbs resident in RAM, I still get above 90% of the CPU
>at all times, and most likely 95% for the overall search.
>
>Does the EGTB schema function on the same principles?

Similar, but I have seen CPU utilization drop to 10% or so on many occasions.
For sure, in the deep endgame it will drop to 50% or less.
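
Some rough arithmetic (round numbers for illustration only): a cache-missing
probe costs a seek plus a read plus decompression, call it 5 to 10 ms on a
2004-era disk, while a node of search costs a microsecond or two, so one miss
can be worth thousands of nodes.  That is why it usually pays to gate the disk
probes, along the lines of the sketch below.  The helpers and the depth
threshold are hypothetical; this is not Crafty's actual rule.

/*
 * Hypothetical sketch of gating disk probes.  The idea: RAM-resident
 * bitbases are cheap enough to ask on every node, but a disk tablebase is
 * only consulted when the probe can pay for itself.
 */
typedef struct Position Position;

extern int total_pieces(const Position *pos);                 /* assumed helper */
extern int bitbase_probe_ram(const Position *pos, int *value); /* assumed helper */
extern int egtb_probe_disk(const Position *pos, int *value);   /* assumed helper */

int probe_endgame_tables(const Position *pos, int remaining_depth, int *value)
{
    if (total_pieces(pos) > 5)          /* assuming only 5-man tables exist */
        return 0;

    /* RAM-resident bitbase: essentially free, ask every time. */
    if (bitbase_probe_ram(pos, value))
        return 1;

    /* Disk tablebase: thousands of times slower than an evaluation, so
       only probe when enough depth remains to justify the cost. */
    if (remaining_depth >= 6)
        return egtb_probe_disk(pos, value);

    return 0;
}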

>We (the checkers world) do not seek the index from the wide panorama of
>positions. We seek to a BLOCK which houses, typically, 4K or 8K worth of indexed
>entries.
>
>We convert the position into an index. We do a binary search for the block. The
>block is further subdivided into markers (typically 64 subdivisions), so we then
>binary-search to the sub-block. Then, and only then, do we decompress for the
>position's value.
>
>Of course the databases for chess also contain distance-to-whatever
>(mate/conversion) but the RAM-resident checkers databases do not.

So the checkers database files are really like the chess bitbase files.  These
do add strength, and they are often held in memory all the time, so there is no
penalty for a lookup.
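
For readers who have not seen that kind of layout, here is a minimal sketch of
the two-level lookup Ed describes.  The structures and names are invented for
illustration; the real checkers databases are organized differently in detail.

#include <stdint.h>

#define MARKERS_PER_BLOCK 64

struct block_dir {                      /* one entry per compressed block      */
    uint64_t first_index;               /* first position index in this block  */
    uint64_t markers[MARKERS_PER_BLOCK];/* first index of each sub-block       */
    long     file_offset;               /* where the compressed block lives    */
};

extern struct block_dir *dir;           /* sorted by first_index               */
extern int n_blocks;
extern int decompress_subblock(long file_offset, int sub, unsigned char *out);

/* Return the stored value (e.g. win/loss/draw) for a position index. */
int db_lookup(uint64_t index)
{
    /* 1. Binary search for the block containing 'index'. */
    int lo = 0, hi = n_blocks - 1;
    while (lo < hi) {
        int mid = (lo + hi + 1) / 2;
        if (dir[mid].first_index <= index) lo = mid; else hi = mid - 1;
    }
    struct block_dir *b = &dir[lo];

    /* 2. Binary search the 64 markers for the right sub-block. */
    int s_lo = 0, s_hi = MARKERS_PER_BLOCK - 1;
    while (s_lo < s_hi) {
        int mid = (s_lo + s_hi + 1) / 2;
        if (b->markers[mid] <= index) s_lo = mid; else s_hi = mid - 1;
    }

    /* 3. Decompress only that sub-block and pull out the value
          (one byte per entry is assumed here). */
    unsigned char sub[8192];
    decompress_subblock(b->file_offset, s_lo, sub);
    return sub[index - b->markers[s_lo]];
}

The point is the same in either game: the expensive decompression is confined
to one small sub-block, and the two binary searches are nearly free.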

>I will pay more attention to the decompression code.
>
>I just cannot imagine that a program can play one heck of a game for 60 moves,
>then get down to R + 2p vs. R + 1p and toss away the draw even after completing
>a 17-ply search.
>
>Probing the R+p vs. R db would have prevented this.

In every measured experiment that I have seen, EGTB use does not increase the
strength of program play in chess.  Bitbase files, on the other hand, do give a
measurable strength gain.

I think the problem is due to the difference between checkers and chess as far
as pieces on the board go.  With checkers you have only two types (kings and
regular men), and promotions are a bit difficult to come by (you don't have
every third move being a promotion most of the time).  So the database lookups
will be similar most of the time.

In chess, you might probe what happens if you take a knight, and then probe what
happens if you take a bishop, and so on for every capture on the board.  So you
are probing a different file every time.

With bitbases (which just store won/loss/draw, not what move to make) you can
hold them in memory because they are much smaller.
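
To put a number on "much smaller": a win/loss/draw result needs only 2 bits per
position, so, for example, a KPK table of roughly 2 x 64 x 64 x 48 = 393,216
positions fits in about 96 KB.  A minimal sketch of that packing follows; the
encoding and names are illustrative only, not any particular bitbase format.

#include <stdint.h>

/* 2 bits per position: 0 = draw, 1 = win, 2 = loss; 4 entries per byte. */
enum { BB_DRAW = 0, BB_WIN = 1, BB_LOSS = 2 };

int bitbase_get(const uint8_t *table, uint64_t index)
{
    return (table[index >> 2] >> ((index & 3) * 2)) & 3;
}

void bitbase_set(uint8_t *table, uint64_t index, int value)
{
    int shift = (int)(index & 3) * 2;
    table[index >> 2] = (uint8_t)((table[index >> 2] & ~(3 << shift))
                                  | (value << shift));
}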

Do a search on the Winboard forum archives or the CCC archives and I think you
will find some experiments that show tablebase files are a wash as far as
program strength goes.


