Computer Chess Club Archives



Subject: Re: Endgame code instead of Tablebases

Author: KarinsDad

Date: 21:21:13 04/17/99


On April 17, 1999 at 11:53:10, José de Jesús García Ruvalcaba wrote:

[snip]
>>
>>If 50% of the average 3, 4, and 5 piece tablebase case is a win, 35% is a draw,
>>and 15% is a loss (just throwing totally unsubstantiated numbers out here), then
>>if you may be able to compress the tablebases for this type of solution by 50%
>>right off the top since 50% of the positions result in the default result for a
>>particular tablebase. Some tablebases might be compressed even more (such as KQ
>>vs. K, KR vs. K, KBN vs. K) since half of those tables are automatically the
>>default result (i.e. if it is the side with the material advantage to move, that
>>side can force a win; and yes, there is a small percentage of cases where this
>>is not true for KBN vs. K).
>>
>
>	How are the tablebases compressed by 50% in that case? Please elaborate, as I
>do not get your point.

If you do not keep the "number of moves" in the tablebase and only store w/l/d
info in it, then the program calculates the best sequence of moves itself once a
tablebase position becomes the root position. Therefore, if 50% of the positions
for a given tablebase case are wins and you probe the tablebase and do not find
the position in it, then you know that you have a winning position. Hence, in
this case, the tablebase only has to keep the draws and losses in the file, and
you save approximately 50% of the file size.
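The scheme above can be sketched in a few lines. This is a minimal illustration only: the position keys, the result encoding, and the table layout are all assumptions for the sketch, not any real tablebase format.

```python
# Sketch of default-result compression for a w/l/d-only tablebase.
# Assumption: positions are hashable keys; the real index scheme is not modeled.

WIN, DRAW, LOSS = "win", "draw", "loss"

class DefaultResultTable:
    def __init__(self, default=WIN):
        self.default = default   # the most common result for this material
        self.exceptions = {}     # only non-default positions are stored

    def store(self, position, result):
        # Positions whose result equals the default cost no space at all.
        if result != self.default:
            self.exceptions[position] = result

    def probe(self, position):
        # A miss means the position had the default result and was omitted.
        return self.exceptions.get(position, self.default)

# If ~50% of positions are wins, only the draws and losses are written out,
# so the file shrinks by roughly half.
table = DefaultResultTable(default=WIN)
table.store("pos_a", WIN)    # not stored
table.store("pos_b", DRAW)   # stored
table.store("pos_c", LOSS)   # stored
```

The probe never distinguishes "stored as a win" from "omitted because winning", which is exactly why the default result has to hold for every position that is left out.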

Let's take KRK with the side with the rook to move. In this case, there HAS to
be a winning move in ALL legal positions for that side. So for that side to
move, the default is a win and there are NO positions at all in the tablebase.
If it is the other side to move, then about 10% of the positions are draws,
either because the lone king can capture the rook or because it is stalemated.
Therefore, with the lone king to move, if the position is not in the table, it
is a win, and approximately 90% of that half of the file is saved. So, overall,
about 95% of this particular 3 piece file has been saved (plus a little more,
since the "number of moves" is not in the table either).
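The 95% figure is just the average of the two sides to move; the 10% draw fraction is the post's rough estimate, not a measured value:

```python
# Worked arithmetic for the KRK estimate above. The two halves of the table
# (split by side to move) are assumed to be the same size.
rook_side_saved = 1.00   # every rook-side-to-move position is the default (win)
lone_king_saved = 0.90   # ~90% wins omitted; the ~10% draws must be stored
total_saved = (rook_side_saved + lone_king_saved) / 2   # about 0.95
```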

>
>>However, since Ernst's paper indicates that you need real scores assigned, it
>>may be difficult to come up with a default result mechanism that works well. And
>>if this is true, my thoughts on removing the "best move" for a position (and
>>saving space that way) is also off since that would have to be replaced with a
>>score anyway. So, it is obvious that a lot more thought has to be put into this.
>>
>
>	Removing the "best move" from what? Current tablebases (Thompson's, Edwards'
>and Nalimov's) do not store a best move for a position.

Yes, I understand that now (I didn't before). However, the idea is the same: you
could remove the "number of moves" until the conclusion from the tablebase in
this type of model, since you would not be using it to re-probe the tablebase
once a tablebase position becomes the root position.
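With only w/l/d information, move selection at the root reduces to probing each successor and keeping the win. A sketch, where `legal_moves`, `make_move`, and `probe` are hypothetical engine helpers, not real APIs:

```python
# Sketch: choosing a root move from w/l/d info alone (no distance-to-mate).
# probe(pos) is assumed to return the result for the side to move in pos.

def pick_winning_move(position, legal_moves, make_move, probe):
    for move in legal_moves(position):
        child = make_move(position, move)
        # After our move it is the opponent's turn there; a "loss" for the
        # side to move in child means our move preserved the win.
        if probe(child) == "loss":
            return move
    return None  # no winning move: the position was not a win to begin with
```

Note that without the move count, a rule like this can wander among winning positions without making progress toward mate, which is why the model needs replacement evaluation code or a special-purpose procedure at the root.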

>
>>And there are other potential issues such as "the algorithmic approach works
>>fine, however, it is much slower than looking it up a tablebase, so it is
>>unusable in blitz", etc.
>>
>
>	Tablebases stored on hard disk are hard to manage, because they are slow to
>probe, even at standard time controls. Something even slower would be
>practically unusable even at standard time controls. I do not see a big
>difference between standard and blitz in this case, as at standard time controls
>tablebase probes will be hitting before, because the program has more time to
>move and a variation can lead to a tablebase position from a position with a
>relatively high amount of pieces on the board.

The slowness is due to hitting the tablebase A LOT during search. Once a
tablebase position is reached (i.e., it is at the root), current tablebase
lookups are REAL fast compared to running a search tree (this, and the accuracy
of the results, are the reasons tablebases are used in the first place).

The new model should actually give better performance when hitting the
tablebases during search (since the tables are smaller, with the default-result
positions not in them). However, it is once you reach a tablebase root position
that the new model slows down. There, instead of checking the ~40 moves out of
the position against the tablebase and determining which one to use, you would
be using replacement evaluation code (for that particular material) to determine
the next best move in your search engine (or maybe you would just call a special
procedure for that case and bypass your search engine and evaluation code
altogether).

So, this model should be faster when you are REALLY slowed down using the
current tablebases and it may be slower (depending on how fast your algorithm is
compared to disk access) when you are REALLY fast using the current tablebases.

I hope this clarified what I meant.

KarinsDad :)





Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.