Computer Chess Club Archives



Subject: Re: selective 7 pieces+ endgame positions storage question

Author: Ricardo Gibert

Date: 11:19:43 06/15/01



On June 15, 2001 at 13:37:46, Itay Ben-Dan wrote:

>On June 14, 2001 at 14:08:28, Scott Gasch wrote:
>
>>On June 14, 2001 at 13:15:29, Itay Ben-Dan wrote:
>>
>>>hello
>>>
>>>I'm a university student starting a chess endgames project,
>>>and I'd like to request info about the following idea:
>>>
>>>suppose I have a smart filter that classifies positions
>>>as interesting to evaluate, e.g. positions the computer
>>>evaluates badly, etc.. and I store this (big) selective set of
>>>positions with the best move to play in each one, e.g.
>>>with an eval (but not a complete solution like tablebases,
>>>because of time/storage constraints) taken at high depth offline,
>>>and allow a chess engine to use these positions similarly to
>>>endgame tablebases - to improve the engine's strength..
>>
>>I'm not sure how the effect of this would differ from present-day
>>"interior node recognizers" or special-case code in your eval, except for
>>requiring a *lot* more space.
>>
>
>
>
>i tried to read a little about interior node recognizers.. I'm not sure I see
>how it is related.. anyway, I didn't explain myself too well.. I'll try to
>explain the idea again:
>as input I start with many endgame positions played by GMs etc., and using a very
>selective filter I decide to store e.g. a few gigabytes of 'interesting'
>positions out of them.. for each position I decide to store, I evaluate it,
>e.g. by doing a very deep search.. I do all this offline, store the
>selected positions in a database of e.g. a few gigabytes, and then make
>an engine use this database to improve its strength in actual games..

Let p = the number of positions you can store in "a few gigabytes".
Let t = the time spent on each "very deep search".

The total time T spent preparing your database will be: T = p*t.

If we "very conservatively" estimate p to be a million and t = 2 minutes, then T
will be 2 million minutes, or about 3.8 years. Your database will cover many fewer
positions than you expect and/or be much shallower than you expect.
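The back-of-envelope estimate above can be sketched in a few lines of Python (p and t are the "very conservative" guesses, not measured figures):

```python
# Back-of-envelope cost of building the proposed database.
# p and t are assumed values, as in the estimate above.
p = 1_000_000   # positions stored in "a few gigabytes"
t = 2           # minutes per "very deep search"

total_minutes = p * t
total_years = total_minutes / (60 * 24 * 365)
print(f"{total_minutes:,} minutes ~= {total_years:.1f} years")
# prints: 2,000,000 minutes ~= 3.8 years
```

Even with massive parallelism or a shallower search, the product p*t stays the binding constraint: halving t or p only halves T.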

You can "speed up" this process with EGTBs, but the number of endings the
5-man tables contribute to your 7-man database will be relatively negligible.

>
>
>
>>>the main problem I see with this idea is that when the engine
>>>actually reaches a position I stored, after the opponent
>>>makes his move, there is no info about how to proceed...
>>
>>If you do not probe this gigantic database at the root search but rather only in
>>the 2nd..Nth ply, you will get a PV with one move on it.  Next time it's your
>>turn, simply search again.
>>
>
>
>yes, but I think that searching again is a problem, because if the stored
>position was searched offline (I mean, before the game started, in a database
>like endgame tablebases etc.) at a very high depth, or examined in other ways
>to reach the eval stored for it, then searching again on the next move may not
>find the correct continuation.. one brute-force way to solve this is to force a
>search by the original engine that gave the stored eval to depth N-2, if N was
>the depth of search of the stored position - to ensure it finds the
>continuation it planned.. but I wonder if there are other, cleverer solutions
>that deal with this problem better, maybe from other aspects..
>
>
>>>also, if anyone knows about an idea similar to what I described
>>>that is focused on endgames, I'd be interested to know..
>>
>>See Ernst Heinz's DarkThought website and his paper about fast interior node
>>recognizers.
>>
>
>thanks
>
>>>any comments about these ideas and ways to solve the problems
>>>that arise from them (from a theoretical point of view, not
>>>implementation etc.) would be appreciated....
>>
>>I don't think you realize what kind of storage we're talking about here.  It's on
>>the order of terabytes to do anything useful with 7-piece tables (maybe more),
>>unless your filter is very selective.  What is the rationale behind storing on
>>disk rather than adding new code to eval or writing recognizers?  Disk storage
>>requires a ton of space and a very slow probe process, while new code in eval
>>just costs you an extra conditional at every node.  And done right, interior node
>>recognizers are fast too.
>>
>>Scott
>
>
>again, the idea is to build the database in a very selective way, so it's a
>small one.. gigabytes or megabytes.. so probing it isn't the problem.. I'm
>interested whether anyone knows of or has any ideas about how to use the
>database in an engine.. how to overcome problems like finding the correct
>continuation after reaching a stored position, so the engine won't e.g. repeat
>moves back and forth from the position in the database.. also if anyone knows
>of other interesting problems that arise, that would be interesting to hear..

But if we assume it is "gigabytes" of data, it will be on disk, and when you
probe, you will almost always "miss", since the positions covered are extremely
sparse over the set of all 7-man positions. You will not be able to afford the
probes: you will make "zillions" of probes to get just 1 hit. This just
isn't usable.
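To put a rough number on "zillions": the 1e13 count of distinct 7-man positions below is an assumption made only for the order-of-magnitude sketch, not an exact figure.

```python
# Rough probes-per-hit estimate for a sparse position database.
# ASSUMPTION: ~1e13 distinct 7-man positions; the exact count
# doesn't matter for the order of magnitude.
stored_positions = 1_000_000   # ~a million positions in "a few gigabytes"
total_7man_positions = 1e13    # assumed size of the 7-man space

hit_rate = stored_positions / total_7man_positions
probes_per_hit = 1 / hit_rate
print(f"hit rate ~ {hit_rate:.0e}, i.e. ~{probes_per_hit:,.0f} probes per hit")
# prints: hit rate ~ 1e-07, i.e. ~10,000,000 probes per hit
```

At even a millisecond per disk probe, ten million misses per useful hit would consume hours of search time, which is why an in-eval recognizer (one conditional per node, no I/O) wins here.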

>
>any comments related to this idea would be appreciated.. like looking from
>another point of view on it so there are ways to deal with the problems that
>arise.. by letting the computer use it in other ways that can work better..
>
>thanks again..




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.