Computer Chess Club Archives


Subject: Re: A personal thought regarding the opening books

Author: Angrim

Date: 14:18:08 01/30/03

On January 29, 2003 at 23:26:12, Dann Corbit wrote:

>On January 29, 2003 at 18:59:28, Angrim wrote:
>
<snipage>
>>
>>The idea of having computers verify their own books, ensuring that they
>>do not play into a position that they evaluate as losing while in book,
>>has been brought up before.  It has usually been rejected as taking too
>>much computer time, or else as having been tried in Cray Blitz without
>>working well there.  Neither of these points really bothers me.
>>
>>I would take the very large book, an estimated 1 meg lines, and prune it
>>with the rule that a position is only important if it has occurred in
>>multiple games, likely giving roughly 1 meg positions.  Then backsolve
>>the whole tree at 10 minutes a move using a strong engine.  I would not
>>discard lines because they contain blunders, but would just mark the
>>blunders as moves to avoid.  It could be handy to have those lines in
>>book so that you have the refutation ready if the opponent makes that
>>blunder.  This search would cost 10 meg minutes to compute:
>>10,000,000 / (365 days * 24 hours * 60 min) ≈ 19 years.  If you split
>>the search between 38 computers it would only take 6 months.
>>Clearly you would not want to repeat this search very often.  It would
>>likely be best to fix which engine was used for the search, and use it
>>for all positions until a really major improvement in the engine's
>>strength was made, at which point you would start a new search.
>>The ability to maintain the resulting database would also be quite
>>important: you should be able to add a new line without re-searching
>>everything.
>>
>>Note: the difference between "backsolve the whole tree" and "search each
>>position in the whole tree" is vital.  Without knowing the searched
>>values of the leaf nodes, the computer's ability to evaluate the earlier
>>opening moves is much weaker.
>
>With the CAP data (assuming that 100-million-node Crafty searches are good
>enough) you would only have to solve the missing ones.  There are many millions
>of positions at long time control.  Chances are good that 90% or more of the
>interesting positions are in there.
>
>We also have hundreds of millions at fast (10 second) time control.  If you
>minimax the whole thing, it might be better.

I expect that using the CAP database would be a good way to produce a
stronger opening book for Crafty.  The CAP data does seem to be based on
independent searches of the positions rather than on backsolved searches,
but it is possible to make a good estimate of which positions would
benefit from being backsolved, and to stuff the relevant results into a
Crafty learned-positions file before re-searching those positions.
Once that was done, or in parallel, the resulting tree of positions
could be scanned for missing positions, and those could be added to
the CAP database.
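
The backsolving idea quoted above can be sketched in a few lines of
Python.  This is only an illustration under assumed data structures: a
hypothetical book stored as a dict mapping a position key to a list of
(move, child-position) pairs, leaf scores in centipawns from the side to
move's point of view, and an arbitrary blunder margin.

```python
BLUNDER_MARGIN = 150  # centipawns; assumed threshold for "mark as avoid"

def backsolve(pos, book, leaf_score, cache, avoid):
    """Negamax the book tree from stored leaf searches.

    Leaf values come from engine searches done once per leaf; interior
    values are propagated up the tree rather than searched independently.
    Moves that lose BLUNDER_MARGIN or more versus the best reply are
    added to `avoid` instead of being deleted, so the refutation line
    stays in book if an opponent plays the blunder.
    """
    if pos in cache:
        return cache[pos]
    children = book.get(pos)
    if not children:
        # leaf node: use the stored search score directly
        cache[pos] = leaf_score[pos]
        return cache[pos]
    # negamax: a child's score is from the opponent's view, so negate it
    scores = {move: -backsolve(child, book, leaf_score, cache, avoid)
              for move, child in children}
    best = max(scores.values())
    for move, s in scores.items():
        if best - s >= BLUNDER_MARGIN:
            avoid.add((pos, move))   # keep the line, but flag the move
    cache[pos] = best
    return best
```

Adding a new line then only requires re-solving the positions on the
path from the new leaf back to the root, which is the maintenance
property asked for above.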

A similar process could be applied to the database of 100 meg positions
searched at 10 seconds each to produce a really formidable book for use
in blitz games.

Angrim



Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.