Computer Chess Club Archives



Subject: Re: A personal thought regarding the opening books

Author: Michael P. Nance Sr.

Date: 05:43:18 02/02/03



On February 01, 2003 at 03:14:53, Angrim wrote:

>On January 31, 2003 at 17:47:30, Dann Corbit wrote:
>
>>On January 30, 2003 at 17:18:08, Angrim wrote:
>>
>>>On January 29, 2003 at 23:26:12, Dann Corbit wrote:
>>>
>>>>On January 29, 2003 at 18:59:28, Angrim wrote:
>>>>
>>><snipage>
>>>>>
>>>>>The idea of having computers verify their own books, ensuring that they
>>>>>do not play into a position that they feel is losing while in book, has been
>>>>>brought up before.  It has usually been rejected as taking too much
>>>>>computer time, or else as having been tried in Cray Blitz and not
>>>>>having worked well there.  Neither of these points really bothers me.
>>>>>
>>>>>I would take the very large book, an estimated 1 meg lines, and prune it with
>>>>>the rule that a position is only important if it has appeared in multiple games,
>>>>>likely giving roughly 1 meg positions.  Then backsolve the whole tree
>>>>>at 10 minutes a move using a strong engine.  I would not discard lines
>>>>>based on their containing blunders, but would just mark the blunders as
>>>>>moves to avoid.  It could be handy to have those lines in book
>>>>>so that the refutation is at hand if the opponent makes that blunder.
>>>>>This search would cost 10 meg minutes to compute.
>>>>>10,000,000/(365days*24hours*60min) = 19 years.  If you split the search up
>>>>>among 38 computers it would only take 6 months.
>>>>>Clearly you would not want to repeat this search very often.  It would
>>>>>likely be best to fix which engine was used for the search, and use
>>>>>that for all positions, until a really major improvement to the engine's
>>>>>strength was made, at which time you would start a new search.
>>>>>Also, the ability to maintain the resulting database would be quite
>>>>>important; you should be able to add a new line without re-searching
>>>>>everything.
>>>>>
>>>>>Note: the difference between "backsolve the whole tree" and "search each
>>>>>position in the whole tree" is vital.  Without knowing the searched value
>>>>>of the leaf nodes, the computer's ability to evaluate the earlier opening
>>>>>moves is much weaker.
>>>>
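
A minimal sketch of what that backsolve could look like, to make the contrast
with independent per-position searches concrete. The BookNode structure, the
engine_score() hook, and the one-pawn blunder margin are illustrative
assumptions, not anything specified in the post.

    # Hypothetical book node: children are the stored continuations;
    # engine_score() is the stand-alone 10-minute search, used only at leaves.
    class BookNode:
        def __init__(self, children=None):
            self.children = children or []   # list of BookNode
            self.backed_up = None            # centipawns, side-to-move view
            self.avoid = False               # blunder flag; line is kept, not deleted

    def backsolve(node, engine_score, blunder_margin=100):
        """Back up leaf search values through the book tree with negamax."""
        if not node.children:                # leaf: use its own searched value
            node.backed_up = engine_score(node)
            return node.backed_up
        # a child's score is negated because the opponent is to move there
        scores = [(-backsolve(child, engine_score, blunder_margin), child)
                  for child in node.children]
        best = max(score for score, _ in scores)
        for score, child in scores:
            # keep clearly inferior moves in the book, just marked "to avoid",
            # so the refutation stays on hand if the opponent plays them anyway
            child.avoid = score < best - blunder_margin
        node.backed_up = best
        return best
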
>>>>With the CAP data (assuming that 100 million node crafty searches are good
>>>>enough) you would only have to solve the missing ones.  There are many millions
>>>>of positions at long time control.  Chances are good that 90% or better of the
>>>>interesting positions are in there.
>>>>
>>>>We also have hundreds of millions at fast (10 second) time control.  If you
>>>>minimax the whole thing, it might be better.
>>>
>>>I expect that using the CAP database would be a good way to produce a
>>>stronger opening book for crafty.  It does seem that the CAP data is
>>>based on independent searches of the positions, rather than on backsolved
>>>searches, but it is possible to make a good estimate of which positions
>>>would benefit from being backsolved, and stuff the relevant results into
>>>a crafty learned positions file before re-searching those positions.
>>>Once that was done, or in parallel, the resulting tree of positions
>>>could be searched for missing positions and those could be added into
>>>the CAP database.
>>>
>>>A similar process could be applied to the database with 100 meg positions
>>>at 10 seconds each to produce a really formidable book for use in blitz
>>>games.
>>
>>My idea is like this:
>>We have 24 million positions at about 100 million nodes each.
>>We have 200 million or so at about five million nodes each.
>>Replace each of the fast results with a better one, if present.
>>Do a special minimax of the data.  The minimax would have to know that a 12-ply
>>search at a root is better than a 10-ply search at a leaf one ply forward.
>>
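
A small sketch of that depth-aware comparison (the SearchResult record and
centipawn scale are assumptions for illustration, not the CAP data layout):
a value minimaxed up from a child searched to D plies only gives D+1 plies of
coverage at the parent, so a deeper direct search of the parent should win.

    from dataclasses import dataclass

    @dataclass
    class SearchResult:
        score: int   # centipawns, from the side to move at this node
        depth: int   # plies of coverage from this node's point of view

    def prefer(direct: SearchResult, backed_up: SearchResult) -> SearchResult:
        """Choose between a node's own search and a value backed up from a child.

        Build `backed_up` with depth = child_depth + 1, so a 12-ply direct
        search outranks a 10-ply search one ply forward (11 plies of coverage
        at this node).
        """
        if direct.depth != backed_up.depth:
            return direct if direct.depth > backed_up.depth else backed_up
        # tie on depth: prefer the backed-up value, since it is anchored to a
        # concrete continuation that was actually searched
        return backed_up

For example, prefer(SearchResult(35, 12), SearchResult(60, 11)) keeps the
12-ply root result, which is exactly the case described above.
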
>Sounds reasonable, but one problem with this is positions where all of
>the searched leaf nodes score lower than the base position.  One of the
>non-searched leaf nodes might score higher than any of the searched
>leaf nodes.  In this situation I would want the ability to
>automatically add each of the searched leaf nodes to a learning file,
>and send this to a computer which would re-analyse the base node.
>
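
A rough sketch of that re-analysis trigger. The learning-file syntax below is
invented purely for illustration and is not Crafty's actual position-learning
format.

    def all_replies_look_worse(node_score, reply_scores):
        """True if every searched reply scores below the node's own evaluation.

        `reply_scores` are from the replying side's view, so they are negated
        before comparing with `node_score` (side to move at the base node).
        """
        return bool(reply_scores) and all(-s < node_score for s in reply_scores)

    def queue_for_reanalysis(base_fen, searched_replies, path="reanalyze.queue"):
        """Append the searched replies plus a re-search request for the base node."""
        with open(path, "a") as f:
            for fen, score, depth in searched_replies:
                f.write(f"LEARN {fen} {score} {depth}\n")   # seed values; invented syntax
            f.write(f"SEARCH {base_fen}\n")                 # then re-search the base node
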
>>Take the minimaxed results and create a "speculative" ce (centipawn evaluation)
>>for each record from the calculation.
>
>Having just read the CAP faq, I see one other rather minor problem. CAP
>uses multiple different chess engines, and the eval scores of different
>engines are not highly comparable.  It could be that a position which
>scores +.5 with one engine is weaker than a position which scores +.2
>with a different engine.  Since the majority of the searches used crafty,
>I would make sure that all positions searched with non-crafty engines were
>also searched with crafty, and then only use the data from crafty for the
>minimax.  You still have the problem of different versions of crafty,
>but as far as I know that is less of a problem.
>
>Angrim
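
A small sketch of that filtering step, assuming each CAP record carries an
engine name and an EPD string (an assumption about the layout, not its
documented format): keep only Crafty results for the minimax, and list the
positions that still need a Crafty search.

    def is_crafty(record):
        return record["engine"].lower().startswith("crafty")   # any Crafty version

    def crafty_only(records):
        """Records usable for the minimax: all produced by some version of Crafty."""
        return [r for r in records if is_crafty(r)]

    def need_crafty_search(records):
        """Positions searched only by other engines, whose scores are not comparable."""
        crafty_epds = {r["epd"] for r in records if is_crafty(r)}
        other_epds = {r["epd"] for r in records if not is_crafty(r)}
        return sorted(other_epds - crafty_epds)
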
Isn't that what the programmers do before the program's release?

Mike


