Computer Chess Club Archives


Subject: Re: How is the cAP-code in Crafty 17.0 doing (added with some questions)

Author: Michel Langeveld

Date: 12:56:14 10/23/99

On October 23, 1999 at 15:27:15, Robert Hyatt wrote:

>On October 23, 1999 at 15:04:44, Michel Langeveld wrote:
>
>>On October 23, 1999 at 12:52:11, Robert Hyatt wrote:
>>
>>>On October 23, 1999 at 11:14:07, Michel Langeveld wrote:
>>>
>>>>Thanks in advance.
>>>>
>>>>I want to try it myself ;-)!!!
>>>>
>>>>Michel Langeveld
>>>
>>>
>>>It is too early to say.  Compared to the opening book I use, fewer than 100K
>>>of the positions have CAP scores, which means that many positions have no
>>>data at all.
>>
>>I have no idea how many positions your book contains, but it sounds terrible.
>>Maybe we have to add a project to CAP called "Crafty Book".
>>All positions of the Crafty book would have to be examined.
>>
>>CAP is a great project, but unfortunately it's not going as fast as I would like.
>>Does anyone know of some very fast machines that are free for this project?
>>
>>>Another problem is that the CAP data is of the form
>>>
>>>  <position>  <score>  <best move>
>>>
>>
>>Also included in the CAP data are:
>>
>>  <ply>
>>  <number of nodes examined>
>>  <analysis variation>
>>
>>and it might be very interesting to use these in Crafty.
>>
>>I also have no idea whether all positions of a given variation are in the CAP
>>base. For example, if 1.e4 h6 is found in CAP as the best line, are the positions
>>after 1.e4 and after 1.e4 h6 also examined? And it might be that, when minimaxing
>>over all the data again, 1.e4 e5 turns out to be the very best.
>
>I used to minimax eval/material from tips back to root.  I started that in
>Cray Blitz, but it has obvious flaws.  I even did searches from all the 'tip'
>positions, but that didn't catch blunders in the middle of the line.  I also
>tried searching _every_ position so I could catch blunders, but searches deep
>enough weren't possible... due to the time required.

I hope Dann will add the Crafty book as a project to CAP so that every book
position is analysed for at least 100M nodes. It might also be good to collect all
the learn information from the fully-CAP-analysed book on a website, so that errors
can be CAP-ed again.
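
For reference, a fully analysed record in EPD form could look roughly like this
(illustrative only: bm/ce/acd/acn/pv are the standard EPD opcodes for best move,
score in centipawns, search depth, node count and analysis variation; the real CAP
files may lay this out differently):

  rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - bm e4; ce 25; acd 17; acn 100000000; pv e4 e5 Nf3;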

>>>I can use that... but what I _really_ need to do, and I don't have the code
>>>to do yet, is to do a simple minimax of the CAP scores back up thru the book.
>>
>>Right this is what I described above too!
>>
>>>IE if the best move in position P is Rh1 with a score of +1.3, then I need to
>>>know that any move that leads to position P (with no cap score in the file)
>>>ought to have a score of -1.3.  I don't do this yet, which hurts.  I think it
>>>will be even more useful once I have time to write that code.
>>
>>Do you have an idea of how many positions are then filled with CAP data in the
>>book?
>
>The book is about 13.5 megs.  At 20 bytes per book position, that means about
>700K positions, where the CAP import found about 100K 'matches'.  So about 1
>of every 7 possible moves has a CAP score.

So if we continue at the same speed it will take about a year to analyse all
Crafty book positions with CAP. (The CAP data now covers about 850K positions, and
we are talking about a book of 700K.)
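
Just to check those numbers quickly (a back-of-the-envelope in Python, using your
figures above):

  book_bytes = 13.5 * 1024 * 1024        # ~13.5 MB book file
  bytes_per_position = 20                # 20 bytes per book position
  positions = book_bytes / bytes_per_position
  print(round(positions))                # ~707789, i.e. "about 700K"
  print(round(positions / 100000, 1))    # ~7.1, so roughly 1 in 7 has a CAP score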

>>Are you going to extend the book with an extra flag which indicates whether a score
>>is a real CAP-data score or a "moved-up" one? (By the way, I remember that
>>BOOKUP has facilities to do this.)
>>
>>Is having the book minimaxed with your algorithm a thing which HAS to be done
>>before releasing Crafty 17.0?
>>In other words, what has to be done before 17.0 can be released?
>
>This won't be a 17.0 feature (minimaxing).  The main thing for 17.0 is to test the
>new eval code and the eval tuning.  It is playing pretty well, but there are
>still quirks, as the new candidate/majority code can influence things in odd
>ways and it takes some watching to spot these and fix things.

Ok, I see. If you need any help let us know!
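
If it is of any use: that back-propagation could be as simple as something along
these lines. This is only a rough sketch in Python over a made-up book structure
(a dict mapping a position key to its book moves and successor keys); it has
nothing to do with Crafty's real book format:

  # book: position key -> list of (move, successor key) pairs that are in the book
  # cap:  position key -> raw CAP score, from the side to move's point of view
  def backed_up_score(pos, book, cap, cache=None):
      if cache is None:
          cache = {}
      if pos in cache:                       # already done, or a repetition cycle
          return cache[pos]
      cache[pos] = None                      # mark "in progress" to stop cycles
      best = None
      for _move, child in book.get(pos, []):
          child_score = backed_up_score(child, book, cap, cache)
          if child_score is None:
              continue
          score = -child_score               # negamax: my score is minus the child's
          if best is None or score > best:
              best = score
      if best is None:                       # nothing propagated: fall back on raw CAP
          best = cap.get(pos)
      cache[pos] = best
      return best

  # Toy example: P has a raw CAP score of +1.30; any move from Q into P should get -1.3.
  book = {"Q": [("some move", "P")], "P": []}
  cap = {"P": 1.30}
  print(backed_up_score("Q", book, cap))     # prints -1.3

Here a score backed up from deeper book positions wins over the position's own raw
CAP score; preferring the raw score where it exists would be a one-line change.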

>>
>>CAP data are positions which have roughly 100,000,000 nodes examined. Do you
>>know on what hardware Crafty reaches such numbers in a normal game? And is it
>>possible to put those positions in a separate file so that they can immediately be
>>added to CAP?
>
>100M nodes?  On my quad xeon it will search deeper than that in a real game,
>as it is searching close to 1M nodes per sec on the quad already.  That would
>take about 100 seconds to do...  :)

Cheeze. So a normal game (at about 3 min/move) on your quad will produce higher
quality data than CAP. I don't know whether I should cry or laugh about this.

I also calculate that all the CAP data produced until now would take about 590 days
on your single machine. Is this your development machine, or your testing machine?
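
The arithmetic behind a figure like that (the node rate here is an assumption; at
exactly 1M nodes/sec, your round figure, it comes out closer to 980 days):

  positions = 850000               # CAP positions analysed so far
  nodes_each = 100000000           # ~100M nodes per position
  nps = 1670000                    # assumed sustained speed of the quad (nodes/sec)
  days = positions * nodes_each / nps / 86400
  print(round(days))               # ~589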

>>Will Crafty 17.0 also use the CAP data during the normal search once it has left
>>the book, or is the book the only place where CAP data is used at the moment?
>
>Only in the book.  I don't 'trust' the scores that much, since different
>versions of different programs produce different scores.  Choosing book moves
>is one thing, but I want "the crafty search" to be responsible once the book
>has ended.

Ok, that seems fair. Maybe that's also why only 100K positions matched your book:
the CAP data also contains plenty of middlegame, endgame, and EPD-testset positions.

By the way... is it difficult to write a program that converts a book.bin into an
ASCII file with all the positions in EPD form? Can you give me some hints on doing
this? Then I will try to write it. It might be of some help in the future...

Kind regards,

Michel Langeveld


