Computer Chess Club Archives


Subject: Re: questions about the opening book of programs

Author: Michael White

Date: 13:12:11 08/23/98


On August 15, 1998 at 18:56:08, Robert Hyatt wrote:

>On August 15, 1998 at 09:20:55, Tom Kerrigan wrote:
>
>>I know of some cases where killer books have been used, but after talking with
>>dozens of other chess programmers, I'm convinced it isn't a serious problem.
>>Most people aren't out to get other people at these tournaments. They just want
>>to do well and have fun.
>>
>>-Tom
>
>I don't agree here for several reasons.
>
>1.  Ed doesn't compete any longer, because of frustration with having to
>"re-tool" the book to avoid getting "cooked" each year.
>
>2.  several "amateurs" have claimed at WMCCC events that they have books that
>contain cooks for commercial programs, such as genius and rebel.
>
>3.  I competed at so many ACM and WCCC events that I can't count them any
>longer, and cooking went on all the time.. during the event, before the event,
>after the event (for the next year's event) and so forth.
>
>(3) was the direct reason I started the book learning experiments in Crafty,
>because Gower and myself had to devote a week or two each year to preparing our
>opening book to be *certain* we wouldn't repeat any games from last year,
>because we were "cooked" many times.  The "cooking" didn't hurt very often,
>because our opponent didn't have access to "Cray Blitz" to make sure the cooking
>was "done"...  but when we replayed a game from prior years, we always ended up in
>a poor position and had to fight for our life.
>
>It even happens on the chess servers, between a manual program and an
>automatic one, or between a human and an automatic program...
>
>

A friend and I experimented with automatically finding cooks and
implementing them in a cooked book.  We wondered if this might be the
way to make our first terrible chess program "learn".  Most of this
foolishness was motivated by the dream of AI.  We tried to automate
all of the practices of a good chess player: study historic games,
study his own games, study current games, study his opponents' games,
and then play more games, and repeat.  After a lot of hovering in the
bottom ratings, we started trading chess programs to make the playing
part stronger.  It sure didn't seem to improve from its book learning.
Or maybe we weren't patient.

The only way we played "sneezy" on ICS was with a completely cooked
book.

sneezy was a crafty clone (and was an SCP clone before that, and was
a zzzzzz clone before that, and was a homebrew Fortran program before
that; at some time or other we also ran gnuchess, and who knows what
else, under the ICS handle "sneezy" or "aahz" and maybe one other, until
the great big ICS server crash, and then we were only "sneezy").
When we were running sneezy, every time an opponent moved to a position
in our book, we had a single cooked response.  From every plausible
move for the opponent we had a cooked response.  Every historic
opponent position that was reachable by the book was analyzed and
had a cooked response.  We also downloaded top-level ICS games and
scanned them for hits against our cookbook.  Sometimes we could get
doubly cooked move trees from the ICS games.  (Doubly cooked: two strong
players play a game on ICS, and it follows the backed-up scores in
the book for a great number of moves.)  The book was remodeled about once
every two days.  Actually, the book remodel was being done during the games
on another server (running gnuchess and/or crafty).  We never had more
than 12000 moves in the cookbook at any time, although the available
off-line pool varied from 150000 to 600000 moves, took well over 18 months
(50 MHz Sparc20 dual CPU) to build, and kept the load at 1.98 'round
the clock...
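
A minimal sketch of what that amounted to, in Python with made-up
names (nothing like the real thing, which is long gone): the cookbook
is just a table mapping a position key to exactly one cooked reply,
and the game scan looks for positions and moves that hit that table.

    # Sketch only: hypothetical names, not the original code.
    # The cookbook maps a position key (say, a FEN string) to exactly
    # one cooked reply, so play while in book is fully deterministic.
    cookbook = {
        # "position key": "single cooked reply",
    }

    def book_move(cookbook, position_key):
        """Return the single cooked reply for this position, or None if
        we are out of book and the search engine has to take over."""
        return cookbook.get(position_key)

    def scan_game_for_hits(cookbook, game):
        """game is a list of (position_key, move_played) pairs from a
        downloaded ICS game.  Count positions that hit the cookbook, and
        the longest stretch where the moves actually played follow our
        cooked replies; a long stretch is a "doubly cooked" line."""
        hits, run, best_run = 0, 0, 0
        for position_key, move_played in game:
            if position_key in cookbook:
                hits += 1
            if cookbook.get(position_key) == move_played:
                run += 1
                best_run = max(best_run, run)
            else:
                run = 0
        return hits, best_run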

What happened?  Very interesting, to us.  Human opponents were divided into
two types: 1) those who were furious because they kept repeating games
(hahaha: we thought they were complaining about their own behaviors ;-) ), and
2) those who wiped the floor with us, because they figured out the
weakness and exploited it before we could get a book remodel back online.
Usually, they were back within half an hour for revenge.  Sometimes they
would dump a game or two to get the right color.  Funny, but that's what
we wanted to see!  The ICS admins even gave us tutoring on "How to
randomize play".  Geeez.  Randomized play was the LAST thing we wanted.
Very humiliating.

Some of the most exciting times came when we started using crafty for the
search engine.  The rating went up, so we got more interesting games.
Plus, at this time ('95, '96? - Feb or Mar '97) there wasn't much going on with
automatic book learning.  So our downloaded ICS games usually gave deep
trees from Ferret and several of the crafty clones.  The crafty and gnuchess
clones almost always had the same (gnuchess small, medium, large) books
(which we constantly scanned for hits), so our chances of following a
book line went up.  Ferret apparently didn't change books during the
time we played, so we
saw the same book lines a few times, but we only got a few games with Ferret
when we were using crafty as the engine.

Even though we had access to the gnuchess and crafty books, there was enough
variety that we couldn't cook more than one or two layers of out-of-book
moves, so we never got much of an advantage.

I think our technique could have been more successful if we had started with
crafty in the first place, and limited our play to two games on ICS before
each book remodel.  Watching the games was too tempting... the book rebuild
was more tedious than... playing chess ;-)

My conclusions when we quit playing sneezy were:
Automatic book cooking could be successful in match play if
 1) we kept a small deterministic book (one best move per position) as we
    had implemented it
 2) possibly had access to the opponent's book and scanned it for hits, as
    we sometimes did
 3) saw the opponent's game set and scanned it for hits (game results were never
    ever ever ever ever used: they are meaningless!), as we often had...
 4) used a large (non-deterministic) opening evaluation pool to form
    the deterministic book, as we did...
 5) used backed-up scores, a la Bookup, to build the deterministic tree, as we
    did (we don't have Bookup, so we can't be sure whether Bookup does it this
    way; there is a rough sketch of the idea just after this list)
 6) for every one processor playing games, had about 20 doing analysis
    off-line (find the first out-of-book (opponent) position, evaluate its
    "best" move.  Supplement these with a single layer of plausible moves,
    and their best successors.  Plausible moves arise as the best moves at
    lower search depths.  Repeat the process iff the best move was played
    in the game, or reached a "hit" with the old-games databases.)  For a
    brief time (until the CS dept discovered it) we used about 50 really
    spiffy RS/6000's for the offline analysis, and still had trouble keeping
    the book rebuild in sync.  It's difficult to keep control of the
    exponential explosion, even with the rules above.  This looks like a
    potential point for improvement, but so were the 6000's...
 7) used a random-access method to access the pool file for building the cook
    book (sounds trivial, but it wasn't a bottleneck until the pool file grew
    to more than 75000 positions.  After that, it took longer to sort and
    manipulate the pool than to analyze the out-of-book positions.  By the
    time it was a problem, we were too tired to fix it, and every bit
    of software we developed relied on the "sequentialness" of the pool file!)
 8) used an off-line analysis time large enough to get two more ply than the
    game-playing program could possibly get for that move during the game
    (we always used 6-10 minutes, which seemed to be too little).
 9) used the opponent's very program to compute the book pool (if possible),
    in order to generate *only* the plausible move set.  (big grin)
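
To make (1), (5), and the plausible-move part of (6) a bit more
concrete, here is a rough Python sketch of pulling the deterministic
cookbook out of the analysis pool by backing up scores.  The names
and the pool format are made up (our real tooling is long gone), so
treat it as an illustration of the idea, not our code:

    # Sketch only: hypothetical names and a toy pool format.  Assumes
    # the pool is a tree (no transposition cycles):
    #   pool[position_key] = { move: (child_position_key, leaf_score) }
    # leaf_score is the off-line engine score for playing 'move' here,
    # from the point of view of the side to move in this position.

    def back_up(pool, position_key):
        """Return (best_score, best_move), backing scores up through the
        pool negamax-style; fall back to the stored leaf score when a
        child position was never expanded off-line."""
        candidates = pool.get(position_key)
        if not candidates:
            return None
        best_score, best_move = None, None
        for move, (child_key, leaf_score) in candidates.items():
            child = back_up(pool, child_key)
            score = -child[0] if child else leaf_score
            if best_score is None or score > best_score:
                best_score, best_move = score, move
        return best_score, best_move

    def build_cookbook(pool, root_key):
        """Keep exactly one backed-up best move per position we might
        face (item 1), then follow every analyzed opponent reply so each
        plausible opponent move still has a cooked response (item 6)."""
        cookbook = {}
        stack = [root_key]
        while stack:
            key = stack.pop()
            if key in cookbook:
                continue
            result = back_up(pool, key)
            if result is None:
                continue
            cookbook[key] = result[1]
            after_our_move, _ = pool[key][result[1]]
            for _, (after_reply, _) in pool.get(after_our_move, {}).items():
                stack.append(after_reply)
        return cookbook

Keeping exactly one move per position is what kept the on-line book
under 12000 moves even while the off-line pool grew past half a
million.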

Finally, we decided this would never make a second-rate program play better
than a first-rate program, which was one of our original goals.  Computer
chess requires being competitive on all fronts for success.  (Exception:
Feng Hsu is 100% right on the money re: studying opponents' previous games
for match play.  It's a no-brainer to take down a giant with an opening
prank.)  I gathered some statistics on how many ICS points it was worth to
have a deeper book.  If I recall correctly, it was insignificant against
players not within 200 points of our rating, and a 7-point bonus for every
book move we had against our opponent.  So automatic book cooking is a
bonus, iff there is no other advantage.  But these cooks are from automatic
analysis.  In a few cases with gnuchess clones, we could see a predictable
line and we got ready for it.  That was usually a huge success, and a lot
of fun.  I think the stronger humans can remember these lines and they
really do the same thing, so as shady or immoral as this seems, humans
do this by their very nature.  How else do they improve?  Analogy.  If
you, sirs, figure out how to apply analogous reasoning to book learning
automation, we bow down in your direction.

Typically, we had 8-10 book moves, but got a few longer runs past 20
moves, and a few 4-move surprises.  Statistics do lie: when we turned off
the book, our rating usually improved!!!  This was, of course, because of
"repeater revenge", which we sometimes foiled by turning off the book.  The
kibitzes and whispers were precious when this happened!

>
>
>
>>
>>On August 15, 1998 at 08:20:23, Robert Henry Durrett wrote:
>>
>>>On August 14, 1998 at 21:12:17, blass uri wrote:
>>>
>>>>I think it is a mistake to use probabilities other than 0 and 1 in competitions
>>>>like the WMCC.
>>>>The opening book should not be known to the public, otherwise it is easy to learn
>>>>the program.
>>>>The program can learn after a game it loses by changing probabilities
>>>>0 to 1 and 1 to 0.
>>>>
>>>>Do programs use an opening book with probability 0 and 1 in events like
>>>>the WMCC?
>>>>
>>>>How many positions do the opening books of programs contain?
>>>>
>>>>Uri
>>>Perhaps that's one of the reasons why the people who submit their chess engines
>>>for evaluation would not wish to use the opening repertoire which comes with the
>>>published version.
>>>
>>>It seems that such competitions for ratings are, to some extent, competitions in
>>>selection of opening book lines.
>>>
>>>What is the answer to this problem?  How can the "opening book competition"
>>>factor be removed?  Is it sensible to do this?

Now this is an ethics dilemma!  Suppose the program is built to modify its
opening repertoire in response to play against opponents (as many are
nowadays) and that book is brought to the competition.  Seems fair to me.
Suppose the original published book has subtle weaknesses as preparatory
bait for the competition.  Seems fair to me!  Is it fair advertising that
program X is the same as the one that competed and won competition Y?
First case, yes!  Second case?  Hmmm.

Regards,
Mike White


