Author: Mike S.
Date: 13:58:05 06/29/00
I hope some of those chess friends who play many computer games will answer this, because they will know which programs most often have a good evaluation when leaving the book, and which nearly lose by the book too often.

In 1998 I did a test, published in CSS 4/98, where I measured variation length, "variability" as I called it, doubles, etc. I also recorded each program's evaluation for the first calculated move after the last book move in autoplay (tournament book setting). Such a variation contains only "active" moves, so if the book is good, I think the evaluation at its end shouldn't differ much from 0.00. I took a relatively small sample of 30 different variations from each of the eight tested programs.

From the Fritz 5 opening tree, 7 out of 30 evaluations at the end of the book line were bigger than ±0.30 (and none bigger than ±1.00). Second best in this comparison was Rebel 9's book, with 9 bigger than ±0.30 and likewise none bigger than ±1.00. Worst was M-Chess Pro 7.1's book, with 15 bigger than ±0.30, of which 7 were even bigger than ±1.00, out of only 20 variations (I wasn't able to get 30 different variations from the M-Chess book; it produced 25 doubles while playing the 20 different ones). But it has to be taken into consideration that M-Chess is known for its "extreme" evaluations; maybe another program's evaluation would have made things look better than M-Chess' own opinion. By the way, most "variable" was the Genius 98 edition of Genius 5, which produced no doubles at all.

Unfortunately, all of the program versions I tested are long outdated now, and I do not know whether their book quality was kept or improved in the follow-up versions. There is not much book testing other than by playing games, it seems.

Regards, M.Scheidl
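The tally described above (counting how many book-exit evaluations fall outside ±0.30 and ±1.00) can be sketched in a few lines of Python. The evaluation values below are made up for illustration; they are not the original 1998 measurements, and the function name is my own invention:

```python
# Sketch of the counting step described in the post: given the evaluation
# (in pawns) each program reported for its first calculated move after the
# last book move, count how many exceed each magnitude threshold.
# The sample data is hypothetical, NOT the original CSS 4/98 results.

def tally_book_exits(evals, thresholds=(0.30, 1.00)):
    """Count evaluations whose absolute value exceeds each threshold."""
    return {t: sum(1 for e in evals if abs(e) > t) for t in thresholds}

if __name__ == "__main__":
    sample = [0.12, -0.45, 0.08, 1.20, -0.25, 0.33]  # hypothetical values
    counts = tally_book_exits(sample)
    print(f"out of {len(sample)} lines: "
          f">±0.30: {counts[0.30]}, >±1.00: {counts[1.00]}")
```

With the hypothetical sample above this reports 3 lines beyond ±0.30 and 1 beyond ±1.00; run against a real set of book-exit evaluations, it would reproduce the kind of per-book counts quoted in the post.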