Author: Harald Faber
Date: 08:14:24 03/03/99
On March 03, 1999 at 10:06:33, Shaun Brewer wrote:

>I have been experimenting with openings and have therefore played many games,
>attempting to determine whether a certain book is better or not. As my PC is
>needed for other tasks, I have to interrupt the games and start again; I then
>amalgamate the results of several batches of games in an attempt to get
>something statistically relevant.
>
>Here are the example scores for one such set of batches, all played on the
>same machine using the same program, with books a and b constant for all
>batches.
>
>   a       b
>  26   -  35
>   9.5 -   6.5
>   7   -  15
>  58.5 -  54.5
>  39.5 -  45.5
>
>I am rapidly coming to the conclusion that hundreds of games would be required
>to be able to state that a is better than b, and this would also apply to
>program v program tests.

a scores better in only 2 of the 5 batches, so why would you suppose a to be better than b??

>What level of confidence can be attached to computer tournaments that the
>winner is the best?

Define "best" and you get the answer.

>Is it true that computer v computer results vary more than human v human
>results?

Speaking of strong humans, I'd say yes, because humans know what they play: they reach the same positions again and again, so they get used to them. Programs often have a wide opening book that leads them into positions they don't understand. So their results depend more on chance than results between humans do.
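The "hundreds of games" intuition can be checked with simple statistics. The following is a minimal sketch (not from the original post) that pools the five batches above and computes a normal-approximation 95% confidence interval for a's scoring fraction; the assumption of treating each game as a Bernoulli trial is labeled in the comments.

    import math

    # Per-batch scores for book a vs. book b, copied from the post.
    batches = [(26, 35), (9.5, 6.5), (7, 15), (58.5, 54.5), (39.5, 45.5)]

    a_total = sum(a for a, _ in batches)    # 140.5 points for a
    games = sum(a + b for a, b in batches)  # 297 games in total
    p = a_total / games                     # a's scoring fraction, ~0.473

    # Normal-approximation 95% interval for the scoring fraction.
    # Assumption: each game is treated as a Bernoulli trial. Draws only
    # lower the true variance, so this bound is conservative.
    se = math.sqrt(p * (1 - p) / games)
    lo, hi = p - 1.96 * se, p + 1.96 * se
    print(f"a scores {p:.3f} of the points, 95% CI [{lo:.3f}, {hi:.3f}]")
    # -> roughly [0.42, 0.53]: the interval straddles 0.5, so even ~300
    #    games cannot separate a from b here.

Solving 1.96 * sqrt(0.25 / n) < 0.05 for n gives n > ~384, so resolving even a 5% scoring edge takes on the order of 400 games, which supports the conclusion quoted above.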