Author: David Rasmussen
Date: 16:00:15 01/03/01
On January 03, 2001 at 09:52:06, José Carlos wrote:

> Lately, people have been talking here about significant results. I'm not
> really sure if probabilistic calculus is appropriate here, because chess games
> are not stochastic events.
> So, I suggest an experiment to measure the probabilistic noise:
>
> - choose a random program and make it play itself.
> - write down the result after 10 games, 50 games, 100 games...
>
> It should tend to be an even result, and it would be possible to know how many
> games are needed to get a result with a certain degree of confidence.
> If we try this for several programs, and the results are similar, we can draw
> a conclusion, in comparison with pure probabilistic calculus.
>
> Does this idea make sense, or am I still sleeping? :)
>
> José C.

It is assumed that a chess game can be regarded as a Bernoulli experiment, with probability p of winning and 1-p of losing. The same assumption is made in the rating system. While not perfectly consistent, the idea is extended so that the distribution of wins in n games is binomial. This is not really the case, as p is not constant, at least not when humans are involved, and I believe the same holds for computers. But a series of n Bernoulli experiments (which is binomially distributed) approaches the normal distribution as n grows, and even if p fluctuates around some value p_0, we no longer have an exact binomial distribution, but as n grows the distribution still approaches the same normal distribution as if p were equal to p_0 all the time. So it still helps to play lots of games. In the case where p fluctuates (which I believe is the case in practice), you will just have to play even more games than if p were fixed.
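As a rough illustration of the experiment José suggests (not from the original post), here is a minimal sketch that models each self-play game as an independent Bernoulli draw with win probability p = 0.5, ignoring draws for simplicity, and prints the running score at the checkpoints he mentions. The function name and checkpoint list are made up for the example.

    import random

    def simulate_selfplay(num_games, p_win=0.5, seed=1):
        """Simulate num_games independent games and print the running score."""
        random.seed(seed)
        wins = 0
        checkpoints = {10, 50, 100, 500, 1000, 5000, num_games}
        for game in range(1, num_games + 1):
            if random.random() < p_win:
                wins += 1
            if game in checkpoints:
                print(f"after {game:5d} games: score = {wins / game:.3f}")

    simulate_selfplay(10000)

After 10 or 50 games the score can easily be off by 10% or more; only with many hundreds of games does it settle close to 0.5, which is the probabilistic noise in question.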
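And, assuming the normal approximation described above, one can estimate how many games are needed before a result is meaningful at a given confidence level. This is only a sketch of the standard sample-size formula, not something from the original post; z is the standard normal quantile for the chosen confidence, and p = 0.5 is the worst case (largest variance).

    def games_needed(p=0.5, margin=0.05, z=1.96):
        """Games needed so the observed score lies within +/- margin of p,
        at the confidence level implied by z (1.96 -> about 95%)."""
        return z * z * p * (1 - p) / (margin * margin)

    print(games_needed(margin=0.05))   # about 384 games for +/- 5%
    print(games_needed(margin=0.02))   # about 2401 games for +/- 2%

If p fluctuates from game to game, the effective variance is larger, so these figures are a lower bound: you need even more games, which is the point made above.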