Author: Uri Blass
Date: 19:58:59 02/01/01
On February 01, 2001 at 17:26:30, Amir Ban wrote:

>On February 01, 2001 at 17:18:46, Uri Blass wrote:
>
>>On February 01, 2001 at 17:08:36, Amir Ban wrote:
>>
>>>On January 31, 2001 at 20:17:17, Bruce Moreland wrote:
>>>
>>>>I expressed very forcefully that a 10-0 result was more valid than a 60-40
>>>>result.
>>>>
>>>>I've done some experimental tests and it appears that I'm wrong.
>>>>
>>>
>>>No, you were right the first time. Check again.
>>
>>The question is what is the meaning of a more valid result.
>
>Valid in the sense of demonstrating who is stronger.
>
>>
>><snipped>
>>>10-0 gets better than 99.9% confidence for the winner to be better.
>>>
>>>60-40 has about 95% confidence.
>>
>>I agree, but the word confidence is misleading because you may ask the question
>>what is the probability that the winner is the better player, and the confidence
>>does not give an answer to it.
>
>That's exactly what it answers.
>
>Amir

From your previous post: "you assume the null hypothesis, which is that the
result is NOT significant and is a random occurrence between equals."

You cannot calculate the probability that the winner is the better player by
assuming a model that does not exist.

I can give a simple example:

Suppose that the better program has a 51% chance to win and a 49% chance to
lose, the results of the games are independent, and the only missing data is
which program is better.

Suppose you also see a 10-0 result.

You need to calculate p(the winner is better | the result is 10-0).

You do it by Bayes' rule. Taking each program as equally likely to be the
better one before the match, you know that (the common prior factor of 1/2
cancels from both terms):

1) p(the result is 10-0) = 0.51^10 + 0.49^10
2) p(the better player is the winner and the result is 10-0) = 0.51^10

The probability that the winner is the better player after you see a 10-0
result is not the level of confidence but 0.51^10/(0.51^10 + 0.49^10).

Uri
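The arithmetic above is easy to check directly. Below is a minimal sketch of the calculation, assuming the post's model (the better program wins each independent game with probability 0.51, and a uniform 1/2 prior on which program is better, which cancels in the ratio); the function name is my own:

```python
def posterior_winner_is_better(p_win: float, n_games: int) -> float:
    """P(the sweeping winner is the better program | an n_games-0 sweep),
    by Bayes' rule with a uniform prior over which program is better."""
    better_sweeps = p_win ** n_games          # better program wins every game
    worse_sweeps = (1.0 - p_win) ** n_games   # worse program wins every game
    # The 1/2 prior multiplies both terms and cancels in the ratio:
    return better_sweeps / (better_sweeps + worse_sweeps)

# Uri's example: 0.51^10 / (0.51^10 + 0.49^10) comes out to only about 0.60.
print(f"{posterior_winner_is_better(0.51, 10):.4f}")

# For contrast, the one-sided p-value under the equal-strength null
# hypothesis: the chance a given program sweeps 10-0 when both are equal.
print(f"{0.5 ** 10:.6f}")
```

The contrast is the point of the post: rejecting the equal-strength null at better than 99.9% confidence (p ≈ 0.001) is not the same statement as the posterior probability that the winner is the stronger program, which under this model is only about 60%.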