Author: Uri Blass
Date: 03:49:48 02/02/01
On February 02, 2001 at 05:27:27, Amir Ban wrote:

>On February 01, 2001 at 22:58:59, Uri Blass wrote:
>
>>On February 01, 2001 at 17:26:30, Amir Ban wrote:
>>
>>>On February 01, 2001 at 17:18:46, Uri Blass wrote:
>>>
>>>>On February 01, 2001 at 17:08:36, Amir Ban wrote:
>>>>
>>>>>On January 31, 2001 at 20:17:17, Bruce Moreland wrote:
>>>>>
>>>>>>I expressed very forcefully that a 10-0 result was more valid than a 60-40
>>>>>>result.
>>>>>>
>>>>>>I've done some experimental tests and it appears that I'm wrong.
>>>>>
>>>>>No, you were right the first time. Check again.
>>>>
>>>>The question is what is the meaning of a more valid result.
>>>
>>>Valid in the sense of demonstrating who is stronger.
>>>
>>>><snipped>
>>>>>10-0 gets better than 99.9% confidence for the winner to be better.
>>>>>
>>>>>60-40 has about 95% confidence.
>>>>
>>>>I agree, but the word "confidence" is misleading, because you may ask what
>>>>the probability is that the winner is the better player, and the confidence
>>>>does not give an answer to it.
>>>
>>>That's exactly what it answers.
>>>
>>>Amir
>>
>>From your previous post:
>>
>>"you assume the null hypothesis, which is that the
>>result is NOT significant and is a random occurrence between equals."
>>
>>You cannot calculate the probability that the winner is the better player by
>>assuming a model that does not exist.
>
>This is what is taught in universities and is written in textbooks. If it
>doesn't work, then statisticians have been talking nonsense for centuries.

It is not written that this is the probability that the winner is better (though the words used may mislead people into believing that). Practically, the confidence level does have a meaning: a confidence of 95% means you can be at least 95% sure that you are not rejecting H0 when H0 is right (where H0 says that the opponents are equal).
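The hypothesis test the thread is arguing about can be sketched in a few lines. This is a minimal illustration of the classical calculation, assuming the simplified model used in the discussion: under H0 the players are equal, each game is an independent fair coin flip, and draws are ignored. The function name is my own, not from the post.

```python
# Significance of a 10-0 sweep under H0 (equal opponents, p = 0.5 per
# game, games independent, draws ignored). The p-value is the chance
# that *either* player sweeps all n games by luck alone.

def p_value_shutout(n_games: int) -> float:
    """Two-sided probability of an n-0 sweep under the null hypothesis."""
    return 2 * 0.5 ** n_games

p = p_value_shutout(10)
print(f"P(10-0 or 0-10 | equal players) = {p:.6f}")  # 0.001953
print(f"Confidence level = {1 - p:.3%}")             # 99.805%
```

This reproduces Amir's "better than 99.9% confidence" figure for a 10-0 result: the event is very unlikely if the players are equal. Uri's objection, developed below, is that this number is not the same thing as the probability that the winner is the better player.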
>It makes simple sense: You find the significance of an event by calculating the
>probability that the event is insignificant.
>
>>I can give a simple example:
>>
>>Suppose that the better program has 51% chance to win and 49% chance to lose
>>when the results of games are independent and the only missing data is which
>>program is better.
>
>This is a completely artificial assumption. Where does it come from ?

It comes from the idea that a small change in a chess program cannot cause a big change in the level of the program. You are right that exactly 51% or exactly 49% is an artificial assumption, but the assumption that the probability to win is between 49% and 51% is a practical one. (It is not exact, because you would also need to include the fact that there are draws and that White's probability of winning is not identical to Black's, but these refinements would only make the problem harder to explain, and they do not change the fact that, practically, after seeing a 10-0 result there are situations in which you cannot be almost sure that the winner is better.)

Uri

>What you show is that if the two opponents are almost equal, then both have
>about the same probability to win 10-0. This is true, but not relevant, because
>the question is whether the opponents are equal to start with.
>
>Amir
>
>>Suppose you also see a 10-0 result.
>>
>>You need to calculate p(the winner is better | the result is 10-0).
>>
>>You do it by Bayes' rule.
>>
>>You know that:
>>
>>1) p(the result is 10-0) = 0.51^10 + 0.49^10
>>2) p(the better player is the winner and the result is 10-0) = 0.51^10
>>
>>The probability that the winner is the better player after you see a 10-0
>>result is not the level of confidence but 0.51^10 / (0.51^10 + 0.49^10).
>>
>>Uri
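Uri's Bayes'-rule calculation can be checked directly. This sketch uses exactly the assumptions from his example: the better program wins each game with probability 0.51, games are independent, there are no draws, and (implicitly) a 50/50 prior on which program is the better one. The function name is illustrative, not from the post.

```python
# Posterior probability that the winner is the better player, given a
# clean n-0 sweep, under Uri's model: better player wins each game with
# probability p_win, games independent, no draws, 50/50 prior.

def posterior_winner_is_better(p_win: float, n_games: int) -> float:
    """P(winner is the better player | winner swept all n_games)."""
    p_better_sweeps = p_win ** n_games        # better player wins every game
    p_worse_sweeps = (1 - p_win) ** n_games   # worse player wins every game
    # Bayes' rule with equal priors: the priors cancel out.
    return p_better_sweeps / (p_better_sweeps + p_worse_sweeps)

print(posterior_winner_is_better(0.51, 10))  # ~0.599
```

So under these assumptions the 10-0 result only makes it about 60% likely that the winner is the better program, nowhere near the 99.9% confidence level of the significance test. The two numbers answer different questions, which is the crux of the disagreement.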