Author: Walter Koroljow
Date: 09:48:36 02/04/01
On February 03, 2001 at 18:58:36, Uri Blass wrote:

<snip>

>Assume level of confidence 97% in all of your tests.
>
>If you reject H0 in 3% of the times then it is possible that you are always
>wrong when you reject H0 (for example when always We=0.5).
>
>If you reject H0 in 50% of the times then you are wrong only in at most 6% of
>the cases that you reject H0 (to be more exact I need to say if the probability
>to reject H0 is 50%, but the probability is something that you do not know and
>the % of the cases that you rejected H0 practically is something that you know).
>
>Uri

If I interpret you correctly, I disagree. Let us work at the 97% confidence
level with H0 as before.

We agree that the highest Type I error rate (false rejection of H0) occurs if,
for each test we run, we have We = 0.5. In fact the error rate will be exactly
3% then. If We = 0.8, the error rate will be less -- assume 1% for convenience.

Let me make my case by example. Suppose that in the population that we test
repeatedly we have:

34% We = 0.5 (H0 true)
33% We = 0.8 (H0 true)
33% We = 0.2 (H0 false).

Then the Type I error rate for the case H0 true is

(.34*3% + .33*1%)/(.34 + .33) = 2.01%.

This is less than 3%. I cannot see how the error rate could ever exceed 3% for
any mix of We.

For the case We = 0.2, a Type I error is impossible since H0 is false. So the
overall Type I error rate will be

(.34*3% + .33*1% + .33*0%)/(.34 + .33 + .33) = 1.35%,

of course also less than 3%.

On a different topic -- the confidence interval approach, which gives a bound
and not a probability, is consistent with the Bayesian approach, which does
give a probability. Why not do both? Those people brave enough to believe in an
a priori distribution could accept the probability, and the rest would have to
be content with the bound.

Walter
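
The weighted-average arithmetic in the post above can be checked with a few
lines of Python. This is only a sketch; the population fractions and per-case
error rates are the hypothetical numbers from the example, not measured values.

    # Minimal sketch of the weighted-average Type I error calculation.
    # Fractions and per-case error rates are the hypothetical example numbers.
    population = [
        (0.34, 0.03, True),   # We = 0.5: worst case, rejection rate is the full 3%
        (0.33, 0.01, True),   # We = 0.8: assumed 1% for convenience
        (0.33, 0.00, False),  # We = 0.2: H0 false, so a Type I error is impossible
    ]

    # Type I error rate among only those tests where H0 is actually true
    num = sum(frac * err for frac, err, h0_true in population if h0_true)
    den = sum(frac for frac, err, h0_true in population if h0_true)
    print("error rate given H0 true: %.2f%%" % (100 * num / den))   # ~2.01%

    # Overall Type I error rate across the whole population of tests
    overall = sum(frac * err for frac, err, h0_true in population)  # fractions sum to 1
    print("overall error rate: %.2f%%" % (100 * overall))           # 1.35%

Changing the fractions or the per-case rates shows the same pattern: as long as
no single case exceeds the 3% worst case, no mixture can either.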