Author: Omid David Tabibi
Date: 21:53:55 12/18/02
On December 19, 2002 at 00:27:53, Miguel A. Ballicora wrote:

>On December 18, 2002 at 11:07:49, Omid David Tabibi wrote:
>
>>On December 18, 2002 at 03:21:02, Bruce Moreland wrote:
>>
>>>On December 17, 2002 at 20:44:45, Omid David Tabibi wrote:
>>>
>>>>Heinz' experiments showed that std R=3 is weaker than std R=2 [1]. Bruce's
>>>>Ferret also used std R=2 in WCCC 1999 [2]. So I took the one which is believed
>>>>to be stronger (std R=2), and showed that vrfd R=3 is superior to it.
>>>
>>>Yes, but it is possible that normal R=3 is stronger than R=2, and that your
>>>enhancement is weaker than R=3.
>>>
>>>You directly claim to be better than R=2, which is acceptable, but you imply
>>>that you are better than R=3. It is possible that you are better than R=3, but
>>>you have not shown this to be true.
>>>
>>>You could have anchored your conclusion much better by demonstrating that your
>>>algorithm is superior to R=3 as well. It's important to do this, since your
>>>algorithm is related to R=3.
>>>
>>>Whether my own program uses R=2 or R=3 has nothing to do with this. That R=2 is
>>>the accepted convention is all the more reason to challenge it by investigating
>>>R=3. If yours is better than R=3, you are winning on all fronts. If it is not
>>>better than R=3, your algorithm is very suspect, since it behaves differently
>>>than expected. Even if it's already *proven* that R=2 is better (which I
>>>doubt), you should take the time to prove it here, because if you prove it
>>>again it's evidence that your program is operating properly.
>>>
>>>It's nothing personal. I would argue these points regardless of who wrote the
>>>paper.
>>>
>>>bruce
>>
>>Have you ever conducted any research? If so, you would know that a researcher
>>doesn't examine everything since the creation of the earth; he takes something
>>which is known to be better and tries to improve it.
>
>In experimental sciences, things are many times repeated to certify that the
>right conditions for the measurements are correct. Many times, those serve as
>controls. It pretty much depends.
>

True. If you repeat published experiments and your results simply confirm them,
there is no point in publishing; but if your results contradict them, then you
have a new case.

Before starting the experiments on verified null-move pruning, I tested R=2
against R=3, and R=2 fared better. A few months ago I posted those results, also
claiming that at longer time controls the superiority of R=2 over R=3 is not
that significant (nevertheless, still superior).

But the main point of the article isn't a comparison between R=2 and R=3. It is
about showing that vrfd R=3 is superior to both R=2 and R=3, and the
experimental results, conducted on thousands of positions, strongly confirm
that. For example, see Tables 2 and 6: vrfd R=3 solves about the same number of
positions as std R=1. See Table 4: vrfd R=3 solves far more positions than std
R=2 and std R=3. Based on these results, there is no room for doubt as to vrfd
R=3's superiority.

>Miguel
>
>>
>>I didn't think that someone would seriously claim that std R=3 is better than
>>std R=2; but now, I'd be glad to write another paper comparing those two, and
>>also mentioning fixed-time comparisons if people find it interesting. Because
>>although it does not appear in the article, I have conducted tens of other
>>types of experiments (including fixed time) and I _know_ that vrfd R=3 is
>>clearly superior to std R=3.
>>
>>Omid.
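
For readers following the thread who haven't seen the paper, here is a rough C
sketch of the idea behind vrfd R=3 as it is discussed above. This is not the
paper's exact pseudo-code: the Board and Move types and the helpers quiesce(),
in_check(), null_move_ok(), gen_moves(), make_null(), unmake_null(),
make_move() and unmake_move() are hypothetical placeholders, and hash tables,
move ordering and mate/stalemate handling are left out.

/* Sketch of verified null-move pruning (vrfd R=3).  All types and helper
   routines below are placeholders, not taken from any particular engine. */

#define R         3            /* null-move depth reduction            */
#define MAX_MOVES 256

typedef struct Board Board;                    /* opaque position type  */
typedef struct { int from, to, promo; } Move;  /* placeholder move type */

int  quiesce(Board *b, int alpha, int beta);   /* quiescence search     */
int  in_check(Board *b);
int  null_move_ok(Board *b);                   /* e.g. enough material  */
void make_null(Board *b);
void unmake_null(Board *b);
int  gen_moves(Board *b, Move *list);          /* returns move count    */
void make_move(Board *b, const Move *m);
void unmake_move(Board *b, const Move *m);

int search(Board *b, int alpha, int beta, int depth, int verify)
{
    int fail_high = 0;

    if (depth <= 0)
        return quiesce(b, alpha, beta);

    /* Null-move step: give the opponent a free move and search the reply
       with reduced depth and a minimal window around beta. */
    if (!in_check(b) && null_move_ok(b)) {
        make_null(b);
        int value = -search(b, -beta, -beta + 1, depth - R - 1, verify);
        unmake_null(b);

        if (value >= beta) {
            if (!verify)
                return value;      /* standard null-move cutoff          */
            /* Verified pruning: do not cut off yet.  Reduce the remaining
               depth by one ply and keep searching, with verification
               switched off for the rest of this subtree. */
            depth--;
            verify = 0;
            fail_high = 1;
        }
    }

research:
    {
        Move list[MAX_MOVES];
        int  n = gen_moves(b, list);
        int  a = alpha;                        /* local search window    */

        for (int i = 0; i < n; i++) {
            make_move(b, &list[i]);
            int value = -search(b, -beta, -a, depth - 1, verify);
            unmake_move(b, &list[i]);

            if (value >= beta)
                return value;                  /* real cutoff found      */
            if (value > a)
                a = value;
        }

        /* The null move failed high, but the reduced-depth verification
           search produced no cutoff: treat the node as a zugzwang
           candidate and search it again at its original depth. */
        if (fail_high) {
            fail_high = 0;
            depth++;
            goto research;
        }
        return a;
    }
}

The only difference from standard R=3 is the branch guarded by verify: the
first null-move fail-high on a path is not trusted outright but checked with a
search to reduced depth, and the node is re-searched at full depth if that
check fails to reach beta.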