Computer Chess Club Archives

Subject: Re: Proving something is better

Author: Omid David Tabibi

Date: 13:13:59 12/18/02



On December 18, 2002 at 16:02:38, Bruce Moreland wrote:

>On December 18, 2002 at 11:07:49, Omid David Tabibi wrote:
>
>>Have you ever conducted any research? If you had, you would know that a
>>researcher doesn't examine everything since the creation of the earth; he takes
>>something that is already known to be good and tries to improve it.
>
>If I were testing the properties of a specific isotope of a specific element, I
>would assume that previous research is valid, because the test material is
>invariant.
>
>If I were testing something whose properties were variant, I would have to
>repeat some previous research.
>
>If you understand a certain type of wood, and are skilled at doing fine
>carpentry with this kind of wood, you may not be able to use the same methods on
>a different type of wood, because that wood may have different properties.
>
>The blind assumption that all wood is the same would lead you to produce some
>crusty looking furniture.
>
>You can't make use of much previous research in the computer chess field.  A lot
>of it was conducted on slow hardware, a lot was conducted before null-move
>pruning came into use, etc.
>
>So if someone says that R=2 is better than R=3, there is *no way* that I am
>going to believe this until I run it myself.

But when you run it yourself (as I did) and see that std R=2 is better than
std R=3, will you publish it? No, because it is a known result. Only if you run
it and find that std R=3 is better than std R=2 might you publish it, because
that would contradict the previously published research.
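
For readers following along, here is a bare-bones sketch of what "std R=2"
versus "std R=3" refers to: the side to move passes, the position is searched
to a depth reduced by R extra plies, and a fail-high is taken as a cutoff
without any further check. The types and helper functions below (Position,
Move, make_null_move() and so on) are placeholders, not code from any of the
programs discussed:

/* Negamax with standard (unverified) null-move pruning.
 * R is the null-move depth reduction: 2 for "std R=2", 3 for "std R=3".
 * Position, Move, MAX_MOVES and all helper functions are placeholders.
 */
#define R 2    /* change to 3 for "std R=3" */

int search(Position *pos, int alpha, int beta, int depth)
{
    if (depth <= 0)
        return evaluate(pos);

    /* Null move: give the opponent a free move and search shallower.
     * If even that fails high, trust it and prune this node.          */
    if (!in_check(pos) && depth > R) {
        make_null_move(pos);
        int score = -search(pos, -beta, -beta + 1, depth - 1 - R);
        unmake_null_move(pos);
        if (score >= beta)
            return beta;    /* "std": cutoff taken without verification */
    }

    Move moves[MAX_MOVES];
    int n = generate_moves(pos, moves);
    for (int i = 0; i < n; i++) {
        make_move(pos, moves[i]);
        int score = -search(pos, -beta, -alpha, depth - 1);
        unmake_move(pos, moves[i]);
        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}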


>
>There are plenty of techniques that haven't been repeatable.  I believe that
>people have had a hard time repeating MTD(f).
>There were also problems
>repeating Donninger's "deep search" aspect of his original null-move article.
>I'm sure that there are others.

MTD(f) worked for Plaat, "deep search" worked for Donninger, Adaptive Null-Move
Pruning and Extended Futility Pruning worked for Heinz, Verified Null-Move
Pruning worked for me, etc.

You have to implement all these ideas to gauge their performance in your
programs.
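
For reference, since MTD(f) came up: the driver loop itself is short. A generic
sketch, with placeholder names rather than Plaat's actual code:

/* MTD(f): a sequence of zero-window searches that converges on the
 * minimax value. alphabeta_tt() stands for an alpha-beta search with a
 * transposition table ("with memory"); f is the first guess.
 * Position, INF_SCORE and alphabeta_tt() are placeholders.
 */
int mtdf(Position *pos, int f, int depth)
{
    int g = f;
    int lower = -INF_SCORE;
    int upper = +INF_SCORE;

    while (lower < upper) {
        int beta = (g == lower) ? g + 1 : g;
        g = alphabeta_tt(pos, beta - 1, beta, depth);
        if (g < beta)
            upper = g;    /* failed low: value is below beta      */
        else
            lower = g;    /* failed high: value is at least beta  */
    }
    return g;
}

Whether this pays off in practice depends heavily on the transposition table
and move ordering around the loop, which is probably where the reproduction
difficulties come from.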


>
>Most of that research is just ideas about stuff to try to repeat.
>
>>I didn't think that someone will seriously claim that std R=3 is better than std
>>R=3; but now, I'd be glad to write another paper comparing those two, and also
>
>I'm assuming you mean R=2 in the second line.  Your own data implies this.  I
>think it behooves you to investigate it.
>
>I believe that if you run R=3 for the amount of time that it takes you to get to
>depth=10 with R=2, *you* will find that you get more answers on both the WCS
>suite and the Neishtadt suite.
>

I conducted self-play matches between std R=2 and std R=3. The results showed
that std R=2 is superior, and that was enough for me.
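
For context, such a match is usually summarized as a percentage score, and with
a modest number of games the error margin is wide. A back-of-the-envelope
sketch (the counts below are placeholders, not my actual results):

/* Score and a rough one-sigma error margin for a self-play match.
 * The win/draw/loss counts are placeholders, not real match data.
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
    int wins = 30, draws = 40, losses = 30;          /* placeholders */
    int games = wins + draws + losses;

    double score = (wins + 0.5 * draws) / games;     /* 0.0 .. 1.0   */
    double se = sqrt(score * (1.0 - score) / games); /* rough estimate */

    printf("score %.1f%% +/- %.1f%% over %d games\n",
           100.0 * score, 100.0 * se, games);
    return 0;
}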


>If you don't, I would have serious questions as to why not.  You are within one
>solution already.  How could you expect that you won't get at least two more
>solutions if you more than double the time?
>
>bruce
>
>>mentioning fixed-time comparisons if people find it interesting. Because
>>although they do not appear in the article, I have conducted dozens of other
>>experiments (including fixed-time ones) and I _know_ that vrfd R=2 is clearly
>>superior to std R=3.
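
To make the difference between "std" and "vrfd" concrete: in the verified
scheme a null-move fail-high is not trusted outright. The fragment below only
conveys that general idea, relative to the null-move block in the sketch
earlier in this post; the exact algorithm, including how verification is
handled inside the subtree, is the one described in the paper and is not
reproduced here.

    /* Verification idea only (loose sketch, not the paper's exact code):
     * when the null-move search fails high, re-check with a search of
     * reduced depth instead of cutting off immediately.                */
    if (!in_check(pos) && depth > R) {
        make_null_move(pos);
        int score = -search(pos, -beta, -beta + 1, depth - 1 - R);
        unmake_null_move(pos);
        if (score >= beta) {
            int v = search(pos, alpha, beta, depth - 1); /* reduced depth */
            if (v >= beta)
                return beta;   /* fail-high confirmed */
            /* otherwise fall through to the regular move loop */
        }
    }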


