Computer Chess Club Archives


Subject: Re: Proving something is better

Author: Omid David Tabibi

Date: 09:13:46 12/19/02


On December 19, 2002 at 09:53:21, Vincent Diepeveen wrote:

>Your own data shows that R=3 produces a far smaller tree,
>and you didn't compare time to solution in any of the
>tests performed.
>
>So you cannot claim anything about verified null move.
>
>The tactics you can solve easily with checks in qsearch.
>
>Heinz concluded a combination of R=3 + R=2 is better,
>*not* that R=2 is better than R=3 in itself.
>

Vincent, it seems that you have not even read Heinz' article. Read Section 3 of
Adaptive Null-Move Pruning (Heinz, 1999).
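
For readers without the paper at hand, the gist of the adaptive scheme is to
choose the null-move reduction per node instead of fixing it globally. A
minimal sketch in C, assuming illustrative depth and material thresholds (the
exact conditions are in Heinz' Section 3 and may differ from these):

    /* Adaptive null-move reduction in the spirit of Heinz (1999):
     * deep subtrees get the aggressive R=3, shallow ones the safer
     * R=2, and in between the amount of material decides.  The
     * thresholds here are illustrative assumptions only. */
    int adaptive_null_reduction(int depth, int pieces_per_side)
    {
        if (depth > 8)
            return 3;            /* deep remaining depth: prune hard */
        if (depth > 6 && pieces_per_side <= 3)
            return 3;            /* mid depth with little material   */
        return 2;                /* everywhere else: conservative    */
    }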



>On December 18, 2002 at 22:50:56, Omid David Tabibi wrote:
>
>>On December 18, 2002 at 22:36:38, Andrew Dados wrote:
>>
>>>On December 18, 2002 at 22:12:43, Omid David Tabibi wrote:
>>>
>>>>On December 18, 2002 at 22:08:39, Andrew Dados wrote:
>>>>
>>>>>On December 18, 2002 at 22:00:08, Omid David Tabibi wrote:
>>>>>
>>>>>>On December 18, 2002 at 21:48:03, Andrew Dados wrote:
>>>>>>
>>>>>>>On December 18, 2002 at 18:56:21, Omid David Tabibi wrote:
>>>>>>>
>>>>>>>>On December 18, 2002 at 18:12:22, Bruce Moreland wrote:
>>>>>>>>
>>>>>>>>>On December 18, 2002 at 16:59:10, Omid David Tabibi wrote:
>>>>>>>>>
>>>>>>>>>>You take two numbers and draw a very general conclusion. Look at other tables
>>>>>>>>>>and depths, which show a more significant superiority of std R=2 over std R=3.
>>>>>>>>>>
>>>>>>>>>>Look at Tables 2 and 6. Vrfd R=3 solved almost the same number of positions as
>>>>>>>>>>std R=1!!! Does that leave any room for doubt as to vrfd R=3's superiority over
>>>>>>>>>>std R=3?
>>>>>>>>>
>>>>>>>>>I don't see anything that demonstrates the superiority of R=2 over R=3.  You
>>>>>>>>>say to look at Table 2 -- so I do.  It shows that R=2 gets one more correct
>>>>>>>>>through ply 10, but takes over twice as long to do it.  I suggest that if R=3
>>>>>>>>>were allowed to continue until R=2 finished, it would have found significantly
>>>>>>>>>more than one solution in the meantime.
>>>>>>>>>
>>>>>>>>>Table 6 has no node counts, so I don't know how much faster R=3 is than R=2.  It
>>>>>>>>>gets 286 as opposed to 292.  Fine.  How much less time did it take to get it?
>>>>>>>>>
>>>>>>>>>Maybe VR=3 is better than R=3.  The paper should allow me to draw this
>>>>>>>>>conclusion.
>>>>>>>>>
>>>>>>>>>A reason I bring up the comparison between R=3 and R=2 is this: if your data
>>>>>>>>>proves that R=3 is better than R=2, and you don't believe that R=3 is better
>>>>>>>>>than R=2, then maybe your other results are flawed.
>>>>>>>>>
>>>>>>>>>Imagine you are writing a paper on some aspect of biological science, and your
>>>>>>>>>data suddenly implies that evolution doesn't take place.  Doesn't *that* seem
>>>>>>>>>worth investigating?
>>>>>>>>>
>>>>>>>>>Either you are on the verge of a serious breakthrough, or your testing process
>>>>>>>>>is wrong.  You need to figure out which.
>>>>>>>>>bruce
>>>>>>>>
>>>>>>>>Bruce,
>>>>>>>>
>>>>>>>>Apparently we are not looking at the data from the same perspective. As I told
>>>>>>>>you before, I conducted self-play matches, and their results showed that std
>>>>>>>>R=2 is superior to std R=3. That said, I still think this finding is not worth
>>>>>>>>publishing, as it is an already known fact.
>>>>>>>>
>>>>>>>>I understand your criticism of the fixed-depth method, which is the standard
>>>>>>>>scientific comparison in computer chess. But I'm afraid your case against fixed
>>>>>>>>depth is not strong enough to convince the whole computer chess research
>>>>>>>>community to opt for fixed-time comparisons instead.
>>>>>>>>
>>>>>>>>Mentioning some fixed-time experiments in a footnote or appendix could have
>>>>>>>>been interesting; but even without them, my experiments took more than six
>>>>>>>>months, running 24 hours a day, 7 days a week.
>>>>>>>>
>>>>>>>>If you have a specific experiment in mind, I would be glad to conduct it
>>>>>>>>whenever I get the time; but beyond that, I would like the algorithm as
>>>>>>>>implemented in your program to speak for itself.
>>>>>>>>
>>>>>>>>In our discussion today I didn't go into details and kept my replies short,
>>>>>>>>because none of your points were new, and I had already discussed all of them
>>>>>>>>in detail a few weeks ago. I'm sure anyone who followed those discussions could
>>>>>>>>have answered all your questions.
>>>>>>>>
>>>>>>>>Based on the programmers' feedback, I also posted several implementation
>>>>>>>>suggestions for the various variants of this algorithm, which I'm sure you'll
>>>>>>>>find helpful.
>>>>>>>>
>>>>>>>>Now you will have to excuse me for not being able to continue the discussion,
>>>>>>>>for I am up to my ears in work on another paper (on Blockage Detection), which
>>>>>>>>I hope will be ready soon.
>>>>>>>
>>>>>>>Please, refrain...
>>>>>>>
>>>>>>
>>>>>>Sorry Andrew, I didn't understand what you meant.
>>>>>
>>>>>I meant refrain from producing tons of 'scientific' papers with wrong or
>>>>>unjustified conclusions. Instead of vr=3 and its 80% node overhead, simply
>>>>>implement checks in q-search and get more tests right with 10-15% overhead.
>>>>>
>>>>>Btw.. try self-playing vr=3 versus r=2 or 3 + checks in qsearch :)
>>>>>
>>>>
>>>>And why do you think that vrfd R=3 won't be superior to std R=2 if they both use
>>>>checks in q-search?
>>>
>>>Then what do you need to verify? The biggest problem with null move is when it
>>>drops you right into qsearch and you don't do checks after that.
>>>
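>>>As a rough sketch of what I mean (the move-generation names here are made up,
>>>not any particular program's API, and in-check evasions are omitted):
>>>
>>>    /* Quiescence search that also tries checking moves near its root,
>>>     * so a null move that drops straight into qsearch still has to
>>>     * survive the opponent's checks. */
>>>    int qsearch(Position *pos, int alpha, int beta, int qply)
>>>    {
>>>        int score = evaluate(pos);           /* stand-pat score */
>>>        if (score >= beta)
>>>            return score;
>>>        if (score > alpha)
>>>            alpha = score;
>>>
>>>        MoveList list;
>>>        gen_captures(pos, &list);
>>>        if (qply < 2)                        /* checks only near the top:  */
>>>            gen_checks(pos, &list);          /* that's the 10-15% overhead */
>>>
>>>        for (int i = 0; i < list.count; i++) {
>>>            make_move(pos, list.moves[i]);
>>>            score = -qsearch(pos, -beta, -alpha, qply + 1);
>>>            unmake_move(pos, list.moves[i]);
>>>            if (score >= beta)
>>>                return score;
>>>            if (score > alpha)
>>>                alpha = score;
>>>        }
>>>        return alpha;
>>>    }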
>>
>>Based on the feedback I've received from other programmers, verified null-move
>>pruning usually yielded a considerable tactical improvement even in programs
>>using checks in q-search and other extensions.
>>
>>The idea of verified null-move pruning will work fine in almost every program;
>>in some after 10 minutes of implementation, in others after several days of
>>fine-tuning. The main issue is to find a suitable reduction value after a
>>fail-high report. In my experiments I used a reduction of 1 after a fail-high
>>report, but programs with more sophisticated extensions might achieve good
>>results with a value of 2, or even an adaptive value.
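>>
>>To make that concrete, here is roughly what the fail-high handling looks like
>>(a simplified sketch only; the identifiers are invented rather than any
>>program's actual code, and R is the usual null-move reduction):
>>
>>    /* Verified null-move pruning, fail-high handling (simplified).
>>     * 'verify' is true until a null-move fail-high has already been
>>     * verified on the current path. */
>>    make_null_move(pos);
>>    value = -search(pos, -beta, -beta + 1, depth - R - 1, verify);
>>    unmake_null_move(pos);
>>    if (value >= beta) {
>>        if (!verify)
>>            return value;     /* ordinary null-move cutoff         */
>>        depth -= 1;           /* verification: reduce depth by 1   */
>>        verify = 0;           /* (the reduction value I used) and  */
>>    }                         /* keep searching the node instead   */
>>                              /* of cutting off immediately        */
>>    /* ...fall through to the regular move loop at the (possibly
>>     * reduced) depth: if it confirms a score >= beta we still get
>>     * the cutoff, and in a zugzwang it catches the null move's lie. */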
>>
>>After a certain depth (a threshold), vrfd R=3 will always create a smaller tree
>>than std R=2. For some programs that threshold might be higher (one programmer
>>reported depth 12 as the threshold in his program!), so the experiments should
>>be conducted at greater depths. It is quite natural that, for example, up to
>>depth 10 vrfd R=3 constructs a larger tree than std R=2, while from depth 11
>>onward the tree is smaller. The key issue is to find the threshold.
>>
>>
>>>Another (much smaller) problem is when a piece is trapped. Here you may try
>>>improving on the verification search: when null fails low (I know, you do
>>>verification when null fails high) and is refuted by a direct capture, either
>>>suppress the null move in subtrees, or suppress it only when you try evasions
>>>of the piece en prise. Or extend. Lots of options; most of us tried those some
>>>time ago. This is all old stuff....
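>>>
>>>A rough sketch of the first of those options (names invented; assume the
>>>search can report the refuting move): if the null move fails low and the
>>>refutation is a direct capture, a piece is probably hanging, so switch the
>>>null move off below:
>>>
>>>    /* Hypothetical: search() here returns the refuting move too. */
>>>    make_null_move(pos);
>>>    value = -search(pos, -beta, -beta + 1, depth - R - 1, &refutation);
>>>    unmake_null_move(pos);
>>>    if (value < beta && is_capture(refutation))
>>>        allow_null = 0;   /* trapped piece suspected: no null in subtree */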
>>>
>>>But I don't think bloating the tree by 80% to get null-move problems right is
>>>a good idea.
>>>
>>
>>80% ?!?
>>
>>
>>>>
>>>>>>
>>>>>>BTW, are you still working on Rookie?!
>>>>>No, I stopped like 1 year ago.
>>>>>
>>>>>>
>>>>>>
>>>>>>>-Andrew-


