Computer Chess Club Archives



Subject: Re: Questions about verified null move

Author: Omid David Tabibi

Date: 10:14:05 07/21/03

On July 21, 2003 at 13:07:56, Vincent Diepeveen wrote:

>On July 21, 2003 at 12:41:48, Omid David Tabibi wrote:
>
>>On July 21, 2003 at 11:15:30, Vincent Diepeveen wrote:
>>
>>>On July 20, 2003 at 04:55:17, Ryan B. wrote:
>>>
>>>>Questions about verified null move
>>>>
>>>>1. Should the verification search result be stored in the hash table? At this
>>>>time I store the verification result in the hash, but I think I am doing
>>>>something wrong. My program is playing stronger with standard null move set to
>>>>R=3 than with verified null move at R=3. Possibly the verification result
>>>>should not be stored?
>>>
>>>This is a correct conclusion.
>>>
>>>Verified null move does not improve your program, except when you test it on
>>>mating positions where checks matter a lot; there it gives you an extra ply, so
>>>you find the tricks one ply sooner.
>>>
>>>In games, however, it will play worse. It also basically shows how weak a
>>>qsearch is when, in such positions, you do not find the mate in the qsearch.
>>>
>>>Also note that the overhead of verified null move is pretty big.
>>>
>>>The way Omid made his comparisons in his article is not scientifically correct.
>>>
>>
>>You can say whatever you want, but there isn't a week in which I don't receive
>>an email from another programmer reporting that verified R=3 works better than
>>adaptive R=2~3 and standard R=2 or R=3. But of course you have never tried it,
>>so you could never know. And for your information, I know of at least one
>>commercial program that uses verified null-move pruning.
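
For reference, the algorithm under debate works roughly as follows. This is a
minimal C sketch following the pseudocode published in Tabibi and Netanyahu's
"Verified Null-Move Pruning" (ICGA Journal, 2002); the Position and Move types
and every helper below are hypothetical placeholders, not any particular
engine's API. It also suggests one plausible source of the hash-table trouble
in the original question: the verification re-search changes a node's effective
depth, so storing the reduced-depth verification result as if it were a
full-depth result can corrupt later probes.

    /* Minimal sketch of verified null-move pruning (after Tabibi &
       Netanyahu, ICGA Journal 2002).  R = 3 throughout; the root call
       passes verify = 1.  All types and helpers are placeholders, and
       the usual no-two-nulls-in-a-row condition is omitted for brevity. */

    #define R    3
    #define INF  1000000

    typedef struct Position Position;   /* engine-specific */
    typedef int Move;                   /* engine-specific */

    int  quiesce(Position *pos, int alpha, int beta);
    int  in_check(const Position *pos);
    int  has_pieces(const Position *pos);            /* non-pawn material */
    int  generate_moves(Position *pos, Move *moves); /* returns move count */
    void make_move(Position *pos, Move m);
    void unmake_move(Position *pos, Move m);
    void make_null_move(Position *pos);
    void unmake_null_move(Position *pos);

    int search(Position *pos, int alpha, int beta, int depth, int verify)
    {
        int i, n, value, best, fail_high = 0;
        int orig_alpha = alpha;
        Move moves[256];

        if (depth <= 0)
            return quiesce(pos, alpha, beta);

        if (!in_check(pos) && has_pieces(pos)) {
            make_null_move(pos);
            value = -search(pos, -beta, -beta + 1, depth - R - 1, verify);
            unmake_null_move(pos);
            if (value >= beta) {
                if (!verify)
                    return value;   /* plain null-move cutoff */
                depth--;            /* verify: continue, one ply shallower */
                verify = 0;         /* the subtree is searched unverified */
                fail_high = 1;
            }
        }

    research:
        alpha = orig_alpha;
        best  = -INF;
        n = generate_moves(pos, moves);
        for (i = 0; i < n; i++) {
            make_move(pos, moves[i]);
            value = -search(pos, -beta, -alpha, depth - 1, verify);
            unmake_move(pos, moves[i]);
            if (value > best) {
                best = value;
                if (best > alpha)
                    alpha = best;
                if (best >= beta)
                    break;          /* fail high */
            }
        }

        /* The reduced verification search failed low even though the null
           move failed high: zugzwang is suspected, so restore the depth
           and re-search this node with verification switched back on. */
        if (fail_high && best < beta) {
            depth++;
            fail_high = 0;
            verify = 1;
            goto research;
        }
        return best;
    }
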
>
>Names of those programmers.
>
>I bet all those programs are in the <= 2000 Elo region, written by people who
>do not know much about what happens in search trees in the last few plies when
>different mixes of forward pruning, null move, and qsearch features interact
>with each other.
>
>You are underestimating how many experiments the programmers of > 2000 Elo
>programs try.
>
>I bet you have never done an objective comparison in your life.
>
>Note that, in contrast to you, I do not claim that double null move, which I
>invented, is going to save nodes. On the contrary. It is, however, guaranteed
>to find zugzwangs. In fact it was invented to prove clearly that searching with
>null move is 100% the same as a normal search, only with some lines searched
>shorter. Sooner or later you will find the same move being best.
>
>That was the intention of double null move. I never claimed it to be faster
>than other null-move schemes, though.
>
>Yet I know, off the top of my head, about 4 well-known test sets where using it
>will result in finding more solutions in the same time span.
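
As a reference for what "double null move" means in code: a minimal C fragment
of the gate, assuming a search() that threads a consecutive-null counter
("nulls", an illustrative name) through the recursion. A second null move in a
row is allowed, a third is not; after two consecutive null moves the same side
is to move again at reduced depth, so a zugzwang line is still searched (only
shallower) instead of being cut off.

    /* Double null move: "nulls" counts consecutive null moves on the
       current path.  A second null in a row is allowed, a third is
       forbidden, so the search can never skip a zugzwang entirely. */
    if (nulls < 2 && !in_check(pos)) {
        make_null_move(pos);
        value = -search(pos, -beta, -beta + 1, depth - R - 1, nulls + 1);
        unmake_null_move(pos);
        if (value >= beta)
            return value;                     /* null-move cutoff */
    }
    /* The regular move loop recurses with nulls = 0: real moves reset
       the counter, so the two-in-a-row limit is local to the path. */
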
>
>So if, in your crap story about verified null move, I replace the words
>'verified null move' with 'double null move' and swap the test sets for some
>others, then I can of course show any algorithm to be better than default R=3
>or R=2, or than my suggestion back around 1995/1996 on RGCC to use a
>combination of R=2 and R=3 in some way. That general idea was later picked up
>by Heinz, and in a specific combination we know it under a different name now.
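
The R=2 and R=3 combination referred to here is what is now known as adaptive
null-move pruning. Below is a minimal sketch of the depth-based part of the
idea, in the same hypothetical C style as above; the threshold is illustrative
only, and Heinz's published rule additionally conditions on the material left
on the board.

    /* Adaptive null-move reduction: the deeper the remaining search,
       the larger (riskier) the reduction.  The threshold is
       illustrative; Heinz's rule also looks at remaining material. */
    int null_move_reduction(int depth)
    {
        return depth > 6 ? 3 : 2;
    }
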
>
>There were also several earlier verification ideas for null move, such as
>crudely doing an R-2 search in which no null move was allowed; others did allow
>null move in the verification search.
>
>So your verified null move 'idea' is just a variant of older, already-tried
>ideas, except those came without bugs, unlike your implementation, which has
>hash-table bugs.
>
>I am getting the impression that you do not even understand how many
>experiments are performed by most programmers whose engines are around 500-700
>points stronger than yours.

Oh, really? I have yet to see those 3100-3300 rated engines...!



>
>They are, however, doing very objective science, and no, they do not post much
>about it.
>
>Your experiments, in that sense, are below even beginner level. And the way you
>wrote your article is the standard way to present an incorrect algorithm as
>correct.
>
>May I remind you of the Zugzwang team in the 80s, which was also comparing node
>counts?
>
>That way they could claim a speedup (efficiency) of 50% versus 1 processor with
>a hash table of size n, using 500 processors with hash tables totaling n*30, of
>course forgetting to mention that it took hours to get that 500-processor run
>finished :)
>
>I'll give you another trivial example:
>
>Using node counts, I can 'prove' that null move is very bad at the last ply for
>DIEP. If I do not allow null move at depthleft <= 1, then I get something like
>a 10-20% node count reduction for DIEP. However, measured in time it is of
>course way slower :)
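
The experiment described reduces to a one-line gate on the null-move try; a
sketch in the same hypothetical C style as above. The point stands regardless
of the exact numbers: the two metrics disagree, so node counts alone cannot
tell you whether a pruning change is an improvement; only time-to-solution or
game results can.

    /* Disallow the null-move try when only one ply of depth remains.
       Per the report above this trims counted nodes by 10-20% for
       DIEP, yet the search is slower on the clock -- which is exactly
       why node counts are a misleading efficiency metric. */
    if (depth > 1 && !in_check(pos)) {
        /* ... usual null-move attempt goes here ... */
    }
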


