Author: Vincent Diepeveen
Date: 10:19:40 07/21/03
On July 21, 2003 at 13:07:56, Vincent Diepeveen wrote:

Perhaps some are amazed to read that double nullmove was invented to prove that it is a correct way of searching compared with full-width search, given that you always get a much bigger depth with nullmove than without it.

The reason was that in the years 1997-1999, and if I remember well far into 2000, professor Robert Hyatt was claiming loudly that Deep Blue was so much better because it searched 12 ply full-width with incredible extensions, and that we could not compare any program using nullmove, even one reaching 20 ply, with Deep Blue's strength, because those 12 ply (in total) were better than 12 ply of, for example, my DIEP (which currently does no forward pruning other than nullmove, despite hundreds of objective tries). In 2000 Hyatt, after finding out that the consensus even among the programmers of < 2000 rated programs was that nullmove is a correct way to search, changed his statement to Deep Blue searching 18-19 ply full-width (even though already in 1999 it was clear enough to others that Hsu's paper mentioned 12 ply as the maximum iteration depth), and not so long ago I still saw such idiotic statements on the internet. That is why the full-width versus nullmove discussion has moved into the background.

>On July 21, 2003 at 12:41:48, Omid David Tabibi wrote:
>
>>On July 21, 2003 at 11:15:30, Vincent Diepeveen wrote:
>>
>>>On July 20, 2003 at 04:55:17, Ryan B. wrote:
>>>
>>>>Questions about verified null move
>>>>
>>>>1. Should the verify be stored in the hash table? At this time I store the verify in hash but think I am doing something wrong. My program is playing stronger with standard null move set to R=3 than verified null move with R=3. Possibly the verify should not be stored?
>>>
>>>This is a correct conclusion.
>>>
>>>Verified nullmove is not improving your program, except if you test it on mating positions where checks matter a lot; there it gives you another ply, so you find tricks one ply sooner.
>>>
>>>However it will play worse. It also basically shows how bad a qsearch is when in such positions you do not find the mate in qsearch.
>>>
>>>Also note that the overhead of verified nullmove is pretty big.
>>>
>>>The way in which Omid compared in his article is scientifically not correct.
>>
>>You can say whatever you want, but there isn't a week in which I don't receive an email from another programmer who reports that verified R=3 works better than adaptive R=2~3 and standard R=2 or R=3. But of course you have never tried it, so you could never know. And for your information, I know of at least one commercial program which is using verified null-move pruning.
>
>Names of those programmers.
>
>I bet all those progs are in the <= 2000 Elo region, and their authors do not know much about what happens in the last few plies of their search trees when the different mixes of forward pruning, nullmove and qsearch features interact with each other.
>
>You are underestimating how many experiments the programmers of > 2000 rated programs try.
>
>I bet you have never done any objective comparison in your life.
>
>Note that, in contrast to you, I do not claim that double nullmove, which I invented, is going to save nodes. On the contrary. It is however guaranteed to find zugzwangs. In fact it was invented to prove clearly that searching with nullmove is 100% the same as a normal search, only some lines are searched to a shorter depth. Sooner or later you will find the same move to be best.
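Roughly, in a bare negamax frame, the idea looks like this. It is only a sketch with made-up helper names (Position, make_null, evaluate and so on), not DIEP's actual code:

/* A sketch of double nullmove in a bare negamax search.
   Position, in_check(), evaluate(), make_null(), unmake_null(),
   generate(), make(), unmake() are made-up helpers of a generic engine. */

#define R 3                          /* nullmove depth reduction */

typedef struct Position Position;    /* engine-specific board state */
typedef struct { int count; int move[256]; } MoveList;

extern int  in_check(const Position *pos);
extern int  evaluate(const Position *pos);
extern void make_null(Position *pos);
extern void unmake_null(Position *pos);
extern void generate(const Position *pos, MoveList *list);
extern void make(Position *pos, int move);
extern void unmake(Position *pos, int move);

/* nulls = number of nullmoves made in a row on the path to this node */
int search(Position *pos, int alpha, int beta, int depth, int nulls)
{
    if (depth <= 0)
        return evaluate(pos);        /* stands in for the real qsearch */

    /* Allow at most two consecutive nullmoves.  After two nulls in a
       row the same side is to move in the same position, only with
       depth reduced by 2*(R+1), so a zugzwang line is still searched
       (just shorter) instead of being pruned away for good. */
    if (nulls < 2 && !in_check(pos)) {
        make_null(pos);
        int score = -search(pos, -beta, -beta + 1, depth - 1 - R, nulls + 1);
        unmake_null(pos);
        if (score >= beta)
            return beta;             /* nullmove fails high: cutoff */
    }

    MoveList list;
    generate(pos, &list);
    for (int i = 0; i < list.count; i++) {
        make(pos, list.move[i]);
        /* a real move resets the consecutive-nullmove counter */
        int score = -search(pos, -beta, -alpha, depth - 1, 0);
        unmake(pos, list.move[i]);
        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}

The only difference from a standard R=3 nullmove search is the nulls counter: standard nullmove forbids a second null in a row, double nullmove only forbids the third.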
>That was the intention of double nullmove. I never claimed it to be faster than other nullmove variants, though.
>
>Yet I know off the top of my head about 4 well KNOWN testsets where using it will result in finding more solutions in the same time span.
>
>So if in your crap story about verified nullmove I replace the words 'verified nullmove' with 'double nullmove' and replace the testset with some others, then of course I can show any algorithm to be better than default R=3 or R=2, or my suggestion back around 1995/1996 in RGCC to use a combination of R=2 and R=3 in some way. This general idea was later picked up by Heinz, and in a specific combination we know it under a different name now.
>
>Then there were several verification ideas for nullmove, like rudely doing an R-2 search allowing no nullmove, while others did allow nullmove in it.
>
>So your verified nullmove 'idea' is just a variant of older, already tried ideas, except those were without bugs, unlike your implementation, which has hashtable bugs.
>
>I am getting the impression that you do not even understand how many experiments most programmers with engines around 500-700 points stronger than yours are performing.
>
>They are however doing very objective science, and no, they do not post much about it.
>
>Your experiments in that sense are even below beginner level. And the way you wrote your article is the default way to present an incorrect algorithm as being correct.
>
>May I remind you of the Zugzwang team in the 80s, which was also comparing node counts?
>
>That way they could claim a speedup (efficiency) of 50%, comparing 1 processor with a hashtable of n against 500 processors with a hashtable of n*30, of course forgetting to mention that it took hours to get that 500 processor run finished :)
>
>I'll give you another trivial example:
>
>I can prove that nullmove is very bad at the last ply for DIEP when measured in node counts. If I do not allow nullmove at depthleft <= 1, I get something like a 10-20% node count reduction for DIEP. However, measured in time it is of course way slower :)
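For comparison, the verification idea being argued about looks roughly like this, reusing the helpers from the sketch above. It is a simplified sketch, not Omid's exact published algorithm and not DIEP's code; the depth of the verification re-search (depth - R - 1 here) and whether nullmove stays allowed inside it are exactly the details that differed between the older variants mentioned above:

/* Simplified sketch of a "verified" nullmove: a fail-high nullmove is
   only trusted after a shallower re-search with nullmove switched off.
   allow_null == 0 disables nullmove for the rest of the subtree. */
int verified_search(Position *pos, int alpha, int beta, int depth, int allow_null)
{
    if (depth <= 0)
        return evaluate(pos);        /* stands in for the real qsearch */

    if (allow_null && !in_check(pos)) {
        make_null(pos);
        int score = -verified_search(pos, -beta, -beta + 1, depth - 1 - R, 0);
        unmake_null(pos);

        if (score >= beta) {
            if (depth <= R + 1)
                return beta;         /* too shallow to verify: accept the cutoff */
            /* verification: same side to move, reduced depth, no nullmove;
               this re-search is the extra overhead of the method */
            int verify = verified_search(pos, beta - 1, beta, depth - R - 1, 0);
            if (verify >= beta)
                return beta;
            /* verification failed: fall through to the normal move loop */
        }
    }

    MoveList list;
    generate(pos, &list);
    for (int i = 0; i < list.count; i++) {
        make(pos, list.move[i]);
        int score = -verified_search(pos, -beta, -alpha, depth - 1, allow_null);
        unmake(pos, list.move[i]);
        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}

Compared with plain R=3 nullmove, the only addition is the re-search after a fail high, which is where the extra overhead comes from.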