Computer Chess Club Archives


Subject: Re: listing a few beginner bugs in Omid's 'research'

Author: Robert Hyatt

Date: 10:16:08 12/18/02


On December 17, 2002 at 19:16:26, Vincent Diepeveen wrote:

>On December 17, 2002 at 18:21:28, Uri Blass wrote:
>
>>On December 17, 2002 at 18:11:20, Vincent Diepeveen wrote:
>>
>>>On December 17, 2002 at 17:30:36, Bruce Moreland wrote:
>>>
>>>If you go back in time a bit, you will see that I had
>>>major problems with Omid's article and posted about them here.
>>>
>>>There are more problems than just the ones you see there.
>>>
>>>Also look at his homepage, get the positions he tested,
>>>and then look at his node counts. For a mate-in-2 position
>>>where I need something like a couple of hundred nodes to get to 10 ply,
>>>he needs 10 million nodes. Then R=3 reduces that further.
>>>
>>>Also, his implementation is buggy, of course. It doesn't take into
>>>account problems with transpositions, a classic beginner's problem.
>>>
>>>But most important is that verification search is not something new;
>>>it is a buggy implementation of something already described years ago,
>>>with the only 'novelty' being that Omid turns off nullmove *completely*
>>>after he finds a nullmove failure.
>>
>>No, he does not.
>>There is no point in the tree where he turns off nullmove completely.
>>
>>>
>>>All in all, a very sad article. The only good thing about it is
>>>the quantity of tests done.
>>>
>>>The test methods, the implementation, and the conclusions are
>>>grammar-school level.
>>>
>>>I do not know who proofread it, but it must have been idiots or people who
>>>didn't care at all.
>>>
>>>Amazingly, Bob defended Omid here and said nothing was wrong with
>>>the article.
>>
>>Based on his post, Bob also found that verification search is good for Crafty.
>>Bob is not the only one who defended Omid.
>>
>>I also defend him, and you are the only poster who attacks him (even posters
>>who said that it did not work for them did not say that it is very bad).
>>Most of what you say is not correct.
>>
>>Uri
>
>You are dreaming.
>
>OK, to list a few bugs in his paper:
>
>  a) All his test positions are mates and he doesn't do
>     checks in qsearch, so R=2 versus R=3 matters a lot because
>     of the extra ply of main search you miss that would find the
>     mate for you. So if your qsearch is that buggy, it is logical
>     that R=2 works better at depth==9 than R=3 at depth==9;
>     this is *trivial*. It is so trivial that no serious researcher
>     should compare the same ply depths with each other without
>     taking time into account.
>
>     Because by that logic we would conclude for sure that minimax
>     search is better than alpha-beta.

That is simply nonsense.

That is a conclusion I could see _you_ making.  But not anybody else.
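
For anyone who wants to see what is actually being argued about, plain null-move
pruning with a reduction factor R looks roughly like the sketch below in a
generic negamax searcher. This is only an illustration with made-up names
(Search, Quiesce, MakeNullMove and so on); it is not code from Genesis, Crafty
or Diep. R is simply how many extra plies the null-move search is reduced by,
which is why R=2 versus R=3 changes how early a deep mate is seen.

/* Minimal sketch of null-move pruning with reduction R in a
 * fail-hard negamax framework.  All names are illustrative. */

typedef struct Position Position;        /* assumed engine types   */
int  Quiesce(Position *pos, int alpha, int beta);
int  InCheck(const Position *pos);
void MakeNullMove(Position *pos);
void UnmakeNullMove(Position *pos);

#define R 2            /* try 3 for the more aggressive reduction */

int Search(Position *pos, int alpha, int beta, int depth)
{
    if (depth <= 0)
        return Quiesce(pos, alpha, beta);

    /* Give the opponent a free move and search R plies shallower.
     * If we still reach beta, assume this node is a fail-high. */
    if (!InCheck(pos) && depth > R) {
        int score;
        MakeNullMove(pos);
        score = -Search(pos, -beta, -beta + 1, depth - 1 - R);
        UnmakeNullMove(pos);
        if (score >= beta)
            return beta;                 /* null-move cutoff */
    }

    /* ... normal move loop at depth - 1 follows here ... */
    return alpha;
}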

>
>  b) Now, even accepting the flaw in his comparison of comparing
>     depth==9 with depth==9 instead of comparing by time,
>     the next problem is that he is just testing mates, so the
>     reduction factor matters. It is trivial to try adaptive nullmove then.
>
>  c) There are major bugs in the program Genesis, judging by the
>     branching-factor differences between R=1, R=2 and R=3.
>     I do not know a single serious chess program that shows
>     such a difference.
>
>  d) Genesis needs way too many nodes to get to a decent ply depth,
>     even compared to programs doing checks in their qsearch
>     and extensions in the nominal search. For a mate in 2 he needs
>     something like 10 million nodes to get to depth == 10.

So?  If you force the search to a specific depth, that is not an unexpected
thing.
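
Since "adaptive nullmove" keeps coming up in (b) above, here is the usual shape
of that idea, roughly in the sense of Heinz: use the bigger reduction when there
is plenty of depth left and the smaller one near the leaves. The threshold and
the names below are illustrative, not anybody's actual tuning.

/* Sketch of adaptive null-move pruning: pick the reduction from the
 * remaining depth.  The threshold (6) is illustrative only; real
 * programs also consider material, in-check status, etc. */

static int NullReduction(int depth)
{
    return (depth > 6) ? 3 : 2;   /* R=3 deep in the tree, R=2 near leaves */
}

/* In the Search() sketch above, the null-move call then becomes:
 *
 *     int r = NullReduction(depth);
 *     score = -Search(pos, -beta, -beta + 1, depth - 1 - r);
 */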


>
>  e) There is an illegal position in his test set.
>
>  f) His algorithm is not new. It is a rewrite of something already
>     existing, and he rewrote it wrong. He has a bug in his verification
>     search; you can easily prove it by using transpositions.


It is "new".  It is similar (in ways) to an old idea but it is also new
in ways.
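
For readers who have not seen the paper, the verification idea as I read it is
roughly this: run the usual R=3 null-move search, but at nodes where
verification is still enabled, do not trust a fail-high immediately; instead
re-search the node one ply shallower with verification switched off in that
subtree, and keep the cutoff only if that search also reaches beta. The sketch
below is my own rough illustration of that idea with made-up names; it is not
the paper's exact pseudocode and certainly not Genesis source.

/* Rough sketch of the verified null-move idea: a null-move fail-high
 * at a "verified" node is checked by a one-ply-shallower search with
 * verification disabled before the cutoff is accepted.  Illustration
 * only; not the paper's exact pseudocode. */

typedef struct Position Position;
int  Quiesce(Position *pos, int alpha, int beta);
int  InCheck(const Position *pos);
void MakeNullMove(Position *pos);
void UnmakeNullMove(Position *pos);

int Search(Position *pos, int alpha, int beta, int depth, int verify)
{
    if (depth <= 0)
        return Quiesce(pos, alpha, beta);

    if (!InCheck(pos) && depth > 1) {
        int score;
        MakeNullMove(pos);
        score = -Search(pos, -beta, -beta + 1, depth - 1 - 3, verify);
        UnmakeNullMove(pos);
        if (score >= beta) {
            if (!verify)
                return beta;            /* ordinary null-move cutoff */
            /* Verify: search this node one ply shallower, with
             * verification off below, before trusting the cutoff. */
            if (Search(pos, alpha, beta, depth - 1, 0) >= beta)
                return beta;
            /* otherwise fall through and search normally at full depth */
        }
    }

    /* ... normal move loop at depth - 1, passing verify down ... */
    return alpha;
}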



>
>  g) It certainly won't detect zugzwang, simply because of the transposition
>     bugs. Therefore the only claim can be that it enhances the tactical
>     abilities of his program, and therefore it is crucial to also test
>     different forms of adaptive nullmove (with different remaining
>     depths for switching from R=3 to R=2).
>
>  h) It is unclear why he concluded, from his own data, that verification
>     search is better:
>       a) the more full-width search clearly finds
>          more positions than verification does.
>
>       b) R=3 uses fewer nodes than his verification search.
>
>     It is very unclear how he then concludes that verification search
>     is better. It tops no list of 'this works better' anywhere.
>
>   i) Even from where I sit, and without having Genesis, I can already tell
>      that adaptive nullmove works better than his own verification search.
>      You can easily write down on paper what his beloved verification search
>      is doing: it simply avoids nullmoving in the last few plies
>      initially. So there is always a form of adaptive nullmove that is
>      simply going to outgun it completely.
>
>   j) The test set is basically just mating positions in all tests.
>      That is very dangerous ground for drawing conclusions.
>
>Note that at depth == 10 ply, with Diep and R=3, I solve far more
>positions in way fewer nodes than this guy ever will. This test set is just
>too simple.


So???
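
For anyone who has not followed the transposition argument in (f) and (g)
above, what is being talked about is nothing more exotic than the usual
hash-table probe that runs before any null-move or verification logic at a
node, so a stored score gets reused regardless of how the subtree that produced
it was pruned. Whether that actually constitutes a "bug" in the paper is
exactly what is in dispute. A generic sketch of that probe pattern, with
illustrative names only (not Genesis or Crafty code):

/* Generic transposition-table probe done before any null-move or
 * verification logic at a node.  A sufficiently deep hit returns a
 * score that may have been stored from a null-move-pruned subtree. */

#include <stdint.h>

enum { EXACT, LOWER_BOUND, UPPER_BOUND };

typedef struct {
    uint64_t key;        /* Zobrist key of the position             */
    int      depth;      /* draft the stored score was searched to  */
    int      score;
    int      flag;       /* EXACT, LOWER_BOUND or UPPER_BOUND       */
} HashEntry;

typedef struct Position { uint64_t key; /* ... */ } Position;

/* assumed lookup into the table; returns NULL on a miss */
HashEntry *LookupHash(uint64_t key);

int ProbeHash(const Position *pos, int depth, int alpha, int beta, int *score)
{
    HashEntry *e = LookupHash(pos->key);
    if (!e || e->key != pos->key || e->depth < depth)
        return 0;
    if (e->flag == EXACT ||
        (e->flag == LOWER_BOUND && e->score >= beta) ||
        (e->flag == UPPER_BOUND && e->score <= alpha)) {
        *score = e->score;
        return 1;              /* usable hit: the search returns here */
    }
    return 0;
}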


