Computer Chess Club Archives



Subject: Re: listing a few beginner bugs in Omid's 'research'

Author: Dann Corbit

Date: 17:02:07 12/17/02



On December 17, 2002 at 19:55:39, Vincent Diepeveen wrote:
>On December 17, 2002 at 19:36:59, Dann Corbit wrote:
>>On December 17, 2002 at 19:16:26, Vincent Diepeveen wrote:
>>>On December 17, 2002 at 18:21:28, Uri Blass wrote:
>>>>On December 17, 2002 at 18:11:20, Vincent Diepeveen wrote:
>>>>>On December 17, 2002 at 17:30:36, Bruce Moreland wrote:
>>>>>If you go back in time a bit you will see that I had
>>>>>major problems with Omid's article and posted them here.
>>>>>There is more than just the problems you see there.
>>>>>Also look at his homepage, get the positions he tested,
>>>>>and then look at his node counts. For a mate-in-2 position
>>>>>where I need like a couple of hundred nodes to get to 10 ply,
>>>>>he needs 10 million nodes. Then R=3 reduces that even more.
>>>>>
>>>>>Also his implementation is buggy, of course: it doesn't take
>>>>>transposition problems into account. A classic beginner's problem.
>>>>>
>>>>>But most important, verification search is not something new.
>>>>>It is a buggy implementation of something already described years
>>>>>ago, with the only 'novelty' being that Omid turns off nullmove
>>>>>*completely* after he finds a nullmove failure.
>>>>
>>>>No, he does not.
>>>>There is no point in the tree at which he turns off nullmove completely.
>>>>
>>>>>
>>>>>All in all a very sad article. The only good thing about it is
>>>>>the quantity of tests done.
>>>>>
>>>>>The test methods and the implementation and the conclusions are
>>>>>grammar school level.
>>>>>
>>>>>I do not know who proofread it, but they must have been idiots or
>>>>>people who didn't care at all.
>>>>>
>>>>>Amazingly Bob defended Omid here and said nothing was wrong with
>>>>>the article.
>>>>
>>>>Bob also found that verification search is good for Crafty, based on his post.
>>>>Bob is not the only one who defended Omid.
>>>>
>>>>I also defend him, and you are the only poster who attacks him (even
>>>>posters who said that it did not work for them did not say that it is
>>>>very bad). Most of what you say is not correct.
>>>>
>>>>Uri
>>>
>>>You are dreaming.
>>>
>>>OK, to list a few bugs in his paper:
>>>
>>>  a) all his test positions are mates and he doesn't do
>>>     checks in qsearch, so R=2 versus R=3 matters a lot because
>>>     of the extra ply you miss from the main search finding the
>>>     mate for you. So if your qsearch is that buggy, it is logical
>>>     that R=2 works better at depth==9 than R=3 at depth==9;
>>>     this is *trivial*. It is so trivial that no serious researcher
>>>     should compare the same ply depths with each other without
>>>     taking time into account.
>>>
>>>     Because otherwise we are going to conclude that minimax search
>>>     is better than alpha-beta for sure.
>>
>>Alpha-beta is a form of minimax.  Time is important; I don't know whether he
>>considered it or not.
>>
>>>  b) now, already granting the bug in his comparison of depth==9
>>>     with depth==9 instead of factoring in time, the bug is that
>>>     he is just testing mates, so the reduction factor matters.
>>>     It is trivial then to try adaptive nullmove.
>>>
>>>  c) there are major bugs in the program Genesis, judging by the
>>>     branching-factor differences between R=1, R=2 and R=3.
>>>     I do not know a single serious chess program that has
>>>     such a difference.
>>
>>Every chess program has a different branching factor.
>>
>>>  d) Genesis needs way too many nodes to get to a decent ply depth,
>>>     even when compared to programs doing checks in their qsearch
>>>     and extensions in the nominal search. For mate in 2 he needs
>>>     like 10 million nodes to get to depth == 10.
>>
>>It isn't as good as some other programs, but that can be said of any program
>>that is not a WMCCC or SSDF winner.
>>
>>>  e) an illegal position in his testset.
>>
>>It was corrected in the actual test.  He had a bad version posted on his web
>>site.
>>
>>>  f) his algorithm is not new. It is a rewrite of something already
>>>     existing, and he rewrote it wrong. He has a bug in his verification
>>>     search. You can easily prove it by using transpositions.
>>
>>Perhaps it is a rediscovery to some degree.  In any case, it was new to me.
>>
>>>  g) It won't detect zugzwang for sure, simply because of transposition
>>>     bugs. Therefore the only claim can be that it enhances the tactical
>>>     abilities of his program. Therefore it is crucial to also test
>>>     different forms of adaptive nullmove (with different depths at
>>>     which to go from R=3 to R=2).
>>
>>I think this is untethered extrapolation.
>>
>>>  h) It is unclear why he concluded from his own data that
>>>     verification search is better:
>>>       a) more full-width search than verification
>>>          clearly finds more positions.
>>>
>>>       b) R=3 uses fewer nodes than his verification search.
>>>
>>>     It is very unclear how he then concludes that verification search
>>>     is better. It tops no list of 'this works better' anywhere.
>>>
>>>   i) Even from where I sit, and without having Genesis, I can already
>>>      smell that adaptive nullmove works better than his verification
>>>      search. You can easily write down on paper what his beloved
>>>      verification search is doing: it simply avoids nullmove in the
>>>      last few plies initially. So there is always a form of adaptive
>>>      nullmove that is simply going to outgun it completely.
>>
>>Every chess program will have a different response to some new technique.
>>Perhaps his own program will behave in a totally different way if he changes his
>>move ordering.  In any case, it is an interesting idea and an interesting read
>>(for me)...  Obviously YMMV.
>>
>>>   j) the testset is basically just mating positions in all tests.
>>>      That is very dangerous ground for drawing conclusions.
>>
>>It is the only way to be 100% sure that the outcome is correct.  If you have
>>any position with only a material win, it is always possible that the move is
>>not only not the best move but may even lose.
>>
>>>Note that at depth == 10 ply with Diep at R=3 I solve far more
>>>positions in way fewer nodes than this guy ever will. This testset is
>>>just too simple.
>>
>>Why not write a truly excellent research paper and publish it?  Those who
>>bother to do it should be commended for their efforts.  We rarely see praise
>>for attempted explanations of chess ideas, and we often see ridicule.  Quite
>>frankly, I don't think that even the most basic questions, like "bitboard or
>>0x88" or "fast/dumb versus slow/smart evaluation", are answered.  Hence, any
>>logical debate on chess ideas is a good thing.  Some ideas may be a rehash of
>>an old notion or (perhaps) a slight tweak to an old idea.  But for some of us
>>idiots, it still makes for a bit of good and enjoyable reading.  And yet
>>another thing to try.
>
>Hello Dann, for a researcher you miss a crucial point. What do you
>conclude from point h)?
>
>He concludes something that contradicts his own test results.
>
>If I do 2 tests
>  a) and winner is X
>  b) and winner is Y
>
>How can a researcher then conclude Z is better?
>
>Even with 100 lines of text that's *impossible*.
>
>Do you agree or not?

Here is what I conclude from his article:
Null move verification is an interesting idea, worth examining.  It may or may
not make a given program work better.  I'm not sure how strong a case the
article makes, but it is clear to me that the idea is simple.
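For readers who have not seen the paper, here is a minimal sketch of null-move pruning with a verification re-search, in the spirit of what is being argued about.  The toy game (take 1-3 stones, taking the last stone wins), the names, and the exact verification rule are illustrative simplifications of mine, not a reproduction of Omid's algorithm or of Genesis:

```python
R = 3  # the null-move depth reduction the thread argues about

def evaluate(stones):
    # Crude heuristic from the side to move's point of view.
    return 50 if stones % 4 else -50

def search(stones, depth, alpha, beta, allow_null=True, verify=True):
    if stones == 0:
        return -1000                     # no stones left: side to move lost
    if depth <= 0:
        return evaluate(stones)

    # Null move: give the opponent two moves in a row.  If the position
    # is still >= beta, assume a real move would fail high as well.
    if allow_null and depth > R:
        score = -search(stones, depth - 1 - R, -beta, -beta + 1,
                        allow_null=False, verify=verify)
        if score >= beta:
            if not verify:
                return score
            # Verification: before trusting the cutoff, re-search at
            # reduced depth with null move switched off.
            score = search(stones, depth - 1 - R, alpha, beta,
                           allow_null=False, verify=False)
            if score >= beta:
                return score

    best = -10**9
    for take in (1, 2, 3):               # legal moves: take 1-3 stones
        if take > stones:
            break
        score = -search(stones - take, depth - 1, -beta, -alpha,
                        allow_null=True, verify=verify)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                # beta cutoff
            break
    return best
```

In this sketch a failed verification simply falls through to the normal move loop; where exactly null move is switched back on afterwards is precisely the detail Vincent and Uri disagree about above.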

>But regarding articles: I'm not a big scientific writer, but
>I'm actually going to write something real soon. First improving Diep's
>NUMA code a bit.
>
>Then I'm going to do a few tests with it.
>
>Though my testing will be very objective and I won't cheat anywhere,
>it's for sure that, despite my algorithm and implementation being
>superior to any other form of parallelism ever produced for computer
>chess, the speedups won't look too convincing to those who are not
>very well informed.
>
>A major problem I'm confronted with is complete bullshit written by
>others.

You see things as either shining white or jet black.  I see different shades of
white and grey.  An idea may be very helpful for someone else and not work at
all for you.  That does not mean that the idea is bad.  Only that it is bad in
your program's framework.
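As an aside, the "adaptive nullmove" Vincent keeps bringing up amounts to very little code: instead of a fixed R, the reduction is picked from the remaining depth (and, in Heinz's formulation, the material on the board).  The thresholds in this sketch are illustrative, not taken from any real engine:

```python
def null_move_reduction(depth, pieces_per_side):
    # Deep in the tree the aggressive R=3 is cheap to risk; near the
    # leaves fall back to the safer R=2.  Thresholds are illustrative.
    if depth > 6 or (depth > 4 and pieces_per_side >= 3):
        return 3
    return 2
```

An engine would call this at every node and search the null move to depth - 1 - R, so the reduction is aggressive high in the tree and conservative near the leaves.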

>Bob of course was in a league of few processors and invented the
>search times himself. In the league of many processors, like 500,
>Feldmann concluded an incredible speedup of 50%, which is
>very hard to conclude.
>
>I doubt i'll get close to that.
>
>Of course Feldmann didn't compare very fairly. In fact there is no
>'time' table at all in his research paper. Also it's amazing
>to see search depths of 5 ply mentioned.
>
>With the first versions of Diep I already got deeper than 5 ply,
>and that was on like 66MHz 486 computers.
>
>He had like 500 processors; even if those were 1MHz transputer
>processors, a ply or 5 should take no more than a few seconds :)
>
>But researchers who can read it will conclude the right thing from
>his research paper. The problem in the parallel field is that you
>can literally count those on one hand.
>
>Let's talk about how many nodes a second I would get on a system
>similar to the ones Cilkchess and Zugzwang ran on at the world champs
>in '99. Then compare how much more knowledge is in Diep's eval and how
>many more nodes a second I get than they did on the same system :)
>
>For me, world champs 2003 is a major success if I get close to
>as many nodes a second as the fastest participant other than me.
>
>I hope to get around 10-20 million nodes a second, though.
>
>Compare that with a dual K7 1.6GHz, where I get 130-200k nps.
>
>A factor of 100 more at world champs 2003, of course, if I get the
>system time, which is always very unsure; I fear I'll probably again
>know only 3 days before the tournament!

An efficient parallel search will be something to shout about.  Lots of people
have put energy into that area, but very few people say anything about it or
show their code or anything like that.

You have written a very good explanation of double null move in the past, and so
I see that you have some talent for writing.  (I think Bruce Moreland is
probably the most creative one around here when it comes to explaining chess
algorithms, but I don't think he is motivated to write an article or a book.)
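For reference, that double-null-move idea can itself be sketched in a few lines: a null move is allowed even right after the opponent's null move, but never three in a row.  Two consecutive nulls hand the same side a reduced-depth search of the same position, which catches zugzwang without a separate verification search.  The toy game (take 1-3 stones, taking the last stone wins) and all names here are my own illustration, not Diep's actual code:

```python
R = 3  # null-move depth reduction

def evaluate(stones):
    return 50 if stones % 4 else -50     # crude side-to-move heuristic

def search(stones, depth, alpha, beta, nulls_in_row=0):
    if stones == 0:
        return -1000                     # side to move has no move: loss
    if depth <= 0:
        return evaluate(stones)

    # Allow a null even after the opponent's null, but forbid a third
    # in a row; back-to-back nulls reduce to a shallower search of the
    # same position for the same side, exposing zugzwang.
    if nulls_in_row < 2 and depth > R:
        score = -search(stones, depth - 1 - R, -beta, -beta + 1,
                        nulls_in_row + 1)
        if score >= beta:
            return score

    best = -10**9
    for take in (1, 2, 3):               # real moves reset the null count
        if take > stones:
            break
        score = -search(stones - take, depth - 1, -beta, -alpha, 0)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break
    return best
```

The only bookkeeping beyond plain null move is the `nulls_in_row` counter, which is why the technique is often preferred over an explicit verification pass.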




Last modified: Thu, 07 Jul 11 08:48:38 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.