Computer Chess Club Archives



Subject: Re: listing a few beginner bugs in Omid's 'research'

Author: Uri Blass

Date: 23:26:03 12/17/02



On December 17, 2002 at 23:46:06, Vincent Diepeveen wrote:

>On December 17, 2002 at 19:54:06, Uri Blass wrote:
>
>>On December 17, 2002 at 19:16:26, Vincent Diepeveen wrote:
>>
>>>On December 17, 2002 at 18:21:28, Uri Blass wrote:
>>>
>>>>On December 17, 2002 at 18:11:20, Vincent Diepeveen wrote:
>>>>
>>>>>On December 17, 2002 at 17:30:36, Bruce Moreland wrote:
>>>>>
>>>>>if you go back in time a bit you see that I had
>>>>>major problems with Omid's article and posted about it here.
>>>>>
>>>>>there is more than just the problems you see there.
>>>>>
>>>>>also look at his homepage and get the positions he tested
>>>>>and then look at his node counts. for a mate in 2 position
>>>>>where I need like a couple of hundred nodes to get to 10 ply
>>>>>he needs 10 million nodes. then R=3 reduces that even more.
>>>>>
>>>>>also his implementation is buggy of course. it doesn't take into
>>>>>account problems with transpositions. a classical beginner's problem.
>>>>>
>>>>>But most important is that verification search is not something new.
>>>>>it is a buggy implementation of something already described years ago,
>>>>>with the only 'novelty' being that Omid turns off nullmove *completely*
>>>>>after he finds a nullmove failure.
>>>>
>>>>No, he does not.
>>>>There is no point in the tree where he turns off nullmove completely.
>>>>
>>>>>
>>>>>All in all a very sad article. The only good thing about it is
>>>>>the quantity of tests done.
>>>>>
>>>>>The test methods, the implementation and the conclusions are
>>>>>grammar school level.
>>>>>
>>>>>I do not know who proofread it, but they must have been idiots or people
>>>>>who didn't care at all.
>>>>>
>>>>>Amazingly Bob defended Omid here and said nothing was wrong with
>>>>>the article.
>>>>
>>>>Bob also found, based on his post, that verification search is good for Crafty.
>>>>Bob is not the only one who defended Omid.
>>>>
>>>>I also defend him, and you are the only poster who attacks him (even posters who
>>>>said that it did not work for them did not say that it is very bad).
>>>>Most of what you say is not correct.
>>>>
>>>>Uri
>>>
>>>You are dreaming.
>>>
>>>OK, to list a few bugs in his paper:
>>>
>>>  a) all his test positions are mates and he doesn't do
>>>     checks in qsearch, so R=2 versus R=3 matters a lot because
>>>     of the extra ply of main search you miss that would find the
>>>     mate for you. So if your qsearch is that buggy it is logical
>>>     that R=2 works better at depth==9 than R=3 at depth==9;
>>>     this is *trivial*. It is so trivial that no serious researcher
>>>     should compare the same plydepths with each other without
>>>     taking time into account.
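For readers following along, here is a minimal sketch of where R enters an ordinary null-move search (hypothetical helper names, not code from Genesis or any program mentioned here). At the same nominal depth, R=3 simply hands one ply less to the reduced search than R=2, which is the effect point a) describes.

/* Minimal sketch of a standard null-move pruning step (hypothetical names). */
typedef struct Position Position;          /* engine position type (assumed) */
extern int  qsearch(Position *, int alpha, int beta);
extern int  in_check(const Position *);
extern void make_null_move(Position *);
extern void unmake_null_move(Position *);

#define R 3                                /* the reduction being argued about */

int search(Position *pos, int alpha, int beta, int depth)
{
    if (depth <= 0)
        return qsearch(pos, alpha, beta);

    if (!in_check(pos)) {
        make_null_move(pos);
        /* reduced-depth null-move search: depth - 1 - R */
        int score = -search(pos, -beta, -beta + 1, depth - 1 - R);
        unmake_null_move(pos);
        if (score >= beta)
            return score;                  /* fail-high cutoff */
    }

    /* ... normal move loop would follow here ... */
    return alpha;
}
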
>>
>>not all his test positions are mates.
>>I believe that most of the test positions are not mates.
>
>vast majority are. I don't know how good your program is at
>finding mates, but with DIEP I see a mate score for each position
>within seconds when I tested all the positions.
>
>>The fact that R=2 works better at depth=9 than R=3 at depth=9 is not the point
>>of the research.
>
>the whole working of the 'verification' search is based upon
>that comparison. the other comparisons are done to talk around it.
>
>>
>>>
>>>     Because we are going to conclude that minimax search is better
>>>     than alpha-beta for sure.
>>>
>>>  b) now, already accepting the flaw in his comparison of
>>>     depth==9 with depth==9 instead of factoring in time,
>>>     the problem is that he is just testing mates, so the reduction
>>>     factor matters. It is trivial to try adaptive nullmove then.
>>>
>>>  c) there are major bugs in the program Genesis, looking at the
>>>     branching factor differences between R=1, R=2 and R=3.
>>>     I do not know a single serious chess program that has
>>>     such a difference.
>>
>>Genesis is not his serious program, but that does not mean that there are bugs
>>in it.
>
>the branching factor of Genesis is too huge (with nullmove R=2, I mean)
>to take it very seriously.
>
>>>
>>>  d) Genesis needs way too many nodes to get to a decent plydepth
>>>     when compared even to programs doing checks in their qsearch
>>>     and extensions in the nominal search. For mate in 2 he needs like
>>>     10 million nodes to get to depth == 10.
>>
>>You look at the wrong positions.
>>The time that it needs to get to depth 10 in mates in 2 is not interesting and can
>>be ignored, but there are enough positions that are not forced mates.
>
>It shows the principle of Genesis needing too many nodes for
>such positions. There is no difference in the branching factor or number of nodes
>needed by Genesis for a very complicated position versus a position
>with a very simple cutoff idea (mate in 2, for example), which should only
>burn a few nodes.
>
>If the average number of moves and tactics is big, then in
>general I get a much bigger branching factor in DIEP than
>when the number of possible moves and tactics is very small.
>
>>>
>>>  e) illegal position in his testset.
>>
>>He replied that those were simply old positions on his site and he did not use the
>>illegal position.
>
>It's encouraging to know how accurately his testset was chosen, then.
>
>>>  f) his algorithm is not new. It is a rewrite of something already
>>>     existing and he rewrote it wrong. He has a bug in his verification
>>>     search. You can easily prove it by using transpositions.
>>
>>I do not know what you are talking about.
>
>You know very well what I mean. It has been said a lot of times before.
>
>>>  g) It won't detect a zugzwang for sure, simply because of transposition
>>>     bugs. Therefore the only claim can be that it is enhancing the tactical
>>>     abilities of his program.
>>
>>It can detect zugzwang, but only if there is no more than one zugzwang in the
>>same line.
>
>If you aren't storing in the hashtable whether a verification has been
>done already and taking that into account for transpositions, then it
>is *trivial* to prove that you can and will miss zugzwangs with this
>algorithm.
>
>Just first get into a position where a verification has been done,
>then at a position X do a search with nullmove (and no possibility
>to verify anymore). Store that in the hashtable.
>
>Somewhere else, before a verification has been done, you get a transposition
>to that position X and there the damage is done!
>
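For illustration, a minimal sketch of one way to close that hole (hypothetical field and function names, not from Genesis or from the paper): tag each hash entry with the verification state under which its score was produced, and refuse the transposition when the states do not match.

/* Sketch only: record in each hash entry whether the stored search was done
 * with verification still pending, so a cutoff obtained without verification
 * cannot be reused where a verification is still owed. */
#include <stdint.h>

typedef struct {
    uint64_t key;
    int16_t  score;
    uint8_t  depth;
    uint8_t  flags;                  /* bound type bits plus the bit below */
} HashEntry;

#define HASH_VERIFY_PENDING 0x04     /* searched with the verify flag still on */

static int hash_usable(const HashEntry *e, int depth, int verify)
{
    if (e->depth < depth)
        return 0;
    /* If this node still owes a verification, only trust entries that were
     * themselves searched with verification pending. */
    if (verify && !(e->flags & HASH_VERIFY_PENDING))
        return 0;
    return 1;                        /* bound checks against alpha/beta omitted */
}

Whether the extra lost transpositions are worth it is a separate question; the sketch only shows where the missing information would live.
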
>>>     Therefore it is crucial to also test
>>>     different forms of adaptive nullmove (with different depths to
>>>     go from R=3 to R=2).
>>>
>>>  h) It is unclear why he concluded that verification search is better using
>>>     his own data:
>>>       a) more full-width search than verification
>>>          clearly finds more positions.
>>
>>he did not compare verification search against R=3 but only verification search
>>against R=2.
>>There is no conclusion about whether verification search (with R=3) is better than
>>plain R=3 based on the article.
>>The games that he played were against R=2 and not against R=3.
>
>That's exactly the weak spot of the implementation. If you implement
>something which practically sees tactically about the same
>as adaptive nullmove, then you had better compare against it.
>
>Also a good comparison would be an adaptive nullmove where you use
>R=1 at the last 3 plies,
>R=2 at the last 4 plies,
>R=3 at the last 5 plies and at bigger remaining depths.
>
>So in principle always doing a full search of 1 ply to try to get
>a fail low.
>
>That misses a lot less in those last few plies before the qsearch,
>which of course covers up possible major bugs in one's qsearch a lot.
>
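Reading the schedule above as R=1 with 3 or fewer plies to go, R=2 at 4 plies, and R=3 from 5 plies up (my interpretation of the wording), a sketch of such a reduction helper, to be plugged into a null-move step like depth - 1 - R, might look like this:

/* Sketch of the adaptive reduction schedule suggested above (my reading of it;
 * hypothetical helper name).  With this schedule a null-move fail-high at
 * 3, 4 or 5 plies to go is always checked by a real 1-ply search rather than
 * dropping straight into the qsearch. */
static int null_reduction(int depth)
{
    if (depth <= 3) return 1;        /* R=1 in the last 3 plies   */
    if (depth == 4) return 2;        /* R=2 at 4 plies to go      */
    return 3;                        /* R=3 at 5 plies and deeper */
}

At 1 or 2 plies to go the reduced search falls into the qsearch anyway, so the schedule mainly changes behaviour from 3 plies up.
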
>>>
>>>       b) R=3 uses fewer nodes than his verification search.
>>>
>>>     it is very unclear how he then concludes that verification search
>>>     is better. It tops no list of 'this works better' anywhere.
>>
>>
>>>
>>>   i) Even from where I sit, and without having Genesis, I can already smell
>>>      that adaptive nullmove works better than his own verification search.
>>>      You can easily write down on paper what his beloved verification
>>>      search is doing. It simply avoids nullmove in the last few plies
>>>      initially. So there is always a form of adaptive nullmove that is
>>>      simply going to completely outgun it.
>>
>>It does not avoid nullmove, and it always uses R=3 after null in the search.
>
>I know, but if you write down what happens you will see that it
>basically replaces the search with a 1-ply search in the last few plies
>before the root.

Only in cases where a null move was not done, and those cases are a minority when the
depth is big.
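
For anyone who has not read the paper, here is a much-simplified sketch of the scheme as I understand it (hypothetical names, and the re-search details around detected zugzwangs are left out): the null-move reduction is always R=3, and a fail-high only becomes an immediate cutoff once a verification has already been done higher up in that branch; otherwise the depth is reduced by one ply and the subtree is searched with verification switched off.

/* Much-simplified sketch of verified null-move pruning as I understand it
 * (hypothetical helper names; zugzwang re-search details omitted). */
typedef struct Position Position;
extern int  qsearch(Position *, int alpha, int beta);
extern int  in_check(const Position *);
extern void make_null_move(Position *);
extern void unmake_null_move(Position *);

int vsearch(Position *pos, int alpha, int beta, int depth, int verify)
{
    if (depth <= 0)
        return qsearch(pos, alpha, beta);

    if (!in_check(pos)) {
        make_null_move(pos);
        int score = -vsearch(pos, -beta, -beta + 1, depth - 1 - 3, verify);
        unmake_null_move(pos);
        if (score >= beta) {
            if (!verify)
                return score;   /* verification already done in this branch */
            depth -= 1;         /* otherwise reduce the depth by one ply and */
            verify = 0;         /* search this subtree with verification off */
        }
    }

    /* ... normal move loop, recursing with (depth - 1, verify) ... */
    return alpha;
}

The cost Vincent points at comes from that reduction branch: near the leaves, where plain R=2 or R=3 would simply take the cutoff, the verified version instead keeps searching the node at reduced depth.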

>
>Adaptive nullmove is closer to that than R=3 is. Of course others
>who use adaptive nullmove (I'm always using R=3) immediately saw that
>and posted here that he should compare against adaptive nullmove. I
>feel that this is 100% the truth.
>
>>If you always use R=3 after null, then you can expect R=3 to be used in most of the
>>tree when you search deep enough.
>
>Not really, because you assume the case where you do the verification
>near the root. However, search doesn't work like that.
>
>Verification is doing countless 1-ply searches near the leaves where
>otherwise the qsearch provides the cutoff.
>
>Doing a 1-ply search there all the time, which neither R=3 nor R=2
>is doing, is very expensive compared to just a qsearch.
>
>Of course a lot of that expensive searching is getting pruned away
>by transpositions which are caused by bugs, because it doesn't store
>whether a cutoff already did a verification or not. So usually the
>answer is 'not'. You get a lot of transpositions, at nodes where a
>verification hasn't been done yet, into a tree where a verification wasn't done.

Today I do not use hash tables for pruning in Movei, so for me
this is clearly not relevant, but I found that at long time controls the difference
between verification and not doing verification is smaller, and I suspect that
when the time control is long enough verification can beat non-verification
search.

I need to change my depth definition before using hash tables for pruning (today
I have a special variable for partial extensions that is not stored in the hash
tables), and my tests showed no significant difference from using hash tables for
pruning, even if I prune only when the depth is bigger than the stored depth.
If I prune only when the depth is bigger than or equal to the stored depth, I can
go deeper but can get fewer solutions at the same time.

My pruning ideas mean that I cannot be sure of getting the same result at the
same depth when other variables are different (for example, I may get different
results when alpha and beta are different), so I am not sure about the value of
using hash tables for pruning for me, and I decided not to use them unless I get
significantly better results in test suites (something that did not happen).

Today I use verification search only in endgames that are not pawn endgames, but
I plan to test better ideas about using verification search (I also have other
ideas to test).

Uri


