Computer Chess Club Archives



Subject: Re: new thoughts on verified null move

Author: Omid David Tabibi

Date: 09:20:02 11/24/02



On November 24, 2002 at 11:05:24, Robert Hyatt wrote:

>On November 24, 2002 at 07:01:46, Omid David Tabibi wrote:
>
>>On November 23, 2002 at 23:33:39, Tony Werten wrote:
>>
>>>On November 23, 2002 at 23:14:52, Omid David Tabibi wrote:
>>>
>>>>On November 23, 2002 at 22:25:02, Tony Werten wrote:
>>>>
>>>>>On November 23, 2002 at 22:14:27, Omid David Tabibi wrote:
>>>>>
>>>>>>On November 23, 2002 at 21:50:01, Tony Werten wrote:
>>>>>>
>>>>>>>On November 23, 2002 at 21:24:08, Omid David Tabibi wrote:
>>>>>>>
>>>>>>>>On November 23, 2002 at 21:09:36, Tony Werten wrote:
>>>>>>>>
>>>>>>>>>On November 23, 2002 at 20:52:01, Omid David Tabibi wrote:
>>>>>>>>>
>>>>>>>>>>On November 23, 2002 at 20:00:15, Tony Werten wrote:
>>>>>>>>>>
>>>>>>>>>>>On November 23, 2002 at 11:11:16, Christophe Theron wrote:
>>>>>>>>>>>
>>>>>>>>>>>>On November 23, 2002 at 09:22:37, jefkaan wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>>oops, wasn't finished yet..
>>>>>>>>>>>>>
>>>>>>>>>>>>>>are done by using the results of the positional eval
>>>>>>>>>>>>>>to prune the q-search,
>>>>>>>>>>>>>and there using only material eval
>>>>>>>>>>>>> (haven't tried it out yet, and wouldn't
>>>>>>>>>>>>>know how to do it, but it's only an idea,
>>>>>>>>>>>>>you know.. to explore options of
>>>>>>>>>>>>>more effective branch-factor reductions
>>>>>>>>>>>>>and efficient programming (besides
>>>>>>>>>>>>>lousy solutions such as inline assembler
>>>>>>>>>>>>>and bitboards..
>>>>>>>>>>>>>:)
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>Yes Chess Tiger does much more pruning than known (published) techniques.
>>>>>>>>>>>>
>>>>>>>>>>>>I think other top programs do it also.
>>>>>>>>>>>>
>>>>>>>>>>>>I still fail to see why the efficiency of an algorithm depends on what your
>>>>>>>>>>>>QSearch does.
>>>>>>>>>>>>
>>>>>>>>>>>>If your pruning algorithm is good, it will increase the strength of the program
>>>>>>>>>>>>regardless of how good your QSearch is.
>>>>>>>>>>>>
>>>>>>>>>>>>If your QSearch is smart, then it will increase the strength even more.
>>>>>>>>>>>>
>>>>>>>>>>>>I don't like the idea that some algorithms that have almost nothing to do with
>>>>>>>>>>>>each other would have such an influence on each other. It is indeed possible and
>>>>>>>>>>>>it probably happens all the time, but it's hard to work with such a hypothesis in
>>>>>>>>>>>>mind.
>>>>>>>>>>>>
>>>>>>>>>>>>I think it's better to first assume that the kind of QSearch you do will not
>>>>>>>>>>>>interfere with the quality of the pruning algorithm used before the QSearch.
>>>>>>>>>>>>
>>>>>>>>>>>>If your QSearch sucks, it's not because you are doing a lot of pruning in the
>>>>>>>>>>>>"full width" part of the search. It's because it sucks.
>>>>>>>>>>>
>>>>>>>>>>>The paper does prove that the more your (q)search sucks, the better your pruning
>>>>>>>>>>>algorithm seems. But that's not really news.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>Does it prove that?! No, it's just my impression based on the data gathered so
>>>>>>>>>>far. Maybe a reduction of 2 (instead of 1) in case of a fail-high report will
>>>>>>>>>>work better in programs with heavy extensions and quiescence.
>>>>>>>>>
>>>>>>>>>A reduction of 20% seems to work best in XiniX (heavy qsearch).
>>>>>>>>
>>>>>>>>What do you mean by 20%? (Did you use a reduction of 1 or 2 in case of a
>>>>>>>>fail-high report?)
>>>>>>>
>>>>>>>In case of a fail high I reduce the depth by 20%. (Doesn't work in your silly
>>>>>>>program :)
>>>>>>>
>>>>>>
>>>>>>Anyway, no matter what the reduction is, you are using verified null-move
>>>>>>pruning, which is good :-) In my paper I just gave a new null-move pruning
>>>>>>framework; feel free to play with the values that best fit your program.
>>>>>
>>>>>It's a no-brainer to implement. If it's not bad then it's worth investigating.
>>>>>
>>>>>>
>>>>>>Even better values do exist. I've been working on them for some time and will
>>>>>>publish them in the near future.
>>>>>
>>>>>If I might give some advice: for first reviews, send it to some active
>>>>>chess programmers, and not to academic has-beens. It will save you a lot of
>>>>>typing. (You have been quite active on this forum lately.)
>>>>>
>>>>
>>>>I will. However, after posting a new method, several days of "heavy presence"
>>>>will always be needed to clear things up and answer the questions...
>>>
>>>Yes, that's good.
>>>
>>>I just think that sending your algorithm to Robert (I think we were using this in
>>>'74), Gian Carlo (can't work, just implemented it in 3 seconds) and me (didn't
>>>crash my program, must be a good idea) would have given you some "real"
>>>input.
>>>
>>
>>Yes, I agree. At least I would have changed my overly optimistic comment "it can
>>be implemented by merely adding a few lines of code"! It is a few lines of
>>pseudo-code, but in practice it needs some more tuning to find the best
>>reduction values...
>>
>>
>
>It wasn't really "too optimistic".  But it is probably easier for me than
>most, as I am already using internal iterative deepening, which means I can
>call search recursively for the same position/ply, without wrecking things.
>
>That simplifies the code in that I don't do the little block of verification
>code plus a goto that you used, I simply recursively call search when the
>null-move search fails high and the "ok-to-verify" flag is set.  However, to
>be efficient, I wanted to modify my "do_null" flag I pass to each successive
>search() call, so that it would include "do_null & verify" to avoid adding a
>new search parameter.  That was a bit messier than I originally thought, and
>that is the only thing that caused my initial attempt to produce errors: at
>times I simply blew the two flags out and did extra verification searches
>when I shouldn't have.
>
>BTW, when I get this working, I have a huge cluster of quad xeons.  I can
>certainly re-run all your test positions with Crafty, and let 'em run 32 at
>a time to depth 12, 13, and 14 to see what happens...  If you want to send me
>your positions, I should probably re-run the 9, 10 and 11 tests as well to get
>a baseline comparison between crafty and your program also...  but I have plenty
>of machine time to run deeper tests...
>

Testing this idea at depths of 12 and beyond will shed more light on the
performance of this algorithm in deeper searches. Because of its simple data
structures, Genesis is a slow searcher (about 100K NPS on my 733 MHz machine),
so I couldn't afford searches deeper than 11 plies. It would be great if you
could run deeper tests with Crafty.

All my test suites are available at
http://www.cs.biu.ac.il/~davoudo/pubs/vrfd_null.html



>
>
>
>>>Tony
>>>
>>>>
>>>>
>>>>>BTW, last time we mailed I concluded your last name was David; where does the
>>>>>Tabibi come from?
>>>>>
>>>>
>>>>My full name has always been Omid David Tabibi. But I usually use just David as
>>>>my last name on informal occasions. After ICGA put my name as "O.D. Tabibi" on
>>>>their cover, I thought it would be a good idea to use my full name here to avoid
>>>>confusion.
>>>>
>>>>
>>>>>Tony
>>>>>
>>>>>>
>>>>>>
>>>>>>>In XiniX I have partial extensions (PLY is 32).
>>>>>>>The addition to your idea is to give big reductions when there is still a lot of
>>>>>>>search depth remaining. So, for example, when there are 12 plies left I give more
>>>>>>>reduction than when there are 6 plies left (with a minimum of 1 ply). That's
>>>>>>>6*0.2 = 1.2 ply more. For XiniX that seems to make the difference between a good
>>>>>>>and a bad new idea.
>>>>>>>
>>>>>>>Tony
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>>I'm
>>>>>>>>>interested in your idea. It's commented out in my program now, but not deleted.
>>>>>>>>>I still have to play with it some more.
>>>>>>>>>
>>>>>>>>>Despite the negative comments you had, I don't think it's a bad idea. I'm
>>>>>>>>>just not convinced yet it's a good one.
>>>>>>>>>
>>>>>>>>
>>>>>>>>It took me several months of experiments to get convinced. After a little more
>>>>>>>>tuning and playing with different reduction values (1 or 2), I believe you will
>>>>>>>>be convinced too ;-)
>>>>>>>>
>>>>>>>>
>>>>>>>>>Tony
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>>Tony
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>    Christophe


