Computer Chess Club Archives



Subject: Re: new thoughts on verified null move

Author: Tony Werten

Date: 18:50:01 11/23/02



On November 23, 2002 at 21:24:08, Omid David Tabibi wrote:

>On November 23, 2002 at 21:09:36, Tony Werten wrote:
>
>>On November 23, 2002 at 20:52:01, Omid David Tabibi wrote:
>>
>>>On November 23, 2002 at 20:00:15, Tony Werten wrote:
>>>
>>>>On November 23, 2002 at 11:11:16, Christophe Theron wrote:
>>>>
>>>>>On November 23, 2002 at 09:22:37, jefkaan wrote:
>>>>>
>>>>>>oops, wasn't finished yet..
>>>>>>
>>>>>>>are done by using the results of the positional eval
>>>>>>>to prune the q-search,
>>>>>>and there using only material eval
>>>>>> (haven't tried it out yet, and wouldn't
>>>>>>know how to do it, but it's only an idea,
>>>>>>you know.. to explore options of
>>>>>>more effective branching factor reductions
>>>>>>and efficient programming (besides
>>>>>>lousy solutions like inline assembler
>>>>>>and bitboards..
>>>>>>:)
>>>>>
>>>>>
>>>>>
>>>>>Yes, Chess Tiger does much more pruning than the known (published) techniques.
>>>>>
>>>>>I think other top programs do it also.
>>>>>
>>>>>I still fail to see why the efficiency of an algorithm depends on what your
>>>>>QSearch does.
>>>>>
>>>>>If your pruning algorithm is good, it will increase the strength of the program
>>>>>regardless of how good your QSearch is.
>>>>>
>>>>>If your QSearch is smart, then it will increase the strength even more.
>>>>>
>>>>>I don't like the idea that some algorithms that have almost nothing to do with
>>>>>each other would have such an influence on each other. It is indeed possible and
>>>>>it probably happens all the time, but it's hard to work with such a hypothesis in
>>>>>mind.
>>>>>
>>>>>I think it's better to first assume that the kind of QSearch you do will not
>>>>>interfere with the quality of the pruning algorithm used before the QSearch.
>>>>>
>>>>>If your QSearch sucks, it's not because you are doing a lot of pruning in the
>>>>>"full width" part of the search. It's because it sucks.
>>>>
>>>>The paper does prove that the more your (q)search sucks, the better your pruning
>>>>algorithm seems. But that's not really news.
>>>>
>>>
>>>Does it prove that?! No, it's just my impression based on the data gathered so
>>>far. Maybe a reduction of 2 (instead of 1) in case of a fail-high report will
>>>work better in programs with heavy extensions and quiescence.
>>
>>A reduction of 20% seems to work best in XiniX (heavy qsearch).
>
>What do you mean by 20%? (did you use a reduction of 1 or 2 in case of a
>fail-high report?)

In case of a fail high I reduce the depth by 20%. (Doesn't work in your silly
program :) In XiniX I have partial extensions (PLY is 32).

The addition to your idea is to give big reductions when there is still a lot of
search depth remaining. So e.g. when there are 12 plies left I give a bigger
reduction than when there are 6 plies left (with a minimum of 1 ply). That's
6*0.2 = 1.2 ply more. For XiniX that seems to make the difference between a good
and a bad new idea.
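Roughly, the fail-high handling looks like the sketch below. This is just
pseudo-C to show the idea: none of these names (search, make_null_move, and
so on) are actual XiniX code, and all the engine plumbing is assumed stubs.

    /* Sketch of verified null move with a depth-proportional
     * verification reduction. Depth is counted in fractional plies,
     * PLY = 32. Every helper here is an assumed stub. */

    #define PLY    32          /* one full ply in fractional units */
    #define NULL_R (2 * PLY)   /* standard null-move depth reduction */

    typedef struct Position Position;

    int  qsearch(Position *pos, int alpha, int beta);
    int  search_moves(Position *pos, int alpha, int beta, int depth, int verify);
    int  null_move_allowed(const Position *pos);
    void make_null_move(Position *pos);
    void unmake_null_move(Position *pos);

    int search(Position *pos, int alpha, int beta, int depth, int verify)
    {
        if (depth < PLY)
            return qsearch(pos, alpha, beta);

        if (null_move_allowed(pos)) {
            make_null_move(pos);
            int score = -search(pos, -beta, -beta + 1,
                                depth - PLY - NULL_R, verify);
            unmake_null_move(pos);

            if (score >= beta) {
                if (!verify)
                    return score;      /* plain null-move cutoff */

                /* Verified variant: don't cut off, search on at
                 * reduced depth. Reduction is 20% of the remaining
                 * depth, minimum one ply: roughly 2.4 plies off when
                 * 12 plies are left, 1.2 plies off when 6 are left. */
                int r = depth / 5;     /* 20% of remaining depth */
                if (r < PLY)
                    r = PLY;
                depth -= r;
                verify = 0;            /* verify only once per path */
            }
        }

        /* regular full-width move loop at the (possibly reduced) depth */
        return search_moves(pos, alpha, beta, depth, verify);
    }

(How the verify flag gets switched back on deeper in the tree is elided
here; your paper has the full scheme.)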

Tony

>
>
>>I'm
>>interested in your idea. It's commented out in my program now, but not deleted.
>>I still have to play with it some more.
>>
>>Despite the negative comments you had, I don't think it's a bad idea. I'm
>>just not convinced yet it's a good one.
>>
>
>It took me several months of experiments to get convinced. After a little more
>tuning and playing with different reduction values (1 or 2), I believe you will
>be convinced too ;-)
>
>
>>Tony
>>
>>
>>
>>>
>>>
>>>>Tony
>>>>
>>>>>
>>>>>
>>>>>
>>>>>    Christophe


