Computer Chess Club Archives



Subject: Re: Verified Null-moving

Author: Peter Fendrich

Date: 07:16:33 08/12/04



On August 12, 2004 at 09:28:24, Tord Romstad wrote:

>On August 12, 2004 at 08:42:28, martin fierz wrote:
>
>>>hi tord,
>>>
>>>i'm intrigued by your last sentence - that null move at all nodes more than
>>>doubles your node count! i wonder how this is possible at all? why? if i
>>>understand what you write you have an oracle which can tell you whether you
>>>should do a null-move search at a given node or not.
>>>
>>>int alphabeta(....)
>>
>>grrr, hitting tab-return is not such a good idea :-)
>
>Happens to me too, sometimes.  :-)
>
>>let me continue: you have
>>
>>int alphabeta(...)
>>  {
>>  ...
>>  if(null_oracle(position) == NULL_OK)
>>    do_nullmove_search()
>>  ...
>>  }
>
>Almost, but there is one important part missing above:  I return a
>fail high score *before* doing the null move search when I am very
>sure of a fail high.  It's more like this:
>
>int alphabeta(...)
>{
>  if(almost_certainly_fail_high(position, beta))
>    return beta;
>  else if(probably_fail_high(position, beta))
>    do_nullmove_search()
>  ...
>}


Mine is more like this:
  if (known_fail_high(position, beta))   // from hash, quick endgame recognizers etc.
     return beta;
  else if (!probably_fail_low(position, alpha))
     do_nullmove_search();
  ...

I can't figure out how to implement almost_certainly_fail_high in my engine, which
doesn't evaluate interior nodes. There are a few endgame cases that I handle as
known_fail_high even though they are really only (very) probably_fail_high.
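
A minimal sketch in C of the decision described above, assuming hypothetical
helper names (this is not the actual code of either engine): known_fail_high()
fails high only on certain evidence such as a hash-table lower bound of at least
beta or a simple endgame recognizer, and the null-move search is skipped only
when the node will probably fail low anyway.

  typedef struct Position Position;                 /* engine-specific */

  /* Assumed helpers -- every name here is hypothetical: */
  int hash_lower_bound_ge(const Position *pos, int beta);
  int endgame_win_recognized(const Position *pos, int beta);
  int probably_fail_low(const Position *pos, int alpha);
  int null_move_search(Position *pos, int beta, int depth);  /* R = 3 inside */

  /* Fail high only on "certain" evidence: hash bound or endgame recognizer. */
  static int known_fail_high(const Position *pos, int beta)
  {
    return hash_lower_bound_ge(pos, beta) || endgame_win_recognized(pos, beta);
  }

  /* Returns 1 and sets *score when the node can be cut off without a normal
     search; otherwise the caller falls through to the regular move loop. */
  int null_move_decision(Position *pos, int alpha, int beta, int depth, int *score)
  {
    if (known_fail_high(pos, beta)) {
      *score = beta;                                /* immediate fail high */
      return 1;
    }
    if (!probably_fail_low(pos, alpha)) {
      *score = null_move_search(pos, beta, depth);
      if (*score >= beta)
        return 1;                                   /* null move refuted the position */
    }
    return 0;
  }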

>>if you remove that null_oracle thing you get twice as many nodes?
>
>If I remove the almost_certainly_fail_high() and probably_fail_high()
>functions (I don't have functions with these two exact names, but you get
>the idea), I search twice as many nodes.
>
>>i don't
>>understand how that can happen. your null_oracle would basically be telling you
>>not to do a null move search if it thinks it will fail low anyway. so for all
>>nodes where you don't have to do a null move search with R=3 you have to do a
>>normal search. you save yourself searching the R=3 part, but that should be very
>>small compared to the normal search part, shouldn't it?
>>
>>what am i missing?
>
>That recursive null move pruning is not the only kind of pruning I do.
>At fail low nodes, most of the moves are not searched with full depth.
>When the first 3 moves have failed low, I search all remaining moves with
>reduced depth except when they look especially interesting or forcing.
>If a move searched with reduced depth surprisingly turns out to fail high,
>it is re-searched with full depth.

Here too I do something similar, but 3 is not a constant. It varies with how many
hash moves, killer moves, good captures etc. I have.
I have also tested varying the depth reduction depending on whether move[ply-2],
move[ply-4] etc. are all in this category (probably_fail_lows): reduce the depth
less at the start of such a path and more if the path still consists only of
probably_fail_lows. However, it didn't give any improvement at all, probably
because I mix this with extensions as well and can't sort out in my mind how all
of this interacts together...
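
A minimal sketch of the kind of reduction scheme described above, with the
variable full-depth threshold; every helper name is an assumption and the code
is not taken from either engine. The first few moves (plus the trusted ordering
moves) get full depth, later quiet-looking moves are reduced by one ply, and a
surprising fail high is re-searched at full depth.

  /* Assumed engine primitives -- all hypothetical: */
  typedef struct Position Position;
  typedef int Move;
  int  num_moves(const Position *pos);
  Move get_move(const Position *pos, int i);
  void make_move(Position *pos, Move m);
  void unmake_move(Position *pos, Move m);
  int  evaluate(const Position *pos);
  int  count_trusted_moves(const Position *pos);    /* hash move, killers, good captures */
  int  interesting_or_forcing(const Position *pos, Move m);

  int search(Position *pos, int alpha, int beta, int depth)
  {
    if (depth <= 0)
      return evaluate(pos);                         /* stand-in for quiescence */

    /* The full-depth threshold is not a constant 3 but depends on how many
       trusted moves the move ordering produced for this node. */
    int full_moves = 3 + count_trusted_moves(pos);

    for (int i = 0; i < num_moves(pos); i++) {
      Move m = get_move(pos, i);
      int new_depth = depth - 1;

      /* Reduce late, quiet-looking moves at an expected fail-low node. */
      if (i >= full_moves && !interesting_or_forcing(pos, m))
        new_depth--;

      make_move(pos, m);
      int score = -search(pos, -beta, -alpha, new_depth);

      /* A reduced move that surprisingly fails high is re-searched at
         full depth before its score is trusted. */
      if (score >= beta && new_depth < depth - 1)
        score = -search(pos, -beta, -alpha, depth - 1);

      unmake_move(pos, m);

      if (score >= beta)
        return beta;                                /* fail high */
      if (score > alpha)
        alpha = score;
    }
    return alpha;
  }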

>For some reason, most of the published research on selective search
>mechanisms has concentrated on ways to reduce the work at fail-high
>nodes.  Nullmove pruning, Multicut pruning and ProbCut are a few examples.
>Pruning techniques for fail-low nodes are much less commonly seen.  The only
>ones I can think of is futility pruning and razoring, but these are too
>dangerous to use except when the remaining depth is very small.
>
>Reducing the search depth towards the end of the move list is my own
>attempt to do selective search at expected fail-low nodes.  It is
>probably possible to invent better techniques (and it wouldn't surprise
>me if it has already been done in some of the professional engines), but
>at least in my engine, what I do works far better than nothing.

Once again, with an evaluation at all nodes you have a better foundation for
move ordering and for pruning accordingly, compared to me taking big risks by
relying only on statistical observations in the search tree!
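
For comparison, a minimal sketch of the frontier-node futility pruning mentioned
in the quote above; the margin and the helper names are assumptions, not taken
from any engine discussed here. With one ply of remaining depth a quiet move is
skipped when even the static evaluation plus a generous margin cannot reach
alpha, which is why the technique is only considered safe at very small
remaining depth.

  typedef struct Position Position;                 /* engine-specific, as above */
  typedef int Move;

  #define FUTILITY_MARGIN 200                       /* about two pawns; engine-dependent */

  /* Assumed helpers -- hypothetical names: */
  int static_eval(const Position *pos);
  int is_capture(const Position *pos, Move m);
  int gives_check(const Position *pos, Move m);
  int is_promotion(Move m);

  /* Returns 1 when the move can be skipped at a frontier node. */
  int futile(const Position *pos, Move m, int depth, int alpha)
  {
    if (depth != 1)                                 /* only at the horizon */
      return 0;
    if (is_capture(pos, m) || gives_check(pos, m) || is_promotion(m))
      return 0;                                     /* never prune tactical moves */
    return static_eval(pos) + FUTILITY_MARGIN <= alpha;
  }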

/Peter

>Tord


