Computer Chess Club Archives


Subject: Re: Verified Null-moving

Author: Tord Romstad

Date: 08:48:35 08/12/04


On August 12, 2004 at 10:16:33, Peter Fendrich wrote:

>On August 12, 2004 at 09:28:24, Tord Romstad wrote:
>>
>>Almost, but there is one important part missing above:  I return a
>>fail high score *before* doing the null move search when I am very
>>sure of a fail high.  It's more like this:
>>
>>int alphabeta(...)
>>{
>>  if(almost_certainly_fail_high(position, beta))
>>    return beta;
>>  else if(probably_fail_high(position, beta))
>>    do_nullmove_search()
>>  ...
>>}
>
>
>Mine is more like this:
>  if (known_fail_high(position, beta))  //from hash, quick_endgame_recogn etc
>     return beta;
>  else if (!probably_fail_low(position, alpha))
>     do_nullmove_search()
>  ...
>  ...
>
>I can't see how to get almost_certainly_fail_high in my engine, which doesn't
>evaluate interior nodes. There are a few endgame cases that I handle as
>known_fail_high even though they are really only (very) probably_fail_high.

A simple addition which may or may not work for you is to do a preparatory
null move search with the depth reduced all the way to 0 before you do
the full null move search with depth reduced by R+1.  If the first null
move fails low, you skip the second.

Of course, this should only be done if the remaining depth is big.

>>At fail low nodes, most of the moves are not searched with full depth.
>>When the first 3 moves have failed low, I search all remaining moves with
>>reduced depth except when they look especially interesting or forcing.
>>If a move searched with reduced depth surprisingly turns out to fail high,
>>it is re-searched with full depth.
>
>Also here I do something similar, but 3 is not a constant. It varies with how
>many hash moves, killer moves, good captures etc. I have.

I also don't unconditionally reduce all moves after the 3rd.  All the move
types you mention above are always searched with full depth, even when
they are further down in the move list.

>I have also tried varying the depth reduction depending on whether move[ply-2],
>move[ply-4] etc. are all in this category (probably_fail_lows): reduce depth
>less in the beginning and reduce it more when still in a path with only
>probably_fail_lows. However, it didn't give any improvement at all, probably
>because I mix this with extensions as well and can't sort out in my mind how
>all this interacts together...

I have made similar experiments, but like you, I wasn't able to find anything
which worked better than the simple scheme.

>>For some reason, most of the published research on selective search
>>mechanisms has concentrated on ways to reduce the work at fail-high
>>nodes.  Nullmove pruning, Multicut pruning and ProbCut are a few examples.
>>Pruning techniques for fail-low nodes are much less commonly seen.  The only
>>ones I can think of are futility pruning and razoring, but these are too
>>dangerous to use except when the remaining depth is very small.
>>
>>Reducing the search depth towards the end of the move list is my own
>>attempt to do selective search at expected fail-low nodes.  It is
>>probably possible to invent better techniques (and it wouldn't surprise
>>me if it has already been done in some of the professional engines), but
>>at least in my engine, what I do works far better than nothing.
>
>Once again, with evaluation at all nodes you have a better foundation for doing
>move ordering and prune according to that compared to me that take big risks by
>just relying on statistical observations in the search tree!

Evaluating internal nodes helps, of course.  But I also consider statistical
observations, like the history counters for moves (moves with good history
are never reduced).  This has the interesting consequence that even tiny
changes to my move ordering often result in some tactics being found
one ply (or more!) earlier or later.

Tord




Last modified: Thu, 07 Jul 11 08:48:38 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.