# Computer Chess Club Archives

## Messages

### Subject: When to do a null move search - an experiment

Date: 03:31:58 04/26/04

```Last week, there was an interesting discussion on this forum about which
criteria should be used to decide whether or not to do a null move search.
In an attempt to find out what works best for my engine, I played a self-play
tournament between four slightly different versions of my engine.  The
following pseudo-code explains the null-move criteria used by each of
the four versions:

/* Gothmog A: */
int ok_to_do_nullmove() {
  if(in_check(SideToMove)) return 0;
  if(static_eval - value_of_biggest_hanging_piece(SideToMove) >= beta)
    return 1;
  return 0;
}

/* Gothmog B: */
int ok_to_do_nullmove() {
  if(in_check(SideToMove)) return 0;
  if(static_eval - value_of_biggest_hanging_piece(SideToMove)
     + value_of_biggest_hanging_piece(Opponent) >= beta)
    return 1;
  return 0;
}

/* Gothmog C: */
int ok_to_do_nullmove() {
  if(in_check(SideToMove)) return 0;
  else return 1;
}

Gothmog D is similar to Gothmog C, except that when the remaining depth
is 4 plies or more, a null move search with the depth reduced to zero
is done before the real null move search, and the real null move search
is avoided if the zero-depth search fails low.
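In pseudo-code, Gothmog D's extra verification step looks roughly like
this (null_move_search() is a stand-in for the engine's actual search
call, and OnePly for the depth unit; neither name is from the real code):

/* Gothmog D: */
int ok_to_do_nullmove(int depth) {
  if(in_check(SideToMove)) return 0;
  if(depth >= 4*OnePly) {
    /* Cheap probe: a null move search with the depth reduced to zero.
       If even this fails low, skip the real null move search. */
    if(null_move_search(0) < beta) return 0;
  }
  return 1;
}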

The criterion in Gothmog A is the one I have always used in the public
versions of Gothmog.  Our discussion last week seemed to indicate that
most others use the simpler criterion in Gothmog C.

In my tournament, I let all four versions face each other 50 times.  The
results:

             Gothmog A   Gothmog B   Gothmog C   Gothmog D        Sum
Gothmog A      XXXX        26.5        31.0        29.5        87.0/150
Gothmog B      23.5        XXXX        30.5        26.0        80.0/150
Gothmog C      19.0        19.5        XXXX        31.5        70.0/150
Gothmog D      20.5        24.0        18.5        XXXX        63.0/150

It is interesting that the version with the most restrictive criterion
achieves the highest score.  This confirms my growing suspicion that
recursive null move pruning doesn't work nearly as well as most people
believe, and that it might be a good idea to make an effort to reduce
its use as much as possible.  I will try to experiment with even more
restrictive criteria (perhaps static_eval - biggest_hanging >= beta + margin)
and see whether that improves the strength further.
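In pseudo-code, that stricter criterion would look something like this
(the margin value below is just a placeholder, not a tuned number):

/* Gothmog A with an extra safety margin: */
int ok_to_do_nullmove() {
  const int margin = 100;  /* placeholder value; needs tuning */
  if(in_check(SideToMove)) return 0;
  if(static_eval - value_of_biggest_hanging_piece(SideToMove) >= beta + margin)
    return 1;
  return 0;
}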

The number of games is of course still not large enough to draw any
definitive conclusions.  If I find the time, I will add more games and
more versions to the tournament.

Tord

```