Computer Chess Club Archives



Subject: Re: Null move reductions

Author: Don Dailey

Date: 12:38:32 10/05/98


On October 04, 1998 at 19:31:04, Ernst A. Heinz wrote:

>On October 04, 1998 at 19:06:15, Roberto Waldteufel wrote:
>>
>>Hi all,
>>
>>I wonder what reductions various programs use for the null move. I reduce by 2
>>plies, but I believe a one-ply reduction may be more usual. However, I have
>>found R=2 produces quite good results in my program. I would like to hear of
>>others' experiences.
>
>Below I quote again from my article about "How DarkThought Plays Chess" as
>published in the ICCA Journal 20(3) and electronically available from the WWW
>page of "DarkThought" at URL <http://wwwipd.ira.uka.de/Tichy/DarkThought/>.
>
>*******************************************************************************
>
>Search Parameterization
>
>[...]
>
>Null-Move Search.
>      Generally regarded as both a curse and a blessing of
>      microcomputer-based chess programs, the realm of null-move
>      pruning provides ample opportunities for many improvements (e.g. by
>      parameter fine-tuning). In DARKTHOUGHT, null moves may be
>      layered (i.e. disabled in either the upper or lower parts of the search
>      tree up to a predefined depth threshold), made non-recursive or even
>      switched off completely. The first DARKTHOUGHT delayed null moves
>      until depths  <= 5 after Don Dailey (of SOCRATES and STARSOCRATES
>      fame) enthused about this scheme during the 8th WCCC in Hong
>      Kong, 1995.
>
>      Like many other chess programs, DARKTHOUGHT currently uses a
>      depth-reduction factor of 2 for null-move searches. But other values,
>      especially depth-adaptive ones in combination with various material
>      thresholds for disabling the null move in endgames, are constantly
>      experimented with. The same holds for depth and score margin of the
>      deep-search extension which default to 2 and the value of 3 Pawns
>      respectively.
>
>[...]
>
>*******************************************************************************
>
>=Ernst=
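
For context, a plain recursive null-move test with a depth reduction
of R=2 (the scheme quoted above) looks roughly like the sketch
below.  All the routine names are illustrative only, not
DARKTHOUGHT's or anyone else's actual code:

    /* Standard null-move test with depth reduction R = 2.
       Returns nonzero if the node fails high and can be pruned. */
    int null_move_cutoff(Position *pos, int depth, int beta)
    {
        int score;

        if (depth <= 2 || in_check(pos) || !has_pieces(pos))
            return 0;       /* positions where a null move is unsafe */

        make_null_move(pos);        /* give the opponent a free move */
        /* zero-width window around beta, searched 2 plies shallower */
        score = -search(pos, depth - 1 - 2, -beta, -beta + 1);
        undo_null_move(pos);

        return score >= beta;   /* still above beta: prune the node */
    }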

Another idea I have experimented with is "cheating" on the null move
test.  It works like this: if your static evaluator (I sometimes call
this the stand pat score) returns a score above beta at an internal
node, the node is a candidate for selectivity.  In this case, instead
of doing a null move search with a zero-width window around beta, you
can do one with a window around beta-n, where n is some small
positional margin.  The idea is that you are more likely to pass this
null move test, and the end result is that your program will be much
more selective.  Don't forget you must return a score greater than
beta; I just return the stand pat score, which I know is above beta
or I would not be doing the test.  You can experiment with various
margins or make them change dynamically based on depth.
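
In code, the change from the standard test above is just the bound
you test against and the score you return.  A rough sketch (the
margin value and all routine names are again just for illustration):

    /* "Cheating" null-move test: test against beta - n instead of
       beta, where n is a small positional margin. */
    int lazy_null_cutoff(Position *pos, int depth, int beta,
                         int stand_pat)
    {
        const int n = 50;      /* e.g. half a pawn, in centipawns */
        int bound = beta - n;
        int score;

        if (stand_pat <= beta)   /* only a candidate if eval > beta */
            return 0;

        make_null_move(pos);
        score = -search(pos, depth - 1 - 2, -bound, -bound + 1);
        undo_null_move(pos);

        /* Passed the easier test.  The caller must return a score
           above beta -- the stand pat score qualifies. */
        return score >= bound;
    }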

The end result?  A VERY fast (and extremely selective) program.  In
practice you will be able to solve tactics at the same depth (but
much faster), but you will miss more positional maneuvers.  I have
done very thorough testing of various margins and have found this to
be a wash, so to speak.  I cannot prove the program is weaker or
stronger.

But I present this idea for those (if any) who have not thought of
it and want to experiment with it.  I view it as another knob to turn
in your quest for a better chess program.

By the way, I have chosen not to use this technique in Cilkchess.
In principle it should not matter since, as I said, you cannot prove
to me it plays worse (or better).  But my own psychological bias
tends to make me prefer to err on the side of covering all the
bases.

If anyone has any experiences or opinions on this, I would love to
hear about it.

----------------

I might as well explain something else I do that has always proven
to be a benefit to my programs.  I have even tried to throw it away
but am forced to bring it back since it is clearly an improvement.
Near the leaf nodes of the tree, but still in the main search (not
quiescence), it has always paid off handsomely to not use the more
dynamic null move selectivity.  Instead I use my own homemade static
routine that essentially does the same thing.  For programs that
have a static exchange evaluator this would be a no-brainer
implementation.  In my program, I first determine if the static
(stand pat) score is above beta and again consider this a candidate
for selectivity.  Then I determine statically the attack potential
of the enemy army.  This determination is based on what is directly
attacked and what is defending it.  In fact it is not very
sophisticated at all and simply errs on the side of being
conservative.  For instance, it is much less sophisticated than Bob
Hyatt's SEE routine.

Here are a few of the considerations (a rough sketch in code follows
the list):

  1) If a piece is up-attacked, assume you will "get back" 2/3 of
     that piece's value.  (You can experiment with the ratio.)

  2) If a piece is attacked by a greater-valued piece but defended
     by a pawn, assume it is not attacked at all.

  3) If a piece is attacked by a piece of equal value, count the
     attackers and defenders.  If there are more attackers, assume
     2/3 of the piece's value is lost.

  4) Don't forget to consider a pawn on the 7th as a big attack.  I
     think we even treat a pawn on the 7th as not being an attack
     if the square in front of it is occupied.
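
A threat detector built from just these four rules might look
something like the following.  The board and attack primitives
(piece_on, lowest_attacker_value, and so on) are made-up names for
whatever your program provides, and a real detector would cover more
cases than these:

    /* Conservative static threat estimator implementing only the
       four rules above.  Returns the enemy's threat value in
       centipawns. */
    int static_threats(const Position *pos, int enemy)
    {
        int us = opponent_of(enemy);
        int sq, threat = 0;

        for (sq = 0; sq < 64; sq++) {
            int pc = piece_on(pos, sq);
            int v, att;

            if (pc == EMPTY || color_of(pc) != us)
                continue;                   /* only our own pieces */

            v   = piece_value(pc);
            att = lowest_attacker_value(pos, sq, enemy);
            if (att == 0)
                continue;                   /* not attacked at all */

            if (att < v) {
                /* 1) up-attacked: assume the enemy gets back 2/3
                      of the piece's value (the ratio is tunable) */
                threat += 2 * v / 3;
            } else if (att > v && defended_by_pawn(pos, sq, us)) {
                /* 2) attacked by a greater-valued piece but
                      defended by a pawn: treat as not attacked */
                continue;
            } else if (att == v &&
                       count_attackers(pos, sq, enemy) >
                       count_attackers(pos, sq, us)) {
                /* 3) equal-value attacker, more attackers than
                      defenders: assume 2/3 of the piece is lost */
                threat += 2 * v / 3;
            }
        }

        /* 4) a pawn on the 7th is a big threat, unless the square
              in front of it is occupied */
        threat += seventh_rank_pawn_threat(pos, enemy);

        return threat;
    }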

There are lots of opportunities here for making assumptions and
playing with the degree of risk you are willing to accept.  For
instance, the 2/3 figure could be tweaked or even changed
dynamically, or you could use a more thorough SEE evaluator, etc.
You can also do some static mate threat testing if you want to
extend the idea, but it is not necessary (for us) to get a nice
improvement.

So basically I apply this static test and compare the attack value
of the opponent's position to our distance from beta.  Here is an
example:

In a white-to-move position, the program calls the static evaluation
function and notices that white is 3 pawns greater than beta.
Presumably white has a great position and would like to stop
searching.  But first we want to know if black has any substantial
threats.  We call the threat detector routine (as described above)
and notice that white has its queen directly attacked by a bishop.
If the queen has a value of 9 pawns, then 2/3 of this is 6 pawns.
Because black's threat value is 6 pawns and we are only 3 pawns
above beta, we conclude that the position is too dangerous to
stop in.  If a minor piece had been attacked instead of the queen,
we probably would have taken the cutoff.
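
In code, the decision in that example boils down to something like
this, using the sketch above (again, all names are illustrative):

    /* Near-leaf pruning decision: stand pat only if the enemy's
       static threats fit inside our margin above beta. */
    int eval = static_eval(pos);                /* stand pat score */
    if (eval > beta) {
        int threat = static_threats(pos,
                                    opponent_of(side_to_move(pos)));
        if (eval - threat > beta)
            return eval;       /* threats covered: take the cutoff */
        /* otherwise too dangerous to stop here: keep searching */
    }

With the queen attacked, the threat is 6 pawns against a margin of
only 3, so there is no cutoff; with only a minor piece attacked, the
threat is 2 pawns and the cutoff stands.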

Our program still uses null move searching predominantly.  It turns
out that doing 2 ply of this static stuff seems to be the right
number for us (at the last two levels near the leaf nodes of the
main search).  If we use more than 2, the program weakens; if we
use 1 ply of it, it's just slightly weaker; and if we do not do
this at all and use pure null move searching all the way to the
end, the program is clearly weaker.
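
As a sketch, the layering amounts to something like this inside the
main search (the threshold of 2 plies is our number from above; the
routine names are illustrative):

    /* Static pruning in the last 2 plies of the main search,
       recursive null move (R = 2) everywhere above that. */
    if (depth <= 2) {
        if (static_prune_ok(pos, beta))     /* rules sketched above */
            return static_eval(pos);
    } else {
        if (null_move_cutoff(pos, depth, beta))
            return beta;
    }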

Our old Socrates serial program that won the '93 ICCA international
did not use null move at all.  We did 4 ply of this stuff, all
statically based.  We had more sophisticated rules and looked at
certain kinds of mate threats, ones where the king was on the edge,
and also did some passed pawn analysis (pawns on the 6th).  Even
though we only did 4 ply of selectivity and did not do null move
pruning, it is the fastest program (in terms of iteration depth and
speed) I've ever written.  I'm still wondering whether null move
pruning (as wonderful as it is) is the final say.  It is certainly
not the only way to write a very strong chess program.

- Don


