Computer Chess Club Archives



Subject: Re: Null move reductions

Author: Roberto Waldteufel

Date: 19:28:24 10/05/98




On October 05, 1998 at 15:38:32, Don Dailey wrote:


>
>Another idea I have experimented with is "cheating" on the null move
>test.  It works like this:   If your static evaluator (sometimes I
>call this the stand pat score), returns a score above beta at an
>internal node, the move is a candidate for selectivity.
>In this case, instead of doing a
>null move search with a zero width window around beta, you can
>do one with a window of beta-n  where n is some small positional
>margin.  The idea is that you are more likely to pass this null
>move test and the end result is that your program will be much more
>selective.  Don't forget you must return a score greater than
>beta; I just return the stand pat score, which I know is above
>beta or I would not be doing the test.  You can experiment with
>various margins or make them change dynamically based on depth.
>
>The end results?  A VERY fast (and extremely selective) program.  In
>practice you will be able to solve tactics on the same depth (but
>much faster)  but you will miss more positional maneuvers.  I have
>done very thorough  testing of various margins and have found this
>a wash so to speak.  I cannot prove the program is weaker or stronger.
>
>But I present this idea for those (if any) who have not thought of
>it and want to experiment with it.  I view it as another knob to turn
>in your quest for a better chess program.
>
>By the way,  I have chosen not to use this technique in Cilkchess.
>In principle it should not matter since as I said you cannot prove
>to me it plays worse (or better.)   But my own psychological bias
>tends to make me prefer to err on the side of covering all the
>bases.
>
>If anyone has any experiences or opinions on this, I would love to
>hear about it.
>

In your experiments, what values did you use for n, and how (if at all) did n
vary with depth?
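For concreteness, here is how I picture the widened test working, as a Python sketch. All the names (`static_eval`, `null_move_search`, `N_MARGIN`) are my own placeholders, not your actual code:

```python
# Sketch of the "cheating" null-move test: instead of a zero-width
# search around beta, search against the lowered bound beta - n,
# which is easier to pass (more selective, riskier).

N_MARGIN = 50  # illustrative margin n, in centipawns; could vary with depth

def try_null_move_cutoff(pos, beta, depth, static_eval, null_move_search):
    """Return the stand pat score if the widened null-move test passes,
    else None (meaning: continue the normal search)."""
    stand_pat = static_eval(pos)
    if stand_pat < beta:
        return None                          # not a candidate for selectivity
    # Null-move search with a window around beta - n instead of beta.
    score = null_move_search(pos, beta - N_MARGIN, depth)
    if score >= beta - N_MARGIN:
        # Return a score above beta: the stand pat score qualifies,
        # since we already checked it exceeds beta.
        return stand_pat
    return None
```

The returned stand pat score is what makes the cutoff legal for the caller, since a fail-high must report a score of at least beta.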

>----------------
>
>I might as well explain something else I do that has always proven
>to be a benefit to my programs.  I have even tried to throw it away
>but am forced to bring it back since it is clearly an improvement.
>Near leaf nodes of the tree, but still in the main search (not quiescence)
>it has always paid off handsomely to not use the more dynamic null
>move selectivity.  Instead I use my own homemade static routine that
>essentially does the same thing.  For programs that have a static
>exchange evaluator this would be a no-brainer implementation.  In
>my program, I first determine if the static (stand pat) score is
>above beta

Do you do a fast evaluation, or do you do the full positional score here? Maybe
you could do a very fast rough evaluation based on material and only evaluate
the positional score if the margin for error is small, i.e. fast eval - opponent's
threat eval = beta + delta, where delta is the margin for error. If delta turns out
to be large, then the positional score is irrelevant.
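Something like this, in Python (the bound on the positional term and all the function names are hypothetical):

```python
# Sketch of the lazy-evaluation idea: compute a cheap material-only
# score first, and pay for the slow positional term only when the fast
# estimate lands within delta of the beta-plus-threat bound.

MAX_POSITIONAL = 200  # assumed bound (centipawns) on |positional term|

def eval_beats_bound(pos, beta, threat, material_eval, positional_eval):
    """True if the full evaluation exceeds beta + threat, computing the
    positional term only when the fast estimate is a close call."""
    delta = material_eval(pos) - threat - beta
    if abs(delta) >= MAX_POSITIONAL:
        return delta > 0                 # positional term cannot flip this
    return delta + positional_eval(pos) > 0
```

The trick only works if you know a hard bound on how large the positional term can ever get; otherwise the shortcut can change the decision.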

> and again consider this a candidate for selectivity.
>Then I determine statically the attack potential of the enemy
>army.  This determination is based on what is directly attacked
>and what is defending it.  In fact it is not very sophisticated
>at all and simply errs on the side of being conservative.  For
>instance it is much less sophisticated than Bob Hyatt's SEE routine.
>
>Here are a few of the considerations:
>
>  1) If a piece is up-attacked, assume you will "get back" 2/3 of
>     that piece's value.  (you can experiment with the ratio)

Do you mean 1/3? In the example given below, the queen is attacked by the bishop,
which you assess as a threat value of 6. Is this calculated as 2/3 x 9 or 9-6? If
you mean the first of these, then you are only getting back a third of the
queen.

>
>  2) If a piece is attacked by a greater valued piece but defended
>     by a pawn, assume it is not attacked at all.
>
>  3) If a piece is attacked by a piece of equal value, count the
>     attackers and defenders.  If there are more attackers assume
>     2/3 of the piece is lost.
>
>  4) Don't forget to consider a pawn on the 7th as a big attack.  I
>     think we even consider a pawn on the 7th as not being an attack
>     if the square in front is occupied.
>
>There are lots of opportunities here for making assumptions and
>playing with the degree of risk you are willing to accept.  For
>instance the 2/3 figure could be tweaked or even changed dynamically,
>or you could use a more thorough SEE evaluator etc.  Also you can
>do some static mate threat testing too if you want to extend the
>idea but it is not necessary (for us) to get a nice improvement.
>
>So basically I apply this static test and compare the attack value
>of the opponents position to our distance from beta.  Here is an
>example:
>
>In a white to move position, the program calls the static evaluator
>function and notices that white is 3 pawns greater than beta.
>Presumably white has a great position and would like to stop
>searching.  But first we want to know if black has any substantial
>threats.  We call the threat detector routine (as described above)
>and notice that white has its queen directly attacked by a bishop.
>If the queen has a value of 9 pawns, then 2/3 of this is 6 pawns.
>Because black's threat value is 6 pawns and we are only 3 pawns
>above beta, we conclude that the position is too dangerous to
>stop in.  If a minor piece had been attacked instead of a queen
>we probably would have taken the cutoff.
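To check I have understood the rules, here is how I would code them up in Python. The threat per attacked piece follows your worked example (2/3 of the attacked piece's value, the reading my question above is about); the piece encoding is my own guess, and rule 4 (pawn on the 7th) is omitted for brevity:

```python
# Sketch of the static threat detector and stand-pat decision described
# above, with values in pawns.

THREAT_RATIO = 2.0 / 3.0  # the tunable "2/3 figure"

def threat_value(attacked):
    """attacked: list of dicts describing each attacked friendly piece."""
    total = 0.0
    for p in attacked:
        # Rule 2: attacked by a bigger piece but defended by a pawn.
        if p['attacker_value'] > p['value'] and p['pawn_defended']:
            continue
        # Rule 1: up-attacked (attacker worth less than the target).
        if p['attacker_value'] < p['value']:
            total += THREAT_RATIO * p['value']
        # Rule 3: equal-value attacker, more attackers than defenders.
        elif (p['attacker_value'] == p['value']
              and p['attackers'] > p['defenders']):
            total += THREAT_RATIO * p['value']
    return total

def take_static_cutoff(stand_pat, beta, attacked):
    # Stop searching only if our surplus over beta covers the threats.
    return stand_pat - beta > threat_value(attacked)
```

On the worked example: a queen (9) attacked by a bishop gives a threat of 6, so a stand pat score only 3 pawns above beta refuses the cutoff, while a threatened minor piece would still allow it.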
>
>Our program still uses null move searching predominantly.  It turns
>out that doing 2 ply of this static stuff seems to be the right
>number for us (at the last two levels near leaf nodes of the main search.)
>If we use more than 2, the program weakens; if we
>use 1 ply of this it's just slightly weaker, and if we do not
>do this at all and use pure null move searching all
>the way to the end, the program is clearly weaker.

Do you think the two pruning methods interact much? I mean, compared to no
selectivity at all, is the null move speedup plus the static pruning speedup
greater or less than the speedup of the combination of both techniques together?
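As I read it, the split you describe is just a dispatch on remaining depth, something like this (names and the constant are my own placeholders):

```python
# Sketch of the depth-based split described above: the cheap static
# threat test in the last few plies of the main search, null-move
# pruning everywhere above that.

STATIC_PLIES = 2  # the "2 ply of this static stuff" reported to work best

def select_pruning(depth, static_prune, null_move_prune):
    """Pick which selectivity test to apply at a node `depth` plies
    above the leaves of the main search."""
    if depth <= STATIC_PLIES:
        return static_prune          # near the leaves: cheap static test
    return null_move_prune           # elsewhere: null-move search
```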

>
>Our old Socrates serial program that won the 93 ICCA international
>did not use null move at all.  We did 4 ply of this stuff, all statically
>based.  We had more sophisticated rules and looked at certain kinds
>of mate threats, ones where the king was on the edge and also did
>some passed pawn analysis (pawns on the 6th.)   Even though we only
>did 4 ply of selectivity and did not do null move pruning, this is
>the fastest program (in terms of iteration depth and speed) I've
>ever written.  I'm still wondering whether null move pruning (as
>wonderful as it is) is the final say.  It is certainly not the
>only way to write a very strong chess program.
>
>- Don

Maybe the absence of null move pruning made it more useful to use your static
pruning technique at more levels in the tree. I imagine the relationship between
the two methods is probably very complex.

Best wishes,
Roberto





Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.