# Computer Chess Club Archives

## Messages

### Subject: Re: Zero-width Window Null Move Search

Author: Don Dailey

Date: 10:53:09 06/18/98

```
On June 18, 1998 at 10:21:06, Roberto Waldteufel wrote:

>
>On June 18, 1998 at 00:33:24, Don Dailey wrote:
>
>>On June 17, 1998 at 22:25:12, Roberto Waldteufel wrote:
>>
>>>
>>>
>>>On June 15, 1998 at 12:41:26, Ren Wu wrote:
>>>>
>>>
>>>>I used to use PVS but switched to NegaScout some time ago, but after that
>>>
>>>
>>>Please would you explain what is the difference? I was under the impression that
>>>PVS and Negascout were one and the same algorithm, namely search first move with
>>>window [alpha,beta], then search remaining moves with window [alpha, alpha+1]
>>>with a research if necessary. If this is wrong, I would like to know.
>>>
>>>Best wishes,
>>>
>>>Roberto
>>
>>PVS and scout are the same as far as I know.  PVS means principal
>>variation search.  The "nega" in negascout is the practice of
>>negating the scores from level to level and changing the point of
>>view of alpha and beta.
>>
>>This is to provide a consistent frame of reference and turns min-max
>>into maxi-max, both sides try to maximize the score.  It is a
>>semantic distinction and changes nothing but makes the code a
>>little more natural to deal with:
>>
>>
>>/* prototype */
>>int  search( int alpha, int beta, int depth );
>>
>>/* recursive call with nega-max */
>>score = -search( -beta, -alpha, depth-1 );
>>
>>
>>Sometimes programmers put a window around even the first move.  But
>>the "classic"  algorithm searches the first move with alpha and
>>beta which at the root of the tree is -INF +INF.
>>
>>I use MTD which in a nutshell is ALWAYS using a zero width
>>window.  How you pick the window determines which version of
>>MTD you use.  I use MTD(f) with my own modifications to improve
>>the behavior of it.
>>
>>All these algorithms are just various manipulations of the window
>>and together they are called "aspiration searches."   Using a
>>window other than alpha and beta is speculative and always risks
>>doing a re-search but the net effect (at least when using hash
>>tables) is a general speedup.  MTD is the most efficient of all
>>these because it takes the aspiration paradigm to the limit but
>>it is also a pain to deal with and has lots of tricky side effects.
>>Most consider it not worth the trouble.
>>
>>
>>- Don
>
>Thanks for your reply - this is what I thought: basically PVS and negascout are
>synonyms, unless we take PVS to mean that the scores are not negated. I find
>it confusing therefore to see comparisons made between them as if they were
>something different from each other. I currently use negascout with a half-pawn
>window (ie plus or minus a quarter of a pawn), but used Negamax (all nodes with
>infinite window) before that and found that the zero window searches helped
>speed things up quite a lot. I am interested to read that you use Aske Plaat's
>MTD(f) algorithm. I tried this some time back, but, in contrast to Plaat's
>findings, I did not find it to be as fast as Negascout. Maybe I was doing
>something wrong, but I found that the number of researches was too high for the
>speed of null window search to pay off. I found this in tests with programs
>playing Chess, Checkers and Connect4. How many passes do you typically have to
>make on each iteration? Of course, in theory you might only need two, but more
>usually I needed between 8 and 12 passes, and on occasion much more. Perhaps my
>evaluation functions were either too fine, or too erratic.
>
>I did not find the coding of MTD(f) to be a pain. It's actually less complex than
>other variants of alpha-beta in the sense that you only need alpha (or beta) to
>be passed, so fewer parameters on the stack. I was just disappointed that I
>couldn't make it work as well as my existing Negascout code.
>
>Best wishes,
>
>Roberto

The basic coding of MTD(f) is very simple and is easier to code up
than all the other variations of alpha/beta.  But then you get into
a bunch of different issues and suddenly things are not so simple.
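
For the record, the basic driver can be sketched in a few lines of C.
This is my own illustration, not the Cilkchess code; toy_search stands
in for a real fail-soft alpha-beta search and here just returns a fixed
score of 37 so the loop has something to converge on:

#include <assert.h>

#define INF 100000

/* toy stand-in for a real fail-soft search: always returns the
   exact score of the position (37 in this illustration) */
static int toy_search(int alpha, int beta, int depth)
{
    (void)alpha; (void)beta; (void)depth;
    return 37;
}

/* minimal MTD(f) driver: repeated zero-width probes squeeze the
   [lower, upper] bounds together around the true score */
int mtdf(int first_guess, int depth)
{
    int score = first_guess;
    int lower = -INF, upper = +INF;
    while (lower < upper) {
        int beta = (score == lower) ? score + 1 : score;
        score = toy_search(beta - 1, beta, depth);
        if (score < beta)
            upper = score;   /* failed low:  true score <= score */
        else
            lower = score;   /* failed high: true score >= score */
    }
    return score;
}

Every probe is a zero-width window (beta-1, beta); the driver never
opens the window, it only moves it.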

I found these specific problems that I deal with one way or the
other:

. The principal variations are often flaky.
. Lazy evaluation becomes more problematic.
. Sometimes searches that find big wins blow up unreasonably.

You mentioned that your implementation was slower.  I strongly
suspect the driver is implemented incorrectly.  After I first
implemented mtd(f) Aske Plaat came to MIT and made a minor change
in the driver that made a big difference.  I find the mtd(f)
implementation to be significantly faster and as I've posted
before would RATHER use PVS but would have to give up too much.
I will note that his implementation at first seemed
counterintuitive to me but is sound.

The search in mtd(f) does not appear to walk through the move
list in the way I'm used to seeing with PVS.  The same move can
be searched many times and appear anywhere in the move list.  So
it actually affects even your interface; as I said, yet another
implementation issue to deal with.

Aske Plaat implemented the PV by hash table lookup in our first
version of Cilkchess.  It sometimes returns a silly second move
and as Bob pointed out, the extra speed benefit of MTD goes away
after a few bad predictions.  I feel like this problem must
surely be solvable but just haven't focused too hard on it yet.
I did make a change that improves the situation: it involves
how I store the best move in the hash table for the second ply.
I do not store it if the bound is in the "wrong" direction.  I
rarely see the problem now but it's still not completely solved.
Also I would prefer to see the whole variation be sound, not just
the first two moves.  Still, the vast majority of PV's seem to
be sound (at least from the computer's point of view).
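
To illustrate the idea, here is a toy sketch (mine, not the
Cilkchess code): positions are reduced to integer keys, a "make
move" just offsets the key, and the best move is stored only when
the bound is not in the wrong direction:

#include <assert.h>

enum { UPPER_BOUND, LOWER_BOUND, EXACT };

#define TSIZE 64
struct tt_entry { int key; int move; int used; };
static struct tt_entry tt[TSIZE];

/* store the best move only when the bound supports it: after a
   fail low (an upper bound) the "best" move is meaningless */
static void store_best(int key, int move, int flag)
{
    if (flag == UPPER_BOUND)
        return;
    tt[key % TSIZE].key  = key;
    tt[key % TSIZE].move = move;
    tt[key % TSIZE].used = 1;
}

/* recover the PV by repeated hash probes from the root; the walk
   stops at the first position with no trusted entry */
static int pv_from_hash(int root_key, int *pv, int maxlen)
{
    int n = 0, key = root_key;
    while (n < maxlen) {
        struct tt_entry *e = &tt[key % TSIZE];
        if (!e->used || e->key != key)
            break;
        pv[n++] = e->move;
        key += e->move;            /* toy "make move" */
    }
    return n;
}

/* demo: the third entry is a fail-low bound, so store_best skips
   it and the walk stops after two trusted moves */
int demo_pv_length(void)
{
    int pv[8];
    store_best(100, 7, EXACT);
    store_best(107, 5, LOWER_BOUND);
    store_best(112, 9, UPPER_BOUND);   /* skipped by store_best */
    return pv_from_hash(100, pv, 8);
}

In a real engine the key would be a Zobrist hash and the walk
would make and unmake actual moves, but the cutoff rule is the
same: never trust a best move recorded under a fail-low bound.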

Lazy evaluation is no longer an issue for me.  I was just about
to give it up anyway: since my program is getting smarter and
smarter, I get less benefit from it.  Earlier programs
of mine would get very large speedups from lazy evaluation but
they had very simple evaluations.  Cilkchess is getting more
and more sophisticated in the knowledge area and it's hard to
make correct scoring assumptions any more.  So my solution, which
I'm quite happy with, is to do away with lazy evaluation.  If
I was working full time on the program I could re-engineer a
few percent speedup with a lot of work but I'm planning on even
more aggressive evaluation in the future so I'm happy to leave
this one alone.  But for the record, I learned that lazy
evaluation increases the node counts of mtd(f).  mtd(f)
makes strong use of returned scores and lazy evaluation ends
up costing you because you will get a "weaker" score back quite
often, which will just cost you extra mtd(f) probes.  For Cilkchess
it turned out that the benefit in nodes per second was almost
perfectly offset by the extra nodes we had to search.  It was a
case of "robbing Peter to pay Paul."
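
A generic lazy-evaluation sketch shows why (my illustration; the
margin and the positional term are made up): when the fast material
score plus a fudge margin cannot reach the window, the slow terms
are skipped and only a "weaker" bound comes back:

#include <assert.h>

#define LAZY_MARGIN 50

/* stand-in for the slow positional terms (here a flat +15) */
static int slow_positional_terms(void)
{
    return 15;
}

/* lazy evaluation: if even the most optimistic estimate cannot
   reach the window, skip the slow terms and return a bound
   instead of an exact score */
static int evaluate_lazy(int alpha, int beta, int material)
{
    if (material + LAZY_MARGIN <= alpha)
        return material + LAZY_MARGIN;   /* a bound, not exact */
    if (material - LAZY_MARGIN >= beta)
        return material - LAZY_MARGIN;   /* a bound, not exact */
    return material + slow_positional_terms();  /* exact score */
}

A plain alpha-beta search only needs the cutoff, but mtd(f) re-uses
the returned value to pick the next probe point, so the lost
precision turns directly into extra probes.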

In my opinion, mtd(f) is a lousy algorithm for a program that is
not written very precisely.  A lot of good programs use lots of tricks
that "technically" are not correct but probably improve the program
in the chess playing sense.   Rex was very much like this: I was
hell-bent for speed, had a great preprocessor (Larry wrote all the
rules for the Rexchess preprocessor) and I used any kind of sleazy
trick to make it as strong as possible, regardless of how sloppy the
code got or how risky the assumptions were.  The chess playing part
was well tested and all the tricks paid off.  I used very aggressive
lazy evaluation to deal with the few slow evaluation terms in the
program but my "fudge margin" was not large enough to encompass
all possible scores.  It was a calculated risk to speed things up
and as such it worked well.

Some programs use a kind of fast incremental evaluation that is
not guaranteed to be insensitive to move ordering.  Also many
programs use a variety of techniques that are move ordering sensitive
like singular extensions, various other extensions and tricks used
in the quiescence search.  If you have things in your program that are
window based (such as Berliner's recapture extension rule)
then you are likely to have problems with MTD.

The blowup problem I talked about before was solved by some changes
I made to the mtd(f) driver in Cilkchess.  Basically I am more
aggressive about incrementing (or decrementing) beta on each pass
when I continue to fail in the same direction.  The trick with
MTD is to always choose beta based on a statistically sound guess
of where the score is likely to be.  Usually you will be very
close to the score of the previous iteration and mtd(f) says to
increment in small amounts for each failure because we expect
this to be the case.  When you fail repeatedly, you can assume
something has fundamentally changed about the position and you
should widen your probe steps accordingly.

So it doesn't surprise me if many or even most programs are slower
with MTD(f).  If you use MTD(f), your program must be very clean
and correct.
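
That step-widening trick can be sketched like this (my own
reconstruction of the idea, not the actual Cilkchess driver;
toy_probe stands in for the real search and just returns a fixed
faraway score of 500, as after finding a big win):

#include <assert.h>

#define INF 100000

/* toy stand-in for a real fail-soft search; the "true" score is
   far from the first guess */
static int toy_probe(int alpha, int beta, int depth)
{
    (void)alpha; (void)beta; (void)depth;
    return 500;
}

/* mtd(f) driver that doubles the probe step each time it keeps
   failing in the same direction, instead of creeping by 1 */
int mtdf_wide(int first_guess, int depth)
{
    int score = first_guess;
    int lower = -INF, upper = +INF;
    int step = 1, last_dir = 0;
    while (lower < upper) {
        int beta = (score == lower) ? score + 1 : score;
        score = toy_probe(beta - 1, beta, depth);
        if (score < beta) {                    /* failed low */
            upper = score;
            step = (last_dir == -1) ? step * 2 : 1;
            last_dir = -1;
            if (score - step > lower)
                score -= step;                 /* jump further down */
        } else {                               /* failed high */
            lower = score;
            step = (last_dir == +1) ? step * 2 : 1;
            last_dir = +1;
            if (score + step < upper)
                score += step;                 /* jump further up */
        }
    }
    return lower;
}

The plain driver would creep toward a faraway score one point per
probe; this variant reaches it in a logarithmic number of probes
while still converging on the same value.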

So I've had a lot of problems with MTD(f) but for Cilkchess it is
the right choice.  From conversations I've had with other
programmers and based on my knowledge of how their programs are
written, I suspect it is not a good choice for most programs.

- Don

```