Computer Chess Club Archives




Subject: Re: When to do a null move search - an experiment

Author: Vincent Diepeveen

Date: 10:41:53 04/26/04


On April 26, 2004 at 12:14:33, José Carlos wrote:

>On April 26, 2004 at 11:57:43, Vincent Diepeveen wrote:
>>On April 26, 2004 at 11:48:35, José Carlos wrote:
>>>On April 26, 2004 at 11:32:26, Tord Romstad wrote:
>>>>On April 26, 2004 at 10:39:42, José Carlos wrote:
>>>>>  An interesting experiment, of course. But I think your conditions are rather
>>>>>different from 'most' programs. I mean:
>>>>>  - You allow any number of null moves in a row (most programs don't do even
>>>>This has no importance, I think.  My experience is that I almost always get the
>>>>same score and PV when I enable/disable several null moves in a row, and that
>>>>the difference in number of moves searched is *very* tiny.
>>>  You're probably right, as you've tested and I speak from intuition, but at
>>>first sight, it seems that the fact that you allow several null moves in a row
>>>will increase your percentage of null-moves-tries/total-nodes-searched, and thus
>>>that avoiding unnecessary null moves will be a good idea.
>>In *all* experiments I did with nullmove and a program not using *any* forward
>>pruning other than nullmove, the best thing was to *always* nullmove.
>  Yes, that's what other programmers also said (including me) in the thread we
>had last week. That's pretty intuitive. With no other forward pruning (or
>very little) but null move, the cost of not trying a null move that would have
>produced a cutoff is terrible compared to the benefit of saving a useless null
>move try. So avoiding null move, in this case, must be limited to the very few
>cases where you're 99.99% certain you'll fail low... if any.

99.99% means 1 in 10k nodes.

So always doing nullmove is cheaper: in a lot of cases the transposition table
is doing its good job, and in the other cases you would have searched more than
10k nodes, which you now avoid searching.
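The "always nullmove" policy being argued here can be sketched in a few lines. Below is a minimal, hypothetical fail-hard negamax with null move pruning over a toy position class; `Toy`, `R = 2`, and all the method names are illustrative assumptions for the sketch, not code from any engine discussed in this thread.

```python
R = 2  # null move depth reduction (a common choice, just an assumption here)

class Toy:
    """Illustrative stand-in for a position: static eval is seen from the
    side to move, so passing the move (or making any move) flips its sign."""
    def __init__(self, val):
        self.val = val
    def evaluate(self):
        return self.val
    def make_null(self):
        return Toy(-self.val)   # give the opponent the move
    def moves(self):
        return ["a", "b"]       # two dummy legal moves
    def make(self, move):
        return Toy(-self.val)   # dummy children mirror the parent

def search(pos, depth, alpha, beta):
    if depth <= 0:
        return pos.evaluate()
    # Always try the null move first -- no "don't null after null" guard.
    # If we can pass and still fail high, a real move can only do better.
    score = -search(pos.make_null(), depth - 1 - R, -beta, -beta + 1)
    if score >= beta:
        return beta             # fail-hard cutoff
    for m in pos.moves():
        score = -search(pos.make(m), depth - 1, -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

Note that because each null search drops the remaining depth by 1 + R, a chain of consecutive null moves terminates on its own, which is part of why always allowing it is cheap.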

>  Gothmog is very different from that 'paradigm' (he does a lot of forward
>prunning and applies many ideas he has commented here), hence it works pretty
>well for him.

I get the impression the evaluation function plays a major role in whether
something is useful or not.

Checks in qsearch are also a typical example of this.

>>Double nullmove I invented to prove that nullmove gives the same results as a
>>normal fullwidth search for a depth n which I may pick, and I use it as it
>>finds zugzwangs; I am sure that is very helpful, because the weakest chain
>>counts. So double nullmove always completely outgunned doing a single
>>nullmove, then disallowing a nullmove, and then allowing the next one.
>  I tried double null move some time ago, and it didn't work for me. Probably I
>did something wrong, but I recall an old post (see the archives) from C. Theron
>where he gave some points why double null move should not work. I, myself,
>didn't invest too much time though as I had much weaker points to fix in my
>program before.

AFAIK, Christophe didn't post that it doesn't work.

Further, I must remind you that the majority of commercial programmers posting
here are not busy letting you know what works or doesn't work for them.

To quote Johan: "don't inform the amateurs".

I remember that Christophe also posted that the evaluation function is not so

His latest postings here made more sense, however, than the crap posted before.

>>The quickest way, of course, to get a cutoff in a tree is by a nullmove.
>  Ed gives some clues about static forward pruning on his page. It must be
>doable, as Rebel is, and has always been, one of the top programs, but it's
>very difficult to fine-tune.

If anything on Ed's homepage works for you, then you are a lot behind the 'top
programs' (I hate to write it down like that, but my English is not sufficient
to put it more diplomatically than that).

>>So always allowing a nullmove is also intuitively the best thing to do.
>>Any forward pruning or weak qsearch you might use can however change that of
>  Hard to say how qsearch can affect that, but forward pruning definitely makes
>a difference in the current topic.

I feel the qsearch plays a more important role.

Considering the number of games he played, he has probably played blitz-level
matches. So small search depths, so tactics dominated.

If you nullmove more, then the qsearch plays a bigger role. If it plays a
bigger role and is not so very good, then you will conclude that not
nullmoving is better.

'Verified nullmove' relies on the same buggy principle.
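Double nullmove, as described earlier in the thread, can be sketched roughly as follows: allow up to two consecutive null moves and reset the counter after any real move. After two passes in a row the same side is to move again, so the reduced-depth subtree amounts to a verification search of the current position, which is how zugzwang gets caught. The `Toy` stub, `R = 2`, and the exact bookkeeping are illustrative assumptions, not the actual DIEP implementation.

```python
R = 2  # null move depth reduction (an assumed, typical value)

class Toy:
    """Illustrative stand-in for a position; eval flips sign with the move."""
    def __init__(self, val):
        self.val = val
    def evaluate(self):
        return self.val
    def make_null(self):
        return Toy(-self.val)
    def moves(self):
        return ["a"]
    def make(self, m):
        return Toy(-self.val)

def search(pos, depth, alpha, beta, nulls_in_row=0):
    if depth <= 0:
        return pos.evaluate()
    # Double nullmove: permit up to two consecutive null moves. After two
    # nulls in a row the same side moves again, so the reduced-depth subtree
    # really searches the current position -- that detects zugzwang, which
    # single nullmove with a "no null after null" rule can miss.
    if nulls_in_row < 2:
        score = -search(pos.make_null(), depth - 1 - R, -beta, -beta + 1,
                        nulls_in_row + 1)
        if score >= beta:
            return beta
    for m in pos.moves():
        score = -search(pos.make(m), depth - 1, -beta, -alpha, 0)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

The only change against a plain null move searcher is the `nulls_in_row` counter: a real move resets it to 0, a null move increments it, and the third null in a row is refused.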

>  José C.
>>>>>  - You do pruning and reductions that are not public domain
>>>>>  This is important because your results mean: 'in a non-standard null move
>>>>>implementation (where you try more null move searches than most others) and with
>>>>>a lot of heavy pruning and reductions (I assume they're heavy according to your
>>>>>posts here and the high depths Gothmog gets) and in a MTD(f) root search,
>>>>>limiting null move application seems to benefit". This conclusion is, apart from
>>>>>interesting of course, very intuitive.
>>>>This is a very good point.  During our discussion last week, several people
>>>>claimed that checking the static eval before doing a null move search wasn't
>>>>really necessary, because the cost of a null move search would be tiny
>>>>compared to the cost of searching the moves anyway.  This isn't really
>>>>true for Gothmog, because most of the moves are usually not searched to
>>>>full depth.
>>>>I am not sure it is fair to describe my pruning and reductions as "not
>>>>public domain", though.  It is true that I haven't published any paper
>>>>and that my engine is not open source (at least not yet), but I am happy
>>>>to describe the tricks I use to anybody who wants to listen.  :-)
>>>  True. I didn't mean they're a secret, but they're not implemented in most
>>>chess programs. Expressing myself correctly in English is not easy for me...
>>that's the case for all of us.
>>>>Whether the use of MTD(f) is important is hard to say, but my guess is
>>>>that it isn't.
>>>  Probably not. I mentioned it just to underline that your program is different
>>>from the average, though it's true that MTD(f) is more popular now than a few
>>>years ago.
>>>  José C.
>>>>>  Thanks for the data. You sure inspired many others to do similar tests (I'll
>>>>>do when I have time).
>>>>Yes, it would be interesting to see the results of similar experiments for
>>>>other engines.  It seems like a sure bet that this is a YMMV issue.  :-)


Last modified: Thu, 07 Jul 11 08:48:38 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.