Author: Tony Werten
Date: 05:37:34 04/27/04
On April 27, 2004 at 03:33:26, Bas Hamstra wrote:

>About double nullmove: I tested this in some pawn endgames to see if it could
>handle zugzwang problems, but I don't see it perform any better than normal
>nullmove. Can Vincent or you post a position where double null outperforms
>normal null? I agree the idea is elegant, but I just don't see it work.

I switch it off in pawns-only endgames. I couldn't figure out why it wasn't
doing as well as it sounded.

Tony

>Bas.
>
>On April 26, 2004 at 15:04:32, Robert Hyatt wrote:
>
>>On April 26, 2004 at 14:05:14, Vincent Diepeveen wrote:
>>
>>>On April 26, 2004 at 13:51:03, Robert Hyatt wrote:
>>>
>>>>On April 26, 2004 at 11:50:18, Vincent Diepeveen wrote:
>>>>
>>>>>On April 26, 2004 at 06:31:58, Tord Romstad wrote:
>>>>>
>>>>>I wonder how you came to your conclusion.
>>>>>
>>>>>I did more experiments and concluded that always nullmoving saves the most
>>>>>time. Double nullmove performs just slightly worse (less than a percent in
>>>>>search time) and does not miss zugzwangs, so I use it for that reason.
>>>>
>>>>This is one of those rare occasions where I agree with Vincent.
>>>>
>>>>Several years ago Bruce Moreland and I were talking about this very subject.
>>>>I was explaining how I did it in Cray Blitz (and also in Crafty). I
>>>>classified a node as "ALL", "CUT" or "PV" according to Marsland/Schaeffer
>>>>terminology, and if the node was classified as PV I avoided doing the
>>>>null-search at all.
>>>>
>>>>Bruce did some testing, and applying this restriction actually slowed him
>>>>down a bit. So we both set about testing this stuff extensively, and we came
>>>>to the conclusion that null-move is best used _everywhere_, because we don't
>>>>have perfect ordering and node classification is not that accurate as a
>>>>result. The only reasons for avoiding a null are:
>>>>
>>>>(1) You are in some sort of zugzwang position where a null-move will fail
>>>>high for the wrong reason and wreck the search. Classic examples here are
>>>>positions with very few pieces, i.e. pawns vs. a knight where the knight can
>>>>be zugged. Most programs require some minimal amount of material on the
>>>>board to avoid this problem.
>>>>
>>>>(2) There is a tactical issue that is hidden by the R reduction, i.e. the
>>>>classic position with white pawns at f2, g3 and h2, a black queen at h3 and
>>>>a black pawn or bishop at f3, threatening mate on the move. If the R
>>>>reduction prevents you from seeing the mate, you can have problems.
>>>>
>>>>(3) The hash table proves that the null-move search will not fail high,
>>>>meaning that the search would be wasted effort.
>>>>
>>>>(4) Obvious positions, such as when the side to move is in check. Not moving
>>>>can't fail high here, as the king is lost.
>>>>
>>>>(5) I don't allow two consecutive nulls. It is a potentially cute way of
>>>>eliminating zugzwang problems, but it is only good for that, and it is not
>>>>free in positions where no zugzwang is possible. I choose not to deal with
>>>>it, although I have it on my "to do" list to test with (say) pawn-only
>>>>endings.
>>>
>>>Not allowing 2 consecutive nulls will for sure not find zugzwang problems.
>>>
>>>Not allowing 3 consecutive nulls will, however, find many zugzwang problems.
>>>As pointed out, the hashtable could make this difficult, but that very
>>>seldom happens.
>>>
>>>Double nullmove does allow 2 consecutive nulls, but not a third. So for a
>>>correct implementation you just need one 'if then else' to also not allow
>>>null in a transposition table cutoff. This is very trivial to do.
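To make the scheme being discussed concrete, here is a minimal sketch in C of a
null-move guard that combines Hyatt's exclusions above with the "double
nullmove" rule (two consecutive nulls are allowed, a third is not). Every name
in it (Position, make_null, only_pawns, hash_refutes_null, ...) is a
hypothetical helper invented for illustration; this is not code from any of
the engines mentioned in the thread.

/* Minimal sketch: null-move eligibility with the double-null rule.
   All helpers below are assumed to be provided by the engine. */

#define NULL_R 2                       /* depth reduction for the null search */

typedef struct Position Position;      /* engine-specific board type (assumed) */
int  in_check(const Position *pos);
int  only_pawns(const Position *pos);  /* crude material test for zugzwang danger */
int  hash_refutes_null(const Position *pos, int depth, int beta);
int  qsearch(Position *pos, int alpha, int beta);
void make_null(Position *pos);
void unmake_null(Position *pos);

static int null_move_ok(const Position *pos, int depth, int beta, int nulls_in_row)
{
    if (in_check(pos))                       return 0;  /* (4) in check            */
    if (nulls_in_row >= 2)                   return 0;  /* double null: at most 2  */
    if (only_pawns(pos))                     return 0;  /* (1) zugzwang guard      */
    if (hash_refutes_null(pos, depth, beta)) return 0;  /* (3) TT says it fails    */
    return 1;
}

int search(Position *pos, int alpha, int beta, int depth, int nulls_in_row)
{
    if (depth <= 0)
        return qsearch(pos, alpha, beta);

    if (null_move_ok(pos, depth, beta, nulls_in_row)) {
        make_null(pos);
        int score = -search(pos, -beta, -beta + 1, depth - 1 - NULL_R,
                            nulls_in_row + 1);
        unmake_null(pos);
        if (score >= beta)
            return beta;                     /* null-move cutoff */
    }

    /* ... normal move loop; real moves call search() with nulls_in_row = 0 ... */
    return alpha;
}

The only detail that matters for the double-null idea is that nulls_in_row is
reset to zero after every real move, so two nulls can only occur back to back.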
>>
>>That is exactly what I was talking about. More than 2 can't ever be good. In
>>middlegame positions, I believe that 2 in a row introduces unnecessary
>>overhead. In the endgame it might be worthwhile.
>>
>>>>In short, try null everywhere. But, as always, test, test, test. Theory says
>>>>that doing null on a PV node is a waste. Practice says that not all PV nodes
>>>>are really PV nodes after the search has started... Go with practice. :)
>>>
>>>I'm using PVS in its pure implementation, so I start with the window
>>>[-inf, inf]; I also disallow nullmove when beta == inf.
>>>
>>>That's trivial stuff, however.
>>>
>>>Doing a nullmove at all other 'PV nodes' is very good for different reasons,
>>>even when it gets a nullmove. Double nullmove has a bit of an IID effect.
>>>
>>>>>Your results give IMHO only the indication that your qsearch needs to be
>>>>>fixed and/or that you did your experiments at a too fast time control where
>>>>>tactics dominate.
>>>>>
>>>>>>Last week, there was an interesting discussion on this forum about which
>>>>>>criteria should be used to decide whether or not to do a null move search.
>>>>>>In an attempt to find out what works best for my engine, I played a
>>>>>>self-play tournament between four slightly different versions of my
>>>>>>engine. The following pseudo-code explains the null-move criteria used by
>>>>>>each of the four versions:
>>>>>>
>>>>>>/* Gothmog A: */
>>>>>>int ok_to_do_nullmove() {
>>>>>>  if(in_check(SideToMove)) return 0;
>>>>>>  if(static_eval - value_of_biggest_hanging_piece(SideToMove) >= beta)
>>>>>>    return 1;
>>>>>>  return 0;
>>>>>>}
>>>>>>
>>>>>>/* Gothmog B: */
>>>>>>int ok_to_do_nullmove() {
>>>>>>  if(in_check(SideToMove)) return 0;
>>>>>>  if(static_eval - value_of_biggest_hanging_piece(SideToMove)
>>>>>>     + value_of_biggest_hanging_piece(Opponent) >= beta)
>>>>>>    return 1;
>>>>>>  return 0;
>>>>>>}
>>>>>>
>>>>>>/* Gothmog C: */
>>>>>>int ok_to_do_nullmove() {
>>>>>>  if(in_check(SideToMove)) return 0;
>>>>>>  else return 1;
>>>>>>}
>>>>>>
>>>>>>Gothmog D is similar to Gothmog C, except that when the remaining depth
>>>>>>is 4 plies or more, a null move search with the depth reduced to zero
>>>>>>is done before the real null move search, and the real null move search
>>>>>>is avoided if the zero-depth search fails low.
>>>>>>
>>>>>>The criterion in Gothmog A is the one I have always used in the public
>>>>>>versions of Gothmog. Our discussion last week seemed to indicate that
>>>>>>most others use the simpler criterion in Gothmog C.
>>>>>>
>>>>>>In my tournament, all four versions faced each other 50 times. The
>>>>>>results:
>>>>>>
>>>>>>            Gothmog A  Gothmog B  Gothmog C  Gothmog D    Sum
>>>>>>Gothmog A     XXXX       26.5       31.0       29.5     87.0/150
>>>>>>Gothmog B     23.5       XXXX       30.5       26.0     80.0/150
>>>>>>Gothmog C     19.0       19.5       XXXX       31.5     70.0/150
>>>>>>Gothmog D     20.5       24.0       18.5       XXXX     63.0/150
>>>>>>
>>>>>>It is interesting that the version with the most restrictive criterion
>>>>>>achieves the highest score. This confirms my growing suspicion that
>>>>>>recursive null move pruning doesn't work nearly as well as most people
>>>>>>believe, and that it might be a good idea to make an effort to reduce
>>>>>>its use as much as possible. I will try to experiment with even more
>>>>>>restrictive criteria (perhaps static_eval - biggest_hanging >= beta + margin)
>>>>>>and see whether that improves the strength further.
>>>>>>
>>>>>>The number of games is of course still not sufficiently big to make any
>>>>>>definitive conclusions. If I find the time, I will add more games and
>>>>>>more versions to the tournament.
>>>>>>
>>>>>>Tord
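The "Gothmog D" variant is described above only in prose: a zero-depth null
search is used as a cheap probe before committing to the real reduced-depth
null search. Here is a rough sketch of that idea, reusing the hypothetical
search()/null_move_ok() helpers from the earlier snippet; this is an
illustration under those assumptions, not Gothmog's actual code.

/* Gothmog D idea: when remaining depth >= 4, first run the null-move search
   with the depth reduced to zero (i.e. straight into qsearch); only if that
   cheap probe already fails high is the real reduced-depth null search done.
   Helper names are the hypothetical ones from the previous sketch. */
static int try_null_move(Position *pos, int beta, int depth, int nulls_in_row,
                         int *score_out)
{
    if (!null_move_ok(pos, depth, beta, nulls_in_row))
        return 0;                                 /* null move not attempted */

    make_null(pos);

    if (depth >= 4) {
        /* zero-depth probe: essentially a qsearch for the opponent */
        int probe = -search(pos, -beta, -beta + 1, 0, nulls_in_row + 1);
        if (probe < beta) {                       /* probe fails low ...      */
            unmake_null(pos);
            return 0;                             /* ... skip the real search */
        }
    }

    *score_out = -search(pos, -beta, -beta + 1, depth - 1 - NULL_R,
                         nulls_in_row + 1);
    unmake_null(pos);
    return 1;                   /* caller cuts off if *score_out >= beta */
}

The probe adds an extra qsearch at every eligible node, which might be one
reason Gothmog D scores lowest in the table above even though the idea sounds
attractive.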