Author: Vincent Diepeveen
Date: 05:09:34 02/25/99
On February 24, 1999 at 15:53:49, Don Dailey wrote:

>On February 24, 1999 at 13:49:49, Will Singleton wrote:
>
>>On February 24, 1999 at 07:44:06, Vincent Diepeveen wrote:
>>
>>>On February 24, 1999 at 02:30:21, Will Singleton wrote:
>>>
>>>>On February 24, 1999 at 01:16:56, Sylvain Lacombe wrote:
>>>>
>>>>>I just finished implementing the null move. At first, I thought it wasn't
>>>>>saving much time. Then I realized that it does save a lot, but only at
>>>>>deeper plies. On the first few plies, I think, it even slows me down. Is
>>>>>that normal? Am I doing something wrong?
>>>>>
>>>>>I don't use the null move at the first ply, only at the second. It saves
>>>>>about 40% reaching depth 6, but it takes about 10% more to reach depths
>>>>>3 and 4.
>>>>>
>>>>>Hope you can help.
>>>>>
>>>>>Thanks.
>>>>>
>>>>>Sylvain.
>>>>
>>>>You're probably doing something wrong. But get a second opinion. :)
>>>>
>>>>My prog will use about half as many nodes early on, like up to ply 5;
>>>>then the effect becomes more pronounced, and at plies above 7 or 8 it can
>>>>show 2-3x reductions. But it depends on the position. If there's a single
>>>>best move, there's more reduction than if the position is pretty quiet.
>>>>So try it on a couple of different test suites, like WAC vs. Bratko-Kopec.
>>>>Or use the positional BK positions measured against the tactical ones
>>>>(12 each).
>>>>
>>>>Of course you wouldn't use null on the first ply; that would make it kind
>>>>of hard. Are you using a reduction factor of 2? That's the most popular,
>>>>it seems. I'm testing R=3 now, seems to work OK. But stick with 2 to
>>>>start.
>>>>
>>>>Also, make sure you don't do 2 nulls in a row. And for testing purposes,
>>>>you should probably limit a null move to one per search (I mean, once the
>>>>null has been done, don't do it again below that node). Then when it's
>>>>working better, test out multiple nulls.
>>>>
>>>>Null moves are susceptible to zugzwang positions, so just disable them in
>>>>the endgame. You can try out better things later.
>>>>
>>>>Don't do null when in check!
>>>>
>>>>There's an open question about allowing nulls in the PV, so look at that.
>>>>And make sure to clear the ep flag after a null.
>>>
>>>This is no open question. Of course always do nullmove.
>>>If it doesn't give a cutoff, then you can still see whether you need a
>>>mating extension.
>>>
>>>It saves me a lot anyway. ALWAYS do nullmove; just take care you don't
>>>improve your alpha with it if it fails. Some do that, but I'm not a big
>>>fan of that.
>>
>>Yes, just recently I took the alpha update out, since it seemed to be
>>causing problems on occasion. I don't know that for certain, but in my
>>case it's working better.
>>
>>Will
>
>My null move is a "test" search; it always has a zero-width window. All
>I care about is whether you get a beta cutoff, and it seems to be a slight
>improvement. I also had occasional trouble with the alpha updates when
>I used to use them, but the zero-width window solves this problem and
>seems to be slightly faster too!
>
>Your statement about not using two null moves in a row shouldn't matter.
>I had this rule in my program and Don Beal asked me, "why?" I took it out
>and the program worked fine. I think a few years ago my implementation of
>null move needed this rule to prevent infinite recursion, but I cannot
>remember for the life of me why this was so. The worst that can happen is
>that you do more depth-reduced searches, which is such a tiny fraction of
>the whole that you will not be able to measure the difference in time. But
>even this won't happen if you do not do a null move search when the "stand
>pat" score is already below beta. Some programs do the null move
>selectively anyway, or they do it if the score is CLOSE to beta. However,
>I decided to ignore any minor speedups this gave because it also
>introduces some risk.
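The rules discussed in the quoted thread (zero-width "test" window around beta, a reduction factor R, no null when in check, no two nulls in a row) can be sketched roughly as below. This is a toy stand-in, not anyone's actual engine: the "position" is just an integer, `eval()` and `in_check()` are deterministic dummies, and the two "moves" per node are fabricated, so only the null-move control flow is meant to be illustrative.

```c
#define R 2                      /* the popular reduction factor */

/* Toy stand-ins: a "position" is an unsigned int, two moves per node. */
static int eval(unsigned pos)    /* deterministic dummy evaluation */
{
    pos ^= pos >> 3;
    pos *= 2654435761u;          /* multiplicative hash for variety */
    return (int)(pos % 201) - 100;          /* score in [-100, 100] */
}

static int in_check(unsigned pos) { return (pos & 7u) == 0; } /* dummy */

long nm_nodes = 0;               /* node counter */

/* Negamax with a null-move "test" search: zero-width window around
 * beta, skipped when in check, never two nulls in a row (null_ok).
 * A real engine would also clear the ep flag after the null and
 * disable the whole trick in the endgame (zugzwang danger). */
int nm_search(unsigned pos, int depth, int alpha, int beta, int null_ok)
{
    nm_nodes++;
    if (depth <= 0)
        return eval(pos);

    if (null_ok && !in_check(pos)) {
        /* Pass: same position, opponent to move, depth reduced by R. */
        int v = -nm_search(pos, depth - 1 - R, -beta, -beta + 1, 0);
        if (v >= beta)
            return beta;         /* fail-high: prune this subtree */
    }

    for (unsigned m = 0; m < 2; m++) {       /* "generate" two moves */
        int v = -nm_search(pos * 2u + m + 1u, depth - 1, -beta, -alpha, 1);
        if (v >= beta)
            return beta;
        if (v > alpha)
            alpha = v;
    }
    return alpha;
}
```

The point of the zero-width window is exactly what Don says: the null search can only answer "does this fail high or not?", so the occasional trouble with updating alpha from a null score cannot arise by construction.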
>I really doubt you can prove one is better than the other, and my current
>program doesn't even register a speedup for this.
>
>- Don

I'm not interested in what gives me another 0.5% speedup, but in how to implement nullmove in such a way that it makes my search correct. One can see that nullmove is a correct form of search if one doesn't allow a third nullmove in a row: *only* forbid the nullmove if there were two nullmoves directly before it. Then you get back the same position and are forced to search with the same color, so you detect zugzwangs.

However, to compensate for the reduction factor, I reduce my reduction factor after the first nullmove. This is kind of tricky (you might have bad luck with the hashtables giving you a cutoff for R=3 where you already reduced once with R=3, so the second nullmove is also R=3 although you want it to be R=2 or R=1), but it works well. I'm sure that my nullmove eats up more nodes than always doing a nullmove.

Note that I nullmove with the window [-beta,-alpha]. I have found no evidence that this eats more nodes, but be warned. My move ordering is quite good, so alpha is usually beta-1.

Using MTD I have big problems doing this, as there is a 50% chance that a cutoff is no longer a cutoff: when I search with mtdvalue=1, usually all my cutoffs are 2, and if I then re-search with mtdvalue=10, suddenly those cutoffs of 2 are not seen as cutoffs anymore, so I need a re-search, which eats up more nodes.
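The scheme described here (allow two consecutive nulls, forbid the third, shrink the reduction factor after the first null, and search the null with the full [-beta,-alpha] window) might be sketched as follows. This is a toy framework (integer "positions", dummy eval and check detection, two fabricated moves per node), not Diep's actual code, and the concrete R values (3, then 2) are an illustrative assumption; the post says the second null should be R=2 or R=1.

```c
/* Toy stand-ins (integer "position", dummy eval/check), not real chess. */
static int vd_eval(unsigned pos)
{
    pos ^= pos >> 3;
    pos *= 2654435761u;
    return (int)(pos % 201) - 100;          /* score in [-100, 100] */
}

static int vd_in_check(unsigned pos) { return (pos & 7u) == 0; }

/* Null move allowed unless two nulls came directly before this node.
 * After null+null, the same side is to move in the same position, so
 * the forced regular search at that point is what detects zugzwang.
 * The second null gets a smaller reduction, and the null is searched
 * with the full [-beta,-alpha] window rather than a zero-width one. */
int vd_search(unsigned pos, int depth, int alpha, int beta, int nulls_in_row)
{
    if (depth <= 0)
        return vd_eval(pos);

    if (nulls_in_row < 2 && !vd_in_check(pos)) {
        int r = (nulls_in_row == 0) ? 3 : 2;    /* shrink R after 1st null */
        int v = -vd_search(pos, depth - 1 - r, -beta, -alpha,
                           nulls_in_row + 1);
        if (v >= beta)
            return beta;
    }

    for (unsigned m = 0; m < 2; m++) {          /* two toy moves */
        int v = -vd_search(pos * 2u + m + 1u, depth - 1, -beta, -alpha, 0);
        if (v >= beta)
            return beta;
        if (v > alpha)
            alpha = v;
    }
    return alpha;
}
```

Note how `nulls_in_row` resets to 0 for every regular move, so the "no third null" rule really is about consecutive nulls only; a hashtable cutoff interacting with the already-reduced depth, as mentioned in the post, is the part this sketch does not model.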
Last modified: Thu, 15 Apr 21 08:11:13 -0700
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.