Author: Martin Giepmans
Date: 12:36:45 11/23/02
On November 23, 2002 at 14:06:05, Uri Blass wrote:

>On November 23, 2002 at 13:00:34, Martin Giepmans wrote:
>
>>On November 23, 2002 at 12:10:02, Uri Blass wrote:
>>
>>>On November 23, 2002 at 11:37:25, Martin Giepmans wrote:
>>>
>>>>On November 23, 2002 at 08:48:36, Omid David Tabibi wrote:
>>>>
>>>>>On November 23, 2002 at 08:45:00, Uri Blass wrote:
>>>>>
>>>>>>On November 23, 2002 at 08:11:37, scott farrell wrote:
>>>>>>
>>>>>>>Just after other people's thoughts.
>>>>>>>
>>>>>>>I think Omid's work overlooked the adaptive null move searching many of us
>>>>>>>do, i.e. transitioning from R=3 to R=2.
>>>>>>>
>>>>>>>I think adaptive null move tries to GUESS where to use R=2 to reduce the
>>>>>>>errors that R=3 makes. I guess it depends on how often this GUESS is
>>>>>>>correct, the cost of the verification search, and how long it takes the
>>>>>>>adaptive searching to catch the error at the next ply.
>>>>>>>
>>>>>>>Has anyone looked at setting the verification search to a reduced depth of
>>>>>>>2 (rather than 1)? Obviously to reduce the cost of the verification search.
>>>>>>
>>>>>>Omid checked it, but you also reduce the gain.
>>>>>>
>>>>>>I think that I will look for good rules for when to do the verification
>>>>>>search, so that the cost will be significantly smaller but the gain is going
>>>>>>to be the same in at least 99% of the cases.
>>>>>>
>>>>>
>>>>>I'm currently working on other variations. The initial results are promising.
>>>>>
>>>>>>Uri
>>>>
>>>>I have done some tests with your method at greater depths.
>>>>At depth 12, verified R=3 still had an overhead (in terms of tree size) of
>>>>about 25% compared to pure R=3.
>>>>(My engine uses a simple Q-search that shouldn't give problems here.)
>>>>
>>>>So the question is whether your expectation that the tree sizes of R=3 and
>>>>verified R=3 converge at greater depths (> 11) really holds.
>>>>
>>>>Needs more testing, I think.
>>>>
>>>>Another point:
>>>>I would expect that verified R=3 becomes less safe at greater depths.
>>>>The subtrees in which you don't verify the null move (after the verification)
>>>>become deeper, and I see no reason - on logical grounds - why this shouldn't
>>>>give safety problems.
>>>>Even if R=3 and verified R=3 converge in terms of tree size, the safety (or
>>>>rather the lack of it) might also converge ...
>>>>
>>>>In any case, thanks for sharing.
>>>>
>>>>Martin
>>>
>>>I see reasons why the safety does not converge.
>>>
>>>A common problem with null move is cases when the first move has a threat, but
>>>the threat can only be seen at big depth.
>>>
>>>With verified null move pruning you are going to see the threat clearly earlier.
>>>
>>>When the threat is in a move that is not close to the root, then the danger of
>>>being wrong in detecting the threat is smaller, because you can miss one threat
>>>but another threat may give the same result.
>>>
>>>Uri
>>
>>I think it will be clearer if you replace the somewhat confusing expression
>>"close to the root" with "distance to the leaves".
>
>I do not see the confusion, because distance to the leaves has the opposite
>meaning to distance to the root.
>
>In terms of distance to the leaves (remaining depth) I say the following:
>
>It is more important to detect threats when the distance to the leaves is big.
>
>The point is that when the distance to the leaves is small, even if you are
>wrong in detecting threats there is a good chance that it is going to change
>nothing in the final score.
>
>I also have examples where the algorithm saves me 2 plies when the depth is big.
In the subtrees after verification, the remaining depth becomes bigger with every
iteration. For instance (not counting extensions, for the sake of simplicity), if
you search 14 plies and you do the verification at ply 2, you get a subtree of 12
plies. In that tree, null move with R=3 is done without verification. This is
risky. If you search 16 plies, the same tree will be 14 plies. More risk ...

The question is whether errors in a subtree will easily propagate to the root and
influence the score and the best move. It is possible that the verification acts
as a kind of barrier that blocks some or most errors and keeps them away from the
root. I don't think so, but maybe ...

>Here is one of them.
>
>The latest Movei needs 11 plies, whereas the previous Movei needs 13 plies and a
>long time to resolve the fail high.
>
>[D]r1bq1rk1/3pbppp/p1n1p3/4P3/2B1NP2/PP5Q/1B4PP/3R1R1K w - - 0 1
>
>depth=11 +1.20 f4f5 e6f5 e4d6 e7d6 e5d6 d8g5 f1f5 g5g6 h3h5 g6h5 f5h5 c8b7
>Nodes: 15025408 NPS: 158378
>Time: 00:01:34.87
>
>depth=11 +1.21 e4f6
>Nodes: 18075970 NPS: 158979
>Time: 00:01:53.70
>
>depth=11 +3.72 e4f6 e7f6 e5f6 d7d5 c4d3 h7h6 f6g7 f8e8 h3h6 f7f5 d3f5 e6f5 h6c6
>Nodes: 69601416 NPS: 156492
>Time: 00:07:24.76
>
>I did not analyze the reason for the difference, but it is possible that Movei
>saw no threat after Nf6+ gxf6 exf6 and the verification search helped it to see
>the threat.
>
>Uri

This is interesting.
After Nf6+ gxf6 exf6 (null) fxe7 Qxe7, white has regained the lost material, but
black's kingside is damaged. So the null move at ply 4 should probably not result
in a cutoff (fxe7 is really a threat).
Unless ... you don't evaluate king safety in terms of the pawn shield, and I seem
to remember that you don't do that in Movei.

Martin
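
For readers following along, here is roughly what the first of the two schemes
compared in this thread looks like. The adaptive reduction scott farrell mentions
guesses, from the remaining depth, where R=3 is safe and falls back to R=2
elsewhere. This is only a minimal C sketch; Position, qsearch() and the null-move
helpers are placeholders, not code from Movei or any other engine discussed here,
and the depth threshold of 6 is just one common choice.

    /* Minimal sketch of adaptive null-move pruning: pick the reduction R
       from the remaining depth instead of using a fixed R everywhere.
       All types and helpers below are hypothetical placeholders. */

    typedef struct Position Position;

    extern int  qsearch(Position *pos, int alpha, int beta);
    extern int  null_move_allowed(const Position *pos);
    extern void make_null_move(Position *pos);
    extern void unmake_null_move(Position *pos);

    int search(Position *pos, int alpha, int beta, int depth)
    {
        if (depth <= 0)
            return qsearch(pos, alpha, beta);

        if (null_move_allowed(pos)) {
            /* The GUESS: R=3 when plenty of depth remains, R=2 closer to
               the leaves (threshold varies from engine to engine). */
            int R = (depth > 6) ? 3 : 2;

            make_null_move(pos);
            int score = -search(pos, -beta, -beta + 1, depth - R - 1);
            unmake_null_move(pos);

            if (score >= beta)
                return score;   /* unverified null-move cutoff */
        }

        /* ... normal move loop follows ... */
        return alpha;
    }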
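
And a condensed rendering of the verified scheme the thread is about, as I read
Omid's pseudocode: null move is tried everywhere with a uniform R=3, but while
the verify flag is set, a null-move fail-high does not cut off immediately;
instead the node is searched one ply shallower with verification switched off in
the whole subtree below, and if that search fails low after all, the node is
re-searched at full depth. The verify-off subtree is exactly the region Martin
worries about above. Again a sketch with the same placeholder helpers, not
anyone's actual engine code.

    /* Sketch of verified null-move pruning with uniform R=3. */

    #define R 3

    int vsearch(Position *pos, int alpha, int beta, int depth, int verify)
    {
        int fail_high = 0, best, score;

        if (depth <= 0)
            return qsearch(pos, alpha, beta);

        if (null_move_allowed(pos)) {
            make_null_move(pos);
            score = -vsearch(pos, -beta, -beta + 1, depth - R - 1, verify);
            unmake_null_move(pos);

            if (score >= beta) {
                if (!verify)
                    return score;   /* cut off at once, as in plain R=3 */
                /* verification: search this node one ply shallower, with
                   no further verification anywhere below it */
                depth--;
                verify = 0;
                fail_high = 1;
            }
        }

    research:
        best = -30000;
        /* ... normal move loop, recursing with
           score = -vsearch(pos, -beta, -alpha, depth - 1, verify); ... */

        if (fail_high && best < beta) {
            /* the null-move cutoff did not verify: the threat is real,
               so restore the ply and search this node normally */
            depth++;
            fail_high = 0;
            verify = 1;
            goto research;
        }
        return best;
    }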