Author: Uri Blass
Date: 12:55:55 11/21/02
On November 21, 2002 at 09:10:32, Omid David Tabibi wrote:

>On November 21, 2002 at 07:18:11, Uri Blass wrote:
>
>>On November 21, 2002 at 06:26:16, Uri Blass wrote:
>>
>>>On November 21, 2002 at 06:25:17, Uri Blass wrote:
>>>
>>>>On November 21, 2002 at 04:52:13, Omid David Tabibi wrote:
>>>>
>>>>>On November 20, 2002 at 22:05:29, Robert Hyatt wrote:
>>>>>
>>>>>>On November 20, 2002 at 16:55:41, Gian-Carlo Pascutto wrote:
>>>>>>
>>>>>>>Nullmove in Deep Sjeng uses an algorithm of my own, but I can switch it back to other systems easily. I did so for running a few tests.
>>>>>>>
>>>>>>>I made a version which uses Heinz Adaptive Nullmove Pruning and a version which uses your verification nullmove.
>>>>>>
>>>>>>This would seem to be a bit harder than at first glance. They say that if the normal null-move search fails high, then do a D-1 regular search to verify that, but while in that verification search, no further verification searches are done, meaning that the normal null-move search fail-high is treated just like we do it today.
>>>>>>
>>>>>>I'm going to experiment with this myself, just for fun, but it seems that you need to pass some sort of flag down thru the search calls indicating that you are either below a verification-search node or not, so that recursive verification searches are not done...
>>>>>
>>>>>Exactly!! (finally someone read the article carefully)
>>>>>
>>>>>See Figure 3 for the detailed implementation (the flag you mentioned, which is passed down as a parameter to search(), is called 'verify' in the pseudo-code).
>>>>>
>>>>>At the first stage, leave alone the zugzwang detection part (the piece of code at the bottom of Figure 3). Due to instabilities, some programs might do a needless re-search. First let the algorithm work fine in general, and then do the zugzwang detection part.
>>>>
>>>>I let the algorithm work without zugzwang detection, and the first results seem not to be good.
>>>>
>>>>Some positions I get at the same depth, and the only position so far in the gcp test suite that I got at a smaller depth for tactical reasons is
>>>>[D]5rk1/1r1qbnnp/R2p2p1/1p1Pp3/1Pp1P1N1/2P1B1NP/5QP1/5R1K w - - 0 1
>>>>
>>>>I am going to try it in 10 sedconds per move and get resulkts in half an hour.
>>>>
>>>>Uri
>>>
>>>should be seconds, results, position
>>>I type too fast.
>>>
>>>Uri
>>
>>Here are the results of the new version:
>>
>>1 39 39
>>2 9 48
>>3 8 56
>>4 11 67
>>5 6 73
>>6 6 79
>>7 5 84
>>8 4 88
>>9 4 92
>>10 2 94
>>
>>Results of the old version seem better:
>>
>>1 40 40
>>2 17 57
>>3 8 65
>>4 7 72
>>5 6 78
>>6 4 82
>>7 8 90
>>8 2 92
>>9 3 95
>>10 1 96
>>
>>Remember also that I tested the old version at more than 10 seconds per move, so if it changed its mind after 20 seconds from the right move to the wrong move, the position is counted as a failure.
>>
>>Uri
>
>Did you use the exact implementation I described in Figure 3?
>
>BTW, you have to compare the algorithms at deeper searches. A fixed 10 ply depth will be fine.
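The verification-flag mechanism described in the quoted posts might look roughly like this in C. This is only a minimal sketch of that description, not the pseudo-code of Figure 3: the Position type, the helper routines (Quiesce, InCheck, NullMoveAllowed, MakeNullMove, UnmakeNullMove, SearchMoves), and the reduction R = 3 are assumed placeholders for whatever a given engine already provides.

    #define R 3                          /* assumed null-move depth reduction */

    typedef struct Position Position;    /* placeholder for the engine's board type */

    /* Hypothetical helpers an engine is assumed to provide already: */
    int  Quiesce(Position *pos, int alpha, int beta);
    int  InCheck(const Position *pos);
    int  NullMoveAllowed(const Position *pos);
    void MakeNullMove(Position *pos);
    void UnmakeNullMove(Position *pos);
    int  SearchMoves(Position *pos, int alpha, int beta, int depth, int verify);

    /* 'verify' is the flag mentioned above: while it is set, a null-move
     * fail-high is verified by a regular search one ply shallower; inside
     * that verification search the flag is cleared, so fail-highs below it
     * are trusted immediately, as in plain null-move pruning. */
    int Search(Position *pos, int alpha, int beta, int depth, int verify)
    {
        if (depth <= 0)
            return Quiesce(pos, alpha, beta);

        if (!InCheck(pos) && NullMoveAllowed(pos)) {
            MakeNullMove(pos);
            int value = -Search(pos, -beta, -beta + 1, depth - 1 - R, verify);
            UnmakeNullMove(pos);

            if (value >= beta) {
                if (!verify)
                    return beta;                   /* plain null-move cutoff */

                /* verification search: one ply shallower, and no further
                 * verification searches below this node */
                value = Search(pos, alpha, beta, depth - 1, 0);
                if (value >= beta)
                    return beta;
                /* verification failed: fall through to a normal full search */
            }
        }

        /* search the real moves, passing the same flag down */
        return SearchMoves(pos, alpha, beta, depth, verify);
    }

The root would be called with verify = 1, so verification stays enabled everywhere except below a verification-search node.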
Some notes: I did not use the exact implementation in Figure 3. The differences:

1) I do not use null-move pruning at depth=1, because I have other pruning there and I did not change it. Null-move pruning is used by Movei only when the depth is at least 2.

2) I did use a re-search, so I have no variable to tell me about the fail high; after a fail high, I trust the result of a normal search (verify=false) with the depth reduced by 1.

3) I have already tested it at 300 seconds per move (still without re-search) and I expect to have results tomorrow. I am going to compare it with the results of my previous version (null-move pruning with R=3). I also use other pruning ideas that I did not change.

Uri
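The variant described in points 1 and 2 above might look roughly as follows, reusing the same hypothetical helpers and reduction as in the sketch earlier; it is only one reading of "I trust the result of a normal search", not Movei's actual code. Null move is tried only at depth >= 2, and instead of a flag recording the fail high, the fail high is answered on the spot by a re-search one ply shallower with verification switched off, whose score is returned as-is.

    /* Sketch of the variant from notes 1 and 2 (hypothetical, not Movei's code) */
    int Search(Position *pos, int alpha, int beta, int depth, int verify)
    {
        if (depth <= 0)
            return Quiesce(pos, alpha, beta);

        /* note 1: null move only when the remaining depth is at least 2 */
        if (depth >= 2 && !InCheck(pos) && NullMoveAllowed(pos)) {
            MakeNullMove(pos);
            int value = -Search(pos, -beta, -beta + 1, depth - 1 - R, verify);
            UnmakeNullMove(pos);

            if (value >= beta) {
                if (!verify)
                    return beta;                   /* ordinary null-move cutoff */
                /* note 2: re-search one ply shallower with verify=false,
                 * and trust whatever score it returns */
                return Search(pos, alpha, beta, depth - 1, 0);
            }
        }

        return SearchMoves(pos, alpha, beta, depth, verify);
    }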