Author: Stuart Cracraft
Date: 13:15:44 06/25/04
So, normally in the literature I've read (and code I've implemented), the practice has been to do a null-move search with R set to 2, i.e. search(...depth-1-R). But in deep full-width searches to depth 8, 9, 10, 11 and beyond, a reduction of 2 does not seem to help as much as larger reductions do, since larger values of R leave the null-move searches with much smaller subtrees to explore.

My question is: what have people done to experiment with larger values of R, and how did they verify that the returned scores are still effective and that horrid moves aren't produced?

I've used R set to ply/2 and ply-2, where ply is the target depth of the overall iteration. The savings in time are substantial and the moves look the same or as good; the tree searched is drastically smaller, of course, but I am worried about quality. Is R of 2 or 3 a holdover from the slow-computing days in the literature, with higher settings in use nowadays?

Assume everything else about the null-move search is held the same (not done in endgames, not done in the root position, no more than one null move in a row during the search without an intervening normal move, etc.).

Thanks ahead,
Stuart
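For readers following along, here is a minimal C sketch of null-move pruning with a depth-dependent R, under the restrictions listed above. This is not Stuart's code; the helper routines (evaluate, in_check, is_endgame, make_null_move, unmake_null_move, search_moves) are assumed engine plumbing and their names are illustrative.

/* Sketch: null-move pruning with an adaptive reduction R. */

extern int  evaluate(void);          /* static evaluation of the current position */
extern int  in_check(void);          /* nonzero if the side to move is in check  */
extern int  is_endgame(void);        /* nonzero in zugzwang-prone material       */
extern void make_null_move(void);    /* pass the move to the opponent            */
extern void unmake_null_move(void);
extern int  search_moves(int depth, int alpha, int beta, int allow_null);

/* Adaptive reduction: a common compromise between the classic fixed
   R = 2 and the aggressive ply/2 or ply-2 settings discussed above. */
static int null_reduction(int depth)
{
    return (depth >= 6) ? 3 : 2;
}

int search(int depth, int alpha, int beta, int allow_null)
{
    if (depth <= 0)
        return evaluate();           /* a real engine would drop into quiescence */

    /* Null-move pruning under the usual restrictions: not when in check,
       not in zugzwang-prone endgames, and never two null moves in a row
       (allow_null is cleared for the reply search). */
    if (allow_null && !in_check() && !is_endgame()) {
        int R = null_reduction(depth);
        make_null_move();
        int score = -search(depth - 1 - R, -beta, -beta + 1, 0);
        unmake_null_move();
        if (score >= beta)
            return beta;             /* fail-hard cutoff: the position is so
                                        strong that even passing fails high */
    }

    /* Fall through to the normal full-width move loop. */
    return search_moves(depth, alpha, beta, 1);
}

With a depth-dependent R of this sort (as in Heinz's adaptive null-move pruning), the deep nodes get the bigger savings while shallow nodes keep the safer R = 2; the ply/2 and ply-2 settings above are more aggressive points on the same curve.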