Author: Robert Hyatt
Date: 10:48:26 08/22/01
On August 22, 2001 at 00:03:16, Peter McKenzie wrote:

>On August 21, 2001 at 21:01:56, Robert Hyatt wrote:
>
>>On August 21, 2001 at 16:58:30, Scott Gasch wrote:
>>
>>>Hi,
>>>
>>>I am thinking about time management these days. Currently I have rules that
>>>make the engine take extra time when it's just out of book or if it sees two
>>>fail lows during a search. It will also move faster if the root position is
>>>blocked and it's considering a move that won't help unblock it.
>>>
>>>Additionally it moves faster if it has a clear recapture (yes, I know this is
>>>dangerous but I have several safeguards in place and I have not been bitten by
>>>it yet).
>>>
>>>I have seen other engines that handle time differently, though. For example, I
>>>don't take extra time to resolve a fail high at the root. And with the "take
>>>extra time and resolve the fail low" rule, I only take a little extra time and
>>>will play without resolving if that time runs out. I have observed ferret
>>>seeming to take a bunch of extra time in these cases though. Should we always
>>>resolve root fail highs or fail lows? How important is this?
>>>
>>>Thanks,
>>>Scott
>>
>>
>>I don't resolve fail highs at the root either if I run out of time. I don't
>>see the point.
>>
>>But you did miss one important idea that has been around forever: don't time
>>out the search when partially through analyzing a move at the root. And I mean
>>"any" move. IE once you start searching a move at the root (unless it is the
>>first move on a new iteration) and you run out of time, continue searching
>>until you finish this move. This is important. Because often on the last
>>iteration you will start a search on a non-first move, and if you have enough
>>time, you will change to that new best move. But only if you have enough
>>time. Make sure you do by not stopping until you search that move
>>completely. With null-move, most of the time this happens instantly anyway,
>>so it costs you nothing. Occasionally you will discover that you cannot
>>prune away things you could in previous iterations, because this move is about
>>to become a new best move. Why stop until you know it is or it isn't???
>>
>>easy to implement...
>
>Thanks for that Bob, it sounds like sound advice and it is something I don't do
>in my program. I plan on trying it ASAP though :-)
>
>cheers,
>Peter

This idea was one of the many contributions to Cray Blitz by Harry Nelson. He
had me count the nodes for each root move during the search. And once we had
searched the first root move on any iteration, we could not "time out" until we
had searched _all_ the root moves that had "interesting" node counts. IE if you
had 5M, 5M, 4M, 300K, 20K, 150K, etc., the first three are "interesting" as
those trees are very large compared to the others. They could be large due to
lots of checks, or they could be large due to their scores rising enough to be
"close" to the best move.

In Crafty, I use Harry's approach in the parallel splitting at the root.
Splitting at the root saves nodes, but the search finds a new best move faster
if each root move is searched one at a time, with all the processors working on
each one in turn. So my "hybrid" split algorithm searches the first three root
moves (the "interesting" ones above) one at a time, using all processors on
each, then it searches the rest of the root moves in parallel, which is more
efficient. (I probably shouldn't have explained that since I don't know of
anyone else that does it that way, but "oh well..." :) )
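For illustration only, here is a minimal sketch of that hybrid split. This is
not Crafty's actual code: search_with_all_threads() and search_one_move() are
made-up stand-ins for the real parallel-search machinery, and all the names and
constants are invented.

  #include <pthread.h>
  #include <stdatomic.h>

  #define SERIAL_ROOT_MOVES 3     /* first moves searched one at a time  */
  #define MAX_THREADS       16

  static int n_root_moves;        /* number of legal moves at the root   */
  static atomic_int next_move;    /* next root move to hand to a thread  */

  /* hypothetical stand-ins for the real parallel search machinery */
  static void search_with_all_threads(int move) { (void)move; }
  static void search_one_move(int move)         { (void)move; }

  static void *root_worker(void *arg) {
    int m;
    (void)arg;
    while ((m = atomic_fetch_add(&next_move, 1)) < n_root_moves)
      search_one_move(m);         /* each thread takes whole root moves  */
    return NULL;
  }

  void search_root(int n_threads) {
    int serial = n_root_moves < SERIAL_ROOT_MOVES ? n_root_moves
                                                  : SERIAL_ROOT_MOVES;
    pthread_t tid[MAX_THREADS];
    int t, m;

    /* phase 1: gang every thread onto each of the first few moves in
       turn, so a new best move is discovered as early as possible */
    for (m = 0; m < serial; m++)
      search_with_all_threads(m);

    /* phase 2: split the remaining moves across the threads, which
       wastes less work since these moves rarely become the best move */
    atomic_store(&next_move, serial);
    for (t = 0; t < n_threads && t < MAX_THREADS; t++)
      pthread_create(&tid[t], NULL, root_worker, NULL);
    for (t = 0; t < n_threads && t < MAX_THREADS; t++)
      pthread_join(tid[t], NULL);
  }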
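And to make the "don't time out in the middle of a root move" rule from the
quoted post concrete, a minimal sketch. Again, this isn't anyone's actual code
and all the variable names are invented:

  #include <time.h>

  static clock_t start_time;          /* set when the search starts        */
  static double  time_limit;          /* seconds budgeted for this move    */
  static int     current_root_move;   /* 1 = first move of this iteration  */

  static double elapsed(void) {
    return (double)(clock() - start_time) / CLOCKS_PER_SEC;
  }

  /* called periodically from inside the tree search; returns nonzero
     when the search should be aborted */
  int time_check(void) {
    if (elapsed() < time_limit)
      return 0;                       /* still inside the budget           */
    if (current_root_move == 1)
      return 1;                       /* aborting the first move only loses
                                         the unfinished iteration          */
    /* out of time, but partway through a non-first root move: keep
       going, because this move may be about to become a new best move;
       the root loop checks the clock again once the move completes */
    return 0;
  }

In a real engine you would also keep a hard upper bound on the clock, so one
pathological subtree can't lose the game on time.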
My current timing code works exactly as I explained in the previous post, which
is a bit different from the Cray Blitz approach. But the two are very closely
related, as you can see. The current approach might search the first and second
moves, time out, and miss the fact that with a bit more searching, the third
move would have become the new best move. Cray Blitz would have found this. But
Cray Blitz would also have burned a good bit of time searching that third move,
even when its node count was high simply because the position went nuts and
triggered search extensions (particularly singular extensions). This could
_really_ "smoke the clock" on rare occasions. The current approach is simpler
and less speculative, while retaining the basic "look and feel" of the Cray
Blitz idea...
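For comparison, Harry's node-count rule might look roughly like this. The
"interesting" threshold of half the biggest tree is a made-up number, as are
all the names:

  #define MAX_ROOT_MOVES 256
  #define INTERESTING 0.5             /* hypothetical: "interesting" means
                                         a tree at least half the biggest */

  static unsigned long root_nodes[MAX_ROOT_MOVES]; /* nodes per subtree   */
  static int n_root_moves;

  /* once the first root move of an iteration is finished, the search
     may not time out until every "interesting" move has been searched */
  int may_time_out(int moves_searched) {
    unsigned long biggest = 0;
    int i;
    for (i = 0; i < n_root_moves; i++)
      if (root_nodes[i] > biggest)
        biggest = root_nodes[i];
    for (i = moves_searched; i < n_root_moves; i++)
      if ((double)root_nodes[i] >= INTERESTING * (double)biggest)
        return 0;                     /* an interesting move is unsearched */
    return 1;
  }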