Author: Robert Hyatt
Date: 05:58:47 08/13/98
On August 13, 1998 at 06:13:25, Roberto Waldteufel wrote:

>On August 12, 1998 at 13:56:28, Robert Hyatt wrote:
>
>>On August 12, 1998 at 10:45:19, Roberto Waldteufel wrote:
>>
>>>Hi all,
>>>
>>>I have heard of the technique of internal iterative deepening, and I wonder if
>>>anyone can explain the details. I already use iterative deepening at the root
>>>(doesn't everyone?), and I am assuming that the "internal" refers to the same
>>>thing at interior nodes of the main search (i.e. depth > 1). Does anybody know if
>>>this works well with PVS/Negascout-type algorithms, where most of the nodes are
>>>searched with zero width? And what happens when you get a fail high or fail low at
>>>a depth lower than the original depth passed to the node? Do you ignore this
>>>information and keep deepening, or are there cases when you use this added,
>>>weaker (but cheaper) information to selectively prune the node at the full
>>>depth? Also, if anyone has done experiments with/without internal iterative
>>>deepening, what was the order of magnitude of improvement it produced?
>>>
>>>Best wishes,
>>>Roberto
>>
>>
>>It's an old idea, and you can find source code for it in search.c in Crafty;
>>just look for "internal" or "deepening".
>>
>>The idea is: if you reach a node where alpha != beta-1, which means that
>>this should be a PV-type node, and you don't have a hash table move to try,
>>you are in a critical part of the tree with no real good idea about move
>>ordering. So, in such a case, I just recursively call search with the same
>>position, but with depth-2. Say I am at ply=5, with depth=5 (5 plies left
>>before the qsearch takes over), and I notice this is a PV node as above, and I
>>notice that I have no hash table move to try. I simply call search and pass
>>it the same position, but with depth-2 (or 3 here). But that recursive call
>>will have the same problem: ply=5 still, depth=3, a PV node, and no hash move.
>>So it calls search with depth=1. This search does a normal 1-ply search (I
>>stop the internal iterative deepening if depth <= 2) and returns. The previous
>>search (with depth=3) now uses this 1-ply search result by taking the move from
>>the PV, and then uses that as the first move to examine in the 3-ply search.
>>Once we finish the 3-ply search, we return, and the original search now uses this
>>PV move in lieu of any hash table move and does the normal depth=5 search from
>>that point forward.
>>
>>This helps significantly at times, and not at all at other times. The best case
>>is where an N-ply search fails high, but you can't come up with a PV. When you
>>start the next iteration, this will "fill in the holes". I've seen improvements
>>of 10% in positions where there are fail highs, and none in positions where there
>>are none...
>
>Hi Bob,
>
>Thanks for the explanation. I may try this, but I am currently experimenting
>with a new version of search which may not fit in well with the method. It is a
>kind of hybrid of MTDF and PVS, and in the best case it only requires a single
>zero-width probe for each depth (MTDF always needs at least two), which seems to
>give some benefit in terms of speed (I have played several games vs. humans and
>other programs - I find this a better indicator than test suites), but to
>achieve this the algorithm does not return a PV, or even an accurate value for
>the root position (it does give a lower bound on the value, however), but it
>does find the best move, which is enough to play a game.
>But since the PV is not
>returned, I doubt that the internal iterative deepening would improve this
>particular algorithm very much.
>
>Best wishes,
>Roberto

The serious improvements come from those "fail high but can't resolve" positions
that crop up... because if you start off a new iteration (in a normal PVS search,
not in an MTD-like search) and you don't have a PV from the previous iteration to
guide you, and you don't have a hash table move for the first move at each ply,
the search starts out totally ignorant and can blow up... i.e., just like starting
a 10-ply search without the 1-9 ply iterations having been done to get good moves
ordered first.

This does help on occasion... but the general average improvement is minor until
one of these "biggies" hits. In my case, I "stuff" the PV from the last iteration
into the hash table before starting each new iteration, but if the PV has only one
move (a fail high that was never resolved to a real score), this algorithm kicks
in. For those that don't "stuff", it might help more, since it is possible for PV
nodes to get kicked out of the hash table by replacement policies, particularly
when the hash table becomes "full"...
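
To make the quoted explanation above concrete, here is a rough C sketch of how internal iterative deepening can be wired into a PVS-style negamax search. This is only an illustration under assumed names: the helper routines (Quiesce, HashMove, FirstMoveFromPV, SearchMoves) and the Position type are placeholders for whatever the engine already has, not Crafty's actual identifiers.

typedef struct Position Position;

extern int Quiesce(Position *pos, int alpha, int beta, int ply);
extern int HashMove(Position *pos);        /* 0 if the table has no move for this position */
extern int FirstMoveFromPV(int ply);       /* best move found at this ply by the last search */
extern int SearchMoves(Position *pos, int alpha, int beta,
                       int depth, int ply, int first_move);

int Search(Position *pos, int alpha, int beta, int depth, int ply) {
  if (depth <= 0)
    return Quiesce(pos, alpha, beta, ply);

  int first_move = HashMove(pos);

  /* Internal iterative deepening: at a PV node (window wider than a null
   * window) with no hash move, search the same position two plies
   * shallower and use the move it liked best as the first move of the
   * full-depth search.  The shallow call recurses the same way until
   * depth <= 2, matching the 5 -> 3 -> 1 example in the post above.      */
  if (alpha != beta - 1 && first_move == 0 && depth > 2) {
    (void) Search(pos, alpha, beta, depth - 2, ply);
    first_move = FirstMoveFromPV(ply);
  }

  /* Normal move loop, trying first_move before anything else and using
   * null-window re-searches for the remaining moves (standard PVS).      */
  return SearchMoves(pos, alpha, beta, depth, ply, first_move);
}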
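
And, in the same spirit, a sketch of the "stuffing" mentioned at the end: before starting a new iteration, walk the previous iteration's PV and store each move as the hash move for the position it was played from, so the first few plies always have a hash move even if replacement has overwritten those entries. Again, the names here (StoreHashMove, MakeMove, UnmakeMove, the pv array, and the Position type from the sketch above) are placeholders, not any engine's real API.

extern void StoreHashMove(Position *pos, int move);  /* record move as the hash move here */
extern void MakeMove(Position *pos, int move);
extern void UnmakeMove(Position *pos, int move);

void StuffPV(Position *pos, const int *pv, int pv_length) {
  int i;
  /* Walk forward through the PV, storing each move for the position it
   * belongs to, then playing it to reach the next position.              */
  for (i = 0; i < pv_length; i++) {
    StoreHashMove(pos, pv[i]);
    MakeMove(pos, pv[i]);
  }
  /* Undo the moves in reverse order so the caller gets its position back. */
  for (i = pv_length - 1; i >= 0; i--)
    UnmakeMove(pos, pv[i]);
}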