Author: Robert Hyatt
Date: 20:33:42 03/11/03
On March 11, 2003 at 18:23:24, Peter Fendrich wrote:

>On March 11, 2003 at 13:16:02, Robert Hyatt wrote:
>
>>On March 10, 2003 at 10:18:27, Peter Fendrich wrote:
>>
>- snip -
>
>>When you enter Search(), you first call a procedure Singular().
>
>You seem to keep this function apart from the Search(). Is it to not destroy
>the hash tables? Playing around with alpha, beta like this is a hash table
>killer.

No. Just for simplicity.

>>What this procedure does is to search the move list, but using an alpha/beta
>>window that is offset downward by a SINGULAR_MARGIN. IE
>>
>>smove = Search(alpha-MARGIN, beta-MARGIN, etc)
>>
>>This is used to detect that _one_ move fails high only. If more than one fails
>>high, then we give up. If none fail high, we don't have a singular move (you
>>might not want to test all of them as it is expensive, but if something
>>doesn't fail high pretty early, it is pretty safe to give up and say "no
>>singular move here.")
>
>Do you mean early in the movelist?

Yes. Remember the idea of "ALL" nodes from Knuth/Moore? For those positions, it
is possible that all moves are very bad, and that none will fail high on the
offset-window search. No point in trying them all, but the fewer you try, the
more likely it is you miss a singular move that just happens to be later in the
move list.

>>What you hope to end up with is a single move that failed high on the offset
>>window while all others fail low. If this happens, pass this move back to
>>search and search everything as you normally would, except when you encounter
>>this specific move, extend the depth by whatever value you want.
>>
>>There are additional ideas you can do, such as Deep Blue's "sticky
>>transposition table", so that once you find a move is singular in a given
>>position, you extend it the next time without having to do the singular test
>>above.
>
>Next time (iteration) it's possible that the move isn't singular any more due
>to the increased depth.

Yes, but if you allow that, you introduce an instability. You at least want to
extend it in the current iteration if it was singular in the previous
iteration. The DB team paper explains this problem...

>>One thing I mentioned is that the singular test above is very expensive. To
>>control this cost, it is done with a "reduced depth", say depth-3 rather than
>>the normal depth-1, to limit the computational cost.
>
>Still quite expensive. It's hard for me to see how all this extra search just
>to find out that a move is singular can pay off. Singular moves don't have to
>be bad, and yet all these extra nodes...

You are hoping to find a move that looks good, so that a deeper search can show
either that it is safe to depend on or that it is actually bad. It's about
accuracy.

>>That's a basic explanation. I can give you more if you want. I've played with
>>_both_ types of SE (Hsu's approach and the "cheapo" approach given above). In
>>Cray Blitz, Hsu's approach seemed to work well. In Crafty, it did not. The
>>above approach seemed good in places, but bad in other places, and I have
>>never released a version with this code as I never thought it was better than
>>non-SE Crafty.
>>
>>But I will go back and play with it from time to time, when I have time.
>
>One should try to find some even cheaper schemes.
>For instance, we are in the first move at some level (that normally has a ~95%
>fail-high rate). Now when a move one ply down fails high, that is strange
>enough to re-search one ply deeper. Just a thought.
>
>/Peter

There are probably dozens of tangents to go off on and test... and who knows
but what one might be brilliant.
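
[Editor's note: below is a minimal C sketch of the "cheapo" offset-window
singular test described above. It is not Crafty source; the types, helper
routines (POSITION, MOVE, GenerateMoves(), MakeMove(), UnmakeMove(), the
Search() signature) and the margin/reduction/try-count values are placeholder
assumptions. Only the shape of the test follows the post: probe the first few
moves at reduced depth against a window lowered by SINGULAR_MARGIN, and report
a singular move only if exactly one of them fails high.]

/*
 * Sketch of the offset-window singular test described above.
 * NOT Crafty code; types, helpers, and constants are assumed placeholders.
 */
typedef struct position POSITION;   /* board state, defined elsewhere        */
typedef unsigned int    MOVE;       /* packed move, defined elsewhere        */
#define NO_MOVE            0
#define MAX_MOVES          256
#define SINGULAR_MARGIN    50       /* window offset (value is a guess)      */
#define SINGULAR_REDUCTION 3        /* test at depth-3, not depth-1          */
#define SINGULAR_TRIES     8        /* only probe the first few moves        */

int  GenerateMoves(POSITION *pos, MOVE *list);            /* assumed to exist */
void MakeMove(POSITION *pos, MOVE m);                      /* assumed to exist */
void UnmakeMove(POSITION *pos, MOVE m);                    /* assumed to exist */
int  Search(POSITION *pos, int alpha, int beta, int depth);

/* Return 1 and set *singular if exactly one early move fails high on the
 * lowered (alpha-MARGIN, beta-MARGIN) window at reduced depth.              */
int Singular(POSITION *pos, int alpha, int beta, int depth, MOVE *singular) {
  MOVE list[MAX_MOVES];
  int  fail_highs = 0;

  *singular = NO_MOVE;
  if (depth <= SINGULAR_REDUCTION)
    return 0;                                  /* too shallow to test        */
  int n = GenerateMoves(pos, list);
  for (int i = 0; i < n && i < SINGULAR_TRIES; i++) {
    MakeMove(pos, list[i]);
    int score = -Search(pos, -(beta - SINGULAR_MARGIN),
                             -(alpha - SINGULAR_MARGIN),
                             depth - SINGULAR_REDUCTION);
    UnmakeMove(pos, list[i]);
    if (score >= beta - SINGULAR_MARGIN) {     /* fails high on offset window */
      if (++fail_highs > 1)
        return 0;                              /* two fail-highs: not singular */
      *singular = list[i];
    }
  }
  return fail_highs == 1;
}

/* In the normal Search(), after calling Singular() at node entry, the only
 * change is an extension when the singular move itself is searched:
 *
 *   extend = (move == singular_move) ? SINGULAR_EXTENSION : 0;
 *   score  = -Search(pos, -beta, -alpha, depth - 1 + extend);
 */

The SINGULAR_TRIES cutoff mirrors the "give up early" point made above: at a
likely ALL node nothing will fail high on the lowered window, so probing every
move would only burn nodes.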