Author: Robert Hyatt
Date: 09:59:42 01/26/05
On January 26, 2005 at 08:50:03, Dr. Axel Steinhage wrote:

>Hi all,
>
>I registered to this forum just a week ago, but I have quite some experience
>in chess programming, although I always did it only for myself. In the late
>'80s I wrote a Z80 assembler program that was on the same level as Colossus 4
>at the time. Then I stopped programming for more than a decade. One year ago
>I restarted with a new engine in ANSI C. I named it "Astimate" and,
>considering the limited time I can invest in this hobby, I think I am quite
>far along already. I am very proud of the fact that I never looked into
>anyone else's code but wanted to discover everything on my own. Being a
>scientist by education, I did read the important publications, though! Doing
>that, I learned a lot about singular extensions (SE), starting from the first
>paper by the Deep Blue team up to the various comments by Bob and others
>here in the forum.
>
>It seemed to me that so far SE is still only a "nice idea"; the problem
>seems to be finding an efficient implementation. So I sat down for quite
>some time and tried to come up with an algorithm that works well in
>practice. Now, I think, I have found one. I have run some tests, and so far
>it looks very good: it finds lots of combinations earlier without adding
>much overhead. Before going into more testing, I would like to hear the
>programming gurus' opinion about the idea, so please give your comments.
>The algorithm works as follows:
>
>I do a normal search (soft NegaScout, PVS, aspiration, verified null move
>(R=3), hash tables, killers, ...) and keep track of the best and the second
>best move while trying out all possible moves. When the best and the second
>best differ by a given margin S, I define the best move as singular. So far,
>this is well known. But now come two innovations:
>
>1. In case of a fail high, the best move may be singular, but I don't know
>it, because I have cut off before searching all moves. I prevent this as
>follows: on a fail high, I check whether the second best move is within the
>S window. If so, I cut off, because the best move cannot be singular. If
>not, I keep searching (although I could cut off already!) with reduced depth
>(R=2). I do this until I have searched all moves, or until I have a second
>best move within S (or another fail high, of course). If all the other moves
>fall outside the S window, I define the move as singular.

Your "fail high" case seems terribly expensive: it at least doubles the size
of the tree, because searching the second move after you already have a fail
high is pure overhead. DB did this as well, but they searched to a
significantly reduced depth rather than doing a normal search. Hsu called
this "FH singular" (fail-high singular). I implemented it exactly as he
spelled it out when I added SE to Cray Blitz sometime in early 1993. Even
with a reduced-depth search this was expensive, costing me a ply of search
depth overall.

>2. If I find a move to be singular, I do NOT do a re-search. Instead, I
>store this information in the hash table and prevent that hash entry from
>being overwritten in the future. In the next depth iteration, I then know
>from the hash entry up front that this move might be singular, and I extend
>its maximum depth. Of course, I don't run the singularity search on a move
>I have already classified as singular.

OK, also a known idea. The problem is that _this_ might be the last
iteration, and then you fail to pick up the tactics that the SE move would
expose, since you can't complete the next search in time.
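For concreteness, here is a rough C sketch of how I read your step 1. It is
an illustration only, not code from Astimate or Crafty: SINGULAR_MARGIN
(your S), R_SINGULAR, and the POSITION/MOVE types and move helpers are all
invented names.

    #define SINGULAR_MARGIN 75      /* the margin S; the value is a placeholder */
    #define R_SINGULAR       2      /* extra reduction after the fail high      */

    /* negamax with the fail-high singularity test; *singular is set when
       every alternative falls outside the S window below the best move */
    int search(POSITION *pos, int alpha, int beta, int depth, int *singular)
    {
        int best = -INFINITE, second = -INFINITE;
        int failed_high = 0, child;
        MOVE move;

        *singular = 0;
        if (depth <= 0)
            return evaluate(pos);   /* quiescence would go here in practice */

        while ((move = next_move(pos)) != NO_MOVE) {
            /* after a fail high we keep searching, but reduced by R=2,
               only to see whether a second move lands within S of best */
            int d = depth - 1 - (failed_high ? R_SINGULAR : 0);
            int score;

            make_move(pos, move);
            score = -search(pos, -beta, -alpha, d, &child);
            unmake_move(pos, move);

            if (score > best)        { second = best; best = score; }
            else if (score > second) { second = score; }

            if (best >= beta) {
                if (second > best - SINGULAR_MARGIN)
                    return best;    /* second move within S: cannot be singular */
                failed_high = 1;    /* otherwise keep going at reduced depth    */
            }
            if (score > alpha && score < beta)
                alpha = score;      /* don't let alpha cross beta after a FH */
        }
        if (second <= best - SINGULAR_MARGIN)
            *singular = 1;          /* all alternatives were outside the window */
        return best;
    }

Note that the sketch keeps searching the extra moves with the same
(alpha, beta) window; a real implementation would probably probe with a
window around best - S instead, which is cheaper.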
>Because of the reduced-depth singularity search after the cutoff and the
>omitted re-search, there is practically no overhead other than the extension
>itself. Of course, this algorithm is "cheapo SE", as it might miss quite a
>lot of singular moves: first, the reduced depth might not discover a
>singularity.

You only have to do the reduced-depth search at fail-high nodes. At a PV
node it is easier, since you are going to search every branch anyway, and
the only challenge is to prove that all but one move are some value S worse
than the best move before you call the best move singular.

>Second, the "second best" value may be wrong, as it might also only be a
>bound (I have to analyse that). Finally, the information that a move is
>singular stems from the previous depth iteration; in the current iteration,
>the move may not be singular anymore.
>
>Despite these drawbacks, the algorithm turned out to work quite well on some
>test positions with my engine. Before doing more tests, however, I would
>like to hear what you think about my idea.
>
>Axel

You ought to continue testing. I have tried a couple of cheapo SE
implementations and have never (yet) had any significant luck with them.
GCP even tried the full Hsu implementation in Crafty, and we never saw any
real improvement as a result. I was (and am) suspicious that aggressive
null-move search works against SE, as one is trying to deepen the search
while the other is trying to do the opposite. But you never know unless you
try and try and try. I've had plenty of ideas that failed, and failed, and
failed, and then finally worked reasonably well much later, after changes to
the base implementation as well as other changes to the program that also
influenced the idea.
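And going back to your idea 2 for a moment, the hash-table side could look
something like the sketch below. Again, the entry layout, the flag bits, and
the probe/store helpers are invented for illustration, not taken from any
real engine.

    #define HASH_SINGULAR 0x40      /* move proved singular last iteration   */
    #define HASH_LOCKED   0x80      /* keep the entry from being overwritten */

    typedef struct {
        unsigned long long key;     /* position signature                    */
        MOVE best_move;             /* the possibly singular best move       */
        short score;
        unsigned char depth;
        unsigned char flags;        /* bound type plus the two bits above    */
    } HASH_ENTRY;

    /* assumed engine primitives, not shown here */
    extern HASH_ENTRY *hash_probe(const POSITION *pos);
    extern void hash_store(const POSITION *pos, MOVE move, int score,
                           int depth, unsigned flags);

    /* instead of re-searching a singular move, just record the fact so the
       next depth iteration can pick it up */
    void record_singular(const POSITION *pos, MOVE move, int score, int depth)
    {
        hash_store(pos, move, score, depth, HASH_SINGULAR | HASH_LOCKED);
    }

    /* one extra ply for a move the previous iteration flagged as singular;
       the same test also exempts the move from the singularity search */
    int singular_extension(const POSITION *pos, MOVE move)
    {
        HASH_ENTRY *h = hash_probe(pos);
        return (h && (h->flags & HASH_SINGULAR) && h->best_move == move);
    }

One thing to watch with the lock: a locked entry also can't be replaced when
the same position is reached at a different draft, so the lock bits probably
need to be cleared at the start of each new search.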