Author: Vincent Diepeveen
Date: 14:46:49 01/26/05
On January 26, 2005 at 08:50:03, Dr. Axel Steinhage wrote:

Your idea to delay the extensions until the next iteration is nevertheless most interesting to investigate, because it's a new idea to me. Note that I doubt whether, as depths get bigger and bigger in the future, I'll be able to keep SE turned on in Diep. SE is very expensive at bigger depths (well above 10 ply; at 10 ply it costs Diep roughly 0.5 ply on average).

However, it isn't true that the re-search is cheap. Basically that is the biggest cost SE causes in Diep: the R-3-ply re-search at nodes that are >= beta is what's really expensive. Comparatively few moves end up generating a singular extension, measured against the huge cost of all those re-searches. Note that I use reduction factor 3 in Diep, not 2; that already makes it quite a bit cheaper than what you do. And if you have already paid such extreme costs, why would you not extend right away?

The second expensive part of singular extensions is this. Suppose at 5 ply depthleft (the minimum depth in Diep for an SE re-search), which triggers a 2-ply search, some evaluation problem in Diep's eval causes move m in position p to be classified singular. What I do in Diep then is not only store in the hashtable that m is the singular move in p; I also store that the depth was 5 ply. Especially when you search in parallel, extensions become a mess when they are not stored properly in the hashtable. There is no need for special overwrite concerns: with 4 probes, an entry with search depth 5 is very UNLIKELY to get overwritten.

Basically, when the search comes to position p again, I only extend when the remaining depth is <= 5 ply. I do not extend when the depth at p is > 5 ply, because 5 ply is what was stored for p; extending at, say, depth == 10 based upon some 2-ply re-search is just not relevant. Perhaps it was some simple fork which you missed only because the search dropped straight into qsearch.

What I do want to note, however, is that most software searches extremely inefficiently.
Another thing might be your choice of S. I'm using S = 0.75 in Diep; any smaller value just didn't make sense, or simply let the tree explode. The bigger S is, the cheaper the re-search, obviously.

There are several reasons to use singular extensions in Diep:

A) It makes Diep tactically stronger, and because I extend very little by default, Diep then at least sees some tactics. You don't want to search 8 ply at blitz, or 10 ply at tournament level, without seeing some tactics. Diep's ugly search depths are an important reason.

B) Diep has the bad habit of always exchanging everything it sees. It must have inherited that from me; I'm also always exchanging everything. Singular extensions are brilliant for seeing exchanging lines really deeply, and might sometimes keep Diep from exchanging the board empty in a bad way. I have no proof that this is the case, however.

C) SE is of course a very lazy and inefficient way to do extensions. So just call me lazy.

D) The bad overhead that SE gives you, by means of many stupid small searches, you partly win back, because all that information ends up in the hashtable and acts as a kind of IID (internal iterative deepening). So your move ordering improves a bit at the bigger search depths thanks to it.

E) By far the most important reason is psychological, for the observer. Even in the opening position I might already get 1 or 2 probes to the EGTBs thanks to silly singular extensions. Now programmers know this is utter bullshit, but the average user doesn't. To them it is the ultimate proof that your program sees "EVERYTHING", since it hits the EGTBs right from the start.

Because of E I have left them in so far. Yet positionally it isn't gaining a single rating point. I would argue it makes your program positionally a lot weaker, BECAUSE it loses at least 0.5 ply or more for DIEP.
Additionally, singular extensions have at least a handful of further nasty effects:

a) They are brilliant at finding exactly that position which you misevaluate, and will steer play into it.

b) If you have the better position, they might blow the game to a draw, or even sometimes a loss, by forcing the position. This is because SE will especially extend capture moves, which means it sees long exchanging lines extremely deeply. At a 12-ply search it easily sees exchange lines that might get onto the board up to 25 plies deep. No problem. A major disadvantage is this: when the score after those 25 plies is 0.60, and the current best move, based upon a 12-ply line, gives 0.55, it will go for the exchange line. Yet it might be that if you searched that 12-ply line giving 0.55 on to 25 plies as well, it would be +2.0 there.

c) If you're not heavily forward pruning in a program, SE will just extend and extend. Used in combination with other dangerous extensions, you get massive explosions. What I learned is that mating extensions can be combined with recapture extensions, but neither of them with SE.

d) Singular extensions are great at finding the winning shot after you are already dead won on the board, or at seeing yourself lose right after making the blunder. That last one is really ugly psychologically for the operator. Especially since I'm an FM and already know when I'm losing over normal play, 20 moves before my opponent does, letting Diep make them happy already after 10 moves is no good idea.

e) Singular extensions are in fact more interesting when based upon R=0; at R=3 their effect is, to use a Dutch saying, "like eating mustard after the meal" (too little, too late). In 1995 I had actually invented singular extensions myself and used them without a reduction factor, and basically only after a fail high occurred, not for moves < beta. At the search depths I got in 1995, which was about 6 ply, singular extensions were very useful.
Of course bloody expensive, but if all you get is 6 ply, then a factor-2 overhead is no major problem if it enhances tactical strength a lot. In 2005 I'm not so sure about SE anymore. I'm pretty sure the only really interesting thing now is to get a huge positional search depth rather than a tactical search depth. Don't ever forget there are other ways to extend your search.

Vincent

>Hi all,
>
>I registered to this forum just a week ago. However, I have quite some experience
>in chess programming, although I always did it for myself only. In the late 80s
>I wrote an assembler program for the Z80 which was on the same level as Colossus 4
>at that time. Then I stopped programming for more than a decade. One year ago I
>restarted with a new engine in ANSI C. I named it "Astimate", and considering the
>limited time I can invest in this hobby, I think I am quite far along already. I am
>very proud of the fact that I never once looked into someone else's code but
>wanted to discover everything on my own. Being a scientist by education, I did read
>the important publications though! Doing that, I learned a lot about singular
>extensions, starting out from the first paper of the Deep Blue team up to the
>various comments by Bob and others here in the forum.
>It seems to me that so far SE is still a "nice idea" only. The problem seems to
>be the efficient implementation. So I sat down for quite some time and tried to
>come up with an algorithm that works well in practice. Now, I think, I have found
>one. I made some tests, and so far it looks very good, as it finds lots of
>combinations earlier without adding a lot of overhead. Before going into more
>testing, I would like to hear the programming gurus' opinion about the idea. So
>please give your comments. The algorithm works as follows:
>
>I do a normal search (soft NegaScout, PVS, aspiration, verified nullmove (R=3),
>hashtables, killers, ...) and keep track of the best and the second-best move
>when testing out all possible moves.
>When the best and the second best differ by a given margin S, I define the move
>as singular. So far, this is well known. But now come two innovations:
>
>1. In case of a fail high, the best move may be singular but I don't know it,
>because I have cut off before searching all moves. I prevent this as follows:
>in case of a fail high, I look whether the second-best move is within the S
>window. If so, I cut off, because the best move cannot be singular. If not, I go
>on searching (although I could cut off already!) with reduced depth (R=2). I do
>this until I have searched all moves, or until I have a second-best move within
>S (or another fail high, of course). If all the other moves are outside the S
>window, I define the move singular.
>
>2. If I find a move to be singular, I do NOT do a re-search. Instead, I store
>this information in the hashtable and prevent this hash entry from being
>overwritten in the future. In the next depth iteration, I then already know
>upfront from the hash entry that this move might be singular, and extend its
>max depth. Of course, I don't do the singularity search on a move I have
>already classified as singular.
>
>Because of the reduced-depth singularity search after the cutoff, and the
>omission of the re-search, there is practically no overhead other than the
>extension itself. Of course, this algorithm is "cheapo SE", as it might miss
>quite a lot of singular moves: first, the reduced depth might not discover a
>singularity; second, the "second best" value may be wrong, as it might also
>only be a bound (I have to analyse that). Finally, the information that a move
>is singular stems from the last depth iteration; in the current depth
>iteration, the move may not be singular anymore.
>
>Despite these drawbacks, the algorithm turned out to work quite well on some
>test positions with my engine. Before doing more tests, however, I would rather
>like to hear what you think about my idea.
>
>Axel