Author: Tord Romstad
Date: 12:27:22 03/15/04
On March 15, 2004 at 14:21:19, Vasik Rajlich wrote:

>Hi Tord,
>
>this is great stuff! Expect more questions soon - and hopefully I'll have
>some useful feedback as well.
>
>First one very general question: to what extent are you confident that what
>you outlined here improves your engine? Do you test each piece separately,
>do you test changes in groups, have you tried longer vs shorter time
>controls, etc? I suspect that you are getting some benefit, given what is
>known about your program, including NPS rates - just curious how you go
>about validating what you do.

I am very confident that the sum of all the tricks improves my engine. I
haven't performed any tests recently, but the last time I tried, the
difference in strength compared to a brute-force nullmove version of my
engine was something like 150 Elo points. My tests consisted of self-play
games, gauntlets against other amateur engines, and tactical test suites.
All tests pointed in the direction of the selective version being much
stronger. I have not tried slow time controls, though.

On the other hand, I have no reason to believe that all the countless search
parameters I use have anywhere near optimal values. There is an endless
number of things to experiment with. To a certain extent I have tried each
piece separately, but I am not even close to having explored the huge set of
possible combinations of settings.

>>As promised, I send you a brief description of how my selective search
>>works. Because my search looks rather unconventional (for instance,
>>as you might remember from CCC, I use the concept of the "cost" of a
>>move instead of extensions and reductions), I've tried to translate
>>what I do into more normal language.
>
>Actually, I also use this concept, called "weight" in my case. The idea came
>from some descriptions of Junior's search in the CCC archives. Anyway, as
>you pointed out, it's just a cosmetic change from extensions & reductions.

I wasn't aware that others used the same formulation, but it doesn't really
surprise me that others have come up with the same idea.

>>First of all, I should mention that I use MTD(f), which means that all
>>my searches are done with a zero-width window. Therefore, the search
>>fails high or low at all nodes. I call the static eval at all
>>interior nodes, and use the evaluation to make many different search
>>decisions, as explained below.
>
>I also use MTD(f), it's a bit more elegant IMO. PVS should behave the same
>way though, as far as selective search goes.

There are some differences, because in MTD(f) you have to be careful about
using alpha and beta in your pruning decisions the way you would in a PVS
search. Otherwise you will easily end up with terrible search
inconsistencies. The same node will usually be searched several times with
different values of alpha and beta, and pruning decisions which are valid in
one case will not always be valid in another. Note that even null move
pruning causes this kind of problem when combined with MTD(f), but other
pruning techniques tend to cause more noticeable problems.

>>Basically, I have two different selective search mechanisms. The
>>first one is very simple, and is used in many programs (I think). In
>>the last n plies of the search (n is currently 2, but I haven't yet
>>decided whether 2 or 3 works best), I do what I call "static null move
>>pruning". When the static eval returns a score >= beta, there are no
>>hanging pieces, no mate threat, the king safety for the side to move
>>is good, and there were no extensions in the last two moves of the
>>path leading to the position, I just return the score without doing a
>>search. I do something similar further from the leaves as well, but
>>only when the score is *very* far above beta.
>
>Yes, I can believe that you get a small but clear win from this. Null move
>seems to me to be more effective further from the leaves. (An untested
>intuition.)

Yes. Simple one- or two-ply threats are often easily detected by static
techniques, while deeper threats must be detected by a search.
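In rough C++, the idea looks something like this (a minimal sketch only; the
helper functions, the margin, and the depth limit are invented placeholders,
not the engine's actual code):

// A minimal sketch of the static null move pruning described above.
// All helpers and constants are invented placeholders.

struct Position { /* board state */ };

int  static_eval(const Position&)       { return 0; }     // placeholder
bool hanging_pieces(const Position&)    { return false; } // placeholder
bool mate_threat(const Position&)       { return false; } // placeholder
bool king_is_safe(const Position&)      { return true; }  // placeholder
int  normal_search(Position&, int, int) { return 0; }     // the full search

const int NEAR_LEAF_DEPTH = 2;   // the "last n plies", with n = 2
const int HUGE_MARGIN     = 500; // invented margin for deeper nodes

// Zero-width window: the result is only a fail-high/fail-low against beta.
int search(Position& pos, int beta, int depth, bool recently_extended)
{
    int eval = static_eval(pos);
    bool safe = !hanging_pieces(pos) && !mate_threat(pos)
             && king_is_safe(pos) && !recently_extended;

    // In the last n plies: trust the static eval and fail high at once.
    if (depth <= NEAR_LEAF_DEPTH && eval >= beta && safe)
        return eval;

    // Further from the leaves: only when the score is *very* far above beta.
    if (eval >= beta + HUGE_MARGIN && safe)
        return eval;

    return normal_search(pos, beta, depth);
}

The point of all the extra conditions is to avoid returning a static score in
exactly the positions where even a shallow search would have revealed a
threat.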
>When I allow transitions from a null-move right into q-search, the program
>can overlook some pretty simple stuff at the leaves. It seems to me that
>after a null-move is tried, ideally there should be enough depth left to
>take care of certain things.

This is one of the points behind my dynamic null move reduction factor. I
use my evaluation to try to guess whether there is a danger of a horizon
effect problem, and reduce the reduction factor when this happens.

>BTW Don Dailey wrote a very detailed description of his own algorithm for
>this, it's in the CCC archives.

Thanks, I will have a look at it as soon as the search engine works again.

>>The other mechanism is based partly on how a move changes the various
>>components of the evaluation function, and partly on the history of
>>the move.
>
>Yes, this is the interesting stuff! I think this is where Junior, Shredder,
>Fritz, Hiarcs, Tiger, etc, are getting some big gains.
>
>>In addition to all the common extensions, I extend moves which
>>dramatically change the value of some evaluation component (usually
>>king safety or passed pawns). If a move dramatically increases the
>>pressure on the opponent's king, I extend the move. The magnitude of
>>the extension depends on how much the evaluation component in question
>>increases.
>
>I tried several versions of this. None of them appeared to improve the level
>of play in blitz. Have you tested this extension individually in self-play?

Yes, I have. But extensions of this kind are always rather small in my
engine; usually less than half a ply. Only really dangerous-looking
attacking moves and certain passed pawn pushes to the 7th rank are sometimes
extended as much as 3/4 of a ply.

>Also, do you evaluate king safety separately for purposes of search and for
>purposes of evaluation?

No, it is the same eval in both cases.

>Anyway, this extension is very logical. I will try it again once my new king
>safety evaluation is ready.
>
>Also, it's very logical to extend moves which static eval indicates are
>better. It appears that this is what Junior does - although it's not 100%
>clear.

Really interesting. Is this based on your own observations of Junior's
search, or have you read about it somewhere?

>>At all nodes, the first three moves are always searched to full depth
>>(or more, in case they are extended). When the first three moves fail
>>low, I expect the node to be a fail-low node, and reduce the search
>>depth for the remaining moves unless they seem particularly promising.
>>All the remaining moves are searched with reduced depth unless one of
>>the following conditions is satisfied:
>>
>>1. The move is extended. In this case, it obviously doesn't make
>>   sense to reduce the search depth.
>>
>>2. One of the last two moves in the path leading to the position was
>>   extended.
>>
>>3. The move played contains a threat (of mate or of capturing a
>>   piece), or moves a hanging piece to a safe square.
>>
>>4. The static eval or one of its components makes a jump in the
>>   positive direction when the move is made.
>>
>>5. The move has often caused a beta cutoff in the past (I compare the
>>   history table entry for the move to the total number of nodes
>>   searched).
>>
>>When a reduced move fails high, it is re-searched with normal search
>>depth.
>
>Aha, I follow the logic here.
>
>The risk of course is that you will reduce some good positional moves, not
>giving search the chance to show that they are good.

Well, nothing is free. Part of the purpose of checking the history table is
to prevent good positional moves from being reduced too often. I hope that
good positional moves are usually good in many places in the search tree,
and that their history table entries therefore have high values.
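A simplified sketch of the move loop with these reductions, again with
invented helper names standing in for the real tests:

// A sketch of the reduction scheme: full depth for the first three moves,
// reduced depth for the rest unless one of conditions 1-5 holds, and a
// full-depth re-search when a reduced move fails high.

#include <cstddef>
#include <vector>

struct Position { /* board state */ };
struct Move     { /* from, to, promotion, ... */ };

// Invented placeholders for the engine's real routines.
std::vector<Move> generate_moves(const Position&) { return {}; }
int  extension(const Position&, const Move&)      { return 0; }
bool extended_in_last_two_plies(const Position&)  { return false; }
bool threatens_or_saves_piece(const Position&, const Move&) { return false; }
bool eval_jumps_up(const Position&, const Move&)  { return false; }
bool good_history(const Move&)                    { return false; }
void make_move(Position&, const Move&)   {}
void unmake_move(Position&, const Move&) {}

const int ONE_PLY = 1;

int search(Position& pos, int beta, int depth)
{
    if (depth <= 0)
        return 0; // stand-in for the quiescence search

    std::vector<Move> moves = generate_moves(pos);
    int best = -30000;

    for (std::size_t i = 0; i < moves.size(); i++) {
        const Move& m = moves[i];
        int ext = extension(pos, m);
        int new_depth = depth - ONE_PLY + ext;

        bool reduce =
            i >= 3                                // first three: full depth
            && ext == 0                           // 1. move is not extended
            && !extended_in_last_two_plies(pos)   // 2. no extension in path
            && !threatens_or_saves_piece(pos, m)  // 3. no threat or rescue
            && !eval_jumps_up(pos, m)             // 4. eval does not jump up
            && !good_history(m);                  // 5. poor history record

        make_move(pos, m);
        int score = -search(pos, 1 - beta,
                            reduce ? new_depth - ONE_PLY : new_depth);
        if (reduce && score >= beta)              // reduced move fails high:
            score = -search(pos, 1 - beta, new_depth); // re-search full depth
        unmake_move(pos, m);

        if (score > best) best = score;
        if (best >= beta) return best;            // fail high
    }
    return best;                                  // fail low
}

With this shape a move can never fail high on the reduced depth alone; the
re-search always has to confirm it at normal depth first.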
>>Like in the case of extensions, the amount of the reduction depends on
>>just how bad the move looks.
>>
>>At the node directly following a reduction, I immediately return a
>>fail low score if the null move search fails low by a big margin. In
>>cases like this it is clear that the reduced move contains a dangerous
>>threat, and it is too risky to search it with reduced depth.
>
>Actually, I tried a version of this: when a null-move failed badly (by two
>pawns or more), I extended the search by a full ply. Of course with MTD(f)
>you need a good fail-soft, and I worked through quite a number of issues
>with failing-soft in my q-search, etc. Nevertheless, this extension appeared
>to hurt my program's level of play slightly in blitz.
>
>This was quite a surprise. Generally, moves which threaten something (i.e.
>which cannot be ignored) must be better, and more worthy of search, than
>moves which do not threaten anything. My explanation is that this extension
>caused the program to spend too much time looking at tactical lines and not
>enough at good quiet positional lines.

Probably. Perhaps you could make it work if you raise the margin to a value
much higher than two pawns?

>>The idea of these reductions is that a move which does not appear
>>promising and which very rarely has worked in the past is usually very
>>unlikely to fail high.
>>
>>I also have some other search tricks which you perhaps would not
>>classify as "selective search", but which might still be interesting
>>to you:
>>
>>I use a dynamic null move reduction factor. The null move search is
>>usually skipped unless there is a good reason to think that the node
>>is a fail-high node (this is decided by comparing the value of the
>>static eval to beta, of course). When I am *very* sure of a fail high
>>(huge static eval, no hanging pieces, safe king, and no other kind of
>>danger reported after calling the eval), I use R=4 or (in extreme
>>cases) R=5. In most "normal" positions, I use values close to 3.
>>When there is some obvious kind of danger in the position (like an
>>unsafe king for the side to move, or a high passed pawn eval for the
>>opponent), I use values in the range 1 to 2, in an attempt to avoid
>>horizon effect problems. Like many other places in my search, I also
>>consider the path leading to the position when deciding which value of
>>R to use. When there has been an extension in the last few plies, I
>>avoid using values bigger than R=3.
>
>Offhand it would seem to me that the increased savings above R==3 are too
>small to be important.

Perhaps. I think the most important part of my scheme is in the other
direction, when I choose to reduce the reduction factor in positions where a
serious warning sign is present.
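A sketch of how the choice of R might look, with made-up thresholds for
illustration (the real decision uses many more evaluation components):

// A sketch of a dynamic null move reduction factor. All thresholds and
// helpers are invented for the example.

struct Position { /* board state */ };

int  static_eval(const Position&)             { return 0; }     // placeholder
bool hanging_pieces(const Position&)          { return false; } // placeholder
bool king_is_safe(const Position&)            { return true; }  // placeholder
bool dangerous_enemy_passers(const Position&) { return false; } // placeholder

const int VERY_SURE_MARGIN = 300; // invented threshold
const int EXTREME_MARGIN   = 600; // invented threshold

int null_move_R(const Position& pos, int beta, bool recent_extension)
{
    int eval = static_eval(pos);

    // Obvious danger: shrink R to fight horizon effect problems.
    if (!king_is_safe(pos) || dangerous_enemy_passers(pos))
        return eval >= beta ? 2 : 1;    // values in the range 1 to 2

    // An extension in the last few plies: never go above R = 3.
    if (recent_extension)
        return 3;

    // Very sure of a fail high: huge static eval and no danger signs.
    if (eval >= beta + VERY_SURE_MARGIN && !hanging_pieces(pos))
        return eval >= beta + EXTREME_MARGIN ? 5 : 4; // extreme cases: R = 5

    return 3;    // "normal" positions
}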
Tord