Author: Robert Hyatt
Date: 19:43:44 01/06/04
On January 06, 2004 at 18:45:34, Tord Romstad wrote:

>On January 06, 2004 at 16:24:07, Robert Hyatt wrote:
>
>>Yes, but Ferret is not using Hsu's Singular Extension algorithm. Not even
>>close. Bruce is using an "SE approximation" that works very well, but it is
>>not to be confused with what Hsu defined as singular extensions.
>>
>>I did the full DB implementation in Cray Blitz, and using non-recursive
>>null-move R=1, it seemed to work pretty well. I have tried it more than
>>once in Crafty, and it simply did not work reasonably whatever I tried. I've
>>not decided that it is hopeless, but I have not played with it further in at
>>least a couple of years now...
>
>Have you experimented with Bruce's approximation as well? What were the
>results?
>I am tempted to try something similar to singular extensions myself some
>day, but I'm afraid I would have to modify the idea a bit to make it work
>with my MTD(f) search.

I haven't given mtd(f)/singular-extensions any thought at all, but the idea
seems basically flawed when you think about it. :)

However, I did try Bruce's approach a long while back, and it never seemed to
pay off for me. It might find tactics more quickly, but then it searched
about a ply less deep, and in games on ICC it always seemed worse. Bruce
thought it was a "break-even" deal, and if so, it might be worth it due to
the tactical acuity. But I never quite got to "break-even". I think I sent
the code to Mike Byrne and a few others at various times, and I don't recall
anyone reporting any tweaks that made it seem worthwhile... they might
respond differently, however...

>>I came to the same conclusion that somehow, null-move with bigger R values
>>simply doesn't work very well. You extend, but null-move reduces the depth
>>and things get lost in the middle.
>
>That's interesting -- I have precisely the opposite experience. Small R
>values have never worked for me, except in the endgame.
>In the middle game, I currently use R=1, 2, 3 or 4 depending on the
>evaluation function. I use R=1 or 2 when the remaining depth is small and
>the evaluation function decides that the risk of a horizon problem is big
>(for instance when the side to move has a trapped or pinned piece, there are
>serious weaknesses in the king shield, when the opponent has a very
>dangerous passed pawn, and so on), and R=3 or 4 at all other nodes. Except
>in a few very tactically complicated positions, more than 90% of the nodes
>are searched with R=3 or 4.
>
>Omid once guessed that this could be related to what we do in the qsearch. I
>have a big and complicated qsearch which includes checks and a few other
>forcing moves as well as captures. It is possible that lower values of R
>work better with minimalistic qsearch functions like yours; this is one of
>the many things I should probably experiment with when I have some time on
>my hands.
>
>By the way, have you ever tried using R=2 at nodes where one of the last 3
>or 4 moves was extended, and R=3 at all other nodes? This could perhaps help
>you avoid the "lost in the middle" problem you describe.

I tried various things. When I was doing SE stuff, I used pure R=2, I
"think"... I could probably find this in the main.c comments as to when R=3
was first added, but I am not sure I left comments about SE in, since I
never really adopted it in a release version.

>I don't do exactly this myself, but I do other similar tricks in my search.
>For instance, I am very careful about doing forward pruning or reductions
>when there are one or more extensions in the last few plies leading to the
>position. Like all path-dependent search decisions (including the recapture
>extension, which is one of the other ideas that have never worked for me),
>it causes search inconsistencies, but to me it is worth the cost.
>
>Tord
Last modified: Thu, 15 Apr 21 08:11:13 -0700
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.