Author: Robert Hyatt
Date: 11:19:36 07/29/04
On July 29, 2004 at 13:07:02, Uri Blass wrote:

>On July 29, 2004 at 11:44:39, Robert Hyatt wrote:
>
>>On July 29, 2004 at 10:18:25, Sune Fischer wrote:
>>
>>>On July 28, 2004 at 17:48:56, Robert Hyatt wrote:
>>>
>>>>On July 28, 2004 at 14:21:26, Sune Fischer wrote:
>>>>
>>>>>On July 28, 2004 at 11:02:36, Robert Hyatt wrote:
>>>>>
>>>>>>On July 28, 2004 at 03:18:52, Sune Fischer wrote:
>>>>>>
>>>>>>>On July 27, 2004 at 18:26:16, Robert Hyatt wrote:
>>>>>>>
>>>>>>>>Aha. And exactly how many times do you do the N+1 iteration and get
>>>>>>>>the _same_ best move? For Crafty that is about 85% of the time. So I
>>>>>>>>should cut the search off one ply early? Or is that 15% critical?
>>>>>>>
>>>>>>>I don't understand the question.
>>>>>>
>>>>>>You said I wasted time by starting the next search, which won't fail
>>>>>>low most of the time. I said you waste time by doing iteration N+1,
>>>>>>which doesn't change the best move most of the time. See the fallacy in
>>>>>>the argument? I _know_ going to depth N+1 won't change the best move
>>>>>>most of the time. But it will likely change the best move when it is
>>>>>>important to do so...
>>>>>
>>>>>No.
>>>>>It's going to depend on how much time you have left.
>>>>>If you need 5 seconds to fail low and you have 4 seconds left, you won't
>>>>>see it.
>>>>
>>>>No (I can play that game too). :)
>>>>
>>>>The game _always_ depends on time. If you run out, you have to do
>>>>something. But again, in 85% of the cases, doing N+1 produces the same
>>>>best move as N, so it is "wasted" by your definition. I'm interested in
>>>>the 15% where it changes to something better. Starting the next iteration
>>>>might produce nothing 80% of the time or more. But if it fails low twice
>>>>in a game, it may well save me from making a bad blunder... I can't
>>>>predict whether I will have enough time to get any information back, so I
>>>>just dive in and search, and if it fails low, I get valuable information.
>>>>If not, I don't.
>>>
>>>OK, it's simple and it works reasonably well.
>>>
>>>What I'm suggesting is more advanced; yes, it's harder to get working, but
>>>it probably has higher efficiency if implemented well.
>>
>>"More complicated" != "more advanced". I don't believe it is possible to
>>accurately forecast the time for the next iteration. Which means: when is
>>it appropriate to do that quick null-window search? And when you do it and
>>it returns far quicker than you expected, what do you do with the remaining
>>time?
>>
>>There appear to be more problems this way than with what I currently do...
>>
>>>>>>I never said "win-win". I said it works better for me after testing.
>>>>>>And I have done _lots_ of testing with various approaches. That's how I
>>>>>>settled on the current approach. I'm not much for tea leaves and Tarot
>>>>>>cards.
>>>>>
>>>>>That's interesting, because I was beginning to wonder how you could have
>>>>>such strong opinions on something you _haven't_ tested. :)
>>>>
>>>>I _have_ tested both options many times.
>>>
>>>But you have not tested what I'm suggesting.
>>
>>I have definitely tested doing a fail-low search. You can find references
>>to that back in 1978, which was when I finally dumped the idea of "don't
>>start the next iteration if I don't believe it can be finished..."
>
>What exactly is the data that convinced you that this idea is worse than
>what you did later?

I could fail low and get something useful. Or I could fail low and then have
to do a real search to get a score, so that I could continue to try to find
something better.
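To make the policy concrete, below is a minimal C sketch of the "search until
time runs out" loop described above, with a root fail low used to buy extra
time rather than to stop. This is not Crafty's actual code: Position,
root_search(), search_aborted(), and extend_time() are hypothetical engine
plumbing, the aspiration window constants are illustrative, and the fail-high
re-search is omitted for brevity.

  #define MAX_DEPTH 64
  #define WINDOW    50                 /* aspiration half-width, centipawns */
  #define INF       32000
  #define NO_MOVE   0

  typedef struct Position Position;         /* engine-specific type, assumed */
  extern int  root_search(Position *p, int depth, int alpha, int beta,
                          int *move);       /* assumed: returns the score and
                                               writes the best root move     */
  extern int  search_aborted(void);         /* assumed: set when time expires */
  extern void extend_time(void);            /* assumed: grants extra time     */

  /* always start the next iteration and let the clock abort it mid-search;
     a root fail low is treated as valuable information, not as a reason
     to stop. */
  int iterate(Position *pos)
  {
    int best_move = NO_MOVE, score = 0;

    for (int depth = 1; depth <= MAX_DEPTH; depth++) {
      int move = NO_MOVE;
      int alpha = score - WINDOW, beta = score + WINDOW;
      int v = root_search(pos, depth, alpha, beta, &move);

      if (!search_aborted() && v <= alpha) {
        /* fail low: the move from the previous iteration is in trouble,
           so buy time and resolve the real score with an open window */
        extend_time();
        v = root_search(pos, depth, -INF, beta, &move);
      }
      if (search_aborted())
        break;               /* keep the move from the last full iteration */
      best_move = move;
      score = v;
      /* note: no "can iteration depth+1 finish?" forecast here -- per the
         1978 experiments, just start it and let the clock cut it off */
    }
    return best_move;
  }

The design point is the break: an aborted iteration never overwrites
best_move, so starting an iteration that cannot finish costs nothing, while
a fail low that surfaces mid-iteration still pays off.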
Back in 1978 my testing was always me vs. me. I.e., I gave up on "don't start
a new iteration unless it seems pretty certain that it can be completed" by
playing lots of overnight games with that approach vs. the new "search until
time runs out, period." That's the way all changes were categorized as "good"
or "bad" back then, as there were no commercial programs to test against and
no internet chess club to test on either. We tried +many+ variations on
timing back then, and our conclusion was that "search until time runs out"
produced the best overall results. Note that this is not about a 100 Elo
gain/loss. But it does mean that game outcomes can be changed.

>
>I think that the difference in Elo is probably less than 20 Elo.

Very possibly. But when that one loss is in a WCCC, it is far more critical.

>
>The main important things are:
>1) using more time after you already know about a fail low;
>2) deciding on the target time correctly, which is also a problem you can
>solve if you decide to finish at the end of the iteration (you can decide
>not to start a new iteration if you expect it to finish after twice the
>average time that you have per move, or after some other multiple of the
>target time).

That last doesn't work in all positions. Fine 7 is one of many where the 2x
rule you mention will be so wrong...

>
>Note that I have another rule: if the first move of the iteration takes
>enough time, then I decide that it is the last iteration.
>
>My logic is the following. There are 2 cases:
>Case 1: I spend significant time on the rest of the moves (in this case I
>have no time for a new iteration).
>Case 2: I do not spend a lot of time on the rest of the moves. In this case
>there is a high probability that the move is forced and the other replies
>are pruned fast thanks to null-move pruning, so I prefer to use less time.

That will fail for the kind of positions I worry most about: those where my
program doesn't see a problem until the last second. The position from Cray
Blitz vs. Belle in 1981 (White can play Qxb6, picking up a free knight but
losing, or White can play Bxh6 and force a draw) is one such example. Things
look good early, with the best move taking most of the time each iteration,
until the iteration where you see that it loses.

>
>Uri
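For contrast, here is a sketch of the two stop rules Uri describes above.
This is not Uri's actual code; the function names, the branching-factor
forecast, and the "half the budget" threshold in the second rule are all
assumptions made for illustration.

  /* Rule 1: don't start iteration depth+1 if it is predicted to finish
     after twice the average per-move time. */
  int start_next_iteration(int elapsed_ms, int last_iter_ms,
                           double ebf,         /* effective branching factor */
                           int avg_move_ms)
  {
    /* crude forecast: next iteration costs roughly last one times ebf */
    double predicted_finish = elapsed_ms + last_iter_ms * ebf;
    return predicted_finish < 2.0 * avg_move_ms;
  }

  /* Rule 2: if the first root move of the current iteration alone used a
     large share of the budget, make this the last iteration. */
  int last_iteration(int first_move_ms, int budget_ms)
  {
    return first_move_ms > budget_ms / 2;    /* threshold is an assumption */
  }

Bob's objection maps directly onto the first rule: in positions like Fine 7
the early iterations stay cheap, so predicted_finish looks harmless right up
to the iteration that finally uncovers the problem, and the forecast skips
exactly the search that matters.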