Author: Robert Hyatt
Date: 17:00:04 07/29/04
On July 29, 2004 at 18:30:26, Sune Fischer wrote:

>On July 29, 2004 at 11:44:39, Robert Hyatt wrote:
>
>>"more complicated" != "more advanced".  I don't believe it is possible to
>>accurately forecast the time for the next iteration.  Which means when is it
>>appropriate to do that quick nullwindow search?
>
>I can't predict when a check extension is wasted and when it is useful either, I
>can only say if it works better or worse on average.
>
>> And when you do it and it
>>returns way quicker than you expected, what do you do with the remaining time?
>>
>>There appear to be more problems this way than with what I currently do...
>
>Simple != Good
>
>(I can play that game too:)
>
>>>But you have not tested what I'm suggesting.
>>
>>I have definitely tested doing a fail-low search.  You can find references to
>>that back in 1978, which was when I finally dumped the idea of "don't start the
>>next iteration if I don't believe it can be finished..."
>
>Then why did it not work, which part failed, what is your analysis?

I did the null-window search.  It returned almost instantly without a fail-low.
I have time left.  What do I do with it?

I simply think that turning a boundary condition into a special case, when all
the "rules" are not known (tree size, fail-low time, etc.), is at least no
better than just ignoring things.  IE I could take the logs for three wins and
three losses on ICC and post a summary of the times to see what the current
approach did that was good or bad...  And then I could go back to the key
positions and see what would happen with a different approach.

>
>>>
>>>>Your "assumption" is based on facts that have tons of contradictory evidence (IE
>>>>I _have_ done lots of testing and reported on it many times in various ways.)
>>>
>>>AFAIK it was a completely new spin on an old idea.
>>
>>There's nothing new about the null-window search at all.  As I said, in 1980
>>_every_ search I did started with a null-window search.  As did Belle's...
>
>Don't mix two experiments here, I'm not talking about starting every search with
>a nullwindow...

What is the difference?  The _last_ search is started with a null-window, which
is what you are suggesting.

>
>Be careful with drawing too many parallels here.
>

The issue is, when time is low, doing a null-window search rather than a normal
search.  That is _exactly_ what happened in the old program.  What happens
_before_ time is low is irrelevant.

>>
>>>
>>>I read your conclusion that ply N took 1.5 and ply N+1 took 1.4.
>>>This is what I call the same magnitude, ie 1.4 >> 0.15*1.5
>>
>>Yes.  But _both_ together took less than 15% of the total time used up to that
>>point...
>
>Yes, that's true, but I think you should compare the last iteration time with
>the time remaining; that way you eliminate a big branching-factor issue.
>

In one search I can see the effective branching factor go from 1.7 to 10.0 in
extreme cases, and regularly from 1.5 to 5.0 when things are "right"...  And I
am talking about the iteration-to-iteration effective branching factor, not
completely different searches.  I can post some of those for clarity if need
be...

>Secondly, don't jump to the conclusion that it won't work because you found a
>position where it might not work perfectly.
>
>I can show you 100 positions where nullmove causes more damage than good; still,
>that doesn't mean you should turn it off in Crafty, right? :)
>
>I think only testing is the way to go.
>

As do I.

>>>I don't think it varies "too" much.
>>>
>>>-S.
>>
>>
>>You are lucky.  Mine is wildly varying...
>
>Hehe, but on average.... :)
>
>-S.

It varies wildly on average, in fact.  Otherwise parallel search would be easy.
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.