Computer Chess Club Archives


Subject: Re: question about fixing the time management of movei

Author: Robert Hyatt

Date: 18:40:39 07/30/04

On July 30, 2004 at 17:53:37, Sune Fischer wrote:

>On July 30, 2004 at 10:32:36, Robert Hyatt wrote:
>
>>On July 30, 2004 at 03:12:54, Sune Fischer wrote:
>>
>>>On July 29, 2004 at 20:00:04, Robert Hyatt wrote:
>>>
>>>>>Then why did it not work? Which part failed? What is your analysis?
>>>>
>>>>I did the null-window search.  It returned almost instantly without a fail-low.
>>>>I have time left.  What do I do with it?
>>>
>>>Now the idea is to simply stop searching and save the remaining time.
>>>
>>>There is not much point in doing anything else because you don't have
>>>enough time to resolve a real score and go to the next ply.
>>
>>That is where we disagree.  I don't know how to make such a statement without
>>actually trying the search to see...  And lo and behold, sometimes I get well
>>into the next search, far enough to change my mind in fact.
>
>And lo and behold 95% of the time you waste your time.

I'll remind you once more: 85% of the time you do iteration N+1 and don't get a
new move.  You just wasted _that_ time.  It is the 15% case that is important.
Ditto for failing low or high, etc...
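
To make the disagreement concrete, here is a bare-bones sketch of the loop we
are both describing.  Every name in it is hypothetical; this is not actual
Crafty (or Movei) code:

    #include <limits.h>

    #define MAX_DEPTH 64

    extern int  Search(int depth, int alpha, int beta);  /* hypothetical */
    extern long TimeUsed(void);                          /* ms used on this move */
    extern long soft_limit;                              /* normal time budget */

    void Iterate(void) {
        int depth, score, last = 0;
        for (depth = 1; depth <= MAX_DEPTH; depth++) {
            /* every iteration opens with a null-window probe around the
               previous iteration's score */
            score = Search(depth, last - 1, last);
            if (score >= last)               /* probe failed high: resolve it */
                score = Search(depth, last - 1, INT_MAX);
            /* a probe that fails low (score < last) is the warning that is
               worth paying for near the end of the budget */
            last = score;
            if (TimeUsed() >= soft_limit)
                break;  /* the whole argument is about this line: stop here
                           and bank the remainder, or let the loop simply
                           start iteration N+1 with the very same code */
        }
    }

Note that no end-of-iteration special case appears anywhere in that loop; the
last iteration runs exactly like every other one.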


>
>>>
>>>>I simply think that turning a boundary condition into a special case, when all
>>>>the "rules" are not known (tree size, fail-low time, etc.), is at least no better
>>>>than just ignoring things.
>>>
>>>I think it is a little bit of the best of both worlds.
>>>You get the information you want regarding a fail-low, and you get to save some
>>>of the time.
>>
>>Perhaps.  But if you don't fail low, you won't know you are not until you
>>actually fail high.  By the time you fail high I'll be close to having the true
>>score and can look at other moves.  Seems simpler and with no disadvantage I can
>>see.
>
>I can see the disadvantage.
>
>> And I don't have to predict when to resort to this "end of iteration"
>>trick as I just use my normal code.
>
>Vanilla search has its advantages :)
>

When there are two equal alternatives, the simpler alternative is _always_ the
best one.  That is the KISS principle...





>>>>The issue is when time is low, doing a null-window search rather than a normal
>>>>search.  That is _exactly_ what happened in the old program.
>>>
>>>You haven't tested it like I suggested, so you haven't tested it - period.
>>
>>OK.  I'm not going to argue the point further, other than to say if I start
>>_every_ iteration with a null-window search, I _clearly_ start the last one the
>>same way.
>
>Very good, but how can you separate the influence of the two?
>
>What you have tested and found is that a + b < 0.
>
>I'm proposing that a > 0.
>
>This is indeed possible; the solution space is b = -(a + p), p > 0.
>
>>>>What happens
>>>>_before_ time is low is irrelevant.
>>>
>>>How do you know that?
>>>Why should this not waste a lot of time and make the whole thing look bad?
>>
>>Then we can _never_ compare results.  Our searches are different.
>
>yes, but more importantly our experiments are different.
>
>I think if you want to test "a" there is no reason to be testing "a+b" and then
>say we can't possibly compare.

We are _always_ testing A+B+C+D+E+F against G+H+I+J+K+F.  Because the searches
are different, the extensions are different, the evals are different, etc.


>
>>>If the branch factor goes up it works even better; it's the other way that can cause
>>>problems.
>>
>>No it doesn't.  It will go up for the null-window search just as well, and you
>>have the same problem.
>
>Yes it will, but the null-window search will _still_ be faster than the aspiration
>search, so it saves time no matter how you twist and turn it.
>


Never said otherwise.  What I _did_ say is that often you won't have time to
fail low with either approach.  -equality-.  Often both will fail low.
-equality-.  A fraction of the time one will fail low and the other will not.
Another fraction of the time neither will fail low and one will finish before
the real time limit and quit while the other has a chance to find a better move.

I simply prefer the last case, which is what I do.  If you don't like it, don't
do it.  That's pretty simple to deal with.  I'm simply stating what I do, and
why, and noting that I have tested _lots_ of alternatives before settling on
what I do now.  Not that it might not change in the future, of course.  I doubt
anything I do is totally "optimal" at present...
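
Concretely, the "chance to find a better move" in that last case comes from a
time check shaped something like this (hypothetical names, and an arbitrary
extension factor, purely for illustration):

    extern long TimeUsed(void);   /* hypothetical: ms used on this move */
    extern long target_time;      /* normal budget for this move */
    extern long hard_limit;       /* absolute cap, never exceeded */

    /* stop when the normal budget is gone, unless the root is currently
       failing low; in that case stretch the budget toward the hard limit
       to try to resolve a better move */
    int OutOfTime(int root_failing_low) {
        long budget = root_failing_low ? 3 * target_time : target_time;
        if (budget > hard_limit)
            budget = hard_limit;
        return TimeUsed() >= budget;
    }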





>> Except what threshold do you use to say "I don't believe
>>I can complete the next iteration because I have less than xx percent of the
>>total time left..."
>
>That's the big question of course, one very important parameter. Perhaps a more
>sophisticated algorithm would even be needed.
>
>Some experimentation would have to be conducted, or one could start by
>collecting a bunch of data from real games and then grind the numbers.

That's where my original algorithm started.  I did this starting in 1978, when I
switched from selective-only search to full-width iterative deepening.  It
originally worked fairly well.  But as things changed, i.e. fractional extensions,
null-move search and eventually singular extensions, the predictability went south and
never came back.  I eventually dumped it and found the thing was actually
playing better due to the occasional fail-low at the last second triggering a
deeper search to save a game.
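
The kind of prediction I mean looked roughly like this (hypothetical names;
this is the part that got dumped):

    extern long time_left;   /* hypothetical: ms remaining for this move */

    /* guess whether iteration N+1 fits in the remaining time by scaling
       iteration N's cost by an EBF estimate */
    int StartNextIteration(long last_iter_ms, double ebf_estimate) {
        long predicted = (long)(last_iter_ms * ebf_estimate);
        return predicted <= time_left;
    }

The guess is only as good as ebf_estimate, and fractional extensions, null-move
search and singular extensions are exactly the things that make that estimate
swing from one ply to the next.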


>
>Starting off with a careful estimate, e.g. 20%, should be safe and good enough to
>tell whether it works or not.


Where does 20% come from?  v=rand()*100 or something similar?



>
>>What do you do when the EBF for iteration N is 9, while if you search iteration
>>N+1 it drops back to 2.5?  Again I can post excerpts from game logs to show
>>exactly that...
>
>I think the main reason for a branch factor explosion is a lot of
>fail-lows/fail-highs throughout the ply when there is bad move ordering.
>


Or the normal window dropping or rising enough to start to include deep tactical
lines that were getting pruned away earlier in the search.  Lots of possible
issues that change the EBF in unpredictable ways.
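
For anyone who wants to check their own logs: per-iteration EBF is just the
ratio of successive iteration costs (times or node counts work equally well).
Made-up numbers like these show the kind of swing involved:

    #include <stdio.h>

    int main(void) {
        /* hypothetical per-iteration search times, in seconds */
        double t[] = { 0.4, 1.1, 9.9, 24.7 };
        for (int i = 1; i < 4; i++)
            printf("iter %d -> %d: EBF = %.1f\n", i, i + 1, t[i] / t[i - 1]);
        return 0;
    }

That prints roughly 2.8, 9.0 and 2.5; a 9 followed by a 2.5 is exactly the case
where a prediction based on the previous ratio throws away an iteration that
would have finished.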






>A single move probably doesn't explode at one ply and then suddenly become a lot
>cheaper at the next ply. That move explodes because of a lot of extensions, and
>those will likely happen again at the next ply.

The inverse is pretty common, however: a single move is cheap at ply=n and way
expensive at ply=n+1...



>
>So no, I don't expect this to be any big issue.
>
>>>>As do I.
>>>
>>>I think you are a great believer in drawing fast conclusions based on similar
>>>experiments - "details don't matter enough to make a difference"!?
>>
>>experience allows that...
>
>My experience tells me that the devil is in the details :)
>
>-S.


