Computer Chess Club Archives


Subject: Re: question about fixing the time management of movei

Author: Robert Hyatt

Date: 20:18:35 07/31/04


On July 31, 2004 at 22:41:01, Uri Blass wrote:

>On July 31, 2004 at 21:51:01, Robert Hyatt wrote:
>
>>On July 31, 2004 at 13:01:26, Sune Fischer wrote:
>>
>>>On July 31, 2004 at 10:43:44, Robert Hyatt wrote:
>>>
>>>>OK.  That can be measured.  I search with a very narrow aspiration window.  I
>>>>don't believe a null-window will fail low 5% faster, but I'll run that test
>>>>tonight to find out and post the numbers...
>>>
>>>I think it would; what else is the point of null windows in general?
>>
>>Slightly more efficient.  But "slightly" is the operative word if you have a
>>good aspiration window to start with.  But that is only "maybe" as well.  IE for
>>your null-window search, you probably want to set the window to some delta value
>>below what you believe is the "true score".  If you do that, you will get a
>>fail-high, but that takes way longer than the fail-low.  Meanwhile you have to
>>search a while to decide it won't fail low.  If you do this reasonably, you
>>search to the fail high.  And by then it is possible that I'll have a real score
>>for the first move rather than a >= some artificially lowered window.
>>
>>IE it isn't a clear win, IMHO, without a _lot_ of test data to show that it
>>works and is worth the extra complexity over having no special-case code at
>>all...
>>
>>>
>>>It gives you very little information (never an exact score), so if it's not
>>>faster...
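
To make the comparison concrete, here is a minimal sketch of the two root
strategies being argued about.  The search() interface, the window sizes, and
the names are assumptions for illustration only, not Crafty's actual code:

    /* Sketch only: contrasts an aspiration-window root search with a
       null-window test set a delta below the believed "true score".       */

    #define INF     32000   /* assumed "infinite" score                    */
    #define WINDOW     25   /* assumed aspiration half-width (centipawns)  */
    #define DELTA      50   /* assumed offset below the believed score     */

    extern int search(int alpha, int beta, int depth);  /* hypothetical    */

    /* Aspiration window: on success it returns an exact score for the
       best move; a failure costs a re-search with a wider bound.          */
    int aspiration_root(int prev_score, int depth) {
      int alpha = prev_score - WINDOW, beta = prev_score + WINDOW;
      int score = search(alpha, beta, depth);
      if (score <= alpha)                 /* fail low: open the lower bound  */
        score = search(-INF, beta, depth);
      else if (score >= beta)             /* fail high: open the upper bound */
        score = search(alpha, INF, depth);
      return score;
    }

    /* Null window set DELTA below the believed score: a fail high only
       proves "score >= prev_score - DELTA", never an exact value, and
       deciding that it will NOT fail low is what consumes the time.       */
    int null_window_root(int prev_score, int depth) {
      int bound = prev_score - DELTA;
      return search(bound - 1, bound, depth);
    }
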
>>>
>>>>>>We are _always_ testing A+B+C+D+E+F against G+H+I+J+K+F.  Because the searches
>>>>>>are different, the extensions are different, the evals are different, etc.
>>>>>
>>>>>I think this smokescreen isn't worth the paper it isn't written on.
>>>>>
>>>>>I suppose it is not possible for you to test PVS and nullmove as it has been
>>>>>described either?
>>>>
>>>>
>>>>Those are _your_ words, not mine.  Remember?  You complained that my null-window
>>>>search wasn't done the same way you suggested because something _else_ in my
>>>>search was different.
>>>
>>>No, I said something else in your _experiment_ was different.
>>
>>Yes.  The only thing different was my _search_.  Absolutely nothing else.
>>
>>
>>>
>>>If you make several changes in the same experiment, then it's not easy to say
>>>which is causing what. Maybe you threw out the baby with the bathwater.
>>
>>I made zero changes for the experimental setup.  I later changed from predicting
>>to searching until time ran out and got better results.  That's all I said.
>>That's all I have _ever_ said.
>>
>>
>>>
>>>>  I just pointed out that _lots_ of things in my search are
>>>>different,
>>>
>>>Of course, and your point would be...?
>>
>>One of us is on a different page.  You now say "if you make several changes..."
>>when I _never_ do that and never have.  Where that came from I have absolutely
>>no idea.  I simply pointed out that I had a search that did what you suggest,
>>namely using a null-window search at the start of each iteration.  We went
>>downhill from that point somehow...
>
>A null-window search at the start of each iteration was not what Sune suggested.
>
>He suggested a null-window search only in an iteration that he is almost sure he
>does not have enough time to complete.  That means he does not even suggest doing
>the null-window search for every move, because there may be moves where he
>believes he has time to finish the iteration but then finds that he does not, so
>he stops in the middle of the iteration.

I understand.  But my point was that on the _last_ iteration I started with a
null-window search _also_, which is what he suggested.  The fact that I started
_other_ iterations with a null-window search too is not relevant; I was already
doing that at the time...
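
As a rough illustration of the scheme being restated here (the function names
and the search_root() interface are hypothetical; the 20% figure is Sune's
estimate from elsewhere in this thread, not mine):

    /* Sketch of the suggestion: only when the coming iteration is not
       expected to finish does the engine fall back to a null-window check
       of the previous best move.  Not Crafty's actual code.               */

    extern double time_left(void);                             /* hypothetical */
    extern int    search_root(int alpha, int beta, int depth); /* hypothetical */

    int start_iteration(int depth, int prev_score, double last_iter_time) {
      /* Optimistic floor for the next iteration's cost: 20% of the last one. */
      double optimistic_cost = 0.20 * last_iter_time;

      if (time_left() < optimistic_cost) {
        /* Cannot finish even an unusually cheap iteration: just ask whether
           the previous best move still reaches its old score.  >= prev_score
           means keep it; a fail low means the move is in trouble.          */
        return search_root(prev_score - 1, prev_score, depth);
      }
      /* Otherwise run the iteration normally, here with a simple aspiration
         window around the previous score.                                  */
      return search_root(prev_score - 25, prev_score + 25, depth);
    }
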


>
>>
>>
>>>
>>>> so using your logic, we can't compare _anything_.
>>>
>>>Before you start analysing my logic it would be nice if you could understand
>>>what I'm talking about.
>>
>>I believe I understand the idea perfectly.  It isn't that complicated and it
>>isn't new.  I simply don't like it.  If you do, fine.  I'm obviously not
>>responsible for your engine nor you mine.
>>
>>
>>>
>>>>I agree the idea is broken.  But it wasn't _my_ idea. :)
>>>
>>>So you make your conclusions _before_ you run the experiment?
>>>
>>>Fascinating, and how incredibly error-prone.
>>
>>Never did that.  Never said I did.  So there definitely is something incredibly
>>error-prone here.  But not my testing methodology...
>>
>>
>>
>>
>>>
>>>>>I don't believe this can practically happen if you use careful estimates.
>>>>
>>>>I know it _does_ happen.  I am looking at log files all the time and see some
>>>>very unusual timing issues that are surprising...
>>>
>>>Well, maybe it won't work in Crafty then.
>>>
>>>I'm sure you'll come to that conclusion regardless with your very objective way
>>>of concluding things.
>>
>>Since I have tried other ways, yes I have reached the conclusion that what I do
>>is best for my program.  I arrived at _every_ decision made in Crafty by lots of
>>testing.
>>
>>
>>>
>>>>>Ok, so you don't want any more hassle with this part of your program.
>>>>>It is perfect as it is! ;)
>>>>
>>>>
>>>>"Perfect as it is" is your term.  "better as it is" is my term.
>>>
>>>Ah, so you admit there is room for improvement!
>>>
>>>That in itself is an improvement I think :)
>>
>>
>>Have you _ever_ seen me claim anything I do is _perfect_?  I'll give you the
>>same challenge as I have repeatedly given to Vincent.  Please post a quote or
>>reference or citation of such a claim.  I have _often_ said "I tried that and
>>didn't like it but you should test to see if it works for you" however.  That's
>>hardly a claim of "perfection" IMHO.
>>
>>
>>
>>
>>>
>>>>>I'm just not ready quite yet to throw in the towel.
>>>>>
>>>>>>>Starting off by a careful estimate, eg. 20%, should be safe and good enough to
>>>>>>>assert if it works or not.
>>>>>>
>>>>>>
>>>>>>Where does 20% come from?  v=rand()*100 or something similar?
>>>>>
>>>>>Experience, from watching a lot of engine output.
>>>>>
>>>>>I cannot remember _ever_ having seen Time(ply N+1) < 0.20*Time(ply N)
>>>>
>>>>
>>>>I posted one such example already.  Another way to see odd results is a deep
>>>>ponder search on the wrong move, where the hash table then provides scores that
>>>>make search times way beyond unpredictable due to transpositions between the
>>>>move pondered and the move played.
>>>
>>>I repeat: I have _never_ seen such a position.
>>
>>What does that mean?  I've never seen an atom split.  Doesn't mean it doesn't
>>happen and that others have not done it.
>>
>>
>>
>>
>>
>>>
>>>you posted this, as an example of an extreme position:
>>>
>>>>>               35->  15.93   8.92   1. Kb1 Kb7 2. Kc1 Kb8 3. Kc2 Kb7 4.
>>>>>                                    Kc3 Kc7 5. Kd3 Kb7 6. Ke2 Kc7 7. Kf3
>>>>>                                    Kd7 8. Kg3 Ke7 9. Kh4 Kf6 10. Kh5 Kf7
>>>>>                                    11. Kg5 Kg7 12. Kxf5 Kf7 13. Ke4 Ke8
>>>>>                                    14. Kd3 Ke7 15. Kc4 <HT>
>>>>>               36    17.33   8.92   1. Kb1 Kb7 2. Kc1 Kb8 3. Kc2 Kb7 4.
>>>>>                                    Kc3 Kc7 5. Kd3 Kb7 6. Ke2 Kc7 7. Kf3
>>>>>                                    Kd7 8. Kg3 Ke7 9. Kh4 Kf6 10. Kh5 Kf7
>>>>>                                    11. Kg5 Kg7 12. Kxf5 Kf7 13. Ke4 <HT>
>>>>>               36->  17.33   8.92   1. Kb1 Kb7 2. Kc1 Kb8 3. Kc2 Kb7 4.
>>>>>                                    Kc3 Kc7 5. Kd3 Kb7 6. Ke2 Kc7 7. Kf3
>>>>>                                    Kd7 8. Kg3 Ke7 9. Kh4 Kf6 10. Kh5 Kf7
>>>>>                                    11. Kg5 Kg7 12. Kxf5 Kf7 13. Ke4 <HT>
>>>>>               37    18.68   8.92   1. Kb1 Kb7 2. Kc1 Kb8 3. Kc2 Kb7 4.
>>>>>                                    Kc3 Kc7 5. Kd3 Kb7 6. Ke2 Kc7 7. Kf3
>>>>>                                    Kd7 8. Kg3 Ke7 9. Kh4 Kf6 10. Kh5 Kf7
>>>>>                                    11. Kg5 Kg7 12. Kxf5 Kf7 13. Ke4 <HT>
>>>>>               37->  18.70   8.92   1. Kb1 Kb7 2. Kc1 Kb8 3. Kc2 Kb7 4.
>>>>>                                    Kc3 Kc7 5. Kd3 Kb7 6. Ke2 Kc7 7. Kf3
>>>>>                                    Kd7 8. Kg3 Ke7 9. Kh4 Kf6 10. Kh5 Kf7
>>>>>                                    11. Kg5 Kg7 12. Kxf5 Kf7 13. Ke4 <HT>
>>>
>>>a = time(ply 36) = 17.33-15.93 = 1.40
>>>b = time(ply 37) = 18.70-17.33 = 1.37
>>>
>>>So we have b = 0.979*a.
>>>I don't know where you took math, but I was taught that 0.979 is MUCH LARGER
>>>than 0.20.
>>>
>>>Correct me if I'm wrong.
>>>
>>>-S.
>>
>>
>>What does that have to do with anything?  You said "if less than 20% of time
>>remaining, don't start another iteration."  That will fail in the above case if
>>you simply assume your target search time is (say) 20 seconds.  Yet there is
>>_plenty_ of time for another iteration to finish.
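
For concreteness, the rule I am objecting to, applied to the numbers in the log
quoted above (the 20-second target is the "say" figure; everything else is read
straight off the log):

    /* Illustration only: "less than 20% of the target time left, so don't
       start another iteration" applied to the log quoted above.           */
    #include <stdio.h>

    int main(void) {
      double target      = 20.0;            /* assumed target time          */
      double after_36    = 17.33;           /* clock when iter 36 finished  */
      double after_37    = 18.70;           /* clock when iter 37 finished  */
      double iter37_cost = after_37 - after_36;   /* 1.37 seconds           */

      double remaining = target - after_36;       /* 2.67 seconds           */
      double threshold = 0.20 * target;           /* 4.00 seconds           */

      if (remaining < threshold)
        printf("rule: stop after iteration 36 (%.2fs < %.2fs)\n",
               remaining, threshold);
      printf("but iteration 37 cost only %.2fs and finished at %.2fs, "
             "inside the %.0fs target\n", iter37_cost, after_37, target);
      return 0;
    }
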
>
>I think that there is some misunderstanding here.
>
>I am not going to look exactly what he said in earlier posts but it is clear to
>me by his example that he meant:
>
>"Do not start a new iteration if you cannot finish it even in case that the time
>of the next iteration(ply N+1 minus ply N) is going to be 20% of the time of
>this iteration(ply N minus ply N-1)"

That makes no sense.  The next iteration will take only 20% of the time required
for the current iteration?  What I understood was that if less than 20% of the
target time is left, there is no reason to start another iteration.  Yet, as my
example above shows, this is not always true.  The effective branching factor is
a pretty big variable.
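
For contrast, a sketch of the reading given above (the predicate and its names
are hypothetical); the hidden quantity is the effective branching factor, which
is anything but constant:

    /* Sketch of the estimate-based rule as restated above.  The 0.20 floor
       is Sune's figure; the EBF guess is the weak point, since the
       hash-assisted endgame iterations in the log above have an EBF near
       1.0 while a fresh middlegame position can cost far more per ply.    */

    /* Refuse a new iteration only when there isn't even time for an
       unusually cheap one (20% of the previous iteration's cost).         */
    int may_start_next_iteration(double remaining, double last_iter_cost) {
      return remaining >= 0.20 * last_iter_cost;
    }

    /* The quantity the rule is really trying to guess; any fixed ebf_guess
       is exactly the variable that makes the prediction unreliable.       */
    double predicted_next_iteration(double last_iter_cost, double ebf_guess) {
      return last_iter_cost * ebf_guess;
    }
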

>
>Uri


