Computer Chess Club Archives



Subject: Re: Depth vs Time

Author: Robert Hyatt

Date: 17:14:39 06/25/02


On June 25, 2002 at 17:42:58, Tony Werten wrote:

>On June 25, 2002 at 17:30:51, Ulrich Tuerke wrote:
>
>>On June 25, 2002 at 02:40:59, Gian-Carlo Pascutto wrote:
>>
>>>On June 24, 2002 at 18:53:24, Steve Coladonato wrote:
>>>
>>>>>I wonder what you consider 'comparable'. There's no guarantee
>>>>>they'll be similar whatsoever.
>>>>
>>>>That was not a well formed statement on my part.  What I meant was that for a
>>>>given ply depth, the evaluation that program X comes up with should be
>>>>comparable to the evaluation that program Y comes up with if both programs are
>>>>fairly equal in overall strength.
>>>
>>>No. There is no guarantee whatsoever that this is true.
>>>
>>>>Therefore, if the algorithms/heuristics that
>>>>program X uses allow it to get to ply M faster than program Y, then program X
>>>>should win if the time allowed constrains how much time each program can use for
>>>>analysis at that depth.  For example, if program X can get to ply 11 in 30 secs
>>>>and program Y takes 1 min 30 secs to get there, the overall analysis that
>>>>program X can generate during a game should be better than that generated by
>>>>program Y and program X should win.  So it seems that the efficiency of the
>>>>algorithms/heuristics will determine the overall strength of a program.
>>>
>>>Again, this is completely false.
>>>
>>>I will repeat what I said several times earlier in this thread, and that
>>>is that plies are not comparable between chess programs. The analysis of
>>>one program at ply 11 can be completely different from, and of higher
>>>quality than, another program's at ply 11. If the second program reaches
>>>ply 11 faster, we have no information at all to make any solid conclusions
>>>about the relative strength of those programs.
>>
>>Completely agreed. The integer we are talking about would be better called
>>"iteration number". It basically defines how many times the search has been
>>restarted, each time exploiting the results of the preceding iteration in
>>order to extend the search tree.
>>IMHO, the relation of iteration number to search depth is a very loose one,
>>bearing in mind that today's programs are heavily pruning as well as extending.
>>
>
>Hmm. I can imagine that a program that uses partial-ply extensions might
>decide, when the time limit is almost reached, to start an iteration that
>goes only half a ply deeper.
>
>Or even worse: everyone uses iterative deepening, but did anybody ever prove
>that full plies are best? Maybe 2/3 of a ply is better?
>
>Tony

I didn't "prove" it, but I did test a bunch of different increments a few
years ago, from .5 to 2, and liked 1 the best.  Sometimes going by .5 would
go instantly since 1/2 ply is not really an extension unless something else
gets added.  The bad thing was that on occasion, critical hash info would
cause the N+.5 search to take longer than necessary since the search had to
be re-done if the hash table was overwritten in a critical spot.  Then that
would be wasted...
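
To make that concrete (made-up numbers, assuming depth is counted in units
of 1/6 ply and one full ply is subtracted per move): an 11-ply root search
starts with 66 units and hits the horizon after 11 moves with 0 left; an
11.5-ply search starts with 69 units and still hits the horizon after 11
moves, with 3 units left over that simply get truncated.  Only when a
fractional extension somewhere along the line adds to those leftover 3
units does the N+.5 iteration actually see anything deeper.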

I didn't do exhaustive testing, however, just a bunch of positions with
various increments from .5 to 2.0...  Everyone should try it to see whether
something other than 1.0 is better for their program.  And once they do,
they should probably re-test yearly to make sure the answer is still the
same.
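
For anyone who wants to try it, the driver loop itself is trivial.  Here is
a rough sketch with depth kept in units of 1/6 ply so that 0.5 and 2/3 steps
come out exact.  The names (IterateWithStep, Search, TimeLeft, PLY) are
placeholders I made up for illustration, and Search()/TimeLeft() are stubs,
not code from Crafty or any other real engine:

    #include <stdio.h>

    #define PLY       6          /* internal depth units per full ply      */
    #define MAX_ITER 64          /* safety cap on the number of iterations */

    static int TimeLeft(void)    { return 1; }      /* stub: time check     */
    static int Search(int depth) { return depth; }  /* stub: the real search */

    int IterateWithStep(int step) {
      int depth = PLY;           /* first iteration searches one full ply  */
      int score = 0;
      int i;

      for (i = 0; i < MAX_ITER && TimeLeft(); i++) {
        score = Search(depth);   /* each pass reuses the hash moves,       */
                                 /* killers, etc. from the previous one    */
        depth += step;           /* PLY = 1.0 ply, PLY/2 = 0.5 ply,        */
                                 /* 4 = 2/3 ply, 2*PLY = 2.0 plies         */
      }
      return score;
    }

    int main(void) {
      printf("score = %d\n", IterateWithStep(PLY / 2));   /* 0.5-ply steps */
      return 0;
    }

The only real decision is the size of the internal unit; everything Tony
asked about is just a different value of "step".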



>
>>Uli
>>
>>>
>>>--
>>>GCP


