Computer Chess Club Archives



Subject: Re: Speedup?

Author: Dann Corbit

Date: 20:00:32 09/05/02


On September 05, 2002 at 22:37:18, Slater Wold wrote:

>On September 05, 2002 at 21:33:40, Dann Corbit wrote:
>
>>On September 05, 2002 at 21:29:09, martin fierz wrote:
>>
>>>On September 05, 2002 at 21:18:42, Slater Wold wrote:
>>>
>>>>Any comments/thoughts/ideas/suggestions welcome.
>>>
>>>Great stuff, Slater!
>>>What I'd like to see is not only an average speedup (defined as a time ratio,
>>>not an NPS ratio - I think the time ratio is what counts), as you give, but a
>>>list of all 300 speedups you observed, so we can see how the values are
>>>distributed (you gave 2 extreme examples) - or you can just give us the
>>>standard error on the speedup.  It would also be interesting if you reran the
>>>2-CPU test (maybe more than once), recomputed the average, and looked at how
>>>variable the average speedup is over such a large number of test positions.
>>>I'd think that at least the average should be fairly stable, but even that
>>>seems to be unclear.
>>
>>By what means are you limiting the search?
>>
>>Did you set time in seconds, or depth in plies, or what?  It will make a very
>>big difference in how we interpret the results.
>>
>>Hash tables can share hits and mask speedup.
>>
>>Timed searches can suffer from the same effect.
>>
>>Fixed-depth (depth-in-plies) searches are probably the most reliable
>>comparisons, but it is impossible to know which ply level is sensible, since
>>some positions may take weeks to reach ten plies and others may reach 32
>>plies in a few seconds.
>>
>>In short, the real difficulty here is designing the experiment.  Quite frankly,
>>I don't know the best way to proceed.
>
>I understand what you're getting at, but I do not agree, simply because the
>definition of "relative speedup" is "the ratio of the serial run time of a
>parallel application for solving a problem on a single processor, to the time
>taken by the same parallel application to solve the same problem on n
>processors".  It's all about "run time" and less about "run parameters".  IMO.
>
>As long as both runs were using the *same exact* settings, I think all would be
>fair.
>
>
>Also, I simply used 'st 60' in Crafty.  A *lot* of positions were thrown out
>because a.) they were solved at root or b.) the search time was less than 60
>seconds.
>
>WAC is probably not an "optimal" suite to use, because 99% of the positions are
>solved so easily.  If anyone wants to put something together for me that suits
>me better, I would greatly appreciate it.
>
>Going over 600 position logs is eating all my time at the moment.  ;)
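
A side note on the statistics requested above: they are cheap to compute once
the per-position times are collected.  A minimal sketch in Python, assuming
two hypothetical lists of wall-clock times for the positions that survived the
filtering (the variable names and the numbers are invented for illustration,
not anything Crafty actually produces):

    # Per-position speedups, their mean, and the standard error of the mean,
    # following the quoted definition: speedup = serial time / parallel time.
    # All times below are placeholder values.
    import math

    times_1cpu = [184.2, 95.7, 310.0, 142.5]   # seconds on 1 CPU
    times_2cpu = [101.3, 60.1, 155.2, 80.9]    # same positions on 2 CPUs

    speedups = [t1 / t2 for t1, t2 in zip(times_1cpu, times_2cpu)]

    n = len(speedups)
    mean = sum(speedups) / n
    var = sum((s - mean) ** 2 for s in speedups) / (n - 1)  # sample variance
    stderr = math.sqrt(var / n)                             # std. error of mean

    print(f"n = {n}")
    print(f"mean speedup = {mean:.2f} +/- {stderr:.2f}")
    print(f"min / max    = {min(speedups):.2f} / {max(speedups):.2f}")

Rerunning the 2-CPU test and recomputing, as suggested, then amounts to
repeating this over several runs and checking whether the mean moves by more
than the standard error.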

The problem in this case is:
What *exactly* are you measuring?  How are you calculating the speedup?

I doubt if you can do it accurately.

Two different "11-ply" searches can be drastically different; reaching the same
nominal depth can take a very different number of nodes and a very different
amount of time.

I am pretty sure that NPS alone will give a wildly wrong answer.  Or it may;
I'm not sure.  I am sure that I don't trust most common-sense sorts of
measurements unless they have a way to compensate for parallel effects (such
as improved hash-table utilization).
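
To make the NPS point concrete: a parallel search typically has to visit extra
nodes to reach the same depth (search overhead), so the ratio of node rates
overstates the real time-to-depth speedup.  A minimal sketch with invented
numbers, in the same hypothetical vein as above:

    # Three ratios one could report for a pair of fixed-depth searches.
    # Here the 2-CPU run has nearly double the node rate, but it also
    # searches 30% more nodes, so the time-to-depth speedup is smaller.
    # All numbers are invented for illustration.
    nodes_1cpu, time_1cpu = 50_000_000, 120.0   # one "11-ply" search, 1 CPU
    nodes_2cpu, time_2cpu = 65_000_000, 82.0    # same position, 2 CPUs

    nps_ratio = (nodes_2cpu / time_2cpu) / (nodes_1cpu / time_1cpu)
    overhead  = nodes_2cpu / nodes_1cpu
    speedup   = time_1cpu / time_2cpu

    print(f"NPS ratio       = {nps_ratio:.2f}")   # ~1.90
    print(f"search overhead = {overhead:.2f}")    # ~1.30
    print(f"time-to-depth   = {speedup:.2f}")     # ~1.46

The time-to-depth speedup is just the NPS ratio divided by the search
overhead, which is exactly why NPS alone overstates it.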


