Computer Chess Club Archives


Subject: Re: Speedup?

Author: Robert Hyatt

Date: 15:47:44 09/06/02



On September 06, 2002 at 17:53:41, Slater Wold wrote:

>On September 06, 2002 at 12:25:37, Robert Hyatt wrote:
>
>>On September 06, 2002 at 08:14:31, Slater Wold wrote:
>>
>>>On September 06, 2002 at 01:41:25, Dave Gomboc wrote:
>>>
>>>>On September 05, 2002 at 22:37:18, Slater Wold wrote:
>>>>
>>>>>On September 05, 2002 at 21:33:40, Dann Corbit wrote:
>>>>>
>>>>>>On September 05, 2002 at 21:29:09, martin fierz wrote:
>>>>>>
>>>>>>>On September 05, 2002 at 21:18:42, Slater Wold wrote:
>>>>>>>
>>>>>>>>Any comments/thoughts/ideas/suggestions welcome.
>>>>>>>
>>>>>>>great stuff slater!
>>>>>>>what i'd like to see is not only an average speedup (defined as time ratio, not
>>>>>>>nps ratio - i think that time ratio is what counts) as you give, but rather a
>>>>>>>list of all 300 speedups you observed, so we can see how the values are
>>>>>>>distributed (you gave 2 extreme examples) - or you can just give us the standard
>>>>>>>error on the speedup. what would also be interesting is if you reran the 2 CPU
>>>>>>>test (maybe more than once....), and recomputed the average, and looked at how
>>>>>>>variable the average speedup is over such a large number of test positions. i'd
>>>>>>>think that at least the average should be fairly stable, but even that seems to
>>>>>>>be unclear...
>>>>>>
>>>>>>By what means are you limiting the search?
>>>>>>
>>>>>>Did you set time in seconds or depth in plies or what?  It will make a very big
>>>>>>difference on how we might interpret the results.
>>>>>>
>>>>>>Hash tables can share hits and mask speedup.
>>>>>>
>>>>>>Timed searches can suffer from the same effect.
>>>>>>
>>>>>>Depth in ply searches are probably the most reliable comparisons, but it is
>>>>>>impossible to know which ply level is sensible since some problems may take
>>>>>>weeks to reach ten plies and others may reach 32 plies in a few seconds.
>>>>>>
>>>>>>In short, the real difficulty here is designing the experiment.  Quite frankly,
>>>>>>I don't know the best way to proceed.
>>>>>
>>>>>I understand what you're getting at, but I do not agree.  Simply because the
>>>>>definition of "relative speedup" is "the ratio of the
>>>>>serial run time of a parallel application for solving a problem on a
>>>>>single processor, to the time taken by the same parallel application
>>>>>to solve the same problem on n processors".  It's all about "run time" and less
>>>>>about "run parameters".  IMO.
>>>>>
>>>>>As long as both runs were using the *same exact* settings, I think all would be
>>>>>fair.
>>>>>
>>>>>
>>>>>Also, I simply used 'st 60' in Crafty.  A *lot* of positions were thrown out
>>>>>because a.) they were solved at root or b.) the search time was less than 60
>>>>>seconds.
>>>>
>>>>Don't you want to be doing something like 'sd 10' and computing
>>>>time(2cpu)/time(1cpu)?
>>>>
>>>>Dave
>>>
>>>*I* don't think so.  Because the classic definition of "relative speedup" is
>>>based on runtime.  Not depth.
>>
>>
>>Dave's point is that sd=n is the _easiest_ way to get runtime data.
>>
>>Search to a fixed depth on 1 cpu, then to the same fixed depth on 2
>>cpus, and you have _perfect_ timing data to compute the speedup...  Both
>>searched to the same depth, traversed the same tree, etc...
>
>I don't think so.
>
>If you take the position of WAC41 you will see there is a 1.19x NPS speedup.
>However, there is a 16x speedup to ply 11!
>

I'm not talking NPS at _all_ here.  I am talking about how long it takes to
complete depth X with 1 cpu and with 2 cpus.  Divide the 2cpu time into the 1
cpu time and that's the number...
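That ratio is the whole computation. A minimal sketch (the timings are hypothetical, just to illustrate the arithmetic):

```python
def speedup(time_1cpu, time_2cpu):
    """Relative speedup: wall-clock time to reach a fixed depth on 1 cpu,
    divided by the time to reach the same depth on 2 cpus."""
    return time_1cpu / time_2cpu

# Hypothetical times (seconds) for the same position searched to the same
# fixed depth (sd=n) on 1 cpu and on 2 cpus:
print(speedup(120.0, 75.0))  # 1.6
```

Note that NPS never enters into it; only the two wall-clock times to the same fixed depth.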


>Then again, if you take the position of WAC76 you will see there is a 1.62x NPS
>speedup.  However, there is a 1.44x speedup to ply 11.
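Given the position-to-position variability above, the per-position ratios are worth summarizing with more than a bare average, as martin suggests earlier in the thread. A minimal sketch of the mean and standard error over a set of per-position speedups (the sample values are hypothetical):

```python
import math

def mean_and_stderr(speedups):
    """Sample mean and standard error of the mean for a list of
    per-position speedups (each one is time_1cpu / time_2cpu)."""
    n = len(speedups)
    mean = sum(speedups) / n
    # Sample variance (n - 1 in the denominator), then SE = sqrt(var / n).
    var = sum((s - mean) ** 2 for s in speedups) / (n - 1)
    return mean, math.sqrt(var / n)

# Hypothetical per-position speedups from a small test set:
m, se = mean_and_stderr([1.4, 1.8, 1.6, 2.0, 1.2])
print(round(m, 2), round(se, 2))  # 1.6 0.14
```

Over 300 positions the standard error shrinks with sqrt(n), which is why the average should be fairly stable even when individual positions swing from well under 1.0x to superlinear.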



Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.