Computer Chess Club Archives


Subject: Re: New crap statement ? Perpetuum mobile

Author: Bruce Moreland

Date: 23:12:49 10/03/01


On October 04, 2001 at 00:28:35, Dave Gomboc wrote:

>On October 04, 2001 at 00:11:37, Uri Blass wrote:
>
>>On October 03, 2001 at 18:53:31, Dave Gomboc wrote:
>>
>>>Observing that the parallel algorithm was super-linearly quicker immediately
>>>gives rise to a superior sequential algorithm: alternate checking from the front
>>>of the array and the back of the array for the zero.  It would quickly be
>>>observed that the parallel algorithm does not have super-linear speedup over
>>>this superior sequential algorithm.  In addition, the sequential algorithm may
>>>be further optimized by ignoring the front of the array altogether, which would
>>>reduce the parallel speedup further.
>>>
>>>To ignore the superior sequential algorithm and instead publish results
>>>comparing only the original, poor sequential algorithm with the parallel
>>>algorithm is not merely disingenuous but simply bad science.
>>>
>>>Dave
>>
>>The question here is a practical question.
>>
>>It is possible that the original sequential algorithm that chess programs use is
>>a poor algorithm and this is the point of Bruce.
>>
>>Uri
>
>And that point was already superseded weeks ago.  Both Bob and I have clearly
>stated earlier that comparing a parallel algorithm against a _bad_ sequential
>algorithm is useless.  You must compare the parallel algorithm to the _best_
>available sequential algorithm.  To do anything but this is absurd, because if
>you don't insist on the best available sequential algorithm then you can report
>an arbitrarily large speedup.
>
>Once you do insist on the _best_ available sequential algorithm, overall
>super-linear speedup is not possible, for the class of computations executable
>by Turing machines.  (I.e., the claim about consistent super-linear speedup not
>being possible is not made for quantum computation.)
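
For what it's worth, the improved sequential algorithm Dave describes (alternate
between checking the front and the back of the array until the zero turns up)
might look something like this.  This is just a sketch, not code from any of the
posts, and it assumes a plain array of ints:

#include <stddef.h>

/* Alternating sequential scan: check the front, then the back, working
 * toward the middle, and stop as soon as a zero is found.
 * Returns the index of the zero, or n if there is none. */
size_t find_zero_alternating(const int *a, size_t n)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        if (a[lo] == 0)
            return lo;
        lo++;
        if (lo < hi && a[hi - 1] == 0)
            return hi - 1;
        hi--;
    }
    return n;
}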

The task is to make a general-purpose single-processor algorithm that finds the
lowest value in an array.

The second task is to make a general-purpose dual-processor algorithm that does
the same thing.

A sequential search through the array would seem like the best solution in the
first case.
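
A minimal sketch of that single-processor scan (hypothetical code, not from the
thread; it assumes n >= 1 and that the values are non-negative ints, so a zero
is the lowest value that can possibly appear and the scan can quit as soon as it
sees one):

#include <stddef.h>

/* Single front-to-back scan for the lowest value in a[0..n-1].
 * Stops early on a zero, since nothing can be lower. */
int find_lowest(const int *a, size_t n)
{
    int best = a[0];
    for (size_t i = 0; i < n && best != 0; i++)
        if (a[i] < best)
            best = a[i];
    return best;
}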

A pair of sequential searches through the array, one going front to back, and
the other going back to front, seems like a fine solution here.
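
And a sketch of the dual-processor version, under the same assumption that zero
is the lowest possible value, so either scan can quit early and tell the other
one to quit too.  This is hypothetical pthreads code, not anything from the
thread:

#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

/* One thread scans the front half forward, the other scans the back half
 * backward.  Either thread stops early if it sees a zero and raises a
 * shared flag so the other one stops too.  Assumes n >= 1. */
struct scan_args {
    const int *a;
    size_t lo, hi;        /* half-open range [lo, hi) */
    int backward;         /* scan direction */
    int result;
    atomic_int *stop;     /* shared early-exit flag */
};

static void *scan(void *p)
{
    struct scan_args *s = p;
    int best = s->a[s->backward ? s->hi - 1 : s->lo];
    for (size_t k = 0; k < s->hi - s->lo; k++) {
        if (atomic_load(s->stop))
            break;
        size_t i = s->backward ? s->hi - 1 - k : s->lo + k;
        if (s->a[i] < best)
            best = s->a[i];
        if (best == 0) {              /* nothing can be lower */
            atomic_store(s->stop, 1);
            break;
        }
    }
    s->result = best;
    return NULL;
}

int find_lowest_par(const int *a, size_t n)
{
    atomic_int stop = 0;
    struct scan_args front = { a, 0, n / 2, 0, a[0], &stop };
    struct scan_args back  = { a, n / 2, n, 1, a[n - 1], &stop };
    pthread_t t;
    pthread_create(&t, NULL, scan, &back);   /* second processor */
    scan(&front);                            /* first processor */
    pthread_join(t, NULL);
    return front.result < back.result ? front.result : back.result;
}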

I threw everyone a curve ball by suggesting that the data is pathological for
the first algorithm and optimal for the second, but that's life.  Sometimes you
don't know what your data is going to be like, and sometimes a lot of it is
bad.

Perhaps it would have been possible to design the first one so that it searches
the elements in random order, if you thought there was any possibility that
you'd encounter data that was this pathological, but that's not the point.
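
If you did want that, one way to do it (again a hypothetical sketch, assuming
n >= 1, non-negative ints, and a seeded rand()) is to shuffle the indices first
and then scan in the shuffled order:

#include <stdlib.h>
#include <stddef.h>

/* Visit the elements in random order: Fisher-Yates shuffle of the
 * indices, then a scan that stops early on a zero. */
int find_lowest_random_order(const int *a, size_t n)
{
    size_t *idx = malloc(n * sizeof *idx);
    for (size_t i = 0; i < n; i++)
        idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {          /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t tmp = idx[i];
        idx[i] = idx[j];
        idx[j] = tmp;
    }
    int best = a[idx[0]];
    for (size_t i = 0; i < n && best != 0; i++)
        if (a[idx[i]] < best)
            best = a[idx[i]];
    free(idx);
    return best;
}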

I am unconvinced that the appeals to Knuth and Isaac Newton are warranted.

bruce

>
>Dave


