Computer Chess Club Archives



Subject: Re: New crap statement ? Perpetuum mobile

Author: Robert Hyatt

Date: 11:44:56 10/04/01



On October 04, 2001 at 10:33:01, Miguel A. Ballicora wrote:

>On October 04, 2001 at 03:03:37, Jeremiah Penery wrote:
>
>>On October 04, 2001 at 02:01:59, Bruce Moreland wrote:
>>
>>>On October 03, 2001 at 18:53:31, Dave Gomboc wrote:
>>>
>>>>Observing that the parallel algorithm was super-linearly quicker immediately
>>>>gives rise to a superior sequential algorithm: alternate checking from the front
>>>>of the array and the back of the array for the zero.  It would quickly be
>>>>observed that the parallel algorithm does not have super-linear speedup over
>>>>this superior sequential algorithm.  In addition, the sequential algorithm may
>>>>be further optimized by ignoring the front of the array altogether, which would
>>>>reduce the parallel speedup further.
>>>>
>>>>To ignore the superior sequential algorithm and instead publish results
>>>>comparing only the original, poor sequential algorithm with the parallel
>>>>algorithm is not merely disingenuous but simply bad science.
>>>>
>>>>Dave
>>>
>>>My sequential algorithm is just fine if you have no idea what the data looks
>>>like.  My parallel algorithm is also just fine if you have no idea what the data
>>>looks like.
>>>
>>>The fact that the data encountered in real life creates a worst case for the
>>>sequential algorithm and a best case for the parallel algorithm is a happy
>>>coincidence.
>>
>>The problem is that the data won't _always_ be that way.  If you average the
>>parallel speedup over all possible sets of data for that algorithm, the speedup
>>will provably not be super-linear.  You can't just pick one data point where you
>>do get a super-linear speedup and suggest that it always works that way.
>
>That is not Bruce's point (AFAIK). In a broad sense, for instance, alpha-beta
>search applies to different games. It is possible that there is one type
>of game where the "data" is "abnormal" when compared with other games.
>In this particular game, the data has the particular feature that allows
>this kind of speedup. Bruce said that "finding mate-in-one type games"
>might be one example, where the tree of that game is full of such
>anomalies. Of course, the data itself is not chosen; you choose the
>game. You cannot have the speedup in checkers, but you might have it in
>Artic-Chess (I just made it up). Who knows, maybe some kind of smart pruning
>is devised where you have the same kind of effect as "finding mate in one" and
>you can prune _immediately_ in certain kinds of positions (maybe hitting some
>tables, whatever). Then, you might have this super speedup.


I hate to keep answering, but the computer scientist in me will simply not
let that stand.  If you _know_ that the data is "abnormal" with lots of
shallow mates, then the time-slicing idea neatly refutes the super-linear
speedup.  Of course, the time-slicing algorithm is less efficient than one
pure search thread for normal cases.  But then the one pure search thread
is more efficient than a parallel search in the _same_ case.
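
For concreteness, here is a minimal sketch of that time-slicing idea, using
the find-the-zero array example from earlier in the thread (the code and
function names are mine, purely for illustration):

  #include <stddef.h>

  /* Plain sequential scan: probe the array front to back. */
  int find_zero_seq(const int *a, size_t n)
  {
      size_t i;
      for (i = 0; i < n; i++)
          if (a[i] == 0)
              return (int)i;
      return -1;
  }

  /* One processor time-slicing the two-processor search: alternate
     one probe in the front half with one probe in the back half.
     This visits elements in the same order the two processors would,
     so any "abnormal" data that favors the parallel search favors
     this sequential search too.  It does at most two probes per
     "step" of the parallel search, so the parallel version can never
     be more than 2x faster than this -- no super-linear speedup. */
  int find_zero_sliced(const int *a, size_t n)
  {
      size_t half = n / 2, i;
      for (i = 0; i < half; i++) {
          if (a[i] == 0)
              return (int)i;
          if (a[half + i] == 0)
              return (int)(half + i);
      }
      if ((n & 1) && a[n - 1] == 0)   /* odd length: last element */
          return (int)(n - 1);
      return -1;
  }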

So let's define the vagueness out of this.

Are we discussing the "average" case for chess or any other problem domain?
If so, super-linear speedup is impossible.

Are we discussing the "special cases" for chess or any other problem domain?
If so, then super-linear is _also_ impossible, as time-slicing will eliminate
the advantage of the parallel search even on one processor.

Are we discussing a "combination" of the above?  Super-linear speedup is _still_
impossible, because the average will never be super-linear.
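
To put a number on the "average" claim, take the array example again and
assume the zero is equally likely to be at any of the n positions (n even,
for simplicity):

  sequential scan, average probes:  (1 + 2 + ... + n) / n  =  (n+1)/2
  two processors, average probes:   the processor whose half holds the
                                    zero finds it after (n/2 + 1)/2
                                    =  (n+2)/4  probes of wall-clock time
  average speedup:                  ((n+1)/2) / ((n+2)/4)
                                    =  2(n+1)/(n+2)  <  2

The average speedup approaches 2.0 as n grows, but never reaches it.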

There is really nothing else to say.  I point out the flaw in the first case,
and Bruce offers the second case.  I do the math for all the cases, including
the one he offered, and he says, "but you should only consider the second
case."  I then say, "OK, I will compare one element from each half of the list
and proceed that way, which cancels the super-linear speedup," and he responds,
"but that would be slower in the normal case" (which his super-linear example
does not consider, by the way).
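
(Dave's variant from earlier in the thread -- alternating probes from the
two ends of the array -- can be sketched the same way; again the code is
mine and purely illustrative:)

  #include <stddef.h>

  /* Alternate probes from the front and the back of the array.  On
     data where the zero sits near the back, this does about as well
     as the two-processor split, on one processor. */
  int find_zero_ends(const int *a, size_t n)
  {
      size_t lo = 0, hi = n;
      while (lo < hi) {
          if (a[lo] == 0)
              return (int)lo;
          lo++;
          if (lo < hi && a[hi - 1] == 0)
              return (int)(hi - 1);
          hi--;
      }
      return -1;
  }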


This is going round and round...



>
>Then, a different question would be whether you can rewrite the sequential
>algorithm. Maybe you can, but it might not be convenient, or it will just
>have other bottlenecks because of the rewriting (you add code, so you might
>introduce overhead).
>
>Regards,
>Miguel
>
>
>
>>
>>>It also refutes the idea that a dual-processor algorithm cannot be
>>>faster than 2x the speed of the single-processor algorithm that it is derived
>>>from, which was the point of the exercise.  I was trying to demonstrate that the
>>>parallel algorithm is not equivalent to the single-processor algorithm running
>>>on two processors -- it is a different algorithm.
>>>
>>>Of course, if you know what the data looks like, you can make a sequential
>>>algorithm that solves the problem, but it is permissible to stipulate that you
>>>don't know what the data looks like.
>>
>>Which is exactly why you can't say that you will get a super-linear speedup on
>>average.
>>
>>>Bob was saying that you can't get a more than 2x speedup with two processors no
>>>way no how except in some pathological cases.
>>
>>I don't think that's what he was saying.  I think he is simply saying that the
>>average must be less than N, for N processors.
>>
>>>I could have responded by suggesting that pathological cases could be common in
>>>some domains, but I thought this might be too big a pill to swallow, so I tried
>>>to come up with some examples that I thought everyone could understand and agree
>>>on.
>>>
>>>Silly me.
>>>
>>>The long argument about whether dragging something is necessarily *more*
>>>efficient than carrying something is especially crazy, as anyone who has tried
>>>to get a recalcitrant two-year-old up a flight of stairs can tell you instantly.
>>> Of course now I've changed that example slightly and I look forward to another
>>>long argument about thermodynamics as applied to little kids.  Or I should say,
>>>I look forward to ignoring such an argument.
>>
>>>I provided an overly simple but not completely crazy example of an algorithm
>>>where you'd get way more than 2x speedup if you parallelized it in an obvious
>>>way for two processors.  Bob proceeded to discuss another algorithm entirely.
>>
>>Bob also showed that, on average, the speedup for that algorithm is less than
>>2.0.  That is the entire point of this discussion.
>>
>>>I have better things to do than try to convince a mule that it is stubborn.  The
>>>mule will be stubborn no matter what I say, since that is its nature.  I have
>>>better things to do than spend time arguing with people who don't listen.
>>
>>I think that's a bit unfair.  He may be guilty of it, but is the other side not
>>guilty of being overly stubborn?


