Author: Robert Hyatt
Date: 19:49:14 10/04/01
On October 04, 2001 at 18:32:15, Dave Gomboc wrote:

>On October 04, 2001 at 02:01:59, Bruce Moreland wrote:
>
>>On October 03, 2001 at 18:53:31, Dave Gomboc wrote:
>>
>>>Observing that the parallel algorithm was super-linearly quicker immediately
>>>gives rise to a superior sequential algorithm: alternate checking from the front
>>>of the array and the back of the array for the zero. It would quickly be
>>>observed that the parallel algorithm does not have super-linear speedup over
>>>this superior sequential algorithm. In addition, the sequential algorithm may
>>>be further optimized by ignoring the front of the array altogether, which would
>>>reduce the parallel speedup further.
>>>
>>>To ignore the superior sequential algorithm and instead publish results
>>>comparing only the original, poor sequential algorithm with the parallel
>>>algorithm is not merely disingenuous but simply bad science.
>>>
>>>Dave
>>
>>My sequential algorithm is just fine if you have no idea what the data looks
>>like. My parallel algorithm is also just fine if you have no idea what the
>>data looks like.
>>
>>The fact that the data encountered in real life creates a worst case for the
>>sequential algorithm and a best case for the parallel algorithm is a happy
>>coincidence. It also refutes the idea that a dual-processor algorithm cannot
>>be faster than 2x the speed of the single-processor algorithm that it is
>>derived from, which was the point of the exercise. I was trying to demonstrate
>>that the parallel algorithm is not equivalent to the single-processor
>>algorithm running on two processors -- it is a different algorithm.
>
>It however does _not_ refute the idea that a dual-processor algorithm cannot
>be faster than 2x the speed of the _best_ single-processor algorithm.
>
>Furthermore, I agree that the parallel algorithm you gave is not equivalent to
>the sequential algorithm you gave.
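[A minimal sketch of the alternating sequential scan described in the quote above -- probe the front and back of the array in turn until the cursors meet. The array contents in the examples are illustrative assumptions, not data from the thread.]

```python
def find_zero_alternating(a):
    """Scan alternately from the front and the back of the array,
    returning the index of the first 0 probed, or -1 if none exists."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        if a[lo] == 0:
            return lo
        if a[hi] == 0:
            return hi
        lo += 1
        hi -= 1
    return -1  # no zero present
```

On data whose zero sits near the end -- the worst case for a plain front-to-back scan, and the best case for the two-processor split -- this single-processor version finds it in a handful of probes, which is why it removes the apparent super-linear speedup.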
>Indeed, your
>search-the-array-from-each-direction-with-one-processor algorithm is not
>"derived" from the original -- more clear would be that the original string is
>split into "n" chunks where n is the number of processors, either chunked or
>interleaved (e.g. 9 characters, 3 processors)
>
>111111111111111111 <- single cpu case
>123123123123123123 <- interleaved 3 cpu case
>111111222222333333 <- chunked 3 cpu case
>
>This is entirely tangential to the discussion, though. What's important is
>that you can create a sequential algorithm that searches the string in the
>above-described fashion, the fashion of the original parallel algorithm that
>you described, or any other fashion.
>
>>Of course, if you know what the data looks like, you can make a sequential
>>algorithm that solves the problem, but it is permissible to stipulate that
>>you don't know what the data looks like.
>>
>>Bob was saying that you can't get a more than 2x speedup with two processors
>>no way no how except in some pathological cases.
>>
>>I could have responded by suggesting that pathological cases could be common
>>in some domains, but I thought this might be too big a pill to swallow, so I
>>tried to come up with some examples that I thought everyone could understand
>>and agree on.
>>
>>Silly me.
>>
>>The long argument about whether dragging something is necessarily *more*
>>efficient than carrying something is especially crazy, as anyone who has
>>tried to get a recalcitrant two year-old up a flight of stairs can tell you
>>instantly. Of course now I've changed that example slightly and I look
>>forward to another long argument about thermodynamics as applied to little
>>kids. Or I should say, I look forward to ignoring such an argument.
>>
>>I provided an overly simple but not completely crazy example of an algorithm
>>where you'd get way more than 2x speedup if you parallelized it in an obvious
>>way for two processors. Bob proceeded to discuss another algorithm entirely.
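[The chunked-versus-interleaved split in the quoted diagrams can be sketched as two ways of dealing out array indices to processors; the function names are illustrative, not from the thread.]

```python
def chunked(n, p):
    """Split indices 0..n-1 into p contiguous chunks ("111222333")."""
    size = (n + p - 1) // p  # ceiling division so every index is covered
    return [list(range(i * size, min((i + 1) * size, n))) for i in range(p)]

def interleaved(n, p):
    """Deal indices 0..n-1 round-robin to p processors ("123123123")."""
    return [list(range(i, n, p)) for i in range(p)]
```

For n=9 and p=3, chunked gives [[0,1,2],[3,4,5],[6,7,8]] and interleaved gives [[0,3,6],[1,4,7],[2,5,8]], matching the two pictures above; a sequential algorithm can of course visit the indices in either order itself, which is the point being made.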
>>
>>I have better things to do than try to convince a mule that it is stubborn.
>>The mule will be stubborn no matter what I say, since that is its nature. I
>>have better things to do than spend time arguing with people who don't
>>listen.
>>
>>bruce
>
>I think you're confusing "not listening" with "not agreeing" here.
>
>Dave

That is correct. There is a huge difference between "not listening" and
"listening but absolutely not agreeing with what I hear." I've cited
well-known authors. I have shot down every super-linear example that has been
given. But I am stubborn as a mule for doing this, it seems. In that case,
what is the person that keeps proposing algorithms that don't deliver
super-linear speedups?

I don't mind the discussion. I don't mind the repetition. But I don't
particularly like being called "stubborn" rather than "correct" here.
Super-linear speedup is a pure myth, except for pathological cases that are
infrequent. To claim a super-linear speedup, you can't pick one good test data
set and go "ta-daaaaaa" there it is. You must produce a _bunch_ of typical
test sets and get a super-linear average over all of them. Or else an
"ugh...." is appropriate...
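[The measurement being asked for -- speedup averaged over a bunch of test sets, not one flattering case -- amounts to this small calculation. The timing numbers are hypothetical, purely for illustration.]

```python
def average_speedup(seq_times, par_times):
    """Average of the per-test-set speedups t_seq / t_par."""
    speedups = [s / p for s, p in zip(seq_times, par_times)]
    return sum(speedups) / len(speedups)

# Hypothetical timings (seconds) over five test sets, 2 processors:
seq = [10.0, 8.0, 12.0, 9.0, 11.0]
par = [6.0, 4.5, 5.5, 5.0, 6.0]
avg = average_speedup(seq, par)
# The claim is super-linear only if this average exceeds the
# processor count (2 here), not if a single best case does.
```

One test set beating 2x proves nothing on its own; the average over typical inputs is what has to clear the bar.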
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.