Author: Jeremiah Penery
Date: 00:03:37 10/04/01
On October 04, 2001 at 02:01:59, Bruce Moreland wrote:

>On October 03, 2001 at 18:53:31, Dave Gomboc wrote:
>
>>Observing that the parallel algorithm was super-linearly quicker immediately
>>gives rise to a superior sequential algorithm: alternate checking from the front
>>of the array and the back of the array for the zero. It would quickly be
>>observed that the parallel algorithm does not have super-linear speedup over
>>this superior sequential algorithm. In addition, the sequential algorithm may
>>be further optimized by ignoring the front of the array altogether, which would
>>reduce the parallel speedup further.
>>
>>To ignore the superior sequential algorithm and instead publish results
>>comparing only the original, poor sequential algorithm with the parallel
>>algorithm is not merely disingenuous but simply bad science.
>>
>>Dave
>
>My sequential algorithm is just fine if you have no idea what the data looks
>like. My parallel algorithm is also just fine if you have no idea what the data
>looks like.
>
>The fact that the data encountered in real life creates a worst case for the
>sequential algorithm and a best case for the parallel algorithm is a happy
>coincidence.

The problem is that the data won't _always_ be that way. If you average the
parallel speedup over all possible sets of data for that algorithm, the speedup
will be provably not super-linear. You can't just pick one data point where you
do get a super-linear speedup and suggest that it always works that way.

>It also refutes the idea that a dual-processor algorithm cannot be
>faster than 2x the speed of the single-processor algorithm that it is derived
>from, which was the point of the exercise. I was trying to demonstrate that the
>parallel algorithm is not equivalent to the single-processor algorithm running
>on two processors -- it is a different algorithm.
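For readers who missed the earlier posts: a minimal sketch of the example as I understand it from the quoted text (find the zero in an array). The function names and the exact cost accounting are my own, not Bruce's; the point is only that scanning from both ends at once can beat the front-only scan by far more than 2x on one particular input.

```python
# Hypothetical reconstruction of the example under discussion: find the
# single zero in an array.  Cost = number of "steps" taken.

def scan_front(a):
    """Naive one-processor scan from the front; cost = elements examined."""
    for steps, x in enumerate(a, start=1):
        if x == 0:
            return steps
    return len(a)

def scan_both_ends(a):
    """Two-'processor' scan: one from the front, one from the back.
    Each step examines one element at each end."""
    i, j = 0, len(a) - 1
    steps = 0
    while i <= j:
        steps += 1
        if a[i] == 0 or a[j] == 0:
            return steps
        i += 1
        j -= 1
    return steps

# Zero near the end: worst case for the front scan, best case for the
# two-ended scan, hence a speedup far above 2x on this one input.
a = [1] * 1000 + [0]
print(scan_front(a), scan_both_ends(a))  # 1001 vs 1
```

Dave's "superior sequential algorithm" is just `scan_both_ends` run on one processor, alternating ends, which is why the super-linear speedup disappears once you compare against it.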
>Of course, if you know what the data looks like, you can make a sequential
>algorithm that solves the problem, but it is permissible to stipulate that you
>don't know what the data looks like.

Which is exactly why you can't say that you will get a super-linear speedup on
average.

>Bob was saying that you can't get a more than 2x speedup with two processors no
>way no how except in some pathological cases.

I don't think that's what he was saying. I think he is simply saying that the
average must be less than N, for N processors.

>I could have responded by suggesting that pathological cases could be common in
>some domains, but I thought this might be too big a pill to swallow, so I tried
>to come up with some examples that I thought everyone could understand and agree
>on.
>
>Silly me.
>
>The long argument about whether dragging something is necessarily *more*
>efficient than carrying something is especially crazy, as anyone who has tried
>to get a recalcitrant two-year-old up a flight of stairs can tell you instantly.
>Of course now I've changed that example slightly and I look forward to another
>long argument about thermodynamics as applied to little kids. Or I should say,
>I look forward to ignoring such an argument.
>
>I provided an overly simple but not completely crazy example of an algorithm
>where you'd get way more than 2x speedup if you parallelized it in an obvious
>way for two processors.

Bob proceeded to discuss another algorithm entirely. Bob also showed that, on
average, the speedup for that algorithm is less than 2.0. That is the entire
point of this discussion.

>I have better things to do than try to convince a mule that it is stubborn. The
>mule will be stubborn no matter what I say, since that is its nature. I have
>better things to do than spend time arguing with people who don't listen.

I think that's a bit unfair. He may be guilty of it, but is the other side not
guilty of being overly stubborn?
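The averaging claim is easy to check by hand for the find-the-zero example. This is my own quick sketch, not something from the thread: place the zero at every possible position in an n-element array, count the elements examined by the one-processor front scan and the parallel steps taken by the two-processor both-ends scan, and compare the averages.

```python
# Average-case speedup of a two-ended scan over a front scan, with the
# zero equally likely to be at any position.  My own illustration.

n = 1000

def front_cost(pos):
    # Elements examined scanning from index 0 until the zero at `pos`.
    return pos + 1

def both_ends_steps(pos):
    # Parallel steps until either the front or the back pointer hits `pos`.
    return min(pos, n - 1 - pos) + 1

seq = sum(front_cost(p) for p in range(n)) / n
par = sum(both_ends_steps(p) for p in range(n)) / n
print(seq / par)  # just under 2.0: averaged over all inputs, no super-linear speedup
```

The front scan averages (n+1)/2 examined elements; the both-ends scan averages roughly n/4 steps, so the ratio approaches 2 from below, which is the "average must be less than N" point.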
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.