Author: Robert Hyatt
Date: 11:00:58 10/03/01
On October 03, 2001 at 13:17:20, Miguel A. Ballicora wrote:

>On October 03, 2001 at 11:56:38, Sune Fischer wrote:
>
>>On October 03, 2001 at 10:51:13, Miguel A. Ballicora wrote:
>>
>>>You did not like Bruce's example? This one does not involve scientific
>>>knowledge:
>>>
>>>Team A: one person, 1 cart, 100 boxes too heavy to lift by only one person.
>>>Team B: two persons, 1 cart, same 100 boxes.
>>>Task = move them 100 yards.
>>>
>>>Team B will take <50% of the time that it will take Team A. Lifting
>>>once onto a cart is easier than pushing.
>>
>>This analogy fails because there are tasks that can be done by two persons
>>and not by one alone.
>
>Exactly, some tasks can be done by two persons and not by one alone. That is
>how the mechanics of the solution change. That is what Bruce Moreland wanted
>to illustrate. Once you have two persons, you do not have to follow exactly
>the same procedure that you follow with only one person. It is more efficient
>to change the procedure and take advantage of that. That is a synergistic
>effect. The example was intended to show what a synergistic effect is and
>nothing more. For that purpose, the analogy is ok.
>
>>This is not the case for CPUs.
>
>The question is: could it be the case for two threads? I say "probably" not,
>Bob Hyatt says "provably" not. That is the point of disagreement.
>
>Regards,
>Miguel

I say the following: given any algorithm F() that does some task in T units of
time, a parallel version of F() cannot produce an average speedup of more than
N using N processors. No exceptions. No nothing. That has a simple inductive
proof, first given by Knuth and used over and over by others in various books
and papers.

Note that _nobody_ here is developing a _new_ tree-search algorithm. In every
case F() is a normal alpha/beta search. If you develop some new algorithm G(),
then it is certainly possible that G() produces a speedup of more than N using
N processors when compared to F(). In that case, however, the comparison is
wrong: when you compare G() with N processors to G() with one processor, you
are back to a speedup <= N again. I don't see why this is so hard to grasp. I
had no trouble the first time I read the explanation. It is more than simply
logical. It is _obvious_.

Otherwise, I am _still_ waiting for someone to give some algorithm, no matter
how simple, that will _consistently_ produce a super-linear speedup. I don't
care if it produces a super-linear speedup in 2 of 6 cases; the 6-case average
has to be > 2 for it to count as consistent. Such an algorithm, by its very
definition, is impossible to write, because whatever algorithm you write, I
can time-slice the same thing serially and run faster than the old one-cpu
algorithm. Your speedup, compared to that rewrite, will no longer be
super-linear.

Even if it were not theoretically unsound, it would be logically unsound:
if you go more than 2x faster with 2 processors, the same argument makes you
go more than 4x faster with 4, more than 8x faster with 8, and so on without
bound. That would mean you could solve an infinite-sized problem with less
than an infinite number of processors, since your super-linear speedup would
have "proved" that the number of processors required stays finite.

The old O(1) vs. O(W^D) discussion was bad enough, resting on the theory that
at some point in time the entire (finite) chess tree can be searched
completely, which reduces the search to O(1). That will happen in a few
billion billion years, if the universe happens to last that long. This is
even further afield in absurdity.
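To make the time-slicing argument behind that inductive proof concrete, here
is a minimal C sketch. It is illustrative only: the worker structure and the
array-summing workload are hypothetical stand-ins for "any algorithm F()",
not code from any real engine or from Knuth's proof. One cpu simulates N
virtual processors by stepping them round-robin; counting one serial tick per
simulated processor step and one parallel tick per round shows that the
serial/parallel ratio, which is the best speedup the parallel version could
ever claim, never exceeds N.

    #include <stdio.h>

    #define N   4       /* number of simulated processors */
    #define LEN 1000    /* size of the toy workload       */

    static int data[LEN];

    /* Per-processor state: each worker sums a strided slice of data[]. */
    struct worker {
        int  next;      /* next index this worker will touch */
        long sum;       /* partial result                    */
    };

    /* One machine cycle of one processor; returns 0 once it is finished. */
    static int step(struct worker *w)
    {
        if (w->next >= LEN)
            return 0;
        w->sum  += data[w->next];
        w->next += N;
        return 1;
    }

    int main(void)
    {
        struct worker w[N];
        long parallel_ticks = 0, serial_ticks = 0, total = 0;
        int  i, busy;

        for (i = 0; i < LEN; i++)
            data[i] = i;
        for (i = 0; i < N; i++) {
            w[i].next = i;
            w[i].sum  = 0;
        }

        /*
         * Round-robin simulation: one pass of this loop is one parallel
         * tick (all N processors advance once); each productive step()
         * call is one serial tick.  serial_ticks can never exceed
         * N * parallel_ticks, so speedup = serial/parallel <= N.
         */
        for (;;) {
            busy = 0;
            for (i = 0; i < N; i++)
                if (step(&w[i])) {
                    serial_ticks++;
                    busy = 1;
                }
            if (!busy)
                break;
            parallel_ticks++;
        }

        for (i = 0; i < N; i++)
            total += w[i].sum;

        printf("sum = %ld, parallel ticks = %ld, serial ticks = %ld, "
               "ratio = %.2f (<= %d)\n",
               total, parallel_ticks, serial_ticks,
               (double)serial_ticks / parallel_ticks, N);
        return 0;
    }

With the numbers above it prints a ratio of exactly 4.00; any processor that
sits idle during a round only pushes the ratio further below N.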
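And to put made-up numbers on the "2 of 6" point: suppose the measured
speedups on two cpus over six test positions are 2.4, 2.2, 1.6, 1.8, 1.9 and
1.7 (hypothetical values, not from any real test). Two of the six runs are
super-linear, but the average is 11.6 / 6, about 1.93, which is below 2 and
therefore entirely consistent with the bound. Occasional super-linear cases
come from the second processor stumbling onto a cutoff move early; the bound
only constrains the average.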
But don't only ask me; contact any computer science department close to where
you live and ask someone there who has experience with parallel algorithms.
That would be an interesting exercise...