Author: Robert Hyatt
Date: 19:41:29 10/04/01
On October 04, 2001 at 21:36:38, Miguel A. Ballicora wrote:

>On October 04, 2001 at 19:30:34, J. Wesley Cleveland wrote:
>
>>On October 04, 2001 at 17:18:43, Robert Hyatt wrote:
>>
>>>On October 04, 2001 at 14:22:09, J. Wesley Cleveland wrote:
>>>
>>>>On October 04, 2001 at 14:03:13, Robert Hyatt wrote:
>>>>
>>>>>On October 04, 2001 at 12:40:53, J. Wesley Cleveland wrote:
>>>>>
>>>>>>Take an iterated function F(a), i.e. F1(a) = a, F2(a) = F(a),
>>>>>>F3(a) = F(F(a)), ... Assume the function is complicated enough to use
>>>>>>all the available registers on your machine and requires the same
>>>>>>number of machine cycles for each iteration. Now find the first
>>>>>>iteration n for which Fn(b) > Fn(a). A serial implementation will
>>>>>>spend a significant amount of time saving and restoring state as it
>>>>>>alternates between the two sequences, while the two-processor solution
>>>>>>will only have to spend a minimal amount of time synchronizing at the
>>>>>>end of each iteration (due to my second condition).
>>>>>
>>>>>Explain how you are going to do a recursive function (I assume you mean
>>>>>recursive and not iterative above, given the way you defined the thing)
>>>>>in a parallel manner.
>>>>
>>>>I wrote recursive, but it could be implemented either way.
>>>
>>>In the text above I see "iterative", not "recursive".
>>>
>>>>The parallel part is that one processor calculates F1(a), F2(a), ...
>>>>while the other processor calculates F1(b), F2(b), ...
>>>
>>>In the text above you assume some finite number of registers, and you
>>>assume that on the machine you use, that is an issue. I choose to use a
>>>SPARC, which means that two different functions use two different sets of
>>>registers, and that advantage evaporates. Or I run it on a MIPS and use
>>>32 registers rather than 8.
>>>
>>[snip]
>>
>>What I am saying is that in a *real* situation, there may be significant
>>overhead in switching from one process to another, which will give a
>>super-linear speedup even though theory would predict only a linear speedup.
>
>This has not been addressed yet. Once it is admitted that time slicing
>is the way to go, the overhead is a theoretical obstacle to the denial
>of the superlinear speedup. 2.01 > 2.00.
>
>Regards,
>Miguel

Timeslicing need have _no_ penalty. Check out the SPARC architecture: zero
procedure-call overhead, zero register conflicts. Shoot, our old Xerox Sigma 9
had two sets of registers that could be used for the two "threads" with zero
overhead of any kind. So that part of the argument is totally moot...
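[Editor's note: a minimal sketch of the two-processor scheme Cleveland
describes, assuming POSIX threads. Nothing here is from the thread itself:
step(), the seeds 0.4 and 0.3, and MAX_ITER are placeholders standing in for
his "complicated" register-hungry F. Each thread iterates its own sequence
with all live state staying local, and the only cross-thread cost is the
per-iteration barrier he mentions.]

/* Two threads compute F^n(a) and F^n(b) in lockstep and stop at the
 * first n with F^n(b) > F^n(a).  Build with: cc -O2 -pthread par.c */
#include <pthread.h>
#include <stdio.h>

#define MAX_ITER 1000000

static double val_a, val_b;          /* current F^n(a) and F^n(b)        */
static pthread_barrier_t bar;
static volatile int done_iter = -1;  /* first n with F^n(b) > F^n(a)     */

/* Placeholder for the "complicated" function F (a logistic-map step);
 * any costly function that keeps its state in registers would do.      */
static double step(double x)
{
    return 3.9 * x * (1.0 - x);
}

static void *worker(void *arg)
{
    double *slot = (double *)arg;    /* &val_a or &val_b                 */
    for (int n = 1; n <= MAX_ITER && done_iter < 0; n++) {
        *slot = step(*slot);         /* all state stays in this thread   */
        pthread_barrier_wait(&bar);  /* both iterations n are finished   */
        if (slot == &val_b && val_b > val_a)
            done_iter = n;           /* single writer, so no race        */
        pthread_barrier_wait(&bar);  /* both threads have seen the test  */
    }
    return NULL;
}

int main(void)
{
    val_a = 0.4;                     /* arbitrary seeds for a and b      */
    val_b = 0.3;
    pthread_barrier_init(&bar, NULL, 2);

    pthread_t ta, tb;
    pthread_create(&ta, NULL, worker, &val_a);
    pthread_create(&tb, NULL, worker, &val_b);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);

    printf("first n with F^n(b) > F^n(a): %d\n", done_iter); /* -1 if none */
    return 0;
}

[The serial counterpart would alternate the two step() calls inside one
loop, so both live states compete for a single register file; on a
register-poor machine the compiler spills one state while working on the
other, which is the save/restore cost Cleveland's argument rests on.
Hyatt's rebuttal is that hardware with register windows or duplicated
register sets (SPARC, the Sigma 9) removes that cost even in the serial,
time-sliced version.]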