Computer Chess Club Archives


Subject: Re: New crap statement ? Perpetuum mobile

Author: Robert Hyatt

Date: 21:40:43 09/30/01


On September 30, 2001 at 10:56:25, Miguel A. Ballicora wrote:

>On September 29, 2001 at 14:54:03, Robert Hyatt wrote:
>
>>On September 29, 2001 at 10:41:37, Miguel A. Ballicora wrote:
>>
>>>On September 29, 2001 at 10:01:11, Robert Hyatt wrote:
>>>
>>>>On September 28, 2001 at 00:00:43, Miguel A. Ballicora wrote:
>>>>
>>>>>On September 27, 2001 at 18:06:34, Olaf Jenkner wrote:
>>>>>
>>>>>>>If you can consistently produce a speedup of 3.0, then I can time-
>>>>>>>slice that same algorithm on one cpu and get a speedup of 1.5.  And I can do it
>>>>>>>for _any_ alpha/beta algorithm you do in parallel.  Every time. And using your
>>>>>>>code, it will be easy for me to do it.
>>>>>>>
>>>>>>It's a good explanation. Some people seem to believe in perpetuum mobiles.
>>>>>
>>>>>Some believe that objects heavier than air will never fly.
>>>>>Everybody knows that's against physics.
>>>>>
>>>>>Regards,
>>>>>Miguel
>>>>
>>>>
>>>>I think you worded that wrong.  Airplanes are heavier than air and have been
>>>>flying for 100 years now.  I think you meant the bumble-bee, because no one
>>>>believed they could move their wings rapidly enough to produce the lift required
>>>>to get them off the ground.
>>>
>>>I worded it right; I was being sarcastic in reply to a sarcastic message.
>>>
>>>>There is a difference between "believed to be impossible" and "known to be
>>>>impossible."
>>>
>>>Exactly my point.
>>>
>>>>super-linear speedups are provably impossible.  I've already given the simple
>>>>proof.
>>>
>>>Then I missed it. If you are referring to running two threads by "slicing the
>>>time", that requires some overhead, and that means the result of using two
>>>processors is > 2.0 or < 2.0, who knows (it could be 2.0001 or more; we do not
>>>know how future cpus will behave). Still, that was not the interesting part of
>>>the discussion. The interesting part was whether an algorithm with two threads
>>>could be better than one.
>>>
>>
>>It requires no overhead worth mentioning.  Every N milliseconds the processor
>>handles a timer interrupt.  I choose that point to switch to the other thread.
>>The cost is minimal.
>>
>>You overlook the fact that the parallel search _also_ has its own kind of
>>overhead problems... problems that would be eliminated in the approach needed
>>for a serial version (i.e. no atomic locks, which are very slow, etc.)
>
>I do not overlook that. That is true for the current generation of processors.
>You cannot guarantee the same in the future. You are trying to prove something
>with experimental observation. I would agree if you said it holds in the
>current "context".
>
>>>Super-linear speedups are "probably" impossible, but so far I have not seen
>>>that they are "provably" impossible. I would settle for "They are believed to
>>>be impossible".
>>>
>>>Regards,
>>>Miguel
>
>>They simply _are_ impossible.  Unless you believe in perpetual
>>motion, infinite compression, a fire that will burn forever, etc.
>
>What I believe is irrelevant. In fact, I do not believe it is possible to
>have super-linear speedups, but I have not seen it proved.
>Comparisons with physical laws carry no demonstration beyond a
>sarcastic effect, which still does not prove anything. Besides, at one point
>you mentioned the law of conservation of mass. That appeared to be a universal
>truth until Einstein's generation. Lavoisier and Newton are still right within
>their own context. If we keep coming up with physics analogies we will end up
>discussing something totally irrelevant to the analysis of an algorithm.
>
>Regards,
>Miguel


I didn't bring up the physics analogy myself.

But regardless, for any algorithm you can define/develop/write that will
produce a speedup of > 2.0, I can run that same algorithm in a time-sliced
manner and beat the original single-thread algorithm.  _every_ time.
That is the classic proof given in any good book on parallel programming
when the topic of 'super-linear speedup' comes up.  The problem, which we all
know about, is that for every speedup > 2.0 (which does happen, as I have said)
there are more that are < 2.0, and time-slicing _those_ would be worse, because
any parallel search is _always_ worse than the best sequential search, since
move ordering cannot possibly be perfect, except for rare positions.
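
To make the reduction concrete, here is a minimal sketch (hypothetical C code,
not taken from Crafty or anything posted in this thread) of the time-sliced
simulation described above.  Two search "threads" are modeled as step functions,
and one serial loop alternates between them every SLICE steps, the software
analogue of switching on the timer interrupt.  If the two-processor version
really took only 1/3 of the best serial program's time, this interleaving would
finish the same work in about 2/3 of that serial time on one processor, so the
"best" serial program would not have been the best after all.

    #include <stdio.h>

    #define SLICE 1000            /* "time slice" measured in search steps */

    typedef struct {
        long nodes_left;          /* work remaining for this search thread */
        int  done;
    } Worker;

    /* one small increment of a worker's search; returns 1 when finished */
    static int step(Worker *w)
    {
        if (w->nodes_left > 0)
            w->nodes_left--;      /* stand-in for searching one node */
        w->done = (w->nodes_left == 0);
        return w->done;
    }

    int main(void)
    {
        Worker w[2] = { { 500000, 0 }, { 500000, 0 } };
        long rounds = 0;

        /* round-robin the two workers until both finish */
        while (!w[0].done || !w[1].done) {
            for (int i = 0; i < 2; i++)
                for (long s = 0; s < SLICE && !w[i].done; s++)
                    step(&w[i]);
            rounds++;             /* one pass over both workers */
        }
        printf("finished after %ld slice rounds\n", rounds);
        return 0;
    }

The only extra cost is the occasional switch from one worker to the other,
which is exactly the "minimal cost" of handling the timer interrupt mentioned
earlier in the thread.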


