Computer Chess Club Archives



Subject: Re: New crap statement ? Perpetuum mobile

Author: Robert Hyatt

Date: 15:25:46 10/04/01



On October 04, 2001 at 18:22:56, Bruce Moreland wrote:

>On October 04, 2001 at 17:36:55, Robert Hyatt wrote:
>
>>On October 04, 2001 at 15:14:34, Bruce Moreland wrote:
>>
>>>On October 04, 2001 at 14:55:32, Robert Hyatt wrote:
>>>
>>>>I don't follow any of that.  If in your algorithm, the number you want is in
>>>>the first half of the list, you get no speedup whatsoever.  That is exactly
>>>>1/2 the cases.  If it is in the second half of the list, you get anywhere from
>>>>nothing to a huge speedup.  If it is on the end of the second half, you get
>>>>nothing.  If it is on the front, you get a huge speedup.
>>>
>>>You didn't read the algorithm carefully.  The first thread searches sequentially
>>>forward to the middle.  The second one searches sequentially backward to the
>>>middle.  So if the one at the end of the data is always the one you want, you
>>>find it in optimal time.
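
(In code, I read that two-thread scan as roughly the sketch below.  The
sample data, the shared early-exit flag, and the pthreads scaffolding are
my own illustration, not something either of us posted; build with
"cc -pthread".  A real program would use an atomic flag; volatile is only
sketch-grade.)

#include <pthread.h>
#include <stdio.h>

#define N 6

static int data[N] = {14, 9, 27, 3, 41, 8};  /* made-up 6-number list  */
static int key     = 8;                      /* value we are seeking   */
static volatile int found_at = -1;           /* shared early-exit flag */

/* thread A: searches sequentially forward to the middle */
static void *scan_forward(void *arg) {
  for (int i = 0; i < N / 2 && found_at < 0; i++)
    if (data[i] == key) found_at = i;
  return NULL;
}

/* thread B: searches sequentially backward to the middle */
static void *scan_backward(void *arg) {
  for (int i = N - 1; i >= N / 2 && found_at < 0; i--)
    if (data[i] == key) found_at = i;
  return NULL;
}

int main(void) {
  pthread_t a, b;
  pthread_create(&a, NULL, scan_forward, NULL);
  pthread_create(&b, NULL, scan_backward, NULL);
  pthread_join(a, NULL);
  pthread_join(b, NULL);
  printf("key %d found at index %d\n", key, found_at);
  return 0;
}
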
>>
>>
>>So.  My analysis still holds.  For the 6-number dataset you get this instead
>>of the numbers I gave the first time:
>>
>>position of key     time to find (sequential)     time to find (parallel)
>>
>>       1                        1                             1
>>       2                        3                             2
>>       3                        5                             3
>>       4                        6                             3
>>       5                        4                             2
>>       6                        2                             1
>
>position of key     sequential     parallel
>       6                 6            2
>
>According to the data and algorithms I have described now several times.
>


Did I misunderstand you when you said "one thread starts at the front, the
other starts at the end"?  If so, then 1 is correct, because both tests
will be done in the first cycle.  You have done 2 total cycles of work, but
only one real cycle of elapsed time, since you use two processes.

Or did I misinterpret what you said somehow???
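
To make the cycle counting concrete, here is a small C program that uses
the model I have been assuming: one probe per thread per cycle, your
parallel scan working both ends toward the middle, and my sequential
version alternating ends (first, last, second, next-to-last, ...).  The
program and its names are my own sketch, not code either of us actually
ran.  For n = 6 it reproduces the table above and the 1.69 average worked
out below; for larger lists the average creeps toward 2.0 but never
passes it.

#include <stdio.h>

int main(void) {
  int sizes[] = {6, 100, 1000000};   /* even sizes keep the halves clean */
  for (int s = 0; s < 3; s++) {
    int n = sizes[s];
    double sum = 0.0;
    for (int pos = 1; pos <= n; pos++) {   /* 1-based position of the key */
      int seq, par;
      if (pos <= n / 2) {
        seq = 2 * pos - 1;       /* alternating scan probes it at step 2*pos-1 */
        par = pos;               /* forward thread reaches it on cycle pos     */
      } else {
        seq = 2 * (n + 1 - pos); /* alternating scan reaches it from the back  */
        par = n + 1 - pos;       /* backward thread reaches it                 */
      }
      sum += (double)seq / par;
      if (n == 6)
        printf("pos %d:  seq %d  par %d\n", pos, seq, par);
    }
    printf("n = %7d:  average speedup = %.4f\n", n, sum / n);
  }
  return 0;
}
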



>bruce
>
>>
>>The speedup is then (1/1 + 3/2 + 5/3 + 6/3 + 4/2 + 2/1) / 6 =
>>
>>(1 + 1.5 + 1.67 + 2 + 2 + 2) / 6 = 1.69 speedup
>>
>>Note that since you changed your algorithm, I copied it.  I check the first
>>number, then the last number, then the second number, then the next-to-last
>>number, etc.
>>
>>And it is _still_ not super-linear.
>>
>>>
>>>>But compute the average.
>>>>
>>>>If you want to say "but the average is no good, I know the number is always in
>>>>the second half" then we use the timeslicing approach and the super-linear
>>>>speedup goes away.  Or even better, if we know it is in the second half of
>>>>the list more than 50% of the time, we just search the list backward and your
>>>>parallel algorithm gets _zero_ speedup.
>>>>
>>>>But if you _really_ don't know where the number you want is, if it is _really_
>>>>uniformly distributed throughout the list, then the normal algorithm (front to
>>>>back or back to front) and the parallel version of that same algorithm will
>>>>_not_ produce a super-linear speedup.
>>>>
>>>>The "appeal" to Knuth, Aho, Hopcroft, Ullman, et al. is dead serious and
>>>>right on target.  This has been proven already to be impossible...
>>>
>>>If the problem is "find the minimum value in the array", you are right.  A
>>>two-thread program will always be twice as fast as a one-thread program, because
>>>you split the work exactly in half, and neither thread can cut off early.
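
(A quick C sketch of that minimum-value case, with made-up data and names
of my own; build with "cc -pthread".  Each thread must visit every element
of its half, so there is no early cutoff and the speedup is exactly 2.)

#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {14, 9, 27, 3, 41, 8, 22, 5};   /* made-up numbers */

struct half { int lo, hi, min; };   /* [lo, hi) slice and its minimum */

static void *min_of_half(void *arg) {
  struct half *h = arg;
  h->min = data[h->lo];
  for (int i = h->lo + 1; i < h->hi; i++)   /* must visit every element */
    if (data[i] < h->min) h->min = data[i];
  return NULL;
}

int main(void) {
  struct half a = {0, N / 2, 0}, b = {N / 2, N, 0};
  pthread_t ta, tb;
  pthread_create(&ta, NULL, min_of_half, &a);
  pthread_create(&tb, NULL, min_of_half, &b);
  pthread_join(ta, NULL);
  pthread_join(tb, NULL);
  printf("min = %d\n", a.min < b.min ? a.min : b.min);
  return 0;
}
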
>>>
>>>If you do an alpha-beta searching task where your move ordering isn't 100%
>>>perfect, you aren't right.  You end up fiddling around until you get a
>>>single-thread version working alright.  But this isn't necessarily provably
>>>best, so there is room for improvement.  You could have monkeys edit it at
>>>random and there is always a chance that it will get better.
>>
>>
>>I have never said otherwise.  I have said that if you average the speedup over
>>a large set of positions, it is _impossible_ to get a super-linear speedup for
>>that average, as I showed with your example.  In fact, unless I did something
>>ugly in the math, your new algorithm _never_ produces a super-linear speedup.
>>It produces 2.0 three times, but it produces < 2.0 three times as well.  The
>>average is still <= 2.0, as it must be.
>>
>>
>>
>>>
>>>If monkeys editing code at random doesn't break Knuth and Newton, then Vincent
>>>doing the same thing doesn't, either.
>>
>>
>>Apples and oranges.  One position _can_ be super-linear.  A large set of
>>positions can _not_ be.  Nothing complicated about that, and nothing ambiguous
>>either.  One super-linear case here and there is not only not unusual, it is
>>to be expected in computer chess, because we _know_ move ordering is not
>>perfect.  The odds are always present that we will find something quicker in
>>a parallel search, because we search two at a time.  But that will be the
>>exception, not the rule, and that has been my only point from the beginning.
>>I don't care about exceptional cases.  I want to know about the _typical_
>>case.  Those are the positions I am going to get _most_ of the time.
>>>
>>>bruce


