Computer Chess Club Archives



Subject: Re: New crap statement?

Author: Robert Hyatt

Date: 14:51:41 09/26/01



On September 26, 2001 at 16:24:12, Bruce Moreland wrote:

>On September 26, 2001 at 14:52:19, Dann Corbit wrote:
>
>>On September 26, 2001 at 14:13:10, Bruce Moreland wrote:
>>
>>>On September 25, 2001 at 20:02:45, Dann Corbit wrote:
>>>
>>>>Superlinear speedup might be possible, even on average, but it would be due to a
>>>>change in algorithm.
>>>>
>>>>For instance, you can employ a completely different algorithm for parallel
>>>>sorting which will sort faster than any known serial algorithm.  Even if you
>>>>divide up the work for the serial technique, the parallel algorithm will beat it
>>>>because the fundamental algorithm is superior.
>>>
>>>Any parallel algorithm can be expressed serially, assuming that you can make the
>>>hardware do the same thing both ways.
>>>
>>>I'm an engineer, not a math guy, but the math behind that is screaming so loudly
>>>that even I can hear it.
>>>
>>>If what I've said is false, something bad has just happened to the original
>>>Turing machine.
>>
>>The Turing machine says nothing about efficiency.  In fact, a serial tape reader
>>is probably just about the worst imaginable architecture.
>>
>>There are algorithms that are used only on parallel hardware.  You can implement
>>the same thing on serial machines, but it would stink, because the fundamental
>>speedup comes from the parallel design.
>
>What I mean is that it should be that if you can do it on a computer, you could
>do it on a Turing machine, and a Turing machine is one thread.

A Turing machine can be non-deterministic, which means that it can generate
an almost infinite number of threads.  This happens when it reaches a state where
two or more rules can fire at the same time, giving a "fork" of sorts.

I wrote an automaton simulator for use here many years ago.  The first thing
one of the faculty members tried was a non-deterministic algorithm that blew
me up on memory required: when I reached a "fork" I didn't fire off a second
thread, but I did have to stop, malloc() memory to hold the TM "tapes", and
then duplicate them.  Enough of those and the program got _real big_, _real
quick_.
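
Roughly, the fork step looked like this in C (an illustrative sketch only; the
struct and function names are invented for the example, not taken from the
actual simulator):

#include <stdlib.h>
#include <string.h>

/* One simulated TM configuration: tape contents plus head position and
   control state.  Names here are hypothetical, for illustration only. */
typedef struct config {
    char   *tape;       /* tape contents             */
    size_t  tape_len;   /* number of cells allocated */
    size_t  head;       /* current head position     */
    int     state;      /* current control state     */
} config_t;

/* When two or more rules can fire from the same configuration, the
   simulator "forks": it duplicates the whole configuration, tape and all,
   so each branch can be explored independently.  Every fork copies an
   entire tape, which is where the memory blow-up comes from. */
static config_t *fork_config(const config_t *src)
{
    config_t *copy = malloc(sizeof(*copy));
    if (copy == NULL)
        return NULL;
    copy->tape = malloc(src->tape_len);
    if (copy->tape == NULL) {
        free(copy);
        return NULL;
    }
    memcpy(copy->tape, src->tape, src->tape_len);
    copy->tape_len = src->tape_len;
    copy->head     = src->head;
    copy->state    = src->state;
    return copy;
}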




>
>I'm not saying anything about efficiency either.
>
>>Consider the bitonic Batcher sorting algorithm or the sample sorting algorithm.
>>They are lame on a serial machine -- you would not want to use them for any
>>sorting job.  But for large batches of input, they are among the fastest known
>>sorting algorithms on a parallel machine.
>>
>>If you have a large enough collection of CPUs, you could conceivably sort a
>>data set in a single cycle on a dedicated parallel sorting machine.  It would be
>>physically impossible to do that on a serial machine.
>>
>>Here is an analysis of Batcher's sort on a parallel machine:
>>    SortingCost(N,P) = (a[i,j]*C[i] + b[i,j]*T[j]) * L[i]
>>
>>where, for the var-hypercube, C[i] = (log2 N) and L[i] = N/P, and
>>a[i,j] and b[i,j] are system-specific constants.
>>
>>It is clear that we can reduce the time to as small a value as we like by
>>increasing the number of processors.
>>
>>On a purely serial machine, you cannot achieve the same effect.
>
>I don't see why this should be the case.  It should be like recursion, which can
>always be replaced by iteration.
>
>bruce
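
To make the serial/parallel contrast above concrete, here is a minimal serial
sketch of Batcher's bitonic sort in C (illustrative code, assuming the input
length is a power of two):

#include <stdio.h>

/* Compare-exchange: put the pair a[i], a[j] in the requested order. */
static void compare_swap(int *a, int i, int j, int ascending)
{
    if ((a[i] > a[j]) == ascending) {
        int tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}

/* Merge a bitonic sequence of length n starting at lo into sorted order. */
static void bitonic_merge(int *a, int lo, int n, int ascending)
{
    if (n > 1) {
        int k = n / 2;
        for (int i = lo; i < lo + k; i++)
            compare_swap(a, i, i + k, ascending);
        bitonic_merge(a, lo, k, ascending);
        bitonic_merge(a, lo + k, k, ascending);
    }
}

/* Sort a[lo..lo+n-1]; n must be a power of two. */
static void bitonic_sort(int *a, int lo, int n, int ascending)
{
    if (n > 1) {
        int k = n / 2;
        bitonic_sort(a, lo, k, 1);       /* first half ascending   */
        bitonic_sort(a, lo + k, k, 0);   /* second half descending */
        bitonic_merge(a, lo, n, ascending);
    }
}

int main(void)
{
    int a[8] = { 7, 3, 9, 1, 6, 2, 8, 4 };
    bitonic_sort(a, 0, 8, 1);
    for (int i = 0; i < 8; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}

Run serially this does O(N log^2 N) compare-exchanges, which is why it loses to
quicksort on one CPU.  But every compare-exchange inside a merge stage touches a
disjoint pair of elements, so with enough processors each stage finishes in a
single parallel step, which is what the N/P factor in the quoted analysis is
getting at.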


