Computer Chess Club Archives


Subject: Re: New crap statement ? Perpetuum mobile

Author: Robert Hyatt

Date: 21:53:07 10/04/01



On October 05, 2001 at 00:02:42, Miguel A. Ballicora wrote:

>
>Yes, there are different overheads. Which one is bigger? Two threads on
>one CPU, or the introduction of a second CPU? There is
>no law of physics that says one should be bigger than the other, and
>you do not know what is going to happen with future processors.
>

This one is easy to answer.  I have run this many times.

First, take a program that requires a good bit of CPU time to run; exactly 1
minute would be a good choice.  Run it once on a dual-CPU machine.
Then run two separate copies at the same time.  Each should still run in exactly
1 minute.  But they won't.  They will typically run 7-10% slower due to memory
conflicts and bus conflicts.  OK... that answers the dual-CPU test.

Second, run the same program on a single-CPU machine.  It should still take 1
minute.  Now run two copies at the same time, so they time-share that one CPU.
Together they should finish in exactly two minutes; any extra time beyond that
is the context-switching overhead.  It will typically be a second or less.

The memory conflicts dominate on every machine I have tried...
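
If you want to try it yourself, something like the little C sketch below is all
it takes.  This is just an illustration I'm throwing together here; the workload,
buffer size, and pass count are made-up placeholders, so scale them until one
copy takes about a minute on your machine.

/* Minimal sketch of the two timing tests described above (an illustration,
 * not production code).  Scale the placeholder workload so that one copy
 * runs for roughly a minute. */
#include <stdio.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

static double now(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

/* Stand-in for "a program that needs about 1 minute of CPU time":
 * it walks a buffer larger than cache so memory/bus pressure shows up. */
static void workload(void) {
    static int buf[1 << 22];              /* ~16 MB, bigger than cache  */
    volatile long sum = 0;
    for (int pass = 0; pass < 200; pass++)
        for (int i = 0; i < (1 << 22); i++) {
            buf[i] += pass;               /* force real memory traffic  */
            sum += buf[i];
        }
}

int main(void) {
    double t0, alone, together;

    t0 = now();
    workload();
    alone = now() - t0;
    printf("one copy alone:          %.2f s\n", alone);

    /* Two copies running at the same time, as two processes. */
    t0 = now();
    for (int i = 0; i < 2; i++)
        if (fork() == 0) { workload(); _exit(0); }
    wait(NULL);
    wait(NULL);
    together = now() - t0;
    printf("two copies concurrently: %.2f s\n", together);

    /* On a dual-CPU machine, together/alone - 1 is the memory/bus
     * contention penalty (the 7-10% figure above).  On a single-CPU
     * machine, together - 2*alone is the context-switching cost
     * (typically well under a second). */
    return 0;
}

Compile it, run it on a dual-CPU box and on a single-CPU box, and compare the
numbers as noted in the comments.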




>>2.  Threads don't have to have _any_ cost.  I programmed a Xerox Sigma 9 for
>>years.  It had two sets of registers.  I could flip back and forth between two
>>threads, using two sets of registers, with zero overhead.  Or at least closer
>>to zero than the overhead of a dual-processor machine.
>>
>>3.  Algorithms are independent of this anyway.  Because it is _always_ possible
>
>If it is independent, why do you bring up an example of one particular computer?
>Moreover, an old computer.

To show that it is irrelevant... by an example...  See above for a simpler way
to answer the architectural vs. context-switching question.  There is no real
comparison.  The dual-CPU machine is significantly slower due to architectural
considerations.  And you can run the test to confirm it easily enough...


>
>>to find an algorithm that won't work well on a particular machine.  And then
>
>That particular machine could be the dominant computer in the future.
>So, you admit that this is possible.


No, I don't.  I am talking about finding an algorithm that won't run at _all_
due to (say) a lack of memory, but that will run once you use dual CPUs with
double the memory.  (For example, if the working set doesn't fit in RAM on the
single-CPU box, that run thrashes, so the dual-CPU box with twice the memory
looks far more than twice as fast.)  That super-linear speedup is not an
algorithm issue; it will go away on good hardware.  Choosing a defective
platform doesn't make it legit at all.



>
>>compare that to a machine with twice the resources that will run that
>>algorithm well.  That isn't what this algorithm analysis is about.  It is _very_
>>safe to assume that the dual-cpu machine and the single-cpu-running-two-threads
>>cases are _identical_ in their overhead costs from the machine perspective.
>>Which means it can be ignored...
>
>I do not see why it is safe.  Particularly when you don't know what the
>hardware is going to be like in 10 years.  Ten years ago branch misprediction
>was not an issue; today it is.  Can you guarantee that ignoring that overhead
>is safe on the future generation of computers?  Yes or no?
>
>Miguel

I can guarantee you that future machines will have _worse_ memory bottlenecks,
and worse bus conflicts.  Those will always be more expensive than context
switching.  New CPUs will have zero context-switching overhead as they
begin to do threading inside the processor...



