Computer Chess Club Archives


Subject: Re: Still wrong

Author: Robert Hyatt

Date: 07:53:53 10/27/01

On October 27, 2001 at 01:04:04, Eugene Nalimov wrote:

>On October 27, 2001 at 00:24:45, Robert Hyatt wrote:
>
>>On October 26, 2001 at 23:04:56, Tom Kerrigan wrote:
>>
>>>On October 26, 2001 at 22:33:47, Robert Hyatt wrote:
>>>
>>>>On October 26, 2001 at 21:43:35, Tom Kerrigan wrote:
>>>>
>>>>>On October 26, 2001 at 21:19:11, Robert Hyatt wrote:
>>>>>
>>>>>>>"The floating point unit has 32 32-bit non windowed registers, which must be
>>>>>>>saved on a per-context basis"
>>>>>>
>>>>>>Memory fails as age increases, apparently.  :)
>>>>>
>>>>>Maybe FPUs are studied in a semester of comp org that you didn't teach.
>>>>
>>>>Actually FPUS really aren't touched on in a one-semester architecture
>>>>course.  With pipelines, cache, memory management, plus a few specific
>>>>architectures, time runs out pretty quickly.
>>>>
>>>>
>>>>>
>>>>>>There is only _one_ data path _into_ the CPU.  I was originally talking about
>>>>>>the 64 bit chunks that can flow into the cpu from outside.  And that is a
>>>>>>real bottleneck on Intel boxes, still.  IE you can't possible load
>>>>>>instructions, int data, and fp data, fast enough if you have to use memory.
>>>>>>And the classic SPEC benchmarks tend to stream data like crazy...
>>>>>
>>>>>This is going off on a tangent; Intel's decision to use a 64-bit FSB is almost
>>>>>certainly based on price/performance goals and not the bitiness of any processor
>>>>>internals. The FSB is 64-bit, the L2 bus is 256-bit, the SSE datapaths are
>>>>>128-bit, the x87 FPU is 64-bit (I believe), the core is 32-bit... all design
>>>>>decisions determined by any number of factors. It would have been a small amount
>>>>>of work to make the P4 a 64-bit chip instead of a 32-bit chip; this wasn't done
>>>>>almost certainly because the need for 64-bit is too small to justify a new
>>>>>instruction set. Or they didn't want the P4 to compete directly with the Itanic
>>>>>(and kick it in the nuts). AMD seems pretty happy to go the 64-bit route with
>>>>>x86-64 and minimal changes to the Athlon design.
>>>>>
>>>>>-Tom
>>>>
>>>>In any case, I still believe the _driving_ force for 64 bit machines is not
>>>>memory, since I still don't see any > 4gig machines lying around.  But I do
>>>>see a lot of people comparing FP performance to choose their next
>>>>high-performance workstation.  The best example here is still the Cray.  With
>>>>a 32 bit address bus, but a huge data path.  Ditto for comparing the processors
>>>>made by everybody, to the intel X86.  Everybody has done 64 bit processors,
>>>>but hardly any go beyond 2^32 address lines.  Seems to me that it is for
>>>>reasons other than address space, based on that...
>>>
>>>Well, I know that a lot of noise was made even a few years ago about certain OSs
>>>not supporting memory over 2GB. I also know that many of your nicer [non-Intel]
>>>MP systems ship with many, many GB of RAM.
>>
>>Sure.  And today, when you ask about large-memory systems, the topic generally
>>drops around to the Cray machines, particularly the Cray-2, and now the C90/T90
>>with 32 gigs (4 gigawords).
>
>Wrong. Probably that's true in the academia and HPC world, but I posted here
>some numbers about relative sizes of HPC and DB markets. I can assure you that
>*here* nobody talks about Cray. Or almost anybody -- one of our team members
>remembers with nostalgia times where he worked with Crays...


I was _talking_ about HPC.  Where people use double x[1000][1000][1000]
as a normal programming solution.  NUMA or even fully-distributed systems
certainly exist in reasonably small numbers for some applications (IE
large databases, or large web servers).  But they represent a tiny fraction
of 1% of the computer market.  So chip makers are _not_ going to design
chips for that tiny segment when the remaining 99.99% is so competitive.

That was what made Cray so very successful.  They picked a small market
and dominated it.  But with machines that cost $60,000,000.


>
>>Most of the "MP systems" are not shared memory, so with 128 processors, and
>>128 gigs of ram, you still only need 30 bits of address space. (IE IBM SP
>>for one, CM5 for another, big alphas for another, etc.)
>
>That depends. On HPC -- yes, you are right, as you can modify the algorithm to
>run on MIMD machine. But in the DB world you can see all types of animals. Just
>go to the www.tpc.org, and look at the some hardware configurations. Here is the
>example:
>http://www.tpc.org/results/individual_results/Fujitsu/pw2000.082801.128cpu.es.pdf
>-- 128CPUs, 256Gb of *shared* RAM.


That is NUMA.  That is not exactly _shared_ in the usual sense.  It is
distributed memory with a routing interface that lets one machine access
another's memory, but in a non-uniform (speed) way.


>
>>> So somebody out there needs it. It's
>>>possible that the demand has been low due to memory prices, but with prices in
>>>the basement right now, I expect many more people will want > 4 GB RAM. I only
>>>have 512MB myself, but I know many people who are up past 2 GB already.
>>>
>>>-Tom
>>
>>I know a _few_ that are at 2 gigs.  And I know a couple that are using 4 gigs.
>>So the demand is there in very low levels.  And no doubt it will grow as
>>systems and apps grow.  But note that 64 bit architectures were around in
>>the middle 60's...  (60 and 64 bits).  They were obviously done for something
>>_other_ than address space...
>
>Bob, please give up :-). Tom and I know *a lot* about databases. Industry needs
>large databases [or at least willing to pay big money for those], and large
>databases need large address spaces.

I don't argue the first part of your point.  I do argue this:  64 bit
processors are _not_ being designed primarily for the large database
market.  Because 64 bit machines have been around for 30 years, and in
all that time they have _not_ attached memories beyond 32 bits of address
space.

That was my point.  Not that databases don't need them in a few rare cases.
But that isn't enough impetus to drive the 64 bit market.  The evidence is
there:  64 bitters have been around for 30+ years, and 64 bit micros for
10+ years.  None of 'em came with > 4 gigs of main memory at the time...

except for Cray, which addressed words rather than bytes, and those machines
were limited to 2^32 64-bit words and still are.


>
>MS creates Win64 before there is demand for large address spaces *outside* HPC.
>
>Eugene


I wouldn't argue that point.  However, back to the original question of "why
were 64 bit architectures developed?"  The answer didn't have anything to do
with addressing > 4 gigs for the first 10 years of 64 bit micro development.
However, if you were here I could play you one of several 1 hour videos made
by IEEE on the design of various 64 bit machines such as the MIPS, Alpha,
IBM, and HP designs.  The videos are 10 years old.  And they don't (or barely)
mention larger memory.  They do mention the advantages of pumping 64 bits (or
128 bits or 256 bits) around internally.  That was what I based my original
answer on.

The MMU was _originally_ designed to eliminate memory fragmentation issues.
It later turned into virtual memory, demand paging, shared memory among
multiple processes, etc.  But fragmentation drove the development.  Speed
drove the development of 64 bit machines.  Later, other things became useful.
But they were really a _result_ of the 64 bit development, _not_ the cause
of the development directly.  Big memory is one of the latter.




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.