Computer Chess Club Archives


Subject: Re: Who can update about new 64 bits chip?

Author: Matthew Hull

Date: 06:42:15 08/27/02



On August 26, 2002 at 22:36:13, Robert Hyatt wrote:

>On August 26, 2002 at 21:33:18, Jeremiah Penery wrote:
>
>>On August 26, 2002 at 15:06:05, Robert Hyatt wrote:
>>
>>>On August 26, 2002 at 13:12:42, Jeremiah Penery wrote:
>>
>>>>On August 26, 2002 at 11:16:47, Robert Hyatt wrote:
>>
>>>>As I said, much of that time must come from the long travel path, slow memory
>>>>controller, etc.
>>>
>>>We just disagree on where the delay is.  I believe that at _least_ 80% of
>>>the latency is in the chip.  Not in the controller/bus...
>>
>>>>If you see 140ns today (average), you don't believe that almost half of that
>>>>latency is caused by the travel path from CPU to controller to memory and back?
>>>>If the memory controller runs at bus speed (133MHz), it has 7.5ns per clock cycle.
>>>>That alone adds significant latency to the process.
>>>
>>>I don't believe it, no.  I believe that most of the latency is in the DRAM
>>>itself, not in the controller.  The controller has no "capacitors" to deal
>>>with; it is made up of SRAM buffers and some form of hardware logic (such
>>>as TTL), which means switching times are at the picosecond level.  It takes
>>>a _bunch_ of picoseconds to add up to a nanosecond...
>>
>>The clock cycle of the memory controller is some 7.5ns.  It can only send one
>>request/clock, AFAIK.  That is already adding significant latency.
>
>
>OK... same issue I talk about in parallel programming classes...  Amdahl's
>Law, slightly paraphrased to fit here.  If the total access time is > 100ns,
>and the controller has a 7.5ns cycle time,  what happens when you design a
>controller that takes _zero_ nanoseconds?  Answer:  you are _still_ near
>100ns in total access time as you only eliminated 7.5...
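
To put numbers on that Amdahl's-Law point, here is a tiny C sketch.  It uses only
the figures quoted in this thread (a roughly 100ns total access and a 7.5ns
controller clock); treating the controller's share as exactly one 7.5ns cycle is an
assumption for illustration.

    /* Sketch of the Amdahl's-Law point above: removing the controller's
     * share of a memory access barely moves the total.  The 100ns total
     * and 7.5ns controller cycle are the figures quoted in this thread;
     * the exact split is assumed for illustration. */
    #include <stdio.h>

    int main(void) {
        double total_ns      = 100.0; /* quoted total DRAM access latency    */
        double controller_ns = 7.5;   /* one 133MHz controller clock = 7.5ns */

        /* Even a "zero-cycle" controller only removes its own 7.5ns slice. */
        double perfect_controller_ns = total_ns - controller_ns;
        double speedup = total_ns / perfect_controller_ns;

        printf("original access time      : %.1f ns\n", total_ns);
        printf("with a perfect controller : %.1f ns\n", perfect_controller_ns);
        printf("overall speedup           : %.2fx\n", speedup); /* about 1.08x */
        return 0;
    }

Under those numbers, even a perfect controller buys only about an 8% faster read,
which is the point being made above.
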
>
>That is the problem I see.  DRAM is simply inherently slow in terms of
>latency.  It is hard to take a static charge, shunt it somewhere to measure
>it, and do that quickly, because of the resistance, capacitance, and inductance
>I mentioned earlier...  I.e., replace a capacitor with a balloon filled with
>water.  It is _hard_ to get that water out and over to something that can
>measure it, when you have millions of balloons sharing common water tubes...
>
>>
>>>>Even shaving off a few tens of nanoseconds takes your number of 120ns down
>>>>to the claimed 80ns of Hammer. :)
>>>
>>>I'll believe 80 when I actually get my hands on it.  :)  Because that will
>>>be faster than any Cray ever made that used DRAM (older Crays used bipolar,
>>>but that memory was not nearly as dense).
>>
>>>I was talking about Cray from the perspective that they have never had an 80ns
>>>memory access time.  It has _always_ been over 100ns since they moved away from
>>>bipolar memory to DRAM for density.  And their controllers have _never_ "sucked".
>>
>>It's difficult to find really accurate data on this.  I've read more than a few
>>different things.  But from what I can tell, the latency (cycle time) of DRAM
>>c. 1994 was on the order of 100ns.  (In 1980 it was 250ns; in 1989 it was nearer
>>170ns.)  It hasn't been lowered at that pace since then, but it has gotten lower.
>>As I've said, current figures I've seen place it at 70ns today.  _Any additional
>>latencies_ seen are/were caused by the controller, path length, and whatever else.
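
For scale, taking the figures quoted just above at face value (250ns in 1980,
roughly 70ns now) and treating "now" as 2002, the implied improvement works out to
only a few percent per year; a quick C sketch of that arithmetic:

    /* Rate-of-improvement implied by the latencies quoted above.
     * The endpoints are the poster's numbers; the 22-year span
     * (1980 to 2002) is an assumption. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double t1980 = 250.0; /* DRAM latency in ns, 1980 (quoted)  */
        double t2002 = 70.0;  /* DRAM latency in ns, ~2002 (quoted) */
        double years = 22.0;

        /* annual factor f such that t1980 * pow(f, years) == t2002 */
        double f = pow(t2002 / t1980, 1.0 / years);
        printf("DRAM latency shrank roughly %.1f%% per year\n", (1.0 - f) * 100.0);
        return 0;
    }

That comes out to roughly 5-6% a year, while CPU clock rates improved far faster
over the same stretch, which is the widening gap being argued about in this thread.
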
>
>
>I don't know of any way to lower latency.  How do you make a charge leave a
>capacitor faster?  Maybe at superconducting temps.  But you have to
>deal with the magnetic field that has to propagate as the current attempts to
>move along a wire...  Smaller distances can help _some_.  But this is a "square"
>problem, not a linear problem.
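
A sketch of that "square, not linear" remark, assuming it refers to distributed RC
wire delay, which grows with the square of the line length; the per-unit resistance
and capacitance values below are made up purely for illustration.

    /* Distributed RC line: Elmore delay is roughly r*c*L*L/2, so halving
     * the length cuts the delay by about 4x, not 2x.  The per-unit r and
     * c values are hypothetical. */
    #include <stdio.h>

    int main(void) {
        double r = 0.1;     /* ohms per micron (hypothetical)   */
        double c = 0.2e-15; /* farads per micron (hypothetical) */
        double lengths_um[] = { 1000.0, 500.0, 250.0 };

        for (int i = 0; i < 3; i++) {
            double L = lengths_um[i];
            double delay_s = 0.5 * r * c * L * L; /* seconds */
            printf("length %6.0f um -> delay %6.3f ps\n", L, delay_s * 1e12);
        }
        return 0;
    }

Under that reading, halving the travel distance cuts the wire delay by roughly a
factor of four.
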
>
>>
>>If you don't think path length can influence latency very much, then you must
>>not talk about RDRAM having bad latency. :)  The only reason it has higher
>>latency is the much longer path length for the memory request signal.
>>(And sometimes the banks can go into sleep mode, and take a while to wake.)
>
>I understand how they chain things together and how it added to the serial gate
>count.  The problem with RDRAM is that they couldn't make the DRAM dump any
>faster, and they added more gates in an effort to improve streaming performance,
>which has nothing to do with random access...
>
>>
>>Cray machines probably have some additional issues because they're
>>super-multi-ported designs, with a lot of processors trying to concurrently
>>access the same memory banks. (I'm talking about their vector machines, not
>>stuff like the T3E, which is some kind of Alpha cluster.)
>
>They have lots of issues.  But even going back to single-CPU machines, latency
>was no better than it is today...  and with every new generation of CPU, the
>memory fell that much farther behind...  i.e. the cycle time of the CPU times the
>number of cycles to do a read has effectively been constant since they first went
>from bipolar to DRAM.
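
A small illustration of that last claim, that CPU cycle time times the number of
cycles per DRAM read stays roughly flat across generations; the machines and counts
below are hypothetical, chosen only to show the shape of the argument.

    /* If raw DRAM read latency stays near ~140ns while CPU clocks get
     * faster, the read simply costs proportionally more CPU cycles.
     * All three rows are hypothetical. */
    #include <stdio.h>

    int main(void) {
        struct gen { const char *name; double cycle_ns; int read_cycles; };
        struct gen gens[] = {
            { "older CPU,  80 MHz", 12.5,  11 }, /* 12.5ns * 11  ~ 138ns */
            { "newer CPU, 500 MHz",  2.0,  70 }, /*  2.0ns * 70  = 140ns */
            { "newer CPU,   2 GHz",  0.5, 280 }, /*  0.5ns * 280 = 140ns */
        };

        for (int i = 0; i < 3; i++) {
            double read_ns = gens[i].cycle_ns * gens[i].read_cycles;
            printf("%-18s : %3d cycles -> %.1f ns per read\n",
                   gens[i].name, gens[i].read_cycles, read_ns);
        }
        return 0;
    }

The memory read never gets faster in nanoseconds; it just gets more expensive in
cycles as the CPU speeds up.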


How much does bipolar memory improve latency?

Also, what is the future of RAM?  Right now it seems hopelessly behind, and getting
"behinder".


