Author: Roy Eassa
Date: 13:35:13 09/06/01
On September 06, 2001 at 14:45:15, Robert Hyatt wrote:

>On September 06, 2001 at 13:48:15, Jeremiah Penery wrote:
>
>>On September 06, 2001 at 13:30:32, Robert Hyatt wrote:
>>
>>>On September 06, 2001 at 13:06:24, Jeremiah Penery wrote:
>>>
>>>>On September 06, 2001 at 10:17:15, Robert Hyatt wrote:
>>>>
>>>>>More accurately, note that memory speeds (random access speeds) have not
>>>>>increased _at all_. DRAM was 100ns (or slightly less) 10 years ago. It is
>>>>>_still_ that slow today.
>>>>
>>>>You can get 5ns DRAM (or possibly faster) today.
>>>
>>>Look again. There is no memory on the market that will let you _randomly_
>>>access any byte and get the result back in 5ns. They can transfer large chunks
>>>and make it appear to be very fast. But it still takes just as long today to
>>>dump a capacitor and make the 0/1 determination as it did 10 years ago.
>>>
>>>The Crays are the best indicator. Just compare the hardware timings for memory
>>>access, starting at the Cray-1 (8 clocks) and ending up with the T90 (50 clocks).
>>>
>>>The Cray-1 ran at a 12.5ns clock, the T90 at a 2ns clock. The memory speed over
>>>that 20-year period is pretty constant:
>>>
>>>8 * 12.5 = 100ns. 50 * 2 = 100ns.
>>>
>>>If a PC had a 5ns memory, it would be able to sustain 1.6 gigabytes of memory
>>>transfers per second (that is, 200 million cycles per second * 8 bytes per
>>>cycle). The PC can actually sustain more like 100 megabytes per second of
>>>memory bandwidth, which is closer to an average of 8 bytes every 100
>>>nanoseconds than it is to 8 bytes every 5 nanoseconds.
>>>
>>>Ignore the "PC400" specification. That is not for the first byte. That is
>>>for the synchronous transfer clock speed _after_ the DRAM data has been dumped
>>>into an SRAM on-chip cache. Once you get the data into SRAM, you can dump it
>>>onto the bus at most any speed you can afford. But getting it from the DRAM
>>>is _still_ a big problem. That is one reason early Crays didn't even bother
>>>with DRAM and used bipolar memory. But eventually cost drove them to DRAM and
>>>a static memory access time.
>>
>>Here is an interesting article: http://www.hardocp.com/articles/memory/ddrovr/
>>
>>While not as fast as I thought, it's still a lot faster than they were 10 years
>>ago.
>
>Better re-read the bottom paragraph. RAMBUS RAM is 50% slower than SDRAM in
>terms of raw latency, which is exactly what I was talking about. Latency has
>not changed much at all. After the latency period has elapsed, various
>strategies to clock more data at faster rates have been used: FPM, EDO, SDRAM
>and now RAMBUS all move data faster and faster, but _after_ that original
>latency period has passed. And that period is no faster today than it was
>10 years ago. In the case of RAMBUS, latency is about 1.5X what the latency
>for SDRAM is. If you read long blocks of contiguous memory, RAMBUS looks
>pretty good. If you do random accesses, RAMBUS (and all DRAM-based
>technologies) looks pretty ugly. I.e., this is why cache isn't based on DRAM.

Here's the paragraph you mentioned: "Random accesses, however, do not simply
draw from an already open page. As the name indicates, both row and column
addresses need to be specified, decoded and accessed, introducing additional
latencies. Exactly this fact is the Achilles heel of Rambus memory, since the
initial latencies are substantially higher than in standard DRAM.
Consequently, even if the maximum bandwidth achievable (peak bandwidth) by Rambus PC800 memory is 50% higher than the peak bandwidth of PC133 SDRAM, the average bandwidth under real life conditions is substantially less since random access-related latency becomes the main limiting factor. There has been enough coverage of Rambus DRAM vs. SDRAM, thus, there is no need to further go into details. The point, however, has been made that latency is a crucial component of memory performance."
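
To make the latency-vs-bandwidth point concrete, here is a small pointer-chasing
microbenchmark in C. This is an illustrative sketch only, not code from any of
the quoted posts, and the array size and sweep count are arbitrary choices. The
random chase makes every load depend on the previous one, so burst modes cannot
hide the row-access latency; the sequential sweep shows the streaming rate that
the burst specifications advertise.

    /* Sketch: random dependent loads vs. a sequential sweep. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N      (4u * 1024 * 1024)   /* 4M entries, far larger than cache */
    #define SWEEPS 4u

    int main(void) {
        size_t *next = malloc((size_t)N * sizeof *next);
        size_t i, j, k, sum = 0;
        clock_t t0, t1;

        if (next == NULL) return 1;

        /* Sattolo's algorithm: shuffle the identity into a permutation that
           is one single cycle, so the chase below visits all N slots in a
           random order.  rand() is crude, but adequate for a sketch. */
        for (i = 0; i < N; i++) next[i] = i;
        srand(12345);
        for (i = N - 1; i > 0; i--) {
            j = (size_t)rand() % i;             /* j < i, so no self-loops */
            k = next[i]; next[i] = next[j]; next[j] = k;
        }

        /* Random dependent loads: the next address is not known until the
           current load completes, so this measures latency, not bandwidth. */
        k = 0;
        t0 = clock();
        for (i = 0; i < (size_t)SWEEPS * N; i++) k = next[k];
        t1 = clock();
        printf("random:     %.1f ns per load (sink %lu)\n",
               (double)(t1 - t0) * 1e9 / CLOCKS_PER_SEC / ((double)SWEEPS * N),
               (unsigned long)k);

        /* Sequential sweep: the access pattern that FPM/EDO/SDRAM/RAMBUS
           burst modes are designed to make fast. */
        t0 = clock();
        for (i = 0; i < SWEEPS; i++)
            for (j = 0; j < N; j++) sum += next[j];
        t1 = clock();
        printf("sequential: %.1f ns per load (sink %lu)\n",
               (double)(t1 - t0) * 1e9 / CLOCKS_PER_SEC / ((double)SWEEPS * N),
               (unsigned long)sum);

        free(next);
        return 0;
    }

On hardware of the kind discussed above, one would expect the random figure to
land near the ~100ns DRAM latency, while the sequential figure tracks the far
higher burst bandwidth; the sink values printed at the end simply keep the
compiler from optimizing the loops away.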