Author: Robert Hyatt
Date: 19:40:48 06/24/02
On June 24, 2002 at 19:48:02, Robert Henry Durrett wrote:

>On June 24, 2002 at 13:07:11, Gian-Carlo Pascutto wrote:
>
>>On June 24, 2002 at 10:27:05, Robert Henry Durrett wrote:
>>
>>>This "latency issue" is interesting.  Could you please elaborate?  How do
>>>the caches help?
>>
>>Bandwidth = amount of data that can be transferred per second
>>Latency = amount of time it takes to find something in the memory
>>
>>Generally, a chess program will need to access small amounts of
>>data that are scattered randomly throughout the memory of the
>>machine.  Because of this, bandwidth isn't so important, because
>>there is only a small amount of data, but the latency is.
>>
>>--
>>GCP
>
>Thanks.
>
>That does clear up a lot for me.  I had suspected that the word "bandwidth"
>was being used in a way unfamiliar to me.  It sounds like the "information
>bandwidth" we used in information theory for communication systems.
>
>Still unclear about latency, though.  The confusion is because I imagine that
>the process of retrieving from memory [or putting something into memory]
>actually involves more than one physical [or logical] process, one of which
>is "finding."  This means, to me, that the latency spec would not necessarily
>tell the whole story on reading from [or writing to] memory.  There must be
>more to this story which you have not said here yet.

"Finding it" is not the issue.  Memory uses DRAM, which is nothing more than
capacitors, to store zeros and ones.  To determine whether a bit is a zero or
a one, the charge has to be gated somewhere and measured.  This is a problem
in that you can't instantly move a charge, because of resistance, capacitance
and inductance.  It takes time to select a specific row/column of a big DRAM
chip, suck the charges out, measure them, and then write them back.  DRAM
also has the nasty property that the capacitors leak down, so each one must
be read/rewritten (a refresh operation) frequently...  SRAM is so much nicer,
but so much bigger and more expensive.

I should add that the basic "latency" has not changed in a _long_ time.  It
is about as slow today as it was 20 years ago.  Bandwidth is up due to
reading more data out of the capacitors at once, buffering them in the chip,
and then dumping the data in a burst, after the long latency delay.  (A small
sketch illustrating the difference follows at the end of this post.)

>Presumably there would be similar concepts for caches.  But it is not clear
>to me that the "bandwidth" concept is very useful for caches.  Simply
>specifying the number of clock cycles required seems better.
>
>As for latency:  High-speed caches will, presumably, use a different
>technology from RAM, so the individual physical [and logical] processes
>involved would, it would seem, be different too.  Not sure the "latency"
>concept is useful for caches either.  There may be a more direct way to
>specify the time required for reading and writing.  Maybe there's zero
>latency in caches, since everything gets done before the next clock cycle
>comes along?
>
>Summary:  If the cache is extremely large, we are talking about using the
>cache in place of the RAM of a normal PC.  This is the scenario [& computer
>architecture] I would like to address.  It's not clear yet how "bandwidth"
>and "latency" would fit into this context.
>
>Bob D.
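To make the latency-versus-bandwidth distinction concrete, here is a minimal
C sketch.  It is not from the original post; the array size, the use of
clock() for timing, and all names are arbitrary illustrative choices.  The
first loop is a dependent pointer chase through randomly shuffled memory:
every load must wait for the previous load's result to know its address, so
it runs at roughly DRAM latency, much like a chess program probing a hash
table.  The second loop streams the same array sequentially, so after each
slow first access the chip can burst data out; that is bandwidth.

    /* A minimal sketch, assuming a machine where N entries exceed the
     * cache.  Pointer chase = latency-bound; sequential sum = bandwidth-
     * bound.  Sizes and timing method are illustrative, not prescriptive. */

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)   /* 16M entries (128 MB): far larger than any cache */

    int main(void) {
        size_t *next = malloc(N * sizeof *next);
        size_t i, j, tmp, sum;
        clock_t t0;

        if (next == NULL) return 1;

        /* Build one big random cycle (Sattolo's algorithm) so that each
         * load's address depends on the previous load's value. */
        for (i = 0; i < N; i++) next[i] = i;
        for (i = N - 1; i > 0; i--) {
            j = (size_t) rand() % i;          /* 0 <= j < i */
            tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        /* Latency-bound: N dependent loads at random addresses. */
        t0 = clock();
        for (i = 0, j = 0; i < N; i++) j = next[j];
        printf("random chase: %.2f s (j=%lu)\n",
               (double)(clock() - t0) / CLOCKS_PER_SEC, (unsigned long) j);

        /* Bandwidth-bound: one sequential pass over the same memory. */
        t0 = clock();
        for (i = 0, sum = 0; i < N; i++) sum += next[i];
        printf("sequential:   %.2f s (sum=%lu)\n",
               (double)(clock() - t0) / CLOCKS_PER_SEC, (unsigned long) sum);

        free(next);
        return 0;
    }

On typical hardware the chase runs many times slower than the sequential
pass, even though both loops touch exactly N words.  That ratio is the
latency penalty described above: the sequential loop gets the burst-mode
bandwidth, while the random chase pays the full access latency on every load.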