Author: Tom Kerrigan
Date: 19:19:11 08/20/04
On August 20, 2004 at 21:28:20, Robert Hyatt wrote:

>On August 20, 2004 at 17:52:54, Tom Kerrigan wrote:
>
>>On August 20, 2004 at 17:36:51, Robert Hyatt wrote:
>>
>>...
>>
>>>As I said, I don't know. But clearly testing 256K vs 512K doesn't provide much
>>>actual data to draw conclusions from. Obviously the 2048K chip was not 5x
>>
>>What is it that you don't know? If a program's working set doesn't fit into
>>cache, then adding more cache will always increase performance, assuming a
>>completely random access pattern.
>
>Why do you get to make such an assumption? I +specifically+ try to do lots of
>sequential accesses to take advantage of cache line fills that pre-fetch data...

I used the word "assuming" here to indicate the condition that my statement
applies to. I obviously don't think that all programs have completely random
memory access patterns; what kind of idiot would think that? Yet you use it as a
strawman argument for the rest of your post.

You're right that there's usually a lot of variation between systems with
different sized L2 caches. That's a good explanation for why you and Eugene saw
speedups with more cache. In my case, my numbers are from systems that are
identical except for L2 cache size.

There are two other ways that I can think of to approach this question:

1) If Crafty is constantly hammering main memory, it would scale very poorly
with processor clock speed. Is this the case? I've seen posts that indicate that
Crafty scales perfectly with processor clock speed.

2) If Crafty is constantly hammering main memory, you would get a very poor
speedup running several threads on a shared memory bus machine (like a quad
Xeon). What kind of speedups do you see? 1.1x? 1.2x? Or closer to 3.5x-4.0x?

-Tom