Author: Tom Kerrigan
Date: 14:52:54 08/20/04
On August 20, 2004 at 17:36:51, Robert Hyatt wrote:

>As I said, I don't know. But clearly testing 256K vs 512K doesn't provide much
>actual data to draw conclusions from. Obviously the 2048K chip was not 5x

What is it that you don't know?

If a program's working set doesn't fit into cache, then adding more cache will always increase performance, even assuming a completely random access pattern. (A rough simulation of the random-access case appears after the post.) With chess programs, memory access is not random at all; it's obviously biased toward reusing data, which increases performance even more. (Chess programs are obviously not full of loops that just read and write 2MB arrays.) The only reason a chess program's performance wouldn't increase with the size of its L2 cache is that its working set already fits into the cache.

I don't know why this upsets you so much. I know you think Crafty uses a bunch of huge arrays frequently enough and randomly enough to blow out the cache, but you have no evidence of this, and there is evidence that indicates otherwise. If anything, I'd be happy to have a program that runs almost entirely in a chip's on-die cache. That means you're immune to the ever-growing disparity between MPU and main-memory performance.

>More I can't conclude without any way to do testing. I might look up the cache
>modeling software and try that to see what it says, for fun...

Why bother? Just pick a big array that you think is accessed frequently and randomly and instrument it. Print out which elements are accessed and when, and you can easily get an idea of whether or not the accesses are hitting cache. (Or whether it's being accessed so infrequently that it doesn't matter.) One way to do that is sketched at the end of this post.

-Tom
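[Editorial sketch, not from the original post: a small C simulation of the random-access claim above. It models a direct-mapped cache with 64-byte lines under uniformly random accesses; the 4MB working set, the cache sizes, and the direct-mapped geometry are all illustrative assumptions. The measured hit rate comes out near cache size divided by working-set size, so a 2048K cache hits roughly four times as often as a 512K one.]

  #include <stdio.h>
  #include <stdlib.h>

  #define LINE 64  /* bytes per cache line (assumed) */

  /* xorshift64 PRNG; rand() only guarantees 15 bits, too few here */
  static unsigned long long state = 88172645463325252ULL;
  static size_t rnd(size_t n) {
      state ^= state << 13;
      state ^= state >> 7;
      state ^= state << 17;
      return (size_t)(state % n);
  }

  /* Fraction of uniformly random accesses that hit a direct-mapped
     cache of cache_bytes, given a working set of ws_bytes. */
  static double hit_rate(size_t cache_bytes, size_t ws_bytes, long trials) {
      size_t nslots = cache_bytes / LINE;  /* cache lines available */
      size_t nlines = ws_bytes / LINE;     /* lines in the working set */
      size_t *tag = calloc(nslots, sizeof *tag);
      long hits = 0;
      for (long i = 0; i < trials; i++) {
          size_t line = rnd(nlines);   /* random line in the working set */
          size_t slot = line % nslots; /* direct-mapped placement */
          if (tag[slot] == line + 1)   /* +1 so tag 0 means "empty" */
              hits++;
          else
              tag[slot] = line + 1;    /* miss: evict whatever was there */
      }
      free(tag);
      return (double)hits / trials;
  }

  int main(void) {
      size_t ws = 4 * 1024 * 1024;  /* assumed 4MB working set */
      size_t caches[] = { 256 * 1024, 512 * 1024, 2048 * 1024 };
      for (int i = 0; i < 3; i++)
          printf("%4zuK cache: ~%.1f%% hits\n", caches[i] / 1024,
                 100.0 * hit_rate(caches[i], ws, 10000000));
      return 0;
  }

This prints roughly 6.2% hits for 256K, 12.5% for 512K, and 50% for 2048K, i.e. the larger cache always helps until the working set fits, which is the point Kerrigan is making.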
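[Editorial sketch of the instrumentation Kerrigan describes, again not from the original post: big_table, TABLE_SIZE, and the TABLE_AT macro are hypothetical names standing in for whatever array you suspect is blowing out the cache. Instead of printing every access, this version collects a histogram of reuse gaps, which is easier to read.]

  #include <stdio.h>

  #define TABLE_SIZE 65536  /* hypothetical "big array"; use your own */

  static int big_table[TABLE_SIZE];
  static long long now;                     /* global access counter */
  static long long last_touch[TABLE_SIZE];  /* last access time per element */
  static long long histo[48];               /* reuse gaps, bucketed by log2 */

  static int log_access(int i) {
      ++now;
      if (last_touch[i]) {
          long long gap = now - last_touch[i];
          int bucket = 0;
          while (gap >>= 1)
              bucket++;
          histo[bucket]++;
      }
      last_touch[i] = now;
      return i;
  }

  /* Read through this macro instead of indexing big_table directly. */
  #define TABLE_AT(i) (big_table[log_access(i)])

  /* Call once at exit (e.g., via atexit) to see the reuse pattern. */
  static void dump_reuse(void) {
      for (int b = 0; b < 48; b++)
          if (histo[b])
              printf("reuse gap ~2^%d accesses: %lld times\n", b, histo[b]);
  }

If nearly all reuse gaps are small compared to the number of lines the L2 cache holds, the array is almost certainly being served from cache; if the gaps are huge, or the array is barely touched at all, it isn't the problem. (A gap in access counts is only a proxy for true reuse distance, since it doesn't count distinct lines touched in between, but it's cheap and usually good enough to settle an argument like this one.)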