Author: Robert Hyatt
Date: 08:04:23 06/24/02
On June 24, 2002 at 08:49:15, Robert Henry Durrett wrote:

>Apparently, "memory bandwidth" limitations result in chess computer performance
>limitations. I don't really understand the details, but it's supposed to be
>true. In fact, I don't really understand "memory bandwidth." I assume it is
>some sort of limitation on how fast information can be written to or retrieved
>from RAM. Presumably, new technology would improve this. Do I have this right?

Right idea, yes. Although "improving it" is non-trivial. Memory speeds (latency,
really) have not changed very much at all in 20+ years. It is doubtful they will
unless we get away from capacitive storage (DRAM) technology. Dumping charges
here and there will _never_ get any faster, due to the basic resistance,
capacitance and inductance present in any electrical circuit.

>So, the logical solution seems to be to minimize the number of times the
>program has to "go to memory," which I interpret as "going to RAM." It would
>seem that extensive use of caches would help in that regard.

Always...

>Someone pointed out recently that it takes only a few clock cycles to read or
>write to a cache [depending on which cache] but takes a huge number of clock
>cycles to do that with RAM.
>
>Now they're saying that the new Intel Itanium microprocessors have huge caches.
>[Also huge prices!]
>
>Doesn't this suggest that judicious use of huge caches [in preference to RAM]
>would produce better chess engines? This assumes that there is a way for the
>programmer to actually accomplish this. The right compilers must be used.

Using cache isn't really a compiler issue. It is a _programmer_ issue.

>If anybody here understands this stuff, please explain everything. :)
>
>Summary: Bigger caches mean better chess engines?

To a limit.

>Bob D.
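
To illustrate the "programmer issue" point: the compiler cannot invent a
cache-friendly data layout; the programmer chooses it. Below is a minimal
sketch in C of the standard idea for a chess transposition table. The entry
fields, the 16-byte entry size, the 4-entry bucket, and the assumed 64-byte
cache line are illustrative choices, not any particular engine's actual code.

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical transposition-table entry, packed into 16 bytes so that
       an assumed 64-byte cache line holds a bucket of 4 entries.  One probe
       then costs at most one trip to RAM, whichever slot matches. */
    typedef struct {
        uint64_t key;        /* Zobrist hash signature of the position      */
        int16_t  score;      /* stored search score                         */
        uint8_t  depth;      /* remaining depth (draft) of the stored entry */
        uint8_t  flags;      /* bound type: exact, lower, upper             */
        uint32_t best_move;  /* packed best move for move ordering          */
    } tt_entry;              /* 16 bytes, no padding on typical compilers   */

    #define BUCKET_SIZE 4    /* 4 * 16 = 64 bytes = one assumed cache line  */

    typedef struct {
        tt_entry slot[BUCKET_SIZE];
    } tt_bucket;

    static tt_bucket *table;
    static size_t     nbuckets;

    /* Allocate the table aligned on the cache-line size so that a bucket
       never straddles two lines (posix_memalign on POSIX systems). */
    int tt_init(size_t megabytes) {
        nbuckets = (megabytes * 1024 * 1024) / sizeof(tt_bucket);
        return posix_memalign((void **)&table, 64,
                              nbuckets * sizeof(tt_bucket));
    }

    /* Probe: one bucket index, one cache-line fetch; the four candidate
       entries are then compared entirely from cache. */
    tt_entry *tt_probe(uint64_t key) {
        tt_bucket *b = &table[key % nbuckets];
        for (int i = 0; i < BUCKET_SIZE; i++)
            if (b->slot[i].key == key)
                return &b->slot[i];
        return NULL;
    }

The design choice is exactly the "minimize trips to RAM" idea in the question:
entry size, bucket size, and alignment are picked so a hash probe costs one
memory access, and a bigger cache helps only until the table (or working set)
no longer fits, which is the "to a limit" in the reply.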