Author: Roberto Waldteufel
Date: 14:22:45 10/23/98
On October 23, 1998 at 10:30:22, William H Rogers wrote:

>Robert
>
>The way I understood it was like this: suppose that you are accessing a long
>list of names and addresses off your hard drive. When the computer fetches the
>first address, it also continues to load as many more as it can hold in the cache
>memory, so that if you sequence through the files, the next one is already in
>memory waiting for you. This speeds up the access.
>
>Before the invention of cache, some programmers would create a large file in memory
>and load it with data from their files, because memory access is much faster
>than disk access. I wrote a small spell checker once and loaded the dictionary
>totally into memory. It ran fast that way. However, if I updated the dictionary,
>I had to write it back to the hard disk when the program was through.
>
>As far as where the cache is located, on some of the newer machines, part of it
>is in the CPU and another part is held in chips on the motherboard. It usually
>works much faster than regular RAM.
>
>Bill

Thanks for the explanation. I wonder whether it is possible to improve the performance (speed-wise) of software by making it "cache friendly" in some way, e.g. by choosing data structures of certain sizes, or arranging data in a certain order. It certainly adds another dimension to the task of code optimization.

Best wishes,
Roberto
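The kind of cache-friendly arrangement Roberto asks about can be illustrated with a small C sketch (the array, sizes, and function names here are illustrative, not from the post): traversing a 2-D array in the order it is laid out in memory versus jumping across it.

```c
/* Illustrative sketch: C stores 2-D arrays row-major, so traversal
 * order alone changes how well the hardware cache is used. */
#include <stddef.h>

#define N 1024  /* illustrative size */

/* Row-major order: consecutive accesses are adjacent in memory, so
 * each cache line fetched from RAM is fully used before eviction. */
long sum_row_major(int a[N][N])
{
    long s = 0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Column-major order: successive accesses are N*sizeof(int) bytes
 * apart, so each fetched cache line typically yields one element. */
long sum_col_major(int a[N][N])
{
    long s = 0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i][j];
    return s;
}
```

Both functions return the same total, but on most machines the row-major version runs several times faster at large N because it streams through memory one cache line at a time; timing the two with clock() from <time.h> makes the difference visible.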