Computer Chess Club Archives


Subject: Re: Source code to measure it - results

Author: Jeremiah Penery

Date: 18:13:23 07/15/03

On July 15, 2003 at 20:19:34, Vincent Diepeveen wrote:

>On July 15, 2003 at 15:24:19, Gerd Isenberg wrote:
>
>Gerd use it with a bigger hashtable. Not such a small
>table.
>
>400MB is really the minimum to measure.

Why?

Measuring 90MB, something like 99.65% of the accesses should go to RAM rather
than cache.  With 100MB, it's 99.8%.  Yet when I measure those two sizes, I get
a whole 6.1ns latency difference according to your test.  Even measuring only
20MB, 98.4% of the allocated memory cannot be in cache.  (All of this assumes
that the table data gets 100% of the cache, which it won't.)
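
To put rough numbers on that: assuming, purely for illustration, something like
320KB of cache in total, a uniformly random read into a 90MB table can hit
cache at most about 320KB / 90MB, or 0.36% of the time; into 100MB, about
0.32%; into 20MB, about 1.6%.  All three sizes are already far beyond the point
where the cache matters, and the exact cache size only moves those figures by a
fraction of a percent.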

There's something wrong that causes memory access time to be reported much
higher when testing larger 'hashtable' sizes.  Anything large enough to
overwhelm the cache should report similar, if not almost identical, results.
However, your program gives wildly different numbers:

Trying to allocate 12500000 entries. In total 100000000 bytes
  Average measured read read time at 1 processes = 183.935982 ns

Trying to allocate 11250000 entries. In total 90000000 bytes
  Average measured read read time at 1 processes = 177.806427 ns

Trying to allocate 43750000 entries. In total 350000000 bytes
  Average measured read read time at 1 processes = 253.592331 ns

In the last test, I can't be completely sure there was no paging.  I didn't
see the disk light flashing, but it's possible that this hit the disk more than
once, which would make the number look much higher than it should be.

Still, relative to the other results people have given, this is not so bad,
since I have only PC2100 memory (133MHz DDR).
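
For what it's worth, the kind of measurement I have in mind is a plain pointer
chase: build one long random cycle through the allocated table and then follow
it, so each read's address depends on the value just loaded and nothing can be
prefetched.  This is only a minimal sketch under my own assumptions (8-byte
entries, made-up entry and read counts, an arbitrary LCG for the shuffle), not
Vincent's actual code:

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  int main(void) {
      size_t n = 11250000;        /* 11.25M entries * 8 bytes = 90000000 bytes */
      unsigned long long *next = malloc(n * sizeof *next);
      if (next == NULL) return 1;

      /* Build a single random cycle over all entries (Sattolo's shuffle),
         so the chase below walks the whole table in a scattered order. */
      for (size_t i = 0; i < n; i++)
          next[i] = i;
      unsigned long long seed = 12345;
      for (size_t i = n - 1; i > 0; i--) {
          seed = seed * 6364136223846793005ULL + 1442695040888963407ULL;
          size_t j = (size_t)(seed % i);                        /* 0 .. i-1 */
          unsigned long long tmp = next[i]; next[i] = next[j]; next[j] = tmp;
      }

      /* Chase the cycle: every load's address comes from the previous load,
         so the time per iteration is dominated by raw read latency. */
      long nreads = 20000000;
      unsigned long long idx = 0;
      clock_t t0 = clock();
      for (long i = 0; i < nreads; i++)
          idx = next[idx];
      clock_t t1 = clock();

      /* Print idx so the compiler can't throw the chase away. */
      printf("%.1f ns per read (final idx %llu)\n",
             (double)(t1 - t0) / CLOCKS_PER_SEC * 1e9 / (double)nreads, idx);
      free(next);
      return 0;
  }

If the argument above is right, a loop like this should report roughly the same
figure for any table a few times larger than the cache, whether that's 20MB or
350MB.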


