Computer Chess Club Archives


Subject: Re: Since the CPU is what really count for Chess !

Author: Robert Hyatt

Date: 13:37:35 03/18/03


On March 18, 2003 at 12:56:03, Tom Kerrigan wrote:

>On March 18, 2003 at 10:04:56, Robert Hyatt wrote:
>
>>>>>Using the Nforce2 chipset I'm able to run the ram at speeds from 50% up to 200%
>>>>>(100% being synchronous) of the fsb speed. I tested 200MHz FSB (400DDR) with
>>>>>200MHz memory (400DDR) and 200fsb with 100MHz memory (200DDR).
>>>>>The difference between ~1.6gb/s memory and ~3.2gb/s memory with craftys 'bench'
>>>>>command was 0.14%. Yes, about one seventh of one percent.
>>>>
>>>>That might well suggest _another_ bottleneck in that particular machine....
>>>
>>>What would that be?
>>>
>>>I ran a similar test on my AthlonXP 2500 w/nForce 2 chipset. Running the memory
>>>bus at 100 MHz or 133 MHz didn't make a significant difference in nps. The
>>>processor scored around 1.12 MN/s, and it scored some 20-30 KN/s more with a 133
>>>MHz memory bus. The FSB was 166 MHz in both cases.
>>>
>>>-Matt
>>
>>Were I guessing, I would guess the following:
>>
>>1.  no interleaving, which means that the raw memory latency is stuck at
>>120+ns and stays there.  Faster bus means nothing without interleaving,
>>if latency is the problem.
>
>Uh, wait a minute, didn't you just write a condescending post to me about how
>increasing bandwidth improves latency? (Which I disagree with...) You can't have
>it both ways.
>
>Faster bus speed improves both latency and bandwidth. How can it not?

It doesn't affect random latency at all.  It does affect the time taken to load a cache line, which influences latency in a different way.  However, interleaving does even better: although it doesn't change the raw latency either, it loads a cache line even faster.

"latency" is technically the amount of time required to read a random byte from
memory.  It
has been stuck at 120ns for almost 20 years.  But as cache line size has grown,
the actual
"felt" latency has gotten worse as once you get a cache line miss, memory is
"stuck" until
the entire line is read, and with 32/64/128 byte line sizes, that hurts.  A
faster bus will not
reduce that initial 120ns latency, but it _will_ reduce the total time required
to load a cache
line, which will reduce the latency for the _next_ memory read by that amount,
since they
have to be done serially.
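
To put some illustrative numbers on that (a rough sketch with assumed figures -- 120ns fixed latency, a 64-byte line, and ~1.6 vs ~3.2 GB/s effective transfer rates -- not a measurement of any particular chipset): doubling the bus rate only shrinks the transfer term of a miss, as the little model below shows.

/* Back-of-envelope model: miss cost = fixed DRAM latency + time to stream
 * the whole line across the bus.  All numbers are illustrative assumptions. */
#include <stdio.h>

int main(void) {
    const double base_latency_ns = 120.0;  /* fixed random-access latency      */
    const int    line_bytes      = 64;     /* typical cache line size          */
    const double slow_bus        = 1.6;    /* assumed ~1.6 GB/s = 1.6 bytes/ns */
    const double fast_bus        = 3.2;    /* assumed ~3.2 GB/s = 3.2 bytes/ns */

    double slow_fill = base_latency_ns + line_bytes / slow_bus;   /* 120 + 40 ns */
    double fast_fill = base_latency_ns + line_bytes / fast_bus;   /* 120 + 20 ns */

    printf("miss cost, slow bus: %.0f ns\n", slow_fill);
    printf("miss cost, fast bus: %.0f ns\n", fast_fill);

    /* Doubling the bus rate halves the line-fill time, but the miss cost only
     * drops from 160ns to 140ns (~12%), because the fixed latency dominates.
     * A search dominated by random hash misses sees even less of that in its
     * overall nps, which is consistent with the tiny 'bench' difference
     * quoted above. */
    return 0;
}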

If you do sequential memory reads, then this is not such a problem.  But for random-access reads, it is a killer.  And anything that can complete a cache line fill quicker helps the next memory reference by that amount.
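
A simple way to see that difference is a pointer-chasing loop (every load must wait for the previous one, so you pay nearly the full miss cost each time) against a plain sequential sweep.  This is only a sketch of the idea -- the array size and the timing method are arbitrary choices, not anything taken from Crafty:

/* Random (pointer-chase) vs. sequential access, as a rough demonstration. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (16u * 1024u * 1024u)   /* 16M entries, far larger than any cache */

/* Small 64-bit LCG so we don't depend on RAND_MAX being large enough. */
static unsigned long long seed = 88172645463325252ULL;
static size_t rnd(size_t bound) {
    seed = seed * 6364136223846793005ULL + 1442695040888963407ULL;
    return (size_t)((seed >> 33) % bound);
}

int main(void) {
    size_t *next = malloc((size_t)N * sizeof *next);
    if (!next) return 1;

    /* Sattolo's algorithm: builds a single random cycle, so the chase below
     * really visits all N slots in a cache-hostile order. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = rnd(i);                      /* j in [0, i-1] */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    clock_t t0 = clock();
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = next[p];     /* random: each load waits on the last */
    clock_t t1 = clock();

    size_t sum = 0;
    for (size_t i = 0; i < N; i++) sum += next[i];  /* sequential: streams whole lines */
    clock_t t2 = clock();

    printf("random chase: %.2fs   sequential sum: %.2fs   (%zu %zu)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, p, sum);
    free(next);
    return 0;
}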


It _may_ improve bandwidth.  That depends on whether every bus cycle can actually be used to transfer data, which isn't always the case.  It is just like increasing the core CPU speed: that doesn't always produce faster execution if instructions and data can't reach the CPU fast enough.
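
One rough way to see how far sustained bandwidth falls short of the theoretical peak is to time a large copy.  Again just a sketch, with the buffer size and repetition count picked arbitrarily:

/* Time a big repeated memcpy and report the sustained rate; the result is
 * normally well below the nominal bus figure, since not every bus cycle
 * ends up carrying data. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void) {
    const size_t bytes = 256u * 1024u * 1024u;   /* 256 MB buffers */
    const int reps = 8;
    char *src = malloc(bytes), *dst = malloc(bytes);
    if (!src || !dst) return 1;
    memset(src, 1, bytes);                       /* touch pages before timing */
    memset(dst, 0, bytes);

    clock_t t0 = clock();
    for (int r = 0; r < reps; r++) memcpy(dst, src, bytes);
    clock_t t1 = clock();

    double secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
    if (secs <= 0.0) secs = 1e-9;                /* guard against zero elapsed time */
    /* memcpy reads and writes every byte, so count the traffic twice. */
    printf("sustained copy bandwidth: ~%.0f MB/s\n",
           2.0 * reps * (double)bytes / (1024.0 * 1024.0) / secs);
    free(src); free(dst);
    return 0;
}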


>
>-Tom


