Computer Chess Club Archives


Subject: Re: Computer (CPU) benchmark for chess programs

Author: Robert Hyatt

Date: 08:18:29 01/18/98



On January 17, 1998 at 23:19:24, Don Dailey wrote:

>On January 17, 1998 at 18:44:07, Robert Hyatt wrote:
>
>>On January 17, 1998 at 13:07:14, Don Dailey wrote:
>>
>>>Bob,
>>>
>>>I am glad you cleared this up.  Everyone does seem to think that once
>>>the hash table reaches saturation, "everything backs up like a
>>>stopped-up sink" and life is over.  But as you stated, this is not the case.
>>>
>>>A (very) rough guide is that once your hash table reaches saturation,
>>>you will get a 6% speedup if you double it.  But the proper way to view
>>>this is as an extra BENEFIT, not a bottleneck.  If you double the search
>>>time, your program will benefit TREMENDOUSLY; if you double the hash
>>>size, it will benefit only SLIGHTLY.
>>
>>I just tried this, and didn't see this "tremendously" you mention.  I.e.,
>>I kept searching the same position deeper and deeper until, after an
>>iteration finished, it reported the hash was 99% full.  I cut it by half,
>>and ran to the same depth in roughly the same time...  I tried this on
>>three positions, and one of the three slowed down by 3-4%.
>
>I don't think you understood my sentence.   It almost sounds like you
>do not believe doubling the hardware speed is much of a benefit but
>that's what I was saying.  To reword:  It is a LOT more beneficial to
>double the time you spend searching than to double your hash table
>size.  This paragraph was designed to back up your argument so don't
>try to refute it!
>

Sorry.  If you read how it was worded, and take it in context with the
discussion, you can see where I went wrong.  :)
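
For concreteness, that experiment looks roughly like this in code.  It
is only a sketch; set_hash_size() and search_fixed_depth() are made-up
stand-ins for an engine's real entry points, not Crafty's actual
interface:

#include <stdio.h>
#include <stddef.h>
#include <time.h>

/* Engine hooks -- replace these stubs with your program's real calls. */
static void set_hash_size(size_t entries) { (void)entries; }
static long search_fixed_depth(int depth) { (void)depth; return 0L; }

int main(void) {
    size_t sizes[] = { (size_t)1 << 21, (size_t)1 << 20 };  /* full, then half */
    for (int i = 0; i < 2; i++) {
        set_hash_size(sizes[i]);
        clock_t t0 = clock();
        long nodes = search_fixed_depth(12);   /* same position, same depth */
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("entries=%zu  nodes=%ld  time=%.2fs\n",
               sizes[i], nodes, secs);
    }
    return 0;
}

Node counts, as Don suggests below, are a steadier yardstick than
wall-clock time for this comparison.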


>About the 6% I get vs the 3% you get: this can probably be explained,
>and it is not a point of contention, because I do get 6%.  First of all,
>the numbers will vary too much for only 3 positions to measure.  Also,
>it's possible you need more saturation (being barely saturated at 99% is
>not the same as having overwritten all the entries many times).  Try
>somewhat longer times.  It could also be cache effects that are sucking
>away a little of the benefit.  Finally, it could be that your replacement
>scheme changes these numbers for you (this one smells funny, though).
>Node counts might be more accurate than times for this one.

I agree.  But "saturation" is generally defined as tree-size/table-size,
if you read Beal's paper.  So 100% means the tree is exactly the size of
the hash table.  That's not the point where things degrade; I believe it
doesn't become very noticeable until you hit 300-500%.
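
In code, Beal's measure is just that ratio.  Here is a minimal sketch;
the function and parameter names are made up for illustration:

#include <stdio.h>

/* Saturation per Beal: tree size over table size, as a percentage.
 * 100% means the tree exactly filled the table; the degradation
 * doesn't become very noticeable until roughly 300-500%. */
static double saturation_pct(long tree_size, long table_entries) {
    return 100.0 * (double)tree_size / (double)table_entries;
}

int main(void) {
    /* e.g., a 4M-node tree against a 1M-entry table is 400% saturated */
    printf("%.0f%%\n", saturation_pct(4000000L, 1000000L));
    return 0;
}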

The other issue here is that I don't hash q-search nodes... so I have
*nothing* in my table but positions where depth > 0.  And my saturation
has to be measured differently, since q-search nodes are a significant
fraction of total nodes yet never hit the table at all...
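
Concretely, the store path is guarded by depth.  A sketch (hash_store()
and its signature are assumptions for illustration, not Crafty's actual
code):

#include <stdint.h>

/* Engine hook: whatever routine actually writes the table entry. */
extern void hash_store(uint64_t key, int depth, int score, int flag,
                       int move);

/* Only full-width nodes (depth > 0) are ever stored; q-search nodes
 * never touch the table, so a saturation measure should count only
 * the depth > 0 part of the tree. */
void store_result(uint64_t key, int depth, int score, int flag, int move) {
    if (depth <= 0)
        return;                       /* q-search node: never hashed */
    hash_store(key, depth, score, flag, move);
}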



>
>At any rate your numbers are in the rough ballpark so I don't think
>anything is wrong with your program.  Larry noticed that most programs
>obey this rule of thumb.  I forget where we heard this from originally
>but I believe it might have been Ken Thompson (but don't quote me) and
>it's accurate for us.
>
>We used to tune for tournaments too.  I'm too lazy for this now, but
>there was a time I would try to find the smallest hash table size
>I could get away with.  We noticed that if the table was much bigger than
>we needed, the program was 2 or 3 percent slower!  Nowadays we don't
>bother, and we use hash table aging anyway, which makes this less appealing.
>
>- Don
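
The aging Don mentions is usually done with a per-search generation
counter.  A sketch of one common scheme (the entry layout and names are
assumptions, not necessarily what either of our programs does):

#include <stdint.h>

typedef struct {
    uint64_t key;       /* position signature */
    int16_t  score;
    uint8_t  depth;
    uint8_t  age;       /* generation of the search that wrote the entry */
} HashEntry;

static uint8_t current_age;   /* bumped once at the start of each search */

/* Prefer to overwrite entries left over from earlier searches; among
 * entries from the current search, keep the deeper result. */
int should_replace(const HashEntry *old, int new_depth) {
    return old->age != current_age || new_depth >= old->depth;
}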


