Computer Chess Club Archives



Subject: Re: Tip: how to reduce hard drive churning with tablebases

Author: Robert Hyatt

Date: 08:55:57 03/12/04



On March 11, 2004 at 23:39:57, William Penn wrote:

>On March 11, 2004 at 11:16:41, Robert Hyatt wrote:
>
>>On March 11, 2004 at 01:26:31, William Penn wrote:
>>
>>>>>I only suggest reduced hash size in situations where tablebase access is
>>>>>extremely heavy. There's no point to it otherwise, except I've noticed that
>>>>>smaller hash size spits out analysis "legs" quicker. {Leg=analysis completed at
>>>>>a particular ply level} Big hash size takes longer to finish the calculations at
>>>>>a particular ply level.
>>>>
>>>>Then you have something broken.  If bigger hash slows the program down when
>>>>tablebases are not accessed, something is wrong.  And since they are not
>>>>accessed in most positions, bigger hash should generally always be better.
>>>
>>>Note that I'm running in infinite analysis mode for long periods of time,
>>>usually several hours to analyze each position.
>>>
>>>Bigger hash lengthens the legs. I define a leg as completion of analysis at a
>>>particular ply level. After completion of a leg, the analysis is spit out for
>>>the user to see as text in the engine window. The more hash, the longer it
>>>generally takes to calculate each leg; the difference is that the analysis
>>>apparently goes a little deeper with more hash, at least with the Shredder 8
>>>engine. The program speed as indicated by kN/s isn't slowed down much, but it
>>>is always a little lower with larger hash.
>>
>>Here you are working from a poor definition of "bigger hash is worse".
>>
>>Hash can do two things:
>>
>>(1) make the search go faster to a specific depth.  Generally, the bigger the
>>hash table, the faster the program reaches a specific depth.  You are using
>>this metric.
>>
>>(2) make the search at a specific depth more accurate.  Which sometimes means
>>the time to the same depth will be slower, but then the score is more accurate.
>>You are ignoring this case.
>>
>>The longer the search time limit, the more important bigger hash sizes are.  And
>>this is _not_ guesswork.  It's been proven over and over and over with
>>testing...
>
>We're perhaps talking about different things, and I haven't made myself clear:
>
>Partly... What I'm interested in is getting the program to spit out analysis
>into the real world in text form (into the engine window) where I can see it at
>reasonable intervals. When analyzing for several hours in infinite analysis
>mode, this can be a problem. Typically those intervals are about 1 hour,
>2 hours, 4 hours, etc. if I use 512MB hash. If I switch to 768MB hash they
>are more like 1 hour, 3 hours, 9 hours, so the gaps between them are longer.
>If I switch to smaller hash then they are usually more frequent. This is
>important for scheduling my analyses. It is better to keep the intervals (which
>I also call "legs") short so that I have convenient points to stop the ongoing
>analysis. If I have to wait 10-20 hours for the next leg to complete, that's
>usually too much time and trouble.

This is a different problem.  You said you have 1 gig of RAM.  768M of hash is
probably just too big and causes paging/swapping...
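
Rough arithmetic, where the 256M reserve for XP, the GUI and the engine's own
code is my ballpark rather than anything measured on that box: 1 gig minus
768M of hash leaves essentially nothing for the filesystem cache that has to
hold the Nalimov files, so every tablebase probe goes back to the disk while
XP is also paging pieces of the hash in and out.  A back-of-envelope sketch
in plain C (the figures are the assumptions, not the code) makes the point:

  #include <stdio.h>

  int main(void) {
      /* everything in megabytes; the 256M reserve for XP + GUI + engine
         is an assumed ballpark, not a measurement                        */
      long physical = 1024;
      long reserve  = 256;
      long hash[]   = { 256, 512, 768 };

      for (int i = 0; i < 3; i++) {
          long cache = physical - reserve - hash[i];
          printf("hash %4ldM -> %4ldM left for the tablebase file cache\n",
                 hash[i], cache);
      }
      return 0;
  }

That prints roughly 512M of cache left with a 256M hash, 256M left at 512M, and
0M left at 768M, which is about where the churning and the long "legs" would be
expected to start.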
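
As for points (1) and (2) quoted above, here is a bare-bones illustration of
how the hash (transposition) table gets used.  The layout and names are made
up for the example; they are not Crafty's or Shredder's actual code:

  #include <stdint.h>
  #include <stdlib.h>

  typedef struct {
      uint64_t key;    /* Zobrist hash of the position              */
      int16_t  depth;  /* remaining depth when the entry was stored */
      int16_t  score;  /* score found at that depth                 */
  } tt_entry;

  static tt_entry *table;
  static size_t    entries;

  int tt_init(size_t megabytes) {
      entries = megabytes * 1024 * 1024 / sizeof(tt_entry);
      table   = calloc(entries, sizeof(tt_entry));
      return table != NULL;
  }

  /* (1) a hit with enough stored depth lets the search skip the whole
     subtree, so a bigger table (fewer overwrites) reaches a given depth
     faster; (2) a hit stored from a deeper search than requested hands
     back a sharper score than the current iteration would have found.  */
  int tt_probe(uint64_t key, int depth, int *score) {
      tt_entry *e = &table[key % entries];
      if (e->key == key && e->depth >= depth) {
          *score = e->score;
          return 1;
      }
      return 0;
  }

  void tt_store(uint64_t key, int depth, int score) {
      tt_entry *e = &table[key % entries];
      if (depth >= e->depth) {           /* depth-preferred replacement */
          e->key   = key;
          e->depth = (int16_t)depth;
          e->score = (int16_t)score;
      }
  }

The bigger the table, the longer entries from earlier iterations survive before
being overwritten, which is why the payoff from a big hash grows with the length
of the search -- as long as the whole table stays in physical RAM.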


>
>>>
>>>>add another gig.  Now your 512MB hash will work just fine...  leaving over
>>>>1 gig for the filesystem cache...
>>>
>>>I already have the maximum, 1GB, for this box. I'm not convinced (I have no
>>>faith) that the Windows XP operating system would use more RAM advantageously.
>>>I'd have to see it to believe it.
>>>WP
>>
>>XP will use 4 gigs just fine.  From experience...
>
>XP will, but this Compaq Presario's specs say 1GB maximum. There are only two
>sockets for RAM modules, and as far as I'm aware 512MB is the biggest module
>available for it. Thus it's limited to 2 x 512MB = 1GB total.
>WP


