Computer Chess Club Archives



Subject: Re: How important is a big hash table? Measurements... (here is my data)

Author: Robert Hyatt

Date: 08:15:47 03/30/03



On March 30, 2003 at 10:50:44, Robert Hyatt wrote:

>I ran the first 6 bt2630 positions to a depth that needed about 2 minutes
>per position with the default (3Mbytes) hash table size.
>
>I varied the hash size from 48K bytes through 384M bytes.  Note that a hash
>entry in Crafty is 16 bytes, and that I use an approach modeled after the
>2-table approach used in Belle (1/3 of memory for a depth-preferred table,
>2/3 for always-store), except that there is only one table, made of triplets,
>where the first entry of each triplet is depth-preferred and the other two
>are always-store.
>
>To determine the number of entries, divide the given hash size by 16.
>
>There is one anomaly in the data.  If you look at the results, there is a point
>where increasing the hash size suddenly makes the search take longer.  At that
>point, position 6 (the last position) fails high in the final iteration.  In
>the smaller hash tests it does not, so this position skews the results: the
>bigger hash actually produces a better result at the same depth, something
>that is certainly expected.  Here is the actual data:
>
>hash size     total nodes   total time
>--------------------------------------
>48K bytes      1782907232   20' 48.262"
>96K bytes      1324635441   16'  2.635"
>192K bytes      986130807   12'  4.402"
>384K bytes      654917813    8' 29.490"
>768K bytes     1867732396   22'  9.466"
>1536K bytes    1547585550   18' 36.299"
>3M bytes       1214998826   14' 47.526"
>6M bytes        997861403   12'  9.856"
>12M bytes       315862349    4' 18.384"
>24M bytes       291943247    3' 58.600"
>48M bytes       281295387    3' 51.360"
>96M bytes       258749561    3' 35.094"
>192M bytes      252048149    3' 32.718"
>384M bytes      249648684    3' 36.142"
>
>The speed break-even point seems to be around 96M for fine-tuning, or around
>24M if you look only for large speedups.
>
>For a 24M hash size, that is 1.5M entries, compared to 291M total nodes in
>the tree.
>
>I should add that these positions are known to be tactical, which probably
>means the q-search is significantly larger than it would be for more normal
>positions, so the above results might be skewed toward smaller table sizes
>since q-search doesn't get stored at all.
>
>But in any case, for a minute a move or so, I'd clearly want something in
>the 48-96M range.  Double the time and I'd want to double the hash size.
>
>And in _any_ case I would not assume that 16MB is the max that works.  As I
>said before, it depends on the quality of the search that is being carried
>out, and it will probably vary from engine to engine.
>
>Now I hope you will choose to dump that "this disproves the hyatt claim"
>stuff, you clearly didn't disprove _anything_...
>
>If anyone wants to see the raw data, I can post it, but it's big.  Or I can
>post the test positions so that you can run them for yourselves...


I should have mentioned that the tests were run on my 2.8GHz Xeon, using one
process only.  I would normally use 4 threads to include the hyper-threading
CPUs, and overall, the 4-thread program will probably run this test about
2x faster.  So I would probably double the hash size for the same length
searches, since the tree would be something like 2.4X larger, factoring in the
other three threads.




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.