Computer Chess Club Archives


Subject: Re: Here is your _new_ data results...

Author: Robert Hyatt

Date: 06:57:12 04/02/03


On April 02, 2003 at 01:35:18, Bernhard Bauer wrote:

>On April 01, 2003 at 09:35:29, Robert Hyatt wrote:
>
>>On April 01, 2003 at 02:50:09, Bernhard Bauer wrote:
>>
>>>On March 31, 2003 at 14:55:20, Robert Hyatt wrote:
>>>
>>>>
>>>>The new results and old results are given below.  I notice that for both,
>>>>significant performance improvement is seen until 24Mbytes of hash memory
>>>>is reached.  Beyond that point the rate of improvement drops off, even
>>>>though larger hash sizes _still_ show some improvement.
>>>>
>>>>Again, the following notes.  For 3Mbytes of hash memory, this is about 90
>>>>seconds per move on a single 2.8GHz Xeon.  The dual, with 4 threads, searches
>>>>more than 2.0 times that many nodes, which will probably move the break-even
>>>>point up to the next size, which is 48M.  This is for a reasonable program
>>>>that doesn't hash in the q-search.  I'd suspect that for reasonable programs
>>>>that _do_ hash in the q-search, the size requirement will move up a couple
>>>>of factors of two at least, due to the overwriting that will happen (a
>>>>sketch of that overwriting effect follows the tables below).
>>>>
>>>>Someone should run that test, or maybe I'll temporarily add hashing to my
>>>>q-search to see how it affects things.
>>>>
>>>>But, regardless, the hash memory for best performance is the _same_ for both
>>>>runs, within some margin of error that is not very large.  As I said, the
>>>>positions are not important so long as they are not raw endgames like Fine 70.
>>>>
>>>>After Vincent's test, I will give the same test but only for Fine 70,
>>>>searched to a depth of 36 plies, with the same variable hash sizes.  This
>>>>ought to be a "best-case" for hashing, since Fine 70 is about the most
>>>>hash-friendly position known.  This will follow a bit later today.
>>>>
>>>>
>>>>
>>>>
>>>>--------------------------------------
>>>>      new data from Diepeveen
>>>>hash size     total nodes   total time
>>>>--------------------------------------
>>>>48K             685195642   10' 45.657"
>>>>96K             595795133    9' 21.891"
>>>>192K            532881448    8' 26.678"
>>>>384K            499903696    8' 7.834"
>>>>768K            464549956    7' 36.368"
>>>>1536K           419420212    6' 51.864"
>>>>3M              397280312    6' 31.477"
>>>>6M              372065936    6' 5.867"
>>>>12M             353954066    5' 49.194"
>>>>24M             335120523    5' 30.128"  new "big enough" point
>>>>48M             325010936    5' 24.549"
>>>>96M             319447256    5' 22.018"
>>>>192M            316337729    5' 20.492"
>>>>384M            308363819    5' 30.439"
>>>>
>>>>--------------------------------------
>>>>      previous data from bt2630
>>>>hash size     total nodes   total time
>>>>--------------------------------------
>>>>48K bytes.     1782907232   20' 48.262"
>>>>96K bytes.     1324635441   16'  2.635"
>>>>192K bytes.     986130807   12'  4.402"
>>>>384K bytes.     654917813    8' 29.490"
>>>>768K bytes.    1867732396   22'  9.466"
>>>>1536K bytes.   1547585550   18' 36.299"
>>>>3M bytes.      1214998826   14' 47.526"
>>>>6M bytes.       997861403   12'  9.856"
>>>>12M bytes.      315862349    4' 18.384"
>>>>24M bytes.      291943247    3' 58.600"  old "big enough" point
>>>>48M bytes.      281295387    3' 51.360"
>>>>96M bytes.      258749561    3' 35.094"
>>>>192M bytes.     252048149    3' 32.718"
>>>>384M bytes.     249648684    3' 36.142"
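
Just to make the overwriting point concrete, here is a minimal sketch of an
always-replace transposition table in C.  This is illustrative only -- it is
_not_ Crafty's actual hashing code, and the 16-byte entry layout is made up:

  #include <stdio.h>
  #include <stdlib.h>

  /* Entry count is a power of two, so the table index is just the low
     bits of the 64-bit Zobrist key.  Once stores outnumber entries,
     nearly every store overwrites a live entry; hashing the q-search
     multiplies the store count and so demands a bigger table. */

  typedef struct {
    unsigned long long key;   /* full key, to verify a probe's hit      */
    int depth, score;         /* draft and score of the stored search   */
  } Entry;

  static Entry *table;
  static unsigned long long mask;   /* entries - 1 */

  void tt_init(size_t bytes) {
    size_t entries = 1;
    while (entries * 2 * sizeof(Entry) <= bytes) entries *= 2;
    table = calloc(entries, sizeof(Entry));
    mask = entries - 1;
  }

  void tt_store(unsigned long long key, int depth, int score) {
    Entry *e = &table[key & mask];  /* a collision simply overwrites */
    e->key = key;  e->depth = depth;  e->score = score;
  }

  int tt_probe(unsigned long long key, int *depth, int *score) {
    Entry *e = &table[key & mask];
    if (e->key != key) return 0;    /* overwritten, or never stored */
    *depth = e->depth;  *score = e->score;
    return 1;
  }

  int main(void) {
    tt_init(48 * 1024);             /* smallest size in the tables above */
    tt_store(0x123456789abcdefULL, 10, 35);
    int d, s;
    if (tt_probe(0x123456789abcdefULL, &d, &s))
      printf("hit: depth %d, score %d\n", d, s);
    return 0;
  }
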
>>>
>>>Hi Bob,
>>>I suppose you used the first 6 positions of bt2630 to a fixed depth (which
>>>one?).
>>
>>I varied the depth to make each take a minute or so with normal hash.  Here is
>>the actual data (input) I used:
>>
>>Dang.  I don't have the actual file.  Here are the depths I used, for the
>>first six bt2630 positions:
>>
>>1:  sd=13
>>2:  sd=16
>>3:  sd=11
>>4:  sd=11
>>5:  sd=16
>>6:  sd=14
>>
>>I set up the test input something like this:
>>
>>title  bt2630-01
>>setboard xxxxxxxxxxxxxxxxxxxxxxxxxx
>>sd=y
>>solution zzz
>>
>>and repeated that for all 6 positions I used.  Then the "test" command will
>>work.
>>
>>If you have EPD input, put the sd= command before the EPD line and it should
>>work just fine (a made-up sample follows below).
>>
>>
>>
>>>So you had 36 sec for each position with 24M hash.  For such a short time
>>>we cannot expect a huge win in time.  But if you give Crafty more time, you
>>>may see improvements for greater hash sizes.  These improvements may show up
>>>as a shorter time or as a higher score.  My tests seem to show that.
>>>Kind regards
>>>Bernhard
>>
>>
>>I don't disagree at all.  The only problem with your suggestion is that making
>>the search longer at 24M hash makes it _way_ longer at 48K hash.  The second
>>test I ran, using Vincent's positions, was run a bit longer, as sd=n is a
>>pretty "coarse" adjustment in terms of time.
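
To be concrete, an EPD-style input file would look something like this.  The
position shown is just the initial position as a placeholder, and the bm and
id opcodes are made up for illustration -- they are not the real bt2630 data:

  sd=13
  rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - bm e4; id "placeholder-01";
  sd=16
  rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - bm d4; id "placeholder-02";

and so on, with one sd= line before each EPD record.
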
>
>Hi,
>not all of the first 6 positions of BT2630 are useful, so I took only position
>4.  This position, however, is a zugzwang position.  Note that I used a
>modified version of 19.3.
>
>Time (sec) for position 4 of BT2630, by hash/hashp size:
>
>depth   hash:   48k     192k    768k    3072k   12M     48M     96M     192M    384M
>        hashp:  12k     48k     192k    768k    3M      12M     24M     48M     48M
>10              17.07   19.17   10.25    9.07    7.95    7.88    7.84    7.91    7.76
>11              81.00   87.00   41.17   34.68   30.24   25.88   25.81   25.85   24.94
>12             250.00  177.00   85.00   61.00   49.08   39.28   38.70   38.49   36.95
>13             852.00  502.00  235.00  175.00  121.00   76.00   72.00   69.00   65.00
>14                -       -       -       -    540.00  385.00  337.00  315.00  264.00
>
>Here I varied hash and hashp.  Times are in seconds.  This table shows big
>time savings, so a larger hash size is useful, especially for analysis
>purposes.
>Kind regards
>Bernhard


That's certainly another way to measure.  Time remains constant, plot depth
against hash size...
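
To put the "big enough" point in perspective, a quick back-of-the-envelope
calculation: divide the ~335M total nodes from the Diepeveen run by the number
of entries at each table size.  The 16-byte entry size here is just an
assumption for illustration, and remember that a program that doesn't hash in
the q-search stores only a fraction of those nodes:

  #include <stdio.h>

  int main(void) {
    const double total_nodes = 335120523.0;   /* 24M-hash run, new data above */
    const int entry_bytes = 16;               /* assumed entry size */
    long long bytes;
    /* table sizes double from 48K up to 384M, as in the runs above */
    for (bytes = 48 * 1024; bytes <= 384LL * 1024 * 1024; bytes *= 2) {
      long long entries = bytes / entry_bytes;
      printf("%8lldK hash: %10lld entries, %12.1f nodes/entry\n",
             bytes / 1024, entries, total_nodes / entries);
    }
    return 0;
  }

Each doubling halves the nodes-per-entry ratio; the measured gains past 24M
are small even though the ratio is still above 1, which fits a program that
stores only its full-width nodes.
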


