Computer Chess Club Archives



Subject: Re: Here is your _new_ data results...

Author: Robert Hyatt

Date: 09:19:24 04/02/03



On April 02, 2003 at 11:15:54, Sune Fischer wrote:

>On April 02, 2003 at 09:57:12, Robert Hyatt wrote:
>
>>>Time for Position 4 of BT2630
>>>
>>>hash:     48k     192k    768k    3072k   12M     48M     96M     192M    384M
>>>hashp:    12k     48k     192k    768k    3M      12M     24M     48M     48M
>>>
>>>depth 10  17,07   19,17   10,25   9,07    7,95    7,88    7,84    7,91    7,76
>>>depth 11  81,00   87,00   41,17   34,68   30,24   25,88   25,81   25,85   24,94
>>>depth 12  250,00  177,00  85,00   61,00   49,08   39,28   38,70   38,49   36,95
>>>depth 13  852,00  502,00  235,00  175,00  121,00  76,00   72,00   69,00   65
>>>depth 14  -       -       -       -       540,00  385,00  337,00  315,00  264
>>>
>>>Here I varied hash and hashp. Times are in seconds. This table shows big time
>>>savings, so a larger hash size is useful, especially for analysis purposes.
>>>Kind regards
>>>Bernhard
>>
>>
>>That's certainly another way to measure.  Hold the time constant and plot depth
>>against hash size...
>
>I don't think this is quite the right way to measure things; a larger hash will also
>increase accuracy. You may not get to ply 10 faster, but it is conceivable that
>you instead find the solution a ply sooner.


This is not going to be easy to measure no matter what you do.

For example, fixing the time and noting the depth reached for a given hash size tells
you something.

Fixing the depth and noting the time the search takes for a given hash size tells you
something.

Varying the hash and measuring the time to solution tells you something, assuming that
your program finds the solution quicker because of the more accurate search the bigger
hash produces.
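
As a rough illustration of those three measurements, here is a minimal sketch using the
python-chess library against an arbitrary UCI engine.  The engine path, hash sizes, and
limits are placeholders (nothing here is Crafty-specific), and Fine #70 is used only
because it comes up below as an easy-to-recognize test case:

import chess
import chess.engine

# Fine #70, used purely as an example position; Kb1 is the well-known key move.
FEN = "8/k7/3p4/p2P1p2/P2P1P2/8/8/K7 w - - 0 1"
SOLUTION = "a1b1"

# Placeholder path; any UCI engine that exposes a "Hash" option will do.
engine = chess.engine.SimpleEngine.popen_uci("./engine")

for hash_mb in (1, 4, 16, 64, 256):
    engine.configure({"Hash": hash_mb})
    board = chess.Board(FEN)

    # 1) fix the time, note the depth reached
    info = engine.analyse(board, chess.engine.Limit(time=10), game=object())
    print(hash_mb, "MB: reached depth", info.get("depth"), "in 10 seconds")

    # 2) fix the depth, note the time used
    info = engine.analyse(board, chess.engine.Limit(depth=20), game=object())
    print(hash_mb, "MB: depth 20 took", info.get("time"), "seconds")

    # 3) note the iteration depth at which the solution move first appears at
    #    the root (keeping the hash warm between iterations, as a real
    #    iterative-deepening search would)
    for depth in range(1, 31):
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        pv = info.get("pv") or []
        if pv and pv[0].uci() == SOLUTION:
            print(hash_mb, "MB: solution move found at depth", depth)
            break

engine.quit()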

But all of these miss the "better information" that might make you change your mind and
find a new best move that is not a lot better, but is better nonetheless.  And they miss
the case where you get a more accurate score that prevents you from changing to a
different (and worse) move, or makes you change to a better move.

I'm not sure how to objectively measure that last case.  The other three are empirical
studies made by running and measuring.  But when you try to factor in the "better
search" produced by a bigger hash, it becomes more subjective.  I.e., a position like
Fine #70 is pretty clear, as you can find the solution 8 plies sooner (for Crafty,
sometimes) than the normal 26 plies needed.  But other positions don't produce the same
quick solution move even though the search is "better".  In fact, it is possible to get
the _exact_ same score, for the exact same best move, and _still_ have carried out a
better search that just didn't help in this particular case.




>With replace-and-store, many of the shallow entries will get overwritten in a
>small hash; you keep only the most important and expensive results, but I think
>it's often the shallow ones that transpose and bring you that bit of extra
>valuable information near the leaves.
>
>The tree will certainly shrink because of transpositions, but going to main
>memory is also a slowdown. As a result, I think the effects seen in experiments
>like these are bound to be rather small if they do not take the quality of the
>search into account.
>
>I think it would be interesting to design an experiment that pitted those two
>effects against each other, to see which is the more dominant. Certainly there is
>some connection between the two.
>
>-S.


I think that the "test" would be highly tailored to a specific engine, unfortunately,
as producing positions that expose this behavior would be pretty engine-specific.
It would be interesting to do, however.  I've thought about writing an ICCA paper on
the tests I ran last year about "how many collisions are needed to break a search?" or
something similar.  This kind of info could be factored into that.
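
For what it's worth, the replacement trade-off Sune describes above (deep, expensive
entries versus the cheap shallow ones that give transposition cutoffs near the leaves)
can be sketched in a few lines.  This is a toy illustration, not Crafty's or anyone
else's actual table: one depth-preferred slot plus one always-replace slot per index,
so a shallow entry can survive next to a deep one instead of simply being dropped.

# Toy two-tier transposition table, for illustration only.
class TwoTierTable:
    def __init__(self, entries):
        self.size = entries
        self.deep = [None] * entries      # depth-preferred slots
        self.always = [None] * entries    # always-replace slots

    def store(self, key, depth, score, move):
        i = key % self.size
        entry = (key, depth, score, move)
        if self.deep[i] is None or depth >= self.deep[i][1]:
            self.deep[i] = entry          # the deeper (more expensive) result wins this slot
        else:
            self.always[i] = entry        # the shallow result is still kept here

    def probe(self, key):
        i = key % self.size
        for slot in (self.deep[i], self.always[i]):
            if slot is not None and slot[0] == key:
                return slot
        return None

With a purely depth-preferred scheme, the "else" branch above would simply discard the
shallow entry, which is exactly the loss in a small table that Sune is pointing at.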


