Computer Chess Club Archives


Subject: Re: Hash effects

Author: Robert Hyatt

Date: 13:06:30 11/17/97



On November 17, 1997 at 15:27:59, Eugene Nalimov wrote:

>
>On November 17, 1997 at 10:44:54, Robert Hyatt wrote:
>
>>On November 17, 1997 at 10:00:14, Chris Carson wrote:
>>
>>>Here are the results:
>>>
>>>Constants:
>>>
>>>Pentium 133 (non-mmx)
>>>Windows 95
>>>Crafty 13.10
>>>Book off
>>>Pondering off
>>>Set Depth  11
>>>Move  1. a3   (opening)
>>>
>>>Results:
>>>
>>>Hash   Ply=9  Ply=10  Ply=11
>>>----   -----  ------  ------
>>>3MB    00:43  02:32   27:33
>>>6MB    00:42  02:28   14:05
>>>12MB   00:43  02:26   13:33
>>
>>Very good data.  Let's see what this means:
>>
>>First, hash entries are 16 bytes each, so 12MB = 3/4 million entries,
>>6MB = 3/8 million entries, and 3MB = 3/16 million entries.
>>
>>Second, on a P5/133, Crafty typically searches around 30K nodes per
>>second.
>>
>>Let's take each test one at a time:
>>
>>Ply=9: 43 seconds = 43 x 30 x 1000 = 1,290,000 nodes.  Since Crafty
>>does not hash quiescence nodes, I'll guesstimate that 2/3 of the nodes
>>searched never make it to the hash table, so 400,000 nodes will be
>>stored, and only 12MB is enough to store all of them.  However, since
>>the times are identical, we can assume that the replacement strategy is
>>good enough that even with the smallest size (187,500 entries) the 2:1
>>over-utilization is not a problem.  The next size is close to the size
>>of the tree that was searched (non-qsearch nodes, 375,000 entries),
>>while the last, at 750,000 entries, is obviously overkill.
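
As a sanity check on that arithmetic, here is a throwaway C fragment
that just redoes the back-of-envelope numbers (16-byte entries, ~30K
nodes/sec, 1/3 of nodes hashable - the same guesses as above, not
anything taken from Crafty itself):

#include <stdio.h>

#define ENTRY_SIZE 16      /* bytes per hash entry            */
#define NPS        30000   /* rough Crafty speed on a P5/133  */

int main(void) {
  int    mb[3]  = { 3, 6, 12 };       /* hash sizes tested         */
  double stored = 43.0 * NPS / 3.0;   /* ply=9: 1/3 of 1.29M nodes */
  int    i;

  printf("storable nodes at ply=9: %.0f\n", stored);  /* ~430,000 */
  for (i = 0; i < 3; i++) {
    long entries = mb[i] * 1000000L / ENTRY_SIZE;
    printf("%2dMB = %7ld entries -> %.1fx utilization\n",
           mb[i], entries, stored / entries);
  }
  return 0;
}

(Rounding the ~430K down to 400K as above, the smallest table sits
right at the 2:1 figure.)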
>>
>
>Maybe I'm wrong, but:
>
>1. In the 'a3' test the hash table will be used mainly for move
>   ordering - that's not an endgame.  (But it's not a middlegame
>   either - maybe things are different here?  It's easy to make an
>   experiment - just remove the HT cutoff and alpha/beta bounds
>   modification from Crafty.)
>

Depends on what you are measuring.  The opening is more favorable to
hashing than the middlegame, because there are fewer moves at each ply,
letting the search go deeper, which is what transpositions need.  The
endgame is even better, of course.
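
Incidentally, you can see this directly in Chris's numbers: dividing
successive ply times gives a rough effective branching factor for each
hash size:

  time(10)/time(9):   3MB:  152/43  = 3.5    12MB: 146/43  = 3.4
  time(11)/time(10):  3MB: 1653/152 = 10.9   12MB: 813/146 = 5.6

With the small table the last ply costs nearly twice as much,
relatively speaking - the extra hash misses show up as a fatter tree.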


>2. So, if we are searching to ply N, we have to store the entire tree
>   at ply N-1.  Of course, Crafty will store results at ply N too, but
>   (hopefully) the hash table replacement strategy is good enough - at
>   least it will not overwrite a hash entry with depth 2 from ply N-1
>   with a depth-1 entry from ply N.
>
>3. So, for a middlegame search to ply N, the 'sufficient' hash table
>   size is not sizeof(tree), but sizeof(tree)/(branching factor), e.g.
>   sizeof(tree)/2.5.
>
>Eugene

Not even that big, it seems.  I.e. sizeof(tree)/10 is not bad at all if
you look at the last column.
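
To put a number on that: in the last column the 6MB run takes
14:05 ~ 845 seconds, i.e. about 845 x 30 x 1000 = 25.4M nodes searched,
so

  stored nodes ~ 25.4M / 3       =  8.5M
  6MB table    =  375,000 entries
  ratio        =  8.5M / 375,000 ~  23x

which is well past sizeof(tree)/10, and it still holds together.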


>
>>Ply=10 again shows no differences, although the size of the tree is
>>up to roughly 150 x 30 x 1000 = 4,500,000 nodes.  Let's keep the 33%
>>non-qsearch guess, so we now are going to search 1.5M nodes that need
>>to be stored.  This is about 8x the size of the smallest table, 4x the
>>middle test you ran, and 2x the size of the largest table.  Yet there
>>is no real difference, which says that 8x over-utilization is not
>>harmful, due to the replacement strategy I use.
>>
>>Ply=11 begins to find the breaking point of the small hash table,
>>since the times jump sharply.  The simplest conclusion to draw from
>>this case is that 3MB is too small, while 6MB/12MB are close enough to
>>be considered equivalent.  27 minutes should turn into
>>27 x 60 x 30 x 1000 = 48,600,000 nodes.  This is *way* over any of the
>>table sizes.  Again take 1/3, which is about 16M nodes.  This is a
>>poor fit for 187,500 entries, and would seem to be a poor fit for
>>375,000 entries too, but that size does better.  The 750,000 entries
>>is better still, and it would be nice to know whether this trend
>>continues for 1.5M entries and beyond.
>>
>>It's interesting that the very simple two-level probe algorithm I use
>>(which I generally call the "Belle hash algorithm", since it was first
>>used in Belle nearly 20 years ago) works so well here.  It would
>>obviously be worse if I hashed q-search nodes.  This is likely one of
>>the things that led me to get rid of hashing capture-search stuff,
>>because performance tests were always better when I had it disabled...
>>I eventually removed the call to LookUp() in Quiesce() and have only
>>tried putting it back once, and didn't like the result.  Note that the
>>difference between hashing and not hashing the q-search seems to be a
>>total "wash"... i.e. the qsearch is faster without hashing, but the
>>tree is slightly larger.  In my case it was maybe 2% faster overall
>>without hashing, so I left it out.  However, in severe cases, not
>>hashing can be much better, because it is harder to over-utilize the
>>hash table now than it was when I was hashing qsearch stuff.  So in
>>small-memory, big-search cases, this is probably a big win.
>>
>>
>>
>>>
>>>I will try the same test with the middle game position
>>>later, for comparison.  :)
>>>
>>>Best Regards,
>>>Chris Carson


