Author: Robert Hyatt
Date: 15:44:07 01/17/98
On January 17, 1998 at 13:07:14, Don Dailey wrote:

>Bob,
>
>I am glad you cleared this up. Everyone does seem to think that once
>the hash table reaches saturation, "everything backs up like a
>stopped-up sink" and life is over. But as you stated, this is not the
>case.
>
>A (very) rough guide is that once your hash table reaches saturation,
>you will get a 6% speedup if you double its size. But the proper way
>to view this is as an extra BENEFIT, not a bottleneck. If you double
>the search time, your program will benefit TREMENDOUSLY; if you double
>the hash size, it will benefit only SLIGHTLY.

I just tried this, and didn't see the "tremendously" you mention. I.e.,
I kept searching the same position deeper and deeper until, after an
iteration finished, the program reported the hash was 99% full. I then
cut the table in half and ran to the same depth in roughly the same
time... I tried this on three positions, and one of the three slowed
down by 3-4%.

I agree that if you overrun the table by 10x you are going to have
problems, but at 2x there are *so* many positions stored that never get
used (i.e., 25% hits is good in the middlegame) that if you overwrite
that other 75%, it has no effect at all.

If the replacement strategy is decent (see the sketch at the end of
this post), this seems to hold true. I.e., in KK's *long* think games,
I don't see the search rip through 12 iterations and then see 13, 14
and 15 bog down like nuts because the hash has been horribly
overwritten. (1 hour at 100K nodes per second is about 360 million
nodes, if I did my math right. We are going to use 8M max for hash,
which is 6M for me; divided by 16 bytes per entry, that gives a total
of 384K entries (I think). That is over-subscribing by a factor of
1,000 or so... someone might check my math, of course...)
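To make the arithmetic concrete, here is a minimal C check using only
the figures from the paragraph above (100K nodes per second for one
hour, a 6 MB table, 16 bytes per entry); the variable names are mine:

    #include <stdio.h>

    int main(void) {
        const double nps         = 100000.0;           /* nodes per second */
        const double seconds     = 3600.0;             /* one hour         */
        const double table_bytes = 6.0 * 1024 * 1024;  /* 6 MB of hash     */
        const double entry_bytes = 16.0;               /* bytes per entry  */

        double nodes   = nps * seconds;                /* 360,000,000         */
        double entries = table_bytes / entry_bytes;    /* 393,216, i.e., 384K */

        printf("nodes searched:    %.0f\n", nodes);
        printf("table entries:     %.0f\n", entries);
        printf("over-subscription: %.0fx\n", nodes / entries);  /* ~916x */
        return 0;
    }

This confirms the factor of roughly 1,000 used above.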
>Another point Bruce made is that if the table is bigger than physical
>memory, you will get disk thrashing, and that will kill all your
>benefit.
>
>- Don
>
>
>On January 17, 1998 at 00:20:35, Robert Hyatt wrote:
>
>>On January 16, 1998 at 22:36:10, Detlef Pordzik wrote:
>>
>>>This sounds remarkable to me, since I don't have such a good grasp
>>>of what's going on inside.
>>>As far as I know, and from my own experience, programs like G5, for
>>>instance, don't care too much about big hash tables, whilst R9, for
>>>example, fills up its maximum capacity of 60 megs quite fast -
>>>mostly. I once had a try with F5 - just to see if it really was true
>>>- and allowed it 85 megs on my 128 MB system... full within about 5
>>>minutes. I simply can't believe that this is efficient.
>>>Now to my question: using W95, is there a kind of standard or
>>>approximate formula for how much hash to allow a program running in
>>>unlimited analysis mode for, say, 8 hours? Stand-alone, of course.
>>>Or is it - as I would suggest - that in the end it depends on the
>>>program... and on one's own experience?
>>
>>First, let's dispel a myth: full = bad. Hashing is not going to quit
>>when the table fills. Everyone uses reasonable replacement policies.
>>Don Beal ran some tests and wrote a paper in an ICCA issue last year.
>>Until you get into the 10x area (you have searched 10x the number of
>>nodes that can fit in your hash table), the search won't degrade a
>>tremendous amount. If you have done your homework on the replacement
>>policy, we are talking about a slowdown in the 10% to 20% range,
>>*after* you search 10x the number of nodes you can store in the
>>hash...
>>
>>This means that you can search until the table fills, and keep right
>>on going, and not expect the roof to fall in. *Unless* Frans tried to
>>make it so fast that he didn't take the time to develop a reasonable
>>replacement strategy - something I find difficult to believe.
>>
>>So full != bad. 2x full is worse than not full, but you might get the
>>impression this is a horrible slowdown. It isn't...
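The sketch promised above: one common "reasonable replacement policy"
is the two-table scheme popularized by Belle - a depth-preferred table
whose slots are only displaced by entries of equal or greater draft,
paired with an always-replace table so that fresh shallow entries still
get stored when the table is badly over-subscribed. The entry layout
and names below are invented for illustration; only the 16-byte entry
size comes from the post:

    #include <stdint.h>

    /* Hypothetical 16-byte hash entry (the per-entry size mentioned
       above); the field layout is invented for illustration. */
    typedef struct {
        uint64_t key;     /* Zobrist signature of the position    */
        int16_t  score;   /* backed-up search score               */
        uint8_t  depth;   /* draft: depth of the subtree searched */
        uint8_t  flags;   /* bound type: exact / lower / upper    */
        uint32_t move;    /* best move found here                 */
    } HashEntry;

    /* Two tables of 256K entries each, 16 bytes per entry: 8 MB in
       total, matching the "8M max for hash" figure in the post. */
    #define TABLE_SIZE (1u << 18)
    static HashEntry depth_pref[TABLE_SIZE]; /* keep deepest drafts */
    static HashEntry always[TABLE_SIZE];     /* always overwrite    */

    /* Deep (expensive) entries displace shallower ones in the
       depth-preferred table; everything else lands in the
       always-replace table, so recent positions near the leaves are
       never locked out even at heavy over-subscription.  (A real
       engine would also age or clear entries between searches.) */
    void hash_store(uint64_t key, int16_t score, uint8_t depth,
                    uint8_t flags, uint32_t move)
    {
        HashEntry e = { key, score, depth, flags, move };
        uint32_t slot = (uint32_t)(key & (TABLE_SIZE - 1));

        if (depth >= depth_pref[slot].depth)
            depth_pref[slot] = e;  /* deeper search wins the slot  */
        else
            always[slot] = e;      /* shallow entries still stored */
    }

    /* Probe both tables, preferring the depth-preferred entry. */
    HashEntry *hash_probe(uint64_t key)
    {
        uint32_t slot = (uint32_t)(key & (TABLE_SIZE - 1));
        if (depth_pref[slot].key == key) return &depth_pref[slot];
        if (always[slot].key == key)     return &always[slot];
        return 0;  /* miss */
    }

This matches the behavior described in the thread: at 10x
over-subscription the always-replace table keeps the search fed with
recent entries, while the deep entries that took the longest to compute
stay protected, which is why the degradation stays in the 10-20% range
instead of falling off a cliff.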