Author: Don Dailey
Date: 10:07:14 01/17/98
Bob, I am glad you cleared this up. Everyone seems to think that once the hash table reaches saturation "everything backs up like a stopped-up sink" and life is over. But as you stated, this is not the case. A (very) rough guide is that once your hash table reaches saturation, you will get about a 6% speedup if you double its size. The proper way to view this is as an extra BENEFIT, not a bottleneck: if you double the search time, your program benefits TREMENDOUSLY; if you double the hash size, it benefits only SLIGHTLY. Another point Bruce made is that if the table is bigger than physical memory, you will get disk thrashing, and that will kill all the benefit.

- Don

On January 17, 1998 at 00:20:35, Robert Hyatt wrote:

>On January 16, 1998 at 22:36:10, Detlef Pordzik wrote:
>
>>This sounds remarkable to me, since I don't have such a good education
>>about "what's going on inside".
>>As far as I know, and from my own experience, progs like, for instance,
>>G5 don't care too much for big hash tables, whilst, for example, R9
>>fills up his maximum capacity of 60 megs quite fast - mostly.
>>I once had a try with F5 - just to see if it really was true - and
>>allowed him 85 megs on my 128 MB system... full within about 5 minutes.
>>I simply can't believe that this is efficient?
>>Now to my question: is there, using W95, a kind of standard or
>>approximate formula for how much hash to allow a program working on
>>unlimited analysis time, which means, for example, 8 hours?
>>Then, of course, stand-alone. Or is it so - as I would suggest - that
>>it depends on the prog in the end... and on one's own experience?
>
>First, let's dispel a myth: full = bad. Hashing is not going to quit
>when the table fills. Everyone uses reasonable replacement policies.
>Don Beal ran some tests and wrote a paper in an ICCA issue last year.
>Until you get into the 10x area (you have searched 10X the number of
>nodes that can fit in your hash table), the search won't degrade a
>tremendous amount. If you have done your homework on the replacement
>policy, we are talking about a slowdown in the 10% to 20% range, *after*
>you search 10X the number of nodes you can store in the hash...
>
>This means that you can search until the table fills, and keep right on
>going, and not expect the roof to fall in - *unless* Frans tried to make
>it so fast that he didn't take the time to develop a reasonable
>replacement strategy, something I find difficult to believe.
>
>So full != bad. 2X full is worse than not full, but you might get the
>impression this is a horrible slow-down. It isn't...
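The "reasonable replacement policy" Bob mentions can be sketched in a few lines of C. This is a minimal depth-preferred scheme, one common single-slot approach, not necessarily the exact policies Beal tested; the names `tt_store`/`tt_probe`, the entry layout, and the table size are illustrative assumptions. The key idea is that a "full" table never refuses a store: a shallow entry simply cannot evict a deeper one, so the table keeps its most valuable work and the search degrades gently rather than stopping.

```c
#include <stdint.h>

/* One transposition-table entry: position key, search depth, score.
   Fields and layout are illustrative; real engines pack more (bound
   type, best move, age) into each entry. */
typedef struct {
    uint64_t key;
    int depth;
    int score;
} Entry;

#define TABLE_SIZE 4096  /* illustrative; real tables hold millions of entries */

static Entry table[TABLE_SIZE];

/* Depth-preferred replacement: overwrite an occupied slot only when the
   slot holds the same position or the new entry comes from a search at
   least as deep.  Shallow results never evict deep ones, so saturation
   does not stop hashing, it only lowers the hit rate somewhat. */
void tt_store(uint64_t key, int depth, int score)
{
    Entry *e = &table[key % TABLE_SIZE];
    if (e->key == key || depth >= e->depth) {
        e->key = key;
        e->depth = depth;
        e->score = score;
    }
}

/* Probe: on a key match, fill *score and return 1; otherwise return 0. */
int tt_probe(uint64_t key, int *score)
{
    Entry *e = &table[key % TABLE_SIZE];
    if (e->key == key) {
        *score = e->score;
        return 1;
    }
    return 0;
}
```

With this policy, a colliding store from a shallower search is simply dropped, while a deeper one replaces the slot. Two-tier tables (a depth-preferred slot plus an always-replace slot) are a common refinement that keeps recent shallow entries around as well.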
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.