Author: Heiner Marxen
Date: 07:29:08 08/18/01
On August 17, 2001 at 12:00:00, Artem Pyatakov wrote:

>Heiner,
>
>Thank you VERY much for this very detailed and helpful post.

It is a good feeling to have helped! :-)

>Your technique of multiple hash probes is very interesting - I might experiment
>with this in my newbie engine and see what kind of results I get.

Ah yes, testing is very important. I have had big surprises with hashing
schemes.

>Now, you talk about the problem that I did consider right after posting my
>message - namely that the number of nodes that HAD to be searched could
>significantly differ from the number of nodes that will have to be searched
>later.
>
>One (somewhat) interesting thought that I had on the subject would be that
>instead of counting the number of nodes ACTUALLY searched with hash hits, we
>could compute the number of nodes that would have to be searched without the
>hash table! To do this, every time we get a hash hit, we would ADD the number
>of nodes that was recorded in the hash table to the total number of nodes that
>we searched.

Yes, I have thought about that, too. (A small sketch of this bookkeeping
follows below.)

>I think getting this number would serve as a better estimate, because DEPTH
>also approximates the number of nodes that would have to be searched WITHOUT
>the hash table.
>
>But again, there are definitely problems with this approach, since it assumes
>that we have no hash table to estimate node counts, which is certainly not
>true.

Exactly, that larger value is not really the value we want. But the value we
would really like to compute, "how much work will this entry save me in the
future", is just not accessible. It depends not only on the amount of
recomputation, but also on the number of hits we will have in the future. We
have no chance to really know this. So we are left with guesswork, anyway.

Also, it is not very important to compute the exact future value of a hash
entry. It will be sufficient to get an estimate that allows us to choose
among candidates (see the replacement sketch below). If this decision is a
good one often enough, we are doing fine.

One way to measure the quality of a replacement scheme is the following.
Run a test of medium size with a really large hash table, large enough that
barely any entry is replaced. This way we know the maximal benefit that can
be achieved. Note the time (the amount of work really done). Then run the
same test with a much smaller hash table, one which gets filled, say, 10
times. Compare the amount of work done now with the optimum noted above. If
you are within a few percent of the optimum, you may decide that your
replacement scheme is good enough, and that making it more complex is not
worth the hassle.

Also remember that smaller entries imply more entries per MB, which also has
a beneficial effect. You may measure that, too. E.g. I once had a complex
replacement scheme which nearly doubled the size of my hash entries, i.e. I
had only half as many as I have now. When I measured the better replacement
with fewer entries against the less intelligent replacement scheme with
nearly twice the number of entries, the simple scheme with more entries won
quite clearly. Much to my own surprise! Finally I deleted all that complex
code... funny.

Of course, such decisions have to be based on real data, i.e. experiments.
The theory may be misleading, sometimes.

>Thanks again!
>
>Artem

Have a nice day!
Heiner
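A minimal sketch (in C) of the bookkeeping Artem describes above: on a hash
hit, credit the subtree size stored in the entry rather than a single node,
so the counter approximates the work a search WITHOUT a hash table would
have done. The entry layout and all names (HashEntry, effective_nodes, and
so on) are illustrative assumptions, not taken from any particular engine.

    /* Sketch of the "effective node count" idea discussed above.
     * All names are illustrative assumptions. */

    #include <stdint.h>

    typedef struct {
        uint64_t key;     /* full hash key for verification          */
        uint32_t nodes;   /* subtree size recorded when stored       */
        int16_t  score;
        uint8_t  depth;
        uint8_t  flags;   /* bound type: exact / lower / upper       */
    } HashEntry;

    static uint64_t effective_nodes;  /* estimate of no-hash-table work */

    /* Called when probing succeeds and the entry is usable:
     * add the whole recomputation the hit just saved us. */
    static void on_hash_hit(const HashEntry *e)
    {
        effective_nodes += e->nodes;
    }

    /* Called when a finished subtree is stored into the table;
     * clamp so a huge subtree cannot overflow the 32-bit field. */
    static void on_store(HashEntry *e, uint64_t subtree_nodes)
    {
        e->nodes = (subtree_nodes > UINT32_MAX)
                 ? UINT32_MAX
                 : (uint32_t)subtree_nodes;
    }

The counter of nodes actually searched stays separate; effective_nodes is
only the larger "as if no hash table" estimate discussed above.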
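And a sketch of choosing among replacement candidates, reusing the HashEntry
type from the sketch above. Probing a few neighbouring slots and victimizing
the entry with the smallest recorded subtree size is just one plausible way
to estimate "cheapest to recompute"; NPROBES, the probing pattern, and the
table layout are assumptions for illustration, not Heiner's actual scheme.

    #include <stddef.h>   /* for NULL */

    #define NPROBES 4     /* number of alternative slots probed */

    extern HashEntry table[];     /* table size is a power of two   */
    extern uint64_t  table_mask;  /* number of entries minus one    */

    /* Find the slot for `key`: return a matching entry if present,
     * otherwise the probed entry with the smallest recorded subtree
     * size, i.e. the one that is (by this rough estimate) cheapest
     * to recompute should the position turn up again. */
    static HashEntry *choose_slot(uint64_t key)
    {
        HashEntry *victim = NULL;

        for (int i = 0; i < NPROBES; i++) {
            HashEntry *e = &table[(key + i) & table_mask];
            if (e->key == key)
                return e;                  /* position already stored */
            if (victim == NULL || e->nodes < victim->nodes)
                victim = e;                /* smallest subtree so far */
        }
        return victim;                     /* replace this entry      */
    }

Whether this beats a simpler depth-preferred or always-replace rule is
exactly the kind of question the measurement procedure above can settle.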