Author: Robert Hyatt
Date: 07:45:41 05/30/00
On May 30, 2000 at 01:34:27, Tom Kerrigan wrote:

>On May 30, 2000 at 00:25:29, Robert Hyatt wrote:
>
>>On May 28, 2000 at 19:44:05, Tom Kerrigan wrote:
>>
>>>On May 28, 2000 at 10:02:05, Georg v. Zimmermann wrote:
>>>
>>>>Hi,
>>>>
>>>>today I realized that the program I'm toying around with does in its search()
>>>>
>>>>1.) generate all moves
>>>>2.) order the moves and put the hash move, if available, in 1st position
>>>>3.) do the recursive search
>>>>
>>>>I thought that I could increase its speed by testing the hash move first and
>>>>only doing the move generation and the rest if it is not >= beta.
>>>>
>>>>My tests show that it sticks with the hash move about 50% of the time.
>>>>Should this number be higher?
>>>>
>>>>I was very disappointed when I didn't notice any speedup after my changes.
>>>>What speedup should I expect? Something like 0.5-1%, or more like 1 ply?
>>>
>>>You can forget about 1 ply. Your program needs to go ~4x faster for it to
>>>search 1 ply deeper, which would mean your move generator is taking ~400% of
>>>the execution time. That is clearly impossible.
>>
>>His effective branching factor might be different from yours. Mine is at about
>>3.0... I have seen some higher and some a bit lower.
>
>If you make a program 20% faster, it will not search 1 ply deeper, regardless
>of your branching factor. (Unless your branching factor is < 1, which doesn't
>seem likely.)
>
>>>Here's what you have to consider when calculating your speedup:
>>>
>>>1) How many moves you get from the hash table. I just ran some random
>>>position and got a 1.75% "hit rate."
>>
>>That sounds horribly low. I'd be concerned about something serious if it drops
>
>I don't hash during quiescence and my hash table was pretty small when I ran
>that test. That may explain the low rate. But my point was that the expected
>speedup of the optimization should be small. Even if you have a hit rate of
>50%, the speedup is still only 0.5 * 0.5 * 0.2 = 5%.
>
>-Tom

I don't hash in quiescence either. I haven't in many years. However, I am not
sure whether your 5% number is right or wrong, but 5% is something I would
definitely go for if I knew it was lying around. You get enough of those, and
you can get into big savings.

I've found very few large savings (25%) in Crafty over the years. But I have
found many dozens of 1-3% improvements, and an occasional 5-10% boost as well.
I think the key to improving a program, once it plays legally, is to develop a
methodology to carefully profile the code, find the hot spots, and then find
ways to speed up those hot spots, all the while paying _careful_ attention to
the overall node counts on a wide range of test positions.

A 1% speedup is of no use at all if you introduce an error that happens once
every billion nodes. I can search that many nodes in 15 minutes, and I can't
stand errors occurring that frequently. I have what would probably be called a
"zero tolerance for errors" policy in Crafty: if I make a change that should
only make it faster or slower, then the node counts must remain constant. If
they don't, I debug until I find out why and fix it.
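For reference, here is a minimal sketch of the change Georg describes: probe
the hash table and search the stored move before paying for move generation.
It assumes a bare alpha-beta search, and every name in it (POSITION, MOVE,
probe_hash, generate_moves, make_move, unmake_move, evaluate) is a
hypothetical placeholder, not anyone's actual engine code:

typedef int MOVE;                     /* assumed move encoding; 0 = none */
#define NO_MOVE 0

typedef struct position POSITION;    /* engine-specific, left opaque */

extern int  evaluate(POSITION *pos);
extern int  probe_hash(POSITION *pos, MOVE *move);
extern int  generate_moves(POSITION *pos, MOVE *moves);
extern void make_move(POSITION *pos, MOVE move);
extern void unmake_move(POSITION *pos, MOVE move);

int search(POSITION *pos, int alpha, int beta, int depth) {
    MOVE hash_move = NO_MOVE;
    int score;

    if (depth == 0)
        return evaluate(pos);         /* stand-in for quiescence */

    /* Try the hash move before paying for move generation. */
    if (probe_hash(pos, &hash_move)) {
        make_move(pos, hash_move);
        score = -search(pos, -beta, -alpha, depth - 1);
        unmake_move(pos, hash_move);
        if (score >= beta)
            return score;             /* cutoff: generator never ran */
        if (score > alpha)
            alpha = score;
    }

    /* No cutoff: generate and search the remaining moves. */
    MOVE moves[256];
    int n = generate_moves(pos, moves);
    for (int i = 0; i < n; i++) {
        if (moves[i] == hash_move)    /* already searched above */
            continue;
        make_move(pos, moves[i]);
        score = -search(pos, -beta, -alpha, depth - 1);
        unmake_move(pos, moves[i]);
        if (score >= beta)
            return score;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}

Note that the generator is skipped only when the probe hits and the hash move
then fails high, which is presumably where Tom's 0.5 * 0.5 * 0.2 comes from:
hit rate, times the fraction of hits that fail high, times move generation's
share of execution time. Likewise, his ~4x figure for one extra ply follows
from an effective branching factor near 4, since each additional ply
multiplies the tree size by roughly that factor.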
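The "node counts must remain constant" discipline is easy to mechanize. A
hypothetical harness, with set_position, search_to_depth, and the suite[]
baselines all assumed for illustration (not Crafty's actual test setup),
might look like:

#include <stdio.h>

extern void set_position(const char *fen);            /* hypothetical */
extern unsigned long long search_to_depth(int depth); /* returns nodes searched */

struct testcase {
    const char *fen;
    unsigned long long baseline_nodes;   /* recorded before the change */
};

static struct testcase suite[] = {
    { "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1", 0ULL },
    /* ... a wide range of positions, with baselines captured beforehand */
};

int main(void) {
    int failures = 0;
    for (size_t i = 0; i < sizeof(suite) / sizeof(suite[0]); i++) {
        set_position(suite[i].fen);
        unsigned long long nodes = search_to_depth(10);
        if (nodes != suite[i].baseline_nodes) {
            printf("position %zu: %llu nodes vs baseline %llu\n",
                   i, nodes, suite[i].baseline_nodes);
            failures++;
        }
    }
    return failures ? 1 : 0;   /* nonzero exit: node counts drifted */
}

Any mismatch on a speed-only change means the change altered search behavior
somewhere, which calls for exactly the stop-and-debug response described above.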