Author: Robert Hyatt
Date: 18:25:14 05/25/04
On May 25, 2004 at 17:44:55, Andrew Wagner wrote:

>I do a lot of reading through the CCC archives. I use the search engine from
>here, and I'm also in the process of reading through the old archives
>systematically using the offline reader (I'm in the fall of 2001 currently, I
>think). Anyway, sometimes I run across a nugget that makes me just stop and go
>"whoah". Here's a quote from one of Bob's posts, originally about hashing
>algorithms:
>
>>I think the key to improving a program, once it plays legally, is to develop
>>a methodology to carefully profile the code, find the hot spots, and then find
>>ways to speed up those hot spots. But all the while paying _careful_ attention
>>to the overall node counts on a wide range of test positions. A 1% speedup is
>>of no use at all if you introduce an error that happens once every billion
>>nodes. I can search that many nodes in 15 minutes. I can't stand errors that
>>frequently. I have what would probably be called a "zero tolerance for errors"
>>in Crafty. If I make a change that should only make it faster or slower, then
>>the node counts must remain constant. If they don't, I debug until I find out
>>why and fix it.
>
>This is a fantastic point. Maybe somewhat obvious to our more experienced
>members, but certainly words of wisdom for us newbies. So, my question is: what
>methods are you all using to profile your code? How do you go about identifying
>and fixing your hot spots? Do you have a particular test suite you use, or
>what?
>
>Andrew

I usually use gcc and -pg, and then gprof to output the results after running
several different types of positions through it...
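For anyone wanting to try this, the workflow described above looks roughly like
the following. The engine name and input file are placeholders; the flags and
the gmon.out file name are standard gcc/gprof behavior:

    # build with profiling instrumentation; gprof needs -pg at both
    # compile and link time (-g alone only adds debug symbols)
    gcc -O2 -pg -o engine *.c

    # run the engine on several different types of positions; the run
    # leaves its profile data in gmon.out in the current directory
    ./engine < positions.txt

    # print the flat profile (hot functions first) and the call graph
    gprof ./engine gmon.out > profile.txt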
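The "node counts must remain constant" discipline from the quote is also easy
to automate. Here is a minimal sketch in C; search_position, the baseline
table, and the counts in it are all hypothetical placeholders, not Crafty's
actual code:

    #include <stdio.h>

    /* Stub standing in for the engine's real search entry point: search
       a position to a fixed depth and return the total node count.
       Hypothetical -- wire this up to your own engine. */
    static unsigned long long search_position(const char *fen, int depth) {
        (void)fen;
        (void)depth;
        return 123456789ULL;   /* fake constant so the sketch runs as-is */
    }

    struct baseline {
        const char *fen;            /* test position */
        int depth;                  /* fixed search depth */
        unsigned long long nodes;   /* count recorded before the change */
    };

    /* Node counts recorded on a wide range of position types before the
       change went in.  The numbers here are placeholders. */
    static const struct baseline tests[] = {
        { "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1",
          10, 123456789ULL },
        /* ...more positions: tactical, endgame, closed, etc... */
    };

    int main(void) {
        int failed = 0;
        for (size_t i = 0; i < sizeof tests / sizeof tests[0]; i++) {
            unsigned long long n = search_position(tests[i].fen, tests[i].depth);
            if (n != tests[i].nodes) {
                printf("MISMATCH on position %zu: got %llu, expected %llu\n",
                       i, n, tests[i].nodes);
                failed = 1;   /* a speed-only change moved the count */
            }
        }
        return failed;
    }

If a change that was only supposed to affect speed moves any of these counts,
something other than speed changed, and per the quote the right move is to
stop and debug before trusting the speedup.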