Author: Gerd Isenberg
Date: 12:36:36 06/24/04
<snip>
>>Never, never, never, never optimize a program before it is working correctly.
>>And when I say never, I mean not ever.
>>
>>The only exception to this rule is in the choice of algorithms. There is no
>>sense picking a bad algorithm to start with. And even if you did happen to pick
>>the wrong algorithm, then it is not hard to change it.

Yes, but you may consider the efficiency of data structures, the atomic functions on them, and the algorithm and class design before starting to write your program.

For a search algorithm one should think about which data is needed recursively, indexed by ply or kept as a local or parameter on the stack, and which data is needed only once per node for some time, e.g. for eval and movegen (preparation). These should be placed in different structs/classes: Node/Position, SearchTree, and probably a TemporaryFullNode object per search object, which is shared and referenced by eval/movegen and reinitialized (conditionally) once per node.

I tried a linked node list on the stack (some 100 bytes per node as one local object inside search), which was probably not a good idea. It is better to preallocate all objects (if they are not static or global). An array of nodes for the search may be allocated at construction time of a SearchTree object, as may other objects. Keeping the stack frame small (<= 64 or 128 bytes) is IMHO also important.

Handling several threads is also an issue, as is how to handle and allocate shared objects. Do several worker threads share a SearchTree object? Is there a need to allocate thread-local memory per search object and thread? Or does each search thread allocate its own search object?

I like alternating bottom-up and top-down development, and that includes profiling and playing around with some test programs. I agree that hiding the implementation from the top-down view is very important.

>Your advice is bad advice. I hope that nobody listens to it. Premature
>optimization does absurdly more harm than good.
>For every ounce of benefit, there are a trillion gallons of downside. When you
>start programming ANYTHING, including a chess program, write clear, simple code
>that best expresses the algorithm in the most straightforward manner.
>
>Now, let's go farther. Suppose that you have chosen some fundamentally bad data
>structures. If your program is written in an abstract enough manner, it won't
>matter. And the more abstract you make it, the less it will matter.

Yes, it is fine to have the search and even some eval stuff independent of 0x88, bitboards or whatever. Small inlines are very helpful to direct the focus to the essentials. For search issues I really see no need for virtual functions and abstract interfaces, e.g. an eval for several game states.

I have problems with hiding some fill-stuff, e.g. finding the set of squares the rooks of one side may reach with two (or more) moves without getting successfully captured, because such actions imply appropriate data structures and implementations.

>My point:
>1. Write clear code.
>2. Choose good algorithms.
>3. Write abstract code that hides the implementation details when possible.
>4. When everything works well, profile it.
>5. Speed up the stuff that will benefit from it.
>
>>I also have to disagree with that number, 11%. I just compiled it and ran it
>>through a profiler. Here are the top 20 consumers. Evaluation totals almost 50%
>>of the execution time. However, your point is well taken. Spending a significant
>>amount of time improving MakeMove() and UnmakeMove() wouldn't gain much.

There are expensive leaf nodes with full eval, looking for good captures if eval doesn't cut. There are cheap(er) leaf nodes like lazy-eval cuts, hash hits or detected repetitions. What is the ratio of the two for each program, and how often do they occur? Smart futility pruning versus lazy-eval cuts - low nps versus high nps?
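The cheap-versus-expensive leaf distinction can be sketched roughly like this. This is a minimal illustration, not code from any particular program: the Node layout, the member names, and the margin value are all hypothetical assumptions, and real engines differ in where the lazy cut sits and how the margin is tuned.

```cpp
// Minimal sketch of a lazy-eval cut at a leaf node (hypothetical names/values).
struct Node {
    int material;                       // cheap, incrementally updated term
    int positional() { return 50; }     // stand-in for the expensive eval part
};

const int LAZY_MARGIN = 200;            // centipawns; tune per program

// Return a score for the node, skipping the expensive part of eval when
// the cheap approximation plus a safety margin cannot change the outcome.
int lazyEval(Node& n, int alpha, int beta) {
    int fast = n.material;              // cheap approximation first
    if (fast + LAZY_MARGIN <= alpha)    // cannot climb above alpha anyway:
        return fast;                    // cheap leaf - lazy eval "cuts"
    if (fast - LAZY_MARGIN >= beta)     // already failing high:
        return fast;                    // another cheap exit
    return fast + n.positional();       // expensive full evaluation
}
```

The ratio question above then amounts to counting how often the two early returns fire versus the final full evaluation, which a pair of counters in each branch would answer for a given program and margin.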