Author: Tom Likens
Date: 07:13:56 05/26/04
On May 25, 2004 at 22:31:08, Andrew Wagner wrote:

>On May 25, 2004 at 22:12:04, Tom Likens wrote:
>
>>On May 25, 2004 at 20:45:08, Tord Romstad wrote:
>>
>>>On May 25, 2004 at 20:09:57, Andrew Wagner wrote:
>>>
>>>>On May 25, 2004 at 19:36:04, Will Singleton wrote:
>>>>>
>>>>>I'm surprised Bob would say that profiling is important so soon in the
>>>>>development process; perhaps there's some missing context. Profiling is, imho,
>>>>>about the last thing you'd want to do.
>>>
>>>I completely agree. I have occasionally profiled my program, mostly out
>>>of curiosity, but I am still nowhere near the stage where it is time to
>>>work on low-level optimization. There are still tons of bugs to fix,
>>>and countless possible ways to improve the search. Writing a fast program
>>>is not mainly about increasing the N/s count by a few percent; it is about
>>>searching the smallest possible tree without introducing too many errors.
>>>
>>>>Here's the link, so you can read it in context, if you'd like:
>>>>http://chessprogramming.org/cccsearch/ccc.php?art_id=112972
>>>>
>>>>>1. Fix bugs in movegen, using perft tests.
>>>>
>>>>He does say "once it plays legally". To me, that implies a bug-free movegen...
>>>>
>>>>>2. Write a very simple, bug-free eval.
>>>>>3. Concentrate on move-ordering, which is crucial to making the tree small.
>>>>>Develop methods for measuring the quality of your ordering; don't only look at
>>>>>node counts.
>>>>
>>>>I would count these as what he calls "hot spots". Especially move ordering
>>>>(though a good eval helps move ordering).
>>>
>>>Perhaps, but they are not the kind of hot spots you would find by using a
>>>profiler. Because Bob mentioned the technique of comparing the number of
>>>nodes searched before and after some optimization is implemented and making
>>>sure that the two numbers are the same, I doubt that he was talking about
>>>things like move ordering.
>>>
>>>>>Don't spend a lot of time on arcane or new ideas until you're certain what you
>>>>>have is bug-free. Especially make sure your transposition code is simple and
>>>>>effective; tons of problems result from bad hashing.
>>>>
>>>>I would also be interested in a process for this. What process do you use to
>>>>really be absolutely sure your program is bug-free? Especially your hash table
>>>>code. E.g. at the moment, I think Trueno is bug-free, more or less. But I
>>>>haven't found a good thorough test to give it that will tell me.
>>>
>>>Unfortunately I don't know how to make sure my program is bug-free. I have
>>>plenty of bugs, even a couple of simple, obvious, eye-catching and very
>>>serious-looking bugs which for some mysterious reason appear to make the
>>>program much stronger (every time I try to make the one-line change needed
>>>to fix the bugs, the playing strength drops by about 100 Elo points). But
>>>here are a couple of techniques I have found useful:
>>>
>>>1. Whenever you try out some new pruning technique (for instance when you
>>>   introduce hash table pruning), don't add the actual pruning to the engine
>>>   immediately. Instead, set a flag which says "I would have pruned this
>>>   position if my new trick was enabled" (for instance, if the hash table
>>>   entry for the position contains a lower bound which is higher than beta,
>>>   and the depth of the entry is sufficiently big). If it later turns out
>>>   that the pruning decision was wrong (in the above example, if the search
>>>   doesn't fail high, despite the lower bound >= beta in the hash table),
>>>   you probably have a bug. Write the position to a log file. By inspecting
>>>   the log file, you should be able to spot the most serious bugs. This
>>>   technique is not only useful when adding non-speculative pruning, but
>>>   also when implementing various sorts of speculative forward pruning.
>>>   It can be used not only to find bugs, but also to detect holes and
>>>   missing special cases in forward pruning tricks.
>>>
>>>2. Write a function to compute a mirror image of the position on the board
>>>   (i.e. black and white sides interchanged). Find a big file of EPD
>>>   positions, and let your engine parse the file and evaluate each position
>>>   and its mirror image. This helps you find asymmetry bugs in the evaluation
>>>   function. This test should be done almost every time you make a change in
>>>   the eval. It is surprising how easy it is to introduce new asymmetry bugs.
>>
>>This is a great way to catch asymmetry bugs. Don't forget that you can
>>mirror the board top-to-bottom *and* left-to-right. If you carry this
>>out you will get three additional boards to compare against (four if you
>>count the original position):
>>
>>1. original position
>>2. T-2-B
>>3. L-2-R
>>4. T-2-B followed by L-2-R
>>
>>regards,
>>--tom
>>
>
>Yes, I was skeptical a few months ago when Andrew Williams (PostModernist)
>suggested that I try this. I was amazed at the number of bugs I found when I
>did it, though! If only there were more such tests to flush out easy bugs!

Chances are, if Mr. Williams suggests it, it more than likely has merit.

Some of the other bug-catching devices that I've incorporated into Djinn are:

1. Assertions - Fabien's Fruit is a good example of how to apply and use
   assertions. More than likely you will want to roll your own (almost
   everyone rolls their own before they are done). Fruit (and Djinn) guards
   the entrance to almost *all* of its internal functions with assertions
   similar to:

      ASSERT(board_is_ok());
      ASSERT(color_is_ok());
      ASSERT(square_is_ok());

   etc. One caveat: do *not* place legitimate error trapping in your
   assertions. For example,

      char *ptr = malloc(25);
      ASSERT(ptr != (char *)NULL);

   is an invalid use of an assert.

2. This one has been oft mentioned, but it is still very powerful.
   Validate your incremental hash key against a hash key generated from
   scratch (you should do this for *all* your hash keys - normal, pawn,
   eval, etc.).

3. Create a second simple and slow version of any algorithm which has
   sacrificed simplicity for speed (or any really complicated algorithm in
   general). Use its results to validate the output of the original
   algorithm. Move generation and SEE functions are good candidates for
   this debug aid.

4. Invest in a good bounds-checking tool, either BoundsChecker or the
   special versions of gcc (if you're using C). These types of programs will
   invariably find either a) memory leaks, b) file descriptors not closed,
   or c) memory overruns and corruption. You may also want to look at
   Valgrind if you are using x86 Linux. It's free and very good.

5. Print out the game tree to a log file. You can either roll your own *or*
   you may want to look into Andrei Fortuna's *excellent* CHANT program
   (it's advertised on the CCC search page). It took me an easy afternoon to
   add CHANT support to Djinn. Anyway, printing out the game tree to a log
   file will often show search inefficiencies and bugs. Just be careful you
   don't blow up your hard drive.

Anyway, I'm probably doing other things as well, but frankly I've forgotten
them. That's the beauty of most of these methods: you add them and then
forget about them until they flag an error (which is great - if you have the
chance, *always* let your computer catch the bugs).

regards,
--tom
Last modified: Thu, 15 Apr 21 08:11:13 -0700
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.