Author: Bruce Moreland
Date: 11:03:00 10/09/99
On October 09, 1999 at 13:17:45, Scott Gasch wrote:

>Hey,
>
>My branching factor is too high, or something. I have eliminated my explosive
>qsearch but my main search tree is much larger than other programs'. The only
>answer I have heard for this is "your move order is messed up."
>
>My move order is as follows: 1) hash move, 2) winning captures ordered by SEE,
>3) even captures (determined by SEE), killer1, killer2, rest ordered on history
>heuristic. Am I missing something?

Yes, you didn't mention losing captures.

Here is what you can do to help yourself. Whenever you want to make a performance change, save a numbered EXE and all of the source, and keep detailed notes about what each version does that's new. Do a bunch of these per day, and don't bother to test them other than to see if they crash.

At night, run them all on LCTI or some other similar test. LCTI is cool because it's 28 positions at 3.5 minutes each, so each run takes 98 minutes. Also, there are some tactical, some "positional", and some endgame positions in there. You can do five or six of these runs per night if you sleep long enough, more if you can avoid using your computer in the morning and go off to some job or school or something. When you are ready to start working the next evening (or whenever), you have all of these nice dumps waiting for you. The crucial information is the time to complete each ply.

Spend a few hours making a program that eats this information. It should allow you to compare two versions as follows. For each of the positions, you figure out the deepest ply attained by both. If one did 6 plies and the other did 7, the deepest ply here is 6. You then figure out the ratio between the two times to reach that ply and print it out. You can also keep a running total and dump that out at the end. Here is example output from my program:

position 001  14/12!  00:00:10.250  00:00:42.875  418.29%
position 002  12/11!  00:00:51.453  00:01:39.672  193.71%
position 003  12/11!  00:00:24.468  00:02:57.282  724.55%
position 004  13/12!  00:00:30.156  00:02:27.094  487.78%
position 005  12/10!  00:00:14.234  00:00:34.265  240.73%
position 006  16/15!  00:00:37.282  00:00:52.938  141.99%
position 007  11/10!  00:00:15.656  00:02:36.094  997.02%
position 008  12/12   00:02:48.109  00:02:42.172   96.47%
position 009  12/12   00:02:43.406  00:02:43.547  100.09%
position 010  12/12   00:01:43.781  00:02:52.672  166.38%
position 011  15/15   00:02:37.750  00:01:53.500   71.95%
position 012  12/12   00:01:43.594  00:01:15.328   72.71%
position 013  13/13   00:02:50.047  00:03:13.843  113.99%
position 014  12/11!  00:01:00.703  00:02:53.500  285.82%
position 015  14/13!  00:00:46.563  00:01:31.781  197.11%
position 016  13/12!  00:01:06.813  00:01:26.266  129.12%
position 017  12/11!  00:01:08.469  00:01:15.875  110.82%
position 018  11/10!  00:01:05.859  00:00:53.094   80.62%
position 019  12/12   00:01:30.672  00:01:39.313  109.53%
position 020  19/20!  00:02:05.328  00:01:37.875   78.10%
position 021  17/18!  00:02:12.625  00:01:07.266   50.72%
position 022  20/22!  00:01:32.140  00:01:04.500   70.00%
position 023  15/15   00:02:33.328  00:03:16.859  128.39%
position 024  14/14   00:01:36.359  00:01:56.641  121.05%
position 025  15/15   00:02:08.000  00:01:36.516   75.40%
position 026  16/17!  00:01:27.656  00:01:49.421  124.83%
position 027  16/16   00:01:18.329  00:01:33.735  119.67%
position 028  16/16   00:03:17.016  00:03:21.313  102.18%
                           2540046       3215237  126.58%

What you can see is that the first version tends to search more deeply than the second version, except in the endgame positions, and that it gets to an equivalent ply in less time except in the endgame positions, where it might be about even or a little faster; it's hard to tell from looking at that output. The percentage number at the end of each line is the time taken by the second version, divided by the time taken by the first version, expressed as a percentage.
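The core of such a log-eating comparison program can be sketched in a few lines. Everything below, from the input format to the function name, is an illustrative reconstruction, not the actual program:

```python
# Illustrative sketch of the comparison described above: for each position,
# find the deepest ply BOTH versions completed, compare the times each
# needed to finish that ply, and keep running totals.
# Assumed input format: one dict per position, mapping completed ply to
# the cumulative seconds it took to finish that ply.

def compare_runs(run_a, run_b):
    total_a = total_b = 0.0
    for i, (times_a, times_b) in enumerate(zip(run_a, run_b), start=1):
        common = min(max(times_a), max(times_b))   # deepest ply both attained
        ta, tb = times_a[common], times_b[common]
        total_a += ta
        total_b += tb
        marker = "!" if max(times_a) != max(times_b) else " "
        print(f"position {i:03d} {max(times_a)}/{max(times_b)}{marker} "
              f"{ta:12.3f} {tb:12.3f} {100.0 * tb / ta:7.2f}%")
    print(f"{total_a:16.3f} {total_b:12.3f} {100.0 * total_b / total_a:7.2f}%")
    return 100.0 * total_b / total_a
```

A value over 100% on the last line means the second version needed more time overall to reach the same depths as the first.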
The percentage on the last line is the overall speedup or slowdown (in this case a slowdown), since it took 126% of the time to get to the same depth overall. This is an extremely interesting number, in my opinion.

In this case the first version is my normal version; for the second version I deleted the entire quiescent search and replaced it with "return Eval()". I conclude that this doesn't get as deep as actually using a quiescent search, except in endings. This is kind of a strange comparison, because it actually matters how quickly the problems were solved, since this is such a drastic search change. In many performance tests the solution times will typically be a little faster if you get deeper faster, so it's not as important to look at them, but in this particular case I can use another program that lets me compare solution times, to see if it's actually finding the answer faster or slower. (I won't include output, but it's finding the answer slower in middlegames and a little faster in endings.)

I do literally hundreds of experiments like this. If you want to improve your move ordering, make a ton of versions that do move ordering differently, and see which is fastest.

This program also tells me if the number of nodes taken to get to any particular depth differs between two versions. In this case the numbers were very different, so there was a lot of output, which I deleted. Often this output is very important: if I make a pure performance change, one that should not affect the tree size, I really want to know whether the tree size changed, because that always indicates a bug or an unintended feature.

>I update killers and history at a fail high (beta cutoff) and a PV node. I do
>not do anything with a root move list and I do not do internal iterative
>deepening (don't understand it yet). I am doing a PVS and in my tree dumps I
>see I am having to re-search sometimes, esp in low depth searches.
Internal iterative deepening is a minor performance optimization; it is not a magic bullet. You are updating killers properly, although you'll notice that PV nodes almost never happen, so that case isn't terribly important.

Enjoy what you have now. If it is weak, that means that if you play it on the server it will amuse the vast majority of players who are in the middle of the rating list. Have fun collecting thank-yous from those people, and meanwhile keep fiddling a little every day, and you'll get stronger and stronger, especially if you remember to upgrade your hardware occasionally.

>As an example of my branching problem, position fine 70
>(8/k7/3p4/p2P1p2/P2P1P2/8/8/K7 w) at 20 ply I am over 1 million nodes whereas
>every good program is under 100k. Likewise after 1. a2a3 at seven ply I am at
>70k while others are more like 30k.

You have a bug or you are using a bad hash replacement scheme, most likely, but not necessarily. This position is somewhat sensitive to randomness. I used to think that a correct scheme would solve it quickly and an incorrect one wouldn't, but that's not necessarily true. It would be good to figure this problem out, but it may not be related to other move ordering issues.

bruce
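The ordering fix suggested at the top of the reply (losing captures belong in the list too, behind everything else) can be sketched as a scoring function. All of the names, constants, and the see() interface below are illustrative assumptions, not code from either program:

```python
# Sketch: score moves so that losing captures (negative SEE) sort behind
# even the quiet moves ordered by history. Constants and move fields are
# illustrative assumptions.

HASH_MOVE_SCORE     = 1_000_000
GOOD_CAPTURE_BASE   =   100_000   # winning and even captures, ordered by SEE
KILLER1_SCORE       =    90_000
KILLER2_SCORE       =    80_000   # history scores are assumed to stay below this
LOSING_CAPTURE_BASE =  -100_000   # below every history score

def order_moves(moves, hash_move, killers, history, see):
    def score(move):
        if move == hash_move:
            return HASH_MOVE_SCORE
        if move.is_capture:
            gain = see(move)                      # static exchange evaluation
            if gain >= 0:
                return GOOD_CAPTURE_BASE + gain   # winning/even captures
            return LOSING_CAPTURE_BASE + gain     # losing captures go last
        if move == killers[0]:
            return KILLER1_SCORE
        if move == killers[1]:
            return KILLER2_SCORE
        return history.get(move, 0)               # quiet moves by history
    return sorted(moves, key=score, reverse=True)
```

The point of the separate LOSING_CAPTURE_BASE is that a capture with negative SEE is tried after all quiet moves, instead of being mixed in with the other captures.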