Computer Chess Club Archives



Subject: Re: methods for testing changes

Author: Dan Andersson

Date: 10:36:10 05/30/01



The test methodology should depend on what you are testing. If, for example, you make a change in the code that should have no impact on the search, you can check this by either computing a checksum over the search (per ply, or at whatever granularity you like) or dumping the search to a file and comparing it with a run of the unchanged version. You could even combine both and only save the dump when the checksums differ.
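
For concreteness, here is a minimal Python sketch of that checksum-plus-dump idea. The on_node hook and the per-node fields (hash key, depth, score) are assumptions about what your search exposes, not anything standard:

import hashlib

class SearchChecksum:
    # Fold per-node search data into one digest so two runs can be
    # compared cheaply, while keeping the trace for an optional dump.
    def __init__(self):
        self._h = hashlib.sha1()
        self.trace = []

    def on_node(self, key, depth, score):
        # Hypothetical hook: call this once per node the search visits.
        record = f"{key:016x} {depth} {score}"
        self._h.update(record.encode())
        self.trace.append(record)

    def digest(self):
        return self._h.hexdigest()

def compare_runs(old, new, dump_path="search_diff.txt"):
    # Only write the traces to disk when the checksums differ.
    if old.digest() == new.digest():
        print("search unchanged:", old.digest())
        return True
    with open(dump_path, "w") as f:
        if len(old.trace) != len(new.trace):
            f.write(f"node counts differ: {len(old.trace)} vs {len(new.trace)}\n")
        for a, b in zip(old.trace, new.trace):
            if a != b:
                f.write(f"old: {a}\nnew: {b}\n")
    print("search differs, trace written to", dump_path)
    return False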
If you want to check search efficiency, you should have a large set of test positions (preferably non-tactical ones, and from all phases of the game) and produce an automated report of the changed search metrics, as sketched below.
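
Here is one way such a report could look, assuming each engine version writes one plain "id nodes millis" line per test position (that log format is my own invention, not a standard):

from math import exp, log

def load_metrics(path):
    # Parse lines of the form 'id nodes millis' into a dict.
    out = {}
    with open(path) as f:
        for line in f:
            pos_id, nodes, millis = line.split()
            out[pos_id] = (int(nodes), int(millis))
    return out

def report(old_path, new_path):
    old, new = load_metrics(old_path), load_metrics(new_path)
    ratios = []
    for pos_id in sorted(old.keys() & new.keys()):
        n_old = old[pos_id][0]
        n_new = new[pos_id][0]
        ratios.append(log(n_new / n_old))
        if abs(n_new - n_old) > 0.10 * n_old:
            # Flag positions whose node count moved by more than 10%.
            print(f"{pos_id}: {n_old} -> {n_new} nodes")
    # The geometric mean weights a halving and a doubling symmetrically,
    # which is what you want for per-position node ratios.
    print(f"geometric mean node ratio: {exp(sum(ratios) / len(ratios)):.3f}")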
If you want to check tactical prowess, you should run a tactical suite (preferably changing the suite at intervals to avoid over-training on it).
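
One convenient way to automate a suite run is the EPD format with its "bm" (best move) opcode. This sketch uses the python-chess package and any UCI engine binary; the paths and the one-second limit are placeholders:

import chess
import chess.engine

def run_suite(epd_path, engine_path, movetime=1.0):
    # Score an engine against an EPD suite: a position counts as solved
    # when the engine's choice is among the 'bm' moves.
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    solved = total = 0
    with open(epd_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            board, ops = chess.Board.from_epd(line)
            if "bm" not in ops:
                continue
            total += 1
            result = engine.play(board, chess.engine.Limit(time=movetime))
            if result.move in ops["bm"]:
                solved += 1
            else:
                print("missed:", ops.get("id", board.fen()))
    engine.quit()
    print(f"solved {solved}/{total}")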
If you want to improve your program's weak play in certain kinds of positions, you should save the offending position or positions and routinely re-run them to see whether your program avoids the mistakes it made before. This kind of self-testing is very beneficial and prevents regressions, as long as you are certain that the plan or knowledge behind the old move really is non-productive (if you're not a good enough chess player yourself, find any player with, say, Elo > 2000 and ask him).
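
A sketch of such a regression re-run, again via python-chess; the "FEN ; bad move" file format is purely illustrative:

import chess
import chess.engine

def check_regressions(positions_path, engine_path, depth=12):
    # Each line holds 'FEN ; uci_move', a position the program once
    # misplayed and the move it must no longer choose there.
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    repeats = 0
    with open(positions_path) as f:
        for line in f:
            if not line.strip():
                continue
            fen, bad_uci = (part.strip() for part in line.split(";"))
            board = chess.Board(fen)
            result = engine.play(board, chess.engine.Limit(depth=depth))
            if result.move == chess.Move.from_uci(bad_uci):
                repeats += 1
                print("still plays", bad_uci, "in", fen)
    engine.quit()
    print("repeated old mistakes:", repeats)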
Why not send some standard games that Hossa lost for review to the FICS teaching ladder, or to me for that matter? (I like to analyse games, though not blitz games; it helps me become a better player.)

Regards,
Dan Andersson


