Author: leonid
Date: 12:13:34 08/19/00
On August 18, 2000 at 18:03:28, Dann Corbit wrote:

>There has been some discussion about the use of test suites and their usefulness
>for program improvements. Personally, I plan to use them extensively to fiddle
>with "bean-counter" in order to try to get things right.
>
>Since there are so many variations that are possible with hundreds of
>parameters, I was planning to use gradient-search error minimization with the
>evaluation function to try to find an optimal value for all the parameters that
>solves a test set of perhaps 5000 carefully verified positions. (Iteration
>would be so expensive it would be impossible to use it.) The experiment would
>be repeated at different time controls, as perhaps some parameters are also a
>function of time!
>
>Now, I am wondering (since at least one of the world's best chess programmers
>does not use them at all) if it is such a good idea. So, I am wondering, if you
>do not use test positions to tune your evaluation parameters, how on earth do
>you choose suitable values for each positional, tactical, and material
>parameter? What are the alternatives? Why are the alternatives better? If
>test positions were used in the past and abandoned, what prompted the change of
>heart? If test positions have *never* been tried, how is it known that they
>won't be useful?

In a chess program, the only 100% verifiable parts are the move generator and the mate solver. The rest is never entirely free of the occasional evil: a few funny bugs can stay in that part all year round.

Leonid.
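For readers unfamiliar with the tuning scheme the quoted post describes, the following is a minimal sketch, not Dann Corbit's actual method: a toy linear evaluation whose parameters are tuned by gradient descent on the total squared error over a set of verified positions. All function names, the linear evaluation model, the squared-error objective, and the numeric (central-difference) gradient are illustrative assumptions; a real engine would use far more features and positions.

```python
# Sketch of gradient-search error minimization over a test set of positions.
# Everything here is a toy assumption: positions are pre-reduced to feature
# vectors, the evaluation is linear, and the objective is total squared error.

def evaluate(params, features):
    """Toy linear evaluation: weighted sum of positional features."""
    return sum(p * f for p, f in zip(params, features))

def error(params, test_set):
    """Total squared error against the verified target scores."""
    return sum((evaluate(params, feats) - target) ** 2
               for feats, target in test_set)

def numeric_gradient(params, test_set, h=1e-6):
    """Central-difference estimate of d(error)/d(param) for each parameter."""
    grad = []
    for i in range(len(params)):
        up, down = params[:], params[:]
        up[i] += h
        down[i] -= h
        grad.append((error(up, test_set) - error(down, test_set)) / (2 * h))
    return grad

def tune(params, test_set, rate=0.01, steps=200):
    """Gradient descent: nudge all parameters downhill simultaneously."""
    for _ in range(steps):
        grad = numeric_gradient(params, test_set)
        params = [p - rate * g for p, g in zip(params, grad)]
    return params

# Tiny demo: two parameters, three "positions" with known target scores.
test_set = [([1.0, 0.0], 2.0),
            ([0.0, 1.0], -1.0),
            ([1.0, 1.0], 1.0)]
tuned = tune([0.0, 0.0], test_set)
```

This also illustrates why the post rules out brute-force iteration: with hundreds of parameters the search space explodes combinatorially, while one gradient step moves every parameter at once at the cost of a few evaluations of the test set.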
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.