Author: Jon Dart
Date: 20:44:34 08/18/00
On August 18, 2000 at 18:03:28, Dann Corbit wrote:

>Since there are so many variations that are possible with hundreds of
>parameters, I was planning to use gradient search error minimizations with the
>evaluation function to try and find an optimal value for all the parameters that
>solves a test set of perhaps 5000 carefully verified positions. (Iteration
>would be so expensive it would be impossible to use it). The experiment would
>be repeated at different time controls, as perhaps some parameters are also a
>function of time!

I assume you are familiar with KnightCap, which has an automatically adjusted learning function. It learns from its own games, however, not from test suites.

>how on earth do
>you choose suitable values for each positional, tactical, and material
>parameter?

Well, in my case it's manual, but a lot of the changes result from observing obviously bad moves and trying to make the program appreciate that they're bad. Examples: exchanging into a bad ending, under-estimating the danger from a passed pawn or a kingside attack, etc. I keep a little collection of such cases and once in a while I go in and try to fix one or more of them... some are not easily fixable via evaluation changes, however.

--Jon
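The tuning scheme Dann describes can be sketched roughly as follows: treat the evaluation as a weighted sum of position features, and run gradient descent on the weights to minimize squared error against target scores for a set of verified positions. This is only an illustrative sketch, not code from any actual engine; the feature vectors and target scores below are made up, and real tuning would use thousands of positions and a nonlinear evaluation.

```python
def evaluate(weights, features):
    # Linear evaluation: score = sum of weight * feature value.
    return sum(w * f for w, f in zip(weights, features))

def total_error(weights, positions):
    # Sum of squared differences between eval score and target score.
    return sum((evaluate(weights, feats) - target) ** 2
               for feats, target in positions)

def tune(weights, positions, rate=0.01, steps=2000, eps=1e-6):
    # Plain gradient descent using finite-difference gradients, so the
    # same loop would still work if the evaluation were not a simple
    # differentiable function of the weights.
    weights = list(weights)
    for _ in range(steps):
        base = total_error(weights, positions)
        grad = []
        for i in range(len(weights)):
            weights[i] += eps
            grad.append((total_error(weights, positions) - base) / eps)
            weights[i] -= eps
        for i in range(len(weights)):
            weights[i] -= rate * grad[i]
    return weights

# Toy "test set": (feature vector, target score in pawns). The features
# might stand for material, passed pawns, and mobility, say.
positions = [
    ([1.0, 0.0, 2.0], 1.5),
    ([0.0, 1.0, 1.0], 0.8),
    ([2.0, 1.0, 0.0], 1.9),
]
tuned = tune([0.0, 0.0, 0.0], positions)
print(tuned, total_error(tuned, positions))
```

One step of this loop already costs a full pass over the test set per parameter, which is why Dann notes that anything more expensive than a single gradient search (e.g. repeated iteration) would be impractical with hundreds of parameters and 5000 positions.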