Author: Cesar Contreras
Date: 13:18:43 07/08/04
Your approach could work, but the main problem is the test environment. As I see it, test suites are fine for finding holes or bugs in your engine, but not for evaluating performance. As a program increases in strength, you will find that when you change some parameters your engine gets better on one test suite, but there is a good chance it gets worse on a different one.

I once tried an evolutionary approach to fine-tune my static evaluator, but I ran into the following problems:

1. The test set was a series of games between the same program with different parameters. The result was an engine that easily beat the previous version in a direct match, but was inferior in matches against other engines. Why? Simply because the test set never evaluated performance against different engines.

2. The time control was short (3 minutes for the whole game), so the values got inflated: when the search tree is shallow, the evaluation parameters carry more weight. Again, the test set lacked games at long time controls.

3. In an evolutionary approach you must kill off "unsuccessful" entities, so you end up killing entities that may not have had enough of a chance to prove their value. (This is just a side note; I know you are not using an evolutionary approach.)

My point is: you need to select your test set carefully, with different opponents, different time controls, and different openings. With that, your approach could prove successful, but the problem then becomes how much time it is going to take. Another thing you are missing is the incredibly long list of static-evaluator parameters that are truly important.
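To make the idea concrete, here is a minimal Python sketch of what a diversified fitness function for parameter tuning could look like. Everything in it is hypothetical (the opponent names, the play_match hook, the exact time controls and opening files are placeholders, not any real tool's API); the only point it illustrates is scoring a candidate over the full cross-product of opponents, time controls, and openings instead of a single self-play match:

    # Sketch only: averages a candidate's score over many test conditions
    # so a parameter set cannot win by overfitting one opponent or one
    # time control. All names below are hypothetical placeholders.
    import itertools

    OPPONENTS = ["engine_a", "engine_b", "engine_c"]    # different engines, not just self-play
    TIME_CONTROLS = [180, 900, 3600]                     # seconds per game: blitz to long
    OPENINGS = ["sicilian.epd", "qgd.epd", "kid.epd"]    # varied opening positions

    def play_match(candidate_params, opponent, time_control, opening, games=2):
        """Hypothetical hook: play a small match (both colors) and return
        the candidate's score in points (win=1, draw=0.5, loss=0)."""
        raise NotImplementedError("connect your own match runner here")

    def fitness(candidate_params):
        """Average per-game score over every (opponent, time control,
        opening) combination, normalized to the range 0..1."""
        conditions = list(itertools.product(OPPONENTS, TIME_CONTROLS, OPENINGS))
        total = 0.0
        for opponent, tc, opening in conditions:
            total += play_match(candidate_params, opponent, tc, opening)
        return total / (2 * len(conditions))  # 2 games per condition

Even with a harness like this, point 3 above still applies: each candidate needs enough games under each condition before being discarded, or the noise in short matches will eliminate good parameter sets by chance.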