Author: Dann Corbit
Date: 13:04:32 03/10/04
On March 10, 2004 at 14:41:47, Uri Blass wrote:

>On March 10, 2004 at 14:23:29, Dave Gomboc wrote:
>
>>On March 09, 2004 at 16:05:15, Gian-Carlo Pascutto wrote:
>>
>>>Yet no top program does this, and they had a human correct it
>>>afterwards in Deep Blue. The conclusion should be obvious.
>>
>>Is that so?
>>
>>>If you can develop a *top level* evaluation function, better than
>>>good human tuning, solely on learning from a GM games database,
>>>you deserve an award. Nobody has succeeded before.
>>
>>Jonathan Schaeffer learned weights in Checkers [Chinook] without even using a
>>human games database (he used TD learning). The weights he tuned score 50%
>>against his hand-tuned code.
>>
>>I learned weights in Chess [Crafty] using 32k positions, hill-climbing an
>>ordinal correlation measure. It too scores 50% against the hand-tuned code.
>
>How many games, and at what time control? There is a difference between
>scoring 50% over 2 games and over 2000 games.
>
>It is also possible that you get 50% against Crafty but less against other
>opponents.
>
>>Given Deep Sjeng's source code, I could zero its evaluation function weights,
>>and learn them from GM games to score 50% against the weights you have right
>>now too.
>
>You may be right, but you cannot make that claim about source code you have
>not seen.

With his method, he will eventually reach a good result with any engine. It
works in generations, discarding the weaker weight sets and keeping the
stronger ones. Given enough time, it must become stronger. He wrote a paper
on it (rough sketches of the correlation measure and of the generational idea
follow below). Look here:

http://www.cs.ualberta.ca/~dave/

to find this:

Gomboc et al., Ordinal Regression for Evaluation Function Tuning, 10th
International Conference on Advances in Computer Games, Graz, Austria,
Nov. 24-27, 2003.
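
Dave Gomboc's quoted description ("hill-climbing an ordinal correlation
measure" over 32k positions) is terse, so here is a minimal sketch of that
general scheme. To be clear, this is not the code from his paper: the linear
evaluation, the Kendall-tau-style measure, the toy data, and all of the names
(kendall_tau, evaluate, hill_climb, hidden) are assumptions made purely for
illustration.

import random

def kendall_tau(xs, ys):
    # Ordinal correlation: (concordant - discordant) / total pairs.
    # O(n^2), fine for a demo; 32k positions would need a faster version.
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def evaluate(weights, features):
    # Linear evaluation: weighted sum of position features.
    return sum(w * f for w, f in zip(weights, features))

def hill_climb(positions, labels, n_weights, steps=500, delta=1):
    # Greedy hill-climbing: nudge one weight at random, keep the change
    # only if the ordinal correlation with the labels improves.
    weights = [0] * n_weights
    best = kendall_tau([evaluate(weights, p) for p in positions], labels)
    for _ in range(steps):
        i = random.randrange(n_weights)
        old = weights[i]
        weights[i] += random.choice((-delta, delta))
        tau = kendall_tau([evaluate(weights, p) for p in positions], labels)
        if tau > best:
            best = tau
        else:
            weights[i] = old   # revert a non-improving step
    return weights, best

# Toy demo: 100 random "positions", labeled by a hidden weight vector.
random.seed(1)
hidden = [1, 3, 3, 5, 9]
positions = [[random.uniform(-1.0, 1.0) for _ in hidden] for _ in range(100)]
labels = [evaluate(hidden, p) for p in positions]
weights, tau = hill_climb(positions, labels, n_weights=len(hidden))
print(weights, round(tau, 3))

A real run would rank positions by GM game results or by a stronger engine's
scores rather than by a hidden weight vector, and 32k positions would want an
O(n log n) tau computation instead of the O(n^2) loop above.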
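Dann's closing description ("generations" that discard the weaker candidates
and keep the stronger ones) reads like a simple evolutionary selection loop.
The paper linked above actually describes hill-climbing, so take this as one
plausible reading of Dann's wording rather than the paper's method; the
fitness function and every constant below are stand-ins, since real fitness
would come from engine-vs-engine match results.

import random

# Stand-in fitness: a real run would use match scores against reference
# opponents; negative squared distance to a hidden target vector is used
# here only so the sketch is self-contained.
TARGET = [100, 320, 330, 500, 900]   # hypothetical piece values

def fitness(weights):
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def next_generation(population, survivors=4, children=4, sigma=5.0):
    # Rank candidates, keep the strong ones, discard the weak ones, and
    # refill the population with mutated copies of the survivors.
    ranked = sorted(population, key=fitness, reverse=True)
    strong = ranked[:survivors]
    offspring = [[w + random.gauss(0, sigma) for w in random.choice(strong)]
                 for _ in range(children)]
    return strong + offspring

random.seed(1)
population = [[random.uniform(0, 1000) for _ in range(5)] for _ in range(8)]
for _ in range(500):
    population = next_generation(population)
print([round(w) for w in max(population, key=fitness)])

As Dann says, as long as weaker candidates keep being discarded, the surviving
weights can only hold steady or improve against the chosen fitness measure;
the open question in the thread is whether that measure transfers to real
playing strength.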