Computer Chess Club Archives


Subject: Re: Puzzled about testsuites

Author: Gian-Carlo Pascutto

Date: 13:05:15 03/09/04

On March 09, 2004 at 15:37:30, Michael Yee wrote:

>From a machine learning (or even plain regression) perspective, fitting a
>function to training data is fine as long as you make sure you're not
>overfitting (i.e., fitting noise). This is what training a model is all about.
>Additionally, I think there could be even less danger in the case of chess
>since the training data has no error (in theory).
>
>On the other hand, tuning the static eval for a very small training set could be
>dangerous if you're fitting more parameters than you have observations. But if
>you had a large enough training set, what would be the problem? You could always
>verify your new fitted eval on a validation test set.
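
To make the regression setup being proposed concrete, here is a minimal
sketch of the fit-then-validate idea described above. It is Python; the
features, labels, and split are hypothetical placeholders rather than
data from any engine, and it only shows how a held-out validation set
exposes weights that merely fit noise.

# Minimal sketch: fit linear eval weights by least squares, then check
# them on a held-out validation set. All data below is synthetic
# placeholder data, not positions from a real engine.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical positions: each row describes a position by a few eval
# features (material, mobility, king safety, ...); each label is a
# target score for that position.
n_positions, n_features = 1000, 8
X = rng.normal(size=(n_positions, n_features))
true_w = rng.normal(size=n_features)
y = X @ true_w + rng.normal(scale=0.1, size=n_positions)  # noisy targets

# Train/validation split, as suggested above.
split = int(0.8 * n_positions)
X_train, y_train = X[:split], y[:split]
X_val, y_val = X[split:], y[split:]

# Least-squares fit of the weights on the training set only.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# If validation error is much worse than training error, the weights
# are fitting noise (too many parameters for too few positions).
train_err = np.mean((X_train @ w - y_train) ** 2)
val_err = np.mean((X_val @ w - y_val) ** 2)
print(f"train MSE {train_err:.4f}  validation MSE {val_err:.4f}")
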

You are fitting an evaluation function with tactical training data.
You're not only tuning parameters, you're influencing everything,
since speed and tree size also vary.

The analogy is completely flawed.

>P.S. Deep Thought's authors tuned its weights using GM games...
>
>http://www.tim-mann.org/deepthought.html

Yet no top program does this, and in Deep Blue a human corrected
the weights afterwards. The conclusion should be obvious.

If you can develop a *top level* evaluation function, better than
good human tuning, solely by learning from a database of GM games,
you deserve an award. Nobody has succeeded yet.
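
For reference, the simplest version of that kind of tuning looks
roughly like the sketch below: nudge linear eval weights so the move
the GM actually played scores at least as high as a competing move.
This is a perceptron-style illustration with made-up feature vectors;
it is not Deep Thought's actual procedure, and the function name and
data layout are invented for the example.

# Crude illustration of tuning eval weights from GM games: for each
# training position, if the current weights score a competing move at
# least as high as the GM's move, shift them toward the GM's choice.
# Hypothetical sketch only; not Deep Thought's actual method.
import numpy as np

def tune_from_gm_choices(weights, examples, lr=0.01, epochs=10):
    """examples: list of (played, alternative) feature-vector pairs,
    one for the position after the GM's move and one for a competing
    move in the same position."""
    w = weights.copy()
    for _ in range(epochs):
        for played, alternative in examples:
            # Perceptron-style update toward the GM's choice.
            if w @ alternative >= w @ played:
                w += lr * (played - alternative)
    return w

# Hypothetical usage with random vectors standing in for real
# positions from a GM game database.
rng = np.random.default_rng(1)
examples = [(rng.normal(size=8), rng.normal(size=8)) for _ in range(100)]
w = tune_from_gm_choices(np.zeros(8), examples)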

--
GCP


