Computer Chess Club Archives


Subject: Re: Puzzled about testsuites

Author: Michael Yee

Date: 12:37:30 03/09/04


On March 09, 2004 at 11:50:39, Gian-Carlo Pascutto wrote:

>On March 09, 2004 at 11:28:16, Michel Langeveld wrote:
>
>>Last week I worked hard on Nullmover and the ecm-gcp testsuite in particular.
>>I thought this testsuite would be a fast way to tune middlegame knowledge, kick
>>things out, put things back in, and do multiple test runs.
>>
>>I did the following:
>>*1 kicked out my kingsafety entirely and used only pawnshelter
>>*2 kicked out my mobility entirely
>>*3 added more information to the pawn struct
>
>Tuning your evaluation on a testsuite is fundamentally wrong.
>
>--
>GCP

To borrow a phrase from Uri: "I disagree." :-)

From a machine learning (or even plain regression) perspective, fitting a
function to training data is fine as long as you make sure you're not
overfitting (i.e., fitting noise). This is what training a model is all about.
Additionally, I think there could be even less danger in the case of chess, since
the training data has (in theory) no label error: each test position comes with a
known correct move.
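
To make that concrete, here is a minimal sketch (in Python, with made-up
features and target scores; nothing here is taken from Michel's engine) of what
"fitting a function to training data" means for a static eval: treat the eval
as a linear combination of position features and solve for the weights by
least squares over a set of labeled positions.

import numpy as np

# Hypothetical setup: each training position is summarized by a feature
# vector (material, pawn shelter, mobility, ...) and a target score,
# e.g. the value a deep search assigned to that position.
rng = np.random.default_rng(0)
n_positions, n_features = 1000, 5

X = rng.normal(size=(n_positions, n_features))  # position features
w_true = np.array([1.0, 0.5, 0.3, 0.1, 0.05])   # "ideal" weights, unknown in practice
y = X @ w_true                                  # target scores

# Fit the eval weights by ordinary least squares: minimize ||X w - y||^2.
w_fit, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w_fit)  # close to w_true when positions greatly outnumber weights

With 1000 positions and only 5 weights the fit is well determined; the trouble
starts when those two numbers get close to each other.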

On the other hand, tuning the static eval on a very small training set could be
dangerous if you're fitting more parameters than you have observations. But if
you had a large enough training set, what would be the problem? You could always
verify the newly fitted eval on a held-out validation set.
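
Here is a sketch of that safeguard (again Python with synthetic data, purely
illustrative): hold out part of the suite as a validation set, fit only on the
training part, and compare the two errors. When the number of tuned weights
approaches the number of positions, the training error collapses while the
validation error stays large, which is exactly the overfitting signature.

import numpy as np

rng = np.random.default_rng(1)

def fit_and_validate(n_positions, n_features):
    # Synthetic positions with noisy targets stand in for a real test suite.
    X = rng.normal(size=(n_positions, n_features))
    w_true = rng.normal(size=n_features)
    y = X @ w_true + 0.1 * rng.normal(size=n_positions)

    # 80/20 split into training and validation positions.
    split = int(0.8 * n_positions)
    X_tr, y_tr, X_va, y_va = X[:split], y[:split], X[split:], y[split:]

    # Fit the eval weights on the training positions only.
    w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

    train_err = np.mean((X_tr @ w - y_tr) ** 2)
    valid_err = np.mean((X_va @ w - y_va) ** 2)
    return train_err, valid_err

# Many positions per weight: training and validation error agree.
print(fit_and_validate(n_positions=1000, n_features=10))
# Nearly as many weights as positions: training error vanishes while
# validation error is much larger -- the fit has absorbed the noise.
print(fit_and_validate(n_positions=25, n_features=20))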

Michael

P.S. Deep Thought's authors tuned its weights using GM games...

http://www.tim-mann.org/deepthought.html


