Computer Chess Club Archives


Subject: Re: Puzzled about testsuites

Author: Dave Gomboc

Date: 11:23:29 03/10/04

On March 09, 2004 at 16:05:15, Gian-Carlo Pascutto wrote:

>Yet no top program does this, and they had a human correct it
>afterwards in Deep Blue. The conclusion should be obvious.

Is that so?

>If you can develop a *top level* evaluation function, better than
>good human tuning, solely on learning from a GM games database,
>you deserve an award. Nobody has succeeded before.

Jonathan Schaeffer learned evaluation weights in Checkers [Chinook] without even
using a human games database (he used TD learning).  The automatically learned
weights score 50% against his hand-tuned code, i.e. they play at equal strength.

I learned weights in Chess [Crafty] using 32k positions, hill-climbing an
ordinal correlation measure.  It too scores 50% against the hand-tuned code.
Given Deep Sjeng's source code, I could zero its evaluation function weights,
and learn them from GM games to score 50% against the weights you have right now
too.  (I'd need to make some performance improvements to my current tuner to
tune table-lookup-based terms efficiently, but that's an implementation issue,
not a research issue.)
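
To make "hill-climbing an ordinal correlation measure" concrete, here is a
stripped-down sketch in Python.  It is an illustration of the method, not my
actual tuner: the feature vectors for the 32k positions and the target ordering
they are compared against are assumed inputs.  The point of using an ordinal
measure is that only the ranking of positions matters, not the absolute scale
of the scores.

import random

def kendall_tau(scores, targets):
    """Crude Kendall-tau-style agreement between two orderings."""
    concordant = discordant = 0
    n = len(scores)
    for i in range(n):
        for j in range(i + 1, n):
            s = (scores[i] - scores[j]) * (targets[i] - targets[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def hill_climb(weights, feature_vectors, targets, steps=1000, delta=1):
    """Perturb one weight at a time; keep changes that improve the
    ordinal correlation over the whole position set."""
    def score_all(w):
        return [sum(wi * fi for wi, fi in zip(w, f)) for f in feature_vectors]

    best = kendall_tau(score_all(weights), targets)
    for _ in range(steps):
        i = random.randrange(len(weights))
        step = random.choice((-delta, delta))
        weights[i] += step
        trial = kendall_tau(score_all(weights), targets)
        if trial > best:
            best = trial                  # keep the improving change
        else:
            weights[i] -= step            # revert
    return weights, best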

Weight tuning is no longer the issue.  Selecting the features that will be
evaluated is the issue!

Dave


