Computer Chess Club Archives



Subject: Re: Puzzled about testsuites

Author: Uri Blass

Date: 11:41:47 03/10/04



On March 10, 2004 at 14:23:29, Dave Gomboc wrote:

>On March 09, 2004 at 16:05:15, Gian-Carlo Pascutto wrote:
>
>>Yet no top program does this, and they had a human correct it
>>afterwards in Deep Blue. The conclusion should be obvious.
>
>Is that so?
>
>>If you can develop a *top level* evaluation function, better than
>>good human tuning, solely on learning from a GM games database,
>>you deserve an award. Nobody has succeeded before.
>
>Jonathan Schaeffer learned weights in Checkers [Chinook] without even using a
>human games database (he used TD learning).  The weights he tuned score 50%
>against his hand-tuned code.
>
>I learned weights in Chess [Crafty] using 32k positions, hill-climbing an
>ordinal correlation measure.  It too scores 50% against the hand-tuned code.
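[The thread does not spell out Gomboc's actual tuning setup. As a purely illustrative sketch of the general idea — hill-climbing evaluation weights to maximize an ordinal (rank-agreement) measure against reference scores on a fixed position set — something like the following toy code could be imagined; the linear 3-feature evaluation and all names here are made up for illustration, not taken from Crafty:]

```python
import random

def ordinal_agreement(weights, positions, evaluate, reference):
    # Fraction of position pairs ordered the same way by the tuned
    # evaluation and by the reference scores (a Kendall-tau-like measure).
    agree = total = 0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            a = evaluate(weights, positions[i]) - evaluate(weights, positions[j])
            b = reference[i] - reference[j]
            if a * b != 0:          # skip ties: they carry no ordering information
                total += 1
                if (a > 0) == (b > 0):
                    agree += 1
    return agree / total if total else 0.0

def hill_climb(weights, positions, evaluate, reference, steps=500, delta=1):
    # Randomly nudge one weight at a time; keep the change only if the
    # ordinal agreement strictly improves.
    best = ordinal_agreement(weights, positions, evaluate, reference)
    for _ in range(steps):
        i = random.randrange(len(weights))
        d = random.choice((-delta, delta))
        weights[i] += d
        score = ordinal_agreement(weights, positions, evaluate, reference)
        if score > best:
            best = score
        else:
            weights[i] -= d         # revert: no improvement
    return weights, best

# Toy demonstration: recover a hidden 3-feature linear evaluation.
random.seed(1)
positions = [[random.randint(-3, 3) for _ in range(3)] for _ in range(20)]
true_weights = [3, 1, -2]           # hidden "reference" weights (invented)

def evaluate(w, p):
    return sum(wi * fi for wi, fi in zip(w, p))

reference = [evaluate(true_weights, p) for p in positions]
before = ordinal_agreement([0, 0, 0], positions, evaluate, reference)
tuned, after = hill_climb([0, 0, 0], positions, evaluate, reference)
```

[A real setup would use actual chess positions and evaluation terms, of course; the point is only the shape of the loop: propose a small weight change, keep it iff the rank agreement with the reference improves.]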

How many games, and at what time control?
There is a difference between scoring 50% over 2 games and scoring 50% over 2000 games.

It is also possible that you get 50% against Crafty but less against other
opponents.
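[Uri's point about the number of games can be made concrete: under a simple win/loss model the standard error of a match score shrinks with the square root of the number of games. A rough normal-approximation sketch (the function name is mine; treating draws as half-points would narrow the interval slightly, so this is conservative):]

```python
import math

def score_ci(score, games, z=1.96):
    # Approximate 95% confidence interval for a match score, treating
    # each game as an independent win/loss trial.
    se = math.sqrt(score * (1.0 - score) / games)
    return score - z * se, score + z * se

for n in (2, 200, 2000):
    lo, hi = score_ci(0.5, n)
    print(f"{n:5d} games: 50% score, 95% CI {lo:+.3f} .. {hi:+.3f}")
```

[With 2 games the interval is wider than the entire 0..1 score range, i.e. the result tells you nothing; with 2000 games a 50% score pins the true strength difference to within about +/-2.2%.]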


>Given Deep Sjeng's source code, I could zero its evaluation function weights,
>and learn them from GM games to score 50% against the weights you have right now
>too.

You may be right, but you cannot know that about source code you have never seen.

Uri




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.