Computer Chess Club Archives




Subject: Re: Automatic Eval Tuning

Author: Landon Rabern

Date: 10:27:57 06/29/01


On June 29, 2001 at 11:18:48, Vincent Diepeveen wrote:

>On June 29, 2001 at 11:14:34, Artem Pyatakov wrote:
>>I am curious, have people here experimented or extensively used Eval Function
>>tuning based on GM games for example?
>>If so, is it effective to any extent?
>>I came across this page and it seemed kind of interesting:
>>Have others tried this too?
>>Thanks in advance.
>Yes, I tried automatic tuning some years ago.
>The bigger your evaluation is, the more problematic automatic tuning
>becomes. Automatic tuners also have no chess knowledge, so they will
>happily tune passed pawns negative if you happen to have a test set
>where a passer is bad now and then.
>Another problem for automatic tuners is that you tune for test-position
>set X, but in reality the result also has to work well on test set Y,
>which it was never tuned for. Hand-tuned evaluations take test set Y
>into account, not only test set X.
>Anyway, once the number of parameters gets large, automatic tuning
>stops working at all. Of course it might beat randomly chosen
>parameters, but it will never beat hand-chosen parameters (unless a
>fool chose them).
>Best regards,
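
The supervised tuning Vincent describes, fitting evaluation parameters to a labeled set of positions, can be sketched roughly as follows. This is a minimal illustration, not anyone's actual tuner: the features and labels are synthetic stand-ins for real chess positions, and all names and parameters are assumptions. Checking the fitted weights on a held-out set is one way to catch the "tuned for set X, fails on set Y" problem he mentions.

```python
import random

def tune_supervised(features, labels, epochs=200, alpha=0.01):
    """Fit linear eval weights by stochastic gradient descent on
    squared error between the eval and the target labels."""
    nw = len(features[0])
    w = [0.0] * nw
    for _ in range(epochs):
        for x, y in zip(features, labels):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            # gradient step on squared error for each weight
            for i in range(nw):
                w[i] -= alpha * err * x[i]
    return w

# Synthetic data: labels generated from known "true" weights, so we
# can see whether the tuner recovers them.
rng = random.Random(0)
true_w = [1.0, -0.5]
xs = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(100)]
ys = [sum(wi * xi for wi, xi in zip(true_w, x)) for x in xs]
w = tune_supervised(xs, ys)
```

On this noiseless toy data the fitted weights come out close to the true ones; with noisy real-game labels and many parameters, the same loop happily overfits the training set, which is exactly the objection raised above.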

You are assuming that all you can do is supervised learning over a data set.
The method that shows the most promise is reinforcement learning.  This lets
the learner keep learning continuously: if there is a hole in the evaluation,
it will get fixed, because otherwise the program will lose.  You might want to
try something like Q-learning or TD(lambda).  It might take a long time to get
good values from scratch, but you may have more success if you start from your
original hand-coded numbers.
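
The TD(lambda) update mentioned above can be sketched in a few lines. This is a hedged illustration only: instead of chess it uses the standard 5-state random-walk toy problem, with a one-hot "feature" per state so the tabular values play the role of linear eval weights. Names and parameters are assumptions, not anyone's engine code.

```python
import random

def td_lambda_random_walk(episodes=5000, alpha=0.05, lam=0.8, gamma=1.0, seed=1):
    """TD(lambda) with accumulating eligibility traces on a 5-state
    random walk: start in the middle, step left/right at random,
    reward 1 for exiting on the right, 0 on the left."""
    n = 5
    v = [0.5] * n               # value estimates (the "eval weights" here)
    rng = random.Random(seed)
    for _ in range(episodes):
        e = [0.0] * n           # eligibility traces, reset each episode
        s = n // 2
        while True:
            s2 = s + (1 if rng.random() < 0.5 else -1)
            r = 1.0 if s2 == n else 0.0
            v2 = 0.0 if s2 in (-1, n) else v[s2]
            delta = r + gamma * v2 - v[s]   # the TD error
            e[s] += 1.0
            # every weight is updated in proportion to its trace,
            # so credit flows back along the episode
            for i in range(n):
                v[i] += alpha * delta * e[i]
                e[i] *= gamma * lam
            if s2 in (-1, n):
                break
            s = s2
    return v
```

The true values here are 1/6 through 5/6, and the learned estimates land close to them. In an engine, the one-hot features would be replaced by evaluation terms and the random walk by self-play games, which is why starting from hand-coded weights, as suggested above, cuts the learning time so much.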




Last modified: Thu, 07 Jul 11 08:48:38 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.