Computer Chess Club Archives



Subject: Re: Puzzled about testsuites

Author: Gian-Carlo Pascutto

Date: 14:48:19 03/09/04



On March 09, 2004 at 16:37:46, Michael Yee wrote:

>Wow. I was partly just being facetious with my initial comments. But I actually
>was mostly serious. I certainly don't think the analogy is "completely" flawed
>(since I think that would invalidate a lot of reasonable ideas in machine
>learning).

Machine learning is not flawed; your analogy between what he's doing and what
you're talking about is!

>If you parameterized your whole program, I don't see why a global search
>technique couldn't find the same weights that you hand-coded or even better ones
>(given a nice large training set). For example, let f(x) = DS's performance in a
>tournament given param vector x. Then a search technique (e.g., tabu search)
>could be used to optimize f(x) over x. I admit that it could take a long time,
>but I don't think it's impossible. (Also, I think it would still work if f(x)
>was based on the ability to match GM moves from a large set of training
>positions.)

Yes, this would work. No, this isn't what he's doing. He's making changes in
(the parameters of) one part of the program, and then measuring something
entirely different.
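
For concreteness, here is a minimal sketch of the working setup you describe: a
tabu-style local search over the weight vector, with f(x) taken as the fraction
of training positions where the engine's move matches the GM move. The engine
call search_best_move(position, weights) and the training set of
(position, GM move) pairs are hypothetical placeholders, and a real f(x) would
of course be vastly more expensive to evaluate:

def f(weights, training_set, search_best_move):
    # Objective: fraction of positions where the engine picks the GM move.
    hits = sum(1 for position, gm_move in training_set
               if search_best_move(position, weights) == gm_move)
    return hits / len(training_set)

def tabu_search(x0, training_set, search_best_move, steps=200, tabu_len=20):
    x, best, best_score = list(x0), list(x0), None
    tabu = []  # recently visited weight vectors, excluded from re-visits
    for _ in range(steps):
        # Neighbours: perturb one weight at a time by +/- 1.
        neighbours = [x[:i] + [x[i] + d] + x[i+1:]
                      for i in range(len(x)) for d in (-1, +1)]
        neighbours = [y for y in neighbours if y not in tabu]
        if not neighbours:
            break
        # Move to the best non-tabu neighbour, even if it is worse than the
        # current point; this is how tabu search escapes local optima.
        score, x = max((f(y, training_set, search_best_move), y)
                       for y in neighbours)
        tabu.append(list(x))
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if best_score is None or score > best_score:
            best, best_score = list(x), score
    return best, best_score

The crucial point is that f(x) here measures the same thing being tuned. Scoring
the evaluation weights on a tactical test suite instead would be measuring
something else entirely.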

Strength of evaluation and the results of a tactical test set are not correlated
enough to be useful for optimisation. As he already noticed in his first post,
they can even be negatively correlated.

--
GCP


