Computer Chess Club Archives


Subject: Re: Hello from Edmonton (and on Temporal Differences)

Author: Jay Scott

Date: 13:45:09 08/06/02


On August 05, 2002 at 17:39:46, Vincent Diepeveen wrote:

>And it's very easy to realize why.

You cite your personal experience, but you don't give any reasoning beyond this
remark. I read you as meaning, "Well, duh, it's obvious!" But I don't find it
obvious, and I don't believe it.

Every learning algorithm that works at all contains some domain knowledge,
however little and vague the knowledge may be. That is a theorem: If you don't
know anything, you can't conclude anything. If I may translate your point into more
sophisticated terms, I think you mean that a purely empirical algorithm, one that
works solely by accumulating training data against some model, will work poorly
compared to human reasoning, which not only collects data but also analyzes it in
light of domain knowledge. Is that roughly what you mean?

I have two answers:

1) I agree that in practice a good algorithm which can combine empirical and
analytic learning methods should perform much better than either a purely
empirical or a purely analytic algorithm. In my opinion, not much is known about
how to integrate these two types of methods, and this is a key research area.

2) However, in principle, given enough time, an empirical algorithm with a
sufficiently general model can learn anything learnable. If you believe in
Darwin, you already know that; natural selection is a learning algorithm that
starts with almost nothing, and yet humans evolved, and we can figure stuff out.
Therefore, given enough time, empirical learning subsumes analytic learning. And
because of that, an empirical method can always work eventually, and it's very
hard to prove it can't work in a reasonable time. A simple learning algorithm
tuning a good model could well produce an evaluator with human or even
super-human quality. There's a lot of CPU time out there, and the only missing
ingredient is the good model.
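
To make "a simple learning algorithm tuning a good model" concrete, here is a
minimal sketch of temporal-difference tuning of a linear evaluator, in the spirit
of TD(lambda). The feature set, the numbers, and all the names are my own
illustration, not anything from this thread: the evaluator is a weighted sum of
features, and each game nudges the weights so that successive positions' scores
agree better with each other (and, in practice, with the final result).

# Minimal sketch of TD(lambda) tuning for a linear chess evaluator.
# Everything here (feature set, constants, sample data) is hypothetical;
# only the weight-update rule is the point.

def evaluate(weights, features):
    """Linear evaluation: a weighted sum of feature values."""
    return sum(w * f for w, f in zip(weights, features))

def td_lambda_update(weights, game_features, alpha=0.01, lam=0.7):
    """Update weights from one game, given as a list of feature vectors.

    The error at each step is the difference between successive evaluations;
    each error is credited backward to earlier positions with exponentially
    decaying eligibility (the lambda in TD(lambda)). In practice the final
    position's score is usually replaced by the game result.
    """
    n = len(weights)
    eligibility = [0.0] * n
    for t in range(len(game_features) - 1):
        error = (evaluate(weights, game_features[t + 1])
                 - evaluate(weights, game_features[t]))
        for i in range(n):
            # For a linear model, d(evaluation)/d(weight i) is just feature i.
            eligibility[i] = lam * eligibility[i] + game_features[t][i]
            weights[i] += alpha * error * eligibility[i]
    return weights

# Hypothetical usage: three features (say material, mobility, king safety)
# over a four-position fragment of one game.
weights = [1.0, 0.1, 0.2]
game = [[0.0, 5.0, 1.0], [1.0, 3.0, 0.5], [1.0, 4.0, 0.0], [2.0, 2.0, -0.5]]
print(td_lambda_update(weights, game))

The point of the sketch is only that the tuning loop itself is tiny; all of the
leverage is in how good the features (the "model") are.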

I'm sure you don't believe that a super-human evaluator is possible, because you
say you think that top human evaluators are already near-perfect in their
tuning. But I haven't seen any evidence of that, and personally I doubt it. In
any case, a super-human evaluator is *still* possible in principle, because a
machine-tuned evaluator may be able to use a finer-grained model and, in effect,
make up its own evaluation terms to tune. The proof is in the pudding, but I
haven't seen any evidence to rule it out.
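
To illustrate what a "finer-grained model" might look like (again, my own example,
not something claimed in the thread): instead of one tunable weight per piece type,
the tuner could be handed a separate weight for every (piece type, square) pair. The
update rule doesn't change at all, but the machine now has hundreds of terms to
shape instead of a handful, which is effectively making up its own evaluation terms
within whatever granularity the model allows.

# Hypothetical comparison of model granularity for the same tuner.
# Coarse: one weight per piece type (6 terms per side).
# Fine: one weight per (piece type, square) pair (6 * 64 terms per side),
# i.e. piece-square tables the tuner is free to shape on its own.

PIECE_TYPES = ["P", "N", "B", "R", "Q", "K"]

def coarse_features(pieces):
    """One count per piece type, white minus black: 6 features."""
    feats = [0.0] * len(PIECE_TYPES)
    for piece, square, is_white in pieces:
        feats[PIECE_TYPES.index(piece)] += 1.0 if is_white else -1.0
    return feats

def fine_features(pieces):
    """One indicator per (piece type, square) pair: 6 * 64 features."""
    feats = [0.0] * (len(PIECE_TYPES) * 64)
    for piece, square, is_white in pieces:
        feats[PIECE_TYPES.index(piece) * 64 + square] += 1.0 if is_white else -1.0
    return feats

# A made-up position fragment: (piece, square index 0..63, is white?).
position = [("P", 12, True), ("N", 18, True), ("P", 52, False)]
print(len(coarse_features(position)), "coarse terms")
print(len(fine_features(position)), "fine-grained terms")

Both feature extractors feed the same update rule as in the earlier sketch; the only
difference is how much freedom the tuner has.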

>Have you ever thought of that, how much *effort* has been put in learning
>in chess the last years, and how little good results it has brought?

Yes; very little effort. Games don't get much research funding because they're
seen as not serious, so the work is done mostly by interested amateurs and by
grad students who haven't figured that out yet. I agree about the results so
far.


