Computer Chess Club Archives



Subject: Re: Knowledge again, but what is it?

Author: Amir Ban

Date: 02:09:41 02/25/98



On February 25, 1998 at 00:15:59, Don Dailey wrote:

>Well said Fernando,
>
>I don't think it's possible to separate the two concepts.  Search IS
>knowledge (one form of it.)   I think we tend to artificially separate
>the two concepts which is probably a mistake.  It's similar to chess
>players arguing over tactical vs positional play.  It's more a human
>concept than anything else.   I fear we do subject our programs to
>our own superstitions but cannot help ourselves.
>

Actually I think it would be very interesting to separate the concepts,
but I don't see that we know how.

When we talk about knowledge, we usually mean evaluation. (This is the
part that the user senses. I know that some people also call "knowledge"
all kinds of fancy stuff whose purpose is to guide and help the search
engine. I don't quite agree, but for argument's sake let's stick to
evaluation.) So a "knowledge" program has good evaluation, better than a
non-knowledge program's, anyway.

We all know what the BEST evaluation is: the one coming out of
perfect knowledge of the game. But what is a good evaluation? More
precisely, given two evaluation functions, how do you decide which is
better?

I think most people, after a moment's thought, would say: put a search
engine on top of both and let them play a match (or run a test suite, if
you will). This is not satisfactory, for practical reasons, because some
engines will do better with certain kinds of evaluations, or will behave
differently at different time controls, and for theoretical reasons,
because you would expect that a good (or better) evaluation is something
that can and should be defined without searching entering as a confound.
When you have an "objectively good" evaluation, you would expect any
engine to do well with it.

Can anyone come forward with a way of comparing evaluation functions,
not necessarily a practical one, that does not involve searching?

Let me make a try at this: the evaluation of a position should be a
measure of the expected outcome of the game (with assumed perfect play),
i.e. it can be mapped to a probability of winning. Say, with a score of
+0.5 you expect to win 65% of the time, and with a score of -4 you
expect to win 0.6% of the time. The probability of winning should be
monotonic in the score, or else something is wrong with the function. So
one way to define a better evaluation is as one that is more nearly
monotonic. You can also fix the score-to-probability mapping in advance,
with some exponential curve say, and declare that the better evaluation
is the one that fits the mapping more closely.
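As an illustration (my sketch, not part of the original post), an exponential mapping of this kind is essentially a logistic curve. With a scale constant of about 0.8 pawns, which is a guess rather than a fitted value, it roughly reproduces the two numbers above: +0.5 maps to about 65%, and -4 to under 1%.

```python
import math

def win_probability(score, scale=0.8):
    """Map an evaluation score (in pawns) to an expected probability
    of winning, using a logistic curve. The scale constant is an
    illustrative guess, not a fitted value."""
    return 1.0 / (1.0 + math.exp(-score / scale))

# A score of 0 maps to a 50% expectation, and the mapping is
# strictly monotonic in the score, as the definition requires.
```

Given a database of positions with known game results, "fits the mapping better" could then be scored by comparing win_probability of the static evaluation against the actual outcomes.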

I think this definition is on the right track, but there is something
clearly wrong with it: first, there is an endless supply of such
functions that would give you a perfect fit, but most of them are
nonsense. For example, an evaluation function that always returns 0 is
perfect in this sense, but obviously useless. A less extreme example is
an evaluation that limits itself to scores between -0.5 and +0.5, but
does so perfectly. This is a perfect but wishy-washy evaluation, and not
very useful, because practically you need to know that capturing the
queen gives you a 99.9% win, and this evaluation never guarantees more
than, say, 70%. Of course, if you are already a piece ahead, this
evaluation gives you no guidance at all.
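The objection can be made concrete: any strictly increasing compression of the score passes a purely monotonicity-based test, so such a test cannot penalize the wishy-washy evaluation. A small sketch (the function is hypothetical, just a tanh squash into the (-0.5, +0.5) band):

```python
import math

def wishy_washy(score):
    # Hypothetical evaluation that compresses every score into the
    # open interval (-0.5, +0.5) while preserving the ordering.
    return 0.5 * math.tanh(score)

scores = [-4.0, -1.0, 0.0, 0.5, 3.0]
squashed = [wishy_washy(s) for s in scores]

# The ordering is preserved, so any criterion based only on
# monotonicity rates the squashed evaluation as highly as the
# original, even though a won position (say +3) now looks barely
# better than +0.5.
assert squashed == sorted(squashed)
```

This suggests that any workable definition must also reward the spread, not just the ordering, of the scores.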

I'm sure this definition can be improved on, but currently I don't know
how.

Amir



