Computer Chess Club Archives



Subject: Re: ICCA Journal Sinks To A New Low

Author: Dan Homan

Date: 07:36:38 01/26/98



On January 26, 1998 at 09:24:14, Robert Hyatt wrote:

>
>1.  somehow the machine must "do it" like a human to be intelligent.
>there's never been such a requirement, because no one knows how a human
>plays chess yet.  And until we do, we aren't going to be able to prove
>whether a human does or does not use something like alpha/beta.  I know
>I do to some extent when playing chess.  But I don't know what else I use
>in addition (pattern recognition, etc.).  Crafty certainly uses
>alpha/beta, it certainly does pattern recognition in the eval...

The machine doesn't have to do it like a human, but there is more to
intelligence than simply doing a single task or type of task as well
as a human.  For example, by your definition a circuit breaker would
be intelligent because it can 'decide' to stop the flow of current
when the current becomes too large, and it does so at least as well as
a human could.

Traditionally, a circuit breaker has been viewed as a simple machine.
Also, by your definition Newton's laws (F = ma, etc.) are intelligent
because they can predict, as well as (or better than) a human, how long
a ball will take to reach the floor if dropped (the arithmetic is worked
out below).  These are simple equations.  Also, by your definition a
rock is intelligent because it can sit in one place as well as (or
better than) a human.
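To put numbers on the ball example (my own illustration, not Bob's):
constant-acceleration kinematics gives

    t = sqrt(2h/g)

so a ball dropped from h = 1.0 m reaches the floor after about
sqrt(2 * 1.0 / 9.8) = 0.45 seconds.  One line of arithmetic predicts the
answer as well as any human could, and nobody would call the formula
intelligent.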

Clearly something more than 'do it as well as a human' is required
for intelligence.  I'm not sure exactly what, but I outlined a few
things in my last post.

>
>2.  somehow the task must be "complicated".  This is also false.  I just
>went through the first chapter in 5 AI books, from old to new.  None
>mention "complexity" as a requirement.

No, my point is that just because a task is complicated doesn't mean
that intelligence is shown!  See my remarks below.  I use the example
of simple tasks to show that solving those simple tasks is not
qualitatively different from solving the complicated task of playing
chess well.  In my mind the method of solving the problem is what
characterizes intelligence, not just the success at solving it.

Your use of the "Turing test" below is wrong in my opinion.  You claim
that a chess program can play a game that is indistinguishable from a
human's play and is therefore intelligent.  Perhaps this is true, but
over time (perhaps a great deal of time and many games) the computer
nature of the play would become evident.  Also, I thought Turing's point
was that you should be able to ask the program *any* question.  The
whole idea of that criterion (I thought) was to demand generality.  By
limiting the test to chess alone, you miss the whole point!  The reason
Turing wanted *any* question to be possible was that he realized how
easy it would be to answer any *one* question well.

>
>I'd be willing to bet that I can find two games played on ICC, one GM vs
>GM/IM, and one computer vs GM/IM, and you couldn't identify which was a
>computer and which wasn't, without using a computer.  What you'd most
>likely find was that the human made a couple of obvious tactical mistakes
>and the computer didn't.  But "perfection" or "imperfection" is not part
>of the test.  If you can not tell which is which, then for that game, the
>machine emulated intelligence...  whether or not it can "learn" or
>whatever.
>
>
>
>>
>>I like to think of intelligence as the ability to go beyond your
>>'programming'.  I know this is a pretty vague definition and probably
>>misses some important aspects of intelligence that others might point
>>out, but it sums up my objection to alpha-beta being an example of
>>intelligence.  Alpha-beta will do exactly what you tell it to do every
>>single time (just like a calculator).
>
>hmmm... what about the book learning I do?  Or the "position learning"
>where Crafty won't play the same losing move, whether it is in book or
>not?  So it is "self-modifying" to a limited extent...

This is memorization, not learning.  Learning involves the ability to
extrapolate to novel situations.  Some programs have a limited ability
to do this by modifying the weights of their evaluation function over
time, based on the results of games, but even this is limited by the
structure of the evaluation function.  They cannot learn French, for
example :)
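To make the weight-adjustment idea concrete, here is a toy sketch of the
kind of update I mean (my own illustration; no real program is this
crude).  After a game, each weight is nudged so that the evaluation of a
stored position moves a little toward the game's result.  Notice that
the adjustment can only touch features someone already wrote into the
evaluation; it cannot invent new ones.

    #include <stdio.h>

    /* Toy illustration only: nudging hand-picked evaluation weights
       toward a game's result.  The point is that the update can only
       move weights of features that are already in the evaluation. */

    #define NFEATURES 3   /* say: material, mobility, king safety */

    static double weight[NFEATURES] = { 1.0, 0.1, 0.3 };

    /* evaluation = dot product of weights and feature counts */
    static double evaluate(const double feature[NFEATURES])
    {
        double sum = 0.0;
        int i;
        for (i = 0; i < NFEATURES; i++)
            sum += weight[i] * feature[i];
        return sum;
    }

    /* After a game, pull the evaluation of a stored position a
       little toward the final result (+1 win, 0 draw, -1 loss). */
    static void adjust_weights(const double feature[NFEATURES],
                               double result, double rate)
    {
        double error = result - evaluate(feature);
        int i;
        for (i = 0; i < NFEATURES; i++)
            weight[i] += rate * error * feature[i];
    }

    int main(void)
    {
        double pos[NFEATURES] = { 0.5, 2.0, -1.0 }; /* made-up counts */
        adjust_weights(pos, -1.0, 0.01);            /* we lost from here */
        printf("new weights: %g %g %g\n",
               weight[0], weight[1], weight[2]);
        return 0;
    }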

It is easy to build a machine to do a specific task well, but I can't
see that as intelligence.  Just because the task is complicated doesn't
mean that intelligence is involved (other than that of the programmer,
of course).

 - Dan

P.S.  The more I read and re-read this post, the more I see just how
thin the line is.  I'm not sure where the line should be drawn, but I
think generality is important.  I just can't see the precise execution
of well defined commands as intelligence.  I must admit that algorithms
that adjust their weights over time are getting closer, however.

The problem here is that we each have a sense of what intelligence is,
but no well defined definition exists (other than Turing's, which I
think you have corrupted by limiting it to a single task).


