Computer Chess Club Archives



Subject: Re: ICCA Journal Sinks To A New Low

Author: Robert Hyatt

Date: 11:14:51 01/26/98


On January 26, 1998 at 10:36:38, Dan Homan wrote:

>On January 26, 1998 at 09:24:14, Robert Hyatt wrote:
>
>>
>>1.  somehow the machine must "do it" like a human to be intelligent.
>>there's never been such a requirement, because no one knows how a human
>>plays chess yet.  And until we do, we aren't going to be able to prove
>>whether a human does or does not use something like alpha/beta.  I know
>>I do to some extent when playing chess.  But I don't know what else I
>>use
>>in addition (pattern recognition, etc.)  Crafty certainly uses
>>alpha/beta,
>>it certainly does pattern recognition in the eval...
>
>The machine doesn't have to do it like a human, but there is more to
>intelligence than simply doing a single task or type of task as well
>as a human.  For example, by your definition a circuit breaker would
>be intelligent because it can 'decide' to stop the flow of current
>when it becomes too large as well as a human can.

If that is the only thing in your domain, yes that is an "artificially
intelligent" device.  Assuming it requires intelligence to make the
decision.

Same for chess, however.  Everyone has *assumed* that chess requires
intelligence, because monkeys don't play chess, nor do dogs or cats or
any other mammals on the planet.  So if it requires intelligence to play
the game, then the program does, in fact, exhibit a form of "artificial
intelligence" when it plays the game too.  It might not be intelligent
in the same way we are, since we don't exactly know how we do most of
what we do either.  But if something makes food taste sweet, we call it
an "artificial sweetener" whether it uses the same sort of chemical
compound as sugar, or whether it stimulates a completely different set
of taste buds.  It doesn't matter...  so long as my tea tastes sweet...
It might not be "real" sugar...  but it accomplishes the same purpose.



>
>Traditionally, this has been viewed as a simple machine.  Also, by your
>definition Newton's Laws (F=ma, etc...) are intelligent because they
>can predict as well as (or better than) a human how long a ball will
>take to reach the floor if dropped.  These are simple equations.  Also,
>by your definition a rock is intelligent because it can sit in one place
>as well as (or better than) a human.


I don't see how you possibly conclude that from what I wrote.  Here it
is again:

  If a program, in some domain, can not be distinguished from a human
  performing the same task, and if we agree that that task requires some
  form of intelligence to perform, then the computer must be said to
  have some form of "artificial intelligence" in it.

I didn't say anything more, and I didn't say anything less.  If you
don't think it takes intelligence to play chess, bring your monkey over
here and I'll take him on.  If you do agree that it takes intelligence
to play chess, come on over and I'll let you take Crafty on.  That's
where the term "artificial intelligence" came from.  No one has ever
seriously predicted that a machine would be intelligent like humans,
excepting the Sci-Fi writers of course.  But machines can do many of the
same things that we believe make us intelligent.  Chess is but one
example...



>
>Clearly something more than 'do it as well as a human' is required
>for intelligence.  I'm not sure exactly what, but I outlined a few
>things


I never meant to imply that, because even one-celled life forms eat, and
just as efficiently as we do.  It is "do something that a human does,
and which requires intelligence" that has been the guiding principle for
AI for many years...

>in my last post.
>
>>
>>2.  somehow the task must be "complicated".  This is also false.  I just
>>went through the first chapter in 5 AI books, from old to new.  None
>>mention "complexity" as a requirement.
>
>No, my point is that just because a task is complicated doesn't mean
>that intelligence is shown!  See my remarks below.  I use the example
>of simple tasks to show that solving those simple tasks is not
>qualitatively different from solving the complicated task of playing
>chess well.  In my mind the method of solving the problem is what
>characterizes intelligence, not just the success at solving the
>problem.


Then let's stop here.  Do you believe that it takes intelligence to play
chess?  Answer yes or no only.  If no, then the discussion ends.  If
yes, then the question is moot, because computers play chess, and they
play it better than 99.9% of the world's population.  It doesn't matter
*how* they do it, just that they *do* do it.  At least in the field of
AI.

>
>Your use of the "Turing test" below is wrong in my opinion.  You claim
>that a chess program can play a game that is indistinguishable from a
>human playing and is therefore intelligent.   Perhaps this is true, but
>over time (perhaps a great deal of time and games) the computer nature
>of the play would be evident.  Also, I thought that Turing's point was
>that you should be able to ask the program *any* question.  The whole
>idea of that criterion (I thought) was to demand generality.  By limiting
>to only chess, you miss the whole point!  The reason Turing wanted
>*any* question to be possible was that he realized how easy it would
>be to answer any *one* question well.

Certainly it would.  Because it would make hardly any tactical mistakes,
while even the best humans make one or two per game.  It would find many
tactical shots instantly where the best players would take minutes.  And
it would exhibit an occasional positional flaw that a human might
overlook in one case but not in another.

I agree with your interpretation of the Turing test as applied to
developing an "intelligent electronic entity."  Everyone still believes
that is impossible.  But developing a specialized electronic entity has
been done, whether it be chess, medical diagnosis, scheduling
algorithms, or whatever.  If you use your above description, *no*
program would ever be called an AI program.  That I don't buy, because
programs are based on algorithms that have been called "AI" since day
one...



>
>>
>>I'd be willing to bet that I can find two games played on ICC, one GM
>>vs GM/IM, and one computer vs GM/IM, and you couldn't identify which
>>was a computer and which wasn't, without using a computer.  What you'd
>>most likely find was that the human made a couple of obvious tactical
>>mistakes and the computer didn't.  But "perfection" or "imperfection"
>>is not part of the test.  If you can not tell which is which, then for
>>that game, the machine emulated intelligence...  whether or not it can
>>"learn" or whatever.
>>
>>
>>
>>>
>>>I like to think of intelligence as the ability to go beyond your
>>>'programming'.  I know this is a pretty vague definition and probably
>>>misses some important aspects of intelligence that others might point
>>>out, but it sums up my objection to alpha-beta being an example of
>>>intelligence.  Alpha-beta will do exactly what you tell it to do
>>>every single time (just like a calculator).
>>
>>hmmm... what about the book learning I do?  Or the "position learning"
>>where Crafty won't play the same losing move, whether it is in book or
>>not?  So it is "self-modifying" to a limited extent...
>
>This is memorization, not learning.  Learning involves the ability to
>extrapolate to novel situations.  Some programs have a limited ability
>to do this by modifying the weights of their evaluation function over
>time from the results of games, but even this is limited by the
>structure of the evaluation function.  They cannot learn French, for
>example :)

However, your above statement is still wrong.  Alpha/beta will *not*
replay the same moves every time, given this "memorization".  I'd love
to be able to generalize from such memorized knowledge, but I don't want
to invest the computation time to make it happen, although how to do it
at all is not nearly as challenging as how to do it efficiently (several
PhD dissertations come to mind, one by Murray Campbell on "chunking",
for example).  So to beat Crafty, you are going to have to do much more
than just probing around to find a winning line and then using that line
again the next time.  It won't work then...
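The point about alpha/beta plus "memorization" can be sketched in a few
lines.  This is a toy illustration only, not Crafty's actual code: the
tiny game tree, the leaf scores, and the penalty value are all invented.
It shows that a deterministic alpha/beta search, combined with a stored
penalty for a position reached in a lost game, picks a different root
move the next time:

```python
# Toy sketch of alpha/beta plus "position learning" (NOT Crafty's real
# code -- the game tree, scores, and penalty are invented for illustration).

# Hypothetical two-ply game tree: position -> list of (move, child).
TREE = {
    "root": [("a", "A"), ("b", "B")],
    "A": [("a1", "A1"), ("a2", "A2")],
    "B": [("b1", "B1"), ("b2", "B2")],
}
# Static evaluation of leaf positions, from the root side's point of view.
LEAF_SCORE = {"A1": 30, "A2": 10, "B1": 25, "B2": 5}

# Penalties remembered between games for positions that led to a loss.
learned_penalty = {}

def alphabeta(pos, depth, alpha, beta, maximizing):
    """Plain alpha/beta; learned penalties are folded into the leaf eval."""
    if depth == 0 or pos not in TREE:
        return LEAF_SCORE.get(pos, 0) + learned_penalty.get(pos, 0)
    if maximizing:
        best = -10**9
        for _, child in TREE[pos]:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:      # beta cutoff
                break
        return best
    best = 10**9
    for _, child in TREE[pos]:
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:          # alpha cutoff
            break
    return best

def pick_move(pos):
    """Return the root move whose subtree backs up the best score."""
    move, _ = max(TREE[pos],
                  key=lambda mc: alphabeta(mc[1], 2, -10**9, 10**9, False))
    return move

print(pick_move("root"))        # deterministic: same answer every run
learned_penalty["A2"] = -50     # suppose the game through A2 was lost
print(pick_move("root"))        # the same search now prefers a different move
```

Crafty's real position learning works on hash signatures of actual board
positions with tuned score adjustments, but the structure is the same
idea: the search is deterministic, yet the stored result of a lost game
changes the backed-up scores, so the losing line is not repeated.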



>
>It is easy to build a machine to do a specific task well.  I can't see
>that as intelligence however.  Just because the task is complicated
>doesn't mean that intelligence is involved (other than that of the
>programmer, of course).
>
> - Dan


Again, it depends on whether we think playing chess requires
intelligence or not...  If it does, then the machine must exhibit some
sort of "artificial intelligence" to play chess; otherwise chess really
doesn't need any intelligence at all...  something a lot of GMs would
hate to know, and something a lot of monkeys would be happy to
hear... :)


>
>P.S.  The more I read and re-read this post, the more I see just how
>thin the line is.  I'm not sure where the line should be drawn, but I
>think generality is important.  I just can't see the precise execution
>of well-defined commands as intelligence.  I must admit that algorithms
>which adjust their weights over time are getting closer, however.
>
>The problem here is that we each have a sense of what intelligence is,
>but no well-defined definition exists (other than Turing's, which I
>think you have corrupted by limiting it to a single task).

Yes...  but don't forget the key adjective, "artificial".  Artificial
means "something that may be different, but which acts similar to the
real thing."


