Computer Chess Club Archives


Subject: Re: ICCA Journal Sinks To A New Low

Author: Robert Hyatt

Date: 06:24:14 01/26/98


On January 26, 1998 at 08:57:31, Dan Homan wrote:

>On January 25, 1998 at 20:39:27, Robert Hyatt wrote:
>
>>On January 25, 1998 at 19:23:18, Amir Ban wrote:
>>
>>>On January 25, 1998 at 15:07:25, Komputer Korner wrote:
>>>
>>>[entire post snipped]
>>>
>>>Come on, Komputer. At least be annoyed for the right reasons. ICCAJ
>>>decided to bore us to death, and Mr. Korf seems to be the only person
>>>around who doesn't realize that any IBM statement on this is done for
>>>reasons of PR.
>>>
>>>I don't think any of your arguments are valid, but your conclusion (and
>>>IBM's) are correct. Computer chess is not artificial intelligence, for
>>>reasons that computer chess programmers will find obvious. In the
>>>beginning of the 1980's, Douglas Hofstadter claimed that a computer that
>>>plays at a master level would need to have intelligence, in the sense
>>>that to do so it would have to have, as a necessary by-product, general
>>>capabilities exceeding chess that are intuitively interpreted as
>>>intelligence. This simply happens not to be true (fortunately for us
>>>programmers), and Hofstadter has changed his mind.
>>>
>>
>>I totally disagree.  Every AI book I have in my office covers
>>alpha/beta, minimax, best-first, depth-first, etc.  So maybe the
>>brute-force type of search they use doesn't match what some would like
>>to have AI become, but that hardly means that chess programs are *not*
>>"AI".  I.e., my AI texts say that minimax is a valid AI search
>>algorithm...  therefore the program it is used in is an AI program.
>>
>>A common misconception is that a "real AI" program somehow has to "do it
>>like a human."  There is *no* such constraint in the world of AI.  Only
>>that the program must exhibit some measurable form of expertise in the
>>area under investigation.  The most common "test" has been the so-called
>>"Turing test"...  which does *not* measure "how" a program does what it
>>does, only that it does it in a way that is indistinguishable from a
>>human, when you only consider the final results (the moves played).
>>
>
>Alpha-Beta is just an algorithm... an equation really.  I don't see how
>this can be defined as intelligence.  Yes, it solves problems.  A
>calculator also solves problems, but no one would claim that it has
>intelligence.  Solving problems, IMHO, is only one aspect of
>intelligence.... there are many others that an algorithm like alpha-beta
>just doesn't have.  The most obvious one is the ability to learn from
>experience and extrapolate from that experience to novel situations.  My
>dog can do this, but an alpha-beta algorithm cannot.

Again, AI is designed to measure human-like results.  If the ability to
square any number instantly is what you want to test, and you consider
that something important, then yes, a calculator would be intelligent in
that domain.

Your argument has two basic flaws, however:

1.  Somehow the machine must "do it" like a human to be intelligent.
There's never been such a requirement, because no one knows how a human
plays chess yet.  And until we do, we aren't going to be able to prove
whether a human does or does not use something like alpha/beta.  I know
I do to some extent when playing chess.  But I don't know what else I
use in addition (pattern recognition, etc.).  Crafty certainly uses
alpha/beta, and it certainly does pattern recognition in the eval...

2.  Somehow the task must be "complicated".  This is also false.  I just
went through the first chapter in 5 AI books, from old to new.  None
mention "complexity" as a requirement.
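The alpha/beta search the books describe fits in a few lines.  This is a
toy negamax sketch over a hand-built tree of evaluation scores, purely
for illustration; a real engine like Crafty generates chess moves and
calls an evaluation function instead, in C rather than Python.

```python
# Negamax with alpha/beta pruning over a toy game tree.
# Leaves are static evaluations from the side-to-move's perspective;
# inner nodes are lists of child positions.
def alphabeta(node, alpha, beta):
    if isinstance(node, int):      # leaf: return the static evaluation
        return node
    best = -10**9
    for child in node:
        # Negamax: the opponent's best score, negated, is our score.
        score = -alphabeta(child, -beta, -alpha)
        if score > best:
            best = score
        if best > alpha:
            alpha = best
        if alpha >= beta:          # cutoff: opponent won't allow this line
            break
    return best

# A tiny 2-ply tree: three candidate moves, three replies each.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, -10**9, 10**9))   # prints 3
```

Note that the second and third branches are cut off after their first
reply is searched; that pruning, not the minimax rule itself, is what
makes deep full-width search practical.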

I'd be willing to bet that I can find two games played on ICC, one GM vs
GM/IM, and one computer vs GM/IM, and you couldn't identify which was a
computer and which wasn't, without using a computer.  What you'd most
likely find is that the human made a couple of obvious tactical mistakes
and the computer didn't.  But "perfection" or "imperfection" is not part
of the test.  If you cannot tell which is which, then for that game, the
machine emulated intelligence...  whether or not it can "learn" or
whatever.



>
>I like to think of intelligence as the ability to go beyond your
>'programming'.  I know this is a pretty vague definition and probably
>misses some important aspects of intelligence that others might point
>out, but it sums up my objection to alpha-beta being an example of
>intelligence.  Alpha-beta will do exactly what you tell it to do every
>single time (just like a calculator).

Hmmm... what about the book learning I do?  Or the "position learning"
where Crafty won't play the same losing move, whether it is in book or
not?  So it is "self-modifying" to a limited extent...
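The position-learning idea can be sketched simply: after a loss, record
a penalty against the position where things went wrong, and fold that
penalty into future evaluations so the search steers away from the same
move.  The names below (`learn_table`, the position keys) are
illustrative only, not Crafty's actual data structures.

```python
# Hypothetical sketch of result-driven position learning.
learn_table = {}   # position key -> accumulated score adjustment

def record_loss(position_key, penalty=100):
    # After a lost game, remember that this position led to the loss.
    learn_table[position_key] = learn_table.get(position_key, 0) - penalty

def adjusted_score(position_key, raw_eval):
    # At search time, fold any learned penalty into the evaluation.
    return raw_eval + learn_table.get(position_key, 0)

# First game: this position looked fine (+20), but the game was lost.
record_loss("after 12...Bxh2")
# Next game: the same position now scores badly, so the search
# prefers a different move, in book or out of it.
print(adjusted_score("after 12...Bxh2", 20))   # prints -80
```

The point is that the program's future behavior depends on its past
results, which is exactly the limited self-modification described above.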


>
>I'm not against the idea of computers having intelligence.  I think they
>have already captured many aspects of intelligence, but always in a
>limited way.  No computer chess program can pass the "Turing test"
>because it cannot tell me how it feels about the weather today or
>what it thinks about the Pope's visit to Cuba.  I think intelligence
>is a general property, not a specific one.  I don't expect computers
>to have a human-like intelligence (a.k.a. the "Turing test"), but I
>expect them to capture certain aspects like generality and the ability
>to extrapolate from experience to new situations.


No one would argue with your "limited" adjective.  But limited
intelligence is still intelligence.  That's the whole point here...


>
>
> - Dan




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.