Computer Chess Club Archives



Subject: Re: An interesting link

Author: Steven Edwards

Date: 18:31:17 03/29/04


On March 29, 2004 at 06:17:15, Sune Fischer wrote:
>On March 29, 2004 at 05:48:37, Steven Edwards wrote:
>
>>See: http://mitpress.mit.edu/e-books/Hal/chap5/five1.html
>>
>>Any comments on the second paragraph?
>
>You mean this piece:
>
>"The question of whether HAL's chess ability demonstrates intelligence boils
>down to a question of how HAL plays chess. If, on the one hand, HAL plays chess
>in the "human style" -- employing explicit reasoning about move choices and
>large amounts of chess knowledge -- the computer can be said to demonstrate some
>aspects of intelligence. If, on the other hand, HAL plays chess in the computer
>style -- that is, if HAL uses his computational power to carry out brute-force
>searches through millions or billions of possible alternatives, using relatively
>little knowledge or reasoning capabilities -- then HAL's chess play is not a
>sign of intelligence. "
>
>Very vague IMO,

>What is "human style"?

The occasional blunder and the occasional brilliancy: moves of a kind rarely,
if ever, seen coming from a traditional A/B (alpha-beta) searcher.

>What is "explicit reasoning"?

A deductive or inductive processing mechanism that can be clearly identified in
the program.  One could point to it in the source and identify the axioms,
inference rules, hypothesis generation, proofs, etc.

>What is "large amounts of chess knowledge"?

1. Some of the stuff stored in a GM's head.
2. Program source equivalent of the above abstracted and translated from chess
texts.
3. Processing that is too complex to be applied to each of millions of nodes in
reasonable time.

>What is "some aspects of intelligence"?

Passing the Turing Test, not just for chess move output, but also for explaining
the reasoning behind the move selection.  Optionally, also passing a Turing Test
on automated knowledge acquisition.

>You can claim this to be true (or not) for current programs, depending on how
>you interpret it.

I claim that any honest interpretation shows that a traditional A/B searcher has
none of the above.
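
To make concrete what a "traditional A/B searcher" actually does, here is a
minimal negamax alpha-beta sketch over a toy game tree (the tree, leaf values,
and function names are illustrative assumptions, not real chess code). The
point of the sketch is that move selection falls out of bounded numeric
comparisons alone; there are no axioms, inference rules, or explanations
anywhere in it to point a finger at.

```python
# Minimal negamax alpha-beta sketch over a toy game tree (not real chess).
# Leaf values are scored from the perspective of the side to move at that leaf.

TREE = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
LEAF_VALUES = {"a1": 3, "a2": -1, "b1": 5, "b2": -7}

def alphabeta(node, alpha, beta):
    """Return the negamax value of `node` within the (alpha, beta) window."""
    if node in LEAF_VALUES:
        return LEAF_VALUES[node]
    best = -float("inf")
    for child in TREE[node]:
        score = -alphabeta(child, -beta, -alpha)  # flip sign and window
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # opponent would avoid this line: prune the rest
            break
    return best

def best_move(node):
    """Pick the child with the highest negamax score for the side to move."""
    return max(TREE[node],
               key=lambda c: -alphabeta(c, -float("inf"), float("inf")))

print(best_move("root"))  # prints "a": -(-1) beats -(7) at the root
```

Everything above is arithmetic and comparison; the "knowledge" a real engine
adds lives only in the leaf evaluation, applied to each of millions of nodes.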

>The thing is, if you write X lines of code and the program does what those X
>lines of code tell it to do, it is still just a dumb machine!

This is just the old reductionist argument, heard too many times already.

>Whether the code does pattern matching or something else is insignificant, IMO.
>
>The day the machine does something you _haven't_ taught it, that's the day it
>starts to look alive.
>In some way, a tree search can make the program do just that, it can see things
>that are not "explicitly" programmed! :)

What if someone were able to produce a program capable of a complete and
accurate simulation of a human brain?

If you think this is forever impossible, then please state a proof.

If you think it is possible (someday), then how is it that *your* mind can be
capable of seeing things that aren't explicitly programmed?

For that matter, how can you be sure that you are not such a program simulation
yourself?




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.