Computer Chess Club Archives

Subject: Re: To Robert Hyatt, Dan Corbit, Christophe Theron, And Other Experts.

Author: Matthew Hull

Date: 12:52:18 08/07/02




>I didn't say I found it "wanting".  I simply said that there is no _proof_
>for some of the claims made in the book.  That is what leaves me "cold" in
>the discussion at hand...  Until someone definitively explains how the human
>brain does what it does, it is impossible to compare what we think it does
>to what the computer really does.  IE we are just now discovering how "simple"
>man really is, as the genome project unveils more and more myths and tosses
>them out...

I think I understand your objections much better now after reading the following
excerpt from The Times Literary Supplement, September 29-October 5, 1989 (a
review of Penrose's The Emperor's New Mind).

(start of quote)
... The argument Penrose unfolds has more facets than my summary can report, and
it is unlikely that such an enterprise would succumb to a single, crashing
oversight on the part of its creator--that the argument could be "refuted" by
any simple objection. So I am reluctant to credit my observation that Penrose
seems to make a fairly elementary error right at the beginning, and at any rate
fails to notice or rebut what seems to me to be an obvious objection. Recall
that the burden of the first part of the book is to establish that minds are not
"algorithmic"--that there is something special that minds can do that cannot be
done by any algorithm (i.e., computer program in the standard, Turing-machine
sense). What minds can do, Penrose claims, is see or judge that certain
mathematical propositions are true by "insight" rather than mechanical proof.
And Penrose then goes to some length to argue that there could be no algorithm,
or at any rate no practical algorithm, for insight.

But this ignores a possibility--an independently plausible possibility--that can
be made obvious by a parallel argument. Chess is a finite game (since there are
rules for terminating go-nowhere games as draws), so in principle there is an
algorithm for either checkmate or a draw, one that follows the brute force
procedure of tracing out the immense but finite decision tree for all possible
games. This is surely not a practical algorithm, since the tree's branches
outnumber the atoms in the universe. Probably there is no practical algorithm
for checkmate. And yet programs--algorithms--that achieve checkmate with very
impressive reliability in very short periods of time are abundant. The best of
them will achieve checkmate almost always against almost any opponent, and the
"almost" is sinking fast. You could safely bet your life, for instance, that the
best of these programs would always beat me. But still there is no logical
guarantee that the program will achieve checkmate, for it is not an algorithm
for checkmate, but only an algorithm for playing legal chess--one of the many
varieties of legal chess that does well in the most demanding environments. The
following argument, then, is simply fallacious:

(1) X is superbly capable of achieving checkmate.

(2) There is no (practical) algorithm guaranteed to achieve checkmate.

therefore

(3) X does not owe its power to achieve checkmate to an algorithm.

So even if mathematicians are superb recognizers of mathematical truth, and even
if there is no algorithm, practical or otherwise, for recognizing mathematical
truth, it does not follow that the power of mathematicians to recognize
mathematical truth is not entirely explicable in terms of their brains executing
an algorithm. Not an algorithm for intuiting mathematical truth--we can suppose
that Penrose has proved that there could be no such thing. What would the
algorithm be for, then? Most plausibly it would be an algorithm--one of very
many--for trying to stay alive, an algorithm that, by an extraordinarily
convoluted and indirect generation of byproducts, "happened" to be a superb (but
not foolproof) recognizer of friends, enemies, food, shelter, harbingers of
spring, good arguments--and mathematical truths!

Chess programs, like all "heuristic" algorithms, are designed to take chances,
to consider less than all the possibilities, and therein lies their
vulnerability-in-principle. There are many ways of taking chances, utilizing
randomness (or just chaos or pseudo-randomness), and the process can be vastly
sped up by looking at many possibilities (and taking many chances) at once, "in
parallel". What are the limits on the robustness of vulnerable-in-principle
probabilistic algorithms running on a highly parallel architecture such as the
human brain? Penrose neglects to provide any argument to show what those limits
are, and this is surprising, since this is where most of the attention is
focussed in artificial intelligence today. Note that it is not a question of
what the in-principle limits of algorithms are; those are simply irrelevant in a
biological setting. To put it provocatively, an algorithm may "happen" to
achieve something it cannot be advertised as achieving, and it may "happen" to
achieve this 999 times out of a thousand, in jig time. This prowess would fall
outside its official limits (since you cannot prove, mathematically, that it
will not run forever without an answer), but it would be prowess you could bet
your life on. Mother Nature's creatures do it every day.

(end quote)
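
To make the brute-force half of that argument concrete: for a game small enough,
the "trace out the whole tree" algorithm really can be written down and run. Here
is a minimal Python sketch of my own (not from the review, and not how any real
chess program works) that exhaustively solves tic-tac-toe. The identical recursion
applied to chess is the reviewer's "in principle" algorithm; it fails only because
the chess tree is astronomically larger.

# Exhaustive game-tree search (negamax) for tic-tac-toe.
# Because it visits every legal continuation, its answer is guaranteed.
# The same recursion on chess would also be guaranteed, but the tree is
# far too large for it ever to finish.

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_value(board, player):
    # Exact value of the position for the side to move: +1 win, 0 draw, -1 loss.
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if '.' not in board:
        return 0                                         # board full: draw
    other = 'O' if player == 'X' else 'X'
    best = -2
    for i, sq in enumerate(board):
        if sq == '.':
            child = board[:i] + player + board[i+1:]
            best = max(best, -best_value(child, other))  # search the whole subtree
    return best

print(best_value('.' * 9, 'X'))   # prints 0: perfect play from the start is a draw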
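
A real program cannot afford that, which is where the "heuristic" half of the
argument comes in: stop the search at a fixed depth and guess the value of the
position with an evaluation function. The sketch below (again my own illustration,
with a deliberately crude evaluation; real engines add alpha-beta pruning,
quiescence search, and far richer evaluation) shows the "take a chance" step.
Nothing guarantees its answer, and that is exactly the vulnerability-in-principle
the review is talking about.

# Depth-limited heuristic search for the same game.
# winner() is the same helper as in the previous sketch, repeated here so
# this snippet runs on its own.

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def evaluate(board, player):
    # Crude guess: lines still open for me minus lines still open for the
    # opponent, scaled so a guess can never outrank a proven win (+1) or loss (-1).
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    other = 'O' if player == 'X' else 'X'
    open_mine   = sum(1 for a, b, c in lines if other  not in (board[a], board[b], board[c]))
    open_theirs = sum(1 for a, b, c in lines if player not in (board[a], board[b], board[c]))
    return (open_mine - open_theirs) / 10.0

def search(board, player, depth):
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if '.' not in board:
        return 0
    if depth == 0:
        return evaluate(board, player)                   # the "take a chance" step
    other = 'O' if player == 'X' else 'X'
    best = -2
    for i, sq in enumerate(board):
        if sq == '.':
            child = board[:i] + player + board[i+1:]
            best = max(best, -search(child, other, depth - 1))
    return best

print(search('.' * 9, 'X', 2))   # fast, usually sensible, but only a guess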

Regards,
Matt


