Computer Chess Club Archives



Subject: Re: The Simulation of Expertise: Deeper Blue and the Riddle of Cognition

Author: Dan Ellwein

Date: 13:58:49 03/27/00



On March 27, 2000 at 15:35:13, José Antônio Fabiano Mendes wrote:

>http://arn.org/docs/odesign/od191/deeperblue191.htm     JAFM

Good article, José...

here are some excerpts from the article that I thought were particularly
interesting...

"Deeper Blue is running on a machine capable of evaluating 200 million nodes per
second.

A top grandmaster, at a very generous estimate, can visualize and evaluate
perhaps as many as a hundred different possibilities in a minute of concentrated
thought.

This is a speed difference of eight orders of magnitude, greater than the
relative speed gap between the most advanced tactical fighter jet and the
average inchworm.
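
A quick back-of-the-envelope check of the excerpt's arithmetic, using the article's own figures (a sketch in Python):

```python
# Rough check of the "eight orders of magnitude" claim, using the
# figures given in the article: Deeper Blue at ~200 million nodes per
# second versus a grandmaster at ~100 positions per minute.
import math

machine_nodes_per_sec = 200_000_000
human_positions_per_sec = 100 / 60  # roughly 1.67 per second

ratio = machine_nodes_per_sec / human_positions_per_sec
print(f"speed ratio: {ratio:.3g}")                       # about 1.2e+08
print(f"orders of magnitude: {math.log10(ratio):.1f}")   # about 8.1
```

So the claim checks out: the gap is indeed about eight orders of magnitude.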

Clearly, something is going on in the human grandmaster’s mind that is not only
radically different from what Deeper Blue’s program does, but also inconceivably
more efficient.

In view of the incredible complexity of chess and the limited speed of the human
mind, it is a kind of computational miracle that humans can play chess at all.
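
The "incredible complexity" can be made concrete with a Shannon-style estimate (a sketch using the commonly cited figures of roughly 35 legal moves per position and about 40 moves per side in a typical game):

```python
# Shannon-style estimate of the chess game-tree size, using the commonly
# cited average branching factor of ~35 legal moves per position.
branching_factor = 35
plies = 80  # roughly 40 moves by each side

game_tree_size = branching_factor ** plies
print(f"~10^{len(str(game_tree_size)) - 1} possible game continuations")
```

Even at 200 million nodes per second, a machine could not enumerate a tree of that size in the lifetime of the universe, which is why search depth, not exhaustion, is the real resource.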


The clear implication, backed up by de Groot’s reports of the grandmasters’
verbal protocols, is that human chess masters immediately dismiss as irrelevant
almost all of the possible moves for both sides in a given position, focussing
only on a few alternatives at each ply.

But exactly how the grandmasters determine relevance remains a riddle.


At one time, when the computational barrier presented by full-width searches
seemed insuperable, programmers did try a selective search approach modeled (or
more accurately, intended to be modeled) on human methods of play.

But the resulting programs were so prone to oversights in complex positions that
they were consistently defeated by programs designed to do a full-width search.

For decades now the selective search method has been abandoned.
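
The structural contrast the excerpt describes can be sketched in a few lines. The "moves" and their scores below are a hypothetical toy, not a real position: each move carries a static hint (what a quick glance suggests) and a true outcome (what deeper search would reveal):

```python
# Hypothetical toy illustrating why early selective-search programs lost
# to full-width programs: each move is (static_hint, true_outcome).
moves = [
    (+5, 5),   # looks good, is good
    (+3, 3),   # looks decent, is decent
    (-1, 9),   # looks quiet -- but hides a winning combination
]

def full_width(moves):
    # Examines every move, so the hidden tactic is always found.
    return max(outcome for _, outcome in moves)

def selective(moves, k=2):
    # Keeps only the k most promising-looking moves, as the early
    # selective programs did; the quiet move is discarded before it
    # is ever searched deeply.
    shortlist = sorted(moves, key=lambda m: m[0], reverse=True)[:k]
    return max(outcome for _, outcome in shortlist)

print(full_width(moves))  # finds the combination: 9
print(selective(moves))   # overlooks it: 5
```

The grandmaster's trick, as the article stresses, is that his "heuristic" for the shortlist almost never discards the quiet move that matters; nobody has managed to write that heuristic down.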


The failure of selective search programs to produce decent chess play becomes
even more puzzling when one considers that human beginners, properly taught, can
rapidly learn to play better chess.

It is no particular secret how this is done: the beginners are shown various
combinations and motifs, then given the opportunity to apply them in exercise
positions.

With sufficient practice, they are able to produce similar combinations in their
own games.

The difficulty programmers have in emulating this learning process comes at the
level of characterizing the various ideas that the beginners are being asked to
absorb.

Talented human students do not require a full abstract description of a type of
situation in which a given motif might be useful; they rapidly see what is
pertinent in the examples and, presented with exercises, detect relevant
similarities without further prompting.

Just where programmers would like to find clues, pedagogic practice defers to
the human mind and leaves out the intermediate steps.


The IBM team found human grandmasters sufficiently articulate that they were
able to use their input to extend and refine Deeper Blue’s evaluation
parameters.

But that refinement, as we have already seen, comes nowhere near to the level
necessary to enable a program to perform well with a selective search.


Where such long-range planning is possible, a master may keep his sights fixed
on one set of goals for many moves, with the consequence that he does not have
to start from scratch in the assessment of a position at every turn.

By contrast, a computer essentially starts over at each move, its search not
guided by any overarching ideas.

It is chiefly by this characteristic -- the readiness of the program to abandon
the strategically indicated paths in the dubious pursuit of material gains --
that computers can be distinguished from human beings in blind tests.

The importance of the human capacity to sort out patterns, detect relevance, and
apply learned knowledge is widely recognized...

In construing chess as a computational problem, computer scientists overlooked
the extent to which mastery in chess, like expertise in more open-ended
pursuits, requires that elusive quality known as intuition.

Recognizing a grandmaster’s chess abilities as something extraordinary,
outsiders are likely to impute to him improbable computational powers and to
construe a man-versus-machine chess match as a contest between human and
electronic computers.

The truth, stranger than this popular fiction, is that chess grandmasters do not
work like computers at all and their thought processes have thus far resisted
computational simulation.


...currently programmers have no idea how to enable the machine to select the
relevant features of the position or to form and follow plans.

Barring a conceptual breakthrough in this direction, computer chess is and will
remain detectably inhuman.


More importantly, we can see why Deeper Blue fails the Turing test, why computer
simulation doesn’t give us performance with a human feel to it.

The only way for programs to achieve high performance levels is to exploit the
remarkable speed of computers in a finessed, alpha-beta pruned version of the
brute-force search.
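
For readers unfamiliar with the technique the article names, here is a minimal sketch of alpha-beta pruned full-width search over a toy game tree (inner nodes are lists of children, leaves are static evaluation scores; the tree itself is made up):

```python
# Minimal alpha-beta search over a toy tree: inner nodes are lists,
# leaves are static evaluation scores. Full-width in the sense that
# every child is generated; alpha-beta only skips lines that provably
# cannot change the result.
def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):
        return node  # leaf: static evaluation
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the opponent will avoid this line
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value

tree = [[3, 5], [6, [9, 2]], [1, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 6
```

Note that the pruning here is purely logical: no move is dismissed as "irrelevant" the way a grandmaster dismisses it, only as provably unable to affect the minimax value.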

Perforce, this leads to an inhuman style of play.

But the more modest goal of emulating at least the problem-solving capacities of
the human brain still draws the artificial intelligence community like a vision
of the Holy Grail.

On that front, the computer chess saga is indeed a lesson in humility.

Deeper Blue’s performance does not advance our understanding of human
cognition."


regards -

PilgrimDan



