Computer Chess Club Archives


Subject: Somewhat O.T. Artificial Intelligence, Go and Computers.

Author: Terry McCracken

Date: 14:26:21 08/04/02


http://www.nytimes.com/2002/08/01/technology/circuits/01GONE.html

To read the article from the NYT you need to sign up for a free subscription.

Terry


In an Ancient Game, Computing's Future
By KATIE HAFNER


Early in the film "A Beautiful Mind," the mathematician John Nash is seen sitting
in a Princeton courtyard, hunched over a playing board covered with small black
and white pieces that look like pebbles. He is playing Go, an ancient Asian
game. Frustration at losing that game inspired the real Mr. Nash to pursue the
mathematics of game theory, research for which he eventually won a Nobel Prize.


In recent years, computer experts, particularly those specializing in artificial
intelligence, have felt the same fascination — and frustration.

Programming other board games has been a relative snap. Even chess has succumbed
to the power of the processor. Five years ago, a chess-playing computer called
Deep Blue not only beat but thoroughly humbled Garry Kasparov, the world
champion at the time. That is because chess, while highly complex, can be
reduced to a matter of brute force computation.

Go is different. Deceptively easy to learn, whether for a computer or a human, it
is a game of such depth and complexity that it can take years for a person to
become a strong player. To date, no computer has been able to achieve a skill
level beyond that of the casual player.

The game is played on a board divided into a grid of 19 horizontal and 19
vertical lines. Black and white pieces called stones are placed one at a time on
the grid's intersections. The object is to acquire and defend territory by
surrounding it with stones.
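As a rough illustration of that description, the board can be modeled as a
19-by-19 grid of intersections, each one empty or holding a black or white
stone. The Python sketch below is purely illustrative; the class and its names
are invented for this example and come from no actual Go program.

# A minimal sketch of the 19x19 board the article describes: stones are
# placed on intersections, identified here by (row, column) pairs.
# All names are illustrative, not taken from any real Go program.

EMPTY, BLACK, WHITE = 0, 1, 2
SIZE = 19

class Board:
    def __init__(self):
        # One cell per intersection of the 19x19 grid of lines.
        self.grid = [[EMPTY] * SIZE for _ in range(SIZE)]

    def place(self, color, row, col):
        if self.grid[row][col] != EMPTY:
            raise ValueError("intersection already occupied")
        self.grid[row][col] = color

    def neighbors(self, row, col):
        # Orthogonally adjacent intersections; these are what matter when
        # deciding whether a stone or group has been surrounded.
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            r, c = row + dr, col + dc
            if 0 <= r < SIZE and 0 <= c < SIZE:
                yield r, c

board = Board()
board.place(BLACK, 3, 3)    # stones go on intersections, one per move
board.place(WHITE, 15, 15)
print(list(board.neighbors(0, 0)))   # corner points have only two neighbors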

Programmers working on Go see it as more accurate than chess in reflecting the
ineffable ways in which the human mind works. The challenge of programming a
computer to mimic that process goes to the core of artificial intelligence,
which involves the study of learning and decision-making, strategic thinking,
knowledge representation, pattern recognition and, perhaps most intriguingly,
intuition.

"A good Go player could make a move and other players say, `Yes, that's a good
move,' but they can't explain to you why it's a good move, or how they even know
it's a good move," said Dr. John McCarthy, a professor emeritus at Stanford
University and a pioneer in artificial intelligence.

Dr. Danny Hillis, a computer designer and chairman of the technology company
Applied Minds, said that the depth of Go made it ripe for the kind of scientific
progress that comes from studying one example in great detail. "We want the
equivalent of a fruit fly to study," Dr. Hillis said. "Chess was the fruit fly
for studying logic. Go may be the fruit fly for studying intuition."

Along with intuition, pattern recognition is a large part of the game. While
computers are good at crunching numbers, people are naturally good at matching
patterns. Humans can recognize an acquaintance at a glance, even from the back.
"Every Go book is filled with advice on patterns of different kinds," Dr.
McCarthy said.

Dr. Daniel Bump, a mathematics professor at Stanford, works on a program called
GNU Go in his spare time. "You can very quickly look at a chess game and see if
there's some major issue," he said. But to make a decision in Go, he said,
players must learn to combine their pattern-matching abilities with the logic
and knowledge they have accrued in years of playing.

"If you watch really strong players," Dr. Bump said, "some seem to make fairly
mundane moves, but at the end of the game they're ahead. Others do spectacular
things."

One measure of the challenge the game poses is the performance of Go computer
programs. The last five years have yielded incremental improvements but no
breakthroughs, said David Fotland, a programmer and chip designer in San Jose,
Calif., who created and sells The Many Faces of Go, one of the few commercial Go
programs.

Mr. Fotland's program was the winner of a tournament last weekend in Edmonton,
Alberta, that pitted 14 Go-playing programs — including several from Japan —
against one another. But even The Many Faces of Go is weak enough that most
strong players could beat it handily.

Part of the challenge has to do with processing speed. The typical chess program
can evaluate about 300,000 positions per second, and Deep Blue was able to
evaluate some 200 million positions per second. By midgame, most Go programs can
evaluate only a couple of dozen positions each second, said Anders Kierulf, who
wrote a program called SmartGo.

In the course of a chess game, a player has an average of 25 to 35 moves
available. In Go, on the other hand, a player can choose from an average of 240
moves. A Go-playing computer would take about 30,000 years to look as far ahead
as Deep Blue can with chess in three seconds, said Michael Reiss, a computer
scientist in London.
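Those figures can be combined into a rough back-of-the-envelope check. The
Python sketch below uses only the numbers quoted above (a branching factor of
roughly 35 for chess and 240 for Go, 200 million evaluations per second for
Deep Blue, a couple of dozen per second for a Go program); treat it as an
order-of-magnitude illustration, not an exact analysis.

import math

# Figures quoted in the article; everything else is a back-of-the-envelope
# illustration of the scale of the gap, not a real complexity analysis.
chess_branching = 35          # roughly 25 to 35 legal moves per chess position
go_branching = 240            # roughly 240 legal moves per Go position
deep_blue_rate = 200_000_000  # positions per second (Deep Blue)
go_rate = 25                  # "a couple of dozen" positions per second

# Depth Deep Blue reaches in three seconds if the game tree grows by a
# factor of about 35 per move.
positions_in_3s = deep_blue_rate * 3
depth = math.log(positions_in_3s) / math.log(chess_branching)

# Positions a Go tree of the same depth would contain, and how long a
# Go program evaluating ~25 positions per second would need to visit them.
go_positions = go_branching ** depth
seconds = go_positions / go_rate
years = seconds / (3600 * 24 * 365)

print(f"depth ~{depth:.1f} moves, ~{years:,.0f} years")
# Prints a figure in the tens of thousands of years, consistent with the
# roughly 30,000-year estimate attributed to Michael Reiss.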

If processing power were all there was to it, the solution would be simply a
matter of time, since computers are growing ever faster. But the obstacles go
much deeper. Not only do Go programs have trouble evaluating positions quickly,
they have trouble evaluating them correctly.

Nonetheless, the allure of computer Go increases as the difficulties it poses
encourage programmers to advance basic work in artificial intelligence. Graduate
students produce dissertations on the topic, and a handful of researchers around
the world devote much or all of their attention to it.

The game attracts people from all fields. For example, Chen Zhixing, a retired
chemistry professor in Guangzhou, China, wrote a program called Handtalk, which
dominated the computer Go field for several years. Dr. Bump, 50, whose field is
number theory, has been playing Go for 35 years and taught himself the C
programming language four years ago so he could write Go software. Mr. Fotland,
44, the creator of The Many Faces of Go, has been working on computer Go for 20
years and is chief technology officer at Ubicom, a small semiconductor company
in Silicon Valley.

All are very strong Go players, and it takes a strong Go player to write even a
weak Go program. Mr. Fotland, for instance, said he had written programs for
checkers, Othello and chess. The algorithms are all very similar, and it is not
difficult to write a reasonably strong program, he said. Each of the games took
him a year or two to finish. "But when I started on Go," he said, "there was no
end to it."

Mr. Fotland said that his Go programming was especially weak when he was a
beginning player. "A lot of the stuff I wrote was just plain wrong because I
didn't understand the game well enough," he said.

Even when skill develops, however, translating it into a program is not an
obvious task. "There's a certain stream of consciousness when you're looking at
positions," Dr. Bump said. "You might look at 10 variations, but you don't
really know what's going on in the back of your mind. Even a strong player
doesn't know how his mind works when he looks at a position."

"We think we have the basics of what we do as humans down pat," Dr. Bump said.
"We get up in the morning and make breakfast, but if you tried to program a
computer to do that, you'd quickly find that what's simple to you is incredibly
difficult for a computer."

The same is true for Go. "When you're deciding what variations to consider, your
subconscious mind is pruning," he said. "It's hard to say how much is going on
in your mind to accomplish this pruning, but in a position on the board where
I'd look at 10 variations, the computer has to look at thousands, maybe a
million positions to come to the same conclusions, or to wrong conclusions."
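The mechanical counterpart of that subconscious pruning is, in the textbook
case, alpha-beta search, which cuts off branches of the game tree that cannot
change the final choice. The sketch below is a generic version of that idea,
not the method of any program mentioned in the article.

# Generic alpha-beta search: branches that cannot affect the final decision
# are cut off ("pruned") without being explored. Purely illustrative.

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply, evaluate):
    """moves(state) -> legal moves, apply(state, m) -> next state,
    evaluate(state) -> score from the maximizing player's point of view."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for m in legal:
            best = max(best, alphabeta(apply(state, m), depth - 1,
                                       alpha, beta, False, moves, apply, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:   # remaining siblings cannot matter: prune
                break
        return best
    best = float("inf")
    for m in legal:
        best = min(best, alphabeta(apply(state, m), depth - 1,
                                   alpha, beta, True, moves, apply, evaluate))
        beta = min(beta, best)
        if alpha >= beta:       # prune
            break
    return best

# Toy usage: the "game" state is a number, each move adds 1, 2 or 3, and the
# score is the number itself, searched two moves deep.
score = alphabeta(0, 2, float("-inf"), float("inf"), True,
                  moves=lambda s: [1, 2, 3],
                  apply=lambda s, m: s + m,
                  evaluate=lambda s: s)
print(score)  # 4: the maximizer adds 3, then the minimizer adds 1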

Dr. Reiss, who is the author of Go4++, a previous champion that placed second in
last weekend's playoff, agrees with Dr. Bump. Dr. Reiss, who is an expert in
neural networks, compares a human being's ability to recognize a strong or weak
position in Go with the ability to distinguish between an image of a chair and
one of a bicycle. Both tasks, he said, are hugely difficult for a computer.

For that reason, Mr. Fotland said, "writing a strong Go program will teach us
more about making computers think like people than writing a strong chess
program."

Dr. Reiss, who works on Go full time, said he would not think of devoting his
time to any other problem. "It's a fundamentally interesting problem, but also
it's just the right level of difficulty," he said. "If it was too easy it would
have been solved already. If it was fantastically difficult, people might give
up in frustration."

"I think in the long run the only way to write a strong Go program is to have it
learn from its own mistakes, which is classic A.I., and no one knows how to do
that yet," Mr. Fotland said. A few programs have some learning capabilities
built into them.

Mr. Fotland's program, for instance, refers to a database of games played by
strong players in deciding its moves, and Dr. Reiss's program employs a learning
scheme for deciding which moves are interesting to look at.

Dr. Reiss said he had come up with an idea for a new Go program that would learn
by analyzing professional games. But to pursue his idea would require too much
work, he said, depriving him of time to continue making updates to his current
program.

It seems unlikely that a computer will be programmed to drub a strong human
player any time soon, Dr. Reiss said. "But it's possible to make an interesting
amount of progress, and the problem stays interesting," he said. "I imagine it
will be a juicy problem that people talk about for many decades to come."







