Computer Chess Club Archives


Subject: Re: Wanted: Deep Blue vs. today's top programs recap

Author: Tom Kerrigan

Date: 01:14:33 08/27/01

There are some issues here that have not received due attention.

First, [as most of you already know,] part of DB's search algorithms and all of
DB's evaluation function algorithms were implemented in custom VLSI chips. This
made it phenomenally fast and also means that it can't exist as a PC program
(because you don't have the chips). However, PCs have general purpose
processors, which means they can run any algorithm you can think of, so the idea
of running DB on a PC isn't quite as stupid as most people seem to think, if
you're talking about the algorithms. There are two issues at play when
discussing implementing DB as PC software:

1) Work involved. Speaking from experience, the time-consuming part of writing
an evaluation function is not the actual coding, but instead deciding which
terms to include and what their weights should be. If you already know _exactly_
what an evaluation function is supposed to do (and the DB team does), I bet
implementing even the most complicated one would only take a couple of weeks.
Moreover, I believe that most, if not all, of DB's evaluation function already
exists as software. It takes months, if not years, to design a chip of this
sophistication, and more months and millions of dollars to get it manufactured.
It's absurd to think that anybody would put this much effort and money into a
project without first experimenting with the algorithms to see if they were any
good. Additionally, it has been reported that the evaluation function was being
tuned by the DB team long before they had any of their chips manufactured.
Exactly what were they tuning, then, if not a software implementation?
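
To make the terms-and-weights point concrete, here's a minimal sketch in C of how
a typical term-based evaluation function is structured. This is not anything from
DB; the struct, the terms, and the weights are all made up for illustration. The
point is that the coding is the easy part; the numbers are what take the time.

#include <stdio.h>

/* Hypothetical, simplified position summary; a real engine would
   derive these counts from its board representation. */
typedef struct {
    int material;      /* material balance, in hundredths of a pawn     */
    int mobility;      /* (our legal moves) - (their legal moves)       */
    int king_danger;   /* enemy pieces attacking squares near our king  */
} PositionFeatures;

/* Example weights; in practice, picking and tuning numbers like these
   is what takes months of testing. */
#define W_MOBILITY     2
#define W_KING_DANGER  8

/* Score from the side to move's point of view, in centipawns. */
int evaluate(const PositionFeatures *p)
{
    int score = p->material;
    score += W_MOBILITY    * p->mobility;
    score -= W_KING_DANGER * p->king_danger;
    return score;
}

int main(void)
{
    /* Up a pawn, a bit more mobile, slightly exposed king. */
    PositionFeatures p = { 100, 5, 2 };
    printf("eval = %d\n", evaluate(&p));   /* prints eval = 94 */
    return 0;
}

Adding another term to something like this is an afternoon's work; knowing whether
it actually helps is the part that takes forever.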

2) Speed. Some would have you believe that DB's evaluation function was
infinitely sophisticated and had every conceivable term known to man implemented
in it, so many terms that they literally didn't know what to do with all of them
(i.e., a lot were zeroed out). This immense complexity is supposedly made
possible by the custom hardware, and any software implementation would run so
slowly as to be worthless. Well, I don't believe this. It takes effort to
implement an evaluation function term, and it takes much more effort to
implement one in circuitry. Common sense says that the DB team only implemented
terms they thought would be useful, and not every single one they could think
of. Moreover, nobody can think of terms (in DB or otherwise) that are so
computationally intensive that they would cripple a PC program. MChess's
evaluation function is rumored to be phenomenally complex, and it can still
search thousands of nodes per second and compete with the best. The only example
of an awe-inspiring DB evaluation term that Hyatt can think of is that DB
assigns a bonus to a rook on a file that looks like it might open up. That could
really mean anything, and several commercial programmers say they already have
terms that do this.
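
For what it's worth, here's one plausible reading of that "file that might open"
bonus, the kind of thing commercial programs already do. This is entirely my
guess, not DB's actual term; the bonuses and the function interface are made up.

#include <stdio.h>

/* Illustrative bonuses, in centipawns. */
#define BONUS_OPEN_FILE    25
#define BONUS_HALF_OPEN    12
#define BONUS_MIGHT_OPEN   18

/* own_pawns / enemy_pawns: pawn counts on the rook's file.
   enemy_pawn_is_target: nonzero if the lone enemy pawn on the file can
   plausibly be traded or won, i.e. the file "looks like it might open up". */
int rook_file_bonus(int own_pawns, int enemy_pawns, int enemy_pawn_is_target)
{
    if (own_pawns > 0)
        return 0;                  /* our own pawn blocks the file        */
    if (enemy_pawns == 0)
        return BONUS_OPEN_FILE;    /* file is already open                */
    if (enemy_pawns == 1 && enemy_pawn_is_target)
        return BONUS_MIGHT_OPEN;   /* half-open and likely to open fully  */
    return BONUS_HALF_OPEN;        /* plain half-open file                */
}

int main(void)
{
    printf("%d\n", rook_file_bonus(0, 1, 1));   /* prints 18 */
    return 0;
}

A term like this costs a handful of operations per rook. It is not going to
cripple a PC program.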

In other words, I don't think it would be especially difficult to run DB's
algorithms on a PC. (It would be extremely interesting, too.) There are a few
possible reasons why this hasn't happened yet:

a) Laziness or lack of interest. Maybe the people on the team have moved on to other
things or just don't feel like doing the work.

b) Legal issues. IBM might legally own the algorithms and not want to hand them
over for this purpose.

c) Reputation. Maybe the evaluation function really sucks as much as some people
think, and releasing a program based on it would make DB the laughingstock of
the computer chess community. Even if it's on par with that of the better
commercial programs, some people will be disappointed that it isn't the
revolutionary leap they expected. (There has been a TON of propaganda raising
people's expectations.)

Personally, I think the real reason is a combination of these three.

The next major issue about DB that's been overlooked is its search algorithm.
People know that it searched 200M nodes per second (NPS) while Deep Fritz only
searches 5M NPS. Well, that sounds amazing, but it's pretty meaningless to compare
NPS rates unless you're talking about the same program. MChess used to beat up on
Fritz pretty badly while searching a small fraction of the nodes. The Rebel vs.
Crafty-at-an-absurdly-long-time-control matches show the same thing: a program
with a better evaluation function or search can search fewer nodes than the
competition and still win. Most of the "information" we have about DB's search
is speculation. We "know" they didn't use null-move [all the time]. We "know"
they didn't do forward pruning [all the time]. We "know" they used singular
extensions [some of the time]. We actually know for a fact that their search was
massively parallel, meaning their "effective" NPS rates were much lower than
200M. Maybe their search was phenomenal, but it's equally possible that their
search was crap and wasted 195M of those nodes, making it effectively as fast as
Deep Fritz. Nobody has a clue, and if somebody claims to have a clue, well,
inspect their sources with a fine-tooth comb.
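
For readers who haven't seen it, this is roughly what the null-move idea that the
speculation is about looks like inside a plain negamax search. It's a generic
textbook sketch, not DB's search; the Board type and the helper functions are
hypothetical and would come from the rest of an engine.

#define NULL_MOVE_R 2   /* standard depth reduction for the null-move search */

/* Hypothetical engine interface, supplied elsewhere. */
typedef struct Board Board;
typedef int Move;
int  evaluate(const Board *b);
int  in_check(const Board *b);
int  has_non_pawn_material(const Board *b);
int  generate_moves(Board *b, Move *moves);
void make_move(Board *b, Move m);
void unmake_move(Board *b, Move m);
void make_null_move(Board *b);
void unmake_null_move(Board *b);

int search(Board *b, int depth, int alpha, int beta)
{
    if (depth <= 0)
        return evaluate(b);

    /* Null-move pruning: let the opponent move twice in a row. If a
       reduced-depth search still fails high, assume this node would too
       and cut it off. Skipped when in check or when only pawns remain,
       where "passing" is not a safe lower bound (zugzwang). */
    if (!in_check(b) && depth > NULL_MOVE_R + 1 && has_non_pawn_material(b)) {
        make_null_move(b);
        int score = -search(b, depth - 1 - NULL_MOVE_R, -beta, -beta + 1);
        unmake_null_move(b);
        if (score >= beta)
            return beta;
    }

    Move moves[256];
    int n = generate_moves(b, moves);
    for (int i = 0; i < n; i++) {
        make_move(b, moves[i]);
        int score = -search(b, depth - 1, -beta, -alpha);
        unmake_move(b, moves[i]);
        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}

Whether DB used something like this all the time, some of the time, or not at all
is exactly the kind of thing we only have speculation about.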

-Tom


