Computer Chess Club Archives


Subject: Re: new computer chess effort

Author: Robert Hyatt

Date: 18:49:12 12/20/99



On December 20, 1999 at 20:18:55, Greg Lindahl wrote:

>On December 20, 1999 at 19:55:33, Robert Hyatt wrote:
>
>>Remember that there are at _least_ as many that spend less time in the eval
>>than I do.  And that I doubt if anybody is as high as 90%.
>
>I know of one example which spends 90% in eval. And as you know, if eval becomes
>cheaper, it might behoove you to use more of it.
>
>>DB didn't, but belle did, and hitech did, and so forth.
>
>I disagree. You can't prove that no other approach produces a really fast
>engine. It's logically impossible with the data that you have in hand.
>

That's where you are wrong. A simple analytical approach: pick any part
of the engine you want, and assume the time for that part can be driven to
0.00 by hardware.  Amdahl's law steps in quickly for the parts you can't
drive to zero, and gives you a limit.  The search is at _least_ 10% of any
program I know of.  That means that there is _no way_ to speed any program
up by more than a factor of 10, assuming that everything but the search is done
in hardware, and that it is done in zero time, and that there is zero cost for
communication.  All very big assumptions.
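To make that arithmetic concrete, here is a rough sketch in C of the Amdahl's
law bound.  The 90%/10% split is just the estimate above, not a measurement
from any particular program:

#include <stdio.h>

/* Amdahl's law: if a fraction p of total runtime is sped up by a
   factor s, overall speedup = 1 / ((1 - p) + p / s). */
static double amdahl(double p, double s)
{
    return 1.0 / ((1.0 - p) + p / s);
}

int main(void)
{
    double p = 0.90;  /* everything but the search, per the 10% estimate above */

    /* Hardware drives that 90% to zero time: s -> infinity, p/s -> 0. */
    printf("limit as s -> infinity: %.1fx\n", 1.0 / (1.0 - p));   /* 10.0x */

    /* Even a generous but finite speedup of the same 90%: */
    printf("speedup at s = 100:     %.1fx\n", amdahl(p, 100.0));  /* ~9.2x */
    return 0;
}

No matter how large s gets, the un-accelerated 10% caps the speedup at 10x.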

The problem here is that I know _exactly_ how the alpha/beta engines work.
And alpha/beta is a _very_ non-trivial thing to do in hardware... i.e., you
need to transmit up to 1K bytes to define a position (location of pieces, plus
things like castling status, repetition lists, the active move list, etc.).
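As a rough illustration of how much state that is, here is a hypothetical
layout.  The field names and sizes are illustrative only, not DB's (or
anyone's) actual format:

#include <stdint.h>

#define MAX_PLY   64
#define MAX_MOVES 256

/* Hypothetical per-node state crossing a software/hardware boundary. */
typedef struct {
    uint8_t  board[64];             /* piece placement, one byte per square */
    uint8_t  side_to_move;
    uint8_t  castle_rights;         /* K/Q-side flags for both colors */
    uint8_t  ep_square;             /* en passant target square, or none */
    uint8_t  fifty_move_counter;
    uint64_t repetition[MAX_PLY];   /* hash keys for repetition detection */
    uint16_t move_list[MAX_MOVES];  /* the active move list at this node */
    uint16_t move_count;
} HWPosition;   /* sums to roughly 1.1K bytes -- the order of magnitude above */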

If you intend to do something _other_ than alpha/beta, then none of what I
know applies.  But to date, after almost 50 years of computer chess history,
nothing other than alpha/beta has come along and proven to be satisfactory...
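For reference, the alpha/beta framework in question, in its simplest
fixed-depth negamax form, looks something like this.  The Position type, move
generation, make/unmake, and evaluate() are hypothetical placeholders for
engine-specific code, not any real engine's interface:

typedef struct Position Position;           /* engine-specific state */
typedef int Move;

extern int  generate_moves(Position *p, Move *moves);  /* fills moves[], returns count */
extern void make_move(Position *p, Move m);
extern void unmake_move(Position *p, Move m);
extern int  evaluate(const Position *p);    /* static score, side to move's view */

int alphabeta(Position *p, int depth, int alpha, int beta)
{
    if (depth == 0)
        return evaluate(p);                 /* leaf: static evaluation */

    Move moves[256];
    int n = generate_moves(p, moves);

    for (int i = 0; i < n; i++) {
        make_move(p, moves[i]);
        int score = -alphabeta(p, depth - 1, -beta, -alpha);
        unmake_move(p, moves[i]);

        if (score >= beta)
            return beta;                    /* cutoff: refutation found, stop searching */
        if (score > alpha)
            alpha = score;                  /* raise the lower bound */
    }
    return alpha;
}

Note how much of the hardware's job sits outside evaluate(): move generation,
make/unmake, and the bound bookkeeping all happen at every interior node.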



>> There is no one
>> piece you can pick out and make execute in zero time, and produce any big
>> performance boost.
>
>Other than an engine which spends 90% of its time in eval. They tell me that
>this is a religious issue in the chess world -- how smart of an evaluation
>function to use, how clever you can be picking moves, etc etc. What you're
>asserting is that you know every possible permutation, algorithm, and factor.
>Quite a strong claim. I wish I was that smart in the field that I specialize in.


I understand alpha/beta searching _perfectly_.  If that is what you mean.  If
you are asking "would you like to have an eval in hardware so you can keep
adding more and more knowledge without slowing the program down?"  then I would
say "heck yes".  But that is _not_ going to produce a revolutionarily strong
program in a year.  It will take multiple years to answer the question "what
would we like to do in eval that we simply can't now because of speed?"




>
>>>The DB approach maximized the design cycle length and costs.
>>
>>No, no, _NO_...
>>
>>They spent 12 years maximizing _performance_.  Not anything else.  They
>>built on 10 years of Belle doing the same.  It was all about performance.
>>No, they didn't use the most expensive production process.
>
>Designing and making ASICs is very expensive compared to simply writing
>software. Perhaps they didn't use the most expensive ASIC possible, but they did
>spend more money than many other approaches, and debugging an ASIC does involve
>a very lengthy process (3 months fab turnaround, anyone?). Compared to every
>other previous effort, they had the longest cycle and highest costs. Thus, the DB
>approach maximized the design cycle length and costs, compared to other
>potential approaches which did not use such a huge, mucking ASIC.

The ASIC is the _right_ way to solve this problem.  Because when you start
doing a _complete_ engine, one of the problems is moving large quantities of
data around internally.  I.e., if you want to do a fast eval, you have to do it
in parallel.  How many different pieces of hardware can do a simultaneous read
from a memory device?  That becomes a huge bottleneck.  DB's design had some
interesting issues to overcome on this very problem, in order for it to fit on
the die size they had to live with.  And their ASIC wasn't "huge" by any
stretch.  I could give you exact details except my copy of Hsu's new book is
at the office, not here at home.  It wasn't pushing fab processing by any means.
It was optimized to implement the entire engine and be as quick as possible.
2.4 M nodes per second for a chip clocked at 24 MHz was pretty impressive, IMHO.
10 clocks / node, doing _everything_.
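For anyone checking that figure, it falls straight out of the two numbers
quoted above:

#include <stdio.h>

int main(void)
{
    double clock_hz = 24.0e6;  /* 24 MHz chip clock */
    double nps      = 2.4e6;   /* 2.4 M nodes searched per second */
    printf("clocks per node: %.0f\n", clock_hz / nps);  /* prints 10 */
    return 0;
}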




>
>It's a pretty simple point to make.
>
>-- g


Yes, but "simple" doesn't make it "right" either...


