Computer Chess Club Archives


Subject: Re: hardware math

Author: Jeremiah Penery

Date: 08:25:01 10/11/02


On October 11, 2002 at 11:07:52, Vincent Diepeveen wrote:

>On October 11, 2002 at 10:38:12, Jeremiah Penery wrote:
>
>>Hmm, let's see.  If DB gets 'upgraded to 2002 standards', that would mean they
>>can make a fully custom .13 micron chip running at 300MHz, able to do a full
>>evaluation every clock cycle.  It will also have 20GB/s memory bandwidth to
>>256MB of RAM for the hash tables on the board.  So one single chip will search
>>300M positions/second, and they can do whatever evaluation they want.  Yes, yes,
>>obviously a 'complete joke'.
>
>I'm more afraid of Brutus in something like a 30MHz FPGA than I am of a
>Deep Blue at 0.13 micron.

Only because the latter will never exist. :)

>First of all, Deep Blue wasn't written in Verilog or any 'high-level'
>language. It was simply a matter of cutting and pasting the logic together.
>
>So it would require an entirely new design to make something for 0.13 micron
>in Verilog or whatever.

I was just using that as an example of what is possible.  It could be done
today.  Obviously, we know it won't be.  The whole argument is theoretical.

>Secondly, that 0.13 process technology, including a big salary for Hsu,
>would take around $20 million of investment.

That's true, but if IBM were still sponsoring it, I doubt they'd have much
problem providing that kind of money.  After all, DB made them WAY more money
than that in marketing terms.

>This versus an FPGA board and some tools you can get for a couple of thousand
>euros (1 euro is about 1 dollar at the moment).
>
>Further, Hsu would have to prove a number of things:
>   being capable of implementing all kinds of things like
>   nullmove, efficient move ordering, and a lot of evaluative
>   things in hardware. It's not trivial to add RAM to the

Nullmove should not be all that hard, since they already used it for threat
detection.  Does anyone know how they did move ordering in DB?  We can't say
whether it was efficient or not without knowing.  As for evaluation, they
already did a lot of things, according to Hsu's paper about it.  I'm sure if
they were going to do another redesign with today's hardware, they could find
a lot more things to add.
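
For anyone reading along who hasn't implemented it, null move in software terms
is roughly the sketch below.  All the names (Position, make_null_move, the R=2
reduction) are placeholders made up for illustration - this is not DB's or
Brutus's actual design.

  /* Minimal null-move pruning sketch.  Helper functions are assumed to be
     provided by the engine; the reduction R=2 is just a common choice. */
  #define NULL_MOVE_R 2

  typedef struct Position Position;
  extern int  evaluate(Position *pos);            /* static eval            */
  extern int  in_check(const Position *pos);      /* side to move in check? */
  extern void make_null_move(Position *pos);      /* pass the move          */
  extern void unmake_null_move(Position *pos);

  int search(Position *pos, int alpha, int beta, int depth)
  {
      if (depth <= 0)
          return evaluate(pos);              /* or drop into quiescence */

      /* Null move: let the opponent move twice in a row.  If a reduced-depth
         search of that still fails high, the real position is almost
         certainly >= beta, so cut off without searching the real moves. */
      if (!in_check(pos) && depth > NULL_MOVE_R) {
          make_null_move(pos);
          int score = -search(pos, -beta, -beta + 1, depth - 1 - NULL_MOVE_R);
          unmake_null_move(pos);
          if (score >= beta)
              return beta;                   /* fail-hard cutoff */
      }

      /* ... normal alpha-beta loop over the real moves goes here ... */
      return alpha;
  }

The control flow is tiny, which is presumably why it shouldn't be all that hard
to do in hardware either.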

>   chip, because fetching a single cache line from RAM is a lot slower than
>   processing a bunch of nodes in hardware. If you run at 300MHz
>   with, say, 10 clocks a node on average, you can achieve about
>   30 million nodes a second.

Yep.
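
Spelling that arithmetic out (both numbers are Vincent's assumed figures, not
measurements from any real chip):

  /* Back-of-envelope node rate for the hypothetical chip above. */
  #include <stdio.h>

  int main(void)
  {
      double clock_hz        = 300e6;  /* assumed 300 MHz clock         */
      double cycles_per_node = 10.0;   /* assumed average cost per node */

      printf("%.0f nodes/s\n", clock_hz / cycles_per_node);  /* 30000000 */
      return 0;
  }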

>   However, you can't do 30 million random word lookups a second in
>   the RAM; the latency is too high for that. It's not trivial to combine
>   the two things.

Yep.  But if they don't have hash tables, you complain about how they can't
possibly get good depth without them. :)
If price is no object, they should be able to use at least some very fast
SRAM, even if it's not very big.
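
A rough way to see the mismatch Vincent is describing (the latency figures
below are my own ballpark assumptions, not specs of any particular part):

  /* Why random DRAM probes can't keep up with 30M nodes/s, and why even a
     small, fast SRAM could.  Latencies are rough assumptions only. */
  #include <stdio.h>

  int main(void)
  {
      double probes_per_sec = 30e6;    /* one hash probe per node, say   */
      double dram_latency   = 100e-9;  /* ~100 ns random access, assumed */
      double sram_latency   = 5e-9;    /* ~5 ns for small on-board SRAM  */

      /* With one outstanding access at a time, throughput is 1/latency. */
      printf("DRAM: %.0fM lookups/s, need %.0fM\n",
             1.0 / dram_latency / 1e6, probes_per_sec / 1e6);
      printf("SRAM: %.0fM lookups/s\n", 1.0 / sram_latency / 1e6);
      return 0;
  }

So a single stream of dependent DRAM accesses falls short by about a factor of
three, while a small SRAM has headroom - which is the sense in which the two
things are not trivial to combine without banking or pipelining the memory.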

>   In fact, Crafty at 1 million nodes a second can't even do all its requests
>   to the hash table.
>
>An important point in the end is the price this all gets produced for,
>because you need to sell a bunch of these processors, or you won't get
>back that $20 million of investment.

If you're talking about selling it to the general public, it would never get
done.

>And in the end, when the CPU hits the market after, say, 5 years or so,
>I'll have a 4-processor 10GHz Intel/AMD machine delivering
>millions of nodes a second for DIEP :)

And?


