Computer Chess Club Archives



Subject: Re: New super pc

Author: David Franklin

Date: 17:13:35 06/23/99


On June 22, 1999 at 21:17:04, Paul Richards wrote:

>On June 22, 1999 at 20:04:36, Robert Hyatt wrote:
>
>>Don't overlook the issue that this is not a "single cpu" machine.  Using such
>>a massively-parallel architecture is a long way from trivial...
>
>The general concept seems easy enough, though I grant the details would
>be nontrivial.  Essentially everyone's program could "run in hardware" on
>such a machine.  For chess purposes then Hsu's vaporware could be obsoleted
>by newer vaporware. ;)))  The Xilinx chip that they purportedly use is
>certainly real enough.  Assuming that they have really developed the
>software necessary to control these chips in real time, the big question
>would be how fast it actually runs real applications.  The teraflop
>numbers quoted were for simple operations that don't reflect real
>application performance.  But I certainly hope it's all legit.  The
>size and power consumption comparison with Blue Pacific is hilarious,
>like a hairdryer vs. a small city. :)

I was involved (indirectly) with a similar-sounding machine a couple of
years ago, and the venture was basically a complete disaster.

Let me first disclaim any deep knowledge of FPGA internals; I'm only
reporting observed experience in porting software algorithms to an FPGA
system, and it may be that this new product doesn't suffer from the same
problems. However, what I saw was:

1) The 'best case' performance of an FPGA system often bears little
resemblance to observed performance. We were doing image processing, and
the performance for applying a general 11x11 convolution to a 16-bit image
was maybe 100x what could be achieved in software at the time. But we
couldn't take advantage of any software 'tricks', so we couldn't do a
symmetric separable filter any quicker. Applying a 2D geometric transform
to an image (with bilinear filtering) was about the same speed as software,
and chroma-keying was often slower than software. The Starbridge tera-op
figure for 16-bit adds isn't that useful unless you really, really need to
do a lot of 16-bit adds.
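
(For anyone who hasn't met the trick: a separable kernel factors into a
row pass followed by a column pass, so an 11x11 filter costs 11+11
multiply-adds per pixel instead of 11x11. A rough C sketch of what the
software version exploits; names and types invented, not our actual code:)

#include <stdint.h>

#define K 11          /* kernel size */
#define R (K / 2)     /* kernel radius */

/* One horizontal pass of a separable filter; running a vertical pass
   over the result completes the 2D convolution at 2*K multiply-adds
   per pixel instead of K*K. Border pixels are skipped for brevity. */
static void row_pass(const uint16_t *src, int32_t *dst,
                     int w, int h, const int16_t kernel[K])
{
    for (int y = 0; y < h; y++) {
        for (int x = R; x < w - R; x++) {
            int32_t acc = 0;
            for (int i = -R; i <= R; i++)
                acc += (int32_t)kernel[i + R] * src[y * w + x + i];
            dst[y * w + x] = acc;
        }
    }
}

The FPGA pipeline could only do the full 2D form, so factoring the kernel
bought us nothing there.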

2) Memory access times and patterns were critical. It was very difficult
to do anything that didn't use (more or less) sequential memory accesses,
and anything that strayed from that took a huge performance hit.
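
(Concretely, and as a contrived sketch rather than our actual code, the
difference is between loops like these; anything shaped like the second
one fell off a performance cliff on our boards:)

#include <stdint.h>

/* Sequential: walks memory in address order, a line at a time. */
long sum_row_major(const uint16_t *img, int w, int h)
{
    long s = 0;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            s += img[y * w + x];
    return s;
}

/* Strided: same arithmetic, but it jumps w elements between touches,
   which is far worse memory behaviour. */
long sum_col_major(const uint16_t *img, int w, int h)
{
    long s = 0;
    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
            s += img[y * w + x];
    return s;
}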

3) Development times were immense compared even to assembler; absolutely
ludicrous compared with C/C++. We're talking several months for the
implementation of something that would take a day (at the most) in C. And the
developers were the board designers, so we're not talking unfamiliarity with the
system here. I couldn't even imagine how long it would take to produce just a
legal move generator on that system.
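
(Just to calibrate what 'a day in C' means: even a toy fragment of a move
generator is only a screenful. Pseudo-legal knight moves on a 0x88 board,
with the representation and names invented purely for illustration:)

#include <stddef.h>

typedef struct { int from, to; } Move;

/* 0x88 trick: square index sq is off the board iff (sq & 0x88) != 0.
   These are the eight knight offsets on a 16-wide board. */
static const int knight_step[8] = { 33, 31, 18, 14, -14, -18, -31, -33 };

/* board[sq]: 0 = empty, >0 = own piece, <0 = enemy piece.
   Writes pseudo-legal knight moves from 'from' into out[]; returns count. */
size_t gen_knight_moves(const int board[128], int from, Move *out)
{
    size_t n = 0;
    for (int i = 0; i < 8; i++) {
        int to = from + knight_step[i];
        if ((to & 0x88) == 0 && board[to] <= 0)  /* on board, not own piece */
            out[n++] = (Move){ from, to };
    }
    return n;
}

Multiply that by every piece type, castling, en passant, and check
detection and you still have only a day or two of C; on the FPGA toolchain
even this fragment would have been a serious project.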

I have to say that, looking at the Starbridge site, the ratio of buzzwords
to real information seems to be very high indeed. From previous
disappointing experience I wouldn't be too excited about the prospects for
chess. Where it will probably excel is in very concise algorithms with
very good parallelizability and high locality of memory access.
Cryptography springs to mind as an area where you might get close to their
claimed superiority over a PII/PIII.

-
Dave


