Author: Robert Hyatt
Date: 19:41:03 04/14/02
On April 13, 2002 at 17:00:05, Amir Ban wrote:

>On April 12, 2002 at 15:04:54, Tom Likens wrote:
>
>>I probably shouldn't comment, since this topic seems to have become
>>a religious debate, still...
>>
>>As an ASIC/Analog IC engineer I can guarantee you that hardware done
>>"right" will blow away software every time. No offense to the chess software
>>programmers (of which I am one :), but custom hardware wins hands down.
>>Unlike the recent debates on FPGAs, an ASIC solution gets the full benefits of
>>the speed. When Hsu claimed Deep Blue was running at X MHz, guess what,
>>it *was* running at X MHz!! This debate about speed is crazy. Deep Blue
>>was done in 0.6u technology. That is ancient; modern ASICs are 0.13u copper
>>processes, with 0.1u just around the corner. Hsu could get a staggering jump
>>in speed by just doing a dumb shrink on his design, more than likely an order
>>of magnitude (and probably more). And if he improved the basic hardware
>>design, ... well, who knows?!
>
>Hardware is almost by definition faster than software, but is much less
>flexible, and that practically makes it much less attractive. The design cycle
>is roughly 6 months, and this means that a PC programmer can try in one evening
>what a dedicated-hardware designer can try in a year.
>
>This is not a practical way to develop, so in practice a hardware design would
>try to be flexible and be governed by adjustable parameters, and clearly the DB
>designers tried to do that. But this flexibility means the hardware needs to
>adopt some model, and that limits what it can do.
>
>E.g. it can be seen from the description of DB97 that a big part of the chip
>evaluation was piece-square tables (called "piece placement array"), which
>makes sense, because you can roll so many evaluation terms into that. It's
>clear that these tables were not generated on the chip, which means that they
>were calculated on the nodes and downloaded to the chip. This means quite a lot
>of strain on the nodes, on the datapaths from the nodes to the chips, and
>ultimately it means that evaluation was really done in the nodes' software. It
>also means DB was far from being a true node evaluator.

This doesn't follow at all. It depends on _how_ the "piece/square" values are
used... I didn't get the impression from the DB paper that it uses piece/square
tables anyway. It uses a "coefficient matrix" which _could_ be piece/square
values. Or they _could_ be just like the many arrays I have in Crafty's
evaluation, arrays that have nothing to do with "piece/square" type values.

>If you consider the challenge and the problems they had, it's easy to
>understand why they made such compromises. To implement all evaluation on the
>chips directly (as some would have you believe they did) simply does not make
>practical sense.

Sure it does. Belle had its evaluation _in_ hardware. If I were designing an
ASIC implementation of Crafty, I would do _that_ evaluation in hardware as
well. I would certainly keep all the _weights_ as downloadable parameters so
that I could adjust things as needed (a sketch of what I mean follows below).
But then that is how my evaluation is _already_ done...

It makes perfect sense to do the evaluation in hardware as they did, and as Hsu
has explained, because it can be done in parallel, which reduces the cost to a
small fraction of a linear series of calculations. I would not feel
uncomfortable putting my evaluation into hardware so long as the "weights" were
adjustable. And that is just what they did... You only have to read what they
wrote...
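To illustrate the "downloadable weights" idea, here is a minimal sketch in C.
Everything in it is invented for illustration; it is not DB's interface and it
is not Crafty's code. The structure (one lookup-and-add per piece) is fixed,
the weights are not:

  /* Hypothetical coefficient matrix with weights loaded at init
     time: a register write in hardware, a memory copy here. */

  #define PIECES  12                   /* 6 piece types x 2 colors */
  #define SQUARES 64

  static int coeff[PIECES][SQUARES];   /* the adjustable weights */

  /* "Download" a new weight set without touching the structure. */
  void load_weights(const int w[PIECES][SQUARES]) {
    for (int p = 0; p < PIECES; p++)
      for (int s = 0; s < SQUARES; s++)
        coeff[p][s] = w[p][s];
  }

  /* board[s] holds a piece index 0..11, or -1 if the square is
     empty. Each term is independent of the others, which is why
     a chip can compute the whole sum in parallel. */
  int evaluate(const int board[SQUARES]) {
    int score = 0;
    for (int s = 0; s < SQUARES; s++)
      if (board[s] >= 0)
        score += coeff[board[s]][s];
    return score;
  }

Tuning then never requires a new chip, only a new weight download. And whether
such a matrix holds piece/square values or something else entirely is exactly
the point: the hardware doesn't care what the coefficients mean.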
>In the end, despite the advantages of a hardware design, the problem is that
>getting the hardware right takes 90% or more of your resources. When there is
>so much you can and should do in software, it's not obvious to me that this is
>the best way to make progress.
>
>By the way, while Hsu & company can now use 0.2 or lower technology, so can
>any ordinary programmer who wrote his program for the 486 and now can run it
>on over 1 GHz. There's no advantage to the hardware guys in this respect.
>Actually it's their disadvantage. This happened to Hitech, another hardware
>design that was very fast in its prime but by the mid-nineties was surpassed
>by off-the-shelf PCs.

That is simply wrong, because you don't understand hardware. You can run at
1 GHz and execute _one_ instruction per cycle. They could execute one _node_
every ten clock cycles (rough numbers at the end of this post). That is a
_huge_ advantage, and it is _exactly_ why a special-purpose implementation of
_anything_ will blow the doors off the software implementation. Yes, it takes
more time. But they took 10+ years. That is certainly enough time...

Hitech was _never_ "real fast". By the time it came along it was no faster
than Belle (1981) or Cray Blitz. It searched about 150K nodes per second,
which was not quite as fast as Belle and only a tad faster than the Cray Blitz
of 1985. By 1986 they were slower than us as well...

>>As far as positional items go, Hsu and his team were bright guys with FULL-
>>TIME grandmasters on the team (who were being paid by IBM, no less). It was
>>their *job* to come in every day and make Deep Blue a better chess-playing
>>computer (talk about Nirvana ;). I find it hard to believe that Deep Blue
>>didn't have a reasonable grasp (at least as good as the micros) of the main
>>positional ideas of chess. Did it understand everything? Of course not, but
>>I bet it was damn good (just ask Kasparov).
>>
>>The current batch of software programmers are good, maybe some of the best
>>ever, but frankly, when talking about Deep Blue, it is *not* a level playing
>>field. The deck is heavily stacked against the PCs.
>
>There are grandmasters participating in other programs on a regular basis, and
>nobody argues that this proves that the programs are awesome.
>
>Amir
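To put rough numbers on the node-rate point above: every figure in this sketch
is my assumption for illustration, not a published DB measurement.

  /* Back-of-the-envelope sketch of the node-rate argument. Every
     figure here is an illustrative assumption, not a measurement. */
  #include <stdio.h>

  int main(void) {
    /* Software: 1 GHz, one instruction per cycle, and an assumed
       few thousand instructions to process one node. */
    double cpu_hz         = 1.0e9;
    double insns_per_node = 5000.0;          /* assumption */
    double sw_nodes       = cpu_hz / insns_per_node;

    /* Hardware: a far slower clock, but one node per 10 cycles. */
    double chip_hz         = 24.0e6;         /* assumption */
    double cycles_per_node = 10.0;
    double hw_nodes        = chip_hz / cycles_per_node;

    printf("software: %.0f nodes/sec\n", sw_nodes);   /* 200000  */
    printf("one chip: %.0f nodes/sec\n", hw_nodes);   /* 2400000 */
    return 0;
  }

Even with a clock roughly forty times slower than the PC's, the
special-purpose chip comes out an order of magnitude ahead, and that is a
single chip, before any multi-chip parallelism.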