Author: Uri Blass
Date: 05:33:42 05/05/01
On May 04, 2001 at 21:51:48, Robert Hyatt wrote:

>On May 04, 2001 at 17:57:36, Jesper Antonsson wrote:
>
>>On May 04, 2001 at 14:48:14, Robert Hyatt wrote:
>>>My 60M figure is "peak". To compare that to DB you have to use 1000M nodes
>>>per second. It would _still_ be a long way away.
>>
>>Well, what I remember is that they reported around 400M nodes peak and 200M
>>nodes average. Anyway, a factor of 4-16 is not something I consider very much;
>>it isn't more than two to six years of Moore's Law. :-) However, it is still an
>>open question how good DB was at evaluation. Those guys were smart and could
>>throw silicon at the eval terms, so it's possible that they had a significantly
>>better eval than state-of-the-art chess software of today. On the other hand,
>>it's possible they didn't.
>
>
>If you do the math: 480 chess processors, half at 20 MHz and half at 24 MHz,
>you get an average of 22 MHz, which at 10 clocks per node means an average of
>2.2M nodes per second per processor. Times 480 and you get about 1 billion.
>Peak, of course, but it _could_ reach that peak. Hsu claimed his search was
>about 20% efficient, which would take that to roughly 200M...
>
>On a 64-CPU Alpha it is _possible_ that Crafty might exceed 60M nodes per
>second. But in reality it would be searching like a 40M-node-per-second
>sequential processor, due to the roughly 30% efficiency loss across the
>processors.
>
>Still, it would be _very_ fast. Just not as fast as Deep Blue by quite a
>ways... And then there is the evaluation problem. I _know_ I don't do in my
>eval what they did in theirs, as I would probably be another factor of 3-5
>slower if I did...

I still guess that your evaluation is better, because you had many years to
tune your evaluation and they did not.

Uri
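For the curious, Hyatt's back-of-the-envelope numbers above can be checked with a few lines of C. This is just a sketch of that arithmetic; the clock rates, the 10-clocks-per-node figure, Hsu's ~20% search efficiency, and the ~30% parallel loss for Crafty are all taken from the post itself, not from either engine's source.

    #include <stdio.h>

    int main(void) {
        /* Deep Blue: 480 chess processors, half at 20 MHz, half at 24 MHz. */
        double avg_clock_mhz  = (20.0 + 24.0) / 2.0;   /* 22 MHz average      */
        double clocks_per_node = 10.0;                 /* per Hyatt's figure  */

        double nps_per_chip_m = avg_clock_mhz / clocks_per_node; /* 2.2M/s    */
        double db_peak_m      = nps_per_chip_m * 480.0;  /* ~1056M, ~1 billion */
        double db_effective_m = db_peak_m * 0.20;        /* Hsu's ~20% search
                                                            efficiency: ~200M */

        /* Crafty on a 64-CPU Alpha: ~60M nodes/s peak, but parallel-search
           overhead (~30% loss) makes it behave like a ~40M nodes/s
           sequential searcher. */
        double crafty_peak_m      = 60.0;
        double crafty_effective_m = crafty_peak_m * (1.0 - 0.30); /* ~42M */

        printf("Deep Blue peak:      %.0fM nodes/s\n", db_peak_m);
        printf("Deep Blue effective: %.0fM nodes/s\n", db_effective_m);
        printf("Crafty peak:         %.0fM nodes/s\n", crafty_peak_m);
        printf("Crafty effective:    %.0fM nodes/s\n", crafty_effective_m);
        return 0;
    }

Running it reproduces the figures in the post: roughly 1056M peak and 211M effective for Deep Blue, against 60M peak and about 42M effective for Crafty, i.e. the factor-of-four-or-so gap being discussed.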