Author: Robert Hyatt
Date: 20:30:08 05/22/99
On May 22, 1999 at 22:03:11, James B. Shearer wrote:

>On May 22, 1999 at 00:25:52, Robert Hyatt wrote:
>
>>I'm not really assuming anything at all, because search loss is _not_
>>constant. It is easily possible (and 100% probable) that many searches
>>with N processors run N times faster than with 1. I see this regularly.
>>Yes, there are cases where it is less. But in terms of NPS, they 'delivered'
>>what they said. Because when I quote NPS figures for Crafty (as does everyone
>>with a parallel search) I give "raw nps" numbers. Because 'effective nps' is
>>impossible to calculate.
>
> Well if you insist on using "raw nps", then they were predicting 3
>billion positions per second (3 million per chip times 1000 chips) and only
>achieved 1 billion (5 years late). So they did not deliver what they said in
>the 1990 Scientific American article.
> James B. Shearer

I have never seen them use that number. They have always, in the things I have
seen from them, used "raw nps". IE at the ACM events they would quote 700K for
one processor, 1.4M for two, etc... That is the number every parallel program
has quoted in the ACM events, since there is no real way to compute anything
else.

IE for a thousand chess positions, I average about a 3.2X speedup over 1
processor. So do we say that the 0.8 loss is meaningful? Only for those 1000
positions. Because in _the_ critical position in a game, I might get a speedup
of 10X, 4X, or .5X. And without going back to evaluate it, I have no idea
which. And even if I do, I probably won't get the same speedup the second time.
Which is why raw nps makes more sense, since it can at least be calculated...
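The distinction in the post can be sketched in a few lines: raw NPS is just nodes divided by wall time and sums cleanly across processors, while "effective" speedup has to be measured position by position and varies wildly. This is a minimal illustration; all timing numbers below are hypothetical, not actual Crafty or Deep Blue measurements.

```python
def raw_nps(nodes_searched, seconds):
    """Raw NPS: total nodes searched across all processors / wall time."""
    return nodes_searched / seconds

# Raw NPS scales with processor count almost by definition:
# e.g. 700K nodes/sec on one processor, ~1.4M on two.
assert raw_nps(700_000, 1.0) == 700_000.0
assert raw_nps(1_400_000, 1.0) == 1_400_000.0

# "Effective" speedup must be measured per position: time with 1 cpu
# divided by time with N cpus. Hypothetical timings (seconds) for four
# positions, 1 cpu vs. 4 cpus; the third shows a slowdown, the fourth
# a super-linear speedup -- the variance the post describes.
one_cpu  = [7.0, 8.0,  8.0, 9.6]
four_cpu = [2.0, 2.0, 16.0, 2.0]

speedups = [t1 / tn for t1, tn in zip(one_cpu, four_cpu)]
avg = sum(speedups) / len(speedups)
# speedups are [3.5, 4.0, 0.5, 4.8]; the average is 3.2, but no single
# position is guaranteed to be anywhere near that average.
```

The average (3.2X here, by construction) says nothing about the one critical position in a game, which is why the post argues raw NPS is the only number a parallel program can honestly quote.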