Author: Robert Hyatt
Date: 14:40:39 06/26/03
On June 26, 2003 at 16:47:06, Eugene Nalimov wrote:

>On June 26, 2003 at 16:11:42, Robert Hyatt wrote:
>
>>On June 25, 2003 at 13:20:46, Tom Kerrigan wrote:
>>
>>>On June 25, 2003 at 04:52:12, Dann Corbit wrote:
>>>
>>>>On June 25, 2003 at 03:55:03, Andreas Guettinger wrote:
>>>>
>>>>>Apple Hardware VP Defends Benchmarks:
>>>>>
>>>>>http://apple.slashdot.org/apple/03/06/24/2154256.shtml?tid=126&tid=181
>>>>
>>>>I'll be darned. An oinking weasel.
>>>
>>>It obviously doesn't pass the smell test when Apple's scores disagree with the
>>>officially submitted SPEC scores so dramatically, even if the VP does try to
>>>justify their testing methodology.
>>>
>>>The guy mentions that the PPC scores could have been higher if they had used a
>>>different compiler? Uhhh, why didn't they do that and avoid this whole mess?
>>>
>>>-Tom
>>
>>His testing methodology was not _that_ bad. He _did_ use the same compiler for
>>both processors, which is certainly reasonable.
>>
>>Whether he used that specific compiler because it made the G5 look better is
>>another issue, although it is doubtful that the gcc guys have any great
>>G5 customizations built in yet.
>>
>>One _could_ make a case for testing either way: (a) using the same compiler;
>>(b) using the _best_ compiler for each respective machine.
>
>By "the same compiler" you mean "the same front end?"
>
>Thanks,
>Eugene

Actually I was thinking more about the "back end", i.e. emitting and optimizing the code. The parsing and conversion to some intermediate form is not that interesting from this perspective; it's what happens after that that becomes "interesting". In the case of GCC, there is plenty of evidence that the optimizer developers communicate a lot, so that when one discovers a new trick, all the back-end maintainers adopt it where possible and applicable. Vendors seem to be more protective of their "tricks", for reasons that must relate to software marketing rather than hardware marketing/performance.
>>The classic problem with (b) is that humans are influencing the outcome in a
>>big way, because you not only measure raw hardware performance, you measure how
>>good the optimizing gurus are at their craft. Either way is open to lots of
>>criticism, unfortunately.
>>
>>SPEC is still going to be the best comparison, since each vendor is free to
>>use the fastest compiler and settings he can find, so long as the result
>>produces correct and validated answers. The "gurus" still count, of course,
>>but absolute is absolute.
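To make the front-end/back-end distinction concrete: here is a toy sketch (not GCC's actual architecture; every name in it is made up) of how one front end can lower source to a shared intermediate representation, how an optimization at the IR level (constant folding here) then benefits every back end that consumes that IR — which is why a trick shared among back-end maintainers pays off across all targets.

```python
# Toy compiler split: shared front end + shared IR optimization + one of
# possibly many back ends. Illustrative only; names are invented.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def front_end(src):
    """Parse an arithmetic expression into a nested-tuple IR."""
    def lower(node):
        if isinstance(node, ast.BinOp):
            return (type(node.op), lower(node.left), lower(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Name):
            return node.id          # free variable, resolved at run time
        raise ValueError("unsupported construct")
    return lower(ast.parse(src, mode="eval").body)

def fold_constants(ir):
    """Middle-end pass: collapse subtrees whose operands are all constants.
    Runs on the IR, so every back end below benefits from it."""
    if not isinstance(ir, tuple):
        return ir
    op, lhs, rhs = ir[0], fold_constants(ir[1]), fold_constants(ir[2])
    if isinstance(lhs, int) and isinstance(rhs, int):
        return OPS[op](lhs, rhs)
    return (op, lhs, rhs)

def stack_backend(ir):
    """Emit code for a hypothetical stack machine; a second back end for a
    different target would consume the very same (already optimized) IR."""
    if isinstance(ir, int):
        return [f"PUSH {ir}"]
    if isinstance(ir, str):
        return [f"LOAD {ir}"]
    op, lhs, rhs = ir
    name = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL"}[op]
    return stack_backend(lhs) + stack_backend(rhs) + [name]

ir = fold_constants(front_end("x * (2 + 3)"))
print(stack_backend(ir))    # the (2 + 3) subtree is folded before emission
```

The point of the sketch is the layering: "the same compiler" in the benchmarking sense really means the same middle and back end, since that is where the code-quality differences between targets are made.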