Computer Chess Club Archives


Subject: Re: Why is assembly more efficient than C?

Author: Dave Gomboc

Date: 21:57:32 09/28/98


On September 28, 1998 at 14:12:25, Robert Hyatt wrote:

>On September 28, 1998 at 10:06:42, Jon Dart wrote:
>
>>
>>On September 28, 1998 at 09:17:09, Robert Hyatt wrote:
>>
>>>On September 28, 1998 at 03:01:19, Danniel Corbit wrote:
>>>
>>>>On September 27, 1998 at 18:18:25, Robert Hyatt wrote:
>>>>[snip]
>>>>>Not exactly.  I.e., I can't imagine that a C compiler + optimizer can beat
>>>>>hand-tuned asm code, even if I write both the C and the asm code.  The
>>>>>guys who write the optimizers are good, but they aren't as good as
>>>>>someone who has been programming asm code for 30 years...
>>>>>
>>>>>The main reason everyone doesn't use ASM code is portability, *not*
>>>>>speed.
>>>>RISC C compilers can almost always outdo hand-written code except for very small
>>>>snippets.  For CISC I agree with you, especially Intel x86, since there are so
>>>>many good Intel assembly programmers.  For thousands or millions of lines of C,
>>>>an equivalent ASM version is very hard to produce for RISC machines.
>>>
>>
>>The Intel processors now do many of the tricks that RISC processors have
>>traditionally done. It used to be that you could just get the processor manual,
>>add up the instruction times, and figure out how fast your code would run.
>>Now that's not true anymore. So writing optimal assembly language is
>>non-trivial, even for the Intel machines. (However, I would add that few
>>compilers do a really great job of register allocation - which is quite a bit
>>harder on Intel than on other architectures - so that is one area where a human
>>can improve on the compiler.)
>>
>>--Jon
>
>
>There are other things too.  I.e., how many chess programmers do an x=x*2.5 in
>their evaluation function?  None?  Better check out Cray Blitz.  And the reason
>is buried in the Cray architecture and how floating point stuff is done in
>parallel with integer stuff, so that I can do x=x*2 and y=y*2.0 in the same
>time it would take to do just x=x*2...  but the compilers don't know whether
>you can take a float and use it as an int, or vice-versa, while *I* do because
>I know how the number will be used later, and where the important part of the
>number is (whole or fraction or both).  The compiler *always* has to be
>conservative and once it has a float, it has to stick with a float to avoid
>losing those fractional bits, even when they will be zero (but it can't know
>that of course.)
>
>That's the point.  I *know* *all* about the program and the values it is
>computing.  The compiler doesn't...

I think that if somebody had enough time to write it all in assembly, they might
do a better job than a top-notch compiler.  That's a pretty big if, though: it
might take a lot longer than a human lifespan.  Anyway, I think that more often
the compiler will be better.  If I may make an analogy :), let's use computer chess.

The best programs are better than virtually all humans: they have limited
knowledge, but they apply it unfailingly, in every place that it is possible.
There are a few humans who can still sit down against some PC software and crush
it, but for the vast majority of us, the software is better than we are.

The same holds for assembly: when it comes to peephole optimizations over a
million lines of C code, the compiler is a far better grinder than any human
could possibly be.  Humans beat it in cases like the one you mention above, where
they simply understand something about the code that the compiler does not.
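
To make that concrete, here is a toy C sketch (not taken from Cray Blitz or any
real engine; the function names are made up for illustration) of the two
situations: the mechanical rewrite a compiler grinds out everywhere on its own,
and the x=x*2.5 trick above, which only works because the programmer knows the
fractional bits will never matter later.

    /* Toy illustration only: assumed function names, not real engine code. */
    #include <stdio.h>

    static int scale_score(int x)
    {
        /* The compiler handles this perfectly by itself: it will turn the
           multiply into a shift, and it will do so in every one of the
           thousands of places such code appears. */
        return x * 2;
    }

    static int scale_score_by_2_5(int x)
    {
        /* The human-knowledge case: the programmer knows only the whole
           part of the result is ever used, so truncating the float multiply
           back to an int is safe.  A compiler must stay conservative and
           keep the value as a float once floating point enters the picture. */
        return (int)(x * 2.5);
    }

    int main(void)
    {
        int score = 37;
        printf("%d %d\n", scale_score(score), scale_score_by_2_5(score));
        return 0;
    }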

So be practical: write in a high-level language, and if you really need to,
hand-tune like crazy where the bottlenecks are. :-)
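
In practice that usually looks something like the sketch below: keep a portable
C version of the hot routine, and switch in a tuned path only where profiling
says it matters.  (The names here are hypothetical, and the GCC
__builtin_popcountll call is just one example of such a tuned path; bit counting
is the sort of kernel a profiler tends to flag in a bitboard program.)

    /* Hypothetical sketch: a portable routine plus an optional tuned path,
       chosen at compile time.  Names are made up for illustration. */
    #include <stdio.h>

    static int popcount64(unsigned long long b)
    {
    #if defined(__GNUC__)
        /* Tuned path: the compiler emits the best sequence it knows for
           this target (a single instruction on many modern CPUs). */
        return __builtin_popcountll(b);
    #else
        /* Portable fallback: clear the lowest set bit until none remain. */
        int n = 0;
        while (b) {
            b &= b - 1;
            n++;
        }
        return n;
    #endif
    }

    int main(void)
    {
        printf("%d\n", popcount64(0xFF00FF00FF00FF00ULL));  /* prints 32 */
        return 0;
    }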

Dave Gomboc



