Computer Chess Club Archives


Subject: Re: By the way...

Author: Robert Hyatt

Date: 20:48:29 06/28/03


On June 28, 2003 at 22:23:28, Tom Kerrigan wrote:

>On June 28, 2003 at 10:43:50, Robert Hyatt wrote:
>
>>On June 28, 2003 at 04:44:10, Tom Kerrigan wrote:
>>
>>>On June 28, 2003 at 00:18:35, Robert Hyatt wrote:
>>>
>>>>On June 26, 2003 at 22:50:59, Eugene Nalimov wrote:
>>>>
>>>>>I didn't look at GCC sources, but I looked at the sources of some other
>>>>>compilers, and I understand the x86 and PPC architectures well enough
>>>>>that I think I know the x86 and PPC backends should be vastly different,
>>>>>and each should contain a lot of platform-specific and unique code.
>>>>>
>>>>>Thanks,
>>>>>Eugene
>>>>
>>>>I wouldn't disagree.  However, I'd suspect that both are written by the
>>>>same core "group" of people, which means they are probably pretty competitive
>>>>with each other in terms of aggressive optimizations.  That means that it is
>>>>unlikely that one processor will get a huge jump on the other due to the
>>>>optimizer gurus for one being far better. (all of that directed toward gcc
>>>>only, of course).
>>>
>>>I've never looked at the gcc compiler, but I imagine that it has a pass where it
>>>converts whatever its intermediate format is to native machine code and
>>>optimizes that machine code, e.g., makes sure branch targets are on 16-byte
>>>boundaries for the Athlon, makes sure to use multiplies instead of shifts in
>>>certain situations on the P4, etc. These sorts of optimizations can make or
>>>break the performance of an executable and they're hard enough to keep straight
>>>for one x86 processor, much less every x86 processor AND some completely
>>>different RISC processor (POWER4/PPC970) with rules that are probably just as
>>>complicated, given its "bundling" setup. So unless you have information to the
>>>contrary, I'd suspect that different sets of people work on generating this
>>>final machine code.
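
As a concrete illustration of the kind of target-specific choice Tom is
describing (my own toy example, not taken from any backend; the
instruction choices in the comments are typical rather than guaranteed):

    /* The same C source can call for different machine code depending
       on which x86 core the compiler is tuning for. */
    unsigned scale(unsigned x)
    {
        /* Most x86 cores: a single shift (shl eax, 3) or an lea.
           Early Pentium 4 cores had slow shifts, so a P4-tuned
           backend might prefer a short chain of adds instead. */
        return x * 8;
    }
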
>>
>>You can find this out by investigating the gcc project.  In some cases they
>>might have disjoint groups of people working on the back end, but at the "top"
>>of the tree there is a single group of maintainers.  And for many of the
>>architectures, there is a single group working on all.  This is not that
>>uncommon.  For example, Donald Becker (NASA) did almost all of the ethernet
>>drivers for linux, even though different cards/drivers are drastically
>>different.
>
>In other words, you don't know. Okay.

I don't know for the PPC specifically, no.  I'll be happy to give you
the Alpha and x86 names if you want, and the Cray and a few others,
although, as I mentioned, there are quite a few "common players" across
all of them.



>
>>>As for optimizations carrying over from one architecture to the other, I expect
>>>this is very unlikely given how different the architectures are. If you order
>>>your instructions on the PPC970 to be bundled just right for high performance,
>>>the same ordering is obviously going to have no effect (or probably a
>>>detrimental effect) on Pentium 4 performance, because the P4 doesn't even do
>>>bundling at all.
>>
>>
>>Sorry, but _many_ optimizations are machine / architecture independent.  Any
>>good compiler book will explain them, and new ideas are coming out every day.
>
>Yes, I never said that wasn't the case. But _many_ optimizations are _not_
>machine/architecture independent and those are the ones I was talking about
>(obviously).

OK.  But in gcc, the number of those "cute" optimizations is low compared
to the usual generic optimizing.  I.e., we only just _recently_ got CMOV
produced by the compiler, much less optimizations that exploit it.
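
For the curious, this is the classic shape CMOV targets; whether a given
gcc actually emits cmov here depends on the version and on -march/-mtune
being i686 or later, so treat the comments as a sketch, not a promise:

    /* Branchy max(): on i686-class x86, a CMOV-aware compiler can emit
       "cmovl" here instead of a conditional jump, avoiding a possible
       branch misprediction. */
    int imax(int a, int b)
    {
        return (a > b) ? a : b;
    }
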




>
>>Of course there are also processor-specific tricks that get exposed daily as
>>well, but that is a much smaller subset of optimizations than the overall
>>ideas dealing with reducing operations done.
>
>How do you figure that one group is larger/smaller than the other?

Experience.  "Generic" optimizations have received a _lot_ more attention,
because they apply everywhere, and they draw more interest.  A vendor's
group is likely to spend more time digging into the architectural tricks
in addition to the generic optimizations.  But we were talking about GCC,
where that's not particularly true.
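
By "generic" I mean transformations like the one below, a toy example of
loop-invariant code motion written out by hand; any decent optimizer does
the equivalent on x86, PPC, or Alpha alike:

    /* As written: x * 64 is recomputed on every iteration. */
    void fill(int *a, int n, int x)
    {
        int i;
        for (i = 0; i < n; i++)
            a[i] = x * 64 + i;
    }

    /* What a machine-independent optimizer effectively produces:
       the invariant product is hoisted out of the loop. */
    void fill_hoisted(int *a, int n, int x)
    {
        int i, t = x * 64;
        for (i = 0; i < n; i++)
            a[i] = t + i;
    }
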



>
>>That simply means that the overall optimizations are similar, and then on the
>>_very_ "back end" of all this some processor-specific tricks are employed to
>>further (hopefully, but not always) speed things up.  But as I said,
>>using gcc on two processors does as much as possible to eliminate any _real_
>>processor-specific tinkering, but it also means you are comparing two machines
>
>Doesn't that depend on the back end? Is there something in gcc's architecture
>that actually prevents it from being as good (i.e., maxing out
>"processor-specific tinkering") as other less-portable compilers?

Nothing prevents it at all; it just takes lots of _time_.  A vendor trying
to push its own processor's superiority, or trying to develop a for-sale
compiler, will do better.  Microsoft C is a good example: it beats GCC on
any architecture both support.  Not because GCC is written poorly; it just
doesn't get as much attention from as many qualified compiler jocks as
MSVC does.


>
>-Tom


