Computer Chess Club Archives



Subject: Re: Introducing "No-Moore's Law"

Author: Robert Hyatt

Date: 15:11:36 02/28/03



On February 28, 2003 at 17:28:23, Jeremiah Penery wrote:

>On February 28, 2003 at 11:45:18, Robert Hyatt wrote:
>
>>On February 28, 2003 at 01:36:00, Jeremiah Penery wrote:
>>
>>>On February 27, 2003 at 22:45:31, Robert Hyatt wrote:
>>>
>>>>If you have time, I want to ask a very precise set of questions, since we keep
>>>>going around and around with the non-engineering types here...
>>>
>>>Somehow, I get the impression that you're just skimming what I write, assuming I
>>>must be saying what you think I'm saying.
>>
>>I'm not skimming anything.  You said "the chips will run much faster but they
>>underclock
>>them when they are released..."
>
>I never said exactly that.  I have said similar things, but they carry different
>connotations.
>
>This goes back to marketing.  I say marketing forces them to release chips only
>as fast as they need to.

If you read Steve's comments, he disagreed with this just as I did.

Yes, they might "underclock" some faster chips, if (1) the fab line is producing
faster chips at a rate that satisfies the demand for that clock speed; and (2)
there is demand for slower chips, and siphoning off faster chips and marking
them slower won't impact the ability to meet the demand for the faster chips.

No, I don't believe they underclock for any other reason.  There is much money
to be made by clocking them as fast as possible.  Just look at the difference
in price between a 2.4GHz Xeon and a 2.8GHz Xeon and ask yourself "given the
demand, which would I produce?"

The faster they go, compared to the competition, the more they will sell.
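
A back-of-the-envelope Python sketch of that choice; the die count, prices,
and bin yields below are invented for illustration, not real Intel numbers:

# Revenue from binning parts at their tested speed vs. marking them
# all down a bin.  Every number here is a made-up illustration.

dies = 100                                  # hypothetical dies per wafer
price = {"2.8GHz": 455, "2.4GHz": 198}      # assumed list prices, dollars
passes = {"2.8GHz": 0.60, "2.4GHz": 0.40}   # assumed fraction in each bin

bin_and_sell = sum(dies * passes[b] * price[b] for b in price)
mark_all_slow = dies * price["2.4GHz"]

print(f"sell at tested speed: ${bin_and_sell:,.0f}")   # $35,220
print(f"mark everything slow: ${mark_all_slow:,.0f}")  # $19,800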


>
>>>>Here they are:
>>>>
>>>>(1) When you design a chip for a particular fab process, do you have a pretty
>>>>accurate idea of how fast it is going to clock?  To clarify, some think that you
>>>>crank out the design, run it down the fab line, and then if it will run at (say)
>>>>5 ghz, you will actually clock them at 3ghz to avoid pushing the envelope too
>>>>far too fast.  The engineers I have talked with dispute this with a laugh, but
>>>>I thought I'd give you a chance at this as well.
>>>
>>>When Intel shrunk the P4 from .18um to .13um, the processor speed first released
>>>in .13um was the same as the top-end part from .18um - 2GHz.  It's laughable to
>>>think that process shrink wouldn't give them quite a bit of frequency headroom
>>>from the very first, even given the immaturity of the process.
>>
>>It isn't so "laughable" to me.  First, it might take time to get the fab process
>>tuned up to reliably produce faster parts.  But reducing the die size has other
>>advantages, including lower voltage and lower heat, at the same clock frequency,
>>so there is a reason for going there.
>
>Marketing, again.  They didn't need a processor with higher clockspeed than was
>released before, so they didn't make one right away.

If you believe that, that's certainly your choice.  I don't.  And no engineer
I have talked to suggests that this happens.  The thing driving the clock speed
is money.  Faster means more sales.  The thing limiting clock speed is
technology.  What will the fab produce, and how fast can they run?

I've _never_ seen a case of someone intentionally coming in slower than they
could, knowing that the wider the gap between their speed and the speed of
their competition, the wider the gap in sales will be also.


>
>.13um P3s were being produced at the same time .18um P4s were being produced, so
>the process wasn't totally immature or untested.  1400MHz P3 uses lower voltage
>and has far less thermal dissipation than the 1800MHz P4 in the same process,
>while remaining very competitive with the P4 in performance, despite the
>disparity in clock speed.  By your logic, they would have ramped up and promoted
>the P3 because of these advantages.  Instead, Intel worked quietly to kill the
>P3 once P4 was released.

That has nothing to do with the current discussion, however.  The _fastest_
they can run is what the engineers are targeting.  Not something less.

It may well be that the lower-voltage parts were in demand for _another_
reason, perhaps the laptop world.  If you want to limit heat dissipation, that
is a valid constraint.  But most likely not in the desktop world, with
heatsinks and fans galore.

>
>>> Even on a very
>>>mature process, near the end of a core's lifespan, they _have_ to leave some
>>>headroom at the top, or you get processors like the 1.13GHz P3, which was pushed
>>>right to the limit, and suffered for it.
>>
>>I don't disagree.  I simply claim the "headroom" is not very large, and it is
>>intentionally
>>made just larger than the expected variance in the parts off the fab line, so
>>that _all_ will
>>run at that speed reliably.  10%?  Maybe.  25-50%?  Not a chance in hell...
>
>10% is not a small amount.  I wouldn't be surprised if it was a bit more today,
>though.  In the days of 25MHz chips, they probably had no practical headroom -
>1MHz would be a full 4%.  As clock speed increases, more headroom must be given.

Yes, but if you look back at comments in this thread, 10% was _not_ discussed.
It was more like 3GHz vs. 5GHz, and that is a _huge_ "headroom".
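
To make the numbers concrete, here is a minimal Python sketch of "headroom
just larger than the part-to-part variance"; the mean and sigma are invented
numbers standing in for real fab data:

# Rate the part below the mean attainable clock so essentially every
# die off the line runs reliably.  Mean and sigma are invented numbers.

mean_fmax = 3.0    # GHz: average max stable clock across dies (assumed)
sigma     = 0.1    # GHz: spread from fab variation (assumed)

rated = mean_fmax - 3 * sigma      # 3-sigma guard band
headroom = 100 * (mean_fmax - rated) / rated

print(f"rated: {rated:.1f} GHz, average headroom: {headroom:.0f}%")  # 2.7 GHz, 11%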



>
>>>>(2) when you design a chip for a particular process, do you have a good idea of
>>>>how fast it will run, or do you "wing it" and run a few off and test them to
>>>>see what they can clock at?  Again the engineers I talk with say that they
>>>>know in advance what it should run at and they run it there.
>>>
>>>There are a ton of variables that can affect the clock speed attainable by a
>>>certain chip design on a certain process.  If you really feel like discussing
>>>specifics, I can do so.  Here, I will only say that there are things beyond the
>>>path timings that can affect attainable clockspeed.  Thermal issues are a big
>>>deal for this.  Even the packaging used can affect clock scaling of a particular
>>>chip.
>>
>>I don't believe I have said otherwise.  Also the very process of depositing
>>material to
>>build pathways is not precise, leading to variable widths and resistance/etc.
>>That's a
>>part of the fab process.
>
>Which is one reason why their calculations can be wrong, and they have to test.

Again, _everything_ has to be tested after it is designed and the prototypes
are built.  So what?  I have to test _every_ program I write.  But, in general,
what I expect from a program is usually delivered, whether the target be a
specific run-time, a memory limit, I/O throughput, or whatever.

But testing to see if things work is completely different from first building
something and then testing to see if it is even viable as a solution.  That is
very infrequent, particularly in the semiconductor world, unless you take
"leaps" like using copper interconnects, or GaAs, or whatever...  But even then
the engineers have a pretty good idea of what to expect from the first chip out
of the box...




>
>>>They can calculate all they want, but they still have to test to make sure
>>>something beyond the scope of their calculations doesn't change the results.
>>
>>Certainly, but I would maintain that _most_ of the time their calculations are
>>dead
>>right.  With an occasional glitch since no process is perfect.
>>
>>
>>
>>>
>>>>(3)  Is there any science in the process of designing a chip, or is it a bunch
>>>>of "trial and error" operations?  IE in Intel's "roadmap" they are discussing
>>>>plans for the next 18 months or so, with predicted clock frequencies.  Are they
>>>>able to simply take today's chips and "crank 'em up" after a year, or are they
>>>>changing the fab, the dies, etc to make the chip faster.
>>>
>>>Of course they tweak the manufacturing process over its lifetime.  Processor
>>>cores are tweaked less often.  I ask again, do you seriously think that when
>>>Intel went from .18um Willamette P4s at 2GHz to .13um Northwood P4s that they
>>>couldn't increase the clockspeed, even given the immaturity of the process?
>>>
>>
>>
>>That is exactly what I believe, yes.  There is a _big_ gain if you are 2x faster
>>than
>>your competitor.  Just like everyone here looks at the SSDF list, and buys the
>>program
>>at the top, even if it is just a tiny bit higher in rating.  What would they do
>>if the top
>>program were 200 points higher?  Buy fewer?  I don't think so, it would create a
>>greater
>>demand for _that_ program.
>
>And when the next version of that program is released, what happens?  Are people
>going to plop down another $50 for it, knowing that what they already have is
>still 200 points higher than anything else?  If it was $1000, would they still
>buy the next version?

I think the purchase decision is made at the time of need.  I want a program
now: what is best, and by how much?  The "better" it is, the more I am willing
to pay for that difference between it and its competitors.  As the others catch
up, I either have to lower my price to approach theirs, or make my program
faster to maintain that advantage.  But at any instant in time, the larger the
gap between my product and theirs, the better off I will be in the world of
marketing.





>
>>When I purchase machines, I look at the clock speed, and I benchmark.  If one is
>>clearly
>>faster, that's what I choose.  If it is _much_ faster, I'm likely to buy
>>several.
>
>And if you had something so much faster than anything else, would you feel the
>need to replace those machines anytime soon?

Depends.  The typical lifetime of a machine today is 3 years.  If you can run
at X GHz today, in 3 years you will certainly be able to run at least 2X, so
yes, I'd buy one.
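
That "at least 2X in 3 years" is just the usual doubling rule of thumb.  A
minimal Python sketch, assuming the clock doubles on a fixed period (the
18-month figure is the traditional rule, borrowed here as an assumption;
"at least 2X" only needs a 3-year period):

def projected_clock(ghz_now, years, doubling_years=1.5):
    """Project clock speed assuming it doubles every `doubling_years`."""
    return ghz_now * 2 ** (years / doubling_years)

print(projected_clock(3.0, 3, doubling_years=3.0))  # 6.0 GHz: the conservative "2X"
print(projected_clock(3.0, 3))                      # 12.0 GHz: the 18-month rule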




>
>>>>I hope you see where this is going.  I believe, from the engineers that I
>>>>know (and that is not a huge number but I know enough) that this is a very
>>>>precise deal.  I designed stuff many years ago using TTL and CMOS stuff, and
>>>>the "books" were my bible for doing this, telling me _exactly_ what the gate
>>>>delays were for each type of chip (ie LS, etc.)
>>>
>>>Modern MPU manufacturing is FAR removed from designing small-scale TTL/CMOS
>>>stuff.  There are way more factors involved in clock scaling potential than just
>>>the gate delays, which themselves are determined by several other factors (e.g.,
>>>thickness of the gate oxide layers - thinner layers allow greater clock
>>>scaling).
>>>
>>
>>Actually it isn't.  Silicon compilers do a lot of the work I had to do by hand.
>>Summing
>>gate delays.  Doing the routing.  And in my day, an error was a terrific delay
>>as it took a lot
>>of time to re-do.  With silicon design tools, that process is much simpler from
>>the human's
>>perspective today, which is a plus.
>
>Intel does (nearly) full custom design on their commodity x86 chips.  I found
>some specific information about custom design, but most of what I could find was
>on the order of, "xyz uses/offers full-custom design."
>
>* Designer hand draws geometries which specify transistors and other devices for
>an integrated circuit.
>* Can achieve very high transistor density.
>* Design time can be very long (multiple months).
>* Offers the chance for optimum performance. Performance is based on available
>process technology, designer skill, and CAD tool assistance.

There you go.  Last item.


>
>So I'd say things are not easier for the human today who designs such a part,
>though they do have large teams working on each design and lots of computer
>assistance.  An error is still very hard to find, and might take even longer to
>fix today than it used to.

No disagreement.  But if I need to move something from over here to over there,
it is a simpler process with a good design tool.  For example, go to an
architect today and find a house plan you like.  Ask him to change the
downstairs ceiling height from 9' to 10'.

It takes him two seconds, even though the stairs now need another two steps,
which might move a doorway, etc.  30 years ago you broke out the T-square and
drawing board and started over.

>
>IBM, on the other hand, uses nearly full automated design process for their
>POWER4 chips.  It's much easier for them to move to a new fab process and to
>create new cores to use on existing fab processes.
>
>>And let's back up to your first premise.  Oxide layers are _part_ of the gate
>>delay issue.  In
>
>Yes, a small part.  It was just one example of something that affects clock
>scaling beyond simply summing the gate/wire delays.

However, it is _part_ of the gate/wire delay.  The specs are pretty clear for
most any gate you want to drop into a circuit.  The delays are directly
calculable from the specs of the fab process you are using, since they are tied
to the physical properties of the stuff being built.
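
That arithmetic is simple enough to sketch in Python: sum the per-stage delays
along the critical path and invert to get the max clock.  The picosecond values
below are invented; real ones come from the fab's process characterization:

# The max clock falls directly out of summing gate/wire delays along
# the critical (longest) path.  Stage delays here are invented numbers.

critical_path_ps = [35, 50, 50, 40, 60, 65]    # assumed stage delays, ps

period_ps = sum(critical_path_ps)              # best-case clock period
fmax_ghz = 1000.0 / period_ps                  # a 1 GHz period is 1000 ps

print(f"critical path {period_ps} ps -> fmax ~ {fmax_ghz:.2f} GHz")  # 3.33 GHz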


>
>>the 80's we had a host of 74xx TTL chips we could choose.  High power.  Low
>>power.
>>Schottky.  You-name-it.  The differences were in the switching times, the power
>>requirements,
>>the power dissipation, etc.  That is a part of the silicon design process.  And
>>it is dictated by the
>>fab process as I had mentioned...
>
>There are things not dictated by the actual design or manufacturing process that
>affect clock scaling, like the packaging, which I already mentioned.

It's still a part of the overall "system", however.  For example, a laptop
processor can't run as hot as a desktop processor, because there can't be a
6" tall heat sink to help keep it from frying.  So that simply becomes a design
constraint, and you end up not producing a chip that runs as fast as it might
in an environment where heat can be removed more easily with a big sink and a
pair of fans blowing right on it.

But none of that happens in a desktop.  The engineer tries for max clock speed,
tied to some livable thermal constraints, and designs/produces a chip that
meets _all_ the constraints he was given.
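
The usual first-order way to see that constraint: dynamic CMOS switching power
scales roughly as a*C*V^2*f, so the package's thermal budget caps the clock.
A minimal Python sketch; the capacitance, voltage, activity factor, and wattage
budgets below are all invented for illustration:

# First-order thermal constraint: switching power ~ a * C * V^2 * f,
# so a fixed power budget caps the clock.  All values are invented.

def max_clock_ghz(budget_w, c_nf=50.0, volts=1.5, activity=0.2):
    """Highest clock (GHz) whose switching power fits budget_w watts."""
    c_farads = c_nf * 1e-9
    return budget_w / (activity * c_farads * volts**2) / 1e9

print(f"laptop,  35 W budget: {max_clock_ghz(35.0):.1f} GHz")   # ~1.6
print(f"desktop, 80 W budget: {max_clock_ghz(80.0):.1f} GHz")   # ~3.6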



