Computer Chess Club Archives




Subject: Re: Introducing "No-Moore's Law"

Author: Robert Hyatt

Date: 21:07:01 02/28/03

Go up one level in this thread

On February 28, 2003 at 20:41:59, Jeremiah Penery wrote:

>On February 28, 2003 at 18:11:36, Robert Hyatt wrote:
>>On February 28, 2003 at 17:28:23, Jeremiah Penery wrote:
>>>On February 28, 2003 at 11:45:18, Robert Hyatt wrote:
>>>>On February 28, 2003 at 01:36:00, Jeremiah Penery wrote:
>>>>>On February 27, 2003 at 22:45:31, Robert Hyatt wrote:
>>>>>>If you have time, I want to ask a very precise set of questions, since we keep
>>>>>>going around and around with the non-engineering types here...
>>>>>Somehow, I get the impression that you're just skimming what I write, assuming I
>>>>>must be saying what you think I'm saying.
>>>>I'm not skimming anything.  You said "the chips will run much faster but they
>>>>underclock them when they are released..."
>>>I never said exactly that.  I have said similar things, but they carry
>>>different meanings.  This goes back to marketing.  I say marketing forces
>>>them to release chips only
>>>as fast as they need to.
>>If you read Steve's comments, he disagreed with this just as I did.
>Not exactly.  Engineers producing chips does not equate to the company selling
>such chips.  Intel has demoed ~5GHz chips, IIRC.  By your logic, they should
>already be shipping.

Not by _my_ logic.  I discussed prototypes.  Chips produced at a yield of say
5-10% as the fab process is being tuned up for the new dies and design.  I don't
see a thing that is inconsistent there.  But once the fab line is running at
production speed, I don't think you will find them producing chips that will
run at X, but sell them at X-n, unless it is after the demand for the full-
speed chips is satisfied and they notice a demand for slower chips that could
be filled most economically by using the current chips after remarking them.
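The yield point above can be made concrete with a little arithmetic.  A minimal sketch (all numbers hypothetical, since the post gives only the 5-10% prototype yield figure): the wafer cost is roughly fixed, so the cost per *good* die falls dramatically as the fab line is tuned up.

```python
# Illustrative sketch: cost per working die at prototype vs. mature yield.
# Wafer cost and dies-per-wafer are made-up numbers for illustration.

def cost_per_good_die(wafer_cost, dies_per_wafer, yield_fraction):
    """Amortize the fixed wafer cost over only the dies that work."""
    good_dies = dies_per_wafer * yield_fraction
    return wafer_cost / good_dies

early  = cost_per_good_die(5000.0, 200, 0.05)  # 5% yield while tuning the fab
mature = cost_per_good_die(5000.0, 200, 0.80)  # mature production line
print(f"early: ${early:.2f}/die, mature: ${mature:.2f}/die")
# → early: $500.00/die, mature: $31.25/die
```

This is why prototype-era chips can exist well before volume shipment makes economic sense.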

>>Yes, they might "underclock" some faster chips.  If (1) the fab line is
>>producing faster chips at a rate that satisfies the demand for that
>>speed; and (2) there is a demand for slower chips, and siphoning off faster
>>chips and marking them slower won't impact the ability to meet the demand
>>for the faster chips.
>>No, I don't believe they underclock for any other reason.  There is much money
>>to be made by clocking them as fast as possible.  Just look at the difference
>>in the price for a 2.4GHz Xeon and a 2.8GHz Xeon and ask yourself "given the
>>demand, which should I produce?"  The faster they go, compared to the
>>competition, the more they will sell.
>Then why does SPARC still sell so well?  Why isn't Alpha ubiquitous on the
>desktop?  In the ideal world, such a statement would hold true.  But in this
>world, marketing forces have a lot more to do with this than they probably
>should.

SPARC isn't selling "so well".  In fact, if you talk to Sun insiders, it is
doomed and they are moving to the PC world quickly.  They already use a PC
type chassis with IDE disks and the like.  Care to guess why?  Processor sucks.
Everybody knows it sucks.  Only the "sun loyalty" keeps a few coming back.  We
used to be 100% sun, for example.  We now have 5 out of 250 computers here.
The rest are mostly PCs with a few others (SGI, etc) thrown in for good measure.

The Alpha was too expensive for the desktop.  The 256-bit bus was a killer.  But
that is _all_ that was wrong with it.  Give things another 3-5 years and
64-bit chips _will_ be the norm.

>>>>>When Intel shrunk the P4 from .18um to .13um, the processor speed first released
>>>>>in .13um was the same as the top-end part from .18um - 2GHz.  It's laughable to
>>>>>think that process shrink wouldn't give them quite a bit of frequency headroom
>>>>>from the very first, even given the immaturity of the process.
>>>>It isn't so "laughable" to me.  First, it might take time to get the fab process
>>>>tuned up to reliably produce faster parts.  But reducing the die size has other
>>>>advantages, including lower voltage and lower heat, at the same clock frequency,
>>>>so there is a reason for going there..
>>>Marketing, again.  They didn't need a processor with higher clockspeed than was
>>>released before, so they didn't make one right away.
>>If you believe that, that's certainly your choice.  I don't.  And no engineer I
>>have talked
>>to suggests that that happens.  The thing driving the clock speed is money.
>>Faster means
>>more sales.  The thing limiting clock speed is technology.  What will the fab
>>produce and
>>how fast can they run?
>>I've _never_ seen a case of someone intentionally coming in slower than they
>>could, knowing
>>that the wider the gap between their speed and the speed of their competition,
>>the wider the
>>gap in sales will be also.
>You still completely misunderstand the marketing concept at work here.  They
>will sell more for _this_ generation of chips, but they sell far less of the
>next few generations.

It doesn't matter.  The chips come off the _same_ fab line.  At the same cost.
Does it matter if you sell a million today and 500K next year, or 500K today
and 1M next year?  Profit = selling_price - cost.
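The arithmetic behind that claim can be sketched directly.  Under the post's simplifying assumptions (same fab line, same unit cost, same selling price, no discounting of future revenue), total profit depends only on total units sold, not on which year they sell:

```python
# Sketch of the profit-timing argument: profit = units * (price - cost),
# summed over years.  Prices and volumes are illustrative, not real data.

def total_profit(units_by_year, selling_price, cost):
    return sum(units * (selling_price - cost) for units in units_by_year)

sell_fast = total_profit([1_000_000, 500_000], selling_price=300, cost=100)
sell_slow = total_profit([500_000, 1_000_000], selling_price=300, cost=100)
assert sell_fast == sell_slow  # 1.5M units either way → identical profit
```

In reality a dollar next year is worth less than a dollar today, which actually strengthens the "sell the fastest chip now" side of the argument.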

>>>.13um P3s were being produced at the same time .18um P4s were being produced, so
>>>the process wasn't totally immature or untested.  1400MHz P3 uses lower voltage
>>>and has far less thermal dissipation than the 1800MHz P4 in the same process,
>>>while remaining very competitive with the P4 in performance, despite the
>>>disparity in clock speed.  By your logic, they would have ramped up and promoted
>>>the P3 because of these advantages.  Instead, Intel worked quietly to kill the
>>>P3 once P4 was released.
>>That has nothing to do with the current discussion, however.  The _fastest_ they
>>run is what the engineers are targeting.  Not something less.
>>It may well be that the lower voltage parts were in demand for _another_ reason.
>> Perhaps
>>the laptop world.  If you want to limit heat dissipation, that is a valid
>>constraint.  But not
>>in the desktop world most likely, with heatsinks and fans galore.
>_You_ were the one who said it first, not me.  "But reducing the die size has
>other advantages, including lower voltage and lower heat, at the same clock
>frequency, so there is a reason for going there.."  I just tried to respond to
>it.

What's wrong with that statement?  There is _no_ reason to go to reduced die
size except to (a) shorten distances, (b) shorten switching times, (c) reduce
power requirements, and (d) reduce heat dissipation.  You can go to a smaller
fab without going faster, if your only goal is reduced heat, for example.  But
we _were_ talking about speed, and nothing else.

>>>>> Even on a very
>>>>>mature process, near the end of a core's lifespan, they _have_ to leave some
>>>>>headroom at the top, or you get processors like the 1.13GHz P3, which was pushed
>>>>>right to the limit, and suffered for it.
>>>>I don't disagree.  I simply claim the "headroom" is not very large, and it is
>>>>made just larger than the expected variance in the parts off the fab line, so
>>>>that _all_ will
>>>>run at that speed reliably.  10%?  Maybe.  25-50%?  Not a chance in hell...
>>>10% is not a small amount.  I wouldn't be surprised if it was a bit more today,
>>>though.  In the days of 25MHz chips, they probably had no practical headroom -
>>>1MHz would be a full 4%.  As clock speed increases, more headroom must be given.
>>Yes, but if you look back at comments in this thread, 10% was _not_ discussed.
>>It was
>>more like 3ghz vs 5ghz, and that is a _huge_ "headroom".
>You wonder why I thought it seemed like you were skimming my posts.  This is
>exactly why.  5GHz was never used other than as an example number pulled out of
>the air, in a discussion about whether Intel could theoretically release such a
>chip.  I never _remotely_ claimed Intel was producing 5GHz silicon and releasing
>it at 3GHz.
>The biggest potential number for headroom I may have given was 25%.  That may be
>a bit high, but not implausibly so.

And if that is all you are saying, then why are you arguing?  I haven't
said anything different, which leads me to wonder who is _really_ skimming.  :)

I don't think 25% is _anywhere_ within reality at the top end of the product
line.  I think 10% is a big stretch.  And I think that "headroom" is directly
proportional to how accurately the fab process works.  The less variance, the
less headroom.  New fabs probably have a significant headroom, but it definitely
shrinks as the fab matures.

But I think that would be true anywhere.  Whether you are building racing
outboard motors or computer chips.  The better the assembly process, the
closer the tolerances, the more consistent things are and the closer you can
push them to their theoretical maxes...
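The "headroom tracks fab variance" idea can be sketched as a guard-band calculation: rate the part enough standard deviations below the mean maximum frequency that essentially every die off the line meets the rated speed.  The frequencies below are invented for illustration; the point is only the relationship between spread and headroom.

```python
# Hedged sketch: rated speed = mean(max frequency) - k * stddev, so a
# tighter process gives away less headroom.  Sample data is hypothetical.
import statistics

def rated_speed(max_freqs_mhz, sigmas=3.0):
    mu = statistics.mean(max_freqs_mhz)
    sd = statistics.pstdev(max_freqs_mhz)
    return mu - sigmas * sd

new_fab    = [2900, 3100, 3000, 2800, 3200]  # new process, wide variance
mature_fab = [2990, 3010, 3000, 2995, 3005]  # mature process, tight tolerances

print(rated_speed(new_fab), rated_speed(mature_fab))
# The mature line, with the same 3000 MHz mean, supports a much higher rating.
```

Both sample sets have a 3000 MHz mean, yet the mature line's rating lands within about 1% of it while the new line's sits roughly 14% below — the shrinking headroom described above.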

>>>>>There are a ton of variables that can affect the clock speed attainable by a
>>>>>certain chip design on a certain process.  If you really feel like discussing
>>>>>specifics, I can do so.  Here, I will only say that there are things beyond the
>>>>>path timings that can affect attainable clockspeed.  Thermal issues are a big
>>>>>deal for this.  Even the packaging used can affect clock scaling of a particular
>>>>I don't believe I have said otherwise.  Also the very process of depositing
>>>>material to
>>>>build pathways is not precise, leading to variable widths and resistance/etc.
>>>>That's a
>>>>part of the fab process.
>>>Which is one reason why their calculations can be wrong, and they have to test.
>>Again, _everything_ has to be tested after it is designed and the prototypes are
>>built.  So what?  I have to test _every_ program I write.  But, in general,
>>what I expect from a program is usually delivered.  Whether the target be a specific
>>run-time, or memory limit, or I/O throughput, or whatever.
>>But testing to see if things work is completely different from first building
>>and then testing to see if it is even viable as a solution.  That is very
>>infrequent, particularly
>>in the semiconductor world.  Unless you take "leaps" like using copper, or GaAs,
>>whatever...  But even then the engineers have a pretty good idea on what to
>>expect for the
>>first chip out of the box...
>So what are you harping on about here?  _Nobody_ ever claimed that they build
>first and then test.  Nobody disputed that the engineers should have a good idea
>of the maximum clock speed.  All I've said is that they may be able to produce
>something, but that doesn't mean they will sell it.

I'm only "harping" on how engineering works.  It is _not_ as haphazard as you
and others would suggest.  The engineers know very well what a particular fab
process and design will do.  It might take a while to get there, but there is
no guesswork at all...

And that is what I have been saying over and over.  Steve agreed.  As did any
other engineer I have talked to over the years...

>>>>>cores are tweaked less often.  I ask again, do you seriously think that when
>>>>>Intel went from .18um Willamette P4s at 2GHz to .13um Northwood P4s that they
>>>>>couldn't increase the clockspeed, even given the immaturity of the process?
>>>>That is exactly what I believe, yes.  There is a _big_ gain if you are 2x
>>>>faster than your competitor.  Just like everyone here looks at the SSDF list,
>>>>and buys the one at the top, even if it is just a tiny bit higher in rating.
>>>>What would they do if the top
>>>>program were 200 points higher?  Buy fewer?  I don't think so, it would create a
>>>>demand for _that_ program.
>>>And when the next version of that program is released, what happens?  Are people
>>>going to plop down another $50 for it, knowing that what they already have is
>>>still 200 points higher than anything else?  If it was $1000, would they still
>>>buy the next version?
>>I think the purchase decision is made at the time of need.  I want a program now.
>You've already bought the program that's 200 points better than the competitors.
> The question is, "When do you 'need' to buy a replacement program?"  If you
>already have something twice as good as any competing product, it's very likely
>that your buying cycle time will dramatically increase.

Not if they come out next year with something 200 points better.  Or not if
I need _another_ program (not all computers are replacements; a great number
are _new_ installations, and that percentage is climbing, not dropping, as more
first-time buyers take the leap).

>> What is
>>best and by how much.  The "better" it is, the more I am willing to pay for that
>In 6 months, will you be willing to pay that much again for something only
>slightly better, given that what you already have is still twice as good as the
>competition?  Probably not.  This is assuming that the company is able to
>improve their product _at all_.

What is the point of the question?

Will I buy a 3GHz machine today and buy a 3.2GHz machine in 6 months?  No.
But the risk is: will I buy that 3.0GHz chip at all, if your competition is
right behind you in speed and significantly below you in cost (AMD vs Intel,
for example)?

But if you offer me 3.5ghz today, I'll take it if I am in the market for a
new machine, no questions asked.

>>between it and its competitors.  As the others catch up, I either have to lower
>>my price to
>>approach theirs, or make my program faster to maintain that advantage.  But at
>>any instant
>>in time, the larger the gap between my product and theirs, the better off I will
>>be in the world
>>of marketing.
>The better off you are _today_.  If companies focused only on today, they would
>fail tomorrow.  In your business model, the companies sacrifice long term sales
>and revenue for a quick injection of cash.  That won't sustain anyone.

I still don't see how this is an issue.  How will producing a slower product
today help me tomorrow?  Once I lose a customer to a competitor, it is _much_
harder to get them _back_.  I'd want to offer the best that I could offer, to
drain _their_ customers that need more performance.

IE Cray _never_ played these games, _ever_.

I don't believe any other vendor does either.

>>>>When I purchase machines, I look at the clock speed, and I benchmark.  If one is
>>>>faster, that's what I choose.  If it is _much_ faster, I'm likely to buy
>>>And if you had something so much faster than anything else, would you feel the
>>>need to replace those machines anytime soon?
>>Depends.  The typical lifetime of a machine today is 3 years.  If you can run
>We're not discussing the typical case.  Anyway, why does that have any bearing
>on the issue?
>>at X ghz today, in 3 years you will certainly be able to run at least 2X, so
>If there was a sudden huge jump in clock speed (to 5GHz or whatever), what makes
>you think it would be as scalable as current processors?
>>yes, I'd buy one.
>If a new super-processor is released, and everyone buys it today (which you say
>would happen), then they're going to wait 3 years to buy a new one, by your
>example.  Where are the processor sales in the interim?  That's exactly why this
>business model _does not work_.

You do realize that not _everyone_ is going to buy today?  Some just bought
yesterday.  They will be my customer in 3 years.  Some bought last year.  They
will be my customers in two years.  This is a _huge_ market.  A vendor is going
to sell a certain number of processors, period, due to new customers and
old equipment replacement.  If he can go even faster, he will also attract those
customers that might be buying a competitive processor, which _increases_ the
total revenue.
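The staggered-replacement argument above can be sketched as a toy cohort model (all volumes hypothetical): with buyers spread across a 3-year cycle, roughly a third of the installed base comes to market every year regardless of when a faster chip ships, and new installations compound the base over time.

```python
# Toy model: each year, the cohort that bought 3 years ago replaces its
# machines, plus a stream of first-time buyers.  Numbers are illustrative.

def sales_over_years(cohorts, new_installs, years):
    """cohorts: units (in millions) bought 1, 2, and 3 years ago, oldest last."""
    sales = []
    for _ in range(years):
        replacements = cohorts.pop()       # 3-year-old machines come due
        this_year = replacements + new_installs
        cohorts.insert(0, this_year)       # today's buyers re-enter the cycle
        sales.append(this_year)
    return sales

# 30M bought in each of the last 3 years, plus 5M new installations/year:
print(sales_over_years([30, 30, 30], new_installs=5, years=4))
# → [35, 35, 35, 40]
```

Sales stay steady every year and then grow once the enlarged cohorts cycle back, which is the point: there is no interim "dead zone" even if everyone in the market buys the fast chip the year it appears.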

>>>>Actually it isn't.  Silicon compilers do a lot of the work I had to do by
>>>>hand: computing gate delays, doing the routing.  And in my day, an error was a
>>>>terrific delay
>>>>as it took a lot
>>>>of time to re-do.  With silicon design tools, that process is much simpler from
>>>>the human's
>>>>perspective today, which is a plus.
>>>Intel does (nearly) full custom design on their commodity x86 chips.  I found
>>>some specific information about custom design, but most of what I could find was
>>>on the order of, "xyz uses/offers full-custom design."
>>>* Designer hand draws geometries which specify transistors and other devices for
>>>an integrated circuit.
>>>* Can achieve very high transistor density.
>>>* Design time can be very long (multiple months).
>>>* Offers the chance for optimum performance. Performance is based on available
>>>process technology, designer skill, and CAD tool assistance.
>>There you go.  Last item.
>Ignoring the first item...

Haven't ignored it at all.  It has been _the_ topic.

>>>So I'd say things are not easier for the human today who designs such a part,
>>>though they do have large teams working on each design and lots of computer
>>>assistance.  An error is still very hard to find, and might take even longer to
>>>fix today than it used to.
>>No disagreement.  But if I need to move something from over here to over there,
>>it is a simpler process with a good design tool.  IE go to an architect today
>>and find
>>a house plan you like.  Ask him to change the downstairs ceiling height from 9'
>>to 10'.
>>It takes him two seconds.  Even though the stairs now have to have another
>>two steps, which might move a doorway, etc.  30 years ago you broke out the
>>T-square and
>>drawing board and started over.
>If he's designing a several-million square foot, 100 story office tower,
>compared to the architect 30 years ago designing a 1500 square foot, 2 story
>house, the job is certainly NOT easier for the human today, no matter what tools
>he has.

But he isn't.  He is designing a small city, made up of one of these, one of
those, two of those, etc.  Divide and conquer and all that.  It isn't just one
monolithic design.

>>>There are things not dictated by the actual design or manufacturing process that
>>>affect clock scaling, like the packaging, which I already mentioned.
>>Its still a part of the overall "system" however.  IE a laptop processor can't
>>run as hot as a
>>desktop processor, because there can't be a 6" tall heat sink to help keep it
>>from frying.  So
>>that simply becomes a design constraint and you end up not producing a chip that
>>runs as
>>fast as it might in an environment where heat can be eliminated easier with a
>>big sink and
>>a pair of fans blowing right on it.
>That doesn't mean the silicon isn't capable of running at the same speed as the
>desktop part.  That's the crux of this entire argument.

No it isn't.  The crux of the argument is "can the desktop processor, with the
full setup for heatsink and fan and power supply and so forth run faster than
the engineers say?"

The answer seems to be "no" according to popular (engineering) opinion.

Once again, I do not _care_ about the intentionally slowed down processors
and whether they will overclock or not.  I care about the front-line fastest
chips being produced _only_.  All my comments are addressed to that specific
segment of the chip market.  Not the low-heat (mobile) processors.  Not the
re-marked slower-clocked processors made for an economy niche.  The best of
the processors _only_ is what I have been talking about, and I have _not_
been vague in that position whatsoever...

All attempts to change the chip topic will be returned to the main idea,
time after time.  :)


Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.