Computer Chess Club Archives




Subject: Re: Introducing "No-Moore's Law"

Author: Jeremiah Penery

Date: 17:41:59 02/28/03


On February 28, 2003 at 18:11:36, Robert Hyatt wrote:

>On February 28, 2003 at 17:28:23, Jeremiah Penery wrote:
>>On February 28, 2003 at 11:45:18, Robert Hyatt wrote:
>>>On February 28, 2003 at 01:36:00, Jeremiah Penery wrote:
>>>>On February 27, 2003 at 22:45:31, Robert Hyatt wrote:
>>>>>If you have time, I want to ask a very precise set of questions, since we keep
>>>>>going around and around with the non-engineering types here...
>>>>Somehow, I get the impression that you're just skimming what I write, assuming I
>>>>must be saying what you think I'm saying.
>I'm not skimming anything.  You said "the chips will run much faster but they
>underclock them when they are released..."
>>I never said exactly that.  I have said similar things, but they carry different
>>meanings.  This goes back to marketing.  I say marketing forces them to release
>>chips only as fast as they need to.
>If you read Steve's comments, he disagreed with this just as I did.

Not exactly.  Engineers being able to produce chips does not mean the company
will sell those chips.  Intel has demoed ~5GHz chips, IIRC.  By your logic, those
should already be shipping.

>Yes, they might "underclock" some faster chips.  If (1) the fab line is yielding
>faster chips and producing them at a rate that satisfies the demand for that
>speed;  and (2) there is a demand for slower chips, and siphoning off faster
>parts and marking them slower won't impact the ability to meet the demand for
>the faster chips.
>No, I don't believe they underclock for any other reason.  There is much money
>to be made by clocking them as fast as possible.  Just look at the difference in
>the price for a 2.4GHz Xeon and a 2.8GHz Xeon and ask yourself "given the
>demand, which should I produce?"  The faster they go, compared to the
>competition, the more they will sell.

Then why does SPARC still sell so well?  Why isn't Alpha ubiquitous on the
desktop?  In an ideal world, such a statement would hold true.  But in this
world, marketing forces have a lot more to do with it than they probably should.

>>>>When Intel shrunk the P4 from .18um to .13um, the processor speed first released
>>>>in .13um was the same as the top-end part from .18um - 2GHz.  It's laughable to
>>>>think that process shrink wouldn't give them quite a bit of frequency headroom
>>>>from the very first, even given the immaturity of the process.
>>>It isn't so "laughable" to me.  First, it might take time to get the fab process
>>>tuned up to reliably produce faster parts.  But reducing the die size has other
>>>advantages, including lower voltage and lower heat, at the same clock frequency,
>>>so there is a reason for going there..
>>Marketing, again.  They didn't need a processor with higher clockspeed than was
>>released before, so they didn't make one right away.
>If you believe that, that's certainly your choice.  I don't.  And no engineer I
>have talked to suggests that that happens.  The thing driving the clock speed is
>money.  Faster means more sales.  The thing limiting clock speed is technology.
>What will the fab produce and how fast can they run?
>I've _never_ seen a case of someone intentionally coming in slower than they
>could, knowing that the wider the gap between their speed and the speed of their
>competition, the wider the gap in sales will be also.

You still completely misunderstand the marketing concept at work here.  They
will sell more of _this_ generation of chips, but they will sell far fewer of
the next few generations.

>>.13um P3s were being produced at the same time .18um P4s were being produced, so
>>the process wasn't totally immature or untested.  1400MHz P3 uses lower voltage
>>and has far less thermal dissipation than the 1800MHz P4 in the same process,
>>while remaining very competitive with the P4 in performance, despite the
>>disparity in clock speed.  By your logic, they would have ramped up and promoted
>>the P3 because of these advantages.  Instead, Intel worked quietly to kill the
>>P3 once P4 was released.
>That has nothing to do with the current discussion, however.  The _fastest_ they
>run is what the engineers are targeting.  Not something less.
>It may well be that the lower voltage parts were in demand for _another_ reason.
>Perhaps the laptop world.  If you want to limit heat dissipation, that is a
>valid constraint.  But not in the desktop world most likely, with heatsinks and
>fans galore.

_You_ were the one who said it first, not me.  "But reducing the die size has
other advantages, including lower voltage and lower heat, at the same clock
frequency, so there is a reason for going there.."  I just tried to respond to
that.

>>>> Even on a very
>>>>mature process, near the end of a core's lifespan, they _have_ to leave some
>>>>headroom at the top, or you get processors like the 1.13GHz P3, which was pushed
>>>>right to the limit, and suffered for it.
>>>I don't disagree.  I simply claim the "headroom" is not very large, and it is
>>>made just larger than the expected variance in the parts off the fab line, so
>>>that _all_ will run at that speed reliably.  10%?  Maybe.  25-50%?  Not a
>>>chance in hell...
>>10% is not a small amount.  I wouldn't be surprised if it was a bit more today,
>>though.  In the days of 25MHz chips, they probably had no practical headroom -
>>1MHz would be a full 4%.  As clock speed increases, more headroom must be given.
>Yes, but if you look back at comments in this thread, 10% was _not_ discussed.
>It was more like 3GHz vs 5GHz, and that is a _huge_ "headroom".

You wonder why I thought it seemed like you were skimming my posts.  This is
exactly why.  5GHz was never used other than as an example number pulled out of
the air, in a discussion about whether Intel could theoretically release such a
chip.  I never _remotely_ claimed Intel was producing 5GHz silicon and releasing
it at 3GHz.

The biggest potential number for headroom I may have given was 25%.  That may be
a bit high, but not implausibly so.
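The arithmetic behind these headroom figures is easy to sketch.  This snippet uses only the illustrative numbers from this thread (25MHz-era parts, a 2.8GHz part, the hypothetical 3GHz-from-5GHz-silicon scenario), not any measured silicon data:

```python
# Illustrative headroom arithmetic, using only the example figures from
# this thread (not measured silicon data).

def headroom_mhz(rated_mhz, headroom_pct):
    """Absolute frequency margin (MHz) implied by a percentage headroom."""
    return rated_mhz * headroom_pct / 100.0

# In the 25MHz era, a single 1MHz speed-grade step was already a 4% margin:
step_pct = 1 / 25 * 100          # 4.0%

# A 10% margin on a 2.8GHz part is a 280MHz absolute step:
margin = headroom_mhz(2800, 10)  # 280.0 MHz

# Shipping a 3GHz part from 5GHz-capable silicon would imply ~67%
# headroom - far beyond the 10-25% range actually discussed here:
implied_pct = (5000 - 3000) / 3000 * 100
```

This is just the point about absolute versus relative margins: the same percentage headroom grows into a much larger absolute frequency gap as clock speeds rise.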

>>>>There are a ton of variables that can affect the clock speed attainable by a
>>>>certain chip design on a certain process.  If you really feel like discussing
>>>>specifics, I can do so.  Here, I will only say that there are things beyond the
>>>>path timings that can affect attainable clockspeed.  Thermal issues are a big
>>>>deal for this.  Even the packaging used can affect clock scaling of a
>>>>particular chip.
>>>I don't believe I have said otherwise.  Also the very process of depositing
>>>material to build pathways is not precise, leading to variable widths and
>>>resistance/etc.  That's a part of the fab process.
>>Which is one reason why their calculations can be wrong, and they have to test.
>Again, _everything_ has to be tested after it is designed and the prototypes are
>built.  So what?  I have to test _every_ program I write.  But, in general, what
>I expect from a program is usually delivered.  Whether the target be a specific
>run-time, or memory limit, or I/O throughput, or whatever.
>But testing to see if things work is completely different from first building
>and then testing to see if it is even viable as a solution.  That is very
>infrequent, particularly in the semiconductor world.  Unless you take "leaps"
>like using copper, or GaAs, or whatever...  But even then the engineers have a
>pretty good idea on what to expect for the first chip out of the box...

So what are you harping on about here?  _Nobody_ ever claimed that they build
first and then test.  Nobody disputed that the engineers should have a good idea
of the maximum clock speed.  All I've said is that they may be able to produce
something, but that doesn't mean they will sell it.

>>>>cores are tweaked less often.  I ask again, do you seriously think that when
>>>>Intel went from .18um Willamette P4s at 2GHz to .13um Northwood P4s that they
>>>>couldn't increase the clockspeed, even given the immaturity of the process?
>>>That is exactly what I believe, yes.  There is a _big_ gain if you are 2x
>>>faster than your competitor.  Just like everyone here looks at the SSDF list,
>>>and buys the program at the top, even if it is just a tiny bit higher in
>>>rating.  What would they do if the top program were 200 points higher?  Buy
>>>fewer?  I don't think so, it would create a demand for _that_ program.
>>And when the next version of that program is released, what happens?  Are people
>>going to plop down another $50 for it, knowing that what they already have is
>>still 200 points higher than anything else?  If it was $1000, would they still
>>buy the next version?
>I think the purchase decision is made at the time of need.  I want a program now.

You've already bought the program that's 200 points better than the competitors.
 The question is, "When do you 'need' to buy a replacement program?"  If you
already have something twice as good as any competing product, it's very likely
that your buying cycle time will dramatically increase.

> What is
>best and by how much.  The "better" it is, the more I am willing to pay for that
>advantage

In 6 months, will you be willing to pay that much again for something only
slightly better, given that what you already have is still twice as good as the
competition?  Probably not.  This is assuming that the company is able to
improve their product _at all_.

>between it and its competitors.  As the others catch up, I either have to lower
>my price to approach theirs, or make my program faster to maintain that
>advantage.  But at any instant in time, the larger the gap between my product
>and theirs, the better off I will be in the world of marketing.

The better off you are _today_.  If companies focused only on today, they would
fail tomorrow.  In your business model, the companies sacrifice long-term sales
and revenue for a quick injection of cash.  That won't sustain anyone.

>>>When I purchase machines, I look at the clock speed, and I benchmark.  If one is
>>>faster, that's what I choose.  If it is _much_ faster, I'm likely to buy more.
>>And if you had something so much faster than anything else, would you feel the
>>need to replace those machines anytime soon?
>Depends.  The typical lifetime of a machine today is 3 years.  If you can run

We're not discussing the typical case.  Anyway, why does that have any bearing
on the issue?

>at X ghz today, in 3 years you will certainly be able to run at least 2X, so

If there was a sudden huge jump in clock speed (to 5GHz or whatever), what makes
you think it would be as scalable as current processors?

>yes, I'd buy one.

If a new super-processor is released, and everyone buys it today (which you say
would happen), then they're going to wait 3 years to buy a new one, by your
example.  Where are the processor sales in the interim?  That's exactly why this
business model _does not work_.
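The replacement-cycle argument above can be sketched as a toy model.  Everything here is hypothetical (the clock speeds, the 30% upgrade threshold, the buyer behavior) and serves only to show the shape of the argument, not to model real sales data:

```python
# Toy model of the replacement-cycle argument (all figures hypothetical).
# Assume a buyer upgrades only when a new part is at least ~30% faster
# than the one they already own.  Steady incremental releases then
# trigger more upgrade purchases over the same period than one big
# jump followed by a long plateau.

def purchases(speeds, upgrade_threshold=1.3):
    """Count purchases by a buyer who starts by buying the first part."""
    owned = speeds[0]
    count = 1  # the initial purchase
    for s in speeds[1:]:
        if s >= owned * upgrade_threshold:
            owned = s
            count += 1
    return count

incremental = [2.0, 2.4, 2.8, 3.2, 3.6, 4.0, 4.4, 4.8]  # GHz, steady steps
big_jump    = [2.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0]  # one jump, then flat

# purchases(incremental) -> 3 upgrade cycles; purchases(big_jump) -> 2
```

Under these (made-up) numbers the incremental release schedule yields more purchases over the same window, which is the whole point: selling everything you can build today dries up the upgrade pipeline tomorrow.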

>>>Actually it isn't.  Silicon compilers do a lot of the work I had to do by
>>>hand.  Computing gate delays.  Doing the routing.  And in my day, an error was
>>>a terrific delay as it took a lot of time to re-do.  With silicon design
>>>tools, that process is much simpler from the human's perspective today, which
>>>is a plus.
>>Intel does (nearly) full custom design on their commodity x86 chips.  I looked
>>for some specific information about custom design, but most of what I could
>>find was on the order of, "xyz uses/offers full-custom design."
>>* Designer hand draws geometries which specify transistors and other devices for
>>an integrated circuit.
>>* Can achieve very high transistor density.
>>* Design time can be very long (multiple months).
>>* Offers the chance for optimum performance. Performance is based on available
>>process technology, designer skill, and CAD tool assistance.
>There you go.  Last item.

Ignoring the first item...

>>So I'd say things are not easier for the human today who designs such a part,
>>though they do have large teams working on each design and lots of computer
>>assistance.  An error is still very hard to find, and might take even longer to
>>fix today than it used to.
>No disagreement.  But if I need to move something from over here to over there,
>it is a simpler process with a good design tool.  IE go to an architect today
>and find a house plan you like.  Ask him to change the downstairs ceiling height
>from 9' to 10'.  It takes him two seconds.  Even though the stairs now have to
>have another two steps, which might move a doorway, etc.  30 years ago you broke
>out the T-square and drawing board and started over.

If he's designing a several-million square foot, 100 story office tower,
compared to the architect 30 years ago designing a 1500 square foot, 2 story
house, the job is certainly NOT easier for the human today, no matter what tools
he has.

>>There are things not dictated by the actual design or manufacturing process that
>>affect clock scaling, like the packaging, which I already mentioned.
>It's still a part of the overall "system" however.  IE a laptop processor can't
>run as hot as a desktop processor, because there can't be a 6" tall heat sink to
>help keep it from frying.  So that simply becomes a design constraint and you
>end up not producing a chip that runs as fast as it might in an environment
>where heat can be eliminated easier with a big sink and a pair of fans blowing
>right on it.

That doesn't mean the silicon isn't capable of running at the same speed as the
desktop part.  That's the crux of this entire argument.


Last modified: Thu, 07 Jul 11 08:48:38 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.