Computer Chess Club Archives




Subject: Re: Introducing "No-Moore's Law"

Author: Robert Hyatt

Date: 08:49:45 02/28/03


On February 28, 2003 at 01:25:20, Steve J wrote:

>On February 27, 2003 at 22:45:31, Robert Hyatt wrote:
>>>Robert and Jeremiah,
>>>  Thanks for the posts.
>>>  One point I was trying to make was that every reduction in size is done
>>>with an exponential increase in cost.  We will reach a time when the
>>>physics of very small devices will not allow for transistors that can be
>>>turned "on" and "off" at any reasonable cost.  Given that this is related
>>>to the size of the atom, it does not make much of a difference if the
>>>material is Silicon, GaAs, InP, or more exotic materials.
>>>  As Robert had mentioned, increasing the number of processors will then
>>>be the most effective way of increasing NPS.
>>>  Of course, this applies to the semiconductors and hardware only.  Software
>>>improvements, however, will continue exponentially for all time.  :)
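
A back-of-the-envelope illustration of that atomic floor (the lattice constant
below is the textbook value for silicon; the node sizes are just the ones that
come up later in this thread):

    # How many silicon lattice cells span one drawn feature at each node?
    # Assumes silicon's lattice constant of ~0.543 nm (textbook value).
    SI_LATTICE_NM = 0.543

    for feature_um in (0.25, 0.18, 0.13, 0.09):
        cells = (feature_um * 1000.0) / SI_LATTICE_NM
        print(f"{feature_um:.2f} micron ~ {cells:.0f} lattice cells across")

At .09 micron a feature is under 200 lattice cells wide, so each further
shrink removes a visible fraction of the remaining atoms.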
>>If you have time, I want to ask a very precise set of questions, since we keep
>>going around and around with the non-engineering types here...
>  Certainly.  (I'll try to be careful not to slip into my nerd mode)
>>Here they are:
>>(1) When you design a chip for a particular fab process, do you have a pretty
>>accurate idea of how fast it is going to clock?  To clarify, some think that you
>>crank out the design, run it down the fab line, and then if it will run at (say)
>>5 GHz, you will actually clock them at 3 GHz to avoid pushing the envelope too
>>far too fast.  The engineers I have talked with dispute this with a laugh, but
>>I thought I'd give you a chance at this as well.
>  Typically, a new process (say .13 micron) will take a number of months
>to develop.  Not until the process is relatively well wrung out are wafers
>run with product that is intended to sell.
>  A product that is intended to sell can have millions of transistors arranged
>to perform a function.  It would be nearly impossible to measure the
>performance of one transistor deep in the sea of transistors.
>  To simplify the debug of the process "test die" are manufactured.  These have
>isolated transistors, resistors, etc, which have dedicated wiring to the
>outside pads.  Direct measurements are made of those individual components.
>Of course, this test die is of no value to a customer because it contains
>isolated components.  However, it is an excellent method of getting very
>detailed information on the performance of those components!!  It is much
>easier to debug the components in isolation and then rest assured that they
>will behave when put together in a larger chip (which is intended for sale).
>  The test die are characterized and a detailed model is put together and
>the model is tweaked until it matches the actual silicon performance. (Believe
>me, there are teams of engineers that do nothing but this for a living!!).
>A model may predict that a product will come in at, say, 5 GHz.  It would not
>be uncommon for the first revision of the silicon to come in at, say, 4 GHz
>due to an unanticipated critical path.  The team of engineers descends on
>the 4 GHz device, finds the problem, and should be able to get devices that
>are in the neighborhood of 5 GHz.
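
To make the "unanticipated critical path" point concrete: the clock can tick
no faster than the slowest register-to-register path settles.  A toy model,
with invented path names and delays:

    # Toy model: the max clock frequency is set by the slowest ("critical") path.
    # Path delays in picoseconds -- invented numbers, purely illustrative.
    paths_ps = {
        "alu_bypass":    190.0,
        "cache_tag_cmp": 200.0,
        "decode_rename": 250.0,   # the unanticipated slow path
    }

    critical = max(paths_ps, key=paths_ps.get)
    print(f"critical path: {critical}, "
          f"f_max ~ {1000.0 / paths_ps[critical]:.1f} GHz")   # ~4.0 GHz

    # Fixing just that one path lifts the whole chip:
    paths_ps["decode_rename"] = 200.0
    print(f"after the fix: f_max ~ "
          f"{1000.0 / max(paths_ps.values()):.1f} GHz")       # 5.0 GHz

One 250 ps path holds a would-be 5 GHz design to 4 GHz, which is exactly the
4-to-5 GHz story above.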
>  Part two of your question reminds me of a Calvin and Hobbes cartoon.  Calvin
>(the 5-year-old boy) asks his father how they know that a bridge has a
>5-ton limit.  The father (always one to press his son's gullibility) tells
>him that they keep driving heavier trucks over the bridge.  When the bridge
>collapses, that's the limit, and the bridge is rebuilt!
>  In short, yes, devices are tested to perform slightly better than their
>rated speed (perhaps .2 GHz).  This is done to guarantee that they will work
>over all specified voltages and temperatures.  However, going from 5 GHz to
>3 GHz is WAY too much.
>  Keep in mind that product is often tested at worst-case conditions (high
>temperature and low Vcc), where performance is poorest.  There are many
>people who try to squeeze extra GHz out of their product by turning up the
>Vcc and cooling down the processor with water, liquid nitrogen, etc.
>  In short, the product clock rate is sped up until it breaks, then they
>back off the speed a bit and say "that's the limit"!
>  (Also, there is BIG motivation not to "back off" too much!  Just look at
>the difference in pricing between a 2.4 and 2.8 GHz Pentium!)

That was the same sort of answer I had gotten from other engineers.  However, it
is always better to have someone answer here directly, so that we don't have to
go through the famous "but only Bob has heard that from an engineer, perhaps he
misunderstood" routine, or whatever. :)
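
Steve's "speed it up until it breaks, then back off" procedure is simple
enough to sketch.  A minimal version (the pass/fail model and the step size
are my assumptions; the 0.2 GHz guard band is the figure he mentions):

    # Sketch of speed binning: raise the clock until the part fails under
    # worst-case conditions, then back off a guard band before rating it.
    GUARD_BAND_GHZ = 0.2   # roughly the margin mentioned above
    STEP_GHZ = 0.1

    def passes_at(freq_ghz):
        """Stand-in for a real tester run at high temperature / low Vcc."""
        return freq_ghz <= 3.1   # assume this particular die fails above 3.1 GHz

    freq = 1.0
    while passes_at(round(freq + STEP_GHZ, 1)):
        freq = round(freq + STEP_GHZ, 1)

    rated = freq - GUARD_BAND_GHZ
    print(f"fails above ~{freq:.1f} GHz; rated speed = {rated:.1f} GHz")

Note the rating comes out a couple tenths of a GHz under what the die can
actually do, which is why overclockers with good cooling usually find headroom.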

>>(2) when you design a chip for a particular process, do you have a good idea of
>>how fast it will run, or do you "wing it" and run a few off and test them to
>>see what they can clock at?  Again the engineers I talk with say that they
>>know in advance what it should run at and they run it there.
>  I gave a partial answer to this above.  The engineers should have a fairly
>close idea how fast it will run.  However, several different wafer lots are
>always run.  Units from each of the characterization lots are tested over
>voltage, temperatures, etc to ensure that the specifications are met.  Normally
>every device that is shipped is tested at high speed to guarantee it works.
>  If the initial design does not work as fast as expected, it can often
>be tweaked to meet the requirements.

Again, exactly what I had said.  The chips don't start out running at max.  They
"evolve" as things are improved...

>>(3)  Is there any science in the process of designing a chip, or is it a bunch
>>of "trial and error" operations?  IE in Intel's "roadmap" they are discussing
>>plans for the next 18 months or so, with predicted clock frequencies.  Are they
>>able to simply take today's chips and "crank 'em up" after a year, or are they
>>changing the fab, the dies, etc., to make the chip faster?
>  Most of the driving force behind speed increases is the reduction in
>transistor size.  A few years ago .25 micron was state of the art.  Now
>.18 micron is becoming mature and people are looking to .13 micron and
>.09 micron devices.
>  For example, a product may originally be designed for a .18 micron
>process; while it is going through the "tweaking" mentioned in an earlier
>answer, a new team of engineers is getting the .13 micron version ready
>for the next-generation process.
>  In short, the biggest jumps are going from one process generation to
>the next.  However, within a given process size, the performance can be
>modestly enhanced.
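
The jumps Steve lists follow the classic ~0.7x linear shrink per generation.
A quick sketch of that arithmetic (the "ideal clock" figure is the textbook
constant-field-scaling assumption, not a measured number):

    # Classic process scaling: each node shrinks linear dimensions ~0.7x,
    # which roughly halves transistor area and, ideally, raises the clock ~1.4x.
    nodes_um = [0.25, 0.18, 0.13, 0.09]

    for prev, nxt in zip(nodes_um, nodes_um[1:]):
        s = nxt / prev
        print(f"{prev} -> {nxt} micron: linear {s:.2f}x, "
              f"area {s * s:.2f}x, ideal clock {1 / s:.2f}x")

Each step is close to 0.7x linear, i.e., about half the area per transistor,
which is where most of the generational density and speed gain comes from.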
>>I hope you see where this is going.  I believe, from the engineers that I
>>know (and that is not a huge number, but I know enough) that this is a very
>>precise deal.  I designed stuff many years ago using TTL and CMOS parts, and
>>the "books" were my bible for doing this, telling me _exactly_ what the gate
>>delays were for each type of chip (ie LS, etc.)
>>Looking forward to an answer from someone who might carry a little credence
>>in the group here.  :)
>  Hope this helps.  Let me know if you have any other questions.
