40
\$\begingroup\$

When looking at the roadmaps for the CPU manufacturing process (e.g. https://wccftech.com/intel-expects-launch-10nm-2017/):

Semiconductor Process Technology Year
10 µm 1971
6 µm 1974
3 µm 1977
1.5 µm 1981
1 µm 1984
800 nm 1987
600 nm 1990
350 nm 1993
250 nm 1996
180 nm 1999
130 nm 2001
90 nm 2003
65 nm 2005
45 nm 2007
32 nm 2009
22 nm 2012
14 nm 2014
10 nm 2016
7 nm 2018
5 nm 2020
3 nm ~2022

Why are these numbers chosen specifically? I have looked around, and there are deviations, such as:

Samsung Electronics began mass production of 64 Gb NAND flash memory chips using a 20 nm process in 2010.[114]

TSMC first began 16 nm FinFET chip production in 2013.[115]

And many others.

Yet as far as Intel and AMD are concerned, they are both in lockstep. Is there something about these numbers that lends itself to the manufacturing process, or is the selection completely arbitrary?

\$\endgroup\$
8
  • 3
    \$\begingroup\$ The 14 nm process refers to the MOSFET technology node that is the successor to the 22 nm (or 20 nm) node. The 14 nm was so named by the International Technology Roadmap for Semiconductors (ITRS). Until about 2011, the node following 22 nm was expected to be 16 nm. From en.wikipedia.org/wiki/…. \$\endgroup\$
    – rwong
    Commented Jul 26, 2020 at 4:19
  • 9
    \$\begingroup\$ Also, the ITRS naming process has broken down, as different fab companies have diverged in their technology, meaning their nodes can no longer be compared purely on the nm basis. \$\endgroup\$
    – rwong
    Commented Jul 26, 2020 at 4:20
  • 1
    \$\begingroup\$ I don't think Intel is going to be widely releasing any commercial 5nm chips this year are they? I do know that since about 8 years ago they had 5nm penciled into "2020" on their roadmap though. \$\endgroup\$ Commented Jul 26, 2020 at 22:55
  • 4
    \$\begingroup\$ I wish you'd plotted as semi-log ! \$\endgroup\$
    – uhoh
    Commented Jul 27, 2020 at 5:40
  • 1
\$\begingroup\$ "Yet as far as Intel and AMD are concerned, they are both in lockstep." Are you sure? The numbers you mention are purely marketing; the actual size of the process varies between companies. Intel's 7nm isn't AMD's 7nm. \$\endgroup\$
    – Mast
    Commented Jul 28, 2020 at 6:33

5 Answers

50
\$\begingroup\$

There are a number of different reasons for this.

The numbers aren't chosen

Modern CPU manufacturing processes, at least for top-of-the-line mainstream CPUs such as Intel Xeon and Core, AMD Epyc and Ryzen, etc., are at the very edge of what is currently physically possible and economically viable.

Since the laws of physics and the laws of economics are the same for all players, it is to be expected that they all end up using the same technology. The only way this could be different is if one company manages a totally game-changing technological breakthrough without any other company noticing. Given the highly competitive nature, the amount of research and development invested by all companies, and the comparatively small community where everybody knows what the others are up to, this is highly unlikely.

So, in other words: Intel and AMD don't choose the process node size, they just use the best thing that is currently available, and that happens to be similar for both companies.

The numbers aren't real

The numbers are marketing terms chosen by an industry think tank. They don't accurately capture every detail of the various processes. There may very well be differences in the processes that have more impact than the node size.

For example, Intel is currently using the improved second generation of its 10nm process. Yet, both the first generation and the improved second generation of this process are lumped together under the same name "10nm" in the roadmap in your question.

Which brings us to our next two points. The first is a throwback to point #1, the second is a throwback to this very second point:

The numbers aren't chosen by Intel and AMD

As mentioned, the numbers are marketing terms chosen by an industry think tank. They aren't actually chosen by Intel and AMD.

The numbers are predictions

There is another way in which the numbers aren't real: not only are they marketing terms that don't fully capture all the details, they are also predictions.

Now, as you probably know, predictions are hard. Especially predictions of the future. Case in point: the roadmap you show in your question has a 5nm process node for 2020, but actually, the current top-of-the-line offerings are 10nm by Intel and 7nm by AMD, Apple, and nVidia. IBM's current top-of-the-line is the POWER9, launched in 2017 on a 14nm process. The POWER10 will probably be available in 2021 and manufactured in either 10nm or 7nm.

As you can see, the prediction is actually doubly wrong: it predicts that Intel and AMD will be in lockstep, and it predicts that the process node size will be 5nm, yet Intel and AMD are not in lockstep and neither of the two has hit 5nm yet.

The numbers are kind of a self-fulfilling prophecy

No company wants to be caught failing to hit the predicted process improvements. So, they work very hard to "hit the mark", but not harder, since these improvements are very expensive. (Moore's Second Law predicts that as chips get exponentially cheaper (for the same performance) or exponentially more performant (for the same price), chip fabrication gets exponentially more expensive.)

This is similar to what happened with Moore's Laws: originally, when Gordon Moore wrote down his Laws, he wrote them down as historical observations and projected their trend lines 10 years into the future without actually having solid statistical grounds to do so. 10 years later, he revised them (he had originally projected a doubling every year, which he then revised to a doubling every two years.) However, since then, Moore's Laws have morphed from historical observations to rough predictions to market expectations, where a manufacturer that doesn't hit the projected improvements of Moore's Laws will have to justify that failure to the market, the shareholders, and the stakeholders.

Also note that despite the ramifications of not being able to hit Moore's Law, actual development has dropped below the curve predicted by Moore's Law in 2012, and seems to be flattening out.

The ITRS had a similar effect.

Note, however, that the industry think tank which published the ITRS has no longer been using it since 2017. They have created a new set of predictions called the IRDS (International Roadmap for Devices and Systems), which is based more on "pull" created by new applications than on "push" created by process improvements.

\$\endgroup\$
10
  • 1
    \$\begingroup\$ It's also worth noting that as part of being predictions, they also follow a pattern, of dropping ~33% every ~3 years. \$\endgroup\$ Commented Jul 26, 2020 at 18:49
  • 2
\$\begingroup\$ Remember back in the Pentium 4 days, when we used to measure CPUs in hertz? It was great for marketing, but completely useless in terms of real processing power. When the Core series was released, they ran at 1/3 to 1/4 the MHz (4 GHz dropped down to around 1 GHz), but the processing power was easily 50% higher. It is practically the same deal here; a nearly useless number. \$\endgroup\$
    – Nelson
    Commented Jul 27, 2020 at 1:59
  • 5
\$\begingroup\$ I think the details of this "industry think tank" would improve the post a lot (although it is already good enough for an upvote). \$\endgroup\$
    – peterh
    Commented Jul 27, 2020 at 13:35
  • 1
    \$\begingroup\$ It's worth mentioning that you can't directly compare node sizes between Intel and TSMC (AMD) processors -- Intel's 10nm is about the same in terms of performance and transistor density as TSMC's 7nm, and Intel's 7nm is about the same as TSMC's 5nm. \$\endgroup\$
    – NobodyNada
    Commented Jul 27, 2020 at 15:42
  • 2
\$\begingroup\$ @J... on a simple SRAM... Another important point to make. The actual transistor density for a complex CPU can vary greatly from the projected maximum achieved for simple memory cells. IIRC, Intel's 14 nm has higher "on paper" density than GloFo's 14/12nm, but in CPUs it was measured lower. \$\endgroup\$
    – Dan M.
    Commented Jul 28, 2020 at 14:44
12
\$\begingroup\$

To make microchips with lots of transistors in great quantities, you will need one of these:

https://www.asml.com/en/products/euv-lithography-systems

This is the market leader in the industry (they are from an area in The Netherlands that is known to be big in pigs and... chip machines). If you buy their latest and greatest machine today, the chips that come out will have a 5 nm path width. Some years ago the paths were a bit wider, they will periodically have better offerings like every manufacturer. So it is not so much Intel's choice as it is a matter of what the latest ASML machines can do.

[Edit]

As Akiva's comment rightfully states, this relays the question from Intel to ASML.

Gullible answer

With every generation they do the best they can given the state of their R&D.

More cynical answer

Taking a modest yet just-significant-enough step every couple of years is convenient for the entire industry. Chip machine makers can sell a series of machines (which go for 40 million to over 100 million dollars apiece) for a couple of years; then, when every potential client has one, they release a new version and play the same trick again. Chip makers are fine with this; they can do the same thing to their clients, offering bigger and better chips every couple of years. You are fine with this; you can buy a flashy new device every couple of years when you get bored with the old one.

I honestly do not know the real answer, it is probably somewhere in between the two.

\$\endgroup\$
4
  • \$\begingroup\$ Begging the question then, if it is up to ASML, were there specific reasons why these numbers were chosen? \$\endgroup\$
    – Anon
    Commented Jul 26, 2020 at 7:21
  • 1
    \$\begingroup\$ @Akiva Originally they were spaced about sqrt(2) apart so that each major generation would roughly double density, while minor generations (half nodes) would be some smaller difference. Decades ago when physical measurement broke down, node names continued that tradition, although they're now essentially arbitrary. \$\endgroup\$ Commented Jul 26, 2020 at 14:46
  • 1
    \$\begingroup\$ It should be noted that it isn't just the ASML machine. You need litho, but there are a lot of other steps using machines from companies like Applied Materials, KLA, and Lam. State of the art processes often require new materials or new capabilities in these other steps. The reason why there's the industry-wide roadmap that Jörg's answer talks about is so all of these equipment manufacturers can work towards supplying the machines that will make possible the smaller feature sizes and hopefully everyone arrives there on time. \$\endgroup\$
    – IceGlasses
    Commented Jul 27, 2020 at 11:31
  • 3
    \$\begingroup\$ The cynical part is just plain nonsense. The big thing (why ASML is so dominant now) is EUV. This is an entirely new technology, where the whole optics have been redesigned from the ground up. The light source has shifted from an 193 nm (deep UV) Argon Fluoride Excimer laser to a tin vapor X-ray source at 13.5 nm. That's physics, not marketing. At 13 nm, lenses no longer work, so you need X-ray mirrors. \$\endgroup\$
    – MSalters
    Commented Jul 27, 2020 at 11:34
7
\$\begingroup\$

Gordon Moore started at Shockley Labs in the Bay Area, along with several other diverse and creative spirits. When those folk tired of the headgames of Shockley, they arranged for financing from Sherman Fairchild (of Fairchild Corp), and founded Fairchild Semiconductor.

Here is the key point --- at Fairchild, Dr Moore and the other (7) founders had to INVENT all their equipment. Chemically (which was Moore's specialty), mechanically (precision alignment), the sputtering of aluminum, and OPTICALLY.

The initial optics were simply the lenses from a twin-lens reflex camera. Typical 35 mm camera lenses can resolve 50 to 100 lines per millimeter, which at 1,000 microns per millimeter tells us the best resolution was somewhere between 20 microns and 10 microns.
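That back-of-the-envelope conversion can be sketched as follows (the 50 and 100 lines/mm figures are the ones quoted above):

```python
# Rough check of the lens-resolution arithmetic: a lens resolving
# N line pairs per millimeter can print features of roughly 1000/N microns.
for lines_per_mm in (50, 100):
    feature_um = 1000 / lines_per_mm
    print(f"{lines_per_mm} lines/mm -> ~{feature_um:.0f} micron features")
# 50 lines/mm -> ~20 micron features
# 100 lines/mm -> ~10 micron features
```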

That sufficed for about a decade. But the other parts of the fab --- the etching, the sputtering (before implanters came along), the precision and repeatable positioning, the light-sensitive photoresist, etc ALL HAD TO BE INVENTED.

And Gordon Moore was in the ideal situation, contributing every day, to see the results of the "Gee, this is a lot of fun, most of the time, as we move mankind along this incredible ability to manufacture.".

He could see the physical limits were far down the road, so he initially predicted a 2:1 change every 2 years.

That rapid binary change has eased up. It's very hard. Simple camera lenses no longer suffice, and lots of software is needed as well, to pre-warp the production systems to fold the fringing effects of photons into useful final results.

It's very hard, and slow, to fool Mother Nature.

\$\endgroup\$
1
  • 3
    \$\begingroup\$ I don't think this question was about Moores law directly, so this answer needs more tie-in \$\endgroup\$ Commented Jul 27, 2020 at 19:32
6
\$\begingroup\$

While it is true that process node names are currently based simply on marketing and not on any physical property of the silicon, the process node used to closely match the actual sizes (in nm) of the transistors on the silicon, in terms of the gate length of each transistor and the metal half-pitch between the transistors. As progress in shrinking dies became more and more difficult, the actual physical distances became decoupled from the name of the process node.

As IEEE Spectrum elaborates:

In the era when gate length and metal half-pitch were roughly equivalent, they came to represent the defining features of chip-manufacturing technology, becoming the node number. These features on the chip were typically made 30 percent smaller with each generation. Such a reduction enables a doubling of transistor density, because reducing both the x and y dimensions of a rectangle by 30 percent means a halving in area.

...

The industry’s node number “had by then absolutely no meaning, because it had nothing to do with any dimension that you can find on the die that related to what you’re really doing,” says Paolo Gargini, an IEEE Life Fellow and Intel veteran who is leading one of the new metric efforts.
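The area arithmetic in the quote can be made explicit: shrinking both dimensions by 30 percent (i.e. multiplying each by 0.7) halves the area, which is what doubles the transistor density:

\$\$A' = (0.7x)(0.7y) = 0.49\,xy \approx \tfrac{1}{2}\,xy\$\$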


\$\endgroup\$
4
  • 1
    \$\begingroup\$ Are the node names nowadays at least approximately proportional to some length number that you could think of by looking at the chips, or are they totally and utterly arbitrary nowadays? \$\endgroup\$ Commented Jul 27, 2020 at 14:39
  • \$\begingroup\$ It's also worth noting that a single silicon atom is 0.2nm wide, and the smallest transistor invented thus far is 1nm. The bigger problem is quantum tunneling and quantum teleportation, which limits how close they can be to each other. \$\endgroup\$ Commented Jul 27, 2020 at 19:40
  • 1
    \$\begingroup\$ @TannerSwett: As was mentioned in a comment by user J...: Intel's current top-of-the-line 10nm+ process can fit ca. 100 million transistors per square millimeter, whereas TSMC's 7nm process only fits ca. 91 million transistors per square millimeter. So, while on paper, the 7nm process should be denser, it is actually the 10nm process that fits more elements. \$\endgroup\$ Commented Jul 28, 2020 at 8:13
  • 1
    \$\begingroup\$ … so, in other words, the node number not only has no relationship to the size of features on the chip, it is even actively misleading. \$\endgroup\$ Commented Jul 28, 2020 at 8:15
1
\$\begingroup\$

The numbers chosen by Intel (note that other manufacturers use slightly different naming conventions: GlobalFoundries had their 32nm node followed by a 28nm node, and TSMC has 12nm and 6nm nodes alongside their 14nm+ and 7nm+ nodes, but the same general principle applies) reflect the fact that each node has approximately twice the density of the previous node.

So \$5^2=25\$ is roughly half of \$7^2=49\$, which is roughly half of \$10^2=100\$, which is roughly half of \$14^2=196\$, and so on. Obviously this isn't entirely accurate, both because the node itself isn't exactly twice as dense as the previous node (part of the reason why Intel has had so much trouble with their 10nm node is that they set the target density far higher than that: https://www.extremetech.com/computing/295159-intel-acknowledges-its-long-10nm-delay-caused-by-being-too-aggressive) and because the marketing people like nice, round (or at least whole) numbers.
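A quick sketch of that scaling (the node list is illustrative, taken from the roadmap in the question): each full node step shrinks the name by roughly \$1/\sqrt{2} \approx 0.71\$, so the squared name, a stand-in for area per transistor, roughly halves, i.e. nominal density roughly doubles per node.

```python
import math

nodes_nm = [14, 10, 7, 5]  # node names from the roadmap in the question

for prev, nxt in zip(nodes_nm, nodes_nm[1:]):
    shrink = nxt / prev                 # linear shrink factor, ~0.7 ~ 1/sqrt(2)
    density_gain = (prev / nxt) ** 2    # nominal density improvement, ~2x
    print(f"{prev} nm -> {nxt} nm: shrink {shrink:.2f}, density x{density_gain:.2f}")
```

As the output shows, each step lands near a 0.70-0.71 linear shrink and a nominal density gain close to 2x, which is exactly the "each node is roughly twice as dense" convention the answer describes.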

\$\endgroup\$
