©2026 Business Next Media Corp. All Rights Reserved. No.102, Guangfu S. Rd., Da'an Dist., Taipei City 106, Taiwan
At its 2022 Technology Symposium, TSMC talked about its upcoming N3 process node - often marketed as 3 nanometers.
But look at how many versions of N3 the Taiwanese giant plans to launch in the coming years! 2022 sees N3, and then there are N3E, N3P, and N3X.
That's a lot of process nodes. Each of these variants in the TSMC N3 Multiverse is similar to the others, but also different.
It is a reflection of new realities in leading edge semiconductor manufacturing. It is no longer about using that one node that does everything for you. It's about picking - and literally co-developing - the right node for the product you want.
In this video, I want to talk about how today's latest leading edge semiconductor process nodes are made.
During the Glory Days of Moore's Law - a time spanning from about 1980 to 2005 - people designed chips without much thought about how those designs might be fabricated.
That is because every two years, the foundries shrank a chip's features to about 70% of their previous size - roughly halving its area. With a few exceptions, the design just shrank in both the X and Y directions - simple scaling.
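That classic cadence can be sketched in a few lines of code. This is my own toy illustration - the starting point and loop are invented for demonstration, not from the video:

```python
# A toy illustration of classic Moore's Law scaling: each generation
# shrinks linear feature sizes to about 70% of the previous node,
# which roughly halves the chip's area (0.7 squared is 0.49).

def next_node(feature_nm: float, linear_scale: float = 0.7) -> float:
    """Feature size of the following node under classic 0.7x scaling."""
    return feature_nm * linear_scale

node = 180.0  # nanometers; a late-1990s starting point
for _ in range(3):
    shrunk = next_node(node)
    area_scale = (shrunk / node) ** 2  # ~0.49, i.e. area halves
    print(f"{node:6.1f} nm -> {shrunk:6.1f} nm (area x{area_scale:.2f})")
    node = shrunk
```

Note how the computed sizes - 126, 88, 62 - land close to the marketed 130, 90, and 65 nanometer node names.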
Ah the good old days.
Achieving this shrinkage, however, was something nobody other than the foundries had to worry about. It just happened, like water from the tap. And life was good for everyone.
But in 2005, things started to change. The foundries started hitting fundamental limitations which would have long lasting effects on the entire industry.
This is because the Glory Days were driven in large part by advances in lithography technologies. Up until 2005, a process node's feature sizes had been larger than the light wavelengths used to print them.
Then in 2001, the 193 nanometer laser came into production and that represented essentially the end of the line. 157 nanometer lithography did not take off - I made a video about this - and EUV was still many years away.
The first to encounter this problem was the 130 nanometer node - manifesting as an unusually high amount of a specific type of defect.
Prior to 130 nanometers, the industry's most significant fab defects were the random and unavoidable ones - like those caused by particles invading the wafer. Foundries had managed to bring those down to an acceptable level.
But now at the 130 nanometer node, a new type of error was rapidly gaining ground: those relating to inadequate lithography. The printers were struggling to print what they were supposed to.
In response, the foundries introduced Resolution Enhancement Techniques to print some of the node’s exceptionally small features.
For instance, they added tick marks at the ends of design lines - kind of like the serifs in serif fonts. The serif tick mark makes the design line bigger on the whole, and so easier to print.
With feature sizes continuing to shrink but their core technology tool facing fundamental limitations, the foundries introduced "guard rails" at the chip design stage to avoid killer defects and achieve the best yields.
It is worth pausing here so that we can briefly discuss chip design and how chips made of different nodes stack up with one another.
One of the industry's frequently cited yardsticks is the Power, Performance, and Area metric or PPA. Definitions differ from provider to provider but on the whole:
Power measures how much power the chip consumes - either during actual usage or when idling. Performance measures the chip's maximum attainable frequency.
And Area measures how much silicon area the chip occupies in square millimeters. It has also been expressed in the past as a gate count.
Recently, certain dimensions have become more critical than others as new markets have developed. The mobile processor market, for instance, cares much more for low power usage and smaller areas.
To reflect the special needs of these newer use cases, analysts have started adding additional dimensions to the traditional PPA yardstick.
The most popular one I’ve seen is Power, Performance, Area and Cost - PPAC. Are you really going to need that extra 10% performance if it costs $100 million more to fab it? A good question.
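The tradeoff can be made concrete with a toy scoring sketch. Everything here - the numbers, the weights, and the scoring formula - is invented for illustration; real node evaluations are far more involved:

```python
# A hypothetical sketch of weighing two node options by PPAC.
# Values and weights are made up; the point is that different
# products weight the four metrics differently.

from dataclasses import dataclass

@dataclass
class PPAC:
    power_mw: float    # power consumed (lower is better)
    perf_ghz: float    # maximum attainable frequency (higher is better)
    area_mm2: float    # silicon area (lower is better)
    cost_musd: float   # fab cost in millions of dollars (lower is better)

def score(n: PPAC, w_power=1.0, w_perf=1.0, w_area=1.0, w_cost=1.0) -> float:
    # Simple weighted score: reward frequency, penalize the rest.
    return (w_perf * n.perf_ghz
            - w_power * n.power_mw / 100
            - w_area * n.area_mm2 / 100
            - w_cost * n.cost_musd / 100)

node_a = PPAC(power_mw=300, perf_ghz=3.3, area_mm2=120, cost_musd=150)
node_b = PPAC(power_mw=200, perf_ghz=3.0, area_mm2=100, cost_musd=80)

# A mobile chip weights power and area more heavily than raw frequency.
mobile = lambda n: score(n, w_power=3.0, w_area=2.0, w_perf=1.0, w_cost=1.0)
best = max((node_a, node_b), key=mobile)
```

Under the mobile weighting, the slower but leaner `node_b` wins - that extra 0.3 GHz is not worth the power, area, and cost.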
Okay I think that gets us caught up.
Anyway, going back to the first set of design-stage guard rails.
They were first called "Design for Manufacturability" and they arrived with the 90 nanometer node. These rules derived from the Resolution Enhancement Techniques first introduced in the older 130 nanometer generation.
Their primary goal was to get around the fundamental physical shortcoming of 193 nanometer light being larger than the 130 or 90 nanometer features it printed.
For instance, at these nanoscale sizes, corners are never corners. Lithography is physically incapable of making them. So there will always be a little bit of curvature, which can interfere with the overall chip design.
So to deal with this issue, foundries introduced a new design rule where you - the designer - cannot put a corner too close to a transistor gate.
You can choose to be more or less aggressive with this rule - and accept the tradeoffs with your choice. Being more conservative makes it more likely that your design yields. But in return, you get some unused space here - and so take a hit on area and possibly performance.
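A design rule like that boils down to a geometric check. Here is a toy sketch of such a check - the coordinates and the 20 nanometer minimum spacing are invented for illustration, and real DRC decks encode thousands of far more elaborate rules:

```python
# Toy "design rule check": flag any layout corner that sits closer
# to a transistor gate than a minimum allowed spacing.

import math

MIN_CORNER_TO_GATE_NM = 20.0  # hypothetical minimum spacing

def violations(corners, gates, min_dist=MIN_CORNER_TO_GATE_NM):
    """Return (corner, gate) pairs closer than the minimum spacing."""
    bad = []
    for cx, cy in corners:
        for gx, gy in gates:
            if math.hypot(cx - gx, cy - gy) < min_dist:
                bad.append(((cx, cy), (gx, gy)))
    return bad

corners = [(0.0, 0.0), (100.0, 0.0)]   # nanometer coordinates
gates = [(10.0, 10.0), (200.0, 0.0)]
# The corner at (0, 0) is ~14.1 nm from the gate at (10, 10),
# which is under the 20 nm minimum - so it gets flagged.
```

Raising `min_dist` is the conservative choice from the paragraph above: fewer lithography surprises, but more wasted space.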
Throughout this new era of design and manufacture, the fabs created these rules themselves and largely imposed them on the designers without collecting feedback. The way the fabs saw it, it was just part of the larger yield improvement process.
But like bureaucracies in a government, once a rule came into effect, it was very rarely removed. The 90 nanometer node had four times more design rules than the 180 nanometer node, just two generations before it.
And as we moved down to 65 nanometers and beyond, the rule books just got bigger - a trend that the introduction of 193i immersion lithography hardly helped to slow down.
And worse yet, changes came about with little warning. Fabs would introduce a structure in one generation and then suddenly remove it in the next. Eventually, this one-way road started to weigh on the chips' real world performance.
This arrangement was not economically sustainable. So around the time of the 28 nanometer process node, the foundries and the chip designers decided that a more tightly bound relationship was necessary to make sure everyone gets what they are looking for.
And this is how we have come to Design Technology Co-Optimization or DTCO. How today’s leading edge semiconductor process nodes are really made.
DTCO approaches roughly began with the confusing generation of nodes that the industry calls N20, N16, and N14.
For these generations, the semiconductor manufacturing industry introduced two new innovations: FinFET transistors and multi-patterning.
I am not going to talk about multi-patterning again - go check out my previous video about 7 nanometers without EUV - but I do want to talk about FinFET.
I have mentioned FinFETs or "fin field-effect transistors" before. They are 3D transistors that replaced the planar MOSFET, which is flat.
With planar, a gate sits between a source and a drain. The gate can be made to allow or disallow electrons to flow from the source to the drain.
So when the flow of electrons is allowed, the transistor is thus ON. Disallowed, OFF.
The FinFET raises that whole source and drain structure above its surrounding area - that raised structure is your fin. And that is what I mean when I keep saying 3D.
Now that the Fin is 3D, how high, wide, and long that Fin gets to be contributes to how well the transistor performs in Power, Performance, Area, and Cost.
As you might expect, if you keep making that FinFET thinner and higher, it performs better and you might be able to squeeze more into a certain space.
But at the same time, there is a bigger risk of falling over - a situation referred to as fin collapse, or flop-over.
With DTCO, the chip designer works directly with the foundry's process engineers to end up with a process node capable of meeting the project's desired targets in PPAC - Power, Performance, Area, and Cost.
At the very start of the DTCO process, the two parties specify the exact parameters of the future process technology. For instance, how thin and high you want that fin to get, as well as the distances between the fins. These are still guided by the 70% scaling targets set out by Moore's Law.
It is like haggling at a Taiwan night market. There are tradeoffs being made in all of the four PPAC metrics so that the manufacturing best fits the final product's requirements.
For instance, let's say you are designing a low-power mobile chip. In this case, the more important measures are in power and area. Mobile chips have to fit into smaller physical packages and be extremely power efficient.
The next step would be for the foundry's process engineers to take those fin specifications and craft what is called a standard cell library.
As a reminder from my video on EDA software, standard cell libraries are standardized groups of gates that enable logical functions like an AND operation, an OR operation, a combination of the two, and so on.
The Standard Cells are like the Lego blocks of your chip design and are themselves literally made up of Fins.
They are sized with a metric called "track height". Taller track heights mean more drive strength and performance, but more area used. Shorter track heights, the opposite.
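The relationship between track count and cell size is straightforward: a standard cell's height is its track count times the metal routing pitch. A minimal sketch - the 64 nanometer pitch below is an illustrative value, not any real node's spec:

```python
# Standard cell height = number of routing tracks x metal pitch.
# The pitch value here is hypothetical, for illustration only.

def cell_height_nm(tracks: float, metal_pitch_nm: float) -> float:
    """Height of a standard cell given its track count and metal pitch."""
    return tracks * metal_pitch_nm

pitch = 64.0                          # hypothetical metal pitch, nm
tall = cell_height_nm(9.0, pitch)     # 9-track cell: 576 nm tall
short = cell_height_nm(7.5, pitch)    # 7.5-track cell: 480 nm tall
```

Dropping from 9 tracks to 7.5 tracks at the same pitch shrinks the cell height by that same 7.5/9 ratio, which is where design-side area scaling comes from.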
As the foundry and the chip designers work together on the rules and libraries for the new node, they both contribute effort to obtain the final scaling percentage.
For instance, during the DTCO process from the N14 node generation to the N10 generation, the goal for one particular standard cell was to achieve a 0.47 times total area scaling. That means the N10 version of the cell covers just 0.47 times the area of the N14 version.
This is achieved through a combination of manufacturing and design. The standard cell is redesigned to go from 9 tracks to 7.5 tracks, which is a 0.83 times area scaling.
And then manufacturing brings in the rest of the scaling - with N10's triple patterning allowing for physically smaller transistors corresponding to a 0.56 times area scaling.
0.56 from manufacturing times 0.83 from design comes out to roughly 0.47, the final goal.
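The arithmetic above can be sanity-checked in two lines. Using the exact 7.5/9 track ratio rather than the rounded 0.83:

```python
# Design scaling (9 -> 7.5 tracks) times manufacturing scaling
# (triple patterning's smaller transistors) gives the total target.

design_scale = 7.5 / 9.0        # ~0.833 from the track-height redesign
manufacturing_scale = 0.56      # from physically smaller transistors
total = design_scale * manufacturing_scale
print(round(total, 2))          # prints 0.47, the final area goal
```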
Once this is done, the designers can then use this new, super-customized standard cell library to build up blocks and eventually, their netlists and designs. It truly takes a village to make a new leading edge process node. And for that reason, they are all very different.
This whole sequence starts years before the process node actually goes into high volume manufacturing. And it demonstrates how different leading edge process nodes can be from one foundry to the next.
It also means there must be a lot of trust between the leading edge foundries and their customers. The chip designers have to share a great deal of information about what they are designing, what they want it to eventually do, and how they intend to achieve it.
And this trust and information flow goes both ways. Unlike with the "Design for Manufacturability" rules, with DTCO the foundry might have to eat more process complexity - meaning more steps and chances for yield loss - in order to give the designers flexibility in their work.
Design Technology Co-optimization carried the industry through the last few generations. Combined with EUV finally coming online, the industry is well prepared for the immediate future. But what does the industry have beyond that?
It is hard to tell. The industry is pretty bad at making predictions beyond a five year window. But something IMEC has been mulling over is called System Technology Co-optimization or STCO.
Briefly speaking, STCO would extend the chiplet approach that AMD has popularized. The advanced packaging industry would be brought into the circle to help segregate certain parts that cannot shrink - like input/output or analog.
Whether DTCO will indeed mature into STCO is right now unclear. But there is an increasing need to take into account some of the industry's previously overlooked functions in order to squeeze out yet more PPAC.
The increasing use of DTCO at the leading edge reminds me of the old vertically-integrated semiconductor makers: the Integrated Device Manufacturers. It is an interesting development.
Over the past thirty years, academics and business elites have discussed how the semiconductor industry split into two. Yet even as they talked about the split, the two sides have been working together again at the leading edge. Closer than ever before.