As TSMC announced last week, the company is set to start high-volume manufacturing on its N3P fabrication process later this year, and N3P will remain its most advanced node for a while. Next year things will get more interesting, as TSMC will have two process technologies that could actually compete against each other when they enter high-volume manufacturing (HVM) in the second half of 2025.

Advertised PPA Improvements of New Process Technologies
Data announced during conference calls, events, press briefings and press releases. Compiled by AnandTech.

TSMC            Power          Performance         Density*        HVM
N3 vs N5        -25% ~ -30%    +10% ~ +15%         ?               Q4 2022
N3E vs N5       -34%           +18%                1.3x            Q4 2023
N3P vs N3E      -5% ~ -10%     +5%                 1.04x           H2 2024
N3X vs N3P      -7%***         +5% Fmax @1.2V**    1.10x***        H2 2025
N2 vs N3E       -25% ~ -30%    +10% ~ +15%         1.15x           H2 2025
N2P vs N3E      -30% ~ -40%    +15% ~ +20%         1.15x           H2 2026
N2P vs N2       -5% ~ -10%     +5% ~ +10%          ?               H2 2026
A16 vs N2P      -15% ~ -20%    +8% ~ +10%          1.07x ~ 1.10x   H2 2026

*Chip density published by TSMC reflects 'mixed' chip density consisting of 50% logic, 30% SRAM, and 20% analog.
**At the same area. 
***At the same speed.
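
A quick aside on that 'mixed' density footnote: because the figure blends logic, SRAM, and analog scaling, the logic-only improvement is larger than the headline number. Below is a back-of-the-envelope sketch (ours, not TSMC's) that converts a mixed-density gain into an implied logic-only gain, under the illustrative assumption that SRAM and analog areas do not shrink at all.

# Back-of-the-envelope: translate TSMC's 'mixed' density gain into an
# implied logic-only gain. The 50/30/20 area split comes from TSMC's
# footnote; the no-scaling assumption for SRAM/analog is ours, purely
# for illustration.

def implied_logic_scaling(mixed_gain, logic=0.50, sram=0.30, analog=0.20,
                          sram_gain=1.0, analog_gain=1.0):
    """Return the logic density gain needed to hit a given mixed-density gain."""
    new_area = 1.0 / mixed_gain                      # total area after the shrink
    fixed_area = sram / sram_gain + analog / analog_gain  # SRAM + analog barely move
    logic_area = new_area - fixed_area               # what's left for shrunken logic
    return logic / logic_area

# N2 vs N3E: TSMC advertises 1.15x mixed density
print(f"Implied logic gain for 1.15x mixed: {implied_logic_scaling(1.15):.2f}x")
# A16 vs N2P: 1.07x - 1.10x mixed density
print(f"Implied logic gain for 1.10x mixed: {implied_logic_scaling(1.10):.2f}x")

Under that assumption, the advertised 1.15x mixed gain for N2 over N3E works out to roughly 1.35x for logic alone, and A16's 1.10x over N2P to roughly 1.22x, in line with the rough logic-only estimates readers work out in the comments below.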

Those two production nodes are N3X (3nm-class, extreme performance-focused) and N2 (2nm-class). TSMC says that when compared to N3P, chips made on N3X can either lower power consumption by 7% at the same frequency by reducing Vdd from 1.0V to 0.9V, increase performance by 5% at the same area, or increase transistor density by around 10% at the same frequency. Meanwhile, the key advantage of N3X compared to its predecessors is its maximum voltage of 1.2V, which is important for ultra-high-performance applications, such as desktop or datacenter GPUs.

N2 will be TSMC's first production node to use gate-all-around (GAA) nanosheet transistors, which will significantly enhance its performance, power, and area (PPA) characteristics. When compared to N3E, chips produced on N2 can cut their power consumption by 25% - 30% (at the same transistor count and frequency), increase their performance by 10% - 15% (at the same transistor count and power), and increase transistor density by 15% (at the same speed and power).

While N2 will certainly be TSMC's undisputed champion when it comes to power consumption and transistor density, N3X could challenge it on performance, especially at high voltages. For many customers, N3X will also have the benefit of using proven FinFET transistors, so N2 will not automatically be the best of TSMC's nodes in the second half of 2025.

2026: N2P and A16

In the following year, TSMC will again offer two nodes that are set to target generally similar smartphone and high-performance computing applications: N2P (performance-enhanced 2nm-class) and A16 (1.6nm-class with backside power delivery).

N2P is expected to deliver 5% - 10% lower power (at the same speed and transistor count) or 5% - 10% higher performance (at the same power and transistor count) compared to the original N2. Meanwhile, A16 is set to offer up to 20% lower power (at the same speed and transistor count), up to 10% higher performance (at the same power and transistor count), and up to 10% higher transistor density compared to N2P.
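
As a rough cross-check of these figures against the table above, the short sketch below chains the N2-vs-N3E and N2P-vs-N2 power claims, assuming (our simplification, not something TSMC states) that the generational reductions compound multiplicatively.

# Sanity check: do TSMC's per-generation power claims roughly compound into
# the quoted N2P-vs-N3E figure? Multiplicative compounding is our simplifying
# assumption here; TSMC does not state how the figures combine.

n2_vs_n3e = (0.25, 0.30)   # 25% - 30% lower power, N2 vs N3E (from table)
n2p_vs_n2 = (0.05, 0.10)   # 5% - 10% lower power, N2P vs N2 (from table)

low  = 1 - (1 - n2_vs_n3e[0]) * (1 - n2p_vs_n2[0])
high = 1 - (1 - n2_vs_n3e[1]) * (1 - n2p_vs_n2[1])

print(f"Compounded N2P vs N3E power reduction: {low:.0%} - {high:.0%}")
# -> roughly 29% - 37%

The result lands in the same ballpark as the 30% - 40% reduction TSMC quotes for N2P versus N3E directly, so the per-generation and cumulative claims are at least self-consistent.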

Keeping in mind that A16 features an enhanced backside power delivery network, it will likely be the node of choice for performance-minded chip designers. Of course, A16 will also be more expensive to use, since backside power delivery requires additional process steps.

Source: TSMC

47 Comments

  • name99 - Thursday, May 23, 2024 - link

    There is potential for a more-or-less 1-time jump in SRAM density with BSPD. (Whether it will be one time, or spread over two or three nodes depends on how aggressively clocks and signals are moved to backside along with power). But after that back to slow grind.

    On the other hand, for designs that aren't power-insane, there's the potential to start moving SRAM onto a second layer. (Yeah, yeah, we all know about V-Cache, calm down!) Existing packaging for this sort of thing is sub-optimal, but obviously will be improved constantly over the next few years.
    So point is, with a split between ever more local logic on one layer, and SRAM on a second layer, the party can keep going for at least a few more years.
    Reply
  • mattbe - Thursday, May 23, 2024 - link

    You can't just look at the diameter of the silicon atom. The lattice constant of crystalline silicon is 0.543 nm...... Reply
  • Dante Verizon - Thursday, May 23, 2024 - link

    Almost... The cache is practically stagnant... Reply
  • Dante Verizon - Thursday, May 23, 2024 - link

    3nm = 1.4x (See the progress of interactions.) Reply
  • lightningz71 - Thursday, May 23, 2024 - link

    The main culprit is SRAM with signal lines not far behind. Once feature sizes got down to where they are a few years ago, the physical space that the capacitance of an SRAM cell requires became an intractable problem. You just can't shrink them much more and have them continue to function properly. As for signal lines, there has to be physically a minimum amount of material to move the signals between individual transistors, and as transistors have shrunk, the signal lines have had to remain roughly the same. SRAM and signal lines are taking up greater and greater proportions of the physical area of logic chips, and it doesn't appear that it's going to get much better anytime soon. BPD does help to some extent in that it moves some of the power lines to the back side of the transistors, but that only saves you a couple nodes worth of scaling and complicates chip lithography processing. Reply
  • Dante Verizon - Thursday, May 23, 2024 - link

    SRAM needs to take the 3D route. Reply
  • nandnandnand - Thursday, May 23, 2024 - link

    It would be interesting to see Zen 6 or 7 take L3 cache off the core die entirely, and move to all 3D cache on an older node, and differing amounts of layers to differentiate the product stack.

    Samsung should get a move on with their own version: "X-Cube" announced in 2020, nowhere to be found since.
    Reply
  • name99 - Thursday, May 23, 2024 - link

    SRAM doesn't rely on capacitance, that's DRAM.

    The problem with SRAM is the density of wiring. We can make the SRAM transistors smaller, but then we can't pack all the wiring required to power and control them into the available space.

    BTW DRAM (after a long long hiatus) is exploring ways of effectively stacking the (very long skinny) cylindrical capacitors. Samsung suggests it might have product by around 2031 (which, admittedly, is far enough away that I wouldn't take the exact date too seriously).
    Reply
  • Dolda2000 - Thursday, May 23, 2024 - link

    I was under the impression that backside power was supposed to alleviate SRAM wiring to some extent, but A16's density over N2 seems quite modest to say the least. Reply
  • nandnandnand - Thursday, May 23, 2024 - link

    It's obviously bad, but if the SRAM scaling is nearly zero, and analog is also close to zero, then 1.15x would be more like 1.3x for logic.

    The missing number is wafer price. A lot of customers could tolerate the stagnation in density scaling for the other benefits like performance and power efficiency, but if the wafer costs are skyrocketing, that could be a problem for some products.
    Reply
