
Nvidia’s HBM4 performance push puts Samsung and SK Hynix on diverging paths, while Nvidia smiles

Nvidia raises the bar for HBM4 supply, intensifying race on speed and logic dies
Samsung leans on foundry strength, while SK Hynix faces TSMC bottlenecks
Is Nvidia pushing supplier competition to gain leverage in HBM price talks?

Nvidia has significantly raised its supply requirements for HBM4, the sixth-generation high-bandwidth memory expected to lead this year’s AI memory market. By demanding higher data-transfer speeds than before, Nvidia is effectively pushing suppliers to strengthen their logic-die design capabilities. Because the ability to secure that technology could reshape the outlook for Samsung Electronics and SK Hynix, Nvidia’s key HBM suppliers, some in the market see the move as a strategy to stoke competition and gain leverage in price negotiations.

HBM4 supply bar rises

According to semiconductor industry sources on the 15th, Nvidia informed Samsung Electronics and SK Hynix in the fourth quarter of last year that it was raising the HBM4 supply requirements for its next-generation GPU platform, Rubin. Nvidia lifted the initial HBM4 data-transfer speed target from 8–10 Gbps to 11 Gbps or higher. As a result, the start of full-scale HBM4 mass production is expected to slip from the first quarter to the second quarter or later, with the first-half market likely to be led by the current flagship, fifth-generation HBM (HBM3E).
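For context, the per-pin speed target translates directly into per-stack bandwidth. The short Python sketch below works through that arithmetic; the 2048-bit per-stack interface width is an assumption commonly associated with HBM4 and is not stated in the article.

# Minimal sketch of the bandwidth arithmetic behind the raised speed target.
# Assumption: a 2048-bit interface per HBM4 stack; per-pin speeds in Gbit/s.

def stack_bandwidth_tb_s(pin_speed_gbps: float, interface_bits: int = 2048) -> float:
    """Peak per-stack bandwidth in TB/s for a given per-pin data rate."""
    gigabytes_per_second = pin_speed_gbps * interface_bits / 8
    return gigabytes_per_second / 1000

for speed in (8, 10, 11):
    print(f"{speed} Gbps/pin -> ~{stack_bandwidth_tb_s(speed):.2f} TB/s per stack")
# Prints roughly 2.05, 2.56, and 2.82 TB/s per stack, respectively.

Under that assumption, moving the floor from 8 Gbps to 11 Gbps per pin is worth on the order of 0.8 TB/s of additional peak bandwidth per stack, which is why the tighter target matters for Rubin.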

The industry says Nvidia’s tougher requirements have effectively expanded the HBM performance race into a competition over logic-die design capabilities. HBM stacks multiple DRAM dies vertically. The logic die is the chip attached at the bottom of the stack, handling functions such as data timing, path control, and power management, tasks that are difficult to implement with DRAM dies alone. In particular, at the 11 Gbps-plus speeds Nvidia is demanding, heat and power fluctuations in HBM can quickly translate into data-processing errors, making upgrades to the logic die’s specifications essential to keep them in check.

Samsung is said to be revising its logic-die design with a focus on thermal control and performance improvements, while accelerating development in collaboration with its foundry division. TrendForce said Samsung plans to apply a 1c-nm (10-nanometer-class) process to HBM4 and use its own foundry technology for the base die, already giving it a development edge over rivals. TrendForce added that this approach is better suited to achieving higher transfer speeds, and that Samsung may be the first to secure HBM4 supplier qualification and gain an advantage in supplying higher-tier Rubin products.

TSMC capacity crunch puts SK Hynix on alert

SK Hynix, meanwhile, may face a tougher path in improving data-transfer speeds. Unlike Samsung Electronics, which has its own foundry facilities and technology, SK Hynix relies heavily on Taiwan’s TSMC for key parts of its logic-die capabilities. For TSMC, which is already struggling to keep up with surging demand for leading-edge processes, additional investment and line expansion for logic dies are likely to be a lower priority. That raises the risk that SK Hynix’s performance-upgrade requirements cannot be reflected quickly in production.

TSMC has long handled advanced-chip manufacturing for major fabless companies such as Nvidia, but utilization at sub-5nm nodes has recently remained near full capacity. With orders for AI accelerators and high-performance computing (HPC) chips soaring, supply bottlenecks, especially around 3nm, have become more visible. TSMC has acknowledged the issue: in November last year, Chairman C.C. Wei said at an industry event that demand from major customers for advanced-node capacity was roughly three times what the company could supply.

The capacity shortage is expected to worsen over time. The Information reported on the 15th that TSMC recently told Nvidia and Broadcom it could not allocate as much production capacity as they need, effectively turning down additional foundry orders from major customers. Nvidia relies on TSMC to manufacture its in-house-designed GPUs, while Broadcom uses TSMC to produce AI chips such as the Tensor Processing Units (TPUs) it develops with Google.

Will HBM price hikes be capped?

Some in the market see Nvidia’s push for higher performance as a strategy to stoke competition between SK Hynix and Samsung Electronics and regain leverage in price negotiations. HBM prices have been soaring amid intensifying competition for AI leadership. According to industry sources, Samsung and SK Hynix have recently quoted prices more than 50% higher than before when renewing supply contracts with existing customers for 12-high HBM3E. A 12-high HBM3E chip had been priced at around $300, but companies that have recently renewed contracts are reportedly paying closer to $500 per chip. New customers are said to face even higher prices to secure supply.
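As a rough check, the reported move from about $300 to about $500 per 12-high stack works out to an increase of roughly two-thirds, consistent with the "more than 50% higher" quotes above. Both dollar figures are the article’s reported estimates, not confirmed contract prices; the sketch below simply restates the arithmetic.

# Sanity check on the reported 12-high HBM3E price gap (figures as reported above).
old_price_usd = 300   # said to be the going rate before contract renewals
new_price_usd = 500   # reportedly paid by customers renewing contracts
increase = (new_price_usd - old_price_usd) / old_price_usd
print(f"~{increase:.0%} increase")   # ~67%, in line with "more than 50% higher"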

Against this backdrop, Nvidia’s effort to pit Samsung and SK Hynix against each other could shift part of the pricing advantage back toward Nvidia. Even for a supply-constrained product like HBM, the pace of price increases could slow, or the scale of hikes could be limited. Over the medium to long term, the effect could become more pronounced, as the two suppliers may be forced to compete on non-price terms such as long-term supply agreements, priority allocation, customization, and improvements in yield and packaging responsiveness. In that scenario, upside for the average selling price (ASP) of HBM would naturally be capped.

Nvidia’s ability to pursue such a bold approach stems from its position as the dominant buyer in the HBM market. Jensen Huang, Nvidia’s CEO, said at a recent press conference in Las Vegas that “we are the first consumer of HBM4, and we don’t expect other companies to use HBM4 for some time,” adding that “as the sole consumer of HBM4, we will be able to enjoy that advantage.” He also noted that “Nvidia’s demand is extremely strong, so all HBM suppliers are expanding production,” saying, “we’re all doing well.”
