Broadcom tie-up shifts Samsung HBM from performance rivalry to supply competition
Pricing and mass-production capacity offset earlier weaknesses
Potential opening of a Samsung–SK Hynix duopoly
HBM’s relative appeal shifts with changing market conditions

Samsung Electronics appears to have largely resolved performance and yield issues in its 12-high HBM3E products through expanded cooperation with Broadcom. Broadcom’s design-centric business model aligns with Samsung’s pricing competitiveness and supply flexibility, elevating HBM from what was once considered a problematic product to a strategic asset. As a result, market focus has moved beyond pure technological superiority toward supply timing and volume control.
DRAM performance gains narrow the gap
According to industry sources on the 15th, Samsung Electronics is expected to expand the share of its HBM3E 12-high products supplied to Broadcom, which designs Google’s next-generation tensor processing units (TPUs). Market expectations had initially favored SK Hynix as the sole or primary supplier of HBM3E 12-high, but Samsung’s progress in stabilizing performance and yields has altered the landscape. The two companies are currently conducting mass-production product tests and are reported to have reached near parity in performance. Google plans to deploy HBM3E 12-high in its upcoming upgraded seventh-generation TPU, known as TPU 7E.
Industry observers attribute this shift to aligned business interests between Samsung and Broadcom. Broadcom has long followed a model of designing system-on-chip products to customer specifications and outsourcing manufacturing at the most cost-efficient terms. A Broadcom engineer noted that the company’s identity centers on delivering chips that meet customer requirements at the lowest reasonable cost while maximizing margins, adding that Samsung’s ability to offer lower prices and larger volumes explains its appeal. This has led some in the industry to characterize Broadcom as closer to a design house than a traditional semiconductor manufacturer.
For Samsung, the strategy is clear: offset its late-mover disadvantage in HBM3E 12-high through pricing, yield improvements, and mass-production capacity. Over the past year, Samsung failed to pass Nvidia’s qualification tests, allowing SK Hynix to dominate the HBM market. In response, Samsung acknowledged product-level limitations and reworked its designs from the ground up. In particular, it redesigned the D1a DRAM used in HBM, reducing both performance variation and yield issues. As these efforts began to bear fruit, cooperation with Broadcom became viable, bringing a product once labeled a liability back into Samsung’s core lineup.
Market conditions have also turned more favorable for Samsung. As Nvidia GPU prices and operating costs have surged, big tech companies have started allocating more capital toward alternatives aimed at reducing dependence on Nvidia. Analysts suggest Broadcom opted for Samsung as a strategic partner instead of SK Hynix, whose HBM prices have supported operating margins of around 70%. Samsung is said to have priced its HBM3E products roughly 20% lower than SK Hynix, while highlighting its ability to supply additional volumes through new production lines at its Pyeongtaek campus.
From catch-up race to duopoly
With Samsung establishing a foothold in HBM3E 12-high, competitive dynamics are also shifting rapidly in the race for next-generation HBM4. HBM4 is a core memory component for Nvidia’s next AI accelerator platform, Rubin, set to debut next year, and requires both supply stability and performance validation from the outset. Samsung is reportedly set to supply more than 30% of Nvidia’s requested HBM4 volumes next year, indicating that it has secured a meaningful position in the supply chain earlier than it did with HBM3E.
This improved standing reflects changes in product configuration and production strategy. Samsung has opted to combine a logic die fabricated on its own 4-nanometer foundry process with next-generation DRAM, giving it a generational edge over rivals. An industry source familiar with Samsung’s operations said that unlike HBM3E, the company is meeting performance requirements more consistently in HBM4 and gradually strengthening its negotiating position.
Still, SK Hynix retains its status as the leading HBM4 supplier. In September, it became the first to establish mass production of 12-high HBM4 and began initial shipments of 20,000 to 30,000 units after supply talks with Nvidia in October. SK Hynix has also agreed to supply the maximum volumes it can handle next year, though industry consensus holds that capacity constraints will limit further expansion.
Micron’s presence has weakened in comparison. Its HBM4 program has faced delays due to technical issues during redesign, with analysts pointing to performance limitations stemming from the use of its own DRAM process instead of foundry-based logic dies. As a result, Micron’s share of Nvidia-bound HBM4 supply is expected to remain below 10%, reinforcing a de facto duopoly between Samsung and SK Hynix.

Production volume management shapes pricing
Against this backdrop, HBM4 pricing is increasingly driven by supply timing and production volume rather than pure technological advantage. A key example is SK Hynix’s revised ramp-up strategy. The company originally planned to begin mass production in February next year and sharply increase output by the end of the second quarter, but has since pushed the start to around April and opted for a more flexible expansion schedule. This reflects a shift away from rapid volume increases tied strictly to Nvidia’s qualification timeline toward supply pacing aligned with market conditions and customer schedules.
Concerns that mass production of Nvidia’s Rubin chips could be delayed have also influenced this recalibration. HBM4 doubles the number of input/output terminals to 2,048, significantly increasing complexity, while its logic die is produced using foundry processes rather than traditional DRAM manufacturing. Additional bottlenecks have emerged in TSMC’s CoWoS 2.5D advanced packaging technology, further destabilizing supply schedules. In this environment, managing supply timing has become as critical to profitability as technological readiness.
These considerations are extending into comparisons between HBM and conventional DRAM profitability. The 36GB HBM4 product slated for mass supply next year is being discussed at prices in the mid-$500 range, implying a per-gigabyte price of around $15. By contrast, spot prices for PC DDR5 16Gb chips have risen to $26.3, or roughly $13 per gigabyte, while server DDR5 RDIMM 64GB modules trade near $780, about $12 per gigabyte. The once wide pricing gap between HBM and conventional DRAM has narrowed considerably.
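The per-gigabyte figures above follow directly from the cited prices and capacities. A short sketch of that arithmetic, assuming $550 as a midpoint for the "mid-$500 range" HBM4 price (the exact figure is not given in the article):

```python
# Per-gigabyte price comparison using the figures cited above.
# $550 for the 36GB HBM4 stack is an assumed midpoint, not a reported price.

def price_per_gb(price_usd: float, capacity_gb: float) -> float:
    """Return the price per gigabyte, rounded to two decimals."""
    return round(price_usd / capacity_gb, 2)

hbm4      = price_per_gb(550, 36)    # 36GB HBM4 stack -> ~$15.28/GB
ddr5_chip = price_per_gb(26.3, 2)    # 16Gb chip = 2GB  -> ~$13.15/GB
rdimm     = price_per_gb(780, 64)    # 64GB server RDIMM -> ~$12.19/GB

print(hbm4, ddr5_chip, rdimm)
```

The spread of roughly $12–15 per gigabyte illustrates how narrow the premium has become relative to the multi-fold gap HBM once commanded over commodity DRAM.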
When factoring in the higher manufacturing costs of HBM4, which requires advanced foundry processes and packaging, some analysts argue that conventional DRAM now offers superior margins. UBS projected in an October report that HBM gross margins would reach 62% between December 2025 and February 2026, compared with 67% for conventional DRAM, with the latter expected to exceed 70% in 2026. This is forcing memory makers to make strategic decisions about how to allocate DRAM production capacity.
Samsung has opted to prepare HBM4 supply while continuing investments in conventional DRAM. With industry-leading DRAM capacity estimated at around 650,000 wafers per month, it plans to deploy sixth-generation 10-nanometer (1c) DRAM for HBM4 core dies and 4-nanometer processes for base dies, while expanding high-value conventional products such as GDDR7 and LPDDR5X using its 1b process. SK Hynix is also adjusting course, planning to expand 1c DRAM output to 140,000–190,000 wafers per month through process conversions at its Icheon campus. As performance gaps narrow, the HBM market is increasingly defined by how quickly, how much, and in what mix suppliers can deliver.