“Will Invest $73.1 Billion” Samsung Electronics Accelerates After Seizing HBM4 Leadership, Piling Pressure on SK Hynix
Samsung Electronics, the “front-runner in HBM4,” makes an AI semiconductor push with a $73.1 billion investment
Secures Nvidia, AMD, and OpenAI as customers, sharply expanding its market clout
SK Hynix, the incumbent HBM powerhouse, faces mounting pressure to overhaul its technology strategy

Samsung Electronics will expand its annual investment to more than $73.1 billion, the largest outlay in its history. The move amounts to a full-scale strategic wager to secure leadership in artificial intelligence semiconductors, epitomized by high-bandwidth memory (HBM). The market is already projecting that if Samsung Electronics, having seized the lead in the sixth-generation HBM (HBM4) race, follows through with additional investment in the segment, pressure on its key rival SK Hynix will intensify further.
Samsung Electronics’ Largest-Ever Investment
According to the semiconductor industry on March 23, Samsung Electronics recently disclosed its “2026 Corporate Value Enhancement Plan,” stating that it will commit more than $73.1 billion this year to capital expenditure and research and development. That marks a 21.7% increase from last year’s $60.0 billion and the largest investment in the company’s history. Most of the funds are expected to be directed toward the Device Solutions (DS) division, which oversees Samsung Electronics’ semiconductor operations. Going forward, Samsung Electronics intends to leverage a “one-stop solution” spanning memory, foundry, and advanced packaging to secure primacy in the AI semiconductor market and preserve an overwhelming technological lead in high-value memory segments such as HBM.
The market is also increasingly viewing Samsung Electronics as likely to focus especially on advancing HBM technology. For years, the company struggled to stand out in the segment, but sentiment has shifted rapidly as competition in HBM4 has intensified. In January, Samsung Electronics officially announced that it had become the first company in the world to commence mass-production shipments of a 12-layer HBM4 product. On the strength of product performance, it has established itself as an HBM4 front-runner. Samsung Electronics’ HBM4 uses its most advanced 1c DRAM process, a sixth-generation 10-nanometer-class node, while the base die, the bottom substrate of the HBM stack, employs a 4-nanometer process that offers advantages in both performance and power efficiency. As a result, Samsung Electronics’ premium HBM4 has reached an operating speed of up to 13 Gbps. That far exceeds not only the 8 Gbps benchmark set by the Joint Electron Device Engineering Council (JEDEC), but also Nvidia’s required 10–11 Gbps threshold.
The market had initially voiced considerable concern over Samsung Electronics’ technological gambit. The chief reason was that weak yields in the 1c DRAM process could become a bottleneck for HBM mass production. Samsung Electronics’ answer was a design modification to the 1c DRAM. It preserved the line width of the core circuitry while partially easing the line-width criteria for peripheral circuitry, thereby adjusting manufacturing complexity. As the peripheral circuitry became easier to implement, yields for Samsung Electronics’ 1c DRAM began to improve markedly, and the enlarged chip size also helped secure stability in the TSV process, a critical step in HBM production.
Samsung Electronics’ HBM4 Supported by Firm Market Demand
On the back of that technological footing, Samsung Electronics has successfully entered Nvidia’s HBM4 supply chain, one of the market’s most important customer ecosystems. More recently, Nvidia Chief Executive Officer Jensen Huang personally praised Samsung Electronics, reinforcing the cooperative framework between the two companies. Visiting Samsung Electronics’ booth at the annual GTC 2026 conference in San Jose on March 16 local time, he described the relationship as “a great partnership,” signaling a distinctly favorable stance. In his keynote address, he also introduced the next-generation language processing unit (LPU), Groq 3, and said, “A huge thanks to Samsung for handling production.” Groq 3 was developed on the basis of technology from Groq, an AI semiconductor start-up focused on inference chips that Nvidia acquired last year, and is expected to be applied to Vera Rubin, Nvidia’s next-generation AI accelerator slated for launch in the second half of the year. Samsung Electronics is also the supplier of the HBM4 mounted on Vera Rubin.
Momentum is also building in the race to secure other major customers. On March 18, Samsung Electronics signed a memorandum of understanding at its Pyeongtaek campus with AMD, the U.S. semiconductor manufacturer, to expand cooperation in next-generation AI memory and computing technologies. Under the agreement, Samsung Electronics will supply HBM4 for AMD’s AI accelerator, the Instinct MI455X. The two companies also plan to collaborate on high-performance DDR5 memory solutions to maximize the performance of the Helios AI data-center rack-scale platform and sixth-generation EPYC server CPUs.
On March 21, Taiwan-based IT media outlet DigiTimes also reported that “Samsung Electronics has passed the exacting technical validation conducted by OpenAI and design partner Broadcom, winning a major HBM4 order.” OpenAI, which partnered with Broadcom to secure custom silicon for its first in-house AI processor, codenamed Titan, has reportedly selected Samsung Electronics as an additional partner. Once Samsung Electronics’ 12-layer HBM4 begins to be deployed in OpenAI’s data centers in earnest later this year, profitability at Samsung Electronics’ memory division is expected to expand substantially.

SK Hynix’s Response as Samsung Electronics Intensifies Its Pursuit
As Samsung Electronics’ market influence expands, SK Hynix’s strategic burden deepens. If Samsung Electronics moves ahead with large-scale investment in AI semiconductors and further strengthens its competitiveness, a major realignment could unfold across an AI semiconductor supply chain that has so far revolved around SK Hynix. The existing HBM market had been characterized by SK Hynix’s dominance and Micron’s pursuit, with supply unable to keep pace with demand. Once Samsung Electronics, armed with overwhelming manufacturing capacity, begins to exert full-scale influence, part of that supply-demand imbalance could be alleviated and the competitive landscape could shift materially. According to Counterpoint Research, projected global HBM4 market share this year stands at 54% for SK Hynix and 28% for Samsung Electronics. Considering that SK Hynix’s market share had previously exceeded 60% while Samsung Electronics was in the low-20% range, the shift is striking.
SK Hynix has now sent its final HBM4 sample to Nvidia. The product was completed after undergoing design modifications and optimization work beginning in the fourth quarter of last year in order to meet Nvidia’s required data-transfer speed. The problem is that the product is highly likely to lag behind Samsung Electronics’ HBM4 in performance. That stems from SK Hynix’s decision during HBM4 development to combine a mature fifth-generation 1b DRAM with a base die fabricated on TSMC’s 12-nanometer process. Compared with Samsung Electronics’ 1c DRAM and 4-nanometer process, that is an older node.
Having ceded the lead in the HBM4 performance competition to Samsung Electronics, SK Hynix is reportedly concentrating its efforts on advancing seventh-generation HBM, or HBM4E. The plan calls for using the 1c DRAM process for the HBM4E core die and applying TSMC’s 3-nanometer process to the logic die. SK Hynix had originally intended to reserve the 3-nanometer process for customer-tailored HBM, since the node’s significant performance uplift inevitably comes with higher product costs. Samsung Electronics’ rapid advance in HBM4 has thus begun to reshape SK Hynix’s cost structure and technology strategy in earnest.