Samsung Cuts HBM3E Prices to Boost HBM4 Push as SK hynix Secures Key Clients
Samsung Joins Late but Leads HBM3E Price Cuts as It Ramps Up Production at P4
Invests in New Lines to Prepare for the HBM4 Era
SK hynix Set to Begin Shipments in Q4, Finalizes HBM4 Supply Agreements

As competition intensifies for dominance in the high-bandwidth memory (HBM) market, Samsung Electronics has significantly lowered the supply price of its fifth-generation HBM3E chips. The move is seen as an effort to boost the competitiveness of its delayed HBM3E lineup through aggressive pricing, while focusing on securing profitability with the next-generation HBM4.
Samsung’s HBM3E Supply Strategy
According to IT outlet DigiTimes on the 27th (local time), Samsung Electronics is preparing a major comeback in the second half of the year to regain ground in the HBM market after losing momentum due to delays in certifying its next-generation 12-layer HBM3E products. Following about 18 months of repeated testing and revisions, Samsung only began shipments of its 12-layer HBM3E to key client Nvidia in the fourth quarter, entering the market roughly 6 to 12 months later than competitors SK hynix and Micron.
With SK hynix and Micron already securing most of their HBM3E orders through 2026, Samsung has struggled to establish a meaningful foothold even after shipments began. The company’s countermeasure was price. As the average price of HBM3E is expected to decline by 2026, Samsung is reportedly leading price cuts of up to 30% for its 12-layer HBM3E stack products.
Some analysts believe Samsung’s price-cutting strategy began as early as the second quarter. Despite a noticeable rise in HBM3E sales during that period, the company’s DRAM average selling price (ASP) remained largely unchanged. In July, Samsung stated that “as supply growth is expected to outpace demand for HBM3E products, market prices are likely to be affected,” suggesting that the profit gap between HBM3E and general DRAM could narrow sharply.
Shift Toward HBM-Focused Production
Market analysts widely expect Samsung Electronics to focus on boosting profitability through next-generation HBM4 rather than HBM3E. The company is pivoting away from conventional server DRAM toward higher-value AI memory products in preparation for the upcoming HBM4 competition — a shift clearly reflected in its investment strategy for the P4 line at the Pyeongtaek Campus.
The P4 facility, scheduled to begin full-scale operations in 2026, is being equipped to produce 1c (sixth-generation 10nm-class) DRAM optimized for HBM4. Samsung is restructuring parts of its existing DRAM process to focus on AI memory while expanding EUV lithography and TSV packaging equipment — both essential for high-density manufacturing. An industry insider noted, “P4 is a symbolic project marking Samsung’s transition toward AI-driven memory. The company’s portfolio will gradually shift from server DRAM to high-value products such as HBM and HBM-PIM (Processing-in-Memory).”
Market research firm TrendForce echoed this outlook, projecting that most of Samsung’s new capacity from P4 will be allocated to HBM4 and HBM4E production. The firm added that as demand for HBM in AI GPUs and server markets continues to surge, investing in HBM-focused production will yield stronger profitability than conventional DRAM.

Competitors See Diverging Fortunes
Samsung’s key rival, SK hynix, has already completed development of HBM4 and established mass-production capabilities as of last month. The company plans to begin shipments in the fourth quarter and expand sales in earnest next year. Client acquisition has also been effectively finalized. During its third-quarter 2025 earnings call on the 29th, SK hynix stated, “HBM products have been sold out since 2023, and supply negotiations for HBM4 have been completed at a level that ensures profitability.” The company added, “We expect HBM supply to remain unable to catch up with demand in the near term, and therefore anticipate a much higher growth rate than general DRAM.”
Meanwhile, U.S. competitor Micron is reportedly struggling to meet Nvidia’s performance requirements for HBM4. According to Jeff Kim, an analyst at Jefferies, “Micron claims to have achieved 11 gigabits per second in HBM4, but this has not yet translated into good yield or volume production.” The assessment contrasts with comments from Micron CEO Sanjay Mehrotra, who said during the company’s earnings report last month that Micron had delivered 11Gbps HBM4 samples to customers and planned to begin initial mass shipments in the second quarter of next year, with full-scale production in the second half.
Citing engineering sources, Kim noted that Micron may need to redesign its HBM4 architecture entirely — a process that could take up to nine months. “Raising the speed of HBM4 through internal redesign is extremely difficult,” he explained, adding that “the full process could take six to nine months, during which Micron is converting part of its HBM production lines back to server DRAM.”