Samsung Doubles NAND Prices as AI Data Centers Reshape the Memory Market
Server DRAM demand surge reshuffles capacity allocation
Cloud and data center SSD prices spike
Rising demand for high-speed storage to offset DRAM limits

Samsung Electronics has raised NAND flash supply prices by more than 100% quarter on quarter, heightening tension across the broader memory market. As equipment and wafer input have been concentrated on high-bandwidth memory (HBM) and server DRAM, effective NAND production capacity has shrunk, and the impact is cascading from surging NAND chip prices into solid-state drive (SSD) pricing. The expansion of artificial intelligence (AI) infrastructure has further amplified demand for high-speed storage, compounding near-term supply pressure across the NAND market.
Reprioritization between DRAM and NAND
According to IT outlet Wccftech on the 25th (local time), Samsung set first-quarter NAND flash contract prices at more than double the previous quarter in long-term supply agreements with major global customers including Apple, Nvidia, and AMD. Ahead of the price increase, Samsung had already reduced the number of wafers allocated to NAND production from 4.9 million last year to 4.68 million this year. While the headline reduction amounts to just 220,000 wafers, the cut translated quickly into tighter supply, driving a sharp upswing in NAND prices.
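The scale of the cut is easy to put in perspective with the figures cited above; a quick check (the percentage is derived here, not stated in the original report):

```python
# Samsung's NAND wafer allocation, per the figures cited above (wafers per year).
last_year = 4_900_000
this_year = 4_680_000

cut = last_year - this_year
share = cut / last_year
print(f"Reduction: {cut:,} wafers ({share:.1%} of the prior allocation)")
```

A cut of roughly 4.5% may look modest on paper, which underscores how little slack the NAND supply chain had left for the reduction to translate so quickly into price spikes.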
Behind the pullback in NAND output lies a broader DRAM-centric reconfiguration of capacity across the memory industry. With AI server investment led by Nvidia accelerating demand for HBM and server DRAM, major memory manufacturers have prioritized limited equipment and manpower toward higher-margin DRAM products. NAND, by contrast, has been pushed down the priority order. Although NAND and DRAM share the same silicon wafer base, differences in fab operations, equipment investment, and process conversion schedules make such reprioritization unavoidable.
Changes in production structure have also contributed to tighter supply. To meet AI data center demand, Samsung has been increasing the share of quad-level cell (QLC) output from lines previously focused on triple-level cell (TLC) products, a shift that entails equipment setup, process stabilization periods, and initial yield losses. The same dynamic applies to SK hynix, another major pillar of the global NAND market. Industry consensus holds that natural output losses during these transitions at both companies are intensifying current supply tightness.
Signals linking reduced supply to price spikes have become increasingly clear. Market tracker TrendForce projects first-quarter NAND flash contract prices to rise 33–38% quarter on quarter, while IDC expects NAND supply growth this year to remain around 17%. In a market where supply expansion is failing to keep pace with recovering demand, Samsung’s contract price hike stands as a symbolic marker of direction. After prolonged efforts to defend pricing amid weak profitability, the NAND industry appears to have entered a phase of simultaneously adjusting supply and prices to capitalize on the current memory upcycle.

Enterprise SSD ‘supply alert’
The earliest impact of NAND production cuts has been felt in data centers, most visibly in the 1Tb TLC products used for data center SSDs. TrendForce data show that in December last year, prices for 1Tb TLC surged by more than 65% month on month, while 512Gb and 256Gb TLC products also posted strong gains amid shrinking supply. Because data center SSDs integrate large volumes of NAND into individual servers, shifts in availability for specific capacities or process nodes are reflected almost immediately in pricing.
Unlike consumer SSDs, data center SSDs operate within a rigid supply-and-demand structure. Cloud providers and AI data center operators lock in storage procurement plans in advance of server expansions and must continuously secure products that meet defined performance and durability standards. As a result, short-term demand is relatively inelastic even when prices rise. Tightness in TLC-based NAND has therefore pushed QLC SSD prices higher in tandem, and more recently, multi-level cell (MLC) products used in industrial and consumer SSDs have also begun to climb through indirect demand spillovers.
Contract structures between suppliers and customers further amplify price sensitivity. In the data center SSD market, a significant share of volumes is adjusted through quarterly or semiannual price negotiations. Once supply reductions are confirmed, prices for new contracts can be revised upward rapidly. SanDisk offers a telling example: on the 20th, the company notified customers that it would double enterprise SSD prices in the first quarter compared with the fourth quarter of last year. The move far exceeded market expectations of a 30–40% increase, laying bare the extent to which supply shortages are driving price escalation.
HDD → SSD → NVMe SSD
Recent developments also underscore a clear shift toward NVMe-based SSDs as roles for memory and storage diverge in large-scale AI compute environments. As generative AI spreads, the absolute volume of data requiring storage—training datasets, parameter snapshots, logs, and checkpoints—inevitably expands. The ability to retrieve this data repeatedly and at high speed has emerged as a determinant of overall system performance. In AI servers, GPUs handle computation while HBM and server DRAM deliver ultra-low latency and high bandwidth, with storage responsible for reliably housing and accessing massive datasets and intermediate results.
Rising demand for high-speed storage to compensate for DRAM’s technical limits is another key driver behind NVMe adoption. During AI inference, intermediate data known as KV cache accumulates rapidly; relying solely on HBM or DRAM for this function imposes constraints in both capacity and cost. Nvidia has responded by equipping all units of its next-generation AI accelerator, Vera Rubin, with large-capacity SSDs. Vera Rubin incorporates 1,152TB of SSD capacity—more than ten times that of the preceding Blackwell model. With shipments planned at 30,000 units this year and 100,000 next year, new storage demand is projected to reach 34.6 million TB in 2026 and 115.2 million TB in 2027.
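The projected demand figures follow directly from multiplying per-unit SSD capacity by planned shipments; a quick check using the numbers cited above:

```python
# New storage demand implied by Vera Rubin shipments (figures from the article).
SSD_PER_UNIT_TB = 1_152  # SSD capacity per Vera Rubin unit, in TB

# Planned unit shipments: 30,000 this year (2026), 100,000 next year (2027).
shipments = {2026: 30_000, 2027: 100_000}

for year, units in shipments.items():
    demand_tb = units * SSD_PER_UNIT_TB
    print(f"{year}: {demand_tb / 1e6:.1f} million TB")
```

The totals match the article's projections of 34.6 million TB in 2026 and 115.2 million TB in 2027, making clear that the demand wave comes straight from accelerator shipment volumes rather than from any assumed growth model.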
The evolution of storage technologies reinforces this shift. Historically, large datasets were stored on hard disk drives (HDDs), but rotating media entailed long latencies and limited parallel access. The spread of SSDs significantly improved access speed and stability, yet SSDs using the Serial ATA (SATA) interface still faced constraints in large-scale parallel computing. NVMe SSDs, which connect directly to CPUs and GPUs via the PCI Express (PCIe) interface, have therefore become the data center standard.
Industry participants see NVMe as unavoidable despite higher costs. Given the computational loads and data volumes handled by AI servers, performance degradation from storage bottlenecks would impose far greater costs. As AI and cloud demand forms an independent axis separate from consumer markets such as mobile and PCs, investment in high-speed, low-latency storage is increasingly viewed as a necessity rather than an expense. This marks a phase in which NVMe is selected based on system efficiency rather than price competitiveness, offering a clear explanation for the current concentration of demand.