
"Cutting Costs at the Expense of Performance" ASRock Bets on HUDIMM, Yet Faces Limited Applicability Amid AI-Driven Memory Shortage

By Aoife Brennan

Aoife Brennan is a contributing writer for The Economy, with a focus on education, youth, and societal change. Based in Limerick, she holds a degree in political communication from Queen’s University Belfast. Aoife’s work draws connections between cultural narratives and public discourse in Europe and Asia.
"Cost Reduction Achievable" ASRock Unveils Redesigned DDR5 Standard 'HUDIMM'
Pronounced Performance Degradation Positions It as a Compromise in an Overheated Memory Market
AI Boom Reorients Market Toward Server-Centric Products Such as HBM and SOCAMM 2

Taiwanese comprehensive computer component manufacturer ASRock has introduced a redesigned DDR5 standard targeting the entry-level market. As memory prices continue to surge amid the artificial intelligence (AI) boom, the company has proposed a cost-reduction approach by lowering performance relative to conventional DDR5. However, industry observers note that the extent of performance degradation surpasses theoretical expectations, potentially limiting its applicability even within the budget segment.

ASRock’s Low-Cost DDR5 Strategy

According to a report by IT media outlet WinFuture on the 18th (local time), ASRock recently unveiled a new standard dubbed ‘HUDIMM (Half-Unbuffered DIMM),’ a redesigned DDR5 specification. Communication between the central processing unit (CPU) and memory is fundamentally structured around 64-bit channels, which a standard DDR5 module divides into two independent 32-bit halves known as sub-channels. Unlike conventional DDR5, HUDIMM activates only one of these sub-channels.
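The sub-channel arithmetic can be sketched in a few lines. This is a rough back-of-the-envelope model; the DDR5-6000 speed grade is an illustrative assumption, not a figure from ASRock's announcement:

```python
# Why halving the sub-channel count halves peak bandwidth (sketch).
# The speed grade below is assumed for illustration, not from the article.

def peak_bandwidth_gbs(transfer_rate_mts: int, sub_channels: int,
                       sub_channel_bits: int = 32) -> float:
    """Theoretical peak bandwidth in GB/s for one memory channel."""
    bytes_per_transfer = sub_channel_bits // 8  # 32-bit sub-channel -> 4 bytes
    return transfer_rate_mts * bytes_per_transfer * sub_channels / 1000

udimm = peak_bandwidth_gbs(6000, sub_channels=2)   # standard DDR5 module
hudimm = peak_bandwidth_gbs(6000, sub_channels=1)  # HUDIMM: one sub-channel
print(udimm, hudimm)  # 48.0 24.0
```

The halving is structural: capacity, chip count, and peak bandwidth all scale with the number of active sub-channels.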

This design enables a reduction in both the number of semiconductor dies—one of the primary cost drivers—and the number of ICs (individual DRAM chips mounted on a memory module) per memory stick. As a result, production costs can be significantly lowered. In certain configurations, ASRock has also highlighted potential performance benefits. The company disclosed internal test results indicating that combining an 8GB HUDIMM with a standard 16GB UDIMM yields three 32-bit sub-channels, securing higher bandwidth than a single 24GB module. This underscores not only cost efficiency but also the possibility of improved performance under specific setups.
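ASRock's mixed-module comparison reduces to counting sub-channels. A minimal sketch, assuming roughly equal throughput per sub-channel (the 24GB/s figure presumes DDR5-6000 and is not from the article):

```python
# Sub-channel counting behind ASRock's mixed-configuration claim.
# 24.0 GB/s per 32-bit sub-channel is an assumed DDR5-6000 figure.
SUB_CHANNEL_GBS = 24.0

# An 8GB HUDIMM contributes 1 sub-channel; a 16GB UDIMM contributes 2.
mixed = (1 + 2) * SUB_CHANNEL_GBS
# A single 24GB UDIMM exposes the usual 2 sub-channels.
single = 2 * SUB_CHANNEL_GBS

print(mixed, single)  # 72.0 48.0
```

Three sub-channels beat two on aggregate peak bandwidth, which is the basis of ASRock's claim; whether real workloads see that gain depends on how evenly traffic spreads across the mismatched modules.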

However, industry experts have raised concerns regarding the practicality of ASRock’s claims. While the logic of cost reduction through a decreased chip count is considered valid, the performance comparison itself is viewed as flawed. In the entry-level PC market targeted by HUDIMM, a single 24GB configuration is rarely adopted. Typical configurations in this segment consist of either a single 8GB module or dual 8GB modules. Consequently, ASRock’s analysis is seen as closer to a theoretical comparison than a reflection of real-world consumer choices.

Bandwidth Decline Becomes a Critical Issue

When evaluated as a standalone configuration, HUDIMM inevitably suffers reduced bandwidth due to its halved sub-channel structure. Performance tests conducted by Hong Kong IT media outlet HKEPC, in collaboration with PC manufacturer ASUS, illustrate this limitation. In a single-channel configuration, a 16GB UDIMM delivered bandwidth of nearly 60GB/s, whereas the same module operated as an 8GB HUDIMM fell to 30GB/s.

The issue becomes even more pronounced when a 32GB UDIMM is converted to a 16GB HUDIMM in a dual-channel configuration: the original 32GB UDIMM provided bandwidth exceeding 100GB/s, while the converted 16GB HUDIMM recorded less than 60GB/s. The results confirm a structural performance decline that extends beyond the proportional reduction in capacity. Nevertheless, ASRock maintains that HUDIMM performance remains sufficient for low-cost computers and gaming PCs.

Market participants also view HUDIMM as a potential interim compromise for entry-level PC buyers. Although its performance limitations inherently constrain its range of application, the persistent burden of DDR5 pricing leaves room for HUDIMM to serve as an affordability stopgap. Indeed, prices of conventional DRAM used in PCs and smartphones, including DDR5, have surged sharply in recent months, a trend attributed to major semiconductor companies prioritizing production capacity for high-bandwidth memory (HBM) used in AI servers and thereby intensifying supply shortages. Some forecasts even suggest that production expansions by the three leading memory semiconductor firms (Samsung Electronics, SK hynix, and Micron) may meet only around 60% of market demand through 2027.

Photo: SK hynix

Memory Giants Pivot Toward SOCAMM 2

The AI-centric shift sweeping the memory market has also accelerated competition in server-grade low-power memory modules, particularly ‘SOCAMM (Small Outline Compression Attached Memory Module) 2.’ SOCAMM 2 integrates multiple LPDDR5X low-power DRAM chips into a single module, serving as an AI server memory solution that bridges the gap between HBM and DDR5. As AI technologies advance and memory demand continues to expand, competition in the AI server memory market is extending beyond high-performance products like HBM to include LPDDR-based modules emphasizing energy efficiency.

The three global memory leaders are emerging as key players in the SOCAMM 2 race. Micron, which first collaborated with Nvidia on early SOCAMM models, developed a 256GB product last month and began supplying samples to clients. During the same period, Samsung Electronics initiated mass production of SOCAMM 2 modules for Nvidia, utilizing its 10-nanometer-class fifth-generation (1b) DRAM, while also discontinuing orders for certain low-power mobile DRAM products such as LPDDR4 and LPDDR4X. This reflects a strategic acceleration toward high-value LPDDR5X-based products, including SOCAMM 2, as part of a broader portfolio restructuring.

SK hynix announced on the 20th that it has begun mass production of a 192GB SOCAMM 2 product based on its 10-nanometer-class sixth-generation (1c) LPDDR5X DRAM. Optimized for Nvidia’s next-generation AI accelerator, Vera Rubin, SK hynix’s SOCAMM 2 delivers more than double the bandwidth of conventional RDIMM modules while improving energy efficiency by over 75%.
