[AI Infrastructure] The Shifting AI Investment Landscape, Expanding Beyond GPUs to “Physical Infrastructure” Including Power, Cooling, and Networks
Power bottlenecks emerging as AI-driven electricity demand from data centers surges
Infrastructure investment gains prominence as a prerequisite for operating the AI ecosystem
“Innovation in AI infrastructure will become the next engine of value creation”

Investment flows in artificial intelligence are expanding beyond semiconductors and data centers to encompass the broader realm of physical infrastructure, including power, cooling, and networking. As AI models grow more sophisticated and data center capacity continues to scale up, a confluence of challenges—surging electricity consumption, heat generation, and connectivity bottlenecks—has come sharply into focus. As a result, power grids, cooling technologies, and network equipment capable of fully translating GPU performance into real-world output are being reassessed as decisive determinants of AI competitiveness. In global capital markets, foundational industries that underpin the AI ecosystem are increasingly emerging as a new axis of growth.
Bloom Energy, HD Hyundai Electric cited as key beneficiaries
On the 19th (local time), experts appearing on CNBC’s investment-focused program warned that “by remaining fixated solely on AI semiconductors and software, investors risk missing decisive opportunities for the next wave of value creation,” adding that “as explosive AI growth drives a surge in data center power consumption, attention should turn to small- and mid-cap companies that possess the technologies capable of addressing these constraints.” They pointed to fuel-cell specialist Bloom Energy as a core stock poised to lead innovation in AI infrastructure. Bloom Energy has recently seen a sharp rise in orders for data center fuel cells, pushing its market capitalization beyond $35 billion. Last month, the company’s shares soared after it secured a $2.65 billion contract with AEP, the largest electric utility in the United States.
A broader group of companies positioned to alleviate power bottlenecks in AI infrastructure has also entered investors’ sights. Vertiv has established a dominant position in liquid cooling, setting de facto standards for data center thermal management systems. NuScale Power, a leader in small modular reactor (SMR) technology, is strengthening partnerships with big tech companies such as Amazon and Google. Arista Networks has emerged as a key player in network infrastructure by easing data bottlenecks between AI servers through ultra-high-speed Ethernet switching equipment. In South Korea, HD Hyundai Electric is widely regarded as a prime beneficiary of shortages in ultra-high-voltage transformers driven by grid aging and rapid data center expansion, with its earnings accelerating sharply.
AI-driven electricity consumption has already reached a level that is difficult to ignore. The International Energy Agency (IEA) projects that global data center power consumption this year will reach between 620 and 1,050 terawatt-hours, roughly double the level recorded in 2022, warning of potential grid shocks, particularly in the United States. Research and advisory firm Gartner forecasts that while data center power demand will rise 160% from 2024 levels, capacity expansion by suppliers will fail to keep pace, leaving 40% of AI data centers facing power shortages. BloombergNEF likewise noted that U.S. data center electricity demand is expected to nearly triple over the next decade, cautioning that the concentration of large-scale facilities in rural and outlying regions could intensify strain on power grids.
Aging infrastructure, surging power prices, and rising attention to SMRs
In practice, major data center hubs around the world—including Northern Virginia in the United States, as well as Germany and the Netherlands—are experiencing wait times averaging seven years and stretching up to 15 years to secure grid connections. Conditions have deteriorated so severely in Dublin, Ireland, that authorities have stopped accepting new data center connection applications altogether until 2028 due to grid saturation. The situation in the United Kingdom is similarly acute. Last year, the UK government officially warned that applications for power connections had surged tenfold over the past five years, with AI data centers driving a rapid increase in electricity demand and heightening fears of “gridlock.” Gridlock refers to severe congestion and overload in power grids that lead to connection delays or blackouts.
Grid bottlenecks are translating directly into soaring electricity prices. PJM, North America’s largest power grid operator, reported that prices in its capacity auctions jumped 800% year on year as a decade-long contraction in infrastructure supply collided with an explosion in data center demand. In the UK, auction prices for power supply in the 2027–2028 period have also reached record highs. Experts identify chronic underinvestment and aging power infrastructure as the fundamental causes. The IEA has warned that current global investment in power grids is far outpaced by spending on generation and electrification, posing a serious risk to energy security, and stressed that transmission and distribution investment must be raised to levels comparable with generation investment by the early 2030s.
Given the clear physical limits to rapidly expanding power grids, markets are increasingly focusing on strategies that maximize the efficiency of existing infrastructure. One approach involves colocating solar power or SMRs near data centers to supply a portion of electricity on-site, supplementing shortfalls through the grid. Energy storage systems (ESS) that store electricity for use during peak periods are also emerging as solutions that both ease grid strain and enhance stability. Distributing computational workloads across regions or time zones, or shifting workloads in response to power prices and grid conditions, is likewise seen as a viable alternative for mitigating bottlenecks without extensive grid expansion.
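How such price- and grid-aware workload shifting might look in code can be sketched in a few lines. The example below is a minimal illustration rather than a production scheduler: the region names, hourly prices, and the `deferrable` flag are all hypothetical, and a real system would also weigh latency, data residency, and carbon-intensity signals.

```python
from dataclasses import dataclass

# Hypothetical hourly electricity prices in $/MWh per region; a real
# scheduler would pull these from a market feed such as day-ahead auctions.
PRICES = {
    "us-east":  [82, 79, 75, 90, 110, 132, 120, 95],
    "eu-west":  [95, 88, 70, 65,  72,  85, 101, 99],
    "ap-south": [60, 58, 62, 71,  80,  77,  69, 64],
}

@dataclass
class Job:
    name: str
    gpu_hours: float
    deferrable: bool  # training batches can often wait; inference cannot

def cheapest_slot(prices: dict[str, list[float]]) -> tuple[str, int]:
    """Find the (region, hour) pair with the lowest price."""
    return min(
        ((region, hour)
         for region, hourly in prices.items()
         for hour in range(len(hourly))),
        key=lambda slot: prices[slot[0]][slot[1]],
    )

def schedule(job: Job) -> str:
    if not job.deferrable:
        return f"{job.name}: run immediately in the local region"
    region, hour = cheapest_slot(PRICES)
    # Rough cost estimate, assuming ~1 kWh drawn per GPU-hour.
    cost = PRICES[region][hour] * job.gpu_hours / 1000
    return f"{job.name}: defer to hour {hour} in {region} (~${cost:.2f})"

print(schedule(Job("nightly-finetune", gpu_hours=512, deferrable=True)))
print(schedule(Job("chat-serving", gpu_hours=1.0, deferrable=False)))
```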
Alongside power, networking is also cited as a critical constraint for AI data centers. These facilities require not merely basic server connectivity but ultra-dense computing environments in which tens of thousands of GPUs exchange massive volumes of data in real time. Even with stable power supplies, inadequate network bandwidth or excessive latency can sharply reduce GPU utilization and overall system efficiency. In large-scale AI training and inference environments, conventional data center network architectures are proving insufficient. The explosive growth in inter-server data traffic is making the adoption of ultra-high-speed Ethernet, low-latency switching technologies, and comprehensive redesigns of internal data center network architectures unavoidable.
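The link between bandwidth and GPU utilization can be made concrete with a standard ring all-reduce estimate. In the sketch below, the model size, per-step compute time, GPU count, and link speeds are all illustrative assumptions, and the worst case of zero compute/communication overlap is deliberately taken.

```python
# Back-of-envelope: how link bandwidth caps GPU utilization in
# data-parallel training. All figures below are illustrative assumptions.

def ring_allreduce_seconds(grad_bytes: float, n_gpus: int, link_gbps: float) -> float:
    """Standard ring all-reduce approximation: each GPU moves about
    2*(N-1)/N of the gradient volume over its link (latency ignored)."""
    volume = 2 * (n_gpus - 1) / n_gpus * grad_bytes
    return volume / (link_gbps * 1e9 / 8)   # Gbit/s -> bytes/s

grad_bytes = 70e9 * 2    # assumed 70B-parameter model, 2-byte (fp16) gradients
compute_s = 1.0          # assumed pure compute time per training step

for gbps in (100, 400, 800):
    comm_s = ring_allreduce_seconds(grad_bytes, n_gpus=1024, link_gbps=gbps)
    util = compute_s / (compute_s + comm_s)   # worst case: no overlap
    print(f"{gbps:>4} Gbit/s -> all-reduce {comm_s:5.1f} s, utilization {util:.0%}")
```

Real training frameworks overlap communication with backward-pass compute, so actual utilization is higher, but the direction of the pressure is the same: in this toy setup, moving from 100 to 800 Gbit/s links lifts worst-case utilization from a few percent to roughly a quarter, which is why ultra-high-speed Ethernet and low-latency switching sit at the center of AI network design.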

Undersea data centers emerge alongside next-generation cooling technologies
Cooling technology is also emerging as a core investment pillar for AI data centers. AI-optimized servers generate substantial heat due to their high computational density. Currently, cooling accounts for roughly 40% of total data center power consumption, and failure to manage it effectively can bring entire systems to a halt. Air cooling, adopted by about 70% of data centers worldwide, is widely viewed as approaching its practical limit at around 20 kilowatts per rack. To overcome this constraint, liquid cooling is rapidly gaining traction. This approach circulates coolant within servers or at the rack level to remove heat directly. Nvidia’s GB200 NVL72 supercomputer for AI factories, for example, is designed with liquid cooling to handle rack-level power densities exceeding 132 kilowatts.
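The scale of the thermal problem follows from the basic energy balance: heat removed equals mass flow times specific heat times temperature rise (Q = ṁ·c_p·ΔT). As a back-of-envelope check under assumed conditions (a water-like coolant and a 10 K inlet-to-outlet temperature rise, neither figure taken from any vendor specification), the sketch below estimates the flow a 132-kilowatt rack would need.

```python
# Back-of-envelope coolant flow for one high-density rack, from the
# energy balance Q = m_dot * c_p * delta_T. Assumptions are illustrative.

RACK_HEAT_W = 132_000    # rack-level heat load cited in the article, W
CP = 4186                # specific heat of water-like coolant, J/(kg*K)
DELTA_T = 10.0           # assumed coolant temperature rise, K
DENSITY = 1000.0         # coolant density, kg/m^3

mass_flow = RACK_HEAT_W / (CP * DELTA_T)           # kg/s
liters_per_min = mass_flow / DENSITY * 1000 * 60   # m^3/s -> L/min

print(f"~{mass_flow:.1f} kg/s, roughly {liters_per_min:.0f} L/min per rack")
# -> about 3.2 kg/s, or ~190 L/min of coolant circulating through one rack
```

The same balance explains the roughly 20-kilowatt ceiling for air cooling: air carries about three orders of magnitude less heat per unit volume than water, so beyond that point the required airflow becomes impractical.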
Immersion cooling is also attracting attention as an alternative. This method submerges entire servers in dielectric fluid, allowing heat generated by CPUs, GPUs, memory, and other components to be absorbed directly by the liquid, resulting in superior heat transfer efficiency. By eliminating the need for internal server fans and reducing reliance on large-scale air-conditioning systems, immersion cooling can significantly cut power consumption for cooling. It enables rack-level power densities to rise from tens of kilowatts to several hundred kilowatts, making it well suited for highly dense AI server environments. However, it requires changes to operational practices, as maintenance involves lifting servers out of the fluid rather than servicing them in place in conventional rack setups.
As AI data center demand accelerates, competition surrounding heating, ventilation, and air conditioning (HVAC) technologies is also intensifying. HVAC systems are essential for regulating heat, temperature, and humidity to ensure stable server operation. According to Global Market Insights, the global HVAC market, valued at $301.6 billion in 2024, is projected to grow to $545.4 billion by 2034—approaching the scale of the global smartphone market over the next decade.
More radical approaches to cooling are also re-emerging, including proposals to locate data centers underwater. Subsea data centers leverage the stable, low-temperature environment of seawater to improve cooling efficiency and reduce the need for onshore cooling infrastructure. They can also be designed to operate in forms that are physically separated from terrestrial power infrastructure. While large-scale commercialization has yet to materialize, the rapid rise in heat generation and power consumption driven by AI expansion is prompting renewed discussion of undersea data centers alongside immersion cooling as medium- to long-term technological options.