
[China Semiconductor] Beijing Grants Subsidies to Data Centers Using Domestic Chips — A Bid for Self-Sufficiency in the Efficiency Race

By Aoife Brennan

Aoife Brennan is a contributing writer for The Economy, with a focus on education, youth, and societal change. Based in Limerick, she holds a degree in political communication from Queen’s University Belfast. Aoife’s work draws connections between cultural narratives and public discourse in Europe and Asia.
China Offers Power Subsidies to AI Data Centers That Exclude Foreign Chips
AI Chipmakers Race to Boost Energy Efficiency to Stay Competitive
With Power Demand Soaring, Infrastructure and Government Support Become Key to Market Position

China has decided to provide electricity subsidies to data centers using domestically produced artificial intelligence (AI) chips. The move is seen as an attempt by the government to offset cost risks associated with the lower power efficiency of Chinese-made chips. It also reflects Beijing’s strategy to advance semiconductor self-sufficiency amid intensifying competition over energy efficiency — or performance per watt — as power demand from AI data centers continues to surge.

Beijing Offsets Power Risks with Subsidies

According to a Financial Times report on the 4th, provincial governments in Gansu, Guizhou, and Inner Mongolia have introduced subsidy programs that cut industrial electricity rates for AI data centers by half. However, facilities using U.S.-made chips such as those from Nvidia or AMD are excluded from eligibility. Under the new policy, data centers in these regions can now run AI computations at about 0.4 yuan ($0.056) per kilowatt-hour, roughly 30% cheaper than the average industrial power rate in China’s coastal provinces. Some local authorities also plan to offer cash incentives large enough to cover a full year of operating expenses for qualified facilities.
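The reported figures imply what the unsubsidized baseline must be: if 0.4 yuan/kWh is roughly 30% below the coastal average, that average is about 0.57 yuan/kWh. A minimal sketch of that arithmetic, using the article's numbers plus a purely illustrative annual consumption figure:

```python
# Back-of-envelope check of the reported rates. The 0.4 yuan/kWh figure
# and the "roughly 30% cheaper" claim come from the article; the coastal
# baseline is inferred from them, and the 100 GWh/year consumption figure
# is an illustrative assumption, not from the article.
subsidized_rate = 0.4                   # yuan per kWh under the new programs
coastal_rate = subsidized_rate / 0.7    # implied average coastal industrial rate

annual_kwh = 100_000_000                # hypothetical mid-size AI data center
saving_yuan = (coastal_rate - subsidized_rate) * annual_kwh

print(f"Implied coastal rate: {coastal_rate:.2f} yuan/kWh")
print(f"Annual saving vs. coastal rate: {saving_yuan / 1e6:.1f} million yuan")
```

At that hypothetical scale, the gap works out to roughly 17 million yuan a year, which makes the reported cash incentives covering a full year of operating expenses easier to place in context.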

The policy follows Beijing’s ban on Nvidia AI chips. In September, the Chinese government ordered domestic tech firms — including ByteDance and Alibaba — to halt testing and orders for Nvidia’s new low-end AI chip, the RTX Pro 6000D. The chip had been developed after the Trump administration banned exports of Nvidia’s China-specific H20 AI processors in April. As a result, Nvidia has effectively withdrawn from the Chinese market, and AI data centers in China have grown far more dependent on domestic chips. The government has since gone further, banning foreign AI chips altogether in new, state-funded data center projects. Sites with less than 30% construction progress have reportedly been instructed to remove already installed foreign chips or cancel related procurement plans.

The problem, however, lies in the low energy efficiency of Chinese-made chips compared with Nvidia’s products like the H20 or its latest Blackwell architecture. Following Nvidia’s exit from China, data centers powered by domestic chips from Huawei and Cambricon have seen electricity consumption rise 30–50%. The new power subsidy program is thus viewed as Beijing’s attempt to offset the side effects of its domestic chip–promotion policy while maintaining momentum toward semiconductor self-reliance.
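Taken together, the article's two figures suggest the subsidy more than compensates for the efficiency gap: a 50% rate cut outweighs a 30–50% rise in consumption. A rough sketch with normalized baseline values (the percentages are from the article; the baselines are illustrative):

```python
# Does a 50% rate cut offset a 30-50% rise in consumption from less
# efficient domestic chips? Percentages are from the article; the
# normalized baseline values are illustrative assumptions.
baseline_kwh = 1.0    # normalized annual consumption on Nvidia hardware
baseline_rate = 1.0   # normalized unsubsidized industrial rate

for extra_draw in (0.30, 0.50):
    bill = baseline_kwh * (1 + extra_draw) * (baseline_rate * 0.5)
    print(f"+{extra_draw:.0%} consumption at half rate -> "
          f"{bill:.2f}x the baseline power bill")
```

Even at the worst-case 50% extra draw, the net bill lands at 0.75× the unsubsidized baseline, which is consistent with reading the program as an offset for the domestic-chip mandate rather than a pure stimulus.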

The Efficiency War in the AI Chip Market

Some analysts view China’s power subsidy program as a “necessary condition” for achieving semiconductor self-sufficiency. The reasoning is that the global AI chip market is now dominated by a fierce competition over performance per watt — or energy efficiency — with major players rushing to release high-efficiency chips.

One prominent example is Tesla’s AI5 chip. At a recent annual shareholders meeting, CEO Elon Musk said, “To build functional humanoid robots, you need powerful AI chips that are both affordable and extremely energy efficient,” adding, “We believe the AI5 will deliver performance comparable to Nvidia’s Blackwell chips, at less than 10% of the cost and about one-third the power consumption.”

Google is also joining the race. On the 6th, the company announced plans to launch its 7th-generation Tensor Processing Unit (TPU), Ironwood, within weeks. First developed in 2013 to handle soaring deep-learning workloads, Google’s TPUs are custom ASICs for AI and machine learning, known for higher energy efficiency than Nvidia GPUs thanks to their workload-specific design. Ironwood targets large-scale model training, complex reinforcement learning (RL), and high-volume, low-latency inference, and reportedly delivers up to 10× the performance of the 5th-generation v5p and 4× that of the 6th-generation Trillium (v6e) released last year.

In South Korea, FuriosaAI, a leading AI semiconductor fabless company, is also emphasizing energy efficiency with its next-generation inference chip Renegade (RNGD). According to CEO Baek Jun-ho, who introduced the chip at Hot Chips 2024 at Stanford University, Renegade processes 12 queries per second (8-bit floating point) while consuming 185 watts when running a small language model (SLM, MLPerf GPT-J 6B benchmark). By comparison, Nvidia’s L40s achieves a similar 12.3 queries per second but requires 320 watts, highlighting Renegade’s superior power efficiency.
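The metric underlying that comparison is throughput per watt. Computing it from the figures quoted above makes the gap explicit:

```python
# Energy-efficiency comparison using the MLPerf GPT-J 6B (FP8) figures
# quoted in the article: (queries per second, watts) for each chip.
chips = {
    "FuriosaAI RNGD": (12.0, 185),
    "Nvidia L40S":    (12.3, 320),
}

for name, (qps, watts) in chips.items():
    # Normalize to queries per second per kilowatt for readability.
    print(f"{name}: {qps / watts * 1000:.1f} queries/sec per kW")
```

On these numbers, Renegade delivers roughly 64.9 queries/sec per kW against the L40S's 38.4, about a 1.7× advantage at near-identical throughput.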

Power Becomes the Decisive Factor in Data Center Expansion

The global race for energy-efficient AI chips is expected to intensify, driven by the explosive rise in computing demand as AI applications become mainstream. According to the International Energy Agency’s (IEA) report “Energy and AI,” global data center electricity consumption is projected to more than double — from 415 terawatt-hours (TWh) in 2024 to 945 TWh by 2030, equivalent to about 3% of the world’s total projected power use that year.
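The IEA projection can be sanity-checked directly: 415 TWh to 945 TWh over six years is a 2.28× increase, or about 14.7% compound annual growth.

```python
# Sanity check on the IEA "Energy and AI" projection quoted above:
# 415 TWh (2024) to 945 TWh (2030).
start_twh, end_twh, years = 415.0, 945.0, 6

factor = end_twh / start_twh           # total growth factor
cagr = factor ** (1 / years) - 1       # implied compound annual growth rate

print(f"Growth factor: {factor:.2f}x over {years} years")
print(f"Implied CAGR: {cagr:.1%}")
```

The "more than double" framing in the report checks out, and the implied annual growth rate gives a sense of how quickly grid planning has to move to keep pace.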

To secure a stable energy supply, major tech companies are investing directly in the power sector. Microsoft signed a power purchase agreement last year with nuclear plant operator Constellation, adding nuclear energy to its data center power mix after years of focusing on renewables. Google invested $250 million in nuclear fusion startup TAE Technologies, while Amazon purchased a $650 million data center campus adjacent to Talen Energy’s nuclear power plant in Pennsylvania.

Experts say future data center expansion will largely depend on energy infrastructure availability. One market analyst noted, “In the end, data centers will inevitably cluster in regions where governments build new power plants or offer electricity subsidies.” The analyst added, “China, facing short-term limits on power infrastructure expansion, appears to be using subsidies to ease the cost burden for data centers running on domestic AI chips.”
