
One year after the “DeepSeek shock,” the high-cost AI infrastructure formula is shaking as value-for-money takes center stage

By Niamh O’Sullivan

Niamh O’Sullivan is an Irish editor at The Economy, covering global policy and institutional reform. She studied sociology and European studies at Trinity College Dublin, and brings experience in translating academic and policy content for wider audiences. Her editorial work supports multilingual accessibility and contextual reporting.
China’s AI market share surges from 1.2% to nearly 30%
Emerging markets targeted with price competitiveness
Low-cost open strategy versus closed subscription models

Chinese artificial intelligence (AI) startup DeepSeek has rapidly reshaped the global competitive landscape, sharply expanding its share of worldwide usage just one year after releasing its open-source model. Contrary to the old formula in which massive capital investment and expensive infrastructure translated directly into competitiveness, users are increasingly choosing models based on cost-performance efficiency. As DeepSeek rapidly gains ground in emerging markets such as Africa, a wave of additional low-cost Chinese models has followed, shifting the center of AI competition away from technological spectacle and toward efficiency and pricing.

Development and operating costs below 10% of U.S. peers

On the 19th (local time), IT media outlet Digitimes cited data from API platform OpenRouter showing that Chinese open-source AI models accounted for nearly 30% of global usage by the end of last year. This represents an almost 25-fold jump from just 1.2% at the end of 2024. The outlet said that within a year of unveiling its large language model (LLM) R1, DeepSeek has pushed U.S.–China AI rivalry into a new phase. Founded in 2023, DeepSeek released R1 in January last year and has since upgraded the model seven times, rapidly expanding its presence.

The decisive factor behind DeepSeek’s rise has been R1’s exceptional cost-performance ratio. According to a paper co-authored by founder Liang Wenfeng, training R1 cost 294,000 dollars. Even when combined with the 5.576 million dollars spent developing its predecessor V3, total costs remain below 6 million dollars. Compared with the roughly 100 million dollars that OpenAI is reported to have spent training its base models, R1’s price advantage stands out clearly.
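As a quick sanity check on these figures, the disclosed totals can be tallied directly. The snippet below simply adds the two disclosed DeepSeek costs and compares them with the reported OpenAI estimate cited above; it is arithmetic on the article's numbers, not independent cost data:

```python
# Back-of-envelope comparison of disclosed training costs (USD),
# using the figures reported in the article.
r1_training = 294_000          # R1 training cost, as disclosed
v3_training = 5_576_000        # predecessor V3 development cost
deepseek_total = r1_training + v3_training

openai_reported = 100_000_000  # reported estimate for OpenAI base-model training

print(f"DeepSeek total: ${deepseek_total:,}")                   # $5,870,000
print(f"Cost ratio: {openai_reported / deepseek_total:.0f}x")   # 17x
```

On these numbers, the combined DeepSeek figure stays under the 6-million-dollar mark the article cites, roughly a seventeenth of the reported OpenAI outlay.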

Performance metrics have also held up. In DeepSeek’s own report, R1 achieved 79.8% accuracy on the AIME 2024 benchmark, surpassing OpenAI’s reasoning model o1 at 79.2%. In LiveBench coding evaluations, R1 recorded 65.9% accuracy. While these figures were enough to shift perceptions from a cost-performance standpoint, DeepSeek further accelerated adoption by operating R1 as an open-source model. By lowering access and usage barriers compared with the closed models favored by U.S. big tech firms, the company aimed to speed up experimentation and adoption among enterprises and developers.

Questions have been raised over development costs. Semiconductor and AI research firm SemiAnalysis suggested that DeepSeek’s disclosed figures may only include GPU rental costs for pretraining. When data collection and preprocessing, infrastructure construction and maintenance, and labor are fully accounted for, actual development costs could exceed 500 million dollars, more than 50 times the stated amount. Inconsistencies in DeepSeek’s cost estimates over time have further fueled skepticism.

Even so, markets have reacted more strongly to relative efficiency than to absolute cost figures. Rapid adoption by major Chinese platforms illustrates this. WeChat, often described as China’s equivalent of KakaoTalk, integrated R1 into its search functions and is reviewing further uses combining conversational data. Baidu also converted its Ernie model to open source last June and made Ernie Bot free to use. These moves suggest that the value-for-money competition sparked by DeepSeek is prompting a broader recalibration of pricing and disclosure strategies across the AI ecosystem.

“Price first” over security and brand

Chinese AI, led by DeepSeek, has spread quickly across the Middle East and Africa, securing early leadership in emerging markets. Microsoft noted in a recent “AI diffusion report” that DeepSeek is rapidly increasing its share in so-called Global South markets by leveraging accessibility and low pricing, estimating usage in Africa to be two to four times higher than in other regions. The report said Chinese AI firms, backed by large government subsidies, are closing the gap with U.S. competitors.

Regional data highlight this trend. According to Microsoft, DeepSeek’s market share reached 89% in China in the second half of last year, followed by strong positions in Russia, Belarus, and Iran, countries subject to U.S. economic sanctions. Notably, African countries with relatively low AI adoption rates, such as Ethiopia and Zimbabwe, also ranked high. This suggests DeepSeek’s presence has expanded fastest in regions where access to Western technology is limited or infrastructure remains underdeveloped.

Here again, pricing and accessibility are key. Microsoft said DeepSeek offered its chatbot for free in emerging markets to build an initial user base. In lower-income countries with limited IT budgets, paid subscription models face natural constraints, creating an opening that DeepSeek has exploited. The report concluded that DeepSeek’s rise in Africa shows that global AI adoption decisions hinge less on model quality than on accessibility and availability.

DeepSeek is reinforcing its value strategy by developing small language models (SLMs), lightweight systems that preserve performance while reducing scale. By adopting efficient training methods that require fewer resources, these models lower operating costs. On the 31st of last month, DeepSeek introduced a new training method called manifold constrained hyperconnection (mHC), saying it significantly improved performance with only a 6.7% increase in model resources. A successor model, R2, is widely expected to launch in the first half of this year.

Without clear technological superiority, competition cannot be avoided

Following DeepSeek, a wave of low-cost, high-performance Chinese AI firms has emerged. Moonshot AI is a prominent example. According to the Wall Street Journal, Moonshot AI secured hundreds of millions of dollars in new funding last November, valuing the company at 4 billion dollars. Founded in Beijing in 2023, Moonshot drew global attention with its LLM “Kimi K2 Thinking,” claiming training costs of 4.6 million dollars, lower than DeepSeek’s V3.

Kimi K2 Thinking also boasts strong performance metrics. On Humanity’s Last Exam (HLE), a benchmark measuring complex problem-solving ability, it achieved a 44.9% success rate, outperforming OpenAI’s GPT-5, Claude Sonnet 4.5, and DeepSeek’s V3. In web search capability tests, it scored 60.2 points, again ahead of its U.S. rivals. Like R1, it was trained using H800 GPUs, underscoring that competitive performance can be achieved without premium hardware.

Moonshot AI adopted a mixture-of-experts (MoE) architecture to cut costs, limiting active parameters to 32 billion despite a total parameter count of 1 trillion. This sharply reduced computing and chip usage while minimizing performance loss. An industry source said that while details of cost calculations remain unclear, the model’s price-performance is undeniable, adding that DeepSeek’s approach is increasingly seen not as a one-off success but as a reproducible formula.
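The parameter arithmetic behind mixture-of-experts can be illustrated with a toy router. The sketch below uses hypothetical, tiny dimensions (it is not Moonshot's actual architecture): a router picks the top-k experts per token, so only a fraction of the total expert parameters is active on any forward pass, which is how 32 billion active parameters can sit inside a 1-trillion-parameter model.

```python
import numpy as np

# Toy mixture-of-experts layer: a router selects top_k of n_experts
# per token, so only top_k/n_experts of expert parameters are active.
rng = np.random.default_rng(0)
n_experts, d_model, d_ff, top_k = 8, 16, 64, 2

# Each expert is a small two-layer feed-forward block.
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.02,
     rng.standard_normal((d_ff, d_model)) * 0.02)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route a single token vector through its top-k experts."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]                      # top-k expert ids
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()
    out = np.zeros_like(x)
    for w, i in zip(weights, top):
        w1, w2 = experts[i]
        out += w * (np.maximum(x @ w1, 0.0) @ w2)          # ReLU FFN expert
    return out

token = rng.standard_normal(d_model)
y = moe_forward(token)

total_params = n_experts * 2 * d_model * d_ff
active_params = top_k * 2 * d_model * d_ff
print(f"active/total expert parameters: {active_params}/{total_params}")
```

Here only 2 of 8 experts run per token, so a quarter of the expert parameters do the work of the full layer; Moonshot's reported 32B-of-1T ratio applies the same idea at vastly larger scale.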

Ultimately, competition in the post-DeepSeek era extends beyond simple price comparisons to emphasize technological evolution grounded in cost efficiency. Analysts say that without demonstrating clear technological superiority, U.S. firms will find it difficult to avoid competition altogether. As Chinese AI companies such as Moonshot AI, Zhipu AI, Minimax, and Baichuan continue releasing open models, subscription-based revenue structures built on heavy capital investment and closed systems face growing pressure.

In response, U.S. players have stepped up criticism focused on model imitation and censorship. Late last year, OpenAI publicly suggested DeepSeek may have used “model distillation” by querying its o1 model at scale and launched an investigation. An OpenAI spokesperson said signals of improper distillation had been detected and shared with the U.S. government. Critics, however, note that OpenAI itself has faced controversies over data usage without copyright holders’ consent, arguing that criticism of Chinese AI has evolved from technical ethics into a politically charged front in an intensifying competitive battle.
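Distillation in general works by querying a "teacher" model at scale and fitting a smaller "student" to the teacher's output distributions rather than to ground-truth labels. The snippet below is a generic toy illustration with linear models and synthetic data; it describes the technique itself, not what either company actually did:

```python
import numpy as np

# Generic knowledge-distillation sketch: fit a student to the soft
# labels (probability distributions) a teacher produces for queries.
rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Stand-in teacher: a fixed linear classifier we can only query.
d, n_classes, n_queries = 10, 3, 500
teacher_w = rng.standard_normal((d, n_classes))
queries = rng.standard_normal((n_queries, d))
soft_labels = softmax(queries @ teacher_w)   # the teacher's answers

# Student: trained by gradient descent on cross-entropy against the
# teacher's distributions instead of ground-truth labels.
student_w = np.zeros((d, n_classes))
for _ in range(300):
    probs = softmax(queries @ student_w)
    grad = queries.T @ (probs - soft_labels) / n_queries
    student_w -= 0.5 * grad

agreement = np.mean(
    softmax(queries @ student_w).argmax(1) == soft_labels.argmax(1)
)
print(f"student matches teacher on {agreement:.0%} of queries")
```

The point of contention in the OpenAI dispute is the querying step: harvesting a proprietary model's outputs as training targets, which most commercial API terms prohibit.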
