[Semiconductor Power Struggle] A CPU Card That Could Upend the GPU Era: Nvidia Shifts Its AI Strategy Toward an Intel Partnership
Growing Attention on CPUs as a Core Component of Data Processing
Nvidia’s Strategic Positioning Moves from ARM Toward Intel
Scenarios Emerge for Utilizing Intel’s Foundry Capabilities

Signs of change have begun to emerge in the competitive landscape of the artificial intelligence semiconductor market, which has long revolved around graphics processing units (GPUs). Nvidia has recently moved central processing units (CPUs) to the forefront, renewing attention on how core chips within AI infrastructure divide their roles. In particular, a large-scale collaboration agreement signed last year between Nvidia and Intel has begun to take shape, elevating data center architectures that combine GPUs and CPUs into a new axis of competition. With the potential use of Intel's foundry services also being discussed, analysts suggest Nvidia has entered a phase of reassessing its semiconductor supply chain strategy.
Attempts to Integrate CPU Ecosystems into GPU Infrastructure
On the 15th, Reuters reported that Nvidia plans to unveil a CPU strategy optimized for the era of agent-based AI during the keynote address of its annual developer conference, “GTC 2026,” opening the following day in San Jose, California. Nvidia GPUs have long served as the core chips in data centers, but the rise of “agent AI,” capable of making decisions and executing tasks autonomously, has significantly elevated the importance of CPUs. Dion Harris, head of AI infrastructure at Nvidia, stated that “as AI and agent workflows expand, CPUs have become a bottleneck,” adding that “a new growth opportunity has opened in the CPU market.”
The spread of agent AI has prompted a redesign of the workload distribution structure inside data centers. In the past, training massive AI models represented the central challenge. Today, however, inference—the process through which models perform real-world tasks—and workflow management have become equally critical. In environments where numerous agents operate simultaneously, the management layer responsible for coordinating execution sequences and data flows has also grown in importance. GPUs specialize in large-scale parallel computation, while CPUs handle general-purpose operations such as data movement, task scheduling, memory management, and service request processing.
This shift is expected to translate directly into increased demand for server-grade CPUs. Bank of America projects that the global CPU market will more than double, expanding from $27 billion last year to $60 billion by 2030. In practice, competition to secure CPUs has already intensified among major data center operators, and supply shortages are becoming increasingly visible. AMD and Intel have reportedly warned Chinese customers about potential CPU supply constraints, with delivery lead times for some products extending to as long as six months.
Into this emerging gap steps the strategic alliance between Nvidia and Intel. The two companies signed a $5 billion agreement covering server CPU supply and technical cooperation, formally launching a partnership in AI infrastructure. Intel’s key products in the collaboration are its sixth-generation Xeon processors, “Sierra Forest” and “Granite Rapids.” These models support multiple memory channels operating in parallel, a design considered well suited to agent AI environments that require large-scale data parallel processing. Industry observers have also focused on the possibility that Intel Xeon processors could be integrated into Nvidia’s ecosystem built around its high-speed interconnect technology, NVLink. NVLink is Nvidia’s proprietary interface that connects GPUs to one another and reportedly provides data transfer bandwidth several times higher than the widely used PCI Express standard.
Nvidia Moves to Strengthen Data Center Leadership
The partnership between Nvidia and Intel traces back to September of last year. At that time, the two companies announced plans to jointly develop chips for PCs and data centers. As part of the arrangement, Nvidia agreed to purchase Intel common shares at $23.28 per share. Once the transaction is completed, Nvidia is expected to hold roughly a 4 percent stake in Intel. The core objective of the collaboration was to combine the two companies’ technological ecosystems to jointly build AI server platforms for data centers. The companies declared that they would “connect system architectures using Nvidia NVLink and integrate Intel’s x86 CPU technology with Nvidia’s AI-accelerated computing.”
For the personal computing segment, the companies outlined a structure in which Intel would develop a system-on-chip integrating Nvidia GPU chiplets. Nvidia Chief Executive Officer Jensen Huang stated at a press conference that “Nvidia will become a very large customer of Intel CPUs and a major supplier of GPU chiplets for Intel chips.” The partnership has been interpreted as recognition that, even in an AI market increasingly centered on GPUs, the control and management functions provided by CPUs remain essential. It has also been seen as a pragmatic strategy to expand GPU-accelerated infrastructure while preserving the existing x86-based server ecosystem.
An actual equity transaction followed in December of the same year. Intel disclosed in a filing with the U.S. Securities and Exchange Commission that it had completed a private placement, issuing 214.77 million new common shares to Nvidia. Market commentary at the time frequently described the deal as “Nvidia throwing Intel a lifeline.” Intel had been struggling financially after falling behind in the AI semiconductor race, but Nvidia’s investment not only secured large-scale funding for the company but also opened the possibility for it to reenter the AI infrastructure ecosystem.
Another strategic shift emerged in February of this year. Nvidia sold all of its remaining shares in ARM, approximately 1.1 million shares, completely severing its capital ties with the company. The move was widely interpreted as closing the final link with ARM, which Nvidia had once attempted to acquire in a $40 billion deal. As a result, Nvidia’s rationale for cooperating with Intel’s x86 ecosystem became clearer. The global data center infrastructure remains overwhelmingly dependent on x86 server architectures. Over several decades, hyperscale operators have built their server environments on x86 platforms, and replacing them would involve substantial costs and operational risks. Nvidia therefore appears to have chosen a strategy that expands GPU-accelerated infrastructure while maintaining compatibility with existing Intel-based server environments.

Possibility of an AI Semiconductor Supply Chain Reshuffle
Initially, the scope of cooperation between the two companies did not include foundry agreements. As a result, Taiwan’s TSMC continues to manufacture Nvidia GPUs. Even so, industry observers increasingly expect Nvidia’s supply chain strategy to evolve over time. The expectation is that some product production could be shifted to Intel in order to reduce Nvidia’s effective reliance on a single manufacturer for GPU output. This outlook has gained further weight given that the U.S. government holds more than a 9 percent stake in Intel as its largest shareholder, while policy momentum has intensified to bring semiconductor manufacturing capacity back to the United States.
Nvidia’s next-generation GPU roadmap also suggests signs of change. In January, Taiwan-based IT outlet DigiTimes reported that Nvidia is considering producing certain semiconductor components required for its “Feynman” GPU architecture—scheduled for release in 2028—at Intel Foundry. Specifically, the plan under consideration involves manufacturing the I/O dies responsible for GPU-to-GPU communication and memory input-output functions using Intel’s processes, while continuing to produce the main GPU compute dies at TSMC. Under this approach, a single GPU product would rely on two different foundries simultaneously.
From a technological perspective, such a structure is widely regarded as a highly practical option. I/O dies generally require less advanced process scaling than compute cores, and modern packaging technologies commonly combine multiple chips into a single package. Nvidia’s current-generation GPU, “Blackwell,” already applies the N4P process—an enhanced version of TSMC’s 5-nanometer-class N5 node—to its I/O die. Analysts therefore consider it highly plausible that the Feynman architecture could integrate GPU dies produced by TSMC with I/O dies produced by Intel, linking them through Intel’s advanced packaging technology known as EMIB.
Whether such cooperation will ultimately materialize remains uncertain. Intel’s 18A process currently focuses more on internal product manufacturing than on external customers. Future PC products expected to be built on this process include the Core Ultra Series 3 (Panther Lake) and the next-generation Nova Lake, while server processors such as Xeon 6+ (Clearwater Forest) and Diamond Rapids have also been mentioned. Meanwhile, Intel’s next-generation 14A process has yet to demonstrate fully verified competitiveness in terms of cost, yield, and investment scale. In practice, Nvidia’s decision on whether to shift part of its contract manufacturing to Intel will ultimately depend on the maturity of Intel’s manufacturing processes and their cost competitiveness.