Big Tech’s Bet on AI Agents: Low Cognitive Ability and Flawed ‘Junkware’
Despite high hopes as the next growth engine, technological maturity remains low
Cognitive limitations mean it may take at least 10 years to resolve
Internal resistance, security, and accountability issues hinder practical adoption

Big Tech companies are touting artificial intelligence (AI) agents as the next growth engine, but industry evaluations suggest that the technology still falls far short of expectations. AI agents are supposed to act like interns or colleagues who work alongside humans, but experts say their cognitive deficiencies and lack of functional reliability mean it could take at least a decade to fix the problems. Meanwhile, internal resistance to AI adoption, controversies over decision-making responsibility, and security concerns are all creating major barriers to real-world use.
“AI agents are nothing but junk”
According to the IT industry on the 23rd (local time), Andrej Karpathy, CEO of Eureka Labs, said on the Dwarkesh Podcast the previous day, “AI agents have many cognitive flaws and don’t work properly,” adding, “It will take at least 10 years to fix these issues.” Karpathy, a co-founder of OpenAI, laid the foundation for the company’s deep learning and computer vision models when it was established in 2015. In 2017, he moved to Tesla, where he led neural network development for the company’s autonomous driving systems, earning a reputation as one of the leading experts in AI coding.
In the interview, Karpathy criticized the hype, saying, “The industry is trying to take too big a leap and exaggerates progress as if there’s been a major breakthrough, but the reality is quite the opposite. The current outcomes are nothing but AI slop, junkware.” He defined AI agents as “employees or interns who can work with you” and noted that the current generation of agents, such as Anthropic’s Claude and OpenAI’s Codex, has failed to reach the expected standard. “They lack continuous learning capabilities,” he said. “Even when you teach them something, they can’t remember it. Their cognitive ability is simply insufficient to function properly.”
Karpathy cited Universe, a failed GUI (graphical user interface) agent project from his time at OpenAI, as an example of how difficult AI agent development is. Launched in 2016, Universe was designed as a platform where AI agents could perform and learn from a variety of tasks in virtual environments, but it foundered on unstable real-time operation and limited visual recognition. Regarding artificial general intelligence (AGI), he added, “Elon Musk said there’s a 10% chance that Grok 5 could reach AGI, but realistically, it will take at least 10 years before we see true AGI.”

Little more than repackaged chatbots and RPA
Karpathy’s remarks contrast sharply with the growing enthusiasm among Big Tech companies for agentic AI as their next growth driver. Microsoft has projected that by 2028, companies will collectively operate 1.3 billion AI agents. Salesforce, too, has integrated ‘Agentforce’ into its CRM (Customer Relationship Management) platform, predicting that “every industry will reorganize around agents.” Likewise, leaders such as OpenAI CEO Sam Altman and Meta CEO Mark Zuckerberg have claimed that the AGI era will arrive within three to seven years.
However, the market response to AI agent models has been underwhelming. In a report released in June, research firm Gartner stated, “Many AI agent products are little more than rebranded chatbots or robotic process automation (RPA) tools,” warning that “so-called ‘agent washing’ is rampant.” The firm further projected that “by 2027, 40% of all agentic AI projects will be canceled, and by 2028, only 15% of corporate decisions will be made by AI agents without human intervention,” citing “low ROI and poor practical usability” as key reasons.
The limited real-world adoption of AI agents stems not only from technical immaturity but also from emotional resistance within organizations. Many employees view AI not as a productivity tool but as a threat to their jobs. This sentiment is particularly strong among workers handling routine research, analysis, or repetitive tasks, leading to passive or even covert rejection of AI agents. In some financial institutions, analysts have reportedly expressed strong opposition during early deployment stages.
Accountability and security risks also remain significant barriers to adoption. When AI agents make autonomous decisions or execute tasks that result in errors or losses, determining liability becomes difficult. In high-risk sectors such as healthcare and finance, unclear legal or ethical responsibility can make deployment impossible. Moreover, because AI agents often have extensive access to sensitive data, security concerns are mounting. There is a growing fear that AI agents could expose internal confidential information or execute unintended commands, leading to security breaches.
AI investment bubble could trigger financial instability
Excessive expectations for AI agents are fueling concerns of an AI investment bubble, posing potential risks to the financial system and the broader economy. Sam Woods, the Bank of England’s Deputy Governor for Prudential Regulation, warned, “There are many concerns, including the risk of an AI bubble. Uncontrolled new technologies could lead to financial instability.” James Egelhoff, chief economist at BNP Paribas, cautioned that “if the AI boom cools down, it could dampen consumption and drag down economic growth.”
OpenAI CEO Altman echoed these concerns in an interview with The Verge, saying, “We are currently in an AI bubble. Investors are overly excited about AI technologies.” He added, “AI itself is a fundamentally transformative technology—much like the Internet revolution—but as with the dot-com bubble of the 1990s, startups lacking strong technological foundations are being overvalued, and when the bubble bursts, losses could be severe.”
Still, some argue that concerns over an AI bubble are overstated. Stephen Jen, CEO of Eurizon SLJ Asset Management, wrote in a commentary for Reuters that “the AI bubble is only at base camp,” noting that “during the dot-com bubble, the average P/E ratio for tech stocks was 276, whereas current valuations are relatively stable.” He added, “Today’s Big Tech firms have solid profit bases and robust cash flows, making a sharp collapse like that of the past unlikely.”