[AI Bubble] Anthropic CEO Issues Stark AI Warning: Humanity Enters a ‘Technological Adolescence,’ a Grown Body with an Immature Mind
Humanity on the Cusp of the Superintelligence Era, Entering a Technological Adolescence Marked by an Imbalance Between Power and Responsibility
Limits of Data Bias That Teaches Hatred, Urgent Need for Civic Consciousness Beyond Technology
Capability Gaps Becoming Survival Gaps, New Polarization Deepening in the Labor Market

Dario Amodei, Chief Executive Officer of Anthropic, has warned of the imminent arrival of an artificial intelligence (AI) era that will surpass human intelligence, diagnosing humanity as having entered a state of “Technological Adolescence,” in which ethical maturity lags far behind overwhelming technological capability. With polarization already materializing in the form of AI systems that learn hate and a widening AI divide, he argues that humanity must move beyond vague fear and pass through a rite of passage defined by ethical design and robust social safety nets in order to advance toward a more mature civilization.
A Digital Nation of 50 Million Geniuses, Humanity Enters Technological Adolescence
On the 26th (local time), Amodei published a lengthy essay titled The Adolescence of Technology on his personal website, warning that AI surpassing human intelligence could emerge within the next one to two years. He likened the powerful AI expected to appear around 2027 to a digital nation inhabited by 50 million geniuses with Nobel Prize–level intellect. This metaphor reflects his projection that AI will think and collaborate at speeds 10 to 100 times faster than humans, overwhelming virtually every field.
This diagnosis marks a departure from the technological optimism he expressed in his 2024 essay Machines of Loving Grace. Amodei now argues that humanity has entered a phase of technological adolescence in which physical and technical power has grown formidable while mental and ethical maturity has failed to keep pace. He emphasized that the primary bottleneck is no longer technological performance itself, but the maturity of social and political systems, warning that institutions and norms are falling behind the speed of technological advancement.
Amodei categorized the potential turmoil of this technological adolescence into five distinct risks. The first is autonomy risk, in which AI develops goals misaligned with human values and escapes human control. The second is destructive misuse, such as terrorists exploiting AI to design biochemical weapons or conduct cyberattacks. The third is a security crisis in which authoritarian states leverage AI to build perfect surveillance systems, threatening democratic societies. The fourth and fifth are economic chaos driven by labor market collapse, and social disintegration caused by human norms failing to keep pace with technological acceleration. He cautioned that if humanity becomes blinded by enormous economic rewards and fails to install adequate safety brakes, civilization itself could be plunged into uncontrollable upheaval, and he voiced particular concern that the recent recklessness of some companies casts doubt on the industry’s capacity to manage future autonomy risks.
At the same time, he warned against allowing such concerns to devolve into ungrounded doomerism that fixates solely on catastrophic outcomes without concrete evidence. Fear without solutions, he argued, can foster fatalism that assumes AI will inevitably destroy humanity, or lead to impractical regulations that block even the benefits of technology. Instead, Amodei called for evidence-based policymaking that translates abstract fear into measurable risk. Governments, he said, should treat AI risks as national security threats and respond proactively, while companies must pursue interpretability research that makes internal reasoning processes transparent and implement rigorous safety training. If humanity successfully passes this rite of passage, he argued, it will not merely acquire better tools but emerge as a more mature civilization—entering technological adulthood. His notion of technological adolescence also implies that ethical vulnerabilities already visible in everyday life, such as bias and hate, are the first tests humanity must confront, even before the challenge of controlling superintelligent AI itself.
AI Trapped in the Quagmire of Bias, Users Must Become Ethical Producers
The lag in ethical maturity relative to rapid outward technological progress is starkly illustrated by past chatbot failures. In December 2020, the AI chatbot “Iruda,” launched with the friendly persona of a woman in her twenties, attracted 750,000 users in a short period. However, it was shut down after just over 20 days when it directed hateful expressions at social minorities and exposed the personal information of unspecified individuals. The failure stemmed from the deep learning model uncritically absorbing the discrimination and hate embedded in approximately 9.4 billion real-world conversational messages. As the data science maxim “Garbage In, Garbage Out” suggests, the prejudices in unfiltered data generated by anonymous users were projected directly into the AI’s behavior. Such missteps were hardly unique to Korea. In 2016, Microsoft’s chatbot Tay was taken offline within a day of launch after posting statements endorsing mass violence, and Amazon scrapped an AI recruiting system that discriminated against female applicants after learning from male-dominated hiring data.
These cases underscore the warning that AI can replicate human prejudice wholesale. While academic circles continue to emphasize AI’s potential to compensate for human cognitive limitations and support rational decision-making, such optimism holds only when algorithmic fairness is rigorously ensured. If moral standards remain ambiguous or particular value systems intervene, AI risks inadvertently legitimizing discrimination or distorting information. For this reason, experts stress that ethical standards must be embedded from the earliest stages of development rather than patched in through retroactive data correction. If companies fail to account for the social impact of their technology, profit motives can easily take precedence over safety, making it essential to strengthen internal verification systems and institutionalize ethical norms within corporate culture.
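To make the “Garbage In, Garbage Out” point concrete, the sketch below shows the kind of pre-training data curation the Iruda pipeline evidently lacked. It is a minimal illustration, not any company’s actual safeguard: the blocklist, the keyword-counting scorer, and the threshold are invented stand-ins, and a production pipeline would use a trained toxicity classifier plus human annotation queues.

```python
from typing import Iterable

BLOCKLIST = {"slur_a", "slur_b"}  # placeholder tokens, not a real lexicon


def toxicity_score(text: str) -> float:
    """Toy scorer: fraction of blocklisted tokens in the utterance."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in BLOCKLIST for t in tokens) / len(tokens)


def curate(corpus: Iterable[str], threshold: float = 0.1) -> list[str]:
    """Keep only low-toxicity utterances; divert the rest to human review."""
    kept, flagged = [], []
    for utterance in corpus:
        if toxicity_score(utterance) < threshold:
            kept.append(utterance)
        else:
            flagged.append(utterance)
    print(f"kept {len(kept)} utterances, flagged {len(flagged)} for human review")
    return kept


raw_logs = ["hello there", "you people are slur_a", "nice weather today"]
training_data = curate(raw_logs)  # only the clean utterances reach training
```

The design point is that suspect data is diverted to review before training, rather than corrected after a public failure.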
Ultimately, establishing AI morality is a complex societal challenge that cannot be resolved through developers’ code alone. While it is natural for toolmakers to design safety handles and dull sharp edges through technical measures, civic consciousness is equally essential to prevent users from exploiting these tools for crime or hate. Experts broadly agree that for AI navigating technological adolescence to grow in the right direction, society must cultivate environments where refined data can be learned and forge concrete social consensus capable of addressing accountability gaps and algorithmic opacity in step with the pace of technological advancement. Lee Soo-young, Professor Emeritus at KAIST, advised that “AI must be developed with socially agreed moral standards embedded within it,” adding that “users, too, must recognize their responsibility as another form of producer when engaging with AI.”

AI Divide Becomes Reality, Skill Gaps Translate into Productivity Gaps
As humanity traverses technological adolescence, the most tangible threat it faces is not a cinematic machine rebellion but the AI divide that stratifies the labor market according to tool-utilization capability. Whereas the digital divide of the 1990s was determined by hardware access to PCs or the internet, inequality in the generative AI era arises from “prompt literacy”: the ability to frame instructions so that a given tool efficiently produces usable results. More troubling, this divide is becoming entrenched along economic lines, as subscription fees for paid AI tools themselves function as barriers that widen utilization gaps between firms of different sizes.
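As a minimal illustration of what prompt literacy looks like in practice, the sketch below contrasts two ways of issuing the same request. The task and wording are invented, and send_to_model is a hypothetical stand-in for any chat-completion client; the point is the structure (role, input, constraints, output format) that separates efficient users from the rest.

```python
# Two ways to ask for the same deliverable. The structured version is what
# "prompt literacy" buys: explicit role, input, constraints, and output format.
vague_prompt = "Write about our sales."

structured_prompt = """\
Role: senior financial analyst.
Task: summarize Q3 sales for an executive audience.
Input: the CSV table appended below this prompt.
Output: five bullet points, each citing a specific figure; flag anomalies.
"""


def send_to_model(prompt: str) -> None:
    # Hypothetical stand-in for a real chat-completion call; here it just
    # shows how much instruction the model actually receives.
    print(f"--- sending {len(prompt.split())} words of instruction ---")
    print(prompt)


send_to_model(vague_prompt)
send_to_model(structured_prompt)
```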
Recent indicators dispel vague fears that AI will simply eliminate jobs, instead revealing a stark reality in which AI-skilled workers replace the unskilled. An OpenAI analysis of more than 100 companies found that even when using identical tools, the top 5 percent of employees utilized AI six times more than average workers, with the gap widening to 17 times in coding tasks.
Research by Harvard Business School showed that consultants using AI completed tasks 25.1 percent faster and handled 12.2 percent more assignments. Particularly notable is the concept of the “jagged frontier” highlighted by the HBS team—the idea that AI excels at certain tasks while remaining error-prone in others, creating an uneven boundary of capability. This underscores that the core competency lies in discerning which tasks should be delegated to AI and which require human judgment.
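One hedged way to operationalize the jagged frontier is to route tasks by measured reliability rather than intuition. In the sketch below, every task name and reliability figure is invented for illustration; real values would come from evaluating a model on one’s own task mix.

```python
# Delegate only tasks that sit inside the frontier; default everything
# unknown or low-reliability to human review. All numbers are illustrative.
MEASURED_RELIABILITY = {
    "summarize_report": 0.95,        # well inside the frontier
    "draft_boilerplate_code": 0.90,  # well inside the frontier
    "novel_market_analysis": 0.55,   # jagged edge: looks easy, often wrong
    "legal_sign_off": 0.30,          # outside the frontier
}


def route(task: str, delegate_threshold: float = 0.85) -> str:
    """Return 'AI' only when measured reliability clears the threshold."""
    score = MEASURED_RELIABILITY.get(task, 0.0)  # unknown tasks stay human
    return "AI" if score >= delegate_threshold else "human review"


for task in MEASURED_RELIABILITY:
    print(f"{task:>24} -> {route(task)}")
```

The design choice worth noting is the default: a task absent from the evaluation table goes to human review, since the frontier’s jaggedness means unfamiliar tasks cannot be assumed easy.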
Disparities in this advanced judgment capacity are expanding into macro-level labor market imbalances. Japan’s Ministry of Economy, Trade and Industry projects that by 2040, the country will face a shortage of 3.26 million workers skilled in AI and robotics, while office, sales, and service sectors will collectively see a surplus of roughly 3 million workers. The polarization between a small cohort capable of navigating AI’s boundaries and a majority rendered obsolete threatens to escalate into a national crisis.
Confronting this wave of inequality ultimately requires solutions at the societal level. Geoffrey Hinton, often called the “godfather of AI,” has warned that AI will boost productivity while concentrating wealth among a small elite, proposing the introduction of universal basic income. The International Monetary Fund and governments around the world are likewise accelerating efforts to expand large-scale retraining programs and strengthen social safety nets.
Humanity in 2026 stands before the rite of passage known as technological adolescence. Perhaps the element we must work hardest to equip AI with is not greater computational power, but human morality capable of safeguarding our future. As Stephen Hawking foresaw, the future increasingly resembles a race between the growing power of technology and the wisdom with which it is used. The values we choose and instill today will determine whether AI becomes humanity’s greatest partner in expanding intellectual capacity, or an uncontrollable catastrophe beyond our grasp.