Sam Altman’s Home Targeted With Molotov Cocktail as AI-Driven Layoffs Accelerate the Rise of ‘Neo-Luddism’
AI Anxiety Fuels Extremist Rage
Physical Resistance to AI Spreads, Rejecting Technological Change and Innovation
Backlash Echoes the Luddite Movement of the Industrial Revolution

Sam Altman, chief executive officer of OpenAI and a leading figure in generative artificial intelligence, has seen both his private residence and the company’s headquarters targeted by violent threats in quick succession. Social conflict over advances in AI technology is escalating into physical violence. The episode lays bare how dangerous the recently spreading wave of hostility toward new technology has become. The Luddite movement, which destroyed machines during the Industrial Revolution, is being reenacted in the 21st-century AI era.
Man in His 20s Threatens OpenAI Headquarters After Arson Attack on Altman Home
According to The Wall Street Journal on April 12, San Francisco police arrested 20-year-old Daniel Alejandro Moreno-Gama on April 10 for hurling a Molotov cocktail at Altman’s home. At around 3:45 a.m. that day, Moreno-Gama allegedly threw a bottle containing burning cloth at the front gate of Altman’s residence in San Francisco, California. Part of the exterior gate was scorched, though no injuries were reported. By the time firefighters arrived in response to the report, security personnel at the residence had already extinguished the blaze, and the suspect had fled.
The incident did not end there; it extended to OpenAI’s headquarters at 1455 Third St. Roughly an hour later, at around 5 a.m., police responding to a report of a man threatening to burn down the building at OpenAI’s San Francisco headquarters, about 5 kilometers away, confirmed that he was the same individual captured on CCTV in connection with the Molotov cocktail attack and arrested him on the spot.
On the afternoon of the incident, Altman posted on his blog a photograph showing his same-sex spouse and young son. “I love them more than anything,” he wrote. “I believe images have power. I normally guard my privacy closely, but I’m sharing this in the hope it might persuade the next person thinking about throwing a Molotov cocktail at our house.” He added, “A few days ago, a sensational article about me was published,” and said that “someone told me that, at a time when anxiety about artificial intelligence was intensifying, that article made me and my family more vulnerable.” The remarks were widely interpreted as suggesting that the attack was fueled by hostile public sentiment toward him.
Earlier, U.S. weekly magazine The New Yorker had published an article questioning Altman’s sincerity. Altman said, “I’m sorry to the people I’ve hurt, and I wish I had learned more, sooner.” He said his tendency to avoid conflict had caused both him and the company significant pain. “Many companies have said they would change the world, but we actually did it,” he said. “Much of the criticism directed at our industry comes from genuine concern about the incredibly large risks posed by this technology,” adding that “those concerns are valid, and we welcome good-faith criticism and discussion.” He added, however, that “while having those debates, we should try to reduce explosive confrontations, both figuratively and literally.”
AI Upheaval Reshapes the Labor Market
The threats against OpenAI are not unprecedented. The company has long been a target for activists warning about the dangers AI poses to labor. In November of last year, an anti-AI activist went to OpenAI’s headquarters and threatened to “kill people,” prompting a temporary office closure. In February of last year, anti-technology groups including “Stop AI” occupied OpenAI’s headquarters. Stop AI called for a halt to AI research, arguing that “the moment AI surpasses humans, it will become an uncontrollable threat.”
The industry widely views the series of incidents as a modern mutation of the Luddite movement brought on by the spread of AI technology. It is a neo-Luddite movement in which people oppose or reject the use of new technologies such as AI on the grounds that they threaten human livelihoods. Historically, the process of embedding technology into society has never been smooth. Each time technology has reshaped society, those unable to bear the speed of change have responded in the language of anxiety. The Luddite movement during the early 19th-century Industrial Revolution is a prime example. At the time, British textile workers, whose livelihoods were threatened by the introduction of automated machinery, resisted by smashing the machines. On the surface it appeared to be an anti-machine campaign, but at its core it was a revolt against a capitalist-centered distribution of gains and mounting employment insecurity, more than against technology itself.
That is hardly surprising, given that the data already show in hard numbers that AI is no longer a threat of the future. In the United States, where hiring and firing are highly flexible, white-collar jobs are disappearing first. According to employment consulting firm Challenger, Gray & Christmas, the number of layoffs announced in 2025 for reasons tied to AI has already exceeded 50,000. That marks a more than twelvefold surge from two years earlier.
In practice, global big tech companies are pressing ahead with large-scale workforce reductions under the banner of “AI transition.” Amazon, on top of the 14,000 layoffs it announced last fall, recently said it would cut an additional 16,000 office workers. Amazon CEO Andy Jassy said, “The introduction of generative AI and agents will change the way work is done,” adding that “over the next few years, we expect our total corporate workforce to decline.” Other companies are following a similar path. Image-sharing platform Pinterest recently cut about 15% of its workforce, citing “AI-centered resource reallocation” as the reason. HP CEO Enrique Lores also told investors that embedding AI across the company could create an opportunity to eliminate as many as 6,000 jobs over the next few years. Chief executives at major companies including Salesforce and JPMorgan Chase have likewise issued bleak forecasts that a substantial share of their own white-collar roles will soon disappear.

What Must Change Will Ultimately Change
The employment risks triggered by AI can be broadly organized along five axes. First, the scope of automation is clearly expanding beyond simple repetitive tasks into the full spectrum of office and professional work, including data processing, advertising, education, counseling, accounting and litigation. Unlike past manufacturing-centered automation, AI directly replaces cognitive functions such as analysis and planning, putting pressure on job security even for highly educated, high-income workers.
Second, as AI’s coding capabilities rapidly advance, demand for software developers and data analysts is entering a gradual contraction. Third, the gap between the few who possess the technology and the many who do not is likely to widen further; as productivity differentials between firms and individuals accumulate, the imbalance in assets and income is also intensifying. Fourth, as foundational tasks traditionally handled by entry-level workers are replaced at speed, the pathways through which young people enter the labor market are themselves coming under threat. Finally, the relative value system among mental, physical and emotional labor is being restructured, adding pressure for change across the hierarchy of labor and the broader compensation framework.
One notable point is that past automation displaced physical labor, leaving the professions relatively sheltered. That is no longer the case. Law, finance, design and programming have all moved into AI’s sphere of influence. Blue-collar labor is not safe either. Advances in physical AI and robotics are rapidly reconfiguring manufacturing, logistics and services. From Hyundai Motor’s humanoid robot Atlas to Tesla’s Optimus and the flood of humanoid robots emerging from China, the evidence now shows that AI has moved beyond the screen and acquired a body. Nvidia, the world’s largest AI semiconductor company, recently added fuel to that shift by declaring the dawn of the physical AI era at GTC 2026, further accelerating the AI substitution of manual labor.
The strong resistance mounted by Hyundai Motor’s labor union in January against the introduction of Atlas reflected recognition that robots had moved beyond the status of human support tools and entered the same weight class as direct job competitors. The same logic applied in San Francisco, where autonomous robotaxis have been fully permitted to operate and civic groups have immobilized them by placing traffic cones on the vehicles’ hoods.
There are also cases in which AI adoption is being deliberately obstructed. According to a report on the state of AI adoption in enterprises released on April 7 by AI agent company Writer and research institution Workplace Intelligence, a survey of 2,400 knowledge workers in the United States, Britain and Europe found that 29% of all employees said they had sabotaged their company’s AI strategy at least once. Among Generation Z, defined as those born between 1997 and 2012, the figure rises to 44%. Among employees who admitted obstructive behavior, 30% cited “fear of losing their jobs” as the reason.
The forms of sabotage were varied. Some took passive forms, such as inputting internal confidential data into unapproved public AI tools or simply refusing to use AI. Others took active forms, including deliberately submitting low-quality outputs or manipulating performance evaluations to make AI appear inefficient. Against that backdrop, resistance is expected to intensify further if “agentic scaling,” in which AI organizes itself autonomously, gathers momentum as well.
Experts agree that the only path forward is for humanity to establish ways to govern, manage and coexist with AI. They also argue that societies must find ways to retrain those displaced from jobs replaced by AI, connect them to new work and guarantee a certain level of income so that their quality of life does not deteriorate. Major economies are already discussing proposals such as universal basic income for all citizens, financed through capital taxes levied on wealth created by AI. In effect, AI, whose rise was accelerated by the free-market principle of “maximizing efficiency,” is now paradoxically forcing the center of social debate back toward the question of distributive justice.