Why Artificial Intelligence May Lead to Structural Job Loss
- AI concentrates productivity in a few AI-powered workers, making many roles redundant.
- The idea of broad "AI readiness" ignores how labor markets reward extreme productivity.
- Without strong social policies, AI-driven growth may come with widespread job loss.

The discussion is framed by a worrying statistic: current estimates suggest that approximately 25% of jobs worldwide are at risk due to advances in generative AI. A smaller fraction, about 3.3%, is highly exposed, meaning automation could fundamentally change what the job involves (Gmyrek et al., 2025). This is not a future problem; it is a current reality for the job market. "AI readiness," meaning education and competency development, is commonly offered as the solution that will help people adapt. But AI differs from earlier technologies in a crucial way. While previous technological developments improved individual output, AI concentrates capability to the point where a single person, armed with sophisticated models and tools, can perform as much work as a large team. This leads to employment losses. If policymakers treat AI as simply a skills gap that job training can close, they risk missing the larger change: a job market that heavily rewards those who are expert in using AI tools and pushes others out.
Changing the conversation: from readiness to employment cuts
The common view treats AI like something that needs to be spread around: people need to be ready for it so they can work with machines. This appears useful and is appealing to politicians. It implies that technology will diffuse widely, that companies will gradually adopt tools to assist humans without significantly affecting job numbers, and that training can keep up. Yet evidence from 2023 onward shows something different. Even though generative models and automation increase how much workers produce, these benefits aren't shared equally. Studies indicate diverse outcomes, with only slight average gains masking large gaps between beginners and skilled workers (Gmyrek et al., 2023). Companies don't make hiring decisions based on average numbers; they look at what gives them an edge over their rivals. This creates a situation in which some workers profit disproportionately from AI assistance, and companies then rationally concentrate jobs around the few who are best at using the technology. The result is job losses: many roles become redundant because the same work can be done by fewer workers who are skilled with AI. The implication for decision-makers is that improving readiness helps people individually but doesn't stop the job market from rewarding maximum output.
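The concentration logic above can be made concrete with a toy calculation. All the numbers below are hypothetical assumptions chosen for illustration, not figures from the cited studies: the point is only that when AI multiplies one worker's output, the headcount needed to meet fixed demand falls sharply.

```python
import math


def headcount_needed(total_demand: float, output_per_worker: float) -> int:
    """Smallest whole number of workers whose combined output meets demand."""
    return math.ceil(total_demand / output_per_worker)


# Hypothetical firm (all figures are illustrative assumptions):
demand = 1000.0          # units of work the firm must deliver
baseline_output = 10.0   # units per worker without AI assistance
ai_multiplier = 4.0      # assumed output boost for an AI-fluent worker

before = headcount_needed(demand, baseline_output)
after = headcount_needed(demand, baseline_output * ai_multiplier)

print(f"Headcount before AI: {before}")            # 100 workers
print(f"Headcount with AI concentration: {after}")  # 25 workers
print(f"Roles made redundant: {before - after}")    # 75 workers
```

Under these assumed numbers, demand stays constant but three quarters of the roles disappear, which is why firm-level productivity gains can coexist with aggregate job loss.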
The AI readiness idea can be dangerous because it pays too little attention to how gains are distributed and shared. Those in authority frequently focus on training, retraining, and AI literacy programs. Although important, these won't stop employment declines or unequal pay if the main economic pressure comes from concentrated increases in output. Looking to history may not help here. Past technological improvements upgraded jobs and created new opportunities, helping displaced workers find employment, but this happened in a setting where humans and machines worked together across many roles. Today, AI systems can take over entire sets of tasks that used to be spread across multiple employees. Thus, the right approach shifts from teaching people to use AI to managing an economy in which AI produces a winner-take-all production system. This reframing changes the priorities: reducing job losses, ensuring stable income support, drafting new rules for business and ownership, and taking serious action to spread wealth rather than relying on quick-fix programs.
What the research says about productivity, exposure, and concentration
Data from actual workplaces shows how AI affects output and allocation. A large study of AI assistants in customer service found typical output increased by about 14–15% after implementation (Brynjolfsson et al., 2023). The gains, however, were not shared equally: beginners improved more than experienced workers, and top performers sometimes didn't improve at all. This affects how companies hire and how they design roles. Companies that can scale top performers' practices through AI can reduce their need for mid-level staff, because AI spreads the best methods of strong employees through its tools. This devalues mid-level expertise.
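The distributional point can be sketched numerically. The group shares and gains below are assumptions chosen to illustrate the pattern reported by Brynjolfsson et al. (2023), not their exact figures: a modest average gain can coexist with most of the workforce seeing little benefit.

```python
# Hypothetical workforce (shares and gains are illustrative assumptions):
# each group maps to (share of workforce, individual productivity gain).
groups = {
    "novices": (0.4, 0.35),    # large gains from AI assistance
    "mid-level": (0.4, 0.05),  # small gains
    "experts": (0.2, 0.0),     # little or no gain
}

# The headline "average gain" blends very different experiences.
average_gain = sum(share * gain for share, gain in groups.values())

print(f"Average gain: {average_gain:.0%}")  # looks like a broad lift,
# yet under these assumptions 60% of workers gained 5% or less, while
# AI spread experts' methods to novices, compressing the value of
# mid-level expertise.
```

The headline number hides exactly the heterogeneity that drives hiring decisions, which is why firms looking for a competitive edge respond to the distribution, not the mean.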
Looking at job-exposure numbers provides more detail. Research from the OECD indicates that many of the jobs with the highest levels of exposure are white-collar roles that require higher education (OECD, 2023). This isn't just routine manual work, since generative systems are getting better at cognitive tasks. It runs counter to the readiness narrative, which implies that lower-skilled workers are most at risk. Also, the International Labour Organization's 2025 study finds that one in four jobs worldwide could be transformed by generative AI, and that about 3.3% of jobs are highly exposed (Gmyrek et al., 2025). These numbers show that this isn't limited to a few areas; the changes are impacting important parts of the economy.

Usage data from platforms and vendors shows that language models and their interfaces are still under-used in the workplace, but the gap is shrinking rapidly as companies embed these models in workflows and interfaces automate processes (OECD, 2024). New measures of exposure show that jobs with more actual use of AI are growing more slowly in hiring estimates (Artificial intelligence adoption and its impact on jobs, 2024), and hiring may favor younger workers skilled in technology (Son, 2025). Diffusion could lead to hiring freezes or shifts in who is hired before layoffs appear. The data imply a market that will reallocate output to AI-supported roles, leading to fewer, much more productive positions (OECD, 2023).
Realistic policy: why a Universal Basic Adjustment benefit matters
Accepting the fact that there will be job losses changes the policy options. If only a few workers reach superhuman levels of output, then income security must be a standard condition for maintaining economic stability, not just a short-term bridge for workers moving between jobs. The Brookings-style proposals for a Universal Basic Adjustment Benefit are good but aren't strong enough in many cases. An effective program should be long-term, widespread, and adjusted. Benefits should last as long as it takes displaced workers to secure stable options or new roles in society, which could take years; short-term support programs create stigma and difficult administration. A broad base with clear eligibility reduces administrative resistance and increases speed. Benefits need to track local pay and inflation levels to maintain buying power as economies change. Treating this support as optional or short-term could leave many without help.

Complementary policies must be expanded. Tax and ownership rules that take a share of AI-created profits, whether via capital gains or a tax on output produced by replacing humans, can fund adjustment systems while reducing the incentive to automate solely to capture revenue. Public funding should emphasize two areas that market incentives support poorly. The first is public AI infrastructure accessible to local firms and cooperatives, which expands the ability to build and deploy tools. The second is purpose-driven sectors where human input matters, such as education, care, and cultural work; public funding can keep wages stable in these areas even as the private sector consolidates. These policies shift the focus from individual training to a society-level approach that shares profits and mitigates losses.
Addressing the criticisms with supportive evidence
Some say that AI creates more jobs than it destroys. This has some truth, particularly in certain datasets and time frames: AI spending creates new technical roles and services. The quality and quantity of these jobs, however, must be judged against the scale of displacement. Research shows output gains with unclear effects on total employment (The Effects of Generative AI on Productivity, Innovation, and Entrepreneurship, n.d.). The danger is that these new jobs could be concentrated in certain geographic areas or skill sets, and that many displaced workers will face long periods with few opportunities. The policy question isn't whether AI creates any jobs but whether the new jobs are numerous enough and available to those who lose work. The concentration of exposure across certain jobs and countries suggests that the new jobs probably won't employ all the displaced workers.
There are those who claim that training courses can scale quickly. The evidence here is mixed. Workers most at risk from automation aren't always the ones participating in training; in a number of countries, those who need training most participate least (Artificial Intelligence and the Labour Market in Japan, n.d.). Even excellent training can't change the fact that employers will pick a tool-supported worker who can deliver more output over a team of newly trained workers who are less productive. Therefore, training needs to be combined with demand-side measures like incentives for fair hiring, support for diverse ownership models, and public procurement that spreads AI gains.
Others fear that regulation will restrict innovation. Good regulation doesn't need to be a tax on innovation. It can set rules for AI use and transparency, mandate reporting on the human roles AI replaces, and create mechanisms to reclaim value when public data or community knowledge improves private models. Such rules can preserve incentives for innovation while guaranteeing that its benefits aren't concentrated without social gain. A handful of research groups have shown that this kind of measurement and documentation can be done with little friction.
A plan for an economy dealing with job losses
The idea of AI readiness rings hollow if it means easy retraining and light support will preserve jobs. Data from studies, international statistics, and tool usage show that AI isn't just something that increases output across workers. It centralizes capability in ways that make roles unnecessary. We should stop pretending that readiness solves the issue. Instead, policy must accept that change is in progress and that gains will be concentrated unless they are deliberately shared. Social support must be upgraded from temporary help to standard infrastructure: legally reliable and well-funded adjustment benefits, new tax and ownership rules that capture automation gains, publicly accessible AI infrastructure, and policies that sustain demand for human services. If leaders grasp these facts, they can shape a future in which technology raises living standards across the board rather than letting a small group dominate. If they do not, the dislocation will be harder and more expensive to fix.
References
Brynjolfsson, E., Li, D. & Raymond, L. (2023). Generative AI at Work. arXiv preprint arXiv:2304.11771.
Gmyrek, P., Berg, J. & Bescond, D. (2023). Generative AI and Jobs: A Global Analysis of Potential Effects on Job Quantity and Quality. ILO Working Paper No. 96. International Labour Organization.
Gmyrek, P., Berg, J., Kamiński, K., Konopczyński, F., Ładna, A., Nafradi, B., Rosłaniec, K. & Troszyński, M. (2025). Generative AI and Jobs: A Refined Global Index of Occupational Exposure. International Labour Organization.
Malatji, M. (2026). Bridging the AI divide in sub-Saharan Africa: Challenges and opportunities for inclusivity. arXiv preprint.
Muro, M. (2024). How the U.S. can maintain its edge in AI without leaving workers behind. Brookings Institution.
Organisation for Economic Co-operation and Development (OECD) (2023). OECD Employment Outlook 2023.
Organisation for Economic Co-operation and Development (OECD) (2023). OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market.
Organisation for Economic Co-operation and Development (OECD) (2024). Generative AI and the SME Workforce.
Susskind, D. (2025). Universal basic income as a new social contract for the age of AI. LSE Business Review.