Structural Labor Redundancy and the Future of Work
The AI readiness myth hides a future where only a few AI-powered workers dominate productivity. AI may create structural labor redundancy, replacing large segments of human work. Policy must shift from training workers to sharing AI-driven productivity gains.

Across developed economies, a worrying statistic is frequently downplayed amid optimism: tools capable of automating more than a quarter of current work hours already exist (OECD, 2023), and their practical application is expected to surge in the coming years. This is not speculation; it is an assessment of technical potential, and it should change how we judge preparedness. The term AI preparedness has quietly evolved from a policy goal into an exclusive screening process, one that only a small portion of the workforce will pass at a meaningful level. We therefore need to stop treating preparedness as uniform and recognize automation as a structural shift: not a gradual change in tasks but a transfer of entire job categories toward a new group of highly skilled operators, leaving many behind. The current policy debate, which assumes widespread and easily reversible training, is insufficient. Educators and legislators face a major challenge: how do we ensure social stability and productive engagement when a few AI-driven workers can replace significantly more labor?
Shifting from Preparedness to Structural Redundancy: Reframing the Issue Around Scale and Selection
Current discussions about workforce readiness frequently rest on the idea that improving skills and expanding training will ease workers into new roles. This frames technology adoption as an education problem, when the real challenge is structural redundancy: AI can boost the productivity of a small group so dramatically that traditional training cannot offset the gap. If one AI-empowered operator can do the work of a hundred, companies will favor that operator; this is not hyperbole but the arithmetic of scale. Technical analysis shows that widespread automation is now feasible and carries considerable economic value. The McKinsey Global Institute notes that existing technology could transform a majority of current work hours, rapidly concentrating value among early adopters.
Why does this matter for educational policy? Because the usual approach of raising the floor (more certificates, short courses, basic computer skills) assumes that gains at the top will not outweigh gains in the middle. But when tools multiply output, returns become disproportionately concentrated at the top. The job market will not reward almost-ready workers; it will reward those who can manage AI systems at scale. This selection process favors concentrated talent, companies with substantial capital, and institutions that can provide access to advanced computing and data. The result is not temporary job loss but structural redundancy: entire job types becoming economically irrelevant while income and influence gather within a small group. The fiscal implication is stark: larger re-training budgets, however generous, will not close this gap unless they are paired with deeper structural changes.
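The concentration arithmetic behind this argument can be sketched with purely illustrative, made-up numbers: if AI multiplies the output of a small minority of workers, the share of total output (and thus bargaining power) captured by that minority grows far faster than its headcount.

```python
# Illustrative sketch with invented numbers: how a productivity multiplier
# for a small "AI-empowered" minority concentrates the share of total output.

def output_share(n_workers: int, ai_fraction: float, multiplier: float) -> float:
    """Share of total output produced by the AI-empowered minority,
    assuming everyone else produces 1 unit each and the minority
    produces `multiplier` units each."""
    n_ai = n_workers * ai_fraction
    n_rest = n_workers - n_ai
    ai_output = n_ai * multiplier
    return ai_output / (ai_output + n_rest)

# 5% of a 1,000-person workforce with a 20x multiplier already produces
# a majority of all output:
share = output_share(1000, 0.05, 20)
print(f"{share:.0%} of output from 5% of workers")  # -> 51% of output
```

The numbers (5%, 20x) are assumptions chosen for illustration, not empirical estimates; the point is only that the relationship between multiplier and output share is steeply nonlinear.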

Evidence on Concentration and Speed, and the Limitations of Standard Preparation
We need evidence, not just rhetoric, and a clear method: the claims below draw only on recent studies of task exposure, firm adoption, and labour-market signals. The OECD's assessment of language-based AI found that about a third of jobs could be fully transformed at scale, with another third partially transformed, indicating a wide range of exposure (OECD, 2023). OpenAI's occupational mapping found a strong link between GPT capabilities and many office jobs, suggesting that exposure is not limited to routine manual tasks but extends deep into white-collar work (Eloundou et al., 2023). Meanwhile, firm surveys and OECD adoption trackers show that actual usage is very uneven: in 2023, AI adoption averaged in the single digits across companies but was much higher among IT firms and large companies, revealing a gap between the few adopters and the many (OECD, 2026).
This pattern, when examined alongside adoption data, shows why the usual policy solutions, such as expanding training, speeding up apprenticeships, and paying for certifications, will not be enough unless they address three difficult facts. First, access to computing power and big datasets creates barriers that training alone can't overcome. Second, the added productivity from tech adoption shifts market power to a few companies and workers. Third, the speed at which advantages build means that slower regions and industries will lose high-paying jobs and talent in ways that can be hard to fix.
Policy Changes: From General Readiness Programs to Targeted Inclusion Strategies
If we accept that AI creates structural redundancy rather than mere task change, policy has to adapt. Training should not aim at uniform readiness for everyone. Instead, it should focus on four areas that shift market dynamics and broaden access to high-value capacities.
First, we need to make computing power and data public resources. If a few companies control them, they control productivity gains. Public money for shared computing platforms, local data trusts, and research computing can lower costs for smaller companies, local groups, and schools. This approach differs from standard training: it builds the physical and information base for broad access.
Second, we must change how we certify skills and organize work. The focus should be on team cooperation. Instead of pitting people against machines, we should think of jobs as human–AI teams. Certify skills in supervising, checking, and understanding AI outputs. This moves the focus from replacement to team improvement. It gives mid-skill workers better paths to stable jobs that require human decision-making—something AI struggles to match.
Third, we must strengthen social security and find new ways to share income in places affected by structural redundancy. If automation affects an entire local workforce, small training programs will not be enough. We require policies that stabilize income during these shifts and encourage new local investments. This includes business grants and public jobs linked to training programs.
Fourth, we need competition rules for digital resources. Limits on who controls data and computing power encourage a more equitable distribution of gains. Support open-source models, interoperable standards, and straightforward data-sharing. These steps prevent a winner-take-all market that favors a few skilled people with exclusive technology. The World Economic Forum and other bodies often discuss training needs, but competition policy must also ensure that income, not just training opportunity, spreads more fairly (World Economic Forum, 2023).

Some will object that large public investments in computing, or rules favoring open-source models, risk subsidizing harmful uses or dulling incentives to innovate. There are two answers. First, democratic oversight and licensing can tie public computing funds to safety reporting, inspections, and responsible-use conditions. Second, the choice is not between pure private innovation and public restriction; it is between concentrated private control of enormous productivity gains and a mixed system in which public infrastructure lowers costs while preserving incentives for responsible innovation. The alternative, letting adoption proceed unregulated and under the control of incumbent giants, guarantees concentrated gains and struggling communities.
The technical ability to automate much of today's work is a measurable reality already reshaping markets and wages. The idea that general readiness programs are sufficient distracts from the core issue: redundancy driven by scale and concentration. If a few AI-supported workers can outperform large teams, standard strategies will not protect communities or drive broad growth. We need to move from a skills-only focus to a strategy that builds shared resources, redesigns jobs as human–AI teams, and uses competition and social policy to ensure productivity gains are widely shared. Policymakers must prioritize distributing the tools and profits of this production model to turn automation's impact into shared benefits.
References
Buhl, J. (2026). US vs China: Who is really winning the global AI race? PoliticsUK.
Eloundou, T., Manning, S., Mishkin, P. & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv preprint.
Hawkins, A. (2026). China lags behind US at AI frontier but could quickly catch up, say experts. The Guardian.
McKinsey Global Institute (2026). McKinsey Global Institute: 2025 in charts. McKinsey & Company.
Muro, M. (2026). How the U.S. can maintain its edge in AI without leaving workers behind. Brookings Institution.
OECD (2023). Artificial intelligence and jobs: No signs of slowing labour demand (yet). OECD Employment Outlook 2023. Organisation for Economic Co-operation and Development.
OECD (2023). OECD Employment Outlook 2023. Organisation for Economic Co-operation and Development.
OECD (2023). Skill needs and policies in the age of artificial intelligence. In: OECD Employment Outlook 2023. Organisation for Economic Co-operation and Development.
OECD (2024). The impact of artificial intelligence on productivity, distribution and growth. Organisation for Economic Co-operation and Development.
OECD (2026). AI use by individuals surges across the OECD as adoption by firms continues to expand. Organisation for Economic Co-operation and Development.
PwC (2024). AI Jobs Barometer. PricewaterhouseCoopers.
World Economic Forum (2023). The Future of Jobs Report 2023. World Economic Forum.