
The Productivity of the Super-Worker: Why “AI Readiness” Is a Promise, Not a Plan

By The Economy Editorial Board
The Economy Editorial Board oversees the analytical direction, research standards, and thematic focus of The Economy. The Board is responsible for maintaining methodological rigor, editorial independence, and clarity in the publication’s coverage of global economic, financial, and technological developments.

Working across research, policy, and data-driven analysis, the Editorial Board ensures that published pieces reflect a consistent institutional perspective grounded in quantitative reasoning and long-term structural assessment.

  • Early AI labor research still shows uncertain signals about future employment
  • Retraining alone cannot solve displacement as firms reorganize around super-workers
  • Policy should focus on production systems, data infrastructure, and workplace redesign

In 2024, fully 65% of organizations reported regularly using generative AI (Singla et al., 2024). The figure is impressive on its face, but it obscures a more complex situation: the widespread availability of these tools has not produced a reliable substitution of human labor across the economy. A report from the Swiss Institute of Artificial Intelligence notes that although "AI readiness" has become a common term and many treat worker retraining as a simple fix, the reality is messier. So far, AI-driven employment displacement has been real but small and uneven, which makes protecting entry-level pathways, and augmenting jobs rather than replacing them, central concerns. Initial studies indicate some productivity gains, but adoption is inconsistent: AI usage is often concentrated among senior staff, and complete automation remains rare (Wolfe et al., 2025). The datasets that feed AI models are often flawed, paying users are few compared with those who use the tools casually, and employers frequently exaggerate how much their employees use AI (PwC, 2025).

According to the Brookings Institution, the evidence casts doubt on short, low-cost retraining programs as a way to restore full employment after widespread AI-related layoffs (Kolko, 2026). Instead, the main policy focus should be on reshaping the production side, including company practices, data governance, and market structures, to absorb these labor market shocks. That is where the real work of guaranteeing a fair transition for workers must be done.

The Fragile Promise of AI Readiness

The typical view of AI in the workplace is that tools are introduced, workers are retrained, and new jobs emerge. This simplified version assumes that tasks displaced by automation are limited and that retraining is easy. The truth is much more complicated.

First, access to tools does not automatically mean they are properly integrated into work processes. Surveys and platform data show that many organizations and workers are experimenting with generative AI, but only a few are getting consistent, valuable results. Executives usually overestimate how widely AI is being used; managers are more likely than frontline employees to say that daily collaboration with AI is common. AI use is also more common among senior employees (Gallup, 2025).

This presents a couple of challenges. Training programs directed at a broad range of workers may not reach those who need assistance the most. Also, if only some employees are skilled at using AI, companies must decide whether to restructure work to amplify the contributions from those users or to make bigger changes to enable more employees to benefit. The first option could accelerate employment displacement, while the second requires time, money, and skilled management, which many companies lack.

Second, the data used to train AI matters. Generative systems are trained on data that is often messy, incomplete, and misleading (Alemohammad et al., 2023). Companies expecting fast, cheap retraining for displaced workers frequently underestimate the infrastructure needed to make AI reliable: carefully curated training data, integration with existing software, and mechanisms to verify data provenance and ensure its quality. These issues cannot be fixed with simple training courses. They require changes in engineering, purchasing, and management.

Third, early studies show different outcomes. Some AI implementations significantly increase productivity for frontline employees, while others increase workload without reducing the number of employees needed (Robinson, 2024). If the most productive users are already highly skilled, then simply improving their abilities may increase output without creating new job opportunities for displaced workers. In such cases, AI readiness initiatives focused on short retraining periods will neither restore jobs nor distribute benefits fairly.

Weak Signals from Early AI Labor Research

Researchers are correct to point out that early labor studies offer suggestions but are not definitive. Many studies compare occupations using scores based on how likely they are to be affected by AI, or using platform usage data, meaning information gathered from digital tools that track which tasks are completed by AI. However, these measurements can differ. What AI can theoretically do at the task level (the smallest unit of work) is different from how it is actually used in the workplace. Platform data show that, for many jobs, the number of tasks currently handled by AI is much lower than what these systems could potentially handle (Penn Wharton Budget Model, 2025).

Figure 1: Occupational AI exposure estimates are often much higher than actual AI task usage, highlighting the gap between theoretical automation potential and real workplace adoption.
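The gap described above can be sketched numerically. The following is a minimal illustration using entirely hypothetical exposure and usage scores; none of these numbers come from the studies cited, and a real analysis would draw on occupation-level exposure indices and platform usage data.

```python
# Hypothetical illustration of the exposure-usage gap per occupation.
# All scores are invented for demonstration purposes only.

occupations = {
    # occupation: (theoretical_exposure, observed_usage), both as task shares 0-1
    "software development": (0.60, 0.25),
    "text editing":         (0.55, 0.20),
    "frontline services":   (0.30, 0.03),
    "hands-on trades":      (0.15, 0.01),
}

def exposure_usage_gap(scores):
    """Return, per occupation, the difference between the share of tasks
    AI could theoretically perform and the share it is observed performing."""
    return {occ: round(exposure - usage, 2)
            for occ, (exposure, usage) in scores.items()}

gaps = exposure_usage_gap(occupations)
for occ, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{occ:22s} gap = {gap:.2f}")
```

Even in this toy setup, the point of Figure 1 is visible: the largest gaps appear in occupations with high theoretical exposure, which is exactly where projections based on exposure scores alone would most overstate near-term displacement.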

The amount of observed use varies by platform and function. For example, coding and text editing tasks are more commonly handled by AI than hands-on trades or frontline services (Ozgul et al., 2024). This difference is important for policy because it suggests two possible futures. In one, AI helps workers and creates new jobs. In the other, companies redesign production processes to replace routine labor. Current data cannot yet tell us which path will dominate.

Also, short-term employment indicators are unreliable. Slower hiring in roles likely to be affected by AI has appeared in some payroll data, notably among younger workers. At the same time, unemployment rates have not increased sharply, as they would if significant employment displacement were underway (Ghosal, 2025). This illustrates the uncertainty inherent in data analysis: different datasets and methods can yield different results. The reasonable approach is to be cautious, not complacent. Current findings offer early warnings but not a complete guide. Spending on quick, short courses alone cannot address the challenges. If production is reorganizing to be more efficient with fewer workers, then support programs must enable gradual transitions.

Figure 2: Unemployment trends across occupations with different levels of AI exposure move largely together, suggesting that early labor-market effects of AI remain inconclusive.

Finally, company-level experiments reveal important factors that simple projections miss. When AI suggestions guided less experienced workers, improvements were often significant and lasting. These studies also show that highly skilled employees may start to rely on AI outputs in ways that reduce their original contributions and could eventually degrade the data that future models learn from (Brynjolfsson et al., 2023). If models are trained on work that is already unoriginal, the system may slowly decline in its ability to solve problems creatively. This creates a paradox for decision-makers: the very efforts that make productivity gains more accessible could, in the long run, weaken the learning process that drives advancement unless companies invest in preserving high-quality human expertise.

Rethinking AI Readiness from the Production Side

If research is just beginning, then policy should start with how companies assemble AI systems and how they choose to use them. We need three connected changes. First, we must shift the focus from simply training workers to improving the entire production system. This means policies and public funds should support companies that adopt solid data management practices, open systems for tracking data sources, and clear measurement standards.

Funding should go to projects that redesign workflows so that AI supports a wider range of employees, not only those who are already at an advantage. This is not about being controlling; it is about being practical. Building reliable, verifiable systems and redesigning workplaces often yield broader productivity gains than isolated training courses.

Second, we need to invest in governance and measurement. The current mix of exposure metrics, platform data, and employer surveys produces poorly targeted interventions. We need standardized, open ways to measure AI use in real work settings, and we need public funding to develop them. Governments should establish data trusts and benchmarks that are not owned by any one company. These can reveal which integrations actually reduce labor demand, increase wages, or change hiring patterns.

With better measurements, policymakers can target support toward the regions, age groups, and industries that show clear signs of structural change, not just experimentation. This is important for fairness. Without good measurement, safety nets and support programs will either over- or under-serve the people who need them most.

Third, we need to reexamine how we design retraining programs. Short courses focused on earning certificates have their place, but they work best when combined with workplace redesign and on-the-job learning. Apprenticeship-style programs within companies using AI can accelerate lasting transitions. Companies that offer mentorship and exposure to different tasks allow junior workers to learn alongside the tools, preserving paths to career advancement. Public policy should support such models by co-funding partnerships between employers and learning institutions, offering tax credits for on-the-job retraining, and funding councils that keep training programs aligned with actual changes in production.

From Training Programs to System-Level Policy

We must stop seeing AI readiness as only an individual responsibility and view it as a system-level problem, one that requires changes across firms, infrastructure, and policy. Current evidence provides an honest but incomplete view: there are some productivity gains, widespread automation is not yet a reality, and present measurements offer only limited insight into the future.

This points to a clear policy approach. Instead of spending public money on cheap courses that promise quick job placement, we should invest in the infrastructure that makes AI reliable, broadly available, and fair within companies, by supporting data management, standards, workplace redesign, and shared measurement. Those investments are more difficult and take more time. They require more coordination among public organizations, industries, and educators. However, they make a fair transition more likely, one shaped by how companies build and purchase AI, who uses it, and who is left behind.

Returning to the initial point: because so many companies report using generative AI, it is tempting to conclude that training alone would be sufficient. It is not. If policymakers care about jobs, wages, and respect at work, the levers are in the firms, the architecture, and the data that feed AI models. We should first build the systems and processes; then training will have a real impact.

References

Alemohammad, S., Casco-Rodriguez, J., Luzi, L., Humayun, A.I., Babaei, H., LeJeune, D., Siahkoohi, A. and Baraniuk, R.G. (2023) Self-consuming generative models go MAD. arXiv preprint.
Brynjolfsson, E., Li, D. and Raymond, L. (2023) ‘Generative AI at Work’, The Quarterly Journal of Economics, 140(2), pp. 889–919.
Eckhardt, K. and Goldschlag, N. (2025) Unemployment and AI exposure across occupations. Working paper.
Gallup (2025) Frequent use of AI in the workplace continued to rise in Q4. Gallup Workplace Report.
Ghosal, S. (2025) ‘Generative AI reshapes U.S. job market, Stanford study shows’, CNBC.
Gimbel, M., Kelkar, O., Patel, R. and Shah, A. (2025) AI usage by occupation: Evidence from Anthropic data. Anthropic Research.
Kolko, J. (2026) ‘Research on AI and the labor market is still in the first inning’, Brookings Institution, The Hamilton Project.
Ozgul, P., Fregin, M., Stops, M., Janssen, S. and Levels, M. (2024) High-skilled human workers in non-routine jobs are susceptible to AI automation but wage benefits differ between occupations. arXiv preprint.
Penn Wharton Budget Model (2025) The projected impact of generative AI on future productivity growth. Penn Wharton Budget Model.
PwC (2025) Daily GenAI users see higher pay, job security and productivity – while a third of the global workforce regularly feel overwhelmed. PwC Global Workforce Survey.
Robinson, B. (2024) ‘77% of employees report AI has increased workloads and hampered productivity, study finds’, Forbes.
Singla, A., Sukharevsky, A., Yee, L., Chui, M. and Hall, B. (2024) The state of AI in early 2024. McKinsey Global Survey on AI.
Wolfe, D., Price, M., Choe, A., Kidd, F. and Wagner, H. (2025) Revisiting UTAUT for the age of AI: Understanding employees' AI adoption and usage patterns through an extended UTAUT framework. arXiv preprint.
