Scenario II: Labour / Productivity

The development and adoption of AI is expected to dramatically change the structure of our workforce. While the technology promises increased productivity and efficiency, critical threats to workers have also been identified, particularly concerning employment prospects, privacy, and wellbeing.


The knock-on effects of job loss and mass unemployment could exacerbate inequality in the medium term and destabilise political structures.


In order to take stock of the stakes at hand and the choices we face, we map out future trajectories for AI in relation to the labour market on a time horizon of five to ten years.

This scenario was written following an in-depth workshop in Palo Alto bringing together leading academics, asset owners and venture capital investors in November 2025.


The workshop and this scenario are part of a series exploring AI's systemic impacts in pursuit of alternative narratives.


Early 2026: Feeling the impact

Hyperscalers believe that the emergence of Artificial General Intelligence (AGI) will soon lead to the rollout of autonomous AI agents capable of behaving as ‘virtual employees’. These agents, according to AI developers, could automate millions of jobs worldwide — and raise global GDP by up to 7% (Goldman Sachs).


Following this narrative, capital is currently flowing heavily into (generative) AI capital expenditure — research and development and data-centre infrastructure — rather than into hiring human labour.

The displacement


While AGI and mass automation have not yet materialised, the extent of AI's impact on the labour market remains ambiguous.


Researchers at Yale argue that there has not been a discernible disruption since ChatGPT’s release, and that more data is needed to fully understand the changes occurring (Gimbel et al., 2025).


However, a team of Stanford economists has found indications of employment declines in areas of the workforce that AI applications are poised to automate — such as early-career workers in software development, customer service, and clerical work, a group which has seen up to a 13% relative hiring decline since the widespread adoption of generative AI (Brynjolfsson, Chandar & Chen, 2025).


These displacement risks are likely to have differential impacts across society. Women, who are more likely to be in clerical and customer service work, and low-skilled (white collar) workers, who perform more routine tasks, are likely at greater risk.


The labour market is hence so far witnessing only specific, localised disruptions, rather than economy-wide mass unemployment.


The surveillance issue


However, the impact on the workforce is not limited to unemployment. AI systems are already being used by employers for surveillance, monitoring, evaluation, and control of workers, often without their knowledge.


These applications can lead to psychological stress, increased injury risk, burnout, and job deterioration (AFL-CIO 2025).


Augmentation or Automation?


We also have potential alternative paths ahead of us. AI could be deployed in a ‘human-complementary’ way, adding value to organisations while avoiding the perils of mass displacement. AI can be designed to augment human capabilities, create new tasks, and enable lower-skilled or lower-ranked workers to perform more valuable, expert work (Acemoglu et al. 2023). But development typically hasn’t been pushed in this direction, despite indications that the AI adoption challenges faced by enterprises could be overcome this way. One reason for the ongoing focus on automation (versus augmentation) is the set of widely used AI benchmarks that reward AI for appearing human (e.g., by passing the Turing Test), rather than testing AI systems as collaborators.


Developing AI that is complementary with humans might become a competitive advantage as corporate customers increasingly demand safety, transparency, and reliability before deploying tools at scale. Companies are more likely to achieve deep workflow integration with carefully designed human-AI collaborations, rather than generic automation agents.


Recent research from Anthropic indicates that the most AI-fluent people use their generative AI tools in an augmentative manner, which makes sense given that completely automated outputs are rarely (only up to 3.75% of the time) a replacement for real work at current capabilities.


Unfortunately, however, we’re currently witnessing a focus on the perceived potential for short-term returns from hyperscaling ‘towards AGI’ — accepting possible human workforce casualties in its wake. Long-term systemic stability has not yet become a widely shared concern among founders and investors. Big Tech and the majority of private capital are focused on not missing out, rather than on realising real productivity gains in the workforce. As a result, the prevalent narrative in the tech ecosystem is that mass job displacement is inevitable.


While policymakers and unions are advocating for stronger guardrails and accountability mechanisms that may mitigate the effects of this, influential tech investors and lobbying organisations are currently forcefully pushing back against regulation, taxation, or redistribution efforts.


So far, then, while enterprise-ready autonomous AI agents aren’t here yet, and the material benefits of AI deployment in the workforce remain unclear, private capital and resources are still disproportionately directed at developing and deploying AI that will automate workers. 


So, where do we go from here?


Future I: Base-case scenario

What happens if we continue on trend?

Adoption faces hurdles

If we continue on our current trajectory, VCs continue to fund both automation and augmentation efforts, with more emphasis on the former. AI technology continues to improve, though mainly through more permissions and compute granted to agentic AI systems rather than material improvements in the LLMs at their core. As a result, adoption rates remain hampered by AI's failure to generate measurable returns for organisations, and only the most targeted and customised agentic systems reach scale. Generalised enterprise adoption stalls due to the persistent 'learning gap', and this slows the impact on the workforce.


'Job exposure to AI looks different when success rates are factored in' (Anthropic Economic Index report, 2026)
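The adjustment behind the quote above can be sketched in a few lines: discount an occupation's raw "share of tasks AI could touch" by the rate at which fully automated attempts actually succeed. The occupation names and rates below are invented for illustration, not figures from the report.

```python
# Hypothetical illustration: effective exposure = raw task exposure x
# automated-run success rate. Occupations with high raw exposure but low
# success rates end up far less exposed in practice.

def effective_exposure(raw_exposure: float, success_rate: float) -> float:
    """Share of an occupation's tasks AI can complete end-to-end today."""
    return raw_exposure * success_rate

# name: (raw task exposure, automated-run success rate) -- illustrative only
occupations = {
    "clerical work":        (0.70, 0.20),
    "software development": (0.60, 0.35),
    "nursing":              (0.15, 0.05),
}

for name, (exposure, success) in occupations.items():
    print(f"{name}: raw {exposure:.0%} -> effective "
          f"{effective_exposure(exposure, success):.0%}")
```

Under these assumed numbers, a 70%-exposed clerical role collapses to 14% effective exposure once success rates are factored in, which is the base-case scenario's mechanism for slow, uneven disruption.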

Regulation

Policy is reactive and lags technological adoption. Efforts focus on specific mitigation for groups experiencing hardship: enhancing unemployment benefits, providing localised, state-sponsored retraining programs (e.g., directing displaced clerical workers toward nursing or trades) (IMF 2025, Acemoglu et al. 2023). Regulation mandates transparency, but lacks teeth in preventing job displacement itself.

Workforce disruption


Disruption remains concentrated in high-exposure, low-complementarity occupations — those focused on routine, rule-based tasks with relatively low reliance on human judgement — such as interpreters, software engineers, and administrative clerks. This particularly affects early-career workers. But older workers also struggle to adapt or find new roles after job separation, as some firms prefer younger, AI-literate talent. The creation of new roles and tasks happens too slowly to offset automation losses fully, stalling overall employment growth for certain cohorts (Chandar, 2025; Brynjolfsson, Chandar & Chen, 2025; Acemoglu et al., 2023).


Labour income inequality increases as more flexible workers with higher incomes and education-levels capture most labour gains, while the least competitive workers see incomes stagnate or decline (IMF 2025). Wealth inequality rises as small groups capture the majority of capital returns.

Public backlash

Increased anxiety about job loss persists among the public, leading to greater political activism and labour organising demanding binding collective agreements against surveillance and displacement (Tech Equity - Take the Mic). At the same time, local communities continue to push back against the buildout of data centres, as they become increasingly cognisant of the energy and water demands that AI infrastructure places on their neighbourhoods. Combined, the increased pushback against AI deployment gives a mandate to governments around the world to enact stricter regulation against AI developers and deployers, which acts as a brake on the worst-case hyperscaling scenarios. Inequality and unemployment, although higher than before, stabilise and are kept in check. This is because AI companies become incentivised to fund more up-skilling and retraining programmes, and to create new jobs, in order both to maintain a ‘social license to operate’ and to achieve meaningful adoption in the real economy.

Future II: Worst-case scenario 


Push towards AGI


The ‘winner-takes-all’ mindset intensifies among hyperscalers. Investment and R&D become single-mindedly focused on attaining Artificial General Intelligence (AGI) and replacing human tasks entirely (the Turing Trap, Acemoglu et al. 2023). Innovators succeed in automating core cognitive tasks previously thought immune, without developing sufficient complementary tasks. AI deployment prioritises consistency and cost savings over worker well-being, leading to intrusive monitoring and dangerous work intensification. (Acemoglu et al. 2023, AFL-CIO 2025)

Policy

At the same time, policymakers fail to implement structural reforms. The tax code continues to favour investment in capital over human labour, providing additional incentives for automation and further exacerbating wealth inequality. Lack of governmental AI expertise leads to ineffective oversight (Acemoglu et al. 2023), and policy guidelines centring worker well-being are ignored or withdrawn (Tech Equity - Take the Mic).

Society


In the absence of stronger social safety nets (e.g. via the development of a Universal Basic Income), displaced workers experience a 'soft decay' into poverty, and power is further concentrated in the hands of a few 'hyperscaler' companies. This contributes to the development of authoritarian populism. The labour share of national income falls significantly. Total income declines for large segments of the population, especially low- and middle-income workers. Income inequality skyrockets.


These severe economic pressures lead to widespread political-economic implications and social instability. Worker discontent erupts into acts of resistance or sabotage against workplace technology, leading to generalised civil unrest (IMF 2025, Acemoglu et al. 2023). The removal of the ‘on ramp’ for young talent also results in long-term labour market instability and a growing lack of trained mid-level staff, jeopardising the long-term viability of organisations around the world. All in all, despite the initial productivity and efficiency gains created by AI automation, the discontent and instability created by mass displacement threaten the integrity of our democratic and financial institutions, the backbones of a functioning and successful economy.

Future III: Best-case scenario 


Viewed as a systemic risk


Asset owners, especially the powerful pension funds and sovereign wealth funds, begin to view workforce stability similarly to how they view climate risk: as a systemic issue consistent with fiduciary duty. They come to an understanding that widespread unemployment threatens their pension and taxpayer contributions, and the very economic stability that they rely on to function. Moving away from the inevitability doctrine around AI, LPs increasingly see the need to assert agency over how AI is deployed, focusing on ‘upstream’ solutions rather than mitigating the downstream effects.


Pension funds begin to utilise the narratives of their beneficiaries, who are concerned about the viability of their professions, in order to pressure asset managers to direct capital towards human-complementary AI. Engagement occurs at the CIO and board-level of asset owners in order to effectively shift the investment culture and philosophy.

Action flows to VCs and startups

Consequently, VCs invest increasingly in startups that offer transparent, reliable tools that augment human workers. They begin to classify portfolio companies based on whether the technology they are developing is automative vs. augmentative. At the same time, they start collecting data on employment trends and labour shares of national income, and measuring the adoption rates of AI pilots and the relative revenue of responsible AI tools. Using their data, VCs find that more responsible and augmentative AI tools enjoy higher adoption rates and higher revenues, especially among the biggest corporate customers. They use these findings to push portfolio companies to include worker feedback in the R&D process, and to focus on augmentation. This prevents ‘deskilling’, improves product adoption, and mitigates regulatory risk. The tools also enjoy a stronger path to revenue, because corporate adoption is stalled by reliability issues with automative products.
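The classification-and-measurement step described above could be sketched as a simple portfolio tagging exercise. Everything in this snippet is invented for illustration: the company names, the binary automative/augmentative tag, and the adoption figures are assumptions, not data from any real fund.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical sketch: tag each portfolio company as primarily automative or
# augmentative, then compare pilot adoption rates across the two groups.

@dataclass
class PortfolioCompany:
    name: str
    orientation: str            # "automative" or "augmentative"
    pilot_adoption_rate: float  # share of enterprise pilots reaching production

# Illustrative portfolio -- names and figures are made up.
portfolio = [
    PortfolioCompany("AgentCo",       "automative",   0.12),
    PortfolioCompany("CopilotCo",     "augmentative", 0.41),
    PortfolioCompany("BackOfficeBot", "automative",   0.18),
    PortfolioCompany("ExpertAssist",  "augmentative", 0.37),
]

def mean_adoption(orientation: str) -> float:
    """Average pilot adoption rate for one orientation group."""
    return mean(c.pilot_adoption_rate for c in portfolio
                if c.orientation == orientation)

print(f"automative:   {mean_adoption('automative'):.0%}")
print(f"augmentative: {mean_adoption('augmentative'):.0%}")
```

In practice a real classification would be a spectrum rather than a binary tag, and adoption would be tracked over time, but even this coarse split shows how a fund could surface the adoption gap the scenario describes.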

Regulation


On the regulatory front, support grows for policies that level the playing field and create a stable environment for innovation, and the tech industry increasingly acknowledges that the current 'anti-regulation' stance may provoke a destructive backlash. New policies support the human-complementary path by equalising tax rates on labour and capital, removing the structural incentive to automate needlessly. Government funding is directed toward human-complementary research in sectors like education and healthcare. Policy mandates strong worker voice, transparency, and the right to appeal AI-driven decisions. (Acemoglu et al. 2023)

Augmentation emerges as dominant


As a result, the human-complementary path dominates: VC investment flows heavily into augmentation technologies that boost human expertise and create new tasks, rather than displacement. Innovators focus on 'Centaur Evaluations' designed to maximise joint human-AI performance, rather than human replacement (Turing Trap, Acemoglu et al. 2023, Brynjolfsson, Chandar & Chen, 2025). The overall impact of AI on employment is positive, due to rising labour demand created by new tasks and products. AI effectively reduces the skill gap, providing the greatest productivity gains to novice or less-capable workers. Rapid, personalised AI retraining becomes highly effective, quickly equipping displaced workers for high-demand, high-complementarity jobs (Acemoglu et al. 2023, Brynjolfsson, Chandar & Chen, 2025). Labour-management partnerships and union-centred training programs manage transitions effectively. (AFL-CIO 2025, Tech Equity - Take the Mic)


Productivity gains are substantial, leading to higher growth and rising real wages. Income and wealth inequality are minimised compared to other scenarios, as the widespread productivity surge benefits workers across the income spectrum, especially those in formerly lower-skilled roles. The labour share of income remains stable or increases.

Takeaways for Investors


The future direction of AI's impact on the workforce is not inevitable. The decision tree remains broad and, at many critical points, is shaped by the decisions of innovators and capital allocators.


There are strategic opportunities and choices to make:


  • Augmentation tools may have a stronger path to revenue: By overcoming the trust and reliability problems of AI systems, backing products that aim at augmentation rather than automation is an under-explored opportunity in the early-stage ecosystem today. Given indications that the most AI-fluent people use their generative AI tools in an augmentative manner, and that fully automated outputs are rarely (up to 3.75% of the time) a replacement for real work, augmentative design could become the most powerful way to build products with AI.


  • Regulatory risk is real and growing: And pressure from the public and communities affected by AI's disruption is inevitable. The early-stage ecosystem needs to be building with this pressure and disruption in mind to maintain a social licence to operate.


  • Systemic risk should factor into portfolio strategy. The worst-case scenario describes a feedback loop — mass displacement leads to instability, which undermines the economic foundations that make tech returns possible in the first place. Large asset owners are starting to treat workforce stability as a systemic risk akin to climate, and LPs may begin pressuring VCs to account for this.

Practical actions:

  • Analyse and understand whether your investees are developing technologies that are automative or augmentative.


  • Collect and monitor data on employment impacts, adoption rates, and product design choices.


  • Keep track of public statements from corporates on AI strategies and integrations.


  • Incorporate worker feedback into product development — if the goal is augmentation, then deep understanding of human and organisational workflows is essential for deep integration; AI is not a 'quick-fix'.


  • Assess and strategise, at an organisational level, how your investments could contribute to systemic risk under each of these scenarios.

Feedback on this scenario

Reframe Venture Ltd. Company Number: 13492915.

Registered Office: Churchill House, 137-139 Brent Street, London, England, NW4 4DJ


Get in touch: hello@reframeventure.com


Privacy Policy
