Scenario II: Labour / Productivity
The development and use of AI is expected to dramatically change the structure of our workforce. While the technology promises increased productivity and efficiency, it also poses critical threats to workers, particularly to their employment prospects, privacy, and wellbeing.
The knock-on effects of job loss and mass unemployment on society could exacerbate inequality in the medium term and destabilise political structures.
In order to take stock of the stakes at hand and the choices we face, we map out future trajectories for AI in relation to the labour market on a time horizon of five to ten years.
This scenario was written following an in-depth workshop in Palo Alto bringing together leading academics, asset owners and venture capital investors in November 2025.
The workshop and this scenario are part of a series exploring AI's systemic impacts in pursuit of alternative narratives.
Early 2026: Feeling the impact
Hyperscalers believe that the emergence of Artificial General Intelligence (AGI) will soon lead to the rollout of autonomous AI agents capable of behaving as ‘virtual employees’. These agents, according to AI developers, could automate millions of jobs worldwide — and raise global GDP by up to 7% (Goldman Sachs).
Following this narrative, investment is currently flowing heavily into capital expenditure on (generative) AI technology — research and development and data centre infrastructure — rather than into hiring human labour.



Future I: Base-case scenario
What happens if we continue on trend?
Adoption faces hurdles
If we continue on our current trajectory, VCs continue to fund both automation and augmentation efforts, with more emphasis placed on the former. AI technology continues to improve, though mainly through greater permissions and compute being granted to agentic AI systems rather than material improvements in the LLMs at their core. As a result, adoption rates remain hampered by AI's failure to generate measurable returns for organisations, and only the most targeted and customised agentic systems reach scale. Generalised enterprise adoption stalls due to the persistent 'learning gap', and this slows the impact on the workforce.
'Job exposure to AI looks different when success rates are factored in' — Anthropic Economic Index report, 2026
Regulation
Policy is reactive and lags technological adoption. Efforts focus on targeted mitigation for groups experiencing hardship: enhancing unemployment benefits and providing localised, state-sponsored retraining programmes (e.g., directing displaced clerical workers toward nursing or the trades) (IMF 2025, Acemoglu et al. 2023). Regulation mandates transparency, but lacks teeth in preventing job displacement itself.
Workforce disruption
Disruption remains concentrated in high-exposure, low-complementarity occupations — those focused on routine, rule-based tasks with relatively low reliance on human judgement — such as interpreters, software engineers, and administrative clerks. This particularly affects early-career workers. But older workers also struggle to adapt or find new roles after job separation, as some firms prefer younger, AI-literate talent. The creation of new roles and tasks happens too slowly to fully offset automation losses, stalling overall employment growth for certain cohorts (Chandar 2025, Brynjolfsson, Chandar, Chen 2025, Acemoglu et al. 2023).
Labour income inequality increases as more flexible workers with higher incomes and education levels capture most labour gains, while the least competitive workers see incomes stagnate or decline (IMF 2025). Wealth inequality rises as small groups capture the majority of capital returns.
Public backlash
Increased anxiety about job loss persists among the public, leading to greater political activism and labour organising demanding binding collective agreements against surveillance and displacement (Tech Equity - Take the Mic). At the same time, local communities continue to push back against the buildout of data centres as they become increasingly cognisant of the energy and water demands that AI infrastructure places on their neighbourhoods. Combined, this pushback against AI deployment gives governments around the world a mandate to enact stricter regulation of AI developers and deployers, which acts as a brake on the worst-case hyperscaling scenarios. Inequality and unemployment, although higher than before, stabilise and are kept in check, because AI companies become incentivised to fund up-skilling and retraining programmes, and to create new jobs, in order both to maintain a 'social licence to operate' and to achieve meaningful adoption in the real economy.
Future II: Worst-case scenario
Push towards AGI
The ‘winner-takes-all’ mindset intensifies among hyperscalers. Investment and R&D become single-mindedly focused on attaining Artificial General Intelligence (AGI) and replacing human tasks entirely (the Turing Trap; Acemoglu et al. 2023). Innovators succeed in automating core cognitive tasks previously thought immune, without developing sufficient complementary tasks. AI deployment prioritises consistency and cost savings over worker wellbeing, leading to intrusive monitoring and dangerous work intensification (Acemoglu et al. 2023, AFL-CIO 2025).
Policy
At the same time, policymakers fail to implement structural reforms. The tax code continues to favour investment in capital over human labour, providing additional incentives for automation and further exacerbating wealth inequality. Lack of governmental AI expertise leads to ineffective oversight (Acemoglu et al. 2023), and policy guidelines centring worker wellbeing are ignored or withdrawn (Tech Equity - Take the Mic).

Society
In the absence of stronger social safety nets (e.g. via the development of a Universal Basic Income), displaced workers experience a 'soft decay' into poverty, and power is further concentrated in the hands of a few 'hyperscaler' companies. This contributes to the development of authoritarian populism. The labour share of national income falls significantly. Total income declines for large segments of the population, especially low- and middle-income workers. Income inequality skyrockets.
These severe economic pressures lead to widespread political-economic consequences and social instability. Worker discontent erupts into acts of resistance or sabotage against workplace technology, leading to generalised civil unrest (IMF 2025, Acemoglu et al. 2023). The removal of the ‘on ramp’ for young talent also results in long-term labour market instability and a growing shortage of trained mid-level staff, jeopardising the long-term viability of organisations around the world. All in all, despite the initial productivity and efficiency gains created by AI automation, the discontent and instability created by mass displacement threaten the integrity of our democratic and financial institutions, the backbones of a functioning and successful economy.
Future III: Best-case scenario
Viewed as a systemic risk
Asset owners, especially the powerful pension funds and sovereign wealth funds, begin to view workforce stability similarly to how they view climate risk: as a systemic issue consistent with fiduciary duty. They come to an understanding that widespread unemployment threatens their pension and taxpayer contributions, and the very economic stability that they rely on to function. Moving away from the inevitability doctrine around AI, LPs increasingly see the need to assert agency over how AI is deployed, focusing on ‘upstream’ solutions rather than mitigating the downstream effects.
Pension funds begin to channel the concerns of their beneficiaries, who worry about the viability of their professions, to pressure asset managers to direct capital towards human-complementary AI. Engagement occurs at the CIO and board level of asset owners in order to effectively shift the investment culture and philosophy.
Action flows to VCs and startups
Consequently, VCs invest increasingly in startups that offer transparent, reliable tools that augment human workers. They begin to classify portfolio companies based on whether the technology they are developing is automative or augmentative. At the same time, they start collecting data on employment trends and labour shares of national income, and measuring the adoption rates of AI pilots and the relative revenue of responsible AI tools. Using this data, VCs find that more responsible and augmentative AI tools enjoy higher adoption rates and higher revenues, especially among the biggest corporate customers. They use these findings to push portfolio companies to include worker feedback in the R&D process and to focus on augmentation. This prevents ‘deskilling’, improves product adoption, and mitigates regulatory risk. Augmentative tools also enjoy a stronger path to revenue, because corporate adoption of automative products is stalled by reliability issues.

Regulation
On the regulatory front, support grows for policies that level the playing field and create a stable environment for innovation, and the tech industry increasingly acknowledges that the current 'anti-regulation' stance may provoke a destructive backlash. New policies support the human-complementary path by equalising tax rates on labour and capital, removing the structural incentive to automate needlessly. Government funding is directed toward human-complementary research in sectors like education and healthcare. Policy mandates strong worker voice, transparency, and the right to appeal AI-driven decisions. (Acemoglu et al. 2023)
Augmentation emerges as dominant
As a result, the human-complementary path dominates: VC investment flows heavily into augmentation technologies that boost human expertise and create new tasks, rather than displacement. Innovators focus on 'Centaur Evaluations' designed to maximise joint human-AI performance rather than human replacement (Turing Trap; Acemoglu et al. 2023; Brynjolfsson, Chandar, Chen 2025). The overall impact of AI on employment is positive, due to rising labour demand created by new tasks and products. AI effectively reduces the skill gap, providing the greatest productivity gains to novice or less-capable workers. Rapid, personalised AI retraining becomes highly effective, quickly equipping displaced workers for high-demand, high-complementarity jobs (Acemoglu et al. 2023; Brynjolfsson, Chandar, Chen 2025). Labour-management partnerships and union-centred training programmes manage transitions effectively (AFL-CIO 2025, Tech Equity - Take the Mic).
Productivity gains are substantial, leading to higher growth and rising real wages. Income and wealth inequality are minimised compared to other scenarios, as the widespread productivity surge benefits workers across the income spectrum, especially those in formerly lower-skilled roles. The labour share of income remains stable or increases.

There are strategic opportunities and choices to make:
Augmentation tools may have a stronger path to revenue: because augmentation-first products sidestep the trust and reliability problems that hamper automated systems, backing products with augmentation rather than automation as a goal is an under-explored opportunity in the early-stage ecosystem today. Given indications that the most AI-fluent people use their generative AI tools in an augmentative manner, and that automated outputs only rarely (in up to 3.75% of cases) serve as a direct replacement for real work, augmentative design could become the most powerful way to build products with AI.
Regulatory risk is real and growing: pressure from the public and from communities affected by AI's disruption is inevitable. The early-stage ecosystem needs to build with this pressure and disruption in mind to maintain a social licence to operate.
Systemic risk should factor into portfolio strategy. The worst-case scenario describes a feedback loop — mass displacement leads to instability, which undermines the economic foundations that make tech returns possible in the first place. Large asset owners are starting to treat workforce stability as a systemic risk akin to climate, and LPs may begin pressuring VCs to account for this.

Practical actions:
Assess whether your investees are developing automative or augmentative technologies.
Collect and monitor data on employment impacts, adoption rates, and product design choices.
Keep track of public statements from corporates on AI strategies and integrations.
Incorporate worker feedback into product development — if the goal is augmentation, then a deep understanding of human and organisational workflows is essential for effective integration; AI is not a 'quick fix'.
Assess, at an organisational level, how your activities could contribute to systemic risk under each of these scenarios, and strategise accordingly.



