A new economics paper explains why companies continue replacing workers
Image generated by Grok

A new economics paper offers one of the clearest formal explanations yet for why companies keep laying off workers and replacing them with AI, even when economists, executives, and the companies themselves can see that if everyone does it, consumer demand collapses and everyone loses.

The paper, titled "The AI Layoff Trap", was published to arXiv on March 21, 2026 by Brett Hemenway Falk of the University of Pennsylvania and Gerry Tsoukalas of Boston University. It builds a formal economic model to explain a puzzle: if rational, foresighted companies can see the damage that mass AI-driven layoffs will eventually cause to consumer spending — and therefore to their own revenue — why do they keep doing it anyway?

The answer, the authors show, is that they have no choice. Not because they lack foresight, but because of the structure of competitive markets.

The Core Insight: A Prisoner's Dilemma in the Product Market

The paper's central contribution is showing that AI-driven automation creates what economists call a demand externality — a situation where one company's decision imposes costs on other companies that the first company doesn't have to pay.

Here's how it works. When a company replaces human workers with AI, it reduces its labor costs. Under competitive pricing, those cost savings get passed on to consumers as lower prices, which wins the company market share. But those displaced workers are also consumers, and when they lose their income, they stop buying things. That reduction in consumer spending hurts every company in the sector, including the one that automated.

The crucial asymmetry: the automating company captures the full benefit of its cost savings, but only bears a fraction of the demand destruction it causes. The rest falls on its competitors. Every firm faces this same calculation. So every firm automates beyond what would be collectively optimal — and they all know they're doing it.

In the most extreme version of the model, the "frictionless limit," every company's dominant strategy is to replace its entire human workforce with AI, even though all of them would be better off if every company could agree to show restraint. This is the classic structure of a Prisoner's Dilemma: rational decisions by individual firms produce an irrational collective outcome in equilibrium.
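That payoff structure can be sketched as a two-firm game. The numbers below are purely illustrative (the paper builds a general model, not this calibration): automating is the best reply to either rival move, yet mutual automation leaves both firms worse off than mutual restraint.

```python
# Toy two-firm automation game. Payoffs are illustrative, not the
# paper's calibration. Each firm chooses "restrain" or "automate";
# tuples are (firm 1 payoff, firm 2 payoff).
PAYOFF = {
    ("restrain", "restrain"): (10, 10),  # cooperative optimum
    ("automate", "restrain"): (12, 6),   # automator wins share; rival eats demand loss
    ("restrain", "automate"): (6, 12),
    ("automate", "automate"): (7, 7),    # equilibrium: both worse off than (10, 10)
}

def best_reply(rival_move):
    """Firm 1's payoff-maximizing response to a fixed rival move."""
    return max(("restrain", "automate"),
               key=lambda m: PAYOFF[(m, rival_move)][0])

# Automation dominates: it is the best reply whatever the rival does.
assert best_reply("restrain") == "automate"
assert best_reply("automate") == "automate"

# Yet the all-automate equilibrium destroys surplus relative to restraint.
print(PAYOFF[("automate", "automate")], "<", PAYOFF[("restrain", "restrain")])
```

Because (7, 7) Pareto-dominated by (10, 10) coexists with automation being dominant, no firm can unilaterally escape the trap.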

Crucially, the authors show the surplus destroyed by over-automation is not a transfer from workers to firm owners. It is a deadweight loss — value that is simply destroyed and that harms both workers and the owners of capital. The firms that automate also lose.

The Red Queen Effect: Better AI Makes It Worse

One of the paper's most striking results is what happens when AI gets better and more productive. Common intuition suggests that higher-productivity AI would solve the problem: if AI makes workers dramatically more productive, displacement might not translate into job losses at all.

The model shows the opposite. When AI becomes more productive, the demand externality gets larger, not smaller. Each firm perceives that automating more aggressively than its competitors will win it market share. But at the equilibrium where all firms have automated equally, those perceived gains cancel out; what's left is the additional demand destruction from the extra automation. The authors call this the Red Queen effect, after the character in Lewis Carroll's Through the Looking-Glass who must keep running just to stay in the same place.

More competition between firms makes the problem worse too. A monopoly firm actually has an incentive to avoid over-automation because it fully internalizes the demand destruction it creates — it's hurting its own customers. But the more fragmented a market, the more the demand destruction gets spread across competitors, and the larger the gap between the amount of automation individual firms choose and the amount that would be best for everyone.
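The fragmentation point can be made concrete with a minimal symmetric sketch. The functional forms and numbers below are my own illustration, not the paper's model: each firm bears only a 1/n share of the demand it destroys, so a monopoly internalizes everything while fragmented markets over-automate.

```python
# Illustrative symmetric market (made-up numbers, not the paper's model).
# Each of n firms picks an automation level a in [0, 1]. Automating
# saves s per unit privately but destroys d per unit of total market
# demand, and that demand loss is shared equally across all n firms.
def chosen_automation(n, s=1.0, d=2.5):
    """A firm bears only a 1/n share of the demand it destroys, so it
    automates fully whenever private savings exceed that share."""
    return 1.0 if s > d / n else 0.0

def optimal_automation(s=1.0, d=2.5):
    """The cooperative optimum internalizes the full demand loss d."""
    return 1.0 if s > d else 0.0

# A monopoly (n=1) matches the optimum; as the market fragments,
# individual choices diverge from what is collectively best.
for n in (1, 2, 5, 50):
    print(n, chosen_automation(n), optimal_automation())
```

With these numbers the monopolist and duopolist both hold back, but once the demand loss is split five or more ways, every firm automates fully even though the collectively optimal level is zero.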

What Doesn't Solve the Problem

The paper is notable for what it rules out as solutions. The authors examine six commonly proposed policy responses to AI-driven displacement and ask whether each corrects the competitive incentive that drives over-automation. All but one fail.

Universal Basic Income increases the spending floor for displaced workers, which partially restores consumer demand. But it does not change any firm's calculation at the margin about how much to automate. The incentive to automate more aggressively than competitors remains exactly the same.

Capital income taxes on the profits from automation shift resources between owners and governments, but they operate on profit levels rather than on the per-task margin where the externality actually lives. They don't change how much automation any firm chooses to do.

Worker equity participation — giving workers a share of company ownership so they benefit when the company automates — narrows the wedge but cannot eliminate it. The demand destruction falls on all firms, not just the one whose workers hold equity.

Upskilling programs that help displaced workers find new jobs more quickly reduce the demand destruction by shortening the time workers spend without income. But if some wage income is permanently lost rather than just temporarily interrupted, upskilling alone cannot close the gap.

Coasian bargaining — the idea that firms could voluntarily agree among themselves to restrain automation for mutual benefit — fails because automation is a dominant strategy. Any agreement would immediately be broken by individual firms acting in their own interest. The game has no self-enforcing cooperative equilibrium.

The One Solution That Works: A Pigouvian Automation Tax

The only policy instrument the authors find that actually corrects the market failure is a Pigouvian automation tax — a tax on each automated task, set equal to the uninternalized demand loss that automation creates for the rest of the market.

A Pigouvian tax (named after the economist Arthur Cecil Pigou) is a standard economic tool for correcting negative externalities — the most familiar examples are carbon taxes on pollution. The logic is to make the company that generates the harm pay the full social cost of its decision, rather than externalizing part of that cost onto others.

In this model, a correctly calibrated automation tax would implement what the authors call the "cooperative optimum" — the level of automation that rational firms would collectively choose if they could coordinate and internalize the demand externalities their decisions create. Revenue from the tax could be used to fund retraining programs that raise income replacement rates for displaced workers, which would reduce the underlying externality over time — potentially making the tax self-limiting.
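The calibration logic can be sketched in the same toy spirit. Everything below is my own illustrative setup, not the paper's formula: if each firm bears only a 1/n share of the demand it destroys, a per-unit tax equal to the remaining (n-1)/n share makes private cost equal social cost, so the equilibrium choice coincides with the cooperative optimum at any level of market fragmentation.

```python
# Toy setup (illustrative numbers, not the paper's calibration): each of
# n firms saves s per unit of automation but destroys d units of total
# demand, shared equally across firms. A firm's own share of the damage
# is d/n; the uninternalized remainder falls on its n-1 rivals.
def pigouvian_tax(n, d=2.5):
    """Per-unit automation tax equal to the demand loss a firm imposes
    on everyone else: d * (n - 1) / n."""
    return d * (n - 1) / n

def chosen_automation_with_tax(n, s=1.0, d=2.5):
    """With the tax, the firm's private marginal cost of automating is
    d/n + d*(n-1)/n = d, the full social cost, so its choice matches
    the cooperative optimum regardless of how fragmented the market is."""
    private_cost = d / n + pigouvian_tax(n, d)
    return 1.0 if s > private_cost else 0.0

for n in (1, 2, 10, 100):
    print(n, round(pigouvian_tax(n), 3), chosen_automation_with_tax(n))
```

Note that the tax owed by a monopolist is zero, matching the article's observation that a monopoly already internalizes the damage; the tax only bites where the externality exists.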

Why This Paper Matters for Higher Education

The AI Layoff Trap model was developed in the context of corporate automation decisions, but its implications extend directly to universities and colleges, which are themselves making significant AI-related workforce decisions under competitive pressure.

Universities face the same structural logic the model describes. An institution that reduces its reliance on human workers — whether in administrative functions, student services, tutoring, or eventually instruction — captures cost savings that may allow it to lower tuition or invest in other areas. But if all universities do this simultaneously, they collectively reduce employment of a demographic that is also their student body and donor base — and they collectively reduce the intellectual community that makes university research possible.

More immediately, the paper is relevant to students studying economics, business, public policy, and computer science, all of whom will be working in labor markets being reshaped by AI at a pace the paper's own data section documents in striking terms. In February 2026, Block (the financial services company formerly known as Square) cut nearly half its 10,000-person workforce, with CEO Jack Dorsey explicitly attributing the cuts to AI. Over 100,000 technology workers were laid off in 2025, with AI cited as a primary driver in more than half of cases. Salesforce replaced 4,000 customer-support agents with AI. Eloundou et al. found that roughly 80% of U.S. workers hold jobs with tasks susceptible to automation by large language models.

The paper does not argue that AI automation is inherently bad or that it should be stopped. It argues that competitive markets, left to themselves, will produce more automation than is collectively optimal — and that most of the policy tools politicians and economists typically reach for will not fix the underlying incentive structure. Only a correctly designed automation tax can do that.

Whether any democratic government will enact such a tax is a separate political question. But the model at least tells us clearly what the problem is, why good intentions and foresight aren't enough to solve it, and what a real solution would look like.

The full paper is available free of charge at arxiv.org/html/2603.20617v1. Public comment on AI labor policy is an active area of regulatory discussion; students and researchers interested in contributing to this debate should engage with the underlying literature this paper builds on, including the task-based automation frameworks of Acemoglu and Restrepo.