Day 972 - The Trap Everyone Can See, and Nobody Can Escape
The math is settled. The politics aren't.
A new economics paper shows that if AI displaces workers faster than the economy can replace their jobs, the result is a trap no firm can escape on its own, even with full knowledge of where it is headed. Whether we're already inside it is the only question left open. Here is what the model shows, why the obvious policy fixes don't work, and what, if anything, individuals can do when the system cannot save itself.
In February 2026, Jack Dorsey announced that Block was cutting nearly half its workforce. AI had made those roles unnecessary, he said, and within the year, most companies would reach the same conclusion. He wasn't issuing a warning. He was stating a fact, and then acting on it anyway.
That tension sits at the heart of a paper published last month by economists Brett Hemenway Falk and Gerry Tsoukalas: The AI Layoff Trap. Its central question is disarmingly simple. If every firm can see that mass automation erodes the consumer base all firms depend on, why would any of them do it? And its answer is more uncomfortable than most economic research tends to produce: because they have no choice.
The Cliff Is Visible. That Changes Nothing.
Falk and Tsoukalas design their model with an unusual assumption: full transparency. Every firm can see exactly how automation maps into lost worker income and reduced consumer spending. There are no information failures, no hidden dynamics. The cliff is visible to everyone.
And every firm races toward it anyway.
When a firm automates, it captures the full cost savings from replacing workers with AI. But the demand destruction — the spending those workers would have done across the economy — falls across all firms, not just the one that pulled the trigger. Each firm bears only a fraction of the damage it causes. The rest lands on its rivals.
Restraint is therefore irrational. A firm that holds back while others automate suffers the demand loss from their layoffs without capturing the cost savings from its own. It loses twice. Even if every firm in an industry agreed that collective restraint would raise all their profits, the agreement would immediately collapse, because automating is the best move regardless of what everyone else does. There is no negotiation that changes this calculus, and no amount of foresight that makes holding back rational.
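The structure of that argument can be checked with a toy payoff sketch. This is not the paper's model; the numbers below are hypothetical, chosen only so that each firm captures its full cost savings while the demand destruction is shared, which is the asymmetry the argument turns on.

```python
# Illustrative two-firm sketch (hypothetical numbers, not the paper's model).
# Each firm chooses whether to automate. Savings are private; demand
# destruction from ANY firm's automation is borne by both firms.

SAVINGS = 4.0      # cost savings a firm captures by automating
DEMAND_HIT = 3.0   # demand destruction per automating firm, borne by each firm

def profit(i_automate: bool, rival_automates: bool, base: float = 10.0) -> float:
    """One firm's profit, given both firms' automate/restrain choices."""
    p = base
    if i_automate:
        p += SAVINGS  # private gain, captured in full
    # shared loss: every automating firm destroys demand for both firms
    p -= DEMAND_HIT * (int(i_automate) + int(rival_automates))
    return p

for mine in (False, True):
    for rival in (False, True):
        print(f"I automate={mine!s:5}  rival automates={rival!s:5}  "
              f"my profit={profit(mine, rival)}")
```

Whatever the rival does, automating raises this firm's profit by SAVINGS minus its own share of the damage (4 − 3 = +1), so restraint is never the best response. Yet mutual automation yields 8 each, below the 10 each firm earns under mutual restraint: the dilemma survives full transparency.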
The paper is also precise about what happens as AI improves. Better AI does not shrink this gap. It widens it. Each firm gains an additional incentive to automate beyond its competitors to capture market share, but at the industry level, these gains cancel out. Everyone runs faster to stay in the same place, and the damage to the collective demand base grows with every lap. Economists call it the Red Queen effect.
The Policy Toolkit Is Almost Empty
The paper works through the obvious responses with unusual rigour. Most of them fail for the same structural reason: they change how much profit firms make overall, but not whether automating a given task is more attractive than not automating it. That decision — made at the level of each individual task, by each individual firm — is where the arms race actually lives. Instruments that don't reach that decision don't stop the race.
- Universal Basic Income raises living standards for displaced workers, but it adds a fixed amount to the spending base without changing any firm's per-task calculation about whether to automate.
- Capital income taxes scale the whole profit function down equally, but the decision to automate depends on the difference between options, not their absolute levels, so tax rates cancel out of the equation.
- Worker equity participation narrows the gap but cannot close it: eliminating the distortion entirely would require workers to hold more equity than firms have to distribute.
- Private negotiation among firms runs into the same wall: automating remains the individually rational move regardless of any voluntary agreement, so no non-binding arrangement holds.
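The capital-income-tax failure in particular is a one-line piece of arithmetic. A sketch with hypothetical profit numbers: a flat tax multiplies both options by the same factor, so the gap between automating and restraining shrinks but never changes sign, and the decision is unchanged at every rate.

```python
# Why a flat capital-income tax can't stop the race (hypothetical numbers):
# scaling both outcomes by (1 - tax) preserves which option is larger.

def after_tax(profit: float, tax_rate: float) -> float:
    """Profit remaining after a flat proportional tax."""
    return profit * (1 - tax_rate)

profit_automate = 11.0  # hypothetical per-task profit if the firm automates
profit_restrain = 10.0  # hypothetical profit if it holds back

for tax in (0.0, 0.2, 0.5, 0.9):
    gap = after_tax(profit_automate, tax) - after_tax(profit_restrain, tax)
    print(f"tax={tax:.1f}  automate-vs-restrain gap={gap:+.2f}")
```

The gap is (11 − 10) × (1 − tax): smaller at higher rates, but positive at every rate below 100%, so automating stays the preferred move.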
The one instrument that works is a Pigouvian automation tax: a per-task levy equal to the share of the demand destruction an automated position causes that the automating employer does not itself bear. At the right rate, this aligns private incentives with social costs. It is the only instrument that operates on the right margin.
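The mechanism can be sketched in a few lines. Again the numbers are hypothetical, not the paper's calibration: the firm bears only part of the damage its automation causes, and a levy equal to the externalized remainder makes its private decision track the social one.

```python
# Pigouvian-levy sketch (hypothetical numbers, not the paper's calibration).
# The firm internalizes only OWN_SHARE of the demand destruction it causes;
# a per-task levy equal to the externalized remainder closes the gap.

SAVINGS = 4.0       # private cost savings per automated task
TOTAL_DAMAGE = 6.0  # total demand destruction per automated task
OWN_SHARE = 0.5     # fraction of that damage the automating firm bears itself

def private_gain(levy: float) -> float:
    """The firm's net gain from automating one task, given a per-task levy."""
    return SAVINGS - TOTAL_DAMAGE * OWN_SHARE - levy

social_gain = SAVINGS - TOTAL_DAMAGE             # -2.0: socially destructive
pigouvian_levy = TOTAL_DAMAGE * (1 - OWN_SHARE)  # the externalized damage

print(f"no levy:        private gain = {private_gain(0.0):+.1f}")
print(f"pigouvian levy: private gain = {private_gain(pigouvian_levy):+.1f}")
print(f"social gain:                   {social_gain:+.1f}")
```

Without the levy the firm sees +1 per task and automates even though society loses 2; with the levy set to the externalized damage, the private gain equals the social gain and the privately rational choice becomes the socially right one.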
The problem is implementation. The paper’s model is a single closed economy. A tax imposed by one government does nothing for companies or workers outside it, and automation is mobile enough that unilateral taxation simply pushes adoption to untaxed jurisdictions. The economists note this themselves, pointing to multilateral coordination as the answer, analogous to border adjustments in carbon policy. Carbon policy has been under active international negotiation for more than three decades.
Who Actually Loses
There is a version of this story where the answer is obvious: workers lose, owners win. The paper’s formal result is more unsettling.
Over-automation is not a redistribution. It is a deadweight loss that harms both sides. Workers lose income through displacement. But business owners also end up worse off than if none of them had automated. The collective demand destruction is large enough that every company’s profit falls below what it would have been under restraint. The trap does not produce winners at one end and losers at the other. It destroys value that was available to everyone and distributes the loss across both sides of the ledger.
So What Do You Do?
It would be dishonest to end here with a tidy list of investments to make before the crash. The same demand destruction described in the paper would erode asset values alongside wages. If the wheels come fully off, more cash doesn’t buy much in an economy where consumers have stopped spending. That is worth saying plainly.
But the scenario is not binary. Between "everything is fine" and "complete collapse" lies a long, uneven compression: some sectors hollowed out faster than others, spending squeezed but not zeroed, opportunity distributed with increasing unevenness. In that world, how you are positioned relative to the disruption matters more than whether you have correctly predicted its endpoint.
The honest individual response is less about asset accumulation than about income resilience. The paper gives a clue about where the automation frontier moves slowest: the distortion is worst in fragmented, competitive markets — customer support, back-office operations, entry-level software. Roles embedded in complex human systems, where integration costs are high and measurable output is ambiguous, are harder to automate profitably. The frontier will reach them. It is not there yet.
There is also something structural worth naming. A direct exchange of value between a writer and an audience, which is what a paid newsletter is, does not sit inside the competitive dynamics the paper describes. It is not dependent on an employer or on a fragmented market in which each participant bears only a fraction of the damage they cause. Understanding the trap clearly, before most people do, is itself a form of positioning. Not because knowledge protects you from the aggregate outcome, but because the gap between where things are now and where they are heading is where asymmetric opportunities concentrate.
That gap is still open. It will not stay open indefinitely.
Next week: if the system-level fix requires coordinated policy that won’t arrive in time, what does individual-level coordination look like? The paper’s failure modes for collective action point, perhaps unexpectedly, toward some durable answers.
