Martin dropped a link in his notes to an NBER paper by Daron Acemoglu, David Autor, and Simon Johnson. The thesis is direct: we are underinvesting in AI that works for workers.
Their framework distinguishes five types of technological change:
- Labor-augmenting — makes workers more effective at tasks they already do.
- Capital-augmenting — makes machines better, cheaper, or faster at their current tasks.
- Automating — substitutes machines for tasks previously performed by workers.
- Expertise-leveling — enables new workers to perform tasks that previously demanded specialized expertise.
- New task-creating — creates entirely new human tasks.
Only one is unambiguously pro-worker: new task-creating technologies, which generate demand for novel expertise rather than commodifying existing skills. Automation is unambiguously not pro-worker. The other three can go either way.
The paper identifies why pro-worker AI lags. Firms see higher returns in automation. The AGI fixation at leading labs absorbs talent and capital. Path dependence locks in early choices. And no individual firm bears responsibility for making AI work well for workers, even though it's in everyone's collective interest that it does.
Investment Drives Trajectory
AI’s trajectory follows funding and prestige.
Automation attracts investment because it promises headcount reduction and measurable ROI. Augmentation — AI that extends human judgment and creates new tasks — is harder to quantify, slower to pay off, less legible to the AGI-obsessed research culture. Capital flows toward displacement.
Budgets shape what AI learns to do. Incentive structures shape which problems researchers prioritize. Policy shapes the playing field.
The Question That Matters
Acemoglu, Autor, and Johnson propose nine policy directions: targeted investment in health care and education, building AI expertise within government, DARPA-style competitive prizes, tax code reform to stop favoring capital over labor, antitrust enforcement, mechanisms for worker voice, protections against “expertise theft,” and loosening occupational licensing to let newly empowered experts enter markets.
These make sense once you accept their premise: AI’s current trajectory is contingent. What AI finds hard is largely what researchers deprioritize. What AI automates is largely what investors fund.
What are we building AI to do — and who’s deciding?