When the Music Stops
The AI boom rests on a contested premise: that scaling alone leads to intelligence. This paper examines the technical, financial, and human costs when that assumption meets reality.
Core Insight
A growing body of credible critique from researchers including Yann LeCun, Gary Marcus, Rodney Brooks, and Francois Chollet has identified fundamental architectural limitations in the current scaling paradigm. Meanwhile, the financial structure of the AI boom exhibits concentration risks and capex-to-revenue gaps historically consistent with asset bubbles. If a correction comes, its costs will fall disproportionately on those with the least influence over the decisions that created the exposure.
Abstract
The dominant narrative in artificial intelligence development rests on a single thesis: that scaling large language models by increasing their parameters, data, and compute will reliably yield increasingly intelligent systems, ultimately approaching or achieving artificial general intelligence. Hundreds of billions of dollars have been invested in this assumption. This paper examines that thesis from three perspectives: technical, financial, and distributional. It surveys a growing body of credible critique from researchers, including Yann LeCun, Gary Marcus, Rodney Brooks, and Francois Chollet, who have identified fundamental architectural limitations in the current scaling paradigm. It analyzes the financial structure of the AI investment boom, identifying concentration risks, debt dependencies, and capex-to-revenue gaps that are historically consistent with asset bubbles. It also traces the distributional consequences of market corrections, showing that costs fall disproportionately on retirement accounts, regional economies, displaced workers, and democratic institutions. The paper concludes by proposing Augmented Human Intelligence (AHI) as a directional alternative: modular, biologically inspired architectures designed to enhance human judgment rather than replace it.
How This Fits
This paper connects the architectural arguments developed across the series to their real-world financial and human consequences, making the case for Augmented Human Intelligence as a directional alternative to the scaling-at-all-costs paradigm.