
When the Music Stops

The Structural Fragility of the AI Boom


This is the second in a three-part series examining the cracks in the AI scaling narrative, from the technical limits to the financial fragility to who bears the cost when the correction comes.

Michael Burry, the investor made famous by The Big Short for calling the 2008 housing crash, is shorting Nvidia and Palantir. He might be early. He might be wrong. But the fact that serious money is now betting against the AI boom tells us something worth examining.

This isn’t a crash prediction. It’s a structural analysis. And the structure, when you look at it clearly, is more fragile than the headlines suggest.

The Bet That Built This Boom

The AI investment thesis rests on a chain of assumptions: scaling works, AGI is near, winner-take-all dynamics will reward the leaders, and therefore almost any level of investment is justified.

Hundreds of billions have been deployed on this logic. OpenAI’s valuation assumes it will capture a significant share of all future knowledge work. Nvidia’s market cap assumes AI infrastructure spending will continue accelerating for years. The hyperscalers (Microsoft, Google, Amazon, Meta) are spending at rates that only make sense if AI transforms their core businesses.

The bet is that training costs are front-loaded and inference revenue will follow. Build the models now, monetize them later. It’s a reasonable bet. But it is a bet.

The bet is also being reinforced from outside the market. The CHIPS Act, export controls on advanced semiconductors to China, and bipartisan rhetoric framing AI as a strategic national asset have created a policy environment that backstops spending regardless of near-term ROI. When national security and economic competition merge with a technology thesis, capital flows become stickier and harder to redirect, even when the underlying assumptions weaken.

The Cracks in the Thesis

The scaling hypothesis, the idea that more data and compute reliably yield smarter models, is under pressure. As I explored in my [previous post on the scaling skeptics], serious researchers are now questioning whether current architectures can reach general intelligence at any scale. Diminishing returns have arrived faster than expected.

Meanwhile, enterprise ROI isn’t materializing the way vendors promised. MIT’s 2025 “GenAI Divide” study found that 95% of enterprise AI pilots deliver zero measurable impact on the bottom line; only 5% reach production. S&P Global reported that 42% of companies scrapped most of their AI initiatives in 2025, up from just 17% the year before. The pattern is familiar to anyone who’s watched technology adoption cycles: impressive demos, difficult deployment, unclear value.

The revenue picture is also lopsided. Nvidia is selling shovels in a gold rush, and making historic profits doing it. But who’s finding gold? OpenAI reportedly lost $5 billion in 2024, and the trajectory is worsening: Microsoft’s SEC filings reveal that OpenAI lost approximately $12 billion in a single quarter in 2025, against roughly $4.3 billion in revenue for the entire first half of the year. Most AI startups are burning capital with no clear path to profitability. The infrastructure providers are thriving; the companies building on that infrastructure mostly aren’t.
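To make the scale of that mismatch concrete, here is a back-of-envelope sketch using only the figures cited above. The even split of H1 revenue across two quarters is my simplifying assumption, and the result is a rough ratio, not a forecast.

```python
# Back-of-envelope burn arithmetic from the reported figures.
h1_revenue_b = 4.3        # reported H1 2025 revenue, in $B
quarterly_loss_b = 12.0   # reported single-quarter loss, in $B

# Assume revenue was spread evenly across the two quarters.
avg_quarterly_revenue_b = h1_revenue_b / 2

# Dollars lost for every dollar of revenue in that quarter.
loss_per_revenue_dollar = quarterly_loss_b / avg_quarterly_revenue_b
print(f"~${loss_per_revenue_dollar:.1f} lost per $1 of revenue")
```

On those numbers, the company is losing on the order of five to six dollars for every dollar it brings in, which is the shape of the gap between infrastructure profits and application-layer losses.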

There’s a persistent gap between what AI can do in a demo and what it can do in production. Benchmarks improve; reliability doesn’t. Hallucinations persist. Enterprise customers discover that “90% accurate” isn’t good enough for mission-critical workflows.

There’s also a competitive assumption baked into these valuations that may not hold. The investment thesis requires winner-take-all dynamics, but the market is showing signs of fragmentation. Open-source models are closing the performance gap with closed-source leaders. Inference costs are dropping fast. Enterprises are increasingly drawn to smaller, specialized, fine-tuned models rather than monolithic general-purpose ones. If the market fragments rather than concentrates, valuations built on “capturing all future knowledge work” start to look very different.

The technical skeptics I profiled in last week’s piece aren’t just academics arguing about architecture. Their critiques have financial consequences. If LeCun and Chollet are right that brute-force scaling won’t yield general intelligence, then the entire investment thesis, which requires something approaching AGI to justify current spending, rests on a technical error. The market is pricing in a future that may be architecturally impossible.

The Structural Fragility

Zoom out and the concentration risk becomes visible.

The Magnificent Seven tech companies increased their energy consumption by 19% in 2023 while the median S&P 500 company’s consumption stayed flat. Roughly 80% of U.S. stock market gains in 2025 were tied to AI-related companies. That’s not a broad-based technology boom; it’s a narrow bet by the entire market on a single thesis.

JPMorgan estimates that AI-related investment-grade bond issuance could reach $1.5 trillion by 2030. Much of this debt is predicated on productivity gains that may or may not materialize. If the gains don’t come, the debt doesn’t disappear. And the infrastructure itself has hard physical limits: gas turbines require three- to four-year lead times, and new nuclear capacity takes a decade or more. Capital can move fast; power plants can’t.

The question isn’t whether AI is useful; it clearly is, in specific applications. The question is whether the valuations, the infrastructure spending, and the debt levels are proportionate to the actual value being created. Right now, a lot of capital is chasing returns that require the scaling hypothesis to hold, enterprise adoption to accelerate, and competitive moats to emerge. All three are uncertain.

Historical Parallels (And Their Limits)

We’ve seen this pattern before.

The dot-com boom left behind real infrastructure: fiber optic cables, data centers, a generation of internet-native companies. Most of the companies that raised money during the bubble failed, but the technology was real and eventually transformed the economy.

The crypto boom left behind less. Blockchain has use cases, but the speculative frenzy produced more fraud than lasting value.

AI will probably land somewhere in between. The technology trajectory resembles the dot-com era: real and ultimately transformative. But the valuation structure currently looks more like crypto: speculative and decoupled from cash flow. The technology is real. Transformer architectures, large language models, diffusion models: these are genuine innovations with genuine applications. But genuine innovation doesn’t guarantee that current valuations are rational, that current market leaders will dominate long-term, or that the timeline to profitability is what investors are pricing in.

Useful and overhyped are not mutually exclusive.

What I’m Watching

A few signals matter more than headlines:

Enterprise adoption vs. churn. Are companies moving from pilots to production, or quietly shelving experiments? Renewal rates will tell the real story.

Hyperscaler capex. When Microsoft, Google, or Amazon start trimming AI infrastructure spending, the narrative will shift fast. They have better visibility into actual demand than anyone.

Regulatory attention. As valuations and market concentration grow, scrutiny from antitrust bodies (FTC, DOJ, EU) or financial regulators (SEC) could reshape both the narrative and the business models.

The language shift. Listen for when “AGI in two years” becomes “useful tools that augment workflows.” The rhetoric is a leading indicator. When the people selling the dream start hedging, pay attention.


I’m not predicting a crash date. Markets can stay irrational longer than skeptics can stay solvent; Burry himself has learned that lesson more than once.

But I’ve spent 25 years watching technology hype cycles, and I know what structural fragility looks like. This is it. A narrow set of companies, a contested thesis, valuations that require optimistic assumptions to hold, and a growing chorus of informed skeptics.

The music might keep playing. But it’s worth understanding the warning signs, and where the exits are.


Next week: Financial fragility makes headlines, but the deeper question is who bears the cost when the correction comes. In “After the Music Stops,” I look beyond investor losses to the workers, communities, and democratic institutions caught in the undertow of a technology transition that was never designed with them in mind.


Originally published on Substack.