On the most recent episode of Hard Fork, Casey Newton and Kevin Roose were discussing the data center boom. They had some good points, but also made what I believe to be a crucial error:

Casey: “…no one has the resources they need to serve the demand for AI that they have. (…) We're still seeing a lot of skepticism, there's so much bubble talk, [and] I just want to posit that as a really important point in understanding what sort of bubble this is, because even the biggest companies do not have the resources that they need to serve that demand.”

Kevin: “Yeah, and I think that's a good point. And it's really a profound shift in the way that skeptics have been talking about this AI boom. I remember just even a couple of months ago, the leading sort of strain of criticism was that these AI companies would never be able to generate the demand to pay for all of the expensive data centers and infrastructure projects they wanted to do. And now that's shifted to, well, there's so much demand. What if they can't build enough to support the demand they have?”

Newton elaborated on this point in a post last week, “We may now know what kind of AI bubble this is (Think railroads, not crypto).” The railroad analogy isn’t a bad one, but I think it’s worth dwelling on for a bit. I have three notes (all of which are sort of a remix of a 2025 piece titled “It’s Giving Enron.”):

(1) The AI bubble is, indeed, not like the crypto bubble. I agree with Casey on this point. The fundamental value of bitcoin is basically zero. There are interesting reasons why bitcoin hasn’t fallen to zero (and likely will not anytime soon), but all of those reasons are variations on the theme of “lolwtf the entire economy is just gambling and speculation now.” Bitcoin isn’t valuable because it has uses that generate economic value. It’s valuable because investors have agreed to attach abstract value to it as a speculative asset.

AI has several uses that generate economic value. The industry as a whole is a cash furnace, but while the too-simple explanation of crypto is “what if tulip bulbs formed the basis of a multitrillion-dollar speculative economy?”, the too-simple explanation of AI is “what if we introduced machine learning techniques into everything everywhere all the time?” Both are bubbles, but Casey is correct that they’re materially different sorts of bubble.

(2) Railroads are a decent analogy. But broadband and the 2008 housing crisis are closer to home.

Newton writes:

“a relatively clear story is coming into focus. It is not a momentary-enthusiasm-deflating-once-reality-sets-in kind of bubble. Instead, it’s looking more and more like a railroad bubble: one where massive overbuilding and speculation may lead to periodic crashes and flameouts, but also one where the resulting product transforms commerce and productivity. (…) There’s value to be found in there somewhere, but for the moment the playbooks are still being written.” 

He has me in the first half. Yes, we’re looking at massive overbuilding and speculation, likely followed by a crash and flameouts. Yes, real underlying products are likely to emerge and flourish after the crash. The analogy I’ve leaned on in the past is the Web 1.0 broadband buildout-and-crash (It’s giving Gilder). I still suspect that some of the exceptionally fancy accounting we’re seeing today will, after the crash, be cast as at least borderline fraudulent (hence, it’s also giving Enron).

But the outstanding question is just how economically transformative the underlying product turns out to be. Railroads pretty completely transformed the economy. Broadband is no flash in the pan, but it is smaller by comparison. The underlying question isn’t whether there’s value in there somewhere (yes, there is), but how much value is in there, relative to the debt levels.

And that’s what brings the 2008 housing crisis to mind. The great financial crisis nearly cratered the entire global economy. That wasn’t because the underlying asset class had no intrinsic value. The underlying asset class was houses! People live in those! We still don’t have enough of them today! But the problem was that they financialized the hell out of the housing market, and created absurd subsidies for unreliable home loans, just so they could spin up and sell shoddy financial products. If you build a financialized Rube Goldberg machine on top of a limited-but-valuable product, things can go very, very wrong.

(3) And we are still in the free-trial period of most AI products. Everything is being subsidized. That renders a false picture of the demand curve.

The bit from Hard Fork that really set me off was when Kevin Roose dismissed critics as saying “these AI companies would never be able to generate the demand to pay for all of the expensive data centers and infrastructure projects they wanted to do. And now [the critics have] shifted to, well, there's so much demand. What if they can't build enough to support the demand they have?”

Roose is conflating two different types of demand here. He ought to know better.

Here are two statements that can simultaneously be true: (a) the AI industry faces extraordinary and growing demand for compute. It does not have enough data centers to satisfy this demand. (b) AI users pay for practically none of that compute, and would stop using AI if they had to pay the direct costs.

Earlier this semester, one of my advisees told me about one of his Claude Code projects. He asked Claude to build him a study guide with flash cards to help him prepare for a biology exam. I am generally an edtech skeptic, but this seems fine to me. Creative, even. He isn’t having Claude do the reading or the writing or the thinking for him. He’s just vibe-coding a test prep assistant to help with rote memorization tasks. Good for him.
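For what it’s worth, here’s roughly what such a tool amounts to. This is a minimal sketch of a plain command-line flash-card quiz; the deck contents, the self-scoring loop, and everything else here are my own illustration, not his actual Claude Code project:

```python
# A minimal sketch of the kind of flash-card quiz the student described.
# The deck contents and scoring are illustrative assumptions, not his
# actual project.
import random

# Hypothetical biology deck: term -> definition.
DECK = {
    "mitochondrion": "organelle that produces ATP via cellular respiration",
    "osmosis": "diffusion of water across a semipermeable membrane",
    "ribosome": "molecular machine that translates mRNA into protein",
}


def quiz(deck: dict[str, str]) -> None:
    """Shuffle the deck, prompt on each term, and tally self-reported hits."""
    cards = list(deck.items())
    random.shuffle(cards)
    score = 0
    for term, definition in cards:
        input(f"Define: {term} (press Enter to reveal) ")
        print(f"  -> {definition}")
        if input("  Did you get it? [y/n] ").strip().lower() == "y":
            score += 1
    print(f"Score: {score}/{len(cards)}")


if __name__ == "__main__":
    quiz(DECK)
```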

How computationally intensive was his little vibe-coding project? I don’t know. Neither does he. He doesn’t need to.

The current deal (circa two months ago) is that my student pays $20/month to Anthropic, and Anthropic lets him build little things like this. My student doesn’t have to worry about how computationally intensive the exercise is. The cost of computation is borne by Anthropic, not the student.

If it had cost my student $500 to build that study guide, he would have just written his own damn flash cards. If it cost him an extra $20 on top of the monthly subscription, he might have gone the index card route as well. When the demand for compute is subsidized by the same AI industry that is trying to raise fresh rounds of investment (based on a pitch deck that highlights skyrocketing computational demand), we end up with a heavily distorted picture of the demand curve.
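To see how a flat subscription distorts the picture, here’s a back-of-envelope sketch. Every number in it is a made-up assumption (I don’t know Anthropic’s actual per-session compute costs); the point is the shape of the arithmetic, not the specific figures:

```python
# Back-of-envelope sketch of the subsidy argument. All figures are
# assumptions for illustration, not Anthropic's actual costs or pricing.
SUBSCRIPTION = 20.00        # flat monthly fee the student pays ($)
COST_PER_SESSION = 3.50     # assumed compute cost of one vibe-coding session ($)
SESSIONS_PER_MONTH = 25     # assumed usage under a flat, all-you-can-eat fee

compute_consumed = COST_PER_SESSION * SESSIONS_PER_MONTH   # $87.50
provider_subsidy = compute_consumed - SUBSCRIPTION         # $67.50

print(f"User pays:        ${SUBSCRIPTION:.2f}")
print(f"Compute consumed: ${compute_consumed:.2f}")
print(f"Provider eats:    ${provider_subsidy:.2f}")
# Under these assumptions, the observed "demand" for compute reflects the
# subsidy, not the user's willingness to pay the full sticker price.
```

Scale that gap across tens of millions of subscribers and you get data centers running hot on demand that nobody has actually priced.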

Could this vibe-coding project be more computationally efficient? Almost certainly. This is the ongoing threat that DeepSeek poses to the AI industry. DeepSeek’s model can do basically everything Claude does, at 1/6th the cost. With the exception of DeepSeek, the AI industry has taken a scale-at-all-costs approach.

One of the major reasons why existing data centers cannot keep up with the computational demand is that the AI industry is not the least bit interested in pursuing efficiencies that would make their products less computationally demanding. They’ve constructed a sort of financial flywheel, where they subsidize computation-heavy behavior from users, then report back to investors about the soaring product demand and use it to justify the next investment round. If they built more efficient AI models, then it would paradoxically become harder to attract the next trillion dollars of investor cash.

This industry-wide subsidy system can’t go on forever. The money, eventually, has to come from somewhere. The Verge’s Hayden Field reported a couple weeks ago on the “AI monetization cliff” that seems to be approaching. “After years of offering cheap or totally free access to advanced AI systems, the bill is starting to come due — and downstream, users are beginning to feel the pinch.” Field is right about the current trends. But we shouldn’t be surprised if the AI companies balk and decide to hold off on profitability for just a little longer (aka indefinitely).

Back in 2022, I wrote a piece for The Atlantic titled “Money Will Kill ChatGPT’s Magic.” I think that piece still holds up quite well. The trajectory of the AI industry is going to bend toward money, and so long as we are in the free (or cheap) trial period, we will have a heavily distorted view of what sort of AI products are actually profitable. All the major AI companies are reporting massive growth in their subscription plans, while also facing the obvious problem that those subscriptions are revenue-negative. It has been three and a half years, and the industry is still dramatically subsidizing its products and then bragging about all the demand.

All of which is to say that AI companies have not generated the demand to pay for the expensive data centers. They’ve generated and subsidized the demand for compute, and then used that computational demand to make a case for further AI infrastructure investment.

The crash occurs when we find out how much consumers value AI when they have to pay the full sticker price for the unsubsidized products. If it ends up that we merely have an oversupply of the world’s most valuable product, then a few companies go bankrupt while the global economy roars forward. But if it turns out that the financial flywheel massively overstated how much economic value we get from AI, then all those trillions the banks and governments handed over to Sam and Elon might capsize absolutely everything.

I think it’s a mistake to pretend that AI is completely useless. But it’s an even bigger mistake to treat the subsidized compute demand as proof that a once-in-a-century economic transformation is currently underway.
