I still remember the first time I watched The Big Short. There’s that moment when Steve Carell’s character – the on-screen version of Steve Eisman – finally sees the fraud buried inside the housing market, and you can almost feel the air leave the room. Fast forward to this week, and the real Steve Eisman just gave me that same chill, except this time the topic wasn’t subprime mortgages. It was artificial intelligence.
He didn’t scream fire. He didn’t dump his stocks and run for the hills. But in a calm, almost casual television appearance, he floated an idea that cuts straight to the heart of the entire AI trade. And honestly? I haven’t been able to stop thinking about it since.
The One Assumption Holding Up the Entire AI Palace
Here’s the dirty little secret nobody on Wall Street wants to say out loud: the whole magnificent AI story rests on a single foundational belief. That belief? Bigger models will keep delivering dramatically better performance, forever. More parameters, more data, more chips – and voilà, something magically closer to true artificial general intelligence appears.
That assumption is why data-center spending is measured in hundreds of billions. It’s why chip companies trade at forty, fifty, sixty times earnings. It’s why every earnings call sounds like a religious revival meeting. But Eisman just asked the question everybody has been too afraid – or too greedy – to ask:
What if that assumption is wrong?
The Quiet Theory That Keeps Eisman Up at Night
Over the past few months, a growing chorus of researchers has been making an uncomfortable argument: the famous “scaling laws” that have driven progress in large language models might be hitting diminishing returns sooner than anyone expected.
In plain English? Throw ten times more compute and data at today’s models, and you might not get ten times the performance improvement. You might get two times. Or one and a half. Or, eventually, almost nothing at all.
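To see why that matters, here’s a toy sketch of the arithmetic, assuming performance follows a simple power law in compute. Every constant below is invented for illustration; none of it comes from a published scaling-law paper or from Eisman himself.

```python
# Toy power-law scaling curve: loss = a * compute**(-b).
# All constants are hypothetical, chosen only to make the
# diminishing-returns pattern easy to see.

def loss(compute: float, a: float = 10.0, b: float = 0.05) -> float:
    """Hypothetical scaling law: lower loss is better."""
    return a * compute ** -b

base = 1e21  # arbitrary baseline compute budget (FLOPs)
prev = loss(base)
print(f"    1x compute -> loss {prev:.3f}")
for multiplier in (10, 100, 1000):
    cur = loss(base * multiplier)
    # Each additional 10x of compute buys a smaller absolute improvement.
    print(f"{multiplier:>5}x compute -> loss {cur:.3f} "
          f"(gain vs previous step: {prev - cur:.3f})")
    prev = cur
```

Under this made-up curve, spending 1,000 times more compute cuts loss by less than a third. Ten times the inputs never comes close to ten times the output, and each step costs an order of magnitude more than the last.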
“The large language models, as they keep scaling, which is the model that everybody has, will start to lose their efficaciousness. The improvement is gonna slow as opposed to increase.”
– Steve Eisman, December 2025
That single sentence is dynamite disguised as a shrug.
Why This Isn’t Just Academic Noise
Think about what happens if he’s even half right. Microsoft, Meta, Google, Amazon – every hyperscaler on earth – has been racing to lock up GPU supply for the next three to five years. They’re building power plants, literally, to feed the coming generation of models.
If the performance curve suddenly flattens, the economic justification for that capital expenditure vanishes overnight. Fewer chips ordered. Lower pricing power. Margins that looked bulletproof suddenly start bleeding.
And the ripple effects don’t stop at the semiconductor industry. Cloud margins, software valuations, infrastructure funds – everything that has been priced for uninterrupted exponential growth gets re-rated in a heartbeat.
The Mortgage Crisis Parallel That Made My Blood Run Cold
Eisman drew the comparison himself, and it’s terrifyingly apt. Back in 2006–2007, the entire fixed-income market rested on one unspoken assumption: U.S. house prices could not fall nationally. Not ever. Once that assumption cracked, the whole tower came down with stunning speed.
Today’s AI complex has its own version of “house prices only go up.” It’s “model performance improves proportionally with scale – forever.” Pull that brick out, and the edifice wobbles.
- Trillion-dollar market caps built on straight-line extrapolation
- Hundreds of billions in committed capex
- ETF flows that assume the party never ends
- Retail investors who think this time really is different
Sound familiar?
But He’s Still Long – For Now
Here’s the part that makes this even more fascinating. Eisman isn’t shorting the trade. He still owns the big names – the chip giant, the cloud titan, the social-media conglomerate turned AI lab. He isn’t ringing alarm bells from the rooftops.
He’s doing something far more unsettling. He’s watching. Waiting. Stress-testing the foundational story in real time.
In my experience, that’s exactly how the smartest investors behave at major inflection points. They don’t need to be first. They just need to be right – and not too early.
Early Warning Signs Already Flickering
Look closely and you can spot cracks forming, even if most people refuse to see them:
- Some frontier labs quietly shifting rhetoric from “scale is all you need” toward architecture breakthroughs and post-training enhancements
- Growing complaints about data quality walls – the internet has been scraped clean
- Synthetic data experiments that sometimes make models worse, not better
- Benchmark saturation where new models barely nudge scores upward despite massive compute
None of these prove the scaling hypothesis is dead. But they’re yellow lights. And yellow lights have a way of turning red when nobody’s paying attention.
What History Teaches Us About “This Time It’s Different”
I’ve been around long enough to hear that phrase before. Dot-com era. Housing. Crypto in 2021. Every single time, the story felt bulletproof right up until it wasn’t.
The difference now? The numbers are bigger, the players are more sophisticated, and the societal stakes are arguably higher. AI isn’t just another sector. It’s become the central investment narrative of our generation.
That’s what makes Eisman’s quiet warning so powerful. When the guy who spotted the last “sure thing” turning into the biggest short in history starts asking uncomfortable questions about the new sure thing… well, maybe it’s time to listen.
How Investors Can Protect Themselves
None of this means you need to sell everything tomorrow morning. But it does mean asking harder questions:
- Are you being paid to take concentration risk in a handful of names?
- Have you stress-tested what happens to valuations if capex growth slows to 20% instead of 80%? (A rough sketch of that math follows below.)
- Do you have exposure to companies that benefit whether scaling works or not (picks-and-shovels that are truly essential)?
- Are you watching leading indicators like inference demand, enterprise ROI studies, and academic papers – not just earnings calls?
Perhaps most importantly: do you have a plan for what you’ll do if the foundational assumption actually breaks?
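On that stress-testing point, here’s a deliberately crude back-of-the-envelope sketch. It assumes a supplier whose revenue simply tracks hyperscaler capex; the base revenue, 50% margin, and 40x multiple are all invented numbers, not estimates for any real company.

```python
# Crude valuation stress test. All inputs are hypothetical and exist
# only to show how sensitive a rich multiple is to the growth assumption.

def final_year_revenue(base: float, growth: float, years: int) -> float:
    """Revenue in the final year, compounding at a constant growth rate."""
    return base * (1 + growth) ** years

base_revenue = 100.0  # arbitrary units
net_margin = 0.50     # assumed margin
pe_multiple = 40      # the kind of multiple the market is paying today

for growth in (0.80, 0.20):  # the two capex scenarios from the question above
    revenue = final_year_revenue(base_revenue, growth, years=3)
    implied_value = revenue * net_margin * pe_multiple
    print(f"capex growth {growth:.0%}: year-3 revenue {revenue:.1f}, "
          f"implied value {implied_value:,.0f}")
```

With these made-up inputs, the 20% scenario implies a value roughly 70% lower than the 80% scenario. The point isn’t the specific numbers; it’s that when growth is the whole thesis, a single changed assumption does most of the damage.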
The Bottom Line
We might look back at December 2025 as the month the first real crack appeared in the AI narrative. Or we might look back and laugh at how close we came to panic for no reason.
Either way, Steve Eisman just did us all a favor. He reminded us that no trend is inevitable. No assumption is sacred. And no matter how exciting the future looks, it’s always built on foundations that can shift when we least expect it.
In a market that’s forgotten what doubt feels like, a little doubt might be the healthiest thing of all.
Stay vigilant out there.