Have you ever stopped to wonder why every groundbreaking technology seems to arrive wrapped in wild speculation and easy money? I have. Lately, I’ve been thinking a lot about artificial intelligence—not because I’m scared it will take over the world, but because something far more mundane feels like the actual ticking time bomb: the flood of artificial credit that’s supercharging its growth. It’s easy to get caught up in the headlines about job losses or superintelligent machines, but the real economic hazard might be hiding in plain sight, in the way we’re funding all this progress.
Picture this: trillions of dollars flowing into data centers, chips, and algorithms, much of it borrowed at rates that don’t reflect true savings or real risk. It feels exciting until you remember history. Every major tech wave has had its moment in the sun, fueled by cheap financing, only to face harsh corrections when the music stops. And right now, with interest rates manipulated for years, we’re setting the stage for something big—maybe not apocalyptic, but certainly painful.
The Illusion Created by Cheap Money
Let’s get one thing straight from the start. Artificial intelligence holds incredible promise. It can process information at scales we once only dreamed about, automate tedious tasks, and unlock discoveries in medicine, science, and beyond. But here’s the catch: its development demands massive capital—think tens of billions for factories making cutting-edge chips, sprawling facilities that guzzle power like small cities, and teams of brilliant minds commanding top dollar. That kind of investment doesn’t happen in a vacuum. It needs funding, and the funding environment matters more than most people realize.
In a healthy economy, interest rates act like a thermostat for investment. They signal whether there’s enough real saving to support long-term projects. When rates are low because people are genuinely postponing consumption, that’s sustainable. But when a central authority pushes rates down artificially by expanding credit, things get weird fast. Projects that looked marginal suddenly appear wildly profitable. Entrepreneurs pile in, chasing what seems like easy returns. I’ve seen this pattern repeat across decades, and it rarely ends gently.
How Distorted Rates Fuel Misplaced Bets
Economists of the Austrian school have been warning about this for a long time. Their business cycle theory describes a process where artificially low interest rates trick businesses into starting ventures that can't survive once borrowing costs normalize. It's not that the technology itself is flawed; it's that the capital structure gets warped. Resources get pulled into higher-order goods (like AI infrastructure) too early, before consumer demand can justify it. When reality bites, those long-term projects stall, layoffs follow, and wealth evaporates.
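That distortion is easy to see in a toy discounted-cash-flow calculation. The figures below are entirely hypothetical, a minimal sketch rather than a model of any real project, but they show how the same long-horizon venture can look profitable when credit is cheap and unviable once rates normalize:

```python
# Illustrative only: a long-horizon project's net present value (NPV)
# flips sign as the discount rate normalizes. All numbers are made up.

def npv(rate, cashflows):
    """Discount yearly cash flows (year 0 first) at a constant rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# A stylized infrastructure bet: $100M upfront, $11M/year for 15 years.
project = [-100.0] + [11.0] * 15

cheap_money = npv(0.02, project)  # borrowing near 2%: NPV is positive
normalized = npv(0.08, project)   # borrowing near 8%: NPV turns negative

print(f"NPV at 2%: {cheap_money:+.1f}M")
print(f"NPV at 8%: {normalized:+.1f}M")
```

Nothing about the project changes between the two lines; only the price of credit does. That is the misallocation argument in miniature: suppress the rate and a marginal bet looks like a sure thing.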
Take the current frenzy. Recent reports suggest AI-related companies absorbed roughly half of all global venture funding, with some estimates closer to 60 percent over the past year. That's astonishing. Startups with hazy paths to profitability command billion-dollar valuations based on future hype. Data centers multiply faster than anyone can fill them. Energy grids strain under projected demand that might never fully materialize. It reminds me of the late 1990s, when internet companies raised fortunes on promises alone. Some survived and changed everything; most didn't.
What looks profitable under cheap credit often crumbles when real economic signals return.
— Paraphrased from economic thinkers who studied boom-bust patterns
The danger isn’t the algorithms. It’s the mismatch between the money supply and genuine economic coordination. When credit expands artificially, it creates malinvestment—a fancy term for putting capital where it doesn’t belong long-term. And AI, with its enormous upfront costs and uncertain timelines, is especially prone to this trap.
Signs We’re Already Seeing Trouble
Look around today, and the red flags are waving. Massive sums pour into AI ventures despite many lacking clear revenue models. Valuations soar on narrative more than earnings. Infrastructure expands rapidly—think global data center capacity potentially doubling in just a few years—often justified solely by expected AI growth rather than current usage. Private credit markets, less regulated than traditional banking, have ballooned to finance this buildout, with billions in leveraged loans tied to tech dreams.
- Venture capital flooding AI at unprecedented levels, crowding out other sectors
- Corporate borrowing sprees for data centers and chips, often at stretched terms
- Energy investments pinned almost entirely on projected AI electricity needs
- Private credit funds taking on huge risks with complex structures
- Analysts warning of potential defaults if growth disappoints
These aren’t abstract worries. We’ve watched similar dynamics before. Railroads in the 1800s, radio in the 1920s, dot-coms in the late 1990s, housing before 2008—each time, a transformative technology met loose financing, and the aftermath hurt. The tech usually endured in some form, but the financial wreckage was real: lost savings, bankruptcies, recessions. Why would this time be different when the financing mechanism remains the same?
In my view, the most troubling part is how normalized this has become. Policymakers treat low rates and balance sheet expansion as necessary tools for growth. But what if they’re just postponing pain while inflating bigger problems? Perhaps the most interesting aspect is how little mainstream discussion focuses on the monetary roots of these cycles.
The Role of Central Banking in All This
Let’s talk about the institution at the center of it: the U.S. Federal Reserve. For years it has expanded its holdings dramatically, first after the 2008 financial crisis and again during the pandemic response. Even after some unwinding, its balance sheet remains far above pre-crisis levels. Near-zero rates for extended periods sent a clear message: borrow heavily, invest aggressively, worry about repayment later.
That signal distorts everything. Entrepreneurs see cheap funding and launch ambitious plans. Investors chase yields in riskier assets. Banks and funds extend credit they might not otherwise. The whole capital structure lengthens—more resources devoted to distant-future payoffs. AI fits perfectly into this: massive upfront spending for uncertain long-term rewards. When rates eventually rise or liquidity tightens, those distant payoffs look a lot less attractive.
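The rate sensitivity of a lengthened capital structure can be sketched the same way. In this hypothetical comparison (again, illustrative numbers, not a forecast), a payoff fifteen years out loses far more of its present value from a rate rise than a payoff two years out:

```python
# Illustrative sketch: why longer-dated payoffs are hit hardest when
# rates rise. The magnitudes are hypothetical; the asymmetry is the point.

def present_value(payoff, rate, years):
    """Value today of a single future payoff, discounted at `rate`."""
    return payoff / (1 + rate) ** years

# How much present value evaporates when rates move from 2% to 8%?
pv_drop = {}
for years in (2, 15):
    cheap = present_value(100.0, 0.02, years)
    normal = present_value(100.0, 0.08, years)
    pv_drop[years] = 1 - normal / cheap
    print(f"$100 due in {years:>2} years loses {pv_drop[years]:.0%} of its "
          f"present value when rates rise from 2% to 8%")
```

The near-term payoff shrinks only modestly; the distant one loses more than half its value. Projects built around distant payoffs are exactly the ones a rate normalization punishes.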
Some might argue this is just how progress happens—risky bets, creative destruction, winners emerge stronger. Fair enough. But when the bets are propped up by monopoly control of money rather than voluntary savings, the destruction can be far worse than necessary. Genuine innovation thrives under sound money; artificial booms breed fragility.
Historical Parallels That Should Give Us Pause
History offers plenty of cautionary tales. In the 19th century, railroads promised to revolutionize transport. Easy credit from fractional-reserve banks fueled overbuilding. When reality set in, bankruptcies cascaded. The 1920s stock boom had similar roots—credit expansion masked underlying weaknesses until the crash. The dot-com era saw valuations detach from fundamentals, again on borrowed money. And we all remember 2008: housing seemed like a sure thing until the credit dried up.
Each episode shared common threads: new technology, loose financing, speculative excess, painful correction. The survivors—actual useful innovations—persisted, but society paid dearly for the detour. Today, AI infrastructure looks eerily similar. Hyperscale facilities, specialized hardware, sky-high energy demands—all financed in ways that assume perpetual easy money. If demand doesn’t match the buildout, what then?
- Enthusiasm builds on early successes and cheap capital
- Investment surges beyond sustainable levels
- Reality checks arrive—higher rates, slower adoption, competition
- Correction hits: write-downs, layoffs, financial stress
- Valuable pieces endure; excess disappears
The pattern feels familiar because it is. The difference now is scale. AI’s capital needs dwarf previous waves. A bust here could ripple widely.
Who Really Pays When the Correction Comes?
This brings me to something personal. People worry about automation displacing workers—and rightly so. But the deeper threat to employment isn’t machines learning tasks; it’s capital getting misdirected into unsustainable projects. When those projects falter, companies cut back. Jobs vanish not because AI is too good, but because financing was too loose. Workers bear the brunt while decision-makers higher up often escape unscathed.
I’ve talked to folks in tech who sense the frenzy. They see hiring sprees, lavish campuses, aggressive expansion. But they also see uncertainty—will the promised returns arrive before the bills come due? When corrections hit, anger turns toward “the market” or “technology,” rarely at the policies that distorted incentives in the first place. That’s frustrating, because understanding the root cause could lead to better solutions.
The pain falls on ordinary people, not the architects of monetary distortion.
Perhaps we should ask tougher questions about who controls the money supply and why. A system where politicians and appointees decide how much credit exists invites abuse. It guarantees distortion. A market-based alternative—something anchored in real constraints—would force discipline. Investments would align better with actual demand. Booms might be milder, busts less severe.
Could a Sounder Money System Change the Outcome?
Imagine a world where money isn’t endlessly expandable at whim. Historically, ties to commodities like gold limited credit creation. Banks couldn’t print claims without risking runs. Interest rates reflected real time preferences. Investment required genuine sacrifice somewhere in the economy. Under that discipline, technologies still advanced—think Industrial Revolution—but speculative frenzies were shorter and corrections milder.
AI would still develop, perhaps more steadily. Capital would flow where genuine productivity gains were clearest. Less waste, fewer ghost data centers. The technology’s transformative power wouldn’t depend on artificial props. That sounds idealistic, maybe, but it highlights how much our current setup amplifies risks.
I’m not saying abandon progress or fear innovation. Far from it. AI could reshape life for the better. But let’s be honest about what’s driving the pace and scale right now. It’s not pure market forces. It’s a monetary environment that’s anything but neutral. Ignoring that invites trouble we don’t need.
Final Thoughts: Time to Refocus the Conversation
So here we are. Artificial intelligence marches forward, dazzling and disruptive. Meanwhile, the financial plumbing underneath grows strained. The next few years will tell us a lot. If growth meets expectations, great—much of the investment pays off. If not, the fallout could be substantial: tighter credit, reevaluations, economic slowdowns. Workers, savers, businesses—all feel it.
My take? Stop obsessing solely over AI risks like job displacement or existential threats. Start paying attention to the money. How it’s created, who controls it, how it shapes investment. Because in the end, the greatest danger isn’t machines thinking too fast. It’s humans distorting prices so badly that even the best ideas get built on shaky foundations.
We’ve done this dance before. Maybe this time we can learn something. Or maybe we’ll repeat the cycle, blaming everything except the root cause. I hope for the former. History, though, suggests caution.