⚡🏭 OpenAI Needs Two Hoover Dams Worth of Power for Worse Models
Anthropic turning to the Middle East, and "enriching dictators"

There's something curious about OpenAI discontinuing GPT-4.5, their largest model, because it was too expensive to run and didn't deliver better results, while announcing a $30 billion annual cloud infrastructure deal with Oracle. It's like watching someone sell their mansion because it's too big to heat, then immediately sign a lease on a power plant.

But this apparent contradiction tells us everything about where AI is actually heading.
Scaling Laws Are Dead, Long Live Scaling
The conventional wisdom about AI development goes something like this: companies keep building bigger and bigger models, requiring ever more massive data centers to train and serve them. This narrative feels intuitive: more parameters, more compute, more intelligence. The problem is it's increasingly wrong.
What's actually happening is far different. The era of simply making models bigger is over, not because we've hit some theoretical limit, but because it stopped being the most effective way to improve performance.
Instead, AI companies have discovered three much more expensive ways to make their models better:
First, they're using highly specialized experts to review and rank model outputs, creating feedback loops that help models generate better responses through reinforcement learning. This isn't the crude "thumbs up, thumbs down" approach of early RLHF. We're talking about PhD-level experts providing detailed evaluations that cost orders of magnitude more than traditional training data.
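To make that concrete, here's a toy sketch of the pairwise-preference training this kind of expert feedback feeds into: a small reward model learns to score whichever output the expert preferred above the one they rejected. Everything here, the linear model, the synthetic "experts," the sizes, is invented for illustration; production reward models are fine-tuned LLMs trained at vastly larger scale.

```python
import math
import random

# Toy reward model: scores an output's feature vector with a linear function.
# Features, data, and dimensions are invented purely for illustration;
# a real reward model is itself a large fine-tuned network.
DIM = 4
weights = [0.0] * DIM

def reward(features):
    """Scalar score for one candidate output."""
    return sum(w * x for w, x in zip(weights, features))

def train_on_preference(chosen, rejected, lr=0.1):
    """One gradient step on the Bradley-Terry pairwise loss:
    -log(sigmoid(reward(chosen) - reward(rejected)))."""
    margin = reward(chosen) - reward(rejected)
    p = 1.0 / (1.0 + math.exp(-margin))  # P(expert prefers `chosen`)
    for i in range(DIM):
        weights[i] += lr * (1.0 - p) * (chosen[i] - rejected[i])

# Synthetic "expert rankings": outputs with a higher first feature win.
random.seed(0)
for _ in range(1000):
    a = [random.gauss(0, 1) for _ in range(DIM)]
    b = [random.gauss(0, 1) for _ in range(DIM)]
    chosen, rejected = (a, b) if a[0] > b[0] else (b, a)
    train_on_preference(chosen, rejected)

print(weights)  # weight 0 dominates: the model has learned the experts' taste
```

The expensive part isn't this loop; it's producing the (chosen, rejected) pairs, which is exactly what those PhD-level reviewers are paid for.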

Second, they're feeding models vastly more context for each query: meeting transcripts, entire company file systems, comprehensive web searches across multiple topics. A single query might now consume what used to be an entire conversation's worth of input tokens.
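A back-of-the-envelope sketch of what that does to a single query's input bill; every per-source count below is an illustrative guess, not a measurement:

```python
# Rough input-token budget for one context-heavy query.
# All figures are illustrative assumptions (roughly 4 characters per token).
context_sources = {
    "system prompt + tool definitions":  2_000,
    "meeting transcript (1 hour)":      12_000,
    "retrieved company documents":      40_000,
    "web searches across topics":       25_000,
    "conversation history":              6_000,
    "the user's actual question":          100,
}

total = sum(context_sources.values())
for source, tokens in context_sources.items():
    print(f"{source:<36} {tokens:>8,}")
print(f"{'total input tokens':<36} {total:>8,}")
# ~85,000 input tokens to answer one question: what used to be an
# entire conversation's budget is now a single query's context.
```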

Third, instead of just asking models for answers, they're having them think through problems step by step, generate solutions, then evaluate whether those solutions are good. This "reasoning" approach produces better outputs but can consume 10x more tokens per query than the old approach.
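In sketch form, that loop looks something like the following. `llm` is a hypothetical stand-in for a real completion API, and the prompts, candidate count, and scoring format are illustrative choices, not any lab's actual recipe:

```python
# Sketch of a reason -> propose -> self-evaluate loop.
# `llm` is a hypothetical placeholder; wire it to a real completion API.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real model call here")

def solve_with_reasoning(problem: str, n_candidates: int = 5) -> str:
    # Generate several independent chains of thought, each a full
    # completion's worth of tokens.
    candidates = []
    for _ in range(n_candidates):
        reasoning = llm(f"Think step by step about: {problem}")
        answer = llm(f"Given this reasoning:\n{reasoning}\nState the final answer.")
        candidates.append(answer)

    # A second round of model calls just to judge the first round.
    scored = []
    for answer in candidates:
        verdict = llm(
            f"Problem: {problem}\nProposed answer: {answer}\n"
            "Rate correctness from 0 to 10. Reply with only the number."
        )
        scored.append((float(verdict), answer))

    # 5 candidates x 2 calls each, plus 5 judging calls: ~15 model calls
    # per question, versus 1 for a plain completion.
    best_score, best_answer = max(scored)
    return best_answer
```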

The result? Google has reported a 50x growth in token consumption, to 480 trillion tokens per month, and every query is consuming roughly 10x more tokens than it did a year ago. So while models aren't getting bigger, the infrastructure needed to run them is exploding.
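Decomposing those figures shows where the growth comes from: if each query burns roughly 10x the tokens, the remaining factor has to be query volume. A quick back-of-envelope (the split is simple arithmetic on the reported numbers, not a separate statistic):

```python
tokens_now = 480e12   # 480 trillion tokens/month (Google's reported figure)
growth = 50           # reported growth in monthly token consumption

per_query_growth = 10                             # tokens per query, per above
query_volume_growth = growth / per_query_growth   # implied ~5x more queries

print(f"a year ago: {tokens_now / growth:.1e} tokens/month")
print(f"implied: ~{query_volume_growth:.0f}x more queries, "
      f"each using ~{per_query_growth}x more tokens")
# Flat model sizes, but a 50x heavier serving load.
```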

This is why OpenAI needs 4.5 gigawatts of additional capacity, roughly the output of two Hoover Dams. Sam Altman called it "a gigantic infrastructure project," and for once, the hyperbole feels justified.
The Reluctant Turn to Sovereign Wealth
Of course, building power plant-sized data centers requires power plant-sized capital, which brings us to perhaps the most fascinating subplot in this story: AI companies' increasingly uncomfortable relationship with Middle Eastern money.
A leaked memo from Anthropic CEO Dario Amodei reveals the quiet desperation behind these funding decisions. Amodei acknowledged that accepting money from the UAE and Qatar would likely enrich "dictators," writing "This is a real downside and I'm not thrilled about it." But he also noted there's "a truly giant amount of capital in the Middle East, easily $100B or more" and "Without it, it is substantially harder to stay on the frontier."

This isn't exactly a ringing endorsement of geopolitical diversification. It reads more like someone calculating the cost of their principles against the cost of falling behind in the AI race.
OpenAI has already gone down this path, partnering with MGX, a state-owned Emirati investment firm, for their Stargate project and announcing plans for a data center in Abu Dhabi. The irony here is thick: Anthropic previously criticized these very deals, but now finds itself facing the same brutal math.

It's easy to say Middle East bad, America good, and plenty of press coverage will. But thinking a little more critically, these deals might actually be a net positive for Western interests.
As Amodei noted, the alternative is "pushing them into the arms of China." Better to have the UAE funding American AI development than Chinese AI development. It's realpolitik dressed up as venture capital, but it's probably the right strategic choice.
Oracle's Unexpected Moment
Meanwhile, Oracle finds itself in an enviable position as the only major cloud provider without a dog in the AI model fight. While AWS, Google Cloud, and Microsoft Azure all have their own frontier AI efforts, Oracle can position itself as the neutral Switzerland of AI infrastructure.
This matters more than it might seem. OpenAI doesn't want to spend more with Google, AWS, or Microsoft than necessary. Every dollar spent with a competitor is a dollar that could fund that competitor's own AI development.
Oracle has capitalized handsomely on this positioning. Their stock is up 68% in the last year and 329% over five years.

The Human Feedback Gold Rush
The explosion in companies providing human evaluation services shows just how expensive collecting high-quality training data has become. Surge AI has hit $1 billion in annual recurring revenue while remaining bootstrapped. Scale AI is at $850 million ARR. Turing, Handshake, and Labelbox are all seeing massive growth, collectively representing hundreds of millions in revenue from what used to be a tiny niche market.

This helps explain why Anthropic is turning to Middle Eastern money: it's not just the infrastructure requirements, but that every aspect of frontier AI development has become vastly more expensive. (Though it's worth noting that human training data costs remain a relatively small part of the equation, even smaller than employee salaries, and far smaller than the infrastructure.)
Edwin Chen, CEO of Surge AI, recently made his first podcast appearance and delivered a pointed critique of the industry's infrastructure obsession. He argued that data quality is paramount, even over raw compute power, and that "simply throwing more compute at it will not work if the underlying data is flawed."

Chen's thesis is that many frontier labs spent the past year training models on synthetic data, only to realize it caused their models to "collapse on this very, very narrow scope of similarity."
Of course, Chen is talking his book; Surge's entire business model depends on human evaluation being valuable. But his critique raises an uncomfortable question: if he's right about the limitations of pure scaling, what does that mean for all these massive infrastructure investments?
The Microsoft Signal
Perhaps the most telling indicator of the uncertainty around these investments is Microsoft's retreat from exclusive infrastructure commitments to OpenAI. Despite being OpenAI's closest partner and largest investor, Microsoft has effectively refused to keep writing blank checks for infrastructure spending.

This is Microsoft, a company with more than $200 billion in annual revenue, stepping back from what OpenAI considers essential investments. If even Microsoft thinks the infrastructure costs have gotten out of hand, what does that tell us about the sustainability of this spending?
Meta has taken a different approach, using loan deals to keep their AI infrastructure investments off their balance sheet, even though it comes at a higher cost of capital. They're literally paying extra to avoid spooking investors with the true scale of their AI spending.

Winner Takes Most, Eventually
All this points toward an industry where the infrastructure requirements will lead to consolidation.
We're likely heading toward a world where there are maybe two or three companies capable of building and serving the most performant frontier models, not because of some great strategy, but because the infrastructure costs have become so enormous that only a handful of players can afford to compete.
I believe the open source model ecosystem will continue to thrive for many use cases, but for the bleeding edge of AI capability, we're looking at a winner-takes-most dynamic driven by pure capital requirements.
This consolidation is already underway: Microsoft absorbed most of Inflection AI's team, Amazon did the same with Adept's, and smaller AI companies are increasingly getting folded into the tech giants once they realize they need a backer with infinitely deep pockets.

At least Oracle is having a good quarter.