Sam Altman Says OpenAI Has Run Out of GPUs – What This Means for AI’s Future
OpenAI CEO Sam Altman recently made a surprising admission: the company has run out of GPUs. This revelation came alongside the launch of GPT-4.5, OpenAI’s latest language model, which is not only powerful but also incredibly expensive to use.
For those in the AI space, this isn’t just a minor inconvenience—it’s a signal of the intense hardware demand that artificial intelligence is placing on the tech industry. But what does this mean for OpenAI, its users, and the future of AI? Let’s break it down.
GPT-4.5: Powerful, But at a Cost
The launch of GPT-4.5 has created quite a stir—not just because of its capabilities but also because of its hefty price tag. OpenAI is charging an eye-watering $75 per million tokens, which translates to around 750,000 words. To put that in perspective, that’s 30 times more expensive than the cost of its predecessor, GPT-4o.
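To see what that multiplier means in practice, here is a minimal sketch of the arithmetic. It assumes the article's figures ($75 per million input tokens for GPT-4.5, and a 30x-cheaper rate for the older model) and the common rule of thumb that one token is roughly 0.75 English words; the function name and sample sizes are illustrative, not from OpenAI's API.

```python
# Rough cost comparison using the pricing figures cited in the article.
# Assumptions: $75 per 1M tokens for GPT-4.5; "30x cheaper" implies
# roughly $2.50 per 1M tokens for the older model.
GPT45_PER_MILLION = 75.00
OLDER_PER_MILLION = GPT45_PER_MILLION / 30  # ~= $2.50

def cost_usd(tokens: int, rate_per_million: float) -> float:
    """Cost in dollars of processing `tokens` tokens at a per-million rate."""
    return tokens / 1_000_000 * rate_per_million

# A 10,000-token job (~7,500 words at ~0.75 words per token):
print(f"GPT-4.5:     ${cost_usd(10_000, GPT45_PER_MILLION):.2f}")  # $0.75
print(f"older model: ${cost_usd(10_000, OLDER_PER_MILLION):.2f}")  # $0.03
```

Small requests stay cheap either way; the gap only bites at scale, where millions of tokens per day turn a few cents into thousands of dollars.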
So why the extreme price hike? The answer lies in computing power.
According to Altman, the rollout of GPT-4.5 has been staggered due to a severe shortage of GPUs—the specialized chips required to run large AI models. OpenAI simply doesn’t have enough hardware firepower to support the immediate demand.
“We will add tens of thousands of GPUs next week and roll it out to the plus tier then,” Altman reassured users. “This isn’t how we want to operate, but it’s hard to perfectly predict growth surges that lead to GPU shortages.”
This statement underscores a growing reality: AI development is no longer just about software—it’s about who controls the most computing power.
The AI Industry’s Race for GPUs
The AI boom has created an intense competition for high-performance chips, particularly those made by NVIDIA, the dominant player in AI hardware.
Recently, NVIDIA announced that it had sold a staggering $11 billion worth of its next-gen AI chips, called Blackwell, in what CFO Collette Kress called the “fastest product ramp in our company’s history.”
Every major AI company—including OpenAI, Google DeepMind, and Anthropic—is scrambling to secure as many of these chips as possible. Without them, their AI models simply can’t function at scale.
This hardware dependency raises a critical issue: even the most advanced AI systems are only as good as the infrastructure behind them.
Is GPT-4.5 Worth the Hype?
Despite its high cost, GPT-4.5 isn’t necessarily a game-changer. Unlike with past releases, Altman himself has tempered expectations, noting that:
“This isn’t a reasoning model and won’t crush benchmarks.”
Instead, OpenAI describes GPT-4.5 as having a more natural, intuitive conversational ability, rather than raw problem-solving power. OpenAI VP of Research Mia Glaese told The New York Times that the model excels at:
“Engaging in warm, intuitive, naturally flowing conversations.”
This suggests that OpenAI is shifting its focus toward user experience rather than just brute-force intelligence. But is that enough to justify the massive price tag?
Some experts remain skeptical, especially given that GPT-4.5 was supposedly designed to be more computationally efficient than its predecessors. OpenAI’s own “system card” states that GPT-4.5 reduces compute costs by 10x compared to GPT-4.
So why is it still so expensive? That’s a question that remains unanswered.
What’s Next? GPT-5 and the Future of AI
Even as OpenAI scrambles to roll out GPT-4.5, the company is already looking ahead to GPT-5. Altman has described it as:
“A system that integrates a lot of our technology.”
This hints at a more comprehensive, multi-modal AI, possibly moving closer to Artificial General Intelligence (AGI)—the holy grail of AI research.
But with hardware shortages, skyrocketing costs, and growing regulatory scrutiny, OpenAI faces an uphill battle. Can they scale their models while keeping them accessible to users? Or will AI’s increasing reliance on expensive infrastructure make it a luxury only a few can afford?
One thing is clear: AI is evolving at breakneck speed, but the biggest challenge may not be the software—it’s whether companies can build enough computing power to keep up.
Final Thoughts
Sam Altman’s revelation about GPU shortages highlights a growing issue in the AI industry: the hardware arms race is just as important as the software itself.
For users, this means two things:
✅ Expect better, more intuitive AI experiences in the future.
❌ But also expect higher costs and potential delays as AI companies fight for resources.

As OpenAI and its competitors push forward, one question looms: will AI’s future be defined by innovation, or by whoever controls the world’s supply of GPUs?
Let’s see how this race unfolds.