OpenAI reportedly isn't happy with Nvidia's GPUs while Nvidia's $100 billion investment plan in OpenAI is said to have 'stalled': Is the AI honeymoon over?
Are the two biggest beasts in AI falling out of love?
OpenAI reportedly isn't happy with the performance of Nvidia's GPUs. Meanwhile, Nvidia is having second thoughts about pumping $100 billion into OpenAI. These are the latest rumours around the two biggest players in AI. So, could their unholy alliance be faltering?
Last week, the Wall Street Journal claimed that Nvidia is rethinking its previously announced plans to invest $100 billion in OpenAI over concerns about OpenAI's ability to compete with the likes of Google and Anthropic.
Then yesterday, Reuters posted a story detailing OpenAI's reported dissatisfaction with Nvidia's GPUs, specifically for the task of running AI inference. If that latter story looks a lot like somebody at OpenAI hitting back at the original Wall Street Journal claims, the two narratives combined feel like just the sort of tit-for-tat, off-the-record briefing that occurs when an alliance is beginning to falter.
For now, none of this is official. It's all rumour. However, it is true that Nvidia's intention to invest $100 billion in OpenAI was announced in September and has yet to be finalised.
The Wall Street Journal claims that Nvidia CEO Jensen Huang has "privately criticized what he has described as a lack of discipline in OpenAI’s business approach and expressed concern about the competition it faces from the likes of Google and Anthropic."
In public, Huang has defended Nvidia's plans to invest in OpenAI, but has stopped short of explicitly reconfirming the $100 billion deal. “We will invest a great deal of money, probably the largest investment we’ve ever made,” he said. But he also retorted, "no, no, nothing like that," when asked whether that investment would top $100 billion.
As for OpenAI, Reuters says that it is "unsatisfied with some of Nvidia’s latest artificial intelligence chips, and it has sought alternatives since last year." It's claimed that OpenAI is shifting its emphasis away from training AI models and towards inference, in other words running AI models as services for customers.
It's for that latter task, inference, that OpenAI is said to have found Nvidia's GPUs wanting. "Seven sources said that OpenAI is not satisfied with the speed at which Nvidia’s hardware can spit out answers to ChatGPT users for specific types of problems such as software development and AI communicating with other software," Reuters claims.
It's certainly a somewhat plausible narrative. You could argue that Nvidia's GPUs are big, complex, relatively general-purpose hardware that's suboptimal for the specific task of inference.
By way of example, Microsoft has recently announced its latest ASIC, or Application Specific Integrated Circuit, specifically for inferencing. ASICs are chips designed to do a single, narrowly defined task very efficiently. And it's probably fair to say that, in the long run, most industry observers think that AI inferencing, at the very least, will be run on ASICs rather than GPUs.
A handy parallel case study of the power of ASICs is cryptocurrency mining. That too used to be done on GPUs. But ASICs are now far, far more effective.
Anywho, it's perhaps inevitable that the OpenAI-Nvidia love-in would falter to some degree. Both companies have a whiff of "world domination" about them and, in the end, their interests are never going to align perfectly.
As per the Wall Street Journal report, it's very likely Nvidia will still invest billions in OpenAI. And for now, no doubt OpenAI has little choice but to keep buying billions of dollars' worth of Nvidia GPUs. But if these stories have any truth in them, the honeymoon is probably over.

