Nvidia has continued its dominance of artificial intelligence (AI) datacentres, with its latest quarterly results showing quarter-on-quarter revenue growth of 16% and a 93% increase on the same period last year.
The company’s datacentre business reported quarterly revenue of $35.6bn, and full-year revenue of $115bn – a 142% increase on the previous year.
In his prepared remarks, Nvidia CEO and founder Jensen Huang said: “Demand for Blackwell is amazing as reasoning AI adds another scaling law – increasing compute for training makes models smarter and increasing compute for long thinking makes the answer smarter.
“We’ve successfully ramped up the massive-scale production of Blackwell AI supercomputers, achieving billions of dollars in sales in its first quarter. AI is advancing at light speed as agentic AI and physical AI set the stage for the next wave of AI to revolutionise the largest industries.”
During the earnings call, financial analysts questioned Nvidia over DeepSeek, whose models can reportedly run on less powerful graphics processing units (GPUs), and over the fact that cloud service providers (CSPs) such as Microsoft are designing their own custom chips optimised for AI workloads.
According to a transcript of the earnings call posted on Seeking Alpha, CSPs account for about half of Nvidia’s business. But there is also growing demand from enterprise customers. Huang said: “We see the growth of enterprise going forward,” which he believes represents a larger opportunity to sell Nvidia GPUs long term.
Huang used the earnings call to discuss why he thinks new AI models will drive up demand, even as AI models become more computationally efficient. “The more the model thinks, the smarter the answer,” he said. “Models like OpenAI, Grok-3 and DeepSeek-R1 are reasoning models that apply inference time scaling. Reasoning models can consume 100 times more compute. Future reasoning models can consume much more compute.”
When asked about the risk that CSPs were developing application-specific integrated circuits (ASICs) instead of using GPUs, Huang responded by talking about the complexity of the technology stack that sits on top of the chip, implying that this would be a challenge if custom chips were deployed instead of standard GPUs. “The software stack is incredibly hard. Building an ASIC is no different to what we do – we build a new architecture,” he said.
According to Huang, the technology ecosystem that sits on top of this Nvidia architecture is 10 times more complex today than it was two years ago. “That’s fairly obvious,” he said, “because the amount of software that the world is building on top of architecture is growing exponentially and AI is advancing very quickly. So bringing that whole ecosystem [together] on top of multiple chips is hard.”
Discussing the Nvidia results, Forrester senior analyst Alvin Nguyen said: “Having yet another record performance from Nvidia seems commonplace despite the enormity of the feat. The record earnings represent the continued demand for Nvidia AI products. The emphasis on reasoning models driving more, not less, computation is a good verbal counter to the worries about DeepSeek impacting their demand.”
However, in Nguyen’s opinion, Huang’s response to questions on custom chips that offer an alternative to Nvidia’s GPUs was “dismissive”.
“Their response to the question about custom chips from Amazon, Microsoft and Google threatening their business was dismissive and ignores the need for these companies to have options outside of Nvidia and to have semiconductors tailored specifically to their AI training and inferencing needs,” he said.