The latest financial results from PC server manufacturers show sales boosted by demand for artificial intelligence (AI)-accelerated servers. Supply chain issues mean there is pent-up demand, but this does not appear to be holding customers back, as projects to deploy AI-accelerated servers are taking longer than expected anyway.
HPE’s results show a backlog in the production of graphics processing unit (GPU)-powered servers needed to run AI training workloads, but it does have servers ready for customers to deploy.
According to the transcript of HPE’s first-quarter 2024 earnings call posted on Seeking Alpha, CEO Antonio Neri said: “We had a couple of deals that slipped from Q1 into future quarters because customers are taking a little bit longer to prepare the datacentre space, getting the power and the cooling ready.”
He described the GPU supply issues as “a tight environment” but said it was “seeing some improvements”.
According to Neri, HPE has a stock of GPU-based servers already built, but “customers will take time to accept those systems”. As of the end of the first quarter, he said, the backlog stood at $3bn.
Neri said HPE customers faced a 20-plus week lead time for GPU-equipped servers. Specifically, he said customers were adopting servers equipped with the current generation of Nvidia GPUs, the H100. “The majority of the demand today is on H100. Going forward, we’re going to have the Grace Hopper H200 and others.” These, he said, will include AMD’s recently introduced GPU, the Instinct MI300X.
Dell Technologies has also seen demand for GPU-powered servers continue to outpace supply. In its third-quarter filing, Dell reported sequential growth in traditional servers.
In a statement accompanying the earnings call for the quarter, Dell Technologies vice-chairman Jeffrey Clarke said: “AI-optimised server orders increased by nearly 40% sequentially. We shipped $800m of AI-optimised servers, and our backlog nearly doubled sequentially, exiting the fiscal year at $2.9bn. We are also seeing strong interest in orders for AI-optimised servers equipped with the next generation of AI GPUs, including the H200 and the MI300X.”
Like HPE, most of Dell’s customers are still in the early stages of their AI journeys. “We are helping them get started and work through their use cases, data preparation, training and infrastructure requirements,” said Clarke.
Discussing the opportunity ahead for Dell, which spans how it engages with customers across its product portfolio, consulting and financing businesses, he said: “Progress in this space won’t always be linear, but we are excited about the opportunity ahead.”
Beyond the GPUs, AI puts new demands on enterprise storage. Clarke said organisations that are building AI systems have initially trained their AI models using text-based data. Dell is betting on its customers moving to richer enterprise datasets, which aligns with its storage strategy.
He said the new PowerScale F710 and F210 storage products offer increased performance for high-concurrency, latency-sensitive workloads. “It’s aligned with what is going to be the growing need as this gets deployed inside the enterprise,” he added.
Cooling requirements
A report from Dell’Oro Group noted that rising semiconductor thermal design power and the emergence of large language model and generative AI applications are driving significant increases in rack power density. This, according to Dell’Oro research analyst Lucas Beran, is leading to a transition from air-based thermal management systems to liquid cooling.
Air cooling has historically supported rack densities up to 20kW/rack, while liquid cooling is able to support rack power densities up to 100kW+/rack, according to Beran.
Data from Dell’Oro shows that accelerated servers can consume up to 6kW of power each. This suggests that, using a datacentre’s existing air cooling infrastructure, only three 6kW AI-accelerated servers could be installed per rack. Organisations looking to deploy these servers are therefore having to redesign their datacentres to cope with the cooling requirements.
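The rack maths above can be sketched as a simple back-of-envelope calculation. The power figures are those quoted from Dell’Oro; the function name and structure are illustrative, not from any vendor tool.

```python
def servers_per_rack(rack_power_kw: float, server_power_kw: float) -> int:
    """Whole servers that fit within a rack's power/cooling budget."""
    return int(rack_power_kw // server_power_kw)

AIR_COOLED_LIMIT_KW = 20       # typical air-cooled rack density (per Dell'Oro)
LIQUID_COOLED_LIMIT_KW = 100   # achievable rack density with liquid cooling
ACCELERATED_SERVER_KW = 6      # per-server draw cited for accelerated servers

print(servers_per_rack(AIR_COOLED_LIMIT_KW, ACCELERATED_SERVER_KW))     # 3
print(servers_per_rack(LIQUID_COOLED_LIMIT_KW, ACCELERATED_SERVER_KW))  # 16
```

On these numbers, moving from air to liquid cooling raises the ceiling from three accelerated servers per rack to sixteen, which is why the cooling transition matters so much for AI deployments.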
What is clear from the server manufacturers’ latest quarterly financial results is that demand for AI-optimised servers exists and is set to grow. However, organisations planning to deploy GPU-powered servers for training AI systems face a number of hurdles, both in reconfiguring their datacentres and in securing supply.
In November 2023, analyst firm Gartner forecast that enterprise shipments of AI-optimised servers would reach almost 460,000 units in 2024, rising to 740,000 by 2027. This represents 24% of AI server shipments in 2024 and 27% in 2027.
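Gartner’s enterprise figures also imply a total market size. A quick back-of-envelope calculation, using only the shipment counts and percentage shares quoted above (the variable names are illustrative):

```python
# Enterprise AI-optimised server shipments and their share of all
# AI server shipments, per the Gartner forecast quoted above.
enterprise_2024, share_2024 = 460_000, 0.24
enterprise_2027, share_2027 = 740_000, 0.27

total_2024 = enterprise_2024 / share_2024  # implied total, 2024
total_2027 = enterprise_2027 / share_2027  # implied total, 2027

print(f"Implied total AI server shipments 2024: ~{total_2024 / 1e6:.1f}m units")
print(f"Implied total AI server shipments 2027: ~{total_2027 / 1e6:.1f}m units")
```

That works out to roughly 1.9 million AI-optimised servers shipped in total in 2024, rising to around 2.7 million in 2027, with hyperscalers and cloud providers accounting for the non-enterprise remainder.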
“We expect enterprises to spend in 2024 and the first half of 2025 in a largely exploratory and evaluation mode of adoption for AI-optimised servers for GenAI use cases,” Gartner’s senior director analyst, Adrian O’Connell, wrote in its Forecast analysis: AI-optimised servers, worldwide report.
O’Connell noted that the uncertain economic outlook due to continued high inflation is likely to compound an already conservative approach from mainstream enterprises towards significant spending on a new technology. According to Gartner, the average selling price for an AI-optimised server will be $32,300 in 2024. This is expected to fall to $29,500 by 2027.
Overall, Gartner’s forecast shows that organisations are expected to spend $66bn on AI-optimised servers in 2024 and $88bn on general-purpose servers.
The supply chain issues and cooling requirements mean deployment of AI servers is not progressing as fast as the industry would like. However, as demand for AI increases, the server manufacturers are increasing their investments to make the most of this opportunity.
For instance, Lenovo reported that it has been building out support for AI with a 25% increase in research and development headcount, along with budget increases. The company claims its generative AI products have started to contribute positively to growth in its server business.