Huawei pushes OceanStor’s Arm chip benefits at Paris love-in

In the City of Love in October, Chinese hardware maker Huawei laid the seduction on thick. At the Palais des Congrès in Paris, it showed European enterprises its OceanStor Dorado V6, which provides file, block and object access at a price it claims undercuts its US competitors.

A big sell is that its storage arrays use Arm processors. That’s a result of the US government ban that stops Huawei using x86 processors from Intel and AMD. That sanction – which Huawei calls “the US ban” – saw the supplier set up a subsidiary for its x86 server business called xFusion.

OceanStor Dorado V6 arrays come in variants from entry-level to extreme high performance and multi-petabyte capacity.

High-end 8000 and 16000 models can scale to 32 controllers and 6,400 drives, performance to tens of millions of IOPS and nearly 300PB of capacity, with advanced storage functionality including data deduplication, compression, cloning and replication. Entry-level 2000 and 3000 models can go to 16 controllers and 1,200 drives.

“Our Arm processor is the Kunpeng 920 that we have developed with our partner HiSilicon,” said Huawei storage products director Ludovic Nicoleau, in conversation with ComputerWeekly’s French sister publication, LeMagIT.

“It’s a chip designed for storage that doesn’t waste energy feeding circuits that contribute nothing to storage, unlike the x86-based chips in our competitors’ arrays. Our aim is to offer European enterprises the lowest possible energy footprint in the datacentre.”

According to Nicoleau, the OceanStor Dorado V6 is 30% more efficient in terms of electricity consumption than competing products. HiSilicon Arm processors are used in its internal RAID controller, in its 32Gbps Fibre Channel and 100Gbps Ethernet cards, and even inside its SSDs to manage wear.

“Huawei has offered storage arrays since 2002,” said Nicoleau. “At first these were aimed at telcos, but we then brought that know-how to writing operating systems. Here, every function is assigned a certain number of cores according to its importance and its workload, and that has allowed us to develop optimised designs for our OS.”
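Nicoleau’s description of assigning cores to functions is essentially CPU affinity. As a loose illustration only – not Huawei’s actual OS code – the following Python sketch pins hypothetical storage functions to dedicated cores on Linux; the function names and core counts are invented for the example.

```python
import os
import threading

# Hypothetical per-function core budget -- names and numbers are invented
# for illustration, not taken from Huawei's OS design.
CORE_BUDGET = {
    "frontend_io": [0, 1, 2, 3],   # host-facing I/O handling
    "dedup_compress": [4, 5],      # data reduction
    "replication": [6],            # remote copy traffic
}

def run_pinned(name, cores, work):
    """Run `work` in a thread restricted to the given CPU cores (Linux-only)."""
    def target():
        os.sched_setaffinity(0, cores)  # restrict scheduling of the caller
        work()
    thread = threading.Thread(target=target, name=name)
    thread.start()
    return thread

if __name__ == "__main__":
    def dummy_work():
        sum(range(10_000_000))  # stand-in for real storage work

    threads = [run_pinned(n, c, dummy_work) for n, c in CORE_BUDGET.items()]
    for t in threads:
        t.join()
```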

The Kunpeng 920 comes in a number of variants, all of them 7nm Arm processors. At entry level is the model 3210, with 24 cores, a clock speed of 2.6GHz and power consumption of 95W. At the top end is the 7265, with 64 cores, 3GHz and 200W. There is also the Kunpeng 916, with 32 cores, 2.4GHz and 75W.
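Taking only the figures quoted above, a quick back-of-envelope comparison of cores per watt looks like this; it says nothing about real-world efficiency, which depends on far more than core count and rated power.

```python
# Cores, clock (GHz) and power (W) as quoted in the article.
chips = {
    "Kunpeng 920 model 3210": (24, 2.6, 95),
    "Kunpeng 920 model 7265": (64, 3.0, 200),
    "Kunpeng 916":            (32, 2.4, 75),
}

for name, (cores, ghz, watts) in chips.items():
    print(f"{name}: {cores / watts:.2f} cores per watt at {ghz}GHz")

# Kunpeng 920 model 3210: 0.25 cores per watt at 2.6GHz
# Kunpeng 920 model 7265: 0.32 cores per watt at 3.0GHz
# Kunpeng 916: 0.43 cores per watt at 2.4GHz
```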

Huawei had used Arm processors from the start, but in 2016 it switched to more conventional x86 chips for its standard storage arrays across all markets. That came in the form of the OceanStor V3, which lasted just three years until the US measures against the company in 2019. Then came Huawei’s R&D push, which resulted in the new-generation V6 based on HiSilicon Arm CPUs and which arrived in Europe in 2020.

Beyond the CPU, the SSDs are also of a proprietary design. Instead of the usual 2.5” form factor, they measure 1.7”, which allows 36 of them to fit in a 2U array. Its network cards parallelise traffic so that four active storage arrays in different datacentres can work together. That sets Huawei apart from competitors, which use software-based synchronisation across three sites, it said.
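To put the form factor claim in rough numbers, the sketch below compares 36 drives per 2U against a conventional 2.5” shelf; the 25-slot baseline is an assumption made for illustration, not a figure from the article.

```python
# Drives per 2U shelf: Huawei's quoted figure for its 1.7-inch SSDs versus an
# assumed 25-slot 2.5-inch shelf used purely as an illustrative baseline.
huawei_slots_per_2u = 36
baseline_slots_per_2u = 25  # assumption, not from the article

gain = huawei_slots_per_2u / baseline_slots_per_2u - 1
print(f"~{gain:.0%} more drives per 2U than the assumed baseline")
# ~44% more drives per 2U than the assumed baseline
```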

The OS – which is entirely container-based – can be updated function by function without any degradation in performance. Simultaneous NAS, object and SAN functionality is planned – rather than one or the other, as in some other products – but not all of it has arrived yet. Block and file together has been possible for a few months, while object access is set to arrive in updates later this year.

Another detail under the hood is that the 36 SSDs in fact form several RAID clusters, each accessible via a different Ethernet or InfiniBand card. It is this design that allows arrays up to 100km apart to synchronise rapidly over fibre between sites, according to Huawei.
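The 100km figure is consistent with the latency budget usually allowed for synchronous replication. A rough calculation of the propagation delay over that distance – assuming the commonly cited signal speed of about 200,000km/s in optical fibre – is sketched below.

```python
# Rough fibre propagation delay between two sites 100km apart.
distance_km = 100
fibre_speed_km_per_s = 200_000  # light in fibre, refractive index ~1.5

one_way_ms = distance_km / fibre_speed_km_per_s * 1000
round_trip_ms = 2 * one_way_ms
print(f"one-way ~{one_way_ms:.1f}ms, round trip ~{round_trip_ms:.1f}ms per acknowledged write")
# one-way ~0.5ms, round trip ~1.0ms per acknowledged write
```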

Inside the array, those clusters talk to each other via 100Gbps Ethernet – or, more precisely, NVMe-over-RoCE – and not via the PCIe bus.

“We don’t offer NVMe-over-TCP because it’s not performant enough,” said Nicoleau. “Using NVMe-over-RoCE for internal communications as well as for external presentation to servers over the SAN is a technical choice that gives the architecture homogeneity.”
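For readers unfamiliar with NVMe-oF transports, the sketch below shows how a generic Linux host attaches to an NVMe-over-RDMA (RoCE) target using the standard nvme-cli tool, driven here from Python. The address, port and NQN are placeholders, and this is a general illustration of the transport, not Huawei’s internal fabric or SAN configuration.

```python
import subprocess

# Placeholder values -- not taken from any Huawei product.
TARGET_ADDR = "192.0.2.10"
TARGET_NQN = "nqn.2014-08.org.example:demo-subsystem"

# Attach this host to an NVMe-over-RDMA (RoCE) target using nvme-cli.
# Requires root and an RDMA-capable NIC.
subprocess.run(
    [
        "nvme", "connect",
        "--transport=rdma",        # RoCE uses the rdma transport
        f"--traddr={TARGET_ADDR}",
        "--trsvcid=4420",          # default NVMe-oF port
        f"--nqn={TARGET_NQN}",
    ],
    check=True,
)
```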

Source is ComputerWeekly.com
