DPU 101: What are DPUs, what do they do, and who supplies them?

Data processing units (DPUs) have emerged as an important deployment option for datacentres that run heavy data-centric workloads, such as artificial intelligence (AI) and analytics processing, and that need to accelerate storage input/output (I/O).

DPUs are deployed in servers and are the latest in an evolution of offload hardware that takes work away from central processing units (CPUs), freeing them to concentrate on core application cycles and boosting performance there.

Meanwhile, DPUs can handle data transfer, data reduction, security and analytics. Also central to the rise of the DPU is how well it fits composable infrastructure architectures, which knit infrastructure resources together from pools of hardware components.

And while DPUs are a growing core hardware component, DPU capability is also being built into cloud services, such as those from Amazon Web Services (AWS) and Microsoft Azure. Meanwhile, core environments such as VMware have been refactored so customers can take advantage of DPU deployments.

In this article, we look at where DPUs came from, what they do and their benefits, typical specifications, and who the key DPU suppliers are.

Where did DPUs come from?

Although the idea of the data processing unit is relatively new, DPUs descend from a long line of offload cards, especially in network acceleration.

These started with basic network interface cards (NICs), progressed to “offload NICs” that freed up CPU cycles by processing network traffic, and then to “smart NICs” that expanded what could be offloaded and introduced an element of programmability.

DPUs are an evolution of smart NICs and bring flexible programmability as part of composable architectures, as well as increased offload capabilities, including storage networking.

What is a DPU and what are its benefits?

A useful way of conceiving of DPUs – and graphics processing units (GPUs), for that matter – is the idea of decentralisation: offloading tasks that would formerly have been carried out centrally by the CPU to supplementary hardware aimed at specific tasks. In so doing, DPUs make up for server CPU inefficiencies in data-centric computation and data transfer workloads.

To put a finer point on it, it is useful to consider just how parallelised each of the xPU variants is.

CPUs have relatively few cores aimed at running a few operations at a time, GPUs have more cores and can handle more operations, and DPUs are built with many more cores to handle highly parallelised workloads.
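
To make that difference concrete, here is a toy Python sketch – not DPU code, just an illustration of the serial-versus-parallel trade-off that motivates offload hardware. The chunk count and size are arbitrary assumptions.

```python
# Toy illustration, not DPU code: the serial-versus-parallel trade-off
# that motivates offload hardware. One core works through data chunks in
# order; a process pool, standing in for a many-core device, handles
# many chunks at once. Chunk count and size are arbitrary assumptions.
import hashlib
import time
from multiprocessing import Pool

def checksum(chunk: bytes) -> str:
    # Stand-in for a per-chunk, data-centric task.
    return hashlib.sha256(chunk).hexdigest()

if __name__ == "__main__":
    chunks = [bytes([i % 256]) * 4_000_000 for i in range(64)]

    start = time.perf_counter()
    serial = [checksum(c) for c in chunks]   # one core, one chunk at a time
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool() as pool:                     # many cores, many chunks at once
        parallel = pool.map(checksum, chunks)
    t_parallel = time.perf_counter() - start

    assert serial == parallel
    print(f"serial: {t_serial:.2f}s, parallel: {t_parallel:.2f}s")
```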

Whereas a GPU is primarily designed to run heavy computation around graphics rendering (although GPUs have come to be used for much more), a DPU takes things a step further to handle heavily data-intensive tasks, such as moving data around storage and networking, but also AI and analytics.

A DPU is usually made up of a multi-core CPU, memory and controllers, PCIe sockets, networking and storage fabric interfaces, and dedicated hardware acceleration blocks. DPUs come with their own operating system (OS), which works alongside the host OS to perform jobs such as encryption, erasure coding and data reduction.
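
A minimal sketch of that hand-off pattern follows, in plain Python rather than any real DPU SDK; all names here are hypothetical. The “host” enqueues buffers and stays free for application work, while a worker thread stands in for the DPU and performs data reduction (here, zlib compression).

```python
# Hypothetical sketch of CPU-to-DPU hand-off, not a real DPU SDK.
import queue
import threading
import zlib

work: queue.Queue = queue.Queue()
reduced: list = []

def dpu_worker() -> None:
    # Stand-in for the DPU's data-reduction engine, running under its own OS.
    while (buf := work.get()) is not None:
        reduced.append(zlib.compress(buf))

worker = threading.Thread(target=dpu_worker)
worker.start()

# The host CPU just submits I/O buffers, then carries on with app work.
for i in range(8):
    work.put(b"application data block %d " % i * 1000)
work.put(None)  # shutdown sentinel
worker.join()

print(f"{len(reduced)} blocks reduced off the host's critical path")
```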

DPUs can connect natively to NVMe storage – and NVMe-over-fabrics – to provide rapid access to very high-speed storage. They also often come with native acceleration for specific workloads, such as cryptography.
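
As an illustration of what NVMe-over-fabrics access looks like from the host side, the sketch below assumes a Linux machine with the nvme-cli package installed and a reachable NVMe/TCP target; the address, port and NQN are hypothetical placeholders, not values from the article.

```python
# A minimal sketch, assuming a Linux host with nvme-cli and a reachable
# NVMe/TCP target. Address, port and NQN are hypothetical placeholders.
import subprocess

TARGET_ADDR = "192.0.2.10"                   # placeholder target IP
TARGET_NQN = "nqn.2023-01.example:storage1"  # placeholder subsystem NQN

# Discover the subsystems the target exports, then connect to one.
subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", "4420"],
    check=True,
)
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", "4420",
     "-n", TARGET_NQN],
    check=True,
)
# The remote namespace now appears as a local /dev/nvmeXnY block device.
```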

What kind of specs do DPUs offer?

The kind of spec you’d get from a DPU can be seen in Nvidia’s BlueField-3, which comes with up to 400Gbps of Ethernet or InfiniBand connectivity, 32 lanes of PCIe Gen 5.0, up to 16 Arm CPU cores, 16 cores in a programmable data path accelerator, 32GB of onboard DDR5 DRAM, AES encryption, and NVMe-over-fabrics/over-TCP connectivity.
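
As a back-of-envelope check on how those numbers fit together, the short sketch below converts the 400Gbps port to bytes per second and compares it with the usable bandwidth of the PCIe Gen 5.0 x32 interface; the per-lane rate used is the standard Gen 5.0 figure (32 GT/s with 128b/130b encoding), not a number from the article.

```python
# Back-of-envelope arithmetic on the BlueField-3 figures above. The PCIe
# per-lane rate is the standard Gen 5.0 value (32 GT/s, 128b/130b
# encoding); everything else is unit conversion.
network_gbits = 400                  # 400Gbps Ethernet/InfiniBand port
network_gbytes = network_gbits / 8   # -> 50 GB/s of line rate

lane_gbytes = 32 * (128 / 130) / 8   # ~3.94 GB/s usable per Gen 5.0 lane
pcie_gbytes = 32 * lane_gbytes       # 32 lanes -> ~126 GB/s

print(f"network: {network_gbytes:.0f} GB/s, "
      f"PCIe Gen 5.0 x32: {pcie_gbytes:.0f} GB/s")
# The x32 PCIe interface has ample headroom for the 400Gbps network port.
```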

Who makes DPUs?

DPU hardware is available from suppliers that include:

  • Intel, which announced its Mount Evans DPU – co-developed with Google – in August 2021 and has a roadmap that includes further iterations on a two-year cadence. Mount Evans later became the E2000 series, launched in 2022. Intel appears to prefer the term infrastructure processing unit (IPU) over DPU.
  • Nvidia, with its BlueField-3 and BlueField-2 DPUs, as well as the company’s hybrid GPU/DPU Converged Accelerators.
  • Marvell, whose Octeon and Armada DPUs have a heavy bias towards telco applications.
  • AMD, with its Pensando Infrastructure Accelerators.
  • Fungible – acquired by Microsoft in 2023 – introduced the DPU to the market when it came out of stealth in 2020, with products aimed at networking and storage. The purchase came after Fungible rose to prominence as a DPU pioneer and then hit trouble as expansion in the market faltered. Market commentary at the time of the acquisition suggested Microsoft would incorporate Fungible IP into Azure, with an eye on AWS’s DPU capabilities.
  • AWS, whose Nitro cards provide the hardware and software building blocks for compute, storage, networking and memory in its EC2 services, and include DPU options. DPUs also figure in AWS Glue, the company’s serverless compute platform aimed at extract, transform and load (ETL) and data warehouse workloads, where the DPU serves as the unit of processing capacity.
