Lightbits offers NVMe-over-TCP at a fifth of the cost of NVMe-over-FC and RoCE


Source is ComputerWeekly.com

NVMe-over-TCP at a fifth of the cost of equivalent NVMe-over-Ethernet (RoCE) solutions – that’s the promise of Lightbits LightOS, which enables customers to build flash-based SAN storage clusters on commodity hardware using standard Intel network cards.

During a press briefing attended by Computer Weekly’s sister publication in France, LeMagIT, Lightbits demonstrated the system delivering performance equivalent to NVMe-over-Fibre Channel or NVMe-over-RoCE – both much more costly options – with LightOS running on a three-node cluster fitted with Intel 100Gbps Ethernet E810-CQDA2 cards.

NVMe-over-TCP works on a standard Ethernet network with the usual switches and server network cards. NVMe-over-Fibre Channel and NVMe-over-RoCE, by contrast, need expensive dedicated hardware, but guarantee rapid transfer rates. Their performance comes from bypassing the TCP protocol, which can be a drag on transfer rates because processing its packets takes time and so slows access. The benefit of the Intel Ethernet cards is that they offload part of this protocol processing, mitigating that effect.
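As a rough illustration of how simple the host side is, the sketch below shows how a Linux server would typically discover and attach an NVMe/TCP target over an ordinary Ethernet network using the standard nvme-cli tool. The address and subsystem NQN are placeholders, not details disclosed by Lightbits.

    # Minimal sketch: attach a Linux host to an NVMe/TCP target with the standard nvme-cli tool.
    # The target address and subsystem NQN below are placeholders for illustration only.
    import subprocess

    TARGET_ADDR = "192.0.2.10"                                # example address, not a real endpoint
    SUBSYSTEM_NQN = "nqn.2014-08.org.example:storage-pool-1"  # placeholder subsystem name

    # Ask the target which NVMe subsystems it exposes (8009 is the well-known discovery port).
    subprocess.run(["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", "8009"], check=True)

    # Connect to one subsystem over plain TCP (4420 is the usual NVMe/TCP data port);
    # the kernel then presents it as a local /dev/nvmeXnY block device.
    subprocess.run(["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", "4420", "-n", SUBSYSTEM_NQN], check=True)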

“Our promise is that we can offer a high-performance SAN on low-cost hardware,” said Kam Eshghi, Lightbits’ strategy chief. “We don’t sell proprietary appliances that need proprietary hardware around them. We offer a system that you install on your available servers and that works on your network.”

Cheaper storage for private clouds

Lightbits’ demo comprised 24 Linux servers, each equipped with a dual-port 25Gbps Ethernet card and each accessing 10 shared volumes on the cluster. Performance measured at the storage cluster reached 14 million IOPS and 53GBps in reads, 6 million IOPS and 23GBps in writes, and 8.4 million IOPS and 32GBps in a mixed workload.
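As a back-of-the-envelope check, and assuming the throughput and IOPS figures describe the same test runs, dividing one by the other implies an average I/O size of roughly 4KB, a typical block size for this kind of benchmark:

    # Back-of-the-envelope check: throughput divided by IOPS gives the average I/O size.
    # Assumes the quoted throughput and IOPS figures describe the same test runs.
    figures = {
        "read":  (53e9, 14.0e6),   # 53 GBps, 14 million IOPS
        "write": (23e9, 6.0e6),    # 23 GBps, 6 million IOPS
        "mixed": (32e9, 8.4e6),    # 32 GBps, 8.4 million IOPS
    }

    for workload, (bytes_per_sec, iops) in figures.items():
        print(f"{workload}: ~{bytes_per_sec / iops / 1024:.1f} KB per I/O")   # each comes out close to 4KB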

According to Eshghi, these performance levels are similar to those of NVMe SSDs installed directly in servers, with the only drawback being higher latency – around 200 to 300 microseconds compared with 100 microseconds for local drives.

“At this scale the difference is negligible,” said Eshghi. “The key for an application is to have latency under a millisecond.”

Besides cheap connectivity, LightOS also offers functionality usually found in the products of mainstream storage array makers. This includes managing SSDs as a pool of storage with hot-swappable drives, intelligent rebalancing of data to slow wear, and on-the-fly replication to avoid data loss in case of unplanned downtime.

“Lightbits allows up to 16 nodes to be built into a cluster, with up to 64,000 logical volumes presented to upstream servers,” said Abel Gordon, chief systems architect at Lightbits. “To present our cluster as a SAN to servers, we have a vCenter plug-in, a Cinder driver for OpenStack and a CSI driver for Kubernetes.”
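To give a sense of how such a CSI driver is consumed, the sketch below registers a Kubernetes StorageClass pointing at an external SAN provisioner, using the official Kubernetes Python client. The provisioner name and parameters are hypothetical placeholders, not Lightbits’ documented values.

    # Illustrative sketch: register a StorageClass so that PersistentVolumeClaims are
    # provisioned through a CSI driver on an external SAN cluster. The provisioner name
    # and parameters are hypothetical, not Lightbits' documented values.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

    storage_class = client.V1StorageClass(
        api_version="storage.k8s.io/v1",
        kind="StorageClass",
        metadata=client.V1ObjectMeta(name="example-nvme-tcp"),
        provisioner="csi.example-san.io",      # hypothetical CSI driver name
        parameters={"replica-count": "2"},     # hypothetical driver parameter
        reclaim_policy="Delete",
        allow_volume_expansion=True,
    )

    client.StorageV1Api().create_storage_class(storage_class)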

“We don’t support Windows servers yet,” said Gordon. “Our goal is rather that we will be an alternative solution for public and private cloud operators who commercialise virtual machines or containers.”

To this end, LightOS offers an admin console that can allot different performance and capacity limits to different users, or to different enterprise customers in a public cloud scenario. There is also monitoring based on Prometheus metrics collection and Grafana visualisation.
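Because the metrics are exposed through Prometheus, they can also be pulled programmatically over its HTTP API. A minimal sketch follows, in which the Prometheus address and the metric name are hypothetical examples rather than names published by Lightbits.

    # Minimal sketch: query the cluster's Prometheus endpoint over its HTTP API.
    # The Prometheus URL and the metric name are hypothetical examples.
    import requests

    PROMETHEUS_URL = "http://prometheus.example.local:9090"
    QUERY = "sum(rate(example_cluster_read_bytes_total[5m]))"   # hypothetical metric

    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()

    for series in resp.json()["data"]["result"]:
        print(series["metric"], series["value"])   # labels plus a (timestamp, value) pair per series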

Working closely with Intel

In another demo, Lightbits showed a similar hardware cluster running open source Ceph storage, which is not optimised for the Intel network cards.

In the demo, 12 Linux servers running eight containers in Kubernetes simultaneously accessed the storage cluster. With a mix of reads and writes, the Ceph deployment achieved around 4GBps, compared with around 20GBps on the Lightbits version with higher-performance TLC flash and 15GBps with capacity-oriented QLC drives. Ceph is Red Hat’s recommended storage for building private clouds.

“Lightbits’ close relationship with Intel allows it to optimise LightOS with the latest versions of Intel products,” said Gary McCulley of the Intel datacentre product group. “In fact, if you install the system on servers of the latest generation, you automatically get better performance than with recent storage arrays that run on processors and chips of the previous generation.”

Intel is promoting its latest components among integrators with turnkey server concepts. One of these is a 1U server with 10 hot-swappable NVMe SSDs, two latest-generation Xeon processors and one of its new 800 series Ethernet cards. To test interest in the design for storage workloads, Intel chose to run it with LightOS.

Intel’s 800 series Ethernet card doesn’t completely integrate on-the-fly decoding of network protocols, unlike the FPGA-based SmartNIC 500X or Intel’s future Mount Evans network cards, which use DPU-type acceleration (which Intel calls an IPU).

On the 800 series, the controller only speeds up the sorting of packets into queues, to avoid bottlenecks between each server’s accesses. Intel calls this pre-IPU processing ADQ (application device queues).
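On the host side, this kind of queue steering is usually paired with socket busy polling on Linux. The sketch below shows only that application-facing knob, using a numeric fallback for Linux’s SO_BUSY_POLL option, and leaves out the NIC-side queue configuration that ADQ also requires.

    # Rough sketch of the application-side busy-polling knob that queue-steering features
    # such as ADQ are typically paired with on Linux. The NIC-side queue configuration is
    # separate and not shown; setting SO_BUSY_POLL may require CAP_NET_ADMIN.
    import socket

    SO_BUSY_POLL = getattr(socket, "SO_BUSY_POLL", 46)   # 46 is the Linux value of SO_BUSY_POLL

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Poll the receive queue for up to 50 microseconds instead of waiting for an interrupt,
    # trading some CPU time for lower, more predictable latency on this connection.
    sock.setsockopt(socket.SOL_SOCKET, SO_BUSY_POLL, 50)
    sock.connect(("192.0.2.10", 4420))   # placeholder NVMe/TCP target address and port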

However, McCulley promised that integration between LightOS and IPU-equipped cards is in the pipeline, although it will be more of a proof-of-concept than a fully developed product. Intel seems to want to commercialise its IPU-based network cards as NVMe-over-RoCE cards instead – in other words, for more expensive solutions than those offered by Lightbits.

