StorPool takes its software-defined storage to the AWS cloud

Source is ComputerWeekly.com

One-tenth the latency of other storage solutions and one million IOPS from a single node – those are the claims of StorPool, which sells distributed, virtualised software-defined storage that has until now been deployed in datacentres, but is now also available on the AWS cloud.

“It was AWS that came to us to propose the offer of extremely performant storage alongside its online storage services,” said Boyan Ivanov, CEO of StorPool, in a conversation with ComputerWeekly.com’s French sister publication LeMagIT during a recent IT Press Tour event.

“In fact, a StorPool unit on AWS allows online applications to achieve 1.2 million IOPS, which compares to the 250,000 IOPS from the AWS Elastic Block Store (EBS) service that attaches volumes directly to VMs [virtual machines].”

Ivanov said AWS doesn’t market StorPool among its services, but StorPool can install its system on AWS VMs.
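
Neither company publishes the scripts behind those figures, but the kind of comparison Ivanov describes can be reproduced in outline with the standard fio benchmark tool. The sketch below is illustrative only: the device path is a placeholder, and the job parameters (4KiB random reads, queue depth 64) are assumptions rather than StorPool’s actual test profile.

```python
import json
import subprocess

# Placeholder block device: this could be an EBS volume or a StorPool-backed
# volume, depending on what is attached to the VM under test.
DEVICE = "/dev/nvme1n1"

def measure_random_read_iops(device: str, runtime_s: int = 30) -> float:
    """Run a 4KiB random-read fio job against a block device and return IOPS."""
    result = subprocess.run(
        [
            "fio",
            "--name=randread",
            f"--filename={device}",
            "--rw=randread",
            "--bs=4k",
            "--ioengine=libaio",
            "--direct=1",          # bypass the page cache
            "--iodepth=64",
            "--numjobs=4",
            "--group_reporting",
            "--time_based",
            f"--runtime={runtime_s}",
            "--output-format=json",
        ],
        capture_output=True,
        check=True,
        text=True,
    )
    report = json.loads(result.stdout)
    return report["jobs"][0]["read"]["iops"]

if __name__ == "__main__":
    print(f"{DEVICE}: {measure_random_read_iops(DEVICE):,.0f} random-read IOPS")
```

Running the same job against an EBS volume and a StorPool-backed volume on the same instance would give a like-for-like IOPS comparison.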

StorPool claims that its software-defined storage’s high performance comes from not being burdened by too many storage functions. Block access has been the main focus – the access mode used by transactional databases (ie, the applications that need IOPS the most), and by operating systems reading and writing the volumes of virtual machines or persistent containers.

“Traditional storage arrays are too complex – not elastic enough for modern use cases,” said Ivanov. “The best measure of performance now is latency – in other words, the speed at which your storage responds to your application or your systems. That is the approach we have taken in developing our software-defined storage.”
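
Latency in this sense can be sampled directly from a block device. The sketch below times individual 4KiB reads issued with O_DIRECT, bypassing the Linux page cache; the device path is a placeholder, the probe needs root privileges, and the approach is generic rather than anything StorPool-specific.

```python
import mmap
import os
import random
import statistics
import time

DEVICE = "/dev/nvme0n1"   # placeholder block device path (requires root to read)
BLOCK = 4096              # read size in bytes; O_DIRECT requires alignment
SAMPLES = 1000
SPAN = 1 << 30            # sample offsets within the first 1GiB of the device

# An anonymous mmap is page-aligned, which satisfies O_DIRECT's buffer rules.
buf = mmap.mmap(-1, BLOCK)

fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
latencies_us = []
try:
    for _ in range(SAMPLES):
        offset = random.randrange(0, SPAN, BLOCK)  # block-aligned random offset
        start = time.perf_counter()
        os.preadv(fd, [buf], offset)
        latencies_us.append((time.perf_counter() - start) * 1_000_000)
finally:
    os.close(fd)

latencies_us.sort()
print(f"median read latency: {statistics.median(latencies_us):.1f} µs")
print(f"p99 read latency:    {latencies_us[int(0.99 * SAMPLES)]:.1f} µs")
```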

Ivanov said enterprises can add third-party file services to StorPool so that a portion of the disk capacity works as NAS. “What is important is that you have a pool of storage that is faster than the basic offering,” he added.

StorPool’s software is installed on at least three servers, which present their drives as a virtual SAN to other machines on the LAN. That is broadly similar to software-defined storage such as VMware vSAN and DataCore SANsymphony, but StorPool claims its code is better optimised and that its performance rests on how it uses the RAM of each node in the cluster.

“We use 1GB of RAM and a complete virtual machine per node to manage up to 1PB of data,” said Boyan Krosnov, technical director at StorPool. “That’s the key to offering better performance than all-flash arrays from Pure Storage or NetApp.”

When applications are deployed on the same server as the StorPool VM, latency to data on another node can be as low as 70μs. That’s just 1.5 times the latency of an application accessing an NVMe drive in the same server directly – which implies direct access latency of roughly 45μs to 50μs. And when the data is on the same server, latency under StorPool is half that of direct access, because StorPool parallelises NVMe access rather than going through the host OS.

“StorPool doesn’t use host OS drivers,” said Krosnov. “It uses drivers we’ve developed ourselves, which provide an optimised RAID layer for the NVMe SSDs and also drive the network cards that connect the nodes.”

In its most recent version – v20 – StorPool supports NVMe-over-TCP, which can connect nodes to external disk shelves or have them operate as a target for other servers on the LAN.
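
On the initiator side, attaching to an NVMe-over-TCP target is a standard Linux operation rather than anything StorPool-specific. A minimal sketch follows, assuming a Linux host with nvme-cli installed; the address, port and NQN are placeholders.

```python
import subprocess

# Placeholder values – replace with the target exported by your storage nodes.
TARGET_ADDR = "192.0.2.10"     # IP address of the NVMe/TCP target
TARGET_PORT = "4420"           # conventional NVMe/TCP port
TARGET_NQN = "nqn.2014-08.org.example:storage-pool-1"

def discover_and_connect() -> None:
    """Discover NVMe/TCP subsystems on a target and connect to one of them."""
    # List the subsystems the target exposes.
    subprocess.run(
        ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT],
        check=True,
    )
    # Attach the chosen subsystem; it then appears as a local /dev/nvmeXnY device.
    subprocess.run(
        ["nvme", "connect", "-t", "tcp",
         "-a", TARGET_ADDR, "-s", TARGET_PORT, "-n", TARGET_NQN],
        check=True,
    )

if __name__ == "__main__":
    discover_and_connect()
```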

NVMe-over-TCP offers low-cost storage networking at speeds that can match NVMe SSDs. StorPool claims that an application connected via 100Gbps Ethernet can actually move data at 10GBps, close to the maximum such a connection allows (100Gbps equates to 12.5GBps of raw bandwidth before protocol overhead).

Elsewhere in v20, StorPool has broken with past practice to offer NFS-based NAS functionality, with a maximum capacity of 50TB.
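
Consuming that NAS capacity from a client is plain NFS. A minimal sketch, assuming a Linux client with NFS utilities installed; the export address and mount point are placeholders.

```python
import os
import subprocess

# Placeholder export and mount point – nothing here is StorPool-specific.
NFS_EXPORT = "192.0.2.20:/exports/fileshare"
MOUNT_POINT = "/mnt/fileshare"

def mount_nfs_share() -> None:
    """Mount an NFS export read-write at the given mount point (requires root)."""
    os.makedirs(MOUNT_POINT, exist_ok=True)
    subprocess.run(
        ["mount", "-t", "nfs", "-o", "nfsvers=4.1,rw", NFS_EXPORT, MOUNT_POINT],
        check=True,
    )

if __name__ == "__main__":
    mount_nfs_share()
```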

StorPool’s headline customers include Nasa, the European Space Agency and CERN. Last year, integrator Atos announced that StorPool would be deployed as the storage for its supercomputer projects.

StorPool is also available on AWS, running on i3en.metal bare-metal storage instances and r5n compute instances. According to a series of tests carried out by the software maker, as measured by UK-based hosting provider Katapult, the setup stayed under four milliseconds of response time with databases handling 10,000 requests per second. By comparison, AWS’s native block storage service, EBS, is limited to 4,000 requests per second, and other competitors to 2,000.
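
The Katapult figures are described only in outline, but the shape of such a test – issue requests at a fixed rate and look at the latency distribution – can be sketched as below. The request function is a stand-in; a real test would call the actual database, and sustaining rates like 10,000 requests per second would need multiple concurrent workers rather than a single loop.

```python
import statistics
import time

def one_request() -> None:
    """Stand-in for a single database request; replace with a real query call."""
    sum(range(1000))  # trivial work so a single loop can keep pace in the demo

def run_fixed_rate(rate_per_s: int = 1000, duration_s: float = 2.0) -> list[float]:
    """Issue requests at a target rate and return per-request latencies in ms."""
    interval = 1.0 / rate_per_s
    latencies_ms = []
    next_send = time.perf_counter()
    deadline = next_send + duration_s
    while next_send < deadline:
        now = time.perf_counter()
        if now < next_send:                 # pace the load to the target rate
            time.sleep(next_send - now)
        start = time.perf_counter()
        one_request()
        latencies_ms.append((time.perf_counter() - start) * 1000)
        next_send += interval
    return latencies_ms

if __name__ == "__main__":
    lat = sorted(run_fixed_rate())
    print(f"requests: {len(lat)}")
    print(f"median:   {statistics.median(lat):.2f} ms")
    print(f"p99:      {lat[int(len(lat) * 0.99)]:.2f} ms")
```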

“The problem with block storage services in the cloud is that they are not elastic,” said Krosnov. “After a certain level of access requests, the server uses other SSDs, meaning SSDs that aren’t directly connected to the PCIe bus. So, your application then passes through the bottleneck of the host OS.”
