Object storage: How research institute got 850TB for €150,000

Source is ComputerWeekly.com

Could you deploy 850TB of storage capacity for €150,000 – and not one euro more?

That was the challenge faced by the Institute of Plant Molecular Biology (IBMP) in Strasbourg, which met it using DataCore’s Swarm S3 software-defined object storage running on commodity hardware.

“Previously, the institute ran an annual budget of €4.4m to support 180 researchers,” says Jean-Luc Evrard, head of information systems at the institute. “That meant that every January we could invest €600,000 in research equipment or IT to support research. But that budget has fallen to around €2.2m, so there was nothing available for investment in IT.

“We had one possible solution – we could ask for emergency help from the state of €150,000. It’s a small amount compared to what we would have had, and once it was spent, there would be nothing left to meet any further need to scale.”

Evrard adds that a leasing arrangement, with small payments spread across the lifetime of the contract, was not an option.

How to make one investment last several years

Since 2015, the IBMP has used DataCore’s SANsymphony software-defined storage, deployed on two redundant Dell Compellent arrays with a total capacity of 210TB. These arrays would ingest about 2TB of instrumentation data a day and Evrard was very satisfied with them.

Evrard likes DataCore’s solely software-defined approach, which works well within the constraint of having to buy hardware from suppliers specified by the organisation. He also praises DataCore’s support, which responds within the hour from its London offices.

The problem was that other constraints on storage emerged as a result of new measuring equipment that produced a much heavier data payload. That brought the need to store 80TB of new data every year, with a retention period of at least 15 years, because the data must remain available for citation in regularly produced scientific publications.

But SANsymphony, whose strength is speed of access, was not suited to this use case: the 2TB it handled each day was only meant to stay on it for a few weeks.

Even object storage busted the budget

So, in 2018, Evrard started looking for a new storage solution that would cost no more than €150,000.

“We quickly became certain that the storage needed to archive our data would have to be object storage,” he says. “In part, that was because it is not expensive, but also because it allows you to label things with a certain amount of metadata. That metadata allows researchers to cite evidence in their work more easily.”
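
In practice, that labelling happens at upload time. The sketch below shows the general idea through an S3-compatible interface such as the one Swarm exposes, using Python’s boto3 client; the endpoint, bucket, key and metadata field names are hypothetical, not the IBMP’s actual configuration.

```python
# Minimal sketch: attach descriptive metadata to an object at upload time via an
# S3-compatible API. All names here are hypothetical, for illustration only.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://swarm.example.org",   # hypothetical S3 gateway address
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

with open("microscope_run_0042.tiff", "rb") as f:
    s3.put_object(
        Bucket="instrument-archive",            # hypothetical bucket name
        Key="2022/run-0042/image-001.tiff",
        Body=f,
        Metadata={                              # example descriptive fields
            "instrument": "confocal-microscope-3",
            "project": "root-development",
            "acquired": "2022-01-17",
        },
    )
```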

Initially, Evrard looked at private cloud solutions. “OVHcloud told us they didn’t know how to do what we wanted,” he says. “We also went and looked at the University of Strasbourg, which had its own datacentre. When we said we had a budget of €150,000, they said they could host our data for three years, but after that, our storage was no longer guaranteed.”

So it was a case of falling back on more permanent solutions.

“Ceph was the solution generally recommended among the research community,” says Evrard. “We evaluated it, but for us it was too complex and would have required a lot of work. What is more, Ceph is much less an object storage solution than a distributed storage solution. In other words, it didn’t fit our needs well.

“One product that corresponded exactly with our expectations was the ActiveScale array, which Quantum had bought from Western Digital.

“But, just as we were making our decision, Quantum changed its business model and it was no longer possible to buy the arrays, only to lease them. And without knowing what our future investment plans would be, we couldn’t run the risk of having to return the arrays at some point.”

Swarm: Storing metadata alongside files

And then DataCore acquired Caringo and its Swarm object storage. The supplier presented the new solution to the IBMP team and it was love at first sight.

“The first good point about the product is that it comes with an integral search engine,” says Evrard. “The second is that we could buy perpetual licences with a guarantee of seven years. Finally, the argument that won us over was that the metadata is not held in a separate database but alongside the files, on the same disk.”

He adds: “When you look at most object storage solutions, you are more or less dependent on a separate database that federates all the metadata. And if that database gets corrupted, you are in trouble.

“With Swarm, if a node goes down and you can’t replace it because you don’t have the parts, it is no longer a problem: all you have to do is move its disks to nodes with enough free slots, and the data is recovered along with its metadata.”
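
Because the archive is reached over an S3-compatible interface, the metadata attached at upload can later be read back with a standard head request, without downloading the object itself. A minimal sketch, reusing the same hypothetical names as above:

```python
# Head request against the same hypothetical endpoint and object as in the
# earlier sketch; it returns the user metadata without transferring the body.
import boto3

s3 = boto3.client("s3", endpoint_url="https://swarm.example.org")
resp = s3.head_object(Bucket="instrument-archive", Key="2022/run-0042/image-001.tiff")
print(resp["Metadata"])   # e.g. {'instrument': 'confocal-microscope-3', ...}
```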

The big task: Choosing what types of metadata to capture

So the IBMP invested its €150,000 in 10 storage nodes – deployed on Dell servers – that run under Swarm with 850TB of usable capacity (1.4PB raw). Three operate as controllers to handle access, while seven contain the bulk capacity.
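
The gap between the 1.4PB raw and 850TB usable figures is the overhead of protecting data across the nodes; the article does not say which protection scheme (replication or erasure coding) was chosen. A rough calculation of the ratio:

```python
# Rough arithmetic on the capacity figures quoted above; the protection scheme
# behind the overhead is not stated in the article.
raw_tb = 1400      # 1.4PB raw across the capacity nodes
usable_tb = 850    # usable capacity after protection overhead

print(f"usable/raw ratio: {usable_tb / raw_tb:.0%}")   # roughly 61%
```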

Evrard’s team will get the system operational in the first quarter of 2022. “The delay has been due to a few factors, including IT team training and defining storage policy,” he says.

“Our biggest task is to list which metadata our researchers need to provide to ensure documents are easily found in future. Notably, that includes metadata that can provide proof of date and location. We think we may also develop a custom web interface that simplifies capture of this metadata for researchers.”
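
Such an interface could, for example, refuse an upload until a minimal set of descriptive fields has been filled in. A hypothetical sketch of that validation step; the required field names are invented for illustration:

```python
# Hypothetical validation a metadata-capture interface might run before accepting
# an upload; the required field names are invented for illustration.
REQUIRED_FIELDS = {"researcher", "project", "instrument", "acquired"}

def validate_metadata(metadata: dict) -> list:
    """Return a list of problems; an empty list means the metadata is acceptable."""
    problems = [f"missing field: {name}" for name in sorted(REQUIRED_FIELDS - metadata.keys())]
    problems += [f"empty field: {key}" for key, value in metadata.items() if not str(value).strip()]
    return problems

# Example: a submission without an acquisition date is rejected with a clear message.
print(validate_metadata({"researcher": "J. Dupont", "project": "root-development",
                         "instrument": "confocal-microscope-3"}))
```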

Existing confidence in DataCore

Evrard has not yet been able to step back and evaluate Swarm properly, but he already has some faith in DataCore products.

“For example, SANsymphony runs on two Dell Compellent arrays, but when the supplier deployed them, they carried the wrong hardware serial number,” he says. “That meant updates stopped working, and we only realised after nearly a year had passed, by which time the array held 100TB of essential data. To fix this error, we had to erase the contents, carry out a factory reset and then reinstall the data.

“Thanks to SANsymphony, the operation was very simple. The system momentarily switched all production to the secondary array while the primary one was re-initialised. Then, rehydrating data from the secondary array to the primary was taken care of automatically. No user realised what was happening during this process.”

Next: Redundant arrays to reduce datacentre costs

The IT systems chief doesn’t necessarily know what investments he will be able to make in future, but he has a precise idea of the direction he’d like to take.

“Today, the bulk of our IT costs go on datacentre security equipment, notably on electrical generators,” says Evrard. “If I have the means to invest in storage, I will distribute the Swarm arrays between Strasbourg, Nancy and Reims, which are connected by dark fibre. With Swarm, the datacentres will replicate between themselves, so there will always be one to fail over to in case of an outage.

“By doing this, I won’t have the same need for physical security on the arrays as today. And with the money saved on the generators, it won’t be more TB that I’ll buy, but PB!”
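
To illustrate the failover idea in the simplest terms: if the same bucket is reachable through each site’s S3 endpoint, a client only has to switch endpoints when one site stops responding. This is a generic client-side sketch with hypothetical endpoint names, not Swarm replication configuration:

```python
# Generic client-side failover across three S3-compatible sites: try each
# endpoint in turn and return the object from the first one that answers.
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

# Hypothetical endpoints for the three datacentres mentioned above.
SITES = [
    "https://swarm-strasbourg.example.org",
    "https://swarm-nancy.example.org",
    "https://swarm-reims.example.org",
]

def fetch(bucket: str, key: str) -> bytes:
    """Return the object's contents from the first site that can serve it."""
    for endpoint in SITES:
        try:
            s3 = boto3.client("s3", endpoint_url=endpoint)
            return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        except (EndpointConnectionError, ClientError):
            continue   # site unreachable or request failed; try the next one
    raise RuntimeError("no site could serve the object")
```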
