Pure CTO drills down on key-value stores, no DFMs in FA//ST, and fast object storage

Source is ComputerWeekly.com

In this podcast, recorded at last week’s Pure//Accelerate 2025 event, we talk to Pure Storage’s chief technology officer (CTO), Rob Lee, to get a drill-down on the use of key-value stores in the company’s Purity flash storage operating system, why it doesn’t use its much-trumpeted DirectFlash Modules in the newly announced FlashArray//ST, and what makes its fast object storage so fast.

Can you explain how a key-value store is used in Pure’s storage products? 

The reason we use a key-value store – and I’ll get into the benefits – is much the same reason that, when you’re organising large amounts of information, you use a relational database.

You organise your information in a very orderly way into tables. You can build indexes. You can look things up very efficiently. You tend to do that instead of just storing piles and piles of data unorganised and making it very hard to look up. 

Now, the genesis of using key-value stores in our products and software goes back to day one of the company.

One of the things we did very differently than everybody else in the market was we designed and rethought storage software, inclusive of file systems and how we map logical blocks to physical locations. We rethought how you build those for how flash works at the most native level. One of the key considerations with flash is, unlike magnetic hard disks, you don’t overwrite the contents in place.

To change the contents, you have to write a new copy and you have to garbage collect the old thing. And when you’re doing this, you want to avoid rewriting the same piece of flash over and over again and burning out the media.
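
To make that out-of-place write pattern concrete, here is a minimal Python sketch – a toy illustration, not Pure’s implementation – of appending a new copy of a block on every update and garbage collecting the stale copies afterwards:

```python
# Toy model of flash behaviour: updates never overwrite in place; they append a
# new copy, and old copies are reclaimed by garbage collection later.

class FlashLog:
    def __init__(self):
        self.pages = []          # append-only "flash" pages
        self.live = {}           # logical block -> index of its current page

    def write(self, logical_block, data):
        # Out-of-place write: always append a new physical page.
        self.pages.append(data)
        self.live[logical_block] = len(self.pages) - 1

    def garbage_collect(self):
        # Keep only pages still referenced by some logical block.
        keep = set(self.live.values())
        remap, new_pages = {}, []
        for idx, page in enumerate(self.pages):
            if idx in keep:
                remap[idx] = len(new_pages)
                new_pages.append(page)
        self.pages = new_pages
        self.live = {blk: remap[idx] for blk, idx in self.live.items()}

log = FlashLog()
log.write("blk-7", b"v1")
log.write("blk-7", b"v2")   # the rewrite lands on a new page; the old one is now garbage
log.garbage_collect()
```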

Well, it turns out that by organising our metadata – which is effectively that mapping, if you will, of file names and file system structure to physical location – in a key-value store, there are lots of great techniques from the research community to minimise that write amplification – the number of times that we have to rewrite that metadata structure and maintain it over time.

So that was the key – no pun intended – insight driving us to organise our metadata in key-value stores. 
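
A rough sketch of why a key-value layout helps here, loosely modelled on log-structured merge (LSM) techniques from the research literature rather than on Purity’s actual internals: updates to the mapping are buffered in memory and flushed in sequential batches, instead of rewriting a metadata structure in place on flash for every change.

```python
# Assumed simplification, not Pure's code: an LSM-style key-value store for the
# logical-to-physical mapping, batching updates to cut write amplification.

class MetadataKV:
    def __init__(self, flush_threshold=4):
        self.memtable = {}        # recent updates, held in RAM
        self.segments = []        # immutable flushed segments, newest last
        self.flush_threshold = flush_threshold

    def put(self, key, physical_location):
        self.memtable[key] = physical_location
        if len(self.memtable) >= self.flush_threshold:
            # One sequential write covers a whole batch of metadata updates.
            self.segments.append(dict(self.memtable))
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for segment in reversed(self.segments):   # newest segment wins
            if key in segment:
                return segment[key]
        return None

kv = MetadataKV()
kv.put(("fs1", "/home/alice/report.doc"), ("drive-3", 0x1A2B))
print(kv.get(("fs1", "/home/alice/report.doc")))
```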

The second piece though – and again, borrowing from the database world and why you see so many key-value stores used at cloud scale – is they make it really, really easy to partition and distribute and create concurrency and parallelism.

And so when you look at FlashBlade and why FlashBlade is uniquely good at metadata performance – I talked a bit about this on stage [at Pure//Accelerate 2025] with FlashBlade EXA – it’s because we store all that metadata in a key-value store, which allows us to scale out performance very linearly and with very high concurrency, in a way you simply can’t do with other data structures.

To draw a very simple comparison, historically, most storage systems have organised their data in a tree-like structure. Well, if you think about how you look something up in a tree, you start at the top, you go left, you go right, you go left, you have to follow it step by step by step. 

[It’s] very hard to parallelise that, right? With a key-value store, you can take advantage of the media – the flash – and its parallel access, you can take advantage of our distributed technology, and you can look stuff up at very high speed, with very high concurrency.
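
A toy comparison of the two access patterns – hypothetical names, not Pure’s code: in a hash-partitioned key-value store, a single hash identifies the right partition, so many lookups can proceed concurrently with no pointer chasing down a tree.

```python
# Illustrative only: lookups against a key-value store partitioned by key hash
# can be issued in parallel, since each lookup touches exactly one partition.

from concurrent.futures import ThreadPoolExecutor

NUM_PARTITIONS = 4
partitions = [{} for _ in range(NUM_PARTITIONS)]   # e.g. one partition per blade

def put(key, value):
    partitions[hash(key) % NUM_PARTITIONS][key] = value

def get(key):
    # One hash tells us which partition to ask - no step-by-step tree walk.
    return partitions[hash(key) % NUM_PARTITIONS].get(key)

for i in range(1000):
    put(f"object-{i}", f"location-{i}")

# Many lookups run concurrently because each is independent of the others.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(get, (f"object-{i}" for i in range(1000))))
```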

Does the key-value store come into operation only at the metadata level, and is the file system intact elsewhere, running in parallel with that? 

What’s really nice about how we’ve built our software is we use one approach to managing the file system metadata, the user metadata, as well as our more physical metadata, if you will. All of the mappings between … as you know, we do data reduction, right? Well, when you do data reduction and you find deduplication, you have to keep a mapping that says, “Oh, I don’t have this block physically stored here, there’s a separate copy over there.” 

Well, that’s a mapping; we put that in the key-value store. So, by using the same approach to managing all our metadata, well, A, it’s less software to write, B, we can make that really, really robust and really, really performant, but then C, all of the parts of our system, whether it’s the file system, whether it’s our physical media management, get the benefits of the properties I just discussed.
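
A minimal sketch of that deduplication mapping, assuming a simple content-hash scheme for illustration rather than Purity’s actual layout: the “this logical block is really a reference to a copy stored over there” relationship is just another entry in the key-value store.

```python
# Assumed dedup sketch, not Pure's implementation: duplicate blocks map to one
# stored copy via a content hash held in the metadata key-value store.

import hashlib

block_store = {}      # content hash -> physical location (the single stored copy)
logical_map = {}      # (volume, logical block) -> content hash

def write_block(volume, lba, data):
    digest = hashlib.sha256(data).hexdigest()
    if digest not in block_store:
        block_store[digest] = f"physical-slot-{len(block_store)}"
    # A duplicate write points at the existing copy instead of storing it again.
    logical_map[(volume, lba)] = digest

write_block("vol1", 100, b"hello world")
write_block("vol2", 555, b"hello world")   # dedup hit: resolves to the same slot
assert block_store[logical_map[("vol1", 100)]] == block_store[logical_map[("vol2", 555)]]
```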

Pure makes a big deal about the use of its DirectFlash Modules (DFMs) and the capacity they can achieve, and yet the newly announced FlashArray//ST doesn’t use them. What does it use and why? 

So it’s a great question, and look, I’ll start with the philosophy we have behind our hardware. I think it was Steve Jobs who said, if you want to build really good software, you’re going to build hardware to support that, and that’s really kind of our philosophy. If you look at DirectFlash, what makes DFMs work is the software that enables them.

We try to put very little into the DFM hardware; it’s there to enable the software.

Now, the reason why we are not using DFMs in FA//ST today, or in the data path of FlashBlade EXA, is that with any design you design for a range of the design space – performance, efficiency, cost, and so on – and our DFMs are very much designed for a very wide range of enterprise needs in terms of efficiency, capacity, etc.

With FA//ST and with FlashBlade EXA, we’re aiming at the ultra, ultra high end, the top end of performance, and we simply haven’t designed our drives for that top tier of ultra performance. There’s a broader ecosystem of hardware vendors that do have more specialised products for those parts of the design space, and so where it makes sense to tap into that component market, we’re going to go do that.

What specifically do DFMs not have that you have in the hardware that you’re using there? 

Well, again, there’s no specific component that you’re going to go point to. It’s a question of how we’ve optimised the design of the DFM and how it’s used, trading off latency, power, space, capacity, etc, and we just haven’t optimised those for microsecond latencies because, again, that’s not the part of the design space they’re aimed at.

Another of the things that Pure executives often refer to is the ability to provide very high-performance object storage. I’ve never really heard an explanation of why that is possible. Is it just a case of chucking resources at it, or is there something else there?

Absolutely, and we actually hit on it in the first part of the podcast. A lot of it has to do with how we organise our metadata in a key-value store, and that drives a ton of performance for us.

If we walk this back and we look at the legacy approaches to object storage, object storage in the enterprise has grown up in the era of cheap and deep. Folks have tried to implement the object protocols typically on top of a file system, on top of an underlying block device.

You have layers and layers and layers of inefficiency. So, number one is we implement object storage natively. There are not layers and layers of performance-sucking inefficiency. And then number two is, if you look at performance in two pieces, one is the metadata, the administrative work, and then the data piece. Meaning, I want to look something up, I’ve got a name, I’ve got to figure out where it is, and once I figure out where it is, then I have the data piece of actually loading and transferring the data.

It turns out with modern object workloads, the administrative piece, the metadata, ends up being a very large portion of the overall performance demands. And again, that’s where our native approach – not layering it on top of a file system, and having a highly distributed, highly parallel key-value store – allows us to deliver that performance. And then certainly we have a really fast data path. That’s been there since day one.
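
A minimal sketch of those two phases, with made-up names standing in for the metadata index and the data path – not a real S3 or FlashBlade API: an object read first resolves the name to a location, then transfers the bytes; serving that first phase from a fast key-value store is what keeps metadata-heavy workloads quick.

```python
# Illustrative two-phase object read: metadata lookup, then data transfer.
# All names here are assumptions for the sake of the example.

object_index = {}     # object name -> (placement info, size); the "metadata" store
data_blobs = {}       # placement info -> bytes; stands in for the data path

def put_object(name, data):
    placement = f"segment-{len(data_blobs)}"
    data_blobs[placement] = data
    object_index[name] = (placement, len(data))

def get_object(name):
    placement, size = object_index[name]        # metadata phase: one key-value lookup
    return data_blobs[placement][:size]         # data phase: read and transfer the bytes

put_object("bucket/train/shard-0001.tfrecord", b"...training data...")
print(get_object("bucket/train/shard-0001.tfrecord"))
```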

