In this podcast, we talk to Pure Storage’s Venkat Ramakrishnan about the challenges customers face with storage and data protection for containers.
Ramakrishnan, vice-president of products and engineering for Portworx, talked about customers who take on container deployments without thinking through the future scale, technical requirements and cost they are likely to accrue.
Here, Ramakrishnan warns against DIY and open source approaches, as well as the in-house skills requirements that can pile up as container deployments become more numerous and complex.
What are the key challenges for customers in storage and data protection for containers?
Before we jump into it, let’s think about the why of it. More often than not in the tech industry, we spend a lot of time talking about the what, but we don’t spend enough time on the why.
Why should people use containers? Why should people run Kubernetes? On a very fundamental level, containers deliver application and data portability – you can build your app anywhere and run it anywhere, […] and that gives agility.
What agility leads to is speed. But when you increase the speed, when you drive more velocity, that means you’re supporting a lot of different application teams trying to build and iterate on their apps a lot faster. What that means is that you need to deliver a lot more automation.
The scale of a container-based deployment – Kubernetes-based deployment – can very rapidly become much bigger than what organisations are used to handling because they’re trying to give all these benefits and they have to support a lot of teams.
When organisations get to that scale, they should have enough tools to let them automate most of their day-to-day tasks, most of their day-to-day operations, most of their maintenance. One of the big challenges for enterprises is that lack of automation.
Kubernetes tries to automate the container runtime. Containers give portability, but there’s a lack of automation around how to orchestrate these applications – how to deliver the great performance they need, how to manage them, and how to protect them so that you, as an admin, don’t have to get involved and protect every container and every app. Instead, you give application teams enough tools so they can declaratively and programmatically get access to those services without ever having to file a ticket.
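As an illustration of that declarative, no-ticket model, here is a minimal sketch using the official Kubernetes Python client, in which an application team requests storage through a PersistentVolumeClaim and the platform provisions it automatically. The namespace, storage class name and volume size are illustrative assumptions, not details from the interview.

```python
# A minimal sketch of self-service, declarative storage provisioning.
# The namespace, storage class name ("fast-replicated") and size are assumptions.
from kubernetes import client, config


def request_volume(namespace: str, app: str, size: str) -> None:
    """An application team requests storage declaratively -- no ticket filed."""
    config.load_kube_config()  # or config.load_incluster_config() inside a pod

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=f"{app}-data", labels={"app": app}),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="fast-replicated",  # assumed class published by the platform team
            resources=client.V1ResourceRequirements(requests={"storage": size}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace, pvc)


if __name__ == "__main__":
    # The storage layer provisions the volume automatically; no admin intervenes.
    request_volume(namespace="team-payments", app="orders-db", size="100Gi")
```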
You have all this velocity, scale and automation, and then if it gets blocked by somebody having to file a ticket and wait for it, that’s a huge blocker. The big challenge for companies is a lack of automation in many of the tools. The other big challenge is the ability to support different platforms. That means neutrality. The promise of containers is you build it anywhere and run it anywhere.
But the promise cannot be fulfilled if you don’t have a stack that delivers neutrality to you. Democratising the underlying infrastructure is a key pain point. And without that democratised underlying infrastructure, despite using containers and Kubernetes, organisations are being held back.
The third thing is security because a Kubernetes-based platform tries to bring a lot of developers and application teams into either a shared Kubernetes cluster or a dedicated Kubernetes cluster. You’re building serious businesses on top of Kubernetes using containers. How do you ensure these applications are secure? How do you ensure the data that goes over the network is secure? How do you ensure the different application teams on the same shared platform have no data leaking?
For example, you don’t want the sales teams’ apps to be able to see the HR apps. You don’t want, for example, somebody else’s commission data that is going into finance to be visible to somebody in engineering. How do you build those multi-tenant apps? That’s a big issue. So, these are some of the major challenges in adopting containers and Kubernetes.
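As an illustration of that kind of isolation on a shared cluster, here is a minimal sketch, again with the Kubernetes Python client, that gives each team its own namespace and a NetworkPolicy admitting traffic only from pods in the same namespace. The team names and labels are illustrative assumptions, not anything prescribed by Portworx.

```python
# A minimal sketch of namespace-level tenant isolation on a shared cluster.
# Team names and label keys are illustrative assumptions.
from kubernetes import client, config


def isolate_team(team: str) -> None:
    config.load_kube_config()
    core, net = client.CoreV1Api(), client.NetworkingV1Api()

    # One namespace per team (e.g. "sales", "hr", "finance").
    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name=team, labels={"team": team}))
    )

    # Admit ingress only from pods in the same namespace, so sales apps
    # cannot reach HR apps and vice versa.
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="same-namespace-only", namespace=team),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),  # applies to every pod in the namespace
            policy_types=["Ingress"],
            ingress=[client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(pod_selector=client.V1LabelSelector())]
            )],
        ),
    )
    net.create_namespaced_network_policy(team, policy)


if __name__ == "__main__":
    for team in ("sales", "hr", "finance"):
        isolate_team(team)
```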
Those seem to be issues that result from the benefits of containers and Kubernetes. How do customers start to tackle those kinds of issues?
This is a conversation I have had with customers many times. I’ve seen customers try different things and fail, and I always wish I had spoken to them when they were early in their journey.
Such as? Have you got any interesting failures?
There’s many of them. There are customers who think, ‘A particular storage interface is good enough for me; I can bring everything.’ And they soon realise, ‘No, that interface isn’t the right scale for me.’
Or they will say, ‘I could try to save money by just using everything open source.’ And they realise free is really not free, because when it comes to running mission-critical enterprises, you need someone who can give you 24x7x365 support, who can maintain the software, keep updating it and give you the capabilities to continue to leverage it.
You don’t have to build up technical debt yourself. You don’t need to hire an army of developers to run it. Hiring good developers is a hard job – it’s a really hard job to find good developers, good engineers, and the skillsets are in constant demand. So, when somebody takes up all of this DIY, they’re essentially signing up to maintain the tool chain, to [take on] the technical debt. That’s a problem even for large companies like Google and Microsoft. All these companies struggle with tech debt, and enterprises are not geared up for that.
I’ve seen customers try to take on the technical debt, just get buried in it and then say, ‘Okay, get me out of it.’ They don’t anticipate scale. They think they’re going to just run a few thousand containers. Lo and behold, suddenly they’re running 100,000 containers and they’re teetering on the edge, and there are too many failures.
Sometimes we go on this rescue mission, and we rescue the customer from their misery and then put them on the right path. The thing I generally recommend to customers is: don’t think tactically, think strategically. Don’t just look at the need in front of you today, build a tactical solution and then try to scale it up to your strategic needs. Go with proven pathways. Look at a proven pathway and go with that. And many times what you consider free is not really free; what you pay for is actually what will pay for itself.
Pick solutions that pay for themselves. It lets you control costs, it lets you cut down your operational expenditure and it lets you get more infrastructure. These kinds of solutions pay for themselves. So, while you end up paying [rather than getting it for] free, it eventually becomes free because the solution pays for itself. It delivers more efficiencies that save costs and free up your organisation so you can go and build other things.
Something I tell customers is to think more strategically and be smart about those choices. We coach customers on how to think through the entire life cycle. What happens if one developer brings an app? What happens if 10 application teams, 100 application teams, bring their apps? What if they have different business continuity requirements? What if they have different performance requirements? What about different security requirements? One application team might want to encrypt their data at rest or in flight, and another might want just multi-tenancy.
How do you deal with all of that? That’s one thing I coach customers on. [Also] think through an entire customer journey. What happens when the app is retired? Or when they have to upgrade to a new app? And what happens when they have to bring data from production to test, to test their new version of the app with production data? Those are all things they discover as they start looking for solutions.
They try to shoehorn the solution into what they have built, and it becomes a mess – a complex hodgepodge of a solution that they integrated. We tell customers to pick a platform that is simple enough for you to just single-click, that delivers all these capabilities, so you can focus on innovating and building apps, rather than forever tinkering with the DIY infrastructure you have built.