Cloud lock-in: Why the microservices cure might be worse than the problem

Commentary: Microservices can be a great way to drive enterprise agility, but they can also create all sorts of costs for those trying to avoid cloud lock-in.

You want zero cloud lock-in? Well, it’s going to cost you. 

At least, that’s one response to Gert Leenders’ suggestion that the path out of lock-in is microservices spread across multiple accounts. The problem with this approach, countered David Abrams, is that “you’ll end up with ginormous data transit costs and latency of at least 30x what you’d be able to achieve by keeping everything on a virtual LAN.” Not good. 

But also not necessary, according to Dave Unger. “Companies consuming cloud almost never care [about lock-in]. They want velocity and efficiency, not flexibility.” That echoes a point made earlier by Expedia exec Subbu Allamaraju: The route out of lock-in isn’t really technical so much as process-driven; it lies in “embrac[ing] techniques like service orientation, asynchronous and decoupled communication patterns, micro-architectures, experimentation, failing fast, tolerance for mistakes, chaos engineering, constant feedback and continuous learning.”


Which one is the blue pill?

It’s fun to talk about lock-in at the marketing level. I’ve been doing it for years, given that I’ve mostly worked for open source startups since 2000. But for most companies, most of the time, “lock-in” isn’t their biggest concern. Delivering customer value is. Faster. Better. Cheaper. 

Those things are real. Option value, or freedom from lock-in, really isn’t. Not in a deep, “I need to ship this product yesterday” sort of way.

Christian Reilly captured this well: “The problem is, and always will be, the lack of fungibility. Fungibility doesn’t drive revenue. The great promise of agnostic providers died as soon as the commodity concepts went out of the window. Smart folks use best-of-breed. The idea of real multi-cloud is lunacy.” Rick Ochs explained why: “[M]ost companies are more interested in elasticity, agility, and the ability to dynamically control their spend. Unfortunately avoiding vendor lock-in tends to lead you down a path of limited elasticity/agility, because you are stuck on vanilla IaaS and afraid of adv[anced] tech.”

Put more baldly by James Emerton: “Depends on your scale, but for a small shop [the microservices spread over multiple accounts approach] sounds like a bunch of extra work for zero tangible benefit.” You get little to no access to the higher-order cloud services that make Google Google, for example, and instead are saddled with punitive bandwidth costs. All to avoid the specter of lock-in (which, by the way, is impossible to escape). 

So what should you do?

Finding the balance

“Like all interesting design decisions, it’s a trade-off; [there is] no universal right answer,” noted Matt Cline. “I’m sure there are contexts where using only core primitives [like basic compute and storage] makes sense. But I’d suggest they are probably the exception, and if you are one of those exceptions, it’ll be pretty obvious.” In other words, “it depends.” 

For example, there may be compelling reasons for you to build your artificial intelligence (AI) application on Google’s unified machine learning (ML) platform. That’s going to “lock” you into BigQuery, AutoML and TensorFlow Enterprise, running on GCP’s TPUs and/or GPUs. Guess what? If it works, you’re not going to fret about lock-in, because the customer value generated far outweighs any concern that the application you just built depends heavily on GCP-specific products. To get the most value from any of these cloud providers, you almost certainly will have to leverage their native services. 
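To make that dependence concrete, here is a hypothetical sketch (the project, dataset and model names are made up for illustration) of serving a prediction straight out of BigQuery ML. Both the client library and the ML.PREDICT SQL are BigQuery-specific, which is exactly the kind of lock-in being traded for velocity.

```python
# Hypothetical sketch: a churn prediction served from BigQuery ML.
# Project, dataset and model names are illustrative, not real resources.
from google.cloud import bigquery  # GCP-specific client library

client = bigquery.Client(project="my-gcp-project")  # assumed project ID

query = """
SELECT customer_id, predicted_churn
FROM ML.PREDICT(
  MODEL `my-gcp-project.analytics.churn_model`,          -- hypothetical BigQuery ML model
  (SELECT * FROM `my-gcp-project.analytics.recent_activity`)
)
"""

# Run the query and print predictions; there is no drop-in equivalent of this
# call path on another provider, which is the lock-in in question.
for row in client.query(query).result():
    print(row.customer_id, row.predicted_churn)
```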


Or, as Allamaraju has said, “You can’t take advantage of all the new capabilities to innovate for your business while staying agnostic to the platform.”

Maybe the answer is, as Allamaraju went on to argue, that you embrace cloud-specific innovations while building option value through process agility. Or maybe you use different clouds for different kinds of applications, preserving option value that way. Maybe you do, in some instances, build on an open source primitive (like PostgreSQL, as I’ve written) while also taking advantage of cloud-specific managed services that are compatible with PostgreSQL, as sketched below. Relatedly, maybe you self-manage Kubernetes to get the same management primitives across different clouds.
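Here is a minimal sketch of the PostgreSQL option, assuming a stock psycopg2 driver, a hypothetical orders table and a DATABASE_URL environment variable of your own choosing. Because the application speaks plain PostgreSQL, pointing it at a self-managed instance or at a PostgreSQL-compatible managed service is a connection-string change rather than a rewrite.

```python
# Minimal sketch: the app talks standard PostgreSQL through a stock driver,
# so the provider-specific detail is confined to the connection string.
# DATABASE_URL and the orders table are assumptions for illustration.
import os

import psycopg2  # standard PostgreSQL driver; works against any wire-compatible service


def fetch_recent_orders(limit: int = 10):
    # The DSN is injected from the environment, e.g.
    # postgresql://user:pass@host:5432/shop (self-managed or managed service).
    dsn = os.environ["DATABASE_URL"]
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            # Plain SQL against a vanilla PostgreSQL schema; no proprietary extensions.
            cur.execute(
                "SELECT id, total_cents, created_at FROM orders "
                "ORDER BY created_at DESC LIMIT %s",
                (limit,),
            )
            return cur.fetchall()


if __name__ == "__main__":
    for row in fetch_recent_orders():
        print(row)
```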

There are many options. Each of the approaches mentioned above, and many others not covered here, involves trade-offs. There is no magic solution that gives you the best of every cloud provider while ensuring perfect portability. Technology doesn’t work that way. 

But one thing is different about cloud, no matter which provider you choose: The cloud revenue model depends upon happy, committed customers who keep spinning those CPUs, storing that data, etc. This is perhaps the ultimate antidote to lock-in. It’s not perfect. It’s not free. But it’s much better than the shelfware lock-in we used to have to deal with.

Disclosure: I work for AWS, but the views expressed herein are mine.
