Reverse cloud migrations: Why some enterprises are shifting their IT back on-premise

With cloud repatriation, it is easy to argue that the wrong workloads were migrated or poorly managed in the first place – yet requirements such as skills, security and costs are not always predictable.

Some repatriations are inevitable – providers can always fail or regulations change, lower latency may be needed, and of course, kit has to be on-premise somewhere.

Paul Flack, director of solutions sales at public sector-focused Stone Group, says a hybrid model seems best overall for many, especially when customers worry about loss of “physical” control too. At the same time, risk in the cloud varies, depending partly on the provider.

Stone sees some repatriations, typically down to cost, security or skills issues.

“They try to get everything in the cloud, and it doesn’t quite meet their expectations and they end up dragging things back in,” says Flack. “When it turns out expensively, the first thing is, they go: ‘help, we need that back on-site’.”

A lack of visibility can be disconcerting, and teams may not know how to manage cloud workloads. But recruiting cloud engineers and architects is expensive, adding to the cost.

“The cost of that person and that team goes up with it, as well as potentially the different types of cost that you manage on a consumption basis versus a capex model,” says Flack.
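
As a purely illustrative sketch of the trade-off Flack describes – every figure below is invented rather than taken from any real bill or salary survey – a three-year comparison of the two cost models might look something like this in Python:

    # A rough three-year comparison of the two cost models Flack contrasts: a
    # consumption (opex) bill plus the premium for cloud skills, versus hardware
    # bought up front (capex) plus running costs. Every figure is an invented
    # placeholder, not a benchmark.

    YEARS = 3

    cloud_monthly_bill = 40_000              # hypothetical consumption charges per month
    cloud_skills_premium_per_year = 120_000  # hypothetical extra salary cost for cloud engineers

    onprem_hardware_capex = 900_000          # hypothetical up-front hardware spend
    onprem_running_cost_per_year = 150_000   # hypothetical power, space, maintenance, support

    cloud_total = (cloud_monthly_bill * 12 + cloud_skills_premium_per_year) * YEARS
    onprem_total = onprem_hardware_capex + onprem_running_cost_per_year * YEARS

    print(f"Cloud (opex) over {YEARS} years:    {cloud_total:,}")
    print(f"On-prem (capex) over {YEARS} years: {onprem_total:,}")

The point is not the numbers themselves, but that the people cost moves with the platform choice and rarely appears in the original business case.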

Bharat Mistry, technical director at security supplier Trend Micro, agrees, pointing out that some customers assume cyber security will all be taken care of by the cloud provider.

“The reality is, it depends on a number of things and your appetite for responsibilities,” says Mistry. “Often the dividing line isn’t clearly understood.”

If people take infrastructure as a service (IaaS), they can assume everything is protected, but typically there is a limit, for instance when it comes to patching and data responsibilities. 

“The provider may have things like firewall services that you can use – but are they equivalent to the kind of firewall that you may have had on-premise?” says Mistry. “Quite often, it’s rudimentary.

“If you haven’t done your due diligence and homework properly, you have to bolster it with something else on top.”
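
To show how basic such built-in controls can be, here is a minimal sketch of a typical cloud-native firewall rule – assuming AWS and the boto3 library, with a placeholder security group ID and address range. It allows one port from one source range and nothing more; the inspection features of an on-premise next-generation firewall are simply not part of it:

    # Minimal sketch of what a typical cloud-native "firewall" amounts to: a
    # stateful allow rule on a port and source range. The group ID and CIDR are
    # placeholders. Deep packet inspection, IPS and URL filtering that an
    # on-premise next-generation firewall might provide are not included.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-2")

    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",   # placeholder security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office range"}],
        }],
    )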

Traditionally, organisations would have done a full risk assessment and penetration testing, or gone on-site to explore checks and balances. But you cannot easily walk inside Microsoft Azure or Amazon Web Services (AWS) datacentres.

Sold on massive cost savings

Jeff Denworth, co-founder of storage supplier Vast Data, notes that executives can easily be sold on the idea of massive cost savings. This is not entirely their fault – huge marketing efforts over the past 10 years have positioned the cost benefits of cloud up front.

“Everybody at C-level loves the idea, like ‘oh, we want to save trillions of dollars. Thank you, Amazon, for saving us, blah, blah, blah’,” says Denworth. “Then the IT team start rationalising how it can be executed – and have to go and refactor all their code.”

An overall “lift and shift” of virtual machines, storage, networking and so on into public cloud “may be about five times what they were spending” previously, while the likes of Basecamp went cloud native before things like Kubernetes on-prem were an option, says Denworth.

Today, if customers need continuous utilisation of a service, they can probably plan capacity and build something for themselves more affordably, he adds.

“To get a good discount from Amazon or Google or Microsoft, you may have to go and plan capacity with them, which is the same as doing it on-prem,” he says. “They just charge you more for that. But if you want GPUs as a service, it’s almost impossible to get an allocation of more than five to 10 of these in any region. It becomes this elastic service that you can’t plan on because there is almost no elasticity.”
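
A back-of-the-envelope way to see Denworth’s continuous-utilisation point, using invented hourly rates rather than any provider’s real pricing: on-prem capacity costs the same whether it is busy or idle, while on-demand cloud is billed only for the hours used, so the break-even depends almost entirely on utilisation.

    # Illustrative utilisation break-even. Both hourly rates are invented
    # placeholders, not real list prices.
    on_demand_rate = 3.00        # hypothetical cloud price per instance-hour
    onprem_cost_per_hour = 1.10  # hypothetical amortised on-prem cost per hour
    hours_in_month = 730

    for utilisation in (0.10, 0.25, 0.50, 0.75, 1.00):
        cloud = on_demand_rate * hours_in_month * utilisation
        onprem = onprem_cost_per_hour * hours_in_month   # paid regardless of use
        cheaper = "cloud" if cloud < onprem else "on-prem"
        print(f"{utilisation:>4.0%} utilised: cloud {cloud:8.0f} vs on-prem {onprem:8.0f} -> {cheaper}")

With these made-up rates, cloud wins comfortably at low utilisation and loses once the service runs flat out – which is exactly the planning exercise Denworth says ends up looking like on-prem anyway.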

A June 2022 survey by IDC (sponsored by hardware supplier Supermicro) reports that 71% of respondents expect to either partly or fully migrate workloads in public cloud into a dedicated IT environment over the next two years – down from about 85% in 2019.

A further 10% said they had moved into a hybrid environment, with only 13% indicating they would “run fully” in public cloud. Figures were based on 2,325 respondents, totalling 7,487 workloads, in IDC’s 1H21 Servers and storage workloads survey.

Jaco Vermeulen, chief technology officer at consultancy BML Digital, confirms a steady trickle of repatriations from cloud. “While there are some exceptions, they can struggle to wrap their heads around cloud,” he says. “They can believe the only way they can ensure resilience is by double-building it themselves.”

Concerns about security and loss of control (or authority) can take hold. C-level execs may also favour rebuilding a team that puts human “bums on seats” around them – a kind of presenteeism or even empire-building, if you will.

What you cannot see, you cannot manage, they think – and Vermeulen says this sometimes means they also max out on storage of data that they don’t really need, “just in case”.

Some feel they can provide better monitoring of their own datacentre, ensuring uptime and active switching to failover, arguing that customers expect this. But Vermeulen says some US military organisations operate in the cloud, and if they don’t need that visibility, why would a non-real-time consumer business?

“And if they can justify building out a datacentre, it is an opex that will be written down over time,” he says. “But it’s really a matter of ‘I don’t trust cloud’, often when there’s a change of regime – although there is certainly an element of cloud cost delusions.”

Vermeulen also works on a fair few M&A transactions, finding that some firms don’t think to do due diligence on IT ahead of time – they assume IT can be easily merged. Post-merger, they discover things don’t play together nicely and cannot follow through on their intent.

Organisations need to accept some downtime with cloud services, and if they cannot, maybe public cloud is not for them, says Chris Roberts, director of solutions engineering at storage supplier NetApp.

Uptime service-level agreements (SLAs) in cloud might not be “anywhere near” what firms expect for mission-critical workloads, with required downtime less within their control than on-premise, he says.

Different approach needed

Cloud can be agile and beneficial, boosting performance for less cost where workloads are easy to spin up and down. However, customers do not always recognise that managing applications, data and solutions in the cloud needs a different approach to on-premise, says Roberts, adding that an automation layer can help to run those workloads in the cloud more cheaply.

“If you’re not turning computers on and off, or not spinning storage up or down, depending on the workload, your costs will spiral out of control,” he says. “Customers are starting to analyse workloads in more detail, asking: ‘Where is the right place for this to live?’”
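
One common shape for the automation layer Roberts mentions is a scheduled job that stops non-production instances outside working hours. A minimal sketch, assuming AWS with boto3 and a hypothetical “schedule” tag convention of our own invention:

    # Stop any running instances tagged for office-hours use so they are not
    # billed for compute while idle overnight. Assumes AWS and boto3; the tag
    # key/value is a placeholder convention, not a standard.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-2")

    def stop_idle_dev_instances():
        paginator = ec2.get_paginator("describe_instances")
        pages = paginator.paginate(Filters=[
            {"Name": "tag:schedule", "Values": ["office-hours"]},   # placeholder tag
            {"Name": "instance-state-name", "Values": ["running"]},
        ])
        instance_ids = [
            inst["InstanceId"]
            for page in pages
            for reservation in page["Reservations"]
            for inst in reservation["Instances"]
        ]
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)
        return instance_ids

    # Typically invoked each evening on a schedule (cron, EventBridge or similar).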

Tim Allen, head of engineering – DevOps and cloud at applications developer and consultancy xDesign, agrees, noting that data legislation is a continuing source of confusion and anxiety on top of other cost and compliance factors.

“Working out who legally has access to your data is pretty complex, and if you keep having to spend money to review it, you’re not focused on your core business,” he says.

Poorly scoped or architected migrations not only fail to deliver efficiencies or process improvements, but may not account for the fact that staff will still be needed, and paid, after a “lift and shift”, says Allen. Some organisations also enter cloud migration assuming they will only have to do it once.

Nick Westall, CTO of services provider CSI, suggests the US may be ahead of the UK somewhat, with slightly higher reverse migration rates so far. Costs and performance versus on-prem are typically the issues.

“Suddenly, there are lots of extra zeros on the end of their monthly bill,” says Westall.

One UK-based customer ran a large overnight financial-services batch analysis programme on-prem – and a “cloud first” policy resulted in a “lift and shift”. Performance shrank to “a fraction” of what it had been. Once the requisite performance tuning was done, they realised they would need to over-provision in the cloud to manage throughput, says Westall.

“They were consuming about half the capacity and using 100% of the throughput bucket,” he says. “They then had similar problems with the RAM.”
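
The “throughput bucket” behaviour Westall describes can be modelled crudely: burstable cloud storage earns credit at a baseline rate and spends it whenever the workload runs faster than that, so a sustained batch job drains the bucket and then crawls along at baseline. All figures below are invented, and real credit mechanics vary by provider and volume type:

    bucket_mb = 540_000      # hypothetical burst credit, expressed as MB of I/O
    baseline_mbps = 125      # hypothetical baseline throughput, MB/s
    burst_mbps = 500         # hypothetical burst throughput, MB/s

    # While bursting, credit drains at the difference between burst and baseline.
    net_drain_mbps = burst_mbps - baseline_mbps
    minutes_at_full_speed = bucket_mb / net_drain_mbps / 60

    print(f"Roughly {minutes_at_full_speed:.0f} minutes at {burst_mbps} MB/s, "
          f"then the batch run falls back to {baseline_mbps} MB/s.")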

Westall believes the company had not understood how much work was needed to move on-prem workloads to the cloud.

A top 10-20% of workloads might be ideally suited to cloud, with compute-heavy workloads – usually run in constrained windows and priced at a premium in the cloud – making up another 10-20%, leaving the 60-80% in the middle in need of reworking.

“You could get that much more refined tuning when you are on-prem,” he says. “And customers normally just view things as a single large lump that they can lift and shift.”
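
One way to make that rough triage concrete is a simple scoring pass over a workload inventory. The criteria, thresholds and example workloads below are invented purely for illustration, not a method Westall describes:

    # Bucket each workload by demand elasticity, refactoring effort and compute
    # intensity, echoing the rough 10-20 / 10-20 / 60-80 split.
    def classify(workload):
        if workload["elastic_demand"] and workload["refactor_effort"] == "low":
            return "cloud-ready (the top 10-20%)"
        if workload["compute_heavy"] and workload["constrained_window"]:
            return "compute-heavy, premium cloud pricing (another 10-20%)"
        return "needs reworking before it suits cloud (the 60-80% in the middle)"

    workloads = [
        {"name": "web front end", "elastic_demand": True, "refactor_effort": "low",
         "compute_heavy": False, "constrained_window": False},
        {"name": "overnight batch", "elastic_demand": False, "refactor_effort": "high",
         "compute_heavy": True, "constrained_window": True},
        {"name": "legacy ERP", "elastic_demand": False, "refactor_effort": "high",
         "compute_heavy": False, "constrained_window": False},
    ]

    for w in workloads:
        print(f"{w['name']}: {classify(w)}")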

Painful migrations

Legacy and bespoke applications that no one else might have, but which are essential, can make migrations much more painful, even with the increased granularity now available in cloud services.

Organisations should therefore plan for cloud from end to end, work with third parties that understand their needs and can actually predict cost, and then prioritise. There are systems that do that, says Westall.

Jacob Lee, director of solution architecture at cloud provider Zadara, notes that the promised flexibility of cloud is not infinite, and that disadvantages can include security and cost.

“Most users are locked into someone else’s system,” says Lee. “How do you know that the environment is a good fit? If you commit or reserve a specific amount of data usage, in reality that can become the biggest waste – the opposite of the ‘spirit of cloud’.”
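
Lee’s point about commitments is easy to put into numbers: whatever is committed but not consumed is still paid for. A small sketch with placeholder figures:

    committed_tb = 500          # hypothetical reserved/committed capacity
    price_per_tb_month = 20     # hypothetical committed rate
    actual_usage_tb = [180, 210, 260, 240, 300, 320]  # six months of invented usage

    monthly_bill = committed_tb * price_per_tb_month
    wasted = sum((committed_tb - used) * price_per_tb_month for used in actual_usage_tb)

    print(f"Committed bill per month: {monthly_bill:,}")
    print(f"Spend on unused capacity over {len(actual_usage_tb)} months: {wasted:,}")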

But cloud is not going anywhere. According to IDC’s November Black Book, overall cloud-related spending in Europe is forecast to constitute almost one-third of total technology spending in 2022, and its share will keep increasing over the next five years.

Source: ComputerWeekly.com
