Hyper-converged infrastructure: Why software-defined everything might not work for all datacentres

While all datacentres have, in essence, become defined by their software, going for a fully fledged, software-defined hyper-converged infrastructure (HCI) will not always be the right course for every operator.

Not all need maximum abstraction and, for those that do, there can be critical barriers to achieving a true HCI-based environment.

Ross Warnock, director at datacentre infrastructure provider and consultancy EfficiencyIT, says the essential trade-off to consider is likely to be between security and agility.

However, datacentre operators should kick off the decision-making process by making sure they are clear on the difference between converged and hyper-converged infrastructure, he suggests.

Suppliers use the terms to mean different things – typically leaning towards the specific context in which the supplier in question operates.

“There is a lot of misinformation out there,” says Warnock. “Converged and hyper-converged are similar and share the same goal.

“Converged infrastructure is primarily designed to simplify the deployment of compute, networking and storage resources. Hyper-converged has the same goal but a slightly different approach, in the sense that hyper-converged infrastructure is built with software-defined compute – and typically on commodity components.”

Some operators may want to maintain a portion of their storage fabric, while others will not want to have “all their eggs in one basket”, he says.

Operators should ask how much flexibility in design and equipment they need, and consider their overall security requirement on top of that.

For many companies, that primary HCI goal of easing management through a single control plane is unattractive, says Warnock. It represents a potential attack vector which, if exploited successfully, could give the perpetrators access to anything and everything.

“And that is not necessarily a flaw in the product,” he says. “So that’s probably the main difficulty.”

There are several reasons not to go for maximum abstraction, including the question of in-house skillsets.

Traditional approach

Warnock says the traditional approach of having a team that includes one person focused on storage, another in charge of networking and yet another on the application layer will partly cancel out the benefits of an HCI migration.

“You need someone who understands all those elements well enough,” says Warnock. “With the skills that are in IT, that can be difficult. If you speak to a storage specialist, generally, their networking knowledge is very limited. So to get someone at that level who is an expert in all those fields is difficult.”

After all, a key HCI driver is being able to manage the whole shebang via a single pane of glass once a migration is complete, he adds. Also, can you take the team with you? HCI migrations are complex, even stressful. Close examination of all assets and requirements is essential and, even when done right, remediation will be required, simply because it is a move to a new environment.

“In some cases, there will be quite a lot of remediation; in others, things will be quite simple,” says Warnock. “Sometimes it can’t be done, so you physically cannot move.”

He doubts that a “lift and shift” migration in one bite would be feasible in many cases, except perhaps in a very small environment with relatively “loose” requirements – maybe a startup with everything virtualised, rather than a lot of applications that will struggle in a hyper-converged environment.

“In reality, you see so many legacy applications that people are still running from years gone by that they are just trying to hold on to,” he says. “There is always something there.”

Organisations need to confirm first why they want HCI, he says. If it is to reduce costs via easier management, those hoped-for savings can be years away. A company needs to be in a position where it can afford to put those years in and come out the other side.
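
To put a rough shape on that, here is a minimal, hypothetical payback calculation – both figures are illustrative assumptions, not drawn from the article or from industry data:

```python
# Hypothetical payback estimate: how long easier management would take to
# pay for an HCI migration. Both figures are illustrative placeholders.
migration_cost = 350_000      # one-off project cost (hardware, licences, services)
annual_opex_saving = 70_000   # estimated yearly saving from simpler management

payback_years = migration_cost / annual_opex_saving
print(f"Break-even after roughly {payback_years:.1f} years")  # prints 5.0 years
```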

Instead, companies doing HCI are often looking to speed up application deployment, says Warnock.

Weighing up the costs 

According to supplier Nutanix, in its HCI business case guide, operators must be sure they are comparing apples with apples when estimating the total cost of migration. Traditional storage area network (SAN) and all-flash arrays have their own migration requirements and costs, and there is depreciation to account for, while typical cloud supplier total cost of ownership (TCO) calculators often assume cheaper technology, it says.

Nutanix recommends estimating the number of virtual machines (VMs) across the pre- and post-migration infrastructures, and costing each element: the number and price of each server, separate storage systems and SAN components, networking, virtualisation and licensing, plus operating expenditure such as admin, rack space, power and cooling. HCI, on the face of it, offers benefits by enabling infrastructure to be paid for as it grows, it notes.
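
That checklist reduces to a simple like-for-like model. The sketch below is only an illustration of the idea – every figure, category name and the three-year horizon are placeholder assumptions made up for the example, not Nutanix’s numbers or methodology:

```python
# Hypothetical per-VM total cost of ownership comparison between a
# traditional SAN-based estate and an HCI cluster. All figures are
# illustrative placeholders, not supplier pricing.

def tco_per_vm(vm_count, capex, opex_per_year, years=3):
    """Spread capital plus several years of operating cost across the VM estate."""
    total = sum(capex.values()) + years * sum(opex_per_year.values())
    return total / vm_count

traditional_capex = {
    "servers": 400_000,
    "san_and_storage_arrays": 250_000,
    "networking": 80_000,
    "virtualisation_licensing": 120_000,
}
traditional_opex = {
    "admin": 90_000,
    "rack_space": 30_000,
    "power_and_cooling": 45_000,
    "support_contracts": 60_000,
}

hci_capex = {
    "hci_nodes": 500_000,           # commodity nodes with local storage
    "networking": 60_000,
    "software_licensing": 150_000,  # includes software-defined storage
}
hci_opex = {
    "admin": 50_000,                # assumes a single pane of glass cuts admin effort
    "rack_space": 20_000,
    "power_and_cooling": 35_000,
    "support_contracts": 45_000,
}

if __name__ == "__main__":
    vms = 600  # estimated VM count, pre- and post-migration
    print(f"Traditional per-VM TCO: £{tco_per_vm(vms, traditional_capex, traditional_opex):,.0f}")
    print(f"HCI per-VM TCO:         £{tco_per_vm(vms, hci_capex, hci_opex):,.0f}")
```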

That is another reason why migrating workloads as needed will suit more operators than rip-and-replace – retiring older infrastructure as they go, evolving cluster by cluster instead of risking everything on a forklift upgrade. That way, they can probably take advantage of new CPU, GPU, SSD and memory technologies as they emerge.

According to EfficiencyIT’s Warnock, more customers are moving to HCI now, partly because not everything belongs in the cloud.

“Actually, a hybrid approach works best for most businesses,” he says. “In 2020, those who had solutions sort of ready to work from home, who had the infrastructure platform, certainly benefited. Other organisations were just playing catch-up from the get-go.”

Dominic Maidment, technology architect at business energy supplier Total Gas & Power, has been involved in his employer’s convergence migration for five years. He agrees that the best strategy for most will be workload by workload – not least because of the difficulty of dealing with legacy applications.

“We are not actually that keen now to run those emulation programmes in terms of virtualising those loads,” he says. “What we would rather do is extract the apps, recode the apps, retest everything, and then move that onto a modern platform. Because otherwise, we are just sort of shifting the problem.”

Preparation is necessary, whatever the strategy, says Maidment, but companies should make their choices in their own time. That might mean a different solution or strategy, which may or may not suit a specific supplier.

Total Gas & Power, for instance, may have chosen differently if it had more of a focus on containerisation instead. At the same time, the firm urgently needed to replace its main disaster recovery (DR) facility.

Maidment adds: “Is HCI in general inevitable for everyone? Well, all the storage companies are putting massive amounts of R&D into running in public cloud.

“And abstraction is basically the thing that makes HCI projects or products really cool. But abstraction is also the thing that makes public cloud attractive, in that you don’t have to deal with the noise, all the pain, and keeping things going.”

Todd Traver, vice-president of digital resiliency at the Uptime Institute, suggests that most companies could at least consider HCI. Of course, some may prefer to avoid the loss of control that comes with maximum virtualisation, being “one layer removed” rather than working directly with a storage device or CPU.

“The only people who shouldn’t are those not prepared to do it, or who somehow think HCI is a cure-all for all the problems they have today, if they’re currently a mess with high technical debt,” says Traver.

“And down the track, if people go there, then what’s the next step after HCI? What are they looking forward to being able to do in future for which that is a necessary stepping-stone?”

A trigger might be support ending for their underlying database products, or all the servers and storage reaching end of life, with support costs and outages soaring.

“Now those are the types of thing that would trigger someone to put in the effort to convert to HCI,” says Traver. “But the point I would make is that HCI is not something you would want to do in a big hurry.”

Source is ComputerWeekly.com
