No matter which of the many proposed models for shared-edge networking is chosen, rising demand for ultra-low-latency applications will likely drive the siting of micro-datacentres at those edges. That raises questions about physical security and management.
Andy Barratt, UK managing director at global cyber security specialist Coalfire, agrees that edge micro-datacentre management will prove challenging, but suggests solutions will largely be evolutionary, not revolutionary.
“It’s all the same problems again that we had back with the remote branch office client-server build-out,” says Barratt.
With more artificial intelligence (AI) processing happening offline and rapid responses required, including to and from a central office, management and physical security challenges are magnified where one or more network edges meet. Security must be extended while guaranteeing the correct levels of resilience, availability and uptime at all touchpoints.
“As part of recent deregulation, Openreach has to give you poles and ducts so you can lay your own fibre or infrastructure – theoretically, we could see BT-approved edge devices for use right next to fibre connectivity accessing the cloud environment, out in the wild,” he says.
Barratt quips that electrified barrier fences, like those a farmer might use to keep animals in (and predators out of) an enclosure, might sound tempting in such circumstances. Animals and humans can both cause problems for remote edge devices and networking, and there is little chance of completely protecting devices from damage, whether deliberate or accidental.
That said, the physical security classics – hardened, weather-proofed, ruggedised and standardised enclosures and locks, access control, surveillance and testing – remain useful, with self-contained, automatically deployable and “completely headless” operations being his choice of architecture in many cases.
“There’s always someone who’s got the key,” he says. “There’s always somebody who can go, ‘Oh, I’ll go and power the thing on and off’. Then before you know it, that critical edge compute application has been switched off and nobody knows why you can’t get into the cloud anymore.”
Tomas Rahkonen, distributed datacentres research director at the Uptime Institute, says ultra-low-latency applications that require shared-edge networking might include environments such as mass vaccination centres, live events ticketing or vehicle-to-vehicle communications, where edge devices on multiple networks – such as smartphones – need to connect to a separate, intelligent database in real time.
A lot of detail depends on the exact edge model chosen and end requirements. Regardless, Rahkonen advises that the key approach will be to start with a thorough risk assessment and analysis, as well as full consideration of expected benefits. Do you really need that level of edge functionality?
“What’s the value in the end? Typically, if it’s safety or public health, if there’s some type of cost for downtime, even if the sites are small, that might be really critical,” says Rahkonen. “It needs to be viewed holistically as part of your design and operation, with particular attention to the operational processes.”
Different designs for shared-edge environments might incorporate multiple locations and even multiple edge colocation providers or edge operators, all of which present variables that complicate management and security.
Filter all aspects of the proposition through that thinking process and decide whether datacentre capability at the edge, at those particular locations, is essential, because there will be costs to trade off. After that, you can get sign-off from the technical perspective, he suggests.
“There are so many facets to it. You have processes, you have data, you have personnel to think about,” says Rahkonen. “And you need to be prepared. You need to have a strategy for the event that someone gets into that type of facility because, for someone, it will happen at some point.”
Remote monitoring and management more important than hardware?
This underlines the criticality of strong remote monitoring and management (RMM), including resilient tools, systems and processes that deliver the required continuity and uptime, perhaps incorporating a centralised network operations centre that manages several edge-located micro-datacentre installations.
All these aspects will require support and staffing too, perhaps with multiple telecoms, compute and edge skillsets dispersed across locations.
“What happens in a power outage? You still need dedicated power for remote monitoring,” he says. “Perhaps you need an out-of-band network or on fibre, or be on 4G or 5G or something like that as a backup.”
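As a rough illustration of that monitoring pattern, the sketch below polls each site over its primary link and falls back to an out-of-band path before escalating. The site names, addresses and poll interval are hypothetical assumptions, and a production deployment would use a commercial RMM or DCIM platform rather than hand-rolled polling:

```python
import socket
import time

# Hypothetical edge sites: a primary in-band address and an
# out-of-band fallback (e.g. via a 4G/5G router) for each location.
EDGE_SITES = {
    "site-north": {"primary": ("10.0.1.10", 443), "oob": ("172.16.1.10", 443)},
    "site-south": {"primary": ("10.0.2.10", 443), "oob": ("172.16.2.10", 443)},
}

def reachable(addr: tuple[str, int], timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to addr succeeds within the timeout."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def poll_sites() -> None:
    """One monitoring pass: try the in-band path, then fall back out-of-band."""
    for name, paths in EDGE_SITES.items():
        if reachable(paths["primary"]):
            print(f"{name}: OK via primary link")
        elif reachable(paths["oob"]):
            # Site is up but the main link is down: raise a ticket,
            # no need to dispatch an engineer yet.
            print(f"{name}: DEGRADED, reachable only out-of-band")
        else:
            # Neither path answers: possible power loss or tampering.
            print(f"{name}: UNREACHABLE, escalate to on-call")

if __name__ == "__main__":
    while True:
        poll_sites()
        time.sleep(60)  # poll interval; real tooling would alert, not print
```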
With software-defined networking and bare metal servers, edge sites can ultimately become more flexible software-controllable points, which will likely make smarter edge deployments more attractive over time, Rahkonen suggests.
Nik Grove, head of hybrid cloud at ITHQ, warns that “full care and feeding” will still be required even for modular micro installations. Utilities, services and connectivity still represent complexity, as do staffing challenges, failover and continuity, especially when response times are critical.
“It can be about time to market and how quickly you can deploy the services. You still need monitoring and to unplug, and to understand where you’re going to put the thing first,” says Grove.
When it comes to remote-edge physical security, he relates a tale about a containerised datacentre in the Qatari desert, located a “suitable” distance from the city in question for disaster recovery purposes. Labourers parked a leaking diesel forklift behind it, next to a generator, and the facility caught fire.
In his view, the challenges of remote micro-datacentres, with or without things like armoured, intelligent CCTV, are essentially the same as if spinning up a full datacentre anywhere else.
“Now people are deploying 4K-8K, high-res HD cameras and want that data accessible for 30 days. It’s entirely possible your local McDonald’s could have requirements for 100TB [terabytes] of storage on site. Even your local Dalek would arguably be a micro-datacentre in itself,” he points out.
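Grove's figure is easy to sanity-check with back-of-envelope arithmetic. The bitrates in this sketch are illustrative ballpark values, not measured data; actual rates depend on codec, frame rate and scene complexity:

```python
# Back-of-envelope CCTV storage: bitrate x cameras x retention period.
# Bitrates are illustrative H.264/H.265 ballpark figures only.
BITRATE_MBPS = {"1080p": 8, "4k": 25, "8k": 80}

def storage_tb(resolution: str, cameras: int, days: int) -> float:
    """Terabytes needed to retain continuous footage for all cameras."""
    seconds = days * 24 * 3600
    total_bits = BITRATE_MBPS[resolution] * 1_000_000 * seconds * cameras
    return total_bits / 8 / 1e12  # bits -> bytes -> terabytes

# A dozen 4K cameras retained for 30 days already nears the 100TB mark:
print(f"{storage_tb('4k', cameras=12, days=30):.0f} TB")  # ~97 TB
```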
Strategic planning and design are key
Any edge datacentre should be planned strategically over multiple years and designed accordingly, with limits and restrictions fully understood, exposed and managed. A minimal workload deployment, with just a couple of hardened racks or even servers, might often make more sense.
“There is a place and time for micro-datacentres – anywhere with a glut of processing and compute that must be done on site, and you don’t want to pull that out to the cloud. But as part of your overall IT strategy, not a random act of tactical kindness,” says Grove.
Simon Brady, datacentre optimisation programme manager at Vertiv, says physical security is sometimes the last thing considered by teams familiar with larger, layered datacentres.
Intelligent alarm systems that respond in real time are a must-have, enabling timely interventions ranging from disconnecting or shutting down equipment to switching on defences or re-routing traffic.
“That’s as good as you can do, because if somebody wants it, they’re getting it,” says Brady. “It’s hard to guarantee security for something essentially quite small and a long way from help. The software and the monitoring might be more important than a strong lock and hinge on a door.”
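A sketch of how such real-time responses might be wired up, with hypothetical alarm names and playbook actions; in practice these would be driven by the site's building management or DCIM alarms rather than print statements:

```python
from datetime import datetime, timezone

# Hypothetical alarm types mapped to graduated responses for an
# unattended micro-datacentre; names and actions are illustrative.
RESPONSES = {
    "door_opened":   ["log_access", "start_camera_recording"],
    "tamper_sensor": ["lock_management_ports", "alert_noc"],
    "temp_critical": ["shed_noncritical_load", "alert_noc"],
    "intrusion":     ["reroute_traffic", "shut_down_node", "alert_noc"],
}

def handle_alarm(site: str, alarm: str) -> None:
    """Dispatch the pre-agreed response playbook for an alarm in real time."""
    actions = RESPONSES.get(alarm, ["alert_noc"])  # unknown alarm: a human decides
    stamp = datetime.now(timezone.utc).isoformat()
    for action in actions:
        print(f"{stamp} {site} {alarm} -> {action}")

handle_alarm("site-north", "intrusion")
```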
Storage policies, data and disk management, encryption, and backup remain key, as do internet of things (IoT) considerations and compliance with the General Data Protection Regulation (GDPR) and standards such as the EN 50600 series.
Balance risk against reward: work out how much you are prepared to spend, including on pen testers and standards compliance, dogbolt or lock suppliers, and makers of cabinets and other so-called street furniture, and weigh that against the cost of not protecting specific data.
Brady also notes that external perimeter fencing and security guards on patrol may be overkill for a micro unit.
“And to talk about physical remote security, [imagine wanting] to build a load somewhere in the middle of Africa,” he says. “In places in Africa you can have sites built and within three days they’re gone. Only the foundations are left.”
Steve Wright, chief operating officer at 4D Data Centres, says that while the relatively small and well-developed UK does not really need ultra-low latency at the edge today, it probably will in time.
“If we give people a technology, they’ll typically figure out a way to consume it,” he says. “It was never really a quick use case for fibre roll-out, but over the next decade, usage will go through the roof.”
Wright agrees that edge micro deployments will require attention to utilities, generators and extensive fire suppression systems, as well as automated workload movements to handle interruptions and the like.
Fully exploiting the edge means eliminating manual human interaction, “because that won’t scale”, and developing orchestration platforms for a multicloud approach, as well as high-performance computing (HPC) advances to support more dynamic workloads.
“It’s still more suited to, ‘I’ve got this application stack that can work in this hyperscaler under this configuration, or this hyperscaler under this configuration, or my legacy VMware environments, in either my colocation datacentre or on my on-premise facility’,” says Wright.
Build security and keep adding to it
Stefan Schachinger, product manager for network security at Barracuda, warns that a setup may work well in the lab with five devices yet be a completely different story with hundreds in the wild, all requiring connectivity and software management, including firewalling, plus reliability and efficiency at a remote location.
“Look at access to the cabinet [and] how you get access to the central application or to the entire network. You need some sort of authentication to minimise what happens with physical access, and how long does it take to notice that something happened?” adds Schachinger.
“My advice here is always implement a defence-in-depth concept, including network security and multiple components.”
The key is to start somewhere with security, even if what is in place initially is insufficient, and then evolve from there. “It’s an ongoing procedure. Begin with low-hanging fruit, identify the biggest risks, then keep going,” says Schachinger.
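One way to read that defence-in-depth advice is as a chain of independent gates, so that physical access to a cabinet alone never grants access to the application or the wider network. The layer names and checks in this sketch are illustrative assumptions, not any particular product's controls:

```python
# Defence in depth as a chain of independent gates: physical access to
# the cabinet alone should never be enough to reach the application.
# All layer names and checks below are illustrative.
def cabinet_lock_ok(ctx): return ctx.get("cabinet_key_valid", False)
def device_attested(ctx): return ctx.get("tpm_attestation", False)    # hardware identity
def operator_authenticated(ctx): return ctx.get("mfa_passed", False)  # not just a shared key
def network_segment_ok(ctx): return ctx.get("vlan_isolated", False)   # limited blast radius

LAYERS = [cabinet_lock_ok, device_attested, operator_authenticated, network_segment_ok]

def grant_management_access(ctx: dict) -> bool:
    """Every layer must pass, so one compromised control is not fatal."""
    for check in LAYERS:
        if not check(ctx):
            print(f"denied at layer: {check.__name__}")
            return False
    return True

# Someone holding the physical key but lacking MFA still gets nowhere:
print(grant_management_access({"cabinet_key_valid": True, "tpm_attestation": True}))
```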