With the right tools and strategy, public cloud should be safe to use

Source is ComputerWeekly.com

In 2006, Amazon Web Services (AWS), the first public cloud provider, offered publicly available services such as Elastic Compute Cloud (EC2) and Simple Storage Service (Amazon S3). Four years later, in 2010, Microsoft launched Microsoft Azure (initially called Windows Azure). Lastly, in 2011, Google introduced Google Cloud Platform (GCP), a set of cloud computing services that runs on the same infrastructure Google uses internally.

To date, these three cloud providers have dominated the global cloud market, with AWS maintaining its position as the market leader. Research shows that between 2010 and 2020, the global cloud computing market increased by 535%, from $24.6bn to $156.4bn, with the popularity of remote working considered one of the key factors driving this growth.

Era of remote work and cloud computing

During the Covid-19 pandemic, due to safety and public health concerns, many organisations implemented remote work plans, viewing them as the right balance between the social disruption and the economic damage of lockdowns and restrictions.

Three years on from the pandemic, remote work remains a dominant trend in the modern workplace. According to WFHResearch, 12.7% of full-time employees work from home, and a further 28.2% have adopted a hybrid model that combines working from home with working in the office. In fact, 16% of companies operate without a physical office at all. In 2020, 61% of businesses migrated their workloads to the cloud, demonstrating the importance of cloud computing in facilitating remote work.

The Covid-19 pandemic has transformed how businesses operate, and it’s altered the cyber security landscape. The need for flexible, accessible and reliable technology has never been more pronounced.

Evolution of cyber threats and defences

Back in the mid-90s, cyber security focused on the physical protection of servers and communications, and encryption was considered sufficient. However, as networks grew and the internet exploded in the late 1990s, antivirus software, firewalls and intrusion detection systems emerged in response to the increasing volume of malware exploiting vulnerabilities.

In 2000, many computer programs used only two digits to represent a four-digit year, making the year 2000 indistinguishable from 1900 and potentially threatening computer infrastructure worldwide. This was known as the Y2K bug. At the same time, malware families such as CryptoWall, ZeuS, NanoCore and Ursnif proliferated through the 2000s, and IT professionals improved their defences with secure coding practices and intrusion prevention systems.
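
The ambiguity behind the Y2K bug is easy to reproduce. A minimal Python sketch (purely illustrative, not taken from any historical system) shows why a two-digit year collapses 2000 into 1900, and how the common "windowing" remediation with a pivot year resolved it:

```python
def parse_two_digit_year(yy: int, pivot: int = 69) -> int:
    """Interpret a two-digit year, which naive pre-Y2K code could not do safely.

    A naive system simply prepended '19', so '00' meant 1900. A common
    remediation was windowing: values at or below the pivot map to 20xx,
    the rest to 19xx. The pivot value here is an arbitrary example.
    """
    return 2000 + yy if yy <= pivot else 1900 + yy

# The naive interpretation reads '00' as 1900, not 2000.
naive_year = 1900 + 0
assert naive_year == 1900

# Windowing disambiguates, but only within a limited range of years.
assert parse_two_digit_year(0) == 2000
assert parse_two_digit_year(99) == 1999
```

Windowing was only ever a stopgap: it trades one ambiguity for another at the pivot boundary, which is why the lasting fix was storing four-digit years.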

A decade later, high-profile breaches by nation-state threat actors highlighted the importance of cyber security once again. In 2014, for example, Sony Pictures suffered a major data breach in which 100 terabytes of data were stolen by a North Korea-linked hacking group.

Between the 2010s and 2020s, as cloud computing and internet of things (IoT) devices grew in popularity, securing these technologies became a top priority for most organisations.

A modern dilemma

According to IBM’s Cost of a Data Breach Report 2023, the global average cost of a data breach was $4.45m, representing a 15% increase over three years, and a 2.2% increase compared to 2022. When factoring in remote working, the average cost of a data breach increased by almost $1m. This indicates that organisations that have adapted to remote work face higher costs than those that have not.

With remote work becoming an inevitable aspect of the modern workplace, public cloud computing emerges as a tool to facilitate this shift. In this context, chief information security officers (CISOs) and security practitioners play a critical role. They must not only ensure that these technologies are used safely and securely to prevent accidental or deliberate data leakage, but also minimise user impact. Given the ever-evolving nature of cyber threats, this is certainly a challenging task.

Insider risk

Beyond traditional external threat actors, insiders can pose an equal or even greater threat. The Cybersecurity Insiders 2023 Insider Threat Report, which surveyed 326 cyber security professionals, offers some key takeaways:

  • 68% of respondents are concerned or very concerned about insider risk following the shift to remote and hybrid work
  • 53% of respondents believe it has become somewhat to significantly harder to detect insider attacks since migrating to the cloud
  • Privileged IT users/admins pose the biggest security risk to organisations (60%), followed by contractors, service providers, temporary workers, vendors and suppliers.

These results indicate that insider risk is a significant concern that CISOs and security practitioners need to address. Whilst numerous controls exist to prevent external threat actors from accessing data, such as enforcing multifactor authentication (MFA) and enabling conditional access policies, these measures may not be sufficient to mitigate insider risk. Without proper detection mechanisms for insider threats, accidental or deliberate data leakage can still occur because these individuals already have access to the data. In my opinion, they may pose an even greater threat than external actors.

XDR – Extended Detection and Response

On average, it took organisations 10 months (or 304 days) to identify and report a data breach. However, IBM's report states that organisations with an extended detection and response (XDR) solution drastically reduced the breach lifecycle, to 29 days. So, the question is: what is XDR?

XDR is the evolution of endpoint detection and response (EDR) and goes beyond the traditional EDR approach: it ingests data not only from endpoints, but also from identity, email, cloud workloads and more. It then uses machine learning (ML) and artificial intelligence (AI) to correlate and parse real-time data to detect threats and anomalies. When multiple threats are identified, they are prioritised by severity, allowing security operations centre (SOC) analysts to triage and investigate incidents in a timely manner. With the relevant configuration, some incidents can also be resolved using automated investigation and response (AIR).
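
As a rough illustration of the correlation-and-prioritisation idea, the sketch below groups alerts from different telemetry sources by the entity they involve and ranks the resulting incidents by their worst alert. This is a toy model under assumed data shapes, not any vendor's actual implementation:

```python
from collections import defaultdict

# Hypothetical alerts from different XDR telemetry sources, each keyed
# by the user (entity) it involves. Severity runs from 1 (low) to 5 (high).
alerts = [
    {"source": "endpoint", "user": "alice", "severity": 3, "detail": "suspicious process"},
    {"source": "email",    "user": "alice", "severity": 2, "detail": "phishing link clicked"},
    {"source": "identity", "user": "bob",   "severity": 5, "detail": "impossible travel sign-in"},
    {"source": "cloud",    "user": "alice", "severity": 4, "detail": "mass file download"},
]

def correlate(alerts):
    """Group alerts by entity and rank incidents by their highest severity."""
    incidents = defaultdict(list)
    for alert in alerts:
        incidents[alert["user"]].append(alert)
    # Highest-severity incidents first, so analysts triage them sooner.
    return sorted(incidents.items(),
                  key=lambda item: max(a["severity"] for a in item[1]),
                  reverse=True)

for user, items in correlate(alerts):
    print(user, max(a["severity"] for a in items), f"{len(items)} alert(s)")
```

Real XDR platforms replace the grouping key with learned entity and attack-chain relationships, but the payoff is the same: analysts see one prioritised incident instead of four disconnected alerts.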

In addition, XDR solutions usually include some or all of the following capabilities to minimise data leakage:

  • Data Loss Prevention (DLP) prevents sensitive information from being shared outside the network, which is crucial for protecting data in the public cloud
  • Cloud Access Security Brokers (CASB) act as security enforcement points between cloud service users and cloud providers, helping to ensure secure and compliant use of cloud services
  • Secure Web Gateways (SWG) protect users from potential threats in web and cloud traffic, making them essential for secure cloud-based operations.
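
To make the DLP idea concrete, here is a minimal, purely illustrative sketch of pattern-based content inspection. The pattern names and regexes are assumptions for the example; production DLP engines use validated detectors (checksums, proximity rules, ML classifiers) rather than bare regexes, but the principle of flagging sensitive patterns before data leaves the network is the same:

```python
import re

# Illustrative patterns only, chosen for this sketch.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def dlp_verdict(message: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound message."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(message)]

# A policy engine would block or quarantine messages with a non-empty verdict.
assert dlp_verdict("Card: 4111 1111 1111 1111") == ["credit_card"]
assert dlp_verdict("Lunch at noon?") == []
```

In a real deployment the verdict would feed a policy action, such as blocking the upload, encrypting the file, or alerting the SOC, rather than just returning a list.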

There are a number of XDR solutions on the market, including, but not limited to, Microsoft Defender XDR, Palo Alto Networks Cortex XDR and Fortinet FortiXDR.

These solutions are powerful, but they are also expensive and complex, and out-of-the-box deployments alone rarely deliver their full security value.

“In my experience there is no such thing as luck.” – Obi-Wan Kenobi, Jedi Master

Throughout my career, I have seen organisations suffer data breaches despite having invested heavily in security tooling. Even when configurations are tailored, a one-off configuration is not enough: it must be maintained continuously to ensure optimal performance.

Zero-trust

Zero-trust!? Are we not supposed to trust users and devices within the corporate network, or those connected via a VPN? No, not anymore. Zero-trust is the new trend, because users are the most targeted and least protected link in your security programme.

The term was first introduced by Stephen Marsh in his 1994 doctoral dissertation on computational trust. Over 20 years later, in 2018, the National Institute of Standards and Technology (NIST) and the National Cybersecurity Center of Excellence (NCCoE) published NIST SP 800-207 Zero Trust Architecture, which defines zero trust as “a collection of concepts and ideas designed to reduce the uncertainty in enforcing accurate, per-request access decisions in information systems and services in the face of a network viewed as compromised”. A year later, the National Cyber Security Centre (NCSC) recommended that network architects consider a zero-trust approach for new IT deployments, particularly those planning to use public cloud services.

The three main principles for zero-trust are:

  • Use least privilege access: By limiting user access with Just-In-Time (JIT) and Just-Enough-Access (JEA) controls, the potential damage of a compromised account is minimised.
  • Verify explicitly: Trust should never be assumed. Every user and every access request should be authenticated and authorised based on all available data points. This goes beyond simply verifying the user’s location or IP address.
  • Assume breach: Operate as if your network is already compromised. Employ end-to-end encryption and use analytics to gain visibility, drive threat-led detections, and continually improve defences.
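
The three principles can be sketched as a per-request policy check. This is a toy policy engine with hypothetical users, resources and a made-up risk threshold, not NIST's reference architecture, but it shows how every request is evaluated on multiple signals rather than on network location:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """Signals evaluated on every request, never just network location."""
    user: str
    mfa_passed: bool
    device_compliant: bool
    resource: str
    risk_score: float  # e.g. from sign-in analytics, 0.0 (low) to 1.0 (high)

# Hypothetical least-privilege grants: each user sees only what they need.
GRANTS = {"alice": {"hr-reports"}, "bob": {"source-code"}}

def decide(req: AccessRequest) -> bool:
    """Zero-trust style decision: authenticate, check posture, then authorise."""
    if not req.mfa_passed or not req.device_compliant:
        return False                 # verify explicitly
    if req.risk_score > 0.7:
        return False                 # assume breach: block risky sessions
    return req.resource in GRANTS.get(req.user, set())  # least privilege

assert decide(AccessRequest("alice", True, True, "hr-reports", 0.1))
assert not decide(AccessRequest("alice", True, True, "source-code", 0.1))
assert not decide(AccessRequest("bob", False, True, "source-code", 0.1))
```

A production policy engine would also re-evaluate mid-session and apply Just-In-Time elevation rather than static grants, but the shape of the decision, identity plus device posture plus risk per request, is the same.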

Adopting a zero-trust approach can significantly enhance an organisation’s security posture, particularly when utilising public cloud services.

Final thoughts

The rise of public cloud services has enabled the growing reliance on remote work. With this shift, the cyber security landscape has evolved, presenting new threats and challenges. Businesses now face the dilemma of ensuring the safe and secure use of technology whilst preventing data leakage, a task that falls to CISOs and security practitioners, who must also consider insider risk. Advanced security solutions such as XDR, combined with a zero-trust approach, can help address these challenges. Despite the complexity and evolving nature of threats, with the right strategy, tools and constant vigilance, businesses can safely and securely leverage public cloud services.

Jason Lau is a senior cloud security consultant at Quorum Cyber.
