The digital realm has undergone a dramatic transformation over the last decade. At the forefront of this evolution is Software as a Service (SaaS), which has revolutionised the business ecosystem by offering customised solutions and seamless connectivity. But as we integrate these technologies deeper into our business fabric, one critical issue looms large: privacy.
According to a 2022 estimate by McKinsey, the market capitalisation of SaaS companies stands at a staggering $3 trillion. Despite this immense value, many multinationals remain slow and hesitant to adopt the latest that SaaS has to offer. The reason behind this hesitancy lies in the flawed trust models currently prevalent within SaaS platforms.
While most SaaS companies strive to comply with standards such as SOC 2 and the ISO 27000 family, adhering to internal policies on data management and security that they set themselves, these measures are typically audited only once a year, and a great deal of trust is placed in the honesty of the operational data collection. Such an annual audit paints an incomplete picture of a company's year-round practices and fails to guarantee the integrity and security of sensitive company data.
To address this imbalance, legal frameworks like GDPR and CCPA have been introduced, alongside company privacy policies. However, these governance standards remain largely theoretical and their efficacy can only be ascertained through litigation – a tedious and expensive process. As a result, the current SaaS trust model is like a house of cards, vulnerable to collapse under the weight of a single privacy scandal.
You may have read recently about the controversy surrounding Samsung engineers sending sensitive proprietary code to OpenAI's ChatGPT. While the engineers may have intended to leverage state-of-the-art tools for their corporation's advantage, they inadvertently disclosed highly confidential intellectual property to a private American corporation. This case highlights the fact that while innovation and confidentiality can coexist, such instances are not yet commonplace.
This is where Privacy-Enhancing Technologies (PETs) enter the picture. PETs have the potential to be the silver bullet we need to transform our trust models. Unlike traditional models, PETs don’t require blind trust in a system. They enforce confidentiality constraints algorithmically, providing tangible and verifiable privacy protection.
PETs offer algorithmic transparency while keeping the underlying data opaque during processing and preventing the reverse engineering of sensitive data sources. These technologies establish a robust privacy infrastructure that not only complies with regulations but delivers true what-you-see-is-what-you-get processing.
Confidential Computing is leading a revolutionary shift in the realm of PETs. By leveraging protected enclaves within the hardware, it ensures data security during computation – a stark contrast to traditional methods, which focus on data at rest or in transit. This innovative approach has attracted substantial attention from industry behemoths, with major cloud providers offering Confidential Computing services and chip giants like Nvidia, Intel, and AMD pouring billions into the development of next-generation hardware.
Another transformative PET is Differential Privacy, which introduces a novel standard for data anonymisation. By infusing ‘noise’ into the dataset, Differential Privacy protects individual privacy while preserving the overall data utility. This technique has demonstrated its effectiveness in high-stakes applications, such as its implementation in the 2020 US Census. The growing acceptance of Differential Privacy is further underscored by initiatives like OpenDP from Harvard, which aim to create open-source and trusted tools for implementing this technology.
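To make the 'noise' idea concrete, here is a minimal sketch of the classic Laplace mechanism in Python. The dataset, query, and epsilon value are purely illustrative; real deployments such as the US Census used far more sophisticated tooling.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two independent exponential samples
    # follows a Laplace distribution with the given scale.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one
    # person changes the result by at most 1, so Laplace noise with
    # scale 1/epsilon yields epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative data: how many people in the dataset are 40 or older?
ages = [34, 45, 29, 61, 52, 38, 47]
noisy_answer = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
print(round(noisy_answer, 2))
```

Each query returns the true count plus random noise, so no single individual's presence or absence can be confidently inferred from the output, yet averaged over many queries the statistic remains useful.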
These cutting-edge tools and techniques have gone through the rigorous validation of national bodies such as the US Census Bureau and technology giants. Now, they are poised to take centre stage in the tech industry. Needless to say, many eyes are on what lies ahead for this field, with a focus on standardisation and greater usability for non-expert developers.
This summer, we hope to see a breakthrough in PET adoption and its positive impact on society at The Eyes-Off Data Summit, which opens in Dublin this week (20 July). The summit is a forum where regulators, including the UK Information Commissioner’s Office and Irish Data Protection Commissioner, meet experts from renowned institutions such as Harvard, Oxford, and the Alan Turing Institute.
Here, a diverse group of data scientists, both from the public and private sectors, alongside data stewards, will assemble. The goal is to challenge the status quo and figure out how we, as a community, can foster a meaningful change in society.
Through the widespread adoption of PETs, we can unlock the full potential of the trillion-dollar SaaS market while safeguarding privacy and fuelling innovations that benefit everyone.
Jack Fitzsimons is co-founder of Oblivious AI