In this podcast, we talk to Mathieu Gorge, CEO of Vigitrust, about the ongoing impact of artificial intelligence (AI) on data, storage and compliance for CIOs. Gorge discusses the implications for data and its volume, the difficulty of keeping track of inputs and outputs from AI processing, and the need to keep up with law and regulation.
Gorge also casts an eye over the potential impacts of the new administration in the US and the evolving approach of the European Union (EU) to data in AI.
What do you think are going to be the key topics that impact compliance, data storage, backup, etc, at this year’s RSA event?
I always look forward to going to RSA to learn about new technologies and get my finger on the pulse of what’s happening in storage, compliance and related cyber security topics. This year, it seems we will see a lot around AI – not just AI-enabled technologies, but the security of AI itself.
There’s a lot of talk about quantum and post-quantum as well, so it’ll be interesting to see what happens there.
And from a storage perspective, we’re seeing some changes owing to the new administration in the US, and to what’s being done in the EU with the EU AI Act, which impacts data classification and data storage.
It’ll be interesting to see all this coming together at RSA.
I think we’re going to have some very interesting conversations, and I expect some new vendors to come out of the woodwork, so to speak.
Drilling down into some of the aspects you’ve mentioned, in which key areas do you think AI has moved on in the past year in terms of its impact on compliance for organisations?
My view is that AI was the buzzword last year. Everybody needed to look into AI to try to understand how it could improve their processes, improve how they use data, and so on.
A year on, we see that a lot of organisations have implemented their own versions of ChatGPT, for instance, and some of them have invested in their own AI platforms so they can control it a little bit better.
And so, we’re seeing AI adoption grow – remembering that AI is not new, it has been around for years – and that adoption is really picking up at the moment.
What we’re seeing in the market is people asking: “What kind of data can I use AI for? How does that impact my data classification, my data protection, my data governance?”
We’re also seeing a number of security associations starting their own AI governance working groups. In fact, at Vigitrust, through our Global Advisory Board, we have an AI governance working group where we’re trying to map out all the emerging regulations that govern AI, whether they are driven by technology vendors, associations or even governments.
It’ll be interesting to see how much of AI governance is covered at RSA. If you want to do AI governance, you need to know what type of data you manage, and we’re going back to data classification and data protection.
The other issue with AI is that it’s creating a lot of new data, so we’ve got this explosion of data. Where are we going to store it? How are we going to store it? And how secure will that storage be? And then finally, will that allow me to demonstrate compliance with applicable regulations and frameworks? It’ll be interesting to understand what comes out of RSA on that front.
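To make that tracking point concrete: one minimal way to keep a record of what goes into and comes out of an AI system – without duplicating regulated content in the log itself – is to store hashes of each exchange. The sketch below is illustrative only; the function and file names are hypothetical, not part of any framework Gorge mentions.

```python
import hashlib
import json
import time

def log_ai_exchange(log_path: str, prompt: str, response: str, model: str) -> None:
    """Append a hash-based audit record for one AI exchange.

    Hashes prove what went in and what came out without storing
    regulated content in the log itself (hypothetical sketch).
    """
    entry = {
        "timestamp": time.time(),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage
log_ai_exchange("ai_audit.jsonl", "Summarise Q3 figures", "Revenue grew 4%", "internal-llm-v1")
```

A log like this speaks to the “demonstrate compliance” question: it shows what was processed and when, without the log becoming another store of sensitive data.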
What do you think are the impacts of the new administration in the US on compliance, storage, backup, etc?
The new administration in the US said right from the beginning that it would invest in AI and that it saw AI as a great opportunity for the US. And in terms of deploying all of that, we know the governance frameworks already in place are going to be applied.
We are seeing organisations like NIST developing more in-depth AI frameworks. We’re also seeing the Cloud Security Alliance moving towards AI governance frameworks of their own. We’ve even seen cities developing their own AI frameworks for smart cities and so on. I’m thinking of the city of Boston at the moment, for some reason.
And so, if you’ve got a government that is pushing organisations to use AI, it will want to have some governance on that. It’ll be interesting to see how far they go. Will they respond with the equivalent of the EU AI Act? It is likely, because if you look at GDPR [the General Data Protection Regulation] in Europe, a few years later we had CCPA [the California Consumer Privacy Act], and at this stage I think 11 US states have something similar to GDPR.
So, it’s very likely that this will follow. It’s not going to happen overnight, but I think some further announcements will be made in 2025 by the current administration.
What’s the latest with the EU and compliance, especially with reference to recent developments around AI?
You know, it’s funny – in the EU, AI is seen as a threat just as much as an opportunity, much more so than in the US, potentially because the risk appetite in Europe is a little lower.
We are seeing every member state looking at its own AI regulation in addition to the EU framework. We’re also looking at how AI integrates with GDPR. In other words, if you deploy AI solutions, you totally change the governance of data.
You end up having data that is essentially managed by a system rather than by different people, so the concept of a data controller – who is really in charge of the data – comes into question again.
I think it’s interesting to see the various governments looking at, “Can we really deploy AI in a way that does not put us out of compliance for GDPR?”
I go back to two key aspects – classifying the data and storing the data.
As you know, with AI, you’ve got the question of bias in the data. Is the data treated the right way? Does the data you put in – which is then processed by AI and comes back out – put you in or out of compliance with other frameworks like GDPR, and even the EU AI Act? Where should you store that data? What kind of protection should you have on it? How do you manage the lifecycle of that data within the AI framework? How do you protect your LLM [large language model]? How do you protect the algorithms you use?
And then finally, as you probably know, AI is very resource-intensive. That also has an impact on the climate, because the more you use AI, the more capacity and processing power you need – and that has an impact on green IT and so on.
So, I would urge people to look at the type of data they want to use for AI, do a risk analysis, and then look at the impact in terms of: Where are you going to store that data? Who’s going to store it for you? How secure is it going to be? And how is that going to impact your compliance, not just with AI regulation, but also with GDPR and other privacy frameworks?
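As a rough illustration of that classify-first advice, the hypothetical Python sketch below gates records by classification label before they reach an AI pipeline. The labels and the gating rule are examples for illustration, not a regulatory taxonomy.

```python
from dataclasses import dataclass
from enum import Enum

class Classification(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    PERSONAL = "personal"      # e.g. in scope for GDPR
    RESTRICTED = "restricted"  # kept out of AI processing entirely

@dataclass
class Record:
    record_id: str
    classification: Classification
    payload: str

def allowed_for_ai(record: Record) -> bool:
    """Release only PUBLIC or INTERNAL data to AI tools;
    PERSONAL and RESTRICTED data stay in controlled storage."""
    return record.classification in (Classification.PUBLIC, Classification.INTERNAL)

records = [
    Record("r1", Classification.PUBLIC, "product brochure"),
    Record("r2", Classification.PERSONAL, "customer email thread"),
]

print([r.record_id for r in records if allowed_for_ai(r)])  # ['r1']
```

The point is simply that the classification decision is made, and recorded, before any data leaves controlled storage – which is the precondition for answering the storage and compliance questions above.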