Podcast: What is the impact of AI on storage and compliance?


Source is ComputerWeekly.com

In this podcast, we look at the impact of the rise of artificial intelligence (AI) on storage and compliance with Mathieu Gorge, CEO of Vigitrust.

We talk about the state of play of compliance frameworks for AI, and how to deal with the lack of maturity of governance in the field.

Gorge also talks about how organisations can recognise the limits of the current landscape but take control of a still-developing situation.

Antony Adshead: What are the key impacts of AI in terms of law and regulation in IT?

Mathieu Gorge: I think it’s important to understand that AI is not new. It’s been around for a while, and we shouldn’t confuse machine learning, even intelligent machine learning, with AI proper.

The reality is that we’ve been hearing a lot about ChatGPT and the like, but AI is much more than that.

There are currently, depending on how you count them, 35 to 40 regulations and standards around AI management, which is kind of interesting because it reminds me of cyber security about 25 years ago, when the industry was trying to self-regulate and most of the big vendors were coming up with their own cyber security frameworks.

We’re seeing the same with AI. The Cloud Security Alliance, for example, came up with its own initiative, and the IAPP [International Association of Privacy Professionals] published its own AI whitepaper, which is actually quite good in that it documents 60 key topics you need to look at around AI, going well beyond the potential impact of ChatGPT.

We’re also seeing the EU with the AI Act, and some states in the US trying to do the same, so it’s like history repeating itself. And if it follows cyber security, what will happen is that in the next five to 10 years, you will probably see four to five major frameworks coming out of the woodwork that will become the de facto frameworks, and everything else will be related to those.

The reality is that with AI you’ve got a set of data coming in and a set of data that is, essentially, manipulated by the AI, which spits out another set. That set may or may not be accurate, and may or may not be usable or useful.

“If [AI regulation follows the example of] cyber security, in the next five to 10 years, you will see probably four to five major frameworks coming out of the woodwork that will become the de facto frameworks, and everything else will be related to that”
Mathieu Gorge, Vigitrust

One of the issues is that we don’t really have the right governance at the moment so you’re also seeing a lot of new AI governance courses being announced in the industry. And while that’s commendable, we need to agree on what is good AI governance, specifically with regard to the data that it is creating, where it ends up in terms of storage, the impact on compliance and on security.

Adshead: How will these impact enterprise storage, backup and data protection?

Gorge: Right now, with traditional storage, generally speaking you look at your environment, your ecosystem and your data; you classify that data and put a value on it. Then, depending on that value and the potential impact, you put the right security in place and decide how long you need to keep the data, how you keep it, and when you delete it.
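The classify-then-protect flow Gorge describes can be sketched roughly as below. This is a minimal illustration, not anything from the podcast: the sensitivity tiers, retention periods and controls are invented placeholders; a real policy would come from your own compliance framework.

```python
from dataclasses import dataclass

# Hypothetical sensitivity tiers mapped to retention and security controls.
# These values are illustrative only -- real numbers come from regulation
# and internal policy, not from this sketch.
RETENTION_DAYS = {"public": 365, "internal": 730, "confidential": 2555}
CONTROLS = {
    "public": ["checksum"],
    "internal": ["encryption-at-rest"],
    "confidential": ["encryption-at-rest", "access-logging"],
}

@dataclass
class DataAsset:
    name: str
    classification: str  # one of the tiers above

def policy_for(asset: DataAsset) -> dict:
    """Map a classified asset to how long it is kept and how it is secured."""
    return {
        "retain_days": RETENTION_DAYS[asset.classification],
        "controls": CONTROLS[asset.classification],
    }

policy = policy_for(DataAsset("customer_emails", "confidential"))
```

The point of the sketch is only that classification drives everything downstream: once the value and impact of a dataset are assessed, retention and security follow mechanically.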

But if you look at a CRM [customer relationship management system], if you put the wrong data in then the wrong data comes out, and it’s one set of data. So, to be blunt, garbage in, garbage out.

With AI, it’s much more complex than that, so you may have garbage in, but instead of one dataset out that might be garbage, there might be a lot of different datasets and they may or may not be accurate.

If you look at ChatGPT, it’s a little bit like a narcissist. It’s never wrong and if you give it some information and then it spits out the wrong information and then you say, “No, that’s not accurate”, it will tell you that’s because you didn’t give it the right dataset. And then at some stage it will stop talking to you, because it will have used up all its capability to argue with you, so to speak.

From a compliance perspective, if you are using AI – a complicated AI or a simple AI like ChatGPT – to create a marketing document, that’s OK. But if you use this to do financial stuff or legal stuff, that’s definitely not OK. We need to have the right governance, the right checks in place, to assess the impact of AI-driven data.

This is early days right now, and that’s why we’re seeing so many governance frameworks coming out. Some of them are going in the right direction, some of them are too basic, some are too complicated to implement. We need to see what’s going to happen but we need to make decisions quite quickly.

At the very least, each organisation needs a set of KPIs [key performance indicators]. So, when I look at the data coming out of AI: am I happy that it’s accurate? Am I happy that it’s not going to put me out of compliance? Am I happy that I can store it the right way? Am I happy it’s not going to store bits of data where I don’t know where they’re going or what we need to do with them?
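Those per-output questions amount to a simple checklist that could be run against every piece of AI-generated data. The check names and record fields below are invented for illustration; they are not an established standard, just one way of turning Gorge's questions into concrete KPIs.

```python
# Hypothetical KPI checks for a single AI-generated output record.
# Field names (human_reviewed, compliance_tag, storage_uri) are assumptions
# made for this sketch, not fields from any real system.
def ai_output_kpis(record: dict) -> dict:
    checks = {
        # "Am I happy that it's accurate?" -- proxied by a human review flag.
        "accuracy_reviewed": bool(record.get("human_reviewed", False)),
        # "Is it going to put me out of compliance?" -- needs an explicit tag.
        "compliance_tagged": record.get("compliance_tag") is not None,
        # "Can I store it the right way / do I know where it ends up?"
        "storage_location_known": record.get("storage_uri") is not None,
    }
    checks["all_passed"] = all(checks.values())
    return checks

result = ai_output_kpis({
    "human_reviewed": True,
    "compliance_tag": "gdpr-reviewed",
    "storage_uri": "s3://example-bucket/output.json",
})
```

An output that fails any check would be quarantined or reviewed before it enters long-term storage, which is exactly the kind of governance gate the interview argues is missing today.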

It’s a case of trying to find the right governance, the right use of AI.

It’s early days, but I would urge every company to start looking at AI governance frameworks right now so they don’t create a monster, so to speak, where it’s too late and there’s too much data that they can’t control.

