In this podcast, we talk to Mathieu Gorge, CEO of Vigitrust, about key topics at RSA 2025 in San Francisco.
The impact of artificial intelligence (AI) on compliance was a dominant theme. Gorge discusses its spread in the enterprise and how it expands the potential risk surface for organisations. He also notes a trend among suppliers towards a more consultative approach based on business outcomes.
Finally, with reference to AI's impact on organisations, their compliance and their data, he talks about the debate at RSA over the role of the CISO – chief information security officer – and whether they should be (solely) responsible in the face of the risks posed by AI.
What were the key topics of relevance to data, storage and data protection that came up at RSA 2025?
I’ve been going to RSA in the US for about 20 years, and I’ve done a few in Europe. Generally speaking, every year there’s one single topic, whether that was blockchain or orchestration; last year it was AI deployment and AI adoption.
This year, it was hard to pick out one single trend. However, what we can say, based on the talks and on what the vendors were doing, is that compliance is at an all-time high. You could feel the energy and the innovation in compliance. There were a lot of vendors on the GRC [governance, risk and compliance] front, and vendors in specific areas of compliance and data protection.
So, that was interesting to see. The next thing is that, when I was there with some of my colleagues, we felt that, at least on the vendor showcase floor, the narrative had changed. It was more about the business outcomes of using the right products.
So, whereas in the past, typically at RSA, it was pure sales: buy my encryption, because you need encryption; buy my storage solution, because you need proper storage. This year, it really felt like a lot of work had been done on the business outcome of selecting solutions. The business outcome being: you’ll be more compliant, you’ll be able to demonstrate you’re doing data protection, and you’ll be able to know, at the click of a button, where you have data issues and where you don’t.
And then there was the role of CISOs. CISOs were mentioned a good bit, along with related roles such as head of risk and head of compliance, specifically with regard to AI adoption.
Are CISOs the right people to be in charge of AI adoption? Are they not busy enough already dealing with data protection? Who else should work with the CISOs? Who else should be looking after AI governance in the organisation, which was also one of the big themes? And what does that mean for compliance and for data protection? There were some very interesting talks about that.
Could you expand a little on how vendors are emphasising business outcomes rather than the functionality of what they are offering?
I felt the vendors were taking a more consultative approach. You could see that some of them had case studies and whitepapers on the benefits of doing compliance the right way, as opposed to “you have to do compliance, so whether you like it or not, you’re going to have to use us or our competitors”.
It was a case of: we’re now at a point where, with AI adoption, the risk surface goes up tremendously. It reminds me of cloud, where people could buy services and extend the risk surface while bypassing security and compliance.
And we see that happen with AI deployments as well. So, I felt there was a genuine direction from the vendor community and from the speakers to say, “Hey, we are going to adopt AI, so let’s try and do it the right way without compromising the rest of the security that we’re doing. Let’s try and understand what the right AI governance is for different types of AI deployments. And then let’s focus on how we can manage that in an easier way.”
And then came the question I already mentioned, which was: who really should be in charge of that? Is it just the CISO, or the CISO and the chief AI officer, or do we need a chief AI security officer? And what does it mean for compliance? Really, one of the key messages is that with AI, you just have a lot more data and you have less control over the new data that is being created.
And so you need to have the right frameworks. And whilst there are already many AI frameworks out there to manage AI deployments and the classification of AI data, they’re not always well known. In fact, even some CISOs are not necessarily aware of them.
So, I think as an industry, we have a duty to show up and make it easier for them to do the right thing, because the risk surface is definitely going up.