AI and compliance: Staying on the right side of law and regulation


Regulations and legal frameworks for artificial intelligence (AI) currently lag behind the technology’s uptake.

The rise of generative AI (GenAI) has pushed artificial intelligence to the fore of organisations’ modernisation plans, but so far, most development has taken place in a regulatory vacuum.

Regulators are rushing to catch up. According to industry analyst Gartner, between the first quarter of 2024 and the first quarter of 2025, more than 1,000 pieces of proposed AI regulation were introduced worldwide.

Chief information officers (CIOs) need to act now to ensure AI project compliance in a regulatory environment that Gartner vice-president analyst Nader Henein warns “will be an unmitigated mess”.

Missteps by AI suppliers and their customers have led to a host of problems, including privacy and security breaches, bias and errors, and even hallucinations.

The most high-profile examples of problems with AI are hallucinations. Here, the AI application – usually GenAI or a large language model (LLM) – produces an answer that is not based on facts.

There are even suggestions that the latest GenAI models hallucinate more than their predecessors. OpenAI’s own research found that its o3 and o4-mini models are more prone to hallucination than earlier versions.

Mistakes and bias

GenAI can make basic mistakes and errors of fact, and it can be prone to bias. This depends on the data the systems are trained on, as well as on the way the algorithms work. Bias can lead to results that cause offence, or even discriminate against sections of society. This is a worry for all AI users, but especially in areas such as healthcare, law enforcement, financial services and recruitment.

Increasingly, governments and industry regulators want to control AI, or at least ensure that AI applications operate within existing privacy and employment laws and other regulations. Some are going further, such as the European Union (EU) with its AI Act. And outside the EU, more regulation seems inevitable.

“At present, there is little in the way of regulation in the UK,” says Gartner’s Henein. “Both the ICO [Information Commissioner’s Office] and Chris Bryant, the minister of state at the Department for Science, Innovation and Technology, have stated that AI regulation is expected in the next 12 to 18 months.

“We do not expect it to be a copy of the EU’s AI Act, but we do anticipate a fair degree of alignment, particularly regarding high-risk AI systems and potentially prohibited uses of AI.”

AI laws and governance

AI is governed by a host of sometimes overlapping laws and regulations. These include data privacy and security laws, as well as guidelines and frameworks that set standards for AI use even where they are not backed by legal sanctions.

“AI regulatory frameworks like the EU AI Act are based on the assessment of risks, specifically the risk these new technologies can impose on people,” says Efrain Ruh, continental chief technology officer for Europe at Digitate.

“However, the large range of applications and the accelerated pace of innovation in this space makes it very difficult for regulators to define specific controls around AI technologies.”

And the plethora of rules makes it hard for organisations to comply. According to research by AIPRM, a firm that helps smaller businesses make the most out of GenAI, the US has 82 AI policies and strategies, the EU has 63, and the UK has 61.

Among these, the stand-out is the EU’s Artificial Intelligence Act, the first “horizontal” law governing AI regardless of where or how it is used. The US’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence also sets standards for AI security, privacy and safety.

In addition, international organisations such as the OECD, the UN and the Council of Europe have developed AI frameworks. But the task facing international bodies and national law makers is far from easy.

According to White & Case, an international law firm that tracks AI developments, “governments and regulatory bodies around the world have had to act quickly to try to ensure that their regulatory frameworks do not become obsolete…

“But they are all scrambling to stay abreast of technological developments, and already there are signs that emerging efforts to regulate AI will struggle to keep pace,” it says.

This, in turn, has led to different approaches to AI regulation and compliance. The EU has adopted the AI Act as a regulation, meaning it applies directly in law in member states.

The UK government has so far opted to instruct regulators to apply guiding principles to how AI is used across their areas of responsibility. The US has chosen a mix of executive orders, federal and state laws, and vertical industry regulation.

This is all made more difficult by the absence of a single, internationally accepted definition of AI, which complicates both regulation and compliance for organisations that want to use the technology. Regulators and firms have had time to learn how to work with regulations such as the General Data Protection Regulation (GDPR), but we are not yet at that stage with AI.

“As with other regions, there is a fairly low level of maturity when it comes to AI governance,” says Gartner’s Henein. “Unlike GDPR, which followed four decades of organic development in privacy norms, AI regulatory governance is new.”

Compliance with the AI Act, he adds, is made more complicated because it applies to AI features of technology, not just to whole products. CIOs and compliance officers now need to account for AI capabilities in, say, software-as-a-service applications they have been using for years.

Moving to compliance

Fortunately, there are steps organisations can take to ensure compliance.

The first is to ensure CIOs know where AI is being used across the organisation. Then they can review existing regulations, such as GDPR, and ensure that AI projects keep to them.

But they also need to monitor new and developing legislation. The AI Act, for example, mandates transparency and human oversight for AI systems, notes Ralf Lindenlaub, chief solutions officer at Sify Technologies.

Boards, though, are also increasingly aware of the need for “responsible AI”, with 84% of executives rating it as a priority, according to Willie Lee, a senior worldwide AI specialist at Amazon Web Services.

He recommends that all AI projects are approached with transparency, and accompanied by a thorough risk assessment to identify potential harms. “These are the core ideals of the regulations being written,” says Lee.

Digitate’s Ruh says: “AI-based solutions need to be built up-front with the correct set of guardrails in place. Failure to do so might result in unexpected events with tremendous negative impact on the company’s image and revenue.”
