CDEI publishes roadmap for UK AI assurance ecosystem

The UK government has released a document outlining the steps required to build an assurance ecosystem for artificial intelligence (AI) systems.

Published by the Centre for Data Ethics and Innovation (CDEI), the roadmap describes what should be done to verify that AI systems are effective, trustworthy and compliant.

Intended to drive a step change in adoption, the roadmap anticipates that AI assurance services such as audit, certification and impact assessments will become a “multibillion-pound industry in its own right, unlocking growth and thousands of new job opportunities”.

The idea is to draw on the UK’s strengths in legal and professional services, AI research and standards to establish a mature AI assurance ecosystem within the next five years, with professional services firms, startups and scaleups providing services in the area, the CDEI said.

“AI has the potential to transform our society and economy, and to help us tackle some of the greatest challenges of our time. However, this will only be possible if we are able to manage the risks posed by AI and build public trust in its use,” said digital minister Chris Philp.

The roadmap is one of the commitments set out in the National AI Strategy, which was published in September 2021. It follows calls from industry and bodies such as the Committee on Standards in Public Life to build tools that mitigate the risks posed by AI advances.

Moreover, the CDEI noted that the steps focused on AI assurance address the issues in AI governance outlined by organisations such as the World Economic Forum, the OECD and the Global Partnership on AI.

Six priority areas for action are set out in the roadmap. The first is to generate demand for reliable and effective assurance across the AI supply chain, which involves understanding the risks related to artificial intelligence systems, as well as the accountabilities for mitigating them.

The creation of a competitive and dynamic market for AI assurance with a range of services and tools is another area for action, as is the development of standards for the field and the improvement of links between industry and independent researchers to advance the development of AI assurance techniques.

Developing an accountable AI assurance profession to establish trust and quality is also among the actions outlined in the roadmap, as is supporting organisations to meet regulatory obligations by setting requirements in relation to artificial intelligence.

To deliver on the roadmap, the CDEI will work with UK industry bodies and regulators, and create an AI assurance accreditation forum, which will convene professional and accreditation bodies to engage them in the professionalisation of AI assurance.

In addition, the data ethics centre will support the Department for Digital, Culture, Media and Sport (DCMS) and the Office for Artificial Intelligence in piloting an AI Standards Hub, which is expected to broaden the UK’s input to global artificial intelligence standards.

Other initiatives the CDEI is working on in relation to the AI assurance roadmap include work with the government’s Centre for Connected and Autonomous Vehicles around including ethical due diligence in the future regulatory framework for self-driving vehicles.

TechUK welcomed the publication of the roadmap and its goal to provide organisations with the tools and expertise to drive greater confidence along the AI supply chain, while pointing out there is more work to be done.

Antony Walker, deputy chief executive at the trade body, said: “As this work moves forward, we must work hard together to ensure assurance checks are genuinely pro-innovation, proportionate and well aligned with existing standards and regulation.”

The release of the AI assurance roadmap follows the publication of a standard for algorithmic transparency, announced on 29 November 2021. Also a world first, the standard was developed by the Cabinet Office’s Central Digital and Data Office, the CDEI and other stakeholders from the public and private sectors.

It follows a review of algorithmic bias in decision-making carried out by the CDEI, one of the key recommendations of which was the implementation of a mandatory transparency obligation on public sector organisations using algorithms to support significant decisions affecting individuals.
