UK announces standard for algorithmic transparency

Source is ComputerWeekly.com

The UK government has announced a new algorithmic transparency standard for artificial intelligence (AI), to be adopted by government departments and public sector bodies.

Developed by the Cabinet Office’s Central Digital and Data Office (CDDO), the move is described as a world first and is the result of cooperation with the Centre for Data Ethics and Innovation (CDEI) and other stakeholders from the public and private sectors.

It follows a review of bias in algorithmic decision-making carried out by the CDEI, one of whose key recommendations was a mandatory transparency obligation on public sector organisations using algorithms to support significant decisions affecting individuals.

“Algorithms can be harnessed by public sector organisations to help them make fairer decisions, improve the efficiency of public services and lower the cost associated with delivery,” said Lord Agnew, minister of state at the Cabinet Office.

“However, they must be used in decision-making processes in a way that manages risks, upholds the highest standards of transparency and accountability, and builds clear evidence of impact.”

The publication of the standard was informed by a deliberative public engagement exercise that the CDDO and the CDEI commissioned from strategy consultancy BritainThinks in June 2021 to explore public attitudes towards algorithmic transparency in the public sector.

Stakeholders from inside and outside government were also consulted during the creation of the framework, and research trust Reform, Imperial College London’s The Forum and the CDEI hosted a policy hackathon at which international experts from government, academia and industry discussed and developed practical solutions to the challenges posed by algorithmic transparency.

The standard is organised into two tiers. The first includes a short description of the algorithmic tool, including how and why it is being used, while the second offers more detailed information about how the tool works, the datasets that have been used to train the model and the level of human oversight involved.
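To make the two-tier structure concrete, the sketch below models an illustrative transparency record as a simple Python data structure. The field names (tool_name, training_datasets, human_oversight and so on) are hypothetical placeholders chosen to convey the tier one/tier two split described above; they are not the CDDO's actual template fields.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TierOne:
    """Short, public-facing summary of the algorithmic tool."""
    tool_name: str
    how_it_is_used: str   # how the tool supports decisions
    why_it_is_used: str   # rationale for deploying it

@dataclass
class TierTwo:
    """More detailed technical and governance information."""
    how_it_works: str               # model type / decision logic
    training_datasets: List[str]    # datasets used to train the model
    human_oversight: str            # level of human review of outputs

@dataclass
class TransparencyRecord:
    tier_one: TierOne
    tier_two: TierTwo

# Illustrative record for a fictional tool (all values invented)
record = TransparencyRecord(
    tier_one=TierOne(
        tool_name="Example triage tool",
        how_it_is_used="Prioritises casework for human review",
        why_it_is_used="Reduces processing backlogs",
    ),
    tier_two=TierTwo(
        how_it_works="Gradient-boosted classifier over application data",
        training_datasets=["Historical casework outcomes, 2015-2020"],
        human_oversight="All automated recommendations reviewed by a caseworker",
    ),
)
```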

The government expects that the standard, which also follows commitments made in the National AI Strategy and the National Data Strategy, will help teams be “meaningfully transparent” about the role of AI in decision-making, particularly when algorithms might have a legal or economic impact on individuals.

A pilot of the standard will begin in various government departments and bodies in the coming months, and this initial phase will be followed by a review by the CDDO based on feedback. Formal endorsement from the Data Standards Authority will be sought in 2022.

The Cabinet Office said the creation of the standard was also informed by calls from organisations such as the Alan Turing Institute and the Ada Lovelace Institute, and international organisations such as the OECD and Open Government Partnership, which have advocated for transparency on the risks associated with AI-based decision-making. The bodies warned that scrutiny of the role of algorithms in decision-making processes is needed as a way to build citizen trust.

By publishing the standard, the government says it is “empowering” experts and the public to engage with the data and provide external scrutiny. It also expects that greater transparency on AI use in the public sector will promote “trustworthy innovation” and enable unintended consequences to be mitigated at an earlier stage.

“Organisations are increasingly turning to algorithms to automate or support decision-making,” said Adrian Weller, programme director for AI at the Alan Turing Institute and member of the CDEI advisory board. “We have a window of opportunity to put the right governance mechanisms in place as adoption increases.”

According to Weller, the move will not only help to build appropriate trust in the use of algorithmic decision-making by the public sector, but will also act as a lever to raise transparency standards in the private sector.

Imogen Parker, associate director for policy at the Ada Lovelace Institute, said meaningful transparency about the use of AI-based tools in government organisations is “an essential part of a trustworthy digital public sector”.

Describing the transparency standard as “an important step” towards greater trust, Parker said the framework is “a valuable contribution to the wider conversation on algorithmic accountability in the public sector”.

She added: “We look forward to seeing trials, tests and iterations, followed by government departments and public sector bodies publishing completed standards to support modelling and development of good practice.”