NHS England works with Ada Lovelace Institute to tackle AI bias in healthcare

Source: ComputerWeekly.com

NHS England is working with the Ada Lovelace Institute to pilot algorithmic impact assessments (AIAs) in healthcare. The Ada Lovelace Institute defines an AIA as a tool for assessing the possible societal impacts of an algorithmic system before the system is in use.

NHS England said the pilot, run by the NHS AI Lab, will be used as part of the data access process for the National Covid-19 Chest Imaging Database (NCCID) and the proposed National Medical Imaging Platform (NMIP).

While artificial intelligence (AI) has the potential to support health and care workers in delivering better care, it could also exacerbate existing health inequalities if concerns such as algorithmic bias are not accounted for. In what is believed to be a world first for AI adoption in healthcare, the trial aims to ensure that potential risks such as algorithmic bias are addressed before developers and researchers can access NHS data.

Through the trial, NHS AI Lab said it will support researchers and developers in engaging patients and healthcare professionals at an early stage of AI development, when there is greater flexibility to make adjustments and respond to concerns.

Brhmie Balaram, head of AI research and ethics at the NHS AI Lab, hopes that supporting patient and public involvement as part of the development process will lead to improvements in patient experience and the clinical integration of AI.

“Building trust in the use of AI technologies for screening and diagnosis is fundamental if the NHS is to realise the benefits of AI,” said Balaram. “Through this pilot, we hope to demonstrate the value of supporting developers to meaningfully engage with patients and healthcare professionals much earlier in the process of bringing an AI system to market.”

NHS AI Lab commissioned the Ada Lovelace Institute to produce a guide on how to use AIAs in the real world. It is designed to help developers and researchers consider and account for the potential impacts of proposed technologies on people, society and the environment.

The sample version of the process guide, available to download from the Ada Lovelace Institute's website, provides step-by-step guidance for project teams seeking access to imaging data from the NMIP, outlining how to conduct an AIA for their project.

The guide is aimed at designers, developers, data scientists and product or research managers working on products and data models that need to use NMIP imaging data. It covers accountability, transparency, record keeping and critical dialogue on how the design and development of the system might result in particular harms and benefits.

Octavia Reeve, interim lead at the Ada Lovelace Institute, said: “Algorithmic impact assessments have the potential to create greater accountability for the design and deployment of AI systems in healthcare, which can in turn build public trust in the use of these systems, mitigate risks of harm to people and groups, and maximise their potential for benefit. We hope that this research will generate further considerations for the use of AIAs in other public and private-sector contexts.”

Innovation minister Syed Kamall said: “While AI has great potential to transform health and care services, we must tackle biases which have the potential to do further harm to some populations as part of our mission to eradicate health disparities. 

“This pilot once again demonstrates the UK is at the forefront of adopting new technologies in a way that is ethical and patient-centred.”
