BCS calls on government to retain protections against AI

A citizen’s right to have decisions made by automated or artificial intelligence (AI) systems reviewed by a fellow human must not be removed while AI is in its infancy, the BCS, the Chartered Institute for IT, has warned.

This right, enshrined in UK law via Article 22 of the General Data Protection Regulation (GDPR), is just one of many aspects of British data protection that the government is currently seeking to change in a post-Brexit overhaul of data laws that has set it on yet another collision course with its erstwhile European Union partners. An ongoing consultation, Data: a new direction, was launched to this effect in September by the Department for Digital, Culture, Media and Sport (DCMS).

The BCS said the consultation suggested that the right to human appeal against some automated decisions made by AI – such as decisions on job recruitment or loan eligibility – might be unnecessary.

But because AI does not always rely on personal data to make decisions about people, truly protecting a person’s right to have AI-made decisions reviewed requires wider regulation of AI, it said.

“Article 22 is not an easy provision to interpret and there is danger in interpreting it in isolation, like many have done,” said Sam De Silva, chair of BCS’s Law Specialist Group, and a partner at law firm CMS.

“We still do need clarity on the rights someone has in the scenario where there is fully automated decision-making which could have significant impact on that individual.

“We would also welcome clarity on whether Article 22(1) should be interpreted as a blanket prohibition of all automated data processing that fits the criteria, or a more limited right to challenge a decision resulting from such processing.

“As the professional body for IT, BCS is not convinced that either retaining Article 22 in its current form or removing it achieves such clarity.”

De Silva said it was also important to consider that the protection of human review of an automated decision currently sits in a piece of legislation that deals with personal data. If no personal data is involved, he suggested, this protection does not apply, even though an automated decision could still have a life-changing impact.

“For example, say an algorithm is created deciding whether you should get a vaccine,” he said. “The data you need to enter into the system is likely to be date of birth, ethnicity and other things, but not a name or anything that could identify you as the person.

“Based on the input, the decision could be that you’re not eligible for a vaccine. But any protections in the GDPR would not apply as there is no personal data.

“So, if we think the protection is important enough, it should not go into the GDPR. It begs the question: do we need to regulate AI generally and not through the ‘back door’ via GDPR?”
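
To make De Silva’s point concrete, a minimal sketch of such a system might look like the following. The rules, thresholds and group labels here are invented purely for illustration and are not drawn from any real eligibility scheme; the point is that every input is a group-level attribute, with no name or other identifier, yet the output is still a significant decision about an individual.

# Hypothetical sketch of the kind of automated eligibility decision De Silva
# describes: inputs such as year of birth and an ethnicity category, with no
# name or other identifier, still drive a decision with real consequences.
from dataclasses import dataclass

@dataclass
class Applicant:
    birth_year: int            # used only to derive an age, not an identity
    ethnicity: str             # illustrative category label, not real data
    underlying_condition: bool

def vaccine_eligible(applicant: Applicant, current_year: int = 2021) -> bool:
    """Return an automated eligibility decision from non-identifying inputs.
    The thresholds and priority groups below are assumptions for illustration."""
    age = current_year - applicant.birth_year
    if age >= 50 or applicant.underlying_condition:
        return True
    # Assumed priority list, purely to show how group-level attributes can
    # decide the outcome without any personal identifier being processed.
    priority_groups = {"group_a", "group_b"}
    return applicant.ethnicity in priority_groups

print(vaccine_eligible(Applicant(birth_year=1990, ethnicity="group_c",
                                 underlying_condition=False)))  # prints False

Because no record in such a system names or identifies the person, the GDPR’s safeguards around automated decision-making would not bite, which is precisely the gap De Silva highlights.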

De Silva added: “It’s welcome that government is consulting carefully before making any changes to people’s right to appeal decisions made about them by algorithms and automated systems – but the technology is still in its infancy.”

The BCS is currently gathering more views on this issue, and others raised in the consultation, from across its membership base, ahead of a wider response.

However, it is not the first voice to raise concerns about whether Article 22 should be preserved. In its recently published response to the consultation, the Information Commissioner’s Office (ICO) said it welcomed the focus on bringing more clarity, in ethical terms, to a complex area, and suggested that future regulations could usefully include more guidance on the subject.

“However, resolving the complexity by simply removing the right to human review is not, in our view, in people’s interests and is likely to reduce trust in the use of AI,” said the ICO.

“Instead, we think the government should consider the extension of Article 22 to cover partly, as well as wholly, automated decision-making. This would protect people better, given the increase in decision-making where there is a human involved, but the decision is still significantly shaped by AI or other automated systems.

“We also encourage consideration of how the current approach to transparency could be strengthened to ensure human review is meaningful.”

Source is ComputerWeekly.com
