EU regulators call for ban on biometrics in public spaces

Source: ComputerWeekly.com

Two pan-European data protection bodies are jointly calling for a general ban on the use of automated biometric identification technologies in public spaces, arguing that they present an unacceptable interference with fundamental rights and freedoms.

The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) have issued a joint opinion in response to the European Commission’s (EC) proposed Artificial Intelligence Act (AIA), calling for a general ban on the use of AI for the automated recognition of human features in public spaces.

This would include banning the use of AI to identify faces, gait, fingerprints, DNA, voices and keystrokes, as well as any other biometric or behavioural signals, in any context.

“Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places,” said Andrea Jelinek, EDPB chair, and Wojciech Wiewiórowski, the EDPS, in a joint statement. “Applications such as live facial recognition interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms.

“This calls for an immediate application of the precautionary approach. A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for AI.”

Although the AIA – which adopts a decidedly risk-based, market-led approach to regulating the technology – does ostensibly prohibit the remote, real-time biometric identification of people in public places, digital rights experts have previously told Computer Weekly that a number of loopholes significantly weaken any claim that the practice has actually been banned.

These include a number of very broad “threat exemptions” allowing law enforcement to use the technology, and the fact that only real-time, rather than retroactive, biometric identification is prohibited.

The view that the AIA’s “ban” on remote biometric identification is insufficient is also shared by the EDPB and EDPS.

“Remote biometric identification of individuals in publicly accessible spaces poses a high risk of intrusion into individuals’ private lives,” said the joint opinion. “Therefore, the EDPB and the EDPS consider that a stricter approach is necessary.

“The use of AI systems might present serious proportionality problems, since it might involve the processing of data of an indiscriminate and disproportionate number of data subjects for the identification of only a few individuals (for example, passengers in airports and train stations).

“The problem regarding the way to properly inform individuals about this processing is still unsolved, as well as the effective and timely exercise of the rights of individuals. The same applies to its irreversible, severe effect on the population’s (reasonable) expectation of being anonymous in public spaces, resulting in a direct negative effect on the exercise of freedom of expression, of assembly, of association as well as freedom of movement.”

The experts that Computer Weekly spoke to about the AIA also highlighted the lack of a “ban” on biometric AI tools that can detect race, gender and disability.

“The biometric categorisation of people into certain races, gender identities, disability categories – all of these things need to be banned in order for people’s rights to be protected because they cannot be used in a rights-compliant way,” said Sarah Chander, a senior policy adviser at European Digital Rights (EDRi). “Insofar that the legislation doesn’t do that, it will fall short.”

The EDPB and EDPS have similarly taken issue with the lack of attention given to biometric categorisation, further recommending a ban on any AI systems that group people into “clusters” based on ethnicity, gender, political or sexual orientation, or any other grounds on which discrimination is prohibited under the Charter of Fundamental Rights.

The joint opinion added: “Furthermore, the EDPB and the EDPS consider that the use of AI to infer emotions of a natural person is highly undesirable and should be prohibited, except for certain well-specified use-cases, namely for health or research purposes (for example, patients where emotion recognition is important), always with appropriate safeguards in place and, of course, subject to all other data protection conditions and limits, including purpose limitation.”

In the UK, information commissioner Elizabeth Denham has said she is also “deeply concerned” about the inappropriate and reckless use of live facial recognition (LFR) technologies in public spaces, noting in a blog that none of the organisations investigated by her office were able to fully justify its use.

However, while Denham acknowledged the significant impact that biometrics can have on rights, she refrained from calling for a general ban, instead issuing a Commissioner’s Opinion to act as guidance for companies and public organisations looking to deploy biometric technologies.

“It is not my role to endorse or ban a technology but, while this technology is developing and not widely deployed, we have an opportunity to ensure it does not expand without due regard for data protection,” Denham wrote in the blog.

“Organisations will need to demonstrate high standards of governance and accountability from the outset, including being able to justify that the use of LFR is fair, necessary and proportionate in each specific context in which it is deployed. They need to demonstrate that less intrusive techniques won’t work.”

On 7 June 2021, in line with the EDPB and EDPS opinion, Access Now and more than 200 other civil society organisations, activists, researchers and technologists from 55 countries signed an open letter calling for legal prohibitions on the use of biometric technologies in public spaces, whether by governments, law enforcement or private actors.

As well as calling for a complete ban, the coalition also called on governments around the world to stop all public investment in biometric technologies that enable mass surveillance and discriminatory targeted surveillance.

“Amazon, Microsoft and IBM have backed away from selling facial recognition technologies to police,” said Isedua Oribhabor, US policy analyst at Access Now. “Investors are calling for limitations on how this technology is used. This shows that the private sector is well aware of the dangers that biometric surveillance poses to human rights.

“But being aware of the problem is not enough – it is time to act. The private sector should fully address the impacts of biometric surveillance by ceasing to create or develop this technology in the first place.”
