NGOs file complaints against Clearview AI in five countries

Privacy and human rights organisations have filed legal complaints against controversial facial recognition company Clearview AI to data protection regulators in a coordinated action across five countries.

The complaints call for data protection regulators in the UK, France, Austria, Italy and Greece to ban the company’s activities in Europe, alleging that it is in breach of European data protection laws.

Clearview AI uses scraping technology to harvest photographs of people from social media and news sites without their consent, according to complaints filed with data protection regulators in the five countries.

The company sells access to what it claims is the “largest known database of 3+ billion facial images” to law enforcement, which can use its algorithms to identify individuals from photographs.

Clearview claims its technology has “helped law enforcement track down hundreds of at-large criminals, including paedophiles, terrorists and sex traffickers”.

The company also says its technology has been used to “identify victims of crimes including child sex abuse and financial fraud” and to “exonerate the innocent”.

According to the legal complaints, Clearview processes personal data in breach of data protection law and uses photographs posted on the internet in a way that goes beyond what internet users would reasonably expect.

“European data protection laws are very clear when it comes to the purposes companies can use our data for,” said Ioannis Kouvakas, legal officer at Privacy International, which has submitted complaints in the UK and France.

“Extracting our unique facial features or even sharing them with the police and other companies goes far beyond what we could ever expect as online users,” he said.

Tracing through metadata

Privacy International claims that data subject access requests (DSARs) by staff have shown that Clearview AI collects photographs of people in the UK and the European Union (EU).

Clearview also collects metadata contained in the images, such as the location where the photographs were taken, and links back to the source of the photograph and other data, according to research by the campaigning group.
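
As an illustration of the kind of embedded metadata the complaint describes, the short Python sketch below reads a photo’s EXIF tags, including any GPS coordinates, using the Pillow imaging library. It is a generic example, not Clearview’s software, and the file name is hypothetical.

    # Minimal sketch: reading the metadata embedded in a photo's EXIF block
    # (timestamp, camera details, GPS location) with the Pillow library.
    # Generic illustration only; "photo.jpg" is a hypothetical file name.
    from PIL import Image, ExifTags

    def read_exif(path: str) -> dict:
        """Return an image's EXIF tags as a {tag_name: value} dictionary."""
        image = Image.open(path)
        raw = image._getexif() or {}  # numeric tag IDs -> values (JPEG/TIFF images)
        return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in raw.items()}

    tags = read_exif("photo.jpg")
    print(tags.get("DateTimeOriginal"))  # when the photo was taken, if recorded
    print(tags.get("GPSInfo"))           # raw GPS coordinates, if the camera stored them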

Lucie Audibert, legal officer at Privacy International, said the technology could quickly allow a Clearview client to build up a detailed picture of a person from a single photograph.

“The most concerning thing is that at the click of a button, a Clearview client can immediately reconcile every piece of information about you on the web, which is something that without Clearview would take enormous effort,” she said.

“Applying facial recognition on the web means that you can suddenly unite information in a completely novel way, which you could not do before when you were relying on public search engines,” she said.

No legal basis

The complaints allege that Clearview has no legal basis for collecting and processing the data it collects under European data protection law.

The fact that pictures have been publicly posted on the web does not amount to consent from the data subjects to have their images processed by Clearview, the groups argue.

Many individuals will not be aware that their images have been posted online either by friends on social media or by businesses promoting their services.

Audibert said many hospitality businesses have been posting pictures of customers on social media to show they are open again as Covid restrictions are lifted, for example.

“Pubs and restaurants have been posting a lot of pictures of their new terraces opening and there are people everywhere in those photographs. People don’t know that they have been photographed by a restaurant, advertising on social media that they are reopening,” she said.

By identifying images online using facial recognition, it is possible to build up a detailed picture of a person’s life.

Photographs could be used, for example, to identify a person’s religion, their political beliefs, their sexual preferences, who they associate with, or where they’ve been.

“There is potential for tracking and surveilling people in a novel way,” said Audibert.

This could have serious consequences for individuals in authoritarian regimes who might speak out against their government.

Clearview, which was founded in 2017, first came to the public’s attention in January 2020, when The New York Times revealed that it had been offering facial recognition services to more than 600 law enforcement agencies and at least a handful of companies for “security purposes”.

Also among the company’s users, of which it claims to have 2,900, are college security departments, attorneys general and private companies, including events organisations, casino operators, fitness firms and cryptocurrency companies, BuzzFeed subsequently reported.

Images stored indefinitely

Research by Privacy International suggests Clearview AI uses automated software to search public web pages and collect images containing human faces, along with metadata such as the title of the image, the web page, its source link and geolocation.
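
By way of illustration, the Python sketch below shows how automated software of this general kind might collect image links and basic metadata, such as the page title and source link, from a single public web page using the requests and BeautifulSoup libraries. It is a generic example of web scraping, not Clearview’s scraper, and the page URL would be supplied by the caller.

    # Minimal sketch of scraping one public page for images plus basic metadata.
    # Generic illustration only; not Clearview's scraper.
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    def scrape_images(page_url: str) -> list[dict]:
        """Return one record per <img> tag found on a public web page."""
        html = requests.get(page_url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        title = soup.title.string if soup.title else ""
        return [
            {
                "image_url": urljoin(page_url, img["src"]),  # absolute link to the image
                "page_title": title,                         # title of the hosting page
                "source_link": page_url,                     # where the image was found
            }
            for img in soup.find_all("img", src=True)
        ]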

The images are stored on Clearview’s servers indefinitely, even after a previously collected photograph or the web page that hosts it has been made private, the group says in its complaint.

The company uses neural networks to scan each image and map its unique facial features to “vectors” made up of 512 data points. These are used to convert photographs of faces into machine-readable biometric identifiers that are unique to each face.

It stores the vectors in a database where they are associated with photographic images and other scraped information. The vectors are hashed, using a mathematical function to index the database and to allow it to be searched.

Clearview’s clients can upload images of individuals they wish to identify, and receive any closely matching images, along with metadata that allows the user to see where the image came from.
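
To make the pipeline just described more concrete, the Python sketch below builds a toy version of it: a placeholder embed_face function (hypothetical, standing in for a trained neural network) turns an image into a 512-point vector, each vector is hashed with SHA-256 to index an in-memory database, and a search returns the closest stored matches along with their metadata. It is a simplified illustration under those assumptions, using NumPy, not Clearview’s implementation.

    # Minimal sketch of the vector/index/search pipeline described above.
    # embed_face is a hypothetical placeholder for a trained neural network;
    # here it only produces a repeatable dummy vector per image so the rest
    # of the pipeline can run. Not Clearview's implementation.
    import hashlib
    import numpy as np

    EMBEDDING_DIM = 512  # one value per facial-feature data point

    def embed_face(image: np.ndarray) -> np.ndarray:
        """Placeholder embedding: deterministic per image, unit length."""
        seed = int.from_bytes(hashlib.sha256(image.tobytes()).digest()[:4], "big")
        vector = np.random.default_rng(seed).standard_normal(EMBEDDING_DIM)
        return vector / np.linalg.norm(vector)

    def vector_key(vector: np.ndarray) -> str:
        """Hash the vector's bytes to get a key for indexing the database."""
        return hashlib.sha256(vector.tobytes()).hexdigest()

    database: dict[str, tuple[np.ndarray, dict]] = {}  # key -> (vector, metadata)

    def add_image(image: np.ndarray, metadata: dict) -> None:
        vector = embed_face(image)
        database[vector_key(vector)] = (vector, metadata)

    def search(image: np.ndarray, top_k: int = 5) -> list[dict]:
        """Return metadata for the stored vectors closest to the query image."""
        query = embed_face(image)
        scored = sorted(
            ((float(np.dot(query, vector)), metadata)  # cosine similarity of unit vectors
             for vector, metadata in database.values()),
            key=lambda pair: pair[0],
            reverse=True,
        )
        return [metadata for _, metadata in scored[:top_k]]

    # Example usage with a dummy "image" array standing in for a real photo:
    fake_image = np.zeros((64, 64, 3), dtype=np.uint8)
    add_image(fake_image, {"source_link": "https://example.com/page", "page_title": "Example"})
    print(search(fake_image, top_k=1))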

Legal complaints

The company has faced numerous legal challenges to its privacy practices. The American Civil Liberties Union filed a legal complaint in Illinois in May 2020, under the state’s Biometric Information Privacy Act (BIPA), and civil liberties activists filed an action in California in February 2021, claiming that Clearview’s practices breach local bans on facial recognition technology.

The Office of the Privacy Commissioner of Canada (OPCC) published a report in February 2021 recommending that Clearview cease offering its service in Canada and delete images and biometric data collected from Canadians.

In Europe, the Hamburg data protection authority gave notice that it would require Clearview to delete the hash values associated with the facial images of a German citizen who complained.

The Swedish Authority for Privacy Protection found in February 2021 that the Swedish Police Authority had unlawfully used Clearview’s services in breach of the Swedish Criminal Data Act.

The UK’s Information Commissioner’s Office (ICO) opened a joint investigation with the Australian data protection authority into Clearview last year, focusing on its alleged use of scraped data and biometrics of individuals.

Coordinated action

Privacy International is pressing the ICO to work with other data protection regulators to declare that Clearview’s collection and processing practices are unlawful in the UK and in Europe. It is also calling on the ICO to find that the use of Clearview AI by law enforcement agencies in the UK would breach the Data Protection Act 2018.

The complaint urges the ICO to work with other data protection regulators to investigate the company’s compliance with data protection laws. “We want to achieve a declaration that these practices are unlawful. The most important thing for us to stop is this mass scraping and processing of biometric data,” said Audibert.

Alan Dahi, a data protection lawyer at Noyb, said that just because something is online does not mean it is fair game to be appropriated by others in any way they want – neither morally nor legally. “Data protection authorities [DPAs] need to take action and stop Clearview and similar organisations from hoovering up the personal data of EU residents,” he said.

Fabio Pietrosanti, president of Italian civil rights organisation the Hermes Center for Transparency and Digital Human Rights, which has submitted one of the complaints, said facial recognition technologies threaten the privacy of people’s lives. “By surreptitiously collecting our biometric data, these technologies introduce a constant surveillance of our bodies,” he said.

Marina Zacharopoulou, a lawyer and member of digital rights organisation Homo Digitalis, which has also submitted a complaint, said there was a need for increased scrutiny of facial recognition technologies such as Clearview. “The DPAs have strong investigative powers and we need a coordinated reaction to such public-private partnerships,” she said.

In a coordinated action, Privacy International has filed complaints with the UK ICO and the French data protection regulator, CNIL; the Hermes Center for Transparency and Digital Human Rights has filed a complaint with the Italian data protection authority, the Garante; Homo Digitalis has filed a complaint with Greece’s Hellenic Data Protection Authority; and Noyb, founded by lawyer Max Schrems, has filed a complaint with the DSB, the Austrian data protection authority.
