EU artificial intelligence regulation risks undermining social safety net

The European Union’s (EU) proposed plan to regulate the use of artificial intelligence (AI) threatens to undermine the bloc’s social safety net, and is ill-equipped to protect people from surveillance and discrimination, according to a report by Human Rights Watch.

Social security support across Europe is increasingly administered by AI-powered algorithms, which are being used by governments to allocate life-saving benefits, provide job support and control access to a variety of social services, said Human Rights Watch in its 28-page report, How the EU’s flawed artificial intelligence regulation endangers the social safety net.

Drawing on case studies from Ireland, France, the Netherlands, Austria, Poland and the UK, the non-governmental organisation (NGO) found that Europe’s trend towards automation is discriminating against people in need of social security support, compromising their privacy, and making it harder for them to obtain government assistance.

It added that while the EU’s Artificial Intelligence Act (AIA) proposal, which was published in April 2021, does broadly acknowledge the risks associated with AI, “it does not meaningfully protect people’s rights to social security and an adequate standard of living”.

“In particular, its narrow safeguards neglect how existing inequities and failures to adequately protect rights – such as the digital divide, social security cuts, and discrimination in the labour market – shape the design of automated systems, and become embedded by them.”

According to Amos Toh, senior researcher on AI and human rights at Human Rights Watch, the proposal will ultimately fail to end the “abusive surveillance and profiling” of those in poverty. “The EU’s proposal does not do enough to protect people from algorithms that unfairly strip them of the benefits they need to support themselves or find a job,” he said.

Self-regulation not good enough

The report echoes claims made by digital civil rights experts, who previously told Computer Weekly that the regulatory proposal is stacked in favour of the public and private organisations that develop and deploy AI technologies: those organisations are essentially tasked with box-ticking exercises, while ordinary people are offered little in the way of protection or redress.

For example, although the AIA establishes rules around the use of “high-risk” and “prohibited” AI practices, it allows technology providers to self-assess whether their systems comply with the regulation’s limited rights protections, in a process known as a “conformity assessment”.

“Once they sign off on their own systems (by submitting a declaration of conformity), they are free to put them on the EU market,” said Human Rights Watch. “This embrace of self-regulation means that there will be little opportunity for civil society, the general public, and people directly affected by the automation of social security administration to participate in the design and implementation of these systems.”


It added that the regulation also fails to provide people who are denied benefits because of software errors with any means of redress against tech companies: “The government agencies responsible for regulatory compliance in their country could take corrective action against the software or halt its operation, but the regulation does not grant directly affected individuals the right to submit an appeal to these agencies.”

Giving the example of Austria’s employment profiling algorithm, which Austrian academics have found is being used to support the government’s austerity policies, the NGO said the system helped legitimise social security budget cuts by reinforcing the harmful narrative that people with poor job prospects are lazy or unmotivated.

“The appearance of mathematical objectivity obscures the messier reality that people’s job prospects are shaped by structural factors beyond their control, such as disparate access to education and job opportunities,” it said.
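To make concrete how that veneer of objectivity arises, the following minimal Python sketch shows how a profiling score of this kind might be computed. The feature names and weights are hypothetical illustrations, not taken from the Austrian system: the point is that circumstances shaped by structural factors enter the model exactly like personal attributes, and the single number it outputs erases the distinction.

```python
import math

# Hypothetical weights for illustration only; this is NOT the actual
# Austrian employment profiling model. Negative weights pull the
# "job prospects" score down, routing the person to a lower support tier.
WEIGHTS = {
    "years_of_education": 0.15,
    "months_unemployed": -0.08,
    "has_care_obligations": -0.60,
    "lives_in_high_unemployment_region": -0.45,
}
BIAS = 0.5

def job_prospects_score(person: dict) -> float:
    """Logistic-regression-style score between 0 and 1."""
    z = BIAS + sum(w * person.get(name, 0) for name, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

applicant = {
    "years_of_education": 10,               # shaped by access to schooling
    "months_unemployed": 18,                # shaped by the local labour market
    "has_care_obligations": 1,              # a circumstance, not a choice
    "lives_in_high_unemployment_region": 1,
}
print(f"Job prospects score: {job_prospects_score(applicant):.2f}")  # ~0.38
```

A caseworker reading the 0.38 sees only a “low prospects” applicant; the regional unemployment rate and caring responsibilities that produced the number are nowhere in view.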

“Centring the rights of low-income people early in the design process is critical, since correcting human rights harm once a system goes live is exponentially harder. In the UK, the defective algorithm used to calculate people’s Universal Credit benefits is still causing people to suffer erratic fluctuations and reductions in their payments, despite a court ruling in 2020 ordering the government to fix some of these errors. The government has also resisted broader changes to the algorithm, arguing that these would be too costly and burdensome to implement.”
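The Universal Credit defect the court examined is, at its core, a windowing problem: entitlement is recalculated over fixed monthly assessment periods, so when a monthly payday shifts by a day or two, two salary payments can land in one window and none in the next. The sketch below uses simplified figures and a hypothetical taper rule, not the real Universal Credit rates, to show how steady earnings can still yield erratic awards.

```python
from datetime import date

STANDARD_ALLOWANCE = 900.0  # illustrative figure, not the real rate
TAPER_RATE = 0.55           # hypothetical: benefit withdrawn per pound earned

def award(earnings: float) -> float:
    """Award for one monthly assessment period under a simple taper."""
    return max(0.0, STANDARD_ALLOWANCE - TAPER_RATE * earnings)

def earnings_in(paydays: list[date], wage: float, start: date, end: date) -> float:
    """Sum the wages whose payday falls inside [start, end)."""
    return wage * sum(start <= d < end for d in paydays)

# A steady 1,000-a-month salary, but the 30 April payday is brought
# forward a day, so it falls into the same assessment period as 31 March.
paydays = [date(2020, 3, 31), date(2020, 4, 29),
           date(2020, 5, 29), date(2020, 6, 30)]

periods = [(date(2020, 3, 30), date(2020, 4, 30)),
           (date(2020, 4, 30), date(2020, 5, 30)),
           (date(2020, 5, 30), date(2020, 6, 30))]

for start, end in periods:
    e = earnings_in(paydays, 1000.0, start, end)
    print(f"{start} to {end}: earnings {e:7.2f} -> award {award(e):6.2f}")
```

With identical monthly earnings, the three awards swing from nothing to the full allowance, which is precisely the kind of fluctuation claimants have reported.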

Loopholes prevent transparency

Although the AIA includes provisions for the creation of a centralised, EU-wide database of high-risk systems – which will be publicly viewable and based on the conformity assessments – Human Rights Watch said loopholes in the regulation were likely to prevent meaningful transparency.

The most notable loophole around the database, it said, is that only generic details about an automated system, such as the EU countries where it is deployed and whether it is active or discontinued, would be published.

“Disaggregated data critical to the public’s understanding of a system’s impact, such as the specific government agencies using it, dates of service, and what the system is being used for, will not be available,” it said. “In other words, the database might tell you that a company in Ireland is selling fraud risk scoring software in France, but not which French agencies or companies are using the software, and how long they have been using it.”
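Read as a data model, the objection is about which fields the public record carries. The following sketch is hypothetical, since the AIA does not prescribe a schema in these terms; it simply contrasts the generic fields the report says would be published with the disaggregated data that would not.

```python
from dataclasses import dataclass

@dataclass
class HighRiskSystemEntry:
    """Hypothetical sketch of a public database record, based on the
    report's description; field names are illustrative, not from the AIA."""
    provider: str             # e.g. the Irish company selling the software
    category: str             # broad purpose, e.g. "fraud risk scoring"
    member_states: list[str]  # EU countries where the system is deployed
    status: str               # "active" or "discontinued"

# Disaggregated data the report says would NOT appear:
#   - the specific government agencies or companies using the system
#   - dates of service (how long it has been in use)
#   - what the system is concretely being used for

entry = HighRiskSystemEntry(
    provider="ExampleVendor Ltd (Ireland)",  # hypothetical name
    category="fraud risk scoring",
    member_states=["FR"],
    status="active",
)
print(entry)
```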

It added that the regulation also provides significant exemptions for law enforcement and migration control authorities. For example, while technology suppliers are ordinarily required to disclose instructions for use that explain the underlying decision-making processes of their systems, the AIA states that this obligation does not apply to law enforcement entities.

“As a result, it is likely that critically important information about a broad range of law enforcement technologies that could impact human rights, including criminal risk assessment tools and crime analytics software that parse large datasets to detect patterns of suspicious behaviour, would remain secret,” it said.

In October 2021, the European Parliament voted in favour of a proposal to allow international crime agency Europol to more easily exchange information with private companies and develop AI-powered policing tools.

However, according to Laure Baudrihaye-Gérard, legal and policy director at NGO Fair Trials, the extension of Europol’s mandate in combination with the AIA’s proposed exemptions would effectively allow the crime agency to operate with little accountability and oversight when it came to developing and using AI for policing.

In a joint opinion piece, Baudrihaye-Gérard and Chloé Berthélémy, policy advisor at European Digital Rights (EDRi), added that the MEPs’ vote in Parliament represented a “blank cheque” for the police to create AI systems that risk undermining fundamental human rights.

Recommendations for risk reduction

Human Rights Watch’s report goes on to make a number of recommendations on how the EU can strengthen the AIA, in particular its prohibition of systems that pose unacceptable risks.

These include placing clear prohibitions on AI applications that threaten rights in ways that cannot be effectively mitigated; codifying a strong presumption against the use of algorithms to delay or deny access to benefits; and establishing a mechanism for making additions to the list of systems that pose “unacceptable risk”.

It also recommended introducing mandatory human rights impact assessments that need to be undertaken both before and during deployments, and requiring EU Member States to establish independent oversight bodies to ensure the impact assessments are not mere box-ticking exercises.

“The automation of social security services should improve people’s lives, not cost them the support they need to pay rent, buy food, and make a living,” said Toh. “The EU should amend the regulation to ensure that it lives up to its obligations to protect economic and social rights.”
