AI cannot be regulated by technical measures alone

Any attempt to regulate artificial intelligence (AI) must not rely solely on technical measures to mitigate potential harms, and should instead move to address the fundamental power imbalances between those who develop or deploy the technology and those who are subject to it, says a report commissioned by European Digital Rights (EDRi).

Published on 21 September 2021, the 155-page report Beyond debiasing: regulating AI and its inequalities specifically criticised the European Union’s (EU) “technocratic” approach to AI regulation, which it said was too narrowly focused on implementing technical bias mitigation measures, otherwise known as “debiasing”, to be effective at preventing the full range of AI-related harms.

The European Commission’s (EC) proposed Artificial Intelligence Act (AIA) was published in April 2021 and sought to create a risk-based, market-led approach to regulating AI through the establishment of self-assessments, transparency procedures and various technical standards.

Digital civil rights experts and organisations have previously told Computer Weekly that although the regulation is a step in the right direction, it will ultimately fail to protect people’s fundamental rights and mitigate the technology’s worst abuses because it does not address the fundamental power imbalances between tech firms and those who are subject to their systems.

The EDRi-commissioned report said that while European policymakers have publicly recognised that AI can produce a broad range of harms across different domains – including employment, housing, education, health and policing – their laser focus on algorithmic debiasing stems from a misunderstanding of the existing techniques and their effectiveness.

“EU policy documents favour debiasing datasets as the best means to address discrimination in AI, but fail to grasp the basics of debiasing approaches,” it said. “When discussing debiasing, the documents mistakenly suggest that mitigating biases in datasets guarantees that future systems based on these so-called ‘debiased datasets’ will be non-discriminatory.”
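
To make the report's point concrete, consider a minimal sketch (entirely hypothetical data, not taken from the report): a model trained on a dataset with the protected attribute removed can still discriminate through a correlated proxy feature, such as a postcode.

```python
# Minimal sketch (hypothetical data, not from the report): removing the
# protected attribute from a training set does not guarantee a
# non-discriminatory model if a correlated proxy feature remains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # protected attribute (0 or 1)
proxy = group + rng.normal(0, 0.5, n)    # correlated feature, e.g. postcode
label = (proxy + rng.normal(0, 0.5, n) > 0.5).astype(int)

# "Debiased dataset": the protected attribute itself is excluded from the
# features, but the model still learns the outcome through the proxy.
model = LogisticRegression().fit(proxy.reshape(-1, 1), label)
pred = model.predict(proxy.reshape(-1, 1))

for g in (0, 1):
    print(f"group {g}: selection rate {pred[group == g].mean():.2f}")
# Selection rates diverge sharply (roughly 0.16 vs 0.84 here), even though
# the protected attribute never appears in the training features.
```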

The report added that the EC’s proposed regulation also fails to consider other forms of bias that may occur in models and their outputs, betraying a lack of knowledge about the limitations of debiasing.

“Whether applied to datasets or algorithms, techno-centric debiasing techniques have profound limitations – they address bias in mere statistical terms, instead of accounting for the varied and complex fairness requirements and needs of diverse system stakeholders,” it said.

“Regulators have neglected to grapple with the implications, including technical impossibility results in debiasing (ie, it is impossible to fulfil multiple debiasing requirements within the same model), and the flattening of differences, especially social and political differences, that result from the pursuit of ‘unbiased’ datasets.”
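
The "technical impossibility results" the report refers to are well documented in the algorithmic fairness literature. One example, due to Chouldechova (2017), follows from the identity FPR = p/(1-p) × (1-PPV)/PPV × (1-FNR), where p is a group's base rate: when base rates differ between groups, an imperfect classifier cannot equalise positive predictive value, false negative rate and false positive rate for both groups at once. A short sketch with hypothetical numbers (not taken from the report) makes this concrete.

```python
# Illustration of one fairness impossibility result (Chouldechova, 2017),
# using hypothetical numbers rather than anything from the EDRi report.
# For an imperfect classifier, the false positive rate is fully determined
# by the group's base rate once PPV and FNR are fixed:
#   FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)

def implied_fpr(base_rate: float, ppv: float, fnr: float) -> float:
    """False positive rate implied by a group's base rate, PPV and FNR."""
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * (1 - fnr)

# Equalise two debiasing requirements (PPV = 0.8, FNR = 0.3) across two
# groups whose base rates differ, and the third requirement must break.
for group, base_rate in [("A", 0.4), ("B", 0.1)]:
    print(f"group {group}: FPR = {implied_fpr(base_rate, 0.8, 0.3):.3f}")
# group A: FPR = 0.117, group B: FPR = 0.019 - equal PPV and FNR force
# unequal false positive rates whenever base rates differ.
```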

The report also claimed that by adopting a “techno-centric” debiasing approach, policymakers are reducing complex social, political and economic problems to merely technical matters of data quality, ceding significant power and control over a range of issues to technology companies in the process.

“Debiasing locates the problems and solutions in algorithmic inputs and outputs, shifting political problems into the domain of design dominated by commercial actors,” it said. “Policy documents confer wide discretion to technology developers and service providers to set the terms of debiasing practices, leaving challenging political and economic questions raised by these methods to the discretion of service providers.”

It added that by shifting complex socio-technical problems of discrimination into the domain of service providers, and by promoting debiasing frameworks that give them so much discretion, European policymakers have implicitly positioned technology companies as arbiters of societal conflict.

“Such policy-making is likely to strengthen the regulatory power of tech companies in matters of discrimination and inequalities, while normalising the application of AI-based population management methods aligned with profit objectives,” it said.

Given the limitations of debiasing techniques, the report concluded that “overall, policymakers should cease advocating debiasing as the sole response to discriminatory AI” and instead promote it only for the narrow applications for which it is suited.

The report was researched and written by Agathe Balayn and Seda Gürses of Delft University of Technology in the Netherlands.

Writing in the report’s foreword, Sarah Chander, a senior policy adviser at EDRi, said: “Framing the debate around technical responses will obscure the complexity of the impact of AI systems in a broader political economy and ring-fence the potential responses to the technical sphere, centralising even more power with dominant technology companies.”

Chander added that “we should not allow techno-centric approaches to obfuscate more radical responses to the broad, structural harms emanating from AI systems”, and further called for bans on “uses of AI that inherently violate fundamental rights”.

The support for bans was also echoed elsewhere in the report, which recommended that “policymakers should implement regulatory processes so that AI systems which are inherently harmful or contrary to the public interest can be limited, prohibited or halted while in use”.

Earlier this month, the United Nations (UN) high commissioner for human rights, Michelle Bachelet, called for a moratorium on the sale and use of AI systems that pose a serious risk to human rights, at least until adequate safeguards are implemented, as well as for an outright ban on AI applications that cannot be used in compliance with international human rights law.

“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states,” she said. “AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online.

“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face.”

An accompanying report by the UN Human Rights Office, which analysed how AI affects a range of rights, found that both states and businesses have often rushed to deploy AI systems and are largely failing to conduct proper due diligence on how these systems affect human rights.

And in August 2021, a global study of algorithmic accountability in the public sector by the Ada Lovelace Institute found that very few policy interventions have meaningfully attempted to ensure public participation, either from the general public or from people directly affected by an algorithmic system.

“Proponents of public participation, especially of affected communities, argue that it is not only useful for improving processes and principles, but is crucial to designing policies in ways that meet the identified needs of affected communities, and in incorporating contextual perspectives that expertise-driven policy objectives may not meet,” the analysis said.

It added that for forms of participatory governance to be meaningful, policymakers must also consider how actors with varying levels of resources can contribute to the process.

Source: ComputerWeekly.com
