Top 10 technology and ethics stories of 2021

In 2021, technology and ethics coverage was shaped by pushes to regulate artificial intelligence (AI) – especially its deployment by law enforcement bodies – and the use of biometric identification technologies such as facial recognition.

In the European Union (EU), for example, lawmakers published a draft version of the Artificial Intelligence Act, but critics warned that while it was a step in the right direction, it would do little to stop AI-enabled human rights abuses.

The use of data by police – including how information is shared between the police and other public authorities, both now and historically – has also been a focus of Computer Weekly’s 2021 coverage, which will likely continue in 2022 as the Police, Crime, Sentencing and Courts (PCSC) Bill progresses through Parliament and the Undercover Policing Inquiry (UCPI) holds further sessions.

The role of unions in challenging technology companies has also been a major focus throughout 2021, starting with a Supreme Court judgment in February which ruled that the popular ride-hailing app Uber must classify its drivers as workers rather than as self-employed.

Here are Computer Weekly’s top 10 technology and ethics stories of 2021:

1. Mining deaths lawsuit against major tech companies dismissed 

In November 2021, a US district court judge dismissed the legal case against five major US technology companies accused by the families of dead or maimed child cobalt miners of knowingly benefiting from human rights abuses in the Democratic Republic of Congo (DRC), which Computer Weekly first reported on in late 2019.

The lawsuit against Alphabet, Apple, Dell, Microsoft and Tesla marked the first legal challenge of its kind against technology companies, many of which rely on their cobalt supply chains to power products such as electric cars, smartphones and laptops.

According to the judgment – which the victims plan to appeal – there was not a strong enough causal relationship between the firms’ conduct and the miners’ injuries. But lawyers representing the families have said: “The companies told the court they are mere purchasers of cobalt and have nothing to do with DRC mines, while they tell consumers that they have control over their supply chains and have enacted ‘policies’ that prohibit child labour in the cobalt mines they source from.

“Whether the companies lied to the court or their consumers is a question of fact for a jury to decide, not the court on a procedural motion. We think the court of appeals will agree with us on that.”

2. Interview: Uber driver Yaseen Aslam on his Supreme Court battle and what’s next for gig workers

In February, a unanimous decision by the UK Supreme Court found that Uber must classify its drivers as workers, entitling them to better workplace conditions and protections for the first time.

Before the ruling, Uber classified its drivers as self-employed independent contractors on the basis that its app merely connects drivers with potential customers – a position the company maintained throughout four years of legal proceedings and appeals that took the case all the way to the Supreme Court.

In the wake of the ruling, Yaseen Aslam, president of the App Drivers and Couriers Union (ADCU) and one of the two original claimants in the Uber case, spoke to Computer Weekly about how the case started and its implications.

As a result of the ruling, estimates published by the GMB union suggest that “tens of thousands” of Uber drivers could be in line to receive £12,000 each in compensation. Law firm Keller Lenkner, which is representing around 10,000 drivers in a group action against Uber, has similarly calculated that each driver could claim back between £10,000 and £12,000.

3. ‘Spy cops’ victims share ongoing data protection concerns

Established in 2015 to investigate the practices of undercover policing units – including the Special Demonstration Squad, which was created within the Met Police’s Special Branch in 1968 to infiltrate British protest groups – the Undercover Policing Inquiry began its second phase on 21 April 2021.

It heard that officers collected and disseminated a “substantial volume of personal information” about left-wing activists, including women they deceived into intimate sexual relationships, in surveillance that was “clearly disproportionate and inappropriate”.

Witnesses also expressed concern about the retention of their personal information without their knowledge and the extent to which it affected, and continues to affect, their lives. They also questioned whether information about them is still being collected.

4. Serious violence duty in PCSC Bill would gut UK data rights

Human rights organisations are sounding the alarm over the inclusion of violence reduction measures in the UK government’s Police, Crime, Sentencing and Courts (PCSC) Bill.

The measures will give police new powers to gather and share data on people allegedly involved in “serious violence”, but human rights champions and civil society groups claim this has the potential to undermine existing data rights and further entrench discriminatory policing practices.

There are also concerns, particularly among members of the medical profession, that obliging a range of public bodies, including healthcare providers, to share data with the police will erode people’s trust in those organisations and deter them from accessing essential public services, out of fear the information will be unfairly used against them.

First introduced to Parliament on 9 March 2021, the 308-page PCSC Bill had previously attracted significant criticism and prompted huge protests in cities across the UK due to a number of controversial measures that would, for example, criminalise Gypsy, Roma and Traveller communities’ way of life and radically restrict people’s ability to protest.

5. Europe’s proposed AI regulation falls short on protecting rights

In its Artificial Intelligence Act (AIA) proposal, published on 21 April 2021, the European Commission (EC) adopted a decidedly risk-based approach to regulating the technology, focusing on establishing rules around the use of “high-risk” and “prohibited” AI practices.

Speaking to Computer Weekly, however, digital civil rights experts and organisations claim the EC’s regulatory proposal is stacked in favour of organisations – both public and private – that develop and deploy AI technologies, which are essentially being tasked with box-ticking exercises, while ordinary people are offered little in the way of protection or redress.

This is despite ordinary people being subject to AI systems in a number of contexts from which they are not necessarily able to opt out, such as when the technology is used by law enforcement or immigration bodies.

6. TfL under fire for relying on Uber facial-verification data in licensing decisions

Transport for London’s (TfL) reliance on information from Uber’s facial verification-based driver ID system came under increased scrutiny in July, following multiple instances of misidentification leading to unionised drivers losing their private hire licences.

As a result, the transport regulator is facing numerous legal appeals from Uber drivers on the basis it did not conduct its own investigations into the incidents, and instead relied solely on evidence from Uber’s facial verification software.

The drivers have also argued that, despite other sanctions being available, TfL opted straight away for licence revocation, which can severely affect drivers’ livelihoods if they are unable to legally operate their vehicles.

7. Leading venture capital firms are failing to protect human rights

Venture capital (VC) firms and high-profile tech accelerators are not conducting human rights due diligence on their investments, which means they cannot be sure the companies they invest in are not causing, or contributing to, human rights abuses.

In its first-ever review of venture capitalists’ human rights responsibilities, Amnesty International surveyed every firm on the Venture Capital Journal’s list of the 50 largest VCs, as well as high-profile tech accelerators Y Combinator, 500 Startups and TechStars.

It found that only one VC firm had due diligence processes in place that could potentially meet the standards set out in the United Nations Guiding Principles on Business and Human Rights.

The Amnesty report noted that VCs usually devote substantial resources to conducting due diligence on other aspects of their potential investments, with the average deal taking 83 days to close and the average firm spending 118 hours scrutinising and evaluating the proposition.

8. Ban UK police use of facial recognition, House of Lords told

In evidence given to the Lords Home Affairs and Justice Committee about the use of advanced algorithmic tools by law enforcement, experts called into question the proportionality and efficacy of how facial-recognition technology has been deployed by UK police, as well as its legal basis.

Karen Yeung – an Interdisciplinary Professorial Fellow in Law, Ethics and Informatics at Birmingham Law School – noted, for example, that in 11 trial deployments carried out by the Met Police, only nine or 10 arrests were made on the basis of around 500,000 facial scans – a hit rate of roughly 0.002%.

“All of this means the real-time location tracking of many, many hundreds of thousands of British people going about their lawful business, not bothering anyone,” she said.

9. Illegal state surveillance in Africa ‘carried out with impunity’

Governments in Africa are conducting illegal digital surveillance of their citizens with impunity, despite privacy rights being well protected on paper, according to a comparative analysis of surveillance laws and practices in six African countries released in October.

The analysis, conducted by the Institute of Development Studies (IDS) and the African Digital Rights Network, pulled together six separate research reports looking at how the governments of Egypt, Kenya, Nigeria, Senegal, South Africa and Sudan are using and investing in new digital technologies to carry out illegal surveillance on citizens.

Speaking to Computer Weekly, the report’s editor and digital research fellow at IDS, Tony Roberts, said there are clear links between the surveillance carried out under formal colonialism and the surveillance being carried out now. “Under colonial rule, UK Special Branch spied on its political opponents,” he said. “When those opponents of colonial rule came to power after independence, some of them retained special branches and developed their own surveillance systems. 

“Over time, new technologies of surveillance have been incorporated. They are supplied by the UK, France and other countries in the global north. The UK continued to use signal intercept [techniques] to conduct surveillance on former colonies post-independence. The continuities are clear.”

10. Microsoft EU Data Boundary dubbed ‘smoke and mirrors’

In May, Microsoft committed to storing and processing all of its European Union (EU) customer data within the bloc by creating an “EU Data Boundary”, but data protection experts criticised the move as a tacit admission that data is being routinely processed elsewhere.

Alexander Hanff, founder of Think Privacy and a lead privacy adviser at Amari.ai, for example, described Microsoft’s move as “smoke and mirrors”, claiming there was no feasible way it would protect European citizens’ data from being transferred overseas to the US, where there is a lower standard of data protection.

Hanff added that it was public knowledge that Microsoft is subject to a “huge number of requests from government surveillance agencies” in the US – as evidenced by its biannual transparency reports – under the Foreign Intelligence Surveillance Act and Cloud Act, and that it would be naïve in this context to think they were not making requests to access Europeans’ data.

Specifically, Section 702 of FISA allows the US attorney general and the director of national intelligence to jointly authorise the targeted surveillance of non-US persons located outside the US, while the Cloud Act effectively gives the US government access to any data stored in the cloud by US corporations, wherever that data is held.

A Computer Weekly investigation revealed in December 2020 that UK police forces were unlawfully processing over a million people’s personal data on the hyperscale public cloud service Microsoft 365, after failing to comply with key contractual and processing requirements within the Data Protection Act 2018, such as the strict restrictions placed on international transfers.

