Internet companies should provide real-time data on disinformation, Lords told

Source: ComputerWeekly.com

The UK’s upcoming Online Safety Bill should place requirements on social media and internet companies to provide real-time updates about suspected disinformation, fact-checking experts have told a House of Lords committee.

To limit the spread of mis- and disinformation online, the UK government’s current approach – outlined in its December 2020 full response to the online harms whitepaper of April 2019 – is to require online companies to state explicitly, in clear and accessible terms and conditions, what content and behaviour is acceptable on their services, including how they will handle content that is legal but could still cause significant physical or psychological harm.

Companies in Category 1 – those with the largest online presence and high-risk features, which is likely to include the likes of Facebook, TikTok, Instagram and Twitter – will also be under a legal requirement to publish transparency reports on the measures they have taken to tackle online harms.

To comply with the duty of care, the companies covered by the legislation will have to abide by a statutory code of practice being drawn up by Ofcom, which the government’s full response officially confirmed will be the online harms regulator.

If they fail in this duty of care, Ofcom will have the power to fine the companies up to £18m, or 10% of annual global turnover (whichever is higher), and will also be empowered to block non-compliant services from being accessed within the UK.
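As a rough illustration of how that ceiling works, the sketch below (in Python, using an invented turnover figure) simply takes the higher of the £18m fixed cap and 10% of annual global turnover; the function name and example company are assumptions for illustration only.

```python
# A minimal sketch of the fine ceiling described above: the higher of a
# fixed £18m sum or 10% of annual global turnover. The £18m and 10% figures
# come from the article; the worked example is invented.

FIXED_CAP_GBP = 18_000_000  # £18m fixed ceiling
TURNOVER_SHARE = 0.10       # 10% of annual global turnover

def max_fine(annual_global_turnover_gbp: float) -> float:
    """Return the maximum fine Ofcom could levy, whichever figure is higher."""
    return max(FIXED_CAP_GBP, TURNOVER_SHARE * annual_global_turnover_gbp)

# Example: a company turning over £500m globally could face up to £50m,
# since 10% of turnover exceeds the £18m fixed cap.
print(f"£{max_fine(500_000_000):,.0f}")  # £50,000,000
```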

However, addressing the House of Lords Communications and Digital Committee on 23 February as part of its ongoing inquiry into freedom of expression online, Full Fact CEO Will Moy said real-time information from internet companies about suspected disinformation is needed to foster effective public debate.

“We need real-time information on suspected misinformation from the internet companies, not as the government is [currently] proposing in the Online Safety Bill,” said Moy, adding that Ofcom should be granted similar powers to the Financial Conduct Authority to demand information from businesses that fall under its remit.

“We need independent scrutiny of the use of artificial intelligence [AI] by those companies and its unintended consequences – not just what they think it’s doing, but what it’s actually doing – and we need real-time information on the content moderation actions these companies take and their effects,” he said.

“These internet companies can silently and secretly, as [the AI algorithms are considered] trade secrets, shape public debate. These transparency requirements therefore need to be set on the face of the Online Safety Bill.”

Moy added that most internet companies take action on every piece of content in their systems through their algorithms, because those algorithms decide how many people see each item, how it is displayed, and so on.

“Those choices collectively are more important than specific content moderation decisions,” he said. “Those choices are treated as commercial secrets, but they can powerfully enhance our ability to impart or receive information – that’s why we need strong information powers in the Online Safety Bill, so we can start to understand not just the content, but the effects of those decisions.”
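Moy does not specify what such real-time information should look like, but a hypothetical sketch of a single moderation-action record gives a sense of the kind of data he is asking for: what action a platform took, whether an algorithm took it, and what effect it had on reach. Every field name below is an assumption for illustration, not anything defined in the bill or by any platform.

```python
# A hypothetical transparency record combining the three things Moy asks for:
# suspected misinformation, the moderation action taken, and its effect.
# All field names and values are invented for illustration.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ModerationActionRecord:
    content_id: str           # platform-internal identifier for the item
    action: str               # e.g. "label", "downrank", "remove"
    suspected_category: str   # e.g. "health misinformation"
    automated: bool           # whether an algorithm, not a human, decided
    reach_before: int         # impressions in the window before the action
    reach_after: int          # impressions in the window after the action
    timestamp: str            # when the action was taken (ISO 8601)

record = ModerationActionRecord(
    content_id="abc123",
    action="downrank",
    suspected_category="health misinformation",
    automated=True,
    reach_before=120_000,
    reach_after=4_500,
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# A regulator or fact-checker could consume records like this as a JSON feed.
print(json.dumps(asdict(record), indent=2))
```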

Lucas Graves, a research associate at Reuters Institute for the Study of Journalism, said that while AI can be useful in helping human fact-checkers spot false or misleading claims, as well as in finding patterns in the kind of claims being made, “the holy grail of completely automated fact-checking is only even conceivable in cases where there is a narrow statistical question being checked – something that can be looked up quickly – and even then, it fails often”.

Graves added: “Ironically, one of the least transparent environments in this respect is Facebook and all our other social media platforms. So the area where fact-checkers get less feedback about what the effects of their work have been is perversely on the social media platforms where that data is abundantly available – it’s just not presented to fact-checkers.”

Moy said automated enforcement processes built around tackling disinformation do not provide an effective means of fact-checking at internet scale, largely because a lack of authoritative reference data prevents AI algorithms from determining whether something is true.

“Although internet companies have very fine-grained control over how content spreads, the technology that tries to support identifying false or misleading information is both limited and error prone, like a driver who has great speed control, but poor hazard awareness,” he said.

“And in a sense, that’s not surprising, because there is no single source of truth for them to turn to, there’s not a source of facts you can look everything up against. This kind of technology is very, very sensitive to small changes, so even when good reference data is available, which it often isn’t, the technology is limited in what it can actually do.”
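The limitation Graves and Moy describe can be sketched in a few lines: even the “narrow statistical question” case reduces to parsing a claim, looking it up against reference data and comparing within a tolerance, and it breaks as soon as the claim is phrased loosely or the reference figure is missing. The dataset, figures and parsing rule below are invented for illustration.

```python
# A minimal sketch of the "narrow statistical question" case: checking a
# claimed figure against a reference dataset. The dataset, claim parser and
# tolerance are invented; real claims rarely arrive in so convenient a form.

import re

# Hypothetical reference data an automated checker might be able to consult.
REFERENCE_FIGURES = {
    "uk unemployment rate": 4.1,  # invented value, percent
}

def check_statistical_claim(claim: str, tolerance: float = 0.2) -> str:
    """Try to verify a claim of the form '<topic> is <number>%'."""
    match = re.search(r"(.+?)\s+is\s+([\d.]+)\s*%", claim.lower())
    if not match:
        return "cannot check: claim is not a simple numeric statement"
    topic, claimed = match.group(1).strip(), float(match.group(2))
    if topic not in REFERENCE_FIGURES:
        return "cannot check: no authoritative reference data for this topic"
    actual = REFERENCE_FIGURES[topic]
    verdict = "consistent with" if abs(claimed - actual) <= tolerance else "contradicted by"
    return f"claim of {claimed}% is {verdict} the reference figure of {actual}%"

print(check_statistical_claim("UK unemployment rate is 4.2%"))  # consistent
print(check_statistical_claim("UK unemployment rate is 9%"))    # contradicted
print(check_statistical_claim("The economy is doing badly"))    # cannot check
```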

Sandra Wachter, an associate professor at the Oxford Internet Institute, said algorithms are not very good at detecting the subtleties of human behaviour and language, or changes in them – something that could lead to over-policing of content deemed to be disinformation.

“An interesting example came up a couple of days ago when content was taken down because it had the words ‘black’ and ‘white’ and ‘attack’ in it, but it was actually around a chess game,” she said.

“So the subtleties are very, very important in human language, and cultures are framed around subtleties. Algorithms might be able to do it for a very clear, very narrow range of things, but as soon as we start talking about fuzzy concepts, I’m very worried about that.”
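A crude keyword filter of the kind Wachter’s chess example exposes might look like the hypothetical sketch below; the keyword list and sample text are invented, but they show how context-free matching produces exactly that sort of false positive.

```python
# A minimal sketch of context-free keyword matching: legitimate chess
# commentary trips the same rule intended to catch abusive content.
# The keyword list and sample text are invented for illustration.

FLAGGED_KEYWORDS = {"black", "white", "attack"}

def naive_flag(text: str) -> bool:
    """Flag a post if it contains all of the watched keywords, ignoring context."""
    words = set(text.lower().split())
    return FLAGGED_KEYWORDS.issubset(words)

chess_commentary = "White castles early while Black prepares a kingside attack"
print(naive_flag(chess_commentary))  # True: a false positive on chess analysis
```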

Moy added that during the Covid-19 pandemic, the very real threats that the spread of misinformation poses to people’s health have led to an acceleration of efforts to control information online, with political pressure firmly on the side of removing too much content, and with weak, if any, democratic oversight.

“We are concerned by these precedents, and in the Online Safety Bill, parliament has an opportunity to correct them,” he said.

According to research on voting misinformation during the 2020 US presidential race conducted by the Election Integrity Partnership (EIP), the problem of online disinformation is primarily driven by political elites, negating the need for widespread content suppression.

Commenting on its research, the EIP said: “We can see that several domestic verified Twitter accounts have consistently amplified misinformation about the integrity of the election. These are often stories revolving around misleading narratives about mail-in ballots, destroyed or stolen ballots, officials interfering with election processes, misprinted or invalid ballots, and more.”

It added that although social media platforms have been taking action by removing, or at least labelling, content that can be misleading, this often occurs after the content has already been widely disseminated.

“Platforms may need to begin enacting stronger sanctions to accounts and media outlets that are repeat offenders of this type of misinformation,” it said. “This could include labelling accounts that repeatedly share misleading information about voting or even removal from the platform. Labelling or removing just the content after virality may not be enough to curb the spread of misinformation on their platforms.”
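The EIP does not spell out how such repeat-offender tracking would work, but a minimal, hypothetical strike-count sketch illustrates the escalation it describes, from labelling individual posts to labelling and ultimately removing accounts; the thresholds and actions below are invented for illustration, not taken from any platform’s actual policy.

```python
# A hypothetical strike-count escalation for accounts that repeatedly share
# misleading content. Thresholds and sanction wording are invented.

from collections import Counter

strike_counts = Counter()

def record_strike(account_id: str) -> str:
    """Record one misleading-content strike and return the resulting sanction."""
    strike_counts[account_id] += 1
    strikes = strike_counts[account_id]
    if strikes >= 5:
        return "remove account"
    if strikes >= 2:
        return "label account as repeatedly sharing misleading information"
    return "label the individual post"

for _ in range(5):
    print(record_strike("example_account"))
```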

The UK government has committed to introducing the Online Safety Bill in early 2021, although no firm date has been set and there have already been a number of delays in getting online harms legislation before parliament.
