Content removal will not stop misinformation, says Royal Society

Governments and social media platforms should not rely on content removal as a solution to online scientific misinformation, as it could backfire by exacerbating pre-existing feelings of distrust in these institutions, according to a Royal Society report.

The Online Information Environment report, published on 19 January 2022, looks specifically at the spread of scientific misinformation online, which it defines as content that, despite being presented as fact, runs counter to, or is outright refuted by, the scientific consensus.

It found that although misinformation content is prevalent online, the extent of its impact is questionable. For example, according to a Royal Society survey, the majority of respondents believed that Covid-19 vaccines are safe, that human activity is responsible for climate change, and that 5G technology is not harmful.

The majority of respondents also felt that the internet had generally improved public understanding of science, and most said they were likely to check suspicious claims for themselves.

However, it found that roughly one in 20 respondents disputed the scientific consensus, and the broad range of motivations for sharing misinformation – from genuinely trying to help people, to political reasons, to simple profiteering – means the issue is unlikely to be addressed by any single intervention.

The report further claimed that there is little evidence to suggest that the censorship of scientific misinformation will limit the harms caused, and that such measures could instead drive the misinformation into “harder-to-address corners of the internet”, although it does not explicitly state which areas of the internet these are.

It also warned that the UK government’s forthcoming Online Safety Bill focuses almost exclusively on harms to individuals, and therefore fails to recognise the wider societal harms that online misinformation may cause.

“Society benefits from honest and open discussion on the veracity of scientific claims. These discussions are an important part of the scientific process and should be protected. When these discussions risk causing harm to individuals or wider society, it is right to seek measures which can mitigate against this. This has often led to calls for online platforms to remove content and ban accounts,” it said.

“However, while this approach may be effective and essential for illegal content (e.g. hate speech, terrorist content, child sexual abuse material), there is little evidence to support the effectiveness of this approach for scientific misinformation, and approaches to addressing the amplification of misinformation may be more effective.”

Frank Kelly, chair of the report and professor of the mathematics of systems at the Statistical Laboratory in Cambridge, added that while clamping down on claims made outside the scientific consensus may seem desirable, it could “hamper the scientific process” and force genuinely malicious content, or disinformation, underground.

“Science stands on the edge of error and the nature of the scientific endeavour at the frontiers means there is always uncertainty,” he said. “In the early days of the pandemic, science was too often painted as absolute and somehow not to be trusted when it corrects itself, but that prodding and testing of received wisdom is integral to the advancement of science and society.”

As such, the report recommended a range of policy actions that can be taken to further understand and limit the spread of misinformation. This includes supporting media plurality and independent fact-checking, monitoring and mitigating evolving sources of scientific misinformation online, and investing in life-long information literacy.

“Our polling showed that people have complex reasons for sharing misinformation, and we won’t change this by giving them more facts,” said Gina Neff, a professor of technology and society at the Oxford Internet Institute, and a member of the report’s working group.

“We need new strategies to ensure high quality information can compete in the online attention economy. This means investing in lifelong information literacy programmes, provenance enhancing technologies, and mechanisms for data sharing between platforms and researchers.”

In terms of the role technology companies can play, Open Data Institute executive chair Nigel Shadbolt noted: “There’s an absolute requirement on these platforms to make serious amounts of investments to keep the quality of the information space as high as we believe it should be.”

On the efficacy of de-platforming people or organisations involved in spreading misinformation online, Shadbolt added that while there is not yet enough evidence to comment, “tracing dissemination routes” to understand how certain misinformation is amplified is an important area for future investigation.

The conclusions against censorship are also supported by research from the Election Integrity Partnership (EIP), which looked at misinformation during the 2020 US presidential election and found that reducing the spread of disinformation “doesn’t require widespread suppression”.

However, it did argue for targeted action against specific actors that are found to be consistently spreading misinformation at scale: “We can see that several domestic verified Twitter accounts have consistently amplified misinformation about the integrity of the election. These are often stories revolving around misleading narratives about mail-in ballots, destroyed or stolen ballots, officials interfering with election processes, misprinted or invalid ballots, and more.

“Platforms may need to begin enacting stronger sanctions to accounts and media outlets who are repeat offenders of this type of misinformation. This could include labelling accounts which repeatedly share misleading information about voting or even removal from the platform. Labelling or removing just the content after virality may not be enough to curb the spread of misinformation on their platforms.”

Others in the Royal Society report’s working group include Vint Cerf, vice-president at Google; Derek McAuley, a professor of digital economy at the University of Nottingham; and Rasmus Kleis Nielsen, a professor of political communication at the University of Oxford.
