UK government issues “safety by design” guidance for tech firms

Source is ComputerWeekly.com

The UK government has issued “safety by design” guidance to help tech companies better protect users online ahead of its forthcoming Online Safety Bill.

Published by the Department for Digital, Culture, Media and Sport (DCMS), the guidance is designed to help tech companies of various sizes find the information needed to build safe digital products from the development stages right through to the user experience.

The Online Safety Bill, an official draft of which was published in May 2021, seeks to promote safety online by making internet companies and service providers more accountable for the content shared by users on their platforms.

Under the Bill’s duty of care, technology platforms that host user-generated content or allow people to communicate will be legally obliged to proactively identify, remove and limit the spread of illegal or harmful content – such as child sexual abuse, terrorism and suicide material – or face fines of up to 10% of turnover from the online harms regulator, confirmed as Ofcom.

The legislation will apply to any company in the world that serves UK-based users. The rules are tiered so that the most popular sites and services (those with large audiences) must go further, setting and enforcing clear terms and conditions that explicitly state how content that is legal but could still cause significant physical or psychological harm will be handled. This includes misinformation and disinformation about a range of topics, such as coronavirus vaccines, marking the first time online misinformation has come under the remit of a government regulator.

The guidance advocates putting safety at the heart of platform design to minimise the risk of online harm occurring, and further advises companies on providing an age-appropriate experience for children through tools such as age assurance and verification.
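The guidance does not prescribe an implementation, but the age-assurance point above implies a simple design rule: treat an account as a child's until its age has been positively assured. A minimal sketch of that default, with hypothetical names (`User`, `experience_level` are not from the guidance):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    """Minimal user record; age_verified stays None until assurance completes."""
    user_id: str
    age_verified: Optional[int] = None  # verified age in years, if known

def experience_level(user: User, adult_age: int = 18) -> str:
    """Default to the most protective experience until age is assured."""
    if user.age_verified is None:
        return "child-safe"        # unknown age -> safest defaults
    if user.age_verified < adult_age:
        return "age-appropriate"   # verified minor -> tailored protections
    return "standard"              # verified adult

print(experience_level(User("u1")))                   # child-safe
print(experience_level(User("u2", age_verified=21)))  # standard
```

The design choice here is failing safe: an unverified account never receives the adult experience, which matches the guidance's emphasis on protecting children by default rather than on request.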

“We’re helping businesses get their safety standards up to scratch before our new online harms laws are introduced and also making sure they are protecting children and users right now,” said digital minister Caroline Dinenage. “We want businesses of all sizes to step up to a gold standard of safety online and this advice will help them to do so.”

Four principles

To help businesses, the guidance outlines four safety by design principles, alongside a seven-point checklist on how to implement them in practice.

The principles are that users should not be left to manage their own safety; that the platform must consider all types of user; that users should be empowered to make safer choices; and that platforms should be designed to protect children.

Each principle is accompanied by an outline of why it is necessary, as well as a concrete example of it in practice.

For instance, on the third point of empowering users to make safer choices, the guidance said: “You should be careful that platform design does not limit a user’s ability to make informed choices. For example, using algorithms to recommend content that is harmful to a user, which they have no or limited control over changing.”

It added: “Good platform design helps users understand: the reliability and accuracy of the content they are interacting with; how their online activity is seen by others, and how to manage that – such as by changing privacy settings or blocking a user; the potential legal impact of their actions; and their rights and responsibilities online.”
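One way to read the principle above is that recommendation systems should honour controls the user can actually change. A minimal sketch, assuming a hypothetical topic-tagged content model and user preference dictionary (none of these names come from the guidance):

```python
def recommend(candidates, user_prefs, default_block=("self-harm",)):
    """Filter candidate items by the user's own content controls.

    candidates: list of dicts, each with a 'topics' set.
    user_prefs: dict the user edits at any time; topics blocked by
    default stay blocked unless the user explicitly opts in.
    """
    blocked = set(default_block) | set(user_prefs.get("blocked_topics", []))
    blocked -= set(user_prefs.get("opted_in_topics", []))  # informed opt-in
    return [c for c in candidates if not (c["topics"] & blocked)]

items = [
    {"id": 1, "topics": {"news"}},
    {"id": 2, "topics": {"self-harm"}},
]
# With no preferences set, harmful defaults are filtered out:
print([c["id"] for c in recommend(items, {})])  # [1]
```

The point of the sketch is the direction of control: the filter reads the user's settings on every call, so the platform never recommends content the user has no ability to turn off.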

Fact-checking experts previously told a House of Lords committee in February 2021 that the Online Safety Bill should force internet companies to provide real-time information and updates about suspected disinformation, and further warned against an over-reliance on artificial intelligence (AI) algorithms to moderate content.

Full Fact CEO Will Moy said at the time: “We need independent scrutiny of the use of AI by those companies and its unintended consequences – not just what they think it’s doing, but what it’s actually doing – and we need real-time information on the content moderation actions these companies take and their effects.

“These internet companies can silently and secretly, as the AI algorithms are considered trade secrets, shape public debate. These transparency requirements therefore need to be set on the face of the Online Safety Bill.”

In terms of the checklist – which the webpage says “is not mandatory, but may help you to improve the safety of your website, app or software” – the seven recommended actions include reviewing the platform design for risks and harms, identifying and protecting users that may be vulnerable, and assessing how users can make reports or complaints.

“You should create clear terms of service explaining what is acceptable on your platform. These should be prominent and accessible to users of all ages and abilities. You should make it easy for anyone to report content or behaviour that breaks those rules,” it said.

“This means your users and employees (if you run a business) should know: where and how to make a report or complaint; what will happen afterwards; how long it will take before someone responds; [and] how a user can appeal a decision if they disagree with the outcome.”
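The reporting expectations quoted above amount to a small state machine: a report is received, reviewed, resolved, and can then be appealed. A sketch of that lifecycle, with hypothetical class and status names (the guidance specifies the stages, not this design):

```python
from enum import Enum, auto

class ReportStatus(Enum):
    RECEIVED = auto()
    UNDER_REVIEW = auto()
    RESOLVED = auto()
    APPEALED = auto()

class Report:
    """Tracks a user report through the stages the guidance lists."""
    def __init__(self, content_id: str, reason: str):
        self.content_id = content_id
        self.reason = reason
        self.status = ReportStatus.RECEIVED
        self.history = [ReportStatus.RECEIVED]  # lets the user see what happened

    def _move(self, status: ReportStatus) -> None:
        self.status = status
        self.history.append(status)

    def review(self) -> None:
        self._move(ReportStatus.UNDER_REVIEW)

    def resolve(self) -> None:
        self._move(ReportStatus.RESOLVED)

    def appeal(self) -> None:
        # A user can only appeal once a decision has actually been made.
        if self.status is not ReportStatus.RESOLVED:
            raise ValueError("only a resolved report can be appealed")
        self._move(ReportStatus.APPEALED)

r = Report("post-123", "breaks community rules")
r.review(); r.resolve(); r.appeal()
print([s.name for s in r.history])
# ['RECEIVED', 'UNDER_REVIEW', 'RESOLVED', 'APPEALED']
```

Keeping the full history, rather than only the current status, is what would let a platform answer the guidance's questions of "what will happen afterwards" and "how a user can appeal".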

Other actions

Other actions organisations should take include reviewing and testing safety measures, keeping up to date with information about designing safer online platforms, appointing a responsible person who understands the risks to manage user safety, and making sure employees know what to do to keep users safe.

The guidance also includes best practice design guides for a range of different types of platform features, including private or public channels, live streaming, anonymous or multiple accounts, search functionality, and the visibility of account details or activity.

In June 2021, a new campaign group was established to oppose the government’s Bill. Members of Legal to Say. Legal to Type claim the Bill’s duty of care is too simplistic, that it cedes too much power to US corporations and will, in practice, privilege the speech of journalists or politicians.

Group members include Conservative MP David Davis, Index on Censorship CEO Ruth Smeeth, Open Rights Group executive director Jim Killock and Gavin Millar of Matrix Chambers.
