Joint committee publishes report on improving Online Safety Bill

The joint parliamentary committee for the draft Online Safety Bill has published a 193-page report that calls for an end to the self-regulation of big tech and makes a number of recommendations on how the law can hold online service providers accountable for what happens on their platforms.

The committee, which was established in July 2021 to scrutinise the Online Safety Bill and propose improvements before it goes to Parliament for final approval in 2022, said it had reached a unanimous conclusion to “call time on the Wild West online”.

“For too long, big tech has gotten away with being the land of the lawless. A lack of regulation online has left too many people vulnerable to abuse, fraud, violence and, in some cases, even loss of life,” said committee chair Damian Collins. “The companies are clearly responsible for services they have designed and profit from, and need to be held to account for the decisions they make.”

Major changes recommended in the report include placing criminal sanctions on senior managers for “repeated and systemic failings” to protect users from harm online and expanding the powers of Ofcom to investigate, audit and fine non-compliant tech companies.

“The committee has set out recommendations to bring more offences clearly within the scope of the Online Safety Bill, give Ofcom the power in law to set minimum safety standards for the services it will regulate, and to take enforcement action against companies if they don’t comply,” said Collins.

As former chair of the House of Commons DCMS Select Committee, Collins previously led an inquiry into disinformation and “fake news”, which similarly concluded by calling for an end to the self-regulation of social media firms.

Ending the online Wild West

As it currently stands, the draft Online Safety Bill would impose a statutory “duty of care” on technology companies that host user-generated content or allow people to communicate, meaning they would be legally obliged to proactively identify, remove and limit the spread of both illegal content and legal but harmful content, such as child sexual abuse, terrorism and suicide material. Failure to do so could result in fines of up to 10% of turnover, imposed by the online harms regulator, which was confirmed in December 2020 to be Ofcom.

Following its inquiry, the committee said in its report that “in seeking to regulate large multinational companies with the resources to undertake legal challenges”, Parliament should provide more clarity around the overarching duty of care through specific duties that explicitly set out what service providers should be doing to prevent harm online.

“We recommend the bill is restructured. It should set out its core objectives clearly at the beginning. This will ensure clarity to users and regulators about what the bill is trying to achieve and inform the detailed duties set out later in the legislation,” it said.

“Everything flows from these: the requirement for Ofcom to meet those objectives, its power to produce mandatory codes of practice and minimum quality standards for risk assessments in order to do so, and the requirements on service providers to address and mitigate reasonably foreseeable risks, follow those codes of practice and meet those minimum standards.”

The report added that the Online Safety Bill should include a specific responsibility on service providers to create systems and processes to identify any “reasonably foreseeable risks of harm arising from the design of their platforms”, which they must take proportionate steps to mitigate.

In addition to drawing up mandatory codes of practice for tech companies to follow, the committee said Ofcom should be required to produce a mandatory “safety by design” code of practice.

To further encourage compliance with the bill, the committee also recommended that Parliament require service providers to conduct internal risk assessments to record reasonably foreseeable threats to user safety, which would go beyond content to also include the potentially harmful impact of algorithms.

“It should not be possible for a service provider to underestimate the level of risk on their service without fear of sanction. If Ofcom suspects such a breach, it should have the power to investigate, and, if necessary, to take swift action. We are not convinced that the draft bill as it currently stands achieves this,” said the report, adding that Ofcom should be required to set binding minimum standards to ensure the accuracy and completeness of risk assessments.

“Ofcom must be able to require a provider which returns a poor or incomplete risk assessment to redo that risk assessment. Risk assessments should be carried out by service providers as a response to the Online Safety Act before new products and services are rolled out, during the design process of new features, and kept up to date as they are implemented.”

It also recommended that the largest and highest-risk service providers should be placed under a statutory responsibility to commission annual, independent third-party audits of the effects of their algorithms, as well as of their risk assessments and transparency reports.

“Ofcom should be given the explicit power to review these and undertake its own audit of these or any other regulated service when it feels it is required. Ofcom should develop a framework for the effective regulation of algorithms based on the requirement for, and auditing of, risk assessments,” it said, adding that the various regulators of the Digital Regulation Cooperation Forum (DRCF) – which includes the Information Commissioner’s Office and the Competition and Markets Authority – should be placed under their own statutory requirement to cooperate and consult with one another.

Commenting on the report, Frankie Everitt, a senior associate at law firm Fieldfisher, said: “The joint committee has used the opportunity to explore the wider and complex arena of online safety, far beyond the initial draft bill. As well as highlighting additional issues, such as those around online scams and specific offences, it has shifted generally from a focus on harmful content itself to a much broader scope, for example the algorithms that deliver content.

“This wide onus, however, shifts to the regulator Ofcom in practice, and the immediate question is whether it has the resources and capabilities to effectively police this more exhaustive list of issues in the online space.”

Bill Mitchell, director of policy at BCS, The Chartered Institute for IT, added that it was not clear that the bill does enough to keep teenagers safe online. “What would bother me is it seems to rely entirely on the platforms’ own risk assessments and their own reporting on how well their systems work at reducing harm,” he said. “I’d want to have more reassurance around how the bill will guarantee auditing and that accountability of social media platforms is open, transparent and rigorous.”

Criminal sanctions for tech company executives

A key point of contention throughout the committee’s inquiry was whether and when criminal liability for individuals within technology companies should come into force. The current draft Online Safety Bill provides a grace period of two years to allow companies to prepare for the changes.

However, addressing the committee in early November, digital secretary Nadine Dorries said the government was looking to shorten this window and introduce criminal liability within “three to six months” of the bill receiving royal assent.

The committee supported this position in its final report, and added that criminal sanctions for failure to comply with information notices from Ofcom should also be introduced within three months.

On the basis that major platform providers do not have adequate governance structures in place – as evidenced by the testimony of Facebook’s global head of safety, Antigone Davis, and others in November – the report also recommended extending criminal liability in the draft bill, which currently limits Ofcom to taking action against senior managers only for failure to supply information.

“The bill should require that companies’ risk assessments be reported at board level, to ensure that senior management know and can be held accountable for the risks present on the service and the actions being taken to mitigate those risks,” it said.

“We recommend that a senior manager at board level or reporting to the board should be designated the ‘safety controller’ and made liable for a new offence: the failure to comply with their obligations as regulated service providers when there is clear evidence of repeated and systemic failings that result in a significant risk of serious harm to users.

“We believe that this would be a proportionate last resort for the regulator – like any offence, it should only be initiated and provable at the end of an exhaustive legal process.”

According to Everitt, while the UK is “no doubt a huge market for big tech and digital services, there may be instances of services weighing up their UK revenues with the cost of compliance”.

She added: “You can foresee some platforms considering decisions to limit, at least partially, some services and functions to avoid costly compliance and mitigate the risk of enforcement. This, along with the recommendation of a personally liable safety controller, and the practical reality of finding someone to take on such a role and responsibility, may lead to some interesting boardroom conversations.”

Other recommendations

On end-to-end encryption (E2EE), which prevents service providers – and therefore the online harms regulator – from seeing what content is being posted or shared, the committee said the government needed to provide more clarity on how providers of encrypted services should comply with the safety duties ahead of the bill’s introduction.

“Balancing the benefits of E2EE in terms of protecting individuals’ freedom and privacy with online safety requirements is essential for the bill to be effective,” said the report, adding in its recommendations that E2EE should be identified as a specific risk factor in companies’ risk assessments. “Providers should be required to identify and address risks arising from the encrypted nature of their services under the safety by design requirements.”

However, the report added: “It is unclear how this will be achieved, and mature technical solutions which enable content on E2EE services to be moderated are not widely available.”

Although the government has launched a Safety Tech Challenge Fund, designed to boost innovation in artificial intelligence and other technologies that can scan, detect and flag illegal child abuse imagery without breaking E2EE, there are concerns that this would unduly risk the privacy and security of law-abiding citizens, and could open the way to surveillance.

On linking social media accounts to real people for the purposes of dealing with abuse online, the report similarly said platforms that allow anonymous or pseudonymous accounts should be required to include the associated risks as a specific category in their safety by design risk assessments.

“Anonymity and pseudonymity themselves are not the problem, and ending them would not be a proportionate response. The problems are a lack of traceability by law enforcement, the frictionless creation and disposal of accounts at scale, a lack of user control over the types of accounts they engage with and a failure of online platforms to deal comprehensively with abuse on their platforms,” it said.

In its concluding paragraph, the committee added that the report must be understood as a whole: a cohesive set of recommendations that work collectively to produce a new vision of the Online Safety Bill.

“The government should not seek to isolate single recommendations without understanding how they fit into the wider manifesto laid out by the committee. Taken as a whole, our recommendations will ensure that the bill holds platforms to account for the risks of harm which arise on them and will achieve the government’s ultimate aim of making the UK the safest place in the world to be online,” it said.

The government will now have up to two months to respond to the committee’s report.
