UK government announces safety tech challenge fund winners

Source: ComputerWeekly.com

The UK government has announced the five winners of its Safety Tech Challenge Fund, who will each receive £85,000 to help them advance their technical proposals for new digital tools and applications to stop the spread of child sexual abuse material (CSAM) in encrypted communications.

Launched in September 2021, the challenge fund is designed to boost innovation in artificial intelligence (AI) and other technologies that can scan, detect and flag illegal child abuse imagery without breaking end-to-end encryption (E2EE).

The fund is being administered by the Department for Digital, Culture, Media and Sport (DCMS) and the Home Office, which will make additional funding of £130,000 available to the strongest projects after five months.

Digital minister Chris Philp told Computer Weekly that CSAM scanning was the only capability inherent in the technologies, and that the government would not mandate their use beyond this purpose.

The challenge fund is part of the government’s wider effort to combat harmful behaviours online and promote internet safety through the draft Online Safety Bill, which aims to establish a statutory “duty of care” on technology companies that host user-generated content or allow people to communicate.

Under the duty, tech companies will be legally obliged to proactively identify, remove and limit the spread of both illegal and legal but harmful content – such as child sexual abuse, terrorism and suicide material – or they could be fined up to 10% of their turnover by online harms regulator Ofcom.

The government has confirmed that the duty of care will still apply to companies that choose to use E2EE. It has also claimed the Bill will safeguard freedom of expression, increase the accountability of tech giants and protect users from harm online.

Proposed technologies

Over the coming months, the five projects will continue developing their proposals with the aim of bringing them to market sometime in 2022.

The projects include an AI-powered plug-in that can be integrated with encrypted social platforms to detect CSAM by matching content against known illegal material; the use of age-estimation algorithms and facial-recognition technology to scan for and detect CSAM before it is uploaded; and a suite of live video-moderation AI technologies that can run on any smart device to prevent the filming of nudity, violence, pornography and CSAM in real time, as it is being produced.
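To illustrate the first of these approaches, the sketch below shows how client-side matching against a list of known-material hashes might be wired up. It is a minimal illustration only: the hash set is a hypothetical placeholder, and SHA-256 stands in for the perceptual hashes (such as Microsoft's PhotoDNA or Meta's PDQ) that deployed systems use so that resized or re-encoded copies of an image still match.

```python
import hashlib

# Hypothetical set of digests of previously identified illegal images.
# In practice such lists are compiled and distributed by child-safety
# bodies such as the Internet Watch Foundation, not hard-coded.
KNOWN_MATERIAL_HASHES: set = set()


def matches_known_material(image_bytes: bytes) -> bool:
    """Return True if the image's digest appears on the known-material list.

    SHA-256 keeps this sketch self-contained; a deployed system would use
    a perceptual hash so that near-duplicate copies also match.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_MATERIAL_HASHES


def scan_before_encrypting(image_bytes: bytes) -> bool:
    """Client-side check run before a message is end-to-end encrypted.

    Returns True if the image may be sent; a match would instead be
    blocked and flagged for reporting.
    """
    return not matches_known_material(image_bytes)
```

The relevant design point is that the check runs on the device before encryption takes place, so no party has to inspect message plaintext in transit.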

The organisations involved include Edinburgh-based police technology startup Cyan Forensics and AI firm Crisp Thinking, which will work in partnership with the University of Edinburgh and the Internet Watch Foundation to develop the plug-in; cyber-safety firm SafeToNet, which is looking at how to use AI in video moderation; and T3K-Forensics, an Austrian mobile data extraction firm working to implement its AI-based child sexual abuse detection technology on smartphones in an E2EE-friendly way.

Other companies include end-to-end email encryption platform GalaxKey and video-moderation firm DragonflAI, which will both be working with biometrics firm Yoti on separate projects that involve deploying its age estimation technologies.

GalaxKey, for example, will work with Yoti to implement age verification algorithms to detect CSAM prior to it being uploaded and shared into an E2EE environment.

DragonflAI will also work with Yoti to combine its on-device AI nudity-detection technology with Yoti's age-assurance technologies to spot new indecent images within E2EE environments themselves.
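Neither partnership has published implementation details, but a minimal sketch of such an on-device pre-upload gate, combining a nudity-detection signal with an age estimate, might look like the following. The detect_nudity() and estimate_age() helpers and the age threshold are hypothetical stand-ins for illustration, not Yoti's or DragonflAI's actual APIs.

```python
from dataclasses import dataclass

AGE_THRESHOLD = 18  # assumption for illustration; the real cut-off is a policy choice


@dataclass
class GateResult:
    allow_upload: bool
    reason: str


def detect_nudity(image_bytes: bytes) -> bool:
    """Hypothetical stand-in for DragonflAI-style on-device nudity detection."""
    return False  # stub: a real implementation runs a trained vision model


def estimate_age(image_bytes: bytes) -> float:
    """Hypothetical stand-in for a Yoti-style facial age-estimation model."""
    return 30.0  # stub: a real model returns an estimated age in years


def gate_upload(image_bytes: bytes) -> GateResult:
    """Combine the two signals before the image enters the E2EE environment."""
    if not detect_nudity(image_bytes):
        return GateResult(True, "no indecent content detected")
    if estimate_age(image_bytes) < AGE_THRESHOLD:
        return GateResult(False, "subject appears underage; blocked and flagged")
    return GateResult(True, "adult content only")
```

Because both models run on the device, the gate can refuse an upload before the content is ever encrypted or transmitted.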

“We are proud to be putting our solutions forward to encourage innovation, helping to change the digital space to better protect children online,” said Yoti CEO Robin Tombs. “We thank the Safety Tech Challenge Fund for welcoming the use of tech to tackle the rise in online-linked sexual crimes, and look forward to working with our partners to create tools that make the internet a safer place for children.”

According to Philp, “It’s entirely possible for social media platforms to use end-to-end encryption without hampering efforts to stamp out child abuse. But they’ve failed to take action to address this problem, so we are stepping in to help develop the solutions needed. It is not acceptable to deploy E2EE without ensuring that enforcement and child protection measures are still in place.

“We’re pro-tech and pro-privacy but we won’t compromise on children’s safety,” he said. “Through our pioneering Online Safety Bill, and in partnership with cutting-edge safety tech firms, we will make the online world a safer place for children.”

Speaking with Computer Weekly, Philp said the new technologies being developed will allow message content to be scanned for CSAM even when E2EE is being used, as “we’re not prepared to accept or tolerate a situation where end-to-end encryption means that we can’t scan for child sexual exploitation images.”

In August 2021, Apple announced its plan to introduce CSAM scanning on its US customers' devices, which would work by performing on-device matching against a database of known CSAM image hashes provided by the National Center for Missing and Exploited Children (NCMEC) and other child safety organisations.

However, cryptographic experts have warned that Apple's plan to automatically scan photos for CSAM would unduly risk the privacy and security of law-abiding citizens, and could open the way to surveillance.

Safeguards

Asked what safeguards are being considered to protect privacy, Philp said that any platform or social media company that allows one of the technologies being developed into its E2EE environment will want to be satisfied that users’ privacy is not being unreasonably compromised: “the platforms themselves introducing end-to-end encryption will obviously police that.”

He added that the government will not mandate any scanning that goes beyond the scope of uncovering child abuse material: “These technologies are CSAM-specific… I met with the companies two days ago and [with] all of these technologies it’s about scanning images and identifying them as either being previously identified CSAM images or first-generation created new ones – that is the only capability inherent in these technologies.”

Asked if there is any capability to scan for other types of images or content in messages, Philp said: “They’re not designed to do that, they’d need to be repurposed for that, that’s not how they’ve been designed or set up – they’re specific CSAM scanning technologies.”

Philp further confirmed that the National Cyber Security Centre (NCSC), the information assurance arm of UK signals intelligence agency GCHQ, had been involved in appraising the challenge fund applications: “They’re going to be looking very closely at these technologies as they get developed because we need to make sure that we draw on the expertise that GCHQ has technically, as well as working in very close partnership with the Home Office. This is a real partnership between NCSC, part of GCHQ, the Home Office, and DCMS.”

In the Autumn Budget and Spending Review 2021, announced in late October, more than £110m was allocated to the government’s new online safety regime, which Philp confirmed was separate from the challenge fund. While some of the money will be used to build DCMS’s own capability, he added, “a large chunk of it will go to Ofcom” to finance its regulatory oversight under the Online Safety Bill.

According to a May 2020 research report conducted by economic advisory firm Perspective Economics on behalf of DCMS, UK safety-tech providers hold an estimated 25% of the global market share, with the number of safety-tech firms doubling from 35 in 2015 to 70, and investment increasing eightfold.
