The Enduring Afterlife of a Mass Shooting’s Livestream Online


Source: The New York Times

Many of the sites tried taking down the videos as they were uploaded but were overwhelmed. Facebook said it removed 1.5 million videos in the 24 hours after the incident, though many evaded detection. On Reddit, a post featuring the video was viewed more than one million times before it was removed. Google said the video spread faster than footage of any tragedy it had previously seen, according to the New Zealand government report.

Over the next few days, some people began discussing ways to evade the platforms’ automated systems to keep the Christchurch video online. On Telegram on March 16, 2019, members of a white supremacist group batted around ways to manipulate the video so it would not be removed, according to discussions viewed by The Times.

“Just change the opening,” one user wrote. “Speed it up by 2x and the [expletive] can’t find it.”

Within days, some clips of the shooting were posted to 4chan, a fringe online message board. In July 2019, a 24-second clip of the killings also appeared on Rumble, according to The Times’s review.

In the ensuing months, New Zealand’s government identified more than 800 variations of the original video. Officials asked Facebook, Twitter, Reddit and other sites to dedicate more resources to removing them, according to the government report.

New copies or links to the video were uploaded online whenever the Christchurch shooting came up in the news, or on anniversaries of the event. In March 2020, about a year after the shooting, nearly a dozen tweets linking to variations of the video appeared on Twitter. More videos appeared when Mr. Tarrant was sentenced to life in prison in August 2020.

Other groups jumped in to pressure the tech companies to erase the video. Tech Against Terrorism, a United Nations-supported initiative that develops tech to detect extremist content, sent 59 alerts about Christchurch content to tech companies and file hosting services from December 2020 to November 2021, said Adam Hadley, the founder and director of the group. That represented about 51 percent of the right-wing terrorist content the group was trying to remove online, he said.

