Twitter tests auto-block feature for accounts at risk of abuse

Twitter has begun pilot tests of new features and settings designed to protect users from online abuse, with the intention of automatically screening out abusive accounts and reducing the burden on victims of having to deal with unwelcome or triggering interactions.

The social network’s new Safety Mode is being trialled with selected beta testers, with an emphasis on people of colour, LGBTQ+ people, and women journalists, ahead of an anticipated wider roll-out in the future.

“We want you to enjoy healthy conversations, so this test is one way we’re limiting overwhelming and unwelcome interactions that can interrupt those conversations. Our goal is to better protect the individual on the receiving end of tweets by reducing the prevalence and visibility of harmful remarks,” said Twitter’s senior product manager Jarrod Doherty.

Safety Mode is an automated feature that temporarily blocks abusive accounts for seven days based on behaviours such as using insults or hateful remarks, or sending repetitive or uninvited replies or mentions. During that time, blocked accounts will not be able to follow you, see your tweets or send you direct messages.

When switched on, Twitter’s new algorithms will assess the likelihood that a tweet addressed to you is harmful by considering both the tweet’s content and the relationship between you and the replier, taking into account factors such as whether the two accounts frequently interact or follow one another.
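Twitter has not published how Safety Mode actually scores tweets, but as a rough illustration of the approach described above, a decision might combine a content signal with relationship signals along these lines. Every name, weight and threshold in this sketch is hypothetical.

```python
# Illustrative sketch only: Twitter has not disclosed Safety Mode's scoring logic.
# This hypothetical example combines a content-toxicity signal with relationship
# signals, mirroring the factors the article says the feature takes into account.

from dataclasses import dataclass


@dataclass
class Interaction:
    toxicity_score: float            # 0.0-1.0, from some content classifier (assumed)
    accounts_follow_each_other: bool  # do the two accounts follow one another?
    recent_interactions: int          # how often the two accounts have interacted


def should_auto_block(interaction: Interaction, threshold: float = 0.8) -> bool:
    """Return True if a reply looks abusive enough to trigger a temporary block.

    Replies from accounts the user knows (mutual follows, frequent interaction)
    are treated more leniently than the same wording from a stranger.
    """
    score = interaction.toxicity_score
    if interaction.accounts_follow_each_other:
        score -= 0.3                                   # existing relationship lowers the risk
    score -= min(interaction.recent_interactions, 10) * 0.02
    return score >= threshold


# Example: a hostile reply from a stranger is auto-blocked; the same wording
# from a mutual follower who interacts regularly is not.
stranger = Interaction(toxicity_score=0.9, accounts_follow_each_other=False, recent_interactions=0)
friend = Interaction(toxicity_score=0.9, accounts_follow_each_other=True, recent_interactions=5)
print(should_auto_block(stranger))  # True
print(should_auto_block(friend))    # False
```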

If you are in the trial group, you can already activate Safety Mode in the Privacy and Safety section of Settings. You will also be able to see information about tweets flagged by the feature and details of the accounts that have been temporarily blocked. Before the seven-day block period ends, you will also receive a notification recapping this information.

Twitter said the system would inevitably make mistakes, so auto-blocks can be viewed and retracted at any time via the site’s settings. The firm has also put in place measures to monitor the system’s accuracy and improve it over time.

The new feature was developed alongside a series of listening and feedback sessions with expert partners specialising in online safety, mental health and human rights, many of them members of Twitter’s existing Trust and Safety Council. These partners provided feedback to make Safety Mode easier to use, and challenged Twitter’s developers to think through ways in which the feature might be manipulated and to close off potential loopholes.

“As members of the Trust and Safety Council, we provided feedback on Safety Mode to ensure it entails mitigations that protect counter-speech while also addressing online harassment towards women and journalists,” said a spokesperson for Article 19, a London-based human rights organisation that specialises in digital rights and online equality.

“Safety Mode is another step in the right direction towards making Twitter a safe place to participate in the public conversation without fear of abuse.”

Twitter said it had also committed to the World Wide Web Foundation’s framework to end online gender-based violence, and was participating in discussions to explore new technical means of giving women and other groups more control over their online safety experience.

Source is ComputerWeekly.com
