Twitter is rolling out a new feature aimed at reducing harassment and trolling, which have become major challenges on the network.

Twitter's new Safety Mode will automatically block accounts that send insulting or harassing replies for seven days. Once enabled, the feature works without user intervention, sparing people the burden of dealing with unwanted tweets themselves. It will be trialed with a small group of users at first.
The feature can be enabled in settings, and the system will evaluate both the content of a reply and the relationship between its author and the recipient. Accounts that the user follows or interacts with frequently will not be automatically blocked.
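Twitter has not published the exact logic, but conceptually the filter combines a judgment about a reply's content with the existing relationship between the two accounts. A minimal Python sketch of such a heuristic (all names, the threshold, and the keyword-based scorer are hypothetical stand-ins, not Twitter's implementation):

    from dataclasses import dataclass, field
    from datetime import timedelta

    BLOCK_DURATION = timedelta(days=7)  # Safety Mode blocks are temporary: seven days

    @dataclass
    class Reply:
        author_id: str
        text: str

    @dataclass
    class UserContext:
        following: set = field(default_factory=set)          # accounts the user follows
        frequent_contacts: set = field(default_factory=set)  # accounts they interact with often

    def toxicity_score(text: str) -> float:
        """Crude stand-in for a trained model: fraction of words on an insult list."""
        insults = {"idiot", "loser", "trash"}  # hypothetical word list
        words = set(text.lower().split())
        return len(words & insults) / max(len(words), 1)

    def should_autoblock(reply: Reply, ctx: UserContext, threshold: float = 0.2) -> bool:
        # Relationship exemption comes first: followed accounts and frequent
        # contacts are never auto-blocked, regardless of content.
        if reply.author_id in ctx.following or reply.author_id in ctx.frequent_contacts:
            return False
        # Otherwise, block (for BLOCK_DURATION) when the content looks harmful enough.
        return toxicity_score(reply.text) >= threshold

In a real system the content scorer would be a trained model rather than a keyword check; the structure simply shows how the relationship exemption takes priority over the content signal.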
“While we have made gains in providing users greater control over their safety experience on Twitter, there is always more to be done,” Katy Minshall, head of Twitter UK Public Policy, stated.

“Today, we’re launching Safety Mode, a feature that allows you to limit disruptive interactions on Twitter automatically, improving the health of the public conversation.”
Twitter, like other social media networks, uses a mix of automated and human moderation. While it has never disclosed how many human moderators it employs, a 2020 study from NYU Stern, a business school in New York, estimated the figure at around 1,500 for the platform's 199 million daily users worldwide. According to a recent hate speech survey conducted on behalf of the Finnish government by Facts Against Hate, Twitter is “the worst of the tech giants” when it comes to hate speech.
Twitter has begun testing their new ‘Safety Mode’ feature, which aims to combat disruptive interactions. The feature allows users to automatically block accounts for seven days for using “potentially harmful language” such as insults, hateful remarks, and spam. pic.twitter.com/Km4yKJ0Cnh
— Pop Crave (@PopCrave) September 2, 2021
Dr. Mari-Sanna Paukkeri, the study’s lead author, believes the solution is artificial intelligence systems trained by humans. “There are so many different ways to say horrible things, and building technologies that can recognize these, is rocket science,” she said. Simply flagging certain words or phrases, as many social media sites do, is not enough, she added.
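To illustrate the limitation she describes, a filter that only matches listed words is trivially evaded by misspellings and misses abuse phrased without any flagged term. A toy Python example (the word list is invented for illustration):

    BLOCKLIST = {"idiot", "scum"}  # hypothetical flagged terms

    def naive_flag(text: str) -> bool:
        """Flags a message only if it contains an exact blocklisted word."""
        return any(word in BLOCKLIST for word in text.lower().split())

    print(naive_flag("you absolute idiot"))                # True: exact match caught
    print(naive_flag("you absolute 1d10t"))                # False: leetspeak slips through
    print(naive_flag("people like you should disappear"))  # False: no listed word, still abusive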
Alongside tackling abuse on the network, Twitter has stepped up its efforts to combat misinformation. In August it teamed up with Reuters and the Associated Press to identify false information and stop it from spreading, and it had earlier launched Birdwatch, a community-moderation scheme that lets volunteers flag misleading tweets.