Social media giant Twitter announced major changes to its user policies on Tuesday, April 21, designed to curb abuse and harassment. The changes, announced in a blog post, include both policy updates and the launch of new technology, with the goal of making users less likely to encounter abuse and less able to “troll” other users.
Two new Twitter user policy updates
An excerpt from the company’s Tuesday blog post explains the first policy update. “We are updating our violent threats policy so that the prohibition is not limited to “direct, specific threats of violence against others” but now extends to “threats of violence against others or promot[ing] violence against others.” Our previous policy was unduly narrow and limited our ability to act on certain kinds of threatening behavior. The updated language better describes the range of prohibited content and our intention to act when users step over the line into abuse.”
The other policy update lets the firm place different kinds of blocks on users. While the company previously either banned users completely or removed problematic content, Twitter can now fine-tune how it responds to users who violate its policies.
For example, users may be banned for only a short period of time. If they want to return to Twitter, they may have to jump through additional hoops, such as confirming they will follow Twitter’s rules.
New automated technology can identify abusive messages
The blog post also noted that Twitter is introducing new algorithms designed to automatically identify abuse and prevent it from being seen by its targets. Tweets can still only be deleted by humans, but the service can automatically detect abusive messages and keep them out of users’ mentions, so that targets won’t see them unless they choose to.
“It will not affect your ability to see content that you’ve explicitly sought out, such as Tweets from accounts you follow, but instead is designed to help us limit the potential harm of abusive content.”
Analysts point out that the new feature is similar to Twitter’s existing quality filter, but that feature is currently available only to verified users.