Back in March, Twitter introduced a new approach to improving the health of the public conversation on its platform. Since then it has been studying what people refer to as "trolls," noting that some trolling behavior can be fun, good-natured and humorous. Today, however, the company is taking action against trolls whose behavior distorts and detracts from the public conversation on Twitter.
Twitter says that some of these accounts and Tweets violate its policies, and in those cases it takes action on them. The company uses policies, human review processes, and machine learning to determine how Tweets are organized and presented in communal areas like conversations and search.
It is tackling behaviors that distort and detract from the public conversation in those areas by integrating new behavioral signals into how Tweets are presented. Twitter is taking many new signals into account, most of which are not visible externally: whether an account has confirmed its email address, whether the same person signs up for multiple accounts simultaneously, accounts that repeatedly Tweet at and mention accounts that don't follow them, and behavior that might indicate a coordinated attack.
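The approach described here is about ranking, not removal: Tweets from accounts that trip these signals are shown less prominently rather than deleted. That idea can be sketched as a simple weighted-signal model. All of the signal names and weights below are hypothetical, chosen only to mirror the examples in the article; Twitter has not disclosed how its actual system scores or combines signals.

```python
# Illustrative sketch only: signal names and weights are hypothetical,
# loosely based on the examples in the article (unconfirmed email,
# simultaneous sign-ups, unsolicited mentions, coordinated activity).

def behavior_score(account):
    """Return a penalty score; higher means more troll-like behavior."""
    weights = {
        "email_unconfirmed": 1.0,
        "simultaneous_signups": 2.0,
        "unsolicited_mentions": 1.5,
        "coordinated_activity": 3.0,
    }
    return sum(w for signal, w in weights.items() if account.get(signal))

def rank_tweets(tweets):
    """Order Tweets so low-penalty accounts surface first; nothing is removed."""
    return sorted(tweets, key=lambda t: behavior_score(t["account"]))

tweets = [
    {"text": "spam reply", "account": {"email_unconfirmed": True,
                                       "coordinated_activity": True}},
    {"text": "healthy reply", "account": {}},
]
ranked = rank_tweets(tweets)
```

Note that both Tweets remain in the result; the flagged account's Tweet is simply pushed down, which matches the down-ranking behavior the company describes.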
Twitter also clarified that people contributing to healthy conversation will be more visible in conversations and search. In early testing, the company says it has seen a positive impact: a 4% drop in abuse reports from search and an 8% drop in abuse reports from conversations, meaning fewer people are encountering Tweets that disrupt their experience.