As Twitter strives to make its platform a safe place for users, the company shared its progress in responding to abuse reports and removing spam. One of its significant efforts is to proactively identify potentially abusive tweets before other users see the offending content.
Twitter recently started using the same technology that tracks spam and platform manipulation to identify tweets that contain threats, promote self-harm, or exhibit abusive behavior.
The latest figures show that Twitter's efforts are bearing fruit: the technology now flags 38 percent of potentially abusive content before users see and report the offending tweets. During the same period last year, by contrast, practically all of the potentially offensive content reviewed by moderators came from reports by other Twitter users.
Twitter is also making significant progress in improving its response to abuse reports. Over the past year, reports from users who had unhealthy or abusive interactions with accounts they do not follow fell by 16 percent, and the number of abusive accounts that Twitter suspends after a user files a report has tripled. Furthermore, an improved reporting process makes it easier for users to request the removal of content that shares sensitive information.
Moreover, Twitter is employing methods to prevent suspended users from creating new accounts. The company suspended around 100,000 such accounts between January and March 2019, a 45 percent increase over the same period last year.
However, Twitter recognizes that it may incorrectly suspend an account for a variety of reasons, including content that reviewers take out of context. In response to these concerns, the company recently deployed a new in-app appeal process that speeds up appeals by as much as 60 percent.
Moving forward, Twitter admits that there is still room for improvement in its quest to make the platform safe for its users. Planned steps include rewriting existing rules to make them easier to understand, modifying the reporting process so that reviewers can act on abusive content more quickly, and introducing a feature that allows users to hide replies to their tweets.
The company also seeks to improve the technology it uses to detect abusive content, and it plans to add more notices that explain to users why particular content stays online.
Twitter is not the only social media platform grappling with harmful content and abusive users, and these companies are trying to resolve such concerns through a combination of technology and additional staff. For example, Facebook continues to expand the team of staff members that tracks down and removes content such as hate speech and terrorist propaganda. Meanwhile, YouTube, Google's video streaming platform, uses machine learning and artificial intelligence to remove conspiracy-theory and extremist content from its platform, and it is also demonetizing videos with extremist content.
For its part, Twitter will need to sustain these efforts to keep its platform safe, given that abusive users tend to find ways to circumvent the safeguards that social media platforms impose.