Amnesty International has a new volunteer program created specifically to deal with malicious content on Twitter. The project, called Troll Patrol, is a direct response to a study the group conducted into abuse directed at women via the platform. That study resulted in a report that ultimately described the platform as "toxic" because of the ease with which abusers could troll and harass women without recourse. A substantial portion of the abuse witnessed by the human rights advocacy group was further compounded by discrimination based on religion, sexuality, gender identity or norms, disability, ethnicity, and race. Of chief concern to the group is the silencing effect that type of trolling has on women on the platform. Although Twitter's rules and use policies do prohibit those types of abuse, the group doesn't feel the platform's leadership is doing enough to combat the problem.
To that end, Amnesty International's Troll Patrol is, in effect, a volunteer program that seeks to understand the abuse in a deeper sense. More directly, it is intended to gain a better understanding of the overall scale of the problem and how the abuse can be categorized and managed. For now, that means volunteers will need to sort through a trove of tweets and other media shared via Twitter to determine whether each item is abusive and to what extent. To prevent abuse of that system, multiple "decoders" – as participants are called – examine each individual tweet. As of this writing, there are 501,796 listings that still need to be analyzed and only around 2,393 decoders working on them. Considering just how big the Twitter platform is, there are likely many more beyond that which will need to be examined and categorized before any real progress can be made. So although the project is off to a great start, it could probably use quite a few more volunteers.
Having said that, anybody interested in signing up needs to be aware that a substantial amount of that content is highly reprehensible. The group even warns at the beginning of its promotional video – included below – that the content is often graphically descriptive and may simply be too much for some to watch. If all goes well, Amnesty International hopes to use the results to build an algorithm that can learn to identify abuse in real time, putting pressure on Twitter to begin addressing the problem more actively. In the meantime, anybody interested in helping the effort along should head over to the sign-up page via the button below or navigate to the source link below for more details.