New AI Anticipates Violent Protests By Using Twitter

Protests can turn violent in seconds under the right conditions, but a new AI program aims to predict those outbursts by scanning social media posts about protests and the people involved in them on Twitter and other platforms. The AI was created by researchers at the University of Southern California, who used it to analyze some 18 million tweets surrounding the 2015 protests over Freddie Gray, a Baltimore man who died in police custody. The AI, based on a deep neural network, was trained to detect "moralized language," and its findings were compared against arrest rates. Protests don't have to turn violent for arrests to happen, but arrest rates normally spike when they do, so the correlation the researchers found between moralized tweets and arrest rates points to a significant relationship between moral outrage on social media and violent behavior at real-world protests. Essentially, if modified and trained properly, this AI program could potentially predict violent protests before they happen by looking for moralized language on social media.

The researchers based the AI's criteria on a study they had conducted earlier. That study found that highly moralized talk on social media was often linked to violent protests, and that users whose moral views were shared and validated by others were more likely to encourage violence, whether by amplifying their message or through an outright call to arms. The model the researchers used was built on sets of moral opposites: fairness and cheating, care and harm, authority and subversion, and loyalty and betrayal. These four pairs of values appear to underlie the kind of moral and sociopolitical outrage that can lead to violence, and the researchers' AI generated results that generally agree with that notion.
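A crude way to picture how those four pairs of moral opposites could be turned into a detector is a keyword lexicon. The word lists and function below are hypothetical stand-ins; the actual system used a trained deep neural network, not keyword matching:

```python
import re

# Hypothetical word lists for the four moral-foundation pairs named in the
# article; the real model learned these signals rather than using fixed lists.
MORAL_FOUNDATIONS = {
    "fairness/cheating": {"unfair", "justice", "cheat", "rights", "equal"},
    "care/harm": {"harm", "hurt", "suffer", "protect", "safe"},
    "authority/subversion": {"obey", "defy", "authority", "riot", "lawless"},
    "loyalty/betrayal": {"betray", "loyal", "solidarity", "traitor"},
}

def moral_score(tweet: str) -> int:
    """Count how many distinct moral-foundation words appear in a tweet."""
    words = set(re.findall(r"[a-z]+", tweet.lower()))
    return sum(len(words & lexicon) for lexicon in MORAL_FOUNDATIONS.values())

neutral = moral_score("great weather at the park today")           # 0
charged = moral_score("this verdict is unfair, we must protect "
                      "our rights and demand justice")             # 4
```

A real classifier would need to handle sarcasm, context, and novel phrasing, which is exactly why the researchers trained a neural network instead of matching words.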

It is worth noting that the underlying study was funded by the United States Department of Defense. Many examples of moralized language could arguably be called hate speech, something that many social networks have been cracking down on lately. That means the tool could become less effective over time, though whether that happens, and to what degree, depends on a host of sociopolitical factors in social media that governments and tech companies are only now beginning to explore and act on in earnest.
