Twitter has teamed up with academic partners to develop ways to evaluate conversation on its platform, part of a broader effort to curb abuse and stop the spread of misinformation across the social network. The new partnerships follow Twitter’s call earlier this year for outside experts to propose methods for measuring conversational health on the platform. More than 230 proposals were submitted from around the world and reviewed by Twitter teams drawn from across the company, including engineering, legal, research, machine learning, and trust and safety.
From those proposals, Twitter selected two partners to help it formulate metrics for evaluating public discourse on Twitter and to examine how people use the platform. The first team, composed of academics from diverse fields, is led by Dr. Rebekah Tromble, Assistant Professor of Political Science at Leiden University. Her team will create two sets of metrics examining how communities form around political views on Twitter and the challenges that arise as political discussions develop on the site. The focus will be on “echo chambers” and “uncivil discourse”: one set of metrics will gauge how tolerant people are of differing viewpoints on the platform, while the other will target incivility and intolerance in Twitter discussions.

The second team includes Professor Miles Hewstone and John Gallacher at the University of Oxford and Dr. Marc Heerdink at the University of Amsterdam. It will study how exposure to diverse viewpoints and backgrounds can reduce prejudice and discrimination on Twitter. To that end, the team will work to apply “text classifiers for language commonly associated with positive sentiment, cooperative emotionality, and integrative complexity” to Twitter’s communication structure, according to David Gasca, Director of Product Management and Health at Twitter.
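To give a rough sense of what a “text classifier for positive sentiment” does, here is a deliberately simplified, lexicon-based sketch. The word lists and scoring rule below are invented for illustration; the researchers’ actual classifiers would be trained statistical models, not hand-written lexicons like this.

```python
# Illustrative sketch of a sentiment classifier: label text by counting
# hits against small positive/negative word lists. The lexicons and the
# scoring rule are made up for this example and are NOT the method used
# by Twitter or its academic partners.

POSITIVE = {"thanks", "agree", "great", "helpful", "appreciate", "interesting"}
NEGATIVE = {"stupid", "idiot", "hate", "awful", "liar", "terrible"}

def sentiment_label(tweet: str) -> str:
    """Return 'positive', 'negative', or 'neutral' based on lexicon hits."""
    # Normalize: lowercase each word and strip common punctuation.
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

In practice, a production classifier would replace the hand-built lexicons with a model trained on labeled tweets, but the input/output shape is the same: raw text in, a sentiment label (or score) out.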
The new partnerships build on Twitter’s longstanding effort to foster healthy conversation on its platform. Last month, Twitter introduced changes to combat bots and other malicious accounts following its acquisition of the security startup Smyte, and in May and June the company suspended over 70 million fake accounts. Those efforts are expected to continue as Twitter purges abuse and misleading content from its platform.