Users who are looking to limit the amount of vitriol they're exposed to in online comment forums can now fine-tune the intensity of discussions using a recently spotted Chrome extension called Tune from Alphabet Inc. subsidiary and Google sister-company Jigsaw. While the name "Tune" may elicit thoughts of audio software, and its UI does appear to be based on intuitive volume-style knobs, it actually gives users granular control to filter out comments in a given forum.
Effectively, Tune uses AI to let users hide "toxic" comments containing attacks, insults, profanity, violence, and more on sites like YouTube, Facebook, Twitter, and Reddit, as well as via the Disqus comment service.
Adjustments can be made on a per-site basis, and users can go further still by dialing down exactly how many comments they'll see. Deeper customizations are available too, for those looking to filter out specific types of comments. The only current caveats are that a Google sign-in is required and that Tune is still experimental.
As noted above, the new extension is still in its experimental stages, but that's hardly surprising since it's being offered by one of Alphabet's most experimental units. Tune happens to be built on top of another piece of software called Perspective that was launched way back in 2016.
The overarching purpose of Perspective was, as its name might imply, to provide automated moderation in the same forums the new Tune extension now covers and, more directly, to take on internet trolls. The AI-driven software was revealed with high hopes that it could cut back on comments that failed to meet certain standards.
Namely, the tool was intended to help participants in a discussion move past geopolitical and cultural challenges, as well as to provide a way to remove hateful or violent comments automatically. Simultaneously, it was meant to reduce censorship of comments that would otherwise be misinterpreted and removed from forum discussions.
By late 2017, it had become clear that Perspective wasn't working as planned. That largely came down to its dependence on user reporting, with participants in a discussion deciding exactly how "toxic" a given comment was. In short, the tool ultimately allowed comments containing blatant racism or threats of violence to be marked as non-toxic, with discrepancies between the "toxicity" and "inflammatory" ratings Perspective dealt out to comments.
The team behind Perspective has continued work on its AI algorithms in the meantime. In effect, Tune seems to be a more straightforward way of implementing the rating systems that were already in place.
In theory, by allowing users to filter comments themselves based on their own set level of intensity, the tool will perform more accurately. The intensity rating is most likely based on some combination of toxicity and inflammatory scores, alongside the usual filters for keywords such as swear words or specific phrases. That's as opposed to having an AI system try both to set the acceptable level and to filter comments itself.
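To illustrate the idea, here is a minimal sketch of how a Tune-style, user-controlled filter might combine per-comment scores with a user-set threshold. The score names, the `should_hide` helper, and the max-score logic are assumptions for illustration only, not Jigsaw's actual implementation:

```python
# Hypothetical sketch of a client-side comment filter in the style of Tune.
# Score attributes and threshold logic are assumed, not taken from Jigsaw.

def should_hide(comment_scores: dict, user_threshold: float) -> bool:
    """Hide a comment when its highest score exceeds the user's dial setting.

    comment_scores: per-attribute scores in [0, 1], e.g. "toxicity"
                    and "inflammatory" ratings from a model.
    user_threshold: the user's chosen intensity level in [0, 1];
                    lower values mean a quieter feed.
    """
    intensity = max(comment_scores.values())
    return intensity > user_threshold

comments = [
    {"text": "Great write-up, thanks!",
     "scores": {"toxicity": 0.02, "inflammatory": 0.05}},
    {"text": "You people are all idiots.",
     "scores": {"toxicity": 0.91, "inflammatory": 0.78}},
]

# With the dial set to 0.5, only the first comment remains visible.
visible = [c["text"] for c in comments
           if not should_hide(c["scores"], 0.5)]
```

The key design point is that the model only scores comments; the acceptable level lives entirely in the user's hands, so two users reading the same thread can see very different amounts of it.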
...still experimental, and a possible negative impact
There may also be some drawbacks to using Tune. Not least of all, filtering comments out will not reduce the amount of vitriol that actually exists online. If enough users turn to tools such as Tune for filtering, comments such as those that make threats may go unreported or not be taken as seriously as they should be. The tool could also end up serving as a way for some to simply avoid difficult discussions rather than engaging on issues.
Used properly, Tune could allow users to feel more secure and, more importantly, less stressed while conversing online, but things aren't going to be perfect right from the start.
As noted in Tune's Chrome Web Store listing, its features are still in the experimental stages, so users shouldn't expect a perfect filter for their discussions. Instead, Tune should improve over time -- both as the developers refine the algorithms via machine learning and as users make their own adjustments to their individual settings.