Go to any news website today (it doesn't matter if it's local or overseas) and check the comments. Chances are a bunch of trolls is already making things toxic for everyone else, drowning out any fruitful conversation with vile, offensive comments.
That's what Google and Jigsaw (an incubator within Google's parent company Alphabet) are trying to fix with Perspective. The companies describe it as an "early-stage technology that uses machine learning to help identify toxic comments." In essence, they're leveraging the same machine-learning technology that powers Google's Assistant to identify and remove toxic comments left by trolls online – the kind that completely derail conversations about anything, from politics to tech.
Perspective works by examining thousands of comments that human reviewers have labeled abusive, then comparing new comments against that database to score their toxicity. And just like Google's Assistant technology, the more comments it encounters and studies, the better and faster it becomes.
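The underlying idea – learn toxicity signals from human-labeled examples, then score new comments against them – can be sketched with a toy word-frequency model. Everything below (the training data, the scoring function, the neutral 0.5 default) is illustrative only; Perspective's actual models are far more sophisticated:

```python
from collections import Counter

def train(labeled_comments):
    """Count how often each word appears in toxic vs. non-toxic comments."""
    toxic, clean = Counter(), Counter()
    for text, is_toxic in labeled_comments:
        (toxic if is_toxic else clean).update(text.lower().split())
    # Per-word toxicity: fraction of a word's occurrences found in toxic comments
    return {w: toxic[w] / (toxic[w] + clean[w]) for w in toxic | clean}

def score(model, comment):
    """Average the per-word scores; unseen words count as neutral (0.5)."""
    words = comment.lower().split()
    return sum(model.get(w, 0.5) for w in words) / max(len(words), 1)

# Hypothetical human-reviewed training data
data = [
    ("you are an idiot", True),
    ("what a stupid idea", True),
    ("thanks for the thoughtful article", False),
    ("great reporting as always", False),
]
model = train(data)
print(score(model, "stupid idiot"))       # 1.0 – every word seen only in toxic comments
print(score(model, "thoughtful article")) # 0.0 – every word seen only in clean comments
```

The "more comments it studies, the better it gets" claim maps directly onto this sketch: a larger labeled corpus means more words with reliable per-word statistics and fewer falling back to the neutral default.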
Perspective will be available to publishers via an API that can be easily integrated into their websites. It can be used in a number of ways – from flagging potentially abusive comments as soon as they're posted, to letting readers sort comments so the least toxic appear first. Obviously, human moderators have the last word on whether a comment is really toxic or not, but automated tools that flag and potentially hide these comments before they derail a conversation are badly needed in these divided times.
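A publisher-side integration might look like the following sketch, which flags high-scoring comments for human review and sorts the rest from least to most toxic. The request payload shape, the 0.8 threshold, and the sample scores are assumptions for illustration; consult Perspective's own documentation for the real API:

```python
import json

# Hypothetical request payload for a comment-analysis API such as Perspective
def build_request(comment_text):
    return json.dumps({
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    })

FLAG_THRESHOLD = 0.8  # assumed cutoff; a real deployment would tune this

def moderate(scored):
    """scored: list of (comment, toxicity_score) pairs, scores in [0, 1].

    Returns (flagged_for_review, visible_least_toxic_first)."""
    flagged = [c for c, s in scored if s >= FLAG_THRESHOLD]
    kept = [c for c, s in sorted(scored, key=lambda p: p[1]) if s < FLAG_THRESHOLD]
    return flagged, kept

comments = [("nice piece", 0.05), ("you moron", 0.95), ("I disagree strongly", 0.30)]
flagged, kept = moderate(comments)
print(flagged)  # ['you moron'] – held for a human moderator's final call
print(kept)     # ['nice piece', 'I disagree strongly'] – least toxic first
```

Note that the code only hides comments pending review rather than deleting them outright, matching the article's point that human moderators keep the last word.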