
Role of Machine Learning in Identifying Toxic Online Content

February 2021 - By WMR


The increasing number of smartphone users and high internet penetration have led to a massive increase in the number of social media users. Facebook, the U.S.-based social networking site, recorded over 2.6 billion monthly active users as of the first quarter of 2020. However, such platforms have also become a booming playground for hate speech, extremist content, harassment, and misinformation.

The recent U.S. Capitol incident is a fresh reminder of the impact toxic online content can have. On Christmas Day, a far-right activist and conspiracy theorist allegedly posted a video on YouTube urging people to storm Washington, DC, on the very day Congress would finalize Joe Biden's election to the U.S. presidency.

Many people have experienced online abuse, and some have even lost their lives because of it. I have heard from people who were driven to post things they never would have posted under any other circumstances, simply because someone decided to attack them. This is a huge problem, and it needs to be addressed. Often the person doing the posting is not even aware that what they are doing is wrong; it could be somebody from work, a stranger, or even a family member. To protect ourselves, our children, our employees, and our communities, we need to be conscious of what we say online.

Artificial Intelligence can help detect and flag toxic online content. Many tech companies are using machine learning to keep rising volumes of harmful content in check. Recently, Google's Jigsaw announced that Perspective, its free API for scoring comment toxicity, now processes 500 million requests daily and is measurably reducing toxicity, making online conversations better at scale. Jigsaw uses machine learning to spot toxic comments.
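To make this concrete, here is a minimal sketch of how a developer might score a comment with the Perspective API. It assumes the publicly documented v1alpha1 comments:analyze endpoint and a valid API key (YOUR_API_KEY is a placeholder); consult Jigsaw's documentation for current details.

```python
import requests

# Perspective API endpoint (v1alpha1 at the time of writing; check the docs).
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"  # placeholder; obtain a key from Google Cloud

def toxicity_score(text: str) -> float:
    """Return the summary TOXICITY score (0.0 to 1.0) for a comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(API_URL, params={"key": API_KEY}, json=payload)
    response.raise_for_status()
    scores = response.json()
    return scores["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("You are a wonderful person."))    # expected: low score
    print(toxicity_score("You are an idiot and a loser."))  # expected: high score
```

A platform would typically call this for each incoming comment and use the score to decide whether to publish, downrank, or hold the comment for review.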

Although the use of machine learning can help mitigate the spread of online hate content, the approach has its issues. A major one is that nontoxic comments are mistaken for toxic ones when they contain words related to gender, sexual orientation, religion, or disability. To address this, models must be trained to reduce unintended bias against pre-defined identity groups. Multilabel classification of toxic comments also helps: a model can be taught to distinguish between "toxic", "severe toxic", "threat", "insult", "obscene", and "identity hate" cases, as sketched below.
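As an illustration, here is a minimal, hypothetical sketch of such a multilabel classifier using scikit-learn, with the six labels above. The tiny inline training set is invented purely for demonstration; a real system would train on a large labeled corpus such as Jigsaw's toxic-comment dataset.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

LABELS = ["toxic", "severe_toxic", "threat", "insult", "obscene", "identity_hate"]

# Toy training data, invented purely for illustration.
comments = [
    "Have a great day, thanks for sharing!",
    "You are a complete idiot.",
    "I will find you and hurt you.",
    "What a stupid, disgusting post.",
    "Go back where you came from, you people disgust me.",
]
# One row per comment, one column per label (1 = label applies).
y = np.array([
    [0, 0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [1, 0, 0, 0, 0, 1],
])

# One-vs-rest turns the multilabel task into six binary classifiers.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(comments, y)

# Predict per-label probabilities for a new comment.
probs = model.predict_proba(["you idiot, nobody wants you here"])[0]
for label, p in zip(LABELS, probs):
    print(f"{label}: {p:.2f}")
```

Because a single comment can be both an insult and a threat, for example, the one-vs-rest setup lets each label fire independently rather than forcing one category per comment.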

As mentioned earlier, social media plays a major role in the spread of toxic online content. Some users, in a quest to rack up followers, promote abuse, especially against women or against a particular country. Such scenarios have highlighted the need for social media platforms to self-regulate toxic and hateful speech. Various advertisers have also called on social media giants to stop the spread of toxic online content. AI can be used to enforce objective policies on hateful, inflammatory, racist, or toxic speech and to apply them consistently, as in the simple sketch below. It can also help curb hate speech, cyberbullying, and sexual harassment across text and voice.
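One way "applying policies consistently" could look in code: a hypothetical moderation rule that maps a model's toxicity score to an action using fixed thresholds. The thresholds and action names here are invented for the example.

```python
def moderation_action(toxicity: float) -> str:
    """Map a toxicity score in [0, 1] to a moderation action.

    Thresholds are illustrative; a real platform would tune them
    against precision/recall targets and route borderline cases
    to human reviewers.
    """
    if toxicity >= 0.9:
        return "remove"        # near-certain violations are taken down
    if toxicity >= 0.7:
        return "human_review"  # borderline content goes to a moderator
    if toxicity >= 0.5:
        return "limit_reach"   # downrank rather than delete
    return "allow"

for score in (0.12, 0.55, 0.78, 0.95):
    print(score, "->", moderation_action(score))
```

Because the same thresholds apply to every user, the rule cannot selectively enforce the policy, which is exactly the consistency the paragraph above calls for.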
