Abuse Detection

Problem

You have a large amount of user-generated content that needs to be reviewed for abusive and harmful language.

Data sources:

Posts on forums and social media, and messages sent between users.

Solution

Identify abusive and harmful content automatically with Codeq’s abuse classifier as part of your community management solution. It saves you time and resources and prevents malicious users from discouraging, harassing, and insulting other community members.

Free up your team to give users attention when it is actually needed.

API modules

  • Abuse Classifier

Case Study

Focus on Facts is a web content publisher that runs a political fact-checking website. Users were recently given the ability to comment on posts, but the controversial nature of some of the content has attracted a group of people who attack other users with racist remarks and hate speech, as well as spouting other types of abuse.

Some legitimate participants in the conversations in the comments section now complain about the site allowing this type of behavior, while others have stopped participating outright and abandoned the community. Since this started happening, the website has experienced a steady drop in traffic.

  • Using Codeq’s abuse classifier, insults, hate speech, racism, and threats in comments are automatically spotted.
  • The amount of time needed to moderate the comments section and identify toxic behavior can be drastically reduced.

Example:

What you just said is utterly retarded. Illegals just take good American jobs and are mostly criminals and rapists. Do not come to the US illegally or we'll have to teach you a lesson. Go back to your shitty country, Mexican!

Oi freak why don't you shut up! You're just another sand nigger trying to destroy America. Take you BLM bullshit and shove it up your nasty ass. Go fuck yourself, mudslime!
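To illustrate how a classifier like this might plug into a comment-moderation workflow, here is a minimal sketch. The `classify_abuse` function below is a toy keyword heuristic used as a stand-in so the example is self-contained; Codeq's actual abuse classifier is a machine-learned model accessed via its API, and the function and keyword list here are assumptions for demonstration only.

```python
# Moderation-queue sketch: route comments either to a human-review
# queue or to auto-approval, based on an abuse classifier.

# Toy stand-in for the real classifier (assumption for this sketch only):
# a real deployment would call the abuse classifier API here instead.
ABUSE_KEYWORDS = {"idiot", "shut up", "go back to your country"}


def classify_abuse(comment: str) -> bool:
    """Return True if the comment looks abusive (toy keyword heuristic)."""
    lowered = comment.lower()
    return any(keyword in lowered for keyword in ABUSE_KEYWORDS)


def triage(comments):
    """Split comments into (flagged-for-review, auto-approved) lists."""
    flagged, approved = [], []
    for comment in comments:
        (flagged if classify_abuse(comment) else approved).append(comment)
    return flagged, approved


comments = [
    "Great article, thanks for the sources!",
    "Shut up, you idiot, go back to your country.",
]
flagged, approved = triage(comments)
print(flagged)  # only the abusive comment lands in the review queue
```

In practice the point of the classifier is the triage step: moderators only review the flagged list instead of every comment, which is where the time savings described above come from.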