Overview of the Kaggle Toxic Comment Classification Challenge
Nowadays, social networks are an important part of our daily lives, and the most common way we interact on them is through text. We use text not only to communicate, but also to express ourselves and to spread ideas and knowledge. However, not everyone is responsible about what they comment or post. The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions; eventually the business loses its customers, and the social network no longer feels safe. Detecting these negative comments is not easy, because a typical social network has millions of users, each posting dozens of comments every day. We cannot hire people to manually monitor and stop them: the data is too large and arrives too fast. The good news is that we can do it with AI.
The Conversation AI team, a research initiative founded by Jigsaw and Google (both part of Alphabet), is working on tools to help improve online conversation. One area of focus is the study of negative online behaviors, like toxic comments (i.e. comments that are rude, disrespectful, or otherwise likely to make someone leave a discussion). So far they have built a range of publicly available models served through the Perspective API, including toxicity. But the current models still make errors, and they don't allow users to select which types of toxicity they're interested in finding (e.g. some platforms may be fine with profanity, but not with other types of toxic content). So the team hosted a competition on Kaggle, challenging participants to build a multi-headed model capable of detecting different types of toxicity, like threats, obscenity, insults, and identity-based hate, better than Perspective's current models. The dataset consists of comments from Wikipedia's talk page edits. Improvements to the current model will hopefully help online discussion become more productive and respectful.
Some techniques used in this competition
This is clearly a Natural Language Processing (NLP) competition, an area where Deep Learning shines. Our task is to assign each comment its correct labels. There are 6 labels (toxic, severe_toxic, obscene, threat, insult, identity_hate), and a single comment can carry several of them at once, so this is a Multi-Label classification problem.
Below are some sample comments so you can get a sense of the data:
- Toxic, Obscene, Insult comment: "and i'm going to keep posting the stuff u deleted until this f**k**g site closes down have fun u stupid *ss b***h don't ever delete anything fre like i said before go to h**l".
- Clean comment: "OK, Steve, to be honest I really like the present form. So, I don't have any issue with the present one".
- Toxic comment: "Just shut up okay?Im only 10 years old. I just wanted to have a little fun".
- Clean comment: "I will edit as I see fit and remove things I find irrelevant as it is a free for all and freedom of opinion and speech so shut up." This comment looks similar to the one above (it even contains "shut up"), but it does not express a negative sentiment.
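Concretely, each comment becomes a fixed-length binary target vector with one entry per label, so a comment like the first sample above (toxic, obscene, insult) activates three entries at once. A minimal sketch, with the label order following the competition's six columns:

```python
import numpy as np

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Labels assigned to the first sample comment above.
active = {"toxic", "obscene", "insult"}

# Multi-label target: several entries may be 1 at once,
# unlike multi-class classification where exactly one would be.
target = np.array([int(name in active) for name in LABELS])
print(target)  # [1 0 1 0 1 0]
```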
Our approach was pretty straightforward:
- Use different pre-trained word embeddings to transform raw text into numeric vectors.
- Use different Deep Learning architectures, mostly Recurrent Neural Networks.
- Build a single Multi-Label model instead of 6 binary classification models, because we could not afford the time to train 6 separate models.
- Augment the data by translating English comments to French/Spanish/German and then back to English. Thanks to Pavel Ostyakov for spreading this idea on the forum.
- Use traditional models like Naive Bayes, SVM, Logistic Regression, and XGBoost with TF-IDF bag-of-words features.
- Spend a great deal of time fine-tuning all models.
- Blend the predictions of 12 models to produce the final result.
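The deep-learning side of this recipe can be sketched in Keras (the layer sizes, GRU choice, and random embedding matrix below are illustrative stand-ins, not the exact competition models): a frozen pre-trained embedding matrix feeds a bidirectional RNN, and six sigmoid outputs give one independent probability per label, trained with binary cross-entropy.

```python
import numpy as np
from tensorflow.keras import layers, models, initializers

MAX_LEN, VOCAB_SIZE, EMBED_DIM = 100, 20000, 300

# Stand-in for a real pre-trained embedding matrix (GloVe, fastText, ...).
embedding_matrix = np.random.normal(size=(VOCAB_SIZE, EMBED_DIM)).astype("float32")

inp = layers.Input(shape=(MAX_LEN,))
x = layers.Embedding(
    VOCAB_SIZE, EMBED_DIM,
    embeddings_initializer=initializers.Constant(embedding_matrix),
    trainable=False,  # keep the pre-trained vectors frozen
)(inp)
x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
x = layers.GlobalMaxPooling1D()(x)
out = layers.Dense(6, activation="sigmoid")(x)  # 6 independent label probabilities

model = models.Model(inp, out)
model.compile(loss="binary_crossentropy", optimizer="adam")
```

Sigmoid outputs with binary cross-entropy (rather than a softmax) are what make this multi-label: each of the 6 probabilities is predicted independently, so a comment can be toxic, obscene, and insulting at the same time.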
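The traditional baseline from the list can be sketched with scikit-learn: TF-IDF features plus one Logistic Regression per label. The tiny comments and label matrix below are made up for illustration only, not taken from the real dataset.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Made-up training comments and multi-label targets (rows: comments, cols: LABELS).
comments = [
    "you are a stupid idiot",
    "thanks for the helpful edit",
    "i will find you and hurt you",
    "nice work on the article",
]
targets = np.array([
    [1, 0, 1, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 1, 0, 1, 0, 1],
    [0, 0, 0, 0, 0, 0],
])

vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X = vectorizer.fit_transform(comments)

# One independent binary classifier per label column.
classifiers = []
for j in range(len(LABELS)):
    clf = LogisticRegression()
    clf.fit(X, targets[:, j])
    classifiers.append(clf)

# Probability of the positive class for each comment and label.
probs = np.column_stack([clf.predict_proba(X)[:, 1] for clf in classifiers])
```

Despite their simplicity, such TF-IDF models are fast to train and add useful diversity when blended with the neural networks.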
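The blending step itself is just a weighted average of the per-model probability matrices; a minimal sketch, where the weights are a tuning choice (e.g. set on a validation split):

```python
import numpy as np

def blend(preds, weights):
    """Weighted average of per-model probability matrices of equal shape."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalise so the blend stays a valid probability
    return np.tensordot(w, np.stack(preds), axes=1)

# Two models' predictions on 3 comments x 6 labels, weighted 2:1.
p_a = np.full((3, 6), 0.2)
p_b = np.full((3, 6), 0.8)
blended = blend([p_a, p_b], [2, 1])  # (2*0.2 + 1*0.8) / 3 = 0.4 everywhere
```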
The Data Science Team of TBV ended up in the top 3% of about 4,500 teams on the private leaderboard with this approach. However, looking back, there is a lot we still need to learn if we want higher scores in future NLP competitions.