We aim to detect inappropriate questions from the Quora website by building a binary classification model and deploying it in a web application: you enter a question, and the application classifies your words as TOXIC or NOT. We use word embeddings to map each text into a numeric representation, then train three different models plus one combination method. The approaches we adopt are GRU, LSTM, and Attention. We use Django to build the AI application, which includes friendly interaction and a clean interface. In the evaluation, our accuracy reaches 0.70583, and the application provides a stable question-detection service.
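As a rough illustration of the kind of recurrent classifier described above, here is a minimal Keras sketch; the layer sizes, `vocab_size`, `embed_dim`, and `max_len` are placeholders and do not reflect the project's actual code or hyperparameters.

```python
# Minimal sketch of a recurrent toxic-question classifier.
# All sizes below are illustrative assumptions, not the project's settings.
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size = 50000   # placeholder vocabulary size
embed_dim = 300      # placeholder word-embedding dimension
max_len = 70         # placeholder maximum question length in tokens

model = models.Sequential([
    layers.Embedding(vocab_size, embed_dim, input_length=max_len),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),  # could equally be a GRU
    layers.GlobalMaxPooling1D(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary output: toxic vs. not toxic
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```

The Django application would then load a trained model of this form and run the same tokenization and embedding lookup on the submitted question before returning the TOXIC / NOT label.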
- Ziran Gong - @nature1995
- Peihong Yu - @PeihongY
- Haoran Peng - @PPGod95
MIT ©