# Toxicity Detector

Tired of hearing toxic comments but unsure of how to cut them off? Here's something to help cement your suspicions that certain words are indeed toxic.

This app is a demo of the toxicity detector ML model: it tells you how toxic a particular comment is.

The app uses a TFLite model that classifies the given text and predicts whether it is toxic across 6 different levels.
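
For reference, here is a minimal sketch of how a TFLite text classifier like this could be queried from Python. The model file name `toxicity.tflite`, the assumption that the input is already tokenized, and the six label names are illustrative guesses, not taken from this repository or the app's actual code.

```python
import numpy as np
import tensorflow as tf

# Assumed label set for the 6 toxicity levels (illustrative only).
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Load the TFLite model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="toxicity.tflite")  # assumed file name
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify(token_ids: np.ndarray) -> dict:
    """Run one comment (already tokenized/padded to the model's input shape)
    through the interpreter and return a score for each toxicity level."""
    interpreter.set_tensor(
        input_details[0]["index"],
        token_ids.astype(input_details[0]["dtype"]),
    )
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    return dict(zip(LABELS, scores.tolist()))
```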

The app is available on the Play Store and can be found here.