Multilabel Classification of Tagalog Hate Speech using Bidirectional Encoder Representations from Transformers (BERT)

This repository contains the source files for the thesis titled Multilabel Classification of Tagalog Hate Speech using Bidirectional Encoder Representations from Transformers (BERT) at the Polytechnic University of the Philippines. The model classifies hate speech into one or more of the following categories: Age, Gender, Physical, Race, Religion, and Others.

Hate speech encompasses expressions and behaviors that promote hatred, discrimination, prejudice, or violence against individuals or groups based on specific attributes, with consequences ranging from physical harm to psychological distress, making it a critical issue in today's society.

Bidirectional Encoder Representations from Transformers (BERT) is the pre-trained deep learning model used in this study. It uses a transformer architecture to generate word embeddings that capture both left and right context, and it can be fine-tuned for various natural language processing tasks. For this project, we fine-tuned Jiang et al.'s pre-trained BERT Tagalog Base Uncased model for the task of multilabel hate speech classification.
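
As an illustration only (this is not the repository's training code), the sketch below shows how a pre-trained Tagalog BERT checkpoint could be set up for multilabel classification with the Hugging Face transformers library. The model identifier, example text, and 0.5 decision threshold are assumptions, not values taken from this project.

```python
# Minimal sketch (not this repository's code): multilabel fine-tuning setup
# for a Tagalog BERT checkpoint with Hugging Face transformers.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["Age", "Gender", "Physical", "Race", "Religion", "Others"]
MODEL_ID = "GKLMIP/bert-tagalog-base-uncased"  # assumed Hub id for Jiang et al.'s model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID,
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # sigmoid outputs, one per label
)

# One forward pass over a hypothetical post; each label gets an independent probability.
inputs = tokenizer("halimbawang post", return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits).squeeze(0)
predicted = [label for label, p in zip(LABELS, probs) if p >= 0.5]  # 0.5 threshold is an assumption
print(predicted)
```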

👥 Proponents

📋 About the Thesis

📄 Abstract

Hate speech encompasses expressions and behaviors that promote hatred, discrimination, prejudice, or violence against individuals or groups based on specific attributes, with consequences ranging from physical harm to psychological distress, making it a critical issue in today's society. This study addresses the prevalence of hate speech on social media platforms by proposing a Tagalog hate speech classification model for efficient content moderation. Utilizing a fine-tuned Bidirectional Encoder Representations from Transformers (BERT), the study classifies hate speech based on categories such as Age, Gender, Physical, Race, Religion, and Others. The research draws from a dataset of 2,116 scraped social media posts from platforms like Facebook, Reddit, and Twitter manually annotated for analysis. Findings indicate that the model achieved a 97.12% precision, 90.18% recall, 93.52% f-measure for Age, 93.23% precision, 94.66% recall, 93.94% f-measure for Gender, 92.23% precision, 71.43% recall, 80.51% f-measure for Physical, 90.99% precision, 88.60% recall, 89.78% f-measure for Race, 99.03% precision, 94.44% recall, 96.68% f-measure for Religion, and 83.74% precision, 85.12% recall, 84.43% f-measure for Others, as well as an overall hamming loss score of 3.79%, indicating that the tool effectively classified hate posts with a high degree of accuracy in accordance with their respective labels.

🔠 Keywords

Bidirectional Encoder Representations from Transformers; Hate Speech; Multilabel Classification; Social Media; Tagalog; Polytechnic University of the Philippines; Bachelor of Science in Computer Science

💻 Languages and Technologies

Model

Python, PyTorch, Jupyter Notebook, Hugging Face, Pandas, NumPy

User Interface

HTML5, CSS3, JavaScript, Flask

🖼 Screenshots

🎨 Labels

Multilabel Classification refers to the task of assigning one or more relevant labels to each text. Each text can be associated with multiple categories simultaneously, such as Age, Gender, Physical, Race, Religion, or Others.

| Label | Description |
| --- | --- |
| Age | Target of hate speech pertains to one's age bracket or demographic |
| Gender | Target of hate speech pertains to gender identity, sex, or sexual orientation |
| Physical | Target of hate speech pertains to physical attributes or disability |
| Race | Target of hate speech pertains to racial background, ethnicity, or nationality |
| Religion | Target of hate speech pertains to affiliation, belief, or faith in any existing religious or non-religious group |
| Others | Target of hate speech pertains to a topic other than Age, Gender, Physical, Race, or Religion |
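
Because a single post can target several of these categories at once, the annotations are naturally represented as multi-hot vectors. The snippet below is a minimal sketch of that encoding; the helper function and example labels are hypothetical, not code from this repository.

```python
# Minimal sketch: representing a multilabel annotation as a multi-hot vector
# over the six categories above (the example labels are hypothetical).
LABELS = ["Age", "Gender", "Physical", "Race", "Religion", "Others"]

def to_multi_hot(assigned_labels):
    """Map a set of assigned labels to a 0/1 vector aligned with LABELS."""
    return [1 if label in assigned_labels else 0 for label in LABELS]

# A post targeting both gender and religion gets two active labels at once.
print(to_multi_hot({"Gender", "Religion"}))  # [0, 1, 0, 0, 1, 0]
```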

📜 Dataset

The dataset consists of 2,116 social media posts scraped from Facebook, Reddit, and Twitter and manually annotated to determine the labels of each post. It is split into three sets:

| Dataset | Number of Posts | Percentage |
| --- | --- | --- |
| Training Set | 1,267 | 60% |
| Validation Set | 212 | 10% |
| Testing Set | 633 | 30% |
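
For reference, a 60/10/30 train/validation/test split like the one above could be produced as follows; the file name, column layout, and random seed are assumptions for illustration, not details taken from this repository.

```python
# Minimal sketch of a 60/10/30 train/validation/test split (file and seed are assumed).
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("annotated_posts.csv")  # hypothetical file of the 2,116 annotated posts

# First hold out 30% for testing, then take 10% of the whole
# (i.e. 0.10 / 0.70 of the remainder) for validation.
train_val, test = train_test_split(df, test_size=0.30, random_state=42)
train, val = train_test_split(train_val, test_size=0.10 / 0.70, random_state=42)
print(len(train), len(val), len(test))
```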

🔒 Results

The testing set, containing 633 annotated hate speech posts, was used to evaluate the model's ability to classify hate speech input under each label in terms of Precision, Recall, F-Measure, and overall Hamming loss.

| Label | Precision | Recall | F-Measure |
| --- | --- | --- | --- |
| Age | 97.12% | 90.18% | 93.52% |
| Gender | 93.23% | 94.66% | 93.94% |
| Physical | 92.23% | 71.43% | 80.51% |
| Race | 90.99% | 88.60% | 89.78% |
| Religion | 99.03% | 94.44% | 96.68% |
| Others | 83.74% | 85.12% | 84.43% |

Overall Hamming Loss: 3.79%
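
These per-label scores and the overall Hamming loss can be computed from multi-hot ground-truth and prediction matrices. The snippet below is a minimal sketch using scikit-learn with toy arrays; it is not the evaluation code of this repository.

```python
# Minimal sketch: per-label precision/recall/F-measure and overall Hamming loss
# from multi-hot ground-truth (y_true) and prediction (y_pred) arrays of shape (n, 6).
import numpy as np
from sklearn.metrics import precision_recall_fscore_support, hamming_loss

LABELS = ["Age", "Gender", "Physical", "Race", "Religion", "Others"]
y_true = np.array([[1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 1, 0]])  # toy examples
y_pred = np.array([[1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0]])

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average=None, zero_division=0
)
for label, p, r, f in zip(LABELS, precision, recall, f1):
    print(f"{label}: precision={p:.2%} recall={r:.2%} f-measure={f:.2%}")
print(f"Hamming loss: {hamming_loss(y_true, y_pred):.2%}")
```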

πŸ› οΈ Installation

📦 Clone with git-lfs

Since this repo contains large data files (>= 50 MB), you first need to download and install the git-lfs plugin for versioning large files and set up Git LFS with the command git lfs install in your console in order to fully clone this repo.
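
For example, assuming the git-lfs package is already installed on your system, the setup and clone look like this:

```bash
# One-time Git LFS setup, then a normal clone pulls the large files via LFS
git lfs install
git clone https://github.com/kenth9p3/mlthsc-thesis.git
```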

πŸƒ How to run

Setup model

  • Clone the repository:

        git clone https://github.com/kenth9p3/mlthsc-thesis.git

  • Create a virtual environment:

        # Windows
        python -m venv venv

        # Linux
        python3 -m venv venv

  • Activate virtual environment:

        # Windows
        source venv/Scripts/activate

        # Linux
        source venv/bin/activate

  • Install dependencies:

        pip install -r requirements.txt

  • Run app:

        python ./server.py

Setup user interface

  • Open index.html in a browser

  • Enter Tagalog hate speech in the text box or choose one of the examples

  • Click Analyze

  • Save results
