
Deep Embedding Clustering for Speaker Diarization

Team Name: TensorSlow

Members: Aditya Singh (@adityajaas) and Shashi Kant Gupta (@shashikg)

Report: Available here. It contains 4 pages of main text, 1 references page, and 2 pages of supplementary material.

Abstract

Speaker diarization has received significant interest within the speech community because of its promise to considerably improve automatic speech transcription. Commonly used approaches to this problem include clustering embedding vectors such as d-vectors, i-vectors, or x-vectors with spectral clustering. We propose using unsupervised Deep Embedding Clustering (DEC) to cluster the data in a more semantically meaningful latent representation, learned with pre-trained autoencoders, for improved separation of imbalanced data. Stacked autoencoder layers are trained in a residual fashion in place of de-noising autoencoders for enhanced learning. We test our model on splits of the VoxConverse and AMI Corpus datasets, and it shows considerable improvement over the spectral clustering approach. Clustering is performed on x-vectors extracted with Desplanques et al.'s ECAPA-TDNN framework, and we use Silero-VAD for voice activity detection.
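For reference, the core DEC step soft-assigns each latent embedding to cluster centroids with a Student's t kernel and trains against a sharpened target distribution. The sketch below follows Xie et al.'s DEC formulation in PyTorch; the variable names and the alpha parameter are illustrative, not this repository's exact implementation.

```python
# Minimal DEC sketch (Xie et al., 2016): soft assignments q, sharpened
# targets p, and the KL(P || Q) objective. Shapes and names are illustrative.
import torch
import torch.nn.functional as F

def soft_assignments(z, mu, alpha=1.0):
    # z: (n, d) latent embeddings, mu: (k, d) cluster centroids
    dist_sq = torch.cdist(z, mu).pow(2)                     # (n, k)
    q = (1.0 + dist_sq / alpha).pow(-(alpha + 1.0) / 2.0)   # Student's t kernel
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    # Sharpen q and normalize by per-cluster frequency
    weight = q.pow(2) / q.sum(dim=0)
    return weight / weight.sum(dim=1, keepdim=True)

def dec_loss(z, mu):
    q = soft_assignments(z, mu)
    p = target_distribution(q).detach()                     # fixed target
    return F.kl_div(q.log(), p, reduction="batchmean")
```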

Live Demo on Google Colab

Open In Colab

Dataset

The model is tested on the VoxConverse dataset (216 audio files in total). We randomly split the dataset into 'train' and 'test' parts, with the test split containing 50 audio files. We also evaluated the model on the AMI test dataset (16 audio files in total).
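For illustration only, a random split like the one described above could be reproduced as follows; the directory layout and the seed are assumptions, and the original split may differ.

```python
# Hypothetical reproduction of the 166/50 VoxConverse train/test split.
import random
from pathlib import Path

wav_files = sorted(Path("voxconverse/audio").glob("*.wav"))  # 216 files expected
random.seed(42)                                              # assumed seed
random.shuffle(wav_files)
test_files, train_files = wav_files[:50], wav_files[50:]
```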

Results

VoxConverse

Method                             DER (%)
Spectral Clustering                 17.76
Ours                                12.99
Spectral Clustering (Oracle VAD)    17.98
Ours (Oracle VAD)                   11.70

AMI Corpus

Method                             DER (%)
Spectral Clustering                 21.99
Ours                                23.39
Spectral Clustering (Oracle VAD)    14.96
Ours (Oracle VAD)                   13.14
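DER in both tables is reported as a percentage. As a hedged illustration, the snippet below shows one common way to score DER with pyannote.metrics; the toy reference/hypothesis annotations are placeholders, and the repository's evaluation notebooks may compute DER differently.

```python
# Toy DER computation with pyannote.metrics (not necessarily this repo's scorer).
from pyannote.core import Segment, Annotation
from pyannote.metrics.diarization import DiarizationErrorRate

reference = Annotation()
reference[Segment(0.0, 10.0)] = "spk_A"
reference[Segment(10.0, 20.0)] = "spk_B"

hypothesis = Annotation()
hypothesis[Segment(0.0, 11.0)] = "spk_1"   # one second of speaker confusion
hypothesis[Segment(11.0, 20.0)] = "spk_2"

metric = DiarizationErrorRate()
der = metric(reference, hypothesis)        # 0.05 here, i.e. 5% DER
print(f"DER = {100 * der:.2f}%")
```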

Demo on a random YouTube file

Original Video Link: here
Diarization Output Link: here

Embedded demo video: The_big_debate_Education_in_India_vs_education_in_abroad.1.mp4 (hypothesis diarization output)


ipynb Notebook Files

  • Baseline<DATASET_NAME>.ipynb: Evaluates the DER score for the baseline models described in the report.
  • Compare_Spectral_vs_DEC_<DATASET_PARAM>.ipynb: Evaluates the DER score for the DEC models described in the report and compares it against the spectral clustering method.
  • utilities/TrainAutoEncoder.ipynb: Output notebook for training the autoencoder of the DEC model.
  • utilities/ExtractVAD.ipynb: Extracts and saves the VAD mappings for all the audio files.
  • utilities/ExtractXvectors.ipynb: Precomputes x-vectors for the audio files and saves them into a zip file for use in the DiarizationDataset (a sketch of these two extraction steps follows this list).
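The two utilities notebooks for VAD and x-vector extraction amount to a VAD-plus-embedding front end. Below is a minimal sketch of that front end using the public Silero-VAD torch.hub entry point and SpeechBrain's pretrained ECAPA-TDNN; the checkpoint name, file path, and per-segment handling are assumptions and may differ from the notebooks.

```python
# Hedged sketch of the VAD + x-vector front end. File path, checkpoint name,
# and segment handling are assumptions, not the notebooks' exact code.
import torch
from speechbrain.pretrained import EncoderClassifier

vad_model, vad_utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, *_ = vad_utils

encoder = EncoderClassifier.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb")

wav = read_audio("example.wav", sampling_rate=16000)                 # mono, 16 kHz
segments = get_speech_timestamps(wav, vad_model, sampling_rate=16000)

xvectors = []
for seg in segments:                                                 # sample indices
    chunk = wav[seg["start"]:seg["end"]].unsqueeze(0)                # (1, time)
    xvectors.append(encoder.encode_batch(chunk).squeeze())           # (192,) embedding
xvectors = torch.stack(xvectors)                                     # clustering input
```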

API Documentation

Documentation and details about functions inside the core module.

Index