VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this repository, I introduce a VITS model for Japanese, implemented with PyTorch 2.0.0 and customized from the original VITS model.
Pretrained models are also provided.
Figures: VITS at training and VITS at inference.
- Python >= 3.6
- Clone this repository
- Install the Python requirements listed in requirements.txt, e.g. `pip install -r requirements.txt`
- Download datasets
  - Download and extract the Japanese Speech dataset, then choose the `basic5000` dataset and move it to the `jp_dataset` folder.
- Run preprocessing if you use your own datasets.
```sh
# Preprocessing (g2p) for your own datasets. Preprocessed phonemes for the Japanese dataset have already been provided.
python preprocess.py --text_index 1 --filelists filelists/jp_audio_text_train_filelist.txt filelists/jp_audio_text_val_filelist.txt filelists/jp_audio_text_test_filelist.txt
```
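Each filelist is expected to hold one pipe-separated record per line, with the audio path in the first field and the transcript in the second; `--text_index 1` tells `preprocess.py` which field contains the text. Assuming this fork follows the original VITS `preprocess.py`, the phonemized filelists are written next to the originals with a `.cleaned` extension; a (hypothetical) input line would look like `jp_dataset/wavs/BASIC5000_0001.wav|こんにちは。`.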
Train the model:
```sh
# JP Speech
python train.py -c configs/jp_base.json -m jp_base
```
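In the original VITS recipe, checkpoints and TensorBoard summaries are written under `./logs/<model_name>` (here `./logs/jp_base`), so, assuming this fork keeps that layout, training can be monitored with `tensorboard --logdir logs/jp_base`.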
To get the pretrained model for Japanese, run:
```sh
sh startup.sh
```
See `vits_apply.ipynb` or run `streamlit run app.py` to see the demo on streamlit-share.
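For reference, the snippet below is a minimal inference sketch of the kind of code `vits_apply.ipynb` is expected to contain, assuming this fork keeps the original VITS code layout (`utils.py`, `models.SynthesizerTrn`, `text.text_to_sequence`, `commons.intersperse`). The checkpoint path, input sentence, and sampling parameters are placeholders, not values taken from this repository.

```python
# Minimal VITS inference sketch (assumes the original VITS module layout).
import torch

import commons
import utils
from models import SynthesizerTrn
from text import text_to_sequence
from text.symbols import symbols

# Load hyperparameters and build the synthesizer in eval mode.
hps = utils.get_hparams_from_file("configs/jp_base.json")
net_g = SynthesizerTrn(
    len(symbols),
    hps.data.filter_length // 2 + 1,
    hps.train.segment_size // hps.data.hop_length,
    **hps.model,
)
net_g.eval()

# Restore pretrained generator weights (placeholder path).
utils.load_checkpoint("logs/jp_base/G_latest.pth", net_g, None)

# Convert Japanese text to a symbol-id sequence with the configured cleaners.
seq = text_to_sequence("こんにちは。", hps.data.text_cleaners)
if hps.data.add_blank:
    seq = commons.intersperse(seq, 0)  # insert blank tokens between symbols
x = torch.LongTensor(seq).unsqueeze(0)      # shape (1, T_text)
x_lengths = torch.LongTensor([x.size(1)])

# Synthesize a waveform at hps.data.sampling_rate.
with torch.no_grad():
    audio = net_g.infer(
        x, x_lengths, noise_scale=0.667, noise_scale_w=0.8, length_scale=1.0
    )[0][0, 0].cpu().numpy()
```

The resulting `audio` array is a float waveform at `hps.data.sampling_rate` and can be written to disk with, for example, `scipy.io.wavfile.write("out.wav", hps.data.sampling_rate, audio)`.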