Amphion Alpha Release (#2)
* amphion alpha release
RMSnow authored Nov 28, 2023
1 parent 9f12af1 commit 9682d0c
Showing 426 changed files with 378,683 additions and 50 deletions.
60 changes: 60 additions & 0 deletions .gitignore
@@ -0,0 +1,60 @@
# Mac OS files
.DS_Store

# IDEs
.idea
.vs
.vscode
.cache

# GitHub files
.github

# Byte-compiled / optimized / DLL / cached files
__pycache__/
*.py[cod]
*$py.class
*.pyc
.temp
*.c
*.so
*.o

# Developing mode
_*.sh
_*.json
*.lst
yard*
*.out
evaluation/evalset_selection
mfa
egs/svc/*wavmark
egs/svc/custom
egs/svc/*/dev*
egs/svc/dev_exp_config.json
bins/svc/demo*
data
ckpts

# Data and ckpt
*.pkl
*.pt
*.npy
*.npz
*.tar.gz
*.ckpt
*.wav
*.flac
pretrained/wenet/*conformer_exp

# Runtime data dirs
processed_data
data
model_ckpt
logs
*.ipynb
*.lst
source_audio
result
conversion_results
get_available_gpu.py
21 changes: 21 additions & 0 deletions LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 Amphion

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
147 changes: 97 additions & 50 deletions README.md
@@ -1,69 +1,116 @@
# Amphion

Amphion (/æmˈfaɪən/) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation research and development. Amphion offers a unique feature: visualizations of classic models or architectures. We believe that these visualizations are beneficial for junior researchers and engineers who wish to gain a better understanding of the model.
# Amphion: An Open-Source Audio, Music, and Speech Generation Toolkit

<div>
<a href=""><img src="https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg"></a>
<a href="egs/tts/README.md"><img src="https://img.shields.io/badge/README-TTS-blue"></a>
<a href="egs/svc/README.md"><img src="https://img.shields.io/badge/README-SVC-blue"></a>
<a href="egs/tta/README.md"><img src="https://img.shields.io/badge/README-TTA-blue"></a>
<a href="egs/vocoder/README.md"><img src="https://img.shields.io/badge/README-Vocoder-purple"></a>
<a href="egs/metrics/README.md"><img src="https://img.shields.io/badge/README-Evaluation-yellow"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/LICENSE-MIT-red"></a>
</div>
<br>

**Amphion (/æmˈfaɪən/) is a toolkit for Audio, Music, and Speech Generation.** Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation research and development. Amphion offers a unique feature: **visualizations** of classic models or architectures. We believe that these visualizations are beneficial for junior researchers and engineers who wish to gain a better understanding of the model.

**The North-Star objective of Amphion is to offer a platform for studying the conversion of any inputs into audio.** Amphion is designed to support individual generation tasks, including but not limited to,

- **TTS**: Text to Speech (⛳ supported)
- **SVS**: Singing Voice Synthesis (👨‍💻 developing)
- **VC**: Voice Conversion (👨‍💻 developing)
- **SVC**: Singing Voice Conversion (⛳ supported)
- **TTA**: Text to Audio (⛳ supported)
- **TTM**: Text to Music (👨‍💻 developing)
- more…

The North-Star objective of Amphion is to offer a platform for studying the conversion of various inputs into audio. Amphion is designed to support individual generation tasks, including but not limited to,
In addition to the specific generation tasks, Amphion also includes several **vocoders** and **evaluation metrics**. A vocoder is an important module for producing high-quality audio signals, while evaluation metrics are critical for ensuring consistent measurement in generation tasks.

- TTS: Text to Speech Synthesis (supported)
- SVS: Singing Voice Synthesis (planning)
- VC: Voice Conversion (planning)
- SVC: Singing Voice Conversion (supported)
- TTA: Text to Audio (supported)
- TTM: Text to Music (planning)
- more…
## 🚀 News

In addition to the specific generation tasks, Amphion also includes several vocoders and evaluation metrics. A vocoder is an important module for producing high-quality audio signals, while evaluation metrics are critical for ensuring consistent metrics in generation tasks.
- **2023/11/28**: Amphion alpha release

## Key Features
## ⭐ Key Features

### TTS: Text to speech
### TTS: Text to Speech

- Amphion achieves state-of-the-art performance when compared with existing open-source repositories on text-to-speech (TTS) systems.
- It supports the following models or architectures,
- **[FastSpeech2](https://arxiv.org/abs/2006.04558)**: A non-autoregressive TTS architecture that utilizes feed-forward Transformer blocks.
- **[VITS](https://arxiv.org/abs/2106.06103)**: An end-to-end TTS architecture that utilizes a conditional variational autoencoder with adversarial learning.
- **[Vall-E](https://arxiv.org/abs/2301.02111)**: A zero-shot TTS architecture that uses a neural codec language model with discrete codes.
- **[NaturalSpeech2](https://arxiv.org/abs/2304.09116)**: An architecture for TTS that utilizes a latent diffusion model to generate natural-sounding voices.
- Amphion achieves state-of-the-art performance when compared with existing open-source repositories on text-to-speech (TTS) systems. It supports the following models or architectures:
- [FastSpeech2](https://arxiv.org/abs/2006.04558): A non-autoregressive TTS architecture that utilizes feed-forward Transformer blocks (its length-regulator idea is sketched after this list).
- [VITS](https://arxiv.org/abs/2106.06103): An end-to-end TTS architecture that utilizes a conditional variational autoencoder with adversarial learning.
- [Vall-E](https://arxiv.org/abs/2301.02111): A zero-shot TTS architecture that uses a neural codec language model with discrete codes.
- [NaturalSpeech2](https://arxiv.org/abs/2304.09116): An architecture for TTS that utilizes a latent diffusion model to generate natural-sounding voices.
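
To make the FastSpeech2 bullet concrete, here is a minimal, generic length regulator: the mechanism that lets a non-autoregressive model expand phoneme-level encoder states to frame level using predicted durations. This is an illustrative sketch, not Amphion's actual module; the function name and shapes are assumptions.

```python
import torch

def length_regulate(hidden: torch.Tensor, durations: torch.Tensor) -> torch.Tensor:
    """Expand phoneme-level encoder states to frame level.

    hidden:    (num_phones, dim) encoder outputs.
    durations: (num_phones,) integer frame count predicted for each phoneme.
    Returns a (num_frames, dim) tensor with num_frames = durations.sum().
    """
    # Each phoneme state is repeated for as many frames as it is predicted to span.
    return torch.repeat_interleave(hidden, durations, dim=0)
```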

### SVC: Singing Voice Conversion

- It supports multiple content-based features from various pretrained models, including [WeNet](https://github.com/wenet-e2e/wenet), [Whisper](https://github.com/openai/whisper), and [ContentVec](https://github.com/auspicious3000/contentvec).
- It implements several state-of-the-art model architectures, including diffusion-based and Transformer-based models. The diffusion-based architecture uses [Bidirectional dilated CNN](https://openreview.net/pdf?id=a-xFK8Ymz5J) and [U-Net](https://link.springer.com/chapter/10.1007/978-3-319-24574-4_28) as a backend and supports [DDPM](https://arxiv.org/pdf/2006.11239.pdf), [DDIM](https://arxiv.org/pdf/2010.02502.pdf), and [PNDM](https://arxiv.org/pdf/2202.09778.pdf). Additionally, it supports single-step inference based on the [Consistency Model](https://openreview.net/pdf?id=FmqFfMTNnv).
- Amphion supports multiple content-based features from various pretrained models, including [WeNet](https://github.com/wenet-e2e/wenet), [Whisper](https://github.com/openai/whisper), and [ContentVec](https://github.com/auspicious3000/contentvec). Their specific roles in SVC have been investigated in our NeurIPS 2023 workshop paper. [![arXiv](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://arxiv.org/abs/2310.11160) [![code](https://img.shields.io/badge/README-Code-red)](egs/svc/MultipleContentsSVC)
- Amphion implements several state-of-the-art model architectures, including diffusion-, transformer-, VAE-, and flow-based models. The diffusion-based architecture uses [Bidirectional dilated CNN](https://openreview.net/pdf?id=a-xFK8Ymz5J) as a backend and supports several sampling algorithms such as [DDPM](https://arxiv.org/pdf/2006.11239.pdf), [DDIM](https://arxiv.org/pdf/2010.02502.pdf), and [PNDM](https://arxiv.org/pdf/2202.09778.pdf); a DDIM-style sampler is sketched after this list. Additionally, it supports single-step inference based on the [Consistency Model](https://openreview.net/pdf?id=FmqFfMTNnv).
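
As a concrete illustration of the sampling algorithms mentioned above, below is a minimal deterministic DDIM loop (eta = 0) in plain NumPy. The `denoiser` callable, the noise schedule array, and the feature shape are illustrative assumptions rather than Amphion's actual interfaces.

```python
import numpy as np

def ddim_sample(denoiser, alphas_cumprod, timesteps, shape, seed=0):
    """Deterministically denoise Gaussian noise into an acoustic feature.

    denoiser(x_t, t) must return the predicted noise, same shape as x_t.
    alphas_cumprod[t] is the cumulative product of (1 - beta_s) for s <= t.
    timesteps is a decreasing subsequence of steps, e.g. [999, 899, ..., 0].
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)  # start from pure noise
    for i, t in enumerate(timesteps):
        a_t = alphas_cumprod[t]
        a_prev = alphas_cumprod[timesteps[i + 1]] if i + 1 < len(timesteps) else 1.0
        eps = denoiser(x, t)
        # Current estimate of the clean sample x0, inverted from the forward process.
        x0 = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        # Deterministic DDIM update toward the previous timestep.
        x = np.sqrt(a_prev) * x0 + np.sqrt(1.0 - a_prev) * eps
    return x
```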

### TTA: Text to Audio

- **TTA with a latent diffusion model**, including:
- **[AudioLDM](https://arxiv.org/abs/2301.12503)**: a two-stage model with an autoencoder and a latent diffusion model
- Amphion supports TTA with a latent diffusion model, designed along the lines of [AudioLDM](https://arxiv.org/abs/2301.12503), [Make-an-Audio](https://arxiv.org/abs/2301.12661), and [AUDIT](https://arxiv.org/abs/2304.00830); a schematic sketch follows this list. It is also the official implementation of the text-to-audio generation part of our NeurIPS 2023 paper. [![arXiv](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://arxiv.org/abs/2304.00830) [![code](https://img.shields.io/badge/README-Code-red)](egs/tta/RECIPE.md)
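
The two-stage design just described can be summarized in code: a pretrained audio autoencoder compresses mel spectrograms into latents, and a text-conditioned denoiser is trained by diffusion in that latent space. The following PyTorch skeleton is a hedged sketch under assumed module interfaces (`autoencoder.encode`, a text encoder, a noise-predicting denoiser), not Amphion's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentDiffusionTTA(nn.Module):
    """Two-stage text-to-audio skeleton: frozen autoencoder + latent denoiser."""

    def __init__(self, autoencoder, text_encoder, denoiser, num_steps=1000):
        super().__init__()
        self.autoencoder = autoencoder    # stage 1: mel <-> latent
        self.text_encoder = text_encoder  # maps text to conditioning vectors
        self.denoiser = denoiser          # stage 2: predicts noise in latent space
        betas = torch.linspace(1e-4, 0.02, num_steps)
        self.register_buffer("alphas_cumprod", torch.cumprod(1.0 - betas, dim=0))

    def diffusion_loss(self, mel, text):
        with torch.no_grad():  # stage 1 is pretrained and frozen here
            z = self.autoencoder.encode(mel)
        cond = self.text_encoder(text)
        t = torch.randint(0, len(self.alphas_cumprod), (z.shape[0],), device=z.device)
        a = self.alphas_cumprod[t].view(-1, *([1] * (z.dim() - 1)))
        noise = torch.randn_like(z)
        z_t = a.sqrt() * z + (1.0 - a).sqrt() * noise  # forward diffusion of latents
        return F.mse_loss(self.denoiser(z_t, t, cond), noise)
```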

### Vocoder

- Amphion supports both classic and state-of-the-art neural vocoders, including
- GAN-based vocoders: **[MelGAN](https://arxiv.org/abs/1910.06711)**, **[HiFi-GAN](https://arxiv.org/abs/2010.05646)**, **[NSF-HiFiGAN](https://github.com/nii-yamagishilab/project-NN-Pytorch-scripts)**, **[BigVGAN](https://arxiv.org/abs/2206.04658)**, **[APNet](https://arxiv.org/abs/2305.07952)**
- Flow-based vocoders: **[WaveGlow](https://arxiv.org/abs/1811.00002)**
- Diffusion-based vocoders: **[Diffwave](https://arxiv.org/abs/2009.09761)**
- Auto-regressive based vocoders: **[WaveNet](https://arxiv.org/abs/1609.03499)**, **[WaveRNN](https://arxiv.org/abs/1802.08435v1)**
- Amphion supports various widely-used neural vocoders, including:
- GAN-based vocoders: [MelGAN](https://arxiv.org/abs/1910.06711), [HiFi-GAN](https://arxiv.org/abs/2010.05646), [NSF-HiFiGAN](https://github.com/nii-yamagishilab/project-NN-Pytorch-scripts), [BigVGAN](https://arxiv.org/abs/2206.04658), [APNet](https://arxiv.org/abs/2305.07952).
- Flow-based vocoders: [WaveGlow](https://arxiv.org/abs/1811.00002).
- Diffusion-based vocoders: [Diffwave](https://arxiv.org/abs/2009.09761).
- Auto-regressive based vocoders: [WaveNet](https://arxiv.org/abs/1609.03499), [WaveRNN](https://arxiv.org/abs/1802.08435v1).
- Amphion provides the official implementation of the [Multi-Scale Constant-Q Transform Discriminator](https://arxiv.org/abs/2311.14957). It can be applied during training to enhance GAN-based vocoders of any architecture while leaving the inference stage (e.g., memory footprint and speed) unchanged; a minimal sketch of the idea follows. [![arXiv](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://arxiv.org/abs/2311.14957) [![code](https://img.shields.io/badge/README-Code-red)](egs/vocoder/gan/tfr_enhanced_hifigan)
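
The multi-scale idea can be sketched as several small sub-discriminators, each scoring the waveform under a constant-Q transform at a different resolution. The CQT front-ends are assumed to be differentiable modules supplied by the caller (e.g., from a library such as nnAudio); this illustrates the structure only, not the official implementation linked above.

```python
import torch
import torch.nn as nn

class MultiScaleCQTDiscriminator(nn.Module):
    """One sub-discriminator per CQT resolution; used only during training."""

    def __init__(self, cqt_transforms):
        super().__init__()
        # Assumed: each transform maps (batch, samples) -> (batch, bins, frames).
        self.transforms = nn.ModuleList(cqt_transforms)
        self.discs = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Conv2d(1, 32, kernel_size=3, padding=1),
                    nn.LeakyReLU(0.2),
                    nn.Conv2d(32, 1, kernel_size=3, padding=1),
                )
                for _ in cqt_transforms
            ]
        )

    def forward(self, wav):
        scores = []
        for cqt, disc in zip(self.transforms, self.discs):
            spec = cqt(wav).unsqueeze(1)  # (batch, 1, bins, frames)
            scores.append(disc(spec))     # per-scale real/fake score map
        return scores
```

Because these discriminators exist only at training time, the generator used at inference is unchanged, which is the property highlighted above.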

### Evaluation

We supply a comprehensive objective evaluation for the generated audios. The evaluation metrics contain:

- **F0 Modeling**
- F0 Pearson Coefficients
- F0 Periodicity Root Mean Square Error
- F0 Root Mean Square Error
- Voiced/Unvoiced F1 Score
- **Energy Modeling**
- Energy Pearson Coefficients
- Energy Root Mean Square Error
- **Intelligibility**
- Character/Word Error Rate based on [Whisper](https://github.com/openai/whisper)
- **Spectrogram Distortion**
- Frechet Audio Distance (FAD)
- Mel Cepstral Distortion (MCD)
- Multi-Resolution STFT Distance (MSTFT)
- Perceptual Evaluation of Speech Quality (PESQ)
- Short Time Objective Intelligibility (STOI)
- Signal to Noise Ratio (SNR)
- **Speaker Similarity**
- Cosine similarity based on [RawNet3](https://github.com/Jungjee/RawNet)
Amphion provides a comprehensive objective evaluation of the generated audio. The evaluation metrics contain:

- **F0 Modeling**: F0 Pearson Coefficients (sketched in code after this list), F0 Periodicity Root Mean Square Error, F0 Root Mean Square Error, Voiced/Unvoiced F1 Score, etc.
- **Energy Modeling**: Energy Root Mean Square Error, Energy Pearson Coefficients, etc.
- **Intelligibility**: Character/Word Error Rate, which can be calculated based on [Whisper](https://github.com/openai/whisper) and more.
- **Spectrogram Distortion**: Frechet Audio Distance (FAD), Mel Cepstral Distortion (MCD), Multi-Resolution STFT Distance (MSTFT), Perceptual Evaluation of Speech Quality (PESQ), Short Time Objective Intelligibility (STOI), etc.
- **Speaker Similarity**: Cosine similarity, which can be calculated based on [RawNet3](https://github.com/Jungjee/RawNet), [WeSpeaker](https://github.com/wenet-e2e/wespeaker), and more.
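
As an example of how one of these metrics can be computed, here is a standalone sketch of the F0 Pearson Coefficient using librosa's pYIN pitch tracker. Amphion's own implementations live under `evaluation/metrics/`; this version makes simplifying assumptions (a fixed sampling rate and simple truncation to the shorter contour).

```python
import librosa
import numpy as np

def f0_pearson(ref_wav_path, deg_wav_path, sr=16000):
    """Pearson correlation between two F0 contours, over co-voiced frames."""

    def track_f0(path):
        y, _ = librosa.load(path, sr=sr)
        f0, _, _ = librosa.pyin(
            y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
        )
        return f0  # NaN wherever the frame is unvoiced

    f0_ref, f0_deg = track_f0(ref_wav_path), track_f0(deg_wav_path)
    n = min(len(f0_ref), len(f0_deg))  # align by truncating to the shorter contour
    f0_ref, f0_deg = f0_ref[:n], f0_deg[:n]
    voiced = ~np.isnan(f0_ref) & ~np.isnan(f0_deg)
    return float(np.corrcoef(f0_ref[voiced], f0_deg[voiced])[0, 1])
```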

### Datasets

Amphion unifies the data preprocessing of open-source datasets including [AudioCaps](https://audiocaps.github.io/), [LibriTTS](https://www.openslr.org/60/), [LJSpeech](https://keithito.com/LJ-Speech-Dataset/), [M4Singer](https://github.com/M4Singer/M4Singer), [Opencpop](https://wenet.org.cn/opencpop/), [OpenSinger](https://github.com/Multi-Singer/Multi-Singer.github.io), [SVCC](http://vc-challenge.org/), [VCTK](https://datashare.ed.ac.uk/handle/10283/3443), and more. The supported dataset list can be seen [here](egs/datasets/README.md) (updating).

## 📀 Installation

```bash
git clone https://github.com/open-mmlab/Amphion.git
cd Amphion

# Install Python Environment
conda create --name amphion python=3.9.15
conda activate amphion

# Install Python Packages Dependencies
sh env.sh
```

## 🐍 Usage in Python

We detail the instructions of different tasks in the following recipes:

- [Text to Speech (TTS)](egs/tts/README.md)
- [Singing Voice Conversion (SVC)](egs/svc/README.md)
- [Text to Audio (TTA)](egs/tta/README.md)
- [Vocoder](egs/vocoder/README.md)
- [Evaluation](egs/metrics/README.md)

## 🙏 Acknowledgement


- [ming024's FastSpeech2](https://github.com/ming024/FastSpeech2) and [jaywalnut310's VITS](https://github.com/jaywalnut310/vits) for model architecture code.
- [lifeiteng's VALL-E](https://github.com/lifeiteng/vall-e) for training pipeline and model architecture design.
- [WeNet](https://github.com/wenet-e2e/wenet), [Whisper](https://github.com/openai/whisper), [ContentVec](https://github.com/auspicious3000/contentvec), and [RawNet3](https://github.com/Jungjee/RawNet) for pretrained models and inference code.
- [HiFi-GAN](https://github.com/jik876/hifi-gan) for GAN-based Vocoder's architecture design and training strategy.
- [Encodec](https://github.com/facebookresearch/encodec) for well-organized GAN Discriminator's architecture and basic blocks.
- [Latent Diffusion](https://github.com/CompVis/latent-diffusion) for model architecture design.
- [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS) for preparing the MFA tools.


## ©️ License

Amphion is under the [MIT License](LICENSE). It is free for both research and commercial use cases.

## 📚 Citations

Stay tuned, coming soon!
140 changes: 140 additions & 0 deletions bins/calc_metrics.py
@@ -0,0 +1,140 @@
# Copyright (c) 2023 Amphion.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.

import os
import numpy as np
import json
import argparse

from glob import glob
from tqdm import tqdm
from collections import defaultdict

from evaluation.metrics.energy.energy_rmse import extract_energy_rmse
from evaluation.metrics.energy.energy_pearson_coefficients import (
    extract_energy_pearson_coeffcients,
)
from evaluation.metrics.f0.f0_pearson_coefficients import extract_fpc
from evaluation.metrics.f0.f0_periodicity_rmse import extract_f0_periodicity_rmse
from evaluation.metrics.f0.f0_rmse import extract_f0rmse
from evaluation.metrics.f0.v_uv_f1 import extract_f1_v_uv
from evaluation.metrics.intelligibility.character_error_rate import extract_cer
from evaluation.metrics.intelligibility.word_error_rate import extract_wer
from evaluation.metrics.similarity.speaker_similarity import extract_speaker_similarity
from evaluation.metrics.spectrogram.frechet_distance import extract_fad
from evaluation.metrics.spectrogram.mel_cepstral_distortion import extract_mcd
from evaluation.metrics.spectrogram.multi_resolution_stft_distance import extract_mstft
from evaluation.metrics.spectrogram.pesq import extract_pesq
from evaluation.metrics.spectrogram.scale_invariant_signal_to_distortion_ratio import (
    extract_si_sdr,
)
from evaluation.metrics.spectrogram.scale_invariant_signal_to_noise_ratio import (
    extract_si_snr,
)
from evaluation.metrics.spectrogram.short_time_objective_intelligibility import (
    extract_stoi,
)

METRIC_FUNC = {
    "energy_rmse": extract_energy_rmse,
    "energy_pc": extract_energy_pearson_coeffcients,
    "fpc": extract_fpc,
    "f0_periodicity_rmse": extract_f0_periodicity_rmse,
    "f0rmse": extract_f0rmse,
    "v_uv_f1": extract_f1_v_uv,
    "cer": extract_cer,
    "wer": extract_wer,
    "speaker_similarity": extract_speaker_similarity,
    "fad": extract_fad,
    "mcd": extract_mcd,
    "mstft": extract_mstft,
    "pesq": extract_pesq,
    "si_sdr": extract_si_sdr,
    "si_snr": extract_si_snr,
    "stoi": extract_stoi,
}


def calc_metric(ref_dir, deg_dir, dump_dir, metrics, fs=None):
    result = defaultdict()

    for metric in tqdm(metrics):
        # FAD and speaker similarity operate on whole folders rather than
        # paired files, so they are handled separately.
        if metric in ["fad", "speaker_similarity"]:
            result[metric] = str(METRIC_FUNC[metric](ref_dir, deg_dir))
            continue

        audios_ref = []
        audios_deg = []

        files = glob(ref_dir + "/*.wav")

        # Pair each reference file with the degraded file sharing its uid.
        for file in files:
            audios_ref.append(file)
            uid = file.split("/")[-1].split(".wav")[0]
            file_gt = deg_dir + "/{}.wav".format(uid)
            audios_deg.append(file_gt)

        if metric in ["v_uv_f1"]:
            tp_total = 0
            fp_total = 0
            fn_total = 0

            for i in tqdm(range(len(audios_ref))):
                audio_ref = audios_ref[i]
                audio_deg = audios_deg[i]
                tp, fp, fn = METRIC_FUNC[metric](audio_ref, audio_deg, fs)
                tp_total += tp
                fp_total += fp
                fn_total += fn

            # Micro-averaged F1: TP / (TP + (FP + FN) / 2).
            result[metric] = str(tp_total / (tp_total + (fp_total + fn_total) / 2))
        else:
            scores = []

            for i in tqdm(range(len(audios_ref))):
                audio_ref = audios_ref[i]
                audio_deg = audios_deg[i]

                score = METRIC_FUNC[metric](
                    audio_ref=audio_ref, audio_deg=audio_deg, fs=fs
                )
                # Skip utterances for which the metric is undefined.
                if not np.isnan(score):
                    scores.append(score)

            scores = np.array(scores)
            result["{}_mean".format(metric)] = str(np.mean(scores))
            result["{}_std".format(metric)] = str(np.std(scores))

    data = json.dumps(result, indent=4)

    with open(os.path.join(dump_dir, "result.json"), "w", newline="\n") as f:
        f.write(data)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--ref_dir",
        type=str,
        help="Path to the reference (ground-truth) audio folder.",
    )
    parser.add_argument(
        "--deg_dir",
        type=str,
        help="Path to the degraded (generated) audio folder.",
    )
    parser.add_argument(
        "--dump_dir",
        type=str,
        help="Path to dump the results.",
    )
    parser.add_argument(
        "--metrics",
        nargs="+",
        help="Metrics used to evaluate.",
    )
    args = parser.parse_args()

    calc_metric(args.ref_dir, args.deg_dir, args.dump_dir, args.metrics)
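
For reference, a typical invocation of this script might look like the following; the directory paths are illustrative, and the metric names are keys of `METRIC_FUNC` above.

```bash
python bins/calc_metrics.py \
    --ref_dir data/eval/ref_wavs \
    --deg_dir data/eval/gen_wavs \
    --dump_dir data/eval/results \
    --metrics fpc f0rmse mcd pesq stoi
```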
