👩🏽🍳 TTS training recipes
💬 Where to ask questions
Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly, so that more people can benefit from it.
| Type | Platforms |
| ------------------------------------------ | ---------------------------------- |
| 🚨 Bug Reports | GitHub Issue Tracker |
| 🎁 Feature Requests & Ideas | GitHub Issue Tracker |
| 👩‍💻 Usage Questions & General Discussion | Github Discussions or Gitter Room |
🔗 Links and Resources
| Type | Links |
| ----------------------------- | -------------------------------------- |
| 📌 Road Map | Main Development Plans |
| 👩🏾‍🏫 Tutorials and Examples | TTS/Wiki |
| 🚀 Released Models | TTS Releases and Experimental Models |
🥇 TTS Performance
Underlined "TTS*" and "Judy*" in the performance comparison chart are TTS models.
Features
- High performance Deep Learning models for Text2Speech tasks.
- Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
- Speaker Encoder to compute speaker embeddings efficiently.
- Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN)
- Fast and efficient model training.
- Detailed training logs on console and Tensorboard.
- Support for multi-speaker TTS.
- Efficient Multi-GPUs training.
- Ability to convert PyTorch models to Tensorflow 2.0 and TFLite for inference.
- Released models in PyTorch, Tensorflow and TFLite.
- Tools to curate Text2Speech datasets under dataset_analysis.
- Demo server for model testing.
- Notebooks for extensive model benchmarking.
- Modular (but not too much) code base enabling easy testing for new ideas.
Attention Methods
- Guided Attention: paper
- Forward Backward Decoding: paper
- Graves Attention: paper
- Double Decoder Consistency: blog
- Dynamic Convolutional Attention: paper
Vocoders
- MelGAN: paper
- MultiBandMelGAN: paper
- ParallelWaveGAN: paper
- GAN-TTS discriminators: paper
- WaveRNN: origin
- WaveGrad: paper
- HiFiGAN: paper
You can also help us implement more models; some related TTS work can be found in the linked resources.
Install TTS
If you are only interested in synthesizing speech with the released TTS models, installing from PyPI is the easiest option.
pip install TTS
By default this only installs the requirements for PyTorch. To install the TensorFlow dependencies as well, use the tf extra:
pip install TTS[tf]
If you plan to code or train models, clone TTS and install it locally.

git clone https://github.com/coqui-ai/TTS
cd TTS
pip install -e .[all,dev,notebooks,tf]  # Select the relevant extras
TTS uses espeak-ng to convert graphemes to phonemes; you might need to install it separately.
sudo apt-get install espeak-ng
If you are on Ubuntu (Debian), you can also run the following commands for installation.

$ make system-deps  # intended to be used on Ubuntu (Debian). Let us know if you have a different OS.
$ make install
If you are on Windows, 👑@GuyPaddock wrote installation instructions here.
Directory Structure

|- notebooks/       (Jupyter Notebooks for model evaluation, parameter selection and data analysis.)
|- utils/           (common utilities.)
|- TTS
    |- bin/             (folder for all the executables.)
      |- train*.py              (train your target model.)
      |- distribute.py          (train your TTS model using Multiple GPUs.)
      |- compute_statistics.py  (compute dataset statistics for normalization.)
      |- convert*.py            (convert target torch model to TF.)
    |- tts/             (text to speech models)
      |- layers/        (model layer definitions)
      |- models/        (model definitions)
      |- tf/            (Tensorflow 2 utilities and model implementations)
      |- utils/         (model specific utilities.)
    |- speaker_encoder/ (Speaker Encoder models.)
      |- (same)
    |- vocoder/         (Vocoder models.)
      |- (same)
Sample Model Output
Below you can see the Tacotron model state after 16K iterations with batch size 32, trained on the LJSpeech dataset.
"Recent research at Harvard has shown meditating for as little as 8 weeks can actually increase the grey matter in the parts of the brain responsible for emotional regulation and learning."
Audio examples: soundcloud
Datasets and Data-Loading
TTS provides a generic data loader that is easy to use for your custom dataset. You just need to write a simple preprocessor function to format the dataset; check datasets/preprocess.py to see some examples. After that, you need to set the dataset fields in config.json accordingly. TTS has already been applied successfully to a number of public datasets; LJSpeech, used in the examples below, is one of them.
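For a quick orientation, here is a minimal sketch of what such a preprocessor function can look like, modeled on the LJSpeech-style formatters. The function name, the metadata layout (a pipe-separated `<file_id>|<transcript>` file with audio under wavs/), and the returned item format ([text, wav_path, speaker_name]) are assumptions; verify them against the formatters in datasets/preprocess.py for your version.

```python
import os


def my_dataset(root_path, meta_file):
    """Hypothetical formatter: read a pipe-separated metadata file and
    return one [text, wav_path, speaker_name] item per utterance."""
    items = []
    speaker_name = "my_speaker"  # single-speaker dataset
    with open(os.path.join(root_path, meta_file), "r", encoding="utf-8") as f:
        for line in f:
            cols = line.strip().split("|")
            wav_file = os.path.join(root_path, "wavs", cols[0] + ".wav")
            text = cols[1]
            items.append([text, wav_file, speaker_name])
    return items
```

Place the function next to the existing formatters so the data loader can find it by the dataset name you set in config.json.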
Example: Synthesizing Speech on Terminal Using the Released Models.
After the installation, the tts command line tool lets you synthesize speech using either the released models or your own trained models.
Run a TTS model from the released models list with its default vocoder. (Simply copy and paste the full model names from the list as arguments for the command below.)
tts --text "Text for TTS" \
    --model_name "<type>/<language>/<dataset>/<model_name>" \
    --out_path folder/to/save/output.wav
Run a TTS and a vocoder model from the released models list. Note that not every vocoder is compatible with every TTS model.
tts --text "Text for TTS" \
    --model_name "<type>/<language>/<dataset>/<model_name>" \
    --vocoder_name "<type>/<language>/<dataset>/<model_name>" \
    --out_path folder/to/save/output.wav
Run your own TTS model (Using Griffin-Lim Vocoder)
tts --text "Text for TTS" \
    --model_path path/to/model.pth.tar \
    --config_path path/to/config.json \
    --out_path folder/to/save/output.wav
Run your own TTS and Vocoder models
tts --text "Text for TTS" \
    --config_path path/to/config.json \
    --model_path path/to/model.pth.tar \
    --out_path folder/to/save/output.wav \
    --vocoder_path path/to/vocoder.pth.tar \
    --vocoder_config_path path/to/vocoder_config.json
Run a multi-speaker TTS model from the released models list.
tts --model_name "<type>/<language>/<dataset>/<model_name>" --list_speaker_idxs  # list the possible speaker IDs.
tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "<type>/<language>/<dataset>/<model_name>" --speaker_idx "<speaker_id>"
Note: You can use ./TTS/bin/synthesize.py if you prefer running tts from the TTS project folder.
Example: Using the Demo Server for Synthesizing Speech
You can boot up a demo TTS server to run inference with the released models.
The demo server provides pretty much the same interface as the CLI command.
tts-server -h             # see the help
tts-server --list_models  # list the available models.
Run a TTS model from the released models list with its default vocoder. If the model you choose is a multi-speaker TTS model, you can select different speakers on the Web interface and synthesize speech.
tts-server --model_name "<type>/<language>/<dataset>/<model_name>"
Run a TTS and a vocoder model from the released model list. Note that not every vocoder is compatible with every TTS model.
tts-server --model_name "<type>/<language>/<dataset>/<model_name>" \
           --vocoder_name "<type>/<language>/<dataset>/<model_name>"
Example: Training and Fine-tuning LJ-Speech Dataset
Here you can find a Colab notebook for a hands-on example of training LJSpeech. Or you can manually follow the guideline below.
To start with, split metadata.csv into train and validation subsets, metadata_train.csv and metadata_val.csv respectively. Note that for text-to-speech, validation performance might be misleading, since the loss value does not directly measure voice quality to the human ear and it also does not measure the attention module performance. Therefore, running the model with new sentences and listening to the results is the best way to go (see the sketch after the split commands below).
shuf metadata.csv > metadata_shuf.csv
head -n 12000 metadata_shuf.csv > metadata_train.csv
tail -n 1100 metadata_shuf.csv > metadata_val.csv
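Once training produces checkpoints, a convenient way to do the listening test mentioned above is to drive the tts CLI over a fixed set of held-out sentences. A rough sketch; the sentences, model path and config path are placeholders:

```python
import os
import subprocess

# placeholder evaluation sentences; use text the model has never seen
sentences = [
    "The quick brown fox jumps over the lazy dog.",
    "Recent research at Harvard has shown meditating for as little as "
    "8 weeks can actually increase the grey matter in the parts of the "
    "brain responsible for emotional regulation and learning.",
]

os.makedirs("eval_outputs", exist_ok=True)
for i, text in enumerate(sentences):
    subprocess.run(
        [
            "tts",
            "--text", text,
            "--model_path", "path/to/model.pth.tar",  # placeholder checkpoint
            "--config_path", "path/to/config.json",   # placeholder config
            "--out_path", f"eval_outputs/sample_{i}.wav",
        ],
        check=True,
    )
```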
To train a new model, you need to define your own config.json with the model details, training configuration and more (check the examples). Then call the corresponding training script.
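Rather than writing a config from scratch, it is usually easier to copy one of the bundled example configs and point it at your split. The sketch below assumes the example config keeps its dataset definitions in a "datasets" list with "name", "path", "meta_file_train" and "meta_file_val" fields; the exact field names can differ between versions, so check the example you start from.

```python
import json

# load a bundled example config (the same file used in the commands below)
with open("TTS/tts/configs/config.json", "r", encoding="utf-8") as f:
    config = json.load(f)

# assumed field names; verify them against the example config
config["run_name"] = "ljspeech-tacotron2"
config["datasets"] = [
    {
        "name": "ljspeech",
        "path": "/data/LJSpeech-1.1/",          # dataset root
        "meta_file_train": "metadata_train.csv",
        "meta_file_val": "metadata_val.csv",
    }
]

with open("my_config.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=4)
```

Pass the resulting file to the training script with --config_path my_config.json.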
For instance, in order to train a tacotron or tacotron2 model on the LJSpeech dataset, follow these steps.
python TTS/bin/train_tacotron.py --config_path TTS/tts/configs/config.json
To fine-tune a model, use --restore_path.
python TTS/bin/train_tacotron.py --config_path TTS/tts/configs/config.json --restore_path /path/to/your/model.pth.tar
To continue an old training run, use --continue_path.
python TTS/bin/train_tacotron.py --continue_path /path/to/your/run_folder/
For multi-GPU training, call distribute.py. It runs any provided training script in a multi-GPU setting.
CUDA_VISIBLE_DEVICES="0,1,4" python TTS/bin/distribute.py --script train_tacotron.py --config_path TTS/tts/configs/config.json
Each run creates a new output folder containing the used config.json, model checkpoints and Tensorboard logs.
In case of an error or interrupted execution, if there is no checkpoint yet under the output folder, the whole folder is removed.
You can also use Tensorboard by pointing its --logdir argument to the experiment folder.
Acknowledgements
- https://github.com/keithito/tacotron (Dataset pre-processing)
- https://github.com/r9y9/tacotron_pytorch (Initial Tacotron architecture)
- https://github.com/kan-bayashi/ParallelWaveGAN (GAN based vocoder library)
- https://github.com/jaywalnut310/glow-tts (Original Glow-TTS implementation)
- https://github.com/fatchord/WaveRNN/ (Original WaveRNN implementation)
- https://arxiv.org/abs/2010.05646 (HiFiGAN paper)