# Tacotron 2 (without wavenet)

PyTorch implementation of [Natural TTS Synthesis By Conditioning Wavenet On Mel Spectrogram Predictions](https://arxiv.org/pdf/1712.05884.pdf).

This implementation includes **distributed** and **fp16** support and uses the [LJSpeech dataset](https://keithito.com/LJ-Speech-Dataset/). Distributed and FP16 support relies on NVIDIA's [Apex] and [AMP].

Visit our [website] for audio samples generated with our published [Tacotron 2] and [WaveGlow] models.

![Alignment, Predicted Mel Spectrogram, Target Mel Spectrogram](tensorboard.png)

## Pre-requisites
1. NVIDIA GPU + CUDA + cuDNN

## Setup
1. Download and extract the [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/)
2. Clone this repo: `git clone https://github.com/NVIDIA/tacotron2.git`
3. `cd` into this repo: `cd tacotron2`
4. Initialize the submodule: `git submodule init; git submodule update`
5. Update the .wav paths in the filelists: `sed -i -- 's,DUMMY,ljs_dataset_folder/wavs,g' filelists/*.txt` (a sanity check follows this list)
   - Alternatively, set `load_mel_from_disk=True` in `hparams.py` and update the mel-spectrogram paths
6. Install [PyTorch 1.0]
7. Install [Apex]
8. Install Python requirements or build the Docker image
   - Install Python requirements: `pip install -r requirements.txt`
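
After step 5, every path in the filelists should resolve to a real file. A minimal sanity check, assuming the stock training filelist name and its `wav_path|transcript` line format:

```python
# Verify the sed rewrite from step 5: every training .wav path should exist.
# The filelist name and "wav_path|transcript" format are assumptions based on
# the files shipped in filelists/.
import os

with open('filelists/ljs_audio_text_train_filelist.txt') as f:
    for line in f:
        wav_path = line.split('|')[0]
        assert os.path.isfile(wav_path), f'missing: {wav_path}'
print('all training .wav paths resolve')
```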

## Training
1. `python train.py --output_directory=outdir --log_directory=logdir`
2. (OPTIONAL) `tensorboard --logdir=outdir/logdir`
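
Hyperparameters defined in `hparams.py` can be overridden on the command line as comma-separated `name=value` pairs via the same `--hparams` flag used below for distributed training; for example (the `batch_size` value is only an illustration): `python train.py --output_directory=outdir --log_directory=logdir --hparams=batch_size=32`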

## Training using a pre-trained model
Training using a pre-trained model can lead to faster convergence. By default, the dataset-dependent text embedding layers are [ignored] when loading the checkpoint, so they are trained from scratch on your data.
1. Download our published [Tacotron 2] model
2. `python train.py --output_directory=outdir --log_directory=logdir -c tacotron2_statedict.pt --warm_start`
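
Warm starting amounts to loading the checkpoint's weights while skipping the layers listed in the `ignore_layers` hyperparameter. A minimal sketch of that pattern, assuming a standard PyTorch state dict; see `train.py` for this repo's actual implementation:

```python
# Sketch of warm starting: load a checkpoint but drop the layers named in
# ignore_layers (e.g. the text embedding) so they are trained from scratch.
# This mirrors what --warm_start does; details in train.py may differ.
import torch

def warm_start(model, checkpoint_path, ignore_layers):
    state_dict = torch.load(checkpoint_path, map_location='cpu')['state_dict']
    # Keep every weight except the dataset-dependent ones
    filtered = {k: v for k, v in state_dict.items() if k not in ignore_layers}
    model_dict = model.state_dict()
    model_dict.update(filtered)
    model.load_state_dict(model_dict)
    return model
```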

## Multi-GPU (distributed) and FP16 Training
1. `python -m multiproc train.py --output_directory=outdir --log_directory=logdir --hparams=distributed_run=True,fp16_run=True`
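
Setting `fp16_run=True` turns on [AMP]'s mixed-precision training. The core AMP pattern looks roughly like the self-contained toy below; the repo wires this up internally, so this is only an illustration of the API, not the repo's exact code:

```python
# Minimal Apex AMP mixed-precision pattern on a toy model (illustrative only;
# this repo enables AMP internally when fp16_run=True).
import torch
from apex import amp

model = torch.nn.Linear(80, 80).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# opt_level='O2' keeps fp32 master weights while computing in fp16
model, optimizer = amp.initialize(model, optimizer, opt_level='O2')

x = torch.randn(16, 80, device='cuda')
loss = model(x).pow(2).mean()
# Scale the loss so fp16 gradients do not underflow
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()
```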

## Inference demo
1. Download our published [Tacotron 2] model
2. Download our published [WaveGlow] model
3. `jupyter notebook --ip=127.0.0.1 --port=31337`
4. Load inference.ipynb

N.B. When performing mel-spectrogram-to-audio synthesis, make sure Tacotron 2 and the mel decoder (e.g. WaveGlow) were trained on the same mel-spectrogram representation.
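
The notebook's text-to-audio path condenses to the steps below. This is a sketch, not the notebook itself: the checkpoint filenames are assumptions, and the loading details of the published models may differ, so treat inference.ipynb as authoritative:

```python
# Condensed text-to-speech pipeline in the spirit of inference.ipynb.
# Checkpoint filenames are assumptions; see the notebook for exact loading.
import numpy as np
import torch
from hparams import create_hparams
from model import Tacotron2
from text import text_to_sequence

hparams = create_hparams()
model = Tacotron2(hparams).cuda().eval()
model.load_state_dict(torch.load('tacotron2_statedict.pt')['state_dict'])

# Text -> mel spectrogram
sequence = np.array(text_to_sequence('Hello, world.', ['english_cleaners']))[None, :]
sequence = torch.from_numpy(sequence).long().cuda()
with torch.no_grad():
    _, mel_postnet, _, alignments = model.inference(sequence)

    # Mel spectrogram -> audio with the published WaveGlow model
    waveglow = torch.load('waveglow_model.pt')['model'].cuda().eval()
    audio = waveglow.infer(mel_postnet, sigma=0.666)
```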

## Related repos
[WaveGlow](https://github.com/NVIDIA/WaveGlow): a faster-than-real-time flow-based generative network for speech synthesis

[nv-wavenet](https://github.com/NVIDIA/nv-wavenet/): a faster-than-real-time WaveNet implementation

## Acknowledgements
This implementation uses code from the following repos, as described in our code: [Keith Ito](https://github.com/keithito/tacotron/) and [Prem Seetharaman](https://github.com/pseeth/pytorch-stft).

We are inspired by [Ryuichi Yamamoto's](https://github.com/r9y9/tacotron_pytorch) Tacotron PyTorch implementation.

We are thankful to the Tacotron 2 paper authors, especially Jonathan Shen, Yuxuan Wang, and Zongheng Yang.

[WaveGlow]: https://drive.google.com/file/d/1cjKPHbtAMh_4HTHmuIGNkbOkPBD9qwhj/view?usp=sharing
[Tacotron 2]: https://drive.google.com/file/d/1c5ZTuT7J08wLUoVZ2KkUs_VdZuJ86ZqA/view?usp=sharing
[PyTorch 1.0]: https://github.com/pytorch/pytorch#installation
[website]: https://nv-adlr.github.io/WaveGlow
[ignored]: https://github.com/NVIDIA/tacotron2/blob/master/hparams.py#L22
[Apex]: https://github.com/nvidia/apex
[AMP]: https://github.com/NVIDIA/apex/tree/master/apex/amp