Fork of https://github.com/alokprasad/fastspeech_squeezewave to also fix denoising in squeezewave
# FastSpeech-Pytorch
An implementation of FastSpeech based on PyTorch.

## Update
### 2019/10/23
1. Fix bugs in alignment;
2. Fix bugs in the Transformer;
3. Fix bugs in the LengthRegulator;
4. Change the way audio is processed;
5. Use WaveGlow to synthesize.

## Model
<div align="center">
<img src="img/model.png" style="max-width:100%;">
</div>

## My Blog
- [FastSpeech Reading Notes](https://zhuanlan.zhihu.com/p/67325775)
- [Details and Rethinking of this Implementation](https://zhuanlan.zhihu.com/p/67939482)

## Start
### Dependencies
- python 3.6
- CUDA 10.0
- pytorch==1.1.0
- numpy==1.16.2
- scipy==1.2.1
- librosa==0.6.3
- inflect==2.1.0
- matplotlib==2.2.2
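With Python 3.6 and CUDA 10.0 already set up, the pinned versions above can be installed in one step, assuming standard PyPI package names (note that `pytorch` is published on PyPI as `torch`): `pip install torch==1.1.0 numpy==1.16.2 scipy==1.2.1 librosa==0.6.3 inflect==2.1.0 matplotlib==2.2.2`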
### Prepare Dataset
1. Download and extract the [LJSpeech dataset](https://keithito.com/LJ-Speech-Dataset/).
2. Put the LJSpeech dataset in `data`.
3. Unzip `alignments.zip`. \*
4. Put the [Nvidia pretrained WaveGlow model](https://drive.google.com/file/d/1WsibBTsuRg_SF2Z6L6NFRTT-NjEy1oTx/view?usp=sharing) in `waveglow/pretrained_model`;
5. Run `python preprocess.py`.

*\* If you want to calculate the alignments yourself, don't unzip `alignments.zip`; instead, put the [Nvidia pretrained Tacotron2 model](https://drive.google.com/file/d/1c5ZTuT7J08wLUoVZ2KkUs_VdZuJ86ZqA/view?usp=sharing) in `Tacotron2/pretrained_model`.*
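Before running `preprocess.py`, it can help to verify the layout. A minimal sanity check, assuming the extracted dataset folder keeps its default name `LJSpeech-1.1` (adjust if yours differs):

```python
# Quick sanity check for the layout described in the steps above.
# "LJSpeech-1.1" is the dataset's default folder name and is an assumption here.
import os

expected = [
    "data/LJSpeech-1.1/metadata.csv",  # steps 1-2: dataset extracted into `data`
    "alignments",                      # step 3: unzipped alignments.zip
    "waveglow/pretrained_model",       # step 4: pretrained WaveGlow checkpoint
]
for path in expected:
    print(f"{path}: {'ok' if os.path.exists(path) else 'MISSING'}")
```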
## Training
Run `python train.py`.

## Test
Run `python synthesis.py`.
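`synthesis.py` drives the whole pipeline. For orientation, here is a minimal sketch of just the vocoder step, where a mel spectrogram predicted by FastSpeech is converted to audio with the pretrained WaveGlow model. The checkpoint filename and the `waveglow.denoiser` module path are assumptions about this repo's layout; the `infer` and `Denoiser` usage follows NVIDIA's WaveGlow code:

```python
# Sketch only: convert a FastSpeech mel spectrogram to audio with WaveGlow.
# Checkpoint filename and module path are assumptions, not the repo's exact code.
import torch
from waveglow.denoiser import Denoiser  # NVIDIA's denoiser, assumed vendored here

waveglow = torch.load("waveglow/pretrained_model/waveglow_256channels.pt")["model"]
waveglow = waveglow.cuda().eval()
denoiser = Denoiser(waveglow).cuda()  # removes WaveGlow's characteristic bias noise

mel = torch.randn(1, 80, 200).cuda()  # dummy mel; in practice, FastSpeech's output

with torch.no_grad():
    audio = waveglow.infer(mel, sigma=0.666)      # raw vocoder output, shape (1, T)
    audio = denoiser(audio, strength=0.01)[:, 0]  # denoised waveform
```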
## Pretrained Model
- Baidu: [Step: 112000](https://pan.baidu.com/s/1by3-8t3A6uihK8K9IFZ7rg), extraction code: xpk7
- OneDrive: [Step: 112000](https://1drv.ms/u/s!AuC2oR4FhoZ29kriYhuodY4-gPsT?e=zUIC8G)

## Notes
- In the FastSpeech paper, the authors use a pre-trained Transformer-TTS model to provide the alignment targets. I didn't have a well-trained Transformer-TTS model, so I used Tacotron2 instead (a minimal sketch of how durations can be read off an attention matrix follows this list).
- Audio examples are in `results`.
- The outputs and alignment of Tacotron2 are shown below (the synthesized sentence is "I want to go to CMU to do research on deep learning."):
<div align="center">
<img src="img/tacotron2_outputs.jpg" style="max-width:100%;">
</div>
- The outputs of FastSpeech and Tacotron2 (Tacotron2 on the right) are shown below (the synthesized sentence is "Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition."):
<div align="center">
<img src="img/model_test.jpg" style="max-width:100%;">
</div>
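For readers unfamiliar with how an attention matrix becomes duration targets, here is an illustrative sketch (not the repo's exact code): each decoder frame is assigned to the encoder step it attends to most strongly, and a phoneme's duration is the number of frames assigned to it.

```python
# Illustrative only: extract per-phoneme durations from a Tacotron2
# attention matrix, the way FastSpeech's duration targets are derived.
import numpy as np

def durations_from_attention(attn: np.ndarray) -> np.ndarray:
    """attn: (decoder_frames, encoder_steps) attention weights."""
    n_phonemes = attn.shape[1]
    assigned = attn.argmax(axis=1)                      # most-attended phoneme per frame
    return np.bincount(assigned, minlength=n_phonemes)  # frames per phoneme

attn = np.random.dirichlet(np.ones(12), size=80)  # dummy 80-frame x 12-phoneme alignment
print(durations_from_attention(attn))             # durations sum to 80 frames
```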
## Reference
- [The Implementation of Tacotron Based on Tensorflow](https://github.com/keithito/tacotron)
- [The Implementation of Transformer Based on Pytorch](https://github.com/jadore801120/attention-is-all-you-need-pytorch)
- [The Implementation of Transformer-TTS Based on Pytorch](https://github.com/xcmyz/Transformer-TTS)
- [The Implementation of Tacotron2 Based on Pytorch](https://github.com/NVIDIA/tacotron2)