107 Commits (0274619e45252180aec5b6ca9c18c9761d1a2765)
 

Author SHA1 Message Date
  rafaelvalle 0274619e45 train.py: using amp for mixed precision training 5 years ago
  rafaelvalle bb20035586 inference.ipynb: adding fp16 inference 5 years ago
  rafaelvalle 1480f82908 model.py: renaming variables, removing dropout from lstm cell state, removing conversions now handled by amp 5 years ago
  rafaelvalle 087c86755f logger.py: using new pytorch api 5 years ago
  rafaelvalle f37998c59d train.py: shuffling at every epoch 5 years ago
  rafaelvalle bff304f432 README.md: adding explanation on training from pre-trained model 5 years ago
  rafaelvalle 3869781877 train.py: adding routine to warm start and ignore layers, e.g. embedding.weight 5 years ago
  rafaelvalle bb67613493 hparams.py: adding ignore_layers argument to ignore text embedding layers when warm_starting 5 years ago
  rafaelvalle af1f71a975 inference.ipynb: adding code to remove waveglow's bias 5 years ago
  rafaelvalle fc0d34cfce stft.py: moving window_sum to cuda if magnitude is cuda 5 years ago
  Rafael Valle f2c94d94fd Merge pull request #136 from GrzegorzKarchNV/master 5 years ago
  gkarch df4a466af2 Fixing concatenation error for fp16 distributed training 5 years ago
  rafaelvalle 825ffa47d1 inference.ipynb: reverting fp16 inference for now 6 years ago
  rafaelvalle 4d7b04120a inference.ipynb: changing waveglow inference to fp16 6 years ago
  rafaelvalle 6e430556bd train.py: val logger on gpu 0 only 6 years ago
  rafaelvalle 3973b3e495 hparams.py: distributed using tcp 6 years ago
  rafaelvalle 52a30bb7b6 distributed.py: replacing to avoid distributed error 6 years ago
  rafaelvalle 0ad65cc053 train.py: renaming variable to n_gpus 6 years ago
  rafaelvalle 8300844fa7 hparams.py: removing 22khz 6 years ago
  rafaelvalle f06063f746 train.py: renaming function, removing dataparallel 6 years ago
  rafaelvalle 3045ba125b inference.ipynb: cleanup 6 years ago
  rafaelvalle 4c4aca3662 README.md: layout 6 years ago
  rafaelvalle 05dd8f91d2 README.md: adding submodule init to README 6 years ago
  rafaelvalle 5d66c3deab adding waveglow submodule 6 years ago
  Rafael Valle f02704f338 Merge pull request #96 from NVIDIA/clean_slate 6 years ago
  rafaelvalle ba8cf36198 requirements.txt: removing pytorch 0.4 from requirements. upgrading to 1.0 6 years ago
  rafaelvalle b5e0a93946 inference.ipynb: updating inference file with relative paths 6 years ago
  rafaelvalle 58b0ec61bd README.md: updating requirements and inference demo 6 years ago
  rafaelvalle 1ad939df1a inference.ipynb: setting relative model paths 6 years ago
  rafaelvalle 32b9a135d0 utils.py: updating 6 years ago
  rafaelvalle ce29e13959 train.py: updating 6 years ago
  rafaelvalle 1ea6ed5861 text/symbols.py: updating symbols 6 years ago
  rafaelvalle cdfde985e5 text/__init__.py: remove stop token 6 years ago
  rafaelvalle e314bb4cd0 stft.py: fix filter winlength error 6 years ago
  rafaelvalle 4af4ccb135 model.py: rewrite 6 years ago
  rafaelvalle 1ec0e5e8cd layers.py: rewrite 6 years ago
  rafaelvalle 249afd8043 inference.ipynb: import taco2model to be public 6 years ago
  rafaelvalle 1b243d5d5a hparams.py: rewrite 6 years ago
  rafaelvalle d0aa9e7d32 distributed.py: rewrite 6 years ago
  rafaelvalle 1683a57ae5 data_utils.py: rewrite 6 years ago
  Rafael Valle fc0cf6a89a Merge pull request #53 from cobr123/patch-1 6 years ago
  cobr123 8de38495be add pillow 6 years ago
  rafaelvalle 7eb045206c README.md: updating readme to explicitly mention that mel representation of WaveNet and Tacotron2 must be the same 6 years ago
  rafaelvalle c67005f1be Dockerfile: adding jupyter to dockerfile 6 years ago
  rafaelvalle cb3794796f Dockerfile: removing return from Dockerfile 6 years ago
  Rafael Valle a8de973923 Merge pull request #37 from yoks/master 6 years ago
  yoks a0ae2da05f `used_saved_learning_rate` fix 6 years ago
  rafaelvalle 34066ac4fc requirements.txt: setting torch to 0.4.0 6 years ago
  rafaelvalle 12ab5ba89c model.py: setting weight initialization to xavier uniform 6 years ago
  rafaelvalle d10da5f41e hparams.py: commenting n_frames_per_step to indicate that only 1 frame per step is currently supported 6 years ago
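
A few of the commits above name techniques that are easy to illustrate in isolation; the sketches below are hedged illustrations, not the repository's code. For 0274619e45 ("train.py: using amp for mixed precision training"), a minimal mixed-precision training step using torch.cuda.amp looks roughly like this (the commit may instead rely on NVIDIA Apex amp; the model, optimizer, criterion, and data here are dummy placeholders):

```python
# Sketch of a mixed-precision training step with torch.cuda.amp.
# The real train.py may differ; model/optimizer/data below are dummies.
import torch
from torch import nn

model = nn.Linear(80, 80).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()
scaler = torch.cuda.amp.GradScaler()

for step in range(10):
    x = torch.randn(16, 80, device="cuda")
    y = torch.randn(16, 80, device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():    # forward pass runs in mixed fp16/fp32
        loss = criterion(model(x), y)
    scaler.scale(loss).backward()      # scale loss so fp16 grads don't underflow
    scaler.step(optimizer)             # unscale, then apply the update
    scaler.update()
```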
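For f37998c59d ("train.py: shuffling at every epoch"), the usual pattern with a DistributedSampler is to call set_epoch before each pass so the shuffle order actually changes between epochs; this sketch uses a dummy TensorDataset and a single-replica sampler rather than the repository's data pipeline:

```python
# Sketch: DistributedSampler only reshuffles across epochs if set_epoch()
# is called before iterating; the dataset and sizes here are dummies.
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

dataset = TensorDataset(torch.randn(256, 80))
sampler = DistributedSampler(dataset, num_replicas=1, rank=0, shuffle=True)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for epoch in range(3):
    sampler.set_epoch(epoch)   # re-seed the shuffle for this epoch
    for (batch,) in loader:
        pass                   # forward/backward step would go here
```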
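For 3869781877 and bb67613493 (warm starting while ignoring layers such as embedding.weight), the idea is to load a pretrained state dict but let the listed parameters keep their fresh initialization. The function name and the "state_dict" checkpoint key below are assumptions, not necessarily what train.py does:

```python
# Sketch of warm-starting from a checkpoint while skipping some layers,
# e.g. ignore_layers=["embedding.weight"]. Checkpoint layout is assumed.
import torch

def warm_start_model(checkpoint_path, model, ignore_layers):
    checkpoint = torch.load(checkpoint_path, map_location="cpu")
    pretrained = checkpoint["state_dict"]
    if ignore_layers:
        pretrained = {k: v for k, v in pretrained.items()
                      if k not in ignore_layers}
        merged = model.state_dict()
        merged.update(pretrained)     # ignored keys keep their new init
        pretrained = merged
    model.load_state_dict(pretrained)
    return model
```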
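For 12ab5ba89c ("model.py: setting weight initialization to xavier uniform"), PyTorch exposes this directly through torch.nn.init; the layer size and gain below are illustrative, not the model's actual values:

```python
# Sketch: Xavier (Glorot) uniform init of a linear layer's weight.
import torch
from torch import nn

layer = nn.Linear(512, 512)
nn.init.xavier_uniform_(layer.weight, gain=nn.init.calculate_gain("linear"))
nn.init.zeros_(layer.bias)
```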