185cd24  2020-06-11 14:44:10 -0700  (HEAD -> master) waveglow: updating waveglow submodule
0102db2  2020-05-16 22:37:52 -0700  README.md: updating waveglow published model
6f435f7  2020-05-13 22:02:47 -0700  updating waveglow submodule
dd49ffa  2020-03-18 09:42:35 -0700  Merge pull request #143 from taras-sereda/master
2f2ed63  2020-03-18 09:41:32 -0700  Merge pull request #279 from sih4sing5hong5/patch-1
604e74d  2020-03-17 10:15:45 -0700  Merge pull request #313 from NVIDIA/dependabot/pip/tensorflow-1.15.2
dbd477b  2020-03-09 21:31:38 +0000  build(deps): bump tensorflow from 1.12.0 to 1.15.2
91ae5b5  2020-03-09 14:31:11 -0700  Update requirements.txt
2583315  2020-02-10 13:43:47 -0800  Merge pull request #303 from NTT123/fix-batch-size-1
ca5a22a  2020-02-10 13:43:11 -0800  Merge pull request #304 from NTT123/remove-tensorboardX
438b939  2020-02-10 16:15:55 +0800  remove tensorboardX; use torch.utils.tensorboard
14dbc37  2020-02-10 16:07:00 +0800  fix error when batch size = 1
6d0635e  2020-01-09 01:37:57 -0800  train.py: printing correct variable
a513db5  2019-12-11 16:53:20 -0800  utils.py: compatibility with new pytorch
37a033d  2019-12-11 16:53:02 -0800  logger.py: compatibility with new tensorboardX
53a97e8  2019-11-01 14:21:00 +0800  [bug-fix] pillow dependency in Dockerfile
70d37f9  2019-10-25 21:45:37 -0700  train.py: reporting the right variable
131c146  2019-04-22 16:49:14 -0700  Merge pull request #188 from jybaek/fixed-waveglow-link
d5321ff  2019-04-19 15:21:09 +0900  Fixed link to download waveglow from inference.py
c76ac3b  2019-04-03 14:59:20 -0700  README.md: clarifying terminology
e3d2d0a  2019-04-03 14:56:06 -0700  README.md: using proper nomenclature
a992aea  2019-04-03 14:54:45 -0700  README.md: updating terminology
eb2a171  2019-04-03 13:51:59 -0800  Merge branch 'master' of https://github.com/NVIDIA/tacotron2
821bfeb  2019-04-03 13:51:36 -0800  README.md: adding instructions to install apex
d6670c8  2019-04-03 13:51:22 -0800  Dockerfile: updating to use latest pytorch and apex
0274619  2019-04-03 13:42:00 -0800  train.py: using amp for mixed precision training
bb20035  2019-04-03 13:41:11 -0800  inference.ipynb: adding fp16 inference
1480f82  2019-04-03 13:36:35 -0800  model.py: renaming variables, removing dropout from lstm cell state, removing conversions now handled by amp
087c867  2019-04-03 13:35:04 -0800  logger.py: using new pytorch api
ece7d3f  2019-03-19 13:47:01 -0700  train.py: changing dataloder params given sampler
f37998c  2019-03-15 17:49:27 -0700  train.py: shuffling at every epoch
bff304f  2019-03-15 17:38:40 -0700  README.md: adding explanation on training from pre-trained model
3869781  2019-03-15 17:34:27 -0700  train.py: adding routine to warm start and ignore layers, e.g. embedding.weight
bb67613  2019-03-15 17:28:50 -0700  hparams.py: adding ignore_layers argument to ignore text embedding layers when warm_starting
af1f71a  2019-03-15 16:54:54 -0700  inference.ipynb: adding code to remove waveglows bias
fc0d34c  2019-03-15 14:36:56 -0700  stft.py: moving window_sum to cuda if magnitude is cuda
5f03d07  2019-02-14 10:10:58 +0200  seed from hparams for TextMelLoader
f2c94d9  2019-02-01 12:10:42 -0800  Merge pull request #136 from GrzegorzKarchNV/master
df4a466  2019-02-01 09:55:59 +0100  Fixing concatenation error for fp16 ditributed training
825ffa4  2018-12-08 21:26:01 -0800  inference.ipynb: reverting fp16 inference for now
4d7b041  2018-12-05 22:14:35 -0800  inference.ipynb: changing waverglow inference fo fp16
6e43055  2018-11-27 22:03:11 -0800  train.py: val logger on gpu 0 only
3973b3e  2018-11-27 22:02:43 -0800  hparams.py: distributed using tcp
52a30bb  2018-11-27 21:01:26 -0800  distributed.py: replacing to avoid distributed error
0ad65cc  2018-11-27 21:00:05 -0800  train.py: renaming variable to n_gpus
8300844  2018-11-27 20:56:52 -0800  hparams.py: removing 22khz
f06063f  2018-11-27 18:04:12 -0800  train.py: renaming function, removing dataparallel
3045ba1  2018-11-27 12:04:36 -0800  inference.ipynb: cleanup
4c4aca3  2018-11-27 11:59:05 -0800  README.md: layout
05dd8f9  2018-11-27 11:55:40 -0800  README.md: adding submodule init to README
5d66c3d  2018-11-27 11:53:20 -0800  adding waveglow submodule
f02704f  2018-11-27 08:06:00 -0800  Merge pull request #96 from NVIDIA/clean_slate
ba8cf36  2018-11-27 08:04:21 -0800  requirements.txt: removing pytorch 0.4 from requirements. upgrading to 1.0
b5e0a93  2018-11-27 08:04:04 -0800  inference.ipynb: updating inference file with relative paths
58b0ec6  2018-11-27 08:03:34 -0800  README.md: updating requirements and inference demo
1ad939d  2018-11-27 07:45:09 -0800  inference.ipynb: setting relative model paths
32b9a13  2018-11-25 22:34:38 -0800  utils.py: updating
ce29e13  2018-11-25 22:34:34 -0800  train.py: updating
1ea6ed5  2018-11-25 22:34:26 -0800  text/symbols.py: updating symbols
cdfde98  2018-11-25 22:34:11 -0800  text/__init__.py: remove stop token
e314bb4  2018-11-25 22:33:52 -0800  stft.py: fix filter winlength error
4af4ccb  2018-11-25 22:33:38 -0800  model.py: rewrite
1ec0e5e  2018-11-25 22:33:32 -0800  layers.py: rewrite
249afd8  2018-11-25 22:33:16 -0800  inference.ipynb: import taco2model to be public
1b243d5  2018-11-25 22:33:05 -0800  hparams.py: rewrite
d0aa9e7  2018-11-25 22:32:54 -0800  distributed.py: rewrite
1683a57  2018-11-25 22:32:47 -0800  data_utils.py: rewrite
fc0cf6a  2018-07-02 14:32:21 -0700  Merge pull request #53 from cobr123/patch-1
8de3849  2018-07-02 19:35:51 +0300  add pillow
7eb0452  2018-06-14 11:25:42 -0700  README.md: updating readme to explicitly mention that mel representation of WaveNet and Tacotron2 must be the same
c67005f  2018-06-14 10:30:01 -0700  Dockerfile: adding jupyter to dockerfile
cb37947  2018-06-12 21:38:03 -0700  Dockerfile: removing return from Dockerfile
a8de973  2018-06-12 09:00:14 -0700  Merge pull request #37 from yoks/master
a0ae2da  2018-06-12 16:53:12 +0300  `used_saved_learning_rate` fix
34066ac  2018-06-11 08:16:31 -0700  requirements.txt: setting torch to 0.4.0
12ab5ba  2018-06-07 20:28:52 -0700  model.py: setting weight initialization to xavier uniform
d10da5f  2018-06-07 13:02:23 -0700  hparams.py: commenting n_frames_per_step to indicate that currently only 1 frame per step is supported now
5f0ea06  2018-06-05 08:12:49 -0700  hparams.py: adding use saved learning rate param
22bcff1  2018-06-05 08:12:35 -0700  hparams.py: adding use saved learning rate param
2e93447  2018-06-04 17:12:32 -0700  README.md: being explicit about action
8ae231b  2018-06-04 16:56:22 -0700  README.md: more explicit about demo audio
4d733d1  2018-06-04 16:55:17 -0700  README.md: including demo.wav in readme
b4e5240  2018-06-04 16:46:36 -0700  adding demo.wav file
064629c  2018-05-20 12:25:54 -0700  Merge pull request #23 from NVIDIA/attention_full_mel
d5b6472  2018-05-20 12:22:06 -0700  model.py: moving for better readibility
977cb37  2018-05-18 06:59:09 -0700  model.py: attending to full mel instead of prenet and dropout mel
da30fd8  2018-05-15 09:55:19 -0700  Merge pull request #20 from NVIDIA/fp16_path
27b1767  2018-05-15 09:53:33 -0700  train.py: fixing typo
817cd40  2018-05-15 09:51:41 -0700  Merge branch 'master' of https://github.com/NVIDIA/tacotron2 into load_mel_from_disk
1071023  2018-05-15 09:50:56 -0700  train.py: patching score_mask_value formerly inf, not concrete value, for compatibility with pytorch
cd85158  2018-05-15 09:50:08 -0700  loss_scaler.py: patching loss scaler for compatibility with current pytorch
bd42cb6  2018-05-15 08:54:24 -0700  Merge pull request #19 from NVIDIA/load_mel_from_disk
2da7a2e  2018-05-15 08:50:21 -0700  README.md: describing how to load mel from disk
62d2c8b  2018-05-15 08:42:06 -0700  data_utils.py: adding support for loading mel from disk
2d41ea0  2018-05-15 08:41:03 -0700  hparams.py: adding load_mel_from_disk params
2056864  2018-05-06 08:58:07 -0700  Merge branch 'master' of https://github.com/NVIDIA/tacotron2
dcd925f  2018-05-06 08:58:01 -0700  model.py: mixed squeeze target. fixing
4ac6ce9  2018-05-05 17:30:08 -0700  ipynb typo
c67ca65  2018-05-05 17:29:09 -0700  force single gpu in inference.ipynb
78d5150  2018-05-05 17:23:11 -0700  inference (distributed) dataparallel patch