NVIDIA/tacotron2
Jupyter Notebook · 5.3k stars
Tacotron 2 - PyTorch implementation with faster-than-realtime inference
| Metric | Value |
|---|---|
| Merged PRs | 16 |
| Avg Merge Time | 1 month |
| Fastest PR | 0m |
| Slowest PR | 1 year |
| Global Speed Rank | #1195 |
PR Size Analysis
Lines changed (additions + deletions) vs. review outcomes. Re-sync to populate data for older PRs.
[Chart: PRs by size, avg review time (hrs), clean approval rate (%)]
Recently Merged PRs

| # | Title | Author | Time to Merge | Reviews | Blocks |
|---|---|---|---|---|---|
| #143 | seed from hparams for TextMelLoader | @taras-sereda | 1 year | 0 | ✓ |
| #279 | [bug-fix] pillow dependency in Dockerfile | @sih4sing5hong5 | 4 months | 0 | ✓ |
| #313 | build(deps): bump tensorflow from 1.12.0 to 1.15.2 | @dependabot | 7 days | 0 | ✓ |
| #303 | fix error when batch size = 1 | @NTT123 | 13.6h | 0 | ✓ |
| #304 | remove tensorboardX; use torch.utils.tensorboard | @NTT123 | 13.4h | 0 | ✓ |
| #188 | Fixed link to download waveglow from inference.py | @jybaek | 3 days | 0 | ✓ |
| #136 | Fixing concatenation error for fp16 distributed training | @ | 11.2h | 0 | ✓ |
| #96 | Clean slate | @rafaelvalle | 0m | 0 | ✓ |
| #53 | add pillow | @cobr123 | 4.9h | 0 | ✓ |
| #37 | `used_saved_learning_rate` fix | @yoks | 2.0h | 0 | ✓ |
| #23 | model.py: attending to full mel instead of prenet and dropout mel | @rafaelvalle | 5m | 0 | ✓ |
| #20 | Fp16 patch, not path! | @rafaelvalle | 0m | 0 | ✓ |
| #19 | Load mel from disk | @rafaelvalle | 0m | 0 | ✓ |
| #6 | integer maxlen for padding | @raulpuric | 2.2h | 0 | ✓ |
| #4 | mask utils update for 0.4 cuda | @raulpuric | 47m | 0 | ✓ |
| #2 | train.py single gpu and 0.4 update | @raulpuric | 14.7h | 0 | ✓ |