Abstract

Previous work has found that generating coherent raw audio waveforms with GANs is challenging. In this paper, we show that it is possible to train GANs reliably to generate high-quality coherent waveforms by introducing a set of architectural changes and simple training techniques. Two subjective evaluation metrics (Mean Opinion Score and MUSHRA) suggest that our model is state-of-the-art for mel-spectrogram inversion. We show qualitative results of our model on speech synthesis, music domain translation, and unconditional music synthesis to establish the generality of the proposed techniques. We also evaluate different components of the model, proposing a set of guidelines for designing general-purpose discriminators and generators for conditional sequence synthesis tasks. Our model is non-autoregressive and fully convolutional, has significantly fewer parameters than competing models, and generalizes to unseen speakers for mel-spectrogram inversion. Our PyTorch implementation runs more than 100x faster than real time on a GTX 1080Ti GPU and more than 2x faster than real time on CPU, without any hardware-specific optimization tricks.
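
For concreteness, here is a minimal sketch, assuming PyTorch, of the kind of fully convolutional generator stage the abstract describes: transposed-convolution upsampling of the mel conditioning followed by a stack of dilated residual blocks. The channel counts, strides, and dilation pattern are illustrative assumptions, not the paper's exact configuration.

    import torch.nn as nn
    from torch.nn.utils import weight_norm

    class ResidualBlock(nn.Module):
        # Dilated 1-D convolutions grow the receptive field without recurrence.
        def __init__(self, channels, dilation):
            super().__init__()
            self.block = nn.Sequential(
                nn.LeakyReLU(0.2),
                weight_norm(nn.Conv1d(channels, channels, kernel_size=3,
                                      dilation=dilation, padding=dilation)),
                nn.LeakyReLU(0.2),
                weight_norm(nn.Conv1d(channels, channels, kernel_size=1)),
            )

        def forward(self, x):
            return x + self.block(x)

    class UpsampleStage(nn.Module):
        # One generator stage: upsample in time, then refine with residual blocks.
        def __init__(self, in_channels, out_channels, stride):
            super().__init__()
            self.upsample = weight_norm(nn.ConvTranspose1d(
                in_channels, out_channels, kernel_size=2 * stride, stride=stride,
                padding=stride // 2 + stride % 2, output_padding=stride % 2))
            self.residual = nn.Sequential(
                *[ResidualBlock(out_channels, dilation=3 ** i) for i in range(3)])

        def forward(self, x):
            return self.residual(self.upsample(x))

Stacking such stages so that the strides multiply out to the mel hop length (e.g. 8 x 8 x 2 x 2 = 256) maps a mel-rate feature sequence to an audio-rate waveform in a single parallel pass, which is what makes non-autoregressive inference this fast.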

Results

Spectrogram Inversion on Unseen Speakers

Paired audio samples for each unseen speaker: Original (ground-truth recording) and Reconstructed (our model's inversion of its mel spectrogram).
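
The reconstructions are produced by computing a mel spectrogram from held-out audio and inverting it with the trained generator. Below is a hedged sketch of that pipeline using torchaudio; the checkpoint filename, generator interface, and STFT parameters are assumptions, not values taken from the released code.

    import torch
    import torchaudio

    # Hypothetical TorchScript checkpoint of the trained generator.
    generator = torch.jit.load("melgan_generator.pt").eval()

    wav, sr = torchaudio.load("unseen_speaker.wav")
    to_mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sr, n_fft=1024, hop_length=256, n_mels=80)
    mel = to_mel(wav).clamp(min=1e-5).log()  # log-mel conditioning

    with torch.no_grad():
        reconstructed = generator(mel)  # waveform at audio rate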

End-to-End Text-to-Speech Examples



Unconditional Music Synthesis

Audio samples: Original (ground-truth recording), Reconstructed (mel-spectrogram inversion), and Sampled (generated unconditionally by the model).

Music Translation

Examples for source domain: Bach Solo Cello

Each of the five examples is presented in five columns: the original recording, then translations to Beethoven accompanied violin and to Beethoven solo piano, each rendered by Mor et al. (2019) and by our model.

Samples During Training

50 epochs - 1.35 hours

100 epochs - 2.71 hours

200 epochs - 5.42 hours

400 epochs - 10.84 hours

800 epochs - 21.68 hours

1600 epochs - 43.36 hours

3200 epochs - 86.72 hours
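
For context on the objective being optimized across these checkpoints: a common formulation for multi-scale GAN vocoders is the hinge loss sketched below, assuming discriminators that return one score tensor per scale. Treat it as a sketch of the training signal, not a verbatim excerpt of our training code.

    import torch.nn.functional as F

    def discriminator_loss(scores_real, scores_fake):
        # Hinge loss at every discriminator scale: push scores on real audio
        # above +1 and scores on generated audio below -1.
        loss = 0.0
        for d_real, d_fake in zip(scores_real, scores_fake):
            loss = loss + F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()
        return loss

    def generator_adversarial_loss(scores_fake):
        # The generator is rewarded for raising the discriminator's scores.
        return sum(-d_fake.mean() for d_fake in scores_fake)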

Ablation

Original

Baseline

l1_observed_no_feat_match

l1_observed_space

no_dilations

no_group_disc

no_multiscale_disc

no_patch_gan

no_weight_norm

spectral_norm
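
To make the ablation labels above concrete, here is a sketch of two of the ablated ingredients; the mapping from label to code is inferred from the names rather than documented on this page. The normalization toggle corresponds to no_weight_norm and spectral_norm, and the feature-matching term is what l1_observed_no_feat_match removes in favor of an L1 loss directly on the waveform.

    import torch.nn.functional as F
    from torch.nn.utils import weight_norm, spectral_norm

    def normed(conv, mode="weight"):
        # The baseline wraps every convolution in weight norm; the ablations
        # swap in spectral norm or drop normalization entirely.
        if mode == "weight":
            return weight_norm(conv)
        if mode == "spectral":
            return spectral_norm(conv)
        return conv

    def feature_matching_loss(features_real, features_fake):
        # L1 distance between intermediate discriminator feature maps of real
        # and generated audio, summed over layers (and scales).
        return sum(F.l1_loss(fake, real.detach())
                   for real, fake in zip(features_real, features_fake))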