
Abstract

Voice conversion has gained increasing popularity in many applications of speech synthesis. The idea is to change the voice identity from one speaker into another while keeping the linguistic content unchanged. Many voice conversion approaches rely on the use of a vocoder to reconstruct the speech from acoustic features, and as a consequence, the speech quality heavily depends on such a vocoder. In this paper, we propose NVC-Net, an end-to-end adversarial network, which performs voice conversion directly on the raw audio waveform of arbitrary length. By disentangling the speaker identity from the speech content, NVC-Net is able to perform non-parallel traditional many-to-many voice conversion as well as zero-shot voice conversion from a short utterance of an unseen target speaker. Importantly, NVC-Net is non-autoregressive and fully convolutional, achieving fast inference. Our model is capable of producing samples at a rate of more than 3600 kHz on an NVIDIA V100 GPU, being orders of magnitude faster than state-of-the-art methods under the same hardware configurations. Objective and subjective evaluations on non-parallel many-to-many voice conversion tasks show that NVC-Net obtains competitive results with significantly fewer parameters.

Full paper is available at https://arxiv.org/abs/2106.00992.

Code is available at https://github.com/sony/ai-research-code/tree/master/nvcnet.
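To make the conversion flow described in the abstract concrete, below is a minimal, illustrative sketch of a disentanglement-based converter: a content encoder extracts a speaker-independent code from the source waveform, a speaker encoder maps a reference utterance of the target speaker to a speaker embedding, and a generator synthesizes the converted waveform. All module names, layers, and shapes are placeholders invented for this sketch; they are not the classes or API of the released NVC-Net code.

```python
# Minimal, illustrative sketch of disentanglement-based voice conversion.
# Module names, shapes, and layers are placeholders, NOT the released NVC-Net code.
import torch
import torch.nn as nn

class TinyContentEncoder(nn.Module):
    """Toy stand-in: raw waveform -> downsampled, speaker-independent content code."""
    def __init__(self, channels=32):
        super().__init__()
        self.conv = nn.Conv1d(1, channels, kernel_size=16, stride=8, padding=4)

    def forward(self, wav):                         # wav: (batch, 1, samples)
        return torch.tanh(self.conv(wav))           # content: (batch, channels, frames)

class TinySpeakerEncoder(nn.Module):
    """Toy stand-in: reference utterance -> mean and log-variance of a speaker embedding."""
    def __init__(self, channels=32, dim=64):
        super().__init__()
        self.conv = nn.Conv1d(1, channels, kernel_size=16, stride=8, padding=4)
        self.mu = nn.Linear(channels, dim)
        self.logvar = nn.Linear(channels, dim)

    def forward(self, wav):
        h = self.conv(wav).mean(dim=-1)             # temporal average pooling
        return self.mu(h), self.logvar(h)

class TinyGenerator(nn.Module):
    """Toy stand-in: (content code, speaker embedding) -> raw waveform."""
    def __init__(self, channels=32, dim=64):
        super().__init__()
        self.proj = nn.Linear(dim, channels)
        self.deconv = nn.ConvTranspose1d(channels, 1, kernel_size=16, stride=8, padding=4)

    def forward(self, content, spk):
        h = content + self.proj(spk).unsqueeze(-1)  # condition the content on the speaker
        return torch.tanh(self.deconv(h))           # waveform: (batch, 1, samples)

# Conversion: take the content from the source utterance and the identity from
# a short reference utterance of the target speaker.
content_enc, spk_enc, gen = TinyContentEncoder(), TinySpeakerEncoder(), TinyGenerator()
source = torch.randn(1, 1, 16000)        # 1 second of source speech (dummy data)
target_ref = torch.randn(1, 1, 16000)    # reference utterance of the target speaker (dummy data)
mu, _ = spk_enc(target_ref)              # here we simply use the posterior mean
converted = gen(content_enc(source), mu) # converted raw waveform, same length as the source
```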

Samples

Audio samples are taken from the VCTK data set [1].

A. Traditional voice conversion

Traditional many-to-many voice conversion is performed between speakers that are seen during training. Some samples are presented in the table below.

[Audio sample table. Columns: Source, Target, NVC-Net. Rows: M2M, M2F, F2M, F2F.]

M2M: male to male; M2F: male to female; F2M: female to male; F2F: female to female

B. Zero-shot voice conversion

Zero-shot many-to-many voice conversion is performed from or to speakers that are unseen during training. Some samples are presented in the table below.

[Audio sample table. Columns: Source, Target, NVC-Net. Rows: S2U, U2S, U2U.]

S2U: seen to unseen; U2S: unseen to seen; U2U: unseen to unseen

C. Diversity

NVC-Net can synthesize diverse samples by varying the latent representation of the speaker embedding. For a given reference utterance, the speaker network outputs the parameters of a Gaussian distribution over speaker embeddings, so multiple speaker embeddings can be sampled from it (see the sketch after the sample table below).

[Audio sample table. Columns: Source, Target, followed by several diverse samples produced by NVC-Net.]
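As a rough illustration of where this diversity comes from, the sketch below draws several speaker embeddings from a Gaussian with mean mu and log-variance logvar, the kind of output assumed for the speaker encoder in the sketch above. The function name and shapes are placeholders, not the released API.

```python
# Sketch of sampling several speaker embeddings from a Gaussian posterior via
# the reparameterization trick. Names and shapes are illustrative placeholders.
import torch

def sample_speaker_embeddings(mu, logvar, n=3):
    """Draw n speaker embeddings from N(mu, diag(exp(logvar)))."""
    std = (0.5 * logvar).exp()
    return [mu + std * torch.randn_like(std) for _ in range(n)]

# Each sampled embedding conditions the generator on a slightly different voice,
# so converting the same source utterance n times yields n distinct outputs.
mu, logvar = torch.zeros(1, 64), torch.zeros(1, 64)   # stand-in posterior parameters
embeddings = sample_speaker_embeddings(mu, logvar, n=3)
```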

D. Additional studies

Below are samples comparing the outputs of NVC-Net wo (without normalization of the content code) and NVC-Net w (with normalization of the content code).

[Audio sample table. Columns: Source, Target, NVC-Net wo, NVC-Net w.]
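For illustration only, one plausible form of such a normalization is to project each frame of the content code onto the unit sphere, which limits how much speaker information the content code can carry. The exact scheme used by NVC-Net is described in the paper; the helper below is a hypothetical sketch.

```python
# Hypothetical sketch of normalizing a content code: L2-normalize each frame.
# This is an assumption for illustration; see the paper for the exact scheme.
import torch
import torch.nn.functional as F

def normalize_content(content, eps=1e-8):
    """L2-normalize a content code of shape (batch, channels, frames) per frame."""
    return F.normalize(content, p=2, dim=1, eps=eps)

content = torch.randn(1, 32, 2000)       # unnormalized content code (dummy data)
content_n = normalize_content(content)   # each frame vector now has unit norm
```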

References

[1] Veaux, Christophe; Yamagishi, Junichi; MacDonald, Kirsten (2017). CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit [sound]. University of Edinburgh, The Centre for Speech Technology Research (CSTR). https://doi.org/10.7488/ds/1994