Abstract:
The non-autoregressive architecture for neural text-to-speech (TTS) allows for parallel implementation and thus reduces inference time over its autoregressive counterpart. However, such an architecture does not explicitly model the temporal dependency of the acoustic signal, as it generates individual acoustic frames independently. This lack of temporal modeling often adversely affects speech continuity and, in turn, voice quality. In this paper, we propose a novel neural TTS model, denoted FastTalker, and study two strategies for high-quality speech synthesis at low computational cost. First, we add a shallow autoregressive acoustic decoder on top of the non-autoregressive context decoder to recover the temporal information of the acoustic signal. Second, we implement group autoregression to further accelerate inference in the autoregressive acoustic decoder. The group-based autoregressive acoustic decoder generates acoustic features as a sequence of groups rather than frames, each group comprising multiple consecutive frames; within a group, the acoustic features are generated in parallel. With shallow and group autoregression, FastTalker recovers the temporal information of the acoustic signal while retaining the fast-decoding property, achieving a good balance between speech quality and inference speed. Experiments show that, in terms of voice quality and naturalness, FastTalker significantly outperforms the non-autoregressive FastSpeech baseline and is on par with the autoregressive baselines, while offering a considerable inference speedup over Tacotron2 and Transformer TTS.
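To make the group-autoregression idea concrete, below is a minimal PyTorch sketch of a group-autoregressive acoustic decoder. The class name, layer choices (a GRU cell with linear prenet and projection layers), and all sizes are our own illustrative assumptions, not the paper's implementation. With group size K, the sequential loop runs T/K times and each step emits K consecutive frames in parallel; K=1 recovers ordinary frame-level autoregression.

```python
import torch
import torch.nn as nn


class GroupARDecoder(nn.Module):
    """Minimal sketch of a group-autoregressive acoustic decoder.

    `context` stands for the output of the non-autoregressive context
    decoder. All names and sizes are illustrative assumptions, not the
    paper's actual implementation.
    """

    def __init__(self, d_model=256, n_mels=80, group_size=4):
        super().__init__()
        self.K = group_size
        self.n_mels = n_mels
        self.d_model = d_model
        # Summarize the previously generated group of K frames.
        self.prenet = nn.Linear(n_mels * group_size, d_model)
        # The recurrence runs once per group, not once per frame.
        self.rnn = nn.GRUCell(2 * d_model, d_model)
        # Emit all K frames of the next group with a single projection.
        self.proj = nn.Linear(d_model, n_mels * group_size)

    @torch.no_grad()
    def infer(self, context):
        """context: (T, d_model), T divisible by K -> mel: (T, n_mels)."""
        T = context.size(0)
        h = context.new_zeros(1, self.d_model)
        prev = context.new_zeros(1, self.n_mels * self.K)
        frames = []
        for t in range(0, T, self.K):  # T/K sequential steps instead of T
            ctx = context[t:t + self.K].mean(dim=0, keepdim=True)
            h = self.rnn(torch.cat([self.prenet(prev), ctx], dim=-1), h)
            group = self.proj(h)  # K frames generated in parallel
            frames.append(group.view(self.K, self.n_mels))
            prev = group
        return torch.cat(frames, dim=0)
```

For example, `GroupARDecoder(group_size=4).infer(torch.randn(100, 256))` returns a (100, 80) mel spectrogram after only 25 sequential steps.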



Figure 1: The proposed FastTalker consists of an Encoder (left panel) and a FastDecoder (right panel). The FastDecoder comprises a non-autoregressive context decoder (in blue) followed by a group-autoregressive acoustic decoder layer (in green).
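As a reading aid for Figure 1, the following skeleton (module names are ours, hypothetical) shows how the pieces compose: the encoder and the context decoder run fully in parallel, and only the final acoustic decoder layer is sequential, and only over groups.

```python
import torch.nn as nn


class FastTalker(nn.Module):
    """Structural sketch of Figure 1; names are illustrative, not the paper's."""

    def __init__(self, encoder, context_decoder, acoustic_decoder):
        super().__init__()
        self.encoder = encoder                    # text -> hidden states (left panel)
        self.context_decoder = context_decoder    # non-autoregressive stack (blue)
        self.acoustic_decoder = acoustic_decoder  # group-AR layer (green)

    def forward(self, phonemes):
        hidden = self.encoder(phonemes)         # parallel over the input sequence
        context = self.context_decoder(hidden)  # parallel over all frames
        return self.acoustic_decoder.infer(context)  # sequential over T/K groups only
```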





We develop three competitive baselines:
1) Tacotron2 [1]
2) Transformer TTS [2]
3) FastSpeech [3]


Speech Samples:



[15 audio samples for each system: Ground Truth, Tacotron2 (r=1), Transformer TTS, FastSpeech, and FastTalker (K=1).]

References
[1] Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, et al. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In ICASSP 2018, pages 4779–4783.
[2] Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu. Neural speech synthesis with Transformer network. In AAAI 2019, pages 6706–6713.
[3] Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. FastSpeech: Fast, robust and controllable text to speech. In NeurIPS 2019, pages 3171–3180.