Relative Positional Encoding for Speech Recognition and Direct Translation

Ngoc-Quan Pham*, Thanh-Le Ha, Tuan-Nam Nguyen, Thai-Son Nguyen, Elizabeth Salesky, Sebastian Stüker, Jan Niehues, Alex Waibel

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-reviewed

Abstract

Transformer models are powerful sequence-to-sequence architectures that are capable of directly mapping speech inputs to transcriptions or translations. However, the mechanism for modeling positions in this model was tailored for text modeling, and thus is less suitable for acoustic inputs. In this work, we adapt the relative position encoding scheme to the Speech Transformer, where the key addition is the relative distance between input states in the self-attention network. As a result, the network can better adapt to the variable distributions present in speech data. Our experiments show that the resulting model achieves the best recognition result on the Switchboard benchmark in the non-augmentation condition, and the best published result on the MuST-C speech translation benchmark. We also show that this model is able to make better use of synthetic data than the Transformer, and adapts better to variable sentence segmentation quality for speech translation.
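For illustration, below is a minimal PyTorch sketch of self-attention with Transformer-XL-style relative position terms, the kind of scheme the abstract describes adapting to the Speech Transformer. All names, hyperparameters, and helper functions here are assumptions made for the sketch; it does not reproduce the authors' implementation.

# Minimal sketch (illustrative only, not the authors' code) of self-attention
# whose scores include a term that depends on the relative distance i - j
# between query frame i and key frame j.
import math
import torch
import torch.nn as nn


def relative_sinusoids(seq_len: int, d_model: int) -> torch.Tensor:
    """Sinusoidal embeddings for every relative distance in [-(T-1), T-1]."""
    distances = torch.arange(-(seq_len - 1), seq_len, dtype=torch.float32)  # (2T-1,)
    inv_freq = torch.exp(
        torch.arange(0, d_model, 2, dtype=torch.float32) * (-math.log(10000.0) / d_model)
    )
    angles = distances.unsqueeze(1) * inv_freq  # (2T-1, d_model/2)
    enc = torch.zeros(2 * seq_len - 1, d_model)
    enc[:, 0::2] = torch.sin(angles)
    enc[:, 1::2] = torch.cos(angles)
    return enc


class RelPositionSelfAttention(nn.Module):
    """Single self-attention layer whose scores use relative distances."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.d = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.pos = nn.Linear(d_model, d_model, bias=False)
        self.out = nn.Linear(d_model, d_model)
        # Global content / position biases (the "u" and "v" of Transformer-XL).
        self.u = nn.Parameter(torch.zeros(self.h, self.d))
        self.v = nn.Parameter(torch.zeros(self.h, self.d))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, T, self.h, self.d)
        k = k.view(B, T, self.h, self.d)
        v = v.view(B, T, self.h, self.d)

        # Project the sinusoidal embedding of every possible relative distance,
        # then index it so rel_ij[i, j] encodes the distance i - j.
        rel = self.pos(relative_sinusoids(T, D).to(x.device)).view(2 * T - 1, self.h, self.d)
        dist = torch.arange(T, device=x.device).unsqueeze(1) - torch.arange(T, device=x.device)
        rel_ij = rel[dist + (T - 1)]  # (T, T, heads, d_head)

        # Content-content term plus content-position term of the attention score.
        ac = torch.einsum("bihd,bjhd->bhij", q + self.u, k)
        bd = torch.einsum("bihd,ijhd->bhij", q + self.v, rel_ij)
        attn = torch.softmax((ac + bd) / math.sqrt(self.d), dim=-1)

        ctx = torch.einsum("bhij,bjhd->bihd", attn, v)
        return self.out(ctx.reshape(B, T, D))


if __name__ == "__main__":
    # Toy check on a random input of shape (batch, time, d_model).
    layer = RelPositionSelfAttention(d_model=256, n_heads=4)
    frames = torch.randn(2, 100, 256)
    print(layer(frames).shape)  # torch.Size([2, 100, 256])

Unlike absolute sinusoidal encodings added once to the input, the position term here is recomputed per query-key offset, which is what allows the attention scores to depend only on relative distance and thus cope with the variable-length acoustic sequences mentioned in the abstract.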
Original language: English
Title of host publication: INTERSPEECH 2020 Proceedings
Pages: 31-35
Number of pages: 5
DOIs
Publication status: Published - 2020
Event: 21st Annual Conference of the International Speech Communication Association - Fully Virtual Conference, Shanghai, China
Duration: 25 Oct 2020 - 29 Oct 2020
Conference number: 21
http://www.interspeech2020.org/

Conference

Conference: 21st Annual Conference of the International Speech Communication Association
Abbreviated title: INTERSPEECH 2020
Country/Territory: China
City: Shanghai
Period: 25/10/20 - 29/10/20
Internet address: http://www.interspeech2020.org/
