TY - GEN
T1 - Relative Positional Encoding for Speech Recognition and Direct Translation
AU - Pham, Ngoc-Quan
AU - Ha, Thanh-Le
AU - Nguyen, Tuan-Nam
AU - Nguyen, Thai-Son
AU - Salesky, Elizabeth
AU - Stüker, Sebastian
AU - Niehues, Jan
AU - Waibel, Alex
PY - 2020
AB - Transformer models are powerful sequence-to-sequence architectures that can directly map speech inputs to transcriptions or translations. However, the mechanism for modeling positions in this model was tailored for text and is less suited to acoustic inputs. In this work, we adapt the relative position encoding scheme to the Speech Transformer, where the key addition is the relative distance between input states in the self-attention network. As a result, the network can better adapt to the variable distributions present in speech data. Our experiments show that the resulting model achieves the best recognition result on the Switchboard benchmark in the non-augmentation condition, and the best published result on the MuST-C speech translation benchmark. We also show that this model utilizes synthetic data better than the Transformer, and adapts better to variable sentence segmentation quality for speech translation.
DO - 10.21437/INTERSPEECH.2020-2526
M3 - Conference paper in proceedings
SP - 31
EP - 35
BT - INTERSPEECH 2020 Proceedings
T2 - Interspeech 2020
Y2 - 25 October 2020 through 29 October 2020
ER -
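
The abstract describes augmenting the self-attention scores of a Speech Transformer with a term derived from the relative distance between input states. Below is a minimal NumPy sketch of one common relative-position scheme (Shaw et al.-style clipped relative-distance embeddings) to illustrate the idea; the paper itself adapts a Transformer-XL-style formulation, so this is an illustrative sketch under that simplifying assumption, and every name, shape, and parameter here is hypothetical rather than taken from the paper.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relative_self_attention(x, Wq, Wk, Wv, rel_emb, max_dist):
    """Single-head self-attention with a relative-position score term.

    x:        (T, d) input states
    Wq/Wk/Wv: (d, d) projection matrices
    rel_emb:  (2*max_dist + 1, d) embeddings for clipped relative distances
    """
    T, d = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv

    # Content-content term: standard dot-product attention scores.
    scores = q @ k.T

    # Content-position term: each query also scores an embedding of the
    # clipped relative distance j - i between positions i and j.
    idx = np.arange(T)
    rel = np.clip(idx[None, :] - idx[:, None], -max_dist, max_dist) + max_dist
    scores += np.einsum('id,ijd->ij', q, rel_emb[rel])

    weights = softmax(scores / np.sqrt(d))
    return weights @ v

# Toy usage with random weights (shapes only; not trained parameters).
rng = np.random.default_rng(0)
T, d, max_dist = 6, 16, 4
x = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * d**-0.5 for _ in range(3))
rel_emb = rng.standard_normal((2 * max_dist + 1, d)) * d**-0.5
out = relative_self_attention(x, Wq, Wk, Wv, rel_emb, max_dist)
print(out.shape)  # (6, 16)

Because the score term depends only on the distance j - i rather than absolute positions, the same learned offsets apply anywhere in the sequence, which is the property the abstract credits for better handling of variable-length acoustic inputs.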