Low-Latency Sequence-to-Sequence Speech Recognition and Translation by Partial Hypothesis Selection

Danni Liu*, Gerasimos Spanakis, Jan Niehues

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review


Encoder-decoder models provide a generic architecture for sequence-to-sequence tasks such as speech recognition and translation. While offline systems are often evaluated on quality metrics like word error rate (WER) and BLEU score, latency is also a crucial factor in many practical use cases. We propose three latency reduction techniques for chunk-based incremental inference and evaluate their accuracy-latency tradeoff. On the 300-hour How2 dataset, we reduce latency by 83% to 0.8 seconds while sacrificing 1% WER (6% rel.) compared to offline transcription. Although our experiments use the Transformer, the partial hypothesis selection strategies are applicable to other encoder-decoder models. To reduce expensive re-computation as new chunks arrive, we propose to use a unidirectionally-attending encoder. After an adaptation procedure to partial sequences, the unidirectional model performs on par with the original model. We further show that our approach is also applicable to speech translation. On the How2 English-Portuguese speech translation dataset, we reduce latency to 0.7 seconds (-84% rel.) while incurring a loss of 2.4 BLEU points (5% rel.) compared to the offline system.
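The abstract describes chunk-based incremental inference, where the model re-decodes the growing input after each new chunk and commits only a stable partial hypothesis. A minimal sketch of one plausible selection strategy of this kind — committing the longest common prefix of hypotheses from consecutive chunks — is shown below; the function names and toy hypotheses are illustrative assumptions, not taken from the paper.

```python
# Sketch of a "local agreement" style partial hypothesis selection for
# chunk-based incremental decoding. After each new audio chunk, the decoder
# produces a full hypothesis over all audio seen so far; we commit only the
# tokens on which consecutive hypotheses agree, assuming those are stable.
# Names and example hypotheses are hypothetical, for illustration only.

def common_prefix(a, b):
    """Longest common token prefix of two hypotheses."""
    out = []
    for x, y in zip(a, b):
        if x != y:
            break
        out.append(x)
    return out

def incremental_commit(chunk_hypotheses):
    """Yield newly committed tokens as each chunk's hypothesis arrives."""
    committed = []
    prev = None
    for hyp in chunk_hypotheses:
        if prev is not None:
            stable = common_prefix(prev, hyp)
            # never retract tokens that were already committed
            new = stable[len(committed):]
            committed.extend(new)
            yield new
        prev = hyp
    # flush the remainder of the final hypothesis once the utterance ends
    yield prev[len(committed):]

# Toy example: hypotheses produced after each chunk of a growing utterance.
hyps = [
    ["the", "cat"],
    ["the", "cat", "sat", "on"],
    ["the", "cat", "sat", "on", "the", "mat"],
]
segments = list(incremental_commit(hyps))
print(segments)  # [['the', 'cat'], ['sat', 'on'], ['the', 'mat']]
```

The latency gain comes from emitting the committed prefix immediately instead of waiting for the full utterance, at the cost that an early commitment can no longer be revised.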

Original language: English
Title of host publication: INTERSPEECH 2020
Number of pages: 5
Publication status: Published - 2020
Event: 21st Annual Conference of the International Speech Communication Association - Fully Virtual Conference, Shanghai, China
Duration: 25 Oct 2020 – 29 Oct 2020
Conference number: 21


Conference: 21st Annual Conference of the International Speech Communication Association
Abbreviated title: INTERSPEECH 2020