Abstract
Encoder-decoder models provide a generic architecture for sequence-to-sequence tasks such as speech recognition and translation. While offline systems are often evaluated on quality metrics like word error rates (WER) and BLEU scores, latency is also a crucial factor in many practical use cases. We propose three latency reduction techniques for chunk-based incremental inference and evaluate their accuracy-latency tradeoff. On the 300-hour How2 dataset, we reduce latency by 83% to 0.8 seconds by sacrificing 1% WER (6% rel.) compared to offline transcription. Although our experiments use the Transformer, the partial hypothesis selection strategies are applicable to other encoder-decoder models. To reduce expensive re-computation as new chunks arrive, we propose to use a unidirectionally-attending encoder. After an adaptation procedure to partial sequences, the unidirectional model performs on par with the original model. We further show that our approach is also applicable to speech translation. On the How2 English-Portuguese speech translation dataset, we reduce latency to 0.7 seconds (-84% rel.) while incurring a loss of 2.4 BLEU points (5% rel.) compared to the offline system.
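The chunk-based incremental inference described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's exact algorithm: `decode` stands in for any encoder-decoder beam search that returns n-best partial hypotheses for the audio seen so far, and the common-prefix rule shown here is just one plausible partial hypothesis selection strategy (commit only tokens on which all hypotheses agree, since those are unlikely to be revised by later chunks).

```python
def common_prefix(hyps):
    """Longest token prefix shared by all hypotheses."""
    if not hyps:
        return []
    prefix = []
    for tokens in zip(*hyps):
        if all(t == tokens[0] for t in tokens):
            prefix.append(tokens[0])
        else:
            break
    return prefix

def incremental_decode(chunks, decode, n_best=2):
    """Chunk-based incremental inference with partial hypothesis selection.

    After each new chunk arrives, re-decode all audio seen so far and
    commit only the stable prefix agreed on by the n-best hypotheses;
    the unstable suffix may still change with future chunks.
    `decode(audio, n_best)` is a placeholder for the model's beam search.
    """
    committed = []
    audio = []
    for chunk in chunks:
        audio.extend(chunk)
        hyps = decode(audio, n_best)      # n-best partial hypotheses
        stable = common_prefix(hyps)
        if len(stable) > len(committed):
            committed = stable            # emit only newly stable tokens
    # once the utterance is complete, commit the best full hypothesis
    return decode(audio, 1)[0]
```

Latency drops because stable tokens are emitted as soon as a chunk confirms them, instead of waiting for the full utterance; the tradeoff is that an overly eager selection rule commits tokens the offline decoder would later revise, which is the accuracy-latency tradeoff the paper measures.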
| Original language | English |
|---|---|
| Title of host publication | INTERSPEECH 2020 |
| Pages | 3620-3624 |
| Number of pages | 5 |
| DOIs | |
| Publication status | Published - 2020 |
| Event | 21st Annual Conference of the International Speech Communication Association, Fully Virtual Conference, Shanghai, China. Duration: 25 Oct 2020 → 29 Oct 2020. Conference number: 21. http://www.interspeech2020.org/ |
Conference
| Conference | 21st Annual Conference of the International Speech Communication Association |
|---|---|
| Abbreviated title | INTERSPEECH 2020 |
| Country/Territory | China |
| City | Shanghai |
| Period | 25/10/20 → 29/10/20 |
| Internet address | http://www.interspeech2020.org/ |
Title: Low-Latency Sequence-to-Sequence Speech Recognition and Translation by Partial Hypothesis Selection