Effective combination of pretrained models - KIT@IWSLT2022

Ngoc-Quan Pham*, Tuan Nam Nguyen, Thai-Binh Nguyen, Danni Liu, Carlos Mullov, Jan Niehues, Alexander Waibel

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding - Conference article in proceeding - Academic - peer-reviewed

Abstract

Pretrained models in the acoustic and textual modalities can potentially improve speech translation for both Cascade and End-to-end approaches. In this evaluation, we investigate this question empirically by using the wav2vec, mBART50 and DeltaLM models to improve our text and speech translation systems. The experiments showed that these models, together with an advanced audio segmentation method, improve over our previous end-to-end system by up to 7 BLEU points. More importantly, they showed that, given enough data and modeling capacity to overcome the training difficulty, an end-to-end system can outperform even very competitive Cascade systems. In our experiments, this gap can be as large as 2.0 BLEU points, the same margin by which the Cascade approach has typically led over the years.
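
To make the "combination of pretrained models" concrete, the sketch below shows, under assumptions that go beyond the abstract, how a pretrained acoustic encoder (wav2vec 2.0) can be composed with a pretrained multilingual text decoder (mBART50) to initialize an end-to-end speech translation model. It uses the HuggingFace transformers API and publicly available checkpoints (facebook/wav2vec2-large-xlsr-53, facebook/mbart-large-50) as illustrative stand-ins; the authors' own training framework, exact checkpoints, and the DeltaLM-based text translation models are not reproduced here.

    # Illustrative sketch only: compose pretrained acoustic and textual models
    # into one end-to-end speech translation model. Checkpoint names and the
    # German target language are assumptions, not the authors' exact setup.
    from transformers import SpeechEncoderDecoderModel, MBart50Tokenizer

    encoder_ckpt = "facebook/wav2vec2-large-xlsr-53"  # pretrained speech encoder
    decoder_ckpt = "facebook/mbart-large-50"          # pretrained multilingual decoder

    # Compose the two models; the cross-attention bridging the speech encoder
    # and the text decoder is initialized from scratch.
    model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
        encoder_ckpt, decoder_ckpt
    )

    tokenizer = MBart50Tokenizer.from_pretrained(decoder_ckpt, tgt_lang="de_DE")

    # Special tokens the composed model needs for fine-tuning and generation.
    model.config.decoder_start_token_id = tokenizer.lang_code_to_id["de_DE"]
    model.config.pad_token_id = tokenizer.pad_token_id
    model.config.eos_token_id = tokenizer.eos_token_id

    # The composed model is then fine-tuned on speech-translation pairs:
    # raw 16 kHz waveforms in, target-language token ids out.
    print(f"parameters: {model.num_parameters():,}")

In such a setup the end-to-end system starts from strong acoustic and textual representations rather than random initialization, which is the property the abstract credits for closing, and in these experiments reversing, the usual gap to Cascade systems.
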
Original language: English
Title of host publication: Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
Publisher: Association for Computational Linguistics
Pages: 190-197
Number of pages: 8
ISBN (Print): 9781955917414
DOIs
Publication status: Published - 2022
Event: The International Conference on Spoken Language Translation - Dublin, Ireland
Duration: 26 May 2022 - 27 May 2022
https://iwslt.org/2022/

Conference

Conference: The International Conference on Spoken Language Translation
Abbreviated title: 19th IWSLT
Country/Territory: Ireland
City: Dublin
Period: 26/05/22 - 27/05/22
Internet address: https://iwslt.org/2022/
