Optimizing segmentation granularity for neural machine translation

Elizabeth Salesky*, Andrew Runge, Alex Coda, Jan Niehues, Graham Neubig

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

2 Citations (Web of Science)

Abstract

In neural machine translation (NMT), it has become standard to translate using subword units to allow for an open vocabulary and improve accuracy on infrequent words. Byte-pair encoding (BPE) and its variants are the predominant approach to generating these subwords, as they are unsupervised, resource-free, and empirically effective. However, the granularity of these subword units is a hyperparameter to be tuned for each language and task, using methods such as grid search. Tuning may be done inexhaustively or skipped entirely due to resource constraints, leading to sub-optimal performance. In this paper, we propose a method to automatically tune this parameter using only one training pass. We incrementally introduce new BPE vocabulary online based on the held-out validation loss, beginning with smaller, general subwords and adding larger, more specific units over the course of training. Our method matches the results found with grid search, optimizing segmentation granularity while significantly reducing overall training time. We also show benefits in training efficiency and performance improvements for rare words due to the way embeddings for larger units are incrementally constructed by combining those from smaller units.
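The core idea from the abstract — growing the BPE vocabulary during training when held-out loss stops improving, and initializing each new larger unit's embedding from its component subwords — can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact procedure: the function names, the plateau test, and the averaging rule for new embeddings are all assumptions.

```python
import numpy as np

def init_merged_embeddings(emb, vocab, new_merges):
    """Extend embedding matrix `emb` with rows for newly merged subwords.

    emb        : (V, d) array, one row per existing subword
    vocab      : dict mapping subword string -> row index
    new_merges : list of (left, right) subword pairs to introduce

    Each new unit's embedding is built by combining those of its parts
    (here, a simple average -- an illustrative choice).
    """
    rows = []
    for left, right in new_merges:
        merged = left + right
        if merged in vocab:
            continue
        rows.append((emb[vocab[left]] + emb[vocab[right]]) / 2.0)
        vocab[merged] = emb.shape[0] + len(rows) - 1
    if rows:
        emb = np.vstack([emb, np.stack(rows)])
    return emb, vocab

def should_grow(val_losses, patience=2, eps=1e-3):
    """Trigger vocabulary growth when validation loss has plateaued,
    i.e. the last `patience` evaluations failed to beat the earlier best
    by more than `eps`. A hypothetical schedule, for illustration only."""
    if len(val_losses) < patience + 1:
        return False
    return min(val_losses[-patience:]) > min(val_losses[:-patience]) - eps
```

In use, training would begin with a small, general subword vocabulary; whenever `should_grow` fires on the validation curve, the next batch of BPE merges is applied to the data and `init_merged_embeddings` extends the embedding table, so larger units start from informed positions rather than random initialization.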

Original language: English
Pages (from-to): 41-59
Number of pages: 19
Journal: Machine Translation
Volume: 34
Issue number: 1
Early online date: 24 Jan 2020
DOIs
Publication status: Published - Apr 2020

Keywords

  • Neural machine translation
  • Subword units
  • Byte-pair encoding
  • Online optimization
  • Segmentation