Abstract
Understanding robustness is essential for building reliable NLP systems. Unfortunately, in the context of machine translation, previous work has mainly focused on documenting robustness failures or improving robustness. In contrast, we study robustness from a representational perspective, examining how internal model representations of ungrammatical inputs evolve through the model's layers. For this purpose, we perform Grammatical Error Detection (GED) probing and representational similarity analysis. Our findings indicate that the encoder first detects the grammatical error, then corrects it by moving its representation toward the correct form. To understand what contributes to this process, we turn to the attention mechanism, where we identify what we term *Robustness Heads*. We find that *Robustness Heads* attend to interpretable linguistic units when responding to grammatical errors, and that models fine-tuned for robustness tend to rely more on *Robustness Heads* when updating the representation of the ungrammatical word.
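Since the abstract only names its two analysis tools, the following is a minimal, hedged sketch of what layer-wise GED probing and a simple representational-similarity check might look like in practice. The model name, sentences, token positions, and labels below are illustrative assumptions, not the paper's actual data or setup.

```python
# Illustrative sketch only: layer-wise GED probing plus a per-layer similarity
# check between an ungrammatical word and its corrected form. The model,
# sentences, token positions, and labels are assumptions for demonstration.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

MODEL = "Helsinki-NLP/opus-mt-en-de"  # assumed example MT model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL, output_hidden_states=True).eval()


def encoder_layer_states(sentence: str):
    """Return a tuple of encoder hidden states, one [1, seq_len, dim] per layer."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        return model.encoder(**inputs).hidden_states


# Toy probing data: (sentence, subword index of the probed word, 0/1 error label).
examples = [
    ("She goes to school every day.", 1, 0),
    ("She go to school every day.", 1, 1),
    ("He has two cats at home.", 1, 0),
    ("He have two cats at home.", 1, 1),
]
states = [encoder_layer_states(s) for s, _, _ in examples]

# GED probing: train one linear probe per layer on the probed word's state.
# Rising accuracy over layers would suggest the encoder gradually makes the
# grammatical error linearly decodable.
y = np.array([label for _, _, label in examples])
for layer in range(len(states[0])):
    X = np.stack([st[layer][0, pos].numpy() for st, (_, pos, _) in zip(states, examples)])
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    print(f"layer {layer:2d}  probe train acc = {probe.score(X, y):.2f}")

# Representational similarity check: cosine similarity, per layer, between the
# erroneous word ("go") and its corrected counterpart ("goes"). Similarity that
# grows with depth would be consistent with the encoder "correcting" the error.
bad = encoder_layer_states("She go to school every day.")
good = encoder_layer_states("She goes to school every day.")
for layer in range(len(bad)):
    u, v = bad[layer][0, 1], good[layer][0, 1]
    cos = torch.nn.functional.cosine_similarity(u, v, dim=0).item()
    print(f"layer {layer:2d}  cosine(go, goes) = {cos:.3f}")
```

A real experiment would, of course, align subword tokens to words, use held-out probe evaluation, and run over a full corpus of minimal grammatical/ungrammatical pairs; this sketch only shows the shape of the two analyses.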
| Field | Value |
|---|---|
| Original language | Undefined/Unknown |
| Title of host publication | Findings of the Association for Computational Linguistics: ACL 2025 |
| Editors | Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar |
| Place of Publication | Vienna, Austria |
| Publisher | Association for Computational Linguistics |
| Pages | 8579-8601 |
| Number of pages | 23 |
| ISBN (Print) | 979-8-89176-256-5 |
| Publication status | Published - 1 Jul 2025 |