Beyond Noise: Mitigating the Impact of Fine-grained Semantic Divergences on Neural Machine Translation

Abstract

While it has been shown that Neural Machine Translation (NMT) is highly sensitive to noisy parallel training samples, prior work treats all types of mismatches between source and target as noise. As a result, it remains unclear how samples that are mostly equivalent but contain a small number of semantically divergent tokens impact NMT training. To close this gap, we analyze the impact of different types of fine-grained semantic divergences on Transformer models. We show that models trained on synthetic divergences output degenerate text more frequently and are less confident in their predictions. Based on these findings, we introduce a divergence-aware NMT framework that uses factors to help NMT recover from the degradation caused by naturally occurring divergences, improving both translation quality and model calibration on EN↔FR tasks.
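The sketch below is a minimal, illustrative take (not the paper's code) on how token-level divergence factors could be attached to source tokens before a standard Transformer encoder, assuming each token carries a binary tag (equivalent vs. divergent). The class name `FactoredEmbedding`, the embedding sizes, and the concatenation strategy are all assumptions made for illustration.

```python
# Hedged sketch: combine word embeddings with a small embedding of a per-token
# divergence factor (0 = equivalent, 1 = semantically divergent). All names and
# dimensions here are illustrative, not the authors' implementation.
import torch
import torch.nn as nn


class FactoredEmbedding(nn.Module):
    """Concatenate word embeddings with a factor embedding per source token."""

    def __init__(self, vocab_size: int, d_word: int = 504, d_factor: int = 8,
                 num_factors: int = 2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_word)
        self.factor_emb = nn.Embedding(num_factors, d_factor)

    def forward(self, token_ids: torch.Tensor, factor_ids: torch.Tensor) -> torch.Tensor:
        # token_ids, factor_ids: (batch, seq_len)
        # output: (batch, seq_len, d_word + d_factor), ready for a Transformer encoder
        return torch.cat([self.word_emb(token_ids), self.factor_emb(factor_ids)], dim=-1)


if __name__ == "__main__":
    emb = FactoredEmbedding(vocab_size=32000)
    tokens = torch.randint(0, 32000, (2, 6))
    factors = torch.randint(0, 2, (2, 6))   # 0 = equivalent, 1 = divergent
    encoder_input = emb(tokens, factors)    # feed into a standard Transformer encoder
    print(encoder_input.shape)              # torch.Size([2, 6, 512])
```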

Publication
Proceedings of ACL 2021
Eleftheria Briakou

I research Multilingual NLP and Machine Translation.