Neural machine translation (NMT) is an approach to machine translation that uses an artificial neural network to predict the likelihood of a sequence of words, typically modeling and then translating entire sentences in a single integrated model.
NMT systems require only a fraction of the memory needed by traditional statistical machine translation (SMT) models. Furthermore, unlike conventional translation systems, all parts of the neural translation model are trained jointly (end-to-end) to maximize the translation performance.
The first scientific papers on end-to-end neural network translation appeared in 2014, when Bahdanau et al.[R 1] and Sutskever et al.[R 2] proposed such models and formally used the phrase "neural machine translation" in their research. The first large-scale NMT system was launched by Baidu in 2015, and the following year Google and other organizations launched a variety of NMT systems. Substantial advances followed in subsequent years, including large-vocabulary NMT, applications to image captioning, subword NMT, multilingual NMT, multi-source NMT, character-level NMT, zero-resource NMT, and zero-shot NMT. In 2015 an NMT system appeared for the first time in two public machine translation competitions (OpenMT'15 and WMT'15); the following year, 90% of the winners at WMT were NMT systems.
Since 2017, the European Patent Office has used neural machine translation to make information from the global patent system instantly accessible. The system, developed in collaboration with Google, supports 31 languages, and as of 2018 it had translated over nine million documents.
NMT systems use deep learning and representation learning, taking the whole sentence into account rather than relying on the separately engineered subcomponents of phrase-based statistical approaches. NMT performs a task similar to traditional statistical machine translation (SMT); the main difference is the use of vector representations ("embeddings", or continuous space representations) for words and internal states. The model structure in NMT is also simpler than in phrase-based systems: there is no separate language model, translation model, and reordering model, just a single sequence model that predicts one word at a time. However, each prediction is conditioned on the entire source sentence and on the target sequence produced so far.
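The "single sequence model" view can be illustrated with a toy decoding loop. The stub distribution below is hypothetical (a real system computes it from embeddings and learned network weights); it only shows the shape of the computation: one target word predicted at a time, conditioned on the source sentence and the target prefix generated so far.

```python
def toy_next_word_distribution(source_tokens, target_prefix):
    """Stand-in for a trained neural model: returns a probability for each
    candidate next word given the source sentence and the target prefix.
    The hard-coded numbers are purely illustrative."""
    vocab = ["the", "cat", "sat", "<eos>"]
    step = len(target_prefix)
    probs = [0.1] * len(vocab)
    probs[min(step, len(vocab) - 1)] = 0.7  # favour one word per step
    total = sum(probs)
    return {w: p / total for w, p in zip(vocab, probs)}

def greedy_decode(source_tokens, max_len=10):
    """Greedy decoding: repeatedly pick the most probable next word,
    re-conditioning on everything produced so far, until end-of-sentence."""
    target = []
    for _ in range(max_len):
        dist = toy_next_word_distribution(source_tokens, target)
        word = max(dist, key=dist.get)
        if word == "<eos>":
            break
        target.append(word)
    return target

print(greedy_decode(["die", "katze", "sass"]))  # → ['the', 'cat', 'sat']
```

Real systems typically replace the greedy choice with beam search, keeping several candidate prefixes at each step, but the conditioning structure is the same.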
Word sequence modeling in NMT systems was at first typically done using recurrent neural networks (RNNs). A bidirectional RNN, known as the encoder, encodes the source sentence for a second RNN, known as the decoder, which predicts words in the target language. RNNs struggle to encode long inputs into a single vector; this can be addressed with an attention mechanism, which allows the decoder to focus on different parts of the input while generating each word. Coverage models go further and address shortcomings of these attention mechanisms, such as ignoring past alignment information, which leads to over-translation and under-translation.
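A minimal sketch of an attention step, using simple dot-product scoring (the original Bahdanau et al. mechanism uses a small learned alignment network instead, so this is a simplified variant): the decoder state is scored against every encoder state, the scores are normalised with a softmax, and the resulting weights form a context vector focused on the most relevant source positions.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attention_context(decoder_state, encoder_states):
    """Dot-product attention: score each encoder state against the current
    decoder state, normalise, and return the weighted sum (the context
    vector) together with the attention weights."""
    scores = [sum(d * e for d, e in zip(decoder_state, h))
              for h in encoder_states]
    weights = softmax(scores)
    dim = len(decoder_state)
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(dim)]
    return context, weights

# Three encoder states for a three-word source sentence; the decoder state
# is most similar to the second one, so attention concentrates there.
enc = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
ctx, w = attention_context([0.0, 2.0], enc)
```

Because the weights are recomputed at every decoding step, the decoder can attend to a different source word for each target word it emits.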
Convolutional neural networks (CNNs) are, in principle, somewhat better suited to long continuous sequences, but they were initially not used because of several weaknesses, many of which were later addressed with the development of attention mechanisms.
Transformer systems, which are attention-based models, remain the dominant architecture for several language pairs. The self-attention layers of the Transformer model learn the dependencies between words in a sequence by examining links between all the words in the paired sequences and by directly modeling those relationships. This is a simpler approach than the gating mechanism that RNNs employ, and its simplicity has enabled researchers to develop high-quality translation models with the Transformer, even in low-resource settings.
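The "links between all the words" idea can be sketched as scaled dot-product self-attention. This toy version omits the learned query/key/value projection matrices of the real Transformer (it uses the input vectors directly), so it only illustrates the pairwise structure: every position attends to every other position in one step, with no recurrence.

```python
import math

def softmax(xs):
    """Numerically stable softmax."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

def self_attention(seq):
    """Scaled dot-product self-attention with identity projections:
    each position's output is a weighted mix of every position's vector,
    so dependencies between any pair of words are modelled directly."""
    d = len(seq[0])
    out = []
    for q in seq:  # each position issues a query against all positions
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in seq]
        weights = softmax(scores)  # attention over the whole sequence
        out.append([sum(wi * v[j] for wi, v in zip(weights, seq))
                    for j in range(d)])
    return out

# Three word vectors; each output row blends information from all three.
vectors = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = self_attention(vectors)
```

In a full Transformer this operation is applied with multiple learned heads and stacked in layers, but the all-pairs interaction shown here is the core mechanism.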
One application of NMT is low-resource machine translation, where only a small amount of data and few examples are available for training. One such use case is ancient languages like Akkadian and its dialects, Babylonian and Assyrian.
- Bahdanau, Dzmitry; Cho, Kyunghyun; Bengio, Yoshua (2015). "Neural Machine Translation by Jointly Learning to Align and Translate". Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015). San Diego, USA.
- Sutskever, Ilya; Vinyals, Oriol; Le, Quoc V. (2014). "Sequence to Sequence Learning with Neural Networks". Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS 2014). Montreal, QC, Canada.
- Kalchbrenner, Nal; Blunsom, Philip (2013). "Recurrent Continuous Translation Models". Proceedings of the Association for Computational Linguistics: 1700–1709.
- Kyunghyun Cho; Bart van Merrienboer; Dzmitry Bahdanau; Yoshua Bengio (3 September 2014). "On the Properties of Neural Machine Translation: Encoder–Decoder Approaches". arXiv:1409.1259 [cs.CL].
- Wang, Haifeng; Wu, Hua; He, Zhongjun; Huang, Liang; Church, Kenneth Ward (2021). "Progress in Machine Translation". Engineering. doi:10.1016/j.eng.2021.03.023.
- Bojar, Ondrej; Chatterjee, Rajen; Federmann, Christian; Graham, Yvette; Haddow, Barry; Huck, Matthias; Yepes, Antonio Jimeno; Koehn, Philipp; Logacheva, Varvara; Monz, Christof; Negri, Matteo; Névéol, Aurélie; Neves, Mariana; Popel, Martin; Post, Matt; Rubino, Raphael; Scarton, Carolina; Specia, Lucia; Turchi, Marco; Verspoor, Karin; Zampieri, Marcos (2016). "Findings of the 2016 Conference on Machine Translation" (PDF). ACL 2016 First Conference on Machine Translation (WMT16). The Association for Computational Linguistics: 131–198. Archived from the original (PDF) on 2018-01-27. Retrieved 2018-01-27.
- "Neural Machine Translation". European Patent Office. 16 July 2018. Retrieved 14 June 2021.
- Wołk, Krzysztof; Marasek, Krzysztof (2015). "Neural-based Machine Translation for Medical Text Domain. Based on European Medicines Agency Leaflet Texts". Procedia Computer Science. 64 (64): 2–9. arXiv:1509.08644. Bibcode:2015arXiv150908644W. doi:10.1016/j.procs.2015.08.456. S2CID 15218663.
- Bahdanau, Dzmitry; Cho, Kyunghyun; Bengio, Yoshua (2014). "Neural Machine Translation by Jointly Learning to Align and Translate". arXiv:1409.0473 [cs.CL].
- Tu, Zhaopeng; Lu, Zhengdong; Liu, Yang; Liu, Xiaohua; Li, Hang (2016). "Modeling Coverage for Neural Machine Translation". arXiv:1601.04811 [cs.CL].
- Coldewey, Devin (2017-08-29). "DeepL schools other online translators with clever machine learning". TechCrunch. Retrieved 2018-01-27.
- Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N.; Kaiser, Lukasz; Polosukhin, Illia (2017-12-05). "Attention Is All You Need". arXiv:1706.03762 [cs.CL].
- Barrault, Loïc; Bojar, Ondřej; Costa-jussà, Marta R.; Federmann, Christian; Fishel, Mark; Graham, Yvette; Haddow, Barry; Huck, Matthias; Koehn, Philipp; Malmasi, Shervin; Monz, Christof (August 2019). "Findings of the 2019 Conference on Machine Translation (WMT19)". Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1). Florence, Italy: Association for Computational Linguistics: 1–61. doi:10.18653/v1/W19-5301.
- Wdowiak, Eryk (2021-09-27). "Sicilian Translator: A Recipe for Low-Resource NMT". arXiv:2110.01938 [cs.CL].
- Gutherz, Gai; Gordin, Shai; Sáenz, Luis; Levy, Omer; Berant, Jonathan (2023-05-02). Kearns, Michael (ed.). "Translating Akkadian to English with neural machine translation". PNAS Nexus. 2 (5). doi:10.1093/pnasnexus/pgad096. ISSN 2752-6542. PMC 10153418. PMID 37143863.