Book Review: Translation Quality Assessment: From Principles to Practice

Rocío Caro Quintana

Abstract

With the growth of digital content and the consequences of globalization, more content is published every day, and it needs to be translated to make it accessible to people all over the world. This process has been made fast and straightforward by Machine Translation (MT), the automatic translation of texts by computer software in a matter of seconds. Nevertheless, since the quality of MT output is still far from perfect, the texts have to be checked to ensure they are comprehensible. Translation Quality Assessment: From Principles to Practice, edited by Joss Moorkens, Sheila Castilho, Federico Gaspari and Stephen Doherty (2018), deals with the different ways, automatic and manual, in which these translations can be evaluated. The volume covers how the field has changed over the decades (from 1978 to 2018), the different methods that can be applied, and some considerations for future Translation Quality Assessment applications.
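
The automatic side of this evaluation typically relies on metrics such as BLEU, METEOR or TER, which score MT output against human reference translations. As a minimal sketch of what automatic assessment looks like in practice (the library choice and example sentences are my own, not drawn from the book), a BLEU score can be computed with the Python sacrebleu package:

    # Minimal sketch: scoring MT output against a human reference with BLEU.
    # Requires the sacrebleu package (pip install sacrebleu); the sentences
    # below are invented for illustration.
    import sacrebleu

    hypotheses = ["The cat sat on the mat."]           # MT output to assess
    references = [["The cat is sitting on the mat."]]  # one stream of human references

    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    print(f"BLEU: {bleu.score:.2f}")  # higher means closer n-gram overlap with the reference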

Article Details

How to Cite
Caro Quintana, R. (2020). Book Review: Translation Quality Assessment: From Principles to Practice. Language Value, 13(1), 110–115. https://doi.org/10.6035/LanguageV.2020.13.6
Section
Book and multimedia reviews
