Sentiment Analysis as a Quality Assurance Tool in Translator Training: A Pedagogical Case Study

Gabriel Cabrera-Méndez
Raquel Lázaro-Gutiérrez

Abstract

This paper presents a pedagogical case study on the use of sentiment analysis as a quality assurance tool in translator training. Conducted at the University of Alcalá (Spain), the exercise involved students analysing the sentiment of English source texts and their Spanish translations, focusing on neutrality coefficients ranging from -1 to +1. Results showed that sentiment analysis offers a promising complement to traditional quality assessment, particularly for politically sensitive texts where tonal fidelity is critical. Students found the activity both engaging and useful for developing affective sensitivity. Although the study was limited to a single cohort and relied on one AI model, the outcomes support the incorporation of sentiment analysis into translator education. With further practice, this method could be transferred to professional workflows, offering Language Service Providers an additional tool to ensure emotional and pragmatic consistency between source and target texts.
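The workflow summarised above (scoring a source segment and its translation on a shared -1 to +1 polarity scale and comparing the two values) can be illustrated with the Hugging Face sentiment-analysis pipeline cited in the reference list. The sketch below is a hypothetical reconstruction only: the article does not state which model, polarity mapping, threshold, or texts the students worked with, so the model name, the probability-to-polarity mapping, the tolerance value, and the example sentences are all assumptions for illustration.

```python
# Minimal sketch (not the study's actual setup): estimate a polarity
# coefficient in [-1, +1] for an English source segment and its Spanish
# translation, then flag tonal drift between the two.
# The model, mapping, threshold, and sentences are illustrative assumptions.
from transformers import pipeline

MODEL = "cardiffnlp/twitter-xlm-roberta-base-sentiment"  # assumed multilingual sentiment model
classifier = pipeline("sentiment-analysis", model=MODEL, top_k=None)

def polarity(text: str) -> float:
    """Map label probabilities to P(positive) - P(negative), a value in [-1, +1]."""
    scores = {d["label"].lower(): d["score"] for d in classifier([text])[0]}
    return scores.get("positive", 0.0) - scores.get("negative", 0.0)

source = "The minister's statement was a cautious step forward."      # English source
target = "La declaración del ministro fue un tímido paso adelante."   # Spanish translation

src_pol, tgt_pol = polarity(source), polarity(target)
drift = tgt_pol - src_pol
print(f"source {src_pol:+.2f} | target {tgt_pol:+.2f} | drift {drift:+.2f}")
if abs(drift) > 0.2:  # arbitrary illustrative tolerance
    print("Warning: sentiment shift between source and target exceeds the tolerance.")
```

The drift value is only a rough proxy for the kind of source-target comparison described in the abstract; in practice, such scores would be computed per segment or per document and interpreted alongside human judgement.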

Article Details

How to Cite
Cabrera-Méndez, G., & Lázaro-Gutiérrez, R. (2025). Sentiment Analysis as a Quality Assurance Tool in Translator Training: A Pedagogical Case Study. Language Value. https://doi.org/10.6035/languagev.8811
Section
Articles
Author Biographies

Gabriel Cabrera-Méndez, Universidad de Alcalá de Henares, Spain

Gabriel Cabrera Méndez holds a BA in Translation and Interpreting from the University of Granada and is a sworn translator and conference interpreter. He combines his professional work as a translator and interpreter with teaching at public institutions such as the University of Alcalá, where he teaches on the following programmes:

  • BA in Modern Languages and Translation

  • Master's Degree in Business-Oriented Conference Interpreting

  • Master's in Intercultural Communication, Translation and Interpreting in Public Services

He also teaches at private institutions such as Trágora Formación.

He works as an interpreter for the Spanish State Security Forces and began his career as a conflict-zone interpreter (Kosovo) and as an interpreter for the Spanish Royal Household.

His professional specialisations include: international mediation, terrorism, borders, public services, healthcare, refugees, asylum, and international protection.

He is Head of Quality for telephone interpreting at Dualia Teletraducciones, a member of the FITISPos research group, and a member of the Sociedad Científica de Mérida (recipient of the 2021 International Award for Teaching Innovation).

He is also the author of Mamá, quiero ser intérprete and other publications, often focused on measuring the quality of interpreting.

He is currently a PhD candidate and is writing his thesis on a real-time quality assessment system for interpreting.

Raquel Lázaro-Gutiérrez, Universidad de Alcalá de Henares, Spain

Raquel Lázaro-Gutiérrez is an Associate Professor in the Department of Modern Philology at the University of Alcalá (Madrid, Spain), where she teaches in the Master’s Degree in Intercultural Communication and Public Service Interpreting and Translation. She has been a member of several research groups, including FITISPos-UAH in Spain (since 2001) and, in Belgium, BIAL (since 2015) and BCUS (since 2018). She is the vice-president of the European Association for Public Service Interpreting and Translation (ENPSIT), chair of the Spanish Cluster in Language Technology (Madrid), and a member of the Stakeholder Assembly of the Interpreting SAFE-AI Task Force.

She has been the principal investigator of several projects, such as “Corpus pragmatics and telephone interpreting”, funded by the Spanish Government (2023-2026). She has participated in European research projects such as SOS-VICS (2011-2014), AHEH-Knowledge Alliances (2018-2021), and MHEALTH4ALL (2022-2025); in national Spanish projects such as Validación y adaptación transcultural de la Appraisal of Self-Care Agency Scale-Revised (ASA-R) (2017-2018), COMUNICAR (2016-2018), InterMed (2011-2014), and HUM2004-03774-C02-2 (2004-2007); in regional projects such as “Investigación-acción: Caminando juntos con lenguas y culturas” (2009-2011), funded by Castilla-La Mancha, and “Creando Puentes” (2008-2009), funded by the Madrid Autonomous Community; and in local projects such as “Diseño, compilación y análisis de un corpus multilingüe de interacciones mediadas sobre asistencia en carretera” (2017-2018) and “Interpretación y Traducción en Centros Penitenciarios” (2013-2014).

She has received three knowledge-transfer awards, for the creation of MOOCs and for university-enterprise cooperation in projects on telephone interpreting.

References

Angelelli, Claudia V., & Baer, Brian James (Eds.). (2016). Researching Translation and Interpreting. Routledge. https://www.routledge.com/Researching-Translation-and-Interpreting/Angelelli-Baer/p/book/9781138849155

Bahdanau, Dzmitry, Cho, Kyunghyun, & Bengio, Yoshua. (2015). Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations (ICLR). https://arxiv.org/abs/1409.0473

Baker, Mona. (2006). Translation and Conflict: A Narrative Account. Routledge. https://www.routledge.com/Translation-and-Conflict-A-Narrative-Account/Baker/p/book/9780415371586

Banerjee, Satanjeev, & Lavie, Alon. (2005). METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization (pp. 65–72). https://aclanthology.org/W05-0909/

Biel, Łucja. (2011). Training translators or translation service providers? EN 15038:2006 and its training implications. The Journal of Specialised Translation, 16, 61–76. https://jostrans.org/issue16/art_biel.pdf

Castilho, Sheila, Gaspari, Federico, Moorkens, Joss, & Way, Andy. (2017). Is neural machine translation the new state of the art? The Prague Bulletin of Mathematical Linguistics, 108(1), 109–120. https://doi.org/10.1515/pralin-2017-0013

Devlin, Jacob, Chang, Ming-Wei, Lee, Kenton, & Toutanova, Kristina. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT (pp. 4171–4186). https://aclanthology.org/N19-1423/

Gómez-González, María Ángeles. (2021). Translation Quality Assessment: From Metrics to Practices. Routledge. https://doi.org/10.4324/9780429274743

Han, Liting, Smeaton, Alan F., & Jones, Gareth J. F. (2021). Translation quality assessment: A brief survey on manual and automatic methods. In Proceedings of the First Workshop on Modelling Translation: Translatology in the Digital Age (pp. 15–33). ACL. https://aclanthology.org/2021.mtdp-1.3/

House, Juliane. (1997). Translation Quality Assessment: A Model Revisited. Gunter Narr Verlag. https://www.narr.de/translation-quality-assessment-a-model-revisited-9783823340014

Hugging Face. (n.d.). Sentiment analysis. Hugging Face. Retrieved April 26, 2025, from https://huggingface.co/tasks/sentiment-analysis

Karakanta, Aljoscha, van Genabith, Josef, & Way, Andy. (2021). Measuring sentiment transfer in neural machine translation. Machine Translation, 35(3), 261–289. https://doi.org/10.1007/s10590-021-09264-0

Kim, Yoon. (2014). Convolutional neural networks for sentence classification. In Proceedings of EMNLP 2014 (pp. 1746–1751). https://aclanthology.org/D14-1181/

Liu, Bing. (2012). Sentiment Analysis and Opinion Mining. Morgan & Claypool. https://doi.org/10.2200/S00416ED1V01Y201204HLT016

Liu, Yinhan, Ott, Myle, Goyal, Naman, Du, Jingfei, Joshi, Mandar, Chen, Danqi, & Stoyanov, Veselin. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. https://arxiv.org/abs/1907.11692

Munday, Jeremy. (2012). Evaluation in Translation: Critical Points of Translator Decision-Making. Routledge. https://www.routledge.com/Evaluation-in-Translation-Critical-Points-of-Translator-Decision-Making/Munday/p/book/9780415584894

Nida, Eugene A. (1964). Toward a Science of Translating: With Special Reference to Principles and Procedures Involved in Bible Translating. Brill. https://brill.com/view/title/15325

Pang, Bo, Lee, Lillian, & Vaithyanathan, Shivakumar. (2002). Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of EMNLP 2002 (pp. 79–86). https://aclanthology.org/W02-1011/

Pang, Bo, & Lee, Lillian. (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1–2), 1–135. https://doi.org/10.1561/1500000011

Papineni, Kishore, Roukos, Salim, Ward, Todd, & Zhu, Wei-Jing. (2002). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (pp. 311–318). https://aclanthology.org/P02-1040/

Ribeiro, Fernando. (2017). On the role of machine translation output in translator training: A preliminary study. Journal of Specialised Translation, 28, 129–152. https://jostrans.org/issue28/art_ribeiro.php

Rivera-Trigueros, Irene. (2022). Machine translation systems and quality assessment: A systematic review. Language Resources and Evaluation, 56, 593–619. https://doi.org/10.1007/s10579-021-09537-5

Saadany, Hossam, Orasan, Constantin, Mohamed, Ehab, & Tantawy, Ahmed. (2021). Sentiment-aware measure (SAM) for evaluating sentiment transfer by machine translation systems. arXiv preprint arXiv:2109.14895. https://arxiv.org/abs/2109.14895

Snover, Matthew, Dorr, Bonnie, Schwartz, Richard, Micciulla, Linnea, & Makhoul, John. (2006). A study of translation edit rate with targeted human annotation. In Proceedings of AMTA 2006 (pp. 223–231). https://aclanthology.org/2006.amta-papers.25/

Taboada, Maite, Brooke, Julian, Tofiloski, Milena, Voll, Kimberly, & Stede, Manfred. (2011). Lexicon-based methods for sentiment analysis. Computational Linguistics, 37(2), 267–307. https://doi.org/10.1162/COLI_a_00049

Toral, Antonio. (2020). Insights from the application of machine translation quality evaluation at the document level. Machine Translation, 34(1–2), 1–27. https://doi.org/10.1007/s10590-020-09225-2