The Banality of (Automated) Evil: Critical Reflections on the Concept of Forbidden Knowledge in Machine Learning Research

Rosa M. Senent
Diego Bueso

Abstract

The development of computer science has raised ethical concerns regarding the potential negative impacts of machine learning tools on people and society. Examples include pornographic deepfakes used as weapons of war against women; pattern recognition designed to uncover sexual orientation; and the misuse of data and deep learning by private companies to influence democratic elections. We contend that these three examples are cases of automated evil. In this article, we argue that the concept of forbidden knowledge can help to inform a coherent ethical framework in the context of machine learning research. We conclude that restricting generalised access to extensive data and limiting access to ready-to-use code would mitigate potential harms caused by machine learning tools. In addition, the notions of intersectionality and interdisciplinarity should be systematically introduced into data and computer science research.

Article Details

How to Cite
Senent Julián, R. M., & Bueso Acevedo, D. (2022). The Banality of (Automated) Evil: Critical Reflections on the Concept of Forbidden Knowledge in Machine Learning Research. RECERCA. Revista De Pensament I Anàlisi, 27(2). https://doi.org/10.6035/recerca.6147
Section
Articles
Author Biographies

Rosa M. Senent, Dublin City University

Rosa M. Senent has a Philosophy Degree (2013) from the University of València and an Erasmus Mundus Master's Degree in Women and Gender Studies (2017) from the University of Oviedo and the University of Łódź (Poland). She is now pursuing a PhD in Sociology and Gender Studies at Dublin City University (Ireland). She is currently based in Spain. She can be reached at roseju@outlook.es.

Diego Bueso, University of Valencia

Diego Bueso obtained his degree in Physics (2016) and a Master's Degree in Remote Sensing (2017) from the Universitat de València. He is currently pursuing a PhD at the Image and Signal Processing Group (ISP), working on machine learning applications in geoscience and developing methods to infer spatio-temporal causal relations from Earth observation data.

References

Ajder, Henry; Patrini, Giorgio; Cavalli, Francesco & Cullen, Laurence (2019). The state of deepfakes: landscape, threats, and impact. Amsterdam: Deeptrace.

Alaa, Ahmed; Bolton, Thomas; Di Angelantonio, Emanuele et al. (2019). Cardiovascular disease risk prediction using automated machine learning: A prospective study of 423,604 UK Biobank participants. PLOS ONE,14(5),1-17.

Allen, Robin & Masters, Dee (2020). Artificial Intelligence: the right to protection from discrimination caused by algorithms, machine learning and automated decision-making. ERA Forum,20,585–598.

Arendt, Hannah (1999). Eichmann en Jerusalén: un estudio sobre la banalidad del mal. Barcelona: Lumen.

Baum, Seth (2020). Artificial Interdisciplinarity: Artificial Intelligence for Research on Complex Societal Problems. Philosophy & Technology. Available at SSRN: https://ssrn.com/abstract=3651313 [Consulted 13 July 2021].

Bender, Emily; Gebru, Timnit; McMillan-Major, Angelina & Shmitchell, Shmargaret (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency,610–623.

Bourdieu, Pierre (2000). La dominación masculina. Barcelona: Anagrama.

Bradshaw, Samantha & Howard, Philip (2019). The global disinformation order: 2019 global inventory of organised social media manipulation. Oxford: Project on Computational Propaganda.

Buolamwini, Joy & Gebru, Timnit (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on fairness, accountability and transparency. PMLR,77–91.

Crenshaw, Kimberlé (1990). Mapping the margins: Intersectionality, identity politics, and violence against women of color. Stanford Law Review,43(6),1241-1299.

Dines, Gail (2010). Pornland: How porn has hijacked our sexuality. Boston: Beacon Press.

Dunn, Suzie (2020). Technology-Facilitated Gender-Based Violence: An Overview. Centre for International Governance Innovation: Supporting a Safer Internet,(1).

Feuerriegel, Stefan; Dolata, Mateusz & Schwabe, Gerhard (2020). Fair AI. Business & Information Systems Engineering,62(4),379–384.

Gebru, Timnit (2019). Race & Gender. In Oxford Handbooks of AI Ethics. Oxford: Oxford Handbooks.

Hagendorff, Thilo (2020a). Forbidden knowledge in machine learning: Reflections on the limits of research and publication. AI & SOCIETY,1–15.

Hagendorff, Thilo (2020b). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines,30(1),99–120.

Ham, Yoo-Geun; Kim, Jeong-Hwan & Luo, Jing-Jia (2019). Deep learning for multi-year ENSO forecasts. Nature,573(7775),568–572.

Harding, Sandra (1996). Ciencia y feminismo. Madrid: Morata.

Heaven, Douglas (2019). Why deep-learning AIs are so easy to fool. Nature, 574(7777),163–166.

Henry, Nicola; McGlynn, Clare; Flynn, Asher et al. (2020). Image-based Sexual Abuse: A Study on the Causes and Consequences of Non-consensual Nude or Sexual Imagery. London: Routledge.

Henry, Nicola; Powell, Anastasia & Flynn, Asher (2018). AI can now create fake porn, making revenge porn even more complicated. The Conversation, 1 March 2018.

Howard, Philip & Kollanyi, Bence (2016). Bots, #StrongerIn, and #Brexit: Computational propaganda during the UK-EU referendum. Available at SSRN: https://ssrn.com/abstract=2798311 [Consulted 20 July 2021].

Jobin, Anna; Ienca, Marcello & Vayena, Effy (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence,1(9),389–399.

Johnson, Deborah (1996). Forbidden knowledge and science as professional activity. The Monist,79(2),197–217.

Johnson, Deborah (1999). Reframing the question of forbidden knowledge for modern science. Science and Engineering Ethics,5(4),445–461.

Kelly, Liz (1987). The Continuum of Sexual Violence. In Hanmer, Jalna & Maynard, Mary (Eds.). Women, Violence and Social Control (46-60). London: Palgrave Macmillan.

Kempner, Joanna; Perlis, Clifford & Merz, Jon (2005). Forbidden knowledge. Science,307(5711),854.

Khan, Saad; Liu, Xiaoxuan; Nath, Siddharth et al. (2021). A global review of publicly available datasets for ophthalmological imaging: barriers to access, usability, and generalisability. The Lancet Digital Health,3(1),51-66.

Kikerpill, Kristjan (2020). Choose your stars and studs: the rise of deepfake designer porn, Porn Studies,7(4),352–356.

Kimmel, Michael (2000). The gendered society. New York: Oxford University Press.

Kuhn, Thomas (1971). La estructura de las revoluciones científicas. México: Fondo de Cultura Económica.

Kusters, Remy; Misevic, Dusan; Berry, Hugues et al. (2020). Interdisciplinary Research in Artificial Intelligence: Challenges and Opportunities. Frontiers in Big Data,3,45.

Latonero, Mark (2018). Governing artificial intelligence: Upholding human rights & dignity. Data & Society,1–37.

Le, Thai Hoang (2011). Applying Artificial Neural Networks for Face Recognition. Advances in Artificial Neural Systems, 2011,1687-7594.

Maddocks, Sophie (2020). ‘A Deepfake Porn Plot Intended to Silence Me’: exploring continuities between pornographic and ‘political’ deep fakes. Porn Studies, 7(4),415–423.

McGlynn, Clare & Rackley, Erika (2017). Image-based sexual abuse. Oxford Journal of Legal Studies,37(3),534–561.

McGlynn, Clare; Johnson, Kelly; Rackley, Erika; Henry, Nicola et al. (2021). ‘It’s Torture for the Soul’: The Harms of Image-Based Sexual Abuse. Social & Legal Studies,30(4),541–562.

Nissani, Moti (1997). Ten cheers for interdisciplinarity: The case for interdisciplinary knowledge and research. The Social Science Journal,34(2),201–216.

Pastor-Galindo, Javier; Zago, Mattia; Martínez, Gregorio et al. (2020). Spotting political social bots in Twitter: A use case of the 2019 Spanish general election. IEEE Transactions on Network and Service Management,17(4),2156–2170.

Powell, Anastasia; Scott, Adrian; Flynn, Asher & Henry, Nicola (2020). Image-based sexual abuse: An international study of victims and perpetrators. A Summary Report. Criminology,0817-8542.

Rackley, Erika; McGlynn, Clare; Johnson, Kelly; Henry, Nicola et al. (2021). Seeking justice and redress for victim-survivors of image-based sexual abuse. Feminist Legal Studies,0966-3622.

Robinson, Melody (2019). Biphobia, Rape Myth Acceptance, and Victim Blame for Bisexual Survivors of Sexual Assault. OSR Journal of Student Research,5(329).

Rolnick, David; Donti, Priya; Kaack, Lynn; Kochanski, Kelly et al. (2019). Tackling Climate Change with Machine Learning. arXiv:1906.05433 [cs, stat] [Preprint]. Available at: http://arxiv.org/abs/1906.05433 [Consulted 7 September 2021].

Russell, Diana (1990). Rape in marriage. Bloomington: Indiana University Press.

Sarewitz, Daniel (2016). The pressure to publish pushes down quality. Nature,533(7602),147.

Smith, David (1978). Scientific Knowledge and Forbidden Truths. The Hastings Center Report,8(6),30–35.

Tajalli, Payman (2021). AI ethics and the banality of evil. Ethics and Information Technology,1–8.

Viseu, Ana (2015). Integration of social science into research is crucial. Nature,525(7569),291.

Walby, Sylvia (1992). Theorizing patriarchy. Oxford: Blackwell.

Wang, Yilun & Kosinski, Michal (2018). Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology,114(2),246–257.

Welzer-Lang, Daniel (2008). Speaking Out Loud About Bisexuality: Biphobia in the Gay and Lesbian Community. Journal of Bisexuality,8(1-2),81-95.

Westerlund, Mika (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review,9(11).

Wong, Karen & Dobson, Amy (2019). We’re just data: Exploring China’s social credit system in relation to digital platform ratings cultures in Westernised democracies. Global Media and China,4(2),220–232.

Youyou, Wu; Kosinski, Michal & Stillwell, David (2015). Computer-based personality judgments are more accurate than those made by humans. Proceedings of the National Academy of Sciences,112(4),1036–1040.