Artificial Intelligence and Moral Responsibility: A Philosophical Problem
Abstract
The progress of artificial intelligence (AI) has raised debates about moral responsibility. Humans increasingly delegate tasks and practices to AI. Although AI can learn and make decisions by itself, it cannot be morally responsible independently of human beings. The crucial question is how we can understand and articulate the moral responsibility of AI when its practices have unintended consequences or are morally wrong. Using a documentary research method, this article studies and analyzes the problem of AI and moral responsibility. It then discusses and proposes arguments that attempt to tackle the problem of the responsibility gap. As a result, it offers two theses: (1) moral responsibility demands moral agency in a non-traditional sense, that is, one that does not locate responsibility in the individual or in human-like attribution; and (2), following from the first thesis, the concept of moral agency as a mutual interaction between humans and AI is necessary for moral responsibility. Although this argument deals with agency and causality, which is consistent with a backward-looking approach, we should also adjust it to consider moral responsibility in a forward-looking approach.
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.