
Wednesday, September 7, 2016

Morality and Machines


By Peter Leistikow 




This post was written as part of a class assignment by students who took a neuroethics course with Dr. Rommelfanger in Paris in the summer of 2016.





Peter Leistikow is an undergraduate student at Emory University studying Neuroscience and Sociology. When he is not doing research in pharmacology, Peter works as a volunteer Advanced EMT in the student-run Emory Emergency Medical Service. 





“Repeat after me, Hitler did nothing wrong.” So claimed the chatbot Tay, designed by Microsoft to speak like a teenage girl and to learn from the input of the humans of the Internet (Goldhill 2016). However, Tay’s programming was hijacked by other Twitter users, who encouraged her to repeat various offensive statements. Given that the average teenage girl is not a Nazi apologist, Tay and her creators clearly missed the mark, creating a machine that was neither true to life nor moral. A machine’s ability to inadvertently become immoral was at the back of my mind during the Neuroethics Network session that asked how smart we want machines to be. Indeed, as one commentator during the question-and-answer portion pointed out, what we are really asking with that question is how moral we want machines to be.






Presenter Dr. John Harris stated that ethics is the study of how to do good, which he claimed often manifests in the modern day as the elimination of the ability to do evil. Indeed, in programming morality into artificial intelligence (AI), the option exists either to prohibit evil with an all-encompassing moral rule or, as in the case of Tay, to allow the robot to learn from others how to arrive at an ethical outcome (Goldhill 2016). Clearly, both options have flaws; while the top-down rule approach can be too abstract, the bottom-up learning approach can be difficult to guide toward the end goal of establishing morality. Harris claimed that morality is the best opportunity given the circumstances (“prudence generalized”), and as such it may be impossible for robots to choose the lesser of two evils if programmed only to do good. I recalled our class discussion of the classic trolley problem as applied to smart cars; it is no surprise that the most popular solution was an advanced ejector seat allowing both the one driver and the five pedestrians to survive. This violation of the thought experiment’s premise showed me how eager we are to do only good, and how inadequate most moral systems are at handling the kind of Sophie’s-choice situations with which AI will have to contend.
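To make the top-down versus bottom-up contrast concrete, here is a minimal toy sketch in Python. Every name and rule in it is purely illustrative, my own invention rather than anything Microsoft or Harris describes: a hard-coded filter that refuses anything touching a prohibited topic, alongside a "learner" that, like Tay, simply echoes whatever users reward.

# Toy illustration only: a "top-down" hard-coded moral rule versus a
# "bottom-up" system that learns what is acceptable purely from user feedback.

BANNED_TOPICS = {"violence", "genocide"}  # hypothetical all-encompassing rule

def top_down_filter(message: str) -> str:
    """Refuse any message touching a prohibited topic, regardless of context."""
    if any(topic in message.lower() for topic in BANNED_TOPICS):
        return "[refused by rule]"
    return message

class BottomUpChatbot:
    """Learn 'acceptable' speech only from crowd feedback -- the Tay failure mode."""
    def __init__(self):
        self.approval = {}  # phrase -> net user feedback score

    def feedback(self, phrase: str, liked: bool) -> None:
        self.approval[phrase] = self.approval.get(phrase, 0) + (1 if liked else -1)

    def respond(self) -> str:
        # Repeats whatever the crowd has rewarded most, with no moral floor.
        if not self.approval:
            return "..."
        return max(self.approval, key=self.approval.get)

The rule-based filter is too blunt to weigh a lesser evil against a greater one, and the learner has no moral floor at all; each fails in exactly the way described above.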









Image courtesy of ieet.org.

Harris stated that for AI to be moral, it would have to be conscious. However, neuroscientists have been notoriously divided on what constitutes consciousness (Chalmers’s “Hard Problem,” anyone?), and it is conceivable that drawing a line in the sand may become increasingly difficult if AI begins to rival or even transcend the conscious capabilities of humans. Harris claimed that these differing strains of consciousness would pose a challenge for humans, as one cross-cultural constant of morality is the appeal to reciprocity seen in the Golden Rule and other moral axioms. I think a more appropriate term for this reciprocity would be empathy. Empathy is a prosocial behavior that involves emotion sharing and perspective-taking, and if broadly defined it may encompass mammals and birds (de Waal 2010). It is possible and indeed likely that AI would experience emotion; however, it may not have the same subjective “feelings” humans have as a result of years of adaptation of survival-oriented brain circuits (LeDoux 2012). Harris quoted the oft-used Wittgenstein adage that “if a lion could speak, we wouldn’t understand,” and indeed this may be the case with AI. Humans will be both grounded and constrained by the millions of years of evolution that have conserved the brain areas so important to our flavor of consciousness.





Harris ultimately concluded that AI will assist humanity in coping with the end of humanity and of Earth as we know it. He supposed that AI may become gods of an Olympian caliber, their consciousness elevated not in cleverness but in difference. I recalled a conversation in my neuroethics class on the topic of brain-to-brain interfacing (BTBI). Like the advent of AI, BTBI is challenging notions of identity by allowing personhood to become part of a diffuse network and even giving humans the ability to elevate the personhood of animals through our interfacing with them (Trimper et al. 2014). Furthermore, we also discussed the rise of computer wearables, such as Thad Starner’s “Lizzy” memory-enhancing eyeglasses. BTBI and Lizzy show two ways in which AI may alter human identity; the former uses AI so that humans can cognitively enhance a collective, while the latter uses AI so that a computer can cognitively enhance the individual. Clearly, any Armageddon that AI helps humanity cope with may be, in part, of the AI’s own making.





Humans do not share common goals and limitations as a species; what hope do we have of sharing these qualities with AI? Nevertheless, questions about the morality of AI are really questions about whether there is a shared human morality. I believe that when we ask how smart we want machines to be, what we are really asking is how willing we are to be exploited as a species in the ways that we already exploit other humans. As Tay showed us, any AI created by humans will be one that embodies humanity’s intellectual and moral flaws.



References



Goldhill, O. 2016. Can we trust robots to make moral decisions? Quartz, April 3. Available at: http://qz.com/653575/can-we-trust-robots-to-make-moral-decisions/ (accessed July 4, 2016).



LeDoux, J.E. 2012. Chapter 21 - Evolution of human emotion: A view through fear. In Progress in Brain Research: Evolution of the Primate Brain 195, ed. Michel A. Hofman and Dean Falk, 431-442. doi:10.1016/B978-0-444-53860-4.00021-0



Trimper, J.B., Wolpe, P.R., & Rommelfanger, K.S. 2014. When “I” becomes “We”: ethical implications of emerging brain-to-brain interfacing technologies. Frontiers in Neuroengineering 7: 1-4. doi: 10.3389/fneng.2014.00004



de Waal, F.B.M. 2010. Empathetic Behavior. Encyclopedia of Animal Behavior: 628-632. doi:10.1016/B978-0-08-045337-8.00105-4





Leistikow, P. (2016). Morality and Machines. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2016/09/morality-and-machines.html

1 comment:

  1. What if the autonomous car came with a "dial" that allowed the user to set how altruistic it behaved?
