By Yunmiao Wang
Miao is a second-year graduate student in the Neuroscience Program at Emory University. She has watched Black Mirror since it first came out and has always been interested in topics in neuroethics.
Humans in the 21st century have an intimate relationship with technology. Much of our lives are spent being informed and entertained by screens. Technological advancements in science and medicine have helped and healed in ways we previously couldn’t dream of. But what unanticipated consequences may be lurking behind our rapid expansion into new technological territory? This question is continually explored in the British sci-fi TV series Black Mirror, which offers a glimpse into the not-so-distant future and warns us to be mindful of how we treat our technology and how it can affect us in return. This piece is the final installment in a series of posts discussing ethical issues surrounding neuro-technologies featured in the show, comparing them with similar technologies already affecting us in the real world.
SPOILER ALERT: The following contains plot spoilers for the Netflix television series, Black Mirror.
Plot Summary
“White Christmas” begins with a man, Joe Potter, waking up at a small, isolated outpost in the snowy wilderness. “I Wish It Could Be Christmas Everyday” plays in the background. Joe walks into the kitchen and finds Matt Trent cooking Christmas dinner. Matt, who seems bored with the mundane lifestyle at the outpost, asks Joe how he ended up there—a conversation they have never had in their five years together. Joe becomes defensive, reluctant to share his past, and asks Matt the same question. To encourage Joe to open up, Matt shares a few stories of his own.
Matt first tells a story in which he is a dating coach who trains socially awkward men like Harry to seduce women. A remote technology called EYE-LINK enables Matt, along with eight other students, to watch through Harry’s eyes and coach him as he approaches women. In this fashion, Harry meets a woman named Jennifer at a corporate Christmas party. Through a tragic misunderstanding, Jennifer kills both Harry and herself, believing that they are both troubled by voices in their heads. Matt and the rest of the students watching through EYE-LINK panic as Harry dies, and they scramble to destroy any evidence that they were ever involved with him.
Image courtesy of Flickr user Lindsay Silveira.
To win Joe’s trust, Matt goes on to share another story about someone he once worked with: Greta, who lives in a spacious, futuristic house and is very particular about every detail of her life. A week earlier, Greta had an implant placed in her head that copies her thoughts and memories. The implant is later surgically retrieved and its contents stored in an egg-shaped device called a “cookie.” Matt’s job is to train the cookie, which is essentially a copy of Greta’s mind, to accept “her” position and serve the real Greta day and night. Initially, the cookie is deeply confused about “her” situation, not realizing “she” is not the real Greta. Matt gives the cookie a simulated body in Greta’s likeness and places “her” inside a vast empty space containing only a control desk. To convince the cookie to do housekeeping for the real Greta, Matt alters the cookie’s perception of time and makes “her” endure total isolation and boredom until “she” finally gives in. In the present day, Joe voices his disdain for the cookie technology and condemns it as barbaric.
Joe finally shares what brought him to the outpost, beginning his story by saying that his girlfriend’s father never liked him. Joe and Beth were in a serious relationship until his drinking problem slowly pushed her away. On a double date with their friends Tim and Gita, Beth seems upset, which drives Joe to drink more. After dinner, a drunken Joe discovers that Beth is pregnant and congratulates her. Instead of being happy, Beth says she does not want to keep the baby, which angers Joe. After a heated argument, Beth blocks Joe through a technology called Z-EYE and leaves him. Once blocked, Joe can see only a blurry grey silhouette of Beth and cannot hear her. He spends months looking for her and writing apology letters without ever receiving a response. Joe learns that Beth has kept the baby, but the block extends to her child, so he cannot see the child either. One day he sees Beth’s image in the news and realizes she has died. Since the block is lifted upon Beth’s death, the grieving Joe resolves to meet his child for the first time. He waits outside Beth’s father’s cabin at Christmastime with a gift for the child. To his surprise, the girl he has longed to see is of Asian descent, which neither he nor Beth is. Joe realizes that Beth had been having an affair with their friend Tim. Shocked, he follows the girl into the cabin and confronts Beth’s father. In a fit of rage, Joe kills the old man with the snow globe he brought as a gift and flees in panic, leaving the little girl alone in the snow.
In the present, Matt asks Joe if he knows what happened to the kid. Joe finally breaks down and confesses that he is responsible for the deaths of both Beth’s father and the child. Matt disappears soon after the confession, and Joe realizes the outpost is the very cabin where Beth’s father and daughter died. It turns out that everything so far has taken place inside a cookie of Joe, set up to extract his confession. Matt, implicated in Harry’s death, helped the officers on Joe’s case in exchange for his own freedom. Yet even though Matt walks free from the police station, he is blocked from everyone through Z-EYE and can no longer interact with anyone in the real world. Back at the station, as an officer leaves work for Christmas, he sets the time perception of Joe’s cookie to 1,000 years per minute, leaving the copy of Joe wandering the cabin as “I Wish It Could Be Christmas Everyday” plays endlessly in the background.
Current Technology
Google Glass Enterprise Edition; image courtesy of Wikimedia Commons.
“White Christmas” presents three fictional technologies: EYE-LINK, Z-EYE, and the cookie. The episode reflects privacy issues already present in our real world and, moreover, explores the concept of selfhood and the boundaries of our relationship with advanced AI.
The EYE-LINK that allows Harry to livestream his view to multiple people and the Z-EYE that blocks Matt from the rest of the world are closer to reality than fiction. Google Glass, despite the failure of its first version, made a second attempt and returned this year as the Glass Enterprise Edition [1]. Given the privacy controversy and the criticism of its predecessor’s wearability, the newer version has switched gears to become an augmented-reality tool for enterprise. For example, according to a report by Steven Levy, the Glass has been adopted by an agricultural equipment manufacturer to give assembly-line workers detailed instructions, dramatically increasing yield while maintaining quality [1]. However, this pivot toward industrial partners does not necessarily spell the end of smart glasses for general consumers. If anything, it may be the beginning of their evolution.
While Google Glass is not an implanted device, visual prosthetics implanted into the visual system are no longer a dream. There has been success in restoring near-normal eyesight in blind mice [2], and trials of vision rehabilitation through implants are underway in humans [3]. It may be only a matter of time before we see the birth of a technology similar to EYE-LINK. After all, many people are already used to sharing their lives on social media, in real time, through their phones. And if built-in sensory devices that augment our perception become reality, blocking others through signal manipulation would not be much of a challenge either.
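As a thought experiment, here is a minimal sketch of what a Z-EYE-style block might amount to at the signal level, assuming the device could already tag which pixels in the visual feed belong to the blocked person. The frame, the mask, and the function name are all invented for illustration; nothing here corresponds to a real device.

```python
import numpy as np

def apply_block(frame: np.ndarray, person_mask: np.ndarray) -> np.ndarray:
    """Return a copy of `frame` with the masked person replaced by flat grey."""
    blocked = frame.copy()
    blocked[person_mask] = 128  # flat grey, like the episode's silhouette
    return blocked

# A fake 4x4 RGB frame, and a mask marking which pixels belong to the
# person being blocked (in reality, this segmentation is the hard part).
frame = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # the person occupies the four center pixels

print(apply_block(frame, mask))
```

The point of the sketch is that the pixel replacement itself is trivial; the hard, and ethically fraught, parts are reliably identifying a person in a live sensory stream and mandating that everyone's implants honor the block.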
Compared with EYE-LINK and Z-EYE, the cookie technology seems far more implausible given our current understanding of neuroscience. The roots of consciousness and the mind remain a mystery, despite how much we now know about the nervous system. Yet while we are decades away from copying our own minds, current developments in AI are still startling. AlphaGo has made news over the past few years by defeating top professional Go players from around the world. Although Deep Blue, another AI system, defeated world chess champion Garry Kasparov back in 1997, beating humans at Go is a much harder problem. Go, a classic abstract strategy board game dating back some 3,000 years, has long been viewed as the most challenging classical game for AI because of its enormous number of possible board configurations [4]. Given that many possibilities, traditional AI methods that exhaustively search every position with a search tree simply do not scale to Go. Earlier generations of AlphaGo were trained on large numbers of human games, combining deep neural networks with advanced search trees. What makes the recent win by AlphaGo Zero so striking is that it learned to master the game without any human knowledge [5]: the newer version learns entirely by playing against itself, and does so far more efficiently. AlphaGo’s triumph means more than winning a board game; it represents the conquest of longstanding challenges in machine learning. Such algorithms could be a step toward more complicated learning tasks, such as emotion recognition and social learning.
AlphaGo competing in a Go game; image courtesy of Flickr user lewcpe.
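To make the self-play idea concrete, below is a minimal sketch of the shape of that training loop, using a trivial made-up game and a random move picker in place of AlphaGo Zero’s deep neural network and tree search. Every name in it (GameState, random_policy, and so on) is invented for illustration; none of this is DeepMind’s code.

```python
import random

class GameState:
    """A trivial stand-in for a Go position (not a real Go engine)."""
    def __init__(self):
        self.moves_played = 0

    def legal_moves(self):
        # a real game would enumerate legal board moves
        return [0, 1, 2]

    def play(self, move):
        # a real game would place a stone and update the board
        self.moves_played += 1

    def is_over(self):
        return self.moves_played >= 10

    def winner(self):
        # a real game would score the board; here the outcome is random
        return random.choice([+1, -1])

def random_policy(state):
    """Stands in for the neural network that proposes moves."""
    return random.choice(state.legal_moves())

def self_play_game(policy):
    """Play one game in which the same policy plays both sides."""
    state, moves = GameState(), []
    while not state.is_over():
        move = policy(state)
        moves.append(move)
        state.play(move)
    return moves, state.winner()

# AlphaGo Zero repeats this step: generate self-play games, then update
# the network so it better predicts both the chosen moves and the outcomes.
games = [self_play_game(random_policy) for _ in range(5)]
print(f"collected {len(games)} self-play games for training")
```

The real system closes the loop by using each batch of self-play games to retrain the network that chooses the moves, so the opponent it practices against keeps getting stronger; that bootstrapping, with no human games at all, is what [5] reports.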
As Google DeepMind continues to advance learning algorithms, a new company, Neuralink, founded by SpaceX and Tesla CEO Elon Musk, has drawn a great deal of attention for its audacious goal of merging human and artificial intelligence. Musk is deeply concerned about AI’s potential threat to humanity and proposes Neuralink as a way to forestall that threat. Indeed, the brain-machine interface (BMI) is no longer a novel concept. Scientists have developed deep brain stimulation that benefits people suffering from Parkinson’s disease, epilepsy, and many other neurological disorders [6], and people with paralysis can control artificial limbs through brain-machine interfaces. BMIs show great promise for improving quality of life. What Musk proposes, however, is to augment the healthy human brain by connecting it to artificial intelligence. While it is tempting to acquire a “superpower” such as photographic memory through a BMI, great power comes at a great price: the interface would very likely require the invasive implantation of hundreds of electrodes in the brain. Predicting the potential side effects would also be extremely challenging, as so much remains unknown about our brains. Will people be willing to take enormous, unknown risks for the possibility of photographic memory?
Despite the impressive progress scientists have made in machine learning and artificial intelligence, we are still far from anything like the cookie, a device that could copy a person’s consciousness and manipulate it to our advantage.
Ethical Considerations
After Matt explains how he coerced Cookie Greta into working for the real Greta, Joe feels empathy for the cookie and calls the technology slavery and barbaric. Matt argues that since Cookie Greta is only made of code, she is not real, and hence the treatment is not barbaric. Their disagreement raises a fundamental question: is the copy of a person’s mind merely lines of code? If not, should these mind-copies have the same rights we do? Similar discussions can be found in this previous post on the blog.
“White Christmas” also raises the question of how we perceive our own minds and the minds of others. Why do some people believe that the cookie is nothing but a simulation running on a device? Many people seem to hold a hierarchy of minds across different species. In their book The Mind Club, Daniel M. Wegner and Kurt Gray describe a “mind survey” they conducted online. This self-report survey evaluated people’s perception of minds by asking them to compare thirteen potential minds on a range of mental abilities [7]. Based on 2,499 responses, they found that these perceptions cluster along two factors: experience and agency. Experience represents the capacity to feel things such as pain, joy, and anger; agency covers the abilities to think, plan, and perform tasks rather than to sense and feel. For example, the average respondent rated themselves high on both experience and agency, but rated a robot relatively high on agency and very low on experience. Even though this two-dimensional mapping is a rather coarse way to quantify our perceptions of minds, it shows that humans, consciously or subconsciously, rank the minds of others (including humans, animals, robots, and even god(s)) by their abilities to think and to feel.
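As a rough illustration of what such a two-dimensional mapping looks like, here is a toy sketch. The (experience, agency) scores below are invented for illustration and are not the actual numbers reported in The Mind Club.

```python
# Invented (experience, agency) scores on a 0-1 scale; these are NOT the
# survey's real data, just an illustration of the two-factor mapping itself.
minds = {
    "adult human": (0.95, 0.95),  # feels richly and acts deliberately
    "infant":      (0.90, 0.20),  # feels, but has little capacity to act
    "dog":         (0.75, 0.45),
    "robot":       (0.10, 0.80),  # plans and performs tasks, barely "feels"
}

for name, (experience, agency) in minds.items():
    print(f"{name:12s} experience={experience:.2f}  agency={agency:.2f}")
```

The interesting cases for the episode are the ones like “robot” in this sketch: minds we grant plenty of agency while denying them experience.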
Let’s employ the concepts of agency and experience to understand why people doubt that AI, including the cookie, has consciousness. One might agree that the cookie has a high level of intelligence, in other words agency, given the power of algorithms in this futuristic world, but find it difficult to imagine that the code has feelings too. Matt gives Cookie Greta a physical body to help “her” cope with “her” distress. While this may be a filming tactic to help the audience visualize the cookie, the embodiment also seems to give Cookie Greta an outlet to feel, to sense, and to better understand “her” own existence. Moreover, Matt has to change Cookie Greta’s perception of time and leave “her” in prolonged solitude to force “her” into compliance, exploiting “her” dread of boredom. The fact that Matt cannot simply edit the code to make the cookie obedient, but must manipulate “her” through fear, an emotion, suggests that the cookie has the capacity to feel and experience. Similarly, Matt exploits Cookie Joe’s empathy and guilt to extract his confession. Even if one argues that these seemingly human emotions are nothing but simulation, how can we be certain that a simulated mind does not experience these feelings? If cookies can feel as we do, forcing Joe’s cookie to listen to the same Christmas carol for millions of years in isolation is an utterly brutal and unfair punishment.
If we assume that the cookie does have some form of consciousness, the next question is: should cookies bear the consequences of their originals’ actions? Both Cookie Greta and Cookie Joe clearly have the same memories and ways of thinking as their real selves (“real” is used loosely here to distinguish a cookie from its original, not to imply that the cookie is unreal). Based on the confession, Joe is indeed responsible for two deaths. But should the copy of his mind be held responsible for his crime? Do we view the copy as an extension of him, or as an independent individual? Similarly, if Neuralink succeeds in creating a hybrid of human brain and AI, how do we define the identity of such an individual, and who should be responsible for its wrongdoing?
Conclusion
Darling's robot dinosaur; image courtesy of Wikimedia Commons.
If you disagree with how Matt and the officers treat the advanced AI in “White Christmas,” you might find some comfort in the studies of human-robot interaction conducted by Dr. Kate Darling of the MIT Media Lab [8, 9]. In an informal experiment, participants were first given robot dinosaurs and asked to play with them. After about an hour of bonding with the robots, the participants were instructed to torture and destroy them with various tools the experimenters provided. All of the volunteers refused. Dr. Darling, an expert in robot ethics and an advocate for legal protection of robots, explains that even though people know the robots are not actually alive, they naturally project their emotions onto them. If people can feel empathy toward a lifelike robot, are most of us really capable of watching a humanoid AI suffer, even if it has no consciousness? As Immanuel Kant said, “he who is cruel to animals becomes hard also in his dealings with men. We can judge the heart of a man by his treatment of animals.”
References
1. Levy, S. (2017, July 18). Google Glass 2.0 is a startling second act. Wired. Retrieved from https://www.wired.com/story/google-glass-2-is-here/
2. Nirenberg, S., & Pandarinath, C. (2012). Retinal prosthetic strategy with the capacity to restore normal vision. Proc Natl Acad Sci USA, 109(37), 15012-15017. doi:10.1073/pnas.1207035109
3. Lewis, P. M., Ackland, H. M., Lowery, A. J., & Rosenfeld, J. V. (2015). Restoration of vision in blind individuals using bionic devices: a review with a focus on cortical visual prostheses. Brain res, 1595, 51-73. doi:10.1016/j.brainres.2014.11.020
4. DeepMind. The story of AlphaGo so far. Retrieved from https://deepmind.com/research/alphago/
5. Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., … Hassabis, D. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354-359. doi:10.1038/nature24270
6. Lyons, M. K. (2011). Deep brain stimulation: current and future clinical applications. Mayo Clin Proc, 86(7), 662-672. doi:10.4065/mcp.2011.0045
7. Wegner, D. M., & Gray, K. J. (2017). The mind club: who thinks, what feels, and why it matters. New York, NY: Penguin Books.
8. Can robots teach us what it means to be human? (2017, July 10). Retrieved from https://www.npr.org/2017/07/10/536424647/can-robots-teach-us-what-it-means-to-be-human
9. Darling, K. (2012). Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behavior toward robotic objects. In Calo, Froomkin, & Kerr (Eds.), Robot Law. Edward Elgar, 2016; We Robot Conference. Available at https://ssrn.com/abstract=2044797 or http://dx.doi.org/10.2139/ssrn.2044797
Want to cite this post?
Wang, Y. (2017). The Neuroethics Blog Series on Black Mirror: White Christmas. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2017/12/the-neuroethics-blog-series-on-black.html