
Tuesday, October 31, 2017

The Neuroethics Blog Series on Black Mirror: Men Against Fire


By Sunidhi Ramesh







Image courtesy of Pexels.

Humans in the 21st century have an intimate relationship with technology. Much of our lives are spent being informed and entertained by screens. Technological advancements in science and medicine have helped and healed in ways we previously couldn’t dream of. But what unanticipated consequences may be lurking behind our rapid expansion into new technological territory? This question is continually being explored in the British sci-fi TV series Black Mirror, which provides a glimpse into the not-so-distant future and warns us to be mindful of how we treat our technology and how it can affect us in return. This piece is part of a series of posts that will discuss ethical issues surrounding neuro-technologies featured in the show and will compare how similar technologies are impacting us in the real world. 






SPOILER ALERT: The following contains plot spoilers for the Netflix television series, Black Mirror








Plot Summary





“Men Against Fire” begins with the introduction of Stripe, a young man decked in army gear who, along with other soldiers, is tasked with a “roach hunt.” The soldiers are stationed in a foreign land (possibly Denmark, as the “civilians” all speak Danish) and are attempting to help the locals with a recent “roach” invasion. As the team gathers information about the roaches’ whereabouts, they plug data into their MASS systems, neural devices that allow the soldiers to actively manipulate maps, data files, and local information on a screen projected in front of their eyes.










An image of an American army sick bay.

(Image courtesy of Flickr.)

Stripe and the other soldiers get a lead about a pack of roaches hiding at a ranch nearby. They immediately raid the location, discovering three roaches (revealed to be blank, zombie-like inhuman animals that screech, claw, and howl); Stripe manages to kill two of them, one with his gun and the other with his bare hands.





The next day, Stripe observes momentary glitches in his MASS technology, so he visits a medical examiner at the army base’s sick bay. After a full exam, no errors are found, and Stripe is sent home. The team, including Stripe’s new friend Raiman (“Rai”), embarks on yet another roach hunt the following morning; this time, Stripe immediately notices that his MASS implant is malfunctioning, as he is now able to smell the grass around him (MASS strips the soldiers of some of their senses). Still, he trudges on and joins the group as they raid an abandoned building.



Once there, Stripe sees a group of civilians, whom Rai begins gunning down. Shocked, Stripe attacks Rai, shouting four words: “What did you do?” He knocks Rai out with his gun and helps the two remaining civilians (a mother and her son) escape. Once in a safe shelter, Stripe is confronted by one of the civilians, Catarina, who explains that the MASS implants trick the soldiers into seeing humans as roaches. As she finishes her explanation, Rai arrives and shoots Catarina and her son. Stripe is then sent to the psychiatric ward of the army base.








Image courtesy of Pixabay.

Once there, Stripe is visibly in shock, rocking back and forth and whispering, “the whole thing is a lie.” The doctor appears and tries to explain the situation to Stripe, claiming that the MASS and the roaches have a purpose. “Do you have any idea… [what is] in their DNA?” the doctor shouts. “Higher rates of cancer, muscular dystrophy, MS, SLS, substandard IQ. Criminal tendencies, sexual deviances, it’s all there. The screening shows it… and it’s a lot easier to pull the trigger [on the enemy] when you’re aiming at the Boogeyman, hm?” He then tortures Stripe into a whimpering mess, forcing him to re-watch his own kills—this time unmasking the roaches and depicting them as the human beings they are.





At the end of the episode, Stripe is shown in front of a bright, welcoming home; an alluring woman approaches him warmly, although she (and the home) is likely a figment of his (now functioning) neural device.





He is smiling at the world in front of him, but a tear visibly rolls down his cheek.








The Current State of Technology





“Men Against Fire” questions the murky future of warfare by suggesting possible roles for technology in advanced combat. Among those depicted is the cryptic “MASS system,” a neural implant that appears to augment the army’s efficiency (it is implied throughout the episode that MASS helps the soldiers think more clearly, see better, perform more effectively, shoot more precisely, hear selectively, and suppress emotions, feelings, and senses irrelevant to the mission at hand). But later, in the unexpected plot twist with Catarina (shocking, in many ways, because it seems so conceivable), MASS proves to be something else entirely.





Still, the possibility of a technology that could use (or, rather, integrate into) our own brains to augment and modify our senses and abilities is tantalizing. How far are we from this being a reality?








Image courtesy of Flickr.

Augmentation of the Senses: Touch, Vision, and Hearing 





In 2013, researcher Gregg Tabot and his team (1) “developed approaches to convey sensory information critical for object manipulation—information about contact location, pressure, and timing—through intracortical microstimulation of the somatosensory cortex.” In other words, in experiments where the brains of nonhuman primates were electrically stimulated, Tabot was able to elicit sensations projected to a “localized patch of skin”—ones that not only “track the pressure exerted on the skin” but also capture the specific “timing of contact events.” This form of biomimetic feedback is projected to help “restore touch to individuals who have lost it.” Other experiments of this nature have used electrical interfaces with peripheral nerves to “convey basic somatosensory feedback” (2).





Technology that focuses on restoring eyesight in the blind has been in the works since the early 1970s (3). Epiretinal devices that bypass retinal processing, such as the Argus II Retinal Prosthesis System (4), have been developed and approved to stimulate the ganglion cells that converge at the back of the eye to form the optic nerve. The difficulty with visual impairment, however, is that it can arise from damage anywhere along the visual pathway—the retina, the optic nerve, the lateral geniculate nucleus (LGN), or the striate cortex (V1) itself (among other relevant visual processing areas) (5). The field of neurobionics attempts to address these differences by offering cortically based implant devices that stimulate these brain areas directly; and, although the improvements in eyesight that implant recipients currently experience are minimal, improvements are happening (5).










Image courtesy of Pixabay.

Cochlear implants (6, 7) and advances in cochlear implantation have allowed patients who lost their hearing after developing language to “regain significant auditory benefit.” But in a small group of individuals whose hearing deficits result from significant damage to the cochlear nerves, cochlear implantation fails to serve as a viable treatment (8). In these patients, an auditory brainstem implant (ABI) allows for the restoration of limited hearing (9), “allowing them to recognize environmental sounds” and improving their ability to communicate effectively (8).





It is important to note that the vast majority of these technologies are preliminary and aimed mostly at restoring lost function rather than enhancing normal function to superhuman levels (as is implied by the MASS technology in Black Mirror).





Augmentation of Human Memory, Attention, and Learning Ability





The enhancement of human abilities (namely memory, attention, and learning) is not a foreign concept; in fact, a series of posts on this blog (which can be found here, here, and here) has addressed this topic. I will simply note that research on these forms of augmentation is currently being done, although, again, current methods allow for little of the precision and accuracy afforded to the devices depicted in Black Mirror.








Neuroethical Considerations





The notion of sensory and moral blunting highlighted in “Men Against Fire” seems, on the surface, justifiable. Recent statistics put the proportion of returning war veterans with PTSD at anywhere from 10% to 18%, citing “combat stressors” such as “seeing dead bodies, being shot at, being attacked, receiving mortar fire, or knowing someone who has been killed” as triggers of the disorder (10).





The biggest trigger, though, remains being behind the trigger; men and women who kill in battle are at a dramatically higher risk of developing PTSD and often display higher levels of trauma than those who have not killed (11, 12). So, what if we could desensitize our army to these triggers altogether? Sounds promising. But is it?








A soldier battling PTSD (post-traumatic stress disorder).

(Image courtesy of Wikimedia.)

MASS is, by every definition, a weapon—a device designed or used for inflicting physical damage. In the eyes of the soldier, MASS strips the humans targeted in this eugenic crusade of their humanness—their voices, their language, their faces. And it does all of this with a purpose—to make killing easier.





At the end of the episode, the doctor’s sickening monologue elucidates this purpose: “Humans. You know we give ourselves a bad rap, but we’re generally empathetic as a species. I mean, we don’t actually really want to kill each other. Which is a good thing. Until your future depends on wiping out the enemy… even in World War II, in a firefight, only 15, 20 percent of the men would pull the trigger. Fate of the world at stake, and only 15 percent open fire… plus the guys who did get a kill, most of them came back messed up in the head.” Lt. Col. Dave Grossman’s book, On Killing, suggests that these numbers aren’t fiction (13).





But this is the reality of war—killing and death are inherent in the nature of battle. If MASS technology makes killing easier (by removing the smells and sounds of death) and keeps soldiers from feeling the aftermath, is it really that bad? Why wouldn’t we want an army that can fight relentlessly, without morality in the way?





The answer lies in our doctor’s soliloquy: we’re human.








Image courtesy of the Military Health System.

Soldiers are not “naturally born” to pull the trigger for a reason, and neural implants that engineer them to be that way would, at the least (and with the current state of technology), cause a great deal of cognitive dissonance—dissonance that, in some ways, could be more disturbing than the trauma otherwise caused by warfare. Moral injury in war today could, in part, be a result of this dissonance, where soldiers who engage in “acts of transgression” that grossly undermine moral and ethical expectations suffer from shame, guilt, anxiety, or anger (14).



This construct of moral injury can exhibit itself behaviorally, pushing war veterans who have killed in war toward self-handicapping behaviors, anomie, and self-harm, among other consequences. A real-life example of this unusual dichotomy between morality and death can be found in pilots of remotely piloted aircraft (RPA), or drones, who are in the fairly unique situation of not being physically situated in the war theater; still, perhaps surprisingly, RPA pilots have been shown to display war-related challenges beyond PTSD, often exhibiting high levels of fatigue and social isolation as a result of the compartmentalization of their work (15).





Perhaps the last teardrop on Stripe’s face in “Men Against Fire” points to these elements; even with the functioning MASS implant, a part of him may in fact be cognizant of the false reality of the world around him.





So are there other functional consequences of using technology to separate soldiers from the war zones they are otherwise immersed in? Most likely. But would these technologies modulate a soldier’s perception of battle? Of war? Of death? What does putting a device between a human and the environment do to his or her sensitivity to the real world? Perhaps we can answer these questions with regard to our own use of cell phones and other digital devices.








War technology has evolved substantially

since this picture was taken during WWII.

(Image courtesy of Wikimedia.)

(It is worth noting that there continues to be a long-standing debate about “dual-use” or repurposing technologies for war; whether or not we will firmly agree on implementing technologies for military use that were otherwise created for civilian purposes is a question for the near future.)





A whole series of posts on this blog could be dedicated to the misuse of DNA technology and “screenings” in “Men Against Fire”—a misuse that enables the blatant eugenic targeting of the “roaches.” What is concerning about this argument is the claim that DNA is the direct and only basis for the plethora of disorders the doctor lists—that environmental contributions to disease are not relevant to the prevalence of the many illnesses we face as a human race. In a paper entitled “Genetic Essentialism: On the Deceptive Determinism of DNA,” authors Ilan Dar-Nimrod and Steven J. Heine argue that “…there are rare cases of ‘strong genetic explanation’ when… responses to genetic attributions may be appropriate, however people tend to over-weigh genetic attributions compared with competing attributions even in cases of ‘weak genetic explanation,’ which are far more common” (16).








Conclusions





While Black Mirror primarily serves to entertain audiences, the underlying messages in "Men Against Fire" about the potential consequences of futuristic technology cannot be ignored. It is nonetheless important to remain grounded in the current state of these devices, as MASS, or any version of it, is far from widespread implementation.



That said, the conversation about wartime morality and the suppression of it is an important one, as soldiers today, in years past, and in the years to come will continue to suffer from the traumatic consequences of combat-related experiences.





References




  1. Tabot, Gregg A., et al. "Restoring the sense of touch with a prosthetic hand through a brain interface." Proceedings of the National Academy of Sciences 110.45 (2013): 18279-18284.

  2. Saal, Hannes P., and Sliman J. Bensmaia. "Biomimetic approaches to bionic touch through a peripheral nerve interface." Neuropsychologia 79 (2015): 344-353.

  3. Dobelle, W. H., and M. G. Mladejovsky. "Phosphenes produced by electrical stimulation of human occipital cortex, and their application to the development of a prosthesis for the blind." The Journal of physiology 243.2 (1974): 553-576.

  4. Falabella, Paulo, et al. "Argus® II Retinal Prosthesis System." Artificial Vision. Springer International Publishing, 2017. 49-63.

  5. Lewis, Philip M., et al. "Restoration of vision in blind individuals using bionic devices: a review with a focus on cortical visual prostheses." Brain research 1595 (2015): 51-73.

  6. House, William F. "Cochlear implants." Annals of Otology, Rhinology & Laryngology 85.3, suppl. (1976): 3-3.

  7. Svirsky, Mario A., et al. "Language development in profoundly deaf children with cochlear implants." Psychological science 11.2 (2000): 153-158.

  8. Schwartz, Marc S., et al. "Auditory brainstem implants." Neurotherapeutics 5.1 (2008): 128-136.

  9. Hitselberger, William E., et al. "Auditory brain stem implants." Operative Techniques in Neurosurgery 4.1 (2001): 47-52.

  10. Hoge, Charles W., et al. "Combat duty in Iraq and Afghanistan, mental health problems, and barriers to care." New England Journal of Medicine 351.1 (2004): 13-22.

  11. Van Winkle, Elizabeth P., and Martin A. Safer. "Killing versus witnessing in combat trauma and reports of PTSD symptoms and domestic violence." Journal of traumatic stress 24.1 (2011): 107-110.

  12. Pitts, Barbara L., et al. "Killing versus witnessing trauma: Implications for the development of PTSD in combat medics." Military Psychology 25.6 (2013): 537.

  13. Grossman, Dave. On killing. Open Road Media, 2014.

  14. Maguen, Shira, and Brett Litz. "Moral injury in the context of war." Department of Veterans Affairs, www.ptsd.va.gov/professional/pages/moral_injury_at_war.asp (accessed Jan. 13, 2013) (2012).

  15. Otto, Jean L., and Bryant J. Webber. "Mental health diagnoses and counseling among pilots of remotely piloted aircraft in the United States Air Force." Medical Surveillance Monthly Report 20.3 (2013): 3-8.

  16. Dar-Nimrod, Ilan, and Steven J. Heine. "Genetic essentialism: on the deceptive determinism of DNA." Psychological bulletin 137.5 (2011): 800.






Want to cite this post?



Ramesh, S. (2017). The Neuroethics Blog Series on Black Mirror: Men Against Fire. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/10/the-neuroethics-blog-series-on-black_13.html



Tuesday, October 24, 2017

Too far or not far enough: The ethics and future of neuroscience and law



By Jonah Queen








Image courtesy of Pixabay.

As neurotechnology advances and our understanding of the brain increases, there is a growing debate about if, and how, neuroscience can play a role in the legal system. In particular, some are asking if these technologies could ever be used to accomplish things that humans have so far not been able to, such as performing accurate lie detection and predicting future behavior.





For September’s Neuroethics and Neuroscience in the News event, Dr. Eyal Aharoni of Georgia State University spoke about his research on whether biomarkers might improve our ability to predict the risk of recidivism in criminal offenders. The results were published in a 2013 paper titled “Neuroprediction of future rearrest” (1), which was reported in the media with headlines such as “Can we predict recidivism with a brain scan?” The study reports evidence that brain scans could potentially improve offender risk assessment. At the event, Dr. Aharoni led a discussion of the legal and ethical issues that follow from such scientific findings. He asked: “When, if ever, should neural markers be used in offender risk assessment?”






Dr. Aharoni started by explaining that determining the risk an individual poses to society (“risk triage”) is an important part of the criminal justice system and that it is used when making decisions around bail, sentencing, parole, and more. He presented the cases of Jesse Timmendequas and Darrell Havens as opposite extremes of what can happen when risk is miscalculated. Timmendequas is a repeat sex offender who had served less than seven years in prison for his crimes and had not been considered a serious threat before he raped and murdered a seven-year-old girl, a crime which led to the passing of Megan’s Law. Havens, a serial car thief, is serving a 20-year prison sentence for assaulting a police officer, despite being rendered quadriplegic after being shot by police, because parole boards are reluctant to grant him an early release due to his extensive criminal history.





Risk triage is currently done through unstructured clinical judgements, where a clinician will offer his or her opinion based on an interview of the subject, and the more accurate evidence-based risk assessment, which assesses various known risk factors, such as age, sex, criminal history, drug use, impulsivity, and level of social support. Dr. Aharoni and the other authors of the paper propose that neurological data could potentially be introduced as an additional risk factor to help improve the accuracy of such assessments (1).





With the understanding that impulsivity is a major risk factor for recidivism (2), the researchers focused their study on the anterior cingulate cortex (ACC), a limbic brain region shown to be heavily involved in impulse control and error monitoring (in fact, behavioral changes in people with damage to the ACC are often extreme enough for those individuals to be classified as having an “acquired psychopathic personality” (3)).





In Aharoni’s paper (1), the volunteers (96 currently incarcerated adult men) were presented with a go/no-go (GNG) task (which tests impulse control) while their ACC activity was monitored with functional magnetic resonance imaging (fMRI, which measures changes in blood flow within different regions of the brain—an increase in blood flow is taken to mean that a region has increased neural activity). The researchers found that participants with greater activation of the ACC during impulse control errors were half as likely to be arrested within four years of their release (when controlling for other factors such as age at release, Hare Psychopathy Checklist scores, drug and alcohol use, and performance on the GNG task). In other words, the study seems to show that, when used in conjunction with currently recognized risk factors, the fMRI data improved the accuracy of the risk assessment. The authors conclude that this finding “suggest[s] a possible predictive advantage” of including the neurological data in risk assessment models (1).
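To make the idea of folding a neuromarker into a risk-assessment model more concrete, here is a minimal, purely hypothetical sketch. It is not the study’s actual analysis (which used survival models and real offender data); the data, variable names, and effect sizes below are all invented for illustration. It simply compares the cross-validated predictive accuracy of a toy rearrest classifier with and without a simulated ACC-activation feature.

```python
# Hypothetical illustration only: does adding a (simulated) neuromarker to
# established risk factors improve a toy rearrest classifier?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 96  # same order of magnitude as the study's 96 offenders

# Invented baseline risk factors (stand-ins for age at release, psychopathy
# score, substance use, GNG task performance).
baseline = rng.normal(size=(n, 4))

# Invented neuromarker: ACC activation during impulse-control errors.
acc = rng.normal(size=(n, 1))

# Simulated outcome: rearrest within four years, with lower ACC activity
# making rearrest more likely (mirroring the direction reported in the paper).
p_rearrest = 1 / (1 + np.exp(-(0.5 * baseline[:, 0] - 1.0 * acc[:, 0])))
rearrest = rng.binomial(1, p_rearrest)

with_neuromarker = np.hstack([baseline, acc])

auc_base = cross_val_score(LogisticRegression(), baseline, rearrest,
                           cv=5, scoring="roc_auc").mean()
auc_full = cross_val_score(LogisticRegression(), with_neuromarker, rearrest,
                           cv=5, scoring="roc_auc").mean()

print(f"AUC, established risk factors only:  {auc_base:.2f}")
print(f"AUC, plus simulated ACC neuromarker: {auc_full:.2f}")
```

Even in this cartoon form, the statistics leave the real question untouched: a higher AUC says nothing about whether using such a marker in bail, sentencing, or parole decisions would be fair or lawful, which is the issue the rest of this post takes up.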







Image courtesy of Flickr user Janne Moren.


After emphasizing the need for additional research, the authors discuss several possible applications for these “neuromarkers.” One of the more controversial ones (and the one that the media has mostly focused on) is to add neuromarkers (such as ACC activity during a GNG task) to the other factors that are currently used for risk triage in the criminal justice system. The authors recognize that this will raise ethical and legal issues, specifically that such scans might not meet the legal standard of proof, and that such techniques might threaten offenders’ civil rights in ways that currently used risk assessment methods do not.





In his presentation, Dr. Aharoni expanded on some of these concerns, focusing on the scientific limitations, legal limitations, and ethical implications of this research. The scientific limitations refer to the accuracy and replicability of this method and the general question of whether current neuroimaging techniques can provide useful data to criminal risk assessments. The legal limitations include questions of how and when such methods could legally be used. Would they be legally admissible, or would they be found to be unconstitutional if used in certain ways? Would the results of a brain scan be legally classified as physical evidence (which, under the Fourth Amendment, can be obtained with a warrant) or testimony (under the Fifth Amendment, an individual cannot be forced to testify if it would incriminate them)? Similar questions are being asked regarding fMRI lie detection.





And then there are the ethical implications. Using such a technique to keep people in jail who would not be otherwise (for lengthened sentences or denying parole, for example) is worrisome to many and runs the risk of violating offenders’ civil rights in an attempt to increase public safety. Dr. Aharoni mentioned that neuromarkers could also be used in an offender’s best interests if, for example, MRI data showed that they might be less likely to reoffend. An audience member pointed out, though, that this could be unfair to the people whose brain data does not help their case.





Another application that the authors mention is how this research could pave the way for possible interventions (including therapies, programs, and medications) for people with poor impulse control caused by low ACC activity. This could still raise concerns around convicts being required to undergo medical treatments (like medication or even surgery) if their criminal activity is thought to be caused by “defective” brain regions. And even if no practical applications come of this research, the authors point out that their findings still contribute to our understanding of the brain and human behavior.







Image courtesy of Wikimedia Commons.


Media outlets that reported on the study mostly focused on the predictive aspect, often referencing the film Minority Report, in which people are arrested for crimes they have not yet committed. Dr. Aharoni explained that incarcerating people based on the likelihood of re-offense is currently happening in cases of involuntary civil commitment, where defendants who are found not guilty by reason of insanity can be confined to psychiatric hospitals until they are deemed safe. If neuromarkers such as brain scans are used to improve the accuracy of the predictions, it might not be as much of a radical change as it seems.





But still, as explained above, even if brain scans were to be incorporated into the predictive models currently used, it would raise many ethical issues. And things could become even more worrisome if this technology were to be (mis)used in ways the researchers have not intended and the science does not support. For example, the criminal justice system could buy into the hype around brain imaging and develop a process that only looks at the scans and not at the other factors. Scans could also be performed on people who have not committed a crime to see if they need “monitoring” or “treatment,” possibly even non-voluntarily, even though they have not done anything wrong (in something more similar to a Minority Report-like scenario). Even without any intervention, there could also be the issue of stigma, like there is with testing for predisposition to mental illness. If someone is found to have a “criminal brain” how would people view them? How would they view themselves? And an audience member raised the possibility of this technology being used in the private sector. There are companies that offer MRI lie detection services—what if a company were to start testing people for predisposition to criminal behavior?





In the paper, the authors admirably discuss the ethical issues that could arise from their research. And the discussion Dr. Aharoni led at the event showed the importance of looking at controversial research such as this with a critical eye and in context in order to avoid resorting to sensationalist claims and unfounded fears. Not only is it important to make sure the science behind new neurotechnologies is accurate, but we also need to consider the societal effects of new technologies, whether they are used in the way their creators intended or not.






References



1)  Aharoni, E., Vincent, G. M., Harenski, C. L., Calhoun, V. D., Sinnott-Armstrong, W., Gazzaniga, M. S., & Kiehl K. A. (2013). Neuroprediction of future rearrest. PNAS, 110(15), 6223-6228. doi:10.1073/pnas.1219302110

2)  Monahan, J. D. (2008) Structured risk assessment of violence. Textbook of Violence Assessment and Management, eds Simon, R., Tardiff, K. (American Psychiatric Publishing, Washington, DC), pp 17–33.

3)  Devinsky, O., Morrell, M. J., Vogt, B. A. (1995) Contributions of anterior cingulate cortex to behaviour. Brain 118(pt 1), 279–306. doi:10.1093/brain/118.1.279





Want to cite this post?



Queen, J. (2017). Too far or not far enough: The ethics and future of neuroscience and law. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/10/too-far-or-not-far-enough-ethics-and.html

Tuesday, October 17, 2017

Hot Off the Presses: The Neuroethics Blog Reader and Issue 8.4



It is our pleasure to present you with two newly released publications: the second edition of The Neuroethics Blog reader and the 8.4 issue of the American Journal of Bioethics Neuroscience.








Image courtesy of Flickr user Leo Reynolds.



The second edition of The Neuroethics Blog reader features the most popular posts on the site, with topics ranging from human knowledge and its enhancement to mental illness to gut feminism. The reader includes posts from luminaries in neuroethics, scientific pioneers, undergraduates, graduate students, and established scholars from both within and outside the field of neuroethics. The Neuroethics Blog, now in its 6th year of creating weekly publications, is pleased to present this reader to you and would like to thank our amazing blog editorial team: Sunidhi Ramesh (Volume Editor of this reader and Assistant Managing Editor), Carlie Hoffman (Managing Editor), Nathan Ahlgrim, Kristie Garza, and Jonah Queen (Supporting Editors and blog contributors). Please find the reader below.





We are also pleased to announce the publication of issue 8.4 of AJOBN, which is a special issue focused on head transplantation. This issue contains two target articles, “HEAVEN IN THE MAKING BETWEEN THE ROCK (the Academe) AND A HARD CASE (a Head Transplant)” by Ren Xiaoping and Sergio Canavero and “Ahead of Our Time: Why Head Transplantation is Ethically Unsupportable” by Paul Root Wolpe. In the editorial for the issue, Karen Rommelfanger and Paul F. Boshears comment,


“The prospect of a near future wherein a head transplant has become a therapeutic procedure available to people suffering from the effects of failing bodies has been surrounded by pageantry, vitriolic responses, and a sense of melodrama that has made a caricature of what discussions of neuroethics can and should be . . . It is our position that averting our gaze from the development of technologies and techniques that we find morally repugnant or technically incredible does not free us from the results of those techniques and technologies. Indeed, if we fail to examine and carefully consider this technique and the technologies attendant to Canavero’s work, we risk something greater than the value of our self-satisfaction at the moment of declaring our disinterest.



Whether or not we find them to be morally unsupportable, or we anticipate that head transplants will not be viable procedures in the future, the fact of the matter is that head transplant technology is being developed and practiced right now. And this body of research is happening in China.”


Issue 8.4 also contains numerous open peer commentaries that generate a rousing discussion of the ethics of head transplantation. We would like to thank Karen Rommelfanger for coordinating this special issue and Paul F. Boshears for his contribution to the issue. Keep an eye out for the official publication of Issue 8.4 in the coming weeks.





Please enjoy the latest blog reader and issue 8.4 of AJOBN!











Monday, October 9, 2017

The Neuroethics Blog Series on Black Mirror: San Junipero



By Nathan Ahlgrim










Image courtesy of Wikimedia Commons.

Humans in the 21st century have an intimate relationship with technology. Much of our lives are spent being informed and entertained by screens. Technological advancements in science and medicine have helped and healed in ways we previously couldn’t dream of. But what unanticipated consequences may be lurking behind our rapid expansion into new technological territory? This question is continually being explored in the British sci-fi TV series Black Mirror, which provides a glimpse into the not-so-distant future and warns us to be mindful of how we treat our technology and how it can affect us in return. This piece is part of a series of posts that will discuss ethical issues surrounding neuro-technologies featured in the show and will compare how similar technologies are impacting us in the real world.








*SPOILER ALERT* - The following contains plot spoilers for the episode “San Junipero” of the Netflix television series Black Mirror.






Your body, in many ways, is an extension of your identity. The coupling of the physical to the psychological can be represented by straightforward demographic details like sex, ethnicity, and age. Your body can also restrict your identity by illness, injury, and disability. The unavoidable link between body and identity only exists as long as you are stuck with what you’re born with. Science fiction, and some science fact, is working to decouple the mind and body using virtual worlds and virtual minds, casting a lure of limitless possibilities. Location, money, age, ability; all are at the user’s command. Advances in computer technology and neuroscience are making that lure more lifelike, more attractive, and (possibly) more attainable.









Image courtesy of the U.S. Department of Defense.

Technology has moved virtual worlds far beyond the days of The Sims or Second Life. The Emmy Award-Winning Black Mirror Episode “San Junipero” is hardly the first example of pop-culture waxing poetic over the narratives of downloaded minds. Movies like Tron and Avatar describe worlds beyond reality, with such advanced computing that the "virtual reality" became a second reality. Therein, of course, lies the question: how should virtual minds be treated in a virtual world? “San Junipero” describes a world in which the entirety of human consciousness can be transferred and even downloaded via Whole Brain Emulation (WBE). In a world with fantastical technology, the treatment of the disembodied minds is taken as a given. However, real-world scientists and entrepreneurs are setting their sights on their own San Junipero, and we cannot assume the ethics of WBE and the treatment of digital people will fall into place by itself. The rights of computer code are not as straightforward as they seem when the code is Grandma.









 Plot Summary and Technology Used








“San Junipero” opens with a 1980s party town. The painfully dweebish Yorkie shuffles into a nightclub full of arcade games (and resident arcade geeks) where she meets her polar opposite in Kelly, by all accounts a carefree partier. During the couple’s first night together in San Junipero, Kelly discloses her previous and lengthy marriage. Both are young 20-somethings, and this is the first overt clue (although many subtler ones had appeared before) that the timeline is not quite normal.







The two reunite after Yorkie enters a time-bending search for Kelly through the 80’s, 90’s, and 00’s. Their ensuing conversation reveals that Yorkie is getting married and that Kelly is dying. Kelly insists on meeting up in the real world. She says “the real world” because the city of San Junipero is an elaborate simulation in which living people can visit or take up permanent residence by downloading their brain as their body expires.












San Junipero sharply contrasts with the real world.

Images courtesy of Pexels and Pixabay.

In the real world, both women are elderly: Yorkie a quadriplegic since adolescence and Kelly dying from an illness. Kelly visits Yorkie in the hospital, where she meets Greg, to whom Yorkie will be married later that month. Greg is a caregiver and agreed to marry Yorkie so that his Power of Attorney would allow her to “pass over”— to be euthanized and become a permanent digital resident of San Junipero.



Here we learn the totality of San Junipero’s technology: living people are allowed five hours of virtual life per week to have a “trial run” before making the decision to have their brains completely scanned and uploaded before they die. This, of course, requires WBE: the complete digital representation of a person’s brain.







Kelly successfully cajoles Greg into giving her and Yorkie a covert five minutes in San Junipero. She proposes in the digital world, and Yorkie accepts. The following scene shows Kelly asserting her Power of Attorney back in the real world, authorizing Yorkie’s euthanasia and permanent upload to San Junipero.







The couple reunite in the virtual world, and Kelly repeats her wish to die without becoming a “local” – no brain scan and no eternal life. She is thrown back into the real world at midnight by the state-imposed limits on time in San Junipero, leaving Yorkie to think she may not return.







In a stark tonal change from the majority of Black Mirror endings, Kelly reappears at her San Junipero beach house, and the two literally drive off into the sunset; the final scene shows the two – now placidly blinking discs of light – being robotically archived and immediately lost amongst a wall of identical lights.







Current State of Technology










Image courtesy of Wikipedia.

San Junipero is a world enabled by completely comprehensive WBE. The residents have their entire consciousness scanned and transferred in digital form to an ostensibly immortal computer bank. Everyone from Christof Koch of the Allen Institute for Brain Science to Michio Kaku recognizes the human brain as the most complicated object in the universe, so it should come as no surprise that such technology is more of a fantasy than the city of San Junipero itself. The human brain is estimated to have a petabyte (1 quadrillion bytes) memory capacity, and researchers from the Salk Institute say that a computer simulation with similar memory and processing power would need the energy from “basically a whole nuclear power station” for just one person. For all of our supercomputers, deep neural networks, and artificial intelligence, real-life brain modelling has currently peaked at a ‘crumb’ of rat cortex (0.29 ± 0.01 mm3 containing ~31,000 neurons [1]). Even this state-of-the-art model disregards any and all glial cells, blood vessels, and capacity for plasticity. All this to say that many computer scientists and neuroscientists doubt WBE will ever be possible.
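For a rough sense of the gap, a back-of-the-envelope comparison helps (assuming the commonly cited estimate of roughly 86 billion neurons in a human brain, a figure not given in the post itself):

\[
\frac{N_{\text{human brain}}}{N_{\text{rat-cortex model}}} \approx \frac{8.6 \times 10^{10}}{3.1 \times 10^{4}} \approx 2.8 \times 10^{6}
\]

That is, even before accounting for glia, blood vessels, or plasticity, the simulated ‘crumb’ would have to be scaled up by roughly three million times to reach whole-brain size.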






Complete WBE seems stuck in sci-fi territory, but scientists and philanthropists love a good moonshot, and all the better if success means immortality. Many projects, from the government-funded Human Brain Project and BRAIN Initiative to the startup Carboncopies, have been fueled by the dream of a San Junipero-esque human simulator. Fields from across the sciences, including engineering, quantum computing, neuroscience, and cryonics, are converging on this goal, but they have yet to even find where to stake the finish line. Even Anders Sandberg, one of WBE’s most vocal evangelists, labels it a “theoretical technology” [2]. On the bright side, if we ever have the technology to upload human brains to a digital form, simulating a California town will be a humble afterthought.










Remembering the good old days often

has positive psychological effects.

Image courtesy of Wikimedia Commons.

In contrast to the potentially unreachable technology, the therapeutic purpose of San Junipero has been modelled outside of the Black Mirror universe, albeit in a decidedly analog fashion. Kelly, in describing the sales pitch for San Junipero, calls it “Immersive Nostalgia Therapy.” Nostalgia is known to encourage positive coping and psychological states, which can be brought on by old songs, familiar smells, or just by reminiscing about the good old days. Constantine Sedikides and colleagues have induced nostalgic feelings in hundreds of research participants easily and reliably, doing nothing more than letting their participants revel in those past experiences. The resulting feelings of reminiscence and bittersweet longing were sufficient to increase coping, closeness, and optimism while decreasing defensiveness [3]. Sadly, none of that research has induced feelings of nostalgia by transporting participants’ consciousness to an ’80s dance club.







Ethical Considerations: What Black Mirror gets right and what it misses







The narrative of “San Junipero” operates among people who all have access to the immortalizing technology, so the fair distribution of WBE technology will not be discussed here. Rather, the ethics of these incorporeal minds and the transition between the analog California and the digital San Junipero merit more consideration than was offered in the episode.







Does computer code have rights? That question sounds flippant, but the premise is not so outlandish when you consider that corporations now have rights according to the U.S. legal system. If code deserves the same protection, will it ever be intelligent enough to deserve the same rights as flesh-and-blood humans? The debate is already swirling over artificial intelligence; the World Economic Forum now lists the rights of intelligent programs as one of the nine ethical challenges for the field. San Junipero locals, who are embodied by a single disc of information, are treated wholly differently from the visitors, with some troubling implications.







San Junipero is a paradise. It is easy to imagine people choosing paradise over real-world pain brought on by loss, illness, disability, or simply the daily grind. Realizing this, the laws of this world put up barriers to entry, as Greg described to Kelly:




“[The] state's got a triple-lock on euthanasia cases. You gotta have sign off from the doc, the patient, and a family member. Stops people passing over just 'cause they prefer San Junipero flat out.”




Compare that to the locals. When Kelly balks at the idea of "forever", Yorkie throws back:




“You can remove yourself like that.”




This distinction is taken as a given. Why, though, should physical euthanasia be more controlled than a digital death? In being selectively protective, the government is treating the digital copies as less-than-human. Ironically, this distinction flies in the face of San Junipero’s purpose: to extend life after death. The distinction between biological and digital, once a person’s entire consciousness is downloaded, is seen by some to be more artificial than San Junipero itself.







The biological/digital distinction was the focal point of a convincing (but later debunked) fan theory that Kelly never actually changed her mind, and that she really, truly, permanently died. The theory posits that the simulation then made a copy of Kelly to accompany Yorkie, since its purpose is to make its residents happy. There again is the same question: if Kelly is simulated to have all the same behaviors and thoughts, how is that different from the ‘normal’ downloading of consciousness? Michael Hendricks of McGill University points out that there is no distinction between transferring and replicating consciousness, since the computer code of a person did not exist before the transfer. Therefore, even if the fan theory were true, neither Yorkie nor Kelly could tell the difference between a simulation and a “real” copy.










Image courtesy of Wikimedia Commons.

The conversation about digital people is, of course, theoretical. Even WBE proponents acknowledge that “brain emulation is … vulnerable to speculation, ‘handwaving’ and untestable claims” [2]. But science is on the brink of inflicting digital pain. Scientists, animal rights activists, and others are pushing for digital animal models, also known as in silico experiments. American and European governments are funding the development of non-animal models, especially for toxicology studies [4,5]. These experiments present precise simulations of biological systems to model new treatments and probe new questions. Currently, in silico experiments are largely performed on digital organs and not whole organisms, but the latter is in the works, which again presents that artificial distinction. If the model is complex enough to simulate the organism, then it stands to reason that it feels the same pain and suffers just like its skin-and-bones counterpart. The goal of replacing animal models with simulations is self-limiting: the better the simulation, the more digital suffering is inflicted.







It is for this reason that some ethicists follow the “principle of assuming the most.” Doing so assumes that simulations possess the most sentience as is reasonable to believe even in the absence of empirical data. Under this principle, any virtual mouse should be given the same protections as a real mouse would, down to its virtual pain killers. In the words of Anders Sandberg, “it is better to treat a simulacrum as the real thing than to mistreat a sentient being” [6].










Image courtesy of Wikimedia Commons.

If it ever becomes sufficiently sophisticated, WBE will represent a new, practical dualism (the belief that the mind and body are distinct entities) that is completely independent from philosophical beliefs. After successfully recreating consciousness in a digital form, the physical form of the body and brain become artefactual and unnecessary to maintain thought. The question of whether the body is needed to maintain identity and personhood, though, will always remain a philosophical one. In fact, identity and personhood may need entirely new definitions in the face of WBE technology before philosophers can meaningfully debate the issue. Digital lab rats, digital family members, and digital worlds are immune from physical harm, but their complexity gives them the capacity to suffer regardless of “who” they are. As such, consistent ethical standards require the treatment of digital and physical life to be determined by that life’s complexity, not its relation to the physical world.







Conclusions







Digital versions of Grandma may never happen. Even so, the ethics of artificial intelligence and virtual worlds are pertinent to existing technologies. Virtual worlds and virtual personalities on platforms like Second Life have already spawned marriages, divorce, and even semi-official government embassies. These digital actions have real consequences even when those spaces are not populated by fully conscious computer programs. Should these avatars be held to the same moral standards and governed by the same laws that apply to flesh and blood people? “San Junipero” thinks not. It was constructed as an escape that offered wish-fulfillment and freedom from consequences. That kind of marketing pitch makes it obvious why the government put in so many controls to prevent anyone and everyone from ‘passing over’ when in perfect health, with heaven just a zap away.







Sadly, Black Mirror never addresses the most important question of them all: what happens to this heaven when the power goes out?









References










[1] Markram H et al. (2015) Reconstruction and simulation of neocortical microcircuitry. Cell 163:456-492.






[2] Sandberg A, Bostrom N (2008) Whole brain emulation: A roadmap.






[3] Sedikides C, Wildschut T (2016) Past forward: Nostalgia as a motivational force. Trends in cognitive sciences 20:319-321.






[4] Raunio H (2011) In silico toxicology – non-testing methods. Frontiers in Pharmacology 2:33.






[5] Leist M, Hasiwa N, Rovida C, Daneshian M, Basketter D, Kimber I, Clewell H, Gocht T, Goldberg A, Busquet F, Rossi A-M, Schwarz M, Stephens M, Taalman R, Knudsen TB, McKim J, Harris G, Pamies D, Hartung T (2014) Consensus report on the future of animal-free systemic toxicity testing. ALTEX 31:341-356.






[6] Sandberg A (2014) Ethics of brain emulations. Journal of Experimental & Theoretical Artificial Intelligence 26:439-457.



Want to cite this post?







Ahlgrim, N.S. (2017). The Neuroethics Blog Series on Black Mirror: San Junipero. The Neuroethics Blog. Retrieved on , from  http://www.theneuroethicsblog.com/2017/10/the-neuroethics-blog-series-on-black.html.






Tuesday, October 3, 2017

“It is sometimes a sad life, and it is a long life:” Artificial intelligence and mind uploading in World of Tomorrow


By Jonah Queen









"The world of tomorrow" was the motto of the

1939 New York World's Fair

Image courtesy of Flickr user Joe Haupt

“One day, when you are old enough, you will be impregnated with a perfect clone of yourself. You will later upload all of your memories into this healthy new body. One day, long after that, you will repeat this process all over again. Through this cloning process, Emily, you will hope to live forever.”








These are some of the first lines of dialogue spoken in the 2015 animated short film, World of Tomorrow.* These lines provide an introduction to the technology and society that this science fiction film imagines might exist in our future. With the release of its sequel last month, I am dedicating this post to discussing the film through a neuroethical lens.



Plot Summary (Note: the following contains spoilers for World of Tomorrow)




Those lines are spoken to a young girl named Emily by one of her clones (a “third generation Emily”) who is contacting her from 227 years in the future. The clone of Emily (whom I will refer to as Emily) explains that in the future, those who can afford it regularly have their minds uploaded into either a clone of themselves or a cube-shaped digital storage device. Emily’s descriptions of the future are mostly lost on the young Emily (who is referred to in the film as Emily Prime), but Emily continues the conversation undeterred, as if she were speaking to an adult—a dynamic that continues throughout the film.





Emily then teleports Emily Prime to her location in the future and shows her some other technologies, including “view screens,” which allow people to view others’ memories. Emily uses a view screen to share memories of some important events in her life, including the various jobs she has held, her marriage (to a man who was also a clone), and the death of her husband.





After this tour through her memories, Emily suddenly explains that the world will be hit by a large meteor in sixty days. In the hopes of surviving, many are uploading their minds into cubes and having them launched into space. Those who cannot afford mind uploading are turning to “discount time travel,” which frequently results in deadly malfunctions. Emily then explains that the reason she contacted Emily Prime was to retrieve a memory from her that she had forgotten: a memory of her and her mother, which she says will comfort her in her final moments. After removing the memory from Emily Prime’s brain and implanting it into her own with a raygun-like device, she transports Emily Prime back to her present.



Ethical Issues 




In a mere seventeen minutes, this short touches on many of the issues discussed in contemporary bioethics and neuroethics, including human cloning, mind uploading, artificial intelligence, distributive justice, and technologically advanced escapist media. In this post, I will mostly focus on the ethics of mind uploading and artificial intelligence.








One possible method for creating a digital copy of a human brain

Image courtesy of Wikimedia Commons

In the film, mind uploading is depicted as a way for people to attempt to achieve immortality through uploading their minds into either a machine or into the brain of a clone of themselves. While current technology is nowhere close to achieving this goal (though some say that it could happen in our lifetimes), advances in neuroscience and computer science have led many to consider this possibility, and discussions of the ethical implications are currently underway.





One potential concern that the movie touches on is the quality of life (if it could even be considered life) that a person (or disembodied mind) is subject to after such a procedure. Emily tells Emily Prime that their grandfather had his consciousness uploaded into a cube and reads one of the messages he sent to her, which consists entirely of exclamations of horror. What he might be experiencing is not specified, but it is likely that the experience of having one’s consciousness existing within a computer would be so different from our embodied life that it would be disorienting or even unpleasant. Since our brains are not the only parts of us involved in feeling and perception, what would it be like to exist without input from a body? If there is not sufficient stimulation, would the mind suffer the negative effects of sensory deprivation? Would this technology need to simulate the experience of having a body? Would the contents of the entire nervous system (including, for example, the enteric nervous system that innervates the gut) need to be uploaded? You might even ask what the requisite features of a nervous system would be to have a “meaningful life." Some have raised such issues with organoids, so-called “mini-brains.”





And the cloning method does not solve these issues either. As Emily explains, the clones have some mental and physical “defects” that people are willing to overlook in their quest for immortality. She also seems tired and saddened by the length of her life (in addition to their other technologies, the human lifespan has greatly increased in the future) as well as the stress caused by having several lifetimes’ worth of memories. The quote in the title of this post is how she describes her existence to Emily Prime, and the sequel might explore this further, as it is subtitled The Burden of Other People's Thoughts.





One of the other issues that the film raises is the question of whether mind uploading would really be extending life or just creating a separate person (or computer program) with your personality, intellect, and memory. From your perspective (the original you), wouldn’t your consciousness end? That is how it seems to me, and some philosophers and ethicists agree. A previous post on this blog explores this idea and goes even further, raising the possibility of someone having their mind uploaded into multiple entities, creating several “copies” of themselves, which would make it even more difficult to see it as a simple continuation of one’s life. The technology could even be used to copy and upload a person’s consciousness to a computer or clone while they are still alive. In this sense, mind uploading can be seen as creating a new entity—either a person with a bioengineered brain or a sentient artificial intelligence (AI) based on a human brain. A recent post on this blog discusses various technologies that could be used to create a copy of someone’s mind after their death—further blurring the lines between mind uploading and AI.




The sci-fi trope of the robot apocalypse is often

referenced in discussions about AI

Image courtesy of Flickr user Gisela Giardino




World of Tomorrow also addresses AI in a different context. While the ethics of AI is currently a popular topic in tech media, much of the coverage focuses on the risks AI could pose to humans. This can be seen in the sensationalist coverage of Facebook’s recent AI experiment (though the claims that the experiment was stopped out of fear are not entirely accurate). While prominent figures in science and computing (including Stephen Hawking, Elon Musk, and Bill Gates) have expressed concerns about the threat that sufficiently advanced AI could pose, others are focused on the more immediate concerns around programming AI to make life-and-death decisions, whether for self-driving cars or autonomous weapons.





However, the issues concerning AI in World of Tomorrow are different. Emily describes how one of her jobs involved supervising robots on the moon. She programmed the solar-powered robots to fear death so they would stay in the sun on the light side of the moon. After the operation goes out of business, Emily is relocated, but, to save money, the robots are left there, where they continue to move across the moon’s surface and transmit depressing poetry back to earth. This (along with the previous discussion of mind uploading) presents another aspect to the ethics of AI debate: if we can create an AI capable of suffering, how should we treat it? While this issue is complicated by the fact that we can never truly know the subjective feelings of another entity (that is, they could be philosophical zombies), if something can suffer, even if it is a robot or software program that we have created, it seems clear that we should treat it well (though we, unfortunately, often do not even extend that courtesy to organisms that we recognize as living and capable of feeling). And maybe we should not even create such sentient artificial beings in the first place. This is a topic in AI ethics sometimes called robot rights, with some ethicists and philosophers arguing that a conscious AI should be given the same rights as an animal or even a person, depending on its level of complexity. As sentient machines, obviously, do not yet exist, this debate is mostly theoretical, and many see it as unnecessary at this time. Though a similar issue is discussed in neuroethics when it comes to determining if (and when) a collection of cultured neurons in a lab can become complex enough to feel.




World of Tomorrow presents a vision of the future which, while bleak, is still very human. The film plays off the phrase “world of tomorrow,” which has often been used to describe an optimistic and utopian vision of the future, to instead show a future where advances in technology have led to even more extreme versions of many of the same problems we have today. If we want to work towards solving these issues (without slowing technological progress), we need to learn how to use our tools wisely.



*As of publication, World of Tomorrow is available to watch on Netflix and Vimeo





Want to cite this post?



Queen, J. (2017). “It is sometimes a sad life, and it is a long life:” Artificial intelligence and mind uploading in World of Tomorrow. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/10/it-is-sometimes-sad-life-and-it-is-long.html