Image courtesy of Wikimedia Commons.
The Neuroethics Blog hosted a special series on Black Mirror over the past year, originally coinciding with the release of its third season on Netflix. Black Mirror is noted for telling profoundly human stories in worlds shaped by current or future technologies. Somnath Das, now a medical student at Thomas Jefferson University, founded the Blog’s series on Black Mirror. Previous posts covered "Be Right Back", "The Entire History of You", "Playtest", "San Junipero", "Men Against Fire", "White Bear", and "White Christmas". With Season 4 released at the end of December 2017, Somnath reconvened with contributing authors Nathan Ahlgrim, Sunidhi Ramesh, Hale Soloff, and Yunmiao Wang to review the new episodes and discuss the common neuroethical threads that pervade Black Mirror.
The discussion has been edited for clarity and conciseness.
*SPOILER ALERT* - The following contains plot spoilers for the Netflix television series Black Mirror.
Somnath: My first question is: if and when we collect neural data on people, who really owns it? In the case of "Arkangel", the parents can access and even filter it. The government stepped in to regulate the technology, but it was owned by a private company. Who really owns that brain data? Who can control it? Do the people own that neural data, or do the companies?
Sunidhi: An interesting way to approach this is to think about your current medical records. Who owns your blood test results? Neural data is physical data; it's the same thing, just extended to the brain. Companies owning that is a serious problem because there are always private interests that can manifest themselves in dangerous ways. The data should be owned by the person whose data it is.
Yunmiao: I feel that, if that’s the case, we should not upload neural data at all. It can easily be misused by others. For example, Apple announced that it is going to transfer its Chinese iCloud operations to a state-owned company on February 28th, 2018. People might worry about who will have access to their personal data. Either a government or a private agency could potentially misuse the information for its own interests. On the other hand, Russell Poldrack and Krzysztof J. Gorgolewski have pointed out the advantages of sharing neuroimaging data: for example, it could maximize the scientific contribution, improve reproducibility, and prompt new questions. “Big data” is a trendy phrase, and its broad applications have shown promise in various fields. However, should the potential benefits of data sharing, whether of neural data or of general personal data, outweigh the importance of ownership? Despite the ethical concerns about privacy, there are real benefits to data sharing, especially in a scientific setting.
Nathan: Let’s consider the more fantastical technology. Even if you willingly give up or sell your neural data through a very thorough informed consent procedure, if there is some sort of neuro-emulation, you have a digital self. There is no control after you make that transfer of ownership. The lack of control is why it is hard for even the most libertarian of thinkers to endorse voluntary slavery. We balk at that transfer of personal ownership. And I think for something as detailed as neural data, it would make sense for it to follow the same norm.
Somnath: My next question with respect to brain data and privacy is more about public opinion and how ethicists respond to it. With Google Glass, we saw that many people were really uncomfortable with brain-computer interfaces (BCIs) being integrated into their lives. There were two issues here. One was that people didn't want random strangers wearing the Glass and taking photos or videos of them, which is a pretty obvious objection. The second was that people were uncomfortable with what the data could be used for. But as we've seen with a lot of technologies, like cars, people [used to be] scared of the internal combustion engine exploding. And nowadays we accept them. We walk around them very easily. We're very familiar with them. I was wondering, in the vein of the episode “The Entire History of You,” where everybody has a brain-computer interface that can record and store memories, do you think people would eventually be able to accept these BCIs as normal?
Hale: People absolutely become comfortable with these things. We saw it as cars replaced carriages, and more recently as different generations have engaged with technology as ‘simple’ as social media. Our standards of privacy have changed in only a generation or two. Many people no longer view privacy as a necessary or ingrained part of everyone’s lives to the degree it used to be. But even if you engage with something like Facebook or Instagram in a restrained way and you’re not showing everything, your life can get very wrapped up in the way people interact in an online environment. I think newer generations, and some individuals in the older generations, will engage with a neural data-based social environment, even if some people object to the idea. One of the most effective counterbalances to people's interest in adopting these technologies is legal regulation. Regulation could significantly slow or stop the misuse of these technologies and help us avoid the less desirable scenarios, though it can’t prevent negative consequences with 100% effectiveness. What will be important is whether we have reactive laws or preventative laws, which will probably be determined by the relative speeds of technological development and lawmaking.
Yunmiao: For example, in “Crocodile,” people do have access to neural data, and one character goes to the extreme: Mia (the central protagonist) essentially killed everyone who could potentially have a memory of her murder(s). On one hand, such technology might stop people from committing crimes, knowing someone might be watching. On the other hand, it might also threaten society because people will fear that information getting out.
Image courtesy of Wikipedia.
Nathan: I think an unintended consequence of something like Black Mirror is an automatic increase in acceptance of these technologies. Even though a supermajority of the episodes, like “Crocodile,” “Shut Up and Dance,” and “Men Against Fire,” end in death – or worse, like the perpetual agony in “Black Museum” – it gets the story out there, just like science fiction always has. Even if it's a morbid fascination, it puts these technologies in the public eye. And I see that fascination inevitably garnering interest in making the technology actually happen, even if its first presentation was terrifying.
Sunidhi: I think it's just a matter of time. People watching this normalizes it. There are numerous examples of technologies that people initially rejected and then slowly took in as more and more people accepted them.
Somnath: My next question is about the emulated self and focuses on storing people's consciousness against their will. In “Hang the DJ,” however, we're introduced to a dating app that simulates hundreds of versions of ourselves and other people, with near-perfect emulations of our thoughts, feelings, and personalities. It basically takes the mystery out of dating. The app then mysteriously kills off the simulations, or deletes their code, when it determines that two people could be matched. But for me that raises the question: would emulating those perfect copies of people, taking their memories away, putting them in an unknown place, and then deleting their code be unethical? Is that considered imprisonment? And does that even matter?
Hale: You're not deleting an individual artificial intelligence within that universe, you’re deleting the entire thing at once. So you're not causing any sort of relational harm. You're not killing an individual that other individuals know and will grieve over. Everyone disappears at once within that universe. But of course, a lot of it comes down to an unanswerable question: how can we possibly know whether a simulated person actually experiences the emotions that they appear to experience?
Nathan: Yes, “Hang the DJ” has a good outcome in the end. But I think it's unfair of us to judge that technology and its consequences based on a dating app when the exact same technology could be used differently, as in the finale, “Black Museum.” With pretty much the same technology as the dating app, a man, whether deservedly or not, was put into perpetual torture. Or at least, the emulation of his consciousness was.
Sunidhi: Also, how much of it is actually deleted? Is it fully deleted, or does it continue to exist somewhere?
Somnath: “San Junipero” showed us a positive use of a similar technology: as a way of ensuring a good death, or rather, a life beyond death. The episode concluded with one of the most memorable love stories in pop culture. When a person died in the real world, a new version of that person was created in the simulation. My question is: does the company then own your life? You'd be at the whims of that company. Is that necessarily a bad thing? The people inside are living a good life even though they're dependent on this company owning them. Is it a good thing to live in a simulation or not?
Nathan: It can never be a good thing as long as there is a distinction between the simulation and the real world. There was no perceptual difference between the simulation and the real world in “San Junipero.” Even so, the real world seemed to treat the simulation as something qualitatively different: the people in the simulation had different legal rights. We instinctively think of a person undergoing a change like Alzheimer’s disease as retaining their identity. Their personality is different, their memories change, but you know it’s the same person, much more so than with a simulation in “San Junipero,” where the personality is identical. As long as we think of a simulated world as something demonstrably different, what happens if you don't renew your contract with the company that built San Junipero? Then they’re entitled to terminate you. You’d die. Again.
Sunidhi: What’s interesting in “San Junipero” is that the simulated copies still retain the same memories. Then it’s an iffy line as to how you can be different people but still retain the same memories, life experiences, etc.
Yunmiao: I think the question is whether the simulated self is continuous with the original person, or whether it’s another life or person. What if they both exist at the same time, like in many other Black Mirror episodes? I don't think the copy, or simulated person, is an extension of the original person. I think they have their own mind, and they are their own person. Thus, the original person should not have any ownership of that emulated self.
Somnath: My final question is about emulation. We’re pretty far away from emulating human bodies. The research is still in its infancy. When I wrote about it on the blog, the research basically said that neuroscientists are still trying to figure out how to emulate ion channels. Never mind complex neural activity or entire human beings. So why do you think the show keeps coming back to the emulated self if we’re so far away from it? Do you think it just makes for a good story, or do you think there is something more important about how the American consciousness reacts to this technology when it is portrayed in the show?
Image courtesy of Pixabay.
Yunmiao: This is more of a philosophical question: what is the self? That question has been debated for centuries, and I think this is just another perspective from which to view or evaluate what the self is. Do you view yourself as the same as you were 10 years ago or 5 years ago, or as you will be 5 minutes from now? The philosophical question sits on top of the potential technology and the ethical issues.
Sunidhi: I think the whole ‘true self’ debate manifests itself in current technology. Think about deep brain stimulation for depression, and how a patient changes. Are they being restored to who they were before? Are they a new person made by the treatment? Those questions are already present, so this might just be another interpretation of how they will present themselves in the future.
Hale: I agree, and I think that people can’t help but ask themselves, when they see it on the show: how will this affect me and my life? If a technology feels completely distant because you won’t see it for hundreds of years, you will only be casually interested. But these technologies are presenting a pseudo-immortality. Even now, we might be close to the point of saving our brains or brain data, if only cryogenically. One day in the future, when we have the technology to do something with that, we could digitally pop into existence again. People see this and feel it’s not within arm’s reach, but it is just beyond their fingertips.
Nathan: I’m more of a skeptic when it comes to this technology ever bearing fruit. But I still think that, even if it is completely fantastical, it’s important to get it into the public consciousness. Science fiction, as much as fantasy, can serve as an allegory for the questions we are really asking: as Yunmiao said, questions ranging from immortality to identity and personhood. It’s a lot easier to enter the conversation if you’re asking, ‘What if I upload myself to Star Trek?’ (as in “USS Callister”) instead of, ‘What if I misrepresent myself on Facebook and my boss thinks I’m someone completely different?’
Image courtesy of Flickr user FrenchKheldar. |
Somnath: People with backgrounds in ethics have had a visceral reaction to Black Mirror. Proponents of these technologies, like those who are trying to make emulation happen, often contend that our hesitancy is driven by fear and that progress is impeded by hand-wringing ethicists. We can’t ignore that the show brings a fascination to all these technologies, regardless of the grim consequences. That's why ethicists do need to respond to the show; ethicists do better when they get out of their ivory tower. At the same time, pop-culture phenomena like Black Mirror are made more intriguing and more constructive when we have real discussions about the cross-pollination of fiction and the real world.
Before we close, I have to ask: favorite episodes? For me, “White Bear” was the most fascinating for the neuroethical implications. But as a consumer, “The Waldo Moment” is my favorite.
Yunmiao: “White Christmas”
Hale: “USS Callister”
Sunidhi: “Men Against Fire”
Nathan: “Hated in the Nation”
Want to cite this post?
Ahlgrim, N. (2018). The Neuroethics Blog Series on Black Mirror: Black Mirror in the Rear-view Mirror - an Interview with the Authors. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/03/black-mirror-in-rear-view-mirror.html