
Tuesday, June 27, 2017

Mental Privacy in the Age of Big Data


By Jessie Ginsberg




Jessie Ginsberg is a second-year student in the Master of Arts in Bioethics program and a third-year law student at Emory University.




A father stood at the door of his local Minneapolis Target, fuming and demanding to speak to the store manager. Holding coupons for maternity clothes and nursing furniture in front of the manager, the father exclaimed, “My daughter got this in the mail! She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?”





Target was not trying to get her pregnant. Unbeknownst to the father, his daughter was due in August.  





In his February 16, 2012 New York Times article “How Companies Learn Your Secrets,” Charles Duhigg reported on this Minneapolis father and daughter and on how companies like Target use marketing analytics teams to develop algorithms that anticipate consumers’ current and future needs. Accumulating data from prior purchases, coupon use, submitted surveys, opened Target emails, and demographics, a team of analysts renders each consumer’s decision patterns into neatly packaged data sets tailored to predict future buying choices.
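To make the mechanics concrete, here is a deliberately oversimplified sketch of the kind of scoring such a team might run. Duhigg reports that purchases like unscented lotion and certain supplements figured among the signals in Target’s real pregnancy-prediction model, but everything below (the feature names, weights, and cutoff) is invented for illustration and is not Target’s algorithm.

```python
import math

# Hypothetical weights on purchase signals. Duhigg reports unscented lotion
# and certain supplements were real signals; these numbers are invented.
PURCHASE_WEIGHTS = {
    "unscented_lotion": 1.4,
    "calcium_magnesium_zinc": 1.1,
    "cotton_balls_bulk": 0.8,
    "maternity_clothes": 2.5,
}
BIAS = -3.0  # baseline log-odds: most shoppers are not expecting

def pregnancy_score(purchases):
    """Squash the weighted sum of purchase evidence into a 0-1 score."""
    z = BIAS + sum(PURCHASE_WEIGHTS.get(item, 0.0) for item in purchases)
    return 1.0 / (1.0 + math.exp(-z))

basket = ["unscented_lotion", "calcium_magnesium_zinc", "cotton_balls_bulk"]
if pregnancy_score(basket) > 0.5:  # hypothetical cutoff for the mailing list
    print("flag household for baby-related coupons")
```

The unsettling part is not the arithmetic, which is trivial, but the inference: no single purchase is revealing, yet the combination quietly reclassifies a shopper into a sensitive category she never disclosed.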






Flash forward to 2017, a time when online stores like Amazon dominate the market and cell phones are reservoirs of personal information, storing intimate details ranging from your location to your desired body weight to your mood. Furthermore, data analysis algorithms are more sophisticated than ever, gobbling up volumes of information to generate highly specific and precise profiles of current and potential consumers. For example, plugging information ranging from social media activity to Internet searches to data collected from smartphone applications into an algorithm unlocks a goldmine of sensitive information that reveals the proclivities, thought processes, self-perception, habits, emotional state, political affiliations, obligations, health status, and triggers of each consumer. We must then ask ourselves: in the age of Big Data, can we expect mental privacy? That is, in a society replete with widespread data collection about individuals, what safeguards are in place to protect the use and analysis of information gathered from our virtual presence?





In addition to the information deliberately submitted to our phones and computers, we must also worry about the data we unwittingly supply. Take, for example, the brain training program Lumosity. Over the past 10 years, this website has lured over 70 million subscribers with promises that its product will “bring better brain health,” delay conditions like Alzheimer’s and dementia, and help players “learn faster” and be “sharper.” Though Lumosity and similar companies like LearningRx were sued by the Federal Trade Commission for false advertising and must now offer a disclaimer about the lack of scientific support for their products, has the damage already been done?








Image courtesy of Pixabay.

More troubling than a brain training company’s use of unsubstantiated claims to tap into consumer fears of losing mental acuity for financial gain is the possibility that the information collected by these programs may serve as yet another puzzle piece for big data firms. Not only can applications and search engine histories provide a robust portfolio of what an individual consciously purchases and searches for; brain training websites can now also provide deeper insights into how individuals reason and analyze information. In their article “Internet-Based Brain Training Games, Citizen Scientists, and Big Data: Ethical Issues in Unprecedented Virtual Territories,” Dr. Purcell and Dr. Rommelfanger express this concern: brain training program (BTP) data “are being interpreted as current demonstrations of existing behaviors and predispositions, and not just correlations or future predictions of human cognitive capacity and performance. Yet, the vulnerability of cognitive performance data collected from BTPs has been overlooked, and we believe the rapid consumption of such games warrants a sense of immediacy to safeguarding these data” (Purcell & Rommelfanger 2015, 357). The article proceeds to question how the data collected through brain training programs will be “secured, interpreted, and used in the near and long term given evolving security threats and rapidly advancing methods of data analysis” (Purcell & Rommelfanger 2015, 357).





Even more worrisome is the lack of protections currently afforded to those who turn to websites and phone applications for guidance in coping with mental health issues. According to a 2014 article entitled “Mental Health Apps: Innovations, Risks and Ethical Considerations,” research shows that a majority of young adults with mental health problems do not seek professional help, despite the existence of effective psychological and pharmacological treatments (Giota & Kleftaras 2014, 20). Instead, many of these individuals turn to mental health websites and phone applications, which “are youth-friendly, easily accessible and flexible to use” (Giota & Kleftaras 2014, 20). Applications such as Mobile Therapy and MyCompass collect and monitor data ranging from lifestyle information, such as food consumption, exercise, and eating habits, to mood, energy levels, and requests for psychological treatments to reduce anxiety, depression, and stress (Proudfoot et al. 2013). Alarmingly, users of these programs are not guaranteed protection from the developers themselves: current legal mechanisms in the United States do not fully prevent developers from selling personal health information submitted to apps to third-party marketers and advertisers.





Justice Allen E. Broussard of the Supreme Court of California declared in a 1986 opinion, “If there is a quintessential zone of human privacy it is the mind” (Long Beach City Emps. Ass'n. v. City of Long Beach). Indeed, with the advent of cell phones, widespread use of the internet, data analysts, and complex algorithms that predict future behaviors, our claim to privacy is waning. Until laws and regulations are designed to protect information collected from phone applications and Internet use, it is crucial that consumers become fully aware of just how much of themselves they share when engaging in Internet and phone activity.





References 





Giota, K.G. and Kleftaras, G. 2014. Mental Health Apps: Innovations, Risks and Ethical Considerations. E-Health Telecommunication Systems and Networks, 3, 19-23. 





Long Beach City Emps. Ass'n. v. City of Long Beach, 719 P.2d 660, 663 (Cal. 1986). 





Proudfoot, J., Clarke, J., Birch, M.R., Whitton, A.E., Parker, G., Manicavasagar, V., et al. (2013) Impact of a Mobile Phone and Web Program on Symptom and Functional Outcomes for People with Mild-to-Moderate Depression, Anxiety and Stress: A Randomised Controlled Trial. BMC Psychiatry, 13, 312. 





Purcell, R. H., and Rommelfanger, K. S. 2015. Internet-Based Brain Training Games, Citizen Scientists, and Big Data: Ethical Issues in Unprecedented Virtual Territories. Neuron, 86(2), 356-359.



Want to cite this post?



Ginsberg, J. (2017). Mental Privacy in the Age of Big Data. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/06/mental-privacy-in-age-of-big-data.html

Tuesday, June 20, 2017

Fake News – A Role for Neuroethics?



By Neil Levy





Neil Levy is professor of philosophy at Macquarie University, Sydney, and a senior research fellow at the Uehiro Centre for Practical Ethics, University of Oxford.






Fake news proliferates on the internet, and it sometimes has consequential effects. It may have played a role in the recent election of Donald Trump to the White House and in the Brexit referendum. Democratic governance requires a well-informed populace: fake news seems to threaten the very foundations of democracy.





How should we respond to its challenge? The most common response has been a call for greater media literacy. Fake news often strikes more sophisticated consumers as implausible. But there are reasons to think that the call for greater media literacy is unlikely to succeed as a practical solution to the problem of fake news. For one thing, the response seems to require what it seeks to bring about: a better informed population. For another, while greater sophistication might allow us to identify many instances of fake news, some of it is well crafted enough to fool the most sophisticated (think of the recent report that the FBI was fooled by a possibly fabricated Russian intelligence report).





Moreover, there is evidence that false claims have an effect on our attitudes even when we initially identify the claims as false. Familiarity – processing fluency, in the jargon of psychologists – influences the degree to which we come to regard a claim as plausible. Due to this effect, repeating urban legends in order to debunk them may leave people with a higher degree of belief in the legends than before. Whether for this reason or for others, people acquire beliefs from texts presented to them as fiction. In fact, they may be readier to accept that claims made in a fictional text are true of the real world than claims presented as factual. Even when they are warned that the story may contain false information, they may come to believe the claims it makes. Perhaps worst of all, when asked how they know the things they have come to believe through reading the fiction, they do not cite the fiction as their source: instead, they say it is ‘common knowledge’ or they cite a reliable source like an encyclopedia. They do this even when the claim is in fact inconsistent with common knowledge.








Image courtesy of Flickr user Free Press/ Free Press Action Fund.

So we may come to acquire false beliefs from fake news. Once acquired, beliefs are very resistant to correction. For one thing, memory of the information and memory of the correction may be stored separately and decay at different rates: even after correction, people may continue to cite the false claim because they do not recall the correction when they recall the information. If they recall the information as being common knowledge or coming from a reliable source, knowing that Breitbart or Occupy Democrats is an unreliable source may not affect their attitudes. Even if they recall the retraction, moreover, they may continue to cite the claim.





Finally, even when we succeed in rejecting a claim, the representation we form of it remains available to influence further cognitive processing. Multiple studies (here and here) have found that attitudes persist even after the information that helped to form them is rejected.





All this evidence makes the threat of fake news – of false claims, whether from unreliable news sources or from politicians and others who seek to manipulate us – all the greater, and suggests that education is not by itself an adequate response. We live in an age in which information, true and false, spreads virally across the internet in an unprecedented way. We may need unprecedented solutions to the problem.





What are those solutions? I must confess I don’t know. An obvious response would be censorship: perhaps with some governmental agency vetting news claims. While my views on free speech are by no means libertarian, I can’t see how such a solution could be implemented without unacceptable limitations of individual freedoms. Since fake news has an international origin, the sources can’t effectively be regulated, so regulation would have to target individuals who would share the stories on social media. That kind of regulation would require incredibly obtrusive monitoring and unacceptable degrees of intervention, and would place too much power in the regulating agency.








Image courtesy of Flickr user Tyler Menezes.

A better solution might be to utilize the same kinds of psychological research that warn us about the dangers of fake news to design contrary sources of information. The research that shows how people may be fooled by false claims also provides guidance on how to make people more responsive to good evidence. We could use this information to design informational nudges, with the aim of ensuring that people are better informed.





This solution itself requires scrutiny. Are such nudges ethical? I think they are, or at least can be. Further, would good information crowd out bad? We aren’t in a position to confidently say right now. What we can say, however, is that fake news is a problem that cries out for a solution. If we can’t solve it, we may find that democratic institutions are not up to the job of addressing the challenges we face today.






Want to cite this post?



Levy, N. (2017). Fake News – A Role for Neuroethics? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/06/fake-news-role-for-neuroethics_17.html

Tuesday, June 13, 2017

Have I Been Cheating? Reflections of an Equestrian Academic



By Kelsey Drewry





Kelsey Drewry is a student in the Master of Arts in Bioethics program at the Emory University Center for Ethics where she works as a graduate assistant for the Healthcare Ethics Consortium. Her current research focuses on computational linguistic analysis of health narrative data, and the use of illness narrative for informing clinical practice of supportive care for patients with neurodegenerative disorders.






After reading a recent study in Frontiers in Public Health (Ohtani et al. 2017) I realized I might have unwittingly been taking part in cognitive enhancement throughout the vast majority of my life. I have been a dedicated equestrian for over twenty years, riding recreationally and professionally in several disciplines. A fairly conservative estimate suggests I’ve spent over 5000 hours in the saddle. However, new evidence from a multi-university study in Japan suggests that horseback riding improves certain cognitive abilities in children. Thus, it seems my primary hobby and passion may have unfairly advantaged me in my academic career. Troubled by the implication that I may have unknowingly spent much of my time violating the moral tenets upon which my intellectual work rests, I was compelled to investigate the issue.






The study in question, “Horseback Riding Improves the Ability to Cause the Appropriate Action (Go Reaction) and the Appropriate Self-control (No-Go Reaction) in Children” (Ohtani et al. 2017), suggests that the vibrations associated with horses’ movement activate the sympathetic nervous system, leading to improved cognitive ability in children. Specifically, children 10 to 12 years old completed either simple arithmetic or behavioral (go/no-go) tests before and after two 10-minute sessions of horseback riding, walking, or resting. A large percentage of children demonstrated improved performance on the go/no-go tasks (which largely test impulse control) after 10 minutes of riding compared to children who walked or rested; no significant changes were seen on the arithmetic tasks.
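For readers unfamiliar with the paradigm: a go/no-go test asks participants to respond quickly to one class of stimuli (“go”) and to withhold any response to another (“no-go”); failures to withhold, called commission errors, index impulse control. Below is a toy simulation of such a trial loop. The trial counts, probabilities, and “impulsivity” parameter are invented for illustration and are not the protocol Ohtani and colleagues used.

```python
import random

def simulated_response(stimulus, impulsivity=0.15):
    """Stand-in for a child's keypress: always respond to 'go' stimuli,
    and wrongly respond to 'no-go' stimuli with probability `impulsivity`."""
    return stimulus == "go" or random.random() < impulsivity

def run_block(n_trials=20, p_go=0.7):
    hits = commission_errors = 0
    for _ in range(n_trials):
        stimulus = "go" if random.random() < p_go else "no-go"
        if simulated_response(stimulus):
            if stimulus == "go":
                hits += 1                # correct "go" response
            else:
                commission_errors += 1   # failed to inhibit the response
    return hits, commission_errors

print(run_block())  # fewer commission errors = better impulse control
```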





"There are many possible effects of human-animal interactions on child development," study author Mitsuaki Ohta suggests. "For instance, the ability to make considered decisions or come to sensible conclusions, which we described in this study, and the ability to appreciate and respond to complex emotional influences and non-verbal communication, which requires further research to be understood" (Frontiers 2017).





So, have the horses I’ve ridden over my life (the number must be nearing 100) enhanced components of my cognitive abilities and perhaps even predisposed me to a career in bioethics? Have they given me unfair advantages over others by heightening my ability to think about and respond to moral problems? When I go to the barn before writing an exam or term paper (or this blog post) am I cheating via the highly controversial and purportedly unjust act of cognitive enhancement? After all, considered decisions and sensible conclusions are the hallmark of bioethics.








Image courtesy of Wikimedia Commons.

Though I initially—perhaps defensively—want to argue “no,” there seems to be a reasonable case that I have enthusiastically engaged in cognitive enhancement, and that my particular means of doing so is quite unjust. Though different in methodology and perhaps in neurologic effect, pharmacological cognitive (or affective) enhancement generates a discourse that is surprisingly analogous to my circumstance.





Throughout diverse fields of literature, a host of arguments have been raised regarding the nature and the moral and legal status of non-prescription use of stimulant “study drugs,” especially among young people in academic settings (e.g. Arria 2011; DeSantis et al. 2010; Terbeck 2013; Vrecko 2013). Among the objections commonly advanced against this sort of enhancement is the concern that it is unnatural and may result in inauthentic or non-rational choice. In his discussion of the issue, Torben Kjærsgaard writes, “We could risk losing our capacity to pull ourselves together, if we rely on motivation enhancers every time we face hard challenges... we may risk losing touch with ourselves in some sense. Thus, we should wonder how we would be doing if it were not for the enhancers, and ask ourselves how much we would have achieved if it were not for the motivation enhancers” (Kjærsgaard 2015). At first blush, riding may not seem to be motivational enhancement, but that is certainly a role it has played in my life. Not only has it become my go-to activity for coping with stress, anxiety, or any undesirable emotion, in much the same way that some rely on drugs, but my household rules growing up also cemented its role as motivator. I was raised with academics as the priority—if I didn’t do well in classes, I didn’t get to ride. Though not ingested orally (rather aurally), I’d certainly say my love of horses, if not the riding itself, has significantly influenced my academic motivation. By this measure, my equestrian habit is at least ethically dubious if not entirely concerning.





Another commonly cited objection to the off-label use of stimulants is medical. In the most extreme cases, overdoses, “The primary clinical syndrome involves prominent neurological and cardiovascular effects… the patient may present with mydriasis, tremor, agitation, hyperreflexia, combative behavior, confusion, hallucinations, delirium, anxiety, paranoia, movement disorders, and seizures” (Spiller 2013). However, even the common “minor” equestrian injuries, which include soft tissue damage, fractures, and concussions (Bixby-Hammett 1990), are not negligible by comparison. I have suffered each category of riding injury multiple times—several broken ribs, a broken arm, an avulsion fracture in my foot, lung and liver contusions, and concussions are among my more certain traumas. Training and competing in the sport considered the most dangerous in the Olympics (van Gilder Cooke 2012) and the leading cause of sports-related traumatic brain injuries (Mohney 2016), I’m considered lucky among my equestrian peers even with that list. Undoubtedly, anyone recommending participation in horseback riding could be accused of violating basic nonmaleficence. The risk-benefit ratio does not skew in favor of the saddle, and it seems unimaginable that a medical professional would recommend an intervention with a similar profile.







Image courtesy of Pixabay.

Finally, let’s turn to justice and access. Just access and opportunity are core virtues of American medicine, and they often ground strong condemnations of pharmacological enhancement. The general argument is that drugs like Adderall are intended to “restore” normal cognitive (or affective) ability to individuals suffering from attention deficit disorders, and that use by “cognitively normal” individuals provides disproportionate benefit while unfairly disadvantaging those with medical need. Additionally, the high cost of the drugs, especially when purchased illegally, may widen the gap in academic achievement already created by socioeconomic factors (Sirin 2005). By these considerations, I must denounce my horse habit as undeniably unethical. Even more than for expensive pills, the financial privilege required to participate in this sport is incontrovertible: riding lessons cost from $25 to upwards of $100 an hour, and that is just instruction. Horseback riding is simply financially untenable for many, regardless of the purported cognitive benefits. Thus, if one accepts Ohtani’s conclusions, my equestrian activities have granted me access to a privileged means of enhancing cognition. I have improved my mental faculties without the effort and intention lauded by Kantian morality, and with disregard for the virtue of justice valued by my society (Timmons 2013).





The conclusion that, by participating in a sport that I love, I may have inadvertently acted immorally by enhancing certain cognitive capacities is puzzling and likely provocative to many. My moral intuition suggests that since I neither intended to enhance my abilities nor was even aware that this outcome was possible, I did not act unfairly. There also seems to be something about the nature of the act, perhaps that it is physical instead of chemical, that excludes it from the “immoral enhancements” denounced by bioconservative theorists. And if we do deem this activity to be equestrian cognitive enhancement, why would this riskier, less accessible, and equally addictive methodology be less egregious than biomedical means? The readily paralleled discourse between cognitive enhancement via riding and the ethical issues of off-label nootropic drug use reveals that we have much more to discuss about what does and does not constitute enhancement. Perhaps after a bit more time in the saddle I’ll be able to come to a sensible conclusion.



References 



Arria, A. M. (2011) Compromised sleep quality and low GPA among college students who use prescription stimulants nonmedically. Sleep Medicine 12(6): 536-537. Available here.



Bixby-Hammett, D., and Brooks, W.H. (1990) Common Injuries in Horseback Riding: A Review. Sports Medicine 9(1): 36-47.



DeSantis, A., S. M. Noar, and E.M. Webb. (2010) Speeding through the frat house: A qualitative exploration of nonmedical ADHD stimulant use in fraternities. Journal of Drug Education 40(2): 157– 171. Available here.



Frontiers. (2017, March 2). Horse-riding can improve children's cognitive ability: Study shows how the effects of horseback riding improve learning in children. ScienceDaily. Retrieved here.



Kjærsgaard, T. (2015) Enhancing Motivation by Use of Prescription Stimulants: The Ethics of Motivation Enhancement. AJOB Neuroscience 6(1): 4-10.



Mohney, G. (2016, April 1) Horse Riding is Leading Cause of Sport-Related Traumatic Brain Injuries, Study Finds. ABC News. Retrieved here.



Ohtani, N., Kitagawa, K., Mikami, K., Kitawaki, K., Akiyama, J., Fuchikami, M., Uchiyama, H., and Ohta, M. (2017) Horseback Riding Improves the Ability to Cause the Appropriate Action (Go Reaction) and the Appropriate Self-control (No-Go Reaction) in Children. Frontiers in Public Health. Published online 6 February 2017 here.



Spiller, H.A., Hays, H.L., and Aleguas, A., Jr. (2013) Overdose of drugs for attention-deficit hyperactivity disorder: clinical presentation, mechanisms of toxicity, and management. CNS Drugs 27(7): 531-543.



Terbeck, S. (2013) Why students bother taking Adderall: Measurement validity of self-reports. AJOB Neuroscience 4(1): 20-22.



Timmons, M. (2013) Moral Theory: An Introduction. Lanham, Md.: Rowman & Littlefield Publishers.



van Gilder Cooke, S. (2012, July 28) Equestrian Eventing: The Olympics’ Most Dangerous Sport? Time. Retrieved here.



Vrecko, S. (2013) Just How Cognitive is “Cognitive Enhancement”? On the Significance of Emotions in University Students’ Experiences with Study Drugs. AJOB Neuroscience 4(1): 4-12.



Want to cite this post?



Drewry, K. (2017). Have I Been Cheating? Reflections of an Equestrian Academic. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/06/have-i-been-cheating-reflections-of.html



Tuesday, June 6, 2017

The Neuroethics Blog Series on Black Mirror: Virtual Reality


By Hale Soloff




Hale is a Neuroscience PhD student at Emory University. He aims to integrate neuroethics investigations with his own research on human cognition. Hale is passionate about science education and public science communication, and is pursuing a career in teaching science. 





Humans in the 21st century have an intimate relationship with technology. We spend much of our lives being informed and entertained by screens. Technological advancements in science and medicine have helped and healed in ways we previously couldn’t dream of. But what unanticipated consequences may be lurking behind our rapid expansion into new technological territory? This question is continually explored in the British sci-fi TV series Black Mirror, which provides a glimpse into the not-so-distant future and warns us to be mindful of how we use our technology and how it can affect us in return. This piece is the first in a series of posts that will discuss ethical issues surrounding neuro-technologies featured in the show and compare how similar technologies are impacting us in the real world.



Black Mirror – Plot Summary 




Some of the neuro-technologies featured in Black Mirror at first seem marvelous and enticing, but the show repeatedly illustrates how abusing or misusing such technologies can lead to disturbing, and even catastrophic, consequences. This may seem scary enough, but what if the goal of a device were to intentionally frighten its user?




In the episode “Playtest” a man named Cooper volunteers to help a video game company test out a brand-new device, referred to as a “mushroom.” After being warned that using the device requires a small, reversible medical procedure, supposedly no more invasive than getting his ears pierced, Cooper signs a consent form and the mushroom is injected into the back of his head. The mushroom records electrical activity from his brain, uses intelligent software to determine what he fears the most, and then stimulates his brain with more electricity to make him see a “mental projection” of his fears. As an arachnophobe, Cooper first sees a spider crawling towards him that nobody else can see; in fact, the mental projections he sees are so convincing that he becomes skeptical of whether another human being, whom he can see and hear, is real or simply a projection. 







Image courtesy of Flickr.

Then Cooper’s mushroom device malfunctions. Despite being assured that it can only make him experience audio and visual stimuli and that nothing he sees can physically hurt him, Cooper feels pain when attacked by a knife-wielding projection. Apparently, this is because “data tendrils” from the mushroom’s neural net dug deeper into his brain and took root, causing him to feel physical pain when struck by the projection. Soon after, the neural net wipes Cooper’s memory, leaving him with no knowledge of himself or his loved ones. In a chilling end to the episode, an incoming call to Cooper’s cell phone interferes with the signal of the device, causing the mushroom to malfunction and kill Cooper by over-stimulating his brain.



The technology used in “Playtest”




The device that Cooper tests in this episode is an “interactive augmented reality system,” a chimera of three technologies that exist today. The first, virtual reality (VR), involves wearing a headset that blinds you to the outside world, instead placing you in a virtual 360° environment that you can observe and reach out to touch with wearable, glove-like controllers. VR has become a popular technology in the world of gaming because of the feeling of full immersion it gives the user. VR has even been used in both research on and rehabilitation of human cognitive processes; for instance, scientists at Emory University use immersive VR exposure therapy to help treat combat-related post-traumatic stress disorder (PTSD) in veterans. Through controlling how faithfully a virtual environment represents traumatic stimuli, individuals with trauma disorders and phobias can safely confront “triggering” scenarios while practicing coping methods. The second technology, augmented reality (AR), differs from VR in that it overlays a virtual image onto a real environment. For example, the popular AR game “Pokémon Go” allows users to observe Pokémon in their homes, playgrounds, and shopping malls through their phone’s camera.







Image courtesy of Wikimedia.

In Black Mirror, Cooper experiences a combination of these two technologies: using the mushroom was fully immersive like VR, but it also projected objects into his real-world environment like AR. This combination is made possible through a Brain-Computer Interface (BCI), the third real-life technology used in Black Mirror. BCIs are direct connections between a brain and a computer, meaning the user can control the computer with their voluntary cognition, and the computer can sometimes affect the user with electrical stimulation. The uses for BCI are extensive, from advanced prosthetic limbs that can be moved with the user’s concentration and intent, to computer-controlled deep brain stimulation (DBS) for the treatment of Parkinson’s disease and treatment-resistant depression. We are even on the verge of BCI-controlled video games, some of which will use electroencephalogram (EEG) electrodes to measure and interpret brain waves as the controller.
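As a rough illustration of the idea behind EEG-controlled games (a toy sketch on synthetic data, not any shipping product’s pipeline; real systems need filtering, artifact rejection, and per-user calibration), one common approach is to estimate power in a frequency band, say 8-12 Hz alpha, and trigger a game action when it crosses a threshold:

```python
import numpy as np

FS = 250                                 # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / FS)          # two seconds of fake single-channel EEG
eeg = 0.5 * np.random.randn(t.size) + 1.5 * np.sin(2 * np.pi * 10 * t)

def band_power(signal, fs, lo, hi):
    """Mean power spectral density inside a frequency band."""
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

alpha = band_power(eeg, FS, 8, 12)                       # strong: signal has 10 Hz
noise = band_power(np.random.randn(t.size), FS, 8, 12)   # baseline estimate
if alpha > 3 * noise:   # crude threshold in place of a trained classifier
    print("alpha power high: trigger the mapped in-game action")
```

Note how coarse the signal is: a single band-power number per window, nothing like the rich fear-decoding the mushroom performs.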



Do I need to be worried about having a “mushroom device” in my brain? 





Though the mushroom device in Black Mirror bears similarity to current technologies, it is important to consider the differences between what is presented in the media and what capabilities we have today. Both the mushroom and BCIs can be used to record the brain’s electrical activity while simultaneously stimulating the brain to affect its behavior. However, the mushroom in “Playtest” is inserted quickly and easily into Cooper’s brain, presumably by somebody with little or no medical training. BCIs that interface directly with the central nervous system, such as DBS or electrocorticography (ECoG), require an invasive surgical procedure performed by highly trained brain surgeons. And although the mushroom is advanced enough to determine Cooper’s fears and thoughts, our current ability to analyze and interpret brain activity does not allow for the degree of “mind-reading” exhibited in the show (contrary to the neuro-hype surrounding consumer BCIs). Neural activity recorded by a BCI under highly controlled conditions can be translated into meaningful but limited psychological information, such as predicting intentions slightly before they are acted upon and recognizing thought patterns that are distinct for different objects. The closest we’ve come to “mind-reading” is the neuroimaging work of Jack Gallant’s lab, but attempts at “mind-writing” images or words with BCIs have yet to be made.



Ethical issues featured in “Playtest” 







A DBS (deep brain stimulation) procedure. Image courtesy of Wikimedia.

The technologies described in “Playtest” give rise to a host of ethical concerns, one of the most salient being a violation of autonomy. Normally, clinicians and patients work together to determine the correct level of stimulation that a therapeutic device like DBS should deliver to the brain. A proposed future version, “closed-loop” DBS (on the horizon, but not yet employed in humans even experimentally), uses a computer algorithm to set stimulation levels based on current brain activity, with the goal of diminishing the need for external control by the user or clinician. This closed-loop design is how the mushroom in Cooper’s nervous system can first record his brain activity, then analyze it to determine his fears, and finally deliver a fearful experience to him using stimulation. To be clear, these technologies are not being developed to manipulate the kinds of complex sensory or perceptual images seen in “Playtest”; again, existing brain stimulation technology does not allow for the controlled, vivid hallucinations that Cooper experiences. However, some ethicists are exploring whether “closing the loop” on brain stimulation could lessen a user’s real or perceived agency, or the capacity of the individual to act independently. Allowing readjustments of stimulation to be decided by the algorithm, rather than consciously adjusted by the patient or clinician, may end up diminishing the user’s agency even in applications far less dramatic than “Playtest,” such as facilitating movement.
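To see why agency is at stake, it helps to look at the shape of the algorithm itself. The sketch below is a schematic of the closed-loop idea only (the biomarker, gain, and safety limits are invented numbers, not any actual DBS controller): the device measures a symptom-related signal, compares it to a target, and adjusts its own stimulation amplitude, with no decision left to the patient or clinician.

```python
def closed_loop_step(biomarker, amplitude, target=1.0, gain=0.1,
                     min_amp=0.0, max_amp=3.0):
    """One proportional-control update: stimulate more when the biomarker
    exceeds the target, less when it falls below, within hard safety limits."""
    error = biomarker - target
    new_amp = amplitude + gain * error
    return max(min_amp, min(max_amp, new_amp))

amp = 1.0
# e.g., successive readings of a hypothetical beta-band power biomarker
for reading in [1.8, 1.6, 1.2, 0.9]:
    amp = closed_loop_step(reading, amp)
    print(f"biomarker={reading:.1f} -> stimulation={amp:.2f} (arbitrary units)")
```

Every adjustment here happens inside the loop; that is precisely the design feature that worries ethicists concerned with real or perceived agency.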





Similarly, the full immersion experience facilitated by VR and AR in “Playtest” represents an infringement of the user’s autonomy. One of the core appeals of VR in gaming is the factor of immersion: rather than learning which button on the controller makes you grab an object or move the camera, you simply reach out and grab the object with your hand, or turn your head to observe more of your environment. This, however, leads us to the doorstep of a disturbing possibility—the inability to escape. I have listened to individuals playing horror games on VR devices exclaim, “Oh my gosh, it’s so much scarier because I can’t just look away.” They were not truly upset, because they knew that escape was as easy as taking off the headset, turning off the device, and leaving the room. But if VR capabilities are implanted into your brain, like those seen in this Black Mirror episode, there may be no escape from frightening, threatening, or even painful stimuli. While VR technology currently exhibits light, sound, and even touch through external manifestations, a BCI like the one used in “Playtest” directly “hijacks” your sensory system by causing you to experience stimuli that are not truly present. Black Mirror shows us a scenario that is only possible because of the unique features of BCI and VR: inescapable torture inflicted by an entertainment system.








Image courtesy of Airman Magazine.

Finally, BCIs give rise to a unique privacy issue: if the gaming company in “Playtest” misplaced Cooper’s data, potentially anyone could know his innermost fears and personal thoughts. If the mushroom device were capable of “mind-reading” his subconscious fears, intentions, and more, that information could easily be saved and sold to interested companies; a law enacted this April allows internet service providers and other companies to sell their customers’ personal data (like social media and search engine browsing habits) without consent. As discussed, our current understanding of brain data allows us to interpret limited information in a controlled environment, but as our ability to interpret brain signals improves (and as more data are aggregated), the issue of brain privacy may become more pressing. It is important that those involved in facilitating BCI understand and minimize the risks involved—engineers can design BCIs to be more safe and secure, clinicians can protect patients’ brain data and explain the risks of BCI to their patients, and policy-makers can carefully consider how to protect an individual’s right to his or her brain activity.




Conclusion – Invasive BCI in Gaming





Entertaining media like Black Mirror raise interesting questions about the ethics of technology, but when trying to answer these questions we need to be aware of the current state of the technology and separate fact from fiction. Engineers, neuroscientists, and ethicists are working together to design safer, noninvasive, and more ergonomic BCIs; if they succeed, the result would be better brain stimulation treatment for patients, safer BCI use for more individuals, and perhaps even the implementation of more advanced BCIs in gaming. Perhaps we could even create, as stated in the episode, “the most personal survival horror game in history…one that works out how to scare you using your own mind.”




Want to cite this post?



Soloff, H. (2017). The Neuroethics Blog Series on Black Mirror: Virtual Reality. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/06/the-neuroethics-blog-series-on-black.html