
Tuesday, March 27, 2018

Neuroprosthetics for Speech and Challenges in Informed Consent










Hannah Maslen is the Deputy Director of the Oxford Uehiro Centre for Practical Ethics, University of Oxford. She works on a wide range of topics in applied philosophy and ethics, from neuroethics to moral emotions and criminal justice. Hannah is Co-PI on BrainCom, a 5-year European project working towards the development of neural speech prostheses. Here, she leads the work package on ‘Ethics, Implants and Society’.  





Scientists across Europe are combining their expertise to work towards the development of neuroprosthetic devices that will restore or substitute speech in patients with severe communication impairments. The most ambitious application will be in patients with locked-in syndrome who have completely lost the ability to speak. Locked-in syndrome is a condition in which the patient is awake and retains mental capacity but cannot express himself or herself due to paralysis of the efferent motor pathways, which prevents speech and limb movements (except for some form of voluntary eye movement, usually up and down) (1).






BrainCom is a European Commission Horizon 2020 project that brings together engineers, neuroscientists, clinical researchers, and clinical practitioners to advance basic understanding of the dynamics and neural information processing of cortical speech networks, and to develop speech rehabilitation solutions using innovative brain-computer interfaces.





The basic idea behind the technology under development is that arrays of microelectrodes can be implanted onto the surface of an area of the brain involved in the production of speech. These electrodes would continuously record activity from the speech areas beneath them, and this activity would then be fed into a brain-computer interface, which processes and decodes the signals so that they can finally be externalized as synthesized speech (2).
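To make the record, decode, and synthesize chain described above more concrete, here is a minimal, purely illustrative sketch in Python. It is not BrainCom's actual software: the electrode and feature counts, the use of a simple linear decoder, and all function names are assumptions chosen only to show how the pieces fit together.

```python
import numpy as np

# Illustrative dimensions (assumed, not taken from the BrainCom project)
N_ELECTRODES = 64   # micro-electrodes on the cortical surface
N_ACOUSTIC = 25     # acoustic feature values produced per time step

rng = np.random.default_rng(0)
# A real system would use a decoder trained on paired neural/speech data;
# a random linear map stands in here so the sketch runs end to end.
decoder_weights = rng.normal(size=(N_ACOUSTIC, N_ELECTRODES))

def record_neural_activity(n_samples: int) -> np.ndarray:
    """Stand-in for continuous recording from the implanted electrode array."""
    return rng.normal(size=(n_samples, N_ELECTRODES))

def decode_to_acoustics(neural: np.ndarray) -> np.ndarray:
    """Decode neural samples into acoustic feature vectors (linear model assumed)."""
    return neural @ decoder_weights.T

def synthesize_speech(acoustics: np.ndarray) -> None:
    """Stand-in for a vocoder that would turn acoustic features into audible speech."""
    print(f"Synthesizing {acoustics.shape[0]} frames of speech...")

# One pass through the loop: record covert-speech activity, decode, externalize.
neural = record_neural_activity(n_samples=100)
synthesize_speech(decode_to_acoustics(neural))
```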







Image courtesy of Flickr.

One plausible recording site in the brain is the articulatory motor cortex – the part of the brain that controls the movement of the mouth, tongue, throat, etc. when talking. A user of the neuroprosthetic device would need to speak covertly – that is, clearly imagine herself speaking, like saying something loudly ‘in her head.’ The brain activity generated when people engage in covert speech approximates the brain activity generated when actually speaking out loud. This gives rise to the possibility of mapping the motor features of speech (which the brain activity represents or underlies) to their corresponding acoustic properties, so that speech can ultimately be produced synthetically.





Impaired communication and informed consent 





The research is still in its early days, and devices are not yet being trialed in patients with severe communication impairments. However, when the research and eventual application reaches the clinical population of intended users, there will be a number of challenges relating to obtaining the informed consent of those users. 





Obtaining informed consent is not only a matter of relaying a list of facts to a patient and acquiring her signature. To be fully informed, patients must have sufficient opportunity to ask questions and to discuss the course of action that will be in their best interests. This is often necessary for the patient’s full understanding of the materially relevant facts; as such, it is most often a prerequisite for autonomous decision-making. Further, since patients differ in their preferences and values, it should not be assumed that patients with similar clinical profiles would necessarily benefit in the same way from the same intervention. Even when statistically equally likely, risks of an intervention may be more significant for one patient than another, given individual differences in personal disposition, circumstances, and goals. 








Image courtesy of Flickr.

Underscoring the importance of the process of obtaining informed consent, those proposing the ‘liberal rationalist’ (3) and other models of doctor-patient decision-making have defended an approach whereby doctors, patients, and in some circumstances family members, engage in a rational discussion about which course of action is best for the patient, all things considered. Such approaches are liberal in the sense of being open to disagreement regarding the value or disvalue that should be accorded to the various risks and benefits of a procedure. They are rationalist in the sense that, although the importance of effects of an intervention may be open to individual evaluation, decision-making about whether to undergo an intervention in the context of this evaluation should be made on the basis of comprehensive factual information, and without errors of reasoning.





Neuroprosthetic challenges: talking about the device 





The ideal consent process outlined above explicitly requires discussion. Clearly, this requirement is going to generate a challenge when the patient or potential research participant has significant or even complete impairment in their capacity to communicate. How will a doctor be able to make sure the patient or potential research participant has understood the materially relevant facts and weighed up the risks and benefits as they pertain to her if she is not able to engage in discussion? 





Of course, this challenge is not unique to neuroprosthetics. There are relevant parallels with consenting aphasic patients for treatments. Although aphasia and communication impairment are not necessarily the same thing (aphasia is a condition typified by problems with verbal fluency, usually as a result of damage to the brain; patients who are locked-in often do not have damage to language areas of the brain), clinicians have had to find ways to facilitate discussion with aphasic patients to ensure that they have understood the materially relevant facts of a recommended treatment and have had the opportunity to discuss alternatives. The consent process for such patients may require greater involvement from family members, who take on the role of asking questions and offering an explanation of what the aphasic patient is trying to say (4).








Image courtesy of Pixabay.

Although similar approaches may be appropriate in the context of consent for neuroprosthetics for speech, the challenge will be particularly acute here, especially as the research and its application reach the target clinical population. This is a function, on the one hand, of the complexity of the mechanism of the device and, on the other, of the severity of the communicative impairment of the patient or research participant.





How the device works and interfaces directly with the user’s ‘thoughts’ may be difficult to grasp, as will the likely phenomenological experience of speaking through the device. Amongst the relevant challenges in delivering and discussing the information will be:


1. Whether the device’s benefits should be framed as restoring or as substituting for speech. Making sure the function of the device is not misrepresented has some parallels with avoiding the therapeutic misconception and will remain important even at the stage of clinical translation. Although it is not yet known how good these devices will be at continuously decoding and synthesising covert speech, if early versions are not able to render the user’s synthesised speech as fluent and articulate as her original speech, any such deficiencies must be clearly explained.




2. How to avoid what might be labelled the ‘voyeuristic misconception’ regarding how the device works. This phrase captures the potential worry that the device will read and externalise all the user’s thoughts indiscriminately. The fear that the device might permit others to peer into the user’s mind is understandable. However, a challenge in the development of the device is to ensure either that the device can discriminate between covert speech that the user intends to be externalised and private ‘thoughts’ with linguistic structure, or that some other mechanism of control is built in, such as a verification command. Of course, a misconception is only a misconception if it is not aligned with reality. How the device will operate and what, if any, risk of involuntary ‘speech’ remains will need careful explanation, both to avoid any misconception and to acknowledge any risks, if present.






These two dimensions of understanding the device and what it will and won’t do will require extensive discussion in order to ensure that consent is sufficiently informed. Researchers and clinicians will need to find ways to engage communicatively impaired patients or research participants in this discussion. 





Neuroprosthetic challenges: talking via the device 








Image courtesy of Flickr.

The above challenge is not unique to neuroprosthetic devices for speech, even if it is particularly acute in this context. A challenge more particular to these devices will be confronted in the case that users discuss treatment, or even end-of-life decisions, via the device. Surrogates are useful and even necessary in playing the role of discussant on the patient’s behalf when the patient cannot do so herself; nonetheless, it is always preferable to engage the patient herself as much as possible, since the surrogate will neither know all the questions the patient might want to ask nor interpret communicative efforts perfectly.





If a patient has been implanted with a neuroprosthesis that allows her to play a greater role in discussion and to communicate preferences and decisions, her input must be given precedence over the input of a surrogate. However, there will be challenges presented by the patient communicating through a neuroprosthesis. Intention is clearly necessary for consent to be autonomous. So, in addition to ensuring that the patient has understood all the materially relevant facts (the first challenge), those engaging with the patient regarding her decisions must also ensure that any decision is expressed voluntarily. In the context of patients communicating via a neuroprosthesis, we will be concerned both about:



1. the accuracy of the synthetic representation of the patient’s inner speech and  


2. whether what is represented (even if accurate) is intended by the patient as a statement of her reflectively endorsed preference or decision. 



The neuroprosthesis is likely to incorporate elements of artificial intelligence, which will serve to improve the continuity of the decoding and synthesis by predicting and correcting the input generated by the user’s brain activity. Whilst this will assist with fluency, it introduces an additional agent contributing to (although not determining) what is said. An interesting philosophical and psychological question will relate to the way in which ‘accuracy’ of speech should be conceived, given that natural speech often does not quite match what a speaker attempts to say and that speakers sometimes seem to find out what they think through speaking. Artificial intelligence that has a corrective and/or predictive function will add complexity to the question of whether the output is ‘accurate’.
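To see why a predictive component complicates judgements of ‘accuracy’, the hypothetical sketch below shows one common way a corrector could work: candidate words decoded from brain activity are re-scored against a simple language-model prior, so the word that is externalised blends what was decoded with what the model expects. The candidate words, probabilities, and log-linear weighting are invented for illustration and do not describe any device under development.

```python
import math

# Hypothetical decoder output after "I am very ...": candidates with confidences
decoder_candidates = {"hung": 0.40, "hungry": 0.35, "angry": 0.25}

# Hypothetical language-model prior over the same candidates
language_model_prior = {"hungry": 0.55, "angry": 0.40, "hung": 0.05}

def rescore(decoded: dict, prior: dict, lm_weight: float = 0.7) -> str:
    """Pick the candidate with the best log-linear mix of decoder and prior scores.

    A higher lm_weight gives the 'corrective' model more say in what is spoken.
    """
    def score(word: str) -> float:
        return (1 - lm_weight) * math.log(decoded[word]) + lm_weight * math.log(prior[word])
    return max(decoded, key=score)

print("Raw decoder favourite:", max(decoder_candidates, key=decoder_candidates.get))  # hung
print("Corrected output:     ", rescore(decoder_candidates, language_model_prior))    # hungry
```

In this toy case the correction overrides the decoder's best guess, which is precisely where the question of whose ‘speech’ the output represents becomes pressing.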








Image courtesy of Pixabay.

In terms of intention, even if there is a sense in which the synthetic output accurately represents the activity underlying the user’s covert speech, close attention will need to be paid to whether the user intended to express the decision or preference as her rationally endorsed ‘final say’ on the matter. Given the likely absence of accompanying tone of voice, facial expressions, and body language as cues to the speaker’s relationship to what they are saying, extra caution will be warranted. 





These challenges will not be insurmountable but will require thought and the establishment of good practice. The continued role of surrogates and/or additional participants in the discussion will be crucial, as will multiple layers of verification. Ultimately, however, the hope is that the devices under development will allow individuals who are otherwise precluded from engaging in discussions about what happens to them to regain the ability to lead this discussion and to participate more fully in their social worlds.







References 




1. Miller-Keane Encyclopedia and Dictionary of Medicine, Nursing, and Allied Health, Seventh Edition. (2003). Retrieved January 6 2018 from https://medical-dictionary.thefreedictionary.com/locked-in+syndrome 





2. Bocquelet, F., Hueber, T., Girin, L., Chabardès, S., & Yvert, B. (2016). Key considerations in designing a speech brain-computer interface. Journal of Physiology-Paris, 110(4), 392-401.




3. Savulescu, J. (1997). Liberal rationalism and medical decision-making. Bioethics, 11(2), 115-129.





4. Stein, J., & Brady Wagner, L. C. (2006). Is informed consent a “yes or no” response? Enhancing the shared decision-making process for persons with aphasia. Topics in stroke rehabilitation, 13(4), 42-46.






Want to cite this post?




Maslen, H. (2018). Neuroprosthetics for Speech and Challenges in Informed Consent. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/03/neuroprosthetics-for-speech-and.html

Tuesday, March 20, 2018

Downloading Happiness




By Sorab Arora







Sorab Arora is currently a Master’s in Public Health student at Emory University, specializing in Healthcare Management and Policy. He has researched health technology design and strategy focused on behavioral medicine, most recently at Northwestern University’s Center for Behavioral Intervention Technologies. Arora is a graduate of both the University of Chicago (Summer Business Scholar – 2017) and Grinnell College (2016), where he has bridged social entrepreneurship with mobile technologies and medical innovation. 





With median adult smartphone ownership rising to nearly 70% in advanced markets, individuals ranging from wealthy millennials to homeless youth have unprecedented access to mobile technologies (Poushter, 2016; Ben-Zeev et al., 2013). From “swiping” potential soulmates to ordering prescription glasses to one’s door, the proliferation of opportunities for immediate gratification through mobile applications only continues to grow. In what economists have now termed the “Fourth Industrial Revolution,” this period of integrated consumer technologies focuses on human-centered design and improved efficiency across global sectors (Schwab, 2017). In healthcare especially, mobile health (mHealth) platforms offer an innovative new element to how medicine can be conceptualized, delivered, and implemented. 






The melding of mHealth technologies with the field of psychotherapy holds tremendous promise. As Fitbits, Apple Watches, and the like have taken center stage as wearables to enhance wellness, the data collected from these sources offer a wealth of vital, longitudinal information. Predictive analytics allow healthcare providers (and consumers) to gain a more precise understanding of their health through personalized strategies based on past health trends (Siegel, 2013). Coupled with artificial intelligence and machine learning, predictive analytics can offer improved differentiation in care by closely analyzing biological and behavioral markers. The value in doing so is a greater understanding of healthcare patterns at both the individual and the societal level, driving more insightful strategies for improvement. In other words, mobile technologies have opened the door to data-mining metrics that were once nearly impossible to quantify, including behaviors, cognitions, and emotions (Mohr, Zhang & Schueller, 2017).
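As a rough, hedged illustration of the kind of predictive analytics described above, the sketch below fits a simple classifier that predicts a binary "low mood tomorrow" label from wearable-style features (sleep, steps, screen time). The data are synthetic, and the feature set, label, and model choice are assumptions for illustration only, not a validated clinical approach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500

# Synthetic wearable-style features: hours of sleep, daily steps, hours of screen time
sleep = rng.normal(7, 1.5, n)
steps = rng.normal(6000, 2500, n)
screen = rng.normal(5, 2, n)
X = np.column_stack([sleep, steps, screen])

# Synthetic "low mood tomorrow" label, loosely tied to less sleep and more screen time
risk = -0.8 * (sleep - 7) + 0.5 * (screen - 5) - 0.0002 * (steps - 6000)
y = (risk + rng.normal(0, 1, n) > 0).astype(int)

# Fit on one portion of the history, then check predictions on held-out days
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("Held-out accuracy:", round(model.score(X_test, y_test), 2))
print("Feature weights (sleep, steps, screen):", model.coef_.round(3))
```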








Image courtesy of Wikimedia Commons.

As it currently stands, smartphones have access to a tremendous amount of personal data, ranging from location, movement, circadian rhythms, exercise, and diet to ambient light. Applying analytics to the world of mental health can have dramatic impacts, not as a replacement for current treatments, but by offering more accurate indicators of what and how to treat. Telemedicine, the use of technology to deliver healthcare remotely, has also played a key role in psychiatry by facilitating innovative symptom tracking and communication between patients and providers (Mermelstein et al., 2017). Aligned with these strides in telepsychiatry, recent studies indicate efficacy in correlating behavioral markers with clinical disorders in an effort to more precisely pinpoint once-overlooked symptomatology (Harari et al., 2016).




Implementing smartphone usage in clinical settings has been a recent focus for The National Center for Telehealth & Technology through its efforts to address mood and anxiety disorders, especially for veteran populations (Luxton et al., 2011). The move toward everyday individuals using and benefiting from similar mobile apps, however, comes with its own set of legal, ethical, and medical concerns.





Koko is a crowd-sourced platform, grounded in cognitive-behavioral therapy, for providing positive, constructive feedback that helps users reframe challenging events, from teen bullying to work stress, in an effort to build resilience. Woebot helps facilitate conversation and track moods through quantitative and qualitative measures utilizing machine learning. Headspace is a popular mobile app across Android and iPhone systems that teaches meditation and mindfulness through short, daily modules. These applications just scratch the surface of how healthtech innovation and human-centered interactions have fused in recent years. But with this plethora of opportunity comes a slew of related questions. What are the related ethical concerns, and how is privacy safeguarded? How would one measure adherence and incentivize actual usage of these apps? Would these new technologies yield clinically significant improvements in psychiatric populations? All these questions (and more) beg to be answered, but the issue of efficacy takes center stage.








Image courtesy of Wikimedia Commons.

As mental health and neurotechnologies are brought to market, there have been minimal barriers to entry in creating mobile apps with apparent face validity. From ideation stages to actual product launch, many healthtech designs are relatively unregulated and untested regarding their actual validity as medical devices or supplements (Mohr, Zhang & Schueller, 2017). Because of this lack of quality control, individuals may be managing their stress and mental health disorders in ways that do more harm than good.





Calling for evidence-based mental health apps, and for screening related technologies against more rigorous standards, paves the path toward clinically significant, long-term improvements for at-risk patients (Lui, Marcus & Barry, 2017). While resources like PsyberGuide evaluations, the American Psychiatric Association's (APA) App Evaluation Model, and similar criteria have helped equip individuals with skills to discern between mental health apps by providing holistic evaluation criteria, a fundamental issue remains. Too many apps lack the evidence of efficacy needed to be hailed as breakthroughs in the current climate of healthtech design and innovation, especially in the fields of mental health and neuroscience.





With privacy concerns a major player in healthcare data storage, requirements for novel healthtech apps, wearables, and software go beyond simply achieving gold standards for randomized clinical trials or patient satisfaction. These technologies handle highly sensitive personal information, making bioethical and legal considerations key factors in data storage and analysis. Concerns over GPS and raw audio data already introduce a design challenge for individuals who cannot allow these systems to run in workplace settings or confidential meetings, and they are reminiscent of "Big Brother" collecting too much information about daily activities (Klasnja et al., 2009).





As the ability to collect and synthesize more intimate data emerges, the process of ensuring the security of data used by deep learning software must draw on psychologists collaborating effectively with colleagues across healthcare, computer science, and engineering. If these tools are used as a medical technology supplement to face-to-face therapy or psychotherapeutic interventions, compliant with the Health Information Technology for Economic and Clinical Health Act (HITECH Act), then ensuring patient confidentiality and privacy becomes the responsibility of more than just the provider (Luxton et al., 2011).








Image courtesy of Wikimedia Commons.

As wellness-based health-tech products are launched to market, evaluating their efficacy from a more rigorous clinical and legal standpoint becomes crucial. Because advanced technologies have significantly altered daily interactions at a personal and professional level, leveraging human-computer interaction in the field of behavioral medicine offers tremendous potential for enhancing short- and long-term treatment strategies. Tracking emotional states through novel frameworks can therefore serve as a tool for "downloading" a healthier state of mind, given adherence to these applications as if they were tangible medication itself. The potential benefits of improved patient-provider relationships, decreased per capita cost, and increased access to care make discussing cross-disciplinary strategy, limitations, and directions invaluable with regard to emerging neurotechnologies.





References

 

Ben-Zeev, D., Davis, K. E., Kaiser, S., Krzsos, I., & Drake, R. E. (2013). Mobile technologies among people with serious mental illness: opportunities for future services. Administration and Policy in Mental Health and Mental Health Services Research, 40(4), 340-343. 





Harari, G. M., Lane, N. D., Wang, R., Crosier, B. S., Campbell, A. T., & Gosling, S. D. (2016). Using smartphones to collect behavioral data in psychological science: Opportunities, practical considerations, and challenges. Perspectives on Psychological Science, 11(6), 838-854. 





Klasnja, P., Consolvo, S., Choudhury, T., Beckwith, R., & Hightower, J. (2009). Exploring privacy concerns about personal sensing. Pervasive Computing, 176-183. 





Lui, J. H., Marcus, D. K., & Barry, C. T. (2017). Evidence-based apps? A review of mental health mobile applications in a psychotherapy context. Professional Psychology: Research and Practice, 48(3), 199. 





Luxton, D. D., McCann, R. A., Bush, N. E., Mishkind, M. C., & Reger, G. M. (2011). mHealth for mental health: Integrating smartphone technology in behavioral healthcare. Professional Psychology: Research and Practice, 42(6), 505. 





Madan, A., Cebrian, M., Lazer, D., & Pentland, A. (2010, September). Social sensing for epidemiological behavior change. In Proceedings of the 12th ACM international conference on Ubiquitous computing (pp. 291-300). ACM. 





Mermelstein, H., Guzman, E., Rabinowitz, T., Krupinski, E., & Hilty, D. (2017). The Application of technology to health: The evolution of telephone to telemedicine and telepsychiatry: A historical review and look at human factors. Journal of Technology in Behavioral Science, 1-16. 





Mohr, D. C., Zhang, M., & Schueller, S. M. (2017). Personal sensing: Understanding mental health using ubiquitous sensors and machine learning. Annual Review of Clinical Psychology, 13, 23-47. 





Poushter, J. (2016). Smartphone ownership and internet usage continues to climb in emerging economies. Pew Research Center, 22. 





Schwab, K. (2017). The fourth industrial revolution. Crown Business. 





Siegel, E. (2013). Predictive analytics. Hoboken: Wiley.






Want to cite this post?




Arora, S. (2018). Downloading Happiness. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/03/downloading-happiness.html



Tuesday, March 13, 2018

The Brain In Context





By Sarah W. Denton







Sarah W. Denton is a research assistant with the Science and Technology Innovation Program at the Wilson Center. Denton is also a research assistant with the Institute for Philosophy and Public Policy at George Mason University. Her research primarily focuses on ethical and governance implications for emerging technologies such as artificial intelligence, neurotechnology, gene-editing technology, and pharmaceuticals. 




Tim Brown, a University of Washington PhD student and research assistant with the Center for Sensorimotor Neural Engineering's (CSNE) Neuroethics Thrust, introduced the session titled "The Brain in Context" at the International Neuroethics Society's 2017 Annual Meeting; the session was moderated by Husseini Manji, Janssen Global Therapeutic Neuroscience Area Head. This session provided a multidisciplinary view of the challenges we face today in understanding the context of lived experiences and how our brains and our environments shape one another. Getting at the heart of the context in which our brains develop and grow may help us to reduce stigma by increasing our understanding of how our environments impact our brains in myriad ways.





Socioeconomic Status and the Brain







Martha Farah, Director of the Center for Neuroscience & Society at the University of Pennsylvania, kicked off the panel discussion by speaking about her recent research on the relationship between socioeconomic status (SES) and the brain. The factors affecting the brain not only arise from our physical bodies but also include our social environments [1]. Specifically, Farah has focused her attention on socioeconomic status and how it relates to everything from life expectancy to education to income – all of which are inherently connected to the context and the environments in which our brains develop.





The way the brain develops is a causal pathway to a variety of outcomes. For instance, there is a surprisingly strong relationship between cognitive ability, as measured by IQ and school achievement, and SES [2]. Farah’s lab performed three studies that aimed to characterize SES disparities in terms of cognitive neuroscience’s model of the mind, rather than through intelligence and standardized test scores [3,4,5]. Cognitive neuroscientists employ the ‘information processing’ view of the mind, which is a fundamental construct of cognitive psychology that “refers to the rule-governed transformation of [both unconscious and conscious] mental representations” (e.g., explicit perception, implicit learning, implicit memory) [6]. This view of the mind appeals to computational methods in both cognitive psychology and neuroscience to understand the molecular mechanisms implicated in information processing [7].








Developed from a slide shown during Farah’s panel discussion, “Socioeconomic Status and the Brain,” at the 2017 International Neuroethics Society Annual Meeting on November 10, 2017, at the American Association for the Advancement of Science (AAAS) building in Washington, D.C.

Farah’s findings suggest that the most pronounced SES-related disparities were in executive function, associated with the prefrontal cortex, and in declarative memory, associated with the hippocampus. The brain is usually discussed in a descriptive and mechanistic way, but this conception may be unhelpful. Although there are currently no unique implications, research moving towards a more illustrative and actionable understanding of the brain in context is adding to the weight of evidence that our environment, including SES, has profound effects on our brains. Thus, neuroethics and neuroscience policy are relevant precisely because this work increases the weight of evidence. The end goal of Farah’s research program is to understand poverty using insight from neuroscience in order to help “break the cycle” and guide future policy decisions.






Prenatal Programming of Human Fetal Brain Development 





The second panelist, Moriah Thomason, Director of the Perinatal Neural Connectivity Unit of the Perinatology Research Branch at the Detroit Medical Center and Wayne State University School of Medicine, built upon this discussion and defined the first context of our brain – the womb. Her research centers on prenatal programming of human fetal brain development and has found that alterations in brain development in utero have significant cognitive effects.





Earlier this year, Thomason published a study in Scientific Reports that suggested differences in how certain brain regions communicate with each other in fetuses that were later born prematurely when compared to fetuses that were carried to term [8].





Thomason’s research team used fMRI to determine which brain regions show synchronized activity, which suggests that those regions are well connected and share information [9]. The brain in utero is essentially in a state of becoming and sets the stage for our future abilities even before we take our first breaths outside of the womb. For instance, a mother experiencing high levels of stress seems to imprint this stress on the fetal brain [10]. This fetal programming affects the functional connectivity in the fetal brain prior to birth. Her “Prenatal Imaging of Neural Connectivity (PINC)” study has found that a prenatal stress score (combining depression, perceived stress, satisfaction with life, and anxiety) is correlated with fetal brain connectivity in several notable brain areas, including three subregions of the cerebellum.
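For readers unfamiliar with how "synchronized activity" is usually quantified, the sketch below shows the standard correlation-based approach to functional connectivity: each brain region contributes a time series of fMRI signal, and the pairwise correlations between those time series form a connectivity matrix. This is a generic illustration on synthetic data, not Thomason's actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_timepoints = 4, 200

# Synthetic fMRI signal for four regions of interest; regions 0 and 1 share a
# common driving signal, so they should emerge as strongly "connected".
shared = rng.normal(size=n_timepoints)
signals = rng.normal(size=(n_regions, n_timepoints))
signals[0] += 2 * shared
signals[1] += 2 * shared

# Functional connectivity matrix: pairwise Pearson correlations between regions
connectivity = np.corrcoef(signals)
print(np.round(connectivity, 2))
# The (0, 1) entry is high relative to the rest, marking a well-connected pair.
```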








Image courtesy of Pexels.

One implication of Thomason’s research is that we no longer need to limit the brain to a postnatal context: neural connectivity begins prior to birth. This suggests that prenatal brain development is intimately tied to the mother’s environment and psychological and physiological state, which may have a wide range of implications that have yet to be explored. This is just the beginning for Thomason and prenatal neuro-connectivity research, and I am eager to see neuroethicists explore the implications of the brain in the context of the womb.





Do Brains Matter Using Screens?





The final panelist, Hervé Chneiweiss, Research Director at École des Neurosciences Paris Île-de-France, moved us from the brain in the context of the womb to the brain in the context of our increasing use of technology – particularly screens like those found in our phones, televisions, and tablets. The social context is perhaps the most important while we learn; yet our increasing reliance on screens as an educational tool may hinder our ability to learn how to interact with others in our physical environments [see 11,12,13].





In this context, neuro-education has evolved from two-dimensional to five-dimensional, but now we are moving back to 2D screens. Moreover, there is a correlation between excessive screen time and the development of psychiatric disorders, lack of sleep, and impaired cognition [14]. Beyond the potential cognitive effects of excessive screen time, Chneiweiss is also concerned about the marketing of attention, i.e., subjection to excessive screen time in the workplace, and about nonmaleficence in advertising the educational benefits of brain-training apps.







On the latter, Chneiweiss is particularly concerned about the vague educational-benefit claims made by many apps directed at vulnerable populations like children and seniors [15]. The democratization of screens has created two new kinds of pathology: nomophobia, the phobia of being without a phone, and FOMO, the fear of missing out or of being disconnected from the social network. While these characterizations are a bit tongue-in-cheek, they highlight real problems that can significantly affect our cognitive abilities.






Image courtesy of Pixabay.

As a general rule of thumb, owning a console or tablet presents more risks than benefits for children under the age of six, including insomnia and impaired social-skill development [16]. But by the time they reach their teenage years, certain action-oriented games can indeed improve cognitive abilities such as visual attention and decision-making [17]. To address this discrepancy, we must educate children and their parents on how their brains work and how screens affect their brain functions.




Conclusion



All three panelists presented neuroscience research in the social context. Martha Farah’s presentation showed how social and other environmental factors, like income, can have significant effects on brain development. Moriah Thomason’s presentation of her research went even farther – connecting stress levels of mothers to prenatal brain development. Finally, Hervé Chneiweiss spoke on how the use of screens, like those found in television sets and iPhones, can not only affect child and adolescent brain development but can also affect how they interact in the social environments around them.
The primary takeaway from this session is that our brains do not develop in a neuropsychiatric vacuum: our social and cultural environments can have significant implications for neuroscience. In the Q&A after the presentations, I found it of particular interest that each panelist agreed that the social context is the most important context when it comes to understanding the brain and conducting neuroscientific research.

Now, as we move forward, we should approach neuroscience research and its findings in the context of our social environments if we are to create a more holistic understanding of the brain.




References






[1] M. Farah. 2012. “Neuroethics: The Ethical, Legal, and Societal Impact of Neuroscience,” The Annual Review of Psychology: University of Pennsylvania, 63: pp. 571-91 [https://neuroethics.upenn.edu/wp-content/uploads/2015/06/farah-Neuroethics-The-Ethical-Legal-and-Societal-Impact-of-Neuroscience.pdf ]; B. Avants, et al. 2012. “Early childhood environment predicts frontal and temporal cortical thickness in the young adult brain,” presentation at The Society for Neuroscience 2012 Meeting, abstract can be found here: [http://www.abstractsonline.com/Plan/ViewAbstract.aspx?sKey=734b1ccd-cfcf-4394-a945-083ca58f8033&cKey=7b3e8587-f590-4d94-ae3f-e050d52e8488&mKey=%7b70007181-01C9-4DE9-A0A2-EEBFA14CD9F1%7d]; M. Mariani. 2017. “The neuroscience of inequality: does poverty show up in children’s brains?” The Guardian, (13 July) [https://www.theguardian.com/inequality/2017/jul/13/neuroscience-inequality-does-poverty-show-up-in-childrens-brains].







[2] Martha Farah, Socioeconomic Status and Brain. University of Pennsylvania, Center for Neuroscience & Society. [https://neuroethics.upenn.edu/martha-j-farah-phd/research/socioeconomic-status-and-brain/].









[3] K. Nobel, M.F. Norman, and M. Farah. 2005. “Neurocognitive correlates of socioeconomic status in kindergarten children,” Developmental Science, 8(1): pp. 74-87. [https://neuroethics.upenn.edu/wp-content/uploads/2015/06/Development-kindergarten.pdf].





[4] M. Farah, et. al. 2006. “Childhood poverty: Specific associations with neurocognitive development,” Brain Research, 1110: pp. 166-174. [https://neuroethics.upenn.edu/wp-content/uploads/2015/06/Development-povertyassociation.pdf ].





[5] K. Noble, B. McCandliss, and M. Farah. 2007. “Socioeconomic gradients predict individual differences in neurocognitive abilities,” Developmental Science, 10(4): pp. 464-480. [https://neuroethics.upenn.edu/wp-content/uploads/2015/06/Development-gradiants.pdf]









[6] D. David, M. Miclea, and A. Opre 2004. “The Information-Processing Approach to the Human Mind: Basics and Beyond,” Journal of Clinical Psychology, 60(4): pp. 355,357. [https://www.ncbi.nlm.nih.gov/pubmed/15022267].









[7] "The Philosophy of Neuroscience" The Stanford Encyclopedia of Philosophy, Chapter 6: A Result of the Co-Evolutionary Research Ideology - Cognitive and Computational Neuroscience. 2010. [https://plato.stanford.edu/entries/neuroscience/#ResCoEvoResIdeCogComNeu].










[8] M. Thomason et. al. 2017. “Weak functional connectivity in the human fetal brain prior to preterm birth,” Scientific Reports, 7(39286). [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5221666/]. Of course, these findings are preliminary, but Thomason is enthusiastic and plans to continue this research with larger sample sizes.









[9] G. Miller. 2017. “Pioneering study images in fetal brains,” Science Magazine, (9 January). [http://www.sciencemag.org/news/2017/01/pioneering-study-images-activity-fetal-brains].









[10] M. Thomason et. al. 2017. “Weak functional connectivity in the human fetal brain prior to preterm birth,” Scientific Reports, 7(39286). [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5221666/].









[11] J.E. Brody. 2015. “Screen Addiction is Taking a Toll on Children,” The New York Times (6 July) [https://well.blogs.nytimes.com/2015/07/06/screen-addiction-is-taking-a-toll-on-children/]





[12] USC Center for Work and Family Life, “Sleep Deprivation in the Age of Electronics,” [http://cwfl.usc.edu/wellness/sleephandouts/Sleep_Deprivation_in_the_Age_of_Electronics-CWFL.pdf]





[13] G.S. Goldfield, et al., “Screen time is associated with depressive symptomatology among obese adolescents: a HEARTY study,” European Journal of Pediatrics, v. 175(7): pp. 909-919 (July) [https://link.springer.com/article/10.1007/s00431-016-2720-z].









[14] P. Reany. 2011. “Not Getting Enough Sleep? Turn off the Technology,” Reuters (7 March) [https://www.reuters.com/article/us-sleep-technology/not-getting-enough-sleep-turn-off-the-technology-idUSTRE7260RH20110307].









[15] R. Robbins. 2016. “U.S. Cracking Down on ‘Brain Training’ Games,” Scientific American, STAT (6 September) [https://www.scientificamerican.com/article/u-s-cracking-down-on-brain-training-games/]; E. Yong. 2016. “The Weak Evidence Behind Brain-Training Games,” The Atlantic (3 October) [https://www.theatlantic.com/science/archive/2016/10/the-weak-evidence-behind-brain-training-games/502559/].









[16] K. Subrahmanyam, et al. 2000. “The Impact of Home Computer Use on Children’s Activities and Development,” The Future of Children, (Fall/Winter): Princeton University [https://www.princeton.edu/futureofchildren/publications/docs/10_02_05.pdf].









[17] I. Granic, et al. 2014. “The Benefits of Playing Video Games,” American Psychologist, (January) [https://www.apa.org/pubs/journals/releases/amp-a0034857.pdf]; D. Bavelier, et al. 2011. “Brains on video games,” Nature Reviews Neuroscience, 12: pp. 763-768 (18 November) [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4633025/]. 







Want to cite this post?




Denton, S. (2018). The Brain In Context. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/03/the-brain-in-context.html

Tuesday, March 6, 2018

Practical and Ethical Considerations in Consciousness Restoration




By Tabitha Moses




Tabitha Moses is a second-year MD/PhD (Translational Neuroscience) Candidate at Wayne State University School of Medicine. She earned a BA in Cognitive Science and Philosophy and an MS in Biotechnology from The Johns Hopkins University. Her research focuses on substance use, mental illness, and emerging neurotechnologies. Her current interests in neuroethics include the concepts of treatment and enhancement, how these relate to our use of new technologies, and how we define disability.




What does it mean to be conscious? In Arthur Caplan’s plenary session at the 2017 International Neuroethics Society annual meeting (Neuromodulation of the Dead, Persistent Vegetative State, and Minimally Conscious), he explored this question and how the answers may impact research and medicine. 




Concerns about the capacity for consent and what defines true consent demand conversation. Recently, for instance, the widely shared story of a man with a do-not-resuscitate tattoo sparked discussion about the ways in which a person is able to provide consent when unconscious. This is a hard question to answer, and first we must understand the types of consciousness and how they are currently defined. Brain death is an irreversible, total loss of brain function with a complete loss of consciousness and reflexive behavior (1). The vegetative state (also referred to as unresponsive wakefulness syndrome (2)) is described as a state wherein the person is not brain dead but also does not demonstrate any awareness. People who are minimally conscious may appear to be in a vegetative state but, when tested, demonstrate an awareness of self and others. Minimally conscious states are the basis for the recent discoveries of communication through MRI with people who had been thought to be in vegetative states (3,4). Full consciousness is perhaps the most difficult to define. It is a topic that has been long debated by philosophers and scientists; however, in medicine, to be fully conscious is most frequently defined as being aware of oneself and one’s surroundings and having the ability to respond to stimuli (5). This is often measured in healthcare settings using the Glasgow Coma Scale (GCS), which rates patients on eye opening, motor responsiveness, and verbal responsiveness. Based on our present definitions and understanding, while brain death is currently a permanent, irreparable state, it is possible for people who are in a vegetative state to transition into a higher level of consciousness (6).
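As a quick illustration of how the GCS mentioned above works, the sketch below sums its three component scores (eye opening 1-4, verbal response 1-5, motor response 1-6) into a total between 3 and 15, with lower totals indicating more severely impaired consciousness. The function is only an illustration of the scoring arithmetic, not a clinical tool, and the severity label in the comment reflects a common convention rather than this post's references.

```python
def glasgow_coma_scale(eye: int, verbal: int, motor: int) -> int:
    """Return the GCS total from its three component scores.

    eye: 1-4, verbal: 1-5, motor: 1-6; the total ranges from 3 to 15.
    """
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("Component score out of range")
    return eye + verbal + motor

# Example: eyes open to speech (3), confused conversation (4), obeys commands (6)
print(glasgow_coma_scale(eye=3, verbal=4, motor=6))  # 13, conventionally read as mild impairment (13-15)
```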








Image courtesy of Pixabay.

We do not have a good way to measure consciousness objectively (7). Caplan argued that until we can understand both the science and the ethics of this problem, we should not move forward with consciousness-altering technologies such as Deep Brain Stimulation (DBS) and other emerging technologies. 





New research is aiming to bring consciousness back to those who were once viewed as brain-dead, but what does this really mean? These technologies are currently unable to restore an unconscious person’s former self; rather, they transition the person into a minimally-conscious state where they are then fully aware of their functional losses. Caplan provided the example of Guillaume T (GT), a man who was recently brought out of a 20-year vegetative state through vagus nerve stimulation (8). Although this story made headlines, it was not the medical miracle that most assumed. After the procedure, GT was minimally conscious, able to move his eyes on command and lift his head, but otherwise completely trapped. Having regained a degree of awareness from having none, he died two weeks later. For Caplan, this is one of the most egregious acts in the history of research. This is an instance where success was worse than failure. 





Caplan also addressed the issue of consent in consciousness research. This concern is becoming increasingly real, as one company is now claiming to offer a way to reverse brain death (9). Given the many risks of attempting to alter consciousness in this way (most notably, that of being trapped in a minimally conscious state), we must develop a new understanding of who can consent for these trials and treatments and of who is responsible for the outcome. The brain-dead patient clearly cannot provide consent, and without good definitions and measurements of consciousness, it is hard to even conceive of what it would mean to provide families with truly informed consent. A family told that their loved one may regain consciousness is likely to imagine a far different type of consciousness than the minimally conscious state that the researcher envisions. 





So, what happens next? If the research works, a patient may now be in a state of full awareness but still necessitate full-time, total care. The family, who might have once been content with withdrawing treatment from a person who was considered to have no consciousness, would now likely be unable to do so. The resultant emotional and financial burden to both patient and family would be astronomical. 








Image courtesy of Wikimedia Commons.

Caplan reminded us that our understanding of personhood and identity will also have to change. Currently, a human subject in research is a living person. If we are to include those declared brain dead in research with the intent of reviving their consciousness, we need to consider how this impacts our understanding of a human subject. Furthermore, we must consider the potential destruction of personhood. By altering the brain in such a significant manner, we are changing a person’s identity and we do not currently know the ethical or practical impacts of these changes. 





Making meaningful progress with these technologies will hinge on the many ethical considerations Caplan outlined. Researchers will need to be aware of the risks of providing false hope; they will also need to provide greater clarity about the definitions of consciousness and their expectations for the intervention. There would need to be strict regulations as to who can carry out this type of work as well as clear definitions regarding what constitutes failure and where liability falls for failures. This work may provide a novel opportunity for a new kind of advance directive for consciousness research after brain death. This would allow the patient herself the ability to decide whether or not to be included in these studies at a time when she can be fully informed about the potential implications. Finally, there need to be clear media policies for this type of work, and scientists must speak out against the bad science in this field so that patients are not misled into paying for treatments that may have catastrophic outcomes. These are just a few of many concerns that need substantial consideration before we, as a scientific community, strive to make greater headway in the field of consciousness research. 







References 





1. Goila A, Pawar M. The diagnosis of brain death. Indian J Crit Care Med [Internet]. 2009;13(1):7. Available from: http://www.ijccm.org/text.asp?2009/13/1/7/53108 









2. Laureys S, Celesia GG, Cohadon F, Lavrijsen J, León-Carrión J, Sannita WG, et al. Unresponsive wakefulness syndrome: a new name for the vegetative state or apallic syndrome. BMC Med [Internet]. BioMed Central; 2010 Dec 1 [cited 2018 Jan 14];8(1):68. Available from: http://bmcmedicine.biomedcentral.com/articles/10.1186/1741-7015-8-68 









3. Monti MM, Vanhaudenhuyse A, Coleman MR, Boly M, Pickard JD, Tshibanda L, et al. Willful modulation of brain activity in disorders of consciousness. N Engl J Med [Internet]. 2010 Feb 18;362(7):579–89. Available from: http://www.ncbi.nlm.nih.gov/pubmed/20130250 









4. Fiacconi CM, Owen AM. Using facial electromyography to detect preserved emotional processing in disorders of consciousness: A proof-of-principle study. Clin Neurophysiol [Internet]. 2016 Sep;127(9):3000–6. Available from: http://linkinghub.elsevier.com/retrieve/pii/S1388245716304357 









5. Tindall SC. Level of Consciousness. In: Hall WD, Hurst JW, Walker H, editors. Clinical Methods: The History, Physical, and Laboratory Examinations [Internet]. 3rd ed. Boston: Butterworths; 1990. Available from: https://www.ncbi.nlm.nih.gov/books/NBK380/ 









6. Tomaiuolo F, Cecchetti L, Gibson RM, Logi F, Owen AM, Malasoma F, et al. Progression from Vegetative to Minimally Conscious State Is Associated with Changes in Brain Neural Response to Passive Tasks: A Longitudinal Single-Case Functional MRI Study. J Int Neuropsychol Soc [Internet]. 2016 Jul 6;22(6):620–30. Available from: http://www.journals.cambridge.org/abstract_S1355617716000485 









7. Giacino JT, Fins JJ, Laureys S, Schiff ND. Disorders of consciousness after acquired brain injury: the state of the science. Nat Rev Neurol [Internet]. 2014 Jan 28;10(2):99–114. Available from: http://www.nature.com/doifinder/10.1038/nrneurol.2013.279 









8. Corazzol M, Lio G, Lefevre A, Deiana G, Tell L, André-Obadia N, et al. Restoring consciousness with vagus nerve stimulation. Curr Biol [Internet]. 2017 Sep;27(18):R994–6. Available from: http://linkinghub.elsevier.com/retrieve/pii/S0960982217309648 









9. Johnson LSM. Reversing Brain Death: An Immodest Proposal [Internet]. Impact Ethics. 2016 [cited 2018 Jan 14]. Available from: https://impactethics.ca/2016/05/24/reversing-brain-death-an-immodest-proposal/ 







Want to cite this post?




Moses, T. (2018). Practical and Ethical Considerations in Consciousness Restoration. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/03/practical-and-ethical-considerations-in.html

Thursday, March 1, 2018

Black Mirror in the Rear-View Mirror: An Interview with the Authors







Image courtesy of Wikimedia Commons.




The Neuroethics Blog hosted a special series on Black Mirror over the past year, originally coinciding with the release of its third season on Netflix. Black Mirror is noted for its telling of profoundly human stories in worlds shaped by current or future technologies. Somnath Das, now a medical student at Thomas Jefferson University, founded the Blog’s series on Black Mirror. Previous posts covered "Be Right Back", "The Entire History of You", "Playtest", "San Junipero", "Men Against Fire", "White Bear", and "White Christmas". With Season 4 released at the end of December 2017, Somnath reconvened with contributing authors Nathan Ahlgrim, Sunidhi Ramesh, Hale Soloff, and Yunmiao Wang to review the new episodes and discuss the common neuroethical threads that pervade Black Mirror.


The discussion has been edited for clarity and conciseness. 











*SPOILER ALERT* - The following contains plot spoilers for the Netflix television series Black Mirror.



 




Somnath: My first question is: if and when we collect neural data on people, who really owns it? In the case of "Arkangel", the parents can access and even filter it. The government stepped in to regulate it, but it was owned by a private company. Who really owns that brain data? Who can control it? Do the people own that neural data, or do the companies?









 

Sunidhi: An interesting way to think about this is to think about your current medical records. Who owns your blood test results? Neural data is physical data. It's the same thing, just extending it to the brain. Companies owning that is a serious problem because there are always private interests that can manifest themselves in dangerous ways. The data should be owned by the person whose data it is. 







 

Yunmiao: I feel that, if that’s the case, we should not upload neural data at all. It can easily be misused by others. For example, Apple announced that it is going to transfer its Chinese iCloud operation to a state-owned company on February 28th, 2018. People might worry about who will have access to their personal data. Either a government or a private agency could potentially misuse the information for its own interests. Russell Poldrack and Krzysztof J. Gorgolewski have suggested the advantages of sharing neuroimaging data. For example, it could maximize the scientific contribution, improve reproducibility, and promote new questions. “Big data” is a trendy phrase, and its broad application has shown a promising future for various fields. However, should the potential benefits of data sharing, whether it is neural data or general personal data, outweigh the importance of ownership? Despite the ethical considerations around privacy, there are also upsides to data sharing, especially in a scientific setting.





 

Nathan: Let’s consider the more fantastical technology. Even if you willingly give up or sell your neural data through a very thorough informed consent procedure, if there is some sort of neuro-emulation, you have a digital self. There is no control after you make that transfer of ownership. The lack of control is why it is hard for even the most libertarian of thinkers to endorse voluntary slavery. We balk at that transfer of personal ownership. And I think for something as detailed as neural data, it would make sense for it to follow the same norm.









Somnath: My next question with respect to brain data and privacy is more about public opinion and how ethicists respond to public opinion. With Google Glass, we saw that many people were really uncomfortable with brain-computer interfaces (BCIs) being integrated into their lives. There were two issues here. One issue was that people didn't want to have random people wearing this glass and taking photos or videos of them, which is a pretty obvious argument. And the second argument was that people were uncomfortable with what the data could be used for. But as we've seen with a lot of technologies, like with cars, people [used to be] scared of the internal combustion engine exploding. And nowadays we accept them. We walk around them very easily. We're very familiar with them. I was wondering, in the vein of the episode “The Entire History of You” where everybody has a brain-computer interface that can record and store memories, do you think people would eventually be able to accept these BCIs as normal?





Hale: People are absolutely comfortable with these things. We saw this as cars replaced carriages, and more recently as different generations have engaged with technology as ‘simple’ as social media. Our standards of privacy have changed in only a generation or two. Many people don’t view privacy as a necessary or engrained part in everyone’s lives to the degree it used to be. But even if you engage with something like Facebook or Instagram in a restrained way and you’re not showing everything, your life can get very wrapped up in the way people are interacting in an online environment. I think newer generations, and some individuals in the older generations will engage with a neural data-based social environment, even if some people dissent to the idea. One of the most effective counterbalances to people's interest in adopting these technologies is lawful regulations. Those would have a significant effect on slowing down or stopping the misuse of these things, for the purposes of avoiding the less than desirable scenarios. Regulations can’t prevent negative consequences 100% effectively, of course. What will be important is whether we have reactive/responsive laws or preventative laws, which will probably be controlled by the speed at which these two things happen. 





Yunmiao: For example, in “Crocodile,” people do have access to neural data. One could go to the extreme. Mia (the central protagonist) essentially killed everyone who could potentially have a memory of her murder(s). On one hand, such technology might stop people from committing any crime, knowing someone might be watching. On the other hand, it might also be a threat to the society because people will feel the threat of that information getting out. 








Image courtesy of Wikipedia.

Nathan: I think an unintended consequence of something like Black Mirror is an automatic increase in acceptance in these technologies. Even though a supermajority of the episodes, like “Crocodile,” “Shut Up and Dance,” and “Men Against Fire” end in death – or worse, like the perpetual agony in “Black Museum” – it gets the story out there. Just like science fiction always has. Even if it's a morbid fascination it puts fascination into the public eye. I always see fascination inevitably garnering interest for technology to actually happen even if the first presentation of it was terrifying.

Sunidhi: I think it's just a matter of time. People watching this normalizes it. There are numerous examples of technology that people kind of rejected initially and then slowly took in as more and more people accepted it. 





Somnath: My next question is about “The Emulated Self” and focuses on storing people's consciousness against their will. In “Hang the DJ,” however, we're introduced to a dating app that simulates hundreds of versions of ourselves and other people with a near-perfect emulation of our thoughts, feelings, and personalities. It basically takes the mystery out of dating. The app then mysteriously kills off the simulations, or deletes their code, when it determines that two people could be matched. But for me that begs the question: would emulating those perfect copies of people, taking their memories away, putting them in an unknown place, and then deleting their code be unethical? Is that considered imprisonment? And does that even matter?





Hale: You're not deleting an individual artificial intelligence within that universe, you’re deleting the entire thing at once. So you're not causing any sort of relational harm. You're not killing an individual that other individuals know and will grieve over. Everyone disappears at once within that universe. But of course, a lot of it comes down to an unanswerable question: how can we possibly know whether a simulated person actually experiences the emotions that they appear to experience? 





Nathan: Yes, “Hang the DJ” has a good outcome in the end. But I think it's unfair of us to judge that technology and its consequences based on a dating app when the exact same technology could be used differently, like in the finale, “Black Museum.” With pretty much the same technology as the dating app, a man, whether deservedly or not, was put into perpetual torture. Or, at least, the emulation of his consciousness was. 





Image courtesy of Flickr user Many Wonderful Artists.




Sunidhi: Also, how much of it is actually deleted? Is it fully deleted, or does it continue to exist somewhere? 





Somnath: “San Junipero” showed us a positive use of a similar technology, as a way of ensuring a good death. Or rather, a life beyond death. The episode concluded with one of the most remembered love stories in pop culture. When the person died in the real world, a new version of that person was created in the simulation. My question is: does the company then own your life? You'd be at the whims of that company. Is that necessarily a bad thing? The people inside are living a good life even though they're dependent on this company owning them. Is it a good thing to live in a simulation, or is it not? 





Nathan: It can never be a good thing as long as there is a distinction between the simulation and the real world. There was no perceptual difference between the simulation and the real world in “San Junipero.” Even so, the real world seemed to treat the simulation as something qualitatively different: the people in the simulation had different legal rights. We instinctively think of a person undergoing a change like Alzheimer’s disease as retaining their identity. Their personality is different, their memories change, but you know it’s the same person, much more so than with a simulation in “San Junipero,” where the personality is identical. As long as we think of a simulated world as something demonstrably different, what happens if you don't renew your contract with the company that built San Junipero? They’re entitled to terminate you. You’d die. Again. 





Sunidhi: What’s interesting in “San Junipero” is that the simulated copies still retain the same memories. It then becomes a blurry line as to how you can be different people but still retain the same memories, life experiences, and so on. 





Yunmiao: I think the question is whether the simulated self is continuous with the original person, or whether it’s another life or person. What if they both exist at the same time, like in many other Black Mirror episodes? I don't think the copy, or simulated person, is an extension of the original person. I think they have their own mind, and they are their own person. Thus, the original person should not have any ownership of that emulated self. 





Somnath: My final question is about emulation. We’re pretty far away from emulating human bodies. The research is still in its infancy. When I wrote about it on the blog, the research basically said that neuroscientists are still trying to figure out how to emulate ion channels. Never mind complex neural activity or entire human beings. So why do you think the show keeps coming back to the emulated self if we’re so far away from it? Do you think it just makes for a good story, or do you think there is something more important about how the American consciousness reacts to this technology when it is portrayed in the show? 








Image courtesy of Pixabay.

Yunmiao: This is more of a philosophical question: what is the self? That question has been debated for centuries, and I think this is just another perspective from which to view or evaluate what the self is. Do you view yourself as the same person you were 10 years ago, 5 years ago, or will be 5 minutes from now? The philosophical question sits on top of the potential technology and the ethical issues.

Sunidhi: I think the whole ‘true self’ debate manifests itself in current technology. Think about Deep Brain Stimulation for depression, and how a patient changes with it. Are they being restored to who they were before? Is this a new person created by the treatment? Those questions are still present, so this might just be another interpretation of how those questions will present themselves in the future. 





Hale: I agree, and I think that people can’t help but ask themselves, when they see it on the show: how will this affect me and my life? If a technology feels completely distant because you won’t see it for hundreds of years, you will only be casually interested. But these technologies are presenting a pseudo-immortality. Even now, we might be close to being able to preserve our brains or brain data, if only cryogenically. One day in the future, when we have the technology to do something with that data, we could digitally pop into existence again. People see this and feel it’s not within arm’s reach, but it is just beyond their fingertips. 





Nathan: I’m more of a skeptic when it comes to this technology ever bearing fruit. But I still think that, even if it is completely fantastical, it’s important to get it into the public consciousness. Science fiction, as much as fantasy, can serve as an allegory for the questions we are really asking: like Yunmiao said, questions ranging from immortality to identity and personhood. It’s a lot easier to enter the conversation if you’re asking, ‘What if I upload myself into Star Trek?’ (as in “USS Callister”) instead of, ‘What if I misrepresent myself on Facebook and my boss thinks I’m someone completely different?’ 








Image courtesy of Flickr user FrenchKheldar.

Somnath: People with backgrounds in ethics have had a visceral reaction to Black Mirror. Proponents of these technologies, like those who are trying to make emulation happen, often contend that our hesitancy is driven by fear. They contend that progress is impeded by hand-wringing ethicists. We can’t ignore that the show brings a fascination to all these technologies, regardless of the grim consequences. That's why ethicists do need to respond to the show. Ethicists do better when they get out of their ivory tower. At the same time, pop-culture phenomena like Black Mirror are made more intriguing and more constructive when we have real discussions about the cross-pollination of fiction and the real world.


Before we close, I have to ask: favorite episodes? For me, “White Bear” was the most fascinating for the neuroethical implications. But as a consumer, “The Waldo Moment” is my favorite. 





Yunmiao: “White Christmas” 





Hale: “USS Callister” 





Sunidhi: “Men Against Fire” 





Nathan: “Hated in the Nation”





Want to cite this post?





Ahlgrim, N. (2018). The Neuroethics Blog Series on Black Mirror: Black Mirror in the Rear-view Mirror - an Interview with the Authors. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2018/03/black-mirror-in-rear-view-mirror.html