
Tuesday, July 25, 2017

Grounding ethics from below: CRISPR-Cas9 and genetic modification



By Anjan Chatterjee






The University of Pennsylvania

Anjan Chatterjee is the Frank A. and Gwladys H. Elliott Professor and Chair of Neurology at Pennsylvania Hospital. He is a member of the Center for Cognitive Neuroscience and the Center for Neuroscience and Society at the University of Pennsylvania. He received his BA in Philosophy from Haverford College, his MD from the University of Pennsylvania, and completed his neurology residency at the University of Chicago. His clinical practice focuses on patients with cognitive disorders. His research addresses questions about spatial cognition and language, attention, neuroethics, and neuroaesthetics. He wrote The Aesthetic Brain: How we evolved to desire beauty and enjoy art and co-edited Neuroethics in Practice: Mind, medicine, and society, and The Roots of Cognitive Neuroscience: behavioral neurology and neuropsychology. He is or has been on the editorial boards of the American Journal of Bioethics: Neuroscience, Behavioural Neurology, Cognitive and Behavioral Neurology, Neuropsychology, Journal of Cognitive Neuroscience, Journal of Alzheimer's Disease, Journal of the International Neuropsychological Society, European Neurology, Empirical Studies of the Arts, The Open Ethics Journal, and Policy Studies in Ethics, Law and Technology. He was awarded the Norman Geschwind Prize in Behavioral and Cognitive Neurology by the American Academy of Neurology and the Rudolph Arnheim Prize for contribution to Psychology and the Arts by the American Psychological Association. He is a founding member of the Board of Governors of the Neuroethics Society, the past President of the International Association of Empirical Aesthetics, and the past President of the Behavioral and Cognitive Neurology Society. He serves on the Boards of Haverford College, the Associated Services for the Blind and Visually Impaired, and The College of Physicians of Philadelphia.




In 1876, Gustav Fechner (1876) introduced an "aesthetics from below." He contrasted this approach with an aesthetics from above: rather than defining aesthetic experiences from first principles, one could investigate people's responses to stimuli and use these data to ground aesthetic theory. Neuroethics could benefit from a similar grounding in an ethics from below, especially when ethical concerns affect public policy and regulation.



We are in the middle of a scientific revolution (Doudna & Charpentier, 2014) that will transform biological research by profoundly affecting agriculture, animal husbandry, and medicine. It also has profound implications for neuroethics. Genetic modification using CRISPR-Cas9 (clustered regularly interspaced short palindromic repeats–CRISPR-associated protein 9), a system of adaptive immunity discovered in bacteria, has become feasible and cheap. Described in 2015 as "Science's breakthrough of the year," CRISPR-Cas9 offers promises as well as perils. In addition to modifying somatic cells, we can now modify germline cells. We might be able to eliminate single-gene neurological disorders like Huntington's disease, among many others. At the same time, intentional selection of genes for physical and mental traits might reify social inequities and resurrect the possibility of eugenics. Specifically, genetic manipulation could become a powerful tool for cognitive and mental enhancement that selects and manipulates genes that contribute to intelligence, attention, memory, and even creativity.







Image courtesy of Wikimedia Commons.

Scientists and ethicists are aware that the public should be involved in discussions about these technologies and their applications. Think tanks, bioethics groups, and scientific societies call for public engagement. For example, in December 2015, the US National Academies of Sciences, Engineering, and Medicine held a summit on the regulation of CRISPR-Cas9 gene-modifying technology (Travis, 2015). PhD physicist and Congressman Bill Foster (D-IL) opened the summit with a reminder that gaining public acceptance of what can be done with CRISPR-Cas9 is critical. The meeting concluded that it would be irresponsible to proceed with germline modification without broad societal consensus about the appropriateness of possible uses. The final report (National Academies of Sciences, Engineering, and Medicine, 2017) walked back from this earlier call for broad societal consensus (Baylis, 2017), but did offer conditions under which germline genetic modification might be considered. Nonetheless, the report advocates for public involvement, as stated on pages 7-8:


“Public engagement is always an important part of regulation and oversight for new technologies. As noted above, for somatic genome editing, it is essential that transparent and inclusive public policy debates precede any consideration of whether to authorize clinical trials for indications that go beyond treatment or prevention of disease or disability (e.g., for enhancement). With respect to heritable germline editing, broad participation and input by the public and ongoing reassessment of both health and societal benefits and risks are particularly critical conditions for approval of clinical trials.
At present, a number of mechanisms for public communication and consultation are built into the U.S. regulatory system, including some designed specifically for gene therapy, whose purview would include human genome editing. In some cases, regulatory rules and guidance documents are issued only after extensive public comment and agency response.” 


Given CRISPR-Cas9's technical ease, low cost, and potentially widespread application, knowing current public opinion is crucial to ongoing engagement. The "public" is not a monolithic entity, and understanding how attitudes differ across groups becomes critically relevant to any outreach efforts.







Public opinion on in vitro fertilization (IVF) has changed dramatically since its introduction.
Image courtesy of Flickr user Image Editor.

With these considerations in mind, we investigated what "the public" thinks about genetic modification research by querying 2,493 Americans of diverse backgrounds (Weisberg, Badgio, & Chatterjee, 2017). Respondents were broadly supportive of conducting this research. However, demographic variables influenced the robustness of this support: conservatives, women, African Americans, and older respondents, while supportive, were more cautious than liberals, men, respondents of other ethnicities, and younger respondents. Support for such research was also muted when the risks, such as unanticipated mutations and the possibility of eugenics, were made explicit. We also presented information about genetic modification with contrasting vignettes, using one of five frames: genetic editing, engineering, hacking, modification, or surgery. The media, it turns out, uses different framing metaphors than academics when describing this technology. Journalists, more often than scientists, use "editing" as a metaphor, perhaps not surprising insofar as they are professional writers. It would be useful to know if these metaphors affect people's opinions. In the context of our vignettes, the contrasting frames did not influence people's attitudes. Our data offer a current snapshot of public attitudes towards genetic modification research that can inform ongoing engagement.




Our observations are hardly the last word on the topic. Rather, they are an initial survey of a dynamically changing landscape. Will public attitudes evolve as more people become aware of the possibilities and problems of these technologies? What do we make of demographic differences? Conservatives, women, African Americans, and older people do not group together in an obvious way. Surely the reasons for caution among these groups vary. We did not find an effect of metaphoric framing in our study. This absence of an effect is reassuring insofar as journalists and scientists typically write about genetic modification using different organizing frames. Perhaps the lack of an effect was due to an insufficient "dose" of framing language. If we had presented more extensive descriptions and reinforcing language, might we have found an effect of framing? The point is that the implications of our results are subject to ongoing refinement, further testing, and continuing discussion, as is true of most empirical studies.




In a rapidly changing world in which the biological sciences have the potential to profoundly affect our physical, mental, and cognitive lives, public opinion assessed from below may be critical to grounding policy shaped from above.





References 



Baylis, F. (2017). Human germline genome editing and broad societal consensus. Nature Human Behaviour, 1. doi:10.1038/s41562-017-0103



Doudna, J. A., & Charpentier, E. (2014). The new frontier of genome engineering with CRISPR-Cas9. Science, 346(6213), 1258096.



Fechner, G. (1876). Vorschule der Aesthetik. Leipzig: Breitkopf & Härtel.



National Academies of Sciences, Engineering, and Medicine. (2017). Human Genome Editing: Science, Ethics, and Governance. Washington, DC: The National Academies Press. Retrieved from http://go.nature.com/2ooO6jx



Travis, J. (2015). Inside the summit on human gene editing: A reporter's notebook. Science. doi:10.1126/science.aad7532



Weisberg, S. M., Badgio, D., & Chatterjee, A. (2017). A CRISPR New World: Attitudes in the Public toward Innovations in Human Genetic Modification. Frontiers in Public Health, 5, 117.





Want to cite this post?




Chatterjee, A. (2017). Grounding ethics from below: CRISPR-Cas9 and genetic modification. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/07/grounding-ethics-from-below-crispr-cas9.html

Tuesday, July 18, 2017

Revising the Ethical Framework for Deep Brain Stimulation for Treatment-Resistant Depression



By Somnath Das






Somnath Das recently graduated from Emory University where he majored in Neuroscience and Chemistry. He will be attending medical school at Thomas Jefferson University starting in the Fall of 2017. Studying Neuroethics has allowed him to combine his love for neuroscience, his interest in medicine, and his wish to help others into a multidisciplinary, rewarding practice of scholarship which to this day enriches how he views both developing neurotechnologies and the world around him. 




Despite the prevalence of therapeutics for treating depression, approximately 20% of patients fail to respond to multiple treatments such as antidepressants, cognitive-behavioral therapy, and electroconvulsive therapy (Fava, 2003). Zeroing in on an effective treatment for "Treatment-Resistant Depression" (TRD) has been the focus of physicians and scientists. Dr. Helen Mayberg's groundbreaking paper on Deep Brain Stimulation (DBS) demonstrates that electrical modulation of an area of the brain called the subgenual cingulate resulted in a "sustained remission of depression in four of six (TRD) patients" (Mayberg et al., 2005). These patients experienced feelings that were described as "lifting a void" or "a sudden calmness." The importance of this treatment lies in the fact that patients who received DBS for TRD (DBS-TRD) often have no other treatment avenues, and thus Mayberg's findings paved the way for DBS to have great treatment potential for severely disabling depression.






Image courtesy of Wikimedia Commons


Because DBS involves the implantation of electrodes into the brain, Dr. Mayberg and other DBS researchers faced intense scrutiny, following publication of their initial findings, regarding the ethics of using what to some seems like a dramatic intervention for TRD. A number of ethical concerns surrounding DBS for depression center on the principles of beneficence, nonmaleficence, and respect for patient autonomy (Schermer, 2011). Beneficence and nonmaleficence concern, respectively, how much benefit and how much harm these therapies confer on the patient (Farah, 2015). In the context of depression, these analyses often involve weighing, for example, perceived threats to identity against the benefits of mood changes. An additional issue is that the hype surrounding the treatment could give patients false expectations for therapeutic outcomes (Schermer, 2011). Empirical studies have found therapeutic misconception, or the conflation of research and therapeutic intents by participants of clinical trials, to be a critical area for further investigation by DBS-TRD researchers (Fisher et al., 2012).




While these ethical criteria are important for evaluating a biomedical treatment outcome, they provide a strong framework for patients suffering from a strictly biological disease with a strictly biological treatment, and a poor framework for patients navigating a disability experience. Although the biomedical correlates of depression do include factors such as genes, cortisol levels, and hippocampal volume, there is a critical need to assess how illness narratives reflect patients' assessments of their disease, values around identity, and therapy prognosis. By adopting a social model of medicine that places emphasis on the experience of illness through personal stories, "Narrative Medicine" confers increased dignity and autonomy on patients with few therapeutic choices by addressing the social consequences of disability such as stigmatization and lack of accommodation (Garden, 2010). Additionally, this framework allows for the reframing of beneficence and nonmaleficence, focusing instead on qualitative improvements or changes over time as opposed to quantitative evaluative measures.








People can experience depression differently even with the same biological factors.
Image courtesy of Wikimedia Commons.

Previous literature has demonstrated, at least in part, the need to rigorously evaluate the narrative beliefs of depressed patients before a therapeutic decision is made. Depression narratives can also give insight into how a patient's cognitive interpretation of their disorder affects their treatment-related behavior (Brown et al., 2001). A study by Karasz, Sacajiu, & Garcia in 2003 found that patient beliefs about the cause of their depression could be grouped into biosocial, psychosocial, psychological, situational, and somatic narratives, and subsequent studies found that the type of illness narrative a patient subscribes to predicts preferred treatment options (Khalsa et al., 2011). Other studies have assessed how patient illness conception and treatment preference affect therapeutic outcome. While some studies document no relation (Dunlop et al., 2012), others document a significant interactive effect (Kocsis et al., 2009). Taking previous literature into account, DBS-TRD patient narratives and shared perspectives captured through qualitative interviews could hold critical evaluative data that can help researchers determine whether the treatment is effective for TRD from a holistic perspective.





Qualitative and narrative studies can also help researchers understand the beneficial impact of DBS on patients' daily lives. A study by Lipsman, Zener, & Bernstein (2009) found that when considering identity changes due to removal of brain tumors, patients often considered the ability to carry on with their own lives as more important than the possibility of permanent personality changes. A qualitative 2016 study by Hariz, Limousin, and Hamberg found that for patients receiving DBS to treat Parkinson's disease, the perceived benefit of less severe, more predictable tremor made adverse events more tolerable. The importance of the participants' personal accounts was that they allowed researchers to understand the social and relational side effects, both positive and adverse, that implantation conferred on patients' lives. For example, some of the study's participants described having less dependence on family members, and others described not wishing to return to careers post-implantation due to the fear that stress might cause a relapse of tremor.







Experts believe narrative accounts, not just objective measures, are necessary in ethical interventions.
Image courtesy of Wikimedia Commons.

That being said, the perspective of implanted patients with mood disorders remains poorly characterized. To address this issue, Klein et al. (2016) interviewed patients who underwent implantation for obsessive-compulsive disorder (OCD) and major depressive disorder (MDD) about a hypothetical scenario concerning "closed-loop" DBS devices, which would give patients greater control over their own DBS therapy. Their study found four common themes about which patients either strongly agreed or disagreed: control over device function, authentic self, relationship effects, and meaningful consent. Patients especially disagreed on how control over the device would impact their lives. Klein et al. therefore demonstrate the complexity with which DBS impacts the lives of the mentally disabled, and how these patients process their disability post-implantation. While clinical data remain important in evaluating how DBS affects the clinical presentation of TRD, qualitative data demonstrate how neurotechnologies fundamentally alter the social and relational aspects of disability.




A neuroethics framework with greater emphasis on qualitative data can provide DBS-TRD researchers a distinct perspective for analyzing developing neurotechnologies, one that takes into account both the interests of clinical medicine and the social factors modulating a person's treatment experience. In the context of DBS for TRD, it is important to longitudinally assess shifts in disability via patient narratives so that researchers can understand both the benefits and side effects of life-changing neurosurgery on patients' disability experiences. Thus, qualitative data consisting of personal accounts of disease and intervention can be used to further explore how neurotechnologies affect patients, offering a richer, more holistic analysis of patient experience than quantitative scales alone.



References



Brown, C., Dunbar-Jacob, J., Palenchar, D. R., Kelleher, K. J., Bruehlman, R. D., Sereika, S., & Thase, M. E. (2001). Primary care patients' personal illness models for depression: a preliminary investigation. Fam Pract, 18(3), 314-320.



Dunlop, B. W., Kelley, M. E., Mletzko, T. C., Velasquez, C. M., Craighead, W. E., & Mayberg, H. S. (2012). Depression Beliefs, Treatment Preference, and Outcomes in a Randomized Trial for Major Depressive Disorder. Journal of Psychiatric Research, 46(3), 375-381. doi:10.1016/j.jpsychires.2011.11.003



El-Hai, J. (2011). Narratives of DBS. AJOB Neuroscience, 2(1), 1-2. doi:10.1080/21507740.2011.547421



Farah, M. J. (2015). An ethics toolbox for neurotechnology. Neuron, 86(1), 34-37. doi:10.1016/j.neuron.2015.03.038



Fava, M. (2003). Diagnosis and definition of treatment-resistant depression. Biological Psychiatry, 53, 649-659.



Fisher, C. E., Dunn, L. B., Christopher, P. P., Holtzheimer, P. E., Leykin, Y., Mayberg, H. S., . . . Appelbaum, P. S. (2012). The ethics of research on deep brain stimulation for depression: decisional capacity and therapeutic misconception. Ann N Y Acad Sci, 1265, 69-79. doi:10.1111/j.1749-6632.2012.06596.x



Garden, R. (2010). Disability and narrative: new directions for medicine and the medical humanities. Medical Humanities.



Hariz, G. M., Limousin, P., & Hamberg, K. (2016). "DBS means everything – for some time": Patients' perspectives on daily life with deep brain stimulation for Parkinson's disease. J Parkinsons Dis, 6(2), 335-347. doi:10.3233/jpd-160799



Karasz, A., Sacajiu, G., & Garcia, N. (2003). Conceptual Models of Psychological Distress Among Low-income Patients in an Inner-city Primary Care Clinic. J Gen Intern Med, 18(6), 475-477. doi:10.1046/j.1525-1497.2003.20636.x



Khalsa, S. R., McCarthy, K. S., Sharpless, B. A., Barrett, M. S., & Barber, J. P. (2011). Beliefs about the causes of depression and treatment preferences. J Clin Psychol, 67(6), 539-549. doi:10.1002/jclp.20785



Klein, E., Goering, S., Gagne, J., Shea, C. V., Franklin, R., Zorowitz, S., . . . Widge, A. S. (2016). Brain-computer interface-based control of closed-loop brain stimulation: attitudes and ethical considerations. Brain-Computer Interfaces, 3(3), 140-148. doi:10.1080/2326263X.2016.1207497



Kocsis, J. H., Leon, A. C., Markowitz, J. C., Manber, R., Arnow, B., Klein, D. N., & Thase, M. E. (2009). Patient preference as a moderator of outcome for chronic forms of major depressive disorder treated with nefazodone, cognitive behavioral analysis system of psychotherapy, or their combination. J Clin Psychiatry, 70(3), 354-361.



Lipsman, N., Zener, R., & Bernstein, M. (2009). Personal identity, enhancement and neurosurgery: a qualitative study in applied neuroethics. Bioethics, 23(6), 375-383. doi:10.1111/j.1467-8519.2009.01729.x



Mayberg, H. S., Lozano, A. M., Voon, V., McNeely, H. E., Seminowicz, D., Hamani, C., . . . Kennedy, S. H. (2005). Deep brain stimulation for treatment-resistant depression. Neuron, 45(5), 651-660. doi:10.1016/j.neuron.2005.02.014



Schermer, M. (2011). Ethical Issues in Deep Brain Stimulation. Front Integr Neurosci, 5. doi:10.3389/fnint.2011.00017



Want to cite this post?



Das, S. (2017). Revising the Ethical Framework for Deep Brain Stimulation for Treatment-Resistant Depression. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/07/revising-ethical-framework-for-deep.html

Tuesday, July 11, 2017

Diagnostic dilemmas: When potentially transient preexisting diagnoses confer chronic harm



By Elaine Walker





Elaine Walker is the Charles Howard Candler Professor of Psychology and Neuroscience at Emory University.   She leads a research laboratory that is funded by the National Institute of Mental Health (NIMH) to study risk factors for psychosis and other serious mental illnesses.  Her research is focused on the behavioral and neuromaturational changes that precede psychotic disorders.   She has published over 300 scientific articles and 6 books. 






The diagnostic process can be complicated by many factors. Most of these factors reflect limitations in our scientific understanding of the nature and course of disorders. But in the current US healthcare climate, legislative proposals concerning insurance coverage for preexisting conditions add another layer of complexity to the diagnostic process. It is a layer of complexity that is riddled with ethical dilemmas that are especially salient in the field of mental health care. The following discussion addresses the interplay between medical practice and health-care system policy in the diagnostic process. The diagnosis of psychiatric disorders is emphasized because these disorders present unique challenges [1].




Of course, some of the complications associated with diagnosis are a function of ambiguous and/or changing diagnostic criteria. For example, the criteria for designating the level of symptom severity that crosses the boundary into clinical disorder change over time as a function of scientific advances. This has occurred for numerous illnesses, including metabolic, cardiovascular, and psychiatric disorders [2]. Further, especially in psychiatry, diagnostic categories undergo revision over time, even to the extent that some behavioral “syndromes” previously considered an illness have been eliminated from diagnostic taxonomies. Homosexuality is a prime example. It was designated as a disorder in the Diagnostic and Statistical Manual of Mental Disorders (DSM) in 1952, then removed in 1973. 








Definitions of disorders have changed over the years.

Image courtesy of Wikimedia Commons

In the case of psychiatric diagnosis, other complications arise from research findings that raise questions about the reliability and stability of diagnoses. As a case in point, numerous studies have shown that a large proportion of adolescents manifest psychiatric symptoms that are transient. The diagnosis of personality and psychotic disorders is notable in this regard. A majority of adolescents who meet diagnostic criteria for personality disorder no longer meet criteria in young adulthood [3, 4]. Similarly, across cultures, for the majority (75–90%) of adolescents who report psychotic symptoms, the episodes are transitory and symptoms disappear by early adulthood [5]. These normative declines in adolescent psychiatric symptoms have been attributed to maturational increases in emotional regulation and cognitive abilities. The transient nature of some symptoms in youth can make health care providers more cautious in their approach to diagnosis. Of course, mental health care providers are also concerned about the stigma associated with psychiatric diagnoses.





Changing diagnostic criteria and normative developmental changes in the manifestation of symptoms present diagnostic dilemmas for health care providers under any circumstances. But the salience of these challenges is amplified when the diagnosis of a condition could have long-term adverse consequences for an individual’s future access to healthcare. It is for this reason that legislation governing the provision of insurance coverage for pre-existing conditions has such broad implications. 





A “pre-existing medical condition” is a health condition that exists prior to application for or enrollment in a health insurance policy, and insurers tend to define such conditions broadly. Public concern about the issue of pre-existing conditions has become more urgent recently because of proposals for reform of the Affordable Care Act (ACA). Prior to passage of the ACA in 2010, there were minimal restrictions on the insurance industry with respect to covering illnesses diagnosed prior to enrollment in the plan. Under the Essential Health Benefits provision of the ACA, insurers are required to provide coverage for pre-existing conditions. But current legislative proposals to reform or eliminate the ACA include no such explicit requirement. 








Image courtesy of picserver.org

The passage of legislation that eliminates the requirement for coverage of pre-existing conditions would pose significant ethical challenges for health care providers, especially those in the field of mental health. The stakes are high. For example, if a psychiatric diagnosis becomes part of a child's medical record, that child's future access to insurance, and therefore health care, could be jeopardized. A diagnosis of attention deficit disorder, personality disorder, or brief psychotic episode could portend a lifetime of struggles to obtain insurance coverage. Moreover, the diagnosis may be based on childhood symptoms that ultimately prove to be transitory, in that they resolve with little or no treatment. Given such circumstances, should the health care provider, in the best interests of the child, modulate their diagnostic threshold to reduce the likelihood of such detrimental outcomes? Is that decision consistent with ethical practice?





There are, of course, other considerations with respect to diagnosis and the child’s best health-care interests. Decisions about diagnosis and treatment are based, in part, on past health conditions. Ethical practice requires that the treatment provider be aware of potential adverse reactions to treatment, and these reactions vary as a function of the patient’s medical history [6]. There is evidence, for example, that individuals who have experienced previous subclinical psychotic symptoms are at greater risk for adverse reactions, including schizophrenia, to the stimulant medications used to treat attention deficit disorders. Consequently, previous psychotic episodes are considered a contraindication for the prescription of stimulant medication. If a health care provider observes subclinical psychotic symptoms in a teenager, but does not record them in the child’s medical record, this could negatively impact the quality of the child’s future health care. Thus, a diagnostic decision based on protecting the patient from later exclusion from coverage due to a pre-existing condition could inadvertently compromise the quality of their future healthcare. 








Some argue that exclusion from medical coverage based on pre-existing conditions contradicts the principle of "do no harm".

Image courtesy of pixabay.com

There is no doubt that the inherent complexities of diagnosis are made even more challenging by policies that limit or exclude coverage for pre-existing health conditions. In fact, it could be argued that excluding or charging prohibitive premiums for health insurance coverage based on pre-existing conditions undermines the basic foundation of health care ethics, especially the dictum to “do no harm”. When diagnostic decisions have the potential to influence access to future healthcare, and therefore cause ‘harm’ to the patient, physicians and other health care providers are faced with a catch-22. The basic principle of non-maleficence is at odds with health care policy that deems the presence of a clinical diagnosis a potential long-term liability and a barrier to future healthcare access. Many organizations representing health care providers, including the American Medical Association and the American Psychological Association, have voiced their concern about these issues [7]. There is no doubt that debates about the ethical and public health dimensions of US health care reform will intensify as new proposals make their way through our legislative bodies. 



References



1. Eastman, N., & Starling, B. (2006). Mental disorder ethics: Theory and empirical investigation. Journal of medical ethics, 32(2), 94-99.



2. First, M. B. (2017). The DSM revision process: needing to keep an eye on the empirical ball. Psychological Medicine, 47(1), 19.



3. De Fruyt, F., & De Clercq, B. (2014). Antecedents of personality disorder in childhood and adolescence: toward an integrative developmental model. Annual review of clinical psychology, 10, 449-476.



4. Miller, A. L., Muehlenkamp, J. J., & Jacobson, C. M. (2008). Fact or fiction: Diagnosing borderline personality disorder in adolescents. Clinical psychology review, 28(6), 969-981.



5. Van Os, J., Linscott, R. J., Myin-Germeys, I., Delespaul, P., & Krabbendam, L. (2009). A systematic review and meta-analysis of the psychosis continuum: evidence for a psychosis proneness–persistence–impairment model of psychotic disorder. Psychological Medicine, 39(2), 179-195.



6. Macdonald, A. N., Goines, K. B., Novacek, D. M., & Walker, E. F. (2017). Psychosis-Risk Syndromes: Implications for Ethically Informed Policies and Practices. Policy Insights from the Behavioral and Brain Sciences, 2372732216684852.



7. Lyon, J. (2017). Uncertain future for preexisting conditions. JAMA, 317(6), 576.



Want to cite this post?



Walker, E. (2017). Diagnostic dilemmas: When potentially transient preexisting diagnoses confer chronic harm. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/07/diagnostic-dilemmas-when-potentially.html

Tuesday, July 4, 2017

The Neuroethics Blog Series on Black Mirror: Be Right Back


By Somnath Das




Somnath Das recently graduated from Emory University where he majored in Neuroscience and Chemistry. He will be attending medical school at Thomas Jefferson University starting in the Fall of 2017. The son of two Indian immigrants, he developed an interest in healthcare after observing how his extended family sought help from India's healthcare system to seek relief from chronic illnesses. Somnath’s interest in medicine currently focuses on understanding the social construction of health and healthcare delivery. Studying Neuroethics has allowed him to combine his love for neuroscience, his interest in medicine, and his wish to help others into a multidisciplinary, rewarding practice of scholarship which to this day enriches how he views both developing neurotechnologies and the world around him. 


----




Humans in the 21st century have an intimate relationship with technology. Much of our lives is spent being informed and entertained by screens. Technological advancements in science and medicine have helped and healed in ways we previously couldn't dream of. But what unanticipated consequences may be lurking behind our rapid expansion into new technological territory? This question is continually being explored in the British sci-fi TV series Black Mirror, which provides a glimpse into the not-so-distant future and warns us to be mindful of how we treat our technology and how it can affect us in return. This piece is part of a series of posts that will discuss ethical issues surrounding neurotechnologies featured in the show and will compare how similar technologies are impacting us in the real world.






*SPOILER ALERT* - The following contains plot spoilers for the Netflix television series Black Mirror




Medical scientists and researchers have pioneered a plethora of technologies that seek to prolong the lives of patients in a state of disease or disability. For example, brain-computer interfaces are actively being investigated in multiple therapeutic contexts that can fundamentally alter the lives of patients with various disabilities. But, what if you could bring someone back from the dead by replicating their brain? The series Black Mirror raises this critical neuroethical issue in the episode “Be Right Back” by presenting a scenario describing the creation of artificial androids that use data to mirror the minds and behaviors of the recently deceased.





 Plot Summary







Image courtesy of Wikimedia Commons.

“Be Right Back” centers around Martha and Ash, a young couple whose lives are fundamentally changed shortly after their move to a new house. Ash is killed in a tragic accident on his way to return a rental vehicle, upsetting the life of his partner, Martha, who is largely alone in their remote community. Martha also discovers that she is pregnant, further complicating her newer, lonelier life. Martha struggles to cope with her lover’s death, but eventually finds some consolation in an online service that uses Ash’s messaging history to create a chat “bot” with which Martha can communicate. As time passes, Martha submits more data from Ash, including videos and photos, so that she can chat with the bot, which now mimics Ash’s voice, on the phone. Soon, Martha’s emotional solace in the bot evolves into an emotional dependency. This is exemplified when Martha damages her phone and has a significant panic attack. The creators of the bot service then inform Martha that she can participate in an experimental phase of their service, which involves the creation of an artificial android that looks just like Ash and uses his data to communicate with Martha. She consents, agreeing to provide data and help the company create the android Ash. The android then comes to live with Martha, and, at first, life is very enjoyable for her with her newfound companion. However, over time, she realizes that the android is not able to fully replace Ash and she grows increasingly horrified when the android displays behavioral anomalies that only remind her of what she lost. 





The technology in Black Mirror is, therefore, incomplete. The android Ash lacks the emotional and empathetic capacities (or simply put, the human je ne sais quoi) that Ash possessed (or rather acquired) in real life. The show also subtly points out the issue of informed consent; if Ash were alive, would he have consented to this technology? Finally, the technology itself contains a crucial gap, likely made in the name of convenience. To fully replicate behavior, software must first understand the structure of the brain and how this structure influences and is influenced by its environment. In seeking simply to mirror Ash, the show's technology thus commits a crucial error by paying attention to Ash's behavior and not his brain, a flaw that makes it so horrifying to viewers.





The State of Current Technology 



Whole Brain Emulation 







Image courtesy of Vimeo.

Black Mirror's version of the future proposes a technology that mirrors behavior, whereas some transhumanists and futurists advocate the development of technologies that either upload or replicate the structure of the brain itself. This "digital immortality" aims to preserve the mind of the human indefinitely, allowing for a type of communication that theoretically could be superior to the technology of Black Mirror. Other futurists have proposed building software that runs as if it were a human brain itself, thereby emulating the brain without having to convert its structure into data. In "Whole Brain Emulation: A Roadmap," Sandberg and Bostrom (2008) contend that the technology required to emulate a human brain, termed whole brain emulation (WBE), is within our reach, given the advances of computational neuroscience in understanding the inner neuronal workings of animal nervous systems. They contend that the possibilities for WBE range from its research "being the logical endpoint of computational neuroscience's ability to accurately model [the brain]" to its potential to test various philosophical constructs of identity. Sandberg and Bostrom (2008) also contend that WBE is a necessary step in order to achieve mind and (subsequently) personality emulation. Should they turn out to be correct, then perhaps the fault with the technology in Black Mirror was that it was proposed too soon. That being said, it is difficult to predict when the technology proposed by Sandberg & Bostrom could come to fruition, but their own estimates (assuming supercomputer technologies are used for simulation and a high level of funding is available) place complete development within the next century. Thus, the technology of Black Mirror may seem more feasible because the simulation is based on behavioral, rather than brain, data.




Animal rights activists hope that as more complex WBE emulations are developed, the software will eventually be able to take the place of research on the brains of live animals. Dudai & Evers (2014) postulate that, in theory, simulation software could further enhance brain-machine interfaces, "for example by replacing missing dopaminergic input in Parkinsonian patients or by replacing visual input in the blind." In order to produce a true substitute, however, neuroscientists would likely have to test increasingly complex organisms to fine-tune the software's emulation of complex brain processes, which poses animal protection and welfare concerns (Sandberg, 2014). Perhaps the most concerning aspect of WBE is its assumption that we currently possess fully comprehensive knowledge about the brain and the components that influence its functions. Undiscovered neurotransmitters, brain nuclei, and neurohormones could further complicate the ability of WBE software to perform its functions accurately.




Sandberg & Bostrom themselves contend that these technologies could constitute a form of human enhancement. Thus, the critical issue of access poses an ethical concern: how do we choose whose brains get emulated first? It is likely that this technology will be very expensive if made available to consumers; however, who will serve as the initial "guinea pig"? An additional question concerns the emulation's moral status. Sandberg (2014) highlights that if an emulation were to possess all of the brain's capabilities, including consciousness, then the emulation may be afforded specific rights (as per the Cambridge Declaration on Consciousness). The emulation's conscious abilities, however, depend entirely on both the accuracy with which neuroscientists simulate brain structure and function and the nature of how consciousness itself arises (Dudai & Evers, 2014). Should the emulation be conscious of its environment and truly behave like a human, then it may display human-like attributes such as awareness of its captive state. If we were to manipulate this emulation's free will in order to contain it, would this manipulation be considered a crime akin to imprisonment?



Head Transplantation 







Image courtesy of Wikipedia.

While WBE remains a technology of the distant future, neurosurgeons today are aiming to preserve the brains of individuals via head transplantation. Dr. Sergio Canavero claims that the first head transplantation procedure will occur this year. The Italian neurosurgeon bases his claims on a previous experiment by Dr. Robert White, who transplanted the head of one rhesus monkey onto the body of another. The newly formed monkey survived for eight days "without complications," although details about the monkey's ability to feel pain remain unclear. Canavero has published papers describing proof-of-concept experiments. In a 2016 correspondence to Surgical Neurology International, he details both the severing and reattachment of a canine spinal cord, claiming that polyethylene glycol (PEG), which he plans to use in his head transplant surgeries, can fuse neuronal cell membranes following a sharp cut to the spinal cord. In 2017, Canavero and his colleague Xiao-Ping Ren piloted the creation of bicephalic (one body controlled by two brains) Wistar rats. These rats survived for about 36 hours on average.




Dr. Canavero has repeatedly used news and media outlets, including a TED talk, to promote his work. However, there is no clear consensus on the validity of his proof-of-concept experiments. There are also clear neuroethical issues that have yet to be addressed when it comes to head transplantation. In a previous post for The Neuroethics Blog, neuroscientist Dr. Ryan Purcell highlights three key ethical issues associated with the procedure: risk vs. benefit (the possibility of the head being alive yet belonging to a paralyzed body or in a constant state of pain), justice and fairness (who donates their head versus who donates their body?), and issues of personal identity post-transplantation. In the case of personal identity, the juncture between head and body could prove disastrous to conscious perception of reality if the self is indeed static. Our external sensations influence how our internal identities develop, and therefore if a head were transplanted onto a different body, surgeons may end up creating a new person entirely. This issue is vitally important for ethicists and legal experts to debate in the context of informed consent. In an op-ed for The Washington Post, Dr. Nita Farahany notes that the procedure could be considered active euthanasia under U.S. law because head transplantation in theory requires the removal of one, if not two, identities prior to completion.





Conclusions





This episode of Black Mirror presents a reality in which data is used to mirror the behavior of the deceased. This technology has obvious benefits, including not having to fully replicate the brain of the individual (as occurs in WBE) and the fact that it can be used after the original person has passed away. At least some current efforts attempt to create technologies that either replicate or preserve the brain while it is still alive, and these pose a multitude of ethical issues that neuroscientists, clinicians, and ethicists are still actively debating. Black Mirror's technology seems to circumvent these issues by not focusing on brain structure; however, the show itself notes that this technology is an incomplete duplication. Both approaches (recreating a person by reproducing their brain or simply reproducing their behavior) therefore propose compelling solutions to extending the lives of our loved ones, and it is clear that there are a host of issues to be resolved before these technologies can become a reality.





References 



Canavero, S. (2013). HEAVEN: The head anastomosis venture Project outline for the first human head transplantation with spinal linkage (GEMINI). Surgical Neurology International, 4(Suppl 1), S335–S342. http://doi.org/10.4103/2152-7806.113444



Canavero, S., & Ren, X. (2016). Houston, GEMINI has landed: Spinal cord fusion achieved. Surgical Neurology International, 7. Available from: http://surgicalneurologyint.com/surgicalint-articles/houston-gemini-has-landed-spinal-cord-fusion-achieved/



Dudai, Y., & Evers, K. (2014). To simulate or not to simulate: What are the questions? Neuron, 84(2), 254-261. doi:10.1016/j.neuron.2014.09.031



Hopkins, P. D. (2012). Why Uploading Will Not Work, or, the Ghosts Haunting Transhumanism. International Journal of Machine Consciousness, 04(01), 229-243. doi:10.1142/s1793843012400136



Li, P.-W., Zhao, X., Zhao, Y.-L., Wang, B.-J., Song, Y., Shen, Z.-L., . . . Ren, X.-P. (2017). A cross-circulated bicephalic model of head transplantation. CNS Neuroscience & Therapeutics, 23(6), 535-541. doi:10.1111/cns.12700



Sandberg, A. (2014). Ethics of brain emulations. Journal of Experimental & Theoretical Artificial Intelligence, 26(3), 439-457. doi:10.1080/0952813X.2014.895113



Sandberg, A., & Bostrom, N. (2008). Whole brain emulation: A roadmap (Technical Report 2008). Future of Humanity Institute, Oxford University.



Want to cite this post?



Das, S. (2017). The Neuroethics Blog Series on Black Mirror: Be Right Back. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/06/the-neuroethics-blog-series-on-black_30.html