
Tuesday, July 25, 2017

Grounding ethics from below: CRISPR-Cas9 and genetic modification



By Anjan Chatterjee






The University of Pennsylvania

Anjan Chatterjee is the Frank A. and Gwladys H. Elliott Professor and Chair of Neurology at Pennsylvania Hospital. He is a member of the Center for Cognitive Neuroscience, and the Center for Neuroscience and Society at the University of Pennsylvania. He received his BA in Philosophy from Haverford College, MD from the University of Pennsylvania and completed his neurology residency at the University of Chicago. His clinical practice focuses on patients with cognitive disorders. His research addresses questions about spatial cognition and language, attention, neuroethics, and neuroaesthetics. He wrote The Aesthetic Brain: How we evolved to desire beauty and enjoy art and co-edited: Neuroethics in Practice: Mind, medicine, and society, and The Roots of Cognitive Neuroscience: behavioral neurology and neuropsychology. He is or has been on the editorial boards of: American Journal of Bioethics: Neuroscience, Behavioural Neurology, Cognitive and Behavioral Neurology, Neuropsychology, Journal of Cognitive Neuroscience, Journal of Alzheimer’s Disease, Journal of the International Neuropsychological Society, European Neurology, Empirical Studies of the Arts, The Open Ethics Journal and Policy Studies in Ethics, Law and Technology. He was awarded the Norman Geschwind Prize in Behavioral and Cognitive Neurology by the American Academy of Neurology and the Rudolph Arnheim Prize for contribution to Psychology and the Arts by the American Psychological Association. He is a founding member of the Board of Governors of the Neuroethics Society, the past President of the International Association of Empirical Aesthetics, and the past President of the Behavioral and Cognitive Neurology Society. He serves on the Boards of Haverford College, the Associated Services for the Blind and Visually Impaired and The College of Physicians of Philadelphia. 




In 1876, Gustav Fechner introduced an “aesthetics from below” (Fechner, 1876). He contrasted this approach with an aesthetics from above, by which he meant that, rather than defining aesthetic experiences from first principles, one could investigate people’s responses to stimuli and use these data to ground aesthetic theory. Neuroethics could benefit from a similar grounding by an ethics from below, especially when ethical concerns affect public policy and regulation.



We are in the middle of a scientific revolution (Doudna & Charpentier, 2014) that will transform biological research by profoundly affecting agriculture, animal husbandry, and medicine. It also has profound implications for neuroethics. Genetic modification using CRISPR-Cas9 (clustered regularly interspaced short palindromic repeats–CRISPR-associated protein 9), a system of adaptive immunity discovered in bacteria, has become feasible and cheap. Described in 2015 as “Science’s breakthrough of the year,” CRISPR-Cas9 offers promise as well as peril. In addition to modifying somatic cells, we can now modify germline cells. We might be able to eliminate single-gene neurological disorders like Huntington’s disease, among many others. At the same time, intentional selection of genes for physical and mental traits might reify social inequities and resurrect the possibility of eugenics. Specifically, genetic manipulation could become a powerful tool for cognitive and mental enhancement that selects and manipulates genes contributing to intelligence, attention, memory, and even creativity.







Image courtesy of Wikimedia Commons.

Scientists and ethicists are aware that the public should be involved in discussions about these technologies and their applications. Think tanks, bioethics groups, and scientific societies have called for public engagement. For example, in December 2015, the US National Academies of Sciences, Engineering, and Medicine held a summit on the regulation of CRISPR-Cas9 gene-modifying technology (Travis, 2015). PhD physicist and Congressman Bill Foster (D-IL) opened the summit with a reminder that gaining public acceptance of what can be done with CRISPR-Cas9 is critical. The summit concluded that it would be irresponsible to proceed with germline modification without broad societal consensus about the appropriateness of possible uses. The final report from the National Academies (National Academies of Sciences, Engineering, and Medicine, 2017) walked back the earlier call for broad societal consensus (Baylis, 2017), but did offer conditions under which germline genetic modification might be considered. Nonetheless, the report advocates for public involvement, as stated on pages 7–8:


“Public engagement is always an important part of regulation and oversight for new technologies. As noted above, for somatic genome editing, it is essential that transparent and inclusive public policy debates precede any consideration of whether to authorize clinical trials for indications that go beyond treatment or prevention of disease or disability (e.g., for enhancement). With respect to heritable germline editing, broad participation and input by the public and ongoing reassessment of both health and societal benefits and risks are particularly critical conditions for approval of clinical trials.
At present, a number of mechanisms for public communication and consultation are built into the U.S. regulatory system, including some designed specifically for gene therapy, whose purview would include human genome editing. In some cases, regulatory rules and guidance documents are issued only after extensive public comment and agency response.” 


Given CRISPR-Cas9’s technical ease, low cost, and potentially widespread application, knowing current public opinion is crucial to ongoing engagement. The “public” is not a monolithic entity, and understanding how different groups’ attitudes diverge becomes critically relevant to any outreach efforts.







Public opinion on In Vitro Fertilization (IVF) has changed dramatically since its introduction.
Image courtesy of Flickr user Image Editor.

With these considerations in mind, we investigated what “the public” thinks about genetic modification research by querying 2,493 Americans of diverse backgrounds (Weisberg, Badgio, & Chatterjee, 2017). Respondents were broadly supportive of conducting this research. However, demographic variables influenced the robustness of this support: conservatives, women, African Americans, and older respondents, while supportive, were more cautious than liberals, men, respondents of other ethnicities, and younger respondents. Support for such research was also muted when the risks, such as unanticipated mutations and the possibility of eugenics, were made explicit. We also presented information about genetic modification in contrasting vignettes, using one of five frames: genetic editing, engineering, hacking, modification, or surgery. The media, it turns out, use different framing metaphors than academics when describing this technology. Journalists, more often than scientists, use “editing” as a metaphor, perhaps not surprising insofar as they are professional writers. It would be useful to know whether these metaphors affect people’s opinions. In the context of our vignettes, the contrasting frames did not influence people’s attitudes. Our data offer a current snapshot of public attitudes toward genetic modification research that can inform ongoing engagement.




Our observations are hardly the last word on the topic. Rather, they are an initial survey of a dynamically changing landscape. Will public attitudes evolve as more people become aware of the possibilities and problems of these technologies? What do we make of demographic differences? Conservatives, women, African Americans, and older people do not group together in an obvious way. Surely the reasons for caution among these groups vary. We did not find an effect of metaphoric framing in our study. This absence of an effect is reassuring in so far as journalists and scientists typically write about genetic modification using different organizing frames. Perhaps the lack of effect was because of an insufficient “dose” of framing language. If we presented more extensive descriptions and reinforcing language, might we have found an effect of framing? The point is that the implications of our results are subject to ongoing refinement, further testing, and continuing discussion as is true of most empirical studies. 




In a rapidly changing world in which the biological sciences have the potential to profoundly affect our physical, mental, and cognitive lives, public opinion assessed from below may be critical to grounding policy shaped from above.





References 



Baylis, F. (2017). Human germline genome editing and broad societal consensus. Nature Human Behaviour, 1. doi:10.1038/s41562-017-0103



Doudna, J. A., & Charpentier, E. (2014). The new frontier of genome engineering with CRISPR-Cas9. Science, 346(6213), 1258096.



Fechner, G. (1876). Vorschule der Aesthetik. Leipzig: Breitkopf & Hartel.



National Academies of Sciences, Engineering, and Medicine. (2017). Human Genome Editing: Science, Ethics, and Governance. Washington, DC: The National Academies Press. Retrieved from http://go.nature.com/2ooO6jx



Travis, J. (2015). Inside the summit on human gene editing: A reporter’s notebook. Science. doi:10.1126/science.aad7532



Weisberg, S. M., Badgio, D., & Chatterjee, A. (2017). A CRISPR New World: Attitudes in the Public toward Innovations in Human Genetic Modification. Frontiers in Public Health, 5, 117.





Want to cite this post?




Chatterjee, A. (2017). Grounding ethics from below: CRISPR-Cas9 and genetic modification. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/07/grounding-ethics-from-below-crispr-cas9.html

Tuesday, July 18, 2017

Revising the Ethical Framework for Deep Brain Stimulation for Treatment-Resistant Depression



By Somnath Das






Somnath Das recently graduated from Emory University where he majored in Neuroscience and Chemistry. He will be attending medical school at Thomas Jefferson University starting in the Fall of 2017. Studying Neuroethics has allowed him to combine his love for neuroscience, his interest in medicine, and his wish to help others into a multidisciplinary, rewarding practice of scholarship which to this day enriches how he views both developing neurotechnologies and the world around him. 




Despite the prevalence of therapeutics for treating depression, approximately 20% of patients fail to respond to multiple treatments such as antidepressants, cognitive-behavioral therapy, and electroconvulsive therapy (Fava, 2003). Zeroing in on an effective treatment for this “Treatment-Resistant Depression” (TRD) has been a focus of physicians and scientists. Dr. Helen Mayberg’s groundbreaking paper on Deep Brain Stimulation (DBS) demonstrated that electrical modulation of an area of the brain called the subgenual cingulate resulted in a “sustained remission of depression in four of six (TRD) patients” (Mayberg et al., 2005). These patients experienced feelings that were described as “lifting a void” or “a sudden calmness.” The importance of this treatment lies in the fact that participants who received DBS for TRD (DBS-TRD) often have no other treatment avenues; Mayberg’s findings thus paved the way for DBS as a treatment with great potential for severely disabling depression.






Image courtesy of Wikimedia Commons


Because DBS involves the implantation of electrodes into the brain, Dr. Mayberg and other DBS researchers faced intense scrutiny, following publication of their initial findings, regarding the ethics of using what to some seems like a dramatic intervention for TRD. A number of ethical concerns surrounding DBS for depression rest on the principles of beneficence, nonmaleficence, and respect for patient autonomy (Schermer, 2011). Beneficence and nonmaleficence concern, respectively, how much benefit and how much harm a therapy confers on the patient (Farah, 2015). In the context of depression, these considerations might weigh, for example, perceived threats to identity against the benefits of mood changes. An additional issue is that the hype surrounding the treatment could give patients false expectations for therapeutic outcomes (Schermer, 2011). Empirical studies have found therapeutic misconception, or the conflation of research and therapeutic intents by participants of clinical trials, to be a critical area of further investigation for DBS-TRD researchers (Fisher et al., 2012).




While these ethical criteria are important for evaluating a biomedical treatment outcome, this analysis is a strong framework for patients suffering from a strictly biological disease with a strictly biological treatment, and a poor framework for patients undergoing a disability experience. Although the multiple correlative biomedical agents of depression do include factors such as genes, cortisol levels, and hippocampal volume, there is a critical need to assess how illness narratives reflect patients’ assessments of their disease, values around identity, and therapy prognosis. By adopting a social model of medicine that emphasizes the experience of illness through personal stories, “Narrative Medicine” confers increased dignity and autonomy on patients with few therapeutic choices by addressing the social consequences of disability, such as stigmatization and lack of accommodation (Garden, 2010). Additionally, this framework allows for the reframing of beneficence and nonmaleficence, focusing on qualitative improvements or changes over time as opposed to quantitative evaluative measures.








People can experience depression differently even with the same biological factors.
Image courtesy of Wikimedia Commons.

Previous literature has demonstrated, at least in part, the need to rigorously evaluate the narrative beliefs of depressed patients before a therapeutic decision is made. Depression narratives can also give insight into how a patient’s cognitive interpretation of their disorder affects their treatment-related behavior (Brown et al., 2001). A study by Karasz, Sacajiu, & Garcia (2003) found that patient beliefs about the cause of their depression could be grouped into biosocial, psychosocial, psychological, situational, and somatic narratives, and subsequent studies found that the type of illness narrative a patient subscribes to predicts preferred treatment options (Khalsa et al., 2011). Other studies have assessed how patient illness conception and treatment preference affect therapeutic outcome: while some document no relation (Dunlop et al., 2012), others document a significant interactive effect (Kocsis et al., 2009). Taking this literature into account, DBS-TRD patient narratives and shared perspectives captured through qualitative interviews could be critical sources of evaluative data, helping researchers determine whether the treatment is effective for TRD from a holistic perspective.





Qualitative and narrative studies can also help researchers understand the beneficial impact of DBS on patients’ lives. A study by Lipsman, Zener, & Bernstein (2009) found that, when considering identity changes due to removal of brain tumors, patients often considered the ability to carry on with their own lives as more important than the possibility of permanent personality changes. A qualitative study by Hariz, Limousin, and Hamberg (2016) found that, for patients receiving DBS to treat Parkinson’s Disease, the perceived benefit of a less intrusive, more predictable tremor conferred tolerance of adverse events. The importance of the participants’ personal accounts was that they allowed researchers to understand the social and relational side effects, both positive and adverse, that implantation conferred on patients’ lives. For example, some of the study’s participants described having less dependence on family members, while others described not wishing to return to their careers post-implantation for fear that stress might cause a relapse of tremors.







Experts believe narrative accounts, not just objective measures, are necessary in ethical interventions.
Image courtesy of Wikimedia Commons.

That being said, the perspective of implanted patients with mood disorders remains poorly characterized. To address this issue, Klein et al. (2016) interviewed patients who underwent implantation for OCD and MDD about a hypothetical “closed-loop” DBS device, which would allow the patient to exercise an increased locus of control over their DBS therapy. Their study found four common themes on which patients either strongly agreed or disagreed: control over device function, the authentic self, relationship effects, and meaningful consent. Patients especially disagreed on how control over the device would impact their lives. Klein et al. thereby demonstrate the complexity with which DBS impacts the lives of the mentally disabled, and how these patients process their disability post-implantation. While clinical data remain important in evaluating how DBS affects the clinical presentation of TRD, qualitative data demonstrate how neurotechnologies fundamentally alter the social and relational aspects of disability.




A neuroethics framework with further emphasis on qualitative data can provide DBS-TRD researchers with a distinct perspective for analyzing a developing neurotechnology, one that takes into account both the interests of clinical medicine and the social factors shaping a person’s treatment experience. In the context of DBS for TRD, it is important to longitudinally assess shifts in disability via patient narratives so that researchers can holistically understand both the benefits and the side effects of life-changing neurosurgery on patients’ disability experiences. Qualitative data consisting of personal accounts of disease and intervention can thus be used to explore how neurotechnologies impact patients, offering a richer analysis of patient experience than quantitative scales alone.



References



Brown, C., Dunbar-Jacob, J., Palenchar, D. R., Kelleher, K. J., Bruehlman, R. D., Sereika, S., & Thase, M. E. (2001). Primary care patients' personal illness models for depression: a preliminary investigation. Fam Pract, 18(3), 314-320.



Dunlop, B. W., Kelley, M. E., Mletzko, T. C., Velasquez, C. M., Craighead, W. E., & Mayberg, H. S. (2012). Depression Beliefs, Treatment Preference, and Outcomes in a Randomized Trial for Major Depressive Disorder. Journal of Psychiatric Research, 46(3), 375-381. doi:10.1016/j.jpsychires.2011.11.003



El-Hai, J. (2011). Narratives of DBS. AJOB Neuroscience, 2(1), 1-2. doi:10.1080/21507740.2011.547421



Farah, M. J. (2015). An ethics toolbox for neurotechnology. Neuron, 86(1), 34-37. doi:10.1016/j.neuron.2015.03.038



Fava, M. (2003). Diagnosis and definition of treatment-resistant depression. Biol. Psychiatry 53, 649–659



Fisher, C. E., Dunn, L. B., Christopher, P. P., Holtzheimer, P. E., Leykin, Y., Mayberg, H. S., . . . Appelbaum, P. S. (2012). The ethics of research on deep brain stimulation for depression: decisional capacity and therapeutic misconception. Ann N Y Acad Sci, 1265, 69-79. doi:10.1111/j.1749-6632.2012.06596.x



Garden, R. (2010). Disability and narrative: new directions for medicine and the medical humanities. Medical Humanities.



Hariz, G. M., Limousin, P., & Hamberg, K. (2016). “DBS means everything – for some time”: Patients’ perspectives on daily life with deep brain stimulation for Parkinson’s disease. J Parkinsons Dis, 6(2), 335-347. doi:10.3233/jpd-160799



Karasz, A., Sacajiu, G., & Garcia, N. (2003). Conceptual Models of Psychological Distress Among Low-income Patients in an Inner-city Primary Care Clinic. J Gen Intern Med, 18(6), 475-477. doi:10.1046/j.1525-1497.2003.20636.x



Khalsa, S. R., McCarthy, K. S., Sharpless, B. A., Barrett, M. S., & Barber, J. P. (2011). Beliefs about the causes of depression and treatment preferences. J Clin Psychol, 67(6), 539-549. doi:10.1002/jclp.20785



Klein, E., Goering, S., Gagne, J., Shea, C. V., Franklin, R., Zorowitz, S., . . . Widge, A. S. (2016). Brain-computer interface-based control of closed-loop brain stimulation: attitudes and ethical considerations. Brain-Computer Interfaces, 3(3), 140-148. doi:10.1080/2326263X.2016.1207497



Kocsis, J. H., Leon, A. C., Markowitz, J. C., Manber, R., Arnow, B., Klein, D. N., & Thase, M. E. (2009). Patient preference as a moderator of outcome for chronic forms of major depressive disorder treated with nefazodone, cognitive behavioral analysis system of psychotherapy, or their combination. J Clin Psychiatry, 70(3), 354-361.



Lipsman, N., Zener, R., & Bernstein, M. (2009). Personal identity, enhancement and neurosurgery: a qualitative study in applied neuroethics. Bioethics, 23(6), 375-383. doi:10.1111/j.1467-8519.2009.01729.x



Mayberg, H. S., Lozano, A. M., Voon, V., McNeely, H. E., Seminowicz, D., Hamani, C., . . . Kennedy, S. H. (2005). Deep brain stimulation for treatment-resistant depression. Neuron, 45(5), 651-660. doi:10.1016/j.neuron.2005.02.014



Schermer, M. (2011). Ethical Issues in Deep Brain Stimulation. Front Integr Neurosci, 5. doi:10.3389/fnint.2011.00017



Want to cite this post?



Das, S. (2017). Revising the Ethical Framework for Deep Brain Stimulation for Treatment-Resistant Depression. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/07/revising-ethical-framework-for-deep.html

Tuesday, July 11, 2017

Diagnostic dilemmas: When potentially transient preexisting diagnoses confer chronic harm



By Elaine Walker





Elaine Walker is the Charles Howard Candler Professor of Psychology and Neuroscience at Emory University.   She leads a research laboratory that is funded by the National Institute of Mental Health (NIMH) to study risk factors for psychosis and other serious mental illnesses.  Her research is focused on the behavioral and neuromaturational changes that precede psychotic disorders.   She has published over 300 scientific articles and 6 books. 






The diagnostic process can be complicated by many factors. Most of these factors reflect limitations in our scientific understanding of the nature and course of disorders. But in the current US healthcare climate, legislative proposals concerning insurance coverage for preexisting conditions add another layer of complexity to the diagnostic process. It is a layer of complexity that is riddled with ethical dilemmas which are especially salient in the field of mental health care. The following discussion addresses the interplay between medical practice and health-care system policy in the diagnostic process. The diagnosis of psychiatric disorders is emphasized because they present unique challenges [1]. 




Of course, some of the complications associated with diagnosis are a function of ambiguous and/or changing diagnostic criteria. For example, the criteria for designating the level of symptom severity that crosses the boundary into clinical disorder change over time as a function of scientific advances. This has occurred for numerous illnesses, including metabolic, cardiovascular, and psychiatric disorders [2]. Further, especially in psychiatry, diagnostic categories undergo revision over time, even to the extent that some behavioral “syndromes” previously considered an illness have been eliminated from diagnostic taxonomies. Homosexuality is a prime example. It was designated as a disorder in the Diagnostic and Statistical Manual of Mental Disorders (DSM) in 1952, then removed in 1973. 








Definitions of disorders have changed over the years.

Image courtesy of Wikimedia Commons

In the case of psychiatric diagnosis, other complications arise from research findings that raise questions about the reliability and stability of diagnoses. As a case in point, numerous studies have shown that a large proportion of adolescents manifest psychiatric symptoms that are transient. The diagnosis of personality and psychotic disorders is notable in this regard. A majority of adolescents who meet diagnostic criteria for a personality disorder no longer meet criteria in young adulthood [3, 4]. Similarly, across cultures, for the majority (75–90%) of adolescents who report psychotic symptoms, the episodes are transitory and the symptoms disappear by early adulthood [5]. These normative declines in adolescent psychiatric symptoms have been attributed to maturational increases in emotional regulation and cognitive abilities. The transient nature of some symptoms in youth can make health care providers more cautious in their approach to diagnosis. Of course, mental health care providers are also concerned about the stigma associated with psychiatric diagnoses.





Changing diagnostic criteria and normative developmental changes in the manifestation of symptoms present diagnostic dilemmas for health care providers under any circumstances. But the salience of these challenges is amplified when the diagnosis of a condition could have long-term adverse consequences for an individual’s future access to healthcare. It is for this reason that legislation governing the provision of insurance coverage for pre-existing conditions has such broad implications. 





A “pre-existing medical condition” is a health condition that exists prior to application for or enrollment in a health insurance policy, and insurers tend to define such conditions broadly. Public concern about the issue of pre-existing conditions has become more urgent recently because of proposals for reform of the Affordable Care Act (ACA). Prior to passage of the ACA in 2010, there were minimal restrictions on the insurance industry with respect to covering illnesses diagnosed prior to enrollment in the plan. Under the Essential Health Benefits provision of the ACA, insurers are required to provide coverage for pre-existing conditions. But current legislative proposals to reform or eliminate the ACA include no such explicit requirement. 








Image courtesy of picserver.org

The passage of legislation that eliminates the requirement for coverage of pre-existing conditions would pose significant ethical challenges for health care providers, especially those in the field of mental health. The stakes are high. For example, if a psychiatric diagnosis becomes part of a child’s medical record, the child’s future access to insurance, and therefore to health care, could be jeopardized. A diagnosis of attention deficit disorder, personality disorder, or a brief psychotic episode could portend a lifetime of struggles to obtain insurance coverage. Moreover, the diagnosis may be based on childhood symptoms that ultimately prove to be transitory, in that they resolve with little or no treatment. Given such circumstances, should the health care provider, in the best interests of the child, modulate their diagnostic threshold to reduce the likelihood of such detrimental outcomes? Would that decision be consistent with ethical practice?





There are, of course, other considerations with respect to diagnosis and the child’s best health-care interests. Decisions about diagnosis and treatment are based, in part, on past health conditions. Ethical practice requires that the treatment provider be aware of potential adverse reactions to treatment, and these reactions vary as a function of the patient’s medical history [6]. There is evidence, for example, that individuals who have experienced previous subclinical psychotic symptoms are at greater risk for adverse reactions, including schizophrenia, to the stimulant medications used to treat attention deficit disorders. Consequently, previous psychotic episodes are considered a contraindication for the prescription of stimulant medication. If a health care provider observes subclinical psychotic symptoms in a teenager, but does not record them in the child’s medical record, this could negatively impact the quality of the child’s future health care. Thus, a diagnostic decision based on protecting the patient from later exclusion from coverage due to a pre-existing condition could inadvertently compromise the quality of their future healthcare. 








Some argue that exclusion from medical coverage based

on pre-existing conditions contradicts the principle of "do no harm".

Image courtesy of pixabay.com

There is no doubt that the inherent complexities of diagnosis are made even more challenging by policies that limit or exclude coverage for pre-existing health conditions. In fact, it could be argued that excluding or charging prohibitive premiums for health insurance coverage based on pre-existing conditions undermines the basic foundation of health care ethics, especially the dictum to “do no harm”. When diagnostic decisions have the potential to influence access to future healthcare, and therefore cause ‘harm’ to the patient, physicians and other health care providers are faced with a catch-22. The basic principle of non-maleficence is at odds with health care policy that deems the presence of a clinical diagnosis a potential long-term liability and a barrier to future healthcare access. Many organizations representing health care providers, including the American Medical Association and the American Psychological Association, have voiced their concern about these issues [7]. There is no doubt that debates about the ethical and public health dimensions of US health care reform will intensify as new proposals make their way through our legislative bodies. 



References



1. Eastman, N., & Starling, B. (2006). Mental disorder ethics: Theory and empirical investigation. Journal of medical ethics, 32(2), 94-99.



2. First, M. B. (2017). The DSM revision process: needing to keep an eye on the empirical ball. Psychological Medicine, 47(1), 19.



3. De Fruyt, F., & De Clercq, B. (2014). Antecedents of personality disorder in childhood and adolescence: toward an integrative developmental model. Annual review of clinical psychology, 10, 449-476.



4. Miller, A. L., Muehlenkamp, J. J., & Jacobson, C. M. (2008). Fact or fiction: Diagnosing borderline personality disorder in adolescents. Clinical psychology review, 28(6), 969-981.



5. Van Os, J., Linscott, R. J., Myin-Germeys, I., Delespaul, P., & Krabbendam, L. (2009). A systematic review and meta-analysis of the psychosis continuum: evidence for a psychosis proneness–persistence–impairment model of psychotic disorder. Psychological medicine, 39(02), 179-195.



6. Macdonald, A. N., Goines, K. B., Novacek, D. M., & Walker, E. F. (2017). Psychosis-Risk Syndromes: Implications for Ethically Informed Policies and Practices. Policy Insights from the Behavioral and Brain Sciences, 2372732216684852.



7. Lyon, J. (2017). Uncertain future for preexisting conditions. JAMA, 317(6), 576.



Want to cite this post?



Walker, E. (2017). Diagnostic dilemmas: When potentially transient preexisting diagnoses confer chronic harm. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/07/diagnostic-dilemmas-when-potentially.html

Tuesday, July 4, 2017

The Neuroethics Blog Series on Black Mirror: Be Right Back


By Somnath Das




Somnath Das recently graduated from Emory University where he majored in Neuroscience and Chemistry. He will be attending medical school at Thomas Jefferson University starting in the Fall of 2017. The son of two Indian immigrants, he developed an interest in healthcare after observing how his extended family sought help from India's healthcare system to seek relief from chronic illnesses. Somnath’s interest in medicine currently focuses on understanding the social construction of health and healthcare delivery. Studying Neuroethics has allowed him to combine his love for neuroscience, his interest in medicine, and his wish to help others into a multidisciplinary, rewarding practice of scholarship which to this day enriches how he views both developing neurotechnologies and the world around him. 


----




Humans in the 21st century have an intimate relationship with technology. Much of our lives are spent being informed and entertained by screens. Technological advancements in science and medicine have helped and healed in ways we previously couldn’t dream of. But what unanticipated consequences may be lurking behind our rapid expansion into new technological territory? This question is continually being explored in the British sci-fi TV series Black Mirror, which provides a glimpse into the not-so-distant future and warns us to be mindful of how we treat our technology and how it can affect us in return. This piece is part of a series of posts that will discuss ethical issues surrounding neuro-technologies featured in the show and will compare how similar technologies are impacting us in the real world. 






*SPOILER ALERT* - The following contains plot spoilers for the Netflix television series Black Mirror




Medical scientists and researchers have pioneered a plethora of technologies that seek to prolong the lives of patients in a state of disease or disability. For example, brain-computer interfaces are actively being investigated in multiple therapeutic contexts that can fundamentally alter the lives of patients with various disabilities. But, what if you could bring someone back from the dead by replicating their brain? The series Black Mirror raises this critical neuroethical issue in the episode “Be Right Back” by presenting a scenario describing the creation of artificial androids that use data to mirror the minds and behaviors of the recently deceased.





 Plot Summary







Image courtesy of Wikimedia Commons.

“Be Right Back” centers around Martha and Ash, a young couple whose lives are fundamentally changed shortly after their move to a new house. Ash is killed in a tragic accident on his way to return a rental vehicle, upsetting the life of his partner, Martha, who is largely alone in their remote community. Martha also discovers that she is pregnant, further complicating her newer, lonelier life. Martha struggles to cope with her lover’s death, but eventually finds some consolation in an online service that uses Ash’s messaging history to create a chat “bot” with which Martha can communicate. As time passes, Martha submits more data from Ash, including videos and photos, so that she can chat with the bot, which now mimics Ash’s voice, on the phone. Soon, Martha’s emotional solace in the bot evolves into an emotional dependency. This is exemplified when Martha damages her phone and has a significant panic attack. The creators of the bot service then inform Martha that she can participate in an experimental phase of their service, which involves the creation of an artificial android that looks just like Ash and uses his data to communicate with Martha. She consents, agreeing to provide data and help the company create the android Ash. The android then comes to live with Martha, and, at first, life is very enjoyable for her with her newfound companion. However, over time, she realizes that the android is not able to fully replace Ash and she grows increasingly horrified when the android displays behavioral anomalies that only remind her of what she lost. 





The technology in Black Mirror is, therefore, incomplete. The android Ash lacks the emotional and empathetic capacities (or, simply put, the human je ne sais quoi) that the real Ash possessed (or rather acquired) in life. The show also subtly raises the issue of informed consent: if Ash were alive, would he have consented to this technology? Finally, the technology itself contains a crucial gap – one likely made in the name of convenience. To fully replicate behavior, software must first capture the structure of the brain and how that structure influences, and is influenced by, its environment. In seeking simply to mirror Ash, the show's technology commits a crucial error by attending to Ash's behavior and not his brain – a flaw that makes it so horrifying to viewers. 





The State of Current Technology 



Whole Brain Emulation 







Image courtesy of Vimeo.

Black Mirror’s version of the future proposes a technology that mirrors behavior, whereas some transhumanists and futurists advocate the development of technologies that either upload or replicate the structure of the brain itself. This “digital immortality” aims to preserve the human mind indefinitely, allowing for a type of communication that could theoretically be superior to the technology of Black Mirror. Other futurists have proposed building software that runs as if it were a human brain itself, thereby emulating the brain without having to convert its structure into data. In “Whole Brain Emulation (WBE): A Roadmap,” Sandberg and Bostrom (2008) contend that the technology required to emulate a human brain is within our reach, given the advances computational neuroscience has made in understanding the inner neuronal workings of animal nervous systems. They contend that the possibilities for WBE range from its research “being the logical endpoint of computational neuroscience’s ability to accurately model [the brain]” to its potential to test various philosophical constructs of identity. Sandberg and Bostrom (2008) also contend that WBE is a necessary step toward mind and (subsequently) personality emulation. Should they turn out to be correct, then perhaps the fault with the technology in Black Mirror was that it arrived too soon. That said, it is difficult to predict when the technology proposed by Sandberg & Bostrom could come to fruition; their own estimates—assuming supercomputers are used for the simulation and funding is high—place complete development within the next century. Thus, the technology of Black Mirror may seem more feasible because its simulation is based on behavioral, rather than brain, data. 
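Sandberg and Bostrom's feasibility argument is, at bottom, an arithmetic one: multiply the number of neural components by the rate and cost of updating them, then compare the product to available compute. The sketch below runs that back-of-envelope calculation in Python; every figure in it is an illustrative assumption of my own, not a value taken from their roadmap.

```python
# Back-of-envelope estimate of the compute a whole brain emulation might need.
# All constants below are rough, illustrative assumptions for the sketch,
# not figures from Sandberg & Bostrom (2008).

NEURONS = 8.6e10           # approximate neuron count in a human brain
SYNAPSES_PER_NEURON = 1e4  # assumed average connectivity
UPDATES_PER_SECOND = 1e3   # assumed temporal resolution of a spiking model
FLOPS_PER_UPDATE = 10      # assumed cost of one synaptic update

required_flops = (NEURONS * SYNAPSES_PER_NEURON
                  * UPDATES_PER_SECOND * FLOPS_PER_UPDATE)
print(f"Required: {required_flops:.1e} FLOPS")  # prints "Required: 8.6e+18 FLOPS"

# Compare against a hypothetical exascale machine (1e18 FLOPS) to see how far
# from real-time such an emulation would run under these assumptions.
EXASCALE = 1e18
print(f"Slowdown on an exascale machine: ~{required_flops / EXASCALE:.1f}x real time")
```

Under these (deliberately conservative) assumptions the requirement lands roughly an order of magnitude beyond an exascale machine, which is consistent with the post's point that the timeline hinges on both supercomputing progress and funding.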




Animal rights activists hope that as more complex WBE emulations are developed, the software will eventually be able to take the place of research on live animal brains. Dudai & Evers (2014) postulate that, in theory, simulation software could further enhance brain-machine interfaces - “for example by replacing missing dopaminergic input in Parkinsonian patients or by replacing visual input in the blind.” To produce a true substitute, however, neuroscientists will likely need to test increasingly complex organisms to fine-tune the software’s emulation of complex brain processes, which poses animal protection and welfare concerns (Sandberg 2014). Perhaps the most concerning aspect of WBE is its assumption that we currently possess fully comprehensive knowledge of the brain and the factors that influence its functions. Undiscovered neurotransmitters, brain nuclei, and neurohormones could further complicate the ability of WBE software to perform its functions accurately. 




Sandberg & Bostrom themselves contend that these technologies could constitute a form of human enhancement. Thus, the critical issue of access poses an ethical concern: how do we choose whose brains get emulated first? This technology will likely be very expensive if it is ever made available to consumers; moreover, who will serve as the initial “guinea pig”? An additional question is the emulation’s moral status. Sandberg (2014) highlights that if an emulation were to possess all of the brain’s capabilities, including consciousness, then it might be afforded specific rights (as per the Cambridge Declaration on Consciousness). The emulation’s conscious abilities, however, depend entirely on both the accuracy with which neuroscientists simulate brain structure and function and on how consciousness itself arises (Dudai & Evers 2014). Should the emulation be conscious of its environment and truly behave like a human, it may display human-like attributes such as awareness of its captive state. If we were to override the emulation’s free will in order to contain it, would that manipulation constitute a crime similar to imprisonment? 



Head Transplantation 







Image courtesy of Wikipedia.

While WBE remains a technology of the distant future, neurosurgeons today are aiming to preserve the brains of individuals via head transplantation. Dr. Sergio Canavero claims that the first head transplantation procedure will occur this year. The Italian neurosurgeon bases his claims on a previous experiment by Dr. Robert White, who transplanted the head of one rhesus monkey onto the body of another. The monkey survived for eight days “without complications,” although details about its ability to feel pain remain unclear. Canavero has published papers describing proof-of-concept experiments. In a 2016 correspondence to Surgical Neurology International, he details the severing and reattachment of a canine spinal cord, claiming that polyethylene glycol (PEG) – which he plans to use in his head transplant surgeries – can fuse neuronal cell membranes following a sharp cut to the spinal cord. In 2017, Canavero and his colleague Xiao-Ping Ren piloted the creation of bicephalic (one body controlled by two brains) Wistar rats, which survived for about 36 hours on average. 




Dr. Canavero has repeatedly used news and media outlets, including a TED talk, to promote his work. However, there is still no clear consensus on the validity of his proof-of-concept experiments, and clear neuroethical issues surrounding head transplantation have yet to be answered. In a previous post for The Neuroethics Blog, neuroscientist Dr. Ryan Purcell highlights three key ethical issues associated with the procedure: risk vs. benefit (the possibility of the head being alive yet belonging to a paralyzed body or in a constant state of pain), justice and fairness (who donates their head versus who donates their body?), and personal identity post-transplantation. In the case of personal identity, the juncture between head and body could prove disastrous to conscious perception of reality if the self is indeed static. Our external sensations influence how our internal identities develop; if a head were transplanted onto a different body, surgeons may end up creating a new person entirely. This issue is vitally important for ethicists and legal experts to debate in the context of informed consent. In an op-ed for The Washington Post, Dr. Nita Farahany notes that the procedure could be considered active euthanasia under U.S. law because head transplantation in theory requires the removal of one – if not two – identities prior to completion. 





Conclusions





This episode of Black Mirror presents a reality in which data are used to mirror the behavior of the deceased. This technology has obvious advantages, including that it does not require fully replicating the brain of the individual (as WBE does) and that it can be used after the original person has passed away. Some current efforts, by contrast, attempt to create technologies that replicate or preserve the brain while it is still alive, and these pose a multitude of ethical issues that neuroscientists, clinicians, and ethicists are still actively debating. Black Mirror’s technology seems to circumvent these issues by not focusing on brain structure; however, the show itself notes that this technology is an incomplete duplication. Both approaches (recreating a person by reproducing their brain or simply reproducing their behavior) therefore propose compelling means of extending the lives of our loved ones, and a host of issues must be resolved before either technology can become a reality. 





References 



Canavero, S. (2013). HEAVEN: The head anastomosis venture Project outline for the first human head transplantation with spinal linkage (GEMINI). Surgical Neurology International, 4(Suppl 1), S335–S342. http://doi.org/10.4103/2152-7806.113444



Canavero, S., & Ren, X. (2016). Houston, GEMINI has landed: Spinal cord fusion achieved. Surgical Neurology International, 7. Available from: http://surgicalneurologyint.com/surgicalint-articles/houston-gemini-has-landed-spinal-cord-fusion-achieved/ 



Dudai, Y., & Evers, K. (2014). To Simulate or Not to Simulate: What Are the Questions? Neuron, 84(2), 254-261. doi:10.1016/j.neuron.2014.09.031



Hopkins, P. D. (2012). Why Uploading Will Not Work, or, the Ghosts Haunting Transhumanism. International Journal of Machine Consciousness, 04(01), 229-243. doi:10.1142/s1793843012400136



Li, P.-W., Zhao, X., Zhao, Y.-L., Wang, B.-J., Song, Y., Shen, Z.-L., . . . Ren, X.-P. (2017). A cross-circulated bicephalic model of head transplantation. CNS Neuroscience & Therapeutics, 23(6), 535-541. doi:10.1111/cns.12700



Sandberg, A. (2014). Ethics of brain emulations. Journal of Experimental & Theoretical Artificial Intelligence, 26(3), 439-457. doi:10.1080/0952813X.2014.895113



Sandberg, A., & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap. Technical Report 2008. Future of Humanity Institute, Oxford University.



Want to cite this post?



Das, S. (2017). The Neuroethics Blog Series on Black Mirror: Be Right Back. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2017/06/the-neuroethics-blog-series-on-black_30.html

Tuesday, June 27, 2017

Mental Privacy in the Age of Big Data


By Jessie Ginsberg




Jessie Ginsberg is a second year student in the Master of Arts in Bioethics program and a third year law student at Emory University. 




A father stood at the door of his local Minneapolis Target, fuming, and demanding to speak to the store manager. Holding coupons for maternity clothes and nursing furniture in front of the manager, the father exclaimed, “My daughter got this in the mail! She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?”





Target was not trying to get her pregnant. Unbeknownst to the father, his daughter was due in August.  





In his February 16, 2012 New York Times article entitled, “How Companies Learn Your Secrets,” Charles Duhigg reported on this Minneapolis father and daughter and how companies like Target use marketing analytics teams to develop algorithms that anticipate consumers’ current and future needs. Accumulating data from prior purchases, coupon use, surveys submitted, emails from Target that were opened, and demographics, a team of analysts renders each consumer’s decision patterns into neatly packaged data sets tailored to predict their future buying choices. 
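Duhigg's account amounts to a weighted scoring model over purchase histories: each product a shopper buys nudges a prediction score up or down, and shoppers above a threshold get targeted coupons. As a rough illustration (the products, weights, and threshold below are invented for this sketch, not Target's actual model), such a scorer might look like:

```python
# Toy purchase-based scorer in the spirit of Duhigg's pregnancy-prediction
# story. Products and weights are invented for illustration only.

PRODUCT_WEIGHTS = {
    "unscented lotion": 0.30,
    "prenatal vitamins": 0.55,
    "cotton balls (large bag)": 0.15,
    "wine": -0.40,
}

def pregnancy_score(purchases):
    """Sum the weights of recognized products; clamp the result to [0, 1]."""
    raw = sum(PRODUCT_WEIGHTS.get(item, 0.0) for item in purchases)
    return max(0.0, min(1.0, raw))

basket = ["unscented lotion", "prenatal vitamins", "bread"]
score = pregnancy_score(basket)
print(f"score = {score:.2f}")  # prints "score = 0.85"
if score > 0.7:  # hypothetical coupon-mailing threshold
    print("-> mail maternity coupons")
```

Real systems combine far more signals (demographics, coupon use, browsing data) and learn the weights statistically, but the privacy concern is the same: innocuous individual purchases aggregate into a sensitive inference.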






Flash forward to 2017, a time when online stores like Amazon dominate the market and cell phones are reservoirs of personal information, storing intimate details ranging from your location to your desired body weight to your mood. Furthermore, data analysis algorithms are more sophisticated than ever before, gobbling up volumes of information to generate highly specific and precise profiles of current and potential consumers. For example, feeding an algorithm everything from social media activity to Internet searches to data collected from smartphone applications unlocks a goldmine of sensitive information that reveals the proclivities, thought processes, self-perception, habits, emotional state, political affiliations, obligations, health status, and triggers of each consumer. We must then ask ourselves: in the age of Big Data, can we expect mental privacy? That is, in a society replete with widespread data collection about individuals, what safeguards are in place to protect the use and analysis of information gathered from our virtual presence? 





In addition to the information deliberately submitted to our phones and computers, we must also worry about the data we subconsciously supply. Take, for example, the brain training program Lumosity. Over the past 10 years, this website has lured over 70 million subscribers with promises that its product will “bring better brain health,” delay conditions like Alzheimer’s and dementia, and help players “learn faster” and “be sharper.” Though Lumosity and similar companies like LearningRx were sued by the Federal Trade Commission for false advertising and must now offer a disclaimer about the lack of scientific support for their products, has the damage already been done? 








Image courtesy of Pixabay.

Even more troubling than a brain training company’s use of unsubstantiated claims to tap into consumer fears of losing mental acuity for financial gain is the possibility that the information collected by these brain training programs may serve as yet another puzzle piece for big data firms. Not only can applications and search engine histories provide a robust portfolio of what an individual consciously purchases and searches for, but brain training websites can now provide deeper insights into how individuals reason and analyze information. In their article entitled “Internet-Based Brain Training Games, Citizen Scientists, and Big Data: Ethical Issues in Unprecedented Virtual Territories,” Dr. Purcell and Dr. Rommelfanger express this concern: brain training program (BTP) data “are being interpreted as current demonstrations of existing behaviors and predispositions, and not just correlations or future predictions of human cognitive capacity and performance. Yet, the vulnerability of cognitive performance data collected from BTPs has been overlooked, and we believe the rapid consumption of such games warrants a sense of immediacy to safeguarding these data” (Purcell & Rommelfanger 2015, 357). The article proceeds to question how the data collected through brain training programs will be “secured, interpreted, and used in the near and long term given evolving security threats and rapidly advancing methods of data analysis” (Purcell & Rommelfanger, 357). 





Even more worrisome is the lack of protections currently afforded to those who turn to websites and phone applications for guidance in coping with mental health issues. According to a 2014 article entitled “Mental Health Apps: Innovations, Risks and Ethical Considerations,” research shows that a majority of young adults with mental health problems do not seek professional help, despite the existence of effective psychological and pharmacological treatments (Giota & Kleftaras 2014, 20). Instead, many of these individuals turn to mental health websites and phone applications, which “are youth-friendly, easily accessible and flexible to use” (Giota & Kleftaras 2014, 20). Applications such as Mobile Therapy and MyCompass collect and monitor data ranging from lifestyle information, such as food consumption, exercise, and eating habits, to mood, energy levels, and requests for psychological treatments to reduce anxiety, depression, and stress (Proudfoot et al. 2013). Alarmingly, users of these programs are not guaranteed protection from the developers themselves: current legal mechanisms in the United States do not fully prevent developers from selling personal health information submitted to apps to third-party marketers and advertisers. 





Justice Allen E. Broussard of the Supreme Court of California declared in a 1986 opinion, “If there is a quintessential zone of human privacy it is the mind” (Long Beach City Emps. Ass'n. v. City of Long Beach). Indeed, with the advent of cell phones, widespread use of the internet, data analysts, and complex algorithms that predict future behaviors, our claim to privacy is waning. Until laws and regulations are designed to protect information collected from phone applications and Internet use, it is crucial that consumers become fully aware of just how much of themselves they share when engaging in Internet and phone activity.





References 





Giota, K.G. and Kleftaras, G. 2014. Mental Health Apps: Innovations, Risks and Ethical Considerations. E-Health Telecommunication Systems and Networks, 3, 19-23. 





Long Beach City Emps. Ass'n. v. City of Long Beach, 719 P.2d 660, 663 (Cal. 1986). 





Proudfoot, J., Clarke, J., Birch, M.R., Whitton, A.E., Parker, G., Manicavasagar, V., et al. (2013) Impact of a Mobile Phone and Web Program on Symptom and Functional Outcomes for People with Mild-to-Moderate Depression, Anxiety and Stress: A Randomised Controlled Trial. BMC Psychiatry, 13, 312. 





Purcell, R. H., & Rommelfanger, K. S. 2015. Internet-based brain training games, citizen scientists, and big data: ethical issues in unprecedented virtual territories. Neuron, 86(2), 356-359.



Want to cite this post?



Ginsberg, J. (2017). Mental Privacy in the Age of Big Data. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2017/06/mental-privacy-in-age-of-big-data.html

Tuesday, June 20, 2017

Fake News – A Role for Neuroethics?



By Neil Levy





Neil Levy is professor of philosophy at Macquarie University, Sydney, and a senior research fellow at the Uehiro Centre for Practical Ethics, University of Oxford.






Fake news proliferates on the internet, and it sometimes has consequential effects. It may have played a role in the recent election of Donald Trump to the White House, and the Brexit referendum. Democratic governance requires a well-informed populace: fake news seems to threaten the very foundations of democracy.





How should we respond to its challenge? The most common response has been a call for greater media literacy. Fake news often strikes more sophisticated consumers as implausible. But there are reasons to think that the call for greater media literacy is unlikely to succeed as a practical solution to the problem of fake news. For one thing, the response seems to require what it seeks to bring about: a better informed population. For another, while greater sophistication might allow us to identify many instances of fake news, some of it is well crafted enough to fool the most sophisticated (think of the recent report that the FBI was fooled by a possibly fabricated Russian intelligence report).





Moreover, there is evidence that false claims have an effect on our attitudes even when we initially identify the claims as false. Familiarity – processing fluency, in the jargon of psychologists – influences the degree to which we come to regard a claim as plausible. Due to this effect, repeating urban legends in order to debunk them may leave people with a higher degree of belief in the legends than before. Whether for this reason or for others, people acquire beliefs from texts presented to them as fiction. In fact, they may be readier to accept that claims made in a fictional text are true of the real world than claims presented as factual. Even when they are warned that the story may contain false information, they may come to believe the claims it makes. Perhaps worst of all, when asked how they know the things they have come to believe through reading the fiction, they do not cite the fiction as their source: instead, they say it is ‘common knowledge’ or they cite a reliable source like an encyclopedia. They do this even when the claim is in fact inconsistent with common knowledge.








Image courtesy of Flickr user Free Press/ Free Press Action Fund.

So we may come to acquire false beliefs from fake news. Once acquired, beliefs are very resistant to correction. For one thing, memory of the information and of correction may be stored separately and have different memory decay rates: even after correction, people may continue to cite the false claim because they do not recall the correction when they recall the information. If they recall the information as being common knowledge or coming from a reliable source, knowing that Breitbart or Occupy Democrats is an unreliable source may not affect their attitudes. Even if they recall the retraction, moreover, they may continue to cite the claim.





Finally, even when we succeed in rejecting a claim, the representation we form of it remains available to influence further cognitive processing. Multiple studies (here and here) have found that attitudes persist even after the information that helped to form them is rejected.





All this evidence makes the threat of fake news – of false claims, whether from unreliable news sources, from politicians and others who seek to manipulate us – all the greater, and suggests that education is not by itself an adequate response to it. We live in an age in which information, true and false, spreads virally across the internet in an unprecedented way. We may need unprecedented solutions to the problem.





What are those solutions? I must confess I don’t know. An obvious response would be censorship: perhaps with some governmental agency vetting news claims. While my views on free speech are by no means libertarian, I can’t see how such a solution could be implemented without unacceptable limitations of individual freedoms. Since fake news has an international origin, the sources can’t effectively be regulated, so regulation would have to target individuals who would share the stories on social media. That kind of regulation would require incredibly obtrusive monitoring and unacceptable degrees of intervention, and would place too much power in the regulating agency.








Image courtesy of Flickr user Tyler Menezes.

A better solution might be to utilize the same kinds of psychological research that warn us about the dangers of fake news to design contrary sources of information. The research that shows how people may be fooled by false claims also provides guidance on how to make people more responsive to good evidence. We could utilize this information to design informational nudges, with the aim of ensuring that people are better informed.





This solution itself requires scrutiny. Are such nudges ethical? I think they are, or at least can be. Further, would good information crowd out bad? We aren’t in a position to confidently say right now. What we can say, however, is that fake news is a problem that cries out for a solution. If we can’t solve it, we may find that democratic institutions are not up to the job of addressing the challenges we face today.






Want to cite this post?



Levy, N. (2017). Fake News – A Role for Neuroethics? The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2017/06/fake-news-role-for-neuroethics_17.html

Tuesday, June 13, 2017

Have I Been Cheating? Reflections of an Equestrian Academic



By Kelsey Drewry





Kelsey Drewry is a student in the Master of Arts in Bioethics program at the Emory University Center for Ethics where she works as a graduate assistant for the Healthcare Ethics Consortium. Her current research focuses on computational linguistic analysis of health narrative data, and the use of illness narrative for informing clinical practice of supportive care for patients with neurodegenerative disorders.






After reading a recent study in Frontiers in Public Health (Ohtani et al. 2017) I realized I might have unwittingly been taking part in cognitive enhancement throughout the vast majority of my life. I have been a dedicated equestrian for over twenty years, riding recreationally and professionally in several disciplines. A fairly conservative estimate suggests I’ve spent over 5000 hours in the saddle. However, new evidence from a multi-university study in Japan suggests that horseback riding improves certain cognitive abilities in children. Thus, it seems my primary hobby and passion may have unfairly advantaged me in my academic career. Troubled by the implication that I may have unknowingly spent much of my time violating the moral tenets upon which my intellectual work rests, I was compelled to investigate the issue.






The study in question, “Horseback Riding Improves the Ability to Cause the Appropriate Action (Go Reaction) and the Appropriate Self-control (No-Go Reaction) in Children,” (Ohtani et al. 2017) suggests that the vibrations associated with horses’ movement activate the sympathetic nervous system, leading to improved cognitive ability in children. Specifically, children 10 to 12 years old completed either simple arithmetic or behavioral (go/no-go) tests before and after two 10 minute sessions of horseback riding, walking, or resting. A large percentage of children demonstrated improved performance on the go/no-go tasks (which largely test impulse control) after 10 minutes of riding compared to children who walked or rested. No significant changes were seen in the arithmetic tasks.
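The go/no-go measure the study relies on is simple to make concrete: a trial is correct if the child responds to a "go" cue or withholds a response to a "no-go" cue. Here is a minimal sketch with invented trial data and a plain fraction-correct scoring rule (Ohtani et al.'s actual task and scoring may differ):

```python
# Minimal go/no-go scorer. Trial data and the fraction-correct rule are
# invented for illustration; the study's actual protocol may differ.

def score_go_no_go(trials):
    """trials: list of (cue, responded) pairs; returns fraction correct.

    A trial counts as correct when the child responded to a "go" cue,
    or withheld a response to a "no-go" cue (impulse control).
    """
    correct = sum(
        (cue == "go") == responded
        for cue, responded in trials
    )
    return correct / len(trials)

# Hypothetical before/after riding sessions for one child:
before = [("go", True), ("go", False), ("no-go", True), ("no-go", False)]
after = [("go", True), ("go", True), ("no-go", False), ("no-go", False)]
print(score_go_no_go(before), score_go_no_go(after))  # prints "0.5 1.0"
```

Comparing such before/after scores across riding, walking, and resting groups is the shape of the analysis the study reports; the improvement it found was specifically on these impulse-control trials, not the arithmetic ones.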





"There are many possible effects of human-animal interactions on child development," study author Mitsuaki Ohta suggests. "For instance, the ability to make considered decisions or come to sensible conclusions, which we described in this study, and the ability to appreciate and respond to complex emotional influences and non-verbal communication, which requires further research to be understood" (Frontiers 2017).





So, have the horses I’ve ridden over my life (the number must be nearing 100) enhanced components of my cognitive abilities and perhaps even predisposed me to a career in bioethics? Have they given me unfair advantages over others by heightening my ability to think about and respond to moral problems? When I go to the barn before writing an exam or term paper (or this blog post) am I cheating via the highly controversial and purportedly unjust act of cognitive enhancement? After all, considered decisions and sensible conclusions are the hallmark of bioethics.








Image courtesy of Wikimedia Commons.

Though I initially—perhaps defensively—want to argue “no,” there seems to be a reasonable case that I have enthusiastically engaged in cognitive enhancement, and that my particular means of doing so is quite unjust. Though different in methodology and perhaps in neurologic effect, the discourse around pharmacological cognitive (or affective) enhancement is surprisingly analogous to my circumstance. 





Throughout diverse fields of literature, a host of arguments have been raised regarding the nature and the moral and legal status of non-prescription use of stimulant “study drugs,” especially among young people in academic settings (e.g. Aria 2011; Desantis et al. 2010; Terbeck 2013; Vrecko 2013). Among the objections commonly advanced against this sort of enhancement is the concern that it is unnatural and may result in inauthentic or non-rational choice. In his discussion of the issue, Torben Kjærsgaard writes, “We could risk losing our capacity to pull ourselves together, if we rely on motivation enhancers every time we face hard challenges... we may risk losing touch with ourselves in some sense. Thus, we should wonder how we would be doing if it were not for the enhancers, and ask ourselves how much we would have achieved if it were not for the motivation enhancers” (Kjærsgaard 2015). Now, at first blush riding may not seem to be motivational enhancement, but that is certainly a role it has played in my life. Not only has it become my go-to activity for coping with stress, anxiety, or any undesirable emotion, in much the same way that some rely on drugs, but my household rules growing up also cemented its role as motivator: I was raised with academics as the priority—if I didn’t do well in classes, I didn’t get to ride. Though not ingested orally (rather aurally), my love of horses, if not the riding itself, has certainly influenced my academic motivation significantly. By this measure, my equestrian habit is at least ethically dubious if not entirely concerning. 





Another commonly cited objection to the off-label use of stimulants is medical. In the most extreme cases, overdoses, “The primary clinical syndrome involves prominent neurological and cardiovascular effects… the patient may present with mydriasis, tremor, agitation, hyperreflexia, combative behavior, confusion, hallucinations, delirium, anxiety, paranoia, movement disorders, and seizures” (Spiller 2013). However, even the common “minor” equestrian injuries, which include soft tissue damage, fractures, and concussions (Bixby-Hammet 1990), are not negligible in comparison. I have suffered each category of riding injury multiple times—several broken ribs, a broken arm, an avulsion fracture in my foot, lung and liver contusions, and concussions are among my more certain traumas. Training and competing in the sport considered the most dangerous in the Olympics (van Gilder Cooke 2012) and leading the count in sports-related traumatic brain injuries (Mohney 2016), I’m considered lucky among my equestrian peers even with that list. Undoubtedly, anyone recommending participation in horseback riding could be accused of violating basic nonmaleficence. The risk-benefit ratio does not skew in favor of the saddle, and it seems unimaginable that a medical professional would recommend any intervention with a similar profile.







Image courtesy of Pixabay.

Finally, let’s turn to justice and access. Just access and equal opportunity are core virtues of American medicine, and they often ground strong condemnations of pharmacological enhancement. The general argument is that drugs like Adderall are intended to “restore” normal cognitive (or affective) ability to individuals suffering from attention deficit disorders, and that use by “cognitively normal” individuals provides disproportionate benefit while unfairly disadvantaging those with medical need. Additionally, the high cost of the drugs, especially when purchased illegally, may widen the gap in academic achievement already created by socioeconomic factors (Sirin 2005). With these considerations, I must denounce my horse habit as undeniably unethical. Even more than expensive pills, the financial privilege required to participate in this sport is incontrovertible. Riding lessons cost from $25 to over $100 an hour, and that is just instruction. Horseback riding is simply financially untenable for many, regardless of the purported cognitive benefits. Thus, if one accepts Ohtani’s conclusions, my equestrian activities have granted me access to a privileged means of enhancing cognition. I have improved my mental faculties without the effort and intention lauded by Kantian morality, and with disregard for the virtue of justice valued by my society (Timmons 2013).





The conclusion that by participating in a sport that I love, I may have inadvertently acted immorally by enhancing certain cognitive capacities is puzzling and likely provocative to many. My moral intuition suggests that since I neither intended to enhance my abilities, nor was I even aware that this outcome was possible, I did not act unfairly. There also seems to be something about the nature of the act, perhaps that it is physical instead of chemical, that excludes it from being one of the “immoral enhancements” denounced by bioconservative theorists. If we do deem this activity to be equestrian cognitive enhancement, why would this riskier, less accessible, and equally addictive method be less egregious than biomedical means? The ready parallels between cognitive enhancement via riding and the ethical issues of off-label nootropic drug use reveal that we have much more to discuss about what does and does not constitute enhancement. Perhaps after a bit more time in the saddle I’ll be able to come to a sensible conclusion.



References 



Arria, A. M. (2011) Compromised sleep quality and low GPA among college students who use prescription stimulants nonmedically. Sleep Medicine 12(6): 536–537. Available here.



Bixby-Hammett, D., and Brooks, W. H. (1990) Common Injuries in Horseback Riding: A Review. Sports Medicine 9(1): 36-47.



DeSantis, A., Noar, S. M., and Webb, E. M. (2010) Speeding through the frat house: A qualitative exploration of nonmedical ADHD stimulant use in fraternities. Journal of Drug Education 40(2): 157-171. Available here.



Frontiers. (2017, March 2). Horse-riding can improve children's cognitive ability: Study shows how the effects of horseback riding improve learning in children. ScienceDaily. Retrieved here.



Kjærsgaard, T. (2015) Enhancing Motivation by Use of Prescription Stimulants: The Ethics of Motivation Enhancement. AJOB Neuroscience 6(1): 4-10.



Mohney, G. (2016, April 1) Horse Riding is Leading Cause of Sport-Related Traumatic Brain Injuries, Study Finds. ABC News. Retrieved here.



Ohtani, N., Kitagawa, K., Mikami, K., Kitawaki, K., Akiyama, J., Fuchikami, M., Uema, H., and Ohta, M. (2017) Horseback Riding Improves the Ability to Cause the Appropriate Action (Go Reaction) and the Appropriate Self-control (No-Go Reaction) in Children. Frontiers in Public Health. Published online 6 February 2017 here.



Spiller, H. A., Hays, H. L., and Aleguas, A. Jr. (2013) Overdose of drugs for attention-deficit hyperactivity disorder: clinical presentation, mechanisms of toxicity, and management. CNS Drugs 27(7): 531-543.



Terbeck, S. (2013) Why students bother taking Adderall: Measurement validity of self-reports. AJOB Neuroscience 4(1): 20-22.



Timmons, M. (2013) Moral Theory: An Introduction. Lanham, Md.: Rowman & Littlefield Publishers.



van Gilder Cooke, S. (2012, July 28) Equestrian Eventing: The Olympics’ Most Dangerous Sport? Time. Retrieved here.



Vrecko, S. (2013) Just How Cognitive is “Cognitive Enhancement”? On the Significance of Emotions in University Students’ Experiences with Study Drugs. AJOB Neuroscience 4(1): 4-12.



Want to cite this post?



Drewry, K. (2017). Have I Been Cheating? Reflections of an Equestrian Academic. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/06/have-i-been-cheating-reflections-of.html