
Tuesday, February 13, 2018

International Neuroethics Society Annual Meeting Summary: Ethics of Neuroscience and Neurotechnology




By Ian Stevens






Ian is a fourth-year undergraduate student at Northern Arizona University. He is majoring in Biomedical Sciences with minors in Psychological Sciences and Philosophy to pursue interdisciplinary research on how medicine, neuroscience, and philosophy connect.



At the 2017 International Neuroethics Society Annual Meeting, an array of neuroscientists, physicians, philosophers, and lawyers gathered to discuss the ethical implications of neuroscientific research in addiction, neurotechnology, and the judicial system. A panel consisting of Dr. Frederic Gilbert of the University of Washington, Dr. Merlin Bittlinger of the Universitätsmedizin Berlin – Charité, and Dr. Anna Wexler of the University of Pennsylvania presented their research on the ethics of neurotechnologies.






Dr. Gilbert discussed the development of neurotechnologies that use artificial intelligence (AI) to operate brain-computer interfaces (BCI), such as the seizure advisory system, which is implanted invasively in the brain for the treatment of drug-resistant epilepsy (1). He offered three main reasons for the ethical examination of such developing neurotechnologies. First, these devices could provide “neuro-signatures” that could aid in the detection of addiction and sexual urges, issues that challenge our notions of privacy and autonomy and that are being explored with other technologies (2, 3). Second, these devices, like other invasive neurotechnologies, have been shown to cause or be associated with personality changes; we therefore need to understand how they might affect a patient’s sense of self and identity (4). It seems concerning to enter a treatment as one person but leave as another, and balancing the risks and benefits of treatment when the patient before surgery might not be the same person afterward challenges conventional standards of risk and benefit. Finally, the field of AI-operated BCIs is an ambiguous one, with the pace of development of predictive brain implants exceeding our understanding of how they will affect us (5).





Expanding on his second justification, Dr. Gilbert discussed his research on how these artificially intelligent devices can alter subjects’ perception of themselves. He used qualitative data from interviews to assess the concern that BCIs alter personality, sharing the stories of a 52-year-old woman receiving an AI BCI for epilepsy and a younger female student also being treated for epilepsy with an AI BCI (6). The 52-year-old woman stated that, because of the implanted AI device, she felt like she could do anything and nothing could stop her (an AI BCI-induced postoperative distorted perception of her capacities).






An open brain-computer interface (BCI) board.

(Image courtesy of Wikimedia.)

This contrasted with the student, who experienced postoperative symptoms of depression because she felt the AI device forced her to confront the fact that she was epileptic (an AI BCI-induced drastic rupture in identity leading to iatrogenic harms). These dialogues have led Dr. Gilbert to argue for a distinction between restorative and deteriorative personality changes associated with BCIs (what he calls “self-estrangement”) (7).



This distinction is helpful for two reasons. First, it helps confirm that a patient’s sense of identity can change in relation to the AI BCI they are treated with; second, it suggests that certain kinds of patients may be incompatible with BCI treatment. As with pharmacological treatments for mental health, some patients might not benefit because of the deleterious identity changes associated with their AI BCI treatment. In conclusion, Dr. Gilbert advised that those who have not come to accept their neurologic disease should not undergo AI BCI treatment, out of concern that the device could cause a destructive change in their core personality.





Dr. Bittlinger, whose current work focuses on the ethical, legal, and social aspects of psychiatric neurosurgery, presented his research on the ethical evaluation of innovative research involving unknown risk, using the example of deep brain stimulation (DBS) for Alzheimer’s Disease (AD). Dr. Bittlinger emphasized the global burden of AD, for which no cure is in sight. With only a few drugs available for treating the symptoms of AD, there is an obvious need for innovative research. He argued that using DBS as an innovative, or currently unconventional, treatment should be examined ethically before we proceed further down that road. To support this, Dr. Bittlinger quoted the Declaration of Helsinki (8) and its sentiments on the need for patients to be autonomous beings and the importance of consent in research. The notion that the risks undertaken by patients should be low and minimal is not addressed by consent alone, however, and DBS is in the highest risk class of treatments being explored for AD because of its invasive nature. The Declaration of Helsinki points to this importance, stating that “individuals must not be included in a research study that has no likelihood of benefit for them unless it is intended to promote the health of the group represented by the potential subject, the research cannot instead be performed with persons capable of providing informed consent, and the research entails only minimal risk and minimal burden” (9). While all treatments in clinical trials strive for this, the innovative nature of DBS for AD poses large risks for unknown benefits. While AD can be debilitating to the patient, the risk associated with invasive implantation may be too great. Because of this, and because clinical trials may include decision-makers who are not fully autonomous (the Alzheimer’s population), Dr. Bittlinger stressed the need for further evidence of the long-term efficacy of DBS.








Image courtesy of Pixabay.

Dr. Bittlinger’s take-home message was that “neuroethicists should encourage researchers to see methodological rigor not only as a liability but as an asset.” He advocates a form of methodological beneficence: while trials normally aim to minimize maleficence, questioning the implicit structure of research on ethical grounds could provide benefits in the highest-risk realms of research. After an extensive literature review, Dr. Bittlinger drew the important distinction between studies with no unknown risks and studies with no knowledge of their unknown risks (10). This uncertainty about the unknowns is the basis for Dr. Bittlinger’s question of exactly how much pre-clinical data is required to justify clinical interventions with DBS for Alzheimer’s disease. In line with this methodological beneficence, and using probability models, Dr. Bittlinger finished his talk by stressing the need for neuroscientists to prioritize confirmatory clinical trials over exploratory ones in the early stages of research.





Finally, Dr. Wexler presented on the use of brain stimulation in a variety of health and wellness clinics around the United States. Her work focused on transcranial direct current stimulation (tDCS) and how current studies have suggested its effectiveness for treating depression and chronic pain and for cognitive enhancement (though there is still debate in the literature about the efficacy of tDCS). She also noted that tDCS has a large presence in the DIY (do-it-yourself) community, where people fashion their own devices with batteries and sponges, although it has become more common for tDCS devices to be obtained as consumer products (11). Her fascination with the field came from the fact that two groups use these devices: researchers (a very controlled setting) and average consumers (a very uncontrolled setting). What struck her, however, was that a third group, clinicians, was also using tDCS devices as a means of treatment for their patients (a semi-controlled setting). This semi-controlled setting interested Dr. Wexler because it is fraught with ethical concerns distinct from the well-known DIY concerns, including the possible off-label use of tDCS in such a setting.



The semi-controlled environment of the clinic raises questions of clinical bioethics. How should these devices be regulated, and how should they be understood as treatment options? Should they be approved only as a clinical treatment for disease, or also as an off-label procedure for enhancement?






Image courtesy of Pexels.

She defined off-label use as the use of a device or drug for a purpose other than the one for which it was approved, citing as an example the use of trazodone, a drug approved to treat depression, for alcohol dependency (12). She then discussed the open-ended, semi-structured interviews she conducted with health care providers who offer tDCS services. Although the analyses are still underway, she shared some insights so far: tDCS use has been tied to complementary and alternative medicine, the pricing of such treatments varies by provider, and treatment has focused on depression, anxiety, and ADD. Some practitioners thought that tDCS was FDA approved (when in fact it was not), and those offering tDCS ranged from people with an MD or a PhD to people with no clinical background. Regardless of the legal distinctions between regulating the sale of tDCS devices and regulating their use, the ethical question she left us with is a pressing one: should these devices be allowed to be used in clinics without supporting research?





The developing neurotechnologies are broad in their applications, but there are common threads of ethical reflection that Dr. Gilbert, Dr. Bittlinger, and Dr. Wexler highlighted. As with all new treatment options, our outlook as scientists, philosophers, lawyers, and ethicists should be critical, although not pessimistic. Neurotechnologies look to be promising treatment options for many chronic neurological problems; however, the side effects, and therefore the risk-benefit trade-offs, are unknown. The “how” question of connecting the human brain with technology has been solved on some levels; what this connection means ethically still needs to be unraveled.







References




1. Mark J. Cook et al., “Prediction of Seizure Likelihood with a Long-Term, Implanted Seizure Advisory System in Patients with Drug-Resistant Epilepsy: A First-in-Man Study,” The Lancet. Neurology 12, no. 6 (June 2013): 563–71, https://doi.org/10.1016/S1474-4422(13)70075-9.





2. Tamara Denning, Yoky Matsuoka, and Tadayoshi Kohno, “Neurosecurity: Security and Privacy for Neural Devices,” Neurosurgical Focus 27, no. 1 (July 1, 2009): E7, https://doi.org/10.3171/2009.4.FOCUS0985.





3. Frederic Gilbert, “A Threat to Autonomy? The Intrusion of Predictive Brain Implants,” Ajob Neuroscience 6, no. 4 (October 2, 2015): 4–11, https://doi.org/10.1080/21507740.2015.1076087.





4. Frederic Gilbert et al., “I Miss Being Me: Phenomenological Effects of Deep Brain Stimulation,” AJOB Neuroscience 8, no. 2 (April 3, 2017): 96–109, https://doi.org/10.1080/21507740.2017.1320319.





5, 6, 7. Frederic Gilbert et al., “Embodiment and Estrangement: Results from a First-in-Human ‘Intelligent BCI’ Trial,” Science and Engineering Ethics, 2017, https://doi.org/10.1007/s11948-017-0001-5.





8. World Medical Association, “WMA Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects,” accessed December 28, 2017, https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/.





9. World Medical Association, “WMA Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects.”





10. John Noel M. Viaña, Merlin Bittlinger, and Frederic Gilbert, “Ethical Considerations for Deep Brain Stimulation Trials in Patients with Early-Onset Alzheimer’s Disease,” Journal of Alzheimer’s Disease: JAD 58, no. 2 (2017): 289–301, https://doi.org/10.3233/JAD-161073.





11. Anna Wexler, “The Social Context of ‘Do-It-Yourself’ Brain Stimulation: Neurohackers, Biohackers, and Lifehackers,” Frontiers in Human Neuroscience 11 (2017), https://doi.org/10.3389/fnhum.2017.00224.





12. Letizia Bossini et al., “Off-Label Uses of Trazodone: A Review,” Expert Opinion on Pharmacotherapy 13, no. 12 (August 2012): 1707–17, https://doi.org/10.1517/14656566.2012.699523.






Want to cite this post?




Stevens, I. (2018). International Neuroethics Society Annual Meeting Summary: Ethics of Neuroscience and Neurotechnology. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/02/international-neuroethics-society_10.html
