
Tuesday, December 22, 2015

The freedom to become an addict: The ethical implications of addiction vaccines


by Tabitha Moses 





Tabitha Moses, M.S., is Administrative and Research Coordinator at Lehman College, CUNY, as well as a Research Affiliate at the National Core for Neuroethics at the University of British Columbia. Tabitha earned her BA in Cognitive Science and Philosophy and MS in Biotechnology from The Johns Hopkins University. She has conducted research in the areas of addiction, mental illness, and emerging neurotechnologies. She hopes to continue her education through a joint MD/PhD in Neuroscience while maintaining a focus on neuroethics.





The introduction of “addiction vaccines” has brought with it a belief that we can prevent addiction before a person has ever even tried a drug. Proponents of addiction vaccines hold that they will:


  1. prevent children from becoming addicted to drugs in the future, 

  2. allow addicts to easily and safely stop using drugs, and 

  3. potentially lower the social and economic costs of addiction for society at large.



However, it is critical to be aware of the limitations and risks - both ethical and physical - of introducing these vaccines into mainstream medical care.








A child receives a vaccine in the 1930s


Before delving deeper into this discussion, we must understand that the term addiction vaccine is a misnomer. The vaccine itself is against a specific drug or substance, not against addiction in general. Currently these vaccines have been produced for nicotine, cocaine, and heroin. (See a previous blog post on cocaine vaccines here). While the different types of addiction vaccines have varying mechanisms, the end result is that an individual who has received the vaccine against a specific substance - cocaine, for instance - will no longer feel any of the effects of the substance that are typically associated with the high. As a result, the idea is that a person can never become addicted to a substance that produces none of the physical or emotional feelings usually associated with it. It is also important to understand what is meant by addiction here. While there is still much controversy surrounding the underlying cause and neurological underpinnings of addiction, in general, addiction can best be described as “an inability to control use [of a substance or behavior], and that’s often best described as continued use despite potently negative consequences.” For more on the topic, see a previous post by Dr. Mike Kuhar on addiction here and an interview with Dr. Steve Hyman here.







It is, however, not an “addiction vaccine.” The idea is that preventing an individual from feeling the highs associated with a certain substance will also prevent addiction to that substance. That said, if a nicotine-vaccinated individual tried a cigarette because she was craving a certain release and felt no sensation upon having one, there is nothing stopping her from finding a different high. As such, the vaccine does not stop all addictions per se.





When the news broke about these types of vaccines, many parental groups extolled the potential positive impact the vaccines would have on their children. Some parents and even researchers suggested adding these vaccines to the regular pediatric vaccine schedule. However, while addiction indisputably has a negative impact on the individual and his or her family and friends, this vaccine is not in the same category as those in the regular vaccine regimen. Increasing evidence suggests that addiction is not just about the substance at hand, but about many other factors. The majority of addicts become addicted to substances as a result of deeper underlying mental health issues and exposure to early-life trauma and adversity. Removal of one substance will not remove the underlying problems or prevent addiction to other substances or behaviors. Frequently, without the appropriate care, an addict may stop using a specific drug only to replace it with another drug or behavior (substitution). Therefore, by vaccinating children against cocaine we are only stopping them from feeling the effects of cocaine, not from becoming addicts.







Chart comparing past-year psychiatric disorders between individuals with and without

alcohol and drug use disorders, created by the author from data drawn from Stinson et al.


Additionally, while the deadly viral and bacterial infections against which we usually vaccinate (such as polio, measles, and tetanus) have no known benefits, certain drugs do have potential benefits (for instance, nicotine, LSD, MDMA, and marijuana). If we permanently block the ability of the body to respond to these drugs, this could have future detrimental effects for the individual. For instance, if these drugs are developed for medical purposes the person who was vaccinated will not be able to receive the benefits of the drugs.





Furthermore, the question of autonomy falls heavily into play here. While it seems absurd to think that an individual might want the right to become addicted to a drug, it is less outrageous to believe that he or she might want the right to know firsthand what the effects of the drug are. Many people, particularly artists and others who work in creative fields, cite numerous professional benefits to using certain illicit drugs occasionally. And keep in mind, not everyone who tries a drug will become addicted to it. While others may not agree with these methods, and in particular with the legal implications, it would be difficult to argue that it is ethical to take this option away from a person before she is able to understand it.





If these vaccines are not to be given to children, the question that naturally follows is: when would it be appropriate to provide them to adults? For adults, the biggest concern with the vaccine is that of fully informed consent. To be able to make a fully informed decision, a person should be able to understand all implications. This is not possible unless she has experienced being under the influence of a certain drug. Unfortunately, this leads to its own set of ethical quandaries. Should we insist an individual sample a drug to know the “high” prior to receiving the vaccine? It seems completely unethical to insist that an individual, who is asking not to become addicted to a drug, try the very substance she actively wants to avoid. Then again, it could also be unethical to remove a person’s ability ever to feel a certain way without her full awareness of the substance’s effects.







Image by Senior Airman Areca Wilson


However, we must ask what level of informed consent is necessary: we do not require a person to experience a disease prior to receiving a vaccination that prevents that disease. Do the effects of drugs such as cocaine and nicotine fall into this category? While there may be no perfect situation in which to provide the vaccine to people, there may be opportunities where it could be used as a therapeutic tool in concert with intensive therapy and treatment. In these cases the person receiving the vaccine should be sober and not in the severe stages of withdrawal at the time of vaccination. There does not appear to be any situation in which it is appropriate to provide the vaccine non-consensually, be it to children or to individuals unable to consent.





The commonly overlooked question in much of this discussion is why individuals become addicted to certain substances in the first place. Addiction is not merely a physiological response to a substance; some individuals, for example, become addicted as a result of deeper underlying mental health issues. Removing one substance will not remove those underlying problems or prevent addiction to other substances or behaviors. Therefore, these addiction vaccines would likely not reduce addiction in general, but only addiction to the substance for which each vaccine is crafted.





Addiction vaccines have the potential to mask the issue and lead to potentially larger problems. In their current form, they are by no means a cure, and in most circumstances they are unlikely to be appropriate. Nonetheless, these vaccines could play an important role in addiction treatment if administered with proper consent in appropriate situations. While these vaccines have the potential for good, the focus of addiction research should be on the underlying processes that lead to initiating and then continuing drug use as well as factors that lead to relapse.



Want to cite this post?



Moses, T. (2015). The freedom to become an addict: The ethical implications of addiction vaccines. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/12/the-freedom-to-become-addict-ethical.html

Tuesday, December 15, 2015

Combating neurohype

by Mo Costandi




Mo Costandi trained as a developmental neurobiologist and now works as a freelance writer based in London. His work has appeared in Nature, Science, and Scientific American, among other publications. He writes the Neurophilosophy blog, hosted by The Guardian, and is the author of 50 Human Brain Ideas You Really Need To Know, published by Quercus in 2013, and Neuroplasticity, forthcoming from MIT Press. Costandi also sits on the Board of Directors of the International Neuroethics Society.




In 2010, Judy Illes, president-elect of the International Neuroethics Society, argued that neuroscientists need to communicate their research to the general public more effectively. Five years on, that message is still pertinent - and perhaps even more so.






Since then, public interest in neuroscience has continued to grow, but at the same time, coverage of brain research in the mass media is often inaccurate or sensationalist, and myths and misconceptions about the brain seem to be more prevalent than ever before, especially in areas such as business and education.





Why is this? And what can be done to remedy the situation? A handful of studies into how neuroscience is reported by the mass media and perceived by the public provide some answers – and reiterate the point made by Illes five years ago.





Several years ago, for example, researchers at University College London analysed nearly 3,000 articles about neuroscience research published in the three best-selling broadsheet and the three best-selling tabloid newspapers in the UK between 1st January 2000 and 31st December 2010.





They concluded that “research was being applied out of context to create dramatic headlines, push thinly disguised ideological arguments, or support particular policy agendas,” and that “neuroscientists should be sensitive to the social consequences neuroscientific information may have once it enters the public sphere.”







Photo courtesy of Pexels


More recently, researchers in the Netherlands examined the reporting on neuroscience in Dutch newspapers. They found that the quality of the coverage depended largely on the time a paper was released, its topic, and the type of newspaper, and that the accuracy of reporting tended to be low, with free and popular newspapers in particular tending to provide a minimal amount of detail.





Researchers sometimes criticize journalists for reporting on neuroscience inaccurately, and press officers at academic institutions and scientific journals also draw criticism for over-hyped press releases, which are often the source of bad reporting. In one recent case, a correlation between high-strength marijuana and white matter integrity was widely reported as a causal relationship (compare the paper, the press release, and the subsequent media reports). But as another recent study showed, researchers are not entirely faultless, as they sometimes contribute to these processes by providing their press office with exaggerated information about their findings.





Accordingly, there are a number of things that researchers can do to counteract misrepresentations and misunderstanding of neuroscience. Paramount among these is communicating their own work and that of others as accurately as possible, without overstating their interpretation of any findings, while also emphasizing any limitations and caveats the research might have.





This mostly refers to interactions with journalists who are reporting on new findings, but growing numbers of researchers are taking to social media – especially blogs and Twitter – as a way of engaging directly both with the general public and with each other.





By disseminating accurate information, researchers may help improve the quality of reporting about neuroscience, and help to stem the tide of misunderstanding about the brain. And it could be argued that they have a moral responsibility to do so.





Want to cite this post?



Costandi, Mo (2015). Combating neurohype. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/12/combating-neurohype.html

Tuesday, December 8, 2015

Getting aHead: ethical issues facing human head transplants

By Ryan Purcell






Gummy bear head transplant, courtesy of flickr user Ella Phillips


In a widely circulated Boston Globe editorial this summer, Steven Pinker told bioethicists to “get out of the way” of scientific progress. There is abundant human suffering in the world today, he said, and the last thing we need is a bunch of hand-wringing to slow down efforts to attenuate or even eliminate it. The prospect of head transplantation, however, has the potential to make us all a bit more appreciative of our local bioethicists. Even if there were not any technical issues (of which there are, of course, plenty), coming to terms with the muddier personal and societal issues inherent in a procedure such as this could take quite a while. Nevertheless, Dr. Sergio Canavero is not planning to wait around and wants to perform a human head transplantation by the end of 2017. Are we ready?




Dr. Jordan Amadio, an Emory neurosurgery resident and co-founder of Neurolaunch, led a discussion on the topic at the Emory Center for Ethics Neuroethics Program’s November “Neuroethics, Neuroscience, and the News” series. As a neurosurgeon he was able to shed light on the technical aspects of Dr. Canavero’s proposal to a full room of students and faculty members from across the humanities and sciences (the topic drew quite a bit of interest on campus). In short, Dr. Amadio was skeptical. Unlike peripheral nerves, spinal nerves do not readily regenerate (but see this ref). There has been an enormous effort in neuroscience and physiology to understand how to regenerate spinal nerves. If this problem could be solved, spinal cord injuries would be less likely to lead to debilitating paralysis. However, Canavero believes he is ready to move beyond this prodigious hurdle using ultra-sharp instruments to cleanly sever the spinal cord with minimal tissue damage (unlike the traumatic break due to a car accident, for example) and using “fusogens” like polyethylene glycol (also commonly used as a clinical laxative) to fuse the donor and recipient spinal cord segments. Dr. Amadio’s conclusion on the science was clear: there is no strong evidence that Canavero’s “Gemini spinal cord fusion” protocol will work (as an aside, the protocol was published as an editorial in Surgical Neurology International).




Undoubtedly, researchers will continue to explore how to regenerate spinal nerves. So if the procedure were technically feasible, should it be attempted? There are huge risks involved, even for a terminally ill prospective patient. As Dr. Hunt Batjer, the president of the American Association of Neurological Surgeons, commented to CNN, “there are a lot of things worse than death.” Debilitating, unmanageable neuropathic pain, for example, is a real possibility. Who knows whether one individual’s brain and spinal cord could communicate effectively with another’s spinal cord and body? NYU medical ethicist Dr. Arthur Caplan notes that “The brain is not contained in a bucket—it integrates with the chemistry of the body and its nervous system.” He calls the idea of head transplantation “rotten scientifically and lousy ethically.” Researchers are only beginning to understand the ways in which peripheral organs such as the gut (our “second brain”) and the microbiome within it affect brain function and mood. While this remains an emerging area, it is becoming clear that – no matter how sharp the knife – one cannot cleanly separate the brain (or the mind) from the body.






Will head transplants waste potential organ donations?


There are also considerable concerns related to justice and fairness. The donor for such a procedure would have to be a young, healthy individual who likely died of a traumatic brain injury but whose body was in pristine condition for transplant. There is a considerable opportunity cost here for the thousands of patients waiting for organ donations. In fact, nearly two dozen people die in the US every day while waiting for an organ donation. This donor body could end up providing many critical organs, which would all be lost should Dr. Canavero’s procedure fail. At what probability of success would this opportunity cost be acceptable? At first glance, this seems like the opposite of the trolley problem – should you try to save one person while putting five or more at higher risk of death? I’d also add, who are you saving? The body or the head?




What would this surgery mean for personal identity? Apparently Dr. Canavero believes that the body is simply a vehicle for the brain and that we should not let deteriorating bodies limit our lifespans. This seems to be an extreme view that glosses over the role of the body (below the brainstem) in an individual’s sense of self. To quote Frederik Svenaeus from his 2012 article on organ transplantation and personal identity, “The self becomes attuned through its bodily being, and such attunement is necessary for all forms of human understanding (that we know about).” In other words, even if your brain were kept alive, you might not be. Granted, a patient with a severe degenerative disease (like Canavero’s first volunteer) who has a pressing need to extend his life may not be concerned about identity issues, but this concept should give pause to those hoping that in the future they could simply trade in their bodies as they start to fail. Not to mention, the definition of self and identity may differ across cultures.







Head transplants: 1960s science fiction come to life

Dr. Canavero understands that the idea of head transplantation is on the cutting edge (so to speak) and that it will likely make many people uncomfortable. Perhaps this is why he has launched a personal PR campaign including a TEDx talk. Yet he does not seem to fully appreciate the ethical implications of his proposed procedure. He told The Guardian that in science, “what can be done, will be done” and, matter-of-factly, that “Cloning will come into play,” presumably to get around those nasty whole-body tissue rejection issues. Somewhat unsurprisingly, Dr. Canavero has lost the support of his colleagues in Italy (where the surgery is now illegal) and will be moving to Harbin, China, seeking a less constraining regulatory climate. Indeed, he claims in the same article that the choice to participate in the surgery should be up to the patient. The surgery will be expensive, but Canavero claims there is great enthusiasm and fundraising potential from the ultra-rich, presumably stoked by his talk of life-extension.




For his part, Dr. Canavero does have some mainstream support. Dr. Michael Sarr, a retired Mayo Clinic surgeon and Editor-in-Chief of the academic journal Surgery, was quoted in The Guardian as saying, “I’m confident that at least in theory the operation will work. The science is there.” Canavero, too, dismisses his critics, claiming that all scientific revolutionaries were dismissed early on. So, are we ready for this? It may not matter: ethicists, surgeons, and the world will just have to watch as Dr. Canavero continues to push forward, full speed “a-head.”





Want to cite this post?



Purcell, Ryan (2015). Getting aHead: ethical issues facing human head transplants. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/12/getting-ahead-ethical-issues-facing.html

Tuesday, December 1, 2015

Don’t miss our Special Issue of AJOB Neuroscience: The Social Brain


By Katie Strong, PhD





If you haven’t already, be sure to read the 6.3 Issue of AJOB Neuroscience, our special issue on The Social Brain guest edited by Dr. Jean Decety. The issue centers on the biological, neuroscientific, and clinical evidence for human social cognition, along with the philosophical and ethical arguments for modifying morality and social emotions and behaviors, such as empathy, trust, and cooperativity.





The first target article by Jean Decety and Jason M. Cowell entitled “Empathy, Justice, and Moral Behavior” argues that despite the importance of empathy for driving our social lives, forging necessary social bonds, and making complex decisions, empathy alone is not enough when it comes to moral resolutions and judgments. While empathy underpins cooperativity and the formation of social bonds, it has also evolved to promote bias and in-group social preferences. The target article provides evidence that empathy does not always lead to moral decisions and often favors in-group members over out-group members. Decision making can be biased to favor relatives or a single individual over many people, and for that reason reasoning must accompany empathy. “Empathy alone is powerless in the face of rationalization and denial. But reasoning and empathy can achieve great things,” state the authors at the conclusion of the paper.








The second target article that focuses heavily on moral judgment is called “How the Mind Matters for Morality”. Authors Alek Chakroff and Liane Young discuss how the intentions behind an action guide moral judgment. The authors report that when judging others, intent matters. For example, an accidental harm committed with innocent intentions is deemed more forgivable than a malicious intention that has no consequence. But does intent matter when it comes to actions that have no victims, such as purity violations (incest or ingesting taboo foods)? According to a study cited in the target article, we do not weigh the intentions behind harmful acts and impure acts identically: participants judged accidental harms less morally wrong than accidental incest, and the intent to harm more morally wrong than the intent to commit incest. The authors conclude that a variety of controversial topics in bioethics involve what many consider purity violations, such as suicide, cloning, sexual reassignment, and human enhancement. While many condemn these acts because they are harmful to others, we may also be averse to them because we regard them as purity violations. Understanding how these contentious acts are judged could reshape certain aspects of many bioethical debates.



The three remaining target articles discuss “the social brain” with specific respect to psychopaths, children, and caretakers. In “A Neural Perspective of Immoral Behavior and Psychopathy,” Tasha Poppa and Antoine Bechara review the evidence in the literature that ties the emotional deficits and immoral behavior characteristic of psychopathy to dysfunction in specific neural pathways. The authors also discuss contributing factors beyond brain abnormalities that may lead individuals to immoral behavior, including genetic factors, child abuse, and certain environmental stressors. Although rehabilitative treatments for psychopaths have not proven successful, further studying the origin of psychopathic behaviors may yield more personalized and more effective treatments for modifying behavior. “Social Support Can Buffer Against Stress and Shape Brain Activity” by Camelia E. Hostinar and Megan R. Gunnar focuses on the neural mechanisms behind social support for stress, with an emphasis on how this impacts children. Children benefit immensely with regard to interpersonal skills and even brain development when raised in an environment that offers parental support. For that reason, the authors suggest a number of social support systems for parents and child-care workers that would encourage positive environments for children (with a positive impact on their brains), including longer paid maternity and paternity leave and home-visitation programs for at-risk families (advocating for these techniques over other neurointerventions such as oxytocin nasal spray).



The final target article, “Improving Empathy in the Care of Pain Patients” by Philip L. Jackson, Fanny Eugene, and Marie-Pier B. Tremblay, cites studies indicating that healthcare workers are not as perceptive or empathic as non-experts when it comes to the pain of patients and therefore risk underestimating levels of pain. The authors provide a number of reasons for this behavior, including gender and race bias, self-preservation against mental exhaustion, and desensitization following years of exposure. Despite this lapse of empathy, the authors are wary of interventions – such as transcranial direct-current stimulation (tDCS) or oxytocin nasal sprays – that would be designed to improve or enhance the empathy of healthcare workers. Even noninvasive behavioral interventions such as training programs would need to be further studied to determine their impact on physicians’ mental wellbeing. Improving empathy by any means, though, is fraught with ethical concerns if “we are aiming for ‘suprahuman empathy’ by labeling as a deficit what should be seen as a healthy empathic response given the situation,” the authors remind us.





The 6.4 Issue of AJOB Neuroscience will be hot off the presses soon and will include two target articles: “An ethical evaluation of stereotactic neurosurgery for anorexia nervosa” by Sabine Müller et al. and “A Threat to Autonomy? The Intrusion of Predictive Brain Implants” by Frederic Gilbert. Check back with The Neuroethics Blog for a press release and synopsis of the articles!



References



(1) Young, L.; Saxe, R. When Ignorance Is No Excuse: Different Roles for Intent across Moral Domains. Cognition 2011, 120 (2), 202–214.



(2) Cheng, Y.; Lin, C.-P.; Liu, H.-L.; Hsu, Y.-Y.; Lim, K.-E.; Hung, D.; Decety, J. Expertise Modulates the Perception of Pain in Others. Current Biology 2007, 17 (19), 1708–1713.



(3) Decety, J.; Yang, C.-Y.; Cheng, Y. Physicians down-Regulate Their Pain Empathy Response: An Event-Related Brain Potential Study. NeuroImage 2010, 50 (4), 1676–1682.



Want to cite this post?







Strong, K. (2015). Don’t miss our Special Issue of AJOB Neuroscience: The Social Brain. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/12/dont-miss-our-special-issue-of-ajob.html


Tuesday, November 24, 2015

Widening the use of deep brain stimulation: Ethical considerations in research on DBS to treat Anorexia Nervosa

by Carolyn Plunkett






Carolyn Plunkett is a Ph.D. Candidate in the Philosophy Department at The Graduate Center of City University of New York. She is also an Ethics Fellow in The Bioethics Program at the Icahn School of Medicine at Mount Sinai, and a Research Associate in the Division of Medical Ethics at NYU Langone Medical Center. Carolyn will defend her dissertation in spring 2016, and, beginning July 2016, will be a Rudin Post-Doctoral Fellow in the Divisions of Medical Ethics and Medical Humanities at NYU Langone Medical Center. 







This post is part of a series that recaps and offers perspectives on the conversations and debates that took place at the recent 2015 International Neuroethics Society meeting.





Karen Rommelfanger, founding editor of The Neuroethics Blog, heard a talk I gave on deep brain stimulation (DBS) at Brain Matters! 3 in 2012. Three years later, she heard a brief synopsis of a paper I presented a few weeks ago at the International Neuroethics Society Annual Meeting. Afterward, she came up to me and said, “Wow! Your views have changed!” I had gone from being wary about using DBS in adults, much less minors, to defending its use in teens with anorexia nervosa. She asked me to write about this transition for this blog, and present my recent research.






I was introduced to DBS in a philosophy course four years ago and was immediately captivated by questions like: Am I really me with ongoing DBS? Are my emotions really mine? What is an “authentic self,” or an “authentic emotion,” anyway? I wrote a term paper on the topic and soon after presented it at a couple of conferences. I wrote a post for the Neuroethics Women Leaders blog and published an open peer commentary in AJOB Neuroscience. It was a hot topic, as a couple of other authors were asking the same questions at around the same time.





But when I presented my ideas to an audience at Brain Matters! 3, in Cleveland, Ohio, I was taken aback by one participant’s question. A neurosurgeon asked what she should tell her patients, and their families, when they request DBS to treat refractory depression or OCD. “Should I tell them no because philosophers are concerned that their emotions might not be authentic?” she wondered. I think I gave a wishy-washy answer about informed consent. It did not satisfy my interlocutor, or me.







Schematic of the deep brain stimulation setup


Her question has stuck with me. It has forced me to reexamine the context in which I think about DBS, from the purely philosophical to the bioethical. Put another way, the neurosurgeon’s concern forced me to reconsider my earlier ideas about DBS—my purely philosophical questions about the nature of identity and emotions, and the relationship between them—and put them in conversation with more concrete bioethical questions about how DBS is actually used and by whom, and how it should be used, including how access to DBS and clinical research could be made more equitable and respectful. Her question drove home the point that bioethicists must take seriously the lived experiences of patients, families, clinicians, researchers, and others involved in the DBS process. These participants are not just characters in a thought experiment, as an analogy between DBS and Nozick’s infamous experience machine would suggest.






So I turned my focus from DBS itself to the experiences of those with the conditions that DBS aims to treat, like anorexia nervosa, depression, and addiction. Do they experience authentic selves or authentic desires? This shift ultimately drove me toward my dissertation project, which defends a subjectivist conception of normative reasons—or, roughly, the view that what we have reason to do is rooted in who we are, what we value, and what we desire. (There is a time and place for philosophy, after all.)





A few more classes and papers further propelled my changing perspective, leading me to my current research agenda on ethical issues in research on DBS for the treatment of anorexia nervosa. The central question I’m now investigating: Given that researchers are testing the efficacy of DBS for anorexia nervosa, should we consider enrolling adolescents in those trials?





Anorexia nervosa, or AN, is characterized by a distorted body image, excessive dieting and exercise that leads to severe weight loss, and a pathological fear of becoming fat. Though it does occur in men and adult women, AN has the highest incidence in adolescent women, and the average age of onset is decreasing. It’s not the most prevalent mental illness, but it is the deadliest: AN has the highest mortality rate of any mental illness, including other eating disorders. This is because of both dangerous physiological consequences and a high rate of suicide among anorexic individuals.





Despite its high morbidity and mortality, there are no well-established treatments for AN, and the prospects for recovery are, unfortunately, not great. Among those who receive a diagnosis, about 50% recover fully. About 30% of patients make partial recoveries, and the remaining 20%, quite a substantial subset of AN patients, do not recover even with repeated attempts at treatment.





For those who fall in that 20% — who are sometimes referred to as having “chronic” or “longstanding” AN — there are no evidence-based treatment options, even though there is evidence that those with chronic or longstanding AN require different treatment than those at an earlier stage of illness.* To date, there has been only one controlled study of adults with chronic AN, and no studies of treatments for adolescents with chronic AN. There is clearly a great need for research in this subgroup, particularly among adolescents with chronic AN.








A representation of anorexia nervosa from flickr user Benjamin Watson

This is where DBS comes in. Readers of this blog are familiar with DBS, but they may not know that it has been shown to be a promising treatment for AN, even chronic AN, and even AN in a small number of teens. An emerging neurobiological understanding of AN supports the notion that DBS will be effective. Plus, case studies and case series [1, 2, 3] on the use of DBS in 14 women with AN have shown that it has been effective in increasing body weight and decreasing AN-associated behaviors in 11 of them.





DBS is not without risk. Along with the physical risks of hemorrhage, infection, and anesthesia, especially among patients with AN, there may be unforeseen negative psychological effects on sufferers’ self-conception and well-being. We might expect this to be especially true in adolescents, who are right at the developmental stage of figuring out who they are and who they want to be. A “brain pacemaker” may disrupt that task.





These risks of DBS can be considered reasonable only if substantial benefit is expected from the trials.





I won’t go through an entire risk/benefit analysis here, but I do believe that such an analysis supports the idea that the risks of enrolling teens with chronic AN are reasonable, and that the harms of not allowing them to participate in research are themselves considerable. That is, the harms of excluding teens with AN from clinical research, and thus continuing the status quo in the treatment of chronic AN, are so substantial that they justify the inclusion of teens, even in high-risk research.





You might be thinking: Why not give the current research on adults a few years to see if DBS is an effective treatment for chronic AN in that population, and then move on to kids? Indeed, some ethicists have said that this is the way we ought to proceed.





First, a negative response in adults does not necessarily mean that DBS would fail in teens. In fact, data from a small study in China that enrolled teens in trials of DBS support the hypothesis that teens may respond better than adults. We should not predicate research on adolescents on data from adult trials.





And second, waiting for the adult data continues the trend of preventing participation in clinical research among a population that has, arguably, been “over-protected” from research to its detriment. The seriousness of AN and its high incidence in teens ground an interest in improved treatments for all teens with AN.








Should DBS be performed on this "over-protected" population?

There remain obstacles to engaging this population in clinical research on DBS. Because I’m proposing trials for minors, we need to address not only barriers to establishing assent in teens with AN but also parental consent. Both are problematic with this population, but I think the problems are surmountable.





The upshot of the argument I pose is that we should look to enfranchise so-called “vulnerable” populations that have historically been excluded from clinical research. Justice demands it. An ethical and legal framework for clinical research should support responsible and safe research on treatment protocols for members of these populations rather than discourage it.





My views on DBS have changed after spending more time with the topic, considering it from a wider variety of perspectives, and asking different questions. I do not mean to imply that one set of questions or a particular perspective ought to be prioritized. Rather I hope to have highlighted the value of recognizing the limits of one’s research and exploring one’s interests from a variety of viewpoints to reach more inclusive and considered judgments.



*Some researchers call for classifying stages of AN to better guide treatment and research. Morbidity and mortality worsen, and recovery becomes less likely, the longer the disease progresses. Treatment protocols typically do not distinguish between someone with a first diagnosis and someone who has been ill for 5 or 10 years, and research studies usually do not divide patients with AN into further subgroups based on length of illness, even though there is evidence that those who have had the illness longer require different care than those at earlier stages.



Want to cite this post?



Plunkett, C. (2015). Widening the use of deep brain stimulation: Ethical considerations in research on DBS to treat Anorexia Nervosa. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/11/widening-use-of-deep-brain-stimulation.html

Tuesday, November 17, 2015

Do you have a mouse brain? The ethical imperative to use non-human primates in neuroscience research

by Carlie Hoffman




Much of today’s neuroscience research investigating human brain diseases and disorders utilizes animal models. Animals ranging from flies to rodents to non-human primates are routinely used to model various disorders, with mice being the most commonly utilized. Scientists employ these animal models to approximate human conditions and disorders in an accessible manner, with the ultimate purpose of applying the findings derived in animals back to the human brain.







Rhesus macaques, a species of NHP often used in research.


The use of animals in research has been the source of much debate, with people either supporting or objecting to their use; objections have arisen from animal rights activists, proponents of critical neuroscience such as Nikolas Rose and Joelle Abi-Rached, and others. A main focus of this debate has been the use of non-human primates (NHPs) in research. The cognitive functions and behaviors of NHPs are more closely related to those seen in humans than are those of rodents, causing primates to be held as the closest approximation of human brain functioning in both normal and disease states. Though some say NHP research is essential, others call for scaling it down or even eliminating it completely. Strides have already been made toward the reduction and removal of NHPs from experimental research, as displayed by the substantial justification required to perform experiments utilizing them, the increasing efforts going toward developing alternative non-animal models (including the Human Brain Project’s goal to create a computer model of the human brain), and the recent reduction of the use of chimpanzees in research [2, 6]. A case was even brought to the New York Supreme Court earlier this year to grant personhood status to two research chimpanzees.






However, if NHPs are completely removed from human brain disease research, this leaves rodents as the primary non-human animal model for the human condition. This raises an important question: are we (both the general public and scientists) okay with accepting a detailed understanding of the mouse brain as being equivalent to a detailed understanding of the human brain? Dr. Yoland Smith, a world-renowned Emory researcher in neurodegenerative disease, says no— work with both NHPs and rodent models must continue to complement and inform each other.






Smith came to Emory University in 1996 and currently works at the Emory-affiliated Yerkes National Primate Research Center. Smith’s work is 90% rhesus macaque-based and 10% rodent-based, with the aim of most of his rodent work being to fine-tune new approaches and methods to apply to his rhesus macaques. While many animal rights advocates feel it is an ethical imperative to eliminate NHP research, Dr. Smith believes there are currently more pressing obstacles to the continued use of NHPs to study neuroscience, namely the increasing regulatory constraints on NHP research and the challenges in translating rapid technology development from mice to NHPs. 






Mice are the animals most commonly used in research.


For instance, the number of rodents (almost exclusively mice) utilized in biomedical research has been increasing at a dramatic pace compared with that of primates, the latter accounting for only about 0.1-0.3% of all animals currently used in biomedical research in the United States and Europe. This is in part due to the ease of acquiring and maintaining rodent colonies as compared to NHP colonies: the cost of maintaining a mouse at Emory ranges from $0.83 to $4.13 per day, while the cost of maintaining an NHP is in the range of $80-$110 per day [3]. While rodent research is ultimately not cheap (most researchers maintain colonies of anywhere from tens to hundreds of mice, causing total per diem costs to rise rapidly), the relatively lower cost of maintenance and ease of access associated with rodent use have allowed extensive experimental optimization and exploratory testing to be performed in rodents, but not in NHPs. As a result, rodent-based technologies have advanced ahead of NHP-based ones, and techniques such as in vitro electrophysiology, optogenetics, and transgenic strains have been developed for use in mice. Because such techniques do not cross easily into NHP research, they are not currently available for use in the NHP brain. Consequently, new methods are continually being applied and developed almost exclusively for use in the mouse, while the time and money needed to adapt these assays to NHPs are not being spent. Scientists are thus able to dig deeper and answer more sophisticated questions about the mouse brain, while NHP research is left in the dust. NHP research may ultimately come to an end not just because of the ethical arguments against it, but because rodent research is leaps and bounds ahead of it.
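To put those per-diem figures in perspective, here is a rough back-of-the-envelope comparison. It uses the cost ranges cited above, but the colony sizes (200 mice, 5 NHPs) are illustrative assumptions of mine, not figures from the workshop report:

```python
# Rough cost comparison using the per-diem ranges cited above [3].
# Colony sizes are hypothetical, chosen only for illustration.
mouse_per_day = (0.83, 4.13)   # USD per mouse per day at Emory (low, high)
nhp_per_day = (80.0, 110.0)    # USD per NHP per day (low, high)

def annual_colony_cost(per_day_range, n_animals, days=365):
    """Return the (low, high) yearly cost of housing n_animals."""
    low, high = per_day_range
    return (low * n_animals * days, high * n_animals * days)

# A mid-sized colony of 200 mice...
mouse_cost = annual_colony_cost(mouse_per_day, 200)
# ...versus a small group of 5 NHPs.
nhp_cost = annual_colony_cost(nhp_per_day, 5)

print(f"200-mouse colony: ${mouse_cost[0]:,.0f} - ${mouse_cost[1]:,.0f} per year")
print(f"5-NHP group:      ${nhp_cost[0]:,.0f} - ${nhp_cost[1]:,.0f} per year")
```

On these assumed numbers, a 200-mouse colony runs roughly $61,000-$301,000 per year while five NHPs run roughly $146,000-$201,000, which illustrates the point that total per diem costs for rodent work rise quickly even though each animal is cheap.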





This trajectory also raises questions about what role we think NHP research should continue to play in the field of neuroscience: Can NHP research ever match the pace of rodent research? Should we use NHPs at all? Can all of our questions about the human brain and human health be answered using rodent models?





Smith’s answer to this final question is no. Although Smith feels rodent research produces highly valuable information, contributes to advances in neuroscience, and must continue to grow at a fast pace, he posits that it would be naïve to believe that gaining knowledge about the mouse brain is sufficient to achieve our ultimate goal of “getting to the human.” While it would be easy to say that the human brain is equivalent to the mouse brain, he states this is simply not true—particularly when examining complex brain functions and diseases that involve high-order cortical processing. For instance, rodents are often used to model many complex neuropsychiatric diseases affecting the prefrontal cortex and influenced by social and cultural components; however, there are striking differences in the size and complexity of the prefrontal cortex and other associative cortices in primates versus rodents, and animal models are unable to replicate the integral role culture plays in psychiatric disease pathology (a topic also described in a previous blog post) [1, 4]. The overly simplistic organization of the rodent cerebral cortex compared with that of humans, Dr. Smith believes, makes NHP research absolutely essential. If we hope to make significant progress toward the development of new therapeutic strategies for complex cognitive and psychiatric disorders, then there is an urgent need for the ongoing development of NHP models of such prevalent diseases of the human brain.







Comparative images of human, mouse, and rhesus brains.


Smith also believes there is a critical knowledge gap between the rodent brain and the human brain that cannot be addressed without a deeper understanding of both the healthy and diseased NHP brain. NHP research has already partially helped to fill this gap [5], as evidenced by the breakthroughs made in understanding disease pathophysiology and the development of surgical therapies for Parkinson’s disease through the use of NHP models, the focus of Smith and his colleagues’ work at Emory. However, Smith notes that these advances would not have been possible without a strong foundation of rodent research. This interplay between primate and rodent work serves as a model for how Smith hopes the field of neuroscience will progress in the future, with both rodent research and primate research growing together in parallel and feeding into one another, ultimately helping researchers to “get to the human.”





If significant strategies are not put in place by funding agencies to maintain continued support for NHP research, this parallel growth will not be possible. Smith urges that we maintain NHP research by continuing to develop technologies for use in NHPs and by generating discussions about the use and drawbacks of NHPs in neuroscience research. Smith feels that strides toward maintaining NHP research should be spearheaded by larger funding institutions, such as the National Institutes of Health (NIH). He proposes there should be a call for grant applications specifically involving NHP research as well as the formation of an NIH-based committee to discuss how we can perform NHP research in a successful way; to assess whether we, as a scientific community, are always moving toward the human; and to examine the ethical and practical limitations of using NHPs in research. Smith also encourages scientists to be proactive and unafraid to engage the community in discussions of the ethical dilemmas surrounding primate research—we cannot ignore these issues and instead must be the mediators of such ethical discussions.





Dr. Smith believes NHP research is essential to advancing the understanding of the human brain and improving human health. As such, Smith does not think that this research will ever fully disappear. While young researchers might find the current funding climate and research constraints daunting, he claims it is the responsibility of current primate researchers to continue to train new scientists; if we do not train new NHP scientists, we will just be contributing to the loss. Therefore, while NHP research is receiving pressure both from society and from scientists, the use of primates in neuroscience research must continue. After all, as Smith stated, “I do not have a mouse brain.”



Works Cited



1. Ding, SL (2013) Comparative anatomy of the prosubiculum, subiculum, presubiculum, postsubiculum, and parasubiculum in human, monkey, and rodent. J Comp Neurol 521: 4145-4162. doi: 10.1002/cne.23416



2. Doke, SK, & Dhawale, SC (2015) Alternatives to animal testing: A review. Saudi Pharmaceutical Journal 23: 223-229. doi: http://dx.doi.org/10.1016/j.jsps.2013.11.002



3. International Animal Research Regulations: Impact on Neuroscience Research: Workshop Summary. (2012). Washington DC: National Academy of Sciences.



4. Nestler, EJ, & Hyman, SE (2010) Animal models of neuropsychiatric disorders. Nat Neurosci 13: 1161-1169. doi: 10.1038/nn.2647



5. Phillips, KA, Bales, KL, Capitanio, JP, Conley, A, Czoty, PW, t Hart, BA, Hopkins, WD, Hu, SL, Miller, LA, Nader, MA, Nathanielsz, PW, Rogers, J, Shively, CA, & Voytko, ML (2014) Why primate models matter. Am J Primatol 76: 801-827. doi: 10.1002/ajp.22281



6. Tardif, SD, Coleman, K, Hobbs, TR, & Lutz, C (2013) IACUC review of nonhuman primate research. ILAR J 54: 234-245. doi: 10.1093/ilar/ilt040





Want to cite this post?



Hoffman, C. (2015). Do you have a mouse brain? The ethical imperative to use non-human primates in neuroscience research. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/11/do-you-have-mouse-brain-ethical.html

Monday, November 9, 2015

Why defining death leaves me cold

by John Banja, PhD




*Editor's note: In case you missed our annual Zombies and Zombethics (TM) Symposium entitled Really, Most Sincerely Dead. Zombies, Vampires and Ghosts. Oh my! you can watch our opening keynote by Dr. Paul Root Wolpe by clicking on the image below. We recommend starting at 9:54 min.











Two weeks ago, I attended a panel session on brain death at the annual conference of the American Society for Bioethics and Humanities. Forgive the bad pun, but the experience left me cold and …lifeless(?). The panel consisted of three scholars revisiting the more-than-decade-old conversation on defining death. Despite a standing-room-only crowd, there was utterly nothing new. Rather, we heard a recitation of the very familiar categories that have historically figured in the “What does it mean to be dead?” debate, e.g., the irreversible cessation of cardio-respiratory activity, the Harvard Brain Death criteria, the somatic integration account, the 2008 Presidential Commission’s “loss of the drive to breathe,” and so on. I walked out thinking that we could come back next year, and the year after that, and the year after that and get no closer to resolving what it means to be dead.









Dr. Banja in his natural habitat.

I’d suggest that the reason for this failure is scholars’ stubborn insistence on mistaking a social practice, i.e., defining death, for a metaphysical event. Philosophers who insist on keeping the “defining death” conversation alive are invariably moral realists: They mistakenly believe that death is an objectively discernible, universally distributed, a priori, naturally occurring phenomenon that philosophical reasoning and analysis can divine. Now, the irreversible cessation of cardio-respiratory functioning or the cessation of all brain functioning certainly are actual biophysiological events. But determining death requires a social decision because its primary purpose consists in triggering various social practices like terminating medical care or preparing a body for organ recovery; commencing rituals of grieving or mourning; disposing the body in a way that protects the public’s health; securing life insurance or inheritance benefits, and so on. Understood this way, it’s up to a community of language users to decide when these activities should commence rather than look to a bunch of academic philosophers to give us the “correct” answer. After all, what are philosophers going to do? They can only argue their moral intuitions, but they must ultimately admit that there is no source of confirmation that proves which of their intuitions is the “correct” one.






Death contains a social component, as depicted in The Court of Death by Rembrandt Peale


The problem with the various death defining criteria is that, at least to me, they all have a ring of plausibility. (Otherwise, they wouldn’t be seriously discussed.) This especially includes Robert Veatch’s position that we should leave the nature of death determination up to the individual.* According to Veatch, if I believe that I’m as good as dead if I enter a state of permanent unconsciousness, then I should be treated as such: discontinue all life prolonging care; prepare to dispose of my bodily remains in a respectable way; and if my beating heart disturbs anyone, inject it with curare or a reasonable substitute to stop it.





The idea that death is a “natural occurrence” is only loosely and metaphorically true. In fact, death is largely a socio-cultural happening that derives from social needs or pressures—like the Harvard Brain Death criteria deriving from the need for a dead organ donor or to assist the courts in their prosecution of murderers. The idea that philosophers can discern the “real and true” essence of death—because we mistakenly think the answer sits out there in the biosphere waiting to be discovered—seems an intellectual conceit. We don’t need philosophers to tell us how our social practices should work. It’s up to the rest of us to experiment with them and retain the ones that work best. And that’s what will happen if there are further chapters in the social narrative around defining death: Future generations will meet that challenge according to the survival pressures that living and dying present to them. Philosophical definitions of death might be interesting and even illuminating. But contemporary, western societies will most likely decide when death occurs according to pragmatically reasonable criteria rather than philosophically subtle ones.




*Veatch RM. The death of whole-brain death: the plague of the disaggregators, somaticists, and mentalists. Journal of Medicine and Philosophy 2005;30:353–378.



Want to cite this post?



Banja, J. (2015). Why defining death leaves me cold. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/11/why-defining-death-leaves-me-cold_3.html

Tuesday, November 3, 2015

Shrewder speculation: the challenge of doing anticipatory ethics well


by Dr. Hannah Maslen 





Hannah Maslen is a Research Fellow in Ethics at the Oxford Martin School and the Oxford Uehiro Centre for Practical Ethics. She currently works on the Oxford Martin Programme on Mind and Machine, where she examines the ethical, legal, and social implications of various brain intervention and interface technologies, from brain stimulation devices to virtual reality. 




This post is part of a series that recaps and offers perspectives on the conversations and debates that took place at the recent 2015 International Neuroethics Society meeting.




In its Gray Matters report, the United States Presidential Commission for the Study of Bioethical Issues underscored the importance of integrating ethics and neuroscience early and throughout the research endeavor. In particular, the Commission declared: 






"As we anticipate personal and societal implications of using such technologies, ethical considerations must be further deliberated.  


Executed well, ethics integration is an iterative and reflective process that enhances both scientific and ethical rigor." 






What is required to execute ethics integration well? How can philosophers make sure that their work has a constructive role to play in shaping research and policy-making?






In a recent talk at the International Neuroethics Society Annual Meeting, I reflected on this, and on the proper place of anticipation in the work that philosophers and neuroethicists do in relation to technological advance. Anticipating, speculating and keeping ahead of the technological curve are all laudable aims. It is crucial that likely problems and potential solutions are identified ahead of time, to minimize harm and avoid knee-jerk policy reactions. Keeping a step ahead inevitably requires all involved to make predictions about the way a technology will develop and about its likely mechanisms and effects. Indeed, philosophers will sometimes take leave from discussion of an actual emerging or prototype technology and extrapolate to consider the ethical challenges that its hypothetical future versions might present to society in the near future. Key features of the technology are identified, distilled and carefully subjected to analysis.






Gray Matters report

Speculating about cognitive enhancement 





Cognitive enhancement technologies – a topic discussed in depth in the second volume of the Gray Matters report – have received this sort of treatment. There has been a substantial amount of work dedicated to examining things like whether the use of cognitive enhancement drugs by students constitutes cheating, or whether professionals in high-risk jobs such as surgery or aviation should be required to take them. Some of this work appears to involve greater or lesser degrees of speculation. For example, a philosopher might present herself with the following sorts of questions:



Imagine that cognitive enhancer X improves a student’s performance to a level that would be achieved through having extra private tutorials. Does her use of cognitive enhancer X constitute cheating?  


Imagine that cognitive enhancer Y is completely safe, and effective at remedying fatigue-related impairment. Should the surgeon be required to take cognitive enhancer Y? 



Working through these sorts of examples can generate conclusions of great conceptual interest. In relation to the first, we might get clearer on what cheating precisely amounts to, and perhaps which sorts of advantages are and are not unfair in an educational setting. In relation to the second, we might come to interesting conclusions about the limits of professional obligations, or perhaps about the relationship between cognitive capacities and responsibility.





However, working at this level of abstraction – as valuable as it is from a philosophical perspective – cannot give us what we need to determine, for example, whether Duke University should uphold its policy on the use of off-label stimulants as a species of academic dishonesty, or whether the Royal College of Surgeons should recommend the use of Modafinil by surgeons as good practice. Abstracted work undeniably has its place, and is hugely interesting, but it does not integrate well with concrete discussions about scientific research directions and policy. Why is this?




Why might theoretical analysis be difficult to integrate? 




To some extent, conducting the sort of thought experiments involving cognitive enhancers X and Y requires that we strip away the messiness of the details of the technologies. This allows us to carefully isolate and vary the features we think will be morally relevant to see how they affect our intuitions and reasoning. We want the principal consideration in the surgeon case to be the fact that the drug remedies fatigue and reduces error. It also makes the case sufficiently abstract to be generalizable to a whole category of cognitive enhancers – there may be different drugs with a variety of properties that all share the impairment-reducing effect. The example might also extrapolate to near-future possible pharmaceuticals – we might not have such a drug now, but what if we did?






Pharmaceuticals image courtesy of Flickr user Waleed Alzuhair 

However – and this is the crucial point – many of the details that are stripped away to enable the philosophical question to be carefully defined and delineated are hugely relevant to determining what we should do; but we cannot add all this detail back in after reaching our conclusions and expect them to remain the same.





In relation to a university’s policy on enhancers, the reality is that different drugs affect different people differently; they may simultaneously enhance one cognitive capacity whilst impairing another; some drugs might have their principal effects on working memory, whilst others enhance wakefulness and task enjoyment. All these features and many others are relevant to the question of fairness and what our policy for particular drugs should be. Importantly, the specific features of different drugs might lead to different conclusions.





In relation to professional duties, it is going to matter that a drug like modafinil is not without side effects; that it can cause gastrointestinal upset; that individuals can perceive themselves as functioning better than they in fact are, and so on. These features bear, amongst other things, on effectiveness, permissibility of professional coercion, and also on whether reasonable policy options might sit somewhere between a blanket requirement and a blanket ban.




What to do?




It’s important that the reader does not take me to be saying that we should give up theoretical work on neurotechnologies. In fact, it is precisely through careful construction of the possible features of technologies that we can learn more about the socially important dimensions for which they have significance. If we want to get clearer on the boundaries of what we can and cannot require a surgeon to do, we need to consider many possibilities sitting just before and beyond the boundary: at some point, perhaps, a requirement would encroach too much into his life beyond his professional role to be justifiable. The degree of encroachment would have to be varied very slightly (almost certainly artificially) until we get to the point somewhere along the line from hand-washing to heroics where we identify the boundary.





Rather, my suggestion is that we need to be clear when we start an ethical analysis about whether we are doing something more conceptual or whether we want to make a statement about what should be done in a particular situation. When we want to do the latter, we have to make sure that we work with as much of the scientific detail as possible. This requires philosophers and ethicists to read scientific papers – perhaps at the detail of review articles – to make sure they retain the detail necessary to offer a practical recommendation. Ideally, such work would be completed in collaboration with scientists, or at least subjected to their scrutiny.





Of course, there’s a difference between speculating because you are not an expert on that technology and speculating because the information is not yet available. There should be none of the former, and the latter should be carefully managed so that recommendations do not far outstrip the limited information base: there’s a lot more we need to know about incorporating computer chips into brains, for example, before we can even start to say anything practical about what should and shouldn’t be done.





Scientific black boxes are to some extent inevitable when speculating about neurotechnological advances. The task for practical ethicists is to open as many as they can and to be mindful of the potential ethical significance of those they cannot. They also need to be careful to determine when they want to conduct theoretical analysis, using real and imagined technologies to illuminate conceptual truths, and when they want to argue for a course of action in relation to a particular neuroscientific application or technology, the details of which are crucial in order for ethical integration to be well executed.







Want to cite this post?



Maslen, H. (2015). Shrewder speculation: the challenge of doing anticipatory ethics well. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/11/shrewder-speculation-challenge-of-doing.html