
Tuesday, November 24, 2015

Widening the use of deep brain stimulation: Ethical considerations in research on DBS to treat Anorexia Nervosa

by Carolyn Plunkett






Carolyn Plunkett is a Ph.D. Candidate in the Philosophy Department at The Graduate Center of City University of New York. She is also an Ethics Fellow in The Bioethics Program at the Icahn School of Medicine at Mount Sinai, and a Research Associate in the Division of Medical Ethics at NYU Langone Medical Center. Carolyn will defend her dissertation in spring 2016, and, beginning July 2016, will be a Rudin Post-Doctoral Fellow in the Divisions of Medical Ethics and Medical Humanities at NYU Langone Medical Center. 







This post is part of a series that recaps and offers perspectives on the conversations and debates that took place at the recent 2015 International Neuroethics Society meeting.





Karen Rommelfanger, founding editor of The Neuroethics Blog, heard a talk I gave on deep brain stimulation (DBS) at Brain Matters! 3 in 2012. Three years later, she heard a brief synopsis of a paper I presented a few weeks ago at the International Neuroethics Society Annual Meeting. Afterward, she came up to me and said, “Wow! Your views have changed!” I had gone from being wary about using DBS in adults, much less minors, to defending its use in teens with anorexia nervosa. She asked me to write about this transition for this blog, and present my recent research.






I was introduced to DBS in a philosophy course four years ago and was immediately captivated by questions like: Am I really me with ongoing DBS? Are my emotions really mine? What is an “authentic self,” or an “authentic emotion,” anyway? I wrote a term paper on the topic and soon after presented it at a couple of conferences. I wrote a post for the Neuroethics Women Leaders blog and published an open peer commentary in AJOB Neuroscience. It was a hot topic, as a couple of other authors were asking the same questions at around the same time.





But when I presented my ideas to an audience at Brain Matters! 3, in Cleveland, Ohio, I was taken aback by one participant’s question. A neurosurgeon asked what she should tell her patients, and their families, when they request DBS to treat refractory depression or OCD. “Should I tell them no because philosophers are concerned that their emotions might not be authentic?” she wondered. I think I gave a wishy-washy answer about informed consent. It did not satisfy my interlocutor, or me.







Schematic of the deep brain stimulation setup


Her question has stuck with me. It has forced me to reexamine the context in which I think about DBS, from the purely philosophical to the bioethical. Put another way, the neurosurgeon’s concern forced me to reconsider my earlier ideas about DBS—my purely philosophical questions about the nature of identity and emotions, and the relationship between them—and put them in conversation with more concrete bioethical questions about how DBS is actually used and by whom, and how it should be used, including how access to DBS and clinical research could be made more equitable and respectful. Her question drove home the point that bioethicists must take seriously the lived experiences of patients, families, clinicians, researchers, and others involved in the DBS process. These participants are not just characters in a thought experiment, as an analogy between DBS and Nozick’s infamous experience machine would suggest.






So I turned my focus from DBS itself to the experiences of those with the conditions that DBS aims to treat, like anorexia nervosa, depression, and addiction. Do they experience authentic selves or authentic desires? This shift ultimately drove me toward my dissertation project, which defends a subjectivist conception of normative reasons—or, roughly, the view that what we have reason to do is rooted in who we are, what we value, and what we desire. (There is a time and place for philosophy, after all.)





A few more classes and papers further propelled my changing perspective, leading me to my current research agenda on ethical issues in research on DBS for the treatment of anorexia nervosa. The central question I’m now investigating: Given that researchers are testing the efficacy of DBS for anorexia nervosa, should we consider enrolling adolescents in those trials?





Anorexia nervosa, or AN, is characterized by a distorted body image, excessive dieting and exercise that lead to severe weight loss, and a pathological fear of becoming fat. Though it does occur in men and adult women, AN has the highest incidence in adolescent women, and the average age of onset is decreasing. It is not the most prevalent mental illness, but it is the deadliest: AN has the highest mortality rate of any mental illness, eating disorders included. This is due both to dangerous physiological consequences and to a high rate of suicide among individuals with anorexia.





Despite its high morbidity and mortality, there are no well-defined treatments for AN, and the prospects for recovery are, unfortunately, not great. Among those who receive a diagnosis, the full recovery rate is about 50%. About 30% of patients make partial recoveries, and the remaining 20% do not recover, even with repeated attempts at treatment — quite a substantial subset of AN patients.





For those who fall in that 20% — who are sometimes referred to as having “chronic” or “longstanding” AN — there are no evidence-based treatment options, even though there is evidence that those with chronic or longstanding AN require different treatment than those at an earlier stage of illness.* To date, there has been only one controlled study of adults with chronic AN, and no studies of treatments for adolescents with chronic AN. There is clearly a great need for research within this subgroup of patients with chronic AN, adolescents in particular.








A representation of anorexia nervosa from flickr user Benjamin Watson

This is where DBS comes in. Readers of this blog are familiar with DBS, but they may not know that it has shown promise as a treatment for AN, even chronic AN, and even AN in a small number of teens. An emerging neurobiological understanding of AN supports the notion that DBS will be effective. Plus, case studies and case series [1, 2, 3] on the use of DBS in 14 women with AN have shown it to be effective in increasing body weight and decreasing AN-associated behaviors in 11 of them.





DBS is not without risk. Along with the physical risks of hemorrhage, infection, and anesthesia, especially among patients with AN, there may be unforeseen negative psychological effects on sufferers’ self-conception and well-being. We might expect this to be especially true in adolescents, who are right at the developmental stage of figuring out who they are and who they want to be. A “brain pacemaker” may disrupt that task.





These risks of DBS can be considered reasonable only if substantial benefit is expected from the trials.





I won’t go through an entire risk/benefit analysis here, but I do believe that such an analysis supports the idea that the risks associated with enrolling teens with chronic AN are reasonable, and that the harms of not allowing them to participate in research are themselves substantial. That is, the harms of excluding teens with AN from clinical research, and thus continuing the status quo in the treatment of chronic AN, are so great that they justify the inclusion of teens, even in high-risk research.





You might be thinking: Why not give the current research on adults a few years to see if DBS is an effective treatment for chronic AN in that population, and then move on to kids? Indeed, some ethicists have said that this is the way we ought to proceed.





First, a negative response in adults does not necessarily mean that DBS would fail in teens. In fact, data from a small study in China that enrolled teens in trials of DBS support the hypothesis that teens may respond better than adults. We should not predicate research on adolescents on data from adult trials.





And second, waiting for the adult data continues the trend of preventing participation in clinical research among a population that has, arguably, been “over-protected” from research to its detriment. The seriousness of AN and its high incidence in teens ground an interest in improved treatments for all teens with AN.








Should DBS be performed on this "over-protected" population?

There remain obstacles to engaging this population in clinical research on DBS. Because I’m proposing trials for minors, we need to address not only barriers to establishing assent in teens with AN but also parental consent. Both are problematic with this population, but I think the problems are surmountable.





The upshot of the argument I pose is that we should look to enfranchise so-called “vulnerable” populations that have historically been excluded from clinical research. Justice demands it. An ethical and legal framework for clinical research should support responsible and safe research on treatment protocols for members of these populations rather than discourage it.





My views on DBS have changed after spending more time with the topic, considering it from a wider variety of perspectives, and asking different questions. I do not mean to imply that one set of questions or a particular perspective ought to be prioritized. Rather I hope to have highlighted the value of recognizing the limits of one’s research and exploring one’s interests from a variety of viewpoints to reach more inclusive and considered judgments.



*Some researchers call for classifying stages of AN to better guide treatment and research. Morbidity and mortality worsen, and recovery becomes less likely, the longer the disease progresses. Treatment protocols typically do not distinguish between someone with a first diagnosis and someone who has been ill for 5 or 10 years, and research studies usually do not divide patients with AN into subgroups based on length of illness, even though there is evidence that those who have had the illness longer require different care than those at earlier stages.



Want to cite this post?



Plunkett, C. (2015). Widening the use of deep brain stimulation: Ethical considerations in research on DBS to treat Anorexia Nervosa. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/11/widening-use-of-deep-brain-stimulation.html

Tuesday, November 17, 2015

Do you have a mouse brain? The ethical imperative to use non-human primates in neuroscience research

by Carlie Hoffman




Much of today’s neuroscience research investigating human brain diseases and disorders utilizes animal models. Animals ranging from flies to rodents to non-human primates are routinely used to model various disorders, with mice being the most commonly utilized. Scientists employ these animal models to approximate human conditions and disorders in an accessible manner, with the ultimate purpose of applying the findings derived in the animal back to the human brain.







Rhesus macaques, a species of NHP often used in research.


The use of animals in research has been the source of much debate, with objections arising from animal rights activists, proponents of critical neuroscience such as Nikolas Rose and Joelle Abi-Rached, and others. A main focus of this debate has been the use of non-human primates (NHPs) in research. The cognitive functions and behaviors of NHPs are more closely related to those seen in humans than are those of rodents, making primates the closest approximation of human brain functioning in both normal and disease states. Though some say NHP research is essential, others call for scaling it down or even eliminating it completely. Strides have already been made toward the reduction and removal of NHPs from experimental research, as evidenced by the substantial justification required to perform experiments utilizing them, the increasing efforts going toward developing alternative non-animal models (including the Human Brain Project’s goal of creating a computer model of the human brain), and the recent reduction of the use of chimpanzees in research [2, 6]. A case was even brought to the New York Supreme Court earlier this year to grant personhood status to two research chimpanzees.






However, if NHPs are completely removed from human brain disease research, this leaves rodents as the primary non-human animal model for the human condition. This raises an important question: are we (both the general public and scientists) okay with accepting a detailed understanding of the mouse brain as equivalent to a detailed understanding of the human brain? Dr. Yoland Smith, a world-renowned Emory researcher in neurodegenerative disease, says no: work with both NHPs and rodent models must continue to complement and inform each other.






Smith came to Emory University in 1996 and currently works at the Emory-affiliated Yerkes National Primate Research Center. Smith’s work is 90% rhesus macaque-based and 10% rodent-based, with most of his rodent work aimed at fine-tuning new approaches and methods for application in his rhesus macaques. While many animal rights advocates feel it is an ethical imperative to eliminate NHP research, Dr. Smith believes there are currently more pressing obstacles to the continued use of NHPs to study neuroscience, namely the increasing regulatory constraints on NHP research and the challenges in translating rapid technology development from mice to NHPs.






Mice are the animals most commonly used in research.


For instance, the number of rodents (almost exclusively mice) utilized in biomedical research has been increasing at a dramatic pace compared with that of primates, the latter accounting for only about 0.1-0.3% of all animals currently used in biomedical research in the United States and Europe. This is in part due to the ease of acquiring and maintaining rodent colonies as compared to NHP colonies: the cost of maintaining a mouse at Emory ranges from $0.83 to $4.13 per day, while the cost of maintaining an NHP is in the range of $80-$110 per day [3]. While rodent research is ultimately not cheap (most researchers maintain colonies of tens to hundreds of mice, so total per diem costs rise quickly), the relatively lower cost of maintenance and ease of access have allowed extensive experimental optimization and exploratory testing to be performed in rodents, but not in NHPs. As a result, rodent-based technologies have advanced faster than NHP-based ones, and techniques such as in vitro electrophysiology, optogenetics, and transgenic strains have been developed for use in mice. Because such techniques do not cross easily into NHP research, they are not currently available for use in the NHP brain. Consequently, new methods are continually being developed and applied almost exclusively in the mouse, while the time and money needed to adapt these assays to NHPs are not being spent. Scientists are thus able to dig deeper and answer more sophisticated questions about the mouse brain, while NHP research is left in the dust. NHP research may ultimately come to an end not just because of the ethical arguments against it, but because rodent research is leaps and bounds ahead of it.
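To make the cost gap concrete, here is a minimal back-of-envelope sketch in Python using only the per diem figures quoted above; the colony sizes are hypothetical and chosen purely for illustration, not drawn from the studies cited in this post.

# Back-of-envelope housing-cost comparison using the per diem figures cited
# in this post. The colony sizes below are hypothetical examples.

MOUSE_PER_DIEM = (0.83, 4.13)   # USD per mouse per day (Emory figures)
NHP_PER_DIEM = (80.0, 110.0)    # USD per NHP per day

def daily_cost(n_animals, per_diem):
    """Return the (low, high) total daily cost for a colony of n_animals."""
    low, high = per_diem
    return n_animals * low, n_animals * high

# A mid-sized mouse colony of 200 animals versus a small group of 10 macaques.
mouse_low, mouse_high = daily_cost(200, MOUSE_PER_DIEM)
nhp_low, nhp_high = daily_cost(10, NHP_PER_DIEM)

print(f"200 mice:    ${mouse_low:,.2f} - ${mouse_high:,.2f} per day")
print(f"10 macaques: ${nhp_low:,.2f} - ${nhp_high:,.2f} per day")
# Even a sizeable mouse colony costs roughly as much per day as a handful of
# NHPs, which helps explain why exploratory optimization concentrates in mice.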





This trajectory also raises questions about what role we think NHP research should continue to play in the field of neuroscience: Can NHP research ever match the pace of rodent research? Should we use NHPs at all? Can all of our questions about the human brain and human health be answered using rodent models?





Smith’s answer to this final question is no. Although Smith feels rodent research produces highly valuable information, contributes to advances in neuroscience, and must continue to grow at a fast pace, he posits that it would be naïve to believe that gaining knowledge about the mouse brain is sufficient to achieve our ultimate goal of “getting to the human.” While it would be easy to say that the human brain is equivalent to the mouse brain, he states this is simply not true—particularly when examining complex brain functions and diseases that involve higher-order cortical processing. For instance, rodents are often used to model complex neuropsychiatric diseases that affect the prefrontal cortex and are influenced by social and cultural components; however, there are striking differences in the size and complexity of the prefrontal cortex and other associative cortices in primates versus rodents, and animal models are unable to replicate the integral role culture plays in psychiatric disease pathology (a topic also described in a previous blog post) [1, 4]. The much simpler organization of the rodent cerebral cortex compared with that of humans, Dr. Smith believes, makes NHP research absolutely essential. If we hope to make significant progress toward the development of new therapeutic strategies for complex cognitive and psychiatric disorders, then there is an urgent need for the ongoing development of NHP models of such prevalent diseases of the human brain.







Comparative images of human, mouse, and rhesus brains.


Smith also believes there is a critical knowledge gap between the rodent brain and the human brain that cannot be addressed without a deeper understanding of both the healthy and diseased NHP brain. NHP research has already partially helped to fill this gap [5], as evidenced by the breakthroughs made in understanding disease pathophysiology and the development of surgical therapies for Parkinson’s disease through the use of NHP models, the focus of Smith and his colleagues’ work at Emory. However, Smith notes that these advances would not have been possible without a strong foundation of rodent research. This interplay between primate and rodent work serves as a model for how Smith hopes the field of neuroscience will progress in the future, with both rodent research and primate research growing together in parallel and feeding into one another, ultimately helping researchers to “get to the human.”





If significant strategies are not put in place by funding agencies to maintain continued support for NHP research, this parallel growth will not be possible. Smith urges that we maintain NHP research by continuing to develop technologies for use in NHPs and by generating discussions about the use and drawbacks of NHPs in neuroscience research. Smith feels that strides toward maintaining NHP research should be spearheaded by larger funding institutions, such as the National Institutes of Health (NIH). He proposes there should be a call for grant applications specifically involving NHP research as well as the formation of an NIH-based committee to discuss how we can perform NHP research in a successful way; to assess whether we, as a scientific community, are always moving toward the human; and to examine the ethical and practical limitations of using NHPs in research. Smith also encourages scientists to be proactive and unafraid to engage the community in discussions of the ethical dilemmas surrounding primate research—we cannot ignore these issues and instead must be the mediators of such ethical discussions.





Dr. Smith believes NHP research is essential to advancing the understanding of the human brain and improving human health. As such, Smith does not think that this research will ever fully disappear. While young researchers might find the current funding climate and research constraints daunting, he claims it is the responsibility of current primate researchers to continue to train new scientists; if we do not train new NHP scientists, we will just be contributing to the loss. Therefore, while NHP research is receiving pressure both from society and from scientists, the use of primates in neuroscience research must continue. After all, as Smith stated, “I do not have a mouse brain.”



Works Cited



1. Ding, SL (2013) Comparative anatomy of the prosubiculum, subiculum, presubiculum, postsubiculum, and parasubiculum in human, monkey, and rodent. J Comp Neurol 521: 4145-4162. doi: 10.1002/cne.23416



2. Doke, SK, & Dhawale, SC (2015) Alternatives to animal testing: A review. Saudi Pharmaceutical Journal 23: 223-229. doi: 10.1016/j.jsps.2013.11.002



3. International Animal Research Regulations: Impact on Neuroscience Research: Workshop Summary. (2012). Washington DC: National Academy of Sciences.



4. Nestler, EJ, & Hyman, SE (2010) Animal models of neuropsychiatric disorders. Nat Neurosci 13: 1161-1169. doi: 10.1038/nn.2647



5. Phillips, KA, Bales, KL, Capitanio, JP, Conley, A, Czoty, PW, t Hart, BA, Hopkins, WD, Hu, SL, Miller, LA, Nader, MA, Nathanielsz, PW, Rogers, J, Shively, CA, & Voytko, ML (2014) Why primate models matter. Am J Primatol 76: 801-827. doi: 10.1002/ajp.22281



6. Tardif, SD, Coleman, K, Hobbs, TR, & Lutz, C (2013) IACUC review of nonhuman primate research. ILAR J 54: 234-245. doi: 10.1093/ilar/ilt040





Want to cite this post?



Hoffman, C. (2015). Do you have a mouse brain? The ethical imperative to use non-human primates in neuroscience research. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/11/do-you-have-mouse-brain-ethical.html

Monday, November 9, 2015

Why defining death leaves me cold

by John Banja, PhD




*Editor's note: In case you missed our annual Zombies and Zombethics (TM) Symposium, entitled "Really, Most Sincerely Dead. Zombies, Vampires and Ghosts. Oh my!", you can watch our opening keynote by Dr. Paul Root Wolpe by clicking on the image below. We recommend starting at 9:54 min.









Meinhardt Raabe as the Munchkin coroner in The Wizard of Oz


Two weeks ago, I attended a panel session on brain death at the annual conference of the American Society for Bioethics and Humanities. Forgive the bad pun, but the experience left me cold and …lifeless(?). The panel consisted of three scholars revisiting a conversation on defining death that is now more than a decade old. Despite a standing-room-only crowd, there was utterly nothing new. Rather, we heard a recitation of the very familiar categories that have historically figured in the “What does it mean to be dead?” debate, e.g., the irreversible cessation of cardio-respiratory activity, the Harvard Brain Death criteria, the somatic integration account, the President’s Council on Bioethics’ 2008 “loss of the drive to breathe,” and so on. I walked out thinking that we could come back next year, and the year after that, and the year after that, and get no closer to resolving what it means to be dead.









Dr. Banja in his natural habitat.

I’d suggest that the reason for this failure is the stubborn persistence of scholars in mistaking a social practice, i.e., defining death, for a metaphysical event. Philosophers who insist on keeping the “defining death” conversation alive are invariably moral realists: They mistakenly believe that death is an objectively discernible, universally distributed, a priori, naturally occurring phenomenon that philosophical reasoning and analysis can divine. Now, the irreversible cessation of cardio-respiratory functioning or the cessation of all brain functioning certainly are actual biophysiological events. But determining death requires a social decision because its primary purpose consists in triggering various social practices like terminating medical care or preparing a body for organ recovery; commencing rituals of grieving or mourning; disposing of the body in a way that protects the public’s health; securing life insurance or inheritance benefits; and so on. Understood this way, it’s up to a community of language users to decide when these activities should commence rather than look to a bunch of academic philosophers to give us the “correct” answer. After all, what are philosophers going to do? They can only argue their moral intuitions, but they must ultimately admit that there is no source of confirmation that proves which of their intuitions is the “correct” one.






Death contains a social component, as depicted in The Court of Death by Rembrandt Peale


The problem with the various death-defining criteria is that, at least to me, they all have a ring of plausibility. (Otherwise, they wouldn’t be seriously discussed.) This especially includes Robert Veatch’s position that we should leave the determination of death up to the individual.* According to Veatch, if I believe that I’m as good as dead if I enter a state of permanent unconsciousness, then I should be treated as such: discontinue all life-prolonging care; prepare to dispose of my bodily remains in a respectable way; and, if my beating heart disturbs anyone, inject it with curare or a reasonable substitute to stop it.





The idea that death is a “natural occurrence” is only loosely and metaphorically true. In fact, death is largely a socio-cultural happening that derives from social needs or pressures—like the Harvard Brain Death criteria deriving from the need for a dead organ donor or from the need to assist the courts in their prosecution of murderers. The idea that philosophers can discern the “real and true” essence of death—because we mistakenly think the answer sits out there in the biosphere waiting to be discovered—seems an intellectual conceit. We don’t need philosophers to tell us how our social practices should work. It’s up to the rest of us to experiment with them and retain the ones that work best. And that’s what will happen if there are further chapters in the social narrative around defining death: Future generations will meet that challenge according to the survival pressures that living and dying present to them. Philosophical definitions of death might be interesting and even illuminating. But contemporary Western societies will most likely decide when death occurs according to pragmatically reasonable criteria rather than philosophically subtle ones.




*Veatch RM. The death of whole-brain death: the plague of the disaggregators, somaticists, and mentalists. Journal of Medicine and Philosophy 2005;30:353–378.



Want to cite this post?



Banja, J. (2015). Why defining death leaves me cold. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/11/why-defining-death-leaves-me-cold_3.html

Tuesday, November 3, 2015

Shrewder speculation: the challenge of doing anticipatory ethics well


by Dr. Hannah Maslen 





Hannah Maslen is a Research Fellow in Ethics at the Oxford Martin School and the Oxford Uehiro Centre for Practical Ethics. She currently works on the Oxford Martin Programme on Mind and Machine, where she examines the ethical, legal, and social implications of various brain intervention and interface technologies, from brain stimulation devices to virtual reality. 




This post is part of a series that recaps and offers perspectives on the conversations and debates that took place at the recent 2015 International Neuroethics Society meeting.




In its Gray Matters report, the United States Presidential Commission for the Study of Bioethical Issues underscored the importance of integrating ethics and neuroscience early and throughout the research endeavor. In particular, the Commission declared: 






"As we anticipate personal and societal implications of using such technologies, ethical considerations must be further deliberated.  


Executed well, ethics integration is an iterative and reflective process that enhances both scientific and ethical rigor." 






What is required to execute ethics integration well? How can philosophers make sure that their work has a constructive role to play in shaping research and policy-making?






In a recent talk at the International Neuroethics Society Annual Meeting, I reflected on this, and on the proper place of anticipation in the work that philosophers and neuroethicists do in relation to technological advance. Anticipating, speculating and keeping ahead of the technological curve are all laudable aims. It is crucial that likely problems and potential solutions are identified ahead of time, to minimize harm and avoid knee-jerk policy reactions. Keeping a step ahead inevitably requires all involved to make predictions about the way a technology will develop and about its likely mechanisms and effects. Indeed, philosophers will sometimes take leave from discussion of an actual emerging or prototype technology and extrapolate to consider the ethical challenges that its hypothetical future versions might present to society in the near future. Key features of the technology are identified, distilled and carefully subjected to analysis.






Gray Matters report

Speculating about cognitive enhancement 





Cognitive enhancement technologies – a topic discussed in depth in the second volume of the Gray Matters report – have received this sort of treatment. There has been a substantial amount of work dedicated to examining things like whether the use of cognitive enhancement drugs by students constitutes cheating, or whether professionals in high-risk jobs such as surgery or aviation should be required to take them. Some of this work appears to involve greater or lesser degrees of speculation. For example, a philosopher might present herself with the following sort of questions:



Imagine that cognitive enhancer X improves a student’s performance to a level that would be achieved through having extra private tutorials. Does her use of cognitive enhancer X constitute cheating?  


Imagine that cognitive enhancer Y is completely safe, and effective at remedying fatigue-related impairment. Should the surgeon be required to take cognitive enhancer Y? 



Working through these sorts of examples can generate conclusions of great conceptual interest. In relation to the first, we might get clearer on what cheating precisely amounts to, and perhaps which sorts of advantages are and are not unfair in an educational setting. In relation to the second, we might come to interesting conclusions about the limits of professional obligations, or perhaps about the relationship between cognitive capacities and responsibility.





However, working at this level of abstraction – as valuable as it is from a philosophical perspective – cannot give us what we need to determine, for example, whether Duke University should uphold its policy on the use of off-label stimulants as a species of academic dishonesty, or whether the Royal College of Surgeons should recommend the use of modafinil by surgeons as good practice. Abstracted work undeniably has its place, and is hugely interesting, but it does not integrate well with concrete discussions about scientific research directions and policy. Why is this?




Why might theoretical analysis be difficult to integrate? 




To some extent, conducting the sort of thought experiments involving cognitive enhancers X and Y requires that we strip away the messiness of the details of the technologies. This allows us to carefully isolate and vary the features we think will be morally relevant to see how they affect our intuitions and reasoning. We want the principal consideration in the surgeon case to be the fact that the drug remedies fatigue and reduces error. It also makes the case sufficiently abstract to be generalizable to a whole category of cognitive enhancers – there may be different drugs with a variety of properties that all share the impairment-reducing effect. The example might also extrapolate to near-future possible pharmaceuticals – we might not have such a drug now, but what if we did?






Pharmaceuticals image courtesy of Flickr user Waleed Alzuhair 

However – and this is the crucial point – many of the details that are stripped away to enable the philosophical question to be carefully defined and delineated are hugely relevant to determining what we should do; but we cannot add all this detail back in after reaching our conclusions and expect them to remain the same.





In relation to a university’s policy on enhancers, the reality is that different drugs affect different people differently; they may simultaneously enhance one cognitive capacity whilst impairing another; some drugs might have their principal effects on working memory, whilst others enhance wakefulness and task enjoyment. All these features and many others are relevant to the question of fairness and what our policy for particular drugs should be. Importantly, the specific features of different drugs might lead to different conclusions.





In relation to professional duties, it is going to matter that a drug like modafinil is not without side effects; that it can cause gastrointestinal upset; that individuals can perceive themselves as functioning better than they in fact are, and so on. These features bear, amongst other things, on effectiveness, permissibility of professional coercion, and also on whether reasonable policy options might sit somewhere between a blanket requirement and a blanket ban.




What to do?




It’s important that the reader does not take me to be saying that we should give up theoretical work on neurotechnologies. In fact, it is precisely through careful construction of the possible features of technologies that we can learn more about the socially important dimensions for which they have significance. If we want to get clearer on the boundaries of what we can and cannot require a surgeon to do, we need to consider many possibilities sitting just before and beyond the boundary: at some point, perhaps, a requirement would encroach too much into his life beyond his professional role to be justifiable. The degree of encroachment would have to be varied very slightly (almost certainly artificially) until we get to the point somewhere along the line from hand-washing to heroics where we identify the boundary.





Rather, my suggestion is that we need to be clear when we start an ethical analysis about whether we are doing something more conceptual or whether we want to make a statement about what should be done in a particular situation. When we want to do the latter, we have to make sure that we work with as much of the scientific detail as possible. This requires philosophers and ethicists to read scientific papers – perhaps at the detail of review articles – to make sure they retain the detail necessary to offer a practical recommendation. Ideally, such work would be completed in collaboration with scientists, or at least subjected to their scrutiny.





Of course, there’s a difference between speculating because you are not an expert on that technology and speculating because the information is not yet available. There should be none of the former, and the latter should be carefully managed so that recommendations do not far outstrip the limited information base: there’s a lot more we need to know about incorporating computer chips into brains, for example, before we can even start to say anything practical about what should and shouldn’t be done.





Scientific black boxes are to some extent inevitable when speculating about neurotechnological advances. The task for practical ethicists is to open as many as they can and to be mindful of the potential ethical significance of those they cannot. They also need to be careful to determine when they want to conduct theoretical analysis, using real and imagined technologies to illuminate conceptual truths, and when they want to argue for a course of action in relation to a particular neuroscientific application or technology, the details of which are crucial in order for ethical integration to be well executed.







Want to cite this post?



Maslen, H. (2015). Shrewder speculation: the challenge of doing anticipatory ethics well. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/11/shrewder-speculation-challenge-of-doing.html

Tuesday, October 27, 2015

Is football safe for brains?

by Dr. L. Syd M Johnson










Dr. Johnson is Assistant Professor of Philosophy & Bioethics in the Department of Humanities at Michigan Technological University. Her work in neuroethics focuses on disorders of consciousness and sport-related neurotrauma. She has published several articles on concussions in youth football and hockey, as well as on the ethics of return-to-play protocols in youth and professional football.




This post is the first of several that will recap and offer perspectives on the conversations and debates that took place at the recent 2015 International Neuroethics Society meeting.




At the International Neuroethics Society annual meeting in Chicago this month, Nita Farahany and a panel from the Football Players Health Study at Harvard University (FPHS) headlined the public talk “Is professional football safe? Can it be made safer?” The panel declined to provide direct answers to these important questions, but the short answers are “No,” and “Not by much,” respectively.




In recent years, there has been much public concern about the impact of football and other neurotraumatic sports on the brains of athletes. The neuroethics community has been somewhat slow in picking up sport-related concussion and Chronic Traumatic Encephalopathy (CTE) as topics of neuroethical concern. Public and media concern have been fueled by reports stating that the brains of deceased athletes show evidence of the distinctive tauopathy of CTE, attributed by researchers like Bennet Omalu (who described the first case in a retired football player in 2005) and Ann C. McKee (Boston University) to brain trauma sustained while playing sports. To date, there have been approximately 150 documented cases of CTE, and an exceptionally high number of the brains examined by Omalu, McKee, and colleagues have been positive for the characteristic tau depositions.



Of course, there is selection bias in neuropathological case studies, since few retired athletes donate their brains to research after death. Neuroscientist Alvaro Pascual-Leone of the FPHS was openly dismissive of the existing CTE research during his brief discussion of it, criticizing the work as woefully underpowered. The existing science is worth little, Pascual-Leone told the audience, implying that the current alarm about the neurological effects of football-related brain trauma is premature, and probably overblown.







The speakers commented that there are some 15,000 retired, living NFL players—a small, elite group—and the FPHS is attempting to recruit 10,000 of them for its studies. Funded by the National Football League’s Players’ Union, the FPHS proposes to tackle whole lifespan player health through population studies to assess the scope of health problems experienced by retired players, pilot studies to develop interventions, and a law and ethics component that outlines ethical principles important to considerations of player health and is sensitive to the unique conflicts of interest in professional sports. Only some of the work being done by the FPHS addresses brain trauma and its effects on athlete health—that part of their work was, of course, of most interest to the neuroethicists assembled for the meeting, but it received scant attention from the panel. Judging by the questions from the audience, they mostly had brain trauma on their minds as well.





Moderator Nita Farahany and panel members Alvaro Pascual-Leone, I. Glenn Cohen, and Damien Richardson (pictured from left to right).

Concussion and neurotrauma in professional football are the subjects of much neuroscientific activity, but the bigger problem, briefly alluded to by law professor I. Glenn Cohen, is not what happens to adult, professional athletes, but what happens to the large number of junior and amateur players. While there are millions of high school football players in the United States, only several thousand of these players continue to play at the college level, and an even smaller fraction go on to play in the professional ranks. This fall, seven US high school football players have already died, most of them due to head trauma-related injuries. The majority of reported concussions in the US occur in high school football players, while the impact of all that head trauma remains largely unknown and understudied. Damien Richardson, a former NFL player and now a doctor and advisor to the FPHS, discussed from the panel his own long path to the pros, beginning with Pop Warner football when he was a kid and continuing through high school and college ball. When asked if he thought pro football was safe, he demurred, but explained that, knowing what he knows now, he would still play, but would play differently than he did.








Richardson emphasized the need for change in professional football, change that would trickle down to influence the next generation of players coming up through the ranks. That model of top-down change has been endorsed by the NFL as well, but there is already evidence of bottom-up change, with greater attention to and concern about safety leading to fewer kids playing football, and opting for other sports instead. For many young athletes and their parents, there’s no longer any question about the safety of football.



Want to cite this post?



Johnson, LSM. (2015). INS RECAP: Is professional football safe? Can it be made safer? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/10/ins-recap-is-professional-football-safe.html

Tuesday, October 20, 2015

Technologies of the extended mind: Implications for privacy of thought

by Peter Reiner, PhD






Dr. Reiner is Professor and co-founder of the National Core for Neuroethics, at the University of British Columbia. Dr. Reiner began his academic career studying the cellular and molecular physiology of the brain, and in 1998, Dr. Reiner became President and CEO of Active Pass Pharmaceuticals, a drug discovery company that he founded to tackle the scourge of Alzheimer's disease. Upon returning to academic life in 2004, Dr. Reiner refocused his scholarly work in the area of neuroethics. He is also an AJOB Neuroscience board member.






Louis Brandeis in his law office, 1890.


In 1890, Samuel Warren and his law partner Louis Brandeis published what has become one of the most influential essays in the history of US law. Entitled The Right to Privacy [1], the article is notable for outlining the legal principles that protect privacy of thought. But it is not just their suggestions about privacy that are illuminating – it is their insight into the ways that law has changed over historical time scales that makes the paper such a classic. In very early times, they write, “the law gave a remedy only for physical interference with life and property...[and] liberty meant freedom from actual restraint.” Over time, as society began to recognize the value of the inner life of individuals, the right to life came to mean the right to enjoy life; protection of corporeal property expanded to include the products of the mind, such as literature and art, trademarks and copyrights. In a passage that resonates remarkably well with the modern experience, they point out that the time was nigh for the law to respond to changes in technology.






Recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual …the right “to be let alone”. Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that "what is whispered in the closet shall be proclaimed from the house-tops."





The notion that privacy is problematic in a world dominated by instant communication is hardly new: as long ago as 1999, Sun Microsystems CEO Scott McNealy famously stated, “You have zero privacy anyway. Get over it.”[2] This early sentiment on the invasiveness of technology has been borne out in chilling fashion with revelations that governments and corporations extensively monitor internet and cell phone use. It seems to me that the time is right to consider the proposition that continued changes in technology – in particular with respect to the life of the mind – require that we revisit the contours of the issue known as privacy of thought.





An important starting point is the extended mind hypothesis[3], the idea that cognition extends beyond the brain into the world at large. One example from the original paper – the case of Otto and Inga – illustrates the issue quite nicely. Inga hears about an exhibition at a museum that she recalls is on 53rd Street and sets off to see the artwork. Her neighbor Otto has dementia and so has made a practice of storing important information in a small notebook that he carries with him. When he hears of the exhibition, he consults his notebook, finds that the museum is on 53rd Street and, just like Inga, sets off for the same destination. Thus the cognitive function of storing information is mediated by the brain in one case and by pen and paper in the other.





The claims of the extended mind hypothesis are radical: going beyond suggesting that human cognition relies on external structures for scaffolding and support, the extended mind thesis suggests that the physical vehicles that realize (at least some of) our cognitive processes lie outside of the bounds of the skull. Yet the concept resonates with a key feature of modern life: for many, there is a growing sense that computers, smartphones, and increasingly ‘the internet of things’ function as sophisticated extensions of our cognitive toolkit[4]. Conceiving of the mind as a blend between brain and algorithm challenges long-held assertions that there is something exceptional about the brain[5], but one ignores reality at one’s peril. Of late, I have begun to refer to the entire suite of algorithmic agents as "Technologies of the Extended Mind."





If we return to the question of privacy and situate the discussion in the context of a worldview that considers "Technologies of the Extended Mind" a growing reality, we see that there is some new and interesting terrain to explore. It is well known that both breaches and oversharing of our digital information have grown from occasional to everyday events. But if "Technologies of the Extended Mind" really are extensions of our cognitive toolkits, at some point the ability of others (governments, corporations, employers, friends, hackers, and more) to glimpse this information crosses the line from being a run-of-the-mill invasion of privacy to a more worrisome intrusion upon privacy of thought. Defining this dividing line – even if it turns out to be a fuzzy boundary – is an important challenge for neuroethical discourse.





The fundamental insight of Warren and Brandeis – that changes in technology require us to at least revisit if not update our moral norms – is as relevant today as it was 125 years ago.





REFERENCES





1. Warren, S. D. & Brandeis, L. D. The Right to Privacy. Harvard Law Review 4, 193 (1890).


2. Sprenger, P. Sun on Privacy: “Get Over It." Wired News (1999).


3. Clark, A. & Chalmers, D. The extended mind. Analysis 58, 7–19 (1998).


4. Pew Research Center. Digital Life in 2025. 1–61 (2014) at http://www.pewinternet.org/2014/03/11/digital-life-in-2025/ 




5. Reiner, P. B. The rise of neuroessentialism. in: Oxford Handbook of Neuroethics (eds. Illes, J. & Sahakian, B. J.) 161–175 (Oxford University Press, 2011).

Tuesday, October 13, 2015

The Neuroethics Blog Reader hot off the presses!


It is my pleasure to present you with our first edition of The Neuroethics Blog reader. This reader includes some of the most popular posts on the site and highlights our junior talent.





While the blog showcases cutting-edge debates in neuroethics, it also serves as a mechanism for mentoring junior scholars and students and providing them with exciting opportunities to have their pieces featured alongside established scholars in the field. In addition, the blog allows for community building, inviting scholars from multiple disciplines to participate. Our contributors have included individuals at various levels of education from fields such as law, neuroscience, engineering, psychology, English, medicine, philosophy, women’s studies, and religion, to name a few. Each blog post is a collaborative process, read and edited numerous times by the editorial leadership in partnership with the author.





We aim to continue to mentor and deliver quality posts that serve to cultivate not only our neuroethics academic community, but also members of the public who may be cultivating their own interests in neuroethics. Whether for direct applications in your profession or simply to understand the world in which we live, we hope the blog will help you navigate the implications of new neurotechnologies and explore what is knowable about the human brain.





At this time, I'd like to thank our amazing editorial team including Lindsey Grubbs (Managing Editor), Carlie Hoffman (Editor of this reader), Ryan Purcell, and Katie Strong. I'd also like to highlight our previous Managing Editors Dr. Julia Haas and Julia Marshall who have since graduated and are continuing their scholarship in neuroethics, as well as Jonah Queen who was there from the very beginning. Stay tuned for more great things from this group along with all of our talented contributors.





Thank you for taking the time to embark on this journey with us and please enjoy this reader!





P.S. If you are lucky enough to find yourself at the International Neuroethics Society conference this Oct 15-16, we will have limited printed copies available. Just look for folks wearing the "Ask Me About AJOB Neuroscience" buttons.