
Tuesday, November 24, 2015

Widening the use of deep brain stimulation: Ethical considerations in research on DBS to treat Anorexia Nervosa

by Carolyn Plunkett






Carolyn Plunkett is a Ph.D. Candidate in the Philosophy Department at The Graduate Center of City University of New York. She is also an Ethics Fellow in The Bioethics Program at the Icahn School of Medicine at Mount Sinai, and a Research Associate in the Division of Medical Ethics at NYU Langone Medical Center. Carolyn will defend her dissertation in spring 2016, and, beginning July 2016, will be a Rudin Post-Doctoral Fellow in the Divisions of Medical Ethics and Medical Humanities at NYU Langone Medical Center. 







This post is part of a series that recaps and offers perspectives on the conversations and debates that took place at the recent 2015 International Neuroethics Society meeting.





Karen Rommelfanger, founding editor of The Neuroethics Blog, heard a talk I gave on deep brain stimulation (DBS) at Brain Matters! 3 in 2012. Three years later, she heard a brief synopsis of a paper I presented a few weeks ago at the International Neuroethics Society Annual Meeting. Afterward, she came up to me and said, “Wow! Your views have changed!” I had gone from being wary about using DBS in adults, much less minors, to defending its use in teens with anorexia nervosa. She asked me to write about this transition for this blog, and present my recent research.






I was introduced to DBS in a philosophy course four years ago and was immediately captivated by questions like: Am I really me with ongoing DBS? Are my emotions really mine? What is an “authentic self,” or an “authentic emotion,” anyway? I wrote a term paper on the topic, and soon after presented it at a couple of conferences. I wrote a post for the Neuroethics Women Leaders blog and published an open peer commentary in AJOB Neuroscience. It was a hot topic, as a couple of other authors were asking the same questions at around the same time.





But when I presented my ideas to an audience at Brain Matters! 3, in Cleveland, Ohio, I was taken aback by one participant’s question. A neurosurgeon asked what she should tell her patients, and their families, when they request DBS to treat refractory depression or OCD. “Should I tell them no because philosophers are concerned that their emotions might not be authentic?” she wondered. I think I gave a wishy-washy answer about informed consent. It did not satisfy my interlocutor, or me.







Schematic of the deep brain stimulation setup


Her question has stuck with me. It has forced me to reexamine the context in which I think about DBS, from the purely philosophical to the bioethical. Put another way, the neurosurgeon’s concern forced me to reconsider my earlier ideas about DBS—my purely philosophical questions about the nature of identity and emotions, and the relationship between them—and put them in conversation with more concrete bioethical questions about how DBS is actually used and by whom, and how it should be used, including how access to DBS and clinical research could be made more equitable and respectful. Her question drove home the point that bioethicists must take seriously the lived experiences of patients, families, clinicians, researchers, and others involved in the DBS process. These participants are not just characters in a thought experiment, as an analogy between DBS and Nozick’s infamous experience machine would suggest.






So I turned my focus from DBS itself to the experiences of those with the conditions that DBS aims to treat, like anorexia nervosa, depression, and addiction. Do they experience authentic selves or authentic desires? This shift ultimately drove me toward my dissertation project, which defends a subjectivist conception of normative reasons—or, roughly, the view that what we have reason to do is rooted in who we are, what we value, and what we desire. (There is a time and place for philosophy, after all.)





A few more classes and papers further propelled my changing perspective, leading me to my current research agenda on ethical issues in research on DBS for the treatment of anorexia nervosa. The central question I’m now investigating: Given that researchers are testing the efficacy of DBS for anorexia nervosa, should we consider enrolling adolescents in those trials?





Anorexia nervosa, or AN, is characterized by a distorted body image, excessive dieting and exercise that leads to severe weight loss, and a pathological fear of becoming fat. Though it does occur in men and adult women, AN has the highest incidence in adolescent women, and the average age of onset is decreasing. It’s not the most prevalent mental illness, but it is the deadliest: AN has the highest mortality of all mental illnesses and eating disorders. This is because of both dangerous physiological consequences and a high rate of suicide among anorexic individuals.





Despite its high morbidity and mortality, there are no well-defined treatments for AN, and the prospects for recovery are, unfortunately, not great. Among those who receive a diagnosis, the full recovery rate from AN is about 50%. About 30% of patients make partial recoveries, and the remaining 20% do not recover, even with repeated attempts at treatment — quite a substantial subset of AN patients.





For those who fall in that 20% — who are sometimes referred to as having “chronic” or “longstanding” AN — there are no evidence-based treatment options, even though there is evidence that those with chronic or longstanding AN require different treatment than those at an earlier stage of illness.* To date, there has been only one controlled study of adults with chronic AN, and no studies of treatments for adolescents with chronic AN. There is clearly a great need for research within the subgroup of patients, in particular adolescents, with chronic AN.








A representation of anorexia nervosa from flickr user Benjamin Watson

This is where DBS comes in. Readers of this blog are familiar with DBS, but they may not know that it has been shown to be a promising treatment for AN, even chronic AN, and even AN in a small number of teens. An emerging neurobiological understanding of AN supports the notion that DBS will be effective. Plus, case studies and case series [1, 2, 3] on the use of DBS in 14 women with AN have shown it to be effective in increasing body weight and decreasing AN-associated behaviors in 11 of them.





DBS is not without risk. Along with the physical risks of hemorrhage, infection, and anesthesia, especially among patients with AN, there may be unforeseen negative psychological effects on sufferers’ self-conception and well-being. We might expect this to be especially true in adolescents, who are right at the developmental stage of figuring out who they are and who they want to be. A “brain pacemaker” may disrupt that task.





These risks of DBS can be considered reasonable only if substantial benefit is expected from the trials.





I won’t go through an entire risk/benefit analysis here, but I do believe that such an analysis supports the idea that the associated risks of enrolling teens with chronic AN are reasonable, and that the harms of not allowing them to participate in research constitute a great risk. That is, the harm of excluding teens with AN from clinical research, and thus continuing the status quo in the treatment of chronic AN, is so substantial that it justifies the inclusion of teens, even in high-risk research.





You might be thinking: Why not give the current research on adults a few years to see if DBS is an effective treatment for chronic AN in that population, and then move on to kids? Indeed, some ethicists have said that this is the way we ought to proceed.





First, a negative response in adults does not necessarily mean that DBS would fail in teens. In fact, data from a small study in China that enrolled teens in trials of DBS support the hypothesis that teens may respond better than adults. We should not predicate research on adolescents on data from adult trials.





And second, waiting for the adult data continues the trend of preventing participation in clinical research among a population that has, arguably, been “over-protected” from research to its detriment. The seriousness of AN and its high incidence in teens ground an interest in improved treatments for all teens with AN.








Should DBS be performed on this "over-protected" population?

There remain obstacles to engaging this population in clinical research on DBS. Because I’m proposing trials for minors, we need to address barriers not only to establishing assent in teens with AN but also to obtaining parental consent. Both are problematic with this population, but I think the problems are surmountable.





The upshot of the argument I pose is that we should look to enfranchise so-called “vulnerable” populations that have historically been excluded from clinical research. Justice demands it. An ethical and legal framework for clinical research should support responsible and safe research on treatment protocols for members of these populations rather than discourage it.





My views on DBS have changed after spending more time with the topic, considering it from a wider variety of perspectives, and asking different questions. I do not mean to imply that one set of questions or a particular perspective ought to be prioritized. Rather I hope to have highlighted the value of recognizing the limits of one’s research and exploring one’s interests from a variety of viewpoints to reach more inclusive and considered judgments.



*Some researchers call for classifying stages of AN to better guide treatment and research. Morbidity and mortality worsen, and recovery becomes less likely, the longer the disease progresses. Treatment protocols typically do not distinguish between someone with a first diagnosis and someone who has been ill for 5 or 10 years, and research studies usually do not divide patients with AN into further subgroups based on length of illness, even though there is evidence that those who have had the illness longer require different care than those at earlier stages.



Want to cite this post?



Plunkett, C. (2015). Widening the use of deep brain stimulation: Ethical considerations in research on DBS to treat Anorexia Nervosa. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/11/widening-use-of-deep-brain-stimulation.html

Tuesday, November 17, 2015

Do you have a mouse brain? The ethical imperative to use non-human primates in neuroscience research

by Carlie Hoffman




Much of today’s neuroscience research investigating human brain diseases and disorders utilizes animal models. Animals ranging from flies to rodents to non-human primates are routinely used to model various disorders, with mice being the most commonly utilized. Scientists employ these animal models to approximate human conditions and disorders in an accessible manner, with the ultimate purpose of applying the findings derived in the animal back to the human brain.







Rhesus macaques, a species of NHP often used in research.


The use of animals in research has been the source of much debate, with people either supporting or objecting to their use, and objections arising from animal rights activists, proponents of critical neuroscience such as Nikolas Rose and Joelle Abi-Rached, and others. A main focus of this debate has also been the use of non-human primates (NHPs) in research. The cognitive functions and behaviors of NHPs are more closely related to those of humans than are those of rodents, leading primates to be held as the closest approximation of human brain functioning in both normal and disease states. Though some say NHP research is essential, others call for scaling it down or even completely eliminating it. Strides have already been made towards the reduction and removal of NHPs from experimental research, as displayed by the substantial justification required to perform experiments utilizing them, the increasing efforts going towards developing alternative non-animal models (including the Human Brain Project’s goal to create a computer model of the human brain), and the recent reduction of the use of chimpanzees in research [2, 6]. A case was even brought to the New York Supreme Court earlier this year to grant personhood status to two research chimpanzees.






However, if NHPs are completely removed from human brain disease research, this leaves rodents as the primary non-human animal model for the human condition. This raises an important question: are we (both the general public and scientists) okay with accepting a detailed understanding of the mouse brain as being equivalent to a detailed understanding of the human brain? Dr. Yoland Smith, a world-renowned Emory researcher in neurodegenerative disease, says no— work with both NHPs and rodent models must continue to complement and inform each other.






Smith came to Emory University in 1996 and currently works at the Emory-affiliated Yerkes National Primate Research Center. Smith’s work is 90% rhesus macaque-based and 10% rodent-based, with the aim of most of his rodent work being to fine-tune new approaches and methods to apply to his rhesus macaques. While many animal rights advocates feel it is an ethical imperative to eliminate NHP research, Dr. Smith believes there are currently more pressing obstacles to the continued use of NHPs to study neuroscience, namely the increasing regulatory constraints on NHP research and the challenges in translating rapid technology development from mice to NHPs. 






Mice are the animals most commonly used in research.


For instance, the number of rodents (almost exclusively mice) utilized in biomedical research has been increasing at a dramatic pace compared with that of primates, the latter accounting for only about 0.1-0.3% of all animals currently used in biomedical research in the United States and Europe. This is in part due to the ease of acquiring and maintaining rodent colonies as compared to NHP colonies: the cost of maintaining a mouse at Emory ranges from $0.83-$4.13 per day, while the cost of maintaining a NHP is in the range of $80-$110 per day [3]. While ultimately rodent research is not cheap (most researchers maintain rodent colonies containing anywhere from tens to hundreds of mice, causing total per diem costs to rise rapidly), the relatively lower cost of maintenance and ease of access affiliated with rodent use have allowed for extensive experimental optimization and exploratory testing to be performed in rodents, but not in NHPs. As a result, there has been advancement and development of rodent-based technologies over NHP-based technologies, and techniques such as in vitro electrophysiology, optogenetics, and transgenic strains have been developed for use in mice. Because such techniques do not cross easily into NHP research, they are not currently available for use in the NHP brain. Consequently, new methods are continually being applied and developed almost exclusively for use in the mouse, while the needed time and money to adapt these assays to NHPs are not being spent. Scientists are thus able to dig deeper and answer more sophisticated questions about the mouse brain, while NHP research is being left in the dust. NHP research may ultimately come to an end not just because of the ethical arguments against it, but because rodent research is leaps and bounds ahead of it.





This trajectory also raises questions about what role we think NHP research should continue to play in the field of neuroscience: Can NHP research ever match the pace of rodent research? Should we use NHPs at all? Can all of our questions about the human brain and human health be answered using rodent models?





Smith’s answer to this final question is no. Although Smith feels rodent research produces highly valuable information, contributes to advances in neuroscience, and must continue to grow at a fast pace, he posits that it would be naïve to believe that gaining knowledge about the mouse brain is sufficient to achieve our ultimate goal of “getting to the human.” While it would be easy to say that the human brain is equivalent to the mouse brain, he states this is simply not true—particularly when examining complex brain functions and diseases that involve high-order cortical processing. For instance, rodents are often used to model many complex neuropsychiatric diseases affecting the prefrontal cortex and influenced by social and cultural components; however, there are striking differences in the size and complexity of the prefrontal cortex and other associative cortices in primates versus rodents, and animal models are unable to replicate the integral role culture plays in psychiatric disease pathology (a topic also described in a previous blog post) [1, 4]. The overly simplistic organization of the rodent cerebral cortex compared with that of humans, Dr. Smith believes, makes NHP research absolutely essential. If we hope to make significant progress toward the development of new therapeutic strategies for complex cognitive and psychiatric disorders, then there is an urgent need for the ongoing development of NHP models of such prevalent diseases of the human brain.







Comparative images of human, mouse, and rhesus brains.


Smith also believes there is a critical knowledge gap between the rodent brain and the human brain that cannot be addressed without a deeper understanding of both the healthy and diseased NHP brain. NHP research has already partially helped to fill this gap [5], as evidenced by the breakthroughs made in understanding disease pathophysiology and the development of surgical therapies for Parkinson’s disease through the use of NHP models, the focus of Smith and his colleagues’ work at Emory. However, Smith notes that these advances would not have been possible without a strong foundation of rodent research. This interplay between primate and rodent work serves as a model for how Smith hopes the field of neuroscience will progress in the future, with both rodent research and primate research growing together in parallel and feeding into one another, ultimately helping researchers to “get to the human.”





If significant strategies are not put in place by funding agencies to maintain continued support for NHP research, this parallel growth will not be possible. Smith urges that we maintain NHP research by continuing to develop technologies for use in NHPs and by generating discussions about the use and drawbacks of NHPs in neuroscience research. Smith feels that strides toward maintaining NHP research should be spearheaded by larger funding institutions, such as the National Institutes of Health (NIH). He proposes there should be a call for grant applications specifically involving NHP research as well as the formation of an NIH-based committee to discuss how we can perform NHP research in a successful way; to assess whether we, as a scientific community, are always moving toward the human; and to examine the ethical and practical limitations of using NHPs in research. Smith also encourages scientists to be proactive and unafraid to engage the community in discussions of the ethical dilemmas surrounding primate research—we cannot ignore these issues and instead must be the mediators of such ethical discussions.





Dr. Smith believes NHP research is essential to advancing the understanding of the human brain and improving human health. As such, Smith does not think that this research will ever fully disappear. While young researchers might find the current funding climate and research constraints daunting, he claims it is the responsibility of current primate researchers to continue to train new scientists; if we do not train new NHP scientists, we will just be contributing to the loss. Therefore, while NHP research is receiving pressure both from society and from scientists, the use of primates in neuroscience research must continue. After all, as Smith stated, “I do not have a mouse brain.”



Works Cited



1. Ding, SL (2013) Comparative anatomy of the prosubiculum, subiculum, presubiculum, postsubiculum, and parasubiculum in human, monkey, and rodent. J Comp Neurol 521: 4145-4162. doi: 10.1002/cne.23416



2. Doke, SK, & Dhawale, SC (2015) Alternatives to animal testing: A review. Saudi Pharmaceutical Journal 23: 223-229. doi: http://dx.doi.org/10.1016/j.jsps.2013.11.002



3. International Animal Research Regulations: Impact on Neuroscience Research: Workshop Summary. (2012). Washington DC: National Academy of Sciences.



4. Nestler, EJ, & Hyman, SE (2010) Animal models of neuropsychiatric disorders. Nat Neurosci 13: 1161-1169. doi: 10.1038/nn.2647



5. Phillips, KA, Bales, KL, Capitanio, JP, Conley, A, Czoty, PW, t Hart, BA, Hopkins, WD, Hu, SL, Miller, LA, Nader, MA, Nathanielsz, PW, Rogers, J, Shively, CA, & Voytko, ML (2014) Why primate models matter. Am J Primatol 76: 801-827. doi: 10.1002/ajp.22281



6. Tardif, SD, Coleman, K, Hobbs, TR, & Lutz, C (2013) IACUC review of nonhuman primate research. ILAR J 54: 234-245. doi: 10.1093/ilar/ilt040





Want to cite this post?



Hoffman, C. (2015). Do you have a mouse brain? The ethical imperative to use non-human primates in neuroscience research. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/11/do-you-have-mouse-brain-ethical.html

Monday, November 9, 2015

Why defining death leaves me cold

by John Banja, PhD




*Editor's note: In case you missed our annual Zombies and Zombethics (TM) Symposium entitled Really, Most Sincerely Dead. Zombies, Vampires and Ghosts. Oh my! you can watch our opening keynote by Dr. Paul Root Wolpe by clicking on the image below. We recommend starting at 9:54 min.











Two weeks ago, I attended a panel session on brain death at the annual conference of the American Society for Bioethics and Humanities. Forgive the bad pun, but the experience left me cold and …lifeless(?). The panel consisted of three scholars revisiting the more-than-decade-old conversation on defining death. Despite a standing-room-only crowd, there was utterly nothing new. Rather, we heard a recitation of the very familiar categories that have historically figured in the “What does it mean to be dead?” debate, e.g., the irreversible cessation of cardio-respiratory activity, the Harvard Brain Death criteria, the somatic integration account, the 2008 Presidential Commission’s “loss of the drive to breathe,” and so on. I walked out thinking that we could come back next year, and the year after that, and the year after that, and get no closer to resolving what it means to be dead.









Dr. Banja in his natural habitat.

I’d suggest that the reason for this failure is scholars’ stubborn persistence in mistaking a social practice, i.e., defining death, for a metaphysical event. Philosophers who insist on keeping the “defining death” conversation alive are invariably moral realists: They mistakenly believe that death is an objectively discernible, universally distributed, a priori, naturally occurring phenomenon that philosophical reasoning and analysis can divine. Now, the irreversible cessation of cardio-respiratory functioning or the cessation of all brain functioning certainly are actual biophysiological events. But determining death requires a social decision because its primary purpose consists in triggering various social practices like terminating medical care or preparing a body for organ recovery; commencing rituals of grieving or mourning; disposing of the body in a way that protects the public’s health; securing life insurance or inheritance benefits; and so on. Understood this way, it’s up to a community of language users to decide when these activities should commence rather than look to a bunch of academic philosophers to give us the “correct” answer. After all, what are philosophers going to do? They can only argue their moral intuitions, but they must ultimately admit that there is no source of confirmation that proves which of their intuitions is the “correct” one.






Death contains a social component, as depicted in The Court of Death by Rembrandt Peale


The problem with the various death defining criteria is that, at least to me, they all have a ring of plausibility. (Otherwise, they wouldn’t be seriously discussed.) This especially includes Robert Veatch’s position that we should leave the nature of death determination up to the individual.* According to Veatch, if I believe that I’m as good as dead if I enter a state of permanent unconsciousness, then I should be treated as such: discontinue all life prolonging care; prepare to dispose of my bodily remains in a respectable way; and if my beating heart disturbs anyone, inject it with curare or a reasonable substitute to stop it.





The idea that death is a “natural occurrence” is only loosely and metaphorically true. In fact, death is largely a socio-cultural happening that derives from social needs or pressures—like the Harvard Brain Death criteria deriving from the need for a dead organ donor or to assist the courts in their prosecution of murderers. The idea that philosophers can discern the “real and true” essence of death—because we mistakenly think the answer sits out there in the biosphere waiting to be discovered—seems an intellectual conceit. We don’t need philosophers to tell us how our social practices should work. It’s up to the rest of us to experiment with them and retain the ones that work best. And that’s what will happen if there are further chapters in the social narrative around defining death: Future generations will meet that challenge according to the survival pressures that living and dying present to them. Philosophical definitions of death might be interesting and even illuminating. But contemporary, western societies will most likely decide when death occurs according to pragmatically reasonable criteria rather than philosophically subtle ones.




*Veatch RM. The death of whole-brain death: the plague of the disaggregators, somaticists, and mentalists. Journal of Medicine and Philosophy 2005;30:353–378.



Want to cite this post?



Banja, J. (2015). Why defining death leaves me cold. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/11/why-defining-death-leaves-me-cold_3.html

Tuesday, November 3, 2015

Shrewder speculation: the challenge of doing anticipatory ethics well


by Dr. Hannah Maslen 





Hannah Maslen is a Research Fellow in Ethics at the Oxford Martin School and the Oxford Uehiro Centre for Practical Ethics. She currently works on the Oxford Martin Programme on Mind and Machine, where she examines the ethical, legal, and social implications of various brain intervention and interface technologies, from brain stimulation devices to virtual reality. 




This post is part of a series that recaps and offers perspectives on the conversations and debates that took place at the recent 2015 International Neuroethics Society meeting.




In its Gray Matters report, the United States Presidential Commission for the Study of Bioethical Issues underscored the importance of integrating ethics and neuroscience early and throughout the research endeavor. In particular, the Commission declared: 






"As we anticipate personal and societal implications of using such technologies, ethical considerations must be further deliberated.  


Executed well, ethics integration is an iterative and reflective process that enhances both scientific and ethical rigor." 






What is required to execute ethics integration well? How can philosophers make sure that their work has a constructive role to play in shaping research and policy-making?






In a recent talk at the International Neuroethics Society Annual Meeting, I reflected on this, and on the proper place of anticipation in the work that philosophers and neuroethicists do in relation to technological advance. Anticipating, speculating and keeping ahead of the technological curve are all laudable aims. It is crucial that likely problems and potential solutions are identified ahead of time, to minimize harm and avoid knee-jerk policy reactions. Keeping a step ahead inevitably requires all involved to make predictions about the way a technology will develop and about its likely mechanisms and effects. Indeed, philosophers will sometimes take leave from discussion of an actual emerging or prototype technology and extrapolate to consider the ethical challenges that its hypothetical future versions might present to society in the near future. Key features of the technology are identified, distilled and carefully subjected to analysis.






Gray Matters report

Speculating about cognitive enhancement 





Cognitive enhancement technologies – a topic discussed in depth in the second volume of the Gray Matters report – have received this sort of treatment. There has been a substantial amount of work dedicated to examining things like whether the use of cognitive enhancement drugs by students constitutes cheating, or whether professionals in high-risk jobs such as surgery or aviation should be required to take them. Some of this work appears to involve greater or lesser degrees of speculation. For example, a philosopher might present herself with the following sort of questions:



Imagine that cognitive enhancer X improves a student’s performance to a level that would be achieved through having extra private tutorials. Does her use of cognitive enhancer X constitute cheating?  


Imagine that cognitive enhancer Y is completely safe, and effective at remedying fatigue-related impairment. Should the surgeon be required to take cognitive enhancer Y? 



Working through these sorts of examples can generate conclusions of great conceptual interest. In relation to the first, we might get clearer on what cheating precisely amounts to, and perhaps which sorts of advantages are and are not unfair in an educational setting. In relation to the second, we might come to interesting conclusions about the limits of professional obligations, or perhaps about the relationship between cognitive capacities and responsibility.





However, working at this level of abstraction – as valuable as it is from a philosophical perspective – cannot give us what we need to determine, for example, whether Duke University should uphold its policy on the use of off-label stimulants as a species of academic dishonesty, or whether the Royal College of Surgeons should recommend the use of Modafinil by surgeons as good practice. Abstracted work undeniably has its place, and is hugely interesting, but it does not integrate well with concrete discussions about scientific research directions and policy. Why is this?




Why might theoretical analysis be difficult to integrate? 




To some extent, conducting the sort of thought experiments involving cognitive enhancers X and Y requires that we strip away the messiness of the details of the technologies. This allows us to carefully isolate and vary the features we think will be morally relevant to see how they affect our intuitions and reasoning. We want the principal consideration in the surgeon case to be the fact that the drug remedies fatigue and reduces error. It also makes the case sufficiently abstract to be generalizable to a whole category of cognitive enhancers – there may be different drugs with a variety of properties that all share the impairment-reducing effect. The example might also extrapolate to near-future possible pharmaceuticals – we might not have such a drug now, but what if we did?






Pharmaceuticals image courtesy of Flickr user Waleed Alzuhair 

However – and this is the crucial point – many of the details that are stripped away to enable the philosophical question to be carefully defined and delineated are hugely relevant to determining what we should do; but we cannot add all this detail back in after reaching our conclusions and expect them to remain the same.





In relation to a university’s policy on enhancers, the reality is that different drugs affect different people differently; they may simultaneously enhance one cognitive capacity whilst impairing another; some drugs might have their principal effects on working memory, whilst others enhance wakefulness and task enjoyment. All these features and many others are relevant to the question of fairness and what our policy for particular drugs should be. Importantly, the specific features of different drugs might lead to different conclusions.





In relation to professional duties, it is going to matter that a drug like modafinil is not without side effects; that it can cause gastrointestinal upset; and that individuals can perceive themselves as functioning better than they in fact are. These features bear, amongst other things, on effectiveness, on the permissibility of professional coercion, and on whether reasonable policy options might sit somewhere between a blanket requirement and a blanket ban.




What to do?




It’s important that the reader does not take me to be saying that we should give up theoretical work on neurotechnologies. In fact, it is precisely through careful construction of the possible features of technologies that we can learn more about the socially important dimensions for which they have significance. If we want to get clearer on the boundaries of what we can and cannot require a surgeon to do, we need to consider many possibilities sitting just before and beyond the boundary: at some point, perhaps, a requirement would encroach too much into his life beyond his professional role to be justifiable. The degree of encroachment would have to be varied very slightly (almost certainly artificially) until we get to the point somewhere along the line from hand-washing to heroics where we identify the boundary.





Rather, my suggestion is that we need to be clear when we start an ethical analysis about whether we are doing something more conceptual or whether we want to make a statement about what should be done in a particular situation. When we want to do the latter, we have to make sure that we work with as much of the scientific detail as possible. This requires philosophers and ethicists to read scientific papers – perhaps at the detail of review articles – to make sure they retain the detail necessary to offer a practical recommendation. Ideally, such work would be completed in collaboration with scientists, or at least subjected to their scrutiny.





Of course, there’s a difference between speculating because you are not an expert on that technology and speculating because the information is not yet available. There should be none of the former, and the latter should be carefully managed so that recommendations do not far outstrip the limited information base: there’s a lot more we need to know about incorporating computer chips into brains, for example, before we can even start to say anything practical about what should and shouldn’t be done.





Scientific black boxes are to some extent inevitable when speculating about neurotechnological advances. The task for practical ethicists is to open as many as they can and to be mindful of the potential ethical significance of those they cannot. They also need to be careful to determine when they want to conduct theoretical analysis, using real and imagined technologies to illuminate conceptual truths, and when they want to argue for a course of action in relation to a particular neuroscientific application or technology, the details of which are crucial in order for ethical integration to be well executed.







Want to cite this post?



Maslen, H. (2015). Shrewder speculation: the challenge of doing anticipatory ethics well. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/11/shrewder-speculation-challenge-of-doing.html