
Tuesday, October 30, 2018

Phenomenology of the Locked-in Syndrome: Time to Move Forward




By Fernando Vidal








Image courtesy of Wikimedia Commons.

The main features of the locked-in syndrome (LIS) explain its name: persons in LIS are tetraplegic and cannot speak, but have normal visual perception, consciousness, cognitive functions and bodily sensations. They are “locked in” an almost entirely motionless body. A condition of extremely low prevalence identified and named in 1966, LIS most frequently results from a brainstem stroke or develops in the advanced stage of a neurodegenerative disease such as amyotrophic lateral sclerosis (ALS), which affects the motor neuron system and leads to paralysis. LIS presents three forms. In total or complete LIS (CLIS), patients lack all mobility; in classic LIS, blinking or vertical eye movements are preserved; in incomplete LIS, other voluntary motion is possible. Mortality is high in the early phase of LIS of vascular origin, but around 80% of patients who become stable live ten years and 40% live twenty years after entering the locked-in state. Persons who are locked-in as a consequence of stroke or traumatic injury sometimes evolve from classic to incomplete LIS. They can usually communicate via blinking or vertical eye movement, by choosing letters from an alphabet spell board. When additional movements are regained, they facilitate the use of a computer. It is hoped that brain-computer interfaces (BCIs) will enable CLIS patients to communicate too.




In January 2017, under the title “Groundbreaking system allows locked-in syndrome patients to communicate,” The Guardian reported on a study demonstrating that four ALS patients, two in complete LIS and two entering the condition, learned to respond to questions in a way that could be decoded by measuring frontocentral oxygenation changes detected with functional near-infrared spectroscopy. Niels Birbaumer, well-known for his pioneering work on BCIs, told the newspaper that such a result (which has since been questioned) was “the first sign that completely locked-in syndrome may be abolished forever, because with all of these patients we can now ask them the most critical questions in life.”





Yet what do we know about how locked-in persons envisage such critical questions and relate them to the extreme existential situation in which they find themselves? Rather little. A systematic phenomenology, in the sense of a description and analysis of experience as lived by locked-in persons themselves, has not yet been undertaken. It deserves to exist alongside mainstream, more clinical and quantitative approaches to the question, “What is it like to be conscious but paralyzed and voiceless?”





Speaking of a “happy majority” of locked-in persons may be exaggerated given the response rates to quality of life (QOL) surveys. At the same time, the existing research shows that many locked-in persons report subjective wellbeing and a relatively satisfactory QOL level that stays stable over time. As a population they display low rates of depression, suicidal thoughts, euthanasia requests, and do-not-resuscitate orders. Most respondents to a ground-breaking closed-ended questionnaire about body and personal identity in LIS said they felt they were essentially the same as before entering the locked-in state, reporting a continuous experienced identity when they accepted their bodily changes, and a discontinuous one when they did not. The body, though paralyzed, remains a strong component of identity. The phenomenological dynamics of such a relationship to the body have been explored in cases of profound paralysis due to ALS or multiple sclerosis, but not yet for LIS.








Image courtesy of pxhere.

Illness, notes philosopher Havi Carel, is a “limit case of embodied experience.” As an extreme instance of that limit, LIS offers a unique opportunity to investigate, on a real-life basis, central questions related to notions and practices of personhood and embodiment in the realm of values, beliefs and experiences. These questions, concerning for example the relationships between mind and body, self and other, autonomy and dependency, life in health and illness, or the criteria for ascertaining rights and obligations, are at the heart of significant contemporary debates in philosophy, ethics, and the practice of medicine.





In the perspective of “enactivism,” which sees the mind as embodied, embedded, extended and enacted, LIS appears as a social injury that affects the self through its impact on the individual’s capacity to engage with the social environment. Though operating in a frame that places more emphasis on first-person experience, individual self-awareness and self-narrative, a phenomenologist such as Richard Zaner also attributes a central role to the interactive, relational and communicative processes involved in locked-in individuals’ experience. Beyond their obvious practical import, communication and intersubjectivity emerge as possessing fundamental ontological significance. By describing in detail the processes they involve, phenomenology throws light on philosophical and anthropological issues. But it should also contribute to caring for persons whose lives, contrary to what healthy people and even professionals believe, are worth living – yet whose predicament and capacities have been understood in ways that may strip them of their civil and political rights. Other hitherto ignored dimensions, like gender or emotions, will have to be taken into account. The same applies to such material realities as the level of financial support from the state. These realities help explain why, for example, the use and acceptance of tracheostomy ventilation – a procedure in which a tube is inserted into a person’s windpipe through a cut in the neck to allow breathing – is more frequent in Japan than in Western countries.





A consolidating network of scholars from various disciplines in Europe, North America and Japan aims to work toward a phenomenology of LIS mainly by way of two complementary qualitative methodologies. On the one hand, the project Phenomenology of the Locked-in Syndrome analyzes locked-in persons’ autobiographical narratives. There are about thirty such narratives in Western European languages and at least as many in Japanese. A few articles discuss, from a literary or phenomenological standpoint, Jean-Dominique Bauby’s The Diving Bell and the Butterfly (1997), the widely translated bestseller that Julian Schnabel made into a prize-winning film. But the rest of the memoirs, and the corpus as a whole, remain to be scrutinized. On the other hand, the project studies the experience of LIS by way of open-ended questionnaires and interviews with patients, caregivers and family members. Instances of this approach, also a novelty with regard to LIS, are included in a forthcoming special issue of Neuroethics entitled “The Locked-in Syndrome: Perspectives from Ethics, History and Phenomenology.” [1]








Image courtesy of Wikimedia Commons.

The place of LIS within bioethics and neuroethics looks paradoxical. Because consciousness is preserved in LIS, and because this function is considered the most critical standard for human personhood, there is never any doubt that locked-in individuals are fully persons. Even when they are subjected to some form of tutelage, their circumstances do not give rise to the ethical and procedural issues that are customary in connection with the disorders of consciousness (DOC). Misdiagnosis (as “vegetative”) and its dramatic consequences (the patient is no longer considered a person) have often been documented, but that does not alter the ontological status of the affected individuals. This situation explains the marginal place of LIS in bioethics and neuroethics. The challenges LIS raises – about enabling communication, the exercise of autonomy, the status of advance directives, the validity of informed consent, or decision-making about treatment and end-of-life – are not really specific to the condition, and are ethically less knotty than in the case of DOC. Knowledge about LIS patients’ self-assessed QOL and the fact that communicative difficulties are the chief source of their suffering give rise to a twofold moral imperative: the above-mentioned healthy people’s negative biases toward life in the locked-in state should be avoided, and everything possible has to be done to facilitate communication.





It should be possible to go beyond such considerations. The limited attention devoted to LIS in neuroethics and biomedical ethics may mirror the rarity of the syndrome, but it also reflects the modern Western primacy of (self)consciousness and autonomy as normative criteria for personhood and for defining obligations toward patients. LIS, however, highlights the extent to which communication and relationality are integral to their empirical realization. The philosophy of personhood has emphasized physical and psychological criteria to varying degrees, and the human sciences have argued for a more constitutive role for intersubjectivity and technological systems. In such a context, LIS has to be examined together with conditions, such as DOC and dementias, which more directly problematize personhood at the conceptual and practical levels. Locked-in persons’ experience invites us to explore these issues by turning the usual vantage point around – asking what LIS can do for theories, rather than what theories can do for LIS [2].







_________________









Fernando Vidal is Research Professor of ICREA (Catalan Institution for Research and Advanced Studies) and Professor at the Medical Anthropology Research Center, Rovira i Virgili University (Tarragona, Spain). A former Guggenheim Fellow, he was in 2017 elected to the Academia Europaea, and was Fellow at the Brocher Foundation (Geneva) and Visiting Professor at Ritsumeikan University (Kyoto). His most recent book, Being Brains: Making the Cerebral Subject (with F. Ortega) received the 2018 Outstanding Book Award of the International Society for the History of the Neurosciences.

















Author's Notes





[1] Edited by F. Vidal, it brings together participants of the workshop Personhood and the Locked-in Syndrome (Barcelona, 2016), funded by the Catalan Institution for Research and Advanced Studies with additional support from the Víctor Grifols i Lucas Foundation. The project Phenomenology of the Locked-in Syndrome is attached to the Medical Anthropology Research Center, Rovira i Virgili University, Tarragona.




[2] This post sketches some of the issues extensively discussed in F. Vidal, “Phenomenology of the Locked-in Syndrome: An Overview and Some Suggestions” (Neuroethics, in press). https://doi.org/10.1007/s12152-018-9388-1.




Locked-in persons are scattered, and not easy to find and contact. Individuals in any country interested in collaborating with the project sketched here can write to F. Vidal, fernando.vidal@icrea.cat.






Want to cite this post?




Vidal, F. (2018). Phenomenology of the Locked-in Syndrome: Time to Move Forward. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/10/phenomenology-of-locked-in-syndrome.html

Tuesday, October 23, 2018

Normalization of Enhancement: Recap of September’s The Future Now: NEEDs




By Nathan Ahlgrim





As I sit down to write this post, I have just consumed my first Nerv shot. It actually tastes quite nice, the penetrating citrus sensation gone in a couple of gulps. The taste, however, is secondary; it’s marketed as “Liquid Zen.” At September’s The Future Now: Neuroscience and Emerging Ethical Dilemmas Series (NEEDs), Dr. Michael Jiang presented his motivation for co-founding and developing Nerv. His presentation began just how his company did, with a simple question: “Who here drinks coffee?”






A One-Sided Market





Nerv is a consumer-oriented supplement designed to “manage occasional anxiety and stress, allowing you to focus and be your best self.” In designing this product, Dr. Jiang hopes to counterbalance the mind-boggling array of consumer stimulants, and in the process, normalize the place for relaxants in society. The first part will be easy – be it coffee, soda, nicotine products, or energy drinks, consumers are inundated by stimulants. Not only do they exist in every grocery store and gas station, but they are normal. No one bats an eye at a coffee habit until someone’s daily consumption reaches double digits. In contrast to the shelves of ‘uppers,’ where are the ‘downers?’ Dr. Jiang’s audience gave the few examples they could think of: chamomile, alcohol, antihistamines, CBD oil. As we discovered, the list didn’t get very long before we reached pharmaceutical and controlled substance territory. There was Dr. Jiang’s inspiration: we have plenty of products designed to amp us up, but none to wind us down. He, along with co-founders Holly Ash and Graeme Warring, developed Nerv to do just that.





Lax regulation has prompted the U.S. Military

to warn its members about the potentially

harmful side effects of dietary supplements.

Image courtesy of the Malmstrom Air Force Base.




At its heart, Nerv is a neurotechnology. Even though it is packaged in a two-ounce bottle and is available at a store near you, it merits the same consideration that any other neurotechnology does. It belongs to a class of supplements called nootropics, also known as cognitive enhancers or smart drugs. Like the rest of the supplement industry, nootropics are notorious for unregulated products, unsubstantiated health claims, and marketing that convinces people to forgo standard treatment with disastrous consequences. To break into this market, Dr. Jiang and Nerv must navigate those possible uses and abuses as well.





Normalization:





In the context of calming supplements like Nerv, Dr. Jiang believes normalization is the key to appropriate and beneficial use. Normalization is a tricky effect to engineer, and a fascinating one to observe. As Dr. Jiang said, normalizing a supplement, drug, or behavior typically corresponds with a cultural shift. One of the most recent shifts comes from stimulants as performance enhancers. Within my lifetime, stimulants like Ritalin have transitioned from a source of shame that no parent would willingly give their children to a hot commodity in schools and the workplace. People now talk about their supplier of Ritalin with the same terminology as they talk about their supplier of marijuana; although both products are illicit (at least in some jurisdictions), they are firmly rooted in our culture.





Taking Ritalin as the primary example clearly has its baggage. Proponents of stimulants now have to contend with accusations of overmedicalizing children and normalizing drug use. Yet the children with ADHD symptoms who used to be ostracized and outcast are now raised alongside children considered to be “typically developing.” Whether the modern strategy is a net positive or net negative is often determined by personal ideology.





Criticisms of Dr. Jiang’s solution to the problem of over-stimulation are obvious. By introducing Nerv, he is proposing to solve one chemical dependence with another chemical dependence. That could be worse than a band-aid fix, because in this case the band-aid could be a wound in and of itself. Our world seems primed for these problem-as-a-solution strategies, like the smartphone apps designed to cure you of your smartphone addiction. With Nerv, the major concern is that people with serious anxiety or stress-related problems will take a swig to tamp down larger, chronic problems. Coffee and energy drinks are susceptible to the same criticism. Although they are designed to pick you up when you are acutely tired, people misuse them to the point of generating sporadic reports of energy drink overdoses and permanently altered sleep schedules. Who’s to say Nerv won’t just be the other side of a destructive coin?








Image courtesy of Pixabay.

Dr. Jiang says Nerv will never be marketed as a fix for chronic problems, and normalizing his product will actually prevent some chronic stressors from taking control. Once you can say, “I’m feeling anxious today, I’m going to take a Nerv” the same way you’d say, “I’m really tired today, I’m going to grab a coffee,” the experience of anxiety and stress is normalized. We all feel stress, but the stigma behind it still stops people from acknowledging their experiences. Refusing to face problems head-on is a recipe for seriously flawed self-medication. Normalizing stress and anxiety makes those experiences manageable because they’re not a disease if they’re something you can react to with a citrus-flavored drink. Most importantly, acute stressors cannot compound into unmanageable chronic problems when they are managed as they arise.






Problematic Expectation:





Consider the possibility of Dr. Jiang’s vision becoming reality: a Nerv store across the street from a Starbucks, to meet the equal and opposite need the coffee chain caters to. In this ideal vision, anxiety is not a taboo topic, and conversations among friends regularly acknowledge life’s stresses. Nerv is a quick and ready barrier against unexpected or unmanageable anxieties in the moment. Are there dangers even in this ideal scenario?





Normalization opens the uncomfortable possibility of creating a new and higher standard. Being tired is no longer an acceptable excuse now that energy drinks are so available. Similar pressures to push natural limits exist in sports and the workplace. With cognitive enhancers gaining traction, a growing minority argue that abstaining from these products is unethical when others’ lives are at stake.





The target, it seems, is balance between acceptance and expectation. Nerv itself is an ambassador for balance between stimulation and relaxation. The product itself cannot directly dictate whether and how that balance will happen, but that does not excuse Dr. Jiang and his colleagues from addressing the larger consequences of their product. Nerv, like all neurotechnologies, exists in society. No neurotechnology can restrict its effects on society to the effects of its active ingredient. Luckily, as Dr. Jiang’s seminar proves, he is acutely aware of the wider implications of introducing Nerv into the marketplace. In the same way that he hopes to improve the problems of anxiety and stress by normalizing the conversations around them, he is bolstering his ethical argument by initiating conversations with consumers from the outset.










Nerv may not be mindfulness in a bottle,

but it did something to me.

Image courtesy of Pixabay.

What Nerv Has to Offer:





Even with the best of intentions, is it right to develop neurotechnologies, pharmaceuticals, or other products to fix problems that are inherently social or cultural? It may not be pretty, but it is not wrong. Integrating relaxants, stimulants, or anything in between into the wider culture facilitates more direct solutions to the root problems of anxiety and overwork. Normalizing the consumption of these products normalizes the existence of these problems, which is the first step towards fixing the problems themselves.





Of course, I have yet to answer what is perhaps the most pertinent question: how am I feeling? Did Nerv make me chill, find my Zen, and let me gracefully focus on writing? I do feel surprisingly at ease, for all my skepticism. I honestly cannot seem to muster up the gumption to stress out. I would never call Nerv mindfulness in a bottle (although I’m sure others will), but that is the closest experience I can compare it to. The chatty group in the corner, the loud chewer next to me – they’re all still here, but they fail to stress me. I feel nice. I feel in control. And yes, I am fully aware of the placebo effect and how it could easily be driving my subjective experience. But hey, most of the sugar rush is a placebo effect, too. Regardless of its scientific veracity, Nerv fills a gap in the market. Offering relaxants to an over-stimulated population does more than create a new problem, it offers balance. More importantly, it normalizes the need for relaxation, it normalizes the value of calm. We’ve prized busy schedules, little sleep, and constant stress for long enough; it’s time to try something new.





Recommended Readings:





Below is a list of sources provided by Nerv for those of you interested in the scientific data behind their product.











_________________










 Nathan Ahlgrim is a fifth year Ph.D. candidate in the Neuroscience Program at Emory. In his research, he studies how different brain regions interact to make certain memories stronger than others. He strengthens his own brain power by hiking through the north Georgia mountains and reading highly technical science...fiction.













Want to cite this post?



Ahlgrim, N. (2018). Normalization of Enhancement: Recap of September’s The Future Now: NEEDs. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/10/normalization-of-enhancement-recap-of.html


Tuesday, October 16, 2018

What can neuroscience tell us about ethics?




By Adina L. Roskies








Image courtesy of Bill Sanderson, Wellcome Collection

What can neuroscience tell us about ethics? Some say nothing – ethics is a normative discipline that concerns the way the world should be, while neuroscience is normatively insignificant: it is a descriptive science which tells us about the way the world is. This seems in line with what is sometimes called “Hume’s Law”, the claim that one cannot derive an ought from an is (Cohon, 2018). This claim is contentious and its scope unclear, but it certainly does seem true of demonstrative arguments, at the least. Neuroethics, by its name, however, seems to suggest that neuroscience is relevant for ethical thought, and indeed some have taken it to be a fact that neuroscience has delivered ethical consequences. It seems to me that there is some confusion about this issue, and so here I’d like to clarify the ways in which I think neuroscience can be relevant to ethics.





1. Efforts to naturalize normativity


One way neuroscience (construed very broadly) might contribute is to enable us to see how normativity arises as a natural phenomenon. Efforts to show how particular hormones and receptors underlie sociality and trust illustrate this approach, and some believe that a complete neural plus evolutionary account of the development of our norms is all there is to understanding ethics (Churchland, 2012). However, not all agree that a reductionist or historical approach is possible, and many maintain that no descriptive approach to ethics will suffice to capture what is good or right.





2. Examples and counterexamples





Photograph of Phineas Gage, photo courtesy of Jack and

Beverly Wilgus, now in the Warren Anatomical Museum

Some philosophical theories claim to capture the nature of various concepts or constructs. One particular metaethical view, for example, holds that it is true of moral judgment or belief that it necessarily motivates: that judging or believing something to be good or right intrinsically leads to motivation to pursue it. This view, motivational internalism (MI), has been attacked by a thought experiment, the claim that one could coherently conceive of someone who had moral beliefs but was not motivated by them (Brink, 1997). Adherents of MI, however, argue that this is not coherent or conceivable, and that such “amoralists” could not ever exist. Neuroscience has offered up potential counterexamples to MI in the form of a type of brain damage that prima facie results in people who aver moral beliefs that appear normal, but do not seem motivated to act in accordance with them (A. Roskies, 2003). Although adherents of MI can make moves similar to those in the conceptual case of the amoralist (denying, for example, that these patients have moral beliefs, or asserting that they do have moral motivation), the existence of these people offers opportunities to test these arguments in the real world, and forces us to constrain our interpretations in ways that respect the fact that these are real people embedded in the actual social/moral world. For example, if the seemingly moral claims these people make have the same psychological profile as other things that they aver and that we count as their beliefs, can we really deny that these people have moral beliefs? The theory that best accommodates the complexity of this real-world data should ultimately win the day.







3. Illuminating the ways things work


Neuroethics has and will continue to illuminate the way in which we reason morally, make choices, etc. Sometimes knowing how things work gives us new handles to use in ethical reasoning. For example, Greene and colleagues have described a dual-process model of moral judgment wherein emotional triggers prompt us to deem certain actions morally permissible or impermissible, whereas more controlled reasoning may sometimes lead to different judgments (Greene, Nystrom, Engell, Darley, & Cohen, 2004; Greene, Sommerville, Nystrom, Darley, & Cohen, 2001). Greene has used this data to argue that consequentialism is superior to deontology (Greene, 2014). Although there has been extensive debate as to whether the neuroscience here leads directly to an ethical conclusion, all parties actually concur that it does not (Berker, 2009; Kahane, 2012; Kamm, 2009). Greene himself is clear that what does the work is the claim that the factors the “emotional” system responds to are ethically irrelevant. What is at issue is rather 1) whether that normative premise is true (Greene thinks it is self-evident; others disagree); and 2) whether the deliverances of these neural systems really map reasonably well onto various ethical frameworks. Neither of these questions is purely neuroscientific, but the neuroscience may allow us to answer them to our satisfaction.







Image courtesy of Wikimedia Commons

A second example of how understanding how things work could have ethical implications comes from the free will literature. Some have argued that work showing that certain signals from the brain precede awareness of the intention to act rules out the possibility of free will, and that this has ethical consequences (Kaposy, 2010; Libet, 1985). Although further work shows this claim to be mistaken on empirical grounds, the idea that the neuroscience alone could disprove some complex philosophical concept is mistaken, for the mechanistic commitments of the concept are not explicit. Only given real philosophical work and clear philosophical commitments can the neuroscience ever weigh in on a philosophical issue. In the case of free will, for example, there are alternative philosophical theories of free will which would be unchallenged even if the original interpretation of the neuroscientific claims held up (A. L. Roskies, 2006).




4. Providing factual premises to ethical arguments


The most common way in which neuroscience can contribute to ethics is by providing factual premises to ethical arguments. Indeed, in some sense all the former examples are some variety of this, but they have their distinctive character. And indeed, this is what one would expect if neuroscience is a descriptive enterprise, and ethics fundamentally normative or prescriptive. A clear example of how neuroscientific facts can lead to ethical consequences can be seen by looking at the literature on brain damage. Many people think we owe a certain level of ethical consideration to creatures capable of consciousness, but not to those incapable of it. And some clinical syndromes have been emblematic of lack of consciousness. But suppose neuroscience could show (to a reasonable degree of certainty) that some people, whom we had taken to lack the capacity for consciousness, and thus to lack a certain level of moral standing, were indeed conscious (A. L. Roskies, 2018)? We would then have to conclude that they were due the moral consideration we accord to other conscious entities. This indeed has happened with a subset of people diagnosed as being in Persistent Vegetative State (PVS) (Owen, 2013; Owen et al., 2006), providing a real world example of how neuroscientific evidence could lead, in the presence of the right kind of normative premises, to important and surprising ethical conclusions.


________________





Adina Roskies is the Helman Family Distinguished Professor at Dartmouth College, Professor of Philosophy and chair of the Cognitive Science Program. She is also affiliated with the Department of Psychological and Brain Sciences. She received a Ph.D from the University of California, San Diego in Neuroscience and Cognitive Science in 1995, a Ph.D. from MIT in philosophy in 2004, and an M.S.L. from Yale Law School in 2014. Prior to her work in philosophy she held a postdoctoral fellowship in cognitive neuroimaging at Washington University with Steven Petersen and Marcus Raichle, and from 1997-1999 was Senior Editor of the neuroscience journal Neuron. Dr. Roskies’ philosophical research interests lie at the intersection of philosophy and neuroscience, and include philosophy of mind, philosophy of science, and ethics. She has coauthored a book with Stephen Morse, A Primer on Criminal Law and Neuroscience. 








References






Berker, S. (2009). The Normative Insignificance of Neuroscience. Philosophy & Public Affairs, 37(4), 293–329.







Brink, D. O. (1997). Moral Motivation. Ethics, 108(1), 4–32. https://doi.org/10.1086/233786







Churchland, P. S. (2012). Braintrust: What Neuroscience Tells Us about Morality. Princeton University Press. Retrieved from http://press.princeton.edu/titles/9399.html







Cohon, R. (2018). Hume’s Moral Philosophy. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2018 Edition). Retrieved from https://plato.stanford.edu/archives/fall2018/entries/hume-moral/







Greene, J. D. (2014). Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics. Ethics, 124(4), 695–726. https://doi.org/10.1086/675875







Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The Neural Bases of Cognitive Conflict and Control in Moral Judgment. Neuron, 44(2), 389–400. https://doi.org/10.1016/j.neuron.2004.09.027







Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI Investigation of Emotional Engagement in Moral Judgment. Science, 293(5537), 2105–2108. https://doi.org/10.1126/science.1062872







Kahane, G. (2012). On the Wrong Track: Process and Content in Moral Psychology. Mind & Language, 27(5), 519–545. https://doi.org/10.1111/mila.12001







Kamm, F. M. (2009). Neuroscience and Moral Reasoning: A Note on Recent Research. Philosophy & Public Affairs, 37(4), 330–345. https://doi.org/10.1111/j.1088-4963.2009.01165.x







Kaposy, C. (2010). The Supposed Obligation to Change One’s Beliefs About Ethics Because of Discoveries in Neuroscience. AJOB Neuroscience, 1(4), 23–30. https://doi.org/10.1080/21507740.2010.510820







Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences, 8(04), 529–539. https://doi.org/10.1017/S0140525X00044903







Owen, A. M. (2013). Detecting Consciousness: A Unique Role for Neuroimaging. Annual Review of Psychology, 64(1), 109–133. https://doi.org/10.1146/annurev-psych-113011-143729







Owen, A. M., Coleman, M. R., Boly, M., Davis, M. H., Laureys, S., & Pickard, J. D. (2006). Detecting Awareness in the Vegetative State. Science, 313(5792), 1402–1402. https://doi.org/10.1126/science.1130197







Roskies, A. (2003). Are ethical judgments intrinsically motivational? Lessons from “acquired sociopathy” [1]. Philosophical Psychology, 16(1), 51–66. https://doi.org/10.1080/0951508032000067743







Roskies, A. L. (2006). Neuroscientific challenges to free will and responsibility. Trends in Cognitive Sciences, 10(9), 419–423. https://doi.org/10.1016/j.tics.2006.07.011







Roskies, A. L. (2018). Consciousness and End of Life Ethical Issues. In Routledge Handbook of Consciousness. Routledge Handbooks Online. https://doi.org/10.4324/9781315676982-34










Want to cite this post?




Roskies, A. (2018). What can neuroscience tell us about ethics? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/10/what-can-neuroscience-tell-us-about.html

Wednesday, October 10, 2018

Ethical Considerations for Emergent Neuroprosthetic Technology





By Emily Sanborn








Image courtesy of Wikimedia Commons

In the 21st century, there is a push towards producing neurotechnology that will make our lives easier. One category of these technologies is neuroprosthetics, devices that can supplement or supplant the input or output of the nervous system to obtain normal function (Leuthardt, Roland, and Ray, 2014). As these technologies emerge, ethical issues arise and a question takes shape: are we fixing what is not broken? (Moses, 2016). 





A recent article from the Smithsonian magazine reported on a technology that may allow humans to develop a “sixth sense” (Keller, 2018). David Eagleman, an adjunct professor in Stanford University’s department of Psychiatry and Behavioral Science, invented a sensory augmentation device called the Versatile Extra-Sensory Transducer (VEST), a vest covered with vibratory motors that is worn on the body. VEST works by receiving auditory signals from speech and the surrounding environment and translating them, via Bluetooth, into vibrations. The vibrations are transmitted to the vest in dynamic patterns that correspond to specific speech and auditory signals. The user is then able to feel the sonic world. In time, they may be able to use this new touch sensation to understand spoken words (Eagleman, 2015). 





If VEST works as intended, it has therapeutic value. Since its invention in the mid-1980s, the cochlear implant has been the most prevalent hearing device for the deaf and hard of hearing (Cochlear Implants, 2017). However, these devices are surgically embedded and can cost up to $100,000. VEST offers a nonsurgical alternative, with an estimated cost of $2,000 (Keller, 2018). This technology has the potential to alter the current market for medical devices that assist those who are deaf. Some say that viewing technologies such as VEST or cochlear implants as treatments, rather than enhancers, can be problematic, because it implies that being deaf is abnormal (Moses, 2016). This raises the question: what is normal, and who decides? 








A cochlear implant

Image courtesy of Wikimedia Commons

In our society, we have been taught certain norms and have adopted ways of policing each other to conform to them. One of these taught “normalities” is the idea that there are five senses that function in a specific way, and one mind that responds to external and internal stimuli in a specific manner (Moses, 2016). For example, some in the deaf community oppose the use of cochlear implants, as deaf individuals view themselves and their community as fully functional without these neuroprosthetic devices (Gupta, 2014); they do not need devices to attain normality. In fact, these individuals, whose hearing falls below the “normal range,” suggest that differences and diversity of abilities and skills can all fall within a typical range of variation (Moses, 2016). Redefining normality, while a tremendous task, could be a next step instead of producing more neuroprosthetics. 





The creators of VEST are also looking to take their technology beyond therapeutic contexts: for people with fully functioning senses, the goal is to create a new form of sensation. Dr. David Eagleman is examining how all types of data can be translated into interpretable sensation. At the forefront of this idea is turning stock market data into sensation, so that a person wearing VEST could decide whether to buy or sell stock by feeling the market (NeoSensory). If VEST users are able to feel and predict the stock market, they will gain an advantage over those who do not have this added sense. This can lead to two types of coercion: implicit and explicit. Implicit coercion is feeling the need to “maintain or better one’s position in some perceived social order” (Chatterjee, 2004). Workers in trading firms may feel they need to use this sensory technology in order to keep up with their coworkers and competitors. Explicit coercion is an “explicit demand of superior performance by others” (Chatterjee, 2004). In the future, firms may hire only staff who have the augmented sensory system. However, this is not the only dilemma that arises. 





Due to the nature of the enhancement technology, there is concern about an opportunity gap. Insurance agencies will most likely not pay for VEST used as an enhancer for the stock market or other enhancement applications, so only those who can personally afford it will have access to the technology. This widens the gap between the rich and the poor, as the rich will have greater access to and means for the technology (Hyman, 2011).





Overall, there are still many questions to be asked as VEST and other neuroprosthetics and cognitive enhancements are being made. The concept of what is normal must be readdressed, and the presented ethical dilemmas must be taken into consideration. Even if VEST does not yield the expected results, a new technology will take its place, so these questions and concerns remain relevant. 







________________











Emily Sanborn is entering her fourth year of undergraduate study at Emory University where she is pursuing a double major in Neuroscience & Behavioral Biology and Environmental Science. Her academic interests focus on exploring ecological processes, and how they can influence the spread and creation of diseases – particularly neurological diseases. This year, she will be completing an honor’s thesis in the Caudle Lab at Emory University’s Rollins School of Public Health, where she’ll be using in vitro and in vivo techniques to investigate the neurotoxic effects of insecticides and flame retardants. 











References





Chatterjee, A., (2004), Cosmetic neurology: The controversy over enhancing movement, mentation, and mood. Neurology, 63: 968-974.





Cochlear Implants, (2017) NIDCD NIH. Retrieved from https://www.nidcd.nih.gov/health/cochlear-implants.





Eagleman D., (2015), Can we create new senses for humans? Retrieved from https://www.ted.com/talks/david_eagleman_can_we_create_new_senses_for_humans?language=en





Gupta, S., (2014), "The Silencing of the Deaf", Medium.





Hyman, S.E., (2011) Cognitive Enhancement: Promises and Perils. Neuron, 69: 595-598. 





Keller, Kate, (2018) Could this futuristic vest give us a sixth sense? Retrieved from https://www.smithsonianmag.com/innovation/could-this-futuristic-vest-give-us-sixth-sense-180968852/.





Leuthardt E., J. Roland, W. Ray, "Neuroprosthetics", The Scientist, 2014.





Moses T., (2016), Emerging technologies: the ethical dangers of fixing what is not broken. ETHICS, DOI: 10.1109/ETHICS.2016.7560048





NeoSensory. Retrieved from https://neosensory.com/








Want to cite this post?




Sanborn, E. (2018). Ethical Considerations for Emergent Neuroprosthetic Technology. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/10/ethical-considerations-for-emergent.html


Tuesday, October 9, 2018

An injection of RNA may transfer memories?




By Gabriella Caceres








Figure 1. Image by Bédécarrats et al. 2018

Imagine a future in which you could tell your spouse about your day by simply transferring the memory to them, or one in which you could pass your memories on even after your death. These scenarios may seem far off, but steps are definitely being taken towards this development. To combat our natural memory inaccuracy and the decline that comes with old age or Alzheimer’s disease, which affects 1 out of every 10 people over 65 years old (WHO, 2017), scientists are beginning to investigate the biology of memory and the ways in which the process of making memories can be improved. A recent and controversial article published by Science News reported that RNA may be used to transfer memories from one sea slug to another. Bédécarrats et al. (2018) claimed that they were able to transfer memories from neurons of sea slugs (Aplysia californica) by first sensitizing the slugs with shocks until they had a long-lasting withdrawal response to touch. Then, the researchers extracted the RNA from the sensory neurons of the shocked slugs and injected that RNA into the sensory neurons of non-sensitized sea slugs (figure 1). The authors postulated that the sensitization occurred because the donor sea slug underwent epigenetic changes, in which a methyl group attaches to the DNA and modulates gene expression (D’Urso et al. 2014). This whole process resulted in a transfer of sensitization (a form of implicit, or unconscious, memory) to the recipient slug, as it experienced the same long-lasting response to touch that the donor slug did.







Figure 2. Image by Deadwyler et al. 2013

This is not the only experiment that has explored neural-memory transfers. Deadwyler et al. (2013) derived information-encoding patterns from the hippocampus of a “donor” rat that was well-trained to perform operant responses in a delayed-non-match-to-sample task, and sent the information via electrical stimulation to a non-trained “recipient” rat, facilitating its task performance (figure 2). Such studies provide proof of concept that direct transfer of memories between two brains is possible. Moreover, memories seem to be at the root of who we are and what we achieve in our lives, but what happens when individual and combined memories collide? It is time to begin thinking about the ethical concerns, evaluating the value of memory and how we benefit from the memory of others, as well as the consequences that memory transfers may bring to bear on issues of privacy and individuality.





Thoughts are one of the few private things we have left. With such memory-transfer innovations, this may not be true anymore, and complex privacy problems may arise. For example, it can be difficult to control which exact memories/thoughts will get transferred during a memory transfer procedure: there may be signals the sender is not willing to share (Tamburrini, 2009; Trimper et al., 2014) or signals the receiver may not be able to refuse. A way to prevent unconsented information from being transferred would have to be developed. In addition, with the rise of this new technology and possible commercialization, individuals may feel a pressure to share their memories with family, friends, employers, and even insurance companies. After the embarrassing interview, for example, your spouse may want you to play that memory in their mind. Or after that party, your mother may want to see what was going on. This may lead to a change in the individual’s sense of freedom; everything you experience can be known by others.








Image courtesy of WordPress

Furthermore, some scientists and philosophers would suggest that we are deeply shaped by our memories. The Stanford Encyclopedia of Philosophy states that “memories play a role in our knowledge of the world and our personal past. It underwrites our identity and our ties with other people” (Michaelian and Sutton, 2017). Professor of philosophy Dr. Françoise Baylis also argues that people are socially constituted beings shaped by their relationships, narratives, and stories, and that these are built of interactions made from retrieving and making memories. There is no doubt that experiences and emotions such as love, fear, joy, and pain can shape who we are. How would these be affected if our brain were constantly exposed to the experiences and emotions of others? Developmental biologist Michael Levin of Tufts University questions what it means to be a coherent individual with a coherent bundle of memories, and he implies that the hunt for memories gets at the nature of identity (Blackiston et al. 2015). Being part of a brain-brain dyad may have complex repercussions on a person’s concept of self, and the recipient would end up having two types of memories: his/her own memories and “quasi-memories” that have been transferred by others (Hildt, 2015). Not only that, but in cases such as this one, epigenetically modified RNA is being transferred to the recipient and causing specific physiological alterations of neurons (Bédécarrats et al. 2018). How well he/she will be able to distinguish between the two types of memories remains an open question.





Professor Elisabeth Hildt of the Illinois Institute of Technology states that “one of the central questions is whether there actually is a need for direct brain-to-brain (BTBI) communication.” Technologies such as brain-to-brain interfaces could bring about more accurate memories in the military, for example, allowing soldiers to learn from previous wars or the past experiences of their colleagues. One could imagine a scenario where BTBI could serve as an aid for Alzheimer’s patients: instead of using external memories such as diaries and photos to remember the past, the patient’s spouse or family member could simply transfer clear-cut memories. Dr. Kourken Michaelian argues that “given the constructive character of internal memory, stable forms of external memory may make a distinct and valuable contribution to remembering” (Michaelian and Sutton, 2017). Yet there is also the argument that since memory is inaccurate by nature (Hermundstad et al. 2011), there is no guarantee that this transfer will make the memories more trustworthy. Indeed, scientists and ethicists need to work together to make sure that such technology is developed reliably and ethically.





________________







Gabriella Caceres is a student double majoring in Neuroscience and Behavioral Biology (NBB) and Psychology at Emory University in Atlanta, GA. Her research focus is on oxytocin and its effects on social cognition, but she also has a strong interest for the neurobiology of memory. Gabriella developed a curiosity for neuroethics after taking part in the NBB Paris study abroad program. She is 21 years old and originally from Santo Domingo, Dominican Republic.













References





A. Bédécarrats et al. (2018) RNA from trained Aplysia can induce an epigenetic engram for long-term sensitization in untrained Aplysia. eNeuro.





Deadwyler S. A., Berger T. W., Sweatt A. J., Song D., Chan R. H., Opris I., et al. . (2013). Donor/recipient enhancement of memory in rat hippocampus. Front. Syst. Neurosci. 7:120.





D.J. Blackiston, T. Shomrat and M. Levin (2015) The stability of memories during brain remodeling: A perspective. Communicative & Integrative Biology. Vol. 8.





D’Urso, A., & Brickner, J. H. (2014). Mechanisms of epigenetic memory. Trends in genetics, 30(6), 230-236.





Hermundstad, A. M., Brown, K. S., Bassett, D. S., & Carlson, J. M. (2011). Learning, memory, and the role of neural network architecture. PLoS computational biology, 7(6), e1002063.





Hildt E (2015). What will this do to me and my brain? Ethical issues in brain-to-brain interfacing. Frontiers in Systems Neuroscience; 9:17.





Michaelian, Kourken and Sutton, John, "Memory", The Stanford Encyclopedia of Philosophy (Summer 2017 Edition), Edward N. Zalta (ed.)





Tamburrini G. (2009). Brain to computer communication: ethical perspectives on interaction models. Neuroethics 2, 137–149 10.1007/s12152-009-9040-1





Trimper JB, Wolpe PR and Rommelfanger KS (2014) When “I” becomes “We”: ethical implications of emerging brain-to-brain interfacing technologies. Front. Neuroeng. 7:4.





World Health Organization. (2018). International statistical classification of diseases and related health problems 11th revision. World Health Organization.










Want to cite this post?




Caceres, G. (2018). An injection of RNA may transfer memories? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/10/an-injection-of-rna-may-transfer.html

Tuesday, October 2, 2018

How to be Opportunistic, Not Manipulative



By Nathan Ahlgrim





Opportunistic Research





Government data is often used to answer key research questions.

Image courtesy of the U.S. Census Bureau




Opportunistic research has a long and prosperous history across the sciences. Research is classified as opportunistic when researchers take advantage of a special situation. Quasi-experiments enabled by government programs, unique or isolated populations, and once-in-a-lifetime events can all trigger opportunistic research where no experiments were initially planned. Opportunistic research is not categorically problematic. If anything, it is categorically efficient. Many a study could not be ethically, financially, or logistically performed in the context of a randomized control trial.





Biomedical research is certainly not the only field that utilizes opportunistic research, but it does present additional ethical challenges. In contrast, many questions in social science research can only be ethically tested via opportunistic research, since funding agencies are wary of explicitly withholding resources from a ‘control’ population (Resch et al., 2014). We, as scientists, are indebted to patients who choose to donate their time and bodies to participate in scientific research while inside an inpatient ward; their volunteerism is the only way to perform some types of research.





Almost all information we have about human neurons comes from generous patients. For example, patients with treatment-resistant epilepsy can have tiny wires lowered into their brains, a technique known as intracranial microelectrode recording, enabling physicians to listen in on the neuronal chatter at a resolution normally restricted to animal models (Inman et al., 2017; Chiong et al., 2018). Seizures, caused by runaway excitation of the brain, are best detected by recording electrical signals throughout the brain. By having such fine spatial resolution inside a patient’s brain, surgeons can be incredibly precise in locating the site of the seizure and treating the patient. It’s what else those wires are used for that introduces thorny research ethics.









Image courtesy of Wikimedia Commons.

Those wires are already down there, so why not put them to even more use? Scientists dream of poring over the treasure trove of patients’ data. It’s a precious, and rare, resource. The elephant in the room, especially for practitioners of basic research, is that basic research is not expected to directly benefit the individual patient. Any scientific gain may help people in the years to come, but it will not affect that individual patient’s prognosis. Unlike studies trying to optimize deep brain stimulation (DBS) for treatment of Parkinson’s Disease (Müller and Christen, 2011) or depression (Dunn et al., 2011), basic research exists for the sake of science, not patient welfare. With fewer concrete benefits to the patient, the risk to benefit calculation becomes trickier.





Human neuroscience research like this is almost always expensive and demanding. That does not mean, however, that these experiments can be low priority. Our prodigious knowledge of the nervous system is only surpassed by our ignorance of it, and treatments for some of the most pressing health concerns of our time depend on research like this increasing our knowledge. Of course, such a strong motivation to innovate can blind scientists to the need to also protect their research participants, which is why specific ethical standards for opportunistic research need to be robust and ready.





Physician-led Opportunism





In the physician-patient relationship, the power dynamic lies in favor of the physician. Most physicians recognize and accept this dynamic when it comes to healthcare. Even so, many fail to appreciate that the power dynamic does not disappear when the conversation changes topic; the physician remains the physician even when she talks to her patient about non-therapeutic research.








Image courtesy of SVG Silh.



Non-medical invasive brain research, like that using intracranial recordings and brain stimulation in epilepsy patients, is admittedly a niche area. Since it has no immediate implications for human health, it receives far less publicity and public scrutiny than clinical trials or even promising treatments in animal models (Fang and Casadevall, 2010). Although the purpose of basic research is distinct, it can still benefit from the lessons learned on the medical side. Clinical human neuroscience research shows that the ability to consent does not guarantee that the decision to consent is a voluntary one (Swift, 2011). In the shadow of the physician-patient power dynamic, would-be participants can become situationally incapacitated even while retaining full mental capacity (Labuzetta et al., 2011). In effect, their position as a patient, the physician-patient relationship, and the overlap between medical and research practices can all render the patient incapable of freely giving informed consent. Although the mental state of the patient may be sound, many argue that they must be protected just like those who lack the mental capacity to consent on their own behalf. The fear is that any hint of the research influencing the medical care, or even the absence of addressing that interaction explicitly, can force the patient’s decision.





Of course, there is also a strong argument that consent, even if not fully voluntary, can be ethically valid. Even proponents of the so-called Autonomous Authorization criterion, under which consent is only valid when given intentionally, with full understanding, and without controlling influence (Faden and Beauchamp, 1986), often amend or bend those strict guidelines to make them practical (Miller and Wertheimer, 2011). Autonomous authorization can be eroded by the therapeutic misconception of research, when potential participants are influenced to enroll in a study due to confusion between research and medical treatment (Appelbaum et al., 1982). For instance, patients may enroll in a study testing a potential drug to treat Alzheimer’s Disease because they believe that, given their advanced condition, they will not be placed in the placebo group. That is not how randomized control trials are designed. Patients’ misunderstanding inflates the benefits in their minds, which could sway their decision to participate. Yet the demand that all patients be fully knowledgeable before their consent is deemed valid may be too rigorous to be practical, ending up as an unrealistic burden to place on researchers. Critics of the Autonomous Authorization model claim that responsibility for protecting patients resides in institutional safeguards (i.e., Institutional Review Boards [IRBs]), not the researchers themselves. With strong institutional standards in place, patients’ best interests can still be protected even if they give non-autonomous consent. That is, at least, the argument. How those safeguards are designed is the determining factor of their effectiveness.





How to Keep Consent Voluntary





We cannot pretend that the physician-patient power dynamic does not exist, or that every patient will become an expert in the research program they sign up for. Still, proactive steps on the institutional and personnel sides can protect participants and make sure they enroll because they want to, not because they feel they have to. The need for such protections is compounded by the specifics of invasive brain research, whose entire participant pool lives with a treatment-resistant brain disorder severe enough to merit invasive brain surgery. It is our unfortunate reality that stigma looms over people living with brain disorders, both external (from others) and internal (self-perception) (Corrigan et al., 2006). Stigma surrounding brain disorders weakens personal empowerment (Corrigan, 2002), tipping the balance of power even more strongly towards the physician and research team. The protections put in place for these participants must be comprehensive and robust to rebalance the relationship.





Teams performing invasive brain research have already made a series of recommendations to directly address the unique environment of non-medical invasive research using human patients (Chiong et al., 2018). Their recommendations are strong and worth implementing, but they fall short because of a common blind spot: they are still thinking like researchers, not patients.








Image courtesy of Pixabay user Catkin.

As a patient, you might be coerced to consent to any research protocol put in front of you out of fear that your medical treatment depends on it. You don’t even need to be a cynic who expects the worst of your physician to fear this. After all, your physician will probably take more of an interest in you, and you’ll get more face time with her, if you sign up for her study. Yes, preferential treatment is wrong, but self-defense against improper treatment requires self-empowerment, something that is often degraded in these patients by the stigma attached to their brain disorder. To minimize potential coercion, physicians should at the very least complete the consent process as part of a team, alongside people not involved in the patient’s care. Implicit coercion would be reduced even further if physicians were completely absent during the consent process, but such requirements are often impractical. Both medical and research personnel should also be required to explicitly state that medical care will not change for the better or worse regardless of research participation. These statements must be unequivocal, and repeated before, during, and after the consent process.





Even as I and others lay out a list of criteria for researchers to meet, it is important to stress that research teams cannot rely on a one-size-fits-all consent process. Individualization is especially necessary when researchers are working with a vulnerable population dependent on their care. The capacity to consent to medical interventions (which get the patient into the ward in the first place) does not imply the capacity to consent to research interventions. Even after patients do consent, their medical condition can fluctuate, as can their desire to participate. Just like with medical treatment, consent at the start of a project (no matter how ethically it was obtained) cannot be used to rubberstamp the entire study. Such protections are already given to psychiatric patients (Palmer et al., 2013), showing that the best consent is one that is renewed.





Institutional criteria can help bolster these practices, but relying too much on them is dangerous. After all, institutional priorities can bias the definition of “patient interests” and preferentially validate non-autonomous consent that aligns with institutional interests over the individual patient’s interest. Both personnel and institutional approaches fail to fully protect the patient/research participant dual role, which is why the two must work in tandem. It is far too easy for researchers to capitalize on a patient’s therapeutic misconception because it produces the desired outcome, even when the deception is unintentional.








Image courtesy of Wikipedia.

As a patient, being told your medical care is protected regardless of your research participation is not the same as believing it and trusting it to be true. Doubt may be unavoidable, and while it is far from ideal, it should not prevent the study from happening. Invasive brain research can only happen in specific and intensive situations, but it is absolutely necessary to the progress of neuroscience and medicine. Everything from epilepsy to Alzheimer’s Disease to autism is informed by, and better treated because of, invasive brain research.





Patients will be protected when physicians are trained not to display favoritism towards their research participants and IRBs shape research protocols to fairly balance a participant’s risks and benefits. They will be protected even if they do not understand the research as well as the research team does. Science does not have to stop until the public are all scientists. Scientists do, however, need to protect non-scientists’ interests, even when doing so feels like it gets in the way of progress. The discussion of these ethical challenges is not meant to detract from the fact that we, as a society, need this kind of research if we hope to continue improving overall health. The brain is boundlessly complex, and we do not understand it well enough to adequately treat those who need help. In short, our deep ignorance of the brain’s inner workings requires deep, and sometimes invasive, research.




________________












Nathan Ahlgrim is a fifth year Ph.D. candidate in the Neuroscience Program at Emory. In his research, he studies how different brain regions interact to make certain memories stronger than others. He strengthens his own brain power by hiking through the north Georgia mountains and reading highly technical science...fiction.










References



Appelbaum PS, Roth LH, Lidz C (1982) The therapeutic misconception: Informed consent in psychiatric research. International journal of law and psychiatry 5:319-329.



Chiong W, Leonard MK, Chang EF (2018) Neurosurgical patients as human research subjects: Ethical considerations in intracranial electrophysiology research. Neurosurgery 83:29-37.



Corrigan PW (2002) Empowerment and serious mental illness: Treatment partnerships and community opportunities. Psychiatric Quarterly 73:217-228.



Corrigan PW, Watson AC, Barr L (2006) The self–stigma of mental illness: Implications for self–esteem and self–efficacy. Journal of Social and Clinical Psychology 25:875-884.



Dunn LB, Holtzheimer PE, Hoop JG, Mayberg HS, Roberts LW, Appelbaum PS (2011) Ethical issues in deep brain stimulation research for treatment-resistant depression: Focus on risk and consent. AJOB Neuroscience 2:29-36.



Faden RR, Beauchamp TL (1986) A history and theory of informed consent: Oxford University Press.



Fang FC, Casadevall A (2010) Lost in translation—basic science in the era of translational research. Infection and Immunity 78:563-566.



Inman CS, Manns JR, Bijanki KR, Bass DI, Hamann S, Drane DL, Fasano RE, Kovach CK, Gross RE, Willie JT (2017) Direct electrical stimulation of the amygdala enhances declarative memory in humans. Proceedings of the National Academy of Sciences.



Labuzetta JN, Burnstein R, Pickard J (2011) Ethical issues in consenting vulnerable patients for neuroscience research. Journal of Psychopharmacology 25:205-210.



Miller FG, Wertheimer A (2011) The fair transaction model of informed consent: An alternative to autonomous authorization. Kennedy Institute of Ethics Journal 21:201-218.



Müller S, Christen M (2011) Deep brain stimulation in parkinsonian patients—ethical evaluation of cognitive, affective, and behavioral sequelae. AJOB Neuroscience 2:3-13.



Palmer BW, Savla GN, Roesch SC, Jeste DV (2013) Changes in capacity to consent over time in patients involved in psychiatric research. The British Journal of Psychiatry 202:454-458.



Resch A, Berk J, Akers L (2014) Recognizing and conducting opportunistic experiments in education: A guide for policymakers and researchers In. Washington, D.C.: U.S. Department of Education.



Swift T (2011) Desperation may affect autonomy but not informed consent. AJOB Neuroscience 2:45-46.



Want to cite this post?



Ahlgrim, N. (2018). How to be Opportunistic, Not Manipulative. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/10/how-to-be-opportunistic-not-manipulative.html