
Tuesday, August 27, 2013

Report from the Society for Disability Studies: Bringing Ethics, Bioethics, and Disability Studies Together

By Jennifer C. Sarrett, MEd, MA



Jennifer Sarrett is a 2013 recipient of the Emory Center for Ethics Neuroethics Travel Award. She is also a doctoral candidate at Emory University’s Graduate Institute of Liberal Arts, working on a dissertation that compares parental and professional experiences of autism in Atlanta, GA and Kerala, India, as well as the ethical issues that arise when engaging in international, autism-related work.




From June 26 - 29, 2013, the Society for Disability Studies (SDS) held its annual conference in Orlando, Florida. SDS is the primary scholarly association for Disability Studies, an academic field exploring the meanings and implications of normativity, disability, and community. Like other identity-based fields of study, including Women’s Studies, Queer Studies, and African-American Studies, Disability Studies examines difference and works to expose and eradicate the stigma and inequality faced by people who identify as disabled. This field is closely related to bioethics and neuroethics, as differences in minds and bodies present medical and scientific concerns to physicians, researchers, and scholars.





At SDS this year, I presented a paper titled “The Ethics of Studying Autism Across Cultures,” based on my research fieldwork. My dissertation looks at how culture influences parental and professional experiences of autism in Atlanta, GA and Kerala, India, with the aim of developing guidelines for future scholars, interventionists, and advocates embarking on international work on autism and related disabilities. Because of the ethical issues I encountered in this research, my work extends to thinking about autism within current models of human rights and to critically examining contemporary and historical ways of talking about and treating people on the autism spectrum.






My work on autism, in and beyond my dissertation, relates to several prominent concerns in current bioethical and neuroethical scholarship. With regard to research practices, I encountered issues related to obtaining informed consent, communicating research goals to participants and collaborators, and ensuring that research aims and practices are not harmful (emotionally or otherwise) to participants. I have also engaged with concerns about the appropriateness and usefulness of promoting and exporting psychiatric labels from the West into regions without the ability or need to use and address these labels. And because autism is a condition with no known etiology, diagnosed on the basis of behavior and development, it raises myriad autism-specific neuroethical issues, including pharmacological interventions, prenatal diagnosis, and the presence or absence of morality in autistic individuals. Additionally, there are debates over the need to provide intensive intervention to autistic individuals: at one extreme is the belief that autism is an unwanted state of being and that every effort should be made to bring the ‘real’ person out of the autistic shell; at the other is neurodiversity, the perspective that autism is simply one manifestation of human neurological development, necessary for the diversity and balance of the human race, and should not be eradicated, rehabilitated, or treated.





At SDS, I focused on some of the ethical concerns that arise when culture is brought into consideration. My paper was built on the premise that ethical codes and guidelines set forth by institutions such as the American Anthropological Association (AAA) and the field of bioethics are Western-centric and broad, making them difficult to apply to on-the-ground situations when researching intellectual, behavioral, or psychiatric disability outside the Global North. Additionally, many ethnographers and cross-cultural psychiatrists do not report the ethical issues they encounter during research or applied work, meaning that, despite their ubiquity, these issues are not widely discussed. Researchers and international mental health workers will continue to encounter ethical issues in the field; beginning a discussion of these concerns is therefore critical. I argue that, given the situation-specific and ambiguous nature of ethical issues—which, in a sense, differentiates ethics from morals—the best way to address this topic is through storytelling.





In his ethnography on HIV in Russia, anthropologist Jarrett Zigon uses the phrase “moral breakdown” to describe times when “some event or person intrudes into the everyday life of a person and forces him to consciously reflect upon the appropriate ethical response (be it words, silence, action or nonaction)” (2010, 69). This is similar to what is commonly called an ethical dilemma, but I prefer Zigon’s term because, in the moment, these events often feel like a shattering of moral precepts previously considered indestructible. In my presentation, I described several of the moral breakdowns I experienced while doing research in India. These fell into three themes: navigating my various roles (e.g., researcher, ‘expert’), differences in health care privacy, and the context of maltreatment (in this case, restraint).


 

My paper was presented alongside one describing a program in Cambodia that encouraged and empowered disabled youth in their communities through employment, education, and social events. The audience raised and discussed several issues: how to further scholarly work on the ethics of disability-related fieldwork, differing perspectives on the importance of diagnostic labels, and whether and how to push an agenda of empowerment and advocacy rather than one of immediate physical and/or financial needs.





The discussion during and after my presentation was incredibly helpful for how I think about my work, as were other talks I attended. For instance, one presentation described a project that brought women with disabilities from the U.S. to Jordan to learn about and discuss education, employment, and daily living. Many of the events described were familiar, as participants faced similar ethical dilemmas, including how to discuss disabled sexuality in a conservative culture and how to respond to professionals who describe needing to quell the hopes and dreams of disabled youth in favor of more ‘realistic’ goals.





The conference also included a panel on bioethics and disability studies, two fields that often conflict on topics such as end-of-life decision-making and prenatal diagnosis. In the three years I have attended SDS, this was the first year I remember seeing a panel on bioethics, and it was clear that more work needs to be done to bridge these disciplines. The connection to bioethics was unclear in two of the three papers. The third was a talk by a Disability Studies scholar about his experience being hired at a bioethics center at a prestigious university. It touched on some of the concerns related to merging Disability Studies and bioethics, but did not directly address the nature of these issues.





As I noted, I have attended SDS for three years and always leave feeling more connected to the field of Disability Studies and armed with new ways to approach my own work. It is my hope that next year I can attend and promote more discussion of bioethics, neuroethics, and the ethics of disability- and mental health-related fieldwork. Disability Studies and bioethics both focus on issues related to the body, health, and illness, and have much to learn from, as well as teach, each other. A forum like SDS is just one venue for promoting this collaboration.






For more information on autism, psychiatry, culture, disability studies, and neuroethics see:




Daley, Tamara. (2002). The need for cross-cultural research on the Pervasive Developmental Disorders. Transcultural Psychiatry, 39. DOI: 10.1177/13634615203900409




Davis, Lennard (Ed.). (2010). The Disability Studies Reader, 3rd Edition. New York: Routledge.




Farah, Martha. (2010). Neuroethics: An Introduction with Readings. Cambridge: The MIT Press.




Feinstein, Adam. (2010). A History of Autism: Conversations with Pioneers. West Sussex: Wiley-Blackwell.




Grinker, Roy R. (2007). Unstrange Minds: Remapping the World of Autism. New York: Basic Books.




Kleinman, Arthur. (1988). Rethinking Psychiatry: From Cultural Category to Personal Experience. New York: Free Press.




Sarrett, Jennifer. (2012). Autistic Human Rights—A Proposal. Disability Studies Quarterly, 32(4). http://dsq-sds.org/article/view/3247/3186




Siebers, Tobin. (2008). Disability Theory. Ann Arbor: The University of Michigan Press.


World Health Organization. (2001). The World Health Report 2001: Mental Health: New Understanding, New Hope. Geneva: World Health Organization.










Want to cite this post?



Sarrett, J. (2013). Report from Society for Disability Studies: Bringing Ethics, Bioethics, and Disability Studies Together. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/08/report-from-society-for-disability.html

Tuesday, August 20, 2013

Perceptions of Animals




Dr. Frans de Waal

By Frans de Waal, Ph.D.



Frans de Waal is the Charles Howard Candler Professor of Primate Behavior at Emory University and the Director of the Living Links Center at the Yerkes National Primate Research Center. He is also a member of the United States National Academy of Sciences and the Royal Netherlands Academy of Sciences and a member of the AJOB Neuroscience editorial board. His research focuses on primate social behavior, including conflict resolution, cooperation, inequality aversion, and food-sharing. 





At a recent workshop on "Beastly Morality" (April 5, 2013, Emory Ethics Center), which drew participants from all over the country, I asked an innocent question. We had about sixty scholars presenting or listening to academic papers on the human-animal relationship or the place of animals in literature, and I asked how many of them worked with animals on a daily basis. The answer: no one.




It was a naive question, because had these academics worked with animals, they would probably be writing on something totally different, such as the behavior of animals, their treatment by us, or their intelligence. That's what I do, being a scientist. We rarely write about anything that cannot be observed or measured, and so we assume it must be the same for everybody else. But if one's focus is how Thomas Aquinas viewed animals, the definition of personhood, or the moral status of animals in Medieval Japan -- all of which were topics at the workshop -- first-hand knowledge of animals is hardly required.



Undeniably, there is a dearth of exchange between scientists and other academics on the issue of animals, the reason being that for scientists the animal is a concrete study object, whereas for scholars in English departments or other corners of the humanities, the animal is often an abstract entity judged by its place in literature, its perception in history, its role in religion, or its relation to human self-identity. Are we animals? Positions seem to be gradually shifting in this direction, but none of this relates much to the essence of the animal itself, even less to any specific species, such as our closest relatives, the anthropoid apes.



On the other hand, it would be naive for scientists to think that how we study animals is free from cultural biases. It is impossible for us to break away from human perceptions. There is a reason, for example, why the treatment of animals as individuals by giving them names and following their lives over time -- a common technique today -- is not a Western invention. Lacking souls, animals were traditionally viewed as all the same. European ethologists kept talking about species-typical behavior, and American behaviorists did not even appreciate that species might differ. B. F. Skinner bluntly said: “Pigeon, rat, monkey, which is which? It doesn’t matter” (Bailey, 1986).








There was enormous resistance, therefore, to the personalization of animals, so much so that when Kinji Imanishi, the father of Japanese primatology, visited American universities in 1958 to explain how his students recognized a hundred different monkeys in each troop, he was met only with raised eyebrows. His audience felt that doing so was an impossibility (de Waal, 2001). The first to recognize the potential of the Japanese approach was Ray Carpenter, an American primatologist. Carpenter himself identified individuals by means of tattoos, hence with an initial underestimation -- typical of Western science -- of their individuality. It would be a bit like me going to a party and putting colored dots on everyone’s foreheads, saying that otherwise I couldn’t tell these people apart. It is obvious, however, that Carpenter was an astute observer. When he first heard of the Japanese studies, he did not share the skepticism of his colleagues, who reacted with disbelief that monkeys could be distinguished just by sight. They viewed all this naming of individuals as hopelessly anthropomorphic, which at that time was about the most damning label one could come up with. Animals were supposed to be different. They wondered if the Japanese were not grossly overestimating the social lives of their monkeys. Who said that monkeys could tell each other apart, even if human observers claimed they could? Even though the Japanese approach has now won many converts, I call it a "silent invasion," given how reluctant Westerners have been to recognize Imanishi's priority and influence (de Waal, 2003).






Kinji Imanishi with a baby gorilla

Clearly, the way we perceive animals affects how we conduct science. There is every reason for scientists to listen to exposés on the cultural views of animals, just as there is every reason for anyone writing on animal representations to investigate what science actually knows about the species in question. This way, both groups may come together and have a more fruitful exchange than we have had thus far.

 



References


  • Bailey, M. B. (1986). Every animal is the smartest: Intelligence and the ecological niche. In: Animal Intelligence. R. Hoage & L. Goldman (Eds.), pp. 105-113. Washington, DC: Smithsonian Institution Press.



  • de Waal, F. B. M. (2001). The Ape and the Sushi Master: Cultural Reflections by a Primatologist. New York: Basic Books.



  • de Waal, F. B. M. (2003). Silent invasion: Imanishi’s primatology and cultural bias in science. Animal Cognition 6: 293-299.




Want to cite this post?



de Waal, Frans. (2013). Perceptions of Animals. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/08/perceptions-of-animals_20.html









Tuesday, August 13, 2013

(Hypothetical) Crimes Against Neural Art

We would expect that if there were any moral outrage to be had over the treatment of cultured neural tissue, it would occur in an art gallery. Something about an art gallery sensitizes us to the well-being of critters we might not usually care about - as in the case of Garnet Hertz's Cockroach Controlled Mobile Robot (a three-wheeled robot about half the size of R2D2, driven by a Madagascar hissing cockroach) - and to cry out over events that we might otherwise willfully ignore or even accept as routine - as in Guillermo Vargas's infamous “You Are What You Read” (where a starving dog was taken off the street and brought into a gallery) [1]. Instead, when neural tissue is given a robotic body and placed on display (sometimes remotely) in an art gallery, most responses seem to focus on the ambiguous nature of the works. Artist Stephane Dumas wrote, referring to MEArt (a drawing robot controlled by a culture of rat brain cells), that “the public can experience the drawing activity and at the same time sense the presence of its remote initiator, the brain [2],” implying a felt mental presence associated with the biological components of the work. However, Dr. Stuart Bunt, one of the scientists who worked on Fish and Chips (a precursor to MEArt that used tissue taken from fish rather than rats), wrote that “many viewers of Fish and Chips embodied it with impossible sentience and feared it unnecessarily [3],” indicating that the attributed mental life (and the moral obligations it would imply) was an illusion constructed by the framing of the piece.
This contradiction between the audience's and the creators' interpretations of these pieces is reflected in Dumas's assertion that embodied neural bioart (here referring to Silent Barrage, which featured a distributed robotic body that audience members could walk through) “is a work in progress that raises more questions about the relationship between neural mechanisms and creative consciousness than it answers [2].” This ambiguity is even praised by artist Paul Vanouse, who states that “MEArt's creators have cleverly designed their thought-provoking apparatus to maximize cognitive dissonance [4],” while Emma McRae describes MEArt as an example of “an infinite multiplicity of agencies [5]” that don't fit into well-established categories and with which humans must learn to share the world [6].










If even cockroaches become objects of empathy in an art gallery, what would it take for us to feel sorry for neural culture?  Above photos by Sharmanka and Douglas Repetto, from here.







If Fish and Chips, MEArt, and Silent Barrage didn't raise any overt ethical alarms, what would be required for such an embodied neural artwork to be 'wronged'? While moral transgressions are certainly possible on a multitude of grounds (e.g., affronts to the dignity of the cultures, or more likely to the dignity of the animals they were derived from), for the moment let us narrow ourselves to the possibility of causing morally relevant pain in such a system. Previously on this blog I've discussed several different ways of looking at the possibility of pain in embodied neural cultures. Here I'd like to present my own hypotheses about the minimal requirements for creating morally relevant pain under these different perspectives.



Both behavioral and anatomical perspectives on pain have serious problems identifying the presence of morally relevant pain in neural culture. This is because both perspectives require reference points that clearly demonstrate pain, and both define that pain based on qualities that can be shared with neural culture. From the behavioral perspective, such pain might seem possible if one created a robotic body that constantly whimpered or otherwise generated pain-associated behaviors. However, we wouldn't trust that system to have any sort of 'authentic' pain; the signal produced could just as easily be giggling, and there would be no change from the perspective of the culture. The relationship between bodily activity and morally relevant mental states, which, while not fixed, does possess evolved biological structure in 'full' animals, is arbitrary in the case of embodied neural culture. From the anatomical perspective, pain also might seem possible at first. Works like Silent Barrage and MEArt share some cellular similarities with some of the most morally relevant parts of the pain system - the cortical regions that appear necessary for caring about pain. However, these isolated neural tissues lack the connections to other parts of the brain and body that usually interpret what the cortex does in such a way that pain is produced.






What would it take to 'wrong' a culture of dissociated rat neurons?  Image by Dr. Steve Potter, from here.

A mathematical perspective on pain gets around the reference issues of the behavioral and anatomical perspectives by holding that the internal structure of a system determines its ability to feel different states, including pain. Thus, this perspective can be applied to any system, whether naturally or artificially constructed - the only things that matter are the (mathematical) rules that govern how the system changes over time. Such a definition hasn't been built yet, but the Integrated Information Theory of consciousness seems poised to construct one. In the meantime, we might hypothesize that such a description of pain would include essential qualities such as aversion, negative reinforcement, redirection of attention, and sensitization to other 'painful' stimuli. The described mathematical structure would need to be rich enough to be convincingly authentic, such that even if the system under investigation were 'wired up incorrectly' - as in the case of the neural culture that could just as easily be made to laugh as to cry - there would be some latent structure present in the culture's actions that was clearly identifiable as pain.
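To make the idea concrete, here is a toy sketch (my own illustration, not drawn from Integrated Information Theory or from any of the works discussed) of an agent whose update rules exhibit some of the hypothesized essential qualities: a scaled aversive response, sensitization, and negative reinforcement. The point is only that such qualities can be stated as rules governing how a system changes over time, which is what a mathematical definition of pain would have to formalize. The class name and numeric constants are invented for the example.

```python
# Toy illustration: 'pain-like' qualities expressed as update rules.
# Not a model of any real neural culture; all values are arbitrary.

class ToyAgent:
    def __init__(self):
        self.sensitivity = 1.0   # grows with repeated noxious stimuli (sensitization)
        self.avoidance = {}      # learned tendency to avoid actions that preceded pain

    def noxious_stimulus(self, preceding_action):
        """Apply a 'painful' input and return the magnitude of the aversive response."""
        response = self.sensitivity          # aversion scales with current sensitivity
        self.sensitivity *= 1.5              # sensitization to later stimuli
        # negative reinforcement: the action that led here is avoided more strongly
        self.avoidance[preceding_action] = self.avoidance.get(preceding_action, 0) + 1
        return response

agent = ToyAgent()
r1 = agent.noxious_stimulus("approach")
r2 = agent.noxious_stimulus("approach")
assert r2 > r1                              # sensitized: stronger second response
assert agent.avoidance["approach"] == 2     # reinforced avoidance of the action
```

A richer definition would have to show that this latent structure is present in the system's internal dynamics, not merely in an output channel that could just as easily be rewired to signal laughter.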



Lastly, we might trade these views on moral pain, which focus on the neural culture in isolation, for a perspective that focuses on the interactions between the neural culture and its environment - what I have previously termed a 'social' perspective. This perspective looks for grounds on which audience members might interpret the activity of a neural culture the same way they might interpret morally relevant pain in animals - as a signal that the audience is obliged in some way to help the culture deal with its imminent destruction, whether real or perceived. As the neural culture is not any living animal the audience is familiar with, the social perspective does not attempt to interpret it as such. (Though certainly, such similarity, if it did exist, would be reasonable grounds for a different interpretation. We have the option of treating a slice of anterior cingulate cortex (ACC) as if it still existed with the rest of a rat, just as we have the option of treating a deposed dictator with the same respect they commanded while in power.) It is useful to note that by treating the neural culture as a sign open to interpretation, the social view actually encompasses the other views examined above, each as a method of interpretation in its own right, and perhaps appropriate in its own set of situations.






A torn image created by MEArt. In early shows, the control system for the robotic arms was still being developed. Was the destruction of one of MEArt's products, the drawing, a moral failing of the artists who created the piece? Could such destruction be considered analogous to the painful experience of the destruction of one's own body? Image from here.

Without an evolutionary 'narrative' to tell us how to behave (as we have when interpreting the facial expressions and vocalizations of social mammals, for instance), we are left to the artists and scientists who created the work to provide some sort of moral structure. From this perspective, pain might be something as simple as an LED that the culture could trigger if the pH of its media deviated too far from homeostatic limits. All the necessary components are there - the audience can receive the signal, the signal signifies the pending doom of the culture, and the signal is in some way generated by the culture itself (that is, the signal would stop if the culture actually did die - as might be the case if we 'euthanized' the culture). However, while there is enough structure in this scenario for some audience members to interpret the signal as pain, the lack of interaction between the audience and the culture makes this pain seem flat and robotic. A more 'authentic' pain might require a richer relationship between the audience and the culture. For instance, perhaps the audience has the ability to feed the culture and thus 'relieve' its pain (by bringing the media back within a safe pH limit) - or at least a way to request that the researchers feed the culture. Such a moral relationship could be strengthened by adding repercussions for the audience's actions [6] - the culture could respond to feeding with a neutral sound described by the creators as a 'thank you,' the culture could blink its LED with greater frequency if it found that 'polite' requests for feeding were ignored, or human protesters could be staged outside the gallery crying for 'tissue culture liberation' and preaching that only a philosopher could ignore pain in neural culture. The key bit here isn't so much the qualities of the culture itself, but the interactions between the culture and the audience that might generate a truly moral pain.
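The pH-and-LED scenario can be sketched in a few lines of code. This is a hypothetical illustration of the signal loop described above, not part of any actual installation; the class name, pH thresholds, and 'feeding' behavior are all invented for the example.

```python
# Hypothetical sketch of the distress-signal loop: an LED that lights while
# the culture is alive but its media pH has drifted outside homeostatic
# limits, and an audience 'feeding' action that relieves the signal.
# All thresholds are invented for illustration.

SAFE_PH = (7.0, 7.6)   # assumed homeostatic window for the media
LETHAL_DRIFT = 1.0     # drift beyond the window by this much 'kills' the culture

class CultureSignal:
    def __init__(self, ph=7.3):
        self.ph = ph
        self.alive = True

    def drift(self, delta):
        """Media pH drifts over time; past the lethal bound, the culture dies."""
        self.ph += delta
        low, high = SAFE_PH
        if self.ph < low - LETHAL_DRIFT or self.ph > high + LETHAL_DRIFT:
            self.alive = False

    def feed(self):
        """An audience 'feeding' restores the media to a safe pH."""
        if self.alive:
            self.ph = 7.3

    def led_on(self):
        """The distress LED lights only while the culture is alive and stressed."""
        low, high = SAFE_PH
        return self.alive and not (low <= self.ph <= high)

culture = CultureSignal()
assert not culture.led_on()      # healthy media: no signal
culture.drift(+0.5)              # pH 7.8: stressed but alive
assert culture.led_on()
culture.feed()                   # audience intervenes
assert not culture.led_on()
culture.drift(+2.0)              # catastrophic drift: the culture 'dies'
assert not culture.led_on()      # and the signal stops, as described above
```

Note that the signal's moral weight comes entirely from the framing: the same threshold rule could drive a 'laughing' speaker instead of a distress LED, which is precisely the arbitrariness the behavioral perspective worries about.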



Want to cite this post?



Zeller-Townson, RT. (2013). (Hypothetical) Crimes Against Neural Art. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/07/hypothetical-crimes-against-neural-art.html





[1] Other examples of this trend include Gregor Schneider's “Death Room” (a room intended specifically for someone to die in, with plans to allow a volunteer to pass away publicly, of natural causes) and Huang Yong Ping's “Theatre of the World” (where a variety of reptiles, amphibians, insects, and arachnids were allowed to hunt and eat one another). Perhaps an art context, with its overtones of control and even frivolity, prevents us from excusing events such as starvation and natural death as unavoidable, and predation as necessary for some greater good.



[2] Dumas, Stephane. "Creation as Secretion. An externalist model in esthetics." Situated Aesthetics: Art Beyond the Skin. Ed. Riccardo Manzotti. Imprint Academic, 2011.



[3] Bunt, Stuart. "Cybernetics and the Interaction Between Pure and Applied Sciences and the Humanities." Proceedings of The 17th International Symposium on Electronic Art, Istanbul, September 14-21, 2011. Retrieved from <http://isea2011.sabanciuniv.edu/paper/cybernetics-and-interaction-between-pure-and-applied-sciences-and-humanities> on July 19th, 2013.



[4] Vanouse, Paul. "Contemplating MEArt - the semi-living artist." Retrieved from <http://www.paulvanouse.com/MEART_PV_essay.pdf> on July 19th, 2013.



[5] McRae, Emma. "A report on the practice of SymbioticA Research Group in their creation of MEART - the semi living artist." Retrieved from <http://www.fishandchips.uwa.edu.au/project/emma_text.pdf> on July 19th, 2013.



[6] McRae does go on to point out that such works, performed in a university setting and using animal tissue, required ethical approval by animal ethics committees/institutional animal care and use committees. She notes that such committees often focus on the scientific merits of these works to justify them, even refusing to comment on works whose value is primarily artistic. These committees are primarily concerned with the use of the animals from which the tissues are derived, however, so they reflect ethical concerns not with the neural cultures themselves so much as with the processes that produce them.

Tuesday, August 6, 2013

Intervening in the brain: with what benefit?

By Hannah Maslen, DPhil and Julian Savulescu, PhD




Hannah Maslen is based at the Oxford Martin School, University of Oxford





Julian Savulescu is Uehiro Professor of Practical Ethics at the University of Oxford, Fellow of St Cross College, Oxford and the Director of the Oxford Uehiro Centre for Practical Ethics. He is also a member of the AJOB Neuroscience editorial board.



Novel neurotechnologies

Last week, the Nuffield Council on Bioethics released its report entitled Novel neurotechnologies: intervening in the brain. The aim of the report is to provide a reflective assessment of the ethical and social issues raised by the development and use of new brain intervention technologies. The technologies the report examines include transcranial brain stimulation, deep brain stimulation, brain-computer interfaces, and neural stem cell therapies. Having constructed and defended an ethical framework for navigating the ethical and social concerns raised by novel neurotechnologies, the report proceeds to discuss 1) the care of the patients and participants undergoing interventions, 2) what makes research and innovation in neurotechnologies responsible, and 3) how novel neurotechnologies should be regulated.



The remainder of the report explores non-therapeutic applications of novel neurotechnologies (such as enhancement and gaming) and how research into these technologies should be communicated in the media. Amongst the Council’s conclusions is the view that whilst the ethical issues raised by novel neurotechnologies are not necessarily unique or exceptional, the significance of the brain in human existence (to the sense of self and to personal relationships) generates powerful reasons both to intervene when function is damaged and to proceed with caution before intervening without good evidence of safety and benefit (para 10.3).



Assessing the benefits of a technology

Requiring evidence of the benefits of a potentially risky technology is common to assessments of a technology’s overall permissibility, particularly within the clinical domain. We wish to focus here on the Council’s conception of benefit as outlined in its ethical framework, suggesting that whilst its approach is appropriate for assessing the permissibility of clinical applications, it should not transfer to discussions of neurotechnologies used for enhancement.





Paragraph 4.20 of the report explains:

"The ethical challenges presented by uncertainty do not pertain to knowledge of risks alone; it is equally important that the benefits of intervening are well understood. … Even if, as in the case of non-invasive neurostimulation, risks are considered low, given the special status of the brain even less serious risks must be counterbalanced by clear indications of effectiveness in comparison with other therapeutic options if their use is to be supportable. (Second emphasis added)"



Whether or not this was the Council’s intention, this paragraph portrays the benefits of a technology as being closely linked – or perhaps even identical – to its effectiveness. This makes sense in the clinical domain, where interventions are intended to have particular remedial or protective effects, easily measured as improvements to, or maintenance of, function or physiology. What constitutes an improvement or decline in health is mostly not controversial and can be measured objectively: for example, how far a person can walk after hip surgery is objectively measurable. Further, in the clinical domain, whilst the informed consent of patients is routinely obtained before proceeding with any intervention, a patient’s decline in health puts her in a vulnerable position where it is likely she will be inclined to accept the treatments on offer. This inclination may be bolstered by the perception that the intervention on offer is ‘endorsed’ by the medical profession, with its authority. This being the case, good evidence of effectiveness (benefit) must be gathered before offering interventions posing any risks.



The benefits of enhancement

Some of the technologies under discussion by the Council are also being marketed for the purpose of enhancement. Brain stimulation devices and other neurotechnologies are, among other things, being used in pursuit of improvements to memory and concentration. The Council is of the view that the effectiveness of interventions used for enhancement is yet to be established (para 8.44), and further suggests that it is not even clear how the benefits of technologies used for enhancement should be assessed, nor what constitutes proportionate risk where an intervention is non-essential (para 8.30). Whilst equating benefit with effectiveness is a sound strategy for an ethical framework assessing the use of neurotechnologies in the clinical context, we suggest that, when technologies are marketed to competent individuals not considered unwell, 1) ‘benefit’ should be understood differently and 2) the requirement of strong evidence of benefit should (partly as a consequence) be relaxed.



Although the risks and side effects of neurotechnologies used for enhancement could be assessed in a similar way to the risks and side effects associated with their clinical application, it is less clear how the benefits of these interventions should be measured. It could be argued that, unlike clinical interventions – which succeed or fail in improving or maintaining health to a measurable degree – technologies used for enhancement confer benefits that are more subjective and context specific. Parallels might be drawn with cosmetic enhancements: a nose might be made smaller or straighter in a way that we can measure, but how beneficial this is will vary from person to person and culture to culture. Granted, it is possible to measure the size of any improvement to cognitive performance: an improvement to the memory of an individual using a brain-stimulating device could be determined through laboratory tests. However, whilst we can measure the size of improvements to cognitive function, it could be argued that the value of enhancement is something that varies between people to a greater extent than the value usually attached to health. This value will depend on the circumstances specific to each individual. Improvement of memory will have a different value for a vigorous professor than for a retired gardener, though it will have some objective value for both.





Consequently, we suggest that ‘benefit’ should be understood as an estimation of the technology’s propensity to increase wellbeing, where an increase in a person’s wellbeing is related to the chances of her leading a good life in the relevant set of circumstances. Crucially, what constitutes a good life will vary depending on the person’s goals and values, their nature and their circumstances. In fact, this ‘welfarist’ definition of enhancement also subsumes those effects commonly thought to be treatments: if a neurointervention is used to alleviate symptoms of Parkinson’s Disease, for example, it is likely to have increased the patient’s chances of leading a good life.



But, it could be asked, if this concept of increase-to-wellbeing is supposed to encompass both effects seen as treatments and effects seen as enhancements, then why do we agree with the Council that benefit should be understood as effectiveness when assessing technologies used in the clinical context? We emphasize our earlier points: the first reason is that the ‘therapeutic’ effects of clinical applications are likely to be valued by most people – to be necessary for leading a good life on most conceptions. Most people want to be able to walk around after hip surgery and get back to the ‘activities of daily living’. This value accorded to health is likely to be more universal than the value accorded to enhancement. The second reason appeals to our argument that decisions about undergoing an intervention made in the clinical context are importantly different from decisions made in the non-clinical context, due to the particular vulnerabilities present when one’s health is in jeopardy. Understanding size of benefit as degree of effectiveness in the clinical context serves as a justifiable safeguard.



However, absent these particular vulnerabilities, the concept of benefit should be understood as the broader notion of increase-to-wellbeing. Both these factors speak in favour of giving individuals more choice about how to assess the risks and benefits of any particular device in the context of their own values, nature and life circumstances. As medical need falls, consumer freedom-to-choose should rise, other things being equal. People are generally the best judges of what is best for themselves, a point made a long time ago by John Stuart Mill.



Implications of the well-being framework for enhancement regulation

As noted, the Council suggests that it is neither clear how the benefits of technologies used for enhancement are to be assessed nor what constitutes proportionate risk where an intervention is not essential for maintaining an individual’s health. However, given their recommendation that neurotechnologies used for enhancement should be regulated in the same way as medical devices (para 8.52; we have argued for a similar model elsewhere), these issues are important ones to resolve: the legislation controlling the placing of medical devices on the market requires a comprehensive risk-benefit assessment. For clinical neurointerventions, we have argued that there is a good case for imposing strict restrictions based on risk and efficacy in order to protect vulnerable patients. By contrast, when technologies are intended for enhancement, we have suggested that, whilst it will be very important that potential consumers are well informed about an intervention’s mechanism, risks and effectiveness, the assessment of benefits and the weight they should be accorded should be made by the consumer. This points to a regulatory model whereby the most dangerous enhancement technologies will be filtered out of the market, leaving individuals free to choose which small-to-moderate risks they are willing to take in pursuit of their wellbeing.









References 

Nuffield Council on Bioethics (2013), ‘Novel Neurotechnologies: Intervening in the Brain’, published 24 June 2013.



Savulescu, J., Sandberg, A. and Kahane, G. (2011), ‘Well-being and Enhancement’, in J. Savulescu, R. ter Meulen and G. Kahane (eds.), Enhancing Human Capacities, Wiley-Blackwell.



Kahane, G. and Savulescu, J. (2009), ‘The Welfarist Account of Disability’, in K. Brownlee and A. Cureton (eds.), Disability and Disadvantage, Oxford: Oxford University Press, 14–53.



Mill, J.S. (1859), On Liberty.



Maslen, H., Douglas, T., Cohen Kadosh, R., Levy, N. and Savulescu, J. (forthcoming), ‘Do-it-yourself brain stimulation: a regulatory model’, Journal of Medical Ethics.





Want to cite this post?

Savulescu, J. and Maslen, H. (2013). Intervening in the brain: with what benefit? The Neuroethics Blog. Retrieved on
, from http://www.theneuroethicsblog.com/2013/08/intervening-in-brain-with-what-benefit.html