
Tuesday, October 29, 2013

Neuroethics Journal Club: The Ethical Issues behind Brain-to-Brain Interface (BTBI) Technologies

The first Neuroethics Journal Club of the Fall 2013 semester was a discussion led by graduate student John Trimper on the ethical implications of brain-to-brain interface (BTBI) technologies. John introduced the topic by presenting the experimental details and results from a recent paper, published by the Nicolelis lab at Duke University (Pais-Vieira et al.), in which researchers used a BTBI to transfer sensorimotor information between two rats. The BTBI allowed information to pass from an “encoder” rat to a “decoder” rat, not through typical bodily interactions, but through intracortical microstimulation (ICMS).






"Rodent Mind Meld" (Via Wired)



The researchers conducted three experiments that demonstrated an artificial communication channel in which cortical sensorimotor signals coding for a specific behavioral response were recorded in the encoder rat and transmitted to the decoder rat. Once received, these signals guided the decoder rat in making behavioral choices. In the first experiment, a motor task, the encoder rat pressed one of two levers indicated by an LED light. This information was transferred via ICMS to the decoder rat, which would then choose the same lever without the help of the LED light. While the encoder rat performed better than the decoder rat, the decoder rat did perform correctly at levels significantly above chance. In the second experiment, a tactile discrimination task, the decoder rat again performed significantly better than chance. The encoder rats were trained to discriminate the size of an aperture with their whiskers; if the aperture was narrow, the rats would nose poke on the left, while if the aperture was wide, the rats would nose poke on the right. Encoder rats explored the aperture, nose poked right or left, and again, through ICMS, this information was sent to the decoder rat. The decoder rat would then also poke right or left, but without any hint about the size of the aperture. Not only did researchers conduct this experiment with encoder and decoder rats residing in the same Duke laboratory, but, impressively, the same tactile discrimination task was also completed with an encoder rat in Brazil and a decoder rat at Duke, showing the potential of long-distance BTBI technology.



The authors state at the end of the paper (Pais-Vieira et al.) that “multiple reciprocally interconnected brains,” as opposed to the dyad formed by one encoder rat and one decoder rat, would represent the “first organic computer capable of solving heuristic problems that would be deemed non-computable by the general Turing-machine,” and there is no doubt that this research is at the forefront of BTBI technology. While many mass media outlets sensationalized this story with headlines like “Two rats, thousands of miles apart, cooperate telepathically via brain implant” (NBC News) or “Rodent Mind Meld: Scientists Wire Two Rats’ Brains Together” (Wired), many of the journal club audience members were a bit more skeptical regarding the experimental details and the immediate consequences of the research for society. It was pointed out by more than one member of the audience that the decoder rat making the correct lever or nose poke choice was not a result of direct communication from the encoder rat, but instead was due to conditioning resulting from the training program. Both the encoder rat and the decoder rat underwent extensive training before being connected and allowed to communicate through ICMS. Potential encoder and decoder rats were first trained to respond to either the LED visual stimulus or the width of the aperture, the tactile stimulus, until 95% accuracy was reached. Rats then chosen for the decoder position underwent further training after being implanted with microstimulation electrodes; these rats were trained to recognize that multiple ICMS pulses were associated with the right lever/left nose poke, whereas a single ICMS pulse was associated with the left lever/right nose poke. When the encoder rat chose the right lever over the left lever, the decision was sent by ICMS to the decoder rat, which would perceive multiple ICMS pulses, a sequence the animal had already been trained to associate with the right lever. It was generally agreed that the decoder rat’s choice is not entirely the “mind meld” that popular media would suggest, but instead depends on the rat’s trained ability to perceive a difference in stimulation.
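To make the conditioning interpretation discussed above concrete, here is a minimal, hypothetical Python sketch (not code from the paper) that treats the decoder's choice purely as trained discrimination of the ICMS pulse pattern. The encoder_accuracy and decoder_discrimination values are invented parameters for illustration, not figures reported by Pais-Vieira et al.; the point is simply that an imperfect but better-than-chance pulse discriminator reproduces the qualitative result of above-chance, below-encoder performance.

```python
import random

def run_trials(n_trials=10000, encoder_accuracy=0.95, decoder_discrimination=0.8):
    """Simulate encoder -> ICMS -> decoder trials; return each rat's hit rate."""
    encoder_correct = 0
    decoder_correct = 0
    for _ in range(n_trials):
        target = random.choice(["left", "right"])  # lever cued by the LED
        # The encoder responds correctly with its trained accuracy.
        if random.random() < encoder_accuracy:
            encoder_choice = target
        else:
            encoder_choice = "left" if target == "right" else "right"
        encoder_correct += encoder_choice == target
        # The encoder's choice is coded as an ICMS pulse pattern:
        # right lever -> multiple pulses, left lever -> single pulse.
        pulses = "multiple" if encoder_choice == "right" else "single"
        # The decoder discriminates the pulse pattern imperfectly and picks
        # the lever it was conditioned to associate with that pattern.
        if random.random() < decoder_discrimination:
            perceived = pulses
        else:
            perceived = "single" if pulses == "multiple" else "multiple"
        decoder_choice = "right" if perceived == "multiple" else "left"
        decoder_correct += decoder_choice == target
    return encoder_correct / n_trials, decoder_correct / n_trials

enc, dec = run_trials()
print(f"encoder: {enc:.2f}, decoder: {dec:.2f}  (chance = 0.50)")
```

With these made-up parameters the decoder lands well above chance but below the encoder, which is the pattern the journal club attributed to conditioning rather than to any direct sharing of thoughts.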






The encoder and decoder rats (Pais-Vieira et al.)



Even if directly transferring complex thoughts among individuals will remain science fiction for now, this research and other experiments, such as the BTBI between a human and a rat (Yoo et al.), are important because of the numerous ethical issues that accompany BTBI. BTBI takes brain-machine interface (BMI) technology to a new level, and for that reason it is already associated with multiple ethical issues, such as the likelihood of extracting incidental information, the need for neurosecurity to protect individual neural mechanisms, and the potential for hacking. But because BTBI is a direct transfer of thoughts, it takes many of these ethical issues to a new level as well, and journal club took time to discuss several of these concerns. If two individuals are sharing ideas, does a collective identity result? Who is responsible for the actions committed by the decoder if the encoder is on the other end of the communication line dictating the decoder’s moves? Whether or not responsibility for a decision is shared, does the encoder or the decoder have ownership of any resulting ideas? In an ideal situation, the communication line would never be flawed, but what if the computer’s algorithm made a mistake and the wrong information was transferred? Many of these questions were framed around a military scenario: the encoder is a soldier, completely removed from combat but with a complete view of the battlefield, and the decoder is active-duty military personnel physically experiencing the fighting. A friendly fire incident ensues, and the decoder kills a fellow soldier because of the information transferred from the encoder. Who should be held accountable for the kill – the decoder, the encoder, or the scientists responsible for the BTBI military setup? This kind of long-distance combat could also lead to new and complex forms of PTSD for the decoder and the encoder, which could require new research and treatments. After discussing many of these questions, it was agreed that the transition from the current BTBI set-up, with rats pushing levers or nose poking, to a militaristic BTBI will have to be accompanied not only by many more scientific breakthroughs but also by many more important ethical discussions and decisions. Perhaps BTBI is a novel form of social interaction as well, and in the future journal club meetings with PowerPoints and spoken dialogue will become archaic – we will instead use BTBI to communicate and transfer the material.





References



Trimper, J. (2013). Let’s Put Our Heads Together and Think About This One: A Primer on Ethical Issues Surrounding Brain-to-Brain Interfacing. The Neuroethics Blog. Retrieved on October 1, 2013, from http://www.theneuroethicsblog.com/2013/05/lets-put-our-heads-together-and-think.html.



Pais-Vieira, M., Lebedev, M., Kunicki, C., Wang, J., & Nicolelis, M.A.L. (2013). A Brain-to-brain interface for real-time sharing of sensorimotor information. Scientific Reports, 3, 1319.



Subbaraman, N. (2013). Two rats, thousands of miles apart, cooperate telepathically via brain implant. NBC News Science. Retrieved on October 1, 2013, from http://www.nbcnews.com/science/two-rats-thousands-miles-apart-cooperate-telepathically-brain-implant-1C8608274.



Miller, G. (2013). Rodent Mind Meld: Scientists Wire Two Rats’ Brains Together. Wired. Retrieved on October 1, 2013, from http://www.wired.com/wiredscience/2013/02/rodent-mind-meld/.



Yoo, S-S., Kim, H., Filandrianos, E., Taghados, S.J., Park, S. (2013). Non-Invasive Brain-to-Brain Interface (BTBI): Establishing Functional Links between Two Brains. PLoS ONE, 8(4), e60410.





Want to cite this post?



Strong, K. (2013). Neuroethics Journal Club: The Ethical Issues behind Brain-to-Brain Interface (BTBI) Technologies. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/10/neuroethics-journal-club-ethical-issues.html

Tuesday, October 22, 2013

Tibetan monastics and a neuroscientist: Some lessons learned and others taught

By Guest Contributor Brian Dias, PhD



Imagine your day starting out near the Northern Indian town of Dharamshala with thirty minutes of spiritual chanting and meditation among Tibetan Buddhist monastics. Now you follow that by spending the whole day teaching Neuroscience to these same monastics. “Bliss”, “introspection”, “questioning”, “challenging” and “why” are some of the words that may come to mind. They certainly did for me while I had the privilege of being a Neuroscience faculty member as part of the Emory Tibet Science Initiative (ETSI) this past summer in India. Other faculty members included Dr. Melvin Konner (Evolutionary Anthropology, Emory University), Dr. Ann Kruger (Developmental Psychologist, GSU) and Dr. Carol Worthman (Medical Anthropology, Emory University).






An audience with His Holiness The XIV Dalai Lama, and teaching monastics in Dharamshala, India.

I intend to use this blog post to shed light on the intersection of Buddhist philosophy and western science as seen through my fifteen days with the monastics (a term that includes both monks and nuns). Started in 2007 with the blessing of His Holiness The XIV Dalai Lama, the ETSI has been administered by The Library of Tibetan Works and Archives and Geshe Lobsang Negi, a professor in the Department of Religion at Emory University. Over these years, the ETSI has been teaching Math, Physics, Neuroscience and Biology to cohorts of monastics from monasteries across India. This was the second class to graduate from the ETSI's 5-year science curriculum. An immediate survey of the monastics revealed a skewed sex ratio: the class comprised 42 monks and only 2 nuns. This inequality of representation is being slowly but surely remedied, with the first group of nuns now sitting for their Geshe exams, which will confer on them the status of a Buddhist scholar equivalent to that of the male scholars.



Having had some experience teaching and mentoring within academic circles in India and the USA, I felt equipped to deal with normative classroom experiences. What I experienced was anything but normative. For one, the monastics revere their teachers to an extent I have never experienced. This is in keeping with the philosophy that teachers help the pupil uncover knowledge that aids in the attainment of Nirvana. This reverence should not be confused with obeisance, however: the monastics were among the most engaged and questioning audiences I have ever taught.






The monastics listening to the Neuroscience faculty teaching.

In keeping with this questioning was a long session on the ethics of using humans and animals for research purposes. While the use of human subjects in research passed by relatively unquestioned once the monastics ensured that the well-being of the subjects was taken into account, animal research, and consequently my own work, came under intense scrutiny and discussion. A central tenet of Buddhism is to minimize the suffering of all sentient beings, and animal research is hence at odds with that pursuit. Research with rats, mice and monkeys was glossed over in favor of discussing why cockroaches were used in research and what was done to ensure that the cockroaches were treated in a respectful manner. Such an audience, attentive to the ethics of working with humans and animals, would make IACUC and IRB panels proud.






Tamden, one of our translators with a topic that the monastics debated.

From a practical standpoint, we taught with the help of translators. These amazing individuals have science backgrounds, are trained by the LTWA, and are instrumental in any success that the ETSI has had with this undertaking. This of course meant that we spoke in two-sentence bursts that were then translated before we went on. Any questions from the monks went through this route as well. What is even more awe-inspiring is that a whole new vocabulary has had to be invented to bring neuroscience terms into Tibetan. I was informed that while ancient Tibetan scripture does talk about biology, physics and math, there is very little mention, if any, of anything neuroscience-related, hence the need for a new lexicon.




Most fascinating to me was the debating that is the cornerstone of the monastics’ learning and teaching process. As explained to me, a monk or nun reads a scripture, interprets it, and then receives a teaching from his or her teacher at the monastery. Armed with this interpretation and teaching, the monastic then enters the debating court, where, in either a paired or a group setting, he or she engages in a debate about the scripture, interpretation, and teaching with peers. What is particularly striking about this exercise is the seriousness and intensity of the debate paired with an absence of ego. Neither the challenger (standing up) nor the defender (seated) is trying to prove the other wrong; instead the goal is to move closer to the truth through intense debate and discussion. Accompanying such debating is a lot of gesturing, mainly by the challengers. Focusing on one of the many gestures, I have been given to understand that the slapping of the wrist is done to make a point but also contains some nuances. The slap of the palm strikes a wisdom nerve in the hope that the challenger and defender receive wisdom, while the downward motion of the palm closes the door to ignorance. A slight backward pull of the hand after striking the palm of the other hand is meant to convey two pieces of information: (1) let us open the door of knowledge, and (2) let neither of us hold on to our opinions too tightly. I find this last motivation quite poignant, in that we in the West would do well to emulate this lack of attachment to our own agenda when we enter into discourse with parties not sympathetic to our point of view.






Paired debate with one of the monks in the midst of vociferous gesturing

As applied to their learning and teaching practices, the monastics embody characteristics from which any academic, business, entrepreneurial, or personal pursuit would benefit: the lack of compulsion to be “right” about what they know, checking their egos at the door and instead being like sponges eager to learn, keeping an open mind to a variety of opinions, ensuring that all beings are treated with respect, and the child-like joy that accompanies their learning. In summary, my experience with the ETSI this year has left me with a profound respect for the reverence and mindfulness that the monastics bring to every aspect of teaching, learning, and existing.







Want to cite this post?



Dias, B. (2013). Tibetan monastics and a neuroscientist: Some lessons learned and others taught. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/10/tibetan-monastics-and-neuroscientist.html

Tuesday, October 15, 2013

Gearing Up for New Currents in Sports Enhancement



By Anjan Chatterjee, M.D., F.A.A.N.



Anjan Chatterjee is a Professor of Neurology at the University of Pennsylvania. His clinical practice focuses on patients with cognitive disorders. His research focuses on spatial cognition and language, attention, neuroethics, and neuroaesthetics. He is President of the International Association of Empirical Aesthetics and the Chair of the Society for Behavioral and Cognitive Neurology. He is also a member of the AJOB Neuroscience editorial board.





Alex Rodriguez is the latest in a long list of superstar athletes embroiled in a doping scandal. Lance Armstrong, Tyson Gay, Barry Bonds, Marion Jones, and Mark McGwire, among many others, preceded him. Competition in sports is predicated on athletes following rules; rules that try to codify fairness. Some combination of natural talent and effort is rewarded. Each athlete strives, and may the best man or woman win.



Despite this ethos, doping scandals abound. Almost a third of the athletes responding to an anonymous survey about the 2011 World Track and Field competition admitted to using performance-enhancing drugs [1]. Such competitions showcase the allure of enhancements, an allure magnified by winner-take-all environments. Rewards in sports do not scale linearly. Incremental differences, especially at top levels, deliver disproportionate rewards. Being the twentieth-fastest runner in the world may be an extraordinary personal achievement, but it does not garner fame and fortune. Regulation, monitoring, and the possibility of public shame presumably restrain the desire to win by any means necessary when those means include breaking rules.



Most sporting organizations ban some technological enhancements and allow others. The International Olympic Committee (IOC) first imposed restrictions on drugs in sports when it banned steroid use in 1976. In the early 1960s fiberglass poles were allowed in the pole vault. More recently, in 2010, polyurethane swimsuits were banned. The IOC allowed Oscar Pistorius, dubbed “the blade runner” because of his prosthetic carbon-fiber legs, to compete with able-bodied athletes in 2012. Experts debated whether his prosthetics gave him an advantage. The fact that he did not qualify for the 400m finals, much less win the event, may have kept the controversy from escalating.



Two recent developments, one scientific and the other commercial, make it worth reexamining the use of technological aids in sports. In the last decade noninvasive brain stimulation (NIBS) has become a mainstream investigative tool in cognitive neuroscience. Unlike functional neuroimaging, NIBS offers a direct way to make causal inferences about brain-behavioral relationships. NIBS is also used to treat depression and is being investigated to treat neurological syndromes such as aphasia and neglect. Of the NIBS techniques, transcranial direct current stimulation (tDCS) is particularly germane to this discussion. tDCS is cheap, easy to administer, and appears to be relatively safe when used appropriately.



Recent laboratory investigations demonstrate the potential for tDCS to enhance various abilities [2], some directly relevant to athletic performance (reviewed in [3, 4]). Low-intensity current delivered to different parts of the brain has been reported to increase muscle strength, quicken reaction times, ameliorate muscle fatigue, accelerate motor learning, and improve attention and perception. Twenty minutes of motor cortex stimulation on 5 consecutive days improves the acquisition and consolidation of complex motor skills; these improvements remain evident 3 months later [5]. The extent to which improvements detected in the laboratory will enhance real-world performance is not known. Whether improvements in some abilities will produce declines in other abilities is also not known.






"Be First, Be Fast, Be Foc.us"



The commercial development relevant to this discussion is the availability of tDCS equipment. Doping typically requires prescriptions from physicians or drugs from scientists, as occurred in the Bay Area Laboratory Co-operative (BALCO) affair. tDCS faces no such barriers to access. DIY tDCS communities exist, and online tutorials on how to apply the stimulating current are easily available. The equipment is being marketed to gamers and is now sold commercially for under $250 (http://www.foc.us/). The company’s website exhorts gamers to “Be First, Be Fast, Be Foc.us.” The current delivered by the headset will be controllable by mobile apps. As of this writing, the company has been sold out of headsets since sales began in July 2013. Their website states that the product is not a medical device and is not regulated by the FDA. The headset is designed elegantly. One senses its appeal to the fashion sensibilities of early adopters of tech gear, who can add it to their Google Glass and smart watch ensembles.



Appealing to gamers is a short step from enticing sports competitors more broadly. Whether the initial laboratory findings will generalize to meaningful real-world performance will not be known for a while. In the meantime, anecdotal endorsements will likely abound. Many at different levels of sports will experiment with the device. Sports clinics will frame tDCS use as a cutting-edge training technique. Its use will be difficult to monitor and nearly impossible to regulate since, as of now, there are no known biomarkers capable of detecting prior tDCS use.



It may turn out that statistically better laboratory performance after applying current to the head does not translate into better performance on the field. If so, controversies around tDCS in sports will be short-lived. But what if the device actually improves sports performance? Should enhancements in sports be allowed or banned?



The position to allow enhancements questions the assumption that sports are fair. Epstein’s [6] book “The Sports Gene” reviews how people are born with vastly different genetic endowments that make some people much better suited to excel in certain sports than others. Perhaps enhancements will help level the genetically tilted playing field. If effort is what we value, then we should minimize all factors that contribute to variance in performances that do not depend on effort. The position to ban enhancements is motivated by a respect for rules, a concern for safety, and a desire to avoid an arms race in enhancements. The position to allow some but not other enhancements, as is the case now, is based on drawing a principled distinction between allowable and non-allowable interventions. How best to draw such a distinction is far from clear and open to debate.



Ultimately, sports depend on competing within agreed upon rules. Every new technology that might enhance performance forces us to examine the logic underlying those rules, the values they embody, and the meaning of sports [7]. This is not a new issue, as evident with fiberglass poles, anabolic steroids, erythropoietin, buoyant swimsuits, laser eye surgery, and many other examples. What is new about tDCS is that monitoring the use of this technological enhancement, for the foreseeable future, will be impossible.



Should rules in sports adhere strictly to principles, if we can agree on those principles? Or should rules in sports be guided by the pragmatics of detection and enforcement?







References

1. Rohan T. Antidoping Agency Delays Publication of Research. The New York Times 2013 August 22. Retrieved from http://www.nytimes.com/2013/08/23/sports/research-finds-wide-doping-study-withheld.html



2. Hamilton R, Messing S, Chatterjee A. Rethinking the thinking cap: Ethics of neural enhancement using noninvasive brain stimulation. Neurology 2011;76:187-193.



3. Levasseur-Moreau J, Brunelin J, Fecteau S. Noninvasive brain stimulation can induce paradoxical facilitation. Are these neuroenhancements transferable and meaningful to security services? Frontiers in Human Neuroscience 2013;7. http://www.frontiersin.org/Journal/10.3389/fnhum.2013.00449/full



4. Banissy MJ, Muggleton NG. Transcranial direct current stimulation in sports training: potential approaches. Frontiers in Human Neuroscience 2013;7. http://www.frontiersin.org/Journal/10.3389/fnhum.2013.00129/full



5. Reis J, Schambra HM, Cohen LG, et al. Noninvasive cortical stimulation enhances motor skill acquisition over multiple days through an effect on consolidation. Proceedings of the National Academy of Sciences 2009;106:1590-1595.



6. Epstein D. The Sports Gene: Inside the Science of Extraordinary Athletic Performance: Penguin Group US, 2013.



7. Murray TH. Sports Enhancement. In: Crowley M (Ed.), From Birth to Death and Bench to Clinic: The Hastings Center Bioethics Briefing Book for Journalists, Policymakers, and Campaigns. New York, 2008:153-158.



Acknowledgment

I thank Roy Hamilton and Tom Murray for helpful feedback.





Want to cite this post?



Chatterjee, A. (2013). Gearing Up for New Currents in Sports Enhancement. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/10/gearing-up-for-new-currents-in-sports.html


Tuesday, October 8, 2013

Consciousness and Ethical Pain


Imagine you find that a beloved uncle has received a terrible injury that leaves him paralyzed, but still totally aware of his environment - a condition known as locked-in syndrome. Now imagine that a doctor comes to you with a miracle cure: a new experimental treatment will repair your uncle's damaged brainstem, allowing him to regain control of his body. The catch, however, is that this procedure is extremely painful. It actually seems like it might be the most painful experience possible: fMRI scans reveal that all the brain regions that are active during extreme pain are activated during this (imaginary) procedure. And it lasts for hours. However, your uncle won't complain about the procedure because 1) he's paralyzed and thus can't voice his suffering, and 2) the experience of this miracle treatment will mercifully be forgotten once the procedure is over, so your uncle won't raise any complaint afterwards. While many of us would probably sign off on the procedure, we might still feel guilty as we imagine what it must be like to go through that, even if our uncle wouldn't recall it later.







The neural 'signature' of pain, as seen through fMRI [9]. Image from here.



This scenario is meant to illustrate that there seems to be an aspect of the moral weight of pain - its significance in ethical discussion, decision making, and guilt - that has to do specifically with what pain feels like. Not the way it makes us act, not the danger it represents, but that first-person, qualitative, subjective experience of being in pain, or suffering through pain. The ability to have such qualitative, subjective experiences is called qualitative (or sometimes phenomenal) consciousness. We tend to assume that most humans are conscious, and that this is the primary reason why hurting them is wrong - indirect selfish reasons (like avoiding jail time or losing them as a friend and ally) are seen as being secondary to this primary fact: the evil of pain [1].





For this reason, discussions of pain in unfamiliar creatures (which I'm using to refer to anything that isn't able to explicitly tell you how it feels - including humans with certain neurological conditions, almost all non-human animals, and perhaps even stranger entities) are often intimately tied to the possibility of that creature being conscious. This occurs, for instance, when deciding whether a patient with Unresponsive Wakefulness Syndrome (formerly called vegetative state) should receive analgesia [2,3], or when debating the necessary precautions that should be taken when fishing or slaughtering chickens. If it can be demonstrated that something doesn't meet our requirements for consciousness, suddenly we have free rein to treat that thing as more of an object than a person [4]. If consciousness is suspected, on the other hand, we become much more cautious with our treatment of the entity.






Unfortunately, consciousness is very difficult to work with, especially from a neuroscience perspective. Since qualitative consciousness is by definition a private, personal thing, neuroscientists are limited to dealing with consciousness indirectly. This is done by looking at neural correlates of consciousness. These are the physical events that occur in brains that we agree are conscious (such as awake, healthy humans) but not in brains that we agree aren't conscious (such as the brains of humans that are in dreamless sleep) [5]. By comparing and contrasting these two 'known' points, it should be possible to identify what it is about 'conscious' brains that gives them 'consciousness,' right?
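As a toy illustration of the contrastive logic just described (and nothing more), the sketch below compares synthetic activity measures for a handful of made-up brain "regions" between a state we agree is conscious (awake) and one we agree is not (dreamless sleep), and flags the regions whose activity differs most. The region names, numbers, and threshold are all invented; real studies of neural correlates of consciousness rely on far more careful designs and statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
regions = ["frontal", "parietal", "thalamic", "cerebellar"]

# Synthetic "activity": 20 measurements per state, one column per region.
awake  = rng.normal(loc=[1.0, 1.2, 0.9, 0.5], scale=0.1, size=(20, 4))
asleep = rng.normal(loc=[0.4, 0.5, 0.4, 0.5], scale=0.1, size=(20, 4))

# The contrastive step: which regions are more active in the conscious state?
difference = awake.mean(axis=0) - asleep.mean(axis=0)
for name, d in zip(regions, difference):
    status = "candidate correlate" if d > 0.3 else "no clear difference"
    print(f"{name:10s} difference = {d:+.2f}  ({status})")
```

The sketch only restates the logic of the comparison; whether such a difference tells us anything about consciousness itself is exactly the question the following paragraphs take up.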





In healthy humans, probably. The danger might come from extrapolating these results to a neuroessentialist view of consciousness - the idea that consciousness is created purely as the result of the brain doing its thing. We might imagine that some neural circuit, once carefully tuned through currently unknown self-organizing properties, spontaneously generates a mysterious feedback loop. Like Robert Downey Jr's glowing arc reactor in the Iron Man films, this unassuming physical matter almost magically produces something totally alien and awesome. But instead of enough power to defeat Jeff Bridges in hand-to-hand combat, the neural circuit produces something much stranger - a metaphysical shift that creates a conscious entity in a sea of unconscious biochemical machinery (that is, the rest of the brain). If this circuit were missing from either a brain-damaged human or a non-human animal (or not yet fully developed in newborn infants), we could calmly assert that no matter how much these entities appeared to suffer, such displays were unconscious reflexes that didn't actually reflect any internal, private, real suffering. They would merely be biological machines, with the same moral status as toenails. As Steven Pinker phrased it in a 2007 article in Time [6], "...once we realize that our own consciousness is a product of our brains and that other people have brains like ours, a denial of other people's sentience becomes ludicrous. 'Hath not a Jew eyes?' asked Shylock. Today the question is more pointed: Hath not a Jew--or an Arab, or an African, or a baby, or a dog--a cerebral cortex and a thalamus?"










Compare and contrast. Left: the Arc Reactor activates in the first Iron Man film. Right: one of many blue glowing brains that one can find while looking through popular articles on neuroscience. Is this how we should think about consciousness emerging from the brain? Images from here and here.




An issue that can be raised with this neuroessentialist view is that consciousness and suffering, in their roles of marking creatures as having a moral status, don't necessarily refer to events that happen in the nervous system.  Instead, they could refer to ethical relationships between entities, irrespective of the underlying neural circuitry that might enable those relationships.  Much like a giant set of quadriceps might enable a sprinter to win a race, a healthy brain might enable ethical relationships between humans.  However, much like the quadriceps need skeletal and nervous systems, as well as the cultural notion of a 'race' and a worthy opponent in order to win, so too does a brain need 'opposing players' and a culturally constructed moral landscape to participate in any ethically charged interaction, including pain and suffering.  'Winning' isn't something that occurs purely in the muscles of the sprinter, and neither is 'suffering' something that occurs purely in the brain.  





This dependence of morally charged terms like 'consciousness' and 'suffering' on non-neural components has been argued in discussions of the moral status of both humans and non-humans. In regards to humans, neuroethicist Grant Gillett points out that human identity and subjectivity (a term related to qualitative consciousness, specifically its necessarily private nature) are constructed through social relationships and cannot be reduced to individual neural circuits [7]. Likewise, neuroscientist Patrick Wall admonished veterinarians to avoid using human conceptions of pain to describe neural events in non-human animals. Dr. Wall suggested that such a practice was purely culturally driven, rather than based on scientific evidence, and that non-human animals should be dealt with using values derived from their own reality, not by imposing human values onto them [8]. We might imagine that if the concept of consciousness developed to describe something about humans (their moral status, their ability to socially interact or have intentions, etc.), then using consciousness as the measuring stick to determine moral status in non-humans is effectively saying, "My compassion towards you is proportional to your resemblance to me."





What are we left with if we avoid using consciousness to determine the value of neural correlates of pain and suffering (such as the fMRI signal discussed earlier)? Should we still feel bad about the uncle we abandoned after the first paragraph? I think the answer is still yes, but for different reasons than those initially given. I think that the neural correlates of suffering that are now accessible through fMRI [9] and similar devices don't provide direct insight into a hidden world of subjective experience, so much as they provide novel channels of communication with otherwise isolated individuals [10]. Thus, these patients can now voice their suffering and participate in an ethically charged interaction - an interaction that is similar to describing their suffering verbally, though now mediated through complicated machinery that they do not control. Likewise, a neural correlate of (human) suffering in non-human animals needs to be viewed as, at most, a cry for help rather than a private hell. It should be understood that different animals will likely cry for help in totally different ways (and thus have totally different neural correlates of suffering-like states). Using this perspective, neuroscience is simply one more tool to enable ethically charged interactions to occur, rather than a final statement about the reality underlying all ethical interactions.








References:



[1] Ryder, Richard. "All beings that feel pain deserve human rights." The Guardian, 6 August 2005.


[2] Farisco, Michele. "The ethical pain." Neuroethics (2011): 1-12.


[3] Demertzi, Athina, et al. "Pain perception in disorders of consciousness: neuroscience, clinical care, and ethics in dialogue." Neuroethics (2012): 1-14.


[4] Shriver, Adam. "Knocking out pain in livestock: Can technology succeed where morality has stalled?" Neuroethics 2.3 (2009): 115-124.


[5] Tononi, Giulio. "An information integration theory of consciousness." BMC Neuroscience 5.1 (2004): 42.

[6] Pinker, Steven. "The mystery of consciousness." Time Magazine 29 (2007): 55-70.


[7] Gillett, Grant R. "The subjective brain, identity, and neuroethics." The American Journal of Bioethics 9.9 (2009): 5-13.


[8] Wall, Patrick D. "Defining pain in animals." Animal pain. New York: Churchill-Livingstone Inc (1992): 63-79.

[9] Wager, Tor D., et al. "An fMRI-based neurologic signature of physical pain." New England Journal of Medicine 368.15 (2013): 1388-1397.

[10] This view is also very similar to one discussed here: Levy, Neil, and Julian Savulescu. "Moral significance of phenomenal consciousness." Progress In Brain Research 177 (2009): 361-370.





Want to cite this post?



Zeller-Townson, RT. (2013). Consciousness and Ethical Pain. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/09/consciousness-and-ethical-pain.html

Tuesday, October 1, 2013

2013 Neuroethics Scholar, Jen Sarrett: Autism and the Communication of 'Risk'


By Jennifer C. Sarrett



This project is done through The Neuroethics Scholar Program.






As defined in the new Diagnostic and Statistical Manual of Mental Disorders (DSM-5), Autism Spectrum Disorder (ASD) is diagnosed in individuals who show differences in social communication—such as a reliance on non-verbal communication techniques or difficulties interpreting social signals—and specific behavioral patterns—such as repetitive vocal or motor behaviors or intense interests in specific items. These characteristics must be evident before the age of 3 in order to qualify for an ASD diagnosis [1]. While there are diagnostic tools that can reliably diagnose ASD by the age of 2, most reports show that, on average, children are not diagnosed until school age. These rates vary by several factors, including race and urbanicity [2]. For many professionals, this delay in diagnosis is concerning because, as the Centers for Disease Control and Prevention [3] and most other autism professionals stress, early identification and diagnosis leads to early intervention, which leads to better outcomes for many children and families.





Finding innovative ways to identify autism earlier in life is a goal for many researchers, including a group of professionals at the Marcus Autism Center [4]. Marcus is a well-known research and clinical facility in Atlanta that is affiliated with Emory University, Children’s Healthcare of Atlanta, and Autism Speaks. In 2010, renowned autism researcher Dr. Ami Klin became the new director of Marcus and brought much of his research and clinical team from the Yale Child Study Center along with him. This team has been working on cutting-edge technologies to identify autism-associated traits in infants and toddlers. In particular, they are using eye tracking technologies and vocalization software because, as their previous research has shown, very young children who are eventually diagnosed with autism show differences in their looking and vocalization patterns.






The Marcus Autism Center, Atlanta, GA

Specifically, this research shows that children, adolescents, and adults who prefer to look at objects, rather than people, and at mouths, rather than eyes, in social situations are more likely to be on the autism spectrum than those with the opposite pattern. These patterns in looking are associated with vocalization patterns. Previous research has shown that autistic children prefer to look at items showing physical and acoustic synchronicity. This means that when looking at the faces of people who are talking, autistic children prefer to look at the mouth, probably because mouths, unlike eyes, move in time with verbal sounds. Mouths, however, provide much less social-communicative information than eyes and so may influence a child’s social-communicative development. This different way of learning language (that is, through physical rather than social contingencies) may contribute to the delay in verbal language development often seen in autistic children [5]. The researchers and clinicians at the Marcus Center are interested in correlating eye tracking and vocalization measures over the first few years of life to determine whether there is a strong correlation between infant and toddler looking and vocalization patterns and later autism diagnoses. With this information, it is hoped that early screening tools can be developed and widely used to identify, as early as possible, the likelihood that a child will later be diagnosed with autism.
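As a purely illustrative sketch of the kind of analysis hinted at above (not the Marcus Center's actual methods or data), the snippet below summarizes each toddler's eye-tracking record as the proportion of looking time spent on the eyes and then computes a simple correlation between that proportion and a later diagnostic outcome. Every number in it is invented for illustration.

```python
import numpy as np

# Invented data: proportion of looking time on the eyes (vs. mouth/objects)
# for eight toddlers, and whether each later received an ASD diagnosis (1/0).
eye_looking  = np.array([0.62, 0.55, 0.48, 0.35, 0.30, 0.58, 0.27, 0.44])
later_asd_dx = np.array([0,    0,    0,    1,    1,    0,    1,    0])

# Point-biserial correlation: Pearson r between a continuous measure and a
# binary outcome. A strongly negative r in this toy sample would mean that
# less eye-looking goes with a later diagnosis.
r = np.corrcoef(eye_looking, later_asd_dx)[0, 1]
print(f"correlation between eye-looking and later diagnosis: r = {r:.2f}")
```

A real screening tool would of course combine many such measures, collected longitudinally, and validate them against clinical diagnosis in much larger samples; the sketch only shows the basic shape of the correlational question.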







As the 2013-2014 Neuroethics Scholar, I will be working closely with both the Marcus Autism Center and Center for Ethics to explore ways to communicate autism ‘risk’ to caretakers if and when this technology becomes widely used. To do so, I will be observing clinical assessments and practices, learning about the eye tracking and vocalization technologies, and meeting regularly with professionals at the Marcus Autism Center. The outcome will be a set of recommendations on how to communicate to caregivers that their child may later be diagnosed with autism in a way that is accurate, realistic, and respectful to families and to the wider autistic community. 




One place I plan to start is evaluating the usefulness of the word ‘risk,’ a value-laden identifier of the possibility of a later diagnosis. In the words of Yudell et al.:





"While value judgments implicit in risk communication may sometimes be uncontroversial (e.g. that developing cancer is a harm), in other cases, they may be both rationally contestable and politically controversial. As regards to ASDs, there is a need for more nuanced discussions in the sciences as to whether “risk” is an appropriate descriptor for all ASD-related phenotypes. Arguably, the most important value judgement presupposed by risk communication for ASDs is that ASDs are harmful outcomes to be avoided" (2012, p 5)6.





There is a relatively small subset of individuals who have significant autistic characteristics that make a high quality of life very difficult for themselves and their families. Another, somewhat larger, subset of the autistic population consists of individuals whose characteristics are less significant and who may find their autistic traits useful and valuable [7]. As of now, there is no reliable way to determine which infants, toddlers, or children will end up in which group. Thus, the word ‘risk’ is an inappropriately fearful term to use when discussing the prognosis of an infant based on early visual or vocalization patterns. Accordingly, I will be recommending the use of the phrase ‘autism possibility’ or ‘likelihood of autism.’





The recommendations I will make, including the abandonment of the word ‘risk,’ are somewhat influenced by the concept of neurodiversity. This is the advocacy position which holds that autism and related conditions are just part of the range of outcomes of human neurological development. Just as the concept of biodiversity holds that we need to maintain all species for global balance, neurodiversity holds that people with autism and other neurological differences should exist alongside neurotypical individuals. To maintain this balance, autistic individuals should be neither cured nor normalized. I will describe this position in more depth in my next blog post, but it is important to emphasize here that neurodiversity works towards autism acceptance, rather than autism awareness. As such, using a word like ‘risk,’ which is suggestive of highly undesirable outcomes, is inappropriate, because many autistic adults greatly value their autistic characteristics and many families find living with autism a rewarding and comfortable experience.








In addition to the larger project of the communication of autism possibility, I will also be in contact with the individuals at Marcus who are responsible for redesigning the resource room. This room is available to parents and professionals and contains a wide variety of information about ASDs. For this smaller project, I will be proposing several materials that provide a neurodiverse perspective to be included in the new resource room. While I do not believe it is appropriate to force people to hear about and rely on this perspective, I do believe it is ethical to provide it as an alternative narrative of what autism is and means. The choice to adhere to it in their daily lives remains a personal decision. 




This project will provide an important base for future projects on the ways we talk about and represent autism to providers and families. It is unproductive to begin communication about autism with a veneer of fear and hopelessness. Instead, providing factual and realistic information in clear and palatable formats is paramount to empowering families to make well-informed decisions about their child’s life and can lead to better family-provider relationships. 





Jennifer C. Sarrett

References





[1] American Psychiatric Association. (2013). Diagnostic and Statistical Manual of Mental Disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.



[2] Daniels, Amy M & Mandell, David S. (2013). Explaining differences in age at autism spectrum disorder diagnosis: A critical review. Autism, doi: 10.1177/1362361313480277.



[3] See the “Learn the Signs. Act Early” campaign by the CDC here: http://www.cdc.gov/ncbddd/actearly/index.html.



[4] See their website here: http://www.marcus.org



[5] Jones, Warren, Carr, Katelin, & Klin, Ami. (2008). Absence of preferential looking to the eyes of approaching adults predicts level of social disability in 2-year-old toddlers. Arch Gen Psychiatry, 65(8): 946; Klin, Ami & Jones, Warren. (2008). Altered face scanning and impaired recognition of biological motion in a 15-month-old infant with autism. Developmental Science, 11(1): 40-46; Klin, Ami, Jones, Warren, Schultz, Robert, Volkmar, Fred, & Cohen, Donald. (2002). Visual fixation patterns during viewing of naturalistic social situations as predictors of social competence in individuals with autism. Archives of General Psychiatry, 59: 809-814; Klin, Ami, Lin, David J., Gorrindo, Phillip, Ramsay, Gordon, & Jones, Warren. (2009). Two-year-olds with autism orient to non-social contingencies rather than biological motion. Nature, 459(14): 257-263; Rice, Katherine, Moriuchi, Jennifer M., Jones, Warren, & Klin, Ami. (2012). Parsing heterogeneity in Autism Spectrum Disorders: Visual scanning of dynamic social scenes in school-aged children. Journal of the American Academy of Child & Adolescent Psychiatry, 51(3): 238-248; Shultz, Sarah, Klin, Ami, & Jones, Warren (2011). Inhibition of eye blinking reveals subjective perceptions of stimulus salience. PNAS, 108(52): 21270-21275.



[6] Yudell, Michael, Tabor, Holly K, Dawson, Geraldine, Rossi, John, Newschaffer, Craig, & Working Group in Autism Risk Communication and Ethics. (2012). Priorities for autism spectrum disorders risk communication and ethics. Autism. doi: 10.1177/1362361312453511.



[7] Klein, S.D., and K. Schive (eds.). (2001). You will dream new dreams. Inspiring personal stories by parents of children with disabilities. New York: Kensington.





 

Want to cite this post?



Sarrett, J. (2013). Autism and the Communication of 'Risk'. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/09/autism-and-communication-of-risk.html