
Tuesday, January 29, 2013

Pain in a Vat

Previously on this blog I've discussed the case of cultures of living rat neurons, removed from their natural environment (the inside of the skull of a rat), and grown on top of an electrical interface that allows the neurons to communicate with robotic systems - effectively, we remove part of the rat's brain, and then give this reprocessed bit of brain a new, robotic body.  One of the stranger issues that pops up with this system is that it is extraordinarily easy to 'switch' between bodies in this situation. [1] For instance, I could easily write a computer program that creates a brief, pleasant sound reminiscent of raindrops every time the culture increases its electrical activity.  Alternatively, the same burst of activity could be used to trigger an emotionless, electronic voice to say “Please help me. I am in pain.”
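Here is a minimal sketch, in Python, of the kind of mapping I have in mind - not the actual software used with these cultures, and with the spike counts faked by a random-number stand-in rather than read from real multi-electrode-array hardware.  The point is simply that the culture's activity is identical in both cases; only the mapping from bursts to outputs - the "body" - gets switched.

```python
# A toy illustration of "switching bodies": the same burst detector drives
# either a pleasant raindrop sound or a synthesized distress message.
# read_spike_count() is a random stand-in for polling a multi-electrode array.

import random
import time

BURST_THRESHOLD = 50  # spikes per 100 ms window; an arbitrary illustrative value


def read_spike_count():
    # Stand-in for the electrode-array readout: most windows are quiet,
    # but the occasional window contains a population burst.
    return random.choice([5, 8, 12, 80])


def play_raindrop_sound():
    print("*gentle raindrop sound*")        # "body" A: a pleasant chime


def speak_distress_message():
    print("Please help me. I am in pain.")  # "body" B: an emotionless voice


def run(body="raindrops", windows=20):
    # The culture's activity is the same either way; only the output mapping
    # (the robotic "body") differs.
    respond = play_raindrop_sound if body == "raindrops" else speak_distress_message
    for _ in range(windows):
        if read_spike_count() > BURST_THRESHOLD:
            respond()
        time.sleep(0.1)


run(body="raindrops")   # same bursts, raindrop "body"
run(body="distress")    # same bursts, distressed "body"
```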






While nociception (the low-level transmission of pain information) and unconscious reactions to pain both occur in the spine and peripheral nervous system, the brain seems to hold the neurons that are responsible for the conscious sensation of pain.  This leads to the interesting suggestion that factory farmed chickens should be grown without their brains to prevent all that unnecessary suffering from occurring.  (And if you remove the feet, the chickens are stackable!) Image from here

Is it possible for a neural culture to feel pain?  This is admittedly an absurd suggestion.  Starting from common sense and our normal range of experience, if we see a small motionless, barely visible sliver of brain tissue sitting in a Petri dish, we have no reason to believe that we should feel sorry for it.  Even if we see that this sliver of brain tissue is actually quite active, generating a variety of patterns of electrical activity, such activity might seem so alien to us as to not be worth our attention, much less our sympathy.  But as absurd as pain in a Petri dish might sound, we do in some sense have a duty to explore the idea.  Pain and suffering are key moral issues in the treatment of biological systems.  Animal liberation ethicist Richard Ryder goes as far as to say that being able to experience pain is the only requirement for having rights [2], as pain is the only true evil.  If we are to consistently value the absence of pain, no matter who or what experiences it, do we need more comprehensive regulations that cover tissues as well as “full” animals? [3]



To begin with, let's be careful about what we mean when we use the word “pain.” [4]  Neuroscientists have long broken pain itself down into the “sensory” component (the location of the pain) and the “affective” component (the emotional, “unpleasant” side of pain).  The “affective” component is commonly held to be the “morally relevant” one. [6]  It is interesting to note that these two aspects of pain seem to be somewhat distinct at a neural level - for instance, morphine and endorphins both selectively inhibit the activity of structures that seem to underlie the affective component [7], and humans with damage to different brain regions can report either a loss of the sensory component (lesions of the somatosensory cortex [8]) or a loss of the affective component (lesions of the anterior cingulate cortex). [9] So in principle, it might be possible to find the "affective pain circuit" in the brain, perhaps the Anterior Cingulate Cortex (ACC), and remove it from an animal.  Now we have an animal that can't suffer - but what about the tissue we removed (assuming it wasn't destroyed in the process)?  Is it still suffering? [10] Or does it need to be connected to the rest of the brain for that “suffering” to mean anything?






Electrodes placed in the Anterior Cingulate Cortex (ACC) of a patient suffering from chronic pain.  The electrodes were used to selectively lesion the ACC.  If the ACC had somehow been carefully removed, would we have had a moral obligation to prevent it from feeling pain?  From [9].

So if some sort of "affective pain circuit" were isolated [11], how would we determine if it was in pain or not?  Neural culture poses an interesting problem here.  The methods that have been used to suggest that other living systems feel pain, such as humans in a vegetative state, or non-human animals, don't work for neural culture.  The prime method for determining if a living creature is suffering from pain is to examine their behavior, and look for things like avoidance, emotional displays, or learning to associate neutral stimuli with pain. [12]  However, I've already pointed out how problematic the notion of “behavior” is for neural culture.  A second strategy might be to examine neural activity directly, and correlate that with the activity of “full” animals - however, there is disagreement over whether ACC activity means the same thing in different animals [15], so using that method to evaluate the significance of an ACC that was completely removed is even more problematic.



Without behavior or a known set of subjective correlates to use to determine if neural culture is suffering, we are left to mathematical and philosophical tools.  One such tool is Giulio Tononi's Integrated Information Theory (IIT) of consciousness. [16] For our purposes, the important parts of this theory are that the pattern of connections (not just the size, or the number of connections) in the network plays a big part in determining how conscious it is (called the “Phi” value), and that a subjective state is defined by its relationship to all other possible subjective states.  So, pain would be defined by not being pleasure, and by all of the thoughts and actions and desires that pain can cause, or be caused by. [17] From this perspective, even a cultured slice of ACC wouldn't necessarily be experiencing affective pain when removed from an animal, as its electrical activity wouldn't behave in the same way.  Additionally, as the network would be much smaller than the brain it was previously part of, the “Phi” value would be lower (and it could be argued that any affect that it did have was “less rich” or “less meaningful”).
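For readers curious about how IIT actually puts a number on "integration," here is a rough sketch of the quantities involved, following Tononi's original 2004 formulation (simplified here, and note that the theory has been revised substantially since then):

$$\mathrm{EI}(A \rightarrow B) = \mathrm{MI}(A^{H^{\max}};\, B), \qquad \mathrm{EI}(A \rightleftarrows B) = \mathrm{EI}(A \rightarrow B) + \mathrm{EI}(B \rightarrow A)$$

$$\Phi(S) = \mathrm{EI}\big(\mathrm{MIB}(S)\big), \qquad \mathrm{MIB}(S) = \arg\min_{\{A,B\}} \; \frac{\mathrm{EI}(A \rightleftarrows B)}{\min\{H^{\max}(A),\, H^{\max}(B)\}}$$

In words: the effective information EI across a bipartition {A, B} of a system is the mutual information between one half (driven with maximally entropic activity) and the other half, and Phi is the effective information across the "minimum information bipartition," the cut that divides the system most cheaply.  A small slice with relatively few or stereotyped interconnections would therefore tend toward a low Phi no matter how electrically active it is - which is the intuition behind calling its affect "less rich." [16]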



However, note that even neural cultures are currently far too complex and difficult to measure to accurately estimate their Phi values, much less to understand the structure of their "qualia space."  Despite this, the promise of theories like IIT (or future developments within that theory) points to a role that neural culture might play in the development of tools to scientifically evaluate consciousness.  With neural culture, we are forced to use theories that tie network structure directly to conscious experience, rather than "surface level" features like behavior.  And despite current limitations, it is significantly easier to access the sorts of data required to use structural theories in culture than it is in "full" organisms.  This accessibility means that as both neurotechnologies (such as electrophysiology tools) and philosophical/mathematical tools (such as IIT) develop, neural culture will likely be one of the first places where theory can meet experiment to provide a rich understanding of what generates subjective experience.







Want to cite this post?

Zeller-Townson, RT. (2013). Pain in a Vat. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/01/pain-in-vat.html








References:

[1] Note that over time the culture could potentially learn the differences between these two systems, but in the moment immediately after the switch it would be very difficult to tell the difference.

[2] Note that Ryder's views are controversial, however, even within the animal liberation community.

[3] God, I hope not.  Dealing with IACUC is bad enough as it is.

[4] It is also important to note that pain and suffering are usually not equated.  David DeGrazia and Andrew Rowan defined [5] both pain and suffering as 'inherently unpleasant sensations' - that is, they are both feelings (rather than physical events), and are in part defined by their unpleasantness.  DeGrazia and Rowan differentiate pain from suffering by specifying that pain is sensed to be local to a specific body part, whereas suffering is not.  By specifying pain as an experience, DeGrazia and Rowan distance themselves from some of the neuroscience literature which at times interchangeably speaks of pain and nociception, the physical process by which painful stimuli are relayed to the brain.

[5] DeGrazia, David, and Andrew Rowan. "Pain, suffering, and anxiety in animals and humans." Theoretical Medicine and Bioethics 12.3 (1991): 193-211.

[6]Shriver, Adam. "Knocking out pain in livestock: Can technology succeed where morality has stalled?" Neuroethics 2.3 (2009): 115-124.

[7] Jones, Anthony K., Karl Friston, and Richard S. Frackowiak. "Localization of responses to pain in human cerebral cortex." (1992).

[8] Ploner, M., H-J. Freund, and A. Schnitzler. "Pain affect without pain sensation in a patient with a postcentral lesion." Pain 81.1 (1999): 211-214.

[9]  Foltz, EL and White, LE, Pain 'relief' by frontal cingulumotomy, J. Neurosurg., 19 (1962) 89-100

[10] As I'm implying that the Anterior Cingulate Cortex would be the region removed, I should be clear that I don't mean to say that this is all the ACC does.  The ACC is a pretty complicated beast, and has been implicated as playing a role in decision making, the evaluation of errors, tasks that require effort, as well as processing of empathy and emotion.

[11] Currently, the closest experimental preparation to this would be to culture a thin slice of tissue taken from the ACC.  This is often done to investigate how the ACC differs from other regions of the cerebral cortex, including how these differences could lead to new drugs that decrease the affective component of pain.  While the ACC is the neural tissue that we might most suspect a priori to be suffering, in theory it could be possible to grow (whether by accident or design) neural circuits from scratch (by first breaking down the connections between the neurons, and then allowing them to re-grow, a process called dissociation) that in some way replicate the "suffering" experience of the in vivo ACC.  The following discussion applies equally to ACC slices and dissociated culture.

[12] While it is easy to imagine all of these behaviors being performed by an unfeeling robot that was attempting to trick us into feeling sorry for it, it is interesting to note how much that last item (learning) is suggestive of a subjective negative experience.  The ACC, as mentioned, is used for several things beyond just feeling bad - it also appears to be used for turning that bad feeling into a learning experience, where the ACC-equipped neural system learns to avoid whatever caused that painful experience in the first place. [13]  Thus, animals that can learn from pain (a category which was recently found to include crabs [14]) might be equipped with other ACC-associated properties, like suffering.  I'm curious how tricky it would be to argue that the subjective experience of suffering is the mental correlate of high-level avoidance learning - implying that if one learns to avoid abstract entities through association with nociception, one is suffering.  This view would further imply that temporary suffering is natural and even necessary for life, and that morally relevant suffering is effectively attempting to avoid something that cannot be avoided.

[13] Johansen, Joshua P., Howard L. Fields, and Barton H. Manning. "The affective component of pain in rodents: direct evidence for a contribution of the anterior cingulate cortex." Proceedings of the National Academy of Sciences 98.14 (2001): 8077-8082.

[14] Magee, Barry, and Robert W. Elwood. "Shock avoidance by discrimination learning in the shore crab (Carcinus maenas) is consistent with a key criterion for pain." The Journal of Experimental Biology 216.3 (2013): 353-358.

[15] Farah, Martha J. "Neuroethics and the problem of other minds: implications of neuroscience for the moral status of brain-damaged patients and nonhuman animals." Neuroethics 1.1 (2008): 9-18.

[16] Tononi, Giulio. "An information integration theory of consciousness." BMC neuroscience 5.1 (2004): 42.

[17] IIT also gives us a framework to tackle the question of why neural tissue should be so privileged to consciousness - what about other biological networks, like the immune system?  What about non-biological networks, like simulated neural networks or even the internet?  IIT says that the extent of consciousness is determined by the variety of possible states the system can be in, as well as how much the subcompartments of the network communicate with each other.  Thus, in principle any well connected network could be conscious, but some neural systems seem to be optimized for high levels of consciousness.


Tuesday, January 22, 2013

Judging brains with preclinical disease

By Guest Contributor, Jagan Pillai, MD, PhD



Dr. Jagan Pillai is a neurologist at the Lou Ruvo Center for Brain Health, Cleveland Clinic and works to help people with cognitive changes from neurological disorders and to develop diagnostic and treatment strategies in neurodegenerative diseases. He trained as a medical doctor at the University of Kerala, Trivandrum, India. He obtained a PhD from Northwestern University. He trained in Neurology at the Albert Einstein College of Medicine and at the University of California San Diego.






As a neurologist interested in neurodegenerative disorders, I met Phil and a few others with preclinical Huntington’s disease (HD) on a trip to Phoenix, AZ, to hear their perspectives. Phil is a self-appointed counselor, caretaker, and community leader of PHDs. He chuckles as he credits his accomplishments to having been born a PHD (in his lingo, Person with Huntington’s disease). HD is a neurodegenerative disorder caused by an expanded number of CAG triplet repeats in the gene encoding the protein huntingtin. Prevalence is about 4–10 per 100,000 people in the West (1). HD is clinically characterized by motor dysfunction, cognitive decline, and psychiatric disturbance. The age of onset roughly correlates inversely with the number of CAG repeats inherited, with a mean age of onset of 40 years (2).








in Huntington's Disease, image width 250 µm



The clinical diagnosis of HD is based on characteristic motor signs in a person with a positive family history and is confirmed by genetic testing for HD. However, there is increasing recognition that disease onset may occur many years before clinical diagnosis. Subtle cognitive, motor, and behavioral symptoms and the underlying neuropathological changes can now be detected by neuroimaging and neuropsychological tools among individuals who otherwise appear healthy. These people at risk of developing clinical HD are classified as preclinical, presymptomatic, or preHD (3). We can detect changes in neural connectivity, degeneration of brain regions, and behavioral changes, including impaired control of affect following disruption of emotion-processing circuits (4,5,6).



For ‘at risk’ people like Phil, ascribing some of the difficult moments in their lives to inherent propensities of HD, even before clinical symptoms are noticed by people around them, has pressing implications. Before preHD was recognized as a distinct stage, its symptoms tended to be brushed aside in the tumble of life because they were often vague. But many preHD symptoms - problems with attention, depression, and irritability - do influence social functioning. It is difficult for patients and physicians to ascribe meaningful relevance to seemingly nonspecific symptoms unless they are tied to a diagnosis of preHD. Some people with preHD are prematurely judged liable to slip up in certain situations, and may even lose jobs, despite not yet having made anything resembling an embarrassing blunder (7); patients themselves may be fearful of exactly that. Others may have had a long, frustrating search for answers about the changes they are experiencing, and the diagnosis of preHD finally sheds some light on their circumstances.



This profound ability to know a tangled mind not by a warm empathetic embrace but by a diagnostic leap, and the moral challenges involved, are dramatically portrayed in Ian McEwan’s neuro-lit novel Saturday. Here he describes the internal monologue of Perowne, a neurosurgeon. In the course of the drama, lived out on a single Saturday, an assailant mugs Perowne (‘Life is a wretched gray Saturday, but it has to be lived through,’ Anthony Burgess). But the surgeon recognizes what seem to be early symptoms of HD in his assailant. He intellectually bridges a gap to reach out to his assailant - a gap that might be unbridgeable for even the most empathetic among us at a time of startling violent injury - and is able to provide the assailant with some level of insight into his preHD condition. Is real life close to such fictional scenarios?




Among the legal cases littering the country’s courtrooms, moral expectations are being challenged by our increasing ability to peer into brains, and not in HD alone. How did the wife react when she found out her husband with preHD had just attempted to sexually abuse their 9-year-old daughter? Did it matter that the husband had soon felt remorse and was the one who confessed to her in tears? Or how is a woman to be judged who delivered a baby girl in a bathroom at a homeless shelter, the submerged child pulled from a toilet by police minutes later? Would any of the victims themselves be willing to take a different moral ground in ascribing eventual culpability to proclivities coming seemingly unbidden from deranged neural clusters, entirely outside the person’s earlier self?




In some cases people ‘at risk’ for such Promethean tragedies did not know they were at risk for making mistakes of judgment as they had not been diagnosed with a disease yet. Some knew of their diagnosis but were not aware of the specifics of the risk involved. The rest stumbled even after they knew where they stood with respect to the implications of a diagnosis, as they felt swept away in a perfect storm of situations overwhelming their arguably dented defenses. The latter is how Phil sees it. Are these startling cases different from a more common event, like being arraigned for issues related to alcoholism? Here a familial predisposition to alcoholism exists and has been used successfully to plead for a lighter sentence (8).








Schematic for preclinical and clinical stages of neurodegenerative diseases




Does self-knowledge from better acquaintance with your neural apparatus have a redemptive value, i.e., does it enable you to overcome your neural limitations and live a more valued and valuable life? An answer to this key question would influence everyday decisions, and one is likely to emerge in the near future. The popularity of the neuroplasticity literature points to our insistent hopes in this regard (9). We have a glimmer of an answer from people ‘at risk’ who are facing many of these pressing questions in their lives. For individuals with a family history of HD, learning their neurological status is often seen to increase psychological distress, and many decide to forgo a definitive diagnosis. My clinical impression is that HD patients often consciously avoid disease-related situations and thoughts. Are these responses related to our present lack of definitive treatments?




How does the lack of curative treatments impact our moral imperative to alert patients to their preHD status and the pending worsening of their symptoms? Even though making an early diagnosis of a disease that lacks curative treatment raises questions of moral imperative and appears problematic, these diagnoses are often deeply meaningful for individuals and their families. A diagnosis often brings a sense of closure to families and helps them rally around the people who need more care and attention. It also enables families to make effective concrete plans for the future, whether financial or about choosing a place to live. But can knowledge of their own nature help preHD subjects avoid the situational mistakes they are fearful of? How is one to know whether a lapse of judgment is situational and can be corrected, versus a more permanent change to the self, once one knows one's own disease state? Can specific cognitive deficits be improved through practice? These questions have yet to be looked into carefully, and they could be relevant to the social judgments rendered. There is growing interest in defining pre-disease stages in other neurological conditions, including Alzheimer’s disease, where many of these questions again become relevant.




Issues related to the pre-disease state are now at the forefront due to our ability to push the diagnostic threshold to earlier stages of disease and to new insights into the nature of the brain changes caused by disease pathology. But it is also increasingly recognized that there is significant heterogeneity in the time course and clinical presentation of these neurodegenerative conditions, due to multiple interacting factors. Because of our limited understanding at present, we lump everyone with a disease marker into the same category, but often fail to recognize the implications of their inherent differences. Among the challenges pre-disease diagnoses will have to address in the future is how to tailor individual treatments (and moral judgments) to differences in pathology burden, rate of disease progression, and the areas of the brain affected, rather than rendering judgments based on a single genetic or pathological marker of a disease state.




We may not stop to think about how we too are at risk and vulnerable to our neural propensities, even beyond neurodegeneration. Awareness of novel neural-level differences could increasingly force us to take stock of how fine-grained an appraisal an individual can be put through to differentiate qualities like interest, skill, motive, memory, and judgment before taking on a job or standing trial. The problems the ‘at risk’ groups are already grappling with are in some ways precursors to issues we as a culture might be tackling in the near future.




--Jagan Pillai








Want to cite this post?


Pillai, J. (2013). Judging brains with preclinical disease. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/01/judging-brains-with-preclinical-disease.html





References




1. Hoppitt T, Pall H, Calvert M, Gill P, Yao G, Ramsay J, James G, Conduit J, Sackley C. A systematic review of the incidence and prevalence of long-term neurological conditions in the UK. Neuroepidemiology. 2011;36(1):19-28.




2. International Huntington's Disease Collaborative Group, Langbehn DR, Brinkman RR, Falush D, Paulsen JS, Hayden MR. (2004). "A new model for prediction of the age of onset and penetrance for Huntington's disease based on CAG length." Clin Genet 65:267-277.




3. Tabrizi SJ, Scahill RI, Durr A, Roos RA, Leavitt BR, Jones R, Landwehrmeyer GB, Fox NC, Johnson H, Hicks SL, Kennard C, Craufurd D, Frost C, Langbehn DR, Reilmann R, Stout JC; TRACK-HD Investigators. Biological and clinical changes in premanifest and early stage Huntington's disease in the TRACK-HD study: the 12-month longitudinal analysis. Lancet Neurol. 2011 Jan;10(1):31-42.




4. Lawrence AD, Hodges JR, Rosser AE, Kershaw A, ffrench-Constant C, Rubinsztein DC, Robbins TW, Sahakian BJ. Evidence for specific cognitive deficits in preclinical Huntington's disease. Brain. 1998 Jul;121 ( Pt 7):1329-41.




5. Klöppel S, Stonnington CM, Petrovic P, Mobbs D, Tüscher O, Craufurd D, Tabrizi SJ, Frackowiak RS. Irritability in pre-clinical Huntington's disease. Neuropsychologia. 2010 Jan;48(2):549-57.




6. Julien CL, Thompson JC, Wild S, Yardumian P, Snowden JS, Turner G, Craufurd D. Psychiatric disorders in preclinical Huntington's disease. J Neurol Neurosurg Psychiatry. 2007 Sep;78(9):939-43.




7. Lenox, Genetic Discrimination in Insurance and Employment: Spoiled Fruits of the Human Genome Project, 23 U. Dayton L. Rev. 189, 190 (1997).




8. Andrews, Body Science, 83 ABA Journal 44, 49 (1997).




9. Doidge, Norman. The Brain That Changes Itself: Stories of Personal Triumph from the Frontiers of Brain Science. Penguin (Non-Classics); 1 Reprint edition (December 18, 2007).



Wednesday, January 16, 2013

The Violence of Assumed Violence: A Reflection on Reports of Adam Lanza’s Possible Autism

By Guest Contributor Jennifer C. Sarrett, MEd, MA
Doctoral Candidate, Graduate Institute of Liberal Arts
Emory University




On Friday, December 14th 2012, the country learned of the mass shooting of 5- and 6-year-old children and several adults in Newtown, CT. By the end of the day, we learned that Adam Lanza, the perpetrator of the heinous act, may be autistic. Although we now know that this is not the case, it has spurred conversations about the link between autism and violence. This mental illness guessing-game has become the norm in the wake of such tragedies. Jared Loughner and James Holmes may have been schizophrenic; Seung-Hui Cho may have been depressed, anxious, and also possibly autistic; Eric Harris and Dylan Klebold may have been depressed and/or psychopathic. These speculations are understandable – the public yearns to understand the motives behind such acts and recognizes that good mental health and mass shootings are never coupled – however, the way these representations are presented to the community creates stigma and blames others with similar disabilities.






Adam Lanza

In Media Madness: Public Images of Mental Illness, psychologist Otto Wahl explains that the public does not get its information about mental illness from evidence-based, professional sources; rather, “[i]t is far more likely that the public’s knowledge of mental illness comes from sources closer to home, sources to which we are all exposed on a daily basis–namely, the mass media.” [1] The media (i.e. news, television, movies, video games, popular literature) often provides these links casually but carefully. Reports may mention Adam Lanza had autism, but don’t make the causal link between this diagnosis and his crimes. Yet in the minds of readers, the association is made.



The link between mental illness and violence has a long history. [2] In addition to the news, popular movies and TV shows contribute by featuring violent characters with a history of mental illness. Films such as Psycho, the Halloween series, Misery, Silence of the Lambs, and Natural Born Killers center on violent characters with some kind of mental illness. [3-7] The Law & Order and Criminal Minds series both frequently implicate a mentally ill person in some violent and incomprehensible crime. [8-10] The majority of these representations are of individuals with some sort of undefined psychosis; however, as the country wonders over Adam Lanza’s possible autism diagnosis, it is fair to expect autism as the next violent scapegoat.



Autistic individuals are represented in the media as aloof, shy, detached, rigid, and unpredictable – a seeming recipe for apathy. [11] The first popular portrayal of autism in America, Rain Man (1988), features Raymond Babbitt (Dustin Hoffman), an autistic man who, in one memorable scene, screams and hits his own head at the prospect of getting on an airplane and refuses to connect with his brother, Charlie (Tom Cruise). [12] He is detached and unpredictable. Mercury Rising (1998) features a young autistic boy who does not communicate verbally and who relates more to numbers than to people, a skill that eventually saves his and FBI agent Art Jefferies’ (Bruce Willis) lives. [13]






Charles Babbit getting angry

Beyond these fictional portrayals, autism is usually portrayed one of two ways in popular news stories: (1) stories of amazing abilities or unexpected outcomes – most of which are entirely unremarkable events for non-disabled people (such as this well known story or this recent story), or (2) the immense difficulty of life with autism (with sometimes fatal consequences). While these stories are written with an underlying sense of hope or struggle, they all serve to set up distinct separations in the mind of the public between us (i.e. the non-autistics) and them (i.e. the autistics). [14] The recent reports, however, come dangerously close to creating a new category of representation – the violent autistic.



Though reports that Adam Lanza did not, in fact, have autism or Asperger’s syndrome came out days after the shooting, the damage has been done. [15] Blogs, articles, and op-eds quickly denounced the relationship between autism and violence, but this was simply triage and not nearly as interesting, memorable, or comforting as the original reports of psychiatric difference. What is needed is more responsible reporting of psychiatric and cognitive differences from the very first mention in place of later, cursory amendments.



For example, did any initial reports of Adam Lanza, Seung-Hui Cho, Jared Loughner, James Holmes, or Eric Harris and Dylan Klebold’s possible mental illnesses mention the fact that people with mental illness are much more likely to be the victims, rather than the perpetrators, of violent crime? [16] Does news of a crime perpetrated by a person with a severe mental illness, such as schizophrenia, also report the extremely high levels of comorbidity between mental illness and substance abuse in instances of violent crimes? [17] The lack of a connection between mental illness and violence has been reported by the Surgeon General (in this 1999 report) and the National Institute of Mental Health, who, in this 2006 report, stated that “...the amount of violence committed by people with schizophrenia is small, and only 1 percent of the U.S. population has schizophrenia,” yet “by comparison, about 2 percent of the general population without psychiatric disorder engages in any violent behavior in a one-year period…” [18]



I am not claiming that people who commit atrocities like those at Sandy Hook Elementary are not mentally ill or that their psychiatric state should be ignored. What I am proposing is that when these reports come out, they should come out alongside accurate information about the stated mental illness or disability and its actual relationship to violence. People on the autism spectrum are not and have never been clinically associated with premeditated violence, yet I fear that when Adam Lanza and Seung-Hui Cho are linked with autism, this is an implicit suggestion that autism is the cause of their violent behavior. [19] Yes, autism may be part of the profile of a perpetrator, but it is usually no more a cause of violence than if they had, for instance, been diagnosed with heart disease. When a diagnosis is recklessly implicated in the reporting of violent crimes, the public tends to remember this association much more so than follow-up reports of no diagnosis or actual levels of violence among diagnosed individuals. In respect for the majority of the population of people with psychiatric or cognitive difference, this context needs to be presented at the very beginning.



The WHO reports that stigma is the biggest barrier to overcome for individuals with mental illness, and an association with violence is among the most common and damaging representations. [20] In all the talk of mental health reform, a top priority must be a reduction of the stigma of violence. This stigma leads effortlessly into hate crimes, injustices, and poor care for people with cognitive and psychiatric differences.



I’d like to close with a quote from the Autistic Self Advocacy Network’s statement on the Newtown, CT shootings that highlights the faulty logic and stigma that follows these assumptions: “Should the shooter in today’s shooting prove to in fact be diagnosed on the autism spectrum or with another disability, the millions of Americans with disabilities should be no more implicated in his actions than the non-disabled population is responsible for those of non-disabled shooters.”



To learn more about autism and related issues, I recommend the following resources:



Books

Frith, Uta, ed., Autism and Asperger Syndrome. Cambridge: Cambridge University Press, 1991.



Grandin, Temple. Different...Not Less: Inspiring Stories of Achievement and Successful

Employment, Arlington, TX: Future Horizons, Inc., 2012.



Grinker, Roy R., Unstrange Minds: Remapping the World of Autism. New York: Basic Books, 2008.



Murray, Stuart, Autism. New York: Routledge, 2012.



Offit, Paul. Autism’s False Prophets: Bad Science, Risky Medicine, and The Search for a Cure. New York: Columbia University Press, 2010.



Online



Autistic Self Advocacy Network, http://autisticadvocacy.org/



National Institutes of Health: Neurological Disorders and Stroke, “Autism Fact Sheet,” http://www.ninds.nih.gov/disorders/autism/detail_autism.htm







Want to Cite this Post?



Sarrett, J. (2013). The Violence of Assumed Violence: A Reflection on Reports of Adam Lanza’s Possible Autism. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/01/the-violence-of-assumed-violence_16.html





Author Bio



Jennifer C. Sarrett started working with people on the autism spectrum in 1999 in Athens, GA while getting her B.S. in Psychology. In 2005, she completed her M.Ed. in Early Childhood Special Education with a focus on autism from Vanderbilt University. She is currently a fifth-year doctoral student in Emory University’s Graduate Institute of Liberal Arts working on her dissertation, which compares parental and professional experiences of autism in Atlanta, GA and Kerala, India, as well as the ethical issues that arise when engaging in international, autism-related work.



Jennifer Sarrett is the co-organizer for Critical Juncture–an upcoming conference aimed at exploring the ways multiple social identities (i.e. race, sexuality, disability, gender, ethnicity) influence experiences of difference and inequality in communities, science, medicine, and public spaces. This interdisciplinary conference will be held at Emory University on March 22-23, 2013 and aims to highlight the work of emerging scholars. Please visit the website for additional information, including registration: www.criticaljunctureconference.wordpress.com.
 



References



[1] Wahl, Otto. Media Madness: Public Images of Mental Illness. (New Brunswick: Rutgers University Press, 2005), 2.



[2] JC Phelan & BG Link, “The growing belief that people with  mental illnesses are violent: The role of the dangerousness criterion for civil commitment,” Social Psychiatry and Psychiatric Epidemiology, 33 (1998): S7-S12; Wahl, Media Madness.

 

[3] Psycho, directed by Alfred Hitchcock (1960; Universal City, CA: Universal Pictures, 1998), DVD.



[4] Halloween, directed by John Carpenter (1978; Los Angeles, CA: Compass International Pictures, 1978), video.

[5] Misery, directed by Rob Reiner (1990; Los Angeles, CA: Columbia Pictures, 1990), video.

 

[6] Silence of the Lambs, directed by Jonathan Demme (1991; Los Angeles, CA: Orion Pictures, 1991), video.



[7] Natural Born Killers, directed by Oliver Stone (1994; Los Angeles, CA: Warner Brothers, 1994), video.

 

[8] Law & Order (franchise), created by Dick Wolf (1990; New York, NY: NBC, 1990), television.

 

[9] Criminal Minds, created by Jeff Davis (2005; Los Angeles, CA: The Mark Gordon Company, 2005), television.



[10] Patricia Owens, “Portrayals of schizophrenia by entertainment media: A content analysis of contemporary movies,” Psychiatric Services, 63, no. 7 (2012): 655-659.; Wahl, Media Madness.



[11] There is a conversation over the use of the phrase “autistic person” or “person with autism.” While the latter follows “person first” language, often used in disability advocacy to represent that a person is more important than, and thus comes before, a disability, autistic self-advocates are increasingly promoting the former phrase. Not only is a person’s autism often viewed as an integral component of who they are, but some argue that the phrase “person with autism” seems to suggest the need to remind others that autistic people are, in fact, people. (Steven Kapp, personal communication). I choose which phrase to use based on the representation I am referring to; in other words, I use “autistic person” when advocating and “person with autism” when referring to non-advocacy based positions.



[12] Rain Man, directed by Barry Levinson (1988; Los Angeles, CA: United Artists, 1988), video.



[13] Mercury Rising, directed by Harold Becker (1998; Universal City, CA: Universal Pictures, 1998), video.



[14] For more on this see: Rosemarie Garland-Thomson, “Seeing the disabled: Visual rhetorics of disability in popular photography,” The New Disability History: American Perspectives, ed. Paul K. Longmore and L. Umansky (New York: New York University Press, 2001), 335 and Stuart Murray, Representing Autism: Culture, Narrative, Fascination. (Liverpool: Liverpool University Press, 2008).



[15] Asperger’s syndrome is a diagnosis under the autism spectrum that is characterized primarily by differences in social interaction preferences and styles.



[16] KA Hughes, MA Bellis, L Jones, S Wood, G Bates, L Eckley, E McCoy, C Mikton, T Shakespeare & A Officer, “Prevalence and risk of violence against adults with disabilities: A systematic review and meta-analysis of observational studies,” Lancet, 379, no. 9826 (2012): 1621-9, doi:10.1016/S0140-6736(11)61851-5; Patricia Owens, “Portrayals of schizophrenia by entertainment media: A content analysis of contemporary movies,” Psychiatric Services, 63, no. 7 (2012): 655-659; Katherine Quarmby, Scapegoat: Why We are Failing Disabled People, (London: Portobello Books, 2011); LA Teplin, GM McClelland, KM Abram, et al., “Crime victimization in adults with severe mental illness: Comparison with the National Crime Victimization Survey,” Archives of General Psychiatry, 62 (2005): 911-931.



[17] Seena Fazel, Gautam Gulati, Louise Linsell, John R. Geddes, & Martin Grann, “Schizophrenia and violence: Systematic review and meta-analysis.” PLoS Medicine, 6, no. 8 (2009): e1000120. doi: 10.1371/journal.pmed.1000120; U.S. Department of Health and Human Services. Mental Health: A Report of the Surgeon General. Rockville, MD: U.S. Department of Health and Human Services, Substance Abuse and Mental Health Services Administration, Center for Mental Health Services, National Institutes of Health, National Institute of Mental Health, 1999.



[18] JW Swanson, MS Swartz, RA Van Dorn, EB Elbogen, HR Wagner, RA Rosenheck, TS Stroup, JP McEvoy, JA Lieberman, “A national study of violent behavior in persons with schizophrenia,” Archives of General Psychiatry, 63, no 5 (2006): 490-9; U.S. Department of Health and Human Services, Mental Health.



[19] Judy Endow, “No linkage between autism and planned violence.” Special-Ism. retrieved January 9, 2012 http://special-ism.com/no-linkage-between-autism-and-planned-violence/; Though there is little research on the subject, the following article concludes: “...a potential risk for aggression (but not necessarily for criminal behavior) in this populations and that, when violence does occur, it is often in distinct ways relevant to the symptomatology of HFASDs [High-Functioning Autism Spectrum Disorders]”; Matthew D. Lerner, Omar Sultan Haque, Eli C. Northrop, Lindsay Lawer, & Harold J. Bursztajn, “Emerging perspectives on adolescents and young adults with high-functioning autism spectrum disorders, violence, and criminal law,” American Academy of Psychiatry and the Law, 40, no. 2 (2012): 187.



[20] World Health Organization, “The World Health Report 2001: Mental Health: New Understanding, New Hope.” Geneva: World Health Organization, 2001.


Wednesday, January 9, 2013

Scrap or Save? A Triune Brain Theory Account of Moral Improvement



Last month, I wrote a post called “Uncovering the Neurocognitive Systems for 'Help This Child,'” where I suggested that understanding certain facts about our brains is not enough to get us to do the 'right thing.' I argued that we also have to 'outsmart' our least rational tendencies and get ourselves to apply our knowledge to real-life problems. This month, I want to explore a different aspect of the relationship between knowledge and practical action. I want to ask, 'What happens when researchers ground their work in a controversial scientific framework, but use it to introduce a set of ideas that could make a meaningful contribution?' The case I have in mind is Darcia Narvaez and Jenny Vaydich’s use of Paul D. MacLean’s ‘Triune Brain Theory’ to ground work on emotional and ethical ‘expertise development.’






Jenny Vaydich




Darcia Narvaez





In their paper entitled “Moral development and behavior under the spotlight of the neurobiological sciences,” Narvaez and Vaydich set out to defend the practical applications of neurobiological and neuroscientific research for moral development and education. [1] Reviewing over a hundred studies from across these disciplines, they argue that the neurosciences can contribute to our understanding of moral development in at least three ways: (1) by shedding light on previously not-well-understood factors of moral behavior, (2) by enriching the debate regarding the relative intentionality or automatic/unconscious nature of moral responses, decision-making, and action, and (3) by examining the influence of caregiver behavior and the surrounding environment on the development of healthy brain function or, in other words, “better- or worse-equipped brains” for participating in moral life. [1] Interestingly (and quite uniquely), Narvaez and Vaydich also go on to consider the implications of these findings for the formulation of practical frameworks intended to foster ‘expertise development’ in the domain of moral functioning.



Narvaez and Vaydich propose that neurobiological research findings do indeed confirm the possibility of moral improvement. They argue that since brain structures and functions are malleable, “unless the damage is severe, there is the possibility for change.” [1]  More specifically, they argue that change can be undertaken and actualized with respect to different components of moral function and at any point in human life, although the process becomes increasingly difficult with age. [2] In addition, Narvaez outlines a set of practices, which she calls the ‘Integrative Ethical Education’ model (IEE), that can be applied in classrooms and other learning environments to help foster moral character and functioning. Among other ideas, IEE advocates for the structuring of caring relationships and supportive environments; for the engagement of ethical skills through a novice-to-expert pedagogy; for the fostering of self-authorship; and for the reintroduction of what Narvaez calls an ‘ecological’ system of support, according to which the family and surrounding community coordinate to help advance and guide the learning process. [3] Citing the Minnesota Community Voice and Character Education project as an example, Narvaez and Vaydich argue that these principles could be widely applied in education programs in schools, youth organizations, and other learning-oriented public institutions.






A visual representation of TBT

Problematically, Narvaez and Vaydich’s work rests on Narvaez’s own more extended project, Triune Ethics Theory (TET), which is in turn based on Paul D. MacLean’s Triune Brain Theory. [4]  MacLean’s triune brain theory suggests that the human brain comprises three basic formations, known as the reptilian complex, the paleomammalian complex (corresponding to the limbic system) and the neomammalian complex (corresponding to the neocortex), and proposes that these formations reflect the evolution of reptiles, lower mammals, and late mammals. Citing Panksepp and others [5], Narvaez argues that although the theory is “on its face simplistic in separating brain structures from one another, in fundamental ways animal and human research support MacLean’s basic theory. Accumulating research in affective neuroscience not only confirms the general thrust of MacLean’s Triune Brain Theory, but points out the critical importance of early experience in gene expression in emotional circuitry.” [6]



Unfortunately, as David Nicholson points out in his excellent post, “Snakes On a Brain, or, Why Care About Comparative Neuroanatomy (Vol.1),” Triune Brain Theory is in fact not supported by comparative neuroanatomy. Paul Patton argues that since the early 1980s, evolutionary biologists have learned a great deal about vertebrate evolutionary history, and as a result, he suggests, “it is now apparent that a simple linear hierarchy cannot adequately account for the evolution of brains or of intelligence.” [7] Even more to the point, in Comparative Vertebrate Neuroanatomy, William Hodos maintains that the “extensive body of work in comparative neurobiology over the past three decades unequivocally contradicts this theory.” [8] Or, as David puts it, Triune Brain Theory is just “completely wrong.”



So what does this mean for Narvaez and Vaydich’s efforts to extend neuroscientific research into the practical domain? Do we reject their ideas as unfounded? Do we shun them for even subscribing to Triune Brain Theory? Or do we try to persuade them to change their minds?



I’m wondering whether the old ‘photo-shop’ approach might not be the most efficient solution here: i.e., to put the head of the theory on a different body, and to test it further from there. What if we simply separated Narvaez and Vaydich’s practical applications from their proposed neurobiological foundations, and tried to see if they work on a different set of principles about the brain? That way, if the theoretical transplant proved successful, we could keep their more pragmatic suggestions; if not, we could still throw them out, but we would now have done so with more thorough-going reasons. This strikes me as an economical approach, especially when the ideas in question address a less frequently discussed topic in neuroethics research. And we might even learn a thing or two about salvaging findings more generally. For example, we may want to use a related principle to comb through Marc Hauser’s now-compromised findings, or to carry forward ideas from other neurobiological models that have since fallen short of our scientific standards for understanding.






Can we attach the head of an old theoretical model onto a new body?


But then, I’m a philosophy student and not a neuroscientist. What do you think: does relying on Triune Brain Theory compromise even a plausible, pragmatic idea?





--------------------------

References



[1] Narvaez, D. & Vaydich, J. (2008) Moral development and behavior under the spotlight of the neurobiological sciences. Journal of Moral Education, 37(3), 289-313.

[2] Mahncke, H. W., Bronstone, A. & Merzenich, M. M. (2007) Brain plasticity and functional losses in the aged: scientific bases for a novel intervention (San Francisco, Posit Science Corporation). Available online at: http://www.positscience.com/pdfs/science.

[3] Narvaez, D. (2010). The emotional foundations of high moral intelligence. In B. Latzko & T. Malti (Eds.). Children’s Moral Emotions and Moral Cognition: Developmental and Educational Perspectives, New Directions for Child and Adolescent Development, 129, 77-94. San Francisco: Jossey-Bass.

[4] Maclean, P.D. (1973). A triune concept of the brain and behavior. Toronto: University of Toronto Press.

[5] Panksepp, J. (2007) Neurologizing the psychology of affects: how appraisal-based constructivism and basic emotion theory can coexist, in Perspectives in Psychological Science, 2(3), 281–296.

[6] Narvaez, D. (2009). Triune Ethics Theory and moral personality. In D. Narvaez & D.K. Lapsley (Eds.), Moral Personality, Identity and Character: An Interdisciplinary future. New York: Cambridge University Press, 136-158.

[7] Patton, Paul (2008), One World, Many Minds: Intelligence in the Animal Kingdom, in Scientific American.

[8] Comparative Vertebrate Neuroanatomy: Evolution and Adaptation, Second Edition, Eds. Ann B. Butler and William Hodos, 1996.






Want to Cite This Post?

Haas, J. (2013). Scrap or Save? A Triune Brain Theory Account of Moral Improvement. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/01/scrap-or-save-triune-brain-theory.html




Wednesday, January 2, 2013

Neuroethics Journal Club: Hooked on Vaccines

Imagine a vaccine that causes our immune system to create antibodies against a drug like cocaine. After being vaccinated, we could snort cocaine and the antibodies would sequester the drug before it could reach our brain. A recent article in Nature Immunology’s Commentary section, “Immune to Addiction”, considers the ethical implications of such vaccines. We discussed the article at December's meeting of the Neuroethics Journal Club, led by Emory Neuroscience graduate student, Jordan Kohn.








Jordan Kohn: his provocative Powerpoint for journal club momentarily made me think I was an anti-vaxxer

Here's what I took away from this month's meeting: we need to integrate the different ways we study addiction. You might wonder what the ways we study substance abuse have to do with a vaccine against it. As the authors of the article say, "How substance dependence is characterized and classified informs the appropriateness of strategies aimed" at preventing or treating it. In other words, we can’t make clear ethical decisions about how to deal with substance abuse until we agree on what it is.



I know I'm not an expert on addiction, but it seems to me that even the experts don't agree on the definition of what they're studying. That’s why I think addiction researchers need to do even more than they’re doing now to relate brain and behavior. I might be saying something that's obvious to people in the field, but the discussion at the journal club made me think what I'm going to say here isn't obvious to everyone. So I'm putting my ideas about how we can further our understanding of addiction into blog post form. Hopefully, it will help continue the discussion.






This image, from the internet, succinctly conveys why I'm worried about how the debate on addiction vaccines may play out



The authors of the commentary talk about how addiction used to be seen as a failure of willpower. Some say neuroscience has replaced this view with the concept of addiction as a "brain disease." Jordan pointed this out as he introduced the paper. Addiction can also be defined in terms of behavior, as he then made a point of telling us. To hammer home the idea of addiction as behavior, Jordan showed us a clip of physician Gabor Maté speaking at a TED conference. Maté puts addiction in a social context. He pointedly states that "if you want to ask the question of why people are in pain, you can't look at their genetics. You have to look at their lives." Similarly, some neuroethicists have argued that calling addiction a "brain disease" ignores the social components of this pattern of behavior[1] and could increase the stigma faced by people suffering from addiction[2].



Yes, addiction can be described as a behavior. Dr. Steven Hyman begins a 2005 review [3] on addiction by stating that it is "defined as compulsive drug use despite negative consequences." Note this definition does not mention the brain. Yet Hyman is a molecular biologist who has spent his career relating genes to behavior, and he continues to lobby heavily for this approach to understanding psychiatric disorders (as described in previous posts on this blog).



We stopped thinking of the brain as a black box that gives rise to behavior a long time ago. If I want proof of this, I don’t have to look further than the work of Emory's Dr. Mike Kuhar, who I couldn't help noticing among the people in attendance at this month's journal club meeting. Kuhar, as a young scientist in Sol Snyder's lab, helped provide some of the first evidence that receptors in the brain are real and that they bind drugs like cocaine. If I were blind to the fact that Jordan Kohn and Mike Kuhar are both complicated human beings, I’d be tempted to write up this journal club meeting as if they were the living embodiments of these opposing viewpoints: one view of addiction as a behavior, and another view of addiction as a brain disease.






This is drugs on your brain: Kuhar helped demonstrate where receptors in the brain are located that bind both signalling molecules and drugs like morphine that mimic those molecules' shapes

What we really need to do is embrace both viewpoints. Yes, there's a ton of studies providing evidence that drugs have long-term dramatic effects on reward systems in the brain. Our brains are not perfectly designed. It's easy for chemicals to come along and hijack the reward system. This explains why I have spent enough on coffee to fund the founding of a few fincas full of those beany bushes in Brazil. However, the things that happen in my brain when I crave coffee are not exactly what happens in a rat’s brain when he has nothing better to do than press a lever in a cage the size of a shoebox so that a machine injects another bolus of cocaine into his cerebral ventricles. Suddenly studies of other animals that are social drinkers [4,5] don’t seem so ridiculous. (Most studies of alcoholism, for example, use rats or mice, species that live as loners in the wild and often don’t drink alcohol unless sugar is added.) To their credit, some addiction researchers have realized for decades that social environment does play a role in susceptibility to alcoholism and other forms of substance abuse [6]. While we can learn from animal studies, we also need to take a more holistic view of humans, including their environment. There’s only so much that animal models and clinical studies can tell us about why our society drives people to put certain chemicals in their brains. By the same token, we need to realize there's more to an addict's brain than the reward system, and recovering addicts surely leverage those other brain regions to their advantage.





I realize I'm not the first person to argue that we need to change how we talk about addiction (see, for example, the article from Buchman, Illes, and Reiner I cited above). As Dr. Kuhar pointed out during the discussion at journal club, it might help to say "addiction" when we're talking about the behavior, and "chemical dependence" when we're talking about changes in the brain. However, I want to emphasize that changing people's perceptions of addiction does not change what the science is telling us (although a well-designed experiment might disprove what we think we know). People suffering from many types of "brain diseases" face stigma, but the stigma is a problem in other peoples' brains. It worries me that a focus on the social aspects of substance abuse might make us forget what we've learned in the lab. Let's not set up a false dichotomy between brain and behavior, when most people now concede that the former is responsible for the latter, even when we're not sure who to hold responsible for our brains. At the same time, we need to understand why a brain placed in a certain environment will tend towards a certain behavior, such as addiction. Only then can we have a clear discussion about when we should give up our dependence on some substance that alters our brain chemistry, so we can instead be hooked on vaccines.





Want to cite this post?



Nicholson, D. (2013). Neuroethics Journal Club: Hooked on Vaccines. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/01/neuroethics-journal-club-hooked-on.html



[1] Buchman, Daniel Z., Wayne Skinner, and Judy Illes. "Negotiating the relationship between addiction, ethics, and brain science." AJOB neuroscience 1.1 (2010): 36-45.



[2] Buchman, Daniel Z., Judy Illes, and Peter B. Reiner. "The paradox of addiction neuroscience." Neuroethics 4.2 (2011): 65-77.



[3] Hyman, S. E. (2005). Addiction: A Disease of Learning and Memory. Am J Psychiatry, 162(8), 1414-1422.



[4] Anacker, Allison MJ, et al. "Prairie voles as a novel model of socially facilitated excessive drinking." Addiction biology 16.1 (2010): 92-107.



[5] Anacker, Allison MJ, Jennifer M. Loftis, and Andrey E. Ryabinin. "Alcohol intake in prairie voles is influenced by the drinking level of a peer." Alcoholism: Clinical and Experimental Research 35.10 (2011): 1884-1890.



[6] Spear, Linda P. "The adolescent brain and age-related behavioral manifestations." Neuroscience & Biobehavioral Reviews 24.4 (2000): 417-463.