Pages

Tuesday, June 25, 2013

Legal Pains

When a friend accidentally burns themselves on a stove-top, their pain is usually very obvious - cursing, gesturing wildly, even the explicit verbal pronouncement of "I am in pain."  It's also very clear from this display that their pain is viewed as a "bad" thing - they want it to stop, they will be more vigilant in the future to prevent it from happening again, and they very likely want or even expect you to help out in these endeavors. Pain, while being a survival-affirming biological phenomenon, is (at least in this simple case) also inherently ethical.



We can then imagine that this same friend's nervous system might be altered (whether through mutation, injury, or pharmacological manipulation) to prevent them from feeling pain.  While we might initially be shocked at such a turn of events, we could be convinced of such a change if our friend stopped responding to usually painful stimuli (such as our villain the stove-top) with the same clear "pained" reaction.  So changing the nervous system clearly changes whether or not the individual can feel pain, leading us to believe that pain is something the brain does.






Pain is something the brain does - but how do we know when it is doing it?  Image from here.

However, in addition to our verbal friend, there are an untold number of animals (including other types of humans), plants, virions, rocks, alien beings, artificial intelligences, puppets, and tissue cultures whose internal states aren't as readily apparent. Whether they can't talk (as in the case of the bulk of that list) or we suspect that their speech might not be truthful (as in the case of artificial intelligences or puppets), we must rely on other, perhaps subtler methods of inferring their internal state. Holding off on the very important question of how we should evaluate a creature's pain, let us instead focus on how we currently determine whether a strange being is in pain or not. To simplify the problem significantly, let's focus on how this process occurs in the present-day United States. While even in this limited scope we see substantial debate over what counts as pain in non-human animals, there is at least one place where some sort of official, if temporary, compromise is forced into existence: animal cruelty law.  Can US animal cruelty laws provide a formal definition for how Americans think of pain?

The answer is a bellowing, headache-inducing “no.”  Besides the fact that animal cruelty laws are all over the map, they also assume the reader already knows what terms such as 'torture,' 'cruelty,' and 'abuse' mean. [1] Wording is such that the legality of acts such as killing an animal - which includes euthanasia - is dependent on whether or not the human performing the act was acting with 'malicious intent,' [1] rather than on behavioral or physiologic indicators of pain in the victim itself.  Even the definition of 'animal' can be horribly vague and ancient-sounding - California's, for instance, is “any dumb brute,” [2] while Alaska excludes fish from its definition. [3] And then there are the built-in loopholes - activities such as hunting, fishing, pest control, and agriculture are often explicitly excluded from prosecution under these laws. [4]



If the laws themselves are vague and variable, perhaps we can pass the buck onto the courts to determine what the 'official' definition of pain is in practice.  The American Prosecutors Research Institute published a guide [5] to prosecuting animal cruelty cases in 2006, which focuses on motivating the prosecution of these crimes (as indicators of future criminal activity [6], or as “broken window” crimes that can reduce public trust in law enforcement if they go unchecked, in addition to violations of animal welfare), as well as describing the types of cases and the resources available to prosecutors.  Though prosecution might focus on very anthropocentric factors such as how much the criminal act represents a potential for harm to the (human) community at large, there are several recommendations on ways to argue for the amount of pain and suffering inflicted on the victims of animal cruelty.  First among these techniques is to consult with a veterinarian, who, being “among the most respected members of the community,” can offer expert “opinions regarding the speed of unconsciousness or death, and the degree of suffering to evaluate whether the death or killing was humane.” [5]







So the legislators have given a vague direction to the courts, and the courts hand the technical questions off to the veterinarians.  How do the vets determine if an animal is in pain?  Even after narrowing our scope to the veterinary community, we see substantial variation in how pain is viewed and treated. While "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage" [7] is often cited in medical contexts, how does this translate to actual diagnosis in a veterinary setting?  One approach is to see what behavioral and physiologic changes are associated with major tissue damage, as in the case of surgery, and then identify how accurately such measures can be used to deduce whether an animal was provided with analgesia following such a stimulus. If a measurement (such as heart rate, subjective measures of posture, amount of locomotor activity, or wound licking) accurately indicates the presence or absence of known pains, and is reliably measured by multiple experimenters, then the measurement is considered indicative of pain [8], though confounds can occur [9].  Additionally, behavioral measurements may be selected for evaluation based on common practical knowledge or past experience with the organism [10].  It should be noted, however, that Dr. Patrick Wall, one of the world's most respected pain researchers, admonished veterinary practitioners to abandon the idea of an intrinsically moral animal pain. Wall asks vets, "instead of agonizing over an undefinable concept of pain, why do we not simply study the individual's efforts to stabilize its internal environment and then aid it, or at least not intrude on those efforts, without good reason?" [11]
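
To make that validation logic concrete, here is a minimal sketch, in Python and with entirely made-up numbers (not drawn from any of the cited studies), of the two checks described above: whether a candidate measure separates animals known to have received analgesia from those that did not, and whether independent observers agree when scoring the same animals.

from statistics import mean

# Hypothetical wound-licking scores after surgery (arbitrary units).
analgesia    = [2, 1, 3, 2, 1, 2]   # animals given a known analgesic
no_analgesia = [6, 7, 5, 8, 6, 7]   # animals given none

# Check 1 - accuracy: call an animal "in pain" when its score exceeds a
# threshold, then ask how often that call matches the known condition.
threshold = 4
hits = sum(s <= threshold for s in analgesia) + \
       sum(s > threshold for s in no_analgesia)
accuracy = hits / (len(analgesia) + len(no_analgesia))

# Check 2 - inter-rater reliability: percent agreement between two observers
# who independently scored the same twelve animals as "pain" or "no pain".
rater_a = ["pain", "pain", "no", "pain", "no", "no",
           "pain", "pain", "no", "pain", "no", "no"]
rater_b = ["pain", "pain", "no", "no",   "no", "no",
           "pain", "pain", "no", "pain", "no", "pain"]
agreement = mean(a == b for a, b in zip(rater_a, rater_b))

print(f"accuracy against known analgesia: {accuracy:.2f}")
print(f"inter-rater agreement:            {agreement:.2f}")

A measure that does well on both checks, and that is not confounded by the analgesic itself [9], is the kind of candidate indicator the cited work [8] treats as indicative of pain.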



This is hardly an exhaustive depiction of all the ways pain can be construed, even when limiting ourselves to American veterinary practice.  We haven't found clear ground on which to rest pain, though certainly behavioral changes, nociception, and similarity to human responses are all good starting points.  Going back up to the courts and the law, we find that in practice this notion of animal pain is further modulated by tradition and human intention when determining whether the pain is considered 'bad' or not.  The 'clearly' neural and moral pain we see with humans becomes cloudier as we move away from humans and into other entities, becoming more and more troublesome as the entity becomes stranger and stranger.  While we might expect (or at least hope for) direct neural measures of pain to sharpen the very fuzzy working definition I've sketched above, we should remember that the 'ground truth' for pain, the facts from which we build more nuanced definitions, is human behavioral responses (such as the response to the stove-top).  We can look for similar behaviors, or similar neural correlates of these behaviors, in non-human animals, but both of these are correlative indicators.  Until we specify (or agree upon) what exactly about pain makes it moral [12], and then find the behavioral or neural causes of that aspect, such correlations are all we will have for determining pain.





[1] West's Annotated California Codes. Penal Code. Part 1. Of Crimes and Punishments. Title 14. Malicious Mischief. § 597. Cruelty to animals

[2] West's Annotated California Codes. Penal Code. Part 1. Of Crimes and Punishments. Title 14. Malicious Mischief. § 599b. Words and phrases; imputation of knowledge to corporation

[3] Alaska Statutes. Title 11. Criminal Law. Chapter 81. General Provisions. Section 900. Definitions.

[4] West's Code of Georgia Annotated. Title 16. Crimes and Offenses. Chapter 12. Offenses Against Public Health and Morals. Article 1. General Provisions.

[5] American Prosecutors Research Institute. Animal Cruelty Prosecution: Opportunities for Early Response to Crime and Interpersonal Violence. American Prosecutors Research Institute Special Topics Series, July 2006.

[6] Luke, Carter, Jack Levin, and Arnold Arluke. Cruelty to Animals and Other Crimes: A Study by the MSPCA and Northeastern University. Massachusetts Society for the Prevention of Cruelty to Animals, 1997.

[7] Merskey, H. "Pain Terms: a list of definitions and notes on usage." Pain 6 (1979): 249-252.  Note that the definition is expanded further in the original text, though it is usually this sentence that gets cited.

[8] Stasiak, Karen L., et al. "Species-specific assessment of pain in laboratory animals." Journal of the American Association for Laboratory Animal Science 42.4 (2003): 13-20.

[9] Liles, J. H., et al. "Influence of oral buprenorphine, oral naltrexone or morphine on the effects of laparotomy in the rat." Laboratory animals 32.2 (1998): 149-161.

[10] Gillingham, Melanie B., et al. "A comparison of two opioid analgesics for relief of visceral pain induced by intestinal resection in rats." Journal of the American Association for Laboratory Animal Science 40.1 (2001): 21-26.

[11] Wall, Patrick D. "Defining pain in animals." Animal pain. New York: Churchill-Livingstone Inc (1992): 63-79.

[12] Major contenders here might be "because pain Feels bad" (so, the qualitative, subjective aspect of pain), "because I want it to stop" (pain as an example of interests), or "because pain implies a request for help" (pain as a social event), or some combination thereof.





Want to cite this post?

Zeller-Townson, R. (2013). Legal Pains. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/06/legal-pains.html

Tuesday, June 18, 2013

Can Human Brain Tissue Make Mice Smarter? Emory Neuroethics Journal Club Review

What makes humans smart?  This was the primary question posed in the final Journal Club of the Spring 2013 semester.  Led by Riley Zeller-Townson, the club discussed Han et al. (2013), a paper that reports enhanced learning in mice after human glial progenitor cells were grafted into their brains. Riley began by explaining the paper and the work leading up to it. Glial cells largely support and protect neurons, with roles in synaptic plasticity, myelination, and maintenance of the blood-brain barrier (Barres, 2003). This study focuses on one subtype of glia, called astrocytes, cells that provide nutrients to neurons (Tsacopoulos et al., 1996).






Neurons (shown on left) possess both axons and dendrites and are shaped differently than glial cells (Source).  The glial cell shown on the right is an astrocyte, which is more “star” shaped due to its many branched processes (Han et al 2013).

While people generally think of neurons as being the important type of brain cells, research is beginning to show that the merits of glial cells were previously underestimated. Interestingly, post-mortem analysis of Albert Einstein’s brain showed that he had more glia than the average person (Diamond et al., 1985).  In the same vein, previous studies have shown that primate glia are larger, more complex, and faster than those of mice (Colombo, 1996; Oberheim et al., 2009). Therefore, is it possible that glial cells are the root of intelligence?

Han et al. tested this hypothesis by grafting human astroglial progenitors into neonatal immune-deficient mice.  Because the mice retained their own astroglia as well, the mice were chimeric, having both mouse and human glia.  The human glial cells were fluorescently labeled before implantation so that they could be identified after each mouse was sacrificed, which occurred anywhere from 0.5 to 20 months of age.  The successfully grafted glial cells were found to have distributed across the forebrain, including the hippocampus and cortex.  Additionally, human glia were found in the amygdala, thalamus, and neostriatum in mice aged 12-20 months.



Next, the authors did a battery of tests that revealed that the cells were not only present, but also functional.  The authors found that not only were the human astrocyte progenitor cells present in the mouse brain, but they actually differentiated into humanoid protoplasmic astrocytes that formed synapses with mouse astrocytes.  In the chimeric mouse brain, the human astrocytes produced Ca2+ signals that were three times faster than Ca2+ signals produced by mouse astrocytes (this had previously been demonstrated in human tissue, but held true for chimeric mouse tissue). The authors then compared hippocampal dentate synaptic activity in chimeric mice to that in unengrafted and allografted mice and found that tissues with engrafted human glia showed a greater level of excitatory synaptic transmission. This finding goes hand in hand with the subsequent finding that Long Term Potentiation (LTP), an experimental measure correlated with memory formation, increased in chimeric mice. The authors speculate that this increase was due to insertion of GluR1, a specific type of excitatory receptor often associated with memory and learning, into mouse neurons by human glia, which lowered the threshold for LTP induction.



Because LTP was increased in chimeric mice, one would expect that these mice, not just their cells, would also show quicker learning than the control mice. As expected, mice engrafted with human glia performed better in all four behavioral tasks compared to unengrafted and allografted mice, showing that the improvement was due to the human glia, not the act of engrafting cells into the mice.



Overall, the results of the paper suggest that astrocytes play a role in learning, and that more complex astrocytes might increase learning by lowering the threshold at which LTP occurs. Furthermore, the paper implies that human astrocytes, and possibly other subtypes of untested human glial cells, play a role in the high level of cognition observed in humans (and possibly in mice).







The Rats of NIMH (Source)


After introducing the paper and its findings, the discussion turned to the ethical considerations associated with the study.  Riley introduced another paper (Greely et al., 2007) that anticipated this type of study and the ethical issues it would raise.  Greely lists several such issues, including the risk of conferring humanity upon the mice, the potential for pain and suffering, public reaction, and respect for human tissues.



In response to Greely’s first concern, journal club attendee and bioethics professor Dr. Jonathan Crane pointed out that “conferring humanity” upon the mice was the wrong term, as “humanity” simply means the traits that make us human.  Dr. Crane noted that humans do not really have any unique capabilities that other animals lack; we just have them to a different degree.  He suggested that Greely is most likely concerned about the risk of conferring “personhood,” or making the mouse an autonomous individual to the same extent that an adult human is.



Of course, if personhood were conferred upon a species that we cannot communicate with, would we even be able to recognize it?  A big issue with this type of study, then, is not only the possibly innately unethical act of creating an unnatural sentient being, but also the risk of creating a (more?) autonomous being, not realizing it as such, and continuing to treat it like a typical lab animal.



Greely’s second concern, the potential for pain and suffering in the chimeric mouse, was also addressed.  While Greely’s paper noted the possibility of increased pain in the mouse due to having human neurons, the group discussed the mouse’s possibly increased emotional pain as well.  The group noted that a mouse with human “intelligence” might also gain a human-like propensity for depression or for realizing the futility of life as an experimental animal.  The authors seem to have guarded against this in the Han et al. study by noting that the chimeric mice were just as social as normal mice.  However, if these types of studies continue and mouse intelligence is further increased, depression and suffering in the chimeric mouse will be a concern. The experimenters compared the reaction times and pain thresholds of chimeric mice to normal mice and found them to be the same.  These findings, coupled with the observation that the chimeric mice are just as social as normal mice, are checks the experimenters used to show that the mice were not suffering mentally or physically.  These checks should also be present in future studies with chimeric species.  However, the downside is that these tests cannot gauge suffering until it has already happened.



Greely’s concern about public reactions was also discussed.  The group noted that different groups would be opposed to this study for different reasons.  Some would be opposed because of the belief that implanting human brain cells into a mouse disvalues human intelligence.  Emory medical ethicist Dr. John Banja noted that some would question whether or not it is ethical to create hybrid species in the first place.  Dr. Crane pointed out that the study is also an affront to the mouse, as it is attempting to improve an animal that is perfect to begin with.



Riley’s main concern was that progression of this experiment would continue to create new species that we know nothing about.  Without knowing which mouse traits and which human traits a hybrid will have, it is impossible to care for it in the correct way from the beginning.  Riley pointed out that it sometimes takes thousands of years to reach a cultural consensus on what is ethical regarding a certain issue, so if a new species pops up overnight, society may not treat the animals respectfully.  Furthermore, these animals could completely change biological science: furthering this study could lead to a greater understanding of the mechanisms for learning, memory, and eventually even consciousness in humans.  Alternatively, however, a unique species could be created that is later discovered to have human-like consciousness and an increased capacity for suffering.  The scientific community would stall as it dealt with public dissent while trying to find a more ethical method of testing the same hypotheses.



While future advancements along this line of work could have the potential to help us understand exactly what intelligence is, how it works, and why some people have more of it than others, these types of experiments could also create a brand new species that society is not ready to care for or treat with respect.  Furthermore, it may not be possible to determine when an ethical boundary has been overstepped until it has already been crossed. Therefore, in continuations of this study, it is important to use chimeric animals with discretion and to continue testing for possible suffering.



If you missed out on this journal club meeting you can watch a video of it and previous meetings here.



References



Gourine, A. V., et al. (2010). Astrocytes control breathing through pH-dependent release of ATP. Science, 571-575.



Barres, B. A. (2003). What is a Glial Cell? Glia, 4-5.



Colombo, J. (1996). Interlaminar astroglial processes in the cerebral cortex of adult monkeys but not of adult rats. Cells Tissues Organ, 57-62.



Greely, H. T., et al. (2007). Thinking about the human neuron mouse. The American Journal of Bioethics, 27-40.



Tsacopoulos, M., & Magistretti, P. J. (1996). Metabolic coupling between glia and neurons. Journal of Neuroscience, 877-885.



Diamond, M. C., et al. (1985). On the brain of a scientist: Albert Einstein. Experimental Neurology, 198-204.



Oberheim, N. A., et al. (2006). Astrocytic complexity distinguishes the human brain. Trends in Neurosciences, 547-553.



Han, X., et al. (2013). Forebrain engraftment by human glial progenitor cells enhances synaptic plasticity and learning in adult mice. Cell Stem Cell, 342-353.




Want to cite this post?



Young, E. (2013). Can Human Brain Tissue Make Mice Smarter? Emory Neuroethics Journal Club Review. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/06/can-human-brain-tissue-make-mice.html.

Tuesday, June 11, 2013

How do neuroscientists integrate their knowledge of the brain with their religious and spiritual beliefs?

By Kim Lang

Graduate Student, Neuroscience

Emory University 

This post was written as part of the Contemporary Issues in Neuroethics course 






As scientists, we’re generally a skeptical bunch (I’ll leave
speculation of whether that is a cause and/or effect of a career in science for
the Comments section).  While 95% of the
American public believe in a deity or higher power (83% believe in God and 12%
believe in a higher power) [1], only 51% of
surveyed scientists believe the same (33% believe in God and 18% believe in a
universal spirit or higher power) (Figure 1). [2]











According to surveys, this discrepancy is nothing new.  In 1914, sociologist James H. Leuba found
that 42% of the polled US scientists believed in God while 58% did not. [1,3]  In 1996,
Larry Witham and Edward Larson repeated Leuba’s survey and found that 40% of
scientists believe in a personal God while 45% do not. [4]  While the wording of questions can be
critiqued [3], the overall trend
remains and is fairly constant across different scientific fields.  According to the 2009 Pew Research Center
survey, 51% of scientists in biological and medical fields believe in God or a
higher power, as well as 55% of those in chemistry, 50% of those in geosciences,
and 43% of those in physics and astronomy (Figure 2). [2]










As informative as this survey is, those are frustratingly wide
categories of science.  With so much
research into the neurological mechanisms of religious experiences and
discussion about whether the brain is wired to “produce” or “perceive” God [5], I’m curious to
know how the neuroscience community would respond to this survey (being part of
the neuroscience community also biases my curiosity a bit).  I’d also like to know how those
neuroscientists that do believe in God or a higher power integrate this belief
with their neuroscience knowledge (the atheist view seems pretty
self-explanatory).






I’m not aware of any surveys that address this question, so I decided
to look into the issue through “case studies” – works by neuroscientists or
neurologists describing their beliefs and explanations of how neuroscience and
religion are compatible in their lives. There are numerous opinions and varieties of personal belief systems out
there and I have listed a few below. This list is far from exhaustive, but I think it shows the range of ways
in which neuroscience knowledge and religious or spiritual beliefs can coexist
in an individual.  I’d be interested to
hear other possibilities in the Comments section.






Mario Beauregard


Associate Research Professor in the Departments of Psychology and
Radiology and the Neuroscience Research Center at the University of Montreal, author
of The Spiritual Brain. http://drmariobeauregard.com/





Dr. Beauregard disagrees with the materialist view, and in his book he makes
the case for the existence of the soul, explaining why “there is good reason for believing that
human beings have a spiritual nature, one that even survives death”. [6]   In an interview, he explains his
“non-materialist neuroscience” beliefs, saying that “the mind is real and can
change the brain...I have demonstrated, via brain imaging techniques, that
women and girls can control sad thoughts, men can control responses to erotic
films, and people who suffer from phobias such as spider phobia can reorganize
their brains so that they lose the fear.” Of spiritual experiences he says, the “brain mediates all experiences of
living human beings. That does not mean that the brain creates the experiences”. [7]    





Of the four possible modes of interaction between scientific and
religious belief (Conflict, Independence, Dialogue, Integration, as outlined by
physicist Ian Barbour [8]), Beauregard
appears to subscribe to Integration, in which a person is both a “biological
organism” and a “responsible self.” The
“self” (perhaps another term for mind in this analysis) has causal efficacy as
it interacts with the brain.[8] For Beauregard, the brain mediates
perception, but is itself mediated by the mind.      







Michael Graziano


Professor of Neuroscience at the Princeton University Neuroscience
Institute; author of God Soul Mind Brain:
A Neuroscientist's Reflections on the Spirit World.






Dr. Graziano says that “evidence is now overwhelming that every aspect
of the mind is produced by the brain,” and “I draw two personal lessons from the
neuroscience of mind.  First, far from
dismissing mind, or spirit, or soul as nonsense, I see these quantities as far
more precious precisely because they are vulnerable and finite. In a sense I've
become more spiritual as my scientific understanding deepens and I realize that
spirit is a passing conjunction of information. 
Second, the neuroscience of the mind gives me a wonderful opportunity to
work on a scientific problem that is truly meaningful. About 25 years ago
Francis Crick, famous for his role in understanding DNA, posed a question. Is
it possible for brain science to address consciousness, a topic traditionally
studied by philosophers and theologians? The answer is a definite yes. Many
neuroscientists including myself have joined that effort.” [9] [For those who are interested, Crick explored
consciousness in his book titled “The Astonishing Hypothesis”, http://www.amazon.com/Astonishing-Hypothesis-Scientific-Search-Soul/dp/0684801582]





In Graziano’s beliefs, the physical brain gives rise to the mind, which
is interchangeable with spirit, soul, and a temporary confluence of
information.  It seems his neuroscientific
knowledge has actually deepened his spirituality. Like Beauregard, Graziano fits Barbour’s Integration
model, in which there is little conflict between science and spirituality
because brain and mind are not considered separate entities.  Instead, they are seen as two different aspects
of one process.    





Eben Alexander III


Neurosurgeon, Lynchburg General Hospital; author of Proof of Heaven






Dr. Alexander did not put much stock in “near-death revelations of God
and heaven” until bacterial meningitis put him into a coma. During that time, he had vivid experiences of
“an ‘orb’ that interprets for an all-loving God.” Despite the professional risk, he shared his experiences,
eventually writing a book about them. He
sums up his new beliefs by saying, “our spirit is not dependent on the brain or
body. It is eternal, and no one has one
sentence worth of hard evidence that it isn't.” [10]





Alexander takes a more traditional view of religion, discussing God
instead of spirituality and asserting that our spirit is independent of our
physical selves (his religious experiences occurred when his physical brain was
“not working at all” and not simply “working improperly.”) [10] This idea of soul-body dualism can be
categorized as Barbour’s Independence model, in which “there can be no conflict
between scientific and religious assertions …if they are independent and
unrelated to each other.” [8] While most scientists disagree with this
idea, the science-religion conflict may be averted another way – by
understanding that “there is a conflict in metaphysics but not in ethics.” [11] In this light, the discrepancies between the
scientific and religious details are immaterial to one’s daily conduct.     





Andrew Newberg


Director of Research at the Myrna Brind Center for Integrative Medicine
at Thomas Jefferson University Hospital and Medical College, author of How God Changes Your Brain: Breakthrough
Findings from a Leading Neuroscientist
and Why God Won’t Go Away: Brain Science and the Biology of Belief.







On his website, Dr. Newberg states that, “Our research indicates that
our only way of comprehending God, asking questions about God, and experiencing
God is through the brain. But whether or not God exists ‘out there’ is
something that neuroscience cannot answer.” [12] He also explains his own beliefs by saying, “My
initial attempts to find answers arose from the Western traditions, with an
emphasis on science and philosophy. Over the years, my personal search evolved
into a more meditative approach, which appeared similar to some of the Eastern
traditions. However, although my approach is in many ways a form of
meditation, I have never practiced a specific religious or meditative technique
for any period of time. In order to continue my search, I have had to learn
about many disciplines and traditions. This typically was to enhance my own
approach, which I do consider a spiritual journey.” [12]





Newberg talks about both God and spirituality and even the role of
science in the development of his personal beliefs.  He might be best characterized not by one of
Barbour’s models but by Elaine Howard Ecklund (author of Science vs. Religion: What Scientists Really Think) and her term
“spiritual entrepreneur,” which describes people who pursue an individual spirituality
that meshes easily with science. [13]   







Concluding Thoughts


As the surveys reveal, a large percentage of scientists are religious or
spiritual, and (estimating from the fairly consistent ratios of beliefs across different
scientific disciplines, Figure 2) there is likely a sizeable percentage of
neuroscientists with religious or spiritual beliefs. Within this group, individuals use a range of
conceptualizations to combine their neuroscientific and religious/spiritual
understandings.  Some, such as Beauregard
and Graziano, seem to have integrated the two. 
Others, such as Alexander, take the exact opposite approach and view the
two domains as independent.  Still
others, such as Newberg, seem to forge a new path of “spiritual entrepreneurship,”
crafting a spirituality that meshes well with scientific understandings. 





Though demonstrated above only briefly (Alexander), I posit that compartmentalization
is another way in which scientists avoid conflict between religion and
science.  As noted physicist Richard
Feynman observed, “there is a conflict in metaphysics but not in ethics.” [11]  That truth, coupled with the infrequency of
religious discussions in labs and other scientific realms, makes it rather
easy to defer indefinitely the challenge of thoroughly reconciling one’s
religious and scientific beliefs.  





But perhaps rising to that challenge (or at least discussing it) might
benefit the scientific community.  A
clearer understanding of how scientists relate science and religion in their
own lives could improve communication with the more religious public (who may
be unaware that scientists share some of their views).  Additionally, understanding the factors that
inform our sense of morality and ethics and our research decisions (What topic
will I pursue?  What animal models am I
comfortable using?) would make us more thoughtful investigators.  In light of the (perhaps unexpected) fact
that nearly half of scientists are religious or spiritual, this discussion may
be more relevant than we previously thought. 
However, this conversation seems practically nonexistent, especially
within neuroscience.  None of the polls I
found presented neuroscientists as an independent group and I found only a few
outspoken neuroscientists who share their beliefs publicly.  I suspect this may be due to a stereotype of
religion as irrational and thus incompatible with science (a stigma that’s certainly
not helped by media coverage of creationism curriculum, the Westboro Baptist
Church protests, etc.).  But is this the
case?  The polls suggest that about half
of scientists think not (though few seem comfortable publicly expressing this
view).  While I certainly agree that scientific
efforts should exist apart from the direct influence of religion, it may be a
good idea for us to consider and be more willing to discuss some of the human
factors (i.e., religion and spirituality) that influence
the conduct of science and the lives of scientists.









References








1.  Masci, D. Scientists and Belief: The
Pew Forum on Religion & Public Life; 2009 [4-2-2013]. Available from: http://www.pewforum.org/Science-and-Bioethics/Scientists-and-Belief.aspx.









3.  Scott E.C. Do Scientists Really Reject God?: New Poll
Contradicts Earlier Ones. Reports of the National Center for Science Education.
1997;18(2):24-5.





4.  Larson, E. J., Witham, L. Scientists are still keeping the faith
[Commentary]. Nature. 1997;386:435-6. doi: 10.1038/386435a0.





5.  Fingelkurts, A.A., Fingelkurts A.A. Is our brain hardwired to
produce God, or is our brain hardwired to perceive God? A systematic review on
the role of the brain in mediating religious experience. Cognitive processing.
2009;10(4):293-326. Epub 2009/05/28. doi: 10.1007/s10339-009-0261-3. PubMedPMID: 19471985.





6.  Beauregard, M., O'Leary D. The Spiritual Brain: A
Neuroscientist's Case for the Existence of the Soul New York, NY:
HarperCollins; 2007.





7.  Beauregard, M. Author Interview: Harper Collins
Publishers;  [cited 2013]. Available
from: http://www.harpercollins.com/author/AuthorExtra.aspx?displayType=interview&authorID=30251.





8.  Barbour, I.G. When Science Meets Religion. New York, NY.:
HarperCollins; 2000.





9.  Graziano, M. The Spirit Ends When The Brain Dies:
Huffington Post; 2011 [cited 2013 April 14]. Available from: http://www.huffingtonpost.com/michael-graziano/the-spirit-dies-when-the-brain-dies_b_983852.html.





10.  Kaufman, L. Readers Join Doctor’s Journey to the
Afterworld’s Gates: New York Times; 2012 [cited 2013 April 14]. Available from:
http://www.nytimes.com/2012/11/26/books/dr-eben-alexanders-tells-of-near-death-in-proof-of-heaven.html?pagewanted=1&_r=4&adxnnlx=1364410868-fOn/vk0ZyryfyttwF7ih1g.





11.  Feynman, R. Where the Two Worlds Tangle.  There is a Conflict in Metaphysics - but Not
in Ethics. In: Kurtz P, editor. Science and Religion: Are They Compatible? New
York, NY: Prometheus Books; 2003.





12.  Newberg, A. Questions & Answers  [cited 2013 April 14]. Available from: http://www.andrewnewberg.com/qna.asp.





13.  Ecklund, E.H. Science vs
Religion: What Scientists Really Think. Oxford: Oxford University Press; 2010.







Want to cite this post?


Lang, K. (2013). How do neuroscientists integrate their knowledge of the brain with their religious/spiritual beliefs? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/04/how-do-neuroscientists-integrate-their.html


Friday, June 7, 2013

International Neuroethics Society Meeting on Nov 7-8, 2013 in San Diego!

The International Neuroethics Society announces its 5th Annual Meeting (a satellite of the Society for Neuroscience Meeting), November 7 & 8, in San Diego.









 Abstracts are due June 15, 2013.  For more information and the program see here.



 Listen to INS Member Molly Crockett cordially invite you here.



Bring your friends and family to the open-to-the-public program November 7 on Neurogaming: What’s Neuroscience and Ethics Got to Do with It?

Register for the meeting on November 8 here.



The speaker lineup includes Barbara Sahakian & John Pickard, University of Cambridge, Julian Savulescu, University of Oxford, Patricia Churchland, University of California-San Diego, Molly Crockett, University of Zurich, Jens Clausen, University of Tubingen, Lisa Claydon, Bristol Law School, University of the West of England, Joe Fins & Niko Schiff, Weill Cornell Medical College, Holly Moore, Columbia University, New York State Psychiatric Institute, Mauricio Delgado, Rutgers University, Catherine Sebastian, Royal Holloway, University of London, J. David Jentsch, University of California – Los Angeles, and Honorable Robert Trentacosta, Presiding Judge of San Diego Superior Court.

See you in San Diego!





Tuesday, June 4, 2013

NEW OPENING! Graduate Editorial Intern for The American Journal of Bioethics Neuroscience

A unique opportunity for graduate students to get high-level editorial experience
for the premier neuroethics journal and the official journal of the International
Neuroethics Society. Interns will have access to an international community of
renowned neuroethics scholars and innovation in neuroethics scholarship. 




As editorial intern, you will be responsible for attending biweekly editorial meetings
and contributing intellectually to the editorial responsibility of the journal;
organization and transcription of interviews of prominent neuroethicists for
publication in the journal; publicity of the journal to the neuroscience community;
and maintenance of an internal organizational database. Innovation and initiative
are valued, and there is some liberty to pursue projects of your own design within the
scope of the journal’s mission. Work runs approximately 10-20 hours a week,
depending on the editorial cycle. 




Please contact jequeen@emory.edu for more information. 



Deadline for applications: June 28, 2013 



Eligibility: Must currently be a graduate student, from any discipline,
with an interest in neuroethics and editorial work. Must be organized
and capable of meeting deadlines. Web management skills an asset.
Must be able to attend regular meetings at Emory University. 




How to apply: Send a 1-pg letter of interest, CV, and letter of
recommendation to jequeen@emory.edu




American Journal of Bioethics Neuroscience
Emory University
1531 Dickey Drive
Atlanta, GA 30322
http://www.ajobneuroscience.com/











A Life With Others…In Your Head?

By Stepheni Uh

Undergraduate Neuroscience and Behavioral Biology Major

Emory University

This post was written as part of the Contemporary Issues in Neuroethics Course



Although decades have passed since the world first heard of Billy Milligan, his predicament and story still cause confusion and wonder. As
the field of neuroscience is expanding, more light has been shed upon his
condition: an extreme case of dissociative identity disorder (DID), formerly
known as multiple personality disorder. Advancements in neuroscience (i.e., in
research techniques) have led to the investigation of possible neurobiological
correlates of the symptoms of DID – yet, due to the rarity of this
disorder, the possible neurobiological basis for DID has not been established.
Despite the lack of raw data, per se, neuroscience has fueled new perspectives
regarding the nature of DID such as those involving the ideas of culpability,
personhood, and individuality.












Billy Milligan

Billy Milligan, whose birth name is William
Stanley Milligan, had approximately 24 different personalities that fought to
take over his body – Arthur the intelligent Englishman; Philip the Brooklyn
criminal; David the eight-year-old “keeper of pain”; Adalana the lesbian and
everyone else, including the Teacher who could fuse all of the personalities and
help them develop [1,4]. Milligan was involved in robberies and other crimes
before he was prosecuted for kidnapping and raping three women from the Ohio
State University campus in October 1977 [4]. According to his psychiatric
report, Adalana had taken over Milligan and consequently raped the women due to
her desire for affection. The other personalities, however, had no recollection
of the incident [4]. Billy Milligan was eventually acquitted of his crimes by
reason of insanity and sent to the Athens Mental Health Center to “recover.”
Experts attempted to treat him by fusing all the personalities into one, a fusion
the Teacher had already partially achieved; so they tried to make the Teacher
take over his "consciousness," which had never happened before. Milligan was
finally released in 1988 and then became free from supervision in 1991 [4]. As
of today, no one knows what has happened to Billy Milligan and many questions
remain unanswered.

According to the DSM-IV, DID is a “dissociative disorder” in
which an individual presents two or more distinct personalities that
repeatedly affect his or her behavior [3]. Dissociative disorders are
ones that involve significant “…disturbances in memory, identity,
consciousness, and/or perception of the external environment” [3]. In the case
of a DID patient, his or her personalities may show evident differences in
handwriting, voice, and even physical characteristics [3]. Another interesting
aspect of this disorder is that many times dissociative amnesia is present for
these individuals. Dissociative amnesia refers to the process of separating
events or memories from one’s “stream of consciousness” due to the overwhelming
stress that the event caused [5]. Oftentimes for DID individuals, the behavior
of one personality, or alter, is not recalled by the other alters [5]. In Billy
Milligan’s case, therefore, it could be that when he became Adalana, the other
personalities did not recall her actions due to the extremity of the crimes.



The causes of DID are controversial and still
not agreed upon. Some psychotherapists, psychologists, and researchers believe
that the core features of DID result from various social influences, such
as therapists (who may cause the release or creation of more personalities by
asking the individual whether other personalities exist), media
influences, as well as stigmatization of the disorder itself [3]. Others,
meanwhile, advocate the idea that severe and traumatic experiences such as
physical and sexual abuse during early childhood result in the dissociation of
personalities as a means of coping with the pain, which then causes DID [3, 4].
The neurobiological basis of DID is also unclear, but one study found that DID
individuals had smaller hippocampal and amygdalar volumes than healthy
individuals [6]. Information and research on DID are lacking because of the
small number of reported cases of DID, which could be due to either the rarity
of the disorder or the difficulty of diagnosing the disorder itself. Thus, many
dilemmas are present – particularly in the legal setting – when dealing with
people who claim to have multiple personalities.



Several challenging and controversial ethical
issues arise from the existence and characterizations of this disorder: determining
the level of responsibility DID individuals hold over their actions; whether criminals with DID are competent
to stand trial; and whether multiple personalities should
be treated as multiple people. Due to the lack of knowledge concerning the
basis and origins of DID, it is difficult to come up with any conclusions about
the aforementioned issues. The first issue revolves around the extent of
control DID individuals have over their actions as well as their intents. In
particular, the ways in which alters take turns (not necessarily equally) to
take over an individual’s body are ambiguous. Second, the difficulty of
determining the level of competency for DID individuals to stand trial is underscored
by amnesia for behaviors committed by specific alters. Why and how the other
alters become unable to recall the actions of one alter remains unclear.
If they cannot remember what one of their alters did, how can they
defend themselves [5]? It is possible, as mentioned before, that the criminal
actions of one alter were so extreme that the individual dissociated the memory
from all other alters. This postulation, however, then brings up the point of
whether or not the individual is aware of right versus wrong as well as the
ties to his/her consciousness. Another challenge is detecting if they are lying
about their amnesia, which could potentially be investigated through
brain-imaging techniques such as fMRI. The validity of brain imaging as a lie
detector, however, is still debated and further investigated by many
researchers [1].















Finally, there is much controversy over whether or
not the multiple alters present in DID individuals are fully developed and
autonomous personalities. This issue ties into the dilemma of criminal
responsibility. In Billy Milligan’s case, Adalana is the one who committed the
rapes, so should she be tried independently with separate legal representation
from all the other alters? There actually have been some incidents where trial
judges have required all DID alters to be sworn in before providing testimony
[3]. Future neuroscience studies may be able to investigate whether there are
functional, or perhaps even structural, brain changes when a DID individual
becomes another alter. The data from this type of research could contribute to
the issue of determining whether these alters are autonomous and separate from
one another. Yet, this also raises the dilemma of defining “personhood”: can
the human brain be used as a marker for different persons? Determining the
criteria for personhood is a complex and philosophical issue that has yet to
reach a conclusion. There are many factors that have been tied to defining
personhood such as self-awareness, autonomy, and rationality, but neuroscience has
opened the possibility of defining a person by his or her brain. For a DID
individual, however, it seems that these alters have arisen from the various
experiences of the individual. Thus, the alters are still technically part of
one “person,” but simply represent the individual’s mental and emotional
capacities. Billy Milligan, for instance, still contains the Teacher,
which somehow encompassed all of his alters. In this case, they do not seem to
be completely separate and autonomous individuals – two key aspects that I believe are necessary to define a person or an individual. Whether
or not the brain can be used as a factor for distinguishing persons is also an
interesting area of study that remains quite controversial. If the brain is in
fact identified as a criterion for personhood, it would imply that the entity
of a person is correlated with – or "reduced" to – the brain. This notion, of course,
is controversial and perhaps impossible to ever resolve.



DID exemplifies one of the most complex and
controversial psychiatric disorders. How and why DID arises remain unanswered
and therefore create many problems in determining how to treat DID individuals.
The meanings and levels of responsibility, personality, and individuality are
all questioned by DID thus making one wonder if this is a disorder of the mind,
the brain, or perhaps both. Neuroscience can help provide more answers in terms
of the possible correlations between the brain and behavior of DID individuals.
Studies on the neurobiological aspects of DID patients can shed light onto
whether or not there are significant changes in the brain when transitioning
from one alter to the next. Determining the amount of control these individuals
have over their behaviors while they are in their certain alters, however, will
be an immense challenge. Thus, determining legal culpability of these
individuals will not be as black and white. As the field of neuroscience
continues to expand and progress, nevertheless, we may be able to define the
nature of personhood in individuals who display multiple personalities. Until
then, establishing the criteria for criminal culpability for and understanding
the disposition of people like Billy Milligan will remain challenges that
neuroscience may eventually help resolve.















[1] Wolpe and colleagues (2005)
discuss and analyze the discourse on fMRI as a valid lie detector and emphasize
the need to consider ethical issues such as cognitive privacy, threats to civil
liberties posed by this type of research, and subjective interpretations of fMRI data.









References





1.  Coles, R. (1981, November 15). Arthur, Ragen, Allen, et al. The New York Times. Retrieved from: http://www.nytimes.com/1981/11/15/books/arthur-ragen-allen-et-al.html





2.  Lewis, D. O., Yeager, C. A., Swica, Y., Pincus, J. H., & Lewis, M. (1997). Objective documentation of child abuse and dissociation in 12 murderers with dissociative identity disorder. The American Journal of Psychiatry 154(12):1703-1710. 





3.  Lilienfeld, S. O., & Lynn, S. J. (2003). Dissociative identity disorder: multiple personalities, multiple controversies. In Lilienfeld, Lynn, & Lohr (Eds.), Science and pseudoscience in clinical psychology (pp. 109-142). New York: Guilford Press.





4.  Maher, J. (2007, October 28). 30 years later, multiple-personality case still fascinates. The Columbus Dispatch. Retrieved from: http://www.dispatch.com/content/stories/local/2007/10/28/BILLY.ART_ART_10-28-07_A1_EV89AGB.html





5.  Porter, S., Birt, A. R., Yuille, J. C., & Herve, H. F. (2001). Memory for murder: a psychological perspective on dissociative amnesia in legal contexts. International Journal of Law and Psychiatry 24(1):23-42.





6.  Vermetten, E., Schmahl, C., Lindner, S., Loewenstein, R. J., & Bremner, J. D. (2006). Hippocampal and amygdalar volumes in dissociative identity disorder. The American Journal of Psychiatry 163(4):630-636.







7.  Wolpe, P. R., Foster, K., & Langleben, D. D. (2005). Emerging neurotechnologies for lie-detection: promises and perils. American Journal of Bioethics 5(2):39-49.













Want to cite this post?


Uh, S. (2013). A Life With Others…In Your Head? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2013/04/a-life-with-othersin-your-head.html