
Tuesday, November 13, 2012

Zombie Philosophy: Is It Coming For Your Brain?

When I told my friends I was helping to
put together a conference on zombie ethics with the Emory Center for Ethics, I invariably
received one of two responses:






1) That’s really cool! Where do I sign
up?


2) Sorry, what?





If you’re in category (1) and didn’t
manage to make it to the conference, read on to find out what happened. If you’re
closer to category (2), keep
an open mind. There may be more going on with zombies than initially
meets the eye.





Anatomy of a Zombie


Dr. Steven Schlozman, an Assistant Professor of Psychiatry at Harvard Medical School, delivered the first talk of the morning via Skype. Dr. Schlozman, a zombie fanatic who grew up reading zombie stories and watching movies like Dawn of the Dead, has speculated extensively on what a zombie brain might look like. First, Dr. Schlozman suggests, zombies likely suffer from an underactive frontal lobe that leads to impaired impulse control. Frontal lobe dysfunction might stem from an overactive amygdala, where high levels of activity have been linked to strong feelings of anger and lust. The anterior cingulate cortex, which mediates the signal between the amygdala and the frontal lobe, could also be impaired in a way that eliminates moral restraint. Together, brain dysfunction in these three critical areas could lead to the insatiable bloodlust that characterizes most classical zombies.








Dr. Schlozman cited several other zombie characteristics that may be explained via brain pathology. Impairment of the ventromedial hypothalamus has been associated with extreme hunger, perhaps explaining zombies’ tireless pursuit of human flesh. Further, the slow, lumbering gait often associated with zombies (28 Days Later, Resident Evil, and the recent remake of Dawn of the Dead being prominent exceptions to this rule) may result from lesions to the basal ganglia and cerebellum, brain areas that control balance and motor activity.





Philosophical Zombies


If Dr. Schlozman is correct, Dawn of the Dead-type zombies could
conceivably be produced by an appropriate set of neurological interventions.
Yet some philosophers have argued that there may already be zombies amongst us:
what have been called, fittingly, “philosophical zombies.”
 Philosophical zombies are posited as materially identical to normal human
beings, yet lacking in consciousness. We tend to believe that our friends,
family, and coworkers have the same sort of conscious minds that we do, but how
do we really know? Is it possible that a person could have the same brain and body
as me, but not the same mind?








One rendering of the philosophical zombie problem.

Dr. Robert McCauley, Director of Emory’s Center for Mind, Brain, and Culture,
reviewed some of the major arguments for and against the existence of
philosophical zombies. According to McCauley, there are two major strands of
thought in modern philosophy: monism and dualism. Monism posits that the
universe consists of only a single kind of substance. In physicalism, that
substance is material; in idealism, it’s mental or otherwise immaterial. Dualism,
on the other hand, posits that both mental and physical substances exist. Exactly
what it means for a substance to be “mental” or “physical” may be somewhat
unclear, but McCauley points out that despite this ambiguity, we intuitively
have a sense of what these ideas mean. Regardless of what the world actually consists
of, it certainly “seems” that there are physical things, and it “seems” that
there are mental things.





McCauley argues that in the past several decades, a philosophical position called the psychophysical identity theory (PIT) has risen to prominence. PIT stipulates that mental causation and physical causation are one and the same. As a result, says McCauley, “we know where and how mental causation occurs” – it occurs in the mind, which is also the brain. PIT has proven to be a useful paradigm for modern-day neuroscientists, who have used the assumption that “the mind is the brain” to derive a vast set of empirical findings on how the brain operates.





According
to McCauley, philosophical zombies have been raised as one of the main
objections to PIT. Some philosophers, most notably David Chalmers, have argued
that the conceivability of philosophical zombies suggests that dualism must be
correct. If it’s possible to imagine an individual materially identical to
myself but with no conscious mind, the argument goes, there must be a kind of
substance that is not material. Other philosophers, including philosopher of mind Daniel Dennett, have argued that arguments premised upon
philosophical zombies are best understood as “intuition pumps”: thought
experiments that are “wonderful imagination grabbers,” but that rely largely on intuition and often fall apart when exposed to
rational scrutiny. From this perspective, says McCauley, positing a
materially-identical but non-conscious human is like arguing that health can be
removed without damaging organs or materially altering the body.





Given
that PIT’s assumptions are foundational to modern neuroscience, the
plausibility of philosophical zombies has significant implications for
scientific practice. If it’s possible to have a working brain without a working
mind, neuroscience may be missing important data by focusing only on material
brain structures.





Zombie Freedom


Following Dr. McCauley’s talk, Georgia
State associate professor of philosophy Dr. Eddy Nahmias 
considered a related issue: do zombies have free will?





Dr. Nahmias began by asking the audience
to raise their hands if they believed zombies had free will. Nobody, it seemed,
believed that this was the case. Dr. Nahmias then asked if humans possessed
free will, and most (though not all) of the audience agreed that we do. Finally,
Dr. Nahmias asked: why? What is it that grants humans, but not zombies, free
will? Answers to this question varied. Some suggested that human free will
exists due to our ability to suppress “animal impulses” and “instincts.” Others
argued that “working brains” or “personalities” are features that distinguish
humans as unique. Dr. Nahmias, however, suggested that a single principle
underlies all of these characteristics: consciousness. For Dr. Nahmias, zombies
lack free will simply because they lack consciousness.





This issue is important, Dr. Nahmias
argues, in light of controversies that have arisen largely in the last decade
regarding the existence of free will in humans. Daniel Wegner, Sam Harris, and Jerry Coyne have all argued that free will is an illusion. According to Dr. Nahmias, these
critiques rely on an implicit model that looks something like this:









My own visual sketch, inspired by a model Dr. Nahmias presented at the conference.




In this model, the brain communicates
with an immaterial thing called the “mind” or “soul,” in which free will takes
place. When the soul has done its work, it communicates a decision back down to
the brain, and the brain causes us to take action. Free will skeptics often
argue that science has demonstrated the soul not to exist, and this being the
case, free will must not exist either.






Dr. Nahmias, however, argues that free
will doesn’t necessarily require any notion of an ethereal soul. Rather, he
argues that we should proceed from the following premises: first, that we have
conscious experience; second, that consciousness is probably located, more or
less, in the cortex; and third, that “it would be shocking” if conscious reasoning
had no effect on action. Given these premises, we can imagine a free will based
simply on the fact that consciousness exists. In this sense, free will exists,
and it influences our behavior insofar as conscious states influence our
behavior.






Zombies, says Dr. Nahmias, “are a
remarkably effective tool for thinking about free will” because they force us
to more fully examine our intuitions about what constitutes a free being. Clarity
on those intuitions, in turn, is vital to understanding what we mean when we
talk about “free will,” and without that clarity it’s difficult to engage with
questions of whether and to what extent free will exists.





The Case for Zombies


There’s at least one other reason why
zombies might be an ideal starting point for public philosophical and ethical
discussions: everyone knows about them, and almost everyone agrees that they’re
at least a little bit cool. Personally, I don’t tend to get all that excited
about zombies – the only zombie movie I’ve seen recently is Zombieland, and my feelings were mixed – but I’m still a lot more likely
to get involved in a conversation that draws me in with this:










A bloodthirsty zombie.




than, say, this:








René Descartes.



It’s probably not a coincidence, then, that Dr. Schlozman lectures at the Harvard Graduate School of Education in addition to his work in psychiatry, or that Dr. Bradley Voytek and Dr. Timothy Verstynen have converted their extended blogosphere discussion of zombie neuroanatomy into a TEDx talk on education.
Zombies are the rare commodity that can legitimately compete for the highly
sought-after Educational Triple Crown of philosophical, scientific, and pop cultural
relevance. For that reason, I wouldn’t be surprised if academic discussion on zombies mirrored the modus operandi
of zombies themselves: understated, powerful, and ceaselessly searching for new
brains in which to propagate themselves.








Want to cite this post?

Gordon, R. (2012). Zombie Philosophy: Is It Coming For Your Brain? The Neuroethics Blog. Retrieved on
, from http://www.theneuroethicsblog.com/2012/11/zombie-philosophy-is-it-coming-for-your.html



Tuesday, November 6, 2012

Exquisite Corpse: Why a Frighteningly Multifaceted Imaginary Creature can Help Tie Neuroscience to Society

Signs of the times: candy corn is on clearance, already-cheap makeup and costumes are further discounted in bins at Wal-mart, and you're wondering when the next occasion will be where it is socially acceptable to dress like a sexy Klingon in public. To add to the post-Halloween zeitgeist, here's a report on a recent zombie-themed neuroethics conference.




AMC's The Walking Dead. Which, if you aren't familiar with it, is about the zombie apocalypse and is watched religiously by all of your friends. From http://blogs.amctv.com


Why Zombies?


One of the themes throughout the conference was the question of what exactly it is about zombies, among all the rest of the undead and other monstrous phyla, that is so... appealing? The speakers often used the AMC series “The Walking Dead” as the zombie example du jour, highlighting the human-zombie interaction that the series excels at as key to piquing our interest. (Dr. Steve Schlozman, MD, writer of “The Zombie Autopsies” and die-hard zombie nerd, made the point that a film about a bunch of zombies and no humans would be rather boring. I might see this as a challenge to David Attenborough.)








Well, at least with most zombie movies it isn't about you.

Image from imdb.com

The point was made (again by Dr. Schlozman) that with zombies, it isn't about you – the fight is impersonal, like fighting against a natural disaster. And that is part of what makes it so frustrating, and what makes us eventually turn against ourselves – in the end, you really can't hate a zombie for what it does, since it is by definition doing it thoughtlessly. This idea of zombies as a fictionalized natural disaster was echoed through numerous hypotheses about which particular societal fears zombie movies and TV shows were helping us, as a culture, deal with (and perhaps the fact that there were so many that fit into this category spoke to the richness of the genre). These included the fear that “science” would enter our bodies and make us material (with a virus turning normal humans into soulless zombies in many versions of the story), the fear of loss of individuality within a community, the fear of death, the fear of the futility of dieting and cosmetic salvage of physical beauty, the disenchantment of the body (as a physical mechanism rather than as a Godly miracle), as a reaction to selfishness through a loss of self, as a background consideration that allows us to imagine what a new post-apocalyptic community would look like, even as a fear of relentless and irrational religious fundamentalism.



In addition to this discussion of what the fear of zombies means about western culture, there was also talk of what zombies themselves could mean when viewed through philosophical and religious lenses. Scott Poole and John Morehead suggested that zombies could be considered objects of religious contemplation in a(n increasingly) secular world (though it was admitted that the zombie apocalypse was in many ways a parody of the Christian vision of the apocalypse). For example, the zombie could be thought of as a depiction of an “anti-angel,” or as the “void,” the sadness that is basic to human existence within certain worldviews. Even the cosplay (or 'costume play') act of the “zombie walk” can be put in a religious context, as a discarding of one's own individuality and personhood in favor of joining a strange new kind of community. In addition to these religious interpretations, Dr. Bob McCauley brought in the famous philosophical view of zombies – entities that look and act human but lack that internal world that you are sure you have, and suspect others have as well. (Note that Dr. McCauley quickly "de-animated" the philosophical zombies using an argument of philosophical zombie slayer Dr. Daniel Dennett.)




Where does Neuroscience come in?


Yes, there was a neurological explanation for zombies presented at the conference – the reader is encouraged to look into the work of Steve Schlozman as well as the Zombie Research Society for the meat on that – but what is more telling about fictional zombies than the details of their hypothesized neuropathology is the fact that the neurological explanation made the most sense for explaining what was happening. No one talked about genetic causes, gamma radiation, aliens, rotting flesh, rigor mortis, or moral/spiritual failings when providing a 'rational' explanation for zombies – somehow, we were all buying the idea that this looked a lot like something on neuroscience's turf (an explanation that has been used to great success by “The Walking Dead”).



From this perspective, zombie-ism is an illustration of a neurological disorder that has cultural, philosophical, and even religious significance. This provides us with a host of neuroethical questions that zombies can help us answer by acting as cultural guideposts. What sorts of brain damage might inhibit or destroy “free will,” “personhood,” or “consciousness”? Zombies are a familiar example where many think those qualities have been destroyed.  How extensive must brain damage be before we consider someone “brain dead”?  The zombie might provide an example of a “high-functioning” individual that is written off as “no longer human” or at the very least “no longer worth protecting” (in a fictional post-apocalyptic setting). What sorts of resuscitation (or even “resurrection”) might be desirable, in the face of traumatic brain injury? Probably not the zombie case (though Fido voices some disagreement on this). What sorts of brain functions are the root of theological concepts such as "the void" and "basic human sadness?" Potentially those are still present in zombies. Beyond just connecting the science to pop culture, this is a way to connect the science to familiar points in human values - through a thoroughly value-laden and often discussed monster.



I look forward to next year's symposium, and between now and then, whether in idle horrific fantasy or in the use of popular memes as guideposts for mapping out societal values, I suggest you consider the zombie.





Want to cite this post?

Zeller-Townson, RT. (2012). Exquisite Corpse: Why a Frighteningly Multifaceted Imaginary Creature can Help Tie Neuroscience to Society. The Neuroethics Blog. Retrieved on
, from http://www.theneuroethicsblog.com/2012/11/exquisite-corpse-why-frighteningly.html

Tuesday, October 30, 2012

Teaching Intersex, Teaching Interdisciplinarity: Interview with Sara Freeman








Sara Freeman

Graduate Student

Department of Neuroscience

Emory University

In this post, I would like to highlight the work of another
Emory graduate student, Sara Freeman. Just when Cyd Cipolla and I (in the
Department of Women’s, Gender, and Sexuality Studies) were coming up with our
plan to teach an interdisciplinary course bringing together gender studies and
neuroscience, we found out that Sara (in the Neuroscience Graduate Program) was
developing her own interdisciplinary course bringing together developmental
biology and the sociology of gender.





Sara’s course, which she is teaching this semester, is
called “Intersex: Biology & Gender,” and is cross-listed in the departments
of Biology, Sociology, and Women’s, Gender, and Sexuality Studies. “Intersex”
is a general term used for a variety of conditions in which a person is born with
physical reproductive or sexual characteristics that cannot be easily
classified as male or female (for more information, visit the Intersex Society of North America or the
American Psychological Association’s page on intersex). FYI: October 26th was Intersex Awareness Day! In Sara’s course, she is teaching about both the developmental biology of
intersex in humans and the social, political, legal and ethical issues related
to intersex.






I wanted to interview Sara about her course because I see
her work as highly relevant to the field of Neuroethics. First, Neuroethics benefits from interdisciplinary collaboration between scientists (especially neuroscientists) and researchers in the social sciences and the humanities; by including material from the sciences, social sciences, and humanities, and by bringing together students from all of these fields, Sara’s course fosters exactly the kind of interdisciplinary collaboration that Neuroethics needs. Second, Sara’s course is encouraging her students to grapple with
important neuroethical and bioethical questions, including ethical issues
related to the medical treatment of intersex individuals (see Dreger for a
review) and ethical issues related to the use of intersex individuals as
research subjects in scientific studies on sex/gender development. Read on to
find out more about Sara’s course!





Question: What inspired you to teach the course?








Image from "One in 2,000" (used with permission)

Almost a year and a half ago, a friend of mine showed me a
short documentary called “One in 2,000” which is about intersex in humans [for
info about the film, visit this link]. I was surprised
to learn how prevalent intersex is, and as a graduate student in the biomedical
sciences, I was even more surprised to learn that the most common “treatments”
for these conditions include surgical intervention to “normalize” ambiguous
genitalia (often performed on newborns), gonad removal, and lifetimes of
hormone therapy. I was even more outraged to learn that in the past, these
procedures were oftentimes performed without full disclosure of medical
information to the parents or to the patients themselves. Later in the video, as
I watched dozens of names of intersex conditions drift across the screen, my
outrage began to shift to embarrassment, because I only recognized four of
them! Because my undergraduate and graduate training has been in reproductive
biology, sexual behavior, and neuroscience, I expected that I would be fairly
knowledgeable about the hormones, receptors, and developmental pathways that
lead from an embryo through stages of sexual differentiation during
development, and finally into a reproductively-capable and arousable adult man
or woman. At that moment in time, my apparent intellectual ignorance of this
topic, my outrage at the ethics of medical authority, and my growing interest
in gender studies started to work together as a motivating force. I started
doing some research and quickly learned that although many of the biological
causes for intersex are well described, there remains a considerable lack of
social awareness and understanding of intersexuality. With this in mind, I
decided to draft a syllabus for an interdisciplinary undergraduate course about
intersex in humans.





Question: When you originally designed the course, what did
you want your students to learn from it? How did you plan to make the material
accessible to students from the sciences, social sciences, and humanities?





I designed this class as a cross-disciplinary introduction
to both the developmental biology and the sociology of intersex. It examines
how biologically an intersexed body might develop and what the existence of
these bodies means for the binary concept of gender in our society. I wanted to
pose the question to my students, “What is a male and what is a female?” and to
draw answers from a variety of disciplines. Thus, the course includes topics
from the biomedical sciences, like chromosomal biology, embryology,
endocrinology, and anatomy, as well as topics from sociology and the humanities,
such as gender theory, the role of gender in our society, relevant legal,
medical, and ethical issues, and what the study of intersexuality can glean
from historical movements such as feminism.





It was very important to me that the course be cross-listed
in departments from different areas of academia, and to my great delight, it
was listed in the Biology department, the Sociology department, and the
department of Women’s, Gender, and Sexuality Studies. I hoped that by studying
medical ethics, the biology of human sexual development, and gender theory and
sexuality, students from any academic background would appreciate the natural
individual sexual variation in both human bodies and behaviors. Gaining these
new perspectives relies on the unique level of discussion and engagement that
results from a classroom of students from diverse disciplines who would
otherwise not interact academically. For example, pre-med students from the Biology
department would learn about the gender-related issues in the practice of
medicine, which is a discipline that tends to uphold the binary approach to sex
and gender. These pre-med students will undoubtedly have to make important
decisions regarding sex and gender later in their careers as physicians, and it
is critical for them to start discussing these concepts with students outside
of their discipline. Similarly, non-biology majors from the Sociology or
Women’s Studies departments would learn the specifics of the genetic, hormonal,
and developmental basis of the human sexual variation that their disciplines
have been studying and celebrating for decades.





At that point, as your question reflects, the challenge for
me was to make sure that the material was accessible to the students. I hoped
to accomplish this in a few different ways. First, I established an online,
collaborative glossary and encouraged the students to keep track of each word
in the course material that they’ve never seen before and to add those words to
the collective glossary, along with a brief definition. I hoped that this
exercise would attack the common and often reflexive feeling of stupidity that
accompanies being exposed to a term or idea of which one was previously
ignorant. In this way, as the students completed the reading assignments each
week, they could celebrate and share the new words they learned. Right now, we
have over 140 entries in the glossary, and almost all of my students are participating!



Second, I acknowledged that I cannot be an expert in every
topic, and I invited several guest instructors from various departments to help
teach the course. These incredible individuals include: Kara Kittelberger
(Neuroscience), Jordan Kohn (Neuroscience), Kevin Watkins (Neuroscience), Katy
Renfro (Psychology), Anson Koch-Rein (Institute of the Liberal Arts), Aimi
Hamraie (Women’s, Gender, and Sexuality Studies), Dr. Kim Wallen (professor in Psychology),
Dr. Briana Patterson (pediatric endocrinologist from Emory Hospital), Dr. Pat
Marstellar (director, Center for Science Education), and Dani Harris, an
intersex person and advocate from the Atlanta community (Atlanta Police
Department). During the planning and conceptualizing phases of my class, I also
met with several other generous graduate students (Kristina Gupta, Cyd Cipolla,
Mairead Sullivan, Pii Dominick), instructors (Dr. Kevin Cryderman, Dr. Sara
McClintock, Dr. Deboleena Roy, Dr. Irene Browne, Dr. Tracy Scott), and staff
members (Sasha Smith, Danielle Steele). Without these people, my class would
not exist.







Lastly, in order to make two of the more challenging
biological concepts accessible for all students, I collaborated with Matt
Gilbert (web developer, graphic designer, instructor, and the friend who first
showed me the “One in 2,000” documentary mentioned above!), to create two online,
interactive “games”. The first is a meiosis game that allows students to
interact with the X and Y chromosomes as they replicate and divide during the
creation of eggs and sperm. Students can directly observe how variation in this
process can result in an embryo with a set of sex chromosomes other than the
expected XX or XY. The second online tool is a steroidogenesis game that
outlines the complex enzymatic pathways that synthesize steroid hormones, such
as testosterone and estrogen. Students can visualize how a body’s enzymes build
these hormones and can interact with that process to test their knowledge. By
making difficult biological pathways interactive, we improved the accessibility
of these core biological concepts for all students.
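For readers curious about the biology behind the first game, here is a minimal sketch in Python of the process it lets students explore. To be clear, this is not Sara and Matt Gilbert's actual game, just an illustrative toy model under simplifying assumptions: it tracks only the sex-chromosome pair, models only meiosis I nondisjunction, and the nondisjunction rate is an arbitrary placeholder.

import random

def gamete(pair, nondisjunction_rate=0.01):
    # Normally the chromosome pair separates and the gamete receives one
    # chromosome. With a small (placeholder) probability the pair fails to
    # separate (meiosis I nondisjunction), so the gamete gets both or neither.
    if random.random() < nondisjunction_rate:
        return random.choice([list(pair), []])
    return [random.choice(pair)]

def offspring_karyotype():
    # Combine an egg from an XX parent with a sperm from an XY parent.
    chromosomes = gamete(("X", "X")) + gamete(("X", "Y"))
    return "".join(sorted(chromosomes)) or "none"

counts = {}
for _ in range(100_000):
    k = offspring_karyotype()
    counts[k] = counts.get(k, 0) + 1

# Mostly XX and XY, plus rare variants such as X (as in Turner syndrome),
# XXY (as in Klinefelter syndrome), XXX, and gametes with no sex chromosome.
for k, n in sorted(counts.items(), key=lambda item: -item[1]):
    print(k, n)

Running the sketch yields mostly XX and XY offspring with occasional variants, which is the point the game makes visually. The steroidogenesis pathway in the second game could presumably be encoded in a similar way, as a mapping from each precursor hormone to the enzyme and product downstream of it.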







The "meiosis game" Sara and Matt Gilbert created for the course

HHMI Howard Hughes Medical Institute Grant No. 52005873







The "steroidogenesis game" Sara and Matt Gilbert created for the course

HHMI Howard Hughes Medical Institute Grant No. 52005873



Question: How has the course been going so far? Have you
been surprised by anything that has happened?







The course has been going wonderfully so far, in my opinion.
The guest lecturers have been engaging, the students have been open and
inquisitive during class discussion, and there has been tremendous support from
the Emory community. As far as surprises, I think my own biggest personal
surprise was that I haven’t really felt nervous. I was expecting to have to
deal with some nerves each day before class, but I find myself comfortable in
front of a classroom, which was definitely a pleasant surprise! I have also
been pleasantly surprised at the great level of maturity that the students have
exhibited in class. I imagined that because our discussion topics were often
going to involve details about the genitals and sexuality, it might be challenging
for students to feel comfortable saying some of these words out loud. But they
have been very mature and confident and have handled the course material better
than I ever expected!





Question: Do you have any advice for other instructors who want to
teach interdisciplinary courses?





Collaborate. Don’t be afraid to reach out to a distant
department. Speak freely about your ideas. There are no dumb questions.
Acknowledging the intellectual areas in which you feel weakest will likely
result in a plethora of knowledge and support, which will overwhelm any potential
inadequacies that may accompany that acknowledgement. It’s ok not to know the
answer. Spend time following tangents. Be prepared to have the foundations of
your knowledge shaken. Welcome alternative viewpoints. Learn to speak using
unencumbered and discipline-free language, and if you find a situation where
that is too difficult, be prepared to explain your word choice and its
foundations to others.





Question: Has teaching this course influenced your current research in
any way or your plans for future research? Has it influenced how you think
about neuroscience research on sex and gender?





Yes, this course has definitely changed the way I think
about neuroscience research about sex and gender. I’m much more critical of the
field in general, and I’m also more aware of the misuse of the word “gender” to
describe animals in a research study. Also, after learning so much about intersex
subjectivity and medical ethics, I considered shifting my career track
entirely! I felt compelled by the idea of studying the issues surrounding why
many intersex individuals are lost to follow-up in the medical community, and I
thought about applying for a post-doctoral position at the Kinsey Institute at
Indiana University. Instead, I decided to accept a post-doc offer to continue
neuroscience research on peptide hormones and primate social behavior at UC-Davis
next summer. While my research aims for the immediate future don’t include any
specific plans to study intersex or gender, I definitely plan on teaching my
intersex course in the future as often as I am able to.






Want to Cite This Post?


Gupta, K. (2012). Teaching Intersex, Teaching Interdisciplinarity: Interview with Sara Freeman. The Neuroethics Blog. Retrieved on
, from http://www.theneuroethicsblog.com/2012/10/teaching-intersex-teaching.html

Friday, October 26, 2012

Who is to blame when no one is to be praised?

Let’s imagine for a moment that I am extraordinarily brilliant, but my brilliance is not due to my own hard work nor is it due to the wonderful instruction I have received; rather my brilliance is due to the fact that I was born with gene X. Let’s further imagine that the effects of gene X are robust. That is, the effects of gene X (my extraordinary brilliance!) are largely insensitive to environmental variation and developmental course. As long as some minimal conditions of life are met, having gene X guarantees that I will be exactly as brilliant as I in fact turned out to be.



Question: Who is to be praised for my brilliance?






The glasses pair well with the genes.







If your intuition is similar to mine, your answer is probably something along the lines of: “No one, really. Your genius appears to have been innately determined, in such a way that it didn’t really depend on your own actions or efforts, or on those of other people. Neither you nor anyone else really deserves praise for that.”



As you can tell from my answer, I think that this ‘no praise’ intuition is driven, at least in part, by my classifying the development of the characteristic as innately determined in such a way that it is largely insensitive to environmental variation.



But here is the kicker: According to a new and provocative paper by Joshua Knobe and Richard Samuels, people are significantly less willing to classify an attribute as innate when the attribute is negative than when the attribute is positive. This is true even when the scenario that leads to the negative attribute is described as being the result of a gene that is largely insensitive to environmental conditions.



Even more surprising is that this effect held true even for scientists who were actively working as researchers in fields that regularly used the concept of innateness (e.g., biology, psychology)!



But, of course, the badness or goodness of a characteristic is irrelevant to whether that characteristic is innate.



But the problem may be even worse. If praise/blame attributions are moderated, at least in part, by innateness judgments, then our praise/blame judgments would be asymmetrical for reasons that are unjustifiable.



But not all news is bad news.



When people viewed both the “bad” and the “good” version of each vignette prior to making the innateness classification, people were able to recognize that the goodness and badness of the characteristic was irrelevant, and, on average, the asymmetry in innateness classifications disappeared.



What does this all mean?



Well, these findings suggest another potential strike against the idea of the “objective juror” (see related posts by Cyd Cipolla here and here). If (certain degrees of) innateness should mitigate responsibility, and if we are biased toward not seeing negative characteristics as innate, then we will be biased away from mitigating responsibility in criminal cases even when we should.



This research also suggests a way to guard against potential biases. If a lawyer or judge is worried that a particular piece of information might bias jurors, the lawyer or judge may only need to provide the appropriate contrast case in order to get jurors to recognize that a particular piece of evidence may be irrelevant.



Want to cite this post?

Shepard, J. (2012). Who is to blame when no one is to be praised? The Neuroethics Blog. Retrieved on
from http://www.theneuroethicsblog.com/2012/10/who-is-to-blame-when-no-one-is-to-be.html


Wednesday, October 24, 2012

Zombies and Zombethics! Symposium

Zombies and Zombethics! Walking with the Dead: An Ethics Symposium for the Living on Halloween 2012.









This symposium will feature a Zombie Braains Inspired Panel (among many other exciting panels; see the Zombethics Tab for more info), inspired by The Walking Dead clip below. The panel will include a stellar group of virtual and local speakers, followed by a moderated discussion.



If you already RSVP'd, lucky you! If you didn't RSVP, see how to get on the waiting list below. The Zombie Walk at 11:30 am is open and does not require registration. Contact Alison Kear (akear@emory.edu) for details.



The Walking Dead: Season 1 Episode 6: Test Subject-19 Brain Scan of Transformation









GUEST PANELISTS










Also joining us will be our local friends of the Neuroethics Program:




  • Eddy Nahmias, PhD. Neurophilosopher and Associate Professor of Philosophy at Georgia State University. Dr. Nahmias will discuss the relationship between the brain and free will.








  • Darryl Neill, PhD. Professor of Psychology at Emory. Dr. Neill will discuss the psychological underpinnings of zombiehood.






The session will be moderated by Emory's Neuroethics Program Director, Dr. Karen Rommelfanger.





This event is hosted by the Center for Ethics and organized by the Neuroethics Program and the Religion and Public Health Ethics Program.







The event is free, but seating is limited and attendance will be by RSVP only! Seating is now by waiting list only. Contact Alison Kear (akear@emory.edu) to be added to the waiting list.











For the full day's events, please click the Zombethics Tab.

Wednesday, October 17, 2012

The Military and Dual Use Neuroscience, Part II

In a previous post, I discussed several promising neuroscience technologies
currently under investigation by the military. Simply knowing that such
technology exists, however, does not in itself dictate a way forward for
neuroscientists and others who are concerned about the possible consequences of
military neuroscience research. In part, the complexity of the situation
derives from the diversity of possible viewpoints involved: an individual’s
beliefs about military neuroscience technology likely stem as much from beliefs
about the military in general, or technological advancement in general, as from
beliefs about the specific applications of the neuroscientific technologies in
question.








Star Trek's Commander Spock was generally ethical in his personal use of directed energy weapons, but not all of us are blessed with a Vulcan’s keen sense of right and wrong (Image).  




With that in mind, I think there are at
least three distinct angles from which an individual might find him or herself
concerned with respect to military neuroscience research. First, there are
those who are opposed to neuroscientific advancement in general due to its
potential implications for identity, moral responsibility, and human nature. Second,
there are those who may be amenable to scientific advancement in general, but
who distrust the military and are therefore suspicious of any technology that may
improve its combat capability. Finally, there are those who have no a priori problem with either the
military or technological advancement, but who have concerns about the ways
that particular technologies may be deployed within a military context.






Technological Skeptics


Perhaps the most well-known skeptic of biotechnological advancement is political scientist Francis Fukuyama, known largely for his 1989 prediction that the fall of the Soviet Union would usher in the “end of history.” While Fukuyama does not oppose all biotechnological advancement, his 2002 book Our Posthuman Future: Consequences of the Biotechnology Revolution raises concerns about a number of particularly revolutionary
biotechnologies. For Fukuyama, belief in human equality depends upon a consensus
that there are certain essential qualities that unite all human beings. Modern
liberal societies, he argues, are notable in that they attribute essential
humanness to a range of persons – women, for instance, and racial minorities –
to whom such respect has historically been denied. Fukuyama’s fear is that
novel biotechnologies may undermine our collective belief in essential
humanness, and consequently the philosophical basis for political equality. One
area of particular concern for Fukuyama is neuropharmacology, where he predicts
the development of “sophisticated psychotropic drugs with more powerful and
targeted effects” with the capacity “to enhance intelligence, memory, emotional
sensitivity, and sexuality.” Many of Fukuyama’s arguments involve potential
inequalities generated through genetic engineering, but it is not difficult to
extend his logic to many neurotechnologies – for instance, brain-controlled prosthetic limbs,
or transcranial magnetic stimulation – currently under investigation by the military.








Francis Fukuyama, author of Our Posthuman Future 


Fukuyama, and others who hold similar belief
systems, tend to oppose a wide range of military and civilian biotechnologies
rather than military neuroscience per se.
 Such viewpoints, however, hold particular implications for military research if only because DARPA tends to be very, very good at what it does. Historically,
the military has played key roles in the development of microchips, cell
phones, GPS, and the Internet.
This is no coincidence: compared to other research programs, DARPA researchers
enjoy a number of unique advantages.  In Mind Wars, bioethicist Jonathan Moreno points out
that “in the DARPA framework decades of development are acceptable” and that,
according to the DARPA strategic plan, “its only
charter is radical innovation.” That DARPA’s innovation is of this radical kind should constitute a point
of concern for those who believe that neuroscientific research is already
progressing too quickly. While private pharmaceutical firms might develop drugs
targeted to specific symptoms or diseases, DARPA has a particular incentive to
invent drugs that produce what might reasonably be called “superhumans”: drugs
that make humans more intelligent, that increase human endurance and stamina,
or that substantially enhance human physical performance.





Constraining The Military


Other concerns
stem from the nature of the military itself: as an institution that uses force
to achieve political ends, it is worth asking whether military neuroscience
should be avoided insofar as it renders the military better capable of
achieving these ends.





One area of
concern relates to privacy, where several new technologies suggest a fundamental shift in the nature of state surveillance. In
my first post on military neuroscience, I discussed the Veritas TruthWave EEG
helmet and fMRI lie detection as two especially notable surveillance technologies. While both technologies have significant drawbacks – accuracy in the
case of the EEG helmet, and usability in the case of fMRI – such advances may
nevertheless have worrisome implications. An article on TruthWave raises one relevant concern regarding false positives: “When a person’s life or freedom is at stake, what is an acceptable margin of error?” In at least some cases – drone strikes on suspected terrorists, for instance, in which “all military-age males in a strike zone” are assumed to be enemy combatants – the military has demonstrated a tendency to set the “acceptable margin of error” higher than it might be in, say, a court of law. The present generation of mind-reading technologies occupies a grey area: likely accurate enough to be more reliable than intuition, but not so accurate as to reliably avoid false positives. In conjunction with high-stress combat situations or high-priority military objectives, this grey area may become particularly difficult to negotiate.




Further, it is possible that some innovations in military neuroscience may lower the cost of warfare, increasing the likelihood of intervention and violent conflict. In a 2005 article, Arizona State engineering and ethics professor Brad Allenby suggests that “military prowess, embodied in incredibly potent technological capabilities, acts like a drug, leading to dysfunctionally oversimplistic policy choices,” citing the Iraq war as a prominent example. Innovations in remote-operated robotic weapons – commonly known as UAVs, or “drones” – have already increased the U.S. military’s propensity to engage in combat operations. As one Washington Post editorialist puts it: “The detachment with which the United States can inflict death upon our enemies is surely one reason why U.S. military involvement around the world has expanded over the past two decades.”[1] Military neurotechnology seems poised to reinforce and accelerate this trend. Neurologically-controlled robots, for instance, may drastically improve the effectiveness and flexibility of current UAVs. More broadly, it seems likely that a war we are more likely to win is also a war that we are more likely to fight: in that sense, cognitively-enhanced soldiers, innovative prosthetics, and improved surveillance may all foreshadow a similar future involving a more powerful, more active U.S. military.   







UAVs, or “drones,” have likely decreased the U.S. military's threshold for global interventions (Image)

Legality And Other Issues


Many problematic aspects of new military
neuroscience technologies derive simply from the fact that they are new. Consequently,
many current international legal frameworks designed to constrain warfare may
not apply to new neuroscience technologies.





Concerns of this sort are discussed at length in the U.K. Royal Society report. For instance, newly-developed pharmaceuticals may have uses in the interrogation of suspected terrorists or prisoners of war. Coercion of POWs is outlawed by the Geneva Convention, as are medical procedures contrary to “the rules of medical ethics.” The extent to which the Geneva Convention applies to terrorists, however, has been a matter of dispute.
Given recent reports of coercive pharmaceutical use at Guantanamo Bay,
concerns of this nature are particularly salient.







As is often the case, key questions of ethics and policy hinge on exactly how similar our universe is to “24” (Image). 


The deployment of novel chemical
incapacitants in combat raises complex legal questions as well. The Chemical Weapons Convention (CWC), an
international agreement ratified by the U.S. in 1997, bans the use of chemical
weapons in conflict. At times, however, the CWC can be ambiguous: according to a U.K.
Royal Society report on neuroscience and conflict, parts of the CWC may be interpreted to allow “the use
of toxic chemicals to enforce domestic law extra-jurisdictionally or to enforce
international law.” Insofar as international law and extra-jurisdictional enforcement of domestic law are themselves open to interpretation, the CWC may leave the door open to some offensive uses of chemical weapons.





Finally, the use of cognitive-enhancing drugs to aid friendly soldiers is complicated by a provision of the
Uniform Code of Military Justice requiring servicemembers “to accept medical
interventions that make them fit for duty.”
 On one hand, novel enhancement drugs might provide safer alternatives to
currently-employed stimulants such as amphetamines,
thereby lowering the health risks to military personnel. Alternatively,
however, safer drugs could lower the costs associated with coercive drug employment,
resulting in a lower threshold for coercion of military servicemembers.





It’s Not All Bad


Depending on your perspective, of
course, what I have called “concerns” may in fact be substantial benefits. I'm certain that leading transhumanist Nick Bostrom
sincerely looks forward to brain-computer interfaces, and that pro-military foreign policy analyst Robert Kagan wholeheartedly supports technologies that sustain U.S.
military dominance. More generally, neurologically-enabled prosthetics have obvious
benefits for disabled persons,
and civilian spin-offs of many military neurotechnologies hold considerable
medical and commercial promise.  







A DARPA-produced prosthetic arm (Image)


Nevertheless, there is a large enough breadth of military neuroscience research, and a similarly diverse set of ideological positions from which to criticize it, that any given individual is likely to find something in DARPA that they consider worthy of concern. Coming to such a conclusion, however, is only a first step, and the second step – “what to do about it?” – does not have a simple answer. To a large extent, those who publish on the
implications of dual-use neuroscience technology have made similar suggestions.
The Bulletin of the Atomic Scientists, for instance, recommends greater interdisciplinary
discussion on dual-use technology, and a 2003 Nature editorial suggests that “researchers should perhaps spend more time pondering the
intentions of the people who fund their work.” While these publications are
certainly correct in identifying a need for additional discussion on dual use
technology, I’ve personally found myself frustrated at the lack of specific,
practical suggestions in much of the literature. 







It is Jonathan Moreno who, to my knowledge, has offered the most concrete suggestions for coping with the implications of military neuroscience. In Mind Wars, Moreno warns neuroscientists against rejecting all military funding, arguing that such a position would only push DARPA research into greater secrecy. If military neuroscience research is inevitable, he reasons, it is better for it to be done transparently (and therefore in some sense democratically) than for it to be done underground, where it cannot be criticized simply because it is not known about. Instead, he recommends that neuroscientists attempt to influence public discussion on military neuroscience in a similar fashion to scientists who opposed nuclear weapons use during the Cold War. From a regulatory perspective, Moreno further recommends the creation of a “neurosecurity equivalent to the National Security Advisory Board for Biosecurity,” a committee that would include scientific input from a range of disciplines, including neuroethics. Moreno is also optimistic that at least some in the defense industry might take ethical issues seriously, citing one defense contractor who has “had ethics advice from the very beginning [of a brain-machine interface project].” 





I don't agree with all of Moreno's suggestions – I think, for instance, that there's something to be said for the symbolic value of refusing to work on a project that you disagree with – but I admire his attempt to provide concrete solutions to an issue that defies simple analysis. In a 2012 article in PLoS Biology, Moreno and Michael Tennison suggest that “discussions in themselves will not ensure that the translation of basic science into deployed product will proceed ethically... These considerations must be embedded and explored at various levels in society: upstream in the minds and goals of scientists, downstream in the creation of advisory bodies, and broadly in the public at large.” Whether and how the ethics of military neuroscience are “embedded and explored” have yet to be determined, but the answers to these questions will have significant implications for the control and ultimate consequences of military neuroscience research.






[1] Though it’s worth noting that the author, John Pike of Globalsecurity.org, considers this to be basically a good thing, writing that “the large-scale organized killing that has characterized six millenniums of human history could be ended by the fiat of the American Peace.”





Want to cite this post?

Gordon, R. (2012). The Military and Dual Use Neuroscience, Part II. The Neuroethics Blog. Retrieved on
, from http://www.theneuroethicsblog.com/2012/10/the-military-and-dual-use-neuroscience.html

Tuesday, October 9, 2012

Neurolaw: Brains in the Courtroom











Regular readers of this blog know we often touch on issues
about law and neuroscience: whether 
it’s about crime, the lie detection seminar Emory hosted last spring, or
work on ethics and free will. (Also, spoiler alert, neurolaw is to be the focus
of our next journal club meeting – please come!) The field of neurolaw, which is exactly what it sounds like – neuroscience and law – has been growing rapidly over the past decade. Most of the discussions in neurolaw focus on how, and if, new discoveries in neuroscience will affect legal definitions of responsibility and culpability by changing the way we understand how the decision to commit a crime is made. However, in the past year there have been several studies looking at another side of brains in the courtroom: that is, the neuroscience of
judgment itself. These studies are exploring how people consider evidence and how they balance moral and ethical decisions against empathic and sympathetic reactions. This new work opens up new avenues for interventions from neurolaw and neuroethics around the construction and use of institutions like the judge and the jury.




Science says: Lock 'em up.

(image courtesy of Special Collections, University of Houston Libraries)






Although I want to focus here on what I think is a new area of neurolaw, I’ll begin with a recent study that exemplifies the sort of work that is traditionally considered in the field. In August of 2012, Science published an article by Lisa G. Aspinwall, Teneille R. Brown and James
Tabery of the University of Utah titled “The
Double-Edged Sword: Does Biomechanism Increase or Decrease Judges' Sentencing
of Psychopaths?”
This study focuses on the sentencing portion of a criminal
trial, where judges decide how to punish a person who has already been convicted.
They weigh aggravating factors (basically evidence that the person should get a
longer sentence) against mitigating factors (basically, evidence that the
person should get a shorter sentence.) In this study, researchers gave 181
trial judges a hypothetical case (based on a real case,
Mobley v.
State
)[1]
where the convicted person had been diagnosed with psychopathy. All judges
received the same psychiatric testimony of diagnosis, but some were also given
additional proof of psychopathy in the form of “expert testimony from a neurobiologist who presented an
explanation of the biomechanism contributing to the development of psychopathy
(here, low MAOA activity, atypical amygdala function, and other
neurodevelopmental factors).” (846) Judges who received the version with the
additional biomedical information were more likely to list mental illness or
psychopathy as a mitigating factor. One judge, quoted in the article, said that
the biomedical evidence “makes possible an argument that psychopaths are, in a sense, morally 'disabled' just as other people are physically disabled.”
(847) Judges who received the additional information gave sentences that were,
on average, a little over a year shorter.





This study touches on a concept I talked about last month – namely that there is a distinction between innate and acquired mental disorders when it comes to sympathy and empathy. However, unlike what I talked about in that post, this study shows that perhaps it is not the origin of a behavior that is significant, but how tangible the evidence for that behavior is. It is telling that the anonymous judge quoted in the article described psychopathy as a moral disability rather than a moral disorder, treating morality as a capacity that can be limited by a physical disruption, much as a limb can be. Although the results don’t seem that significant – remember that this study wasn’t about deciding guilt – it is
important to note, as the authors themselves do, that the crime they discuss
was a particularly violent one and the assailant was presented, in all versions
of the case, as entirely lacking in empathy or remorse. Taking that into
consideration, the one-year difference in sentencing is a bit more significant,
and it is possible that there would be even larger discrepancies in cases that
are less violent or less reprehensible. I am interested to see if further
work is done.








Image by Abu badali, based on AIGA's public domain icons.


Where Aspinwall et al. focused on the impact of
different types of evidence on judgment, the other two studies I want to
highlight looked at the neuroscience of judging itself. In February of 2012, Social
Cognitive and Affective Neuroscience

published an
article
from a team at George Mason University about the role of oxytocin
in the perception of crime. A group of male subjects were given oxytocin (or
placebo) and then asked to read a series of descriptions of crimes.[2]
They were then asked, as unaffiliated third-parties, to rate the crimes in
terms of the degree of harm caused to the victim and then rate how much the
offenders deserved to be punished. 
The researchers found that those administered oxytocin were more likely to see
increased harm to the victims but that this did not significantly impact how
much they thought the assailants should be punished.[3] The
authors of the study emphasized that it was important that the subjects saw
themselves as an uninvolved third-party – first, because this removed the
confounding factor of personal relationship to either victim or perpetrator,
and second, because it more closely mimicked the circumstances of a jury trial.








Clarence Darrow, lawyer most likely to make all sorts of borderline prejudicial remarks (image from the Library of Congress, in the Public Domain)

In a similar study, published in the March 2012 issue of Nature
Communications,
a team of researchers in
Japan used fMRI
machines to measure neural activity
in people while they were making
decisions about punishing a convicted murderer. The study was designed to
examine how arguments designed to elicit sympathy mapped onto brain regions
and, in turn, how those were utilized in the decision making process. The
researchers found that, in fact, brain activity was consistent with regions
previously associated with sympathy and that the sympathetic response was
correlated with shorter sentences. This left the research team with the
following question: Although judges often instruct jury members to disregard
certain prejudicial remarks while making their decisions, such as pleas for
sympathy or comments not admitted to the record, is it reasonable to ask people
to do such a thing?





These studies open a possibility within neurolaw for an
examination of the institution of the jury trial, raising important ethical
questions about how we, as citizens, make moral judgments, and the level of
conscious control we can have over our sympathetic reactions. The jury trial is
a sacred institution within United States law, and for good reason. Yet so far, the nascent field of
neurolaw has focused almost exclusively on the impact neuroscientific evidence
will have on the courtroom in terms of how it reframes criminal actions. Neuroscience, and most importantly neuroethics, is giving us more information
about morality in general. What will happen to the jury trial if it is found
that people cannot lay aside urges for sympathy, no matter how they are instructed? Or what if it turns out we are able to judge situations in which we are an
uninvolved third-party with more reason than we do situations where we have
personal involvement, regardless of the level of empathy we may feel for the
persons involved? How will this work change the rules of evidence and criminal
procedure, if at all? And should they?





Want to cite this post?


Cipolla , C. (2012). Neurolaw: Brains in the Courtroom. The Neuroethics Blog. Retrieved on
, from http://www.theneuroethicsblog.com/2012/10/neurolaw-brains-in-courtroom.html












[1] In 1995,
Stephen Anthony Mobley was found guilty of the murder of John C. Collins. The
case was one of the first to introduce the concept of a dysfunctional MAO-A
gene as a factor in the courtroom. Mobley was sentenced to death and executed
in 2005. The hypothetical case used some of the same descriptive details,
(e.g., the assailant attacked his victim during a restaurant robbery) but,
importantly, was not a murder conviction, and thus, not a capital crime.




[2] They chose men because previous
investigations have shown that men frequently score lower on standard tests of
empathy than women.




[3] One final
point about this study that was interesting to me – although it seems, on the
surface, to be more relevant to a study of how potential jurors make decisions,
one of the recommendations based on the findings is for potential treatment of
psychopaths.