
Tuesday, October 28, 2014

What is uniquely human? A report from The Social Brain Conference




Photo credit: Anders Gade

By James Burkett



James Burkett is a 2014 recipient of the Emory Center for Ethics Neuroethics Travel Award. He is a graduate student in Emory's Neuroscience program, conducting research on social attachment and empathy in Dr. Larry Young's lab.








This October 5th through the 8th, I had the pleasure of attending the Federation of European Neuroscience Societies’ (FENS) biannual Brain Conference, held in Copenhagen, Denmark. FENS represents 42 neuroscience societies in 32 countries and is the primary organization for neuroscience in Europe. The conference, titled “The Social Brain,” focused on how the brain produces and is affected by social behaviors in humans and in animals. Chaired by the eminent scientists Sarah-Jayne Blakemore (Director of University College London’s Institute of Cognitive Neuroscience), Frans de Waal (world-famous primatologist at Emory University), and Giacomo Rizzolatti (discoverer of mirror neurons at the University of Parma, Italy), the conference brought together a wide array of human and animal researchers at the top of their fields. Throughout the conference, this mixed group kept returning to the same question: what is it that makes humans different from animals? What is uniquely human? Like a sculptor, the conference seemed to answer this question by chipping away at the monolith of things commonly thought of as unique to the human species.





For a long time, humans were thought to be unique for their tool use [1,2]. However, many surprising examples of tool use have now been seen in animals. Chimpanzees are now known to fashion weapons for hunting and to use tools for nut cracking and termite retrieval, and they are sometimes seen carrying favorite tools over great distances [1]. Even this behavior is not unique to apes, however: New Caledonian crows also craft and use tools for grub retrieval, and even have local tool-making traditions that they pass on to the next generation [2]. There are now many internet videos showing crows solving extremely complex tasks with available tools.




Several speakers showed that the human species is not unique in its ability to cooperate and to understand cooperative relationships [1,3,4]. Chimpanzees, for instance, are perfectly capable of learning cooperative tasks without training, and they even spontaneously develop individual styles, preferred partners, reputations, and feedback between partners on their choices [1]. They may do this through the use of specialized “mirror neurons,” which are present in motor planning and emotional areas of the brain and fire both when an action or emotion is being experienced and when it is being observed in others [3,4]. These mirror neurons were first discovered in rhesus macaques, but have since been found in humans and chimpanzees. Elephants readily learn cooperative tasks as well, even waiting for their partner to arrive when a task is presented that cannot successfully be performed alone [1]. Even more distant from humans was a striking example of inter-species cooperative hunting between groupers and moray eels, in which groupers show signs of shared intentionality and referential gesturing in order to get moray eels to help them catch fish [5]. Tiny five-gram cleaner wrasses, which have more than 2,000 inter-species social interactions a day while cleaning parasites off of other fish, show signs of cooperative strategies, individual recognition, social prestige, audience effects, tactical deception and reconciliation.





If an animal naturally cooperates, it should also be sensitive to the results of cooperation, and there is now ample evidence that this is the case. In a now-famous experiment, capuchins were shown to be sensitive to unfairness: when rewarded for a task with cucumber, they eagerly accept it; but when they see another capuchin rewarded for the same effort with a grape, they bitterly refuse the cucumber [1]. This sensitivity to inequality has since been seen in many other species, including dogs and birds, and may be a general and necessary behavior in cooperative species. Chimpanzees take this one step further by showing a sense of fairness as well, choosing to reward others when there is no benefit to themselves, and sometimes refusing a reward unless a partner is also rewarded.








Photo credit: Anders Gade




Social animals also employ tactics very similar to those of humans in maintaining their relationships [1,6]. In social groups, conflicts are inevitable, and for many years ecologists thought that conflict served to break bonds of attachment and disperse social groups. However, more recent research has demonstrated time and again, in many species, that social group members actively reconcile after fights, which actually brings group members together and strengthens bonds. Risky pro-social interactions with new group members may also help form new social bonds [6]. In some species, consolation happens after fights as well, typically initiated by individuals with close relationships to the loser. These consoling behaviors have been seen in many great apes, dogs, birds and elephants. In addition, recent experiments in rats show that they are motivated to help trapped cagemates escape from an enclosure; once they learn to do so, they will release them very quickly, even if they receive less of a food reward for doing so [7]. Finally, the same brain neuropeptides that mediate social behavior and social bond formation in animals seem to influence human relationships as well, suggesting a common evolutionary origin for these mechanisms in the brain [8].





In a talk presenting my own thesis work, I discussed
evidence of consolation in the prairie vole, a highly social monogamous rodent
[9]. Through a series of laboratory experiments, I demonstrated that male
prairie voles will show increased partner-directed grooming toward a male
sibling or female partner if that individual has been exposed to stress. I also
showed that observing their stressed cagemates induces stress and anxiety in
male prairie voles, suggesting that their behavior is based on empathy.





In order to console one another, animals must first be capable of detecting each other’s emotional states [1,7,9,10]. This is perhaps one of the most broadly observed capacities of all, being present in virtually all mammals and in some other species as well. At least in mammals, this is believed to have its origins in motherhood. All mammals are raised by a mother, and mothers who are sensitive to the emotional signals and needs of their offspring are more successful at rearing them. As an evolutionary result, nearly all mammals have an implicit awareness of emotional cues, often extending to other adults of their species. Sometimes this sensitivity even extends across species, as has been observed in pet dogs that are responsive to distress in their owners. Empathy shows similar patterns in humans as in other animals: it is most often extended toward those closest to the subject, and the more distressed a subject is by emotional displays in others, the less likely the subject is to extend them help.








Photo credit: Mihaela Vincze





Related to empathy is observational/social learning, or the ability to learn from watching another individual. This, too, is observed widely among animal species, ranging from tool-making and tool-using techniques, to child-rearing traditions, to food choices [11]. Some of the most striking examples include the potato-washing Japanese macaques, a troop that learned from a single macaque to wash the dirty potatoes that their caretakers provided. Even when clean store-bought potatoes were substituted, the macaques continued for generations to wash their potatoes in the sea. This also seems to suggest a form of culture in animals, which pass socially learned mannerisms along to future generations, sometimes with little or no functional relevance. Indeed, evidence of cultural practices is quite strong in chimps, and some evidence exists even in rats.





Some believe that humans are unique in their capacity to think about the mental states of others, an ability sometimes referred to as “theory of mind.” However, this claim is also being challenged by clever beasts. Studies have shown that human children develop theory of mind at around the same age as they develop mirror self-recognition – a capacity which some of the most advanced animals, including dolphins, elephants and great apes, also show in adulthood [1]. Even more convincing, however, are recent experiments in New Caledonian crows showing that they recognize the difference between a human-operated and a randomly operated threatening device [2]. Human studies implicate specific areas of the brain – the temporo-parietal junction, superior temporal sulcus, and medial frontal cortex – as regions involved in thinking about the minds of others [12,13]. In studies on rhesus macaques, analogous brain regions tend to expand with increasing group size, suggesting that they are involved in thinking about group members in other species as well [13]. So even the imaginative capacity of humans seems not entirely unique.





The answer to the question of human uniqueness could be
something seemingly obvious: language [14]. “Language seems to invade almost
every aspect of cognition,” added Frans de Waal during a free discussion period.
Language itself is a symbolic representation of thought, and the subsequent
reliance on symbolic representations is reflected across a wide range of
cognitive processes, especially those involved in social communication and
learning. However, there is increasing evidence that some bird songs have
characteristics similar to language; that chimps and dogs can understand spoken
language, and may use gestural communication; and that some parrots may be able
to use spoken language in the same way that humans do [14].





Despite all of the shared capacities between animals and
humans, there is still significant resistance, even among scientists, to using
traditionally “human” terms to describe the emotions, abilities and mental
states of animals [15]. Nonetheless, the uniqueness of humanity may be simply a
matter of combination and degree. Redouan Bshary, the
scientist behind the grouper and cleaner wrasse studies, said in his talk that
all animal brains represent a set of solutions to ecological problems [5]. When
an ecological problem can best be solved through a single mental capacity, you
are likely to find an animal that possesses that capacity. However, you might
find it in isolation – for instance, a grouper that can form shared plans and
make referential gestures toward a moray eel may not be able to think
abstractly, imagine the thoughts of others, or use tools. Each animal
independently evolves only those capabilities it needs for its own set of
ecological problems. Furthermore, while chimps and crows may build tools, and
dogs and birds may understand language, they are nowhere near matching
humans in the degree to which these capabilities are developed. So, humans may
represent a unique evolutionary amalgam of capabilities that individually can
be seen elsewhere, but rarely all together, and never to the same degree.









Bibliography





[1] de Waal FB (2014 October). Mammalian origins of empathy and pro-sociality. Talk presented at
The Social Brain Conference, Copenhagen, Denmark.


[2] Gray R (2014 October). Can social and technical intelligence be decoupled? Cooperation, causal
cognition and inferences about agency in New Caledonian crows.
Talk
presented at The Social Brain Conference, Copenhagen, Denmark.


[3] Ferrari PF (2014 October). Hard-wired, soft-wired and re-wired. Brain plasticity, sensorimotor experience
and early social development in primates.
Talk presented at The Social
Brain Conference, Copenhagen, Denmark.


[4] Rizzolatti G (2014 October). Understanding others from inside: a neural mechanism. Talk
presented at The Social Brain Conference, Copenhagen, Denmark.


[5] Bshary R (2014 October). The social brain hypothesis applied to fishes. Talk presented at
The Social Brain Conference, Copenhagen, Denmark.


[6] Calcutt S (2014 October). Chimpanzees in newly formed social groups choose high-risk social
investments over low-risk ones.
Talk presented at The Social Brain
Conference, Copenhagen, Denmark.


[7] Mason P (2014 October). Helping another in distress: Lessons from rats. Talk presented at
The Social Brain Conference, Copenhagen, Denmark.


[8] Young LJ (2014 October). The neurobiology of social relationships: implications for novel
therapies for Autism.
Talk presented at The Social Brain Conference,
Copenhagen, Denmark.


[9] Burkett JP (2014 October). Consoling behavior in the prairie vole: neurobiology and basis in
empathy.
Talk presented at The Social Brain Conference, Copenhagen,
Denmark.


[10] Keysers C (2014 October). The empathic brain and its plasticity. Talk presented at The Social
Brain Conference, Copenhagen, Denmark.


[11] Whiten A (2014 October). Imitation, culture, and the social brains of primates. Talk
presented at The Social Brain Conference, Copenhagen, Denmark.


[12] Mars R (2014 October). From monkey social cognition to human mentalizing. Talk presented
at The Social Brain Conference, Copenhagen, Denmark.


[13] Rushworth M (2014 October). The medial frontal cortex and social cognition in humans and other
primates.
Talk presented at The Social Brain Conference, Copenhagen,
Denmark.


[14] Mooney R (2014
October). Neural mechanisms of
communication.
Talk presented at The Social Brain Conference, Copenhagen,
Denmark.

[15] Panksepp J (2014 October). Three primary-process animal social brain
networks (PANIC, SEEKING, PLAY) and development of three new antidepressants
for humans.
Talk presented at The Social Brain Conference, Copenhagen,
Denmark.





Want to cite this post?




Burkett, J. (2014). What is uniquely human? A report from The Social Brain Conference. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2014/10/what-is-uniquely-human-report-from.html

Tuesday, October 21, 2014

Burden of proof: does neuroscience have the upper hand?

As an undergraduate, I took several introductory level philosophy classes while
majoring in neuroscience. Some of it I could appreciate and most of it went
over my head, but a thought that kept nagging me was, “haven’t neuroscientists
solved all of these issues by now?” It was only after I had worked in
neuroscience laboratories for a few years that I began to realize just how
qualified all of our statements had to be due to the plethora of limitations that
go along with any result. I began to wince anytime I heard someone use the word
“proof” (only salesmen use the term “clinically proven”, but don’t get me
started on that…). It seems clear to me now that, for the most part, natural
scientists, social scientists and humanities scholars are really all working
toward the same goal, just in different, albeit complementary, ways. At the first
“Neuroscience, ethics and the news” journal club of the semester, Lindsey
Grubbs, a PhD student in Emory University’s English Department, facilitated our
discussion about a topic that she has previously written about for this site.
The main focus was on what role neuroscience can and should play in answering
questions that have long been in the realm of the humanities and how these
results should be communicated to the general public.






From the Daily Mail Online





At the center of
our discussion were two papers about the effects of reading that each created
quite a stir
in the popular press. First, Gregory Berns’
laboratory at Emory reported on a study they had conducted, aimed
at determining how a good book can leave such a lasting impression (1).  They reasoned, “It seems plausible that if
something as simple as a book can leave the impression that one’s life has been
changed, then perhaps it is powerful enough to cause changes in brain function
and structure.” Berns and colleagues asked undergraduates to read a novel over
the course of nine days and had them come into the lab for a short
resting-state functional Magnetic Resonance Imaging (fMRI) scan each morning
during that time. They utilized an internal control model where each
participant was also scanned prior to reading and after finishing the novel.




Dr. Berns and his
colleagues found significant short-term changes in activity levels in areas
that had previously been associated with “story comprehension” and “perspective
taking” – the “left angular/supramarginal gyri and right posterior temporal
gyri” – and somewhat persistent changes in somatosensory cortical connectivity.
They interpreted the latter results as a possible substrate for the phenomenon
of “embodied semantics,” where the brain’s somatosensory processing machinery
can be recruited by just the thought of performing an action or experiencing a
sensation (2).
Importantly, nowhere in the paper do the authors put a valuation on these
changes as “good” or “bad,” yet headlines such as “Brain
function improves for DAYS after reading a novel” appeared. Increased
connectivity between discrete brain regions does not necessarily mean that
brain function is improved. Is this simply a result of a positive bias in favor
of reading? If the same neurological changes were found in chronic drug abusers,
I doubt they would be interpreted as improvements.







From ScienceDaily



The second study (3),
published in Science by Kidd and
Castano (2013), took a very different approach and also received a great deal
of attention (positive, negative, and borderline ridiculous). Here, the researchers from The New School
set out to determine if literary fiction – defined as award-winning and/or
canonical rather than best-selling – alters measures of Theory of Mind, the ability to
infer and understand the mental states of others. To do this, they put subjects
through a battery of tests aimed at measuring affective and cognitive Theory of
Mind after reading either literary or popular fiction or nothing at all. Kidd
and Castano conclude their article by arguing that their results highlight the
importance of reading literature, in contrast to the controversial US Common Core standards, which de-emphasize
reading fiction
in secondary education. Slate’s Mark Liberman has written at length
about the shortcomings
of this study’s design and its perhaps over-reaching interpretations. His piece
was titled “That study on literary fiction and empathy proves exactly nothing”
– I couldn’t agree more (but I could say that about any paper). Liberman’s
somewhat aggressive title may have been in response to Zach Schonfeld’s “Now
we have proof that reading literary fiction makes you a better person
” (shudder).




Hyperbole aside,
there are two issues here that deserve attention. First, do we really need neuroscientific evidence in order
to promote reading, and in particular, reading great, canonical works? Even
hardcore, card-carrying reductionists would likely agree that a lack of
biological evidence for the benefit of reading does not necessarily mean it
isn’t good for you. The risk is that the public comes away from these articles
thinking that now that neuroscientists have weighed in, the debate is over.
Also of note, Kidd and Castano never use the word “brain” (or “neuron”,
“neural”, etc.) because this is a psychology study, aimed at understanding the
effects of reading on how the mind
works. Neuroscientists could certainly look for neural correlates for the
psychological changes exerted specifically by literary fiction on the brain, but
even if they were not able to find anything, that would not mean that the
changes aren’t real. An underlying issue here, outlined by Lindsey and
discussed by the group, is that natural science results – and in particular
those from neuroscience – are highly persuasive to the public (4, 5).
Here, the media largely reported that these studies proved the benefits of
reading, which hardly seems like a controversial topic. However, Common Core
standards are quite controversial
and the amount and type of reading (i.e. fiction vs. non-fiction) are hotly
contested
topics. While neuroscientists and psychologists certainly could
weigh in here, there is a concern that their results may be more influential
than perhaps they should be.







From alanrinzler.com



Second, a problem
that arose with several of these popular press articles is the attachment of a
value judgment to the changes that scientists reported. As mentioned above, it
is likely that the reason that words like “improvement” were added to describe
these neural changes is because reading is already seen as a positive
influence. Internet pornography addicts probably would have had a similar change
– if it existed – labeled as a “pathological rewiring.” Perhaps this is a
remnant of the brain being thought of as a muscle that needs to be exercised.
In that analogy, any increase does seem like an improvement but that obviously
is not always the case (for example, uncontrolled excitatory activity can lead
to seizures).




In reality, most
of the things we do and see and feel can probably change our brains in some
way, for better or worse – but whether or not neuroscientists alone are able to
find these changes doesn’t actually prove
anything.





References




1. Berns GS, Blaine K, Prietula MJ, & Pye BE (2013) Short- and long-term effects of a novel on connectivity in the brain. Brain Connectivity 3(6):590-600.


2. Aziz-Zadeh L & Damasio A (2008) Embodied semantics for actions: findings from functional brain imaging. Journal of Physiology, Paris 102(1-3):35-39.


3. Kidd DC & Castano E (2013) Reading literary fiction improves theory of mind. Science 342(6156):377-380.


4. Caulfield T, Rachul C, & Zarzeczny A (2010) “Neurohype” and the Name Game: Who's to Blame? AJOB Neuroscience 1(2):13-15.


5. Weisberg DS, Keil FC, Goodstein J, Rawson E, & Gray JR (2008) The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience 20(3):470-477.






Want to cite this post?




Purcell, R. (2014). Burden of proof: does neuroscience have the upper hand? The Neuroethics Blog. Retrieved on , from
http://www.theneuroethicsblog.com/2014/10/burden-of-proof-does-neuroscience-have.html

Tuesday, October 14, 2014

Ambivalence in the Cognitive Enhancement Debate

By Neil Levy, PhD




Neil Levy is the Deputy Director of the Oxford Centre for Neuroethics, Head of Neuroethics at Florey Neuroscience Institutes, University of Melbourne, and a member of the AJOB Neuroscience Editorial Board. His research examines moral responsibility and free will.



The most hotly debated topic in neuroethics surely concerns the ethics of cognitive enhancement. Is it permissible, or advisable, for human beings already functioning within the normal range to further enhance their capacities? Some people see in the prospect of enhancing ourselves the exciting possibility of becoming more than human; others see it as threatening our humanity, so that we become something less than we were.




In an insightful article, Erik Parens (2005) has argued that, in truth, we are all on both sides of this debate. We are at once attracted and repulsed by the prospect that we might become something more than we already are. We are deeply attached both to a gratitude framework and to a more Promethean framework, and Parens thinks both frameworks are deeply rooted in Western culture and history; perhaps they are universal themes. Hence we find ourselves torn with regard to self-transformation.




When someone feels torn in this kind of way about how they should think about or respond to something, they are ambivalent. Parens thinks that ambivalence is in fact the right response to cognitive enhancement: we ought to recognize that we are torn in both directions and acknowledge and respect this fact. We should not seek to resolve the ambivalence; we ought to embrace it. While I think that Parens highlights something of great importance when he argues that we are torn, I think he is wrong that we ought to attempt to respect both frameworks.




The mere fact that we are torn is no reason to think we ought to give equal – or indeed, any – weight to both directions in which we are torn. There is plentiful evidence that many people are conflicted in their attitudes toward members of other races, for instance (Payne & Gawronski 2010). They have egalitarian beliefs but inegalitarian implicit attitudes. But they should not ‘respect’ those implicit attitudes; they should attempt to eliminate them.




If there is good reason to reject one (or both) of the attitudes that we feel in response to cognitive enhancement, we ought to do so. I will not argue, here, that we ought to reject one or the other attitude. Here I want to highlight the costs of ambivalence.







Image from Nature 452, 674-675 (2008)



There is a very large body of evidence that when people are strongly conflicted, they – unconsciously – seek to resolve the conflict, and they will do so even at the cost of confabulation. Consider the extensive data on cognitive dissonance (Cooper 2007). Cognitive dissonance paradigms induce people to be disposed to attribute contrary attitudes to themselves. For instance, college students may be led into a state of cognitive dissonance by gentle situational pressure to write an essay defending tuition fee rises. If they regard themselves as having willingly written the essays, and are unable to explain why they did so by reference to payment or reward, they are in a state of cognitive dissonance: they were antecedently disposed to judge that their fees shouldn’t rise, but they now have evidence that they have a different belief. In this kind of paradigm, they self-attribute the belief that fees should rise, even though matched controls (who, for instance, write an essay defending the same claim in exchange for payment) are very unlikely to agree that their tuition fees should rise.




When it would resolve cognitive dissonance to do so, people’s standards for accepting a claim are much lower than otherwise. It causes us to engage in highly motivated reasoning: for instance, accepting a weak argument just because committing to it resolves the ambivalence. Allen Buchanan (2011) is apparently puzzled that in the debate over human enhancement, incredibly intelligent and sophisticated thinkers often “substitute high-sounding rhetoric for reasoning” (Buchanan has the bio-conservatives in mind; I think the charge can be extended to some of those in the pro-enhancement camp as well). I think this is just what we ought to expect, if Parens is right. If cognitive enhancement causes ambivalence in us, then we should not be surprised to see bad arguments advanced and accepted by good thinkers.




The way forward is not to embrace ambivalence but to avoid triggering it in the first place. Just how to do that is not at all obvious. One recommendation that seems to follow is that we ought to embrace a practice that we have independent reasons to adopt in any case: avoid hype. Too often we greatly exaggerate the potential that existing drugs and other techniques of intervening in the mind have for enhancing ourselves. In fact, all the available methods seem to promise very little improvement for those who are already functioning well, and what improvement they promise seems to come at a cost in other domains. Perhaps bearing these facts in mind will enable us to come to a more realistic and more sober assessment of cognitive enhancement.






References




Buchanan, A. E. 2011. Beyond Humanity: The Ethics of Biomedical Enhancement. New York: Oxford University Press.



Cooper, J. 2007. Cognitive Dissonance: Fifty Years of a Classic Theory. Los Angeles: Sage Publications.



Parens, E. 2005. Authenticity and Ambivalence: Toward Understanding the Enhancement Debate. Hastings Center Report 35: 34-41.



Payne, B. K., & Gawronski, B. 2010. A history of implicit social cognition: Where is it coming from? Where is it now? Where is it going? In B. Gawronski, & B. K. Payne (Eds.), Handbook of implicit social cognition: Measurement, theory, and applications, New York, NY: Guilford Press, pp. 1–17.






Want to cite this post?




Levy, N. (2014). Ambivalence in the Cognitive Enhancement Debate. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2014/10/ambivalence-in-cognitive-enhancement_14.html







Tuesday, October 7, 2014

Can neuroscience discuss religion?

In a previous post, Kim Lang presented the views of several prominent neuroscientists and neurologists on spirituality and religion. With
the knowledge that atheism is prevalent in the scientific community, she wondered how it is that some neuroscientists are nevertheless able to integrate their
religious and scientific beliefs. One of the neuroscientists whose standpoint
she surveyed was Michael Graziano, a Professor of Neuroscience at the Princeton University Neuroscience Institute. Dr. Graziano believes that current research on the neurological basis of consciousness proves that spirituality is
not only a natural tendency of humans, but also that its foundations are visible in the very structure of the brain [1].




Several questions arise from Dr. Graziano’s statement, and I will try to shed some light on each.



To start with, is neurotheology actually studying spirituality, religion, or both?
What is the difference between the two? The conceptual separation between the
two terms is definitely blurred. In this interview for Big Think, American Buddhist writer and academic Robert Thurman says
that spirituality is “love and compassion”, is “going into a deeper area of
your mind where you are asserting your free will”, where “you let go of your
self-protective and defensive controls, and what you tap into is the nature of
the universe, the flow of energy interconnecting things”. In contrast, Thurman
believes religion is built upon spirituality, but has taken a secondary role
as a tool of social and state organizations. Rituals and rules specific to each
religion end up regulating the access to the spiritual, and become, in
Thurman’s words, a control rather than a regulating mechanism. Neuroscientist
and philosopher Sam Harris, who has recently authored a book called
Waking Up: A Guide to Spirituality Without Religion, seems to have similar views on the issue. He explains:




"Although
the claim seems to annoy believers and atheists equally, separating religion
from spirituality is a perfectly reasonable thing to do. It is to assert two
important truths simultaneously: Our world is dangerously riven by religious
doctrines that all educated people should condemn, and yet there is more to
understanding the human condition than science and secular culture generally
admit."





While Thurman and Harris see the separation of religion and spirituality as a necessity, Native American
specialist Jack Forbes defines religion in much the way the former two authors define spirituality. For Forbes, religion is the way we live and the dreams, hopes,
and aspirations we have. In Columbus and Cannibals: The Wétiko Disease of Exploitation, Imperialism, and Terrorism, he
says:



"Religion is not prayer, it is not church, it is not theistic, it is not atheistic, it
has little to do with what white people call ‘religion’. It is our every act."




The controversy around the meanings of religion and spirituality leads me into my next question: What are the research questions asked by neurotheologists? For example, Andrew Newberg,
a neuroscientist who studies the brain functions of various mental states, used
SPECT (single
photon emission computed tomography
) to scan the brains of Tibetan
Buddhists and to draw conclusions about “the neurophysiology of religious and
spiritual practices”.





He found decreased activation in the parietal lobe (an area responsible for spatial and temporal orientation) and increased activation in the frontal lobe (a region responsible for attention and concentration) in Buddhists who were instructed to meditate, compared with participants who did not receive any instructions. In addition, activity in the frontal lobe and inferior parietal lobe (a language area) increased during prayer. Should meditation and prayer be taken as processes equally associated with religion and spirituality? While the results of Dr. Newberg’s study are truly thought-provoking, it is important not to lose sight of what was actually studied and what domains the results of the research can be extended to.




My third question is: what are the tools with which neuroscientists are trying to study the brain’s take on religion and spirituality? I invite you to try the following thought experiment.



God.




Let your mind freely explore the meaning of the word. Allow your senses to paint an
image, produce sounds, smells, tastes, feelings. Embrace the physiological and
emotional states that the word creates within you. Equally welcome the absence
of a reaction.




Now imagine that you were doing this exercise inside an fMRI machine, with its specific background noise present at all times and a gigantic magnet inches above your face. Unless you have significant training in isolating background noise and other distractors (or what Dr. Robert Puff calls “finding the silence in which the noise resides”), you will probably not reach the same level of focus as you would were you meditating or praying in a dimly lit room infused with the smell of incense, or in a natural setting that smelled of blossom, either alone or with others who are also seeking that transcendental, spiritual state of mind.




Yet these are the kinds of procedures from which the field of neurotheology extracts its data and draws its conclusions. In 2008, for example, a team of neuroscientists led by Sam Harris at the University of California, Los Angeles measured brain activation in committed Christian believers and nonbelievers who were presented with various religious and nonreligious propositions – like “The Biblical God really exists” and “Santa Claus is a myth” – that they had to judge as either true or false [2]. The results of the study suggested that both religious and nonreligious belief – in other words, statements that participants judged to be true – were correlated with greater activation of the ventromedial prefrontal cortex (see image below).






Belief minus disbelief in Both
Categories, Both Groups – image from Harris et al, 2010







This area is associated with representation of self, theory of mind, reward,
emotional associations, and goal-driven behavior. Added to other studies that
found religion to be related to medial prefrontal cortex circuitry [3] [4] or
even to the activity of the left hemisphere [5], this
study seemed to suggest that sustaining religious activities requires the activation
of particular brain areas.




I will return to the fact that religion
is widespread. Nine-in-ten Americans believe in the existence of God or
consider themselves spiritual [6]. Is the religious or spiritual experience
simply a cognitive tendency that has been shaped by evolution?
Cognitive
scientists such as Pascal Boyer, for example, have explained that the
propensity to religion is a consequence of the social nature of humans [7]. Our brains are built to deal with large amounts of social
information, and do so constantly. In order to integrate into today’s complex
societies, we spend incredible amounts of time creating, evaluating, and
updating our social relationships. Consider the amount of time we spend trying
to understand what others think and why they think it. Think about the ease
with which we remember faces, compared to other objects or even other parts of
the body [8]. Now, remember the occasion when you looked at the sky and
saw a cloud shaped like a human face, or when your breakfast toast had a
human-like design on it. We seek agency continuously, or, as Pascal Boyer calls
this phenomenon, we have a hyperactive agency detection system. So can it be
that gods and other supernatural agents exist because we are cognitively
disposed to believe in their existence? They certainly seem to master large
amounts of socially relevant information, and people often turn to them for
answers and guidance. The fact that gods and supernatural agents often have the
powers or characteristics of humans only makes them more memorable.




It might well be that spirituality is a cognitive by-product of the social
tendencies of humans. But even so, can
religion be a process within the reach of an empirical framework?
Few neuroscientists
venture so far as to say that religion/spirituality is a product of activating certain
brain areas, preferring instead to remain in the more neutral territory of “associated with” and “important for”. As neuroscience research attempts to penetrate
spirituality, it will be fascinating to discuss the diversity in religious
experience, atheism, and why people who live in radically different
circumstances nevertheless share a similar connection with a god, gods, or
other transcendental spirits.




For now, I suggest that the field of neuroscience is too young to draw any conclusions about religion and spirituality. While questioning the neurological basis of spirituality is indeed fascinating, there is no clear consensus on how these brain processes should be studied. Furthermore, the technology available to do so is still not sensitive enough. And even if the research methods were agreed upon, and technological advancements promised accurate results, what are we to do with the findings of this kind of research? I believe that while this research is indeed promising, many ethical questions arise from assigning something as intangible as spirituality to a brain mechanism. Those who study spirituality and religion, therefore, should be sincere about the potency of their research methods and techniques, meticulous about their generalizations, and straightforward in their reports to the public.






References




[1] Graziano, Michael. "The Spirit Constructed in the Brain." The Huffington Post. TheHuffingtonPost.com,
29 Apr. 2011. Web.




[2] Harris S, Kaplan JT, Curiel A, Bookheimer SY, Iacoboni M, Cohen MS (2009) The neural correlates of religious and nonreligious belief. PLoS One 4(10): e7272.




[3] Azari NP, Nickel J, Wunderlich G (2001) Neural correlates of religious experience. Eur. J. Neurosci.




[4] Muramoto O (2004) The role of the medial prefrontal cortex in human religious activity. Med Hypotheses. 62(4): 479-85.




[5] Ramachandran VS (2010) “Split Brain with One Half Atheist and One Half Theist”. Youtube video www.youtube.com/watch?v=PFJPtVRlI64




[6]"Summary of Key Findings." Statistics on Religion in America Report. PewResearch. http://religions.pewforum.org/reports#. Retrieved 04 Sept. 2014.




[7] Boyer, P (2001) Religion Explained: The Evolutionary Origins of Religious Thought.




[8] Waxman OB (2014) “It’s ‘Perfectly Normal’ to see Jesus in Toast, Study Says”. Time, Newsfeed Science http://time.com/90810/its-perfectly-normal-to-see-jesus-in-toast-study-says/



Forbes J.D. (2008) Columbus and Cannibals: The Wétiko Disease of Exploitation, Imperialism, and Terrorism. Seven Stories Press.






Want to cite this post?




Lucaciu, I. (2014). Can neuroscience discuss religion? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2014/10/can-neuroscience-discuss-religion.html