Tuesday, June 30, 2015

New neuro models for the interdisciplinary pursuit of understanding addiction

by Katie Givens Kime



The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Katie Givens Kime is a doctoral student in Religion, with foci in practical theology, psychoanalysis, and neuroethics, and her research investigates the religious and spiritual aspects of addiction recovery methods.  



A few years ago, a highly respected and accomplished philosopher at Duke University, Owen Flanagan, surprised everyone when he stood up to speak at a meeting of the Society for Philosophy and Psychology.  A garden-variety academic presentation it was not.  In “What Is It Like to Be An Addict?” Flanagan revealed to 150 of his esteemed colleagues that he had been addicted to various narcotics and to alcohol for many, many years.  Not so long ago, every gruesome morning looked like this:





I would come to around 6:15 a.m., swearing that yesterday was the very last time...I’d pace, drink a cup of coffee, and try to hold to my terrified resolve.  But by 6:56—every time, failsafe, I’d be in my car, arriving at the BP station...at 7 a.m. sharp I’d gather my four or five 16-ounce bottles of Heineken, hold their cold wet balm to my breast, put them down on the counter only long enough to be scanned....I guzzled one beer in the car.  Car cranking, BP, a beer can’s gaseous earnestness—like Pavlov’s dogs, when these co-occur, Owen is off, juiced...the second beer was usually finished by the time I pulled back up to the house, the house on whose concrete porch I now spent most conscious, awake, time drinking, wanting to die.  But afraid to die.  When you’re dead you can’t use.  The desire to live was not winning the battle over death.  The overwhelming need – the pathological, unstoppable – need to use, was. (Flanagan, 2011, p. 77) 





Research on addiction is no small niche of medical science.  It’s an enormous enterprise.  This seems appropriate, since addiction (including all types of substance abuse) is among the top public health crises in the industrialized West.  The human suffering and the public (and private) expense wrought by addiction are immense.





To that end, two accomplished researchers recently guest lectured here in Atlanta, representing a few dynamic edges of such research.  Dr. Mark Gold lectured for Emory University’s Psychiatry Grand Rounds on "Evolution of Addiction Neurobiology and Treatment Over the Past 40 Years,” and Dr. Chandra Sripada lectured for the Neurophilosophy Forum at Georgia State University on "Addiction, Fallibility, and Responsibility.”






However, before we get into the work of Gold and Sripada, let’s establish the big picture of addiction research today.  Nobody debates the severity of the problem of addiction.  Views diverge dramatically, however, on its nature and etiology (what is it? how is it caused?) (Jacobson, 1995).  Etiologies and descriptions of addiction vary: addiction as moral failure, as disease, as inherited vulnerability, as pathological attachment, as disordered choice (picking short-term goods over long-term goods), as self-medication…the list continues.





If we can’t agree, at least provisionally, on what addiction is and how it happens, then it’s tough to agree on how best to treat it.  It is even tougher to answer the ethical question: “is the addict responsible?” Previous posts on this blog have offered excellent points and counterpoints on various sides of this question.  I won’t rehash them.  Instead, I think that Flanagan, Gold and Sripada hold different but compelling and practically useful answers that end up reframing the question itself. 





Flanagan desperately wanted to use, and desperately did not want to use.  He made clear that his philosophical conundrum of “performative inconsistency” – P & ~P – did not take the form of “a calm, Kantian transcendental pose,” but rather, as Flanagan put it, a more wrenching, “how is this f***ing possible?” (Flanagan, 2011, p. 70)  Interestingly, Flanagan points out that if you talk with addicts, they speak about being responsible for their past and present actions in the same way the rest of us do (or perhaps, the rest of us who are not professional ethicists).  This is where I think we misframe the question when we ask, “is the addict responsible?”  Most of us, in most of our daily living, at various levels of awareness, manage to understand our agency, paradoxically, as more multifaceted than just “my fault” or “not my fault.”





With regard to the ethics in play here, I can’t hope to summarize all that both Dr. Gold and Dr. Sripada presented, but a few relevant elements stood out. Sripada very ably argued that there is overwhelming evidence that addicts lack self-control (cited by the “Irresistibility Defenders”), and there is overwhelming evidence that addicts have substantial self-control (cited by the “Irresistibility Skeptics”). 







Dr. Chandra Sripada, University of Michigan


To resolve this standoff, Sripada proposed a new model, one based on the idea that addicts’ ability to exert self-control is fallible.  He argued that people often fail to appreciate the obsessional dimension of addiction: addicts face recurrent urges for drugs throughout the day, and especially when they are stressed.  Now suppose the ability to exert self-control is reliable but fallible: sometimes a person makes mistakes in exerting self-control that lead to giving in to the urge.  Then, given enough time, the cumulative probability that the person will eventually relapse rises ever closer to 1.  Why should we suppose self-control is in fact fallible?  This is an area of active neurobiological investigation, and some emerging theories suggest the issue might lie in the interaction between certain large-scale brain networks.  But even before the neurobiological evidence is in, it seems reasonable to suppose that some amount of fallibility is inevitable.  After all, exerting self-control is a highly complex activity, and just about any complex activity is going to have a non-zero rate of random failure.





The interesting thing about the Fallibility Model of relapse is that it allows that addicts have substantial control over their drug-directed desires.  For any given urge, there is a very high probability that the addict will successfully resist that urge.  The problem arises when the person faces lots and lots of urges.  In this context, even a very low rate of fallibility can, over time, lead to a very high probability of relapse.
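To make the arithmetic concrete, here is a minimal sketch of the model’s logic; the per-urge success rate and the urge count below are invented for illustration, not figures from Sripada’s lecture:

```python
# Toy illustration of the Fallibility Model: a tiny per-urge lapse
# probability compounds over many urges into a near-certain relapse.

def relapse_probability(p_resist: float, n_urges: int) -> float:
    """Probability of at least one lapse across n_urges independent urges."""
    return 1 - p_resist ** n_urges

# Hypothetical numbers: the addict resists any single urge 99.9% of the
# time and faces 10 urges per day.
p_resist = 0.999
for days in (1, 30, 365, 5 * 365):
    p = relapse_probability(p_resist, n_urges=10 * days)
    print(f"{days:>5} days: P(relapse so far) = {p:.3f}")
```

On these assumed numbers, the chance of at least one lapse is about 1% after a day, about 26% after a month, and over 97% after a year: substantial per-urge control, near-certain eventual relapse.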






To me, this model more adequately accounts for the paradigmatic irrationality of the addict’s behavior.






As for the work of Dr. Gold: his presentation offered an excellent perspective on 40 years of engaging the incredibly complex problem of addiction.  (If you don’t know about the Drunk Monkeys of St. Kitts, you really should.  This video from the BBC is hilarious, disturbing, fascinating, and only about 3 minutes long.)  In a more clinical frame, Dr. Gold recalled the dark days of the 1970s when, even at Yale Medical Center, addicts (including alcoholics) couldn’t get past the E.R.; they were refused hospital admission because of their “untreatability.”  Though far from agreement, the various medical understandings of addiction have at least progressed beyond the social stigmas that have everything to do with the ethical question at hand: is addiction the addict’s fault?







Mark S. Gold, M.D., University of Florida College of Medicine





From Dr. Gold’s perspective, the question of agency is less pressing (or simply too complex) when questions like “which treatment methods actually work?” matter more to the project of alleviating the destruction and suffering wrought by addiction.  Gold pointed out that studies on treatment methods need to be far more rigorous in their longevity: in his view, if a study does not have 5-year data attached, it is not trustworthy.  The “unnatural competition between psychiatry and 12-step programs is profoundly misguided,” Gold said, pointing to the “great data for 12-step programs in 5-year studies.”





In the end, my view is that though there is value in the “is the addict responsible?” question, we must do the work of viewing human agency with more complexity.  As Flanagan points out, “Addicts think they are responsible for what they do.  However, it has proved useful for addicts to admit they are powerless over [the addict’s drug of choice]” (Flanagan, 2011, p. 291).  Paradoxically, like millions of recovering addicts everywhere, it is only by persisting in the understanding that he lacks agency over his substance of abuse that Flanagan has been able to regain a sense of agency over his life.





So it seems that neuroethicists and researchers across disciplines (social and natural sciences, and humanities too!) must engage in the truly difficult task of examining addiction from different epistemological starting points – for what is a person responsible?  Is it a change of mind, or a change of biology?  Or is a change of mind also a change of biology?  Easier said than done.  I suggest we head to the beach at St. Kitts! 





References





Flanagan, O. (2011). What is it like to be an addict? In J. S. Poland & G. Graham (Eds.), Addiction and responsibility (pp. 269–272). Cambridge, Mass: MIT Press.





Jacobson, J. G. (1995). The advantages of multiple approaches to understanding addictive behavior. In The psychology and treatment of addictive behavior (pp. 175–190).














Want to cite this post?



Kime, K. (2015). New neuro models for the interdisciplinary pursuit of understanding addiction. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2015/06/new-neuro-models-for-interdisciplinary.html






Tuesday, June 23, 2015

Selfhood and ethics: Who am I and why does it matter?

by Keenan Davis



The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Keenan is a graduate student in Bioethics, whose work focuses on the use of virtue ethics and natural law to evaluate novel biotechnologies. He will be pursuing a PhD in the Graduate Division of Religion in the fall.



What should I be doing with my life? Many approach this timeless question by first considering another: Who am I? For a wide range of thinkers from Plato to Dr. Phil, we can only know what to do with ourselves when we truly know ourselves. Who we are determines and constrains how we ought to behave. For example, because my parents caused me to exist, I should behave towards them with a level of gratitude and love. Perhaps through this cause-and-effect dynamic, as a result of being their son, I should treat them respectfully. We will return to this example at the conclusion of our exploration.



Historically, the question of selfhood was assessed in terms of an afterlife, seeking to resolve what happens to us when we die. If, as Plato claimed, a person is nothing more than his soul, "a thing immortal," then he will survive physical death. Indeed, perhaps one should look forward to the separation of the soul from material constraints. How we ought to behave then is for the sake of existence after and beyond this world, a position shared by many adherents to Abrahamic religion. On the other hand, if we are no more than our bodies, then we do not persist after death and have no reason to orient our behavior toward post-mortem expectations. Such is the position of Lucretius and the Epicureans who conclude that our practical task is instead to flourish within a strictly material context. Our behavior should be for the sake of this world. For both Lucretius and Plato, the metaphysical substance of self is what mattered foremost.






John Locke

As part of the 17th century Enlightenment, John Locke changed the focus from the substance of self and more explicitly addressed the issue of selfhood with an eye to its normative consequences. For instance, he believed the self to be based entirely on memory and consciousness, regardless of the relationship between body and soul. By defining personhood as continuous self-identification through memory, Locke aimed to establish psychological criteria for moral agency and responsibility. Only if one is responsible for particular actions ought he be liable for judgment, reward, or punishment. Despite his emphasis on the psychological, as opposed to the biological or spiritual, Locke's definition of self still follows the cause-and-effect pattern of is then ought: who I am determines how I should behave.







Using thought experiments like the famous Ship of Theseus conundrum, philosopher Trenton Merricks of the University of Virginia undermines this line of thought by suggesting that there is no metaphysical answer to the question of who we are. There simply are no necessary and sufficient criteria—psychological, bodily, or otherwise—of identity over time for any object. Lest we take this conclusion too far, Merricks explains that it does not mean that persons and objects lack essential properties or evade description: "Among my essential properties are, I think, being a person and failing to be a cat or hatbox." His assessment just means that not all explanations or identifications involving characteristics need to be stated in terms of absolute proof. Allowing a modest concession to unavoidable skepticism, we need not (nor do we ever) demonstrate infallibly that "the tree in my yard today is the same tree that was in my yard yesterday" to warrant that belief. We can still be warranted in our beliefs regarding who we are without proving them absolutely certain.





Merricks demonstrates that a strict criterialist account of the self is insufficient alone: we cannot point to a single necessary and sufficient criterion of self that might help us figure out what to do with ourselves. Perhaps then we should consider an emergent understanding of self, in which the many aspects of our selves coalesce to a self that is greater than the sum of its parts. Plato, Lucretius, and Locke all seemed to be somewhat right in their descriptions of who we are: our minds, bodies, and souls all at least contribute to our sense of self, even if no one of them defines it entirely.



This is the starting premise of psychologist Nina Strohminger and philosopher Shaun Nichols, who sought to determine if there exists a perceived hierarchy of these components. Their paper “The Essential Moral Self” reveals which aspects of our identity contribute most to our narrative of self. Their experiments involved surveys asking people to consider the fate of someone who suffers brain trauma, takes a psychoactive drug, moves from one body to another, is reincarnated after death, or undergoes age-related cognitive changes. From these surveys, distinct patterns emerged illuminating how personal identity is actually defined by most people.






Plato

Strohminger and Nichols found that "folk intuitions largely accord with the psychological view" endorsed by Locke but note that specific aspects of the psychological criteria are much more highly valued than others, "challenging a straightforward view of psychological continuity" as the definition of selfhood. Indeed, even various bodily traits are ranked above many psychological traits. Across the five experiments, though, they found "strong and unequivocal support” for their “essential moral self hypothesis," which states that the subset of psychological traits they refer to as moral traits (e.g. kindness, empathy, goodness) are considered more important than any other in defining personal identity. For instance, in the survey asking participants which traits would follow a person through a body-swap, the traits most highly ranked were honesty, goodness/evilness, and conscientiousness. The category of moral traits as a whole was prioritized significantly higher than other categories, including perceptual traits, somatic traits, and even memory. These studies demonstrate quite plainly that our morality is central to what it means to be oneself and to know who we are.



Rather than basing how we should behave on the metaphysics of who we are, this study indicates that the reverse is true. Our ethical orientation appears to be the primary constituent of our selfhood! The relationship between who we are and how we ought to behave seems to be more complex than a direct cause and effect. It is instead more like an irresolvable dialectic: our selfhood is in large part constituted by our ethics, which our sense of self informs and directs in turn. Returning to our opening example, perhaps my status as a son is only truly and deeply established by the extent to which I fulfill my obligations to my parents, by how respectfully I treat them. The "essential moral self hypothesis" of Strohminger and Nichols merits greater exploration and will certainly have much to contribute to understanding this complex dynamic. Clarity about this dynamic will go far in the ethical evaluation of biotechnologies that potentially threaten our authentic selves, cognitive enhancers and moral enhancers in particular.



References



John Locke, An Essay Concerning Human Understanding (1694), Book II, Chapter XXVII, pp. 33–52



Trenton Merricks, There Are No Criteria of Identity Over Time, Noûs 32 (1998): 106-124



Nina Strohminger, Shaun Nichols, The Essential Moral Self, Cognition, 131 (2014): 159-171



Want to cite this post?



Davis, K. (2015). Selfhood and ethics: Who am I and why does it matter? The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2015/06/selfhood-and-ethics-who-am-i-and-why_15.html




Tuesday, June 16, 2015

Changing the Way We Think




by David Michaels



The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. David is a student at Emory University working on his Master's degree in Bioethics. After completing his graduate studies he will be attending medical school in Texas.  





Have you ever wondered what it would be like to have the ability to read minds? If you're like me, you've daydreamed about possessing this superpower. It's easy to imagine all of the fascinating ways you could exploit this gift to your liking. But after a while this myopic perspective is turned on its head when we imagine our own thoughts being read.  Quickly, almost instantaneously, we conclude with absolute certainty, "Nope, absolutely not - the power to read minds is a bad idea..." Some thoughts are probably best left alone in the mysterious, impenetrable fortress of privacy: our mind.





However, recent breakthroughs in neuroscience may challenge the notion that our mind is impervious to infiltration. Did you know that we may have the ability in the near future to record our dreams so that we can watch them later? Scientists have been working on developing technology that translates brain activity (measured in an fMRI machine) to visible images, allowing us to "see" our thoughts. Although this technology currently only utilizes real-time brain activity and cannot produce images from stored thoughts (i.e. memories), it nevertheless introduces the possibility that people will be able to "see" our thoughts - and maybe "read" them too - in the future.
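To give a flavor of how such decoding works, here is a heavily simplified sketch of the core idea behind reconstruction studies: learning a linear map from voxel activity back to pixel intensities. Everything below is synthetic toy data; real pipelines are far more sophisticated than this:

```python
# Toy sketch of fMRI image reconstruction: learn a linear map from
# voxel activity to pixel intensities via ridge regression.
# All data is synthetic; no real decoding pipeline is this simple.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels, n_pixels = 200, 500, 64  # assumed toy dimensions

images = rng.random((n_trials, n_pixels))         # stimuli shown to a subject
encoding = rng.normal(size=(n_pixels, n_voxels))  # brain's unknown response map
voxels = images @ encoding + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Ridge regression: W maps measured voxel patterns back to pixel space.
lam = 1.0
W = np.linalg.solve(voxels.T @ voxels + lam * np.eye(n_voxels),
                    voxels.T @ images)

reconstructed = voxels[:1] @ W  # "see" the first trial's stimulus
print(np.corrcoef(reconstructed.ravel(), images[0])[0, 1])  # near 1 on toy data
```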





This is just one of many controversies raised by emerging neurotechnologies. In a 2007 paper on 'neurotechnological lie detection,' Sarah Stoller and Dr. Paul Root Wolpe explore the question of whether or not the government has the right to invade our minds in order to obtain evidence that can be used in a court of law. Neuroscience has, for the first time in history, allowed researchers to bypass the peripheral nervous system and gather data directly from the brain (Wolpe et al. 2005). Although Stoller and Wolpe focus on the legality of these technologies and whether or not they violate our Fifth Amendment rights, I want to explore whether adopting technologies that unveil the privacy of the mind will change the way we think and the way that we live.








Let's start at the beginning. Why do we think of our mental "mind space" (i.e., thoughts and memories) differently from our physical property? One reason we feel differently about our thoughts is that they reside in the only place in the universe that is genuinely secure. Our mind is a loyal confidant, a safe haven for our thoughts, feelings, and memories, and no one is allowed access without our permission. This is a big deal: most of us dislike (or hate) the idea of a friend or family member, let alone a stranger, having unrestricted access to our smartphones or laptops, which hold just a tiny fraction of our thoughts. Now imagine every thought being made public.





This certainty of privacy and safety affects the way we think and what we think about. It gives us the ability to entertain ideas regardless of their legality, deviance from the norm or seemingly foolish nature. It allows us to simulate immature scenarios in our head or respond to people with exceedingly clever (if I do say so myself) smart-aleck quips without suffering the consequences. It grants us the precious opportunity to "think before we speak" so that we may predict how others will react to our comments. It fosters creativity and fends off boredom.





What happens if these thoughts are no longer private? What happens when our memories become analogous to a file cabinet - available for retrieval and viewing by anyone who knows where to look? How will this affect the dynamic of our thoughts and ultimately our lives? Would your computer and smart-phone habits change if you knew lots of people had unrestricted access to them? For most people, I think the obvious answer is yes. Living in a society with the capability to breach our mental privacy would change everything. There are two likely outcomes of living in such a world.






from Gajitz






On one hand, a world with no mental privacy would create a society of people hypersensitive to their surroundings. Not only would we be afraid to do "bad" things, but we would be afraid to merely think them. We would hesitate before looking at shameful pictures or controversial videos. We would become obsessed with placing ourselves in positions that do not evoke illicit or shameful thoughts. It would be as if our minds were placed in an interrogation room, their every move and whisper recorded and archived. Fear would dominate our lives.





On the other hand, the complete opposite may occur. A world devoid of privacy may lead to a hyper-tolerant society where feelings of embarrassment and shame are rarities. We would become so accustomed to other people's ways of thinking that terms like "politically correct" would be meaningless. Our hyper-tolerance would result from increased exposure to, and subsequent understanding of, how other people live their lives. A similar parallel can be seen in the technological boom of the last half-century: the internet has revolutionized the way humans communicate, exposing individuals to a great variety of people and cultures, fostering cultural tolerance, and ultimately promoting harmony and acceptance within society.





Do we really want to jeopardize the final frontier of human privacy by invading the mind? Are we prepared to suffer the consequences? Will it change the way we think?






References:






Sarah E. Stoller & Paul R. Wolpe, Emerging Neurotechnologies for Lie Detection and the Fifth Amendment, 33 Am. J.L. & Med. 359 (2007).



Paul R. Wolpe et al., Emerging Neurotechnologies for Lie-Detection: Promises and Perils, 5 Am. J. Bioethics 39 (2005).






Want to cite this post?



Michaels, D. (2015). Changing the Way We Think. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2015/06/changing-way-we-think.html

Tuesday, June 9, 2015

The Ambiguity of "Neurotheology" and its Developing Purpose

by Shaunesse' Jacobs



The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Shaunesse' is a dual master's student in Theological Studies and Bioethics at Emory, and her research interests lie in end-of-life care and religious practices surrounding death and dying.



Are religion and spirituality authentic belief systems that have thrived for millennia because of their truth? Or are they simply constructs of the brain to help humanity cope with the unknown? With the advancement of science, can religion and science work together to understand humanity? What do religion and science have to say collectively that has not been said individually? These questions continue to be asked with each scientific advancement, and even more so now that neurotheology is beginning to develop as a sub-discipline of neuroscience. Neurotheology is generally classified as a branch of neuroscience seeking to understand how religious experience functions within the brain. The field has recently taken off and continues to grow thanks to the research of Andrew Newberg and Mark Robert Waldman, but its aims were first pursued by James Ashbrook.


For Ashbrook, the goal of neurotheology is to question "and explore theology from a neurological perspective, thus helping us to understand the human urge for religion and religious myths." These two definitions seem very similar, but one implies that neurotheology is subordinate to theology while the other presents it as subordinate to neuroscience. This ambiguity is muddled further by Newberg in his work Principles of Neurotheology, where he supports the notion that competing and open-ended definitions for terms such as “religion,” “theology,” “spirituality,” and “neuroscience” are acceptable. Even while promoting open-ended definitions, Newberg suggests starter definitions as a basis for terms in this emerging field: “religion” as a particular system of faith and worship; “theology” as the study of God and God’s relation to the world; “spirituality” as the search for independent or transcendent meaning; and “neuroscience” as the study of how the nervous system develops, its structure, and what it does.



from wbur


Newberg further elaborates on this open-ended discussion in an interview with Skeptico guest host, Steve Volk. In the interview, Newberg suggests that “the neuro side include not just neuroscience but psychology and sociology and anthropology, and all of the different aspects that go into how we understand the human mind and the human brain.”[vii]  A similar suggestion is made for those on the theology side of the conversation to broaden the incorporated sub-disciplines to include “religious and spiritual practices like meditation and prayer, different types of experiences, mystical experiences, as well as theology and philosophy itself.”



Although the idea of open-ended definitions for an emerging field allows many people to be in conversation with one another to uncover how religion and the mind operate together, the looseness of the terms and their relationships to one another leads different populations to speak “different languages” and to use the field to promote conclusions it never intended. This effect is already occurring among Catholic and Shamanist scholars who seek to prove the validity and authenticity of their own traditions, using neurotheology to answer questions it is not currently equipped to answer. Professor Michael Winkelman understands neurotheology to be the science of altering consciousness and inducing spiritual experiences for the sake of health benefits.[ix] By his understanding, shamanism was the first branch of neurotheology because its aim was to induce spiritual experiences for health and personal benefits. I doubt that many scholars, if any, would say that Winkelman’s interpretation captures the actual purpose of neurotheology. Also misguided about neurotheology’s purpose is Wilfried Apfalter, who wants to conform neurotheology to a specific Catholic theological framework. Within this framework, neurotheologians would be well trained in neuroscience, remaining abreast of contemporary scientific research while also making Catholic beliefs and doctrines (divinely revealed by the Magisterium of the Church or the word of God) openly acceptable from a neuroscientific lens.[x]


This is not unique to religious and spiritual communities: scientists, too, have misguided understandings of neurotheology’s intended purpose, desiring to use the field to rule out religion as something humanity and the mind should still be subject to in an age of rapid scientific advancement. Through a series of experiments, Michael Persinger induced various types of thoughts and feelings while research participants were told to imagine seeing God. His experiments led him to conclude that "[s]eeing God’ is really just a soothing euphemism for the fleeting awareness of ourselves alone in the universe: a look in that existential mirror. The ‘sensed presence’ – now easily generated by a machine pumping our brains with electromagnetic spirituality – is nothing but our exquisite and singular self, at one with the true solitude of our condition, deeply anxious.”[xi] Maintaining open-ended definitions continues to pit the scientific community against the religious community, because both want to use a seemingly unified field to prove the truth of their own claims.



from Salon


Neurotheology has the potential to unite scientific inquiry and religious experience, not only to understand the relationship between the two as they exist in the brain, but also to answer questions of who we are and why we are who we are; however, Newberg’s hope for such a broadened dialogue between two opposing fields risks the loss of neurotheology’s purpose. Misinterpretations, intentional rejections of purpose, and unsuccessful methodologies abound as obstacles to neurotheology uncovering more about the nature of being human and humanity’s relationship to generations of persistent beliefs. Can these two fields be successfully integrated? If so, is “neurotheology” the best way to characterize the uniting of two fields that themselves conflate many different disciplines and practices? Maybe we can get there, but one of the first steps must be to concretize the objectives, terminology, and scientific approaches before we can truly uncover the relationship between the mind and religious experience.


References:



[i] Shukla, Samarth, Sourya Acharya, and Devendra Rajput. "Neurotheology-Matters of the Mind or Matters That Mind?" Journal of Clinical and Diagnostic Research : JCDR. JCDR Research and Publications (P) Limited, July 2013. Web. 01 Mar. 2015.  

[ii] Newberg, Andrew B. Principles of Neurotheology. Farnham, Surrey, England: Ashgate Pub, 2010. Print.

[iii] “Religion." Merriam-Webster.com. 2015. http://www.merriam-webster.com/dictionary/religion (8 April 2015).
[iv] “Theology.” Merriam-Webster.com. 2015. http://www.merriam-webster.com/dictionary/theology (8 April 2015).
[v] "Body/Mind/Spirit-Definitions and Discussion of Spirituality and Religion." Body/Mind/Spirit. Georgetown. Web. 08 Apr. 2015. <http://nccc.georgetown.edu/body_mind_spirit/definitions_spirituality_religion.html>.
[vi] "About Neuroscience." About Neuroscience. Georgetown University Medical Center, n.d. Web. 08 Apr. 2015. <http://neuro.georgetown.edu/about-neuroscience>.
[vii] Newberg, Andrew. "Dr. Andrew Newberg On God of the Fundamentalist Atheist." Interview by Steve Volk. Skeptiko: Science at the Tipping Point. N.p., 26 Apr. 2011. Web. 20 Mar. 2015. <http://www.skeptiko.com/135-dr-andrew-newburg-on-god-of-the-fundamentalist-atheist/>.
[viii] Ibid.
[ix] Winkelman, Michael. "Professor Argues That Shamanism Is the Original Neurotheology." Neurotheology & Shamanism. Greenwood Press, 5 June 2001. Web. 20 Mar. 2015. <http://www.cognitiveliberty.org/neuro/winkelman1.htm>.
[x] Wilfried Apfalter (2009) Neurotheology: What Can We Expect from a (Future) Catholic Version?, Theology and Science, 7:2, 163-174, DOI: 10.1080/14746700902796528
[xi] Hitt, Jack. "This Is Your Brain On God." Wired. N.p., Nov. 1999. Web. 20 Mar. 2015. <http://archive.wired.com/wired/archive/7.11/persinger.html?pg=1&topic=&topic_set=>.




Want to cite this post?



Jacobs, S. (2015). The ambiguity of "neurotheology" and its developing purpose. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2015/06/the-ambiguity-of-neurotheology-and-its.html







Tuesday, June 2, 2015

23andMe: The Ethics of Genetic Testing for Neurodegenerative Diseases


by Liana Meffert



The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Liana is a senior at Emory University majoring in Neuroscience and Behavioral Biology and Creative Writing (poetry). She is currently applying to Public Health graduate schools and considering a future in medicine. In her free time she enjoys running, reading, and her research on PTSD at Grady Memorial Hospital.




23andMe logo 



The face of genetic testing and counseling is in the midst of a major overhaul. Historically, a patient had to demonstrate several risk factors, including familial and medical history or early symptoms, in order to be tested for the likelihood of developing a neurodegenerative disease. Now, for the first time, the public has unrestricted and unregulated access to the relative probability of developing certain neurodegenerative diseases.






So why is finding out you may develop a neurodegenerative disease in later years different than learning you’re at high risk for breast cancer? Neurodegenerative diseases are unique in that they essentially alter one’s concept of “self.” Being told you may succumb to cancer at some point in your life is a much different scenario than being told your memories will slowly deteriorate or that the way you relate to your loved ones, or even the very things you enjoy, may change. For the first time in history, the potential for these drastic changes in your “future self” are available at the click of a button.






“23andMe” was* one such DTC (direct-to-consumer) genetic testing service, providing information for individuals to learn about and explore their genetic susceptibility. When the service was originally launched in 2008, anyone willing to submit a saliva sample and pay a fee could receive a report containing health-related genetic information. I was a customer of the original genetic testing service. After several weeks (the time it takes to process a sample), I could go online and view my health-related genetic information. What did I learn? To name a few things: I have a reduced risk of Alzheimer’s and Parkinson’s (possibly), and my genetic makeup suggests I am very unlikely to have red hair (true) or to enjoy the taste of cilantro (also true).






But what if I had a high probability of developing an untreatable neurodegenerative disease? One that would negatively influence my quality of life in later years? Information like this leaves the individual in a precarious position, yet the news may not be as detrimental as one would expect. Studies on quality of life after predictive testing for Alzheimer’s Disease (AD), Huntington’s Disease (HD), and ataxias have shown that “(a) extreme or catastrophic outcomes are rare; (b) consequences commonly include transiently increased anxiety and/or depression; (c) most participants report no regret; (e) many persons report important benefits from receiving the genetic information” (Paulsen, 2013).



Great, right? Not so fast. All of these studies were done in a typical genetic counseling environment, likely equipped with clinical geneticists, genetic counselors, and psychotherapists. As Roberts (2013) notes in his paper on the practical and ethical issues of genetic susceptibility testing, “the impact of testing on people without post-test counseling is unknown because it is considered standard of care to deliver predictive genetic test results within the traditional genetic counseling model…” Essentially, the outcomes for DTC genetic testing are unknown. This is a concerning gap that needs to be addressed.





Furthermore, we know relatively little about how our genes interact with our environment, so those official-looking results you get on the internet may not be as “official” as they seem. It may be that a woman in Atlanta with a specific genotype identified by “23andMe” develops a chronic illness, while a woman living on a farm in Iowa with a similar genotype does not. We don’t know. The ability to accurately predict the phenotype (how our genes are actually expressed) is limited. At best, it’s an informed estimate that remains open to interpretation. This is a hard thing to explain over one page on the Internet. Roberts also addresses these concerns in his paper: “APOE [the risk allele in this gene has some predictive value] testing has limited predictive value, and there are currently no proven prevention options for AD; for these and other reasons (e.g., potential psychological and social harms), the medical community has recommended against its use.”
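To see why a risk marker can have limited predictive value, here is a toy Bayes-rule calculation; the numbers are invented for illustration and are not real APOE statistics:

```python
# Toy Bayes-rule illustration of limited predictive value.
# All numbers are hypothetical, not real APOE figures.

def posterior(prior: float, p_marker_given_disease: float,
              p_marker_given_healthy: float) -> float:
    """P(disease | marker present), by Bayes' rule."""
    num = p_marker_given_disease * prior
    den = num + p_marker_given_healthy * (1 - prior)
    return num / den

# Suppose 10% of people develop the disease; the marker appears in 50%
# of those who do, but also in 20% of those who don't.
print(f"{posterior(0.10, 0.50, 0.20):.2f}")  # 0.22: elevated risk, not destiny
```

Even with a marker that is far more common in those who develop the disease, the absolute risk on these made-up numbers only rises from 10% to about 22%, which is exactly the kind of nuance a one-page web report struggles to convey.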






I propose an intermediary: someone to review and screen the results, sharing pertinent information with the patient and putting the results in context when necessary. As of August 2011, only two of the thirteen companies offering genetic susceptibility testing for neurodegenerative diseases required results to be given through a physician (Roberts, 2013). This is what needs to change, particularly since testing for neurodegenerative diseases is becoming increasingly accessible. In 2013, “nine companies market DTC genetic tests related to risk for AD, nine for MS, three for PD, three for ALS, two for PSP, one for Niemann-Pick disease, one for CJD, and one for vascular dementia” (Paulsen, 2013). “23andMe” is one of a growing number of companies that will have to negotiate the line between helpful and harmful health information.






Remind me again why neurodegenerative diseases present a special case of ethics?






Neurodegenerative diseases are unique in that they have the potential to change an individual’s sense of self; the discussions surrounding them therefore demand a certain level of expertise to guide patients through the appropriate steps in dealing with, and responding to, their results. Regulations should be put in place to prevent consumers (“patients”) from viewing the results for neurodegenerative diseases online, instead re-routing the information to a doctor, genetic counselor, or some other licensed professional. The emotionally laden aspects of neurodegenerative diseases are paramount: person-to-person contact is much more comforting than a computer screen, or even a “live chat.” The necessity of structured support surrounding such a life-altering disease is tenfold when it is not just a discussion of how to die, or when to die, but rather of how to live.






*In November 2013, the FDA suspended “23andMe” from releasing health-related results of its genetic testing out of concern for consumers. The FDA cited concerns about false negatives and positives and an overall lack of validity of some of the tests. Similar concerns are addressed in this paper.






References






Paulsen, J. S., Nance, M., Kim, J. I., Carlozzi, N. E., Panegyres, P. K., Erwin, C., ... & Williams, J. K. (2013). A review of quality of life after predictive testing for and earlier identification of neurodegenerative diseases. Progress in Neurobiology, 110, 2-28.






Roberts, J. S., & Uhlmann, W. R. (2013). Genetic susceptibility testing for neurodegenerative diseases: ethical and practice issues. Progress in Neurobiology, 110, 89-101.






Robillard, J. M., Federico, C. A., Tairyan, K., Ivinson, A. J., & Illes, J. (2011). Untapped ethical resources for neurodegeneration research. BMC Medical Ethics, 12(1), 9.






Want to cite this post?



Meffert, L. (2015). 23andMe: The Ethics of Genetic Testing for Neurodegenerative Diseases. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2015/05/23andme-ethics-of-genetic-testing-for.html