
Tuesday, October 25, 2016

Zombethics 2016: (in)visible disabilities and troubling normality


By Shweta Sahu







Zombethics Case Graphic 

With Halloween just around the corner, zombies and other atypical creatures are much on our minds, but such constructs are rarely considered from an ethical perspective. This year, on October 26th at 5:30 pm (Center for Ethics, 1531 Dickey Drive, Ethics Commons Room 102), the Emory Center for Ethics is collaborating with the Emory Integrity Project (EIP) to boggle your mind with ethical considerations and encourage you to consider how students should engage across (in)visible differences at Emory. The discussion will be based on three interesting case studies, which can be found here. These scenarios will lead to questions such as, ‘Should people ask others what gender pronouns they prefer to be associated with, even if the answer may seem “obvious” at first glance?’ Conversely, what are the implications of assuming a non-visible disability based on a person’s behaviors or appearance? The goal of the symposium is to help participants navigate controversial issues like these and respond effectively when such situations arise.




To find out more about the event, I spoke with coordinator Dr. Paul Wolpe from the Emory Center for Ethics as well as Ms. Emily Lorino and Dr. Rebecca Taylor from the Emory Integrity Project, and Dr. Karen Rommelfanger, chair of the Zombethics® conference series. Here’s what I asked:




In your own words, what do you think Zombethics™ is, and what does it represent?


According to Dr. Wolpe, “through history there have been portrayals of people with deformities and grotesque faces that are cast as alien and “other”, and against whom we measure our own humanity… we use either real or imagined monsters to try to understand the nature of what it means to be human and contrast ourselves by and against. The portrayal of deformed humans fascinates us and leads us to ask questions about ourselves and others we see as strange… Zombethics™ explores that in a deep way.”





Why would you encourage people to come?


Dr. Wolpe notes all of us are involved in this activity of casting people aside as “other,” especially since people now have the ability to alter themselves in various ways, “including body modifications like tattoos and piercings. When CDC creates a website telling you what to do during the apocalypse” you know there’s “deep resonance in pop culture” and it “poses age old deep, theological and social questions.” On top of that, he says it’s just fun!





Ms. Lorino, Project Coordinator of the EIP, encourages people to come to the symposium because it asks and addresses many questions students have but are afraid to ask. Additionally, she feels it “addresses lots of buzz words about diversity and inclusion, but provides more of a practical approach.” From experience, she says many people claim to know what to say, but not what to do, in certain scenarios, so “this panel will approach that—what does this look like in action.”




What are some questions or topics you secretly hope will come up in this year’s symposium?


Dr. Wolpe answers that he hopes to see “deep, introspective conversations about what it means to interact with people who appear different or have different perspectives than we do.” He hopes attendees will learn to respond in sensitive and supportive ways and walk away understanding the right way to speak with someone facing challenges such as questions of gender identity. He hopes this will help people negotiate the social world more easily and to “look at the nature of peoples with differences and how we respond to them.”




Similarly, Ms. Lorino adds that she hopes students will be able to take lessons learned from the panel and apply them as a regular practice on campus, rather than only when a department or program holds a special event. Furthermore, as part of the mission of EIP, she asks, “how do we integrate this into the life of the Emory community” and “how do we create better spaces to have these conversations?”





Dr. Taylor, a postdoctoral fellow with the EIP, also notes that the discussion will be grounded in the case study scenarios (see one below), but she anticipates that participants will bring their own experiences into the discussion, particularly success stories about navigating difficult scenarios in a respectful and hopeful way. The discussion will feature the perspectives of three panelists: Dr. Jennifer Sarrett, a Lecturer in the Center for the Study of Human Health; Michael Shutt, the Senior Director of Community in Emory Campus Life; and Malcolm Jones, a senior in the College from Stone Mountain, Georgia. The moderator will be Hannah Heitz, a senior in the College studying Psychology and Human Health.




Example case study scenario 


How did you first get interested and involved in this?


Dr. Wolpe, who has been a bioethicist for 30 years, states that he has long been involved in these issues, has thought about the idea of monsters, and has spent much time thinking about the “other,” so this was a natural progression. More fundamentally, these questions of how to treat others (those we consider different or somehow separate even from ourselves) are basic questions of bioethics. He also notes that putting a whimsical spin on these questions does not make them any less serious.





The Emory Integrity Project is involved this year, Dr. Taylor states, to reach an undergraduate audience, since past years have drawn mostly postgraduates and scholars.




So how is this year’s Zombethics related to zombies?


“Zombies are in this other category,” Ms. Lorino states, “not person, not dead, so what are they?” The symposium draws loosely on this concept of zombies by talking about invisible disabilities. We characterize some people as other, yet they are “still people, but why are they in another category as if they’re distinct from regular humans? Why are they not ‘normal’? Who do we decide to put on the sidelines because they’re ‘not like us’?”





Click here to learn more about this year’s conference, which will take place on October 26th at 5:30 pm at the Center for Ethics, Ethics Commons Room 102. For more info about past years’ conferences, please refer here. Dr. Karen S. Rommelfanger, chair and co-founder of the Zombies and Zombethics!™ Annual Conference, tells us that this year’s Zombethics™ is a teaser for a much larger Zombethics™ conference next year that will be themed around Frankenstein, so stay tuned!



Want to cite this post?



Sahu, S. (2016). Zombethics 2016: (in)visible disabilities and troubling normality. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/10/zombethics-2017-invisible-disabilities.html



Tuesday, October 18, 2016

Racial Biases in Face Judgment- When You “Look” Criminal


By Carlie Hoffman






Image courtesy of Pixabay


Racial bias can, and often does, occur at several points in the criminal justice process, including initial police contact, eyewitness identification, and jurors’ decisions. This disparity in how people are treated throughout the justice system is likely influenced by the criminal black male stereotype that pervades American culture (1). Some propose that this stereotype originated in the slavery and post-slavery eras, with the onset of Jim Crow laws and other post-slavery codes that instituted segregation and sanctioned racially biased punishments for blacks, especially black males. Racially biased punishments are still present today: a 2015 article in Slate magazine reported that black Americans are more likely than white Americans to have their cars searched, to be arrested for drug use, to be jailed while awaiting trial, and to serve longer sentences for the same offenses.





The presence of racial bias in the criminal justice system is irrefutable, and investigation into the elements fueling this bias has recently moved into the realm of neuroscience. In last month’s Neuroethics and Neuroscience in the News talk, Dr. Heather Kleider-Offutt from Georgia State University explained that not all black men are stereotyped in the same way. Instead, certain black men are subject to a higher degree of negative bias than other black men, and inclusion in this select subgroup is based on face-type and not skin color alone.






What is this face-type that engenders such a high degree of negative bias? In her article, “Black Stereotypical Features: When a Face-Type Can Get You in Trouble,” Kleider-Offutt stated that research participants judged faces with so-called “afro-centric” or “stereotypic” features as being more threatening than faces with non-stereotypic, or “atypical,” black features. Previous studies performed in Kleider-Offutt’s lab also found that faces with afro-centric features were judged as being more criminal, aggressive, and violent than atypical faces, regardless of facial expression or attractiveness. This bias against afro-centric features also extended beyond the boundaries of race and sex: both white male and female faces with afro-centric features were viewed as being more aggressive than atypical faces (2). Kleider-Offutt explained that no single feature or fixed combination of features underlies this negative bias. Instead, several features, such as full lips, a wide nose, eye color, coarse hair, and a dark complexion, work together to influence face-type judgment. She further posited that these stereotypical features may represent the prototypical black male face and thus may more strongly elicit the criminal black male stereotype.





Decision makers in the legal system are also prone to this face-type bias: males with stereotypic features are given harsher sentences (including having a higher probability of being given the death sentence if convicted for murder) and are more vulnerable to misidentification for a crime than males with atypical features (2-4). One study performed by Kleider-Offutt’s group, which she also demonstrated for our audience, involved showing participants a series of stereotypical and atypical faces that were paired with certain careers (artist, teacher, and drug dealer). The study results indicated that a stereotypic face was more likely to be misremembered as a drug dealer than an atypical face. Similarly, people with stereotypic faces were more likely to be correctly remembered as drug dealers than were people with atypical faces. This effect held true for men and women as well as for black and white people with stereotypic features, indicating that the negative bias was associated with stereotypic features in general instead of solely with race or sex (2). Results were varied in our audience, with some members stating that they felt, understandably, self-conscious about shouting out answers openly in the group.








Image courtesy of Wikipedia

Alarmingly, this bias is difficult to suppress. Kleider-Offutt discussed a study that had participants read biographical descriptions of people and then showed the participants a series of faces. Participants were told to avoid using race- or feature-based stereotyping as they judged whether the biographical descriptions they read could have been about one of the faces they saw. Some participants also performed a secondary task while making these decisions, which introduced a “cognitive load,” or mental burden. The study found that when participants were not cognitively loaded, they were able to avoid making race-biased decisions. However, even without a cognitive load, participants were unable to avoid making feature-biased judgments. Thus, feature bias was still present even when participants were consciously attempting not to use it, indicating that this bias is both difficult to overcome and seemingly unavoidable (5). Furthermore, Kleider-Offutt pointed out that she and others have found that this bias is not restricted to white participants, but is also seen in black participants (2).





Taken together, these studies indicate that bias against stereotypic facial features is ingrained in our culture, so much so that it has become almost a knee-jerk reaction. But, if everyone is biased, what should we do about it?





Integral to answering this question is understanding that we, as Americans, are not born with a bias against stereotypic features. Instead, we are culturally conditioned to respond to stereotypic faces in a negative way. We have been taught, through our society, to perceive some faces as threatening and others as neutral. But because we have been taught, both implicitly (and, unfortunately for some, explicitly), to think this way, our bias is not inevitable and can be avoided, with effort at both the individual and the societal level.





For too long, our culture has associated black men (specifically) and people with stereotypic features (generally) with criminal behavior. This association between stereotypic features and criminality is perpetuated in part through the media and in part through the way in which we talk about black males and crime. This ingrained bias has altered how we as a society interact socially, criminally, and legally with black men and with all people with stereotypic features. Thus, to start reducing and eventually removing these race- and feature-biases, we need to be more mindful of the way in which we talk about and present black males, people with stereotypic features, and other disenfranchised groups in the media and with each other. Furthermore, we need to start replacing stereotypes, engaging in counter-stereotype imaging and perspective-taking, and exposing ourselves to people from races, cultures, and perspectives beyond our own. Such techniques have been successfully used to reduce racial bias (6) and may also be able to reduce our bias against stereotypic features.





Moving forward, remember that we are not born this way (contrary to what Lady Gaga would say), and we are also not helpless products of our sociocultural environment. By being aware of our cultural bias against stereotypic features, we can now start working on changing our culture, taking pause with our biased responses, and challenging the automatic assumptions we make about individuals solely based on their face or skin color.




References



1. Smiley C, Fakunle D. From "brute" to "thug:" the demonization and criminalization of unarmed Black male victims in America. J Hum Behav Soc Environ. 2016;26(3-4):350-66. doi: 10.1080/10911359.2015.1129256. PubMed PMID: 27594778; PMCID: PMC5004736.



2. Kleider HM, Cavrak SE, Knuycky LR. Looking like a criminal: stereotypical black facial features promote face source memory error. Mem Cognit. 2012;40(8):1200-13. doi: 10.3758/s13421-012-0229-x. PubMed PMID: 22773417.



3. Blair IV, Judd CM, Chapleau KM. The influence of Afrocentric facial features in criminal sentencing. Psychol Sci. 2004;15(10):674-9. doi: 10.1111/j.0956-7976.2004.00739.x. PubMed PMID: 15447638.



4. Kleider HM, Knuycky LR, Cavrak SE. Deciding the fate of others: the cognitive underpinnings of racially biased juror decision making. J Gen Psychol. 2012;139(3):175-93. doi: 10.1080/00221309.2012.686462. PubMed PMID: 24837019.



5. Blair IV, Judd CM, Fallman JL. The automaticity of race and Afrocentric facial features in social judgments. J Pers Soc Psychol. 2004;87(6):763-78. doi: 10.1037/0022-3514.87.6.763. PubMed PMID: 15598105.



6. Devine PG, Forscher PS, Austin AJ, Cox WT. Long-term reduction in implicit race bias: A prejudice habit-breaking intervention. J Exp Soc Psychol. 2012;48(6):1267-78. doi: 10.1016/j.jesp.2012.06.003. PubMed PMID: 23524616; PMCID: PMC3603687.



Want to cite this post?



Hoffman, C. (2016). Racial Biases in Face Judgment- When You “Look” Criminal. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/10/racial-biases-in-face-judgment-when-you.html





Tuesday, October 11, 2016

Rethinking Irreversibility and Its Implications on Determining Death


By Alex Lin




Alex Lin is an undergraduate student at Rutgers University pursuing a dual degree in Biological Sciences and Philosophy. As an aspiring physician, he is interested in medical ethics and runs the Rutgers Bioethics Society alongside a diverse team of student thinkers. Alex is from Paramus, New Jersey, and volunteers as an emergency medical technician for his community.





Death, by definition, is irreversible. The notion of irreversibility is a central component of the current standards of death, cardiopulmonary and neurological alike. Given that the neurological criterion (the irreversible cessation of whole-brain function) is the legally recognized standard of death in many countries, including the United States [1], forthcoming advancements in neurotechnology under the BRAIN Initiative will be crucial to the accurate determination of death. With the development of technologies that allow scientists to study how individual neurons interact in significantly greater detail, questions emerge concerning the particular moment of truly irreversible total brain failure.





Consider the relatively new discovery of human adult neurogenesis. The established view was that the nervous system is fixed and neurons are unable to regenerate. However, this old dogma has been overturned by recent research in neuroscience. Studies have revealed that new neurons are continuously generated in the hippocampus and olfactory bulb, and adult hippocampal neurogenesis may even contribute to human brain function [2]. Modern technologies and research techniques enable scientists to study neurogenesis, which demonstrates how new scientific discoveries can debunk long-standing views of neuroanatomy.







Image courtesy of WikiCommons.

Also, recent advancements in neuroimaging have enabled physicians to detect signs of awareness in patients diagnosed as being in a vegetative state, where traditional clinical assessments that attempt to elicit predictable behavioral responses fail to do so. In recent years, numerous studies have recommended the use of neuroimaging techniques, such as fMRI and EEG, in addition to standard behavioral assessments in order to obtain a more accurate diagnosis of various disorders of consciousness [3,4,5]. For example, in Dr. Adrian Owen's pioneering 2006 study, fMRI revealed that a patient in a vegetative state demonstrated residual cortical activity, and the patient was able to express signs of her covert awareness by performing specific mental imagery tasks [5].





With the advancement of neuroimaging and development of more sensitive brain electrography monitoring devices, researchers may start detecting previously undetectable brain activity. Negative readings may become positive readings with more sensitive devices. In order to diagnose irreversible brain failure, the physician must perform a series of neurological examinations. These examinations can include assessing the absence of certain brainstem reflexes, such as the corneal reflex and the pharyngeal or gag reflex, as well as other muscle movement tests [6]. Still, the clinical examination of brain death is not consistent, even across the U.S. [7]. To achieve a confirmatory diagnosis, physicians can request additional tests, such as electroencephalography (EEG). However, current EEGs have a number of limitations. A few square centimeters of the cortex have to be activated simultaneously in order to generate readings that can be detected by the electrodes, which makes EEG insufficiently sensitive to less robust neural activities [8]. Furthermore, false readings can occur due to electronic background noise, especially in the ICU setting [6]. With increasingly sensitive neuroimaging devices, perhaps the determination of total brain failure, and generally what is considered irreversible, can be further refined.





In 2013, President Obama announced the BRAIN Initiative, a collaborative research initiative to advance our understanding of the human brain. The research goals of the BRAIN Initiative include generating circuit diagrams of the brain and developing new technologies that will allow researchers to rigorously study the most complex organ of the human body. In addition to previous investments made by the NIH, the BRAIN Working Group outlined a commitment of $4.5 billion in federal funding over the next 10 years, starting in fiscal year 2016. Thus, this year marks the start of the first five-year phase of the BRAIN Initiative: technological development and validation. Ultimately, research projects of the BRAIN Initiative will continue to redefine what counts as physiologically irreversible and thus challenge the moment of death.




The ethical consequences of such developments should be explored. In particular, it would be important to encourage research projects that explore irreversibility as a component of our scientific conceptions of death. It is important to note that legal death, namely death by neurological criteria, is biological death [9]. Grasping the role of irreversibility in brain death will advance our understanding of death as a biological phenomenon. Moreover, the swiftness of post-mortem action makes urgent the need to accurately determine the moment of death. Such action includes the procurement of life-sustaining organs for transplantation and experimental research on brain-dead patients. Because organ retrieval before the patient is truly dead would be morally reprehensible, as expressed by the Dead Donor Rule, organs must be procured (albeit shortly) after death to ensure they are viable for transplantation. Thus, the determination of death must be rigorous to ensure that the condition is truly irreversible in potential organ donors.





As we obtain more information about neural networks and develop tools to measure brain activity with greater accuracy, it is likely that what is currently considered irreversible brain failure will cease to be irreversible in the near future. In this way, the identification and determination of brain death are bound by the limits of current technology. Discoveries made through the BRAIN Initiative will likely continue to challenge the notion of irreversibility in neuroscience and may lead to novel methods to treat, or even reverse, the dying process.



Acknowledgement 

I would like to thank Nada Gligorov, PhD for providing inspiration and comments on these issues. 



References 



1. Wijdicks, E. F. (2002). Brain death worldwide: Accepted fact but no global consensus in diagnostic criteria. Neurology, 58(1), 20-25.



2. Ernst, A., & Frisén, J. (2015). Adult neurogenesis in humans: Common and unique traits in mammals. PLoS Biology, 13(1).



3. Cruse D, Chennu S, Chatelle C, Bekinschtein TA, Fernandez-Espejo D, et al. (2011). Bedside detection of awareness in the vegetative state. Lancet, 378(9809), 2088–94.



 4. Monti M. M., Vanhaudenhuyse A., Coleman M. R., Boly M., Pickard J. D., et al. (2010). Willful modulation of brain activity in disorders of consciousness. N. Engl. J. Med., 362(7), 579–89.



5. Owen, A. M. (2013). Detecting consciousness: A unique role for neuroimaging. Annual Review of Psychology, 64(1), 109-133.



 6. Wijdicks, E. F. (2001). The diagnosis of brain death. N. Engl. J. Med., 344(16), 1215-1221.



 7. Greer, D. M., Wang, H. H., Robinson, J. D., Varelas, P. N., Henderson, G. V., & Wijdicks, E. F. (2016). Variability of brain death policies in the United States. JAMA neurology, 73(2), 213-218.



 8. Smith, S. J. (2005). EEG in the diagnosis, classification, and management of patients with epilepsy. Journal of Neurology, Neurosurgery & Psychiatry, 76(suppl 2).



 9. Gligorov, N. (2016). A defense of brain death. Neuroethics, 1-9.




Want to cite this post?



Lin, A. (2016). Rethinking Irreversibility and Its Implications on Determining Death. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/10/rethinking-irreversibility-and-its.html


Friday, October 7, 2016

Prescribing the Placebo Effect


By Sarika Sachdeva




This post was written as part of a class assignment from students who took a neuroethics course with Dr. Rommelfanger in Paris in the summer of 2016.





Sarika Sachdeva is an undergraduate junior at Emory studying Neuroscience and Behavioral Biology and Economics. She is involved with research on stimulant abuse and addiction under Dr. Leonard Howell at Yerkes National Primate Research Center. 





In 2006, Dr. Ted Kaptchuk designed a clinical drug trial to evaluate a new pain pill in patients with severe arm pain. Participants in the study were assigned to receive either the pill or an acupuncture treatment for several weeks. Dr. Kaptchuk found that the people who received acupuncture ended up with more pain relief than those who had taken the pain pill. This difference was surprising, not because the pain pill was expected to be more effective, but because neither treatment was real: the pain pills contained cornstarch and the acupuncture was done with false needles that never pierced the skin.




Placebos are often considered baseline measurements, used as the standard scientific control to determine whether a drug is actually making a biological difference or whether its effects are just ‘inside the head’ and no better than a sugar pill (Anderson 2013). Utilizing the placebo effect as a form of treatment carries a stigma: only 0.3% of physicians admit to regularly prescribing placebos, in contrast with data indicating that around 50% actually do (Rommelfanger 2013).







Image courtesy of Pixabay


Recently, however, a growing body of evidence has shown that placebos produce real physiological changes, making them an active treatment not unlike ibuprofen, aspirin, or other traditional pharmaceuticals. This would explain why Dr. Kaptchuk’s study resulted in two separate outcomes for two different placebos. The first evidence that placebos are not inert came from a study on coronary heart disease that compared the mortality rates of a lipid-lowering drug against a placebo. The overall results were insignificant; there was less than a 1% difference in mortality between the drug and the placebo, but a closer examination revealed an interesting trend. In both the drug and the placebo groups, participants who occasionally missed doses had a mortality rate roughly 10% higher than those who fully complied with the schedule (Speers 2011). In other words, how often the placebo, or inactive pill, was taken had a significant impact on mortality. This challenged the assumption that placebos have no physical impact on the body. Other studies using functional brain imaging have provided further evidence of the physiological changes produced by mere placebos. One of these studies compared fMRI scans of depressed men who had taken either a placebo or an antidepressant and found that both led to similar increases in activity in areas of the prefrontal cortex associated with pain and decreases in areas of the amygdala associated with anxiety, a clear indication of the validity of a placebo (Benedetti et al. 2005).





The physiological effects of a placebo treatment have important implications when considering treatment options for patients. This is of special importance for conditions that have no known treatment, or for when many treatment options have been exhausted with no success. Because placebos are not accepted as an active form of treatment, they are often not considered or taken seriously (Hernandez et al. 2014). In light of the fact that placebos have proven physiological effects and success no different from other medications and treatments, not using their benefits in a mainstream way may actually be prolonging the pain and afflictions of people with conditions that could be treated with a placebo (Rommelfanger 2012).





Use of the placebo as a treatment must be done with caution and within ethical limits (Lichtenberg et al. 2004). The most polarizing aspect of the placebo is whether the patient is being deceived, since they think they are being given an effective medication when instead they are being treated with an inert pill seemingly unrelated to their condition. This does not qualify as deception: first, the placebo pill is not inert and has been shown to have physiological effects, as mentioned previously. Second, if the physician informs the patient that they are being given a treatment that has proven to be effective, they are not lying. The physician does not have to pretend the placebo is a miracle cure, or that he or she knows how it works, in order to prescribe it to a patient who could benefit. Deception is often believed to be necessary for the placebo effect to take place, but this is not necessarily the case (Blease et al. 2016). A physician could simply tell the patient that they are being given a placebo that has been shown to work even though it is unclear why, eliminating concerns about deception while still benefiting from the placebo’s effects.





Based on research into the efficacy of placebos and evidence that indicates they are not inert, I advise that the placebo be recognized as a valid form of treatment which physicians may choose to prescribe at their discretion with patient consent. I recommend medical practice regarding placebos be refined in order to accommodate recent research and account for potential ethical concerns through a three-fold approach:





1) Educating patients about the knowns and unknowns of the placebo as a treatment, thereby allowing patients to make an informed decision when consenting to a treatment that may be a placebo, or that they know is a placebo.



If a placebo is a viable treatment, physicians should present it as an option along with any other treatments being considered. The physician should detail the advantages of a placebo treatment, ideally in person, and clarify the word ‘inert,’ as it inaccurately defines the placebo as a fake treatment whose effects are ‘all in the head.’ Patients should also be informed of the difference between not knowing they are taking a placebo and knowing that they are, and physicians who use placebos should respect a patient’s decision about whether they would like to be told when they are being given one.






2) Training healthcare professionals with a standardized protocol in order to avoid misconceptions and ensure that the latest research is accounted for in their practice.



This will help prevent patients from being given contradictory information by different physicians, pharmacists, or nurses, and will keep these professionals up to date with the latest research on the efficacy of placebos so that they can incorporate the optimal conditions for placebo use into their practice. This training can be implemented as part of license renewal.




The standardized protocol should include guidelines on how to handle consent for placebo treatments. Physicians can choose not to disclose that the medication a patient is given is a placebo; however, if asked about the drug specifically by a patient, the physician must disclose that the medication is a placebo. Physicians must also honor a patient’s decision not to be treated with a placebo after being presented with that option.






3) Encouraging and funding further research into the physiology, efficacy, and optimal conditions of placebos.



Further research into placebos, especially into their potential negative effects, is needed to determine the conditions under which, and the subsets of patients for whom, they are most effective. The method through which a placebo is administered is important to its outcome, so even factors such as the prescribed dosage and the color of the pill can affect the results. This information can also be used to make non-placebo medication more effective, similar to how dosage curves are established in clinical drug trials. Research is also needed into the mechanisms by which placebos work, which could provide valuable insight into several diseases.



The above approach will help destigmatize placebo treatments in medicine and open up viable treatment options to people who cannot be treated with other medication. Placebo treatments expand the potential of modern medicine by creating real physiological results through nonspecific treatment. The placebo effect is seen even when patients know that a placebo is being given, and informed consent can allow patients to be comfortable that the medicine they are receiving is legitimate, even if it is ‘just’ a sugar pill.



References: 



Anderson, L. 2013. What is a Placebo? Drugs. Available here.



Benedetti, F., H. S. Mayberg, T. D. Wager, C. S. Stohler, and J. Zubieta. 2005. Neurobiological Mechanisms of the Placebo Effect. The Journal of Neuroscience. Available here.



Blease, C., L. Colloca, and T. J. Kaptchuk. 2016. Are open-label placebos ethical? Informed consent and ethical equivocations. Bioethics. Available here.



Hernandez, A., J. Banos, C. Llop, and M. Farre. 2014. The definition of placebo in the informed consent forms of clinical trials. PloS One. Available here.



Kaptchuk, T. J., W. B. Stason, R. B. Davis, et al. 2006. Sham device versus inert pill: randomized controlled trial of two placebo treatments. BMJ. Available here.



Lichtenberg, P., U. Heresco-Levy, U. Nitzan. 2004. The ethics of the placebo in clinical practice. Journal of Medical Ethics. Available here.



Rommelfanger, K. S. 2012. Take two placebo pills and call me in the morning. Huffpost Science. Available here.



Speers, R. 2011. The power of drug compliance: Active ingredient or placebo. Modern Medicine Network. Available here.




Want to cite this post?



Sachdeva, S. (2016). Prescribing the Placebo Effect. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/10/prescribing-placebo-effect.html


Thursday, October 6, 2016

Guilty or Not Guilty: Policy Considerations for Using Neuroimaging as Evidence in Courts



By Sunidhi Ramesh




This post was written as part of a class assignment from students who took a neuroethics course with Dr. Rommelfanger in Paris in the summer of 2016.





Sunidhi Ramesh, an Atlanta native, is a third year student at Emory University where she is double majoring in Sociology and Neuroscience and Behavioral Biology. She plans to pursue a career in medicine and holds a deep interest in sparking conversation and change around her, particularly in regards to pressing social matters and how education in America is both viewed and handled. In her spare time, Sunidhi is a writer, bridge player, dancer, and violinist.





 In 1893, Dr. Henry Howard Holmes opened his World’s Fair Hotel to the world [1].





But what his guests did not know was that the basement was filled with jars of poison, boxes of bones, and large surgical tables. Chutes from the guest rooms existed only to slide bodies into a pile downstairs. In the few months that the hotel was open to the public, Holmes, dubbed America’s first serial killer, killed an estimated 200 guests. Two years later, he was put on trial, found guilty, and sentenced to death [1].







H. H. Holmes, image courtesy WikiCommons


"I was born with the devil in me. I could not help the fact that I was a murderer, no more than the poet can help the inspiration to sing,” Holmes is quoted to have said [1]. But our judicial system does not care much for whether or not a murderer “can help it.” A crime is a crime; a murder is a murder. A guilty crime and a guilty mind are enough to warrant retribution—a punishment.





Holmes’ crimes were premeditated. They were planned, thought out, and acted upon. And that is all that the judge and jury knew for sure. They didn’t know if he was “born with it” or if it was out of his control. They had no way of knowing that, so they hanged him.





But what if we can know? What if we have the technology to determine that a crime is not due to free will but to a series of chemical signals and biological outputs entirely out of conscious control? 





Cue neuroimaging and the ongoing debate on its use in criminal courts.





Functional scanning techniques, including PET, SPECT, and, most recently, fMRI, are beginning to infiltrate the field of forensics. These technologies assume that “disruptions in the function of discrete parts of the brain can lead to alterations in particular aspects of cognition and behavior” [2]. For example, damage in the prefrontal cortex, a region of the brain critical for reasoning, planning, impulse control, and moral judgment, can imply impaired decision-making abilities [2]. Other areas such as the amygdala (involved with sudden emotions, including anger) and the anterior cingulate cortex (involved with empathy and compassion) can come into question in legal settings as well. But ultimately, these scanning techniques simply measure changes in blood flow and correlate those changes with increased or decreased brain activity in specific areas [2].





Recently, functional neuroimaging evidence has been used in criminal cases “in support of the insanity defense, claims of incompetence to stand trial, and pleas for mitigation in sentencing” [2].





In 2007, Peter Braunstein, dressed as a firefighter, set off a smoke bomb and knocked on a woman’s apartment door. After knocking her out with chloroform, Braunstein “sexually assaulted her for the next 13 hours” [2]. During his trial, his defense team used PET scans to argue that “Braunstein had decreased function in his frontal lobes,” the part of the brain that controls “initiation and cessation of behavior, planning, and moral judgment.” The defense contended that Braunstein was “completely unable to plan” and incapable of thinking ahead [2].





Perhaps, if this technology had been around in the 1800s, Holmes’ claims could also have been corroborated by the structures in his brain. Had that happened, America’s first serial killer might have avoided execution. He could even have walked free, or faced a greatly reduced punishment for his crimes.





These are some of the applications of neuroimaging in courts today. It is a matter that needs regulation and guidelines to ensure appropriate and justified use.





According to a study conducted in 2006, “there have been roughly 130 reported opinions involving PET and/or SPECT evidence” [2]. In about four-fifths of those cases, the courts admitted the functional neuroimaging scans as evidence. And although “[neuroimaging] evidence [is most] often admitted along with other neurological and clinical evidence of the party’s mental condition,” questions remain about the implications of using this evidence alone in future cases [2].





In terms of ethical considerations, the current debate primarily concerns matters of privacy, the factual value of neuroimaging outside of a medical context, and the relationship between brain and thought [3].





That said, current policy on neuroimaging is lacking. Advocates and opponents of using neuroimaging in courts have tried to push the current regulations either way; still, the technology is new enough that few guidelines are in place.





 Here is what we should consider:







Image courtesy of WikiCommons


a) Understanding the meaning and limitations of neuroimaging





Neuroimaging isn’t perfect. A simple fMRI or brain scan generates images that are open to interpretation and human subjectivity. The vast majority of the general public seems to be under the impression that these scans can give direct insight into what a person is thinking or feeling, but this is generally untrue [4].





Thus, first and foremost, if this evidence is to be admissible in courts, judges, juries, and attorneys need to be trained in the value, meaning, and limitations of these scans. They need to understand that these scans yield pictures that must then be interpreted, as well as the full extent of what this technology can and cannot imply. Personality traits, mental illness, sexual preferences, and predisposition to drug addiction are all types of information that can be gathered from neuroimaging [4]. There are, however, important limits to the type and accuracy of this information; these restrictions need to be openly shared and discussed among members of the court as part of training programs that take place long before actual cases are reviewed.





b) Slippery slope of inferring a state of mind 





Although behavior can be explained by brain evidence, brain evidence cannot directly imply behavior. It is easy to find an anomaly in the brain and point to it as the source of any kind of wrongdoing. But no two brains are exactly alike. Where do we draw the line between person-to-person variation and a serious abnormality that caused a crime to be committed?





To address this, a conference must be convened. This conference, consisting of the most renowned ethicists, neurologists, judges, and criminal defense lawyers, must set a goal of determining where this line is drawn. What are the parameters of an “average brain”? How different is “abnormal”? Case studies should be referenced to understand how abnormalities in specific regions could impact behavior.





c) Considering “free will”





What does giving neuroimaging validity in courts imply about our own selves? If we deem all neuroimaging evidence as admissible, are we acknowledging that all crime is committed due to the wiring of the brain? Where do we draw the line between intentionality/free will and the biochemical processes of the brain in regards to committing a crime?





 The implications of this must be considered through further research, and the idea of “free will” as a concept should be brought up during conversations about neuroimaging. Furthermore, these discussions should occur in coordination with philosophers and scientists; ideally, this may begin some level of understanding of the intersection between biological decision-making machinery and “our own” goal-driven decisions.





d) Protecting the privacy of the defendant





It is important to remember that a great deal of neuroscience relies on the assumption that the mind is simply “the processes of the organic brain.” This has huge implications because “it means that when we look at the functioning brain through [these imaging technologies], we are essentially looking into the mind of another person” [5].





What is critical is that these neuroimaging techniques are used only for their intended purpose in the case at hand. There is a range of information that can be procured from the scans, but evidence that is not relevant to the case should not be researched or referred to.





That being said, criminal court cases, especially those tried in higher courts, receive a great deal of media attention. With the nature of technology today, the public eye sees all; how, then, do we keep the defendant’s brain records private? It is necessary for this information to be concealed and protected. It should be presented solely to the judge, the jury, and the relevant criminal attorneys, with security measures taken to prevent it from reaching the general public. Furthermore, the neurologists involved with the testimony should be unrelated to the case and selected by the courts, to prevent any bias from either the prosecution or the defense.





e) Reliability and accuracy of results 





With the technology as it is today, one test and one expert cannot be enough to reliably present this information. At least two independent, separate tests must be conducted with more than one neurologist to corroborate the findings. If any inconsistencies are found either between the two scans or between the opinions of the two neurologists, the information should be deemed inadmissible in the courts on the grounds of being unreliable.





f) Lie detection as a basis for neuroimaging policy





Lie detection through the polygraph has been used to discriminate lying from truth telling since the early 1900s. Although the technique is far from perfect, many courts use it as a form of supporting evidence. Lie detection poses many of the same questions as neuroimaging. How accurate is it? Is there any privacy involved? When can we use it and when can we not [6]?





In order to address these questions with regard to neuroimaging, we should consider turning to current policies on the use of lie detection in the courtroom and framing neuroimaging policy around them.





g) Use of neuroimaging when necessary





There is a range of implications that come with the consistent implementation of neuroimaging in court cases [7]. Will criminals end up always relying on this technology to bail them out? Will we get to a point where every criminal is effectively “made innocent” through neuroimaging? Perhaps. To avoid this, stipulations need to be drawn for cases where neuroimaging can be used. This could be in cases where the intentionality of the defendant is uncertain or where no real evidence exists. Neuroimaging should not be open for use in every case, and a strong regulation system should be built to ensure this.





h) Neuroimaging as the “be-all and end-all”





Similar to the current role of lie detection techniques, neuroimaging should be taken with a grain of salt. It should not be used as the only evidence that decides the fate of a case; the judge, once trained in understanding the science behind these techniques, should be given the right to decide whether or not the scans are relevant or admissible in relation to each specific case. Until the technology is refined to near-perfection, neuroimaging should never be the only evidence available on the mental capacity of an individual.





In addition to these policy considerations, more research on the topic is necessary to ensure that this technology is indeed accurate and applicable to this field.





Ideally, neuroimaging as a technology would need to advance further before being used in criminal courts, in order to avoid premature, inappropriate use. Whatever the case, these techniques are picking up speed as relevant tools in courts around the United States. They must be properly regulated, monitored, studied, and discussed to ensure their safe and ethical use as a form of evidence in criminal case proceedings.




 References 



 1. "Dr. H. H. Holmes." The Devil In The White City, by Erik Larson. N.p., 2008. Web. 18 June 2016.



2. Appelbaum, P. S. (2015). Law & psychiatry: Through a glass darkly: Functional neuroimaging evidence enters the courtroom. Psychiatric Services.



3. Agid, Yves, and Ali Benmakhlouf. "Ethical Issues Arising out of Functional Neuroimaging." National Consultative Ethics Committee for Health and Life Sciences 116 (n.d.): 1-19. Web. 17 June 2016.



4. Feigenson, N. (2006). Brain imaging and courtroom evidence: On the admissibility and persuasiveness of fMRI. International Journal of Law in Context, 2(03), 233-255.



5. Finn, D. P. (2006). Brain imaging and privacy: how recent advances in neuroimaging implicate privacy concerns. bepress Legal Series, 1752.



6. Rusconi, E., & Mitchener-Nissen, T. (2013). Prospects of functional magnetic resonance imaging as lie detector.



7. Baertschi, B. (2011). Neuroimaging in the Courts of Law. Journal of Applied Ethics and Philosophy, 3, 9-16.




Want to cite this post?



Ramesh, S. (2016). Guilty or Not Guilty: Policy Considerations for Using Neuroimaging as Evidence in Courts. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/10/guilty-or-not-guilty-policy.html



Wednesday, October 5, 2016

The Predictive Power of Neuroimaging


By Ethan Morris




This post was written as part of a class assignment from students who took a neuroethics course with Dr. Rommelfanger in Paris in the summer of 2016.





Ethan Morris is an undergraduate senior at Emory University, majoring in Neuroscience and Behavioral Biology with a minor in History. Ethan is a member of the Dilks Lab at Emory and is a legislator on the Emory University Student Government Association. Ethan is from Denver, Colorado and loves to ski.   





Background and Current Research





Neuroscience is a rapidly burgeoning field that increasingly faces complex issues as scientists learn more about the human brain and, by extension, about personal identity. One technology that has gained attention in the last two decades is brain imaging, a technique that uses various tools to evaluate the brain’s functional response to the world. Some of the more commonly used brain imaging devices are functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), both of which measure blood flow (albeit by different mechanisms) through the brain. These blood flow results show which areas of the brain are metabolically active, and thus activated by the task at hand. Using these devices, researchers can determine the activity of brain regions associated with certain types of sensory and perceptual processing, as well as cognitive function.




While used in clinical settings for neurological and psychiatric diagnoses, neuroimaging is also applied in a variety of research contexts to learn about the neural correlates of human behavior. One study examined fMRI activation levels in the amygdala, one of the brain’s centers for processing salient stimuli and emotion. The researchers found that white individuals displayed greater amygdala activation for unfamiliar black faces than familiar white faces, and moreover, there was a positive correlation between amygdala activation and unconscious racial bias (Phelps et al., 2000). Importantly, imaging cannot read human minds, but it is significant that brain-imaging patterns are being used currently to make important inferences about unconscious thoughts, even if they are not manifested behaviorally.








Image courtesy of WikiCommons

In another study, researchers found that prisoners with higher levels of psychopathy were more likely to have fewer connections between the parietal cortex and the anterior cingulate cortex (Philippi, 2015). The implication of this study is that it may be possible to identify psychopaths and potentially predict who is more likely to be rearrested based on brain connectivity. In a juvenile study, researchers used fMRI and found that certain patterns of functional connectivity between the premotor and prefrontal cortices were predictive of future impulsivity (Shannon et al., 2011). These studies demonstrate the current capability of neuroimaging to assess unconscious biases and perhaps predict future behavior, such as recidivism or impulsivity.



Ethical Considerations 




In order to inform policy, there are important ethical considerations regarding both current neuroimaging knowledge and future applications of this technology. Provided these studies are replicated and verified, neuroimaging might be used to infer unconscious attitudes and predict future behavior. Is it ethical to image the brains of prisoners to determine their likelihood of ending up back in jail? Even if certain images are correlated with rates of recidivism, it is still difficult to accurately predict future human behavior using neuroimaging. Brain images are transient portraits, which limits researchers’ ability to extrapolate from moment-to-moment brain states to lasting labels for the brain and the person (Fuchs, 2006). Additionally, brain imaging is susceptible to misinterpretation by researchers who do not fully understand the appropriate conclusions one can draw from imaging. This could lead to dangerous conclusions from brain images about the entire identity of a person without meaningful evidence. Another limit of neuroimaging is the lack of causal data (e.g., brain activity X causing behavior Y). With neuroimaging, researchers are often only able to correlate brain images with certain functions or mental states (Miller, 2008). Knowing these limits, it does not seem possible right now to definitively predict future behavior. However, placed in eager hands, brain imaging could be used to predict recidivism, which would inevitably produce some false positives, placing prisoners at the mercy of their brain’s activity, perhaps without justification.





On a fundamental level, is it fair to judge a person for what their brain looks like? In the case of the correctional system, this may undermine its purported goal of assisting “offenders in becoming law-abiding citizens” (US Federal Bureau of Prisons). For example, consider a prisoner who appears completely rehabilitated, but whose brain images show a prefrontal cortex deficit associated with impulsivity and future recidivism. Would society deem it fair to place him under stricter parole than it would have without brain imaging? This can be reduced to whether brain images should be accounted for, even if what is observed does not manifest in behavior.





Another ethical concern is society’s widespread belief in free will. It is a commonplace belief that, as a human, one has an intrinsic ability to choose what one will do, no matter the environment or genetics that may predispose certain behaviors. Would society think it is ethical to judge a person for their neurobiology? Some may argue that it would contradict the belief that released prisoners have the ability to avoid committing another crime. Humans value the right to autonomy, or self-determination, so should parole boards meddle in the autonomy of others based on imaging conclusions about their risk for future behavior?








Image courtesy of Pixabay

This ethical issue is particularly pertinent for juvenile offenders. To what degree should the justice system implement brain imaging to predict recidivism or impulsivity if it has been shown the human brain does not finish developing until the mid-20s (Giedd, 1999)? Because studies have shown that adolescents gain white matter and lose impulsivity with age, it may not be ethical to use brain images to predict behavior if they are no longer accurate within a couple of years (Casey, 2005). One final ethical consideration of neuroimaging is privacy. There is potential that in the future, scientists may be able to use brain imaging as an identity scanner. Scientists might be able to “read personality features, psychiatric history, truthfulness and hidden deviations from a brain scan” (Fuchs, 2006). As Fuchs mentions, this application could get misappropriated quickly and invasively, as private companies and lawyers may misuse brain imaging to label and potentially defame people, all based on brain scans. The issue of consent also arises: how does someone lying in an fMRI scanner know what the person behind the operating computer is looking at?



Policy Recommendations 




With these technological limitations and ethical issues in mind, there are multiple policy recommendations to prevent violations of privacy and consent, false positives, and dangerous conclusions. On the issue of consent and privacy, the Department of Health and Human Services (HHS) should ensure that institutional review boards (IRBs) enforce limits on what researchers can image. These limitations should extend into the courtroom, where fMRI could be applied as superior or overriding evidence without sufficient basis. Researchers should only be able to image regions of the brain needed for their research and should be prohibited from using unrelated information that falls outside the scope of participants’ consent and privacy and the research’s purview. Participant consent forms should contain an explicit explanation of the technology, its capabilities, and the targeted brain areas so all parties are informed.





Whether through the US Securities and Exchange Commission (SEC) or the US Food and Drug Administration (FDA), the private sector should not have access to brain imaging in its current state. Due to the realistic limitations of imaging, false positives and unwarranted speculation about personal identity are likely to result from unregulated use of brain imaging and should be prevented to avoid personal judgments that may not have any tangible basis. Additionally, the possibility for this research to negatively influence public understanding of neuroscience dictates that powerful tools such as neuroimaging should not be introduced outside of research settings until the tool’s capabilities and limitations are fully understood.





The US Department of Justice should outlaw use of brain imaging in youth detention centers to avoid rampant false positive predictions, given the current knowledge about how decision-making improves with brain development. In addition, parole boards should not be allowed to use imaging to determine the chances an adult prisoner will commit another crime. If the justice system collectively decides brain images are paramount to demonstrated human behavior, brain imaging could theoretically antiquate and undermine the justice system’s efforts to improve actual human behavior. This must not be the case—the ultimate goal of prison should be to change behavior, not neurobiology.





In the realm of research, review boards must rigorously review brain-imaging studies. It is simply too dangerous to publish conjectural conclusions about brain imaging, given the distinct possibility of misappropriation and of sensationalist media stories. The caveats and limitations of brain imaging (e.g., the lack of causal data) must be emphasized at the front line, by the researchers themselves, in an attempt to prevent false positives and to prevent non-researchers from misapplying neuroimaging. Researchers and journals must be vigilant about disseminating correct results with appropriately stipulated interpretations to prevent misreporting. Furthermore, researchers must be held accountable by IRBs for their studies in an effort to prevent the publication of flimsy associative data that may be misinterpreted. Additionally, there should be restrictions on researchers' conflicts of interest regarding neuroimaging; for example, the US Department of Justice should be barred from funding neuroimaging aimed at predicting post-prison behavior until the technology proves reliable and capable of supporting causal claims.





Realistically, neuroimaging will continue to be used in both research and clinical contexts going forward, and it should continue to be used appropriately. Because of this, HHS should implement a public education strategy to inform members of the media, lawyers, parole board members, and the general public about the capabilities and limitations of brain imaging, in order to prevent avoidable ethical problems and undue public fear of neuroimaging.




References 



Casey, B.J., A. Galvan, T.A. Hare. 2005. Changes in cerebral functional organization during cognitive development. Current Opinion in Neurobiology, 15: 239-244.



Federal Bureau of Prisons. About our agency: A foundation built on solid ground. Available here.



Fuchs, T. 2006. Ethical issues in neuroscience. Current Opinion in Psychiatry, 19: 600-607.



Giedd, J.N., J. Blumenthal, N.O. Jeffries, et al. 1999. Brain development during childhood and adolescence: a longitudinal MRI study. Nature Neuroscience, 2(10): 861-863.



Miller, G. 2008. Growing pains for fMRI. Science, 320(5882): 1412-1414.



Phelps, E.A., K.J. O’Connor, W.A. Cunningham, et al. 2000. Performance on indirect measures of race evaluation predicts amygdala activation. Journal of Cognitive Neuroscience, 12(5): 729-738. 



Philippi, C.L., M.S. Pujara, J.C. Motzkin, J. Newman, K.A. Kiehl, M. Koenigs. 2015. Altered resting-state functional connectivity in cortical networks in psychopathy. The Journal of Neuroscience, 35(15): 6068-6078.



Racine, E., O. Bar-Ilan, J. Illes. 2005. fMRI in the public eye. Nature Reviews Neuroscience, 6(2): 159-164.



Shannon, B.J., M.E. Raichle, A.Z. Snyder, et al. 2011. Premotor functional connectivity predicts impulsivity in juvenile offenders. PNAS, 108(27): 11241-11245.



University of Washington. Brain imaging. Available here.




Want to cite this post?



Morris, E. (2016). The Predictive Power of Neuroimaging. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/10/the-predictive-power-of-neuroimaging.html


Sunday, October 2, 2016

Neuroimaging in Predicting and Detecting Neurodegenerative Diseases and Mental Disorders


By Anayelly Medina




This post was written as part of a class assignment by students who took a neuroethics course with Dr. Rommelfanger in Paris in the summer of 2016.



Anayelly is a Senior at Emory University majoring in Neuroscience and Behavioral Biology. 




If your doctor told you they could determine, through a brain scan, whether or not you would develop a neurodegenerative disease or mental disorder in the future, would you undergo the process? Detecting the predisposition to or possible development of disorders or diseases, not only in adults but also in embryos and fetuses, through genetic testing (e.g., preimplantation genetic testing) has been a topic of continued discussion and debate [2]. Furthermore, questions regarding the ethical implications of predictive genetic testing have been addressed by many over the past years [4,8]. More recently, however, neuroimaging and its possible use in detecting predispositions to neurodegenerative diseases and mental disorders have come to light. The ethical questions raised by predictive neuroimaging technologies are similar to those posed by predictive genetic testing; nevertheless, given that the brain is the main structure analyzed and affected by these neurodegenerative diseases and mental disorders, different questions (from those posed by predictive genetic testing) have also surfaced.






Computerized Axial Tomography (CAT), Positron Emission Tomography (PET) with radioactive tracers, Magnetic Resonance Imaging (MRI), and Functional Magnetic Resonance Imaging (fMRI) are all neuroimaging technologies currently used in the field of neuroscience. While each of these technologies functions differently, they all ultimately provide information on brain function or structure. Furthermore, these instruments have, in recent years, been used to explore the brain in search of predictive markers for neurodegenerative diseases and mental disorders, such as Parkinson's disease (PD), schizophrenia, Huntington's disease, and Alzheimer's disease [1,9,11,12]. For example, Stoessl [11] explains how PET scans and radiotracers have shown evidence of dysfunction in the dopamine pathway (which is known to be compromised in PD) in asymptomatic individuals from families with known inherited PD (although it is unclear whether this dysfunction is an early marker of those who will develop PD or is simply associated with the inherited PD gene). In addition, Callicott et al. [1] provided fMRI evidence of a greater response in the right dorsolateral prefrontal cortex in cognitively intact siblings of patients with schizophrenia (an abnormal response similar to that seen in patients with schizophrenia). Furthermore, Paulsen et al. [9] used MRI scans to show that striatal and white-matter volumes could be used to predict diagnosis proximity (estimated years to diagnosis) for Huntington's disease. Finally, the use of neuroimaging to establish predictive markers of disease and mental disorders is clearly seen in the Alzheimer's Disease Neuroimaging Initiative (ADNI), which was started in 2007 and is currently active [12]. The potential ability to predict whether or not an individual will develop a neurodegenerative disease or mental disorder may seem like an initiative without faults. However, there are questions surrounding the ethics of such an ability that must be addressed.








Image courtesy of Flickr

The use of neuroimaging data to predict neurodegenerative diseases and mental disorders is an initiative that should continue to be pursued, since early intervention could help prevent or delay the disease or disorder; however, that research should also take into account the ethical implications of generating such information and providing it to the public. Some of the main ethical issues raised by the increasing use of predictive neuroimaging concern intervention, privacy, and access. In terms of intervention, the main concern is determining when to notify the patient: this would require an established degree of predictive probability, along with a known rate of false positives, sufficient to warrant telling the patient about the neurodegenerative disease or mental disorder [3]. The problem is further complicated when assessing the brains of younger individuals, given that their brains are still undergoing developmental changes and the reliability of prognoses at such an early age has yet to be assessed. Beyond timing and accuracy, other issues include the use of neuroimaging to predict diseases or mental disorders that have no cure or treatment, as well as the impact such information could have on the patient (such as the burden of knowledge [3] or the effects of stigma). Furthermore, the diseases and mental disorders being predicted with neuroimaging all affect the brain and its function, and thus possibly "also affect mental competence, mood, personality, and sense of self" [10]. In addition to intervention, privacy and discrimination are also at play, as employers or insurers could decide whether a person is hired, or what type of healthcare policy an individual receives, based on predictive neuroimaging tests [7]. Finally, the ethical concerns surrounding access to neuroimaging technology must also be addressed. Neuroimaging scans are typically expensive, and their use in predicting the development of diseases and disorders may widen healthcare disparities; this could become a greater problem if the technology were to become commercialized and accessible only to the privileged [6,7].
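To make the notification problem concrete, the degree to which a "positive" predictive scan should be trusted depends heavily on how common the disease is in the group being tested. The short Python sketch below illustrates this with hypothetical test characteristics; the 85% sensitivity, 95% specificity, and the prevalence values are assumptions chosen for illustration, not figures from any published predictive marker.

```python
# Hypothetical illustration of why notification thresholds must account for
# prevalence: the same scan is far less trustworthy in a low-risk population.
def ppv(sensitivity, specificity, prevalence):
    """Chance that a positive predictive scan is a true positive (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.85, 0.95  # assumed test characteristics (illustrative only)
for prevalence in (0.01, 0.05, 0.20):
    print(f"prevalence {prevalence:.0%}: PPV = {ppv(sens, spec, prevalence):.2f}")
```

With these assumptions, a positive scan would be correct only about 15% of the time at 1% prevalence but about 81% of the time at 20% prevalence, which is why any threshold for warranting patient notification has to be set with the tested population, and not just the scan itself, in mind.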





In order to address these ethical issues, changes must be made at various stages of this predictive technology's development and use. To address timing and accuracy of intervention, more studies examining the correlation between brain structure or function and predisposition to (or prediction of) neurodegenerative diseases and mental disorders must be conducted; in particular, studies that control for the complementary approach of genetic predisposition testing would provide more conclusive and valid support for associations between brain scan findings and predicted disease or mental disorder. Furthermore, more initiatives like the Alzheimer's Disease Neuroimaging Initiative (ADNI) should be created for other neurodegenerative diseases and mental disorders (with possible funding from the NIH, for instance through the Functional Magnetic Resonance Imaging Core Facility (FMRIF)). Organizations like ADNI aim to create a network of shared data on biomarkers in the brain in order to facilitate early detection of disease [12]; initiatives like this could also further the development of methods and protocols for predicting disease onset. In addition, the devices used in predictive neuroimaging testing must be regulated: as Greely [5] explains, the Food and Drug Administration (FDA) typically has jurisdiction over drugs, biologics, and medical devices, but if a test uses a device (in this case a neuroimaging device) that has already been approved, the FDA need not approve its use in a new test. An appeal to the FDA to reevaluate this policy should therefore be pursued in order to establish safe and effective use of the technology. Furthermore, to protect patient privacy and to guard against discrimination or unfair actions by insurers or employers based on predictive neuroimaging data, protocols and regulations should be established by the U.S. Department of Health and Human Services (HHS) and the U.S. Equal Employment Opportunity Commission. Finally, HHS should also be involved in making predictive neuroimaging accessible to those who cannot afford these services.



 References 



 1. Callicott, J. H., Egan, M. F., Mattay, V. S., Bertolino, A., Bone, A. D., Verchinski, B., & Weinberger, D. R. 2003. Abnormal fMRI Response of the Dorsolateral Prefrontal Cortex in Cognitively Intact Siblings of Patients With Schizophrenia. American Journal of Psychiatry 160(4): 709-719. doi:10.1176/appi.ajp.160.4.709.



 2. Farrimond, H. R., & Kelly, S. E. 2011. Public Viewpoints on New Non-invasive Prenatal Genetic Tests. Public Understanding of Science, 22(6): 730-744. doi:10.1177/0963662511424359



 3. Fuchs, T. 2006. Ethical Issues in Neuroscience. Current Opinion in Psychiatry 19(6): 600-607. doi:10.1097/01.yco.0000245752.75879.26.



 4. Fulda, K. G. 2006. Ethical Issues in Predictive Genetic Testing: A Public Health Perspective. Journal of Medical Ethics 32(3): 143-147. doi:10.1136/jme.2004.010272.



 5. Greely, H. 2004. The Neuroscience Revolution, Ethics, and the Law. Markkula Center for Applied Ethics. Available here. (accessed June 19, 2016).



 6. Illes, J., & Racine, E. 2005. Imaging or Imagining? A Neuroethics Challenge Informed by Genetics. The American Journal of Bioethics 5(2): 5-18. doi:10.1080/15265160590923358 .



 7. Illes, J., Rosen, A., Greicius, M., & Racine, E., 2012. Ethics Analysis of Neuroimaging in Alzheimer’s Disease. Annals of the New York Academy of Sciences 1097: 278-295. doi:10.1196/annals.1379.030.



 8. Lea, D. H., Williams, J., & Donahue, M. P. 2005. Ethical Issues in Genetic Testing. Journal of Midwifery & Women's Health 50(3): 234-240. doi:10.1016/j.jmwh.2004.12.016.



 9. Paulsen, J. S., Nopoulos, P. C., Aylward, E., Ross, C. A., Johnson, H., Magnotta, V. A., . . . Nance, M. 2010. Striatal and White Matter Predictors of Estimated Diagnosis for Huntington Disease. Brain Research Bulletin 82(3-4): 201-207. doi:10.1016/j.brainresbull.2010.04.003.



 10. Roskies, A. 2016. Neuroethics. The Stanford Encyclopedia of Philosophy. Available here. (accessed June 19, 2016).



 11. Stoessl, A. J. 2012. Neuroimaging in the Early Diagnosis of Neurodegenerative Disease. Translational Neurodegeneration 1(1), 5. doi:10.1186/2047-9158-1-5.



 12. Weiner, M. W., Veitch, D. P., Aisen, P. S., Beckett, L. A., Cairns, N. J., Cedarbaum, J., . . . Trojanowski, J. Q. 2015. 2014 Update of the Alzheimer's Disease Neuroimaging Initiative: A Review of Papers Published Since its Inception. Alzheimer's & Dementia 11(6). doi:10.1016/j.jalz.2014.11.001.




Want to cite this post?



Medina, A. (2016). Neuroimaging in Predicting and Detecting Neurodegenerative Diseases and Mental Disorders. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/10/neuroimaging-in-predicting-and.html