
Tuesday, January 9, 2018

Dog Days: Has neuroscience revealed the inner lives of animals?



By Ryan Purcell






Image courtesy of Pexels.


On a sunny, late fall day with the semester winding down, Emory neuroscientist Dr. Gregory Berns gave a seminar in the Neuroethics and Neuroscience in the News series on campus. Berns has become relatively famous for his ambitious and fascinating work on what he calls “the dog project,” an eminently relatable and intriguing study that uses functional imaging technology to uncover how the canine mind works.





The seminar was based on some of the ideas in his latest book, What It’s Like to Be a Dog (and Other Adventures in Animal Neuroscience). In it, Berns responds to philosopher Thomas Nagel’s influential anti-reductionist essay “What Is It Like to Be a Bat?” and recounts his journey to perform the world’s first functional magnetic resonance imaging (fMRI) session on an awake, unrestrained dog. Like so many seemingly impossible tasks, getting a dog to step into an fMRI machine and remain still during scanning became achievable once it was broken down into many small, discrete steps (see training video here).




In his book, Berns returns to the central question of what it’s like to be a dog several times and offers partial answers that hint at a bigger idea. For example, after training the dogs to exhibit restraint and delayed gratification, Berns observed activation in a brain area (the inferior frontal gyrus of the prefrontal cortex) homologous to one that is activated in humans performing an analogous task. He therefore suggested that while it may be a stretch to know all at once what it’s like to be a dog, we may be able to infer pieces of the canine experience: when a dog delays gratification, it feels a lot like it does for us.






Bats use echoes to build a sonic map of the world around them.

(Image courtesy of Wikimedia.)


Nagel chose to write about bats because, in his words, they are “a fundamentally alien form of life.” To illustrate this point, he highlights how the primary sensory perception for bats as they move through the world is echolocation. On its face, echolocation does indeed seem like a completely separate sense that we humans do not possess, one that would preclude our understanding of what it is like to be a bat. Berns, however, noted that while most of us do not use sonar to navigate the world, we can certainly tell the difference between the sound of our voice in a closet and in a concert hall. Moreover, there is substantial evidence that people with vision impairments can, with training, use echolocation to navigate the world. With some effort, the alien can become understandable.




Listening to the presentation, I started to wonder: has a study ever found that an animal is less intelligent or less capable than we had thought? Have we – particularly scientists who conduct research with animals (like myself) – been knowingly and willfully ignorant of their conscious experience in order to avoid the really difficult questions? In the audience, Center for Ethics director Dr. Paul Root Wolpe raised the question of whether progress will mean a continued, inexorable expansion of the kind of research restrictions that we now have on non-human primates to so-called “lower species.” Is an implication of Berns’ research that fundamental animal experiences are very similar to ours? Essentially, is there something special about dogs, or do they just serve as a highly trainable window into the underappreciated abilities of the animal kingdom? From what I heard, I think Berns would answer yes on both counts—that there is in fact something special about dogs, but that they can also serve as “ambassadors” to teach us about the inner lives of animals. The way dogs have co-evolved with humans over the past 30,000 years may make them a unique case, particularly because of our ability to work closely with them, even in a research setting.







Image courtesy of Wikimedia.


Dr. Berns told us that, in his mind, the most subversive element of the dog project is that he tried to offer his canine participants self-determination. He explained that they were treated much as small children are when they participate in research studies, with the right to refuse at any time. The ethicists in the room pushed back on this point – if the dogs were trained with treats, weren’t they coerced, or at least manipulated? It is difficult to know for sure. In the book, however, Berns discusses one particular dog that trained extremely quickly and made him wonder whether she had any will of her own that wasn’t shaped by her owner. The dogs may have been manipulated to some degree, but they all climbed into the scanner on their own and faced no physical barriers to leaving it at any time, which is radically different from how most research animals are treated.





A significant, largely unintended consequence of this work has been its implications for animal rights. In the final chapter of his book, Berns lays out his thoughts on what the dog project means for this very issue. He acknowledges, “Neuroscience isn’t going to be able to tell us exactly what we should do, but it…will change what we know about animals’ internal experiences.” For some, the idea that there may be far more similarity in conscious experience among vertebrates than we had thought will change their personal views on the use of animals for food, clothing, and research. For others, this knowledge further muddies an already impossible problem. Perhaps it is time to finally shake off the insidious view of animals as Cartesian automatons and consider, for a change, erring on the side of assuming a bit more conscious self-awareness in animals than we currently have evidence for. This is easy for dogs, but what does it mean for less cuddly, supposedly more necessary creatures (cattle, chickens, lab animals, and so on)? At minimum, this could be an opportune time to remember the three R’s of animal research: replacement, reduction, and refinement. Berns himself does not seem to be advocating an end to animal research or livestock production, but rather a hard look at how we treat the creatures that we directly or indirectly depend on while they are in our care.





As he was wrapping up the discussion, Dr. Berns mentioned that in many ways this study tells us more about people than about dogs. Neuroethics program director Dr. Karen Rommelfanger then asked, “So has this changed how you work with human subjects?”





“Yeah,” Berns replied with a wry smile. “I don’t anymore.”




Want to cite this post?



Purcell, R. (2018). Dog Days: Has neuroscience revealed the inner lives of animals? The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2018/01/dog-days-has-neuroscience-revealed.html

Tuesday, December 12, 2017

Neuroethics in the News Recap: Psychosis, Unshared Reality, or Clairaudience?



By Nathan Ahlgrim








Even computer programs, like DeepDream, hallucinate.

Courtesy of Wikimedia Commons.

Experiencing hallucinations is one of the most sure-fire ways to be labeled with one of the most derogatory of words: “crazy.” Hearing voices that no one else can hear is a popular laugh line (look no further than Phoebe in Friends), but it can be a serious and distressing symptom of schizophrenia and other incapacitating disorders. Anderson Cooper demonstrated the seriousness of the issue, finding the most mundane of tasks nearly impossible as he lived a day immersed in simulated hallucinations. With increasing visibility and sensitivity, psychotic symptoms are less frequently the butt of jokes, but people with schizophrenia and others who hear voices are still victims of stigma. Of course, people with schizophrenia deserve treatment within the mental healthcare system to ease their suffering and manage their symptoms, but there is also a population who are at peace with the voices only they can hear. At last month’s Neuroethics and Neuroscience in the News meeting, Stephanie Hare and Dr. Jessica Turner of Georgia State University drew the contrast between people with schizophrenia and people whom scientists call “healthy voice hearers.” In doing so, they discussed how hearing voices should not necessarily be considered pathological, reframing what healthy and normal behavior should include.





Their discussion centered on work out of Dr. Philip Corlett’s lab [1], which compared how people with schizophrenia and self-described psychics experience auditory hallucinations. An article in The Atlantic later followed, profiling one of the self-described psychic mediums and her relationship with the voices only she hears. The problem with labels comes to the fore in the very premise of the study: the psychics are labeled non-psychotic even though they perceive sounds in the absence of any external noise. Mental health practitioners must then decide whether to pathologize the experience – to label it as a symptom of a disorder. Refraining from pathologizing it makes sense under the current definition of a “disorder,” which includes the criterion of causing distress. Because the psychics are not bothered by the voices they hear, their voice hearing is not considered a symptom of a disorder or psychosis. However, given our society’s negative view of hallucinations and psychosis, how many people are inappropriately pathologized for similar experiences?








Image courtesy of Pixabay.

David Rosenhan presented a pessimistic critique of how Western medicine deals with hallucinations in the 1970s with his report On Being Sane in Insane Places [2]. He and his colleagues presented themselves to psychiatric wards, reporting auditory hallucinations without any other symptoms. Once committed, they behaved as they normally would and no longer reported any hallucinatory events. Even so, the healthcare professionals never suspected or accused them of malingering, nor granted them a clean bill of health. As a result, Rosenhan and others argued that mental health professionals focus on symptoms to the exclusion of a holistic picture, and that hallucinations are overly pathologized.





Rosenhan would be happy to see how the American medical system’s treatment of hallucinations has changed over the intervening decades, with continuing revisions to the Diagnostic and Statistical Manual of Mental Disorders (DSM). Qualifying symptoms for schizophrenia are hallucinations, delusions, disorganized speech or behavior, and negative symptoms (social withdrawal, anhedonia, etc.). Beginning with DSM-III in 1980, a diagnosis required “significant impairment” associated with at least one of these symptoms. With the publication of DSM-5 in 2013, a diagnosis now requires at least two of these qualifying symptoms together with significant occupational or social dysfunction. Hallucinations are therefore no longer sufficient in and of themselves for a diagnosis of schizophrenia, and healthy voice hearers are free from diagnosis.








Stephanie Hare describing how the brain acts

during auditory hallucinations

Non-voice hearers can balk at the idea that hallucinations are part of a typical or “normal” spectrum of experience. However, anywhere between 5 and 28% of the general population experience auditory hallucinations, and only 25% of those meet the criteria for psychosis [3]. Can an experience shared by up to a quarter of the population really be considered abnormal? Surprisingly, the neuroscientific evidence also supports the dissociation between auditory hallucinations and psychiatric disorders: brain activity during auditory hallucinations does not differ between healthy voice hearers and those with a psychiatric diagnosis [4]. The brains of the people we label sick and healthy seem to produce auditory hallucinations in the same way. How, then, should auditory hallucinations and healthy voice hearers be treated by psychiatrists and society writ large?





The concept of non-distressing hallucinations is foreign to those whose only exposure to the phenomenon is through portrayals of schizophrenia. And yet, most healthy voice hearers classify their voices as positive, controllable, and not bothersome [1]. The resulting argument is that if hallucinations do not make a person want to seek help, we should let them be. But treatment-seeking is not always a prerequisite for a mental disorder; some disorders do not feel out of place to the individual at all. People with personality disorders often fit that category. Although Borderline Personality Disorder causes significant distress and treatment-seeking, people with other personality disorders, such as Narcissistic Personality Disorder, do not perceive their behavior as abnormal and are not distressed by it [5]. Overall, people with personality disorders are very likely to push back against the need to treat the underlying condition [6]. Diagnoses occur because the disorder has deleterious effects on the person’s social and professional life, not because the person seeks treatment. But psychics can experience similar ostracism. As the medium interviewed for The Atlantic article states, “You just can’t go into a room and say ‘Hey, I’m a psychic medium’ and people are gonna accept you.” Hallucinations can interfere with a person’s life whether they are attributed to schizophrenia or to psychic sensitivity, with prejudices stemming from either fear or disdain. How mental health professionals define “significant distress” to accurately account for both experiences will shape how the perception of illness, and the stigma surrounding it, evolves.





Applying Lessons Learned








People with chromesthesia associate sounds with color.

Image courtesy of Wikipedia

Healthy voice hearers are beginning to speak out and seek acceptance. Their mission to erode the stigma surrounding auditory hallucinations does not need to start from scratch. Synesthesia, the perceptual blending of two or more senses, is outside the realm of typical experience, and yet synesthetes are viewed with wonder, not fear or pity. As with so many other topics, our collective internet search behavior gives away our prejudices: the top Google result for “synesthesia in the media” is the BBC article “How synaesthesia inspires artists.” In contrast, “schizophrenia in the media” returns an academic article finding that the majority of characters with schizophrenia in movies released between 1990 and 2010 “engaged in dangerous or violent behaviors.” Synesthesia has uniquely captured a positive public image. More directly, one specific type of hallucination has already been accepted as normal and healthy: in the grey zone between wakefulness and sleep, hypnagogic hallucinations produce sensory experiences without external stimuli, such as sudden noises or vivid visual scenes. In contrast to wakeful hallucinations, these are accepted as a non-pathological occurrence in many people’s lives.





To reach a similar point of acceptance, healthy voice hearers would benefit from a spectrum approach. Binary health/illness evaluations are now being replaced with dimensional assessments, and auditory hallucinations may belong on one end of a spectrum of perceptual vividness, inside the range of normal experience for many people. All these strategies have one common theme: deliberate language. Calling a person ‘schizophrenic,’ ‘crazy,’ or even ‘hallucinating’ instantly pathologizes them and strips them of identity. Replacing that vocabulary with inclusive language like ‘person with schizophrenia,’ ‘psychosis,’ and even ‘nonconsensual reality’ gives agency and acknowledges divergent experiences. Such deliberation over language is often accused of being too politically correct, but it is the first step in fostering a safe environment for people across the entire range of sensory experiences. Only in a safe environment can voice hearers seek help if they need it, or be transparent about their experiences if not.




References






[1] Powers AR, 3rd, Kelley MS, Corlett PR. (2017). Schizophr Bull 43: 84-98.


[2] Rosenhan DL. (1973). Science 179: 250-8.


[3] de Leede-Smith S, Barkus E. (2013). Frontiers in Human Neuroscience 7.


[4] Diederen KMJ, Daalman K, de Weijer AD, Neggers SFW, van Gastel W, Blom JD, Kahn RS, Sommer IEC. (2012). Schizophrenia Bulletin 38: 1074-82.


[5] Caligor E, Levy KN, Yeomans FE. (2015). The American journal of psychiatry 172: 415-22.


[6] Tyrer P, Mitchard S, Methuen C, Ranger M. (2003). Journal of personality disorders 17: 263-8.



Want to cite this post?



Ahlgrim, N. (2017). Neuroethics in the News Recap: Psychosis, Unshared Reality, or Clairaudience? The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2017/12/neuroethics-in-news-recap-psychosis.html

Tuesday, October 24, 2017

Too far or not far enough: The ethics and future of neuroscience and law



By Jonah Queen








Image courtesy of Pixabay.

As neurotechnology advances and our understanding of the brain increases, there is a growing debate about whether, and how, neuroscience can play a role in the legal system. In particular, some are asking whether these technologies could ever be used to accomplish things that humans have so far been unable to do, such as performing accurate lie detection and predicting future behavior.





For September’s Neuroethics and Neuroscience in the News event, Dr. Eyal Aharoni of Georgia State University spoke about his research on whether biomarkers might improve our ability to predict the risk of recidivism in criminal offenders. The results were published in a 2013 paper titled “Neuroprediction of future rearrest1," which was reported in the media with headlines such as “Can we predict recidivism with a brain scan?” The study reports evidence that brain scans could potentially improve offender risk assessment. At the event, Dr. Aharoni led a discussion of the legal and ethical issues that follow from such scientific findings. He asked: “When, if ever, should neural markers be used in offender risk assessment?”






Dr. Aharoni started by explaining that determining the risk an individual poses to society (“risk triage”) is an important part of the criminal justice system, used in decisions about bail, sentencing, parole, and more. He presented the cases of Jesse Timmendequas and Darrell Havens as opposite extremes of what can happen when risk is miscalculated. Timmendequas is a repeat sex offender who had served less than seven years in prison for his crimes and had not been considered a serious threat before he raped and murdered a seven-year-old girl, a crime that led to the passage of Megan’s Law. Havens, a serial car thief, is serving a 20-year prison sentence for assaulting a police officer; even though he was rendered quadriplegic after being shot by police, parole boards are reluctant to grant him an early release because of his extensive criminal history.





Risk triage is currently done through unstructured clinical judgment, in which a clinician offers his or her opinion based on an interview with the subject, and through the more accurate evidence-based risk assessment, which weighs various known risk factors, such as age, sex, criminal history, drug use, impulsivity, and level of social support. Dr. Aharoni and the other authors of the paper propose that neurological data could potentially be introduced as an additional risk factor to help improve the accuracy of such assessments1.





With the understanding that impulsivity is a major risk factor for recidivism2, the researchers focused their study on the anterior cingulate cortex (ACC), a limbic brain region shown to be heavily involved in impulse control and error monitoring (in fact, behavioral changes in people with damage to the ACC are often extreme enough for those individuals to be classified as having an “acquired psychopathic personality3”).





In Aharoni’s paper1, the volunteers (96 incarcerated adult men) performed a go/no-go (GNG) task, which tests impulse control, while their ACC activity was monitored with functional magnetic resonance imaging (fMRI, which measures changes in blood flow within different regions of the brain—an increase in blood flow is taken to mean that a region has increased neural activity). The researchers found that participants with greater activation of the ACC during impulse-control errors were half as likely to be rearrested within four years of their release (when controlling for other factors such as age at release, Hare Psychopathy Checklist scores, drug and alcohol use, and performance on the GNG task). In other words, the study seems to show that, when used in conjunction with currently recognized risk factors, the fMRI data improved the accuracy of the risk assessment. The authors conclude that this finding “suggest[s] a possible predictive advantage” of including the neurological data in risk assessment models1.
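To make the idea of a “predictive advantage” concrete, here is a minimal, purely illustrative sketch in Python: it compares the out-of-sample accuracy of a simple logistic-regression risk model with and without a simulated neuromarker. This is not the authors’ actual analysis (the paper uses survival-style models of time to rearrest), and every variable name and data point below is invented.

```python
# Illustrative sketch (not the study's analysis): does adding a neuromarker
# (e.g., ACC activation during a go/no-go task) improve out-of-sample
# prediction of rearrest over standard risk factors alone?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 96  # same order as the study's sample size; the data here are simulated

# Hypothetical standard risk factors (age at release, psychopathy score,
# substance use, task performance) plus a hypothetical ACC measure.
X_standard = rng.normal(size=(n, 4))
acc_activation = rng.normal(size=(n, 1))

# Simulate outcomes so that lower ACC activity raises rearrest risk,
# mirroring the direction of the reported finding.
logit = 0.5 * X_standard[:, 0] - 1.0 * acc_activation[:, 0]
rearrested = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_with_neuro = np.hstack([X_standard, acc_activation])

auc_standard = cross_val_score(LogisticRegression(), X_standard, rearrested,
                               cv=5, scoring="roc_auc").mean()
auc_with_neuro = cross_val_score(LogisticRegression(), X_with_neuro, rearrested,
                                 cv=5, scoring="roc_auc").mean()
print(f"AUC, standard factors only: {auc_standard:.2f}")
print(f"AUC, plus neuromarker:      {auc_with_neuro:.2f}")
```

The point of this framing is that a neuromarker earns its place in a risk model only if it adds predictive value beyond the factors already included, which is the incremental comparison described above.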







Image courtesy of Flickr user Janne Moren.


After emphasizing the need for additional research, the authors discuss several possible applications of these “neuromarkers.” One of the more controversial ones (and the one the media has mostly focused on) is to add neuromarkers (such as ACC activity during a GNG task) to the other factors currently used for risk triage in the criminal justice system. The authors recognize that this would raise ethical and legal issues, specifically that such scans might not meet the legal standard of proof, and that such techniques might threaten offenders’ civil rights in ways that currently used risk assessment methods do not.





In his presentation, Dr. Aharoni expanded on some of these concerns, focusing on the scientific limitations, legal limitations, and ethical implications of this research. The scientific limitations refer to the accuracy and replicability of this method and the general question of whether current neuroimaging techniques can provide useful data to criminal risk assessments. The legal limitations include questions of how and when such methods could legally be used. Would they be legally admissible, or would they be found to be unconstitutional if used in certain ways? Would the results of a brain scan be legally classified as physical evidence (which, under the Fourth Amendment, can be obtained with a warrant) or testimony (under the Fifth Amendment, an individual cannot be forced to testify if it would incriminate them)? Similar questions are being asked regarding fMRI lie detection.





And then there are the ethical implications. Using such a technique to keep people in jail who would not be otherwise (for lengthened sentences or denying parole, for example) is worrisome to many and runs the risk of violating offenders’ civil rights in an attempt to increase public safety. Dr. Aharoni mentioned that neuromarkers could also be used in an offender’s best interests if, for example, MRI data showed that they might be less likely to reoffend. An audience member pointed out, though, that this could be unfair to the people whose brain data does not help their case.





Another application that the authors mention is how this research could pave the way for possible interventions (including therapies, programs, and medications) for people with poor impulse control caused by low ACC activity. This could still raise concerns around convicts being required to undergo medical treatments (like medication or even surgery) if their criminal activity is thought to be caused by “defective” brain regions. And even if no practical applications come of this research, the authors point out that their findings still contribute to our understanding of the brain and human behavior.







Image courtesy of Wikimedia Commons.


Media outlets that reported on the study mostly focused on the predictive aspect, often referencing the film Minority Report, in which people are arrested for crimes they have not yet committed. Dr. Aharoni explained that incarcerating people based on the likelihood of re-offense is currently happening in cases of involuntary civil commitment, where defendants who are found not guilty by reason of insanity can be confined to psychiatric hospitals until they are deemed safe. If neuromarkers such as brain scans are used to improve the accuracy of the predictions, it might not be as much of a radical change as it seems.





But still, as explained above, even if brain scans were incorporated into the predictive models currently used, many ethical issues would remain. And things could become even more worrisome if this technology were (mis)used in ways the researchers did not intend and the science does not support. For example, the criminal justice system could buy into the hype around brain imaging and develop a process that looks only at the scans and not at the other factors. Scans could also be performed on people who have not committed a crime to see if they need “monitoring” or “treatment,” possibly even involuntarily, even though they have done nothing wrong (something closer to a Minority Report scenario). Even without any intervention, there is also the issue of stigma, as there is with testing for predisposition to mental illness. If someone is found to have a “criminal brain,” how would people view them? How would they view themselves? An audience member also raised the possibility of this technology being used in the private sector. There are companies that offer MRI lie detection services—what if a company were to start testing people for predisposition to criminal behavior?





In the paper, the authors admirably discuss the ethical issues that could arise from their research. And the discussion Dr. Aharoni led at the event showed the importance of looking at controversial research such as this with a critical eye and in context in order to avoid resorting to sensationalist claims and unfounded fears. Not only is it important to make sure the science behind new neurotechnologies is accurate, but we also need to consider the societal effects of new technologies, whether they are used in the way their creators intended or not.






References



1)  Aharoni, E., Vincent, G. M., Harenski, C. L., Calhoun, V. D., Sinnott-Armstrong, W., Gazzaniga, M. S., & Kiehl K. A. (2013). Neuroprediction of future rearrest. PNAS, 110(15), 6223-6228. doi:10.1073/pnas.1219302110

2)  Monahan, J. D. (2008) Structured risk assessment of violence. Textbook of Violence Assessment and Management, eds Simon, R., Tardiff, K. (American Psychiatric Publishing, Washington, DC), pp 17–33.

3)  Devinsky, O., Morrell, M. J., Vogt, B. A. (1995) Contributions of anterior cingulate cortex to behaviour. Brain 118(pt 1), 279–306. doi:10.1093/brain/118.1.279





Want to cite this post?



Queen, J. (2017). Too far or not far enough: The ethics and future of neuroscience and law. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2017/10/too-far-or-not-far-enough-ethics-and.html

Tuesday, October 6, 2015

Your Brain on Movies: Implications for National Security


by Lindsey Grubbs





An intellectually diverse and opinionated crowd gathered for the most recent Neuroethics and Neuroscience in the News journal club at Emory University—“Your brain on movies: Implications for national security.” The discussion was one of the liveliest I've seen in the years I've been attending these events, which is perhaps not surprising: the talk touched on high-profile issues like neuromarketing (controversial enough that it has been banned in France since 2011) and military funding for neuroscience.





The seminar was led by Dr. Eric Schumacher, Associate Professor of Psychology at Georgia Tech, director of the Georgia State University/Georgia Tech Center for Advanced Brain Imaging, and principal investigator of CoNTRoL—Cognitive Neuroscience at Tech Research Laboratory. The lab currently investigates task-oriented cognition, as well as the relationship between film narratives and “transportation” (colloquially, the sense of “getting lost” in a story), a complex cognitive puzzle involving attention, memory, and emotion.







Cary Grant chased by an airplane in North by Northwest,

courtesy of Flickr user Insomnia Cured Here.


Schumacher presented his recent article, “Neural evidence that suspense narrows attentional focus,” published in Neuroscience. Subjects in the study were placed in an MRI scanner and shown clips from suspenseful films including Alien, Blood Simple, License to Kill, and three Hitchcock films: North by Northwest, Marnie, and The Man Who Knew Too Much (I think I enrolled in the wrong studies to pay for college). The scans revealed that as suspense in a film increased, viewers’ attention narrowed onto the film itself.





Researchers correlated this fMRI data with moments of increased suspense—as when Cary Grant was chased by a plane in North by Northwest. This revealed two key findings. First, during moments of heightened suspense, subjects had increased activity in visual regions processing the film and corresponding decreases in activity in visual regions processing the visual periphery. Second, follow-up questions testing memory initially showed a slight but non-significant memory advantage for suspenseful moments on questions like “What color was the truck at the end of the film?” When the questions were re-tooled to include plot elements, however, the memory increase became statistically significant. Thus, memory for plot-relevant information was shown to improve with increasing suspense.
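The underlying analysis idea, relating a moment-by-moment suspense measure to activity in regions processing the center versus the periphery of the screen, can be illustrated with a short, self-contained sketch. This is not the published pipeline; the region names, time courses, and effect directions below are synthetic and chosen only to mirror the reported pattern.

```python
# Illustrative sketch (not the published pipeline): correlate a suspense
# time course with average BOLD signal from two hypothetical regions of
# interest -- one processing the screen center, one the visual periphery.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_timepoints = 300  # e.g., ~10 minutes of film at a 2-second TR (synthetic)

# Smoothed noise standing in for continuous suspense ratings.
suspense = np.convolve(rng.normal(size=n_timepoints), np.ones(10) / 10, mode="same")

# Simulated ROI signals: the central visual ROI tracks suspense positively,
# the peripheral ROI negatively, plus noise.
central_roi = 0.8 * suspense + rng.normal(scale=0.5, size=n_timepoints)
peripheral_roi = -0.6 * suspense + rng.normal(scale=0.5, size=n_timepoints)

r_central, p_central = pearsonr(suspense, central_roi)
r_peripheral, p_peripheral = pearsonr(suspense, peripheral_roi)
print(f"central ROI vs. suspense:    r = {r_central:+.2f} (p = {p_central:.1e})")
print(f"peripheral ROI vs. suspense: r = {r_peripheral:+.2f} (p = {p_peripheral:.1e})")
```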







EEG, courtesy of Flickr user Markus Spring


Researchers from this study also collaborated with a group investigating how brain coherence (i.e., the similarity of activity across participants), measured with EEG, relates to subjective preference. In that experiment, EEG coherence predicted population preference for Super Bowl ads: the more similar the brain signals across participants, the higher the ad was rated. Schumacher noted that some of the same attention and visual processing regions related to suspense are also more active as preference for a commercial increases. Combining the research on suspense in films with the work on attention to Super Bowl advertisements, Schumacher argued that when attention is allocated to films and commercials, we can see changes in the brain, especially in visual processing, attention, and memory—and that these factors are related more broadly to preference.
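One simple way to operationalize “coherence” in this sense, offered here only as an illustration with invented data rather than the collaborators’ actual method, is to compute the mean pairwise correlation of participants’ recordings for each ad and then ask whether that number tracks the ads’ ratings.

```python
# Illustrative sketch: quantify "coherence" as the mean pairwise correlation
# of participants' EEG time series for each ad, then relate it to ratings.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_ads, n_subjects, n_samples = 20, 12, 1000  # all values synthetic

ratings = rng.uniform(1, 10, size=n_ads)
coherence = np.empty(n_ads)

for ad in range(n_ads):
    # A shared signal whose strength grows with the ad's rating, plus
    # subject-specific noise, so better-rated ads yield more similar traces.
    shared = rng.normal(size=n_samples)
    signals = np.array([ratings[ad] / 10 * shared + rng.normal(size=n_samples)
                        for _ in range(n_subjects)])
    pairwise = [np.corrcoef(signals[i], signals[j])[0, 1]
                for i, j in combinations(range(n_subjects), 2)]
    coherence[ad] = np.mean(pairwise)

rho, p = spearmanr(coherence, ratings)
print(f"inter-subject coherence vs. ad rating: Spearman rho = {rho:.2f} (p = {p:.3f})")
```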





These facts alone were enough to spur intense conversation. Participants worried that this kind of neural research into visual engagement might result in more manipulative ads, or in a profusion of dull blockbuster-style action movies designed to trigger neural engagement. Some suggested that there would be nothing wrong with creating films engineered to maximize enjoyment. Others asserted that “enjoyment” is not what art is actually about, and claimed that they want films that make them uncomfortable and push them out of their comfort zone. Still others--including Schumacher--thought that the two are not mutually exclusive, and that art engineered to maximize the enjoyment of the kind of viewer who likes edgy indie films would be even edgier and indie-er, and that everyone could win in the end.










Would Hitchcock use neuro-insights to make more suspenseful films?

 Image courtesy of Wikimedia commons


“Art” seems to be a touchy subject when it comes to neuroscience. The animated discussion highlighted anxiety about science taking on topics we conceive of as belonging to subjective human experience. To many, “art” is intrinsically linked with “humanity,” and hence mechanizing how we think about art seems to make people fear for the mechanization of the individual or society. The fear, apparently, is that films produced using insights from neuroscience would result in a loss of agency or taste that is somehow intrinsic to our being—that films will manipulate or control us. It’s worth noting, though, that science is no more alien to our self-expression than art. Both come from a creative impulse, and science is always shaping our relationship to our humanity, just as our humanity is always shaping the way we engage in science.





The anxiety surrounding neuromarketing or neuro-aesthetic research is compounded in this case by military involvement. A Washington Post article about Schumacher’s research is provocatively titled, “Why DARPA is paying people to watch Alfred Hitchcock Cliffhangers.” The study was funded by “Narrative Networks,” a program of the Defense Advanced Research Projects Agency (DARPA)—the Department of Defense agency heading all kinds of totally wild sci-fi style research.





Narrative Networks specifically is interested in funding research into quantitative methods for studying narratives and their effects, into the neurobiology and endocrinology of responses to narratives, and into simulating and monitoring the impact of narratives and “doctrinal modifications” in the real world. DARPA claims, “Narratives exert a powerful influence on human thoughts and behavior. They consolidate memory, shape emotions, cue heuristics and biases in judgment, influence in-group/out-group distinctions, and may affect the fundamental contents of personal identity. It comes as no surprise that because of these influences stories are important in security contexts: for example, they change the course of insurgencies, frame negotiations, play a role in political radicalization, influence the methods and goals of violent social movements, and likely play a role in clinical conditions important to the military such as post-traumatic stress disorder.”





An article in Wired proclaims, “Darpa wants to Master the Science of Propaganda” and the BBC reported on “Building the Pentagon’s ‘like me’ weapon.” Given their titles, both are actually quite (disappointingly?) measured, and present the project as a defensive, not aggressive, one. The latter quotes neuroscientist Read Montague, who says, “I see a device coming that’s going to make suggestions to you, like, a, this situation is getting tense, and, b, here are things you need to do now, I’ll help you as you start talking.”







The dystopic Ludovico treatment in A Clockwork Orange, 

gif courtesy of Flickr user Gwendal Uguen


Despite this emphasis on defense, the mention of DARPA led our group, again, to spirited debate (also known as rampant conspiracy theorizing by those of us raised on the X-Files). When the research at hand is relatively straightforward and non-threatening, we might ask why military research is such a hot button issue. The most obvious answer is that many object to military activity and are uninterested in advancing science that could be used for violent or nefarious purposes. I will admit that my first glance at DARPA’s innocuous “Narrative Networks” immediately yields the more threatening “propaganda” or “mind control,” but there are at least two ways that this knee-jerk reaction can be elaborated on.





First, understanding narratives can of course yield pacifist as well as violent results. DARPA claims that they hope that understanding the ability of narratives to radicalize, for instance, could lead to more successful methods for de-escalating radicalization. They also point to the possibility of better treatments for PTSD, and to more effective measures for disseminating public health information.







Cold War propaganda, courtesy of Flickr user Dan H.


Second, although the word “propaganda” has an undoubtedly sinister ring to it, it is important to keep in mind that rhetorical appeals meant to influence belief and behavior are omnipresent and not inherently linked to an ethical judgment. We are all trying to convince people of things at all times. Public health campaigns, education, and this blog post itself are all propagandistic in their own ways—but that does not mean that they are necessarily reprehensible or ethically unacceptable. Schumacher hinted at this when he noted that after being quoted saying, “governments use stories,” he wished he had stated more broadly that “people” use stories. You apparently can’t talk about defense research into narrative without the inevitable Goebbels reference, but rather than reactionary blanket judgments, it will be more productive to think about ethical and unethical ways that research can be employed.





So, what are the ethics of military research funding? Although the topic originally calls to mind weapons development, chemical warfare, and the creation of Terminator-esque super soldiers, funders like DARPA provide enormous resources for researchers doing non-nefarious work, for instance, Schumacher’s suspense and transportation study, or Greg Berns’ work on neural connectivity when reading fiction (which I've written about elsewhere on this blog).  For researchers like these, should DARPA be seen as just another (and often extremely generous) source of grant money? Schumacher noted that the money came with no restrictions or conditions on the publication of the data received, and the work doesn’t go directly to some mysterious military database—it is published in major journals in order to advance the field.







Should pacifists have reservations about using DARPA money?

Image courtesy of Flickr user wwwuppertal


But are there other reservations? Can pacifists or conscientious objectors ethically pursue research with military funding? Are there ways that a Quaker graduate student, for example, could refuse to work on a DARPA project for which their PI obtained funding, without stigma? Is the source of the funding important when the objective of the study is simply to increase our knowledge of the neural correlates of processes like reading or watching films? Scientific interest in narrative pre-exists this initiative, so in one way DARPA simply funds things we were already curious about. It is worth noting that DARPA is a significant funder of the BRAIN Initiative in the US, and hence is a major partner in advancing the study of the brain. But does the military framing change the kinds of questions we ask or the research agendas we pursue?





After hearing the skepticism that greeted the idea of military research into narratives, one can almost understand DARPA's enormous investment in controlling narrative and belief. Judging from our conversation, DARPA and the military could really use some work on their PR. But at this point, perhaps many, many years before the successful integration of this research into field tools (if that day ever comes), DARPA’s scientific approach to narrative reminds me less of Obi-Wan Kenobi's “These aren’t the droids you’re looking for” and more of Star Trek’s android Data when he uses his processors to try to act naturally--like when he picks a fight with a girlfriend in order to foster intimacy, explaining to her, “In my study of interpersonal dynamics, I have found that conflict, followed by emotional release, often strengthens the connection between two people.” (The relationship is not a success.)










We should definitely question and discuss the aims guiding research and the ways that the gains of research will be put into action—perhaps especially when the military is involved, but also when it comes to targeted advertising or the creation of appealing art. Perhaps science fiction, dystopia, and conspiracy theorizing provide some protective benefit, as they allow us to imagine possible negative futures that we can then avoid. But (despite the fun of conspiracy theorizing) can we see these conversations as an opportunity to discuss ethical paths forward, not simply unethical nightmares to avoid?





Want to cite this post?





Grubbs, L. (2015). Your brain on movies: Implications for national security. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2015/10/your-brain-on-movies-implications-for.html

Tuesday, March 17, 2015

The Newly Released 6.1 Issue of AJOB Neuroscience

The 6.1 issue of the American Journal of Bioethics Neuroscience (AJOB Neuroscience) is now hot off the presses, with two target articles highlighting the ethical issues behind two very different therapeutic interventions: stem-cell-based therapies entering first-in-human trials for Parkinson’s disease, and prescription stimulants used to enhance motivation.











The Target Article “Ethical Criteria for Human Trials of Stem-Cell Derived Dopaminergic Neurons in Parkinson’s Disease”1 by Samia A. Hurst et al. discusses three specific considerations for a phase I (safety)-II (efficacy) clinical trial designed to test an experimental neurorestorative stem-cell therapy for Parkinson’s disease. Parkinson’s disease results from the loss of dopamine-producing neurons in the substantia nigra, and significant depletion of dopamine leads to the tremors, rigidity, and difficulty initiating or halting movement that are often seen as the disease progresses. To compensate for the diminishing levels of the neurotransmitter, standard treatment relies on the drug levodopa, which is converted to dopamine in the body. Levodopa is not curative, though, and for that reason researchers are beginning to study the neurorestorative technique of transplanting dopamine-producing neurons derived from stem cells2. As promising and groundbreaking as stem-cell therapy is, protecting human subjects will be of utmost importance as these therapies enter clinical trials.



Parkinson’s disease is progressive, meaning that patients can exhibit a wide spectrum of mild to debilitating symptoms. After considering the risk-to-benefit ratio of enrolling patients who have either just been diagnosed or have reached an advanced stage of the disease, the authors suggest enrolling only patients with “moderately advanced” Parkinson’s disease and approximately 15 years or less to live. Moderately advanced Parkinson’s is defined as the stage at which patients have been diagnosed and are responding to levodopa therapy; these patients should have minimal motor impairments and no impairment of cognitive function. Since neurosurgery is not without risks and the benefit of stem-cell therapy may not be apparent for a long time, a clear informed consent process is critical. Patients must understand that the study’s purpose is not to alleviate Parkinson’s symptoms immediately but to increase understanding of the disease, and they must understand the experimental, high-risk nature of the procedure.



The authors conclude that a sham surgery, which involves inserting a needle into the brain but not injecting stem cells, cannot be justified in a phase I-II clinical trial. Sham surgery is fraught with its own ethical concerns related to the powerful placebo effect, especially since research has suggested that Parkinson’s disease patients are particularly susceptible to the placebo effect3,4. While a 2005 investigation of sham surgeries in Parkinson’s disease research suggested that the majority of clinicians support sham surgeries over unblinded controls5, finding interventions better than placebo or sham is challenging if sham is the ultimate threshold a novel therapy must pass6. The authors ultimately recommend an open-label clinical trial design or a trial that compares stem-cell therapy to alternative treatments that do not involve highly invasive surgery. They also remark that even if Parkinson’s disease patients are more likely to improve on a placebo, Parkinson’s is degenerative, and the placebo effect would most likely not survive declining motor function over the long term of the study.



In the second Target Article, “Enhancing Motivation by Use of Prescription Stimulants: The Ethics of Motivation Enhancement,”7 author Torben Kjaersgaard raises questions about the nature and value of human effort. The enhancement debate has been ongoing for over a decade8 and revolves around the value of “hard work” and integrity in completing a goal. For some, the idea of enhancing cognitive function is a shortcut that creates an unfair playing field in which certain individuals have surpassed their natural talents. Others argue that there is nothing wrong with a world full of the smartest and most motivated people. The discussion is made more complicated by research suggesting that current stimulants do not actually enhance performance9,10.



While this debate is not new, Kjaersgaard adds a fresh perspective by arguing that prescription stimulants, such as Adderall and Ritalin, affect motivation (rather than task performance itself), raising ethical concerns distinct from those related to simply enhancing cognitive function. A lack of motivation or a tendency to become distracted could be a sign of moderate depression or ADHD, or, as the article notes, of feelings of worthlessness and a desire to escape. If stimulants are needed to get through most days, is the user trying to escape from reality and avoiding a more significant underlying problem? Medically enhancing motivation turns a lack of drive or inspiration into a fixable, physiological problem, which is convenient, but Kjaersgaard argues that this undermines important aspects of the human condition.






Image courtesy of the Institute for Ethics and Emerging Technologies.



The editorial, written by Karen S. Rommelfanger and L. Syd M Johnson, discusses the first Gray Matters report from the Presidential Commission for the Study of Bioethical Issues and what lies ahead for research, funding, and education for future neuroscientists in light of the lauded and equally contested Human Brain Project and BRAIN Initiative. The authors note that these articles are examples of the types of pertinent discussions that result from infusing neuroscience research and medical advances with ethical questions. In moving forward with the European and US brain projects, it is absolutely necessary that ethicists work alongside researchers to ensure these types of dialogues continue.



The Emory Neuroethics Program offers forums for discussion of these pertinent issues and cutting-edge topics in neuroscience. The upcoming Neuroethics and Neuroscience in the News event will be held on March 18th, 2015. AJOB Neuroscience Editor John Banja and AJOB Neuroscience Editorial Intern Ryan Purcell will facilitate a discussion of the science, ethics, and media portrayals of neuroenhancement, including the recent Kjaersgaard article.





References


(1)  Hurst, S. A.; Mauron, A.; Momjian, S.; Burkhard, P. R. Ethical Criteria for Human Trials of Stem-Cell-Derived Dopaminergic Neurons in Parkinson’s Disease. AJOB Neurosci. 2015, 6, 52–60.


(2)  Grealish, S.; Diguet, E.; Kirkeby, A.; Mattsson, B.; Heuer, A.; Bramoulle, Y.; Van Camp, N.; Perrier, A. L.; Hantraye, P.; Björklund, A.; Parmar, M. Human ESC-Derived Dopamine Neurons Show Similar Preclinical Efficacy and Potency to Fetal Neurons When Grafted in a Rat Model of Parkinson’s Disease. Cell Stem Cell 2014, 15, 653–665.


(3)  Goetz, C. G.; Wuu, J.; McDermott, M. P.; Adler, C. H.; Fahn, S.; Freed, C. R.; Hauser, R. A.; Olanow, W. C.; Shoulson, I.; Tandon, P. K.; Parkinson Study Group; Leurgans, S. Placebo Response in Parkinson’s Disease: Comparisons among 11 Trials Covering Medical and Surgical Interventions. Mov. Disord. Off. J. Mov. Disord. Soc. 2008, 23, 690–699.


(4)  McRae, C.; Cherin, E.; Yamazaki, T. G.; Diem, G.; Vo, A. H.; Russell, D.; Ellgring, J. H.; Fahn, S.; Greene, P.; Dillon, S.; Winfield, H.; Bjugstad, K. B.; Freed, C. R. Effects of Perceived Treatment on Quality of Life and Medical Outcomes in a Double-Blind Placebo Surgery Trial. Arch. Gen. Psychiatry 2004, 61, 412–420.


(5)  Kim, S. Y. H.; Frank, S.; Holloway, R.; Zimmerman, C.; Wilson, R.; Kieburtz, K. Science and Ethics of Sham Surgery: A Survey of Parkinson Disease Clinical Researchers. Arch. Neurol. 2005, 62, 1357–1360.


(6)  Freed, C. R.; Greene, P. E.; Breeze, R. E.; Tsai, W.-Y.; DuMouchel, W.; Kao, R.; Dillon, S.; Winfield, H.; Culver, S.; Trojanowski, J. Q.; Eidelberg, D.; Fahn, S. Transplantation of Embryonic Dopamine Neurons for Severe Parkinson’s Disease. N. Engl. J. Med. 2001, 344, 710–719.


(7)  Kjærsgaard, T. Enhancing Motivation by Use of Prescription Stimulants: The Ethics of Motivation Enhancement. AJOB Neurosci. 2015, 6, 4–10.


(8)  Farah, M. J.; Illes, J.; Cook-Deegan, R.; Gardner, H.; Kandel, E.; King, P.; Parens, E.; Sahakian, B.; Wolpe, P. R. Neurocognitive Enhancement: What Can We Do and What Should We Do? Nat. Rev. Neurosci. 2004, 5, 421–425.


(9)  Lucke, J. C.; Bell, S.; Partridge, B.; Hall, W. D. Deflating the Neuroenhancement Bubble. AJOB Neurosci. 2011, 2, 38–43.


(10)  Smith, M. E.; Farah, M. J. Are Prescription Stimulants “Smart Pills”? The Epidemiology and Cognitive Neuroscience of Prescription Stimulant Use by Normal Healthy Individuals. Psychol. Bull. 2011, 137, 717–741.



Want to cite this post?



Strong, K. (2015). The Newly Released 6.1 Issue of AJOB Neuroscience. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2015/03/the-newly-released-61-issue-of-ajob.html

Tuesday, November 18, 2014

Can Neuroscience Validate the Excuse “Not Tonight, Dear, I have a Headache?"

Men and women experience fluctuations in sexual motivation over a lifetime. Whether sexual desire is enhanced or diminished at any particular time can depend on a number of factors and circumstances, but researchers from McGill University recently set out to determine specifically how pain affects sexual behavior.1 Results from this study, published in The Journal of Neuroscience earlier this year, were the topic of the most recent “Neuroethics and Neuroscience in the News” discussion, facilitated by Emory Women’s, Gender, and Sexuality Studies graduate student Natalie Turrin and Neuroscience graduate student Mallory Bowers.




To study how pain affects sexual motivation, the researchers used a partitioned Plexiglas chamber in which the partition contained small, semicircular openings only large enough for the female mice to pass through (the study required that male mice weigh more than 45 g and female mice less than 25 g). In this set-up, the females were free either to cross the partition and engage in sexual activity with the male mice or to “escape” to the side where the males were unable to follow. Sexual motivation was measured by the total number of mounts, and since mounting involves male participation, time spent on the male side of the chamber was also used as a measure of female sexual motivation. When the researchers injected female mice with inflammatory agents in the vulva, hind paw, tail, or cheek to induce pain, the females consistently engaged in less mounting behavior and spent less time on the male side of the cage compared to uninjected controls. Males, on the other hand, when injected with the same inflammatory agents in either the penis, hind paw, tail, or cheek, showed unimpeded sexual activity (the total number of mounts did not decrease compared to controls) in an open field paradigm in which the males had unrestricted access to the females. Although female mice have been observed to have a higher sensitivity to pain than male mice,2 the researchers found that male and female mice exhibited the same level of sensitivity to hind paw inflammation according to the mouse grimace scale (MGS), a visual assessment of a mouse’s facial features used to determine pain levels.




The final experiments involved rescuing the lack of sexual motivation in female mice using either an analgesic or one of two different prosexual drugs. The analgesic pregabalin reversed the reduction in total mounts that resulted from inducing pain in females and, according to the MGS, also reduced the level of pain. The “prosexual” drugs apomorphine (APO) and melanotan-II (MT-II) had the same rescuing effect but, based on the MGS, did not relieve the pain from the inflammatory injections. It should be noted, though, that APO increases locomotion3 in mice, which may partially account for the females moving to the male side of the cage more often.




From these experiments, the researchers concluded that female mice have lower levels of sexual motivation when in pain, whereas male mice maintain a desire to participate in sexual activity even when in penile pain. The decrease in sexual motivation in females, however, can be rescued either by pain reduction or by aphrodisiacs, in this case a dopamine agonist (APO) or an α-melanocyte-stimulating hormone analog (MT-II). Perhaps these claims are reasonable as applied to mice, but what is more problematic is that the authors confidently extrapolate the results to humans. The final line of the abstract reads, “These findings suggest that the well known context sensitivity of the human female libido can be explained by evolutionary rather than sociocultural factors, as female mice can be similarly affected.”





Of course, media outlets ran with this conclusion, and multiple articles were published with definitive titles like “Women ARE more likely to go off sex when they are in pain” and “That headache excuse is real: For females, pain kills sexual desire.”

The authors of this paper perpetuated the idea that a woman’s lack of sexual motivation at any given moment is either biological or sociocultural. In the press release and the paper, the authors refer to the apparently common aphorism “Not tonight, dear, I have a headache,” and mention that this would be evidence that sometimes wives really are in too much pain to have sex initiated by their husbands. But sexual relations are so much more complicated than a simple relationship in which pain from a headache equals lack of sexual motivation. What if the woman (or man, for that matter) doesn’t really have a headache, but there is another underlying reason that a partner is too embarrassed to share? Or what if pain from a headache makes you feel less sexy, and that feeling is the sexual deterrent, not the pain alone? Pain, either directly or indirectly, would most likely make a person feel less sexual, but why does it take a study with mice (which aren’t insecure about love handles or annoyed with a spouse over an insensitive comment) to validate this thought? It is reminiscent of neuro-realism, the idea that attaching a brain scan to any study or correlation suddenly qualifies the findings as real or more true.4 While this study only involved mice, researchers did use fMRI to study the difference between the brains of women with and without acquired hypoactive sexual desire disorder (HSDD) in this paper.5 But no one – including women, their sexual partners, researchers, or doctors – really wins when the female libido is advertised as something that can be characterized as either biologically or socioculturally driven.







Via The Telegraph




One reason for ascribing a biological cause to a lack of sexual motivation could involve drug development: if a biological target responsible for diminishing sex drive can be found, then perhaps there is a pill to fix it. The work in the paper was supported by a Pfizer Pain Research Award from Pfizer Canada, and Pfizer Canada kindly provided the pregabalin used in the sexual-recovery experiments. A number of pharmaceutical companies have sought FDA approval for treatments for low female sexual desire,6 even though the diagnoses of disorders such as HSDD and female sexual dysfunction (FSD) are controversial. One example is Lybrido, a drug meant to treat HSDD. (Mallory actually gave a journal club talk last year about the implications of pharmaceutical companies targeting the female sex drive, with a focus on Lybrido.) Lybrido is interesting because it was ineffective in a cohort of women who “suffer from HSDD as a result of inhibitory mechanisms” stemming from negative associations with sex, and for that reason Lybridos was developed.7 Lybridos has an additional component that targets prefrontal cortical areas of the brain and is meant to alleviate these inhibitory mechanisms.8 A discussion of drug development for women who have negative associations with sex is beyond the scope of this post, but the mentality that this could be relieved with just a pill grossly oversimplifies the complexities of the female libido and how it affects the relationships women have with their sexual partners. And if researchers in academia are willing to commit to the idea that female sexuality can be classified as solely biologically determined, then can we really expect that pharmaceutical companies, marketing campaigns, and sensationalized news articles won’t try to capitalize on that idea?








Via The Neurocritic





References 




(1)  Farmer, M. A.; Leja, A.; Foxen-Craft, E.; Chan, L.; MacIntyre, L. C.; Niaki, T.; Chen, M.; Mapplebeck, J. C. S.; Tabry, V.; Topham, L.; Sukosd, M.; Binik, Y. M.; Pfaus, J. G.; Mogil, J. S. Pain Reduces Sexual Motivation in Female But Not Male Mice. J. Neurosci. 2014, 34, 5747–5753.


(2)  Mogil, J. S. Sex Differences in Pain and Pain Inhibition: Multiple Explanations of a Controversial Phenomenon. Nat. Rev. Neurosci. 2012, 13, 859–866.


(3)  Horn, C. C.; Kimball, B. A.; Wang, H.; Kaus, J.; Dienel, S.; Nagy, A.; Gathright, G. R.; Yates, B. J.; Andrews, P. L. R. Why Can’t Rodents Vomit? A Comparative Behavioral, Anatomical, and Physiological Study. PLoS ONE 2013, 8, e60537.


(4)  Racine, E.; Bar-Ilan, O.; Illes, J. fMRI in the Public Eye. Nat. Rev. Neurosci. 2005, 6, 159–164.


(5)  Woodard, T. L.; Nowak, N. T.; Balon, R.; Tancer, M.; Diamond, M. P. Brain Activation Patterns in Women with Acquired Hypoactive Sexual Desire Disorder and Women with Normal Sexual Function: A Cross-Sectional Pilot Study. Fertil. Steril. 2013, 100, 1068–1076.e5.


(6)  Shames, D.; Monroe, S. E.; Davis, D.; Soule, L. Regulatory Perspective on Clinical Trials and End Points for Female Sexual Dysfunction, in Particular, Hypoactive Sexual Desire Disorder: Formulating Recommendations in an Environment of Evolving Clinical Science. Int. J. Impot. Res. 2006, 19, 30–36.


(7)  Lybrido. http://www.emotionalbrain.nl/lybrido (accessed Oct 30, 2014).


(8)  Lybridos. http://www.emotionalbrain.nl/lybridos (accessed Oct 30, 2014).





Want to cite this post?




Strong, K. (2014). Can Neuroscience Validate the Excuse “Not Tonight, Dear, I have a Headache?" The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2014/11/can-neuroscience-validate-excuse-not.html