Pages

Tuesday, February 24, 2015

Neuroimaging in the Courtroom

If a picture is worth a thousand words, then how much weight should we ascribe to a picture of our own brain? Neuroimaging can be quite compelling, especially when presented in the media as evidence for neuroscientific findings. Many researchers have pointed out, though, that the general public may be too entranced by fMRI images highlighting which parts of the brain are activated in response to certain stimuli, such as your iPhone, high-fat foods, or even Twitter. Neuro-realism is the idea that attaching a brain scan to a scientific finding suddenly makes the conclusion more credible, and examples of it have populated the media and the scientific literature [1]. But where does this theory of “neuro-seduction” really stem from, and is there ample evidence to support it? For the first journal club of the new semester, Emory undergraduate student and AJOB Neuroscience Editorial Intern Julia Marshall, along with Emory professor Scott Lilienfeld, discussed the role that neuroimaging plays in the courtroom and whether brain scans have the potential to help or hurt those convicted of crimes, in light of neuro-realism, neuro-seduction, and neuroredundancy.






from Scientific American blog



Recently, an article by Martha Farah and Cayce Hook [2] took a critical look at the two studies most frequently cited as evidence for neuro-realism and discussed why the theory has persisted despite its lack of support. The first study, by McCabe and Castel [3], analyzed whether people consider scientific findings more believable when accompanied by functional brain images; the data suggested that the scientific reasoning in research descriptions made more sense to participants when a brain image was provided as evidence. However, Farah and Hook point out that these brain images are actually more informative than a bar graph or topographic map, so participants arguably should find them more compelling. The second paper often cited in relation to neuro-realism is a study by Weisberg et al. [4], which asked participants to judge whether an explanation for a psychological phenomenon, presented with or without irrelevant neuroscientific rationale, was good or bad. Participants who were not neuroscience experts were more likely to rate a bad explanation as favorable when it was accompanied by neuroscience data. This study, however, did not include images, and even the authors admit that people may respond in a similar fashion to information from specialties outside of neuroscience and psychology; a general fascination with science could make poor explanations appear reasonable. Farah and Hook also highlight a number of experiments [5–7] that have been unable to replicate the findings from these two studies, further casting doubt on neuro-realism.









Whether or not we really are unnecessarily enthralled by brain images is still up for debate, but is neuro-seduction real in the courtroom when neuroimaging is presented as evidence? The question matters because a study by Bright and Goodman-Delahunty [8] found that mock jurors presented with gruesome and neutral images of a crime scene convicted defendants at a significantly higher rate than jurors who were not exposed to any images. These results raise the question: if even a neutral image can provoke such a response, what is the effect of an image of a brain? Schweitzer et al. [9] conducted four experiments to determine the effect of neuroimaging in cases involving the mens rea defense, where jurors did not need to decide whether a defendant was guilty but rather whether the defendant possessed the mental state necessary to be guilty. In brief, the researchers found that neuroimages had no significant effect on the proportion of guilty verdicts or on sentence recommendation length compared with other types of evidence for a neurological defect (specifically, a defect in the frontal lobe). The mock jurors received evidence of neurological damage that could render the fictional defendant unable to form mens rea in one of five forms: a clinical psychiatrist describing behavioral traits, a clinical neurologist identifying brain damage based on a physical exam, a neuroscientist describing a neuroimage that was not presented, a neuroscientist describing brain injury accompanied by a graph, or a neuroscientist describing injury accompanied by an image of the brain. Interestingly, when jurors judged the responsibility of the defendant, those who heard testimony from a clinical psychiatrist judged the defendant to have more control over his actions than those who were exposed to neuroscientific testimony in any form. The only significant finding from the experiments was that neurological data – with or without images – were more persuasive than data from a clinical psychiatrist when judging responsibility, but this judgment did not carry over to the conviction and sentencing phases of the mock trial.









Based on these results, how relevant is neuroimaging in the courtroom? According to Stephen J. Morse in a recent AJOB Neuroscience article [10], neuroimaging has very little relevance in cases that require judges and jurors to evaluate the mental capacity of a defendant, and this view is supported by the findings from the experiments conducted by Schweitzer et al. [9, 11]. While there may be less bias toward neuroimages than was initially believed, neuroscience and neurotechnologies are constantly evolving. Brain scans require the viewer to make a reverse inference, which is to “infer the engagement of particular cognitive functions based on activation in particular brain regions” [12]. This requires reasoning backwards; an example would be concluding that low activity in your frontal lobe means you are a psychopath. Such reasoning assumes that specific brain activity can be directly correlated with thoughts, behaviors, or tendencies, when in fact obtaining and interpreting these images is far more complicated. At this time it is probably reassuring that juries do not appear to take brain scans more seriously than other evidence in cases where neuroimaging could help establish intent. However, there may come a time when neuroimaging can provide more compelling evidence than expert testimony alone, and when it may be reasonable to assume that neurological data cannot be faked. In that scenario, neuroimaging should play a larger role in sentencing and convictions, but we are not there yet. There is still much to consider when it comes to neuroimaging, and neuroscientists must work with lawyers, judges, and the media to ensure that neuroscientific findings are appropriately applied to courtroom scenarios.
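To see why reverse inference makes for weak evidence, it helps to run the numbers. Below is a minimal sketch of the Bayesian framing Poldrack [12] gives for reverse inference; all probabilities are invented for illustration, not taken from any study. Even when a brain pattern is common among people with a given trait, a low base rate plus a non-trivial rate of the same pattern in everyone else leaves the posterior probability small.

```python
# Minimal sketch of why reverse inference is weak evidence, following the
# Bayesian framing in Poldrack (2006). All numbers are invented.

def posterior(p_act_given_proc: float, p_proc: float, p_act_given_not: float) -> float:
    """P(process | activation) via Bayes' rule."""
    p_act = p_act_given_proc * p_proc + p_act_given_not * (1 - p_proc)
    return p_act_given_proc * p_proc / p_act

# Hypothetical example: suppose reduced frontal activity is common in
# psychopathy, but psychopathy is rare and the same pattern occurs in
# many people for other reasons.
p = posterior(p_act_given_proc=0.80,   # P(low frontal activity | psychopathy)
              p_proc=0.01,             # base rate of psychopathy
              p_act_given_not=0.20)    # P(low frontal activity | no psychopathy)

print(f"P(psychopathy | low frontal activity) = {p:.3f}")  # ~0.039
```

Under these made-up numbers, the scan raises the probability from 1% to roughly 4% – far from proof of anything, which is precisely the concern about reasoning backwards from an image.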






References

 


(1)  Racine, E.; Bar-Ilan, O.; Illes, J. fMRI in the Public Eye. Nat. Rev. Neurosci. 2005, 6, 159–164.


(2)  Farah, M. J.; Hook, C. J. The Seductive Allure of “Seductive Allure.” Perspect. Psychol. Sci. 2013, 8, 88–90.


(3)  McCabe, D. P.; Castel, A. D. Seeing Is Believing: The Effect of Brain Images on Judgments of Scientific Reasoning. Cognition 2008, 107, 343–352.


(4)  Weisberg, D. S.; Keil, F. C.; Goodstein, J.; Rawson, E.; Gray, J. R. The Seductive Allure of Neuroscience Explanations. J. Cogn. Neurosci. 2008, 20, 470–477.


(5)  Gruber, D.; Dickerson, J. A. Persuasive Images in Popular Science: Testing Judgments of Scientific Reasoning and Credibility. Public Underst. Sci. 2012, 21, 938–948.


(6)  Hook, C. J.; Farah, M. J. Look Again: Effects of Brain Images and Mind–Brain Dualism on Lay Evaluations of Research. J. Cogn. Neurosci. 2013, 25, 1397–1405.


(7)  Michael, R. B.; Newman, E. J.; Vuorre, M.; Cumming, G.; Garry, M. On the (non)persuasive Power of a Brain Image. Psychon. Bull. Rev. 2013, 20, 720–725.


(8)  Bright, D. A.; Goodman-Delahunty, J. Gruesome Evidence and Emotion: Anger, Blame, and Jury Decision-Making. Law Hum. Behav. 2006, 30, 183–202.


(9) Schweitzer, N. J.; Saks, M. J.; Murphy, E. R.; Roskies, A. L.; Sinnott-Armstrong, W.; Gaudet, L. M. Neuroimages as Evidence in a Mens Rea Defense: No Impact; SSRN Scholarly Paper ID 2018114; Social Science Research Network: Rochester, NY, 2011.


(10)  Morse, S. J. Brain Imaging in the Courtroom: The Quest for Legal Relevance. AJOB Neurosci. 2014, 5, 24–27.


(11)  Roskies, A. L.; Schweitzer, N. J.; Saks, M. J. Neuroimages in Court: Less Biasing than Feared. Trends Cogn. Sci. 2013, 17, 99–101.


(12)  Poldrack, R. A. Can Cognitive Processes Be Inferred from Neuroimaging Data? Trends Cogn. Sci. 2006, 10, 59–63.





Want to cite this post?



Strong, K. (2015). Neuroimaging in the Courtroom. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/02/neuroimaging-in-courtroom.html

Tuesday, February 17, 2015

Exchanging 'Reasons' for 'Values'

Julia Haas is a McDonnell Postdoctoral Fellow in the Philosophy-Neuroscience-Psychology program at Washington University in St. Louis. Her research focuses on decision-making.



Over the past two decades, computational and neurobiological research has had a substantial impact on the field of economics, bringing into existence a new and prominent interdisciplinary field of inquiry, ‘neuroeconomics.’ The guiding tenet of neuroeconomics has been that a synthesis of theoretical and empirical tools from neuroscience, psychology, and economics can provide valuable insights into all three of its parent disciplines (Glimcher 2009). And although some economists have resisted the influence of neuroscience research (Gul and Pesendorfer 2008), neuroeconomics has by all measures thrived as a theoretical endeavor and proven itself a discipline capable of marshaling substantial institutional and financial resources.



For example, theories from economics and psychology have already begun to restructure our neurobiological understanding of decision-making, and a number of recent neurobiological findings are beginning to suggest constraints on theoretical models of choice developed in both economic and psychological domains. Likewise, a study by the Eigenfactor project at the University of Washington showed that while there were no citations between economics and neuroscience journals in 1997, by 2010 there were 195 citations from economics journals to neuroscience journals and 74 citations from neuroscience journals to economics journals.






Disciplinary cross-pollination 

This interdisciplinary partnership has caught the attention of the National Institutes of Health, which finances 21 current research projects with "neuroeconomics" in their descriptions, to the tune of $7.6 million. The agency gives out many more millions for other neurobiology work related to decision-making: Caltech got $9 million this month to establish a center in this field. The National Science Foundation has backed eight neuroeconomics projects with $3.5 million in research money.



Neuroeconomics: A Role Model for the Neuroscience of Ethics 



Neuroeconomics has thus been one of the most significant and astute beneficiaries of computational and neuroscientific research on decision-making. By contrast, the discipline of philosophy has fallen behind. Although many insights from computational and decision neuroscience are directly relevant to philosophical discussions about deliberation and choice, the vast majority of them have fallen by the philosophical wayside. This is not to say that philosophy has ignored neuroscience; far from it. Beginning with the publication of Patricia Churchland’s Neurophilosophy in 1986, both neurophilosophy and the philosophy of neuroscience have become active research areas across philosophy departments. But many of these neuroscientific contributions have focused on issues pertaining to traditional metaphysics (such as consciousness and free will) and epistemology (such as perception and representation). The implications of computational and decision neuroscience for philosophical theories of decision-making and practical reasoning, however, have yet to be realized.






Where it all got started

Again, this is not to say that neuroscience has not been brought to bear on issues in ethics! I have written about Molly Crockett’s research on this blog, Neil Levy and Julian Savulescu have made important contributions, and there are many valuable neuroscientific contributions to the study of altruism, utilitarianism, spirituality, aggression, and so on. But what I want to suggest is that the neuroscience of decision-making can help philosophers arrive at a more wide-ranging theory underlying specific kinds of moral decisions: namely, it can help us understand how we make decisions in general. And this understanding should in turn provide a valuable constraint on, and a useful platform for, understanding what happens when we have to make tough moral decisions.



Some general principles are, I think, beginning to emerge. For example, while philosophers frequently turn to concepts such as reasons and intentions to try to explain human action, there is good evidence to suggest that human beings rely on something closer to the metaphor of evaluating or ‘weighing.’ We come to value objects and actions over the course of our experiences, and these positive valuations lead us to select those objects or actions when it comes time to make a concrete decision. Moreover, computational neuroscientists are beginning to understand the mechanisms whereby these valuations are carried out in the mind/brain, and they are increasingly in a position to make detailed predictions about how human beings make decisions in all kinds of situations involving risk, delay, and stress. From my perspective, these same situations often form the backdrop for our toughest ethical dilemmas, so we should gradually be able to untangle why people ‘mis-value’ certain options and make unethical decisions.
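To make the ‘weighing’ metaphor concrete, here is a minimal sketch of the kind of value-learning model used in decision neuroscience: a simple delta-rule (Rescorla-Wagner style) update. The options, reward probabilities, and learning rate are all invented for illustration rather than drawn from any particular study.

```python
import random

# Minimal sketch of value-based choice with a delta-rule update, the kind
# of model used in computational decision neuroscience. All quantities here
# (options, reward probabilities, learning rate) are hypothetical.

ALPHA = 0.1                                 # learning rate
values = {"apple": 0.0, "cake": 0.0}        # learned values, initially neutral
reward_prob = {"apple": 0.6, "cake": 0.8}   # hypothetical experienced rewards

def choose(values: dict) -> str:
    """Pick the currently highest-valued option (greedy choice)."""
    return max(values, key=values.get)

for trial in range(200):
    option = random.choice(list(values))    # accumulate experience with both options
    reward = 1.0 if random.random() < reward_prob[option] else 0.0
    # Delta-rule update: nudge the stored value toward the observed outcome
    values[option] += ALPHA * (reward - values[option])

print(values)          # learned values come to approximate reward probabilities
print(choose(values))  # a concrete decision falls out of the valuations
```

The point of the sketch is that no ‘reason’ or ‘intention’ appears anywhere in the model: choice falls out of values accumulated over experience, which is the alternative picture suggested above.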



Some might argue that moral decisions are too complex for neuroscience to help us understand them. But the same was once said of economic choices, and it is safe to say that neuroeconomics has come a long way in advancing our understanding of them. I look forward to, and hope to be a part of, seeing practical and moral philosophy follow suit.





References



Glimcher, P. W. (2009). Choice: towards a standard back-pocket model. Neuroeconomics: Decision making and the brain, 501-519.



Gul, F., & Pesendorfer, W. (2008). The case for mindless economics. The foundations of positive and normative economics, 3-42.





Want to cite this post?

Haas, J. (2015). Exchanging 'Reasons' for 'Values'. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/02/exchanging-reasons-for-values.html

Tuesday, February 10, 2015

Obama’s BRAIN and Free Will

By Eddy Nahmias, PhD



Eddy Nahmias is professor in the Philosophy Department and the Neuroscience Institute at Georgia State University. He is also a member of the AJOB Neuroscience editorial board.



On April 2, 2013, President Barack Obama announced the BRAIN Initiative, a 10-year, $3 billion research effort to map all of the neurons and connections in the human brain. The BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative is modeled on the Human Genome Project, which successfully sequenced the entire DNA code of the human genome in 2003. Our brains, with 100 trillion neuronal connections, are immensely more complicated than our DNA, so the BRAIN Initiative has a much higher mountain to climb.



But let’s suppose that, finally, during the next Clinton presidency, the BRAIN Initiative is completed… that is, the presidency of Charlotte Clinton, Bill and Hillary’s grandchild. In fact, suppose that eventually neuroimaging technology advances to the point that people’s brains can be mapped fully enough to allow real-time computations of all of their occurrent brain activity. Neuroscientists can then use this information to predict with 100% accuracy every single decision a person will make, even before the person is consciously aware of their decision. Suppose that a woman named Jill agrees to wear the lightweight BrainCap™ for a month. The neuroscientists are able to detect the activity that causes her thoughts and decisions and use it to predict all of Jill’s thoughts and decisions, even before she is aware of them. They predict, for instance, how she will vote in an election. They even predict her attempts to trick them by changing her mind at the last second.






From interbilgisayar.com



Question: Do you think it is possible for such technology to exist in the future (the “near” future of Charlotte Clinton’s presidency or perhaps a more distant future)? And if such technology did exist, what would it tell us about whether we have free will?



Some people have used such neuro-prediction scenarios to explain why they think free will is an illusion. For instance, in his book Free Will (2012) Sam Harris asks us to “imagine a perfect neuroimaging device that would allow us to detect and interpret the subtlest changes in brain function.” He concludes, “You would, of course, continue to feel free in every present moment, but the fact that someone else could report what you were about to think and do would expose this feeling for what it is: an illusion” (10-11; see also Greene and Cohen 2004, p. 1781).



Others have drawn on recent neuroscientific experiments in which brain activity measured with EEG or fMRI before a person becomes aware of a decision provides predictive information about simple decisions, and they extrapolate from these experiments to conclude that all of our decisions are caused by brain activity that bypasses conscious activity, challenging free will. For instance, neuroscientist John-Dylan Haynes (2008) says, “Our decisions are predetermined unconsciously a long time before our consciousness kicks in… It seems that the brain is making the decision before the person themselves.”1



I call those who claim that science shows free will is an illusion ‘willusionists.’ Typically, they assume that free will would require that the conscious mental activity involved in our deliberation and decision-making be distinct from brain activity. And they assume that the ordinary definition of ‘free will’ requires this dualistic view of the mind. If they are right, then they should predict that most people would reject the possibility that the BRAIN Initiative could succeed in the way I describe above. After all, non-physical minds could never be fully understood or predicted based on a complete mapping of brain activity. And if we had a magical free will untethered to brain activity, then we could exercise it to make some decisions that could not be predicted by neuroscientists scanning our brains. Are the willusionists accurate in their predictions about how most people understand free will?



Fortuitously, while the BRAIN Initiative was being hatched, my collaborators and I were working on a much less complicated (or expensive!) project in ‘experimental philosophy,’ an emerging field that uses empirical methods to investigate people’s views about philosophical questions. Two former neurophilosophy MA students at Georgia State, Jason Shepard (a Neuroethics Scholars Program alum and Psychology PhD student at Emory University) and Shane Reuter (now in the PNP program at Washington University in St. Louis), and I developed various detailed descriptions of the neuroimaging technology above, which allows perfect prediction of decisions based on prior brain activity. One scenario concluded with a statement of physicalism about the mind-body relationship: “These experiments confirm that all human mental activity just is brain activity such that everything that any human thinks or does could be predicted ahead of time based on their earlier brain activity.”






Dilbert, by Scott Adams



We asked our participants (students at GSU) whether such technology was possible. Contrary to the predictions of willusionists, we found that 80% said yes. Of the 20% who said no, most did not explain their response by referring to non-physical minds or souls or free will. Instead, most raised ethical concerns (society would not allow anyone to gain so much information about our minds) or financial limitations, or they mentioned problems pointing towards what I think is actually the right answer: No, the technology could never be that perfectly predictive because the brain is too complex for real-time calculations to occur faster than the brain actually carries out complex deliberations and decision-making. But these responses do not suggest a commitment to a non-physical mind.



Furthermore, the vast majority of participants did not respond as willusionists predict regarding free will: three-quarters or more said that Jill had free will even though her decisions were predicted by the neuroscientists and that, even if such technology existed, people would have free will and would be morally responsible for their actions. The only scenarios that led people to respond that the technology would undermine free will were ones in which we added that the neuroscientists could also alter people’s brain activity, and hence their decisions. (See our article in Cognition for more details.)



The question is why our participants do not seem to be ‘freaked out’ by the possibility of such neuro-prediction, while willusionists assume they would be, and should be.



One possibility is that our participants just didn’t get it. Perhaps they have a deep, implicit commitment to dualist free will such that they either reject the stipulations of the scenarios or ignore their implications when responding to the questions about free will (while nonetheless saying the technology is possible). I think this explanation is likely true for some of our participants, but unlikely for most of them, given the patterns of responses to the many questions we asked.



Instead, I think most of our participants simply do not have an implicit or explicit commitment to dualist free will. Most people, even some who may talk as if the mind is non-physical or have religious beliefs about souls, seem ‘theory-lite’ about the mind and free will. They know we are conscious and make choices, but they don’t know how (or in what) these mental processes are implemented. And for good reason, since we don’t yet have a neuroscientific theory to explain things like conscious deliberation, reasoning, and imagination of future options for action. But most people seem willing to accept that neuroscience might explain how these mental processes work… at least as long as it does not thereby explain them away.



For instance, most participants responded that the neuroimaging technology does not mean that “people’s reasons have no effect on what they do,” and that seems to be the right way to interpret it. When people’s decisions are predicted while wearing this futuristic technology, it’s based on information about the neural activity that implements their conscious reasons and reasoning. That activity is not bypassed by earlier brain activity; it is a crucial cause of some decisions we make. When we imagine future options, it opens up those options as possibilities for action, even if our brains carry out the imagining.



Why then do willusionists seem to neglect this possibility that free will could be understood in terms of the complex activity of the human brain? I think it is because they are not theory-lite. Instead, they theorize that a neuroscientific explanation of behavior either replaces an explanation in terms of conscious mental processes (a form of eliminativism) or cuts those processes out of the causal picture (a form of epiphenomenalism). Such views are understandable. Neuroscience is a relatively young science, and we lack a theory to explain how consciousness works in terms of neural activity. So, for scientists who are used to thinking in terms of physical mechanisms such as neurons causing physical events such as bodily movements, it may be hard to see how conscious mental events—yet to be explained in terms of neural mechanisms—get into the story.



Some willusionists argue that getting people to recognize that free will is an illusion will have beneficial consequences, especially for our legal system. For instance, if criminals lack free will, then they don’t deserve the harsh retributive punishment typically meted out to them. If we come to accept that no one deserves such punishment, we’ll focus on more useful solutions to crime, such as deterrence, rehabilitation, and restoration. We may also be more understanding, and less judgmental, of people in poverty or with mental illnesses or addictions. (See, e.g., Harris and Greene & Cohen).



I too think our legal system is overly retributive and that criminals, and the rest of us, would typically be better served if we focused more of our resources on alternatives to retributive punishment. I also think we should give up our ‘just world’ beliefs that lead us to think people are responsible for their unfortunate circumstances or deserve all their good fortune (or literal fortunes). But I think the willusionist view of free will may influence us to see people as objects or mechanisms, some of which need to be repaired, perhaps even opening up problematic forms of brain manipulation.



A naturalistic view instead says that we have degrees of free will to the extent that we possess the psychological capacities for imagining and assessing various future options and for self-control to actualize the better options. But this view also reminds us that we often have less free will than we tend to think, and that some people’s opportunities to develop and exercise the capacities for free will are far more constrained than others.



The BRAIN Initiative won’t lead to BrainCaps that allow perfect neuro-prediction. But even if it could, it would not illuminate some new challenge to the possibility of human free will. Instead, the BRAIN Initiative will continue the recent trend of helping people come to recognize and accept that everything we think and do is enabled by what our amazingly complex brains do. It may even provide information that leads to a satisfying theory of how our brains explain consciousness and decision-making. It will surely provide more information about when and why people’s decision-making and self-control are diminished, suggesting mitigated responsibility. And it will also raise difficult neuroethical questions about whether and how we should use all this information to alter people’s brains and hence their minds.





1 The fMRI studies by Haynes and his collaborators carry on the tradition of the infamous studies by Benjamin Libet. For explanations of why these studies, along with others thought to challenge free will (such as Daniel Wegner’s), do not have these implications, see, e.g., Mele (2009) and Nahmias (2014).





References



Greene, J. & Cohen J. (2004). For the law, neuroscience changes nothing and everything. Philosophical Transactions of the Royal Society of London B, 359, 1775-1778.



Mele, A. (2009). Effective intentions: the power of conscious will. New York: Oxford University Press.



Nahmias, E., Shepard, J., & Reuter, S. (2014). It’s OK if ‘My Brain Made Me Do It’: People’s Intuitions about Free Will and Neuroscientific Prediction. Cognition, 133(2), 502-513.



Nahmias, E. (2014). Is Free Will an Illusion? Confronting Challenges from the Modern Mind Sciences. In W. Sinnott-Armstrong (Ed.), Moral Psychology, vol. 4: Free Will and Moral Responsibility (pp. 1-25). MIT Press.



Soon, C., Brass, M., Heinze, H., & Haynes, J. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience, 11, 543-545.



Related Reading



Nahmias, E. (2011). Is Neuroscience the Death of Free Will? The New York Times.



Nahmias, E. (2015). Why We Have Free Will. Scientific American, 312(1).



Shepard, J. (2012). Who is redefining free will? The Neuroethics Blog. Retrieved on February 9, 2015, from http://www.theneuroethicsblog.com/2012/09/who-is-redefining-free-will-response-to.html





Want to cite this post?



Nahmias, E. (2015). Obama’s BRAIN and Free Will. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/02/obamas-brain-and-free-will.html





Tuesday, February 3, 2015

When the Hype Doesn’t Pan Out: On Sharing the Highs-and-Lows of Research with the Public

By Jared Cooney Horvath



Jared Cooney Horvath is a PhD student at the University of Melbourne in Australia studying Cognitive Psychology / Neuroscience.





Fifteen years ago, a group of German researchers decided to revive the ancient practice of using electricity to effect physiologic change in the human body. Using modern equipment and safety measures, this group reported that they were able to alternately up- and down-regulate neuronal firing patterns in the brain simply by sending a weak electric current between two electrodes placed on the scalp [1].





tDCS electrode placement



Today, this technique is called Transcranial Direct Current Stimulation (tDCS), and over 1,400 scientific articles (calculated by combining de-duplicated articles from a joint PubMed, ISI Web of Science, and Google Scholar search using the keywords “Transcranial Direct Current Stimulation”; October 15, 2014) have been published suggesting that passing an arguably innocuous amount of electricity through the brain of a healthy individual can improve his/her memory, learning, attention, inhibitory control, linguistic function, etc. In parallel with these findings, the public hype surrounding tDCS – often fueled by the researchers themselves – has grown to impressive proportions: in the last year alone, stories about this device and its ability to improve cognition and behavior have appeared in popular news outlets ranging from the BBC [2] to Wired [3] to The Wall Street Journal [4].




Doubtless fueled by this hype, there are currently three tDCS devices available for public purchase without the need for a medical prescription (and two more in development). In fact, as you read this, there are likely hundreds of people around the world trying to ‘boost’ their own brain power using these unregulated devices.



Unfortunately, a series of quantitative reviews undertaken by this group [5, 6] has revealed that tDCS does not generate a significant or reliable effect on neurophysiology, cognition, or behavior. When combined, the last 15 years of data strongly suggest either that (a) tDCS does not have an actual effect, or that (b) tDCS generates an effect that we can neither explain, elucidate, nor predict.
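For readers unfamiliar with what a quantitative review actually computes, here is a minimal sketch of the inverse-variance pooling at the heart of a fixed-effect meta-analysis. The effect sizes and standard errors below are invented for illustration and are not the values reported in the reviews cited above.

```python
import math

# Minimal sketch of fixed-effect meta-analytic pooling (inverse-variance
# weighting). The (effect size d, standard error) pairs are hypothetical.
studies = [
    (0.40, 0.25),
    (-0.10, 0.20),
    (0.05, 0.30),
    (0.15, 0.22),
]

weights = [1 / se**2 for _, se in studies]        # more precise studies weigh more
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
z = pooled / pooled_se

print(f"pooled d = {pooled:.3f} +/- {pooled_se:.3f} (z = {z:.2f})")
# Here |z| < 1.96, i.e., no significant pooled effect -- the same pattern
# the tDCS reviews report for many outcome measures: scattered positive
# findings can wash out once all the data are combined.
```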








This raises an incredibly important question: what are the responsibilities of a researcher when data he/she once publicized comes under question? More specifically, the tDCS data makes it quite clear that we do not have a solid handle on the mechanisms or effects of this device. As such, what role do we (researchers) play in ensuring the public are made aware of these developments and protected from possible neural injury or, at the least, economic waste?



It seems acceptable (almost expected) that researchers will publicize positive, potentially beneficial results – especially with regard to health and well-being. But, as has recently been reported in areas of research beyond tDCS [7, 8, 9], the number of retractions and amendments made to scientific articles is growing rapidly. Unfortunately, the public is rarely made aware of these changes, leaving them expecting results from paradigms that may no longer be viable or accepted in the scientific canon.







From someecards.com



One reason we (researchers) choose to avoid publicizing our negative results is obvious: research is a very messy endeavor, and there’s always danger in letting the customer see inside the kitchen. It’s an intelligent, safe decision to put only the most exciting, interesting, and applicable work forward for public scrutiny. However, as is becoming clear in articles like those cited above, more and more people are becoming aware that science is not the ideal, straightforward endeavor it’s often claimed to be – in fact, it is rife with the same unpredictable changes and sudden shifts that define all human endeavors. I fear that if we continue to ignore the uncertain, vacillatory nature of our profession in public and continue to hype only ‘success,’ we will quickly lose the faith of the very people we are trying to inspire.




It will certainly be interesting to see how the most prominent voices in the field choose to respond to the changing, increasingly uncertain landscape of tDCS. Is it safe to let the public know that we may have jumped the gun, and that we require more time and basic research before we can determine whether or not this is an efficacious tool? I believe that, although this type of message may damage our reputation in the short term, not being honest with the public and trying to keep controversies ‘in-house’ will only damage our reputation far more in the long term.






References


  1. Nitsche, M. A., & Paulus, W. (2000). Excitability changes induced in the human motor cortex by weak transcranial direct current stimulation. The Journal of Physiology, 527(3), 633-639.

  2. Mosley, M. (2014, October 30). Unexpected Ways to Wake Up Your Brain. Retrieved from http://www.bbc.com/news/magazine-29817519.

  3. Miller, G. (2014, May 5). Inside the Strange New World of DIY Brain Stimulation. Retrieved from http://www.wired.com/2014/05/diy-brain-stimulation/

  4. Kangaris, S. (2014, Feb. 18). Can Electric Current Make People Better at Math? Retrieved from http://www.wsj.com/articles/SB10001424052702303650204579374951187246122

  5. Horvath, J. C., Forte, J. D., & Carter, O. (2015). Evidence that transcranial direct current stimulation (tDCS) generates little-to-no reliable neurophysiologic effect beyond MEP amplitude modulation in healthy human subjects: A systematic review. Neuropsychologia, 66, 213-236.

  6. Horvath, J. C., Forte, J. D., & Carter, O. (2015). Quantitative Review Finds No Evidence of Cognitive Effects in Healthy Populations from Single-Session Transcranial Direct Current Stimulation (tDCS). Brain Stimulation. [EPub before Print].

  7. The Economist (2013, October 19). Unreliable Research: Trouble at the Lab. Retrieved from http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble

  8. The Economist (XXX). How Science Goes Wrong. Retrieved from http://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong

  9. Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.






Want to cite this post?




Horvath, J. (2015). When the Hype Doesn’t Pan Out: On Sharing the Highs-and-Lows of Research with the Public. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2015/02/when-hype-doesnt-pan-out-on-sharing.html