By Robert T. Thibault
Robert Thibault is interested in expediting scientific discoveries through efficient research practices. Throughout his PhD in the Integrated Program in Neuroscience at McGill University, he has established himself as a leading critical voice in the field of neurofeedback and has published on the topic in Lancet Psychiatry, Brain, American Psychologist, and NeuroImage, among other journals. He is currently finalizing an edited volume with Dr. Amir Raz, tentatively entitled “Casting Light on the Dark Side of Brain Imaging,” slated for release through Academic Press in early 2019.
We all hate being deceived. That feeling when we realize the “health specialist” who took our money was nothing more than a smooth-talking quack. When that politician we voted for never really planned to implement their platform. Or when that caller who took our bank information turned out to be a fraud.
These deceptions share a common theme—the deceiver is easy to identify and even easier to resent. Once we understand what happened and who to blame, we’re unlikely to be misled by such chicanery again.
But what if the perpetrator is more difficult to identify? What if they are someone we have a particular affection for? Can we maintain the same objectivity?
What if the deceiver is you?
In the case of self-deception, a different set of rules seems to apply. Self-deception is rarely deliberate and generally well-intentioned; it often stems from common cognitive biases and remains difficult to recognize. In this post, I discuss self-deception in the context of biomedical research. More specifically, I argue that researchers and practitioners can deceive themselves by clinging to promising seminal findings, overlooking emerging data, and, in turn, believing an effect is present when it is not.
The cognitive biases that misdirect us are well established. For example, when people are presented with new information that contradicts their folk understanding of the world, they tend to “quietly exempt themselves” from the general conclusions (1). In other words, if we don’t like the experimental results, we easily ignore them. This tendency is an example of confirmation bias.
In a similar vein, experimenters have shown that the first information we hear on a particular topic often holds more weight than subsequent data. To test this concept, researchers provided participants with a script—helping them establish an initial belief—and later revealed that the script contained only false information (2). Nonetheless, participants continued to answer questions as if the initial script held some truth. This study and others like it illustrate what psychologists call the primacy bias, or the continued influence effect (3).
How, you may ask, are these biases relevant to biomedicine? Take, for example, well-established treatments like antidepressants for depression, knee surgery for arthritis, acupuncture for lower back pain, neurofeedback for attention deficits, and even implanting tubular supports (stents) into coronary arteries for chest pain. In all of these cases, robust randomized controlled trials or meta-analyses reveal that these treatments provide little clinical benefit above and beyond placebo effects (4–8). Nonetheless, one in eight Americans continues to take antidepressants (9), surgeons perform up to a million arthroscopic knee surgeries every year (10), over 14 million people have undergone acupuncture (11), thousands of neurofeedback practitioners continue to read brainwaves, and doctors implant hundreds of thousands of coronary stents annually.
Of course, we like the idea that these treatments work through the presumed biological mechanisms (driving a confirmation bias), and we were probably first exposed to data suggesting they do (promoting a primacy bias). So when conflicting, and notably stronger, evidence comes out against our original beliefs, we find the new conclusions difficult to swallow. Undoing our biases is hard, but the stakes are high.
How did we get here?
The overrepresentation of positive results in the published literature (i.e., publication bias) likely contributes to the confusion surrounding the evidence for many biomedical treatments. When analyzing antidepressant research, for example, scientists were looking at only a biased subset of the data until 2002, when Irving Kirsch submitted a Freedom of Information Act request and meta-analyzed all published and unpublished data together. This analysis revealed that antidepressants outperformed placebos to a statistically significant degree but carried little additional clinical benefit. A recent meta-analysis found comparable results (12). Similarly, when evaluating neurofeedback research, we generally see only positive findings; until recently, it was notoriously difficult to publish a null finding in this field (5).
Publication bias remains commonplace not only because researchers may forgo publishing null findings, but also because journals are less likely to accept a paper presenting such results (13). This trend drives a state of affairs in which the first paper published on any particular topic almost always reports positive findings. When follow-up studies deflate the hype surrounding seminal publications, which is often the case, we end up with a situation I call the fake news effect in biomedicine—a less reliable positive finding gets trumped by a more decisive null result, and yet, we cling to what we heard first and what makes us feel good.
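To get a sense of how strongly this filter can distort a literature, consider a minimal simulation in Python. Every number here (the true effect size, the sample sizes, the "publish only if positive and significant" rule) is an assumption chosen for illustration, not a parameter from any study cited in this post:

# Toy simulation of publication bias: many small trials of a barely effective
# treatment, but only positive, statistically significant results reach print.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
TRUE_EFFECT = 0.1   # assumed true effect size (Cohen's d); nearly negligible
N_PER_ARM = 30      # assumed participants per trial arm
N_STUDIES = 1000    # number of simulated trials

all_effects, published = [], []
for _ in range(N_STUDIES):
    treatment = rng.normal(TRUE_EFFECT, 1.0, N_PER_ARM)
    control = rng.normal(0.0, 1.0, N_PER_ARM)
    d = treatment.mean() - control.mean()  # effect estimate (SD is ~1 by construction)
    _, p = stats.ttest_ind(treatment, control)
    all_effects.append(d)
    if p < 0.05 and d > 0:                 # the journal filter
        published.append(d)

print(f"Mean effect across all {N_STUDIES} trials: {np.mean(all_effects):.2f}")
print(f"Mean effect in the published subset: {np.mean(published):.2f}")
print(f"Proportion of trials published: {len(published) / N_STUDIES:.0%}")

With these made-up numbers, the published subset suggests an effect several times larger than the truth, and most trials never see the light of day. Pooling the unpublished trials back in, as Kirsch did, deflates the estimate toward reality.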
Beyond publication bias, some important experiments are simply never conducted due to narrowly framed ethical concerns. For example, it's rare to see placebo-controlled experiments in surgery because many scientists feel that they cannot justify exposing a patient to the potential complications of surgery without guaranteeing a genuine treatment. Likewise, regulatory agencies seldom require a placebo-controlled study before approving a new surgical technique.
The results from placebo-controlled trials, however, challenge this position. They demonstrate that some procedures, such as knee surgery, hardly outperform a sham comparator. Thus, in a broader frame, we expose millions of patients to the potential complications of certain surgeries while providing little more than placebo benefits. A panel of experts now strongly recommends against knee surgery for arthritis (14). Unfortunately, these placebo-controlled trials were performed after the medical profession had established the infrastructure to practice knee surgery. Had the robust null findings been published before the uncontrolled positive results, perhaps fewer practitioners would recommend this surgery.
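The logic of a sham comparator can be made concrete with a few lines of simulated data. Every number in this sketch (the size of the placebo response, the procedure's specific effect, the noise) is invented for illustration and does not come from the trials cited above:

# Minimal sketch: why uncontrolled studies conflate placebo and treatment effects.
import numpy as np

rng = np.random.default_rng(0)
N = 100                  # assumed patients per group

PLACEBO_RESPONSE = 2.0   # assumed improvement from expectation, natural recovery, etc.
SPECIFIC_EFFECT = 0.3    # assumed improvement attributable to the procedure itself

surgery = rng.normal(PLACEBO_RESPONSE + SPECIFIC_EFFECT, 1.0, N)  # real procedure
sham = rng.normal(PLACEBO_RESPONSE, 1.0, N)                       # sham comparator

# An uncontrolled study credits the whole improvement to the procedure...
print(f"Uncontrolled estimate of benefit: {surgery.mean():.2f}")
# ...while the sham-controlled contrast isolates the specific effect.
print(f"Sham-controlled estimate: {surgery.mean() - sham.mean():.2f}")

The uncontrolled design reports a large benefit even though nearly all of it is placebo response; only the sham comparison recovers the small specific effect.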
Even with these findings in mind, both ethics review boards and researchers themselves continue to shy away from certain placebo-controlled experiments. And we can't blame them: as humans, we tend to regret choices that stem from action more than those that stem from inaction (15). For example, if an institution runs a protocol where a placebo-control patient experiences serious adverse effects, lawyers are likely to get involved. Alternatively, if the institution refuses to conduct a placebo-controlled experiment for an invasive technique that turns out to provide only placebo benefits, few repercussions will surface. In a narrow frame, we can praise our inaction for how it minimized exposure to invasive treatments; in a broad frame, however, we can appreciate how this inaction may help perpetuate invasive placebo treatments that sometimes carry serious side effects.
Taken together, our scientific publishing model, with its disdain for null findings, and our tendency to frame ethical concerns narrowly and assume inaction as the default stack the deck against us. They feed our cognitive biases and drive us toward self-deception.
What next?
If we were infallible interpreters of science, self-deception would become a non-issue. We could instantaneously weigh the influence of publication bias and we would never forget that studies without controls necessarily conflate placebo and treatment effects. Upon exposure to new and more convincing data, we would change our opinion accordingly. With inspiration from the economist Richard Thaler, let's call this hypothetical character homo scientificus, or a scicon for short (Thaler depicts the perfectly rational and omniscient economic agent as homo economicus, or an econ) (16).
If we gave two scicons the same set of data, it wouldn't matter what order we presented it in, what journal it was published in, or whether it was even published at all. They couldn't deceive themselves even if they tried. As humans, however, we interpret data in relation to the order in which we receive it, our field of expertise, our own theoretical and methodological preferences, and even our emotional state at the time of reading. Needless to say, even the most seasoned scientists fall short of scicon status.
While we can’t reset the past and clean our slate of biases, we can strive to override them when we look back at data and to circumvent them as we move forward. For example, we can use what statistician Andrew Gelman calls the time-reversal heuristic (17) to override the fake news effect in biomedicine. He encourages us to conduct thought experiments where we imagine that a robust null study was published before an uncontrolled positive result—and then to re-evaluate our belief. This technique attempts to override our biases in that we remain exposed to the same data while acknowledging our predispositions and attempting to minimize errors in thinking.
To circumvent our biases—i.e., avoid information that feeds them or present data in ways that hinder them—at least two practices can help. We can (1) publish null results immediately and unabashedly, if not in a journal, then at least in a freely accessible repository; and (2) assume a broad frame when considering the ethical pros and cons of conducting a particular study.
It remains difficult to identify when we've been deceiving ourselves, even more difficult to assume the blame, and perhaps most difficult of all to implement a lasting behavioral change in light of our discovery. As a first step to evade the perils of self-deception, we can remain wary of our cognitive biases and present research in formats designed for humans, not scicons.
References
1. Nisbett RE, Borgida E. Attribution and the psychology of prediction. J Pers Soc Psychol 1975; 32: 932–43.
2. Anderson CA, Lepper MR, Ross L. Perseverance of social theories: The role of explanation in the persistence of discredited information. J Pers Soc Psychol 1980; 39: 1037–49.
3. Lewandowsky S, Ecker UKH, Seifert CM, Schwarz N, Cook J. Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychol Sci Public Interes Suppl 2012; 13: 106–31.
4. Kirsch I, Deacon BJ, Huedo-Medina TB, Scoboria A, Moore TJ, Johnson BT. Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration. PLoS Med 2008; 5: e45.
5. Thibault RT, Raz A. The Psychology of Neurofeedback: Clinical Intervention even if Applied Placebo. Am Psychol 2017; 72: 679–88.
6. Al-Lamee R, Thompson D, Dehbi HM, et al. Percutaneous coronary intervention in stable angina (ORBITA): a double-blind, randomised controlled trial. Lancet 2018; 391: 31–40.
7. Moseley JB, O’Malley K, Petersen NJ, et al. A controlled trial of arthroscopic surgery for osteoarthritis of the knee. N Engl J Med 2002; 347: 81–8.
8. Harris CS, Lifshitz M, Raz A. Acupuncture for Chronic Pain? Clinical Wisdom Undecided Despite Over 4000 Years of Practice. Am J Med 2015; 128: 331–3.
9. Pratt LA, Brody DJ, Gu Q. Antidepressant use among persons aged 12 and over: United States, 2011–2014. NCHS Data Brief 2017; 283: 1–8.
10. Kim S, Bosque J, Meehan JP, Jamali A, Marder R. Increase in outpatient knee arthroscopy in the United States: a comparison of national surveys of ambulatory surgery, 1996 and 2006. J Bone Joint Surg Am 2011; 93: 994–1000.
11. Zhang Y, Lao L, Chen H, Ceballos R. Acupuncture use among American adults: what acupuncture practitioners can learn from National Health Interview Survey 2007? Evid Based Complement Alternat Med 2012; 2012: 710750. DOI:10.1155/2012/710750.
12. Cipriani A, Furukawa TA, Salanti G, et al. Comparative efficacy and acceptability of 21 antidepressant drugs for the acute treatment of adults with major depressive disorder: a systematic review and network meta-analysis. Lancet 2018; 391: 1357–66.
13. Smaldino PE, McElreath R. The natural selection of bad science. R Soc Open Sci 2016; 3: 160384. DOI:10.1098/rsos.160384.
14. Siemieniuk RAC, Harris IA, Agoritsas T, et al. Arthroscopic surgery for degenerative knee arthritis and meniscal tears: a clinical practice guideline. BMJ 2017; 357: j1982.
15. Zeelenberg M, Van Dijk E, Van Den Bos K, Pieters R. The inaction effect in the psychology of regret. J Pers Soc Psychol 2002; 82: 314–27.
16. Thaler RH, Sunstein CR. Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven, CT: Yale University Press; 2008.
17. Gelman A. The time-reversal heuristic—a new way to think about a published finding that is followed up by a large, preregistered replication (in context of Amy Cuddy’s claims about power pose). 2016. http://andrewgelman.com/2016/01/26/more-power-posing/ (accessed April 1, 2018).
Want to cite this post?
Thibault, R. (2018). The Fake News Effect in Biomedicine. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/04/the-fake-news-effect-in-biomedicine.html