
Tuesday, October 2, 2018

How to be Opportunistic, Not Manipulative



By Nathan Ahlgrim





Opportunistic Research





Government data is often used to answer key research questions.
Image courtesy of the U.S. Census Bureau.




Opportunistic research has a long and prosperous history across the sciences. Research is classified as opportunistic when researchers take advantage of a special situation. Quasi-experiments enabled by government programs, unique or isolated populations, and once-in-a-lifetime events can all trigger opportunistic research where no experiments were initially planned. Opportunistic research is not categorically problematic. If anything, it is categorically efficient. Many a study could not be ethically, financially, or logistically performed in the context of a randomized controlled trial.





Biomedical research is certainly not the only field that uses opportunistic research, but it does present additional ethical challenges. In contrast, many questions in social science research can only be ethically tested via opportunistic research, since funding agencies are wary of explicitly withholding resources from a ‘control’ population (Resch et al., 2014). We, as scientists, are indebted to patients who choose to donate their time and bodies to participate in scientific research while inside an inpatient ward; their volunteerism is the only way to perform some types of research.





Almost all information we have about human neurons comes from generous patients. For example, patients with treatment-resistant epilepsy can have tiny wires lowered into their brains, a technique known as intracranial microelectrode recording, enabling physicians to listen in on the neuronal chatter at a resolution normally restricted to animal models (Inman et al., 2017; Chiong et al., 2018). Seizures are caused by runaway excitation and are best detected by recording electrical signals throughout the brain. With such fine spatial resolution inside a patient’s brain, surgeons can be incredibly precise in locating the site of the seizure and treating it. It’s what else those wires are used for that introduces thorny research ethics.









Image courtesy of Wikimedia Commons.

Those wires are already down there, so why not put them to even more use? Scientists dream of poring over the treasure trove of patients’ data. It’s a precious, and rare, resource. The elephant in the room, especially for practitioners of basic research, is that basic research is not expected to directly benefit the individual patient. Any scientific gain may help people in the years to come, but it will not affect that individual patient’s prognosis. Unlike studies trying to optimize deep brain stimulation (DBS) for treatment of Parkinson’s Disease (Müller and Christen, 2011) or depression (Dunn et al., 2011), basic research exists for the sake of science, not patient welfare. With fewer concrete benefits to the patient, the risk-benefit calculation becomes trickier.





Human neuroscience research like this is almost always expensive and demanding. That does not mean, however, that these experiments can be treated as low priority. Our prodigious knowledge of the nervous system is only surpassed by our ignorance of it, and treatments for some of the most pressing health concerns of our time depend on research like this to expand that knowledge. Of course, such a strong motivation to innovate can blind scientists to the need to also protect their research participants, which is why specific ethical standards for opportunistic research need to be robust and ready.





Physician-led Opportunism





In the physician-patient relationship, the power dynamic favors the physician. Most physicians recognize and accept this dynamic when it comes to healthcare. Even so, many fail to appreciate that the power dynamic does not disappear when the conversation changes topic; the physician remains the physician even when she talks to her patient about non-therapeutic research.








Image courtesy of SVG Silh.



Non-medical invasive brain research, like that using intracranial recordings and brain stimulation in epilepsy patients, is admittedly a niche area. Since it has no immediate implications for human health, it receives far less publicity and public scrutiny than clinical trials or even promising treatments in animal models (Fang and Casadevall, 2010). Although the purpose of basic research is distinct, it can still benefit from the lessons learned on the medical side. Clinical human neuroscience research shows that the ability to consent does not guarantee that the decision to consent is a voluntary one (Swift, 2011). In the shadow of the physician-patient power dynamic, would-be participants can become situationally incapacitated even while retaining full mental capacity (Labuzetta et al., 2011). In effect, their position as a patient, the physician-patient relationship, and the overlap between medical and research practices can all render the patient incapable of freely giving informed consent. Although the mental state of the patient may be sound, many argue that such patients must be protected just like those who lack the mental capacity to consent on their own behalf. The fear is that any hint of the research influencing the medical care, or even the failure to address that interaction explicitly, can force the patient’s decision.





Of course, there is also a strong argument that consent, even if not fully voluntary, can be ethically valid. Even proponents of the so-called Autonomous Authorization criterion, under which consent is only valid when given intentionally, with full understanding, and without controlling influence (Faden and Beauchamp, 1986), often amend or bend those strict guidelines to make them practical (Miller and Wertheimer, 2011). Autonomous authorization can be eroded by the therapeutic misconception, in which potential participants are swayed to enroll in a study because they confuse research with medical treatment (Appelbaum et al., 1982). For instance, patients may enroll in a study testing a potential drug to treat Alzheimer’s Disease because they believe their advanced condition means they will not be placed in the placebo group. That is not how randomized controlled trials are designed. Patients’ misunderstanding inflates the benefits in their minds, which can sway their decision to participate. Yet the demand that all patients be fully knowledgeable before their consent is deemed valid may be too rigorous to be practical, placing an unrealistic burden on researchers. Critics of the Autonomous Authorization model claim that responsibility for protecting patients resides in institutional safeguards (i.e., Institutional Review Boards [IRBs]), not the researchers themselves. With strong institutional standards in place, patients’ best interests can still be protected even if they give non-autonomous consent. That is, at least, the argument. How those safeguards are designed is the determining factor of their effectiveness.





How to Keep Consent Voluntary





We cannot pretend that the physician-patient power dynamic does not exist, or that every patient will become an expert in the research program they sign up for. Still, proactive steps on the institutional and personnel sides can protect participants and make sure they enroll because they want to, not because they feel they have to. The need for such protections is compounded by the specifics of invasive brain research, whose entire participant pool lives with a treatment-resistant brain disorder severe enough to merit invasive brain surgery. It is our unfortunate reality that stigma looms over people living with brain disorders, both external (from others) and internal (self-perception) (Corrigan et al., 2006). Stigma surrounding brain disorders weakens personal empowerment (Corrigan, 2002), tipping the balance of power even more strongly towards the physician and research team. The protections put in place for these participants must be comprehensive and robust to rebalance the relationship.





Teams performing invasive brain research have already made a series of recommendations to directly address the unique environment of non-medical invasive research using human patients (Chiong et al., 2018). Their recommendations are strong and worth implementing, but they fall short because of a common blind spot: their authors are still thinking like researchers, not patients.








Image courtesy of Pixabay user Catkin.

As a patient, you might feel pressured to consent to any research protocol put in front of you out of fear that your medical treatment depends on it. You don’t even need to be a cynic who expects the worst of your physician to fear this. After all, your physician will probably take more of an interest in you, and you’ll get more face time with her, if you sign up for her study. Yes, preferential treatment is wrong, but self-defense against improper treatment requires self-empowerment, something that is often eroded in these patients by the stigma attached to their brain disorder. To minimize potential coercion, physicians should at the very least complete the consent process as part of a team, alongside people not involved in the patient’s care. Implicit coercion would be reduced even further if physicians were absent from the consent process altogether, but such a requirement is often impractical. Both medical and research personnel should also be required to state explicitly that medical care will not change, for better or worse, regardless of research participation. These statements must be unequivocal and repeated before, during, and after the consent process.





Even as I and others lay out a list of criteria for researchers to meet, it is important to stress that research teams cannot rely on a one-size-fits-all consent process. Individualization is especially necessary when researchers are working with a vulnerable population dependent on their care. The capacity to consent to medical interventions (which get the patient into the ward in the first place) does not imply the capacity to consent to research interventions. Even after patients do consent, their medical condition can fluctuate, as can their desire to participate. Just as with medical treatment, consent at the start of a project, no matter how ethically it was obtained, cannot be used to rubber-stamp the entire study. Such protections are already given to psychiatric patients (Palmer et al., 2013), showing that the best consent is consent that is renewed.





Institutional criteria can help bolster these practices, but relying too heavily on them is dangerous. After all, institutional priorities can bias the definition of “patient interests” and preferentially validate non-autonomous consent that aligns with institutional interests over the individual patient’s interest. Neither the personnel-level nor the institutional approach can, on its own, fully protect someone in the dual role of patient and research participant, which is why the two must work in tandem. It is far too easy for researchers to capitalize on a patient’s therapeutic misconception because it produces the desired outcome, even when the deception is unintentional.








Image courtesy of Wikipedia.

As a patient, being told your medical care is protected regardless of your research participation is not the same as believing it and trusting it to be true. Doubt may be unavoidable, and while it is far from ideal, it should not by itself prevent a study from happening. Invasive brain research can only happen in specific and intensive circumstances, but it is absolutely necessary to the progress of neuroscience and medicine. Everything from epilepsy to Alzheimer’s Disease to autism is better understood, and better treated, because of invasive brain research.





Patients will be protected when physicians are trained not to show favoritism toward their research participants and IRBs shape research protocols to fairly balance a participant’s risks and benefits. They will be protected even if they do not understand the research as well as the research team does. Science does not have to wait until every member of the public is a scientist. Scientists do, however, need to protect non-scientists’ interests, even when doing so feels like it gets in the way of progress. This discussion of ethical challenges is not meant to detract from the fact that we, as a society, need this kind of research if we hope to continue improving overall health. The brain is boundlessly complex, and we do not understand it well enough to adequately treat those who need treatment. In short, our deep ignorance of the brain’s inner workings requires deep, and sometimes invasive, research.




________________












Nathan Ahlgrim is a fifth-year Ph.D. candidate in the Neuroscience Program at Emory. In his research, he studies how different brain regions interact to make certain memories stronger than others. He strengthens his own brain power by hiking through the north Georgia mountains and reading highly technical science...fiction.










References



Appelbaum PS, Roth LH, Lidz C (1982) The therapeutic misconception: Informed consent in psychiatric research. International Journal of Law and Psychiatry 5:319-329.



Chiong W, Leonard MK, Chang EF (2018) Neurosurgical patients as human research subjects: Ethical considerations in intracranial electrophysiology research. Neurosurgery 83:29-37.



Corrigan PW (2002) Empowerment and serious mental illness: Treatment partnerships and community opportunities. Psychiatric Quarterly 73:217-228.



Corrigan PW, Watson AC, Barr L (2006) The self-stigma of mental illness: Implications for self-esteem and self-efficacy. Journal of Social and Clinical Psychology 25:875-884.



Dunn LB, Holtzheimer PE, Hoop JG, Mayberg HS, Roberts LW, Appelbaum PS (2011) Ethical issues in deep brain stimulation research for treatment-resistant depression: Focus on risk and consent. AJOB Neuroscience 2:29-36.



Faden RR, Beauchamp TL (1986) A history and theory of informed consent. Oxford University Press.



Fang FC, Casadevall A (2010) Lost in translation—basic science in the era of translational research. Infection and Immunity 78:563-566.



Inman CS, Manns JR, Bijanki KR, Bass DI, Hamann S, Drane DL, Fasano RE, Kovach CK, Gross RE, Willie JT (2017) Direct electrical stimulation of the amygdala enhances declarative memory in humans. Proceedings of the National Academy of Sciences.



Labuzetta JN, Burnstein R, Pickard J (2011) Ethical issues in consenting vulnerable patients for neuroscience research. Journal of Psychopharmacology 25:205-210.



Miller FG, Wertheimer A (2011) The fair transaction model of informed consent: An alternative to autonomous authorization. Kennedy Institute of Ethics Journal 21:201-218.



Müller S, Christen M (2011) Deep brain stimulation in parkinsonian patients—ethical evaluation of cognitive, affective, and behavioral sequelae. AJOB Neuroscience 2:3-13.



Palmer BW, Savla GN, Roesch SC, Jeste DV (2013) Changes in capacity to consent over time in patients involved in psychiatric research. The British Journal of Psychiatry 202:454-458.



Resch A, Berk J, Akers L (2014) Recognizing and conducting opportunistic experiments in education: A guide for policymakers and researchers. Washington, D.C.: U.S. Department of Education.



Swift T (2011) Desperation may affect autonomy but not informed consent. AJOB Neuroscience 2:45-46.



Want to cite this post?



Ahlgrim, N. (2018). How to be Opportunistic, Not Manipulative. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/10/how-to-be-opportunistic-not-manipulative.html


