
Tuesday, May 29, 2018

Ethical Implications of fMRI In Utero




By Molly Ann Kluck






Image courtesy of Wikimedia Commons.

When my neuroethics mentor approached me with a publication from Trends in Cognitive Sciences called "Functional Connectivity of the Human Brain in Utero" (1) in hand, I was immediately delighted by the idea of performing an ethical analysis of the use of functional magnetic resonance imaging (fMRI) on fetuses in utero. As of right now, I'm still conducting this ethical analysis.





Using fMRI to look at human brains as they develop in utero is groundbreaking for a couple of reasons. For one, there is a vast difference between the fMRI method currently used to investigate developing brains and the previous methods used to examine fetal brain development. Research on developing brains has relied on preterm neonates, or babies born prematurely. While these data are valuable, the method has validity problems: early exposure to an abnormal environment (e.g., the intensive care unit, where many preterm babies go after birth, or an MRI machine), incomplete exposure to the essential nutrients and protection offered by the womb, and the plasticity of the fetal brain can all cause preterm neonates to experience differences in brain development (2). A map of the typically developing brain will not be truly accurate if it is produced solely from preterm neonates. Surveying a developing brain while it is still in utero, as fMRI in utero allows, is a different matter altogether: because the fetus develops without interruption, this research has a better chance of providing an accurate picture of the developing brain.





Secondly, the sheer amount of insight that can be gained by performing fMRI in utero opens doors to talk about consciousness. For example, fMRI allows the functional connectivity of brain networks like the default mode network (DMN) and the cognitive control network (CCN) to be observed. The DMN comprises the connected regions of the brain thought to work together when the brain is at rest and is an area of interest for research on mental disorders such as autism (3). The CCN, on the other hand, comprises the connected regions that are active when you exert cognitive control over your thoughts and actions (13). Both of these functionally connected networks are of interest when exploring what sentience, consciousness, and (potentially) disease arise from.
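Functional connectivity, as used here, typically means a statistical dependence (often a simple correlation) between the activity time courses of different brain regions. As a rough illustration of the kind of computation involved, and not the pipeline used in the cited studies, the sketch below correlates simulated ROI time series; the region labels and data are hypothetical.

```python
import numpy as np

# Hypothetical, simulated BOLD time series for four regions of interest (ROIs).
# In a real fMRI study these would come from preprocessed scans, not random noise.
rng = np.random.default_rng(0)
n_timepoints = 200
rois = ["mPFC", "PCC", "dlPFC", "ACC"]  # illustrative region labels only

signals = rng.standard_normal((n_timepoints, len(rois)))
# Make two "default mode" regions co-fluctuate so the example shows some structure.
signals[:, 1] = 0.7 * signals[:, 0] + 0.3 * signals[:, 1]

# Functional connectivity estimated as the pairwise Pearson correlation of ROI time courses.
fc_matrix = np.corrcoef(signals, rowvar=False)

for i, roi_a in enumerate(rois):
    for j, roi_b in enumerate(rois):
        if i < j:
            print(f"{roi_a:>5} - {roi_b:<5} r = {fc_matrix[i, j]:+.2f}")
```

In in-utero studies the hard part is everything that happens before this step (motion correction and region localization in a moving fetus), but the connectivity estimate itself reduces to comparisons of this kind.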








Image courtesy of Flickr.

Using fMRI in utero would help provide an accurate picture of how brain connectivity develops and would give insight into when these networks become operational. Potentially, attributes like "sentient" and "conscious" could be accurately applied to, or conversely withheld from, fetuses at specific developmental stages. The idea of fetal sentience could change the conversation about how fetuses are popularly regarded. Killing a sentient creature isn't a particularly savory thought, and to date it has been only a best guess as to when a fetus becomes sentient (10). With fMRI leading the charge on understanding when structures in the brain connect to generate sentience, the exact moment a fetus becomes sentient, or the developmental stages at which degrees of sentience appear, may become clearer. This would inform everything from abortion policy to women's health care and fetal moral status. Beyond the observation of sentience, there is evidence of connectivity differences in adults with neuropsychological disorders such as ADHD (4) and schizophrenia (5). If the underlying structures for brain connectivity are understood, these disorders might be predicted as early as in the womb, which would make fMRI a relevant diagnostic tool for use in utero.





An early diagnostic tool for neuropsychological disorders may also inform the abortion debate. A pregnant woman who is informed that the fetus she’s carrying will be born with a disorder may choose to abort (14). There are some who would claim it is her duty to do so (11) and others who would claim that selecting a fetus based on this potential for neurodevelopmental difference or disorder is a type of discriminatory act (12). 





As the scientific community continues to explore these possibilities and performs fMRI studies on pregnant women and their fetuses, the safety of the test subject (i.e., the pregnant mother) should be the priority. Some situations may place a pregnant woman at a higher risk of harm or stress than a non-pregnant woman would face in the same situation. Fetuses are also vulnerable to these harms or stressors, since a danger to the mother is a danger to the fetus (2). Three potential risks to the fetus are heat, sound, and stress due to multiple fMRI sessions, and the fetus could encounter all three repeatedly if the mother is enrolled in fMRI research for the duration of her pregnancy. For example, heat exposure during fMRI scanning may be difficult for a fetus to tolerate, since fetuses cannot regulate their own body temperature (9). In the fetal stage, the growth of important structures, like systems of muscles and nerves, can be reduced by abnormal heat exposure, and the nervous system may develop defects as a result of heat exposure as well (6). Heat effects on embryos and fetuses have been studied in rats, guinea pigs, hamsters, and other small rodents; in all of these species, and at all stages, fetuses were at risk for developmental delays or defects depending on when heat was administered (6). An fMRI will not cause a pregnant woman's body temperature to rise as much as the rodents' did in those studies, and certainly not for as long.






Image courtesy of Flickr.

However, my concern isn't that the temperature of a single scan will cause developmental damage to the fetus; my concern is what happens to a fetus developmentally after multiple exposures and an irregular amount of heat exposure over a long period of time. Because fetuses are mobile subjects, multiple fMRI sessions may be needed to capture a specific developmental period and to sample the brain at many points along the developmental time frame. Given all this exposure to fMRI heat and sound, and the stress the mother may accrue, precautions should be in place to prevent overexposure to these risks. Additionally, the informed consent document for this research should mention that there may be potential long-term risks of participating that are currently unknown and thus possibly unaccounted for.





While attending the 2017 International Neuroethics Society (INS) Annual Meeting, I was put in touch with a radiologist who suggested that, to moderate the amount of heat energy the fetus and mother absorb during an fMRI session, one must first note what type of pulse sequence is used. Pulse sequences in fMRI scans determine the contrast between different types of brain tissue, e.g. between grey and white matter, to provide better clarity (8). Furthermore, repeated and lengthy exposure to sounds above fetuses' hearing threshold has been linked to abnormal chromosomal development, hearing damage, and (as a result) poor sociability (7). The same radiologist offered his expertise here as well: modifying the pulse sequence modifies both sound and heat. As I mentioned in my first paragraph, my ethical analysis is still underway. Moving forward, I will be consulting experts in radiology on safe parameters and safety precautions.
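The radiologist's point that the pulse sequence drives both heat and noise can be made concrete with a common rule of thumb: RF power deposition (the specific absorption rate, or SAR) grows roughly with the square of the field strength and of the flip angle, and with how many RF pulses are delivered per unit time. The sketch below is only a back-of-envelope relative comparison under that assumption, not a safety or dosimetry calculation, and the parameter values are invented.

```python
def relative_sar(field_strength_tesla, flip_angle_deg, rf_pulses_per_tr, tr_seconds):
    """Crude relative SAR proxy: SAR ~ B0^2 * flip_angle^2 * (RF pulses per second).

    Illustrative rule of thumb only; real SAR estimation is done by the scanner
    from coil, sequence, and patient models.
    """
    pulses_per_second = rf_pulses_per_tr / tr_seconds
    return (field_strength_tesla ** 2) * (flip_angle_deg ** 2) * pulses_per_second

# Hypothetical comparison of two sequence settings at 3 T:
low_flip_epi = relative_sar(3.0, flip_angle_deg=70, rf_pulses_per_tr=1, tr_seconds=2.0)
refocused_scan = relative_sar(3.0, flip_angle_deg=150, rf_pulses_per_tr=16, tr_seconds=3.0)

print(f"Relative SAR proxy, low-flip EPI-like settings:  {low_flip_epi:,.0f}")
print(f"Relative SAR proxy, heavily refocused settings:  {refocused_scan:,.0f}")
# Lower flip angles and fewer RF pulses per TR shrink the proxy, which is one
# reason sequence choice matters for how much heat the mother and fetus absorb.
```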





Stress that mothers experience over the course of a pregnancy may cause developmental problems, including, but not limited to, central nervous system damage (2). It's possible that being involved in a longitudinal study may put extra stress on some, though certainly not all, pregnant women. Researchers should observe pregnant women closely for signs of stress and conduct follow-up interviews to watch for abnormal stress as well. I had the pleasure of speaking briefly with Dr. Thomason, one of the authors of the paper that sparked this journey, about her own test subjects. She reported that the mothers in her fMRI studies did not experience abnormal stress. I think this is wonderful, but I would be remiss if I assumed this will always be the case.





Moving forward, precautions need to be taken to ensure that the well-being of the subjects is prioritized, not only because this is the ethical path, but also because protecting the development of the fetus preserves the validity of the data collected. In addition, special attention should be paid to this research because of the insight it can provide into fetal brain structure and connectivity, and perhaps even into what qualifies as (morally relevant) sentience.



_______________



Molly is an undergraduate psychology major, in her third year, at George Mason University. Her minor is neuroscience and she has been interested in neuroethics research since her second year. She has been conducting psychology research in multiple labs on campus since right after her first year. She hopes to someday explore the ethical implications that come about from AI creation and wants to explore this question from the viewpoint of what consciousness is and if it is a relevant aspect of moral status.  









References




1. Heuvel, M. I. van den, & Thomason, M. E. (2016). Functional Connectivity of the Human Brain in Utero. Trends in Cognitive Sciences, 20(12), 931–939. https://doi.org/10.1016/j.tics.2016.10.001






2. Marques, A. H., Bjørke-Monsen, A.-L., Teixeira, A. L., & Silverman, M. N. (2015). Maternal stress, nutrition and physical activity: Impact on immune function, CNS development and psychopathology. Brain Research, 1617, 28–46. https://doi.org/10.1016/j.brainres.2014.10.051 







3. Buckner, Randy L., et al. “The Brain's Default Network.” Annals of the New York Academy of Sciences, vol. 1124, no. 1, Mar. 2008, pp. 1–38. NCBI, doi:10.1196/annals.1440.011. 







4. Uytun, M. C., Karakaya, E., Oztop, D. B., Gengec, S., Gumus, K., Ozmen, S., … Ozsoy, S. D. (2016). Default mode network activity and neuropsychological profile in male children and adolescents with attention deficit hyperactivity disorder and conduct disorder. APA PsycNET. https://doi.org/10.1007/s11682-016-9614-6 







5. Garrity AG, Pearlson GD, McKiernan K, Lloyd D, Kiehl KA, Calhoun VD. Aberrant “default mode” functional connectivity in schizophrenia. American journal of psychiatry. 2007 Mar;164(3):450-7. 






6. Edwards, M. J., Saunders, R. D., Shiota, K. (2003). Effects of heat on embryos and foetuses. International Journal of Hyperthermia, 19(3):295-324 






7. Krueger, C., Horesh, E., Crosland, B. A. (2012). Safe Sound Exposure In the Fetus and Preterm Infant. Journal of Obstretic, Gynecologic & Neonatal Nursing, 41(2):166-170 


8. "What Is fMRI?" UC San Diego Center for Functional MRI, fmri.ucsd.edu/Research/whatisfmri.html







9. Asakura, Hirobumi. "Fetal and Neonatal Thermoregulation." Journal of Nippon Medical School, vol. 71, no. 6, 2004, pp. 360–370. J-Stage, doi:10.1272/jnms.71.360.







10. Sumner, W. I., & Feinberg, D. S. (1997). "A Third Way." In The Problem of Abortion (3rd ed., pp. 98-117). Westminster Publishing Company.







11. Savulescu, Julian. (2001). "Procreative Beneficence: Why We Should Select the Best Children." Bioethics, vol. 15, no. 5-6, Oct. 2001, pp. 413–426. PubMed.gov, doi:10.1111/1467-8519.00251.







12. Gedge, Elizabeth. "'Healthy' Human Embryos and Symbolic Harm." In J. Nisker, F. Baylis, I. Karpin, C. McLeod, & R. Mykitiuk, eds., The 'Healthy' Embryo: Social, Biomedical, Legal and Philosophical Perspectives (Cambridge: Cambridge University Press, 2010).







13. Cole, Michael W., and Walter Schneider. "The Cognitive Control Network: Integrated Cortical Regions with Dissociable Functions." NeuroImage, vol. 37, no. 1, Apr. 2007, pp. 343–360. ScienceDirect.com, doi:10.1016/j.neuroimage.2007.03.071.







14. Natoli, J. L., Ackerman, D. L., McDermott, S., & Edwards, J. G. (2012). "Prenatal diagnosis of Down syndrome: a systematic review of termination rates (1995-2011)." Prenatal Diagnosis, 32(2), 142–153. https://doi.org/10.1002/pd.2910







Want to cite this post?



Kluck, M. (2018). Ethical Implications of fMRI In Utero. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/05/ethical-implications-of-fmri-in-utero.html

Tuesday, May 22, 2018

Should you trust mental health apps?








By Stephen Schueller







Image courtesy of Pixabay.

If you were to search the Google Play or Apple iTunes store for an app to help support your mental health you’d find a bewildering range of options. This includes nearly 1000 apps focused on depression, nearly 600 focused on bipolar disorder, and 900 focused on suicide (Larsen, Nicholas, & Christensen, 2016). But how much faith should you have that these apps are actually helpful? Or to take an even more grim position, might some apps actually be harmful? Evidence suggests the latter might be true. In one study, researchers who examined the content in publicly available bipolar apps actually found one app, iBipolar, that instructed people to drink hard liquor during a bipolar episode to help them sleep (Nicholas, Larsen, Proudfoot, & Christensen, 2015). Thus, people should definitely approach app stores cautiously when searching for an app to promote their mental health.







One reason people might believe such apps could be helpful is that Google Play and the Apple iTunes store list them within the "Health & Fitness" category rather than as entertainment, games, productivity, or social apps. One's expectations of benefit are likely tied to how things are presented. If we look elsewhere for examples of responsible uses of technology for mental health, we find a growing number of online mental health tests. Google, for example, has started providing tests for depression (Duckworth & Gilbody, 2017) and posttraumatic stress disorder in collaboration with the National Alliance on Mental Illness, and Mental Health America offers tests for various mental health problems on its screening platform. One would expect that these tests could be helpful and would tell you something valuable about your mental health. BuzzFeed tests, on the other hand, probably would not. Although some BuzzFeed tests might be loosely based on psychological concepts and theories, they function as entertainment meant to generate interest and clicks, not to promote useful health-related knowledge.







Google and Mental Health America, on the other hand, have collaborated with clinical scientists to select validated and widely-used tests that represent the gold standard within the field and thus have proven value for mental health. This comparison is useful because it demonstrates what app stores are not doing before allowing an app into the "Health & Fitness" section. Stores review apps for some aspects but do not ensure an app is validated, represents widely-used best practices, or has any proven value. Instead, they verify that the app meets their technical, content, and design guidelines. These guidelines lead to rejecting apps that risk physical harm, but not apps that risk mental or psychological harm. Furthermore, the review teams lack the expertise needed to evaluate the apps against a higher standard. Where, then, might people turn for the information they need to make informed decisions about mental health apps?
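One example of the kind of validated, widely used test described above is the nine-item PHQ-9 depression questionnaire, which is reportedly the instrument behind Google's depression screener. Purely to illustrate what "validated" scoring looks like in practice, here is a minimal sketch of its standard scoring; the severity cut-offs (5, 10, 15, 20) follow the published scoring guide, and a score is a screening result, not a diagnosis.

```python
# Minimal PHQ-9 scoring sketch. Each of the nine items is answered 0-3
# ("not at all" to "nearly every day"), so the total ranges from 0 to 27.
SEVERITY_BANDS = [
    (20, "severe"),
    (15, "moderately severe"),
    (10, "moderate"),
    (5, "mild"),
    (0, "minimal"),
]

def score_phq9(item_responses):
    if len(item_responses) != 9 or not all(0 <= r <= 3 for r in item_responses):
        raise ValueError("PHQ-9 expects nine responses, each scored 0-3")
    total = sum(item_responses)
    severity = next(label for cutoff, label in SEVERITY_BANDS if total >= cutoff)
    return total, severity

# Hypothetical respondent:
total, severity = score_phq9([1, 2, 1, 0, 1, 2, 1, 0, 0])
print(f"PHQ-9 total = {total} ({severity} depressive symptoms)")  # screening only, not a diagnosis
```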








Image courtesy of Max Pixel.

Over the past few years we've led a project that aims to fill this gap. Our goal at PsyberGuide is to empower consumers to make informed decisions that lead them to effective, usable, and safe mental health apps. Each app is rated on a variety of aspects: its credibility (direct and indirect research evidence and the quality of the development team), its user experience (aesthetics, usability, functionality), and the transparency and quality of its description of data security and privacy. Furthermore, for many apps we provide narrative reviews from experts in the field to help people better understand how and why to use that particular app. Through this process, we've learned some important things about the current state of mental health apps (Neary & Schueller, 2018).
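To make those rating dimensions concrete, here is a hypothetical sketch of how an app record with PsyberGuide-style scores might be represented. The field names, 0-5 scales, and example values are illustrative assumptions, not PsyberGuide's actual data model or ratings.

```python
from dataclasses import dataclass

@dataclass
class AppRating:
    """Hypothetical record mirroring the three rating dimensions described above."""
    name: str
    credibility: float       # research evidence and development team quality (assumed 0-5 scale)
    user_experience: float   # aesthetics, usability, functionality (assumed 0-5 scale)
    transparency: float      # clarity of data security / privacy description (assumed 0-5 scale)

    def summary(self) -> str:
        return (f"{self.name}: credibility {self.credibility}, "
                f"user experience {self.user_experience}, transparency {self.transparency}")

example = AppRating("HypotheticalMoodApp", credibility=3.5, user_experience=4.2, transparency=2.0)
print(example.summary())
```

Keeping the dimensions separate matters because, as noted below, an app can score well on one and poorly on another.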





Most apps that have direct empirical support (e.g., a scientific study evaluating that app itself) never become publicly available. And of the many apps that are available (van Ameringen et al., 2017), few have direct empirical support. Some app developers justify this by translating another technology-based treatment, such as a web-based platform, into an app. This is the case for reSET®, which received FDA approval based on data drawn from a web-based version of the intervention (Campbell et al., 2014). Other app developers claim their apps draw on evidence-based principles, such as cognitive-behavioral therapy, but researchers who actually evaluate the content within apps find this is rarely the case (Huguet et al., 2016). Therefore, having an independent third party review the evidence supporting an app provides a useful benchmark.






Even if an app has direct research evidence, that does not mean it offers a good user experience. In fact, in our ratings we find that credibility and user experience are only weakly correlated (Neary & Schueller, 2018). It is not surprising, then, that many mental health apps see low levels of real-world engagement. Clinical experts such as academic teams rarely have the expertise, funding, or incentives to build an engaging mental health app. Commercial app developers rarely have the expertise, interest, or incentives to conduct rigorous scientific evaluations. It is worth noting that absence of evidence is not evidence of absence, and apps without research support today might be supported by research later. It is also possible, however, that popular apps might not lead to significant benefits when subjected to rigorous scientific evaluation, as was the case in a recent study of Headspace (Noone & Hogan, 2018). Clearly, more research is needed, but researchers also need to team up with experts in fields such as design, human-computer interaction, gaming, and software development to build more engaging products that might result not just in benefits in research studies, but also in use in real-world environments.





Lastly, although many apps exist, few receive many downloads. Therefore, a project like PsyberGuide might have the largest impact by focusing on better understanding the most popular apps and raising awareness regarding their strengths, limitations, and expected benefits, and helping point people towards specific apps that might be helpful for them. We need to be careful not to overhype the potential of technology to deliver effective mental health treatments while also advancing the science and practice of this field to help people find apps that might actually benefit them.



_______________




Stephen Schueller, PhD, is an Assistant Professor of Preventive Medicine at Northwestern University and a member of Northwestern’s Center for Behavioral Intervention Technologies (CBITs). He also serves as the Executive Director of PsyberGuide, a project of One Mind that aims to identify, evaluate, and disseminate information about digital mental health products. His research broadly looks at increasing the accessibility and availability of mental health services through technology. He has developed, deployed, and evaluated digital mental health interventions including Internet websites and mobile apps for the treatment and prevention of depression, anxiety, and smoking cessation and the promotion of well-being including positive affect and happiness.








References





1. Campbell, A. N., Nunes, E. V., Matthews, A. G., Stitzer, M., Miele, G. M., Polsky, D., ... & Wahle, A. (2014). Internet-delivered treatment for substance abuse: a multisite randomized controlled trial. American Journal of Psychiatry, 171(6), 683-690.





2. Duckworth, K., & Gilbody, S. (2017). Should Google offer an online screening test for depression? BMJ, 358, j4144.











3. Huguet, A., Rao, S., McGrath, P. J., Wozney, L., Wheaton, M., Conrod, J., & Rozario, S. (2016). A systematic review of cognitive behavioral therapy and behavioral activation apps for depression. PLoS One, 11(5), e0154248.









4. Larsen, M. E., Nicholas, J., & Christensen, H. (2016). Quantifying app store dynamics: longitudinal tracking of mental health apps. JMIR mHealth and uHealth, 4(3), e96.









5. Neary, M., & Schueller, S. M. (2018). State of the Field of Mental Health Apps. Cognitive and Behavioral Practice.









6. Nicholas, J., Larsen, M. E., Proudfoot, J., & Christensen, H. (2015). Mobile apps for bipolar disorder: a systematic review of features and content quality. Journal of medical Internet research, 17(8), e198.









7. Noone, C., & Hogan, M. J. (2018). A randomised active-controlled trial to examine the effects of an online mindfulness intervention on executive control, critical thinking and key thinking dispositions in a university student sample. BMC psychology, 6(1), 13.









8. van Ameringen, M., Turna, J., Khalesi, Z., Pullia, K., & Patterson, B. (2017). There is an app for that! The current state of mobile applications (apps) for DSM-5 obsessive-compulsive disorder, posttraumatic stress disorder, anxiety and mood disorders. Depression and anxiety, 34(6), 526-539.










Want to cite this post?




Schueller, S. (2018). Should you trust mental health apps? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/05/should-you-trust-mental-health-apps.html

Thursday, May 17, 2018

Presenting... The Neuroethics Blog Reader: Black Mirror Edition!


It is our pleasure to present you with The Neuroethics Blog Reader: Black Mirror Edition!










This reader features the seven contributions from the blog's Black Mirror series, in which six different student writers explored the technology and neuroethical considerations presented in various episodes of the British science fiction anthology television series.



As Dr. Karen Rommelfanger puts it: 





This reader "... features critical reflections on the intriguing, exciting and sometimes frightful imagined futures for neurotechnology. Every day, in real life, we move closer to unraveling the secrets of the brain and in so doing become closer to understanding how to intervene with the brain in ways previously unimaginable. Neuroscience findings and the accompanying neurotechnologies created from these findings promise to transform the landscape of every aspect of our lives. As neuroethicists, we facilitate discussions on the aspirations of neuroscience and what neuroscience discoveries will mean for society. Sometimes this means dismantling overhyped neuroscience and staving off possible dystopian futures, but ultimately neuroethics aims to make sure that the neuroscience of today and of the future advance human flourishing."





The Neuroethics Blog, now in its 7th year of weekly publications, runs thanks in large part to our amazing blog editorial team. A special thank you to: Sunidhi Ramesh (Volume Editor of the reader and outgoing Assistant Managing Editor), Carlie Hoffman (Managing Editor), Nathan Ahlgrim (incoming Assistant Managing Editor), Kristie Garza (Supporting Editor and blog contributor), and Jonah Queen (Supporting Editor and blog contributor). We would also like to thank the authors of the pieces featured in the reader; you can read more about them on the last page of the publication.







Want to read more? Check out a digital copy of the reader below.









Tuesday, May 15, 2018

Regulating Minds: A Conceptual Typology





By Michael N. Tennison 








Image courtesy of Wikimedia Commons.

Bioethicists and neuroethicists distinguish therapy from enhancement to differentiate the clusters of ethical issues that arise based on the way a drug or device is used. Taking a stimulant to treat a diagnosed condition, such as ADHD, raises different and perhaps fewer ethical issues than taking it to perform better on a test. Using a drug or device to enhance performance—whether in the workplace, the classroom, the football field, or the battlefield—grants the user a positional advantage over one’s competitors. Positional enhancement raises issues of fairness, equality, autonomy, safety, and authenticity in ways that do not arise in therapy; accordingly, distinguishing enhancement from therapy makes sense as a heuristic to flag these ethical issues. 






These categories, however, do not capture the entire scope of the reasons for and contexts in which people use drugs or devices to modify their experiences. Consider psychedelic drugs like psilocybin and LSD that induce mystical-type experiences. Studies show that these drugs also have the potential to treat depression, anxiety, and addiction. They can even lead to positive changes in the personalities and behaviors of healthy subjects, such as increased openness and altruism, that can persist long-term without retaking the drug—effects that are neither therapeutic in the sense of treating a diagnosable condition, nor are they positional enhancements akin to taking steroids to gain a leg up on one’s athletic competition. 





Legal regimes that regulate drugs in the United States present a different dichotomy for distinguishing uses of drugs, ostensibly based on their risk/benefit ratios. In general, FDA laws and regulations authorize the therapeutic use of drugs proven to be sufficiently safe and effective, and controlled substance laws criminalize the use of drugs determined to lack medical value or that are obtained and used outside the scope of medical treatment. Unfortunately, the “War on Drugs”—the laws and policies implementing controlled substance restrictions—produces myriad side effects. These include unintuitive, unscientific, and politically-motivated drug classifications, interference with medical research, misinformation about the risks and benefits of drugs, stigmatization of non-problematic drug use, and the privileging of medical value as the only redeeming benefit an illegal drug can have, not to mention the panoply of harms imposed by criminalization. 







In a step toward developing a robust assessment of drug harms, a UK study ranked drugs based on their harm to users and others, using physical, psychological, and social criteria.  The findings, later endorsed by a panel of experts from across the EU, suggest a mismatch between the legal treatment of drugs and their comparative harms.  


Although ensuring the safety and effectiveness of drugs is an absolutely essential government function, generalizing and criminalizing non-medical use as “recreational drug use” or “drug abuse” fails to account for, and stifles, the other ways drugs can and do benefit individuals and society. Again, consider psilocybin, a drug designated in the United States as having no accepted medical value, a high potential for abuse, and no capacity to be used safely under medical supervision—a stricter legal classification than for cocaine, methamphetamine, and fentanyl. Even if conclusively proven safe and effective for medical or enhancement purposes, it would still be illegal to obtain and use psilocybin to enhance insight, catalyze personal development, and cultivate altruism and openness. Short of congressional action, there is no legal mechanism to acknowledge the scientific evidence of a scheduled drug’s safety and effectiveness for enhancement, and therefore authorize its non-medical use. 





The legal classification scheme fails, therefore, to accurately represent and respond to rapidly-accumulating, novel evidence about the actual risk/benefit ratios of certain drugs like psilocybin, while the neuroethical enhancement/therapy paradigm fails to account for the uses of drugs and devices that fall outside the boundaries of medicine and positional enhancement. I propose creating a single framework of functional categories that serves as both a descriptive typology and a normative spectrum that balances a higher level of precision and accuracy with the categorical bundling of thematic clusters of ethical issues. Ordered from the most to least ethically-justified, based on risk/benefit ratios at individual and collective levels, this framework comprises therapy, virtue enhancement, utility enhancement, and recreation enhancement. 





Therapy 





Therapy aims to restore the health and “normal” functions of individuals impaired by disease or injury. In discussions about enhancement, therapy tends to be the normative baseline against which enhancement is compared and contrasted because, over time, we have accrued a set of familiar attitudes, norms, and legal structures that drive our understanding of medical treatment. We understand that health and healthcare are prerequisites to flourishing as individuals and societies. 





Virtue Enhancement 








Image courtesy of Flickr.

Studies show that the controlled and supervised administration of psychedelics reliably induces a “peak” or “mystical” subjective experience of altered perceptions, mood, and cognition characterized by feelings of oneness, transcendence, and ineffability. Anxiety and fear are not uncommon elements, but careful management of the “set and setting” of the experience successfully protects the psychological safety of study participants. In a 2006 study, most subjects rated their psilocybin experience as “either the single most meaningful experience of his or her life or among the top five most meaningful experiences of his or her life.” Further research recently demonstrated that this may lead to sustained, positive changes in personality, worldview, or behavior that may benefit society, especially if reinforced by personal development practices such as meditation. 





I differentiate virtue enhancement from the kind of "moral enhancement" envisioned by bioethicists that would neurologically force the expression of a pro-social behavior, such as honesty, and that would only be effective so long as the individual is under the enhancement's neurochemical influence. By contrast, studies show that psychedelic experiences may prompt a more "natural" or authentic pursuit of personal development that persists long after the drug experience itself has passed.





Utility Enhancement 





This category refers to the use of a drug or device to enhance performance in an outcome-based endeavor. Whereas virtue enhancement is valuable in itself and in consequence for individuals and society, utility enhancement is valuable primarily in consequence. And despite enhancing performance on collectively-valued activities, such as work, academics, and sports, the overall value of utility enhancement is limited by the myriad of ethical issues raised. 





Utility enhancements are typically positional enhancements that grant leverage in zero-sum endeavors where people are competing for limited resources. For example, it is generally considered unfair for an athlete to use steroids to enhance performance in sports, because a better outcome for the enhanced individual or team automatically means a worse outcome for the opponent. Additionally, players may not have equal or affordable access to the most effective, least detectable, and safest enhancements, and many athletes may not want to risk professional or legal penalties if they are caught. Yet to remain competitive, they may feel forced to enhance themselves, thereby reducing their freedom and autonomy. 





Recreation Enhancement 








Image courtesy of Pxhere.

On one hand, recreation enhancement entails the use of a drug or device to enhance one’s experience of a recreational activity. Using a transcranial stimulation device to enhance video game performance, for example, provides some individual benefit without raising substantial ethical issues at the collective level. Studies now show that psychedelics can enhance the emotional appreciation of music, a finding that spans recreation, virtue enhancement, and even therapy—depending on the context and intent of use—and could have both individual and collective benefit. 





On the other hand, people also modify their minds with drugs and devices as a form of recreation itself. This parallels the concepts of “recreational drug use” and Robert Nozick’s “experience machine”—a critique of hedonism—entailing escapism or “tuning out” from reality. By chemically or electrically inducing a desired state, such as satisfaction or pleasure, one circumvents the natural process of effort and achievement, raising concerns about authenticity and the prospect of an unfavorable ratio between benefits and health risks. If a user seeks a change in consciousness as a short-term end goal in itself, without harnessing it to pursue other ethically-sound ends, it may provide little overall benefit to the individual or society. However, it is not necessarily harmful if not pursued to the point of interference with health or responsibilities. 





None of the above categories should be conflated with use disorders. In fact, a use disorder could result from the inappropriate pursuit of any of the above categories of substance use. According to SAMHSA, the DSM-5 defines a substance use disorder as being present when “the recurrent use of alcohol and/or drugs causes clinically and functionally significant impairment, such as health problems, disability, and failure to meet major responsibilities at work, school, or home.” Despite the prevalence of use disorders revealed by the nation-wide opioid crisis in the United States, it is important to remember that 80-90% of users of illegal drugs do not have a drug problem, according to Dr. Carl Hart, neuroscientist and chair of psychology at Columbia University. Similarly, data from a 2017 report of the UN Office on Drugs and Crime show that globally, almost 90% of people who use internationally-controlled drugs do not have substance use disorders. Indeed, by taking seriously the studies that demonstrate the non-problematic and even beneficial uses of drugs legally-classified as necessarily harmful, we are better equipped to identify risk factors of addiction and problematic use. 





Laws should encourage activities that benefit individuals and society and discourage those that do not. Creating a unified and coherent approach to understanding how and why people manipulate their consciousness with drugs and devices is a first step toward systematically incorporating intuitive, evidence-based risk/benefit analyses into our ethical, legal, social, and policy discussions. This facilitates the accurate identification and assessment of the unique clusters of ethical issues associated with different purposes and outcomes of drug and device use. Such assessments could be translated into laws and regulations that promote the discovery and application of beneficial manipulations of consciousness, even if non-medical, while implementing structures and processes to reduce their risk of harm. The next steps entail sorting out how drugs and devices could be approved and administered for different kinds of enhancement, including the requisite thresholds of scientific proof of safety and effectiveness and the proper context of use to minimize risks—whether inpatient, over the counter, by prescription, or after demonstrating sufficient knowledge and competence, as with obtaining a driver’s license. A model like the one proposed above can enhance our neuroethical analyses and help us to overcome our dangerous experimentation with harmful drug laws and policies.


_______________




Michael N. Tennison is a Senior Law & Policy Analyst at the University of Maryland Center for Health and Homeland Security. His research interests focus on the ethical, legal, social, and scientific issues associated with drug policy. The opinions expressed are the author's own and do not represent the view of the Center for Health and Homeland Security or the University of Maryland. Portions of this post are adapted from the author’s poster presentation at the 2017 Annual Meeting of the International Neuroethics Society







Want to cite this post?





Tennison, M. (2018). Regulating Minds: A Conceptual Typology. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/05/regulating-minds-conceptual-typology.html



Tuesday, May 8, 2018

Trust in the Privacy Concerns of Brain Recordings





By Ian Stevens







Ian is a 4th year undergraduate student at Northern Arizona University. He is majoring in Biomedical Sciences with minors in Psychological Sciences and Philosophy to pursue interdisciplinary research on how medicine, neuroscience, and philosophy connect. 




Introduction





Brain recording technologies (BRTs), such as brain-computer interfaces (BCIs), collect various types of brain signals from on and around the brain and could be creating privacy vulnerabilities in their users.1,2 These privacy concerns have been discussed in the marketplace as BCIs move from medical and research uses to novel consumer purposes.3,4 Privacy concerns are grounded in the fact that brain signals can currently be decoded to interpret mental states such as emotions,5 moral attitudes,6 and intentions.7 However, what can be interpreted from these brain signals in the future is ambiguous.
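In practice, "decoding" a mental state usually means training a classifier on features extracted from the recorded signals (for EEG, often band-power features). The sketch below shows the general shape of such a pipeline on simulated data; it is not the method of any of the cited studies, and the two emotion labels, feature counts, and numbers are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated stand-in for band-power features from a handful of EEG channels.
rng = np.random.default_rng(1)
n_trials, n_features = 120, 16
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)   # 0 = "calm", 1 = "stressed" (hypothetical labels)
X[y == 1, 0] += 1.0                      # inject a weak class difference so decoding can succeed

# A regularized linear classifier is a common baseline decoder for feature matrices like this.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, X, y, cv=5).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.2f} (chance is about 0.50)")
```

The privacy worry discussed below is precisely that recordings collected for one decoding purpose could later be fed into a different, more revealing pipeline of this general shape.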






The current uncertainty that surrounds future capacities to decode complex mental states – and the privacy vulnerabilities that this uncertainty creates for research subjects – requires that participants put trust in BRT researchers. I argue that "trust as reliability" is insufficient to understand the obligations of BRT researchers and that a more robust understanding of trust is needed. The populace deserves to know about the possibility of privacy losses in neuroscientific research.





There is a growing body of literature addressing how developing neurotechnologies may change, harm, or undermine our senses of self,8,9 and there are potential deleterious effects of neurotechnology on privacy.10 Technologies like positron emission tomography (PET), functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and electrocorticography (ECoG) present numerous ways that scientists can “look into” the brain. Here, I will not address all the concerns that arise with technologies that peer through the skull; my more limited focus is the idea that brain recordings produced as part of research protocols now may one day be used to decipher mental states of individual research participants. This narrow topic will serve as the groundwork for examining the importance of trust in brain recording research (and perhaps beyond).





Privacy, Brain Recording Technologies & Big Data








Image courtesy of Flickr.

Privacy scholars have described multiple types of privacy, such as physical, informational, decisional, and associational.11-13 I will focus on informational privacy because it is tied to the control of personal information. This broad category can be distilled to information about yourself that you may wish to withhold from certain parties. Such information can be as common as your social security number or as sensitive as possible unconscious racial biases. The concern with informational privacy for brain recording technology is that it might allow for the discovery and revelation of mental content "hidden" within recorded brain data. In other words, research subjects may lose the ability to withhold personal mental information.





This topic is nothing new, as the ability to draw additional information from brain data is already of philosophical intrigue.14,15 As I have said, intentions, moral attitudes, and visual imagery16 can be deciphered through the collection and interpretation of brain signals from various neurotechnologies (like the electroencephalogram, recorded on the exterior of the cranium, or electrocorticography, recorded on the surface of the brain).* Depending on which BRT is used, what kind of mental states the study is examining, and by what means researchers look to interpret brain recordings, a research team could discover different things (i.e., emotions and intentions). The type of procedure could also change what kind of brain recordings are collected (from which area of the brain) and the quantity of recordings collected.





The notion of "big data"18 might come to mind when considering the vast number of brain recordings collected with these technologies. In research contexts, brain data is actively shared among researchers to "spread the wealth."19 This practice is efficient, allowing multiple types of research to be done when the costs of conducting research are high. However, tension arises if and when such data can be re-identified. Although this currently has a low probability of occurring, apart from publicity around studies with small sample groups,20,21 the chance that it could happen highlights concerns for the future privacy of brain recordings. If brain recordings are left to move freely and can hypothetically be read to infer the identity (among other things) of the subject, the privacy concerns seem obvious. Therefore, the currently unknown privacy vulnerabilities that subjects of such BRTs face call for some guidance to balance current capabilities for decoding brain data against what could be around the bend for brain recording interpretation.





Trust








Image courtesy of The Blue Diamond Gallery.

Trust is an important, if often under-theorized, feature of medical research 22-24 and of the patient-physician relationship.25,26 It is defined as the knowledge or feeling that we can place something of significance in another’s control. Philosophically, one important aspect in the nature of trust lies in the vulnerability of the truster to the trustee.27 We have all experienced this in some form or another when building relationships. In a sense, we “open ourselves up” to others and feel hurt if we are rejected. This rejection should be thought of as being synonymous with feeling betrayed and hints at the importance of a relationship dynamic in this trust schema.28,29 I will focus on our relationships with researchers (participant-researcher relationship) here.





Reliability is generally the attribute we associate with people who follow through on their actions. There is an important distinction between trust and reliability, one that the social understanding of trust can help clarify. Here, I will define reliability as following an agreed-upon contract,29-31 while trust is associated with the more implicit standards of a situation, those not explicitly stated.28,32 For example, a researcher failing to follow the regulations set by HIPAA is being unreliable with someone's brain recordings. But if that researcher then sold such information to marketing companies, violating a more implicit understanding of what should not be done with brain recordings, it would produce a feeling of betrayal. The important point here is that trust covers the implicit privacy vulnerabilities that could be exploited in the future of brain recordings. As the previous section stressed, the future of what brain recordings could tell about us is largely unknown. Therefore, since a participant cannot completely consent to such exhaustive hypothetical practices, an amount of trust between researcher and subject needs to be in place to account for these unknown vulnerabilities. Learning about this part of the research process will allow participants to have more knowledge of when they are vulnerable and will better foster the informed consent process.





Clinical Application








Image courtesy of Wikimedia Commons.

The implementation of trust in the participant-researcher relationship is an epistemological question; that is, how can you know whom you can trust? Initially, the satisfaction of reliability is used to classify trustworthiness: does what the researchers say they are going to do actually happen? The more difficult step is creating the knowledge and feeling that, if something does go wrong, the researchers have your well-being in mind. A topic that addresses this, while not intended for such a use, is the notion of ancillary care. Ancillary care is the care allotted to study participants if something medically concerning is discovered while a participant is in a study. These research obligations are grounded in a kind of implicit understanding of the participant-researcher relationship.33 Lessons about trust found in the ancillary care debates can be applied to BRT research. This may provide a way to understand the activities of researchers accessing, tracking, and possibly controlling brain data as taking place within trusting relationships. So, while it may be premature to speak of codifying trust-based guidelines for BRT research, this may be an avenue worth exploring. This way, even though participants might never meet the holders of their brain data, they can still trust them.





*It is important to note that in these practices patients consent to having such information recorded.17 And while such consented recording is ethically sound, the recording of brain states that a patient is unaware of is an important distinction to make about informational privacy.21 Such a practice does require further ethical evaluation, but for this paper the important point to extrapolate is a subject's vulnerabilities.


Unknown vulnerabilities are common in scientific research. Will a patient be harmed by the side effects of this drug? Can this plastic be safe around children? Such unknowns are usually the original motivation to conduct research. However, I will only analyze possible harms to privacy in brain recordings. Future literature could explore how trust could work with unknown physical harms in other research fields. 





Acknowledgements



I want to thank the Neuroethics Thrust at the Center for Sensorimotor Neural Engineering for having me as an intern this summer and especially Dr. Eran Klein and Dr. Sara Goering for mentoring me.





References



  1. Bonaci, T., Herron, J., Matlack, C. & Chizeck, H. J. Securing the exocortex: A twenty-first century cybernetics challenge. in 2014 IEEE Conference on Norbert Wiener in the 21st Century (21CW) 1–8 (2014). doi:10.1109/NORBERT.2014.6893912

  2. Klein, E. & Rubel, A. Privacy and ethics in brain-computer interface research. in Brain-Computer Interfaces Handbook: Technological and Theoretical Advances (eds. Nam, C., Nijholt, A. & Fabien, L.) 653–668 (Taylor & Francis).

  3. Bonaci, T., Calo, R. & Chizeck, H. J. App Stores for the Brain?: Privacy and Security in Brain-Computer Interfaces. IEEE Technol. Soc. Mag. 34, 32–39 (2015).

  4. Biddle, S. (2017, May 22). Facebook Won’t Say If It Will Use Your Brain Activity for Advertisements. The Intercept. Available at: https://theintercept.com/2017/05/22/facebook-wont-say-if-theyll-use-your-brain-activity-for-advertisements/. (Accessed: 10th July 2017)

  5. Daly, I. et al. Affective brain–computer music interfacing. J. Neural Eng. 13, 46022 (2016).

  6. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M. & Cohen, J. D. An fMRI Investigation of Emotional Engagement in Moral Judgment. Science 293, 2105–2108 (2001).

  7. Haynes, J.-D. et al. Reading Hidden Intentions in the Human Brain. Curr. Biol. 17, 323–328 (2007).

  8. Mecacci, G. & Haselager, W. F. G. (Pim). Stimulating the Self: The Influence of Conceptual Frameworks on Reactions to Deep Brain Stimulation. AJOB Neurosci. 5, 30–39 (2014).

  9. de Haan, S., Rietveld, E., Stokhof, M. & Denys, D. Becoming more oneself? Changes in personality following DBS treatment for psychiatric disorders: Experiences of OCD patients and general considerations. PloS One 12, e0175748 (2017).

  10. Ahmadi, M. & Ahmadi, L. Privacy Aspects of Nanoneuroimplants from the Point of View of a Human Dignity Perspective in Related International Conventions. J. Biomater. Tissue Eng. 4, 315–337 (2014).

  11. Decew, J. W. In Pursuit of Privacy: Law, Ethics, and the Rise of Technology. (Cornell University Press, 1997).

  12. Allen, A. L. Uneasy Access: Privacy for Women in a Free Society. (Rowman & Littlefield Publishers, 1988).

  13. Nissenbaum, H. Privacy in Context: Technology, Policy, and the Integrity of Social Life. (Stanford University Press, 2009).

  14. The Oxford Centre for Neuroethics - Neil Levy. Available at: http://www.neuroethics.ox.ac.uk/our_members/neil_levy. (Accessed: 28th July 2017)

  15. Edwards, S. D. R. R. J. L. I Know What You’re Thinking: Brain imaging and mental privacy by Sarah D. Richmond. (Oxford University Press, 2012).

  16. Schoenmakers, S., Barth, M., Heskes, T. & van Gerven, M. Linear reconstruction of perceived images from human brain activity. NeuroImage 83, 951–961 (2013).

  17. I Know What You’re Thinking: Brain imaging and mental privacy. (Oxford University Press, 2012).

  18. Mittelstadt, B. D. & Floridi, L. The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts. Sci. Eng. Ethics 22, 303–341 (2016).

  19. Poldrack, R. A. & Gorgolewski, K. J. Making big data open: data sharing in neuroimaging. Nat. Neurosci. 17, 1510–1517 (2014).

  20. Obama shakes mind-controlled robot hand wired to sense touch. US News & World Report Available at: https://www.usnews.com/news/news/articles/2016-10-13/paralyzed-man-feels-touch-through-mind-controlled-robot-hand. (Accessed: 28th July 2017)

  21. CBS News. (2012, May 16). Paralyzed woman uses mind-control technology to operate robotic arm. Available at: http://www.cbsnews.com/news/paralyzed-woman-uses-mind-control-technology-to-operate-robotic-arm/. (Accessed: 28th July 2017)

  22. Perusco, L. & Michael, K. Control, trust, privacy, and security: evaluating location-based services. IEEE Technol. Soc. Mag. 26, 4–16 (2007).

  23. Kerasidou, A. Trust me, I’m a researcher!: The role of trust in biomedical research. Med. Health Care Philos. 20, 43–50 (2017).

  24. Kass, N. E., Sugarman, J., Faden, R. & Schoch-Spana, M. Trust The Fragile Foundation of Contemporary Biomedical Research. Hastings Cent. Rep. 26, 25–29 (1996).

  25. Mainous, A. G., Baker, R., Love, M. M., Gray, D. P. & Gill, J. M. Continuity of care and trust in one’s physician: evidence from primary care in the United States and the United Kingdom. Fam. Med. 33, 22–27 (2001).

  26. Anderson, L. A. & Dedrick, R. F. Development of the Trust in Physician Scale: A Measure to Assess Interpersonal Trust in Patient-Physician Relationships. Psychol. Rep. 67, 1091–1100 (1990).

  27. McLeod, C. Trust. in The Stanford Encyclopedia of Philosophy (ed. Zalta, E. N.) (Metaphysics Research Lab, Stanford University, 2015).

  28. Baier, A. Trust and Antitrust. Ethics 96, 231–260 (1986).

  29. Hardin, R. Trust and Trustworthiness. (Russell Sage Foundation, 2002).

  30. Black, D. Autonomy and Trust in Bioethics. J. R. Soc. Med. 95, 423–424 (2002).

  31. Dasgupta, P. Trust as a Commodity. in Trust: Making and Breaking Cooperative Relations (ed. Gambetta, D.) 49–72 (Blackwell, 1988).

  32. Jones, K. Trust as an Affective Attitude. Ethics 107, 4–25 (1996).

  33. Richardson, H. S. Gradations of Researchers’ Obligation to Provide Ancillary Care for HIV/AIDS in Developing Countries. Am. J. Public Health 97, 1956–1961 (2007).




Want to cite this post?




Stevens, I. (2018). Trust in the Privacy Concerns of Brain Recordings. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/05/trust-in-privacy-concerns-of-brain.html

Tuesday, May 1, 2018

The Promise of Brain-Machine Interfaces: Recap of March's The Future Now: NEEDs Seminar





Image courtesy of Wikimedia Commons.


By Nathan Ahlgrim



If we want to – to paraphrase the classic Six Million Dollar Man – rebuild people, rebuild them to be better, stronger, faster, we need more than fancy motors and titanium bones. Robot muscles cannot help a paralyzed person stand, and robot voices cannot restore communication to the voiceless, without some way for the person to control them. Methods of control need not be cutting-edge. The late Dr. Stephen Hawking’s instantly recognizable voice synthesizer was controlled by a single cheek movement, which seems shockingly analog in today’s world. Brain-machine interfaces (BMIs) are the emerging technology that promise to bypass all external input and allow robotic devices to communicate directly with the brain. Dr. Chethan Pandarinath, assistant professor of biomedical engineering at Georgia Tech and Emory University, discussed the good and bad of this technology in March’s The Future Now NEEDs seminar: "To Be Implanted and Wireless". He shared his experience and perspective, agreeing that these invasive technologies hold incredible promise. Keeping that promise both realistic and equitable, though, is an ongoing challenge.






BMIs are currently designed as assistive technologies. They can take many forms: a cochlear implant, a cursor on a screen, a robotic arm, or even a complete exoskeleton. All serve the same general purpose: to restore a person’s ability to connect and communicate with the world. The most common patients are those with some form of paralysis. Given the potential to restore movement or speech to people, many see the development of BMIs as a moral imperative. However, agreeing that BMI research is a worthwhile and necessary endeavor cannot will these devices into being. There is a good reason why controlling a robot arm with your brain feels like something out of science fiction – it is incredibly difficult to do.







An example of an intracortical array.

Image courtesy of Wikimedia Commons.


Reliable BMIs depend on first being able to record brain activity. Scientists have been able to do this for decades at great precision, but the unfortunate trade-off is that the level of precision tracks directly with the level of invasiveness. As Dr. Pandarinath described, scalp electroencephalograms (EEGs) require no surgery at all, but analyzing the resulting data is like standing outside of a football stadium. You may hear the roar of the crowd, but you need to get in the stands before you can pick up individual conversations. For scientists, that means you need to open up the skull and place arrays of wires (known as intracortical microelectrodes) into the brain itself in order to eavesdrop on the brain’s conversations.








Display of the BrainGate system.

Image courtesy of Wikimedia Commons.

Figuring out what those brain conversations mean is the hard part. All our billions of neurons firing at once produce gigabytes of data, and the challenge of making sense of that data is what draws engineers and computer scientists towards neuroscience. Dr. Pandarinath is one of these people, a self-described “engineer that managed to run into the brain one day and thought it was pretty cool.” Approaching the problem as an engineer, he and many others have developed a host of technologies around the BrainGate system. Their tagline says it all: “Turning thought into action.” Targeting the motor cortex of the brain, which controls voluntary movements in healthy individuals, BrainGate technology allows paralyzed people to control robotics just by thinking about them (Pandarinath et al., 2017). Perhaps most shocking of all, learning to control the device is like learning to walk. At first it’s a struggle (there’s a reason we label toddlers as such), but adults do not consider walking a skill. As one patient described, “it was hard work getting [it to work]. I struggled greatly to [move the arm] up and down at the beginning, now up and down is so easy I don’t even think about it.” In effect, BrainGate lets patients control a robot as an extension of their own body. No mental gymnastics needed.
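Under the hood, a system like this fits a decoder that maps recorded neural activity (for example, firing rates from an intracortical array) to intended movement such as cursor velocity. Published BrainGate systems use Kalman filters and, in some of Dr. Pandarinath's work, recurrent neural networks; the plain ridge-regression sketch below is only meant to convey the idea, and every number in it is synthetic.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Simulated calibration data: firing rates from 96 channels (the size of a typical
# Utah array) over 1,000 time bins, plus the intended 2-D cursor velocity per bin.
rng = np.random.default_rng(42)
n_bins, n_channels = 1000, 96
true_tuning = rng.standard_normal((n_channels, 2))   # each unit weakly "prefers" a direction
velocity = rng.standard_normal((n_bins, 2))          # intended (vx, vy) during calibration
firing_rates = velocity @ true_tuning.T + 0.5 * rng.standard_normal((n_bins, n_channels))

# Fit a ridge-regression decoder mapping firing rates to velocity.
decoder = Ridge(alpha=1.0).fit(firing_rates, velocity)

# Decode a fresh bin of activity into a cursor velocity command.
new_activity = velocity[:1] @ true_tuning.T          # pretend this is newly recorded activity
vx, vy = decoder.predict(new_activity)[0]
print(f"Decoded cursor command: vx={vx:+.2f}, vy={vy:+.2f}")
```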





Is the ease of use a good thing? Once patients can “automatically” control BMIs, are they at fault for any harm caused by the machine? Dr. Karen Rommelfanger raised one possible scenario: following an argument between the patient and researcher, the patient’s robotic hand crushes the researcher’s hand during testing. Who is at fault? Did the patient misuse the technology, or did the researcher cause her own injury by creating a faulty system?





One possibility is to have a universal limit to the strength and ability of all BMIs. Even though we can create machines that rip cars apart like tissue paper, maybe we should never build a robotic arm to have more grip strength than that of a child. Such a solution prevents the person (or BMI) from doing any physical harm, but it then fails the primary goal of BMIs: to restore patients’ abilities. A universal set-point on what these abilities should be is problematic because, for better or worse, there is no singular ‘human ability.’





By the end of the seminar, the conversation landed on where to draw the line between restoration and enhancement. Of course, this debate is not new to BMIs. Everything from sports supplementation to psychostimulants like Adderall is subject to the same debate: who deserves to receive these treatments, and how much is too much? Researchers do not even need to design superhuman BMIs (although it is certainly possible) to join the conversation. The arm strength of an editorial intern is a far cry from that of Game of Thrones' Hafþór Björnsson, but we are both decidedly human. If I became paralyzed, must I be restricted to my previous strength? I could always argue that I was just going to start a strongman program before I became paralyzed, and therefore I deserve a robotic arm to match.








Could and should BMIs make everyone

as strong as humanly possible?

Image courtesy of Wikimedia Commons.

The premise that researchers will be in charge of setting a limit (if any) may be inherently flawed, given that machine learning is starting to drive BMI research. Algorithms succeed by optimizing solutions, which in the case of BMIs would mean the most efficient, most precise, and perhaps strongest BMI possible. Normal humans are hardly the optimal physical form, so it is hard to imagine a sophisticated algorithm being content with returning me to my previous strength.





To many, “supplementing” people with artificial intelligence (AI)-guided BMIs is a good thing, and perhaps even necessary. Elon Musk, famous for his dire warnings on the impending AI threat, posits that coupling AI with humans via BMIs is the best protection our species has against it. By making ourselves more than human, we will at least have a fighting chance against the AIs we design with the express goal of being better than human.





In the end, BMIs do offer great promise. No, a paraplegic will not be able to walk normally in the next year using a BMI. Anyone who promises that is peddling false hope and unrealistic expectations. But BMIs, like all other technologies, never stop improving. Questions about limits on and access to these incredible tools will only become more pressing as the technology improves. Who gets to set the limit? Who will act as gatekeeper? The patient or the manufacturer? Dr. Pandarinath does not think BMIs are different from any other cutting-edge product: "by default, it'll be the wallet." And adjusting for inflation, it will now take thirty-five million dollars to build the Six Million Dollar Man.





References





Pandarinath C, Nuyujukian P, Blabe CH, Sorice BL, Saab J, Willett FR, Hochberg LR, Shenoy KV, Henderson JM (2017) High performance communication by people with paralysis using an intracortical brain-computer interface. eLife 6:e18554.



Want to cite this post?



Ahlgrim, N. (2018). The Promise of Brain-Machine Interfaces: Recap of March's The Future Now: NEEDs Seminar. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/05/the-promise-of-brain-machine-interfaces.html