
Tuesday, August 28, 2018

Smart AI




By Jonathan D. Moreno








Image courtesy of Flickr

Experiments that could enhance rodent intelligence are closely watched, and long-term worries about super-intelligent machines are everywhere. But unlike with smart mice, we're not talking about industry standards for the almost daily steps toward computers that possess at least near-human intelligence. Why not?

Computers are far more likely to achieve human or near-human intelligence than lab mice, however remote the odds for either. The prospects for making rodents smarter with implanted human neurons have dimmed as the potential for a smart computer continues to grow. For example, a recent paper reported that human neurons implanted in mice did not make them smarter maze-runners. By contrast, in 2016 a computer program called AlphaGo showed it could defeat a professional human Go player. Such machine-learning algorithms continue to teach themselves new, human-like skills, like facial recognition, except of course that they are better at it than the typical human.

There's been no lack of focus on the unsettling prospect of "pretty smart" mice. In 2000, Stanford's Irving Weissman proposed to transplant human neural cells into fetal mice. The purpose was to study human neurons in vivo, but the proposal quickly became a flash point for debate about the ethical issues involved in human/non-human chimeras. Then-Senator Brownback introduced legislation to criminalize attempts to make such creatures, called the "Human Chimera Prohibition Act," though most of its definitions seemed to deal with hybrids. President George W. Bush called for a ban on "human-animal hybrids" in his 2006 State of the Union address.

Despite these presidential-level worries and much forehead-furrowing about smart mice among us bioethicists, an intelligence enhancement event is far more likely to take place in the AI realm than in animal studies. Although both are far-fetched, the very criteria for AI intelligence enhancement are still very much at issue, whereas physical limitations like the size of a rodent skull appear to be decisive limiting factors for the effects of any humanized implant.








Image courtesy of pngimg.com

This is not a matter of the "existential risk" that Skynet-style computers will align against us and turn all the atoms of the universe into whatever their basic programming instructs them to do, come what may. Nor is it the challenge of deciding whether it's wrong to be cruel to self-aware machines à la Westworld. Instead, there's the very practical and immediate problem of whether the next AI improvement risks machine awareness and how to prepare for it. Should some anticipatory legal arrangements be made? Should ethical standards for the treatment of this new kind of intelligent creature be made ready? Perhaps most fundamental, who will have the authority to communicate with this intelligent device or to decide its fate?

To make matters more confusing, when that moment supposedly comes not everyone will agree that the AI in question is truly conscious, just as there is disagreement about whether non-human animals or even certain profoundly disabled human beings are conscious. That debate itself will be contentious.





Industry standards have been applied to such efforts as putting human neural stem cells into non-human animals. Despite skepticism about scientists self-regulating, those standards have worked. A similar process should be undertaken by the major players in AI. They could promulgate a process for the periodic evaluation of progress toward smart and potentially self-aware AI. This wouldn’t be regulation, but self-governance. In fact, earnest steps along these lines might help convince lawmakers that they don’t need to step in. 








Image courtesy of Pixabay

Even without considering doomsday scenarios or ethical quandaries, the creation of a pretty smart AI with the consciousness that goes with it would be a world-changing event, not unlike discovering intelligent aliens. Indeed, in at least one respect that would be more significant because humans would be the ones to create this new kind of person, not the contingencies of extra-terrestrial evolution. 





Despite the ongoing anxieties expressed in many quarters about AI intelligence enhancement, groups formed to address the policy implications (e.g., Stanford's AI 100 group) have not taken up the nearer-term question of how we should react to and treat conscious AI, for example, as possessors of "rights." That debate has already begun in Europe and is only likely to intensify. Rather, these groups are focused on far more likely disruptions, such as those to markets, even if AI is never conscious in any meaningful sense.

Of course we should worry about the big-picture changes that stand to be triggered by better and better AI, but we also need to keep an eye on more subtle first signals of what we might call consciousness or self-awareness. That moment could come sooner, and it certainly would have more impact on humanity and society, than robots stealing jobs or becoming our sex partners.

There is a gap between the most extreme concerns about intelligent AI and monitoring the incremental movement that might lead to those results. Even normally anti-regulatory entrepreneurs like Elon Musk have suggested that this is one area in which they would advocate some form of regulation. But if the major companies working on enhancing AI established an industry standard of prior review, that extreme and possibly counter-productive outcome could be avoided.

Pretty smart AI is coming. It's in the industry's interest to be a step ahead.




_______________






Jonathan D. Moreno is the David and Lyn Silfen University Professor at the University of Pennsylvania where he is a Penn Integrates Knowledge (PIK) professor. At Penn he is also Professor of Medical Ethics and Health Policy, of History and Sociology of Science, and of Philosophy.  His latest book is Impromptu Man: J.L. Moreno and the Origins of Psychodrama, Encounter Culture, and the Social Network (2014), which Amazon called a “#1 hot new release.”  Among his previous books are The Body Politic, which was named a Best Book of 2011 by Kirkus Reviews, Mind Wars (2012), and Undue Risk (2000).   














Want to cite this post?




Moreno, J. D. (2018). Smart AI. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/08/smart-ai.html

Tuesday, August 21, 2018

Worrisome Implications of Lack of Diversity in Silicon Valley




By Carolyn C. Meltzer, MD








Image courtesy of Wikimedia Commons

The term “artificial intelligence” (AI) was first used in 1955 by John McCarthy of Dartmouth College to describe complex information processing (McCarthy 1955). While the field has progressed slowly since that time, recent advancements in computational power, deep learning and neural network systems, and access to large datasets have set the stage for the rapid acceleration of AI.  While there is much painstaking work ahead before transformational uses of AI catch up with the hype (Kinsella 2017), substantial impact in nearly all aspects of human life is envisioned. 






AI is being integrated in fields as diverse as medicine, finance, journalism, transportation, and law enforcement. AI aims to mimic human cognitive processes, as imperfect as they may be. Our human tendencies to generalize common associations, avoid ambiguity, and identify more closely with others who are like ourselves may help us navigate our world efficiently, yet how they translate into our design of AI systems remains unclear. As is typically the case, technology is racing ahead of our ability to consider the societal and ethical consequences of its implementation (Horvitz 2017).





AI algorithms already in common use have given us a glimpse of the downside of the lack of diversity in computing, biotech, engineering, and other STEM fields. If you own an iPhone X, you routinely rely on facial recognition technology to unlock your phone and sign in to apps such as those for personal banking. Such algorithms appear to work best if you are a white man: for women and dark-skinned persons, errors increase considerably (from under 1% for a white male to as much as 35% for a dark skin-toned woman) (Buolamwini 2018, Lohr 2018). Why is this the case? Two widely used aggregate datasets employed to develop face recognition algorithms were overwhelmingly male and white. So the algorithms work exactly as they were trained to, that is, for a largely white, male world.
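To illustrate the mechanism, here is a minimal sketch on made-up synthetic data (not any real face-recognition system): a model fit to a training set dominated by one subgroup learns that subgroup's patterns, and its error rate on an underrepresented, differently distributed subgroup comes out several times higher.

```python
# Minimal sketch with synthetic data (not a real face-recognition system):
# a classifier trained mostly on one subgroup performs worse on another
# subgroup that is underrepresented and distributed differently.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two-class toy data; `shift` changes the group's true decision boundary."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Group A dominates the training set; group B is scarce and different.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(250, shift=1.5)
clf = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples: error is far higher for the minority group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(2000, shift)
    print(f"{name}: error rate = {1 - clf.score(Xt, yt):.1%}")
```

The classifier is not malfunctioning; it is faithfully reproducing the composition of its training data, which is exactly the problem the audit studies above document.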





AI strategies such as machine learning and natural language processing may also amplify gender, racial, and other stereotypes by generalizing associations found in the training datasets. For example, an algorithm may misidentify images of male nurses as female because the text annotations in its training datasets show nurses to be more commonly women (Bolukbasi et al. 2016). Bolukbasi and colleagues (2016), in their aptly titled work "Man is to Computer Programmer as Woman is to Homemaker?", set out to de-bias Google News texts by modifying word embeddings to remove gender stereotypes. Their work is a cautionary tale of the bias that is foundational to commonly available data.
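To see what "modifying word embeddings" means concretely, here is a hedged sketch of the projection step at the heart of this kind of debiasing. The 4-dimensional toy vectors are invented for illustration; Bolukbasi and colleagues worked with trained embeddings of Google News text and a gender direction derived from many definitional word pairs.

```python
# Hedged sketch in the spirit of Bolukbasi et al. (2016): remove the
# component of a word vector that lies along a gender direction.
# The toy 4-d vectors below are invented for illustration only.
import numpy as np

vec = {
    "he": np.array([1.0, 0.1, 0.3, 0.0]),
    "she": np.array([-1.0, 0.1, 0.3, 0.0]),
    "nurse": np.array([-0.6, 0.8, 0.1, 0.2]),      # toy: leans toward "she"
    "programmer": np.array([0.7, 0.2, 0.9, 0.1]),  # toy: leans toward "he"
}

# Gender direction: difference of a definitional pair, normalized.
g = vec["he"] - vec["she"]
g = g / np.linalg.norm(g)

def debias(w, g):
    """Project out the gender component: w' = w - (w . g) g."""
    return w - np.dot(w, g) * g

for word in ("nurse", "programmer"):
    before = np.dot(vec[word], g)
    after = np.dot(debias(vec[word], g), g)
    print(f"{word}: gender component {before:+.2f} -> {after:+.2f}")
```

After the projection, gender-neutral occupation words sit equidistant from "he" and "she" along that axis, which is the sense in which the stereotype association is "removed" from the representation.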









How doctors are typically represented

Image courtesy of  Max Pixel

I have found that performing a Google image search for the term "doctor" will turn up a page of images that largely reinforces our unconscious bias about what a doctor looks like, that is, male and white. The notorious 2016 case of Dr. Tamika Cross, the young, black, female physician barred from treating an ill passenger on a Delta flight, set off a social media backlash (#whatadoctorlookslike) to raise public awareness of the social harm of this form of stereotype bias.





Other adaptive cognitive heuristics or biases that help humans make quick judgements can also be translated into AI algorithms, particularly through machine learning (Doell and Siebert 2016). These include confirmation bias (e.g., human classification error in labeling the data in a training set) and priming (e.g., yellow items over-labeled as "banana" when bananas were over-represented in the training sample). Once embedded in widely distributed technology platforms, such as smartphones, law enforcement databases, transportation platforms, and medical diagnostic systems, algorithmic bias can have disastrous effects (Horvitz 2017; Bass and Huet 2017). The potential power and authority of "black box" automated AI systems could result in misidentification errors that support false criminal charges, in a self-driving car failing to correctly identify a human figure crossing the road, or in misdiagnoses and inappropriate health care. Further, the more complex the problem AI is targeted to address, such as guiding medical diagnoses, the more challenging it may be to discern embedded bias effects.





_______________














Dr. Meltzer, William P. Timmie Professor and Chair of Radiology and Imaging Sciences and Associate Dean for Research at Emory University School of Medicine, is a neuroradiologist and nuclear medicine physician whose translational research has focused on brain structure-function relationships in normal aging, dementia, and other late-life neuropsychiatric disorders. Her work in imaging technologies includes oversight of the clinical evaluation of the world's first combined PET/CT scanner. Dr. Meltzer has held numerous leadership roles in national professional societies and advisory boards, including the Advisory Council for the National Institute for Biomedical Imaging and Bioengineering, and has authored approximately 200 publications.











References:

McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. E. (1955, August). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf

Kinsella, B. (2017, November 5). Gartner Hype Cycle Suggests Another AI Winter Could Be Near. (accessed May 29, 2018)

Lohr, S. (2018, February 9). Facial Recognition Is Accurate, if You're a White Guy. New York Times.

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 81, 77-91.

Horvitz, E. (2017). AI, People, and Society. Science, 357(6346), 7.

Bass, D., & Huet, E. (2017, December 4). Researchers Combat Gender and Racial Bias in Artificial Intelligence. Bloomberg. (accessed May 29, 2018)

Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V., & Kalai, A. (2016). Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. Proceedings of the 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Doell, C., & Siebert, S. (2016). Evaluation of Cognitive Architectures Inspired by Cognitive Biases. Procedia Computer Science, 88, 155-162. 7th Annual International Conference on Biologically Inspired Cognitive Architectures (BICA 2016).







Want to cite this post?




Meltzer, C. (2018). Worrisome Implications of Lack of Diversity in Silicon Valley. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/08/worrisome-implications-of-lack-of.html

Tuesday, August 14, 2018

The Stem Cell Debate: Is it Over?




By Katherine Bassil








Image courtesy of Flickr

In 2006, Yamanaka revolutionized the use of stem cells in research by revealing that adult mature cells can be reprogrammed to their precursor pluripotent state (Takahashi & Yamanaka, 2006). A pluripotent stem cell is a cell characterized by the ability to differentiate into each and every cell of our body (Gage, 2000). This discovery not only opened up new doors to regenerative and personalized medicine (Chun, Byun, & Lee, 2011; Hirschi, Li, & Roy, 2014), but it also overcame the numerous controversies that accompanied the use of embryonic stem (ES) cells for research purposes. For instance, one of the controversies raised by the public and scholars was that human life, at every stage of development, has dignity and as such requires rights and protections (Marwick, 2001). Thus, the use of biological material from embryos violates these rights, and the research findings gathered from this practice do not overrule basic human dignity. With a decline in the use of ES cells in research, the use of induced pluripotent stem (iPS) cells opened up avenues for developing both two- and three-dimensional (2D and 3D, respectively) cultures that model human tissues and organs for both fundamental and translational research (Huch & Koo, 2015). While the developments in this field are still in an early phase, they are expected to grow significantly in the near future, thereby triggering a series of ethical questions of their own.




Organoids of the liver, kidney, intestine, thyroid and other organs are currently grown in vitro in 3D cultures; they are characterized by a less complex architecture and physiology than human adult organs, but are more complex than 2D cultures (Lavazza & Massimini, 2018). Brain organoids, however, seem to carry the most controversies and ethical issues (Qian, Nguyen, Jacob, Song, & Ming, 2017): first, because of their increasingly complex nature compared to 2D cultures, and second, because we as humans identify ourselves most closely with our brains and attribute all of our actions, thoughts and behavior to them. iPS-derived neurons and brain organoids are now increasingly being used in laboratories across the world and are constantly proving to outperform previous models, such as post-mortem tissue, which only captures the disease at its end stages; biopsies, which are invasive (Marchetto, Brennand, Boyer, & Gage, 2011); or animal models, which can translate poorly to the bedside (Denayer, Stöhr, & Van Roy, 2014).








An inner ear organoid

Image courtesy of Flickr

Studies of neurological disorders like microcephaly (Lancaster et al., 2013), neurodegenerative diseases like Alzheimer's disease (Tong, Izquierdo, & Raashid, 2017), and psychiatric disorders like major depressive disorder (Licinio & Wong, 2016) typically differentiate adult mature cells into iPS cells and then into (several types of) neurons, with the aim of developing brain organoids to model both biological and behavioral diseases of the brain (Soliman, Aboharb, Zeltner, & Studer, 2017) in a controlled environment. Gaining further understanding of underlying disease mechanisms, screening for drugs, and potentially proposing meaningful therapies are all ongoing practices in neuroscientific research with iPS-derived neurons (Marchetto & Gage, 2012). This technology is unique in its ability to infer disease mechanisms specific to an individual before, during and even after onset of the disease, hence capturing critical developmental stages of a given disorder (Marchetto et al., 2011). To some, stem cell technology is the future of medicine.





While the model itself carries several technical challenges, including reductionist tendencies, questions about its validity, variability between cell lines, heterogeneity of the cell population, underlying genetic abnormalities, and more (Marchetto & Gage, 2012), the points discussed here will cover the unspoken ethics. Independent of the ethical challenges raised by the use of ES cells in research, using iPS cells to create brain organoids holds its own ethical issues that necessitate further discussion.





Sentience





Ethicists are often criticized for engaging in science-fiction prototyping (Baron, Halvorsen, & Cornea, 2017), that is, using a fictional story to investigate the implications of future technologies in relation to public opinion. This strategy is often used to involve the public in setting up policies in research and other sectors. In the brain organoid discussion, this entails "consciousness in a dish." But, before taking a leap into consciousness (of which we have neither a single clear definition nor an objective way of measuring it) (Moses, 2018), let us ask whether brain organoids can be (or will one day be) considered sentient entities. Recently, advances in technology have allowed scientists to generate cerebral and neural tissue organoids with highly specialized nociceptive neurons (Boisvert et al., 2015), usually responsible for the sensation of pain; sensory interneurons (Gupta et al., 2018), involved in the relay of information; and networks of living human neurons with mature firing patterns (Camp & Treutlein, 2017). Additionally, scientists have succeeded in growing brain organoids with a maturity comparable to that of a 5-week-old fetus (Watanabe et al., 2017). Such practices have raised ethical concerns from both biological (Greely, Cho, Hogle, & Satz, 2007) and non-biological perspectives (Ashrafian, 2017), given the increasing probability of giving rise to neuronal entities with potentially "conscious-like" states.





That brings us to the question: what are the ethical implications if these neurons are able to feel pain or even sense their environment? In practice, the networks formed are considered functional when cellular activity can be measured using molecular and electrophysiological techniques (Goparaju et al., 2017). In most assays, researchers treat the cells with drugs, electric stimulation and other stimuli, expecting the networks to respond with morphological and cellular changes, which is often the case (Soliman et al., 2017). However, the challenge lies in drawing the line between what is functional and what is sentient, that is, capable of integrating information received from the environment and of having a subjective experience of any sort. In order to address this challenge, first a clearer definition of sentience in relation to brain organoids needs to be set. Second, current techniques for measuring activity in brain organoids need either to be optimized for greater sensitivity or to be replaced by new technologies for objective measurements of sentience (Lavazza & Massimini, 2018).





Chimerism








The Chimera from Greek mythology

Image courtesy of Wikimedia Commons 

The discussion even covers the notion of chimeras, hybrids of human and animal tissue (National Research Council, 2010). To boost the growth of brain organoids in a way that most resembles in-vivo conditions, scientists are now seeding laboratory animal scaffolds with human brain organoids. For instance, human brain organoids are transplanted into a mouse brain, where the grafted tissue integrates and makes use of the animal's vasculature and circulatory system to grow and develop (Choi et al., 2017). Scientists have thereby overcome a practical challenge by growing the organoid in a more "realistic" fashion (Marchetto & Gage, 2012), but have the moral concerns about chimeras been considered? Some bioethicists have raised concerns about this practice, questioning whether animals grafted with human brain tissue are capable of greater intelligence than control animals. A recent study led by the world-renowned neuroscientist Fred H. Gage (Mansour et al., 2018) found no support for those speculations but did not rule them out entirely. As research makes use of increasingly complex human neuronal models, it is clearer than ever that changes in animal abilities, sentience and even species identity need to be examined to answer this and several related questions. Additionally, the successful integration of human tissue with rodent tissue brings us back to the concept of sentience: has the organoid gained increased sentience now that it is part of a fully functional and viable organism, fully responsive to its environment?





Moving forward





This breakthrough technology surely favors great advancements in biology by allowing what once seemed impossible and challenging to become a turning point in biomedical research. Hence, it is a fallacy to minimize the importance of brain organoids to the progression of neuroscientific research, especially because the prevalence of mental health disorders is on the rise worldwide. Nevertheless, the stem cell debate is unfortunately not over, and it is becoming clearer that the ethical discussions should catch up with the rapid advances in scientific discoveries and applications. A brain organoid with human consciousness is perhaps indeed science fiction, but the ethical assumptions appropriate to a "conscious-like" entity differ greatly from those appropriate to a simple assembly of brain tissue, and in this case a distinction between the two is necessary.





Researchers will need to be aware of and concerned about the ethical issues just as much as the practical challenges, which might change the way brain organoids are manipulated at the bench, or even how they are destroyed in the future. The handling of brain organoids bearing moral standing will have to be approached more cautiously, which calls for a set of guidelines for their use in research. Indeed, the day the level of complexity of these biological entities is established is the day limitations on their use in research will be demanded. Researchers will need to collaborate with ethicists to set those necessary boundaries, such as defining a developmental threshold beyond which research using these entities becomes subject to stricter oversight. This might be extended further to forming policies that define their moral and legal status, especially in cases involving living donors (Truog, 2005). This work should not be perceived as a hindrance to scientific progress but as an example of responsible scientific practice, not only in the eyes of the scientific community but also in society. Stem cell research was indeed revolutionized 12 years ago, and that is exactly why scientists and ethicists must participate in ensuring this field keeps on thriving, both effectively and responsibly.





_______________
















I am soon to be a Master's graduate in fundamental neuroscience at Maastricht University in the Netherlands. I have developed a strong interest in the ethics of neuroscience (or neuroethics) throughout my studies and have attempted to integrate it into my work where possible. I hope that one day I'll be able to bridge the fields of neuroscience and neuroethics and hopefully inspire others to see the importance of such an effort.













References

Ashrafian, H. (2017). Can artificial intelligences suffer from mental illness? A philosophical matter to consider. Science and Engineering Ethics, 23(2), 403-412.

Baron, C., Halvorsen, P. N., & Cornea, C. (2017). Science Fiction, Ethics and the Human Condition. Springer.

Boisvert, E. M., Engle, S. J., Hallowell, S. E., Liu, P., Wang, Z.-W., & Li, X.-J. (2015). The specification and maturation of nociceptive neurons from human embryonic stem cells. Scientific Reports, 5, 16821.

Camp, J. G., & Treutlein, B. (2017). Human development: Advances in mini-brain technology. Nature, 545(7652), 39.

Choi, H. W., Hong, Y. J., Kim, J. S., Song, H., Cho, S. G., Bae, H., . . . Do, J. T. (2017). In vivo differentiation of induced pluripotent stem cells into neural stem cells by chimera formation. PLoS One, 12(1), e0170735.

Chun, Y. S., Byun, K., & Lee, B. (2011). Induced pluripotent stem cells and personalized medicine: current progress and future perspectives. Anatomy & Cell Biology, 44(4), 245-255.

National Research Council. (2010). Guide for the Care and Use of Laboratory Animals. National Academies Press.

Denayer, T., Stöhr, T., & Van Roy, M. (2014). Animal models in translational medicine: Validation and prediction. New Horizons in Translational Medicine, 2(1), 5-11.

Gage, F. H. (2000). Mammalian neural stem cells. Science, 287(5457), 1433-1438.

Goparaju, S. K., Kohda, K., Ibata, K., Soma, A., Nakatake, Y., Akiyama, T., . . . Kimura, H. (2017). Rapid differentiation of human pluripotent stem cells into functional neurons by mRNAs encoding transcription factors. Scientific Reports, 7, 42367.

Greely, H. T., Cho, M. K., Hogle, L. F., & Satz, D. M. (2007). Thinking about the human neuron mouse. The American Journal of Bioethics, 7(5), 27-40.

Gupta, S., Sivalingam, D., Hain, S., Makkar, C., Sosa, E., Clark, A., & Butler, S. J. (2018). Deriving dorsal spinal sensory interneurons from human pluripotent stem cells. Stem Cell Reports.

Hirschi, K. K., Li, S., & Roy, K. (2014). Induced pluripotent stem cells for regenerative medicine. Annual Review of Biomedical Engineering, 16, 277-294.

Huch, M., & Koo, B.-K. (2015). Modeling mouse and human development using organoid cultures. Development, 142(18), 3113-3125.

Lancaster, M. A., Renner, M., Martin, C.-A., Wenzel, D., Bicknell, L. S., Hurles, M. E., . . . Knoblich, J. A. (2013). Cerebral organoids model human brain development and microcephaly. Nature, 501(7467), 373.

Lavazza, A., & Massimini, M. (2018). Cerebral organoids: ethical issues and consciousness assessment. Journal of Medical Ethics, medethics-2017-104555.

Licinio, J., & Wong, M. (2016). Serotonergic neurons derived from induced pluripotent stem cells (iPSCs): a new pathway for research on the biology and pharmacology of major depression. Molecular Psychiatry.

Mansour, A. A., Gonçalves, J. T., Bloyd, C. W., Li, H., Fernandes, S., Quang, D., . . . Gage, F. H. (2018). An in vivo model of functional and vascularized human brain organoids. Nature Biotechnology, 36(5), 432.

Marchetto, M. C., Brennand, K. J., Boyer, L. F., & Gage, F. H. (2011). Induced pluripotent stem cells (iPSCs) and neurological disease modeling: progress and promises. Human Molecular Genetics, 20(R2), R109-R115.

Marchetto, M. C., & Gage, F. H. (2012). Modeling brain disease in a dish: really? Cell Stem Cell, 10(6), 642-645.

Marwick, C. (2001). Embryonic stem cell debate brings politics, ethics to the bench. Journal of the National Cancer Institute, 93(16), 1192-1193.

Qian, X., Nguyen, H. N., Jacob, F., Song, H., & Ming, G.-l. (2017). Using brain organoids to understand Zika virus-induced microcephaly. Development, 144(6), 952-957.

Soliman, M., Aboharb, F., Zeltner, N., & Studer, L. (2017). Pluripotent stem cells in neuropsychiatric disorders. Molecular Psychiatry, 22(9), 1241.

Takahashi, K., & Yamanaka, S. (2006). Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell, 126(4), 663-676.

Tong, G., Izquierdo, P., & Raashid, R. A. (2017). Human induced pluripotent stem cells and the modelling of Alzheimer's disease: The human brain outside the dish. The Open Neurology Journal, 11, 27.

Truog, R. D. (2005). The ethics of organ donation by living donors. New England Journal of Medicine, 353(5), 444-446.

Watanabe, M., Buth, J. E., Vishlaghi, N., de la Torre-Ubieta, L., Taxidis, J., Khakh, B. S., . . . Gong, D. (2017). Self-organized cerebral organoids with human-specific features predict effective drugs to combat Zika virus infection. Cell Reports, 21(2), 517-532.

Moses, T. (2018). Practical and Ethical Considerations in Consciousness Restoration. The Neuroethics Blog. Retrieved on March 28, 2018, from http://www.theneuroethicsblog.com/2018/03/practical-and-ethical-considerations-in.html








Want to cite this post?




Bassil, K. (2018). The Stem Cell Debate: Is it Over? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/08/the-stem-cell-debate-is-it-over.html

Tuesday, August 7, 2018

Is the concept of “will” useful in explaining addictive behaviour?




By Claudia Barned and Eric Racine








Image courtesy of Flickr

The effects of substance use and misuse have been key topics of discussion given their impact on healthcare costs, public safety, crime, and productivity (Gowing et al., 2015). The alarming global prevalence rates of substance use disorder and of subthreshold "issues" associated with alcohol and other drugs have also been a cause for concern. For example, in the United States, with a population of over 318 million people (Statista, 2018), 21.5 million people were classified with a substance use disorder in 2014: 2.6 million had issues with both alcohol and drugs, 4.5 million with drugs but not alcohol, and 14.4 million with alcohol only (SAMHSA, 2018). Similarly, in Canada, with a population of over 35 million people (Statistics Canada, 2018), a total of 6 million met the criteria for substance use disorders in 2013, with the highest rates among youth aged 18-24 (Statistics Canada, 2013). Concerns about addiction are particularly evident in widespread media alarm about the current fentanyl crisis affecting the U.S., Canada, Australia and the U.K., and the climbing rates of fentanyl-related deaths globally (NIDA, 2017; UNODC, 2017).




In ethics theory and practice, the capacity to act freely and choose alternative courses of action is of chief importance for moral responsibility. For people with an addiction, the capacity to act freely is sometimes hindered by associated compulsions or impulses. Indeed, substance use disorder (or addiction) refers to a condition characterized by a problematic relationship with a substance such that a person's choices are biased toward the use of that substance even though it may be contrary to their deeper wishes. Accordingly, several authors have described addiction as a disorder of choice or will (Heyman, 2009; Levy, 2013; Volkow, 2015; Wallace, 1999), which is an especially useful framing in the field of ethics. Some key players in the field have claimed that persons with an addiction, to some degree, lack the will to resist their drug of addiction (Charland, 2002; Charland, 2003; Hyman, 2007; Volkow, 2015). But what does such an assertion mean? Does it imply that the person has no choice? Or does it mean that the person does not have sufficient willpower to enact the desired behaviour? In this post, we explore the pluralism of concepts used in discussions about volition in addiction (see Figure 1). As an example, we break down our understanding of the term 'will' and discuss the relationship between free will and willpower.







Figure 1: Semantic pluralism in describing impairments of voluntary decision and action in addiction (figure generated with worditout.com)



Free will or Willpower? 





The term “will” has important colloquial use. It is used when referring to freedom of the will, that is, the ability to make choices or act without restraint. It is also used when describing one’s capacity (or lack thereof) to carry out a specific task. For example, “I didn’t have the will to resist”. Other examples of the use of will include “where there’s a will, there’s a way”, and “I willed it to happen”. Interestingly, discussions on addiction are sometimes situated within the context of an individual’s will. For example, addiction has been referred to as a “disease of the will” (Volkow, 2015), a “defect of the will” (Wallace, 1999), among other phrases related to the shortcomings or deficiencies of the will. In these instances, free will is the referent, but who is to say that a disease or defect of one’s willpower doesn’t apply as well? Given the possible overlap between the two concepts, it is sometimes unclear which concept is being referred to in certain contexts. In fact, the use of “will” within the context of addiction can mean: 1) people with an addiction do not have the option to freely choose an alternative (free will) or 2) people with an addiction do not have the capacity to resist their drug of addiction (willpower). In the first scenario, will is situated within the context of choice; in the second, it is situated within the context of capability to act. It is possible that the two aspects are related (Racine, 2017) although it is useful to distinguish them here.




Compulsion, Free Will and Willpower




Compulsion is a core component of the disease view of addiction (Heather, 2017). To understand the role of compulsion, one must also understand how the concepts of free will and willpower are involved. These three concepts each explain a piece of the addiction puzzle as they examine unique components of the voluntary/involuntary aspects of decision-making. In the disease view of addiction, it is the involuntary nature of one’s behaviour, i.e., the inability to choose an alternative course of action, that differentiates addictive behaviour from others. As Heather (2017) argues, “to say an addict’s behaviour is compulsive is to say, in respect of their addiction, that they are not free to behave other than they do; they have no choice in the matter or, at least, their ability to choose is severely constrained by the effects of their disease of addiction” (p.15).







Image courtesy of Flickr

Within particular research paradigms, addiction has been explicitly classified as a "disease of free will" (Volkow, 2015), not just of will, particularly because it deprives individuals of volition, rationality and the ability to make autonomous choices (Karasaki, Fraser, Moore & Dietze, 2013). Those in support of this view argue that the ability to make free and informed decisions is compromised among those with an addiction; as a result, any "voluntary" decisions made should be challenged, especially in cases where individuals are consenting to research involving their drug of addiction (Charland, 2003). Such a view, however, is not without criticism. Several scholars have noted the degree of difficulty involved in breaking addictive habits; however, they argue it can be and has been done, even without formal treatment programs (Keane, 2002; Peele, 2016). Peele (2016) argues that there are successful cases where people overcome the need to take part in addictive behaviours by means of self-cure and moderation, but that these cases are rarely noted in the neuroscience addiction literature. Rather, a common lay sentiment toward drug users, which is reproduced in the neuroscience literature, is that they have no real choice. However, scholars critical of this view (Foddy & Savulescu, 2006) have shown that if we are indeed unable to control our urges and are in fact driven by compulsions, then people cannot be held responsible for their addiction (and their recovery), which is problematic. Categorizing addiction as a disease can shift blame from the person to forces outside their control. If this is the case, and addiction is the outcome of forces beyond one's control, then people with an addiction are simply victims, as their behaviour cannot be altered by free will or willpower. Such an inference has critical implications for health ethics and society.




Pluralism: Alternate Terminology and Concepts




Beyond free will and willpower, many other concepts are used to explain the failure of volition in addiction. In the addiction literature, concepts such as autonomy, self-regulation, self-control and compulsion are commonly investigated; each seems to capture part of what is at stake. However, subtle differences in underlying philosophy could potentially affect treatment practices and research directions. For example, treatments targeting a lack of willpower could require a different management approach than treatments focused on restoring autonomy. Similarly, research on the attribution of free will may draw on different investigative approaches than research examining lack of self-control. In future work, we hope to explore how these concepts are used and operationalized in addiction research, and the kinds of studies facilitated by each conceptual tradition. We expect that the diversity of concepts partly reflects subtle nuances in the way volition is affected by substance use disorders. It also represents a challenge for more integrative views of volition, since exchanges can be hampered by distinct research paradigms evolving in tandem but talking at cross-purposes.





Acknowledgments: The writing of this post was made possible by a grant from the Social Sciences and Humanities Research Council of Canada. Thanks to John Aspler, Ariel Cascio, and Jelena Poleksic, who provided feedback on a previous version of this blog post.




_______________




Claudia Barned, PhD is a postdoctoral research fellow in the Neuroethics Research Unit at the Institut de recherches cliniques de Montréal (IRCM) and is affiliated with the Department of Neurology and Neurosurgery at McGill University. At IRCM, Claudia works on research exploring the voluntary aspects of decision-making in the context of drug addiction. Her prior research explored the social, legal and ethical implications of involving children with inflammatory bowel disease in biomedical research.







Eric Racine, PhD, is a full research professor and Director of the Neuroethics Research Unit at the Institut de recherches cliniques de Montréal (IRCM), with cross-appointments at Université de Montréal and McGill University. He is a leading researcher in neuroethics and the co-editor, with John Aspler, of Debates About Neuroethics: Perspectives on Its Development, Focus, and Future. Inspired by philosophical pragmatism, his research aims to understand and bring to the forefront the experience of ethically problematic situations by patients and stakeholders and then to resolve them collaboratively through deliberative and evidence-informed processes.








References

Charland, L. (2002). Cynthia's dilemma: consenting to heroin prescription. American Journal of Bioethics, 2(2), 37-47.

Charland, L. (2003). Heroin addicts and consent to heroin therapy: a comment on Hall et al. (2003). Addiction, 98(11), 1634-1635.

Foddy, B., & Savulescu, J. (2006). Addiction and autonomy: can addicted people consent to the prescription of their drug of addiction? Bioethics, 20(1), 1-15.

Gowing, L. R., Ali, R. L., Allsop, S., Marsden, J., Turf, E. E., West, R., & Witton, J. (2015). Global statistics on addictive behaviours: 2014 status report. Addiction, 110(6), 904-919.

Heather, N. (2017). Is the concept of compulsion useful in the explanation or description of addictive behaviour and experience? Addictive Behaviors Reports, 6, 15-38.

Heyman, G. M. (2009). Addiction: A Disorder of Choice. Harvard University Press.

Hyman, S. (2007). The neurobiology of addiction: implications for voluntary control of behavior. American Journal of Bioethics, 7(1), 8-11.

Karasaki, M., Fraser, S., Moore, D., & Dietze, P. (2013). The place of volition in addiction: Differing approaches and their implications for policy and service provision. Drug and Alcohol Review, 32(2), 195-204.

Keane, H. (2002). What's Wrong with Addiction? Melbourne University Publishing.

Levy, N. (Ed.). (2013). Addiction and Self-Control. New York, NY: Oxford University Press.

National Institute on Drug Abuse (NIDA) (2017). Overdose Death Rates. Retrieved from: https://www.drugabuse.gov/related-topics/trends-statistics/overdose-death-rates

Peele, S. (2016). People control their addictions: no matter how much the "chronic" brain disease model of addiction indicates otherwise, we know that people can quit addictions – with special reference to harm reduction and mindfulness. Addictive Behaviors Reports, 4, 97-101.

Racine, E. (2017). A proposal for a scientifically-informed and instrumentalist account of free will and voluntary action. Frontiers in Psychology, 8, 754.

Statista (2018). Total population in the United States from 2012 to 2022 (in millions). Retrieved from: https://www.statista.com/statistics/263762/total-population-of-the-united-states/

Statistics Canada (2013). Mental and Substance Use Disorders in Canada. Retrieved from: https://www.statcan.gc.ca/pub/82-624-x/2013001/article/11855-eng.pdf

Statistics Canada (2018). Population by year, by province and territory. Retrieved from: http://statcan.gc.ca/tables-tableaux/sum-som/l01/cst01/demo02a-eng.htm

Substance Abuse and Mental Health Services Administration (SAMHSA) (2018). Mental and Substance Use Disorders. Retrieved from: https://www.samhsa.gov/disorders

United Nations Office on Drugs and Crime (UNODC) (2017, March). Global SMART Update, Volume 17: Fentanyl and its analogues – 50 years on. Retrieved from: https://www.unodc.org/documents/scientific/Global_SMART_Update_17_web.pdf

Volkow, N. (2015). Addiction is a disease of free will. National Institute on Drug Abuse Blog.

Wallace, J. (1999). Addiction as defect of the will: some philosophical reflections. Law and Philosophy, 18(6), 621-654.









Want to cite this post?




Barned, C. and Racine, E. (2018). Is the concept of “will” useful in explaining addictive behaviour? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/08/is-concept-of-will-useful-in-explaining.html