
Tuesday, February 27, 2018

The Ethical Design of Intelligent Robots




By Sunidhi Ramesh







The main dome of the Massachusetts Institute of Technology (MIT). (Image courtesy of Wikimedia.)

On the morning of February 1, 2018, MIT President L. Rafael Reif sent an email addressed to the entire Institute community. In it was an announcement introducing the world to a new era of innovation—the MIT Intelligence Quest, or MIT IQ.





Formulated to “advance the science and engineering of both human and machine intelligence,” the project aims “to discover the foundations of human intelligence and drive the development of technological tools that can positively influence virtually every aspect of society.” The kicker? MIT IQ not only exists to develop these futuristic technologies, but it also seeks to “investigate the social and ethical implications of advanced analytical and predictive tools.”





In other words, one of the most famous and highly ranked universities in the world has dedicated itself to preemptively considering the consequences of the future of technology while simultaneously developing that same technology in hopes of making a “better world.”






But what could these consequences be? Are there already tangible costs incurred from our current advances in robotics and artificial intelligence (AI)? What can we learn from the mistakes we make today to build a more just, whole, and objective tomorrow?





These questions are similar to the ones posed by Dr. Ayanna Howard at the inaugural The Future Now NEEDs... (Neurotechnologies and Emerging Ethical Dilemmas) talk on January 29 at Emory University. Speaking on the ethical design of intelligent robots, Dr. Howard presented a series of lessons and considerations concerning modern-day robotics, many of which will guide the remainder of this post.





But, before I discuss the ethics hidden between the lines of robotic design, I’d like to pose a fundamental question about the nature of human-robot interactions: do humans trust robots? And I’m not talking about whether or not humans would say they do; I’m asking about trust based on behavior. Do we, today, trust robots so much that we would turn to them to guide us out of high-risk situations?








A line of prototype robots developed by Honda. (Image courtesy of Wikimedia.)

You’re probably shaking your head no. But Dr. Howard’s research suggests otherwise.





In a 2016 study (1), a team of Georgia Tech scholars formulated a simulation in which 26 volunteers interacted “with a robot in a non-emergency task to experience its behavior and then [chose] whether [or not] to follow the robot’s instructions in an emergency.” To the researchers’ surprise (and unease), in this “emergency” situation (complete with artificial smoke and fire alarms), “all [of the] participants followed the robot in the emergency, despite half observing the same robot perform poorly [making errors by spinning, etc.] in a navigation guidance task just minutes before… even when the robot pointed to a dark room with no discernible exit, the majority of people did not choose to safely exit the way they entered.” It seems that we not only trust robots, but we also do so almost blindly.





The investigators labeled this tendency an alarming display of overtrust of robots—an overtrust that persisted even toward robots that had shown clear signs of being untrustworthy.





Not convinced? Let’s consider the recent Tesla self-driving car crashes. How, you may ask, could a self-driving car barrel into parked vehicles when the driver is still able to override the autopilot machinery and manually stop the vehicle in seemingly dangerous situations? Yet, these accidents have happened. Numerous times.





The answer may, again, lie in overtrust. “My Tesla knows when to stop,” such a driver may think. Yet, as the car lurches uncomfortably into a position that would push the rest of us to slam on our brakes, a driver in a self-driving car (and an unknowing victim of this overtrust) still has faith in the technology.





“My Tesla knows when to stop.” Until it doesn’t. And it’s too late.







What will a future of human-robot interaction look like? (Image courtesy of Wikimedia.)

Now, don’t get me wrong. Trust is good. It is something that we rely on every single day (2); in fact, it is a critical component of our modern society. And, in an increasingly probable future of commonplace human-robot interaction, trust will undeniably play an increasingly significant role. Not many will disagree that we should be able to trust our robots if we are to interact with them positively.





But what are the dangers of overtrust? If we already trust robots this much today, how will this trust evolve as robots become more versatile? More universal? More human? Is there a potential for abuse here? The answer, Dr. Howard warns, is an outright yes.





Within this discussion about trust lies another, more subtle line of questioning—one about bias.





Robots, engineered by the human mind, will inherently carry human biases. Even the best programmer with the best intentions will, unintentionally, produce technology that is partial to his/her own experiences. So, why is this a problem?





Consider the Google algorithm that made headlines in mid-2015 for “showing prestigious job ads to men but not to women.” Or the Flickr image recognition tool that tagged black users as “gorillas” or “animals.” Or the 2013 Harvard study that found that “first names, previously identified as being assigned at birth to more black than white babies… generated [Google] ads suggestive of an arrest.”



This problem is neither new nor unique; ads and algorithms programmed by humans (intentionally or unintentionally) inherit the sexist and racist tendencies carried by those humans.








(Image courtesy of Flickr.)

And there’s more. North Dakota’s police drones have been legally armed with weapons such as “tear gas, rubber bullets, beanbags, pepper spray, and tasers” for over two years. How do we know that the software used in these systems is trustworthy? That it has been rigorously monitored and tested for bias? Whose value systems are being encoded here? And do we trust them enough to trust the robots involved?





As our world continues to tumble forward into a future intricately immersed in technology, these questions must be addressed. Robotics development teams should include members with an extensive diversity of thought, spanning economic, gender, ethnic, and even “tech” (referring to diversity in technical training) lines to mitigate biases that may negatively impact the robots’, well, intelligence. (Granted, bias is inherent to humanity, so there is a danger in thinking that we could ever objectively produce robots that are entirely unbiased. Still, it is a step in the right direction to at least recognize that bias may present itself as a problem and to proactively attempt to avoid blatant manifestations of it.)





To answer the question of how to reduce bias in the future, Dr. Howard ventured so far as to suggest a sort of criminal court for robots—one that would rigorously test our robots before they are put in the hands of the real world. Within it would be a “law system” that evaluates hundreds of thousands of inputs and their associated outputs in an attempt to catch coding errors long before they have the potential to impact society on a larger scale.
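To make the idea a little more concrete, here is a minimal, purely illustrative sketch of what such automated pre-deployment auditing could look like in code. The navigation_policy function, the test grid, and the safety rules below are all hypothetical placeholders (none of them come from Dr. Howard's talk); the point is only to show the general pattern of running a robot's decision logic over very many input-output pairs and checking each result against explicit, human-written rules before the system is released.

```python
import itertools

# Hypothetical robot decision function: given a sensed situation, return an action.
# In a real system this would wrap the actual navigation/guidance software.
def navigation_policy(smoke_level: float, exit_visible: bool, battery: float) -> str:
    if smoke_level > 0.7 and not exit_visible:
        return "hold_position"          # placeholder behavior
    if exit_visible:
        return "guide_to_exit"
    return "search_for_exit"

# Explicit, human-written rules the policy must satisfy (the "law system" in this sketch).
SAFETY_RULES = [
    ("never guide toward an exit that is not visible",
     lambda inp, out: not (out == "guide_to_exit" and not inp["exit_visible"])),
    ("never hold position when smoke is low",
     lambda inp, out: not (out == "hold_position" and inp["smoke_level"] < 0.3)),
]

def audit(policy, inputs):
    """Run the policy over many inputs and report every rule violation."""
    violations = []
    for inp in inputs:
        out = policy(inp["smoke_level"], inp["exit_visible"], inp["battery"])
        for name, rule in SAFETY_RULES:
            if not rule(inp, out):
                violations.append((name, inp, out))
    return violations

# Enumerate a grid of test situations (hundreds of thousands in a real audit).
grid = [
    {"smoke_level": s / 10, "exit_visible": v, "battery": b / 10}
    for s, v, b in itertools.product(range(11), [True, False], range(11))
]

for name, inp, out in audit(navigation_policy, grid):
    print(f"VIOLATION [{name}]: input={inp} -> output={out}")
```

In this toy version the audit prints nothing because the placeholder policy happens to satisfy both rules; in practice, the value of such a "court" would come from the violations it surfaces before deployment.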





So, in a lot of ways, we are in the midst of a golden era. We can still ask these questions in the hope of presenting them to the world to answer; technology can be molded to be what we want it to be. And, as time goes on, robotics and AI will together become an irrefutable aspect of the future of the human condition. Of human identity.





What better time to question the social consequences of robotic programming than now?





Maybe MIT is up to something big after all.







References





1. Robinette, Paul, et al. "Overtrust of robots in emergency evacuation scenarios." 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 2016.





2. Zak, Paul J., Robert Kurzban, and William T. Matzner. "The neurobiology of trust." Annals of the New York Academy of Sciences 1032.1 (2004): 224-227.






Want to cite this post?



Ramesh, Sunidhi. (2018). The Ethical Design of Intelligent Robots. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/02/the-ethical-design-of-intelligent-robots.html

Tuesday, February 20, 2018

One Track Moral Enhancement




By Nada Gligorov







Nada Gligorov is an associate professor in the Bioethics Program of the Icahn School of Medicine at Mount Sinai. She is also faculty for the Clarkson University-Icahn School of Medicine Bioethics Master’s Program. The primary focus of Nada’s scholarly work is the examination of the interaction between commonsense and scientific theories. Most recently, she authored a monograph titled Neuroethics and the Scientific Revision of Common Sense (Studies in Brain and Mind, Springer). In 2014, Nada founded the Working Papers in Ethics and Moral Psychology speaker series, a working group where speakers are invited to present well-developed, as yet unpublished work.





Within the debate on neuroenhancement, cognitive and moral enhancements have been discussed as two different kinds of improvements achievable by different biomedical means. Pharmacological means that improve memory, attention, decision-making, or wakefulness have been accorded the status of “cognitive enhancers,” while attempts to improve empathy or diminish aggression have been categorized as “moral enhancements.” According to Ingmar Persson and Julian Savulescu (2008; 2012), cognitive enhancement could outstrip our natural abilities to improve commonsense morality. The view of commonsense morality as static motivates Persson and Savulescu (2008) to establish two distinct tracks of enhancement and to argue that cognitive enhancement needs to be coupled with moral enhancement to prevent the negative impact of rapid scientific progress that might be precipitated by the use of cognitive enhancers. To argue that cognitive enhancement might lead to improvements both in science and in commonsense morality, I will propose that commonsense morality is a folk theory with features similar to a scientific theory.



Persson and Savulescu describe commonsense morality in the following manner:




“By ‘common-sense morality’ we mean a set of moral attitudes that is a common denominator of the diversely specified moralities of human societies over the world. We take it that the explanation of why there is a set of moral attitudes that is a common feature of culturally diverse moralities is that it has its origin in our evolutionary history” (Persson and Savulescu 2012, 12).






To redefine commonsense morality as a folk theory, I will utilize some established views about commonsense psychology and then apply them to commonsense morality. Commonsense psychology can be understood as an empirically evaluable folk-psychological theory that seeks to predict and explain human behavior by attributing psychological states, such as beliefs, desires, and sensations, to individuals (Sellars 1977; Churchland 1992). Commonsense psychology is a folk psychology (FP), with all the features of a scientific psychology. Just like a scientific theory, FP introduces unseen entities or processes to explain and predict observable phenomena. For example, FP introduces psychological states, which are not directly observable, to account for overt behavior. Additionally, similar to a scientific theory, FP explains and predicts human behavior by specifying law-like relations between psychological states, external stimuli, and overt behavior (Churchland 1992).






Image courtesy of Pixabay.

For example: Alex believes that the dog will stop barking if she gives him a treat, so she reaches for a treat and places it in front of the dog. Here Alex perceives a bothersome auditory stimulus, which causes her to believe that the dog will be assuaged by the treat; this explains why she reaches for the treat and puts it in front of the barking dog. In everyday life, we often use similar psychological explanations to describe the behavior of those around us, and we can draw the boundaries of our current folk psychology by collecting commonly used and universally accepted psychological statements that feature psychological concepts, such as ‘belief,’ ‘desire,’ and ‘sensation’ (Lewis 1972; Stich 1996). This collection contains both the law-like generalizations of FP and the definitions of folk-psychological concepts.



A consequence of this characterization of commonsense psychology is that our shared dispositions to predict and explain overt behavior by ascribing psychological states to others are the outcome of adopting folk psychology. Given that both our observations of people’s behavior and our habit of attributing psychological states are theory-laden, to change them, we need changes in our background theory. In fact, proponents of the view that commonsense psychology is an empirically evaluable theory often argue that it is false, and they make the prediction that FP will eventually be replaced by a neuroscientific theory that does not utilize psychological states at all (Churchland 1992). This change would affect how we observe and describe human behavior; instead of attributing beliefs and desires to people, we would explain their behavior as being caused by neurological processes.






There are a number of ways in which I see this view of commonsense psychology applying to commonsense morality. It is possible to characterize commonsense morality as a tacitly endorsed theory, i.e., a folk morality (FM). Persson and Savulescu offer a way of demarcating commonsense morality as a common feature of culturally diverse moralities, which is similar to the method of identifying the boundaries of FP by collecting commonly known psychological explanations. To identify FM, we would collect generally accepted moral statements that feature moral concepts. To circumscribe the concept of justice, for example, we would identify commonly accepted generalizations that feature the term ‘justice,’ say to describe an individual’s behavior as just, or to describe a punishment as just, or to categorize a certain allocation of resources as just, and so forth. This collection would yield the folk theory of justice, which forms the basis for the concept of justice we use in everyday life. The scope of other folk moral concepts, such as the notions of rights or moral responsibility, could be identified in similar ways.






Image courtesy of Pixabay.

By adopting a folk concept of justice, we become able to make judgments about whether something falls under that concept. We become able to observe certain actions or even certain individuals as conforming to the folk concept of justice. For example, we interpret a person giving money to a homeless individual as just. We perceive certain events, such as an older lady being mugged, as unjust. Familiarity with the concept of justice supports our ability to appraise a situation and to feel appropriate emotions. For example, thinking that you are witnessing a theft, a young man snatching an elderly woman’s bag, will provoke anger; but realizing that the young man was only taking back what was his from the old lady, who stole his bag hours earlier, will change anger to a more positive emotion. Going back to Persson and Savulescu’s characterization of commonsense morality as a set of psychological dispositions, I would argue that instead of those dispositions being the basis of our commonsense morality, they are the result of the tacit endorsement of a folk morality.





As folk psychology could be revised and in principle replaced by a better theory, so could our current folk morality be revised and replaced. Furthermore, just like changes in folk psychology would lead to changes in how we explain and predict human behavior, changes in folk morality would lead to changes in our moral attitudes and judgments. This would run counter to the conclusion by Persson and Savulescu (2012) that our commonsense morality is in principle static and that it is not able to adjust to changes in the world caused by rapid scientific development.



Persson and Savulescu think moral attitudes are static because they maintain that commonsense morality is rooted in and limited by biology. This does not distinguish the ability to be moral from any other abilities, including cognitive abilities, which are also the product of our biology. Even if commonsense morality is limited by biology, this does not undermine the argument that it constitutes a folk theory. Again, I will draw a parallel between folk psychology and folk morality. There are those who accept that FP is a theory, but because they think it is innate, they argue that it cannot be replaced by a more suitable psychology (Fodor 1975; Carruthers 1996). So even if we assume that we have a biological or evolutionary predisposition to develop a particular type of folk morality, it is still possible to maintain that this type of morality is a theory. The question of whether commonsense morality can be revised is distinct from whether it is a theory.





Adopting the view that commonsense morality is a theory, however, can lead to an answer about how to promote changes in FM. If folk morality has the same features as a scientific theory, then cognitive enhancers that would lead to advancements in scientific theories would also lead to changes in folk morality. Additionally, if commonsense morality is limited by biology, as Persson and Savulescu argue, neuroenhancement might be required to extend those biological limits. Redefining commonsense morality as a folk theory would remove the need for two separate tracks of enhancement, one moral and one cognitive, because if improvements in cognitive processes such as attention, learning, and memory improve our abilities to generate adequate theories in science, they would have those same effects on our abilities to generate moral theories.







References








1. Carruthers, P. (1996). Language, Thought and Consciousness. Cambridge: Cambridge University Press.

2. Churchland, P. M. (1992). A Neurocomputational Perspective: The Nature of Mind and the Structure of Science. Cambridge, MA: MIT Press.

3. Fodor, J. (1975). The Language of Thought. New York: Thomas Y. Crowell.

4. Lewis, D. (1972). Psychophysical and theoretical identifications. Australasian Journal of Philosophy, 50(3), 207–215.

5. Persson, I. & Savulescu, J. (2008). The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity. Journal of Applied Philosophy, 25(3), 162–177.

6. Persson, I. & Savulescu, J. (2012). Unfit for the Future: The Need for Moral Enhancement. Oxford: Oxford University Press.

7. Sellars, W. (1977; 1997 ed.). Empiricism and the Philosophy of Mind. Cambridge, MA: Harvard University Press.

8. Stich, S. (1996). Deconstructing the Mind. Oxford: Oxford University Press.








Want to cite this post?




Gligorov, N. (2018). One Track Moral Enhancement. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/02/one-track-moral-enhancement.html

Tuesday, February 13, 2018

International Neuroethics Society Annual Meeting Summary: Ethics of Neuroscience and Neurotechnology




By Ian Stevens






Ian is a 4th year undergraduate student at Northern Arizona University. He is majoring in Biomedical Sciences with minors in Psychological Sciences and Philosophy to pursue interdisciplinary research on how medicine, neuroscience, and philosophy connect. 



At the 2017 International Neuroethics Society Annual Meeting, an array of neuroscientists, physicians, philosophers, and lawyers gathered to discuss the ethical implications of neuroscientific research in addiction, neurotechnology, and the judicial system. A panel consisting of Dr. Frederic Gilbert of the University of Washington, Dr. Merlin Bittlinger of Charité – Universitätsmedizin Berlin, and Dr. Anna Wexler of the University of Pennsylvania presented their research on the ethics of neurotechnologies.






Dr. Gilbert discussed the development of neurotechnologies that use artificial intelligence (AI) to operate brain-computer interfaces (BCIs), such as the seizure advisory system, which is invasively implanted in the brain for the treatment of drug-resistant epilepsy (1). He provided three main reasons for the ethical examination of such developing neurotechnologies. The first is that these devices could provide “neuro-signatures” that could aid in the detection of addiction and sexual urges; such capabilities could challenge our notions of privacy and autonomy, concerns that are being explored with other technologies (2, 3). Second, these devices, like other invasive neurotechnologies, have been shown to cause or be associated with personality changes, and because of this we need to understand how they might affect a patient’s notion of self and identity (4). It seems concerning to enter a treatment as one person and leave as another; how the risks and benefits of treatment are balanced when the patient prior to surgery might not be the same afterwards challenges conventional standards of risk and benefit. Finally, the field of AI-driven BCIs is an ambiguous one, with the pace of development of predictive brain implants exceeding our understanding of how they will affect us (5).





Expanding on his second justification, Dr. Gilbert discussed his research on the ways these artificially intelligent devices can alter subjects’ perception of themselves. He used qualitative data from interviews to assess the concern that BCIs alter personalities and shared two stories: one of a 52-year-old woman receiving an AI BCI for epilepsy and one of a younger female student also being treated for epilepsy with an AI BCI (6). The 52-year-old woman stated that, because of the implanted AI device, she felt like she could do anything and nothing could stop her (an AI BCI-induced postoperative distorted perception of her capacities).






An open brain-computer interface (BCI) board. (Image courtesy of Wikimedia.)

This contrasted with the student, who experienced postoperative symptoms of depression because she felt the AI device forced her to confront the fact that she was epileptic (an AI BCI-induced drastic rupture in identity leading to iatrogenic harms). These dialogues have led Dr. Gilbert to argue for a distinction between restorative and deteriorative personality changes associated with BCIs (what he calls “self-estrangement”) (7).



This distinction is helpful for two reasons. First, it helps confirm that a patient’s sense of identity can change in response to the AI BCI they are treated with; second, it suggests that certain kinds of patients may be poorly suited to treatment with BCIs. As with pharmacological treatments for mental health, some patients might be harmed by the deleterious identity changes associated with their AI BCI treatment. In conclusion, Dr. Gilbert advised that those who are not accepting of their neurologic disease should not undergo AI BCI treatment, out of concern that the device could cause a destructive change in their core personality.





Dr. Bittlinger, whose current work focuses on the ethical, legal, and social aspects of psychiatric neurosurgery, presented his research on the ethical evaluation of innovative research involving unknown risk, using the example of deep brain stimulation (DBS) in Alzheimer’s Disease (AD). Dr. Bittlinger emphasized how much of a global burden AD is, with no cure in sight. With only a few drugs available for treating the symptoms of AD, there is an obvious need for innovative research. He said that using DBS as an innovative, or currently unconventional, treatment should be examined ethically before we proceed further down that road. To support this, Dr. Bittlinger quoted the Declaration of Helsinki (8) and its sentiments on the need for patients to be autonomous beings and the importance of consent in research. Consent alone does not address the notion that the risks undertaken by patients should be low and minimal, however, and DBS is in the highest risk class of treatments being explored for AD because of its invasive nature. The Declaration of Helsinki points to this importance, stating “individuals must not be included in a research study that has no likelihood of benefit for them unless it is intended to promote the health of the group represented by the potential subject, the research cannot instead be performed with persons capable of providing informed consent, and the research entails only minimal risk and minimal burden” (9). While all treatments in clinical trials strive for this, the innovative nature of DBS for AD poses large risks for unknown benefits. While AD can be debilitating to the patient, the risk associated with invasive implantation may be too great. Because of this, and because clinical trials include potentially non-autonomous decision-makers (the Alzheimer’s population), Dr. Bittlinger stressed the need for further evidence of DBS efficacy in the long term.








Image courtesy of Pixabay.

Dr. Bittlinger’s take-home message was that “neuroethicists should encourage researchers to see methodological rigor not only as a liability but as an asset.” He is advocating for a form of methodological beneficence: while trials might normally look only to minimize maleficence, questioning whether the implicit structure of research is ethical could provide benefits in the highest-risk realms of research. After an extensive literature review, Dr. Bittlinger drew an important distinction between studies with no unknown risks and those with no knowledge of unknown risks (10). This uncertainty about the unknowns is the basis for Dr. Bittlinger’s question of exactly how much pre-clinical data is required to justify clinical interventions with DBS for Alzheimer’s disease. In line with this methodological beneficence, and using probability models, Dr. Bittlinger finished his talk by stressing the need for neuroscientists to prioritize confirmatory clinical trials over exploratory ones in the early stages of research.





Finally, Dr. Wexler presented on the use of brain stimulation in a variety of health and wellness clinics around the United States. Her work focused on the use of tDCS (transcranial direct current stimulation) and how current studies have suggested its effectiveness for treating depression and chronic pain and for cognitive enhancement (though there is still debate in the literature about the efficacy of tDCS). She also noted that there is a large presence of tDCS use in the DIY (Do It Yourself) community, where people fashion their own devices with batteries and sponges; however, it has become more common for tDCS devices to be obtained as consumer products (11). Her fascination with the field came from the fact that two groups use these devices: researchers (a very controlled setting) and average consumers (a very uncontrolled setting). However, what struck her was that a third group, clinicians, were also using tDCS devices as a means of treatment for their patients (a semi-controlled setting). This semi-controlled setting was curious to Dr. Wexler because it is fraught with ethical concerns distinct from the well-known DIY concerns, including the possible off-label use of tDCS in such a setting.



The semi-controlled environment of the clinic raises questions of clinical bioethics. How should these devices be regulated, and how should they be understood as treatment options? Should they only be approved as a clinical treatment for disease, or also as an off-label procedure for enhancement?






Image courtesy of Pexels.

She defined an off-label use as a device or drug used for a purpose other than the one for which it was approved, citing the use of trazodone, a drug approved to treat depression, for alcohol dependence as an example (12). She then went on to discuss the open-ended, semi-structured interviews she conducted with health care providers that offer tDCS services. Although the analyses are still underway, she shared some insights she has had so far: namely, that tDCS use has been tied to complementary and alternative medicine, that the pricing of such devices varies by provider, and that the treatment focused on depression, anxiety, and ADD. Some of the practitioners thought that tDCS was FDA approved (when in fact it was not), and those using tDCS ranged from people possessing an MD or Ph.D. to people with no clinical background. Regardless of the legal distinctions between regulating the sale of tDCS devices and regulating their use, the ethical questions she left us with are pressing ones. Should these devices be allowed to be used in clinics without supporting research?





These developing neurotechnologies are broad in their application, but there are common threads of ethical reflection that Dr. Gilbert, Dr. Bittlinger, and Dr. Wexler have highlighted. As with all new treatment options, our outlook as scientists, philosophers, lawyers, and ethicists should be critical, although not pessimistic. Neurotechnologies look to be promising treatment options for many chronic neurological problems; however, the side effects, and therefore the risk and benefit trade-offs, are unknown. The “how” question of connecting the human brain with technology has been solved on some levels; however, what this connection means ethically still needs to be unraveled.







References




1. Mark J. Cook et al., “Prediction of Seizure Likelihood with a Long-Term, Implanted Seizure Advisory System in Patients with Drug-Resistant Epilepsy: A First-in-Man Study,” The Lancet. Neurology 12, no. 6 (June 2013): 563–71, https://doi.org/10.1016/S1474-4422(13)70075-9.





2. Tamara Denning, Yoky Matsuoka, and Tadayoshi Kohno, “Neurosecurity: Security and Privacy for Neural Devices,” Neurosurgical Focus 27, no. 1 (July 1, 2009): E7, https://doi.org/10.3171/2009.4.FOCUS0985.





3. Frederic Gilbert, “A Threat to Autonomy? The Intrusion of Predictive Brain Implants,” Ajob Neuroscience 6, no. 4 (October 2, 2015): 4–11, https://doi.org/10.1080/21507740.2015.1076087.





4. Frederic Gilbert et al., “I Miss Being Me: Phenomenological Effects of Deep Brain Stimulation,” AJOB Neuroscience 8, no. 2 (April 3, 2017): 96–109, https://doi.org/10.1080/21507740.2017.1320319.





5, 6, 7. Frederic Gilbert et al., “Embodiment and Estrangement: Results from a First-in-Human ‘Intelligent BCI’ Trial,” Science and Engineering Ethics, 2017, https://doi.org/10.1007/s11948-017-0001-5.





8. “WMA - The World Medical Association-WMA Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects,” accessed December 28, 2017, https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/.





9. “WMA - The World Medical Association-WMA Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects.”





10. John Noel M. Viaña, Merlin Bittlinger, and Frederic Gilbert, “Ethical Considerations for Deep Brain Stimulation Trials in Patients with Early-Onset Alzheimer’s Disease,” Journal of Alzheimer’s Disease: JAD 58, no. 2 (2017): 289–301, https://doi.org/10.3233/JAD-161073.





11. Anna Wexler, “The Social Context of ‘Do-It-Yourself’ Brain Stimulation: Neurohackers, Biohackers, and Lifehackers,” Frontiers in Human Neuroscience 11 (2017), https://doi.org/10.3389/fnhum.2017.00224.





12. Letizia Bossini et al., “Off-Label Uses of Trazodone: A Review,” Expert Opinion on Pharmacotherapy 13, no. 12 (August 2012): 1707–17, https://doi.org/10.1517/14656566.2012.699523.






Want to cite this post?




Stevens, I. (2018). International Neuroethics Society Annual Meeting Summary: Ethics of Neuroscience and Neurotechnology. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/02/international-neuroethics-society_10.html

Tuesday, February 6, 2018

The Anniversary of the First Neuroethics Conference (No, Not That One)




By Jonathan D. Moreno







Jonathan D. Moreno is the David and Lyn Silfen University Professor at the University of Pennsylvania where he is a Penn Integrates Knowledge (PIK) professor. At Penn he is also Professor of Medical Ethics and Health Policy, of History and Sociology of Science, and of Philosophy.  His latest book is Impromptu Man: J.L. Moreno and the Origins of Psychodrama, Encounter Culture, and the Social Network (2014), which Amazon called a “#1 hot new release.”  Among his previous books are The Body Politic, which was named a Best Book of 2011 by Kirkus Reviews, Mind Wars (2012), and Undue Risk (2000).





The 15th anniversary of what is widely viewed as the first neuroethics conference, “Neuroethics: Mapping the Field,” was celebrated in 2017. The meeting was held in San Francisco, organized by the University of California and Stanford, and sponsored by the Dana Foundation. Cerebrum, the journal published by the foundation, celebrated the anniversary by publishing short memoirs by some of the speakers, including my own. The feature was dubbed “The First Neuroethics Meeting.”





Except that it wasn’t. The first conference that was recognizably about neuroethics was held in Washington, D.C. under the auspices of a conservative think tank, and its 20th anniversary is in 2018. 






It does seem that the 2002 meeting was the first one to use the term neuroethics in its title. With the support of Dana’s president, William Safire, the program brought together many of those who are still leaders in the field. But the earlier one, sponsored by a Washington, D.C. think tank, the Ethics and Public Policy Center, and held at the National Press Club, also featured some of those who are still prominent. They included Harvard’s Steven Hyman, later to be the first president of the International Neuroethics Society; Adrian Raine, who was then at the University of Southern California; and the present author. But those we might today consider the usual suspects were in the minority.





And this is where it gets interesting. 








William Bennett, former Director of the Office of National Drug Control Policy. (Image courtesy of Wikimedia.)

Called “Neuroscience and the Human Spirit: Meeting the Challenges of Contemporary Brain Research,” the 1998 conference also featured some speakers who were prominent for other reasons, including Charles Krauthammer, William Bennett, and Fred Goodwin. Krauthammer was and remains an influential political commentator. Bennett had been President Reagan’s Secretary of Education and President George H.W. Bush’s “drug czar.” Fred Goodwin, formerly scientific director of the National Institute of Mental Health, had been involved in a controversy after he seemed to compare inner-city youth to primates. It is fair to say that these and other participants, like the Ethics and Public Policy Center itself, were identified with socially conservative views or at least had annoyed liberals in various ways.





The National Press Club event was reported in a Nature Neuroscience editorial, which called the conference “unusual,” which indeed it was, even pioneering. “Its purpose,” the editorial noted, “was to examine the extent to which modern brain research threatens traditional views of humanity, including the western religious tradition.” Among the topics addressed were free will, the implications of predicting behavior, and the evolution of religious and moral beliefs. Many of the speakers were deeply concerned about the ways that more knowledge about and control over the brain could compromise traditional ethical and social conventions. 





In what in retrospect reads like the rationale for the field of neuroethics, Nature Neuroscience heartily endorsed the goals of the conference. “[T]here are compelling reasons for further discussion. Neuroscientists should recognize that their work may be construed as having deep and possibly disturbing implications, and that if they do not discuss these implications, others will do so on their behalf. The diversity of views expressed at the conference suggests that reconciliation is not imminent, but it will nevertheless be valuable to define the areas of agreement and disagreement more precisely. The EPPC has performed a useful service in promoting that goal.”






Judy Illes, former president of the International Neuroethics Society, presided over the Washington Conference in 2017. (Image courtesy of Wikimedia.)

When neuroethics went self-conscious as an academic field after the Dana conference, its agenda was markedly different from that of the 1998 meeting. Not a single panel in San Francisco was devoted to the implications of modern brain science for religious faith, nor, except indirectly, to the effects on moral traditions. For the Ethics and Public Policy Center their conference was a one-off, but the center’s associates (many of whom are important figures in American conservative thought) no doubt regard the field’s themes as typical of left-wing academia in its exclusion of such concerns.





Looking through a lens two decades later, the Washington conference foreshadowed the science ethics wars to come. It took place right around the time that the first papers were published reporting the isolation of human embryonic stem cells in a University of Wisconsin laboratory and two years before the election of George W. Bush. The bioethics culture wars had not yet erupted into public view through limits on federally supported stem cell research and a presidential bioethics council that was anathema to much of the scientific community. Yet “Neuroscience and the Human Spirit” demonstrated the serious interest among those popularly known as neoconservatives in the relationship between modern science and traditional values. 





Although it was finally stem cell research that became the focus of controversy and allegations of a “war on science” in the early 2000s, in a far more muted fashion neuroscience led the way. 




Want to cite this post?





Moreno, J. (2018). The Anniversary of the First Neuroethics Conference (No, Not That One). The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/02/the-anniversary-of-first-neuroethics.html