
Tuesday, October 16, 2018

What can neuroscience tell us about ethics?




By Adina L. Roskies








Image courtesy of Bill Sanderson, Wellcome Collection

What can neuroscience tell us about ethics? Some say nothing – ethics is a normative discipline that concerns the way the world should be, while neuroscience is normatively insignificant: it is a descriptive science which tells us about the way the world is. This seems in line with what is sometimes called “Hume’s Law”, the claim that one cannot derive an ought from an is (Cohon, 2018). This claim is contentious and its scope unclear, but it certainly does seem true of demonstrative arguments, at the least. Neuroethics, by its name, however, seems to suggest that neuroscience is relevant for ethical thought, and indeed some have taken it to be a fact that neuroscience has delivered ethical consequences. It seems to me that there is some confusion about this issue, and so here I’d like to clarify the ways in which I think neuroscience can be relevant to ethics.





1. Efforts to naturalize normativity


One way neuroscience (construed very broadly) might contribute is to enable us to see how normativity arises as a natural phenomenon. Efforts to show how different hormones and receptors, for example, underlie sociality and trust are an example of this, and some believe that a complete neural plus evolutionary account of the development of our norms is all there is to understanding ethics (Churchland, 2012). However, not all agree that a reductionist or historical approach is possible, and many maintain that no descriptive approach to ethics will suffice to capture what is good or right.





2. Examples and counterexamples





Photograph of Phineas Gage, courtesy of Jack and Beverly Wilgus, now in the Warren Anatomical Museum

Some philosophical theories claim to capture the nature of various concepts or constructs. One particular metaethical view, for example, holds that it is true of moral judgment or belief that it necessarily motivates: that judging or believing something to be good or right intrinsically leads to motivation to pursue it. This view, motivational internalism (MI), has been attacked by a thought experiment: the claim that one could coherently conceive of someone who had moral beliefs but was not motivated by them (Brink, 1997). Adherents of MI, however, argue that this is not coherent or conceivable, and that such “amoralists” could never exist. Neuroscience has offered up potential counterexamples to MI in the form of a type of brain damage that prima facie results in people who aver apparently normal moral beliefs but do not seem motivated to act in accordance with them (A. Roskies, 2003). Although adherents of MI can make the same moves here as in the conceptual case of the amoralist (denying, for example, that these people have moral beliefs, or asserting that they do have moral motivation), the existence of these people offers opportunities to test these arguments in the real world, and forces us to constrain our interpretations in ways that respect the fact that these are real people embedded in the actual social and moral world. For example, if the seemingly moral claims these people make have the same psychological profile as other things that they aver and that we count as their beliefs, can we really deny that these people have moral beliefs? The theory that best accommodates the complexity of this real-world data should ultimately win the day.







3. Illuminating the ways things work


Neuroethics has illuminated, and will continue to illuminate, the ways in which we reason morally, make choices, and so on. Sometimes knowing how things work gives us new handles to use in ethical reasoning. For example, Greene and colleagues have described a dual-process model of moral judgment wherein emotional triggers prompt us to deem certain actions morally permissible or impermissible, whereas more controlled reasoning may sometimes lead to different judgments (Greene, Nystrom, Engell, Darley, & Cohen, 2004; Greene, Sommerville, Nystrom, Darley, & Cohen, 2001). Greene has used these data to argue that consequentialism is superior to deontology (Greene, 2014). Although there has been extensive debate as to whether the neuroscience here leads directly to an ethical conclusion, all parties actually concur that it does not (Berker, 2009; Kahane, 2012; Kamm, 2009). Greene himself is clear that what does the work is the claim that the factors the “emotional” system responds to are ethically irrelevant. What is at issue is rather (1) whether that normative premise is true (Greene thinks it is self-evident; others disagree); and (2) whether the deliverances of these neural systems really map reasonably well onto the various ethical frameworks. Neither of these questions is purely neuroscientific, but the neuroscience may allow us to answer them to our satisfaction.







Image courtesy of Wikimedia Commons

A second example of how understanding how things work could have ethical implications comes from the free will literature. Some have argued that work showing that certain brain signals precede awareness of the intention to act rules out the possibility of free will, and that this has ethical consequences (Kaposy, 2010; Libet, 1985). Although further work shows this claim to be mistaken on empirical grounds, the idea that neuroscience alone could disprove some complex philosophical concept is itself misguided, for the mechanistic commitments of the concept are not explicit. Only given real philosophical work and clear philosophical commitments can the neuroscience ever weigh in on a philosophical issue. In the case of free will, for example, there are alternative philosophical theories of free will that would remain unchallenged even if the original interpretation of the neuroscientific claims held up (A. L. Roskies, 2006).




4. Providing factual premises to ethical arguments


The most common way in which neuroscience can contribute to ethics is by providing factual premises to ethical arguments. In some sense all of the foregoing examples are varieties of this, though each has its own distinctive character. And this is just what one would expect if neuroscience is a descriptive enterprise and ethics is fundamentally normative or prescriptive. A clear example of how neuroscientific facts can lead to ethical consequences can be seen by looking at the literature on brain damage. Many people think we owe a certain level of ethical consideration to creatures capable of consciousness, but not to those incapable of it. And some clinical syndromes have been emblematic of lack of consciousness. But suppose neuroscience could show (to a reasonable degree of certainty) that some people, whom we had taken to lack the capacity for consciousness, and thus to lack a certain level of moral standing, were indeed conscious (A. L. Roskies, 2018)? We would then have to conclude that they were due the moral consideration we accord to other conscious entities. This has indeed happened with a subset of people diagnosed as being in a Persistent Vegetative State (PVS) (Owen, 2013; Owen et al., 2006), providing a real-world example of how neuroscientific evidence can lead, in the presence of the right kind of normative premises, to important and surprising ethical conclusions.


________________





Adina Roskies is the Helman Family Distinguished Professor at Dartmouth College, Professor of Philosophy, and chair of the Cognitive Science Program. She is also affiliated with the Department of Psychological and Brain Sciences. She received a Ph.D. in Neuroscience and Cognitive Science from the University of California, San Diego in 1995, a Ph.D. in Philosophy from MIT in 2004, and an M.S.L. from Yale Law School in 2014. Prior to her work in philosophy she held a postdoctoral fellowship in cognitive neuroimaging at Washington University with Steven Petersen and Marcus Raichle, and from 1997 to 1999 was Senior Editor of the neuroscience journal Neuron. Dr. Roskies’ philosophical research interests lie at the intersection of philosophy and neuroscience, and include philosophy of mind, philosophy of science, and ethics. She has coauthored a book with Stephen Morse, A Primer on Criminal Law and Neuroscience.








References






Berker, S. (2009). The Normative Insignificance of Neuroscience. Philosophy & Public Affairs, 37(4), 293–329.







Brink, D. O. (1997). Moral Motivation. Ethics, 108(1), 4–32. https://doi.org/10.1086/233786







Churchland, P. S. (2012). Braintrust: What Neuroscience Tells Us about Morality. Princeton University Press. Retrieved from http://press.princeton.edu/titles/9399.html







Cohon, R. (2018). Hume’s Moral Philosophy. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2018 Edition). Retrieved from https://plato.stanford.edu/archives/fall2018/entries/hume-moral/







Greene, J. D. (2014). Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics. Ethics, 124(4), 695–726. https://doi.org/10.1086/675875







Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The Neural Bases of Cognitive Conflict and Control in Moral Judgment. Neuron, 44(2), 389–400. https://doi.org/10.1016/j.neuron.2004.09.027







Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI Investigation of Emotional Engagement in Moral Judgment. Science, 293(5537), 2105–2108. https://doi.org/10.1126/science.1062872







Kahane, G. (2012). On the Wrong Track: Process and Content in Moral Psychology. Mind & Language, 27(5), 519–545. https://doi.org/10.1111/mila.12001







Kamm, F. M. (2009). Neuroscience and Moral Reasoning: A Note on Recent Research. Philosophy & Public Affairs, 37(4), 330–345. https://doi.org/10.1111/j.1088-4963.2009.01165.x







Kaposy, C. (2010). The Supposed Obligation to Change One’s Beliefs About Ethics Because of Discoveries in Neuroscience. AJOB Neuroscience, 1(4), 23–30. https://doi.org/10.1080/21507740.2010.510820







Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences, 8(04), 529–539. https://doi.org/10.1017/S0140525X00044903







Owen, A. M. (2013). Detecting Consciousness: A Unique Role for Neuroimaging. Annual Review of Psychology, 64(1), 109–133. https://doi.org/10.1146/annurev-psych-113011-143729







Owen, A. M., Coleman, M. R., Boly, M., Davis, M. H., Laureys, S., & Pickard, J. D. (2006). Detecting Awareness in the Vegetative State. Science, 313(5792), 1402–1402. https://doi.org/10.1126/science.1130197







Roskies, A. (2003). Are ethical judgments intrinsically motivational? Lessons from “acquired sociopathy”. Philosophical Psychology, 16(1), 51–66. https://doi.org/10.1080/0951508032000067743







Roskies, A. L. (2006). Neuroscientific challenges to free will and responsibility. Trends in Cognitive Sciences, 10(9), 419–423. https://doi.org/10.1016/j.tics.2006.07.011







Roskies, A. L. (2018). Consciousness and End of Life Ethical Issues. In Routledge Handbook of Consciousness. Routledge Handbooks Online. https://doi.org/10.4324/9781315676982-34










Want to cite this post?




Roskies, A. (2018). What can neuroscience tell us about ethics? The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2018/10/what-can-neuroscience-tell-us-about.html
