By Neil Levy, PhD
Neil Levy is the Deputy Director of the Oxford Centre for Neuroethics, Head of Neuroethics at Florey Neuroscience Institutes, University of
Melbourne, and a member of the AJOB Neuroscience Editorial Board. His research examines moral responsibility and free will.
Might
doing ethics be harmful to your moral health? One would expect just the
opposite: the deeper you think about ethics, the more you read and the
larger the number of cases you consider, the more expertise you acquire.
Bioethicists and neuroethicists are moral experts, one might think.
That’s why it is appropriate for media organizations to ask us for our
opinion, or for hospitals and research institutions to ask us to serve
on institutional review boards.
In this post, I leave aside the question whether ethicists like me deserve to have our opinions about controversial issues given special weight when we offer them. It is really hard to know what could serve as evidence for or against that view (the issues are controversial, so there is no way of measuring how often we get it right that doesn't beg all sorts of questions). There is some evidence, however, that ethicists behave no better than anyone else, which places some pressure on the idea that all our reflection, writing, and reading makes us moral experts.
The evidence comes mainly from the work of Eric Schwitzgebel and his colleagues (especially Joshua Rust). Schwitzgebel and his collaborators measured the behavior of philosophy professors and advanced students in a variety of ways. They tracked the rate at which
relatively obscure books on ethics – those likely only to be of interest
to specialists – were stolen from academic libraries; they discovered
that these books were somewhat more likely to be stolen than other
books. They examined the rates at which specialists in ethics voted in
public elections (in the US) and found that they were no more likely to
vote than specialists in other topics within philosophy (and less likely
to vote than political scientists). They examined the rate at which
ethicists avoided paying the registration fees at a large philosophical
conference in the US, and found they were no less likely to free ride
than non-ethicists. They sent emails to philosophy professors
specializing in ethics and other professors within and outside
philosophy, purporting to be from students seeking information about
courses and office hours, and measured the rate at which each group
responded. Ethicists were not significantly more likely to respond than other professors (though ethicists did respond slightly more often, statistical analysis indicated that the small difference could have been due to chance). They even collected self-reports and discovered that ethicists were no more likely than non-ethicists to describe their own behavior as good. For instance, they did not report donating blood or abstaining from eating meat more frequently than non-ethicists did, though they were more likely to report thinking that people have a duty to do these things.
In many ways,
these findings are rather depressing. Of course most of the behaviors
involved are relatively trivial, but some are quite important, and in
any case much of the moral life consists of small courtesies to one
another. We might have expected long and hard training in ethics to lead
to better behavior, but it seems not to. Why not?
Schwitzgebel
and colleagues suggest several different explanations. Perhaps, for
instance, people go into ethics precisely because they find it puzzling
or difficult. If that’s right, then perhaps ethicists are actually
improved by their reading and reflection, but improved relative to their
starting position, which wasn’t all that great. But the explanation I
want to focus on is different: perhaps studying ethics leads to moral
self-licensing.
Moral self-licensing occurs
when people think that they have an excuse for behaving less well
because of the morally good way they have acted in the past. Moral
self-licensing has been demonstrated experimentally a number of times. It has been found, for instance, that people who have bought an environmentally friendly product – say, energy-conserving light bulbs – may then give themselves permission to consume more of something less environmentally friendly, and that people who are prompted to think of some good action they have performed in the past are less likely to donate to charity. Might this kind of effect be canceling out the effects of all the ethical self-education in which professionals engage?
Perhaps, that is, reflecting on the ethical faults and foibles of others, or thinking through moral dilemmas and coming to a conclusion with which we are satisfied, has the same kind of effect on subsequent behavior as actually doing morally good things. The suggestion is plausible: just as
buying an ethical product may lead us to (unconsciously) think of
ourselves as more moral people than average (who might therefore deserve to be cut some slack), so reasoning to a conclusion we think of as moral
may lead us (again, unconsciously) to think of ourselves as morally
better than average. Alternatively, if moral licensing is instead
explained by expenditure of effort or time or money in the service of a
moral end, the effects of moral deliberation may work via the fact that
it is also effortful and time-consuming. In either case, we might behave
no better precisely because we reason more.
If that is
the explanation, it would be heartening in one way. It would not show
that all our efforts at moral deliberation and reflection don’t pay off,
in the sense that the explanation is fully compatible with our actually
reasoning our way to better conclusions than others. It would just
suggest that we pay a price for our hard work: we fail to live up to the
high standards we actually set. In another way, however, it might be
extremely disheartening. Perhaps our conclusions are worthwhile, but if others make the effort of engaging with them, following our reasoning and perhaps being convinced by it, we can expect that very effort to have ill effects on their behavior. We would get a paradox of moral
reasoning: it is worthwhile just so long as you don’t engage in it.
These
kinds of results should give ethicists some pause. We think that ethics
is extremely important, but it may be that our efforts at teaching it and reflecting on it don't lead to better behavior in ourselves or in our students. Properly assessing these findings requires further research, both empirical and philosophical (is hypocrisy compatible with providing ethical guidance?). At the very least, they ought to shake us out
of complacency. Perhaps in doing so, they will help us to overcome the
very moral self-licensing that ethics may otherwise produce.
Want to cite this post?
Levy, N. (2013). The Effect of Theoretical Ethics on Actual Behavior: Implications for Neuroethics. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2013/09/the-effect-of-theoretical-ethics-on.html
The problem is with the assumption that ethics expertise and moral behaviour are linked. Ethics expertise is a set of skills or competencies: awareness of ethical issues and ethical implications of decisions, familiarity with different ethical lenses and theoretical perspectives, familiarity with relevant literature, and ethical reasoning and decision making. One can be quite proficient in those skills and still go through life as a pretty terrible person.
I don't think moral self-licensing is a relevant factor, because the work of an ethicist isn't inherently morally good (e.g., teaching ethics in a university wouldn't qualify as something morally good that would license unethical behaviour). What may be happening, however, is that ethics expertise makes ethicists more proficient at rationalizing their unethical behaviour.