
Tuesday, September 24, 2013

Intelligence Testing: Accurate or Extremely Biased?



By Emily Young



In the early 1900s, psychologist Charles Spearman noticed that children who did well in one school subject were likely to do well in the others, and those who did poorly in one subject were likely to do poorly across the board. He concluded that a single factor, g, underlies performance across tests (Spearman 1904). The g factor is the common factor that accounts for the shared variance in test performance between individuals, and it is sometimes called “general intelligence”.
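To make Spearman’s observation concrete, here is a minimal sketch in Python (with synthetic data, and with principal component analysis standing in for Spearman’s original factor-analytic method) of how one common factor can be recovered from a table of test scores:

```python
import numpy as np

# Synthetic scores for 200 students on four school subjects. A single
# latent "g" drives part of the performance in every subject, plus
# subject-specific noise -- mimicking Spearman's observation that
# scores correlate positively across subjects.
rng = np.random.default_rng(0)
n_students = 200
g = rng.normal(size=n_students)              # latent general ability
loadings = np.array([0.8, 0.7, 0.6, 0.5])    # how strongly each subject reflects g
scores = g[:, None] * loadings + 0.6 * rng.normal(size=(n_students, 4))

# All pairwise correlations come out positive (the "positive manifold").
corr = np.corrcoef(scores, rowvar=False)
print(np.round(corr, 2))

# The first principal component of the correlation matrix serves as a
# rough estimate of g; it tracks the true latent factor closely.
eigvals, eigvecs = np.linalg.eigh(corr)      # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                         # component with largest eigenvalue
pc1 *= np.sign(pc1.sum())                    # orient the component positively
standardized = (scores - scores.mean(axis=0)) / scores.std(axis=0)
g_estimate = standardized @ pc1
print(f"correlation with true g: {np.corrcoef(g_estimate, g)[0, 1]:.2f}")
```

The positive pairwise correlations are what Spearman called the positive manifold; in this toy setup the first component recovers the latent factor almost exactly.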



Later on, psychologist Raymond Cattell determined that g comprises two subsets, called fluid intelligence (denoted Gf) and crystallized intelligence (denoted Gc). Fluid intelligence is abstract reasoning or logic: an individual’s ability to solve a novel problem or puzzle. Crystallized intelligence is more knowledge-based, and is defined as the ability to use one’s learned skills, knowledge, and experience (Cattell 1987). It is important to note that while crystallized intelligence relies on knowledge, it is not a measure of knowledge but rather a measure of the ability to use one’s knowledge.



The first standardized intelligence test was created in 1905 by French psychologist Alfred Binet, as a method to screen for mental retardation in French schoolchildren. The test measured intelligence by comparing an individual’s score to the average score of children his own age (Binet 1905). The test was later revised by Lewis Terman of Stanford University and renamed the Stanford-Binet Intelligence Scales. The Stanford-Binet is now in its fifth edition and includes five sections: fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing, and working memory.
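As a concrete illustration of how such scores are computed (the numbers below are invented): early editions of the Stanford-Binet reported a ratio IQ, the ratio of “mental age” to chronological age, while modern tests, including the current Stanford-Binet, typically report a deviation IQ, which locates a raw score within the test taker’s age-group distribution and rescales it to a mean of 100 and a standard deviation of 15:

```python
# Illustrative IQ arithmetic with made-up numbers.

# Early Stanford-Binet editions used a ratio IQ:
#   IQ = (mental age / chronological age) * 100
mental_age, chronological_age = 10.0, 8.0
ratio_iq = mental_age / chronological_age * 100
print(f"ratio IQ: {ratio_iq:.0f}")          # 125

# Modern tests typically use a deviation IQ: the raw score is located
# within the age group's distribution, then rescaled to mean 100, SD 15.
raw_score, age_group_mean, age_group_sd = 62.0, 50.0, 8.0
z = (raw_score - age_group_mean) / age_group_sd
deviation_iq = 100 + 15 * z
print(f"deviation IQ: {deviation_iq:.1f}")  # 122.5
```

The deviation approach is what makes a score of 100 mean “average for your age group” regardless of the test’s raw difficulty.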



Since the Stanford-Binet, many other standardized intelligence scales have been developed. One of the most popular modern intelligence tests is Raven’s Progressive Matrices (RPM) (Raven 2003). The test presents a series of boxes containing shapes that change according to a pattern, with one box left empty. The test taker must recognize the pattern and identify, from a collection of options, the shape that belongs in the empty box. Unlike the Stanford-Binet, RPM is entirely visual; the test taker does not have to answer written questions, so the measured IQ does not depend on reading comprehension. This reduces the influence of variables such as native language, age, and possible reading disability.






A general example of the questions on the Raven’s Progressive Matrices test.



So what exactly are these IQ tests measuring? The Stanford-Binet measures g through tasks that tap both Gf and Gc. Because RPM is entirely non-verbal and puzzle-based, it almost exclusively measures Gf.



Which brings us to the next question: are these tests effectively measuring g?



Since its creation, modern Western intelligence testing has shown differences in average scores from group to group: whites score higher than blacks, and the rich score higher than the poor. On some tests, women and men score differently from task to task. Are these differences due to heritable differences in intelligence across race, gender, and socioeconomic status? Are environment, schooling, and stigma to blame? Or are the tests themselves flawed?



While intelligence tests claim to be culture-fair, none of the tests created so far is one hundred percent unbiased. As Serpell (1979) found, when asked to reproduce figures using wire, pencil and paper, or clay, Zambian children performed better on the wire task, while English children performed better on the pencil-and-paper task. Each group did better in the medium to which it was more accustomed. Pencil-and-paper IQ tests may be intrinsically biased toward Western culture.



Furthermore, while African-Americans have historically scored lower than white Americans on intelligence tests, this gap has been narrowing in recent years (Dickens and Flynn 2006). This could be the result of one of two things. The first possibility is that average intelligence is increasing in the black community at a higher rate than in the white community (measured intelligence has been rising steadily across all groups, a phenomenon known as the Flynn effect). It seems more likely, however, that post-segregation, white and black cultures have been merging and schools have been integrated, meaning that white and black children have a better chance of receiving the same education. If this is the case, IQ tests are either measuring knowledge more than their creators think they do, or the tests are extremely culturally biased and this bias is lessening as white and black culture in America assimilate.



Not only are intelligence tests culturally biased, but they also seem to be biased in favor of neurotypical individuals. For example, while typically developing individuals generally perform similarly on RPM and the Wechsler Adult Intelligence Scale (WAIS), individuals with autism typically score higher on RPM than on WAIS (Bölte et al. 2009; Mottron et al. 2006). This is because while RPM is a visual task, WAIS is almost entirely verbal. Individuals with autism appear to use visual strategies to solve tasks and therefore have difficulty with tasks that can only be solved verbally (Kunda and Goel 2011). While this phenomenon is typically framed as a cognitive deficit, it is important to note that autistic individuals outperform neurotypical individuals on some visual tasks.



Therefore, by only measuring one specific part of intelligence, some IQ tests portray autistic individuals as having a cognitive deficit. What if some disorders, such as autism, are not actually disorders, but simply a way of thinking that differs from what is considered “normal”?



For example, Dr. Temple Grandin, an autistic woman with a PhD in Animal Science, uses her remarkable visual working memory to design cattle-handling equipment that is far more humane and less anxiety-inducing than previous models. Grandin says her autism allows her to see the world in pictures: her inner thoughts are entirely devoid of language; she thinks in extremely detailed movies. Her visual memory and sensitivity to detail, she says, have made her so good at design because details that neurotypical people gloss over are extremely important to her and end up making a huge difference in the efficiency of the final product.




Temple Grandin utilized her incredible working memory to design humane cattle-holding equipment for the agriculture industry.



Autism may not be the only disorder being mischaracterized. Studies have shown that children with ADHD have, on average, lower IQs than neurotypical children (Kuntsi et al. 2004). However, in his TEDx talk, Stephen Tonti, a senior at Carnegie Mellon, discusses why he believes ADHD is not a disorder but simply a difference in cognition. Tonti argues that viewing ADHD as a disorder implies that it needs to be fixed. He states that his ADHD makes him better at some tasks than neurotypical individuals, and that the world needs a diversity of cognition in order to run smoothly.



Therefore, while IQ tests are intended to measure intelligence, they often measure only one type of intelligence and are consequently biased against certain groups of people. By trying to fit cognition into a box, IQ testing devalues cognitive diversity, and this may have real costs. By telling individuals that their intelligence is low when it is in fact simply different, we may not only be holding people back but also depriving the world of a diverse group of thinkers who could solve problems from a different perspective.



Even if current IQ tests are not fair across all groups, the future of intelligence testing may be brighter; as discussed previously on the Neuroethics Blog, fMRI-based intelligence testing could reduce these biases. By observing test takers’ thought processes in action, researchers could see which brain pathways a subject recruits to solve a problem, and whether he or she takes a visual or verbal approach to a question, thereby observing fluid and crystallized intelligence in action.





References



Binet, Alfred. "Méthodes nouvelles pour le diagnostic du niveau intellectuel des anormaux." L'Année Psychologique 12 (1905): 191-244.



Bölte, S., Dziobek, I., & Poustka, F. “Brief report: The level and nature of autistic intelligence revisited”. Journal of Autism and Developmental Disorders 39 (2009): 678–682.



Cattell, Raymond B. "The Discovery of Fluid and Crystallized General Intelligence." Intelligence: Its Structure, Growth, and Action. Amsterdam: North-Holland, 1987. 87-120. Print.



Dickens, William T., and James R. Flynn. "Black Americans Reduce the Racial IQ Gap: Evidence from Standardization Samples." Psychological Science 17.10 (2006): 913-20. Web.



Kunda, Maithilee, and Ashok K. Goel. "Thinking in Pictures as a Cognitive Account of Autism." Journal of Autism and Developmental Disorders 41.9 (2011): 1157-1177. Print.



Kuntsi, J., T.C. Eley, A. Taylor, C. Hughes, P. Asherson, A. Caspi, and T.E. Moffitt. "Co-occurrence of ADHD and Low IQ Has Genetic Origins." American Journal of Medical Genetics 124B.1 (2004): 41-47. Print.



Mottron, Laurent, Michelle Dawson, Isabelle Soulières, Benedicte Hubert, and Jake Burack. "Enhanced Perceptual Functioning in Autism: An Update, and Eight Principles of Autistic Perception." Journal of Autism and Developmental Disorders 36.1 (2006): 27-43. Print.



Raven, J., J. C. Raven, and J. Court. Manual for Raven’s Progressive Matrices and Vocabulary Scales, Section I: General Overview. San Antonio: Harcourt Assessment, 2003. Print.



Serpell, Robert. "How Specific Are Perceptual Skills? A Cross-cultural Study of Pattern Reproduction." British Journal of Psychology 70.3 (1979): 365-80. Print.



Spearman, Charles E. "'General Intelligence', Objectively Determined And Measured." American Journal of Psychology 15 (1904): 201-93. Web.





Want to cite this post?



Young, E. (2013). Intelligence Testing: Accurate or Extremely Biased? The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2013/09/intelligence-testing-accurate-or.html

Tuesday, September 17, 2013

Neuroscience and Philosophy of Mind: The Relevance for Neuroethics

By Rabbi Ira Bedzow, MA



Rabbi Ira Bedzow is a 2013 recipient of the Emory Center for Ethics Neuroethics Travel Award. He is the project director for the Moral Education research project at the TAG Institute and is currently pursuing his PhD in Religion at Emory University.




Philosophy of mind examines the nature of the mind, mental functions, and consciousness, and their relationship to the body, i.e. the brain. Most contemporary philosophers of mind adopt a physicalist position, meaning that the mind is not something separate from the body. Nevertheless, they disagree as to whether mental states could eventually be explained by physical descriptions (reductionism) or whether they will always require their own vocabulary (non-reductionism). With the growing sophistication of neuroscience and the predominance of the physicalist position, it may seem that philosophy of mind is losing relevance, not only for reductionists about the relationship between mind and brain but for non-reductionists as well. For example, William Vallicella recently answered the question, "Is Philosophy of Mind Relevant to the Practice of Neuroscience?" in the following way:


Off the top of my 'head,' it seems to me that […] it should make no difference at all to the practicing neuroscientist what philosophy of mind he accepts.

Vallicella’s answer betrays an inclination toward neurocentrism (the view that human behavior can best be explained by looking solely or primarily at the brain). According to Vallicella, neuroscience would not be affected by any philosophy of mind, since one can always find a way to make philosophical premises correspond to biological findings (with a little work). Relevance is equated with correspondence, and philosophy of mind must fit into the findings of neuroscience. If it doesn't, there is no contradiction; rather, philosophy simply becomes irrelevant.


What Vallicella does not consider, and what should be a major concern for neuroethics, is that a person's philosophy of mind influences how he or she will interpret neuroscientific data. For this reason, which philosophy of mind we uphold has a major effect on the practice of neuroscience and on how its findings shape the way we interact in social life. Moreover, if we ignore the relevance of philosophy of mind, or of other non-neurocentrist methodologies for that matter, and allow neurocentric accounts an exclusive position in explaining all aspects of human behavior, both social and psychological, then we lose more than we realize in terms of our ability to properly explain and shape human behavior.




In this post, I do not want to show how different philosophies of mind affect neuroscience per se, though I will provide examples that compare different views of the mind. These examples are meant only to show the practical ramifications of holding different philosophical assumptions about the mind, assumptions that cannot be resolved by neuroscience. My aim, rather, is to show that philosophy of mind in general remains important even given the current advances in neuroscience.




The first example is a case where having a philosophy of mind will affect the conclusions drawn from neuroscientific data with respect to mental illness. Professor Seth Grant of the University of Edinburgh and his team claim that their research shows a direct link between the evolution of behavior and the origins of brain diseases. As Grant puts it, "Our work shows that the price of higher intelligence and more complex behaviours is more mental illness." To claim a correlation between higher intelligence and mental illness is also to presuppose not only a theory of mental illness (in terms of its causes) but also a concept of mental illness (in terms of its bio-social definition). The latter is an ethical and philosophical claim as much as a biological or neuroscientific claim. In fact, it fits the "Disorder as Statistical Deviance" concept of mental illness, albeit on a grander scale so as to include animals as much as humans.




The premise that humans are simply smarter than other animals accepts a philosophy of mind that traces back to Aristotle, who thought that the uniqueness of human beings lies solely in their rational capacity; all other parts of the human soul are the same as those possessed by animals. Yet this is not the only view of the human psyche that can influence one's perception of a possible correlation between intelligence and mental illness. Maimonides (a 12th-century Jewish physician and rabbi), on the other hand, contended that humans are wholly unique: the "animal" aspects of their soul, such as the nutritive, sentient, imaginative, and appetitive parts, are only analogous to those found in animals; they are not the same. (See Shemonah Perakim, Chapter 1.)






Philosophy and science (via theguardian.com)

The importance of these divergent opinions for this discussion is not specifically a matter of how each one understands the human soul; rather, their disagreement matters for whether one can interpret the results of damaging the brains of mice as analogous to the effects of mental illness and brain disease in humans, as Seth Grant and his team have done. For Maimonides, the relationship between mental illness and human intelligence should not be judged according to what is found in the animal kingdom, since the analogy does not make for an informative comparison. Aristotle's philosophy, on the other hand, would allow an analogy between mice and humans for the sake of reading the data in the way Grant and his team have done. In this case, philosophy influences how to interpret the data, since it influences which data will be relevant.




A second example is a case where having a philosophy of mind can affect what we can expect neuroscience to be capable of explaining. Sam Harris, for example, has argued that neuroscience will eventually determine human values. No longer would legal culpability and moral responsibility be ethical issues; rather, they would be questions for science to explain. Yet - to take just one area of ethical controversy - while addiction may lead to neurological changes in a person, an addict may not become a wanton (to use Harry Frankfurt's phrase). As Sally L. Satel and Scott O. Lilienfeld note, "The key problem with neurocentrism is that it devalues the importance of psychological explanations and environmental factors, such as familial chaos, stress, and widespread access to drugs, in sustaining addiction."



Though neuroscience can show that biological changes result from addiction, using that scientific data point, without any consideration of psychological or environmental factors, to call addiction a "brain disease" also removes (or at least undermines) the idea that human agency influences how an addict behaves. From a philosophical perspective, the idea that habituation creates a loss of free will is an ancient assumption with traces to Aristotle, yet Aristotle's view that habits ossify a person's behavior is not the only one with traction in moral thought. Maimonides adopts a more plastic view of human character, whereby a person is still able to change his or her behavior (even if only to a degree) after habit seems to have made the person a wanton, and the recovery of certain addicts helps to support this assumption. The neuroscientist, however, is unable to tell the difference between the brain scan of a person who cannot act differently and that of one who simply did not act differently. Thus data collected from a brain scan cannot resolve this philosophical disagreement. Whether agency plays a role in an addict's potential recovery is still a matter for philosophy.




Vallicella's response to the question, "Is Philosophy of Mind Relevant to the Practice of Neuroscience?" allows us to feel comfortable holding onto our philosophies of mind by explaining that they do not contradict the advances of neuroscience; but understood in terms of reconciliation alone, the question is not interesting. The question we should all consider, which has great relevance both to philosophy of mind and to the practice of neuroscience, is this: how does having a philosophy of mind influence the way neuroscientific data are interpreted, and how does our interpretation of such data affect our answers to the questions of how the brain and the mind relate, and how we relate, to each other?




To those who may contend that philosophy (of mind) should have no influence on (neuro)science, I want to end with a quote from Hilary Putnam, who stated regarding the subsumption of values into the scientific method,



Apparently any fantasy - the fantasy of doing science using only deductive logic (Popper), the fantasy of vindicating induction deductively (Reichenbach), the fantasy of reducing science to a simple sampling algorithm (Carnap), the fantasy of selecting theories given mysteriously available set of "true observation conditionals," or, alternatively, "settling for psychology" (both Quine) - is regarded as preferable to rethinking the whole dogma (the last dogma of empiricism?) that facts are objective and values are subjective and "never the twain shall meet."1

Because science, as a method of inquiry, and philosophy, as a set of methodological assumptions used to interpret data, cannot in fact be separated, there is more relevance to philosophy than neurocentrists may like to admit.



___________________________



1. Hilary Putnam, The Collapse of the Fact/Value Dichotomy (Cambridge: Harvard University Press, 2002) 145.







Want to cite this post?



Bedzow, I. (2013). Neuroscience and Philosophy of Mind: The Relevance for Neuroethics. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2013/09/neuroscience-and-philosophy-of-mind.html

Tuesday, September 10, 2013

The Drug Made Me Do It: An Examination of the Prozac Defense

The plot of a recent Hollywood thriller, Side Effects, revolves around pressing legal and ethical questions surrounding the use of anti-depressant medications. The movie follows a supposedly depressed woman, Emily Taylor, who seeks treatment from her psychiatrist. Emily’s doctor prescribes her an anti-depressant, Ablixa, and Emily then proceeds to murder her husband in cold blood while under the influence of the drug. The movie explores this woman’s culpability in a legal sense. During the trial, the psychiatrist argues that neither he nor Emily is responsible; rather, Emily was simply “a hopeless victim of circumstance and biology.” Could a drug really be responsible for one’s actions, as the psychiatrist argues in the movie? The answer is not clear. Nonetheless, the possibility that someone could escape criminal punishment because of an anti-depressant represents a serious ethical quandary that should be examined.








There is no doubt that certain substances have the capacity to markedly alter our motivations and inhibitions. Alcohol is a commonly referenced example: when one chooses to drink to the point of drunkenness, one is likely to experience several of the effects associated with alcohol consumption, such as difficulty walking, blurred vision, and impaired memory [1]. Drinking can also lead to poor decisions of questionable legality. However, people generally do not escape punishment for acts committed under the influence of alcohol, because they chose to drink in the first place. With anti-depressant drugs the situation is much different, because the person taking the drug is not voluntarily seeking an altered mental state.



These questions of legal responsibility have resurfaced with the increasing use of anti-depressant drugs. In fact, the use of selective serotonin reuptake inhibitors (henceforth, SSRIs) has risen remarkably: according to a report released in 2011, anti-depressant use increased 400% from 1988-1994 to 2005-2008 [2].






A visualization of anti-depressant use from a recent CDC report

Simultaneously, over the past 15 years, lawyers have increasingly attempted to use the so-called “Prozac defense,” with differing degrees of success. The “Prozac defense” does not refer solely to the intake of Prozac; rather, the argument put forth by several prominent psychiatrists and lawyers relies on the notion that various SSRIs (such as Zoloft and Paxil) and related antidepressants (such as the SNRIs Effexor and Cymbalta) alter the mind and can cause one to make a decision that would not have been made had one not taken the drug.



The history of the defense is largely confined to the past decade; in fact, no overarching legal framework exists to deal with the potentially mind-altering side effects of SSRIs. Nonetheless, there have been several cases globally in which a court ruled that an SSRI caused illegal behavior. Four of these cases are outlined here [3]. The decisions set an important precedent: a person can evade responsibility for an action due to the side effects of a particular drug. However, not every instance in which lawyers have employed a version of the “Prozac defense” has successfully extricated the defendant from punishment.






A summary of recent trial verdicts where an anti-depressant was involved

The first major case to deal with the legal ramifications of anti-depressant side effects involved a 60-year-old man, Donald Schell (denoted “DS 2001” in the above chart). In 1998, Schell was diagnosed with extreme anxiety and was prescribed Paxil. Within 48 hours of taking the drug, Schell killed his wife, daughter, granddaughter, and himself. A member of Schell’s family, Tobin, sued SmithKline (the manufacturer of Paxil). Tobin’s lawyers argued that those who take Paxil have shown increased suicidal or homicidal thoughts. Predictably, SmithKline worked to deny these allegations by questioning the causality of such claims. However, expert reports used in the trial pointed to Paxil’s potential culpability in a small set of instances:



“[I]t is generally understood by most psychiatrists that a certain number of patients, perhaps five percent, will develop restlessness and anxiety when prescribed selective serotonin uptake inhibitor drugs (SSRIs)…Furthermore, a certain number of depressed patients are known to “switch” into hypomanic states when treated with antidepressant drugs. When a patient has a hypomanic history (Mr. Schell appears to have had none) or already exhibits akathisic symptoms (Mr. Schell did), SSRI compounds should not be prescribed …[4].”



Tobin’s suit was ultimately successful, marking the first such verdict against the maker of an SSRI in the United States. The case set an important precedent: a drug can in fact be held responsible for the unlawful actions someone takes while under the influence of anti-depressants.







As many may suspect, proving that a drug caused an action is quite a feat. First, one must demonstrate that a specific drug can cause some individuals to commit, for example, murder or suicide. Second, the drug must be shown to be the cause of the specific actions in the case at hand. Third, as in the Tobin case, the lawyer must prove that the pharmaceutical company was aware that the drug could cause an individual to commit homicide if certain prior symptoms existed.



The first criterion has some scientific backing: studies have shown that certain SSRIs can have adverse effects on subsets of the population with particular pre-existing symptoms or genes [5]. However, the small size of this group makes it difficult to establish an overarching legal framework for all cases that may have involved an anti-depressant.



The second criterion poses a problem not only for SSRI product-liability cases but for other criminal cases as well. The nature of statistical inference requires that researchers look at overall trends in behavior, and it is difficult to predict from those trends how a particular individual will behave. Even if a lawyer can point to research suggesting that anti-depressants can cause disinhibition, aggression, and homicidal thoughts, such group-level findings cannot be attributed with certainty to the actions of an individual in a specific case. This is a huge statistical challenge for psychiatrists and lawyers alike.
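To see the difficulty in numbers, here is a hedged back-of-the-envelope sketch (all figures are invented; the attributable-fraction formula is a standard epidemiological tool, not something drawn from the cases discussed here):

```python
# Back-of-the-envelope "probability of causation" with invented numbers.
# Epidemiology and toxic-tort reasoning sometimes use the attributable
# fraction among the exposed: PC = (RR - 1) / RR, where RR is the
# relative risk of the outcome among drug takers.
baseline_rate = 1e-5      # hypothetical yearly rate of a violent act, untreated
relative_risk = 2.0       # hypothetical: the drug doubles that rate

prob_causation = (relative_risk - 1) / relative_risk
print(f"treated rate: {baseline_rate * relative_risk:.0e} per person-year")
print(f"chance a given act by a treated person is drug-attributable: "
      f"{prob_causation:.0%}")  # 50%, right at the 'more likely than not' line
```

Even a drug that doubled the risk would leave individual attribution exactly at the “more likely than not” threshold, which illustrates why group-level statistics settle so little in any single case.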



Lastly, the assumption behind the anti-depressant defense is that whoever is aware of the drug’s potential consequences bears responsibility for the actions taken while on the drug. This poses a daunting legal challenge for a multitude of reasons. For example, if pharmaceutical companies put warning labels on their drugs, then who is responsible if someone takes an SSRI and commits suicide? Additionally, our medical system, as it stands, assumes that a psychiatrist possesses the professional knowledge and expertise to decide when, and to whom, to administer SSRIs. Therefore, if a medical doctor prescribes a particular drug to a patient knowing the drug’s side effects, a pharmaceutical company will often argue that it is no longer responsible for the patient’s actions while under the influence of the drug. The question of responsibility has no universally clear answer, which is what makes attributing legal accountability for criminal acts so incredibly difficult.







Ultimately, I believe that pinning responsibility on a single actor may not be feasible, because the issues of legal culpability remain incredibly complex and intertwined. Given depression’s prevalence and the fact that many people can and do benefit from taking Prozac, Zoloft, and similar drugs, pharmaceutical companies should not stop producing SSRIs [6]. Nor would it be wise to place responsibility exclusively on the clinician, for fear that doing so may discourage psychiatrists from prescribing anti-depressants when appropriate. Where does that leave lawyers and clinicians?



In my opinion, lawyers have an obligation to learn how neuroscience could inform the legal system, for better or for worse. Given the increasing number of legal cases that draw on neuroscientific evidence (like the Schell case mentioned above), fluency in neuroscience will clearly be necessary to grapple with complex questions about sentencing. For example, in a Canadian homicide case, the Winnipeg court called on expert psychiatric testimony to determine whether Prozac played a role in a murder. In 2011, the judge concluded that the 16-year-old Canadian teenager would be tried as a minor despite the prosecution’s push to charge him as an adult [7]. As this case and the Schell case demonstrate, neuroscience, psychology, and law will inevitably encounter one another in the courtroom in the coming years. It is my hope that the individuals working in these fields can work synergistically to craft a legal framework that deals appropriately, in terms of fair sentencing and punishment, with individuals who have a history of mental illness.





Want to cite this post?



Marshall, J. (2013). The Drug Made Me Do It: An Examination of the Prozac Defense. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2013/09/the-drug-made-me-do-it-examination-of.html






References



[1] National Institute on Alcohol Abuse and Alcoholism. (2004). “Alcohol’s Damaging Effects on the Brain,” Alcohol Alert, 63. Retrieved from http://pubs.niaaa.nih.gov/publications/aa63/aa63.pdf

[2] Wehrwein, Peter. (2011). “Astounding Increase in Antidepressant use by Americans,” Harvard Health Blog. Retrieved from http://www.health.harvard.edu/blog/astounding-increase-in-antidepressant-use-by-americans-201110203624

[3] Healy, David; Herxheimer, Andrew; and Menkes, David, (2006). “Antidepressants and Violence: Problems at the Interface of Medicine and Law,” PLoS Med 3(9). Retrieved from http://www.plosmedicine.org/article/fetchObject.action?uri=info%3Adoi%2F10.1371%2Fjournal.pmed.0030372&representation=PDF

[4] Whitehead, Paul (2003). “Causality and Collateral Estoppel: Process and Content of Recent SSRI Litigation,” Journal of the American Academy of Psychiatry and the Law 31. Retrieved from http://www.jaapl.org/content/31/3/377.long

[5] Lucire, Yolande and Crotty, Christopher (2011). “Antidepressant-induced akathisia-related homicides associated with diminishing mutations in metabolizing genes of the CYP450 family,” Pharmacogenomics and Personalized Medicine 4. Retrieved from http://www.nt.gov.au/lant/parliamentary-business/committees/ctc/youth-suicides/Submissions/Sub%20No.%2016,%20Dr%20Yolande%20Lucire,%20Part%204,%20Sept%2030%20Sept%202011.pdf

[6] Gibbons, Robert; Hur, Kwan; Brown, Hendricks; Davis, John; and Mann, John (2012). “Benefits From Antidepressants: Synthesis of 6-Week Patient-Level Outcomes From Double-blind Placebo-Controlled Randomized Trials of Fluoxetine and Venlafaxine,” Arch Gen Psychiatry 69(6). Retrieved from http://archpsyc.jamanetwork.com/article.aspx?articleid=1151020

[7] McIntyre, Mike (2011). “Judge Agrees Prozac Made Teen a Killer,” Winnipeg Free Press. Retrieved from http://www.winnipegfreepress.com/breakingnews/judge-agrees-prozac-made-teen-a-killer-130010278.html




Tuesday, September 3, 2013

The Effect of Theoretical Ethics on Actual Behavior: Implications for Neuroethics




Neil Levy

By Neil Levy, PhD



Neil Levy is the Deputy Director of the Oxford Centre for Neuroethics, Head of Neuroethics at the Florey Neuroscience Institutes, University of Melbourne, and a member of the AJOB Neuroscience Editorial Board. His research examines moral responsibility and free will.



Might doing ethics be harmful to your moral health? One would expect just the opposite: the deeper you think about ethics, the more you read and the larger the number of cases you consider, the more expertise you acquire. Bioethicists and neuroethicists are moral experts, one might think. That’s why it is appropriate for media organizations to ask us for our opinion, or for hospitals and research institutions to ask us to serve on institutional review boards.



In this post, I leave aside the question whether ethicists like me deserve to have their opinions about controversial issues given special weight when we offer them. It is really hard to know what could serve as evidence for or against that view (the issues are controversial, so there is no way of measuring how often we get it right that’s not going to beg all sorts of questions). There is some evidence, however, that ethicists behave no better than anyone else, which places some pressure on the idea that all our reflection, writing and reading makes us moral experts.






Eric Schwitzgebel

The evidence comes mainly from the work of Eric Schwitzgebel and his colleagues (especially Joshua Rust), who measured the behavior of philosophy professors and advanced students in a variety of ways. They measured the rate at which relatively obscure books on ethics – those likely to be of interest only to specialists – were stolen from academic libraries; they discovered that these books were somewhat more likely to be stolen than other books. They examined the rates at which specialists in ethics voted in public elections (in the US) and found that they were no more likely to vote than specialists in other topics within philosophy (and less likely to vote than political scientists). They examined the rate at which ethicists avoided paying the registration fees at a large philosophical conference in the US, and found they were no less likely to free ride than non-ethicists. They sent emails to philosophy professors specializing in ethics and to other professors within and outside philosophy, purporting to be from students seeking information about courses and office hours, and measured the rate at which each group responded. Ethicists were not significantly more likely to respond than other professors (though ethicists did respond slightly more often, statistical analysis indicated that the small effect could be the result of chance). They even asked for self-reports and discovered that ethicists were no more likely than non-ethicists to report behaving well. For instance, they were not more likely to report donating blood or abstaining from eating meat, though they were more likely to report that people have a duty to do these things.



In many ways, these findings are rather depressing. Of course most of the behaviors involved are relatively trivial, but some are quite important, and in any case much of the moral life consists of small courtesies to one another. We might have expected long and hard training in ethics to lead to better behavior, but it seems not to. But why not?



Schwitzgebel and colleagues suggest several different explanations. Perhaps, for instance, people go into ethics precisely because they find it puzzling or difficult. If that’s right, then perhaps ethicists are actually improved by their reading and reflection, but improved relative to their starting position, which wasn’t all that great. But the explanation I want to focus on is different: perhaps studying ethics leads to moral self-licensing.



Moral self-licensing occurs when people think that they have an excuse for behaving less well because of the morally good way they have acted in the past. Moral self-licensing has been demonstrated experimentally and empirically a number of times. It has been found, for instance, that people who have bought an environmentally friendly product – say energy-conserving light bulbs – might then give themselves permission to consume more of something less environmentally friendly, and that people who are prompted to think of some good action they have performed in the past are less likely to donate to charity. Might this kind of effect be at work in canceling out the effects of all that ethics self-education in which professionals engage?



Perhaps, that is, reflecting on the ethical faults and foibles of others, or thinking through moral dilemmas and coming to a conclusion with which we are satisfied, has the same kind of effect on subsequent behavior as actually doing morally good things. The suggestion is plausible: just as buying an ethical product may lead us to (unconsciously) think of ourselves as more moral than average (and therefore as deserving to be cut some slack), so reasoning to a conclusion we think of as moral may lead us (again, unconsciously) to think of ourselves as morally better than average. Alternatively, if moral licensing is instead explained by the expenditure of effort, time, or money in the service of a moral end, the effects of moral deliberation may work via the fact that it too is effortful and time-consuming. In either case, we might behave no better precisely because we reason more.



If that is the explanation, it would be heartening in one way. It would not show that all our efforts at moral deliberation and reflection don’t pay off, in the sense that the explanation is fully compatible with our actually reasoning our way to better conclusions than others. It would just suggest that we pay a price for our hard work: we fail to live up to the high standards we actually set. In another way, however, it might be extremely disheartening. Perhaps our conclusions are worthwhile, but if others make the effort of engaging with them, following our reasoning and perhaps being convinced by it, we can expect that very effort to have ill effects on their behavior. We would get a paradox of moral reasoning: it is worthwhile just so long as you don’t engage in it.



These kinds of results should give ethicists some pause. We think that ethics is extremely important, but it may be that our efforts at teaching it and reflecting on it don’t lead to better behavior in ourselves and in our students. Properly assessing these findings requires further research, empirical and also philosophical (is hypocrisy compatible with providing ethical guidance?). At the very least, they ought to shake us out of complacency. Perhaps in doing so, they will help us to overcome the very moral self-licensing that ethics may otherwise produce.





Want to cite this post?



Levy, N. (2013). The Effect of Theoretical Ethics on Actual Behavior: Implications for Neuroethics. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2013/09/the-effect-of-theoretical-ethics-on.html