
Tuesday, February 23, 2016

A plea for “slow science” and philosophical patience in neuroethics

By Richard Ashcroft, PhD






Professor Richard Ashcroft, an AJOB Neuroscience Editorial Board Member, teaches medical law and ethics at both the undergraduate and postgraduate level in the Department of Law at Queen Mary University of London.




Readers of AJOB Neuroscience will be very familiar with the range and pace of innovation in applications of the neurosciences to problems in mental health and wellbeing, education, criminology and criminal justice, defense, and love and sexuality – to name but a few areas of human concern. However, there is a skeptical tendency that pushes back against such innovation and its claims. This skepticism takes a number of forms. One form is philosophical: some claims made about neurosciences and their applications just make no sense. They rest on conceptual mistakes or logical fallacies. This kind of attack has been made most persuasively by neuroscientist M.R. Bennett and philosopher P.M.S. Hacker in their Philosophical Foundations of Neuroscience (2003). Another form is empirical: some claims are advanced on the basis of weak or flawed evidence, and may go well beyond what that evidence could support, even if on its own terms the data are robust and obtained in methodologically sound ways. A typical instance is the way newspapers regularly report neuroimaging studies that purport to describe “the autistic brain,” when at best they describe some differences in one subset of autistic people carrying out one experimental task, compared with a small control group of putatively neurotypical people. Another form is ethical: some technologies raise significant ethical challenges. And obviously some challenges are political, bearing on the interests of particular social groups or on competing visions of the society we want to live in. The standard examples here are drawn from debates about neuro- or psychopharmacological enhancement.






All of this is true of other life sciences as well – particularly genetics – but there seems at the moment to be a particular backlash against “neurohype” (see previous related posts on this blog).





While there is a lot of enthusiasm for the potential of the neurosciences to improve our understanding of the brain, the mind, and human behaviour, and to enable us to develop useful technologies for treatment and enhancement, there is also a lot of scorn for scientific and popular literature which seems, on its face, to do little more than add the prefix neuro- to the name of an existing discipline, or to turn well-understood findings in psychology or economics into apparently novel discoveries in the neurosciences.





It seems to me that there is a powerful need for ways to disentangle good from bad arguments about the neurosciences and their technologies on the one hand, and, on the other, to demarcate genuinely novel findings in which the “neuro-” is genuinely central from rebranding and turf-claiming on the part of neuro-enthusiasts.





This is not directly an ethical question, I think. Some scientists and commentators are, perhaps, guilty of intellectual dishonesty, or of representing themselves as having expertise they don’t really have. This is something that bioethicists have wrestled with since the earliest days of the field. But most of us are not guilty of deliberately over-claiming or hyping our work, or findings in the sciences, though we are occasionally guilty of underestimating or misunderstanding the issues. The same is true of the scientists and clinicians working in the neuro-fields.





I think rather it is a question of intellectual responsibility. We need to take more pains over what we say. We need to slow down, to practice in discussions of neuroethics what philosopher of science Isabelle Stengers calls “slow science.” Here I think there is a special place for philosophy in neuroethics and in interdisciplinary collaboration with neuroscientists, in seeking analytical clarity about what is being claimed, with what evidence, and to what effect. I am not making a special plea for the role of philosophy in constructing grand theoretical and metaphysical systems, but rather for a modest analytical patience. Wittgenstein, without Wittgensteinianism.








Incorporating philosophy in neuroethics, image courtesy of flickr user m01229











Want to cite this post?



Ashcroft, R. (2016). A plea for “slow science” and philosophical patience in neuroethics. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/02/a-plea-for-slow-science-and.html

Tuesday, February 16, 2016

Our Lazy Brain Democracy: Are We Doomed?

By John Banja, PhD




Lately, I’ve been thinking about the Martin Shkreli embarrassment in connection with System 1 and 2 reasoning [1]. Popularized by thinkers like Nobel Laureate Daniel Kahneman, System 1 thinking refers to the fast, intuitive, reflexive, usually highly reliable cognition that humans deploy perhaps 95 percent of the time in navigating and making sense of their environments. System 2 thinking, on the other hand, is slow, effortful, plodding, analytical, and data dependent—in short, an activity that most humans don’t particularly gravitate towards, perhaps because our brains, at least according to Kahneman, are inherently lazy [1]. Shkreli, you’ll recall, is a former pharmaceutical CEO who found himself at the top of everyone’s hate list when he announced that his company was going to increase the cost of its drug Daraprim by more than 5,000 percent. (Daraprim is used to treat toxoplasmosis and malaria, notably in patients with HIV.) The public’s System 1, gut-level outrage predictably kicked in and, within weeks, Shkreli found himself without a job and battling criminal charges for securities fraud he allegedly committed at a previous company.




Martin Shkreli arrest, image courtesy of YouTube 










To me, Shkreli’s case vividly illustrates America’s sensationalist-prone, media-driven, knee-jerk, System 1 style of moral reasoning. Of course, Shkreli’s greed was way over the top, and his price-hike justification—pharma’s predictable “we need these profit margins to fuel innovation”—was ridiculously disingenuous. Yet Shkreli’s pricing indiscretion was an economic blip compared to two other health-related events with which our public sense of justice seems little concerned. One is the $160 billion merger (the largest in corporate America in 2015) underway between American pharmaceutical giant Pfizer and the Ireland-based company Allergan. Pfizer’s intention is to reconstitute itself as an Irish company so as to be relieved from paying the high American corporate tax rate. The relocation is perfectly legal and follows in the wake of Burger King, Medtronic, and other American companies doing the same. But it means the loss of gazillions of tax dollars to the American economy. Oh, and Pfizer also just announced that it was raising its price on some 100 drugs (including Viagra) by as much as 20 percent.





And how about this persisting one: the United States pays more for prescription drugs than any other country in the world. One of the primary reasons is that Medicare, the largest drug purchaser in the United States, is prohibited by law from negotiating price discounts with pharmaceutical companies. Medicaid can negotiate discounts, and health maintenance organizations and the VA can negotiate too. But not Medicare.





So, how come we robustly and gleefully intervene with the likes of Shkreli but allow these other, considerably more worrisome events to get a pass? Why aren’t the public and the media equally outraged by the colossal loss of tax revenue and the increased health care costs that the American economy must bear because of economic arrangements like these? I’d suggest it’s because they are very hard to understand, tedious to analyze, and frustrating to resolve. In short, they’d require a lot of System 2 thinking that we’re not willing to expend. For example, just to appreciate the Pfizer case, you’d need to know about “inversions,” territorial tax systems, deferrals, and anti-tax-abuse regulations. For Medicare, you’d need to know about the history and the moral propriety of the back-room, closed-door deals and trade-offs that resulted in the original 2003 legislation, and why President Obama hasn’t yet attempted to reform that legislation despite saying he would. You’d then have to argue the associated moral pros and cons according to some kind of ethical platform—most assuredly, a platform that not everyone would accept: witness Wall Street’s fat cats shrugging their shoulders at crusading moralists like Bernie Sanders.





Yet we should all be thoroughly upset at how these events are structurally embedded in and enabled by our laws and politically sanctioned economic arrangements, such that their impact will almost certainly widen the already worrisome gap between our country’s wealthiest 3 percent and everyone else. But as long as we stay mired in System 1 thinking, that outrage is very unlikely to materialize.








Superintelligence, image courtesy of flickr user Anders Sandberg

Maybe technology will help us out. I’ve been reading Nick Bostrom’s new book Superintelligence, and early on he makes a provocative comment: that the super-intelligent technology of the not terribly distant future—perhaps within a century—will be “capable of improving its own architecture… (It) should be able to understand its own workings sufficiently to engineer new algorithms and computational structures to bootstrap its cognitive performance.” [2, p. 29, also described in a previous blog post].  That sort of thing would accomplish what “neuroevolution” hasn’t done for us humans in the last 10,000 years. Although our brains may have “shrunk” in size over time, they continue to use the same neural structures, programs and circuits our ancient ancestors did, which worked great for fleeing from predators, herding livestock, and cultivating crops but not great for dealing with inversions, territorial tax systems, Medicare Part D coverage legislation, and the like.





Consequently, superintelligent technology that could learn from its mistakes and continuously rewire and re-architect itself seems fabulous. But in matters of morality, there’s a big catch: what if the superintelligent technology turns out to be greedier than Shkreli but much more strategic and cunning? How do we program such technology not only with the requisite instrumental intelligence for regulating pricing schemes, tax rates, profits, and so on, but also with substantive moral intelligence, so that our prospects for human flourishing aren’t thwarted? Indeed, who or what gets to say what “flourishing” is, what the “acceptable” means for achieving it are, and what such “improvement” looks like? We’ll need not only a lot of System 2 thinking to accomplish this, but also a collective, exquisitely well-intentioned and morally disciplined will if our grandchildren are going to thank us for our efforts.





For now, I hope we can overcome our lazy brains and morally enhance ourselves by shaping our educational systems, the media, and other information outlets so that they present information that is true to the facts, respects the law of non-contradiction, practices sound methods of evidence gathering, and values the best scientific opinion. We need to humble ourselves and admit all the things we don’t know, and then commit ourselves to becoming much more informed consumers of knowledge. And at the very least, we should collectively denounce the media’s spotlighting of evolution deniers, vaccine deniers, climate change deniers, and—perhaps the most vomitous—Sandy Hook massacre deniers as beneath the intelligence and dignity of a 21st-century electorate.





One of the best features of Western democracies is that they enable the fairly rapid correction of legislative or socioeconomic experiments gone wrong. The greatest challenge of navigating life in the 21st century, however, is that we have created many extremely complex socioeconomic arrangements that lazy brains are not equipped to manage and seem unmotivated to change. If we fail to remedy these ills, we will suffer the consequences of a lazy brain democracy. While System 1, lazy brain thinking can be comforting (because it doesn’t require much effort), a national commitment to System 2 reasoning about the issues we all know are important for our collective well-being may spell the difference between doom and realistic hope as our century evolves.



References



1. Kahneman, D. 2011. Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux.



2. Bostrom, N. 2014. Superintelligence. Oxford, UK: Oxford University Press.





Want to cite this post?



Banja, J. (2016). Our Lazy Brain Democracy: Are We Doomed? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/02/our-lazy-brain-democracy-are-we-doomed.html




Tuesday, February 9, 2016

AI and the Rise of Babybots: Book Review of Louisa Hall’s Speak




By Katie Strong, PhD




“Why should I be punished for the direction of our planet’s spin? With or without my intervention, we were headed towards robots,” writes Stephen Chinn, a main character in the novel Speak by Louisa Hall. Stephen has been imprisoned for his creation of robots deemed illegally lifelike, and in a brief moment of recrimination when writing his memoir from prison, he continues, “You blame me for the fact that your daughters found their mechanical dolls more human than you, but is it my fault, for making a doll too human? Or your fault, for being too mechanical?” 




The dolls that resemble humans are referred to as “babybots,” robots with minds that deviate only 10% from human thought and have the ability to process sensory information. Speak tells the story of how babybots come into being and then describes the aftermath once they have been deemed harmful and removed from society. The book moves between characters’ stories taking place in four different time periods, from the 17th century to 2040, and the plot is told through letters, court transcripts, and diary selections from five main characters. Through these various first-person views, pieces of the story behind babybots and the rise of artificial intelligence are made clear.




Around the same time that Stephen toils in prison writing his memoir, a young girl named Gaby is slowly losing her ability to speak and move following the mandated removal of her babybot. An outbreak, characterized by stuttering and physical rigidity, has begun among children whose babybots have been ripped away. We read of Gaby’s struggle to cope with this loss as she communicates with her replacement robot in court transcripts meant to prove Stephen’s innocence against his charges: the knowing creation of mechanical life, the intent to endanger the morals of children, and the continuous violence against the family.





Through the three remaining voices – Alan Turing, the Dettmans, and Mary Bradford – we are swept backwards in time. While Stephen is credited with and punished for the creation of babybots, traveling back through this timeline makes clear that he alone is not responsible. Stephen’s babybot is based on a program known as MARY3. Stephen programs MARY3 to display empathy, error, and personality, building on MARY2, a robot with extensive memory created by the scientist Karl Dettman and his wife Ruth. Through a series of letters that chronicle the dissolution of the couple’s marriage, we learn about the development of MARY2 and their conflicting ideas about memory and artificial intelligence. MARY2 is able to recite entire personal histories, including the diary of Mary Bradford, a pilgrim traveling to Massachusetts in the 17th century. Excerpts of Mary’s diary reveal her thoughts, and although her existence obviously predates computers, she writes of seeking solace in a nonhuman and then deeply contemplates whether this companion, her dog, has a soul like hers.








Alan Turing, image courtesy of flickr user Steve Montana Photography

Amongst these four stories are letters from Alan Turing to the mother of his deceased childhood friend. These fictional letters follow the real story of Alan Turing as he moves from his student days at Sherborne School to his eventual suicide by cyanide poisoning. Alan is the only character based on a real person, and his story haunts the fictional portions of this book with the reminder that the history of babybots could convincingly be rooted in our own reality and history.




The book does not contain an evil scientist taking over the world with robots or a swarm of computers extinguishing humankind. The most villainous of the characters is Stephen, but even MARY3 is partially the creation of a well-intentioned father. Much of the action that could have filled an entire book – the heyday of the babybots, the decision to remove them from society, and the arrest of their creator – is actually omitted, only alluded to and foreshadowed. Speak takes a subtler approach and instead, more chillingly, slowly chronicles the destruction of single individuals and their relationships as society grapples with the emerging role of technology. Stephen is enamored with his new wife, but eventually becomes so engrossed in the creation of MARY3 that even her ovarian cancer diagnosis cannot snap him out of his stupor. Gaby and her classmates are truly unable to make human connections, and are even barred from doing so when quarantined. To alleviate the symptoms of the outbreak, it is decided that children should be given a replacement robot with only slightly less capability than those determined to be too harmful. In lieu of a climactic scene involving babybots, one of the more dramatic images is Karl finally making the decision to leave Ruth after he feels she has completely shut him out of their marriage. Speak is at its best in these heartbreaking moments of splintered affection and love. Big-picture details are mostly omitted; there is hardly a mention of the logistics behind the removal of babybots or of how governments are ensuring illegal robots do not appear again. Hints of apocalyptic events, including the lack of water and the destruction of beaches, do appear in the sections taking place in the 21st century, but they feel like unnecessary and jarring details in the backdrop of these deeply personal stories.





Speak may be a warning letter of sorts, but it is a beautiful and poetic one. Hall is a published poet and the lyrical presentation of philosophical ideas in her book is a testament to her ability. Interspersed with the movement of the plot, the characters contemplate the Fibonacci sequence, theory of relativity, the passage of time, memory in artificial intelligence, nonhuman souls, loneliness, and love. All of the characters in Speak are trying to say something, and with Hall’s prose behind their voice, they demand to be listened to.





There is a sixth voice in this narrative that is not recorded in the character list at the beginning of the book – the voice of Eva, Gaby’s babybot. She opens the novel as she describes being taken away to a warehouse to live out the rest of her battery life, and the novel ends as her time is running out. She makes it clear that her voice is a combination of all the voices that have come together to make her – from Mary Bradford to Stephen Chinn – and she will continue to remember until she no longer can. She realizes what the humans in the novel seem unable to comprehend – there is a group of humans reflected in every piece of technology. Stephen and Eva may take the fall for humankind by spending their respective remaining time in prison and a warehouse, but as Speak makes clear, both the creator and the creation know that technology advances at the speed which society allows.





Speak was released in the summer of 2015 and is currently for sale.





Want to cite this post?





Strong, K. (2016). AI and the Rise of Babybots: Book Review of Louisa Hall’s Speak. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/01/ai-and-rise-of-babybots-book-review-of.html


Tuesday, February 2, 2016

Emotions without Emotion: A Challenge for the Neurophilosophy and Neuroscience of Emotion

By Louis Charland



Louis C. Charland is Professor in the Departments of Philosophy, Psychiatry, and the School of Health Studies, at Western University in London, Canada. He is also an International Partner Investigator with the Australian Research Council Centre of Excellence for the History of Emotions, based at the University of Western Australia, in Perth, Australia.


Many scholars of the affective domain now consider “emotion” to be the leading keyword of the philosophy of emotion and the affective sciences. Indeed, many major journals and books in the area refer directly to “emotion” in their titles: for example, Emotion Review, Cognition and Emotion, The Emotional Brain (LeDoux 1996), Cognitive Neuroscience of Emotion (Lane & Nadel 2002), and The Emotional Life of Your Brain (Davidson & Begley 2012). At times, “feeling,” “mood,” “affect,” and “sentiment” are argued to be close contenders, but such challenges are normally formulated by contrasting their explanatory promise, and their theoretical status, with “emotion.” Historically, debates about the nature of affective terms and posits used to revolve, in conceptual orbit, around the term “passion” and its many variants (Dixon 2003). In our new emotion-centric universe, everything seems to revolve around “emotion” and its many variants.


The problem is that, despite its popularity, “emotion” is a keyword in crisis (Dixon 2012). There are too many variants and insufficient consensus. According to some, things are so bad that we should do away with “emotion” entirely (Griffiths 1997). Ironically, this last suggestion may not be so iconoclastic. There is, apparently, relatively little interest in the question whether “emotion” demarcates a clear, legitimate, scientific domain of its own, except perhaps to deny that it does (Charland 2002; but see Griffiths 2004a). In contrast, there is much interest in the study of individual emotions, and both the variety and the pace of research in this area have been impressive (Barrett 2007, Izard 2007, Panksepp 2000). Consequently, we are left with a seeming paradox. Research on individual emotions is thriving. At the same time, the question whether those emotions form a homogenous class, or natural kind, remains unresolved. Sometimes the answer is simply no. But that harkens back to the question why the emotions are all lumped together as “emotions” in the first place.


Historically, beginning with Paul Broca’s 1878 isolation of the so-called “limbic lobe” (grand lobe limbique), there have been influential formulations of the hypothesis that “emotion” is a natural kind with specialized brain centers and circuits tied to particular anatomical features (Papez 1937, MacLean 1952). There have also been detractors. James (1884) famously argued that there are no specialized brain centers for emotion. More recently, it has been argued that the concept of a specialized limbic system dedicated to emotion has outlived its usefulness (LeDoux 1996). Others, however, still see value in the concept of an evolutionarily primitive organizational subcortical limbic core of the brain (Panksepp 1998). Note that the hypothesis at issue in these discussions concerns “emotion.” That is very different from the hypothesis that some individual basic “emotions” may qualify as natural kinds (Barrett 2006, Panksepp 2000).


This latter hypothesis, which concerns individual emotions, is a worthy object of discussion in contemporary neuroscience. But its historical ancestor, which concerns the nature of emotion, appears to have fallen by the wayside. This despite the fact that it is still very common to find neuroscientists speaking of a contrast or distinction between “cognition” and “emotion,” as if this reflected a division in the natural order of things (Damasio 1994, Pessoa 2013). That distinction is also very much in circulation in contemporary philosophy of emotion and the affective sciences.


However, when we inquire into the theoretical foundations and evidence for the said distinction between “cognition” and “emotion” in philosophy, what we find is a concerted mass denial of the thesis that emotion forms a natural kind or class of any sort (Charland 2002). Surprisingly, there are very few philosophical arguments to the contrary. One notable example is Jesse Prinz, who proposes a very original and esoteric version of the hypothesis that emotion is a natural kind, though apparently to no avail (Prinz 2004). Other, quite different, formulations of that hypothesis have also been proposed, but again to no avail (Charland 1995, 2002, 2005). In the end, we are left with a seeming paradox: “emotions” without “emotion.” We have a philosophy of emotion without “emotion,” and finally, a purported scientific distinction between the theoretical domains of “cognition” and “emotion” that has no clear definition or borderline.


The sad truth is that the meaning and theoretical status of “emotion” continue to be a matter of great contention, which according to some is nothing short of a “scandal” (Russell 2012). Yet research on “emotion” continues unabated, as if the theoretical status of “emotion,” natural or otherwise, were unimportant, or actually simply settled in the negative, perhaps only a chimerical scientific fantasy of no worth. What detractors fail to appreciate, or simply deny, is that ignoring the status of “emotion” without attending to a solution only serves to push the question further back. We are still left with an apparent paradox that requires explanation and resolution. How can there be “emotions” without “emotion”? And what sense is there to the distinction between “cognition” and “emotion” if there is no scientific domain that corresponds to “emotion”?


At this point, it is interesting to consider other candidates that might provide a new, theoretically healthier and more respectable, keyword for the philosophy of emotion and the affective sciences. One promising theoretical posit in this regard is the concept of “core affect” (Russell 2003). Commendably, some forward-looking philosophers have not missed the occasion to explore its viability as an alternate foundational natural kind candidate for the philosophy of emotion and the philosophical foundations of the affective sciences (Scarantino 2009). This said, the problems with “emotion” are serious and ubiquitous enough to merit investigation on their own, and the jury is still out on “core affect” anyway.


It is time to enlist neuroscientists and neurophilosophers to help us solve this vexing paradox. What, after all, does it mean to talk of an “emotional” brain, of neuropeptides as the messengers of “emotion,” and of a cognitive neuroscience of “emotion”—arguably a contradiction in terms? The good news is that some neuroscientists are increasingly moving beyond the study of individual emotions and short-term emotional states to more foundational questions associated with emotional processes of greater scope and longer duration (see, e.g., Hamann 2013 for a brief review). This line of investigation may provide one promising avenue to the nature of “emotion,” by attempting to examine more complex “emotional” systems of longer duration than mere one-time, single or repeated, emotional responses.



Faces expressing six of the passions, courtesy of Wikimedia Commons
But there is a problem. Because of its experimental technologies of measurement and observation, contemporary neuroscience is methodologically tied to, and biased towards, the study of short-term emotional states and processes. One consequence of this is a theoretical weakness when it comes to understanding how such short-term states and processes are dynamically organized over long periods of time – for example, the “passions” as they were understood in the formative years of the psychopathology of affectivity (Charland 2010). Passions in this sense are categorically different from emotions, since they organize and regulate emotions over time (Charland 2011).


Of course, reinstating passions in this technical sense to our current roster of affective terms and concepts does not in itself solve the problems with “emotion” we have been struggling with. But it does point to another way of conceptualizing the role of emotion in affectivity which may throw light on it. Some historians, indeed, have openly wondered whether we may have placed too many burdens on the term “emotion” when we relinquished the term “passion” to the proverbial dustbin of history (Dixon 2003).


Contemporary neuroscientists and neuropsychologists might do well to be reminded of Théodule Ribot, who, along with William James and Wilhelm Wundt, is considered one of the founding fathers of modern experimental psychology. He forcefully argued that a complete psychology had to distinguish and utilize “passions,” “emotions,” and “feelings” (Ribot 1896). There is presently no suitable analogue for passions in contemporary neuroscience and psychology, which may prevent us from appreciating the nature and role of “emotion” from a more complete theoretical perspective.


At any rate, at present, it is hard to see how neuroscience and neurophilosophy can continue to operate on the assumption that cognition and emotion constitute distinct realms of scientific inquiry, without a suitable theoretical concept of “emotion” to tie individual emotions together, either as a “kind” or prototype “family” of some sort. Of course, it is true that there has been a dramatic increase in our knowledge of how “cognition” and “emotion” interact and interface in the production of decision-making and behavior. Some frame their research in this area by relying in large part on the identification of distinct anatomical loci in the brain that are apparently related to emotion and emotional processing (Damasio 1994). Others argue that “cognition” and “emotion” do not map onto separate anatomical brain regions (Pessoa 2013). However, this still leaves us needing definitions that clearly explain what exactly those “cognitive” and “emotional” factors in the brain are, and what makes them so.


It is possible that the concept of valence might offer a solution to this internal scientific problem of demarcation: that is, the problem of how to demarcate “emotion” from “cognition” (Charland 2005a). Valence might at least explain the special normative character of emotional states in general (Griffiths 2004b, Prinz 2004). But the question still remains how exactly we get from this line of argument to the assumption that “emotion” represents a distinct domain of scientific inquiry that is different from “cognition” – neuroscientifically. And valence has its own problems, which are seldom considered (Charland 2005b). Admittedly, there are those who believe that, in this situation, “… there is something to be said for not insisting on defining terms that are the object of study [… and that ...] to precisely define emotion and cognition … would be to draw … an artificial distinction between them” (Pessoa 2013, 4).


In response to this, one may say that a theory of integration and interaction of the emotional and cognitive capacities of the brain that cannot yet precisely define these terms, might still yield interesting results, but that for the relevant science to ultimately progress, we will eventually need to know the exact scientific meaning of the theoretical terms and definitions it is based on, and what this translates to in reality. Is the distinction between “cognition” and “emotion” a scientific fabrication, an artifact of culture? Or is it somehow written into the nature of some biological systems and forms of life and not others?


This last question would seem to be a matter of some importance for neurophilosophy and neuroethics. After all, the distinction between “cognition” and “emotion” is implicated in many pressing folk-psychological debates in popular culture that could benefit from closer philosophical, neuroscientifically informed commentary.


One explosive example is the portrayal of emotions as non-rational and allied with femininity, and of cognition as the essence of reasoning, allied with masculinity (Jaggar 1999). Another is how to draw the line between creatures that are, or will be, capable of emotion and creatures, or other forms of life, that are not and never will be (Panksepp 1998). It is to be hoped that, as neuroscientific and neurophilosophical research on individual emotions continues to progress at a rapid pace, equal attention will be paid to the question of what it is about individual emotions that permits us to class them all together under “emotion,” and to whether the distinction between “cognition” and “emotion” is a cultural myth or scientific fabrication, or somehow written into the nature of the brain, its anatomy, and its neural networks.


Works Cited


Barrett, L.F. 2006. Are emotions natural kinds? Perspectives on Psychological Science, 1, 28–58.


Charland, L.C. 1995. Emotion as a natural kind: Towards a computational foundation for emotion theory. Philosophical Psychology 8(1), 59-84.


Charland, L.C. 2002. The Natural Kind Status of Emotion. British Journal for the Philosophy of Science, 53(4), 511-537.


Charland, L.C. 2005a. The Heat of Emotion: Valence and the Demarcation Problem. Journal of Consciousness Studies, 12(8-10), 82-102.


Charland, L.C. 2005b. Emotion Experience and the Indeterminacy of Valence. In Lisa Feldman Barrett, Paula Niedenthal, and Piotr Winkielman (eds.), Emotion and Consciousness. New York: Guilford Press, 231-254.



Charland, L.C. 2010. Reinstating the Passions: Arguments from the History of Psychopathology. In Peter Goldie (ed.), The Oxford Handbook of Philosophy of Emotion. Oxford: Oxford University Press, 237-259.


Charland, L.C. 2011. Moral Undertow and the Passions: Two Challenges for Emotion Regulation. Emotion Review, 3(1), 83-99.


Damasio, A. 1994. Descartes’ Error: Emotion, Reason, and the Human Brain. New York: Putnam Books.


Davidson, R.J., Begley, S. 2012. The Emotional Life of Your Brain. London: Hodder & Stoughton.


Dixon, T. 2012. “Emotion”: The History of a Keyword in Crisis. Emotion Review, 4, 338-344.


Dixon, T. 2003. From Passions to Emotions: The Creation of a Secular Category. Cambridge: Cambridge University Press.


Griffiths, P. E. 1997. What emotions really are: The problem of psychological categories. Chicago, IL: University of Chicago Press.


Griffiths, P.E. 2004a. Is emotion a natural kind? In R.C. Solomon (ed.), Thinking about Feeling: Contemporary Philosophers on Emotions (pp. 233–249). Oxford, UK: Oxford University Press.


Griffiths, P.E. 2004b. Emotions as natural and normative kinds. Philosophy of Science, 71(5), 901-911.


Hamann, S. 2013. Imaging the Emotional Brain. Emotion Researcher: The Official Newsletter of the International Society for Research on Emotion. Available online at http://emotionresearcher.com/the-emotional-brain/hamann/ (accessed December 15, 2015).


Izard, C. 2007. Basic Emotions, Natural Kinds, Emotion Schemas, and a New Paradigm. Perspectives on Psychological Science, 2(3), 260-280.


Jaggar, A. 1999. Love and Knowledge in Feminist Epistemology. Inquiry, 32(2), 151-176.


James, W. 1884. What is an emotion? Mind, 9(34), 188-205.


Lane, R.D., Nadel, L. 2002. Cognitive Neuroscience of Emotion. Oxford: Oxford University Press.


Le Doux, J. 1996. The Emotional Brain. New York: Touchstone Books.


MacLean, P.D. 1952. Some psychiatric implications of physiological studies on frontotemporal portion of limbic system (visceral brain). Electroencephalography and Clinical Neurophysiology, 4, 407–418.


Panksepp, J. 2000. Emotions as natural kinds within the mammalian brain. In M. Lewis and J. Haviland (eds.), Handbook of Emotions (2nd ed., pp. 87–107). New York: Guilford Press.


Panksepp, J. 1998. Affective Neuroscience. Oxford: Oxford University Press.


Papez, J.W. 1937. A proposed mechanism of emotion. Reprinted in Journal of Neuropsychiatry and Clinical Neurosciences, 1995, 7(1), 103-112.


Pessoa, L. 2013. The Cognitive Emotional Brain. Cambridge Mass.: MIT Press.


Prinz, J. 2004. Gut Reactions: A Perceptual Theory of Emotion. Oxford: Oxford University Press.


Ribot, Théodule. 1896. La psychologie des sentiments. Paris: Alcan.


Russell, J.A. 2003. Core affect and the psychological construction of emotion. Psychological Review, 110(1), 145-172.


Russell, J.A. 2012. On Defining Emotion. Emotion Review, 4, 337.


Scarantino, A. 2009. Core Affect and Natural Affective Kinds. Philosophy of Science, 76, 940–957.




Want to cite this post?



Charland, L.C. (2016). Emotions without Emotion: A Challenge for the Neurophilosophy and Neuroscience of Emotion. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/02/emotions-without-emotion-challenge-for.html