
Thursday, August 30, 2012

Response to “Society Does Not Make Gender” by Dr. Larry Young and Brian Alexander







"A queer symbol of new gender image"

by Finnish artist Susi Waegelein

At the beginning of August, Ruth Padawer published a piece in the New York Times Magazine about gender non-conforming children and parents. Last week, Dr. Larry Young of Emory University and science writer Brian Alexander (who are publishing a book together, The Chemistry Between Us) published a response to the article, in which they argue, essentially, that gender is biologically hardwired into the brains of fetuses by the organizational effects of hormones. They go on to implicitly endorse what has been called the “brain sex theory” of transgender identity/behavior. According to this theory, hormones organize the sex/gender of the brain much later than they organize the sex/gender of the genitals, allowing for a discordance to develop between the two (Bao 2011).



Admirably, Young and Alexander use the brain sex theory to argue for an acceptance of gender non-conforming children. They write, “so rather than seeing threat, we should embrace all shades of gender, whether snips and snails, sugar and spice, or somewhere in between.” However, there are (at least) four major problems with their argument: they essentialize gender; they uncritically embrace human brain organization theory; they uncritically embrace the double-edged sword of essentialism on behalf of transgender people; and they selectively (mis)use evidence about intersex and transgender people to support an ideological claim about the innateness of gender differences.





Essentializing sex/gender



In their post, Young and Alexander write, “Society -- toy makers, churches, parents, fashion magazines -- does not make gender.” They go on to argue, “Such [hormonally driven brain] organization, not advertising, is why boys, as a group, are more likely to shoot a doll full of BBs, while girls, as a group, are more likely to dress dolls and "nurture" them.” They also chastise most feminists for trying to “ignore real differences between typical boys and girls.”






Do these things really not make gender?

Photographs by Janet McKnight





I’m going to assume for the sake of my own sanity that what Young and Alexander meant was “society alone does not make gender.” If that’s what they meant, I agree wholeheartedly. Feminist science studies scholars view sex/gender as the product of a continuing interaction (or “intra-action,” as Karen Barad puts it) of biological and social factors, within a larger framework that sees the biological and the social as co-constitutive.







If Young and Alexander meant (improbably) that “society does not at all make gender,” then I refer them to the fields of history and anthropology, which have spent decades accumulating evidence that ideas about gender vary across time and culture, and that different ideas about gender influence how people think, feel, and behave as gendered beings. I would also point out that evidence of average differences between populations of boys and girls or men and women does not, in itself, confirm the innateness of gender, as even infants adapt themselves to the gendered contexts in which they find themselves. And if you believe that in today’s world, infants, children and adults are no longer socialized into gender roles, then I refer you once again to Cordelia Fine’s excellent book, Delusions of Gender, in which she uses substantial evidence, primarily from cognitive science, to argue that gender stereotypes are very much alive and well and that these stereotypes powerfully influence our feelings, thoughts, and behavior, often unconsciously.



Uncritical embrace of human brain organization theory








Image from Harvard University Press

Young and Alexander write, “Unfortunately, neither side [those who encourage gender diversity and those who oppose it] seems to have heard of something called the Organizational Hypothesis.” Unfortunately, neither Young nor Alexander seems to have heard of the serious critiques that have been made of the Organizational Hypothesis in regards to humans. The most comprehensive critique has been made by Rebecca M. Jordan-Young in her book Brain Storm: The Flaws in the Science of Sex Differences (2010). In the book, Jordan-Young reviews the more than three hundred scientific studies conducted between the late 1960s and 2008 on the organizational hypothesis in humans. She concludes that the evidence from these three hundred studies is too disjointed and even contradictory to provide real support for the claim that sex/gender or sexual orientation is “hardwired” into the human brain prenatally by the action of hormones. According to Jordan-Young, in the three domains with the most evidence for hard-wiring – feminine and masculine sexual behavior, sexual orientation, and sex-typed interests – different scientists have used such different definitions of, for example, “feminine sexuality” that different studies cannot actually be said to support one another.





I don’t have the space here to summarize the entire book. Slate has a slightly longer summary here. I would recommend the book to everyone and especially to scientists conducting research in the field of brain organization theory. Even if scientists end up disagreeing with Jordan-Young’s analysis, it at least needs to be reckoned with, which Young and Alexander clearly have not done.



Uncritical embrace of the double-edged sword of essentialism



In another post for the Neuroethics Blog, Cyd Cipolla talks about the “double-edged sword of essentialism” and sexual orientation. A number of gay-rights supporters have argued that scientific evidence for the innateness of homosexuality (a gay person is “born this way”) should lead to an increase in acceptance for homosexuality. However, as Cyd points out, depending on your already formed beliefs about homosexuality, you could also use scientific evidence for the innateness of homosexuality either to develop biological/medical “treatments” for homosexuality or to conclude that homosexual people can’t be “fixed” and thus should be eradicated. At the same time, calling for gay rights on the basis of the innateness of homosexuality excludes from the conversation those gay people who do not believe their sexuality is innate (remember the furor over Cynthia Nixon’s comments?).






Classic "born this way" argument for gay rights

Are they also referencing the hypothesized relationship between sexual orientation and handedness?

Picture by Photo Munki





As in the case of sexual orientation, some transgender activists have argued that transgender people are “born this way” and thus should be accepted by society. Some trans activists and allies have specifically used the “brain sex theory” to support their claim that transgender people are “born this way” (e.g. “A Conversation with Milton Diamond, Ph.D.” in The Phallus Palace). This is basically the claim that Young and Alexander make, arguing that gender non-conforming children should not be forced to conform to expectations “in opposition to their wiring” as “in the end, our brains will out.”









However, as in the case of sexual orientation, this plea for acceptance on the basis of innateness is a double-edged sword. The “brain sex theory” could also be used to develop biological/medical “treatments” for transgender identity/behavior. At the same time, arguing that gender is fixed before birth (even if arguing that gender may be incongruent with chromosomes and/or genitals) may exclude gender-fluid people from the conversation (to my knowledge, the brain organization theory doesn’t account well for gender fluidity; if I’m wrong, please let me know).



Selective (mis)uses of evidence about intersex and transgender people to support an ideological claim about the innateness of gender differences



Some feminist scholars and queer theorists have used intersex people or transgender people as evidence to support arguments about the social construction of gender (for a critique, see Invisible Lives by Viviane Namaste). Alternatively, a number of scientists have used intersex people as evidence to support arguments about the innateness of gender differences. Both uses are problematic if the ideological lens employed in any particular argument obscures the complexity of intersex or transgender lives.



In their post, Young and Alexander use studies of people with 5-alpha reductase deficiency (5-ARD) to provide evidence for the innateness of gender (people with 5-ARD are exposed to male-typical levels of androgens prenatally, but appear to be female until puberty, at which point their bodies become more male-typical looking). Jordan-Young extensively critiques the interpretation of studies of people with 5-ARD offered by brain-organization theorists (see pages 66-69). Of the use of any study of intersex people to support brain-organization theory, Jordan-Young writes, “the controversy recounted above highlights the difficulty in deciding whether psychosexual differences among intersex people are due to the direct effect of hormones on the brain, or to other factors like indirect effects on behavior via the development of atypical genitals, or the experience of illness and multiple surgeries” (78).








Young and Alexander also reference androgen-insensitivity syndrome, so I am including this picture again:

"Women with AIS and related DSD conditions who want AIS to be represented by real, proud people instead of stigmatizing pictures where the face has been removed"

Image by Ksaviano



Although Young and Alexander don’t quite argue that transgender people provide evidence for the innateness of gender, some studies have made precisely this claim (e.g. Garcia-Falgueras et al. 2011). As in the case of the use of intersex people as evidence for gender-innateness, I believe the use of transgender subjects as evidence of gender-innateness often obscures the complexity of transgender lives. Ironically enough, some of the gender non-conforming children described by Ruth Padawer don’t seem to be well accounted for by the “brain sex theory.” One boy, Alex, switches back and forth between feminine and masculine dress and behavior. A second boy, Jose, went through a long period during which he wanted to dress and behave in “girly” ways. By age 9, he was much less interested in wearing dresses, although he still liked to play with dolls. Understanding these complex lives requires understanding both the role of ever-changing ideas about what counts as appropriate dress and behavior for boys and girls and the role of biology in the production of sex/gender identity and behavior.



A simple plea



In sum, while I agree with the main conclusion of Young and Alexander’s post (that we should embrace all shades of gender), the way they make their argument is problematic as it essentializes gender, uncritically accepts human brain organization theory, bases a call for transgender acceptance on biological essentialism, and (mis)uses studies of intersex people to support an ideological claim about gender essentialism.






Image from Routledge

In other blog posts, I have encouraged neuroscientists to contribute thoughtfully to public discussions about gender and sexuality. Neuroscientists can perform an important service by explaining neuroscience research to the public and by reflecting on the ethical and/or policy implications of this research. To this encouragement, I would add another: neuroscientists who want to participate in public conversations about ethics and policy would do well to engage with scholars in other fields, especially the social sciences and humanities, who are working in similar areas. For example, I would encourage Young and Alexander to consider engaging with scholars in the emerging field of Transgender Studies (yes, there is a small but growing field – check out this reader for an introduction) if they plan to continue contributing to public conversations about trans issues. I remain convinced that conversations across disciplines will lead to more thoughtful and reflective work, including more thoughtful and reflective public scholarship, on the part of all involved.



I end with a simple plea: regardless of the relative contributions of genetics, prenatal hormones, parenting, environment of rearing, social expectations about gender, or personal agency in the production of any gender non-conforming child’s sex/gender identity or presentation, all gender non-conforming children deserve love and support and they deserve to be free from harassment and bullying, especially at school. Period.







Want to cite this post?

Gupta, K. (2012). Response to “Society Does Not Make Gender” by Dr. Larry Young and Brian Alexander. The Neuroethics Blog. Retrieved on [date], from http://www.theneuroethicsblog.com/2012/08/response-to-society-does-not-make.html

Experimental Ethics: An Even Greater Challenge to the Doctrine of Double Effect

In his article Neuroethics: A New Way of Doing Ethics, Neil Levy (2011) argues that “experimental results from the sciences of the mind suggest that appeal to [the Doctrine of Double Effect] might be question-begging.” As Levy frames the Doctrine, it is a moral principle meant to ground the intuitive moral difference between effects that are brought about intentionally versus those that are merely foreseen. More specifically, the Doctrine is supposed to ground the intuition that, when certain conditions are met, it is morally permissible to bring about a bad outcome that is merely foreseen, but, under these same conditions, it would not be morally permissible to bring about a bad outcome intentionally. Or, to put it another way, the Doctrine claims that it takes more to justify causing harm intentionally than it takes to justify causing harm as a merely foreseen side effect (Sinnott-Armstrong, Mallon, McCoy, & Hull, 2008).








The intellectual roots of the Doctrine of Double Effect begin with St. Thomas Aquinas and St. Augustine. The Doctrine has since played a central part in moral theorizing within both the Catholic Church and secular philosophy.





Intuitive illustrations of the Doctrine include (adapted from McIntyre, 2011):



1. In a military campaign, it is typically judged impermissible to target civilians, but it is often judged permissible to target a legitimate military target (e.g., a WMD factory) even if the attack on the military target is foreseen to lead to civilian casualties.



2. Someone who thinks abortion is wrong, even in circumstances that would save the mother’s life, might nevertheless consistently believe that it is permissible to perform a hysterectomy on a pregnant woman with cancer, even if it is foreseen that the hysterectomy will lead to the death of the fetus.



However, as Levy points out, there is a great deal of evidence suggesting that if one judges an effect to be morally bad, it is more likely that one will judge the act to have been brought about intentionally (and, thus, not merely foreseen). Much of this evidence comes from a series of studies conducted by Joshua Knobe and others (for an overview, see Knobe, 2010). In the earliest of these studies, Knobe (2003) randomly assigned participants to read one of two stories about a chairman of a company who instituted a new profit-generating program. The only difference between the two stories was that the foreseen side effect of the program would either harm the environment (a morally negative outcome) or help the environment (a morally positive outcome).



The harm version read as follows:




The vice president of a company went to the chairman of the board and said, “We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.”



The chairman of the board answered, “I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.”



They started the new program. Sure enough, the environment was harmed.



The “help” version was identical to the “harm” version except ‘harm’ was replaced with ‘help’.





After reading the stories, participants were asked if the chairman intentionally harmed [helped] the environment. What Knobe found was that the vast majority of participants were willing to say that the chairman intentionally harmed the environment, but very few were willing to say that the chairman intentionally helped the environment. These findings have led Knobe and others to claim that judgments of intentionality are sensitive to moral considerations. This pattern of asymmetrical attribution of intentionality (and other state-of-mind attributions) based (ostensibly) on manipulations of moral considerations has become known as the side-effect effect, or the Knobe effect.








When is it OK to harm the environment in the name of economic growth? Well, according to one reading of the Doctrine, under certain conditions, it may be permissible to harm the environment if the harm was merely foreseen (and, thus, non-intentional). But when is harming the environment construed as merely foreseen? According to the Knobe effect, probably not often.







If Levy’s construal of the Doctrine is correct, and if people’s judgments of intentionality are sensitive to moral considerations, then the Doctrine of Double Effect is circular and would be an unreliable guide for grounding judgments about the permissibility of actions.



For example, if one already thought bringing about civilian casualties was impermissible, then one would be more likely to judge that the foreseen bringing about of civilian deaths was intentional, even in cases where the bringing about of civilian deaths was a side effect of attacking a legitimate military target. Since the civilian deaths would be judged to be intentional, according to Levy’s construal of the Doctrine of Double Effect it would be impermissible to attack the legitimate military target. Impermissibility judgments feed into intentionality judgments which feed back into impermissibility judgments. Thus, the Doctrine is circular and question-begging!



However, the Doctrine of Double Effect is not always primarily construed as depending on the distinction between intentionally bringing about an outcome versus merely foreseeing an outcome will be brought about. Rather, under many traditional formulations of the Doctrine of Double Effect, the morally relevant distinction depends primarily on whether an act or outcome is a side effect or a non-side effect (i.e., a means or a goal).[1]



To build an even stronger case against the Doctrine of Double Effect, one would also need evidence that people more readily construe bad outcomes as non-side effects. To the best of my knowledge, there is currently no published study that shows that moral considerations can affect people’s classification of an outcome as being a side effect or a non-side effect. However, my lab is currently exploring just this possibility, and the early results are in: it looks like moral considerations do have an impact upon whether people classify an outcome as a side effect or a non-side effect.



For example, we find that people overwhelmingly (about 82%) classify HELPING THE ENVIRONMENT in Knobe’s helping-the-environment version of the chairman case (see above) as a SIDE EFFECT. However, only 42% of people classify HARMING THE ENVIRONMENT in Knobe’s harming-the-environment version of the chairman case as a side effect.
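To get a feel for the size of this asymmetry, here is a minimal sketch of how one might test whether the two classification rates differ. The sample sizes are hypothetical (they are not reported here); only the 82% and 42% rates come from the studies described above.

```python
# Hypothetical significance test for the side-effect classification asymmetry.
# Only the 82% / 42% rates come from the post; n = 100 per condition is assumed.
from scipy.stats import chi2_contingency

n_help, n_harm = 100, 100  # assumed number of respondents per condition
side_effect_help = round(0.82 * n_help)  # "helping" version: classified as side effect
side_effect_harm = round(0.42 * n_harm)  # "harming" version: classified as side effect

# 2x2 contingency table: rows = condition, columns = side effect / non-side effect.
table = [
    [side_effect_help, n_help - side_effect_help],
    [side_effect_harm, n_harm - side_effect_harm],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")
```

At rates this far apart, any reasonably sized sample yields a decisive difference between conditions; the interesting question is not statistical significance but why the moral valence of the outcome shifts the classification at all.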



To make sure this finding was not simply a consequence of the exact details of Knobe’s chairman cases, we also tested seven other cases modeled on the same basic structure.



For example, in one of the cases, a scientist (instead of a chairman) is deciding whether to implement a new methodology (instead of a new program) that would help her get the results she wanted (instead of generating more profits). In one version of the story, the new methodology would also violate ethical guidelines (instead of harm the environment), and in the other version of the story, the new methodology would also conform to ethical guidelines (instead of help the environment).



In another example, a ship captain is deciding whether to take a new route that would help her arrive at the destination more quickly. In one version of the story, the new route was dangerous and thus would put the crew in extreme danger, and in the other version of the story, the new route was very safe, ensuring the safety of the crew. Same basic structure as Knobe’s original chairman case, but the actor and outcomes were varied.



What we found was that in six of the eight cases (including Knobe’s chairman case) people were more willing to say that the bad outcome was not a side effect than they were willing to say that the good outcome was not a side effect. To put this differently, when the outcome was good (e.g., helping the environment, conforming to ethical guidelines, ensuring the crew’s safety), people overwhelmingly judged the outcome to be a side effect. But when the outcome was bad (e.g., harming the environment, violating ethical guidelines, putting the crew into extreme danger), people tended to be split on whether the outcome was a side effect or a non-side effect.



Thus, our initial evidence suggests that people’s classification of outcomes as side effects or non-side effects may, in part, depend on moral considerations. [2] If this is right, then even when construing the Doctrine as being primarily concerned with the distinction between side effects and non-side effects, the evidence suggests that the Doctrine of Double Effect is circular and would be an unreliable guide for grounding judgments about the permissibility of actions.







Want to cite this post?

Shepard, J. (2012). Experimental Ethics: An Even Greater Challenge to the Doctrine of Double Effect. The Neuroethics Blog. Retrieved on [date], from http://www.theneuroethicsblog.com/2012/08/experimental-ethics-even-greater.html



_____________________________________________________________________________

Notes



[1] While it is true that talk of intentions is present in almost all discussions of the Doctrine of Double Effect (even those that construe the primary distinction as side effects versus non-side effects), I take it that discussion of intentions plays a role in the Doctrine insofar as intentions are a guide to distinguishing which outcomes should count as side effects versus non-side effects.



[2] In my view, technically, people’s asymmetric classification of outcomes as side effects/non-side effects is not dependent on moral considerations, but rather on non-moral considerations that typically (though not always) correlate with the evaluative valence of an outcome. Getting into the details of my view is beyond the scope of this post and is unimportant for the particular point at hand. (It would turn out on my view that the Doctrine would be circular for a large portion of cases due to the nature of the correlation between the non-moral considerations and evaluative outcomes.)



Works cited



Knobe, J. (2003). Intentional action and side effects in ordinary language. Analysis, 63(3), 190–194.



Knobe, J. (2010). Person as scientist, person as moralist. Behavioral and Brain Sciences, 33(4), 315–329; discussion 329–365.



Levy, N. (2011). Neuroethics: A new way of doing ethics. AJOB Neuroscience, 2(2), 3–9.



McIntyre, A. (2011). Doctrine of double effect. Stanford Encyclopedia of Philosophy. Retrieved from http://plato.stanford.edu/entries/double-effect/



Sinnott-Armstrong, W., Mallon, R., McCoy, T., & Hull, J. G. (2008). Intention, temporal order, and moral judgments. Mind & Language, 23(1), 90–106.

Tuesday, August 28, 2012

Welcome Our Newest Neuroethics Scholar!




It is with great pleasure that the Emory Neuroethics Program announces its newest neuroethics scholar: Riley Zeller-Townson! The Neuroethics Program invited graduate students to create and to join collaborative, interdepartmental faculty teams at Emory and in the Atlanta community to pursue Neuroethics scholarship. Graduate students were free to propose projects of interest to them. Proposals included innovative ideas in the arena of teaching, empirical research, new media, and beyond. By the completion of their one-year appointments, each scholar is expected to co-author a paper and present his or her work. The selection process was quite competitive. The abstract of Riley’s proposed project and a short bio can be found below.






Riley Zeller-Townson (Neuroethics and Art)




Riley Zeller-Townson



For my Neuroethics Scholars Program Fellowship, I will be studying, as well as participating in, the interaction between Neuroethics and Art. This includes documenting and analyzing ethical issues highlighted by artwork that incorporates (or focuses on) neural tissue, as well as developing cost-effective tools to enable artists to integrate electrophysiology into their work. I approach this project from the perspective that art can act as a type of “experimental ethics.” That is, while written academic ethical discourse can suggest scenarios that highlight the gaps or failings in our moral frameworks, art can bring those scenarios to life and allow audience members to confront them at both an instinctive and an intellectual level.










Silent Barrage on display at the National Art Museum of China






Biological art is particularly well-suited to do this, by generating novel living and partially-living systems that fall in between the points already mapped out on the moral landscape. 'Silent Barrage,' a bio-art piece that I have assisted with, provides an example of how the integration of neural tissue into art can raise questions that are of particular importance to Neuroethics. In 'Silent Barrage' the claim is made that an in vitro culture of neurons is actively sensing and responding to its environment. Does this imply a degree of mental life that burdens the artists and scientists with responsibility for the piece's well-being? Or is 'well-being' meaningless when all notions of pain and suffering are impossible to justify? Are there any 'qualia' at all that the piece could be said to experience? Furthermore, could this bio-art project have been created in an ethical manner if it was not part of a scientific collaboration, and served some kind of scientific purpose?




To stoke the fires further (and remove any doubt toward my own biases), I'm going to be building a device that will allow more artists to create these kinds of artwork -- specifically, a cost-effective amplifier designed for extracellular electrophysiology of vertebrate neurons. The final product will condition neural signals such that they can be recorded using a standard laptop headphone jack (inviting the use of artistic tools already available for manipulation of sound). This will be very similar to what the 'Backyard Brains' system does for invertebrate neural signals. Both the ethics and the engineering sides of my project will be developed in collaboration with SymbioticA, an internationally renowned center of excellence in bio-art and bioethics within the School of Anatomy, Physiology and Human Biology at the University of Western Australia.
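Once the conditioned signal reaches the laptop, it is ordinary audio-rate data. As a purely illustrative sketch (not the actual signal chain planned for this device), here is how spike events might be pulled out of such a recording after digitization; the sampling rate, filter band, and threshold rule are all assumptions, and the trace below is simulated rather than recorded:

```python
# Illustrative sketch: bandpass-filter an audio-rate trace and flag spike events.
# All parameters (sampling rate, band edges, threshold rule) are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 44_100  # assumed audio sampling rate (Hz), as a laptop sound card provides

# Simulate one second of noisy "conditioned" signal with three spike-like events.
rng = np.random.default_rng(0)
trace = rng.normal(0, 1, fs)
for t in (5_000, 18_000, 30_000):
    trace[t:t + 40] += 8 * np.exp(-np.arange(40) / 10.0)  # decaying spike shape

# Bandpass to the band where extracellular spikes mostly live (~300 Hz - 6 kHz).
b, a = butter(3, [300 / (fs / 2), 6_000 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, trace)

# Threshold crossing: flag samples exceeding ~5x a robust estimate of the noise.
threshold = 5 * np.median(np.abs(filtered)) / 0.6745
spike_samples = np.flatnonzero(filtered > threshold)
print(f"{len(spike_samples)} supra-threshold samples detected")
```

The 300 Hz–6 kHz band and the median-based noise estimate are common defaults in extracellular spike detection; a real sound-card recording would also need DC-blocking and amplitude calibration that this sketch ignores.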









The engineer and the two artists who worked on Silent Barrage (Peter Gee, Philip Gamblen, and Guy Ben-Ary) and Riley standing by the installation 







I'm working on my PhD in Biomedical Engineering at the Georgia Institute of Technology, in Dr. Steve Potter's lab.  My (neuroscience) research interests include the role of the axon in neural computation, applications of basic neuroscience to artificial intelligence, and open-source electrophysiology tools.   








Neurons growing on a multi-electrode array in Steve Potter's lab



As an engineer whose opinions on bio-art and ethics are heavily influenced by the artists he's worked with, I would greatly appreciate additional perspectives on these issues from all of you artists, scientists, and ethicists out there!



Thursday, August 23, 2012

Finding and Naming (Symptom) Constellations



By Guest Contributor Racheal Borgman, MA  










DSM-IV-TR via Wikipedia.org

The rhetorical component of illness is an important extension to the issues raised in last month’s post on the DSM. As Anjana Kallarackal pointed out, there are concerns aplenty when it comes to the DSM and how the committee goes about its categorizing work. But I was especially interested by the very first response to the post, by David Nicholson:




"I wonder if it would be useful to try to put a number to the "negative consequences" of a given addiction… If we could decide how damaging some addiction was, maybe that would tell us how much to medicalize it as well. Insurance companies could decide that they'd cover cognitive behavioral therapy for internet addiction, but nothing beyond that."




It’s an incredibly tempting solution.

But then there’s the pesky rhetorical component of illness that must be contended with. For instance, how do we:


  • know that an illness is an illness?

  • know that a particular group of symptoms corresponds to a particular, named illness? 

  • bridge the gap between the physical experience of an individual and a category of illness that a DSM committee has set down in its tome, and 

  • ensure that the two align? In other words, how do we get to the name of a thing that can then be measured and analyzed? 





By W.K.-L. Dickson,

'Fred Ott's Sneeze'

via Wikimedia Commons


Let's take an example. You have a stuffy nose and you feel a little achy all over. Without thinking much about it, you reach up and feel under your chin. Ah, yes. Swollen lymph nodes. A cold. You go about saying things like, “I’ve got a cold coming on—sore throat, stuffy nose, all that,” and people respond with recognition: “Oh, that’s awful! I was out last week with the same thing.”

What enables this bit of self-diagnosis? How do people understand what your ailment is and feels like from your quick, glossy explanation?



Here’s a twist to the example. You have a stuffy nose and you feel achy all over. Swollen lymph nodes. A rash on your leg. It must be a common cold. You say, “I’ve got a cold coming on—sore throat, rash on the leg, the whole nine yards.” The response is going to be different: “A rash on your leg? What?” One of those symptoms is not like the other.



And that’s what I mean by symptom constellation: a group of symptoms that are understood to go together, to be participants in the same illness, and which, when traveling together, allow us to call an illness by its name. It goes the other way, too: if I decide I have a common cold, I don’t include the leg rash in the description of my illness. Having named the illness, I have also identified its constituent symptoms.



This phenomenon led Arthur Kleinman, a psychiatrist/medical anthropologist/interdisciplinary boundary-crosser, to claim that “illness meanings are shared and negotiated. They are an integral dimension of lives lived together.” [1] Our attention to physical symptoms is driven by culturally-contextualized narrative interpretations of what an illness is and what effects it has.






Dr. Arthur Kleinman, Esther and Sidney

Rabb Professor, Department of Anthropology,

Harvard University

As a practical example of illness interpretation across cultures, Kleinman uses an endangered illness, neurasthenia. In America, neurasthenia has been eradicated. In China, it was flourishing before the DSM killed it in 1995. [2]



Kleinman believes that America eradicated neurasthenia by making a simple tweak to its symptom constellation. Neurasthenia is almost indistinguishable from clinical depression except for one small symptom: mechanical weakness of the physical nerves. In America, we understand clinical depression and its symptom constellation, even if we sometimes need reminders from pharmaceutical ads on the television. Mechanical weakness of the nerves is not a part of our known constellation, and is thus a non sequitur symptom. If you feel weak and dizzy, you need to ask your physician about some other illness. And quick as a wink, with just that small adjustment to our cultural understanding of depression, neurasthenia was wiped out.



Meanwhile, in China, Kleinman suggests, depression carried a political overtone [3]: being depressed was a statement of discontent with one’s surroundings, both on a personal and a political level. Therefore, an American-style adjustment to neurasthenia would turn Chinese patients into political protestors. Neurasthenia became an acceptable alternative, not only as a diagnosis, but as an illness to report to oneself and others. It’s not a matter of lying about physical experience, either—the patients Kleinman interviews describe the condition of their weak nerves in great detail, and are clearly describing their experiences as best they can, much as we describe our lymph nodes as though they bothered us before our fingers felt them swollen.



As another medical/rhetoric scholar, Arthur Frank, puts it: “Suffering comes to understand itself by hearing its own testimony.” [4] Only in a particular cultural context may a patient experience nerve weakness as a part of this particular symptom constellation we know as “neurasthenia.” Otherwise, nerve weakness becomes like a leg rash during a cold: unrelated.



Neurasthenia’s relativity points to the essential rhetorical component of illness: patients’ “stories are the respective products of the worlds each moves through, though these local worlds are also formed anew with each act of interpretation of every story the community recognizes as theirs.” [4] Without communal recognition, neurasthenia ceases to exist. With communal recognition, it flourishes. The same thing goes for mental illness.



Interestingly, the communities in which interpretation and communal recognition happen are frequently different from the communities in which categorizing work occurs. Physicians, psychiatrists, DSM committees, lawyers, and judges may never encounter the cultural environment that has enabled a particular type of illness, and thus, a particular kind of suffering. (Kleinman has a great story about this, involving pickle juice, here. [5])






Photo credit: Nora Volkow

PET brain scans show chemical differences in the

brain between addicts and non-addicts, via Wikimedia Commons

Which brings us back to Mr. Nicholson’s idea about quantification of an addiction’s detriment. If the identity and even symptoms of an illness change within different communities of interpretation, then so do the consequences of the illness. Mental illnesses provide the clearest examples of this (hysteria in women a century ago, alcoholism in more recent years), but it happens with physical disorders too. So if a minority community, through the acts of saying and interpreting a physical experience, brings an illness into being, how does this influence another community’s act of categorization? When pursuing a categorization project such as the DSM, how do we avoid the tyranny of the majority? How do we avoid institutionalizing prejudices against any community’s narrative interpretations of physical experience? The rhetorical component of illness also problematizes tools that rely on a “normalized” comparative brain or body, including, for instance, brain images used for lie detecting. If the definitions of “wellness” and “illness” are culturally contextualized—often in deeply fundamental, incommensurable ways—then against what “normal” brain or body do we compare a potentially “ill” brain or body? [6]



The very definition of illness or well-being is constantly in flux within disparate cultural communities. The question of categorization—in the DSM or elsewhere—becomes less one of quantification and more one of pragmatic usefulness: how can we do the most good and the least bad to the most individuals, while retaining the usefulness of this categorizing tool? This issue finds special traction in neuroethics in cases of legal deliberations dependent on “official” diagnoses of a person’s mental health. As a special communicative situation, with its own rules, legal proceedings become a second-removed categorization: interpretation is first divorced from whatever particular communities the participants emerge from, and next, the illness is divorced from its own particulars, both the individual’s physiologic involvement and the illness’s creation-through-interpretation that happens on a social level. Like a pinned beetle, the mental fitness of an individual is taken out of its natural environment and displayed against the flattened categories of precedent, both legal and medical, and against guides such as the DSM (read more here and here). When removed from the community of its creation, how can we identify a mental illness? Who is capable of judging the existence of a mental illness?







Want to cite this post?

Borgman, R. (2012). Finding and Naming (Symptom) Constellations. The Neuroethics Blog. Retrieved on [date], from http://www.theneuroethicsblog.com/2012/08/finding-and-naming-symptom.html#more



------------------------------------------------------------

1. Kleinman, Arthur. The Illness Narratives (New York: Basic Books, 1988) 186.

2. Apparently, Kleinman wrote neurasthenia out of existence—with a little help from the DSM:

http://www.ncbi.nlm.nih.gov/pubmed/18040092

3. Kleinman, 109.

4. Frank, Arthur. The Wounded Storyteller (University of Chicago Press, 1995) 171.

5. Kleinman, 130.



6. And for an excellent examination of just such a cultural divide and its influence on medical diagnosis and treatment, check out The Spirit Catches You and You Fall Down by Anne Fadiman.










Thursday, August 16, 2012

The Military and Dual Use Neuroscience

If there’s one thing I learned from the most recent installment of Christopher Nolan’s Batman trilogy, it’s this:  if you’re doing interesting research, it probably has a military application.






In the interest of spoiler avoidance, let's just call this Wayne Enterprises invention "dual-use." (http://ixpower.com/2012/07/dark-knight-rises-batman-movie-does-infant-smr-industry-no-favors/)





Dual Use Technology

The formal name for it is “dual-use technology,” and it’s difficult to find an area of research in which it’s not a relevant concern. Innovations in renewable energy may avert catastrophic global warming, but they also promise to significantly lower military fuel costs and improve the mobility of forces newly unconstrained by the logistics of fossil fuel transportation. Research into nuclear fusion foreshadows essentially inexhaustible carbon-free energy at the same time as it provides a technological foundation for fusion-triggered nuclear weapons that some believe may lower the threshold for nuclear weapons use. Even ostensibly benign anti-obesity campaigns have military implications, as suggested by a recent CBS News article ominously titled “Too Fat To Serve: Military Wages War on Obesity.”



Physics and engineering tend to be the disciplines most readily associated with high-profile military innovations, but it’s biology – and neuroscience in particular – that has increasingly captured the interest of the military research establishment. In 2006’s Mind Wars: Brain Research and National Defense, University of Pennsylvania bioethicist Jonathan Moreno estimates that “most of [DARPA’s][1] desired research proposals directly or indirectly involve the brain” and, in a journal article published this year, finds that the fiscal year 2011 budget contains over $350 million in military neuroscience research. A 2009 Army report entitled “Opportunities in Neuroscience for Future Army Applications” similarly emphasizes the importance of neuroscientific research, declaring that “emerging neuroscience opportunities have great potential to improve soldier performance and enable the development of technologies to increase the effectiveness of soldiers on the battlefield."






Jonathan Moreno’s Mind Wars, to my knowledge the most comprehensive work on the military applications of neuroscience.  (http://scienceprogress.org/wp-content/uploads/2012/05/MindWars_cover.jpg)



The military applications of neuroscience are vast, but can be divided[2] into three categories: performance enhancement and degradation, surveillance and threat assessment, and neural interface.



Performance Enhancement and Degradation

Performance and cognitive enhancement technologies are not new to the military, though they’ve certainly taken on new forms in recent years. The use of stimulants – methamphetamine in Germany and Japan, and amphetamine among the British and Americans – was widespread throughout militaries during World War II, and the 2009 Army report includes a section on good ol’ caffeine as a means “to improve cognitive functioning during sustained military operations.” Recent military research has investigated new drugs, most notably ampakines[3], that attempt to combat the negative effects of sleep deprivation without incurring the abuse potential and side effects often attributed to traditional stimulants. A 2012 report on neuroscience and conflict published by the UK Royal Society cites a number of additional substances – notably, the Parkinson’s drug and dopamine precursor L-DOPA for learning enhancement, the social-behavior-modulating hormone oxytocin for unit cohesion, and anxiety-dulling beta-blockers for decision-making under stressful conditions – with apparent potential for military use. Which substances will find an ultimate military application remains, at this point, unclear. For all the well-publicized success of underground chemists in producing euphoric knockoffs of popular recreational drugs[4], however, it seems inevitable that the military’s best pharmaceutical minds will eventually develop a set of chemicals appropriate to the wide variety of tasks faced by military personnel.






This woman’s oxytocin foot tattoo inspires a certain degree of love in me, though I’m told by more studied colleagues that “looking at molecular diagrams” doesn’t constitute an effective route of drug administration (http://io9.com/5925206/10-reasons-why-oxytocin-is-the-most-amazing-molecule-in-the-world)



Military interest in performance enhancement extends well beyond chemicals. “Opportunities in Neuroscience for Future Army Applications” recommends medium-term field deployment of transcranial magnetic stimulation (TMS), a noninvasive technique that uses magnetic fields to induce electrical currents in the brain and that has been associated with memory enhancement. The 2009 DARPA Strategic Plan references a DARPA program, intended for intelligence analysts[5], that aims to develop a neuroimaging system capable of detecting visual information below the level of conscious apprehension. The same strategic plan cites applications for neuroimaging in prescreening potential recruits and in expertise development for high-skill activities such as marksmanship and language acquisition.



In addition to the performance enhancement of its own personnel, the military stands to benefit from the performance degradation of the enemy. Techniques for achieving this goal, which might be categorized broadly as “chemical incapacitation,” have applications in crowd control, counter-terrorism, interrogation, and direct warfighting[6]. Incapacitating substances include opiates, notably utilized by Russia during the Moscow Theater hostage crisis for purposes of mass sedation, as well as other agents with established or theoretical sedating properties such as benzodiazepines, alpha-2 adrenoreceptor agonists, and orexin antagonists[7]. The U.S. military has also conducted research into the somewhat more science-fiction suggestive (and, depending on your political preferences, substantially more sinister sounding[8]) “directed energy weapons,” concentrated beams of small particles or electromagnetic radiation with the ability to cause cognitive impairment as well as physical incapacitation.



Surveillance and Threat Assessment

An EEG device marketed as the Veritas TruthWave helmet has received a fair bit of media coverage over the past several months for its supposed “mind-reading” properties. Attached to the head of a suspicious individual, TruthWave uses EEG to determine if a subject recognizes a given suspicious visual stimulus[9]. If the suspect responds with a pattern of brain activity known as a “P300 signal,” recognition – and therefore, it is thought, guilt – can be inferred. The CEO of Veritas Scientific, Eric Elbot, has been about as ominous as any person could realistically be about a product they hope to sell, telling the Institute of Electrical and Electronics Engineers that “The last realm of privacy is your mind… This will invade that.” Veritas’ research is funded by the U.S. military, and Elbot claims that a similar Veritas product has already been deployed in a border control context.
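Public reporting doesn’t detail Veritas’ actual algorithm, but the textbook logic of P300 detection is straightforward: average many stimulus-locked EEG epochs and look for a positive deflection roughly 300–500 ms after stimulus onset, which tends to be larger for recognized than for unfamiliar stimuli. Here is a toy sketch of that logic on simulated data; the sampling rate, analysis window, and amplitudes are invented for illustration and have nothing to do with the TruthWave itself:

```python
# Toy illustration of P300-style recognition detection by epoch averaging.
# Simulated data; all parameters are invented and not tied to any real device.
import numpy as np

fs = 250  # assumed EEG sampling rate (Hz)
t = np.arange(0, 0.8, 1 / fs)  # 800 ms window after each stimulus

def p300_score(epochs: np.ndarray) -> float:
    """Mean amplitude of the average evoked response in the 250-500 ms window."""
    erp = epochs.mean(axis=0)  # average over stimulus-locked trials
    window = (t >= 0.25) & (t <= 0.50)  # canonical P300 latency range
    return float(erp[window].mean())

# Simulate 30 stimulus-locked trials: noise, plus a P300-like bump if recognized.
rng = np.random.default_rng(1)
bump = 4 * np.exp(-((t - 0.35) ** 2) / (2 * 0.05**2))  # positive peak near 350 ms
recognized = rng.normal(0, 5, (30, t.size)) + bump
unfamiliar = rng.normal(0, 5, (30, t.size))

print(f"recognized stimulus score: {p300_score(recognized):.2f}")  # ~2 (bump present)
print(f"unfamiliar stimulus score: {p300_score(unfamiliar):.2f}")  # ~0 (noise only)
```

Averaging is the whole trick: single-trial EEG is far too noisy to read recognition directly, which is why a field device would need repeated stimulus presentations before inferring anything.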







Veritas Scientific, the company behind theTruthWave helmet (http://www.veritasscientific.com/)




Along similar lines, a company called No Lie MRI has marketed fMRI truth detection technology to the Department of Defense. If you’re a loyal reader of the Neuroethics Blog, this likely won’t strike you as too surprising: the accuracy and usability of fMRI for lie detection have been discussed extensively here in the past. While fMRI has demonstrated impressive lie-detection capabilities in some studies, Neuroethics blogger David Nicholson points out that the current generation of fMRI machines also “take up an entire room and… sound like a dishwasher powered by the souls of unborn babies,” a fact which likely limits their usability in a field context. TruthWave, which neither takes up an entire room nor (to my knowledge) sounds anything like unborn children, may go some way towards ameliorating these limitations.



Neural Interface

Of all the neuroscience technologies currently under investigation by the military, it is neural interface that may produce the most far-ranging implications. Civilian researchers have made remarkable strides in direct neurological control of limbs and other objects, including the successful neural control of prosthetic robotic arms in both primates and humans. Neural interface technology has clear short-term applications in producing high-quality prosthetics for injured servicemembers, to the point where the website for DARPA’s Revolutionizing Prosthetics program suggests that “servicemembers with arm loss may one day have the option of choosing to return to duty.”






The guy on the left looks amused out of his mind. (http://www.defense.gov/news/newsarticle.aspx?id=62114)





In the medium-to-long term, it is conceivable that neural interface systems may revolutionize warfare in its entirety. The UK Royal Society report suggests a number of applications that appear at first glance to border on science fiction: imagine, for instance, remote-operated and brain-controlled vehicles for operations in enemy territory, neurally interfaced weapons systems that use unconscious brain data to enhance reaction times, or magnetic implants in the fingers that, when connected to the brain, allow the user to “feel” heat at a distance. In a fascinating Penn State interview, Jonathan Moreno is asked which military neuroscience technologies he feels are most “eye-opening or scary.” Dr. Moreno responds that neural interface technologies enabling what is “essentially a robot army… with the creativity and spontaneity of a human operator” may constitute the ultimate future of warfare (though perhaps, he cautions, not in his lifetime). Such warfare would be conducted not with “boots on the ground,” but by military personnel sequestered safely in a bunker dozens or hundreds of miles away.



Concluding Remarks

A generation ago, a young, patriotic science student might have aspired to work at the Lawrence Livermore or Los Alamos national laboratories, designing multi-megaton nuclear weapons to contain the Communist threat. Today, that same student – perusing a DARPA budget now easily accessible to her online – might reasonably conclude that it is neuroscience, not physics, in which the bulk of future military research opportunities lie. The implications of this paradigm shift for present-day neuroscientists are substantial, a fact which has increasingly been recognized by publications in the field (see here, here, and here). The potentially coercive use of performance enhancing substances among military service members, the consequences of EEG and fMRI for privacy, and the legal and ethical implications of next-generation chemical incapacitants are just some of the topics that have been discussed extensively in this literature.



In my next post, I’ll look more comprehensively at the legal, ethical, and geopolitical implications of novel military neuroscience technologies, and discuss the role of neuroscientists in influencing possible future applications of their research.








Want to cite this post?


Gordon, R. (2012). The Military and Dual Use Neuroscience. The Neuroethics Blog. Retrieved on [date], from http://www.theneuroethicsblog.com/2012/08/the-military-and-dual-use-neuroscience.html





--------------------------------

[1] Defense Advanced Research Projects Agency, the federal agency responsible for research into military-relevant technology.



[2] Imperfectly, and according to a more or less arbitrary system of personal categorization.



[3] Drugs whose action is mediated, as might be expected, through the AMPA subtype of glutamate receptors.



[4] e.g. “bath salts” and synthetic cannabis, among others.



[5] Neurotechnology for Intelligence Analysts (NIA).



[6] Although many of these applications are either clearly or ambiguously restricted by international law.



[7] See the UK Royal Society report (http://royalsociety.org/uploadedFiles/Royal_Society_Content/policy/projects/brain-waves/2012-02-06-BW3.pdf) for more information on these and similar incapacitating substances.



[8] Sinister sounding enough, in fact, that a Google search for “‘directed energy weapons’ conspiracy” yields 58,000 results, the first page of which contains diverse allegations involving mind control, the anti-Christ, 9/11 truth, and a Russian scheme to melt the polar ice caps.



[9] It’s not clear to me what constitutes a “suspicious visual stimulus,” but one article (http://spectrum.ieee.org/biomedical/diagnostics/the-mindreading-machine/) suggests “bomb specs or Osama bin Laden’s face” as possible examples.



Monday, August 13, 2012

Comment on: Placebo for Psychogenic Illnesses: Why “It’s all in my head” does and doesn’t matter

*This post was originally posted on the Neuroethics Women Leaders (NEW Leaders) site.



Recently, I composed a piece for Nature Science Soapbox entitled Placebo for Psychogenic Illnesses: Why "It's all in my head" does and doesn't matter, as well as a piece on placebo in the Huffington Post. Both pieces work to reframe and deepen our understanding of medicine and illness by utilizing neuroscience. Importantly, this process must include humility about the limitations of neuroscience and our current understanding of the brain while also maintaining an openness to what we don't know, avoiding foreclosing opportunities for richer understanding of the brain's capabilities.









I believe neuroethics discourse needs to occur with all relevant stakeholders, and as I discussed with colleagues recently, I feel it would be a failure if I couldn't engage in neuroethics discourse outside of my discipline (I admit that I'm well-rooted in neuroscience). I've had colleagues voice concerns about the public misinterpreting what we say and about the popular media sensationalizing our findings. Generally, these colleagues, primarily academics, use such concerns as (not entirely unfounded) excuses to avoid speaking with the public or general audiences. This is a mistake. Neuroethics is a discipline that directly speaks to the implications of neuroscience for society and its ethical norms; we must involve society and general audiences. This necessarily means including not only individuals outside of our discipline, but individuals in our broader communities.



Recently, I received an emotionally charged response to my Nature Science Soapbox piece, where, unfortunately, the reader felt I had dismissed her and her son's medical reality. I understand that this misinterpretation will happen often as I'm trying to work with deeply engrained social norms about mental illness, stigma, and evolving descriptions of what the brain does (and even the mind does).

You can read her comment here and my response can be found below.



***********************************************************************

 

Thanks for your comment, Kathleen. To be clear, this article is not about dismissing PANDAS or Dr. Trifiletti. This article is about 2 things: 1) re-framing the way we conceptualize psychogenic illnesses and placebo by 2) utilizing evidence-based arguments—in this case utilizing neuroscience. The overall goal is to advocate for ways to minimize suffering and to explore innovative ways to help people, in this case psychogenic patients, who are truly suffering and lack standard measures for medical care.



How do we use the phrase, “It’s all in your head”? I invite you to start by asking yourself.



 Generally, it’s used to describe psychological phenomena, thinking, feelings, or “mind”. If it’s “all in your head”, it’s almost something to be dismissed. If you have a problem that’s “all in your head,” it’s not truly “real”: you should “get over it”. And if you don’t or can’t, it reflects poorly on the very fabric of your character: you’re weak-willed, maybe even failing morally.

 



Because of advances in neuroscience, we now understand that many of these psychological phenomena are intimately tied to changes in brain chemistry and electrical activity in the brain. In this case, “it’s all in your head,” as it’s used above, stops making sense; it’s a false distinction between what is a “real” disease and what isn’t. For example, PANDAS and psychogenic disorders are equally real, as real as cancer. These all physically affect the body -- and I consider the brain part of the body.



Historically, as a society, we have relied on technology to define disease. And as technologies become more sophisticated, we are learning more and more about how the brain works. For example, conditions that cause enormous suffering, such as epilepsy and dystonia, were formerly considered “not real”, believe it or not. As technology has grown more sophisticated, we can now attribute epilepsy to abnormal clusters of electrical activity and dystonia to aberrant circuitry in the brain. Similarly, placebo, which was thought to be nonspecific, “fake” even, has now been demonstrated to have significant physiological effects in the brain that correlate with reported benefit (from both patients and doctors).

 



What I want to share with non-neuroscientists is the sense of wonder and humility scientists have with regard to biological phenomena and living. We, as a discipline, must challenge our assumptions and revise our thinking, all of the time. Scientists do this through evidence and empirical work. And although we currently don’t have the technology to measure something, that doesn’t mean we won’t in the future.

 



The girls in LeRoy are an unfortunate case study in just how much unnecessary harm can result when society uses false distinctions between “what’s in your head” and what’s not. The implication is that “what’s in your head” is NOT real and therefore stigmatized. These girls, whose suffering and conditions are very real, were only made worse on all accounts by this type of logic.



 In sum, this article is about respectfully re-framing the way we, as a society, define disease and medicine by utilizing scientific evidence, in this case based in the brain sciences. And by re-framing the way we see psychological phenomena through the lens of science, we may eliminate the unfair stigma and suffering associated with false distinctions between what’s in the mind versus what’s in the body.







Want to cite this post?

Rommelfanger, K. (2012). Comment on: Placebo for Psychogenic Illnesses: Why “It’s all in my head” does and doesn’t matter. The Neuroethics Blog. Retrieved on [date], from http://www.theneuroethicsblog.com/2012/08/comment-on-placebo-for-psychogenic.html

Thursday, August 9, 2012

Brain Connectomes: Your ticket to the future













Science often provides us with thrilling and
puzzling scenarios in which our imaginations are forced to conceive the
possibilities the future may bring. Life after death is an old concept that is getting
a facelift. The Connectome, a very real development in neuroscience, is being
used to conceptualize another very interesting piece of science-[fiction]:
mind uploading.







Image from http://www.mindcontrol.se/?attachment_id=3021





Fast-forward a few centuries. Bear with me, as this requires imagination. You have just died and are beginning the journey to the next stage of your life. For this trip, you won’t have to pack any bags. If all goes smoothly, you will be back home in time for the evening sitcoms. Your casket was lowered into the Earth this morning, and because your driver’s license indicated ‘Continue Life,’ you are scheduled for resurrection this afternoon. Suddenly, a message appears.






There are three ticket options for you today. Our Elite ticket (1 million USD), our most comfortable ride into the future, comes with a wide assortment of amenities. While fully reinstating your memory, personality and acquired skills, we will present you with the opportunity to make any adjustments you wish. A memory of violence, depression or hardship can simply be erased, liberating you from a particularly difficult moment. Using our advanced technology, we can also augment or sharpen certain memories with algorithms that accurately calculate how an event may have occurred. You will enjoy our most luxurious Back2LiFE Robotics model, the Elite Humanoid, which comes fully equipped with our AWAKE® (Automated Work And Knowledge Environment) interactive software, allowing you to sense the world and all its warmth, just as your previous body did.





The next
option, the Premium ticket (600,000 USD), provides you with all of your
memories and your personality. The Premium Back2LiFE model provides a full
range of motion while also allowing you to interact with the world using the
AWAKE® versions of the 5 human senses: touch, taste, sight, smell and hearing.
Upgrades for this plan are available at any time.





The Economy
ticket (125,000 USD) allows you to return to life free of the weight of any
memories or personality and you will enjoy our basic Back2LiFE model. Upgrades
are not available for this plan…





The choice to Continue Life may not be so far
away. While I am aware that this may sound a bit out there, I am not the only
one who thinks like this. Tom Scott has created a video 
describing the process of coming back to life and I highly
recommend watching it. It is a chilling, yet extremely believable take on what
re-entry may look like and the choices the human race may someday face. There are many highly qualified individuals who believe life as we know it will end very soon. Ray Kurzweil
believes that brain uploading will be possible by 2040. Transhumanism,
continuation of life
and something called the Singularity are all hot topics.





Before we enter this discussion, I should preface it with this: I do not intend to answer questions, proclaim that I know the answers, or make any definitive suggestions on a future course of action. I am here to ask questions, prompt you to think, and hope that collectively we can figure out what to do with this issue.







Image from http://fanart.tv/movie/2277/bicentennial-man/


If you have ever seen Bicentennial Man, it may have changed the way you think about life, death, or what it means to be ‘human.’ In short, Robin Williams, with all of his magnificent charm, plays a robot of the 21st century with no greater desire than to become human. The film, filled with Williams’ knee-slapping humor and tear-jerking moments, outlines this ‘unique’ robot’s transition from machine to man. A key and defining factor is that in order for Williams’ character to be recognized as human, he must be able to die. According to the film, the ability to die is proof that you have lived a human life.





Imagine for a moment the reverse of the process. Take an old and dying human body and turn it into a shining, advanced new machine. Upon death, all of a person’s thoughts, memories and emotions would be recorded, transferred, and translated into a mechanical body, and the person would be brought back into consciousness. I am not saying that this is possible, plausible or that it will ever be, but there are people working very hard to make it so. Kenneth Hayworth, Ph.D., is one of those people. Dr. Hayworth graduated from the University of Southern California before moving on to work at Harvard. A project that relates directly to his work is the Human Connectome Project, a $40-million collaborative study funded by the National Institutes of Health. The goal of the project is to create a map of the entire brain, similar to what the Human Genome Project set out to do with DNA. The Connectome project feeds into Dr. Hayworth’s theories, as he believes that an understanding of the brain’s infrastructure will help in its reconstruction. However, he understands that there is more. He says, “You can’t look at a road map of Manhattan and know what it’s like down there. You have to dig deeper.”





Scientists at Washington University in St. Louis, the University of Minnesota, UCLA and the Massachusetts General Hospital are doing the digging for the Human Connectome. They’re not digging for a source of immortality. Instead, they hope that a thorough understanding of the brain will unlock secrets to treating neuropathologies. At the University of Georgia and Emory University, connectomics is already in use. Tianming Liu (UGA) and his team have mapped the brain, using landmarks as they navigate through the dense network of cells. They call the landmarks DICCOL: dense individualized and common connectivity-based cortical landmarks. Dajiang Zhu, a student working on the project, says, “DICCOL is very similar to a GPS system. [It’s] a map of the human brain.”
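
To make the GPS analogy a bit more concrete, here is a minimal toy sketch of a connectome represented as a weighted graph, with named landmarks serving as waypoints. To be clear, this is my own illustration with invented names and numbers, not the actual DICCOL method; real connectomics works from imaging data at an entirely different scale.

    # Toy sketch: a connectome as a weighted graph whose nodes are
    # named landmarks (hypothetical IDs, purely for illustration).
    from collections import defaultdict
    import heapq

    class ToyConnectome:
        def __init__(self):
            # adjacency map: landmark -> {neighbor: connection strength}
            self.edges = defaultdict(dict)

        def connect(self, a, b, strength):
            """Record a symmetric connection between two landmarks."""
            self.edges[a][b] = strength
            self.edges[b][a] = strength

        def route(self, start, goal):
            """Dijkstra shortest path, treating 1/strength as distance,
            so the 'GPS' prefers strongly connected routes."""
            dist, prev = {start: 0.0}, {}
            heap = [(0.0, start)]
            while heap:
                d, node = heapq.heappop(heap)
                if node == goal:
                    break
                if d > dist.get(node, float("inf")):
                    continue  # stale heap entry
                for nbr, strength in self.edges[node].items():
                    nd = d + 1.0 / strength
                    if nd < dist.get(nbr, float("inf")):
                        dist[nbr], prev[nbr] = nd, node
                        heapq.heappush(heap, (nd, nbr))
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1]

    brain = ToyConnectome()
    brain.connect("landmark_001", "landmark_002", strength=0.9)
    brain.connect("landmark_002", "landmark_003", strength=0.4)
    brain.connect("landmark_001", "landmark_003", strength=0.1)
    print(brain.route("landmark_001", "landmark_003"))
    # -> ['landmark_001', 'landmark_002', 'landmark_003']

The point is simply that once the brain’s wiring is expressed as landmarks and connection strengths, familiar map operations, like finding a route between two points, become easy to state.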









Image from http://cercor.oxfordjournals.org/content/early/2012/04/05/cercor.bhs072.full


Xiaoping Hu and
Claire Coles at Emory University are collaborating with Liu and
hope to use their map to compare ‘normal’ brains to the brains of
children who were exposed to cocaine while in the womb. As you might expect,
exposure to cocaine can be extremely harmful to children, with the potential to
cause serious damage
to their brain networks.
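
To give a flavor of what comparing two such maps might look like, here is a deliberately tiny sketch. The landmark count, matrices, and threshold are all invented for illustration; this is not the actual Emory/UGA analysis, only the general idea of contrasting connectivity matrices.

    # Toy sketch: compare two connectivity matrices, where entry
    # [i][j] is the connection strength between landmarks i and j.
    import numpy as np

    # Three hypothetical landmarks, values invented for illustration
    typical = np.array([[0.0, 0.8, 0.3],
                        [0.8, 0.0, 0.6],
                        [0.3, 0.6, 0.0]])
    exposed = np.array([[0.0, 0.2, 0.3],
                        [0.2, 0.0, 0.6],
                        [0.3, 0.6, 0.0]])

    # Flag landmark pairs whose connectivity differs beyond a threshold
    diff = np.abs(typical - exposed)
    flagged = np.argwhere(diff > 0.2)
    print(flagged)  # [[0 1] [1 0]]: the weakened 0 <-> 1 connection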





Brain mapping technology has huge potential. It also raises many issues, some in the far future but several that are very relevant now. There are three major topics I will touch on: death, identity, and property. They are all interconnected within the scope of this discussion.





First, I will start with death, as it was the impetus for having this discussion. This is not the first time that someone has challenged the definition of death. Over the centuries, as technology and medicine have advanced, our understanding of death has grown and changed. Before 1970, the main identifier for death (and life, actually) was the cessation of cardiopulmonary function. As we push forward, we have come to see that a heartbeat and respiratory action signify that the brainstem is intact but higher brain function may be absent (think coma or persistent vegetative state). While the science is still disputed, it is generally understood that when the brain ceases to be active, the individual has died. We have yet to discover how to jumpstart the brain back into action, which has caused us to deem those without neural function as brain dead. Thus, we have another definition of death, looking beyond heart and lung function and into neural activity.
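
As a schematic illustration of how these two layered definitions differ (my own simplification, not a clinical standard), one can express them as predicates over a handful of vital signs:

    # Sketch: the two definitions of death described above, expressed
    # as predicates over a handful of simplified vital signs.
    from dataclasses import dataclass

    @dataclass
    class VitalSigns:
        heartbeat: bool            # cardiac activity
        respiration: bool          # pulmonary activity
        brainstem_active: bool
        higher_brain_active: bool  # cortical function; absent in coma/PVS

    def dead_cardiopulmonary(v: VitalSigns) -> bool:
        # Pre-1970 criterion: death = cessation of heart and lung function
        return not (v.heartbeat or v.respiration)

    def brain_dead(v: VitalSigns) -> bool:
        # Later criterion: death = no neural activity at all
        return not (v.brainstem_active or v.higher_brain_active)

    # A persistent vegetative state: heart, lungs and brainstem work,
    # but higher brain function is absent. Alive under both criteria,
    # which is exactly why such cases are so hard to reason about.
    pvs = VitalSigns(heartbeat=True, respiration=True,
                     brainstem_active=True, higher_brain_active=False)
    print(dead_cardiopulmonary(pvs), brain_dead(pvs))  # False False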





Brain uploading challenges both of the aforementioned
definitions of death. After ‘conventional’ death, the possibility of returning to
life makes me wonder if we actually died in the first place. It’s very tricky,
actually. When biological death takes place, what can we say about our
consciousness?





A less abstract thought to consider is the right to die. Currently, suicide and euthanasia are illegal in most countries and are controversial. As such, Dr. Hayworth and those who are riding his train of thought must wait to die before they can undergo pre-upload procedures. It would greatly increase the ability to harvest information from the brain if it could be taken before death, avoiding any associated damage (cell death from lack of oxygen, or damage from a head impact during an accident). So, should a person be allowed to undergo a ‘life-ending’ surgery with the intent (or perhaps hope is a better word) of returning to life in the future? On the other hand, should advance directives be used, such that an individual can request not to be uploaded in the same way they can ask not to be resuscitated?





Dr. Hayworth’s plan has interesting religious implications. His kind of resurrection clashes with the afterlife/next-life beliefs of many religions. Can Heaven, Hell or reincarnation exist if our minds are re-synthesized with science? In the brain-uploading situation, what appears to happen is that one ‘consciousness’ dies and another is constructed (a bit like the movie The Prestige; if you haven’t seen it yet, pretend you didn’t read that). You are then stuck with a tricky situation involving identity and determining what is really going on here.





Dr. Hayworth has an interesting answer to this conundrum. Though he is answering in the context of creating multiple reincarnates, his thoughts apply here as well. As each new being is brought into awareness, it becomes its own individual. You could have two clones of the same person. As soon as they awake, they have both begun their own unique experiences and instantly become distinct beings. As such, you are not faced with identical copies but with two distinguishable persons. It’s similar to identical twins: the reincarnates have the same physical makeup and, in this case, the same memories, but they will experience the world separately from each other.





So, perhaps you are not really coming back to
life. Someone else is just picking up where you left off.





This transition makes for a very interesting scenario. As a 21-year-old college undergraduate, I have acquired a whole lot of stuff. By age 85-90, I imagine that I will have built upon my stash. When I die, I expect to dispose of my possessions in a will, distributing some here and there, or perhaps I will just be buried with the entirety of my estate converted into gold. Property, I expect, would be turned over to a relative, sold or forfeited to the government. However, if I am coming back to life, can I just put everything on hold until I return? Do I get to keep my things after my biological death? How long do I have to reclaim them? Can I decide to put them into storage for 200 years because I would really like to experience the 23rd century? Does my estate roll over? Does debt?







Image from http://onlyhdwallpapers.com/high-definition-wallpaper/clones-desktop-hd-wallpaper-589675/


Then again, if it isn’t really ‘me’ who is coming back to life, does the reincarnate have rights to my estate? Who is going to make that call? While I cannot imagine why someone would want to leave his or her next-generation self in the dirt, say someone is low on cash but wants to ‘Continue Life.’ Can they use an IOU and promise that their reincarnate will pay for the costs of the procedure? Here’s a fun scenario: why not get two reincarnates and let one work off the debt while the other has some fun? You could turn yourself into an indentured servant. In this case, who is granted personhood, as well as the rights and liberties of being a person? If there are three reincarnates, can all three vote in elections? It gets quite messy quickly. The thought of these possibilities is terribly exciting and excitingly terrifying.





I know that many of the topics I touched on were skimmed over and deserve much more attention. I encourage you to dig deeper into these subjects, discuss them with your peers and let me know what you come up with. This is a huge topic, and a full discussion would be well beyond the scope of this blog post. My goal was to raise some questions, get people talking, and let you, the readers, form your own opinions. In the meantime, pay attention to connectomes and brain mapping, as I believe they hold a lot of promise for the future of neuro-healthcare.









Want to cite this post?


Craig, E. (2012). Brain Connectomes: Your ticket to the future. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2012/08/brain-connectomes-your-ticket-to-future.html