
Tuesday, May 30, 2017

Gender Bias in the Sciences: A Neuroethical Priority


By Lindsey Grubbs




Lindsey Grubbs is a doctoral candidate in English at Emory University, where she is also pursuing a certificate in bioethics. Her work has been published in Literature & Medicine and the American Journal of Bioethics Neuroscience, and she has a chapter co-authored with Karen Rommelfanger forthcoming in the Routledge Handbook of Neuroethics.   





In a March 29, 2017 lecture at Emory University, Dr. Bita Moghaddam, Chair of the Department of Behavioral Neuroscience at Oregon Health & Science University, began her talk, “Women’s Reality in Academic Science,” by asking the room of around fifty undergraduate and graduate students, “Who’s not here today?”





The answer? Men. (Mostly. To be fair, there were two.) Women in the audience offered a few hypotheses: maybe men felt they would be judged for coming to a “women’s” event; maybe they wanted the women in their community to enjoy a female-majority space; maybe they didn’t think that gender impacts their education and careers.





Moghaddam seemed inclined to favor this third view: anecdotally, she has noticed a marked lack of interest from younger men when it comes to discussing gender bias in the sciences. More interested, she suggested, are older men who run laboratories or departments and watch wave after wave of talented women leave the profession, and those who have seen their partners or children impacted by sexism in science.





Dr. Moghaddam was invited to speak in Atlanta for her work against the systemic bias facing women in the sciences. She co-authored a short piece in Neuropsychopharmacology titled “Women at the Podium: ACNP Strives to Reach Speaker Gender Equality at the Annual Meeting.” The essay (and a corresponding podcast) describes measures taken while Moghaddam was chairing the program committee for the competitive, invitation-only American College of Neuropsychopharmacology (ACNP) annual meeting.





The problem? Although well-represented in ACNP membership, women were underrepresented in the conference’s prestigious speaking slots. In 2010, for example, only 7% of plenaries were delivered by women (the numbers in 2011 and 2012 were a bit better, at 33% and 15% respectively). As a result, women were not getting the same valuable exposure and opportunities for professional development as male scientists. Moreover, the quality of the science may have been suffering from a lack of diverse perspectives, and younger scientists could have been turned off by panels full of old white men (somewhere around 50% of neuroscience graduate students are women).








Portrait of M. and Mme Lavoisier.

Image courtesy of Wikipedia.



And neuroethicists should take note. Which voices are amplified and which are dampened in neuroscience is a fundamental ethical question. Many of history’s most egregious ethical violations were supported in part by scientific bias: American slavery was justified by racial medicine developed by white men (see, for instance, the work of the pro-slavery physician Samuel Cartwright), and women’s exclusion from education and the public sphere in the nineteenth century was championed by early male neurologists (see, for example, S. Weir Mitchell’s Doctor and Patient). In interrogating the ethical issues facing the science of the mind today, neuroethics must not neglect the practices that shape the field and dictate its contours by amplifying some voices while dampening others.





In short, more inclusive science is better science, as the work of feminist science scholar Deboleena Roy shows. Roy, who holds a doctorate in reproductive neuroendocrinology and molecular biology, argues that her feminist training allowed her to innovate in her neuroendocrinological research, leading to important new insights about the hypothalamic-pituitary-gonadal axis. In her essay “Asking Different Questions: Feminist Practices for the Natural Sciences,” she examines the ways that the “methodology of the oppressed” allows scientists to ask new questions.





In a 2014 article for Neuroethics, Roy points to a need both for neuroethicists studying sex and gender difference to engage with histories of medicine and feminist theory, and for feminist theorists to become comfortable enough with tools like neuroimaging that they can contribute to, rather than simply critique, work in neuroscience. Roy argues that neuroethics may be a productive theoretical space where feminist scholars and neuroscientists can explore these issues together, suggesting (among other possible topics) scrutiny of the assumption that structural differences in the brain equate to functional differences in behavior.





The Gendered Innovations website, sponsored by the National Science Foundation, Stanford, and the European Commission, provides resources and case studies for bringing gender and sex analysis into research, stating plainly that integrating such methods “produces excellence in science, medicine, and engineering research, policy, and practice.” The group argues that “Doing research wrong costs lives and money,” pointing to issues as diverse as pharmaceuticals pulled from the market because they were life-threatening to women and injuries in vehicle accidents because crash dummies are modeled on the average male body. Bringing gender analysis to research, they claim, is good for research, society, and business. This is certainly true in neuroscience, where bias is built into the very infrastructure of research, as in the case of neuroimaging scanners engineered for the larger average male head, which yield less precise scans of the female brain. And the stakes of such failures are high. Because neuroscientific research so often interrogates fundamental questions about personhood, morality, consciousness, and intelligence, research bias has the potential to skew our perception of identity in particularly damaging ways.





Gendered Innovations takes for granted that a more inclusive science is a better science, with interlocking initiatives to increase the number of women in science, to promote equality within science organizations, and to improve research by including sex and gender analysis. Certainly, not all female neuroscientists pursue feminist work, and not all feminist work is pursued by women, but creating an inclusive environment that foregrounds anti-bias as a goal must be an important part of an ethical neuroscience. This work needs to happen at many levels: feminist theory excels at this kind of critique, and feminist work in the lab is changing the course of neuroscience, but Dr. Moghaddam’s work suggests that we must not neglect administrative, pragmatic solutions.





Moghaddam’s team found a surprisingly simple and effective solution to the problem of unequal representation at the annual ACNP conference. More women were included on the program committee, and the call for proposals was edited to include the following phrase: “While scientific quality is paramount, the committee will strongly consider the composition of the panels that include women, under-represented minorities and early career scientists and clinicians.” Following this addition, more than 90% of panel proposals included at least one woman—a dramatic improvement, even though women still made up only about 35% of speakers. Essentially, once all-male panels became less competitive, proposers grew more proactive about assembling gender-diverse ones. Notably, attendees’ assessment of the scientific quality of the annual meeting improved at the same time that women’s representation did.








Image courtesy of Flickr.

Dr. Moghaddam, by all measures an incredibly successful scientist, shared many experiences of gender discrimination, from expectations that she do departmental “chores,” to colleagues’ horror that she would become pregnant early on the tenure track, to being mistaken for an administrative assistant or janitor. And such experiences are more than anecdotal. This powerful article in the Harvard Business Review begins with a stark statistic from the U.S. National Science Foundation (NSF): although approximately half of doctoral degrees in the sciences are awarded to women, women hold only 21% of full professorships—even though women often outperform their male colleagues early in their careers. The authors’ analysis suggests that, while women obtain 10-15% more prestigious first-author papers than men (the position reserved for the junior scientist who led the research and writing), women are significantly underrepresented among last authors (the spot reserved for the senior scientist whose grant money and intellectual direction likely guided the work).





While some suggest that women simply aren’t entering the STEM pipeline, or that they leak out of it in search of better work-life balance, research suggests that systemic gender bias may be a more fundamental problem: women must continually prove themselves in the face of doubt, are expected to walk a tightrope between “appropriate” masculinity and femininity, face diminished opportunities and expectations after having children, and face increased discrimination and competition from other women in their field. Many of these forms of bias, the authors note, are reported at higher rates by women of color, and black and Latina women also report feeling that social isolation within the department was essential for maintaining an air of competence.





These challenges are not unique to neuroscience: across the university, women are underrepresented in full-time, tenure-track positions and overrepresented in part-time and contingent ones. Of note to many neuroethicists, philosophy has a particularly poor record on the representation of women in full-time, tenure-track positions. Neuroethicists, then, should be vigilant. If we (over-simply, of course) imagine neuroethics as a kind of petri dish of philosophy and neuroscience, then we certainly have some deep-seated demons to confront, despite the strength of our female founders and contributors. (Take an internet stroll to NeuroEthicsWomen Leaders to see for yourself.)





Many women organize around these kinds of issues, advocating for a more equitable environment. The website anneslist compiles lists of female neuroscientists to facilitate speaking and networking opportunities. And at Emory, Moghaddam’s talk was sponsored by Emory Women in Neuroscience, an organization founded by graduate students in 2010 to build community in a department where 75% of graduate students, but only 25% of faculty, were women. (According to a rough count of the faculty page, this statistic has improved in the past seven years, but only slightly.) Spearheaded by president Amielle Moreno, the organization holds several events per year, from social events like BBQs and a Girl Scout Cookie and Wine Night, to screenings of films about women in science (this year, a viewing and discussion of Hidden Figures), to bootcamps that give women a friendly environment in which to learn to code. The group’s mission clearly resonates: this was one of the better-attended talks I’ve seen in my years at Emory, even with (very) few men present.








Image courtesy of Wikimedia.

And on this last point, Dr. Moghaddam did not lay the blame for men’s lack of interest in the talk solely at their door. The women in the room, she suggested, could have been more proactive about bringing their male friends and colleagues to the table. Women in academia (and beyond, of course) will likely take the lead in advocating for change. But this is often a perilous position. As one audience member pointed out, women may worry that they’ll be branded as “difficult” if they call out peers or superiors for sexism. And they may be right. What this suggests, then, is that men need to take on more of this labor. If women’s positions are already more vulnerable, those with more security should seriously consider how they can proactively improve equality. Perhaps incentives like those employed by the ACNP could be applied in more contexts, driving home to men that gender equality is in their best interest.





But gender equality can be only one goal among several. Sexism can be practiced by women as well as men, so a fifty-fifty faculty split would not automatically fix attitudinal problems. There is a difference, too, between science done by women and feminist science (for more on this, see here, here, or here). Further, we must consider who is left out when we talk in these terms. My own discussion above, for instance, relies uncomfortably on the categories “men” and “women,” leaving out those whose gender identities do not fit neatly into either. Moreover, by talking about “women” as a group with a unified experience, we can overlook the nuances of racialized sex discrimination. Although my own discussion has focused on gender rather than race, racial bias within science is also rampant (see posts here and here), impoverishing the quality of research. On these fronts, and more, fields like neuroscience and neuroethics should pursue open discussion, and experiment with pragmatic changes like the one that improved the ACNP’s representation of women at the podium. Neuroethicists have a duty to stay informed about bias in the academy and to work both pragmatically and imaginatively to develop and support a more inclusive field—and women shouldn’t have to do it alone.




Want to cite this post?



Grubbs, L. (2017). Gender Bias in the Sciences: A Neuroethical Priority. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2017/05/gender-bias-in-sciences-neuroethical.html



Tuesday, May 23, 2017

How you’ll grow up, and how you’ll grow old


By Nathan Ahlgrim




Nathan Ahlgrim is a third year Ph.D. candidate in the Neuroscience Program at Emory. In his research, he studies how different brain regions interact to make certain memories stronger than others. In his own life, he strengthens his own brain power by hiking through the north Georgia mountains and reading highly technical science...fiction.




An ounce of prevention can only be worth a pound of cure if you know what to prevent in the first place. Prevention can be fairly straightforward when it is rooted in lifestyle, such as maintaining a healthy diet and weight to ward off hypertension and type-II diabetes. Disorders of the brain, however, are more complicated – both to treat and to predict. The emerging science of preclinical detection of brain disorders was on display at Emory University during the April 28th symposium entitled “The Use of Preclinical Biomarkers for Brain Diseases: A Neuroethical Dilemma.” The symposium brought together perspectives from ethicists, researchers conducting preclinical studies, and participants and family members of those involved in clinical research. The diversity of panelists provided a holistic view of where preclinical research stands and what must be considered as the field progresses.





Throughout the day, panelists discussed the ethical challenges of preclinical detection through the lens of three diseases: preclinical research and communicating risk in the context of Autism Spectrum Disorder (ASD); intervention and treatment of preclinical patients in the context of schizophrenia; and the delivery of a preclinical diagnosis, and its attendant stigma, in the context of Alzheimer’s disease. The symposium was bookended, appropriately, by discussions of two diseases that typically emerge at the beginning and end of life: ASD and Alzheimer’s disease. Drs. Cheryl Klaiman and Allan Levey discussed the clinical research on ASD and Alzheimer’s, respectively. Drs. Paul Root Wolpe and Dena Davis framed the clinical research by highlighting the ethical challenges that preclinical research on those diseases must address.





Attempting to detect markers of ASD in infants and of Alzheimer’s disease in middle-aged adults raises distinct ethical challenges; even so, common hurdles arise in both diseases, highlighting the universality of the questions that all preclinical research must address. The shortcomings of current scientific practice were vividly portrayed during the symposium by people who are both involved in the research and touched by these diseases. As is true for many ethical dilemmas, a day of discussion did not produce resolutions. It did spawn a consensus, however: transparency in conveying the implications of preclinical research, and the options for the patient going forward, is critical to ensuring that all patients and families are treated with the dignity they deserve.








Image courtesy of The Blue Diamond Gallery.

Both ASD and Alzheimer’s disease are growing in prevalence and visibility. ASD, a developmental disorder that disproportionately affects boys, is principally characterized by deficits in social communication and by repetitive behaviors; it is now estimated that 1 in every 68 children is on the autism spectrum [1]. Alzheimer’s disease, the most common cause of dementia, is an age-related neurodegenerative disease characterized by progressive loss of memory and other cognitive functions. More than 5.5 million people are currently living with Alzheimer’s disease in the United States, and that number is expected to double in the next 20 years. Given the prevalence of both disorders, most of us know someone diagnosed with ASD or Alzheimer’s even if we have not been personally affected.





However, visibility can backfire by putting a spotlight on the frightening implications of an Alzheimer’s disease or ASD diagnosis. One parent of an autistic child shared how he was forced to deal with the consequences of such fear after consulting a doctor about his son’s social development. Although the pediatrician believed the child was autistic, he refrained from sharing the diagnosis because he could not bring himself to deliver what he deemed a ‘death sentence.’ Only later, after the family received the diagnosis through a second opinion, did the pediatrician disclose his original impression.





This doctor’s (poor) choice of words and delayed diagnosis were discussed, largely unfavorably, during the symposium. Even so, we can all empathize with the fear of a diagnosis we do not fully understand. Receiving a diagnosis of either Alzheimer’s disease or ASD before clinical symptoms manifest raises the specter of a loss of autonomy. As Alzheimer’s disease develops, patients can lose their autonomy as cognitive functions fail. In addition to this personal loss of control, patients with Alzheimer’s disease are often unfairly stigmatized by their communities. Loved ones can fear becoming caregivers and prematurely withdraw from relationships. Misinformation about early-stage Alzheimer’s can also jeopardize a patient’s employment long before he or she becomes cognitively impaired. Autonomy similarly concerns parents of a child with ASD. After the diagnosis, parents may feel cornered into lifelong care for a child who may lack access to community resources and never be able to live independently. Not only that, but the mountains of evidence disproving the role of parenting in ASD development are not always sufficient to protect parents from being blamed for their child’s disorder, whether by themselves or by their community.





Of course, the goal of preclinical research is to strike before the disease progresses – before it is too late to intervene. Clinical trials for both ASD and Alzheimer’s suggest that effective treatments rely on early detection, pushing researchers to extend the current boundaries of preclinical detection and diagnosis. Treatment outcomes in ASD improve drastically the earlier intervention starts, which is why Dr. Cheryl Klaiman and the Marcus Autism Center are continuing research on behavioral markers that identify differences in the social behavior of infants as early as 6 months of age [2].








Artistic representation of the neurodegeneration and memory loss that occur in Alzheimer's disease. Image courtesy of Flickr user Kalvicio de las Nieves.

Sadly, all drugs for Alzheimer’s disease that showed promise in animal models have failed to benefit human patients. FDA-approved drugs taken by patients with Alzheimer’s treat only the symptoms, not the disease, and even those few approved treatments do not provide symptom relief for all patients. The repeated failures of Alzheimer’s clinical trials may be a product of intervening too late: Dr. Allan Levey described how the brain pathologies of Alzheimer’s disease – plaques of amyloid-beta and tangles of tau – develop for decades before any cognitive impairment appears [3].





However, pushing preclinical diagnosis earlier and earlier raises several concerns. In favor of early diagnosis is the notion that even when scientists do not have good news, the patient’s (or parents’) autonomy must be respected (see the Belmont Report). Therefore, the ethical course of action would appear to be to inform the patient when a positive diagnosis is present, whether the disease is in a clinical or preclinical stage. Only then can the patient (or family member) make an informed decision about his or her health. 





Dr. Paul Root Wolpe presented a counter-argument against informing a patient in all circumstances: given that the preclinical state is, by definition, prior to the appearance of clinical symptoms, any preclinical diagnosis is probabilistic. What is the threshold for informing and intervening: 80%? 50%? Is it more ethically responsible to subject a family to intensive and expensive treatment for ASD when it is not present, or to let the disorder go untreated? The Belmont Report’s mandate of beneficence and non-maleficence does not offer a clear answer. When striving to act with beneficence and non-maleficence, preclinical research relies on relative risk, which is problematic given Dr. Wolpe’s belief that humans are not built to understand relative risk.
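To make the difficulty concrete, consider a back-of-the-envelope Bayes calculation of what a “positive” preclinical marker actually means. The numbers below are purely illustrative assumptions (a base rate of roughly 1 in 68, echoing the ASD prevalence estimate above, and a hypothetical marker that is 90% sensitive and 90% specific); nothing here comes from the symposium itself.

```python
# Illustrative sketch: Bayes' theorem applied to a hypothetical preclinical
# marker. Even a seemingly accurate test yields mostly false positives when
# the condition is rare -- one reason probabilistic diagnoses are hard to act on.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(condition | positive marker), via Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Assumed (not sourced) numbers: base rate ~1/68, 90% sensitivity and specificity.
ppv = positive_predictive_value(prevalence=1 / 68, sensitivity=0.90, specificity=0.90)
print(f"P(condition | positive marker) = {ppv:.0%}")  # prints roughly 12%
```

On these assumptions, nearly nine out of ten children flagged “positive” would never develop the condition, which is exactly the informing-and-intervening threshold problem Dr. Wolpe raises.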





The cost of a Type I error (a false positive) in the case of preclinical ASD may be negligible: behavioral therapy designed to help those with ASD has been shown to benefit all children, typically developing or not. The only possible harm, then, would be asking extensive time and effort of a family when it was not strictly necessary. Still, given the universality of the benefit, one wonders why scientists should bother with early detection for ASD at all: integrating such therapy into all classrooms would both provide treatment for the children who need it and reduce the stigma of being “abnormal” or “other,” since all children would share the same experience.








Image courtesy of Flickr user, Melissa.

Alzheimer’s disease is different. Science has yet to provide an effective treatment for the disease, so a preclinical diagnosis cannot initiate a treatment plan. As mentioned previously, the discovery of effective treatments will likely depend on the ability to detect the disease and intervene early in its progression. This order of events unfortunately means that the first cohorts of research participants will not reap the benefits of the science they contribute to. The panelists and audience at the symposium were, unsurprisingly, more divided over receiving a preclinical Alzheimer’s disease diagnosis for themselves than over receiving a preclinical ASD diagnosis for their child. Luckily, patients with preclinical Alzheimer’s disease retain full cognitive function, and thus maintain their capacity for autonomy. This does not hold true, however, as patients progress from preclinical to clinical Alzheimer’s disease. Changes in personality coincide with [4] or even precede [5] a clinical diagnosis, often leaving the clinical Alzheimer’s patient with wishes and beliefs different from those he or she held as a preclinical patient. With this in mind, many audience members voiced the opinion that they would prefer to die before the cognitive symptoms of Alzheimer’s began.





However, Dr. Dena Davis brought more nuance to this idea, arguing that our prospective sympathy as healthy individuals – our ability to accurately predict how we will feel once in a disease state – is profoundly flawed. A proponent of the right to die, Dr. Davis painted a troubling portrait of a patient given a diagnosis of preclinical Alzheimer’s disease. Say that person chooses to end his life once he becomes severely cognitively impaired. By the time the impairment has taken hold, he may no longer remember the initial wish, or may have completely changed his mind. Whose wishes are to be honored: those of the clinical patient or those of the preclinical patient? This conundrum is also heartbreakingly described in Lisa Genova’s novel, Still Alice.





This quandary is why discussions between ethicists, scientists, and patients are necessary. The ability to detect a disease before clinical symptoms appear is a laudable scientific achievement, but that knowledge must be put in the context of the people who will use those technologies. Without context, scientific discoveries fail to do good, and can often do harm.





Effective medicine requires support and trust from the community. Two doses of the Measles-Mumps-Rubella (MMR) vaccine are 97% effective against measles, and yet there were 61 cases in the U.S. in the first four months of this year. These cases occurred a full 17 years after endemic measles was effectively eliminated in the U.S. [6], and are primarily a result of poor vaccination rates. A combination of fear of unsafe vaccines, mistrust of doctors, and disbelief in the need for vaccination drove parents away from the established research on the effectiveness and necessity of vaccines. In parallel, fear of stigma and an unwillingness to face the diagnosis of a brain disorder could similarly push patients away from treatments if scientists are not diligent in their education efforts and in how they present the research.








Nathan created this image to be used as the logo for the April 28th neuroethics symposium.

The investment of patients enrolled in preclinical research may produce a larger effect size in clinical trials than would ever be practical outside a research environment, owing to the self-selectivity of the participants. Only invested parents would enroll their children in studies that demand time and continuing effort. Similarly, only highly self-motivated participants would stick to a treatment schedule of infusions and lumbar punctures before Alzheimer’s symptoms ever appeared. One symposium speaker was first drawn to participate in the Anti-Amyloid Treatment in Asymptomatic Alzheimer’s (A4) study because of a family member who suffered from Alzheimer’s disease. Personal ties to the research motivate patients more strongly than any academic rationale scientists can offer. If preclinical research on either disease does produce an effective treatment, both scientists and community health partners will need to put forth additional effort to give the treatment broad appeal and accessibility.





The burden of garnering community support may fall on scientists more than many scientists might like to admit. Our representative participant in the preclinical Alzheimer’s study was quick to say that personal interactions keep him motivated to continue the study. A large part of the reason he voluntarily receives infusions of a trial drug from Dr. Allan Levey’s team, and is considering a lumbar puncture, is the people on the team. The need for scientists to consider the ethics of their research is obvious. But as our representative participant underscored, scientists’ interpersonal relationships with their patients must also be consciously developed; that is the only way the resulting research will do any good in the community.





Scientists are still developing treatments for Alzheimer’s disease and ASD. No magic bullet is likely to ever appear. However, a diagnosis does not need to be a death sentence. Preclinical detection enables intervention before clinical pathology appears, allowing for an ounce of prevention to be applied before a pound of cure is needed. This is not to diminish the years of demanding, often heartbreaking labor that is asked of caregivers of people with Alzheimer’s disease or ASD. What should drive the scientific research and treatment plans? When asking what is good for the patient and his or her family, we as scientists must always remember who we are serving, and what our end goals are. As one parent at our meeting remarked, “we may not have the cure, but we have the care.”



References



1. Prevalence and Characteristics of Autism Spectrum Disorder Among Children Aged 8 Years - Autism and Developmental Disabilities Monitoring Network, 11 Sites, United States, 2012. 2016, Centers for Disease Control and Prevention.



2. Jones, W. and A. Klin, Attention to eyes is present but in decline in 2-6-month-old infants later diagnosed with autism. Nature, 2013. 504(7480): p. 427-431.



3. Serrano-Pozo, A., et al., Neuropathological Alterations in Alzheimer Disease. Cold Spring Harbor Perspectives in Medicine, 2011. 1(1): p. a006189.



4. Mega, M.S., et al., The spectrum of behavioral changes in Alzheimer's disease. Neurology, 1996. 46(1): p. 130-135.



5. Balsis, S., B.D. Carpenter, and M. Storandt, Personality Change Precedes Clinical Diagnosis of Dementia of the Alzheimer Type. The Journals of Gerontology: Series B, 2005. 60(2): p. P98-P101.



6. Katz, S.L. and A.R. Hinman, Summary and conclusions: measles elimination meeting, 16-17 March 2000. J Infect Dis, 2004. 189 Suppl 1: p. S43-7.



Want to cite this post?



Ahlgrim, N. (2017). How you’ll grow up, and how you’ll grow old. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2017/05/how-youll-grow-up-and-how-youll-grow-old.html

Saturday, May 13, 2017

Happy 15th Birthday, Neuroethics!


By Henry T. Greely






Henry T. (Hank) Greely is the Deane F. and Kate Edelman Johnson Professor of Law and Professor, by courtesy, of Genetics at Stanford University. He specializes in ethical, legal, and social issues arising from advances in the biosciences, particularly from genetics, neuroscience, and human stem cell research. He directs the Stanford Center for Law and the Biosciences and the Stanford Program on Neuroscience in Society; chairs the California Advisory Committee on Human Stem Cell Research; is the President Elect of the International Neuroethics Society; and serves on the Neuroscience Forum of the National Academy of Medicine; the Committee on Science, Technology, and Law of the National Academy of Sciences; and the NIH Multi-Council Working Group on the BRAIN Initiative. He was elected a fellow of the American Association for the Advancement of Science in 2007. His book, THE END OF SEX AND THE FUTURE OF HUMAN REPRODUCTION, was published in May 2016. 






Professor Greely graduated from Stanford in 1974 and from Yale Law School in 1977. He served as a law clerk for Judge John Minor Wisdom on the United States Court of Appeals for the Fifth Circuit and for Justice Potter Stewart of the United States Supreme Court. After working during the Carter Administration in the Departments of Defense and Energy, he entered private law practice in Los Angeles in 1981. He joined the Stanford faculty in 1985. 





Fifteen years ago, on May 13, 2002, a two-day conference called “Neuroethics: Mapping the Field” began at the Presidio in San Francisco. And modern neuroethics was born. That conference was the first meeting to bring together a wide range of people who were, or would soon be, writing in “neuroethics”; it gave the new field substantial publicity; and, perhaps most importantly, it gave it a catchy name.



That birthdate could, of course, be debated. In his introduction to the proceedings of that conference, William Safire, a long-time columnist for the NEW YORK TIMES (among other things), gave neuroethics a longer history: 


The first conference or meeting on this general subject was held back in the summer of 1816 in a cottage on Lake Geneva. Present were a couple of world-class poets, their mistresses, and their doctor. (Marcus) 


Safire referred to the summer holiday of Lord Byron and Percy Bysshe Shelley; Byron’s sometime mistress, Claire Clairmont; and Shelley’s then-mistress, later wife, known at the time as Mary Godwin and now remembered as Mary Wollstonecraft Shelley. The historically cold and wet summer of 1816 (“the year without a summer”) led them to try writing ghost stories. Godwin succeeded brilliantly; her story was eventually published in 1818 as FRANKENSTEIN; OR, THE MODERN PROMETHEUS.






Camillo Golgi, image courtesy of Wikipedia.


Safire’s arresting opening gives neuroethics either too little history or too much. If, like Safire, one allows neuroethics to predate an understanding of the importance of the brain, early human literature – both religious and secular – shows a keen interest in human desires and motivations. So does philosophy, since at least classical Greece. But without a recognition of the critical role of the physical brain in human behavior and consciousness, I do not think those discussions should be called “neuroethics,” though they are its precursors.




It was not until the late 19th century that we saw the beginnings of a deeper understanding not only of the role of the brain but of how it might function, notably through the (dueling) work of Camillo Golgi and Santiago Ramón y Cajal. Al Jonsen has noted that many twentieth-century events posed questions we would today consider “neuroethics.” (Jonsen) These questions seemed particularly intense in the 1960s and early 1970s, with active debates ranging from the uses of electroconvulsive therapy and frontal lobotomies; to legal and medical uses of brain death; to research with psychedelic drugs, aversion therapy, and “mind control.”




But the nascent field calmed down again until the rise of good neuroimaging in the 1990s, largely through magnetic resonance imaging, first structural and then functional. These techniques took major steps toward connecting the physical brain to the intangible mind, and thus toward linking neuroscience more directly to human society. People began to write about them for both specialized and general audiences. (Kulynych 1996, Kulynych 1997, Carter 1998, Blank 1999) And academics noticed. In 2000, based on planning begun by Paul Root Wolpe in 1998, the Bioethics Center at the University of Pennsylvania held three experts’ meetings, the first in January, the second in March, and the third in June.





In retrospect, though, 2002 was clearly the crucial year for neuroethics. It started in January, when the American Association for the Advancement of Science and the journal Neuron jointly sponsored a symposium called “Understanding the Neural Basis of Complex Behaviors: The Implications for Science and Society.” Then, on February 7, 2002, Penn Bioethics held a public conference on “Bioethics and the Cognitive Neuroscience Revolution” as the culmination of its three meetings in 2000.




But the most important meeting was held on May 13 and 14 at the San Francisco Presidio. Sponsored by the Dana Foundation and jointly hosted by UCSF and Stanford, this conference, called “Neuroethics: Mapping the Field,” brought together about 150 neuroscientists, philosophers, bioethicists, lawyers, and others. The Dana Press published the conference proceedings later in 2002; the book was fascinating reading then, and remains so today. 







Santiago Ramón y Cajal, image courtesy of Wikipedia.

Zach Hall of UCSF and Barbara Koenig of Stanford were the main organizers of the meeting. Hall was a neuroscientist who had returned to the UCSF faculty after serving as Director of the National Institute of Neurological Disorders and Stroke at NIH. Koenig was the Executive Director of the Stanford Center for Biomedical Ethics (SCBE). A bioethicist who did not then have a deep background in neuroscience, Koenig was assisted by others at SCBE, notably Judy Illes, a neuroscience Ph.D. who had recently joined the Center.




Hall, mainly from the neuroscience side, and Koenig, mainly from the bioethics side, organized the meeting, but William Safire was its prime mover. Safire was one of the most interesting people I have ever met. (McFadden) He dropped out of Syracuse University after two years and boasted to me – probably accurately – that he was the last person in American politics to be a college dropout. From 1955 until 1968 he worked in public relations firms, his own after 1961, with occasional time out to work on Republican political campaigns. In 1968 he joined the transition team and then the Nixon White House as a special assistant with a focus on speechwriting, coining, among other phrases, “the nattering nabobs of negativism” for a speech by Vice President Agnew. He left the Nixon Administration to become a political columnist for the New York Times, a post he held until 2005. He remained with the Times, however, continuing to write the “On Language” column he had started in the New York Times Magazine in 1979 until shortly before his death from pancreatic cancer in September 2009.




Safire’s New York Times obituary makes no mention of neuroscience or neuroethics, but his involvement was quite real. In 1993 he became a member of the Board of Directors of the Dana Foundation, a private charitable foundation created in 1950 by Charles A. Dana, a lawyer and businessman; in 1998 he became its vice chairman and then in 2000 its chairman. As chairman, Safire made neuroscience the Foundation’s almost exclusive focus.





Hall’s welcome at the start of the conference, as published in the proceedings, explains Safire’s role in it:


This meeting had its genesis in a visit to San Francisco by Bill Safire about a year and a half ago. I took Bill down to the new Mission Bay campus at UCSF and we were talking about all the brain research that would be going on there. I said that we also hoped to have a bioethics center. As we were talking about the need for discussion of these issues with respect to the brain, Bill suddenly turned to me and said, neuroethics. It was like that magic moment – “plastics” in the movie The Graduate. Bill said, “neuroethics,” and I thought, “that’s it.” (Marcus)





William Safire. (Image courtesy of Wikimedia.)

The conference had four sessions, each with a moderator and three or four speakers, several mealtime speeches, and a concluding section. The sessions were called Brain Science and Self, Brain Science and Social Policy, Ethics and the Practice of Brain Science, and Brain Science and Public Discourse. (In retrospect, and in light of my preferred scope for the field, “Brain Science” would have been a better, broader term than “Neuroscience,” but “neuroethics” and “neurolaw” both sound much better than “brain science ethics” or “brain science law.”) 




The speakers and moderators came from both neuroscience and ethics (broadly construed). Many of them were prominent at the time of the conference; many played important continuing roles in the development of neuroethics. From neuroscience came Marilyn Albert, Colin Blakemore, Antonio Damasio, Michael Gazzaniga, Steven Hyman, William Mobley, Daniel Schacter, and Kenneth Schaffner, as well as Zach Hall. Arthur Caplan, Judy Illes, Albert Jonsen, Barbara Koenig, Bernard Lo, Jonathan Moreno, Erik Parens, William Safire, William Winslade, Paul Root Wolpe, and I all spoke from ethics, law, politics, or philosophy. And at least three speakers did not fit neatly into that divide – Patricia Smith Churchland, a philosopher of the mind deeply involved in neuroscience; Donald Kennedy, a biologist and former president of Stanford who, at that time, was the editor of Science magazine; and Ron Kotulak, a science journalist. 





Like many conferences, this one claimed to want more discussion than presentations. My recollection, supported by the conference proceedings, is that, unlike most conferences, it succeeded in this goal. The general discussions between and among the speakers and the invited audience were insightful, and sometimes heated. 




Also like many conferences, this one was created in the hope that it would have some lasting impact. The most immediate consequence was the publication, with impressive speed, of the conference proceedings in July 2002, but perhaps more important was the publicity given to the idea of neuroethics. 




Two days after the conference ended, Safire used his NEW YORK TIMES column to write about neuroethics generally and the conference in particular. After starting the column with the Congressional debate over banning human cloning, Safire moved to the importance of neuroethics, ending with “The conference 'mapping the field' of neuroethics this week showed how eager many scientists are to grapple with the moral consequences of their research. It's up to schools and media and Congress to put it high on the public's menu.” (Safire)








(Image courtesy of Flickr.)

The following week, the cover of THE ECONOMIST proclaimed “The Future of Mind Control” with an image of a shaved head with a dial implanted in its forehead. The issue contained both a long science story on the ethical issues arising from neuroscience and a leader (editorial) on the same subject. (The Economist) While neither ECONOMIST piece used the term “neuroethics” or mentioned the Presidio conference (and the story at least must have been in preparation well before the conference), the effect, especially in conjunction with Safire’s column, was more attention for the issues. 




But perhaps the most important result of the Presidio conference was the field’s name. Safire first used it in print in his May 2002 column but, according to Hall, had used it with him about 18 months earlier. Although researchers have found earlier uses of the term (Illes, Racine), no one disputes that Safire was the first to use it publicly in its current sense or that he was the one who popularized it.




It is, in some ways, a poor name for the field. Calling the area “neuroethics” risks limiting it. After all, much of the interest in “neuroethics” lies in its legal and social implications, not just its “ethical” ones. And using “ethics” also raises a longstanding difficulty between philosophers, who sometimes act as though they own the term, and bioethicists. I made these arguments at the Presidio conference, but, even as I did so, conceded: “I’m afraid this is a doomed argument because I don’t have a better word. ‘Neuroethics’ sounds great.” (Marcus)




On that point at least, I was right. So tonight I’ll raise a glass to “neuroethics” and wish it “Happy birthday, and many happy returns!” And I hope the readers of this blog will join me. 





References



Rita Carter, MAPPING THE MIND (1998, Berkeley, CA: U. Calif. Press).



Robert H. Blank, BRAIN POLICY: HOW THE NEW NEUROSCIENCE WILL CHANGE OUR BRAINS AND OUR POLITICS (1999, Washington, D.C.: Georgetown Univ. Press)



The Ethics of Brain Science: Open Your Mind, THE ECONOMIST (May 23, 2002), accessed on Apr. 29, 2017 at http://www.economist.com/node/1143317.



The Future of Mind Control, THE ECONOMIST, (May 23, 2002), accessed on Apr. 29, 2017 at http://www.economist.com/node/1143583.



Judy Illes, Neuroethics in a New Era of Neuroimaging, 24 Am. J. Neurorad. 1739 (2003)



Albert R. Jonsen, Nudging toward Neuroethics: An Overview of the History and Foundations of Neuroethics in THE DEBATE ABOUT NEUROETHICS: PERSPECTIVES ON THE FIELD’S DEVELOPMENT, FOCUS, AND FUTURE (ed. Eric Racine and John Aspler, forthcoming 2017, Springer)



Jennifer Kulynych, Brain, Mind, and Criminal Behavior: Neuroimages as Scientific Evidence, JURIMETRICS 235-244 (1996) 



Jennifer Kulynych, Psychiatric Neuroimaging Evidence: A High-Tech Crystal Ball? 49 STAN. L. REV. 1249 (1997)



Steven J. Marcus, ed., NEUROETHICS: MAPPING THE FIELD, Conference Proceedings at 4 (2002, Dana Press: New York).



Robert D. McFadden, William Safire, Political Columnist and Oracle of Language, Dies at 79, New York Times (Sept. 27, 2009), accessed on January 1, 2017 at http://www.nytimes.com/2009/09/28/us/28safire.html. (This is my source for most of the biographical information about Safire.)



Eric Racine, PRAGMATIC NEUROETHICS (2010, MIT Press: Cambridge, Mass.)



William Safire, The “But What If” Factor, NEW YORK TIMES (May 16, 2002), accessed on Apr. 29, 2017 at http://www.nytimes.com/2002/05/16/opinion/the-but-what-if-factor.html.





Want to cite this post?



Greely, H. (2017). Happy 15th Birthday, Neuroethics! The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2017/05/happy-15th-birthday-neuroethics.html

Tuesday, May 9, 2017

Reading into the Science: The Neuroscience and Ethics of Enhancement


By Shweta Sahu







Image courtesy of Pexels.

I was always an average student: good, just not good enough. I often wondered what my life and grades would have been like if I’d had a better memory or learned faster. I remember several exams throughout my high school career where I just could not recall certain rote-memorized facts or specific details, and now, in college, I wonder how much time I would save, and how much more I could study, if I could somehow learn faster. Would a better memory have led me to do better on my exams in high school, and would a faster ability to learn new information have raised my GPA?





Questions like these have animated the ongoing debates over memory enhancement and cognitive enhancement for years. I’m not the only student to have ever felt this way, and I’m sure I won’t be the last. Technology and medicine seem to be on the brink of exciting new findings, ones that may help us in ways we never before imagined.





Though neuroscientists are still attempting to understand the intricacies of how memory functions, it has been known since the early 1900s that memory works in three modes: working memory, short-term memory, and long-term memory, each of which is localized to different parts of the brain. Working memory, which lasts from seconds to minutes, contains information that can be acted on and processed, not merely maintained by rehearsal. Short-term memory, on the other hand, is slightly longer in duration and depends on the prefrontal cortex (think George Miller’s magic number seven). It is here that a rehearsed item can be “moved” into long-term memory, and this long-term memory is of particular interest to physicians and clinicians. Long-term memory lasts over days, months, or years and is divided into declarative (explicit) memory and nondeclarative (implicit) memory. Declarative memory can be further subdivided into episodic memory, which comprises memories of personal experiences and autobiographical events, and semantic memory, which is objective knowledge that is factual in nature, deemed “world knowledge.” The brain’s ability to acquire declarative memories depends on the medial temporal lobe regions, which include the amygdala, hippocampus, and the surrounding parahippocampal, perirhinal, and entorhinal cortical areas. It is within these structures that memory and learning occur, specifically through communication via neurotransmitters and the repeated activation of certain synapses.








Image courtesy of Novalens.

It is also here that enhancement comes into play, whether via chemical means (notably the neurotransmitters acetylcholine, dopamine, and serotonin) or via technological means (TMS, DBS, tDCS, etc.). From studies in humans and animals, it is well known that the hippocampus is crucial for the formation of new long-term memories, but because the hippocampus lies deep within the brain, stimulating it electrically is tricky. This is where stimulation of the entorhinal cortex becomes key, as it is heavily connected to the hippocampus. Both transcranial magnetic stimulation (TMS) and deep brain stimulation (DBS) are techniques that target specific regions of the brain, as opposed to their chemical equivalents (i.e., drugs), which cannot be localized. A revolutionary study done in 2012 by Suthana et al. tested whether DBS of the hippocampus or entorhinal cortex altered performance on spatial memory tasks. They found that “entorhinal stimulation applied while the subjects learned locations of landmarks enhanced their subsequent memory of these locations,” though direct hippocampal stimulation did not yield similar results. Moreover, while past studies had shown that TMS can improve performance on various tasks, a 2014 study found that repeated TMS over the span of one week could improve memory for events at least 24 hours after the stimulation was given, specifically on “memory tests consisting of a set of arbitrary associations between faces and words that they were asked to learn and remember.” This study is particularly noteworthy because it was done on healthy volunteers with “normal” memory, that is, those in whom you wouldn’t expect to see marked improvement, since their brains are already functioning at their normal capacities.





Enhancement via chemical means is also rising in popularity among adults and college students. For example, Ritalin and Adderall, two commonly prescribed stimulants for ADHD, increase the extracellular concentration of dopamine in the brain by blocking the dopamine transporter. Patients with hyperactive-impulsive ADHD carry changes to their dopamine transporter gene, which is why these stimulants can alleviate their symptoms. However, Ritalin and Adderall are now being used off-label and abused by nonmedical users (those without a prescription) in an attempt to enhance their performance. One intriguing qualitative study found that “stimulants’ effects on users’ emotions and feelings are an important contributor to users’ perceptions of improved academic performance,” so users felt cognitively enhanced. Of the college students interviewed, many reported feeling “up,” and one stated, “your energy level is higher… it’s just easier to function at a highly productive level.” Students further reported increased drive and motivation, saying that Adderall produced surplus energy discharged through an “internal push, pressure.” Moreover, they claimed these stimulant medications allowed them to be “interested” in the material, thereby increasing their feeling of enjoyment. All this is to say that these students did feel cognitively enhanced and saw nothing wrong with it. In contrast, some students think that the unauthorized use of prescription medications is cheating, whether it be to enhance motivation, information, or recall. In fact, some school administrators see it the same way, with Duke the most notable example of a university that has explicitly deemed such unauthorized use “cheating” in its student conduct code.





That brings us to the contested ethics of cognitive and memory enhancement, both chemical and electrical. Opinion is divided, and there is no clear line in the sand. One view in medicine is “first, do no harm.” Maurice Bernstein, MD, says that transforming physicians from healers to enhancers has the potential to “degrade” this standard. Richard M. Restak, MD, a clinical professor of neurology, provides another, more practical answer when asked whether he would prescribe enhancement: “I don’t prescribe them… Such use is definitely off-label and puts the physician at a disadvantage should something go wrong.” However, Dr. Chatterjee, a prominent neuroethicist and coiner of the term “cosmetic neurology,” offers the realistic view that “medical economics will drive some physicians to embrace the enhancement role with open arms, especially if it means regaining some of the autonomy lost to managed care plans.” So much of medicine is now dictated by protocols and standard operating procedures, but Dr. Chatterjee suggests this may change if physicians are given this new option to reclaim some of their authority, putting decision-making power back in their hands.








Image courtesy of Wikipedia.

Nevertheless, physicians are not the only ones divided on this issue; the general public seems to be even more so. A proponent of enhancement and author of Liberation Biology: The Scientific and Moral Case for the Biotech Revolution, Ronald Bailey, argues that disease is a state of dis-ease. He further states, “if patients are unsatisfied with some aspect of their lives and doctors can help them with very few risks, then why shouldn't they do so?" However, Deane Alban, researcher, writer, and manager of BeBrainFit.com offers a contrasting opinion. She writes,



“Smart drugs have side effects, are almost always obtained quasi-legally, and may not even work. You have only one brain. You can artificially stimulate it now for perceived short-term benefits. Or you can nourish and protect it so that it stays sharp for a lifetime. The decision is a no-brainer.”



But is it? By not taking advantage of such enhancing technologies, will we get left behind? Here the issue turns to implicit coercion, where people feel they have to do or take something in order to keep up, even if they would rather not. This raises the further question of whether employers will begin contemplating enhancement for their employees, even preferring those who function at a higher level than others. Speaking purely in terms of efficiency, why not take the more productive team member? Already, some air force pilots are required (and some medical residents encouraged) to take Modafinil, a stimulant originally intended to treat narcolepsy and other sleep disorders. If the workforce continuously demands excellence of its employees, why not go a step further and take a cognitive enhancer, if such drugs make employees less prone to error, able to work and concentrate for longer hours, and more efficient? If surgeons and restaurant employees are “coerced” to wash their hands and follow other protocols, this step may not be all that far away for the rest of us, provided these enhancement drugs are proved safe and efficacious.






That said, if there were a way for me to enhance my memory, learn faster, motivate myself to learn more, and enjoy what I do learn, I think I would take it, *if it is not considered cheating and *if it is deemed effective. Plenty of literature on the internet describes how patients with ADHD feel that the medication brings them to a level comparable with their peers. However, results conflict as to whether these Smart Drugs can enhance people beyond their “normal capacity.” Yet if we can’t make people who use them illegally stop (and we cannot completely and irrevocably accomplish this), is there a time in the near future when we will legalize them for everyone, so that those who choose to take Smart Drugs can do so of their own volition? At that point, I might just take them. I don’t want to get left behind. Do you?



Want to cite this post?





Sahu, S. (2017). Reading into the Science: The Neuroscience and Ethics of Enhancement. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2017/05/reading-into-science-neuroscience-and.html




Tuesday, May 2, 2017

The [Sea] Monster Inside Me


By Sunidhi Ramesh







A side-by-side comparison of a sea horse and the human hippocampus (Greek for sea monster). (Image courtesy of Wikimedia Commons.)

In 1587, Venetian anatomist Julius Aranzi gave a name to the intricate, hallmark structure located in the medial temporal lobe of the human brain—the hippocampus, Greek for sea monster.





The hippocampus, often said to resemble a sea horse, has since been identified as a key player in the consolidation of information (from short-term memory to long-term memory) and in the spatial memory that allows for our day-to-day navigation. Because of its importance in learning and memory, hippocampal damage is often a culprit in varying forms of dementia, Alzheimer’s disease, short-term memory loss, and amnesia.





Since its discovery, the hippocampus has been the subject of extensive research, ranging from studies of diet and exercise as cognitive modulators to demonstrations of the three-step encoding, storage, and retrieval process that the structure so consistently performs. Over this time, it has become apparent that the hippocampus is not only vital for normal human functioning but also central to what makes us uniquely human.





At the center of this hippocampal research are place cells, individual neurons in the hippocampus that become active when an animal “enters a particular cell-specific place in its environment.” These cells pick up distinctive components of an organism’s surroundings and then organize their outputs in a way that allows the brain to understand its own location in space.





The hippocampus, then, is a model system for neural information coordination. It uses consistently reliable coding to function like a GPS, signaling the animal’s location through a pattern of activity across a population of place cells. The particular combination of cells that are active and silent at each location behaves like a jumbotron, in which many individual pixels together form a single picture: taken as a population, the cells code for the animal’s current location. This ensures that whenever a cell discharges, there is (more or less) a single, simple interpretation of the animal’s position.








The human hippocampus (indicated here in red) is a

bilateral structure located under the cerebral cortex

in the medial temporal lobe.

(Image courtesy of Wikimedia Commons.)

A different pattern of active and silent cells signals a different location, and, location by location, these patterns chart the space the animal is in. Together, place cells thus work as the brain’s cognitive map, a mental representation of the places the animal knows and is familiar with.
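To make the jumbotron analogy concrete, here is a minimal sketch in Python (my own illustration, not drawn from the research discussed in this post) of how a pattern of active and silent cells can code for, and be decoded into, a location. The cell count, location names, and activity patterns are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_cells = 16                    # a tiny, hypothetical place-cell population
locations = ["nest", "arm A", "arm B", "food port"]

# Each location gets its own pattern of active (1) and silent (0) cells --
# the population-level "picture" on the jumbotron.
codebook = {loc: rng.integers(0, 2, n_cells) for loc in locations}

def decode(activity):
    """Return the stored location whose pattern best matches the observed activity."""
    return min(codebook, key=lambda loc: np.sum(codebook[loc] != activity))

# A noisy observation: the "arm B" pattern with one cell flipped.
observed = codebook["arm B"].copy()
observed[0] ^= 1

print(decode(observed))         # almost certainly still "arm B"
```

Flipping one cell’s activity still decodes to the right place, which is the sense in which the population code gives the animal’s position a (more or less) single interpretation even when individual cells are noisy.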





To better understand place cells, Dr. André Fenton runs a lab at New York University (NYU) that aims to “[investigate] the role of the hippocampus in controlling how we choose relevant information to process” by “studying the interaction of memories and neural activity in signaling information from multiple spatial frames.” In a landmark experiment with Dr. Todd Sacktor, Dr. Fenton identified “protein kinase M zeta (PKMzeta) as a key molecular component of long term memory.” When PKMzeta is selectively inhibited in specific brain areas, long-term memories are erased “even a month after rats learn a place avoidance task.”





More specifically, when PKMzeta is inhibited, “place cells lose their spatial firing specificity.”
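As a rough illustration of what “losing spatial firing specificity” means, consider the toy comparison below (my own sketch, not a model from the Fenton and Sacktor studies; the field width and firing rates are invented numbers). An intact place cell fires mostly at one spot on a track; a cell with flattened tuning fires at the same average rate everywhere, carrying no information about place.

```python
import numpy as np

positions = np.linspace(0, 100, 200)               # positions (cm) along a track

def place_field(center=50.0, width=5.0, peak=20.0):
    """Gaussian firing-rate map (spikes/s) for an intact place cell."""
    return peak * np.exp(-((positions - center) ** 2) / (2 * width ** 2))

intact = place_field()
impaired = np.full_like(positions, intact.mean())  # same mean rate, but no tuning

def specificity(rate_map):
    """Crude specificity index: peak firing rate relative to mean firing rate."""
    return rate_map.max() / rate_map.mean()

print(f"intact:   {specificity(intact):.1f}")      # well above 1: firing singles out a place
print(f"impaired: {specificity(impaired):.1f}")    # ~1.0: firing says nothing about place
```

The intact cell’s peak rate stands far above its average rate; the flattened cell’s does not, which is one crude way to see why downstream circuits can no longer read a location out of its firing.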





In short, then, rats that have been trained to avoid certain locations in a small chamber will no longer be able to (or remember to) avoid them, because their place cells fail to properly communicate location-based information within their brains.





To these rats, tasks that were once practically second nature are now unfamiliar and brand-new.





While this research has yet to be applied in humans, the wide-ranging implications of being able to essentially reset memory warrant ethical consideration. Let’s imagine that “one day we can create a drug that's really focused-- that can take out specific kinds of memories.” In what situations could we use this drug clinically? Who would be permitted to use it? What kinds of regulations would need to be in place? And, most importantly, what would it mean to allow human beings to selectively “delete” certain memories and not others?








A tractographic reconstruction of some of the many

neural connections in the human brain.

(Image courtesy of Wikipedia.)

About 150 years after Julius Aranzi gave a name to the hippocampus, Scottish philosopher David Hume published A Treatise of Human Nature. In this book, Hume famously argued for a then-radical idea of human nature and identity: “that the ‘self,’ as we conceive of it, is not a single spiritual or psychological entity, like a ‘soul,’ but rather a collection of discrete sensations and impressions — a bundle.” Linked together, these “bundles” create the distinctive and unique “self” that separates us human beings from one another. The key to Hume’s argument lies in what links these bundles together.




We call them memories.





In this view, memories are at the center of what makes us distinctive individuals. They are what distinguish me from you, what constitute the core of our identities, and what separate the past from the present and the present from the future. As one philosopher put it, “life without memory is no life at all.” (This is why Alzheimer’s disease and other progressive brain disorders are so widely feared in the modern world; by involuntarily tearing memories away, these diseases slowly strip individuals of their pasts and, many argue, of who they are.)





What, then, are the consequences of changing memories? If memories are what make us who we are, does removing or modifying them change who we are? If I can no longer recall the car accident that sent my mother into therapy for six years, or the way I felt the day I was rejected from ten universities, am I still me? And if I were still myself after these changes, how much would I have to modify my memory to alter who I am? A couple of uncomfortable incidents? A dozen? All of them?





These questions have yet to be answered.





Still, “a lot of unpleasant, a lot of difficult memories,” bioethicist Art Caplan says, “form who we are. We learn. It becomes part of our character, our identity. Some might say the struggle against bad experiences is part of what makes us better people.” But are there situations in which this hypothetical (though, as our discussion of place cells suggests, not purely hypothetical) memory-repression drug may prove useful?








"What if I told you that I could erase some of your memories?

I'll just give you a pill, and poof, they're gone. Would

you do it?" science web producer David Levin asks.

(Image courtesy of Pixabay.)

Many war veterans are plagued by nightmares and emotional trauma, often becoming prisoners to a condition we now call PTSD (post-traumatic stress disorder). These patients could benefit from having the option to break down the memories that are the source of their disorder. The same logic could be extended to scores of tormented and tortured individuals. Targets of violent personal assaults. Victims of childhood trauma. Rape survivors.



In these circumstances, memory medication could offer individuals the promise of regaining much of the functioning they had before the traumatic incident. Rather than change who a person currently is, clinical applications of memory medication could restore who the person originally was.





But where do we draw the line? How do we determine whether a memory is bad enough to warrant modification? And who determines whether a memory warrants modification at all? Doctors? Patients? Lawmakers?





While place-cell and PKMzeta research is still far from granting us the ability to selectively modify memory, these ethical considerations are relevant to any serious discussion of the future of the hippocampus as we know it.





Perhaps one day, the answers to these questions will help us tackle the biggest questions of all: can the sea monster-shaped structure in our brains someday shield us from the monsters of the real world? 





Do we want it to?





References



Barry, Jeremy M., et al. "Inhibition of protein kinase Mζ disrupts the stable spatial discharge of hippocampal place cells in a familiar environment." Journal of Neuroscience 32.40 (2012): 13753-13762.



Baylis, Françoise. "'I am who I am': On the perceived threats to personal identity from deep brain stimulation." Neuroethics 6.3 (2013): 513-526.



Bir, Shyamal C., et al. "Julius Caesar Arantius (Giulio Cesare Aranzi, 1530–1589) and the hippocampus of the human brain: history behind the discovery." Journal of Neurosurgery 122.4 (2015): 971-975.



Carey, Benedict. "Brain Researchers Open Door to Editing Memory." The New York Times. The New York Times, 05 Apr. 2009. Web. 08 Apr. 2017.



Gentile, Sal. "If we erase our memories, do we erase ourselves?" PBS. Public Broadcasting Service, 24 Nov. 2010. Web. 12 Apr. 2017.



Hume, David. A Treatise of Human Nature. Courier Corporation, 2003.



Levin, David. "Ethics of Erasing Memory." PBS. Public Broadcasting Service, 13 Jan. 2011. Web. 11 Apr. 2017.



Nadel, Lynn, and Morris Moscovitch. "Memory consolidation, retrograde amnesia and the hippocampal complex." Current Opinion in Neurobiology 7.2 (1997): 217-227.



Pastalkova, Eva, et al. "Storage of spatial information by the maintenance mechanism of LTP." Science 313.5790 (2006): 1141-1144.



West, Mark J., et al. "Differences in the pattern of hippocampal neuronal loss in normal ageing and Alzheimer's disease." The Lancet 344.8925 (1994): 769-772.



Zimmer, Carl. "Memory researchers, rebuffed by science, came roaring back." STAT. STAT, 23 June 2016. Web. 09 Apr. 2017.





Want to cite this post?



Ramesh, Sunidhi. (2017). The [Sea] Monster Inside Me. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/04/the-sea-monster-inside-me.html