
Tuesday, June 6, 2017

The Neuroethics Blog Series on Black Mirror: Virtual Reality


By Hale Soloff




Hale is a Neuroscience PhD student at Emory University. He aims to integrate neuroethics investigations with his own research on human cognition. Hale is passionate about science education and public science communication, and is pursuing a career in teaching science. 





Humans in the 21st century have an intimate relationship with technology. We spend much of our lives being informed and entertained by screens. Technological advancements in science and medicine have helped and healed in ways we previously couldn’t dream of. But what unanticipated consequences may be lurking behind our rapid expansion into new technological territory? This question is continually explored in the British sci-fi TV series Black Mirror, which provides a glimpse into the not-so-distant future and warns us to be mindful of how we use our technology and how it can affect us in return. This piece is the first in a series of posts that will discuss ethical issues surrounding neuro-technologies featured in the show and compare how similar technologies are impacting us in the real world.



Black Mirror – Plot Summary 




Some of the neuro-technologies featured in Black Mirror at first seem marvelous and enticing, but the show repeatedly illustrates how abusing or misusing such technologies can lead to disturbing, and even catastrophic, consequences. This may seem scary enough, but what if the goal of a device was to intentionally frighten its user? 




In the episode “Playtest” a man named Cooper volunteers to help a video game company test out a brand-new device, referred to as a “mushroom.” After being warned that using the device requires a small, reversible medical procedure, supposedly no more invasive than getting his ears pierced, Cooper signs a consent form and the mushroom is injected into the back of his head. The mushroom records electrical activity from his brain, uses intelligent software to determine what he fears the most, and then stimulates his brain with more electricity to make him see a “mental projection” of his fears. As an arachnophobe, Cooper first sees a spider crawling towards him that nobody else can see; in fact, the mental projections he sees are so convincing that he becomes skeptical of whether another human being, whom he can see and hear, is real or simply a projection. 







Image courtesy of Flickr.

Then Cooper’s mushroom device malfunctions. Despite being assured that it can only make him experience audio and visual stimuli and that nothing he sees can physically hurt him, Cooper feels pain when attacked by a knife-wielding projection. Apparently, this is because “data tendrils” from the mushroom’s neural net dug deeper into his brain and took root, causing him to feel physical pain when he was struck by the projection. Soon after that, the neural net wipes Cooper’s memory, leaving him with no knowledge of himself or his loved ones. In a chilling end to the episode, an incoming phone call to Cooper’s cell phone interferes with the signal of the device, causing the mushroom to malfunction and kill Cooper by over-stimulating his brain.



The technology used in “Playtest”




The device that Cooper tests in this episode is an “interactive augmented reality system,” a chimera of three technologies that exist today. The first, virtual reality (VR), involves wearing a headset that blinds you to the outside world and instead places you in a virtual 360° environment that you can observe and reach out to touch with wearable, glove-like controllers. VR has become a popular technology in the world of gaming because of the feeling of full immersion that it gives the user. VR has even been used in both research and rehabilitation of human cognitive processes; for instance, scientists at Emory University use immersive VR exposure therapy to help treat combat-related post-traumatic stress disorder (PTSD) in veterans. By controlling how faithfully the virtual environment represents traumatic stimuli, clinicians can let individuals with trauma disorders and phobias safely confront “triggering” scenarios while practicing coping methods. The second technology, augmented reality (AR), differs from VR in that it overlays a virtual image onto a real environment. For example, the popular AR game “Pokémon Go” allows users to observe Pokémon in their homes, playgrounds, and shopping malls through their phone’s camera.







Image courtesy of Wikimedia.

In Black Mirror, Cooper experiences a combination of these two technologies: using the mushroom was fully immersive like VR, but it also projected objects into his real-world environment like AR. This combination is made possible through a Brain-Computer Interface (BCI), the third real-life technology used in Black Mirror. BCIs are direct connections between a brain and a computer: the user can control the computer with their voluntary cognition, and the computer can sometimes affect the user with electrical stimulation. The uses for BCIs are extensive, from advanced prosthetic limbs that can be moved with the user’s concentration and intent, to computer-controlled Deep Brain Stimulation (DBS) for the treatment of Parkinson’s disease and treatment-resistant depression. We are even on the verge of BCI-controlled video games, some of which will use electroencephalogram (EEG) electrodes to measure and interpret the player’s brain waves.
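
For readers curious about what "interpreting brain waves to control a game" might look like at the simplest level, here is a minimal, purely illustrative sketch in Python. It estimates the power of one EEG frequency band (the 8-12 Hz "alpha" band is a common example) and maps it to a control value a game could read. The function names, parameter choices, and synthetic data are assumptions for illustration, not any particular headset's API.

```python
# Minimal, illustrative sketch (not any specific product's API): one common
# description of EEG-based control is to estimate the power of a frequency
# band (e.g., 8-12 Hz "alpha") in a short window and map it to a control value.
import numpy as np
from scipy.signal import welch

def band_power(eeg_window, fs, band=(8.0, 12.0)):
    """Estimate the average power of a 1-D EEG window within `band` (Hz)."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=min(len(eeg_window), 2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

def control_signal(eeg_window, fs, baseline_power):
    """Map band power relative to a resting baseline onto a value in [0, 1]."""
    rel = band_power(eeg_window, fs) / baseline_power
    return float(np.clip(rel - 1.0, 0.0, 1.0))  # above baseline -> stronger input

# Toy usage with synthetic data standing in for one electrode's recording:
fs = 256                                    # sampling rate in Hz (assumed)
t = np.arange(2 * fs) / fs                  # two seconds of signal
fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
baseline = band_power(fake_eeg, fs) * 0.8   # pretend resting baseline
print(control_signal(fake_eeg, fs, baseline))
```

Real consumer and research systems add filtering, artifact rejection, and machine learning on top of this, but the basic record-then-interpret step is the same.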



Do I need to be worried about having a “mushroom device” in my brain? 





Though the mushroom device utilized in Black Mirror bears similarity to current technologies, it is important to consider the differences between what is presented in the media and what capabilities we have today. Both the mushroom and BCIs can be used to record the brain’s electrical activity while simultaneously stimulating the brain to affect its behavior. However, the mushroom in “Playtest” is inserted quickly and easily into Cooper’s brain, presumably by somebody with little or no medical training. BCIs that directly stimulate the central nervous system, such as DBS or ECoG, require an invasive surgical procedure performed by highly trained brain surgeons. Although the mushroom is advanced enough to determine Cooper’s fears and thoughts, our current ability to analyze and interpret brain activity does not allow for the degree of “mind-reading” exhibited in the show (contrary to the neuro-hype surrounding consumer BCIs). Neural activity recorded by a BCI device under highly controlled conditions can be translated into meaningful, but limited, psychological information, such as predicting intentions slightly before they are acted upon and recognizing thought patterns that are distinct for different objects. The closest we’ve come to “mind-reading” is the neuroimaging work of Jack Gallant’s lab, but attempts at “mind-writing” images or words with BCIs have yet to be made.



Ethical issues featured in “Playtest” 







A DBS (deep brain stimulation) procedure. Image courtesy of Wikimedia.

The technologies described in “Playtest” give rise to a host of ethical concerns, one of the most salient being a violation of autonomy. Normally, clinicians and patients work together to determine the correct level of stimulation that a therapeutic device like DBS should deliver to the brain. A proposed future version, “closed-loop” DBS, which is on the horizon but not yet employed in humans even experimentally, uses a computer algorithm to determine stimulation levels based on current brain activity, with the goal of diminishing the need for external control by the user or clinician. This closed-loop design is how the mushroom in Cooper’s nervous system can first record his brain activity, then analyze it to determine his fears, and finally deliver a fearful experience to him using stimulation. To be clear, these technologies are not being developed to manipulate the kinds of complex sensory or perceptual images seen in “Playtest.” Again, existing brain stimulation technology does not allow for the controlled, vivid hallucinations that Cooper sees. However, some ethicists are exploring whether “closing the loop” on brain stimulation could lessen a user’s real or perceived agency, the capacity of the individual to act independently. Allowing stimulation adjustments to be decided by the algorithm, rather than consciously made by the patient or clinician, may end up diminishing the user’s agency, even in applications far less dramatic than “Playtest,” such as facilitating movement.
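
To make the closed-loop idea above concrete, here is a purely conceptual sketch of a record-analyze-stimulate cycle. It is not based on any real device's software: `read_biomarker` and `set_amplitude` are hypothetical placeholders, and the point is simply that the adjustment decision sits with the algorithm rather than with the patient or clinician, which is exactly where the agency concern arises.

```python
# Purely conceptual sketch of a "closed-loop" stimulation cycle: record a
# biomarker, compare it to a target, and let the algorithm (not the patient or
# clinician) nudge the stimulation amplitude. `read_biomarker` and
# `set_amplitude` are hypothetical placeholders, not a real device API.
import random

def closed_loop_step(read_biomarker, set_amplitude, target, amplitude,
                     gain=0.1, limits=(0.0, 3.0)):
    """One iteration: measure, compare to target, adjust amplitude within bounds."""
    error = read_biomarker() - target                      # e.g., excess band power
    amplitude = amplitude + gain * error                   # proportional adjustment
    amplitude = max(limits[0], min(limits[1], amplitude))  # hard safety bounds
    set_amplitude(amplitude)                               # command the stimulator
    return amplitude

# Toy usage with stand-in functions for the sensing and stimulating hardware:
current = 1.0
for _ in range(5):
    current = closed_loop_step(lambda: random.uniform(0.8, 1.4),  # fake biomarker
                               lambda amp: None,                  # fake stimulator
                               target=1.0, amplitude=current)
    print(round(current, 3))
```

Nothing in the loop asks the user anything; that design choice is the feature (less burden on the patient) and the worry (less of the patient's own agency) at the same time.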





Similarly, the fully immersive experience facilitated by VR and AR in “Playtest” can become an infringement of the user’s autonomy. One of the core appeals of VR in gaming is the factor of immersion: rather than learning which button on the controller makes you grab an object or move the camera, you simply reach out and grab the object with your hand or turn your head to observe more of your environment. This, however, leads us to the doorstep of a disturbing possibility: the inability to escape. I have listened to individuals playing horror games on VR devices exclaim, “Oh my gosh, it’s so much scarier because I can’t just look away.” They were not truly upset, because they knew that escape was as easy as taking off the headset, turning off the device, and leaving the room. But if VR capabilities are implanted into your brain, like those seen in this Black Mirror episode, there may be no escape from frightening, threatening, or even painful stimuli. While VR technology currently presents light, sound, and even touch through external hardware, a BCI like the one used in “Playtest” directly “hijacks” your sensory system, causing you to experience stimuli that are not truly present. Black Mirror shows us a scenario that is only possible because of the unique features of BCI and VR: inescapable torture inflicted by an entertainment system.








Image courtesy of Airman Magazine.

Finally, BCIs give rise to a unique privacy issue: if the gaming company in “Playtest” misplaced Cooper’s data, potentially anyone could know his innermost fears and personal thoughts. If the mushroom device were capable of “mind-reading” his subconscious fears, intentions, and more, that information could easily be saved and sold to interested companies; a law enacted this April allows internet service providers and other companies to sell their customers’ personal data (like social media and search-engine browsing habits) without consent. As discussed, our current understanding of brain data allows us to interpret limited information in a controlled environment, but as our ability to interpret brain signals improves (and as more data is aggregated), this issue of brain privacy may become more pressing. It is important that those involved in facilitating BCI understand and minimize the risks involved: engineers can design BCIs to be safer and more secure, clinicians can protect patients’ brain data and explain the risks of BCI to their patients, and policy-makers can carefully consider how to protect an individual’s right to his or her brain activity.




Conclusion – Invasive BCI in Gaming





Entertaining media like Black Mirror raises interesting questions about the ethics of technology, but when trying to answer these questions, we need to be aware of the current state of technology and separate fact from fiction. Engineers, neuroscientists, and ethicists are working together to design safer, noninvasive, and more ergonomic BCIs; if they succeed, these advances would improve brain stimulation treatment for patients, allow more individuals to safely use BCIs, and might even permit more advanced BCIs in gaming. Perhaps we could even create, as stated in the episode, “the most personal survival horror game in history…one that works out how to scare you using your own mind.”




Want to cite this post?



Soloff, H. (2017). The Neuroethics Blog Series on Black Mirror: Virtual Reality. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/06/the-neuroethics-blog-series-on-black.html

Tuesday, May 30, 2017

Gender Bias in the Sciences: A Neuroethical Priority


By Lindsey Grubbs




Lindsey Grubbs is a doctoral candidate in English at Emory University, where she is also pursuing a certificate in bioethics. Her work has been published in Literature & Medicine and the American Journal of Bioethics Neuroscience, and she has a chapter co-authored with Karen Rommelfanger forthcoming in the Routledge Handbook of Neuroethics.   





In a March 29, 2017 lecture at Emory University, Dr. Bita Moghaddam, Chair of the Department of Behavioral Neuroscience at Oregon Health & Science University, began her talk, “Women’s Reality in Academic Science,” by asking the room of around fifty undergraduate and graduate students, “Who’s not here today?”





The answer? Men. (Mostly. To be fair, there were two.) Women in the audience offered a few hypotheses: maybe men felt like they would be judged for coming to a “women’s” event; maybe they wanted the women in their community to enjoy a female-majority space; maybe they don’t think that gender impacts their education and career.





Moghaddam seemed inclined to favor this third view: anecdotally, she has noticed a marked lack of interest from younger men when it comes to discussing gender bias in the sciences. More interested, she suggested, are older men who run laboratories or departments and watch wave after wave of talented women leave the profession, and those who have seen their partners or children impacted by sexism in science.





Dr. Moghaddam was invited to speak in Atlanta for her work against the systemic bias facing women in the sciences. She co-authored a short piece in Neuropsychopharmacology titled “Women at the Podium: ACNP Strives to Reach Speaker Gender Equality at the Annual Meeting.” The essay (and a corresponding podcast) describes measures taken while Moghaddam was chairing the program committee for the competitive, invitation-only American College of Neuropsychopharmacology (ACNP) annual meeting.





The problem? Although well-represented in ACNP membership, women were underrepresented in the conference’s prestigious speaking opportunities. In 2010, for example, only 7% of plenaries were delivered by women (the numbers in 2011 and 2012 were a bit better, at 33% and 15% respectively). As a result, women were not getting the same valuable exposure and opportunities for professional development as male scientists. Moreover, the quality of the science may have been suffering from a lack of diverse perspectives, and younger scientists could have been turned off by panels full of old white men (somewhere around 50% of neuroscience graduate students are women).








Portrait of M. and Mme Lavoisier.

Image courtesy of Wikipedia.



And neuroethicists should take note. Whose voices are amplified and whose are dampened in neuroscience is a fundamental ethical question. Many of history’s most egregious ethical violations were supported in part by scientific bias: American slavery was justified by racial medicine developed by white men (see, for instance, the work of pro-slavery Samuel Cartwright), and women’s exclusion from education and the public sphere in the nineteenth century was advocated for by early male neurologists (like S. Weir Mitchell’s Doctor and Patient). In interrogating the ethical issues facing the science of the mind today, neuroethics must not neglect the practices that shape the field and dictate its contours by amplifying some voices while dampening others.





In short, more inclusive science is better science, as shown by the work of feminist science scholar Deboleena Roy. With a doctorate in reproductive neuroendocrinology and molecular biology, Roy argues that her feminist training allowed her to innovate in her neuroendocrinological research, leading to important new insights about the hypothalamic-pituitary-gonadal axis. In her essay “Asking Different Questions: Feminist Practices for the Natural Sciences,” she examines the ways that the “methodology of the oppressed” allows scientists to ask new questions.





In a 2014 article for Neuroethics, Roy points to a need both for neuroethicists studying sex and gender difference to engage with histories of medicine and feminist theory, and for feminist theorists to become comfortable enough with tools like neuroimaging that they can contribute to, rather than simply critique, work in neuroscience. Roy argues that neuroethics may be a productive theoretical space where feminist scholars and neuroscientists can explore these issues together, suggesting as one possible topic (among others) the assumption that structural differences in the brain equate to functional differences in behavior.





The Gendered Innovations website, sponsored by the National Science Foundation, Stanford, and the European Commission, provides resources and case studies to bring gender and sex analysis into research, stating plainly that integrating such methods “produces excellence in science, medicine, and engineering research, policy, and practice.” The group argues that “Doing research wrong costs lives and money,” pointing to issues as diverse as pharmaceuticals pulled from the market because they were life-threatening to women and injuries in vehicle accidents because crash dummies are modeled after the average male body. Bringing gender analysis to research, they claim, is good for research, society, and business. This is certainly true in neuroscience, where bias is built into the very infrastructure of research, as in the case of neuroimaging scanners engineered for the larger average male head, which yield less precise results for scans of the female brain. And the stakes of such failures are high. Because neuroscientific research so often interrogates fundamental questions about personhood, morality, consciousness, and intelligence, research bias has the potential to skew our perception of identity in particularly damaging ways.





Gendered Innovations takes for granted that a more inclusive science is a better science, with interlocking initiatives to increase the number of women in science, to promote equality within science organizations, and to improve research by including sex and gender analysis. Certainly, not all female neuroscientists pursue feminist work, and not all feminist work is pursued by women, but creating an inclusive environment that foregrounds anti-bias as a goal must be an important part of an ethical neuroscience. This work needs to happen at many levels: feminist theory excels at this kind of critique, and feminist work in the lab is changing the course of neuroscience, but Dr. Moghaddam’s work suggests that we must not neglect administrative, pragmatic solutions.





Moghaddam’s team found a surprisingly simple and effective solution to the problem of unequal representation at the annual ACNP conference. More women were included on the program committee, and the call for proposals was edited to include the following phrase: “While scientific quality is paramount, the committee will strongly consider the composition of the panels that include women, under-represented minorities and early career scientists and clinicians.” Following this addition, more than 90% of panel proposals included at least one woman, a dramatic improvement, even though women still made up only about 35% of speakers. Essentially, once all-male panels became a competitive disadvantage, proposers were more proactive about assembling gender-diverse panels. Notably, attendees’ assessment of the scientific quality of the annual meeting improved at the same time that women’s representation did.








Image courtesy of Flickr.

Dr. Moghaddam, by all measures an incredibly successful scientist, shared many experiences of gender discrimination, from expectations that she do departmental “chores,” to colleagues’ horror that she would become pregnant early on the tenure track, to being mistaken for an administrative assistant or janitor. And such experiences are more than anecdotal. This powerful article in the Harvard Business Review begins with a stark statistic from the U.S. National Science Foundation (NSF), which suggests that while approximately half of doctoral degrees in the sciences are awarded to women, they hold only 21% of full professorships—even though women often outperform their male colleagues early in their careers. The authors’ analysis suggests that, while women obtain 10-15% more prestigious first-author papers than men (the position for the junior scientist who led the research and writing efforts), women are significantly under-represented in last-author papers (the spot reserved for the senior scientist whose grant money and ideology likely guided the paper).





While some suggest that women simply aren’t entering the STEM pipeline, or leak out of it due to a desire for better work-life balance, research suggests that systemic gender bias may be a more fundamental problem, with women having to continually prove themselves in the face of doubt, being expected to walk the tightrope of “appropriate” masculinity and femininity, facing diminished opportunities and expectations after having children, and facing increased discrimination and competition from other women in their field. Many of these forms of bias, the authors note, are reported at higher rates by women of color, and Black and Latina women also report feeling that social isolation in the department was essential for maintaining an air of competence.





These challenges are not unique to neuroscience; across the university, women are underrepresented in full-time, tenure-track positions and overrepresented in part-time and contingent positions. Of note to many neuroethicists, philosophy has a particularly poor track record for equal representation of women in full-time, tenure-track positions. Neuroethicists, then, should be vigilant. If we (over-simply, of course) imagine neuroethics as a kind of petri dish of philosophy and neuroscience, then we certainly have some deep-seated demons to confront, despite the strength of our female founders and contributors. (Take an internet stroll to NeuroEthicsWomen Leaders to see for yourself.)





Many women organize around these kinds of issues, advocating for a more equitable environment. The website anneslist compiles lists of female neuroscientists to facilitate speaking and networking opportunities. And at Emory, Moghaddam’s talk was sponsored by Emory Women in Neuroscience, an organization founded by graduate students in 2010 to create a supportive environment in a department where 75% of graduate students were women but only 25% of the faculty were. (According to a rough count of the faculty page, this statistic has improved in the past seven years, but only slightly.) Spearheaded by president Amielle Moreno, the organization holds several events per year, from social events like BBQs and a Girl Scout Cookie and Wine Night, to screenings of films about women in science (this year, they sponsored a viewing and discussion of Hidden Figures), to bootcamps that provide women with a friendly environment in which to learn to code. The group’s mission clearly resonates, and this was one of the better-attended talks I’ve seen in my years at Emory, even with (very) few men present.








Image courtesy of Wikimedia.

And on this last point, Dr. Moghaddam did not lay the blame for men’s lack of interest in the talk solely at their own door. The women in the room, she suggested, could have been more proactive about trying to bring their male friends and colleagues to the table. Women in academia (and beyond, of course) will likely take the lead in advocating for change. But this is often a perilous position. As one audience member pointed out, women may worry that they’ll be branded as “difficult” if they call out peers or superiors for sexism. And they may be right. What this suggests, then, is that men need to take on more responsibility for doing this labor. If women’s positions are already more vulnerable, those with more security should seriously consider how they can proactively improve equality. Perhaps incentives like those employed by ACNP could be applied in more contexts, driving home to men that gender equality is in their best interest.





But gender parity can be only one goal. Sexism can be practiced by women as well as men, so a fifty-fifty faculty split wouldn’t automatically fix attitudinal problems. There is a difference, too, between science done by women and feminist science (for more on this, see here, here, or here). Further, we must consider who is being left out when we talk in these terms. My own discussion above, for instance, relies uncomfortably on the categories “men” and “women,” leaving out those whose gender identities may not fit neatly into those categories. Moreover, by talking about “women” as a group with a unified experience, we can overlook the nuances of racialized sex discrimination. Although my own discussion has focused on gender rather than race, racial bias within science is also rampant (see posts here and here), impoverishing the quality of research. On these fronts, and more, fields like neuroscience and neuroethics should have open discussion, and experimentation with pragmatic changes like the one that led to ACNP’s improved representation of women at the podium. Neuroethicists have a duty to stay informed about bias in the academy and to work both pragmatically and imaginatively to develop and support a more inclusive field—and women shouldn’t have to do it alone.




Want to cite this post?



Grubbs, L. (2017). Gender Bias in the Sciences: A Neuroethical Priority. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/05/gender-bias-in-sciences-neuroethical.html



Tuesday, May 23, 2017

How you’ll grow up, and how you’ll grow old


By Nathan Ahlgrim




Nathan Ahlgrim is a third year Ph.D. candidate in the Neuroscience Program at Emory. In his research, he studies how different brain regions interact to make certain memories stronger than others. In his own life, he strengthens his own brain power by hiking through the north Georgia mountains and reading highly technical science...fiction.




An ounce of prevention can only be worth a pound of cure if you know what to prevent in the first place. Preventing or delaying disease onset can be fairly straightforward when the prevention techniques are rooted in lifestyle, such as maintaining a healthy diet and weight to prevent hypertension and type-II diabetes. Disorders of the brain, however, are more complicated, both to treat and to predict. The emerging science of preclinical detection of brain disorders was on display at Emory University during the April 28th symposium entitled “The Use of Preclinical Biomarkers for Brain Diseases: A Neuroethical Dilemma.” The symposium brought together perspectives from ethicists, researchers conducting preclinical research, and participants or family members of those involved in clinical research. The diversity of panelists provided a holistic view of where preclinical research stands and what must be considered as the field progresses.





Throughout the day, panelists discussed the ethical challenges of preclinical detection through the lens of three diseases: preclinical research and communicating risk in the context of Autism Spectrum Disorder (ASD), interventions and treatment of preclinical patients in the context of schizophrenia, and the delivery of a preclinical diagnosis and stigma in the context of Alzheimer’s disease. The symposium was bookended, appropriately, by discussions of two diseases that typically emerge at the beginning and end of life: ASD and Alzheimer’s disease. Drs. Cheryl Klaiman and Allan Levey discussed the clinical research on ASD and Alzheimer’s, respectively. Drs. Paul Root Wolpe and Dena Davis framed the clinical research by highlighting the ethical challenges that must be addressed in preclinical research on those diseases.





Attempting to detect markers of ASD in infants and Alzheimer’s disease in middle-aged adults raises distinct ethical challenges; even so, common hurdles arise in both diseases, highlighting the universality of the questions that all preclinical research must address. The shortcomings of current scientific practice were vividly portrayed during the symposium by people who are both involved in the research and touched by these diseases. As is true for many ethical dilemmas, a day of discussion did not produce resolutions. The discussion did spawn a consensus, however: transparency in conveying the implications of preclinical research and the options for the patient going forward is critical to ensuring that all patients and families are treated with the dignity they deserve.








Image courtesy of The Blue Diamond Gallery.

Both ASD and Alzheimer’s disease are growing in prevalence and visibility. ASD, a developmental disorder that disproportionately affects boys over girls, is principally characterized by deficits in social communication and by repetitive behaviors. It is now estimated that 1 in every 68 children will be on the autism spectrum [1]. Alzheimer’s disease, the most common cause of dementia, is an age-related neurodegenerative disease characterized by a progressive loss of memory and other cognitive functions. Over 5.5 million people are currently living with Alzheimer’s disease in the United States, and that number is expected to double in the next 20 years. Given the prevalence of both disorders, most of us know someone diagnosed with ASD or Alzheimer’s even if we have not been personally affected by these disorders.





However, visibility can backfire by putting a spotlight on the frightening implications of an Alzheimer’s disease or ASD diagnosis. One parent of an autistic patient shared how he was forced to deal with the consequences of such fear after consulting a doctor about his son’s social development. Although the pediatrician believed that the child was autistic, the pediatrician refrained from sharing this diagnosis because he could not bring himself to deliver what he deemed a ‘death sentence.’ Only later, after the family received the diagnosis by seeking a second opinion, did the pediatrician disclose the original diagnosis. 





This doctor’s (poor) choice of words and delayed diagnosis were discussed, largely unfavorably, during the symposium. Even so, we can all empathize with the fear of a diagnosis that we do not fully understand. Being given a diagnosis of either Alzheimer’s disease or ASD before clinical symptoms manifest raises the specter of a loss of autonomy. As Alzheimer’s disease develops, a patient can lose his or her autonomy as cognitive functions fail. In addition to the personal loss of control, patients with Alzheimer’s disease are often unfairly stigmatized by their community. Loved ones may fear becoming caregivers and prematurely withdraw from relationships. Misinformation about early-stage Alzheimer’s can also jeopardize a patient’s employment long before he or she becomes cognitively impaired. Concerns about autonomy similarly weigh on parents of a child with ASD. After the diagnosis, parents may feel cornered into lifelong care for their child, who may lack access to community resources and never be able to live independently. Not only that, but the mountains of evidence disproving the role of parenting in ASD development are not always sufficient to protect parents from being blamed for their child’s disorder, either by themselves or by their community.





Of course, the goal of preclinical research is to strike before the disease progresses, before it is too late to intervene. Clinical trials for both ASD and Alzheimer’s suggest that effective treatments rely on early detection, pushing researchers to extend the current boundaries of preclinical detection and diagnosis. Treatment outcomes in ASD drastically improve the earlier that intervention starts, which is why Dr. Cheryl Klaiman and the Marcus Autism Center are continuing research on behavioral markers that identify differences in the social behavior of infants as early as 6 months of age [2].








Artistic representation of the neurodegeneration and memory loss that occur in Alzheimer's disease. Image courtesy of Flickr user, Kalvicio de las Nieves.

Sadly, all drugs to treat Alzheimer’s disease that were promising in animal models have failed to show any benefit for human patients. FDA-approved drugs taken by patients with Alzheimer’s only treat the symptoms, not the disease, and even those few approved treatments do not provide symptom relief for all patients. The repeated failures of Alzheimer’s clinical trials may be a product of intervening too late. Dr. Allan Levey described how the brain pathologies of Alzheimer’s disease – plaques of amyloid-beta and tangles of tau – develop for decades before any cognitive impairments appear [3].





However, pushing preclinical diagnosis earlier and earlier raises several concerns. In favor of early diagnosis is the notion that even when scientists do not have good news, the patient’s (or parents’) autonomy must be respected (see the Belmont Report). Therefore, the ethical course of action would appear to be to inform the patient when a positive diagnosis is present, whether the disease is in a clinical or preclinical stage. Only then can the patient (or family member) make an informed decision about his or her health. 





Dr. Paul Root Wolpe presented a counter-argument against informing a patient in all circumstances: given that the preclinical state is, by definition, before clinical symptoms exist, any preclinical diagnosis is probabilistic. What is the threshold before informing and intervening, 80%? 50%? Is it more ethically responsible to subject a family to intensive and expensive treatment for ASD when it is not present, or to let the disorder go untreated? The Belmont Report’s mandate on beneficence and non-maleficence does not offer a clear answer. When striving to act with beneficence and non-maleficence, preclinical research relies on relative risk. That is problematic, given that Dr. Wolpe believes that humans are not built to understand relative risk. 
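
To make the relative-risk point concrete, here is a toy calculation. The numbers are invented for illustration only and are not drawn from the symposium or from any study of ASD or Alzheimer's disease; the point is simply that a large relative risk can correspond to a small absolute change.

```python
# Toy arithmetic only: the numbers are invented to illustrate why relative risk
# can mislead, not taken from any study discussed at the symposium.
baseline_risk = 0.01      # assume a 1% baseline chance of developing the condition
relative_risk = 2.0       # a biomarker said to "double the risk"

absolute_risk = baseline_risk * relative_risk
absolute_increase = absolute_risk - baseline_risk

print(f"Relative risk: {relative_risk:.0f}x")            # sounds alarming
print(f"Absolute risk: {absolute_risk:.1%}")             # 2.0%
print(f"Absolute increase: {absolute_increase:.1%}")     # only 1.0 percentage point
```

A "doubled risk" headline and a "one percentage point" headline describe the same finding, which is one reason communicating preclinical risk is so fraught.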





The harm from a Type I error (a false positive) in the case of preclinical ASD may be negligible. Behavioral therapy designed to help those with ASD has been shown to benefit all children, typically developing or not. The only real cost, then, would be asking extensive time and effort of a family when it was not strictly necessary. Still, given the universality of the benefit, one wonders why scientists should bother with early detection for ASD at all. Integrating such therapy into all classrooms would both provide treatment for the children who need it and reduce the stigma of being “abnormal” or “other,” since all children would participate in the same experience.








Image courtesy of Flickr user, Melissa.

Alzheimer’s disease is different. Science has yet to provide an effective treatment for the disease, and thus a preclinical diagnosis cannot initiate a treatment plan. As mentioned previously, the discovery of effective treatments will likely depend on the ability to detect the disease and intervene early in its progression. This order of events unfortunately means the first cohorts of research participants will not reap the benefits of the science they contribute to. The panelists and audience at the symposium were, unsurprisingly, far more divided about whether they would want to receive a preclinical Alzheimer’s disease diagnosis for themselves than about whether they would want a preclinical ASD diagnosis for their child. Luckily, patients with preclinical Alzheimer’s disease retain full cognitive function, and thus maintain their capacity for autonomy. However, this does not hold true as patients progress from preclinical to clinical Alzheimer’s disease. Changes in personality coincide with [4] or even precede [5] a clinical diagnosis of Alzheimer’s disease, often causing the clinical Alzheimer’s patient to have different wishes and beliefs than he or she had as a preclinical patient. With this in mind, many audience members voiced the opinion that they would prefer to die before the cognitive symptoms of Alzheimer’s began.





However, Dr. Dena Davis brought more nuance to this idea, saying that our prospective sympathy as healthy individuals – our ability to accurately predict how we will feel once in a disease state – is profoundly flawed. A proponent of the right to die, Dr. Davis painted a troubling portrait of a patient given a diagnosis of preclinical Alzheimer’s disease. Say that person chooses to end his life once he becomes severely cognitively impaired. By the time the impairment has taken hold, he may no longer remember the initial wish, or may have completely changed his mind. Whose wishes are to be honored: those of the clinical patient or those of the preclinical patient? This conundrum was also heartbreakingly described in Lisa Genova’s novel, Still Alice.





This quandary is why discussions between ethicists, scientists, and patients are necessary. The ability to detect a disease before clinical symptoms appear is a laudable scientific achievement, but that knowledge must be put in context for the consumers of those technologies. Without context, scientific discoveries fail to do good, and can often do harm.





Effective medicine requires support and trust from the community. Two doses of the Measles-Mumps-Rubella (MMR) vaccine are 97% effective against measles, and yet there were 61 cases in the U.S. in the first four months of this year. These cases occurred a full 17 years after endemic measles was effectively eliminated in the U.S. [6], and they are primarily a result of poor vaccination rates. A combination of fear of unsafe vaccines, mistrust of doctors, and a lack of belief in the need for vaccination drove parents away from the established research on the effectiveness and necessity of vaccines. In parallel, fear of stigma and an unwillingness to face the diagnosis of a brain disorder could similarly push patients away from treatments if scientists are not diligent in their education and branding of the research.








Nathan created this image to be used as the logo for the April 28th neuroethics symposium.

The investment of patients enrolled in preclinical research may produce a larger effect size in clinical trials than would ever be practical outside of a research environment, due to the self-selectivity of the participants. Only invested parents would enroll their children in studies that demand time and continuing effort. Similarly, only highly self-motivated study participants would stick to a treatment schedule of infusions and lumbar punctures before Alzheimer’s symptoms ever appeared. One symposium speaker was first drawn to participate in the Anti-Amyloid Treatment in Asymptomatic Alzheimer’s (A4) study because of a family member who suffered from Alzheimer’s disease. Personal ties to the research are stronger motivators for patients than any academic rationale scientists can offer. If preclinical research for either disease does produce an effective treatment, both scientists and community health partners will need to put forth additional effort to give the treatment broad appeal and accessibility.





The burden of garnering community support may fall on scientists more than many scientists might like to admit. Our representative participant in the preclinical Alzheimer’s study was quick to say that personal interactions keep him motivated to continue the study. A large part of the reason he voluntarily receives infusions of a trial drug from Dr. Allan Levey’s team, and is considering undergoing a lumbar puncture, is the people on the team. The need for scientists to consider the ethics of their research is obvious. However, as our representative study participant underscored, scientists’ interpersonal relationships with their patients must also be consciously developed. That is the only way the resulting research will do any good in the community.





Scientists are still developing treatments for Alzheimer’s disease and ASD. No magic bullet is likely to ever appear. However, a diagnosis does not need to be a death sentence. Preclinical detection enables intervention before clinical pathology appears, allowing for an ounce of prevention to be applied before a pound of cure is needed. This is not to diminish the years of demanding, often heartbreaking labor that is asked of caregivers of people with Alzheimer’s disease or ASD. What should drive the scientific research and treatment plans? When asking what is good for the patient and his or her family, we as scientists must always remember who we are serving, and what our end goals are. As one parent at our meeting remarked, “we may not have the cure, but we have the care.”



References



1. Prevalence and Characteristics of Autism Spectrum Disorder Among Children Aged 8 Years - Autism and Developmental Disabilities Monitoring Network, 11 Sites, United States, 2012. 2016, Centers for Disease Control and Prevention.



2. Jones, W. and A. Klin, Attention to eyes is present but in decline in 2-6-month-old infants later diagnosed with autism. Nature, 2013. 504(7480): p. 427-431.



3. Serrano-Pozo, A., et al., Neuropathological Alterations in Alzheimer Disease. Cold Spring Harbor Perspectives in Medicine, 2011. 1(1): p. a006189.



4. Mega, M.S., et al., The spectrum of behavioral changes in Alzheimer's disease. Neurology, 1996. 46(1): p. 130-135.



5. Balsis, S., B.D. Carpenter, and M. Storandt, Personality Change Precedes Clinical Diagnosis of Dementia of the Alzheimer Type. The Journals of Gerontology: Series B, 2005. 60(2): p. P98-P101.



6. Katz, S.L. and A.R. Hinman, Summary and conclusions: measles elimination meeting, 16-17 March 2000. J Infect Dis, 2004. 189 Suppl 1: p. S43-7.



Want to cite this post?



Ahlgrim, N. (2017). How you’ll grow up, and how you’ll grow old. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/05/how-youll-grow-up-and-how-youll-grow-old.html

Saturday, May 13, 2017

Happy 15th Birthday, Neuroethics!


By Henry T. Greely






Henry T. (Hank) Greely is the Deane F. and Kate Edelman Johnson Professor of Law and Professor, by courtesy, of Genetics at Stanford University. He specializes in ethical, legal, and social issues arising from advances in the biosciences, particularly from genetics, neuroscience, and human stem cell research. He directs the Stanford Center for Law and the Biosciences and the Stanford Program on Neuroscience in Society; chairs the California Advisory Committee on Human Stem Cell Research; is the President Elect of the International Neuroethics Society; and serves on the Neuroscience Forum of the National Academy of Medicine; the Committee on Science, Technology, and Law of the National Academy of Sciences; and the NIH Multi-Council Working Group on the BRAIN Initiative. He was elected a fellow of the American Association for the Advancement of Science in 2007. His book, THE END OF SEX AND THE FUTURE OF HUMAN REPRODUCTION, was published in May 2016. 






Professor Greely graduated from Stanford in 1974 and from Yale Law School in 1977. He served as a law clerk for Judge John Minor Wisdom on the United States Court of Appeals for the Fifth Circuit and for Justice Potter Stewart of the United States Supreme Court. After working during the Carter Administration in the Departments of Defense and Energy, he entered private law practice in Los Angeles in 1981. He joined the Stanford faculty in 1985. 





Fifteen years ago, on May 13, 2002, a two-day conference called “Neuroethics: Mapping the Field” began at the Presidio in San Francisco. And modern neuroethics was born. That conference was the first meeting to bring together a wide range of people who were, or would soon be, writing in “neuroethics;” it gave the new field substantial publicity; and, perhaps most importantly, it gave it a catchy name. 



That birthdate could, of course, be debated. In his introduction to the proceedings of that conference, William Safire, a long-time columnist for the NEW YORK TIMES (among other things), gave neuroethics a longer history: 


The first conference or meeting on this general subject was held back in the summer of 1816 in a cottage on Lake Geneva. Present were a couple of world-class poets, their mistresses, and their doctor. (Marcus) 


Safire referred to the summer holiday of Lord Byron and Percy Bysshe Shelley; Byron’s sometime mistress, Claire Clairmont; and Shelley’s then-mistress, later wife, known at the time as Mary Godwin and now remembered as Mary Wollstonecraft Shelley. The historically cold and wet summer of 1816 (“the year without a summer”) led them to try writing ghost stories. Godwin succeeded brilliantly; her story was eventually published in 1818 as FRANKENSTEIN; OR, THE MODERN PROMETHEUS.






Camillo Golgi, image courtesy of Wikipedia.


Safire’s arresting opening gives neuroethics either too little history or too much. If, like Safire, one allows neuroethics to predate an understanding of the importance of the brain, early human literature – both religious and secular – shows a keen interest in human desires and motivations. So does philosophy, since at least classical Greece. But without a recognition of the critical role of the physical brain in human behavior and consciousness, I do not think those discussions should be called “neuroethics,” though they are its precursors.




It was not until the late 19th century that we saw the beginnings of a deeper understanding not only of the brain’s role but of how it might function, notably through the (dueling) work of Camillo Golgi and Santiago Ramón y Cajal. Al Jonsen has noted that many twentieth-century events posed issues we would today consider “neuroethics.” (Jonsen) The issues seemed particularly intense in the 1960s and early 1970s, with active debates ranging from the uses of electroconvulsive therapy and frontal lobotomies; to legal and medical uses of brain death; to research with psychedelic drugs, aversion therapy, and “mind control.”




But the nascent field calmed down again, until the rise of good neuroimaging in the 1990s, largely through magnetic resonance imaging, first structural and then functional. These technologies took major steps toward connecting the physical brain to the intangible mind and thus linking neuroscience more directly to human society. People began to write about them for both specialized and general audiences (Kulynych 1996, Kulynych 1997, Carter 1998, Blank 1999). And academics noticed. In 2000, based on planning begun by Paul Root Wolpe in 1998, the Bioethics Center at the University of Pennsylvania held three experts’ meetings, the first in January, the second in March, and the third in June.





In retrospect, though, 2002 was clearly the crucial year for neuroethics. It started in January, when the American Association for the Advancement of Science and the journal Neuron jointly sponsored a symposium called “Understanding the Neural Basis of Complex Behaviors: The Implications for Science and Society.” Then, on February 7, 2002, Penn Bioethics held a public conference on “Bioethics and the Cognitive Neuroscience Revolution” as the culmination of its three meetings in 2000.




But the most important meeting was held on May 13 and 14 at the San Francisco Presidio. Sponsored by the Dana Foundation and jointly hosted by UCSF and Stanford, this conference, called “Neuroethics: Mapping the Field,” brought together about 150 neuroscientists, philosophers, bioethicists, lawyers, and others. The Dana Press published the conference proceedings later in 2002; the book was fascinating reading then, and remains so today. 







Santiago Ramon y Cajal, image courtesy of Wikipedia.

Zach Hall of UCSF and Barbara Koenig of Stanford were the main organizers of the meeting. Hall was a neuroscientist who had returned to the UCSF faculty after serving as Director of the National Institute of Neurological Disorders and Stroke at NIH. Koenig was the Executive Director of the Stanford Center for Biomedical Ethics (SCBE). Koenig, a bioethicist who did not then have a deep background in neuroscience, was assisted by others at SCBE, notably Judy Illes, a neuroscience Ph.D. who had recently joined the Center.




Hall, mainly from the neuroscience side, and Koenig, mainly from the bioethics side, organized the meeting, but William Safire was its prime mover. Safire was one of the most interesting people I have ever met. (McFadden) He dropped out of Syracuse University after two years and boasted to me – probably accurately – that he was the last person in American politics to be a college dropout. From 1955 until 1968 he worked in public relations firms, his own after 1961, with occasional time out to work on Republican political campaigns. In 1968 he joined the transition team and then the Nixon White House as a special assistant with a focus on speechwriting, coining, among other phrases, “the nattering nabobs of negativism” for a speech by Vice President Agnew. He left the Nixon Administration to become a political columnist for the New York Times, which he did until 2005. He remained with the Times, however, continuing to write the “On Language” column he had started in the New York Times Magazine in 1979 until shortly before his death from pancreatic cancer in September 2009.




Safire’s New York Times obituary makes no mention of neuroscience or neuroethics, but his involvement was quite real. In 1993 he became a member of the Board of Directors of the Dana Foundation, a private charitable foundation created in 1950 by Charles A. Dana, a lawyer and businessman; in 1998 he became its vice chairman and then in 2000 its chairman. As chairman, Safire made neuroscience the Foundation’s almost exclusive focus.





Hall’s welcome to start the Conference, as published in the conference proceedings, explains Safire’s role in it: 


This meeting had its genesis in a visit to San Francisco by Bill Safire about a year and a half ago, I took Bill down to the new Mission Bay campus at UCSF and we were talking about all the brain research that would be going on there, I said that we also hoped to have a bioethics center. As we were talking about the need for discussion of these issues with respect to the brain, Bill suddenly turned to me and said, neuroethics. It was like that magic moment – “plastics” in the movie The Graduate. Bill said, “neuroethics,” and I thought, “that’s it.” (Marcus) 





William Safire. (Image courtesy of Wikimedia.)

The conference had four sessions, each with a moderator and three or four speakers, plus several mealtime speeches and a concluding section. The sessions were called Brain Science and Self, Brain Science and Social Policy, Ethics and the Practice of Brain Science, and Brain Science and Public Discourse. (In retrospect, and in light of my preferred scope for the field, “Brain Science” would have been a better, broader term than “Neuroscience,” but “neuroethics” and “neurolaw” both sound much better than “brain science ethics” or “brain science law.”)




The speakers and moderators came from both neuroscience and ethics (broadly construed). Many of them were prominent at the time of the conference; many played important continuing roles in the development of neuroethics. From neuroscience came Marilyn Albert, Colin Blakemore, Antonio Damasio, Michael Gazzaniga, Steven Hyman, William Mobley, Daniel Schacter, and Kenneth Schaffner, as well as Zach Hall. Arthur Caplan, Judy Illes, Albert Jonsen, Barbara Koenig, Bernard Lo, Jonathan Moreno, Erik Parens, William Safire, William Winslade, Paul Root Wolpe, and I all spoke from ethics, law, politics, or philosophy. And at least three speakers did not fit neatly into that divide – Patricia Smith Churchland, a philosopher of the mind deeply involved in neuroscience; Donald Kennedy, a biologist and former president of Stanford who, at that time, was the editor of Science magazine; and Ron Kotulak, a science journalist. 





Like many conferences, this one claimed to want more discussion than presentations. My recollection, supported by the conference proceedings, is that, unlike most conferences, it succeeded in this goal. The general discussions between and among the speakers and the invited audience were insightful, and sometimes heated. 




Also like many conferences, this one was created in the hope that it would have some lasting impact. The most immediate consequence was the publication, with impressive speed, of the conference proceedings in July 2002, but perhaps more important was the publicity given to the idea of neuroethics. 




Two days after the conference ended, Safire used his NEW YORK TIMES column to write about neuroethics generally and the conference. After starting the column with the Congressional debate over banning human cloning, Safire moved to the importance of neuroethics, ending with “The conference 'mapping the field' of neuroethics this week showed how eager many scientists are to grapple with the moral consequences of their research. It's up to schools and media and Congress to put it high on the public's menu.” (Safire)








(Image courtesy of Flickr.)

The following week, the cover of THE ECONOMIST proclaimed “The Future of Mind Control” with an image of a shaved head with a dial implanted in its forehead. The issue contained both a long science story on the ethical issues arising from neuroscience and a leader (editorial) on the same subject. (The Economist) While neither ECONOMIST piece used the term “neuroethics” or mentioned the Presidio conference (and the story at least must have been in preparation well before the conference), the effect, especially in conjunction with Safire’s column, was more attention for the issues. 




But perhaps the most important result of the Presidio conference was the field’s name. Safire first used it in print in his May 2002 column, but, according to Hall, had used it with him about 18 months earlier. Although searchers have found earlier uses of the term (Illes, Racine), no one disputes that Safire was the first to use it publicly in its current sense or that he was the one who popularized it. 




It is, in some ways, a poor name for the field. Calling the area “neuroethics” risks limiting it. After all, much of the interest in “neuroethics” is in its legal and social implications, not just its “ethical” ones. And using “ethics” also raises a longstanding tension between philosophers, who sometimes act as though they own the term, and bioethics. I made these arguments at the Presidio conference, but, even as I did so, conceded “I’m afraid this is a doomed argument because I don’t have a better word. ‘Neuroethics’ sounds great.” (Marcus)




On that point at least, I was right. So tonight I’ll raise a glass to “neuroethics” and wish it “Happy birthday, and many happy returns!” And I hope the readers of this blog will join me. 





References



Rita Carter, MAPPING THE MIND (1998, Berkeley, CA: U. Calif. Press).



Robert H. Blank, BRAIN POLICY: HOW THE NEW NEUROSCIENCE WILL CHANGE OUR BRAINS AND OUR POLITICS (1999, Washington, D.C.: Georgetown Univ. Press)



The Ethics of Brain Science: Open Your Mind, THE ECONOMIST (May 23, 2002), accessed on Apr. 29, 2017 at http://www.economist.com/node/1143317.



The Future of Mind Control, THE ECONOMIST, (May 23, 2002), accessed on Apr. 29, 2017 at http://www.economist.com/node/1143583.



Judy Illes, Neuroethics in a New Era of Neuroimaging, 24 Am. J. Neurorad. 1739 (2003)



Albert R. Jonsen, Nudging toward Neuroethics: An Overview of the History and Foundations of Neuroethics in THE DEBATE ABOUT NEUROETHICS: PERSPECTIVES ON THE FIELD’S DEVELOPMENT, FOCUS, AND FUTURE (ed. Eric Racine and John Aspler, forthcoming 2017, Springer)



Jennifer Kulynych, Brain, Mind, and Criminal Behavior: Neuroimages as Scientific Evidence, JURIMETRICS 235-244 (1996) 



Jennifer Kulynych, Psychiatric Neuroimaging Evidence: A High-Tech Crystal Ball? 49 STAN. L. REV. 1249 (1997)



Steven J. Marcus, ed., NEUROETHICS: MAPPING THE FIELD, Conference Proceedings at 4 (2002, Dana Press: New York).



Robert D. McFadden, William Safire, Political Columnist and Oracle of Language, Dies at 79, NEW YORK TIMES (Sept. 27, 2009), accessed on January 1, 2017 at http://www.nytimes.com/2009/09/28/us/28safire.html. (This is my source for most of the biographical information about Safire.)



Eric Racine, PRAGMATIC NEUROETHICS (2010, Cambridge, MA: MIT Press).



William Safire, The “But What If” Factor, NEW YORK TIMES (May 16, 2002), accessed on Apr. 29, 2017 at http://www.nytimes.com/2002/05/16/opinion/the-but-what-if-factor.html.





Want to cite this post?



Greely, H. (2017). Happy 15th Birthday, Neuroethics! The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/05/happy-15th-birthday-neuroethics.html

Tuesday, May 9, 2017

Reading into the Science: The Neuroscience and Ethics of Enhancement


By Shweta Sahu







Image courtesy of Pexels.

I was always an average student: good, just not good enough. I often wondered what my life and grades would have been like if I’d had a better memory or learned faster. I remember several exams throughout my high school career where I simply could not recall certain rote facts or specific details, and now, in college, I wonder how much time I would save, and how much more I could study, if I could somehow learn faster. Would a better memory have led me to do better on my high school exams? Would a faster ability to learn new information have raised my GPA?





These have been the questions for years now in the ongoing debates over memory enhancement and cognitive enhancement, respectively. I’m not the only student to have ever felt this way, and I’m sure I won’t be the last. Technology and medicine seem to be on the brink of exciting new findings, ones that may help us in ways we never before imagined.





Though neuroscientists are still attempting to understand the intricacies of how memory functions, it has been known since the early 1900s that memory works in three modes: working memory, short-term memory, and long-term memory, each of which is localized to different parts of the brain. Working memory, which lasts from seconds to minutes, contains information that can be acted on and processed, not merely maintained by rehearsal. Short-term memory, on the other hand, is slightly longer in duration and occurs in the prefrontal cortex (think of George Miller’s “magical number seven”). If an item in short-term memory is rehearsed, it can be “moved” into long-term memory, and it is this long-term memory that is of particular interest to physicians and clinicians. Long-term memory lasts over days, months, or years and is divided into declarative (explicit) memory and nondeclarative (implicit) memory. Declarative memory can be further subdivided into episodic memory, which comprises memories of personal experiences and autobiographical memories, and semantic memory, which is objective, factual knowledge, deemed “world knowledge.” The brain’s ability to acquire declarative memories depends on the medial temporal lobe regions, which include the amygdala, the hippocampus, and the surrounding parahippocampal, perirhinal, and entorhinal cortical areas. It is within these structures that learning and memory take place, specifically through communication via neurotransmitters and the repeated activation of certain synapses.








Image courtesy of Novalens.

It is also here that enhancement comes in, whether via chemical means (notably the neurotransmitters acetylcholine, dopamine, and serotonin) or via technological means (TMS, DBS, tDCS, etc.). From studies in humans and animals, it is well known that the hippocampus is crucial for the formation of new long-term memories, but because the hippocampus lies deep within the brain, stimulating it electrically is tricky. This is where stimulation of the entorhinal cortex becomes key, as it is heavily connected to the hippocampus. Both transcranial magnetic stimulation (TMS) and deep brain stimulation (DBS) are techniques that target specific regions of the brain, unlike their chemical counterparts (i.e., drugs), which cannot be localized. A revolutionary 2012 study by Suthana et al. tested whether DBS of the hippocampus or entorhinal cortex altered performance on spatial memory tasks. They found that “entorhinal stimulation applied while the subjects learned locations of landmarks enhanced their subsequent memory of these locations,” though direct hippocampal stimulation did not yield similar results. Moreover, while past studies had shown that TMS can improve performance on various tasks, a 2014 study found that repeated TMS over the span of one week improved memory for events at least 24 hours after the stimulation was given, specifically on “memory tests consisting of a set of arbitrary associations between faces and words that they were asked to learn and remember.” This study is particularly noteworthy because it was done on healthy volunteers with “normal” memory, people in whom you would not expect to see marked improvement since their brains were already functioning at their normal capacities.





Enhancement via chemical means is also rising in popularity among adults and college students. For example, Ritalin and Adderall, two stimulants commonly prescribed for ADHD, increase the extracellular concentration of dopamine in the brain by blocking the dopamine transporter. Patients with hyperactive-impulsive ADHD have changes in their dopamine transporter gene, which is why prescribing these stimulants can alleviate their symptoms. However, Ritalin and Adderall are now being used off-label and abused by nonmedical users (those without a prescription) in an attempt to enhance their performance. One intriguing qualitative study found that “stimulants’ effects on users’ emotions and feelings are an important contributor to users’ perceptions of improved academic performance”; in other words, users felt cognitively enhanced. Of the college students interviewed, many reported feeling “up,” and one stated, “your energy level is higher… it’s just easier to function at a highly productive level.” Further, students reported increased drive and motivation, saying that Adderall produced surplus energy that was discharged through an “internal push, pressure.” Moreover, they claimed these stimulant medications allowed them to be “interested” in the material, which in turn increased their enjoyment of it. All this is to say that these students did feel cognitively enhanced and saw nothing wrong with it. In contrast, some students think that the unauthorized use of prescription medications is cheating, whether it is used to enhance motivation, information, or recall. In fact, some school administrators see it the same way, with Duke being the most notable example of a university that has explicitly stated in its student conduct code that such unauthorized use is deemed “cheating.”





That brings us to the contested ethics of cognitive and memory enhancement, both chemical and electrical. Opinion is divided, and there is no clear line in the sand. One guiding principle in medicine is “first, do no harm.” Maurice Bernstein, MD, says that transforming physicians from healers into enhancers has the potential to “degrade” this standard of doing no harm. Richard M. Restak, MD, a clinical professor of neurology, provides another, more pragmatic answer when asked whether he would prescribe enhancers: “I don’t prescribe them… Such use is definitely off-label and puts the physician at a disadvantage should something go wrong." However, Dr. Chatterjee, a prominent neuroethicist who coined the term “cosmetic neurology,” offers the realistic view that “medical economics will drive some physicians to embrace the enhancement role with open arms, especially if it means regaining some of the autonomy lost to managed care plans.” So much of medicine is now dictated by protocols and standard operating procedures, but Dr. Chatterjee suggests this may change if physicians are given this new way to reclaim some of their authority, putting decision-making power back in their hands.








Image courtesy of Wikipedia.

Nevertheless, physicians are not the only ones divided on this issue; the general public seems even more so. Ronald Bailey, a proponent of enhancement and author of Liberation Biology: The Scientific and Moral Case for the Biotech Revolution, argues that disease is a state of dis-ease. He further states, “if patients are unsatisfied with some aspect of their lives and doctors can help them with very few risks, then why shouldn't they do so?" However, Deane Alban, a researcher, writer, and manager of BeBrainFit.com, offers a contrasting opinion. She writes:



“Smart drugs have side effects, are almost always obtained quasi-legally, and may not even work. You have only one brain. You can artificially stimulate it now for perceived short-term benefits. Or you can nourish and protect it so that it stays sharp for a lifetime. The decision is a no-brainer.”



But is it? If we don’t take advantage of such enhancement technologies, will we get left behind? The issue now turns to implicit coercion, where one feels compelled to do or take something in order to keep up, even when one doesn’t want to. This further raises the question of whether employers will begin contemplating enhancement for their employees, even preferring those who function at a higher level than others. Speaking purely in terms of efficiency, why not choose the more productive team member? Already, air force pilots are required (and some medical residents are encouraged) to take Modafinil, a stimulant originally intended to treat narcolepsy and other sleep disorders. If the workforce continuously demands excellence of its employees, why not go a step further and take a cognitive enhancer, if it makes employees less prone to error, able to work and concentrate for longer hours, and more efficient? If surgeons and restaurant employees are “coerced” into washing their hands and following other protocols, this step may not be all that far away for the rest of us, provided these enhancement drugs are proven safe and efficacious.






That said, if there were a way for me to enhance my memory, learn faster, motivate myself to learn more, and enjoy what I learn, I think I would take it, provided it is not considered cheating and it is deemed effective. Plenty of literature on the internet describes how patients with ADHD feel that these medications bring them to a level comparable with their peers. However, results conflict as to whether these smart drugs can enhance people beyond their “normal capacity.” Yet if we can’t make people who use them illegally stop (and we cannot completely and irrevocably accomplish this), is there a time in the near future when we will legalize them for everyone, so that those who choose to take smart drugs can do so of their own volition? At that point, I might just take them. I don’t want to get left behind. Do you?



Want to cite this post?





Sahu, S. (2017). Reading into the Science: The Neuroscience and Ethics of Enhancement. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/05/reading-into-science-neuroscience-and.html