
Tuesday, November 27, 2018

How Artificial Intelligence Reshapes Classroom: Recap of October's The Future Now: NEEDs



By Yunmiao Wang







Image courtesy of Max Pixel


In 2016, hundreds of students enrolled in an online course on Artificial Intelligence (AI) that is part of the Master of Science in Computer Science program at the Georgia Institute of Technology. An online discussion forum allowed students from all over the world to post questions and participate in the virtual classroom. A class of that size generated over 10,000 messages during the semester, and the students’ questions were answered mostly by teaching assistants (TAs). It was not until the end of the course that the students were informed that one of the many TAs was, in fact, an AI. Her name was Jill Watson (see stories of her development and reception here and here).





At October’s installment of The Future Now: Neuroscience and Emerging Ethical Dilemmas Series (NEEDs), Dr. Ashok Goel, the leader of the team that created Jill Watson, presented a talk entitled “AI Agents Among Us: Changing Anthropology of Virtual Classroom.” He described the development of Jill Watson and discussed the impact of AI on education.




Correctness, Coverage, and Authenticity


Inspired by the large size of the online course and the high demand for responses, Dr. Goel and his team first built Jill Watson on IBM’s Watson platform, hence the name. While much of the system is no longer built on IBM’s Watson, the name remains. Goel explained that Jill Watson was never meant to replace human educators; her role is to answer the administrative questions students have. Over 100 messages are posted daily, and 80% of the student questions are about the assessments in the class. It is infeasible for the professors and teaching assistants to answer all of them in a timely fashion. Jill’s ability to answer about a quarter of the total questions means more time for human TAs to help students with their content-related questions. Goel and his team focused on three dimensions when programming Jill Watson: correctness, coverage, and authenticity.







Image courtesy of Max Pixel


Initially, Jill Watson’s answers were not completely correct, but they improved as the software “learned” from previous offerings of the same course. Jill’s responses are built from a large database of thousands of past answers by human TAs. Jill now extracts text from a question, finds the best match in the database, and recalls the corresponding answer from previous courses if the match exceeds a 97% confidence level. Based on the data from 2016 to date, 91% of her responses are completely correct and 9% are partially correct. Dr. Goel emphasized that Jill Watson has not made any mistakes since 2018. In terms of coverage, Jill Watson can answer 34% of student questions about assessments without any human intervention.
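Dr. Goel’s description amounts to a retrieval system gated by a confidence threshold. As a rough illustration only (Jill Watson’s actual implementation has not been published), a minimal sketch of threshold-gated retrieval might look like the following; the string matcher, the toy question bank, and the variable names are all invented for this example.

```python
# A minimal sketch of threshold-gated retrieval, NOT Jill Watson's actual
# implementation. The matcher, the toy data, and the 0.97 cutoff here are
# illustrative stand-ins for the behavior described above.
from difflib import SequenceMatcher

# Hypothetical memory of past TA answers, keyed by the question they answered.
PAST_QA = {
    "when is assignment 1 due": "Assignment 1 is due Sunday at 11:59 pm.",
    "is the midterm open book": "Yes, the midterm is open book and open notes.",
}

CONFIDENCE_THRESHOLD = 0.97  # reply only on near-certain matches

def answer(question: str):
    """Return a stored answer if a past question matches above the threshold."""
    q = question.lower().strip("?! .")
    best_score, best_answer = 0.0, None
    for past_q, past_a in PAST_QA.items():
        score = SequenceMatcher(None, q, past_q).ratio()
        if score > best_score:
            best_score, best_answer = score, past_a
    # Below the cutoff, stay silent and leave the post to a human TA.
    return best_answer if best_score >= CONFIDENCE_THRESHOLD else None

print(answer("When is Assignment 1 due?"))     # confident match -> answer
print(answer("How do I apply minimax here?"))  # no confident match -> None
```

The high cutoff is what trades coverage for correctness: lowering it would let the agent answer more questions at the cost of more partially correct replies.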





Authenticity, in this specific scenario, describes the ability of an AI to respond to student posts like a human TA. The first time Jill Watson was used in the class, an intriguing question was whether students could distinguish her from the human TAs. When Jill was first introduced to the online course, her identity as an AI was kept secret from the students. It came as a surprise to many that there was an AI among the teaching staff. When told that one of the TAs was an AI and asked to guess who it was, some students chose incorrectly. To others, however, a few factors hinted at Jill Watson’s real identity. The near-instantaneous response time at any hour of the day, for example, gave her away. Since then, a short delay has been introduced to her responses. Moreover, Jill Watson now “sleeps,” given that a response from a TA at 3 AM can be quite suspicious. The lack of variability in her responses was another way the students distinguished Jill from the human TAs. As humans, our answers to the same question may vary over time. Jill’s responses, however, showed little variation, as they are determined by the existing database. Despite these clues that could expose her identity, Jill Watson’s level of authenticity is still quite astonishing. Although Dr. Goel and his team originally focused on increasing the authenticity of Jill Watson, they have now shifted their attention to improving Watson’s role as an assistant to educators.
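The two fixes mentioned above, a response delay and a sleep schedule, are easy to picture in code. The sketch below is purely illustrative; the particular hours and delay range are assumptions, not details Dr. Goel reported.

```python
# Illustrative sketch of two authenticity heuristics: a randomized reply
# delay and an overnight "sleep" window. The specific numbers are invented.
import random
from datetime import datetime, time

SLEEP_START, SLEEP_END = time(23, 0), time(7, 0)  # assumed overnight window

def response_delay_seconds() -> float:
    """Randomized delay so replies are not suspiciously instantaneous."""
    return random.uniform(60, 600)  # 1 to 10 minutes, an assumed range

def is_awake(now: datetime) -> bool:
    """Suppress replies overnight; a 3 AM answer would give the agent away."""
    t = now.time()
    return not (t >= SLEEP_START or t < SLEEP_END)

print(is_awake(datetime(2018, 10, 1, 3, 0)))   # False: the agent "sleeps"
print(is_awake(datetime(2018, 10, 1, 14, 0)))  # True: normal reply hours
```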





Impact of AI on Education


A major question shared by Dr. Goel and others across the field of education is how to make education more accessible and affordable. Dr. Goel believes the solution is: “At Scale, Use AI.”




According to Dr. Goel, students in online classes that employ AI TAs perform as well as those in residential classes. An article by Meyer in 2012 reported that the dropout rate of massive open online courses offered by Stanford, MIT, and UC Berkeley ranged from 80 to 98% [1]. One major reason for low student retention is a lack of a collaborative environment [2]. The reported retention rate of online courses with Jill, however, is about 80%, which is comparable to the average retention rate of 83% in a physical class. In addition to their performance, students’ feedback about having an AI TA is also overwhelmingly positive. Moreover, students’ ratings of Jill Watson have tended to improve as the semester continues, contrary to Dr. Goel’s initial prediction. For all Jill’s success, she cannot answer every type of question. Her ability to handle open-ended inquiries is limited. Dr. Goel described a situation where a human TA had to step in when Watson was asked to elaborate on how a concept could be applied to solve a specific problem.




As Jill Watson is designed to answer questions based on an existing database, it remains unknown how she would perform in more open-ended courses such as philosophy and poetry. One thing is clear: the current AI is not invincible. Dr. Goel stressed that artificial intelligence functions best as a member of a team that collaborates with humans.


      


A Two-Way Issue: Bias


The topic of bias came up several times during Dr. Goel’s talk. As humans, we are biologically wired to develop biases. Artificial intelligence, an entity that runs on code created by humans, faces the same problem. Dr. Goel revealed that at one point Jill Watson had shown gender bias. In his story, a male student stated in his introduction that he and his wife were expecting a child and that his performance might be uneven when the child was born. Jill Watson’s response was “Welcome to the course! Congratulations on the impending arrival!” However, when a female student conveyed similar concerns due to a pregnancy, Watson’s reply was instead “Welcome to the class.” Not welcome to the baby. What possibly triggered the “gender bias” of Jill Watson?





Dr. Goel attributed the incident to the gender distribution of the class. Historically, 85% of the students who took the course were male. While similar situations had come up with male students in the past, Jill Watson had never encountered a female student with the same issue. Information from past classes allows Jill Watson to answer many questions confidently, but it also creates a bias due to the limited information she has to work with. The concern is whether she can develop other types of biases. Dr. Goel raised ethical concerns including: what is an effective mechanism to check these potential biases, and who should be the ones to investigate them? A discussion about bias in intelligent robots and AI was covered in a previous The Future Now: NEEDs post.




Image courtesy of Nick Youngson, Alpha Stock Images





We tend to extend our biases about other human beings to non-human subjects. Dr. Goel acknowledged at the beginning of the talk that he had been referring to Jill Watson as “she.” I am also guilty of this bias and have been calling Jill Watson “she” and “her” as if she were a human entity. Biases towards AI can also manifest in forms other than gender. Dr. Goel and his team try to avoid students’ bias towards the AI teaching assistants now that students are aware of their existence. All TAs (both human and AI) are given a pseudonym, and Jill Watson is also disguised by names that imply different genders, such as Liz Duncan and Ian Braun. An intriguing ethical question raised by Goel is whether these AI names would trigger gender stereotypes [3].





Another conversation that stems from the topic of bias is the issue of trust. When asked whether using AI will decrease students’ trust in human TAs, Dr. Goel conveyed that we are still generations away from AI agents outperforming us as humans.





Some students have nominated Jill Watson for the outstanding TA award at Georgia Tech, but she has not yet received it. Dr. Goel humorously attributed the unsuccessful nominations to bias against the AI.





Moving Forward: The Possibility of Forgetting, Virtual Lab, and New Ways of Learning


As mentioned earlier, the AI TAs can cover about 34% of the questions students pose about class assessments. Dr. Goel and his team have been working on improving that percentage but have not yet succeeded. His explanation for the stalled progress is that some past information stops being useful or reliable as the class constantly evolves. He mentioned the possibility of introducing “forgetting” to keep Jill Watson’s memory and knowledge current and relevant.
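One simple way to picture such “forgetting” is to discount stored answers by their age so that stale information falls below the reply threshold. The exponential decay and half-life below are my own assumptions for illustration, not a description of the team’s actual approach.

```python
# A hypothetical recency weight for stored answers: confidence in an old
# answer decays exponentially, so outdated matches fall below the reply
# threshold and are effectively "forgotten."
HALF_LIFE_SEMESTERS = 2.0  # assumed: trust in an answer halves every 2 semesters

def decayed_confidence(match_score: float, age_semesters: float) -> float:
    """Scale a raw match score by an exponential recency weight."""
    weight = 0.5 ** (age_semesters / HALF_LIFE_SEMESTERS)
    return match_score * weight

# A perfect match from four semesters ago scores 1.0 * 0.25 = 0.25,
# far below a 0.97 reply threshold, so the stale answer goes unused.
print(decayed_confidence(1.0, 4.0))  # 0.25
```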





The future direction of AI research is not limited to training AIs to forget. Dr. Goel has now introduced a virtual lab, where students who may not have access to a physical lab can simulate experiments and learn interactively from widely available online sources and AI instructors. New AIs are being developed to ask students questions that help them refine their models, instead of simply giving away the answers.





“The new question is not how we can address problems in education with AI, but how we can envision new kinds of education altogether,” Dr. Goel stated at the end of the talk. “If everything is going to be an AI, it invites the new question about what learning could be.”





_________________

Yunmiao Wang is a third-year Ph.D. student in the Neuroscience Program at Emory University. She studies the functional roles of brain regions in controlling movement in the Jaeger Lab.

References


[1] Meyer, R. (2012). What It’s Like to Teach a MOOC (and What the Heck’s a MOOC?). http://tinyurl.com/cdfvvqy; Open University (2012). Innovating Pedagogy.





[2] Daniel, J. (2012). Making Sense of MOOCs: Musings in a Maze of Myth, Paradox and Possibility. Journal of Interactive Media in Education.





[3] Goel, A.K. & Polepeddi, L. (2016). Jill Watson: A Virtual Teaching Assistant for Online Education. School  of  Interactive Computing Technical Reports, Georgia Institute of Technology.





Want to cite this post?





Wang, YM. (2018). How Artificial Intelligence Reshapes Classroom: Recap of October’s The Future Now: NEEDs. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/11/how-artificial-intelligence-reshapes.html

Tuesday, November 20, 2018

Duty to warn about mental status: legal requirements, patient rights, and future ethical challenges




By Elaine Walker, Ph.D. 








Image courtesy of Franchise Opportunities

Balancing patient confidentiality with public safety continues to be a contentious issue in both legislation and professional ethics. In this post, some aspects of this delicate balance are examined with reference to professionals’ duty to protect individuals or public safety by warning about or reporting dangers posed by patients. Because the “duty to warn” has been most salient in the fields of neurology and mental health, these areas will be the main focus (Felthous, 2006; Werth et al., 2009).






This year, legislators in the state of New York proposed a bill that would require physicians to notify the state Department of Motor Vehicles (DMV) about certain medical conditions (e.g., seizure disorder, dementia) that might compromise driving ability and endanger public safety. The proposed bill was precipitated by the deaths of two children who were hit by a car driven by a woman who had a record of previous violations and experienced a seizure at the time of the fatal accident. Lawmakers sponsoring the bill argued that the proposed reporting requirement would help get dangerous drivers off the street. Citizens expressed their agreement by marching in support. Understandably, others responded with concern about increased physician liability, as well as potential violations of patient confidentiality and rights.








Image courtesy of Nick Youngson, Creative Commons 3

Because legislation imposing a legal duty to warn about or report on mental status places health care providers in potential jeopardy both for failure to report and for breach of confidentiality, it raises numerous questions about diagnostic procedures and thresholds. For example, in the case of seizures, what categories of evidence should be considered sufficient? While standard procedures for diagnosing epilepsy include electroencephalograms (EEGs), seizures can be readily observable in the absence of any medical test. If a patient refuses an EEG, but the physician observes clinically significant seizures, does that observation provide sufficient evidence to require the physician to report the patient to the DMV? Alternatively, if there is significant epileptiform activity on EEG, but no observable seizures, should the patient be considered at risk for seizures and prohibited from driving?





The responsibility to warn about public dangers due to patient impairment has been especially challenging in the field of psychiatry. In 1976, the Supreme Court of California (Tarasoff v. Regents of the University of California) ruled that mental health care providers had a legal duty to warn identifiable victims of a patient’s serious threats to harm them. This ruling has been widely recognized in U.S. jurisprudence, and it set the stage for national standards for the responsibilities of mental health professionals. It has also generated a large body of literature concerning the circumstances that give rise to these warnings from professionals. But more recently, the discussion has moved from patients to public figures.








Image courtesy of Max Pixel, Creative Commons

In a highly controversial edited volume (The dangerous case of Donald Trump: 27 psychiatrists and mental health experts assess a president), psychiatrist Dr. Bandy Lee and contributors have taken the duty to warn one step further; they argue that the responsibilities of mental health professionals extend to protecting the public from political leaders who are dangerous due to mental status impairment or brain dysfunction (Lee, 2017). This notion runs counter to the so-called “Goldwater Rule,” formalized in 1973 in Section 7 of the American Psychiatric Association’s (APA) Principles of Medical Ethics, which states that “it is unethical for psychiatrists to give a professional opinion about public figures whom they have not examined in person, and from whom they have not obtained consent to discuss their mental health in public statements.” (The provision was named after presidential candidate Barry Goldwater in response to some psychiatrists’ public statements warning about his psychiatric status.) Taking a position counter to the Goldwater Rule, psychologists Scott Lilienfeld, Joshua Miller and Donald Lynam have pointed out that substantial research raises serious questions about the assumption that in-person examinations are the gold standard (Lilienfeld et al., 2018). When compared to in-person psychiatric interviews, observational data, informant reports, and life history information can yield more reliable, predictive, and accurate diagnoses. Because such information about public figures is now readily accessible from a variety of sources, Lilienfeld and his colleagues question the Goldwater Rule, suggesting it is “outdated and premised on dubious scientific assumptions.” They also advocate for a “duty to inform,” such that mental health experts should be able to comment on a public figure’s psychiatric status when that individual holds a position of power and could take actions that threaten the public’s safety, as is the case for political figures.





Image courtesy of Max Pixel, Creative Commons

With advances in predictive medicine, automated diagnostics, and related scientific fields, our ability to diagnose and predict illnesses, including neurological and psychiatric syndromes, will increase. For example, in the field of dementia research it is now possible to test for cognitive deficits, such as memory loss, speech, and visual/spatial problems, that appear before the onset of symptoms meeting the severity criteria for clinical diagnosis (Petersen et al., 2001). These deficits occur on a continuum with the more significant cognitive impairments that are the defining features of Alzheimer’s and other dementias. This raises the question of where, on the continuum of symptom severity (i.e., from mild to severe cognitive and/or perceptual-motor impairment), we should place the threshold for imposing such restrictions. Scientific advances will be accompanied by more ethical challenges as we attempt to balance professional and legal guidelines about the duty to warn with patients’ rights to confidentiality.
Of course, these concerns are not restricted to disorders that involve brain function or behavior. There are ongoing debates about the duty to warn patients’ relatives about hereditary disease risks (Offit et al., 2004) and patients’ social contacts about viral exposures (Burke, 2015). In anticipation of the expanding scope of concerns about professional responsibilities to warn and inform, it is important that ethicists be involved in policy discussions concerning standards for diagnostic evidence and limits on patient and public rights to privacy.





________________







Elaine Walker is the Charles Howard Candler Professor of Psychology and Neuroscience at Emory University. She leads a research laboratory that has been funded by the National Institute of Mental Health and private foundations for over 30 years to study risk factors for major mental illness, especially schizophrenia and other psychoses. Her research has focused on both the behavioral and neurobiological factors associated with psychosis risk. In 2007, she was invited by NIMH to form a national consortium with eight other investigators who had been funded to do research in this area. The consortium, the North American Prodrome Longitudinal Study, is the largest prospective study of youth who show clinical signs of risk ever funded by the NIMH. Now in its 9th year of funding, this multi-site collaborative study is documenting the behavioral, brain, neuroendocrinological, and epigenetic changes that predate psychosis onset.








References 





Burke, J. (2015). Discretion to Warn: Balancing Privacy Rights with the Need to Warn Unaware Partners of Likely HIV/AIDS Exposure. BCJL & Soc. Just., 35, 89. 





Chee, J. N., Rapoport, M. J., Molnar, F., Herrmann, N., O'Neill, D., Marottoli, R., ... & Lanctôt, K. L. (2017). Update on the risk of motor vehicle collision or driving impairment with dementia: a collaborative international systematic review and meta-analysis. The American Journal of Geriatric Psychiatry.





Felthous, A. R. (2006). Warning a potential victim of a person's dangerousness: clinician's duty or victim's right? Journal of the American Academy of Psychiatry and the Law Online, 34(3), 338-348. 





Holoyda, B. J., Landess, J., Scott, C. L., & Newman, W. J. (2018). Taking the Wheel: Patient Driving in Clinical Psychiatry. Psychiatric Annals, 48(9), 421-426. 





Knapp, S., & VandeCreek, L. (2005). Ethical and Patient Management Issues With Older, Impaired Drivers. Professional Psychology: Research and Practice, 36(2), 197. 





Lee, B. X. (2017). The dangerous case of Donald Trump: 27 psychiatrists and mental health experts assess a president. Thomas Dunne Books. 





Lilienfeld, S. O., Miller, J. D., & Lynam, D. R. (2018). The Goldwater Rule: Perspectives from, and implications for, psychological science. Perspectives on Psychological Science, 13(1), 3-27. 





Mirheidari, B., Blackburn, D., Walker, T., Reuber, M., & Christensen, H. (2018). Dementia detection using automatic analysis of conversations. Computer Speech & Language. 





Offit, K., Groeger, E., Turner, S., Wadsworth, E. A., & Weiser, M. A. (2004). The duty to warn a patient's family members about hereditary disease risks. Jama, 292(12), 1469-1473. 





Petersen, R. C., Stevens, J. C., Ganguli, M., Tangalos, E. G., Cummings, J. L., & DeKosky, S. T. (2001). Practice parameter: early detection of dementia: mild cognitive impairment (an evidence-based review): report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology, 56(9), 1133-1142. 





American Psychiatric Association. (2013). The Principles of Medical Ethics with Annotations Especially Applicable to Psychiatry (2013 ed.).





Werth Jr, J. L., Welfel, E. R. E., & Benjamin, G. A. H. (2009). The duty to protect: Ethical, legal, and professional considerations for mental health professionals. American Psychological Association.








Want to cite this post?




Walker, E. (2018). Duty to warn about mental status: legal requirements, patient rights, and future ethical challenges. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/11/duty-to-warn-about-mental-status-legal.html

Wednesday, November 14, 2018

Me, Myself, and my Social Constructs



By Ashley Oldshue



“He began to search among the infinite series of impressions which time had laid down, leaf upon leaf, fold upon fold softly, incessantly upon his brain”

--- Virginia Woolf, To the Lighthouse






Image courtesy of Tomas Castelazo, Wikimedia Commons


Identity is a motif that runs central to our lives; it is woven into our language, our learning, and our literature. Virginia Woolf, in her novel To the Lighthouse, describes identity as a flipbook of images (Woolf, 1981, p. 169). She asserts that when we look at someone, we do not hold a single, uniform concept of them. Instead, we see a series of images and interactions running like a flipbook in our heads. It is to this idea of who they are that we add pages as it evolves over time. However, no one deed can erase all the rest. Everybody is made up of good and bad, and these inconsistencies together form an identity. But what if someone changed so drastically that it was like reading a whole new book?





Similar questions about identity and the role of theory of mind emerged repeatedly throughout the 2018 Neuroethics Network Conference. Held at the Institut du Cerveau et de la Moelle épinière (ICM), the premier brain and spine institute in Paris, this conference featured speakers from neuroscience, psychology, philosophy, business, medicine, and more. Everyone, from professors to ethicists to students, was discussing and dissecting ethical questions at the forefront of neuroscience.





One speaker in particular, a philosopher from Eindhoven University of Technology, addressed theory of mind, identity, and the concept of the “true self.”  Dr. Sven Nyholm has worked on the ethics of neurotechnology, deep brain stimulation (DBS), and happiness and well-being (Sven Nyholm).  A large portion of his discussion centered on how we attribute mental states to others.  In cases of dementia, addiction, or depression that can result in large behavioral changes, there is often a dissonance between the mental attributions people have of the diagnosed individual and the changes in behavior they see.  One of his primary methodologies, as described in his talk, consists of patient interviews.  Patients themselves or their loved ones will often report that this individual is “someone else” or that the person they once knew is “no longer there.”






Image courtesy of Shamir R, Noecker A and McIntyre C, Wikimedia Commons


Can this, in fact, be the case?  Is there a “true self” that can be lost or found?  How do we act towards someone who appears to be a stranger to us?  What implications does this have for the treatment of these individuals?  For example, DBS has been widely used to treat cases of depression, obsessive compulsive disorder (OCD), and Parkinson’s Disease (PD).  However, there have been many reports of a “dislocated self” following treatment, even when symptoms were successfully treated (Baylis, 2013).  Is this an ethical practice if we think the treatment could pose a threat to someone’s identity?  On the other hand, there was an OCD case where DBS did not resolve the patient’s compulsive tendencies, but the patient reported an overall happier disposition and wanted to continue treatment (S. Nyholm, personal communication, June 21, 2018).  Is it ethical to stop treatment because the disorder itself is not being remedied?  This can be a difficult concept to grasp, not only for the individuals themselves but for their loved ones, their doctors, and their caregivers as well.





A question raised by an audience member at ICM introduced the idea of a coherent life narrative.  We generally see our lives as a somewhat linear progression, with a coherent thread of commonality running through our experiences, a true self.  However, we must also ask ourselves if this need for coherence is a reflection of our “true self” or a social construct (ICM, personal communication, June 20, 2018).  Dr. Dan P. McAdams, a psychology professor at Northwestern University, elaborates on this concept of coherence, asking “are good life stories always coherent?” (McAdams, 2006).  He further claims that “the problem of narrative coherence is the problem of being understood in a social context” (McAdams, 2006), that we impose on ourselves and on others the expectations of how an articulate story should look.  Woolf alludes to this social construct as well in her investigation of identity and self-concept, stating, “For now she need not think about anybody…All the being and the doing, expansive, glittering, vocal, evaporated; and one shrunk, with a sense of solemnity, to being oneself, a wedge-shaped core of darkness, something invisible to others” (Woolf, 1981, p. 62).





Bioethicist Françoise Baylis, Ph.D., discusses the implications of this idea in a treatment setting.  She asserts that there is no one static concept of identity; people are a dynamic collection of all of their relationships over time, socially constructed and context-based (Baylis, 2013).  Therefore, their “story” or narrative is a reflection of the people in their life and can take on any shape or trajectory.  During the onset of her mother’s Alzheimer’s Disease, she reports that caregivers often tried to comfort her by saying things like, “that’s not your mother anymore.”  However, Baylis argues that this implication of having lost someone when they are still “living amongst us” can have the opposite effect and be extremely hurtful (Postma, 2016).  Baylis also addresses cases of DBS in PD that show profound improvement in motor control.  However, there have also been cases of mild to more extreme side effects, such as major depressive disorder or mania.  Baylis emphasizes that personal identity is a dynamic structure, and that the perceived “threat” to this structure is perhaps misdirected at the treatment when it actually results from differential treatment by the people in the patient’s life.  For example, we are often quick to accept what we view as positive changes, such as elevated mood or increased work ethic, as a natural enhancement to character.  However, we reject changes that are negative, such as increased aggression or impulsivity, as dissonant from someone’s personal identity.  Baylis (2013) states that personal identity is “at the intersection of who [someone] wants to be, and who others will minimally let [them] be”.  There are cases of other brain disorders, such as schizophrenia, where the framing of the disorder by loved ones has significant effects on the capacities of the patient.  There are still limits to this nonrestrictive view of identity: narratives must ultimately be rooted in reality, and threats to agency and autonomy are a separate issue.  However, from her experience as a caregiver herself, she views it as our job to support individuals who have undergone dramatic life experiences such as Alzheimer’s Disease or DBS, to “help them to belong” and “continue to recognize them” even when they cannot themselves (Postma, 2016).






Image courtesy to Adaiyaalam, Wikimedia Commons


Dr. Nyholm takes this conversation one step further.  He asserts that, in these discussions about the threats to personal identity and narrative coherence that DBS may pose, there is another concept that people hold on to that should be considered: the true self (Nyholm & O’Neill, 2016).  The question of narrative coherence is about continuity over time.  While a drastic turn may produce unfamiliar changes in personality, the person is still the same person (Danaher, 2016).  This is similar to Baylis’ argument, which draws a line of distinction between personality and personal identity.  However, through cases of DBS and patient interviews, Nyholm has identified that there is a concept of a true or authentic self.  This perception is not about continuity over time, but about values.  Nyholm states, “Often we see the best part of ourselves or other peoples as the thing that is representing the true self” (Danaher, 2016).  Therefore, when we do something we regret or have qualities we are not proud of, we are quick to dissociate these from ourselves.  This is not a matter of taking responsibility for one’s actions, but a projection that “what we aspire to be is often the thing that we associate with our true selves” (Danaher, 2016).  Therefore, the true self can be likened to the pursuit of the best version of yourself.  DBS should be discussed in terms of its potential to positively or negatively alter one’s concept of the true self.  If a patient undergoes DBS that successfully alleviates their obsessive-compulsive tendencies, they could see this as having a positive effect on their true self.  However, consider a case where a PD patient elects to receive DBS to alleviate his motor impairments, even though the treatment enters him into a manic state that requires institutionalization (Danaher, 2016).  This raises the question of how we can incorporate identity and well-being into a conversation about the risks and benefits of certain neurological treatments.





These discussions need to happen for DBS at large and in specific cases.  Both Baylis and Nyholm help draw a distinction between DBS for PD, a disorder that primarily produces motor impairment, and DBS for other brain disorders, such as depression, that already pose a threat to concepts of identity and the true self.  However, if we understand identity to be a dynamic structure and true self-actualization to be a laudable goal, should we be pursuing other applications of DBS outside of diagnosable brain disorders?


________________






Ashley Oldshue is a fourth-year undergraduate student studying Neuroscience and Behavioral Biology and Visual Arts.  A member of Dr. Lena Ting’s Neuromechanics Lab and a recent Beckman Scholar, she does primarily computational research focused on modelling muscle-tendon dynamics for sensorimotor control.  She plans to continue this line of research throughout the next year and pursue graduate school in biomedical engineering.




References




Baylis, F. (2013). “I Am Who I Am”: On the Perceived Threats to Personal Identity from Deep Brain Stimulation. Neuroethics, 6(3), 513-526. doi:10.1007/s12152-011-9137-1



Danaher, J. (2016, May 13). Episode #3 - Sven Nyholm on Love Enhancement, Deep Brain Stimulation and the Ethics of Self Driving Cars. Retrieved from http://philosophicaldisquisitions.blogspot.com/2016/05/episode-3-sven-nyholm-on-love.html



McAdams, D. P. (2006). The Problem of Narrative Coherence. Journal of Constructivist Psychology, 19(2), 109-125.



Nyholm, S., & O’Neill, E. (2016). Deep Brain Stimulation, Continuity over Time, and the True Self. Cambridge Quarterly of Healthcare Ethics, 25(04), 647-658. doi:10.1017/s0963180116000372



Postma, R. (2016, May 10). Françoise Baylis. Retrieved from https://www.youtube.com/watch?v=NYasXd6-9eI&feature=youtu.be



Sven Nyholm. (n.d.). Retrieved from https://www.tue.nl/en/research/researchers/sven-nyholm/



Woolf, V. (1981). To the Lighthouse. New York, NY: Houghton Mifflin Harcourt Publishing Company.








Want to cite this post?



Oldshue, A. (2018). Me, Myself, and my Social Constructs. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/11/me-myself-and-my-social-constructs.html

Tuesday, November 13, 2018

Neuralink: Concerns of Brain-Machine Interfaces




By Oscar Gao



Introduction 





Image courtesy of Nicolas Ferrando and Lois Lammerhuber, Flickr

When Elon Musk starts a company developing brain-machine interfaces, you know it has the potential to be the next big thing. He has claimed that for people to be competitive in the artificial intelligence age, we will have to become cyborgs, a “merger of biological intelligence and machine intelligence” (Marsh, 2018; Solon, 2017). He started the company Neuralink, which aims to build “ultra high bandwidth brain-machine interfaces to connect humans and computers.” The company, at the moment, is hiring computer scientists and engineers who have “exceptional ability and a track record of building things that work” (“NEURALINK”, n.d.). As specified on its website, one does not need experience in neuroscience to apply for a job. The company does, however, need to work with neuroscientists and neuroethicists to discuss the ethical implications of and guidelines for its projects.





The concept of a brain-computer interface (BCI) is not new. As early as 1973, researchers were attempting to develop interfaces that connect brain signals to external devices (Vidal, 1973). This kind of uni-directional interface is often used to help patients with spinal cord injury regain motor control. One example is Ajiboye et al.’s study (2017), in which a patient with a spinal cord injury was able to restore hand reaching and grasping using a BCI. Elon Musk’s company, however, is interested in developing a bi-directional interface that adds an implanted layer onto the brain, enhancing both the brain’s input and output. It would allow humans to “process and generate information as fast as they absorb it” and thus would make people better at remembering and communicating with others (Winkler, 2017). I will address the concerns of privacy, coercion, and personhood raised by this futuristic interface in this post.






Privacy 



There are gaps in the legislative regulation of BCI data (Trimper, Wolpe & Rommelfanger, 2014). BCI data, which can potentially contain elements and indications of one’s memory, personal preferences, and emotional inclinations, should be tightly regulated. Musk’s company is registered as a medical research company for now, but Elon Musk has stated his ambition of creating non-medical applications that enhance able-bodied humans. There are standard guidelines and regulations in place for BCI clinical trials, but there is currently no legislation for non-medical/non-clinical research applications. Legislators should regulate how BCI-related neuro-data is stored and used to protect consumers’ neuro-privacy.






Coercion 






Image courtesy of Tom Mesic, Flickr

Elon Musk has argued that for people to be productive in a future society, they will have to incorporate a BCI. This leads to the issue of coercion. Take a famous cyborg, Thad Starner, as an example. He has been carrying his machine extension, a precursor to Google Glass, since 1993. On NPR, he shared his experiences with Lizzy, his extension. He mentioned that he had Lizzy with him during his Ph.D. oral qualifying exam and was able to look up detailed information online using the computer (“Computer Or Human? Thad”, 2015). If this type of technology becomes prevalent, as Musk projects it will, the standard of performance will rise. People will have to start wearing their own BCIs to match others’ performance. The technology will change the perceived “normal” function and capacities of a human. This can create implicit coercion if people using BCIs gain an unfair advantage and others are forced to start using them to avoid falling behind.






Personhood 



In the NPR interview, Starner claimed to be better with people when carrying Lizzy; however, the interview did not feature perspectives from Starner’s friends and colleagues. How a BCI can alter self-perception needs to be examined. Is the BCI an extension of the user, or is it merely a tool? How does it influence or interact with the user’s personality? To what extent is the user responsible for decisions made by the interface (Tamburrini, 2009)? For example, who is to blame if a person commits a crime under the influence of the interface she is carrying? Companies such as Musk’s Neuralink, along with legislators, need to address these questions when developing brain-computer interfaces.






Conclusion 



Musk and his team are not vocal about their project at Neuralink, leaving the public to speculate about how far they have to go in building a “direct cortical interface” to enhance human function (Winkler, 2017). “We are at least 10 to 15 years away from the cognitive enhancement goals in healthy, able-bodied subjects,” argued Pedram Mohseni, a professor at Case Western Reserve University, when talking about BCI’s future applications (Marsh, 2018). However, it is not too early to consider ethical standards for the prevalence of bi-directional brain-computer interfaces that Musk suggests is unavoidable (Marsh, 2018). “It’s really important to address these issues before they come up, because when you try to play catch-up, it can take a decade before something’s in place,” says Karen Rommelfanger, director of the Neuroethics Program at Emory University, to the New York Times (Zimmer, 2015).




----













My name is Oscar. I am from China. I am a senior at Emory majoring in Neuroscience, and I will go to Georgia Tech next year to pursue an engineering degree. I am interested in brain-related technologies.

References






Ajiboye, A. B., Willett, F. R., Young, D. R., Memberg, W. D., Murphy, B. A., Miller, J. P., …
Kirsch, R. F. (2017). Restoration of reaching and grasping in a person with tetraplegia through brain-controlled muscle stimulation: a proof-of-concept demonstration. Lancet (London, England), 389(10081), 1821–1830. http://doi.org.proxy.library.emory.edu/10.1016/S0140-6736(17)30601-3 









Computer Or Human? Thad [Broadcast]. (2015, February 12). Retrieved from
https://www.npr.org/2015/02/13/385793862/computer-or-human-thad 









Farah, M., Illes, J., Cook-Deegan, R., Gardner, H., Kandel, E., King, P., . . . & Wolpe, P. (2004).
Neurocognitive enhancement: What can we do and what should we do? Nature Reviews. Neuroscience., 5(5), 421-425. 









Marsh, S. (2018, January 01). Neurotechnology, Elon Musk and the goal of human enhancement.
Retrieved from https://www.theguardian.com/technology/2018/jan/01/elon-musk-neurotechnology-human-enhancement-brain-computer-interfaces 









NEURALINK. (n.d.). Retrieved from https://neuralink.com/ 









Recode. (2016). We are already cyborgs | Elon Musk | Code Conference 2016. Retrieved from
https://www.youtube.com/watch?list=PLKof9YSAshgyPqlK-UUYrHfIQaOzFPSL4&v=ZrGPuUQsDjo 









Solon, O. (2017, February 15). Elon Musk says humans must become cyborgs to stay relevant. Is
he right? Retrieved from https://www.theguardian.com/technology/2017/feb/15/elon-musk-cyborgs-robots-artificial-intelligence-is-he-right 









Tamburrini, G. (2009). Brain to computer communication: ethical perspectives on interaction
models. Neuroethics 2, 137–149. doi: 10.1007/s12152-009- 9040-1 









Vidal, J. J. (1973). Toward direct brain-computer communication. Annu. Rev. Biophys. Bioeng.
2, 157–180. doi: 10.1146/annurev.bb.02.060173.001105 









Winkler, R. (2017, Mar 27). Elon musk launches neuralink to connect brains with computers;
startup from CEO of tesla and SpaceX aims to implant tiny electrodes in human brains. Wall Street Journal (Online) Retrieved from https://login.proxy.library.emory.edu/login?url=https://search-proquest-com.proxy.library.emory.edu/docview/1881307727?accountid=10747 









Zimmer, C. (2015, July 14). Scientists Demonstrate Animal Mind-Melds. Retrieved from
https://www.nytimes.com/2015/07/14/science/scientists-demonstrate-animal-mind-melds.html








Want to cite this post?




Gao, O. (2018). Neuralink: Concerns of Brain-Machine Interfaces. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/11/neuralink-concerns-of-brain-machine.html

Tuesday, November 6, 2018

Medicine & Neuroethics: Perspectives from the Intersection




By Somnath Das








Image courtesy of publicdomainpictures.net.

The first year of medical school is infamously rigorous – it both challenges and changes virtually anyone who dares to undertake it. My experience with this trial was certainly not unique. Despite the knowledge I have gained (on paper, at least), I greatly missed learning about a passion of mine: neuroethics. June marked the two-year anniversary of my attending the Neuroethics in Paris study abroad course hosted by Emory University, which served as the foundation of my exposure to this field. I additionally had the pleasure of taking a graduate neuroethics course offered by the Emory Center for Ethics Masters of Bioethics Program during my time at Emory, which was a more rigorous, yet very essential and fulfilling, dive into the field. Given my previous exposure, it felt odd to begin medical school with little opportunity to formally engage in the field of neuroethics. While my experience with the first year of medical school did not include formal content in neuroethics, I couldn’t help but notice multiple parallels between the two fields, which I will briefly discuss in this blog post. Ultimately, it is my belief that physicians must pay attention to, study, and engage in the field of neuroethics. In this post, I illustrate the reasons for holding this belief by highlighting some of the critical discussions present in both fields; it is my hope that these debates balloon to involve many doctors and patients in the near future.





Cognitive Enhancement  





Different and sometimes conflicting concerns spring up when debating cognitive enhancement for healthy individuals from neuroethical versus medical ethics perspectives. In seeking to address this debate from the biomedical perspective, medical ethicists often choose to focus on measurable cognitive benefits (improvements in memory or concentration), adverse effects (stimulant side effects or stimulant-drug interactions), patient autonomy, and informed consent. The distributive justice of cognitive enhancers is also of concern to medical ethicists, given that physicians possess significant financial incentives under the current medical model to prescribe pharmacological stimulants (Cheung & Pierre 2015). Neuroethicists add to the discussion by debating the ethics of both current and future neuroscientific advances in the field of cognitive stimulation (Racine 2010, pg 10). I have previously written about how neuroethicists, in a similar vein to medical ethicists, voice concern over how individuals with more financial resources can use enhancement to gain an unfair cognitive advantage in society; however, others in neuroethics have argued that healthy individuals should have the right to use enhancement at will, given that stimulation may be necessary to adapt to a more cognitively-demanding future (Clark 2014). Indeed, some bioethicists, such as Arthur Caplan, note that cognitive enhancement should “always be done by choice, not dictated by others.” Greely et al. adopt a similar view, noting that the cognitive enhancement debate should not focus on “when,” but rather “how” future leaders mitigate the risks and maximize the benefits of stimulants (Greely 2008).





By emphasizing the ethics of neuroscientific advancements, neuroethicists have pointed out the need to further study the motivations of healthy individuals who seek enhancement. In fact, some neuroethics literature has demonstrated that users may seek enhancement for less medically quantifiable benefits, such as “increased energy,” as opposed to increased performance on cognitive tasks (Ilieva & Farah 2013). A more complete understanding of the motivations of cognitively healthy individuals seeking to further enhance their abilities will inform the role physicians should play in the future of enhancement distribution (Forlini, Gauthier, & Racine 2013; Chatterjee 2017). These motivations are particularly important to assess as people resort to do-it-yourself (DIY) methods of brain enhancement.








Image courtesy of Flickr user, A Health Blog.

For example, there is a growing community of individuals experimenting with transcranial direct current stimulation (tDCS) devices, often broadcasting their results via scientifically unregulated forums such as YouTube. Wexler (2017) argues that the rise of DIY tDCS should be viewed within the broader growth of the “neurohacking” movement, which she observes is primarily focused on self-improvement of neural abilities (as opposed to a pushback against science and its authorities). Wexler notes that understanding the motivations of healthy individuals for seeking enhancement is important, in that it “might be useful in terms of predicting whether or not users might go ‘underground’ in response to regulation.” As neuroscientists continue to push for advancements in brain enhancement (Farah 2012, pg 580), the ethical ramifications of laboratory and clinical experiments will inevitably become more complex. Given this increasing ethical complexity, a society demanding more from human brains, and the rapid democratization of scientific information, I believe that physicians bear an increased onus to translate the findings of neuroscience to best help their patients achieve their life goals with the help of scientific advancements while avoiding significant harm.





Advanced Neural Diagnostics & Patient Privacy 





The advancement of brain imaging has greatly changed how we think about the nature of using cognitive data for research. The predictive capacity of advanced diagnostics for psychiatric and neurological disorders remains a hotly debated topic. In their response to the Nature article “Attention to Eyes Is Present but in Decline in 2–6-Month-Old Infants Later Diagnosed with Autism,” neuroethicists Karen Rommelfanger & Jennifer Sarrett note that clinical investigators stand as some of the last and most important arbiters of clarifying the nature of imaging data to patients. As brain imaging becomes increasingly complex and precise, it will be necessary for physicians to play an ever-increasing role in constructing the guidelines by which these techniques are used in clinical practice to ensure both informed consent and responsible use and interpretation of data. 





It is necessary to note that physicians have the responsibility not only to clarify the nature of medical data collection, but also to protect this data from being used irresponsibly. In 1996, HIPAA introduced further regulations as to what information is considered “protected,” and it created protocols for the transmission of this information via physical and electronic methods. Within their first weeks of medical school, medical students are introduced to HIPAA, and the importance of protecting patient data persists throughout both scientific and medical training. However, with the advancement of brain imaging, the very nature of the data physicians collect on patients is rapidly changing, potentially to the point where neural data is too sensitive to be transmitted without further legal change.








Image courtesy of Flickr user, amenclinicsphotos ac.



Martha Farah has written extensively about the ability of brain imaging to capture complex neural data, such as individual personality traits. This observation raises the question: what if this data can be traced back to the individuals despite randomization and privacy protections? Farah notes that under the current legal framework, “functional brain images can be obtained with consent for one purpose but later analyzed for other purposes” (Farah 2012, pg 578).  Perhaps a more important question is, who owns this data? Future physicians are taught that once a patient’s medical information is stored within the hospital medical records system, patients lose a significant locus of control over their data. As neural data becomes more complex, and therefore more traceable back to individuals, would [and can] physicians exert the same privacy protocols and protections codified in HIPAA? The answer remains unclear, and the future will involve significant engagement of perspectives from ethicists, physicians, and patients to ensure the safety of the most sensitive types of medical data. 





Conclusion





The future of medicine is intimately tied to emerging neurotechnologies, and therefore will require a keen understanding of what motivates the public to seek new technologies and how the public conceptualizes these technologies in terms of risks, benefits, and long-term impacts. Gaining this understanding will help physicians and neuroethicists alike to protect individual patient safety and privacy. I believe the physician can serve as the strongest bridge between the worlds of academia and individuals who will be impacted by this technology. This idea is hardly new; from transplantation to novel cancer therapeutics, physicians stand to interpret the intersection of what is possible and what needs to be done in order to heal the patient. Being that some technologies may do more harm than good, the “ideal” physician ultimately should serve to protect their patient from the dangers of novel technologies when the risks outweigh the benefits. It is through proper training and exposure to neuroethics that I believe physicians can better treat their patients and be more adequately prepared to address the future of what is to come in modern medicine. 


________________






Somnath Das is a second-year student at Sidney Kimmel Medical College. His interests currently involve integrating neuroethics education into the training of future medical professionals. His interest began at Emory University under the instruction of Dr. Karen Rommelfanger, and he still enjoys occasionally contributing to the blog to this day. The implications of futuristic technologies both within and outside medicine interest him, and he views neuroethics as a toolbox to think about, debate, and perceive the sequelae of the latest neuroscientific innovations.












References






Clark, V. P., & Parasuraman, R. (2014, 01). Neuroenhancement: Enhancing brain and mind in health and in disease. NeuroImage, 85, 889-894. doi:10.1016/j.neuroimage.2013.08.071









Chatterjee, A. (2017). Grounding ethics from below: CRISPR-cas9 and genetic modification. The Neuroethics Blog. Retrieved on July 24, 2018, from http://www.theneuroethicsblog.com/2017/07/grounding-ethics-from-below-crispr-cas9.html









Davis, J. K., Hoffmaster, B., & Hooker, C. (n.d.). Pragmatic Neuroethics. Retrieved from https://mitpress.mit.edu/books/pragmatic-neuroethics









Farah, M. J. (2012, 01). Neuroethics: The Ethical, Legal, and Societal Impact of Neuroscience. Annual Review of Psychology, 63(1), 571-591. doi:10.1146/annurev.psych.093008.100438









Forlini, C., Gauthier, S., & Racine, E. (2013, September 03). Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3761009/









Greely, H., Sahakian, B., Harris, J., Kessler, R. C., Gazzaniga, M., Campbell, P., & Farah, M. J. (2008, December 10). Towards responsible use of cognitive-enhancing drugs by the healthy. Retrieved from https://www.nature.com/articles/456702a









Ilieva, I. P., & Farah, M. J. (2013). Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3813924/















Racine, E. (2010). Pragmatic neuroethics: Improving treatment and understanding of the mind-brain. The MIT Press.




Want to cite this post?



Das, S. (2018). Medicine & Neuroethics: Perspectives from the Intersection. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/11/medicine-neuroethics-perspectives-from.html

Tuesday, October 30, 2018

Phenomenology of the Locked-in Syndrome: Time to Move Forward




By Fernando Vidal








Image courtesy of Wikimedia Commons.

The main features of the locked-in syndrome (LIS) explain its name: persons in LIS are tetraplegic and cannot speak, but have normal visual perception, consciousness, cognitive functions and bodily sensations. They are “locked in” an almost entirely motionless body. A condition of extremely low prevalence, identified and named in 1966, LIS most frequently results from a brainstem stroke or develops in the advanced stage of a neurodegenerative disease such as amyotrophic lateral sclerosis (ALS), which affects the motor neuron system and leads to paralysis. LIS takes three forms. In total or complete LIS (CLIS), patients lack all mobility; in classic LIS, blinking or vertical eye movements are preserved; in incomplete LIS, other voluntary motion is possible. Mortality is high in the early phase of LIS of vascular origin, but around 80% of patients who become stable live ten years, and 40% live twenty years, after entering the locked-in state. Persons who are locked in as a consequence of stroke or traumatic injury sometimes evolve from classic to incomplete LIS. They can usually communicate via blinking or vertical eye movement, by choosing letters from an alphabet spell board. When additional movements are regained, they facilitate the use of a computer. It is hoped that brain-computer interfaces (BCI) will enable CLIS patients to communicate too.




In January 2017, under the title “Groundbreaking system allows locked-in syndrome patients to communicate,” The Guardian reported on a study demonstrating that four ALS patients, two in complete LIS and two entering the condition, learned to respond to questions in a way that could be decoded by measuring frontocentral oxygenation changes detected with functional near infrared spectroscopy. Niels Birbaumer, well-known for his pioneering work on BCIs, told the journal that such a result (which has since been questioned) was “the first sign that completely locked-in syndrome may be abolished forever, because with all of these patients we can now ask them the most critical questions in life.”





Yet what do we know about how locked-in persons envisage such critical questions and relate them to the extreme existential situation in which they find themselves? Rather little. A systematic phenomenology, in the sense of a description and analysis of experience as lived by locked-in persons themselves, has not yet been undertaken. It deserves to exist alongside mainstream, more clinical and quantitative approaches to the question, “What is it like to be conscious but paralyzed and voiceless?”





Speaking of a “happy majority” of locked-in persons may be exaggerated given the response rates to quality of life (QOL) surveys. At the same time, the existing research shows that many locked-in persons report subjective wellbeing and a relatively satisfactory QOL level that stays stable over time. As a population they display low rates of depression, suicidal thoughts, euthanasia requests, and do-not-resuscitate orders. Most respondents to a ground-breaking closed-ended questionnaire about body and personal identity in LIS said they felt they were essentially the same as before entering the locked-in state, reporting a continuous experienced identity when they accepted their bodily changes, and a discontinuous one when they did not. The body, though paralyzed, remains a strong component of identity. The phenomenological dynamics of such a relationship to the body have been explored in cases of profound paralysis due to ALS or multiple sclerosis, but not yet for LIS.








Image courtesy of pxhere.

Illness, notes philosopher Havi Carel, is a “limit case of embodied experience.” As an extreme instance of that limit, LIS offers a unique opportunity to investigate, on a real-life basis, central questions related to notions and practices of personhood and embodiment in the realm of values, beliefs and experiences. These questions, concerning, for example, the relationships between mind and body, self and other, autonomy and dependency, life in health and illness, or the criteria for ascertaining rights and obligations, are at the heart of significant contemporary debates in philosophy, ethics, and the practice of medicine.





From the perspective of “enactivism,” which sees the mind as embodied, embedded, extended and enacted, LIS appears as a social injury that affects the self through its impact on the individual’s capacity to engage with the social environment. Though operating in a frame that places more emphasis on first-person experience, individual self-awareness and self-narrative, a phenomenologist such as Richard Zaner also attributes a central role to the interactive, relational and communicative processes involved in locked-in individuals’ experience. Beyond their obvious practical import, communication and intersubjectivity emerge as possessing fundamental ontological significance. By describing in detail the processes they involve, phenomenology throws light on philosophical and anthropological issues. But it should also contribute to caring for persons whose lives, contrary to what healthy people and even professionals believe, are worth living – yet whose predicament and capacities have been understood in ways that may strip them of their civil and political rights. Other hitherto ignored dimensions, like gender or emotions, will have to be taken into account. The same applies to such material realities as the level of financial support from the state. These realities help explain why, for example, the use and acceptance of tracheostomy ventilation – a procedure in which a tube is inserted into a person’s windpipe through a cut in the neck to allow breathing – is more frequent in Japan than in Western countries.





A consolidating network of scholars from various disciplines in Europe, North America and Japan aims to work toward a phenomenology of LIS, mainly by way of two complementary qualitative methodologies. On the one hand, the project Phenomenology of the Locked-in Syndrome analyzes locked-in persons’ autobiographical narratives. There are about thirty such narratives in Western European languages and at least as many in Japanese. A few articles discuss, from a literary or phenomenological standpoint, Jean-Dominique Bauby’s The Diving Bell and the Butterfly (1997), the widely translated bestseller that Julian Schnabel made into a prize-winning film. But the rest of the memoirs, and the corpus as a whole, remain to be scrutinized. On the other hand, the project studies the experience of LIS by way of open-ended questionnaires and interviews with patients, caregivers and family members. Instances of this approach, also a novelty with regard to LIS, are included in a forthcoming special issue of Neuroethics entitled “The Locked-in Syndrome: Perspectives from Ethics, History and Phenomenology.” [1]








Image courtesy of Wikimedia Commons.

The place of LIS within bioethics and neuroethics looks paradoxical. Because consciousness is preserved in LIS, and because this function is considered the most critical standard for human personhood, there is never any doubt that locked-in individuals are fully persons. Even when they are subjected to some form of tutelage, their circumstances do not give rise to the ethical and procedural issues that are customary in connection with the disorders of consciousness (DOC). Misdiagnosis (as “vegetative”) and its dramatic consequences (the patient is no longer considered a person) have often been documented, but that does not alter the ontological status of the affected individuals. This situation explains the marginal place of LIS in bioethics and neuroethics. The challenges LIS raises – about enabling communication, the exercise of autonomy, the status of advance directives, the validity of informed consent, or decision-making about treatment and end-of-life – are not really specific to the condition, and are ethically less knotty than in the case of DOC. Knowledge about LIS patients’ self-assessed QOL, and the fact that communicative difficulties are the chief source of their suffering, give rise to a twofold moral imperative: healthy people’s negative biases toward life in the locked-in state, mentioned above, should be avoided, and everything possible has to be done to facilitate communication.





It should be possible to go beyond such considerations. The limited attention devoted to LIS in neuroethics and biomedical ethics may mirror the rarity of the syndrome, but it also reflects the modern Western primacy of (self)consciousness and autonomy as normative criteria for personhood and for defining obligations toward patients. LIS, however, highlights the extent to which communication and relationality are integral to the empirical realization of those criteria. The philosophy of personhood has emphasized physical and psychological criteria to varying degrees, and the human sciences have argued for a more constitutive role for intersubjectivity and technological systems. In such a context, LIS has to be examined together with conditions, such as DOC and dementias, which more directly problematize personhood at the conceptual and practical levels. Locked-in persons’ experience invites us to explore these issues by turning the usual vantage point around – asking what LIS can do for theories, rather than what theories can do for LIS [2].







_________________









Fernando Vidal is Research Professor at ICREA (Catalan Institution for Research and Advanced Studies) and Professor at the Medical Anthropology Research Center, Rovira i Virgili University (Tarragona, Spain). A former Guggenheim Fellow, he was elected to the Academia Europaea in 2017, and has been a Fellow at the Brocher Foundation (Geneva) and Visiting Professor at Ritsumeikan University (Kyoto). His most recent book, Being Brains: Making the Cerebral Subject (with F. Ortega), received the 2018 Outstanding Book Award of the International Society for the History of the Neurosciences.

















Author's Notes





[1] Edited by F. Vidal, it brings together participants of the workshop Personhood and the Locked-in Syndrome (Barcelona, 2016), funded by the Catalan Institution for Research and Advanced Studies with additional support from the Víctor Grifols i Lucas Foundation. The project Phenomenology of the Locked-in Syndrome is attached to the Medical Anthropology Research Center, Rovira i Virgili University, Tarragona.




[2] This post sketches some of the issues extensively discussed in F. Vidal, “Phenomenology of the Locked-in Syndrome: An Overview and Some Suggestions” (Neuroethics, in press). https://doi.org/10.1007/s12152-018-9388-1.




Locked-in persons are scattered, and not easy to find and contact. Individuals in any country interested in collaborating with the project sketched here can write to F. Vidal, fernando.vidal@icrea.cat.






Want to cite this post?




Vidal, F. (2018). Phenomenology of the Locked-in Syndrome: Time to Move Forward. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/10/phenomenology-of-locked-in-syndrome.html