
Tuesday, November 29, 2016

"American Horror Story" in Real Life: Understanding Racialized Views of Mental Illness and Stigma


By Sunidhi Ramesh






Racial and ethnic discrimination has taken various forms in the United States since its formation as a nation. The sign in the image reads: "Deport all Iranians. Get the hell out of my country." Image courtesy of Wikipedia.


From 245 years of slavery to indirect racism in police sanctioning and force, minority belittlement has remained rampant in American society (1). There is no doubt that this history has left minorities in the United States with a differential understanding of what it means to be American and, more importantly, what it means to be an individual within a larger humanity.



Generally, our day-to-day experiences shape the values, beliefs, and attitudes that allow us to navigate the real world (2). And so, with regard to minorities, consistent exposure to these subjective experiences (of belittlement and discrimination, for example) can begin to shape subjective perceptions that, in turn, can mold larger perspectives and viewpoints.





Last spring, I conducted a project for a class to examine how white and non-white, or persons of color (POC), students received (3) part of an episode from American Horror Story: Freak Show. The video I asked them to watch portrays a mentally incapacitated woman, Pepper, who is wrongfully framed for the murder of her sister’s child. The character’s blatant scapegoating is shocking not only for the lack of humanity it portrays but also for the reality it captures: being a human being in society while not being viewed as human.





Although the episode is something of an exaggeration, the opinions of the interview respondents in my project ultimately suggested that there exists a racial basis for perceiving Pepper’s mental disabilities—a racial basis that may indeed be deeply rooted in the racial history of the United States.








The premise behind my project was the understanding that past experience informs perception. What, then, are the different circumstances (with regard to mental illness and disabilities) that white and POC Americans face? Current public health research suggests that there exist racial differences in the field of mental health.






In 2010, for example, researchers at the University of Pittsburgh found that internalized stigma among African Americans had a direct relationship with attitudes towards their mental health treatment (4); in general, African Americans in this study reported more negative attitudes toward mental health treatment, and, as compared to their white counterparts, African Americans were less likely to seek out mental health treatment and were more likely to hold negative views about themselves if they were diagnosed with a mental illness (4).





Another study, conducted in 2012, corroborates these results, finding that African Americans are significantly less likely than other racial-ethnic groups to have received mental health services (5); although the article begins to tie this trend to education differences among the racial groups, a definitive explanation for the relationship between race-ethnicity and the receipt of mental health services could not be found (5).





Beyond studies regarding the specific treatment of mental illness is research that questions the root of mental illnesses such as depression; one such study found a “clear, direct” relationship between perceived discrimination (which arises from “formative social experiences”) and symptoms of depression in Mexican-origin adults in California (6).





The conclusions drawn in these studies, as well as those in other similar research, imply that mental illness does not stand on its own; it is, in fact, strongly intertwined with race as well as with elements that underlie race, such as discrimination and education.





Because the subjective experiences faced by minorities formulate differential understandings and subjective perspectives, these perspectives (according to these studies) can then go on to create different attitudes toward mental health. Ultimately, this cascade can form bigger, more personal feelings such as internalized and public stigma.





With this comes a question: what if the differences in the way POC and white Americans are treated (either for mental illnesses or in general) manifest themselves in how different racial groups perceive mental health?







A photo of a freak show exhibition, taken around 1941. The sign at the top reads: "Human Freaks Alive." Image courtesy of Wikimedia Commons.


Before getting into my project, I must mention that Pepper, throughout American Horror Story, is part of a “freak show”—a term that the dictionary defines as “a display of people with unusual or grotesque physical features as at a circus or a carnival show.” As I was watching the show for the first time a few years ago, I was appalled at how it presented the reactions of people who interacted with the “freaks.” There was shock, amusement, fear, and even a sense of superiority. In one scene, the circus actors went out to a diner and were immediately kicked out on the grounds of “disturbing and scaring the other customers.” More often than not, the families who attended the circus would disrespect and taunt the performers.





I later realized that this scene illuminated the major difference between physical disability and mental illness. Physical disability can be seen; it is outward and apparent to a point where it can be identified and acknowledged as easily as it can be mocked and ridiculed.





Mental illness cannot. It is invisible, an uninvited guest that only the patient can feel, describe, and identify. It is silent. Quiet. Unseen. (This distinction regarding mental illness is why hundreds of articles with titles such as “I Don’t Believe in Mental Illness” and “9 Signs Why Your Mental Illness is Made Up for Attention” plague the Internet.)





People who bear mental illnesses are told that their symptoms are not real, that “laziness explains 100% of mental disorders,” or simply that their illness “does not exist.” These kinds of perceptions build up and begin to create stigma around mental illness.





Statistically, three out of four people who experience mental illness today have reported experiencing stigma. This stigma leads to feelings of shame, hopelessness, and distress, as well as misrepresentation in the media. It discourages patients from seeking necessary help. It frames mental illness as a shameful blemish and weakness.





And in many cases, stigma and discrimination go hand in hand; often, those with mental illnesses and disabilities are denied employment, housing, insurance coverage, and general social interactions such as friendship and marriage (7).





Worst of all, this very stigma throws mental health patients into a dangerous cycle of social isolation and harm.





According to a 2002 research paper written by Allison J. Gray, “Discrimination alters how patients see themselves, their self worth and their future place in the world. The immediate psychological effects of a psychiatric diagnosis include disbelief, shame, terror, grief, and anger” (8). She then argues that these patients eventually face social isolation, which directly leads to high rates of self-harm and suicide.





So, how and where do we go from here? Can we work toward destigmatizing mental illness?





Or is this a lost cause? Could the racial discrepancy in perceiving disabilities be too deeply rooted for perceptions of these conditions to change? And where does this racial difference come from?





For this small preliminary class project, I asked ten respondents (five white and five POC) to watch the aforementioned 30-minute clip. This episode covers the experiences of a young woman, Pepper, who suffers from microcephaly, a rare neurological condition “in which the brain does not develop properly, resulting in a smaller than normal head” as well as intellectual disability, poor speech abilities, and abnormal facial features. In the clip, Pepper is introduced into the care of her older sister and her brother-in-law, a couple that later gives birth to a deformed child. Although Pepper cares for and loves the child as her own, her caretakers appear to be overtaken by “the burden” of having to deal with two individuals who are unable to fully look after themselves. In response, Pepper’s brother-in-law (with permission from his wife) murders the infant and places the blame on Pepper, who is unable to speak for herself but seemingly unaware of the injustice done to her. At the end of the clip, Pepper is placed in an insane asylum, forced to live there due to her supposed involvement in the brutal murder of her sister’s child.




A comparison between head sizes for a child with microcephaly and a normal child. This change in head shape is often attributed to abnormal brain development. Image courtesy of Wikimedia Commons.





Following the viewing, I asked each respondent seven questions regarding their overall feelings as well as what characters and parts of the plotline resonated with them the most. In the end, I found three general categories of responses—each of which was clearly divided racially.





The most striking of these categories was, by far, how the white and POC respondents referred to Pepper’s microcephaly. I should preface this by noting that the episode never directly labeled her condition, and Pepper’s mental and physical statuses were not referred to as a disability in the scenes the respondents viewed. Still, every white interviewee spoke of Pepper’s condition as a “disability”—a handicap that allowed her to be bullied by her family and the justice system. These students seemed to dwell on the idea that Pepper was subordinated in the minds of those around her. To them, she was bullied for and handicapped by her mental state.





The POC respondents, on the other hand, did not use the words “disability” or “handicap.” Instead, they spoke of her as an “outsider,” a deviation from what it means to be “normal.” This word, “normal,” was raised by every POC respondent. These students chose to discuss Pepper’s experiences in light of their own, drawing parallels between what it means to be a functioning, “normal” member of this society and the consequences (discrimination and ostracism) of deviating from those norms.





Although these data are just preliminary, the implications, if these results held true with a larger pool of participants, are tremendous. At the least, these outcomes suggest that human perceptions of mentally and physically compromised individuals are racially based—that there may exist a socially constructed phenomenon for why white respondents viewed Pepper as “disabled” and POC respondents saw her as simply “not normal.”
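
To make the "preliminary" caveat concrete, here is a minimal statistical sketch. It is my own addition, not part of the original project's analysis; it assumes the ten responses can be coded into a simple 2x2 table and uses standard scipy and statsmodels routines.

```python
from scipy.stats import fisher_exact
from statsmodels.stats.proportion import proportion_confint

# Hypothetical coding of the ten interviews:
# rows = white / POC respondents,
# columns = used "disability" framing / used "normal" framing.
table = [[5, 0],
         [0, 5]]

_, p = fisher_exact(table)
print(f"Fisher exact p = {p:.3f}")  # ~0.008: unlikely under chance alone

# But with only five respondents per group, a 5-out-of-5 proportion
# still carries a very wide confidence interval.
low, high = proportion_confint(count=5, nobs=5, method="wilson")
print(f"95% CI for 5/5: ({low:.2f}, {high:.2f})")  # roughly (0.57, 1.00)
```

In other words, the split is striking even at this scale, yet the wide interval shows how weakly five respondents per group constrain any population-level claim; that is exactly why replication with a larger sample is essential.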





If anything, the tendency of the POC individuals in my interviews to focus more on the aspects of being “normal” (rather than on being discriminated against) suggests something about the more personal aspects of the minority experience. It is possible that this theme was so salient because one question asked the interviewees to relate the clip to their own personal lives (9); perhaps the notion of mental disabilities is not as prominent to these POC individuals as it may have been to the white respondents (as was suggested by the public health studies on POC Americans and mental health). Again, the validity of this statement should be explored through further research.







Schlitzie (born Schlitze Surtees) was an American sideshow performer; Pepper's appearance and story are said to be based on Schlitzie's life. Image courtesy of Wikipedia.


Whatever the case, the answers to these questions are not clear. They may never be clear or easy to address—unless we are somehow able to pinpoint exactly where these entangled differential perceptions stem from or whether or not they can be changed. What can change, however, is the stigma around mental illness.





If the relationship between subjective experience, differential understanding, subjective perception, different mental health treatment and attitudes, and stigma exists, can we tap into breaking the cycle? Can we try to change mental health treatment by better educating our doctors and mental health professionals? Can we change mental health attitudes by better explaining conditions to patients or the general public? Would changes in the initial subjective experience (reducing discrimination, for example) reduce mental health stigma down the line?





And would this stigma be alleviated with more evidence for a biological basis to mental illness? Possibly (10, 11, 12).





But this would require research as well—research that is deliberately designed to avoid reinforcing negative stereotypes. In other words, while bias is inherent to some degree in all research, specific biases such as gender and racial bias need to be consciously monitored to avoid being implicitly embedded in the research process.



How can this be done? The Journal of European Psychology suggests engaging in introspection to acknowledge any biases before the research is conducted; including different types of people and viewpoints on the research team; standardizing procedures for data collection; and checking for statistical significance—all while remaining aware of the errors and omissions that may be embedded in the research itself. Maybe, with these cautions in mind, we can work towards more direct, objective research that can lead to the lessening of stigma (especially towards specific races) around mental illness.





Until then, we must begin to realize that perception of mental illness is not black and white; it is socially directed, differentially interpreted, and variably understood. More importantly, it is profoundly engrained in experience and identity.





This understanding needs to come first.





Perhaps then we can begin to unravel the answers to the bigger questions we have.





Note: The students in my project were asked to watch two segments from Episode 10 of Season 4 of American Horror Story: 1) 31:53 to 37:20 and 2) 38:41 to 49:00.





References 



1) Piazza, James A. "Types of minority discrimination and terrorism." Conflict Management and Peace Science 29.5 (2012): 521-546.



2) Rokeach, Milton. The nature of human values. Vol. 438. New York: Free Press, 1973.



3) Shively, JoEllen. "Cowboys and Indians: Perceptions of western films among American Indians and Anglos." American Sociological Review (1992): 725-734.



4) Brown, Charlotte, et al. "Depression stigma, race, and treatment seeking behavior and attitudes." Journal of Community Psychology 38.3 (2010): 350-368.



5) Broman, Clifford L. "Race differences in the receipt of mental health services among young adults." Psychological Services 9.1 (2012): 38.



6) Finch, B. K., Kolody, B., & Vega, W. A. (2000). Perceived discrimination and depression among Mexican-origin adults in California. Journal of Health and Social Behavior, 295-313.



7) Office of the Surgeon General (US, & Center for Mental Health Services (US. (2001). Culture counts: The influence of culture and society on mental health.



8) Gray, A. J. (2002). Stigma in psychiatry. Journal of the Royal Society of Medicine, 95(2), 72-76.



9) Trepte, S. (2006). Social Identity Theory. In J. Bryant & P. Vorderer (Eds.), Psychology of Entertainment (pp. 255-271). Mahwah, NJ: Lawrence Erlbaum.



10) Corrigan PW, Watson AC. At issue: Stop the stigma: call mental illness a brain disease. Schizophrenia Bulletin. 2004;30(3):477-479.



11) Corrigan PW. Lessons learned from unintended consequences about erasing the stigma of mental illness. World Psychiatry. 2016;15(1):67-73.



12) Insel TR, Wang PS. Rethinking mental illness. JAMA. 2010;303(19):1970-1971.






Want to cite this post?



Ramesh, Sunidhi. (2016). "American Horror Story" in Real Life: Understanding Racialized Views of Mental Illness and Stigma. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/11/american-horror-story-in-real-life_22.html


Tuesday, November 22, 2016

Debating the Replication Crisis - Why Neuroethics Needs to Pay Attention



By Ben Wills



Ben Wills studied Cognitive Science at Vassar College, where his thesis examined cognitive neuroscience research on the self. He is currently a legal assistant at a Portland, Oregon law firm, where he continues to hone his interests at the intersections of brain, law, and society.






In 2010 Dana Carney, Amy Cuddy, and Andy Yap published a study showing that assuming an expansive posture, or “power pose,” leads to increased testosterone levels, task performance, and self-confidence. The popular media and public swooned at the idea that something as simple as standing like Wonder Woman could boost performance and confidence. A 2012 TED talk that co-author Amy Cuddy gave on her research has become the site’s second-most watched video, with over 37 million views. Over the past year and change, however, the power pose effect has gradually fallen out of favor in experimental psychology. A 2015 large-sample replication of power posing by Ranehill et al. concluded that power posing affects only self-reported feelings of power, not hormone levels or performance. This past September, reflecting mounting evidence that power pose effects are overblown, co-author Dana Carney denounced the construct, stating, “I do not believe that ‘power pose’ effects are real.”






What happened?




Increasingly, as the power pose saga illustrates, famous findings and the research practices that produce them are being called into question. Researchers are discovering that many attempts to replicate results are producing much smaller effects or no effects at all when compared to the original studies. While there has been concern over this issue among scientists for some time, as the publicity surrounding the rise and fall of the power pose indicates, discussion of this “replication crisis” has unquestionably spilled over from scientists’ listservs into popular culture.






Though replicability issues pervade many areas of experimental science, cognitive neuroscience and psychology are particularly susceptible. One main reason is the relatively high number and great impact of choices that researchers in this field make in methodology, data collection, and analysis (collectively known as “researcher degrees of freedom”). The consequences of shoddy science in psychology are outsized as well. More than perhaps most experimental disciplines, cognitive neuroscience and psychology directly impact popular culture, influencing how people interact and think of themselves. Phrenology, strict behaviorism, and the pathologizing of queerness are obsolete psychological doctrines that caused extensive harm before being shown to be roundly false. The shaky foundation of the power pose effect may be benign compared to the utter nonsense of phrenology, but both illustrate a distinct need to make sure that psychological results are true and valid.






Enter neuroethics. From debate on the ethics of cognitive enhancement to guidelines for the ethical use of neuroimaging research in the courtroom, neuroethics is fundamentally located at the intersection of society and the mind and brain sciences. A comprehensive neuroethics must consider not only society’s engagement with technology and scientific results, but the very process of research and the production of those results. After all, a policy recommendation or ethical analysis is only as valuable as the data on which it’s based. Consequently, neuroethics is obliged to keep an eye on the theories, methods, and findings of the mind and brain sciences. The replication crisis is a problem for psychology, society, and neuroethics as well.






Though this “replication crisis” is regarded by many as a major issue in the field, just how big of a problem it is and what the most appropriate response should be are questions whose answers have little consensus. This was the focus of a public debate hosted by The Center for Brain and Consciousness at NYU on Thursday, September 29, between Brian Nosek, psychologist at UVa and the director of the Center for Open Science, and Jason Mitchell, cognitive neuroscientist at Harvard. The title for the debate was, “Do Replication Projects Cast Doubt On Many Published Studies in Psychology?” but, as their goal was to dial in on the starkest differences of opinion between them, the debaters focused on the process of replication rather than quantifying the unreproducibility of psychological research.









Flier for the NYU public debate

Brian Nosek, taking the affirmative position, presented first. He began by defining a replication attempt as a study that is identical to the original such that the only difference is their order. That one study comes before the other, Nosek argued, is irrelevant for evaluating results. At the same time, due to differences in time, place, sample, and other variables, he acknowledged that no replication is truly exact.







Nosek also expressed concern that the field, in determining a study’s value, often over-emphasizes statistical significance, in particular the famous p-value of 0.05 used in traditional statistics to determine if an effect is significant. While traditional statistics and p-values have their place, he argued, an over-emphasis on statistical significance is at odds with best scientific practices. For one, just as there can be many reasons why a study does not reach significance, there can be many reasons outside of a possible “true” effect why experiments can yield a statistically significant result (for more on this, Nosek suggested reading Greenwald 1975, a piece that presages much of the current debate). Thus, the potential causes of falsely concluding that there is an effect are more numerous than it might seem. This issue is exacerbated by science journals’ well-known bias toward flashy, statistically significant results, which provides incentive for researchers to take liberties with their data collection and analyses in ways that are more likely to yield statistically significant results.
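
To make this point concrete, here is a minimal simulation of one such researcher degree of freedom, optional stopping. This is my own illustrative sketch, not material from the debate; it assumes a simple one-sample t-test and a true null effect throughout.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def p_value(sample):
    """Two-sided one-sample t-test against a true mean of zero."""
    n = len(sample)
    t = sample.mean() / (sample.std(ddof=1) / np.sqrt(n))
    return 2 * stats.t.sf(abs(t), df=n - 1)

def run_study(optional_stopping):
    """Simulate one study in which the null hypothesis is TRUE."""
    data = rng.normal(0.0, 1.0, 20)
    if optional_stopping:
        # Researcher degree of freedom: peek at the data, then keep
        # adding 10 subjects (up to n = 50) until p < .05 or we give up.
        while p_value(data) >= 0.05 and len(data) < 50:
            data = np.concatenate([data, rng.normal(0.0, 1.0, 10)])
    return p_value(data)

for flexible in (False, True):
    ps = [run_study(flexible) for _ in range(5000)]
    rate = np.mean(np.array(ps) < 0.05)
    print(f"optional stopping = {flexible}: false-positive rate = {rate:.3f}")
# Without peeking, the rate sits near the nominal 0.05; with peeking,
# it climbs well above it, even though there is no effect at all.
```

Even this single, mild form of flexibility substantially inflates the false-positive rate, which illustrates Nosek's point that the routes to spurious significance are more numerous than they might seem.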






Beyond encouraging questionable research practices, Nosek argued that over-emphasizing statistical significance makes researchers prone to gloss over results that, while not significant, are nevertheless informative. Even replication “failures” are not utter losses, but can contribute valuable information about the robustness of an effect.






Overall, Nosek pushed for a more inclusive understanding of what makes results in psychological research valuable while making a strong case that, among other causes, journals’ publication bias and misuse of aforementioned researcher degrees of freedom have led to a startling number of questionable findings in psychological science. Given the somewhat dire results of replication projects so far (the original Reproducibility Project: Psychology found that about 60% of studies failed to replicate, and a sneak peek Nosek offered of the data from the sister Many Labs 2 project was not much different), Nosek argued that widespread replication is essential for the health of the field.






In presenting the opposing position, Jason Mitchell did not object to replication (indeed, Mitchell stated that it is very important, especially within labs) so much as to the methods of the Reproducibility Project: Psychology (which Nosek led). In seeking to reproduce findings, Mitchell stated, there is an overemphasis on reproducing the minutiae of the procedure (direct replication) rather than capturing the essence of the original study (conceptual replication). The question psychologists seek to answer is rarely simply, “what is the effect of x stimulus on y mental event or behavior,” but rather, “what are the changes in y mental state or behavior brought about by mental causes that are in turn effects of x stimulus?” In an example he gave, if you’re studying the effect of mood on people’s tendency to socialize (operationalized as a happy mood elicited by listening to the Beach Boys or a sad mood elicited by listening to Adele), what you’re fundamentally not interested in is the effect of Adele or the Beach Boys on socialization per se. Rather, you’re interested in the songs’ effects only to the extent that they cause subjects to experience a certain mood. Mitchell argued that direct replication studies, in prioritizing the similarity of the individual labs’ attempts and their faithfulness to the original procedure, are likely missing the forest for the trees.









Amy Cuddy on "power poses," courtesy of Vimeo.

Touching on the nominal topic of the debate, Mitchell also made the important point that some effects in psychology, though “real,” are much harder to elicit than others. That cognitive dissonance is difficult to elicit in no way means it’s not a real effect. Resources of funding, knowledge, expertise in the paradigm, etc. all influence the likelihood of researchers successfully finding a real effect. Following this, he argued, it’s difficult to get any kind of accurate idea of the overall rate of replicability in the mind sciences, and results from the reproducibility project and similar efforts cannot simply be extrapolated to the field at large.






As noted by the hosts David Chalmers and Ned Block, there was little “bloodshed” – throughout the discussion, it was clear that the researchers respect each other and are committed to doing good science. They agreed on many fundamental issues, including that the mind and brain sciences are not without problems and that replication is a valuable tool. Further discussion would have illuminated their differing opinions on direct versus conceptual replication and the plausibility of an overall suspected rate of replicability in cognitive neuroscience and psychology.






If this debate was any indication, neuroethics as a field may find some comfort in the general direction of the mind and brain sciences. Most researchers, both on the stage and in the audience, seemed to be taking replication seriously, and the field has displayed a push to self-correct. In the meantime, when it comes to psychological research, neuroethics may trust but must verify.



References



Carney, D. R., Cuddy, A. J. C., & Yap, A. J. (2010). Power poses: Brief nonverbal displays cause neuroendocrine change and increase risk tolerance. Psychological Science, 21, 1363-1368.



Farah, M. (2015). The unknowns of cognitive enhancement. Science 350, 379-80. DOI: 10.1126/science.aad5893.



Greenwald, A. G. (1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin, 82, 1-20.



Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. DOI: 10.1126/science.aac4716.



Ranehill, E., Dreber, A., Johannesson, M., Leiberg, S., Sul, S., & Weber, R. A. (2015). Assessing the Robustness of Power Posing: No Effect on Hormones and Risk Tolerance in a Large Sample of Men and Women. Psychological Science, 26, 653-656.



Want to cite this post?



Wills, B. (2016). Debating the Replication Crisis - Why Neuroethics Needs to Pay Attention. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/11/wills-title-pending.html

Tuesday, November 15, 2016

The 2016 Kavli Futures Symposium: Ethical foundations of Novel Neurotechnologies: Identity, Agency and Normality


By Sean Batir (1), Rafael Yuste (1), Sara Goering (2), and Laura Specker Sullivan (2)







Image from Kavli Futures Symposium

(1) Neurotechnology Center, Kavli Institute of Brain Science, Department of Biological Sciences, Columbia University, New York, NY 10027




(2) Department of Philosophy, and Center for Sensorimotor Neural Engineering, University of Washington, Seattle, WA 98195




Detailed biographies for each author are located at the end of this post




Few would deny the divide, often described as the “two cultures,” between the humanities and the sciences. This divide must be broken down if humanistic progress is to be made in the future of transformative technologies. The 2016 Kavli Futures Symposium, held by Dr. Rafael Yuste and Dr. Sara Goering at the Neurotechnology Center of Columbia University, addressed this divide by curating an interdisciplinary dialogue between leading neuroscientists, neural engineers, and bioethicists across three broad topics: identity and mind reading, agency and brain stimulation, and definitions of normality in the context of brain enhancement. The message of such an event is clear: dialogue between neurotechnology and ethics is necessary because novel neurotechnologies are poised to generate a profound transformation in our society.






With the emergence of technology that can read the brain’s patterns at an intimate level, questions arose about the implications for how these methods could reveal the core of human identity – the mind. Jack Gallant, from UC Berkeley, reported on a neural decoder that can identify the visual imagery used by human subjects (1). As subjects in Gallant’s studies watched videos, the decoder determined how to identify which videos they were watching based on fMRI data. Gallant is convinced that “technologically, ubiquitous non-invasive brain decoding will happen. The only way that’s not going to happen is if society stops funding science and technology.”





Other panelists at the symposium shared Gallant’s confidence in the advent of technology that can decode the content of mental activity, and discussed how motor intentions can be decoded and used to control external objects, like a computer cursor or robotic arm. For instance, Miguel Nicolelis from Duke University discussed a Brain Net that merged neural commands from the brains of three monkeys “into a collective effort responsible for moving a virtual arm.” As the leader of one of the laboratories at the forefront of improving brain computer interfaces for prosthetic control, Nicolelis raised the question of whether such technologies “should be used for military applications.” Beyond specialized use, Nicolelis expressed concern that access to new technologies could be limited – who will be using brain decoders or multiple brain machine interfaces, and why?







Neural technologies that access our internal mental processes may have the potential to shift our understanding of human identity and our sense of ourselves as individual agents. In thinking about identity, philosopher Francoise Baylis of Dalhousie University discussed neuromodulation devices, invoking deep brain stimulation treatments (DBS) as an example. She stated, “DBS is not a challenge or threat to identity. I think people are conflating changes in personality with changes in personal identity. I do not think these are the same… at the end of the day, identity rests in memory, belonging, and recognition.” Baylis argued that our identities are always dynamic and relational, and neural technologies are another way that our relational identities can shift, without threatening who we are. Still, some felt that neural devices may call into question our sense of agency and responsibility for our actions. In considering the issues raised during this panel, Patricia Churchland, from UCSD, pushed back against sensational accounts of the limits that new technologies will impose on free choice and responsibility for action, stating that a key question about new neurotechnologies is: “What will it do for us?” There is a need for a balanced approach between speculation about future possibilities, reflection about what science and technology are already doing, and how this will affect society in the short term.





Since sophisticated brain stimulation technologies are already capable of eliciting complex behaviors in lower mammals, ethicists discussed an array of concerns related to agency: how can we know whether our actions and behaviors actually result from our own intentions when adaptive neural devices interact with our brains? Pim Haselager of Radboud University explored our “sense of agency” in experiments designed to separate our belief in our agency from our actual causal efficacy in acting (2). His work suggests that “the harder you work, the more agency you feel,” and he notes that maintaining a strong sense of agency while using a BCI may be linked to a relatively high level of effort on the part of the user. Haselager described the sense of agency as multi-faceted – while we are learning more about the dimensions of agency, interpersonal and psychosocial issues are still emerging with neurotechnological research. Ed Boyden from MIT, whose laboratory is developing tools for mapping and controlling the brain through optogenetics, continued the discourse surrounding the multifaceted nature of agency by questioning, “Can detailed models of an individual’s [mental] traits be reconstructed to the point in which simulation could be possible?” He suggested that as the ability to probe neural circuits expands, we will face increasingly complex questions about ourselves and our priorities. If a human-like simulation could be developed, would it possess the same internal dilemma of agency that persists in any decision-making human?





Leigh Hochberg, from Brown University, whose laboratory focuses on brain computer interfaces for paraplegic patients and the clinical trials of BrainGate technology, suggested that how and why the privacy of neural data is ensured depends on what we think is in the data – what does it tell us? This affects how he assesses risk and benefit in his own work – in a trial with a small number of participants, clinical data might be easily identifiable. This requires what Hochberg described as an “extraordinary consent process.” With evidence of the safety and efficacy of BCIs, increasing numbers of participants in BCI clinical trials, and changes to consent requirements, more thinking is needed about how neural data and security are handled. Finally, Martha Farah, from the University of Pennsylvania, raised important conceptual questions about agency. She proposed that agency is ethically significant because it is necessary for freedom and autonomy, which underlie commonsense notions of moral responsibility. The concern with neurotechnology and agency is not whether an intervention is “in the head,” but whether it is quantitatively different from preceding technologies, like pharmaceuticals – does it allow for drastically more control over individuals and their agency? Farah suggested that new neural technology might allow for more fine-grained control of human thoughts and behavior, a possibility that raises economic and regulatory issues in the short term, equality and opportunity questions in the medium term, and existential questions about humanity in the long term.





The sheer existence of mind and brain enhancing technologies rests on a tenuous and fundamental assumption that both ethicists and neuroscientists believe should be addressed: What exactly does it mean to be normal, and is achieving normality a reasonable aim? Blaise Aguera from Google opened the floor, starting a discussion about gender as an instance of the social tendency to impose a structure of normality (e.g., binary genders) when a much wider array of gendered possibilities is available – not even just on a spectrum, but across a “a multidimensional vectorial space.” Neural technologies should not inadvertently be designed in ways that exacerbate existing biases such as gender or limited appreciation for the diversity of modes of being in the world. Rather, Aguera asserted that “those of us who create these systems” of human enhancement should “explore a deontology” with “something like science, wellbeing, equity, freedom, and progress” as initial guiding principles. Polina Anikeeva at MIT then shared her work on new devices that match the flexible material properties of the brain, explaining her motivation to make devices less invasive because an “ethical implication is that when we introduce a rigid device, then we destroy the surrounding tissue,” creating glial scars that “don’t interact the same way as neurons do.” Her work shows how even upstream material design of electrodes for neural technology may have a significant impact on the end-user’s experience of the technology.





Gregor Wolbring from the University of Calgary expanded the conversation on normality and enhancement to address “ability privilege,” which is the idea that “individuals who enjoy the advantages are unwilling to give up their advantages,” because for many people the judgment of abilities is intrinsic to one’s self-identity and security. He posed questions regarding how we determine ability expectations, and how those expectations alter the treatment of people whose bodies are not typical. Will disabled people want neurotechnologies? Perhaps, if they are understood as tools to achieve well-being, rather than as ways to “fix” people. When asked about the role of neuroprosthetics in the disability world, Wolbring expressed “Tool, yes. Identity, no.” David Wasserman from NIH turned the conversation to neurodiversity, and the movement to reframe some neuroatypical forms of processing as forms of valuable diversity. Such individuals may not need medical technology, but better social accommodation. Thus, Wasserman argued for a more pecuniary focus, emphasizing “more funding ought to be given to…biomedical research that would increase the flourishing” of people living with various neuroatypical conditions. Wasserman suggests that such research should be less focused on medical “fixes”, even though the public tends to be moved by research justifications focused on medical advancement. This latter point was confirmed by Gallant, who noted that “while scientists do a bad job of explaining how science works, the public knows they get sick, and they go to the hospital. This is why the NIH budget is 10 times greater than NSF….medicalizing research has the good effect of attracting funding to biomedical research.” Equipped with this knowledge, a slightly clearer picture begins to emerge, where research at the frontiers of neurotechnology may be forced to address normalization in a medical context for the sake of funding further research, unless funding structures change.





An open discussion held at the end of the Kavli Futures Symposium with all speakers and members of the NIH BRAIN Neuroethics Workgroup synthesized separate kernels of knowledge shared throughout the event. This included a sense of urgency for funding ethical and legal work in order to guide the development of new technologies that have the capacity to radically transform the human experience. There is a need to ensure that multiple stakeholders, including scientists, disabled people, members of the general public, and ethicists work together to consider the ethical aspects of scientific and technological developments. These ethical aspects are clearest in the short term, such as issues about funding priorities, institutional space for ethics, translational goals, and social support for individuals using novel technologies. Long-term questions can also be raised, including the value of preserving the separateness of individuals with private mental space, the potential for combining consciousness toward shared tasks, and the significance of potential enhancements that radically alter what we can directly control with our brains.





By exploring the collective web of thought that connects the humanities and the sciences, several profound issues were identified. Attending to these issues should galvanize the relevant public and private entities to attend more fully to the integration of neurotechnological research with human values.




Author Biographies




Rafael Yuste is a professor of biological sciences and neuroscience at Columbia University. Yuste is interested in understanding the function and pathology of the cerebral cortex, using calcium imaging and optogenetics to “break the code” and decipher the communication between groups of neurons. Yuste has obtained many awards for his work, including those from the New York City Mayor, the Society for Neuroscience and the National Institutes of Health’s Director. He is a member of Spain’s Royal Academies of Science and Medicine. Yuste also led the researchers who proposed the Brain Activity Map, precursor to the BRAIN initiative, and is currently involved in helping to launch a global BRAIN project and a Neuroethical Guidelines Commission. He was born in Madrid, where he obtained his medical degree at the Universidad Autónoma. He then joined Sydney Brenner's laboratory in Cambridge, UK. He engaged in Ph.D. study with Larry Katz in Torsten Wiesel’s laboratory at Rockefeller University and was a postdoctoral student of David Tank at Bell Labs. In 2005, he became a Howard Hughes Medical Institute investigator and co-director of the Kavli Institute for Brain Circuits. Since 2014, he has served as director of the Neurotechnology Center at Columbia.





Sean Batir is currently a PhD candidate rotating in Dr. Rafael Yuste's laboratory. Previously, he helped co-found two companies in the Bay Area and Boston that developed inconspicuous wearable devices and augmented reality. He also worked as a software developer at Oracle Corporation, creating unified archives that could deploy cloud-enabled databases in a virtual zone and web-based applications that enabled user-friendly visualization of Oracle Supercluster system features. Academically, he earned his M.Res in Bioengineering at the Imperial College of London, in Dr. Simon Schultz's Neural Coding lab, where he developed a new method for complex spike classification in the cerebellum. Prior to his Master's work, Sean studied optogenetic interrogation of the amygdala-hippocampal circuit and contributed to the automated patch clamping device developed in Dr. Ed Boyden's Synthetic Neurobiology Group at MIT. As a SENS Summer Research Fellow at the Buck Institute of Aging, Sean also characterized therapeutic effects of lithium as a tentative treatment for Parkinson's disease. Sean is driven to develop transformative technologies that redefine what it means to be human. He believes that innovation occurs through interdisciplinary dialogue, both within academia and outside of it, and seeks to facilitate interactions that drive creation.





Laura Specker Sullivan is a postdoctoral fellow in neuroethics at the Center for Sensorimotor Neural Engineering, University of Washington. Her position is jointly held with the National Core for Neuroethics at the University of British Columbia. She conducts conceptual research on issues in practical ethics relating to the justification and goals of biomedical practices as well as empirical research on stakeholder attitudes and perceptions towards emerging technologies such as brain-computer interfaces. Her work often takes a cross-cultural approach, focusing on Japanese and Buddhist perspectives. She received her PhD from the Department of Philosophy at the University of Hawaii at Manoa in 2015.






Sara Goering is Associate Professor of Philosophy at the University of Washington, Seattle, and affiliated with the Program on Values and the Disability Studies Program. She leads the ethics thrust at the Center for Sensorimotor Neural Engineering.











References



1. Naselaris et al. 2015. A voxel-wise encoding model for early visual areas decodes mental images of remembered scenes. Neuroimage 105(15): 215-228.



2. Haselager, W.F.G. 2013. Did I do that? Brain-Computer Interfacing and the sense of agency. Minds and Machines 23(3): 405-418.



Want to cite this post?



Batir S, Yuste R, Goering S, and Specker Sullivan L. (2016). The 2016 Kavli Futures Symposium: Ethical foundations of Novel Neurotechnologies: Identity, Agency and Normality. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/11/the-2016-kavli-futures-symposium_14.htm


Tuesday, November 8, 2016

On the ethics of machine learning applications in clinical neuroscience


By Philipp Kellmeyer




Dr. med. Philipp Kellmeyer, M.D., M.Phil. (Cantab) is a board-certified neurologist working as a postdoctoral researcher in the Intracranial EEG and Brain Imaging group at the University of Freiburg Medical Center, Germany. His current projects include the preparation of a clinical trial for using a wireless brain-computer interface to restore communication in severely paralyzed patients. In neuroethics, he works on ethical issues of emerging neurotechnologies. He is a member of the Rapid Action Task Force of the International Neuroethics Society and the Advisory Committee of the Neuroethics Network.





What is machine learning, you ask? 


As a brief working definition up front: machine learning refers to software that can learn from experience and is thus particularly good at extracting knowledge from data and at generating predictions [1]. Recently, one particularly powerful variant called deep learning has become the staple of much of the recent progress (and hype) in applied machine learning. Deep learning uses biologically inspired artificial neural networks with many processing stages (hence the word "deep"). These deep networks, together with ever-growing computing power and larger datasets for learning, now deliver groundbreaking performance on many tasks. For example, Google’s AlphaGo program, which comprehensively beat a Go champion in January 2016, uses deep learning algorithms for reinforcement learning (analyzing 30 million Go moves and playing against itself). Despite these spectacular (and media-friendly) successes, however, the interaction between humans and algorithms may also go badly awry.
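
To ground that working definition, here is a toy sketch (my illustration, not the author's): a single artificial neuron, the unit that deep networks stack into many layers, learns a hidden rule purely from labeled examples and then makes predictions on data it has never seen.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "experience": 200 examples with 2 features; the hidden rule is
# "label is 1 when the two features sum to more than 1."
X = rng.uniform(0, 1, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

# A single artificial neuron (logistic regression) trained by
# gradient descent on the log-loss.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)      # gradient of the loss w.r.t. w
    grad_b = (p - y).mean()
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# The program now generalizes: it predicts labels for inputs it has
# never seen, without the rule ever being written into the code.
X_new = rng.uniform(0, 1, size=(1000, 2))
pred = 1 / (1 + np.exp(-(X_new @ w + b))) > 0.5
truth = X_new.sum(axis=1) > 1.0
print("accuracy on unseen data:", (pred == truth).mean())  # typically > 0.9
```

Deep learning differs from this sketch in scale, not in kind: many such units, stacked in layers and trained on far larger datasets, can learn far more intricate rules.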






The software engineers who designed ‘Tay,’ the chatbot based on machine learning, for instance, surely had high hopes that it may hold its own on Twitter’s unforgiving world of high-density human microblogging. Soon, however, these hopes turned to dust when - seemingly coordinated - interactions between Twitter users and Tay turned the ideologically blank slate of a program into a foul display of racist and sexist tweets [2].








Image courtesy Wikimedia

These examples reflect diverse efforts to create more and more “use-cases” for machine learning such as predictive policing (using machine learning to proactively identify potential offenders) [3], earthquake prediction [4], self-driving vehicles [5], autonomous weapons systems [6], or even for creative purposes like the composition of Beatles-like songs or lyrics. Here, I focus on some aspects of machine learning applications in clinical neuroscience that, in my opinion, warrant particular scrutiny.





Machine learning applications in clinical neuroscience 


In recent years, leveraging computational methods for the modeling of disorders has become a particularly fruitful strategy for research in neurology and psychiatry [7], [8]. In clinical neuroimaging, for example, machine learning algorithms have been shown to detect morphological brain changes typical of Alzheimer’s dementia [9], identify brain tumor types and grades [10], predict language outcome after stroke [11], or distinguish typical from atypical Parkinson’s syndromes [12]. In psychiatric research, examples of applying machine learning include the prediction of outcomes in psychosis [13] and of the persistence and severity of depressive symptoms [14]. More generally, most current applications follow one of the following rationales: (1) to distinguish between healthy and pathological tissue in images, (2) to distinguish between different variants of conditions, (3) to make predictions about the outcome of particular conditions. While these are potentially helpful tools for assisting doctors in clinical decision-making, they are not yet routinely used in clinics. It is safe to predict, however, that machine learning based programs for automated image processing, diagnosis, and outcome prediction will play a significant role in the near future.
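
As a minimal sketch of rationale (1), the example below trains a classifier to separate two groups using synthetic stand-in "features." Nothing here is real patient data, and the setup is my assumption rather than the method of any of the cited studies; the scikit-learn calls themselves are standard.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for imaging-derived features (e.g., regional
# volumes or texture measures), labeled 0 = control, 1 = patient.
X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)

# Hold out a test set: a diagnostic aid is only credible if it is
# evaluated on cases it never saw during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="linear").fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The same train-then-evaluate pattern underlies rationales (2) and (3) as well; only the labels change (disease variants, or future outcomes).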





Some of the ethical challenges 


One area in which intelligent systems may create ethical challenges is their impact on the autonomy and accountability of clinical decision-making. As long as machine learning software for computer-aided diagnosis merely assists radiologists and the clinician keeps the authority over clinical decision-making, it would seem that there is no profound conflict between autonomy and accountability. If, on the other hand, decision-making were to be delegated to the intelligent system, to any degree whatsoever, we may indeed face the problem of an “accountability gap” [15]. After all, who (or what) would need to be held accountable in the case of a grave system error resulting in misdiagnosis: the software engineer, the company, or the regulatory body that allowed the software to enter the clinic?







Image courtesy of Vimeo

Another problem may arise from the potential for malicious exploitation of an adaptive, initially “blank” machine learning algorithm - as in the case of Tay, the chatbot. Machine learning software in its initial, untrained state would perhaps be particularly vulnerable to exploitation by interacting users with malicious intents. Nevertheless, it still requires some leap of the imagination to go from collectively trolling a chatbot into racism or sexism to scenarios referred to as “neurohacking,” in which hackers viciously exploit computational weaknesses of neurotechnological devices for improper purposes. Despite this potential for misuse, the adaptiveness of modern machine learning software may, with appropriate political oversight and regulation, work in favor of developing programs that are capable of ethically sound decision-making.





While intelligent systems based on machine learning software perform increasingly complex tasks, designing a “moral machine” [16] (also see a previous discussion on the blog here) – a computer program with a conscience, if you will – alas remains elusive. A rigid set of algorithms will most likely perform poorly in the face of uncertainty, in ethically ambiguous or conflicting scenarios, and will not improve its behavior through its experiences. From an optimistic point of view, the “innate” learning capabilities of machine learning may enable software to develop ethically responsible behavior if given appropriate data sets for learning. For example, having responsible and professionally trained humans interact and train with intelligent systems – “digital parenting” – may enhance the moral conduct of machine learning software and immunize it against misuse [17].





While the limited scope here precludes an in-depth reconstruction of this debate, I encourage you to ponder how the extent of and relationship between autonomy, intentionality, and accountability, when exhibited by an intelligent system, may influence our inclination to consider it a moral agent. Meanwhile, one interesting ancillary benefit that arises from this increasing interest in teaching ethics to machines is that we study the principles of human moral reasoning and decision-making much more intensely [18].





Suggestions for political regulation and oversight of machine learning software


To prevent maladaptive system behavior and malicious interference, close regulatory legislation and oversight are necessary that appreciate the complexities of machine learning applications in medical neuroscience. In analogy to ethical codes for the development of robotic systems – the concept of “responsible robotics” [19] – I would emphasize the need for such an ethical framework to include non-embodied software – “responsible algorithmics,” if you will. From the policy-making perspective, the extent of regulatory involvement in developing intelligent systems for medical applications should be proportionate to the degree of autonomous system behavior and the potential harm caused by these systems. We may also consider whether the regulatory review process for novel medical applications based on machine learning should include a specialized commission containing experts in clinical medicine, data and computer science, engineering, and medical ethics.





Instead of merely remaining playful children of the Internet age we may eventually grow up to become “digital parents”, teaching intelligent systems to behave responsibly and ethically – just as we would with our actual children.





Acknowledgments 


I thank Julia Turan (Science Communicator, London, @JuliaTuran) and the editors of The Neuroethics Blog for valuable discussions of the text and editing. I also thank Robin Schirrmeister (Department of Computer Science, University of Freiburg) for clarifications and discussions on machine learning. Remaining factual and conceptual shortcomings are thus entirely my own.



References 



1. Russell, S. & Norvig, P. Artificial Intelligence: A Modern Approach. (Prentice Hall, 2013).



2. Staff & agencies. Microsoft ‘deeply sorry’ for racist and sexist tweets by AI chatbot. The Guardian (2016).



3. Lartey, J. Predictive policing practices labeled as ‘flawed’ by civil rights coalition. The Guardian (2016).



4. Adeli, H. & Panakkat, A. A probabilistic neural network for earthquake magnitude prediction. Neural Netw. 22, 1018–1024 (2009).



5. Surden, H. & Williams, M.-A. Technological Opacity, Predictability, and Self-Driving Cars. (Social Science Research Network, 2016).



6. Thurnher, J. S. in Targeting: The Challenges of Modern Warfare (eds. Ducheine, P. A. L., Schmitt, M. N. & Osinga, F. P. B.) 177–199 (T.M.C. Asser Press, 2016).



7. Maia, T. V. & Frank, M. J. From reinforcement learning models to psychiatric and neurological disorders. Nat. Neurosci. 14, 154–162 (2011).



8. Fletcher, P. C. & Frith, C. D. Perceiving is believing: a Bayesian approach to explaining the positive symptoms of schizophrenia. Nat. Rev. Neurosci. 10, 48–58 (2009).



9. Li, S. et al. Hippocampal Shape Analysis of Alzheimer Disease Based on Machine Learning Methods. Am. J. Neuroradiol. 28, 1339–1345 (2007).



10. Zacharaki, E. I. et al. Classification of brain tumor type and grade using MRI texture and shape in a machine learning scheme. Magn. Reson. Med. 62, 1609–1618 (2009).



11. Saur, D. et al. Early functional magnetic resonance imaging activations predict language outcome after stroke. Brain 133, 1252–1264 (2010).



12. Salvatore, C. et al. Machine learning on brain MRI data for differential diagnosis of Parkinson’s disease and Progressive Supranuclear Palsy. J. Neurosci. Methods 222, 230–237 (2014).



13. Young, J., Kempton, M. J. & McGuire, P. Using machine learning to predict outcomes in psychosis. Lancet Psychiatry 3, 908–909 (2016).



14. Kessler, R. C. et al. Testing a machine-learning algorithm to predict the persistence and severity of major depressive disorder from baseline self-reports. Mol. Psychiatry 21, 1366–1371 (2016).



15. Kellmeyer, P. et al. Effects of closed-loop medical devices on the autonomy and accountability of persons and systems. Camb. Q. Healthc. Ethics (2016).



16. Wallach, W. & Allen, C. Moral Machines: Teaching Robots Right from Wrong. (Oxford University Press, 2008).



17. Floridi, L. & Sanders, J. W. On the Morality of Artificial Agents. Minds Mach. 14, 349–379 (2004).



18. Skalko, J. & Cherry, M. J. Bioethics and Moral Agency: On Autonomy and Moral Responsibility. J. Med. Philos. 41, 435–443 (2016).



19. Murphy, R. R. & Woods, D. D. Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intell. Syst. 24, 14–20 (2009).




Want to cite this post?



Kellmeyer, P. (2016). On the ethics of machine learning applications in clinical neuroscience. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/11/on-ethics-of-machine-learning.html


Tuesday, November 1, 2016

A Good Death: Towards Alternative Dementia Personhoods


By Melissa Liu




Melissa is a Medical Anthropology PhD student at the U. of Washington, Seattle. Her nascent research circles the intersection of neuroscience, dementia, and design. Melissa is also a Neuroethics Fellow with the Center for Sensorimotor Neural Engineering, an NSF ERC.  





Something is amiss. Why is there a neighborhood of houses within this assisted living facility? Why do all the houses in the neighborhood have the same 1950s design? Am I standing on carpet? It looks like a garden path. The ceiling feels like a sunset in real time. [1] Where am I? When is this?

The questions above are inspired by Lantern, one of several memory care facilities in Ohio based on a patent-pending memory care program created by Jean Makesh, where rehabilitation is the goal [2] [3]. However, many more models around the world are based on Reminiscence therapy, a type of therapy which technically has “[no] single definition” but generally “[involves] the recalling of early life events and interaction between individuals” [4]. Research shows that “Reminiscence therapy is used extensively in dementia care and evidence shows when used effectively it helps individuals retain a sense of self-worth, identity and individuality” [4].




Reminiscence therapy serves as the foundation of many dementia village (DV) iterations. DVs and similarly designed places draw on various models of caregiving and therapy, but all are memory care communities designed to care for residents with dementia who live in their personal memories. The communities are designed to provide spaces for a high degree of reminiscence, giving residents the freedom to live their own realities.







Image courtesy of Wikimedia


DVs are being designed and built around the world. The San Diego Opera is currently building Glenner Town Square, a “faux city...that will be like stepping into a time warp.” Located in a warehouse, the “fully functional...self-contained city center” will serve as a day care center for individuals with dementia [5]. Georgian Bay, an assisted living facility in Canada, includes a 1947 Dodge and visits from Elvis impersonators [6][7]. A museum in Aarhus, Denmark, hosts an exhibit called the “House of Memories,” open selectively to people with Alzheimer’s. The exhibit is a house that combines 1950s architectural design with a distinctive focus on sensory detail. As visitors are led through the house, the actor/museum guide playing the housewife opens a can of coffee chosen specifically because its smell evokes popular 1950s coffee brands. The exhibit is based on research on the “reminiscence bump,” which proposes that the “best preserved...memories [are] from a person’s teens and 20s” [8]. Reminiscence-based spaces for those with dementia trace back to 2009, when the groundbreaking DV Hogeweyk was created [9].





Hogeweyk is known for being the first dementia village. Located in Weesp, a suburb of Amsterdam, Hogeweyk is completely enclosed except for one camouflaged door. The village includes twenty-three houses for just 152 residents [10], all with severe dementia. The residents are cared for by 250 staff members providing twenty-four-hour care. After experiencing their own parents’ dementia, Hogeweyk’s founders collaborated with Dementia Village Architects to design the village. Research shows that, relative to living in a biomedical facility, Hogeweyk residents require less medication, have fewer behavioral issues, and report greater quality of life [10]. As a Gizmodo article puts it, Hogeweyk is designed for residents to feel “normal” and still “participate in life, the same way they did before they entered a dementia care unit” [9].





Residents are placed into one of seven general lifestyle apartments based on how they lived most of their lives. For example, a person who lived in a high socioeconomic bracket may be placed in the “upper class” lifestyle apartment. Other apartment types include “homey,” “Christian,” “artisan,” “Indonesian,” and “cultural” [10]. It is unclear who chooses a resident’s lifestyle apartment, or by what criteria.







Image courtesy of Wikimedia


Beyond theoretically suffusing residents’ individual lives into the apartment designs, the rest of the village is laid out as any small town might be. Hogeweyk includes such fixtures as a cafe, a grocery store, a salon, a theater, and gardens. The caregivers play dual roles, working in their medical capacities while also serving as village employees (e.g., gardener, hairstylist). Residents live their lives as they desire: strolling where they please, tending to their hair at the salon, purchasing food. While manicuring the lawn, a gardener can also keep a medical caregiver’s eye on the residents. Hogeweyk caters to the reality (or what its designers see as the reality) of individuals with severe dementia.





In comparison, strict adherence to biomedical models of general eldercare has led many in the United States to die in sterile hospital beds [11]. Scientific research produces financially costly life-sustaining treatments (e.g., dialysis) that lack consideration for a patient’s quality of life [11]. Models that value length of life over quality of life contribute to a carelessness toward the patient or, in non-biomedical terms, toward human beings who deserve to be thought of as such. Residents of retirement homes who spend most of their daily lives confined to the residential premises may feel a loss of control and freedom. Biomedical models of eldercare might better be used in tandem with design knowledge, which could change how both the ‘care’ in healthcare and the patient are conceptualized. If medical research points to patients feeling a lack of freedom in residential facilities, for instance, implementing long walkways (cf. Hogeweyk) that give residents a greater sense of space would affirm the humanity and value of residents whose desires and quality of life are seriously considered and respected.





In the case of Alzheimer’s disease, Hogeweyk moves towards recreating a good life as a way for residents to experience a good death. By searching for neither a cure nor a cause, Hogeweyk focuses on creating an ontology that both fulfills the desires of individuals with dementia as much as possible (e.g., watching a play in a theater) and creates comfort and ease around basic skills (e.g., walkways are color-coded to help residents stay on a path). Rather than treating residents with dementia as patients with symptoms, Hogeweyk places prime importance on the dignity and personhood of its residents. Respecting the reality of residents informs the facility’s design.







Image courtesy of Flickr


A widely repeated critique in news articles is that Hogeweyk is lying to its residents and fabricating reality for a vulnerable population [12][13]. The village takes reminiscence therapy to the extreme, where the past becomes the present. Even articles casting Hogeweyk in a positive light describe the village as “a more benevolent version of ‘The Truman Show’” [14]. In the article “On Recognition, Caring, and Dementia,” Dr. Janelle Taylor, a medical anthropologist at the University of Washington, Seattle, argues for an alternative configuration of care, one learned experientially by caring for her mother, who has dementia. Taylor writes that “[those] who have little firsthand experience with dementia tend, I think, to imagine it as a more or less purely cognitive loss of a store of remembered facts, manifested in a loss of the ability to recite names and dates and other bits of information” [15]. After being repeatedly asked whether her mother remembers her name, Taylor writes, “I don’t need my mother to tell me my name...I already know these things” [15]. Rather, the question that should be asked is, “Do we grant her recognition?” [15].





These villages are attempting to cater to a growing market of aging Baby Boomers. Hogeweyk’s construction cost of over $25 million was primarily government-funded [16]. Residents pay around $6,000 a month. The Netherlands, where Hogeweyk is located, consistently ranks at or near the top of international rankings of healthcare systems, and citizens carry mandatory, government-subsidized health insurance [17]. Given the privatized healthcare system of the United States, at what cost to residents would building a DV be profitable? And what kind of care would be provided to those who cannot afford to live in one?
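To make the affordability question concrete, the following is a minimal back-of-envelope sketch in Python. The construction cost, resident count, and staff count are the Hogeweyk figures reported above; the amortization horizon, per-staff cost, and other operating costs are purely assumed placeholders for illustration, not reported numbers.

construction_cost = 25_000_000   # reported: Hogeweyk's construction cost (USD)
residents = 152                  # reported: number of Hogeweyk residents
staff = 250                      # reported: number of Hogeweyk staff members

amortization_years = 30          # ASSUMED: financing horizon for construction
avg_staff_cost = 55_000          # ASSUMED: annual salary plus benefits per staff member (USD)
other_annual_costs = 2_000_000   # ASSUMED: food, utilities, maintenance (USD/year)

annual_construction = construction_cost / amortization_years
annual_staffing = staff * avg_staff_cost
annual_total = annual_construction + annual_staffing + other_annual_costs

# Monthly fee per resident needed to cover all costs with no public subsidy
breakeven_fee = annual_total / (residents * 12)

print(f"Estimated annual cost: ${annual_total:,.0f}")
print(f"Unsubsidized break-even fee: ${breakeven_fee:,.0f} per resident per month")

Under these assumptions, the unsubsidized break-even fee comes out above $9,000 per resident per month, well beyond the roughly $6,000 that Hogeweyk residents pay. That gap gives a rough sense of how much of the burden Dutch public funding carries, and of what a privately financed American village might have to charge.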





The question then turns to sustainable models of care for individuals with dementia. Organizations are rethinking conceptions of age and how communities are configured. Judson, a not-for-profit organization in Ohio, creates intergenerational apartment buildings [18][19]. Similarly, the Dutch Humanitas Independent Senior Living facility provides free housing to students who are paired with elderly residents as roommates [20][21]. Each student is required to spend at least thirty hours a month “helping out” their roommates and neighbors [19]. As an institutional affiliate of the World Health Organization, the AARP runs an Age-Friendly Communities program that certifies cities actively creating shifts in eight particular “Domains of Livability” (e.g., housing, social participation, community support) [22]. With the increasing number of people who will be diagnosed with dementia, there is an urgent call to imagine, design, and move towards a future in which communities shift to different models of care for the aging, models that truly address their needs and dignity. Whether the best model is DVs, communities satisfying new criteria like the Domains of Livability, or both remains to be seen.




Acknowledgment 

Thank you to Karen Rommelfanger for her generous help and guidance.



References 



 1. Porter, Evan. “One man turned nursing home design on its head when he created this stunning facility.” Upworthy, September 8, 2016. Accessed September 23, 2016. http://www.upworthy.com/one-man-turned-nursing-home-design-on-its-head-when-he-created-this-stunning-facility?g=2&c=ufb1.



2. Makesh, Jean. “PodCast 5 – Seven Building blocks for new learning.” YouTube video, 7:55. Posted April 6, 2015. https://www.youtube.com/watch?v=rDsdigYD-4g.



3. “Svayus – ‘Memories of yesterday to function today ™’.” Svayus. Accessed September 23, 2016. http://svayus.com/.



4. Dempsey, Laura, et al. “Reminiscence in dementia: A concept analysis.” Dementia 13(2014):176-192.



5. Lewis, Danny. “Fake Towns Could Help People With Alzheimer’s Live Happier Lives: Model towns meant to spark memories could help patients with dementia.” Smithsonian, September 21, 2016. Accessed September 23, 2016. http://www.smithsonianmag.com/smart-news/fake-towns-could-help-people-alzheimers-live-happier-lives-180960518/?utm_source=facebook.com&no-ist.



6. McLaughlin, Tracy. “Retirement home turns back the clock for dementia patients.” Toronto Sun, May 17, 2015. Accessed September 23, 2016. http://www.torontosun.com/2015/05/17/retirement-home-turns-back-the-clock-for-dementia-patients.



7. The National. “Home Recreates Past for Dementia Patients.” YouTube video, 6:52. Posted October 11, 2015. https://www.youtube.com/watch?v=9rOYmxIWzJI.



8. Overgaard, Sidsel. “Denmark’s ‘House of Memories’ Creates 1950s For Alzheimer’s Patients.” NPR, September 13, 2016. Accessed September 23, 2016. http://www.npr.org/sections/parallels/2016/09/13/493744351/denmarks-house-of-memories-recreates-1950s-for-alzheimers-patients?utm_source=npr_newsletter&utm_medium=email&utm_content=20160918&utm_campaign=npr_email_a_friend&utm_term=storyshare.



9. Campbell-Dollaghan, Kelsey. “An Amazing Village Designed Just For People With Dementia.” Gizmodo, February 20, 2014. Accessed September 23, 2016. http://gizmodo.com/inside-an-amazing-village-designed-just-for-people-with-1526062373.



10. “Hogeweyk, living in lifestyles. A mirror image of recognizable lifestyles in our society.” Hogeweyk. Accessed September 23, 2016. http://hogeweyk.dementiavillage.com/en/.



11. Kaufman, Sharon. Ordinary Medicine: Extraordinary Treatments, Longer Lives, and Where to Draw the Line. Durham, NC: Duke University Press, 2015.



12. Sagan, Aleksandra. “Canada’s version of Hogewey dementia village recreates ‘normal’ life: Canadian facility creates similar false-reality experience based on Holland’s Hogewey.” CBC News, May 3, 2015. Accessed September 23, 2016. http://www.cbc.ca/news/health/canada-s-version-of-hogewey-dementia-village-recreates-normal-life-1.3001258.



13. Napoletan, Ann. “Dementia Care: What in the World is a Dementia Village?” Alzheimer’s.net, August 7, 2013. Accessed September 23, 2016. http://www.alzheimers.net/2013-08-07/dementia-village/.



14. Planos, Josh. “The Dutch Village Where Everyone Has Dementia: The town of Hogeway, outside Amsterdam, is a Truman Show-style nursing home.” The Atlantic, November 14, 2014. Accessed September 23, 2016. http://www.theatlantic.com/health/archive/2014/11/the-dutch-village-where-everyone-has-dementia/382195/.



15. Taylor, Janelle. “On Recognition, Caring, and Dementia.” Medical Anthropology Quarterly 22(2008):313-335.



16. Tagliabue, John. “Taking On Dementia With the Experiences of Normal Life.” The New York Times, April 24, 2012. Accessed September 23, 2016. http://www.nytimes.com/2012/04/25/world/europe/netherlands-hogewey-offers-normal-life-to-dementia-patients.html?_r=0.



17. “Zorgrek.; uitgaven (lopende, constante prijzen), financiering, 1998-2013” [“Health accounts; expenditures (current, constant prices), financing, 1998-2013”]. Centraal Bureau voor de Statistiek, May 21, 2015. Accessed September 23, 2016. http://statline.cbs.nl/StatWeb/publication/?DM=SLNL&PA=71914ned&D1=37-43&D2=a&HDR=G1&STB=T&VW=T.



18. “About Us.” Judson. Accessed September 23, 2016. http://www.judsonsmartliving.org/about/.



19. Hansman, Heather. “College Students are Living Rent-Free in a Cleveland Retirement Home: Research shows that the unique arrangement could have health benefits for the elderly.” Smithsonian, October 16, 2015. Accessed September 23, 2016. http://www.smithsonianmag.com/innovation/college-students-are-living-rent-free-in-cleveland-retirement-home-180956930/.



20. Regnier, Victor. Design for Assisted Living: Guidelines for Housing the Physically and Mentally Frail. Wiley, 2002.



21. Reed, Carey. “Dutch nursing home offers rent-free housing to students.” PBS, April 5, 2015. Accessed September 23, 2016. http://www.pbs.org/newshour/rundown/dutch-retirement-home-offers-rent-free-housing-students-one-condition/.



22. “The 8 Domains of Livability: An Introduction.” AARP. Accessed September 23, 2016. http://www.aarp.org/livable-communities/network-age-friendly-communities/info-2016/8-domains-of-livability-introduction.html.






Want to cite this post?



Liu, M. (2016). A Good Death: Towards Alternative Dementia Personhoods. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/10/a-good-death-towards-alternative.html