
Tuesday, March 29, 2016

AlphaGo and Google DeepMind: (Un)Settling the Score between Human and Artificial Intelligence

By Katie L. Strong, PhD 



In a quiet room in a London office building, artificial intelligence history was made last October as reigning European Champion Fan Hui played Go, a strategy-based game he had played countless times before. This particular match was different from the others though – not only was Fan Hui losing, but he was losing against a machine.





The machine was a novel artificial intelligence system named AlphaGo, developed by Google DeepMind. DeepMind, which was acquired by Google in 2014 for an alleged $617 million (Google's largest European acquisition to date), is a company focused on developing machines that can learn new tasks for themselves. DeepMind is chiefly interested in artificial "general" intelligence: AI that adapts to the task at hand and can accomplish new goals with little or no preprogramming. DeepMind's programs essentially have a kind of short-term working memory that allows them to manipulate and adapt information to make decisions. This is in contrast to AI that may be very adept at a specific job but cannot translate those skills to a different task without human intervention. For the researchers at DeepMind, the perfect platform for testing this kind of sophisticated AI is computer and board games.











Courtesy of Flickr user Alexandre Keledjian


DeepMind had set its sights high with Go; ever since IBM's chess-playing Deep Blue beat Garry Kasparov in 1997, Go has been considered the holy grail of artificial intelligence, and many experts had predicted that humans would remain undefeated for at least another 10 years. Go is a relatively straightforward game with few rules, but the number of possibilities on the board makes for complex, interesting play that requires long-term planning; on the typical 19x19 grid, according to the DeepMind website, there are more legal game positions "than there are atoms in the universe." Players take turns strategically placing stones (black for the first player, white for the second) on the grid intersections in an effort to form territories. Passing is an alternative to taking a turn, and the game ultimately ends when both players have passed because no unclaimed territory remains. Often, though, towards the end of the game one player will resign rather than play to the very end.
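
To get a sense of the scale DeepMind is describing, a rough back-of-the-envelope calculation helps (this is an illustrative Python sketch of my own, not taken from the paper; the exact count of legal positions, roughly 10^170, is smaller than the crude bound below but still dwarfs the ~10^80 atoms usually estimated for the observable universe):

```python
import math

# Each of the 361 intersections on a 19x19 board can be empty, black, or
# white, giving 3**361 raw configurations (an over-count, since not every
# configuration is a legal position).
intersections = 19 * 19
exponent = intersections * math.log10(3)

print(f"Raw board configurations: about 10^{exponent:.0f}")   # ~10^172
print("Atoms in the observable universe: roughly 10^80")
```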





In a Nature paper published in January of this year, researchers at DeepMind reported the development of an AI agent that could beat other Go computer programs with a winning rate of 99.8%. Buried in the text, in a single paragraph of the Results section, the authors also briefly describe the epic match between AlphaGo and Fan Hui, which ultimately resulted in a 5 to 0 win for artificial intelligence.







With that significant win in hand, DeepMind took a much bolder approach in announcing AlphaGo's capabilities, and invited Lee Sedol, the top Go player in the world for the last decade, to compete in a five-match tournament the week of March 9th – 15th. Instead of a private match at DeepMind's headquarters, this contest was live-streamed to the world through YouTube and came with a $1 million prize. Despite the defeat of Fan Hui and the backing of Google, Lee Sedol was still fairly confident in his skills, saying in a statement in late February, "I have heard that Google DeepMind's AI is surprisingly strong and getting stronger, but I am confident that I can win at least this time."





Three and a half hours into the first match on March 9th, though, Lee Sedol resigned, or forfeited, the match. He resigned the second and third matches as well. According to Lee Sedol during a press conference following the third game, he felt he had underestimated the program in game one, made mistakes in game two, and was under extreme pressure in game three.





However, in a win for humanity, Lee Sedol won the fourth game. Interestingly, the first 11 moves of the fourth game were exactly the same as in the second game, and perhaps Lee Sedol was able to capitalize on what he had learned from the previous three. According to the English-language commentator Michael Redmond, Move 78 (a move by Lee Sedol) elicited a miscalculation from AlphaGo, and the game was essentially over from that point. In both of these games, Lee Sedol played second (the white stones), and he stated in the press conference following the fourth game that AlphaGo is weaker when the machine goes first.








Cofounder of DeepMind Demis Hassabis

Whether or not AlphaGo is actually weaker when it plays first is difficult to know, since Lee Sedol may be the only person who can attest to this. During the press conference following the fourth game, DeepMind cofounder Demis Hassabis stated that Lee Sedol's win was valuable to the algorithm and that the researchers would take AlphaGo back to the UK to study what had happened, so that this weakness could be confirmed (and presumably fixed). One important point of play that may have influenced the outcome, though, is that AlphaGo chooses moves to maximize its chances of winning, irrespective of how a move influences the margin of victory. Whether or not this is a weakness is probably up for debate as well, but in this sense AlphaGo is not playing like a professional human player. Go has a long history of being respected for its elegance and simplicity, but AlphaGo is not concerned with the sophistication or complexity of the game – it just wants to win.
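
A tiny sketch with invented numbers makes the distinction concrete: a player that optimizes win probability alone will happily take a narrow, "boring" win over a spectacular one.

```python
# Hypothetical evaluations of two candidate moves; the figures are made up
# purely to illustrate win-probability maximization versus margin of victory.
candidate_moves = {
    "safe move":   {"win_probability": 0.92, "expected_margin": 1.5},
    "flashy move": {"win_probability": 0.80, "expected_margin": 20.0},
}

# Choose by win probability only, ignoring how big the win would be.
best = max(candidate_moves, key=lambda m: candidate_moves[m]["win_probability"])
print(best)  # -> "safe move"
```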





Lee Sedol requested and was granted the opportunity to play black (to make the first move) in the fifth and final match-up, even though the rules of the game stated that color would be randomly assigned. "I really do hope I can win with black," Lee Sedol said after winning game four, "because winning with black is much more valuable." The fifth match lasted a grueling five hours, but eventually Lee Sedol did resign. After almost a week of play, the championship concluded with a 4-1 score in favor of artificial intelligence.





When AlphaGo played Fan Hui in October 2015, the agent beat a professional 2-dan player, but Lee Sedol ranks higher than Fan Hui as a 9-dan professional player. (Those who have mastered the game of Go are ranked on a scale known as dan, which begins with 1-dan and continues to 9-dan.) To put this into perspective, Lee Sedol was a 2-dan professional player in 1998, and it wasn't until 2003 that he reached 9-dan status. Climbing from 2-dan to 9-dan took Lee Sedol five years, but AlphaGo was able to climb a comparable ladder in only five months. DeepMind was able to build an artificial intelligence agent with these capabilities by combining two important concepts: deep neural networks and reinforcement learning. Typical AI agents of the past deployed tree search to review possible outcomes, but this brute-force approach, in which the AI considers the effect of every possible move on the outcome of the game, is not feasible in Go. In Go, the first black stone played could lead to hundreds of potential moves by white, which in turn could lead to hundreds of potential moves by black. Humans have been able to master Go without mentally running through every possible play during each turn and without mentally finishing the game after every move by an opponent. Humans rely on imagination and intuition to master complex skills, and AlphaGo is designed to mimic these very complex cognitive functions.
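
A quick calculation with commonly cited approximations (not figures from the AlphaGo paper) shows why exhaustive search breaks down: with roughly 35 legal moves per position and about 80 plies per game, chess already yields a game tree of around 10^123 positions; with roughly 250 moves and about 150 plies, Go balloons to around 10^360.

```python
from math import log10

def tree_exponent(branching_factor, plies):
    """log10 of the number of leaves in a full game tree of the given size."""
    return plies * log10(branching_factor)

# Rough, commonly quoted averages for each game.
print(f"Chess game tree: about 10^{tree_exponent(35, 80):.0f} positions")
print(f"Go game tree:    about 10^{tree_exponent(250, 150):.0f} positions")
```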








Courtesy of Flickr user Little Book

Deep neural networks are loosely based on how neural connections in our brains work, and neural networks have been used for years to optimize Google searches and to improve voice recognition in smartphones. Analogous to synaptic plasticity, where synaptic strength increases or decreases over a lifetime, computer neural networks change and strengthen when presented with many examples. In this type of processing, neural networks are organized into layers, and each layer is responsible for constructing only a single piece of information. For example, in facial recognition software, the first layer of the network may only pick up on pixels and the second layer may only be able to reconstruct simple shapes, while a more sophisticated layer may be able to recognize complex features (e.g., eyes and mouths). The layers continue to become more complex until the software can recognize faces.
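
A minimal sketch of the layering idea, in Python with NumPy (a toy fully connected network of my own construction, not AlphaGo's actual convolutional architecture), shows how each layer simply re-represents the output of the layer before it:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Layer sizes: a flattened 19x19 board as input, two hidden layers that build
# progressively more abstract features, and a single output score.
sizes = [361, 128, 64, 1]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    """Push an input through each layer in turn."""
    for w in weights[:-1]:
        x = relu(x @ w)        # each layer transforms the previous layer's output
    return x @ weights[-1]     # the final layer produces a single score

print(forward(rng.normal(size=361)))
```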





AlphaGo has two neural networks: a policy network to select the next move, and a value network to predict the winner of the game. AlphaGo uses the Go board as input and processes it through 12 layers of neural networks to determine the best move. To train the neural networks, researchers used 30 million moves from games played on the KGS Go server, and this alone led to an agent that could predict the human move 57% of the time. The goal was not to play at the level of humans, though; the goal was to beat humans, and to do that the researchers used reinforcement learning, in which AlphaGo was split in two and then played thousands of games against itself. With this, AlphaGo was able to win at a rate of 99.8% against commercial Go programs.
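
The division of labor between the two networks, and the flavor of the reinforcement-learning update used during self-play, can be sketched as follows. This is a deliberately toy version: the "networks" are single linear layers, the position and outcome are random stand-ins, and the update is a generic REINFORCE-style policy-gradient step rather than DeepMind's actual training pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
BOARD_POINTS = 19 * 19

# Toy linear stand-ins for the deep policy and value networks.
policy_weights = rng.normal(0, 0.01, (BOARD_POINTS, BOARD_POINTS))
value_weights  = rng.normal(0, 0.01, BOARD_POINTS)

def policy(board):
    """Probability distribution over the 361 possible moves."""
    logits = board @ policy_weights
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def value(board):
    """Estimated probability that the current player goes on to win."""
    return 1.0 / (1.0 + np.exp(-(board @ value_weights)))

def reinforce_update(board, move, won, lr=0.1):
    """One policy-gradient step from a single (position, move, outcome) sample,
    as self-play would produce: make moves from won games more likely."""
    global policy_weights
    probs = policy(board)
    grad_logits = -probs
    grad_logits[move] += 1.0                 # gradient of log pi(move) w.r.t. logits
    reward = 1.0 if won else -1.0
    policy_weights += lr * reward * np.outer(board, grad_logits)

fake_position = rng.choice([-1.0, 0.0, 1.0], size=BOARD_POINTS)
reinforce_update(fake_position, move=180, won=True)
print(value(fake_position), policy(fake_position)[180])
```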





These neural networks mean that AlphaGo doesn't search through every possible position to determine the best move before it makes a play, and it doesn't simulate entire games to help make a choice either. Instead, AlphaGo considers only a few potential moves when confronted with a decision, and considers only the more immediate consequences of those moves. Even though chess has many fewer possible legal moves than Go, AlphaGo evaluated thousands of times fewer positions than Deep Blue did in 1997. AlphaGo is simply more human-like in that it makes these choices intelligently and precisely. According to AlphaGo developer David Silver in this video, "the search process itself is not based on brute force. It's based on something more akin to imagination."
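
The search strategy the paragraph describes can be sketched in a few lines: ask the policy for a short list of promising moves, then rank those candidates with the value estimate instead of simulating each game to the end. The policy and value functions below are random placeholders standing in for trained networks, so the snippet illustrates the control flow only.

```python
import numpy as np

rng = np.random.default_rng(2)
BOARD_POINTS = 19 * 19

def policy(board):
    """Placeholder move distribution (a trained network would go here)."""
    p = rng.random(BOARD_POINTS)
    return p / p.sum()

def value(board):
    """Placeholder win-probability estimate for a position."""
    return rng.random()

def apply_move(board, move):
    new_board = board.copy()
    new_board[move] = 1.0          # ignore captures and legality for the sketch
    return new_board

def choose_move(board, top_k=5):
    """Look only at the top_k moves the policy likes, and pick the one whose
    resulting position the value function rates highest."""
    candidates = np.argsort(policy(board))[-top_k:]
    scored = [(value(apply_move(board, m)), m) for m in candidates]
    return max(scored)[1]

print(choose_move(np.zeros(BOARD_POINTS)))
```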





This computational power is not reserved strictly for games; DeepMind's website declares that it would like to "solve intelligence" and "use it to make the world a better place." Games are just the beginning: deep neural networks may be able to model disease states, pandemics, or climate change and teach us to think differently about the world's toughest problems. (DeepMind Health was announced on February 24th of this year.) Many of the moves that AlphaGo made in the beginning of the matches baffled Go professionals because they seemed like mistakes, yet AlphaGo ultimately won. Were these really mistakes that AlphaGo was able to fix later, or were these moves simply beyond our current comprehension? How many potential Go moves have never before been considered or played out in a game?





If AlphaGo's choices of moves could surprise Go professionals and even the masterminds behind AlphaGo, should we fear that AlphaGo is an early version of a machine that could spontaneously evolve into a conscious AI? Today, we probably have very little to be concerned about. Although the technology behind AlphaGo could be applied to many other games, AlphaGo's learning was hardly effortless – it took millions of games of training. However, how will we know when we do need to worry? Games have provided us with a convenient benchmark for measuring the progress of AI, from backgammon in 1979 to the recent Go match, but if Go was a final frontier for AI, where do we go from here?







Measuring emerging consciousness in AI agents that simulate the human brain will be challenging, according to a paper by Kathinka Evers and Yadin Dudai of the Human Brain Project. We could use a Turing test, although the authors note that it seems highly plausible that an intelligent AI could pass the Turing test without having consciousness. We could also try to detect in silico signatures similar to the brain signatures that denote consciousness in us, but we are at a loss for what those signals might be and how well they actually represent human consciousness. If consciousness is more than just well-defined organization and requires biological entities, then computers will never be conscious in the same sense that we are and instead will exhibit only an artificial consciousness. Furthermore, leading proponents of the integrated information theory (IIT), Giulio Tononi and Christof Koch, have argued in this paper that a simulation of consciousness is not the same as consciousness, and that "IIT implies that digital computers, even if their behaviour were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing."





Regardless of how we debate machine consciousness, neural networks that mimic human learning are already being used by most of the major companies that dominate our society, including Facebook, Google, and Microsoft. We will probably continue to see deep reinforcement learning of the kind developed by DeepMind used to improve voice recognition, translation, YouTube, and image search. Deep reinforcement learning could also be used to power self-driving cars, train robots, and, as Hassabis envisions for the future, develop scientist AIs that work alongside humans. Without a well-defined metric for machine intelligence and consciousness, time will tell which of these milestones marks the next great achievement in AI, how we measure its significance, and whether the event warrants anxiety. The mysterious ethics board that Hassabis negotiated with Google is probably a reflection of the company's awareness of the uncertain course of future AI research.








As uncertain and even scary as the future may seem, though, it is important to remember that AlphaGo lost one of the matches, and that loss matters. Prior to the match, AlphaGo had played millions and millions of Go games, many more than Lee Sedol could ever play in a lifetime. AlphaGo never got tired, it never got intimidated by Lee Sedol's 18 international titles, and it never succumbed to self-doubt. AlphaGo's ignorance of the stakes of the games worked in its favor; Lee Sedol admitted he was under too much pressure during the third match.





For all of these advantages, though, AlphaGo couldn't adapt quickly or learn fast enough from Lee Sedol to change how it played. For AlphaGo to get better, it must play millions of games – not just a couple. Lee Sedol was able to play the first three matches, learn from AlphaGo, and exploit what he thought was a weakness. He believed AlphaGo played weaker with black, and he took advantage of this by playing a move that many consider brilliant and unexpected. AlphaGo challenged Lee Sedol and then brought out the best in him. And, when it comes to the future, the outcome of the fourth match raises the question: how can AI bring out the best in us?








Want to cite this post?





Strong, K.L. (2016). AlphaGo and Google DeepMind: (Un)Settling the Score between Human and Artificial Intelligence. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/03/alphago-and-google-deepmind-unsettling.html


Tuesday, March 22, 2016

When it comes to issues of identity and authenticity in DBS, let patients have a voice


By Ryan Purcell








Reconstruction of DBS electrode placement, image courtesy of Wikipedia

Deep brain stimulation (DBS) is an extraordinarily popular topic in neuroethics. In fact, you could fill a book with the articles written on the subject in AJOB Neuroscience alone (and the editors have considered doing this!). A special issue on the topic in AJOBN can be found here. Among the most widely discussed neuroethical issues in the DBS arena are concerns over its effects on patient identity and authenticity. But perhaps one perspective that has not been fully represented in the academic literature is that of the patients for whom this is actually their last hope of finding a way out of a profound, debilitating, and often years-long episode of depression. At February's Neuroethics and Neuroscience in the News journal club, Dr. Helen Mayberg spoke passionately about the approach that led her team to attempt DBS for major depressive disorder (MDD), the ensuing media response, and how that has affected her ongoing work to improve the technique, better understand the etiology of MDD, and allow patients to get back to their lives.





The DBS for depression story goes back more than a decade and began in Toronto. Dr. Mayberg's group consistently found cingulate area 25 to be differentially active in mood studies; it was tonically active in depressed patients and became transiently activated when healthy subjects were saddened while in the PET scanner (Mayberg, 1997). The idea was put forward that if the activity of this area could be reduced, it might lift some patients who had exhausted all other options out of the depths of the most debilitating degrees of depression. By this time, deep brain stimulation of the basal ganglia had become a mainstream treatment to calm tremors resulting from some motor disorders, particularly Parkinson's disease.








Dr. Helen Mayberg

The initial DBS for depression study (Mayberg et al., 2005) was a major success and quickly made waves in psychiatry and the news media. To quote Dr. Mayberg, "the term 'going viral' didn't exist in 2006 but it definitely went viral." In an article entitled "A Depression Switch?", David Dobbs, writing for the New York Times Magazine, profiled a patient, Deanna Cole-Benjamin, who was added to the initial study after several years of a devastating, profound depressive episode that had proven resistant to psychotherapy, drugs, and upwards of 100 electroconvulsive therapy sessions. Her experience with DBS, however, was extraordinary, and she recounted it this way: "It was literally like a switch being turned on that had been held down for years…All of a sudden they hit the spot, and I feel so calm and so peaceful. It was overwhelming to be able to process emotion on somebody's face. I'd been numb to that for so long." Initially, Dr. Mayberg was dismayed by the article's title, which suggested a quick, almost miraculous fix rather than a long process that requires ongoing stimulation delivered through the implanted device along with active psychotherapy and retraining on the part of the patient to recover fully. But she recognized that these were, in fact, the patient's words. Indeed, patients, Dr. Mayberg added, should have more of a voice in the neuroethics literature.






Neuroethicists have written at length about the potential effects of DBS on patient identity (Baylis, 2013), what it means for authenticity, and related ideas such as alienation (Kraemer, 2013). These are important issues, as Parkinson's patients undergoing DBS for motor symptoms, for example, have on occasion had serious negative post-operative side effects, including mania and suicidal ideation (Kraemer, 2013). Perhaps the questions most central to neuroethicists are 1) whether the device infringes on or alters patient autonomy and 2) whether the patient's identity or authentic self is fundamentally changed by the so-called brain pacemaker. However, after hearing Dr. Mayberg describe patients who have been completely debilitated by the disease, who can no longer care for themselves or others and think of little other than suicide, ethical concerns about whether a potential treatment might compromise the patient's authenticity can seem absurd. If your authentic self essentially can no longer function, is there any option but to alter it? Still, post-operatively some patients describe an anxiety or fear that the efficacy of the stimulation will erode over time and that at any moment the disease that, in Deanna's case, came on without warning could re-emerge. Dr. Mayberg, though, contends that successful DBS procedures return patients to who they really are, and that identity could be better defined as "who you are without depression." This can be difficult for the researchers to ascertain, however, because their first impression of the patient is always in a profoundly depressed state, and they can only see that pre-depression identity through the eyes of the patient's family and friends.









Location of area 25, image courtesy of Wikipedia

In a way, if the hypothesis that over-activity of area 25 underlies MDD holds up to further testing, then, in a simplistic sense, it fits nicely with the "back to your old self" notion. Conceptually speaking, this is not a method intended to overcome a negative with an overabundance of positives in other areas. Instead, the idea is to normalize the activity of a particularly powerful area whose activity has somehow gone a bit haywire. Researchers have in fact found that DBS of the nucleus accumbens, a key node of the so-called reward circuit, can elicit euphoric feelings (Synofzik, Schlaepfer, & Fins, 2012). Dr. Mayberg stressed that in her view, DBS of area 25 for MDD is different: it restores more normal function and enables patients to get back to their lives (and rather than activating an area, she likens DBS in area 25 to taking off the brake). For this reason, Dr. Mayberg considers area 25 stimulation more a removal of inhibition that does not really create or activate a new identity, but circumvents a barrier, enabling patients to be their authentic selves, whoever that may be. What is missing from this explanation, though, is how patients view who they are before and after stimulation; ultimately this is an empirical question, which is where Dr. Mayberg's recent study on intraoperative self-assessment from patients comes into play (Choi, Riva-Posse, Gross, & Mayberg, 2015).






The neuroethical discussions over how to understand the many applications of DBS and its consequences for patients will continue and, it now seems, so will the medical and scientific debates over the effectiveness of the procedure. The multi-site BROADEN clinical trial was halted and there has been some degree of blowback on other blogs criticizing the early enthusiasm surrounding DBS for MDD. The potential physical and nonphysical harms of such an intervention, the critics argue, are not worth the risks given the absence of a consistent benefit. But perhaps now is the best time, as researchers continue to push forward in the pursuit of understanding MDD’s underlying mechanisms and how DBS might help patients with MDD, to also reconsider the approaches to evaluating patients’ perspectives and views on the value and risks of the intervention.





References





Baylis, F. (2013). “I am who i am”: On the perceived threats to personal identity from deep brain stimulation. Neuroethics, 6(3), 513–526. http://doi.org/10.1007/s12152-011-9137-1





Choi, K. S., Riva-Posse, P., Gross, R. E., & Mayberg, H. S. (2015). Mapping the “Depression Switch” During Intraoperative Testing of Subcallosal Cingulate Deep Brain Stimulation. JAMA Neurology, 72(11), 1–9. http://doi.org/10.1001/jamaneurol.2015.2564





Kraemer, F. (2013). Authenticity or autonomy? When deep brain stimulation causes a dilemma. Journal of Medical Ethics, 39(12), 757–60. http://doi.org/10.1136/medethics-2011-100427





Mayberg, H. S. (1997). Limbic-cortical dysregulation: a proposed model of depression. The Journal of Neuropsychiatry and Clinical Neurosciences, 9(3), 471–81. http://doi.org/10.1176/jnp.9.3.471





Mayberg, H. S., Lozano, A. M., Voon, V., McNeely, H. E., Seminowicz, D., Hamani, C., … Kennedy, S. H. (2005). Deep Brain Stimulation for Treatment-Resistant Depression. Neuron, 45(5), 651–660. http://doi.org/10.1016/j.neuron.2005.02.014





Synofzik, M., Schlaepfer, T. E., & Fins, J. J. (2012). How Happy Is Too Happy? Euphoria, Neuroethics, and Deep Brain Stimulation of the Nucleus Accumbens. AJOB Neuroscience, 3(May 2012), 30–36. http://doi.org/10.1080/21507740.2011.635633





Want to cite this post?





Purcell, R. (2016). When it comes to issues of identity and authenticity in DBS, let patients have a voice. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/03/when-it-comes-to-issues-of-identity-and.html

Tuesday, March 15, 2016

Naming the devil: The mental health double bind




By Jennifer Laura Lee






Jenn Laura Lee recently received her undergraduate degree in neuroscience from McGill University in Montreal, Canada, and hopes to pursue a PhD in neurobiology this fall. Her current interests include the advancement of women in STEM and the ethics of animal experimentation.





The “Bell Let’s Talk” initiative swept through Canada on January 27, hoping to end the stigma associated with mental illness, one text and one share at a time. Michael Landsberg shares his thoughts in a short video on the Facebook page. “The stigma exists because fundamentally there’s a feeling in this country still that depression is more of a weakness than a sickness,” he explains. “People use the word depression all the time to describe a bad time in their life, a down time. But that’s very different than the illness itself.” Perhaps such a bold statement merits closer examination.





Philosophers, psychologists, and neuroscientists find themselves rallying behind two starkly contrasting paradigms of mental health, lobbying for conflicting changes in policy and attitude. On one end of the spectrum lies the medical model of psychiatry - the notion that the classification of mental illness can and ought to be truly objective, scientific, and devoid of value judgements. At the other extreme, a Foucault-esque theory posits that most psychiatric classifications are nothing more than a reflection of the values of those who do the classifying; classification is inherently normative and necessarily serves the interests of those in power. 






Most modern paradigms take a more moderate approach, arguing that classification is based on both objective facts about the body and elements of normativity, but that diagnoses are useful nonetheless and do ultimately describe “real” illnesses. Nevertheless, the push and pull of each extreme keeps our current societal approach to mental illness in an uncomfortable double bind. In an over-medicalized paradigm, where we prescribe anti-depressants for those going through financial or relationship crises, we risk prescribing inauthentic neurobiological fixes to the suffering caused by complex social problems. But in an under-medicalized paradigm, we risk inadequately addressing the suffering caused by treatable neurobiological anomalies, under the pretense of total social relativism (more on the issues surrounding naming mental illness here).





For instance, in favour of de-medicalization, the neurodiversity movement (see previous blog posts on the topic here, here, and here) quite reasonably suggests that conditions like autism ought not to be considered disorders, but rather alternative ways of thinking. Society can holistically benefit from including and adjusting to diverse modes of thought, rather than attempting to change autistic individuals to fit the mold (see also philosopher Ian Hacking's "looping effect," which might describe the way in which the very act of being diagnosed with a Diagnostic and Statistical Manual (DSM)-classified mental disorder can alter one's self- and public perception of the condition, creating an "otherness" where it ought not exist).






Interpreting physical illness vs. mental illness, image courtesy of Buzzfeed


Similarly, the categorization and naming of mental disorders can be damaging to ethnic minorities, women, and the socioeconomically oppressed. Naming compels individuals to misattribute the suffering caused by societal structures to problems intrinsic to their own bodies and brains and prevents marginalized individuals from seeing the reality of their greater social context, which legitimizes and perpetuates harmful social structures. For instance, so-called “Self-Defeating Personality Disorder” (SDPD) was introduced in the DSM III-R in 1987, describing criteria which closely mirrored traditional feminine submissiveness in the context of domestic abuse. An individual with SDPD “Chooses people and situations that lead to disappointment, failure, or mistreatment even when better options are clearly available … Engages in excessive self-sacrifice that is unsolicited by the intended recipients of the sacrifice." It was subsequently excluded from DSM-IV in recognition that symptoms of abuse are primarily caused by male abusers, and that misguided medical diagnoses can have profoundly damaging effects on the already socially marginalized.




The naming of mental disorders is much more socially relative than that of physical disorders. And yet in some cases, the comparison between mental and physical disorders can have incredibly beneficial impacts on mental health discourse. Consider the message of the simple yet effective #BellLetsTalk campaign, or BuzzFeed’s recent pieces on mental illness (exemplified by this video and this listicle). In promoting a liberal stance on mental health in popular discourse, popular media frequently draw on the comparison between mental and physical disorders to reveal contradictory attitudes and social policies. This comparison inherently medicalizes mental health, but to the effect of taking mental illness more seriously, with arguably positive outcomes for de-stigmatization and patient care. In positing that the brain, like the kidney or any other organ, can malfunction and “get sick” for periods during one’s life (as is said to occur during some episodes of depression or mania), we classify mental disorders into discrete categories, in the same, dispassionate way one might be diagnosed with a stomach ulcer.




In diminishing the stigma surrounding mental disorders to match that of mundane physical illnesses, the medical classification of mental illness might provide individuals with the emotional detachment needed to seek appropriate help, whether in the form of reaching out to friends and employers, or seeking therapy or medication.




Such dispassionate comparison to physical diagnoses may moreover be crucial in legitimizing policy discourse, providing us the linguistic tools to address inadequacies such as sick leave and insurance coverage. As economist Richard Layard and CBT specialist David M. Clark project in “Thrive,” depression, when viewed as an illness like any other, is on average 50% more disabling than physical conditions like angina, asthma, arthritis, and diabetes, yet is much more likely to go untreated in Britain’s healthcare system. There may therefore be a lot of political progress to be made through the injection of objectivity into the public discourse on mental health.




Moreover, perhaps we overlook the psychological benefits of medical categorization in the phenomenology of mental illness itself. It may be empowering to be able to conceptualize depression or OCD or addiction as a foreign thing to be beaten, rather than festering in the hopeless determinism of one's (often unalterable) social conditions or previous life decisions. In naming an illness, an individual can recognize her current state as an aberration from her authentic self, positioning herself in opposition to her affliction during the healing process, battling against depression or addiction in much the same way that one might battle against cancer. On a social level, this paradigm might open the door to seeking support, in the knowledge that one's condition is not one's "fault," and no more shameful or unusual than the common cold. Medicalization in social discourse can therefore serve a useful purpose and is not always necessarily a thing to be feared.







Biomarkers, including those found through blood tests, have been found to outperform traditional diagnoses of mental illness, image courtesy of Wikipedia


Nevertheless, as Sana Sheikh points out in a brilliant piece for Jacobin, we must recognize the disproportionate economic incentives which bias our healthcare system toward over-medicalization. Pharmaceutical innovation in the mental health domain has been stagnant, with very few new psychiatric drugs being developed over the last decade (predominantly because the neural mechanisms underlying most mental illnesses are still largely uncharted).




Hoping to bring objective neurological mechanisms to the forefront of mental health research, with possible pharmaceutical applications, the National Institute of Mental Health (NIMH)'s new Research Domain Criteria (RDoC) initiative seeks to redraft our framework for mental health research into its most systematized, objective formulation to date. And because the NIMH is the largest provider of funding for mental health research, its influence in dictating our prevailing views on mental illness must not be underestimated.




Rejecting symptom-based DSM groupings as still too subjective, the new system relies nearly exclusively on measurable biomarkers for the categorization of mental illness. Blood tests or genetic screens for depression could soon eclipse subjective accounts. Proponents insist that biotypes (biomarker-based categories) outperform traditional diagnoses of illnesses like schizophrenia or bipolar disorder, in that there is significant biological overlap between traditional DSM groupings.




Of course, the system is already under fire for its seeming total lack of consideration for psychosocial or environmental factors in the pathology of mental disease. Moreover, as Sheikh reminds us, there must be an irreducibly subjective element to mental illness - if someone self-reports feeling depressed but the biomarkers in their blood suggest otherwise, it would be bizarre to conclude that they are wrong about their own mental state.




The medicalization of mental illness is thus not simplistically good or bad, and the degree to which medicalization is appropriate or beneficial will vary from case to case. Faced with this uncertainty, we must be wary of blanket policies that lean too far in either direction. One-dimensional policies like NIMH’s RDoC may well produce pharmaceutical innovation, but certainly have the potential to lead to harmful, reductionist accounts of mental illness. Conversely, it might be beneficial in policy discourse for conditions like depression to be treated as a veritable mental illness. In light of the rapidly changing policy and funding landscapes of neuroscience and psychology, we must insist on studying the pathology of mental disorders as a constellation of environmental, psychosocial and biological factors, and seek authentic, balanced, and multi-faceted solutions to the unique suffering presented by each.




Want to cite this post?



Lee, J.L. (2016). Naming the devil: The mental health double bind. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/03/naming-devil-mental-health-double-bind_14.html

Tuesday, March 8, 2016

The ethical duty to know: Facilitated communication for autism as a tragic case example

By Scott O. Lilienfeld, Ph.D.






Scott O. Lilienfeld is a Samuel Candler Dobbs Professor of Psychology at Emory University. He received his A.B. from Cornell University in 1982 and his Ph.D. in Psychology (Clinical) from the University of Minnesota in 1990. His interests include the etiology and assessment of personality disorders, conceptual issues in psychiatric classification, scientific thinking and evidence-based practice in psychology, and most recently, the implications of neuroscience for the broader field of psychology. Along with Sally Satel, he is co-author of Brainwashed: The Seductive Appeal of Mindless Neuroscience (2013, Basic Books).





I’m a clinical psychologist by training, although I no longer conduct psychotherapy. In the course of my graduate work at the University of Minnesota during the 1980s, I – like virtually all therapists in training – learned all about the ethical mandates of clinical practice. By now, all mental health professionals can practically recite them by heart: don’t sleep with your clients, avoid dual relationships, don’t show up drunk to work, don’t violate client confidentiality, always report child abuse and elder abuse to appropriate authorities, and so on. To be sure, all of these ethical requirements are exceedingly important.






Yet, with few exceptions, clinical psychology and the allied fields of mental health practice, such as psychiatry, social work, mental health counseling, and psychiatric nursing, have largely neglected another crucial set of ethical requirements, namely, what University of Nevada at Reno clinical psychologists William O’Donohue and Deborah Henderson term epistemic duties – responsibilities to seek out and possess accurate knowledge about the world. As these authors pointed out in a 1999 article, all mental health professionals should be “knowledge experts.” That is, they should be specialists who keep up with the best available research literature on the efficacy of psychological interventions and the validity of assessment procedures, and who continually draw on this information to provide the best possible client care. As O’Donohue and Henderson observe, mental health professionals are also ethically obligated to be relentlessly self-critical. Ideally, they contend, “one acknowledges that one’s beliefs may be in error and one seeks to rigorously criticize one’s beliefs to see if they are in error or are in need of revision.”





For far too long, the fields of mental health have ignored the somber duty of mental health professionals to act in accordance with rigorous scientific evidence. A decision to do otherwise is commonly regarded as a preference, not a serious ethical breach. The tragic case of facilitated communication for severe developmental disabilities reminds us of why epistemic duties are every bit as crucial as the other ethical obligations with which psychologists and psychiatrists are familiar.





The story begins in 1977 at St. Nicholas Hospital, an institution for individuals with intellectual and physical disabilities in Melbourne, Australia. There, a staff member named Rosemary Crossley developed a technique—originally called facilitated communication training—for purportedly extracting communication from individuals with serious physical disabilities, such as cerebral palsy, that often prevented them from speaking. Soon, the seemingly remarkable technique was extended to other conditions, most notably autism, now termed autism spectrum disorder. The premise of facilitated communication (FC) was straightforward: Contrary to what mental health professionals had long assumed, nonverbal people with autism are actually of reasonably normal intelligence. Nevertheless, they cannot express themselves verbally due to a neurological condition known as "developmental apraxia," a purported disconnection between the brain's language and motor centers. According to this theory, autism is fundamentally not a mental disorder, as psychologists had presumed, but a movement disorder. As a consequence, individuals with this condition are cognitively intact people trapped in a malfunctioning body. With the assistance of a facilitator who stabilizes the person's hand and arm movements, the individual with autism can suddenly type out words and sentences using a keyboard, letter pad, or similar medium.












Example of a keyboard used in facilitated communication, courtesy of Wikipedia



In 1989, Douglas Biklen, a sociologist and Professor of Special Education at Syracuse University, observed Crossley’s methods and announced the startling news of facilitated communication’s effectiveness for autism in an influential 1990 article. According to Biklen, with the help of facilitated communication, many individuals with autism who were previously presumed to be mute and severely cognitively impaired could now communicate eloquently. Many composed poetry that told of their profound joy at being liberated from a prison of silence. Parents’ and other loved ones’ dreams of communicating with their nonverbal children were at last realized.





News of the stunning breakthrough reached schools throughout the United States, and it was not long before thousands of facilitators began administering the technique in classrooms. Scores of children with severe autism were mainstreamed into schools, excelling in classes with the aid of facilitators. Workshops in facilitated communication were offered to enthusiastic audiences. Facilitated communication was widely heralded as a “miracle” in the treatment of autism and related conditions, and for good reason.




To many skeptics, though, facilitated communication seemed too good to be true. How could children who could not read – and whose IQs were often estimated to be below 30 or 40 - suddenly use advanced language that conveyed remarkably mature thoughts and emotions? Where would they have learned this language? And given that many of them could draw, paint, or throw, why did they need a facilitator to stabilize their hand movements? Biklen and his colleagues were convinced that facilitated communication worked, yet they had conducted no formal research to support their expansive claims.




When controlled studies finally began to appear in the pages of academic journals in the early to mid-1990s, the scientific verdict was unanimous – and devastating. These investigations demonstrated persuasively that the apparent effectiveness of the technique was a diabolical illusion. When facilitators and children with autism were shown different stimuli, such as a dog versus a cat, the word typed out always corresponded to what the facilitator saw, not to what the child saw (see this classic video for a powerful expose of facilitated communication). The “effectiveness” of facilitated communication is therefore attributable to what psychologists term the “ideomotor effect”: a phenomenon whereby people’s thoughts influence their actions without their knowledge. Without being aware of it, facilitators themselves were guiding individuals’ hands and fingers to the intended letters.




Moreover, a dark side of facilitated communication soon emerged. Although precise numbers are hard to come by, dozens of parents were charged with sexual abuse solely on the basis of facilitated allegations from their children. Many of these parents were removed from their homes, and some were jailed or imprisoned, their reputations permanently tarnished. Yet we now know from controlled research that these accusations emanated from the minds of the facilitators, not the children.




The facilitated communication debacle took an even more sickening turn when Anna Stubblefield, a Professor of Philosophy at Rutgers University in Newark and a major proponent of the technique, met D.J., a 31-year-old man who, according to his doctors, has the mental capacity of an 18-month-old. D.J. has never uttered a word and requires specialized assistance to bathe, dress himself, and eat. Stubblefield began using facilitated communication to communicate with D.J. After a time, they "expressed their love" for each other, and eventually had sexual intercourse in her campus office. Of course, the sex was not consensual, as D.J.'s communications were not his own. In January of 2016, Stubblefield was convicted of aggravated sexual assault and sentenced to 12 years in prison; the case is being appealed.









Ouija board, image courtesy of Wikipedia


Tragically, Stubblefield and other proponents of facilitated communication had forsaken their epistemic duties in at least three ways. First, the ideomotor effect had been familiar to psychologists for well over a century. Such supposedly “occult” phenomena as Ouija boards, automatic writing, table-turning during séances, and water dowsing had long been recognized as the products of unconscious cueing and prompting of responses.






Indeed, while she was a graduate student at Harvard University under the mentorship of the great psychologist William James, Gertrude Stein – later to become a famed author – penned two articles on the ideomotor effect. Had facilitated communication advocates done their homework and taken heed of this well-replicated but insidious effect, they would presumably have been aware of how readily we can all be duped by it.




Second, from the outset, Biklen and other facilitated communication advocates never troubled themselves to conduct controlled studies to ascertain whether the method worked. Furthermore, when the negative data finally poured in from scores of laboratories, the advocates almost always explained away these findings using a plethora of ad hoc excuses. For example, some insisted that controlled studies of facilitated communication were essentially worthless because they placed participants in a “confrontational” situation, making them feel pressured to perform. Yet many of these individuals had successfully given facilitated “performances” at academic conferences, typing sentences in the presence of hundreds of amazed spectators.




Third and finally, Biklen and other facilitated communication advocates had failed in their obligation to be self-critical. Rather than ask themselves whether their claims might be wrong, they reflexively criticized the critics, dismissing their methodology on flimsy and unpersuasive grounds.




Ultimately, the proponents of facilitated communication very much wanted to help individuals with autism. But the facilitated communication tragedy teaches us that good intentions are not sufficient. Good intentions paired with grossly inaccurate knowledge and an absence of a self-critical mindset can be disastrous. This tragedy also teaches us that by not attending to their epistemic duties, professionals can do grave harm without intending to do so.





Want to cite this post?



Lilienfeld, S.O. (2016). The Ethical Duty to Know: Facilitated Communication for Autism as a Tragic Case Example. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/03/the-ethical-duty-to-know-facilitated.html

Tuesday, March 1, 2016

Sitting Here in My Safe European Home: How Neuroscientific Research Can Help Shape EU Policy During the Syrian Refugee Crisis


By Joseph Wszalek, J.D. and Sara Heyn





Joseph Wszalek, J.D., is a fourth-year PhD student in the Neuroscience Training Program/Neuroscience and Public Policy Program at the University of Wisconsin. His research work focuses on the interaction between social cognition, language, and traumatic brain injury, with an emphasis on legal contexts. He holds a law degree cum laude and Order of the Coif from the University of Wisconsin Law School, where he was a US Department of Education Foreign Language and Area Studies Fellow through the Center for European Studies and a member of the Wisconsin International Law Journal’s senior editorial board. 






Sara Heyn is currently a graduate student pursuing a J.D. along with a PhD in Neuroscience at the University of Wisconsin-Madison. Her research interests include psychopathy, decision-making processes, and the use of neuroscientific evidence in the courtroom. 




Ethical guidelines are a fundamental aspect of the legal profession. The modern attorney serves three simultaneous ethical roles: he is an advocate for his clients, he is an officer of the legal system, and he is a public citizen with special responsibility for the rule of law (ABA Model Rules, 2014). These ethical obligations do not merely prohibit unacceptable conduct: they impose positive duties on licensed attorneys to actively promote and improve the administration of justice in all three capacities. In stark contrast to the legal profession, however, the ethical obligations of the scientific profession are considerably less well-defined. So-called research ethics are concerned more with establishing responsible research practices and less with encouraging active social duties. Put another way, while the modern scientist has an undeniable ethical obligation in his role as a member of the scientific community, it’s unclear whether or not this obligation extends to his role as a general citizen, and it’s unclear whether or not he, like his lawyer counterpart, has a “special responsibility” to actively improve society.




Despite this ambiguity, organizations like the International Neuroethics Society make it clear that the scientific community should play a more active role in translating scientific expertise to societal problems. As scientific methodologies and findings become more and more sophisticated, the ability of science, including neuroscience, to define and characterize the human condition gives data and findings more and more pragmatic utility. One such area of research is child development and the impact of environmental stressors. Because neuropsychological research has effectively characterized the risks associated with adverse childhood experiences (ACEs), we argue that these findings can (and should) support evidence-based initiatives targeting one of modern society’s most catastrophic problems: the Syrian refugee crisis.



Syrian Refugees in the European Union 


The Syrian civil war, one of the most catastrophic human rights disasters since World War II, has created some 4.2 million refugees who seek asylum all around the globe (United Nations High Commission, 2015). Their primary destination in the West is the European Union, which as of November 2015 had received asylum applications from nearly three quarters of a million displaced individuals (European Commission, 2015). Despite the refugees' stark plight, however, religious and geopolitical tensions have complicated the international response and have trapped hundreds of thousands of men, women, and children in a social and legal limbo. With one in ten of the Syrian refugee population being a child under the age of five (United Nations High Commission, 2015), this calamity implicates the lives of many children.




Fortunately, EU law has long recognized the importance of establishing additional legal protections for children. Even though the European Union does not have general competence (i.e., formal legal authority) in the area of fundamental child rights, both the Treaty of the European Union and the Charter of Fundamental Rights of the European Union explicitly address the need to recognize and respect the child's best interest in actions related to children. Additionally, a 2011 Communication from the EU Commission detailed the EU Agenda for the rights of the child, proposing: "The purpose is to reaffirm the strong commitment of all EU institutions and of all Member States to promoting, protecting and fulfilling the rights of the child in all relevant EU policies and to turn it into concrete results" (EU Agenda for the Rights of the Child, 2011). This Agenda laid out three priority areas: child-friendly justice (including access to justice and registration of documents relating to civil status), targeted protection of vulnerable children (including asylum seekers), and the accommodation of children in the European Union's external actions (including the protection of children in areas of armed conflict). However, the Commission noted that the "significant lack of reliable, comparable and official data" was a "serious obstacle for the development and implementation of genuine evidence-based policies," and it affirmed its commitment to cooperate with relevant organisations to "produce basic data and information to guide decision making" [emphasis in original]. Clearly, then, this legal framework has the potential to address and accommodate basic neuroscientific research relating to the effect of ACEs on childhood development in the refugee context.



The Basic Adverse Childhood Experience Data 





Children of Syrian refugees are being subjected to numerous ACEs, image courtesy of Wikimedia Commons

Neuropsychological studies (i.e., studies that investigate how the structure and function of the brain relate to emotion, cognition, and behavior) have repeatedly shown that environmental stress risks a broad range of undesirable health outcomes. For example, From Neurons to Neighborhoods pioneered studies into the adverse and dramatic effects that childhood stress can have on current and future health and development (Shonkoff and Phillips, 2000). This project defined "stress" as the set of changes in the body and the brain that are put in motion when there are overwhelming threats to physical or psychological well-being. Perhaps unsurprisingly, severe or chronic stress is associated with a host of cognitive and neurological deficits, including reduced cerebral volume and hemispheric integration, impaired executive function, and dysregulated reward and emotion responses (Pechtel & Pizzagalli, 2011). The amygdala and the hippocampus are prime targets, and recent findings suggest that the amygdala modulates stress-induced memory and learning deficits by reducing the expression of memory-related genes in the hippocampus (Rei et al., 2015). Animal models have confirmed that the effects of stress on the amygdala and hippocampus disrupt learning, memory, and cognitive regulation (Malter Cohen et al., 2013).



ACEs, then, are an umbrella term for detrimental childhood experiences that are likely to produce severe or chronic stress. Experiences such as maltreatment, abuse, neglect, and trauma are associated with a host of behavioral, physical, and mental outcomes (Pechtel & Pizzagalli, 2011). In addition to the deficits explained earlier, sequelae of ACEs include, but are not limited to, reports of poorer emotional well-being, self-harm and suicidal ideation, delinquent behavior, obesity, diabetes, poorer quality of adult relationships, substance abuse, and cardiovascular disease (Kalmakis & Chandler, 2015). It is clear that ACEs, and the stress they cause, now represent a much more comprehensive threat to an individual's overall health than previously thought.




Finally, scientific findings also suggest that refugees face almost-certain risk of ACEs. Pre-refugee events (e.g., armed conflict, infrastructure failings, environmental disasters), migration events (e.g., separation from family, dangerous travel conditions, exploitation by traffickers), and post-migration events (e.g., discrimination, lack of personal and societal support networks, uncertain asylum status) are all difficult and traumatic situations, and the rates of negative mental and emotional outcomes for refugee children are staggering (Bronstein & Montgomery, 2011; Jensen, Skårdalsmo, & Fjermestad, 2014).



In summation, ACE research paints a bleak, but ultimately informative, picture of the challenges and risks that Syrian refugee children face as they flee to Europe. With this basic data in mind, we end our essay with a brief example of how these data can be used to drive evidence-based policy to accommodate and ameliorate the effects of ACEs.



Recognizing Potential Deficits in Language and Advocacy 




ACEs may cause refugee children to struggle during legal proceedings, image courtesy of Wikipedia

The EU legal system, like all legal systems, represents linguistic demands at their most challenging. Navigating the asylum process may require, inter alia: communicating with lawyers, administrative judges, and law enforcement officials, often through an interpreter; reading and understanding information relating to visas, immigration law, and other legal texts, again often through an interpreter; and arranging for living or work arrangements, whether unofficial or official. Perhaps recognizing the overwhelming difficulty presented by these language demands, the European Parliament passed a recent directive establishing “Guarantees for unaccompanied minors” (EU Directive, 2013). This directive requires EU states to ensure that representatives represent and assist these children by explaining their rights, helping prepare them for personal interviews, and acting in the best interest of the child.



We now know, however, that child refugees who have suffered ACEs might struggle with these seemingly simple demands because of the many detrimental sequelae that ACEs can cause. A child might have dysregulated connections between his amygdala and his hippocampus, which could cause him to struggle to learn the procedural demands and to tell consistent narratives. A child might have impaired white-matter pathways, which might impair her ability to integrate cross-modal information while communicating and to understand figurative and abstract language (Kovic et al., 2010; Kasparian, 2013). A child might have executive function deficits, which could disrupt his ability to regulate his speech and communication and to follow confusing lines of questioning (Henry et al., 2015). Finally, a child might have lower cortical volume overall, which could manifest as lower levels of overall intelligence and delayed language acquisition or use (Pangelinan et al., 2011). All of these outcomes are known effects of ACEs, and all of them would cripple a child's ability to represent himself during these legal proceedings.



Fortunately, the same basic data that revealed these nuances suggest solutions for evidence-based policy, and neuroscientists are in a position to address this "special responsibility." For example, encourage representative officials to adopt best practices for interviewing child clients: this might include a formal language competency assessment and the use of open question–based techniques (Snow et al., 2012). Advocate for documents and materials that are written not just at a standardized reading level but at an even lower level, so that refugee children with delayed language development can understand them. Suggest that formal administrative policies recognize that inconsistent narratives or communication impairments do not always represent intentional deception. Ensure that the "best interest of the child" standard, which is always assessed on a case-by-case basis, considers ACE-related deficits as part of the child's "particular vulnerability and protection needs" (Parsons, 2010). These relatively minor modifications, all based on the neuroscience data on stress and ACEs, could net major benefits for the child, for the EU immigration system, and for society as a whole.



Conclusion 

Licensed members of the legal profession have ethical and professional obligations to use their training and knowledge to promote equality and justice. Even though scientists lack such a formal responsibility, we firmly believe that active engagement and consideration of neuroscientific data in light of social contexts is a key component of a scientist’s ethical duties. The example of the Syrian refugee crisis and ACEs is just one minor component of the myriad challenges that society faces, but it is nevertheless an effective and profound example of how scientists, like legal professionals, are in a unique position to use basic data to accept and act on “special responsibilities.”



Works Cited




1. United Nations High Commission on Refugees, Syria Regional Refugee Response Inter-agency Information Sharing Portal, available at http://data.unhcr.org/syrianrefugees/regional.php# (last accessed November 19th, 2015).






2. Israel Bronstein & Paul Montgomery. Psychological Distress in Refugee Children: A Systematic Review. Clin Child Fam Psychol Rev (2011) 14:44-56.





3. Charter of Fundamental Rights of the EU, Art. 24, 2000/C 364/01





4. EU Directive on common procedures for granting and withdrawing international protections, 2013/32/EU





5. National Research Council (US) and Institute of Medicine (US) Committee on Integrating the Science of Early Childhood Development; Shonkoff JP, Phillips DA, editors. From Neurons to Neighborhoods: The Science of Early Childhood Development. Washington (DC): National Academies Press (US); 2000.





6. American Bar Association. Model Rules of Professional Conduct. 2014 Edition.





7. National Research Council and Institute of Medicine (2000) From Neurons to Neighborhoods: The Science of Early Childhood Development. Committee on Integrating the Science of Early Childhood Development. Jack P. Shonkoff and Deborah A. Phillips, eds. Board on Children, Youth, and Families, Commission on Behavioral and Social Sciences and Education. Washington, D.C.: National Academy Press





8. European Commission. ECHO Factsheet – Syria Crisis. November 2015.





9. Pia Pechtel & Diego A. Pizzagalli. (2011). Effects of early life stress on cognitive and affective function: an integrated review of human literature. Psychopharmacology 214:55-70.





10. Rei D, Mason X, Seo J, Gräff J, Rudenko A, Wang J, Rueda R, Siegert S, Cho S, Canter RG, Mungenast AE, Deisseroth K, Tsai LH. (2015). Basolateral amygdala bidirectionally modulates stress-induced hippocampal learning and memory deficits through a p25/Cdk5-dependent pathway. PNAS 112(23):7291-6.





11. Matthew Malter Cohen, Deqiang Jing, Rui R. Yang, Nim Tottenham, Francis S. Lee, and B. J. Casey (2013). Early-life stress has persistent effects on amygdala function and development in mice and humans. PNAS 110(45):18274-8.





12. Karen A. Kalmakis & Genevieve E. Chandler (2015). Health consequences of adverse childhood experiences: A systematic review. Journal of the American Association of Nurse Practitioners 27:457-465.





13. Keselman, Olga; Cederborg, Ann-Christin; Linell, Per (2010). "That is not necessary for you to know!": Negotiation of participation status of unaccompanied children in interpreter-mediated asylum hearings. Interpreting 12:1, 83-104.





14. Pamela C. Snow, Martine B. Powell, and Divie D. Sanger. (2012). Oral Language Competence, Young Speakers, and the Law. Language, Speech, and Hearing Services in Schools. Vol. 43, 496-506.





15. Annika Parsons (2010). The best interests of the child in asylum and refugee children in Finland. National. Rapporteur on Trafficking in Human Beings. Publication 6.





16. Vanja Kovic, Kim Plunkett, Gert Westermann (2010). The shape of words in the brain. Cognition 114:19-28.





17. Kristina Kasparian (2013). Hemispheric differences in figurative language processing: Contributions of neuroimaging methods and challenges in reconciling current empirical findings. Journal of Neurolinguistics 26:1-21.





18. Lucy A. Henry, David J. Messer, Gilly Nash (2015). Executive functioning and verbal fluency in children with language difficulties. Learning and Instruction 39:137-147.





19. Melissa M. Pangelinan, Guangyu Zhang, John W. VanMeter, Jane E. Clark, Bradley D. Hartfeld, Amy J. Haufler (2011). Beyond age and gender: Relationships between cortical and subcortical and cognitive-motor abilities in school-age children. Neuroimage 54:3093-3100.







Want to cite this post?



Wszalek, J., Heyn, S. (2016). Sitting Here in My Safe European Home: How Neuroscientific Research Can Help Shape EU Policy During the Syrian Refugee Crisis. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/02/sitting-here-in-my-safe-european-home.html