
Tuesday, December 19, 2017

The Neuroethics Blog Series on Black Mirror: White Christmas



By Yunmiao Wang





Miao is a second-year graduate student in the Neuroscience Program at Emory University. She has watched Black Mirror since it first came out and has always been interested in topics in neuroethics.





Humans in the 21st century have an intimate relationship with technology. Much of our lives are spent being informed and entertained by screens. Technological advancements in science and medicine have helped and healed in ways we previously couldn’t dream of. But what unanticipated consequences may be lurking behind our rapid expansion into new technological territory? This question is continually being explored in the British sci-fi TV series Black Mirror, which provides a glimpse into the not-so-distant future and warns us to be mindful of how we treat our technology and how it can affect us in return. This piece is the final installment of a series of posts that discuss ethical issues surrounding neuro-technologies featured in the show, and will compare how similar technologies are impacting us in the real world. 







SPOILER ALERT: The following contains plot spoilers for the Netflix television series, Black Mirror




Plot Summary





“White Christmas” begins with a man, Joe Potter, waking up at a small, isolated outpost in the snowy wilderness. “I Wish It Could Be Christmas Everyday” plays in the background. Joe walks into the kitchen and finds Matt Trent cooking for Christmas. Matt, who seems bored with the mundane life at the outpost, asks Joe how he ended up there—a conversation they have never had in their five years together at the outpost. Joe becomes defensive and is reluctant to share his past, and he asks Matt the same question. To encourage Joe to open up, Matt shares a few stories about himself.




Matt first tells a story in which he works as a dating coach who trains socially awkward men like Harry to seduce women. A remote technology called EYE-LINK enables Matt, along with eight other students, to watch through Harry’s eyes and coach him as he approaches women. In this fashion, Harry meets a woman named Jennifer at a corporate Christmas party. Through a series of tragic misunderstandings, Jennifer kills both Harry and herself, believing that both of them are troubled by voices in their heads. Matt and the rest of the students watching through EYE-LINK panic as Harry dies, and they try to destroy any evidence that they were ever involved with him.







Image courtesy of Flickr user Lindsay Silveira.

To win over Joe’s trust, Matt goes on to share another story about someone he once worked with: Greta, who lives in a spacious, futuristic house and is very particular about every detail of her life. A week earlier, Greta had an implant placed in her head that copies her thoughts and memories. The copy is later surgically retrieved and stored in an egg-shaped device called a “cookie.” Matt’s job is to train the cookie, which is essentially a copy of Greta’s mind, to accept “her” situation and serve the real Greta day and night. Initially, the cookie is deeply confused, not knowing “she” is not the real Greta. Matt gives the cookie a simulated body in Greta’s likeness and places “her” inside a vast, empty space containing only a control desk. To convince the cookie to run the real Greta’s household, Matt alters the cookie’s perception of time and makes “her” experience total isolation and boredom until “she” finally gives in. In the present day, Joe shows his disdain for the cookie technology and criticizes it as barbaric.




Joe finally shares what brought him to the outpost, beginning with the admission that his girlfriend’s father never liked him. Joe and Beth were in a serious relationship until his drinking problem slowly pushed Beth away. On a double date with their friends Tim and Gita, Beth seems upset, which prompts Joe to drink more. After dinner, a drunken Joe discovers that Beth is pregnant and congratulates her. Instead of being happy, Beth says she does not want to keep the baby, which angers Joe. After a heated argument, Beth blocks Joe through a technology called Z-EYE and leaves him. Once blocked, Joe can see only a blurry grey silhouette of Beth and cannot hear her. He spends months looking for her and writing apology letters without receiving any response. Joe learns that Beth has kept the baby, but the block extends to the child, so he cannot see her either. One day he sees Beth’s image in the news and learns that she has died. Because the block is lifted upon Beth’s death, the grieving Joe resolves to meet his child for the first time. He waits outside Beth’s father’s cabin at Christmastime with a gift for the child. To his surprise, the child he has longed to see is of Asian descent, unlike either him or Beth. Joe realizes that Beth had been having an affair with their friend Tim. Shocked, he follows the child inside and confronts Beth’s father. In a burst of anger, Joe kills Beth’s father with the snow globe he had brought as a gift and flees in panic, leaving the little girl alone in the snow.




In the present, Matt asks Joe whether he knows what happened to the kid. Joe finally breaks down and confesses that he is responsible for the deaths of both Beth’s father and the child. Matt disappears soon after the confession, and Joe realizes the outpost is the same cabin where Beth’s father and daughter died. It turns out that everything so far has taken place inside a cookie of Joe, set up to extract his confession. Matt, implicated in Harry’s death, helped the officers interrogate Joe’s cookie in exchange for his own freedom. Although Matt is released from the police station, he is blocked from everyone through Z-EYE and will never be able to interact with anyone in reality. Back at the police station, an officer leaving work for Christmas sets the time perception of Joe’s cookie to 1,000 years per minute, leaving the copy of Joe wandering the cabin as “I Wish It Could Be Christmas Everyday” plays endlessly in the background.





Current Technology







Google Glass Enterprise Edition; image courtesy of Wikimedia Commons.

“White Christmas” presents three fictional technologies: EYE-LINK, Z-EYE, and the cookie. The episode magnifies privacy issues that already exist in our world and, moreover, explores the concept of selfhood and the boundaries of our relationship with advanced AI.




The EYE-LINK that allows Harry to livestream his view to multiple people and the Z-EYE that blocks Matt from the rest of the world are closer to reality than fiction. Google Glass, despite the failure of its first version, made a second attempt and returned this year as the Glass Enterprise Edition [1]. Given the privacy controversy and the criticism of its predecessor’s wearability, the newer version has switched gears to become an augmented-reality tool for enterprise. For example, according to a report by Steven Levy, Glass has been adopted by an agricultural equipment manufacturer to give assembly-line workers detailed instructions, dramatically increasing both yield and quality [1]. This pivot toward industrial partners does not necessarily mean the end of smart glasses for general consumers. If anything, it may be the beginning of their next stage of evolution.




While Google Glass is a wearable rather than an implant, visual prosthetics implanted into the visual system are no longer a dream. There has been success in restoring near-normal vision in blind mice [2], and implant-based vision rehabilitation is being trialed in humans [3]. It is only a matter of time before we see the birth of technology similar to EYE-LINK. After all, many people are already used to sharing their lives on social media, in real time, through their phones. If built-in sensory devices that augment our perception become reality, blocking others through signal manipulation would not be much of a challenge either.




Compared with EYE-LINK and Z-EYE, the cookie technology from the episode seems far more implausible given our current understanding of neuroscience. The roots of consciousness and the mind remain a mystery, despite how much we now know about the nervous system. Yet while we are decades away from copying our own minds, current developments in AI are still startling. AlphaGo has made news over the past few years by defeating top professional Go players from around the world. Although Deep Blue, another AI system, defeated world chess champion Garry Kasparov in 1997, defeating humans at Go is a much harder problem. Go, a classic abstract strategy board game dating back some 3,000 years, is viewed as the most challenging classical game for AI because of its enormous number of possible board configurations [4]. Given so many possibilities, traditional AI methods that exhaustively search a game tree do not work for Go. Earlier generations of AlphaGo were trained on games played by numerous human players, combining deep neural networks with advanced tree search. The recent win by AlphaGo Zero is so striking because it learned to master the game without any human knowledge [5]: the newer version learns by playing against itself, and does so far more efficiently. AlphaGo’s triumph is more than victory in a board game; it represents progress on hard challenges in machine learning. The underlying algorithms could be a step toward more complicated learning tasks, such as emotion recognition and social learning.
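
To make the self-play idea concrete, here is a minimal sketch in the spirit of (but far simpler than) AlphaGo Zero's approach. It is my own illustration, not DeepMind's algorithm or code: a tabular agent learns the toy game of Nim (take one or two stones; whoever takes the last stone wins) purely from the outcomes of games it plays against itself, with no human examples.

```python
# Minimal self-play sketch (illustrative only; not AlphaGo's actual algorithm).
# A tabular agent learns Nim (take 1 or 2 stones; taking the last stone wins)
# purely by playing against itself, with no human game records.
import random
from collections import defaultdict

values = defaultdict(float)   # estimated value of a position for the player about to move
ALPHA = 0.1                   # learning rate
EPSILON = 0.2                 # exploration rate

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def choose(stones):
    moves = legal_moves(stones)
    if random.random() < EPSILON:
        return random.choice(moves)
    # greedily pick the move that leaves the opponent in the worst position
    return min(moves, key=lambda m: values[stones - m])

def self_play_episode(start=9):
    stones, trajectory = start, []
    while stones > 0:
        move = choose(stones)
        trajectory.append((stones, move))
        stones -= move
    # the player who took the last stone wins; walk back through the game,
    # assigning +1 / -1 from each player's point of view
    result = 1.0
    for state, _ in reversed(trajectory):
        values[state] += ALPHA * (result - values[state])
        result = -result

for _ in range(20000):
    self_play_episode()

# Positions with 3, 6, or 9 stones tend toward negative values
# (they are losing for the player to move in this Nim variant).
print({s: round(values[s], 2) for s in range(1, 10)})
```

The essential loop is the one described above: play against yourself, score the finished games, and fold those outcomes back into the policy that chooses the next moves.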







AlphaGo competing in a Go game; image courtesy of Flickr user lewcpe.

As Google DeepMind continues to advance learning algorithms, a newer company, Neuralink, founded by SpaceX and Tesla CEO Elon Musk, has drawn a lot of attention for its audacious goal of combining human and artificial intelligence. Musk is deeply concerned about AI’s potential threat to humanity and proposes Neuralink as a way to head that threat off. Indeed, the brain-machine interface (BMI) is no longer a novel concept. Scientists have developed deep brain stimulation that benefits people with Parkinson’s disease, epilepsy, and many other neurological disorders [6]. In addition, people with paralysis are able to control artificial limbs through brain-machine interfaces [7]. BMI shows great promise for improving quality of life. What Musk proposes, however, is to augment the healthy human brain by connecting it to artificial intelligence. While it is tempting to acquire a “super power” such as photographic memory through a BMI, such power comes at a great price: the interface would very likely require the invasive implantation of hundreds of electrodes in the brain. Predicting the potential side effects will also be extremely challenging, as so much remains unknown about our brains. Are people going to be willing to take enormous, unknown risks for the possibility of having photographic memory?




Despite the impressive progress scientists have made in the field of machine learning and artificial intelligence, we are still far away from anything like the cookie that would be able to copy a person’s consciousness and manipulate it to our advantage.





Ethical Considerations




After Matt explains how he coerces Cookie Greta into working for the real Greta, Joe feels empathy for the cookie and condemns the technology as barbaric slavery. Matt argues that since Cookie Greta is only made of code, she is not real, and the treatment is therefore not barbaric. The disagreement between the two raises a fundamental question: is the copy of a person’s mind merely lines of code? If not, should these mind-copies have the same rights we do? Similar discussions can be found in this previous post on the blog.








“White Christmas” also raises the question of how we perceive our own minds and the minds of others. Why do some people believe that the cookie is nothing but a simulation running on a device? Many people seem to believe that there is a hierarchy of minds among different species. In their book The Mind Club, Daniel M. Wegner and Kurt Gray describe a “mind survey” they conducted online. This self-report survey aimed to evaluate people’s perception of minds by asking them to compare thirteen potential minds on a range of mental abilities [7]. Based on 2,499 responses, they found that perceptions of these abilities cluster along two factors, which the authors call experience and agency. Experience represents the ability to feel things such as pain, joy, and anger; agency is the set of abilities with which one thinks and performs tasks rather than senses and feels. For example, the average respondent rated themselves high on both experience and agency, whereas they rated a robot relatively high on agency but very low on experience. Even though this two-dimensional mapping is a rather coarse way to quantify our perceptions of minds, it shows that humans, consciously or subconsciously, rank the minds of others (humans, animals, robots, and even god(s)) according to their perceived abilities to think and to feel.




Let’s use the concepts of agency and experience to understand why people doubt that AI, including the cookie, has consciousness. One might grant that the cookie has a high level of intelligence, in other words agency, yet find it difficult to imagine that code has feelings too. Matt gives Cookie Greta a physical body to help “her” cope with “her” distress. While this might simply be a filming device to help the audience visualize the cookie, the embodiment also seems to give Cookie Greta an outlet to feel, sense, and better understand “her” own existence. Moreover, Matt has to alter Cookie Greta’s perception of time and leave “her” in prolonged solitary confinement to force “her” into compliance, exploiting “her” fear of boredom. The fact that Matt cannot simply edit the code to make the cookie obedient, but has to manipulate “her” through fear, an emotion, suggests that the cookie can indeed feel and experience. Similarly, Matt exploits Cookie Joe’s empathy and guilt to extract a confession. Even if one argues that these seemingly human emotions are nothing but simulation, how can we be certain that the simulated mind does not experience these feelings? If cookies feel as we do, forcing Joe’s cookie to listen to the same Christmas carol for millions of years in isolation is an utterly brutal and unfair punishment.




If we assume that the cookie indeed has some form of consciousness, the next question is whether cookies should bear the consequences of their originals’ actions. Both Cookie Greta and Cookie Joe clearly have the same memories and ways of thinking as their real selves (the term “real” is used loosely here to distinguish the cookie from its original, not to imply that the cookie is unreal). Based on the confession, Joe is indeed responsible for two deaths. But should the copy of his mind be held responsible for his crime? Do we view the copy as an extension of him, or as an independent individual? Similarly, if Neuralink does succeed in creating a hybrid of human brain and AI, how do we define the identity of such an individual, and who should be responsible for its wrongdoing?






Conclusion







Darling's robot dinosaur; image courtesy of Wikimedia Commons.

If you object to how Matt and the officers treat the advanced AI in “White Christmas,” you might find some comfort in the studies of human-robot interaction conducted by Dr. Kate Darling of the MIT Media Lab [8, 9]. In an informal experiment, participants were first given robot dinosaurs and asked to play with them. After building an emotional connection with the robots for about an hour, the participants were then instructed to torture and destroy them with various tools the experimenters provided. All of the volunteers refused. Dr. Darling, an expert in robot ethics and an advocate for legal protection of robots, explains that even though people know the robots are not actually alive, they naturally project their emotions onto them. If people can feel empathy toward a life-like robot, are most of us really capable of watching the suffering of a humanoid AI, even one without consciousness? As Immanuel Kant said, “He who is cruel to animals becomes hard also in his dealings with men. We can judge the heart of a man by his treatment of animals.”





References





1. Levy, Steven (2017, July 18). Google Glass 2.0 is starting a startling second act. Retrieved from https://www.wired.com/story/google-glass-2-is-here/






2. Nirenberg, S., & Pandarinath, C. (2012). Retinal prosthetic strategy with the capacity to restore normal vision. Proc Natl Acad Sci USA, 109(37), 15012-15017. doi:10.1073/pnas.1207035109






3. Lewis, P. M., Ackland, H. M., Lowery, A. J., & Rosenfeld, J. V. (2015). Restoration of vision in blind individuals using bionic devices: a review with a focus on cortical visual prostheses. Brain res, 1595, 51-73. doi:10.1016/j.brainres.2014.11.020






4. The story of AlphaGo so far. Retrieved from https://deepmind.com/research/alphago/






5. Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., … Hassabis, D. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354-359. doi:10.1038/nature24270






6. Lyons, M. K. (2011). Deep brain stimulation: current and future clinical applications. Mayo Clin Proc, 86(7), 662-672. doi:10.4065/mcp.2011.0045






7. Wegner, D. M., & Gray, K. J. (2017). The mind club: who thinks, what feels, and why it matters. New York, NY: Penguin Books






8. Can robots teach us what it means to be human? (2017, July 10). Retrieved from https://www.npr.org/2017/07/10/536424647/can-robots-teach-us-what-it-means-to-be-human






9. Darling, K. (2012). Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behavior toward robotic objects. Robot Law, Calo, Froomkin, Kerr ed., Edward Elgar 2016; We robot Conference. Available at https://ssrn.com/abstract=2044797 or http://dx.doi.org/10.2139/ssrn.2044797





Want to cite this post?



Wang, Y. (2017). The Neuroethics Blog Series on Black Mirror: White Christmas. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/12/the-neuroethics-blog-series-on-black.html







Tuesday, December 12, 2017

Neuroethics in the News Recap: Psychosis, Unshared Reality, or Clairaudience?



By Nathan Ahlgrim








Even computer programs, like DeepDream, hallucinate.

Courtesy of Wikimedia Commons.

Experiencing hallucinations is one of the surest ways to be labeled with one of the most derogatory of words: “crazy.” Hearing voices that no one else can hear is a popular laugh line (look no further than Phoebe in Friends), but it can be a serious and distressing symptom of schizophrenia and other incapacitating disorders. Anderson Cooper demonstrated the seriousness of the issue, finding the most mundane tasks nearly impossible as he lived a day immersed in simulated hallucinations. With increasing visibility and sensitivity, psychotic symptoms are less frequently the butt of jokes, but people with schizophrenia and others who hear voices still face stigma. People with schizophrenia certainly deserve care within the mental healthcare system to ease their suffering and manage their symptoms, but there is also a population who are at peace with the voices only they can hear. At last month’s Neuroethics and Neuroscience in the News meeting, Stephanie Hare and Dr. Jessica Turner of Georgia State University drew the contrast between people with schizophrenia and people whom scientists call “healthy voice hearers.” In doing so, they discussed why hearing voices should not necessarily be considered pathological, reframing what healthy and normal behavior can include.





Their discussion centered on work from Dr. Philip Corlett’s lab [1], which compared how people with schizophrenia and self-described psychics experience auditory hallucinations. An article in The Atlantic later followed, profiling one of the self-described psychic mediums and her relationship with the voices only she hears. The problem with labels comes to the fore in the very premise of the study: the psychics are labeled non-psychotic even though they perceive sounds in the absence of any external noise. Mental health practitioners must then decide whether to pathologize the experience – to label it as a symptom of a disorder. Refraining from pathologizing their experience makes sense under the current definition of a “disorder,” which includes the criterion that the experience cause distress. Because psychics are not bothered by the voices they hear, their voice hearing is not considered a symptom of a disorder or psychosis. But given our society’s negative view of hallucinations and psychosis, how many people are inappropriately pathologized for similar experiences?








Image courtesy of Pixabay.

David Rosenhan presented a pessimistic critique of how Western medicine deals with hallucinations in the 1970s with his report On Being Sane in Insane Places [2]. He and his colleagues presented themselves at psychiatric hospitals, reporting auditory hallucinations and no other symptoms. Once committed, they behaved as they normally would and reported no further hallucinatory events. Even so, the healthcare professionals never suspected or accused them of malingering, and never granted them a clean bill of health. Rosenhan and others argued from this that mental health professionals focus on symptoms to the exclusion of a holistic picture, and that hallucinations are overly pathologized.





Rosenhan would be happy to see how the American medical system’s treatment of hallucinations has changed over the intervening decades, with continuing revisions of the Diagnostic and Statistical Manual of Mental Disorders (DSM). Qualifying symptoms for schizophrenia are hallucinations, delusions, disorganized speech or behavior, and negative symptoms (social withdrawal, anhedonia, etc.). Beginning with DSM-III in 1980, a diagnosis required “significant impairment” associated with at least one of these symptoms. With the publication of DSM-5 in 2013, a diagnosis now requires at least two of these qualifying symptoms together with significant occupational or social dysfunction. Hallucinations alone are therefore no longer sufficient for a diagnosis of schizophrenia, and healthy voice hearers are free from diagnosis.
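
Stated as a bare decision rule, the change looks roughly like the sketch below. This is my own illustrative simplification, not clinical guidance: real diagnosis also involves duration criteria, exclusions, and clinical judgment, and the function and field names here are hypothetical.

```python
# Illustrative simplification of the shifting diagnostic threshold (not clinical guidance).
CORE_SYMPTOMS = {"hallucinations", "delusions", "disorganized_speech",
                 "disorganized_behavior", "negative_symptoms"}

def dsm3_style_threshold(symptoms, significant_impairment):
    # Simplified DSM-III-style rule: one qualifying symptom plus significant impairment.
    return len(CORE_SYMPTOMS & symptoms) >= 1 and significant_impairment

def dsm5_style_threshold(symptoms, significant_dysfunction):
    # Simplified DSM-5-style rule: at least two qualifying symptoms plus
    # significant occupational or social dysfunction.
    return len(CORE_SYMPTOMS & symptoms) >= 2 and significant_dysfunction

# Someone whose only qualifying symptom is hearing voices, even with impairment:
print(dsm3_style_threshold({"hallucinations"}, significant_impairment=True))    # True
print(dsm5_style_threshold({"hallucinations"}, significant_dysfunction=True))   # False
```

Under the newer rule, hallucinations by themselves no longer cross the threshold, which is exactly the point made above.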








Stephanie Hare describing how the brain acts during auditory hallucinations.

Non-voice hearers may balk at the idea that hallucinations are part of a typical or “normal” spectrum of experience. However, anywhere between 5 and 28% of the general population experience auditory hallucinations, and only about 25% of those meet the criteria for psychosis [3]. Can anything be considered abnormal if one-quarter of the population experiences it? Surprisingly, the neuroscientific evidence also supports dissociating auditory hallucinations from psychiatric illness: brain activity during auditory hallucinations does not differ between healthy voice hearers and those with a psychiatric diagnosis [4]. The brains of the people we label sick and healthy seem to produce auditory hallucinations in the same way. How, then, should auditory hallucinations and healthy voice hearers be treated by psychiatrists and by society writ large?
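
Taking the cited figures at face value, a quick back-of-the-envelope calculation (my own, treating the reported percentages as exact) shows how large the non-psychotic group is:

```python
# Rough estimate from the prevalence figures cited above (illustrative only).
for prevalence in (0.05, 0.28):           # share of the population hearing voices
    psychotic = prevalence * 0.25         # ~25% of voice hearers meet psychosis criteria
    healthy = prevalence - psychotic
    print(f"voice hearers: {prevalence:.0%} -> healthy voice hearers: {healthy:.2%}")
# On these numbers, roughly 3.75% to 21% of the general population would be healthy voice hearers.
```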





The concept of non-distressing hallucinations is foreign to anyone whose only exposure to the phenomenon is through portrayals of schizophrenia. And yet most healthy voice hearers describe their voices as positive, controllable, and not bothersome [1]. The resulting argument is that if hallucinations do not make a person want to seek help, we should let them be. But treatment-seeking is not always a prerequisite for a mental disorder; some disorders do not feel out of place to the individual at all. People with personality disorders often fit that category. Although Borderline Personality Disorder causes significant distress and leads many to seek treatment, people with other personality disorders, such as Narcissistic Personality Disorder, do not perceive their behavior as abnormal and are not distressed by it [5]. Overall, people with personality disorders are very likely to push back against the need to treat the underlying condition [6]. Diagnoses are made because the disorder has deleterious effects on the person’s social and professional life, not because the person seeks treatment. Psychics can experience a similar ostracism. As the medium interviewed for The Atlantic article states, “You just can’t go into a room and say ‘Hey, I’m a psychic medium’ and people are gonna accept you.” Hallucinations can interfere with a person’s life whether they are attributed to schizophrenia or to psychic sensitivity, with prejudice stemming from either fear or disdain. How mental health professionals define “significant distress” so as to account for both experiences will shape how the perception of illness, and the stigma surrounding it, evolves.





Applying Lessons Learned








People with chromesthesia associate sounds with color.

Image courtesy of Wikipedia

Healthy voice hearers are beginning to speak out and seek acceptance, and their mission to erode the stigma surrounding auditory hallucinations does not need to start from scratch. Synesthesia, the perceptual experience of blending two or more senses, is also outside the realm of typical experience, and yet synesthetes are viewed with wonder, not fear or pity. As with so many other topics, our collective internet search behavior gives away our prejudices: the top Google result for “synesthesia in the media” is the BBC article “How synaesthesia inspires artists.” In contrast, “schizophrenia in the media” returns an academic article finding that the majority of characters with schizophrenia in movies released between 1990 and 2010 “engaged in dangerous or violent behaviors.” Synesthesia has uniquely captured a positive public image. More directly, one specific type of hallucination has already been accepted as normal and healthy: in the grey zone between wakefulness and sleep, hypnagogic hallucinations produce sensory experiences with no external source, such as sudden noises or vivid visual scenes. In contrast to waking hallucinations, these are accepted as a non-pathological part of many people’s lives.





To reach a similar point of acceptance, healthy voice hearers would benefit from a spectrum approach. Binary health/illness evaluations are already being replaced with dimensional assessments, and auditory hallucinations may belong at one end of a spectrum of perceptual vividness, inside the range of normal experience for many people. All these strategies share one theme: deliberate language. Calling a person “schizophrenic,” “crazy,” or even “hallucinating” instantly pathologizes and strips the person of identity. Replacing that vocabulary with inclusive language like “person with schizophrenia,” “psychosis,” and even “nonconsensual reality” gives agency and acknowledges divergent experiences. Such deliberation over language is often accused of being too politically correct, but it is the first step in fostering a safe environment for people across the entire range of sensory experiences. Only in a safe environment can voice hearers seek help if they need it, or be open about their experiences if they do not.




References






[1] Powers AR, 3rd, Kelley MS, Corlett PR. (2017). Schizophr Bull 43: 84-98.


[2] Rosenhan DL. (1973). Science 179: 250-8.


[3] de Leede-Smith S, Barkus E. (2013). Frontiers in Human Neuroscience 7.


[4] Diederen KMJ, Daalman K, de Weijer AD, Neggers SFW, van Gastel W, Blom JD, Kahn RS, Sommer IEC. (2012). Schizophrenia Bulletin 38: 1074-82.


[5] Caligor E, Levy KN, Yeomans FE. (2015). The American journal of psychiatry 172: 415-22.


[6] Tyrer P, Mitchard S, Methuen C, Ranger M. (2003). Journal of personality disorders 17: 263-8.



Want to cite this post?



Ahlgrim, N. (2017). Neuroethics in the News Recap: Psychosis, Unshared Reality, or Clairaudience?. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/12/neuroethics-in-news-recap-psychosis.html

Tuesday, December 5, 2017

Neuroethics, the Predictive Brain, and Hallucinating Neural Networks




By Andy Clark







Andy Clark is Professor of Logic and Metaphysics in the School of Philosophy, Psychology and Language Sciences, at Edinburgh University in Scotland. He is the author of several books including Surfing Uncertainty: Prediction, Action, and the Embodied Mind (Oxford University Press, 2016). Andy is currently PI on a 4-year ERC-funded project Expecting Ourselves: Prediction, Action, and the Construction of Conscious Experience.





In this post, I’d like to explore an emerging neurocomputational story that has implications for how we should think about ourselves and about the relations between normal and atypical forms of human experience.




Predictive Processing: From Peeps to Phrases



The approach is often known as ‘predictive processing’ and, as the name suggests, it depicts brains as multi-area, multi-level engines of prediction. Such devices (for some introductions, see Hohwy (2013), Clark (2013) (2016)) are constantly trying to self-generate the sensory stream – to re-create it ‘from the top-down’ using stored knowledge (‘prior information’) about the world. When the attempt at top-down matching fails, so-called ‘prediction errors’ result. These ‘residual errors’ flag whatever remains unexplained by the current best predictive guess and are thus excellent guides for the recruitment of new predictions and/or the refinement of old ones. A multi-level exchange involving predictions, prediction errors, and new predictions then ensues, until a kind of equilibrium is reached.





That’s pretty abstract and highly compressed. But a compelling example involves the hearing of ‘sine-wave speech.’ This is speech with much of the usual signal cut out, so that all that remains is a series of ‘peeps and whistles.’ You can hear an example by clicking on the first loudspeaker icon here. You probably won’t make much sense of what you hear. But now click on the next loudspeaker and listen to the original sentence before revisiting the sinewave replica. Now, your experiential world has altered. It sounds like odd but clearly intelligible speech. In one sense, you are now able to hallucinate the richer meaning-bearing structure despite that poor sensory signal. In another (equally valid) sense, you are now simply hearing what is there, but through a process that starts with better prior information, and so is better able to sift the interesting signal from the distracting noise. For some more demos like this, try here, or here.








Image courtesy of Pexels.

According to these ‘predictive processing’ accounts, the process is one in which you start off with inadequate prior knowledge; so, when you first hear the sine wave version, you are unable to meet the incoming signal with the right wave of top-down predictions. After hearing the sentence, your model improves and you can match the sine wave skeleton with a rich flow of top-down prediction. Once you are expert enough, you can even recruit those apt top-down flows without hearing the specific sentence first. This corresponds to having learnt a generalizable world-model that now powers top-down prediction across new instances.





Finally – but crucially for present purposes – the balance between top-down prediction and bottom-up sensory evidence is itself controlled and variable, so that sometimes we rely more on the sensory evidence, and sometimes more on the top-down predictions. This is the process known as the ‘precision-weighting’ of the predictions and prediction error signals (see FitzGerald et al (2015)).
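
For readers who want to see the moving parts, here is a deliberately tiny one-dimensional sketch of the idea. It is my own illustration (a toy Gaussian setting with made-up parameter names), not a published model: a running prediction is nudged by precision-weighted prediction errors from the senses and from the prior, and changing the relative precisions changes how far the settled estimate moves toward the incoming signal.

```python
import numpy as np

def precision_weighted_estimate(prior_mean, prior_precision,
                                sensory_samples, sensory_precision,
                                learning_rate=0.01, n_iters=500):
    """Toy predictive-processing loop: the prediction mu is repeatedly nudged by
    precision-weighted prediction errors until prior and evidence roughly balance."""
    mu = prior_mean                                            # current best prediction
    for _ in range(n_iters):
        for s in sensory_samples:
            sensory_error = sensory_precision * (s - mu)       # unexplained input
            prior_error = prior_precision * (prior_mean - mu)  # pull back toward prior knowledge
            mu += learning_rate * (sensory_error + prior_error)
    return mu

rng = np.random.default_rng(0)
signal = rng.normal(loc=2.0, scale=1.0, size=20)  # noisy evidence centred near 2.0

print(precision_weighted_estimate(0.0, 1.0, signal, 1.0))    # balanced weighting: settles near 1.0
print(precision_weighted_estimate(0.0, 10.0, signal, 1.0))   # over-weighted prior: stays near 0.2
print(precision_weighted_estimate(0.0, 1.0, signal, 10.0))   # over-weighted senses: ends up close to the signal (about 1.8)
```

The two failure modes discussed next correspond to turning that precision dial too far in one direction or the other.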





Perturbing Predictions





Or rather, that’s what happens when all works as it should. But what happens when such systems go wrong? Consider some of the options:





Over-weighting the sensory evidence.





This corresponds to assigning too much weight (precision) to the errors flagging unexplained sensory information or (what here amounts to the very same thing) assigning too little weight to top-down predictions. Do that, and you won’t be able to detect faint patterns in a noisy environment, missing the famous Dalmatian dog hidden in the play of light and shadow, or the true sentences hidden in the peeps and whistles of sine-wave speech. Could it be that autism spectrum disorder involves this kind of failure, making the incoming sensory stream seem full of unexplained details and hard to master? (For works that explore this and related ideas, see Pellicano and Burr (2012), Brock (2012), Friston et al (2013).)





Under-weighting the sensory evidence








Image courtesy of Pixabay.

This corresponds to assigning too little weight to sensory prediction error, or (though from a Bayesian perspective this amounts to the same thing) assigning too much weight to top-down predictions. Do that, and you will start to hallucinate patterns that are not there, just because you strongly predict them. We can do this on demand, as when we set out to spot faces in the clouds. But if we don’t know we are upping the value of our own predictions, we may believe our own hallucinations. Indeed, just this was shown in healthy undergraduates whose task was to try to detect the faint onset of Bing Crosby singing ‘White Christmas’ in a noisy sound file. Unknown to them, the sound file was just white noise (no faint trace of White Christmas at all). Yet a significant number of students claimed to hear the onset of the song (Merckelbach and van de Ven (2001) – and for a follow-up study showing that the effect is increased by caffeine, see Crowe et al (2011)).
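
A toy signal-detection model (again my own illustration, not the design of the Merckelbach and van de Ven study) makes the point numerically: if a strong expectation is added to purely internal noise before a yes/no decision, "detections" of a song that was never played become common.

```python
import numpy as np

rng = np.random.default_rng(1)

def report_rate(prior_boost, n_trials=10_000, threshold=2.0):
    """Fraction of pure-noise trials on which a toy listener reports hearing the
    expected song: internal noise (no song is ever present) plus the listener's
    expectation is compared against a fixed report threshold."""
    internal_noise = rng.normal(0.0, 1.0, size=n_trials)
    decision_variable = internal_noise + prior_boost
    return float(np.mean(decision_variable > threshold))

print(report_rate(prior_boost=0.0))   # weak expectation: roughly 2% false alarms
print(report_rate(prior_boost=1.5))   # strong expectation: roughly 30% of trials "hear" the song
```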





More Complex Disturbances





Fletcher and Frith (2009) use the Bayesian/Predictive Processing apparatus to account for the emergence of delusions and hallucinations (the so-called ‘positive symptoms’) in schizophrenia. The basic idea is that both these symptoms might flow from a single underlying cause: falsely generated and highly-weighted (high-precision) waves of prediction error. The high weighting assigned to these falsely generated error signals renders them functionally potent, positioning them to drive the system towards increasingly bizarre hypotheses so as to accommodate them. Once such hypotheses take hold, new low-level sensory stimulation may be interpreted falsely. From the emerging ‘predictive brain’ perspective, this is no stranger than prior expectations making pure white noise sound like White Christmas.








A hallucinating multi-layer neural network looks at the University of Sussex campus (work by Suzuki et al. (2017); image reproduced by permission).

Our experiential worlds, all this suggests, are a kind of shifting mosaic in which top-down predictions meet sensory evidence. This is a delicate mechanism, prone to environmental, physiological, and pharmacological upset. Using the multi-level neural network architecture Deep Dream as a base, Suzuki et al (2017) created an immersive VR (Virtual Reality) environment in which subjects could experience visual effects remarkably similar to those reported for hallucinogenic drugs. Translated (as suggested by Suzuki et al) into predictive processing terms, the networks were in effect being told strongly to predict certain kinds of object or feature in the input stream, thereby warping the processing of the raw visual information along those specific dimensions. For example, the network that generated the image shown above was (in predictive processing terms) forced chronically to predict ‘seeing dogs’ while taking input from the Sussex campus. The results were then replayed to subjects using a heads-up display and 360-degree immersive VR. Here’s a video clip of what the viewers experienced.
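
Mechanically, that "forced to predict dogs" manipulation is in the spirit of DeepDream-style gradient ascent: rather than adjusting a network's weights to fit an image, the image itself is adjusted to amplify the activations the network already associates with certain features. The sketch below is my own PyTorch-flavoured illustration, not the code used by Suzuki et al; it assumes torchvision's pretrained VGG16 and an arbitrarily chosen mid-level layer.

```python
# DeepDream-style gradient ascent (illustrative sketch; assumes torch and torchvision >= 0.13).
import torch
import torchvision

model = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.DEFAULT)
model.eval()
layer = model.features[:17]   # an arbitrary mid-level convolutional block

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise (or load a photo)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    activations = layer(image)
    # Ascend on activation strength: nudge the pixels so the image looks "more like"
    # whatever this layer has learned to respond to, i.e. amplify the network's own
    # predictions about what is in the scene.
    loss = -activations.norm()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)  # keep pixel values in a displayable range

# To bias the distortion toward a particular category (e.g. ImageNet's dog classes,
# roughly indices 151-268), one could instead maximize those class logits:
#   loss = -model(image)[0, 151:269].sum()
```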





Predictive processing accounts link directly to psychopharmacological models and speculations. Corlett et al (2009) (2010) relate the chemical mechanisms associated with a variety of psychoses to specific impairments in the precision-weighted top-down/bottom-up balancing act: impairments echoed, the same authors note, by the action of different psychotomimetic drugs.





Implications for Neuroethics





All this has implications both for the nature and practice of neuroscience and for the social and political frameworks in which we live and work.








Image courtesy of Flickr.

Predictive perception is endemically hostage to good training data. So immersion in statistically unrepresentative worlds will yield real-seeming but distortive percepts. Barrett and Wormwood, in a high-profile New York Times piece, suggest that skewed predictions may play a role in some police shootings of unarmed black men. In the right context, visual evidence that ought to lead us to perceive a handheld cell-phone in a dark alley is trumped by top-down predictions that instead deliver a percept as of a handgun. Skewed environments build bad perceivers (not just bad reasoners or actors).





Above all, we should get used to a simple but transformative fact – the idea of raw sensory experience is radically mistaken. Where we might sometimes think we are seeing or smelling or tasting what’s simply ‘given in the signal,’ we are instead seeing, tasting, or smelling only what’s there relative to an expectation. This picture of the roots of experience is the topic of our on-going ERC-funded project Expecting Ourselves. We are all, in this limited sense, hallucinating all the time. When others hallucinate or fall prey to delusions, they are not doing anything radically different from the neurotypical case.





* This post was prepared thanks to support from the European Research Council (XSPECT - DLV-692739). Thanks to Anil Seth and Keisuke Suzuki for letting me use their work on the Hallucination Machine, and to David Carmel, Frank Schumann and the X-SPECT team for helpful comments on an earlier version.







References






Barrett, L.F. and Wormwood, J (2015) When a Gun is Not a Gun, New York Times, April 17







Brock, J (2012) Alternative Bayesian accounts of autistic perception: comment on Pellicano and Burr Trends in Cognitive Sciences, Volume 16, Issue 12, 573-574 doi:10.1016/j.tics.2012.10.005







Clark, A (2013) Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science Behavioral and Brain Sciences 36: 3:  p. 181-204









Clark, A (2016) Surfing Uncertainty: Prediction, Action, and the Embodied Mind (Oxford University Press, NY)









Corlett PR, Frith CD, and Fletcher PC (2009) From drugs to deprivation: a Bayesian framework for understanding models of psychosis. Psychopharmacology (Berl) 206:4: p.515-30









Corlett PR, Taylor JR, Wang XJ, Fletcher PC, and Krystal JH (2010) Toward a neurobiology of delusions. Progress In Neurobiology. 92: 3 p.345-369









Crowe, S., Barot, J., Caldow, S., D’Aspromonte, J., Dell’Orso, J., Di Clemente, A., Hanson, K., Kellett, M., Makhlota, S., McIvor, B., McKenzie, L., Norman, R., Thiru, A., Twyerould, M., and Sapega, S. (2011) The effect of caffeine and stress on auditory hallucinations in a non-clinical sample. Personality and Individual Differences 50: 5: 626-630









Feldman H and Friston K (2010) Attention, uncertainty, and free-energy. Frontiers in Human Neuroscience 4: article 215 (doi: 10.3389/fnhum.2010.00215)









FitzGerald, T. H. B., Dolan, R. J., & Friston, K. (2015). Dopamine, reward learning, and active inference. Frontiers in Computational Neuroscience, 9, 136. http://doi.org/10.3389/fncom.2015.00136









Fletcher, P and Frith, C (2009) Perceiving is believing: a Bayesian approach to explaining the positive symptoms of schizophrenia. Nature Reviews: Neuroscience 10: 48-58









Friston K. (2005). A theory of cortical responses. Philos Trans R Soc Lond B Biol Sci 360(1456): 815-36









Friston, K., Lawson, R. & Frith, C.D. (2013). On hyperpriors and hypopriors: Comment on Pellicano and Burr. Trends in Cognitive Sciences, 17(1), 1









Happé, F (2013) Embedded Figures Test (EFT) Encyclopedia of Autism Spectrum Disorders pp 1077-1078









Hohwy, J (2013) The Predictive Mind (Oxford University press, NY)









Merckelbach, H. & van de Ven, V. (2001). Another White Christmas: fantasy proneness and reports of 'hallucinatory experiences' in undergraduate students. Journal of Behaviour Therapy and Experimental Psychiatry, 32, 137-144.









Pellicano E., Burr D (2012) When the world becomes too real: A Bayesian explanation of autistic perception. Trends in Cognitive Sciences 16: 504–510. doi: 10.1016/j.tics.2012.08.009









Suzuki, K., Roseboom, W., Schwartzman, D., and Seth, A. (2017) A Deep-Dream Virtual Reality Platform for Studying Altered Perceptual Phenomenology Scientific Reports 7, Article number: 15982 doi:10.1038/s41598-017-16316-2






Want to cite this post?



Clark, A. (2017). Neuroethics, the Predictive Brain, and Hallucinating Neural Networks. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/12/neuroethics-predictive-brain-and.html