
Tuesday, September 11, 2018

The future of an AI artist




This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.





By Coco Cao








An example of AI-generated art

Image courtesy of Flickr

An article published in New Scientist entitled “Artificially intelligent painters invent new styles of art” caught my attention. The article discussed a recent study by Elgammal et al. (2017), who developed a computational creative system for art generation, the Creative Adversarial Network (CAN), built on the Generative Adversarial Network (GAN), an architecture that can generate novel images simulating a given distribution. A GAN consists of two neural networks, a generator and a discriminator. To create CAN, the researchers trained the discriminator on 75,753 artworks spanning 25 art styles, so that it learned to categorize works by style and, based on those learned styles, to distinguish art from non-art. The discriminator then provides corrective feedback to the generator, the network that actually produces the images. The generator eventually learns to produce pieces that are indistinguishable from human-made art. While ensuring the piece remains aesthetically pleasing, CAN generates abstract art that pursues creativity by maximizing deviation from established art styles. 
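To make that training signal concrete, here is a minimal PyTorch sketch of the idea as described above: the discriminator learns both an art/non-art judgment and a style classification, while the generator is rewarded for looking like art and for making the style classification maximally uncertain. This is only an illustration of the loss, not the authors' code; the toy architecture, the 64x64 image size, and all names are my own assumptions.

```python
# Illustrative sketch of the CAN generator objective, loosely following the
# description in Elgammal et al. (2017). Architectures and sizes are toy choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_STYLES = 25       # the discriminator was trained on 25 art styles
LATENT_DIM = 100

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 3 * 64 * 64), nn.Tanh())   # toy 64x64 RGB "painting"

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

class Discriminator(nn.Module):
    """Outputs an art/non-art logit and logits over the 25 learned styles."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
        self.art_head = nn.Linear(256, 1)
        self.style_head = nn.Linear(256, N_STYLES)

    def forward(self, x):
        h = self.features(x)
        return self.art_head(h), self.style_head(h)

def can_generator_loss(art_logit, style_logits):
    # 1) Ordinary GAN term: the generated image should count as art.
    adv = F.binary_cross_entropy_with_logits(art_logit, torch.ones_like(art_logit))
    # 2) Style-ambiguity term: push the style classification toward uniform,
    #    i.e. deviate from every established style at once.
    uniform = torch.full_like(style_logits, 1.0 / N_STYLES)
    ambiguity = -(uniform * F.log_softmax(style_logits, dim=1)).sum(dim=1).mean()
    return adv + ambiguity

G, D = Generator(), Discriminator()
fake = G(torch.randn(8, LATENT_DIM))
art_logit, style_logits = D(fake)
can_generator_loss(art_logit, style_logits).backward()
```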




After learning about AI’s ability to be “creative” and generate art pieces, I was frightened. Unlike AI applied in a scientific context, AI in an artistic context engages human feelings. Is it possible that AI artists could replace human artists in the future? Given the importance of an author’s creativity and originality in art, the critical ethical concern is the individualism of AI artists: can we consider the art pieces generated by an AI as expressions of the AI itself? 




In 1738, Jacques de Vaucanson, a French watchmaker, built a life-size mechanical duck covered in feathers. The mechanical duck could eat, move, and flap its wings, and because it exhibited all of the behaviors of a real duck, audiences refused to believe it was artificial (Glimcher, 2004). If the mechanical duck had been a real duck, as audiences believed, then its behaviors would have been generated by the duck itself. That was not the case, however: the behaviors were programmed by Vaucanson, making Vaucanson the true generator of the duck’s behavior. In the case of an AI artist, Elgammal et al. (2017) state that CAN draws on human creative products during learning while the creative process itself is carried out by the AI. The AI’s creative process therefore depends heavily on pre-exposure to artwork created by humans. Does this mean the resulting artwork originates with the AI? I don’t think so. During my recent visit to the “Artists and Robots” exhibition, held at the Grand Palais in Paris and presenting the applications and implications of AI in the arts, I noticed that some of the robot paintings bore the signatures of the human artists and programmers. In those cases, credit for the AI-generated pieces still belonged to the human artists and programmers. For now, then, AI is not considered an individual in the art context. 








Vaucanson's duck along with two of his other creations

Image courtesy of Wikipedia

Moreover, if AI is programmed to think like humans, does that mean humans already understand the biological basis of creativity and individualism? And are we able to program creativity and individualism? Current research suggests that three brain networks, the default mode network, the executive control network, and the salience network, are related to creativity; these networks are distributed across the frontal and parietal cortices (Brenner, 2018). Regarding individualism, Chiao et al. (2009) suggest that neural activity in the medial prefrontal cortex predicts individualistic and collectivistic views of the self. However, brain areas are densely interconnected, and we still do not know the specific neuronal interactions involved in creativity and individualism. Without actually understanding the human neuronal basis of creativity and individualism, we are unable to program them into AIs. Therefore, art pieces generated by AI are not original. 





Beyond the originality of art, the meaning of art is also crucial. We can evaluate the meaning of art in two contexts: its meaning to the audience viewing it and its meaning to the artist who created it. Ted Snell (2018), the Cultural Precinct director at the University of Western Australia, concluded that the evaluation of art depends on the audience’s knowledge and experience. Given this subjectivity in evaluating art, what is the meaning of art to the artist? 








Image courtesy of Pixabay

There are many kinds of art. As a dance minor, I am most familiar with the performing arts. After years of training in classical ballet, I have seen my technique improve as I put more effort into my dancing. But dance is not only about the physical growth that comes from overcoming technical challenges; it has also helped me grow mentally, as I learned to accept my imperfections and became more humble and persistent in training. If art brings artists mental growth, we still have no way to measure what such growth would mean for an AI. Until now, AI’s learning process has been largely guided by humans, and AI has been developed within a human societal context. So if we consider an AI an individual that is biologically distinct from a human, do human societal values apply to it? 





So far, we can neither consider these AI artists individuals nor measure any mental growth they experience while producing their works. Still, we cannot neglect the vast possibilities of art generated by AI. And considering that AI does not age or die, it could someday exceed us by continuously learning and improving. The word “robot” derives from the Czech robota, a term for forced labor, and it is possible that AI could one day “enslave” us in turn. While it is interesting to experiment with AI’s ability to create art, we need to evaluate the consequences of accepting AI art pieces. Nevertheless, it is truly fascinating to see the artworks created by an AI artist! 





_______________






Coco Cao is a fourth-year undergraduate student at Emory University, majoring in Neuroscience and Behavioral Biology and minoring in Dance and movement studies. She is originally from China and hopes to pursue a career in medicine.
















References: 





Baraniuk, C. (2017, June 29). Artificially intelligent painters invent new styles of art. Retrieved June 14, 2018, from https://www.newscientist.com/article/2139184-artificially-intelligent-painters-invent-new-styles-of-art/ 





Brenner, G. H. (2018, February 22). Your Brain on Creativity. Retrieved July 5, 2018, from https://www.psychologytoday.com/us/blog/experimentations/201802/your-brain-creativity 





Chiao, J. Y., Harada, T., Komeda, H., Li, Z., Mano, Y., Saito, D., & Iidaka, T. (2009). Neural basis of individualistic and collectivistic views of self. Human Brain Mapping, 30(9), 2813-2820. doi:10.1002/hbm.20707 





Elgammal, A., Liu, B., Elhoseiny, M., & Mazzone, M. (2017). CAN: Creative Adversarial Networks, Generating “Art” by Learning About Styles and Deviating from Style Norms. arXiv preprint arXiv:1706.07068. 





Glimcher, P. W. (2004). Decisions, uncertainty, and the brain: The science of neuroeconomics. Cambridge, MA: MIT Press. 





Réunion des musées nationaux – Grand Palais. (n.d.). Artists & Robots. Retrieved July 5, 2018, from https://www.grandpalais.fr/en/event/artists-robots 





Snell, T. (2018, May 04). On judging art prizes (it's all subjective, isn't it?). Retrieved June 14, 2018, from https://theconversation.com/on-judging-art-prizes-its-all-subjective-isnt-it-38430








Want to cite this post?



Cao, C. (2018). The future of an AI artist. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/09/the-future-of-ai-artist.html

Tuesday, June 26, 2018

Facial recognition, values, and the human brain




By Elisabeth Hildt








Image courtesy of Pixabay.

Research is not an isolated activity. It takes place in a social context, sometimes influenced by value assumptions and sometimes accompanied by social and ethical implications. A recent example of this complex interplay is an article, “Deep neural networks can detect sexual orientation from faces” by Yilun Wang and Michal Kosinski, accepted in 2017 for publication in the Journal of Personality and Social Psychology.





In this study on face recognition, the researchers used deep neural networks to classify the sexual orientations of persons depicted in facial images uploaded to a dating website. While the discriminatory power of the system was limited, the algorithm was reported to have achieved higher accuracy in this setting than human judges. The study can be seen in the context of the “prenatal hormone theory of sexual orientation,” which claims that gay men and women tend to have gender-atypical facial morphology.




The abstract of the article ends with the sentences (p.2): “Those findings advance our understanding of the origins of sexual orientation and the limits of human perception. Additionally, given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.”





The authors of the study seem to assume that their role is confined to conducting research, sending the results out to society, and (maybe) sounding a note of caution, a caveat (Murphy 2017; Resnick 2018), and that, beyond this, their research does not pose any considerable ethical issues. This assumption can be questioned, however, for researchers have a clear responsibility to think about the social embeddedness and ethical implications of their research before it is carried out and published, and to design their studies in a way that keeps possible negative consequences to a minimum.





To begin with, there has been an ongoing discussion on whether this study complies with research ethics standards. Issues raised include whether the research is in line with the dating site’s guidelines and with copyright regulations, as well as whether the researchers were entitled to use the photos without having obtained the informed consent of the individuals who uploaded them for an entirely different purpose (Flaherty 2017; Leetaru 2017). As there is an ongoing investigation into these issues – but also discussion on whether there is need for new guidelines regulating artificial intelligence (AI) and digital data research (Leetaru 2017)— the study has not yet been published in the journal.








Image courtesy of Flickr

Apart from the above-mentioned research ethics issues, ethical aspects matter in two respects: first with regard to the value assumptions implicit in the study design and second with regard to possible ethical implications of the research. Physiognomy, the broader context in which the study is located, is a controversial field that many consider to be a pseudoscience (Emspak 2017).



Physiognomy assumes that a person’s facial features give indications of his/her personality traits. It is not by chance that the study reminds me of the pseudo-scientific phrenological approaches of the 19th century that attempted to draw conclusions about individuals’ personality traits based on the shape of their skulls (Holtzman 2015). What unites these two is that their approach is influenced by social value assumptions and the motivation to be able to identify individuals with behavior or with characteristics considered socially deviant.



Other brain-related research fields are not immune to social value assumptions either. An example is craniometry and the highly questionable claim made by Samuel George Morton in the 19th century that differences in cranial capacity between ethnic groups are indicators of the intellectual capacity of those groups. The same applies to discussions on the relevance of brain size to intelligence (Fausto-Sterling 1993). These examples tell us more about the underlying social assumptions of the researchers than about the actual relevance of their measurements.





Similarly, one of the basic assumptions of the Wang & Kosinski paper, the “prenatal hormone theory of sexual orientation” and the view that there is a correlation between the shape of a person’s face and his/her sexual orientation, is far from proven (Emspak 2017; Murphy 2017). While the quality of the underlying scientific assumption is a complex question that cannot be resolved here, the choice of the research topic reflects the view that AI-based facial recognition to detect sexual orientation is a topic worth pursuing.



One of the central conclusions of the study is that human faces “contain more information about sexual orientation than can be perceived or interpreted by the human brain” (Wang & Kosinski, p. 29). Deep neural networks are reported to provide more accurate results in the described study setting because they take into consideration features that are not accessible or not relevant to humans and the human brain when it comes to distinguishing between heterosexual and homosexual individuals based on their faces. Nevertheless, it is obvious that the resulting data needs human interpretation, especially in view of the intimacy of the trait under investigation. For example, the authors explain the higher probability of seeing a shadow on the forehead of heterosexual men and lesbian women in the study by the tendency of both groups to wear baseball caps and “the association between baseball caps and masculinity in American culture” (p. 20). In other cultural contexts, different influencing factors may be expected. But it remains unexplained why gay people in the study were more likely to wear glasses (Emspak 2017).








Image courtesy of Pixabay.

The underlying question is: how can we ever adequately interpret the resulting data in a situation in which not only a considerable number of the elements used by the system, but also their relevance escape our understanding? There is a clear risk for discrimination against homosexual men and women based on opaque algorithms (O’Neil 2016; Agüera y Arcas et al. 2018). This leads to the question of possible negative consequences for homosexual men and women.





Concerning possible ethical implications of their research study, the authors stress that their intention was to raise awareness of the risks gay people may face already, particularly in view of the growing digitalization of everyday lives, and that they did not develop algorithms for their study but instead used widely available off-the-shelf tools. However, it seems obvious that the study not only raises awareness of the options available in the digital age, but also suggests how to realize similar approaches; it also reinforces the assumption that using facial recognition to find out about the sexual orientation of individuals may be worthwhile.


_______________








Elisabeth Hildt is Professor of Philosophy and Director of the Center for the Study of Ethics in the Professions at Illinois Institute of Technology, Chicago. Her research focus is on neuroethics, ethics of technology, and Science and Technology Studies. Before moving to Chicago, she was the head of the Research Group on Neuroethics/Neurophilosophy at the University of Mainz, Germany.


















References








Agüera y Arcas, B., Todorov, A., Mitchell. M. (2018): “Do algorithms reveal sexual orientation or just expose our stereotypes?”, Medium January 11, 2018; https://medium.com/@blaisea/do-algorithms-reveal-sexual-orientation-or-just-expose-our-stereotypes-d998fafdf477









Emspak, J. (2017): “Facing Facts: Artificial Intelligence and the Resurgence of Physiognomy”, Undark 11.08.2017; https://undark.org/article/facing-facts-artificial-intelligence/









Fausto-Sterling, A. (1993): “Sex, Race, Brains, and Calipers”, Discover Magazine 14(10): 32-37. http://discovermagazine.com/1993/oct/sexracebrainsand288









Flaherty, C. (2017): “Prominent journal that accepted controversial study on AI gaydar is reviewing ethics in the work”, Inside Higher Ed Sep 13, 2017; https://www.insidehighered.com/news/2017/09/13/prominent-journal-accepted-controversial-study-ai-gaydar-reviewing-ethics-work









Holtzman, G.S. (2015): “When Phrenology was Used in Court. Lessons in neuroscience from the 1834 trial of a 9-year-old”, Slate, Future Tense, Dec 16, 2015, http://www.slate.com/articles/technology/future_tense/2015/12/how_phrenology_was_used_in_the_1834_trial_of_9_year_old_major_mitchell.html









Leetaru, K. (2017): “AI ‘Gaydar’ And How The Future Of AI Will Be Exempt From Ethical Review”, Forbes, Sep 16, 2017; https://www.forbes.com/sites/kalevleetaru/2017/09/16/ai-gaydar-and-how-the-future-of-ai-will-be-exempt-from-ethical-review/#704e7602c09a









Murphy, H. (2017): “Why Stanford Researchers Tried to Create a ‘Gaydar’ Machine”, New York Times, Oct 9, 2017; https://www.nytimes.com/2017/10/09/science/stanford-sexual-orientation-study.html









O’Neil, C. (2016): Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown









Resnick, B. (2018): “This psychologist’s “gaydar” research makes us uncomfortable. That’s the point.”, Vox, Jan 29, 2018; https://www.vox.com/science-and-health/2018/1/29/16571684/michal-kosinski-artificial-intelligence-faces









Wang, Y., Kosinski, M. (2017): “Deep neural networks can detect sexual orientation from faces”, https://www.gsb.stanford.edu/sites/gsb/files/publication-pdf/wang_kosinski.pdf







Want to cite this post?



Hildt, E. (2018). Facial recognition, values, and the human brain. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/06/facial-recognition-values-and-human.html

Tuesday, December 5, 2017

Neuroethics, the Predictive Brain, and Hallucinating Neural Networks




By Andy Clark







Andy Clark is Professor of Logic and Metaphysics in the School of Philosophy, Psychology and Language Sciences, at Edinburgh University in Scotland. He is the author of several books including Surfing Uncertainty: Prediction, Action, and the Embodied Mind (Oxford University Press, 2016). Andy is currently PI on a 4-year ERC-funded project Expecting Ourselves: Prediction, Action, and the Construction of Conscious Experience.





In this post, I’d like to explore an emerging neurocomputational story that has implications for how we should think about ourselves and about the relations between normal and atypical forms of human experience.




Predictive Processing: From Peeps to Phrases



The approach is often known as ‘predictive processing’ and, as the name suggests, it depicts brains as multi-area, multi-level engines of prediction. Such devices (for some introductions, see Hohwy (2013), Clark (2013) (2016)) are constantly trying to self-generate the sensory stream – to re-create it ‘from the top-down’ using stored knowledge (‘prior information’) about the world. When the attempt at top-down matching fails, so-called ‘prediction errors’ result. These ‘residual errors’ flag whatever remains unexplained by the current best predictive guess and are thus excellent guides for the recruitment of new predictions and/or the refinement of old ones. A multi-level exchange involving predictions, prediction errors, and new predictions then ensues, until a kind of equilibrium is reached.





That’s pretty abstract and highly compressed. But a compelling example involves the hearing of ‘sine-wave speech.’ This is speech with much of the usual signal cut out, so that all that remains is a series of ‘peeps and whistles.’ You can hear an example by clicking on the first loudspeaker icon here. You probably won’t make much sense of what you hear. But now click on the next loudspeaker and listen to the original sentence before revisiting the sinewave replica. Now, your experiential world has altered. It sounds like odd but clearly intelligible speech. In one sense, you are now able to hallucinate the richer meaning-bearing structure despite that poor sensory signal. In another (equally valid) sense, you are now simply hearing what is there, but through a process that starts with better prior information, and so is better able to sift the interesting signal from the distracting noise. For some more demos like this, try here, or here.








Image courtesy of Pexels.

According to these ‘predictive processing’ accounts, the process is one in which you start off with inadequate prior knowledge; so, when you first hear the sine wave version, you are unable to meet the incoming signal with the right wave of top-down predictions. After hearing the sentence, your model improves and you can match the sine wave skeleton with a rich flow of top-down prediction. Once you are expert enough, you can even recruit those apt top-down flows without hearing the specific sentence first. This corresponds to having learnt a generalizable world-model that now powers top-down prediction across new instances.
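To make the exchange of predictions and prediction errors a little more concrete, here is a deliberately tiny Python sketch, with every number invented: a single latent estimate is revised, step by step, until its top-down prediction accounts for the sensory sample.

```python
# Hypothetical toy illustration of the predict -> error -> update loop described
# above. A single latent estimate mu is revised until the top-down prediction
# g(mu) roughly matches the incoming sensory sample s.

def g(mu):
    """Toy generative model: the sensory input the system predicts, given its guess mu."""
    return 2.0 * mu

def infer(s, mu=0.0, lr=0.1, steps=50):
    for _ in range(steps):
        prediction_error = s - g(mu)        # whatever the current guess fails to explain
        mu += lr * 2.0 * prediction_error   # nudge the guess to reduce that residual
    return mu

print(infer(s=4.0))   # settles near 2.0, where prediction and input agree
```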





Finally, and crucially for present purposes, the balance between top-down prediction and bottom-up sensory evidence is itself controlled and variable, so that sometimes we rely more on the sensory evidence, and sometimes more on the top-down predictions. This is the process known as the ‘precision-weighting’ of the predictions and prediction error signals (see FitzGerald et al. (2015)).
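The weighting itself can be illustrated with a one-line calculation. In this hypothetical sketch, the percept is a precision-weighted average of the prior expectation and the sensory sample; shifting the precisions shifts the percept.

```python
# Toy sketch of precision weighting: the same prior expectation and the same
# sensory sample yield different percepts depending on how much precision
# (inverse variance) each source is granted. All values are illustrative.

def percept(prior_mean, sensory_sample, prior_precision, sensory_precision):
    total = prior_precision + sensory_precision
    return (prior_precision * prior_mean + sensory_precision * sensory_sample) / total

# Senses trusted: the estimate sits close to the sample.
print(percept(0.0, 1.0, prior_precision=1.0, sensory_precision=4.0))   # 0.8
# Predictions trusted: the estimate barely moves from the prior.
print(percept(0.0, 1.0, prior_precision=4.0, sensory_precision=1.0))   # 0.2
```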





Perturbing Predictions





Or rather, that’s what happens when all works as it should. But what happens when such systems go wrong? Consider some of the options:





Over-weighting the sensory evidence.





This corresponds to assigning too much weight (precision) to the errors flagging unexplained sensory information or (what here amounts to the very same thing) assigning too little weight to top-down predictions.  Do that, and you won’t be able to detect faint patterns in a noisy environment, missing the famous Dalmatian dog hidden in the play of light and shadow, or the true sentences hidden in the peeps and pops of sine wave speech. Could it be that autism spectrum disorder involves this kind of failure, making the incoming sensory stream seem full of unexplained details and hard to master? (For works that explore this and related ideas, see Pellicano and Burr (2012), Brock (2012), Friston et al (2012).)





Under-weighting the sensory evidence








Image courtesy of Pixabay.

This corresponds to assigning too little weight to sensory prediction error, or (though from a Bayesian perspective this amounts to the same thing) assigning too much weight to top-down predictions. Do that, and you will start to hallucinate patterns that are not there, just because you strongly predict them. We can do this on demand, as when we set out to spot faces in the clouds. But if we don’t know we are upping the value of our own predictions, we may believe our own hallucinations. Indeed, just this was shown in healthy undergraduates whose task was to try to detect the faint onset of Bing Crosby singing ‘White Christmas’ in a noisy sound file. Unknown to them, the sound file was just white noise (no faint trace of White Christmas at all). Yet a significant number of students claimed to hear the onset of the song (Merckelbach and van de Ven (2001); for a follow-up study showing that the effect is increased by caffeine, see Crowe et al. (2011)).
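As a toy illustration of that over-weighting (not a model of the actual experiment), the snippet below feeds a detector nothing but white noise; the more weight it gives its prior expectation of hearing the song, the more often it reports hearing it. The blending rule and threshold are invented for illustration.

```python
# Hypothetical sketch: there is no signal in the data, only noise, but a detector
# that leans heavily on its prior expectation starts "hearing" the song anyway.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=1000)   # no trace of White Christmas in here

def report_rate(observations, prior_weight, threshold=0.9):
    # Evidence blends a (baseless) prior expectation with the noisy observation.
    evidence = prior_weight * 1.0 + (1 - prior_weight) * observations
    return float((evidence > threshold).mean())   # fraction of "I hear it" reports

print(report_rate(noise, prior_weight=0.2))    # occasional false alarms
print(report_rate(noise, prior_weight=0.95))   # reports the song most of the time
```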





More Complex Disturbances





Fletcher and Frith (2009) use the Bayesian/Predictive Processing apparatus to account for the emergence of delusions and hallucinations (the so-called ‘positive symptoms’) in schizophrenia. The basic idea is that both these symptoms might flow from a single underlying cause: falsely generated and highly-weighted (high-precision) waves of prediction error. The high weighting assigned to these falsely generated error signals renders them functionally potent, positioning them to drive the system towards increasingly bizarre hypotheses so as to accommodate them. Once such hypotheses take hold, new low-level sensory stimulation may be interpreted falsely. From the emerging ‘predictive brain’ perspective, this is no stranger than prior expectations making pure white noise sound like White Christmas.








A hallucinating multi-layer neural network looks at the University of Sussex campus (work by Suzuki et al. (2017); image reproduced by permission).

Our experiential worlds, all this suggests, are a kind of shifting mosaic in which top-down predictions meet sensory evidence. This is a delicate mechanism prone to environmental, physiological, and pharmacological upset. Using as a base the multi-level neural network architecture Deep Dream, Suzuki et al. (2017) created an immersive VR (Virtual Reality) environment in which subjects could experience visual effects remarkably similar to those reported using hallucinogenic drugs. Translated (as suggested by Suzuki et al.) into predictive processing terms, the networks were in effect being told strongly to predict certain kinds of object or feature in the input stream, thereby warping the processing of the raw visual information along those specific dimensions. For example, the network that generated the image shown above was (in predictive processing terms) forced chronically to predict ‘seeing dogs’ while taking input from the Sussex campus. The results were then replayed to subjects using a heads-up display and 360 degree immersive VR. Here’s a video clip of what the viewers experienced.
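The mechanism behind Deep Dream can be sketched in a few lines: rather than adjusting the network to fit the image, one adjusts the image so that a chosen feature responds more strongly. The network below is small and untrained, so this only shows the optimisation loop, not the Hallucination Machine itself; all sizes and the chosen channel are arbitrary assumptions.

```python
# Minimal sketch of the gradient-ascent trick behind Deep Dream: repeatedly adjust
# the *input image* so that a chosen feature ("dog detector") fires more strongly.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

image = torch.rand(1, 3, 64, 64, requires_grad=True)   # stand-in for a campus photo
target_channel = 7                                      # pretend this unit "predicts dogs"

optimizer = torch.optim.Adam([image], lr=0.05)
for _ in range(100):
    optimizer.zero_grad()
    activation = net(image)[0, target_channel].mean()
    (-activation).backward()      # gradient *ascent* on the chosen feature
    optimizer.step()
# `image` now contains whatever patterns make that unit fire hardest -- with a
# trained network, this is what sprinkles dog-like forms across the footage.
```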





Predictive processing accounts link directly to psychopharmacological models and speculations. Corlett et al (2009) (2010) relate the chemical mechanisms associated with a variety of psychoses to specific impairments in the precision-weighted top-down/bottom–up balancing act: impairments echoed, the same authors note, by the action of different psychotomimetic drugs.





Implications for Neuroethics





All this has implications both for the nature and practice of neuroscience and for the social and political frameworks in which we live and work.








Image courtesy of Flickr.

Predictive perception is endemically hostage to good training data. So immersion in statistically unrepresentative worlds will yield real-seeming but distortive percepts. Barrett and Wormwood, in a high-profile New York Times piece, suggest that skewed predictions may play a role in some police shootings of unarmed black men. In the right context, visual evidence that ought to lead us to perceive a handheld cell-phone in a dark alley is trumped by top-down predictions that instead deliver a percept as of a handgun. Skewed environments build bad perceivers (not just bad reasoners or actors).





Above all, we should get used to a simple but transformative fact – the idea of raw sensory experience is radically mistaken. Where we might sometimes think we are seeing or smelling or tasting what’s simply ‘given in the signal,’ we are instead seeing, tasting, or smelling only what’s there relative to an expectation. This picture of the roots of experience is the topic of our on-going ERC-funded project Expecting Ourselves. We are all, in this limited sense, hallucinating all the time. When others hallucinate or fall prey to delusions, they are not doing anything radically different from the neurotypical case.





* This post was prepared thanks to support from the European Research Council (XSPECT - DLV-692739). Thanks to Anil Seth and Keisuke Suzuki for letting me use their work on the Hallucination Machine, and to David Carmel, Frank Schumann and the X-SPECT team for helpful comments on an earlier version.







References






Barrett, L.F. and Wormwood, J (2015) When a Gun is Not a Gun, New York Times, April 17







Brock, J (2012) Alternative Bayesian accounts of autistic perception: comment on Pellicano and Burr Trends in Cognitive Sciences, Volume 16, Issue 12, 573-574 doi:10.1016/j.tics.2012.10.005







Clark, A (2013) Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science Behavioral and Brain Sciences 36: 3:  p. 181-204









Clark, A (2016) Surfing Uncertainty: Prediction, Action, and the Embodied Mind (Oxford University Press, NY)









Corlett PR, Frith CD, and Fletcher PC (2009) From drugs to deprivation: a Bayesian framework for understanding models of psychosis. Psychopharmacology (Berl) 206:4: p.515-30









Corlett PR, Taylor JR, Wang XJ, Fletcher PC, and Krystal JH (2010) Toward a neurobiology of delusions. Progress In Neurobiology. 92: 3 p.345-369









Crowe, S., Barot, J., Caldow, S., D’Aspromonte, J., Dell’Orso, J., Di Clemente, A., Hanson, K., Kellett, M., Makhlota, S., McIvor, B., McKenzie, L., Norman, R., Thiru, A., Twyerould, M., and Sapega, S. (2011) The effect of caffeine and stress on auditory hallucinations in a non-clinical sample. Personality and Individual Differences 50:5: 626-630









Feldman H and Friston K (2010) Attention, uncertainty, and free-energy. Frontiers in Human Neuroscience 2: 4 article 215 (doi: 10.3389/fnhum.2010.00215)









FitzGerald, T. H. B., Dolan, R. J., & Friston, K. (2015). Dopamine, reward learning, and active inference. Frontiers in Computational Neuroscience, 9, 136. http://doi.org/10.3389/fncom.2015.00136









Fletcher, P and Frith, C (2009) Perceiving is believing: a Bayesian approach to explaining the positive symptoms of schizophrenia. Nature Reviews: Neuroscience 10: 48-58









Friston K. (2005). A theory of cortical responses. Philos Trans R Soc Lond B Biol Sci 360(1456): 815-36.









Friston, K., Lawson, R. & Frith, C.D. (2013). On hyperpriors and hypopriors: Comment on Pellicano and Burr. Trends in Cognitive Sciences, 17(1), 1.









Happé, F (2013) Embedded Figures Test (EFT) Encyclopedia of Autism Spectrum Disorders pp 1077-1078









Hohwy, J (2013) The Predictive Mind (Oxford University press, NY)









Merckelbach, H. & van de Ven, V. (2001). Another White Christmas: fantasy proneness and reports of 'hallucinatory experiences' in undergraduate students. Journal of Behaviour Therapy and Experimental Psychiatry, 32, 137-144.









Pellicano E., Burr D (2012) When the world becomes too real: A Bayesian explanation of autistic perception. Trends in Cognitive Sciences. 2012; 16:504–510.  doi: 10.1016/j.tics.2012.08.009









Suzuki, K., Roseboom, W., Schwartzman, D., and Seth, A. (2017) A Deep-Dream Virtual Reality Platform for Studying Altered Perceptual Phenomenology Scientific Reports 7, Article number: 15982 doi:10.1038/s41598-017-16316-2






Want to cite this post?



Clark, A. (2017). Neuroethics, the Predictive Brain, and Hallucinating Neural Networks. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/12/neuroethics-predictive-brain-and.html

Tuesday, March 29, 2016

AlphaGo and Google DeepMind: (Un)Settling the Score between Human and Artificial Intelligence

By Katie L. Strong, PhD 



In a quiet room in a London office building, artificial intelligence history was made last October as reigning European Champion Fan Hui played Go, a strategy-based game he had played countless times before. This particular match was different from the others though – not only was Fan Hui losing, but he was losing against a machine.





The machine was a novel artificial intelligence system named AlphaGo developed by Google DeepMind. DeepMind, which was acquired by Google in 2014 for an alleged $617 million (their largest European acquisition to date), is a company focused on developing machines that are capable of learning new tasks for themselves. DeepMind is more interested in artificial “general” intelligence, or AI machines that are adaptive to the task at hand and can accomplish new goals with little or no preprogramming. DeepMind programs essentially have a kind of short-term working memory that allows them to manipulate and adapt information to make decisions. This is in contrast to AI that may be very adept at a specific job, but cannot translate these skills to a different task without human intervention. For the researchers at DeepMind, the perfect platform to test these types of sophisticated AI: computer and board games. 











Courtesy of Flickr user Alexandre Keledjian


DeepMind had set their sights high with Go; since IBM’s chess-playing Deep Blue beat Garry Kasparov in 1997, Go has been considered the holy grail of artificial intelligence, and many experts had predicted that humans would remain undefeated for at least another 10 years. Go is a relatively straightforward game with few rules, but the number of possibilities on the board makes for complex, interesting play that requires long-term planning; on the typical 19x19 grid, according to the DeepMind website, there are more legal game positions “than there are atoms in the universe.” Players take turns strategically placing stones (black for the first player, white for the second) on the grid intersections in an effort to form territories. Passing is an alternative to taking a turn, and the game ultimately ends when both players have passed due to the lack of unmarked territory. Often, though, towards the end of the game, one player will resign rather than play to the very end.





In a Nature paper published in January of this year, researchers at DeepMind reported the development of an AI agent that could beat other Go computer games with a winning rate of 99.8%. Buried in the text, in a single paragraph of the Results section, the authors also briefly describe the epic match between AlphaGo and Fan Hui, which ultimately resulted in a 5 to 0 win for artificial intelligence.







With that significant win in hand, DeepMind took a much bolder approach in announcing AlphaGo’s complexity, and invited Lee Sedol, the top Go player in the world for the last decade, to compete in a five-match tournament the week of March 9th – 15th. Instead of a private match at DeepMind’s headquarters, this contest was live-streamed to the world through YouTube and came with a $1 million prize. Despite the defeat of Fan Hui and the backing of Google, Lee Sedol was still fairly confident in his skills and said in a statement in late February, “I have heard that Google DeepMind’s AI is surprisingly strong and getting stronger, but I am confident that I can win at least this time.”





Three and a half hours into the first match on March 9th though, Lee Sedol resigned, or forfeited, the match. He resigned the second and third matches as well. According to Lee Sedol during a press conference following the third game, he felt he underestimated the program during game one, made mistakes in game two, and was under extreme pressure in game three.





However, in a win for humanity, Lee Sedol won the fourth game. Interestingly, the first 11 moves of the fourth game were exactly the same as those of the second game, and perhaps Lee Sedol was able to capitalize on what he learned from the previous three. According to the English-language commentator Michael Redmond, Move 78 (a move by Lee Sedol) elicited a miscalculation from AlphaGo and the game was essentially over from that point. In both of these games, Lee Sedol played second (the white stones), and he stated in the press conference after the fourth game that AlphaGo is weaker when the machine goes first.








Cofounder of DeepMind Demis Hassabis

Whether or not AlphaGo is actually weaker when it plays first is difficult to know, since Lee Sedol may be the only person who can attest to this. During the press conference after the fourth game, DeepMind cofounder Demis Hassabis stated that Lee Sedol’s win was valuable to the algorithm and that the researchers would take AlphaGo back to the UK to study what had happened, so this weakness could be confirmed (and presumably fixed). One important point of Go play that may have influenced the outcome, though, is that AlphaGo will play moves to maximize its chances of winning, irrespective of how those moves influence the margin of victory. Whether or not this is a weakness is probably up for debate as well, but in this sense AlphaGo is not playing like a professional human player. Go has a long history of being respected for its elegance and simplicity, but AlphaGo is not concerned with the sophistication or complexity of the game; it just wants to win. 





Lee Sedol requested and was granted the opportunity to play black (the first move) in the fifth and final match-up, even though the rules stated that the colors would be randomly assigned. “I really do hope I can win with black,” Lee Sedol said after winning game four, “because winning with black is much more valuable.” The fifth match lasted a grueling five hours, but eventually Lee Sedol did resign. After almost a week of play, the championship concluded with a 4-1 score for artificial intelligence.





When AlphaGo played Fan Hui in October 2015, the agent beat a professional 2-dan player, but Lee Sedol ranks higher than Fan Hui as a 9-dan professional player. (Those who have mastered the game of Go are ranked on a scale known as dan, which begins with 1-dan and continues to 9-dan). To put this into perspective, Lee Sedol was a 2-dan professional player in 1998, and it wasn’t until 2003 that he reached 9-dan status. Playing at the professional level of 9-dan from 2-dan took Lee Sedol five years, but AlphaGo was able to climb this ladder in only five months. DeepMind was able to build an artificial intelligence agent with these capabilities by utilizing two important concepts, deep neural networks and reinforcement learning. Typical AI agents of the past deployed tree searching to review possible outcomes, but this brute force approach where AI considers the effect of every possible move on the outcome of the game is not feasible in Go. In Go, the first black stone played could lead to hundreds of potential moves by white, which in turn could lead to hundreds of potential moves by black. Humans have been able to master Go without mentally running through every possible play during each turn and without mentally finishing the game after every move by an opponent. Humans rely on imagination and intuition to master complex skills, and AlphaGo is actually designed to mimic these very complex cognitive functions.








Courtesy of Flickr user Little Book

Deep neural networks are loosely based on how neural connections in our brains work, and neural networks have been utilized for years to optimize our searches in Google and to increase the performance of voice recognition in smartphones. Analogous to synaptic plasticity, where synaptic strength increases or decreases over a lifetime, computer neural networks change and strengthen when presented with many examples. In this type of processing, neural networks are organized into layers, and each layer is responsible for constructing only a single piece of information. For example, in facial recognition software, the first layer of the network may only pick up on pixels and the second layer will only be able to reconstruct simple shapes, while a more sophisticated layer may be able to recognize difficult shapes (e.g., eyes and mouths). These layers continue to become more complex until the software can recognize faces.
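A toy PyTorch sketch of that layered picture, in which each stage builds slightly more abstract features than the last, looks like this; the layer sizes and the face/not-face output are illustrative assumptions, not any production face-recognition system.

```python
# Each convolutional stage works on the output of the previous one, so features
# grow from pixel-level edges toward face-level decisions.
import torch
import torch.nn as nn

face_net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),    # early layer: pixel-level edges
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),   # middle layer: simple shapes
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # deeper layer: parts such as eyes and mouths
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2))                            # final decision: face vs. not a face

print(face_net(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 2])
```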





AlphaGo has two neural networks: a policy network to select the next move, and a value network to predict the winner of the game. AlphaGo uses the Go board as input and processes it through 12 layers of neural networks to determine the best move. To train the neural networks, researchers used 30 million moves from games played on the KGS Go server, and this alone led to an agent that could predict the human move 57% of the time. The goal was not to play at the level of humans, though; the goal was to beat humans, and to do that researchers utilized reinforcement learning, in which AlphaGo was split in two and then played thousands of games against itself. With this, AlphaGo was able to win at a rate of 99.8% against commercial Go programs.





These neural networks mean that AlphaGo doesn’t search through every possible position to determine the best move before it makes a play and it doesn’t simulate entire games to help make a choice either. Instead, AlphaGo only considers a few potential moves when confronted with a decision and considers only the more immediate consequences of these potential moves. Even though chess has many fewer possible legal moves than Go, AlphaGo evaluated thousands of times fewer positions than Deep Blue did in 1997. AlphaGo is just more human-like in that it makes these choices intelligently and precisely. According to AlphaGo developer David Silver in this video, “the search process itself is not based on brute force. It’s based on something more akin to imagination.”
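Here is an illustrative sketch, not DeepMind's code, of the division of labor described above: a small policy network proposes a handful of candidate moves, and a value network ranks the resulting positions, in place of brute-force search. The flattened board encoding, network sizes, and the crude "place a stone" rule are all toy assumptions (AlphaGo's real search is Monte Carlo tree search guided by these networks).

```python
# Toy policy/value split: consider only a few policy-suggested moves and rank
# them by the value network's estimated chance of winning.
import torch
import torch.nn as nn

BOARD = 19 * 19   # 361 intersections, flattened into a vector

policy_net = nn.Sequential(nn.Linear(BOARD, 128), nn.ReLU(), nn.Linear(128, BOARD))
value_net = nn.Sequential(nn.Linear(BOARD, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

def choose_move(board, k=5):
    """Look at only the policy network's top-k suggestions, ranked by the value network."""
    move_probs = torch.softmax(policy_net(board), dim=-1)
    candidates = torch.topk(move_probs, k=k, dim=-1).indices[0]
    best_move, best_value = None, -1.0
    for move in candidates:
        next_board = board.clone()
        next_board[0, move] = 1.0                 # crude stand-in for placing a stone
        win_prob = value_net(next_board).item()   # estimated chance of winning from here
        if win_prob > best_value:
            best_move, best_value = int(move), win_prob
    return best_move, best_value

empty_board = torch.zeros(1, BOARD)
print(choose_move(empty_board))   # (an intersection index, its estimated win probability)
```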





This computing power is not reserved strictly for games; DeepMind’s website declares that it would like to “solve intelligence” and “use it to make the world a better place.” Games are just the beginning, but deep neural networks may be able to model disease states, pandemics, or climate change and teach us to think differently about the world’s toughest problems. (DeepMind Health was announced on February 24th of this year.) Many of the moves that AlphaGo made in the beginning of the matches baffled Go professionals because they seemed like mistakes, but AlphaGo ultimately won. Were these really mistakes that AlphaGo was able to fix later, or were these moves just beyond our current comprehension? How many potential Go moves have never before been considered or played out in a game?





If AlphaGo’s choices of moves could surprise Go professionals and even the masterminds behind AlphaGo, should we fear that AlphaGo is an early version of a machine that could spontaneously evolve into a conscious AI? Today, we probably have very little to be concerned about. Although the technology behind AlphaGo could be applied to many other games, AlphaGo’s learning progress was hardly casual as it took millions of games of training. However, how will we know when we do need to worry? Games have provided us with a convenient benchmark to measure the progress of AI, from backgammon in 1979 to the recent Go match, but if Go was a final frontier for AI, where do we go from here?







Measuring emerging consciousness in AI agents that simulate the human brain will be challenging, according to a paper by Kathinka Evers and Yadin Dudai of the Human Brain Project. We can use a Turing Test, although the authors note that it seems highly plausible that an intelligent AI could pass the Turing Test without having consciousness. We could also try to detect in silico signatures similar to our brain signatures that denote consciousness, but we are at a loss for what those signals may be and how well they actually represent human consciousness. If consciousness is more than just well-defined organization and requires biological entities, then computers will never be conscious in the same sense that we are and instead will exhibit only an artificial consciousness. Furthermore, thought leaders on the integrated information theory (IIT) Giulio Tononi and Christof Koch have argued in this paper that a simulation of consciousness is not the same as consciousness, and “IIT implies that digital computers, even if their behaviour were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.”





Regardless of how we debate machine consciousness, neural networks that mimic human learning are being utilized in most major companies that dominate our society, including Facebook, Google, and Microsoft. We will probably continue to see deep reinforcement learning as developed by DeepMind to improve voice recognition, translations, YouTube, and image searching. Deep reinforcement learning could also be used to power self-driving cars, train robots, and as Hassabis envisions in the future, develop scientist AIs that work alongside humans. Without a well-defined metric for machine intelligence and consciousness, time will tell which of these milestones marks the next great achievement in AI, how we measure its significance, and whether this event warrants anxiety. The mysterious ethics board that Hassabis negotiated with Google is probably a reflection of the company’s awareness of the ambiguous state of future AI research.








As uncertain and even scary as the future may seem, though, it is important to remember that AlphaGo lost one of the matches, and that loss matters. Prior to the tournament, AlphaGo had played millions and millions of Go games, many more games than Lee Sedol could ever play in a lifetime. AlphaGo never got tired, it never got intimidated by Lee Sedol’s 18 international titles, and it never indulged in self-doubt. AlphaGo’s ignorance of the stakes of the games worked in its favor; Lee Sedol admitted he was under too much pressure during the third match.





For all of these advantages, though, AlphaGo couldn’t adapt quickly or learn fast enough from Lee Sedol to make a difference in how it played. For AlphaGo to get better, it must play millions of games, not just a couple. Lee Sedol was able to play the first three matches, learn from AlphaGo, and exploit what he thought was a weakness. He thought AlphaGo played weaker when it played black, and he took advantage of this by playing a move that many consider brilliant and unexpected. AlphaGo challenged Lee Sedol and then brought out the best in him. And, when it comes to the future, the outcome of the fourth match raises the question: how can AI bring out the best in us?








Want to cite this post?





Strong, K.L. (2016). AlphaGo and Google DeepMind: (Un)Settling the Score between Human and Artificial Intelligence. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/03/alphago-and-google-deepmind-unsettling.html