

Tuesday, December 19, 2017

The Neuroethics Blog Series on Black Mirror: White Christmas



By Yunmiao Wang





Miao is a second-year graduate student in the Neuroscience Program at Emory University. She has watched Black Mirror since it first came out and has always been interested in the topics of neuroethics.





Humans in the 21st century have an intimate relationship with technology. Much of our lives are spent being informed and entertained by screens. Technological advancements in science and medicine have helped and healed in ways we previously couldn’t dream of. But what unanticipated consequences may be lurking behind our rapid expansion into new technological territory? This question is continually being explored in the British sci-fi TV series Black Mirror, which provides a glimpse into the not-so-distant future and warns us to be mindful of how we treat our technology and how it can affect us in return. This piece is the final installment of a series of posts that discuss ethical issues surrounding neuro-technologies featured in the show, and will compare how similar technologies are impacting us in the real world. 







SPOILER ALERT: The following contains plot spoilers for the Netflix television series Black Mirror.




Plot Summary





“White Christmas” begins with a man, Joe Potter, waking up at a small, isolated outpost in the snowy wilderness. “I Wish It Could Be Christmas Everyday” plays in the background. Joe walks into the kitchen and finds Matt Trent cooking for Christmas. Matt, who seems bored with the mundane lifestyle of the outpost, asks Joe how he ended up there—a conversation they have never had in their five years together at the outpost. Joe becomes defensive and is reluctant to share his past, and he asks Matt the same question. To encourage Joe to open up, Matt shares a few stories about himself.




Matt first tells a story in which he is a dating coach who teaches socially awkward men like Harry how to seduce women. A remote technology called EYE-LINK enables Matt, along with eight other students, to watch through Harry’s eyes and coach him as he approaches women. In this fashion, Harry meets a woman named Jennifer at a corporate Christmas party. Through a tragic misunderstanding, Jennifer kills both Harry and herself, believing that both of them are troubled by voices in their heads. Matt and the rest of the students watching through EYE-LINK panic as they watch Harry die, and they try to destroy any evidence that they were ever involved with Harry.







Image courtesy of Flickr user

Lindsay Silveira.

In order to win Joe’s trust, Matt goes on to share another story about someone he once worked with: Greta, who lives in a spacious, futuristic house and seems particular about every detail of her life. A week prior, Greta had an implant placed in her head that copies her thoughts and memories. The implant is later surgically retrieved, and its contents are stored in an egg-shaped device called a “cookie.” Matt’s job is to train the cookie, which is essentially a copy of Greta’s mind, to accept “her” position and to serve Greta day and night. Initially, the cookie is deeply confused about “her” situation, not knowing “she” is not the real Greta. Matt gives the cookie a simulated body in Greta’s form and places “her” inside a vast empty space containing only a control desk. To convince the cookie to do housekeeping for the real Greta, Matt alters the cookie’s perception of time and makes “her” experience total isolation and boredom until “she” finally gives in. In the present day, Joe shows his disdain for the cookie technology and criticizes it as barbaric.




Joe finally shares what brought him to the outpost, beginning his story by saying that his girlfriend’s father never liked him. Joe and Beth were in a serious relationship until his drinking problem slowly pushed Beth away. On a double date with their friends Tim and Gita, Beth seems upset, which drives Joe to drink more. After the dinner, a drunken Joe discovers that Beth is pregnant and congratulates her. Instead of being happy, Beth expresses her unwillingness to keep the baby, which angers Joe. After a heated argument, Beth blocks Joe through a technology called Z-EYE and leaves him. Blocked by Z-EYE, Joe can only see a blurry grey silhouette of Beth and cannot hear her. He spends months looking for her and writing apology letters without receiving any response. Joe also finds out that Beth has kept the baby, but he is not allowed to see the child because of the Z-EYE block. One day he sees Beth’s image in the news, which implies that she has died. Saddened by the news, but knowing the block is lifted upon Beth’s death, Joe is determined to meet his child for the first time. He waits outside Beth’s father’s cabin during Christmastime with a gift for the child. To his surprise, the child he has been longing to see is of Asian heritage, which neither he nor Beth is. Joe soon realizes that Beth had been having an affair with their friend Tim. In shock, he follows the child inside and confronts Beth’s father. Out of anger, Joe kills Beth’s father with the snow globe he brought as a gift and runs away in a panic, leaving the little girl alone on a snowy day.




In the present, Matt asks Joe if he knows what happened to the kid. Joe finally breaks down and confesses that he is responsible for the deaths of both Beth’s father and the child. Matt disappears soon after the confession, leaving Joe to realize that the outpost is the same cabin where Beth’s father and daughter died. It turns out that everything so far has taken place inside a cookie of Joe, designed to extract his confession. Matt helped the officers with Joe’s case in order to regain his own freedom, as he was implicated in Harry’s death. Even though Matt is released from the police station, he is blocked from everyone through Z-EYE and will be unable to interact with anyone in reality. Back in the police station, as an officer leaves work for Christmas, he sets the time perception of Joe’s cookie to 1,000 years per minute, leaving the copy of Joe wandering the cabin as “I Wish It Could Be Christmas Everyday” plays endlessly in the background.





Current Technology







Google Glass Enterprise Edition; image courtesy of

Wikimedia Commons.

“White Christmas” presents three fictional technologies: EYE-LINK, Z-EYE, and the cookie. The episode reflects privacy issues we already face in the real world and, moreover, explores the concept of selfhood and the boundaries of our relationship with advanced AI.




The EYE-LINK that allows Harry to livestream his view to multiple people and the Z-EYE that blocks Matt from the rest of the world are closer to reality than fiction. Google Glass, despite the failure of its first version, made its second attempt and returned this year as the Glass Enterprise Edition [1]. Given the privacy controversy and the criticism of its predecessor’s wearability, the newer version has switched gears to become an augmented-reality tool for enterprise. For example, according to a report by Steven Levy, the Glass has been employed by an agricultural equipment manufacturer to provide workers with detailed instructions on the assembly line, which has dramatically increased yield while maintaining quality [1]. However, this pivot to partnering with industrial companies does not necessarily mean the end of smart glasses for general consumers. If anything, it might be the beginning of an evolution for smart glasses.




While Google Glass is not an implanted device, visual prosthetics implanted into the visual system are no longer a dream. There has been success in restoring near-normal eyesight in blind mice [2] and in human trials of vision rehabilitation through implants [3]. It may be just a matter of time before we see the birth of technology similar to EYE-LINK. After all, many people nowadays are used to sharing their lives on social media, in real time, through their phones. If built-in sensory devices that augment our perception become reality, blocking others through signal manipulation would not be much of a challenge either.




Compared with EYE-LINK and Z-EYE, the cookie technology from the episode seems far more implausible given our current understanding of neuroscience. The roots of consciousness and the mind remain a mystery, despite how much we now know about the nervous system. Yet while we are decades away from copying our own minds, current developments in AI are still startling. AlphaGo has been making the news over the past few years by defeating top professional Go players from around the world. While Deep Blue, another AI system, defeated world chess champion Garry Kasparov in 1997, defeating humans at Go is a much harder problem. Go, a classic abstract strategy board game that dates back roughly 3,000 years, has been viewed as the most challenging classical game for AI to win because of its enormous number of possible board configurations [4]. Given this massive space of possibilities, traditional AI methods that exhaustively enumerate positions with a search tree do not apply to Go. Previous generations of AlphaGo were trained on the games of numerous amateur Go players, combined with advanced search trees and deep neural networks. The recent win by AlphaGo Zero is so striking because it learned to master the game without any human knowledge [5]: the newer version learns by playing against itself, with much higher efficiency. The triumph of AlphaGo represents more than winning a board game; it marks the conquering of longstanding challenges in machine learning. The advanced algorithm could be a step toward solving more complicated learning tasks, such as emotion recognition and social learning.
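The scale problem behind exhaustive search can be made concrete with a little arithmetic. The branching factors below are commonly cited rough averages (about 35 legal moves per turn in chess versus about 250 in Go), not exact figures:

```python
# Rough, commonly cited average branching factors (legal moves per turn).
CHESS_BRANCHING = 35
GO_BRANCHING = 250

# Positions a naive search tree would have to consider a few moves ahead.
for depth in (2, 4, 8):
    chess = CHESS_BRANCHING ** depth
    go = GO_BRANCHING ** depth
    print(f"depth {depth}: chess ~{chess:.1e} positions, Go ~{go:.1e}")
```

Eight moves ahead, the Go tree is already millions of times larger than the chess tree, which is why brute-force enumeration of positions is hopeless for Go.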







AlphaGo competing in a Go game; image courtesy of Flickr

user lewcpe.

As Google DeepMind continues to advance learning algorithms, a new company, Neuralink, founded by SpaceX and Tesla CEO Elon Musk, has drawn a lot of attention for its audacious goal of combining human and artificial intelligence. Musk is greatly concerned by AI’s potential threat to humanity and proposes that Neuralink could be a way to prevent such a threat from materializing. Indeed, the brain-machine interface (BMI) is no longer a novel concept. Scientists have developed deep brain stimulation that benefits people who suffer from Parkinson’s disease, epilepsy, and many other neurological disorders [6]. In addition, people with paralysis are able to control artificial limbs through brain-machine interfaces [7]. BMI shows great promise for improving quality of life. However, what Musk proposes is to augment the healthy human brain and amplify its power by connecting it to artificial intelligence. While it is tempting to acquire a “super power” such as photographic memory through a BMI, great power comes at a great price: the interface would very likely require invasive implantation of hundreds of electrodes in the brain. Predicting the potential side effects will also be extremely challenging, as so much remains unknown about our brains. Are people going to be willing to take enormous, unknown risks for the possibility of having photographic memory?




Despite the impressive progress scientists have made in the field of machine learning and artificial intelligence, we are still far away from anything like the cookie that would be able to copy a person’s consciousness and manipulate it to our advantage.





Ethical Consideration




After Matt explains how he coerces Cookie Greta into working for the real Greta, Joe feels empathy toward the cookie and calls the technology slavery, denouncing it as barbaric. Matt argues that since Cookie Greta is only made of code, she is not real and, hence, the practice is not barbaric. The disagreement between the two raises a fundamental question: is the copy of a person’s mind merely lines of code? If not, should these mind-copies have rights as we do? Similar discussions can be found in this previous post on the blog.








“White Christmas” also brings up the question of how we perceive our own minds and the minds of others. Why do some people believe that the cookie is nothing but a simulation running on a device? Many people seem to believe that there is a hierarchy of mind among different species. Daniel M. Wegner and Kurt Gray describe in their book, The Mind Club, a “mind survey” they conducted online. This self-report survey aimed to evaluate people’s perception of minds by asking them to compare thirteen potential minds on different mental abilities (see figure 1) (7). Based on 2,499 responses, they found that people rate the mental abilities of the same mind differently. The authors group these mental abilities into two factors: experience and agency. Experience represents one’s ability to feel things such as pain, joy, and anger; agency is the set of mental abilities with which one thinks and performs tasks rather than senses and feels. For example, on average, participants rated themselves high in both experience and agency, whereas they ranked a robot relatively high in agency but very low in experience. Even though this two-dimensional mapping is a rather coarse way to quantify our perceptions of minds, it shows that humans, whether consciously or subconsciously, rank the minds of others (including humans, animals, robots, and even god(s)) based on their abilities to think and to feel.




Let’s employ the concepts of agency and experience to understand why people do not think AI, including the cookie, has consciousness. One might agree that the cookie has a high level of intelligence (in other words, agency), thanks to the power of algorithms in a futuristic world, but find it difficult to imagine that the code has feelings too. Matt gives Cookie Greta a physical body to help “her” cope with “her” distress. While this may be a filming tactic to help the audience visualize the cookie, the embodiment also seems to provide Cookie Greta with an outlet to feel, sense, and better understand “her” own existence. Moreover, Matt has to alter Cookie Greta’s perception of time and leave “her” in prolonged solitude to force “her” into compliance, exploiting “her” fear of boredom. The fact that Matt cannot simply adjust the code to make the cookie obedient but must manipulate “her” through “her” fear, which is an emotion, suggests that the cookie has the ability to feel and experience. Similarly, Matt exploits Cookie Joe’s empathy and guilt in order to extract his confession. Even if one argues that these seemingly human emotions are nothing but simulation, how can we be certain that a simulated mind does not experience these feelings? If cookies are able to feel the same way we do, forcing Joe’s cookie to listen to the same Christmas carol for millions of years in isolation would be an utterly brutal and unfair punishment.




If we assume that the cookie indeed has some form of consciousness, the next question is: should cookies bear the consequences of their originals’ actions? It is clear that both Cookie Greta and Cookie Joe have the same memories and ways of thinking as their real selves (the term “real” is used loosely here to differentiate a cookie from its original, not to imply that the former is unreal). Based on the confession, Joe is indeed responsible for the deaths of two people. However, should the copy of his mind be held responsible for his crime? Do we view the copy as an extension of him, or do we see the cookie as an independent individual? Similarly, if Neuralink succeeds in creating a hybrid of human brain and AI, how do we define the identity of an individual, and who should be responsible for its wrongdoing?






Conclusion







Darling's robot dinosaur; image courtesy of

Wikimedia Commons.

If you disagree with how Matt and the officers treat advanced AI in “White Christmas,” you might find some comfort in the studies of human-robot interaction conducted by Dr. Kate Darling of the MIT Media Lab (8, 9). In an informal experiment, human subjects were first given robot dinosaurs and asked to play with them. After building an emotional connection with the robots for about an hour, the participants were instructed to torture and destroy them with various tools the experimenters provided. All of the volunteers refused to follow the command. Dr. Darling, an expert in robot ethics and an advocate for the legal protection of robots, explains that even though people are aware that the robots are not actually alive, they naturally project their emotions onto the robot dinosaurs. If people can feel empathy toward a life-like robot, are most of us really capable of watching the suffering of a humanoid AI, even one without consciousness? As Immanuel Kant said, “he who is cruel to animals becomes hard also in his dealings with men. We can judge the heart of a man by his treatment of animals.”





References





1. Levy, Steven (2017, July 18). Google Glass 2.0 is starting a startling second act. Retrieved from https://www.wired.com/story/google-glass-2-is-here/






2. Nirenberg, S., & Pandarinath, C. (2012). Retinal prosthetic strategy with the capacity to restore normal vision. Proc Natl Acad Sci USA, 109(37), 15012-15017. doi:10.1073/pnas.1207035109






3. Lewis, P. M., Ackland, H. M., Lowery, A. J., & Rosenfeld, J. V. (2015). Restoration of vision in blind individuals using bionic devices: a review with a focus on cortical visual prostheses. Brain res, 1595, 51-73. doi:10.1016/j.brainres.2014.11.020






4. The story of AlphaGo so far. Retrieved from https://deepmind.com/research/alphago/






5. Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., … Hassabis, D. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354-359. doi:10.1038/nature24270






6. Lyons, M. K. (2011). Deep brain stimulation: current and future clinical applications. Mayo Clin Proc, 86(7), 662-672. doi:10.4065/mcp.2011.0045






7. Wegner, D. M., & Gray, K. J. (2017). The mind club: who thinks, what feels, and why it matters. New York, NY: Penguin Books






8. Can robots teach us what it means to be human? (2017, July 10). Retrieved from https://www.npr.org/2017/07/10/536424647/can-robots-teach-us-what-it-means-to-be-human






9. Darling, K. (2012). Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behavior toward robotic objects. Robot Law, Calo, Froomkin, Kerr ed., Edward Elgar 2016; We robot Conference. Available at https://ssrn.com/abstract=2044797 or http://dx.doi.org/10.2139/ssrn.2044797





Want to cite this post?



Wang, Y. (2017). The Neuroethics Blog Series on Black Mirror: White Christmas. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2017/12/the-neuroethics-blog-series-on-black.html







Tuesday, March 29, 2016

AlphaGo and Google DeepMind: (Un)Settling the Score between Human and Artificial Intelligence

By Katie L. Strong, PhD 



In a quiet room in a London office building, artificial intelligence history was made last October as reigning European Champion Fan Hui played Go, a strategy-based game he had played countless times before. This particular match was different from the others though – not only was Fan Hui losing, but he was losing against a machine.





The machine was a novel artificial intelligence system named AlphaGo, developed by Google DeepMind. DeepMind, which was acquired by Google in 2014 for an alleged $617 million (their largest European acquisition to date), is a company focused on developing machines that are capable of learning new tasks for themselves. DeepMind is most interested in artificial “general” intelligence: AI that adapts to the task at hand and can accomplish new goals with little or no preprogramming. DeepMind programs essentially have a kind of short-term working memory that allows them to manipulate and adapt information to make decisions. This is in contrast to AI that may be very adept at a specific job but cannot translate those skills to a different task without human intervention. For the researchers at DeepMind, the perfect platform for testing this type of sophisticated AI is computer and board games.











Courtesy of Flickr user Alexandre Keledjian


DeepMind had set their sights high with Go; since IBM’s chess-playing Deep Blue beat Garry Kasparov in 1997, Go has been considered the holy grail of artificial intelligence, and many experts had predicted that humans would remain undefeated for at least another 10 years. Go is a relatively straightforward game with few rules, but the number of possibilities on the board makes for complex, interesting play that requires long-term planning; on the typical 19x19 grid, according to the DeepMind website, there are more legal game positions “than there are atoms in the universe.” Players take turns strategically placing stones (black for the first player, white for the second) on the grid intersections in an effort to form territories. Passing is an alternative to taking a turn, and the game ultimately ends when both players have passed due to the lack of unmarked territory. Often, though, towards the end of the game, one player will resign rather than playing to the very end.
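The "atoms in the universe" comparison is easy to check with a back-of-the-envelope calculation: each of the 361 intersections on a 19x19 board is empty, black, or white, so 3^361 is an upper bound on board configurations (the count of strictly legal positions is smaller but of a similar magnitude), while the observable universe is commonly estimated to contain about 10^80 atoms:

```python
# Upper bound on 19x19 Go board configurations: each of the 361
# intersections is empty, black, or white.
board_configurations = 3 ** (19 * 19)

# Commonly cited estimate of atoms in the observable universe.
atoms_in_universe = 10 ** 80

print(board_configurations > atoms_in_universe)   # True
print(len(str(board_configurations)))             # 173 digits, i.e. ~10^172
```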





In a Nature paper published in January of this year, researchers at DeepMind reported the development of an AI agent that could beat other computer Go programs with a winning rate of 99.8%. Buried in the text, in a single paragraph of the Results section, the authors also briefly describe the epic match between AlphaGo and Fan Hui, which ultimately resulted in a 5-0 win for artificial intelligence.







With that significant win in hand, DeepMind took a much bolder approach to showcasing AlphaGo’s strength and invited Lee Sedol, the top Go player in the world for the last decade, to compete in a five-match tournament the week of March 9th–15th. Instead of a private match at DeepMind’s headquarters, this contest was live-streamed to the world through YouTube and came with a 1-million-dollar prize. Despite the defeat of Fan Hui and the backing of Google, Lee Sedol was still fairly confident in his skills and said in a statement late in February, “I have heard that Google DeepMind’s AI is surprisingly strong and getting stronger, but I am confident that I can win at least this time.”





Three and a half hours into the first match on March 9th, though, Lee Sedol resigned, or forfeited, the match. He resigned the second and third matches as well. According to Lee Sedol during a press conference following the third game, he underestimated the program in game one, made mistakes in game two, and was under extreme pressure in game three.





However, in a win for humanity, Lee Sedol won the fourth game. Interestingly, the first 11 moves of the fourth game were exactly the same as the second game, and perhaps Lee Sedol was able to capitalize on what he learned from the previous three. According to the English commentator Michael Redmond, Move 78 (a move by Lee Sedol) elicited a miscalculation from AlphaGo and the game was essentially over from that point. In both of these games, Lee Sedol played second (the white stones), and he stated in the post four-game press conference that AlphaGo is weaker when the machine goes first.








Cofounder of DeepMind Demis Hassabis

Whether or not AlphaGo is actually weaker when it plays first is difficult to know, since Lee Sedol may be the only person who can attest to this. During the press conference after the fourth game, DeepMind cofounder Demis Hassabis stated that Lee Sedol’s win was valuable to the algorithm and that the researchers would take AlphaGo back to the UK to study what had happened, so this weakness could be confirmed (and presumably fixed). One important point of AlphaGo’s play that may have influenced the outcome, though, is that it chooses moves to maximize its chance of winning, irrespective of how a move affects the margin of victory. Whether this is a weakness is probably up for debate as well, but in this sense AlphaGo does not play like a professional human player. Go has a long history of being respected for its elegance and simplicity, but AlphaGo is not concerned with the sophistication or complexity of the game – it just wants to win.





Lee Sedol requested and was granted the opportunity to play black (the first move) in the fifth and final match-up, even though the rules of the game stated that it would be randomly assigned. “I really do hope I can win with black” Lee Sedol said after winning game four, “because winning with black is much more valuable.” The fifth match lasted a grueling five hours, but eventually Lee Sedol did resign. After almost a week of play, the championship concluded with a 4-1 score for artificial intelligence.





When AlphaGo played Fan Hui in October 2015, the agent beat a professional 2-dan player, but Lee Sedol ranks higher than Fan Hui as a 9-dan professional. (Those who have mastered the game of Go are ranked on a scale known as dan, which begins at 1-dan and continues to 9-dan.) To put this into perspective, Lee Sedol became a 2-dan professional in 1998 and did not reach 9-dan status until 2003. Climbing from 2-dan to 9-dan took Lee Sedol five years; AlphaGo climbed the equivalent ladder in only five months. DeepMind built an artificial intelligence agent with these capabilities by utilizing two key concepts: deep neural networks and reinforcement learning. Typical AI agents of the past deployed tree search to review possible outcomes, but this brute-force approach, in which the AI considers the effect of every possible move on the outcome of the game, is not feasible in Go. The first black stone played could lead to hundreds of potential moves by white, which in turn could lead to hundreds of potential moves by black. Humans have mastered Go without mentally running through every possible play during each turn and without mentally finishing the game after every move by an opponent; we rely on imagination and intuition to master complex skills, and AlphaGo is actually designed to mimic these very complex cognitive functions.








Courtesy of Flickr user Little Book

Deep neural networks are loosely based on how neural connections in our brains work, and neural networks have been utilized for years to optimize our searches in Google and to improve the performance of voice recognition in smartphones. Analogous to synaptic plasticity, where synaptic strength increases or decreases over a lifetime, computer neural networks change and strengthen when presented with many examples. In this type of processing, neural networks are organized into layers, and each layer is responsible for constructing only a single piece of information. For example, in facial recognition software, the first layer of the network may only pick up on pixels and the second layer may only be able to reconstruct simple shapes, while a more sophisticated layer may be able to recognize difficult shapes (e.g., eyes and mouths). The layers continue to become more complex until the software can recognize faces.
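A minimal sketch of this layered idea, in NumPy with random (untrained) weights rather than anything resembling AlphaGo's actual architecture: each layer transforms the previous layer's output into a smaller set of more abstract features.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    # One fully connected layer with a ReLU nonlinearity; in a trained
    # network these weights would be learned from many examples.
    w = rng.standard_normal((x.shape[-1], n_out)) * 0.1
    return np.maximum(0.0, x @ w)

x = rng.standard_normal(64)   # raw, pixel-like input features
h1 = layer(x, 32)             # layer 1: simple patterns
h2 = layer(h1, 16)            # layer 2: combinations of patterns
out = layer(h2, 4)            # layer 3: high-level features
print(x.shape, h1.shape, h2.shape, out.shape)
```

The layer sizes and the three-layer depth here are arbitrary illustrations; the point is only that information flows through successive transformations, each building on the last.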





AlphaGo has two neural networks: a policy network to select the next move and a value network to predict the winner of the game. AlphaGo takes the Go board as input and processes it through 12 layers of neural networks to determine the best move. To train the neural networks, researchers used 30 million moves from games played on the KGS Go server, and this alone produced an agent that could predict the human move 57% of the time. The goal was not to play at the level of humans, though; the goal was to beat humans. To do that, the researchers used reinforcement learning, in which AlphaGo was split in two and played thousands of games against itself. With this, AlphaGo was able to win at a rate of 99.8% against commercial Go programs.
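The division of labor between the two networks, and the idea of self-play, can be caricatured in a few lines. This is a toy sketch under invented assumptions (a trivial "race to 10" game and hand-written stand-ins for the policy and value networks), not DeepMind's code; the point is only that move selection combines a policy prior with a value estimate rather than exhaustively searching the game tree, and that self-play pits two copies of the same agent against each other:

```python
# Toy game: players alternately add 1, 2, or 3 to a running total;
# whoever reaches 10 first wins.
MOVES = [1, 2, 3]

def policy_prior(move):
    # Stand-in for the policy network: a fixed preference over moves.
    return {1: 0.2, 2: 0.3, 3: 0.5}[move]

def value(state):
    # Stand-in for the value network: states nearer 10 look more winnable.
    return min(state, 10) / 10

def choose_move(state):
    # Score each candidate by its policy prior plus a one-step value
    # lookahead, instead of searching the whole game tree.
    return max(MOVES, key=lambda m: policy_prior(m) + value(state + m))

def self_play():
    # Two copies of the same agent play each other; in reinforcement
    # learning, the games generated this way become training data.
    state, player = 0, 0
    while True:
        state += choose_move(state)
        if state >= 10:
            return player       # index of the winning copy
        player = 1 - player

print(self_play())
```

In the real system both functions are deep networks trained on human games and then refined by millions of self-play games, and the lookahead is a far more sophisticated tree search; the skeleton of select-by-policy, evaluate-by-value, and learn-from-self-play is the same.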





These neural networks mean that AlphaGo doesn’t search through every possible position before it makes a play, and it doesn’t simulate entire games to make a choice either. Instead, AlphaGo considers only a handful of promising moves when confronted with a decision and weighs only their more immediate consequences. Even though chess has many fewer possible legal moves than Go, AlphaGo evaluated thousands of times fewer positions than Deep Blue did in 1997. AlphaGo is more human-like in that it makes these choices selectively and precisely. According to AlphaGo developer David Silver in this video, “the search process itself is not based on brute force. It’s based on something more akin to imagination.”





This computing power is not reserved strictly for games; DeepMind’s website declares that it would like to “solve intelligence” and “use it to make the world a better place.” Games are just the beginning: deep neural networks may be able to model disease states, pandemics, or climate change and teach us to think differently about the world’s toughest problems. (DeepMind Health was announced on February 24th of this year.) Many of the moves that AlphaGo made early in the matches baffled Go professionals because they seemed like mistakes, but AlphaGo ultimately won. Were these really mistakes that AlphaGo was able to fix later, or were these moves simply beyond our current comprehension? How many potential Go moves have never before been considered or played out in a game?





If AlphaGo’s choices could surprise Go professionals and even the masterminds behind AlphaGo, should we fear that AlphaGo is an early version of a machine that could spontaneously evolve into a conscious AI? Today, we probably have very little to be concerned about. Although the technology behind AlphaGo could be applied to many other games, AlphaGo’s learning was hardly effortless, requiring millions of games of training. However, how will we know when we do need to worry? Games have provided us with a convenient benchmark for measuring the progress of AI, from backgammon in 1979 to the recent Go match, but if Go was a final frontier for AI, where do we go from here?







Measuring emerging consciousness in AI agents that simulate the human brain will be challenging, according to a paper by Kathinka Evers and Yadin Dudai of the Human Brain Project. We could use a Turing test, although the authors note that it seems highly plausible that an intelligent AI could pass the Turing test without having consciousness. We could also try to detect in silico signatures similar to the brain signatures that denote consciousness in us, but we are at a loss for what those signals may be and how well they actually represent human consciousness. If consciousness is more than well-defined organization and requires biological substrates, then computers will never be conscious in the same sense that we are and will instead exhibit only an artificial consciousness. Furthermore, thought leaders on integrated information theory (IIT), Giulio Tononi and Christof Koch, have argued in this paper that a simulation of consciousness is not the same as consciousness, and that “IIT implies that digital computers, even if their behaviour were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.”





Regardless of how we debate machine consciousness, neural networks that mimic human learning are being utilized by most of the major companies that dominate our society, including Facebook, Google, and Microsoft. We will probably continue to see deep reinforcement learning, as developed by DeepMind, improve voice recognition, translation, YouTube, and image search. Deep reinforcement learning could also be used to power self-driving cars, train robots, and, as Hassabis envisions for the future, develop scientist AIs that work alongside humans. Without a well-defined metric for machine intelligence and consciousness, time will tell which of these milestones marks the next great achievement in AI, how we measure its significance, and whether that event warrants anxiety. The mysterious ethics board that Hassabis negotiated with Google is probably a reflection of the company’s awareness of the ambiguous state of future AI research.








As uncertain and even scary as the future may seem, it is important to remember that AlphaGo lost one of the matches, and that loss matters. Prior to the match, AlphaGo had played millions and millions of Go games, many more than Lee Sedol could ever play in a lifetime. AlphaGo never got tired, never got intimidated by Lee Sedol’s 18 international titles, and never succumbed to self-doubt. AlphaGo’s ignorance of the stakes worked in its favor; Lee Sedol admitted he was under too much pressure during the third match.





For all of these advantages, though, AlphaGo couldn’t adapt quickly or learn fast enough from Lee Sedol to change how it played. For AlphaGo to get better, it must play millions of games – not just a couple. Lee Sedol was able to play the first three matches, learn from AlphaGo, and exploit what he thought was a weakness. He thought AlphaGo played weaker when it played black, and he took advantage of this by playing a move that many consider brilliant and unexpected. AlphaGo challenged Lee Sedol and then brought out the best in him. And when it comes to the future, the outcome of the fourth match raises the question: how can AI bring out the best in us?








Want to cite this post?





Strong, K.L. (2016). AlphaGo and Google DeepMind: (Un)Settling the Score between Human and Artificial Intelligence. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2016/03/alphago-and-google-deepmind-unsettling.html