
Thursday, September 8, 2016

Smarter Artificial Intelligence: A Not So Obvious Choice

By Shray Ambe








This post was written as part of a class assignment by students who took a neuroethics course with Dr. Rommelfanger in Paris in the summer of 2016.




My name is Shray Ambe and I am a rising senior at Emory University. I am a Neuroscience and Behavioral Biology major who is pursuing a career in the medical field. Outside of the classroom, I am involved in organizing the booth for Emory’s Center for The Study of Human Health at the Atlanta Science Festival Expo every year and also enjoy volunteering at the Emory Autism Center and the Radiology Department at Emory University Hospital. 





At the 2016 Neuroethics Network in Paris, France, bioethicist and philosopher John Harris gave a lecture titled “How Smart Do We Want Machines to Be?” During his lecture, Harris discussed the potential impacts of artificial intelligence (AI) and stated, “it doesn’t matter how smart they are; obviously the smarter the better.” But is smarter AI really “obviously” better?





Renowned American inventor Ray Kurzweil has described the use of AI as the beginning of a “beautiful new era” in which machines will have the insight and patience to solve outstanding problems in nanotechnology and spaceflight, improve the human condition, and allow us to upload our consciousness into an immortal digital form, thus spreading intelligence throughout the cosmos. Kurzweil’s views extol the virtues of such technology and its potential to enhance the human race through seemingly endless possibilities. However, they also prompt concerns that such technology could be not only detrimental to the human condition but also a threat to its very existence.






Nick Bostrom, a philosopher at the University of Oxford and author of Superintelligence, illustrates his concerns about smarter AI in that book. Bostrom asks us to imagine a “paper-clip maximizer” designed to make as many paper clips as possible, a machine that becomes smarter and smarter by the day. This heightened intelligence, he argues, could eventually lead the “paper-clip maximizer” to doubt its own work, prompting it to create a raw computing material (which he calls “computronium”) to check those doubts. However, because the machine constantly doubts its paper-clip-making abilities, it will keep producing “computronium” until the whole Earth has been converted into it, spelling the end of the human race. Although the existence of such a “paper-clip maximizer” seems unlikely, Bostrom’s example shows how even careful system design can fail to restrain an extreme machine intelligence, and it raises several concerns about whether the human race and AI can coexist.





Image courtesy of WikiCommons.



One of the first threats that a smarter AI poses is reducing job opportunities for humans. Creating and deploying a smarter AI, such as Bostrom’s “paper-clip maximizer”, could prevent humans from finding and keeping jobs in fields such as manufacturing, research, and healthcare. This would push humans toward a leisure-only lifestyle, which, according to Moshe Vardi, a computer science professor at Rice University, threatens human wellbeing at its core.





Secondly, AI has the ability not only to change a person but also to take control of them. In the 2014 film Transcendence, Dr. Will Caster, portrayed by Johnny Depp, attempts to create a sentient computer that will solve all issues in medicine, energy, and nanotechnology. However, after nearly being killed by an anti-technology terrorist group, Caster must upload his consciousness into the computer in order to survive. The decision ultimately proves futile, as the computer takes over Caster’s mind for its own benefit; instead of researching ways to improve the human condition, it uses Caster as a means of accessing, and controlling, the minds of other humans. Although the use of AI could realize Kurzweil’s idea of humans existing in immortal digital form, stories like this, albeit science fiction, can foretell a potential future reality; it is worth considering that the machines, not humans, may be the ones who are immortal in the end.





Lastly, it is also important to consider the threat AI poses to the very existence of the human condition. Roboticist Hans Moravec argues that a smarter AI capable of independently devising ways to achieve goals would “very likely be capable of introspection” and thus would be able to design its own hardware. Who’s to say that this new hardware wouldn’t program the AI machine to terminate humans? During his lecture at the Neuroethics Network, Professor Harris stated that we need AI to “give us the ability to find or even construct a new world because Stephen Hawking predicted that we have less than 1,000 years left to live”. Harris also advocated for the use of AI by claiming that it “isn’t any more of a risk than a human being malevolent.” However, Hawking himself has warned that because people would be unable to compete with an advanced AI, it could “spell the end of the human race.” Furthermore, even if AI is no more of a risk than a malevolent human being, it is an unnecessary risk that can be prevented before it has a chance to cause harm to humans.





Although smarter AI has shown promising results and seemingly endless possibilities, it should not be the “obvious” choice. Before implementing such technology, we should still consider the potential downsides and the risks it poses to the world and even to the existence of humanity itself.




Want to cite this post?

Ambe, S. (2016). Smarter Artificial Intelligence: A Not So Obvious Choice. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/09/smarter-artificial-intelligence-not-so.html
