
Tuesday, September 6, 2016

The Age of Artificial Intelligence: Beneficial Advancement or Disastrous Uncertainty

By Sang Xayasouk






This post was written as part of a class assignment by students who took a neuroethics course with Dr. Rommelfanger in Paris in the summer of 2016.




Sang Xayasouk is entering her fourth year at Emory University, where she is majoring in Neuroscience and Behavioral Biology and minoring in Comparative Literature. She is a member of the Gamma Phi Beta Sorority and a research assistant in Dr. Sampath Prahalad's lab, which focuses on juvenile idiopathic arthritis and its risk factors. She plans to pursue a career in medicine after taking a gap year to gain experience in healthcare and research.




On the 30th of June, the students of Emory University attended the Neuroethics Network session held at the Institut du Cerveau et de la Moelle Épinière (ICM). The first lecture was given by John Harris, a bioethicist and professor emeritus at the University of Manchester. His talk, entitled How Smart Do We Want Machines to Be?, addressed several points concerning artificial intelligence (AI). An audience member raised a question about self-driving cars, one that Dr. Rommelfanger had also posed during a group exercise in class: "You are given a self-driving car and you have only two options: hitting and killing the ten pedestrians ahead, or swerving into a wall and killing only yourself. What should the car be programmed to do, and who would be at fault, possibly the programmer?" Harris said we should not have self-driving cars at all. But why should the concept be dismissed entirely?



At first, I did not know what to make of the availability of this technology. After speaking with Dr. Guillaume Palacios, a former theoretical physics researcher now working with AI technology, I began to understand its prospective benefits and consequences (Palacios 2016). I asked Dr. Palacios the same question, and this was his reply:


If you ask me [in the Tesla accident that cost the life of the driver] whether the programmers of the self-driving car are to blame, I would say it's a difficult question, both from the ethical and the technical point of view. One thing is for sure: the AI technology that powers the car's capability to drive itself is no ordinary program. I do not know the exact details because, to the best of my knowledge, the Tesla code is not openly available, but it certainly relies on machine learning techniques such as 'deep learning'. Machine learning algorithms are not defined as a bunch of instructions telling the program to do this or that if this or that happens; the self-driving car program cannot and should not be programmed that way, because that would be too complicated and inefficient. Rather, it is designed to optimize a certain goal (or cost) according to data it has learned from. In the case we are interested in, the self-driving car is programmed to optimize its driving ability according to data from the past drives that Tesla has collected.


It's impossible to think of all possible events that could happen during a drive. If a rabbit crosses while an elephant is standing in the middle of the road and my exit is in 2 miles, then what? For this very reason, instruction-based algorithms should be discarded and AI techniques favored. The car should be encoded to optimize a function: the best way to drive is to go from point A to point B while respecting the rules of traffic, minimizing errors, and avoiding accidents. But it is a learning process, and it cannot be 100% accurate. With time, the AI technology will learn through a network of other 'experiences', minimize error, and become better. There is still much room for improvement, but the Tesla project and other similar projects (e.g., the Google self-driving car) are a step in the right direction. In conclusion, I would say that the amazing promise of AI technology is its ability to learn not from a single driver's experience but from hundreds of thousands, if not millions. We can therefore predict that soon AI drivers will be far better and safer drivers than humans are (Palacios 2016).
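
The contrast Palacios draws can be made concrete with a toy sketch. The hypothetical Python example below (the 1-D lane-keeping setup, the numbers, and every name in it are invented for illustration; as he notes, Tesla's actual code is not public) pits a hand-written set of instructions against a policy that learns a single steering gain by minimizing a cost over logged drives.

# A toy contrast: instruction-based control vs. a learned, cost-optimizing
# policy. Purely hypothetical; not Tesla's (non-public) system.

def rule_based_steer(offset):
    """Instruction-based control: enumerate cases by hand ('do this if that')."""
    if offset > 0.5:
        return -1.0   # far right of lane center: steer hard left
    elif offset > 0.1:
        return -0.3   # slightly right: steer gently left
    elif offset < -0.5:
        return 1.0    # far left: steer hard right
    elif offset < -0.1:
        return 0.3    # slightly left: steer gently right
    return 0.0        # near center: hold course

def learn_steer_gain(logged_drives, steps=500, lr=0.5):
    """Learned control: fit a gain k (steer = -k * offset) by gradient
    descent on the mean squared difference from demonstrated steering."""
    k = 0.0
    for _ in range(steps):
        grad = 0.0
        for offset, demonstrated_steer in logged_drives:
            error = (-k * offset) - demonstrated_steer
            grad += 2.0 * error * (-offset)   # derivative of error**2 w.r.t. k
        k -= lr * grad / len(logged_drives)
    return k

# Hypothetical logged pairs of (lane offset, steering a good driver chose).
data = [(0.4, -0.8), (-0.3, 0.6), (0.1, -0.2), (-0.05, 0.1)]
k = learn_steer_gain(data)
print(f"learned gain k = {k:.2f}; steer at offset 0.4 -> {-k * 0.4:.2f}")

A real system replaces the single gain with a deep network of millions of parameters and the four logged pairs with fleet-scale data, but the principle is the one Palacios describes: the behavior is never enumerated case by case; it is whatever minimizes the cost on the data it learned from.
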






Self-driving cars. Image courtesy of sfgate.com


As with any innovative concept, it takes time to work out the inconsistencies. In an MIT Technology Review article, Will Knight discussed the significance of a Tesla crash that occurred over a month ago. The crash brings back to reality those who considered this technology an immaculate development and an instantaneous driving solution; these people ought to bear in mind that the first car introduced by Henry Ford still had its imperfections (Dodge et al. 2016), and the self-driving Tesla will be the same. Knight also quoted the automotive supplier Bosch, which aptly described the current state of automated vehicle technology: "Automated driving is coming—not overnight, but gradually." Thus, the idea of a self-driving car should not be completely eradicated; rather, it should be allowed to keep progressing and to improve on its imperfections.





Dr. Harris also stated two truths of the future: there will be no more human beings, and there will be no planet Earth. He said we need to look toward AI to find or construct another place for humans to relocate. This part of his talk raised a few concerns of my own. If humans are not to exist in the future, are we looking to replace our population with AI? Would we go as far as to integrate AI technology with human beings to extend our lives? Sergio Canavero gave a TED talk discussing the protocol for, as well as the possibilities of, head transplantation. He briefly mentioned attaining immortality by pairing an able and intelligent head with an able, younger body. To elaborate on this controversial idea: could we potentially keep our heads and replace our aged bodies with AI bodies? Human-to-human transplantation is problematic because rejection of the body is a key issue (Editor's note: though Canavero is planning to conduct a human-to-human head transplant in December 2017). An AI body resembling the original organic one could learn to behave like the body the head and mind were used to; a form of immortality could then be achieved. Certainly, there are several reasons why head transplants, whether purely human or a human/AI fusion, should not be carried out; the possible implications of head transplantation are outlined clearly by Ryan Purcell on The Neuroethics Blog. There are too many ethical concerns regarding individuality, autonomy, and cost for this to be a viable solution. Unless these complications are resolved, it would be difficult to move forward in this direction.





Overall, I was grateful to have attended a conference with such a distinguished and esteemed panel. It raised ethical concerns and topics that I had never considered before but am now more aware of. It was certainly an enlightening experience, and I hope to attend again at some point in my future career. Thank you, Dr. Karen Rommelfanger and the Neuroethics Network Conference, for giving me and the students of Emory University the opportunity to participate.




References



Dodge, Bob, Casey Dodge, John Dodge, and Horace Dodge. 2016. "Henry Ford: A Case Study of an Innovator." PDF e-book. Accessed July 20, 2016. https://www.thehenryford.org/docs/default-source/default-document-library/default-document-library/henryfordandinnovation.pdf?sfvrsn=0.



Palacios, Guillaume. 2016. "Self-driving Cars." Personal communication.





Want to cite this post?

Xayasouk, Sang. (2016). The Age of Artificial Intelligence: Beneficial Advancement or Disastrous Uncertainty. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/09/redefining-x-and-y-axes-of-cognitive.html

1 comment:

  1. It seems to me that those who wish to eradicate semi-autonomous self-guided vehicles that learn over time should first eradicate the horse.
