Tuesday, October 3, 2017

“It is sometimes a sad life, and it is a long life:” Artificial intelligence and mind uploading in World of Tomorrow


By Jonah Queen

"The world of tomorrow" was the motto of the

1939 New York World's Fair

Image courtesy of Flickr user Joe Haupt

“One day, when you are old enough, you will be impregnated with a perfect clone of yourself. You will later upload all of your memories into this healthy new body. One day, long after that, you will repeat this process all over again. Through this cloning process, Emily, you will hope to live forever.”

These are some of the first lines of dialogue spoken in the 2015 animated short film World of Tomorrow.* They introduce the technology and society that this science fiction film imagines might exist in our future. With the film's sequel released last month, I am dedicating a post on this blog to discussing the original film through a neuroethical lens.



Plot Summary (Note: the following contains spoilers for World of Tomorrow)




Those lines are spoken to a young girl named Emily by one of her clones (a “third generation Emily”) who is contacting her from 227 years in the future. The clone of Emily (whom I will refer to as Emily) explains that in the future, those who can afford it regularly have their minds uploaded into either a clone of themselves or a cube-shaped digital storage device. Emily’s descriptions of the future are mostly lost on the young Emily (who is referred to in the film as Emily Prime), but Emily continues the conversation undeterred, as if she were speaking to an adult—a dynamic that continues throughout the film.





Emily then teleports Emily Prime to her location in the future and shows her some other technologies, including “view screens,” which allow people to view others’ memories. Emily uses a view screen to share memories of some important events in her life, including the various jobs she has held, her marriage (to a man who was also a clone), and the death of her husband.





After this tour through her memories, Emily suddenly explains that the world will be hit by a large meteor in sixty days. In the hopes of surviving, many are uploading their minds into cubes and having them launched into space. Those who cannot afford mind uploading are turning to “discount time travel,” which frequently results in deadly malfunctions. Emily then explains that the reason she contacted Emily Prime was to retrieve from her a memory that she herself had forgotten: a memory of her and her mother, which she says will comfort her in her final moments. After removing the memory from Emily Prime’s brain and implanting it into her own with a raygun-like device, she transports Emily Prime back to her own time.



Ethical Issues 




In a mere seventeen minutes, this short touches on many of the issues discussed in contemporary bioethics and neuroethics, including human cloning, mind uploading, artificial intelligence, distributive justice, and technologically advanced escapist media. In this post, I will mostly focus on the ethics of mind uploading and artificial intelligence.

One possible method for creating a digital copy of a human brain. Image courtesy of Wikimedia Commons.

In the film, mind uploading is depicted as a way for people to attempt to achieve immortality by transferring their minds into either a machine or the brain of a clone of themselves. While current technology is nowhere close to achieving this goal (though some say that it could happen in our lifetimes), advances in neuroscience and computer science have led many to consider this possibility, and discussions of the ethical implications are currently underway.





One potential concern that the movie touches on is the quality of life (if it could even be considered life) that a person (or disembodied mind) is subject to after such a procedure. Emily tells Emily Prime that their grandfather had his consciousness uploaded into a cube and reads one of the messages he sent to her, which consists entirely of exclamations of horror. What he might be experiencing is not specified, but the experience of having one’s consciousness exist within a computer would likely be so different from our embodied life that it would be disorienting or even unpleasant. Since our brains are not the only parts of us involved in feeling and perception, what would it be like to exist without input from a body? If there were not sufficient stimulation, would the mind suffer the negative effects of sensory deprivation? Would this technology need to simulate the experience of having a body? Would the contents of the entire nervous system (including, for example, the enteric nervous system that innervates the gut) need to be uploaded? You might even ask what the requisite features of a nervous system would be to have a “meaningful life.” Some have raised such issues with organoids, so-called “mini-brains.”





And the cloning method does not solve these issues either. As Emily explains, the clones have some mental and physical “defects” that people are willing to overlook in their quest for immortality. She also seems tired and saddened by the length of her life (among the future’s other advances, the human lifespan has been greatly extended) as well as by the stress of carrying several lifetimes’ worth of memories. The quote in the title of this post is how she describes her existence to Emily Prime, and the sequel might explore this further, as it is subtitled The Burden of Other People's Thoughts.





One of the other issues that the film raises is the question of whether mind uploading would really be extending life or just creating a separate person (or computer program) with your personality, intellect, and memories. From your perspective (the original you), wouldn’t your consciousness end? That is how it seems to me, and some philosophers and ethicists agree. A previous post on this blog explores this idea and goes even further, raising the possibility of someone having their mind uploaded into multiple entities, creating several “copies” of themselves, which would make it even more difficult to see uploading as a simple continuation of one’s life. The technology could even be used to copy and upload a person’s consciousness to a computer or clone while they are still alive. In this sense, mind uploading can be seen as creating a new entity—either a person with a bioengineered brain or a sentient artificial intelligence (AI) based on a human brain. A recent post on this blog discusses various technologies that could be used to create a copy of someone’s mind after their death—further blurring the lines between mind uploading and AI.




The sci-fi trope of the robot apocalypse is often referenced in discussions about AI. Image courtesy of Flickr user Gisela Giardino.




World of Tomorrow also addresses AI in a different context. While the ethics of AI is currently a popular topic in tech media, much of the coverage focuses on the risks AI could pose to humans. This can be seen in the sensationalist coverage of Facebook’s recent AI experiment (though the claims that the experiment was stopped out of fear are not entirely accurate). While prominent figures in science and computing (including Stephen Hawking, Elon Musk, and Bill Gates) have expressed concerns about the threat that sufficiently advanced AI could pose, others are focused on the more immediate concerns around programming AI to make life-and-death decisions, whether for self-driving cars or autonomous weapons.





However, the issues concerning AI in World of Tomorrow are different. Emily describes how one of her jobs involved supervising robots on the moon. She programmed the solar-powered robots to fear death so that they would stay in the sun on the light side of the moon. After the operation goes out of business, Emily is relocated, but, to save money, the robots are left behind, where they continue to move across the moon’s surface and transmit depressing poetry back to earth. This (along with the previous discussion of mind uploading) presents another aspect of the AI ethics debate: if we can create an AI capable of suffering, how should we treat it? This issue is complicated by the fact that we can never truly know the subjective feelings of another entity (that is, they could be philosophical zombies). But if something can suffer, even if it is a robot or software program that we have created, it seems clear that we should treat it well (though we, unfortunately, often do not extend even that courtesy to organisms that we recognize as living and capable of feeling). Perhaps we should not create such sentient artificial beings in the first place. This is a topic in AI ethics sometimes called robot rights, with some ethicists and philosophers arguing that a conscious AI should be given the same rights as an animal or even a person, depending on its level of complexity. As sentient machines do not yet exist, this debate is mostly theoretical, and many see it as unnecessary at this time. A similar issue arises in neuroethics, however, in determining whether (and when) a collection of cultured neurons in a lab can become complex enough to feel.




World of Tomorrow presents a vision of the future that, while bleak, is still very human. The film plays off the phrase “world of tomorrow,” which has often been used to describe an optimistic and utopian vision of the future, to instead show a future where advances in technology have led to even more extreme versions of many of the same problems we have today. If we want to work towards solving these issues (without slowing technological progress), we need to learn how to use our tools wisely.



*As of publication, World of Tomorrow is available to watch on Netflix and Vimeo.





Want to cite this post?



Queen, J. (2017). “It is sometimes a sad life, and it is a long life:” Artificial intelligence and mind uploading in World of Tomorrow. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/10/it-is-sometimes-sad-life-and-it-is-long.html
