
Wednesday, September 26, 2018

Caveats in Quantifying Consciousness



This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.





By Ankita Moss








Image courtesy of Flickr user Mike MacKenzie.

As I was listening to a presentation during the 2018 Neuroethics Network Conference in Paris, a particular phrase resonated with me: we must now contemplate the existence of “the minds of those that never lived.”





Dr. John Harris, a professor at the University of Manchester, discussed both the philosophical and practical considerations of emerging artificial intelligence technologies and their relationship to human notions of the theory of mind, or the ability to interpret the mental states of both oneself and others and use this to predict behavior.





Upon hearing this phrase and relating it to theory of mind, I immediately began to question my notions of “the self” and consciousness. According to UC Berkeley philosopher Dr. Alva Noe, one manifests consciousness by building relationships with others and acting deliberately on the external environment in some capacity. In contrast, a group of Harvard scientists claims to have found the mechanistic origin of consciousness: a connection between the brainstem region responsible for arousal and regions of the brain that contribute to awareness.




Having explored theory of mind in my introductory psychology class, I assumed I would be somewhat familiar with the material presented during the talks. However, Dr. Harris offered a scenario I had never considered, despite its plausibility: how will humans convince emerging artificial minds that we, too, can act independently on the world and thus have consciousness as well? What if these artificially constructed, conscious forms of matter come to believe that they “discovered” humans, much as the Europeans decided that they “discovered” the Americas? What if history repeats itself as innovations progress?





As Dr. Harris posed these piercing, thought-provoking questions to the audience, I stopped taking notes and attempted to grapple with this potential soon-to-be reality. What once seemed far-fetched is now a plausible future that humanity will have to evaluate by both philosophical and practical measures. This evaluation must encompass profound considerations that will redefine what it means to be human.





Humans make the nuanced argument that animals may have a lower level of consciousness as an attempt to justify animal testing and cruelty. Some may say this is justified given that humans have a “higher level” of consciousness. In making this argument, we unconsciously strip animals of some of the rights that we take for granted. For example, some non-human primates are treated as models for human consciousness, while early invertebrates are labeled as having very low conscious capacity. Dr. Michio Kaku, in his book “The Future of the Mind,” defines his space-time theory of consciousness as the “process of creating a model of the world using multiple feedback loops in various parameters in order to accomplish a goal.” In this system, organisms are assigned a number indicating a low or high level of consciousness. Humans have the highest level because of our developed prefrontal cortex and our ability to construct abstract thoughts and bring them to fruition in our external environment. If consciousness is rooted in theory of mind, this “higher” and “more advanced” cognitive state is essentially defined by our ability to interact with our surroundings. This definition justifies the argument that humans have more autonomy and control over the external environment than any other living organism.
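To make the quantification concrete, here is a toy sketch in Python of the kind of level-assignment Kaku describes. The organisms, attributes, and scoring below are my own simplification for illustration; they are not Kaku’s actual formalism or a scientific instrument.

from dataclasses import dataclass

@dataclass
class Organism:
    name: str
    models_space: bool    # navigates physical surroundings (Kaku's reptile level)
    models_others: bool   # runs social feedback loops (Kaku's mammal level)
    models_future: bool   # simulates the future (Kaku's human level)

    def consciousness_level(self) -> int:
        # Level 0 means stimulus-response only; each additional kind of
        # world-modeling bumps the assigned number by one.
        return sum([self.models_space, self.models_others, self.models_future])

for o in [Organism("thermostat", False, False, False),
          Organism("reptile", True, False, False),
          Organism("wolf", True, True, False),
          Organism("human", True, True, True)]:
    print(f"{o.name}: level {o.consciousness_level()}")

Even this cartoon version makes the worry visible: whoever chooses the attributes and the scoring decides who ranks where.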








Image courtesy of Flickr user Bovee and Thill.

However, Dr. Harris’s point made me question this accepted view of consciousness. As the race to develop “more-human” artificial intelligence continues, humanity will have to define what it means to have a higher theory of mind and whether an artificially intelligent being could one day attain greater rationality and awareness than a human being. This possibility raises concerns about the very idea of quantifying consciousness. If we were to create a more rational, partly artificial being with a higher theory of mind or integrative intelligence, such as the interface proposed by Elon Musk’s Neuralink, the coin could very well be flipped, and humans would become the animals assigned the definitively “lower” number for consciousness. A perceived lesser consciousness would strip humans of control and grant more complex beings a greater power, justifiable by the very model that humanity constructed. Humanity has yet to encounter this dilemma, but it is inevitable if this technology advances through the irrational and ill-advised attempt to birth more intelligent and complex beings.





In considering the trajectory of artificial intelligence research and its consequences, we must also take into account a possible loss of control resulting from the stripping or lessening of human rights as we now know them. Turning back to the considerations offered by Dr. Harris, one must take note of how the rights of peoples have been violated over the course of history. The Europeans stripped Native Americans of basic human rights, as well as rights to the land they inhabited first. Through the lens of the Europeans, “higher level” weapons and tools granted them superior power. If history does indeed repeat itself, and if superiority and dominance are inevitable, the prospect of fully conscious artificial intelligence threatens basic human rights and autonomy. If we already justify our actions towards those with “lesser intelligence” on the grounds of our own supposed “superiority,” then the same logic should have us seriously considering the possibility that we are creating a toxic template that will inevitably put us on the losing side.





As of now, one of the most controversial feats in artificial intelligence is being undertaken by the 2045 Initiative, which claims it will create a sentient robot that can house a human personality. The initiative’s aim is to create a race of enhanced humans and, with that, reinvent the fields of ethics, psychology, science, and even metaphysics. There may come a time when humans will have to question what it truly means to be conscious. Perhaps it is not a matter of “if it will happen” but “when it will happen.”





So I ask, do emerging artificial intelligence technologies and initiatives proposing to create hybrid humans serve as the platform for the extension of humanity or as a sentence for its end?


_______________





Ankita Moss is an undergraduate student at Emory University majoring in Neuroscience and Behavioral Biology. Ankita has had a strong interest in neuroethics since high school and hopes to contribute to the field professionally in the future. Aside from neuroscience and neuroethics, she is also passionate about start-ups and entrepreneurship and founded the Catalyst biotechnology think-tank at Emory Entrepreneurship and Venture Management. Ankita hopes to one day navigate the ethical implications of neurotechnology startups and their impact on issues of identity and personhood.












Want to cite this post?



Moss, A. (2018). Caveats in Quantifying Consciousness. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/09/caveats-in-quantifying-consciousness.html



Tuesday, September 25, 2018

Artificial Emotional Intelligence



This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.





By Ruhee Patel









Image courtesy of Pexels user Mohamed Hassan.

In the race for more effective marketing strategies, an enormous step forward came with artificial emotional intelligence (emotion AI): software that can track someone’s emotions over a given period of time. Affectiva, for example, develops emotion AI that companies can use to direct marketing at consumers more precisely. Media companies and product brands can use this information to show consumers more of what they want to see, based on products that made them feel positive emotions in the past.









Emotion tracking is accomplished by recording slight changes in facial expression and movement. The technology relies on algorithms that can be trained to recognize features of specific expressions (1). Companies such as Unilever are already using Affectiva software for online focus groups to judge reactions to advertisements. Hershey is also partnering with Affectiva to develop an in-store device that prompts shoppers to smile in exchange for a treat (2). Facial emotion recognition usually works through either machine learning or a geometric feature-based approach. The machine learning approach involves extracting and selecting features to train the algorithms, then classifying new data by those features. In contrast, the geometric feature-based approach standardizes the images before facial component detection and the decision function. Some investigators have reached over 90% emotion recognition accuracy (3). Emotion AI can even measure heart rate by monitoring slight fluctuations in the color of a person’s face. Affectiva has developed software that works through web cameras in stores or, in the case of online shopping, in computers. Affectiva also created Affdex for Market Research, which provides companies with calculations based on the Affectiva database, giving them points of comparison when making marketing decisions.
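As a rough sketch of the machine-learning approach described above, the following Python snippet trains a classifier to map facial features to emotion labels. The “geometric features” here are randomly generated stand-ins for real facial-landmark measurements, and the pipeline is a generic illustration, not Affectiva’s proprietary system.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
EMOTIONS = ["joy", "sadness", "anger", "surprise"]

# Hypothetical dataset: 400 faces, each described by 10 geometric features
# (e.g., eyebrow height, mouth-corner distance); each emotion shifts the
# feature means slightly so the classes are learnable.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(100, 10))
               for i in range(len(EMOTIONS))])
y = np.repeat(np.arange(len(EMOTIONS)), 100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Train a support-vector classifier on the extracted features,
# then evaluate on held-out faces.
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print(f"held-out accuracy: {accuracy_score(y_test, pred):.2f}")
print(f"predicted emotion for first test face: {EMOTIONS[pred[0]]}")

Real systems differ mainly in scale and in the feature-extraction step, which must first detect a face and its landmarks in video before anything can be classified.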






In the future, Affectiva wants to expand into the healthcare field, as monitoring emotions has the potential to help people who are at risk for certain mental illnesses, such as depression (4). A mental health services researcher, Steven Vannoy, is developing an app using Affectiva’s software that would monitor a user’s emotions through check-ins. These check-ins would have the user describe how they are feeling about the future, who they are with, and what they are doing. This information would be used to predict a user’s short-term risk of suicide (5).









Affectiva has the largest emotion database in the world, containing over 6.5 million faces from 87 countries (6). The data comes from video recordings of people in natural environments, such as in a car, home, or office. Each person in the database opted in to have their face recorded, with the option to opt out at any time. The top three countries represented in the dataset are India, the USA, and China. The data is used to find more examples and variations of expressions for the active learning algorithms to learn from. The dataset also provides opportunities to understand how emotions are expressed differently across cultures (7).









Emotion AI brings up many ethical concerns that should be addressed before this technology is implemented further, given the personal nature of the data collected by this software. One concern derives from the way the technology works: using data on emotions to give consumers more personalized marketing could play on those emotions in a potentially harmful way. For example, advertisements that appeal to conscious or subconscious emotional responses to food can lead many to eat less healthily. Children would be particularly vulnerable to this type of marketing if they are targeted by advertisements for low-nutrition, high-fat foods. Emotion AI could therefore have negative long-run impacts on health outcomes for future generations (8).












Image courtesy of Pixabay.

Children are not the only vulnerable population that would need protection from emotion AI. Using emotions to customize advertising can also have harmful effects when products such as cigarettes are featured, especially for consumers who smoke or have smoked regularly in the past. People who are addicted to any substance and patients with psychiatric disorders also fall into this vulnerable population (9). Lawmakers, neuroscientists, ethics experts, and developers of emotion AI need to consider how these vulnerable populations can be protected, given the greater capacity for manipulation that emotion data affords marketers.









Another major ethical concern surrounding emotion AI is that the data could fall into the wrong hands. Affectiva’s products are available to developers, so it is important to regulate how far the data can travel and what developers can do with it. For example, personal data on emotions could have negative consequences if sold to insurance companies. Insurers could lower premiums for people who show more positive emotions, or even use Affectiva software to track the daily emotions of insured individuals who consent to such monitoring (10). Such a system would reward those who consent to emotion AI with lower premiums and create an unfair disadvantage for those who do not.









A third ethical concern with implementing emotion AI relates to consent. If stores install video cameras that measure customer emotions, consumers who are uncomfortable with the technology face implicit coercion: the only way to avoid being filmed is to not enter the store at all. Similar questions arise online. Would consent to be recorded be required only from the online shopper, or would friends and family near the shopper also need to consent to being filmed? Lawmakers and regulators need to draw strict boundaries to protect privacy and to ensure that emotions are captured only from people who have given informed consent.









Overall, artificial emotional intelligence has the potential to increase the efficiency of marketing strategies. The technology could even save lives by analyzing the emotions of people at risk for suicide throughout the day. However, artificial emotional intelligence should still be regulated to ensure that it is implemented in an ethical manner.


_______________
























Ruhee Patel is a fourth year undergraduate student at Emory University. She is studying Neuroscience and Behavioral Biology and hopes to pursue a career in healthcare. She is involved in neurokinesiology research under Dr. Hackney at Emory School of Medicine.











References:



1. Morsy, A. (2016). Emotional Matters: Innovative software brings emotional intelligence to our digital devices. IEEE Pulse, 7(6), 38-41.



2. Darrow, B. (2015, September 11). Computers can’t read your mind yet, but they’re getting closer. Fortune.com. Retrieved from fortune.com/2015/09/11/affectiva-emotient-startups/



3. Mehta, D., Siddiqui, M., & Javaid, A. (2018). Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality. Sensors, 18(2).



4. Jarboe, Greg. (2018, June 11). What is Artificial Emotional Intelligence & How Does Emotion AI Work? Searchenginejournal.com. Retrieved from https://www.searchenginejournal.com/what-is-artificial-emotional-intelligence/255769/



5. Affectiva. (2017, August 14). SDK on the Spot: Suicide Prevention Project with Emotion Recognition. Blog.affectiva.com. Retrieved from blog.affectiva.com/sdk-on-the-spot-suicide-prevention-project-with-emotion-recognition



6. SDK. (n.d.) Retrieved from www.affectiva.com/product/emotion-sdk/



7. Zijderveld, G. (2017, April 14). The World’s Largest Emotion Database: 5.3 Million Faces and Counting. blog.affectiva.com. Retrieved from http://blog.affectiva.com/the-worlds-largest-emotion-database-5.3-million-faces-and-counting



8. Jain, A. (2010). Temptations in cyberspace: New battlefields in childhood obesity. Health Affairs., 29(3), 425-429.



9. Ulman, Y., Cakar, T., & Yildiz, G. (2015). Ethical Issues in Neuromarketing: "I Consume, Therefore I am!". Science and Engineering Ethics, 21(5), 1271-1284.



10. Libarikaian, A., Javanmardian, K., McElHaney, & Majumder, A. (2017, May). Harnessing the potential of data in insurance. Mckinsey.com. Retrieved from https://www.mckinsey.com/industries/financial-services/our-insights/harnessing-the-potential-of-data-in-insurance





Want to cite this post?



Patel, R. (2018). Artificial Emotional Intelligence. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/09/artificial-emotional-intelligence.html

Tuesday, September 18, 2018

NeuroTechX and Future Considerations for Neurotechnology




By Maria Marano








Image courtesy of Wikimedia Commons

As society has seen bursts of activity in the technology sector, we are continually discovering ways to harness these new advances. While some fields, such as artificial intelligence and machine learning, have already been massively exploited by industry, neurotechnology hasn’t fully broken into consumer markets (1). Generally, neurotechnology refers to any technology associated with the brain. Consumer products that use brain activity to modulate behaviour, such as the Muse headband, do exist, but neurotech remains predominantly in the hands of researchers and the science community (1). As neurotechnological advances begin to take centre stage and become a part of the 21st-century zeitgeist, the ethical implications of these technologies must be fully appreciated and addressed (2). One area of concern is the fear that limited access to neurotech will create further discrepancies between regions with regards to quality of life.





Ultimately, developers expect neurotechnology to be utilized for clinical purposes (1). Brain-computer interface products are currently used to enhance meditation (3) and attention (4), but the primary goal is to use neurotechnology for therapeutics (5). Prominent present-day examples of neurotech in the healthcare industry include virtual reality therapies for stroke rehabilitation (6), phobias (7), and autism spectrum disorders (8). Unfortunately, as more of these fields develop and prosper, the improvements to health and wellness will be restricted to those who can access neurotechnologies. Furthermore, as Elon Musk, Bryan Johnson, and others work towards “cognitive enhancement” devices, “enhanced” individuals could easily gain an advantage over the unenhanced (9). As is so often the case, these advantages will likely be conferred onto those in developed nations and, more specifically, wealthier individuals first. This distribution has the potential to exacerbate existing socio-economic differences; therefore, it is essential that as a society we democratically monitor progress and dictate guidelines as the neurotechnology industry advances.




Access is not the only problem to address with regards to neurotechnology. As with many research endeavours, another consideration for neurotechnology is variability in research regulations between countries. Ethics approval is a fundamental component of both animal and human research, but the stringency of these regulations varies widely across regions (10). As neurotechnology progresses, some countries may be tempted to further relax ethical standards in an attempt to gain an advantage for publication purposes. Alternatively, some regions may allow procedures that are deemed illegal in other areas to capitalize on the financial prospects of these technologies. Elizabeth Parrish, the CEO of BioViva, travelled to Colombia to self-administer a gene therapy procedure that was not FDA approved in America (11), highlighting the potential consequences of divergent regulations. Clearly, it will be important to maintain an open dialogue across economic strata and cultural lines regarding the future of neurotechnology to ensure everyone can properly benefit from these resources.





With these interests in mind, we at NeuroTechX are focused on looking ahead and creating equal opportunity in the neurotechnology domain. NeuroTechX (NTX) is a global organization leading the advancement of neurotechnology. Our community was started by a group of graduate students and neurotechnology enthusiasts in Montreal, Canada. After its founders spent countless hours searching YouTube for instructional videos, NTX was born out of a desire to exchange knowledge and resources related to neurotechnology. Our organization is founded on a “bottom-up” approach focused on identifying and fulfilling needs as requested by our members. In particular, we seek to provide members with access to key resources and learning opportunities. As a non-profit community, we are unencumbered by commercial interests; this freedom allows NTX to remain unbiased as we combat emerging pseudoscience in our industry. Currently, NTX consists of 17 chapters spanning four continents and 15 affiliated student clubs. Our community includes neuroscientists, software developers, engineers, and industry professionals; we hope to foster and develop the future leaders of neurotechnology, in both academia and industry, who represent all socioeconomic strata and cultural backgrounds. By making training and resources available on a global scale, NTX hopes to mitigate access inequality. With these lofty goals in mind, NTX has identified specific areas to direct its efforts: a worldwide, interconnected channel of chapters, and an educational platform.




Chapters:








NeuroTechX consists of seventeen chapters spanning four continents, with fifteen affiliated student clubs.

We have developed an international channel for communication and collaboration through our chapters initiative. Chapters are local community groups, which act as points of entry for anyone with an interest or expertise in neurotechnology. While each chapter is unique, they share a common goal of facilitating the exchange of ideas and collaboration on projects. Chapters can take advantage of the tools and resources available across our worldwide community through our online platforms. NTX leverages social media as an information exchange platform to unite and inform our members across the globe. We believe that by establishing global connections, we can mitigate disparity in the neurotechnology sector and help ensure everyone has an equal opportunity to benefit from these advances.





Notably, our international chapters have helped us identify ways to facilitate access in diverse cultural and educational settings. As a tangible example of how access determines output, activity in the NeuroTechLIMA chapter has been limited by a lack of resources in Spanish. Having been made aware of such predicaments, we are now working to develop materials in a wider array of languages to help mitigate language barriers. NTX feels the international connectome is a critical aspect of our mandate because it ensures remote areas are exposed to the same opportunities as more urban regions. Moreover, the inclusion of neurotech enthusiasts from outside technology “hubs” (12) guarantees more diversity in ethical discussions surrounding neurotechnology. We feel all voices should be included and participate in deciding the governing rules surrounding the development and implementation of future neurotechnology.





Beyond Academia:







Image courtesy of Wikimedia Commons

Until recently, information on neurotechnology was highly scarce and primarily restricted to academic institutions. Recognizing that these difficulties are likely exacerbated in areas with limited access to educational channels or a neurotech industry, NTX developed an education platform. NeuroTechEDU is an online, open-source learning portal consisting of informative blog articles, lessons on all facets of brain-computer interfaces (BCI), teaching webinars on YouTube, and an extensive resource list. By creating a consolidated portal of relevant neurotech information, we hope to accelerate the learning phase for new members and provide useful resources to our more experienced constituents. By keeping NeuroTechEDU online, free, and open-source, we ensure neurotechnology is not limited to those affiliated with prestigious institutions. We hope our educational resources will give access to interested parties all over the world. We expect significant neurotechnology advances in the coming years, and our goal is to train the future professionals of this industry. We hope that by providing uninhibited access to educational resources, we can ensure everyone is equally prepared for the upcoming neurotechnology boom.





Moving forward:





NeuroTechX is dedicated to reducing disparities in access to knowledge and resources by connecting individuals to neurotech opportunities and establishing a global connectome for information exchange. We hope that by breaking down access barriers, we can contribute to a harmonious future where everyone can benefit from neurotechnology.

We invite you to check out our monthly Neurotech Newsletter to stay informed on recent advances in the field. Also, join us on the NTX Slack workspace to engage in further neurotech discussions.






_______________






Maria Marano recently completed her MSc at the University of Toronto and is currently working as a scientific writer highlighting neurotech advances in the healthcare industry. She is also the Vice-President of the Toronto chapter of NeuroTechX. Maria is interested in knowledge translation and effectively harnessing research discoveries to benefit the broader public.

























References:















2 Yuste, R. et al., 2017. Four ethical priorities for neurotechnology and AI. Nature. 551:159-163. doi:10.1038/551159a





































































Want to cite this post?




Marano, M. (2018). NeuroTechX and Future Considerations for Neurotechnology. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/09/neurotechx-and-future-considerations.html

Wednesday, September 12, 2018

Ethical Implications of the Neurotechnology Touchpoints




This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.





By Janet Guo





The TouchPoint Solution™ (commonly referred to as TouchPoints™) is a noninvasive neurotechnology device that one can wear on any part of the body. The device can be accessorized (a detachable wristband is available in each pack), so it can be worn like a watch or placed inside a pocket or sock. The founders of TouchPoints™, Dr. Amy Serin and entrepreneur Vicki Mayo, consider it a neuroscientific device because of the bilateral alternating stimulation tactile (BLAST) action it allows the user’s brain to undergo. Because the device can affect people in good health as well as those who suffer from a neurologic disease, it is classifiable as a neuroscientific device by the broad definition proposed by Illes & Lombera (2009). The website even claims that the device can help the brain “create new neural pathways that are net positive” and has a “lasting effect on your brain”. In many TouchPoints™ advertisements (many of which can be found on the official TouchPoints™ YouTube channel), TouchPoints™ devices are claimed to relieve stress by 70% in under 30 seconds.





TouchPoints™ was originally launched in late 2015 with the mission of bringing relief to people who have high levels of stress and anxiety. The technology has been through several developments, and newer, cheaper versions have been released since its initial launch. Its presence in news media has been increasing: Huffington Post (Wolfson, 2017), Mashable (Mashable staff, 2017), and The Washington Times (Szadkowski, 2017) are only a few of the popular news and opinion websites that have published pieces about TouchPoints™. An investigation of the science and ethics behind this device is warranted as sales grow with the company’s international expansion, an expansion highlighted by founder Dr. Amy Serin at the 2017 SharpBrains Virtual Summit: Brain Health & Enhancement in the Digital Age (SharpBrains, 2018).




TouchPoints™ appears to be a helpful service to those who deal with extreme levels of stress, a group estimated at around one-third of Americans according to a 2007 nationwide survey (American Psychological Association, 2007). In addition, TouchPoints™ features testimonials from users with Parkinson’s disease, autism, and ADHD, along with insomnia and general stress, suggesting that its use has helped alleviate some of their symptoms (although The TouchPoint Solution™ makes no clinical claims).








Image courtesy of Pixabay

Neuropsychologist Dr. Serin and Vicki Mayo claim that TouchPoints™ works by BLAST technology that alters the body’s fight-or-flight (F3) response to stress or anxiety, allowing you to think clearly. Many of the academic studies cited on the TouchPoints™ site found that bilateral stimulation of the prefrontal cortex and inferior temporal lobe does in fact enhance comfortable feelings and is a recognized and accepted form of psychotherapy for posttraumatic stress disorder, although the exact mechanism is still unclear (Amano & Toichi, 2016; Nieuwenhuis et al., 2013; Servan-Schreiber et al., 2006). Additionally, Nieuwenhuis et al. (2013) found that bilateral stimulation may have some effect on memory, which is not mentioned on the TouchPoints™ site. None of the peer-reviewed literature cited on the TouchPoints™ site extended the study population to those with Parkinson’s disease, autism, or ADHD. While there are many case studies, including testimonials from past TouchPoints™ users, and in-house studies (which have undergone no clear peer-review process), these results must be viewed with skepticism, since there is clear motivation and obvious bias in the studies Dr. Serin conducts to help market her own product.





Besides critically evaluating the credibility of the science behind TouchPoints™, it is important to highlight some of the major ethical concerns this technology brings to the surface. Firstly, if this product really has the power to alleviate stress by up to 90%, then those who are able to afford it (starting at $160 for the Basic Value Bundle and reaching up to $1,999 for the TouchPoints™ original institutional pack) would have an advantage over those of a lower socioeconomic class. Unequal access has the potential to widen the current achievement gap between children from families of upper and lower socioeconomic classes in terms of academic success (Smeding et al., 2013). Many TouchPoints™ advertisements promote the product by noting that you can wear it within your socks or on your wrists underneath a long-sleeve shirt, which would make the device invisible to the naked eye. If TouchPoints™ is able to considerably lower your stress and anxiety levels, would this be fair to those who are asked to perform under the same high-stress conditions without this technology? Does the use of this device need to be publicized in the workplace or school setting? For example, among students taking a challenging exam, the one who has TouchPoints™ devices stored in his socks or on his wrists will feel less stress and perhaps may perform better. This calls into question whether those in authority (such as teachers or employers) should hold the responsibility of deciding whether to allow these devices in testing rooms.








Image courtesy of Pixabay

Secondly, as this device becomes cheaper and more widespread with time, a reliance on this technology may become embedded within society and could even change how we view fundamental features of the human condition. The human body has a typical stress response, which involves the release of many stress hormones (Koelsch et al., 2016); however, if this device comes to be used in response to any sort of stress or anxiety, the body may adapt in such a way that it becomes reliant on TouchPoints™. From a neuroethical standpoint, TouchPoints™ has the potential to change a fundamental feature of the human condition mediated through a brain mechanism, although none of the studies on the TouchPoints™ website have addressed this directly. If future research shows the exact mechanism TouchPoints™ induces within the brain, and it differs greatly from the typical stress and anxiety response, then TouchPoints™ has the potential to alter some of the fundamental conditions of what it means to be human. In line with this, other neuroethical questions also arise: Would reliance on this technology change who you are and how you view yourself? Would diminishing your experience of stress keep you from positively adapting to that stress? Could it change the person you are becoming?





Thirdly, questions of privacy and safety must be considered. The “Legal Conditions” raise many of the privacy concerns one may have regarding data gathered by TouchPoints™, such as: Will one’s data be shared with third parties? What actions will TouchPoints™ take to ensure that personal data, like the number of times TouchPoints™ is used per day and personal identification (age, gender, etc.), is protected? Do these protective measures change from country to country? The current conditions are quite vague, and exactly what neurodata is being collected and how it is being analyzed is not explicitly stated. Neurodata is particularly vulnerable because it can contain information that makes you identifiable, and alteration or misuse of the data could lead to changes of your fundamental identity. The TouchPoints™ privacy policy states that any materials accessed by third parties through Touchpoint Properties are available at the user’s own risk. TouchPoints™ also emphasizes that it assumes no responsibility for the “timeliness, deletion, mis-delivery or failure to store any content…, user communications or personalization settings”. This creates room for much concern about what information can be deleted when requested and when it will be deleted. TouchPoints™ can be purchased in many different countries (SharpBrains, 2018), and some services are exclusive to certain countries. TouchPoints™ emphasizes that “those who access or use the Touchpoint Properties from other countries do so at their own volition and are responsible for compliance with the local law”, placing much of the responsibility for legal compliance on the user. These aspects of the privacy policy are very general and mention nothing about real-time access to a user’s data, which should be the minimum privacy standard (Purcell & Rommelfanger, 2017).





These ethical concerns raise the following questions: Is TouchPoints™ truly a “neuro” technology? How credible and reliable is the research used to support its claims? How necessary or useful is this device for those without any sort of medical condition? Is it fair for some people to have access and others not? How might this affect daily human life in the future? TouchPoints™ forces us to consider the broader implications of how much neurotechnology has the potential to impact our daily lives and how easily the commercialization of medicalized products can muddy the science behind the product.






_______________






Janet Guo is a junior on the pre-medical track at Emory University majoring in Neuroscience and Behavioral Biology (NBB) with a minor in Chinese Studies. Her first professional exposure to neuroethics in an academic setting was in her NBB471 (Neuroethics) course during her study abroad experience in Paris, France, and she has remained extremely interested in the topic ever since.










References:





Amano, T., & Toichi, M. (2016). The Role of Alternating Bilateral Stimulation in Establishing Positive Cognition in EMDR Therapy: A Multi-Channel Near-Infrared Spectroscopy Study. Plos One,11(10). doi:10.1371/journal.pone.0162735





American Psychological Association. (2007). Stress a Major Health Problem in The U.S., Warns APA. Retrieved from http://www.apa.org/news/press/releases/2007/10/stress.aspx 





Illes, J., & Lombera, S. (2009). Identifiable Neuro Ethics Challenges to the Banking of Neuro Data. Minnesota Journal of Law, Science & Technology, 10(1), 71-94. Retrieved July 2, 2018 





Koelsch, S., Boehlig, A., Hohenadel, M., Nitsche, I., Bauer, K., & Sack, U. (2016). The impact of acute stress on hormones and cytokines and how their recovery is affected by music-evoked positive mood. Scientific Reports,6(1). doi:10.1038/srep23008 





Mashable staff. (2017, November 27). Project Entrepreneur expands accelerator program to help more women entrepreneurs build scalable companies. Retrieved from https://mashable.com/2017/11/27/project-entrepreneur-3/?europe=true#JkIxMAFH0iqT 





Nieuwenhuis, S., Elzinga, B. M., Ras, P. H., Berends, F., Duijs, P., Samara, Z., & Slagter, H. A. (2013). Bilateral saccadic eye movements and tactile stimulation, but not auditory stimulation, enhance memory retrieval. Brain and Cognition,81(1), 52-56. doi:10.1016/j.bandc.2012.10.003 





Purcell, R. H., & Rommelfanger, K. S. (2017). Biometric Tracking From Professional Athletes to Consumers. The American Journal of Bioethics, 17(1), 72-74. doi:10.1080/15265161.2016.1251652 





Servan-Schreiber, D., Schooler, J., Dew, M. A., Carter, C., & Bartone, P. (2006). Eye Movement Desensitization and Reprocessing for Posttraumatic Stress Disorder: A Pilot Blinded, Randomized Study of Stimulation Type. Psychotherapy and Psychosomatics,75(5), 290-297. doi:10.1159/000093950 





Smeding, A., Darnon, C., Souchal, C., Toczek-Capelle, M., & Butera, F. (2013). Reducing the Socio-Economic Status Achievement Gap at University by Promoting Mastery-Oriented Assessment. PLoS ONE, 8(8). doi:10.1371/journal.pone.0071678 





Szadkowski, J. (2017, November 22). Holiday Gift Guide 2017 - Best in home and health gadgets. Retrieved from https://www.washingtontimes.com/news/2017/nov/22/holiday-gift-guide-2017-best-home-health-gadgets/ 





Wolfson, R. (2017, September 23). A Wearable Stress Relief Device Helps You Relax, While Giving Back To Those In Need. Retrieved from https://www.huffingtonpost.com/entry/a-wearable-stress-relief-device-helps-you-relax-while_us_59c47092e4b0b7022a64696d










Want to cite this post?




Guo, J. (2018). Ethical Implications of the Neurotechnology Touchpoints. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/09/ethical-implications-of-neurotechnology.html

Tuesday, September 11, 2018

The future of an AI artist




This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.





By Coco Cao








An example of AI-generated art

Image courtesy of Flickr

An article published in New Scientist entitled “Artificially intelligent painters invent new styles of art” captured my attention. The article discussed a recent study by Elgammal et al. (2017), who developed a computational creative system for art generation, the Creative Adversarial Network, based on the Generative Adversarial Network (GAN), which has the ability to generate novel images simulating a given distribution. A GAN consists of two neural networks, a generator and a discriminator. To create the Creative Adversarial Network (CAN), the scientists trained the discriminator on 75,753 artworks spanning 25 art styles, so that it learned to categorize artworks by style. The discriminator also learned to distinguish between art and non-art pieces based on the learned styles. The discriminator’s feedback then corrects the generator, the network that produces the art. The generator eventually learns to produce pieces that are indistinguishable from human-produced art. While ensuring the art is still aesthetically pleasing, CAN generates abstract art that enhances creativity by maximizing deviation from established art styles.
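For readers unfamiliar with the architecture, here is a minimal adversarial training loop in Python (PyTorch) on toy two-dimensional data standing in for images. It shows only the core GAN idea the study builds on; the toy data and network sizes are my own choices, and the CAN of Elgammal et al. additionally uses a style-classification head with a style-ambiguity loss, omitted here.

import torch
import torch.nn as nn

DIM, NOISE = 2, 8
G = nn.Sequential(nn.Linear(NOISE, 32), nn.ReLU(), nn.Linear(32, DIM))
D = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, DIM) * 0.5 + 3.0   # "real" data: a Gaussian blob near (3, 3)
    fake = G(torch.randn(64, NOISE))          # generator maps noise to candidate samples

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(5, NOISE)))  # generated samples should drift toward the real blob

In CAN, the generator’s loss combines this “fool the discriminator” term with a term rewarding images whose style the discriminator cannot pin down, which is what pushes the output away from established styles.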




After learning about AI’s ability to be “creative” and generate art, I was frightened. Unlike AI’s application in a scientific context, AI in an art context elicits human feelings. Is it possible that AI artists could replace human artists in the future? Considering the importance of an author’s creativity and originality in art, the critical ethical concern regards the individualism of AI artists. Can we consider the art pieces generated by AI as expressions of themselves?




In 1738, Jacques de Vaucanson, a French watchmaker, built a life-size mechanical duck with feathers. The mechanical duck could eat, move, and flap its wings, and audiences refused to believe it was artificial, since it exhibited all of the behaviors of a real duck (Glimcher, 2004). If the mechanical duck were a real duck, as audiences believed, then the behaviors must have been generated by the duck itself. But that was not the case: the duck’s behavior had been programmed by Vaucanson, making Vaucanson the true generator of its behavior. In the case of an AI artist, Elgammal et al. (2017) stated in their paper that CAN involved human creative products in the learning process while the creative process was carried out by AI. The AI creative process was therefore hugely dependent on pre-exposure to artwork created by humans. Does this mean the artwork was originally created by AI? I don’t think so. During my recent visit to the “Artists and Robots” exhibition, which was held in the Grand Palais in Paris and presented the applications and implications of AI in art, I noticed that some robot paintings bore the human artist/programmer’s signatures. In this case, the credit for the AI-generated art pieces still belonged to the human artists and programmers. Therefore, AI is not considered an individual in an art context, at least for now.








Vaucanson's duck along with two of his other creations

Image courtesy of Wikipedia

Moreover, if AI is programmed to think like humans, does that mean humans already understand the biological basis of creativity and individualism? And are we able to program creativity and individualism? Current research suggests that three brain networks (the default mode network, the executive control network, and the salience network) are related to creativity. These networks are distributed across the frontal and parietal cortices (Brenner, 2018). Regarding individualism, Chiao et al. (2009) suggest that neural activity in the medial prefrontal cortex positively predicts individualistic and collectivistic views of the self. However, all brain areas are interconnected, and we still don’t know the specific neuronal interactions involved in creativity and individualism. Without actually understanding their neuronal basis, we are unable to program creativity and individualism into AIs. Therefore, art pieces generated by AI are not original.





Beyond the originality of art, the meaning of art is also crucial. We can evaluate the meaning of art in two contexts: its meaning to the audience viewing it and its meaning to the artist who created it. Ted Snell (2018, May 04), director of the Cultural Precinct at the University of Western Australia, concluded that the evaluation of art depends on the audience’s knowledge and experience. Considering this subjectivity in art evaluation, what is the meaning of art to an artist?








Image courtesy of Pixabay

There are different kinds of art. As a dance minor, I am most familiar with the performing arts. After years of training in classical ballet, I witnessed my technical improvement as I put more effort into my dancing. However, dance is not only about physical growth in overcoming technical challenges. It also helps me to grow mentally: I started to accept my imperfections, and I became more humble and persistent in training. If art brings artists mental growth, we have no way to measure the meaning of such growth for an AI. Until now, AI’s learning process has been largely guided by humans, and AI is currently developed within a human societal context. Therefore, if we consider an AI as an individual that is biologically distinct from a human, do human societal values apply to AIs?





So far, we can neither consider these AI artists individuals nor measure any mental growth an AI experiences while producing its works. However, we cannot neglect the infinite possibilities of art pieces generated by AI. Also, considering the immortality of AI, AI could someday exceed us by continuously learning and improving. The word “robot” derives from the Czech “robota,” meaning forced labor. It is possible that AI could “enslave” us in the future. While it is interesting to experiment with AI’s ability to create art, we need to evaluate the consequences of accepting AI art pieces. Nevertheless, it is truly fascinating to see the artworks created by an AI artist!





_______________






Coco Cao is a fourth-year undergraduate student at Emory University, majoring in Neuroscience and Behavioral Biology and minoring in Dance and movement studies. She is originally from China and hopes to pursue a career in medicine.
















References: 





Baraniuk, C. (2017, June 29). Artificially intelligent painters invent new styles of art. Retrieved June 14, 2018, from https://www.newscientist.com/article/2139184-artificially-intelligent-painters-invent-new-styles-of-art/ 





Brenner, G. H. (2018, February 22). Your Brain on Creativity. Retrieved July 5, 2018, from https://www.psychologytoday.com/us/blog/experimentations/201802/your-brain-creativity 





Chiao, J. Y., Harada, T., Komeda, H., Li, Z., Mano, Y., Saito, D., Iidaka, T. (2009). Neural basis of individualistic and collectivistic views of self. Human Brain Mapping,30(9), 2813-2820. doi:10.1002/hbm.20707 





Elgammal, A., Liu, B., Elhoseiny, M., & Mazzone, M. (2017). CAN: Creative Adversarial Networks, Generating “Art” by Learning About Styles and Deviating from Style Norms. arXiv preprint arXiv:1706.07068. 





Glimcher, P. W. (2004). Decisions, uncertainty, and the brain: The science of neuroeconomics. Cambridge, MA: MIT Press. 





Réunion des musées nationaux – Grand Palais. (n.d.). Artists & Robots. Retrieved July 5, 2018, from https://www.grandpalais.fr/en/event/artists-robots 





Snell, T. (2018, May 04). On judging art prizes (it's all subjective, isn't it?). Retrieved June 14, 2018, from https://theconversation.com/on-judging-art-prizes-its-all-subjective-isnt-it-38430








Want to cite this post?



Cao, C. (2018). The future of an AI artist. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/09/the-future-of-ai-artist.html

Tuesday, September 4, 2018

Organoids, Chimeras, Ex Vivo Brains – Oh My!




By Henry T. Greely









Image courtesy of Wikimedia Commons

At about the time of the birth of modern neuroethics, Adina Roskies usefully divided the field into two parts: the neuroscience of ethics, what neuroscience can tell us about ethics, and the ethics of neuroscience, what ethical issues neuroscience will bring us (1). At some point, in my own work, I broke her second point into the ethics of neuroscience research and the ethical (and social and legal) implications of neuroscience for the non-research world. (I have no clue now whether that was original with me.)






The second part of Roskies’ division of neuroethics, the ethics of neuroscience research, has always had a special place in my heart because early work in it really helped mold the field we have today. In the early ‘00s, groups that mixed scientists, physicians, and ethicists, largely through the efforts of Judy Illes, explored what to do about abnormal brain scans taken from otherwise healthy volunteers. (See, e.g., 2, 3) It had become clear that, in the computer-generated imagery of a brain MRI, more than 20 percent of “the usual subjects” (college undergraduates, usually psychology majors) and about half of “mature” subjects had something “odd” in their brains. These oddities ranged from variations of no clinical significance, such as “silent” blockages or benign tumors, to potentially very serious problems, such as malignant tumors or large “unpopped” aneurysms. Happily, only small fractions of those oddities held clinical significance, but this still posed hard questions for researchers, many of whom were not themselves clinicians. What, if anything, should they tell, and to whom? And so, working together, scientists, clinicians, and ethicists talked with each other, learned from each other, and came up with useful answers, usually involving both changes to the consent process and a procedure for expert review of some worrisome scans.



That model of close and fruitful interaction between the neuroscience researchers and people with ethics or legal expertise has, I think, largely persisted in neuroethics, at least in North America. The early conversations about the disclosure of results deserve significant credit for that. 






Ethical issues in neuroscience research have continued to appear, on questions from confidentiality to consent. But largely without me. I’ve thought of my neuroethics work as almost entirely on the ethical, legal, and social effects of neuroscience outside the research setting.  Until now.* 






Last April, Nature published a comment written by Duke University’s Nita Farahany, me, and 15 others, including 11 scientists (4).  The piece, The Ethics of Experimenting with Human Brain Tissue, grew out of a May 2017 workshop held at Duke by its Science and Society Initiative with help from the NIH BRAIN Initiative.  It dealt with…wait for it…organoids, chimeras, and ex vivo brains, things I call “human brain surrogates”, and pointed out the dilemma they embody. Ethical constraints limit what we can do to the brains of living people, leading us to create surrogates for people’s brains – but the closer the surrogate comes to the living brain, the more we back into those same ethical constraints.









Image courtesy of Flickr

The best way to study how a human brain works is by studying a living human brain in a living human being. But this has problems. Humans are terrible lab animals – we disobey, we lie, and we can call lawyers. Researchers can’t “sacrifice” us at the right moment in the research and then carefully examine slices of our brain. We hold rights that make much intrusive and risky, but potentially illuminating, research ethically (and legally) impossible. So, researchers look for surrogates for living human brains in living human beings, surrogates that do not have such an inconvenient moral status. Mouse brains in mice, monkey brains in monkeys, thin slices of human brains in Petri dishes – all these have research value. But we know, from decades of disappointment in moving those findings to humans, that, although similar in some ways, none of them is the same as, or perfectly predicts, a living human brain in a living human. 






Enter three new or improved technologies. (For more information on any of these technologies, see the Nature Comment; for much more information, read its references.)




1) “Organoid” is the term generally used to refer to a small ball of human cells grown in cell culture from stem cells (human stem cells for human organoids). The stem cells may be embryonic stem cells, induced pluripotent stem cells, or other types of stem cells, but the effort has been to get cells that will all become one or more cell types found in an organ. Thus, there are human liver organoids, kidney organoids, gut organoids…and yes, brain organoids. Human neural organoids have been grown for over three years – and some of them have survived for over two years (4). They have diameters of about 4 millimeters (a sixth of an inch), about the size of a very small pea (4). They have no vasculature, so the cells need to be in contact with the oxygen- and nutrient-bearing (and waste-carrying) culture media. Currently, human neural organoids have about two to six million neurons (no other brain cells so far, just neurons). They self-organize, grow synapses, fire, and continue to get more and more complex as time goes on. Still, by comparison, the human brain is estimated to contain approximately 86 billion neurons (5). 


2) Chimeras – in this case, human/non-human brain chimeras – are creatures with some human brain cells and some non-human brain cells. (Thus far, in brains at least, they are always non-human animals with some human cells, not humans with some non-human cells.)  Chimeras have been used in research for many years, though organoids are opening new possibilities: such as transplanting human organoids into rodent brains – which turn out to grow blood vessels for them (6). 


3) Researchers have also long used human brain tissue kept alive outside the body – ex vivo tissue – but what is used and how is, like chimeras, becoming “new and improved.” Instead of keeping flat sheets of human brain cells alive in a dish, researchers are keeping alive and studying larger and larger chunks of human brains, taken from neurosurgical discards or from the recently dead. There are even some efforts, so far only in non-humans, to keep whole brains from dead animals “alive” apart from their bodies (7). 




Other than being creepy, what do these all have in common? They are efforts to understand human brain function, and ultimately human brain diseases, better by making human brains that do not have the rights of human “persons” and hence can be used more broadly, and more roughly, in labs. But, as noted above, there’s a dilemma: the closer the surrogate comes to the living human brain in a living human, the more questions it raises about whether the surrogate has a moral status – and, if so, what status?









A retinal organoid

Image courtesy of Flickr

The Comment lays out some, but by no means all, of the questions these surrogates raise – lays them out but does not answer them. Some of those questions are about the fully human persons whose cells or tissues are used in the research and some are about the “thing” being studied itself.  






There is much work to be done on the ethics of this kind of neuroscience research, but, like the work on disclosure of MRI findings to subjects, it is work that requires the union of scientific, medical, ethical, legal, and other expertise. To know what to think of a human neural organoid, it is important to know something at least of what, if anything, that organoid can sense, perceive, do, or, possibly at some point, think. But the science is only a start to the question. We know mice, rats, and monkeys sense, perceive, do, and think things, yet we still allow some research with them. The ethical questions need to inform the scientific questions about these things; the scientific findings need to inform the ethics answers.






Happily, this is happening. The authors of the Comment, scientists and ethicists, recognized the appropriateness of concern about this research – not so much as it exists today but for what it may (or may not) become in five or ten years. And they recognized the need to work together. This kind of collaboration, indeed, is built into the NIH BRAIN Initiative, an effort that, at heart (and brain), is fundamentally about creating new tools for neuroscience research and, ultimately, brain treatment. Its Multi-Council Working Group is made up of directors and outside advisory council members from 10 NIH Centers and Institutes as well as some at-large representatives (I’m one). It contains a Neuroethics Division (8), chaired by Dr. Christine Grady, chief of the Bioethics Department at the NIH Clinical Center, and myself. That division includes both scientists and ethicists among its members, and the Duke workshop had its origin in some of the Division’s work. 






But there is much more to be done, on these topics and others, by that group and many others. Think about the possible issues raised by using CRISPR to modify non-human primates by giving them human versions of some genes. (A National Academy of Medicine forum will do so this October (9).) Or the questions of another human brain surrogate, an in-silico version, if it approaches close enough to consciousness to prompt concerns about its moral status. 






All scientific revolutions are ultimately based on revolutions in tools. The MRI, fMRI, animal models, thin slices, and other tools have taken us far but the next generation of tools is coming. It will make possible much scientific progress, but will undoubtedly raise many ethics issues, issues that will be important, complicated, and (usually) fun. Come and play!








P.S. And join the International Neuroethics Society! Our Annual Meeting on November 1 and 2, 2018 in San Diego features a panel on these issues: http://www.neuroethicssociety.org/2018-annual-meeting-program.



*Actually, I was wrong about that, a mistake it may be useful to explain. I have been an author, from 2003 to 2017, on six pieces involving human/non-human chimeras (10-15), at least two of which were specifically about brain chimeras. (11, 12) But, in spite of being involved in neuroethics since its modern birth, I put those in a different category – they were part of my stem cell work with some connections to my “weird life forms” interests. I – and we – need to remember to define neuroethics broadly.




_______________












Hank Greely is the Deane F. and Kate Edelman Johnson Professor of Law at Stanford University, where he directs its Center for Law and the Biosciences as well as the Stanford Center for Neuroscience and Society. He began serving a two-year term as president of the International Neuroethics Society in November 2017. He chairs the California Advisory Committee on Human Stem Cell Research; and serves on the Neuroscience Forum of the National Academy of Medicine; the Committee on Science, Technology, and Law of the National Academy of Sciences; and the NIH BRAIN Initiative’s Multi-Council Working Group, whose Neuroethics Division he co-chairs. And he likes playing (doubles) tennis even though he is, to be charitable, not very good.












REFERENCES






1. Roskies, Adina L., 2002, “Neuroethics for the New Millennium”, Neuron, 35(1): 21–23. doi:10.1016/S0896-6273(02)00763-8






2. Kim, B.S., Illes, J., Kaplan, R.T., Reiss, A., Atlas, S.W. Neurologic findings in healthy children on pediatric fMRI: Incidence and significance, 23 Am J Neurorad.1674 (2002);






3. Illes, J., Desmond, J., Huang, L.F., Raffin, T.A., Atlas, S.W. Ethical and practical considerations in managing incidental neurologic findings in fMRI,  50 Brain and Cognition 358 (2002).






4. Nita A. Farahany, Henry T. Greely, et al., The Ethics of Experimenting with Human Brain Tissue, NATURE, 556:429-32 (April 26, 2018)






5. Society for Neuroscience, BRAIN FACTS at 5 (2018), accessed July 12, 2018.






6. Mansour, A. A. et al. (2018). An in vivo model of functional and vascularized human brain organoids. Nature Biotechnol. 36:432-441, https://doi.org/10.1038/nbt.4127.






7. Antonio Regalado, Researchers Are Keeping Pig Brains Alive Outside the Body, Technology Review (April 25, 2018), https://www.technologyreview.com/s/611007/researchers-are-keeping-pig-brains-alive-outside-the-body/






8. The BRAIN Initiative, Neuroethics Division of the BRAIN Multi-Council Working Group, https://www.braininitiative.nih.gov/about/neuroethics.htm, accessed July 12, 2018






9. Forum on Neuroscience and Nervous System Disorders, National Academy of Medicine, Transgenic and Chimeric Neuroscience Research: Exploring the Scientific Opportunities Afforded by New Nonhuman Primate Models – A Workshop, at http://nationalacademies.org/hmd/Activities/Research/NeuroForum/2018-OCT-4.aspx, accessed July 12, 2018








10. Henry T. Greely, Defining Chimeras – and Chimeric Concerns, American Journal of Bioethics, Vol. 3, issue 3:17-20 (2003)






11. Mark Greene, et al., Moral Issues of Human–Non-human Primate Neural Grafting, Science 309:385-386 (July 15, 2005)






12. Henry T. Greely, Mildred K. Cho, Linda F. Hogle, Debra M. Satz, Thinking About the Human Neuron Mouse, American Journal of Bioethics:  NEUROSCIENCE 7:(5) 25-40 (May/June 2007)



13. Henry T. Greely, Human/Nonhuman Chimeras: Assessing the Issues, in Oxford Handbook of Animal Ethics (ed. Tom Beauchamp and R.G. Frey, Oxford Univ. Press, 2011)






14. Henry T. Greely, Academic Chimeras?, The American Journal of Bioethics, 14:2, 13-14 (2014)






15. Jun Wu, et al., Stem Cells and Interspecies Chimeras: Past, Present, Future, Nature 540:51-59 (Dec. 1, 2016)










Want to cite this post?




Greely, H. (2018). Organoids, Chimeras, Ex Vivo Brains – Oh My! The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/09/organoids-chimeras-ex-vivo-brains-oh-my.html