By Elisabeth Hildt
Image courtesy of Pixabay.
Research is not an isolated activity. It takes place in a social context, sometimes influenced by value assumptions and sometimes accompanied by social and ethical implications. A recent example of this complex interplay is the article “Deep neural networks can detect sexual orientation from faces” by Yilun Wang and Michal Kosinski, accepted in 2017 for publication in the Journal of Personality and Social Psychology.
In this study on face recognition, the researchers used deep neural networks to classify the sexual orientation of persons depicted in facial images uploaded to a dating website. While the system’s discriminatory power was limited, the algorithm was reported to achieve higher accuracy in this setting than human judges. The study can be seen in the context of the “prenatal hormone theory of sexual orientation,” which claims that gay men and women tend to have gender-atypical facial morphology.
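For readers unfamiliar with the method, the setup the paper describes follows a pattern that is common in applied machine learning: a pretrained deep neural network converts each face image into a feature vector, and a simple classifier is then fit on those vectors. The sketch below illustrates only this generic pattern; the backbone (ResNet-18), the preprocessing, and the placeholder data are my assumptions for illustration, not the study’s actual model or code.

```python
# Illustrative sketch only, not the authors' code: a pretrained network
# serves as a fixed feature extractor, and a simple classifier is fit
# on the resulting vectors. ResNet-18 stands in for whatever face model
# a study like this would actually use.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.linear_model import LogisticRegression

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classification head; keep features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    """Map one image file to a 512-dimensional feature vector."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0)

# 'paths' and 'labels' are hypothetical placeholders for a labeled dataset.
# features = torch.stack([embed(p) for p in paths]).numpy()
# classifier = LogisticRegression(max_iter=1000).fit(features, labels)
```

The point of the sketch is how little machinery such a classification requires: everything except the labeled data is freely available off the shelf, which is part of what makes the ethical questions discussed below so pressing.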
The abstract of the article ends with the following sentences (p. 2): “Those findings advance our understanding of the origins of sexual orientation and the limits of human perception. Additionally, given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.”
The authors of the study seem to assume that their role is confined to conducting research, sending the results out to society, and (maybe) sounding a note of caution (Murphy 2017; Resnick 2018), and that, beyond this, their research poses no considerable ethical issues. This assumption can be questioned, however: researchers have a clear responsibility to think about the social embeddedness and ethical implications of their research before it is carried out and published, and to design their studies in a way that keeps possible negative consequences to a minimum.
To begin with, there has been an ongoing discussion on whether this study complies with research ethics standards. Issues raised include whether the research is in line with the dating site’s guidelines and with copyright regulations, and whether the researchers were entitled to use the photos without the informed consent of the individuals who had uploaded them for an entirely different purpose (Flaherty 2017; Leetaru 2017). Because these issues are still under investigation, and because there is also discussion of whether new guidelines are needed for artificial intelligence (AI) and digital data research (Leetaru 2017), the study has not yet been published in the journal.
Image courtesy of Flickr.
Apart from the above-mentioned research ethics issues, ethical aspects matter in two respects: first with regard to the value assumptions implicit in the study design and second with regard to possible ethical implications of the research. Physiognomy, the broader context in which the study is located, is a controversial field that many consider to be a pseudoscience (Emspak 2017).
Physiognomy assumes that a person’s facial features give indications of his/her personality traits. It is not by chance that the study reminds me of the pseudoscientific phrenological approaches of the 19th century, which attempted to draw conclusions about individuals’ personality traits from the shape of their skulls (Holtzman 2015). What unites the two is that both approaches are shaped by social value assumptions and by the motivation to identify individuals whose behavior or characteristics are considered socially deviant.
Other brain-related research fields are not immune to social value assumptions either. An example is craniometry and the highly questionable claim made by Samuel George Morton in the 19th century that differences in cranial capacity between ethnic groups indicate differences in their intellectual capacity. The same applies to discussions of the relevance of brain size for intelligence (Fausto-Sterling 1993). These examples tell us more about the underlying social assumptions of the researchers than about the actual relevance of their measurements.
Similarly, one of the basic assumptions of the Wang & Kosinski paper, the “prenatal hormone theory of sexual orientation” and the attendant view that there is a correlation between the shape of a person’s face and his/her sexual orientation, is far from proven (Emspak 2017; Murphy 2017). While the quality of the underlying scientific assumption is a complex question that cannot be resolved here, the mere choice of the research topic reflects the view that AI-based facial recognition to detect sexual orientation is a topic worth pursuing.
One of the central conclusions of the study is that human faces “contain more information about sexual orientation than can be perceived or interpreted by the human brain” (Wang & Kosinski, p. 29). Deep neural networks are reported to provide more accurate results in the described study setting because they take into consideration features that humans, and the human brain, cannot perceive or do not use when distinguishing between heterosexual and homosexual individuals based on their faces. Nevertheless, the resulting data clearly needs human interpretation, especially in view of the intimacy of the trait under investigation. For example, the authors explain the higher probability of seeing a shadow on the foreheads of heterosexual men and lesbian women in the study by both groups’ tendency to wear baseball caps and by “the association between baseball caps and masculinity in American culture” (p. 20). In other cultural contexts, different influencing factors may be expected. Why gay people in the study were more likely to wear glasses, however, remains unexplained (Emspak 2017).
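This interpretive gap can be made concrete. A common diagnostic tool is a saliency map: the gradient of a model’s output with respect to the input pixels, which highlights where the model is sensitive but says nothing about what that sensitivity means. The sketch below is an assumed, generic example (again using a stock pretrained network, not the study’s model); deciding whether a bright region reflects facial morphology, a cap’s shadow, or a pair of glasses is left entirely to the human interpreter.

```python
# Minimal saliency sketch (assumed example, not from the study): the
# gradient of the top class score with respect to the pixels marks
# regions that drive the prediction, but interpreting those regions
# still requires a human. "face.jpg" is a placeholder file name.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

x = preprocess(Image.open("face.jpg").convert("RGB")).unsqueeze(0)
x.requires_grad_(True)

score = model(x).max()  # score of the model's top prediction
score.backward()        # gradients flow back to the input pixels

# Per-pixel sensitivity, shape (224, 224): large values mark influential
# regions. This shows where the model looks, not why.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)
```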
Image courtesy of Pixabay.
The underlying question is: how can we ever adequately interpret the resulting data in a situation in which not only a considerable number of the features used by the system but also their relevance escape our understanding? There is a clear risk of discrimination against homosexual men and women based on opaque algorithms (O’Neil 2016; Agüera y Arcas et al. 2018), which leads to the question of the possible negative consequences of this research.
Concerning the possible ethical implications of their study, the authors stress that their intention was to raise awareness of the risks gay people may already face, particularly in view of the growing digitalization of everyday life, and that they did not develop algorithms for their study but instead used widely available off-the-shelf tools. However, the study does not merely raise awareness of the options available in the digital age: it also demonstrates how similar approaches can be realized, and it reinforces the assumption that using facial recognition to determine individuals’ sexual orientation may be worthwhile.
_______________
Elisabeth Hildt is Professor of Philosophy and Director of the Center for the Study of Ethics in the Professions at Illinois Institute of Technology, Chicago. Her research focus is on neuroethics, ethics of technology, and Science and Technology Studies. Before moving to Chicago, she was the head of the Research Group on Neuroethics/Neurophilosophy at the University of Mainz, Germany.
References
Agüera y Arcas, B., Todorov, A., Mitchell, M. (2018): “Do algorithms reveal sexual orientation or just expose our stereotypes?”, Medium, Jan 11, 2018; https://medium.com/@blaisea/do-algorithms-reveal-sexual-orientation-or-just-expose-our-stereotypes-d998fafdf477
Emspak, J. (2017): “Facing Facts: Artificial Intelligence and the Resurgence of Physiognomy”, Undark, 11.08.2017; https://undark.org/article/facing-facts-artificial-intelligence/
Fausto-Sterling, A. (1993): “Sex, Race, Brains, and Calipers”, Discover Magazine 14(10): 32-37. http://discovermagazine.com/1993/oct/sexracebrainsand288
Flaherty, C. (2017): “Prominent journal that accepted controversial study on AI gaydar is reviewing ethics in the work”, Inside Higher Ed Sep 13, 2017; https://www.insidehighered.com/news/2017/09/13/prominent-journal-accepted-controversial-study-ai-gaydar-reviewing-ethics-work
Holtzman, G.S. (2015): “When Phrenology was Used in Court. Lessons in neuroscience from the 1834 trial of a 9-year-old”, Slate, Future Tense, Dec 16, 2015; http://www.slate.com/articles/technology/future_tense/2015/12/how_phrenology_was_used_in_the_1834_trial_of_9_year_old_major_mitchell.html
Leetaru, K. (2017): “AI ‘Gaydar’ And How The Future Of AI Will Be Exempt From Ethical Review”, Forbes, Sep 16, 2017; https://www.forbes.com/sites/kalevleetaru/2017/09/16/ai-gaydar-and-how-the-future-of-ai-will-be-exempt-from-ethical-review/#704e7602c09a
Murphy, H. (2017): “Why Stanford Researchers Tried to Create a ‘Gaydar’ Machine”, New York Times, Oct 9, 2017; https://www.nytimes.com/2017/10/09/science/stanford-sexual-orientation-study.html
O’Neil, C. (2016): Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
Resnick, B. (2018): “This psychologist’s ‘gaydar’ research makes us uncomfortable. That’s the point.”, Vox, Jan 29, 2018; https://www.vox.com/science-and-health/2018/1/29/16571684/michal-kosinski-artificial-intelligence-faces
Wang, Y., Kosinski, M. (2017): “Deep neural networks can detect sexual orientation from faces”, https://www.gsb.stanford.edu/sites/gsb/files/publication-pdf/wang_kosinski.pdf
Want to cite this post?
Hildt, E. (2018). Facial recognition, values, and the human brain. The Neuroethics Blog. Retrieved on [date], from http://www.theneuroethicsblog.com/2018/06/facial-recognition-values-and-human.html