By Carolyn C. Meltzer, MD
Image courtesy of Wikimedia Commons
The term “artificial intelligence” (AI) was first used in 1955 by John McCarthy of Dartmouth College to describe complex information processing (McCarthy et al. 1955). While the field has progressed slowly since that time, recent advancements in computational power, deep learning and neural network systems, and access to large datasets have set the stage for the rapid acceleration of AI. While there is much painstaking work ahead before transformational uses of AI catch up with the hype (Kinsella 2017), substantial impact in nearly all aspects of human life is envisioned.
AI is being integrated into fields as diverse as medicine, finance, journalism, transportation, and law enforcement. AI aims to mimic human cognitive processes, as imperfect as they may be. Our human tendencies to generalize common associations, avoid ambiguity, and identify more closely with others who are like ourselves may help us navigate our world efficiently, yet how those tendencies translate into the design of AI systems is not yet clear. As is typically the case, technology is racing ahead of our ability to consider the societal and ethical consequences of its implementation (Horvitz 2017).
AI algorithms already in common use have given us a glimpse of the downside of the lack of diversity in computing, biotech, engineering, and other STEM fields. If you own an iPhone X, you routinely rely on facial recognition technology to unlock your phone and sign in to apps such as personal banking. Such algorithms appear to work best if you are a white man: for women and dark-skinned persons, error rates increase considerably, from less than 1% for a white male to as much as 35% for a dark skin-toned woman (Buolamwini 2018, Lohr 2018). Why is this the case? Two widely used aggregate datasets used to develop face recognition algorithms were overwhelmingly male and white. So the algorithm works exactly as it was trained to, that is, for a largely white, male world.
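To make that kind of disparity concrete, a subgroup audit in the spirit of Buolamwini and Gebru's Gender Shades study simply reports error rates separately for each demographic group rather than a single overall accuracy. The minimal Python sketch below uses invented toy data purely to illustrate the idea; it is not the authors' evaluation code.

```python
# Hypothetical subgroup audit: report error rates per demographic group
# rather than one overall accuracy. All records below are invented.
predictions = [
    # (subgroup, classified_correctly)
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("darker-skinned female", True), ("darker-skinned female", False),
    ("darker-skinned female", False), ("darker-skinned female", True),
]

for group in sorted({g for g, _ in predictions}):
    outcomes = [ok for g, ok in predictions if g == group]
    error_rate = 1 - sum(outcomes) / len(outcomes)
    print(f"{group}: error rate {error_rate:.0%} (n={len(outcomes)})")
```

Even when the pooled accuracy looks acceptable, disaggregating by group exposes whom the system is actually failing.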
AI strategies such as machine learning and natural language processing may also amplify gender, racial, and other stereotypes by generalizing associations found in their training datasets. For example, an image-labeling system may mis-identify male nurses as female because the text annotations in its training data show nurses to be more commonly women (Bolukbasi et al. 2016). Bolukbasi and colleagues (2016), in their aptly titled work “Man is to Computer Programmer as Woman is to Homemaker?”, set out to de-bias Google News texts by modifying word embeddings to remove gender stereotypes. Their work is a cautionary tale of the bias that is foundational to commonly available data.
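The core of that debiasing idea is geometric: estimate a “gender direction” in the embedding space and remove its contribution from words that should be gender-neutral. The sketch below is a heavily simplified illustration with made-up four-dimensional vectors and a single definitional pair; the published method uses many word pairs, principal component analysis, and an additional equalization step.

```python
import numpy as np

# Toy 4-dimensional embeddings (made-up numbers, for illustration only).
embeddings = {
    "he":         np.array([ 0.9,  0.1,  0.3,  0.2]),
    "she":        np.array([-0.9,  0.1,  0.3,  0.2]),
    "nurse":      np.array([-0.5,  0.6,  0.1,  0.4]),
    "programmer": np.array([ 0.4,  0.2,  0.7,  0.1]),
}

# Estimate a "gender direction" from one definitional pair
# (the published approach aggregates many pairs via PCA).
gender_dir = embeddings["he"] - embeddings["she"]
gender_dir /= np.linalg.norm(gender_dir)

def debias(vec, direction):
    """Remove the component of vec lying along the bias direction."""
    return vec - np.dot(vec, direction) * direction

# Occupation words should be gender-neutral, so project out the bias.
for word in ("nurse", "programmer"):
    before = np.dot(embeddings[word], gender_dir)
    after = np.dot(debias(embeddings[word], gender_dir), gender_dir)
    print(f"{word}: gender component {before:+.2f} -> {after:+.2f}")
```

After the projection, “nurse” and “programmer” no longer lean toward either end of the he–she axis, which is the behavior the debiasing aims for.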
How doctors are typically represented. Image courtesy of Max Pixel
I have found that performing a Google image search for the term “doctor” turns up a page of images that largely reinforce our unconscious bias of what a doctor looks like, that is, male and white. The notorious 2016 case of Dr. Tamika Cross, the young, Black, female physician barred from treating an ill passenger on a Delta flight, set off a social media campaign (#WhatADoctorLooksLike) that raised public awareness of the social harm of this form of stereotype bias.
Other adaptive cognitive heuristics, or biases, that help humans make quick judgments can likewise be translated into AI algorithms, particularly machine learning (Doell and Siebert 2016). These include confirmation bias (e.g., human classification error in labeling the data in a training set) and priming (e.g., yellow items over-labeled as “banana” when bananas were over-represented in the training sample). Once embedded in widely distributed technology platforms -- such as smartphones, law enforcement databases, transportation platforms, and medical diagnostic systems -- algorithmic bias can have disastrous effects (Horvitz 2017; Bass and Huet 2017). The power and authority accorded to “black box” automated AI systems could result in misidentification errors that support false criminal charges, in a self-driving car failing to recognize a human figure crossing the road, or in misdiagnoses and inappropriate health care. Further, the more complex the problem AI is targeted to address -- such as guiding medical diagnoses -- the more challenging it may be to discern embedded bias effects.
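The “banana” example above comes down to over-representation: if one label dominates the training sample, even a reasonable learner can default to it. The toy Python sketch below, with invented counts and a deliberately degenerate frequency-based labeler, shows how skew in the training data becomes over-prediction at test time.

```python
from collections import Counter

# Invented training set in which bananas dominate the "yellow object" examples.
training_labels = ["banana"] * 90 + ["lemon"] * 7 + ["corn"] * 3

counts = Counter(training_labels)
total = sum(counts.values())

def label_yellow_object():
    """A degenerate 'classifier' that returns the most common
    training label for any yellow object it is shown."""
    return counts.most_common(1)[0][0]

print({label: f"{n / total:.0%}" for label, n in counts.items()})
print("Prediction for a new yellow object:", label_yellow_object())
```

Real models are more sophisticated than this majority-label rule, but the underlying failure mode is the same: what was over-represented in training becomes the default answer in deployment.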
_______________
Dr. Meltzer, William P. Timmie Professor and Chair of Radiology and Imaging Sciences and Associate Dean for Research at Emory University School of Medicine, is a neuroradiologist and nuclear medicine physician whose translational research has focused on brain structure-function relationships in normal aging, dementia, and other late-life neuropsychiatric disorders. Her work in imaging technologies includes oversight of the clinical evaluation of the world’s first combined PET/CT scanner. Dr. Meltzer has held numerous leadership roles in national professional societies and advisory boards, including the Advisory Council for the National Institute of Biomedical Imaging and Bioengineering, and has authored approximately 200 publications.
References:
McCarthy J, Minsky M, Rochester N, Shannon CE. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. August 1955. http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf
Kinsella B. Gartner Hype Cycle Suggests Another AI Winter Could Be Near. November 5, 2017 (accessed May 29, 2018).
Buolamwini J. How I’m fighting bias in algorithms. TEDxBeaconStreet, 2016. https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms
Lohr S. Facial Recognition Is Accurate, if You’re a White Guy. The New York Times; February 9, 2018. https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html (accessed May 29, 2018).
Buolamwini J, Gebru T. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency. 2018;81:77-91
Horvitz E. AI, People, and Society. Science 2017; 357(6346):7.
Bass D, Huet E. Researchers Combat Gender and Racial Bias in Artificial Intelligence. Bloomberg; December 4, 2017 (accessed May 29, 2018).
Bolukbasi T, Chang K-W, Zou J, Saligrama V, Kalai A. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. Proceedings of the 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Doell C, Siebert S. Evaluation of Cognitive Architectures Inspired by Cognitive Biases. Procedia Computer Science 2016;88:155–162 (7th Annual International Conference on Biologically Inspired Cognitive Architectures, BICA 2016).
Want to cite this post?
Meltzer, C. (2018). Worrisome Implications of Lack of Diversity in Silicon Valley. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/08/worrisome-implications-of-lack-of.html