Racial disparities in automated speech recognition
Allison Koenecke,
Andrew Nam,
Emily Lake,
Joe Nudell,
Minnie Quartey,
Zion Mengesha,
Connor Toups,
John R. Rickford,
Dan Jurafsky and
Sharad Goel
Author affiliations
Allison Koenecke: Institute for Computational & Mathematical Engineering, Stanford University, Stanford, CA 94305
Andrew Nam: Department of Psychology, Stanford University, Stanford, CA 94305
Emily Lake: Department of Linguistics, Stanford University, Stanford, CA 94305
Joe Nudell: Department of Management Science & Engineering, Stanford University, Stanford, CA 94305
Minnie Quartey: Department of Linguistics, Georgetown University, Washington, DC 20057
Zion Mengesha: Department of Linguistics, Stanford University, Stanford, CA 94305
Connor Toups: Department of Linguistics, Stanford University, Stanford, CA 94305
John R. Rickford: Department of Linguistics, Stanford University, Stanford, CA 94305
Dan Jurafsky: Department of Linguistics, Stanford University, Stanford, CA 94305; Department of Computer Science, Stanford University, Stanford, CA 94305
Sharad Goel: Department of Management Science & Engineering, Stanford University, Stanford, CA 94305
Proceedings of the National Academy of Sciences, 2020, vol. 117, issue 14, 7684-7689
Abstract:
Automated speech recognition (ASR) systems, which use sophisticated machine-learning algorithms to convert spoken language to text, have become increasingly widespread, powering popular virtual assistants, facilitating automated closed captioning, and enabling digital dictation platforms for health care. Over the last several years, the quality of these systems has dramatically improved, due both to advances in deep learning and to the collection of large-scale datasets used to train the systems. There is concern, however, that these tools do not work equally well for all subgroups of the population. Here, we examine the ability of five state-of-the-art ASR systems—developed by Amazon, Apple, Google, IBM, and Microsoft—to transcribe structured interviews conducted with 42 white speakers and 73 black speakers. In total, this corpus spans five US cities and consists of 19.8 h of audio matched on the age and gender of the speaker. We found that all five ASR systems exhibited substantial racial disparities, with an average word error rate (WER) of 0.35 for black speakers compared with 0.19 for white speakers. We trace these disparities to the underlying acoustic models used by the ASR systems as the race gap was equally large on a subset of identical phrases spoken by black and white individuals in our corpus. We conclude by proposing strategies—such as using more diverse training datasets that include African American Vernacular English—to reduce these performance differences and ensure speech recognition technology is inclusive.
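The abstract reports results in terms of word error rate (WER). For readers unfamiliar with the metric, the sketch below shows one common way to compute it: word-level Levenshtein distance (substitutions + insertions + deletions) divided by the number of words in the reference transcript. This is a generic illustration, not the authors' evaluation code, and the example sentences are hypothetical.

```python
# Minimal sketch of word error rate (WER):
# WER = (substitutions + deletions + insertions) / number of reference words,
# computed via word-level Levenshtein (edit) distance.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost, # substitution or match
            )
    return dp[len(ref)][len(hyp)] / len(ref)

# Hypothetical transcript pair: 3 word errors over a 6-word reference -> WER = 0.5
print(wer("he was going to the store", "he was go in to store"))
```

A corpus-level WER, like the 0.35 and 0.19 averages cited above, is typically obtained by aggregating such per-utterance error counts over all reference words rather than averaging per-sentence ratios.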
Keywords: fair machine learning; natural language processing; speech-to-text
Date: 2020
Citations: 2 (tracked in EconPapers)
Downloads: http://www.pnas.org/content/117/14/7684.full (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:nas:journl:v:117:y:2020:p:7684-7689