Optimized Scoring Systems: Toward Trust in Machine Learning for Healthcare and Criminal Justice
Cynthia Rudin and Berk Ustun
Additional contact information
Cynthia Rudin: Departments of Computer Science, Electrical and Computer Engineering, and Statistical Science, Duke University, Durham, North Carolina 27708
Berk Ustun: Center for Research in Computation for Society, Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138
Interfaces, 2018, vol. 48, issue 5, 449-466
Abstract:
Questions of trust in machine-learning models are becoming increasingly important as these tools are starting to be used widely for high-stakes decisions in medicine and criminal justice. Transparency of models is a key aspect affecting trust. This paper reveals that there is new technology to build transparent machine-learning models that are often as accurate as black-box machine-learning models. These methods have already had an impact in medicine and criminal justice. This work calls into question the overall need for black-box models in these applications.
Keywords: machine learning; sparse linear models; scoring systems; trust; transparency; interpretability; healthcare; criminal justice; recidivism
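The "scoring systems" named in the title and keywords are sparse linear models whose coefficients are small integers, so a prediction can be computed by adding up a few points by hand and comparing the total to a threshold. The short Python sketch below illustrates only the general form of such a model; the features, point values, and threshold are invented for illustration and are not taken from the paper.

# Minimal sketch of a scoring system: a sparse linear model with small
# integer coefficients. All features, points, and the threshold are
# hypothetical examples, not the models fitted by Rudin and Ustun.
SCORECARD = {
    "age_under_25":       2,
    "prior_arrests_ge_2": 3,
    "employed":          -1,
}
THRESHOLD = 3  # predict the positive (high-risk) class when the total reaches this value

def score(record):
    """Add up the points for every feature that is true in the record."""
    return sum(pts for feat, pts in SCORECARD.items() if record.get(feat))

def predict(record):
    return "high risk" if score(record) >= THRESHOLD else "low risk"

example = {"age_under_25": True, "prior_arrests_ge_2": True, "employed": True}
print(score(example), predict(example))  # -> 4 high risk

Because the model is just a short table of integer points, a practitioner can verify each prediction without specialized software, which is the transparency property the abstract emphasizes.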
Date: 2018
Downloads: https://doi.org/10.1287/inte.2018.0957 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:orinte:v:48:y:2018:i:5:p:449-466