Towards accountability in machine learning applications: A system-testing approach
Wayne Wan and Thies Lindenthal
No 22-001, ZEW Discussion Papers from ZEW - Leibniz Centre for European Economic Research
Abstract:
A rapidly expanding universe of technology-focused startups is trying to change and improve the way real estate markets operate. The undisputed predictive power of machine learning (ML) models often plays a crucial role in the 'disruption' of traditional processes. However, an accountability gap prevails: How do the models arrive at their predictions? Do they do what we hope they do - or are corners cut? Training ML models is, at heart, a software development process. We therefore suggest following a dedicated software-testing framework to verify that an ML model performs as intended. As an illustration, we augment two ML image classifiers with a system-testing procedure based on local interpretable model-agnostic explanations (LIME). Analyzing the resulting classifications sheds light on some of the factors that determine the behavior of these systems.
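For readers unfamiliar with LIME, the short Python sketch below illustrates the general idea behind the testing procedure the abstract describes: perturb an input image, query the classifier, and identify the superpixels that drive a prediction. It assumes the open-source `lime` package; the `predict_proba` classifier and the test image are hypothetical placeholders, not the authors' actual models or data.

# A minimal sketch of probing an image classifier with LIME.
# Assumes the open-source `lime` package (pip install lime) and a
# hypothetical classifier function; this illustrates the general
# technique, not the paper's actual testing procedure.
import numpy as np
from lime import lime_image

def predict_proba(images):
    """Hypothetical stand-in for a trained classifier.

    Takes a batch of RGB images of shape (N, H, W, 3) with values
    in [0, 1] and returns class probabilities of shape (N, 2).
    Replace with the real model under test.
    """
    brightness = images.mean(axis=(1, 2, 3))           # dummy signal
    scores = np.stack([brightness, 1.0 - brightness], axis=1)
    return scores / scores.sum(axis=1, keepdims=True)

# One test image; random noise stands in for a building photograph.
image = np.random.rand(224, 224, 3)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    predict_proba,      # classifier under test
    top_labels=2,       # explain the two highest-scoring classes
    hide_color=0,       # value used to mask out superpixels
    num_samples=1000,   # perturbed samples drawn around the image
)

# Superpixels that most support the top prediction. A system test
# could assert that these regions fall on the object of interest
# (e.g. the building facade) rather than on background artifacts.
_, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,
    hide_rest=False,
)
print("Top label:", explanation.top_labels[0])
print("Pixels in supporting superpixels:", int(mask.sum()))

A test suite built around this pattern can flag classifiers that reach the right answer for the wrong reasons, which is the accountability concern the paper raises.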
Keywords: machine learning; accountability gap; computer vision; real estate; urban studies
JEL codes: C52, R30
Date: 2022
New Economics Papers: this item is included in nep-big and nep-cmp
Downloads: https://www.econstor.eu/bitstream/10419/250385/1/1789605806.pdf (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:zbw:zewdip:22001