Elevating Developers’ Accountability Awareness in AI Systems Development
Jan-Hendrik Schmidt,
Sebastian Clemens Bartsch,
Martin Adam and
Alexander Benlian
Additional contact information
Jan-Hendrik Schmidt: Technical University of Darmstadt
Sebastian Clemens Bartsch: Technical University of Darmstadt
Martin Adam: University of Goettingen
Alexander Benlian: Technical University of Darmstadt
Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, 2025, vol. 67, issue 1, No 6, 109-135
Abstract: The increasing proliferation of artificial intelligence (AI) systems presents new challenges for the future of information systems (IS) development, especially in terms of holding stakeholders accountable for the development and impacts of AI systems. However, current governance tools and methods in IS development, such as AI principles or audits, are often criticized for their ineffectiveness in influencing AI developers’ attitudes and perceptions. Drawing on construal level theory and Toulmin’s model of argumentation, this paper employed a sequential mixed method approach to integrate insights from a randomized online experiment (Study 1) and qualitative interviews (Study 2). This combined approach helped us investigate how different types of accountability arguments affect AI developers’ accountability perceptions. In the online experiment, process accountability arguments were found to be more effective than outcome accountability arguments in enhancing AI developers’ perceived accountability. However, when supported by evidence, both types of accountability arguments proved to be similarly effective. The qualitative study corroborates and complements the quantitative study’s conclusions, revealing that process and outcome accountability emerge as distinct theoretical constructs in AI systems development. The interviews also highlight critical organizational and individual boundary conditions that shape how AI developers perceive their accountability. Together, the results contribute to IS research on algorithmic accountability and IS development by revealing the distinct nature of process and outcome accountability, while demonstrating the effectiveness of tailored arguments as governance tools and methods in AI systems development.
Keywords: Artificial intelligence; AI systems development; Accountability; Construal level theory; Toulmin’s model of argumentation; Mixed methods
Date: 2025
Downloads: http://link.springer.com/10.1007/s12599-024-00914-2 (abstract, text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:binfse:v:67:y:2025:i:1:d:10.1007_s12599-024-00914-2
Ordering information: this journal article can be ordered from http://www.springer.com/economics/journal/12599
DOI: 10.1007/s12599-024-00914-2
Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK is currently edited by Martin Bichler