Explainable product backorder prediction exploiting CNN: Introducing explainable models in businesses
Md Shajalal,
Alexander Boden and
Gunnar Stevens
Additional contact information
Md Shajalal: Fraunhofer-Institute for Applied Information Technology FIT
Alexander Boden: Fraunhofer-Institute for Applied Information Technology FIT
Gunnar Stevens: University of Siegen
Electronic Markets, 2022, vol. 32, issue 4, No 16, 2107-2122
Abstract:
Due to its expected positive impact on business, the application of artificial intelligence has increased widely. However, the decision-making procedures of such models are often complex and not easily understandable to a company's stakeholders, i.e., the people who have to follow up on recommendations or make sense of a system's automated decisions. This opaqueness and black-box nature might hinder adoption, as users struggle to make sense of and trust the predictions of AI models. Recent research on eXplainable Artificial Intelligence (XAI) has focused mainly on explaining models to AI experts in order to debug them and improve their performance. In this article, we explore how such systems could be made explainable to the stakeholders. To this end, we propose a new convolutional neural network (CNN)-based explainable predictive model for product backorder prediction in inventory management. Backorders are orders that customers place for products that are currently not in stock. The company then takes on the risk of producing or acquiring the backordered products, while customers may cancel their orders in the meantime if fulfillment takes too long, leaving the company with unsold items in its inventory. Hence, for their strategic inventory management, companies need to make decisions based on assumptions. Our argument is that these tasks can be improved by offering explanations for AI recommendations. Accordingly, our research investigates how such explanations could be provided, employing Shapley additive explanations to explain the model's overall priorities in decision-making. In addition, we introduce locally interpretable surrogate models that can explain any individual prediction of the model. The experimental results demonstrate the effectiveness of our approach in predicting backorders in terms of standard evaluation metrics, outperforming known related works with an AUC of 0.9489. Our approach demonstrates how current limitations of predictive technologies can be addressed in the business domain.
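For a concrete sense of the pipeline the abstract describes, below is a minimal sketch in Python: a 1D-CNN classifier over tabular inventory features, Kernel SHAP for the global feature priorities, and a LIME tabular surrogate for one individual prediction. The feature names, synthetic data, network shape, and hyperparameters are illustrative assumptions rather than the authors' actual dataset or configuration, and LIME stands in here for the locally interpretable surrogate models the paper introduces.

    # Hedged sketch: 1D-CNN backorder classifier + SHAP global explanation
    # + LIME-style local surrogate. Feature names, shapes, and settings
    # are illustrative assumptions, not the paper's configuration.
    import numpy as np
    import tensorflow as tf
    import shap
    from lime.lime_tabular import LimeTabularExplainer

    rng = np.random.default_rng(0)

    # Hypothetical inventory attributes and synthetic labeled data.
    feature_names = ["national_inv", "lead_time", "in_transit_qty",
                     "forecast_3_month", "sales_3_month", "min_bank"]
    n, d = 1000, len(feature_names)
    X = rng.normal(size=(n, d)).astype("float32")
    y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=n) < 0).astype("float32")

    # A small 1D-CNN: treat each feature vector as a length-d, 1-channel sequence.
    model = tf.keras.Sequential([
        tf.keras.layers.Reshape((d, 1), input_shape=(d,)),
        tf.keras.layers.Conv1D(16, kernel_size=3, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)

    def predict_fn(data):
        # Return the backorder probability as a 1D array.
        return model.predict(data, verbose=0).ravel()

    # Global explanation: model-agnostic Kernel SHAP over a small background set.
    explainer = shap.KernelExplainer(predict_fn, X[:50])
    shap_values = explainer.shap_values(X[:100], nsamples=100)
    global_importance = np.abs(shap_values).mean(axis=0)
    for name, imp in sorted(zip(feature_names, global_importance),
                            key=lambda t: -t[1]):
        print(f"{name}: {imp:.4f}")

    # Local explanation: an interpretable surrogate for one prediction (LIME).
    def proba_fn(data):
        p = predict_fn(data)
        return np.column_stack([1 - p, p])

    lime_explainer = LimeTabularExplainer(
        X, feature_names=feature_names,
        class_names=["no_backorder", "backorder"], mode="classification")
    exp = lime_explainer.explain_instance(X[0], proba_fn, num_features=4)
    print(exp.as_list())

Kernel SHAP is chosen here because it is model-agnostic; a gradient-based explainer such as shap.DeepExplainer would also work for a CNN. Averaging absolute SHAP values per feature yields a global priority ranking of the kind the abstract refers to, while the LIME surrogate fits a sparse local model around a single instance to explain that individual prediction.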
Keywords: eXplainable artificial intelligence (XAI); Backorder prediction; CNN; Local explanation; Global explanation
JEL-codes: C80 M1 M15 O33
Date: 2022
Citations: 2
Downloads: http://link.springer.com/10.1007/s12525-022-00599-z (abstract, text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:elmark:v:32:y:2022:i:4:d:10.1007_s12525-022-00599-z
Ordering information: This journal article can be ordered from
http://www.springer. ... ystems/journal/12525
DOI: 10.1007/s12525-022-00599-z
Electronic Markets is currently edited by Rainer Alt and Hans-Dieter Zimmermann
More articles in Electronic Markets from Springer, IIM University of St. Gallen
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.