EconPapers    
Warnings and Endorsements: Improving Human–AI Collaboration in the Presence of Outliers

Matthew DosSantos DiSorbo, Kris Johnson Ferreira, Maya Balakrishnan and Jordan Tong
Additional contact information
Matthew DosSantos DiSorbo: Harvard Business School, Boston, Massachusetts 02163
Kris Johnson Ferreira: Harvard Business School, Boston, Massachusetts 02163
Maya Balakrishnan: University of Texas at Dallas, Richardson, Texas 75080
Jordan Tong: Wisconsin School of Business, University of Wisconsin-Madison, Madison, Wisconsin 53706

Manufacturing & Service Operations Management, 2025, vol. 27, issue 6, 1814-1831

Abstract: Problem definition: Whereas artificial intelligence (AI) algorithms may perform well on data that are representative of the training set (inliers), they may err when extrapolating on nonrepresentative data (outliers). How can humans and algorithms work together to make better decisions when faced with outliers and inliers? Methodology/results: We study a human–AI collaboration on prediction tasks using a bias adjustment framework and hypothesize that humans tend toward naïve adjusting behavior: humans make adjustments to AI predictions that are too similar across inliers and outliers when, ideally, adjustments should be larger on outliers than inliers. In an online experiment, we demonstrate that participants are indeed unable to sufficiently differentiate their adjustments to an AI algorithm when faced with both inliers and outliers, leading to a 143%–176% increase in their absolute deviation from the optimal prediction compared with participants facing either all inliers or all outliers. We design a warning (an endorsement) that alerts participants when feature values constitute outliers (inliers), and in a second experiment, we show that this warning (endorsement) helps participants differentiate adjustments, reducing their absolute deviation from the optimal prediction by an average of 35% (28%). Deploying both interventions together reduces participants’ absolute deviation from the optimal prediction by 49%. In a third experiment, we demonstrate the robustness of warnings and endorsements in the presence of “fringeliers”—data points with features marginally outside the range of the training data set. Managerial implications: Our work details an important behavioral bias and identifies a simple educational intervention for mitigation. Ultimately, we hope that this work will help managers better equip their employees for human–AI collaboration.
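The warning/endorsement idea in the abstract — alert the human when an observation's feature values fall outside the range of the training data — can be caricatured in a few lines. The sketch below is an illustration of that range check only, not the authors' implementation; the data, function names, and labels are hypothetical.

```python
def fit_ranges(training_rows):
    """Record the per-feature min and max observed in the training data."""
    mins = list(training_rows[0])
    maxs = list(training_rows[0])
    for row in training_rows[1:]:
        for j, value in enumerate(row):
            mins[j] = min(mins[j], value)
            maxs[j] = max(maxs[j], value)
    return mins, maxs

def label(row, mins, maxs):
    """Show a 'warning' (outlier) if any feature lies outside the
    training range, otherwise an 'endorsement' (inlier)."""
    outside = any(v < lo or v > hi for v, lo, hi in zip(row, mins, maxs))
    return "warning" if outside else "endorsement"

# Toy training set with two features.
train = [[1.0, 10.0], [2.0, 12.0], [3.0, 11.0]]
mins, maxs = fit_ranges(train)
print(label([2.5, 10.5], mins, maxs))  # every feature within range
print(label([9.0, 10.5], mins, maxs))  # first feature outside range
```

A "fringelier" in this framing would be a row whose features sit just barely outside `[mins, maxs]`; the paper's third experiment tests whether the same two signals remain effective there.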

Keywords: human–AI collaboration; behavioral operations; experiments (search for similar items in EconPapers)
Date: 2025

Downloads: (external link)
http://dx.doi.org/10.1287/msom.2024.0854 (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:inm:ormsom:v:27:y:2025:i:6:p:1814-1831


More articles in Manufacturing & Service Operations Management from INFORMS. Contact information at EDIRC.
Bibliographic data for series maintained by Chris Asher.

Page updated 2025-11-02
Handle: RePEc:inm:ormsom:v:27:y:2025:i:6:p:1814-1831