Human oversight done right: The AI Act should use humans to monitor AI only when effective

Johannes Walter

No 02/2023, ZEW policy briefs from ZEW - Leibniz Centre for European Economic Research

Abstract: The EU's proposed Artificial Intelligence Act (AI Act) is meant to ensure safe AI systems in high-risk applications. The Act relies on human supervision of machine-learning algorithms, yet mounting evidence indicates that such oversight is not always reliable. In many cases, humans cannot accurately assess the quality of algorithmic recommendations, and thus fail to prevent harmful behaviour. This policy brief proposes three ways to solve the problem: First, Article 14 of the AI Act should be revised to acknowledge that humans often have difficulty assessing recommendations made by algorithms. Second, the suitability of human oversight for preventing harmful outcomes should be empirically tested for every high-risk application under consideration. Third, following Biermann et al. (2022), human decision-makers should receive feedback on past decisions to enable learning and improve future decisions.

Date: 2023
New Economics Papers: this item is included in nep-ain, nep-cmp and nep-mfd

Downloads: https://www.econstor.eu/bitstream/10419/271285/1/1838979220.pdf (application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:zbw:zewpbs:022023

More papers in ZEW policy briefs from ZEW - Leibniz Centre for European Economic Research. Contact information at EDIRC.
Bibliographic data for this series is maintained by ZBW - Leibniz Information Centre for Economics.

 
Handle: RePEc:zbw:zewpbs:022023