Metrics, Explainability and the European AI Act Proposal
Francesco Sovrano, Salvatore Sapienza, Monica Palmirani and Fabio Vitali
Additional contact information
Francesco Sovrano: Department of Computer Science and Engineering (DISI), Università di Bologna, 40126 Bologna, Italy
Salvatore Sapienza: CIRSFID—ALMA AI, Università di Bologna, 40126 Bologna, Italy
Monica Palmirani: CIRSFID—ALMA AI, Università di Bologna, 40126 Bologna, Italy
Fabio Vitali: Department of Computer Science and Engineering (DISI), Università di Bologna, 40126 Bologna, Italy
J, 2022, vol. 5, issue 1, 1-13
Abstract:
On 21 April 2021, the European Commission proposed the first legal framework on Artificial Intelligence (AI) to address the risks posed by this emerging method of computation. The Commission proposed a Regulation known as the AI Act. The proposal covers not only machine learning but also expert systems and statistical models that have long been in place. Under the proposed AI Act, new obligations are set to ensure transparency, lawfulness, and fairness. Their goal is to establish mechanisms that ensure quality at launch and throughout the whole life cycle of AI-based systems, thus providing the legal certainty that encourages innovation and investment in AI systems while preserving fundamental rights and values. A standardisation process is ongoing: several entities (e.g., ISO) and scholars are discussing how to design systems that are compliant with the forthcoming Act, and explainability metrics play a significant role. Specifically, the AI Act sets new minimum requirements of explicability (transparency and explainability) for the AI systems labelled as “high-risk” and listed in Annex III. These requirements call for technical explanations that cover the right amount of information in a meaningful way. This paper investigates how such technical explanations can be deemed to meet the minimum requirements set by the law and expected by society. To answer this question, we propose an analysis of the AI Act aimed at understanding (1) what specific explicability obligations are set and who shall comply with them, and (2) whether any metric for measuring the degree of compliance of such explanatory documentation could be designed. Moreover, by envisaging the legal (or ethical) requirements that such a metric should possess, we discuss how to implement them in practice. More precisely, drawing inspiration from recent advancements in the theory of explanations, our analysis proposes that metrics to measure the kind of explainability endorsed by the proposed AI Act shall be risk-focused, model-agnostic, goal-aware, intelligible, and accessible. We therefore discuss the extent to which these requirements are met by the metrics currently under discussion.
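To make the notion of a model-agnostic explainability metric more concrete, the following minimal sketch (not taken from the paper; all function names, parameters, and the metric itself are illustrative assumptions) shows one way such a score could be computed: a perturbation-based fidelity measure that compares a black-box model with a surrogate explanation purely through their input–output behaviour, without inspecting either model's internals.

    import numpy as np

    # Illustrative sketch only: a hypothetical model-agnostic fidelity score,
    # not the metric proposed or endorsed in the paper.
    def fidelity_score(model_predict, surrogate_predict, X,
                       n_perturbations=200, noise=0.05, seed=0):
        """Return a value in [0, 1] measuring how closely a surrogate
        explanation reproduces the black-box model on small perturbations
        of the data. Both callables map an (n, d) array to an (n,) array
        of scores in [0, 1]."""
        rng = np.random.default_rng(seed)
        X = np.asarray(X, dtype=float)
        # Draw perturbed copies of randomly chosen data points.
        idx = rng.integers(0, len(X), size=n_perturbations)
        X_pert = X[idx] + rng.normal(scale=noise, size=(n_perturbations, X.shape[1]))
        # Query both predictors only through their outputs (model-agnostic).
        y_model = model_predict(X_pert)
        y_surrogate = surrogate_predict(X_pert)
        # 1.0 means the explanation reproduces the model exactly on the perturbed set.
        return 1.0 - float(np.mean(np.abs(y_model - y_surrogate)))

    if __name__ == "__main__":
        # Toy usage with a hypothetical black-box classifier and a linear surrogate.
        black_box = lambda X: 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - X[:, 1])))
        surrogate = lambda X: np.clip(0.5 + 0.4 * X[:, 0] - 0.2 * X[:, 1], 0.0, 1.0)
        X = np.random.default_rng(1).normal(size=(50, 2))
        print("fidelity =", round(fidelity_score(black_box, surrogate, X), 3))

Being model-agnostic here simply means the score queries both predictors only through their outputs; the further properties the paper argues for (risk focus, goal awareness, intelligibility, and accessibility) would require criteria beyond such a purely technical score.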
Keywords: explainable artificial intelligence; explainability; metrics; standardisation; Artificial Intelligence Act
JEL-codes: I1 I10 I12 I13 I14 I18 I19
Date: 2022
Downloads: (external link)
https://www.mdpi.com/2571-8800/5/1/10/pdf (application/pdf)
https://www.mdpi.com/2571-8800/5/1/10/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jjopen:v:5:y:2022:i:1:p:10-138:d:752840