Advancing accountability in AI: Governing and managing risks throughout the lifecycle for trustworthy AI
OECD
No 349, OECD Digital Economy Papers from OECD Publishing
Abstract:
This report presents research and findings on accountability and risk in AI systems by providing an overview of how risk-management frameworks and the AI system lifecycle can be integrated to promote trustworthy AI. It also explores processes and technical attributes that can facilitate the implementation of values-based principles for trustworthy AI and identifies tools and mechanisms to define, assess, treat, and govern risks at each stage of the AI system lifecycle. This report leverages OECD frameworks – including the OECD AI Principles, the AI system lifecycle, and the OECD framework for classifying AI systems – and recognised risk-management and due-diligence frameworks like the ISO 31000 risk-management framework, the OECD Due Diligence Guidance for Responsible Business Conduct, and the US National Institute of Standards and Technology's AI risk-management framework.
Date: 2023-02-23
Downloads: https://doi.org/10.1787/2448f04b-en (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:oec:stiaab:349-en