How are AI developers managing risks?: Insights from responses to the reporting framework of the Hiroshima AI Process Code of Conduct
Karine Perset and Sara Fialho Esposito
No 45, OECD Artificial Intelligence Papers from OECD Publishing
Abstract:
Rapid advances in artificial intelligence (AI) are reshaping economies and societies, creating significant opportunities while raising important questions about the effective governance and risk management of advanced AI systems. Launched in February 2025, the Hiroshima AI Process Reporting Framework is the first international, voluntary tool to help organisations report on their practices against the Hiroshima AI Process International Code of Conduct for Organisations Developing Advanced AI Systems. This report presents preliminary insights from submissions by 20 organisations across diverse sectors and countries, examining their approaches to risk identification and management, transparency, governance, content authentication, AI safety research, and the advancement of global interests.
Date: 2025-09-25
Persistent link: https://EconPapers.repec.org/RePEc:oec:comaaa:45-en