General Framework for AI Security and Privacy
Dilli Prasad Sharma,
Arash Habibi Lashkari,
Mahdi Daghmehchi Firoozjaei,
Samaneh Mahdavifar and
Pulei Xiong
Author affiliations
Dilli Prasad Sharma: University of Toronto
Arash Habibi Lashkari: York University
Mahdi Daghmehchi Firoozjaei: MacEwan University
Samaneh Mahdavifar: McGill University
Pulei Xiong: National Research Council of Canada
Chapter 10 in Understanding AI in Cybersecurity and Secure AI, 2025, pp. 197–219, Springer
Abstract:
This chapter presents a general framework for AI security and privacy. It begins by examining security threats and defenses across the key phases of the AI system development life cycle, including data collection, preprocessing, model training, inference, and system integration. The chapter then discusses NIST's AI Risk Management Framework (AI RMF), focusing on risk identification, system trustworthiness, and the lifecycle dimensions of AI systems. It also outlines core industry frameworks, including Google's Secure AI Framework, and relevant security and privacy standards, such as the ISO/IEC AI security standards, the EU AI Act, and the OECD AI Principles.
Date: 2025
Persistent link: https://EconPapers.repec.org/RePEc:spr:prochp:978-3-031-91524-6_10
Ordering information: This item can be ordered from
http://www.springer.com/9783031915246
DOI: 10.1007/978-3-031-91524-6_10
Series: Progress in IS (Springer)