AI Cybersecurity
Rohan Sharma
Chapter 18 in AI and the Boardroom, 2024, pp 225-236, from Springer
Abstract:
Imagine your business relying heavily on AI for day-to-day decisions, only to discover that a security breach could compromise sensitive data and erode customer trust. As AI becomes deeply integrated into enterprise operations, safeguarding these systems from vulnerabilities is not just important; it is critical. This chapter explores the key security challenges facing AI systems, particularly Large Language Models (LLMs), such as data leaks, malicious actors, and data poisoning. It weighs these threats against proactive defenses such as strict data-handling protocols, stringent access controls, and effective data masking. Through continuous monitoring and collaboration with security experts, businesses can strengthen their AI security frameworks. Actionable insight: develop a solid AI security strategy today by implementing regular system audits, enforcing access restrictions, and keeping your teams informed about AI security best practices. Is your AI infrastructure ready to face modern security challenges?
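As one concrete illustration of the data-masking control named in the abstract, the sketch below redacts common identifiers from text before it is passed to an LLM. It is a minimal sketch in Python under stated assumptions; the patterns, the mask_pii helper, and the example record are illustrative and are not the chapter's own implementation.

    import re

    # Illustrative patterns for common identifiers (assumed for this sketch);
    # a production system would use a vetted PII-detection library and
    # policy-driven rules rather than two ad hoc regular expressions.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def mask_pii(text: str) -> str:
        """Replace detected identifiers with labeled placeholders before the
        text reaches an LLM prompt or a training corpus."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
        return text

    print(mask_pii("Contact Jane at jane.doe@example.com or 555-123-4567."))
    # -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].

The same pre-processing step can sit in front of both inference (prompt sanitization) and training-data ingestion, which is where the chapter's concerns about data leaks and data poisoning intersect.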
Date: 2024
Persistent link: https://EconPapers.repec.org/RePEc:spr:sprchp:979-8-8688-0796-1_18
Ordering information: This item can be ordered from
http://www.springer.com/9798868807961
DOI: 10.1007/979-8-8688-0796-1_18