VTT-LLM: Advancing Vulnerability-to-Tactic-and-Technique Mapping through Fine-Tuning of Large Language Model

Chenhui Zhang, Le Wang (), Dunqiu Fan, Junyi Zhu, Tang Zhou, Liyi Zeng and Zhaohua Li
Additional contact information
Chenhui Zhang: Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
Le Wang: Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
Dunqiu Fan: NSFOCUS Inc., Guangzhou 510006, China
Junyi Zhu: Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
Tang Zhou: Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
Liyi Zeng: Peng Cheng Laboratory, Shenzhen 518000, China
Zhaohua Li: Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen 518110, China

Mathematics, 2024, vol. 12, issue 9, 1-13

Abstract: Vulnerabilities are often accompanied by cyberattacks. CVE is the largest repository of openly disclosed vulnerabilities and keeps expanding. ATT&CK models known multi-step attacks at both the tactical and technical levels and remains up to date. For active defense, it is valuable to correlate each vulnerability in CVE with the ATT&CK tactics and techniques that exploit it. Mapping them manually is not only time-consuming but also difficult to keep up to date. Existing language-based automated mapping methods do not utilize information about attack behaviors from sources outside CVE and ATT&CK and are therefore ineffective. In this paper, we propose a novel framework named VTT-LLM for mapping Vulnerabilities to Tactics and Techniques based on Large Language Models, which consists of a generation model and a mapping model. To generate fine-tuning instructions for the LLM, we create a template that extracts knowledge from CWE (a standardized list of common weaknesses) and CAPEC (a standardized list of common attack patterns). We train the generation model of VTT-LLM by fine-tuning the LLM on these instructions. The generation model correlates vulnerabilities and attacks through their descriptions. The mapping model transforms the descriptions of ATT&CK tactics and techniques into vectors through text embedding and then associates them with attacks through semantic matching. By leveraging the knowledge of CWE and CAPEC, VTT-LLM can thus automate the process of linking vulnerabilities in CVE to the attack tactics and techniques of ATT&CK. Experiments on the latest public dataset, ChatGPT-VDMEval, show the effectiveness of VTT-LLM, which achieves an accuracy of 85.18%, 13.69% and 54.42% higher than the existing CVET and ChatGPT-based methods, respectively. In addition, compared to fine-tuning without outside knowledge, the accuracy of VTT-LLM with chain fine-tuning is 9.24% higher on average across different LLMs.
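The abstract describes a two-stage pipeline: a fine-tuned generation model produces an attack-behavior description for a CVE entry, and a mapping model embeds ATT&CK tactic and technique descriptions as vectors and selects the best match by semantic similarity. The snippet below is a minimal sketch of that second, mapping stage only, assuming a generic sentence-embedding backend (sentence-transformers with all-MiniLM-L6-v2); the embedding model, the example technique entries, and the generated text are illustrative placeholders, not the paper's actual configuration.

```python
# Sketch of the semantic-matching step: embed ATT&CK technique descriptions
# and the generation model's output, then rank by cosine similarity.
# Assumptions: embedding backend and all example texts are illustrative only.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Illustrative ATT&CK entries (technique ID -> description).
attack_entries = {
    "T1190": "Exploit Public-Facing Application: adversaries exploit a weakness "
             "in an internet-facing host to gain initial access.",
    "T1059": "Command and Scripting Interpreter: adversaries abuse command and "
             "script interpreters to execute commands or scripts.",
}

# Placeholder for the text the fine-tuned generation model would produce for a CVE.
generated_attack_behavior = (
    "The attacker sends a crafted request to the vulnerable web service to "
    "execute arbitrary code and gain a foothold on the server."
)

# Embed the technique descriptions and the generated behavior text.
ids = list(attack_entries)
entry_vecs = model.encode([attack_entries[i] for i in ids], normalize_embeddings=True)
query_vec = model.encode(generated_attack_behavior, normalize_embeddings=True)

# With normalized vectors, cosine similarity reduces to a dot product.
scores = entry_vecs @ query_vec
best = ids[int(np.argmax(scores))]
print(f"Best-matching technique: {best} (score={scores.max():.3f})")
```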

Keywords: vulnerabilities; large language model; tactics and techniques; fine-tuning
JEL-codes: C
Date: 2024

Downloads: (external link)
https://www.mdpi.com/2227-7390/12/9/1286/pdf (application/pdf)
https://www.mdpi.com/2227-7390/12/9/1286/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:12:y:2024:i:9:p:1286-:d:1381822


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

Handle: RePEc:gam:jmathe:v:12:y:2024:i:9:p:1286-:d:1381822