Improving software vulnerability classification performance using normalized difference measures
Patrick Kwaku Kudjo,
Selasie Aformaley Brown and
Solomon Mensah
Additional contact information
Patrick Kwaku Kudjo: Wisconsin International University College
Selasie Aformaley Brown: University of Professional Studies
Solomon Mensah: University of Ghana
International Journal of System Assurance Engineering and Management, 2023, vol. 14, issue 3, No 15, 1010-1027
Abstract:
Vulnerability Classification Models (VCMs) play a crucial role in software reliability engineering and have therefore attracted significant attention from researchers and practitioners. Recently, machine learning and data mining techniques have emerged as important paradigms for vulnerability classification. However, existing vulnerability classification models have major drawbacks, including the difficulty of curating real vulnerability reports and their associated code fixes from large software repositories. In addition, the diverse features used to build these models, such as traditional software metrics and text-mining features extracted from term vectors, often lead to the curse of dimensionality, which significantly increases classification time and degrades prediction accuracy. To address these deficiencies, this study presents a vulnerability classification framework based on term frequency-inverse document frequency (TF-IDF) and the normalized difference measure. In the proposed framework, TF-IDF is first used to compute the frequency and weight of each word in the textual description of a vulnerability report. The normalized difference measure is then employed to select an optimal subset of feature words, or terms, for the machine learning algorithms. The approach was validated on three vulnerable software applications containing a total of 3949 real vulnerabilities, using five machine learning algorithms: Naïve Bayes, Naïve Bayes Multinomial, Support Vector Machines, K-Nearest Neighbor, and Decision Tree. Standard classification metrics (precision, recall, F-measure, and accuracy) were used to assess the performance of the models, and the results were validated with the Welch t-test and Cliff's delta effect size.
The outcome of this study demonstrates that the normalized difference measure combined with k-nearest neighbor significantly improves the accuracy of vulnerability report classification.
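The two-step pipeline the abstract describes (TF-IDF term weighting followed by normalized-difference-measure feature ranking) can be sketched in a few lines of stdlib-only Python. This is a minimal illustration, not the authors' implementation: the toy corpus is hypothetical, and the NDM formula used here, |tpr - fpr| / min(tpr, fpr), is the commonly published definition of the measure, which the abstract itself does not spell out.

```python
import math
from collections import Counter

# Hypothetical toy corpus of vulnerability-report descriptions, labelled
# 1 (severe) or 0 (non-severe). The study used 3949 curated real reports.
docs = [
    ("buffer overflow allows remote code execution", 1),
    ("heap overflow leads to remote crash", 1),
    ("minor typo in help text output", 0),
    ("cosmetic issue in help dialog text", 0),
]

def tfidf(docs):
    """Per-document TF-IDF weights: (term frequency) * log(N / doc frequency)."""
    n = len(docs)
    df = Counter()
    for text, _ in docs:
        df.update(set(text.split()))       # each doc counts a term once
    weights = []
    for text, _ in docs:
        tf = Counter(text.split())
        total = sum(tf.values())
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

def ndm(docs, smooth=1e-6):
    """Normalized difference measure per term: |tpr - fpr| / min(tpr, fpr).

    tpr/fpr are the fractions of positive/negative documents containing the
    term; a small smoothing constant avoids division by zero. Terms that
    concentrate in one class score high and are kept as features.
    """
    pos = [set(t.split()) for t, y in docs if y == 1]
    neg = [set(t.split()) for t, y in docs if y == 0]
    vocab = set().union(*pos, *neg)
    scores = {}
    for term in vocab:
        tpr = sum(term in d for d in pos) / len(pos)
        fpr = sum(term in d for d in neg) / len(neg)
        scores[term] = abs(tpr - fpr) / (min(tpr, fpr) + smooth)
    return scores

scores = ndm(docs)
top = sorted(scores, key=scores.get, reverse=True)[:5]   # reduced feature set
```

In this sketch, class-discriminating terms such as "overflow" outrank terms like "buffer" that appear in only some documents of one class; the top-ranked subset would then feed the TF-IDF vectors into a classifier such as k-nearest neighbor, mitigating the curse of dimensionality the abstract highlights.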
Keywords: Software vulnerability; Feature selection; Normalized difference measure; Severity
Date: 2023
Citations: View citations in EconPapers (1)
Downloads: (external link)
http://link.springer.com/10.1007/s13198-023-01911-6 Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:ijsaem:v:14:y:2023:i:3:d:10.1007_s13198-023-01911-6
Ordering information: This journal article can be ordered from
http://www.springer.com/engineering/journal/13198
DOI: 10.1007/s13198-023-01911-6
International Journal of System Assurance Engineering and Management is currently edited by P.K. Kapur, A.K. Verma and U. Kumar
More articles in International Journal of System Assurance Engineering and Management from Springer, The Society for Reliability, Engineering Quality and Operations Management (SREQOM), India, and Division of Operation and Maintenance, Lulea University of Technology, Sweden
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.