EconPapers    

Efficient Network Traffic Analysis Using Large-Parameter LLMs on Consumer-Grade GPUs

Xingshen Wei, Zhihua Wang, Duo Chen and Lizhao You
Additional contact information
Xingshen Wei: State Grid Electric Power Research Institute Co., Ltd., Nanjing 211106, China
Zhihua Wang: State Grid Shanghai Municipal Electric Power Company, Shanghai 200122, China
Duo Chen: School of Informatics, Xiamen University, Xiamen 361102, China
Lizhao You: School of Informatics, Xiamen University, Xiamen 361102, China

Mathematics, 2025, vol. 13, issue 23, 1-19

Abstract: As networks grow in scale and cyberattacks grow in sophistication, traditional learning-based traffic analysis methods struggle to generalize. Large Language Model (LLM)-based approaches generalize better, but they train and infer inefficiently on the consumer-grade GPUs typical of resource-constrained deployments. As a result, existing LLM-based methods often fall back on small-parameter models, which limits their effectiveness. To overcome these limitations, we propose a large-parameter LLM-based algorithm for network traffic analysis that improves both generalization and accuracy. We further introduce two key techniques that make deployment on consumer-grade GPUs practical and efficient: (a) a traffic-to-text mapping strategy that lets LLMs process raw network traffic, coupled with a LoRA-based fine-tuning mechanism that improves adaptability across downstream tasks while reducing training overhead; and (b) a sparsity-aware inference acceleration mechanism that uses a hot–cold neuron allocation strategy to relieve hardware bottlenecks and predicts inactive neurons to skip redundant computation. Experiments on a consumer-grade NVIDIA RTX A6000 GPU show that, benefiting from the large-parameter model, our method outperforms existing LLM-based approaches by 6–8% in accuracy across various network traffic analysis tasks. Our approach also achieves up to a 4.07× improvement in inference efficiency over llama.cpp, demonstrating both the effectiveness and practicality of the proposed design for real-world network traffic analysis.
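The paper's learned activation predictor and hot–cold neuron placement are its own contributions and are not reproduced here; as a minimal NumPy sketch of the underlying skip idea (assuming a ReLU feed-forward block, with an oracle standing in for the learned predictor), computing only the neurons predicted to be active leaves the layer output unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)

# One transformer FFN block: up-projection, ReLU, down-projection.
d_model, d_ff = 256, 1024
W_up = rng.standard_normal((d_ff, d_model)).astype(np.float32)
W_down = rng.standard_normal((d_model, d_ff)).astype(np.float32)

def ffn_dense(x):
    h = np.maximum(x @ W_up.T, 0.0)  # ReLU zeroes many neurons exactly
    return h @ W_down.T

def ffn_sparse(x, active):
    # Compute only the W_up rows / W_down columns for neurons the
    # predictor marks active; skipped neurons contribute 0 after ReLU.
    h = np.maximum(x @ W_up[active].T, 0.0)
    return h @ W_down[:, active].T

x = rng.standard_normal((1, d_model)).astype(np.float32)

# Oracle stand-in for the learned predictor: the truly active neurons.
active = np.where((x @ W_up.T).ravel() > 0)[0]

# Skipping predicted-inactive neurons does not change the output.
assert np.allclose(ffn_dense(x), ffn_sparse(x, active), atol=1e-4)
print(f"computed {active.size}/{d_ff} neurons "
      f"({active.size / d_ff:.0%} of the dense FFN work)")
```

In the paper's setting the predictor is presumably a small learned model that can mispredict, and the hot–cold allocation keeps frequently active ("hot") neuron weights in fast GPU memory while rarely active ("cold") ones are served from slower storage; the oracle above only demonstrates why correctly skipped neurons cost nothing under ReLU-style activation sparsity.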

Keywords: network security; network traffic analysis; large language model; LoRA fine-tuning; sparsity-aware acceleration
JEL-codes: C
Date: 2025

Downloads: (external link)
https://www.mdpi.com/2227-7390/13/23/3754/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/23/3754/ (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:23:p:3754-:d:1801045


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

Page updated 2025-11-25
Handle: RePEc:gam:jmathe:v:13:y:2025:i:23:p:3754-:d:1801045