Weighted Transformer Classifier for User-Agent Progression Modeling, Bot Contamination Detection, and Traffic Trust Scoring

Geza Lucz and Bertalan Forstner
Additional contact information
Geza Lucz: Department of Automation and Applied Informatics, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary
Bertalan Forstner: Department of Automation and Applied Informatics, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary

Mathematics, 2025, vol. 13, issue 19, 1-17

Abstract: In this paper, we present a method for determining the level of bot contamination of web-based user agents. Bots and robotic agents commonly masquerade as human to avoid content and performance restrictions. This paper continues our previous work, using over 600 million web log entries collected from over 4000 domains to derive and generalize how the prominence of specific web browser versions progresses over time, assuming genuine human agency. Here, we introduce a parametric model capable of reproducing this progression in a tunable way. This simulation allows us to tag the human-generated traffic in our data accurately. Together with the highest-confidence self-tagged bot traffic, these tags let us train a Transformer-based classifier that can determine the bot contamination (a "botness" metric) of user agents without prior labels. Unlike traditional syntactic or rule-based filters, our model learns temporal patterns of raw and heuristic-derived features, capturing nuanced shifts in request volume, response ratios, content targeting, and entropy-based indicators over time. This rolling-window pre-classification of traffic allows content providers to bin streams according to their bot-infusion levels and direct them to specifically tuned filtering pipelines, given current load levels and available free resources. We also show that aggregating traffic data from multiple sources enhances our model's accuracy, and that the model can be further tailored to regional characteristics using localized metadata from standard web server logs. The ability to adjust the heuristics to geographical or use-case specifics makes our method robust and flexible. Our evaluation shows that 65% of otherwise unclassified traffic is bot-based, underscoring the urgency of robust detection systems. We also propose practical methods for independent or third-party verification and for further classification by abusiveness.
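The rolling-window, heuristic-derived features the abstract mentions (request volume, response ratios, entropy-based indicators) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' pipeline: the log-entry keys (`path`, `status`), the window size, and the feature names are hypothetical choices for the example.

```python
import math
from collections import Counter, deque

def shannon_entropy(items):
    """Shannon entropy (in bits) of a multiset of items, e.g. requested paths.

    A single repeatedly hammered path gives 0 bits; uniformly spread
    requests over distinct paths give log2(n) bits.
    """
    counts = Counter(items)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def rolling_features(log_entries, window=5):
    """Yield one feature dict per full rolling window of an ordered log stream.

    Each entry is assumed to be a dict with hypothetical keys
    'path' (requested URL path) and 'status' (HTTP status code).
    """
    buf = deque(maxlen=window)  # fixed-size window over the request stream
    for entry in log_entries:
        buf.append(entry)
        if len(buf) == window:
            ok = sum(1 for e in buf if 200 <= e["status"] < 300)
            yield {
                "volume": len(buf),                                  # request volume
                "ok_ratio": ok / len(buf),                           # 2xx response ratio
                "path_entropy": shannon_entropy(e["path"] for e in buf),
            }
```

Sequences of such per-window vectors are the kind of temporal input a Transformer-based classifier could consume to score botness; a human browsing session tends to show moderate path entropy and high success ratios, while a scraper often shows either near-zero entropy (hammering one endpoint) or near-maximal entropy with elevated error rates.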

Keywords: bot detection; user-agent lifecycle; temporal modeling; transformer networks; weighted training
JEL-codes: C
Date: 2025

Downloads: (external link)
https://www.mdpi.com/2227-7390/13/19/3153/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/19/3153/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:19:p:3153-:d:1763743


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Page updated 2025-10-03
Handle: RePEc:gam:jmathe:v:13:y:2025:i:19:p:3153-:d:1763743