Traffic Sign Detection Based on Lightweight Multiscale Feature Fusion Network
Shan Lin,
Zicheng Zhang,
Jie Tao (),
Fan Zhang,
Xing Fan and
Qingchang Lu
Additional contact information
Shan Lin: School of Electronic and Control Engineering, Chang’an University, Xi’an 710064, China
Zicheng Zhang: School of Electronic and Control Engineering, Chang’an University, Xi’an 710064, China
Jie Tao: Zhejiang Institute of Mechanical and Electrical Engineering Co., Ltd., Hangzhou 310002, China
Fan Zhang: School of Information Engineering, Chang’an University, Xi’an 710064, China
Xing Fan: School of Electronic and Control Engineering, Chang’an University, Xi’an 710064, China
Qingchang Lu: School of Electronic and Control Engineering, Chang’an University, Xi’an 710064, China
Sustainability, 2022, vol. 14, issue 21, 1-18
Abstract:
Traffic sign detection is a research hotspot in advanced driver-assistance systems. Complex backgrounds, illumination changes, and large scale variations of traffic sign targets, together with the slow inference and low accuracy of existing detection methods, make the task challenging. To address these problems, this paper proposes a traffic sign detection method based on a lightweight multiscale feature fusion network. Because a lightweight network model is simple and has few parameters, it can greatly improve detection speed. To learn more target features and improve the generalization ability of the model, a multiscale feature fusion method is used during training to improve recognition accuracy. Firstly, MobileNetV3 was selected as the backbone network, a new spatial attention mechanism was introduced, and a spatial attention branch and a channel attention branch were constructed to obtain a mixed attention weight map. Secondly, a feature-interleaving module was constructed to convert the single-scale feature map of a specified layer into a multiscale feature fusion map, jointly encoding high-level and low-level semantic information. Then, a lightweight multiscale feature extraction base network with this attention mechanism was built from the above components. Finally, a key-point detection network was constructed to output the center-point locations, offsets, and category probabilities of traffic signs, achieving their detection and recognition. The model was trained, validated, and tested on the TT100K dataset; the detection accuracy for 36 common categories of traffic signs exceeded 85%, and for five of those categories it exceeded 95%.
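The mixed attention weight map described above combines a channel attention branch (which channels matter) with a spatial attention branch (which locations matter). A minimal numpy sketch of this idea follows; the function name `mixed_attention` and the use of simple pooling in place of the paper's learned projections are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mixed_attention(feat):
    """Toy sketch of a mixed spatial/channel attention weight map.

    feat: feature tensor of shape (C, H, W).
    Channel branch: global average pooling -> one weight per channel.
    Spatial branch: channel-wise mean -> one weight per pixel.
    The two branches broadcast-multiply into one mixed weight map
    that reweights the input features. (The learned convolutions and
    fully connected layers of a real attention module are omitted.)
    """
    c_weight = sigmoid(feat.mean(axis=(1, 2)))[:, None, None]  # (C, 1, 1)
    s_weight = sigmoid(feat.mean(axis=0))[None, :, :]          # (1, H, W)
    return feat * c_weight * s_weight                          # (C, H, W)

feat = np.random.randn(8, 16, 16).astype(np.float32)
out = mixed_attention(feat)
print(out.shape)  # (8, 16, 16)
```

The broadcast product is why the result is called a "mixed" weight map: each output cell is scaled by both its channel weight and its spatial weight.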
The results showed that, compared with the established methods Faster R-CNN, CornerNet, and CenterNet, traffic sign detection based on the lightweight multiscale feature fusion network had clear advantages in recognition speed and accuracy, significantly improved detection of small targets, and achieved better real-time performance.
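The key-point detection head described in the abstract outputs center-point locations, offsets, and category probabilities, in the style of CenterNet. A minimal numpy sketch of how such outputs can be decoded into detections follows; the function `decode_centers`, the `stride` and `thresh` parameters, and the threshold-based peak picking (instead of a 3x3 max-pool non-maximum suppression) are simplifying assumptions.

```python
import numpy as np

def decode_centers(heatmap, offset, stride=4, thresh=0.5):
    """Toy decode of a CenterNet-style key-point head.

    heatmap: (K, H, W) per-class center-point scores in [0, 1].
    offset:  (2, H, W) sub-pixel offsets (dx, dy) for each cell.
    Returns a list of (class_id, x, y, score) tuples in input-image
    coordinates, where stride maps feature cells back to pixels.
    """
    dets = []
    num_classes = heatmap.shape[0]
    for k in range(num_classes):
        ys, xs = np.where(heatmap[k] >= thresh)   # candidate peaks
        for y, x in zip(ys, xs):
            cx = (x + offset[0, y, x]) * stride   # refine x, map to pixels
            cy = (y + offset[1, y, x]) * stride   # refine y, map to pixels
            dets.append((k, float(cx), float(cy), float(heatmap[k, y, x])))
    return dets

# One class, one peak at cell (y=2, x=3) with a +0.5 x-offset.
hm = np.zeros((1, 4, 4)); hm[0, 2, 3] = 0.9
off = np.zeros((2, 4, 4)); off[0, 2, 3] = 0.5
print(decode_centers(hm, off))  # [(0, 14.0, 8.0, 0.9)]
```

Because the head predicts centers directly, no anchor boxes or corner grouping are needed, which is one reason center-point detectors can be fast at inference time.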
Keywords: traffic engineering; traffic sign detection; convolutional neural network; multiscale feature fusion; attention mechanism
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56
Date: 2022
Downloads: (external link)
https://www.mdpi.com/2071-1050/14/21/14019/pdf (application/pdf)
https://www.mdpi.com/2071-1050/14/21/14019/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:14:y:2022:i:21:p:14019-:d:955562
Sustainability is currently edited by Ms. Alexandra Wu