Improved DeepLabV3+ for UAV-Based Highway Lane Line Segmentation
Yueze Wang,
Dudu Guo,
Yang Wang,
Hongbo Shuai,
Zhuzhou Li and
Jin Ran
Additional contact information
Yueze Wang: School of Intelligent Manufacturing Modern Industry, Xinjiang University, Urumqi 830017, China
Dudu Guo: School of Traffic and Transportation Engineering, Xinjiang University, Urumqi 830017, China
Yang Wang: Xinjiang Transportation Planning and Survey & Design Research Institute Co., Ltd., Urumqi 830017, China
Hongbo Shuai: School of Intelligent Manufacturing Modern Industry, Xinjiang University, Urumqi 830017, China
Zhuzhou Li: School of Intelligent Manufacturing Modern Industry, Xinjiang University, Urumqi 830017, China
Jin Ran: School of Traffic and Transportation Engineering, Xinjiang University, Urumqi 830017, China
Sustainability, 2025, vol. 17, issue 16, 1-22
Abstract:
Sustainable highway infrastructure maintenance depends critically on precise lane line detection, yet conventional inspection approaches remain resource-intensive, carbon-intensive, and hazardous to personnel. To mitigate these constraints and address the low accuracy and large parameter counts of existing models, this study proposes a highway lane line segmentation method for unmanned aerial vehicle (UAV) imagery based on an improved DeepLabV3+ model that resolves the multi-scale lane line segmentation challenges such imagery poses. MobileNetV2 is used as the backbone network to significantly reduce the number of model parameters. The Squeeze-and-Excitation (SE) attention mechanism is integrated to enhance feature extraction, particularly at lane line edges. A Feature Pyramid Network (FPN) is incorporated to improve multi-scale lane line feature extraction. We introduce a novel Waterfall Atrous Spatial Pyramid Pooling (WASPP) module that applies cascaded atrous convolutions with progressively adjusted dilation rates to expand the receptive field and aggregate contextual information across scales. The improved model outperforms the original DeepLabV3+ by 5.04 percentage points in mIoU (85.30% vs. 80.26%) and 3.35 percentage points in F1-score (91.74% vs. 88.39%), while cutting parameters by 85% (8.03 M vs. 54.8 M) and reducing training time by 2 h 50 min, thereby improving lane line segmentation accuracy, shrinking the model, and lowering the carbon footprint of inspection.
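The two architectural additions described in the abstract, SE channel attention and the waterfall-style cascade of atrous convolutions, can be sketched in PyTorch as follows. This is a minimal illustrative sketch: the dilation rates, channel widths, and layer layout are assumptions for demonstration, not the authors' published configuration.

# Minimal sketch of an SE attention block and a waterfall (cascaded) atrous
# convolution module. All hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel attention: squeeze via global average pooling, excite via a
    small bottleneck MLP, then rescale the input feature map per channel."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))            # squeeze: (B, C)
        w = self.fc(w).view(b, c, 1, 1)   # excite: per-channel weights
        return x * w

class WaterfallASPP(nn.Module):
    """Waterfall-style ASPP: atrous convolutions applied in sequence, so each
    stage sees the previous stage's output and the receptive field grows
    progressively; intermediate outputs are concatenated and fused."""
    def __init__(self, in_ch: int, out_ch: int, dilations=(3, 6, 12)):
        super().__init__()
        self.stages = nn.ModuleList()
        ch = in_ch
        for d in dilations:
            self.stages.append(nn.Sequential(
                nn.Conv2d(ch, out_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ))
            ch = out_ch  # next stage consumes the previous stage's output
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outs = []
        for stage in self.stages:
            x = stage(x)          # cascade through increasing dilation rates
            outs.append(x)
        return self.fuse(torch.cat(outs, dim=1))

if __name__ == "__main__":
    feat = torch.randn(1, 320, 32, 32)    # e.g. MobileNetV2 high-level features
    feat = SEBlock(320)(feat)
    out = WaterfallASPP(320, 256)(feat)
    print(out.shape)                      # torch.Size([1, 256, 32, 32])

The cascade keeps spatial resolution fixed (padding equals dilation for 3x3 kernels) while each successive stage enlarges the effective receptive field, which is the core idea behind the waterfall variant of ASPP described above.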
Keywords: lane line detection; semantic segmentation; UAV imagery; DeepLabV3+; attention mechanism
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56
Date: 2025
Downloads: (external link)
https://www.mdpi.com/2071-1050/17/16/7317/pdf (application/pdf)
https://www.mdpi.com/2071-1050/17/16/7317/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:17:y:2025:i:16:p:7317-:d:1723581