A Multi-LiDAR Self-Calibration System Based on Natural Environments and Motion Constraints

Yuxuan Tang, Jie Hu (), Zhiyong Yang, Wencai Xu, Shuaidi He and Bolun Hu
Additional contact information
Yuxuan Tang: Hubei Research Center for New Energy & Intelligent Connected Vehicle, Wuhan University of Technology, Luoshi Road, Wuhan 430070, China
Jie Hu: Hubei Research Center for New Energy & Intelligent Connected Vehicle, Wuhan University of Technology, Luoshi Road, Wuhan 430070, China
Zhiyong Yang: Hubei Agricultural Machinery Institute, Hubei University of Technology, Nanli Road, Wuhan 430068, China
Wencai Xu: Hubei Research Center for New Energy & Intelligent Connected Vehicle, Wuhan University of Technology, Luoshi Road, Wuhan 430070, China
Shuaidi He: Hubei Research Center for New Energy & Intelligent Connected Vehicle, Wuhan University of Technology, Luoshi Road, Wuhan 430070, China
Bolun Hu: Commercial Product R&D Institute, Dongfeng Automobile Co., Ltd., Wuhan 430056, China

Mathematics, 2025, vol. 13, issue 19, 1-18

Abstract: Autonomous commercial vehicles often mount multiple LiDARs to enlarge their field of view, but conventional calibration is labor-intensive and prone to drift during long-term operation. We present an online self-calibration method that combines a ground-plane motion constraint with a virtual RGB–D projection, mapping 3D point clouds to 2D feature/depth images to reduce feature extraction cost while preserving 3D structure. Motion consistency across consecutive frames enables a reduced-dimension hand–eye formulation. Within this formulation, the estimation integrates geometric constraints on SE(3) using Lagrange multiplier aggregation and quasi-Newton refinement, making identifiability, conditioning, and convergence behavior explicit. An online monitor evaluates plane alignment and LiDAR–INS odometry consistency to detect degradation and trigger recalibration. Tests on a commercial vehicle with six LiDARs and on nuScenes demonstrate accuracy comparable to offline, target-based methods while supporting practical online use. On the vehicle, maximum errors are 6.058 cm (translation) and 4.768° (rotation); on nuScenes, 2.916 cm and 5.386°. The approach streamlines calibration, enables online monitoring, and remains robust in real-world settings.
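Illustrative sketch (not from the paper): the abstract names two standard ingredients that can be prototyped compactly, RANSAC ground-plane extraction and a reduced-dimension (planar) hand–eye estimate refined with a quasi-Newton (BFGS) step. The NumPy/SciPy sketch below is a minimal, self-contained illustration of those two steps under an assumed planar-motion model; the function names, thresholds, and synthetic data are assumptions for illustration only, and the authors' full method additionally involves the virtual RGB–D projection, SE(3) constraints with Lagrange multipliers, and the online consistency monitor described above.

    # Minimal sketch: RANSAC ground plane + planar hand-eye with BFGS refinement.
    # All names, thresholds, and data below are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize

    def ransac_ground_plane(points, iters=200, thresh=0.05, seed=None):
        """Fit a plane n.x + d = 0 to an (N, 3) point cloud with a basic RANSAC loop."""
        rng = np.random.default_rng(seed)
        best_count, best_model = 0, None
        for _ in range(iters):
            p = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p[1] - p[0], p[2] - p[0])
            norm = np.linalg.norm(n)
            if norm < 1e-9:          # degenerate (collinear) sample
                continue
            n, d = n / norm, -(n / norm) @ p[0]
            count = int(np.sum(np.abs(points @ n + d) < thresh))
            if count > best_count:
                best_count, best_model = count, (n, d)
        return best_model            # (unit normal, offset)

    def rot2(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    def planar_hand_eye(motions_a, motions_b, x0=np.zeros(3)):
        """Estimate the SE(2) extrinsic X = (yaw, tx, ty) between two LiDARs from
        per-frame planar ego-motions, using the hand-eye constraint A_k X = X B_k.
        Each motion is a (theta, t) pair with t a 2-vector; residuals are minimized
        with SciPy's BFGS (quasi-Newton) solver."""
        def cost(x):
            th_x, t_x = x[0], x[1:]
            total = 0.0
            for (th_a, t_a), (th_b, t_b) in zip(motions_a, motions_b):
                res = (rot2(th_a) - np.eye(2)) @ t_x + t_a - rot2(th_x) @ t_b
                total += res @ res
            return total
        return minimize(cost, x0, method="BFGS").x

    if __name__ == "__main__":
        # Synthetic check with an assumed ground-truth extrinsic
        # (yaw = 0.2 rad, planar offset = [1.5, -0.4] m).
        rng = np.random.default_rng(0)
        th_true, t_true = 0.2, np.array([1.5, -0.4])
        motions_a, motions_b = [], []
        for _ in range(50):
            th, t = rng.uniform(-0.3, 0.3), rng.uniform(-1.0, 1.0, 2)
            motions_b.append((th, t))
            # Under planar motion A = X B X^{-1}; in 2D the rotation angle is shared.
            motions_a.append((th, rot2(th_true) @ t + (np.eye(2) - rot2(th)) @ t_true))
        print(planar_hand_eye(motions_a, motions_b))   # expected near [0.2, 1.5, -0.4]

On the synthetic motions at the bottom, the BFGS estimate should approximately recover the simulated yaw and offset; the need for sufficiently varied rotations across frames is the identifiability issue that the planar hand–eye formulation depends on.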

Keywords: multi-LiDAR calibration; hand–eye; constrained optimization; planar motion; LiDAR odometry; RGB–D projection; RANSAC; BFGS
JEL-codes: C
Date: 2025

Downloads: (external link)
https://www.mdpi.com/2227-7390/13/19/3181/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/19/3181/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:19:p:3181-:d:1764808

Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager ().

Handle: RePEc:gam:jmathe:v:13:y:2025:i:19:p:3181-:d:1764808