LV-FeatEx: Large Viewpoint-Image Feature Extraction
Yukai Wang,
Yinghui Wang,
Wenzhuo Li,
Yanxing Liang,
Liangyi Huang and
Xiaojuan Ning
Additional contact information
Yukai Wang: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
Yinghui Wang: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
Wenzhuo Li: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
Yanxing Liang: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
Liangyi Huang: School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
Xiaojuan Ning: Department of Computer Science & Engineering, Xi’an University of Technology, Xi’an 710048, China
Mathematics, 2025, vol. 13, issue 7, 1-24
Abstract:
Maintaining stable image feature extraction under viewpoint changes is challenging, particularly when the angle between the reverse of the camera's viewing direction and the object's surface normal exceeds 40 degrees. Under such conditions feature detection becomes unreliable, which in turn degrades the performance of vision-based systems. To address this, we propose a feature point extraction method named Large Viewpoint Feature Extraction (LV-FeatEx). First, the method uses a dual-threshold scheme based on the image grayscale histogram and Kapur's maximum entropy to constrain the AGAST (Adaptive and Generic Accelerated Segment Test) feature detector. Combined with the FREAK (Fast Retina Keypoint) descriptor, this enables more reliable estimation of the camera motion parameters. Next, we design a longitude sampling strategy to build a sparser affine simulation model, while the images undergo a perspective transformation derived from the estimated camera motion parameters. This improves efficiency and aligns the perspective distortion between the two images, enhancing feature point extraction accuracy under large viewpoints. Finally, we verify the stability of the extracted feature points through feature point matching. Comprehensive experimental results show that, under large viewpoint changes, our method outperforms popular classical and deep-learning feature extraction methods: the correct feature point matching rate improves by an average of 40.1 percent, while speed increases by an average factor of 6.67.
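The sketch below is not the authors' implementation; it only illustrates, under stated assumptions, the kind of pipeline the abstract describes: a Kapur maximum-entropy threshold computed from the grayscale histogram is used to set the AGAST detection threshold, FREAK descriptors are matched with a Hamming-distance matcher, and a RANSAC homography stands in for the camera-motion-based perspective transform that aligns the distortion between the two views. The entropy-to-threshold scaling is an illustrative assumption, the longitude-sampled affine simulation step is omitted, and FREAK requires the opencv-contrib-python package.

```python
import cv2
import numpy as np

def kapur_threshold(gray):
    """Kapur's maximum-entropy threshold over the grayscale histogram."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    cdf = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = cdf[t], 1.0 - cdf[t]
        if w0 <= 0 or w1 <= 0:
            continue
        p0 = p[: t + 1] / w0
        p1 = p[t + 1 :] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))  # entropy below threshold
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))  # entropy above threshold
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

def detect_and_describe(gray):
    # Map the entropy threshold to an AGAST detection threshold
    # (the //8 scaling is an assumption, not the paper's dual-threshold rule).
    t = max(5, kapur_threshold(gray) // 8)
    agast = cv2.AgastFeatureDetector_create(threshold=t, nonmaxSuppression=True)
    keypoints = agast.detect(gray, None)
    freak = cv2.xfeatures2d.FREAK_create()  # needs opencv-contrib-python
    return freak.compute(gray, keypoints)   # (keypoints, descriptors)

def align_large_viewpoint(img1, img2):
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    kp1, des1 = detect_and_describe(g1)
    kp2, des2 = detect_and_describe(g2)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC homography as a stand-in for the camera-motion-derived
    # perspective transform that aligns distortion between the two views.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img2.shape[:2]
    warped = cv2.warpPerspective(img1, H, (w, h))
    return warped, H, matches, inlier_mask
```

A typical call would be `warped, H, matches, mask = align_large_viewpoint(cv2.imread("view1.png"), cv2.imread("view2.png"))`, after which feature extraction can be repeated on the perspective-aligned pair, in the spirit of the verification-by-matching step the abstract mentions.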
Keywords: feature points; large viewpoint; ASIFT; AGAST; FREAK
JEL-codes: C
Date: 2025
Downloads:
https://www.mdpi.com/2227-7390/13/7/1111/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/7/1111/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:7:p:1111-:d:1622349