Multi-scale progressive fusion-based depth image completion and enhancement for industrial collaborative robot applications

Chuhua Xian, Jun Zhang, Wenhao Yang and Yunbo Zhang
Additional contact information
Chuhua Xian: South China University of Technology
Jun Zhang: South China University of Technology
Wenhao Yang: Rochester Institute of Technology
Yunbo Zhang: Rochester Institute of Technology

Journal of Intelligent Manufacturing, 2024, vol. 35, issue 5, No 11, 2119-2135

Abstract: Depth images obtained by consumer-level depth cameras generally have low resolution and missing regions, owing to limitations of the depth camera hardware and of the depth image generation method. Although many studies have addressed RGB image completion and super-resolution, depth images pose a distinct challenge: reconstruction tends to produce evident jagged boundaries and a significant loss of geometric information. To address these issues, we introduce a multi-scale progressive fusion network for depth image completion and super-resolution, with a progressive structure that integrates hierarchical features across the two domains. Given a depth image and its corresponding RGB image, we employ two separate branches to learn multi-scale features. The extracted features of the two branches are then fused level by level in a step-by-step strategy to reconstruct the final depth image. A multi-dimension loss is also designed to constrain distinct borders and geometric features. Extensive depth completion and super-resolution experiments show that our proposed method outperforms state-of-the-art methods both qualitatively and quantitatively. The proposed methods are also applied to human–robot interaction applications, including a remote-controlled robot based on an unmanned ground vehicle (UGV), AR-based toolpath planning, and automatic toolpath extraction. These experimental results indicate the effectiveness and potential benefits of the proposed methods.
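The coarse-to-fine, two-branch fusion described in the abstract can be sketched, very loosely, in plain NumPy: image pyramids are built for a depth branch and an RGB-guidance branch, and a depth estimate is refined from the coarsest scale upward, with missing (zero-valued) depth pixels filled from the coarser estimate under image guidance. Every function name and the fixed blending rule below are illustrative assumptions; the paper's actual network learns these multi-scale fusion operations end to end.

```python
import numpy as np

def avg_pool(x, f):
    # Average-pool an (H, W, C) array by factor f (assumes H, W divisible by f).
    h, w, c = x.shape
    return x.reshape(h // f, f, w // f, f, c).mean(axis=(1, 3))

def upsample2x(x):
    # Nearest-neighbour 2x upsampling along both spatial axes.
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def progressive_fusion(depth, rgb):
    """Toy stand-in for learned two-branch fusion: fill zero (missing)
    depth pixels coarse-to-fine, modulated by image luminance.
    depth: (H, W, 1) with 0 marking holes; rgb: (H, W, 3); H, W divisible by 4."""
    lum = rgb.mean(axis=2, keepdims=True)
    # Three-level pyramids: level 0 = full resolution, level 2 = 1/4 resolution.
    d = [depth, avg_pool(depth, 2), avg_pool(depth, 4)]
    g = [lum, avg_pool(lum, 2), avg_pool(lum, 4)]
    est = d[2]  # coarsest depth estimate
    for lvl in (1, 0):
        est = upsample2x(est)
        hole = d[lvl] == 0
        # Where depth is missing, borrow the coarser estimate, modulated by
        # normalized luminance (a crude stand-in for learned RGB guidance);
        # elsewhere keep the observed depth at this level.
        guidance = 0.5 + 0.5 * g[lvl] / (g[lvl].max() + 1e-8)
        est = np.where(hole, est * guidance, d[lvl])
    return est
```

In the real network each pyramid level carries learned feature maps rather than raw pixels, and the fusion at each step is a trained module rather than a fixed blend, but the control flow (two branches, fuse level by level, refine upward) follows the same shape.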

Keywords: Human–robot-interaction; Collaborative robot; Fusion-learning; Depth image completion; Super resolution; Multi-scale network
Date: 2024

Downloads: (external link)
http://link.springer.com/10.1007/s10845-023-02299-7 Abstract (text/html)
Access to the full text of the articles in this series is restricted.


Persistent link: https://EconPapers.repec.org/RePEc:spr:joinma:v:35:y:2024:i:5:d:10.1007_s10845-023-02299-7

Ordering information: This journal article can be ordered from
http://www.springer.com/journal/10845

DOI: 10.1007/s10845-023-02299-7

Journal of Intelligent Manufacturing is currently edited by Andrew Kusiak

More articles in Journal of Intelligent Manufacturing from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Page updated 2025-04-12
Handle: RePEc:spr:joinma:v:35:y:2024:i:5:d:10.1007_s10845-023-02299-7