Capturing forceful interaction with deformable objects using a deep learning-powered stretchable tactile array

Chunpeng Jiang, Wenqiang Xu, Yutong Li, Zhenjun Yu, Longchun Wang, Xiaotong Hu, Zhengyi Xie, Qingkun Liu, Bin Yang, Xiaolin Wang, Wenxin Du, Tutian Tang, Dongzhe Zheng, Siqiong Yao, Cewu Lu and Jingquan Liu
Additional contact information
Chunpeng Jiang: Shanghai Jiao Tong University
Wenqiang Xu: Shanghai Jiao Tong University
Yutong Li: Shanghai Jiao Tong University
Zhenjun Yu: Shanghai Jiao Tong University
Longchun Wang: Shanghai Jiao Tong University
Xiaotong Hu: Shanghai Jiao Tong University
Zhengyi Xie: Shanghai Jiao Tong University
Qingkun Liu: Shanghai Jiao Tong University
Bin Yang: Shanghai Jiao Tong University
Xiaolin Wang: Shanghai Jiao Tong University
Wenxin Du: Shanghai Jiao Tong University
Tutian Tang: Shanghai Jiao Tong University
Dongzhe Zheng: Shanghai Jiao Tong University
Siqiong Yao: AI Institute, Shanghai Jiao Tong University
Cewu Lu: Shanghai Jiao Tong University
Jingquan Liu: Shanghai Jiao Tong University

Nature Communications, 2024, vol. 15, issue 1, 1-14

Abstract: Capturing forceful interaction with deformable objects during manipulation benefits applications such as virtual reality, telemedicine, and robotics. Replicating full hand-object states with complete geometry is challenging because object deformations are occluded. Here, we report a visual-tactile recording and tracking system for manipulation, featuring a stretchable tactile glove with 1152 force-sensing channels and a visual-tactile joint learning framework that estimates dynamic hand-object states during manipulation. To overcome the strain interference caused by contact with deformable objects, we propose an active suppression method based on symmetric response detection and adaptive calibration; it achieves 97.6% accuracy in force measurement, an improvement of 45.3%. The learning framework processes the visual-tactile sequence and reconstructs hand-object states. We experiment on 24 objects from 6 categories, both deformable and rigid, with an average reconstruction error of 1.8 cm across all sequences, demonstrating a universal ability to replicate human knowledge of manipulating objects with varying degrees of deformability.
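The abstract names two technical components that can be sketched for clarity. First, the active strain-suppression idea: stretching the glove is assumed to load paired sensing channels near-symmetrically, whereas genuine contact is localized, so symmetric residuals can be folded into an adaptively calibrated baseline rather than reported as contact force. The sketch below is a minimal illustration under those assumptions; the function name, the 24x48 grid reshaping of the 1152 channels, and the thresholds are hypothetical, not taken from the paper.

```python
import numpy as np

def suppress_strain_interference(frame, baseline, alpha=0.1, sym_tol=0.15):
    """Hypothetical sketch of strain-interference suppression.

    frame    : (24, 48) raw readings, a 2-D reshaping of the glove's
               1152 channels (grid shape assumed for illustration)
    baseline : running strain baseline, updated adaptively
    alpha    : baseline adaptation rate (assumed)
    sym_tol  : relative tolerance for calling a response symmetric (assumed)
    """
    residual = frame - baseline
    # Symmetric response detection: strain from stretching is assumed to
    # load mirror-paired channels nearly equally; contact force is local.
    mirrored = residual[:, ::-1]
    symmetric = np.abs(residual - mirrored) < sym_tol * (np.abs(residual) + 1e-9)
    # Adaptive calibration: fold strain-like (symmetric) signal back into
    # the baseline instead of reporting it as contact force.
    new_baseline = np.where(symmetric, baseline + alpha * residual, baseline)
    force = np.where(symmetric, 0.0, residual)
    return force, new_baseline
```

Second, the visual-tactile joint learning framework: per-frame encoders for each modality, a temporal model over the fused sequence, and heads that regress hand and object states. The PyTorch skeleton below is an assumed layout; the module choices, feature sizes, and the 21-joint hand / fixed-vertex object outputs are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class VisualTactileFusion(nn.Module):
    """Assumed skeleton of a visual-tactile joint learning model."""

    def __init__(self, feat_dim=256, n_channels=1152):
        super().__init__()
        # Per-frame visual encoder (backbone assumed for illustration).
        self.visual_enc = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Per-frame tactile encoder over the 1152 force channels.
        self.tactile_enc = nn.Sequential(nn.Linear(n_channels, feat_dim), nn.ReLU())
        # Temporal model over the fused visual-tactile sequence.
        self.temporal = nn.GRU(2 * feat_dim, feat_dim, batch_first=True)
        # Output heads (sizes assumed): hand joints and object vertices.
        self.hand_head = nn.Linear(feat_dim, 21 * 3)      # 21 joints, xyz
        self.object_head = nn.Linear(feat_dim, 1024 * 3)  # 1024 vertices, xyz

    def forward(self, images, tactile):
        # images: (B, T, 3, H, W); tactile: (B, T, 1152)
        B, T = tactile.shape[:2]
        v = self.visual_enc(images.flatten(0, 1)).view(B, T, -1)
        t = self.tactile_enc(tactile)
        h, _ = self.temporal(torch.cat([v, t], dim=-1))
        return self.hand_head(h), self.object_head(h)
```

In such a pipeline, each tactile frame would pass through the suppression step before entering the tactile encoder; both sketches are reading aids for the abstract, not reconstructions of the published method.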

Date: 2024

Downloads: https://www.nature.com/articles/s41467-024-53654-y (abstract, text/html)

Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-53654-y

Ordering information: This journal article can be ordered from https://www.nature.com/ncomms/

DOI: 10.1038/s41467-024-53654-y

Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie

More articles in Nature Communications from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-53654-y