Touchless interactive teaching of soft robots through flexible bimodal sensory interfaces

Wenbo Liu, Youning Duo, Jiaqi Liu, Feiyang Yuan, Lei Li, Luchen Li, Gang Wang, Bohan Chen, Siqi Wang, Hui Yang, Yuchen Liu, Yanru Mo, Yun Wang, Bin Fang, Fuchun Sun, Xilun Ding, Chi Zhang and Li Wen
Additional contact information
Wenbo Liu: Beihang University
Youning Duo: Beihang University
Jiaqi Liu: Beihang University
Feiyang Yuan: Beihang University
Lei Li: Beihang University
Luchen Li: Beihang University
Gang Wang: Beihang University
Bohan Chen: Beihang University
Siqi Wang: Beihang University
Hui Yang: Guangdong Academy of Sciences
Yuchen Liu: Beihang University
Yanru Mo: Beihang University
Yun Wang: Beihang University
Bin Fang: Tsinghua University
Fuchun Sun: Tsinghua University
Xilun Ding: Beihang University
Chi Zhang: Chinese Academy of Sciences
Li Wen: Beihang University

Nature Communications, 2022, vol. 13, issue 1, 1-14

Abstract: In this paper, we propose a multimodal flexible sensory interface for interactively teaching soft robots to perform skilled locomotion using bare human hands. First, we develop a flexible bimodal smart skin (FBSS) based on a triboelectric nanogenerator and liquid metal sensing that performs simultaneous tactile and touchless sensing and distinguishes these two modes in real time. With the FBSS, soft robots can react on their own to tactile and touchless stimuli. We then propose a distance control method that enables humans to teach soft robot movements via bare hand-eye coordination. The results show that participants can effectively teach a self-reacting soft continuum manipulator complex motions in three-dimensional space through a "shifting sensors and teaching" method within just a few minutes. The soft manipulator can repeat the human-taught motions and replay them at different speeds. Finally, we demonstrate that humans can easily teach the soft manipulator to perform specific tasks such as completing a pen-and-paper maze, taking a throat swab, and crossing a barrier to grasp an object. We envision that this user-friendly, non-programmable teaching method based on flexible multimodal sensory interfaces could broadly expand the domains in which humans interact with and utilize soft robots.
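No code accompanies this record; the Python sketch below is only a rough illustration of the distance-control "teach and replay" idea the abstract describes, reduced to a single axis. Every name in it (the simulated FBSS reading, the gap setpoint, the proportional gain) is a hypothetical stand-in, not the authors' implementation.

# Illustrative sketch only (not from the paper): a minimal proportional
# distance controller that "teaches" a one-dimensional manipulator segment
# to follow a bare hand at a fixed gap, records the motion, and replays it
# at a different speed. All constants and function names are hypothetical.

import math

TARGET_GAP_MM = 30.0   # hypothetical desired hand-to-skin gap
GAIN = 0.2             # hypothetical proportional gain
DT = 0.05              # control period in seconds

def simulated_touchless_reading(hand_pos, segment_pos):
    """Stand-in for an FBSS touchless distance estimate (absolute gap)."""
    return abs(hand_pos - segment_pos)

def teach(hand_trajectory):
    """Servo the segment to hold TARGET_GAP_MM to the hand and record
    (time, position) waypoints -- a crude analogue of distance-control
    teaching by hand-eye coordination."""
    segment_pos = 0.0
    waypoints = []
    for step, hand_pos in enumerate(hand_trajectory):
        gap = simulated_touchless_reading(hand_pos, segment_pos)
        direction = 1.0 if hand_pos >= segment_pos else -1.0
        # Close the gap when too far from the hand, back off when too close.
        segment_pos += GAIN * (gap - TARGET_GAP_MM) * direction
        waypoints.append((step * DT, segment_pos))
    return waypoints

def replay(waypoints, speed=1.0):
    """Yield the taught trajectory with its time axis scaled by `speed`."""
    for t, pos in waypoints:
        yield t / speed, pos

if __name__ == "__main__":
    # A hand sweeping sinusoidally above the sensor skin.
    hand = [60.0 + 40.0 * math.sin(0.15 * k) for k in range(80)]
    taught = teach(hand)
    for t, pos in replay(taught, speed=2.0):   # replay at double speed
        print(f"t={t:5.2f}s  segment at {pos:7.2f} mm")

The real system tracks motion in three dimensions with multiple skins (the "shifting sensors and teaching" method); this sketch collapses that to one axis purely to make the record-then-replay loop concrete.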

Date: 2022

Downloads: https://www.nature.com/articles/s41467-022-32702-5 Abstract (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:13:y:2022:i:1:d:10.1038_s41467-022-32702-5

Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/

DOI: 10.1038/s41467-022-32702-5

Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie

Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:nat:natcom:v:13:y:2022:i:1:d:10.1038_s41467-022-32702-5