A hyperconformal dual-modal metaskin for well-defined and high-precision contextual interactions

Shifan Yu, Zhenzhou Ji, Lei Liu, Zijian Huang, Yanhao Luo, Huasen Wang, Ruize Wangyuan, Ziquan Guo, Zhong Chen, Qingliang Liao, Yuanjin Zheng and Xinqin Liao
Additional contact information
Shifan Yu: Xiamen University, Department of Electronic Science
Zhenzhou Ji: Xiamen University, Department of Electronic Science
Lei Liu: Xiamen University, Department of Electronic Science
Zijian Huang: Xiamen University, Department of Electronic Science
Yanhao Luo: Xiamen University, Department of Electronic Science
Huasen Wang: Xiamen University, Department of Electronic Science
Ruize Wangyuan: Xiamen University, Department of Electronic Science
Ziquan Guo: Xiamen University, Department of Electronic Science
Zhong Chen: Xiamen University, Department of Electronic Science
Qingliang Liao: University of Science and Technology Beijing, Academy for Advanced Interdisciplinary Science and Technology, Key Laboratory of Advanced Materials and Devices for Post-Moore Chips Ministry of Education
Yuanjin Zheng: Nanyang Technological University, School of Electrical and Electronic Engineering
Xinqin Liao: Xiamen University, Department of Electronic Science

Nature Communications, 2025, vol. 16, issue 1, 1-15

Abstract: Proprioception and touch serve as complementary sensory modalities to coordinate hand kinematics and recognize users’ intent for precise interactions. However, current motion-tracking electronics remain bulky and insufficiently precise, and accurately decoding both modalities is challenging owing to the mechanical crosstalk between endogenous and exogenous deformations. Here, we report a hyperconformal dual-modal (HDM) metaskin for interactive hand motion interpretation. The metaskin integrates a strongly coupled hydrophilic interface with a two-step transfer strategy to minimize interfacial mechanical losses. The 10-μm-scale hyperconformal film is highly sensitive to intricate skin stretches while minimizing signal distortion. It accurately tracks skin stretches as well as touch locations and translates them into polar signals that are individually salient. This approach enables a differentiable signaling topology within a single data channel without adding structural complexity to the metaskin. When combined with temporal differential calculations and a time-series machine learning network, the metaskin extracts interactive context and action cues from the low-dimensional data. This capability is further exemplified through demonstrations in contextual navigation, typing and control integration, and multi-scenario object interaction. We demonstrate this fundamental approach in advanced skin-integrated electronics, highlighting its potential for instinctive interaction paradigms and paving the way for augmented somatosensation recognition.
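
To make the abstract's signal pipeline concrete, here is a minimal Python sketch of the general technique it names: a first-order temporal difference separates fast exogenous touch events from slow endogenous stretch drift in a single-channel trace, and the differenced stream is cut into windows for a time-series classifier. Everything below (function names, signal shapes, window sizes) is an illustrative assumption, not the authors' implementation.

    # Hypothetical sketch (not the authors' code): temporal differencing of a
    # single-channel dual-modal trace, then windowing for a sequence model.
    import numpy as np

    def temporal_difference(signal: np.ndarray) -> np.ndarray:
        # First-order difference; prepending the first sample keeps the length.
        return np.diff(signal, prepend=signal[0])

    def sliding_windows(x: np.ndarray, size: int, step: int) -> np.ndarray:
        # Overlapping windows, shape (n_windows, size).
        n = 1 + max(0, len(x) - size) // step
        return np.stack([x[i * step : i * step + size] for i in range(n)])

    # Toy trace: slow sinusoidal "stretch" baseline plus a brief "touch" spike.
    t = np.linspace(0.0, 1.0, 500)
    stretch = 0.5 * np.sin(2.0 * np.pi * t)              # endogenous deformation
    touch = np.where((t > 0.40) & (t < 0.42), 2.0, 0.0)  # exogenous touch event
    trace = stretch + touch

    diff = temporal_difference(trace)
    windows = sliding_windows(diff, size=50, step=25)
    print(windows.shape)  # (19, 50) -- ready for an LSTM/GRU-style classifier

The differencing step suggests why "temporal differential calculations" help here: touch produces sharp transients in the differenced signal while slow stretch contributes near-zero values, so the two modalities remain separable within one data channel before any sequence model is applied.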

Date: 2025

Downloads: https://www.nature.com/articles/s41467-025-65624-z (abstract, text/html)


Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-65624-z

Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/

DOI: 10.1038/s41467-025-65624-z


Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie

More articles in Nature Communications from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-65624-z