Physical unclonable in-memory computing for simultaneous protecting private data and deep learning models
Wenshuo Yue,
Kai Wu,
Zhiyuan Li,
Juchen Zhou,
Zeyu Wang,
Teng Zhang,
Yuxiang Yang,
Lintao Ye,
Yongqin Wu,
Weihai Bu,
Shaozhi Wang,
Xiaodong He,
Xiaobing Yan,
Yaoyu Tao,
Bonan Yan,
Ru Huang and
Yuchao Yang
Additional contact information
Wenshuo Yue: Peking University
Kai Wu: Hebei University
Zhiyuan Li: Peking University
Juchen Zhou: Peking University
Zeyu Wang: Chinese Institute for Brain Research (CIBR)
Teng Zhang: Peking University
Yuxiang Yang: Peking University
Lintao Ye: Peking University
Yongqin Wu: Semiconductor Technology Innovation Center (Beijing) Corporation
Weihai Bu: Semiconductor Technology Innovation Center (Beijing) Corporation
Shaozhi Wang: Semiconductor Technology Innovation Center (Beijing) Corporation
Xiaodong He: Semiconductor Technology Innovation Center (Beijing) Corporation
Xiaobing Yan: Hebei University
Yaoyu Tao: Peking University
Bonan Yan: Peking University
Ru Huang: Peking University
Yuchao Yang: Peking University
Nature Communications, 2025, vol. 16, issue 1, 1-13
Abstract:
Compute-in-memory based on resistive random-access memory has emerged as a promising technology for accelerating neural networks on edge devices: it reduces frequent data transfers and improves energy efficiency. However, the nonvolatile nature of resistive memory raises the concern that stored weights can be easily extracted during computation. To address this challenge, we propose RePACK, a threefold data protection scheme that safeguards neural network input, weight, and structural information. It uses a bipartite-sort coding scheme to store data with a fully on-chip physical unclonable function. Experimental results demonstrate the effectiveness of increasing enumeration complexity to 5.77 × 10⁷⁵ for a 128-column compute-in-memory core. We further implement and evaluate a RePACK computing system on a 40 nm resistive memory compute-in-memory chip. This work represents a step towards safe, robust, and efficient edge neural network accelerators, and it could serve as the hardware infrastructure for edge devices in federated learning or other systems.
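The core protection idea described in the abstract (weights stored under an encoding derived from a physical unclonable function, so a raw readout of the memory array is useless without the specific chip) can be illustrated with a toy sketch. This is not the paper's RePACK or bipartite-sort implementation: the hardware PUF is modeled here with a keyed hash, and all names (`puf_response`, `column_permutation`, `chip-A`) are illustrative assumptions, not identifiers from the paper.

```python
import hashlib
import numpy as np

def puf_response(challenge: bytes, chip_secret: bytes) -> int:
    # Toy stand-in for a hardware PUF: on silicon the response arises from
    # uncontrollable device variations; here it is modeled with a keyed hash.
    digest = hashlib.sha256(chip_secret + challenge).digest()
    return int.from_bytes(digest, "big")

def column_permutation(n_cols: int, challenge: bytes, chip_secret: bytes) -> np.ndarray:
    # Derive a column permutation from the PUF response.
    rng = np.random.default_rng(puf_response(challenge, chip_secret) % (2**63))
    return rng.permutation(n_cols)

# Store weights with columns scrambled; only hardware with the same PUF
# response can reconstruct the original column order.
rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 128))      # toy weight matrix, 128 columns
perm = column_permutation(128, b"row-0", b"chip-A")
stored = weights[:, perm]                    # what an attacker could read out
inverse = np.argsort(perm)
recovered = stored[:, inverse]               # on-chip recovery via the PUF
assert np.allclose(recovered, weights)
```

With 128 columns, an attacker who reads `stored` but lacks the PUF faces a combinatorially large search space over possible orderings, which is the kind of enumeration complexity the abstract quantifies for its actual coding scheme.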
Date: 2025
Downloads: https://www.nature.com/articles/s41467-025-56412-w (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-56412-w
Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/
DOI: 10.1038/s41467-025-56412-w
Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie
More articles in Nature Communications from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.