A Novel Deep Reinforcement Learning Based Framework for Gait Adjustment
Ang Li,
Jianping Chen (),
Qiming Fu (),
Hongjie Wu,
Yunzhe Wang and
You Lu
Additional contact information
Ang Li: School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou 215009, China
Jianping Chen: Jiangsu Province Key Laboratory of Intelligent Building Energy Efficiency, Suzhou University of Science and Technology, Suzhou 215009, China
Qiming Fu: School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou 215009, China
Hongjie Wu: School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou 215009, China
Yunzhe Wang: School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou 215009, China
You Lu: School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou 215009, China
Mathematics, 2022, vol. 11, issue 1, 1-18
Abstract:
Millions of patients worldwide suffer from physical disabilities, including lower-limb disabilities. Researchers have adopted a variety of physical therapies based on lower-limb exoskeletons, whose equipment parameters are difficult to adjust in a timely fashion. Intelligent control methods, such as deep reinforcement learning (DRL), have therefore been used to control the medical equipment involved in human gait adjustment. In this study, based on a key-value attention mechanism, we reconstructed the agent’s observations by capturing the self-dependent feature information relevant to decision-making in each state sampled from the replay buffer. Moreover, building on Softmax Deep Double Deterministic policy gradients (SD3), we propose a novel DRL-based framework for gait adjustment: key-value attention-based SD3 (AT_SD3). We demonstrated the effectiveness of the proposed framework by comparing gait trajectories, including the desired trajectory and the adjusted trajectory; the simulated trajectories were closer to the desired trajectory in both shape and value. Furthermore, comparison with other state-of-the-art methods showed that the proposed framework performs better.
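The state-reconstruction idea described in the abstract — key-value attention applied to states sampled from the replay buffer — can be sketched as follows. This is an illustrative sketch only: the matrix sizes, projection matrices, and function names below are assumptions for demonstration, not the paper's actual AT_SD3 configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reconstruct_state(state, w_q, w_k, w_v):
    """Rebuild an observation with key-value self-attention.

    state          : (n, d) array -- n feature groups of one sampled state
    w_q, w_k, w_v  : (d, d_k) projection matrices (hypothetical placeholders)

    Each feature group attends to every other group, so the output row i is a
    weighted mix of all value projections, capturing self-dependent features.
    Returns an (n, d_k) attention-weighted reconstruction.
    """
    q, k, v = state @ w_q, state @ w_k, state @ w_v
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)  # (n, n) weights
    return scores @ v

# Toy example with made-up sizes (not taken from the paper).
rng = np.random.default_rng(0)
n, d, d_k = 6, 8, 4
state = rng.normal(size=(n, d))            # one state from the replay buffer
w_q, w_k, w_v = (rng.normal(scale=0.1, size=(d, d_k)) for _ in range(3))
out = reconstruct_state(state, w_q, w_k, w_v)
print(out.shape)  # (6, 4)
```

In a full agent, the reconstructed observation would be fed to the SD3 actor and critics in place of the raw state; the projections would be learned jointly with the policy rather than drawn at random as here.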
Keywords: deep reinforcement learning; attention mechanism; state reconstruction; gait adjustment (search for similar items in EconPapers)
JEL-codes: C (search for similar items in EconPapers)
Date: 2022
Downloads: (external link)
https://www.mdpi.com/2227-7390/11/1/178/pdf (application/pdf)
https://www.mdpi.com/2227-7390/11/1/178/ (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:11:y:2022:i:1:p:178-:d:1019053
Mathematics is currently edited by Ms. Emma He