Deep reinforcement learning-based energy-aware disassembly planning for end-of-life products with stimuli-activated self-disassembly
Di Wang,
Jing Zhao,
Muyue Han and
Lin Li
Additional contact information
Di Wang: University of Illinois at Chicago
Jing Zhao: Pennsylvania State University
Muyue Han: North Carolina A&T State University
Lin Li: University of Illinois at Chicago
Journal of Intelligent Manufacturing, 2025, vol. 36, issue 8, No 12, 5475-5494
Abstract:
Remanufacturing stands as a cornerstone strategy for end-of-life (EOL) product management, playing a vital role in fostering a circular economy. Despite its significance, its widespread implementation remains difficult, mainly due to challenges such as labor-intensive operations, diminished component quality, and time-consuming disassembly processes. A potential solution emerges in stimuli-activated self-disassembly, which offers a non-destructive pathway that encourages seamless human–machine collaboration. This innovative approach enables the simultaneous disassembly of multiple components, reducing damage, labor costs, and energy consumption. Notably, limited studies have addressed real-time disassembly planning (DP), especially within self-disassembling workstations. Our research aims to maximize disassembly profit and energy recovery by optimizing disassembly sequences, EOL options, and a hybrid scheme that combines manual and self-disassembly operations. We propose an advanced deep reinforcement learning (DRL) algorithm that incorporates an innovative loss function, a revised training scheme, and parameter embedding to generate the Pareto frontier. Additionally, we propose a compact product representation that captures dynamics and uncertainties, such as product type variations, missing components, potential disassembly failure, and stochastic product quality. The effectiveness of our approach is demonstrated through a case study involving a TV disassembly line, benchmarked against six baselines. Furthermore, a sensitivity analysis is conducted to elucidate the impact of labor expenses and hybrid disassembly schemes on the ultimate profit recovery.
Keywords: Deep reinforcement learning; Disassembly planning; End-of-life management; Multi-objective optimization; Stimuli-activated self-disassembly
Date: 2025
Downloads: http://link.springer.com/10.1007/s10845-024-02527-8 Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:joinma:v:36:y:2025:i:8:d:10.1007_s10845-024-02527-8
Ordering information: This journal article can be ordered from
http://www.springer.com/journal/10845
DOI: 10.1007/s10845-024-02527-8
Journal of Intelligent Manufacturing is currently edited by Andrew Kusiak
Bibliographic data for this series is maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.