Large Language Model-Assisted Reinforcement Learning for Hybrid Disassembly Line Problem

Xiwang Guo, Chi Jiao, Peng Ji, Jiacun Wang, Shujin Qin, Bin Hu, Liang Qi and Xianming Lang
Additional contact information
Xiwang Guo: College of Information and Control Engineering, Liaoning Shihua University, Fushun 113001, China
Chi Jiao: College of Information and Control Engineering, Liaoning Shihua University, Fushun 113001, China
Peng Ji: College of Information and Control Engineering, Liaoning Shihua University, Fushun 113001, China
Jiacun Wang: Department of Computer Science and Software Engineering, Monmouth University, West Long Branch, NJ 07764, USA
Shujin Qin: College of Economics and Management, Shangqiu Normal University, Shangqiu 476000, China
Bin Hu: Department of Computer Science and Technology, Kean University, Union, NJ 07083, USA
Liang Qi: Department of Artificial Intelligence, Shandong University of Science and Technology, Qingdao 266590, China
Xianming Lang: College of Information and Control Engineering, Liaoning Shihua University, Fushun 113001, China

Mathematics, 2024, vol. 12, issue 24, 1-20

Abstract: Recycling end-of-life products is essential for reducing environmental impact and promoting resource reuse. In remanufacturing, researchers are increasingly focused on the disassembly line balancing problem (DLBP), particularly on how to allocate work tasks effectively to enhance productivity. However, many current studies overlook two key issues: (1) how to reasonably arrange workers' postures during disassembly, and (2) how to allocate disassembly tasks when the disassembly environment is not a single type of line but a hybrid disassembly line. To address these issues, we propose a mixed-integer programming model for hybrid disassembly lines combining linear and U-shaped layouts, which also allocates worker postures to alleviate worker fatigue. We further introduce large language model-assisted reinforcement learning to solve this model: a Dueling Deep Q-Network (Duel-DQN) tackles the problem, with a large language model (LLM) integrated into the algorithm. Experimental results show that, compared with reinforcement learning alone, LLM-assisted reinforcement learning reduces the number of iterations required for convergence by approximately 50% while preserving solution quality. This provides new insights into the application of LLMs to reinforcement learning and the DLBP.
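The Duel-DQN named in the abstract refers to the standard dueling architecture, which splits the Q-network head into a state-value stream V(s) and an advantage stream A(s, a) and recombines them with a mean-subtracted aggregation. A minimal illustrative sketch of that aggregation step (not the authors' implementation; the state, action count, and numbers below are hypothetical):

```python
import numpy as np

def dueling_aggregate(value, advantages):
    """Combine the two dueling streams into Q-values:
        Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')
    The mean subtraction makes V and A identifiable.
    """
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

# Hypothetical state with three candidate disassembly-task actions:
# V(s) = 2.0, A(s, .) = [1.0, 0.0, -1.0]  ->  Q = [3.0, 2.0, 1.0]
q = dueling_aggregate(2.0, [1.0, 0.0, -1.0])
best_action = int(np.argmax(q))  # greedy action follows the largest advantage
```

In the paper's setting, each action would correspond to assigning a disassembly task (and, per the model, a worker posture) on the hybrid line; the LLM assistance reported in the abstract accelerates convergence of this learner rather than changing the aggregation itself.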

Keywords: hybrid disassembly line balancing problem; large language model; reinforcement learning; prompt engineering
JEL-codes: C
Date: 2024

Downloads: (external link)
https://www.mdpi.com/2227-7390/12/24/4000/pdf (application/pdf)
https://www.mdpi.com/2227-7390/12/24/4000/ (text/html)


Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:12:y:2024:i:24:p:4000-:d:1548017


Mathematics is currently edited by Ms. Emma He


Handle: RePEc:gam:jmathe:v:12:y:2024:i:24:p:4000-:d:1548017