A Dynamic Scheduling Method Combining Iterative Optimization and Deep Reinforcement Learning to Solve Sudden Disturbance Events in a Flexible Manufacturing Process
Jun Yan,
Tianzuo Zhao,
Tao Zhang,
Hongyan Chu,
Congbin Yang and
Yueze Zhang
Additional contact information
Jun Yan: Mechanical Industry Key Laboratory of Heavy Machine Tool Digital Design and Testing, Beijing University of Technology, Beijing 100124, China
Tianzuo Zhao: Mechanical Industry Key Laboratory of Heavy Machine Tool Digital Design and Testing, Beijing University of Technology, Beijing 100124, China
Tao Zhang: Beijing Key Laboratory of Advanced Manufacturing Technology, Beijing University of Technology, Beijing 100124, China
Hongyan Chu: Mechanical Industry Key Laboratory of Heavy Machine Tool Digital Design and Testing, Beijing University of Technology, Beijing 100124, China
Congbin Yang: Mechanical Industry Key Laboratory of Heavy Machine Tool Digital Design and Testing, Beijing University of Technology, Beijing 100124, China
Yueze Zhang: Mechanical Industry Key Laboratory of Heavy Machine Tool Digital Design and Testing, Beijing University of Technology, Beijing 100124, China
Mathematics, 2024, vol. 13, issue 1, 1-28
Abstract:
Unpredictable sudden disturbances such as machine failure, processing time lag, and order changes increase the deviation between actual production and the planned schedule, seriously affecting production efficiency. This phenomenon is particularly severe in flexible manufacturing. In this paper, a dynamic scheduling method combining iterative optimization and deep reinforcement learning (DRL) is proposed to address the impact of uncertain disturbances. A real-time DRL production environment model is established for the flexible job-shop scheduling problem. Based on the DRL model, an agent training strategy and an autonomous decision-making method are proposed. An event-driven and period-driven hybrid dynamic rescheduling trigger strategy (HDRS) with four judgment mechanisms is developed. Together, the decision-making method and the rescheduling trigger strategy answer the two central questions of dynamic scheduling: how to reschedule and when to reschedule. The experimental results show that the trained DRL decision-making model provides timely feedback on adjusted scheduling arrangements for order problems of different scales. The proposed dynamic-scheduling decision-making method and rescheduling trigger strategy achieve high responsiveness, quick feedback, high quality, and high stability for flexible manufacturing process scheduling under sudden disturbances.
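The hybrid trigger idea from the abstract — rescheduling fired either by a disturbance event or by the elapse of a fixed period — can be sketched as follows. This is a minimal illustrative sketch, not the authors' HDRS implementation; the class name, the specific judgment conditions (machine failure, order change, schedule deviation, elapsed period), and all thresholds are assumptions chosen to mirror the four judgment mechanisms the abstract mentions.

```python
from dataclasses import dataclass


@dataclass
class HybridTrigger:
    """Hypothetical event/period-driven rescheduling trigger (illustrative only)."""
    period: float            # period-driven: fixed rescheduling interval
    deviation_limit: float   # event-driven: tolerated plan-vs-actual time lag
    last_reschedule: float = 0.0

    def should_reschedule(self, now: float, deviation: float,
                          machine_failed: bool, order_changed: bool) -> bool:
        # Judgment 1/2 (event-driven): sudden disturbances trigger immediately.
        if machine_failed or order_changed:
            return True
        # Judgment 3 (event-driven): accumulated processing-time lag exceeds limit.
        if deviation > self.deviation_limit:
            return True
        # Judgment 4 (period-driven): reschedule at fixed intervals regardless.
        if now - self.last_reschedule >= self.period:
            return True
        return False

    def mark_rescheduled(self, now: float) -> None:
        self.last_reschedule = now


if __name__ == "__main__":
    trigger = HybridTrigger(period=60.0, deviation_limit=5.0)
    print(trigger.should_reschedule(10.0, 1.0, False, False))  # no trigger yet
    print(trigger.should_reschedule(10.0, 1.0, True, False))   # machine failure
    print(trigger.should_reschedule(70.0, 1.0, False, False))  # period elapsed
```

In the paper's framework the "how" of rescheduling would then be delegated to the trained DRL (double deep Q-network) agent whenever this trigger fires; the sketch above covers only the "when".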
Keywords: dynamic scheduling; deep reinforcement learning; flexible job shop; double deep Q-network; rescheduling
JEL-codes: C
Date: 2024
Downloads: (external link)
https://www.mdpi.com/2227-7390/13/1/4/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/1/4/ (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2024:i:1:p:4-:d:1551522
Mathematics is currently edited by Ms. Emma He
More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.