Multi agent reinforcement learning for online layout planning and scheduling in flexible assembly systems
Lea Kaven,
Philipp Huke,
Amon Göppert and
Robert H. Schmitt
Additional contact information
Lea Kaven: Laboratory for Machine Tools and Production Engineering (WZL) of RWTH Aachen University
Philipp Huke: Laboratory for Machine Tools and Production Engineering (WZL) of RWTH Aachen University
Amon Göppert: Laboratory for Machine Tools and Production Engineering (WZL) of RWTH Aachen University
Robert H. Schmitt: Laboratory for Machine Tools and Production Engineering (WZL) of RWTH Aachen University
Journal of Intelligent Manufacturing, 2024, vol. 35, issue 8, No 17, 3917-3936
Abstract:
Manufacturing systems are undergoing systematic change, facing the trade-off between customers' needs and economic and ecological pressure. Assembly systems in particular must become more flexible due to many product generations and unpredictable material and demand fluctuations. As a solution, line-less mobile assembly systems implement flexible job routes through movable multi-purpose resources and flexible transportation systems. Moreover, a completely reactive, rearrangeable layout with mobile resources enables reconfigurations without interrupting production. A scheduling approach that can handle the complexity of dynamic events is necessary to plan job routes and control transportation in such an assembly system. Conventional approaches to this control task require exponentially rising computational capacity as problem sizes increase. The contribution of this work is therefore an algorithm that dynamically solves the integrated problem of layout optimization and scheduling in line-less mobile assembly systems. The proposed multi-agent deep reinforcement learning algorithm uses proximal policy optimization and consists of an encoder and a decoder, allowing for various-sized system state descriptions. A simulation study shows that the proposed algorithm performs better than a random agent on the makespan optimization objective in 78% of the scenarios. This allows for adaptive optimization of line-less mobile assembly systems that can face global challenges.
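The abstract names proximal policy optimization (PPO) as the learning algorithm behind the multi-agent approach. As a minimal illustration of the idea, the sketch below computes PPO's clipped surrogate objective for a single sample; the function name, numbers, and clipping value are illustrative assumptions, not details from the paper.

```python
def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    """Clipped surrogate objective for one (state, action) sample.

    ratio:     pi_new(a|s) / pi_old(a|s), the policy probability ratio
    advantage: estimated advantage of the taken action
    epsilon:   clipping range (0.2 is a commonly used default)
    """
    # Restrict the ratio to the trust region [1 - eps, 1 + eps].
    clipped = max(min(ratio, 1.0 + epsilon), 1.0 - epsilon)
    # Taking the minimum removes any incentive to push the policy
    # far outside the clipping range in a single update.
    return min(ratio * advantage, clipped * advantage)

# A large positive-advantage update is capped at (1 + eps) * advantage:
print(ppo_clip_objective(1.5, 2.0))   # -> 2.4
# Small updates inside the clip range pass through unchanged:
print(ppo_clip_objective(1.05, 2.0))  # -> 2.1
```

The pessimistic minimum is the design choice that distinguishes PPO from a plain policy-gradient step: for negative advantages the clipped term yields the lower (more cautious) value, so unfavorable actions are also discouraged only within the trust region.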
Keywords: Production control; Production scheduling; Layout optimization; Multi-agent deep reinforcement learning; Proximal policy optimization; Mobile resources; Flexible assembly
Date: 2024
Citations: 1 (as indexed in EconPapers)
Downloads: http://link.springer.com/10.1007/s10845-023-02309-8 (abstract, text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:joinma:v:35:y:2024:i:8:d:10.1007_s10845-023-02309-8
Ordering information: This journal article can be ordered from http://www.springer.com/journal/10845
DOI: 10.1007/s10845-023-02309-8
Journal of Intelligent Manufacturing is currently edited by Andrew Kusiak
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.