Scalable deep reinforcement learning in the non-stationary capacitated lot sizing problem
Lotte van Hezewijk,
Nico P. Dellaert and
Willem L. van Jaarsveld
International Journal of Production Economics, 2025, vol. 284, issue C
Abstract:
Capacitated lot sizing problems with stationary and non-stationary demand (SCLSP) are very common in practice. Solving problems with a large number of items using Deep Reinforcement Learning (DRL) is challenging due to the large action space. This paper proposes a new Markov Decision Process (MDP) formulation for this problem that decomposes the production-quantity decisions in a period into sub-decisions, which reduces the action space dramatically. We demonstrate that applying Deep Controlled Learning (DCL) yields policies that outperform both the benchmark heuristic and a prior DRL implementation. Using the decomposed MDP formulation and the DCL method outlined in this paper, we can solve larger problems than the previous DRL implementation. Moreover, we adopt a non-stationary demand model for training the policy, which enables us to readily apply the trained policy in dynamic environments when demand changes.
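The action-space reduction the abstract describes can be illustrated with a small combinatorial sketch. This is not the paper's implementation; the item and quantity-level counts below are illustrative assumptions. If each of n items can be produced at one of q discrete quantity levels in a period, a monolithic per-period action enumerates q^n joint choices, while decomposing the period decision into one sub-decision per item leaves the agent facing only q actions at a time:

```python
# Illustrative sketch (not the paper's implementation): why decomposing a
# period's joint production decision into per-item sub-decisions shrinks
# the action space a DRL agent must handle.

def joint_action_space(n_items: int, n_levels: int) -> int:
    """Monolithic MDP action: pick a quantity level for every item at once."""
    return n_levels ** n_items

def decomposed_action_space(n_levels: int) -> int:
    """Decomposed MDP: one item's quantity per sub-decision, so the agent
    only ever chooses among n_levels actions (n_items sub-decisions per period)."""
    return n_levels

n, q = 10, 5  # hypothetical: 10 items, 5 quantity levels each
print(joint_action_space(n, q))   # 9765625 joint actions per period
print(decomposed_action_space(q)) # 5 actions per sub-decision
```

The joint space grows exponentially in the number of items, which is what makes the undecomposed formulation intractable for DRL at scale; the decomposed formulation grows it only linearly in the number of sub-decisions per period.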
Keywords: Deep reinforcement learning; Capacitated lot sizing; Non-stationary demand
Date: 2025
Downloads: http://www.sciencedirect.com/science/article/pii/S0925527325000866 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:proeco:v:284:y:2025:i:c:s0925527325000866
DOI: 10.1016/j.ijpe.2025.109601
International Journal of Production Economics is currently edited by Stefan Minner