A Deep Q-Network for the Beer Game: Deep Reinforcement Learning for Inventory Optimization

Afshin Oroojlooyjadid, MohammadReza Nazari, Lawrence V. Snyder and Martin Takáč
Additional contact information
Afshin Oroojlooyjadid: Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, Pennsylvania 18015
MohammadReza Nazari: Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, Pennsylvania 18015
Lawrence V. Snyder: Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, Pennsylvania 18015
Martin Takáč: Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, Pennsylvania 18015

Manufacturing & Service Operations Management, 2022, vol. 24, issue 1, 285-304

Abstract: Problem definition: The beer game is widely used in supply chain management classes to demonstrate the bullwhip effect and the importance of supply chain coordination. The game is a decentralized, multiagent, cooperative problem that can be modeled as a serial supply chain network in which agents choose order quantities while cooperatively attempting to minimize the network’s total cost, although each agent only observes local information. Academic/practical relevance: Under some conditions, a base-stock replenishment policy is optimal. However, in a decentralized supply chain in which some agents act irrationally, there is no known optimal policy for an agent wishing to act optimally. Methodology: We propose a deep reinforcement learning (RL) algorithm to play the beer game. Our algorithm makes no assumptions about costs or other settings. As with any deep RL algorithm, training is computationally intensive, but once trained, the algorithm executes in real time. We propose a transfer-learning approach so that training performed for one agent can be adapted quickly for other agents and settings. Results: When playing with teammates who follow a base-stock policy, our algorithm obtains near-optimal order quantities. More important, it performs significantly better than a base-stock policy when other agents use a more realistic model of human ordering behavior. We observe similar results using a real-world data set. Sensitivity analysis shows that a trained model is robust to changes in the cost coefficients. Finally, applying transfer learning reduces the training time by one order of magnitude. Managerial implications: This paper shows how artificial intelligence can be applied to inventory optimization. Our approach can be extended to other supply chain optimization problems, especially those in which supply chain partners act in irrational or unpredictable ways. Our RL agent has been integrated into a new online beer game, which has been played more than 17,000 times by more than 4,000 people.
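The base-stock replenishment policy that the abstract uses as the optimal benchmark can be sketched as follows. This is a minimal illustration, not code from the paper: the function name, parameter names, and the simple inventory-position formulation are assumptions for exposition.

```python
def base_stock_order(base_stock_level, on_hand, on_order, backorders):
    """Order enough to raise the inventory position back to the base-stock level.

    Inventory position = on-hand stock + pipeline (on-order) stock - backorders.
    All names are illustrative; the paper's agents observe only local state.
    """
    inventory_position = on_hand + on_order - backorders
    # A base-stock policy orders the shortfall relative to the target level,
    # and orders nothing when the position already meets or exceeds it.
    return max(0, base_stock_level - inventory_position)
```

For example, with a target level of 10, 4 units on hand, 2 in the pipeline, and no backorders, the inventory position is 6 and the policy orders 4 units. The paper's contribution is a deep Q-network that performs comparably to this policy against rational teammates and better against irrational ones.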

Keywords: inventory optimization; reinforcement learning; beer game (search for similar items in EconPapers)
Date: 2022

Downloads: (external link)
http://dx.doi.org/10.1287/msom.2020.0939 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:inm:ormsom:v:24:y:2022:i:1:p:285-304


More articles in Manufacturing & Service Operations Management from INFORMS. Contact information at EDIRC.
Bibliographic data for series maintained by Chris Asher.

 
Page updated 2025-03-19
Handle: RePEc:inm:ormsom:v:24:y:2022:i:1:p:285-304