A deep reinforcement learning approach to seat inventory control for airline revenue management
Syed A. M. Shihab and
Peng Wei
Additional contact information
Syed A. M. Shihab: Kent State University
Peng Wei: George Washington University
Journal of Revenue and Pricing Management, 2022, vol. 21, issue 2, No 8, 183-199
Abstract:
Commercial airlines use revenue management systems to maximize their revenue by making real-time decisions on the booking limits of the different fare classes offered on each of their scheduled flights. Traditional approaches, such as mathematical programming, dynamic programming, and heuristic rule-based decision models, rely heavily on external mathematical models of demand and of passenger arrival, choice, and cancelation behavior, making their performance sensitive to the accuracy of these model estimates. Moreover, many of these approaches scale poorly as problem dimensionality increases. Additionally, they lack the ability to explore, “directly” learn the true market dynamics from interactions with passengers, and adapt to changes in market conditions on their own. To overcome these limitations, this research uses deep reinforcement learning (DRL), a model-free decision-making framework, to find the optimal policy for the seat inventory control problem. The DRL framework employs a deep neural network to approximate the expected optimal revenues for all possible state-action combinations, allowing it to handle the large state space of the problem. The problem considers multiple fare classes with stochastic demand, passenger arrivals, and booking cancelations. An air travel market simulator incorporating these market dynamics and passenger behaviors was developed for training and testing the agent. The results demonstrate that the DRL agent is capable of learning the optimal airline revenue management policy through interactions with the market, matching the performance of exact dynamic programming methods. The revenue generated by the agent in different simulated market scenarios was found to be close to the maximum possible flight revenue and to surpass that produced by the expected marginal seat revenue-b (EMSRb) method.
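The mechanism described in the abstract is a value network that approximates the expected revenue-to-go of each seat-inventory decision, trained against a simulated market with stochastic passenger arrivals and booking cancelations. The sketch below illustrates that kind of setup in Python; it is not the authors' implementation, and the fare values, arrival probabilities, cancelation rate, horizon, network architecture, and single-leg toy simulator are all illustrative assumptions.

```python
# Minimal DQN-style sketch of seat inventory control for a single-leg flight.
# NOT the paper's implementation: fares, arrival probabilities, cancelation
# rate, horizon, and network size are illustrative assumptions.
import random

import numpy as np
import torch
import torch.nn as nn

FARES = np.array([400.0, 250.0, 150.0])   # assumed fare classes, high to low
ARRIVAL_P = np.array([0.10, 0.20, 0.30])  # assumed per-period arrival probabilities
CANCEL_P = 0.01                           # assumed per-period cancelation probability
CAPACITY, HORIZON = 100, 300


def step(seats_sold, action):
    """Simulate one booking period; `action` is the lowest fare class left open."""
    reward = 0.0
    # Each booked seat cancels independently (refunds ignored in this sketch).
    cancels = np.random.binomial(seats_sold.astype(int), CANCEL_P)
    seats_sold = seats_sold - cancels
    # At most one arrival per class per period in this toy model.
    for k in range(len(FARES)):
        if k <= action and seats_sold.sum() < CAPACITY and random.random() < ARRIVAL_P[k]:
            seats_sold[k] += 1
            reward += FARES[k]
    return seats_sold, reward


class QNet(nn.Module):
    """Small MLP approximating the expected revenue-to-go of each action."""
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_classes + 1, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))

    def forward(self, x):
        return self.net(x)


def encode(seats_sold, t):
    """State = normalized seats sold per class plus normalized time."""
    return torch.tensor(np.append(seats_sold / CAPACITY, t / HORIZON), dtype=torch.float32)


qnet = QNet(len(FARES))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
eps = 0.1  # epsilon-greedy exploration

for episode in range(200):  # toy online Q-learning loop (no replay buffer or target net)
    seats = np.zeros(len(FARES))
    for t in range(HORIZON):
        s = encode(seats, t)
        a = random.randrange(len(FARES)) if random.random() < eps else int(qnet(s).argmax())
        seats, r = step(seats, a)
        bootstrap = qnet(encode(seats, t + 1)).max().item() if t + 1 < HORIZON else 0.0
        loss = (qnet(s)[a] - (r + bootstrap)) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In this sketch an action opens every fare class up to a chosen index, a coarse stand-in for setting per-class booking limits; the greedy action of the trained network at each state then plays the role of the seat inventory control policy. A fuller treatment along the lines of the paper would add an experience replay buffer, a target network, and comparisons against exact dynamic programming and EMSRb booking limits.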
Keywords: Airline revenue management; Seat inventory control; Deep reinforcement learning
Date: 2022
Citations: View citations in EconPapers (1)
Downloads: (external link)
http://link.springer.com/10.1057/s41272-021-00281-7 Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:pal:jorapm:v:21:y:2022:i:2:d:10.1057_s41272-021-00281-7
Ordering information: This journal article can be ordered from
https://www.palgrave.com/gp/journal/41272
DOI: 10.1057/s41272-021-00281-7
Journal of Revenue and Pricing Management is currently edited by Ian Yeoman
More articles in Journal of Revenue and Pricing Management from Palgrave Macmillan
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.