A Partially Observed Markov Decision Process for Dynamic Pricing
Yossi Aviv and Amit Pazgal
Additional contact information
Yossi Aviv: Olin School of Business, Washington University, St. Louis, Missouri 63130
Amit Pazgal: Olin School of Business, Washington University, St. Louis, Missouri 63130
Management Science, 2005, vol. 51, issue 9, 1400-1416
Abstract:
In this paper, we develop a stylized partially observed Markov decision process (POMDP) framework to study a dynamic pricing problem faced by sellers of fashion-like goods. We consider a retailer that plans to sell a given stock of items over a finite sales season. The retailer's objective is to price the product dynamically so as to maximize expected revenue. Our model brings together several types of demand uncertainty, some of which can be resolved through sales observations. We develop a rigorous upper bound for the seller's optimal dynamic decision problem and use it to propose an active-learning heuristic pricing policy. We conduct a numerical study testing the performance of four heuristic dynamic pricing policies to gain insight into several important managerial questions that arise in the context of revenue management.
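As a rough, self-contained illustration of the kind of problem the abstract describes (not the authors' model), the Python sketch below simulates a finite sales season in which a seller holds a discrete belief over an unknown Poisson demand rate, prices with a myopic certainty-equivalent rule, and updates the belief from possibly censored sales. The rate grid, price grid, exponential price-response form, and the myopic heuristic are all illustrative assumptions; the paper's upper bound and active-learning policy are not implemented here.

import math
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: a hidden "market state" k with one-period Poisson demand
# rate LAMBDAS[k] * exp(-ALPHA * price); the seller holds a belief over k.
LAMBDAS = np.array([2.0, 5.0, 10.0])  # candidate base demand rates (assumed)
ALPHA = 0.08                          # assumed price-sensitivity parameter
PRICES = np.linspace(5.0, 40.0, 15)   # admissible price grid (assumed)

def rate(lam, price):
    """Mean one-period demand for base rate lam at a given price."""
    return lam * math.exp(-ALPHA * price)

def poisson_pmf(n, mu):
    return math.exp(-mu) * mu**n / math.factorial(n)

def poisson_sf(n, mu):
    """P(D >= n) for Poisson(mu) demand."""
    return 1.0 - sum(poisson_pmf(i, mu) for i in range(n))

def myopic_price(belief, inventory):
    """Certainty-equivalent heuristic: maximize one-period expected revenue
    under the current belief, approximating E[min(D, inv)] by min(E[D], inv).
    This ignores the value of information, unlike an active-learning policy."""
    best_p, best_rev = PRICES[0], -1.0
    for p in PRICES:
        exp_sales = sum(b * min(rate(lam, p), inventory)
                        for b, lam in zip(belief, LAMBDAS))
        if p * exp_sales > best_rev:
            best_p, best_rev = p, p * exp_sales
    return best_p

def bayes_update(belief, price, sales, inventory):
    """Update the belief over the hidden demand state from one period's
    sales, treating a stockout as the censored event {demand >= inventory}."""
    liks = []
    for lam in LAMBDAS:
        mu = rate(lam, price)
        if sales < inventory:
            liks.append(poisson_pmf(sales, mu))   # demand fully observed
        else:
            liks.append(poisson_sf(inventory, mu))  # censored observation
    post = belief * np.array(liks)
    return post / post.sum()

def simulate_season(true_lam=5.0, inventory=30, horizon=12):
    belief = np.ones(len(LAMBDAS)) / len(LAMBDAS)  # uniform prior over states
    revenue = 0.0
    for _ in range(horizon):
        if inventory == 0:
            break
        p = myopic_price(belief, inventory)
        demand = rng.poisson(rate(true_lam, p))
        sales = min(demand, inventory)
        belief = bayes_update(belief, p, sales, inventory)
        inventory -= sales
        revenue += p * sales
    return revenue, belief

rev, belief = simulate_season()
print(f"season revenue: {rev:.1f}, posterior belief: {np.round(belief, 3)}")

Because the myopic rule never prices to learn, it can settle on a poor price when the prior is badly calibrated; the active-learning idea in the paper is precisely to value the information a price experiment generates, so the sketch should be read only as a baseline.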
Keywords: learning; partially observed Markov decision processes; pricing; revenue management
Date: 2005
Citations: 50 (in EconPapers)
Downloads: http://dx.doi.org/10.1287/mnsc.1050.0393 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:ormnsc:v:51:y:2005:i:9:p:1400-1416