Markov Decision Processes for Customer Lifetime Value

Wai-Ki Ching, Ximin Huang, Michael K. Ng and Tak Kuen Siu
Additional contact information
Wai-Ki Ching: The University of Hong Kong
Ximin Huang: Georgia Institute of Technology
Michael K. Ng: Hong Kong Baptist University

Chapter 5 in Markov Chains, 2013, pp 107-139 from Springer

Abstract: In this chapter, a stochastic dynamic programming model with a Markov chain is proposed to capture customer behavior. The advantage of using a Markov chain is that the model can take into account customers switching between the company and its competitors, so customer relationships can be described in a probabilistic way; see for instance Pfeifer and Carraway [170]. Stochastic dynamic programming is then applied to find the optimal allocation of the promotion budget that maximizes the Customer Lifetime Value (CLV). The proposed model is then applied to practical data from a computer services company.
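As a rough illustration of the kind of computation the abstract describes, the sketch below runs infinite-horizon value iteration for a two-state customer model (customer with the company vs. with a competitor) and a binary promotion decision. The states, transition probabilities, revenues, promotion costs, and discount factor are illustrative assumptions, not the chapter's data or its exact formulation.

import numpy as np

# Minimal sketch of MDP value iteration for CLV (all numbers are assumed, not from the chapter).
states = ["with_company", "with_competitor"]
actions = ["no_promotion", "promotion"]

# Assumed transition matrices P[a][s, s'] under each promotion decision.
P = {
    "no_promotion": np.array([[0.7, 0.3],
                              [0.2, 0.8]]),
    "promotion":    np.array([[0.9, 0.1],
                              [0.4, 0.6]]),
}

revenue = np.array([100.0, 0.0])                   # assumed expected revenue per period in each state
cost = {"no_promotion": 0.0, "promotion": 20.0}    # assumed promotion cost per period (charged in every state here)
alpha = 0.95                                       # assumed discount factor

# Value iteration: v(s) = max_a { r(s, a) + alpha * sum_s' P_a(s, s') v(s') }
v = np.zeros(len(states))
for _ in range(1000):
    q = np.array([revenue - cost[a] + alpha * P[a] @ v for a in actions])
    v_new = q.max(axis=0)
    if np.max(np.abs(v_new - v)) < 1e-8:
        v = v_new
        break
    v = v_new

# Greedy policy with respect to the converged values.
q = np.array([revenue - cost[a] + alpha * P[a] @ v for a in actions])
policy = [actions[i] for i in q.argmax(axis=0)]

for s, val, a in zip(states, v, policy):
    print(f"state={s:16s}  CLV={val:8.2f}  best action={a}")

The printed values are the (discounted, infinite-horizon) CLV per customer state under the optimal promotion policy; a finite-horizon version would simply iterate the same Bellman update a fixed number of periods instead of to convergence.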

Keywords: Markov Decision Process; Markov Chain Model; Infinite Horizon; Stochastic Dynamic Programming; Finite Horizon
Date: 2013


Persistent link: https://EconPapers.repec.org/RePEc:spr:isochp:978-1-4614-6312-2_5

Ordering information: This item can be ordered from
http://www.springer.com/9781461463122

DOI: 10.1007/978-1-4614-6312-2_5

More chapters in International Series in Operations Research & Management Science from Springer

 
Handle: RePEc:spr:isochp:978-1-4614-6312-2_5