EconPapers    

Literature Review

Philipp Melchiors
Additional contact information
Philipp Melchiors: Technische Universität München

Chapter 3 in Dynamic and Stochastic Multi-Project Planning, 2015, pp 19-28, from Springer

Abstract: Dynamic programming is a general technique for solving sequential decision problems. The first comprehensive books on the topic were written by Bellman [13] and Howard [62]. The most important methodologies for determining an optimal policy for a Markov decision process (MDP) are backward induction, value iteration (VI), policy iteration (PI) and linear programming. Because MDPs are formulated in discrete time, where all transitions have the same deterministic duration, many of these results and methodologies, such as VI, cannot be applied directly to continuous-time Markov decision processes with exponentially distributed transition times.
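As a minimal illustration of the value iteration (VI) methodology mentioned in the abstract, the sketch below repeatedly applies the Bellman optimality operator to a discrete-time, discounted MDP until the value function stops changing. The two-state toy MDP, its rewards, and all names here are assumptions for demonstration only, not taken from the chapter.

```python
# Value-iteration sketch for a discrete-time, discounted MDP.
# The toy two-state MDP below is an illustrative assumption, not from the chapter.

def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """Apply the Bellman optimality operator until the update is below tol.

    P[s][a] -- list of (next_state, probability) pairs,
    R[s][a] -- expected immediate reward,
    gamma   -- discount factor in (0, 1).
    Returns the (near-)optimal value function and a greedy policy.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Q-value of each action: reward plus discounted expected value.
            q = {a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                 for a in actions(s)}
            best = max(q.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best  # in-place (Gauss-Seidel) update
        if delta < tol:
            break
    policy = {s: max(actions(s),
                     key=lambda a: R[s][a] + gamma * sum(p * V[s2]
                                                         for s2, p in P[s][a]))
              for s in states}
    return V, policy

# Toy MDP: two states; in each state one may "stay" or "switch" deterministically.
states = ["s0", "s1"]
actions = lambda s: ["stay", "switch"]
P = {"s0": {"stay": [("s0", 1.0)], "switch": [("s1", 1.0)]},
     "s1": {"stay": [("s1", 1.0)], "switch": [("s0", 1.0)]}}
R = {"s0": {"stay": 0.0, "switch": 1.0},
     "s1": {"stay": 2.0, "switch": 0.0}}

V, policy = value_iteration(states, actions, P, R)
print(policy)  # greedy policy: switch out of s0, then stay in s1
```

Note that this formulation relies on every transition taking one identical time step; handling a continuous-time MDP with exponentially distributed transition times, as the abstract points out, requires additional machinery (e.g. uniformization) before a scheme like this applies.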

Keywords: Markov Decision Process; Project Schedule; Policy Iteration; Approximate Dynamic Programming; Order Acceptance
Date: 2015



Persistent link: https://EconPapers.repec.org/RePEc:spr:lnechp:978-3-319-04540-5_3

Ordering information: This item can be ordered from
http://www.springer.com/9783319045405

DOI: 10.1007/978-3-319-04540-5_3


More chapters in Lecture Notes in Economics and Mathematical Systems from Springer

Handle: RePEc:spr:lnechp:978-3-319-04540-5_3