On the Bellman’s principle of optimality
Eitan Gross
Physica A: Statistical Mechanics and its Applications, 2016, vol. 462, issue C, 217-221
Abstract:
Bellman’s equation is widely used to solve stochastic optimal control problems in a variety of applications, including investment planning, scheduling, and routing. Building on Markov decision processes with stationary policies, we present a new proof of Bellman’s equation of optimality. Our proof rests on the availability of an explicit model of the environment that embodies the transition probabilities and associated costs.
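The optimality equation in question characterizes the optimal value function as a fixed point, V*(s) = min_a [ c(s,a) + γ Σ_{s'} P(s'|s,a) V*(s') ]. As a minimal illustration of the model-based setting the abstract describes, the Python sketch below runs value iteration on a small finite Markov decision process; the transition tensor P, cost matrix C, discount factor gamma, and all numbers are illustrative assumptions, not taken from the paper.

```python
# Minimal value-iteration sketch for a finite MDP given an explicit model:
# transition probabilities P[s, a, s'] and expected costs C[s, a].
# All names and numbers below are hypothetical, for illustration only.
import numpy as np

def value_iteration(P, C, gamma=0.9, tol=1e-8, max_iter=10_000):
    """Iterate the Bellman optimality operator until convergence.

    P: (S, A, S) array, P[s, a, s'] = probability of landing in s' from (s, a)
    C: (S, A) array, expected immediate cost of taking action a in state s
    Returns the (approximately) optimal value function and a greedy
    stationary policy.
    """
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        # Q[s, a] = c(s, a) + gamma * sum_{s'} P(s'|s, a) * V(s')
        Q = C + gamma * (P @ V)
        V_new = Q.min(axis=1)          # Bellman optimality update (costs: min)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = Q.argmin(axis=1)          # greedy stationary policy w.r.t. V
    return V, policy

# Toy two-state, two-action MDP (hypothetical model).
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
C = np.array([[1.0, 2.0],
              [0.5, 1.5]])
V, pi = value_iteration(P, C)
print("V* =", V, "policy =", pi)
```

Because the operator is a γ-contraction in the sup norm, the iterates converge to the unique fixed point regardless of the initial V, which is why a simple loop like this suffices in the discounted case.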
Keywords: Dynamic programming; Markov decision processes; Principle of optimality
Date: 2016
Citations: 1
Downloads: http://www.sciencedirect.com/science/article/pii/S037843711630351X (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:phsmap:v:462:y:2016:i:c:p:217-221
DOI: 10.1016/j.physa.2016.06.083
Physica A: Statistical Mechanics and its Applications is currently edited by K. A. Dawson, J. O. Indekeu, H. E. Stanley and C. Tsallis