EconPapers    

Invariant problems in dynamic programming - average reward criterion

David Assaf

Stochastic Processes and their Applications, 1980, vol. 10, issue 3, 313-322

Abstract: A dynamic programming problem is called invariant if its transition mechanism depends only on the action taken and does not depend on the current state of the system. Replacement and maintenance problems, two types that arise frequently in applications, are often invariant. The paper studies properties of invariant problems when the state space is arbitrary and the action space is finite. The main result is a method of obtaining optimal policies for this case when the optimality criterion is that of maximizing the average reward per unit time. Results are illustrated by examples.
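To make the invariance property concrete: if the next-state distribution q(·|a) depends only on the action a, then every stationary policy induces a Markov chain whose rows are copies of the q(·|a)'s, and with finitely many actions the average-reward-optimal stationary policy can (in the finite-state case) be found by direct comparison. The following sketch is not from the paper; the two-state "machine" model, the transition law `q`, and the rewards `r` are invented purely for illustration.

```python
from itertools import product

# Invariant problem: the transition law q[a] depends only on the action a,
# never on the current state (the defining property of invariance).
q = {0: [0.8, 0.2],   # action 0 ("operate"): next state ~ (0.8, 0.2)
     1: [1.0, 0.0]}   # action 1 ("replace"): machine reset to state 0
r = {(0, 0): 1.0, (1, 0): 0.0,    # operating pays 1 in state 0, 0 in state 1
     (0, 1): -0.5, (1, 1): -0.5}  # replacing costs 0.5 in either state

states = [0, 1]

def average_reward(policy):
    """Long-run average reward per unit time of a stationary policy
    (dict state -> action), via power iteration on the induced chain."""
    mu = [0.5, 0.5]
    for _ in range(10_000):
        mu = [sum(mu[s] * q[policy[s]][t] for s in states) for t in states]
    return sum(mu[s] * r[(s, policy[s])] for s in states)

# Finite action space: enumerate all deterministic stationary policies
# and keep the one with the largest average reward.
best = max((dict(zip(states, acts)) for acts in product([0, 1], repeat=2)),
           key=average_reward)
print(best, round(average_reward(best), 4))
```

In this toy instance "always operate" wins: because transitions are invariant, operating in the failed state still returns the machine to the good state with probability 0.8, so replacement never pays for itself. The paper's actual contribution is a method that handles an arbitrary state space, where such brute-force enumeration over induced chains is unavailable.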

Keywords: Average reward; optimal policy; dynamic programming; optimality equation; invariant problems; β-optimal policy (search for similar items in EconPapers)
Date: 1980

Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/0304-4149(80)90014-9
Full text for ScienceDirect subscribers only

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text

Persistent link: https://EconPapers.repec.org/RePEc:eee:spapps:v:10:y:1980:i:3:p:313-322

Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/supportfaq.cws_home/regional
https://shop.elsevie ... _01_ooc_1&version=01

Access Statistics for this article

Stochastic Processes and their Applications is currently edited by T. Mikosch

More articles in Stochastic Processes and their Applications from Elsevier
Bibliographic data for series maintained by Catherine Liu.

 
Page updated 2025-03-19
Handle: RePEc:eee:spapps:v:10:y:1980:i:3:p:313-322