EconPapers
 

Optimal Control Problem

Dipak Basu and Victoria Miroshnik
Additional contact information
Dipak Basu: Nagasaki University

Chapter 1 in Dynamic Systems Modeling and Optimal Control, 2015, pp 1-32 from Palgrave Macmillan

Abstract: Pontryagin (1962) and his associates developed the maximum principle for solving continuous-time control problems. The maximum (or minimum) principle provides a set of local necessary conditions for optimality. The method introduces variables analogous to Lagrange multipliers; these variables, usually denoted by p, are called the co-state or adjoint-system variables. A scalar-valued function H, generally a function of x, p, u (the state, co-state, and control vectors) and t, called the Hamiltonian function of the problem, is also considered.
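
The abstract's construction can be sketched in its standard textbook form (not quoted from the chapter; this assumes a problem of maximizing an integral objective with integrand f subject to dynamics given by g):

```latex
% Hamiltonian for maximizing \int_0^T f(x,u,t)\,dt subject to \dot{x} = g(x,u,t):
H(x, p, u, t) = f(x, u, t) + p^{\top} g(x, u, t)

% Canonical state/co-state equations and the maximum condition:
\dot{x} = \frac{\partial H}{\partial p}, \qquad
\dot{p} = -\frac{\partial H}{\partial x}, \qquad
u^{*}(t) = \arg\max_{u}\, H(x, p, u, t)
```

Here p plays the role of the Lagrange-multiplier-like co-state vector described in the abstract, and the optimal control u*(t) pointwise maximizes H along the optimal trajectory.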

Keywords: Optimal Control Problem; Riccati Equation; Generalized Inverse; Dynamic System Modeling; Stochastic Control Problem
Date: 2015

There are no downloads for this item; see the EconPapers FAQ for hints about obtaining it.


Persistent link: https://EconPapers.repec.org/RePEc:pal:palchp:978-1-137-50895-9_1

Ordering information: This item can be ordered from
http://www.palgrave.com/9781137508959

DOI: 10.1057/9781137508959_1


More chapters in Palgrave Macmillan Books from Palgrave Macmillan
Bibliographic data for this series is maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Page updated 2025-04-01
Handle: RePEc:pal:palchp:978-1-137-50895-9_1