A leader–follower partially observed, multiobjective Markov game
Yanling Chang, Alan Erera and Chelsea White
Annals of Operations Research, 2015, vol. 235, issue 1, 103-128
Abstract:
The intent of this research is to generate a set of non-dominated finite-memory policies from which one of two agents (the leader) can select a most preferred policy to control a dynamic system that is also affected by the control decisions of the other agent (the follower). The problem is modeled as an infinite-horizon, total discounted reward, partially observed Markov game (POMG). For each candidate finite-memory leader policy, we assume the follower, fully aware of the leader policy, determines a (perfect-memory) policy that optimizes the follower’s (scalar) criterion. The leader–follower assumption allows the POMG to be transformed into a specially structured, partially observed Markov decision process that we use to determine the follower’s best-response policy for a given leader policy. We then approximate the follower’s policy by a finite-memory policy. Each agent’s policy assumes that the agent knows its current and recent state values, its recent actions, and the current and recent (possibly inaccurate) observations of the other agent’s state. For each leader/follower policy pair, we determine the values of the leader’s criteria. We use a multi-objective genetic algorithm to create the next generation of leader policies based on the values of the leader’s criteria for each leader/follower policy pair in the current generation. From this information for the final generation of policies, we determine the set of non-dominated leader policies. We present an example that illustrates how these results can be used to support a manager of a liquid egg production process (the leader) in selecting a sequence of actions that maximizes expected process productivity while mitigating the risk posed by an attacker (the follower) who seeks to contaminate the process with a chemical or biological toxin.
Copyright Springer Science+Business Media New York 2015
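For readers who want a concrete picture of the outer loop the abstract describes, the following is a minimal, hypothetical Python sketch. The policy encoding (0/1 action tables), the functions follower_best_response and leader_criteria, and the GA operators are toy stand-ins introduced here for illustration; they are not the paper's liquid egg production model, its POMDP best-response computation, or its exact genetic algorithm.

```python
import random

POLICY_LEN = 8        # toy finite-memory leader policy: a 0/1 action table
POP_SIZE, GENERATIONS, MUT_RATE = 30, 40, 0.1

def follower_best_response(leader):
    # Stand-in for solving the follower's POMDP best response: here the
    # follower, fully aware of the leader policy, simply targets the
    # leader's most frequent action.
    return max(set(leader), key=leader.count)

def leader_criteria(leader, follower):
    # Two toy leader objectives, both maximized: "productivity" rewards
    # action 1; "security" rewards entries the follower did not anticipate.
    return (sum(leader), sum(1 for a in leader if a != follower))

def dominates(u, v):
    # u dominates v if it is at least as good everywhere and better somewhere.
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def non_dominated(scored):
    # Keep the (policy, criteria) pairs no other criteria vector dominates.
    return [(p, u) for p, u in scored
            if not any(dominates(v, u) for _, v in scored)]

def evaluate(pop):
    # Pair each leader policy with the follower's best response, then score.
    return [(p, leader_criteria(p, follower_best_response(p))) for p in pop]

def next_generation(scored):
    # Generic GA step: binary tournament favoring the current Pareto front,
    # one-point crossover, bit-flip mutation.
    front = {tuple(p) for p, _ in non_dominated(scored)}
    def pick():
        a, b = random.sample(scored, 2)
        return a[0] if tuple(a[0]) in front else b[0]
    children = []
    while len(children) < POP_SIZE:
        pa, pb = pick(), pick()
        cut = random.randrange(1, POLICY_LEN)
        children.append([bit ^ (random.random() < MUT_RATE)
                         for bit in pa[:cut] + pb[cut:]])
    return children

random.seed(0)
population = [[random.randint(0, 1) for _ in range(POLICY_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = next_generation(evaluate(population))

for policy, criteria in non_dominated(evaluate(population)):
    print(policy, criteria)
```

In the paper, the best-response step would instead solve the specially structured POMDP induced by fixing the leader's finite-memory policy, and the criteria would be infinite-horizon discounted rewards; only the overall generate/evaluate/select-non-dominated loop is meant to carry over.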
Keywords: Dynamic programming; Artificial intelligence; Sequential decision making
Date: 2015
Citations: 3
Downloads: http://hdl.handle.net/10.1007/s10479-015-1935-0 (text/html)
Access to full text is restricted to subscribers.
Persistent link: https://EconPapers.repec.org/RePEc:spr:annopr:v:235:y:2015:i:1:p:103-128:10.1007/s10479-015-1935-0
Ordering information: This journal article can be ordered from http://www.springer.com/journal/10479
DOI: 10.1007/s10479-015-1935-0