LEARNING INDIRECT ACTIONS IN COMPLEX DOMAINS: ACTION SUGGESTIONS FOR AIR TRAFFIC CONTROL
Adrian Agogino and
Kagan Tumer
Additional contact information
Adrian Agogino: UCSC, NASA Ames Research Center, Mailstop 269-3, Moffett Field, California 94035, USA
Kagan Tumer: Oregon State University, 204 Rogers Hall, Corvallis, Oregon 97331, USA
Advances in Complex Systems (ACS), 2009, vol. 12, issue 04n05, 493-512
Abstract:
Providing intelligent algorithms to manage the ever-increasing flow of air traffic is critical to the efficiency and economic viability of air transportation systems. Yet current automated solutions leave existing human controllers "out of the loop," rendering the potential solutions both technically dangerous (e.g., inability to react to suddenly developing conditions) and politically charged (e.g., the role of air traffic controllers in a fully automated system). Instead, this paper outlines a distributed agent-based solution in which agents provide suggestions to human controllers. Though conceptually pleasing, this approach introduces two critical research issues. First, the agents' actions are now filtered through interactions with other agents, human controllers, and the environment before leading to a system state. This indirect action-to-effect process creates a complex learning problem. Second, even in the best case, not all air traffic controllers will be willing or able to follow the agents' suggestions. This partial participation effect requires the system to be robust to variation in the number of controllers who follow the agents' suggestions. In this paper, we present an agent reward structure that allows agents to learn good actions in this indirect environment, and we explore the ability of these suggestion agents to achieve good system-level performance. We present a series of experiments based on real historical air traffic data combined with simulation of air traffic flow around the New York City area. Results show that the agents can improve system-wide performance by up to 20% over that of human controllers alone, and that these results degrade gracefully as the number of human controllers who follow the agents' suggestions declines.
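The abstract names two mechanisms: suggestions that are only indirectly executed (controllers may not comply), and an agent reward structure suited to learning through that indirection. The following is a minimal, self-contained sketch of those two ideas only; it is not the authors' implementation. All names, parameters, the toy congestion reward, and the use of a difference-style counterfactual reward signal are illustrative assumptions, not details taken from the paper.

```python
import random

random.seed(0)

NUM_AGENTS = 5     # hypothetical number of suggestion agents (e.g., one per traffic fix)
NUM_ACTIONS = 3    # hypothetical discrete suggestion set (e.g., delay levels)
COMPLIANCE = 0.7   # assumed probability a controller follows a suggestion
EPISODES = 2000
EPSILON = 0.1      # exploration rate
ALPHA = 0.1        # learning rate

# Per-agent action-value tables for a stateless, bandit-style learner.
values = [[0.0] * NUM_ACTIONS for _ in range(NUM_AGENTS)]

def system_reward(executed):
    # Toy stand-in for a congestion cost: reward is highest when executed
    # actions are spread evenly (no single flow is overloaded).
    counts = [executed.count(a) for a in range(NUM_ACTIONS)]
    return -sum(c * c for c in counts)

def step():
    # Each agent suggests an action (epsilon-greedy over its value table).
    suggestions = [
        random.randrange(NUM_ACTIONS) if random.random() < EPSILON
        else max(range(NUM_ACTIONS), key=lambda a: values[i][a])
        for i in range(NUM_AGENTS)
    ]
    # Partial participation: a non-complying controller picks independently,
    # so the agent's action reaches the system only indirectly.
    executed = [s if random.random() < COMPLIANCE else random.randrange(NUM_ACTIONS)
                for s in suggestions]
    G = system_reward(executed)
    for i, a in enumerate(suggestions):
        # Difference-style reward: global reward minus the reward with agent
        # i's executed action replaced by a fixed default (action 0 here).
        counterfactual = executed[:i] + [0] + executed[i + 1:]
        D = G - system_reward(counterfactual)
        values[i][a] += ALPHA * (D - values[i][a])
    return G

for _ in range(EPISODES):
    G = step()
```

Rewarding each agent with a counterfactual difference rather than the raw global reward keeps the learning signal sensitive to that agent's own suggestion even though compliance and the other agents' actions add noise between suggestion and outcome.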
Keywords: Air traffic control; learning indirect actions; suggestion agents; multiagent learning
Date: 2009
Downloads:
http://www.worldscientific.com/doi/abs/10.1142/S0219525909002283
Access to full text is restricted to subscribers
Persistent link: https://EconPapers.repec.org/RePEc:wsi:acsxxx:v:12:y:2009:i:04n05:n:s0219525909002283
DOI: 10.1142/S0219525909002283
Advances in Complex Systems (ACS) is currently edited by Frank Schweitzer
More articles in Advances in Complex Systems (ACS) from World Scientific Publishing Co. Pte. Ltd.
Bibliographic data for series maintained by Tai Tone Lim.