Information and Memory in Dynamic Resource Allocation
Kuang Xu and
Yuan Zhong
Additional contact information
Kuang Xu: Graduate School of Business, Stanford University, Stanford, California 94305
Yuan Zhong: Booth School of Business, University of Chicago, Chicago, Illinois 60637
Operations Research, 2020, vol. 68, issue 6, 1698-1715
Abstract:
We propose a general framework, dubbed stochastic processing under imperfect information, to study the impact of information constraints and memory on dynamic resource allocation. The framework involves a stochastic processing network (SPN) scheduling problem in which the scheduler may access the system state only through a noisy channel, and resource allocation decisions must be carried out through the interaction between an encoding policy (which observes the state) and an allocation policy (which chooses the allocation). Applications in the management of large-scale data centers and human-in-the-loop service systems are among our chief motivations. We quantify the degree to which information constraints reduce the size of the capacity region in general SPNs and how such reduction depends on the amount of memory available to the encoding and allocation policies. Using a novel metric, the capacity factor, our main theorem characterizes the reduction in capacity region (under "optimal" policies) for all nondegenerate channels and across almost all combinations of memory sizes. Notably, the theorem demonstrates, in substantial generality, that (1) the presence of a noisy channel always reduces capacity, (2) more memory for the allocation policy always improves capacity, and (3) more memory for the encoding policy has little to no effect on capacity. Finally, all of our positive (achievability) results are established through constructive, implementable policies. Our proof program involves the development of a host of new techniques, largely from first principles, by combining ideas from information theory, learning, and queuing theory. As a submodule of one of the proposed policies, we create a simple yet powerful generalization of the maximum-weight (max-weight) policy, in which individual Markov chains are selected dynamically, in a manner analogous to how schedules are used in a conventional max-weight policy.
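For readers unfamiliar with the conventional max-weight policy that the abstract generalizes, a minimal sketch follows. This is not the paper's policy (which selects Markov chains rather than schedules and operates under a noisy channel); it is the standard textbook rule: at each time step, choose the feasible schedule maximizing the sum of queue length times service rate. The queue values and schedule set below are purely illustrative.

```python
def max_weight_schedule(queues, schedules):
    """Conventional max-weight rule: pick the feasible schedule s
    maximizing the weight sum_i Q_i * s_i, where Q_i is the length
    of queue i and s_i the service offered to queue i under s."""
    return max(schedules, key=lambda s: sum(q * si for q, si in zip(queues, s)))

# Illustration: two queues sharing one server that can serve one queue per slot.
queues = [5, 2]                  # current queue lengths (hypothetical)
schedules = [(1, 0), (0, 1)]     # feasible service vectors
chosen = max_weight_schedule(queues, schedules)  # serves the longer queue
```

The paper's generalization replaces the fixed schedule set with dynamically selected Markov chains, but the weight-maximization structure is analogous.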
Keywords: resource allocation; scheduling; max-weight algorithm; queuing; stochastic processing network; information theory; memory
Date: 2020
Downloads: https://doi.org/10.1287/opre.2019.1940 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:68:y:2020:i:6:p:1698-1715
More articles in Operations Research from INFORMS.
Bibliographic data for series maintained by Chris Asher.