Bayesian Meta-Prior Learning Using Empirical Bayes
Sareh Nabi,
Houssam Nassif,
Joseph Hong,
Hamed Mamani and
Guido Imbens
Additional contact information
Sareh Nabi: Foster School of Business, University of Washington, Seattle, Washington 98195; Amazon, Seattle, Washington 98109
Houssam Nassif: Amazon, Seattle, Washington 98109
Joseph Hong: Amazon, Seattle, Washington 98109
Hamed Mamani: Foster School of Business, University of Washington, Seattle, Washington 98195
Guido Imbens: Amazon, Seattle, Washington 98109; Graduate School of Business, Stanford University, Stanford, California 94305
Management Science, 2022, vol. 68, issue 3, 1737-1755
Abstract:
Adding domain knowledge to a learning system is known to improve results. In multiparameter Bayesian frameworks, such knowledge is incorporated as a prior. At the same time, the various model parameters can have different learning rates in real-world problems, especially with skewed data. Two challenges often faced in operations management and management science applications are the absence of informative priors and the inability to control parameter learning rates. In this study, we propose a hierarchical empirical Bayes approach that addresses both challenges and that can generalize to any Bayesian framework. Our method learns empirical meta-priors from the data itself and uses them to decouple the learning rates of first-order and second-order features (or any other given feature grouping) in a generalized linear model. Because the first-order features are likely to have a more pronounced effect on the outcome, focusing on learning the first-order weights first is likely to improve performance and convergence time. Our empirical Bayes method clamps features in each group together and uses the deployed model’s observed data to empirically compute a hierarchical prior in hindsight. We report theoretical results for the unbiasedness, strong consistency, and optimal frequentist cumulative regret properties of our meta-prior variance estimator. We apply our method to a standard supervised learning optimization problem as well as to an online combinatorial optimization problem in a contextual bandit setting implemented in an Amazon production system. In both simulations and live experiments, our method shows marked improvements, especially when traffic is small. Our findings are promising because optimizing over sparse data is often a challenge.
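As a rough illustration of the abstract's core idea, the sketch below shows one way a group-level meta-prior variance could be estimated empirically and fed into a Thompson sampling draw. This is not the authors' production algorithm: it assumes Gaussian posteriors, uses a simple method-of-moments estimator, and all names and numbers (e.g., `meta_prior_variance`, the feature groups and their posterior summaries) are hypothetical.

```python
# Minimal sketch of the meta-prior idea, assuming Gaussian posteriors and a
# method-of-moments variance estimator; an illustration only, not the
# paper's exact estimator, and all names and numbers are hypothetical.
import numpy as np

def meta_prior_variance(post_means, post_vars):
    """Empirical Bayes estimate of one group's prior variance: the spread
    of the group's posterior means, less their average posterior
    uncertainty, floored at a small positive value."""
    post_means = np.asarray(post_means, dtype=float)
    post_vars = np.asarray(post_vars, dtype=float)
    return max(post_means.var() - post_vars.mean(), 1e-6)

# Per-feature posterior summaries from a deployed model, grouped so that
# first-order and second-order features get separate meta-priors.
groups = {
    "first_order":  {"means": [0.9, 1.2, 0.7],   "vars": [0.05, 0.04, 0.06]},
    "second_order": {"means": [0.1, -0.2, 0.05], "vars": [0.50, 0.45, 0.55]},
}

# Each group's empirically learned meta-prior N(0, tau_g^2): diffuse for
# the slow-moving group, tighter where the data are already informative.
tau2 = {g: meta_prior_variance(v["means"], v["vars"]) for g, v in groups.items()}

# Thompson sampling step: draw each weight from its Gaussian posterior.
# (The posterior update under the meta-prior is omitted in this sketch.)
rng = np.random.default_rng(seed=0)
theta = {g: rng.normal(v["means"], np.sqrt(v["vars"])) for g, v in groups.items()}

for g in groups:
    print(f"{g}: meta-prior variance ~ {tau2[g]:.3f}, sampled weights {theta[g]}")
```

In the paper's bandit application these group-level priors enter a generalized linear model whose weights are then Thompson sampled; the sketch only hints at that pipeline by separating the first-order and second-order groups.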
Keywords: informative prior; meta-prior; empirical Bayes; Bayesian bandit; generalized linear models; Thompson sampling; feature grouping; learning rate
Date: 2022
Citations: 1
Downloads: http://dx.doi.org/10.1287/mnsc.2021.4136 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:ormnsc:v:68:y:2022:i:3:p:1737-1755