Discrete Hit-and-Run for Sampling Points from Arbitrary Distributions Over Subsets of Integer Hyperrectangles
Stephen Baumert,
Archis Ghate,
Seksan Kiatsupaibul,
Yanfang Shen,
Robert L. Smith and
Zelda B. Zabinsky
Additional contact information
Stephen Baumert: U.S. Postal Service, Office of Inspector General, Arlington, Virginia 22209
Archis Ghate: Department of Industrial and Systems Engineering, University of Washington, Seattle, Washington 98195
Seksan Kiatsupaibul: Department of Statistics, Chulalongkorn University, Bangkok 10330, Thailand
Yanfang Shen: Clearsight Systems, Inc., Seattle, Washington 98104
Robert L. Smith: Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, Michigan 48109
Zelda B. Zabinsky: Department of Industrial and Systems Engineering, University of Washington, Seattle, Washington 98195
Operations Research, 2009, vol. 57, issue 3, 727-739
Abstract:
We consider the problem of sampling a point from an arbitrary distribution π over an arbitrary subset S of an integer hyperrectangle. Neither the distribution π nor the support set S is assumed to be available as an explicit mathematical expression; they may be defined only through oracles and, in particular, computer programs. This problem commonly occurs in black-box discrete optimization as well as in counting and estimation problems. The generality of this setting and the high dimensionality of S preclude the application of conventional random variable generation methods. As a result, we turn to Markov chain Monte Carlo (MCMC) sampling, where we execute an ergodic Markov chain that converges to π, so that the distribution of the point delivered after sufficiently many steps can be made arbitrarily close to π. Unfortunately, classical Markov chains, such as the nearest-neighbor random walk or the coordinate direction random walk, fail to converge to π because they can get trapped in isolated regions of the support set. To surmount this difficulty, we propose discrete hit-and-run (DHR), a Markov chain motivated by the hit-and-run algorithm known to be the most efficient method for sampling from log-concave distributions over convex bodies in R^n. We prove that the limiting distribution of DHR is π, as desired, thus enabling us to sample approximately from π by delivering the last iterate of a sufficiently large number of iterations of DHR. In addition to this asymptotic analysis, we investigate the finite-time behavior of DHR and present a variety of examples where DHR exhibits polynomial performance.
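The exact DHR transition kernel is defined in the paper; as a rough, hedged illustration of the hit-and-run idea it builds on, the Python sketch below makes a line move through the current point of an integer hyperrectangle and applies a Metropolis filter toward a target π restricted to S. The function and oracle names (hit_and_run_step, in_S, weight) are hypothetical placeholders introduced here for illustration, not part of the paper.

import math
import random

# Hypothetical oracles (assumed names for illustration; the paper only assumes
# such black-box oracles exist and does not prescribe an interface):
#   in_S(x)   -> True if the integer point x lies in the support set S
#   weight(x) -> a value proportional to pi(x), for x in S

def hit_and_run_step(x, lower, upper, in_S, weight, rng=random):
    # One hit-and-run-style move on the integer hyperrectangle
    # prod_i {lower[i], ..., upper[i]}.  This is an illustrative sketch with a
    # Metropolis filter, NOT the exact DHR transition kernel from the paper.
    d = len(x)

    # Pick a random nonzero direction with components in {-1, 0, 1}.
    direction = [0] * d
    while all(c == 0 for c in direction):
        direction = [rng.choice((-1, 0, 1)) for _ in range(d)]

    # Largest integer step range [k_lo, k_hi] keeping x + k*direction in the box.
    k_min, k_max = -math.inf, math.inf
    for xi, di, lo, hi in zip(x, direction, lower, upper):
        if di > 0:
            k_min = max(k_min, (lo - xi) / di)
            k_max = min(k_max, (hi - xi) / di)
        elif di < 0:
            k_min = max(k_min, (hi - xi) / di)
            k_max = min(k_max, (lo - xi) / di)
    k_lo, k_hi = math.ceil(k_min), math.floor(k_max)

    # Propose a point uniformly from the integer points on the line segment,
    # then accept with a Metropolis ratio so the chain targets pi on S.
    k = rng.randint(k_lo, k_hi)
    y = [xi + k * di for xi, di in zip(x, direction)]
    if in_S(y) and rng.random() < min(1.0, weight(y) / weight(x)):
        return y
    return x  # reject: stay at the current point

Starting from any point in S, repeatedly applying such a step and delivering the final iterate approximates a draw from π, provided the chain can reach all of S through these line moves; overcoming the trapping of simpler walks in isolated regions is precisely what the paper's DHR construction addresses.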
Keywords: simulation; Markov chain Monte Carlo; probability; random walk
Date: 2009
Downloads: http://dx.doi.org/10.1287/opre.1080.0600 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:57:y:2009:i:3:p:727-739