Metropolis-Hastings sampling efficiency can be improved by the implementation of advanced sampling techniques such as delayed rejection (Green and Mira, 2001) and adaptive Metropolis (Haario et al., 2001). Used together, the techniques are referred to as DRAM (Haario et al., 2006).
Traditional Metropolis-Hastings sampling considers a single proposal at a time, accepting or rejecting it on the basis of the ratio of the target density at the proposed point to that at the current point. Typically, the proposal is drawn from some fixed distribution, often a Gaussian with fixed covariance centered on the current point. If the proposal variance is chosen too high, many proposals will be rejected, increasing the autocorrelation of the chain (this is bad), and the result will be a "lumpy"-looking sample of the target distribution. Likewise, if the target distribution deviates greatly from a Gaussian, the proposal may not match the local shape of the distribution, resulting in poor sampling.
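The basic scheme above can be sketched as follows. This is a minimal random-walk Metropolis-Hastings sampler in Python with NumPy; the standard-normal target and all function names are illustrative choices, not anything from the text.

```python
import numpy as np

def log_target(x):
    # Toy target for illustration: unnormalized standard-normal log-density
    return -0.5 * np.dot(x, x)

def metropolis_hastings(log_target, x0, cov, n_steps, rng=None):
    """Random-walk Metropolis with a FIXED Gaussian proposal covariance."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    logp = log_target(x)
    chain, accepted = [x.copy()], 0
    for _ in range(n_steps):
        # Gaussian proposal centered on the current point
        prop = rng.multivariate_normal(x, cov)
        logp_prop = log_target(prop)
        # Accept with probability min(1, pi(prop) / pi(current))
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = prop, logp_prop
            accepted += 1
        chain.append(x.copy())
    return np.array(chain), accepted / n_steps
```

Note that the proposal covariance `cov` is fixed for the whole run; the text's point is that a poor choice of this single tuning input degrades the chain, which is what delayed rejection and adaptive Metropolis each address.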
Delayed rejection seeks to rectify issues associated with inappropriate proposals by allowing a second, modified proposal before treating the sampler cycle as a rejection. The second proposal is typically drawn with a decreased variance and is accepted with a modified probability, chosen so that the chain remains reversible.
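A sketch of one delayed-rejection step, following the symmetric-Gaussian case of Green and Mira (2001): after a first rejection, a second proposal is drawn with shrunken covariance and accepted with the stage-two probability, which involves the reversed-path stage-one probability and the first-stage proposal-density ratio. The `shrink` factor and all names here are my own illustrative choices.

```python
import numpy as np

def dr_step(log_target, x, logp, cov, rng, shrink=0.25):
    """One delayed-rejection cycle: first proposal from N(x, cov); on
    rejection, a second proposal from N(x, shrink*cov) with the
    modified stage-two acceptance probability."""
    cov_inv = np.linalg.inv(cov)
    # Stage one: ordinary Metropolis proposal and acceptance
    y1 = rng.multivariate_normal(x, cov)
    logp1 = log_target(y1)
    a1 = min(1.0, np.exp(logp1 - logp))
    if rng.uniform() < a1:
        return y1, logp1, True
    # Stage two: delayed rejection with a shrunken proposal about x
    y2 = rng.multivariate_normal(x, shrink * cov)
    logp2 = log_target(y2)
    # Stage-one acceptance probability along the reversed path y2 -> y1
    a1_rev = min(1.0, np.exp(logp1 - logp2))
    # Ratio of first-stage proposal densities q1(y2 -> y1) / q1(x -> y1)
    log_q_ratio = (-0.5 * (y1 - y2) @ cov_inv @ (y1 - y2)
                   + 0.5 * (y1 - x) @ cov_inv @ (y1 - x))
    # alpha2 = min(1, pi(y2) q1(y2,y1) (1 - a1_rev) / [pi(x) q1(x,y1) (1 - a1)])
    log_num = logp2 - logp + log_q_ratio + np.log(max(1.0 - a1_rev, 1e-300))
    log_den = np.log(max(1.0 - a1, 1e-300))
    if np.log(rng.uniform()) < log_num - log_den:
        return y2, logp2, True
    return x, logp, False
```

The second stage costs an extra target evaluation only when the first proposal is rejected, which is exactly the case where the fixed proposal is doing poorly.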
In cases where the proposal variance is chosen to be too small to efficiently sample the target distribution, the sampler will often perform a random walk through regions of medium-to-high likelihood in the target distribution, without efficiently sampling the full distribution. A telltale sign of this problem is an anomalously high sampler acceptance rate. In such cases, delaying rejection with a reduced second proposal variance is unlikely to improve matters.
As the aforementioned poorly adjusted sampler drifts slowly across the target distribution, the chain's own covariance should become a better choice than the proposal covariance. Therefore, one can consider continuously adapting the proposal to match the chain's covariance, rather than sticking to a static proposal (which may be wholly inappropriate). Adaptive Metropolis sampling does just this, continuously adapting the proposal covariance based on the entire saved chain. Because each proposal depends on all previous samples, the sampler is no longer Markovian; importantly, however, it can be shown to remain ergodic.
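The adaptation described above can be sketched as follows, a minimal take on Haario et al. (2001): after a burn-in of fixed-proposal steps, the proposal covariance is set to the scaled empirical covariance of the whole chain, regularized by a small multiple of the identity. The scaling 2.4²/d is the standard choice from that paper; the target and parameter names are illustrative.

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_steps, adapt_start=100,
                        eps=1e-6, rng=None):
    """Adaptive Metropolis sketch: the Gaussian proposal covariance is
    repeatedly refit to the empirical covariance of the saved chain."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    d = x.size
    sd = 2.4 ** 2 / d            # dimension-dependent scaling (Haario et al., 2001)
    cov = np.eye(d)              # initial (non-adapted) proposal covariance
    logp = log_target(x)
    chain = [x.copy()]
    for t in range(1, n_steps + 1):
        prop = rng.multivariate_normal(x, cov)
        logp_prop = log_target(prop)
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = prop, logp_prop
        chain.append(x.copy())
        if t >= adapt_start:
            # Adapt using the ENTIRE saved chain; eps*I keeps cov nonsingular
            cov = sd * (np.cov(np.array(chain).T) + eps * np.eye(d))
    return np.array(chain)
```

Refitting to the full chain at every step is O(t) per iteration; in practice the covariance is usually updated recursively or at fixed intervals, but the simple form above matches the description in the text.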
Figure: (A) Metropolis-Hastings sampling, Lorenz model. (B) As (A), with delayed rejection sampling. (C) As (A), with adaptive Metropolis sampling. (D) As (A), with DRAM sampling.
© 2011 Marcus van Lier-Walqui