Title: | Sampling from an Empirical Likelihood Bayesian Posterior of Parameters Using Hamiltonian Monte Carlo |
---|---|
Description: | A tool to draw samples from an Empirical Likelihood Bayesian posterior of parameters using Hamiltonian Monte Carlo. |
Authors: | Dang Trung Kien [aut], Sanjay Chaudhuri [aut, cre], Neo Han Wei [aut] |
Maintainer: | Sanjay Chaudhuri <[email protected]> |
License: | GPL-2 |
Version: | 1.2.1 |
Built: | 2024-10-30 09:23:10 UTC |
Source: | CRAN |
This function draws samples from an Empirical Likelihood Bayesian posterior distribution of parameters using Hamiltonian Monte Carlo.
ELHMC(
  initial, data, fun, dfun, prior, dprior,
  n.samples = 100, lf.steps = 10, epsilon = 0.05,
  p.variance = 1, tol = 10^-5, detailed = FALSE,
  print.interval = 1000, plot.interval = 0,
  which.plot = NULL, FUN, DFUN
)
initial |
a vector containing the initial values of the parameters |
data |
a matrix containing the data |
fun |
the estimating function |
dfun |
a function that calculates the gradient of the estimating function |
prior |
a function with one argument that returns the log of the prior density of the parameters evaluated at that argument |
dprior |
a function with one argument that returns the gradient of the log prior density of the parameters evaluated at that argument |
n.samples |
number of samples to draw |
lf.steps |
number of leap frog steps in each Hamiltonian Monte Carlo update |
epsilon |
the leap frog step size(s). This has to be a single numeric value or a vector of the same length as initial |
p.variance |
the covariance matrix of a multivariate normal distribution
used to generate the initial values of momentum |
tol |
EL tolerance |
detailed |
If this is set to TRUE, the function returns extra detailed information about each update: the proposed values, the acceptance decisions, and the leapfrog trajectories (see Value) |
print.interval |
the frequency at which the results would be printed on the terminal. Defaults to 1000. |
plot.interval |
the frequency at which the drawn samples would be plotted. The last half of the samples drawn are plotted after each plot.interval steps. The acceptance rate is also plotted. Defaults to 0, which means no plot. |
which.plot |
the vector of parameters to be plotted after each plot.interval. Defaults to NULL, which means no plot. |
FUN |
the same as fun |
DFUN |
the same as dfun |
Suppose there are data x = (x_1, x_2, ..., x_n), where x_i takes values in R^p and follows the probability distribution F. Also, F comes from a family of distributions that depends on a parameter theta, and there is a smooth function g(x_i, theta) that satisfies E_F[g(x_i, theta)] = 0 for i = 1, ..., n.
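For intuition, the empirical likelihood at a given theta maximizes prod(w_i) over weights w_i subject to sum(w_i) = 1 and sum(w_i * g(x_i, theta)) = 0. The sketch below is a hand-rolled illustration, not package code: it evaluates the EL weights for the mean of a tiny one-dimensional data set by solving the dual problem for the Lagrange multiplier with uniroot.

```r
## Empirical likelihood weights for the mean of 1-D data (illustrative sketch;
## ELHMC performs this kind of computation internally for general g).
el_weights <- function(x, theta) {
  g <- x - theta                        # estimating function g(x_i, theta)
  n <- length(x)
  ## The Lagrange multiplier solves sum(g / (1 + lambda * g)) = 0,
  ## subject to 1 + lambda * g_i > 0 for every i
  lo <- -1 / max(g) ; hi <- -1 / min(g)
  eps <- 1e-8
  lambda <- uniroot(function(l) sum(g / (1 + l * g)),
                    lower = lo + eps, upper = hi - eps, tol = 1e-10)$root
  1 / (n * (1 + lambda * g))            # the EL weights w_i
}

w <- el_weights(c(-1, 0, 2), theta = 0.5)
sum(w)                 # weights sum to 1
sum(w * c(-1, 0, 2))   # weighted mean equals theta = 0.5
```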
ELHMC draws samples from an Empirical Likelihood Bayesian posterior distribution of the parameter theta, given the data as data, the smoothing function g as fun and the gradient of g as dfun, or the smoothing function as FUN and its gradient as DFUN.
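Each Hamiltonian Monte Carlo update integrates Hamiltonian dynamics with lf.steps leapfrog steps of size epsilon. The sketch below shows a generic leapfrog integrator of this form, written from scratch for illustration; inside ELHMC the target is the log EL posterior, whereas here a standard normal log density stands in for it.

```r
## Generic leapfrog integrator (illustrative; not the package internals).
## grad_log_post plays the role of the gradient of the log EL posterior.
leapfrog <- function(q, p, grad_log_post, lf.steps = 10, epsilon = 0.05) {
  p <- p + (epsilon / 2) * grad_log_post(q)   # half step for momentum
  for (i in seq_len(lf.steps)) {
    q <- q + epsilon * p                      # full step for position
    if (i < lf.steps) p <- p + epsilon * grad_log_post(q)
  }
  p <- p + (epsilon / 2) * grad_log_post(q)   # final half step for momentum
  list(q = q, p = p)
}

## For a standard normal target, the log posterior is -q^2/2 with
## gradient -q, and the Hamiltonian H = q^2/2 + p^2/2 is nearly conserved:
out <- leapfrog(q = 1, p = 0.5, grad_log_post = function(q) -q)
H0 <- 1 ^ 2 / 2 + 0.5 ^ 2 / 2
H1 <- out$q ^ 2 / 2 + out$p ^ 2 / 2
abs(H1 - H0)   # small for small epsilon
```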
The function returns a list with the following elements:
samples |
A matrix containing the parameter samples |
acceptance.rate |
The acceptance rate |
call |
The matched call |
If detailed = TRUE, the list contains these extra elements:
proposed |
A matrix containing the proposed values at each Hamiltonian Monte Carlo update |
acceptance |
A vector of TRUE/FALSE values indicating whether each proposed value was accepted |
trajectory |
A list with 2 elements, the trajectories of the parameter and momentum values in the leapfrog steps |
Chaudhuri, S., Mondal, D. and Yin, T. (2017) Hamiltonian Monte Carlo sampling in Bayesian empirical likelihood computation. Journal of the Royal Statistical Society: Series B.
Neal, R. (2011) MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo (eds S. Brooks, A. Gelman, G. L. Jones and X.-L. Meng), pp. 113-162. New York: Taylor and Francis.
## Suppose there are four data points (1, 1), (1, -1), (-1, -1), (-1, 1)
x = rbind(c(1, 1), c(1, -1), c(-1, -1), c(-1, 1))
## If the parameter of interest is the mean, the smoothing function and
## its gradient would be
f <- function(params, x) {
  x - params
}
df <- function(params, x) {
  rbind(c(-1, 0), c(0, -1))
}
## Draw 50 samples from the Empirical Likelihood Bayesian posterior distribution
## of the mean, using initial values (0.96, 0.97) and standard normal distributions
## as priors:
normal_prior <- function(x) {
  -0.5 * (x[1] ^ 2 + x[2] ^ 2) - log(2 * pi)
}
normal_prior_log_gradient <- function(x) {
  -x
}
set.seed(1234)
mean.samples <- ELHMC(initial = c(0.96, 0.97), data = x, fun = f, dfun = df,
                      n.samples = 50, prior = normal_prior,
                      dprior = normal_prior_log_gradient)
plot(mean.samples$samples, type = "l", xlab = "", ylab = "")