| Field | Value |
|---|---|
| Title | Bayesian Screening and Variable Selection |
| Description | Performs Bayesian variable screening and selection for ultra-high dimensional linear regression models. |
| Authors | Dongjin Li [aut, cre], Debarshi Chakraborty [aut], Somak Dutta [aut], Vivekananda Roy [ctb] |
| Maintainer | Dongjin Li <[email protected]> |
| License | GPL-3 |
| Version | 3.2.2 |
| Built | 2024-10-30 09:22:24 UTC |
| Source | CRAN |
Perform Bayesian iterated screening in Gaussian regression models

```r
bits(X, y, lam = 1, w = 0.5, pp = FALSE, max.var = nrow(X), verbose = TRUE)
```
| Argument | Description |
|---|---|
| `X` | An `n` x `p` covariate matrix. |
| `y` | The response vector of length `n`. |
| `lam` | The slab precision parameter. Default: `1`. |
| `w` | The prior inclusion probability of each variable. Default: `0.5`. |
| `pp` | Boolean: if `TRUE`, the posterior probabilities of the screened models are computed and returned. Default: `FALSE`. |
| `max.var` | The maximum number of variables to be included. Default: `nrow(X)`. |
| `verbose` | If `TRUE`, progress is printed. Default: `TRUE`. |
A list with components:

| Component | Description |
|---|---|
| `model.pp` | An integer vector of the screened model. |
| `postprobs` | The sequence of posterior probabilities until the last included variable. |
| `lam` | The value of `lam`, the slab precision parameter. |
| `w` | The value of `w`, the prior inclusion probability. |
Wang, R., Dutta, S., Roy, V. (2021) Bayesian iterative screening in ultra-high dimensional settings. https://arxiv.org/abs/2107.10175
```r
n <- 50; p <- 100
TrueBeta <- c(rep(5, 3), rep(0, p - 3))
rho <- 0.6
x1 <- matrix(rnorm(n * p), n, p)
X <- sqrt(1 - rho) * x1 + sqrt(rho) * rnorm(n)
y <- 0.5 + X %*% TrueBeta + rnorm(n)
res <- bits(X, y, pp = TRUE)
res$model.pp  # the vector of the screened model
res$postprobs # the log (unnormalized) posterior probabilities corresponding to model.pp
```
This function computes the marginal inclusion probabilities of all variables from a fitted "sven" object.
```r
mip.sven(object, threshold = 0)
```
| Argument | Description |
|---|---|
| `object` | A fitted "sven" object. |
| `threshold` | Marginal inclusion probabilities above this threshold are stored. Default: 0. |
The object returned is a data frame if `sven` was run with a single matrix, or a list of two data frames if `sven` was run with a list of two matrices. The first column contains the variable names (or numbers, if column names were absent). Only the nonzero marginal inclusion probabilities are stored.
Somak Dutta
Maintainer:
Somak Dutta <[email protected]>
```r
n <- 50; p <- 100; nonzero <- 3
trueidx <- 1:3
truebeta <- c(4, 5, 6)
X <- matrix(rnorm(n * p), n, p)  # n x p covariate matrix
y <- 0.5 + X[, trueidx] %*% truebeta + rnorm(n)
res <- sven(X = X, y = y)
res$model.map  # the MAP model
mip.sven(res)
Z <- matrix(rnorm(n * p), n, p)  # another covariate matrix
y2 <- 0.5 + X[, trueidx] %*% truebeta + Z[, 1:2] %*% c(-2, -2) + rnorm(n)
res2 <- sven(X = list(X, Z), y = y2)
mip.sven(res2)  # two data frames, one for X and another for Z
```
This function makes point predictions and computes prediction intervals from a fitted "sven" object.
```r
## S3 method for class 'sven'
predict(
  object,
  newdata,
  model = c("WAM", "MAP"),
  interval = c("none", "MC", "Z"),
  return.draws = FALSE,
  Nsim = 10000,
  level = 0.95,
  alpha = 1 - level,
  ...
)
```
| Argument | Description |
|---|---|
| `object` | A fitted "sven" object. |
| `newdata` | Matrix of new values for `X` at which predictions are to be made. |
| `model` | The model to be used to make predictions. Model `"MAP"` gives the predictions calculated using the MAP model; model `"WAM"` gives the predictions calculated using the WAM. Default: `"WAM"`. |
| `interval` | Type of interval calculation: `"none"` (no interval), `"MC"` (Monte Carlo prediction interval), or `"Z"` (Z-prediction interval). Default: `"none"`. |
| `return.draws` | Only required if `interval = "MC"`: if `TRUE`, the Monte Carlo draws are returned. Default: `FALSE`. |
| `Nsim` | Only required if `interval = "MC"`: the number of Monte Carlo draws. Default: 10000. |
| `level` | Confidence level of the interval. Default: 0.95. |
| `alpha` | Type one error rate. Default: `1 - level`. |
| `...` | Further arguments passed to or from other methods. |
The object returned depends on the `interval` argument. If `interval = "none"`, the object is a vector of the point predictions; otherwise, the object is a matrix with the point predictions in the first column and the lower and upper bounds of the prediction intervals in the second and third columns, respectively.

If `return.draws` is `TRUE`, a list with the following components is returned:

| Component | Description |
|---|---|
| `prediction` | Vector or matrix as above. |
| `mc.draws` | A matrix of the Monte Carlo draws used to compute the prediction intervals. |
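As a generic illustration (not the package's internal code) of how interval endpoints can be obtained from a matrix of Monte Carlo draws such as `mc.draws`, empirical quantiles can be taken column by column; the `draws` matrix below is simulated stand-in data:

```r
# Generic sketch: turning Monte Carlo draws into a 95% prediction interval
# via empirical quantiles, one column per new observation.
set.seed(1)
Nsim <- 10000; m <- 5                                 # draws per new observation
draws <- matrix(rnorm(Nsim * m, mean = 2), Nsim, m)   # simulated stand-in for mc.draws
point <- colMeans(draws)                              # point prediction per observation
bounds <- apply(draws, 2, quantile, probs = c(0.025, 0.975))
pred <- cbind(fit = point, lwr = bounds[1, ], upr = bounds[2, ])
```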
Dongjin Li and Somak Dutta
Maintainer:
Dongjin Li <[email protected]>
Li, D., Dutta, S., Roy, V. (2020) Model Based Screening Embedded Bayesian Variable Selection for Ultra-high Dimensional Settings. http://arxiv.org/abs/2006.07561
```r
n <- 80; p <- 100; nonzero <- 5
trueidx <- 1:5
nonzero.value <- c(0.50, 0.75, 1.00, 1.25, 1.50)
TrueBeta <- numeric(p)
TrueBeta[trueidx] <- nonzero.value
X <- matrix(rnorm(n * p), n, p)
y <- 0.5 + X %*% TrueBeta + rnorm(n)
res <- sven(X = X, y = y)
newx <- matrix(rnorm(20 * p), 20, p)
# predicted values at a new data matrix using the MAP model
yhat <- predict(object = res, newdata = newx, model = "MAP", interval = "none")
# 95% Monte Carlo prediction interval using the WAM
MC.interval <- predict(object = res, model = "WAM", newdata = newx, interval = "MC", level = 0.95)
# 95% Z-prediction interval using the MAP model
Z.interval <- predict(object = res, model = "MAP", newdata = newx, interval = "Z", level = 0.95)
```
SVEN is an approach to selecting variables with embedded screening using a Bayesian hierarchical model. It is also a variable selection method in the spirit of the stochastic shotgun search algorithm. However, by embedding a unique model-based screening and using fast Cholesky updates, SVEN produces a highly scalable algorithm to explore gigantic model spaces and rapidly identify the regions of high posterior probabilities. It outputs the log (unnormalized) posterior probabilities of a set of best (highest probability) models. For more details, see Li et al. (2023, https://doi.org/10.1080/10618600.2022.2074428).
```r
sven(
  X,
  y,
  w = NULL,
  lam = NULL,
  Ntemp = 10,
  Tmax = NULL,
  Miter = 50,
  wam.threshold = 0.5,
  log.eps = -16,
  L = 20,
  verbose = FALSE
)
```
| Argument | Description |
|---|---|
| `X` | The `n` x `p` covariate matrix. |
| `y` | The response vector of length `n`. |
| `w` | The prior inclusion probability of each variable. Default: `NULL`, whence it is set as `sqrt(nrow(X))/ncol(X)`. |
| `lam` | The slab precision parameter. Default: `NULL`, whence it is set as `nrow(X)/ncol(X)^2`. |
| `Ntemp` | The number of temperatures. Default: 10. |
| `Tmax` | The maximum temperature. Default: `NULL`, whence it is set internally (see Details). |
| `Miter` | The number of iterations per temperature. Default: 50. |
| `wam.threshold` | The threshold probability to select the covariates for WAM. A covariate will be included in WAM if its corresponding marginal inclusion probability is greater than the threshold. Default: 0.5. |
| `log.eps` | The tolerance used to choose the number of top models. See Details. Default: -16. |
| `L` | The minimum number of neighboring models screened. Default: 20. |
| `verbose` | If `TRUE`, the function prints progress. Default: `FALSE`. |
SVEN is developed based on a hierarchical Gaussian linear model with priors placed on the regression coefficients as well as on the model space as follows:

$$y \mid X, \beta_0, \beta, \gamma, \sigma^2, w, \lambda \sim N(\beta_0 1 + X_\gamma \beta_\gamma,\, \sigma^2 I_n)$$
$$\beta_i \mid \beta_0, \gamma, \sigma^2, w, \lambda \overset{\text{ind}}{\sim} (1 - \gamma_i)\,\delta_0 + \gamma_i\, N(0, \sigma^2/\lambda), \quad i = 1, \ldots, p$$
$$(\beta_0, \sigma^2) \mid \gamma, w, \lambda \sim p(\beta_0, \sigma^2) \propto 1/\sigma^2$$
$$\gamma_i \mid w, \lambda \overset{\text{iid}}{\sim} \mathrm{Bernoulli}(w)$$

where $X_\gamma$ is the $n \times |\gamma|$ submatrix of $X$ consisting of those columns of $X$ for which $\gamma_i = 1$ and, similarly, $\beta_\gamma$ is the $|\gamma|$ subvector of $\beta$ corresponding to $\gamma$. Degenerate spike priors on inactive variables and Gaussian slab priors on active covariates make the posterior probability (up to a normalizing constant) of a model available in explicit form (Li et al., 2020).
The variable selection starts from an empty model and updates the model according to the posterior probabilities of its neighboring models for some pre-specified number of iterations. In each iteration, the models with small probabilities are screened out in order to quickly identify the regions of high posterior probability. A temperature schedule is used to facilitate exploration of models separated by valleys in the posterior probability function, thus mitigating the posterior multimodality associated with variable selection models. The default maximum temperature is guided by the asymptotic posterior model selection consistency results in Li et al. (2020).
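As a toy illustration of the effect of a temperature $t$ (a generic sketch with hypothetical numbers, not the package's internal code): raising model probabilities to the power $1/t$ flattens the distribution over models, which is what lets the search cross low-probability valleys between modes. This can be computed stably from log posterior probabilities:

```r
# Toy sketch (not bravo's internals): tempering model probabilities.
log_post <- c(-3, -10, -30)  # hypothetical log (unnormalized) posterior probabilities
tempered_weights <- function(lp, t) {
  z <- lp / t
  w <- exp(z - max(z))       # subtract the max for numerical stability
  w / sum(w)
}
round(tempered_weights(log_post, t = 1), 4)   # concentrated on the best model
round(tempered_weights(log_post, t = 10), 4)  # much flatter: valleys become crossable
```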
SVEN provides the maximum a posteriori (MAP) model as well as the weighted average model (WAM). WAM is obtained in the following way: (1) keep the best (highest probability) $K$ distinct models $M_{(1)}, \ldots, M_{(K)}$ with $P(M_{(1)} \mid y) \ge \cdots \ge P(M_{(K)} \mid y)$, where $K$ is chosen so that $\log\{P(M_{(K)} \mid y)/P(M_{(1)} \mid y)\} >$ `log.eps`; (2) assign the weights $w_k = P(M_{(k)} \mid y) \big/ \sum_{l=1}^{K} P(M_{(l)} \mid y)$ to the model $M_{(k)}$; (3) define the approximate marginal inclusion probabilities for the $j$th variable as $\hat{\pi}_j = \sum_{k=1}^{K} w_k I(j \in M_{(k)})$. Then, the WAM is defined as the model containing the variables $j$ with $\hat{\pi}_j >$ `wam.threshold`. SVEN also provides all the top $K$ models, which are stored in a $p \times K$ sparse matrix, along with their corresponding log (unnormalized) posterior probabilities.
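The weight and marginal inclusion probability computations in steps (1)-(3) can be sketched in a few lines of R (a generic illustration with hypothetical top models, not the package's internal code):

```r
# Generic sketch of WAM steps (1)-(3) with hypothetical top models.
log_post <- c(-1, -2, -5)                     # log (unnormalized) posteriors, sorted
models <- list(c(1, 2, 3), c(1, 2), c(2, 7))  # variable indices in each top model
keep <- (log_post - log_post[1]) > -16        # step (1): truncate at log.eps = -16
w <- exp(log_post - max(log_post))            # step (2): normalized weights,
w <- w[keep] / sum(w[keep])                   #   computed stably in log space
p <- 10
mip <- sapply(seq_len(p), function(j)         # step (3): approximate marginal
  sum(w[sapply(models[keep], function(m) j %in% m)]))  # inclusion probabilities
wam <- which(mip > 0.5)                       # WAM: variables with mip > wam.threshold
```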
When `X` is a list with two matrices, say, `W` and `Z`, the above method is extended to `ncol(W) + ncol(Z)` dimensional regression. However, the hyperparameters `lam` and `w` are chosen separately for the two matrices, the default values being `nrow(W)/ncol(W)^2` and `nrow(Z)/ncol(Z)^2` for `lam`, and `sqrt(nrow(W))/ncol(W)` and `sqrt(nrow(Z))/ncol(Z)` for `w`.

The marginal inclusion probabilities can be extracted by using the function `mip.sven`.
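The default hyperparameter formulas quoted above can be evaluated directly; a small sketch with arbitrary dimensions:

```r
# Evaluating the default hyperparameters stated above for two matrices.
n <- 50
W <- matrix(rnorm(n * 100), n, 100)
Z <- matrix(rnorm(n * 200), n, 200)
lam_W <- nrow(W) / ncol(W)^2    # 50/100^2 = 0.005
lam_Z <- nrow(Z) / ncol(Z)^2    # 50/200^2 = 0.00125
w_W <- sqrt(nrow(W)) / ncol(W)  # sqrt(50)/100
w_Z <- sqrt(nrow(Z)) / ncol(Z)  # sqrt(50)/200
```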
A list with components:

| Component | Description |
|---|---|
| `model.map` | A vector of indices corresponding to the selected variables in the MAP model. |
| `model.wam` | A vector of indices corresponding to the selected variables in the WAM. |
| `model.top` | A sparse matrix storing the top models. |
| `beta.map` | The ridge estimator of regression coefficients in the MAP model. |
| `beta.wam` | The ridge estimator of regression coefficients in the WAM. |
| `mip.map` | The marginal inclusion probabilities of the variables in the MAP model. |
| `mip.wam` | The marginal inclusion probabilities of the variables in the WAM. |
| `pprob.map` | The log (unnormalized) posterior probability corresponding to the MAP model. |
| `pprob.top` | A vector of the log (unnormalized) posterior probabilities corresponding to the top models. |
| `stats` | Additional statistics. |
Dongjin Li, Debarshi Chakraborty, and Somak Dutta
Maintainer:
Dongjin Li <[email protected]>
Li, D., Dutta, S., and Roy, V. (2023). Model based screening embedded Bayesian variable selection for ultra-high dimensional settings. Journal of Computational and Graphical Statistics, 32(1), 61-73.
[mip.sven()] for marginal inclusion probabilities, [predict.sven()] (via [predict()]) for prediction.
```r
n <- 50; p <- 100; nonzero <- 3
trueidx <- 1:3
truebeta <- c(4, 5, 6)
X <- matrix(rnorm(n * p), n, p)  # n x p covariate matrix
y <- 0.5 + X[, trueidx] %*% truebeta + rnorm(n)
res <- sven(X = X, y = y)
res$model.map  # the MAP model
Z <- matrix(rnorm(n * p), n, p)  # another covariate matrix
y2 <- 0.5 + X[, trueidx] %*% truebeta + Z[, 1:2] %*% c(-2, -2) + rnorm(n)
res2 <- sven(X = list(X, Z), y = y2)
```