Title: Convolution-Type Smoothed Quantile Regression
Description: Estimation and inference for conditional linear quantile regression models using a convolution smoothed approach. In the low-dimensional setting, efficient gradient-based methods are employed for fitting both a single model and a regression process over a quantile range. Normal-based and (multiplier) bootstrap confidence intervals for all slope coefficients are constructed. In high dimensions, the conquer method is complemented with flexible types of penalties (Lasso, elastic-net, group lasso, sparse group lasso, scad and mcp) to deal with complex low-dimensional structures.
Authors: Xuming He [aut], Xiaoou Pan [aut, cre], Kean Ming Tan [aut], Wen-Xin Zhou [aut]
Maintainer: Xiaoou Pan <[email protected]>
License: GPL-3
Version: 1.3.3
Built: 2024-11-11 07:18:37 UTC
Source: CRAN
Estimation and inference for conditional linear quantile regression models using a convolution smoothed approach.
In the low-dimensional setting, efficient gradient-based methods are employed for fitting both a single model and a regression process over a quantile range. Normal-based and (multiplier) bootstrap confidence intervals for all slope coefficients are constructed.
In high dimensions, the conquer method, complemented with ℓ1-penalization and iteratively reweighted ℓ1-penalization, is used to fit sparse models.
Commonly used penalties, such as the elastic-net, group lasso and sparse group lasso, are also incorporated to deal with more complex low-dimensional structures.
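The smoothing idea is easy to visualize: the nonsmooth quantile (check) loss is convolved with a kernel, producing a smooth convex surrogate with a well-behaved gradient. The sketch below is illustrative only, not the package's internal implementation (which uses Barzilai-Borwein gradient descent, per the references); it plots the check loss against its Gaussian-kernel smoothed counterpart, whose closed form u * (tau - pnorm(-u / h)) + h * dnorm(u / h) follows from the convolution.

## Check loss versus its Gaussian-smoothed counterpart (illustrative sketch)
tau = 0.5
h = 0.5  # bandwidth; larger h gives a smoother surrogate
check.loss = function(u, tau) u * (tau - (u < 0))
smooth.loss = function(u, tau, h) u * (tau - pnorm(-u / h)) + h * dnorm(u / h)
u = seq(-3, 3, by = 0.01)
plot(u, check.loss(u, tau), type = "l", ylab = "loss")
lines(u, smooth.loss(u, tau, h), lty = 2)
legend("top", legend = c("check loss", "smoothed loss"), lty = 1:2)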
Xuming He <[email protected]>, Xiaoou Pan <[email protected]>, Kean Ming Tan <[email protected]>, and Wen-Xin Zhou <[email protected]>
Barzilai, J. and Borwein, J. M. (1988). Two-point step size gradient methods. IMA J. Numer. Anal., 8, 141–148.
Belloni, A. and Chernozhukov, V. (2011). ℓ1-penalized quantile regression in high-dimensional sparse models. Ann. Statist., 39, 82-130.
Fan, J., Liu, H., Sun, Q. and Zhang, T. (2018). I-LAMM for sparse learning: Simultaneous control of algorithmic complexity and statistical error. Ann. Statist., 46, 814-841.
Fernandes, M., Guerre, E. and Horta, E. (2021). Smoothing quantile regressions. J. Bus. Econ. Statist., 39, 338-357.
He, X., Pan, X., Tan, K. M., and Zhou, W.-X. (2022+). Smoothed quantile regression for large-scale inference. J. Econometrics, in press.
Koenker, R. (2005). Quantile Regression. Cambridge University Press, Cambridge.
Koenker, R. and Bassett, G. (1978). Regression quantiles. Econometrica, 46, 33-50.
Tan, K. M., Wang, L. and Zhou, W.-X. (2022). High-dimensional quantile regression: convolution smoothing and concave regularization. J. R. Statist. Soc. Ser. B, 84, 205-233.
Estimation and inference for conditional linear quantile regression models using a convolution smoothed approach. Efficient gradient-based methods are employed for fitting both a single model and a regression process over a quantile range. Normal-based and (multiplier) bootstrap confidence intervals for all slope coefficients are constructed.
conquer(
  X,
  Y,
  tau = 0.5,
  kernel = c("Gaussian", "logistic", "uniform", "parabolic", "triangular"),
  h = 0,
  checkSing = FALSE,
  tol = 1e-04,
  iteMax = 5000,
  stepBounded = TRUE,
  stepMax = 100,
  ci = c("none", "bootstrap", "asymptotic", "both"),
  alpha = 0.05,
  B = 1000
)
X
An n by p design matrix. Each row is an observation.
Y
An n-dimensional response vector.
tau
(optional) The desired quantile level. Default is 0.5. Value must be between 0 and 1.
kernel
(optional) A character string specifying the choice of kernel function. Default is "Gaussian". Choices are "Gaussian", "logistic", "uniform", "parabolic" and "triangular".
h
(optional) Bandwidth/smoothing parameter. Default is max{((log(n) + p) / n)^0.4, 0.05}; the default is used whenever the input value is non-positive.
checkSing
(optional) A logical flag. Default is FALSE. If checkSing = TRUE, the design matrix is checked for singularity before fitting.
tol
(optional) Tolerance level of the gradient descent algorithm. The iteration will stop when the maximum magnitude of all the elements of the gradient is less than tol. Default is 1e-04.
iteMax
(optional) Maximum number of iterations. Default is 5000.
stepBounded
(optional) A logical flag. Default is TRUE. If stepBounded = TRUE, the gradient descent step size is bounded above by stepMax.
stepMax
(optional) Maximum bound for the gradient descent step size. Default is 100.
ci
(optional) A character string specifying methods to construct confidence intervals. Choices are "none" (default), "bootstrap", "asymptotic" and "both". If ci = "bootstrap", percentile, pivotal and normal-based intervals are constructed via multiplier bootstrap; if ci = "asymptotic", intervals are based on the estimated asymptotic covariance matrix; if ci = "both", all of the above are reported.
alpha
(optional) Miscoverage level for each confidence interval. Default is 0.05.
B
(optional) The size of bootstrap samples. Default is 1000.
An object containing the following items will be returned:
coeff
A (p + 1)-vector of estimated quantile regression coefficients, including the intercept.
ite
Number of iterations until convergence.
residual
An n-vector of fitted residuals.
bandwidth
Bandwidth value.
tau
Quantile level.
kernel
Kernel function.
n
Sample size.
p
Number of covariates.
perCI
The percentile confidence intervals for regression coefficients. Only available if ci = "bootstrap" or ci = "both".
pivCI
The pivotal confidence intervals for regression coefficients. Only available if ci = "bootstrap" or ci = "both".
normCI
The normal-based confidence intervals for regression coefficients. Only available if ci = "bootstrap" or ci = "both".
asyCI
The asymptotic confidence intervals for regression coefficients. Only available if ci = "asymptotic" or ci = "both".
Barzilai, J. and Borwein, J. M. (1988). Two-point step size gradient methods. IMA J. Numer. Anal., 8, 141–148.
Fernandes, M., Guerre, E. and Horta, E. (2021). Smoothing quantile regressions. J. Bus. Econ. Statist., 39, 338-357.
He, X., Pan, X., Tan, K. M., and Zhou, W.-X. (2022+). Smoothed quantile regression for large-scale inference. J. Econometrics, in press.
Koenker, R. and Bassett, G. (1978). Regression quantiles. Econometrica, 46, 33-50.
See conquer.process for the smoothed quantile regression process.
n = 500; p = 10
beta = rep(1, p)
X = matrix(rnorm(n * p), n, p)
Y = X %*% beta + rt(n, 2)
## Smoothed quantile regression with Gaussian kernel
fit.Gauss = conquer(X, Y, tau = 0.5, kernel = "Gaussian")
beta.hat.Gauss = fit.Gauss$coeff
## Smoothed quantile regression with uniform kernel
fit.unif = conquer(X, Y, tau = 0.5, kernel = "uniform")
beta.hat.unif = fit.unif$coeff
## Construct three types of confidence intervals via multiplier bootstrap
fit = conquer(X, Y, tau = 0.5, kernel = "Gaussian", ci = "bootstrap")
ci.per = fit$perCI
ci.piv = fit$pivCI
ci.norm = fit$normCI
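Asymptotic (normal-based) intervals are available in the same call by setting ci = "asymptotic", or alongside the bootstrap ones with ci = "both"; the snippet below reuses the simulated data from the example above.

## Asymptotic confidence intervals
fit.asy = conquer(X, Y, tau = 0.5, kernel = "Gaussian", ci = "asymptotic")
ci.asy = fit.asy$asyCI
## Bootstrap and asymptotic intervals in one call
fit.both = conquer(X, Y, tau = 0.5, kernel = "Gaussian", ci = "both")
cbind(fit.both$perCI, fit.both$asyCI)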
Fit sparse quantile regression models via regularized conquer methods with "lasso", "elastic-net", "group lasso", "sparse group lasso", "scad" and "mcp" penalties. The regularization parameter is selected via cross-validation.
conquer.cv.reg(
  X,
  Y,
  lambdaSeq = NULL,
  tau = 0.5,
  kernel = c("Gaussian", "logistic", "uniform", "parabolic", "triangular"),
  h = 0,
  penalty = c("lasso", "elastic", "group", "sparse-group", "scad", "mcp"),
  para.elastic = 0.5,
  group = NULL,
  weights = NULL,
  para.scad = 3.7,
  para.mcp = 3,
  kfolds = 5,
  numLambda = 50,
  epsilon = 0.001,
  iteMax = 500,
  phi0 = 0.01,
  gamma = 1.2,
  iteTight = 3
)
X
An n by p design matrix. Each row is an observation.
Y
An n-dimensional response vector.
lambdaSeq
(optional) A sequence of candidate regularization parameters. If unspecified, the sequence will be generated by a simulated pivotal quantity approach proposed in Belloni and Chernozhukov (2011).
tau
(optional) Quantile level (between 0 and 1). Default is 0.5.
kernel
(optional) A character string specifying the choice of kernel function. Default is "Gaussian". Choices are "Gaussian", "logistic", "uniform", "parabolic" and "triangular".
h
(optional) The bandwidth parameter for kernel smoothing. Default is max{0.5 * (log(p) / n)^0.25, 0.05}; the default is used whenever the input value is non-positive.
penalty
(optional) A character string specifying the penalty. Default is "lasso" (Tibshirani, 1996). The other options are "elastic" for elastic-net (Zou and Hastie, 2005), "group" for group lasso (Yuan and Lin, 2006), "sparse-group" for sparse group lasso (Simon et al., 2013), "scad" (Fan and Li, 2001) and "mcp" (Zhang, 2010).
para.elastic
(optional) The mixing parameter between 0 and 1 (usually denoted alpha) for the elastic-net penalty: a value of 1 corresponds to the lasso and a value of 0 to a ridge-type penalty. Default is 0.5. Only specify it if penalty = "elastic".
group
(optional) A p-dimensional vector of group indices (1, 2, ..., G) for the p covariates. Only specify it if penalty = "group" or penalty = "sparse-group".
weights
(optional) A vector specifying group weights for group lasso and sparse group lasso. The length must be equal to the number of groups. If not specified, the default weights are the square roots of the group sizes. For example, if group = c(1, 1, 2, 2, 3), the default weights are sqrt(2), sqrt(2) and 1. See the sketch after this table.
para.scad
(optional) The constant parameter for "scad". Default value is 3.7. Only specify it if penalty = "scad".
para.mcp
(optional) The constant parameter for "mcp". Default value is 3. Only specify it if penalty = "mcp".
kfolds
(optional) Number of folds for cross-validation. Default is 5.
numLambda
(optional) Number of lambda values for cross-validation. Default is 50. Only used when lambdaSeq is unspecified.
epsilon
(optional) A tolerance level for the stopping rule. The iteration will stop when the maximum magnitude of the change of coefficient updates is less than epsilon. Default is 0.001.
iteMax
(optional) Maximum number of iterations. Default is 500.
phi0
(optional) The initial quadratic coefficient parameter in the local adaptive majorize-minimize algorithm. Default is 0.01.
gamma
(optional) The adaptive search parameter (greater than 1) in the local adaptive majorize-minimize algorithm. Default is 1.2.
iteTight
(optional) Maximum number of tightening iterations in the iteratively reweighted ℓ1-penalized algorithm. Default is 3. Only used when penalty = "scad" or "mcp".
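To make the group and weights arguments concrete, here is a minimal sketch with a hypothetical grouping of 10 covariates; it reproduces the default weighting rule (square roots of the group sizes), so passing weights explicitly as below is optional.

## Hypothetical group structure: two groups of 3 covariates and one of 4
group = c(rep(1, 3), rep(2, 3), rep(3, 4))
## Default rule: weights are square roots of group sizes, i.e. sqrt(3), sqrt(3), 2
weights = sqrt(as.numeric(table(group)))
## These can then be supplied as, e.g.,
## conquer.cv.reg(X, Y, penalty = "group", group = group, weights = weights)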
An object containing the following items will be returned:
coeff.min
A vector of estimated coefficients including the intercept selected by minimizing the cross-validation errors.
coeff.1se
A vector of estimated coefficients including the intercept. The corresponding lambda, reported as lambda.1se, is the largest lambda such that the cross-validation error is within 1 standard error of the minimum.
lambdaSeq
The sequence of regularization parameter candidates for cross-validation.
lambda.min
Regularization parameter selected by minimizing the cross-validation errors. This is the lambda corresponding to coeff.min.
lambda.1se
The largest regularization parameter such that the cross-validation error is within 1 standard error of the minimum. This is the lambda corresponding to coeff.1se.
deviance
Cross-validation errors based on the quantile loss. The length is equal to the length of lambdaSeq.
deviance.se
Estimated standard errors of deviance. The length is equal to the length of lambdaSeq.
bandwidth
Bandwidth value.
tau
Quantile level.
kernel
Kernel function.
penalty
Penalty type.
n
Sample size.
p
Number of covariates.
Belloni, A. and Chernozhukov, V. (2011). ℓ1-penalized quantile regression in high-dimensional sparse models. Ann. Statist., 39, 82-130.
Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc., 96, 1348-1360.
Fan, J., Liu, H., Sun, Q. and Zhang, T. (2018). I-LAMM for sparse learning: Simultaneous control of algorithmic complexity and statistical error. Ann. Statist., 46, 814-841.
Koenker, R. and Bassett, G. (1978). Regression quantiles. Econometrica, 46, 33-50.
Simon, N., Friedman, J., Hastie, T. and Tibshirani, R. (2013). A sparse-group lasso. J. Comp. Graph. Statist., 22, 231-245.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. R. Statist. Soc. Ser. B, 58, 267–288.
Tan, K. M., Wang, L. and Zhou, W.-X. (2022). High-dimensional quantile regression: convolution smoothing and concave regularization. J. R. Statist. Soc. Ser. B, 84, 205-233.
Yuan, M. and Lin, Y. (2006). Model selection and estimation in regression with grouped variables. J. R. Statist. Soc. Ser. B, 68, 49-67.
Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty. Ann. Statist., 38, 894-942.
Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. J. R. Statist. Soc. Ser. B, 67, 301-320.
See conquer.reg for regularized quantile regression with a prescribed lambda.
n = 100; p = 200; s = 5
beta = c(rep(1.5, s), rep(0, p - s))
X = matrix(rnorm(n * p), n, p)
Y = X %*% beta + rt(n, 2)
## Cross-validated regularized conquer with lasso penalty at tau = 0.7
fit.lasso = conquer.cv.reg(X, Y, tau = 0.7, penalty = "lasso")
beta.lasso = fit.lasso$coeff.min
## Cross-validated regularized conquer with elastic-net penalty at tau = 0.7
fit.elastic = conquer.cv.reg(X, Y, tau = 0.7, penalty = "elastic", para.elastic = 0.7)
beta.elastic = fit.elastic$coeff.min
## Cross-validated regularized conquer with scad penalty at tau = 0.7
fit.scad = conquer.cv.reg(X, Y, tau = 0.7, penalty = "scad")
beta.scad = fit.scad$coeff.min
## Cross-validated regularized conquer with group lasso at tau = 0.7
beta = c(rep(1.3, 2), rep(1.5, 3), rep(0, p - s))
err = rt(n, 2)
Y = X %*% beta + err
group = c(rep(1, 2), rep(2, 3), rep(3, p - s))
fit.group = conquer.cv.reg(X, Y, tau = 0.7, penalty = "group", group = group)
beta.group = fit.group$coeff.min
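The fitted object also carries the full cross-validation path, so the minimizing and one-standard-error choices of the regularization parameter can be compared directly; continuing the lasso fit above:

## Inspect the cross-validation path of the lasso fit
fit.lasso$lambda.min  # lambda minimizing the cross-validation error
fit.lasso$lambda.1se  # largest lambda within 1 standard error of the minimum
beta.sparse = fit.lasso$coeff.1se  # typically sparser than coeff.min
## Cross-validation error (quantile loss) against the candidate lambdas
plot(fit.lasso$lambdaSeq, fit.lasso$deviance, type = "b", xlab = "lambda", ylab = "CV error")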
Fit a smoothed quantile regression process over a quantile range. The algorithm is essentially the same as conquer.
conquer.process(
  X,
  Y,
  tauSeq = seq(0.1, 0.9, by = 0.05),
  kernel = c("Gaussian", "logistic", "uniform", "parabolic", "triangular"),
  h = 0,
  checkSing = FALSE,
  tol = 1e-04,
  iteMax = 5000,
  stepBounded = TRUE,
  stepMax = 100
)
X
An n by p design matrix. Each row is an observation.
Y
An n-dimensional response vector.
tauSeq
(optional) A sequence of quantile values (between 0 and 1). Default is seq(0.1, 0.9, by = 0.05).
kernel
(optional) A character string specifying the choice of kernel function. Default is "Gaussian". Choices are "Gaussian", "logistic", "uniform", "parabolic" and "triangular".
h
(optional) The bandwidth/smoothing parameter. Default is max{((log(n) + p) / n)^0.4, 0.05}; the default is used whenever the input value is non-positive.
checkSing
(optional) A logical flag. Default is FALSE. If checkSing = TRUE, the design matrix is checked for singularity before fitting.
tol
(optional) Tolerance level of the gradient descent algorithm. The iteration will stop when the maximum magnitude of all the elements of the gradient is less than tol. Default is 1e-04.
iteMax
(optional) Maximum number of iterations. Default is 5000.
stepBounded
(optional) A logical flag. Default is TRUE. If stepBounded = TRUE, the gradient descent step size is bounded above by stepMax.
stepMax
(optional) Maximum bound for the gradient descent step size. Default is 100.
An object containing the following items will be returned:
coeff
A (p + 1) by m matrix of estimated quantile regression process coefficients, including the intercept, where m is the length of tauSeq.
bandwidth
Bandwidth value.
tauSeq
The sequence of quantile levels.
kernel
The choice of kernel function.
n
Sample size.
p
Number of covariates.
Barzilai, J. and Borwein, J. M. (1988). Two-point step size gradient methods. IMA J. Numer. Anal., 8, 141–148.
Fernandes, M., Guerre, E. and Horta, E. (2021). Smoothing quantile regressions. J. Bus. Econ. Statist., 39, 338-357.
He, X., Pan, X., Tan, K. M., and Zhou, W.-X. (2022+). Smoothed quantile regression for large-scale inference. J. Econometrics, in press.
Koenker, R. and Bassett, G. (1978). Regression quantiles. Econometrica, 46, 33-50.
See conquer for smoothed quantile regression at a single quantile level.
n = 500; p = 10
beta = rep(1, p)
X = matrix(rnorm(n * p), n, p)
Y = X %*% beta + rt(n, 2)
## Smoothed quantile regression process with Gaussian kernel
fit.Gauss = conquer.process(X, Y, tauSeq = seq(0.2, 0.8, by = 0.05), kernel = "Gaussian")
beta.hat.Gauss = fit.Gauss$coeff
## Smoothed quantile regression process with uniform kernel
fit.unif = conquer.process(X, Y, tauSeq = seq(0.2, 0.8, by = 0.05), kernel = "uniform")
beta.hat.unif = fit.unif$coeff
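Because coeff is a (p + 1) by m matrix with one column per quantile level, individual coefficient paths across the quantile range are easy to extract; continuing the example above:

## Rows of coeff are coefficients (row 1 is the intercept); columns follow tauSeq
tauSeq = seq(0.2, 0.8, by = 0.05)
dim(fit.Gauss$coeff)  # (p + 1) x length(tauSeq)
## Path of the first slope coefficient across quantile levels
plot(tauSeq, fit.Gauss$coeff[2, ], type = "b", xlab = "quantile level", ylab = "slope estimate")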
Fit sparse quantile regression models in high dimensions via regularized conquer methods with "lasso", "elastic-net", "group lasso", "sparse group lasso", "scad" and "mcp" penalties.
For "scad" and "mcp", the iteratively reweighted -penalized algorithm is complemented with a local adpative majorize-minimize algorithm.
conquer.reg(
  X,
  Y,
  lambda = 0.2,
  tau = 0.5,
  kernel = c("Gaussian", "logistic", "uniform", "parabolic", "triangular"),
  h = 0,
  penalty = c("lasso", "elastic", "group", "sparse-group", "scad", "mcp"),
  para.elastic = 0.5,
  group = NULL,
  weights = NULL,
  para.scad = 3.7,
  para.mcp = 3,
  epsilon = 0.001,
  iteMax = 500,
  phi0 = 0.01,
  gamma = 1.2,
  iteTight = 3
)
X
An n by p design matrix. Each row is an observation.
Y
An n-dimensional response vector.
lambda
(optional) Regularization parameter. Can be a scalar or a sequence. If the input is a sequence, the function will sort it in ascending order and run the regression accordingly. Default is 0.2.
tau
(optional) Quantile level (between 0 and 1). Default is 0.5.
kernel
(optional) A character string specifying the choice of kernel function. Default is "Gaussian". Choices are "Gaussian", "logistic", "uniform", "parabolic" and "triangular".
h
(optional) Bandwidth/smoothing parameter. Default is max{0.5 * (log(p) / n)^0.25, 0.05}; the default is used whenever the input value is non-positive.
penalty
(optional) A character string specifying the penalty. Default is "lasso" (Tibshirani, 1996). The other options are "elastic" for elastic-net (Zou and Hastie, 2005), "group" for group lasso (Yuan and Lin, 2006), "sparse-group" for sparse group lasso (Simon et al., 2013), "scad" (Fan and Li, 2001) and "mcp" (Zhang, 2010).
para.elastic
(optional) The mixing parameter between 0 and 1 (usually denoted alpha) for the elastic-net penalty: a value of 1 corresponds to the lasso and a value of 0 to a ridge-type penalty. Default is 0.5. Only specify it if penalty = "elastic".
group
(optional) A p-dimensional vector of group indices (1, 2, ..., G) for the p covariates. Only specify it if penalty = "group" or penalty = "sparse-group".
weights
(optional) A vector specifying group weights for group lasso and sparse group lasso. The length must be equal to the number of groups. If not specified, the default weights are the square roots of the group sizes. For example, if group = c(1, 1, 2, 2, 3), the default weights are sqrt(2), sqrt(2) and 1.
para.scad
(optional) The constant parameter for "scad". Default value is 3.7. Only specify it if penalty = "scad".
para.mcp
(optional) The constant parameter for "mcp". Default value is 3. Only specify it if penalty = "mcp".
epsilon
(optional) A tolerance level for the stopping rule. The iteration will stop when the maximum magnitude of the change of coefficient updates is less than epsilon. Default is 0.001.
iteMax
(optional) Maximum number of iterations. Default is 500.
phi0
(optional) The initial quadratic coefficient parameter in the local adaptive majorize-minimize algorithm. Default is 0.01.
gamma
(optional) The adaptive search parameter (greater than 1) in the local adaptive majorize-minimize algorithm. Default is 1.2.
iteTight
(optional) Maximum number of tightening iterations in the iteratively reweighted ℓ1-penalized algorithm. Default is 3. Only used when penalty = "scad" or "mcp".
An object containing the following items will be returned:
coeff
If the input lambda is a scalar, coeff is a vector of estimated coefficients, including the intercept. If the input lambda is a sequence, coeff is a (p + 1) by L matrix, where L is the length of the lambda sequence.
bandwidth
Bandwidth value.
tau
Quantile level.
kernel
Kernel function.
penalty
Penalty type.
lambda
Regularization parameter(s).
n
Sample size.
p
Number of covariates.
Belloni, A. and Chernozhukov, V. (2011). ℓ1-penalized quantile regression in high-dimensional sparse models. Ann. Statist., 39, 82-130.
Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc., 96, 1348-1360.
Fan, J., Liu, H., Sun, Q. and Zhang, T. (2018). I-LAMM for sparse learning: Simultaneous control of algorithmic complexity and statistical error. Ann. Statist., 46, 814-841.
Koenker, R. and Bassett, G. (1978). Regression quantiles. Econometrica, 46, 33-50.
Simon, N., Friedman, J., Hastie, T. and Tibshirani, R. (2013). A sparse-group lasso. J. Comp. Graph. Statist., 22, 231-245.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. R. Statist. Soc. Ser. B, 58, 267–288.
Tan, K. M., Wang, L. and Zhou, W.-X. (2022). High-dimensional quantile regression: convolution smoothing and concave regularization. J. R. Statist. Soc. Ser. B, 84, 205-233.
Yuan, M. and Lin, Y. (2006). Model selection and estimation in regression with grouped variables. J. R. Statist. Soc. Ser. B, 68, 49-67.
Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty. Ann. Statist., 38, 894-942.
Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. J. R. Statist. Soc. Ser. B, 67, 301-320.
See conquer.cv.reg for regularized quantile regression with cross-validation.
n = 200; p = 500; s = 10
beta = c(rep(1.5, s), rep(0, p - s))
X = matrix(rnorm(n * p), n, p)
Y = X %*% beta + rt(n, 2)
## Regularized conquer with lasso penalty at tau = 0.7
fit.lasso = conquer.reg(X, Y, lambda = 0.05, tau = 0.7, penalty = "lasso")
beta.lasso = fit.lasso$coeff
## Regularized conquer with elastic-net penalty at tau = 0.7
fit.elastic = conquer.reg(X, Y, lambda = 0.1, tau = 0.7, penalty = "elastic", para.elastic = 0.7)
beta.elastic = fit.elastic$coeff
## Regularized conquer with scad penalty at tau = 0.7
fit.scad = conquer.reg(X, Y, lambda = 0.13, tau = 0.7, penalty = "scad")
beta.scad = fit.scad$coeff
## Regularized conquer with group lasso at tau = 0.7
beta = c(rep(1.3, 5), rep(1.5, 5), rep(0, p - s))
err = rt(n, 2)
Y = X %*% beta + err
group = c(rep(1, 5), rep(2, 5), rep(3, p - s))
fit.group = conquer.reg(X, Y, lambda = 0.05, tau = 0.7, penalty = "group", group = group)
beta.group = fit.group$coeff
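When lambda is a sequence, the whole solution path is returned at once and coeff becomes a (p + 1) by L matrix with one column per (ascending) lambda; continuing the example above:

## Fit a solution path over a sequence of lambda values
lambdaSeq = seq(0.02, 0.2, by = 0.02)
fit.path = conquer.reg(X, Y, lambda = lambdaSeq, tau = 0.7, penalty = "lasso")
dim(fit.path$coeff)  # (p + 1) x length(lambdaSeq)
## Number of nonzero slope estimates at each lambda
colSums(fit.path$coeff[-1, ] != 0)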