Title: | Adaptive Huber Estimation and Regression |
---|---|
Description: | Huber-type estimation for mean, covariance and (regularized) regression. For all the methods, the robustification parameter tau is chosen by a tuning-free principle. |
Authors: | Xiaoou Pan [aut, cre], Wen-Xin Zhou [aut] |
Maintainer: | Xiaoou Pan <[email protected]> |
License: | GPL-3 |
Version: | 1.1 |
Built: | 2024-11-09 06:11:39 UTC |
Source: | CRAN |
Huber-type robust estimation for mean, covariance and (penalized) regression.
Xiaoou Pan <[email protected]> and Wen-Xin Zhou <[email protected]>
Ke, Y., Minsker, S., Ren, Z., Sun, Q. and Zhou, W.-X. (2019). User-friendly covariance estimation for heavy-tailed distributions. Statist. Sci., 34, 454-471.
Pan, X., Sun, Q. and Zhou, W.-X. (2021). Iteratively reweighted l1-penalized robust regression. Electron. J. Stat., 15, 3287-3348.
Sun, Q., Zhou, W.-X. and Fan, J. (2020). Adaptive Huber regression. J. Amer. Statist. Assoc., 115, 254-265.
Wang, L., Zheng, C., Zhou, W. and Zhou, W.-X. (2021). A new principle for tuning-free Huber regression. Stat. Sinica, 31, 2153-2177.
Adaptive Huber covariance estimator from a data sample, with robustification parameter determined by a tuning-free principle.
adaHuber.cov(X, epsilon = 1e-04, iteMax = 500)
X |
An n by p data matrix, where each row is an observation. |
epsilon |
(optional) The tolerance level in the iterative estimation procedure. The problem is converted to mean estimation, and the stopping rule is the same as in adaHuber.mean. Default is 1e-04. |
iteMax |
(optional) Maximum number of iterations. Default is 500. |
The observed data X is an n by p matrix. The distribution of each entry can be asymmetric and/or heavy-tailed. The function outputs a robust estimator of the covariance matrix of X. Both the low-dimensional case (p < n) and the high-dimensional case (p > n) are allowed.
A list including the following terms will be returned:
means
The Huber estimators of the column means. A p-dimensional vector.
cov
The Huber estimator of the covariance matrix. A p by p matrix.
Huber, P. J. (1964). Robust estimation of a location parameter. Ann. Math. Statist., 35, 73–101.
Ke, Y., Minsker, S., Ren, Z., Sun, Q. and Zhou, W.-X. (2019). User-friendly covariance estimation for heavy-tailed distributions. Statist. Sci., 34, 454-471.
See adaHuber.mean for adaptive Huber mean estimation.
n = 100
p = 5
X = matrix(rt(n * p, 3), n, p)
fit.cov = adaHuber.cov(X)
fit.cov$means
fit.cov$cov
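As an illustrative sketch (assuming the adaHuber package is loaded and reusing the objects from the example above), the robust estimate can be compared with the ordinary sample covariance, which is sensitive to heavy tails:

# Largest entrywise discrepancy between the Huber covariance
# estimate and the sample covariance on heavy-tailed t(3) data;
# the gap is typically largest when extreme observations occur.
max(abs(fit.cov$cov - cov(X)))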
Sparse regularized adaptive Huber regression with a lasso penalty. The function implements a localized majorize-minimize algorithm with a gradient-based method. The regularization parameter lambda is selected by cross-validation, and the robustification parameter tau is determined by a tuning-free principle.
adaHuber.cv.lasso( X, Y, lambdaSeq = NULL, kfolds = 5, numLambda = 50, phi0 = 0.01, gamma = 1.2, epsilon = 0.001, iteMax = 500 )
X |
An n by p design matrix, where each row is an observation. |
Y |
An n-dimensional response vector. |
lambdaSeq |
(optional) A sequence of candidate regularization parameters. If unspecified, a reasonable sequence will be generated. |
kfolds |
(optional) Number of folds for cross-validation. Default is 5. |
numLambda |
(optional) Number of candidate regularization parameters to generate when lambdaSeq is unspecified. Default is 50. |
phi0 |
(optional) The initial quadratic coefficient parameter in the local adaptive majorize-minimize algorithm. Default is 0.01. |
gamma |
(optional) The adaptive search parameter (greater than 1) in the local adaptive majorize-minimize algorithm. Default is 1.2. |
epsilon |
(optional) A tolerance level for the stopping rule. The iteration will stop when the maximum magnitude of the change of coefficient updates is less than epsilon. Default is 0.001. |
iteMax |
(optional) Maximum number of iterations. Default is 500. |
An object containing the following items will be returned:
coef
A vector of estimated sparse regression coefficients, including the intercept.
lambdaSeq
The sequence of candidate regularization parameters.
lambda
Regularization parameter selected by cross-validation.
tau
The robustification parameter calibrated by the tuning-free principle.
iteration
Number of iterations until convergence.
phi
The quadratic coefficient parameter in the local adaptive majorize-minimize algorithm.
Pan, X., Sun, Q. and Zhou, W.-X. (2021). Iteratively reweighted l1-penalized robust regression. Electron. J. Stat., 15, 3287-3348.
Sun, Q., Zhou, W.-X. and Fan, J. (2020). Adaptive Huber regression. J. Amer. Statist. Assoc., 115, 254-265.
Wang, L., Zheng, C., Zhou, W. and Zhou, W.-X. (2021). A new principle for tuning-free Huber regression. Stat. Sinica, 31, 2153-2177.
See adaHuber.lasso for regularized adaptive Huber regression with a specified regularization parameter lambda.
n = 100; p = 200; s = 5
beta = c(rep(1.5, s + 1), rep(0, p - s))
X = matrix(rnorm(n * p), n, p)
err = rt(n, 2)
Y = cbind(rep(1, n), X) %*% beta + err
fit.lasso = adaHuber.cv.lasso(X, Y)
beta.lasso = fit.lasso$coef
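The fitted object also records the parameters chosen by the procedure; as a sketch (reusing the objects from the example above), one can inspect the data-driven choices:

fit.lasso$lambda      # regularization parameter selected by cross-validation
fit.lasso$tau         # robustification parameter from the tuning-free principle
sum(beta.lasso != 0)  # number of nonzero coefficients, including the intercept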
Sparse regularized Huber regression in high dimensions with a lasso penalty. The function implements a localized majorize-minimize algorithm with a gradient-based method.
adaHuber.lasso( X, Y, lambda = 0.5, tau = 0, phi0 = 0.01, gamma = 1.2, epsilon = 0.001, iteMax = 500 )
X |
An n by p design matrix, where each row is an observation. |
Y |
An n-dimensional response vector. |
lambda |
(optional) Regularization parameter. Must be positive. Default is 0.5. |
tau |
(optional) The robustness parameter. If not specified or the input value is non-positive, a tuning-free principle is applied. Default is 0 (hence, tuning-free). |
phi0 |
(optional) The initial quadratic coefficient parameter in the local adaptive majorize-minimize algorithm. Default is 0.01. |
gamma |
(optional) The adaptive search parameter (greater than 1) in the local adaptive majorize-minimize algorithm. Default is 1.2. |
epsilon |
(optional) Tolerance level of the gradient-based algorithm. The iteration will stop when the maximum magnitude of all the elements of the gradient is less than epsilon. Default is 0.001. |
iteMax |
(optional) Maximum number of iterations. Default is 500. |
An object containing the following items will be returned:
coef
A vector of estimated sparse regression coefficients, including the intercept.
tau
The robustification parameter calibrated by the tuning-free principle (if the input is non-positive).
iteration
Number of iterations until convergence.
phi
The quadratic coefficient parameter in the local adaptive majorize-minimize algorithm.
Pan, X., Sun, Q. and Zhou, W.-X. (2021). Iteratively reweighted l1-penalized robust regression. Electron. J. Stat., 15, 3287-3348.
Sun, Q., Zhou, W.-X. and Fan, J. (2020). Adaptive Huber regression. J. Amer. Statist. Assoc., 115, 254-265.
Wang, L., Zheng, C., Zhou, W. and Zhou, W.-X. (2021). A new principle for tuning-free Huber regression. Stat. Sinica, 31, 2153-2177.
See adaHuber.cv.lasso for regularized adaptive Huber regression with cross-validation.
n = 200; p = 500; s = 10
beta = c(rep(1.5, s + 1), rep(0, p - s))
X = matrix(rnorm(n * p), n, p)
err = rt(n, 2)
Y = cbind(rep(1, n), X) %*% beta + err
fit.lasso = adaHuber.lasso(X, Y, lambda = 0.5)
beta.lasso = fit.lasso$coef
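Since the true coefficient vector in this example is sparse, a minimal sketch of support recovery (reusing the objects from the example above) is:

support.true = which(beta != 0)        # indices of the true nonzero coefficients
support.est = which(beta.lasso != 0)   # indices selected by the fitted model
intersect(support.true, support.est)   # correctly identified coefficients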
Adaptive Huber mean estimator from a data sample, with robustification parameter determined by a tuning-free principle.
adaHuber.mean(X, epsilon = 1e-04, iteMax = 500)
X |
An n-dimensional data vector. |
epsilon |
(optional) The tolerance level in the iterative estimation procedure; the iteration will stop when the difference between consecutive estimates is less than epsilon. Default is 1e-04. |
iteMax |
(optional) Maximum number of iterations. Default is 500. |
A list including the following terms will be returned:
mu
The Huber mean estimator.
tau
The robustness parameter determined by the tuning-free principle.
iteration
The number of iterations in the estimation procedure.
Huber, P. J. (1964). Robust estimation of a location parameter. Ann. Math. Statist., 35, 73–101.
Wang, L., Zheng, C., Zhou, W. and Zhou, W.-X. (2021). A new principle for tuning-free Huber regression. Stat. Sinica, 31, 2153-2177.
n = 1000
mu = 2
X = rt(n, 2) + mu
fit.mean = adaHuber.mean(X)
fit.mean$mu
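To see the effect of heavy tails, a minimal sketch (reusing the objects from the example above) compares the Huber estimate with the sample mean:

mean(X)       # sample mean, sensitive to heavy-tailed t(2) noise
fit.mean$mu   # Huber mean estimate
fit.mean$tau  # robustness parameter chosen by the tuning-free principle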
Adaptive Huber regression from a data sample, with robustification parameter determined by a tuning-free principle.
adaHuber.reg( X, Y, method = c("standard", "adaptive"), epsilon = 1e-04, iteMax = 500 )
X |
An n by p design matrix, where each row is an observation. |
Y |
An n-dimensional response vector. |
method |
(optional) A character string specifying the method used to calibrate the robustification parameter tau. Two choices are "standard" and "adaptive". Default is "standard". |
epsilon |
(optional) Tolerance level of the gradient descent algorithm. The iteration will stop when the maximum magnitude of all the elements of the gradient is less than epsilon. Default is 1e-04. |
iteMax |
(optional) Maximum number of iterations. Default is 500. |
An object containing the following items will be returned:
coef
A (p + 1)-vector of estimated regression coefficients, including the intercept.
tau
The robustification parameter calibrated by the tuning-free principle.
iteration
Number of iterations until convergence.
Huber, P. J. (1964). Robust estimation of a location parameter. Ann. Math. Statist., 35, 73–101.
Sun, Q., Zhou, W.-X. and Fan, J. (2020). Adaptive Huber regression. J. Amer. Statist. Assoc., 115, 254-265.
Wang, L., Zheng, C., Zhou, W. and Zhou, W.-X. (2021). A new principle for tuning-free Huber regression. Stat. Sinica, 31, 2153-2177.
n = 200
p = 10
beta = rep(1.5, p + 1)
X = matrix(rnorm(n * p), n, p)
err = rt(n, 2)
Y = cbind(1, X) %*% beta + err
fit.huber = adaHuber.reg(X, Y, method = "standard")
beta.huber = fit.huber$coef
fit.adahuber = adaHuber.reg(X, Y, method = "adaptive")
beta.adahuber = fit.adahuber$coef
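A minimal sketch comparing the two calibration methods by their estimation error (reusing the objects from the example above):

max(abs(beta.huber - beta))     # maximum coefficient error, standard calibration
max(abs(beta.adahuber - beta))  # maximum coefficient error, adaptive calibration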