Title: | Quantile, Composite Quantile Regression and Regularized Versions |
---|---|
Description: | Estimate quantile regression (QR) and composite quantile regression (CQR), with or without an adaptive lasso penalty, using interior point (IP), majorize and minimize (MM), coordinate descent (CD), and alternating direction method of multipliers (ADMM) algorithms. |
Authors: | Jueyu Gao & Linglong Kong |
Maintainer: | Jueyu Gao <[email protected]> |
License: | GPL (>= 2) |
Version: | 1.2.1 |
Built: | 2024-11-22 06:54:09 UTC |
Source: | CRAN |
Composite quantile regression (CQR) finds the estimated coefficients that minimize the asymmetric absolute (check) loss summed over several quantile levels. The problem is well suited to distributed convex optimization, and this implementation is based on the alternating direction method of multipliers (ADMM) algorithm.
cqr.admm(X,y,tau,rho,beta, maxit, toler)
X |
the design matrix |
y |
response variable |
tau |
vector of quantile levels |
rho |
augmented Lagrangian parameter |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
toler |
tolerance criterion for stopping the algorithm (default 1e-3) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
cqr.admm(x,y,tau) works properly only if the least-squares estimate is a good starting value.
S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein (2010). Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning, 3(1), 1–122.
Hui Zou and Ming Yuan (2008). Composite Quantile Regression and the Oracle Model Selection Theory. The Annals of Statistics, 36(3), 1108–1126.
set.seed(1)
n=100
p=2
a=rnorm(n*p, mean = 1, sd = 1)
x=matrix(a,n,p)
beta=rnorm(p,1,1)
beta=matrix(beta,p,1)
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
tau=1:5/6
# x is a 100*2 matrix, y is a 100*1 vector, beta is a 2*1 vector
cqr.admm(x,y,tau)
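A fitted model can be checked against the composite check (asymmetric absolute) loss that CQR minimizes. The sketch below reuses the simulated data above and is illustrative only; the helper cqr_loss is not part of the package, and it assumes the returned intercept can be recycled across quantile levels.
# Minimal sketch (not package code) of the composite check loss that CQR minimizes.
cqr_loss <- function(X, y, tau, beta, b) {
  b <- rep(b, length.out = length(tau))          # recycle a scalar intercept if needed
  loss <- 0
  for (k in seq_along(tau)) {
    r <- as.vector(y - X %*% beta - b[k])        # residuals at quantile level tau[k]
    loss <- loss + sum(r * (tau[k] - (r < 0)))   # check-function rho_tau(r)
  }
  loss
}
fit <- cqr.admm(x, y, tau)
cqr_loss(x, y, tau, fit$beta, fit$b)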
Composite quantile regression (CQR) finds the estimated coefficients that minimize the asymmetric absolute (check) loss summed over several quantile levels. The algorithm is based on greedy coordinate descent and Edgeworth's algorithm for ordinary regression.
cqr.cd(X,y,tau,beta,maxit,toler)
X |
the design matrix |
y |
response variable |
tau |
vector of quantile levels |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
toler |
tolerance criterion for stopping the algorithm (default 1e-3) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
cqr.cd(x,y,tau) works properly only if the least-squares estimate is a good starting value.
Wu, T.T. and Lange, K. (2008). Coordinate Descent Algorithms for Lasso Penalized Regression. The Annals of Applied Statistics, 2(1), 224–244.
Hui Zou and Ming Yuan (2008). Composite Quantile Regression and the Oracle Model Selection Theory. The Annals of Statistics, 36(3), 1108–1126.
set.seed(1)
n=100
p=2
a=rnorm(n*p, mean = 1, sd = 1)
x=matrix(a,n,p)
beta=rnorm(p,1,1)
beta=matrix(beta,p,1)
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
tau=1:5/6
# x is a 100*2 matrix, y is a 100*1 vector, beta is a 2*1 vector
cqr.cd(x,y,tau)
Composite quantile regression (CQR) finds the estimated coefficients that minimize the asymmetric absolute (check) loss summed over several quantile levels. cqr.fit is a high-level function for estimating parameters by composite quantile regression.
cqr.fit(X,y,tau,beta,method,maxit,toler,rho)
X |
the design matrix |
y |
response variable |
tau |
vector of quantile levels |
method |
"mm" for the majorize-and-minimize method, "cd" for the coordinate descent method, "admm" for the alternating direction method of multipliers, "ip" for the interior point method |
rho |
augmented Lagrangian parameter |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
toler |
tolerance criterion for stopping the algorithm (default 1e-3) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
cqr.fit(x,y,tau) works properly only if the least-squares estimate is a good starting value. The interior point method is provided by quantreg.
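No worked example is given for cqr.fit on this page; a minimal sketch follows, reusing the simulated data pattern from the other pages. The choice method="mm" is illustrative; any of the documented methods could be substituted.
set.seed(1)
n=100
p=2
x=matrix(rnorm(n*p, mean = 1, sd = 1),n,p)
beta=matrix(rnorm(p,1,1),p,1)
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
tau=1:5/6
# "mm" is an illustrative choice among the documented methods
cqr.fit(x,y,tau,method="mm")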
Composite quantile regression (CQR) finds the estimated coefficients that minimize the asymmetric absolute (check) loss summed over several quantile levels. cqr.fit.lasso is a high-level function for estimating and selecting parameters by composite quantile regression with the adaptive lasso penalty.
cqr.fit.lasso(X,y,tau,lambda,beta,method,maxit,toler,rho)
X |
the design matrix |
y |
response variable |
tau |
vector of quantile levels |
method |
"mm" for the majorize-and-minimize method, "cd" for the coordinate descent method, "admm" for the alternating direction method of multipliers |
lambda |
the constant coefficient of the penalty function (default lambda=1) |
rho |
augmented Lagrangian parameter |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
toler |
tolerance criterion for stopping the algorithm (default 1e-3) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
cqr.fit.lasso(x,y,tau) works properly only if the least-squares estimate is a good starting value.
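No worked example is given for cqr.fit.lasso on this page; a minimal sketch on a sparse design follows, mirroring the examples of the lower-level cqr.lasso.* functions. The method and lambda values are illustrative assumptions, not recommendations.
set.seed(1)
n=100
p=2
x=matrix(2*rnorm(n*2*p, mean = 1, sd = 1),n,2*p)
beta=rbind(matrix(2*rnorm(p,1,1),p,1),matrix(0,p,1))
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
tau=1:5/6
# beta has two nonzero and two zero elements; "mm" and lambda=1 are illustrative choices
cqr.fit.lasso(x,y,tau,lambda=1,method="mm")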
The function uses the interior point method from quantreg to solve the composite quantile regression problem.
cqr.ip(X,y,tau)
X |
the design matrix |
y |
response variable |
tau |
vector of quantile levels |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
The quantreg package must be installed from CRAN.
Koenker, R. and S. Portnoy (1997). The Gaussian Hare and the Laplacian Tortoise: Computability of Squared-Error versus Absolute-Error Estimators, with discussion. Statistical Science, 12, 279–300.
Hui Zou and Ming Yuan (2008). Composite Quantile Regression and the Oracle Model Selection Theory. The Annals of Statistics, 36(3), 1108–1126.
set.seed(1)
n=100
p=2
a=rnorm(n*p, mean = 1, sd = 1)
x=matrix(a,n,p)
beta=rnorm(p,1,1)
beta=matrix(beta,p,1)
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
tau=1:5/6
# x is a 100*2 matrix, y is a 100*1 vector, beta is a 2*1 vector
# you should install quantreg first to run the following command
#cqr.ip(x,y,tau)
The adaptive lasso weights are based on the coefficients estimated without the penalty. Composite quantile regression finds the estimated coefficients that minimize the asymmetric absolute (check) loss summed over several quantile levels. The problem is well suited to distributed convex optimization, and this implementation is based on the alternating direction method of multipliers (ADMM) algorithm.
cqr.lasso.admm(X,y,tau,lambda,rho,beta,maxit)
X |
the design matrix |
y |
response variable |
tau |
vector of quantile levels |
lambda |
the constant coefficient of the penalty function (default lambda=1) |
rho |
augmented Lagrangian parameter |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
cqr.lasso.admm(x,y,tau) works properly only if the least-squares estimate is a good starting value.
S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein (2010). Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning, 3(1), 1–122.
Hui Zou and Ming Yuan (2008). Composite Quantile Regression and the Oracle Model Selection Theory. The Annals of Statistics, 36(3), 1108–1126.
set.seed(1)
n=100
p=2
a=2*rnorm(n*2*p, mean = 1, sd = 1)
x=matrix(a,n,2*p)
beta=2*rnorm(p,1,1)
beta=rbind(matrix(beta,p,1),matrix(0,p,1))
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
tau=1:5/6
# x is a 100*4 matrix, y is a 100*1 vector, beta is a 4*1 vector whose last two elements are zero
cqr.lasso.admm(x,y,tau)
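One way to see the effect of the adaptive lasso penalty is to compare the penalized estimate with an unpenalized CQR fit on the same sparse design; a small sketch reusing the simulated data above follows.
# Sketch: compare unpenalized and adaptive-lasso CQR estimates on the sparse design above.
fit_plain <- cqr.admm(x, y, tau)         # no penalty
fit_lasso <- cqr.lasso.admm(x, y, tau)   # adaptive lasso penalty
cbind(truth = as.vector(beta),
      plain = as.vector(fit_plain$beta),
      lasso = as.vector(fit_lasso$beta))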
The adaptive lasso weights are based on the coefficients estimated without the penalty. Composite quantile regression finds the estimated coefficients that minimize the asymmetric absolute (check) loss summed over several quantile levels. The algorithm is based on greedy coordinate descent and Edgeworth's algorithm for ordinary regression.
cqr.lasso.cd(X,y,tau,lambda,beta,maxit,toler)
X |
the design matrix |
y |
response variable |
tau |
vector of quantile levels |
lambda |
the constant coefficient of the penalty function (default lambda=1) |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
toler |
tolerance criterion for stopping the algorithm (default 1e-3) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
cqr.lasso.cd(x,y,tau) works properly only if the least-squares estimate is a good starting value.
Wu, T.T. and Lange, K. (2008). Coordinate Descent Algorithms for Lasso Penalized Regression. The Annals of Applied Statistics, 2(1), 224–244.
Hui Zou and Ming Yuan (2008). Composite Quantile Regression and the Oracle Model Selection Theory. The Annals of Statistics, 36(3), 1108–1126.
set.seed(1)
n=100
p=2
a=2*rnorm(n*2*p, mean = 1, sd = 1)
x=matrix(a,n,2*p)
beta=2*rnorm(p,1,1)
beta=rbind(matrix(beta,p,1),matrix(0,p,1))
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
tau=1:5/6
# x is a 100*4 matrix, y is a 100*1 vector, beta is a 4*1 vector whose last two elements are zero
cqr.lasso.cd(x,y,tau)
The adaptive lasso penalty weights are based on the coefficients estimated without the penalty. Composite quantile regression finds the estimated coefficients that minimize the asymmetric absolute (check) loss summed over several quantile levels. The algorithm majorizes the objective function by a quadratic function and then minimizes that quadratic.
cqr.lasso.mm(X,y,tau,lambda,beta,maxit,toler)
X |
the design matrix |
y |
response variable |
tau |
vector of quantile levels |
lambda |
the constant coefficient of the penalty function (default lambda=1) |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
toler |
tolerance criterion for stopping the algorithm (default 1e-3) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercepts for the various quantile levels |
cqr.lasso.mm(x,y,tau) works properly only if the least-squares estimate is a good starting value.
David R. Hunter and Runze Li (2005). Variable Selection Using MM Algorithms. The Annals of Statistics, 33(4), 1617–1642.
Hui Zou and Ming Yuan (2008). Composite Quantile Regression and the Oracle Model Selection Theory. The Annals of Statistics, 36(3), 1108–1126.
set.seed(1)
n=100
p=2
a=2*rnorm(n*2*p, mean = 1, sd = 1)
x=matrix(a,n,2*p)
beta=2*rnorm(p,1,1)
beta=rbind(matrix(beta,p,1),matrix(0,p,1))
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
tau=1:5/6
# x is a 100*4 matrix, y is a 100*1 vector, beta is a 4*1 vector whose last two elements are zero
cqr.lasso.mm(x,y,tau)
Composite quantile regression finds the estimated coefficients that minimize the asymmetric absolute (check) loss summed over several quantile levels. The algorithm majorizes the objective function by a quadratic function and then minimizes that quadratic.
cqr.mm(X,y,tau,beta,maxit,toler)
X |
the design matrix |
y |
response variable |
tau |
vector of quantile levels |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
toler |
tolerance criterion for stopping the algorithm (default 1e-3) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercepts for the various quantile levels |
cqr.mm(x,y,tau) works properly only if the least-squares estimate is a good starting value.
David R. Hunter and Kenneth Lange. Quantile Regression via an MM Algorithm. Journal of Computational and Graphical Statistics, 9(1), 60–77.
Hui Zou and Ming Yuan (2008). Composite Quantile Regression and the Oracle Model Selection Theory. The Annals of Statistics, 36(3), 1108–1126.
set.seed(1)
n=100
p=2
a=rnorm(n*p, mean = 1, sd = 1)
x=matrix(a,n,p)
beta=rnorm(p,1,1)
beta=matrix(beta,p,1)
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
tau=1:5/6
# x is a 100*2 matrix, y is a 100*1 vector, beta is a 2*1 vector
cqr.mm(x,y,tau)
The quantile regression problem is well suited to distributed convex optimization, and this implementation is based on the alternating direction method of multipliers (ADMM) algorithm.
QR.admm(X,y,tau,rho,beta, maxit, toler)
X |
the design matrix |
y |
response variable |
tau |
quantile level |
rho |
augmented Lagrangian parameter |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
toler |
tolerance criterion for stopping the algorithm (default 1e-3) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
QR.admm(x,y,tau) works properly only if the least-squares estimate is a good starting value.
S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein (2010). Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning, 3(1), 1–122.
Koenker, Roger (2005). Quantile Regression. New York: Cambridge University Press.
set.seed(1)
n=100
p=2
a=rnorm(n*p, mean = 1, sd = 1)
x=matrix(a,n,p)
beta=rnorm(p,1,1)
beta=matrix(beta,p,1)
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
# x is a 100*2 matrix, y is a 100*1 vector, beta is a 2*1 vector
QR.admm(x,y,0.1)
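If quantreg is installed, the ADMM estimate can be sanity-checked against its interior point solver; a small sketch reusing the simulated data above follows (the comparison itself is illustrative and not part of the package).
# Sketch: compare the ADMM fit with quantreg's rq() at the same quantile level.
# Requires the quantreg package; install.packages("quantreg") if it is missing.
library(quantreg)
fit_admm <- QR.admm(x, y, 0.1)
fit_rq   <- rq(as.vector(y) ~ x, tau = 0.1)
cbind(admm = c(fit_admm$b, fit_admm$beta), rq = coef(fit_rq))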
The algorithm is based on greedy coordinate descent and Edgeworth's algorithm for ordinary regression.
QR.cd(X,y,tau,beta,maxit,toler)
X |
the design matrix |
y |
response variable |
tau |
quantile level |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
toler |
tolerance criterion for stopping the algorithm (default 1e-3) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
QR.cd(x,y,tau) works properly only if the least-squares estimate is a good starting value.
Wu, T.T. and Lange, K. (2008). Coordinate Descent Algorithms for Lasso Penalized Regression. The Annals of Applied Statistics, 2(1), 224–244.
Koenker, Roger (2005). Quantile Regression. New York: Cambridge University Press.
set.seed(1)
n=100
p=2
a=rnorm(n*p, mean = 1, sd = 1)
x=matrix(a,n,p)
beta=rnorm(p,1,1)
beta=matrix(beta,p,1)
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
# x is a 100*2 matrix, y is a 100*1 vector, beta is a 2*1 vector
QR.cd(x,y,0.1)
The function uses the interior point method from quantreg to solve the quantile regression problem.
QR.ip(X,y,tau)
X |
the design matrix |
y |
response variable |
tau |
quantile level |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
The quantreg package must be installed from CRAN.
Koenker, Roger (2005). Quantile Regression. New York: Cambridge University Press.
Koenker, R. and S. Portnoy (1997). The Gaussian Hare and the Laplacian Tortoise: Computability of Squared-Error versus Absolute-Error Estimators, with discussion. Statistical Science, 12, 279–300.
set.seed(1)
n=100
p=2
a=rnorm(n*p, mean = 1, sd = 1)
x=matrix(a,n,p)
beta=rnorm(p,1,1)
beta=matrix(beta,p,1)
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
# x is a 100*2 matrix, y is a 100*1 vector, beta is a 2*1 vector
# you should install quantreg first to run the following command
#QR.ip(x,y,0.1)
The adaptive lasso weights are based on the coefficients estimated without the penalty. The problem is well suited to distributed convex optimization, and this implementation is based on the alternating direction method of multipliers (ADMM) algorithm.
QR.lasso.admm(X,y,tau,lambda,rho,beta,maxit)
X |
the design matrix |
y |
response variable |
tau |
quantile level |
lambda |
the constant coefficient of the penalty function (default lambda=1) |
rho |
augmented Lagrangian parameter |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
QR.lasso.admm(x,y,tau) works properly only if the least-squares estimate is a good starting value.
S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein (2010). Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning, 3(1), 1–122.
Wu, Yichao and Liu, Yufeng (2009). Variable Selection in Quantile Regression. Statistica Sinica, 19, 801–817.
set.seed(1)
n=100
p=2
a=2*rnorm(n*2*p, mean = 1, sd = 1)
x=matrix(a,n,2*p)
beta=2*rnorm(p,1,1)
beta=rbind(matrix(beta,p,1),matrix(0,p,1))
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
# x is a 100*4 matrix, y is a 100*1 vector, beta is a 4*1 vector whose last two elements are zero
QR.lasso.admm(x,y,0.1)
The adaptive lasso weights are based on the coefficients estimated without the penalty. The algorithm is based on greedy coordinate descent and Edgeworth's algorithm for ordinary regression, as explored by Tong Tong Wu and Kenneth Lange.
QR.lasso.cd(X,y,tau,lambda,beta,maxit,toler)
X |
the design matrix |
y |
response variable |
tau |
quantile level |
lambda |
the constant coefficient of the penalty function (default lambda=1) |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
toler |
tolerance criterion for stopping the algorithm (default 1e-3) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
QR.lasso.cd(x,y,tau) works properly only if the least-squares estimate is a good starting value.
Wu, T.T. and Lange, K. (2008). Coordinate Descent Algorithms for Lasso Penalized Regression. The Annals of Applied Statistics, 2(1), 224–244.
Wu, Yichao and Liu, Yufeng (2009). Variable Selection in Quantile Regression. Statistica Sinica, 19, 801–817.
set.seed(1)
n=100
p=2
a=2*rnorm(n*2*p, mean = 1, sd = 1)
x=matrix(a,n,2*p)
beta=2*rnorm(p,1,1)
beta=rbind(matrix(beta,p,1),matrix(0,p,1))
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
# x is a 100*4 matrix, y is a 100*1 vector, beta is a 4*1 vector whose last two elements are zero
QR.lasso.cd(x,y,0.1)
The function uses the interior point method from quantreg to solve the penalized quantile regression problem.
QR.lasso.ip(X,y,tau,lambda)
X |
the design matrix |
y |
response variable |
tau |
quantile level |
lambda |
the constant coefficient of the penalty function (default lambda=1) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
lambda |
the constant coefficient of the penalty function used in the fit |
The quantreg package must be installed from CRAN.
Koenker, R. and S. Portnoy (1997). The Gaussian Hare and the Laplacian Tortoise: Computability of Squared-Error versus Absolute-Error Estimators, with discussion. Statistical Science, 12, 279–300.
Wu, Yichao and Liu, Yufeng (2009). Variable Selection in Quantile Regression. Statistica Sinica, 19, 801–817.
set.seed(1)
n=100
p=2
a=2*rnorm(n*2*p, mean = 1, sd = 1)
x=matrix(a,n,2*p)
beta=2*rnorm(p,1,1)
beta=rbind(matrix(beta,p,1),matrix(0,p,1))
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
# x is a 100*4 matrix, y is a 100*1 vector, beta is a 4*1 vector whose last two elements are zero
# you should install quantreg first to run the following command
#QR.lasso.ip(x,y,0.1)
The adaptive lasso weights are based on the coefficients estimated without the penalty. The algorithm majorizes the objective function by a quadratic function and then minimizes that quadratic.
QR.lasso.mm(X,y,tau,lambda,beta,maxit,toler)
X |
the design matrix. |
y |
response variable. |
tau |
quantile level. |
lambda |
the constant coefficient of the penalty function (default lambda=1) |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
toler |
tolerance criterion for stopping the algorithm (default 1e-3) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
QR.lasso.mm(x,y,tau) works properly only if the least-squares estimate is a good starting value.
David R. Hunter and Runze Li (2005). Variable Selection Using MM Algorithms. The Annals of Statistics, 33(4), 1617–1642.
set.seed(1)
n=100
p=2
a=2*rnorm(n*2*p, mean = 1, sd = 1)
x=matrix(a,n,2*p)
beta=2*rnorm(p,1,1)
beta=rbind(matrix(beta,p,1),matrix(0,p,1))
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
# x is a 100*4 matrix, y is a 100*1 vector, beta is a 4*1 vector whose last two elements are zero
QR.lasso.mm(x,y,0.1)
The algorithm majorizes the objective function by a quadratic function and then minimizes that quadratic.
QR.mm(X,y,tau,beta,maxit,toler)
X |
the design matrix |
y |
response variable |
tau |
quantile level |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
toler |
tolerance criterion for stopping the algorithm (default 1e-3) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
QR.mm(x,y,tau) works properly only if the least-squares estimate is a good starting value.
David R. Hunter and Kenneth Lange. Quantile Regression via an MM Algorithm. Journal of Computational and Graphical Statistics, 9(1), 60–77.
set.seed(1)
n=100
p=2
a=rnorm(n*p, mean = 1, sd = 1)
x=matrix(a,n,p)
beta=rnorm(p,1,1)
beta=matrix(beta,p,1)
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
# x is a 100*2 matrix, y is a 100*1 vector, beta is a 2*1 vector
QR.mm(x,y,0.1)
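The majorize-and-minimize idea can be sketched in a few lines of R: each iteration replaces the check loss with a quadratic surrogate, so the update reduces to a weighted least-squares solve. The sketch below illustrates the principle only and is not the package's implementation; the helper name qr_mm_sketch and the perturbation eps are introduced here for the example.
# Illustrative MM / IRLS sketch for quantile regression (not the package's code).
# The check loss is majorized by a quadratic with weights 1/(eps + |residual|).
qr_mm_sketch <- function(X, y, tau, maxit = 200, toler = 1e-3, eps = 1e-6) {
  Z <- cbind(1, X)                        # add intercept column
  theta <- qr.solve(Z, y)                 # least-squares starting value
  for (it in seq_len(maxit)) {
    r <- as.vector(y - Z %*% theta)       # current residuals
    w <- 1 / (eps + abs(r))               # majorizer weights
    ytilde <- y + (2 * tau - 1) / w       # shift that absorbs the asymmetry of the check loss
    theta_new <- qr.solve(sqrt(w) * Z, sqrt(w) * ytilde)   # weighted least-squares update
    if (max(abs(theta_new - theta)) < toler) { theta <- theta_new; break }
    theta <- theta_new
  }
  list(b = theta[1], beta = theta[-1])
}
qr_mm_sketch(x, y, 0.1)   # compare with QR.mm(x, y, 0.1) above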
High-level function for estimating parameters by quantile regression.
qrfit(X,y,tau,beta,method,maxit,toler,rho)
X |
the design matrix |
y |
response variable |
tau |
quantile level |
method |
"mm" for the majorize-and-minimize method, "cd" for the coordinate descent method, "admm" for the alternating direction method of multipliers, "ip" for the interior point method |
rho |
augmented Lagrangian parameter |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
toler |
tolerance criterion for stopping the algorithm (default 1e-3) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
qrfit(x,y,tau) works properly only if the least-squares estimate is a good starting value. The interior point method is provided by quantreg.
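No worked example is given for qrfit on this page; a minimal sketch follows, reusing the simulated data pattern from the other pages. The choice method="cd" is illustrative; any of the documented methods could be substituted.
set.seed(1)
n=100
p=2
x=matrix(rnorm(n*p, mean = 1, sd = 1),n,p)
beta=matrix(rnorm(p,1,1),p,1)
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
# "cd" is an illustrative choice among the documented methods
qrfit(x,y,0.1,method="cd")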
High-level function for estimating and selecting parameters by quantile regression with the adaptive lasso penalty.
qrfit.lasso(X,y,tau,lambda,beta,method,maxit,toler,rho)
X |
the design matrix |
y |
response variable |
tau |
quantile level |
method |
"mm" for the majorize-and-minimize method, "cd" for the coordinate descent method, "admm" for the alternating direction method of multipliers, "ip" for the interior point method |
lambda |
the constant coefficient of the penalty function (default lambda=1) |
rho |
augmented Lagrangian parameter |
beta |
initial values of the coefficients (default: naive guess by least-squares estimation) |
maxit |
maximum number of iterations (default 200) |
toler |
tolerance criterion for stopping the algorithm (default 1e-3) |
a list with the following components:
beta |
the vector of estimated coefficients |
b |
intercept |
qrfit.lasso(x,y,tau) works properly only if the least-squares estimate is a good starting value. The interior point method is provided by quantreg.
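No worked example is given for qrfit.lasso on this page; a minimal sketch on a sparse design follows, mirroring the QR.lasso.* examples. The lambda and method values are illustrative assumptions, not recommendations.
set.seed(1)
n=100
p=2
x=matrix(2*rnorm(n*2*p, mean = 1, sd = 1),n,2*p)
beta=rbind(matrix(2*rnorm(p,1,1),p,1),matrix(0,p,1))
y=x%*%beta-matrix(rnorm(n,0.1,1),n,1)
# beta has two nonzero and two zero elements; lambda=1 and method="cd" are illustrative choices
qrfit.lasso(x,y,0.1,lambda=1,method="cd")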