Package 'scalreg'

Title: Scaled Sparse Linear Regression
Description: Algorithms for fitting scaled sparse linear regression and estimating precision matrices.
Authors: Tingni Sun
Maintainer: Tingni Sun <[email protected]>
License: GPL-2
Version: 1.0.1
Built: 2024-12-22 06:23:10 UTC
Source: CRAN

Help Index


Scaled sparse linear regression

Description

This package fits scaled sparse linear regression with an l_1 penalty. The algorithm jointly estimates the regression coefficients and the noise level in a linear regression problem. In addition, the package estimates inverse covariance matrices (precision matrices) via a scale-invariant method.

Details

Package: scalreg
Type: Package
Version: 1.0
Date: 2013-12-16
License: GPL-2

Author(s)

Tingni Sun <[email protected]>

References

Sun, T. and Zhang, C.-H. (2012) Scaled sparse linear regression. Biometrika, 99 (4), 879-898.

Sun, T. and Zhang, C.-H. (2013) Sparse matrix inversion with scaled Lasso. Journal of Machine Learning Research, 14, 3385-3418.

See Also

scalreg

Examples

## See examples in scalreg
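
As a further hedged sketch of the basic workflow on simulated data (the variable names below are illustrative, not part of the package):

library(scalreg)
set.seed(1)
n = 100; p = 20
X = matrix(rnorm(n * p), n, p)
y = X[, 1] - 2 * X[, 2] + rnorm(n)

## jointly estimate the regression coefficients and the noise level
fit = scalreg(X, y)
fit$hsigma          ## estimated noise level
fit$coefficients    ## estimated coefficients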

Prediction based on a scalreg object

Description

This predict method applies when the scalreg object is of type "regression".

Usage

## S3 method for class 'scalreg'
predict(object, newX = NULL,...)

Arguments

object

a fitted scalreg object.

newX

X values at which predictions are required. If newX is NULL, the fitted values of the object are returned.

...

Additional arguments for generic methods.

Value

y

the predicted values.

Author(s)

Tingni Sun

See Also

scalreg
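
Examples

A minimal usage sketch (illustrative only; it reuses the sp500 data shipped with the package, and the rows passed as newX are arbitrary):

data(sp500)
attach(sp500)
x = sp500.percent[, 3:dim(sp500.percent)[2]]
y = sp500.percent[, 1]
object = scalreg(x, y)
yhat = predict(object)                    ## fitted values (newX is NULL)
ynew = predict(object, newX = x[1:5, ])   ## predictions at new X values
detach(sp500)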


Printing the solution from a scalreg object

Description

Print the solution from a scalreg object

Usage

## S3 method for class 'scalreg'
print(x,...)

Arguments

x

a scalreg object

...

Additional arguments for generic methods.

Author(s)

Tingni Sun

See Also

scalreg


Scaled sparse linear regression

Description

The algorithm computes the scaled Lasso solution for a sparse linear regression with a given penalty constant. When the response vector is not supplied, the algorithm estimates the precision matrix of the predictors.

Usage

scalreg(X, y, lam0 = NULL, LSE = FALSE)

Arguments

X

predictors, an n by p matrix with n > 1 and p > 1.

y

response, an n-vector with n > 1. If NULL, the algorithm computes the precision matrix of predictors.

lam0

penalty constant; either "univ", "quantile", or a user-specified numerical value. If p < 10^6, the default is "quantile"; otherwise, the default is "univ".

LSE

If TRUE, compute least squares estimates after scaled Lasso selection. Default is FALSE.

Details

Scaled sparse linear regression jointly estimates the regression coefficients and the noise level in a linear model, as described in detail in Sun and Zhang (2012). It alternates between estimating the noise level via the mean residual square and scaling the penalty in proportion to the estimated noise level. The theoretical performance of the scaled Lasso with lam0="univ" was established in Sun and Zhang (2012), while the quantile-based penalty level (lam0="quantile") was introduced and studied in Sun and Zhang (2013).
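
The alternation can be illustrated with the following conceptual sketch. It is not the package's internal code: it uses glmnet for the inner Lasso step, the universal penalty level sqrt(2*log(p)/n), and assumes centered, column-standardized data.

## conceptual sketch of the scaled Lasso alternation (not the package internals)
library(glmnet)
scaled_lasso_sketch = function(X, y, tol = 1e-6, maxit = 100) {
  n = nrow(X); p = ncol(X)
  lam0 = sqrt(2 * log(p) / n)            ## "univ" penalty level
  sigma = sqrt(mean(y^2))                ## crude initial noise level
  for (it in 1:maxit) {
    ## Lasso step with the penalty scaled by the current noise estimate
    fit = glmnet(X, y, lambda = sigma * lam0,
                 standardize = FALSE, intercept = FALSE)
    res = y - predict(fit, newx = X)
    sigma.new = sqrt(mean(res^2))        ## noise level from the mean residual square
    if (abs(sigma.new - sigma) < tol) break
    sigma = sigma.new
  }
  list(hsigma = sigma, coefficients = as.numeric(coef(fit))[-1])
}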

Precision matrix estimation is described in detail in Sun and Zhang (2013). The algorithm first estimates each column of the matrix by scaled sparse linear regression and then adjusts the matrix estimator to be symmetric.
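
A column-by-column sketch of this construction is given below; it is illustrative only, and the min-magnitude symmetrization shown is one common rule, not necessarily the package's exact adjustment.

## sketch: scaled sparse regression of each column on the remaining columns
precision_sketch = function(X) {
  p = ncol(X)
  Omega = matrix(0, p, p)
  for (j in 1:p) {
    fit = scalreg(X[, -j], X[, j])            ## scaled Lasso for column j
    Omega[j, j] = 1 / fit$hsigma^2
    Omega[-j, j] = -fit$coefficients / fit$hsigma^2
  }
  ## symmetrize by keeping, for each pair of entries, the one smaller in magnitude
  ifelse(abs(Omega) <= abs(t(Omega)), Omega, t(Omega))
}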

Value

A "scalreg" object is returned. If it is a linear regression solution, some significant components of the object are:

type

"regression".

hsigma

the estimated noise level.

coefficients

the estimated coefficients.

fitted.values

the fitted mean values.

residuals

the residuals, that is, the response minus the fitted values.

lse

the least squares estimate after the scaled Lasso selection (computed only when LSE = TRUE), containing components analogous to those of the "scalreg" object (e.g. hsigma, coefficients, fitted.values, residuals).

For a precision matrix solution, the main components of the object are:

type

"precision matrix".

precision

the estimated precision matrix.

hsigma

the estimated noise level for the linear regression problem of each column.

lse

the least squares estimate, containing the components precision and hsigma (computed only when LSE = TRUE).

Author(s)

Tingni Sun <[email protected]>

References

Sun, T. and Zhang, C.-H. (2012) Scaled sparse linear regression. Biometrika, 99 (4), 879-898.

Sun, T. and Zhang, C.-H. (2013) Sparse matrix inversion with scaled Lasso. Journal of Machine Learning Research, 14, 3385-3418.

Examples

data(sp500)
attach(sp500)
x = sp500.percent[, 3:dim(sp500.percent)[2]]   ## daily percentage changes of the individual stocks
y = sp500.percent[, 1]                         ## daily percentage change of the DJIA index

## scaled sparse regression of the index on the stocks
object = scalreg(x, y)
##print(object)

## refit with least squares after the scaled Lasso selection
object = scalreg(x, y, LSE = TRUE)
print(object$hsigma)       ## scaled Lasso noise level estimate
print(object$lse$hsigma)   ## noise level from the post-selection least squares fit

detach(sp500)

sp500

Description

The sp500 datafile contains a year's worth of close-of-day data for most of the Standard and Poor's 500 stocks. The data is in reverse chronological order, with the top row being Dec 31st, 2008.

Usage

sp500

Format

This data file contains the following items:

sp500.2008

The raw close-of-day data. The first column is the DJIA index, the second is the S&P 500 index, and the rest are individual labeled stocks.

sp500.percent

The daily percentage change.

References

This database was used in the R package "plus".

Examples

## See examples in scalreg
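
A quick inspection sketch of the two components described above (assuming the data file loads as a list with these component names, as in the scalreg examples):

data(sp500)
attach(sp500)
dim(sp500.2008)       ## raw close-of-day data
dim(sp500.percent)    ## daily percentage changes
detach(sp500)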