Title: | Constrained Ordinary Least Squares |
---|---|
Description: | Constrained ordinary least squares is performed. One constraint is that all beta coefficients (including the constant) must be non-negative: either zero or strictly positive. Another constraint is that the sum of the beta coefficients equals a constant. References: Hansen, B. E. (2022). Econometrics, Princeton University Press. <ISBN:9780691235899>. |
Authors: | Michail Tsagris [aut, cre] |
Maintainer: | Michail Tsagris <[email protected]> |
License: | GPL (>= 2) |
Version: | 1.3 |
Built: | 2024-11-25 14:52:17 UTC |
Source: | CRAN |
Package: | cols |
Type: | Package |
Version: | 1.3 |
Date: | 2024-11-19 |
Michail Tsagris <[email protected]>.
Michail Tsagris [email protected]
Hansen, B. E. (2022). Econometrics, Princeton University Press.
Constrained least squares.
cls(y, x, R, ca)
mvcls(y, x, R, ca)
y |
The response variable. For the cls() a numerical vector with observations, but for the mvcls() a numerical matrix. |
x |
A matrix with independent variables, the design matrix. |
R |
The R vector that contains the values that will multiply the beta coefficients. See details and examples. |
ca |
The value of the constraint, i.e. the constant that the sum of the beta coefficients, weighted by the R vector, must equal. |
This is described in Chapter 8.2 of Hansen (2022). The idea is to minimise the sum of squares of the residuals under the constraint that the weighted sum of the beta coefficients equals the value ca. As mentioned above, be careful with the input you give in the x matrix and the R vector. The cls() function performs a single regression model, whereas the mvcls() function performs a regression for each column of y. Each regression is independent of the others.
A list including:
be |
A numerical matrix with the constrained beta coefficients. |
mse |
A numerical vector with the mean squared error. |
Michail Tsagris.
R implementation and documentation: Michail Tsagris [email protected].
Hansen, B. E. (2022). Econometrics, Princeton University Press.
x <- as.matrix( iris[1:50, 1:4] )
y <- rnorm(50)
R <- c(1, 1, 1, 1)
cls(y, x, R, 1)
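For intuition, the equality-constraint part of the problem has a well-known closed form (Hansen, 2022, Ch. 8.2): the constrained estimator shifts the OLS estimator just enough to satisfy the constraint. A minimal base-R sketch of that step, ignoring the non-negativity constraint that cls() additionally enforces (variable names are illustrative, not the package's internals):

```r
# Equality-constrained least squares (sum of R * beta = ca), base R only.
# This ignores the non-negativity constraint that cls() also imposes.
set.seed(1)
x  <- cbind(1, matrix(rnorm(200), ncol = 4))  # design matrix with a constant
y  <- rnorm(50)
R  <- rep(1, 5)  # values multiplying the beta coefficients
ca <- 1          # required value of sum(R * beta)

xtx_inv <- solve(crossprod(x))
b_ols   <- xtx_inv %*% crossprod(x, y)
# beta_cls = beta_ols - (X'X)^{-1} R [ R'(X'X)^{-1} R ]^{-1} (R' beta_ols - ca)
b_cls <- b_ols - xtx_inv %*% R %*%
         solve(t(R) %*% xtx_inv %*% R) %*% (t(R) %*% b_ols - ca)
sum(R * b_cls)  # equals ca up to numerical error
```

By construction the adjustment term lies in the direction (X'X)^{-1} R, so the constraint holds exactly while the residual sum of squares increases as little as possible.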
Lower and upper bound constrained least squares
int.cls(y, x, lb, ub)
y |
The response variable, a numerical vector with observations. |
x |
A matrix with independent variables, the design matrix. |
lb |
A vector or a single value with the lower bound(s) on the coefficients. |
ub |
A vector or a single value with the upper bound(s) on the coefficients. |
This function performs least squares under the constraint that the beta coefficients lie within interval(s).
A list including:
be |
A numerical matrix with the constrained beta coefficients. |
mse |
A numerical vector with the mean squared error. |
Michail Tsagris.
R implementation and documentation: Michail Tsagris [email protected].
x <- as.matrix( iris[1:50, 1:4] )
y <- rnorm(50)
int.cls(y, x, -1, 1)
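A bound-constrained fit of this kind can be sketched with base R's optim() and its box-constrained "L-BFGS-B" method; int.cls() may use a different (faster) algorithm internally, so this is only an illustration:

```r
# Least squares with every coefficient forced into [lb, ub], via L-BFGS-B.
set.seed(2)
x  <- cbind(1, matrix(rnorm(150), ncol = 3))  # design matrix with a constant
y  <- rnorm(50)
lb <- -1
ub <- 1
rss <- function(b) sum((y - x %*% b)^2)  # residual sum of squares
fit <- optim(rep(0, ncol(x)), rss, method = "L-BFGS-B",
             lower = lb, upper = ub)
be  <- fit$par                  # constrained coefficients, each in [-1, 1]
mse <- mean((y - x %*% be)^2)   # mean squared error
```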
Positive and unit sum constrained least squares.
pcls(y, x)
mpcls(y, x)
y |
The response variable. For the pcls() a numerical vector with observations, but for the mpcls() a numerical matrix. |
x |
A matrix with independent variables, the design matrix. |
The constraint is that all beta coefficients are positive and sum to 1. The pcls() function performs a single regression model, whereas the mpcls() function performs a regression for each column of y. Each regression is independent of the others.
A list including:
be |
A numerical matrix with the positively constrained beta coefficients. |
mse |
A numerical vector with the mean squared error. |
Michail Tsagris.
R implementation and documentation: Michail Tsagris [email protected].
x <- as.matrix( iris[1:50, 1:4] )
y <- rnorm(50)
pcls(y, x)
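One way to sketch a unit-sum, non-negative fit with base R alone is a softmax reparameterisation of the coefficients. Note this map cannot produce exact zeros, unlike a proper simplex-constrained solver, and it is not what pcls() does internally:

```r
# Coefficients constrained to the unit simplex via a softmax map.
set.seed(3)
x <- as.matrix(iris[1:50, 1:4])
y <- rnorm(50)
obj <- function(theta) {
  b <- exp(theta) / sum(exp(theta))  # b >= 0 and sum(b) = 1 by construction
  sum((y - x %*% b)^2)
}
fit <- optim(rep(0, ncol(x)), obj, method = "BFGS")
be  <- exp(fit$par) / sum(exp(fit$par))
sum(be)  # 1, whatever optim converged to
```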
Positively constrained least squares.
pls(y, x)
mpls(y, x)
y |
The response variable. For the pls() a numerical vector with observations, but for the mpls() a numerical matrix. |
x |
A matrix with independent variables, the design matrix. |
The constraint is that all beta coefficients (including the constant) are non-negative. The pls() function performs a single regression model, whereas the mpls() function performs a regression for each column of y. Each regression is independent of the others.
A list including:
be |
A numerical matrix with the positively constrained beta coefficients. |
mse |
A numerical vector with the mean squared error(s). |
Michail Tsagris.
R implementation and documentation: Michail Tsagris [email protected].
x <- as.matrix( iris[1:50, 1:4] )
y <- rnorm(50)
pls(y, x)
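A non-negative fit can likewise be sketched with optim()'s box constraints (lower bound 0, no upper bound); pls() itself may use a dedicated non-negative least-squares routine, so treat this as illustrative:

```r
# Non-negative least squares, constant included in the design matrix.
set.seed(4)
x <- cbind(1, as.matrix(iris[1:50, 1:4]))
y <- rnorm(50)
rss <- function(b) sum((y - x %*% b)^2)  # residual sum of squares
fit <- optim(rep(0.1, ncol(x)), rss, method = "L-BFGS-B", lower = 0)
be  <- fit$par                  # all entries >= 0
mse <- mean((y - x %*% be)^2)   # mean squared error
```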
Positively constrained least squares with a multivariate response.
mvpls(y, x)
y |
The response variables, a numerical matrix with observations. |
x |
A matrix with independent variables, the design matrix. |
The constraint is that all beta coefficients (including the constant) are positive.
A list including:
be |
The positively constrained beta coefficients. |
mse |
The mean squared error. |
Michail Tsagris.
R implementation and documentation: Michail Tsagris [email protected].
y <- as.matrix( iris[, 1:2] )
x <- as.matrix( iris[, 3:4] )
mvpls(y, x)
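Since each response column is fitted independently, a multivariate positively constrained fit reduces to one non-negative least-squares fit per column. A base-R sketch of that reduction (not the package's implementation):

```r
# One non-negative least-squares fit per response column, independently.
set.seed(5)
y <- as.matrix(iris[, 1:2])
x <- cbind(1, as.matrix(iris[, 3:4]))  # design matrix with a constant
fit_col <- function(yc) {
  optim(rep(0.1, ncol(x)),
        function(b) sum((yc - x %*% b)^2),
        method = "L-BFGS-B", lower = 0)$par
}
# Column j of be holds the coefficients for the j-th response.
be <- sapply(seq_len(ncol(y)), function(j) fit_col(y[, j]))
```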