Title: Dual Feature Reduction for SGL
Description: Implementation of the Dual Feature Reduction (DFR) approach for the Sparse Group Lasso (SGL) and the Adaptive Sparse Group Lasso (aSGL) (Feser and Evangelou (2024) <doi:10.48550/arXiv.2405.17094>). The DFR approach is a feature reduction approach that applies strong screening to reduce the feature space before optimisation, leading to speed-up improvements for fitting SGL (Simon et al. (2013) <doi:10.1080/10618600.2012.681250>) and aSGL (Mendez-Civieta et al. (2020) <doi:10.1007/s11634-020-00413-8> and Poignard (2020) <doi:10.1007/s10463-018-0692-7>) models. DFR is implemented using the Adaptive Three Operator Splitting (ATOS) (Pedregosa and Gidel (2018) <doi:10.48550/arXiv.1804.02339>) algorithm, with linear and logistic SGL models supported, both of which can be fit using k-fold cross-validation. Dense and sparse input matrices are supported.
Authors: Fabio Feser [aut, cre]
Maintainer: Fabio Feser <[email protected]>
License: GPL (>= 3)
Version: 0.1.2
Built: 2024-11-28 13:55:15 UTC
Source: CRAN
Main fitting function for the adaptive sparse-group lasso (aSGL) with DFR. Supports both linear and logistic regression, with dense and sparse matrix implementations.
dfr_adap_sgl(
  X, y, groups, type = "linear", lambda = "path", alpha = 0.95,
  gamma_1 = 0.1, gamma_2 = 0.1, max_iter = 5000, backtracking = 0.7,
  max_iter_backtracking = 100, tol = 1e-05, standardise = "l2",
  intercept = TRUE, path_length = 20, min_frac = 0.05, screen = TRUE,
  verbose = FALSE, v_weights = NULL, w_weights = NULL
)
X: Input matrix of dimensions $n \times p$. Both dense and sparse (class "sparseMatrix") matrices are supported.
y: Output vector of dimension $n$. Should be continuous for type = "linear" and binary for type = "logistic".
groups: A grouping structure for the input data. Should take the form of a vector of group indices.
type: The type of regression to perform. Supported values are: "linear" and "logistic".
lambda: The regularisation parameter. Defines the level of sparsity in the model: a higher value leads to a sparser model. Either "path", which computes a path of path_length values, or a user-specified value or sequence.
alpha: The value of $\alpha$, which defines the convexity of the penalty: $\alpha = 1$ corresponds to the lasso and $\alpha = 0$ to the group lasso.
gamma_1: Hyperparameter which determines the shape of the variable penalties.
gamma_2: Hyperparameter which determines the shape of the group penalties.
max_iter: Maximum number of ATOS iterations to perform.
backtracking: The backtracking parameter, $\tau$, as defined in Pedregosa and Gidel (2018).
max_iter_backtracking: Maximum number of backtracking line search iterations to perform per global iteration.
tol: Convergence tolerance for the stopping criteria.
standardise: Type of standardisation to perform on X: "l2" scales each column to unit $\ell_2$ norm, "l1" to unit $\ell_1$ norm, "sd" to unit standard deviation, and "none" applies no standardisation.
intercept: Logical flag for whether to fit an intercept.
path_length: The number of $\lambda$ values to fit the model for. Ignored if lambda is specified directly.
min_frac: Smallest value of $\lambda$ on the path, as a fraction of the largest value, $\lambda_{\max}$.
screen: Logical flag for whether to apply the DFR screening rules (see Feser and Evangelou (2024)).
verbose: Logical flag for whether to print fitting information.
v_weights: Optional vector of variable penalty weights. Overrides the adaptive SGL penalties if specified. Custom weights are multiplied internally by $\lambda\alpha$.
w_weights: Optional vector of group penalty weights. Overrides the adaptive SGL penalties if specified. Custom weights are multiplied internally by $\lambda(1-\alpha)$.
dfr_adap_sgl() fits a DFR-aSGL model (Feser and Evangelou (2024)) using Adaptive Three Operator Splitting (ATOS) (Pedregosa and Gidel (2018)). It solves the convex optimisation problem given by (Poignard (2020) and Mendez-Civieta et al. (2020))

$$\min_{\beta} \; f(\beta; y, X) + \lambda \alpha \sum_{i=1}^{p} v_i |\beta_i| + \lambda (1 - \alpha) \sum_{g=1}^{m} \sqrt{p_g}\, w_g \left\|\beta^{(g)}\right\|_2,$$

where $f(\cdot)$ is the loss function, $p_g$ are the group sizes, and $v \in \mathbb{R}^p$ and $w \in \mathbb{R}^m$ are adaptive weights. In the case of the linear model, the loss function is given by the mean-squared error loss:

$$f(\beta; y, X) = \frac{1}{2n} \left\|y - X\beta\right\|_2^2.$$

In the logistic model, the loss function is given by

$$f(\beta; y, X) = -\frac{1}{n} \log \mathcal{L}(\beta; y, X),$$

where the log-likelihood is given by

$$\log \mathcal{L}(\beta; y, X) = \sum_{i=1}^{n} \left\{ y_i x_i^\top \beta - \log\left(1 + \exp(x_i^\top \beta)\right) \right\}.$$

The adaptive weights are chosen as, for a group $g$ and variable $i$ (Mendez-Civieta et al. (2020)),

$$v_i = \frac{1}{|q_{1i}|^{\gamma_1}}, \qquad w_g = \frac{1}{\left\|q_1^{(g)}\right\|_2^{\gamma_2}},$$

where $q_1$ is the first principal component from a principal component analysis of $X$, and $\gamma_1, \gamma_2$ are the shape hyperparameters.
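The short sketch below is an illustration of weights of this form, not the package's internal code: it builds PCA-based weights by hand and supplies them via v_weights and w_weights, which override the internally computed penalties. The use of prcomp() for q1 and the toy data are assumptions made for demonstration.

# build Mendez-Civieta-style adaptive weights manually (illustrative only)
groups <- c(1, 1, 1, 2, 2, 3, 3, 3, 4, 4)
data <- sgs::gen_toy_data(p = 10, n = 5, groups = groups, seed_id = 3)
q1 <- prcomp(as.matrix(data$X))$rotation[, 1]  # first principal component loadings
gamma_1 <- 0.1
gamma_2 <- 0.1
v <- 1 / abs(q1)^gamma_1  # variable weights: 1 / |q_1i|^gamma_1
w <- sapply(unique(groups), function(g) 1 / sqrt(sum(q1[groups == g]^2))^gamma_2)  # group weights
model <- dfr_adap_sgl(X = data$X, y = data$y, groups = groups,
                      v_weights = v, w_weights = w, path_length = 5)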
DFR uses the dual norm (the $\epsilon$-norm) and the KKT conditions to discard features at the next path value, $\lambda_{k+1}$, that are predicted, from the solution at $\lambda_k$, to be inactive. It applies two layers of screening: it first screens out any groups that satisfy

$$\left\|\nabla_g f(\hat\beta(\lambda_k))\right\|_{\epsilon'} \leq \tau_g\,(2\lambda_{k+1} - \lambda_k),$$

where $\tau_g$ is a group-level threshold depending on $\alpha$ and $p_g$, and then screens out any variables in the remaining groups that satisfy

$$\left|\nabla_i f(\hat\beta(\lambda_k))\right| \leq \alpha\,(2\lambda_{k+1} - \lambda_k),$$

leading to effective input dimensionality reduction. See Feser and Evangelou (2024) for the exact form of the rules and full details.
A list containing:
beta: The fitted values from the regression. Taken to be the more stable fit between x and z, which is usually the former.
group_effects: The group values from the regression. Taken by applying the $\ell_2$ norm within each group on the fitted values.
selected_var: A list containing the indices of the active/selected variables for each $\lambda$ value.
selected_grp: A list containing the indices of the active/selected groups for each $\lambda$ value.
num_it: Number of iterations performed. If convergence is not reached, this will be max_iter.
success: Logical flag indicating whether ATOS converged, according to tol.
certificate: Final value of the convergence criterion.
x: The solution to the original problem (see Pedregosa and Gidel (2018)).
u: The solution to the dual problem (see Pedregosa and Gidel (2018)).
z: The updated values from applying the first proximal operator (see Pedregosa and Gidel (2018)).
screen_set_var: List of variables kept after the screening step for each $\lambda$ value.
screen_set_grp: List of groups kept after the screening step for each $\lambda$ value.
epsilon_set_var: List of variables used for fitting after screening for each $\lambda$ value.
epsilon_set_grp: List of groups used for fitting after screening for each $\lambda$ value.
kkt_violations_var: List of variables that violated the KKT conditions at each $\lambda$ value.
kkt_violations_grp: List of groups that violated the KKT conditions at each $\lambda$ value.
v_weights: Vector of the variable penalty weights used.
w_weights: Vector of the group penalty weights used.
screen: Logical flag indicating whether screening was performed.
type: Indicates which type of regression was performed.
intercept: Logical flag indicating whether an intercept was fit.
lambda: Value(s) of $\lambda$ used to fit the model.
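As a quick illustration (assuming model holds a dfr_adap_sgl() fit, such as the one in the examples below), the documented components can be accessed directly:

# inspect a fitted DFR-aSGL object (fields as documented above)
model$beta          # fitted coefficients for each lambda value
model$selected_var  # indices of active variables at each lambda value
model$num_it        # iterations performed by ATOS
model$success       # did ATOS converge according to tol?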
Feser, F., Evangelou, M. (2024). Dual feature reduction for the sparse-group lasso and its adaptive variant, https://arxiv.org/abs/2405.17094
Mendez-Civieta, A., Carmen Aguilera-Morillo, M., Lillo, R. (2020). Adaptive sparse group LASSO in quantile regression, https://link.springer.com/article/10.1007/s11634-020-00413-8
Pedregosa, F., Gidel, G. (2018). Adaptive Three Operator Splitting, https://proceedings.mlr.press/v80/pedregosa18a.html
Poignard, B. (2020). Asymptotic theory of the adaptive Sparse Group Lasso, https://link.springer.com/article/10.1007/s10463-018-0692-7
Other SGL-methods: dfr_adap_sgl.cv(), dfr_sgl(), dfr_sgl.cv(), plot.sgl(), predict.sgl(), print.sgl()
# specify a grouping structure
groups = c(1, 1, 1, 2, 2, 3, 3, 3, 4, 4)
# generate data
data = sgs::gen_toy_data(p = 10, n = 5, groups = groups, seed_id = 3, group_sparsity = 1)
# run DFR-aSGL
model = dfr_adap_sgl(
  X = data$X, y = data$y, groups = groups, type = "linear",
  path_length = 5, alpha = 0.95, standardise = "l2",
  intercept = TRUE, verbose = FALSE
)
Function to fit a pathwise solution of the adaptive sparse-group lasso (aSGL) with DFR, using k-fold cross-validation. Supports both linear and logistic regression, with dense and sparse matrix implementations.
dfr_adap_sgl.cv(
  X, y, groups, type = "linear", lambda = "path", path_length = 20,
  nfolds = 10, alpha = 0.95, gamma_1 = 0.1, gamma_2 = 0.1,
  backtracking = 0.7, max_iter = 5000, max_iter_backtracking = 100,
  tol = 1e-05, min_frac = 0.05, standardise = "l2", intercept = TRUE,
  error_criteria = "mse", screen = TRUE, verbose = FALSE,
  v_weights = NULL, w_weights = NULL
)
X: Input matrix of dimensions $n \times p$. Both dense and sparse (class "sparseMatrix") matrices are supported.
y: Output vector of dimension $n$. Should be continuous for type = "linear" and binary for type = "logistic".
groups: A grouping structure for the input data. Should take the form of a vector of group indices.
type: The type of regression to perform. Supported values are: "linear" and "logistic".
lambda: The regularisation parameter. Defines the level of sparsity in the model: a higher value leads to a sparser model. Either "path", which computes a path of path_length values, or a user-specified value or sequence.
path_length: The number of $\lambda$ values to fit the model for. Ignored if lambda is specified directly.
nfolds: The number of folds to use in cross-validation.
alpha: The value of $\alpha$, which defines the convexity of the penalty: $\alpha = 1$ corresponds to the lasso and $\alpha = 0$ to the group lasso.
gamma_1: Hyperparameter which determines the shape of the variable penalties.
gamma_2: Hyperparameter which determines the shape of the group penalties.
backtracking: The backtracking parameter, $\tau$, as defined in Pedregosa and Gidel (2018).
max_iter: Maximum number of ATOS iterations to perform.
max_iter_backtracking: Maximum number of backtracking line search iterations to perform per global iteration.
tol: Convergence tolerance for the stopping criteria.
min_frac: Smallest value of $\lambda$ on the path, as a fraction of the largest value, $\lambda_{\max}$.
standardise: Type of standardisation to perform on X: "l2" scales each column to unit $\ell_2$ norm, "l1" to unit $\ell_1$ norm, "sd" to unit standard deviation, and "none" applies no standardisation.
intercept: Logical flag for whether to fit an intercept.
error_criteria: The criteria used to discriminate between models along the path. Supported values are: "mse" (mean squared error) and "misclass" (misclassification rate).
screen: Logical flag for whether to apply the DFR screening rules (see Feser and Evangelou (2024)).
verbose: Logical flag for whether to print fitting information.
v_weights: Optional vector of variable penalty weights. Overrides the adaptive SGL penalties if specified. Custom weights are multiplied internally by $\lambda\alpha$.
w_weights: Optional vector of group penalty weights. Overrides the adaptive SGL penalties if specified. Custom weights are multiplied internally by $\lambda(1-\alpha)$.
Fits DFR-aSGL models under a pathwise solution using Adaptive Three Operator Splitting (ATOS) (Pedregosa and Gidel (2018)), picking the 1se model as optimum. Warm starts are implemented.
A list containing:
all_models: A list of all the models fitted along the path.
fit: The chosen 1se model, which is an "sgl" object.
best_lambda: The value of $\lambda$ which generated the chosen model.
best_lambda_id: The path index for the chosen model.
errors: A table containing fitting information about the models on the path.
type: Indicates which type of regression was performed.
Feser, F., Evangelou, M. (2024). Dual feature reduction for the sparse-group lasso and its adaptive variant, https://arxiv.org/abs/2405.17094
Pedregosa, F., Gidel, G. (2018). Adaptive Three Operator Splitting, https://proceedings.mlr.press/v80/pedregosa18a.html
Other SGL-methods: dfr_adap_sgl(), dfr_sgl(), dfr_sgl.cv(), plot.sgl(), predict.sgl(), print.sgl()
# specify a grouping structure
groups = c(1, 1, 1, 2, 2, 3, 3, 3, 4, 4)
# generate data
data = sgs::gen_toy_data(p = 10, n = 5, groups = groups, seed_id = 3, group_sparsity = 1)
# run DFR-aSGL with cross-validation
cv_model = dfr_adap_sgl.cv(
  X = data$X, y = data$y, groups = groups, type = "linear",
  path_length = 5, nfolds = 5, alpha = 0.95, min_frac = 0.05,
  standardise = "l2", intercept = TRUE, verbose = TRUE
)
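The documented return fields can then be inspected directly; a brief sketch, assuming cv_model is the fit from the example above:

# inspect the cross-validation result (fields as documented above)
cv_model$best_lambda     # lambda value of the chosen 1se model
cv_model$best_lambda_id  # its position along the path
cv_model$errors          # fitting information for every model on the path
final_model <- cv_model$fit  # the chosen model, an "sgl" object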
Main fitting function for the sparse-group lasso (SGL) with DFR. Supports both linear and logistic regression, with dense and sparse matrix implementations.
dfr_sgl(
  X, y, groups, type = "linear", lambda = "path", alpha = 0.95,
  max_iter = 5000, backtracking = 0.7, max_iter_backtracking = 100,
  tol = 1e-05, standardise = "l2", intercept = TRUE, path_length = 20,
  min_frac = 0.05, screen = TRUE, verbose = FALSE
)
X: Input matrix of dimensions $n \times p$. Both dense and sparse (class "sparseMatrix") matrices are supported.
y: Output vector of dimension $n$. Should be continuous for type = "linear" and binary for type = "logistic".
groups: A grouping structure for the input data. Should take the form of a vector of group indices.
type: The type of regression to perform. Supported values are: "linear" and "logistic".
lambda: The regularisation parameter. Defines the level of sparsity in the model: a higher value leads to a sparser model. Either "path", which computes a path of path_length values, or a user-specified value or sequence.
alpha: The value of $\alpha$, which defines the convexity of the penalty: $\alpha = 1$ corresponds to the lasso and $\alpha = 0$ to the group lasso.
max_iter: Maximum number of ATOS iterations to perform.
backtracking: The backtracking parameter, $\tau$, as defined in Pedregosa and Gidel (2018).
max_iter_backtracking: Maximum number of backtracking line search iterations to perform per global iteration.
tol: Convergence tolerance for the stopping criteria.
standardise: Type of standardisation to perform on X: "l2" scales each column to unit $\ell_2$ norm, "l1" to unit $\ell_1$ norm, "sd" to unit standard deviation, and "none" applies no standardisation.
intercept: Logical flag for whether to fit an intercept.
path_length: The number of $\lambda$ values to fit the model for. Ignored if lambda is specified directly.
min_frac: Smallest value of $\lambda$ on the path, as a fraction of the largest value, $\lambda_{\max}$.
screen: Logical flag for whether to apply the DFR screening rules (see Feser and Evangelou (2024)).
verbose: Logical flag for whether to print fitting information.
dfr_sgl() fits a DFR-SGL model (Feser and Evangelou (2024)) using Adaptive Three Operator Splitting (ATOS) (Pedregosa and Gidel (2018)). It solves the convex optimisation problem given by (Simon et al. (2013))

$$\min_{\beta} \; f(\beta; y, X) + \lambda \alpha \left\|\beta\right\|_1 + \lambda (1 - \alpha) \sum_{g=1}^{m} \sqrt{p_g} \left\|\beta^{(g)}\right\|_2,$$

where $f(\cdot)$ is the loss function and $p_g$ are the group sizes. In the case of the linear model, the loss function is given by the mean-squared error loss:

$$f(\beta; y, X) = \frac{1}{2n} \left\|y - X\beta\right\|_2^2.$$

In the logistic model, the loss function is given by

$$f(\beta; y, X) = -\frac{1}{n} \log \mathcal{L}(\beta; y, X),$$

where the log-likelihood is given by

$$\log \mathcal{L}(\beta; y, X) = \sum_{i=1}^{n} \left\{ y_i x_i^\top \beta - \log\left(1 + \exp(x_i^\top \beta)\right) \right\}.$$
SGL can be seen to be a convex combination of the lasso and group lasso, balanced through alpha, such that it reduces to the lasso for alpha = 1 and to the group lasso for alpha = 0. By applying both the lasso and group lasso norms, SGL shrinks inactive groups to zero, as well as inactive variables in active groups.
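A minimal sketch of the two endpoints, using toy data as in the examples below and assuming the solver accepts the exact endpoint values (otherwise values near 0 and 1 behave similarly):

# alpha = 1 reduces SGL to the lasso; alpha = 0 to the group lasso
groups <- c(1, 1, 1, 2, 2, 3, 3, 3, 4, 4)
data <- sgs::gen_toy_data(p = 10, n = 5, groups = groups, seed_id = 3)
lasso_fit  <- dfr_sgl(X = data$X, y = data$y, groups = groups, alpha = 1)
glasso_fit <- dfr_sgl(X = data$X, y = data$y, groups = groups, alpha = 0)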
DFR uses the dual norm (the $\epsilon$-norm) and the KKT conditions to discard features at the next path value, $\lambda_{k+1}$, that are predicted, from the solution at $\lambda_k$, to be inactive. It applies two layers of screening: it first screens out any groups that satisfy

$$\left\|\nabla_g f(\hat\beta(\lambda_k))\right\|_{\epsilon'} \leq \tau_g\,(2\lambda_{k+1} - \lambda_k),$$

where $\tau_g$ is a group-level threshold depending on $\alpha$ and $p_g$, and then screens out any variables in the remaining groups that satisfy

$$\left|\nabla_i f(\hat\beta(\lambda_k))\right| \leq \alpha\,(2\lambda_{k+1} - \lambda_k),$$

leading to effective input dimensionality reduction. See Feser and Evangelou (2024) for the exact form of the rules and full details.
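Continuing the sketch above, the screen argument toggles these rules; because KKT violations are checked and corrected, screening should affect speed rather than the returned solution:

# screening changes the work done, not the fitted model
fit_screen   <- dfr_sgl(X = data$X, y = data$y, groups = groups, screen = TRUE)
fit_noscreen <- dfr_sgl(X = data$X, y = data$y, groups = groups, screen = FALSE)
fit_screen$screen_set_grp  # groups retained by the screening step at each lambda value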
A list containing:
beta: The fitted values from the regression. Taken to be the more stable fit between x and z, which is usually the former.
group_effects: The group values from the regression. Taken by applying the $\ell_2$ norm within each group on the fitted values.
selected_var: A list containing the indices of the active/selected variables for each $\lambda$ value.
selected_grp: A list containing the indices of the active/selected groups for each $\lambda$ value.
num_it: Number of iterations performed. If convergence is not reached, this will be max_iter.
success: Logical flag indicating whether ATOS converged, according to tol.
certificate: Final value of the convergence criterion.
x: The solution to the original problem (see Pedregosa and Gidel (2018)).
u: The solution to the dual problem (see Pedregosa and Gidel (2018)).
z: The updated values from applying the first proximal operator (see Pedregosa and Gidel (2018)).
screen_set_var: List of variables kept after the screening step for each $\lambda$ value.
screen_set_grp: List of groups kept after the screening step for each $\lambda$ value.
epsilon_set_var: List of variables used for fitting after screening for each $\lambda$ value.
epsilon_set_grp: List of groups used for fitting after screening for each $\lambda$ value.
kkt_violations_var: List of variables that violated the KKT conditions at each $\lambda$ value.
kkt_violations_grp: List of groups that violated the KKT conditions at each $\lambda$ value.
screen: Logical flag indicating whether screening was performed.
type: Indicates which type of regression was performed.
intercept: Logical flag indicating whether an intercept was fit.
lambda: Value(s) of $\lambda$ used to fit the model.
Feser, F., Evangelou, M. (2024). Dual feature reduction for the sparse-group lasso and its adaptive variant, https://arxiv.org/abs/2405.17094
Pedregosa, F., Gidel, G. (2018). Adaptive Three Operator Splitting, https://proceedings.mlr.press/v80/pedregosa18a.html
Simon, N., Friedman, J., Hastie, T., Tibshirani, R. (2013). A Sparse-Group Lasso, https://www.tandfonline.com/doi/abs/10.1080/10618600.2012.681250
Other SGL-methods: dfr_adap_sgl(), dfr_adap_sgl.cv(), dfr_sgl.cv(), plot.sgl(), predict.sgl(), print.sgl()
# specify a grouping structure
groups = c(1, 1, 1, 2, 2, 3, 3, 3, 4, 4)
# generate data
data = sgs::gen_toy_data(p = 10, n = 5, groups = groups, seed_id = 3, group_sparsity = 1)
# run DFR-SGL
model = dfr_sgl(
  X = data$X, y = data$y, groups = groups, type = "linear",
  path_length = 5, alpha = 0.95, standardise = "l2",
  intercept = TRUE, verbose = FALSE
)
Function to fit a pathwise solution of the sparse-group lasso (SGL) with DFR, using k-fold cross-validation. Supports both linear and logistic regression, with dense and sparse matrix implementations.
dfr_sgl.cv(
  X, y, groups, type = "linear", lambda = "path", path_length = 20,
  nfolds = 10, alpha = 0.95, backtracking = 0.7, max_iter = 5000,
  max_iter_backtracking = 100, tol = 1e-05, min_frac = 0.05,
  standardise = "l2", intercept = TRUE, error_criteria = "mse",
  screen = TRUE, verbose = FALSE
)
X: Input matrix of dimensions $n \times p$. Both dense and sparse (class "sparseMatrix") matrices are supported.
y: Output vector of dimension $n$. Should be continuous for type = "linear" and binary for type = "logistic".
groups: A grouping structure for the input data. Should take the form of a vector of group indices.
type: The type of regression to perform. Supported values are: "linear" and "logistic".
lambda: The regularisation parameter. Defines the level of sparsity in the model: a higher value leads to a sparser model. Either "path", which computes a path of path_length values, or a user-specified value or sequence.
path_length: The number of $\lambda$ values to fit the model for. Ignored if lambda is specified directly.
nfolds: The number of folds to use in cross-validation.
alpha: The value of $\alpha$, which defines the convexity of the penalty: $\alpha = 1$ corresponds to the lasso and $\alpha = 0$ to the group lasso.
backtracking: The backtracking parameter, $\tau$, as defined in Pedregosa and Gidel (2018).
max_iter: Maximum number of ATOS iterations to perform.
max_iter_backtracking: Maximum number of backtracking line search iterations to perform per global iteration.
tol: Convergence tolerance for the stopping criteria.
min_frac: Smallest value of $\lambda$ on the path, as a fraction of the largest value, $\lambda_{\max}$.
standardise: Type of standardisation to perform on X: "l2" scales each column to unit $\ell_2$ norm, "l1" to unit $\ell_1$ norm, "sd" to unit standard deviation, and "none" applies no standardisation.
intercept: Logical flag for whether to fit an intercept.
error_criteria: The criteria used to discriminate between models along the path. Supported values are: "mse" (mean squared error) and "misclass" (misclassification rate).
screen: Logical flag for whether to apply the DFR screening rules (see Feser and Evangelou (2024)).
verbose: Logical flag for whether to print fitting information.
Fits DFR-SGL models under a pathwise solution using Adaptive Three Operator Splitting (ATOS) (Pedregosa and Gidel (2018)), picking the 1se model as optimum. Warm starts are implemented.
A list containing:
all_models: A list of all the models fitted along the path.
fit: The chosen 1se model, which is an "sgl" object.
best_lambda: The value of $\lambda$ which generated the chosen model.
best_lambda_id: The path index for the chosen model.
errors: A table containing fitting information about the models on the path.
type: Indicates which type of regression was performed.
Feser, F., Evangelou, M. (2024). Dual feature reduction for the sparse-group lasso and its adaptive variant, https://arxiv.org/abs/2405.17094
Pedregosa, F., Gidel, G. (2018). Adaptive Three Operator Splitting, https://proceedings.mlr.press/v80/pedregosa18a.html
Other SGL-methods: dfr_adap_sgl(), dfr_adap_sgl.cv(), dfr_sgl(), plot.sgl(), predict.sgl(), print.sgl()
# specify a grouping structure
groups = c(1, 1, 1, 2, 2, 3, 3, 3, 4, 4)
# generate data
data = sgs::gen_toy_data(p = 10, n = 5, groups = groups, seed_id = 3, group_sparsity = 1)
# run DFR-SGL with cross-validation
cv_model = dfr_sgl.cv(
  X = data$X, y = data$y, groups = groups, type = "linear",
  path_length = 5, nfolds = 5, alpha = 0.95, min_frac = 0.05,
  standardise = "l2", intercept = TRUE, verbose = TRUE
)
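The chosen model can then be passed to predict.sgl(); a brief sketch, assuming cv_model is the fit from the example above:

# predict using the 1se model chosen by cross-validation
preds <- predict(cv_model$fit, x = data$X)
preds$response  # predicted responses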
"sgl"
, "sgl_cv"
.Plots the pathwise solution of a cross-validation fit, from a call to one of the following: dfr_sgl()
, dfr_sgl.cv()
, dfr_adap_sgl()
, dfr_adap_sgl.cv()
.
## S3 method for class 'sgl'
plot(x, how_many = 10, ...)
x: Object of one of the following classes: "sgl", "sgl_cv".
how_many: Defines how many predictors to plot. Plots the predictors in decreasing order of largest absolute value.
...: Further arguments passed to the base plot function.
A plot of the pathwise solution, showing the fitted values of the predictors along the $\lambda$ path.
See also: dfr_sgl(), dfr_sgl.cv(), dfr_adap_sgl(), dfr_adap_sgl.cv()
Other SGL-methods: dfr_adap_sgl(), dfr_adap_sgl.cv(), dfr_sgl(), dfr_sgl.cv(), predict.sgl(), print.sgl()
# specify a grouping structure
groups = c(1, 1, 2, 2, 3)
# generate data
data = sgs::gen_toy_data(p = 5, n = 4, groups = groups, seed_id = 3, signal_mean = 20, group_sparsity = 1)
# run DFR-SGL
model = dfr_sgl(
  X = data$X, y = data$y, groups = groups, type = "linear",
  path_length = 20, alpha = 0.95, min_frac = 0.05,
  standardise = "l2", intercept = TRUE, verbose = FALSE
)
plot(model, how_many = 10)
"sgl"
, "sgl_cv"
.Performs prediction from one of the following fits: dfr_sgl()
, dfr_sgl.cv()
, dfr_adap_sgl()
, dfr_adap_sgl.cv()
. The predictions are calculated for each "lambda"
value in the path.
## S3 method for class 'sgl'
predict(object, x, ...)
object: Object of one of the following classes: "sgl", "sgl_cv".
x: Input data to use for prediction.
...: Further arguments passed to the stats predict function.
A list containing:
response: The predicted response. In the logistic case, this represents the predicted class probabilities.
class: The predicted class assignments. Only returned if type = "logistic" in the model object.
See also: dfr_sgl(), dfr_sgl.cv(), dfr_adap_sgl(), dfr_adap_sgl.cv()
Other SGL-methods: dfr_adap_sgl(), dfr_adap_sgl.cv(), dfr_sgl(), dfr_sgl.cv(), plot.sgl(), print.sgl()
# specify a grouping structure
groups = c(1, 1, 1, 2, 2, 3, 3, 3, 4, 4)
# generate data
data = sgs::gen_toy_data(p = 10, n = 5, groups = groups, seed_id = 3, group_sparsity = 1)
# run DFR-SGL
model = dfr_sgl(
  X = data$X, y = data$y, groups = groups, type = "linear",
  lambda = 1, alpha = 0.95, standardise = "l2",
  intercept = TRUE, verbose = FALSE
)
# use predict function
model_predictions = predict(model, x = data$X)
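For a logistic fit, the returned list also contains class assignments. A hedged sketch, reusing the toy data above with a hypothetical binary recoding of y and an illustrative lambda value:

# recode y to binary for illustration (hypothetical recoding)
y_bin <- as.numeric(data$y > median(data$y))
log_model <- dfr_sgl(X = data$X, y = y_bin, groups = groups,
                     type = "logistic", lambda = 0.5)
log_preds <- predict(log_model, x = data$X)
log_preds$response  # predicted class probabilities
log_preds$class     # predicted class assignments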
"sgl"
, "sgl_cv"
.Prints out useful metric from a model fit.
## S3 method for class 'sgl'
print(x, ...)
x: Object of one of the following classes: "sgl", "sgl_cv".
...: Further arguments passed to the base print function.
A summary of the model fit(s).
See also: dfr_sgl(), dfr_sgl.cv(), dfr_adap_sgl(), dfr_adap_sgl.cv()
Other SGL-methods: dfr_adap_sgl(), dfr_adap_sgl.cv(), dfr_sgl(), dfr_sgl.cv(), plot.sgl(), predict.sgl()
# specify a grouping structure
groups = c(rep(1:20, each = 3), rep(21:40, each = 4), rep(41:60, each = 5),
           rep(61:80, each = 6), rep(81:100, each = 7))
# generate data
data = sgs::gen_toy_data(p = 500, n = 400, groups = groups, seed_id = 3)
# run DFR-SGL
model = dfr_sgl(
  X = data$X, y = data$y, groups = groups, type = "linear",
  lambda = 1, alpha = 0.95, standardise = "l2",
  intercept = TRUE, verbose = FALSE
)
# print model
print(model)