Package: fuser 1.0.1

Frank Dondelinger
fuser: Fused Lasso for High-Dimensional Regression over Groups
Enables high-dimensional penalized regression across heterogeneous subgroups. Fusion penalties are used to share information about the linear parameters across subgroups. The underlying model is described in detail in Dondelinger and Mukherjee (2017) <arXiv:1611.00953>.
Authors: Frank Dondelinger, Olivier Wilkinson
# Install 'fuser' in R:
install.packages('fuser', repos = 'https://cloud.r-project.org')
This package does not link to any GitHub/GitLab/R-Forge repository. No issue tracker or development information is available.
Last updated 7 years ago from 6cd4856235. Checks: 3 OK. Indexed: yes.
Target | Result | Latest binary |
---|---|---|
Doc / Vignettes | OK | Mar 11 2025 |
R-4.5-linux-x86_64 | OK | Mar 11 2025 |
R-4.4-linux-x86_64 | OK | Mar 11 2025 |
Exports: bigeigen, fusedL2DescentGLMNet, fusedLassoProximal, fusedLassoProximalIterationsTaken, generateBlockDiagonalMatrices
Dependencies: codetools, foreach, glmnet, irlba, iterators, lattice, Matrix, Rcpp, RcppEigen, RSpectra, shape, survival
Citation
To cite package ‘fuser’ in publications use:
Dondelinger F, Wilkinson O (2018). fuser: Fused Lasso for High-Dimensional Regression over Groups. R package version 1.0.1, https://CRAN.R-project.org/package=fuser.
Corresponding BibTeX entry:
@Manual{,
  title = {fuser: Fused Lasso for High-Dimensional Regression over Groups},
  author = {Frank Dondelinger and Olivier Wilkinson},
  year = {2018},
  note = {R package version 1.0.1},
  url = {https://CRAN.R-project.org/package=fuser},
}
Readme and manuals
fuser
Fused lasso for high-dimensional regression over groups. This package implements the model described in Dondelinger and Mukherjee (2017) <arXiv:1611.00953>.
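For orientation, the penalized objective has roughly the following form (a sketch based on the package description, not a verbatim statement from the paper), where the pairwise weights tau(k,k') correspond to the entries of the G matrix used in the example below:

\hat{\beta} = \arg\min_{\beta} \; \sum_{k=1}^{K} \lVert y_k - X_k \beta_k \rVert_2^2
            + \lambda \sum_{k=1}^{K} \lVert \beta_k \rVert_1
            + \gamma \sum_{k < k'} \tau_{k,k'} \, \lVert \beta_k - \beta_{k'} \rVert_q

Here q = 1 gives the L1 fusion penalty used by fusedLassoProximal, while the squared L2 norm gives the fusion penalty used by fusedL2DescentGLMNet.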
Installation
library('devtools')
install_github('FrankD/fuser')
Example
See also the included vignette.
library(fuser)
set.seed(123)
# Generate simple heterogeneous dataset
k = 4 # number of groups
p = 100 # number of covariates
n.group = 15 # number of samples per group
sigma = 0.05 # observation noise sd
groups = rep(1:k, each=n.group) # group indicators
# sparse linear coefficients
beta = matrix(0, p, k)
nonzero.ind = rbinom(p*k, 1, 0.025/k) # indicators for nonzero group-specific coefficients
nonzero.shared = rbinom(p, 1, 0.025) # indicators for nonzero shared coefficients
beta[which(nonzero.ind==1)] = rnorm(sum(nonzero.ind), 1, 0.25)
beta[which(nonzero.shared==1),] = rnorm(sum(nonzero.shared), -1, 0.25)
X = lapply(1:k, function(k.i) matrix(rnorm(n.group*p),n.group, p)) # covariates
y = sapply(1:k, function(k.i) X[[k.i]] %*% beta[,k.i] + rnorm(n.group, 0, sigma)) # response
X = do.call('rbind', X)
# Pairwise Fusion strength hyperparameters (tau(k,k'))
# Same for all pairs in this example
G = matrix(1, k, k)
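# (Sketch, not in the original example: individual entries of G encode
#  pair-specific fusion strengths, e.g. G[1,2] = G[2,1] = 0.5 would fuse
#  groups 1 and 2 only half as strongly as the remaining pairs.)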
# Use L1 fusion to estimate betas (with near-optimal sparsity and
# information sharing among groups)
beta.estimate = fusedLassoProximal(X, y, groups, lambda=0.001, tol=9e-5,
                                   gamma=0.001, G, intercept=FALSE,
                                   num.it=2000)
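# Sketch (not in the original README): check convergence and dimensions.
# fusedLassoProximalIterationsTaken() is assumed to take no arguments, as
# suggested by its help entry; it reports the iterations actually used.
n.iter = fusedLassoProximalIterationsTaken()
dim(beta.estimate) # with intercept=FALSE this is expected to be p x k, matching beta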
# Generate block diagonal matrices for L2 fusion approach
transformed.data = generateBlockDiagonalMatrices(X, y, groups, G)
# Use L2 fusion to estimate betas (with near-optimal information sharing among groups)
beta.estimate = fusedL2DescentGLMNet(transformed.data$X, transformed.data$X.fused,
                                     transformed.data$Y, groups, lambda=c(0,0.001,0.1,1),
                                     gamma=0.001)
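As a quick sanity check (not part of the original README), the fitted coefficients can be compared against the simulated beta. The snippet below assumes that fusedL2DescentGLMNet returns one p-by-k coefficient matrix per lambda value, stacked along a third dimension; inspect the object first, since that shape is an assumption here.
str(beta.estimate) # inspect the structure returned by fusedL2DescentGLMNet
if (length(dim(beta.estimate)) == 3) {
  est = beta.estimate[, , 2] # slice assumed to correspond to lambda = 0.001
  plot(as.vector(beta), as.vector(est),
       xlab = 'true coefficients', ylab = 'L2 fusion estimates')
}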
Help Manual
Help page | Topics |
---|---|
Big eigenvalue calculation | bigeigen |
Optimise the fused L2 model with glmnet (using transformed input data) | fusedL2DescentGLMNet |
Fused lasso optimisation with proximal-gradient method. (Chen et al. 2010) | fusedLassoProximal |
Following a call to fusedLassoProximal, returns the actual number of iterations taken. | fusedLassoProximalIterationsTaken |
Generate block diagonal matrices to allow for fused L2 optimization with glmnet. | generateBlockDiagonalMatrices |