Title: | Variable Selection in Sparse Multivariate GLARMA Models |
Description: | Performs variable selection in high-dimensional sparse GLARMA models. For further details we refer the reader to the paper Gomtsyan et al. (2022), <arXiv:2208.14721>. |
Authors: | Marina Gomtsyan |
Maintainer: | Marina Gomtsyan <[email protected]> |
License: | GPL-2 |
Version: | 1.0 |
Built: | 2024-11-02 06:28:29 UTC |
Source: | CRAN |
MultiGlarmaVarSel consists of four functions: "variable_selection.R", "grad_hess_L_gamma.R", "grad_hess_L_eta.R", and "NR_gamma.R". For further information on how to use these functions, we refer the reader to the vignette of the package.
Marina Gomtsyan
Maintainer: Marina Gomtsyan <[email protected]>
M. Gomtsyan et al. "Variable selection in sparse multivariate GLARMA models: Application to germination control by environment", arXiv:2208.14721
data(Y)
I = 3
J = 100
T = dim(Y)[2]
q = 1
# Design matrix: one indicator column per condition
X = matrix(0, nrow = (I*J), ncol = I)
for (i in 1:I) {
  X[((i-1)*J+1):(i*J), i] = rep(1, J)
}
# Initial gamma vector
gamma_0 = matrix(0, nrow = 1, ncol = q)
result = variable_selection(Y, X, gamma_0, k_max = 1, n_iter = 100,
                            method = "min", nb_rep_ss = 1000, threshold = 0.6)
estim_active = result$estim_active
eta_est = result$eta_est
gamma_est = result$gamma_est
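The example above calls variable_selection directly. For readers who want to see how the lower-level functions fit together, here is a rough sketch (reusing Y, X, I, J, T, q and gamma_0 from the example above): eta is initialised condition by condition with Poisson GLMs, gamma is then estimated with NR_gamma, and grad_hess_L_eta / grad_hess_L_gamma expose the gradients and Hessians used in the estimation. This is only an illustration; the internal workflow of variable_selection may differ.

# Initialise eta with a Poisson GLM fitted at each time point
eta_glm_mat_0 = matrix(0, ncol = T, nrow = I)
for (t in 1:T) {
  result_glm_0 = glm(Y[,t] ~ X - 1, family = poisson(link = 'log'))
  eta_glm_mat_0[,t] = as.numeric(result_glm_0$coefficients)
}
eta_0 = as.numeric(t(eta_glm_mat_0))
# Newton-Raphson estimate of gamma given the initial eta
gamma_NR = NR_gamma(Y, X, eta_0, gamma_0, I, J, n_iter = 100)
# Gradient and Hessian of the log-likelihood at these values
ge = grad_hess_L_eta(Y, X, eta_0, gamma_NR, I, J)
gg = grad_hess_L_gamma(Y, X, eta_0, gamma_NR, I, J)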
This function calculates the gradient and Hessian of the log-likelihood with respect to eta.
grad_hess_L_eta(Y, X, eta_vect, gamma, I, J)
Arguments:

Y | Observation matrix |
X | Design matrix |
eta_vect | Initial eta vector |
gamma | Initial gamma vector |
I | Number of conditions |
J | Number of replications |

Value:

grad_L_eta | Vector of the gradient of L with respect to eta |
hess_L_eta | Matrix of the Hessian of L with respect to eta |
Marina Gomtsyan
Maintainer: Marina Gomtsyan <[email protected]>
M. Gomtsyan et al. "Variable selection in sparse multivariate GLARMA models: Application to germination control by environment", arXiv:2208.14721
data(Y)
I = 3
J = 100
T = dim(Y)[2]
q = 1
X = matrix(0, nrow = (I*J), ncol = I)
for (i in 1:I) {
  X[((i-1)*J+1):(i*J), i] = rep(1, J)
}
gamma_0 = matrix(0, nrow = 1, ncol = q)
# Initialise eta with a Poisson GLM fitted at each time point
eta_glm_mat_0 = matrix(0, ncol = T, nrow = I)
for (t in 1:T) {
  result_glm_0 = glm(Y[,t] ~ X - 1, family = poisson(link = 'log'))
  eta_glm_mat_0[,t] = as.numeric(result_glm_0$coefficients)
}
eta_0 = round(as.numeric(t(eta_glm_mat_0)), digits = 6)
result = grad_hess_L_eta(Y, X, eta_0, gamma_0, I, J)
grad = result$grad_L_eta
Hessian = result$hess_L_eta
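As a quick follow-up, the returned objects can be inspected. Assuming the gradient comes back as a vector of the same length as eta_0 and the Hessian as a square matrix of matching dimension (which is what the names suggest, but is worth checking), one can run:

# Inspect the outputs of grad_hess_L_eta; the expected dimensions below are assumptions
length(grad)   # expected: length(eta_0), i.e. I*T
dim(Hessian)   # expected: length(eta_0) x length(eta_0)
str(result)    # named list with grad_L_eta and hess_L_eta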
This function calculates the gradient and Hessian of the log-likelihood with respect to gamma.
grad_hess_L_gamma(Y, X, eta, gamma, I, J)
Arguments:

Y | Observation matrix |
X | Design matrix |
eta | Initial eta vector |
gamma | Initial gamma vector |
I | Number of conditions |
J | Number of replications |

Value:

grad_L_gamma | Vector of the gradient of L with respect to gamma |
hess_L_gamma | Matrix of the Hessian of L with respect to gamma |
Marina Gomtsyan
Maintainer: Marina Gomtsyan <[email protected]>
M. Gomtsyan et al. "Variable selection in sparse multivariate GLARMA models: Application to germination control by environment", arXiv:2208.14721
data(Y)
I = 3
J = 100
T = dim(Y)[2]
q = 1
X = matrix(0, nrow = (I*J), ncol = I)
for (i in 1:I) {
  X[((i-1)*J+1):(i*J), i] = rep(1, J)
}
gamma_0 = matrix(0, nrow = 1, ncol = q)
eta_glm_mat_0 = matrix(0, ncol = T, nrow = I)
for (t in 1:T) {
  result_glm_0 = glm(Y[,t] ~ X - 1, family = poisson(link = 'log'))
  eta_glm_mat_0[,t] = as.numeric(result_glm_0$coefficients)
}
eta_0 = round(as.numeric(t(eta_glm_mat_0)), digits = 6)
result = grad_hess_L_gamma(Y, X, eta_0, gamma_0, I, J)
grad = result$grad_L_gamma
Hessian = result$hess_L_gamma
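These outputs are the ingredients of a Newton-Raphson update for gamma (see NR_gamma below). As a minimal sketch, and assuming the standard maximisation step gamma_new = gamma - Hessian^{-1} * gradient (NR_gamma itself may add safeguards), a single hand-rolled update looks like:

# One illustrative Newton-Raphson step for gamma
grad_vec = as.numeric(grad)        # gradient of L with respect to gamma
hess_mat = as.matrix(Hessian)      # Hessian of L with respect to gamma
gamma_1 = as.numeric(gamma_0) - solve(hess_mat, grad_vec)
gamma_1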
This function estimates gamma with the Newton-Raphson method.
NR_gamma(Y, X, eta, gamma, I, J, n_iter = 100)
Arguments:

Y | Observation matrix |
X | Design matrix |
eta | Initial eta vector |
gamma | Initial gamma vector |
I | Number of conditions |
J | Number of replications |
n_iter | Number of iterations of the algorithm. The default is 100 |

Value:

Estimated gamma vector.
Marina Gomtsyan
Maintainer: Marina Gomtsyan <[email protected]>
M. Gomtsyan et al. "Variable selection in sparse multivariate GLARMA models: Application to germination control by environment", arXiv:2208.14721
data(Y)
I = 3
J = 100
T = dim(Y)[2]
q = 1
X = matrix(0, nrow = (I*J), ncol = I)
for (i in 1:I) {
  X[((i-1)*J+1):(i*J), i] = rep(1, J)
}
gamma_0 = matrix(0, nrow = 1, ncol = q)
eta_glm_mat_0 = matrix(0, ncol = T, nrow = I)
for (t in 1:T) {
  result_glm_0 = glm(Y[,t] ~ X - 1, family = poisson(link = 'log'))
  eta_glm_mat_0[,t] = as.numeric(result_glm_0$coefficients)
}
eta_0 = round(as.numeric(t(eta_glm_mat_0)), digits = 6)
gamma_est = NR_gamma(Y, X, eta_0, gamma_0, I, J, n_iter = 100)
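At a Newton-Raphson optimum the gradient of the log-likelihood with respect to gamma should be close to zero, so a simple (hedged) convergence check is to re-evaluate grad_hess_L_gamma at the estimate:

# Re-evaluate the gradient at the Newton-Raphson estimate of gamma;
# if the iterations have converged, it should be close to zero
check = grad_hess_L_gamma(Y, X, eta_0, gamma_est, I, J)
check$grad_L_gamma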
This function performs variable selection and estimates new eta and gamma vectors.
variable_selection(Y, X, gamma, k_max = 1, n_iter = 100, method = "min", nb_rep_ss = 1000, threshold = 0.6)
Arguments:

Y | Observation matrix |
X | Design matrix |
gamma | Initial gamma vector |
k_max | Number of times the whole algorithm is repeated |
n_iter | Number of iterations of the Newton-Raphson algorithm |
method | Stability selection method: "min" or "cv". With "min" the smallest lambda is used; with "cv" lambda is chosen by cross-validation. The default is "min" |
nb_rep_ss | Number of replications in the stability selection step. The default is 1000 |
threshold | Threshold for stability selection. The default is 0.6 |
Value:

estim_active | Vector of estimated active coefficients |
eta_est | Vector of estimated eta values |
gamma_est | Vector of estimated gamma values |
Marina Gomtsyan
Maintainer: Marina Gomtsyan <[email protected]>
M. Gomtsyan et al. "Variable selection in sparse multivariate GLARMA models: Application to germination control by environment", arXiv:2208.14721
data(Y)
I = 3
J = 100
T = dim(Y)[2]
q = 1
X = matrix(0, nrow = (I*J), ncol = I)
for (i in 1:I) {
  X[((i-1)*J+1):(i*J), i] = rep(1, J)
}
gamma_0 = matrix(0, nrow = 1, ncol = q)
result = variable_selection(Y, X, gamma_0, k_max = 1, n_iter = 100,
                            method = "min", nb_rep_ss = 1000, threshold = 0.6)
estim_active = result$estim_active
eta_est = result$eta_est
gamma_est = result$gamma_est
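To relate the estimated eta values back to conditions and time points, one can undo the vectorisation used for the initial eta vector in the other examples (condition-wise blocks of length T). This assumes eta_est keeps that ordering, which should be checked against the vignette:

# Reshape eta_est into an I x T matrix: row i = condition i, column t = time point t
# (assumes eta_est has length I*T and follows the same ordering as the initial eta vector)
eta_est_mat = t(matrix(eta_est, nrow = T, ncol = I))
dim(eta_est_mat)   # expected: I x T
estim_active       # coefficients retained by stability selection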
An example of an observation matrix.
data("Y")
data("Y")
The format is: num [1:300, 1:15] 3 1 1 0 0 3 2 0 3 2 ...
M. Gomtsyan et al. "Variable selection in sparse multivariate GLARMA models: Application to germination control by environment", arXiv:2208.14721
data(Y)
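The dimensions of Y tie together the quantities used in the examples: rows index the I*J condition/replication pairs (here 3 * 100 = 300) and columns index the T time points (here 15). A minimal check:

data(Y)
dim(Y)             # 300 x 15
I = 3; J = 100
nrow(Y) == I * J   # rows = conditions x replications
T = dim(Y)[2]      # number of time points
T                  # 15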