Title: | Shared, Joint (Generalized) Frailty Models; Surrogate Endpoints |
---|---|
Description: | This R package fits the following classes of frailty models, using either penalized likelihood estimation on the hazard function or fully parametric estimation: 1) Shared frailty models (with gamma or log-normal frailty distribution) and the Cox proportional hazards model; clustered and recurrent survival times can be studied. 2) Additive frailty models for proportional hazards models with two correlated random effects (random intercept with random slope). 3) Nested frailty models for hierarchically clustered data (with 2 levels of clustering), including two iid gamma random effects. 4) Joint frailty models for the joint modelling of recurrent events with a terminal event, for clustered or non-clustered data; a joint frailty model for two semi-competing risks and clustered data is also proposed. 5) Joint general frailty models for the joint modelling of recurrent events with a terminal event, with two independent frailty terms. 6) Joint nested frailty models for the joint modelling of recurrent events with a terminal event, for hierarchically clustered data (with two levels of clustering), including two iid gamma random effects. 7) Multivariate joint frailty models for two types of recurrent events and a terminal event. 8) Joint models for longitudinal data and a terminal event. 9) Trivariate joint models for longitudinal data, recurrent events and a terminal event. 10) Joint frailty models for the validation of surrogate endpoints in multiple randomized clinical trials with failure-time and/or longitudinal endpoints, with the possibility of using a mediation analysis model. 11) Conditional and marginal two-part joint models for longitudinal semicontinuous data and a terminal event. 12) Joint frailty-copula models for the validation of surrogate endpoints in multiple randomized clinical trials with failure-time endpoints. 13) Generalized shared and joint frailty models for recurrent and terminal events; proportional hazards (PH), additive hazards (AH), proportional odds (PO) and probit models are available in a fully parametric framework, and for PH and AH models it is possible to consider time-varying coefficients and flexible semiparametric hazard functions. 14) Competing joint frailty models: a single type of recurrent event and two terminal events. Prediction values are available (for a terminal event or for a new recurrent event). Left-truncated data (not for joint models), right-censored data, interval-censored data (only for the Cox proportional hazards and shared frailty models) and strata are allowed. In each model, the random effects follow a gamma or normal distribution. Time-varying covariate effects can also be considered in Cox, shared and joint frailty models (1-5). The package includes concordance measures for Cox proportional hazards models and for shared frailty models. Moreover, the package can be used through its Shiny application, either locally or via the link below. |
Authors: | Virginie Rondeau [aut, cre] , Juan R. Gonzalez [aut], Yassin Mazroui [aut], Audrey Mauguen [aut], Amadou Diakite [aut], Alexandre Laurent [aut], Myriam Lopez [aut], Agnieszka Krol [aut], Casimir L. Sofeu [aut], Julien Dumerc [aut], Denis Rustand [aut], Jocelyn Chauvet [aut], Quentin Le Coent [aut], Romain Pierlot [aut], Lacey Etzkorn [aut], David Hill [cph], John Burkardt [cph], Alan Genz [cph], Ashwith J. Rego [cph] |
Maintainer: | Virginie Rondeau <[email protected]> |
License: | GPL (>= 2.0) |
Version: | 3.6.5 |
Built: | 2024-12-14 17:38:00 UTC |
Source: | CRAN |
Frailtypack fits several classes of frailty models, using either penalized likelihood estimation on the hazard function or parametric estimation.
The following classes of frailty models can be fitted with this R package:
1) A shared frailty model (with gamma or log-normal frailty distribution) and Cox proportional hazard model. Clustered and recurrent survival times can be studied.
2) Additive frailty models for proportional hazard models with two correlated random effects (intercept random effect with random slope).
3) Nested frailty models for hierarchically clustered data (with 2 levels of clustering) by including two iid gamma random effects.
4) Joint frailty models in the context of joint modelling of recurrent events with a terminal event, for clustered or non-clustered data. A joint frailty model for two semi-competing risks and clustered data is also proposed.
5) Joint general frailty models in the context of joint modelling of recurrent events with a terminal event, with two independent frailty terms.
6) Joint nested frailty models in the context of joint modelling of recurrent events with a terminal event, for hierarchically clustered data (with two levels of clustering), including two iid gamma random effects.
7) Multivariate joint frailty models for two types of recurrent events and a terminal event.
8) Joint models for longitudinal data and a terminal event.
9) Trivariate joint models for longitudinal data, recurrent events and a terminal event.
10) Joint frailty models for the validation of surrogate endpoints in multiple randomized clinical trials with failure-time and/or longitudinal endpoints, with the possibility of using a mediation analysis model.
11) Conditional and Marginal two-part joint models for longitudinal semicontinuous data and a terminal event.
12) Joint frailty-copula models for the validation of surrogate endpoints in multiple randomized clinical trials with failure-time endpoints.
13) Generalized shared and joint frailty models for recurrent and terminal events. Proportional hazards (PH), additive hazard (AH), proportional odds (PO) and probit models are available in a fully parametric framework.
14) Competing Joint Frailty Model: A single type of recurrent event and two terminal events.
The package includes concordance measures for Cox proportional hazards models and for shared frailty models. Time-varying covariate effects can also be considered in Cox, shared and joint frailty models (1-5). Some of the Fortran routines in the package can speed up computation by using OpenMP parallelization. Moreover, the package can be used through its Shiny application, either locally or via the link below. A minimal example of a shared frailty fit with concordance measures is sketched below.
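As a quick orientation, the sketch below fits a basic shared gamma frailty model on the readmission data shipped with the package and then computes the concordance measures mentioned above; it condenses the fuller frailtyPenal and Cmeasures examples given later in this documentation, and the smoothing parameters are illustrative only.

library(frailtypack)

data(readmission)

# shared frailty model for clustered readmission times
mod.sha <- frailtyPenal(Surv(time, event) ~ cluster(id) + dukes + charlson + chemo,
  data = readmission, n.knots = 10, kappa = 1, hazard = "Splines")

# between-group, within-group and overall concordance measures
Cmeasures(mod.sha)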
Package: | frailtypack |
Type: | Package |
Version: | 3.6.5 |
Date: | 2024-12-09 |
License: | GPL (>= 2.0) |
LazyLoad: | no |
Virginie Rondeau, Juan R. Gonzalez, Yassin Mazroui, Audrey Mauguen, Amadou Diakite, Alexandre Laurent, Myriam Lopez, Agnieszka Krol, Casimir L. Sofeu, Denis Rustand, Quentin Le Coent, Lacey Etzkorn and Romain Pierlot
V. Rondeau, Y. Mazroui and J. R. Gonzalez (2012). Frailtypack: An R package for the analysis of correlated survival data with frailty models using penalized likelihood estimation or parametric estimation. Journal of Statistical Software 47, 1-28.
Y. Mazroui, S. Mathoulin-Pelissier, P. Soubeyran and V. Rondeau (2012). General joint frailty model for recurrent event data with a dependent terminal event: Application to follicular lymphoma data. Statistics in Medicine, 31, 11-12, 1162-1176.
V. Rondeau and J. R. Gonzalez (2005). Frailtypack: A computer program for the analysis of correlated failure time data using penalized likelihood estimation. Computer Methods and Programs in Biomedicine 80, 2, 154-164.
V. Rondeau, S. Michiels, B. Liquet, and J. P. Pignon (2008). Investigating trial and treatment heterogeneity in an individual patient data meta-analysis of survival data by means of the maximum penalized likelihood approach. Statistics in Medicine, 27, 1894-1910.
V. Rondeau, S. Mathoulin-Pellissier, H. Jacqmin-Gadda, V. Brouste, P. Soubeyran (2007). Joint frailty models for recurring events and death using maximum penalized likelihood estimation: application on cancer events. Biostatistics, 8, 4, 708-721.
V. Rondeau, D. Commenges, and P. Joly (2003). Maximum penalized likelihood estimation in a gamma-frailty model. Lifetime Data Analysis 9, 139-153.
D. Marquardt (1963). An algorithm for least-squares estimation of nonlinear parameters. SIAM Journal on Applied Mathematics, 11, 431-441.
V. Rondeau, L. Filleul, P. Joly (2006). Nested frailty models using maximum penalized likelihood estimation. Statistics in Medicine, 25, 4036-4052.
## Not run: 

###--- Additive model with 1 covariate ---###

data(dataAdditive)

modAdd <- additivePenal(Surv(t1,t2,event) ~ cluster(group) + var1 + slope(var1),
  correlation = TRUE, data = dataAdditive,
  n.knots = 8, kappa = 10000, hazard = "Splines")

###--- Joint model (recurrent and terminal events) with 2 covariates ---###

data(readmission)

modJoint.gap <- frailtyPenal(Surv(time,event) ~ cluster(id) + sex + dukes +
  charlson + terminal(death),
  formula.terminalEvent = ~ sex + dukes + charlson,
  data = readmission, n.knots = 10, kappa = c(100,100),
  recurrentAG = FALSE, hazard = "Splines")

###--- General Joint model (recurrent and terminal events) with 2 covariates ---###

data(readmission)

modJoint.general <- frailtyPenal(Surv(time,event) ~ cluster(id) + dukes +
  charlson + sex + chemo + terminal(death),
  formula.terminalEvent = ~ dukes + charlson + sex + chemo,
  data = readmission, jointGeneral = TRUE, n.knots = 8,
  kappa = c(2.11e+08, 9.53e+11))

###--- Nested model (or hierarchical model) with 2 covariates ---###

data(dataNested)

modClu <- frailtyPenal(Surv(t1,t2,event) ~ cluster(group) + subcluster(subgroup) +
  cov1 + cov2, data = dataNested,
  n.knots = 8, kappa = 50000, hazard = "Splines")

###--- Joint Nested Frailty model ---###

#-- here a cluster variable is generated (30 clusters)
readmissionNested <- transform(readmission, group = id %% 30 + 1)

modJointNested_Splines <- frailtyPenal(formula = Surv(t.start, t.stop, event) ~
  subcluster(id) + cluster(group) + dukes + terminal(death),
  formula.terminalEvent = ~ dukes, data = readmissionNested,
  recurrentAG = TRUE, n.knots = 8, kappa = c(9.55e+9, 1.41e+12),
  initialize = TRUE)

modJointNested_Weib <- frailtyPenal(Surv(t.start,t.stop,event) ~ subcluster(id) +
  cluster(group) + dukes + terminal(death),
  formula.terminalEvent = ~ dukes, hazard = "Weibull",
  data = readmissionNested, recurrentAG = TRUE, initialize = FALSE)

# (the object name must be syntactically valid in R)
JoiNes.GapSpline <- frailtyPenal(formula = Surv(time, event) ~ subcluster(id) +
  cluster(group) + dukes + terminal(death),
  formula.terminalEvent = ~ dukes, data = readmissionNested,
  recurrentAG = FALSE, n.knots = 8, kappa = c(9.55e+9, 1.41e+12),
  initialize = TRUE, init.Alpha = 1.091, Ksi = "None")

###--- Semiparametric Shared model ---###

data(readmission)

sha.sp <- frailtyPenal(Surv(t.start,t.stop,event) ~ sex + dukes + charlson +
  cluster(id), data = readmission, n.knots = 6, kappa = 5000,
  recurrentAG = TRUE, cross.validation = TRUE, hazard = "Splines")

###--- Parametric Shared model ---###

data(readmission)

sha.p <- frailtyPenal(Surv(t.start,t.stop,event) ~ cluster(id) + sex + dukes +
  charlson, data = readmission, recurrentAG = TRUE,
  hazard = "Piecewise-per", nb.int = 6)

###--- Joint model for longitudinal ---###
###--- data and a terminal event ---###

data(colorectal)
data(colorectalLongi)

# Survival data preparation - only terminal events
colorectalSurv <- subset(colorectal, new.lesions == 0)

model.weib.RE <- longiPenal(Surv(time1, state) ~ age + treatment + who.PS +
  prev.resection, tumor.size ~ year * treatment + age + who.PS,
  colorectalSurv, data.Longi = colorectalLongi,
  random = c("1", "year"), id = "id", link = "Random-effects",
  left.censoring = -3.33, hazard = "Weibull")

###--- Trivariate joint model for longitudinal ---###
###--- data, recurrent and terminal events ---###

data(colorectal)
data(colorectalLongi)

# (computation takes around 40 minutes)
model.spli.RE.cal <- trivPenal(Surv(time0, time1, new.lesions) ~ cluster(id) +
  age + treatment + who.PS + terminal(state),
  formula.terminalEvent = ~ age + treatment + who.PS + prev.resection,
  tumor.size ~ year * treatment + age + who.PS,
  data = colorectal, data.Longi = colorectalLongi,
  random = c("1", "year"), id = "id",
  link = "Random-effects", left.censoring = -3.33, recurrentAG = TRUE,
  n.knots = 6, kappa = c(0.01, 2), method.GH = "Pseudo-adaptive", n.nodes = 7,
  init.B = c(-0.07, -0.13, -0.16, -0.17, 0.42, # recurrent events covariates
             -0.23, -0.1, -0.09, -0.12, 0.8, -0.23, # terminal event covariates
             3.02, -0.30, 0.05, -0.63, -0.02, -0.29, 0.11, 0.74)) # biomarker covariates

##--- Surrogacy evaluation based on generated data with a combination
##--- of Monte Carlo and classical Gaussian Hermite integration.
## (Computation takes around 5 minutes)

# Generation of data to use
data.sim <- jointSurrSimul(n.obs = 600, n.trial = 30, cens.adm = 549.24,
  alpha = 1.5, theta = 3.5, gamma = 2.5, zeta = 1, sigma.s = 0.7, sigma.t = 0.7,
  rsqrt = 0.8, betas = -1.25, betat = -1.25, full.data = 0,
  random.generator = 1, seed = 0, nb.reject.data = 0)

# Joint surrogate model estimation
joint.surro.sim.MCGH <- jointSurroPenal(data = data.sim, int.method = 2,
  nb.mc = 300, nb.gh = 20)

## End(Not run)
Fit an additive frailty model using a semiparametric penalized likelihood estimation or a parametric estimation. The main issue in a meta-analysis is how to account for the heterogeneity between trials and between the treatment effects across trials. Additive frailty models are proportional hazards models with two correlated random trial effects that act either multiplicatively on the hazard function or in interaction with the treatment, which makes them suitable, for instance, for meta-analyses or multicentric datasets. Right-censored data are allowed, but left-truncated data are not. A stratified analysis is possible (maximum number of strata = 2). This approach is different from the shared frailty models.
In an additive model, the hazard function for the $j$-th subject in the $i$-th trial, with random trial effect $u_i$ and random treatment-by-trial interaction $v_i$, is

$$\lambda_{ij}(t \mid u_i, v_i) = \lambda_0(t)\,\exp\!\Big(u_i + v_i X_{ij1} + \sum_{k=1}^{p} \beta_k X_{ijk}\Big),$$

where $\lambda_0(t)$ is the baseline hazard function, $\beta_k$ the fixed effect associated with the covariate $X_{ijk}$ ($k=1,\dots,p$), $\beta_1$ is the treatment effect and $X_{ij1}$ the treatment variable. The random effects are centred Gaussian, $u_i \sim N(0,\sigma^2)$ and $v_i \sim N(0,\tau^2)$, and $\rho$ is the corresponding correlation coefficient for the two frailty terms.
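As an illustration of this parameterization, the sketch below (using the simulated dataAdditive data described later in this documentation) declares the random intercept u_i through cluster() and the random treatment-by-trial interaction v_i through slope() on the treatment covariate var1; correlation = TRUE requests estimation of the correlation between the two frailty terms. The components accessed at the end are those listed in the Value section below, assuming the usual list-style access to the fitted object.

data(dataAdditive)

# u_i: random intercept per group; v_i: random slope on the treatment variable var1
mod.add <- additivePenal(Surv(t1, t2, event) ~ cluster(group) + var1 + slope(var1),
  correlation = TRUE,   # estimate the correlation between u_i and v_i
  data = dataAdditive, n.knots = 8, kappa = 10000, hazard = "Splines")

mod.add$sigma2   # variance of the random intercept u_i
mod.add$tau2     # variance of the random slope v_i
mod.add$rho      # correlation coefficient between the two frailty terms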
additivePenal(formula, data, correlation = FALSE, recurrentAG = FALSE,
  cross.validation = FALSE, n.knots, kappa, maxit = 350, hazard = "Splines",
  nb.int, LIMparam = 1e-4, LIMlogl = 1e-4, LIMderiv = 1e-3, print.times = TRUE)
formula |
a formula object, with the response on the left of a ~ operator and the terms on the right. The response must be a survival object as returned by the 'Surv' function, and the formula must contain a slope() term. |
data |
a 'data.frame' with the variables used in 'formula'. |
correlation |
Logical value. Are the random effects correlated? If so, the correlation coefficient is estimated. The default is FALSE. |
recurrentAG |
Always FALSE for additive models (left-truncated data are not allowed). |
cross.validation |
Logical value. Is cross validation procedure used for estimating smoothing parameter in the penalized likelihood estimation? If so a search of the smoothing parameter using cross validation is done, with kappa as the seed. The cross validation is not implemented for two strata. The default is FALSE. |
n.knots |
integer giving the number of knots to use. Value required in the penalized likelihood estimation. It corresponds to the (n.knots+2) spline functions used to approximate the hazard or the survival functions. The number of knots must be between 4 and 20. (See Note) |
kappa |
positive smoothing parameter used in the penalized likelihood estimation. In a stratified additive model, this argument must be a vector with one kappa per stratum. kappa is the coefficient of the integral of the squared second derivative of the hazard function in the fit. To obtain an initial value for kappa, one option is to fit the corresponding shared frailty model using cross-validation (see cross.validation). |
maxit |
maximum number of iterations for the Marquardt algorithm. Default is 350. |
hazard |
Type of hazard functions: "Splines" for semiparametric hazard functions with the penalized likelihood estimation, "Piecewise-per" for piecewise constant hazards functions using percentile, "Piecewise-equi" for piecewise constant hazard functions using equidistant intervals, "Weibull" for parametric Weibull functions. Default is "Splines". |
nb.int |
Number of intervals (between 1 and 20) for the parametric hazard functions ("Piecewise-per", "Piecewise-equi"). |
LIMparam |
Convergence threshold of the Marquardt algorithm for the parameters (see Details). Default is 1e-4. |
LIMlogl |
Convergence threshold of the Marquardt algorithm for the log-likelihood (see Details). Default is 1e-4. |
LIMderiv |
Convergence threshold of the Marquardt algorithm for the gradient (see Details). Default is 1e-3. |
print.times |
a logical parameter to print iteration process. Default is TRUE. |
The estimated parameters are obtained by maximizing the penalized log-likelihood, or the simple log-likelihood in the parametric case, using the robust Marquardt algorithm (Marquardt, 1963). The parameters are initialized with values obtained from a Cox proportional hazards model. The iterations are stopped when the difference between two consecutive log-likelihoods is small enough, the estimated coefficients are stable and the gradient is small enough (see the LIMlogl, LIMparam and LIMderiv arguments, with default thresholds 1e-4, 1e-4 and 1e-3). To ensure a positive hazard function at all stages of the algorithm, the spline coefficients are reparametrized to be positive at each stage. The variance space of the two random effects is reduced so that the variances are positive, and the correlation coefficient is constrained to lie between -1 and 1. The marginal log-likelihood depends on integrals that are approximated using a first-order Laplace approximation. The smoothing parameter can be fixed or estimated by maximizing the likelihood cross-validation criterion. The usual squared Wald statistic is modified to a mixture of two chi-squared distributions to obtain a significance test for the variance of the random effects.
INITIAL VALUES
The spline coefficients and the regression coefficients are initialized to 0.1. An adjusted Cox model is then fitted; it provides new initial values for the spline coefficients and the regression coefficients. The variances of the frailties are initialized to 0.1. An additive frailty model with independent frailties is then fitted. Finally, an additive frailty model with correlated frailties is fitted.
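If convergence is difficult with the defaults, the control arguments documented above can be adjusted; the sketch below only illustrates those arguments, and the values shown are arbitrary choices rather than recommendations.

data(dataAdditive)

# same additive model, with more iterations and looser convergence thresholds
mod.add.ctrl <- additivePenal(Surv(t1, t2, event) ~ cluster(group) + var1 + slope(var1),
  correlation = TRUE, data = dataAdditive,
  n.knots = 8, kappa = 10000, hazard = "Splines",
  maxit = 500,          # default is 350
  LIMparam = 1e-3,      # convergence threshold for the parameters (default 1e-4)
  LIMlogl = 1e-3,       # convergence threshold for the log-likelihood (default 1e-4)
  LIMderiv = 1e-2,      # convergence threshold for the gradient (default 1e-3)
  print.times = FALSE)  # do not print the iteration process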
An additive model or more generally an object of class 'additivePenal'. Methods defined for 'additivePenal' objects are provided for print, plot and summary.
b |
vector of the estimated spline coefficients, random effect variances and regression coefficients. |
call |
The code used for fitting the model. |
coef |
the regression coefficients. |
cov |
covariance between the two frailty terms, cov(u_i, v_i). |
cross.Val |
Logical value. Is cross validation procedure used for estimating the smoothing parameters in the penalized likelihood estimation? |
correlation |
Logical value. Are the random effects correlated? |
DoF |
degrees of freedom associated with the "kappa". |
formula |
the formula part of the code used for the model. |
groups |
the maximum number of groups used in the fit. |
kappa |
A vector with the smoothing parameters in the penalized likelihood estimation corresponding to each baseline function as components. |
loglikPenal |
the complete marginal penalized log-likelihood in the semiparametric case. |
loglik |
the marginal log-likelihood in the parametric case. |
n |
the number of observations used in the fit. |
n.events |
the number of events observed in the fit. |
n.iter |
number of iterations needed to converge. |
n.knots |
number of knots for estimating the baseline functions. |
n.strat |
number of strata. |
rho |
the corresponding correlation coefficient for the two frailty terms. |
sigma2 |
Variance for the random intercept (the random effect associated to the baseline hazard functions). |
tau2 |
Variance for the random slope (the random effect associated to the treatment effect across trials). |
varH |
the variance matrix of all parameters before the positivity constraint transformation (sigma2, tau2, the regression coefficients and the spline coefficients). The delta method is then needed to obtain the variances of the estimated parameters. |
varHIH |
the robust estimation of the variance matrix of all parameters (Sigma2, Tau2, the regression coefficients and the spline coefficients). |
varSigma2 |
The variance of the estimates of "sigma2". |
varTau2 |
The variance of the estimates of "tau2". |
varcov |
Variance of the estimates of "cov". |
x |
matrix of times where both survival and hazard functions are estimated. By default seq(0,max(time),length=99), where time is the vector of survival times. |
lam |
array (dim=3) of hazard estimates and confidence bands. |
surv |
array (dim=3) of baseline survival estimates and confidence bands. |
median |
The value of the median survival and its confidence bands. If there are two or more strata, the first value corresponds to the first stratum, etc. |
type.of.hazard |
Type of hazard functions (0:"Splines", "1:Piecewise", "2:Weibull"). |
type.of.Piecewise |
Type of Piecewise hazard functions (1:"percentile", 0:"equidistant"). |
nbintervR |
Number of intervals (between 1 and 20) for the parametric hazard functions ("Piecewise-per", "Piecewise-equi"). |
npar |
number of parameters. |
nvar |
number of explanatory variables. |
noVar |
indicator of explanatory variable. |
LCV |
the approximated likelihood cross-validation criterion in the semiparametric case, LCV = (1/n)(trace(H_pl^{-1} H) - l(.)), with H the converged Hessian matrix of the log-likelihood, H_pl the converged Hessian of the penalized log-likelihood and l(.) the full log-likelihood. |
AIC |
the Akaike information criterion in the parametric case, AIC = (1/n)(np - l(.)), with np the number of parameters and l(.) the full log-likelihood. |
n.knots.temp |
initial value for the number of knots. |
shape.weib |
shape parameter for the Weibull hazard function. |
scale.weib |
scale parameter for the Weibull hazard function. |
martingale.res |
martingale residuals for each cluster. |
frailty.pred |
empirical Bayes prediction of the first frailty term. |
frailty.pred2 |
empirical Bayes prediction of the second frailty term. |
linear.pred |
linear predictor: uses simply "Beta'X + u_i + v_i * X_1" in the additive Frailty models. |
global_chisq |
a vector with the values of each multivariate Wald test. |
dof_chisq |
a vector with the degree of freedom for each multivariate Wald test. |
global_chisq.test |
a binary variable equal to 0 when no multivariate Wald test is given, 1 otherwise. |
p.global_chisq |
a vector with the p_values for each global multivariate Wald test. |
names.factor |
Names of the "as.factor" variables. |
Xlevels |
vector of the values that factor might have taken. |
contrasts |
type of contrast for factor variable. |
beta_p.value |
p-values of the Wald test for the estimated regression coefficients. |
"kappa" and "n.knots" are the arguments that the user have to change if the fitted model does not converge. "n.knots" takes integer values between 4 and 20. But with n.knots=20, the model would take a long time to converge. So, usually, begin first with n.knots=7, and increase it step by step until it converges. "kappa" only takes positive values. So, choose a value for kappa (for instance 10000), and if it does not converge, multiply or divide this value by 10 or 5 until it converges.
V. Rondeau, Y. Mazroui and J. R. Gonzalez (2012). Frailtypack: An R package for the analysis of correlated survival data with frailty models using penalized likelihood estimation or parametric estimation. Journal of Statistical Software 47, 1-28.
V. Rondeau, S. Michiels, B. Liquet, and J. P. Pignon (2008). Investigating trial and treatment heterogeneity in an individual patient data meta-analysis of survival data by means of the maximum penalized likelihood approach. Statistics in Medicine, 27, 1894-1910.
###--- Additive model with 1 covariate ---###

data(dataAdditive)

modAdd <- additivePenal(Surv(t1,t2,event) ~ cluster(group) + var1 + slope(var1),
  correlation = TRUE, data = dataAdditive, n.knots = 8, kappa = 10000)

#-- Var1 is boolean as a treatment variable
An often-used data set for interval-censored data, described and given in full in Finkelstein and Wolfe (1985). It involves 94 breast cancer patients who were randomized to either radiation therapy with chemotherapy or radiation therapy alone. The outcome is time until the onset of breast retraction, which is interval-censored between the last clinic visit before the event was observed and the first visit when the event was observed. Patients without breast retraction were right-censored.
data(bcos)
A data frame with 94 observations and 3 variables:
left end point of the breast retraction interval
right end point of the breast retraction interval
type of treatment received
Finkelstein, D.M. and Wolfe, R.A. (1985). A semiparametric model for regression analysis of interval-censored failure time data. Biometrics 41, 731-740.
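Since interval-censored data are supported only for the Cox and shared frailty models (see the package description above), this data set can be analysed along the following lines. This is only a sketch: it assumes the SurvIC() response constructor provided by the package, assumes the columns of bcos are named left, right and treatment, and assumes that right-censored patients are recorded with equal interval end points (the event indicator below is derived from that assumption); check the frailtyPenal documentation for the exact interval-censoring syntax.

data(bcos)

# hypothetical event indicator: 1 if breast retraction was observed within
# the interval, 0 if the patient was right-censored (assumed coding, see above)
bcos$event <- as.numeric(bcos$left != bcos$right)

# Cox model with interval-censored data; a shared frailty model would add cluster()
cox.ic <- frailtyPenal(SurvIC(left, right, event) ~ treatment,
  data = bcos, n.knots = 8, kappa = 10000)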
This is a special function used in the context of models for grouped data. It identifies correlated groups of observations defined by the 'cluster' function and is used in a 'frailtyPenal' formula for fitting univariate and joint models.
cluster(x)
x |
A character, factor, or numeric variable which is supposed to indicate the variable group |
x |
A variable identified as a cluster |
data(readmission)

modSha <- frailtyPenal(Surv(time,event) ~ as.factor(dukes) + cluster(id),
  n.knots = 10, kappa = 10000, data = readmission, hazard = "Splines")

print(modSha)
Compute concordance probability estimates for Cox proportional hazards or shared frailty models in the case of grouped data (Mauguen et al. 2013). Concordance is given at different levels of comparison, taking into account the cluster membership: between-groups, within-groups and an overall measure, the latter being a weighted average of the previous two. The function can also compute the c-index (Harrell et al. 1996) at these three levels. It is possible to exclude tied pairs from the concordance estimation (otherwise, they account for 1/2).
Cmeasures(fitc, ties = 1, marginal = 0, cindex = 0, Nboot = 0, tau = 0, data.val)
fitc |
A frailtyPenal object, for a shared frailty model. If the fit is a Cox model, no clustering membership is taken into account and only marginal concordance probability estimation is provided; only an overall measure is given, where all patients are compared two by two. If a counting process formulation is used to perform the fit, with 't.start' and 't.stop', the gap times (t.stop - t.start) are used in the concordance estimation. |
ties |
Indicates if the tied pairs on prediction value must be included (ties=1) or excluded (ties=0) from the concordance estimation. Default is ties=1. When included, tied pairs account for 1/2 in the concordance. |
marginal |
Indicates if the concordance based on marginal predictions must be given (marginal=1) in addition to conditional ones or not (marginal=0). Marginal predictions do not include the frailty estimation in the linear predictor computation: "Beta'X" is used instead of "Beta'X + log z_i". Default is marginal=0. |
cindex |
Indicates if the c-index (Harrell et al. 1996) must be computed (cindex=1) in addition to the concordance probability estimation or not (cindex=0). C-index is also given at the three comparison levels (between, within and overall). Default is cindex=0. |
Nboot |
Number of bootstrap resamplings to compute standard-error of the concordances measures, as well as a percentile 95% confidence interval. Nboot=0 indicates no bootstrap procedure. Maximum admitted is 1000. Minimum admitted is 2. Default is 0. Resampling is done at the group level. If Cox model is used, resampling is done at individual level. |
tau |
Time used to limit the interval on which the concordance is estimated. Note that the survival function for the underlying censoring time distribution needs to be positive at tau. If tau=0, the maximum of the observed event times is used. Default is tau=0. |
data.val |
A dataframe. It is possible to specify a different dataset than the one used in the model input in the argument 'fitc'. This new dataset will be a validation population and the function will compute new concordance measures from the parameters estimated on the development population. In this case for conditional measures, the frailties are a posteriori predicted. The two datasets must have the same covariates with the same coding without missing data. |
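The data.val argument described above can be used to assess a model on a validation population; the sketch below illustrates this with an arbitrary split of the readmission data into development and validation sets (the split by id and the smoothing parameter are illustrative choices, not recommendations).

data(readmission)

dev <- subset(readmission, id <= 200)   # development set (arbitrary split)
val <- subset(readmission, id > 200)    # validation set with the same covariates

fit.dev <- frailtyPenal(Surv(time, event) ~ cluster(id) + dukes + charlson + chemo,
  data = dev, n.knots = 10, kappa = 1, hazard = "Splines")

# concordance measures on the validation population;
# the frailties are predicted a posteriori for the conditional measures
Cmeasures(fit.dev, data.val = val)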
call |
The shared frailty model evaluated. |
Frailty |
Logical value. Was model with frailties fitted. |
frequencies |
Numbers of patients, events and groups used to fit the model. |
Npairs |
Number of pairs of subjects, between-groups, within-groups and over all the population. If cindex=1, number of comparable (useable) pairs also available. |
Nboot |
Number of bootstrap resamplings required. |
ties |
A binary, indicating if the tied pairs on prediction were used to compute the concordance. |
CPEcond |
Values of Gonen & Heller's measure (conditional). If Nboot>0, give SE, the standard-error of the parameters evaluated by bootstrap, IC.low and IC.high, the lower and upper bounds of the percentile confidence interval evaluated by bootstrap (2.5% and 97.5% percentiles). |
Cunocond |
Values of Uno's measure (conditional). If Nboot>0, give SE, the standard-error of the parameters evaluated by bootstrap, IC.low and IC.high, the lower and upper bounds of the percentile confidence interval evaluated by bootstrap (2.5% and 97.5% percentiles). |
marginal |
A binary, indicating if the marginal values were computed. |
CPEmarg |
Values of Gonen & Heller's measure (marginal), if marginal=1. If Nboot>0, give SE, the standard-error of the parameters evaluated by bootstrap, IC.low and IC.high, the lower and upper bounds of the percentile confidence interval evaluated by bootstrap (2.5% and 97.5% percentiles). |
Cunomarg |
Values of Uno's measure (marginal), if marginal=1. If Nboot>0, give SE, the standard-error of the parameters evaluated by bootstrap, IC.low and IC.high, the lower and upper bounds of the percentile confidence interval evaluated by bootstrap (2.5% and 97.5% percentiles). |
cindex |
A binary, indicating if the c-indexes were computed. |
cindexcond |
Values of the C-index of Harrell (conditional). If Nboot>0, give SE, the standard-error of the parameters evaluated by bootstrap, IC.low and IC.high, the lower and upper bounds of the percentile confidence interval evaluated by bootstrap (2.5% and 97.5% percentiles). |
cindexmarg |
Values of the C-index of Harrell (marginal), if marginal=1. If Nboot>0, give SE, the standard-error of the parameters evaluated by bootstrap, IC.low and IC.high, the lower and upper bounds of the percentile confidence interval evaluated by bootstrap (2.5% and 97.5% percentiles). |
Mauguen, A., Collette, S., Pignon, J. P. and Rondeau, V. (2013). Concordance measures in shared frailty models: application to clustered data in cancer prognosis. Statistics in Medicine 32, 27, 4803-4820
Harrell, F.E. et al. (1996). Tutorial in biostatistics: multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Statistics in Medicine 15, 361-387.
Gonen, M., Heller, G. (2005). Concordance probability and discriminatory power in proportional hazards regression. Biometrika 92, 965-970.
#-- load data
data(readmission)

#-- a frailtypenal fit
fit <- frailtyPenal(Surv(time,event) ~ cluster(id) + dukes + charlson + chemo,
  data = readmission, cross.validation = FALSE,
  n.knots = 10, kappa = 1, hazard = "Splines")

#-- a Cmeasures call
fit.Cmeasures <- Cmeasures(fit)
fit.Cmeasures.noties <- Cmeasures(fit, ties = 0)
fit.Cmeasures.marginal <- Cmeasures(fit, marginal = 1)
fit.Cmeasures.cindex <- Cmeasures(fit, cindex = 1)

#-- a short summary
fit.Cmeasures
fit.Cmeasures.noties
fit.Cmeasures.marginal
fit.Cmeasures.cindex
150 randomly chosen patients from the follow-up of the FFCD 2000-05 multicenter phase III clinical trial, which originally included 410 patients with metastatic colorectal cancer randomized into two therapeutic strategies: combination and sequential. The dataset contains times of observed appearances of new lesions, censored by a terminal event (death or right-censoring), with baseline characteristics (treatment arm, age, WHO performance status and previous resection).
data(colorectal)
This data frame contains the following columns:
identification of each subject. Repeated for each recurrence
start of interval (0 or previous recurrence time)
recurrence or censoring time
Appearance of new lesions status. 0: censored or no event, 1: new lesions
Treatment arm to which the patient was allocated. 1: sequential (S); 2: combination (C)
Age at baseline: 1: <50 years, 2: 50-69 years, 3: >69 years
WHO performance status at baseline: 1: status 0, 2: status 1, 3: status 2
Previous resection of the primary tumor? 0: No, 1: Yes
death indicator. 0: alive, 1: dead
inter-occurrence time or censoring time
We thank the Federation Francophone de Cancerologie Digestive and Gustave Roussy for sharing the data of the FFCD 2000-05 trial supported by an unrestricted Grant from Sanofi.
M. Ducreux, D. Malka, J. Mendiboure, P.-L. Etienne, P. Texereau, D. Auby, P. Rougier, M. Gasmi, M. Castaing, M. Abbas, P. Michel, D. Gargot, A. Azzedine, C. Lombard-Bohas, P. Geoffroy, B. Denis, J.-P. Pignon, L. Bedenne, and O. Bouche (2011). Sequential versus combination chemotherapy for the treatment of advanced colorectal cancer (FFCD 2000-05): an open-label, randomised, phase 3 trial. The Lancet Oncology 12, 1032-44.
150 randomly chosen patients from the follow-up of the FFCD 2000-05 multicenter phase III clinical trial, which originally included 410 patients with metastatic colorectal cancer randomized into two therapeutic strategies: combination and sequential. The dataset contains measurements of tumor size (left-censored sums of the longest diameters of target lesions, transformed using Box-Cox) with baseline characteristics (treatment arm, age, WHO performance status and previous resection).
data(colorectalLongi)
This data frame contains the following columns:
identification of each subject. Repeated for each recurrence
time of visit counted in years from baseline
Individual longitudinal measurement of the transformed (Box-Cox with parameter 0.3) sums of the longest diameters, left-censored due to a detection limit (threshold s = -3.33).
Treatment arm to which the patient was allocated. 1: sequential (S); 2: combination (C)
Age at baseline: 1: <50 years, 2: 50-69 years, 3: >69 years
WHO performance status at baseline: 1: status 0, 2: status 1, 3: status 2
Previous resection of the primary tumor? 0: No, 1: Yes
We thank the Federation Francophone de Cancerologie Digestive and Gustave Roussy for sharing the data of the FFCD 2000-05 trial supported by an unrestricted Grant from Sanofi.
Ducreux, M., Malka, D., Mendiboure, J., Etienne, P.-L., Texereau, P., Auby, D., Rougier, P., Gasmi, M., Castaing, M., Abbas, M., Michel, P., Gargot, D., Azzedine, A., Lombard- Bohas, C., Geoffroy, P., Denis, B., Pignon, J.-P., Bedenne, L., and Bouche, O. (2011). Sequential versus combination chemotherapy for the treatment of advanced colorectal cancer (FFCD 2000-05): an open-label, randomised, phase 3 trial. The Lancet Oncology 12, 1032-44.
This data set contains simulated samples of 100 clusters with 100 subjects in each cluster, mimicking a collection of clinical trial databases. Two correlated centred Gaussian random effects are generated with the same variance, fixed at 0.3, and a covariance of -0.2. The regression coefficient is fixed at -0.11. The percentage of right-censored data is around 30 percent; censoring times are generated from a uniform distribution on [1,150]. The baseline hazard function is a simple Weibull.
data(dataAdditive)
This data frame contains the following columns:
identification variable
start of interval (=0, because left-truncated data are not allowed)
end of interval (death or censoring time)
censoring status (0: alive, 1: death), used as a censoring indicator
dichotomous covariate (=0 or 1,as a treatment variable)
dichotomous covariate (=0 or 1,as a treatment variable)
V. Rondeau, S. Michiels, B. Liquet, and J. P. Pignon (2008). Investigating trial and treatment heterogeneity in an individual patient data meta-analysis of survival data by means of the maximum penalized likelihood approach. Statistics in Medicine, 27, 1894-1910.
This data set contains a simulated sample of 800 subjects and 1652 observations. It can be used to illustrate how to fit a joint multivariate frailty model. Two correlated Gaussian random effects were generated with mean 0, variances 0.5 and a correlation coefficient equal to 0.5. The regression coefficients were fixed to 1. The three baseline hazard functions followed a Weibull distribution and right censoring was fixed at 5.
data(dataMultiv)
This data frame contains the following columns:
identification of patient
number of observation for a patient
start of interval
end of interval (death or censoring time)
recurrent event of type 1 status (0: no, 1: yes)
recurrent event of type 2 status (0: no, 1: yes)
censoring status (0:alive, 1:death)
dichotomous covariate (0,1)
dichotomous covariate (0,1)
dichotomous covariate (0,1)
time to event
This data set contains a simulated sample of 819 subjects and 1510 observations. It can be used to illustrate how to fit a joint frailty model for data from nested case-control studies.
data(dataNCC)
This data frame contains the following columns:
identification of patient
dichotomous covariate (0,1)
dichotomous covariate (0,1)
start of interval
end of interval (death or censoring time)
time to event
recurrent event status (0:no, 1:yes)
time of terminal event (death or right-censoring)
censoring status (0:alive, 1:death)
weights for NCC design
This data set contains a simulated sample of 400 observations arranged in 20 clusters of 4 subgroups with 5 subjects in each subgroup, giving two levels of grouping. It is useful to illustrate how to fit a nested model. Two independent gamma frailty parameters were generated, with a variance fixed at 0.1 for the cluster effect and at 0.5 for the subcluster effect. Independent survival times were generated from a simple Weibull baseline risk function. The percentage of censored data was around 30 per cent. The right-censoring variables were generated from a uniform distribution on [1,36] and a left-truncation variable was generated from a uniform distribution on [0,10]. Observations were included only if the survival time was greater than the truncation time.
data(dataNested)
This data frame contains the following columns:
group identification variable
subgroup identification variable
start of interval (0 or truncated time)
end of interval (death or censoring time)
censoring status (0: alive, 1: death)
dichotomous covariate (0,1)
dichotomous covariate (0,1)
V. Rondeau, L. Filleul, P. Joly (2006). Nested frailty models using maximum penalized likelihood estimation. Statistics in Medicine, 25, 4036-4052.
This dataset combines the data that were collected in four double-blind randomized clinical trials in advanced ovarian cancer. In these trials, the objective was to examine the efficacy of cyclophosphamide plus cisplatin (CP) versus cyclophosphamide plus adriamycin plus cisplatin (CAP) to treat advanced ovarian cancer. The candidate surrogate endpoint S is progression-free survival time, defined as the time (in years) from randomization to clinical progression of the disease or death. The true endpoint T is survival time, defined as the time (in years) from randomization to death from any cause.
data(dataOvarian)
This data frame contains the following columns:
The identification number of a patient
The center in which a patient was treated
The treatment indicator, coded as 0 = cyclophosphamide plus cisplatin (CP) and 1 = cyclophosphamide plus adriamycin plus cisplatin (CAP)
The candidate surrogate (progression-free survival)
Censoring indicator for progression-free survival
The true endpoint (survival time)
Censoring indicator for survival time
Ovarian cancer Meta-Analysis Project (1991). Cyclophosphamide plus cisplatin plus adriamycin versus Cyclophosphamide, doxorubicin, and cisplatin chemotherapy of ovarian carcinoma: A meta-analysis. Classic Papers and Current Comments, 3, 237-234.
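Because this data set follows the individual-patient-data layout used by the surrogate-endpoint models of the package, it can be passed to jointSurroPenal in the same way as the simulated data in the examples above. The call below is only a sketch: the integration settings are copied from that simulated-data example, all other arguments are left at their defaults, and the fit can be slow.

data(dataOvarian)

# joint surrogate model on the ovarian cancer meta-analysis
# (integration settings reused from the simulated-data example; illustrative only)
joint.surro.ovar <- jointSurroPenal(data = dataOvarian, int.method = 2,
  nb.mc = 300, nb.gh = 20)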
This function computes the difference of two EPOCE estimates (CVPOL and MPOL) and its 95% tracking interval between two joint models estimated using frailtyPenal, longiPenal or trivPenal. The difference in CVPOL is computed when the EPOCE was previously estimated on the same dataset as used for estimation (using an approximated cross-validation), and the difference in MPOL is computed when the EPOCE was previously estimated on an external dataset.
Diffepoce(epoce1, epoce2)
epoce1 |
a first object inheriting from class epoce. |
epoce2 |
a second object inheriting from class epoce. |
From the EPOCE estimates and the individual contributions to the prognostic observed log-likelihood obtained with the epoce function on the same dataset from two different estimated joint models, the difference of CVPOL (or MPOL) and its 95% tracking interval is computed. The 95% tracking interval is Delta(MPOL) +/- qnorm(0.975)*sqrt(VARIANCE) for an external dataset, and Delta(CVPOL) +/- qnorm(0.975)*sqrt(VARIANCE) for the dataset used in frailtyPenal, longiPenal or trivPenal, where Delta(CVPOL) (or Delta(MPOL)) is the difference of CVPOL (or MPOL) of the two joint models, and VARIANCE is the empirical variance of the difference of the individual contributions to the prognostic observed log-likelihoods of the two joint models.
The estimators of EPOCE from arguments epoce1 and epoce2 must have been computed on the same dataset and with the same pred.times. A small sketch of the tracking-interval computation is given below.
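The sketch below mirrors the tracking-interval formula above using the components returned by epoce (mpol, cvpol and IndivContrib, documented in the epoce section later on). It only illustrates the definition stated in the Details: the assumed shape of IndivContrib (a matrix of individual contributions with one column per prediction time) and the exact scaling of VARIANCE may differ from what Diffepoce implements internally, so use Diffepoce itself for actual analyses.

# epoce1 and epoce2 are two epoce objects computed on the same data and pred.times
delta <- epoce1$mpol - epoce2$mpol                          # or cvpol on the estimation data
diff.contrib <- epoce1$IndivContrib - epoce2$IndivContrib   # assumed: one column per time
VARIANCE <- apply(diff.contrib, 2, var)                     # empirical variance of the differences
TIinf <- delta - qnorm(0.975) * sqrt(VARIANCE)              # lower tracking bound
TIsup <- delta + qnorm(0.975) * sqrt(VARIANCE)              # upper tracking bound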
new.data |
a boolean which is FALSE if computation is done on the same data as for estimation, and TRUE otherwise |
pred.times |
time or vector of times used in the function |
DEPOCE |
the difference between the two MPOL or CVPOL for each time |
TIinf |
lower confidence band for the difference |
TIsup |
upper confidence band for the difference |
D. Commenges, B. Liquet, C. Proust-Lima (2012). Choice of prognostic estimators in joint models by estimating differences of expected conditional Kullback-Leibler risks. Biometrics, 68(2), 380-387.
## Not run: 

# Example for joint frailty models
data(readmission)

# first joint frailty model
joint1 <- frailtyPenal(Surv(t.start,t.stop,event) ~ cluster(id) + dukes +
  charlson + sex + chemo + terminal(death),
  formula.terminalEvent = ~ dukes + charlson + sex + chemo,
  data = readmission, n.knots = 8, kappa = c(2.11e+08, 9.53e+11),
  recurrentAG = TRUE)

# second joint frailty model without dukes nor charlson as covariates
joint2 <- frailtyPenal(Surv(t.start,t.stop,event) ~ cluster(id) + sex + chemo +
  terminal(death),
  formula.terminalEvent = ~ sex + chemo,
  data = readmission, n.knots = 8, kappa = c(2.11e+08, 9.53e+11),
  recurrentAG = TRUE)

temps <- c(200, 500, 800, 1100)

# computation of estimators of EPOCE for the two models
epoce1 <- epoce(joint1, temps)
epoce2 <- epoce(joint2, temps)

# computation of the difference
diff <- Diffepoce(epoce1, epoce2)
print(diff)
plot(diff)

# Example for joint models with a biomarker
data(colorectal)
data(colorectalLongi)

# Survival data preparation - only terminal events
colorectalSurv <- subset(colorectal, new.lesions == 0)

# first joint model for a biomarker and a terminal event
modLongi <- longiPenal(Surv(time0, time1, state) ~ age + treatment + who.PS,
  tumor.size ~ year*treatment + age + who.PS, colorectalSurv,
  data.Longi = colorectalLongi, random = c("1", "year"), id = "id",
  link = "Random-effects", left.censoring = -3.33, hazard = "Weibull",
  method.GH = "Pseudo-adaptive")

# second joint model for a biomarker, recurrent events and a terminal event
# (computation takes around 30 minutes)
modTriv <- model.weib.RE.gap <- trivPenal(Surv(gap.time, new.lesions) ~
  cluster(id) + age + treatment + who.PS + prev.resection + terminal(state),
  formula.terminalEvent = ~ age + treatment + who.PS + prev.resection,
  tumor.size ~ year * treatment + age + who.PS,
  data = colorectal, data.Longi = colorectalLongi,
  random = c("1", "year"), id = "id", link = "Random-effects",
  left.censoring = -3.33, recurrentAG = FALSE, hazard = "Weibull",
  method.GH = "Pseudo-adaptive", n.nodes = 7)

time <- c(1, 1.5, 2, 2.5)

# computation of estimators of EPOCE for the two models
epoce1 <- epoce(modLongi, time)
# (computation takes around 10 minutes)
epoce2 <- epoce(modTriv, time)

# computation of the difference
diff <- Diffepoce(epoce1, epoce2)
print(diff)
plot(diff)

## End(Not run)
This function computes estimators of the Expected Prognostic Observed Cross-Entropy (EPOCE) for evaluating the predictive accuracy of joint models fitted with frailtyPenal, longiPenal, trivPenal or trivPenalNL. On the same data as used for estimation of the joint model, this function computes both the Mean Prognosis Observed Loss (MPOL) and the Cross-Validated Prognosis Observed Loss (CVPOL), two estimators of EPOCE. The latter corrects the MPOL estimate for over-optimism by approximated cross-validation. On an external dataset, this function only computes MPOL.
epoce(fit, pred.times, newdata = NULL, newdata.Longi = NULL)
fit |
A jointPenal, longiPenal, trivPenal or trivPenalNL object. |
pred.times |
Time or vector of times to compute epoce. |
newdata |
Optional. In case of joint models obtained with frailtyPenal, longiPenal, trivPenal or trivPenalNL, a new dataset of recurrent event and/or survival data on which MPOL is computed. |
newdata.Longi |
Optional. In case of joint models obtained with longiPenal, trivPenal or trivPenalNL, a new dataset of longitudinal measurements on which MPOL is computed. |
data |
name of the data used to compute epoce |
new.data |
a boolean which is FALSE if computation is done on the same data as for estimation, and TRUE otherwise |
pred.times |
time or vector of times used in the function |
mpol |
values of MPOL for each pred.times |
cvpol |
values of CVPOL for each pred.times |
IndivContrib |
all the contributions to the log-likelihood for each pred.times |
AtRisk |
number of subjects still at risk for each pred.times |
D. Commenges, B. Liquet, C. Proust-Lima (2012). Choice of prognostic estimators in joint models by estimating differences of expected conditional Kullback-Leibler risks. Biometrics, 68(2), 380-387.
## Not run: 

########################################
#### EPOCE on a joint frailty model ####
########################################

data(readmission)

modJoint.gap <- frailtyPenal(Surv(t.start,t.stop,event) ~ cluster(id) + dukes +
  charlson + sex + chemo + terminal(death),
  formula.terminalEvent = ~ dukes + charlson + sex + chemo,
  data = readmission, n.knots = 8, kappa = c(2.11e+08, 9.53e+11),
  recurrentAG = TRUE)

# computation on the same dataset
temps <- c(200, 500, 800, 1100)
epoce <- epoce(modJoint.gap, temps)
print(epoce)
plot(epoce, type = "cvpol")

# computation on a new dataset
# here a sample of readmission with the first 50 subjects
s <- readmission[1:100,]
epoce <- epoce(modJoint.gap, temps, newdata = s)
print(epoce)
plot(epoce, type = "mpol")

#################################################
#### EPOCE on a joint model for a biomarker  ####
####         and a terminal event            ####
#################################################

data(colorectal)
data(colorectalLongi)

# Survival data preparation - only terminal events
colorectalSurv <- subset(colorectal, new.lesions == 0)

modLongi <- longiPenal(Surv(time0, time1, state) ~ age + treatment + who.PS,
  tumor.size ~ year*treatment + age + who.PS, colorectalSurv,
  data.Longi = colorectalLongi, random = c("1", "year"), id = "id",
  link = "Random-effects", left.censoring = -3.33, hazard = "Weibull",
  method.GH = "Pseudo-adaptive")

# computation on the same dataset
time <- c(1, 1.5, 2, 2.5)
epoce <- epoce(modLongi, time)
print(epoce)
plot(epoce, type = "cvpol")

# computation on a new dataset
# here a sample of colorectal data with the first 50 subjects
s <- subset(colorectal, new.lesions == 0 & id %in% 1:50)
s.Longi <- subset(colorectalLongi, id %in% 1:50)
epoce <- epoce(modLongi, time, newdata = s, newdata.Longi = s.Longi)
print(epoce)
plot(epoce, type = "mpol")

###################################################
#### EPOCE on a joint model for a biomarker, ######
#### recurrent events and a terminal event   ######
###################################################

data(colorectal)
data(colorectalLongi)

# Linear model for the biomarker
# (computation takes around 30 minutes)
model.trivPenalNL <- trivPenal(Surv(gap.time, new.lesions) ~ cluster(id) + age +
  treatment + who.PS + prev.resection + terminal(state),
  formula.terminalEvent = ~ age + treatment + who.PS + prev.resection,
  tumor.size ~ year * treatment + age + who.PS,
  data = colorectal, data.Longi = colorectalLongi,
  random = c("1", "year"), id = "id", link = "Random-effects",
  left.censoring = -3.33, recurrentAG = FALSE, hazard = "Weibull",
  method.GH = "Pseudo-adaptive", n.nodes = 7)

# computation on the same dataset
time <- c(1, 1.5, 2, 2.5)
# (computation takes around 10 minutes)
epoce <- epoce(model.trivPenalNL, time)
print(epoce)
plot(epoce, type = "cvpol")

# computation on a new dataset
# here a sample of colorectal data with the first 100 subjects
s <- subset(colorectal, id %in% 1:100)
s.Longi <- subset(colorectalLongi, id %in% 1:100)
# (computation takes around 10 minutes)
epoce <- epoce(model.trivPenalNL, time, newdata = s, newdata.Longi = s.Longi)
print(epoce)
plot(epoce, type = "mpol")

# Non-linear model for the biomarker
# No information on dose - creation of a dummy variable
colorectalLongi$dose <- 1

# (computation can take around 40 minutes)
model.trivPenalNL <- trivPenalNL(Surv(time0, time1, new.lesions) ~ cluster(id) +
  age + treatment + terminal(state),
  formula.terminalEvent = ~ age + treatment,
  biomarker = "tumor.size", formula.KG ~ 1, formula.KD ~ treatment,
  dose = "dose", time.biomarker = "year",
  data = colorectal, data.Longi = colorectalLongi,
  random = c("y0", "KG"), id = "id",
  init.B = c(-0.22, -0.16, -0.35, -0.19, 0.04, -0.41, 0.23),
  init.Alpha = 1.86, init.Eta = c(0.5, 0.57, 0.5, 2.34),
  init.Biomarker = c(1.24, 0.81, 1.07, -1.53),
  recurrentAG = TRUE, n.knots = 5, kappa = c(0.01, 2),
  method.GH = "Pseudo-adaptive")

# computation on the same dataset
time <- c(1, 1.5, 2, 2.5)
epoce <- epoce(model.trivPenalNL, time)

## End(Not run)
This is a special function used in the context of a multivariate frailty model with two types of recurrent events and a terminal event (e.g., a censoring variable related to both recurrent events). It contains the indicator of the recurrent event of type 2 (normally 0 = no event, 1 = event) and is used on the right-hand side of a formula of a 'multivPenal' object. Using event2() in a formula implies that a multivariate frailty model for two types of recurrent events and a terminal event is fitted.
event2(x)
x |
A numeric variable, coded 0/1: equal to 1 if the subject has experienced an event of type 2 and 0 otherwise. |
x |
the indicator of an event of type 2 |
Shared Frailty model
Fit a shared gamma or log-normal frailty model using a semiparametric penalized likelihood estimation or a parametric estimation of the hazard function. Left-truncated, right-censored data, interval-censored data and strata (up to 6 levels) are allowed. It allows one to obtain a non-parametric smooth estimate of the hazard and survival functions. This approach differs from the partial penalized likelihood approach of Therneau et al.
The hazard function, conditional on the frailty term ω_i, of a shared gamma frailty model for the j-th subject in the i-th group is:

λ_ij(t | ω_i) = ω_i λ_0(t) exp(β' X_ij), with ω_i ~ Γ(1/θ, 1/θ), E(ω_i) = 1, Var(ω_i) = θ,

where λ_0(t) is the baseline hazard function and β the vector of regression coefficients associated with the covariate vector X_ij of the j-th individual in the i-th group.

Otherwise, in case of a shared log-normal frailty model, we have for the j-th subject in the i-th group:

λ_ij(t | η_i) = λ_0(t) exp(η_i + β' X_ij), with η_i ~ N(0, σ²).
Time-varying effects of covariates can also be considered in the model; see the timedep function for more details.
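As a quick illustration, the sketch below fits a shared gamma frailty model and its log-normal counterpart on the kidney data; the values of n.knots and kappa mirror those used in the Examples section further down and are purely illustrative.

library(frailtypack)
data(kidney)

# Shared gamma frailty model: one frailty per patient (cluster(id)),
# baseline hazard approximated by penalized splines
mod.gamma <- frailtyPenal(Surv(time, status) ~ cluster(id) + sex + age,
                          data = kidney, n.knots = 12, kappa = 10000)

# Same model with a log-normal frailty distribution
mod.logn <- frailtyPenal(Surv(time, status) ~ cluster(id) + sex + age,
                         data = kidney, n.knots = 12, kappa = 10000,
                         RandDist = "LogN")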
Joint Frailty model
Fit a joint frailty model, with either a gamma or a log-normal frailty distribution, for recurrent and terminal events using a penalized likelihood estimation on the hazard function or a parametric estimation. Right-censored data and strata (up to 6 levels) for the recurrent event part are allowed. Left-truncated data are not allowed. Joint frailty models allow studying, jointly, the survival processes of recurrent and terminal events, by considering the terminal event as informative censoring.
There are two kinds of joint frailty models that can be fitted with frailtyPenal:
- The first one (Rondeau et al. 2007) includes a common frailty term ω_i for the individuals i, shared by the two rates, which takes into account the heterogeneity in the data associated with unobserved covariates. The frailty term acts differently on the two rates (ω_i for the recurrent rate and ω_i^α for the death rate). The covariates can be different for the recurrent rate and the death rate.

For the j-th recurrence (j = 1, ..., n_i) and the i-th subject (i = 1, ..., G), the joint gamma frailty model for the recurrent event hazard function r_ij(.) and the death rate λ_i is:

r_ij(t | ω_i) = ω_i r_0(t) exp(β_1' Z_i(t))
λ_i(t | ω_i) = ω_i^α λ_0(t) exp(β_2' Z_i(t))

where r_0(t) (resp. λ_0(t)) is the recurrent (resp. terminal) event baseline hazard function, β_1 (resp. β_2) the regression coefficient vector, and Z_i(t) the covariate vector. The random frailties ω_i ~ Γ(1/θ, 1/θ) are iid.

The joint log-normal frailty model is:

r_ij(t | η_i) = r_0(t) exp(η_i + β_1' Z_i(t))
λ_i(t | η_i) = λ_0(t) exp(α η_i + β_2' Z_i(t)), with η_i ~ N(0, σ²).
- The second one (Rondeau et al. 2011) is quite similar, but the frailty term is common to the individuals from the same group. This model is useful for the joint modelling of two clustered survival outcomes and has been developed for clustered semi-competing events. The follow-up of each of the two competing outcomes stops when the event occurs. In this case, j indexes the subject and i the cluster.
It should be noted that in these models it is not recommended to include the α parameter, as there is not enough information to estimate it and convergence problems may arise.
In case of a log-normal distribution of the frailties, the frailty again enters on the log scale, with a normally distributed random effect η_i ~ N(0, σ²) shared by all subjects of cluster i.
This joint frailty model can also be applied to clustered recurrent events and a terminal event (example on "readmission" data below).
Time-varying effects of covariates can also be considered in the model; see the timedep function for more details.
There is a possibility to use a weighted penalized maximum likelihood approach for nested case-control design, in which risk set sampling is performed based on a single outcome (Jazic et al., Submitted).
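A minimal sketch of a joint frailty fit on the readmission data (calendar timescale); the smoothing parameters are those used in the Examples section and are only illustrative.

data(readmission)
# Joint gamma frailty model: recurrent readmissions and death,
# counting-process (calendar time) formulation
mod.joint <- frailtyPenal(Surv(t.start, t.stop, event) ~ cluster(id) +
    sex + dukes + charlson + terminal(death),
  formula.terminalEvent = ~ sex + dukes + charlson,
  data = readmission, recurrentAG = TRUE,
  n.knots = 10, kappa = c(9.55e9, 1.41e12))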
General Joint Frailty model
Fit a general joint frailty model for recurrent and terminal events considering two independent frailty terms. The frailty term u_i represents the unobserved association between recurrences and death. The frailty term v_i is specific to the recurrent event rate. Thus, the general joint frailty model is:

r_ij(t | u_i, v_i) = u_i v_i r_0(t) exp(β_1' X_ij(t))
λ_i(t | u_i) = u_i λ_0(t) exp(β_2' X_i(t))

where the random effects u_i ~ Γ(1/θ, 1/θ) and v_i ~ Γ(1/η, 1/η) are independent from each other. The joint model is fitted using a penalized likelihood estimation on the hazard. Right-censored data and time-varying covariates X_i(t) are allowed.
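A minimal sketch of a general joint frailty fit (two independent frailties), obtained by setting jointGeneral = TRUE; the tuning values are taken from the Examples section and are illustrative.

data(readmission)
mod.general <- frailtyPenal(Surv(time, event) ~ cluster(id) + dukes +
    charlson + sex + chemo + terminal(death),
  formula.terminalEvent = ~ dukes + charlson + sex + chemo,
  data = readmission, jointGeneral = TRUE,
  n.knots = 8, kappa = c(2.11e+08, 9.53e+11))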
Nested Frailty model
Data should be ordered according to cluster and subcluster
Fit a nested frailty model using a Penalized Likelihood on the hazard
function or using a parametric estimation. Nested frailty models allow
survival studies for hierarchically clustered data by including two iid
gamma random effects. Left-truncated and right-censored data are allowed.
Stratification analysis is allowed (maximum of strata = 2).
The hazard function, conditional on the two frailties v_i and w_ij, for the k-th individual of the j-th subgroup of the i-th group is:

λ_ijk(t | v_i, w_ij) = v_i w_ij λ_0(t) exp(β' X_ijk)

where λ_0(t) is the baseline hazard function, X_ijk denotes the covariate vector and β the corresponding vector of regression parameters.
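A minimal sketch of a nested frailty fit on the dataNested data (remember that the data must be ordered by cluster and subcluster); the settings mirror the Examples section.

data(dataNested)
# Two iid gamma frailties: one per group, one per subgroup within group
mod.nested <- frailtyPenal(Surv(t1, t2, event) ~ cluster(group) +
    subcluster(subgroup) + cov1 + cov2,
  data = dataNested, n.knots = 8, kappa = 50000)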
Joint Nested Frailty Model
Fit a joint model for recurrent and terminal events using a penalized likelihood on the hazard functions or a parametric estimation. Right-censored data are allowed but left-truncated data and stratified analysis are not allowed.
Joint nested frailty models allow studying, jointly, survival processes of recurrent and terminal events for hierarchically clustered data, by considering the terminal event as an informative censoring and by including two iid gamma random effects.
The joint nested frailty model includes two shared frailty terms, one for the subgroup (u_fi) and one for the group (w_f), in the hazard functions. These random effects account for the heterogeneity in the data associated with unobserved covariates. The frailty terms act differently on the two rates (u_fi and w_f^ξ for the recurrent rate; u_fi^α and w_f for the terminal event rate). The covariates can be different for the recurrent rate and the death rate.

For the j-th recurrence (j = 1, ..., n_fi) of the i-th individual (i = 1, ..., m_f) of the f-th group (f = 1, ..., n), the joint nested gamma frailty model for the recurrent event hazard function r_fij(.) and the terminal event hazard function λ_fi(.) is:

r_fij(t | w_f, u_fi) = u_fi w_f^ξ r_0(t) exp(β' X_fij(t))
λ_fi(t | w_f, u_fi) = u_fi^α w_f λ_0(t) exp(γ' X_fi(t))

where r_0(t) (resp. λ_0(t)) is the recurrent (resp. terminal) event baseline hazard function, β (resp. γ) the regression coefficient vector, and X_fij(t) the covariate vector. The random effects are w_f ~ Γ(1/η, 1/η) and u_fi ~ Γ(1/θ, 1/θ).
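A minimal sketch of a joint nested frailty fit, reproducing the setup of the Examples section, in which a group variable is created on top of the readmission data.

data(readmission)
readmissionNested <- transform(readmission, group = id %% 30 + 1)
# subcluster(id) nested within cluster(group); initialize = TRUE first fits
# a joint frailty model to obtain starting values
mod.jointnested <- frailtyPenal(Surv(t.start, t.stop, event) ~
    subcluster(id) + cluster(group) + dukes + terminal(death),
  formula.terminalEvent = ~ dukes, data = readmissionNested,
  recurrentAG = TRUE, n.knots = 8, kappa = c(9.55e+9, 1.41e+12),
  initialize = TRUE)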
frailtyPenal(formula, formula.terminalEvent, data, recurrentAG = FALSE,
  cross.validation = FALSE, jointGeneral, n.knots, kappa, maxit = 300,
  hazard = "Splines", nb.int, RandDist = "Gamma", nb.gh, nb.gl,
  betaknots = 1, betaorder = 3, initialize = TRUE, init.B, init.Theta,
  init.Alpha, Alpha, init.Ksi, Ksi, init.Eta, LIMparam = 1e-3,
  LIMlogl = 1e-3, LIMderiv = 1e-3, print.times = TRUE)
formula |
a formula object, with the response on the left of a ~ operator and the terms on the right. The response must be a survival object as returned by the 'Surv' function. |
formula.terminalEvent |
only for joint and joint nested frailty models : a formula object, only requires terms on the right to indicate which variables are modelling the terminal event. Interactions are possible using * or :. |
data |
a 'data.frame' with the variables used in 'formula'. |
recurrentAG |
Logical value. If TRUE, the recurrent event times are handled with the counting process approach of Andersen and Gill (calendar timescale), which allows dealing with time-dependent covariates. The default is FALSE. |
cross.validation |
Logical value. If TRUE, a cross-validation procedure is used to estimate the smoothing parameter in the penalized likelihood estimation, with kappa as the seed. Cross validation is not implemented for several strata, nor for interval-censored data. The cross validation has been implemented for a Cox proportional hazard model, with no covariates. The default is FALSE. |
jointGeneral |
Logical value. Does the model include two independent
random effects? If so, this will fit a general joint frailty model with an
association between the recurrent events and a terminal event (explained by
the variance |
n.knots |
integer giving the number of knots to use. Value required in the penalized likelihood estimation. It corresponds to the (n.knots+2) splines functions for the approximation of the hazard or the survival functions. We estimate I- or M-splines of order 4. When the user sets the number of knots equal to k (n.knots=k), the number of interior knots is (k-2) and the number of splines is (k-2)+order. The number of knots must be between 4 and 20. (See Note) |
kappa |
positive smoothing parameter in the penalized likelihood estimation. In a stratified shared model, this argument must be a vector with one kappa per stratum. In a stratified joint model, this argument must be a vector with one kappa per stratum for the recurrent events plus one kappa for the terminal event. The coefficient kappa multiplies the integral of the squared second derivative of the hazard function in the fit (penalized log-likelihood). To obtain an initial value for kappa, a solution is to fit the corresponding Cox model using cross validation (see cross.validation). |
maxit |
maximum number of iterations for the Marquardt algorithm. Default is 300 |
hazard |
Type of hazard functions: "Splines" for semiparametric hazard
functions using equidistant intervals or "Splines-per" using percentile with
the penalized likelihood estimation, "Piecewise-per" for piecewise constant
hazard function using percentile (not available for interval-censored data),
"Piecewise-equi" for piecewise constant hazard function using equidistant
intervals, "Weibull" for parametric Weibull functions. Default is "Splines".
In case of |
nb.int |
Number of time intervals (between 1 and 20) for the parametric hazard functions ("Piecewise-per", "Piecewise-equi"). In a joint model, a number of time intervals must be specified for both the recurrent hazard function and the death hazard function (vector of length 2). |
RandDist |
Type of random effect distribution: "Gamma" for a gamma
distribution, "LogN" for a log-normal distribution. Default is "Gamma". Not
implemented for nested model. If |
nb.gh |
Number of nodes for the Gaussian-Hermite quadrature. It can be chosen among 5, 7, 9, 12, 15, 20 and 32. The default is 20 if hazard = "Splines", 32 otherwise. |
nb.gl |
Number of nodes for the Gaussian-Laguerre quadrature. It can be chosen between 20 and 32. The default is 20 if hazard = "Splines", 32 otherwise. |
betaknots |
Number of inner knots used for the estimation of B-splines. Default is 1. See 'timedep' function for more details. Not implemented for nested and joint nested frailty models. |
betaorder |
Order of the B-splines. Default is cubic B-splines (order = 3). See 'timedep' function for more details. Not implemented for nested and joint nested frailty models. |
initialize |
Logical value, only for joint nested frailty models. If TRUE (the default), the program first fits a joint frailty model to provide initial values for the joint nested model (see Details). |
init.B |
A vector of initial values for regression coefficients. This vector should be of the same size as the whole vector of covariates with the first elements for the covariates related to the recurrent events and then to the terminal event (interactions in the end of each component). Default is 0.1 for each (for Cox and shared model) or 0.5 (for joint and joint nested frailty models). |
init.Theta |
Initial value for variance of the frailties. |
init.Alpha |
Only for joint and joint nested frailty models : initial value for parameter alpha. |
Alpha |
Only for joint and joint nested frailty model : input "None" so as to fit a joint model without the parameter alpha. |
init.Ksi |
Only for the joint nested frailty model: initial value for the power coefficient ksi. |
Ksi |
Only for the joint nested frailty model: input "None" so as to fit a joint nested model without the parameter ksi. |
init.Eta |
Only for general joint and joint nested frailty models: initial value for the variance eta of the second frailty term. |
LIMparam |
Convergence threshold of the Marquardt algorithm for the parameters (see Details). The default is 1e-3. |
LIMlogl |
Convergence threshold of the Marquardt algorithm for the log-likelihood (see Details). The default is 1e-3. |
LIMderiv |
Convergence threshold of the Marquardt algorithm for the gradient (see Details). The default is 1e-3. |
print.times |
a logical parameter to print iteration process. Default is TRUE. |
Typical usages are for a Cox model
frailtyPenal(Surv(time,event)~var1+var2, data, ...)
for a shared model
frailtyPenal(Surv(time,event)~cluster(group)+var1+var2, data, ...)
for a joint model
frailtyPenal(Surv(time,event)~cluster(group)+var1+var2+ var3+terminal(death), formula.terminalEvent=~ var1+var4, data, ...)
for a joint model for clustered data
frailtyPenal(Surv(time,event)~cluster(group)+num.id(group2)+ var1+var2+var3+terminal(death), formula.terminalEvent=~var1+var4, data, ...)
for a joint model for data from nested case-control studies
frailtyPenal(Surv(time,event)~cluster(group)+num.id(group2)+ var1+var2+var3+terminal(death)+wts(wts.ncc), formula.terminalEvent=~var1+var4, data, ...)
for a nested model
frailtyPenal(Surv(time,event)~cluster(group)+subcluster(sbgroup)+ var1+var2, data, ...)
for a joint nested frailty model
frailtyPenal(Surv(time,event)~cluster(group)+subcluster(sbgroup)+ var1+var2+terminal(death), formula.terminalEvent=~var1+var4, data, ...)
The estimated parameters are obtained using the robust Marquardt algorithm (Marquardt, 1963), which is a combination of a Newton-Raphson algorithm and a steepest descent algorithm. The iterations are stopped when the difference between two consecutive log-likelihoods is small (<10^-3), the estimated coefficients are stable (consecutive differences <10^-3), and the gradient is small enough (<10^-3). When the frailty parameter is small, numerical problems may arise. To solve this problem, an alternative formula of the penalized log-likelihood is used (see Rondeau, 2003 for further details). Cubic M-splines of order 4 are used for the hazard function, and I-splines (integrated M-splines) are used for the cumulative hazard function.
The inverse of the Hessian matrix is the variance estimator. To deal with the positivity constraint of the variance component and of the spline coefficients, a squared transformation is used and the standard errors are computed by the delta-method (Knight & Xekalaki, 2000). The smoothing parameter can be chosen by maximizing a likelihood cross-validation criterion (Joly and others, 1998). The integrations in the full log-likelihood are evaluated using Gaussian quadrature. Laguerre polynomials with 20 points are used to treat the integrations on [0, +∞).
INITIAL VALUES
The splines and the regression coefficients are initialized to 0.1. In case of a shared model, the program first fits an adjusted Cox model to give new initial values for the splines and the regression coefficients. The variance of the frailty term is initialized to 0.1. Then, a shared frailty model is fitted.
In case of a joint frailty model, the splines and the regression coefficients are initialized to 0.5. The program fits an adjusted Cox model to obtain new initial values for the regression and the spline coefficients. The variance of the frailty term (θ) and the coefficient (α) associated with the frailty in the death hazard function are initialized to 1. Then, it fits a joint frailty model.
In case of a general joint frailty model, the jointGeneral logical argument must be set to TRUE.
In case of a nested model, the program fits an adjusted Cox model to provide new initial values for the regression and the splines coefficients. The variances of the frailties are initialized to 0.1. Then, a shared frailty model with covariates with only subgroup frailty is fitted to give a new initial value for the variance of the subgroup frailty term. Then, a shared frailty model with covariates and only group frailty terms is fitted to give a new initial value for the variance of the group frailties. In a last step, a nested frailty model is fitted.
In case of a joint nested model, the splines and the regression coefficients are initialized to 0.5 and the variances of the frailty terms (θ and η) are initialized to 1. If the option 'initialize' is TRUE, the program fits a joint frailty model to provide initial values for the splines, the covariate coefficients, the variance θ of the frailty term and the coefficient α. The variance of the second frailty term (η) and the second power coefficient ξ are initialized to 1. Then, a joint nested frailty model is fitted.
NCC DESIGN
It is possible to fit a joint frailty model for data from nested case-control studies using the approach of weighted penalized maximum likelihood. For this model, only splines can be used for baseline hazards and no time-varying effects of covariates can be included. To accommodate the nested case-control design, the formula for the recurrent events should simply include the special term wts(wts.ncc), where wts.ncc refers to a column of prespecified weights in the data set for every observation. For details, see Jazic et al., Submitted (available on request from the package authors).
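A minimal sketch of the weighted analysis for a nested case-control design, using the dataNCC example data shipped with the package; the wts() term points to the column of prespecified weights, and the tuning values are taken from the Examples section.

data(dataNCC)
mod.ncc <- frailtyPenal(Surv(t.start, t.stop, event) ~ cluster(id) +
    cov1 + cov2 + terminal(death) + wts(ncc.wts),
  formula.terminalEvent = ~ cov1 + cov2,
  data = dataNCC, n.knots = 8, kappa = c(1.6e+10, 5.0e+03),
  recurrentAG = TRUE, RandDist = "LogN")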
The following components are included in a 'frailtyPenal' object for each model.
b |
sequence of the corresponding estimation of the coefficients for the hazard functions (parametric or semiparametric), the random effects variances and the regression coefficients. |
call |
The code used for the model. |
formula |
the formula part of the code used for the model. |
coef |
the regression coefficients. |
cross.Val |
Logical value. Is cross validation procedure used for estimating the smoothing parameters in the penalized likelihood estimation? |
DoF |
Degrees of freedom associated with the "kappa". |
groups |
the maximum number of groups used in the fit. |
kappa |
A vector with the smoothing parameters in the penalized likelihood estimation corresponding to each baseline function as components. |
loglikPenal |
the complete marginal penalized log-likelihood in the semiparametric case. |
loglik |
the marginal log-likelihood in the parametric case. |
n |
the number of observations used in the fit. |
n.events |
the number of events observed in the fit. |
n.iter |
number of iterations needed to converge. |
n.knots |
number of knots for estimating the baseline functions in the penalized likelihood estimation. |
n.strat |
number of stratum. |
varH |
the variance matrix of all parameters before positivity constraint transformation. Then, the delta method is needed to obtain the estimated variance parameters. That is why some variances don't match with the printed values at the end of the model. |
varHIH |
the robust estimation of the variance matrix of all parameters. |
x |
matrix of times where both survival and hazard function are estimated. By default seq(0,max(time),length=99), where time is the vector of survival times. |
lam |
array (dim=3) of hazard estimates and confidence bands. |
surv |
array (dim=3) of baseline survival estimates and confidence bands. |
median |
The value of the median survival and its confidence bands. If there are two stratas or more, the first value corresponds to the value for the first strata, etc. |
nbintervR |
Number of intervals (between 1 and 20) for the parametric hazard functions ("Piecewise-per", "Piecewise-equi"). |
npar |
number of parameters. |
nvar |
number of explanatory variables. |
LCV |
the approximated likelihood cross-validation criterion in the semiparametric case: LCV = (1/n)(trace(H_pl^{-1} H) - l(.)), with H_pl the converged penalized Hessian matrix, H the Hessian of the log-likelihood, and l(.) the full log-likelihood. |
AIC |
the Akaike information Criterion for the parametric case: AIC = (1/n)(np - l(.)), with np the number of parameters and l(.) the log-likelihood. |
n.knots.temp |
initial value for the number of knots. |
shape.weib |
shape parameter for the Weibull hazard function. |
scale.weib |
scale parameter for the Weibull hazard function. |
martingale.res |
martingale residuals for each cluster. |
martingaleCox |
martingale residuals for observation in the Cox model. |
Frailty |
Logical value. Was model with frailties fitted ? |
frailty.pred |
empirical Bayes prediction of the frailty term (ie, using conditional posterior distributions). |
frailty.var |
variance of the empirical Bayes prediction of the frailty term (only for gamma frailty models). |
frailty.sd |
standard error of the frailty empirical Bayes prediction (only for gamma frailty models). |
global_chisq |
a vector with the values of each multivariate Wald test. |
dof_chisq |
a vector with the degree of freedom for each multivariate Wald test. |
global_chisq.test |
a binary variable equals to 0 when no multivariate Wald is given, 1 otherwise. |
p.global_chisq |
a vector with the p_values for each global multivariate Wald test. |
names.factor |
Names of the "as.factor" variables. |
Xlevels |
vector of the values that factor might have taken. |
contrasts |
type of contrast for factor variable. |
beta_p.value |
p-values of the Wald test for the estimated regression coefficients. |
The following components are specific to shared models.
equidistant |
Indicator for the intervals used in the estimation of the baseline hazard functions (for splines or piecewise-constant functions): 1 for equidistant intervals; 0 for intervals based on percentiles. |
intcens |
Logical value. Indicator of whether a model with interval-censored data was fitted. |
theta |
variance of the
gamma frailty parameter |
sigma2 |
variance
of the log-normal frailty parameter |
linear.pred |
linear predictor: uses simply "Beta'X" in the cox proportional hazard model or "Beta'X + log w_i" in the shared gamma frailty models, otherwise uses "Beta'X + w_i" for log-normal frailty distribution. |
BetaTpsMat |
matrix of time varying-effects and confidence bands (the first column used for abscissa of times) |
theta_p.value |
p-value of the Wald test for the estimated variance of the gamma frailty. |
sigma2_p.value |
p-value of the Wald test for the estimated variance of the log-normal frailty. |
The following components are specific to joint models.
intcens |
Logical value. Indicator of whether a joint frailty model with interval-censored data was fitted. |
theta |
variance of the gamma
frailty parameter |
sigma2 |
variance of the log-normal frailty parameter
|
eta |
variance
of the second gamma frailty parameter in general joint frailty models
|
indic_alpha |
indicator of whether a joint frailty model with the parameter alpha was fitted. |
alpha |
the estimated coefficient alpha. |
nbintervR |
Number of intervals (between 1 and 20) for the recurrent parametric hazard functions ("Piecewise-per", "Piecewise-equi"). |
nbintervDC |
Number of intervals (between 1 and 20) for the death parametric hazard functions ("Piecewise-per", "Piecewise-equi"). |
nvar |
A vector with the number of covariates of each type of hazard function as components. |
nvarRec |
number of recurrent explanatory variables. |
nvarEnd |
number of death explanatory variables. |
noVar1 |
indicator of recurrent explanatory variables. |
noVar2 |
indicator of death explanatory variables. |
xR |
matrix of times where both survival and hazard function are estimated for the recurrent event. By default seq(0,max(time),length=99), where time is the vector of survival times. |
xD |
matrix of times for the terminal event. |
lamR |
array (dim=3) of hazard estimates and confidence bands for recurrent event. |
lamD |
the same value as lamR for the terminal event. |
survR |
array (dim=3) of baseline survival estimates and confidence bands for recurrent event. |
survD |
the same value as survR for the terminal event. |
martingale.res |
martingale residuals for each cluster (recurrent). |
martingaledeath.res |
martingale residuals for each cluster (death). |
linear.pred |
linear predictor: uses "Beta'X + log w_i" in the gamma frailty model, otherwise uses "Beta'X + eta_i" for log-normal frailty distribution |
lineardeath.pred |
linear predictor for the terminal part : "Beta'X + alpha.log w_i" for gamma, "Beta'X + alpha.eta_i" for log-normal frailty distribution |
Xlevels |
vector of the values that factor might have taken for the recurrent part. |
contrasts |
type of contrast for factor variable for the recurrent part. |
Xlevels2 |
vector of the values that factor might have taken for the death part. |
contrasts2 |
type of contrast for factor variable for the death part. |
BetaTpsMat |
matrix of time varying-effects and confidence bands for recurrent event (the first column used for abscissa of times of recurrence) |
BetaTpsMatDc |
matrix of time varying-effects and confidence bands for terminal event (the first column used for abscissa of times of death) |
alpha_p.value |
p-value of the Wald test for the estimated coefficient alpha. |
ncc |
Logical value whether nested case-control design with weights was used for the joint model. |
The following components are specific to nested models.
alpha |
variance of the cluster effect |
eta |
variance of the subcluster effect |
subgroups |
the maximum number of subgroups used in the fit. |
frailty.pred.group |
empirical Bayes prediction of the frailty term by group. |
frailty.pred.subgroup |
empirical Bayes prediction of the frailty term by subgroup. |
linear.pred |
linear predictor: uses "Beta'X + log v_i.w_ij". |
subgbyg |
subgroup by group. |
n.strat |
A vector with the number of covariates of each type of hazard function as components. |
alpha_p.value |
p-value of the Wald test for the estimated variance of the cluster effect. |
eta_p.value |
p-value of the Wald test for the estimated variance of the subcluster effect. |
The following components are specific to joint nested frailty models.
theta |
variance of the subcluster effect |
eta |
variance of the cluster effect |
alpha |
the power coefficient alpha. |
ksi |
the power coefficient ksi. |
indic_alpha |
indicator of whether a joint frailty model with the parameter alpha was fitted. |
indic_ksi |
indicator of whether a joint frailty model with the parameter ksi was fitted. |
frailty.fam.pred |
empirical Bayes prediction of the frailty term by family. |
eta_p.value |
p-value of the Wald test for the estimated variance of the cluster effect. |
alpha_p.value |
p-value of the Wald test for the estimated power coefficient alpha. |
ksi_p.value |
p-value of the Wald test for the estimated power coefficient ksi. |
For prediction purposes, we recommend supplying data sorted by the group variable, numbered from 1 to n (number of groups). In case of a nested model, we recommend supplying data sorted first by the group variable and then by the subgroup variable, both numbered from 1 to n (number of groups) and from 1 to m (number of subgroups). "kappa" and "n.knots" are the arguments the user has to change if the fitted model does not converge. "n.knots" takes integer values between 4 and 20, but with n.knots=20 the model can take a long time to converge; so usually, start with n.knots=7 and increase it step by step until convergence. "kappa" only takes positive values: choose a value (for instance 10000) and, if the model does not converge, multiply or divide it by 10 or 5 until it converges.
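The sketch below illustrates this trial-and-error strategy on the kidney data. It assumes the fitted object exposes a convergence code in its istop component (as documented for GenfrailtyPenal below); adapt the check to what your frailtypack version actually returns.

data(kidney)
fit <- NULL
for (k in c(7, 10, 12)) {          # increase n.knots step by step
  fit <- tryCatch(
    frailtyPenal(Surv(time, status) ~ cluster(id) + sex + age,
                 data = kidney, n.knots = k, kappa = 10000),
    error = function(e) NULL)
  # istop == 1 is assumed here to indicate convergence
  if (!is.null(fit) && fit$istop == 1) break
}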
I. Jazic, S. Haneuse, B. French, G. MacGrogan, and V. Rondeau. Design and analysis of nested case-control studies for recurrent events subject to a terminal event. Submitted.
A. Krol, A. Mauguen, Y. Mazroui, A. Laurent, S. Michiels and V. Rondeau (2017). Tutorial in Joint Modeling and Prediction: A Statistical Software for Correlated Longitudinal Outcomes, Recurrent Events and a Terminal Event. Journal of Statistical Software 81(3), 1-52.
V. Rondeau, Y. Mazroui and J. R. Gonzalez (2012). Frailtypack: An R package for the analysis of correlated survival data with frailty models using penalized likelihood estimation or parametric estimation. Journal of Statistical Software 47, 1-28.
Y. Mazroui, S. Mathoulin-Pelissier, P. Soubeyran and V. Rondeau (2012). General joint frailty model for recurrent event data with a dependent terminal event: Application to follicular lymphoma data. Statistics in Medicine, 31, 11-12, 1162-1176.
V. Rondeau, J.P. Pignon, S. Michiels (2011). A joint model for the dependence between clustered times to tumour progression and deaths: A meta-analysis of chemotherapy in head and neck cancer. Statistical Methods in Medical Research 897, 1-19.
V. Rondeau, S. Mathoulin-Pellissier, H. Jacqmin-Gadda, V. Brouste, P. Soubeyran (2007). Joint frailty models for recurring events and death using maximum penalized likelihood estimation: application on cancer events. Biostatistics 8, 4, 708-721.
V. Rondeau, L. Filleul, P. Joly (2006). Nested frailty models using maximum penalized likelihood estimation. Statistics in Medicine, 25, 4036-4052.
V. Rondeau, D. Commenges, and P. Joly (2003). Maximum penalized likelihood estimation in a gamma-frailty model. Lifetime Data Analysis 9, 139-153.
C.A. McGilchrist, and C.W. Aisbett (1991). Regression with frailty in survival analysis. Biometrics 47, 461-466.
D. Marquardt (1963). An algorithm for least-squares estimation of nonlinear parameters. SIAM Journal of Applied Mathematics, 431-441.
SurvIC, cluster, subcluster, terminal, num.id, timedep
###--- COX proportional hazard model (SHARED without frailties) ---###
###--- estimated with penalized likelihood ---###

data(kidney)
frailtyPenal(Surv(time,status)~sex+age,
  n.knots=12,kappa=10000,data=kidney)

###--- Shared Frailty model ---###

frailtyPenal(Surv(time,status)~cluster(id)+sex+age,
  n.knots=12,kappa=10000,data=kidney)

#-- with an initialisation of regression coefficients
frailtyPenal(Surv(time,status)~cluster(id)+sex+age,
  n.knots=12,kappa=10000,data=kidney,init.B=c(-1.44,0))

#-- with truncated data
data(dataNested)
frailtyPenal(Surv(t1,t2,event) ~ cluster(group),
  data=dataNested,n.knots=10,kappa=10000,
  cross.validation=TRUE,recurrentAG=FALSE)

#-- stratified analysis
data(readmission)
frailtyPenal(Surv(time,event)~cluster(id)+dukes+strata(sex),
  n.knots=10,kappa=c(10000,10000),data=readmission)

#-- recurrentAG=TRUE
frailtyPenal(Surv(t.start,t.stop,event)~cluster(id)+sex+dukes+
  charlson,data=readmission,n.knots=6,kappa=1e5,recurrentAG=TRUE)

#-- cross.validation=TRUE
frailtyPenal(Surv(t.start,t.stop,event)~cluster(id)+sex+dukes+
  charlson,data=readmission,n.knots=6,kappa=5000,recurrentAG=TRUE,
  cross.validation=TRUE)

#-- log-normal distribution
frailtyPenal(Surv(t.start,t.stop,event)~cluster(id)+sex+dukes+
  charlson,data=readmission,n.knots=6,kappa=5000,recurrentAG=TRUE,
  RandDist="LogN")

###--- Joint Frailty model (recurrent and terminal events) ---###

data(readmission)

#-- Gap-time
modJoint.gap <- frailtyPenal(Surv(time,event)~cluster(id)+sex+dukes+charlson+
  terminal(death),formula.terminalEvent=~sex+dukes+charlson,
  data=readmission,n.knots=14,kappa=c(9.55e+9,1.41e+12),
  recurrentAG=FALSE)

#-- Calendar time
modJoint.calendar <- frailtyPenal(Surv(t.start,t.stop,event)~cluster(id)+
  sex+dukes+charlson+terminal(death),formula.terminalEvent=~sex
  +dukes+charlson,data=readmission,n.knots=10,kappa=c(9.55e9,1.41e12),
  recurrentAG=TRUE)

#-- without alpha parameter
modJoint.gap <- frailtyPenal(Surv(time,event)~cluster(id)+sex+dukes+charlson+
  terminal(death),formula.terminalEvent=~sex+dukes+charlson,
  data=readmission,n.knots=10,kappa=c(9.55e9,1.41e12),
  recurrentAG=FALSE,Alpha="None")

#-- log-normal distribution
modJoint.log <- frailtyPenal(Surv(t.start,t.stop,event)~cluster(id)+sex
  +dukes+charlson+terminal(death),formula.terminalEvent=~sex
  +dukes+charlson,data=readmission,n.knots=10,kappa=c(9.55e9,1.41e12),
  recurrentAG=TRUE,RandDist="LogN")

###--- Joint frailty model for NCC data ---###

data(dataNCC)
modJoint.ncc <- frailtyPenal(Surv(t.start,t.stop,event)~cluster(id)+cov1
  +cov2+terminal(death)+wts(ncc.wts), formula.terminalEvent=~cov1+cov2,
  data=dataNCC,n.knots=8,kappa=c(1.6e+10, 5.0e+03),recurrentAG=TRUE,
  RandDist="LogN")

###--- Joint Frailty model for clustered data ---###

#-- here is generated cluster (5 clusters)
readmission <- transform(readmission,group=id%%5+1)

#-- exclusion all recurrent events --#
#-- to obtain framework of semi-competing risks --#
readmission2 <- subset(readmission, (t.start == 0 & event == 1) | event == 0)

joi.clus.gap <- frailtyPenal(Surv(time,event)~cluster(group)+
  num.id(id)+dukes+charlson+sex+chemo+terminal(death),
  formula.terminalEvent=~dukes+charlson+sex+chemo,
  data=readmission2,recurrentAG=FALSE, n.knots=8,
  kappa=c(1.e+10,1.e+10),Alpha="None")

###--- General Joint model (recurrent and terminal events)
###--- with 2 covariates ---###

data(readmission)
modJoint.general <- frailtyPenal(Surv(time,event) ~ cluster(id) + dukes +
  charlson + sex + chemo + terminal(death),
  formula.terminalEvent = ~ dukes + charlson + sex + chemo,
  data = readmission, jointGeneral = TRUE,
  n.knots = 8, kappa = c(2.11e+08, 9.53e+11))

###--- Nested Frailty model ---###

##***** WARNING *****##
# Data should be ordered according to cluster and subcluster

data(dataNested)
modClu <- frailtyPenal(Surv(t1,t2,event)~cluster(group)+
  subcluster(subgroup)+cov1+cov2,data=dataNested,
  n.knots=8,kappa=50000)

modClu.str <- frailtyPenal(Surv(t1,t2,event)~cluster(group)+
  subcluster(subgroup)+cov1+strata(cov2),data=dataNested,
  n.knots=8,kappa=c(50000,50000))

## Not run: 

###--- Joint Nested Frailty model ---###

#-- here is generated cluster (30 clusters)
readmissionNested <- transform(readmission,group=id%%30+1)

modJointNested_Splines <- frailtyPenal(formula = Surv(t.start, t.stop, event)
  ~ subcluster(id) + cluster(group) + dukes + terminal(death),
  formula.terminalEvent = ~dukes, data = readmissionNested,
  recurrentAG = TRUE, n.knots = 8, kappa = c(9.55e+9, 1.41e+12),
  initialize = TRUE)

modJointNested_Weib <- frailtyPenal(Surv(t.start,t.stop,event)~subcluster(id)
  +cluster(group)+dukes+ terminal(death),formula.terminalEvent=~dukes,
  hazard = ('Weibull'), data=readmissionNested,recurrentAG=TRUE,
  initialize = FALSE)

JoiNesGapSpline <- frailtyPenal(formula = Surv(time, event)
  ~ subcluster(id) + cluster(group) + dukes + terminal(death),
  formula.terminalEvent = ~dukes, data = readmissionNested,
  recurrentAG = FALSE, n.knots = 8, kappa = c(9.55e+9, 1.41e+12),
  initialize = TRUE, init.Alpha = 1.091, Ksi = "None")

## End(Not run)
This meta-analysis was carried out by the GASTRIC (Global Advanced/Adjuvant Stomach Tumor Research international Collaboration) group, using individual data on patients with curatively resected gastric cancer. Data from all published randomized trials, with a patient recruitment end date before 2004, comparing adjuvant chemotherapy with surgery alone for resectable gastric cancers, were searched electronically. The candidate surrogate endpoint S was disease-free survival time, defined as the time (in days) to relapse, second cancer, or death from any cause. The true endpoint T was the overall survival time, defined as the time (in days) from randomization to death from any cause or to the last follow-up.
data(gastadj)
This data frame contains the following columns:
The trial in which the patient was treated
The identification number of a patient
The treatment indicator, coded as 0 = Control and 1 = Experimental
The candidate surrogate (progression-free survival in days)
Censoring indicator for progression-free survival (0 = alive and progression-free, 1 = with progression or dead)
The true endpoint (overall survival time in days)
Censoring indicator for survival time (0 = alive, 1 = dead)
Oba K, Paoletti X, Alberts S, Bang YJ, Benedetti J, Bleiberg H, Catalona P, Lordick F, Michiels S, Morita A, Okashi Y, Pignon JP, Rougier P, Sasako M, Sakamoto J, Sargent D, Shitara K, Van Cutsem E, Buyse M, Burzykowski T on behalf of the GASTRIC group (2013). Disease-Free Survival as a Surrogate for Overall Survival in Adjuvant Trials of Gastric Cancer: A Meta-Analysis. JNCI: Journal of the National Cancer Institute;105(21):1600-1607
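A minimal inspection snippet; the exact column names and codings are not repeated here, so str() is used to display them directly from the shipped data.

data(gastadj)
str(gastadj)      # column names, treatment coding and censoring indicators
summary(gastadj)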
I. SHARED FRAILTY GENERALIZED SURVIVAL MODELS
Fit a gamma Shared Frailty Generalized Survival Model using a parametric estimation or a semi-parametric penalized likelihood estimation. Right-censored data and strata (up to 6 levels) are allowed. It allows one to obtain a parametric or flexible semi-parametric smooth hazard and survival functions.
Each frailty term ω_i is assumed to act multiplicatively on the hazard function and to be drawn from a Gamma distribution with unit mean and variance θ. Conditional on the frailty term, the hazard function for the j-th subject in the i-th group is then expressed by

λ_ij(t | X_ij, ω_i; ξ) = ω_i λ_ij(t | X_ij; ξ)

where X_ij is a collection of baseline covariates, ξ is a vector of parameters, and λ_ij(t | X_ij; ξ) is the hazard function for an average value of the frailty.

The associated conditional survival function writes

S_ij(t | X_ij, ω_i; ξ) = [S_ij(t | X_ij; ξ)]^(ω_i)

where S_ij(t | X_ij; ξ) designates the survival function for an average value of the frailty. Following Liu et al. (2017, 2018), the latter function is expressed in terms of a link function g(.) and a linear predictor η_ij(t, X_ij; ξ) such that g[S_ij(t | X_ij; ξ)] = η_ij(t, X_ij; ξ), i.e. S_ij(t | X_ij; ξ) = h[η_ij(t, X_ij; ξ)] with h(.) = g^(-1)(.).

The conditional survival function is finally modeled by

S_ij(t | X_ij, ω_i; ξ) = {h[η_ij(t, X_ij; ξ)]}^(ω_i).
The table below summarizes the most commonly used (inverse) link functions and their associated conditional survival, hazard and cumulative hazard functions. PHM stands for "Proportional Hazards Model", POM for "Proportional Odds Model", PROM for "Probit Model" and AHM for "Additive Hazards Model".
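As an illustration (a sketch of the PH row only, using the notation above and the standard complementary log-log link), one would have:

g(s) = log(-log(s)),   S_ij(t | X_ij; ξ) = exp(-exp(η_ij(t, X_ij; ξ))),
Λ_ij(t | X_ij; ξ) = exp(η_ij(t, X_ij; ξ)),   λ_ij(t | X_ij; ξ) = (∂η_ij/∂t) exp(η_ij(t, X_ij; ξ)).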
I.(a) Fully parametric case
In the fully parametric case, the linear predictors considered are of the form

η_ij(t, X_ij; ξ) = ρ log(t) + μ + X_ij' β

where ρ is a shape parameter, μ a scale parameter, β a vector of regression coefficients, and ξ = (ρ, μ, β). With the appropriate link function, such linear parametric predictors make it possible to recover a Weibull baseline survival function for PHMs and AHMs, a log-logistic baseline survival function for POMs, and a log-normal one for PROMs.
I.(b) Flexible semi-parametric case
For PHM and AHM, a more flexible splines-based approach is proposed for modeling the baseline hazard function and the time-varying regression coefficients. In this case, conditional on the frailty term ω_i, the hazard function for the j-th subject in the i-th group is still expressed by

λ_ij(t | X_ij, ω_i; ξ) = ω_i λ_ij(t | X_ij; ξ),

but this time λ_ij(t | X_ij; ξ) is built from a splines-based baseline hazard function λ_0(t) and (possibly time-varying) regression coefficients β(t), i.e. λ_0(t) exp(X_ij' β(t)) for the PHM and λ_0(t) + X_ij' β(t) for the AHM. The smoothness of the baseline hazard function λ_0(.) is ensured by penalizing the log-likelihood by a term which takes large values for rough functions.
Moreover, for parametric and flexible semi-parametric AHMs, the log-likelihood is constrained to ensure the strict positivity of the hazards, since the latter is not naturally guaranteed by the model.
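A minimal sketch of a shared frailty generalized survival model fit on the readmission data. Here family = "PH" is assumed to select the proportional hazards submodel (check ?GenfrailtyPenal for the exact accepted values), and the tuning values are illustrative only.

data(readmission)
# Shared gamma frailty generalized survival model,
# flexible splines-based baseline hazard
mod.gsm <- GenfrailtyPenal(Surv(time, event) ~ cluster(id) + dukes + charlson,
                           data = readmission, family = "PH",
                           hazard = "Splines", n.knots = 8, kappa = 10000)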
II. JOINT FRAILTY GENERALIZED SURVIVAL MODELS
Fit a gamma Joint Frailty Generalized Survival Model for recurrent and terminal events using a parametric estimation, or a semi-parametric penalized likelihood estimation. Right-censored data and strata (up to 6 levels) for the recurrent event part are allowed. Joint frailty models allow studying, jointly, survival processes of recurrent and terminal events, by considering the terminal event as an informative censoring.
This model includes a common patient-specific frailty term ω_i for the two survival functions, which takes into account the unmeasured heterogeneity in the data associated with unobserved covariates. The frailty term acts differently on the two survival functions (ω_i for the recurrent survival function and ω_i^α for the death one). The covariates can be different for the recurrent and terminal event parts.
II.(a) Fully parametric case
For the j-th recurrence (j = 1, ..., n_i) and the i-th patient (i = 1, ..., N), the gamma Joint Frailty Generalized Survival Model for the recurrent event survival function S_Rij(.) and the death survival function S_Di(.) is

S_Rij(t | X_Rij, ω_i; ξ_R) = [h_R(η_Rij(t, X_Rij; ξ_R))]^(ω_i)
S_Di(t | X_Di, ω_i; ξ_D) = [h_D(η_Di(t, X_Di; ξ_D))]^(ω_i^α)

where
- η_Rij (resp. η_Di) is the linear predictor for the recurrent (resp. terminal) event process. The form of these linear predictors is the same as the one presented in I.(a).
- h_R(.) (resp. h_D(.)) is the inverse link function associated with recurrent events (resp. the terminal event).
- X_Rij and X_Di are two vectors of baseline covariates associated with recurrent and terminal events.
- β_R and β_D are the parameter vectors for recurrent and terminal events.
- α is a parameter allowing more flexibility in the association between the recurrent and terminal event processes.
- The random frailties ω_i are still assumed iid and drawn from a Γ(1/θ, 1/θ).
II.(b) Flexible semi-parametric case
If one chooses to fit a PHM or an AHM for recurrent and/or terminal events,
a splines-based approach for modeling baseline hazard functions
and time-varying regression coefficients is still available.
In this approach, the submodel for recurrent events is expressed as

λ_Rij(t | X_Rij, ω_i; ξ_R) = ω_i λ_Rij(t | X_Rij; ξ_R),

where λ_Rij(t | X_Rij; ξ_R) is built from a splines-based baseline hazard function λ_R0(t) and (possibly time-varying) regression coefficients β_R(t).

The submodel for the terminal event is expressed as

λ_Di(t | X_Di, ω_i; ξ_D) = ω_i^α λ_Di(t | X_Di; ξ_D),

where λ_Di(t | X_Di; ξ_D) is built in the same way from λ_D0(t) and β_D(t).

Baseline hazard functions λ_R0(.) and λ_D0(.) are estimated using cubic M-splines (of order 4) with positive coefficients, and the time-varying coefficients β_R(.) and β_D(.) are estimated using B-splines of order q. The smoothness of the baseline hazard functions is ensured by penalizing the log-likelihood by two terms which take large values for rough functions.
Moreover, if one chooses an AHM for recurrent and/or terminal event submodel, the log-likelihood is constrained to ensure the strict positivity of the hazards, since the latter is not naturally guaranteed by the model.
GenfrailtyPenal(formula, formula.terminalEvent, data, recurrentAG = FALSE,
  family, hazard = "Splines", n.knots, kappa, betaknots = 1, betaorder = 3,
  RandDist = "Gamma", init.B, init.Theta, init.Alpha, Alpha, maxit = 300,
  nb.gh, nb.gl, LIMparam = 1e-3, LIMlogl = 1e-3, LIMderiv = 1e-3,
  print.times = TRUE, cross.validation, jointGeneral, nb.int, initialize,
  init.Ksi, Ksi, init.Eta)
formula |
A formula object, with the response on the left of a ~ operator and the terms on the right. The response must be a survival object as returned by the 'Surv' function. |
formula.terminalEvent |
Only for joint frailty models: a formula object, only requires terms on the right to indicate which variables are used for the terminal event. Interactions are possible using ' * ' or ' : '. |
data |
A 'data.frame' with the variables used in ' |
recurrentAG |
Logical value. Is Andersen-Gill model fitted? If so indicates that recurrent event times with the counting process approach of Andersen and Gill is used. This formulation can be used for dealing with time-dependent covariates. The default is FALSE. |
family |
Type of Generalized Survival Model to fit.
|
hazard |
Type of hazard functions:
|
n.knots |
Integer giving the number of knots to use. Value required in the penalized likelihood estimation. It corresponds to the (n.knots+2) splines functions for the approximation of the hazard or the survival functions. The number of knots must be between 4 and 20. (See Note) |
kappa |
Positive smoothing parameter in the penalized likelihood estimation. The coefficient kappa tunes the intensity of the penalization (the integral of the squared second derivative of hazard function). In a stratified shared model, this argument must be a vector with kappas for both strata. In a stratified joint model, this argument must be a vector with kappas for both strata for recurrent events plus one kappa for terminal event. We advise the user to identify several possible tuning parameters, note their defaults and look at the sensitivity of the results to varying them. Value required. (See Note). |
betaknots |
Number of inner knots used for the
B-splines time-varying coefficient estimation. Default is 1.
See ' |
betaorder |
Order of the B-splines used for the
time-varying coefficient estimation.
Default is cubic B-splines ( |
RandDist |
Type of random effect distribution:
|
init.B |
A vector of initial values for regression coefficients. This vector should be of the same size as the whole vector of covariates with the first elements for the covariates related to the recurrent events and then to the terminal event (interactions in the end of each component). Default is 0.1 for each (for Generalized Survival and Shared Frailty Models) or 0.5 (for Generalized Joint Frailty Models). |
init.Theta |
Initial value for frailty variance. |
init.Alpha |
Only for Generalized Joint Frailty Models: initial value for parameter alpha. |
Alpha |
Only for Generalized Joint Frailty Models: input "None" so as to fit a joint model without the parameter alpha. |
maxit |
Maximum number of iterations for the Marquardt algorithm. Default is 300 |
nb.gh |
Number of nodes for the Gaussian-Hermite quadrature.
It can be chosen among 5, 7, 9, 12, 15, 20 and 32.
The default is 20 if |
nb.gl |
Number of nodes for the Gaussian-Laguerre quadrature.
It can be chosen between 20 and 32.
The default is 20 if |
LIMparam |
Convergence threshold of the Marquardt algorithm for the
parameters (see Details), |
LIMlogl |
Convergence threshold of the Marquardt algorithm for the
log-likelihood (see Details), |
LIMderiv |
Convergence threshold of the Marquardt algorithm for the
gradient (see Details), |
print.times |
A logical parameter to print iteration process. Default is TRUE. |
cross.validation |
Not implemented yet for the generalized settings. |
jointGeneral |
Not implemented yet for the generalized settings. |
nb.int |
Not implemented yet for the generalized settings. |
initialize |
Not implemented yet for the generalized settings. |
init.Ksi |
Not implemented yet for the generalized settings. |
Ksi |
Not implemented yet for the generalized settings. |
init.Eta |
Not implemented yet for the generalized settings. |
TYPICAL USES
For a Generalized Survival Model:
GenfrailtyPenal( formula=Surv(time,event)~var1+var2, data, family, \dots)
For a Shared Frailty Generalized Survival Model:
GenfrailtyPenal( formula=Surv(time,event)~cluster(group)+var1+var2, data, family, \dots)
For a Joint Frailty Generalized Survival Model:
GenfrailtyPenal( formula=Surv(time,event)~cluster(group)+var1+var2+var3+terminal(death), formula.terminalEvent= ~var1+var4, data, family, \dots)
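For instance, a joint frailty generalized survival model on the readmission data could be sketched as follows. Here family = "PH" is assumed to request proportional hazards submodels for both event processes, and the kappa values (taken from the frailtyPenal examples) are only illustrative; consult the package help for the exact accepted values before relying on this call.

data(readmission)
mod.gjoint <- GenfrailtyPenal(
  Surv(t.start, t.stop, event) ~ cluster(id) + dukes + terminal(death),
  formula.terminalEvent = ~ dukes,
  data = readmission, recurrentAG = TRUE, family = "PH",
  n.knots = 8, kappa = c(9.55e9, 1.41e12))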
OPTIMIZATION ALGORITHM
The estimated parameters are obtained using the robust Marquardt algorithm (Marquardt, 1963), which is a combination of a Newton-Raphson algorithm and a steepest descent algorithm. The iterations are stopped when the difference between two consecutive log-likelihoods is small (<10^-3), the estimated coefficients are stable (consecutive differences <10^-3), and the gradient is small enough (<10^-3).
When the frailty variance is small, numerical problems may arise.
To solve this problem, an alternative formula of the penalized log-likelihood
is used (see Rondeau, 2003 for further details).
For Proportional Hazards and Additive Hazards submodels,
cubic M-splines of order 4 can be used to estimate the hazard function.
In this case, I-splines (integrated M-splines) are used to compute the
cumulative hazard function.
The inverse of the Hessian matrix is the variance estimator.
To deal with the positivity constraint of the variance component and the
spline coefficients, a squared transformation is used and the standard errors are computed by the delta-method (Knight & Xekalaki, 2000). The integrations in the full log-likelihood are evaluated using Gaussian quadrature. Laguerre polynomials with 20 points are used to treat the integrations on [0, +∞).
INITIAL VALUES
In the case of a shared frailty model, the splines and the regression coefficients are initialized to 0.1. The program first fits an adjusted Cox model to provide new initial values for the splines and the regression coefficients. The variance of the frailty term is initialized to 0.1. Then, a shared frailty model is fitted.
In the case of a joint frailty model, the splines and the regression coefficients are initialized to 0.5. The program first fits an adjusted Cox model to provide new initial values for the splines and the regression coefficients. The variance of the frailty term and the association parameter are initialized to 1. Then, a joint frailty model is fitted.
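As an illustration of these initialization options, the following sketch (not taken from the package examples; the covariates and starting values are arbitrary) overrides the automatic initialization of a shared frailty generalized survival model through the init.B and init.Theta arguments, on the readmission data used in the examples further below.

library(frailtypack)
data(readmission)
## user-supplied starting values instead of the automatic initialization
fit.init <- GenfrailtyPenal(
  formula = Surv(time, event) ~ cluster(id) + sex + chemo,
  data = readmission, family = "PH", hazard = "parametric",
  init.B = c(0.1, 0.1),   # one starting value per regression coefficient (sex, chemo)
  init.Theta = 0.5)       # starting value for the frailty variance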
The following components are included in a 'frailtyPenal' object for each model.
b |
Sequence of the corresponding estimation of the coefficients for the hazard functions (parametric or semiparametric), the random effects variances and the regression coefficients. |
call |
The code used for the model. |
formula |
The formula part of the code used for the model. |
n |
The number of observations used in the fit. |
groups |
The maximum number of groups used in the fit. |
n.events |
The number of events observed in the fit. |
n.eventsbygrp |
A vector of length the number of groups giving the number of observed events in each group. |
loglik |
The marginal log-likelihood in the parametric case. |
loglikPenal |
The marginal penalized log-likelihood in the semiparametric case. |
coef |
The regression coefficients. |
varH |
The variance matrix of the regression coefficients before the positivity constraint transformation. The delta-method is then needed to obtain the estimated variance parameters, which is why some variances do not match the printed values at the end of the model. |
varHtotal |
The variance matrix of all the parameters before the positivity constraint transformation. The delta-method is then needed to obtain the estimated variance parameters, which is why some variances do not match the printed values at the end of the model. |
varHIH |
The robust estimation of the variance matrix of the regression coefficients |
varHIHtotal |
The robust estimation of the variance matrix of all parameters. |
x |
Matrix of times where the hazard functions are estimated. |
xSu |
Matrix of times where the survival functions are estimated. |
lam |
Array (dim=3) of baseline hazard estimates and confidence bands. |
surv |
Array (dim=3) of baseline survival estimates and confidence bands. |
type |
Character string specifying the type of censoring,
see the |
n.strat |
Number of strata. |
n.iter |
Number of iterations needed to converge. |
median |
The value of the median survival and its confidence bands. If there are two strata or more, the first value corresponds to the value for the first strata, etc. |
LCV |
The approximated likelihood cross-validation criterion in the semiparametric case. With H (resp. Hpen) the Hessian matrix of the log-likelihood (resp. penalized log-likelihood), EDF = Hpen^(-1) H the effective degrees of freedom and L(.) the log-likelihood, LCV = (1/n) (trace(EDF) - L(.)). |
AIC |
The Akaike information criterion for the parametric case. With p the number of parameters, n the number of observations and L(.) the log-likelihood, AIC = (1/n) (p - L(.)). |
npar |
Number of parameters. |
nvar |
Number of explanatory variables. |
typeof |
Indicator of the type of hazard functions computed : 0 for "Splines", 2 for "parametric". |
istop |
Convergence indicator: 1 if convergence is reached, 2 if convergence is not reached, 3 if the hessian matrix is not positive definite, 4 if a numerical problem has occurred in the likelihood calculation |
shape.param |
Shape parameter for the parametric hazard function (a Weibull distribution is used for proportional and additive hazards models, a log-logistic distribution is used for proportional odds models, a log-normal distribution is used for probit models). |
scale.param |
Scale parameter for the parametric hazard function. |
Names.data |
Name of the dataset. |
Frailty |
Logical value. Was a model with frailties fitted? |
linear.pred |
Linear predictor:
|
BetaTpsMat |
Matrix of time varying-effects and confidence bands (the first column used for abscissa of times). |
nvartimedep |
Number of covariates with time-varying effects. |
Names.vardep |
Name of the covariates with time-varying effects. |
EPS |
Convergence criteria concerning the parameters, the likelihood and the gradient. |
family |
Type of Generalized Survival Model fitted (0 for PH, 1 for PO, 2 for probit, 3 for AH). |
global_chisq.test |
A binary variable equal to 0 when no multivariate Wald test is given, 1 otherwise. |
beta_p.value |
p-values of the Wald test for the estimated regression coefficients. |
cross.Val |
Logical value. Is the cross-validation procedure used for estimating the smoothing parameters in the penalized likelihood estimation? |
DoF |
Degrees of freedom associated with the smoothing parameter
|
kappa |
A vector with the smoothing parameters in the penalized likelihood estimation corresponding to each baseline function as components. |
n.knots |
Number of knots for estimating the baseline functions in the penalized likelihood estimation. |
n.knots.temp |
Initial value for the number of knots. |
global_chisq |
A vector with the values of each multivariate Wald test. |
dof_chisq |
A vector with the degree of freedom for each multivariate Wald test. |
p.global_chisq |
A vector with the p-values for each global multivariate Wald test. |
names.factor |
Names of the "as.factor" variables. |
Xlevels |
Vector of the values that factor might have taken. |
The following components are specific to shared models.
equidistant |
Indicator for the intervals used in the spline estimation of the baseline hazard functions: 1 for equidistant intervals; 0 for intervals based on percentiles (note: |
Names.cluster |
Cluster names. |
theta |
Variance of the gamma frailty parameter, i.e.
Var( |
varTheta |
Variance of parameter |
theta_p.value |
p-value of the Wald test for the estimated variance of the gamma frailty. |
The following components are specific to joint models.
formula |
The formula part of the code used for the recurrent events. |
formula.terminalEvent |
The formula part of the code used for the terminal model. |
n.deaths |
Number of observed deaths. |
n.censored |
Number of censored individuals. |
theta |
Variance of the gamma frailty parameter, i.e.
Var( |
indic_alpha |
Indicator if a joint frailty model with
|
alpha |
The coefficient |
nvar |
A vector with the number of covariates of each type of hazard function as components. |
nvarnotdep |
A vector with the number of constant effect covariates of each type of hazard function as components. |
nvarRec |
Number of recurrent explanatory variables. |
nvarEnd |
Number of death explanatory variables. |
noVar1 |
Indicator of recurrent explanatory variables. |
noVar2 |
Indicator of death explanatory variables. |
Names.vardep |
Name of the covariates with time-varying effects for the recurrent events. |
Names.vardepdc |
Name of the covariates with time-varying effects for the terminal event. |
xR |
Matrix of times where both survival and hazard function are estimated for the recurrent event. |
xD |
Matrix of times for the terminal event. |
lamR |
Array (dim=3) of hazard estimates and confidence bands for recurrent event. |
lamD |
The same value as |
survR |
Array (dim=3) of baseline survival estimates and confidence bands for recurrent event. |
survD |
The same value as |
nb.gh |
Number of nodes for the Gaussian-Hermite quadrature. |
nb.gl |
Number of nodes for the Gaussian-Laguerre quadrature. |
medianR |
The value of the median survival for the recurrent events and its confidence bands. |
medianD |
The value of the median survival for the terminal event and its confidence bands. |
names.factor |
Names of the "as.factor" variables for the recurrent events. |
names.factordc |
Names of the "as.factor" variables for the terminal event. |
Xlevels |
Vector of the values that factor might have taken for the recurrent events. |
Xlevels2 |
Vector of the values that factor might have taken for the terminal event. |
linear.pred |
Linear predictor for the recurrent part:
|
lineardeath.pred |
Linear predictor for the terminal part:
|
Xlevels |
Vector of the values that factor might have taken for the recurrent part. |
Xlevels2 |
Vector of the values that factor might have taken for the death part. |
BetaTpsMat |
Matrix of time varying-effects and confidence bands for recurrent event (the first column used for abscissa of times of recurrence). |
BetaTpsMatDc |
Matrix of time varying-effects and confidence bands for terminal event (the first column used for abscissa of times of death). |
alpha_p.value |
p-value of the Wald test for the estimated |
In the flexible semiparametric case, the smoothing parameters kappa and the number of knots n.knots are the arguments that the user has to change if the fitted model does not converge. n.knots takes integer values between 4 and 20, but with n.knots=20 the model can take a long time to converge. So it is usually best to begin with n.knots=7 and to increase it step by step until the model converges. kappa only takes positive values: choose a value for kappa (for instance 10000) and, if the model does not converge, multiply or divide this value by 10 or 5 until it converges (see the sketch below).
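The sketch below (not part of the package examples; the covariates and kappa value are arbitrary) illustrates this tuning strategy, using the convergence indicator istop returned in the fitted object.

library(frailtypack)
data(readmission)
## start with a modest number of knots and a rough smoothing parameter
fit <- GenfrailtyPenal(
  formula = Surv(time, event) ~ cluster(id) + sex + chemo,
  data = readmission, family = "PH", hazard = "Splines",
  n.knots = 7, kappa = 10000)
if (fit$istop != 1) {
  ## not converged: increase n.knots step by step and rescale kappa by 10 or 5
  fit <- GenfrailtyPenal(
    formula = Surv(time, event) ~ cluster(id) + sex + chemo,
    data = readmission, family = "PH", hazard = "Splines",
    n.knots = 8, kappa = 10000/10)
}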
J. Chauvet and V. Rondeau (2021). A flexible class of generalized joint frailty models for the analysis of survival endpoints. In revision.
Liu XR, Pawitan Y, Clements M. (2018) Parametric and penalized generalized survival models. Statistical Methods in Medical Research 27(5), 1531-1546.
Liu XR, Pawitan Y, Clements MS. (2017) Generalized survival models for correlated time-to-event data. Statistics in Medicine 36(29), 4743-4762.
A. Krol, A. Mauguen, Y. Mazroui, A. Laurent, S. Michiels and V. Rondeau (2017). Tutorial in Joint Modeling and Prediction: A Statistical Software for Correlated Longitudinal Outcomes, Recurrent Events and a Terminal Event. Journal of Statistical Software 81(3), 1-52.
V. Rondeau, Y. Mazroui and J. R. Gonzalez (2012). Frailtypack: An R package for the analysis of correlated survival data with frailty models using penalized likelihood estimation or parametric estimation. Journal of Statistical Software 47, 1-28.
V. Rondeau, J.P. Pignon, S. Michiels (2011). A joint model for the dependence between clustered times to tumour progression and deaths: A meta-analysis of chemotherapy in head and neck cancer. Statistical Methods in Medical Research 897, 1-19.
V. Rondeau, S. Mathoulin-Pellissier, H. Jacqmin-Gadda, V. Brouste, P. Soubeyran (2007). Joint frailty models for recurring events and death using maximum penalized likelihood estimation: application on cancer events. Biostatistics 8(4), 708-721.
V. Rondeau, D. Commenges, and P. Joly (2003). Maximum penalized likelihood estimation in a gamma-frailty model. Lifetime Data Analysis 9, 139-153.
C.A. McGilchrist, and C.W. Aisbett (1991). Regression with frailty in survival analysis. Biometrics 47, 461-466.
D. Marquardt (1963). An algorithm for least-squares estimation of nonlinear parameters. SIAM Journal of Applied Mathematics, 431-441.
#############################################################################
# -----  GENERALIZED SURVIVAL MODELS (without frailties)  ----- #
#############################################################################
adult.retino = retinopathy[retinopathy$type == "adult", ]
adult.retino[adult.retino$futime >= 50, "status"] = 0
adult.retino[adult.retino$futime >= 50, "futime"] = 50

### ---  Parametric PH, AH, PO and probit models  --- ###
GenfrailtyPenal(formula=Surv(futime,status)~trt, data=adult.retino,
  hazard="parametric", family="PH")
GenfrailtyPenal(formula=Surv(futime,status)~trt, data=adult.retino,
  hazard="parametric", family="AH")
GenfrailtyPenal(formula=Surv(futime,status)~trt, data=adult.retino,
  hazard="parametric", family="PO")
GenfrailtyPenal(formula=Surv(futime,status)~trt, data=adult.retino,
  hazard="parametric", family="probit")

### ---  Semi-parametric PH and AH models  --- ###
GenfrailtyPenal(formula=Surv(futime,status)~timedep(trt), data=adult.retino,
  family="PH", hazard="Splines", n.knots=8, kappa=10^6, betaknots=1, betaorder=2)
GenfrailtyPenal(formula=Surv(futime,status)~timedep(trt), data=adult.retino,
  family="AH", hazard="Splines", n.knots=8, kappa=10^10, betaknots=1, betaorder=2)

#############################################################################
# -----  SHARED FRAILTY GENERALIZED SURVIVAL MODELS  ----- #
#############################################################################
adult.retino = retinopathy[retinopathy$type == "adult", ]
adult.retino[adult.retino$futime >= 50, "status"] = 0
adult.retino[adult.retino$futime >= 50, "futime"] = 50

### ---  Parametric PH, AH, PO and probit models  --- ###
GenfrailtyPenal(formula=Surv(futime,status)~trt+cluster(id), data=adult.retino,
  hazard="parametric", family="PH")
GenfrailtyPenal(formula=Surv(futime,status)~trt+cluster(id), data=adult.retino,
  hazard="parametric", family="AH")
GenfrailtyPenal(formula=Surv(futime,status)~trt+cluster(id), data=adult.retino,
  hazard="parametric", family="PO")
GenfrailtyPenal(formula=Surv(futime,status)~trt+cluster(id), data=adult.retino,
  hazard="parametric", family="probit")

### ---  Semi-parametric PH and AH models  --- ###
GenfrailtyPenal(formula=Surv(futime,status)~cluster(id)+timedep(trt), data=adult.retino,
  family="PH", hazard="Splines", n.knots=8, kappa=10^6, betaknots=1, betaorder=2)
GenfrailtyPenal(formula=Surv(futime,status)~cluster(id)+timedep(trt), data=adult.retino,
  family="AH", hazard="Splines", n.knots=8, kappa=10^10, betaknots=1, betaorder=2)

#############################################################################
# -----  JOINT FRAILTY GENERALIZED SURVIVAL MODELS  ----- #
#############################################################################
data("readmission")
readmission[, 3:5] = readmission[, 3:5]/365.25

### ---  Parametric dual-PH, AH, PO and probit models  --- ###
GenfrailtyPenal(
  formula=Surv(t.start,t.stop,event)~cluster(id)+terminal(death)+sex+dukes+chemo,
  formula.terminalEvent=~sex+dukes+chemo,
  data=readmission, recurrentAG=TRUE, hazard="parametric", family=c("PH","PH"))
GenfrailtyPenal(
  formula=Surv(t.start,t.stop,event)~cluster(id)+terminal(death)+sex+dukes+chemo,
  formula.terminalEvent=~sex+dukes+chemo,
  data=readmission, recurrentAG=TRUE, hazard="parametric", family=c("AH","AH"))
GenfrailtyPenal(
  formula=Surv(t.start,t.stop,event)~cluster(id)+terminal(death)+sex+dukes+chemo,
  formula.terminalEvent=~sex+dukes+chemo,
  data=readmission, recurrentAG=TRUE, hazard="parametric", family=c("PO","PO"))
GenfrailtyPenal(
  formula=Surv(t.start,t.stop,event)~cluster(id)+terminal(death)+sex+dukes+chemo,
  formula.terminalEvent=~sex+dukes+chemo,
  data=readmission, recurrentAG=TRUE, hazard="parametric", family=c("probit","probit"))

### ---  Semi-parametric dual-PH and AH models  --- ###
GenfrailtyPenal(
  formula=Surv(t.start,t.stop,event)~cluster(id)+terminal(death)+sex+dukes+timedep(chemo),
  formula.terminalEvent=~sex+dukes+timedep(chemo),
  data=readmission, recurrentAG=TRUE, hazard="Splines", family=c("PH","PH"),
  n.knots=5, kappa=c(100,100), betaknots=1, betaorder=3)
GenfrailtyPenal(
  formula=Surv(t.start,t.stop,event)~cluster(id)+terminal(death)+sex+dukes+timedep(chemo),
  formula.terminalEvent=~sex+dukes+timedep(chemo),
  data=readmission, recurrentAG=TRUE, hazard="Splines", family=c("AH","AH"),
  n.knots=5, kappa=c(600,600), betaknots=1, betaorder=3)
Let t be a continuous time value; after running the fit, this function returns the value of the hazard function at t.
hazard(t, ObjFrailty)
t |
time for hazard function. |
ObjFrailty |
an object from the frailtypack fit. |
Returns the value of the hazard function at time t.
## Not run: 
#-- fit of a shared frailty model
data(readmission)
fit.shared <- frailtyPenal(Surv(time,event)~dukes+cluster(id)+strata(sex),
  n.knots=10, kappa=c(10000,10000), data=readmission)
#-- calling the hazard function
hazard(20, fit.shared)
## End(Not run)
Fit a joint competing frailty model for a single recurrent event and two terminal events. The three hazard functions share a common normally distributed frailty term, and each is adjusted on a vector of baseline covariates (possibly the same). The frailty enters the hazards of the two terminal events through two power parameters (see the init.Alpha1 and init.Alpha2 arguments below).
jointRecCompet(formula, formula.terminalEvent = NULL,
  formula.terminalEvent2 = NULL, data, initialize = TRUE,
  recurrentAG = FALSE, maxit = 350, hazard = "Weibull",
  n.knots = 7, kappa = rep(10, 3), crossVal = FALSE,
  constraint.frailty = "squared", GHpoints = 32,
  tolerance = rep(10^-3, 3), init.hazard = NULL, init.Sigma = 0.5,
  init.Alpha1 = 0.1, init.Alpha2 = -0.1, init.B = NULL)
formula |
a formula object, with the response for the first recurrent
event on the left of a |
formula.terminalEvent |
a formula object,
empty on the left of a |
formula.terminalEvent2 |
a formula object,
empty on the left of a |
data |
a 'data.frame' with the variables used in 'formula', 'formula.terminalEvent', and 'formula.terminalEvent2'. |
initialize |
Logical value to internally initialize regression coefficients and baseline hazard functions parameters using simpler models from frailtypack. When initialization is requested, the program first fits two joint frailty models for the recurrent events and each terminal event. When FALSE, parameters are initialized via the arguments init.hazard, init.Sigma, init.Alpha1, init.Alpha2, init.B. |
recurrentAG |
Logical value. Is the Andersen-Gill model fitted? If TRUE, recurrent event times are handled with the counting-process approach of Andersen and Gill, which can be used to deal with time-dependent covariates. The default is FALSE. |
maxit |
maximum number of iterations for the Marquardt algorithm. Default is 350. |
hazard |
Type of hazard functions. Available options are |
n.knots |
In the case of splines hazard functions, number of knots to be used in the splines basis. This number should be between 4 and 20. Default is 7. |
kappa |
In the case of splines hazard functions, a vector of size 3 containing the values of the smoothing parameters to be used for each baseline hazard function. Default value is 10 for each function. |
crossVal |
In the case of splines hazard functions, indicates how
the smoothing parameters are chosen. If set to "TRUE" then those parameters
are chosen automatically using cross-validation on reduced models for each
baseline hazard function. If set to "FALSE" then the parameters are those provided
by the argument |
constraint.frailty |
Type of positivity constraint used for the variance of the random effect in the likelihood. Possible values are 'squared' or 'exponential'. Default is 'squared'. See Details. |
GHpoints |
Integer. Number of nodes for Gauss-Hermite integration to marginalize random effects/frailties. Default is 32. |
tolerance |
Numeric, length 3. Optimizer's tolerance for (1) successive change in parameter values, (2) log likelihood, and (3) score, respectively. |
init.hazard |
Numeric. Initialization values for the hazard parameters. If a Weibull model is used, the order is: shapeR, scaleR, shapeTerminal1, scaleTerminal1, shapeTerminal2, scaleTerminal2. |
init.Sigma |
Numeric. Initialization value for the standard deviation of the normally distributed random effects. |
init.Alpha1 |
Numeric. Initialization value for the parameter alpha that links the hazard function of the recurrent event to the first terminal event. |
init.Alpha2 |
Numeric. Initialization value for the parameter alpha that links the hazard function of the recurrent event to the second terminal event. |
init.B |
Numeric vector of the same length and order as the three covariate vectors for the recurrent, terminal1, and terminal2 events (in that order). |
Right-censored data are allowed. Left-truncated data and stratified analyses are not possible. Prediction options are not yet available.
The constraint.frailty argument defines the positivity constraint used for the frailty variance in the likelihood. By default it uses the square, so that the absolute value of the raw parameter b is the standard deviation of the frailty (i.e. sd = |b|, variance = b^2). The other parametrization uses the square of the exponential for the variance, so that the raw parameter is the logarithm of the standard deviation (i.e. sd = exp(b), variance = exp(2b)).
For the other parameters in the model needing a positivity constraint (the parameters related to the baseline hazard functions), the exponential-squared parametrization is used.
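As a small numerical illustration of the two parametrizations (b below is a hypothetical raw optimized parameter, not a value from any fitted model):

b <- 0.8                    # hypothetical unconstrained parameter
sd.squared <- abs(b)        # constraint.frailty = "squared":     sd = |b|,    variance = b^2
sd.exponential <- exp(b)    # constraint.frailty = "exponential": sd = exp(b), variance = exp(2*b)
c(sd.squared, sd.exponential)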
Parameter estimates of a competing joint frailty model, more generally a 'jointRecCompet' object. Methods defined for 'jointRecCompet' objects are provided for print, plot and summary. The following components are included in a 'jointRecCompet' object.
summary.table |
A table describing the estimate, standard error, confidence interval, and pvalues for each of the parameters in the model. |
controls |
A vector of named control parameters |
k0 |
For splines baseline hazard functions, vector of penalization terms. |
noVarEvent |
A vector indicating, for each event type, whether no covariates are used in the model. |
np |
Total number of parameters |
b |
Vector containing the estimated coefficients of the model before any positivity constraint.
The values are in order: the coefficients associated with the baseline hazard functions
(either the splines or the shape and scale parameters for Weibull hazard),
the random effect variance, the coefficients of the frailty ( |
H_hessOut |
Covariance matrix of the estimated parameters |
HIHOut |
Covariance matrix of the estimated parameters for the penalized likelihood in the case of Splines baseline hazard functions. |
LCV |
The approximated likelihood cross-validation criterion in the spline case |
critCV |
Convergence criteria |
x1 |
Vector of times for which the hazard function of the recurrent event is estimated. By default seq(0,max(time),length=99), where time is the vector of survival times. |
lam1 |
Matrix of hazard estimates and confidence bands for the recurrent event. |
xSu1 |
Vector of times for the survival function of the recurrent event. |
surv1 |
Matrix of baseline survival estimates and confidence bands for recurrent event. |
x2 |
Vector of times for the first terminal event (see x1 value). |
lam2 |
Matrix of hazard estimates and confidence bands for the first terminal event. |
xSu2 |
Vector of times for the survival function of the first terminal event. |
surv2 |
Vector of the survival function of the first terminal event evaluated at xSu2. |
x3 |
Vector of times for the second terminal event (see x1 value). |
lam3 |
Matrix of hazard estimates and confidence bands for the second terminal event. |
xSu3 |
Vector of times for the survival function of the second terminal event. |
surv3 |
Vector of the survival function of the second terminal event evaluated at xSu3. |
ni |
Number of iterations needed to converge. |
constraintfrailty |
Positivity constraint used for the variance of the random effect |
ziOut1 |
In the spline case, vector of knots used in the spline basis for the recurrent event |
ziOutdc |
In the spline case, vector of knots used in the spline basis for the terminal events |
ghnodes |
Nodes used for the Gauss-Hermite quadrature. |
ghweights |
Weights used for the Gauss-Hermite quadrature. |
tolerance |
Numeric, length 3. Optimizer's tolerance for (1) successive change in parameter values, (2) log likelihood, and (3) score, respectively. |
call |
Call of the function. |
loglikPenal |
Estimated penalized log-likelihood in the spline case |
logLik |
Estimated log-likelihood in the Weibull case |
AIC |
For the Weibull case, Akaike Information criterion |
n |
Total number of subjects |
nevts |
Number of events for each event type. |
set.seed(1)
data = simulatejointRecCompet(n = 500,
  par0 = c(shapeR = 1.5, scaleR = 10, shapeM = 1.75, scaleM = 16,
           shapeD = 1.75, scaleD = 16, sigma = 0.5, alphaM = 1, alphaD = 1,
           betaR = -0.5, betaM = -0.5, betaD = 0))
mod <- jointRecCompet(
  formula = Surv(tstart, tstop, event) ~ cluster(id) + treatment +
            terminal(terminal1) + terminal2(terminal2),
  formula.terminalEvent = ~treatment,
  formula.terminalEvent2 = ~treatment,
  data = data, recurrentAG = TRUE, initialize = TRUE,
  n.knots = 7, crossVal = TRUE, hazard = "Splines", maxit = 350)

# This example uses an extract of 500 patients of the REDUCE trial
data(reduce)
mod_reduce <- jointRecCompet(
  formula = Surv(t.start, t.stop, del) ~ cluster(id) + treatment +
            terminal(death) + terminal2(discharge),
  formula.terminalEvent = ~treatment,
  formula.terminalEvent2 = ~treatment,
  data = reduce, initialize = TRUE, recurrentAG = TRUE,
  hazard = "Weibull", constraint.frailty = "exponential", maxit = 350)
print(mod_reduce)
Data are generated from the one-step joint frailty-copula model, under the Clayton copula function (see jointSurroCopPenal for more details).
jointSurrCopSimul(n.obs = 600, n.trial = 30, prop.cens = 0, cens.adm = 549,
  alpha = 1.5, gamma = 2.5, sigma.s = 0.7, sigma.t = 0.7, cor = 0.9,
  betas = c(-1.25, 0.5), betat = c(-1.25, 0.5), frailt.base = 1,
  lambda.S = 1.3, nu.S = 0.0025, lambda.T = 1.1, nu.T = 0.0025, ver = 2,
  typeOf = 1, equi.subj.trial = 1, equi.subj.trt = 1,
  prop.subj.trial = NULL, prop.subj.trt = NULL, full.data = 0,
  random.generator = 1, random = 0, random.nb.sim = 0, seed = 0,
  nb.reject.data = 0, thetacopule = 6, filter.surr = c(1, 1),
  filter.true = c(1, 1), covar.names = "trt", pfs = 0)
n.obs |
Number of considered subjects. The default is |
n.trial |
Number of considered trials. The default is |
prop.cens |
A value between |
cens.adm |
Censorship time. If argument |
alpha |
Fixed value for |
gamma |
Fixed value for |
sigma.s |
Fixed value for
|
sigma.t |
Fixed value for
|
cor |
Desired level of correlation between vSi and vTi.
|
betas |
Vector of the fixed effects for |
betat |
Vector of the fixed effects for |
frailt.base |
Considered heterogeneity on the baseline risk |
lambda.S |
Desired scale parameter for the |
nu.S |
Desired shape parameter for the |
lambda.T |
Desired scale parameter for the |
nu.T |
Desired shape parameter for the |
ver |
Number of covariates. The mandatory covariate is the treatment arm. The default is |
typeOf |
Type of joint model used for data generation: 0 = classical joint model
with a shared individual frailty effect (Rondeau, 2007), 1 = joint frailty-copula model with shared frailty
effects |
equi.subj.trial |
A binary variable that indicates if the same proportion of subjects should be included per trial (1) or not (0). If 0, the proportions of subjects per trial are required with the parameter |
equi.subj.trt |
A binary variable that indicates if the same proportion of subjects is randomized per trial (1) or not (0). If 0, the proportions of subjects per trial are required with the parameter |
prop.subj.trial |
The proportions of subjects per trial. Required if |
prop.subj.trt |
The proportions of randomized subjects per trial. Required if |
full.data |
Specifies whether the function should return the full dataset (1), including the random effects, or the restricted dataset (0) with at least |
random.generator |
The random number generator used by the Fortran compiler,
|
random |
A binary indicating whether the random number generation is reset with a different environment at each call |
random.nb.sim |
required if |
seed |
The seed to use for data (or samples) generation. Required if the argument |
nb.reject.data |
Number of generated datasets to reject before the considered dataset. This parameter is required when data generation is used for simulation. With a fixed parameter and |
thetacopule |
The desired value for the copula parameter. The default is |
filter.surr |
Vector of size equal to the number of covariates; the i-th element indicates whether the hazard for the surrogate endpoint is adjusted for the i-th covariate (code 1) or not (code 0). By default, 2 covariates are considered. |
filter.true |
Vector defined as |
covar.names |
Vector of the covariate names. By default it contains "trt" for the treatment arm. It should contain the names of all covariates wanted in the generated dataset. |
pfs |
Specifies whether the time to progression should be censored by the death time (0) or not (1). The default is 0. When pfs is set to 1, death is included in the surrogate endpoint, as in the definition of PFS or DFS. |
Only Gaussian random effects are considered in this data generation. If the parameter full.data is set to 1, this function returns a list containing several parameters, including the generated random effects. The desired individual-level correlation (Kendall's τ) depends on the value of the copula parameter θ, given that τ = θ/(θ + 2) under the Clayton copula model.
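This relation can be used directly to choose thetacopule for a target individual-level Kendall's τ; a short sketch (the helper functions are ours, for illustration only):

tau.from.theta <- function(theta) theta / (theta + 2)   # Clayton copula
theta.from.tau <- function(tau) 2 * tau / (1 - tau)
tau.from.theta(6)      # the default thetacopule = 6 corresponds to tau = 0.75
theta.from.tau(0.75)   # and conversely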
If the parameter full.data is set to 0, this function returns a data.frame with the following columns (a short sketch inspecting them is given after the column list below):
patientID |
A numeric, that represents the patient's identifier, must be unique; |
trialID |
A numeric, that represents the trial in which each patient was randomized; |
trt |
The treatment indicator for each patient, with 1 = treated, 0 = untreated; |
timeS |
The follow up time associated with the surrogate endpoint; |
statusS |
The event indicator associated with the surrogate endpoint. Normally 0 = no event, 1 = event; |
timeT |
The follow up time associated with the true endpoint; |
statusT |
The event indicator associated with the true endpoint. Normally 0 = no event, 1 = event; |
and other covariates named Var2, var3, ..., var[ver-1] if ver > 1. If the argument full.data is set to 1, additional columns corresponding to the random effects u_i, v_Si and v_Ti are returned.
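A quick sketch (small, arbitrary sample sizes; all other arguments left at their defaults) that generates a dataset and inspects the columns listed above:

library(frailtypack)
d <- jointSurrCopSimul(n.obs = 200, n.trial = 10, seed = 0)
head(d)   # patientID, trialID, trt, timeS, statusS, timeT, statusT, ...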
Casimir Ledoux Sofeu [email protected], [email protected] and Virginie Rondeau [email protected]
Rondeau V., Mathoulin-Pelissier S., Jacqmin-Gadda H., Brouste V. and Soubeyran P. (2007). Joint frailty models for recurring events and death using maximum penalized likelihood estimation: application on cancer events. Biostatistics 8(4), 708-721.
Sofeu, C. L., Emura, T., and Rondeau, V. (2020). A joint frailty-copula model for meta-analytic
validation of failure time surrogate endpoints in clinical trials. Under review
jointSurrSimul, jointSurroCopPenal
# dataset with 2 covariates and fixed censorship
data.sim <- jointSurrCopSimul(n.obs = 600, n.trial = 30, prop.cens = 0, cens.adm = 549,
  alpha = 1.5, gamma = 2.5, sigma.s = 0.7, sigma.t = 0.7, cor = 0.8,
  betas = c(-1.25, 0.5), betat = c(-1.25, 0.5), full.data = 0,
  random.generator = 1, ver = 2, covar.names = "trt", nb.reject.data = 0,
  thetacopule = 6, filter.surr = c(1,1), filter.true = c(1,1), seed = 0)

# dataset with 2 covariates and random censorship
data.sim2 <- jointSurrCopSimul(n.obs = 600, n.trial = 30, prop.cens = 0.75, cens.adm = 549,
  alpha = 1.5, gamma = 2.5, sigma.s = 0.7, sigma.t = 0.7, cor = 0.8,
  betas = c(-1.25, 0.5), betat = c(-1.25, 0.5), full.data = 0,
  random.generator = 1, ver = 2, covar.names = "trt", nb.reject.data = 0,
  thetacopule = 6, filter.surr = c(1,1), filter.true = c(1,1), seed = 0)
Joint Frailty-Copula model for Surrogacy definition
Fit the one-step joint frailty-copula surrogate model for the evaluation of a candidate surrogate endpoint,
with different integration methods on the random effects, using a semiparametric penalized
likelihood estimation. This approach extends that of Burzykowski et al.
(2001) by
including in the bivariate copula model the random effects treatment-by-trial interaction.
Assume S_ij and T_ij are the failure times associated respectively with the surrogate and the true endpoints, for subject j (j = 1,..., n_i) belonging to trial i (i = 1,..., G).
Let v_i = (u_i, v_Si, v_Ti) be the vector of trial-level random effects; Z_S,ij = (Z_Sij1, ..., Z_Sijp)' and Z_T,ij = (Z_Tij1, ..., Z_Tijp)' be covariates associated with S_ij and T_ij. The joint frailty-copula model links, through a copula function C_θ, the conditional survival functions of S_ij and T_ij given the random effects. The corresponding conditional proportional hazard models involve the baseline hazard functions λ_0S and λ_0T, the random effects u_i, v_Si and v_Ti, and the fixed effects β_S and β_T, in which u_i ~ N(0, γ), u_i ⊥ v_Si, u_i ⊥ v_Ti, and (v_Si, v_Ti)^T ~ N(0, Σ_v), with Σ_v having diagonal elements σ²_S and σ²_T and off-diagonal element σ_ST.
In this model, λ_0S(x) is the baseline hazard function associated with the surrogate endpoint and β_S the fixed effects (or log-hazard ratios) corresponding to the covariates Z_S,ij; λ_0T(x) is the baseline hazard function associated with the true endpoint and β_T the fixed treatment effects corresponding to the covariates Z_T,ij. The copula model serves to take into account the dependence between the surrogate and true endpoints at the individual level. In the copula model, θ is the copula parameter used to quantify the strength of association. u_i is a shared frailty effect associated with the baseline hazard function that serves to take into account the heterogeneity between trials of the baseline hazard function, associated with the fact that we have several trials in this meta-analytical design. The power parameter α distinguishes the trial-level heterogeneity between the surrogate and the true endpoint. v_Si and v_Ti are two correlated random effects treatment-by-trial interactions. Z_Sij1 (or Z_Tij1) represents the treatment arm to which the patient has been randomized.
For simplicity, we focus on the Clayton and Gumbel-Hougaard copula functions. In Clayton's model, the copula function has the form C_θ(u, v) = (u^(-θ) + v^(-θ) - 1)^(-1/θ), with θ > 0; in Gumbel's model, the copula function takes the standard Gumbel-Hougaard (extreme-value) form (see Nelsen, 2006).
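As a small illustration of the Clayton form given above (the function name is ours, not part of the package):

clayton.copula <- function(u, v, theta) (u^(-theta) + v^(-theta) - 1)^(-1/theta)
## joint survival probability implied by marginal survival probabilities 0.8 and 0.6
clayton.copula(0.8, 0.6, theta = 6)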
Surrogacy evaluation
We propose to base the validation of a candidate surrogate endpoint on Kendall's τ at the individual level and on the coefficient of determination R²_trial at the trial level, as in the classical approach (Burzykowski et al., 2001). The formulations are given below.
Individual-level surrogacy
From the proposed model, and according to the copula function used, it can be shown that Kendall's τ can be expressed as a function of the copula parameter θ (for instance, τ = θ/(θ + 2) under the Clayton copula). Kendall's τ is the difference between the probability of concordance and the probability of discordance of two realizations of (S_ij, T_ij). It belongs to the interval [-1, 1] and takes the value zero when S_ij and T_ij are independent.
Trial-level surrogacy
The key motivation for validating a surrogate endpoint is to be able to predict the effect of treatment on the true endpoint, based on the observed effect of treatment on the surrogate endpoint. As shown by Buyse et al. (2000), the coefficient of determination obtained from the covariance matrix Σ_v of the random effects treatment-by-trial interactions can be used to evaluate this prediction, and therefore serves as a trial-level surrogacy evaluation measure. It is defined by R²_trial = (σ_ST)² / (σ²_S σ²_T). The standard errors of R²_trial and of Kendall's τ are calculated using the delta-method. We also propose R²_trial and its 95% CI computed using the parametric bootstrap. The use of the delta-method can lead to confidence limits violating the [0, 1] interval, as noted by Burzykowski et al. (2001); however, using other methods would not significantly alter the findings of the surrogacy assessment.
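As a numerical illustration of this definition (with hypothetical values for the elements of Σ_v):

sigma2.S <- 0.7    # hypothetical variance of vSi
sigma2.T <- 0.7    # hypothetical variance of vTi
sigma.ST <- 0.6    # hypothetical covariance between vSi and vTi
R2.trial <- sigma.ST^2 / (sigma2.S * sigma2.T)
R2.trial           # about 0.73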
jointSurroCopPenal(data, maxit = 40, indicator.alpha = 1, frail.base = 1,
  n.knots = 6, LIMparam = 0.001, LIMlogl = 0.001, LIMderiv = 0.001,
  nb.mc = 1000, nb.gh = 20, nb.gh2 = 32, adaptatif = 0, int.method = 0,
  nb.iterPGH = 5, true.init.val = 0, thetacopula.init = 1,
  sigma.ss.init = 0.5, sigma.tt.init = 0.5, sigma.st.init = 0.48,
  gamma.init = 0.5, alpha.init = 1, betas.init = 0.5, betat.init = 0.5,
  scale = 1, random.generator = 1, kappa.use = 4, random = 0,
  random.nb.sim = 0, seed = 0, init.kappa = NULL, ckappa = c(0,0),
  typecopula = 1, nb.decimal = 4, print.times = TRUE, print.iter = FALSE)
data |
A
|
maxit |
maximum number of iterations for the Marquardt algorithm.
The default being |
indicator.alpha |
A binary, indicating whether the power parameter |
frail.base |
A binary, indicating whether the heterogeneity between trials on the baseline risk is considered ( |
n.knots |
Integer giving the number of knots to use. Value required in the penalized likelihood estimation. It corresponds to the (n.knots+2) splines functions for the approximation of the hazard or the survival functions. We estimate I- or M-splines of order 4. When the user sets the number of knots equal to k (n.knots=k), the number of interior knots is (k-2) and the number of splines is (k-2)+order. The number of knots must be between 4 and 20. (See |
LIMparam |
Convergence threshold of the Marquardt algorithm for the
parameters, 10^-3 by default (See |
LIMlogl |
Convergence threshold of the Marquardt algorithm for the
log-likelihood, 10^-3 by default (See |
LIMderiv |
Convergence threshold of the Marquardt algorithm for the gradient,
10^-3 by default
(See |
nb.mc |
Number of samples considered in the Monte-Carlo integration. Required in the event
|
nb.gh |
Number of nodes for the Gaussian-Hermite quadrature. It can
be chosen among 5, 7, 9, 12, 15, 20 and 32. The default is |
nb.gh2 |
Number of nodes for the Gauss-Hermite quadrature used to re-estimate the model,
in the event of non-convergence, defined as previously. The default is |
adaptatif |
A binary, indicates whether the pseudo adaptive Gaussian-Hermite quadrature |
int.method |
A numeric, indicates the integration method: |
nb.iterPGH |
Number of iterations before the re-estimation of the posterior random effects,
in the event of the two-step pseudo-adaptive Gaussian-Hermite quadrature. If set to |
true.init.val |
Numerical value. Indicates if the given initial values to parameters |
thetacopula.init |
Initial values for the copula parameter ( |
sigma.ss.init |
Initial values for
|
sigma.tt.init |
Initial values for
|
sigma.st.init |
Initial values for
|
gamma.init |
Initial values for |
alpha.init |
Initial values for |
betas.init |
Initial values for |
betat.init |
Initial values for |
scale |
A numeric used to rescale (by multiplication) the survival times, in order to avoid numerical problems in case of convergence issues. If no change is needed, the argument is set to 1, the default value.
e.g.: |
random.generator |
Random number generator used by the Fortran compiler,
|
kappa.use |
A numeric that indicates how to manage the smoothing parameters κ1 and κ2 in the event of convergence issues. If it is set to |
random |
A binary indicating whether the random number generation is reset with a different environment at each call |
random.nb.sim |
If |
seed |
The seed to use for data (or samples) generation. required if |
init.kappa |
Smoothing parameter used to penalize the log-likelihood. By default (init.kappa = NULL) the values used are obtained by cross-validation. |
ckappa |
Vector of two fixed values to add to the smoothing parameters. By default it is set to (0,0). This argument helps manage the smoothing parameters in the event of convergence issues. |
typecopula |
The copula function used: 1 for Clayton or 2 for Gumbel-Hougaard. The default is |
nb.decimal |
Number of decimal required for results presentation. |
print.times |
a logical parameter to print estimation time. Default is TRUE. |
print.iter |
a logical parameter to print iteration process. Default is FALSE. |
The estimated parameters are obtained using the robust Marquardt algorithm (Marquardt, 1963), which is a combination of a Newton-Raphson algorithm and a steepest descent algorithm. The iterations are stopped when the difference between two consecutive log-likelihoods is small (< 10^-3), the estimated coefficients are stable (consecutive values < 10^-3), and the gradient is small enough (< 10^-3), by default. Cubic M-splines of order 4 are used for the hazard function, and I-splines (integrated M-splines) are used for the cumulative hazard function.
The inverse of the Hessian matrix is the variance estimator. To deal with the positivity constraint of the variance components and the spline coefficients, a squared transformation is used and the standard errors are computed by the delta-method (Knight & Xekalaki, 2000). The smoothing parameter can be chosen by maximizing a likelihood cross-validation criterion (Joly and others, 1998).
Based on the joint surrogate model, we propose a new definition of Kendall's τ. Moreover, distinct numerical integration methods are available to approximate the integrals in the marginal log-likelihood.
Non-convergence case management procedure
Special attention must be given to initializing the model parameters, the choice of the number of spline knots, the smoothing parameters and the number of quadrature points, in order to solve convergence issues. Parameters are first initialized using the user's desired strategy, as specified by the option true.init.val. When numerical or convergence problems are encountered, with kappa.use set to 4, the model is fitted again using a combination of the following strategies: vary the number of quadrature points (nb.gh to nb.gh2, or nb.gh2 to nb.gh) when Gaussian-Hermite quadrature integration is used (see int.method); divide or multiply the smoothing parameters (κ1, κ2) by 10 or 100 according to their preceding values, or use the parameter vectors obtained during the last iteration (with a modification of the number of quadrature points and smoothing parameters). Using this strategy, we usually obtained rejection rates below 3% in simulations. A sensitivity analysis conducted without this strategy gave similar results on the converged samples, with about a 23% rejection rate.
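A short sketch (argument values chosen for illustration; see also the examples at the bottom of this page) that keeps kappa.use = 4 and so relies on this automatic re-estimation strategy, letting the number of quadrature nodes switch between nb.gh and nb.gh2 if needed:

## Not run: 
library(frailtypack)
data(dataOvarian)
fit.cop <- jointSurroCopPenal(data = dataOvarian, typecopula = 1, n.knots = 8,
  kappa.use = 4,             # allow the smoothing parameters to be rescaled on failure
  nb.gh = 20, nb.gh2 = 32,   # quadrature sizes for the initial fit and the refits
  scale = 1/365, print.iter = FALSE)
## End(Not run)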
This function returns an object of class 'jointSurroPenal' with the following elements:
EPS |
A vector containing the obtained convergence thresholds with the Marquardt algorithm, for the parameters, the log-likelihood and for the gradient; |
b |
A vector containing estimates for the splines parameters; the elements of the lower triangular matrix (L) from the Cholesky decomposition such that |
varH |
The variance matrix of all parameters in |
varHIH |
The robust estimation of the variance matrix of all parameters in |
loglikPenal |
The complete marginal penalized log-likelihood; |
LCV |
the approximated likelihood cross-validation criterion in the semiparametric case (with |
xS |
vector of times for surrogate endpoint where both survival and hazard function are estimated. By default seq(0,max(time),length=99), where time is the vector of survival times; |
lamS |
array (dim = 3) of hazard estimates and confidence bands, for surrogate endpoint; |
survS |
array (dim = 3) of baseline survival estimates and confidence bands, for surrogate endpoint; |
xT |
vector of times for true endpoint where both survival and hazard function are estimated. By default seq(0, max(time), length = 99), where time is the vector of survival times; |
lamT |
array (dim = 3) of hazard estimates and confidence bands, for true endpoint; |
survT |
array (dim = 3) of baseline survival estimates and confidence bands, for true endpoint; |
n.iter |
number of iterations needed to converge; |
theta |
Estimate for |
gamma |
Estimate for |
alpha |
Estimate for |
zeta |
A value equal to |
sigma.s |
Estimate for |
sigma.t |
Estimate for |
sigma.st |
Estimate for |
beta.s |
Estimate for |
beta.t |
Estimate for |
ui |
A binary that indicates whether the heterogeneity between trials on the baseline risk has been considered ( |
ktau |
The Kendall's |
R2.boot |
The
|
Coefficients |
The estimates with the corresponding standard errors and the 95 |
kappa |
Positive smoothing parameters used for convergence. These values could be different from the initial values if |
scale |
The value used to rescale the survival times |
data |
The dataset used in the model |
varcov.Sigma |
Covariance matrix of the estimates of ( |
parameter |
List of all arguments used in the model |
type.joint |
A code |
Casimir Ledoux Sofeu [email protected], [email protected] and Virginie Rondeau [email protected]
Burzykowski, T., Molenberghs, G., Buyse, M., Geys, H., and Renard, D. (2001). Validation of surrogate end points in multiple randomized clinical trials with failure time end points. Journal of the Royal Statistical Society: Series C (Applied Statistics) 50, 405-422.
Buyse, M., Molenberghs, G., Burzykowski, T., Renard, D., and Geys, H. (2000). The validation of surrogate endpoints in meta-analyses of randomized experiments. Biostatistics 1, 49-67
Sofeu, C. L., Emura, T., and Rondeau, V. (2019). One-step validation method for surrogate endpoints using data from multiple randomized cancer clinical trials with failure-time endpoints. Statistics in Medicine 38, 2928-2942.
R. B. Nelsen. An introduction to Copulas. Springer, 2006
Prenen, L., Braekers, R., and Duchateau, L. (2017). Extending the archimedean copula methodology to model multivariate survival data grouped in clusters of variable size. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 79, 483-505.
Sofeu, C. L., Emura, T., and Rondeau, V. (2020). A joint frailty-copula model for meta-analytic
validation of failure time surrogate endpoints in clinical trials. Under review
jointSurrCopSimul, summary.jointSurroPenal, jointSurroPenal, jointSurroPenalSimul
## Not run: 
# Data from the advanced ovarian cancer randomized clinical trials.
data(dataOvarian)

joint.surro.Gumbel <- jointSurroCopPenal(data = dataOvarian, int.method = 0,
  n.knots = 8, maxit = 50, kappa.use = 4, nb.mc = 1000, typecopula = 2,
  print.iter = FALSE, scale = 1/365)
print(joint.surro.Gumbel)

joint.surro.Clayton <- jointSurroCopPenal(data = dataOvarian, int.method = 0,
  n.knots = 8, maxit = 50, kappa.use = 4, nb.mc = 1000, typecopula = 1,
  print.iter = FALSE, scale = 1/365)
print(joint.surro.Clayton)
## End(Not run)
Joint Frailty Surrogate model definition
Fit the one-step joint surrogate model for the evaluation of a candidate surrogate endpoint,
with different integration methods on the random effects, using a semiparametric penalized
likelihood estimation. This approach extends that of Burzykowski et al.
(2001) by
including in the same joint frailty model the individual-level and the trial-level random effects.
This function can also be used for mediation analysis, where a direct effect of the surrogate time S on the final endpoint T is allowed through a function g(S).
For the j-th subject (j = 1,..., n_i) of the i-th trial (i = 1,..., G), the joint surrogate model specifies proportional hazard models for the surrogate and the true endpoints, built on the baseline hazard functions λ_0S(t) and λ_0T(t), a shared individual-level frailty ω_ij, a shared trial-level frailty u_i, correlated treatment-by-trial interactions v_Si and v_Ti, the fixed treatment effects β_S and β_T, and the power parameters ζ and α, where ω_ij ~ N(0, θ), u_i ~ N(0, γ), ω_ij ⊥ u_i, u_i ⊥ v_Si, u_i ⊥ v_Ti, and (v_Si, v_Ti)^T ~ N(0, Σ_v), with Σ_v having diagonal elements σ²_S and σ²_T and off-diagonal element σ_ST.
In this model, λ_0S(t) is the baseline hazard function associated with the surrogate endpoint and β_S the fixed treatment effect (or log-hazard ratio); λ_0T(t) is the baseline hazard function associated with the true endpoint and β_T the fixed treatment effect. ω_ij is a shared individual-level frailty that serves to take into account the heterogeneity in the data at the individual level; u_i is a shared frailty effect associated with the baseline hazard function that serves to take into account the heterogeneity between trials of the baseline hazard function, associated with the fact that we have several trials in this meta-analytical design. The power parameters ζ and α distinguish the individual-level and trial-level heterogeneities, respectively, between the surrogate and the true endpoint. v_Si and v_Ti are two correlated random effects treatment-by-trial interactions. Z_ij1 represents the treatment arm to which the patient has been randomized.
In the mediation analysis setting, the linear predictor of the hazard function for the true endpoint additionally includes the term I(S_ij ≤ t) g(S_ij), which allows for a direct effect of the surrogate time on the risk of occurrence of the final endpoint T.
Surrogacy evaluation
We propose new definitions of Kendall's τ and of the coefficient of determination as individual-level and trial-level association measures, to evaluate a candidate surrogate endpoint (Sofeu et al., 2018). For surrogacy in the mediation analysis setting, see the "Surrogacy through mediation" section.
Individual-level surrogacy
To measure the strength of the association between S_ij and T_ij after adjusting the marginal distributions for the trial and the treatment effects, as shown in Sofeu et al. (2018), we use Kendall's τ, defined as a function of θ, ζ, α and γ, which are estimated using the joint surrogate model defined previously. Kendall's τ is the difference between the probability of concordance and the probability of discordance of two realizations of (S_ij, T_ij). It belongs to the interval [-1, 1] and takes the value zero when S_ij and T_ij are independent. We estimate Kendall's τ using Monte-Carlo or Gaussian-Hermite quadrature integration methods. Its confidence interval is estimated using parametric bootstrap.
Trial-level surrogacy
The key motivation for validating a surrogate endpoint is to be able to predict the effect of treatment on the true endpoint, based on the observed effect of treatment on the surrogate endpoint. As shown by Buyse et al. (2000), the coefficient of determination obtained from the covariance matrix Σv of the random treatment-by-trial interactions can be used to evaluate this prediction, and therefore serves as a trial-level surrogacy measure. It is defined by:

R²trial = (σvST)² / (σ²vS σ²vT)

The standard error of R²trial is calculated using the delta method. We also propose R²trial and its 95% confidence interval computed using the parametric bootstrap. The use of the delta method can lead to confidence limits violating the interval [0,1], as noted by Burzykowski et al. (2001). However, using other methods would not significantly alter the findings of the surrogacy assessment.
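For a fitted model, both association measures are reported by the print and summary methods and can also be read directly from the returned object (a sketch; component names are those listed in the Value section below and the object name is illustrative):

## Not run: 
joint.surro.fit <- jointSurroPenal(data = dataOvarian, n.knots = 8,
                                   indicator.alpha = 0, nb.mc = 200, scale = 1/365)
joint.surro.fit$ktau          # individual-level Kendall's tau
joint.surro.fit$R2.boot       # trial-level R2 computed by parametric bootstrap
joint.surro.fit$Coefficients  # estimates with standard errors and confidence intervals
## End(Not run)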
Surrogacy through mediation
In the mediation analysis setting, the surrogacy measure is the proportion of the treatment effect on the final endpoint T that goes through its effect on the surrogate S. This measure is a time-dependent function PTE(t) defined as:

PTE(t) = NIE(t) / TE(t)

where NIE and TE stand for "natural indirect effect" and "total effect", respectively. The numerator is the difference of the survival functions of T for a subject whose treatment has been set to 1 (experimental arm) for both S and T, versus a subject for whom the treatment for T is still 1 but is set to 0 for S. This corresponds to the indirect effect (in terms of survival probability) of the treatment on T through S. The denominator is the total effect of the treatment on T.
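In practice, the mediation analysis is requested through the mediation argument together with the pte.* options described below. A minimal sketch (the settings are illustrative; see the gastadj example in the Examples section for a complete analysis):

## Not run: 
mod.med <- jointSurroPenal(data = gastadj, n.knots = 4, mediation = TRUE,
                           g.nknots = 1,      # knots of the spline basis for g(s)
                           pte.ntimes = 30,   # number of time points for PTE(t)
                           pte.nmc = 10000,   # Monte Carlo draws over the random effects
                           pte.boot = FALSE)  # no bootstrap confidence bands
summary(mod.med)
plot(mod.med)   # plot method for the fitted mediation model, as in the Examples
## End(Not run)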
jointSurroPenal(data, maxit = 50, indicator.zeta = 1, indicator.alpha = 1, 
  frail.base = 1, n.knots = 6, LIMparam = 0.001, LIMlogl = 0.001, 
  LIMderiv = 0.001, nb.mc = 300, nb.gh = 32, nb.gh2 = 20, adaptatif = 0, 
  int.method = 2, nb.iterPGH = 5, nb.MC.kendall = 10000, nboot.kendall = 1000, 
  true.init.val = 0, theta.init = 1, sigma.ss.init = 0.5, sigma.tt.init = 0.5, 
  sigma.st.init = 0.48, gamma.init = 0.5, alpha.init = 1, zeta.init = 1, 
  betas.init = 0.5, betat.init = 0.5, scale = 1, random.generator = 1, 
  kappa.use = 4, random = 0, random.nb.sim = 0, seed = 0, init.kappa = NULL, 
  ckappa = c(0,0), nb.decimal = 4, print.times = TRUE, print.iter = FALSE, 
  mediation = FALSE, g.nknots = 1, pte.times = NULL, pte.ntimes = NULL, 
  pte.nmc = 500, pte.boot = FALSE, pte.nboot = 2000, pte.boot.nmc = 500, 
  pte.integ.type = 2)
data |
A
|
maxit |
maximum number of iterations for the Marquardt algorithm.
The default being |
indicator.zeta |
A binary, indicating whether the power parameter |
indicator.alpha |
A binary, indicating whether the power parameter |
frail.base |
A binary, indicating whether the between-trial heterogeneity in the baseline risk
is considered ( |
n.knots |
integer giving the number of knots to use. Value required in
the penalized likelihood estimation. It corresponds to the (n.knots+2)
splines functions for the approximation of the hazard or the survival
functions. We estimate I- or M-splines of order 4. When the user sets the
number of knots to k (n.knots=k), the number of interior knots
is (k-2) and the number of splines is (k-2)+order. The number of knots must be
between 4 and 20. (See |
LIMparam |
Convergence threshold of the Marquardt algorithm for the
parameters, 10-3 by default (See |
LIMlogl |
Convergence threshold of the Marquardt algorithm for the
log-likelihood, 10-3 by default (See |
LIMderiv |
Convergence threshold of the Marquardt algorithm for the gradient, 10-3 by default
(See |
nb.mc |
Number of samples considered in the Monte-Carlo integration. Required in the event
|
nb.gh |
Number of nodes for the Gaussian-Hermite quadrature. It can be chosen among 5, 7, 9, 12, 15, 20 and 32. The default is 32. |
nb.gh2 |
Number of nodes for the Gauss-Hermite quadrature used to re-estimate the model,
in the event of non-convergence, defined as previously. The default is |
adaptatif |
A binary, indicates whether the pseudo adaptive Gaussian-Hermite quadrature |
int.method |
A numeric, indicates the integration method: |
nb.iterPGH |
Number of iterations before the re-estimation of the posterior random effects,
in the event of the two-step pseudo-adaptive Gauss-Hermite quadrature. If set to |
nb.MC.kendall |
Number of generated points used with the Monte-Carlo to estimate
integrals in the Kendall's |
nboot.kendall |
Number of samples considered in the parametric bootstrap to estimate the confidence
interval of the Kendall's |
true.init.val |
Numerical value. Indicates if the given initial values to parameters |
theta.init |
Initial values for |
sigma.ss.init |
Initial values for
|
sigma.tt.init |
Initial values for
|
sigma.st.init |
Initial values for
|
gamma.init |
Initial values for |
alpha.init |
Initial values for |
zeta.init |
Initial values for |
betas.init |
Initial values for |
betat.init |
Initial values for |
scale |
A numeric used to rescale (by multiplication) the survival times, to avoid numerical
problems in the event of convergence issues. If no rescaling is needed, the argument is set to 1, the default value.
e.g.: |
random.generator |
Random number generator used by the Fortran compiler,
|
kappa.use |
A numeric, that indicates how to manage the smoothing parameters k1
and k2 in the event of convergence issues. If it is set to |
random |
A binary indicating whether the random number generation is reset with a different environment
at each call |
random.nb.sim |
If |
seed |
The seed to use for data (or samples) generation. required if |
init.kappa |
Smoothing parameters used to penalize the log-likelihood. By default (init.kappa = NULL) the values used are obtained by cross-validation. |
ckappa |
Vector of two fixed values to add to the smoothing parameters. By default it is set to (0,0). This argument helps manage the smoothing parameters in the event of convergence issues. |
nb.decimal |
Number of decimal required for results presentation. |
print.times |
a logical parameter to print estimation time. Default is TRUE. |
print.iter |
a logical parameter to print iteration process. Default is FALSE. |
mediation |
a logical value indicating if the mediation analysis method is used. Default is FALSE. |
g.nknots |
In the case of a mediation analysis, indicates how many inner knots
are used in the splines basis for estimating the function |
pte.times |
In the mediation analysis setting, a vector of times for which the
function |
pte.ntimes |
In the mediation setting, if the argument |
pte.nmc |
An integer indicating how many Monte Carlo simulations are used
to integrate over the random effects in the computation of the function |
pte.boot |
A logical value indicating if bootstrapped confidence bands need to be computed for the
function |
pte.nboot |
An integer indicating how many bootstrapped replicates of |
pte.boot.nmc |
If |
pte.integ.type |
An integer indicating which type of integration over the distribution of
|
The estimated parameters are obtained using the robust Marquardt algorithm (Marquardt, 1963), which is a combination of a Newton-Raphson algorithm and a steepest descent algorithm. The iterations are stopped when the difference between two consecutive log-likelihoods is small (< 10^-3), the estimated coefficients are stable (difference between consecutive values < 10^-3), and the gradient is small enough (< 10^-3), by default. Cubic M-splines of order 4 are used for the hazard function, and I-splines (integrated M-splines) are used for the cumulative hazard function.
The inverse of the Hessian matrix is the variance estimator. To deal
with the positivity constraint of the variance components and of the spline
coefficients, a squared transformation is used and the standard errors are
computed by the delta method (Knight & Xekalaki, 2000). The smoothing
parameter can be chosen by maximizing a likelihood cross-validation
criterion (Joly and others, 1998).
Based on the joint surrogate model, we proposed a new definition
of Kendall's τ. Moreover, distinct numerical integration methods are available to approximate the
integrals in the marginal log-likelihood.
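As a sketch of how the integration strategy is chosen (data.sim is a dataset generated with jointSurrSimul as in the Examples section; the settings mirror those examples, where int.method = 2 combines Monte-Carlo integration with Gauss-Hermite quadrature):

## Not run: 
# Monte Carlo combined with classical Gauss-Hermite quadrature
fit.mcgh  <- jointSurroPenal(data = data.sim, int.method = 2,
                             nb.mc = 300, nb.gh = 20)
# Monte Carlo combined with pseudo-adaptive Gauss-Hermite quadrature
fit.mcpgh <- jointSurroPenal(data = data.sim, int.method = 2,
                             nb.mc = 300, nb.gh = 20, adaptatif = 1)
## End(Not run)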
Non-convergence case management procedure
Special attention must be given to the initialization of the model parameters, the choice of the number of
spline knots, the smoothing parameters and the number of quadrature points, in order to solve convergence
issues. Parameters are first initialized using the strategy chosen by the user, as specified
by the option true.init.val. When numerical or convergence problems are encountered,
with kappa.use set to 4, the model is fitted again using a combination of the following strategies:
vary the number of quadrature points (nb.gh to nb.gh2, or nb.gh2 to nb.gh)
when Gauss-Hermite quadrature integration is used (see int.method);
divide or multiply the smoothing parameters (k1, k2) by 10 or 100 according to
their preceding values, or use the parameter vectors obtained at the last iteration (with a
modification of the number of quadrature points and smoothing parameters). Using this strategy,
we usually obtained rejection rates below 3% in simulations. A sensitivity analysis
conducted without this strategy gave similar results on the converged samples,
with a rejection rate of about 23%.
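A hedged sketch of the arguments most often adjusted when such convergence problems occur (the values are illustrative; init.kappa and scale follow the ovarian cancer example below):

## Not run: 
fit <- jointSurroPenal(data = dataOvarian, n.knots = 8,
                       true.init.val = 0,           # initialization strategy
                       kappa.use = 4,               # allow automatic adjustment of k1, k2
                       init.kappa = c(2000, 1000),  # user-supplied smoothing parameters
                       ckappa = c(0, 0),            # constants added to the smoothing parameters
                       nb.gh = 32, nb.gh2 = 20,     # quadrature points and fallback value
                       scale = 1/365)               # rescale times to limit numerical problems
## End(Not run)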
This function returns an object of class jointSurroPenal, or jointSurroMed in the mediation analysis setting, with the following elements:
EPS |
A vector containing the obtained convergence thresholds with the Marquardt algorithm, for the parameters, the log-likelihood and for the gradient; |
b |
A vector containing estimates for the spline parameters;
the power parameter |
varH |
The variance matrix of all parameters in |
varHIH |
The robust estimation of the variance matrix of all parameters in |
loglikPenal |
The complete marginal penalized log-likelihood; |
LCV |
the approximated likelihood cross-validation criterion in the semiparametric case (with |
xS |
vector of times for the surrogate endpoint at which both the survival and hazard functions are estimated. By default seq(0,max(time),length=99), where time is the vector of survival times; |
lamS |
array (dim = 3) of hazard estimates and confidence bands, for surrogate endpoint; |
survS |
array (dim = 3) of baseline survival estimates and confidence bands, for surrogate endpoint; |
xT |
vector of times for true endpoint where both survival and hazard function are estimated. By default seq(0, max(time), length = 99), where time is the vector of survival times; |
lamT |
array (dim = 3) of hazard estimates and confidence bands, for true endpoint; |
survT |
array (dim = 3) of baseline survival estimates and confidence bands, for true endpoint; |
n.iter |
number of iterations needed to converge; |
theta |
Estimate for |
gamma |
Estimate for |
alpha |
Estimate for |
zeta |
Estimate for |
sigma.s |
Estimate for |
sigma.t |
Estimate for |
sigma.st |
Estimate for |
beta.s |
Estimate for |
beta.t |
Estimate for |
ui |
A binary, that indicates if the between-trial heterogeneity in the baseline risk
has been considered ( |
ktau |
The Kendall's |
R2.boot |
The
|
Coefficients |
The estimates with the corresponding standard errors and the 95% confidence intervals |
kappa |
Positive smoothing parameters used for convergence. These values could be different from the initial
values if |
scale |
The value used to rescale the survival times |
data |
The dataset used in the model |
varcov.Sigma |
covariance matrix of
the estimates of ( |
parameter |
list of all arguments used in the model |
mediation |
List returned in the case where the option
|
Casimir Ledoux Sofeu [email protected], [email protected], Quentin Le Coent [email protected] and Virginie Rondeau [email protected],
Burzykowski, T., Molenberghs, G., Buyse, M., Geys, H., and Renard, D. (2001). Validation of surrogate end points in multiple randomized clinical trials with failure time end points. Journal of the Royal Statistical Society: Series C (Applied Statistics) 50, 405-422.
Buyse, M., Molenberghs, G., Burzykowski, T., Renard, D., and Geys, H. (2000). The validation of surrogate endpoints in meta-analyses of randomized experiments. Biostatistics 1, 49-67
Sofeu, C. L., Emura, T., and Rondeau, V. (2019). One-step validation method for surrogate endpoints using data from multiple randomized cancer clinical trials with failure-time endpoints. Statistics in Medicine 38, 2928-2942.
Le Coent, Q., Legrand, C., Rondeau, V. (2021). Time-to-event surrogate endpoint validation using mediation and meta-analytic data. Article submitted.
jointSurrSimul
, summary.jointSurroPenal
, jointSurroPenalSimul
## Not run: 

# Generation of data to use
data.sim <- jointSurrSimul(n.obs=600, n.trial = 30, cens.adm=549.24, 
          alpha = 1.5, theta = 3.5, gamma = 2.5, zeta = 1, sigma.s = 0.7, 
          sigma.t = 0.7, cor = 0.8, betas = -1.25, betat = -1.25, 
          full.data = 0, random.generator = 1, seed = 0, nb.reject.data = 0)

# Surrogacy evaluation based on generated data with a combination of Monte Carlo 
# and classical Gaussian Hermite integration.
# (Computation takes around 5 minutes)
joint.surro.sim.MCGH <- jointSurroPenal(data = data.sim, int.method = 2, 
                   nb.mc = 300, nb.gh = 20)

# Surrogacy evaluation based on generated data with a combination of Monte Carlo 
# and Pseudo-adaptive Gaussian Hermite integration.
# (Computation takes around 4 minutes)
joint.surro.sim.MCPGH <- jointSurroPenal(data = data.sim, int.method = 2, 
                   nb.mc = 300, nb.gh = 20, adaptatif = 1)

# Results
summary(joint.surro.sim.MCGH)
summary(joint.surro.sim.MCPGH)

# Data from the advanced ovarian cancer randomized clinical trials.
# Joint surrogate model with \eqn{\zeta} fixed to 1, 8 nodes spline 
# and the rescaled survival time.
data(dataOvarian)
# (Computation takes around 20 minutes)
joint.surro.ovar <- jointSurroPenal(data = dataOvarian, n.knots = 8, 
                init.kappa = c(2000,1000), indicator.alpha = 0, nb.mc = 200, 
                scale = 1/365)
# results
summary(joint.surro.ovar)
print(joint.surro.ovar)

# Mediation analysis on the adjuvant chemotherapy dataset where the surrogate 
# is a time-to-relapse and the final endpoint is death.
# 4 knots are used to estimate the two baseline hazard functions.
# The function g(s) is estimated using cubic b-splines with 1 interior 
# knot ('g.nknots=1'). The function \eqn{PTE(t)} is computed at 100 time points 
# using 10,000 Monte Carlo simulations for integration over the random effects.
# To reduce computation time in the provided example only one fifth of the 
# original dataset is used and the confidence bands for the function 
# \eqn{PTE(t)} are not computed, nor are the power parameters associated with 
# the random effects. The full example is commented thereafter.

# We first need to change the variable "statusS" which in the dataset 
# encodes the indicator of a disease-free survival event to an indicator 
# of a time-to-relapse event (i.e., resurgence of cancer or 
# onset of a second cancer) that excludes death as a composite event.
# Thus, the patients whose variables "timeS" and "timeT" are equal 
# and whose variable "statusS" is equal to 1 will have 
# "statusS" set to 0. We do this because a composite endpoint may not 
# be appropriate in the setting of mediation analysis.
data(gastadj)
gastadj$timeS <- gastadj$timeS/365
gastadj$timeT <- gastadj$timeT/365

# here changing "statusS" to correspond to a time-to-relapse event
gastadj[gastadj$timeS == gastadj$timeT & gastadj$statusS == 1, c("statusS")] <- 0

# select 20% of the original dataset
set.seed(1)
n <- nrow(gastadj)
subset <- gastadj[sort(sample(1:nrow(gastadj), round(n*0.2), replace = FALSE)), ]

# Mediation model ('mediation=TRUE'). Computation takes around 17 minutes
mod.gast <- jointSurroPenal(subset, n.knots = 4, indicator.zeta = 0, 
                            indicator.alpha = 0, mediation = TRUE, g.nknots = 1, 
                            pte.ntimes = 30, pte.nmc = 10000, pte.boot = FALSE)

summary(mod.gast)
plot(mod.gast)

# Example on the full dataset, including estimation of the power parameters
# mod.gast2 <- jointSurroPenal(gastadj, n.knots = 4, mediation = TRUE, g.nknots = 1,
#                              pte.ntimes = 30, pte.nmc = 10000, pte.boot = TRUE,
#                              pte.nboot = 2000, pte.boot.nmc = 10000)

# results
# plot(mod.gast2)
# summary(mod.gast2)

## End(Not run)
This function allows simulation studies based on the joint frailty surrogate model described in jointSurroPenal. Simulation can also be based on the joint frailty-copula model described in jointSurroCopPenal.
jointSurroPenalSimul(maxit = 40, indicator.zeta = 1, indicator.alpha = 1, 
  frail.base = 1, n.knots = 6, nb.dataset = 1, nbSubSimul = 1000, 
  ntrialSimul = 30, LIMparam = 0.001, LIMlogl = 0.001, LIMderiv = 0.001, 
  nb.mc = 300, nb.gh = 32, nb.gh2 = 20, adaptatif = 0, int.method = 2, 
  nb.iterPGH = 5, nb.MC.kendall = 10000, nboot.kendall = 1000, 
  true.init.val = 0, theta.init = 1, sigma.ss.init = 0.5, sigma.tt.init = 0.5, 
  sigma.st.init = 0.48, gamma.init = 0.5, alpha.init = 1, zeta.init = 1, 
  betas.init = 0.5, betat.init = 0.5, random.generator = 1, 
  equi.subj.trial = 1, prop.subj.trial = NULL, equi.subj.trt = 1, 
  prop.subj.trt = NULL, theta2 = 3.5, zeta = 1, gamma.ui = 2.5, alpha.ui = 1, 
  betas = -1.25, betat = -1.25, lambdas = 1.8, nus = 0.0045, lambdat = 3, 
  nut = 0.0025, prop.cens = 0, time.cens = 549, R2 = 0.81, sigma.s = 0.7, 
  sigma.t = 0.7, kappa.use = 4, random = 0, random.nb.sim = 0, seed = 0, 
  nb.reject.data = 0, init.kappa = NULL, ckappa = c(0,0), 
  type.joint.estim = 1, type.joint.simul = 1, mbetast = NULL, 
  mbetast.init = NULL, typecopula = 1, theta.copula = 6, thetacopula.init = 3, 
  filter.surr = c(1), filter.true = c(1), nb.decimal = 4, pfs = 0, 
  print.times = TRUE, print.iter = FALSE)
maxit |
maximum number of iterations for the Marquardt algorithm.
Default is |
indicator.zeta |
A binary, indicating whether the power parameter |
indicator.alpha |
A binary, indicating whether the power parameter |
frail.base |
Indicates whether the between-trial heterogeneity in the baseline risk is considered ( |
n.knots |
integer giving the number of knots to use. Value required in
the penalized likelihood estimation. It corresponds to the (n.knots+2)
splines functions for the approximation of the hazard or the survival
functions. We estimate I- or M-splines of order 4. When the user sets the
number of knots to k (n.knots=k), the number of interior knots
is (k-2) and the number of splines is (k-2)+order. The number of knots must be
between 4 and 20. (See |
nb.dataset |
Number of datasets to analyze. The default is |
nbSubSimul |
Number of subjects. |
ntrialSimul |
Number of trials. |
LIMparam |
Convergence threshold of the Marquardt algorithm for the
parameters, 10-3
by default (See |
LIMlogl |
Convergence threshold of the Marquardt algorithm for the
log-likelihood, 10-3
by default (See |
LIMderiv |
Convergence threshold of the Marquardt algorithm for the gradient,
10-3 by default
(See |
nb.mc |
Number of samples considered in the Monte-Carlo integration. Required in the event
|
nb.gh |
Number of nodes for the Gaussian-Hermite quadrature. It can be chosen among 5, 7, 9, 12, 15, 20 and 32. The default is 32. |
nb.gh2 |
Number of nodes for the Gauss-Hermite quadrature used to re-estimate the model,
in the event of non-convergence, defined as previously. The default is |
adaptatif |
A binary, indicates whether the pseudo adaptive Gaussian-Hermite quadrature
|
int.method |
A numeric, indicates the integration method: |
nb.iterPGH |
Number of iterations before the re-estimation of the posterior random effects,
in the event of the two-step pseudo-adaptive Gauss-Hermite quadrature. If set to |
nb.MC.kendall |
Number of generated points used with the Monte-Carlo to estimate
integrals in the Kendall's |
nboot.kendall |
Number of samples considered in the parametric bootstrap to estimate the confidence
interval of the Kendall's |
true.init.val |
Numerical value. Indicates if the real parameter values
|
theta.init |
Initial values for |
sigma.ss.init |
Initial values for
|
sigma.tt.init |
Initial values for
|
sigma.st.init |
Initial values for
|
gamma.init |
Initial values for |
alpha.init |
Initial values for |
zeta.init |
Initial values for |
betas.init |
Initial values for |
betat.init |
Initial values for |
random.generator |
Random number generator used by the Fortran compiler,
|
equi.subj.trial |
A binary, that indicates if the same proportion of subjects per trial
should be considered in the process of data generation (1) or not (0). In the event of
different trial sizes, fill in |
prop.subj.trial |
Vector of the proportions of subjects to consider per trial.
Required if the argument |
equi.subj.trt |
Indicates if the same proportion of treated subjects per trial should be
considered |
prop.subj.trt |
Vector of the proportions of treated subjects to consider per trial.
Required if the argument |
theta2 |
True value for |
zeta |
True value for |
gamma.ui |
True value for |
alpha.ui |
True value for |
betas |
True value for |
betat |
True value for |
lambdas |
Desired scale parameter for the |
nus |
Desired shape parameter for the |
lambdat |
Desired scale parameter for the |
nut |
Desired shape parameter for the |
prop.cens |
A value between |
time.cens |
Censorship time. If argument |
R2 |
Desired
|
sigma.s |
True value for |
sigma.t |
True value for |
kappa.use |
A numeric, that indicates how to manage the smoothing parameters k1
and k2 in the event of convergence issues. If it is set
to |
random |
A binary indicating whether the random number generation is reset with a different environment
at each call |
random.nb.sim |
If |
seed |
The seed to use for data generation. Required if |
nb.reject.data |
When the simulations have been split into several packets, this argument
indicates the number of generated datasets to reject before starting the simulation studies.
This prevents reproducing the same datasets across all simulation packets. It must be set to
|
init.kappa |
Smoothing parameters used to penalize the log-likelihood. By default (init.kappa = NULL) the values used are obtained by cross-validation. |
ckappa |
Vector of two constants to add to the smoothing parameters. By default it is set to (0,0). This argument helps manage the smoothing parameters in the event of convergence issues. |
type.joint.estim |
Model to consider for the estimation. If this argument is set to |
type.joint.simul |
Model to consider for data generation. If this argument is set to |
mbetast |
Matrix or dataframe containing the true fixed treatment effects associated with the covariates. This matrix includes
two columns (the first for the surrogate endpoint and the second for the true endpoint) and a number of rows corresponding
to the number of covariates. Required if |
mbetast.init |
Matrix or dataframe containing the initial values for the fixed effects associated with the covariates. This matrix includes
two columns (the first for the surrogate endpoint and the second for the true endpoint) and a number of rows corresponding
to the number of covariates. Required if |
typecopula |
The copula function used for estimation: 1 = Clayton, 2 = Gumbel. Required if |
theta.copula |
The copula parameter. Required if |
thetacopula.init |
Initial value for the copula parameter. Required if |
filter.surr |
Vector whose length equals the number of covariates, with the i-th element indicating whether the hazard for the surrogate endpoint is adjusted on the i-th covariate (code 1) or not (code 0). By default, only the treatment effect is considered. |
filter.true |
Vector defined as |
nb.decimal |
Number of decimal required for results presentation. |
pfs |
Used to specify whether the time to progression should be censored by the death time (0) or not (1). The default is 0. If pfs is set to 1, death is included in the surrogate endpoint, as in the definition of PFS or DFS. |
print.times |
a logical parameter to print estimation time. Default is TRUE. |
print.iter |
a logical parameter to print iteration process. Default is FALSE. |
The estimated parameters are obtained using the robust Marquardt algorithm (Marquardt, 1963), which is a combination of a Newton-Raphson algorithm and a steepest descent algorithm. The iterations are stopped when the difference between two consecutive log-likelihoods is small (< 10^-3), the estimated coefficients are stable (difference between consecutive values < 10^-3), and the gradient is small enough (< 10^-3), by default. Cubic M-splines of order 4 are used for the hazard function, and I-splines (integrated M-splines) are used for the cumulative hazard function.
The inverse of the Hessian matrix is the variance estimator. To deal
with the positivity constraint of the variance components and of the spline
coefficients, a squared transformation is used and the standard errors are
computed by the delta method (Knight & Xekalaki, 2000). The smoothing
parameter can be chosen by maximizing a likelihood cross-validation
criterion (Joly and others, 1998).
Based on the joint surrogate model, we proposed a new definition
of Kendall's τ. In contrast, for the joint frailty-copula model, we
based the individual-level association on a definition of Kendall's τ close to
that of the classical two-step approach (Burzykowski et al., 2001), but conditional
on the random effects. Moreover, distinct numerical integration methods are available to approximate the
integrals in the marginal log-likelihood.
Non-convergence case management procedure
Special attention must be given to the initialization of the model parameters, the choice of the number of
spline knots, the smoothing parameters and the number of quadrature points, in order to solve convergence
issues. Parameters are first initialized using the strategy chosen by the user, as specified
by the option true.init.val. When numerical or convergence problems are encountered,
with kappa.use set to 4, the model is fitted again using a combination of the following strategies:
vary the number of quadrature points (nb.gh to nb.gh2, or nb.gh2 to nb.gh)
when Gauss-Hermite quadrature integration is used (see int.method);
divide or multiply the smoothing parameters (k1, k2) by 10 or 100 according to
their preceding values, or use the parameter vectors obtained at the last iteration (with a
modification of the number of quadrature points and smoothing parameters). Using this strategy,
we usually obtained rejection rates below 3% in simulations. A sensitivity analysis
conducted without this strategy gave similar results on the converged samples,
with a rejection rate of about 23%.
This function returns an object of class jointSurroPenalSimul with the following elements:
theta2 |
True value for |
theta.copula |
Copula parameter, if |
zeta |
true value for |
gamma.ui |
true value for |
alpha.ui |
true value for |
sigma.s |
true value for
|
sigma.t |
true value for
|
sigma.st |
true value for
|
betas |
true value for |
betat |
true value for |
R2 |
true value for
|
nb.subject |
total number of subjects used; |
nb.trials |
total number of trials used; |
nb.simul |
number of simulated datasets; |
nb.gh |
number of nodes for the Gaussian-Hermite quadrature; |
nb.gh2 |
number of nodes for the Gauss-Hermite quadrature used to re-estimate the model, in the event of non-convergence; |
nb.mc |
number of samples considered in the Monte-Carlo integration; |
kappa.use |
a numeric, that indicates how to manage the smoothing parameters k1 and k2 in the event of convergence issues; |
n.knots |
number of knots used for splines; |
int.method |
integration method used; |
n.iter |
mean number of iterations needed to converge; |
dataTkendall |
a matrix with |
dataR2boot |
a matrix with |
dataParamEstim |
a dataframe including all estimates with the associated standard errors, for all simulation. All non-convergence cases are represented by a line of 0; |
dataHessian |
Dataframe of the variance-Covariance matrices of the estimates for all simulations |
dataHessianIH |
Dataframe of the robust estimation of the variance matrices of the estimates for all simulations |
datab |
Dataframe of the estimates for all simulations which reached convergence |
type.joint |
the estimation model; 1 for the joint surrogate and 3 for joint frailty-copula model |
type.joint.simul |
The model used for data generation; 1 for joint surrogate and 3 for joint frailty-copula |
true.init.val |
Indicates if the real parameter values have been used as initial values for the model |
Casimir Ledoux Sofeu [email protected], [email protected] and Virginie Rondeau [email protected]
Burzykowski, T., Molenberghs, G., Buyse, M., Geys, H., and Renard, D. (2001). Validation of surrogate end points in multiple randomized clinical trials with failure time end points. Journal of the Royal Statistical Society: Series C (Applied Statistics) 50, 405-422.
Sofeu, C. L., Emura, T., and Rondeau, V. (2019). One-step validation method for surrogate endpoints using data from multiple randomized cancer clinical trials with failure-time endpoints. Statistics in Medicine 38, 2928-2942.
jointSurroPenal
, jointSurroCopPenal
, summary.jointSurroPenalSimul
, jointSurrSimul
, jointSurrCopSimul
## Not run: 

# Surrogacy model evaluation performance study based on 10 generated data 
# (Computation takes around 20 minutes using a processor including 40 
# cores and a read-only memory of 378 Go)
# To realize a simulation study on 100 samples or more (as required), use 
# nb.dataset = 100

### joint frailty model
joint.simul <- jointSurroPenalSimul(nb.dataset = 10, nbSubSimul = 600, 
                   ntrialSimul = 30, LIMparam = 0.001, LIMlogl = 0.001, 
                   LIMderiv = 0.001, nb.mc = 200, nb.gh = 20, nb.gh2 = 32, 
                   true.init.val = 1, print.iter = FALSE, pfs = 0)

# results
summary(joint.simul, d = 3, R2boot = 1) # bootstrap
summary(joint.simul, d = 3, R2boot = 0) # Delta-method

### joint frailty copula model
joint.simul.cop.clay <- jointSurroPenalSimul(nb.dataset = 10, nbSubSimul = 600, 
                   ntrialSimul = 30, nb.mc = 1000, type.joint.estim = 3, 
                   typecopula = 1, type.joint.simul = 3, theta.copula = 3, 
                   time.cens = 349, true.init.val = 1, R2 = 0.81, maxit = 40, 
                   print.iter = FALSE)

summary(joint.simul.cop.clay)

## End(Not run)
Kendall's τ estimation using numerical integration methods
This function estimates Kendall's τ based on the joint surrogate model
described in jointSurroPenal (Sofeu et al., 2019), for the evaluation of
a candidate surrogate endpoint at the individual level. We use the Monte Carlo and the Gauss-Hermite
quadrature methods for numerical integration. In the event of Gauss-Hermite quadrature,
it is better to choose at least 20 quadrature nodes for better results.
The actual number of nodes used is the maximum between 20 and nb.gh.
jointSurroTKendall(object = NULL, theta, gamma, alpha = 1, zeta = 1, 
  sigma.v = matrix(rep(0,4),2,2), int.method = 0, nb.MC.kendall = 10000, 
  nb.gh = 32, random.generator = 1, random = 0, random.nb.sim = 0, 
  seed = 0, ui = 1)
object |
An object inheriting from |
theta |
Variance of the individual-level random effect,
|
gamma |
Variance of the trial-level random effect associated with the baseline risk,
|
alpha |
Power parameter associated with
|
zeta |
Power parameter associated with
|
sigma.v |
Covariance matrix of the random effects treatment-by-trial interaction (vSi, vTi) |
int.method |
A numeric, indicates the integration method: |
nb.MC.kendall |
Number of generated points used with the Monte-Carlo to estimate
integrals in the Kendall's |
nb.gh |
Number of nodes for the Gaussian-Hermite quadrature. The default is |
random.generator |
Random number generator to use by the Fortran compiler,
|
random |
A binary indicating whether the random number generation is reset with a different environment
at each call |
random.nb.sim |
If |
seed |
The seed to use for data (or samples) generation. required if |
ui |
A binary, indicating whether the trial-level random effect associated with
the baseline risk is considered ( |
This function returns the estimated Kendall's τ.
Casimir Ledoux Sofeu [email protected], [email protected] and Virginie Rondeau [email protected]
Sofeu, C. L., Emura, T., and Rondeau, V. (2019). One-step validation method for surrogate
endpoints using data from multiple randomized cancer clinical trials with failure-time endpoints.
Statistics in Medicine 38, 2928-2942.
jointSurrSimul
, summary.jointSurroPenal
Ktau1 <- jointSurroTKendall(theta = 3.5, gamma = 2.5, nb.gh = 32)
Ktau2 <- jointSurroTKendall(theta = 1, gamma = 0.8, alpha = 1, zeta = 1, 
         nb.gh = 32)

###---Kendall's \eqn{\tau} from a joint surrogate model ---###
## Not run: 
data.sim <- jointSurrSimul(n.obs=400, n.trial = 20, cens.adm=549, 
         alpha = 1.5, theta = 3.5, gamma = 2.5, zeta = 1, sigma.s = 0.7, 
         sigma.t = 0.7, cor = 0.8, betas = -1.25, betat = -1.25, 
         full.data = 0, random.generator = 1, seed = 0, nb.reject.data = 0)

###---Estimation---###
joint.surrogate <- jointSurroPenal(data = data.sim, nb.mc = 300, 
                   nb.gh = 20, indicator.alpha = 1, n.knots = 6)

Ktau3 <- jointSurroTKendall(joint.surrogate)
Ktau4 <- jointSurroTKendall(joint.surrogate, nb.MC.kendall = 4000, seed = 1)

## End(Not run)
Data are generated from the one-step joint surrogate model (see jointSurroPenal
for more details).
jointSurrSimul(n.obs = 600, n.trial = 30, cens.adm = 549.24, alpha = 1.5, 
  theta = 3.5, gamma = 2.5, zeta = 1, sigma.s = 0.7, sigma.t = 0.7, cor = 0.8, 
  betas = -1.25, betat = -1.25, frailt.base = 1, lambda.S = 1.8, nu.S = 0.0045, 
  lambda.T = 3, nu.T = 0.0025, ver = 1, typeOf = 1, equi.subj.trial = 1, 
  equi.subj.trt = 1, prop.subj.trial = NULL, prop.subj.trt = NULL, 
  full.data = 0, random.generator = 1, random = 0, random.nb.sim = 0, 
  seed = 0, nb.reject.data = 0, pfs = 0)
n.obs |
Number of considered subjects. The default is |
n.trial |
Number of considered trials. The default is |
cens.adm |
censorship time. The default is |
alpha |
Fixed value for |
theta |
Fixed value for |
gamma |
Fixed value for |
zeta |
Fixed value for |
sigma.s |
Fixed value for
|
sigma.t |
Fixed value for
|
cor |
Desired level of correlation between vSi and vTi.
|
betas |
Fixed value for |
betat |
Fixed value for |
frailt.base |
Indicates whether the heterogeneity on the baseline risk is considered |
lambda.S |
Desired scale parameter for the |
nu.S |
Desired shape parameter for the |
lambda.T |
Desired scale parameter for the |
nu.T |
Desired shape parameter for the |
ver |
Number of covariates. For surrogacy evaluation, only one covariate, the treatment arm, is considered |
typeOf |
Type of joint model used for data generation: 0 = classical joint model
with a shared individual frailty effect (Rondeau, 2007), 1 = joint surrogate model with shared frailty
effects |
equi.subj.trial |
A binary variable that indicates if the same proportion of subjects should be included per trial (1)
or not (0). If 0, the proportions of subjects per trial are required in parameter |
equi.subj.trt |
A binary variable that indicates if the same proportion of subjects is randomized per trial (1)
or not (0). If 0, the proportions of treated subjects per trial are required in parameter |
prop.subj.trial |
The proportions of subjects per trial. Required if |
prop.subj.trt |
The proportions of randomized subjects per trial. Required if |
full.data |
Specifies whether the function returns the full dataset (1), including the random effects,
or the restricted dataset (0) with |
random.generator |
Random number generator used by the Fortran compiler,
|
random |
A binary indicating whether the random number generation is reset with a different environment
at each call |
random.nb.sim |
required if |
seed |
The seed to use for data (or samples) generation. Required if the argument |
nb.reject.data |
Number of generated datasets to reject before the considered dataset. This parameter is required
when data are generated for simulation studies. With a fixed parameter and |
pfs |
Used to specify whether the time to progression should be censored by the death time (0) or not (1). The default is 0. If pfs is set to 1, death is included in the surrogate endpoint, as in the definition of PFS or DFS. |
Only Gaussian random effects are considered in this data generation. If the parameter full.data
is set to 1, this function returns a list containing several elements, including the generated random effects.
The desired individual-level correlation (Kendall's τ) depends on the values of θ, γ, α and ζ.
If the parameter full.data is set to 0, this function returns a data.frame
with the following columns:
patientID |
A numeric, that represents the patient's identifier, must be unique; |
trialID |
A numeric, that represents the trial in which each patient was randomized; |
trt |
The treatment indicator for each patient, with 1 = treated, 0 = untreated; |
timeS |
The follow up time associated with the surrogate endpoint; |
statusS |
The event indicator associated with the surrogate endpoint. Normally 0 = no event, 1 = event; |
timeT |
The follow up time associated with the true endpoint; |
statusT |
The event indicator associated with the true endpoint. Normally 0 = no event, 1 = event; |
If the argument full.data is set to 1, additional columns corresponding to the random effects
ωij, ui, vSi and vTi are returned. Note that ui, vSi and vTi are returned only if typeOf
is set to 1.
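For instance, a dataset including the simulated random effects can be obtained by switching full.data to 1 in the call used in the Examples section (a sketch; all other settings are those of the example):

data.full <- jointSurrSimul(n.obs = 600, n.trial = 30, cens.adm = 549.24,
                            alpha = 1.5, theta = 3.5, gamma = 2.5, zeta = 1,
                            sigma.s = 0.7, sigma.t = 0.7, cor = 0.8,
                            betas = -1.25, betat = -1.25,
                            full.data = 1,   # also return the generated random effects
                            random.generator = 1, seed = 0,
                            nb.reject.data = 0, pfs = 0)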
Casimir Ledoux Sofeu [email protected], [email protected] and Virginie Rondeau [email protected]
Rondeau V., Mathoulin-Pelissier S., Jacqmin-Gadda H., Brouste V. and Soubeyran P. (2007). Joint frailty models for recurring events and death using maximum penalized likelihood estimation: application on cancer events. Biostatistics 8(4), 708-721.
Sofeu, C. L., Emura, T., and Rondeau, V. (2019). One-step validation method for surrogate endpoints using data from multiple randomized cancer clinical trials with failure-time endpoints. Statistics in Medicine 38, 2928-2942.
data.sim <- jointSurrSimul(n.obs=600, n.trial = 30, cens.adm=549.24, 
            alpha = 1.5, theta = 3.5, gamma = 2.5, sigma.s = 0.7, zeta = 1, 
            sigma.t = 0.7, cor = 0.8, betas = -1.25, betat = -1.25, 
            full.data = 0, random.generator = 1, seed = 0, 
            nb.reject.data = 0, pfs = 0)
This is a simulated dataset used to illustrate the two-part joint model included in the longiPenal function.
data(longDat)
This data frame contains the following columns:
The identification number of a patient
The measurement times of the biomarker
Treatment covariate
Biomarker value
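A quick way to inspect this simulated dataset (a sketch; the columns are those described above):

data(longDat)
str(longDat)    # longitudinal measurements of the biomarker
head(longDat)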
Fit a joint model for longitudinal data and a terminal event using a semiparametric penalized likelihood estimation or a parametric estimation on the hazard function.
The longitudinal outcomes yi(tik) (k=1,...,ni, i=1,...,N) for N subjects are described by a linear mixed model, and the risk of the terminal event is represented by a proportional hazard model. The joint model is constructed assuming that the processes are linked via a latent structure (Wulfsohn and Tsiatis 1997):

yi(tik) = XLi(tik)^T βL + Zi(tik)^T bi + εi(tik)   (longitudinal outcome)

λi(t | bi) = λ0(t) exp( XTi^T βT + h(bi, βL, Zi(t), XLi(t))^T ηT )   (terminal event)

where XLi(t) and XTi are vectors of fixed-effects covariates and βL and βT are the associated coefficients. Measurement errors εi(tik) are iid normally distributed with mean 0 and variance σε². The random effects bi = (b0i,...,bqi)^T ~ N(0, B1) are associated with the covariates Zi(t) and are independent from the measurement error. The relationship between the two processes is explained via h(bi, βL, Zi(t), XLi(t)), with coefficients ηT. Two forms of the function h(.) are available: the random effects bi and the current biomarker level mi(t) = XLi(tik)^T βL + Zi(tik)^T bi.
We consider that the longitudinal outcome can be subject to a quantification limit, i.e. some observations, below a level of detection s, cannot be quantified (left-censoring).
Alternatively, a two-part model is proposed to fit a semicontinuous biomarker. The two-part model decomposes the biomarker's distribution into a binary outcome (zero vs. positive values) and a continuous outcome (positive values). In the conditional form, the continuous part is conditional on a positive value while in the marginal form, the continuous part corresponds to the marginal mean of the biomarker. A logistic mixed effects model fits the binary outcome and a linear or a lognormal mixed effects model fits the continuous outcome.
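A hedged sketch of a two-part specification (data, variable names and settings are hypothetical; the exact form expected for formula.Binary and random.Binary is given in the argument descriptions below):

## Not run: 
fit.tp <- longiPenal(Surv(time, event) ~ trt,                 # terminal event
                     biomarker ~ trt + year,                  # continuous part of the biomarker
                     formula.Binary = biomarker ~ trt + year, # binary part (zero vs. positive)
                     data = dat, data.Longi = datLongi,
                     random = c("1"), random.Binary = c("1"), # random intercepts
                     id = "id", link = "Two-part", timevar = "year",
                     GLMlog = TRUE,   # lognormal model for the positive values
                     MTP = FALSE,     # conditional (rather than marginal) two-part model
                     n.knots = 7, kappa = 2)
## End(Not run)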
A mediation analysis is possible to derive the proportion of the treatment effect on the survival outcome that is due to the treatment effect on the longitudinal outcome. This proportion of treatment effect is derived as the ratio of the indirect effect over the total effect, and is defined on the survival scale. The indirect effect is the difference between the survival at a given time when the treatment is set to 1 for both the survival and the longitudinal outcomes, and the survival when the treatment is set to 1 for the survival endpoint but to 0 for the longitudinal outcome. The total effect is the difference of survivals when the treatment is set to 1 or 0 for both endpoints.
longiPenal(formula, formula.LongitudinalData, data, data.Longi, 
  formula.Binary = FALSE, random, random.Binary = FALSE, fixed.Binary = FALSE, 
  GLMlog = FALSE, MTP = FALSE, id, intercept = TRUE, link = "Random-effects", 
  timevar = FALSE, left.censoring = FALSE, n.knots, kappa, maxit = 350, 
  hazard = "Splines", mediation = FALSE, med.center = NULL, med.trt = NULL, 
  init.B, init.Random, init.Eta, method.GH = "Standard", seed.MC = 1, n.nodes, 
  LIMparam = 1e-3, LIMlogl = 1e-3, LIMderiv = 1e-3, print.times = TRUE, 
  med.nmc = 500, pte.times = NULL, pte.ntimes = NULL, pte.nmc = 500, 
  pte.boot = FALSE, pte.nboot = 2000)
formula |
a formula object, with the response on the left of a
|
formula.LongitudinalData |
a formula object, only requires terms on the right to indicate which variables are modelling the longitudinal outcome. It must follow the standard form used for linear mixed-effects models. Interactions are possible using * or :. |
data |
a 'data.frame' with the variables used in |
data.Longi |
a 'data.frame' with the variables used in
|
formula.Binary |
a formula object, only requires terms on the right to indicate which variables are modelling the binary part of the two-part model fitting the longitudinal semicontinuous outcome. It must follow the standard form used for linear mixed-effects models. Interactions are possible using * or :. |
random |
Names of variables for the random effects of the longitudinal
outcome. Maximum 3 random effects are possible at the moment. The random
intercept is chosen using |
random.Binary |
Names of variables for the random effects of the binary
part of the two-part model fitting the longitudinal semicontinuous outcome.
The random intercept is chosen using |
fixed.Binary |
Fix the value of the intercept in the binary part of a two-part model. |
GLMlog |
Logical value. Use a lognormal distribution for the biomarker (instead of the default normal distribution). |
MTP |
Logical value. Marginal two-part joint model instead of conditional two-part joint model (only with two-part models). |
id |
Name of the variable representing the individuals. |
intercept |
Logical value. Is the fixed intercept of the biomarker
included in the mixed-effects model? The default is |
link |
Type of link function for the dependence between the biomarker
and death: |
timevar |
Indicates the time varying variables to take into account this evolution over time in the link with the survival model (useful with 'Current-level' and 'Two-part' links) |
left.censoring |
Is the biomarker left-censored below a threshold
|
n.knots |
Integer giving the number of knots to use. Value required in
the penalized likelihood estimation. It corresponds to the (n.knots+2)
splines functions for the approximation of the hazard or the survival
functions. We estimate I- or M-splines of order 4. When the user sets the
number of knots to k (n.knots=k), the number of interior knots
is (k-2) and the number of splines is (k-2)+order. The number of knots must be
between 4 and 20. (See Note in |
kappa |
Positive smoothing parameter in the penalized likelihood
estimation. The coefficient kappa of the integral of the squared second
derivative of hazard function in the fit (penalized log likelihood). To
obtain an initial value for |
maxit |
Maximum number of iterations for the Marquardt algorithm. The default is 350. |
hazard |
Type of hazard functions: |
mediation |
a logical value indicating if the mediation analysis method is used. Default is FALSE. |
med.center |
For mediation analysis, a vector containing the center indicator for each subject.
If there is no center, this argument should be |
med.trt |
For mediation analysis, a vector containing the treatment indicator for each subject. |
init.B |
Vector of initial values for regression coefficients. This vector should be of the same size as the whole vector of covariates with the first elements for the covariates related to the terminal event and then for the covariates related to the biomarker (interactions in the end of each component). Default is 0.5 for each. |
init.Random |
Initial value for variance of the elements of the matrix of the distribution of the random effects. Default is 0.5 for each element. |
init.Eta |
Initial values for regression coefficients for the link function. Default is 0.5 for each. |
method.GH |
Method for the Gauss-Hermite quadrature: |
seed.MC |
Monte-carlo integration points selection (1=fixed, 0=random) |
n.nodes |
Number of nodes for the Gauss-Hermite quadrature or the Monte-carlo method. They can be chosen among 5, 7, 9, 12, 15, 20 and 32 for the GH quadrature and any number for the Monte-carlo method. The default is 9. |
LIMparam |
Convergence threshold of the Marquardt algorithm for the
parameters (see Details of |
LIMlogl |
Convergence threshold of the Marquardt algorithm for the
log-likelihood (see Details of |
LIMderiv |
Convergence threshold of the Marquardt algorithm for the
gradient (see Details of |
print.times |
a logical parameter to print iteration process. The default is TRUE. |
med.nmc |
For mediation analysis, the number of Monte Carlo points used for computing the integral over the random effects in the likelihood computation. Default is 500. |
pte.times |
For mediation analysis, a vector of times for which the
function |
pte.ntimes |
For mediation analysis, if the argument |
pte.nmc |
For mediation analysis, an integer indicating how many Monte Carlo simulations are used
to integrate over the random effects in the computation of the function |
pte.boot |
For mediation analysis, a logical value indicating if bootstrapped confidence bands need to be computed for the
function |
pte.nboot |
For mediation analysis, an integer indicating how many bootstrapped replicates of PTE(t) needs to be computed to derive confidence bands for PTE(t). Should be less than 10000. Default is 2000. |
Typical usage for the joint model
longiPenal(Surv(time,event)~var1+var2, biomarker ~ var1+var2, data, data.Longi, ...)
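A slightly fuller sketch building on the typical usage above (data and variable names are hypothetical; argument values are illustrative):

## Not run: 
fit <- longiPenal(Surv(time, event) ~ var1 + var2,
                  biomarker ~ year + var1 + var2,
                  data = dat, data.Longi = datLongi,
                  random = c("1", "year"),   # random intercept and slope on 'year'
                  id = "id",                 # subject identifier shared by both data frames
                  link = "Current-level",    # dependence through the current biomarker level
                  timevar = "year",          # time-varying variable used by the link
                  n.knots = 7, kappa = 2,    # splines baseline hazard and smoothing parameter
                  method.GH = "Standard", n.nodes = 9)  # Gauss-Hermite settings
## End(Not run)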
The method of the Gauss-Hermite quadrature for approximations of the
multidimensional integrals, i.e. length of random
is 2, can be chosen
among the standard, non-adaptive, pseudo-adaptive in which the quadrature
points are transformed using the information from the fitted mixed-effects
model for the biomarker (Rizopoulos 2012) or multivariate non-adaptive
procedure proposed by Genz et al. 1996 and implemented in FORTRAN subroutine
HRMSYM. The choice of the method is important for estimations. The standard
non-adaptive Gauss-Hermite quadrature ("Standard"
) with a specific
number of points gives accurate results but can be time consuming. The
non-adaptive procedure ("HRMSYM"
) offers advantageous computational
time but in case of datasets in which some individuals have few repeated
observations (biomarker measures or recurrent events), this method may be
moderately unstable. The pseudo-adaptive quadrature uses transformed
quadrature points to center and scale the integrand by utilizing estimates of
the random effects from an appropriate linear mixed-effects model. This
method enables using less quadrature points while preserving the estimation
accuracy and thus leads to a better computational time. The Monte-Carlo method
is also proposed for approximations of the multidimensional integrals.
NOTE. Data frames data
and data.Longi
must be consistent. Names
and types of corresponding covariates must be the same, as well as the number
and identification of individuals.
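A small sketch of the kind of consistency check implied by this note (object and column names are hypothetical):

# every subject with longitudinal measurements must also appear in the survival data
all(unique(datLongi$id) %in% dat$id)
# shared covariates must carry the same names (and types) in both data frames
intersect(names(dat), names(datLongi))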
The following components are included in a 'longiPenal' object for each model:
b |
The sequence of the corresponding estimation of the coefficients for the hazard functions (parametric or semiparametric), the random effects variances and the regression coefficients. |
call |
The code used for the model. |
formula |
The formula part of the code used for the terminal event part of the model. |
formula.LongitudinalData |
The formula part of the code used for the longitudinal part of the model. |
formula.Binary |
The formula part of the code used for the binary part of the two-part model. |
coef |
The regression coefficients (first for the terminal event and then for the biomarker). |
groups |
The number of groups used in the fit. |
kappa |
The value of the smoothing parameter in the penalized likelihood estimation corresponding to the baseline hazard function for the terminal event. |
logLikPenal |
The complete marginal penalized log-likelihood in the semiparametric case. |
logLik |
The marginal log-likelihood in the parametric case. |
n.measurements |
The number of biomarker observations used in the fit. |
max_rep |
The maximal number of repeated measurements per individual. |
n.deaths |
The number of events observed in the fit. |
n.iter |
The number of iterations needed to converge. |
n.knots |
The number of knots for estimating the baseline hazard function in the penalized likelihood estimation. |
n.strat |
The number of strata. |
varH |
The variance matrix of all parameters (before positivity constraint transformation for the variance of the measurement error, for which the delta method is used). |
varHIH |
The robust estimation of the variance matrix of all parameters. |
xD |
The vector of times where both survival and hazard function of the terminal event are estimated. By default seq(0,max(time),length=99), where time is the vector of survival times. |
lamD |
The array (dim=3) of baseline hazard estimates and confidence bands (terminal event). |
survD |
The array (dim=3) of baseline survival estimates and confidence bands (terminal event). |
median |
The value of the median survival and its confidence bands. |
typeof |
The type of the baseline hazard functions (0:"Splines", "2:Weibull"). |
npar |
The number of parameters. |
nvar |
The vector of number of explanatory variables for the terminal event and biomarker. |
nvarEnd |
The number of explanatory variables for the terminal event. |
nvarY |
The number of explanatory variables for the biomarker. |
noVarEnd |
The indicator of absence of the explanatory variables for the terminal event. |
noVarY |
The indicator of absence of the explanatory variables for the biomarker. |
LCV |
The approximated likelihood cross-validation criterion in the semiparametric case (with H minus the converged Hessian matrix, and l(.) the full log-likelihood).
|
AIC |
The Akaike information Criterion for the parametric case.
|
n.knots.temp |
The initial value for the number of knots. |
shape.weib |
The shape parameter for the Weibull hazard function. |
scale.weib |
The scale parameter for the Weibull hazard function. |
martingaledeath.res |
The martingale residuals for each individual. |
conditional.res |
The conditional residuals for
the biomarker (subject-specific):
|
marginal.res |
The marginal residuals for the biomarker (population
averaged):
|
marginal_chol.res |
The Cholesky marginal residuals for the biomarker:
|
conditional_st.res |
The standardized conditional residuals for the biomarker. |
marginal_st.res |
The standardized marginal residuals for the biomarker. |
random.effects.pred |
The empirical Bayes predictions of the random effects (i.e. using conditional posterior distributions). |
pred.y.marg |
The marginal predictions of the longitudinal outcome. |
pred.y.cond |
The conditional (given the random effects) predictions of the longitudinal outcome. |
lineardeath.pred |
The linear predictor for the terminal part. |
global_chisq_d |
The vector with values of each multivariate Wald test for the terminal part. |
dof_chisq_d |
The vector with degrees of freedom for each multivariate Wald test for the terminal part. |
global_chisq.test_d |
A binary variable equal to 0 when no multivariate Wald test is given, 1 otherwise (for the terminal part). |
p.global_chisq_d |
The vector with the p_values for each global multivariate Wald test for the terminal part. |
global_chisq |
The vector with values of each multivariate Wald test for the longitudinal part. |
dof_chisq |
The vector with degrees of freedom for each multivariate Wald test for the longitudinal part. |
global_chisq.test |
A binary variable equal to 0 when no multivariate Wald test is given, 1 otherwise (for the longitudinal part). |
p.global_chisq |
The vector with the p_values for each global multivariate Wald test for the longitudinal part. |
names.factordc |
The names of the "as.factor" variables for the terminal part. |
names.factor |
The names of the "as.factor" variables for the longitudinal part. |
intercept |
The logical value. Is the fixed intercept included in the linear mixed-effects model? |
B1 |
The variance matrix of the random effects for the longitudinal outcome. |
ResidualSE |
The standard deviation of the measurement error. |
eta |
The regression coefficients for the link function. |
ne_re |
The number of random effects used in the fit. |
names.re |
The names of variables for the random effects. |
link |
The name of the type of the link function. |
eta_p.value |
p-values of the Wald test for the estimated regression coefficients for the link function. |
beta_p.value |
p-values of the Wald test for the estimated regression coefficients. |
leftCensoring |
The logical value. Is the longitudinal outcome left-censored? |
leftCensoring.threshold |
For the left-censored biomarker, the value of the left-censoring threshold used for the fit. |
prop.censored |
The fraction of observations subjected to the left-censoring. |
methodGH |
The method used for approximations of the multidimensional integrals. |
n.nodes |
The number of integration points. |
A. Krol, A. Mauguen, Y. Mazroui, A. Laurent, S. Michiels and V. Rondeau (2017). Tutorial in Joint Modeling and Prediction: A Statistical Software for Correlated Longitudinal Outcomes, Recurrent Events and a Terminal Event. Journal of Statistical Software 81(3), 1-52.
A. Krol, L. Ferrer, JP. Pignon, C. Proust-Lima, M. Ducreux, O. Bouche, S. Michiels, V. Rondeau (2016). Joint Model for Left-Censored Longitudinal Data, Recurrent Events and Terminal Event: Predictive Abilities of Tumor Burden for Cancer Evolution with Application to the FFCD 2000-05 Trial. Biometrics 72(3) 907-16.
D. Rizopoulos (2012). Fast fitting of joint models for longitudinal and event time data using a pseudo-adaptive Gaussian quadrature rule. Computational Statistics and Data Analysis 56, 491-501.
M.S. Wulfsohn and A.A. Tsiatis (1997). A joint model for survival and longitudinal data measured with error. Biometrics 53, 330-9.
A. Genz and B. Keister (1996). Fully symmetric interpolatory rules for multiple integrals over infinite regions with Gaussian weight. Journal of Computational and Applied Mathematics 71, 299-309.
D. Rustand, L. Briollais, C. Tournigand and V. Rondeau (2020). Two-part joint model for a longitudinal semicontinuous marker and a terminal event with application to metastatic colorectal cancer data. Biostatistics.
plot.longiPenal, print.longiPenal, summary.longiPenal
## Not run: 
###--- Joint model for longitudinal data and a terminal event ---###
data(colorectal)
data(colorectalLongi)

# Survival data preparation - only terminal events
colorectalSurv <- subset(colorectal, new.lesions == 0)

# Baseline hazard function approximated with splines
# Random effects as the link function
model.spli.RE <- longiPenal(Surv(time1, state) ~ age + treatment + who.PS +
    prev.resection, tumor.size ~ year * treatment + age + who.PS,
    data = colorectalSurv, data.Longi = colorectalLongi,
    random = c("1", "year"), id = "id", link = "Random-effects",
    left.censoring = -3.33, n.knots = 7, kappa = 2)

# Weibull baseline hazard function
# Current level of the biomarker as the link function
model.weib.CL <- longiPenal(Surv(time1, state) ~ age + treatment + who.PS +
    prev.resection, tumor.size ~ year * treatment + age + who.PS,
    timevar = "year", data = colorectalSurv, data.Longi = colorectalLongi,
    random = c("1", "year"), id = "id", link = "Current-level",
    left.censoring = -3.33, hazard = "Weibull")

###--- Two-part joint model for semicontinuous
# longitudinal data and a terminal event ---###
data(colorectal)
data(colorectalLongi)
colorectalSurv <- subset(colorectal, new.lesions == 0)

# Box-Cox back transformation (lambda=0.3) and logarithm (with a 1 unit shift)
colorectalLongi$Yo <- (colorectalLongi$tumor.size*0.3+1)^(1/0.3)
colorectalLongi$Y <- log(colorectalLongi$Yo+1) # log transformation with shift=1

# Conditional two-part joint model - random-effects association structure (~15min)
CTPJM_re <- longiPenal(Surv(time1, state) ~ age + treatment + who.PS +
    prev.resection, Y ~ year*treatment, formula.Binary = Y ~ year*treatment,
    data = colorectalSurv, data.Longi = colorectalLongi, random = c("1"),
    random.Binary = c("1"), id = "id", link = "Random-effects",
    left.censoring = F, n.knots = 7, kappa = 2, hazard = "Splines-per")
print(CTPJM_re)

# Conditional two-part joint model - current-level association structure (~15min)
# Simulated dataset (github.com/DenisRustand/TPJM_sim)
data(longDat)
data(survDat)
tte <- frailtyPenal(Surv(deathTimes, d) ~ trt, n.knots = 5, kappa = 0,
    data = survDat, cross.validation = T)
kap <- round(tte$kappa, 2); kap # smoothing parameter

CTPJM_cl <- longiPenal(Surv(deathTimes, d) ~ trt, Y ~ timej*trtY,
    data = survDat, data.Longi = longDat, random = c("1", "timej"),
    formula.Binary = Y ~ timej*trtY, random.Binary = c("1"),
    timevar = "timej", id = "id", link = "Current-level", n.knots = 5,
    kappa = kap, hazard = "Splines-per", method.GH = "Monte-carlo",
    n.nodes = 500)
print(CTPJM_cl)

# Marginal two-part joint model - random-effects association structure (~10min)
longDat$Yex <- exp(longDat$Y)-1
MTPJM_re <- longiPenal(Surv(deathTimes, d) ~ trt, Yex ~ timej*trtY,
    data = survDat, data.Longi = longDat, MTP = T, GLMlog = T,
    random = c("1"), formula.Binary = Y ~ timej*trtY, random.Binary = c("1"),
    timevar = "timej", id = "id", link = "Random-effects", n.knots = 5,
    kappa = kap, hazard = "Splines-per", method.GH = "Monte-carlo",
    n.nodes = 500)
print(MTPJM_re)

# Marginal two-part joint model - current-level association structure (~45min)
MTPJM_cl <- longiPenal(Surv(deathTimes, d) ~ trt, Yex ~ timej*trtY,
    data = survDat, data.Longi = longDat, MTP = T, GLMlog = T,
    random = c("1", "timej"), formula.Binary = Y ~ timej*trtY,
    random.Binary = c("1"), timevar = "timej", id = "id",
    link = "Current-level", n.knots = 5, kappa = kap, hazard = "Splines-per",
    method.GH = "Monte-carlo", n.nodes = 500)
print(MTPJM_cl)

###--- Mediation analysis ---###
# Takes ~10 minutes to run
data(colorectal)
data(colorectalLongi)
colorectalSurv <- subset(colorectal, new.lesions == 0)
colorectalSurv$treatment <- sapply(colorectalSurv$treatment,
    function(t) ifelse(t == "S", 1, 0))
colorectalLongi$treatment <- sapply(colorectalLongi$treatment,
    function(t) ifelse(t == "S", 1, 0))

mod.col <- longiPenal(Surv(time1, state) ~ age + treatment,
    tumor.size ~ age + year*treatment, data = colorectalSurv,
    data.Longi = colorectalLongi, random = c("1", "year"), id = "id",
    link = "Current-level", timevar = "year", method.GH = "Pseudo-adaptive",
    mediation = TRUE, med.trt = colorectalSurv$treatment, med.center = NULL,
    med.nmc = 50, n.knots = 7, kappa = 2, pte.ntimes = 30, pte.boot = T,
    pte.nmc = 1000, pte.nboot = 1000)
print(mod.col)
plot(mod.col, plot.mediation = 'All')
## End(Not run)
Leave-one-out cross-validation on trials for evaluating the joint surrogate model
loocv(object, unusedtrial, var.used = "error.estim", alpha. = 0.05, dec = 3, print.times = TRUE)
object |
An object inheriting from |
unusedtrial |
A list of trials not to be taken into account in the cross-validation. This parameter is useful when, after excluding some trials, the model faces convergence problems. |
var.used |
This argument takes two values. The first one is |
alpha. |
The confidence level for the prediction interval. The default is |
dec |
The desired number of digits after the decimal point for parameters and confidence intervals. Default of 3 digits is used. |
print.times |
a logical parameter to print estimation time. Default is TRUE. |
This function returns an object of class jointSurroPenalloocv
containing:
result |
A dataframe including, for each trial, the number of included subjects, the observed treatment effect on the surrogate endpoint, the observed treatment effect on the true endpoint and the predicted treatment effect on the true endpoint with the associated prediction interval. If the observed treatment effect on the true endpoint is included in the prediction interval, the last column contains "*". |
ntrial |
The number of trials in the meta-analysis |
notconvtrial |
The vector of trials that have not converged |
pred.error |
The prediction error, corresponding to the number of cases where the prediction interval does not include the observed treatment effect on the true endpoint T |
different.models |
The list of the |
loocv.summary |
A dataframe of the estimates for the |
Casimir Ledoux Sofeu [email protected], [email protected] and Virginie Rondeau [email protected]
Burzykowski T, Buyse M (2006). "Surrogate threshold effect: an alternative measure for meta-analytic surrogate endpoint validation." Pharmaceutical Statistics, 5(3), 173-186. ISSN 1539-1612.
jointSurroPenal, jointSurroCopPenal
## Not run: 
# Generation of data to use
data.sim <- jointSurrSimul(n.obs = 300, n.trial = 10, cens.adm = 549.24,
            alpha = 1.5, theta = 3.5, gamma = 2.5, zeta = 1, sigma.s = 0.7,
            sigma.t = 0.7, cor = 0.8, betas = -1.25, betat = -1.25,
            full.data = 0, random.generator = 1, seed = 0, nb.reject.data = 0)

###--- Joint surrogate model ---###
joint.surro.sim.MCGH <- jointSurroPenal(data = data.sim, int.method = 2,
                        nb.mc = 300, nb.gh = 20, print.iter = F)

# Example of loocv taking into account only 2 trials (1 and 3)
dloocv <- loocv(joint.surro.sim.MCGH, unusedtrial = c(2,4:10))
dloocv$result
dloocv$loocv.summary

# In order to summarize all the estimated models during the loocv process:
dloocv$different.models
## End(Not run)
Fit a multivariate frailty model for two types of recurrent events with a terminal event using a penalized likelihood estimation on the hazard function or a parametric estimation. Right-censored data are allowed. Left-truncated data and stratified analysis are not possible. Multivariate frailty models allow studying, with a joint model, three dependent survival processes for two types of recurrent events and a terminal event. Multivariate joint frailty models are applicable in mainly two settings: first, when the focus is on the terminal event and we wish to account for the effect of previous endogenous recurrent events; second, when the focus is on a recurrent event and we wish to correct for informative censoring.
The multivariate frailty model for two types of recurrent events with a terminal event is (in the calendar or time-to-event timescale):
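As a sketch of the hazard specification (written here following Mazroui et al. 2013, with the notation defined below):

r_{ij}^{(1)}(t \mid u_i) = r_0^{(1)}(t)\,\exp(u_i + \beta_1^{\top} Z_i(t))                     (recurrent events of type 1)
r_{ij}^{(2)}(t \mid v_i) = r_0^{(2)}(t)\,\exp(v_i + \beta_2^{\top} Z_i(t))                     (recurrent events of type 2)
\lambda_i(t \mid u_i, v_i) = \lambda_0(t)\,\exp(\alpha_1 u_i + \alpha_2 v_i + \beta_3^{\top} Z_i(t))   (terminal event)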
where r_0^(l)(t) (l ∈ {1,2}) and λ_0(t) are respectively the recurrent and terminal event baseline hazard functions, and β_1, β_2, β_3 the regression coefficient vectors associated with Z_i(t), the covariate vector. The covariates could be different for the different event hazard functions and may be time-dependent. We consider that death stops new occurrences of recurrent events of any type, hence, once the terminal event has occurred, dN^R(l)*(t) (l ∈ {1,2}) takes the value 0. Thus, the terminal and the two recurrent event processes are not independent, not even conditionally upon frailties and covariates. We consider the hazard functions of recurrent events among individuals still alive. The three components in the above multivariate frailty model are linked together by two Gaussian and correlated random effects u_i, v_i:
(u_i, v_i)^T ~ N(0, Σ_uv), with Σ_uv = ( θ_1, ρ√(θ_1 θ_2) ; ρ√(θ_1 θ_2), θ_2 ).
Dependencies between these three types of events are thus taken into account by the two correlated random effects and the parameters θ_1, θ_2 (the variances of the random effects) and α_1, α_2 (the coefficients of these random effects in the terminal event part). If α_1 and θ_1 are both significantly different from 0, then the recurrent events of type 1 and death are significantly associated (the sign of the association is the sign of α_1). If α_2 and θ_2 are both significantly different from 0, then the recurrent events of type 2 and death are significantly associated (the sign of the association is the sign of α_2). If ρ, the correlation between the two random effects, is significantly different from 0, then the recurrent events of type 1 and the recurrent events of type 2 are significantly associated (the sign of the association is the sign of ρ).
multivPenal(formula, formula.Event2, formula.terminalEvent, data, initialize = TRUE, recurrentAG = FALSE, n.knots, kappa, maxit = 350, hazard = "Splines", nb.int, print.times = TRUE)
formula |
a formula object, with the response for the first recurrent
event on the left of a |
formula.Event2 |
a formula object, with the response for the second
recurrent event on the left of a |
formula.terminalEvent |
a formula object, with the response for the
terminal event on the left of a |
data |
a 'data.frame' with the variables used in 'formula', 'formula.Event2' and 'formula.terminalEvent'. |
initialize |
Logical value to initialize regression coefficients and baseline hazard functions parameters. When the estimation is semi-parametric with splines, this initialization also produces values for the smoothing parameters (by cross validation). When initialization is requested, the program first fits two shared frailty models (for the two types of recurrent events) and a Cox proportional hazards model (for the terminal event). Default is TRUE. |
recurrentAG |
Logical value. Is the Andersen-Gill model fitted? If TRUE, recurrent event times are handled with the counting process approach of Andersen and Gill, which can be used for dealing with time-dependent covariates. The default is FALSE. |
n.knots |
integer vector of length 3 (for the three outcomes) giving the number of knots to use. First is for the recurrent of type 1, second is for the recurrent of type 2 and third is for the terminal event hazard function. Value required in the penalized likelihood estimation. It corresponds to the (n.knots+2) spline functions for the approximation of the hazard or the survival functions. Number of knots must be between 4 and 20. (See Note) |
kappa |
vector of length 3 (for the three outcomes) of positive smoothing parameters in the penalized likelihood estimation. First is for the recurrent of type 1, second is for the recurrent of type 2 and third is for the terminal event hazard function. kappa is the coefficient of the integral of the squared second derivative of the hazard function in the fit (penalized log-likelihood). Initial values for the kappas can be obtained with the option "initialize=TRUE". We advise the user to identify several possible tuning parameters, note their defaults and look at the sensitivity of the results to varying them. Value required. (See Note) |
maxit |
maximum number of iterations for the Marquardt algorithm. Default is 350. |
hazard |
Type of hazard functions: "Splines" for semi-parametric hazard functions with the penalized likelihood estimation, "Piecewise-per" for piecewise constant hazard function using percentile, "Piecewise-equi" for piecewise constant hazard function using equidistant intervals, "Weibull" for parametric Weibull function. Default is "Splines". |
nb.int |
An integer vector of length 3 (for the three outcomes). First is the number of intervals (between 1 and 20) for the recurrent of type 1 parametric hazard functions ("Piecewise-per", "Piecewise-equi"). Second is the number of intervals (between 1 and 20) for the recurrent of type 2 parametric hazard functions ("Piecewise-per", "Piecewise-equi"). Third is the number of intervals (between 1 and 20) for the death parametric hazard functions ("Piecewise-per", "Piecewise-equi"). |
print.times |
a logical parameter to print iteration process. Default is TRUE. |
Parameter estimates of a multivariate joint frailty model, more generally a 'multivPenal' object. Methods defined for 'multivPenal' objects are provided for print, plot and summary. The following components are included in a 'multivPenal' object for multivariate joint frailty models.
b |
sequence of the corresponding estimation of the splines coefficients, the random effects variances, the coefficients of the frailties and the regression coefficients. |
call |
The code used for fitting the model. |
n |
the number of observations used in the fit. |
groups |
the number of subjects used in the fit. |
n.events |
the number of recurrent events of type 1 observed in the fit. |
n.events2 |
the number of the recurrent events of type 2 observed in the fit. |
n.deaths |
the number of deaths observed in the fit. |
loglikPenal |
the complete marginal penalized log-likelihood in the semi-parametric case. |
loglik |
the marginal log-likelihood in the parametric case. |
LCV |
the approximated likelihood cross-validation criterion in the semi-parametric case (with H minus the converged Hessian matrix, and l(.) the full log-likelihood). |
AIC |
the Akaike information Criterion for the parametric case.
|
theta1 |
variance of the
frailty parameter for recurrences of type 1 |
theta2 |
variance of the frailty parameter for recurrences of type 2
|
alpha1 |
the coefficient associated with the frailty u_i in the terminal event part. |
alpha2 |
the coefficient associated with the frailty v_i in the terminal event part. |
rho |
the correlation coefficient between the two frailties u_i and v_i. |
npar |
number of parameters. |
coef |
the regression coefficients. |
nvar |
A vector with the number of covariates of each type of hazard function as components. |
varH |
the variance matrix of all parameters before positivity constraint transformation (theta, the regression coefficients and the spline coefficients). Then, the delta method is needed to obtain the estimated variance parameters. |
varHIH |
the robust estimation of the variance matrix of all parameters (theta, the regression coefficients and the spline coefficients). |
formula |
the formula part of the code used for the model for the recurrent event. |
formula.Event2 |
the formula part of the code used for the model for the second recurrent event. |
formula.terminalEvent |
the formula part of the code used for the model for the terminal event. |
x1 |
vector of times at which the hazard functions of the recurrent events of type 1 are estimated. By default seq(0,max(time),length=99), where time is the vector of survival times. |
lam1 |
matrix of hazard estimates and confidence bands for recurrent events of type 1. |
xSu1 |
vector of times for the survival function of the recurrent event of type 1. |
surv1 |
matrix of baseline survival estimates and confidence bands for recurrent events of type 1. |
x2 |
vector of times for the recurrent event of type 2 (see x1 value). |
lam2 |
the same value as lam1 for the recurrent event of type 2. |
xSu2 |
vector of times for the survival function of the recurrent event of type 2 |
surv2 |
the same value as surv1 for the recurrent event of type 2. |
xEnd |
vector of times for the terminal event (see x1 value). |
lamEnd |
the same value as lam1 for the terminal event. |
xSuEnd |
vector of times for the survival function of the terminal event |
survEnd |
the same value as surv1 for the terminal event. |
median1 |
The value of the median survival and its confidence bands for the recurrent event of type 1. |
median2 |
The value of the median survival and its confidence bands for the recurrent event of type 2. |
medianEnd |
The value of the median survival and its confidence bands for the terminal event. |
type.of.Piecewise |
Type of Piecewise hazard functions (1:"percentile", 0:"equidistant"). |
n.iter |
number of iterations needed to converge. |
type.of.hazard |
Type of hazard functions (0:"Splines", "1:Piecewise", "2:Weibull"). |
n.knots |
a vector with number of knots for estimating the baseline functions. |
kappa |
a vector with the smoothing parameters in the penalized likelihood estimation corresponding to each baseline function as components. |
n.knots.temp |
initial value for the number of knots. |
zi |
splines knots. |
time |
knots for Piecewise hazard function for the recurrent event of type 1. |
timedc |
knots for Piecewise hazard function for the terminal event. |
time2 |
knots for Piecewise hazard function for the recurrent event of type 2. |
noVar |
indicator vector for recurrent, death and recurrent 2 explanatory variables. |
nvarRec |
number of the recurrent of type 1 explanatory variables. |
nvarEnd |
number of death explanatory variables. |
nvarRec2 |
number of the recurrent of type 2 explanatory variables. |
nbintervR |
Number of intervals (between 1 and 20) for the recurrent of type 1 parametric hazard functions ("Piecewise-per", "Piecewise-equi"). |
nbintervDC |
Number of intervals (between 1 and 20) for the death parametric hazard functions ("Piecewise-per", "Piecewise-equi"). |
nbintervR2 |
Number of intervals (between 1 and 20) for the recurrent of type 2 parametric hazard functions ("Piecewise-per", "Piecewise-equi"). |
istop |
Vector of the convergence criteria. |
shape.weib |
shape parameters for the Weibull hazard function. |
scale.weib |
scale parameters for the Weibull hazard function. |
martingale.res |
martingale residuals for each cluster (recurrent of type 1). |
martingale2.res |
martingale residuals for each cluster (recurrent of type 2). |
martingaledeath.res |
martingale residuals for each cluster (death). |
frailty.pred |
empirical Bayes prediction of the first frailty term. |
frailty2.pred |
empirical Bayes prediction of the second frailty term. |
frailty.var |
variance of the empirical Bayes prediction of the first frailty term. |
frailty2.var |
variance of the empirical Bayes prediction of the second frailty term. |
frailty.corr |
Correlation between the empirical Bayes predictions of the two frailties. |
linear.pred |
linear predictor: uses Beta'X + ui in the multivariate frailty models. |
linear2.pred |
linear predictor: uses Beta'X + vi in the multivariate frailty models. |
lineardeath.pred |
linear predictor for the terminal part from the multivariate frailty model: Beta'X + alpha1 ui + alpha2 vi |
global_chisq |
Recurrent event of type 1: a vector with the values of each multivariate Wald test. |
dof_chisq |
Recurrent event of type 1: a vector with the degree of freedom for each multivariate Wald test. |
global_chisq.test |
Recurrent event of type 1: a binary variable equal to 0 when no multivariate Wald test is given, 1 otherwise. |
p.global_chisq |
Recurrent event of type 1: a vector with the p-values for each global multivariate Wald test. |
names.factor |
Recurrent event of type 1: Names of the "as.factor" variables. |
global_chisq2 |
Recurrent event of type 2: a vector with the values of each multivariate Wald test. |
dof_chisq2 |
Recurrent event of type 2: a vector with the degree of freedom for each multivariate Wald test. |
global_chisq.test2 |
Recurrent event of type 2: a binary variable equal to 0 when no multivariate Wald test is given, 1 otherwise. |
p.global_chisq2 |
Recurrent event of type 2: a vector with the p_values for each global multivariate Wald test. |
names.factor2 |
Recurrent event of type 2: Names of the "as.factor" variables. |
global_chisq_d |
Terminal event: a vector with the values of each multivariate Wald test. |
dof_chisq_d |
Terminal event: a vector with the degree of freedom for each multivariate Wald test. |
global_chisq.test_d |
Terminal event: a binary variable equal to 0 when no multivariate Wald test is given, 1 otherwise. |
p.global_chisq_d |
Terminal event: a vector with the p-values for each global multivariate Wald test. |
names.factordc |
Terminal event: Names of the "as.factor" variables. |
"kappa" (kappa[1], kappa[2] and kappa[3]) and "n.knots" (n.knots[1], n.knots[2] and n.knots[3]) are the arguments that the user has to change if the fitted model does not converge. "n.knots" takes integer values between 4 and 20. But with n.knots=20, the model will take a long time to converge. So, usually, begin first with n.knots=7, and increase it step by step until it converges. "kappa" only takes positive values. So, choose a value for kappa (for instance 10000), and if it does not converge, multiply or divide this value by 10 or 5 until it converges. Moreover, it may be useful to change the value of the initialize argument.
Mazroui Y., Mathoulin-Pellissier S., MacGrogan G., Brouste V., Rondeau V. (2013). Multivariate frailty models for two types of recurrent events with an informative terminal event: Application to breast cancer data. Biometrical Journal, 55(6), 866-884.
terminal, event2, print.multivPenal, summary.multivPenal, plot.multivPenal
###--- Multivariate Frailty model ---###
data(dataMultiv)

# (computation takes around 60 minutes)
modMultiv.spli <- multivPenal(Surv(TIMEGAP,INDICREC)~cluster(PATIENT)+v1+v2+
                  event2(INDICMETA)+terminal(INDICDEATH),
                  formula.Event2=~v1+v2+v3, formula.terminalEvent=~v1,
                  data=dataMultiv, n.knots=c(8,8,8), kappa=c(1,1,1),
                  initialize=FALSE)
print(modMultiv.spli)

modMultiv.weib <- multivPenal(Surv(TIMEGAP,INDICREC)~cluster(PATIENT)+v1+v2+
                  event2(INDICMETA)+terminal(INDICDEATH),
                  formula.Event2=~v1+v2+v3, formula.terminalEvent=~v1,
                  data=dataMultiv, hazard="Weibull")
print(modMultiv.weib)

modMultiv.cpm <- multivPenal(Surv(TIMEGAP,INDICREC)~cluster(PATIENT)+v1+v2+
                  event2(INDICMETA)+terminal(INDICDEATH),
                  formula.Event2=~v1+v2+v3, formula.terminalEvent=~v1,
                  data=dataMultiv, hazard="Piecewise-per", nb.int=c(6,6,6))
print(modMultiv.cpm)
This is a special function used in addition to the cluster()
function
in the context of survival joint models for clustered data. This function
identifies the subject index. It is used on the right-hand side of a
'frailtyPenal' formula. Using num.id()
in a formula implies that a
joint frailty model for clustered data is fitted (Rondeau et al. 2011).
num.id(x)
x |
A character or numeric variable used to identify individuals. |
No return value
V. Rondeau, J.P. Pignon, S. Michiels (2011). A joint model for the dependence between clustered times to tumour progression and deaths: A meta-analysis of chemotherapy in head and neck cancer. Statistical methods in medical research 897, 1-19.
data(readmission)
#-- here is generated cluster (5 clusters)
readmission <- transform(readmission, group=id%%5+1)

#-- exclusion all recurrent events --#
#-- to obtain framework of semi-competing risks --#
readmission2 <- subset(readmission, (t.start == 0 & event == 1) | event == 0)

joi.clus.gap <- frailtyPenal(Surv(time,event)~cluster(group)+
                num.id(id)+dukes+charlson+sex+chemo+terminal(death),
                formula.terminalEvent=~dukes+charlson+sex+chemo,
                data=readmission2, recurrentAG=FALSE, n.knots=8,
                kappa=c(1.e+10,1.e+10), Alpha="None")
Plots estimated baseline survival and hazard functions from an object of class 'additivePenal' (additive frailty model). Confidence bands are allowed.
## S3 method for class 'additivePenal' plot(x, type.plot="Hazard", conf.bands=TRUE, pos.legend="topright", cex.legend=0.7, main, color=2, median=TRUE, Xlab = "Time", Ylab = "Hazard function", ...)
x |
An object of a fitted additive frailty model (output from calling |
type.plot |
a character string specifying the type of curve. Possible value are "Hazard", or "Survival". The default is "Hazard". Only the first words are required, e.g "Haz", "Su" |
conf.bands |
logical value. Determines whether confidence bands will be plotted. The default is to do so. |
pos.legend |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topright"' |
cex.legend |
character expansion factor *relative* to current 'par("cex")'. Default is 0.7 |
main |
plot title |
color |
curve color (integer) |
median |
Logical value. Determines whether survival median will be plotted. Default is TRUE. |
Xlab |
Label of x-axis. Default is '"Time"' |
Ylab |
Label of y-axis. Default is '"Hazard function"' |
... |
Other graphical parameters like those in
|
Print a plot of the baseline survival or hazard functions with the confidence bands or not (conf.bands argument)
data(dataAdditive)

modAdd <- additivePenal(Surv(t1,t2,event)~cluster(group)+var1+slope(var1),
          correlation=TRUE, data=dataAdditive, n.knots=8, kappa=862,
          hazard="Splines")

#-- 'var1' is boolean as a treatment variable
plot(modAdd)
Plots values of the difference of two Cross-Validated Prognosis Observed Loss (CVPOL) computed with two joint frailty models. Confidence intervals are allowed.
## S3 method for class 'Diffepoce' plot(x, conf.bands=TRUE, Xlab = "Time", Ylab = "EPOCE difference" , ...)
x |
An object inheriting from |
conf.bands |
Logical value. Determines whether confidence intervals will be plotted. The default is TRUE. |
Xlab |
Label of x-axis. Default is '"Time"' |
Ylab |
Label of y-axis. Default is '"EPOCE difference"' |
... |
Other unused arguments. |
Print one plot with one curve and its confidence interval.
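A minimal sketch, assuming epoce1 and epoce2 are EPOCE estimates obtained with the epoce function from two competing joint frailty models evaluated at the same prediction times (diff12 is a hypothetical object name):

diff12 <- Diffepoce(epoce1, epoce2)
plot(diff12, conf.bands = TRUE, Xlab = "Time", Ylab = "EPOCE difference")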
Plots values of estimators MPOL and CVPOL for evaluating EPOCE. No confidence interval.
## S3 method for class 'epoce' plot(x, type, pos.legend="topright", cex.legend=0.7, Xlab="Time",Ylab="Epoce", ...)
x |
An object inheriting from |
type |
Type of estimator to plot. If a new dataset was used, only mpol can be plotted ( |
pos.legend |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topright"'. |
cex.legend |
size of the legend. Default is 0.7. |
Xlab |
Label of x-axis. Default is '"Time"' |
Ylab |
Label of y-axis. Default is '"Epoce"' |
... |
Other unused arguments. |
Print a curve of the estimator of EPOCE using time points defined in epoce.
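A minimal sketch (the joint frailty model on the readmission data and the prediction times are illustrative only; modJoint and ep are hypothetical object names):

data(readmission)
modJoint <- frailtyPenal(Surv(time,event)~cluster(id)+dukes+charlson+
              sex+chemo+terminal(death),
              formula.terminalEvent=~dukes+charlson+sex+chemo,
              data=readmission, n.knots=14, kappa=c(100,100))
ep <- epoce(modJoint, pred.times = seq(200, 1400, by = 200))
plot(ep, type = "cvpol")   # "mpol" when a new dataset was supplied to epoce()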
Plots estimated baseline survival and hazard functions from an object of class 'frailtyPenal'. Confidence bands are allowed.
## S3 method for class 'frailtyPenal' plot(x, type.plot = "Hazard", conf.bands=TRUE, pos.legend = "topright", cex.legend=0.7, main, color=2, median=TRUE, Xlab = "Time", Ylab = "Hazard function", ...)
x |
A shared frailty model, i.e. a |
type.plot |
a character string specifying the type of curve. Possible value are "Hazard", or "Survival". The default is "Hazard". Only the first letters are required, e.g "Haz", "Su" |
conf.bands |
Logical value. Determines whether confidence bands will be plotted. The default is to do so. |
pos.legend |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topright"' |
cex.legend |
character expansion factor *relative* to current 'par("cex")'. Default is 0.7 |
main |
title of plot |
color |
color of the curve (integer) |
median |
Logical value. Determines whether survival median will be plotted. Default is TRUE. |
Xlab |
Label of x-axis. Default is '"Time"' |
Ylab |
Label of y-axis. Default is '"Hazard function"' |
... |
other unused arguments |
Print a plot of a shared frailty model.
## Not run: 
data(readmission)

###--- Shared frailty model ---###
modSha <- frailtyPenal(Surv(time,event)~as.factor(dukes)+cluster(id),
          n.knots=10, kappa=10000, data=readmission, hazard="Splines")
plot(modSha, type="Survival", conf=FALSE)

###--- Cox proportional hazard model ---###
modCox <- frailtyPenal(Surv(time,event)~as.factor(dukes), n.knots=10,
          kappa=10000, data=readmission, hazard="Splines")
plot(modCox)

#-- no confidence bands
plot(modSha, conf.bands=FALSE)
plot(modCox, conf.bands=FALSE)
## End(Not run)
Plots estimated baseline survival and hazard functions of a joint nested frailty model (an object of class 'jointNestedPenal') for each type of event (terminal or recurrent). Confidence bands are allowed.
## S3 method for class 'jointNestedPenal' plot(x, event = "Both", type.plot = "Hazard", conf.bands = FALSE, pos.legend="topright", cex.legend = 0.7, ylim, main, color = 2, median=TRUE, Xlab = "Time", Ylab = "Hazard function", ...)
x |
A joint nested model, i.e. an object of class
|
event |
a character string specifying the type of curve. Possible value are "Terminal", "Recurrent", or "Both". The default is "Both". |
type.plot |
a character string specifying the type of curve. Possible value are "Hazard", or "Survival". The default is "Hazard". Only the first letters are required, e.g "Haz", "Su" |
conf.bands |
logical value. Determines whether confidence bands will be plotted. The default is to do so. |
pos.legend |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topright"' |
cex.legend |
character expansion factor *relative* to current 'par("cex")'. Default is 0.7 |
ylim |
y-axis limits |
main |
plot title |
color |
curve color (integer) |
median |
Logical value. Determines whether survival median will be plotted. Default is TRUE. |
Xlab |
Label of x-axis. Default is '"Time"' |
Ylab |
Label of y-axis. Default is '"Hazard function"' |
... |
other unused arguments |
Print a plot of the baseline survival or hazard functions for each type of event or both with the confidence bands or not (conf.bands argument)
## Not run: 
data(readmission)
#-- here is generated cluster (30 clusters)
readmissionNested <- transform(readmission, group=id%%30+1)

# Baseline hazard function approximated with splines with calendar-timescale
model.spli.AG <- frailtyPenal(formula = Surv(t.start, t.stop, event)
                 ~ subcluster(id) + cluster(group) + dukes + terminal(death),
                 formula.terminalEvent = ~dukes, data = readmissionNested,
                 recurrentAG = TRUE, n.knots = 8, kappa = c(9.55e+9, 1.41e+12),
                 initialize = TRUE)

# Plot the estimated baseline hazard functions with the confidence intervals
plot(model.spli.AG)

# Plot the estimated baseline survival functions with the confidence intervals
plot(model.spli.AG, type = "Survival")
## End(Not run)
Plots estimated baseline survival and hazard functions of a joint frailty model (an object of class 'jointPenal') for each type of event (terminal or recurrent). Confidence bands are allowed.
## S3 method for class 'jointPenal' plot(x, event = "Both", type.plot = "Hazard", conf.bands = FALSE, pos.legend="topright", cex.legend = 0.7, ylim, main, color = 2, median=TRUE, Xlab = "Time", Ylab = "Hazard function", ...)
x |
A joint model, i.e. an object of class |
event |
a character string specifying the type of curve. Possible value are "Terminal", "Recurrent", or "Both". The default is "Both". |
type.plot |
a character string specifying the type of curve. Possible value are "Hazard", or "Survival". The default is "Hazard". Only the first letters are required, e.g "Haz", "Su" |
conf.bands |
logical value. Determines whether confidence bands will be plotted. The default is to do so. |
pos.legend |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topright"' |
cex.legend |
character expansion factor *relative* to current 'par("cex")'. Default is 0.7 |
ylim |
y-axis limits |
main |
plot title |
color |
curve color (integer) |
median |
Logical value. Determines whether survival median will be plotted. Default is TRUE. |
Xlab |
Label of x-axis. Default is '"Time"' |
Ylab |
Label of y-axis. Default is '"Hazard function"' |
... |
other unused arguments |
Print a plot of the baseline survival or hazard functions for each type of event or both with the confidence bands or not (conf.bands argument)
## Not run: 
data(readmission)

#-- Gap-time
modJoint.gap <- frailtyPenal(Surv(time,event)~cluster(id)+sex+dukes+
                charlson+terminal(death),
                formula.terminalEvent=~sex+dukes+charlson,
                data=readmission, n.knots=14, kappa=c(100,100))
#-- It takes around 1 minute to converge --#

plot(modJoint.gap, type.plot="Haz", event="recurrent", conf.bands=TRUE)
plot(modJoint.gap, type.plot="Haz", event="terminal", conf.bands=TRUE)
plot(modJoint.gap, type.plot="Haz", event="both", conf.bands=TRUE)
plot(modJoint.gap, type.plot="Su", event="recurrent", conf.bands=TRUE)
plot(modJoint.gap, type.plot="Su", event="terminal", conf.bands=TRUE)
plot(modJoint.gap, type.plot="Su", event="both", conf.bands=TRUE)
## End(Not run)
Plots estimated baseline survival and hazard functions of a joint competing risks model (an object of class 'jointRecCompet') for each type of event (the recurrent event and the two terminal events). Confidence intervals are allowed.
## S3 method for class 'jointRecCompet' plot(x, event = "All", type.plot = "Hazard", conf.bands = FALSE, pos.legend = "topright", cex.legend = 0.7, ylim, main, color1="red", color2="blue", colorEnd="green", median=TRUE, Xlab = "Time", Ylab = "Hazard function", ...)
x |
A joint competing risk model, i.e. an object of class
|
event |
a character string specifying the type of outcome. Possible values are "Recurrent", "Terminal1", "Terminal2", or "All". The default is "All". |
type.plot |
a character string specifying the type of curve. Possible values are "Hazard" or "Survival". The default is "Hazard". Only the first letters are required, e.g "Haz", "Su" |
conf.bands |
logical value. Determines whether confidence intervals will be plotted. The default is to do so. |
pos.legend |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topright"' |
cex.legend |
character expansion factor *relative* to current 'par("cex")'. Default is 0.7 |
ylim |
y-axis limits |
main |
plot title |
color1 |
curve color for recurrent event of type 1 (integer or color name in quotation marks) |
color2 |
curve color for recurrent event of type 2 (integer or color name in quotation marks) |
colorEnd |
curve color for terminal event (integer or color name in quotation marks) |
median |
Logical value. Determines whether survival median will be plotted. Default is TRUE. |
Xlab |
Label of x-axis. Default is '"Time"' |
Ylab |
Label of y-axis. Default is '"Hazard function"' |
... |
Other graphical parameters |
Print a plot of the baseline survival or hazard functions for each type of event or both with the confidence intervals or not (conf.bands argument)
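A minimal sketch, assuming fit is a fitted competing joint frailty model of class 'jointRecCompet' (the argument values below are illustrative only):

plot(fit, event = "All", type.plot = "Hazard", conf.bands = TRUE,
     color1 = "red", color2 = "blue", colorEnd = "green")
plot(fit, event = "Terminal1", type.plot = "Survival", median = TRUE)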
Plots the estimated functions associated with the mediation analysis, i.e. g(s) and PTE(t), as well as the natural direct, indirect and total effects. An option to plot the confidence bands of the function PTE(t) is available. This option is also implemented for the confidence bands of the function g(s) and of the natural effects, if these confidence bands are available.
## S3 method for class 'jointSurroMed' plot(x,plot.mediation="All",type.plot="Hazard", conf.bands=TRUE,endpoint=2, legend.pos = "topleft",...)
x |
An object of class |
plot.mediation |
A character string specifying the desired plot. Possible values are "All", "g","PTE" or "Effects". The default is "All" which displays all three plots. |
type.plot |
A character string specifying the type of curve for the baseline hazard functions. Possible values are "Hazard" or "Survival". |
conf.bands |
Logical value. Determines whether confidence bands should be plotted. The default is to do so if the confidence bands are available. |
endpoint |
An integer specifying the endpoint for which the baseline curves should be plotted. Possible values are 0 for the surrogate endpoint only, 1 for the final endpoint only, or 2 for both. Default is 2. |
legend.pos |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topleft"' |
... |
other unused arguments. |
Print one or several plots for the mediation analysis of a joint surrogate model
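A minimal sketch, assuming fit.med is an object of class 'jointSurroMed' returned by a joint surrogate model fitted with a mediation analysis:

plot(fit.med, plot.mediation = "All", type.plot = "Hazard", endpoint = 2)
plot(fit.med, plot.mediation = "PTE", conf.bands = TRUE)   # PTE(t) with confidence bands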
Plots estimated baseline survival and hazard functions for the surrogate endpoint and the true endpoint from an object of class 'jointSurroPenal'. Confidence bands are allowed.
## S3 method for class 'jointSurroPenal' plot(x, type.plot = "Hazard", conf.bands=TRUE, pos.legend = "topright", cex.legend=0.7, main, Xlab = "Time", Ylab = "Baseline hazard function", median = TRUE, xmin = 0, xmax = NULL, ylim = c(0,1), endpoint = 2, scale = 1, ...)
x |
An object inheriting from |
type.plot |
A character string specifying the type of curve. Possible value are "Hazard", or "Survival". The default is "Hazard". Only the first letters are required, e.g "Haz", "Su". |
conf.bands |
Logical value. Determines whether confidence bands will be plotted. The default is to do so. |
pos.legend |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topright"'. |
cex.legend |
Character expansion factor *relative* to current 'par("cex")'. Default is 0.7. |
main |
Title of plot. |
Xlab |
Label of x-axis. Default is '"Time"'. |
Ylab |
Label of y-axis. Default is '"Baseline hazard function"'. |
median |
Logical value. Determines whether survival median will be plotted. Default is TRUE. |
xmin |
Minimum value for x-axis, the default is |
xmax |
Maximum value for x-axis, the default is |
ylim |
Range of y-axis. Default is from 0 to 1. |
endpoint |
A binary that indicates the endpoint to represent. |
scale |
A numeric value used to rescale (by multiplication) the survival times. If no rescaling is needed, the argument is set to 1, the default value; e.g. 1/365 converts days to years. |
... |
other unused arguments. |
Print a plot of the baseline survival or hazard functions for each type of event or both with the confidence bands or not (conf.bands argument)
jointSurroPenal, jointSurroCopPenal
## Not run: 
###--- Joint surrogate model ---###
###--- evaluation of surrogate endpoints ---###

data(dataOvarian)
joint.surro.ovar <- jointSurroPenal(data = dataOvarian, n.knots = 8,
                    init.kappa = c(2000,1000), indicator.alpha = 0,
                    nb.mc = 200, scale = 1/365)

# Baseline hazard functions for both the surrogate endpoint
# and the true endpoint
plot(joint.surro.ovar, endpoint = 2, type.plot = "Haz", conf.bands = T)

# Baseline survival functions for both the surrogate endpoint
# and the true endpoint
plot(joint.surro.ovar, endpoint = 2, type.plot = "Su", conf.bands = T)
## End(Not run)
Plot of trials leave-one-out crossvalidation Outputs for evaluating the joint surrogate model
## S3 method for class 'jointSurroPenalloocv'
plot(x, unusedtrial = NULL, xleg = "bottomleft", yleg = NULL, main = NULL,
  xlab = "Trials", ylab = "Log Hazard ratio of the true endpoint",
  legend = c("Beta observed", "Beta predict"), ...)
x |
An object inheriting from the 'jointSurroPenalloocv' class. |
unusedtrial |
Vector of trials to exclude from the plot, for instance because the predicted treatment effect on the true endpoint is an outlier. In that case, one can drop the trials with a very high absolute predicted value. |
xleg |
X-coordinate for the location of the legend. |
yleg |
Y-coordinate for the location of the legend. The default is NULL. |
main |
An overall title for the plot: see title. |
xlab |
A title for the x axis: see title. |
ylab |
A title for the y axis: see title. |
legend |
A character vector of length >= 1 to appear in the legend. |
... |
other unused arguments. |
This function displays one boxplot per trial in the dataset. Each boxplot includes 3 elements corresponding to the predicted treatment effect on the true endpoint with its prediction interval. The circles inside or outside the boxplots represent the observed treatment effects on the true endpoint. For each trial with convergence issues or outliers, the boxplot is replaced by a dash. In this case, if the argument main is set to NULL, the concerned trials are listed in the title of the figure. The function returns the list of unused trials.
Casimir Ledoux Sofeu [email protected], [email protected] and Virginie Rondeau [email protected]
Burzykowski T, Buyse M (2006). "Surrogate threshold effect: an alternative measure for meta-analytic surrogate endpoint validation." Pharmaceutical Statistics, 5(3), 173-186. ISSN 1539-1612.
## Not run:
# Generation of data to use
data.sim <- jointSurrSimul(n.obs = 300, n.trial = 10, cens.adm = 549.24,
  alpha = 1.5, theta = 3.5, gamma = 2.5, zeta = 1, sigma.s = 0.7,
  sigma.t = 0.7, cor = 0.8, betas = -1.25, betat = -1.25, full.data = 0,
  random.generator = 1, seed = 0, nb.reject.data = 0)
###--- Joint surrogate model ---###
joint.surro.sim.MCGH <- jointSurroPenal(data = data.sim, int.method = 2,
  nb.mc = 300, nb.gh = 20, print.iter = TRUE)
# Example of loocv taking into account only 2 trials (1 and 3)
dloocv <- loocv(joint.surro.sim.MCGH, unusedtrial = c(2,4:10))
plot(x = dloocv, xleg = "topright", bty = "n")
## End(Not run)
Plots estimated baseline survival and hazard functions for a terminal outcome from an object of class 'longiPenal'. If available, plot the estimated quantities related to a mediation analysis. Confidence bands are allowed.
## S3 method for class 'longiPenal'
plot(x, type.plot = "Hazard", plot.mediation = "All", conf.bands = TRUE,
  pos.legend = "topright", cex.legend = 0.7, main, color, median = TRUE,
  Xlab = "Time", Ylab = "Hazard function", ...)
x |
A joint model for a longitudinal outcome and a terminal event, i.e. an object of class 'longiPenal'. |
type.plot |
A character string specifying the type of curve for the terminal event. Possible values are "Hazard" or "Survival". The default is "Hazard". Only the first letters are required, e.g. "Haz", "Su". |
plot.mediation |
A character string specifying the desired plot. Possible values are "All", "PTE" or "Effects". The default is "All" which displays both plots. |
conf.bands |
Logical value. Determines whether confidence bands will be plotted. The default is to do so. |
pos.legend |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topright"' |
cex.legend |
character expansion factor *relative* to current 'par("cex")'. Default is 0.7 |
main |
title of plot |
color |
color of the curve (integer) |
median |
Logical value. Determines whether survival median will be plotted. Default is TRUE. |
Xlab |
Label of x-axis. Default is '"Time"' |
Ylab |
Label of y-axis. Default is '"Hazard function"' |
... |
other unused arguments |
Print a plot for the terminal event of the joint model for longitudinal and survival data.
## Not run:
###--- Joint model for longitudinal data and a terminal event ---###
data(colorectal)
data(colorectalLongi)
# Survival data preparation - only terminal events
colorectalSurv <- subset(colorectal, new.lesions == 0)
# Baseline hazard function approximated with splines
# Random effects as the link function
model.spli.RE <- longiPenal(Surv(time1, state) ~ age + treatment + who.PS +
  prev.resection, tumor.size ~ year * treatment + age + who.PS,
  colorectalSurv, data.Longi = colorectalLongi, random = c("1", "year"),
  id = "id", link = "Random-effects", left.censoring = -3.33,
  n.knots = 7, kappa = 2)
# Plot the estimated baseline hazard function with the confidence intervals
plot(model.spli.RE)
# Plot the estimated baseline survival function with the confidence intervals
plot(model.spli.RE, type = "Survival")
## End(Not run)
Plots estimated baseline survival and hazard functions of a multivariate frailty model (output from an object of class 'multivPenal') for each type of event (recurrent, terminal and second recurrent). Confidence intervals are allowed.
## S3 method for class 'multivPenal'
plot(x, event = "Both", type.plot = "Hazard", conf.bands = FALSE,
  pos.legend = "topright", cex.legend = 0.7, ylim, main, color1 = "red",
  color2 = "blue", colorEnd = "green", median = TRUE, Xlab = "Time",
  Ylab = "Hazard function", ...)
x |
A joint multivariate model, i.e. an object of class 'multivPenal'. |
event |
A character string specifying the type of outcome. Possible values are "Terminal", "Recurrent", "Recurrent2", or "Both". The default is "Both". |
type.plot |
A character string specifying the type of curve. Possible values are "Hazard" or "Survival". The default is "Hazard". Only the first letters are required, e.g. "Haz", "Su". |
conf.bands |
Logical value. Determines whether confidence intervals will be plotted. The default is FALSE. |
pos.legend |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topright"' |
cex.legend |
character expansion factor *relative* to current 'par("cex")'. Default is 0.7 |
ylim |
y-axis limits |
main |
plot title |
color1 |
curve color for recurrent event of type 1 (integer or color name in quotation marks) |
color2 |
curve color for recurrent event of type 2 (integer or color name in quotation marks) |
colorEnd |
curve color for terminal event (integer or color name in quotation marks) |
median |
Logical value. Determines whether survival median will be plotted. Default is TRUE. |
Xlab |
Label of x-axis. Default is '"Time"' |
Ylab |
Label of y-axis. Default is '"Hazard function"' |
... |
Other graphical parameters |
Print a plot of the baseline survival or hazard functions for each type of event or both, with or without confidence intervals (conf.bands argument). A minimal usage sketch is given below.
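No fitting example is reproduced here; the sketch below only illustrates the plot call, assuming fit.multiv is a fitted 'multivPenal' object (see the multivPenal function for a complete fitting example). The argument values are those documented above.
## Not run:
# fit.multiv: an object of class 'multivPenal' returned by multivPenal()
# Baseline hazard functions for both recurrent events and the terminal event
plot(fit.multiv, event = "Both", type.plot = "Hazard", conf.bands = TRUE)
# Baseline survival function for the second recurrent event only
plot(fit.multiv, event = "Recurrent2", type.plot = "Survival")
## End(Not run)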
Plots estimated baseline survival and hazard functions (output from an object of class 'NestedPenal' for nested frailty models). Confidence bands are allowed.
## S3 method for class 'nestedPenal'
plot(x, type.plot = "Hazard", conf.bands = TRUE, pos.legend = "topright",
  cex.legend = 0.7, main, color = 2, median = TRUE, Xlab = "Time",
  Ylab = "Hazard function", ...)
x |
A nested model, i.e. an object of class 'nestedPenal'. |
type.plot |
A character string specifying the type of curve. Possible values are "Hazard" or "Survival". The default is "Hazard". Only the first letters are required, e.g. "Haz", "Su". |
conf.bands |
logical value. Determines whether confidence bands will be plotted. The default is to do so. |
pos.legend |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topright"' |
cex.legend |
character expansion factor *relative* to current 'par("cex")'. Default is 0.7 |
main |
plot title |
color |
curve color (integer) |
median |
Logical value. Determines whether survival median will be plotted. Default is TRUE. |
Xlab |
Label of x-axis. Default is '"Time"' |
Ylab |
Label of y-axis. Default is '"Hazard function"' |
... |
Other graphical parameters. |
Print a plot of the baseline survival or hazard functions, with or without confidence bands (conf.bands argument)
## Not run:
data(dataNested)
modNested <- frailtyPenal(Surv(t1,t2,event) ~ cluster(group) +
  subcluster(subgroup) + cov1 + cov2, data = dataNested, n.knots = 8,
  kappa = 50000, hazard = "Splines")
plot(modNested, conf.bands = FALSE)
## End(Not run)
Plots predicted probabilities of event. Confidence intervals are allowed.
## S3 method for class 'predFrailty'
plot(x, conf.bands = FALSE, pos.legend = "topright", cex.legend = 0.7,
  ylim = c(0,1), Xlab = "Time t", Ylab, ...)
x |
An object from the 'prediction' function, i.e. a 'predFrailty' object. |
conf.bands |
Logical value. Determines whether confidence intervals will be plotted. The default is FALSE. |
pos.legend |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topright"'. |
cex.legend |
size of the legend. Default is 0.7. |
ylim |
range of y-axis. Default is from 0 to 1. |
Xlab |
Label of x-axis. Default is '"Time t"' |
Ylab |
Label of y-axis. |
... |
Other unused arguments. |
Print one plot with as many curves as the number of profiles.
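As an illustration, the sketch below reuses the Cox model and prediction data frame from the prediction() examples at the end of this document; MC.sample is set to a positive value so that confidence bands can be drawn.
## Not run:
data(readmission)
# keep the last row of each subject (time of death), as in the prediction() examples
readmission <- aggregate(readmission, by = list(readmission$id),
                         FUN = function(x){x[length(x)]})[,-1]
cox <- frailtyPenal(Surv(t.stop, death) ~ sex + dukes,
                    n.knots = 10, kappa = 10000, data = readmission)
datapred <- data.frame(sex = 0, dukes = 0)
datapred$sex <- as.factor(datapred$sex)
levels(datapred$sex) <- c(1,2)
datapred$dukes <- as.factor(datapred$dukes)
levels(datapred$dukes) <- c(1,2,3)
datapred[1,] <- c(1,2)  # man, dukes 2
datapred[2,] <- c(2,3)  # woman, dukes 3
pred.cox <- prediction(cox, datapred, t = 100, window = seq(50, 1900, 50),
                       MC.sample = 100)
plot(pred.cox, conf.bands = TRUE, pos.legend = "topleft")
## End(Not run)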
Plots predicted probabilities of terminal event. Confidence intervals are allowed.
## S3 method for class 'predJoint'
plot(x, conf.bands = FALSE, relapses = TRUE, pos.legend = "topright",
  cex.legend = 0.7, ylim = c(0,1), Xlab = "Time t",
  Ylab = "Prediction probability of event", ...)
x |
An object from the 'prediction' function, more generally a 'predJoint' object. |
conf.bands |
Logical value. Determines whether confidence intervals will be plotted. The default is FALSE. |
relapses |
Logical value. Determines whether observed recurrent events will be plotted. The default is TRUE. |
pos.legend |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topright"' |
cex.legend |
size of the legend. Default is 0.7 |
ylim |
range of y-axis. Default is from 0 to 1 |
Xlab |
Label of x-axis. Default is '"Time t"' |
Ylab |
Label of y-axis. Default is '"Prediction probability of event"' |
... |
Other unused arguments |
Print as many plots as the number of subjects.
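A minimal sketch, reusing the joint frailty model and a reduced version of the prediction data frame built in the prediction() examples at the end of this document.
## Not run:
data(readmission)
joi <- frailtyPenal(Surv(t.start, t.stop, event) ~ cluster(id) + sex + dukes +
  terminal(death), formula.terminalEvent = ~ sex + dukes, data = readmission,
  n.knots = 10, kappa = c(100,100), recurrentAG = TRUE)
datapredj <- data.frame(t.stop = 0, event = 0, id = 0, sex = 0, dukes = 0)
datapredj$sex <- as.factor(datapredj$sex)
levels(datapredj$sex) <- c(1,2)
datapredj$dukes <- as.factor(datapredj$dukes)
levels(datapredj$dukes) <- c(1,2,3)
datapredj[1,] <- c(100,1,1,1,2)  # patient 1: relapses at 100, 200 and 300 days
datapredj[2,] <- c(200,1,1,1,2)
datapredj[3,] <- c(300,1,1,1,2)
datapredj[4,] <- c(380,1,2,1,2)  # patient 2: one relapse at 380 days
pred.joint <- prediction(joi, datapredj, t = 100, window = seq(50, 1500, 50),
                         event = "Terminal", MC.sample = 100)
# one plot per subject, with confidence bands and observed relapses
plot(pred.joint, conf.bands = TRUE, relapses = TRUE)
## End(Not run)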
Plots predicted probabilities of the event. Confidence intervals are allowed.
## S3 method for class 'predLongi'
plot(x, conf.bands = FALSE, pos.legend = "topright", cex.legend = 0.7,
  ylim = c(0,1), Xlab = "Time t", Ylab, ...)
x |
An object inheriting from the 'predLongi' class. |
conf.bands |
Logical value. Determines whether confidence intervals will be plotted. The default is FALSE. |
pos.legend |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topright"'. |
cex.legend |
size of the legend. Default is 0.7. |
ylim |
range of y-axis. Default is from 0 to 1. |
Xlab |
Label of x-axis. Default is '"Time t"' |
Ylab |
Label of y-axis. |
... |
Other unused arguments. |
Print one plot with as many curves as the number of profiles.
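A minimal sketch, assuming the 'longiPenal' fit model.spli.CL and the prediction data frames datapredj and datapredj_longi have been built as in the prediction() examples at the end of this document.
## Not run:
# model.spli.CL, datapredj and datapredj_longi are built as in the
# prediction() examples at the end of this document
pred.jointLongi <- prediction(model.spli.CL, datapredj, datapredj_longi,
                              t = 1, window = seq(0.5, 2.5, 0.2),
                              MC.sample = 100)
plot(pred.jointLongi, conf.bands = TRUE, pos.legend = "topleft")
## End(Not run)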
Plots estimated baseline survival and hazard functions of a joint model (output from an object of class 'trivPenal') for each type of event (terminal or recurrent). Confidence bands are allowed.
## S3 method for class 'trivPenal'
plot(x, event = "Both", type.plot = "Hazard", conf.bands = FALSE,
  pos.legend = "topright", cex.legend = 0.7, ylim, main, color = 2,
  median = TRUE, Xlab = "Time", Ylab = "Hazard function", ...)
x |
A joint model, an object of class 'trivPenal'. |
event |
A character string specifying the type of event for which the curve is plotted. Possible values are "Terminal", "Recurrent", or "Both". The default is "Both". |
type.plot |
A character string specifying the type of curve. Possible values are "Hazard" or "Survival". The default is "Hazard". Only the first letters are required, e.g. "Haz", "Su". |
conf.bands |
Logical value. Determines whether confidence bands will be plotted. The default is FALSE. |
pos.legend |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topright"' |
cex.legend |
character expansion factor *relative* to current 'par("cex")'. Default is 0.7 |
ylim |
y-axis limits |
main |
plot title |
color |
curve color (integer) |
median |
Logical value. Determines whether survival median will be plotted. Default is TRUE. |
Xlab |
Label of x-axis. Default is '"Time"' |
Ylab |
Label of y-axis. Default is '"Hazard function"' |
... |
other unused arguments |
Print a plot of the baseline survival or hazard functions for each type of event or both, with or without confidence bands (conf.bands argument)
## Not run:
###--- Trivariate joint model for longitudinal data, ---###
###--- recurrent events and a terminal event ---###
data(colorectal)
data(colorectalLongi)
# Weibull baseline hazard function
# Random effects as the link function, Gap timescale
# (computation takes around 30 minutes)
model.weib.RE.gap <- trivPenal(Surv(gap.time, new.lesions) ~ cluster(id) +
  age + treatment + who.PS + prev.resection + terminal(state),
  formula.terminalEvent =~ age + treatment + who.PS + prev.resection,
  tumor.size ~ year * treatment + age + who.PS, data = colorectal,
  data.Longi = colorectalLongi, random = c("1", "year"), id = "id",
  link = "Random-effects", left.censoring = -3.33, recurrentAG = FALSE,
  hazard = "Weibull", method.GH = "Pseudo-adaptive", n.nodes = 7)
plot(model.weib.RE.gap)
plot(model.weib.RE.gap, type = "survival")
## End(Not run)
Plots estimated baseline survival and hazard functions of a joint model (output from an object of class 'trivPenalNL') for each type of event (terminal or recurrent). Confidence bands are allowed.
## S3 method for class 'trivPenalNL'
plot(x, event = "Both", type.plot = "Hazard", conf.bands = FALSE,
  pos.legend = "topright", cex.legend = 0.7, ylim, main, color = 2,
  median = TRUE, Xlab = "Time", Ylab = "Hazard function", ...)
x |
A joint model, an object of class 'trivPenalNL'. |
event |
A character string specifying the type of event for which the curve is plotted. Possible values are "terminal", "recurrent", or "both". The default is "both". |
type.plot |
A character string specifying the type of curve. Possible values are "Hazard" or "Survival". The default is "Hazard". Only the first letters are required, e.g. "Haz", "Su". |
conf.bands |
Logical value. Determines whether confidence bands will be plotted. The default is FALSE. |
pos.legend |
The location of the legend can be specified by setting this argument to a single keyword from the list '"bottomright"', '"bottom"', '"bottomleft"', '"left"', '"topleft"', '"top"', '"topright"', '"right"' and '"center"'. The default is '"topright"' |
cex.legend |
character expansion factor *relative* to current 'par("cex")'. Default is 0.7 |
ylim |
y-axis limits |
main |
plot title |
color |
curve color (integer) |
median |
Logical value. Determines whether survival median will be plotted. Default is TRUE. |
Xlab |
Label of x-axis. Default is '"Time"' |
Ylab |
Label of y-axis. Default is '"Hazard function"' |
... |
other unused arguments |
Print a plot of the baseline survival or hazard functions for each type of event or both, with or without confidence bands (conf.bands argument)
## Not run:
###--- Trivariate joint model for longitudinal data, ---###
###--- recurrent events and a terminal event ---###
data(colorectal)
data(colorectalLongi)
# Weibull baseline hazard function
# Random effects as the link function, Gap timescale
# (computation takes around 30 minutes)
model.weib.RE.gap <- trivPenal(Surv(gap.time, new.lesions) ~ cluster(id) +
  age + treatment + who.PS + prev.resection + terminal(state),
  formula.terminalEvent =~ age + treatment + who.PS + prev.resection,
  tumor.size ~ year * treatment + age + who.PS, data = colorectal,
  data.Longi = colorectalLongi, random = c("1", "year"), id = "id",
  link = "Random-effects", left.censoring = -3.33, recurrentAG = FALSE,
  hazard = "Weibull", method.GH = "Pseudo-adaptive", n.nodes = 7)
plot(model.weib.RE.gap)
plot(model.weib.RE.gap, type = "survival")
## End(Not run)
Plot the prediction of the treatment effect on the true endpoint based on the observed treatment effect on the surrogate endpoint, with the prediction interval: results from the one-step joint surrogate model for evaluating a candidate surrogate endpoint. The graphic also includes vertical lines that cut the x-axis at the values of the STE (surrogate threshold effect). A hatched rectangle/zone indicates the values of the treatment effect on the surrogate endpoint (beta.S) that predict a nonzero treatment effect on the true endpoint (beta.T), according to the number of values of the STE and the shape of the upper confidence limit of the prediction model.
plotTreatPredJointSurro(object, from = -3, to = 2, type = "Coef",
  var.used = "error.estim", alpha. = 0.05, n = 1000, lty = 2, d = 3,
  colCI = "blue", xlab = "beta.S", ylab = "beta.T.predict",
  pred.int.use = "up", main = NULL, add.accept.area.betaS = TRUE,
  ybottom = -0.05, ytop = 0.05, density = 20, angle = 45,
  legend.show = TRUE, leg.x = NULL, leg.y = 2,
  legend = c("Prediction model", "95% prediction Interval",
             "Beta.S for nonzero beta.T", "STE"),
  leg.text.col = "black", leg.lty = c(1, 2, 4, NA),
  leg.pch = c(NA, NA, 7, 1), leg.bg = "white", leg.bty = "n",
  leg.cex = 0.85, ...)
object |
An object inheriting from the 'jointSurroPenal' class. |
from |
The lower bound of the range (with to) over which the treatment effects on the surrogate endpoint are plotted. The default is -3. |
to |
The upper bound of the range (with from) over which the treatment effects on the surrogate endpoint are plotted. The default is 2. |
type |
The type of graphic: "Coef" for treatment effects on the log hazard ratio scale or "HR" for hazard ratios. The default is "Coef". |
var.used |
This argument can take two values. The first one is "error.estim" (the default), which takes the estimation error of the model parameters into account when computing the prediction interval; the second one is "No.error", which assumes the parameters are known. |
alpha. |
The confidence level for the prediction interval. The default is 0.05. |
n |
An integer that indicates the number of values of the treatment effect on the surrogate endpoint at which the prediction is computed. The default is 1000. |
lty |
The line type. Line types can either be specified as an integer (0=blank, 1=solid (default), 2=dashed, 3=dotted, 4=dotdash, 5=longdash, 6=twodash) or as one of the character strings "blank", "solid", "dashed", "dotted", "dotdash", "longdash", or "twodash". The default is 2. |
d |
The desired number of digits after the decimal point for parameters and confidence intervals. Default of 3 digits is used. |
colCI |
The color used to display the confidence interval. |
xlab |
A title for the x axis. |
ylab |
A title for the y axis. |
pred.int.use |
A character string that indicates the bound of the prediction interval to use to compute the STE. Possible values are "up" for the upper bound (the default) or "lw" for the lower bound. |
main |
Title of the graphics |
add.accept.area.betaS |
A boolean that indicates whether the plot should include the acceptance area for the treatment effects on the surrogate endpoint that predict a nonzero treatment effect on the true endpoint. The default is TRUE. |
ybottom |
A scalar for the bottom y position of the rectangle drawn on the x-axis for the acceptable values of the treatment effect on the surrogate endpoint. The default is -0.05. |
ytop |
A scalar for the top y position of the rectangle drawn on the x-axis for the acceptable values of the treatment effect on the surrogate endpoint. The default is 0.05. |
density |
The density of shading lines, in lines per inch. A NULL value means that no shading lines are drawn; a zero value of 'density' also means no shading lines, whereas negative values (and 'NA') suppress shading (and so allow color filling). The default is 20. |
angle |
Angle (in degrees) of the shading lines. The default is 45. |
legend.show |
A boolean that indicates whether the legend should be displayed. The default is TRUE. |
leg.x |
The x co-ordinate to be used to position the legend. |
leg.y |
The y co-ordinate to be used to position the legend. The default is 2. |
legend |
A character or expression vector of length >= 1 to appear in the legend |
leg.text.col |
The color used for the legend text. The default is "black". |
leg.lty |
The line types for the lines appearing in the legend. |
leg.pch |
The plotting symbols appearing in the legend, as a numeric vector or a vector of 1-character strings (see points). Unlike points, this can all be specified as a single multi-character string. |
leg.bg |
The background color for the legend box (note that this is only used if leg.bty = "o"). |
leg.bty |
The type of box to be drawn around the legend. The allowed values are "o" and "n" (the default). |
leg.cex |
Character expansion factor relative to current par("cex"). The default is 0.85. |
... |
other unused arguments |
For given treatment effects on the surrogate endpoint, plots the associated treatment effects on the true endpoint predicted from the joint surrogate model, together with the prediction interval.
Casimir Ledoux Sofeu [email protected], [email protected] and Virginie Rondeau [email protected]
Burzykowski T, Buyse M (2006). "Surrogate threshold effect: an alternative measure for meta-analytic surrogate endpoint validation." Pharmaceutical Statistics, 5(3), 173-186. ISSN 1539-1612.
Sofeu, C. L. and Rondeau, V. (2020). How to use frailtypack for validating failure-time surrogate endpoints using individual patient data from meta-analyses of randomized controlled trials. PLOS ONE; 15, 1-25.
jointSurroPenal, jointSurroCopPenal, predict.jointSurroPenal
## Not run:
###--- Joint surrogate model ---###
###--- evaluation of surrogate endpoints ---###
data(dataOvarian)
joint.surro.ovar <- jointSurroPenal(data = dataOvarian, n.knots = 8,
  init.kappa = c(2000,1000), indicator.alpha = 0, nb.mc = 200, scale = 1/365)
## "HR"
plotTreatPredJointSurro(joint.surro.ovar, from = 0, to = 4,
  type = "HR", lty = 2, leg.y = 13)
## or without acceptance area for betaS:
plotTreatPredJointSurro(joint.surro.ovar, from = 0, to = 4,
  type = "HR", lty = 2, leg.y = 13, add.accept.area.betaS = FALSE)
## "log HR"
plotTreatPredJointSurro(joint.surro.ovar, from = -2, to = 2,
  type = "Coef", lty = 2, leg.y = 3.5)
### For a value of STE greater than 0 (HR > 1), which indicates a deleterious
### treatment effect, the argument "pred.int.use" can be set to "lw"
plotTreatPredJointSurro(joint.surro.ovar, from = 0, to = 2,
  type = "HR", lty = 2, leg.y = 4, pred.int.use = "lw")
## End(Not run)
Predict the treatment effect on the true endpoint (beta.T), based on the treatment effect observed on the surrogate endpoint (beta.S).
## S3 method for class 'jointSurroPenal'
predict(object, datapred = NULL, betaS.obs = NULL, betaT.obs = NULL,
  ntrial0 = NULL, var.used = "error.estim", alpha. = 0.05, dec = 3,
  colCI = "red", from = -2, to = 2, type = "Coef", ...)
object |
An object inheriting from the 'jointSurroPenal' class. |
datapred |
Dataset to use for the prediction. If this argument is specified, the data structure must be the same as that of the data argument used to fit the model (see jointSurroPenal). If not specified, the prediction is made for each trial of the dataset used to fit the model. |
betaS.obs |
Observed treatment effect on the surrogate endpoint, to use for the prediction of the treatment effect on the true endpoint. If not null, this value is used for the prediction instead of the dataset (datapred). The default is NULL. |
betaT.obs |
Observed treatment effect on the true endpoint, used to assess the prediction if not null. The default is NULL. |
ntrial0 |
Number of subjects included in the new trial. Required if betaS.obs is not null. |
var.used |
This argument can take two values. The first one is "error.estim" (the default), which takes the estimation error of the model parameters into account when computing the prediction interval; the second one is "No.error", which assumes the parameters are known. |
alpha. |
The confidence level for the prediction interval. The default is 0.05. |
dec |
The desired number of digits after the decimal point for parameters and confidence intervals. Default of 3 digits is used. |
colCI |
The color used to display the confidence interval. |
from |
The lower bound of the range (with to) over which the treatment effects on the surrogate endpoint are plotted. The default is -2. |
to |
The upper bound of the range (with from) over which the treatment effects on the surrogate endpoint are plotted. The default is 2. |
type |
The type of graphic: "Coef" for treatment effects on the log hazard ratio scale or "HR" for hazard ratios. The default is "Coef". |
... |
Other arguments passed to the plotTreatPredJointSurro function. |
Prediction is based on the formulas described in Burzykowski and Buyse (2006). We do not consider the case in which the prediction takes into account the estimation error on the estimate of the treatment effect on the surrogate endpoint in the new trial.
Returns and displays a data frame including, for each trial, the number of included subjects (if available), the observed treatment effect on the surrogate endpoint, the observed treatment effect on the true endpoint (if available) and the predicted treatment effect on the true endpoint with the associated prediction intervals. If the observed treatment effect on the true endpoint (if available) is included in the prediction interval, the last column contains "*". This function also produces a plot of the predicted treatment effects on the true endpoint according to the given values of the treatment effects on the surrogate endpoint, with the prediction intervals.
Casimir Ledoux Sofeu [email protected], [email protected] and Virginie Rondeau [email protected]
Burzykowski T, Buyse M (2006). "Surrogate threshold effect: an alternative measure for meta-analytic surrogate endpoint validation." Pharmaceutical Statistics, 5(3), 173-186. ISSN 1539-1612.
Sofeu, C. L. and Rondeau, V. (2020). How to use frailtypack for validating failure-time surrogate endpoints using individual patient data from meta-analyses of randomized controlled trials. PLOS ONE; 15, 1-25.
jointSurroPenal, jointSurroCopPenal
## Not run:
###--- Joint surrogate model ---###
###--- evaluation of surrogate endpoints ---###
data(dataOvarian)
joint.surro.ovar <- jointSurroPenal(data = dataOvarian, n.knots = 8,
  init.kappa = c(2000,1000), indicator.alpha = 0, nb.mc = 200, scale = 1/365)
# prediction of the treatment effects on the true endpoint in each trial of
# the dataOvarian dataset
predict(joint.surro.ovar)
# prediction of the treatment effect on the true endpoint from an observed
# treatment effect on the surrogate endpoint in a given trial
# in log HR
predict(joint.surro.ovar, betaS.obs = -0.797, betaT.obs = -1.018)
predict(joint.surro.ovar, type = "Coef", betaS.obs = -1, leg.y = 0,
  leg.x = 0.3, to = 2.3)
predict(joint.surro.ovar, type = "Coef", leg.y = 3.5,
  add.accept.area.betaS = FALSE, to = 2.3)
# in HR
predict(joint.surro.ovar, betaS.obs = exp(-0.797), betaT.obs = exp(-1.018))
predict(joint.surro.ovar, type = "HR", betaS.obs = log(0.65), leg.y = 5, to = 2.3)
predict(joint.surro.ovar, type = "HR", leg.y = 5,
  add.accept.area.betaS = FALSE, to = 2.3)
## End(Not run)
For Cox proportional hazard model
A predictive probability of event between t and horizon time t+w, with w the window of prediction.
For Gamma Shared Frailty model for clustered (not recurrent) events
Two kinds of predictive probabilities can be calculated:
- a conditional predictive probability of event between t and horizon time t+w, i.e. given a specific group
- a marginal predictive probability of event between t and horizon time t+w, i.e. averaged over the population
For Gaussian Shared Frailty model for clustered (not recurrent) events
Two kinds of predictive probabilities can be calculated:
- a conditional predictive probability of event between t and horizon time
t+w, i.e. given a specific group and given a specific Gaussian random effect
- a marginal predictive probability of event between t and horizon time t+w, i.e. averaged over the population
For Gamma Shared Frailty model for recurrent events
Two kinds of predictive probabilities can be calculated:
- A marginal predictive probability of event between t and horizon time t+w, i.e. averaged over the population.
- a conditional predictive probability of event between t and horizon time t+w, i.e. given a specific individual.
This prediction method is the same as the conditional gamma prediction method applied for clustered events (see the conditional formula above).
For Gaussian Shared Frailty model for recurrent events
Two kinds of predictive probabilities can be calculated:
- A marginal predictive probability of event between t and horizon time t+w, i.e. averaged over the population.
- a conditional predictive probability of event between t and horizon time t+w, i.e. given a specific individual.
This prediction method is the same as the conditional Gaussian prediction method applied for clustered events (see the conditional formula above).
All these predictions can be computed in two ways on the time scale: either as a cumulative probability of developing the event between t and t+w (with t fixed but a varying window of prediction w), or as the probability of developing the event in the next w at a specific time (i.e. for a varying prediction time t but a fixed window of prediction). See Details.
For Joint Frailty model
Prediction for two types of event can be calculated : for a terminal event or for a new recurrent event, knowing patient's characteristics.
- Prediction of death knowing patients' characteristics:
The aim is to predict the probability of death in a specific time window given the history of patient i before the time of prediction t. The history H_i^(J,l), (l=1,2), is the information on covariates before time t, but also the number of recurrences and the times of occurrence. Three types of marginal probabilities are computed:
- a prediction of death between t and t+w given that the patient had exactly J recurrences (H_i^(J,1)) before t
- a prediction of death between t and t+w given that the patient had at least J recurrences (H_i^(J,2)) before t
- a prediction of death between t and t+w considering the recurrence history only in the parameters estimation. It corresponds to the average probability of death between t and t+w for a patient with these given characteristics.
- Prediction of the risk of a new recurrent event knowing patients' characteristics:
The aim is to predict the probability of a new recurrent event in a specific time window given the history of patient i before the time of prediction t. The history H_i^J is the information on covariates before time t, but also the number of recurrences and the times of occurrence. The marginal probability computed is a prediction of a new recurrent event between t and t+w given that the patient had exactly J recurrences (H_i^J) before t.
All these predictions can be computed in two ways: either as a cumulative probability of developing the event between t and t+w (with t fixed but a varying window of prediction w), or as the probability of developing the event in the next w at a specific time (i.e. for a varying prediction time t but a fixed window of prediction). See Details.
With Gaussian frailties (eta_i), the same expressions are used, but with the gamma frailty term u_i^J replaced by its Gaussian counterpart exp(J eta_i), and the frailty density corresponds to the Gaussian distribution.
For Joint Nested Frailty models
Prediction of the probability of developing a terminal event between t and t+w for subject i who survived by time t based on the visiting and disease histories of their own and other family members observed by time t.
Let Y_fi^R(t) be the history of subject i in family f before time t, which includes all the recurrent events and covariate information. For the disease history, let T_fi^D(t) = min(T_fi, t) be the observed time to an event before t, delta_fi^D(t) the disease indicator by time t and X_fi^D(t) the covariate information observed up to time t. The family history of subject i in family f is defined as the visiting and disease history of all subjects except subject i in family f, together with their covariate information by time t. The prediction probability is then computed conditionally on the subject's own history and on this family history.
For Joint models for longitudinal data and a terminal event
The predicted probabilities are calculated in a specific time window given the history of biomarker measurements before the time of prediction t (Y_i(t)). The probabilities are also conditional on covariates before time t and on the fact that the subject was at risk at t. The marginal predicted probability of the terminal event is obtained by integrating over the random effects of the biomarker given this history (see Krol et al., 2016, for the exact expression).
These probabilities can be calculated at several time points, with a fixed time of prediction t and a varying window w, or with a fixed window w and a varying time of prediction t. See Details for an example of how to construct time windows.
For Trivariate joint models for longitudinal data, recurrent events and a terminal event
The predicted probabilities are calculated in a specific time window given the history of biomarker measurements Y_i(t) and of recurrences H_i^(J,1) (complete history of recurrences with the number J of observed events known) before the time of prediction t. The probabilities are also conditional on covariates before time t and on the fact that the subject was at risk at t. The marginal predicted probability of the terminal event is obtained by integrating over the random effects given this history (see Krol et al., 2016, for the exact expression).
The biomarker history can be represented using a linear (trivPenal) or a non-linear mixed-effects model (trivPenalNL).
These probabilities can be calculated at several time points, with a fixed time of prediction t and a varying window w, or with a fixed window w and a varying time of prediction t. See Details for an example of how to construct time windows.
prediction(fit, data, data.Longi, t, window, event = "Both",
  conditional = FALSE, MC.sample = 0, individual)
fit |
A frailtyPenal, jointPenal, longiPenal, trivPenal or trivPenalNL object. |
data |
Data frame for the prediction. See Details. |
data.Longi |
Data frame for the prediction used for joint models with longitudinal data. See Details. |
t |
Time or vector of times for prediction. |
window |
Window or vector of windows for prediction. |
event |
Only for joint and shared models. The type of event you want to predict: "Terminal" for a terminal event, "Recurrent" for a recurrent event or "Both". The default value is "Both". For the joint nested model, only "Terminal" is allowed. In a shared model, if you want to predict a new recurrent event, the argument "Recurrent" should be used. If you want to predict a new event from clustered data, do not use this option. |
conditional |
Only for prediction method applied on shared models. Provides distinction between the conditional and marginal prediction methods. Default is FALSE. |
MC.sample |
Number of samples used to calculate confidence bands with a Monte-Carlo method (with a maximum of 1000 samples). If MC.sample=0 (default value), no confidence intervals are calculated. |
individual |
Only for the joint nested model. Vector of individuals (of the same family) for which you want to make predictions. |
To compute predictions with a prediction time t fixed and a variable window:
prediction(fit, datapred, t=10, window=seq(1,10,by=1))
Otherwise, you can have a variable prediction time and a fixed window.
prediction(fit, datapred, t=seq(10,20,by=1), window=5)
Or fix both prediction time t and window.
prediction(fit, datapred, t=10, window=5)
Building the data frame is an important step. It contains the profiles of patients for which you want to compute predictions. To make predictions on a Cox proportional hazards or a shared frailty model, only covariates need to be included. You have to distinguish between numerical and categorical variables (factors). If we fit a shared frailty model with two covariates sex (factor) and age (numeric), here is the associated data frame for three profiles of prediction.
datapred <- data.frame(sex = 0, age = 0)
datapred$sex <- as.factor(datapred$sex)
levels(datapred$sex) <- c(1,2)
datapred[1,] <- c(1,40)  # man, 40 years old
datapred[2,] <- c(2,45)  # woman, 45 years old
datapred[3,] <- c(1,60)  # man, 60 years old
Time-dependent covariates: In the context of time-dependent covariate, the last previous value of the covariate is used before the time t of prediction.
It should be noted that in a data frame for both marginal and conditional predictions on a shared frailty model for clustered data, the group must be specified. In the case of marginal predictions this can be any number, as it does not influence the predictions. However, for conditional predictions, the group must also be included in the data set used for fitting the model. The conditional predictions apply the empirical Bayes estimate of the frailty from the specified cluster. Here, three individuals belong to group 5.
datapred <- data.frame(group = 0, sex = 0, age = 0)
datapred$sex <- as.factor(datapred$sex)
levels(datapred$sex) <- c(1,2)
datapred[1,] <- c(5,1,40)  # man, 40 years old (cluster 5)
datapred[2,] <- c(5,2,45)  # woman, 45 years old (cluster 5)
datapred[3,] <- c(5,1,60)  # man, 60 years old (cluster 5)
To use the prediction function on joint frailty models and trivariate joint models, the construction is slightly different. In these cases, the prediction for the terminal event takes into account covariates but also the history of recurrent event times for a patient. You have to create a data frame with the relapse times, the indicator of event, the cluster variable and the covariates. Relapses occurring after the prediction time may be included but will be ignored for the prediction. A joint model with a calendar timescale needs to be fitted with Surv(start,stop,event); relapse times correspond to the "stop" variable and indicators of event correspond to the "event" variable (if event=0, the relapse will not be taken into account). For patients without relapses, all the values of the "event" variable should be set to 0. Finally, the same cluster variable name needs to be in the joint model and in the data frame for predictions ("id" in the following example). For instance, we observe relapses of a disease and fit a joint model adjusted for two covariates sex (1:male 2:female) and chemo (treatment by chemotherapy 1:no 2:yes). We describe 3 different profiles of prediction, all treated by chemotherapy: 1) a man with four relapses at 100, 200, 300 and 400 days, 2) a man with only one relapse at 1000 days, 3) a woman without relapse.
datapred <- data.frame(time = 0, event = 0, id = 0, sex = 0, chemo = 0)
datapred$sex <- as.factor(datapred$sex)
levels(datapred$sex) <- c(1,2)
datapred$chemo <- as.factor(datapred$chemo)
levels(datapred$chemo) <- c(1,2)
datapred[1,] <- c(100,1,1,1,2)   # first relapse of patient 1
datapred[2,] <- c(200,1,1,1,2)   # second relapse of patient 1
datapred[3,] <- c(300,1,1,1,2)   # third relapse of patient 1
datapred[4,] <- c(400,1,1,1,2)   # fourth relapse of patient 1
datapred[5,] <- c(1000,1,2,1,2)  # one relapse at 1000 days for patient 2
datapred[6,] <- c(100,0,3,2,2)   # patient 3 did not relapse
The data can also be the dataset used to fit the joint model. In this case, you will obtain as many prediction rows as patients.
Finally, for the predictions using joint models for longitudinal data and a terminal event and trivariate joint models, a data frame with the history of the biomarker measurements must be provided. It must include data on measurements (values and time points), cluster variable and covariates. Measurements taken after the prediction time may be included but will be ignored for the prediction. The same cluster variable name must be in the data frame, in the data frame used for the joint model and in the data frame with the recurrent event and terminal event times. For instance, we observe two patients and each one had 5 tumor size measurements (patient 1 had an increasing tumor size and patient 2, decreasing). The joint model used for the predictions was adjusted on sex (1: male, 2: female), treatment (1: sequential arm, 2: combined arm), WHO baseline performance status (1: 0 status, 2: 1 status, 3: 2 status) and previous resection of the primary tumor (0: no, 1: yes). The data frame for the biomarker measurements can be:
datapredj_longi <- data.frame(id = 0, year = 0, tumor.size = 0, treatment = 0,
                              age = 0, who.PS = 0, prev.resection = 0)
datapredj_longi$treatment <- as.factor(datapredj_longi$treatment)
levels(datapredj_longi$treatment) <- 1:2
datapredj_longi$age <- as.factor(datapredj_longi$age)
levels(datapredj_longi$age) <- 1:3
datapredj_longi$who.PS <- as.factor(datapredj_longi$who.PS)
levels(datapredj_longi$who.PS) <- 1:3
datapredj_longi$prev.resection <- as.factor(datapredj_longi$prev.resection)
levels(datapredj_longi$prev.resection) <- 1:2
# patient 1: increasing tumor size
datapredj_longi[1,] <- c(1,  0,1.2,2,1,1,1)
datapredj_longi[2,] <- c(1,0.3,1.4,2,1,1,1)
datapredj_longi[3,] <- c(1,0.6,1.9,2,1,1,1)
datapredj_longi[4,] <- c(1,0.9,2.5,2,1,1,1)
datapredj_longi[5,] <- c(1,1.5,3.9,2,1,1,1)
# patient 2: decreasing tumor size
datapredj_longi[6,] <- c(2,  0,1.2,2,1,1,1)
datapredj_longi[7,] <- c(2,0.3,0.7,2,1,1,1)
datapredj_longi[8,] <- c(2,0.5,0.3,2,1,1,1)
datapredj_longi[9,] <- c(2,0.7,0.1,2,1,1,1)
datapredj_longi[10,] <- c(2,0.9,0.1,2,1,1,1)
The following components are included in a 'predFrailty' object obtained by using the prediction function for a Cox proportional hazards or a shared frailty model.
npred |
Number of individual predictions |
x.time |
A vector of prediction times of interest (used for plotting predictions): the vector of prediction times t if the window is fixed, otherwise the vector of times t+w |
window |
Prediction window or vector of prediction windows |
pred |
Predictions estimated for each profile |
icproba |
Logical value. Were confidence intervals estimated? |
predLow |
Lower limit of Monte-Carlo confidence interval for each prediction |
predHigh |
Upper limit of Monte-Carlo confidence interval for each prediction |
type |
Type of prediction probability (marginal or conditional) |
group |
For conditional probabilities, the list of groups for which predictions are made |
The following components are included in a 'predJoint' object obtained by using the prediction function for a joint frailty model.
npred |
Number of individual predictions |
x.time |
A vector of prediction times of interest (used for plotting predictions): the vector of prediction times t if the window is fixed, otherwise the vector of times t+w |
window |
Prediction window or vector of prediction windows |
group |
Id of each patient |
pred1 |
Estimation of probability of type 1: exactly j recurrences |
pred2 |
Estimation of probability of type 2: at least j recurrences |
pred3 |
Estimation of probability of type 3 |
pred1_rec |
Estimation of prediction of relapse |
icproba |
Logical value. Were confidence intervals estimated? |
predlow1 |
Lower limit of Monte-Carlo confidence interval for probability of type 1 |
predhigh1 |
Upper limit of Monte-Carlo confidence interval for probability of type 1 |
predlow2 |
Lower limit of Monte-Carlo confidence interval for probability of type 2 |
predhigh2 |
Upper limit of Monte-Carlo confidence interval for probability of type 2 |
predlow3 |
Lower limit of Monte-Carlo confidence interval for probability of type 3 |
predhigh3 |
Upper limit of Monte-Carlo confidence interval for probability of type 3 |
predhigh1_rec |
Upper limit of Monte-Carlo confidence interval for prediction of relapse |
predlow1_rec |
Lower limit of Monte-Carlo confidence interval for prediction of relapse |
The following components are included in a 'predLongi' object obtained by using prediction function for joint models with longitudinal data.
npred |
Number of individual predictions |
x.time |
A vector of prediction times of interest (used for plotting predictions): the vector of prediction times t if the window is fixed, otherwise the vector of times t+w |
window |
Prediction window or vector of prediction windows |
group |
Id of each patient |
pred |
Estimation of probability |
icproba |
Logical value. Were confidence intervals estimated? |
predLow |
Lower limit of Monte-Carlo confidence intervals |
predHigh |
Upper limit of Monte-Carlo confidence intervals |
trivariate |
Logical value. Are the predictions calculated from the trivariate model? |
A. Krol, L. Ferrer, JP. Pignon, C. Proust-Lima, M. Ducreux, O. Bouche, S. Michiels, V. Rondeau (2016). Joint Model for Left-Censored Longitudinal Data, Recurrent Events and Terminal Event: Predictive Abilities of Tumor Burden for Cancer Evolution with Application to the FFCD 2000-05 Trial. Biometrics 72(3) 907-16.
A. Mauguen, B. Rachet, S. Mathoulin-Pelissier, G. MacGrogan, A. Laurent, V. Rondeau (2013). Dynamic prediction of risk of death using history of cancer recurrences in joint frailty models. Statistics in Medicine, 32(30), 5366-80.
V. Rondeau, A. Laurent, A. Mauguen, P. Joly, C. Helmer (2015). Dynamic prediction models for clustered and interval-censored outcomes: investigating the intra-couple correlation in the risk of dementia. Statistical Methods in Medical Research
## Not run: ##################################################### #### prediction on a COX or SHARED frailty model #### ##################################################### data(readmission) #-- here is a generated cluster (31 clusters of 13 subjects) readmission <- transform(readmission,group=id%%31+1) #-- we compute predictions of death #-- we extract last row of each subject for the time of death readmission <- aggregate(readmission,by=list(readmission$id), FUN=function(x){x[length(x)]})[,-1] ##-- predictions on a Cox proportional hazard model --## cox <- frailtyPenal(Surv(t.stop,death)~sex+dukes, n.knots=10,kappa=10000,data=readmission) #-- construction of the data frame for predictions datapred <- data.frame(sex=0,dukes=0) datapred$sex <- as.factor(datapred$sex) levels(datapred$sex)<- c(1,2) datapred$dukes <- as.factor(datapred$dukes) levels(datapred$dukes)<- c(1,2,3) datapred[1,] <- c(1,2) # man, dukes 2 datapred[2,] <- c(2,3) # woman, dukes 3 #-- prediction of death for two patients between 100 and 100+w, #-- with w in (50,100,...,1900) pred.cox <- prediction(cox,datapred,t=100,window=seq(50,1900,50)) plot(pred.cox) #-- prediction of death for two patients between t and t+400, #-- with t in (100,150,...,1500) pred.cox2 <- prediction(cox,datapred,t=seq(100,1500,50),window=400) plot(pred.cox2) ##-- predictions on a shared frailty model for clustered data --## sha <- frailtyPenal(Surv(t.stop,death)~cluster(group)+sex+dukes, n.knots=10,kappa=10000,data=readmission) #-- marginal prediction # a group must be specified but it does not influence the results # in the marginal predictions setting datapred$group[1:2] <- 1 pred.sha.marg <- prediction(sha,datapred,t=100,window=seq(50,1900,50)) plot(pred.sha.marg) #-- conditional prediction, given a specific cluster (group=5) datapred$group[1:2] <- 5 pred.sha.cond <- prediction(sha,datapred,t=100,window=seq(50,1900,50), conditional = TRUE) plot(pred.sha.cond) ##-- marginal prediction of a recurrent event, on a shared frailty model data(readmission) datapred <- data.frame(t.stop=0,event=0,id=0,sex=0,dukes=0) datapred$sex <- as.factor(datapred$sex) levels(datapred$sex)<- c(1,2) datapred$dukes <- as.factor(datapred$dukes) levels(datapred$dukes)<- c(1,2,3) datapred[1,] <- c(100,1,1,1,2) #man, dukes 2, 3 recurrent events datapred[2,] <- c(200,1,1,1,2) datapred[3,] <- c(300,1,1,1,2) datapred[4,] <- c(350,0,2,1,2) #man, dukes 2 0 recurrent event #-- Shared frailty model with gamma distribution sha <- frailtyPenal(Surv(t.stop,event)~cluster(id)+sex+dukes,n.knots=10, kappa=10000,data=readmission) pred.sha.rec.marg <- prediction(sha,datapred,t=200,window=seq(50,1900,50), event='Recurrent',MC.sample=100) plot(pred.sha.rec.marg,conf.bands=TRUE) ##-- conditional prediction of a recurrent event, on a shared frailty model pred.sha.rec.cond <- prediction(sha,datapred,t=200,window=seq(50,1900,50), event='Recurrent',conditional = TRUE,MC.sample=100) plot(pred.sha.rec.cond,conf.bands=TRUE) ##################################################### ######## prediction on a JOINT frailty model ######## ##################################################### data(readmission) ##-- predictions of death on a joint model --## joi <- frailtyPenal(Surv(t.start,t.stop,event)~cluster(id) +sex+dukes+terminal(death),formula.terminalEvent=~sex +dukes,data=readmission,n.knots=10,kappa=c(100,100),recurrentAG=TRUE) #-- construction of the data frame for predictions datapredj <- data.frame(t.stop=0,event=0,id=0,sex=0,dukes=0) datapredj$sex <- as.factor(datapredj$sex) 
levels(datapredj$sex) <- c(1,2) datapredj$dukes <- as.factor(datapredj$dukes) levels(datapredj$dukes) <- c(1,2,3) datapredj[1,] <- c(100,1,1,1,2) datapredj[2,] <- c(200,1,1,1,2) datapredj[3,] <- c(300,1,1,1,2) datapredj[4,] <- c(400,1,1,1,2) datapredj[5,] <- c(380,1,2,1,2) #-- prediction of death between 100 and 100+500 given relapses pred.joint0 <- prediction(joi,datapredj,t=100,window=500,event = "Terminal") print(pred.joint0) #-- prediction of death between 100 and 100+w given relapses # (with confidence intervals) pred.joint <- prediction(joi,datapredj,t=100,window=seq(50,1500,50), event = "Terminal",MC.sample=100) plot(pred.joint,conf.bands=TRUE) # each y-value of the plot corresponds to the prediction between [100,x] #-- prediction of death between t and t+500 given relapses pred.joint2 <- prediction(joi,datapredj,t=seq(100,1000,50), window=500,event = "Terminal") plot(pred.joint2) # each y-value of the plot corresponds to the prediction between [x,x+500], #or in the next 500 #-- prediction of relapse between 100 and 100+w given relapses # (with confidence intervals) pred.joint <- prediction(joi,datapredj,t=100,window=seq(50,1500,50), event = "Recurrent",MC.sample=100) plot(pred.joint,conf.bands=TRUE) # each y-value of the plot corresponds to the prediction between [100,x] #-- prediction of relapse and death between 100 and 100+w given relapses # (with confidence intervals) pred.joint <- prediction(joi,datapredj,t=100,window=seq(50,1500,50), event = "Both",MC.sample=100) plot(pred.joint,conf.bands=TRUE) # each y-value of the plot corresponds to the prediction between [100,x] ############################################################################# ### prediction on a JOINT model for longitudinal data and a terminal event #### ############################################################################# data(colorectal) data(colorectalLongi) # Survival data preparation - only terminal events colorectalSurv <- subset(colorectal, new.lesions == 0) #-- construction of the data-frame for predictions #-- biomarker observations datapredj_longi <- data.frame(id = 0, year = 0, tumor.size = 0, treatment = 0, age = 0, who.PS = 0, prev.resection = 0) datapredj_longi$treatment <- as.factor(datapredj_longi$treatment) levels(datapredj_longi$treatment) <- 1:2 datapredj_longi$age <- as.factor(datapredj_longi$age) levels(datapredj_longi$age) <- 1:3 datapredj_longi$who.PS <- as.factor(datapredj_longi$who.PS) levels(datapredj_longi$who.PS) <- 1:3 datapredj_longi$prev.resection <- as.factor(datapredj_longi$prev.resection) levels(datapredj_longi$prev.resection) <- 1:2 # patient 1: increasing tumor size datapredj_longi[1,] <- c(1, 0,1.2 ,2,1,1,1) datapredj_longi[2,] <- c(1,0.3,1.4,2,1,1,1) datapredj_longi[3,] <- c(1,0.6,1.9,2,1,1,1) datapredj_longi[4,] <- c(1,0.9,2.5,2,1,1,1) datapredj_longi[5,] <- c(1,1.5,3.9,2,1,1,1) # patient 2: decreasing tumor size datapredj_longi[6,] <- c(2, 0,1.2 ,2,1,1,1) datapredj_longi[7,] <- c(2,0.3,0.7,2,1,1,1) datapredj_longi[8,] <- c(2,0.5,0.3,2,1,1,1) datapredj_longi[9,] <- c(2,0.7,0.1,2,1,1,1) datapredj_longi[10,] <- c(2,0.9,0.1,2,1,1,1) #-- terminal event datapredj <- data.frame(id = 0, treatment = 0, age = 0, who.PS = 0, prev.resection = 0) datapredj$treatment <- as.factor(datapredj$treatment) levels(datapredj$treatment) <- 1:2 datapredj$age <- as.factor(datapredj$age) levels(datapredj$age) <- 1:3 datapredj$who.PS <- as.factor(datapredj$who.PS) datapredj$prev.resection <- as.factor(datapredj$prev.resection) levels(datapredj$prev.resection) <- 1:2 
levels(datapredj$who.PS) <- 1:3 datapredj[1,] <- c(1,2,1,1,1) datapredj[2,] <- c(2,2,1,1,1) model.spli.CL <- longiPenal(Surv(time1, state) ~ age + treatment + who.PS + prev.resection, tumor.size ~ year * treatment + age + who.PS , colorectalSurv, data.Longi = colorectalLongi, random = c("1", "year"), id = "id", link = "Current-level", left.censoring = -3.33, n.knots = 6, kappa = 1) #-- prediction of death between 1 year and 1+2 given history of the biomarker pred.jointLongi0 <- prediction(model.spli.CL, datapredj, datapredj_longi, t = 1, window = 2) print(pred.jointLongi0) #-- prediction of death between 1 year and 1+w given history of the biomarker pred.jointLongi <- prediction(model.spli.CL, datapredj, datapredj_longi, t = 1, window = seq(0.5, 2.5, 0.2), MC.sample = 100) plot(pred.jointLongi, conf.bands = TRUE) # each y-value of the plot corresponds to the prediction between [1,x] #-- prediction of death between t and t+0.5 given history of the biomarker pred.jointLongi2 <- prediction(model.spli.CL, datapredj, datapredj_longi, t = seq(1, 2.5, 0.5), window = 0.5, MC.sample = 100) plot(pred.jointLongi2, conf.bands = TRUE) # each y-value of the plot corresponds to the prediction between [x,x+0.5], #or in the next 0.5 ############################################################################# ##### marginal prediction on a JOINT NESTED model for a terminal event ###### ############################################################################# #*--Warning! You can compute this prediction method with ONLY ONE family #*--by dataset of prediction. #*--Please make sure your data frame contains a column for individuals AND a #*--column for the reference number of the family chosen. data(readmission) readmissionNested <- transform(readmission,group=id%%30+1) #-- construction of the data frame for predictions : #-- family 5 was selected for the prediction DataPred <- readmissionNested[which(readmissionNested$group==5),] #-- Fitting the model modJointNested_Splines <- frailtyPenal(formula = Surv(t.start, t.stop, event)~subcluster(id)+ cluster(group) + dukes + terminal(death),formula.terminalEvent =~dukes, data = readmissionNested, recurrentAG = TRUE,n.knots = 8, kappa = c(9.55e+9, 1.41e+12), initialize = TRUE) #-- Compute prediction over the individuals 274 and 4 of the family 5 predRead <- prediction(modJointNested_Splines, data=DataPred,t=500, window=seq(100,1500,200), conditional=FALSE, individual = c(274, 4)) ######################################################################### ##### prediction on TRIVARIATE JOINT model (linear and non-linear) ###### ######################################################################### data(colorectal) data(colorectalLongi) #-- construction of the data frame for predictions #-- history of recurrences and terminal event datapredj <- data.frame(time0 = 0, time1 = 0, new.lesions = 0, id = 0, treatment = 0, age = 0, who.PS = 0, prev.resection =0) datapredj$treatment <- as.factor(datapredj$treatment) levels(datapredj$treatment) <- 1:2 datapredj$age <- as.factor(datapredj$age) levels(datapredj$age) <- 1:3 datapredj$who.PS <- as.factor(datapredj$who.PS) levels(datapredj$who.PS) <- 1:3 datapredj$prev.resection <- as.factor(datapredj$prev.resection) levels(datapredj$prev.resection) <- 1:2 datapredj[1,] <- c(0,0.4,1,1,2,1,1,1) datapredj[2,] <- c(0.4,1.2,1,1,2,1,1,1) datapredj[3,] <- c(0,0.5,1,2,2,1,1,1) # Linear trivariate joint model # (computation takes around 40 minutes) model.trivPenal <-trivPenal(Surv(time0, time1, new.lesions) ~ cluster(id) + age + 
treatment + who.PS + terminal(state), formula.terminalEvent =~ age + treatment + who.PS + prev.resection, tumor.size ~ year * treatment + age + who.PS, data = colorectal, data.Longi = colorectalLongi, random = c("1", "year"), id = "id", link = "Random-effects", left.censoring = -3.33, recurrentAG = TRUE, n.knots = 6, kappa=c(0.01, 2), method.GH="Pseudo-adaptive", n.nodes=7, init.B = c(-0.07, -0.13, -0.16, -0.17, 0.42, #recurrent events covarates -0.23, -0.1, -0.09, -0.12, 0.8, -0.23, #terminal event covariates 3.02, -0.30, 0.05, -0.63, -0.02, -0.29, 0.11, 0.74)) #biomarker covariates #-- prediction of death between 1 year and 1+2 pred.jointTri0 <- prediction(model.trivPenal, datapredj, datapredj_longi, t = 1, window = 2) print(pred.jointTri0) #-- prediction of death between 1 year and 1+w pred.jointTri <- prediction(model.trivPenal, datapredj, datapredj_longi, t = 1, window = seq(0.5, 2.5, 0.2), MC.sample = 100) plot(pred.jointTri, conf.bands = TRUE) #-- prediction of death between t and t+0.5 pred.jointTri2 <- prediction(model.trivPenal, datapredj, datapredj_longi, t = seq(1, 2.5, 0.5), window = 0.5, MC.sample = 100) plot(pred.jointTri2, conf.bands = TRUE) ############################### # No information on dose - creation of a dummy variable colorectalLongi$dose <- 1 # (computation can take around 40 minutes) model.trivPenalNL <- trivPenalNL(Surv(time0, time1, new.lesions) ~ cluster(id) + age + treatment + terminal(state), formula.terminalEvent =~ age + treatment, biomarker = "tumor.size", formula.KG ~ 1, formula.KD ~ treatment, dose = "dose", time.biomarker = "year", data = colorectal, data.Longi =colorectalLongi, random = c("y0", "KG"), id = "id", init.B = c(-0.22, -0.16, -0.35, -0.19, 0.04, -0.41, 0.23), init.Alpha = 1.86, init.Eta = c(0.5, 0.57, 0.5, 2.34), init.Biomarker = c(1.24, 0.81, 1.07, -1.53), recurrentAG = TRUE, n.knots = 5, kappa = c(0.01, 2), method.GH = "Pseudo-adaptive") #-- prediction of death between 1 year and 1+2 pred.jointTriNL0 <- prediction(model.trivPenalNL, datapredj, datapredj_longi, t = 1, window = 2) print(pred.jointTriNL0) #-- prediction of death between 1 year and 1+w pred.jointTriNL <- prediction(model.trivPenalNL, datapredj, datapredj_longi, t = 1, window = seq(0.5, 2.5, 0.2), MC.sample = 100) plot(pred.jointTriNL, conf.bands = TRUE) #-- prediction of death between t and t+0.5 pred.jointTriNL2 <- prediction(model.trivPenalNL, datapredj, datapredj_longi, t = seq(2, 3, 0.2), window = 0.5, MC.sample = 100) plot(pred.jointTriNL2, conf.bands = TRUE) ## End(Not run)
Prints a short summary of the parameter estimates of an additive frailty model or more generally of an 'additivePenal' object
## S3 method for class 'additivePenal' print(x, digits = max(options()$digits - 4, 6), ...)
x | the result of a call to the additivePenal function |
digits | number of digits to print |
... | other unused arguments |
Print the parameter estimates of the survival or hazard functions.
Print a short summary of the concordance measures estimated by the Cmeasures function.
## S3 method for class 'Cmeasures' print(x, ...)
x | a Cmeasures object. |
... | other unused arguments. |
Print the estimated concordance measures.
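A minimal sketch of computing and printing concordance measures; the model formula and smoothing parameters below are illustrative and reuse the shared frailty specification shown in the prediction examples:
data(readmission)
sha <- frailtyPenal(Surv(t.stop,event)~cluster(id)+sex+dukes,
                    n.knots=10, kappa=10000, data=readmission)
conc <- Cmeasures(sha)   # concordance measures for the shared frailty fit
print(conc)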
Prints a short summary of parameter estimates of a 'frailtyPenal' object
## S3 method for class 'frailtyPenal' print(x, digits = max(options()$digits - 4, 6), ...)
x | the result of a call to the frailtyPenal function. |
digits | number of digits to print. |
... | other unused arguments. |
Print the parameter estimates of the survival or hazard functions.
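A minimal sketch of the print method, reusing the shared frailty fit from the summary.frailtyPenal example on the kidney data; the number of digits is illustrative:
data(kidney)
modSha <- frailtyPenal(Surv(time,status)~age+sex+cluster(id),
                       n.knots=8, kappa=10000, data=kidney, hazard="Splines")
print(modSha, digits = 4)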
Prints a short summary of parameter estimates of a joint nested frailty model, or more generally an object of class 'jointNestedPenal' for joint nested frailty models.
## S3 method for class 'jointNestedPenal' print(x, digits = max(options()$digits - 4, 6), ...)
x | the result of a call to the jointNestedPenal function |
digits | number of digits to print |
... | other unused arguments |
Print, separately for each type of event (recurrent and terminal), the parameter estimates of the survival or hazard functions.
Prints a short summary of parameter estimates of a joint frailty model, or more generally an object of class 'frailtyPenal' for joint frailty models.
## S3 method for class 'jointPenal' print(x, digits = max(options()$digits - 4, 6), ...)
x | the result of a call to the jointPenal function |
digits | number of digits to print |
... | other unused arguments |
Print, separately for each type of event (recurrent and terminal), the parameter estimates of the survival or hazard functions.
Prints a short summary of parameter estimates of a joint competing risks model or more generally an object of class 'jointRecCompet'.
## S3 method for class 'jointRecCompet' print(x, digits = max(options()$digits - 4, 6), ...)
x | the result of a call to the jointRecCompet function |
digits | number of digits to print |
... | other unused arguments |
Print, separately for each type of event (Recurrent, Terminal1 and Terminal2), the parameter estimates of the survival or hazard functions.
This function returns the estimates of the coefficients and their standard errors with the p-values of the Wald test for the joint surrogate model, as well as the hazard ratios (HR) and their confidence intervals for the fixed treatment effects, and finally an estimate of the surrogacy evaluation criteria (Kendall's τ and R²trial).
## S3 method for class 'jointSurroPenal' print(x, d = 4, len = 3, nb.gh = 32, ...)
x | an object inheriting from the jointSurroPenal class. |
d | the desired number of digits after the decimal point for parameters (at most 4 digits for the estimates). Default is 4. |
len | the desired number of digits after the decimal point for p-values and convergence criteria. Default is 3. |
nb.gh | number of nodes for the Gaussian-Hermite quadrature. Default is 32. |
... | other unused arguments. |
For the variance parameters of the random effects, it prints the estimates of the coefficients with their standard errors, Z-statistics and p-values of the Wald test. For the fixed treatment effects, it also prints the HR and its confidence interval for each covariate. For the surrogacy evaluation criteria, it prints the estimated Kendall's τ with its 95% confidence interval obtained by parametric bootstrap or by the Delta-method, the estimated R²trial (R2trial) with its standard error and its 95% confidence interval obtained by the Delta-method (Dowd et al., 2014), and R²trial (R2.boot) with its 95% confidence interval obtained by parametric bootstrap. Note that, with the bootstrap, the standard error of the point estimate is not available. A classification of R²trial is proposed, following the suggestion of the Institute for Quality and Efficiency in Health Care (Prasad et al., 2015). The surrogate threshold effect (ste) is also displayed, together with the associated hazard risk. The remaining parameters concern the convergence characteristics and include the penalized marginal log-likelihood, the number of iterations, the LCV and the convergence criteria.
Casimir Ledoux Sofeu and Virginie Rondeau
Dowd BE, Greene WH, Norton EC (2014). "Computation of Standard Errors." Health Services Research, 49(2), 731-750.
Prasad V, Kim C, Burotto M, Vandross A (2015). "The strength of association between surrogate end points and survival in oncology: A systematic review of trial-level meta-analyses." JAMA Internal Medicine, 175(8), 1389-1398.
jointSurroPenal, jointSurroCopPenal, jointSurroTKendall
## Not run: ###---Data generation---### data.sim <-jointSurrSimul(n.obs=400, n.trial = 20,cens.adm=549, alpha = 1.5, theta = 3.5, gamma = 2.5, zeta = 1, sigma.s = 0.7, sigma.t = 0.7, cor = 0.8, betas = -1.25, betat = -1.25, full.data = 0, random.generator = 1, seed = 0, nb.reject.data = 0) ###---Estimation---### joint.surrogate <- jointSurroPenal(data = data.sim, nb.mc = 300, nb.gh = 20, indicator.alpha = 1, n.knots = 6) print(joint.surrogate) # or joint.surrogate ## End(Not run)
Prints a short summary of parameter estimates of a joint model for longitudinal data and a terminal event, an object inheriting from class 'longiPenal'. If a mediation analysis was performed (option mediation set to TRUE in longiPenal), this function also displays estimates of the related quantities.
## S3 method for class 'longiPenal' print(x, digits = max(options()$digits - 4, 6), ...)
x | an object inheriting from class longiPenal |
digits | number of digits to print |
... | other unused arguments |
Print, separately for each part of the model (longitudinal and terminal), the parameter estimates and details on the estimation. If a mediation analysis was performed, its results are printed in a separate part.
Prints a short summary of parameter estimates of a multivariate frailty model, or more generally an object of class 'multivPenal'.
## S3 method for class 'multivPenal' print(x, digits = max(options()$digits - 4, 6), ...)
x | the result of a call to the multivPenal function |
digits | number of digits to print |
... | other unused arguments |
Print, separately for each type of event (recurrent1, recurrent2 and terminal), the parameter estimates of the survival or hazard functions.
Prints a short summary of parameter estimates of a nested frailty model
## S3 method for class 'nestedPenal' print(x, digits = max(options()$digits - 4, 6), ...)
x | the result of a call to the frailtyPenal function for nested frailty models |
digits | number of digits to print |
... | other unused arguments |
n | the number of observations used in the fit. |
n.groups | the maximum number of groups used in the fit. |
n.events | the number of events observed in the fit. |
eta | variance of the subcluster effect. |
theta | variance of the cluster effect. |
coef | the coefficients of the linear predictor, which multiply the columns of the model matrix. |
SE(H) | the standard error of the estimates deduced from the variance matrix of theta and of the coefficients. |
SE(HIH) | the standard error of the estimates deduced from the robust estimation of the variance matrix of theta and of the coefficients. |
p | p-value |
Print a short summary of the results of the prediction function.
## S3 method for class 'predFrailty' print(x, digits = 3, ...) ## S3 method for class 'predJoint' print(x, digits = 3, ...) ## S3 method for class 'predLongi' print(x, digits = 3, ...)
x | an object returned by the prediction function, inheriting from class predFrailty, predJoint or predLongi |
digits | number of digits to print |
... | other unused arguments |
Print the estimated probabilities.
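A minimal sketch, condensing the Cox prediction example given earlier; the resulting object is of class predFrailty and is displayed with the print method documented here:
data(readmission)
#-- keep each subject's last row so that 'death' is the terminal status
readm <- aggregate(readmission, by=list(readmission$id),
                   FUN=function(x){x[length(x)]})[,-1]
cox <- frailtyPenal(Surv(t.stop,death)~sex+dukes, n.knots=10, kappa=10000, data=readm)
datapred <- data.frame(sex=factor(1,levels=1:2), dukes=factor(2,levels=1:3))
pred <- prediction(cox, datapred, t=100, window=seq(50,500,50))
print(pred, digits = 3)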
Prints a short summary of parameter estimates of a joint model for longitudinal data, recurrent events and a terminal event, an object inheriting from class 'trivPenal'.
## S3 method for class 'trivPenal' print(x, digits = max(options()$digits - 4, 6), ...)
x | an object inheriting from class trivPenal |
digits | number of digits to print |
... | other unused arguments |
Print, separately for each part of the model (longitudinal, recurrent and terminal) the parameter estimates and details on the estimation.
Prints a short summary of parameter estimates of a non-linear trivariate joint model for longitudinal data, recurrent events and a terminal event, an object inheriting from class 'trivPenalNL'.
## S3 method for class 'trivPenalNL' print(x, digits = max(options()$digits - 4, 6), ...)
x | an object inheriting from class trivPenalNL |
digits | number of digits to print |
... | other unused arguments |
Print, separately for each part of the model (biomarker growth, biomarker decline, recurrent events and terminal event) the parameter estimates and details on the estimation.
This dataset contains rehospitalization times after surgery in patients diagnosed with colorectal cancer.
data(readmission)
This data frame contains the following columns:
identification of each subject. Repeated for each recurrence
which readmission
start of interval (0 or previous recurrence time)
recurrence or censoring time
interoccurrence or censoring time
rehospitalization status. All events are coded 1 for each subject except the last one, which is coded 0
Did the patient receive chemotherapy? 1: No; 2: Yes
gender: 1: Males; 2: Females
Dukes' tumoral stage: 1: A-B; 2: C; 3: D
Charlson comorbidity index (time-dependent covariate). 0: index 0; 1: index 1-2; 3: index >= 3
death indicator. 1: dead; 0: alive
Gonzalez, JR., Fernandez, E., Moreno, V., Ribes, J., Peris, M., Navarro, M., Cambray, M. and Borras, JM (2005). Sex differences in hospital readmission among colorectal cancer patients. Journal of Epidemiology and Community Health, 59, 6, 506-511.
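A short inspection sketch for the readmission data (column names as used in the examples of this documentation):
data(readmission)
str(readmission)
#-- the last row of each subject carries the death status
last <- readmission[!duplicated(readmission$id, fromLast = TRUE), ]
table(last$death)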
This dataset contains an extract of 500 randomly selected patients from the randomized, double-blind, placebo-controlled REDUCE trial in critically ill patients admitted to the ICU. This trial investigated whether prophylactic haloperidol (1 or 2 mg) improved 28-day survival compared to placebo. Recurrent episodes of delirium are recorded, and patients can be censored by death or discharge from the ICU.
data(reduce)
This data frame contains the following columns:
Identification number of a patient
Start time of the interval (0 or time of last recurrence)
Stop time of the interval, either delirium recurrence time or censoring time.
Delirium status
Death status
Discharge status
Treatment indicator, 1 if patient was randomized to receive 2mg of Haloperidol, 0 for control
Van Den Boogaard, M., Slooter, A. J., Bruggemann, R. J., Schoonhoven, L., Beishuizen, A., Vermeijden, J. W., et al. (2018). Effect of haloperidol on survival among critically ill adults with a high risk of delirium: the REDUCE randomized clinical trial. Jama, 319(7), 680-690.
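A short inspection sketch for the reduce data:
data(reduce)
str(reduce)    # column names and types for the 500 sampled patients
head(reduce)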
This function loads the shiny package and runs the application for modelling and prediction with several frailty models available in the frailtypack package.
runShiny()
No value returned.
## Not run: runShiny() ## End(Not run)
Generates data under a joint frailty model for a single recurrent event and two terminal events in a calendar-time format. Only a single covariate representing the treatment is allowed. Event times are generated under Weibull distributions.
simulatejointRecCompet(n, censoring = 28, maxrecurrent = 50, par0 = c(shapeR = 1.5, scaleR = 10, shapeM = 1.75, scaleM = 16, shapeD = 1.75, scaleD = 16, sigma = 0.5, alphaM = 1, alphaD = 0, betaR = -0.5, betaM = -0.5, betaD = 0))
n | number of subjects. Default is 1500. |
censoring | a number indicating a fixed right-censoring time applied to all subjects (administrative censoring). Default is 28. |
maxrecurrent | maximum number of recurrent events per subject. Default is 50. |
par0 | a vector of arguments controlling the parameters of the generating model. |
Returns a data.frame object with the following columns (a short usage sketch follows the list):
id | Id number for each subject |
treatment | Binary treatment indicator |
tstart | Start time of the observation period |
tstop | Stop time of the observation period |
recurrent | Censoring indicator for the recurrent event |
terminal1 | Censoring indicator for the first terminal event |
terminal2 | Censoring indicator for the second terminal event |
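A brief usage sketch; the seed and the subject-level summaries are illustrative, and the column names are those listed above:
set.seed(1)
simdat <- simulatejointRecCompet(n = 500)   # default censoring = 28 and default par0
head(simdat)
#-- number of subjects experiencing each terminal event
table(terminal1 = tapply(simdat$terminal1, simdat$id, max),
      terminal2 = tapply(simdat$terminal2, simdat$id, max))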
This is a special function used in the context of survival additive models. It identifies the variable that is in interaction with the random slope; generally, this is the treatment variable. Using slope() in a formula implies that an additive frailty model is fitted.
slope(x)
x | A factor, a character or a numerical variable |
x | The variable in interaction with the random slope |
It is necessary to specify which variable is in interaction with the random slope, even if only one explanatory variable is included in the model.
data(dataAdditive) ##-- Additive with one covariate --## modAdd1cov <- additivePenal(Surv(t1,t2,event)~cluster(group)+var1+ slope(var1),data=dataAdditive,n.knots=8,kappa=10000,hazard="Splines") ##-- Additive with two covariates --## set.seed(1234) dataAdditive$var2 <- rbinom(nrow(dataAdditive),1,0.5) modAdd2cov <- additivePenal(Surv(t1,t2,event)~cluster(group)+var1+ var2+slope(var1),data=dataAdditive,n.knots=8,kappa=10000, hazard="Splines") ##-- Additive with 2 covariates and stratification --## dataAdditive$var2 <- rbinom(nrow(dataAdditive),1,0.5) modAddstrat <- additivePenal(Surv(t1,t2,event)~cluster(group)+ strata(var2)+var1+slope(var1),data=dataAdditive,n.knots=8, kappa=c(10000,10000),hazard="Splines")
This function computes the surrogate threshold effect (STE) from the one-step joint frailty model or the joint frailty-copula model. The STE is defined as the minimum treatment effect on the surrogate endpoint that is necessary to predict a non-zero effect on the true endpoint (Burzykowski et al., 2006).
ste(object, var.used = "error.estim", alpha. = 0.05, pred.int.use = "up")
object | an object returned by jointSurroPenal or jointSurroCopPenal |
var.used | this argument takes two values: "error.estim" (the default, corresponding to the assumption of errors on the estimates) or "No.error" |
alpha. | the confidence level for the prediction interval. Default is 0.05 |
pred.int.use | a character string that indicates which bound of the prediction interval is used to compute the STE. Default is "up", the upper bound |
The STE is obtained by solving the equation l(α₀) = 0 (resp. u(α₀) = 0), where α₀ represents the corresponding STE and l(α₀) (resp. u(α₀)) is the lower (resp. upper) bound of the prediction interval of the treatment effect on the true endpoint (β + b₀). This prediction interval is computed from the set of estimates of the fixed effects and of the variance-covariance parameters of the random effects obtained from the joint surrogate model (Sofeu et al., 2019). If the previous equation gives two solutions, the STE can be the minimum (resp. the maximum) of them, or both, depending on the shape of the function. If the concavity of the function is turned upwards, the STE is the first value and the second value represents the maximum (resp. the minimum) treatment effect observable on the surrogate that still predicts a non-zero treatment effect on the true endpoint. If the concavity of the function is turned downwards, both solutions represent the STE, and the interpretation is that the accepted values of the treatment effect on S predict a non-zero treatment effect on T. Given that negative values of the treatment effect indicate a reduction of the risk of failure and are considered beneficial, it is recommended to compute the STE from the upper prediction limit u(α₀).
The details on the computation of STE are described in Burzykowski et al. (2006).
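A hedged reconstruction of the prediction-interval bounds l(α₀) and u(α₀), assuming the standard formulation of Burzykowski et al. (2006), with ϑ̂ the set of estimated fixed effects and variance-covariance parameters and z_{1-γ/2} the standard normal quantile (the exact expression used by the package may differ):
l(\alpha_0),\; u(\alpha_0) \;=\; \mathrm{E}\big(\beta + b_0 \mid \alpha_0, \hat{\vartheta}\big) \;\mp\; z_{1-\gamma/2}\,\sqrt{\mathrm{Var}\big(\beta + b_0 \mid \alpha_0, \hat{\vartheta}\big)}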
Returns and displays the STE.
Casimir Ledoux Sofeu and Virginie Rondeau
Burzykowski T, Buyse M (2006). "Surrogate threshold effect: an alternative measure for meta-analytic surrogate endpoint validation." Pharmaceutical Statistics, 5(3), 173-186. ISSN 1539-1612.
Sofeu, C. L., Emura, T., and Rondeau, V. (2019). One-step validation method for surrogate endpoints using data from multiple randomized cancer clinical trials with failure-time endpoints. Statistics in Medicine 38, 2928-2942.
Sofeu, C. L. and Rondeau, V. (2020). How to use frailtypack for validating failure-time surrogate endpoints using individual patient data from meta-analyses of randomized controlled trials. PLOS ONE; 15, 1-25.
jointSurroPenal, jointSurroCopPenal, predict
###--- Joint surrogate model ---### ###---evaluation of surrogate endpoints---### data(dataOvarian) joint.surro.ovar <- jointSurroPenal(data = dataOvarian, n.knots = 8, init.kappa = c(2000,1000), indicator.alpha = 0, nb.mc = 200, scale = 1/365) # ======STE===== # Assuming errors on the estimates ste(joint.surro.ovar, var.used = "error.estim") # Assuming no errors on the estimates ste(joint.surro.ovar, var.used = "No.error", pred.int.use = "up")
This is a special function used in the context of survival nested or joint nested models. It identifies correlated subgroups of observations within the groups defined by the 'cluster' function from the 'survival' package, and is used on the right-hand side of a 'frailtyPenal' formula for fitting a nested or joint nested model. Using subcluster() in a formula implies that a nested or a joint nested frailty model is estimated.
subcluster(x)
x | A character, factor, or numeric variable that identifies the subgroup |
x | A variable identified as a subcluster |
## Not run: data(dataNested) modClu <- frailtyPenal(Surv(t1,t2,event)~cluster(group)+ subcluster(subgroup)+cov1+cov2,data=dataNested, n.knots=8,kappa=c(50000,50000),hazard="Splines") print(modClu) #-- here is generated cluster (30 clusters) readmissionNested <- transform(readmission,group=id%%30+1) modJointNested_Splines <- frailtyPenal(formula = Surv(t.start, t.stop, event) ~ subcluster(id) + cluster(group) + dukes + terminal(death), formula.terminalEvent = ~dukes, data = readmissionNested, recurrentAG = TRUE, n.knots = 8, kappa = c(9.55e+9, 1.41e+12), initialize = TRUE) ## End(Not run)
This function returns hazard ratios (HR) and their confidence intervals.
## S3 method for class 'additivePenal' summary(object, level = 0.95, len = 6, d = 2, lab="hr", ...)
object | output from a call to additivePenal. |
level | confidence level of the confidence interval. Default is 0.95. |
len | the total field width. Default is 6. |
d | the desired number of digits after the decimal point. Default is 2. |
lab | label of printed results. |
... | other unused arguments. |
Prints the HR and its confidence interval for each covariate. The confidence level can be set with the level argument.
## Not run: data(dataAdditive) modAdd <- additivePenal(Surv(t1,t2,event)~cluster(group)+var1+slope(var1), correlation=TRUE,data=dataAdditive,n.knots=8,kappa=862,hazard="Splines") #- 'var1' is boolean as a treatment variable. summary(modAdd) ## End(Not run)
This function returns hazard ratios (HR) and their confidence intervals.
## S3 method for class 'frailtyPenal' summary(object, level = 0.95, len = 6, d = 2, lab="hr", ...)
object | output from a call to frailtyPenal. |
level | confidence level of the confidence interval. Default is 0.95. |
len | the total field width. Default is 6. |
d | the desired number of digits after the decimal point. Default is 2. |
lab | label of printed results. |
... | other unused arguments. |
Prints the HR and its confidence intervals. The confidence level can be set with the level argument.
## Not run: data(kidney) ##-- Shared frailty model --## modSha <- frailtyPenal(Surv(time,status)~age+sex+cluster(id), n.knots=8,kappa=10000,data=kidney,hazard="Splines") ##-- Cox proportional hazard model --## modCox <- frailtyPenal(Surv(time,status)~age+sex, n.knots=8,kappa=10000,data=kidney,hazard="Splines") #-- confidence interval at 95% level (default) summary(modSha) summary(modCox) #-- confidence interval at 99% level summary(modSha,level=0.99) summary(modCox,level=0.99) ## End(Not run)
This function returns hazard ratios (HR) and their confidence intervals.
## S3 method for class 'jointNestedPenal' summary(object, level = 0.95, len = 6, d = 2, lab="hr", ...)
object | output from a call to frailtyPenal for joint nested models |
level | confidence level of the confidence interval. Default is 0.95. |
len | the total field width. Default is 6. |
d | the desired number of digits after the decimal point. Default is 2. |
lab | label of printed results. |
... | other unused arguments. |
Prints the HR and its confidence interval for each covariate. The confidence level can be set with the level argument.
## Not run: #-- here is generated cluster (30 clusters) readmissionNested <- transform(readmission,group=id%%30+1) # Baseline hazard function approximated with splines with calendar-timescale model.spli.AG <- frailtyPenal(formula = Surv(t.start, t.stop, event) ~ subcluster(id) + cluster(group) + dukes + terminal(death), formula.terminalEvent = ~dukes, data = readmissionNested, recurrentAG = TRUE, n.knots = 8, kappa = c(9.55e+9, 1.41e+12), initialize = TRUE) summary(model.spli.AG) ## End(Not run)
This function returns hazard ratios (HR) and their confidence intervals.
## S3 method for class 'jointPenal' summary(object, level = 0.95, len = 6, d = 2, lab="hr", ...)
object | output from a call to frailtyPenal for joint models |
level | confidence level of the confidence interval. Default is 0.95. |
len | the total field width. Default is 6. |
d | the desired number of digits after the decimal point. Default is 2. |
lab | label of printed results. |
... | other unused arguments. |
Prints the HR and its confidence interval for each covariate. The confidence level can be set with the level argument.
## Not run: data(readmission) #-- gap-time modJoint.gap <- frailtyPenal(Surv(time,event)~cluster(id)+sex+dukes+ charlson+terminal(death),formula.terminalEvent=~sex+dukes+charlson, data=readmission,n.knots=14,kappa=c(9.55e+9,1.41e+12)) #-- calendar time modJoint.calendar <- frailtyPenal(Surv(t.start,t.stop,event)~cluster(id)+ sex+dukes+charlson+terminal(death),formula.terminalEvent=~sex+dukes+charlson, data=readmission,n.knots=10,kappa=c(9.55e+9,1.41e+12),recurrentAG=TRUE) #-- It takes around 1 minute to converge summary(modJoint.gap) summary(modJoint.calendar) ## End(Not run)
Prints a short summary of parameter estimates of a joint competing risks model or more generally an object of class 'jointRecCompet'.
## S3 method for class 'jointRecCompet' summary(object, digits = max(options()$digits - 4, 6), ...)
object | the result of a call to the jointRecCompet function |
digits | number of digits to print |
... | other unused arguments |
Print, separately for each type of event (Recurrent, Terminal1 and Terminal2), the parameter estimates of the survival or hazard functions.
This function returns the estimates of the coefficients of the model, their standard errors and the associated p-values of the Wald test for the joint surrogate model, as well as hazard ratios (HR) and their confidence intervals for the fixed treatment effects. It also displays a summary of the surrogacy measure and of the natural direct, indirect and total effects.
## S3 method for class 'jointSurroMed' summary(object,d=4,len=3,n=3,...)
object | an object inheriting from the jointSurroMed class. |
d | the desired number of digits after the decimal point for parameters (at most 4 digits for the estimates). Default is 4. |
len | the desired number of digits after the decimal point for p-values and convergence criteria. Default is 3. |
n | the number of time points to be used in the results of the different functions related to the mediation analysis. Default is 3. |
... | other unused arguments. |
For the variance parameters of the random effects, it prints the estimates of the coefficients with their standard errors, Z-statistics and p-values of the Wald test. For the fixed treatment effects, it also prints the HR and its confidence interval for each covariate. For the surrogacy assessment, it prints n values of the estimated mediation-related functions, as well as the values of the estimated natural direct, indirect and total effects. The remaining displayed information concerns the convergence characteristics and includes the penalized marginal log-likelihood, the number of iterations, the LCV and the convergence criteria.
This function returns the estimates of the coefficients, the hazard ratios (HR) and their confidence intervals for the fixed treatment effects, as well as estimates of the surrogacy evaluation criteria (Kendall's τ, R²trial and STE).
## S3 method for class 'jointSurroPenal' summary(object, d = 4, len = 3, nb.gh = 32, ...)
object | an object inheriting from the jointSurroPenal class. |
d | the desired number of digits after the decimal point for parameters (at most 4 digits for the estimates). Default is 4. |
len | the desired number of digits after the decimal point for p-values and convergence criteria. Default is 3. |
nb.gh | number of nodes for the Gaussian-Hermite quadrature. Default is 32. |
... | other unused arguments. |
For the fixed treatment effects, it also prints the HR and its confidence interval for each covariate. For the surrogacy evaluation criteria, it prints the estimated Kendall's τ with its 95% confidence interval obtained by parametric bootstrap or by the Delta-method, the estimated R²trial (R2trial) with its standard error and its 95% confidence interval obtained by the Delta-method (Dowd et al., 2014), and R²trial (R2.boot) with its 95% confidence interval obtained by parametric bootstrap. Note that, with the bootstrap, the standard error of the point estimate is not available. A classification of R²trial is proposed, following the suggestion of the Institute for Quality and Efficiency in Health Care (Prasad et al., 2015). The surrogate threshold effect (ste) is also displayed, together with the associated hazard risk.
Casimir Ledoux Sofeu and Virginie Rondeau
Dowd BE, Greene WH, Norton EC (2014). "Computation of Standard Errors." Health Services Research, 49(2), 731-750.
Prasad V, Kim C, Burotto M, Vandross A (2015). "The strength of association between surrogate end points and survival in oncology: A systematic review of trial-level meta-analyses." JAMA Internal Medicine, 175(8), 1389-1398.
jointSurroPenal, jointSurroCopPenal, jointSurroTKendall, print.jointSurroPenal
## Not run: ###---Data generation---### data.sim <-jointSurrSimul(n.obs=400, n.trial = 20,cens.adm=549, alpha = 1.5, theta = 3.5, gamma = 2.5, zeta = 1, sigma.s = 0.7, sigma.t = 0.7, cor = 0.8, betas = -1.25, betat = -1.25, full.data = 0, random.generator = 1, seed = 0, nb.reject.data = 0) ###---Estimation---### joint.surrogate <- jointSurroPenal(data = data.sim, nb.mc = 300, nb.gh = 20, indicator.alpha = 1, n.knots = 6) summary(joint.surrogate) ## End(Not run)
This function returns the true value, the mean of the estimates, the empirical standard error, the mean of the estimated standard errors (Mean SE), and the coverage probability for model parameters
## S3 method for class 'jointSurroPenalSimul' summary(object, d = 3, R2boot = 0, displayMSE = 0, printResult = 1, CP = 0, ...)
object | an object inheriting from the jointSurroPenalSimul class. |
d | the desired number of digits after the decimal point. Default is 3. |
R2boot | a binary that specifies whether the confidence interval of R2trial should be obtained by parametric bootstrap (1) or by the Delta-method (0). Default is 0. |
displayMSE | a binary that indicates whether the results include the bias and the mean square errors (MSE) (case 1) or the standard errors with the coverage percentage (case 0). Default is 0. In the event of 1, the results only include the individual-level and trial-level association measurements. |
printResult | a binary that indicates whether the summary of the results should be displayed. Default is 1. |
CP | a binary used in the event of displayMSE = 1. Default is 0. |
... | other unused arguments. |
For each parameter of the joint surrogate model, it prints the true simulation value, the empirical standard error (empirical SE), the mean of the estimated standard errors (Mean SE), and the coverage probability (CP). For Kendall's τ, the 95% confidence interval is obtained by parametric bootstrap (for the joint frailty model) or by the Delta-method (for the joint frailty-copula model). For R²trial (R2trial), the standard error is obtained by the Delta-method and the 95% confidence interval can be obtained directly or by parametric bootstrap. The total number of non-convergence cases with the associated percentage (R: n (%)), the mean number of iterations to reach convergence, and other estimation and simulation parameters are also displayed. A data frame of the simulation results is also returned.
Casimir Ledoux Sofeu and Virginie Rondeau
# Studies simulation ## Not run: # (Computation takes around 45 minutes using a processor including 40 # cores and a read only memory of 378 Go) joint.simul <- jointSurroPenalSimul(nb.dataset = 10, nbSubSimul=600, ntrialSimul=30, LIMparam = 0.001, LIMlogl = 0.001, LIMderiv = 0.001, nb.mc = 200, nb.gh = 20, nb.gh2 = 32, true.init.val = 1, print.iter=F) # results summary(joint.simul, d = 3, R2boot = 1) # bootstrap summary(joint.simul, d = 3, R2boot = 0) # Delta-method ## End(Not run)
This function returns the coefficient estimates with their standard errors and the p-values of the Wald test for the longitudinal outcome, and the hazard ratios (HR) with their confidence intervals for the terminal event. If a mediation analysis was performed (option mediation set to TRUE in longiPenal), this function also displays estimates of the related quantities.
## S3 method for class 'longiPenal' summary(object, level = 0.95, len = 6, d = 2, lab=c("coef","hr"), ...)
object | an object inheriting from class longiPenal. |
level | confidence level of the confidence interval. Default is 0.95. |
len | the total field width for the terminal part. Default is 6. |
d | the desired number of digits after the decimal point. Default is 2. |
lab | labels of printed results for the longitudinal outcome and the terminal event respectively. |
... | other unused arguments. |
For the longitudinal outcome it prints the estimates of the coefficients of the fixed covariates with their standard errors and p-values of the Wald test. For the terminal event it prints the HR and its confidence interval for each covariate. The confidence level can be changed with the level argument.
## Not run:
###--- Joint model for longitudinal data and a terminal event ---###

data(colorectal)
data(colorectalLongi)

# Survival data preparation - only terminal events
colorectalSurv <- subset(colorectal, new.lesions == 0)

# Baseline hazard function approximated with splines
# Random effects as the link function
model.spli.RE <- longiPenal(Surv(time1, state) ~ age + treatment + who.PS
  + prev.resection, tumor.size ~ year * treatment + age + who.PS,
  colorectalSurv, data.Longi = colorectalLongi, random = c("1", "year"),
  id = "id", link = "Random-effects", left.censoring = -3.33,
  n.knots = 7, kappa = 2)

# Weibull baseline hazard function
# Current level of the biomarker as the link function
model.weib.CL <- longiPenal(Surv(time1, state) ~ age + treatment + who.PS
  + prev.resection, tumor.size ~ year * treatment + age + who.PS,
  colorectalSurv, data.Longi = colorectalLongi, random = c("1", "year"),
  id = "id", link = "Current-level", left.censoring = -3.33,
  hazard = "Weibull")

summary(model.spli.RE)
summary(model.weib.CL)

## End(Not run)
This function returns hazard ratios (HR) and their confidence intervals.
## S3 method for class 'multivPenal' summary(object, level = 0.95, len = 6, d = 2, lab = "hr", ...)
object |
output from a call to multivPenal for joint multivariate models |
level |
significance level of confidence interval. Default is 95%. |
len |
the total field width. Default is 6. |
d |
the desired number of digits after the decimal point. The default is 2. |
lab |
label of printed results. |
... |
other unused arguments. |
Prints the HR and its confidence interval for each covariate. The confidence level can be changed with the level argument.
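Since no worked example accompanies this method, here is a minimal sketch of how the arguments above can be combined; modMultiv is a placeholder for a model already fitted with multivPenal (see that function's help page for a complete fitting example).

# 'modMultiv' is assumed to be an existing fit returned by multivPenal()
summary(modMultiv)                       # HR with default 95% confidence intervals
summary(modMultiv, level = 0.99, d = 3)  # 99% confidence intervals, 3 decimal digits
summary(modMultiv, len = 8, lab = "hr")  # wider print field, explicit label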
This function returns hazard ratios (HR) and their confidence intervals for each regression coefficient.
## S3 method for class 'nestedPenal' summary(object, level = 0.95, len = 6, d = 2, lab="hr", ...)
object |
output from a call to nestedPenal. |
level |
significance level of confidence interval. Default is 95%. |
len |
the total field width. Default is 6. |
d |
the desired number of digits after the decimal point. The default is 2. |
lab |
label of printed results. |
... |
other unused arguments. |
Prints the HR and its confidence interval for each regression coefficient. The confidence level can be changed with the level argument.
## Not run:

data(dataNested)
modNested <- frailtyPenal(Surv(t1,t2,event) ~ cluster(group) +
  subcluster(subgroup) + cov1 + cov2, data = dataNested,
  n.knots = 8, kappa = c(50000, 50000), hazard = "Splines")

#- It takes 90 minutes to converge (depends on processor)

summary(modNested)

## End(Not run)
This function returns coefficient estimates and their standard errors with p-values of the Wald test for the longitudinal outcome, and hazard ratios (HR) and their confidence intervals for the terminal event.
## S3 method for class 'trivPenal' summary(object, level = 0.95, len = 6, d = 2, lab=c("coef","hr"), ...)
object |
an object inheriting from the trivPenal class. |
level |
significance level of confidence interval. Default is 95%. |
len |
the total field width for the terminal part. Default is 6. |
d |
the desired number of digits after the decimal point. The default is 2. |
lab |
labels of printed results for the longitudinal outcome and the terminal event respectively. |
... |
other unused arguments. |
For the longitudinal outcome it prints the estimates of the coefficients of the fixed covariates with their standard errors and p-values of the Wald test. For the terminal event it prints the HR and its confidence interval for each covariate. The confidence level can be changed with the level argument.
## Not run:
###--- Trivariate joint model for longitudinal data, ---###
###--- recurrent events and a terminal event ---###

data(colorectal)
data(colorectalLongi)

# Weibull baseline hazard function
# Random effects as the link function, Gap timescale
# (computation takes around 30 minutes)
model.weib.RE.gap <- trivPenal(Surv(gap.time, new.lesions) ~ cluster(id)
  + age + treatment + who.PS + prev.resection + terminal(state),
  formula.terminalEvent =~ age + treatment + who.PS + prev.resection,
  tumor.size ~ year * treatment + age + who.PS, data = colorectal,
  data.Longi = colorectalLongi, random = c("1", "year"), id = "id",
  link = "Random-effects", left.censoring = -3.33, recurrentAG = FALSE,
  hazard = "Weibull", method.GH = "Pseudo-adaptive", n.nodes = 7)

summary(model.weib.RE.gap)

## End(Not run)
This function returns coefficient estimates and their standard errors with p-values of the Wald test for the biomarker growth (KG) and decline (KD), and hazard ratios (HR) and their confidence intervals for the terminal event.
## S3 method for class 'trivPenalNL' summary(object, level = 0.95, len = 6, d = 2, lab=c("coef","hr"), ...)
object |
an object inheriting from the trivPenalNL class. |
level |
significance level of confidence interval. Default is 95%. |
len |
the total field width for the terminal part. Default is 6. |
d |
the desired number of digits after the decimal point. The default is 2. |
lab |
labels of printed results for the longitudinal outcome and the terminal event respectively. |
... |
other unused arguments. |
For the longitudinal outcome it prints the estimates of the coefficients of the fixed covariates with their standard errors and p-values of the Wald test (separately for the biomarker growth and decline). For the terminal event it prints the HR and its confidence interval for each covariate. The confidence level can be changed with the level argument.
## Not run:
###--- Trivariate joint model for longitudinal data, ---###
###--- recurrent events and a terminal event ---###

data(colorectal)
data(colorectalLongi)

# Weibull baseline hazard function
# Random effects as the link function, Gap timescale
# (computation takes around 30 minutes)
model.weib.RE.gap <- trivPenal(Surv(gap.time, new.lesions) ~ cluster(id)
  + age + treatment + who.PS + prev.resection + terminal(state),
  formula.terminalEvent =~ age + treatment + who.PS + prev.resection,
  tumor.size ~ year * treatment + age + who.PS, data = colorectal,
  data.Longi = colorectalLongi, random = c("1", "year"), id = "id",
  link = "Random-effects", left.censoring = -3.33, recurrentAG = FALSE,
  hazard = "Weibull", method.GH = "Pseudo-adaptive", n.nodes = 7)

summary(model.weib.RE.gap)

## End(Not run)
This is a simulated dataset used to illustrate the two-part joint model included in the longiPenal function.
data(survDat)
This data frame contains the following columns:
The identification number of a patient
The event times (death or censoring)
Censoring indicator
Treatment covariate
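The column names are not listed above; a quick way to discover them, once the package is loaded, is to inspect the data frame directly, as in the short sketch below.

data(survDat)
str(survDat)   # variable names and types (patient id, event/censoring time, status, treatment)
head(survDat)  # first rows of the simulated survival data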
This function is used in the case of interval-censoring, as the response variable in a model formula, only for the Cox proportional hazards or shared frailty model. It handles situations where an unobserved event is only known to have occurred within a time interval [L,U]. The recurrentAG argument is not valid when SurvIC is used. Note that this function uses a Kronecker product, which can run into computational issues when the number of subjects in each cluster is high. Time-dependent variables are not allowed.
SurvIC(t0, lower, upper, event)
t0 |
Truncation time for left truncated data only. To be ignored otherwise. |
lower |
Starting time of the interval for interval-censored data. Time of right-censoring instead. |
upper |
Ending time of the interval for interval-censored data. For right-censored data, the lower and upper times must be equal (for numerical reasons). |
event |
Status indicator 0=right-censored, 1=interval-censored |
Typical usages are SurvIC(lower,upper,event)
or
SurvIC(t0,lower,upper,event)
No return value
data(bcos)
bcos$event <- ifelse(bcos$left != bcos$right, 1, 0)

###--- Cox proportional hazard model with interval censoring ---###

cox.ic <- frailtyPenal(SurvIC(left, right, event) ~ treatment,
  data = bcos, n.knots = 8, kappa = 10000)

###--- Shared model with interval censoring ---###

bcos$group <- c(rep(1:20, 4), 1:14)

sha.ic <- frailtyPenal(SurvIC(left, right, event) ~ cluster(group) +
  treatment, data = bcos, n.knots = 8, kappa = 10000)
Let t be a continuous time point; this function returns the value of the survival function at t from a model previously fitted with frailtypack.
survival(t, ObjFrailty)
t |
time for survival function. |
ObjFrailty |
an object from the frailtypack fit. |
Returns the value of the survival function at t.
## Not run:

#-- a fit Shared
data(readmission)
fit.shared <- frailtyPenal(Surv(time,event) ~ dukes + cluster(id) +
  strata(sex), n.knots = 10, kappa = c(10000, 10000), data = readmission)

#-- calling survival
survival(20, fit.shared)

## End(Not run)
This is a special function used in the context of recurrent event models with a terminal event (e.g., a censoring variable related to recurrent events). It contains the status indicator, normally 0=alive, 1=dead, and is used on the right-hand side of a formula of the 'frailtyPenal', 'longiPenal' and 'trivPenal' functions. Using terminal() in a formula implies that a joint frailty model for recurrent events and a terminal event is fitted.
terminal(x)
x |
A numeric variable used as a death indicator: it should equal 1 if the subject died and 0 if the subject is alive or censored. |
x |
a death indicator |
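No example accompanies this entry, so here is a minimal sketch of a joint frailty model that uses terminal(); it reuses the readmission data and the covariates of the timedep example further below, and the n.knots and kappa values are purely illustrative.

data(readmission)

# Joint frailty model: recurrent readmissions with death as the terminal event
joint.fit <- frailtyPenal(Surv(time, event) ~ cluster(id) + sex + chemo +
    terminal(death),
  formula.terminalEvent = ~ sex + chemo,
  data = readmission, n.knots = 8, kappa = c(1, 1))
print(joint.fit)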
This is a special function used in the context of Cox models and shared and joint frailty models. It identifies time-varying effects of covariates in the model. It is used in 'frailtyPenal' on the right hand side of formula or of formula.terminalEvent.
When considering time-varying effects in a survival model, a regression coefficient can be modeled as a linear combination of B-splines B(.) of order q with m interior knots and coefficients \zeta_j:

\beta(t) = \sum_{j=-q+1}^{m} \zeta_j B_{j,q}(t)

You can notice that a linear combination of B-splines of order 1 without any interior knots (0 interior knots) is the same as a model without time-varying effect (or with a constant effect over time).

Statistical tests (likelihood ratio tests) can be done in order to know whether the time-dependent coefficients are significantly different from zero or to test whether a covariate has a time-dependent effect significantly different from zero or not. These tests are currently correct only with a parametric approach.

- Proportional hazards assumption?

Time-dependency of a covariate effect can be tested. We need to estimate m+q parameters \zeta_j, j=-q+1,...,m, for a time-varying coefficient, whereas only one parameter (\zeta_0) is estimated for a constant effect. A global test of H_0: \beta(t) = \beta is done; the corresponding LR statistic has a \chi^2 distribution with m+q-1 degrees of freedom.

- Significant association?

We can also use a LR test to test whether a covariate has a significant effect on the hazard function. The null hypothesis is H_0: \beta(t) = 0. For that we fit a model in which the covariate's regression coefficient is modeled using B-splines and a model without the covariate. Hence, the LR statistic has a \chi^2 distribution with m+q degrees of freedom.
timedep(x)
x |
A numerical or a factor variable that would have a time-varying effect on the event |
x |
A variable identified with a time-varying effect |
Y. Mazroui, A. Mauguen, S. Mathoulin-Pelissier, G. MacGrogan, V. Brouste, V. Rondeau (2013). Time-varying coefficients in a multivariate frailty model: Application to breast cancer recurrences of several types and death. To appear.
data(readmission)

###--- Shared Frailty model with time-varying effect ---###

sha.time <- frailtyPenal(Surv(time,event) ~ cluster(id) + dukes + charlson +
  timedep(sex) + chemo, data = readmission, n.knots = 8, kappa = 1,
  betaknots = 3, betaorder = 3)

#-- print results of the fit and the associated curves for the
#-- time-dependent effects
print(sha.time)

###--- Joint Frailty model with time-varying effect ---###

joi.time <- frailtyPenal(Surv(time,event) ~ cluster(id) + timedep(sex) +
  chemo + terminal(death), formula.terminalEvent = ~ timedep(sex) + chemo,
  data = readmission, n.knots = 8, kappa = c(1,1), betaknots = 3,
  betaorder = 3)
print(joi.time)
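To make the degrees of freedom of the likelihood ratio tests described above concrete, here is a small sketch assuming that betaknots plays the role of the number of interior knots m and betaorder the role of the spline order q (the values below match the fits above); the chi-square quantiles are only reference values for reading off a hand-computed LR statistic.

# Assumed mapping: m interior knots (betaknots) and spline order q (betaorder)
m <- 3; q <- 3

df_PH    <- m + q - 1   # H0: beta(t) = beta (constant effect over time)
df_assoc <- m + q       # H0: beta(t) = 0    (no effect at all)

# 5% critical values of the corresponding chi-square distributions
qchisq(0.95, df = df_PH)
qchisq(0.95, df = df_assoc)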
Fit a trivariate joint model for longitudinal data, recurrent events and a terminal event using a semiparametric penalized likelihood estimation or a parametric estimation on the hazard functions.
The longitudinal outcomes y_i(t_{ik}) (k=1,...,n_i, i=1,...,N) for N subjects are described by a linear mixed model, and the risks of the recurrent and terminal events are represented by proportional hazards models. The joint model is constructed assuming that the processes are linked via a latent structure (Krol et al. 2016):

y_i(t_{ik}) = X_{Li}(t_{ik})^\top \beta_L + Z_i(t_{ik})^\top b_i + \epsilon_i(t_{ik})

r_{ij}(t \mid v_i, b_i) = r_0(t) \exp( v_i + X_{Rij}(t)^\top \beta_R + g(b_i, \beta_L, Z_i(t), X_{Li}(t))^\top \eta_R )

\lambda_i(t \mid v_i, b_i) = \lambda_0(t) \exp( \alpha v_i + X_{Ti}(t)^\top \beta_T + h(b_i, \beta_L, Z_i(t), X_{Li}(t))^\top \eta_T )

where X_{Li}(t), X_{Rij}(t) and X_{Ti}(t) are vectors of fixed-effects covariates and \beta_L, \beta_R and \beta_T are the associated coefficients. The measurement errors \epsilon_i(t_{ik}) are iid normally distributed with mean 0 and variance \sigma_\epsilon^2. The random effects b_i = (b_{0i},...,b_{qi})^\top \sim N(0, B_1) are associated with the covariates Z_i(t) and are independent from the measurement error. The relationship between the biomarker and the recurrent events is explained via g(b_i, \beta_L, Z_i(t), X_{Li}(t)) with coefficients \eta_R, and between the biomarker and the terminal event via h(b_i, \beta_L, Z_i(t), X_{Li}(t)) with coefficients \eta_T. Two forms of the functions g(.) and h(.) are available: the random effects b_i and the current biomarker level m_i(t) = X_{Li}(t_{ik})^\top \beta_L + Z_i(t_{ik})^\top b_i. The frailty term v_i is gaussian with mean 0 and variance \sigma_v^2. Together with b_i it constitutes the random effects of the model, u_i = (v_i, b_i^\top)^\top.

We consider that the longitudinal outcome can be subject to a quantification limit, i.e. some observations below a level of detection s cannot be quantified (left-censoring).
trivPenal(formula, formula.terminalEvent, formula.LongitudinalData, data,
  data.Longi, random, id, intercept = TRUE, link = "Random-effects",
  left.censoring = FALSE, recurrentAG = FALSE, n.knots, kappa, maxit = 300,
  hazard = "Splines", init.B, init.Random, init.Eta, init.Alpha,
  method.GH = "Standard", n.nodes, LIMparam = 1e-3, LIMlogl = 1e-3,
  LIMderiv = 1e-3, print.times = TRUE)
formula |
a formula object, with the response on the left of a
|
formula.terminalEvent |
A formula object, only requires terms on the right to indicate which variables are modelling the terminal event. Interactions are possible using * or :. |
formula.LongitudinalData |
A formula object, only requires terms on the right to indicate which variables are modelling the longitudinal outcome. It must follow the standard form used for linear mixed-effects models. Interactions are possible using * or :. |
data |
A 'data.frame' with the variables used in |
data.Longi |
A 'data.frame' with the variables used in
|
random |
Names of variables for the random effects of the longitudinal
outcome. Maximum 3 random effects are possible at the moment. The random
intercept is chosen using |
id |
Name of the variable representing the individuals. |
intercept |
Logical value. Is the fixed intercept of the biomarker
included in the mixed-effects model? The default is |
link |
Type of link functions for the dependence between the biomarker
and death and between the biomarker and the recurrent events:
|
left.censoring |
Is the biomarker left-censored below a threshold
|
recurrentAG |
Logical value. Is Andersen-Gill model fitted? If so indicates that recurrent event times with the counting process approach of Andersen and Gill is used. This formulation can be used for dealing with time-dependent covariates. The default is FALSE. |
n.knots |
Integer giving the number of knots to use. Value required in
the penalized likelihood estimation. It corresponds to the (n.knots+2)
splines functions for the approximation of the hazard or the survival
functions. We estimate I- or M-splines of order 4. When the user sets the number of knots to k (n.knots=k), the number of interior knots is (k-2) and the number of splines is (k-2)+order. The number of knots must be between 4 and 20. (See Note in the frailtyPenal function.) |
kappa |
Positive smoothing parameters in the penalized likelihood
estimation. The coefficient kappa of the integral of the squared second
derivative of hazard function in the fit (penalized log likelihood). To
obtain an initial value for |
maxit |
Maximum number of iterations for the Marquardt algorithm. Default is 300 |
hazard |
Type of hazard functions: |
init.B |
Vector of initial values for regression coefficients. This vector should be of the same size as the whole vector of covariates with the first elements for the covariates related to the recurrent events, then to the terminal event and then to the biomarker (interactions in the end of each component). Default is 0.5 for each. |
init.Random |
Initial value for variance of the elements of the matrix of the distribution of the random effects. |
init.Eta |
Initial values for regression coefficients for the link
functions, first for the recurrent events ( |
init.Alpha |
Initial value for parameter alpha |
method.GH |
Method for the Gauss-Hermite quadrature: |
n.nodes |
Number of nodes for the Gauss-Hermite quadrature. It can be chosen among 5, 7, 9, 12, 15, 20 and 32. The default is 9. |
LIMparam |
Convergence threshold of the Marquardt algorithm for the
parameters (see Details), |
LIMlogl |
Convergence threshold of the Marquardt algorithm for the
log-likelihood (see Details), |
LIMderiv |
Convergence threshold of the Marquardt algorithm for the
gradient (see Details), |
print.times |
a logical parameter to print iteration process. Default is TRUE. |
Typical usage for the joint model
trivPenal(Surv(time,event)~cluster(id) + var1 + var2 + terminal(death), formula.terminalEvent =~ var1 + var3, biomarker ~ var1+var2, data, data.Longi, ...)
The method of the Gauss-Hermite quadrature for approximations of the
multidimensional integrals, i.e. length of random
is 2, can be chosen
among the standard, non-adaptive, pseudo-adaptive in which the quadrature
points are transformed using the information from the fitted mixed-effects
model for the biomarker (Rizopoulos 2012) or multivariate non-adaptive
procedure proposed by Genz et al. 1996 and implemented in FORTRAN subroutine
HRMSYM. The choice of the method is important for estimations. The standard
non-adaptive Gauss-Hermite quadrature ("Standard"
) with a specific
number of points gives accurate results but can be time consuming. The
non-adaptive procedure ("HRMSYM"
) offers advantageous computational
time but in case of datasets in which some individuals have few repeated
observations (biomarker measures or recurrent events), this method may be
moderately unstable. The pseudo-adaptive quadrature uses transformed
quadrature points to center and scale the integrand by utilizing estimates
of the random effects from an appropriate linear mixed-effects model (this
transformation does not include the frailty in the trivariate model, for
which the standard method is used). This method enables using fewer quadrature points while preserving the estimation accuracy, and thus leads to better computational times.
NOTE. Data frames data
and data.Longi
must be consistent.
Names and types of corresponding covariates must be the same, as well as the
number and identification of individuals.
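As a small illustration of this note, the sketch below shows the kind of pre-fit check it implies, using the colorectal and colorectalLongi data from the example further below; the specific checks needed depend on the data at hand.

data(colorectal)
data(colorectalLongi)

# every subject with biomarker measurements should also appear in the event data
all(unique(colorectalLongi$id) %in% unique(colorectal$id))

# covariates shared by the two data frames (names and types should match)
intersect(names(colorectal), names(colorectalLongi))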
The following components are included in a 'trivPenal' object for each model:
b |
The sequence of the corresponding estimation of the coefficients for the hazard functions (parametric or semiparametric), the random effects variances and the regression coefficients. |
call |
The code used for the model. |
formula |
The formula part of the code used for the recurrent event part of the model. |
formula.terminalEvent |
The formula part of the code used for the terminal event part of the model. |
formula.LongitudinalData |
The formula part of the code used for the longitudinal part of the model. |
coef |
The regression coefficients (first for the recurrent events, then for the terminal event and then for the biomarker). |
groups |
The number of groups used in the fit. |
kappa |
The values of the smoothing parameters in the penalized likelihood estimation corresponding to the baseline hazard functions for the recurrent and terminal events. |
logLikPenal |
The complete marginal penalized log-likelihood in the semiparametric case. |
logLik |
The marginal log-likelihood in the parametric case. |
n.measurements |
The number of biomarker observations used in the fit. |
max_rep |
The maximal number of repeated measurements per individual. |
n |
The number of observations in 'data' (recurrent and terminal events) used in the fit. |
n.events |
The number of recurrent events observed in the fit. |
n.deaths |
The number of terminal events observed in the fit. |
n.iter |
The number of iterations needed to converge. |
n.knots |
The number of knots for estimating the baseline hazard function in the penalized likelihood estimation. |
n.strat |
The number of stratum. |
varH |
The variance matrix of all parameters (before positivity constraint transformation for the variance of the measurement error, for which the delta method is used). |
varHIH |
The robust estimation of the variance matrix of all parameters. |
xR |
The vector of times where both survival and hazard function of the recurrent events are estimated. By default seq(0,max(time),length=99), where time is the vector of survival times. |
lamR |
The array (dim=3) of baseline hazard estimates and confidence bands (recurrent events). |
survR |
The array (dim=3) of baseline survival estimates and confidence bands (recurrent events). |
xD |
The vector of times where both survival and hazard function of the terminal event are estimated. By default seq(0,max(time),length=99), where time is the vector of survival times. |
lamD |
The array (dim=3) of baseline hazard estimates and confidence bands. |
survD |
The array (dim=3) of baseline survival estimates and confidence bands. |
medianR |
The value of the median survival and its confidence bands for the recurrent event. |
medianD |
The value of the median survival and its confidence bands for the terminal event. |
typeof |
The type of the baseline hazard function (0: "Splines", 2: "Weibull"). |
npar |
The number of parameters. |
nvar |
The vector of number of explanatory variables for the recurrent events, terminal event and biomarker. |
nvarRec |
The number of explanatory variables for the recurrent events. |
nvarEnd |
The number of explanatory variables for the terminal event. |
nvarY |
The number of explanatory variables for the biomarker. |
noVarRec |
The indicator of absence of the explanatory variables for the recurrent events. |
noVarEnd |
The indicator of absence of the explanatory variables for the terminal event. |
noVarY |
The indicator of absence of the explanatory variables for the biomarker. |
LCV |
The approximated likelihood cross-validation criterion in the semiparametric case (with H minus the converged Hessian matrix, and l(.) the full log-likelihood).
|
AIC |
The Akaike information Criterion for the parametric case.
|
n.knots.temp |
The initial value for the number of knots. |
shape.weib |
The shape parameter for the Weibull hazard functions (the first element for the recurrences and the second one for the terminal event). |
scale.weib |
The scale parameter for the Weibull hazard functions (the first element for the recurrences and the second one for the terminal event). |
martingale.res |
The martingale residuals related to the recurrences for each individual. |
martingaledeath.res |
The martingale residuals related to the terminal event for each individual. |
conditional.res |
The conditional residuals
for the biomarker (subject-specific):
|
marginal.res |
The marginal residuals for the biomarker (population
averaged):
|
marginal_chol.res |
The Cholesky marginal residuals for the biomarker:
|
conditional_st.res |
The standardized conditional residuals for the biomarker. |
marginal_st.res |
The standardized marginal residuals for the biomarker. |
random.effects.pred |
The empirical Bayes predictions of the random effects (ie. using conditional posterior distributions). |
frailty.pred |
The empirical Bayes predictions of the frailty term (ie. using conditional posterior distributions). |
pred.y.marg |
The marginal predictions of the longitudinal outcome. |
pred.y.cond |
The conditional (given the random effects) predictions of the longitudinal outcome. |
linear.pred |
The linear predictor for the recurrent events part. |
lineardeath.pred |
The linear predictor for the terminal event part. |
global_chisqR |
The vector with values of each multivariate Wald test for the recurrent part. |
dof_chisqR |
The vector with degrees of freedom for each multivariate Wald test for the recurrent part. |
global_chisq.testR |
The binary variable equals to 0 when no multivariate Wald is given, 1 otherwise (for the recurrent part). |
p.global_chisqR |
The vector with the p_values for each global multivariate Wald test for the recurrent part. |
global_chisqT |
The vector with values of each multivariate Wald test for the terminal part. |
dof_chisqT |
The vector with degrees of freedom for each multivariate Wald test for the terminal part. |
global_chisq.testT |
The binary variable equals to 0 when no multivariate Wald is given, 1 otherwise (for the terminal part). |
p.global_chisqT |
The vector with the p_values for each global multivariate Wald test for the terminal part. |
global_chisqY |
The vector with values of each multivariate Wald test for the longitudinal part. |
dof_chisqY |
The vector with degrees of freedom for each multivariate Wald test for the longitudinal part. |
global_chisq.testY |
The binary variable equals to 0 when no multivariate Wald is given, 1 otherwise (for the longitudinal part). |
p.global_chisqY |
The vector with the p_values for each global multivariate Wald test for the longitudinal part. |
names.factorR |
The names of the "as.factor" variables for the recurrent part. |
names.factorT |
The names of the "as.factor" variables for the terminal part. |
names.factorY |
The names of the "as.factor" variables for the longitudinal part. |
AG |
The logical value. Is Andersen-Gill model fitted? |
intercept |
The logical value. Is the fixed intercept included in the linear mixed-effects model? |
B1 |
The variance matrix of the random effects for the longitudinal outcome. |
sigma2 |
The variance of the
frailty term ( |
alpha |
The coefficient |
ResidualSE |
The variance of the measurement error. |
etaR |
The
regression coefficients for the link function |
etaT |
The regression coefficients for the link function
|
ne_re |
The number of random effects b used in the fit. |
names.re |
The names of variables for the random effects
|
link |
The name of the type of the link functions. |
leftCensoring |
The logical value. Is the longitudinal outcome left-censored? |
leftCensoring.threshold |
For the left-censored biomarker, the value of the left-censoring threshold used for the fit. |
prop.censored |
The fraction of observations subjected to the left-censoring. |
methodGH |
The Gaussian quadrature method used in the fit. |
n.nodes |
The number of nodes used for the Gaussian quadrature in the fit. |
alpha_p.value |
p-value of the Wald test for the estimated coefficient
|
sigma2_p.value |
p-value of the Wald test for the
estimated variance of the frailty term ( |
etaR_p.value |
p-values of the Wald test for the estimated regression
coefficients for the link function |
etaT_p.value |
p-values of the Wald test for the estimated regression
coefficients for the link function |
beta_p.value |
p-values of the Wald test for the estimated regression coefficients. |
It is recommended to initialize the parameter values using the results from reduced models (for example, longiPenal for the longitudinal and terminal parts and frailtyPenal for the recurrent part). See the example.
A. Krol, A. Mauguen, Y. Mazroui, A. Laurent, S. Michiels and V. Rondeau (2017). Tutorial in Joint Modeling and Prediction: A Statistical Software for Correlated Longitudinal Outcomes, Recurrent Events and a Terminal Event. Journal of Statistical Software 81(3), 1-52.
A. Krol, L. Ferrer, JP. Pignon, C. Proust-Lima, M. Ducreux, O. Bouche, S. Michiels, V. Rondeau (2016). Joint Model for Left-Censored Longitudinal Data, Recurrent Events and Terminal Event: Predictive Abilities of Tumor Burden for Cancer Evolution with Application to the FFCD 2000-05 Trial. Biometrics 72(3) 907-16.
D. Rizopoulos (2012). Fast fitting of joint models for longitudinal and event time data using a pseudo-adaptive Gaussian quadrature rule. Computational Statistics and Data Analysis 56, 491-501.
A. Genz and B. Keister (1996). Fully symmetric interpolatory rules for multiple integrals over infinite regions with Gaussian weight. Journal of Computational and Applied Mathematics 71, 299-309.
plot.trivPenal, print.trivPenal, summary.trivPenal
## Not run:
###--- Trivariate joint model for longitudinal data, ---###
###--- recurrent events and a terminal event ---###

data(colorectal)
data(colorectalLongi)

# Parameter initialisation for covariates - longitudinal and terminal part
# Survival data preparation - only terminal events
colorectalSurv <- subset(colorectal, new.lesions == 0)

initial.longi <- longiPenal(Surv(time1, state) ~ age + treatment + who.PS
  + prev.resection, tumor.size ~ year * treatment + age + who.PS,
  colorectalSurv, data.Longi = colorectalLongi, random = c("1", "year"),
  id = "id", link = "Random-effects", left.censoring = -3.33, n.knots = 6,
  kappa = 2, method.GH = "Pseudo-adaptive", maxit = 40, n.nodes = 7)

# Parameter initialisation for covariates - recurrent part
initial.frailty <- frailtyPenal(Surv(time0, time1, new.lesions) ~ cluster(id)
  + age + treatment + who.PS, data = colorectal, recurrentAG = TRUE,
  RandDist = "LogN", n.knots = 6, kappa = 2)

# Baseline hazard function approximated with splines
# Random effects as the link function, Calendar timescale
# (computation takes around 40 minutes)
model.spli.RE.cal <- trivPenal(Surv(time0, time1, new.lesions) ~ cluster(id)
  + age + treatment + who.PS + terminal(state),
  formula.terminalEvent =~ age + treatment + who.PS + prev.resection,
  tumor.size ~ year * treatment + age + who.PS, data = colorectal,
  data.Longi = colorectalLongi, random = c("1", "year"), id = "id",
  link = "Random-effects", left.censoring = -3.33, recurrentAG = TRUE,
  n.knots = 6, kappa = c(0.01, 2), method.GH = "Standard", n.nodes = 7,
  init.B = c(-0.07, -0.13, -0.16, -0.17, 0.42,        # recurrent events covariates
             -0.16, -0.14, -0.14, 0.08, 0.86, -0.24,  # terminal event covariates
             2.93, -0.28, -0.13, 0.17, -0.41, 0.23, 0.97, -0.61)) # biomarker covariates

# Weibull baseline hazard function
# Random effects as the link function, Gap timescale
# (computation takes around 30 minutes)
model.weib.RE.gap <- trivPenal(Surv(gap.time, new.lesions) ~ cluster(id)
  + age + treatment + who.PS + prev.resection + terminal(state),
  formula.terminalEvent =~ age + treatment + who.PS + prev.resection,
  tumor.size ~ year * treatment + age + who.PS, data = colorectal,
  data.Longi = colorectalLongi, random = c("1", "year"), id = "id",
  link = "Random-effects", left.censoring = -3.33, recurrentAG = FALSE,
  hazard = "Weibull", method.GH = "Pseudo-adaptive", n.nodes = 7)

## End(Not run)
Fit a non-linear trivariate joint model for a longitudinal biomarker, recurrent events and a terminal event using a semiparametric penalized likelihood estimation or a parametric estimation on the hazard functions.
The values y_i(t) (i=1,...,N) for N subjects represent the individual evolution of the biomarker, e.g. tumor size expressed as the sum of the longest diameters (SLD) of target lesions. The dynamics of the biomarker are described by an ordinary differential equation (ODE) that includes the effect of the natural net growth and the treatment effect (Claret et al. 2009).

The model includes the following parameters (using the interpretation of tumor dynamics): exp(K_{G,0}), the constant tumor growth rate; exp(K_{D,0}), the drug-induced tumor decline rate; \lambda, the resistance effect to the drug (exponential change of the tumor decay with time); exp(y_0), the initial level of the biomarker; and d_i, the treatment concentration (e.g. dose). The random effects b_i = (b_{y_0,i}, b_{G,i}, b_{D,i}, b_{\lambda,i})^\top are gaussian variables with a diagonal covariance matrix B_i. In the trivariate model we use the analytical solution of the equation with the population-based approach of the non-linear mixed effects model. We can also assume a transformation of the biomarker observations (one-parameter Box-Cox transformation), and we include a gaussian measurement error, for individual i and observation k (k=1,...,n_i), \epsilon_{ik} \sim N(0, \sigma_\epsilon^2).

The risks of the recurrent events (r_{ij}(.), the risk of the j-th event of individual i) and of the terminal event (\lambda_i(.), the risk of the event of individual i) are represented by proportional hazards models. The joint model is constructed assuming that the processes are linked via a latent structure and includes the non-linear mixed effects model for the longitudinal data. In the hazard sub-models, X_{G,i}(t), X_{D,i}(t), X_{R,ij}(t) and X_{T,i}(t) are vectors of possibly time-varying fixed-effects covariates and \beta_G, \beta_D, \beta_R and \beta_T are the associated coefficients. The random effects b_i are independent from the measurement error. The relationship between the biomarker and the recurrent events is explained via g(y_i(t)) with coefficients \eta_R, and between the biomarker and the terminal event via h(y_i(t)) with coefficients \eta_T. Currently, only one form of the functions g(.) and h(.) is available: the random effects b_i. The frailty term v_i is gaussian with mean 0 and variance \sigma_v^2. Together with b_i it constitutes the random effects of the model, u_i = (v_i, b_i^\top)^\top.

Any combination of the random effects b_i, e.g. b_i = b_{y_0,i} or b_i = {b_{G,i}, b_{D,i}, b_{\lambda,i}}, can be chosen for the model.

We consider that the longitudinal outcome can be subject to a quantification limit, i.e. some observations below a level of detection s cannot be quantified (left-censoring).
trivPenalNL(formula, formula.terminalEvent, biomarker, formula.KG, formula.KD,
  dose, time.biomarker, data, data.Longi, random, id, link = "Random-effects",
  BoxCox = FALSE, left.censoring = FALSE, recurrentAG = FALSE, n.knots, kappa,
  maxit = 300, hazard = "Splines", init.B, init.Random, init.Eta, init.Alpha,
  init.Biomarker, method.GH = "Standard", init.GH = FALSE, n.nodes,
  LIMparam = 1e-3, LIMlogl = 1e-3, LIMderiv = 1e-3, print.times = TRUE)
formula |
a formula object, with the response on the left of a
|
formula.terminalEvent |
A formula object, only requires terms on the right to indicate which variables are modelling the terminal event. Interactions are possible using * or :. |
biomarker |
Name of the variable representing the longitudinal biomarker. |
formula.KG |
A formula object, only requires terms on the right to indicate which covariates related to the biomarker growth are included in the longitudinal sub-model. It must follow the standard form used for linear mixed-effects models. Interactions are possible using * or :. |
formula.KD |
A formula object, only requires terms on the right to indicate which covariates related to the biomarker drug-induced decline are included in the longitudinal sub-model. It must follow the standard form used for linear mixed-effects models. Interactions are possible using * or :. |
dose |
Name of the variable representing the drug concentration indicator. |
time.biomarker |
Name of the variable of times of biomarker measurements. |
data |
A 'data.frame' with the variables used in |
data.Longi |
A 'data.frame' with the variables used in
|
random |
Names of parameters for which the random effects are included
in the mixed model. The names must be chosen among |
id |
Name of the variable representing the individuals. |
link |
Type of link functions for the dependence between the biomarker
and death and between the biomarker and the recurrent events: only
|
BoxCox |
Should the Box-Cox transformation be used for the longitudinal
biomarker? If there is no transformation, the argument must be equal to
|
left.censoring |
Is the biomarker left-censored below a threshold
|
recurrentAG |
Logical value. Is Andersen-Gill model fitted? If so indicates that recurrent event times with the counting process approach of Andersen and Gill is used. This formulation can be used for dealing with time-dependent covariates. The default is FALSE. |
n.knots |
Integer giving the number of knots to use. Value required in
the penalized likelihood estimation. It corresponds to the (n.knots+2)
splines functions for the approximation of the hazard or the survival
functions. We estimate I- or M-splines of order 4. When the user sets the number of knots to k (n.knots=k), the number of interior knots is (k-2) and the number of splines is (k-2)+order. The number of knots must be between 4 and 20. (See Note in the frailtyPenal function.) |
kappa |
Positive smoothing parameters in the penalized likelihood
estimation. The coefficient kappa of the integral of the squared second
derivative of hazard function in the fit (penalized log likelihood). To
obtain an initial value for |
maxit |
Maximum number of iterations for the Marquardt algorithm. Default is 300 |
hazard |
Type of hazard functions: |
init.B |
Vector of initial values for regression coefficients. This vector should be of the same size as the whole vector of covariates with the first elements for the covariates related to the recurrent events, then to the terminal event and then to the biomarker (interactions in the end of each component). Default is 0.5 for each. |
init.Random |
Initial value for variance of the elements of the matrix of the distribution of the random effects. |
init.Eta |
Initial values for regression coefficients for the link
functions, first for the recurrent events ( |
init.Alpha |
Initial value for parameter alpha |
init.Biomarker |
Initial values for biomarker parameters: |
method.GH |
Method for the Gauss-Hermite quadrature: |
init.GH |
Only when the option |
n.nodes |
Number of nodes for the Gauss-Hermite quadrature (from 5 to 32). The default is 9. |
LIMparam |
Convergence threshold of the Marquardt algorithm for the
parameters (see Details), |
LIMlogl |
Convergence threshold of the Marquardt algorithm for the
log-likelihood (see Details), |
LIMderiv |
Convergence threshold of the Marquardt algorithm for the
gradient (see Details), |
print.times |
a logical parameter to print iteration process. Default is TRUE. |
Typical usage for the joint model
trivPenalNL(Surv(time,event)~cluster(id) + var1 + var2 + terminal(death), formula.terminalEvent =~ var1 + var3, biomarker = "biomarker.name", dose = "dose.name", time.biomarker = "time", formula.KG ~ var1, formula.KD ~ var2, data, data.Longi, ...)
The method of the Gauss-Hermite quadrature for approximations of the
multidimensional integrals, i.e. length of random
more than 2, can be
chosen among the standard (non-adaptive) and pseudo-adaptive in which the
quadrature points are transformed using the information from the fitted
mixed-effects model for the biomarker (Rizopoulos 2012) or multivariate
non-adaptive procedure proposed by Genz et al. 1996 and implemented in
FORTRAN subroutine HRMSYM. The choice of the method is important for
estimations. The standard non-adaptive Gauss-Hermite quadrature
("Standard"
) with a specific number of points gives accurate results
but can be time consuming. The pseudo-adaptive quadrature uses transformed
quadrature points to center and scale the integrand by utilizing estimates
of the random effects from an appropriate non-linear mixed-effects model
(this transformation does not include the frailty in the trivariate model,
for which the standard method, with 20 quadrature points, is used). This method enables using fewer quadrature points while preserving the estimation accuracy, and thus leads to better computational times.
NOTE. Data frames data
and data.Longi
must be consistent.
Names and types of corresponding covariates must be the same, as well as the
number and identification of individuals.
The following components are included in a 'trivPenalNL' object for each model:
b |
The sequence of the corresponding estimation of the coefficients for the hazard functions (parametric or semiparametric), the random effects variances and the regression coefficients. |
call |
The code used for the model. |
formula |
The formula part of the code used for the recurrent event part of the model. |
formula.terminalEvent |
The formula part of the code used for the terminal event part of the model. |
formula.KG |
The formula part of the code used for the longitudinal part of the model, for the biomarker growth dynamics. |
formula.KD |
The formula part of the code used for the longitudinal part of the model, for the biomarker decline dynamics. |
coef |
The regression coefficients (first for the recurrent events, then for the terminal event, then for the biomarker growth and then for the biomarker decline). |
groups |
The number of groups used in the fit. |
kappa |
The values of the smoothing parameters in the penalized likelihood estimation corresponding to the baseline hazard functions for the recurrent and terminal events. |
logLikPenal |
The complete marginal penalized log-likelihood in the semiparametric case. |
logLik |
The marginal log-likelihood in the parametric case. |
n.measurements |
The number of biomarker observations used in the fit. |
max_rep |
The maximal number of repeated measurements per individual. |
n |
The number of observations in 'data' (recurrent and terminal events) used in the fit. |
n.events |
The number of recurrent events observed in the fit. |
n.deaths |
The number of terminal events observed in the fit. |
n.iter |
The number of iterations needed to converge. |
n.knots |
The number of knots for estimating the baseline hazard function in the penalized likelihood estimation. |
n.strat |
The number of stratum. |
varH |
The variance matrix of all parameters (before positivity constraint transformation for the variance of the measurement error, for which the delta method is used). |
varHIH |
The robust estimation of the variance matrix of all parameters. |
xR |
The vector of times where both survival and hazard function of the recurrent events are estimated. By default seq(0,max(time),length=99), where time is the vector of survival times. |
lamR |
The array (dim=3) of baseline hazard estimates and confidence bands (recurrent events). |
survR |
The array (dim=3) of baseline survival estimates and confidence bands (recurrent events). |
xD |
The vector of times where both survival and hazard function of the terminal event are estimated. By default seq(0,max(time),length=99), where time is the vector of survival times. |
lamD |
The array (dim=3) of baseline hazard estimates and confidence bands. |
survD |
The array (dim=3) of baseline survival estimates and confidence bands. |
medianR |
The value of the median survival and its confidence bands for the recurrent event. |
medianD |
The value of the median survival and its confidence bands for the terminal event. |
typeof |
The type of the baseline hazard function (0: "Splines", 2: "Weibull"). |
npar |
The number of parameters. |
nvar |
The vector of number of explanatory variables for the recurrent events, terminal event, biomarker growth and biomarker decline. |
nvarRec |
The number of explanatory variables for the recurrent events. |
nvarEnd |
The number of explanatory variables for the terminal event. |
nvarKG |
The number of explanatory variables for the biomarker growth. |
nvarKD |
The number of explanatory variables for the biomarker decline. |
noVarRec |
The indicator of absence of the explanatory variables for the recurrent events. |
noVarEnd |
The indicator of absence of the explanatory variables for the terminal event. |
noVarKG |
The indicator of absence of the explanatory variables for the biomarker growth. |
noVarKD |
The indicator of absence of the explanatory variables for the biomarker decline. |
LCV |
The approximated likelihood cross-validation criterion in the semiparametric case (with H minus the converged Hessian matrix, and l(.) the full log-likelihood).
|
AIC |
The Akaike information Criterion for the parametric case.
|
n.knots.temp |
The initial value for the number of knots. |
shape.weib |
The shape parameter for the Weibull hazard functions (the first element for the recurrences and the second one for the terminal event). |
scale.weib |
The scale parameter for the Weibull hazard functions (the first element for the recurrences and the second one for the terminal event). |
random.effects.pred |
The empirical Bayes predictions of the random effects (ie. using conditional posterior distributions). |
global_chisq.testR |
The binary variable equals to 0 when no multivariate Wald is given, 1 otherwise (for the recurrent part). |
global_chisq.testT |
The binary variable equals to 0 when no multivariate Wald is given, 1 otherwise (for the terminal part). |
global_chisq.testKG |
The binary variable equals to 0 when no multivariate Wald is given, 1 otherwise (for the biomarker growth). |
global_chisq.testKD |
The binary variable equals to 0 when no multivariate Wald is given, 1 otherwise (for the biomarker decline). |
AG |
The logical value. Is Andersen-Gill model fitted? |
B1 |
The variance matrix of the random effects for the longitudinal outcome. |
sigma2 |
The variance of the frailty term ( |
alpha |
The coefficient |
ResidualSE |
The variance of the measurement error. |
etaR |
The regression coefficients for the
link function |
etaT |
The regression coefficients for
the link function |
ne_re |
The number of random effects b used in the fit. |
names.re |
The names of variables for the random
effects |
link |
The name of the type of the link functions. |
leftCensoring |
The logical value. Is the longitudinal outcome left-censored? |
leftCensoring.threshold |
For the left-censored biomarker, the value of the left-censoring threshold used for the fit. |
prop.censored |
The fraction of observations subjected to the left-censoring. |
methodGH |
The Gaussian quadrature method used in the fit. |
n.nodes |
The number of nodes used for the Gaussian quadrature in the fit. |
K_G0 |
Value of the estimate of the biomarker growth parameter. |
K_D0 |
Value of the estimate of the biomarker decay parameter. |
lambda |
Value of the estimate of the biomarker resistance to drug. |
y_0 |
Value of the estimate of the biomarker initial level. |
biomarker |
Name of the variable associated with the biomarker in the data. |
time.biomarker |
Name of the variable associated with the time of measurements of the biomarker in the data. |
dose |
Name of the variable associated with the drug concentration in the data. |
BoxCox |
The logical value. Is the BoxCox transformation applied for the biomarker? |
BoxCox_parameter |
The value of the BoxCox transformation parameter. |
alpha_p.value |
p-value of the Wald test for the estimated coefficient
|
sigma2_p.value |
p-value of the Wald test for the
estimated variance of the frailty term ( |
etaR_p.value |
p-values of the Wald test for the estimated regression
coefficients for the link function |
etaT_p.value |
p-values of the Wald test for the estimated regression
coefficients for the link function |
y_0_p.value |
p-value of the Wald test for the estimated biomarker initial level. |
K_G0_p.value |
p-value of the Wald test for the estimated biomarker growth parameter. |
K_D0_p.value |
p-value of the Wald test for the estimated biomarker decay parameter. |
lambda_p.value |
p-value of the Wald test for the estimated biomarker resistance to drug. |
beta_p.value |
p-values of the Wald test for the estimated regression coefficients. |
It is recommended to initialize the parameter values using the results
from a corresponding reduced model (frailtyPenal
for the recurrent
and terminal part). See example.
Estimations of models with more than three random effects can be very long.
A. Krol, C. Tournigand, S. Michiels and V. Rondeau (2018). Multivariate joint frailty model for the analysis of nonlinear tumor kinetics and dynamic predictions of death. Statistics in Medicine.
A. Krol, L. Ferrer, JP. Pignon, C. Proust-Lima, M. Ducreux, O. Bouche, S. Michiels, V. Rondeau (2016). Joint Model for Left-Censored Longitudinal Data, Recurrent Events and Terminal Event: Predictive Abilities of Tumor Burden for Cancer Evolution with Application to the FFCD 2000-05 Trial. Biometrics 72(3) 907-16.
D. Rizopoulos (2012). Fast fitting of joint models for longitudinal and event time data using a pseudo-adaptive Gaussian quadrature rule. Computational Statistics and Data Analysis 56, 491-501.
L. Claret, P. Girard, P.M. Hoff, E. Van Cutsem, K.P. Zuideveld, K. Jorga, J. Fagerberg, R Bruno (2009). Model-based prediction of phase III overall survival in colorectal cancer on the basis of phase II tumor dynamics. Journal of Clinical Oncology 27(25), 4103-8.
plot.trivPenalNL, print.trivPenalNL, summary.trivPenalNL
## Not run:
###--- Non-linear trivariate joint model for longitudinal data, ---###
###--- recurrent events and a terminal event ---###

data(colorectal)
data(colorectalLongi)

# No information on dose - creation of a dummy variable
colorectalLongi$dose <- 1

# Parameters initialisation - estimation of a simplified model
# with two random effects (a frailty term and a random effect
# related to biomarker growth (KG))
initial.model <- trivPenalNL(Surv(time0, time1, new.lesions) ~ cluster(id)
  + age + treatment + terminal(state),
  formula.terminalEvent =~ age + treatment, biomarker = "tumor.size",
  formula.KG ~ 1, formula.KD ~ treatment, dose = "dose",
  time.biomarker = "year", data = colorectal, data.Longi = colorectalLongi,
  random = "KG", id = "id", recurrentAG = TRUE, n.knots = 5,
  kappa = c(0.01, 2), method.GH = "Pseudo-adaptive")

# Trivariate joint model with initial values for parameters
# (computation takes around 40 minutes)
model <- trivPenalNL(Surv(time0, time1, new.lesions) ~ cluster(id) + age
  + treatment + terminal(state), formula.terminalEvent =~ age + treatment,
  biomarker = "tumor.size", formula.KG ~ 1, formula.KD ~ treatment,
  dose = "dose", time.biomarker = "year", data = colorectal,
  data.Longi = colorectalLongi, random = c("y0", "KG"), id = "id",
  init.B = c(-0.22, -0.16, -0.35, -0.19, 0.04, -0.41, 0.23),
  init.Alpha = 1.86, init.Eta = c(0.5, 0.57, 0.5, 2.34),
  init.Biomarker = c(1.24, 0.81, 1.07, -1.53), recurrentAG = TRUE,
  n.knots = 5, kappa = c(0.01, 2), method.GH = "Pseudo-adaptive")

## End(Not run)
This is a special function used in the context of joint frailty models for data from nested case-control studies. It specifies the weights, defined using the 'wts' function, and is used in the 'frailtyPenal' formula for fitting joint models.
wts(x)
x |
A numeric variable containing the weights. |
x |
A variable identified as weights |
data(dataNCC)
modJoint.ncc <- frailtyPenal(Surv(t.start, t.stop, event) ~ cluster(id) + cov1
  + cov2 + terminal(death) + wts(ncc.wts),
  formula.terminalEvent = ~ cov1 + cov2, data = dataNCC, n.knots = 8,
  kappa = c(1.6e+10, 5.0e+03), recurrentAG = TRUE, RandDist = "LogN")

print(modJoint.ncc)