---
title: "CSCNet vignette"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{CSCNet vignette}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = ""
)
```

CSCNet is a package with flexible tools for fitting and evaluating cause-specific Cox models with elastic-net penalty. Each cause is modeled by a separate penalized Cox model (with an elastic-net penalty) with its own $\alpha$ and $\lambda$, treating the other involved competing causes as censored.

### Regularized cause-specific Cox and absolute risk predictions

Throughout this vignette we will use the `Melanoma` data from the 'riskRegression' package (which is loaded along with 'CSCNet'), so we start by loading the package and the `Melanoma` data.

```{r message=F, warning=F}
library(CSCNet)
library(riskRegression)

data(Melanoma)

as_tibble(Melanoma)

table(Melanoma$status)
```

There are 2 events in the `Melanoma` data, coded as 1 & 2. To introduce how variables and hyper-parameters are set up in CSCNet, we will fit a model with the following hyper-parameters to the `Melanoma` data:

$$(\alpha_{1},\alpha_{2},\lambda_{1},\lambda_{2})=(0,0.5,0.01,0.02)$$

We take the variables affecting event 1 to be `age, sex, invasion, thick` and the variables affecting event 2 to be `age, sex, epicel, ici, thick`.

#### Fitting regularized cause-specific Cox models

In CSCNet, variables and hyper-parameters are specified through named lists. The variables and hyper-parameters related to each involved cause are stored in a list element whose name is that cause. Naturally, these names must match the values of the status variable in the data.

```{r}
vl <- list('1'=c('age','sex','invasion','thick'),
           '2'=~age+sex+epicel+ici+thick)

penfit <- penCSC(time = 'time',
                 status = 'status',
                 vars.list = vl,
                 data = Melanoma,
                 alpha.list = list('1'=0,'2'=.5),
                 lambda.list = list('1'=.01,'2'=.02))

penfit
```

`penfit` is a comprehensive list holding detailed information on the data and the fitted models, all of which the user can access.

**Note:** As seen above, variables can be specified in `vars.list` in two ways: as a vector of variable names or as a one-sided formula for each cause.

#### Predictions and semi-parametric estimates of absolute risk

To obtain predictions, especially estimates of the absolute risks, the `predict.penCSC` method was developed so that users can obtain different kinds of values in the easiest way possible. With this method, applied to objects of class `penCSC` and for the different involved causes, the user can obtain linear predictors (`type='lp'` or `type='link'`), exponentiated linear predictors (`type='risk'` or `type='response'`) and, finally, semi-parametric estimates of absolute risks (`type='absRisk'`) at desired time horizons.

**Note:** The default value of the `event` argument in `predict.penCSC` is `NULL`. If it is left as is, values for all involved causes are returned.

Linear predictor values for event 1 for the first five individuals in the data:

```{r}
predict(penfit,Melanoma[1:5,],type='lp',event=1)
```

Or the risk values of the same individuals for all involved causes:

```{r}
predict(penfit,Melanoma[1:5,],type='response')
```

Now let's say we want estimates of the absolute risk of event 1, our event of interest, at the 3- and 5-year time horizons:

```{r}
predict(penfit,Melanoma[1:5,],type='absRisk',event=1,time=365*c(3,5))
```

**Note:** There is also `predictRisk.penCSC` for obtaining absolute risk predictions. This method was developed for compatibility with tools from the 'riskRegression' package.
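As a quick illustration of that compatibility, here is a minimal sketch of how the fitted model could be evaluated with `riskRegression::Score()`, which calls `predictRisk()` behind the scenes. The formula and time horizon shown here are only an assumed typical call, not something prescribed by CSCNet, so the chunk is not evaluated.

```{r eval=FALSE}
# A minimal sketch (assumed typical riskRegression::Score() call): IPCW AUC and
# Brier score of the 5-year absolute risk predictions for event 1. The penCSC
# fit is passed in a named list so Score() can dispatch to predictRisk().
Score(list(penCSC = penfit),
      formula = Hist(time, status) ~ 1, # censoring model for the IPCW weights
      data = Melanoma,
      cause = 1,
      times = 365*5)
```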
### Tuning the hyper-parameters

The example above was for illustration purposes. In a real-world analysis, one must tune the hyper-parameters with respect to a proper loss function through a resampling procedure. `tune_penCSC` is a comprehensive function built for this purpose for regularized cause-specific Cox models. As before, variables and hyper-parameters are specified through named lists, and the sequences of candidate hyper-parameters related to each involved cause are stored in a list element whose name is that cause. `tune_penCSC` then creates all possible combinations of the user-specified sequences and evaluates them, using either the IPCW Brier score or the IPCW AUC as the loss function, based on the absolute risk predictions of the event of interest (linking) through the chosen resampling process. The supported resampling procedures are: cross-validation (`method='cv'`), repeated cross-validation (`method='repcv'`), bootstrap (`method='boot'`), Monte-Carlo or leave-group-out cross-validation (`method='lgocv'`) and leave-one-out cross-validation (`method='loocv'`).

#### Automatic specification of hyper-parameter sequences

`tune_penCSC` can automatically determine the candidate sequences of $\alpha$ and $\lambda$ values. Setting either of `alpha.grid` and `lambda.grid` to `NULL` tells the function to compute that sequence automatically. While the automatic sequence of $\alpha$ values for all causes is `seq(0,1,.5)`, the $\lambda$ values are determined automatically as follows:

1. Starting from $\lambda=0$, the algorithm fits LASSO models until it finds a $\lambda$ value that yields a null model, i.e. one in which all coefficients are shrunk to exactly 0.
2. The obtained $\lambda$ value is used as the maximum of a sequence starting from 0. The length of this sequence is controlled by the values in `nlambdas.list`.

This is done for each cause-specific model to create an exclusive sequence of $\lambda$ values for each of them.

#### Pre-processing within resampling

If the data require pre-processing steps, these must be carried out within the resampling process to avoid data leakage. This is achieved through the `preProc.fun` argument of `tune_penCSC`. This argument accepts a function that takes a data set as its only input and returns a modified version of that data. Any pre-processing steps can be specified within this function.

**Note:** `tune_penCSC` has a parallel processing option. If the pre-processing function relies on global objects or on calls to other packages and the code is to be run in parallel, the names of those extra packages and global objects must be supplied through `preProc.pkgs` and `preProc.globals`, as sketched below.
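For instance, consider the following hypothetical pre-processing function (the names `log_fun` and `cont_vars` are made up for illustration). It relies on a global object and on 'dplyr' functions, so running the tuning in parallel would require declaring both:

```{r eval=FALSE}
library(dplyr)

# Global object referenced inside the pre-processing function
cont_vars <- c('age','thick')

# Hypothetical pre-processing function: log-transforms the continuous variables
log_fun <- function(data){
  data %>% mutate(across(all_of(cont_vars), log))
}

# When running in parallel, declare the extra package and the global object:
# tune_penCSC(..., preProc.fun = log_fun, parallel = TRUE,
#             preProc.pkgs = 'dplyr', preProc.globals = 'cont_vars')
```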
Now let's see everything mentioned in this section in one example. Say we want to tune our model for 5-year absolute risk prediction of event 1, using the time-dependent (IPCW) AUC as the loss function (evaluation metric) in a 5-fold cross-validation:

```{r message=T, warning=F}
# Writing a hypothetical pre-processing function
library(recipes)

std.fun <- function(data){

  cont_vars <- data %>%
    select(where(~is.numeric(.))) %>%
    names

  cont_vars <- cont_vars[-which(cont_vars %in% c('time','status'))]

  # External functions from the recipes package are being used
  recipe(~.,data=data) %>%
    step_center(all_of(cont_vars)) %>%
    step_scale(all_of(cont_vars)) %>%
    prep(training=data) %>%
    juice

}

# Tuning a regularized cause-specific Cox model
set.seed(455) # for reproducibility

tune_melanoma <- tune_penCSC(time = 'time',
                             status = 'status',
                             vars.list = vl,
                             data = Melanoma,
                             horizons = 365*5,
                             event = 1,
                             method = 'cv',
                             k = 5,
                             standardize = FALSE,
                             metrics = 'AUC',
                             alpha.grid = list('1'=0,'2'=c(.5,1)),
                             preProc.fun = std.fun,
                             parallel = TRUE,
                             preProc.pkgs = 'recipes')

tune_melanoma$validation_result %>% arrange(desc(mean.AUC)) %>% head

tune_melanoma$final_params

tune_melanoma$final_fits
```
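Finally, the selected model can be used for prediction just like a manually fitted `penCSC` object. Below is a minimal sketch under the assumption that `final_fits` holds the refitted `penCSC` model at the tuned hyper-parameters (if it is stored as a list, e.g. one element per horizon, the relevant element has to be extracted first). Note also that, in a real analysis, any pre-processing applied during tuning should be applied to new data before prediction.

```{r eval=FALSE}
# A minimal sketch (assumption: `final_fits` is the refitted penCSC model at
# the selected hyper-parameters; extract the relevant element first if it is
# returned as a list).
final_model <- tune_melanoma$final_fits

# 5-year absolute risk of event 1 for the first five individuals
predict(final_model, Melanoma[1:5,], type = 'absRisk', event = 1, time = 365*5)
```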