IPD meta-analysis with missing data

Fitting an IPD meta-analysis model with missing data

In this vignette, we go over how to run an IPD meta-analysis using models from this package when there is missing data. This is a simple example showcasing one way of handling missing data: multiple imputation using the mice package. First, let's generate a dataset with some missingness.

#install.packages("bipd")
#or devtools::install_github("MikeJSeo/bipd")
library(bipd)
set.seed(1)
simulated_dataset <- generate_sysmiss_ipdma_example(Nstudies = 10, Ncov = 5, sys_missing_prob = 0, magnitude = 0.5, heterogeneity = 0.1)

# introduce roughly 20% missingness in x1 and 10% missingness in x3, completely at random
simulated_dataset_missing <- simulated_dataset
randomindex <- sample(c(TRUE,FALSE), dim(simulated_dataset_missing)[1], replace = TRUE, prob = c(0.2, 0.8))
randomindex2 <- sample(c(TRUE,FALSE), dim(simulated_dataset_missing)[1], replace = TRUE, prob = c(0.1, 0.9))
simulated_dataset_missing[randomindex,c("x1")] <- NA
simulated_dataset_missing[randomindex2,c("x3")] <- NA
head(simulated_dataset_missing)
#> # A tibble: 6 × 8
#>       y     x1 x2    x3         x4     x5 study treat
#>   <dbl>  <dbl> <fct> <fct>   <dbl>  <dbl> <int> <int>
#> 1  2.48  0.271 0     0     -0.280  0.819      1     0
#> 2  3.61  0.768 1     1      0.429  0.0837     1     0
#> 3  2.61 -1.31  1     0     -1.23   0.165      1     1
#> 4  1.47 -0.590 1     1      0.435  0.345      1     0
#> 5  4.39  1.20  1     1      0.0561 0.287      1     0
#> 6  1.95 -1.00  0     0     -2.19   1.63       1     1
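
As a quick check of the missingness we have just introduced (an illustrative snippet, not part of the original workflow), we can count the missing values per covariate:

colSums(is.na(simulated_dataset_missing[, c("x1", "x2", "x3", "x4", "x5")]))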

Now we would like to create multiply imputed datasets. You can use the mice package to build your own imputations, or you can use the pre-built imputation tool in this package. For demonstration, we will use the imputation tool in this package with the '2l.pmm' method, which is predictive mean matching that accounts for the multilevel structure via mixed effects modelling. See the miceadds package for details.

library(miceadds) #for multilevel datasets without systematically missing predictors
imputation <- ipdma.impute(simulated_dataset_missing, covariates = c("x1", "x2", "x3", "x4", "x5"), typeofvar = c("continuous", "binary", "binary", "continuous", "continuous"), interaction = TRUE, studyname = "study", treatmentname = "treat", outcomename = "y", m = 5)  
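
Alternatively, if you would rather build the imputations yourself, a minimal sketch using mice together with miceadds might look like the following. The predictor matrix and method set-up below are assumptions about one reasonable configuration, not a reproduction of what ipdma.impute does internally; in particular, we convert the binary factors to numeric 0/1 so that '2l.pmm' can be applied directly.

library(mice)
library(miceadds)

impdata <- simulated_dataset_missing
impdata$x2 <- as.numeric(as.character(impdata$x2))  # 0/1 instead of factors
impdata$x3 <- as.numeric(as.character(impdata$x3))

pred <- make.predictorMatrix(impdata)
pred[, "study"] <- -2  # -2 flags the cluster (study) variable
pred["study", ] <- 0   # study itself is fully observed and never imputed

meth <- make.method(impdata)
meth[c("x1", "x3")] <- "2l.pmm"  # multilevel predictive mean matching from miceadds

manual.imp <- mice(impdata, m = 5, predictorMatrix = pred, method = meth, printFlag = FALSE)
manual.imp.list <- lapply(1:5, function(i) complete(manual.imp, i))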

The ipdma.impute call above gives us 5 imputed datasets. One convenient aspect of using Bayesian methods with missing data is that a proper analysis only requires fitting the IPD-MA model to each imputed dataset and merging the MCMC results. We do not need Rubin's rules to combine the multiply imputed analyses; simply merging the mcmc.list objects is enough.

multiple.imputations <- imputation$imp.list
for(ii in 1:length(multiple.imputations)){
  
  current.data <- multiple.imputations[[ii]]
  
  # convert the covariates of the imputed dataset to a numeric matrix
  X <- apply(current.data[, c("x1", "x2", "x3", "x4", "x5")], 2, as.numeric)
  
  ipd <- with(current.data, ipdma.model.onestage(y = y, study = study, treat = treat, X = X, response = "normal", shrinkage = "none"))
  
  #Run only 100 iterations for demonstration
  samples <- ipd.run(ipd, pars.save = c("beta", "gamma", "delta"), n.chains = 3, n.burnin = 100, n.iter = 100) 

  # merge the posterior samples across the imputed datasets
  if(ii == 1){
    final.result <- samples
  } else{
    final.result <- add.mcmc(final.result, samples)
  }
}
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 2088
#>    Unobserved stochastic nodes: 33
#>    Total graph size: 27192
#> 
#> Initializing model
#> 
#> (model compilation output repeated for each of the remaining imputed datasets)

We have written a convenience function, add.mcmc, which combines mcmc.list objects. Having run the code above, we now have a single mcmc.list containing the results from all of the multiply imputed datasets.
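
For intuition, combining two mcmc.list objects amounts to appending the draws of matching chains. A minimal sketch using only coda (an illustration under that assumption, not the package's internal code) is:

library(coda)

combine.chains <- function(a, b) {
  # append the draws of chain i of 'b' to the end of chain i of 'a'
  as.mcmc.list(lapply(seq_along(a), function(i) {
    mcmc(rbind(as.matrix(a[[i]]), as.matrix(b[[i]])))
  }))
}

We can now use the merged result to summarize the findings.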

summary(final.result)
#> 
#> Iterations = 1:500
#> Thinning interval = 1 
#> Number of chains = 3 
#> Sample size per chain = 500 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>             Mean      SD  Naive SE Time-series SE
#> beta[1]  0.59742 0.03906 0.0010086       0.002500
#> beta[2]  0.26185 0.03508 0.0009056       0.002109
#> beta[3]  0.24652 0.03705 0.0009565       0.002144
#> beta[4]  0.63447 0.03837 0.0009907       0.002159
#> beta[5]  0.52971 0.03582 0.0009249       0.001907
#> delta[1] 0.00000 0.00000 0.0000000       0.000000
#> delta[2] 0.88810 0.19795 0.0051111       0.005296
#> gamma[1] 0.17063 0.05046 0.0013030       0.002778
#> gamma[2] 0.09593 0.04902 0.0012656       0.002496
#> gamma[3] 0.13438 0.05640 0.0014562       0.004444
#> gamma[4] 0.36289 0.05429 0.0014018       0.002982
#> gamma[5] 0.23830 0.05053 0.0013047       0.002546
#> 
#> 2. Quantiles for each variable:
#> 
#>              2.5%     25%     50%    75%  97.5%
#> beta[1]  0.518486 0.57155 0.59889 0.6237 0.6730
#> beta[2]  0.192909 0.23734 0.26224 0.2854 0.3303
#> beta[3]  0.176947 0.22165 0.24572 0.2718 0.3222
#> beta[4]  0.562804 0.60785 0.63500 0.6586 0.7107
#> beta[5]  0.457395 0.50567 0.52999 0.5541 0.5979
#> delta[1] 0.000000 0.00000 0.00000 0.0000 0.0000
#> delta[2] 0.483624 0.76641 0.88669 1.0046 1.2873
#> gamma[1] 0.069976 0.13687 0.17116 0.2060 0.2681
#> gamma[2] 0.003062 0.06223 0.09681 0.1292 0.1881
#> gamma[3] 0.019646 0.09681 0.13621 0.1704 0.2458
#> gamma[4] 0.254041 0.32605 0.36337 0.4002 0.4670
#> gamma[5] 0.141955 0.20302 0.23883 0.2725 0.3359
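
Since the merged final.result is an ordinary mcmc.list, the usual coda tools can also be applied to it, keeping in mind that each chain is now a concatenation of runs from the different imputed datasets. For example (illustrative only):

library(coda)
plot(final.result[, "delta[2]"])          # traceplot and density of the treatment effect
effectiveSize(final.result[, "gamma[1]"]) # effective sample size of one interaction coefficient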

Another important feature of this package is the treatment.effect function, which estimates the individual treatment effect for a new patient. One additional precaution we need to take is deciding what to use for the overall mean and standard deviation (sd) when scaling the covariates: instead of the mean and sd of an imputed dataset, we should calculate the mean and sd of the original dataset. The treatment.effect function lets the user supply these through the scale_mean and scale_sd parameters.

X <- as.matrix(apply(simulated_dataset[, c("x1", "x2", "x3", "x4", "x5")], 2, as.numeric))
# calculate the overall mean and sd from the original dataset
overall_mean <- apply(X, 2, mean, na.rm = TRUE)
overall_sd <- apply(X, 2, sd)

treatment.effect(ipd, samples, newpatient = c(0.5, 1, 1, -0.5, 0.5), scale_mean = overall_mean, scale_sd = overall_sd)
#>     0.025       0.5     0.975 
#> 0.8651982 1.2429041 1.6280947
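
As a rough sanity check, and purely under the assumption that the individualized treatment effect corresponds to delta[2] plus the covariate-treatment interaction terms evaluated at the scaled covariates (the exact internal computation of treatment.effect is not shown here), we could compute it manually from the posterior draws:

draws <- as.matrix(samples)  # or as.matrix(final.result) to use all imputations
z <- (c(0.5, 1, 1, -0.5, 0.5) - overall_mean) / overall_sd  # scale the new patient's covariates
ite <- draws[, "delta[2]"] + draws[, paste0("gamma[", 1:5, "]")] %*% z
quantile(ite, c(0.025, 0.5, 0.975))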