mixgb offers a scalable solution for imputing large datasets using XGBoost, subsampling and predictive mean matching. Our method leverages XGBoost, a highly efficient implementation of gradient boosted trees, to capture interactions and non-linear relationships automatically. Moreover, we integrate subsampling and predictive mean matching to reduce bias and to reflect appropriate imputation variability. Our package supports various types of variables and offers flexible settings for subsampling and predictive mean matching. We also include diagnostic tools for evaluating the quality of the imputed values.
mixgb

We first load the mixgb package and the nhanes3_newborn dataset, which contains 16 variables of various types (integer/numeric/factor/ordinal factor). There are 9 variables with missing values.
library(mixgb)
str(nhanes3_newborn)
#> tibble [2,107 × 16] (S3: tbl_df/tbl/data.frame)
#> $ HSHSIZER: int [1:2107] 4 3 5 4 4 3 5 3 3 3 ...
#> $ HSAGEIR : int [1:2107] 2 5 10 10 8 3 10 7 2 7 ...
#> $ HSSEX : Factor w/ 2 levels "1","2": 2 1 2 2 1 1 2 2 2 1 ...
#> $ DMARACER: Factor w/ 3 levels "1","2","3": 1 1 2 1 1 1 2 1 2 2 ...
#> $ DMAETHNR: Factor w/ 3 levels "1","2","3": 3 1 3 3 3 3 3 3 3 3 ...
#> $ DMARETHN: Factor w/ 4 levels "1","2","3","4": 1 3 2 1 1 1 2 1 2 2 ...
#> $ BMPHEAD : num [1:2107] 39.3 45.4 43.9 45.8 44.9 42.2 45.8 NA 40.2 44.5 ...
#> ..- attr(*, "label")= chr "Head circumference (cm)"
#> $ BMPRECUM: num [1:2107] 59.5 69.2 69.8 73.8 69 61.7 74.8 NA 64.5 70.2 ...
#> ..- attr(*, "label")= chr "Recumbent length (cm)"
#> $ BMPSB1 : num [1:2107] 8.2 13 6 8 8.2 9.4 5.2 NA 7 5.9 ...
#> ..- attr(*, "label")= chr "First subscapular skinfold (mm)"
#> $ BMPSB2 : num [1:2107] 8 13 5.6 10 7.8 8.4 5.2 NA 7 5.4 ...
#> ..- attr(*, "label")= chr "Second subscapular skinfold (mm)"
#> $ BMPTR1 : num [1:2107] 9 15.6 7 16.4 9.8 9.6 5.8 NA 11 6.8 ...
#> ..- attr(*, "label")= chr "First triceps skinfold (mm)"
#> $ BMPTR2 : num [1:2107] 9.4 14 8.2 12 8.8 8.2 6.6 NA 10.9 7.6 ...
#> ..- attr(*, "label")= chr "Second triceps skinfold (mm)"
#> $ BMPWT : num [1:2107] 6.35 9.45 7.15 10.7 9.35 7.15 8.35 NA 7.35 8.65 ...
#> ..- attr(*, "label")= chr "Weight (kg)"
#> $ DMPPIR : num [1:2107] 3.186 1.269 0.416 2.063 1.464 ...
#> ..- attr(*, "label")= chr "Poverty income ratio"
#> $ HFF1 : Factor w/ 2 levels "1","2": 2 2 1 1 1 2 2 1 2 1 ...
#> $ HYD1 : Ord.factor w/ 5 levels "1"<"2"<"3"<"4"<..: 1 3 1 1 1 1 1 1 2 1 ...
colSums(is.na(nhanes3_newborn))
#> HSHSIZER HSAGEIR HSSEX DMARACER DMAETHNR DMARETHN BMPHEAD BMPRECUM
#> 0 0 0 0 0 0 124 114
#> BMPSB1 BMPSB2 BMPTR1 BMPTR2 BMPWT DMPPIR HFF1 HYD1
#> 161 169 124 167 117 192 7 0
To impute this dataset, we can use the default settings. The default number of imputed datasets is m = 5. Note that we do not need to convert the data into a dgCMatrix or one-hot coded format; the package will do this conversion automatically. Variables should be of one of the following types: numeric, integer, factor or ordinal factor.
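As a minimal sketch (assuming the mixgb package is installed and loaded), imputation with the default settings is a single call; the result is a list of m completed datasets:

```r
library(mixgb)

# Impute with default settings (m = 5 imputed datasets)
imputed.data <- mixgb(data = nhanes3_newborn)

# The result is a list of length 5; each element is one completed dataset
length(imputed.data)
head(imputed.data[[1]])
```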
We can also customize imputation settings:

- The number of imputed datasets: m
- The number of imputation iterations: maxit
- XGBoost hyperparameters and verbosity settings: xgb.params, nrounds, early_stopping_rounds, print_every_n and verbose
- Subsampling ratio: by default, subsample = 0.7. Users can change this value within the xgb.params argument
- Predictive mean matching settings: pmm.type, pmm.k and pmm.link
- Whether ordinal factors should be converted to integers (which may speed up the imputation process): ordinalAsInteger
- Whether or not to use bootstrapping: bootstrap
- Initial imputation methods for different types of variables: initial.num, initial.int and initial.fac
- Whether to save models for imputing new data: save.models and save.vars
# Use mixgb with chosen settings
params <- list(
max_depth = 5,
subsample = 0.9,
nthread = 2,
tree_method = "hist"
)
imputed.data <- mixgb(
data = nhanes3_newborn, m = 10, maxit = 2,
ordinalAsInteger = FALSE, bootstrap = FALSE,
pmm.type = "auto", pmm.k = 5, pmm.link = "prob",
initial.num = "normal", initial.int = "mode", initial.fac = "mode",
save.models = FALSE, save.vars = NULL,
  xgb.params = params, nrounds = 200, early_stopping_rounds = 10,
  print_every_n = 10L, verbose = 0
)
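As a quick sanity check (a sketch, not package output), each element of the returned list is one completed dataset, so the first imputation should contain no remaining missing values:

```r
# Inspect the first of the m = 10 imputed datasets
first.imp <- imputed.data[[1]]

# After imputation, every variable should have zero NAs
colSums(is.na(first.imp))
```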
Imputation performance can be affected by the hyperparameter settings. Although tuning a large set of hyperparameters may appear intimidating, it is often possible to narrow down the search space because many hyperparameters are correlated. In our package, the function mixgb_cv() can be used to tune the number of boosting rounds, nrounds. There is no default nrounds value in XGBoost, so users are required to specify it themselves. The default nrounds in mixgb() is 100. However, we recommend using mixgb_cv() to find the optimal nrounds first.
params <- list(max_depth = 3, subsample = 0.7, nthread = 2)
cv.results <- mixgb_cv(data = nhanes3_newborn, nrounds = 100, xgb.params = params, verbose = FALSE)
cv.results$evaluation.log
#>     iter train_logloss_mean train_logloss_std test_logloss_mean test_logloss_std
#>  1:    1          0.6503470       0.002514878         0.6553148      0.002644914
#>  2:    2          0.6270844       0.002595517         0.6350355      0.008048746
#>  3:    3          0.6108011       0.003147366         0.6269023      0.007315882
#>  4:    4          0.5995431       0.001987800         0.6234168      0.009689579
#>  5:    5          0.5905565       0.002298646         0.6208053      0.009719378
#>  6:    6          0.5833184       0.002381468         0.6170933      0.008451104
#>  7:    7          0.5764182       0.001943841         0.6160033      0.006565145
#>  8:    8          0.5706461       0.001333846         0.6179017      0.008928678
#>  9:    9          0.5662591       0.002294238         0.6181951      0.010365335
#> 10:   10          0.5615253       0.003000531         0.6178397      0.011280842
#> 11:   11          0.5571904       0.003492340         0.6144473      0.011080556
#> 12:   12          0.5531999       0.003844692         0.6143277      0.010743490
#> 13:   13          0.5491742       0.003494488         0.6152173      0.011460739
#> 14:   14          0.5450680       0.003203270         0.6168030      0.009625544
#> 15:   15          0.5408189       0.003511933         0.6163993      0.010250481
#> 16:   16          0.5360055       0.002969772         0.6169275      0.009831367
#> 17:   17          0.5321713       0.002717521         0.6188476      0.009321676
#> 18:   18          0.5273558       0.002844037         0.6182719      0.007321067
#> 19:   19          0.5239154       0.002604099         0.6191260      0.006559523
#> 20:   20          0.5202982       0.002391477         0.6190032      0.005919038
#> 21:   21          0.5166039       0.002468970         0.6196945      0.006837630
#> 22:   22          0.5125846       0.002623426         0.6222280      0.006413069
cv.results$response
#> [1] "HFF1"
cv.results$best.nrounds
#> [1] 12
By default, mixgb_cv() will randomly choose an incomplete variable as the response and build an XGBoost model with the other variables as explanatory variables, using the complete cases of the dataset. Therefore, each run of mixgb_cv() will likely return different results. Users can also specify the response and the covariates via the arguments response and select_features, respectively.
cv.results <- mixgb_cv(
data = nhanes3_newborn, nfold = 10, nrounds = 100, early_stopping_rounds = 1,
response = "BMPHEAD", select_features = c("HSAGEIR", "HSSEX", "DMARETHN", "BMPRECUM", "BMPSB1", "BMPSB2", "BMPTR1", "BMPTR2", "BMPWT"), xgb.params = params, verbose = FALSE
)
cv.results$best.nrounds
#> [1] 18
Let us try setting nrounds = cv.results$best.nrounds in mixgb() to obtain 5 imputed datasets.
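A sketch of that call, with nrounds taken from the cross-validation result above:

```r
# Impute using the cross-validated number of boosting rounds
imputed.data <- mixgb(
  data = nhanes3_newborn, m = 5,
  nrounds = cv.results$best.nrounds
)
```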
The mixgb package used to provide a few visual diagnostics functions. However, we have moved these functions to the vismi package, which provides a wide range of visualisation tools for multiple imputation. For more details, please check the vismi package on GitHub: Visualisation Tools for Multiple Imputation.