The SuperML R package is designed to unify the model-training process in R, much like scikit-learn does in Python. People often spend a lot of time searching for packages and figuring out the varying syntaxes for training machine learning models in R. This friction is especially apparent in users who frequently switch between R and Python. This package provides a scikit-learn-style interface (fit, predict) to train models faster.
In addition to building machine learning models, it provides handy functionality for feature engineering.
This ambitious package is my ongoing effort to help the R community build ML models easily and quickly.
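Every trainer in superml follows the same two-step pattern. Below is a minimal sketch using LMTrainer; here train_data and test_data are hypothetical data.tables standing in for your own data, and "target" is the name of the outcome column:
library(superml)
# instantiate a trainer, fit it on training data, then predict on new data
lf <- LMTrainer$new(family = "gaussian")
lf$fit(X = train_data, y = "target")
preds <- lf$predict(df = test_data)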
You can install the latest CRAN version using (recommended):
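install.packages("superml")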
You can install the development version directly from GitHub using:
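# assumes the devtools package is installed
devtools::install_github("saraswatmudit/superml")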
For machine learning, superml builds on existing R packages, so installing it does not pull in all of its dependencies. Instead, when you train a model, superml automatically installs the required package if it is not found. Still, if you want to install all dependencies at once, you can simply do:
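install.packages("superml", dependencies = TRUE)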
This package uses existing R packages to build machine learning models. In this tutorial, we'll use the data.table package for all data-manipulation tasks. Let's quickly prepare the data set for model training.
load("../data/reg_train.rda")
# if the above doesn't work, you can try: load("reg_train.rda")
# superml::check_package("caret")
library(data.table)
library(caret)
#> Loading required package: ggplot2
#> Loading required package: lattice
library(superml)
library(Metrics)
#>
#> Attaching package: 'Metrics'
#> The following objects are masked from 'package:caret':
#>
#> precision, recall
head(reg_train)
#> Id MSSubClass MSZoning LotFrontage LotArea Street Alley LotShape
#> <int> <int> <char> <int> <int> <char> <char> <char>
#> 1: 1 60 RL 65 8450 Pave <NA> Reg
#> 2: 2 20 RL 80 9600 Pave <NA> Reg
#> 3: 3 60 RL 68 11250 Pave <NA> IR1
#> 4: 4 70 RL 60 9550 Pave <NA> IR1
#> 5: 5 60 RL 84 14260 Pave <NA> IR1
#> 6: 6 50 RL 85 14115 Pave <NA> IR1
#> LandContour Utilities LotConfig LandSlope Neighborhood Condition1 Condition2
#> <char> <char> <char> <char> <char> <char> <char>
#> 1: Lvl AllPub Inside Gtl CollgCr Norm Norm
#> 2: Lvl AllPub FR2 Gtl Veenker Feedr Norm
#> 3: Lvl AllPub Inside Gtl CollgCr Norm Norm
#> 4: Lvl AllPub Corner Gtl Crawfor Norm Norm
#> 5: Lvl AllPub FR2 Gtl NoRidge Norm Norm
#> 6: Lvl AllPub Inside Gtl Mitchel Norm Norm
#> BldgType HouseStyle OverallQual OverallCond YearBuilt YearRemodAdd RoofStyle
#> <char> <char> <int> <int> <int> <int> <char>
#> 1: 1Fam 2Story 7 5 2003 2003 Gable
#> 2: 1Fam 1Story 6 8 1976 1976 Gable
#> 3: 1Fam 2Story 7 5 2001 2002 Gable
#> 4: 1Fam 2Story 7 5 1915 1970 Gable
#> 5: 1Fam 2Story 8 5 2000 2000 Gable
#> 6: 1Fam 1.5Fin 5 5 1993 1995 Gable
#> RoofMatl Exterior1st Exterior2nd MasVnrType MasVnrArea ExterQual ExterCond
#> <char> <char> <char> <char> <int> <char> <char>
#> 1: CompShg VinylSd VinylSd BrkFace 196 Gd TA
#> 2: CompShg MetalSd MetalSd None 0 TA TA
#> 3: CompShg VinylSd VinylSd BrkFace 162 Gd TA
#> 4: CompShg Wd Sdng Wd Shng None 0 TA TA
#> 5: CompShg VinylSd VinylSd BrkFace 350 Gd TA
#> 6: CompShg VinylSd VinylSd None 0 TA TA
#> Foundation BsmtQual BsmtCond BsmtExposure BsmtFinType1 BsmtFinSF1
#> <char> <char> <char> <char> <char> <int>
#> 1: PConc Gd TA No GLQ 706
#> 2: CBlock Gd TA Gd ALQ 978
#> 3: PConc Gd TA Mn GLQ 486
#> 4: BrkTil TA Gd No ALQ 216
#> 5: PConc Gd TA Av GLQ 655
#> 6: Wood Gd TA No GLQ 732
#> BsmtFinType2 BsmtFinSF2 BsmtUnfSF TotalBsmtSF Heating HeatingQC CentralAir
#> <char> <int> <int> <int> <char> <char> <char>
#> 1: Unf 0 150 856 GasA Ex Y
#> 2: Unf 0 284 1262 GasA Ex Y
#> 3: Unf 0 434 920 GasA Ex Y
#> 4: Unf 0 540 756 GasA Gd Y
#> 5: Unf 0 490 1145 GasA Ex Y
#> 6: Unf 0 64 796 GasA Ex Y
#> Electrical 1stFlrSF 2ndFlrSF LowQualFinSF GrLivArea BsmtFullBath
#> <char> <int> <int> <int> <int> <int>
#> 1: SBrkr 856 854 0 1710 1
#> 2: SBrkr 1262 0 0 1262 0
#> 3: SBrkr 920 866 0 1786 1
#> 4: SBrkr 961 756 0 1717 1
#> 5: SBrkr 1145 1053 0 2198 1
#> 6: SBrkr 796 566 0 1362 1
#> BsmtHalfBath FullBath HalfBath BedroomAbvGr KitchenAbvGr KitchenQual
#> <int> <int> <int> <int> <int> <char>
#> 1: 0 2 1 3 1 Gd
#> 2: 1 2 0 3 1 TA
#> 3: 0 2 1 3 1 Gd
#> 4: 0 1 0 3 1 Gd
#> 5: 0 2 1 4 1 Gd
#> 6: 0 1 1 1 1 TA
#> TotRmsAbvGrd Functional Fireplaces FireplaceQu GarageType GarageYrBlt
#> <int> <char> <int> <char> <char> <int>
#> 1: 8 Typ 0 <NA> Attchd 2003
#> 2: 6 Typ 1 TA Attchd 1976
#> 3: 6 Typ 1 TA Attchd 2001
#> 4: 7 Typ 1 Gd Detchd 1998
#> 5: 9 Typ 1 TA Attchd 2000
#> 6: 5 Typ 0 <NA> Attchd 1993
#> GarageFinish GarageCars GarageArea GarageQual GarageCond PavedDrive
#> <char> <int> <int> <char> <char> <char>
#> 1: RFn 2 548 TA TA Y
#> 2: RFn 2 460 TA TA Y
#> 3: RFn 2 608 TA TA Y
#> 4: Unf 3 642 TA TA Y
#> 5: RFn 3 836 TA TA Y
#> 6: Unf 2 480 TA TA Y
#> WoodDeckSF OpenPorchSF EnclosedPorch 3SsnPorch ScreenPorch PoolArea PoolQC
#> <int> <int> <int> <int> <int> <int> <char>
#> 1: 0 61 0 0 0 0 <NA>
#> 2: 298 0 0 0 0 0 <NA>
#> 3: 0 42 0 0 0 0 <NA>
#> 4: 0 35 272 0 0 0 <NA>
#> 5: 192 84 0 0 0 0 <NA>
#> 6: 40 30 0 320 0 0 <NA>
#> Fence MiscFeature MiscVal MoSold YrSold SaleType SaleCondition SalePrice
#> <char> <char> <int> <int> <int> <char> <char> <int>
#> 1: <NA> <NA> 0 2 2008 WD Normal 208500
#> 2: <NA> <NA> 0 5 2007 WD Normal 181500
#> 3: <NA> <NA> 0 9 2008 WD Normal 223500
#> 4: <NA> <NA> 0 2 2006 WD Abnorml 140000
#> 5: <NA> <NA> 0 12 2008 WD Normal 250000
#> 6: MnPrv Shed 700 10 2009 WD Normal 143000
split <- createDataPartition(y = reg_train$SalePrice, p = 0.7)
xtrain <- reg_train[split$Resample1]
xtest <- reg_train[!split$Resample1]
# remove features with 90% or more missing values
# we will also remove the Id column because it doesn't contain
# any useful information
na_cols <- colSums(is.na(xtrain)) / nrow(xtrain)
na_cols <- names(na_cols[which(na_cols > 0.9)])
xtrain[, c(na_cols, "Id") := NULL]
xtest[, c(na_cols, "Id") := NULL]
# encode categorical variables
cat_cols <- names(xtrain)[sapply(xtrain, is.character)]
for(c in cat_cols){
lbl <- LabelEncoder$new()
lbl$fit(c(xtrain[[c]], xtest[[c]]))
xtrain[[c]] <- lbl$transform(xtrain[[c]])
xtest[[c]] <- lbl$transform(xtest[[c]])
}
#> The data contains NA values. Imputing NA with 'NA'
#> (this message repeats once for each encoded column)
# remove noisy columns
noise <- c('GrLivArea','TotalBsmtSF')
xtrain[, c(noise) := NULL]
xtest[, c(noise) := NULL]
# fill remaining missing values with -1
xtrain[is.na(xtrain)] <- -1
xtest[is.na(xtest)] <- -1
KNN Regression
knn <- KNNTrainer$new(k = 2, prob = TRUE, type = 'reg')
knn$fit(train = xtrain, test = xtest, y = 'SalePrice')
probs <- knn$predict(type = 'prob')
labels <- knn$predict(type='raw')
rmse(actual = xtest$SalePrice, predicted=labels)
#> [1] 48967.55
SVM Regression
svm <- SVMTrainer$new()
svm$fit(xtrain, 'SalePrice')
pred <- svm$predict(xtest)
rmse(actual = xtest$SalePrice, predicted = pred)
Simple Regression
lf <- LMTrainer$new(family="gaussian")
lf$fit(X = xtrain, y = "SalePrice")
summary(lf$model)
#>
#> Call:
#> stats::glm(formula = f, family = self$family, data = X, weights = self$weights)
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) -7.807e+04 1.266e+06 -0.062 0.950856
#> MSSubClass -5.959e+01 3.516e+01 -1.695 0.090401 .
#> MSZoning -3.827e+02 1.178e+03 -0.325 0.745336
#> LotFrontage 6.702e+01 2.743e+01 2.443 0.014757 *
#> LotArea 4.561e-01 9.480e-02 4.812 1.74e-06 ***
#> Street -3.292e+04 1.419e+04 -2.319 0.020584 *
#> LotShape 3.826e+03 1.644e+03 2.328 0.020143 *
#> LandContour 4.885e+02 1.777e+03 0.275 0.783474
#> Utilities -3.164e+04 2.708e+04 -1.168 0.242949
#> LotConfig 1.750e+03 1.041e+03 1.680 0.093210 .
#> LandSlope -1.996e+03 4.122e+03 -0.484 0.628325
#> Neighborhood 1.273e+02 1.531e+02 0.831 0.405930
#> Condition1 -2.044e+03 5.997e+02 -3.409 0.000680 ***
#> Condition2 -9.774e+02 2.382e+03 -0.410 0.681664
#> BldgType -3.174e+03 1.537e+03 -2.066 0.039137 *
#> HouseStyle 8.136e+02 8.089e+02 1.006 0.314776
#> OverallQual 1.034e+04 1.128e+03 9.164 < 2e-16 ***
#> OverallCond 7.358e+03 9.887e+02 7.442 2.22e-13 ***
#> YearBuilt 5.833e+02 6.428e+01 9.074 < 2e-16 ***
#> YearRemodAdd 1.182e+02 6.607e+01 1.788 0.074027 .
#> RoofStyle -5.904e+02 1.699e+03 -0.347 0.728334
#> RoofMatl -2.048e+03 3.636e+03 -0.563 0.573503
#> Exterior1st -5.298e+02 4.533e+02 -1.169 0.242791
#> Exterior2nd 1.378e+03 4.755e+02 2.899 0.003826 **
#> MasVnrType 4.497e+03 1.311e+03 3.429 0.000632 ***
#> MasVnrArea 2.681e+01 5.574e+00 4.811 1.75e-06 ***
#> ExterQual 4.297e+03 1.978e+03 2.173 0.030057 *
#> ExterCond 1.744e+02 2.037e+03 0.086 0.931779
#> Foundation -2.638e+03 8.453e+02 -3.121 0.001859 **
#> BsmtQual 8.217e+03 1.192e+03 6.894 9.91e-12 ***
#> BsmtCond -2.068e+03 1.577e+03 -1.312 0.189995
#> BsmtExposure 5.380e+03 8.999e+02 5.979 3.18e-09 ***
#> BsmtFinType1 -2.236e+01 6.964e+02 -0.032 0.974389
#> BsmtFinSF1 4.856e+01 5.195e+00 9.346 < 2e-16 ***
#> BsmtFinType2 -2.174e+02 8.296e+02 -0.262 0.793365
#> BsmtFinSF2 3.374e+01 7.659e+00 4.405 1.18e-05 ***
#> BsmtUnfSF 2.904e+01 4.879e+00 5.953 3.71e-09 ***
#> Heating -7.919e+02 2.799e+03 -0.283 0.777260
#> HeatingQC -2.643e+03 1.183e+03 -2.234 0.025700 *
#> CentralAir 2.956e+03 4.434e+03 0.667 0.505104
#> Electrical 1.311e+03 1.204e+03 1.089 0.276338
#> `1stFlrSF` 6.319e+01 6.104e+00 10.353 < 2e-16 ***
#> `2ndFlrSF` 7.870e+01 5.121e+00 15.369 < 2e-16 ***
#> LowQualFinSF 2.436e+01 1.743e+01 1.398 0.162396
#> BsmtFullBath 1.232e+03 2.432e+03 0.507 0.612578
#> BsmtHalfBath -2.835e+03 3.509e+03 -0.808 0.419424
#> FullBath -2.703e+02 2.592e+03 -0.104 0.916962
#> HalfBath -1.052e+03 2.465e+03 -0.427 0.669739
#> BedroomAbvGr -8.939e+03 1.588e+03 -5.628 2.40e-08 ***
#> KitchenAbvGr -9.061e+03 5.527e+03 -1.639 0.101485
#> KitchenQual 8.607e+03 1.462e+03 5.887 5.44e-09 ***
#> TotRmsAbvGrd -1.993e+02 1.151e+03 -0.173 0.862596
#> Functional -6.793e+03 1.302e+03 -5.217 2.24e-07 ***
#> Fireplaces 2.182e+02 2.072e+03 0.105 0.916145
#> FireplaceQu 4.298e+02 1.118e+03 0.384 0.700785
#> GarageType 8.104e+02 1.018e+03 0.796 0.426077
#> GarageYrBlt 3.708e+00 4.152e+00 0.893 0.372077
#> GarageFinish 7.256e+02 1.178e+03 0.616 0.538221
#> GarageCars 3.666e+03 2.688e+03 1.364 0.172924
#> GarageArea 1.864e+01 8.762e+00 2.127 0.033638 *
#> GarageQual 6.225e+03 2.767e+03 2.249 0.024717 *
#> GarageCond -3.197e+03 2.575e+03 -1.241 0.214745
#> PavedDrive 1.247e+03 2.597e+03 0.480 0.631080
#> WoodDeckSF 1.891e+01 7.303e+00 2.589 0.009761 **
#> OpenPorchSF 1.191e+01 1.330e+01 0.895 0.370826
#> EnclosedPorch -1.631e+01 1.517e+01 -1.075 0.282584
#> `3SsnPorch` 1.219e+01 2.525e+01 0.483 0.629326
#> ScreenPorch 3.267e+01 1.561e+01 2.093 0.036614 *
#> PoolArea 8.570e+01 2.138e+01 4.008 6.61e-05 ***
#> Fence -1.385e+03 1.106e+03 -1.251 0.211079
#> MiscVal -4.125e+00 3.202e+00 -1.288 0.197989
#> MoSold 5.000e+01 3.066e+02 0.163 0.870502
#> YrSold -6.920e+02 6.294e+02 -1.099 0.271854
#> SaleType 2.753e+03 1.022e+03 2.694 0.007184 **
#> SaleCondition 5.041e+02 1.054e+03 0.478 0.632613
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for gaussian family taken to be 632718500)
#>
#> Null deviance: 6.4089e+12 on 1023 degrees of freedom
#> Residual deviance: 6.0045e+11 on 949 degrees of freedom
#> AIC: 23732
#>
#> Number of Fisher Scoring iterations: 2
predictions <- lf$predict(df = xtest)
rmse(actual = xtest$SalePrice, predicted = predictions)
#> [1] 51868.63
Lasso Regression
lf <- LMTrainer$new(family = "gaussian", alpha = 1, lambda = 1000)
lf$fit(X = xtrain, y = "SalePrice")
predictions <- lf$predict(df = xtest)
rmse(actual = xtest$SalePrice, predicted = predictions)
#> [1] 56520.35
Ridge Regression
lf <- LMTrainer$new(family = "gaussian", alpha=0)
lf$fit(X = xtrain, y = "SalePrice")
predictions <- lf$predict(df = xtest)
rmse(actual = xtest$SalePrice, predicted = predictions)
#> [1] 57518.69
Linear Regression with CV
lf <- LMTrainer$new(family = "gaussian")
lf$cv_model(X = xtrain, y = 'SalePrice', nfolds = 5, parallel = FALSE)
predictions <- lf$cv_predict(df = xtest)
coefs <- lf$get_importance()
rmse(actual = xtest$SalePrice, predicted = predictions)
Random Forest
rf <- RFTrainer$new(n_estimators = 500, classification = 0)
rf$fit(X = xtrain, y = "SalePrice")
pred <- rf$predict(df = xtest)
rf$get_importance()
#> tmp.order.tmp..decreasing...TRUE..
#> OverallQual 793015495967
#> GarageCars 491053629502
#> GarageArea 471977251081
#> 1stFlrSF 431975335879
#> YearBuilt 365102232744
#> FullBath 321576310239
#> GarageYrBlt 281988385376
#> BsmtFinSF1 277938567260
#> 2ndFlrSF 273147047174
#> LotArea 200308997298
#> TotRmsAbvGrd 192363407558
#> ExterQual 174016617111
#> YearRemodAdd 167743983181
#> MasVnrArea 138617312730
#> BsmtQual 131850404726
#> KitchenQual 130130323372
#> FireplaceQu 124093293667
#> Fireplaces 117803440076
#> Foundation 97997079311
#> LotFrontage 88940551957
#> WoodDeckSF 71785526450
#> OpenPorchSF 68782467905
#> BsmtFinType1 62069067323
#> BsmtUnfSF 57144737458
#> HeatingQC 50418619845
#> Neighborhood 43510899871
#> BedroomAbvGr 42297732537
#> GarageType 38277415505
#> MSSubClass 37819824924
#> Exterior2nd 37246569060
#> MoSold 35108696897
#> OverallCond 33391499038
#> HouseStyle 32375620690
#> BsmtExposure 31010780899
#> HalfBath 30825510636
#> Exterior1st 26736882900
#> LotShape 26405395829
#> GarageFinish 26292367654
#> RoofStyle 24573031855
#> BsmtFullBath 22016405322
#> YrSold 20256527477
#> SaleCondition 19004795557
#> LotConfig 18690300224
#> MSZoning 17458079627
#> LandContour 15839052243
#> GarageQual 15671305650
#> SaleType 15212254807
#> MasVnrType 14518801012
#> RoofMatl 14506065440
#> PoolArea 13711035480
#> ScreenPorch 13654161333
#> LandSlope 12174946516
#> CentralAir 11813169411
#> BldgType 11795914313
#> GarageCond 11430149494
#> Fence 10906982172
#> EnclosedPorch 8651677629
#> BsmtCond 8447145322
#> BsmtFinSF2 7403460394
#> Functional 6799375124
#> ExterCond 6687063054
#> PavedDrive 6305774227
#> BsmtHalfBath 6014776973
#> Condition1 4762948045
#> BsmtFinType2 4519392551
#> KitchenAbvGr 4170529848
#> LowQualFinSF 3446377058
#> Electrical 3365048012
#> Heating 3258786567
#> 3SsnPorch 2385035072
#> MiscVal 1264354016
#> Street 747755494
#> Condition2 431334701
#> Utilities 17163981
rmse(actual = xtest$SalePrice, predicted = pred)
#> [1] 36092.58
Xgboost
xgb <- XGBTrainer$new(objective = "reg:linear",
                      n_estimators = 500,
                      eval_metric = "rmse",
                      maximize = FALSE,
                      learning_rate = 0.1,
                      max_depth = 6)
xgb$fit(X = xtrain, y = "SalePrice", valid = xtest)
pred <- xgb$predict(xtest)
rmse(actual = xtest$SalePrice, predicted = pred)
Grid Search
Note that accuracy and auc are classification metrics; they are passed here only to demonstrate the tuning API, so the scores reported below are not meaningful for this regression target.
xgb <- XGBTrainer$new(objective = "reg:linear")
gst <- GridSearchCV$new(trainer = xgb,
parameters = list(n_estimators = c(10,50), max_depth = c(5,2)),
n_folds = 3,
scoring = c('accuracy','auc'))
gst$fit(xtrain, "SalePrice")
gst$best_iteration()
Random Search
rf <- RFTrainer$new()
rst <- RandomSearchCV$new(trainer = rf,
parameters = list(n_estimators = c(5,10),
max_depth = c(5,2)),
n_folds = 3,
scoring = c('accuracy','auc'),
n_iter = 3)
rst$fit(xtrain, "SalePrice")
#> [1] "In total, 3 models will be trained"
rst$best_iteration()
#> $n_estimators
#> [1] 5
#>
#> $max_depth
#> [1] 5
#>
#> $accuracy_avg
#> [1] 0.01660079
#>
#> $accuracy_sd
#> [1] 0.006104784
#>
#> $auc_avg
#> [1] NaN
#>
#> $auc_sd
#> [1] NA
Here, we will solve a simple binary classification problem: predicting which passengers survived the Titanic shipwreck. The idea is to demonstrate how to use this package to solve classification problems.
Data Preparation
# load the classification data set
load('../data/cla_train.rda')
# if the above doesn't work, you can try: load("cla_train.rda")
head(cla_train)
#> PassengerId Survived Pclass
#> <int> <int> <int>
#> 1: 1 0 3
#> 2: 2 1 1
#> 3: 3 1 3
#> 4: 4 1 1
#> 5: 5 0 3
#> 6: 6 0 3
#> Name Sex Age SibSp Parch
#> <char> <char> <num> <int> <int>
#> 1: Braund, Mr. Owen Harris male 22 1 0
#> 2: Cumings, Mrs. John Bradley (Florence Briggs Thayer) female 38 1 0
#> 3: Heikkinen, Miss. Laina female 26 0 0
#> 4: Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35 1 0
#> 5: Allen, Mr. William Henry male 35 0 0
#> 6: Moran, Mr. James male NA 0 0
#> Ticket Fare Cabin Embarked
#> <char> <num> <char> <char>
#> 1: A/5 21171 7.2500 S
#> 2: PC 17599 71.2833 C85 C
#> 3: STON/O2. 3101282 7.9250 S
#> 4: 113803 53.1000 C123 S
#> 5: 373450 8.0500 S
#> 6: 330877 8.4583 Q
# split the data
split <- createDataPartition(y = cla_train$Survived, p = 0.7)
xtrain <- cla_train[split$Resample1]
xtest <- cla_train[!split$Resample1]
# encode categorical variables - shorter way
for(c in c('Embarked','Sex','Cabin')) {
lbl <- LabelEncoder$new()
lbl$fit(c(xtrain[[c]], xtest[[c]]))
xtrain[[c]] <- lbl$transform(xtrain[[c]])
xtest[[c]] <- lbl$transform(xtest[[c]])
}
#> The data contains blank values. Imputing them with 'NA'
#> (this message repeats once for each encoded column)
# impute missing Age values with the median
xtrain[, Age := replace(Age, is.na(Age), median(Age, na.rm = TRUE))]
xtest[, Age := replace(Age, is.na(Age), median(Age, na.rm = TRUE))]
# drop identifier-like features
to_drop <- c('PassengerId','Ticket','Name')
xtrain <- xtrain[, -c(to_drop), with = FALSE]
xtest <- xtest[, -c(to_drop), with = FALSE]
Now our data is ready for model training. Let's get started.
KNN Classification
knn <- KNNTrainer$new(k = 2, prob = TRUE, type = 'class')
knn$fit(train = xtrain, test = xtest, y = 'Survived')
probs <- knn$predict(type = 'prob')
labels <- knn$predict(type = 'raw')
auc(actual = xtest$Survived, predicted = labels)
#> [1] 0.6385027
Naive Bayes Classification
nb <- NBTrainer$new()
nb$fit(xtrain, 'Survived')
pred <- nb$predict(xtest)
#> Warning: predict.naive_bayes(): more features in the newdata are provided as
#> there are probability tables in the object. Calculation is performed based on
#> features to be found in the tables.
auc(actual = xtest$Survived, predicted = pred)
#> [1] 0.7771836
SVM Classification
#predicts labels
svm <- SVMTrainer$new()
svm$fit(xtrain, 'Survived')
pred <- svm$predict(xtest)
auc(actual = xtest$Survived, predicted=pred)
Logistic Regression
lf <- LMTrainer$new(family = "binomial")
lf$fit(X = xtrain, y = "Survived")
summary(lf$model)
#>
#> Call:
#> stats::glm(formula = f, family = self$family, data = X, weights = self$weights)
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept) 1.830070 0.616894 2.967 0.00301 **
#> Pclass -0.980785 0.192493 -5.095 3.48e-07 ***
#> Sex 2.508241 0.230374 10.888 < 2e-16 ***
#> Age -0.041034 0.009309 -4.408 1.04e-05 ***
#> SibSp -0.235520 0.117715 -2.001 0.04542 *
#> Parch -0.098742 0.137791 -0.717 0.47361
#> Fare 0.001281 0.002842 0.451 0.65230
#> Cabin 0.008408 0.004786 1.757 0.07899 .
#> Embarked 0.248088 0.166616 1.489 0.13649
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for binomial family taken to be 1)
#>
#> Null deviance: 831.52 on 623 degrees of freedom
#> Residual deviance: 564.76 on 615 degrees of freedom
#> AIC: 582.76
#>
#> Number of Fisher Scoring iterations: 5
predictions <- lf$predict(df = xtest)
auc(actual = xtest$Survived, predicted = predictions)
#> [1] 0.8832145
Lasso Logistic Regression
lf <- LMTrainer$new(family="binomial", alpha=1)
lf$cv_model(X = xtrain, y = "Survived", nfolds = 5, parallel = FALSE)
pred <- lf$cv_predict(df = xtest)
auc(actual = xtest$Survived, predicted = pred)
Ridge Logistic Regression
lf <- LMTrainer$new(family="binomial", alpha=0)
lf$cv_model(X = xtrain, y = "Survived", nfolds = 5, parallel = FALSE)
pred <- lf$cv_predict(df = xtest)
auc(actual = xtest$Survived, predicted = pred)
Random Forest
rf <- RFTrainer$new(n_estimators = 500, classification = 1, max_features = 3)
rf$fit(X = xtrain, y = "Survived")
pred <- rf$predict(df = xtest)
rf$get_importance()
#> tmp.order.tmp..decreasing...TRUE..
#> Sex 69.10742
#> Fare 57.96084
#> Age 48.50156
#> Pclass 23.91175
#> Cabin 21.19329
#> SibSp 12.58503
#> Parch 10.55128
#> Embarked 10.07059
auc(actual = xtest$Survived, predicted = pred)
#> [1] 0.7988414
Xgboost
xgb <- XGBTrainer$new(objective = "binary:logistic",
                      n_estimators = 500,
                      eval_metric = "auc",
                      maximize = TRUE,
                      learning_rate = 0.1,
                      max_depth = 6)
xgb$fit(X = xtrain, y = "Survived", valid = xtest)
pred <- xgb$predict(xtest)
auc(actual = xtest$Survived, predicted = pred)
Grid Search
xgb <- XGBTrainer$new(objective="binary:logistic")
gst <- GridSearchCV$new(trainer = xgb,
parameters = list(n_estimators = c(10,50),
max_depth = c(5,2)),
n_folds = 3,
scoring = c('accuracy','auc'))
gst$fit(xtrain, "Survived")
gst$best_iteration()
Random Search
rf <- RFTrainer$new()
rst <- RandomSearchCV$new(trainer = rf,
parameters = list(n_estimators = c(10,50), max_depth = c(5,2)),
n_folds = 3,
scoring = c('accuracy','auc'),
n_iter = 3)
rst$fit(xtrain, "Survived")
#> [1] "In total, 3 models will be trained"
rst$best_iteration()
#> $n_estimators
#> [1] 50
#>
#> $max_depth
#> [1] 5
#>
#> $accuracy_avg
#> [1] 0.8028846
#>
#> $accuracy_sd
#> [1] 0.01733438
#>
#> $auc_avg
#> [1] 0.7804264
#>
#> $auc_sd
#> [1] 0.02631447
Let's create a new feature from the target variable using target (smoothed mean) encoding, and test a model with it.
# add a target-encoded feature (computed once, applied to both splits)
enc <- smoothMean(train_df = xtrain,
                  test_df = xtest,
                  colname = "Embarked",
                  target = "Survived")
xtrain[, feat_01 := enc$train[[2]]]
xtest[, feat_01 := enc$test[[2]]]
# train a random forest on the augmented data
rf <- RFTrainer$new(n_estimators = 500, classification = 1, max_features = 4)
rf$fit(X = xtrain, y = "Survived")
pred <- rf$predict(df = xtest)
rf$get_importance()
#> tmp.order.tmp..decreasing...TRUE..
#> Sex 71.417138
#> Fare 61.039958
#> Age 51.787990
#> Pclass 24.257112
#> Cabin 21.549374
#> SibSp 12.374317
#> Parch 10.392826
#> feat_01 6.490151
#> Embarked 6.270997
auc(actual = xtest$Survived, predicted = pred)
#> [1] 0.7988414