StepReg: Stepwise Regression Analysis

Introduction

Model selection is the process of choosing the most relevant features from a set of candidate variables. This procedure is crucial because it ensures that the final model is both accurate and interpretable while being computationally efficient and avoiding overfitting. Stepwise regression algorithms iteratively add or remove features from the model based on certain criteria (e.g., significance level or P-value, information criteria like AIC or BIC, etc.). The process continues until no further improvements can be made according to the chosen criterion. At the end of the stepwise procedure, you’ll have a final model that includes the selected features and their coefficients.

StepReg simplifies model selection tasks by providing a unified programming interface. It currently supports model building for five distinct types of response variables (section @ref(regressioncategories)), four model selection strategies (section @ref(modelselectionstrategies)) including the best subsets algorithm, and a variety of selection metrics (section @ref(selectionmetrics)). Moreover, StepReg detects and addresses multicollinearity issues if they exist (section @ref(multicollinearity)). The output of StepReg includes multiple tables summarizing the final model and the variable selection procedures. Additionally, StepReg offers a plot function to visualize the selection steps (section @ref(stepregoutput)). For demonstration, the vignettes include four use cases covering distinct regression scenarios (section @ref(usecases)). Non-programmers can access the tool through the interactive Shiny app described in section @ref(shinyapp).

Quick demo

The following example selects an optimal linear regression model with the mtcars dataset.

library(StepReg)
data(mtcars)
formula <- mpg ~ .
res <- stepwise(formula = formula,
                data = mtcars,
                type = "linear",
                include = c("qsec"),
                strategy = "bidirection",
                metric = c("AIC"))

Breakdown of the parameters:

  • formula: specifies the dependent and independent variables
  • type: specifies the regression category, depending on your data, choose from “linear”, “logit”, “cox”, etc.
  • include: specifies the variables that must be in the final model
  • strategy: specifies the stepwise strategy, choose from “forward”, “backward”, “bidirection”, “subset”
  • metric: specifies the model fit evaluation metric, choose one or more from “AIC”, “AICc”, “BIC”, “SL”, etc.

The output consists of the final model, which can be viewed using:

res
$bidirection
$bidirection$AIC

Call:
lm(formula = mpg ~ 1 + qsec + wt + am, data = data, weights = NULL)

Coefficients:
(Intercept)         qsec           wt           am  
      9.618        1.226       -3.917        2.936  

You can further explore the results with generic functions such as summary(), coef(), and others. For example:

summary(res$bidirection$AIC)

Call:
lm(formula = mpg ~ 1 + qsec + wt + am, data = data, weights = NULL)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.4811 -1.5555 -0.7257  1.4110  4.6610 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   9.6178     6.9596   1.382 0.177915    
qsec          1.2259     0.2887   4.247 0.000216 ***
wt           -3.9165     0.7112  -5.507 6.95e-06 ***
am            2.9358     1.4109   2.081 0.046716 *  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 2.459 on 28 degrees of freedom
Multiple R-squared:  0.8497,    Adjusted R-squared:  0.8336 
F-statistic: 52.75 on 3 and 28 DF,  p-value: 1.21e-11

You can also visualize the variable selection procedures with:

plot(res, strategy = "bidirection", process = "overview")

plot(res, strategy = "bidirection", process = "details")

The (+)1 refers to the initial model, where the intercept is added first; (+) indicates a variable being added to the model, while (-) indicates a variable being removed from the model.

Additionally, you can generate reports in various formats with:

report(res, report_name = "path_to/demo_res", format = "html")

Replace "path_to/demo_res" with the desired output file name; the suffix ".html" will be added automatically. For detailed examples and more usage, refer to sections @ref(stepregoutput) and @ref(usecases).

Key features

Regression categories

StepReg supports multiple types of regression, including linear, logit, cox, poisson, and gamma regression. These methods primarily vary by the type of response variable, as summarized in the table below. Additional regression techniques can be incorporated upon user request.

Common regression categories

Regression   Response
linear       continuous
logit        binary
cox          time-to-event
poisson      count
gamma        continuous and positively skewed

Model selection strategies

Model selection aims to identify the subset of independent variables that provides the best predictive performance for the response variable. Both stepwise regression and best subsets approaches are implemented in StepReg. For stepwise regression, there are three main methods: Forward Selection, Backward Elimination, and Bidirectional Elimination.

Model selection strategies

  • Forward Selection: The algorithm starts with an empty model (no predictors) and adds variables one at a time. Each step tests the addition of every candidate predictor by calculating a pre-selected metric, then adds the variable (if any) whose inclusion gives the most statistically significant improvement in fit. This process repeats until adding predictors no longer yields a statistically better fit.
  • Backward Elimination: The algorithm starts with the full model (all predictors) and deletes variables one at a time. Each step tests the deletion of every remaining predictor by calculating a pre-selected metric, then deletes the variable (if any) whose removal gives the most statistically significant improvement in fit. This process repeats until removing predictors no longer yields a statistically better fit.
  • Bidirectional Elimination: Essentially a forward selection procedure combined with backward elimination at each iteration. Each iteration starts with a forward selection step that adds predictors, followed by a round of backward elimination that removes predictors. This process repeats until no more predictors are added or excluded.
  • Best Subsets: Stepwise algorithms add or delete one predictor at a time and output a single model without evaluating all candidates, which makes them relatively simple procedures that produce one model. In contrast, the Best Subsets algorithm evaluates all possible models and outputs the best-fitting models with one predictor, two predictors, and so on, for users to choose from.

Given the computational constraints, when dealing with datasets in which the number of predictor variables is large relative to, or exceeds, the sample size, Bidirectional Elimination typically emerges as the most advisable approach. Forward Selection and Backward Elimination can be considered in turn. By contrast, the Best Subsets approach requires the most processing time, yet it evaluates a comprehensive set of models with varying numbers of variables. In practice, users can experiment with several strategies and select a final model based on the specific dataset and research objectives at hand.
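To make the greedy logic of forward selection concrete, here is a minimal base-R sketch that repeatedly adds the predictor giving the largest AIC improvement. It is illustrative only, independent of StepReg's internal implementation, and the variable names are hypothetical.

# Minimal forward selection by AIC in base R; a sketch, not StepReg's code
data(mtcars)
candidates <- setdiff(names(mtcars), "mpg")
current <- lm(mpg ~ 1, data = mtcars)  # start from the intercept-only model
repeat {
  if (length(candidates) == 0) break
  # AIC of every one-variable extension of the current model
  aics <- sapply(candidates, function(v) {
    AIC(update(current, as.formula(paste(". ~ . +", v))))
  })
  if (min(aics) >= AIC(current)) break  # no addition improves the fit; stop
  best <- names(which.min(aics))
  current <- update(current, as.formula(paste(". ~ . +", best)))
  candidates <- setdiff(candidates, best)
}
formula(current)  # predictors retained by the greedy forward pass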

Selection metrics

Various selection metrics can be used to guide the addition or removal of predictors from the model. These metrics help determine the importance of predictors in improving the model fit. In StepReg, the selection metrics fall into two categories: Information Criteria and the Significance Level of the coefficient associated with each predictor.

An Information Criterion evaluates a model’s performance by balancing model fit against complexity, penalizing models with a larger number of parameters; lower values indicate a better trade-off between fit and complexity. Note that when evaluating different models, it is important to compare them within the same Information Criterion framework rather than across multiple Information Criteria. For example, if you decide to use AIC, you should compare all models using AIC. This ensures consistency and fairness in model comparison, as each Information Criterion has its own scale and penalization factors. Many metrics have been proposed in practice; the ones supported by StepReg are summarized below.

Importantly, given discrepancies in the precise definition of each metric across the literature, StepReg mirrors the formulas adopted by SAS for univariate multiple regression (UMR), except for HQ, IC(1), and IC(3/2). A subset of the UMR formulas extends readily to multivariate multiple regression (MMR), as indicated in the following table.

Statistics used in the selection metrics

Statistic   Meaning
n           sample size
p           number of parameters, including the intercept
q           number of dependent variables
σ²          estimate of pure error variance from fitting the full model
SST         total sum of squares corrected for the mean for the dependent variable; a numeric value for UMR and a matrix for multivariate regression
SSE         error sum of squares; a numeric value for UMR and a matrix for multivariate regression
LL          the natural logarithm of the likelihood
|·|         the determinant function
ln()        the natural logarithm
Abbreviation, Definition, and Formula of the Selection Metrics for Linear, Logit, Cox, Poisson, and Gamma Regression

  • AIC (Akaike’s Information Criterion)
    linear: $n\ln\left(\frac{|\text{SSE}|}{n}\right) + 2pq + n + q(q+1)$ (Clifford M. Hurvich 1989; Al-Subaihi 2002)¹
    logit, cox, poisson, gamma: $-2\text{LL} + 2p$ (Darlington 1968; George G. Judge 1985)

  • AICc (Corrected Akaike’s Information Criterion)
    linear: $n\ln\left(\frac{|\text{SSE}|}{n}\right) + \frac{nq(n+p)}{n-p-q-1}$ (Clifford M. Hurvich 1989; Edward J. Bedrick 1994)²
    logit, cox, poisson, gamma: $-2\text{LL} + \frac{n(n+p)}{n-p-2}$ (Clifford M. Hurvich 1989)

  • BIC (Sawa Bayesian Information Criterion)
    linear: $n\ln\left(\frac{\text{SSE}}{n}\right) + 2(p+2)o - 2o^2$, where $o = \frac{n\sigma^2}{\text{SSE}}$ (Sawa 1978; George G. Judge 1985); not available for MMR
    logit, cox, poisson, gamma: not available

  • Cp (Mallows’ Cp statistic)
    linear: $\frac{\text{SSE}}{\sigma^2} + 2p - n$ (Mallows 1973; Hocking 1976); not available for MMR
    logit, cox, poisson, gamma: not available

  • HQ (Hannan and Quinn Information Criterion)
    linear: $n\ln\left(\frac{|\text{SSE}|}{n}\right) + 2pq\ln(\ln(n))$ (E. J. Hannan 1979; Allan D R McQuarrie 1998; Clifford M. Hurvich 1989)
    logit, cox, poisson, gamma: $-2\text{LL} + 2p\ln(\ln(n))$ (E. J. Hannan 1979)

  • IC(1) (Information Criterion with Penalty Coefficient Set to 1)
    linear: $n\ln\left(\frac{|\text{SSE}|}{n}\right) + p$ (J. A. Nelder 1972; A. F. M. Smith 1980); not available for MMR
    logit, cox, poisson, gamma: $-2\text{LL} + p$ (J. A. Nelder 1972; A. F. M. Smith 1980)

  • IC(3/2) (Information Criterion with Penalty Coefficient Set to 3/2)
    linear: $n\ln\left(\frac{|\text{SSE}|}{n}\right) + \frac{3}{2}p$ (A. F. M. Smith 1980); not available for MMR
    logit, cox, poisson, gamma: $-2\text{LL} + \frac{3}{2}p$ (A. F. M. Smith 1980)

  • SBC (Schwarz Bayesian Information Criterion)
    linear: $n\ln\left(\frac{|\text{SSE}|}{n}\right) + pq\ln(n)$ (Clifford M. Hurvich 1989; Schwarz 1978; George G. Judge 1985; Al-Subaihi 2002); not available for MMR
    logit, cox, poisson, gamma: $-2\text{LL} + p\ln(n)$ (Schwarz 1978; George G. Judge 1985)

  • SL (Significance Level, i.e., P-value)
    linear: F test for UMR and approximate F test for MMR
    logit, cox, poisson, gamma: forward, likelihood-ratio test plus Rao chi-square test (logit, poisson, gamma) or likelihood-ratio test alone (cox); backward, Wald test

  • adjRsq (Adjusted R-square statistic)
    linear: $1 - \frac{(n-1)(1-R^2)}{n-p}$, where $R^2 = 1 - \frac{\text{SSE}}{\text{SST}}$ (Darlington 1968; George G. Judge 1985); not available for MMR
    logit, cox, poisson, gamma: not available
¹ Unsupported AIC formula (which does not affect the selection process, as it differs only by constant additive and multiplicative factors): $\text{AIC} = n\ln\left(\frac{\text{SSE}}{n}\right) + 2p$ (Darlington 1968; George G. Judge 1985)

² Unsupported AICc formula (which does not affect the selection process, as it differs only by constant additive and multiplicative factors): $\text{AICc} = \ln\left(\frac{\text{SSE}}{n}\right) + 1 + \frac{2(p+1)}{n-p-2}$ (Allan D R McQuarrie 1998)
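As a quick numerical check (using two hypothetical nested models on mtcars), the following sketch shows that definitional differences of this kind shift the linear AIC only by constants, so the model ranking is unchanged; compare the footnote-1 formula with R's stats::AIC():

# AIC formula from footnote 1: n * ln(SSE/n) + 2p
aic_fn1 <- function(fit) {
  n <- nobs(fit)
  p <- length(coef(fit))        # parameters including the intercept
  sse <- sum(residuals(fit)^2)
  n * log(sse / n) + 2 * p
}
m1 <- lm(mpg ~ wt, data = mtcars)
m2 <- lm(mpg ~ wt + qsec, data = mtcars)
aic_fn1(m2) - aic_fn1(m1)  # model-to-model difference ...
AIC(m2) - AIC(m1)          # ... coincides with the difference on R's scale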

No single metric is optimal for all datasets; the choice depends on your data and research goals. We recommend using multiple metrics simultaneously, which allows you to select the best model based on your specific needs. General guidance is summarized below.

  • AIC: AIC works by penalizing the inclusion of additional variables in a model. The lower the AIC, the better the performance of the model. AIC does not include the sample size in its penalty calculation, and it is optimal for minimizing the mean square error of predictions (Mark J. Brewer 2016).

  • AICc: AICc is a variant of AIC, which works better for small sample size, especially when numObs / numParam < 40 (Kenneth P. Burnham 2002).

  • Cp: Cp is used for linear models. It is equivalent to AIC when dealing with Gaussian linear model selection.

  • IC(1) and IC(3/2): IC(1) and IC(3/2) use 1 and 3/2 as penalty factors, respectively, compared to the 2 used by AIC. As such, IC(1) tends to return a more complex model with more variables, which may suffer from overfitting.

  • BIC and SBC: Both BIC and SBC are variants of the Bayesian Information Criterion. The main distinction between BIC/SBC and AIC lies in the magnitude of the penalty imposed: BIC/SBC penalize model complexity more heavily, which typically results in a simpler model (SAS Institute Inc 2018; Sawa 1978; Clifford M. Hurvich 1989; Schwarz 1978; George G. Judge 1985; Al-Subaihi 2002).

The precise definitions of these criteria vary across the literature and in the SAS environment. Here, BIC aligns with the Sawa Bayesian Information Criterion as outlined in the SAS documentation, while SBC corresponds to the Schwarz Bayesian Information Criterion. According to Richard’s post, whereas AIC often favors selecting overly complex models, BIC/SBC prioritize smaller models. Consequently, AIC may be preferable when the sample size is limited, whereas BIC/SBC tend to perform better with larger sample sizes.

  • HQ: HQ is an alternative to AIC, differing primarily in the method of penalty calculation. However, HQ has remained relatively underutilized in practice (Kenneth P. Burnham 2002).

  • adjRsq: The adjusted R-squared (adj-R²) seeks to overcome the limitation of R-squared in model selection by taking the number of predictors into account. It serves a similar purpose to the information criteria, as both approaches compare models by weighing goodness of fit against the number of parameters. However, information criteria are typically regarded as superior in this context (Stevens 2016). A quick numerical check appears after this list.

  • SL: SL stands for Significance Level (P-value), embodying a distinct approach to model selection in contrast to information criteria. The SL method operates by calculating a P-value through specific hypothesis testing. Should this P-value fall below a predefined threshold, such as 0.05, one should favor the alternative hypothesis, indicating that the full model significantly outperforms the reduced model. The effectiveness of this method hinges upon the selection of the P-value threshold, wherein smaller thresholds tend to yield simpler models.
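Two quick illustrations on mtcars (hypothetical nested models, computed outside StepReg) tie the last two metrics to familiar R output: the adjusted R-square formula reproduces the value reported by summary(), and a nested-model F test produces the P-value that the SL method would compare against sle:

# 1) adjRsq: the formula above matches summary()$adj.r.squared
fit <- lm(mpg ~ wt + qsec, data = mtcars)
n <- nobs(fit)
p <- length(coef(fit))  # parameters including the intercept
r2 <- summary(fit)$r.squared
1 - (n - 1) * (1 - r2) / (n - p)  # same value as summary(fit)$adj.r.squared

# 2) SL: should qsec enter a model that already contains wt?
reduced <- lm(mpg ~ wt, data = mtcars)
full <- lm(mpg ~ wt + qsec, data = mtcars)
anova(reduced, full)  # if Pr(>F) < sle (e.g., 0.05), qsec would be added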

Multicollinearity

This blog by Jim Frost gives an excellent overview of multicollinearity and when it is necessary to remove it.

Simply put, a dataset contains multicollinearity when its input predictors are correlated. When multicollinearity occurs, the interpretability of the predictors suffers badly because changes in one input variable are accompanied by changes in other input variables; it is therefore hard to estimate the relationship between each individual input variable and the dependent variable.

Multicollinearity can dramatically reduce the precision of the estimated regression coefficients of correlated input variables, making it hard to find the correct model. However, as Jim pointed out, “Multicollinearity affects the coefficients and p-values, but it does not influence the predictions, precision of the predictions, and the goodness-of-fit statistics. If your primary goal is to make predictions, and you don’t need to understand the role of each independent variable, you don’t need to reduce severe multicollinearity.”

In StepReg, a QR matrix decomposition is performed ahead of time to detect and remove input variables causing multicollinearity.
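To illustrate the idea (a sketch of the general technique, not necessarily StepReg's internal code), a QR decomposition of the design matrix exposes linearly dependent columns through a rank deficiency:

# Detect collinear columns via a QR decomposition of the design matrix
x <- model.matrix(mpg ~ ., data = mtcars)
x <- cbind(x, disp2 = 2 * mtcars$disp)  # inject a perfectly collinear column
qr_x <- qr(x)
qr_x$rank < ncol(x)  # TRUE signals multicollinearity
colnames(x)[qr_x$pivot[-seq_len(qr_x$rank)]]  # column(s) flagged for removal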

StepReg output

StepReg provides multiple functions for summarizing the model-building results. The function stepwise() generates a list of data frames that describe the feature selection steps and the final model. To facilitate collaboration, you can export these data frames to various formats such as “xlsx”, “html”, and “docx” with the function report(). Furthermore, you can easily compare the variable selection procedures across multiple selection metrics by visualizing the steps with the function plot(). See the details below.

Depending on the number of selected regression strategies and metrics, you can expect stepwise() to return one optimal model per strategy and metric combination. Generic functions such as summary(), coef(), residuals(), and fitted() can be applied to each element of the output.
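For instance, using res from the quick demo above:

fit <- res$bidirection$AIC  # one selected model from the output list
coef(fit)                   # estimated coefficients
head(residuals(fit))        # model residuals
head(fitted(fit))           # fitted values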

You can save the stepwise arguments, variable classes, the selection process overview/details, and the voted models in various formats such as “xlsx”, “docx”, “html”, and “pptx”, facilitating easy sharing. Of note, the suffix is automatically appended to report_name. For instance, the following example generates both “results.xlsx” and “results.docx” reports.

report(res, report_name = "results", format = c("xlsx", "docx"))

Use cases

Please choose the regression model that best suits the type of response variable. For detailed guidance, see section @ref(regressioncategories). Below, we present various examples utilizing different models tailored to specific datasets.

Linear regression with the mtcars dataset

In this section, we’ll demonstrate how to perform linear regression analysis using the mtcars dataset, showcasing different scenarios with varying numbers of predictors and dependent variables. We set type = "linear" to direct the function to perform linear regression.

Description of the mtcars dataset

mtcars is a classic dataset in statistics and is included in the base R installation. It was sourced from the 1974 Motor Trend US magazine and comprises 32 observations on 11 variables. Here’s a brief description of the variables included:

  1. mpg: miles per gallon (fuel efficiency)
  2. cyl: number of cylinders
  3. disp: displacement (engine size) in cubic inches
  4. hp: gross horsepower
  5. drat: rear axle ratio
  6. wt: weight (in thousands of pounds)
  7. qsec: 1/4 mile time (in seconds)
  8. vs: engine type (0 = V-shaped, 1 = straight)
  9. am: transmission type (0 = automatic, 1 = manual)
  10. gear: number of forward gears
  11. carb: number of carburetors

Why choose linear regression

Linear regression is an ideal choice for analyzing the mtcars dataset because it includes continuous variables such as “mpg”, “hp”, and “wt”, which can serve as response variables. Furthermore, the dataset exhibits potential linear relationships between the response variable and the other variables.

Example1: single dependent variable (“mpg”)

In this example, we employ the “forward” strategy with “AIC” as the selection criterion. Additionally, using the include argument, we specify that “disp” and “cyl” must always be included in the model.

data(mtcars)

formula <- mpg ~ .
res1 <- stepwise(formula = formula,
                 data = mtcars,
                 type = "linear",
                 include = c("disp", "cyl"),
                 strategy = "forward",
                 metric = "AIC")
res1
$forward
$forward$AIC

Call:
lm(formula = mpg ~ 1 + disp + cyl + wt + hp, data = data, weights = NULL)

Coefficients:
(Intercept)         disp          cyl           wt           hp  
   40.82854      0.01160     -1.29332     -3.85390     -0.02054  

To visualize the selection process:

plot_list <- list()
plot_list[["forward"]][["details"]] <- plot(res1, process = "details")
plot_list[["forward"]][["overview"]] <- plot(res1, process = "overview")
cowplot::plot_grid(plotlist = plot_list$forward, ncol = 1)

To exclude the intercept from the model, adjust the formula as follows:

formula <- mpg ~ . + 0
formula <- mpg ~ . - 1

To limit the model to a specific subset of predictors, adjust the formula as follows, which will only consider “cyl”, “disp”, “hp”, “wt”, “vs”, and “am” as predictors.

formula <- mpg ~ cyl + disp + hp + wt + vs + am + 0

Another way is to use the minus symbol (“-”) to exclude specific predictors from variable selection. For example, to include all variables except “disp”, “wt”, and the intercept:

formula <- mpg ~ . - 1 - disp - wt

You can simultaneously provide multiple selection strategies and metrics. For example, the following code snippet employs both “forward” and “backward” strategies using metrics “AIC”, “BIC”, and “SL”. It’s worth mentioning that when “SL” is specified, you may also want to set the significance level for entry (“sle”) and stay (“sls”), both of which default to 0.15.

formula <- mpg ~ .
res2 <- stepwise(formula = formula,
                 data = mtcars,
                 type = "linear",
                 strategy = c("forward", "backward"),
                 metric = c("AIC", "BIC", "SL"),
                 sle = 0.05,
                 sls = 0.05)
res2
$forward
$forward$AIC

Call:
lm(formula = mpg ~ 1 + wt + cyl + hp, data = data, weights = NULL)

Coefficients:
(Intercept)           wt          cyl           hp  
   38.75179     -3.16697     -0.94162     -0.01804  


$forward$BIC

Call:
lm(formula = mpg ~ 1 + wt + cyl, data = data, weights = NULL)

Coefficients:
(Intercept)           wt          cyl  
     39.686       -3.191       -1.508  


$forward$SL

Call:
lm(formula = mpg ~ 1 + wt + cyl, data = data, weights = NULL)

Coefficients:
(Intercept)           wt          cyl  
     39.686       -3.191       -1.508  



$backward
$backward$AIC

Call:
lm(formula = mpg ~ 1 + wt + qsec + am, data = data, weights = NULL)

Coefficients:
(Intercept)           wt         qsec           am  
      9.618       -3.917        1.226        2.936  


$backward$BIC

Call:
lm(formula = mpg ~ 1 + wt + qsec + am, data = data, weights = NULL)

Coefficients:
(Intercept)           wt         qsec           am  
      9.618       -3.917        1.226        2.936  


$backward$SL

Call:
lm(formula = mpg ~ 1 + wt + qsec + am, data = data, weights = NULL)

Coefficients:
(Intercept)           wt         qsec           am  
      9.618       -3.917        1.226        2.936  
plot_list <- setNames(
  lapply(c("forward", "backward"),function(i){
    setNames(
      lapply(c("details","overview"),function(j){
        plot(res2,strategy=i,process=j)
    }),
    c("details","overview")
    )
  }),
  c("forward", "backward")
)

cowplot::plot_grid(plotlist = plot_list$forward, ncol = 1, rel_heights = c(2, 1))

cowplot::plot_grid(plotlist = plot_list$backward, ncol = 1, rel_heights = c(2, 1))

Example2: multivariate regression (“mpg” and “drat”)

In this scenario, there are two dependent variables, “mpg” and “drat”. The model selection aims to identify the most influential predictors that affect both variables.

formula <- cbind(mpg, drat) ~ . + 0
res3 <- stepwise(formula = formula,
                 data = mtcars,
                 type = "linear",
                 strategy = "bidirection",
                 metric = c("AIC", "HQ"))
res3
$bidirection
$bidirection$AIC

Call:
lm(formula = cbind(mpg, drat) ~ 0 + gear + qsec + wt + am, data = data, 
    weights = NULL)

Coefficients:
      mpg       drat    
gear   0.48799   0.38197
qsec   1.53717   0.11809
wt    -3.29543  -0.02583
am     3.51269   0.38752


$bidirection$HQ

Call:
lm(formula = cbind(mpg, drat) ~ 0 + gear + qsec + wt + am, data = data, 
    weights = NULL)

Coefficients:
      mpg       drat    
gear   0.48799   0.38197
qsec   1.53717   0.11809
wt    -3.29543  -0.02583
am     3.51269   0.38752
plot_list <- setNames(
  lapply(c("bidirection"),function(i){
    setNames(
      lapply(c("details","overview"),function(j){
        plot(res3,strategy=i,process=j)
    }),
    c("details","overview")
    )
  }),
  c("bidirection")
)

cowplot::plot_grid(plotlist = plot_list$bidirection, ncol = 1, rel_heights = c(2, 1))

Logistic regression with the remission dataset

In this example, we’ll showcase logistic regression using the remission dataset. By setting type = "logit", we instruct the function to perform logistic regression.

Description of the remission dataset

The remission dataset, obtained from the online course STAT501 at Penn State University, has been integrated into StepReg. It consists of 27 observations across seven variables, including a binary variable named “remiss”:

  1. remiss: whether leukemia remission occurred, a value of 1 indicates occurrence while 0 means non-occurrence
  2. cell: cellularity of the marrow clot section
  3. smear: smear differential percentage of blasts
  4. infil: percentage of absolute marrow leukemia cell infiltrate
  5. li: percentage labeling index of the bone marrow leukemia cells
  6. blast: the absolute number of blasts in the peripheral blood
  7. temp: the highest temperature before the start of treatment

Why choose logistic regression

Logistic regression effectively captures the relationship between predictors and a categorical response variable, offering insights into the probability of being assigned into specific response categories given a set of predictors. It is suitable for analyzing binary outcomes, such as the remission status (“remiss”) in the remission dataset.

Example1: using “forward” strategy

In this example, we employ the “forward” strategy with “AIC” as the selection criterion, while ensuring that the “cell” variable is always included in the model.

data(remission)

formula <- remiss ~ .
res4 <- stepwise(formula = formula,
                 data = remission,
                 type = "logit",
                 include= "cell",
                 strategy = "forward",
                 metric = "AIC")
res4
$forward
$forward$AIC

Call:  glm(formula = remiss ~ 1 + cell + li + temp, family = "binomial", 
    data = data, weights = NULL)

Coefficients:
(Intercept)         cell           li         temp  
     67.634        9.652        3.867      -82.074  

Degrees of Freedom: 26 Total (i.e. Null);  23 Residual
Null Deviance:      34.37 
Residual Deviance: 21.95    AIC: 29.95
plot_list <- setNames(
  lapply(c("forward"),function(i){
    setNames(
      lapply(c("details","overview"),function(j){
        plot(res4,strategy=i,process=j)
    }),
    c("details","overview")
    )
  }),
  c("forward")
)
cowplot::plot_grid(plotlist = plot_list$forward, ncol = 1, rel_heights = c(2, 1))

Example2: using “subset” strategy

In this example, we employ the “subset” strategy, utilizing “SBC” as the selection criterion while excluding the intercept. Meanwhile, we set best_n = 3 to restrict the output to the top 3 models for each number of variables.

data(remission)

formula <- remiss ~ . + 0
res5 <- stepwise(formula = formula,
                  data = remission,
                  type = "logit",
                  strategy = "subset",
                  metric = "SBC",
                  best_n = 3)
res5
$subset
$subset$SBC

Call:  glm(formula = remiss ~ 0 + li + temp, family = "binomial", data = data, 
    weights = NULL)

Coefficients:
    li    temp  
 2.942  -3.855  

Degrees of Freedom: 27 Total (i.e. Null);  25 Residual
Null Deviance:      37.43 
Residual Deviance: 25.86    AIC: 29.86
plot_list <- setNames(
  lapply(c("subset"),function(i){
    setNames(
      lapply(c("details","overview"),function(j){
        plot(res5,strategy=i,process=j)
    }),
    c("details","overview")
    )
  }),
  c("subset")
)
cowplot::plot_grid(plotlist = plot_list$subset, ncol = 1, rel_heights = c(2, 1))

Here, the 0 in the above plot means that there is no intercept in the model.

Cox regression with the lung dataset

In this example, we’ll demonstrate how to perform Cox regression analysis using the lung dataset. By setting type = "cox", we instruct the function to conduct Cox regression.

Description of the lung dataset

The lung dataset, available in the "survival" R package, includes information on survival times for 228 patients with advanced lung cancer. It comprises ten variables, among which the “status” variable codes for censoring status (1 = censored, 2 = dead), and the “time” variable denotes the patient survival time in days. To learn more about the dataset, use ?survival::lung.

Why choose Cox regression

Cox regression, also termed the Cox proportional hazards model, is specifically designed for analyzing survival data, making it well-suited for datasets like lung that include information on the time until an event (e.g., death) occurs. This method accommodates censoring and assumes proportional hazards, enhancing its applicability to medical studies involving time-to-event outcomes.

Example1: using “forward” strategy

In this example, we employ the “forward” strategy with “AICc” as the selection criterion.

library(dplyr)
library(survival)
# Preprocess:
lung <- survival::lung %>%
  mutate(sex = factor(sex, levels = c(1, 2))) %>% # encode sex as a factor
  na.omit() # get rid of incomplete records

formula <- Surv(time, status) ~ .
res6 <- stepwise(formula = formula,
                 data = lung,
                 type = "cox",
                 strategy = "forward",
                 metric = "AICc")
res6
$forward
$forward$AICc
Call:
coxph(formula = Surv(time, status) ~ 0 + ph.ecog + sex + inst + 
    wt.loss + ph.karno, data = data, weights = NULL, method = "efron")

              coef exp(coef)  se(coef)      z        p
ph.ecog   0.993224  2.699926  0.232115  4.279 1.88e-05
sex2     -0.571959  0.564419  0.198865 -2.876  0.00403
inst     -0.030042  0.970404  0.012931 -2.323  0.02016
wt.loss  -0.014800  0.985309  0.007664 -1.931  0.05348
ph.karno  0.021492  1.021725  0.011222  1.915  0.05547

Likelihood ratio test=30.44  on 5 df, p=1.208e-05
n= 167, number of events= 120 
plot_list <- setNames(
  lapply(c("forward"),function(i){
    setNames(
      lapply(c("details","overview"),function(j){
        plot(res6,strategy=i,process=j)
    }),
    c("details","overview")
    )
  }),
  c("forward")
)
cowplot::plot_grid(plotlist = plot_list$forward, ncol = 1, rel_heights = c(2, 1))

Poisson regression with the creditCard dataset

In this example, we’ll demonstrate how to perform Poisson regression analysis using the creditCard dataset. We set type = "poisson" to direct the function to perform Poisson regression.

Description of the creditCard dataset

The creditCard dataset, included in the "AER" package, contains credit history information for a sample of applicants for a specific type of credit card. It encompasses 1319 observations across 12 variables, including “reports”, “age”, and “income”, among others. The “reports” variable represents the number of major derogatory reports. For detailed information, refer to ?AER::CreditCard.

Why choose Poisson regression

Poisson regression is a frequently employed method for analyzing count data, where the response variable represents the number of occurrences of an event within a defined time or space frame. In the context of the creditCard dataset, Poisson regression can model the count of major derogatory reports (“reports”), enabling assessment of the predictors’ impact on this variable.

Example1: using “forward” strategy

In this example, we employ the “forward” strategy with “SL” as the selection criterion, setting the significance level for entry to 0.05 (sle = 0.05).

data(creditCard)

formula <- reports ~ .
res7 <- stepwise(formula = formula,
                 data = creditCard,
                 type = "poisson",
                 strategy = "forward",
                 metric = "SL",
                 sle = 0.05)
summary(res7$forward$SL)

Call:
glm(formula = reports ~ 1 + card + active + expenditure + months + 
    owner + majorcards, family = "poisson", data = data, weights = NULL)

Coefficients:
              Estimate Std. Error z value Pr(>|z|)    
(Intercept) -0.2986437  0.1096854  -2.723 0.006475 ** 
cardyes     -2.7035223  0.1171959 -23.068  < 2e-16 ***
active       0.0654297  0.0039975  16.367  < 2e-16 ***
expenditure  0.0006724  0.0001776   3.785 0.000153 ***
months       0.0021246  0.0005303   4.006 6.17e-05 ***
owneryes    -0.3437699  0.0926480  -3.710 0.000207 ***
majorcards   0.2740393  0.1045129   2.622 0.008740 ** 
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 2347.4  on 1318  degrees of freedom
Residual deviance: 1277.1  on 1312  degrees of freedom
AIC: 1940.7

Number of Fisher Scoring iterations: 6
plot_list <- setNames(
  lapply(c("forward"),function(i){
    setNames(
      lapply(c("details","overview"),function(j){
        plot(res7,strategy=i,process=j)
      }),
      c("details","overview")
    )
  }),
  c("forward")
)
cowplot::plot_grid(plotlist = plot_list$forward, ncol = 1, rel_heights = c(2, 1))

Interactive app

We have developed an interactive Shiny application to simplify model selection tasks for non-programmers. You can access the app through the following URL:

https://junhuili1017.shinyapps.io/StepReg/

You can also access the Shiny app directly from your local machine with the following code:

library(StepReg)
StepRegShinyApp()

Here is the user interface:

[Screenshot of the StepReg Shiny app user interface]

Session info

R version 4.4.2 (2024-10-31)
Platform: x86_64-pc-linux-gnu
Running under: Ubuntu 24.04.1 LTS

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so;  LAPACK version 3.12.0

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=C              
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

time zone: Etc/UTC
tzcode source: system (glibc)

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] survival_3.7-0   dplyr_1.1.4      kableExtra_1.4.0 knitr_1.49      
[5] StepReg_1.5.6    BiocStyle_2.35.0

loaded via a namespace (and not attached):
 [1] tidyselect_1.2.1        viridisLite_0.4.2       farver_2.1.2           
 [4] fastmap_1.2.0           fontquiver_0.2.1        promises_1.3.2         
 [7] shinyjs_2.1.0           digest_0.6.37           timechange_0.3.0       
[10] mime_0.12               lifecycle_1.0.4         magrittr_2.0.3         
[13] compiler_4.4.2          rlang_1.1.4             sass_0.4.9             
[16] tools_4.4.2             utf8_1.2.4              yaml_2.3.10            
[19] data.table_1.16.2       labeling_0.4.3          askpass_1.2.1          
[22] summarytools_1.0.1      htmlwidgets_1.6.4       plyr_1.8.9             
[25] xml2_1.3.6              withr_3.0.2             purrr_1.0.2            
[28] sys_3.4.3               grid_4.4.2              fansi_1.0.6            
[31] gdtools_0.4.1           xtable_1.8-4            colorspace_2.1-1       
[34] ggplot2_3.5.1           scales_1.3.0            MASS_7.3-61            
[37] cli_3.6.3               rmarkdown_2.29          ragg_1.3.3             
[40] generics_0.1.3          rstudioapi_0.17.1       reshape2_1.4.4         
[43] ggcorrplot_0.1.4.1      cachem_1.1.0            pander_0.6.5           
[46] stringr_1.5.1           shinythemes_1.2.0       splines_4.4.2          
[49] BiocManager_1.30.25     matrixStats_1.4.1       base64enc_0.1-3        
[52] vctrs_0.6.5             Matrix_1.7-1            jsonlite_1.8.9         
[55] fontBitstreamVera_0.1.1 rapportools_1.1         ggrepel_0.9.6          
[58] systemfonts_1.1.0       maketools_1.3.1         magick_2.8.5           
[61] jquerylib_0.1.4         tidyr_1.3.1             glue_1.8.0             
[64] codetools_0.2-20        cowplot_1.1.3           DT_0.33                
[67] lubridate_1.9.3         stringi_1.8.4           flextable_0.9.7        
[70] gtable_0.3.6            later_1.4.1             shinycssloaders_1.1.0  
[73] munsell_0.5.1           tibble_3.2.1            pillar_1.9.0           
[76] htmltools_0.5.8.1       openssl_2.2.2           R6_2.5.1               
[79] tcltk_4.4.2             textshaping_0.4.0       lattice_0.22-6         
[82] evaluate_1.0.1          shiny_1.9.1             backports_1.5.0        
[85] fontLiberation_0.1.0    httpuv_1.6.15           pryr_0.1.6             
[88] bslib_0.8.0             Rcpp_1.0.13-1           zip_2.3.1              
[91] uuid_1.2-1              svglite_2.1.3           checkmate_2.3.2        
[94] officer_0.6.7           xfun_0.49               buildtools_1.0.0       
[97] pkgconfig_2.0.3        
References

A. F. M. Smith, D. J. Spiegelhalter. 1980. “Bayes Factors and Choice Criteria for Linear Models.” Journal of the Royal Statistical Society. Series B (Methodological) 42 (2): 213–20.
Allan D R McQuarrie, Chih-Ling Tsai. 1998. Regression and Time Series Model Selection. River Edge, NJ: World Scientific Publishing Co. Pte. Ltd.
Al-Subaihi, Ali A. 2002. “Variable Selection in Multivariable Regression Using SAS/IML.” Journal of Statistical Software 7 (12): 1–20.
Clifford M. Hurvich, Chih-Ling Tsai. 1989. “Regression and Time Series Model Selection in Small Samples.” Biometrika 76: 297–307.
Darlington, R. B. 1968. “Multiple Regression in Psychological Research and Practice.” Psychological Bulletin 69 (3): 161–82.
E. J. Hannan, B. G. Quinn. 1979. “The Determination of the Order of an Autoregression.” Journal of the Royal Statistical Society. Series B (Methodological) 41 (2): 190–95.
Edward J. Bedrick, Chih-Ling Tsai. 1994. “Model Selection for Multivariate Regression in Small Samples.” Biometrics 50 (1): 226–31.
George G. Judge, R. Carter Hill, William E. Griffiths. 1985. The Theory and Practice of Econometrics, 2nd Edition. Wiley. https://www.wiley.com/en-us/The+Theory+and+Practice+of+Econometrics%2C+2nd+Edition-p-9780471895305.
Hocking, R. R. 1976. “A Biometrics Invited Paper. The Analysis and Selection of Variables in Linear Regression.” Biometrics 32 (1): 1–49.
Hotelling, Harold. 1992. “The Generalization of Student’s Ratio.” In Breakthroughs in Statistics: Foundations and Basic Theory, 54–62.
J. A. Nelder, R. W. M. Wedderburn. 1972. “Generalized Linear Models.” Journal of the Royal Statistical Society. Series A (General) 135 (3): 370–84.
K. V. Mardia, J. M. Bibby, J. T. Kent. 1981. “Multivariate Analysis.” Mathematical Gazette 65 (431): 75–76.
Kenneth P. Burnham, David R. Anderson. 2002. Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, 2nd Edition. Springer.
Mallows, C. L. 1973. “Some Comments on CP.” Technometrics 15 (4): 661–75.
Mark J. Brewer, Susan L. Cooksley, Adam Butler. 2016. “The Relative Performance of AIC, AICC and BIC in the Presence of Unobserved Heterogeneity.” Methods in Ecology and Evolution 7 (6): 679–92.
McKeon, James J. 1974. “F Approximations to the Distribution of Hotelling’s $T_0^2$.” Biometrika 61 (2): 381–83.
Pillai, K. C. S. 1955. “Some New Test Criteria in Multivariate Analysis.” Annals of Mathematical Statistics 26 (1): 117–21.
Prathapasinghe Dharmawansa, Ofer Shwartz, Boaz Nadler. 2014. “Roy’s Largest Root Under Rank-One Alternatives: The Complex Valued Case and Applications.” arXiv preprint arXiv:1411.4226.
R. S. Sparks, D. Coutsourides, W. Zucchini. 1985. “On Variable Selection in Multivariate Regression.” Communications in Statistics - Theory and Methods 14 (7): 1569–87.
SAS Institute Inc. 2018. SAS/STAT® 15.1 User’s Guide. Cary, NC: SAS Institute Inc.
Sawa, Takamitsu. 1978. “Information Criteria for Discriminating Among Alternative Regression Models.” Econometrica 46 (6): 1273–91.
Schwarz, Gideon. 1978. “Estimating the Dimension of a Model.” Annals of Statistics 6 (2): 461–64.
Stevens, James P. 2016. Applied Multivariate Statistics for the Social Sciences, Fifth Edition. Routledge.