Hierarchical (or multilevel/3-level) meta-analysis adjusts for the dependency of effect sizes due to clustering in the data. For example, effect size estimates from multiple experiments reported in the same manuscript might be expected to be more similar to each other than to effect sizes from different papers (Konstantopoulos, 2011). This vignette illustrates how to deal with such dependencies among effect size estimates (in cases with a simple nested structure) using Bayesian model-averaged meta-analysis (BMA) (Bartoš et al., 2021; Gronau et al., 2017, 2021). (See other vignettes for more details on BMA: Reproducing BMA or Informed BMA in medicine.)
First, we introduce the example data set. Second, we illustrate the frequentist hierarchical meta-analysis with the metafor R package and discuss the results. Third, we outline the hierarchical meta-analysis parameterization. Fourth, we estimate the Bayesian model-averaged hierarchical meta-analysis. Finally, we conclude by discussing further extensions and publication bias adjustment.
We use the dat.konstantopoulos2011 data set from the metadat R package (White et al., 2022), which is also used for the same purpose in the metafor
(Viechtbauer, 2010) R package. We roughly follow the example in the data set’s help file, ?dat.konstantopoulos2011. The data set consists of 56 studies estimating the effects of modified school calendars on students’ achievement. The 56 studies were run in individual schools, which can be
grouped into 11 districts. We might expect more similar effect size
estimates from schools in the same district – in other words, the effect
size estimates from the same district might not be completely
independent. Consequently, we might want to adjust for this dependency
(clustering) between the effect size estimates to draw a more
appropriate inference.
First, we load the data set, assign it to the dat
object, and inspect the first few rows.
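One way to do this (a sketch, assuming the metadat package is installed):

library(metadat)

# assign the data set to `dat` and inspect the first few rows
dat <- dat.konstantopoulos2011
head(dat)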
In the following analyses, we use the following variables:
- yi, the standardized mean differences,
- vi, the sampling variances of the standardized mean differences,
- district, the district id which distinguishes among the districts,
- school, the school id which distinguishes among different schools within the same district.

metafor
We follow the data set’s help file and fit a simple random effects meta-analysis using the rma() function from the metafor package. This model ignores the dependency between effect size estimates. We use this simple model as our starting point and as a comparison for the later models.
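A minimal sketch of the call (the fit_metafor.0 name matches the object referenced later in this vignette):

library(metafor)

# simple random effects meta-analysis that ignores the clustering
fit_metafor.0 <- rma(yi = yi, vi = vi, data = dat)
summary(fit_metafor.0)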
The model summary returns a small but statistically significant effect size estimate μ = 0.128 (se = 0.044) and a considerable heterogeneity estimate τ = 0.297.
We extend the model to account for the hierarchical structure of the data, i.e., schools within districts, by using the rma.mv() function from the metafor package and extending the call with the random = ~ school | district argument.
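A sketch of the extended call (compound symmetry is metafor’s default structure for ~ inner | outer random terms):

# hierarchical model: schools nested within districts
fit_metafor <- rma.mv(yi = yi, V = vi, random = ~ school | district, data = dat)
summary(fit_metafor)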
We find that accounting for the hierarchical structure of the data results in (1) a slightly larger effect size estimate (μ = 0.187) and (2) a larger standard error of the effect size estimate (se = 0.085). The larger standard error is a natural consequence of accounting for the dependency between the effect sizes. Since the effect sizes are dependent, they do not contribute completely independent information. The hierarchical model accounts for the dependency by estimating the similarity between estimates from the same cluster (district) and discounting the information contributed by each estimate. The estimated similarity among estimates from the same cluster is summarized in the ρ = 0.666 estimate.
We specify a simple hierarchical meta-analytic model (see Konstantopoulos (2011) for an example). Using distributional notation, we can describe the data generating process as a multi-stage sampling procedure. In a nutshell, we assume the existence of an overall mean effect μ. Next, we assume that the district-level effects γ_k of each district k = 1, …, K systematically differ from the mean effect, with the variance of the district-level effects summarized by the between-district heterogeneity τ_b. Furthermore, we assume that the true effects θ_{k,j} of each study j = 1, …, J_k systematically differ from the district-level effect, with the variance of the study effects around the district-level effect summarized by the within-district heterogeneity τ_w. Finally, the observed effect sizes y_{k,j} differ from the true effects θ_{k,j} due to random errors with standard errors se_{k,j}.
Mathematically, we can describe such a model as:
$$
\begin{aligned}
\gamma_k &\sim \text{N}(\mu, \tau_b^2),\\
\theta_{k,j} &\sim \text{N}(\gamma_k, \tau_w^2),\\
y_{k,j} &\sim \text{N}(\theta_{k,j}, \text{se}_{k,j}^2),
\end{aligned}
$$
where N(·, ·) denotes a normal distribution with the given mean and variance.
Conveniently, with a bit of algebra, we do not need to estimate the district-level and true study effects at all. Instead, we marginalize them out and sample the observed effect sizes of each district, y_{k,.}, directly from a multivariate normal distribution, MN(), with a common mean μ and covariance matrix S:
$$
\begin{aligned}
y_{k,.} &\sim \text{MN}(\mu, \text{S}),\\
\text{S} &= \begin{bmatrix}
\tau_b^2 + \tau_w^2 + \text{se}_1^2 & \tau_b^2 & \dots & \tau_b^2 \\
\tau_b^2 & \tau_b^2 + \tau_w^2 + \text{se}_2^2 & \dots & \tau_b^2 \\
\dots & \dots & \dots & \dots \\
\tau_b^2 & \tau_b^2 & \dots & \tau_b^2 + \tau_w^2 + \text{se}_{J_k}^2 \\
\end{bmatrix},
\end{aligned}
$$
where the off-diagonal elements equal τ_b^2, the covariance induced by the shared district-level effect. The marginalization of the random effects is helpful as it allows us to sample far fewer parameters from the posterior distribution (which significantly simplifies marginal likelihood estimation via bridge sampling). Furthermore, the marginalization allows us to properly specify selection model publication bias adjustments – the marginalization propagates the selection process through all the sampling steps at once (we cannot proceed with the sequential sampling, as the selection procedure on the observed effect sizes modifies the sampling distributions of all the preceding levels).
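To make the structure of S concrete, here is a small numerical sketch for a hypothetical district with three studies (the values of τ_b, τ_w, and the standard errors are made up for illustration):

tau_b <- 0.25                  # between-district heterogeneity
tau_w <- 0.15                  # within-district heterogeneity
se    <- c(0.10, 0.12, 0.08)   # standard errors of the three estimates

# off-diagonal elements: covariance tau_b^2 induced by the shared district effect
S <- matrix(tau_b^2, nrow = 3, ncol = 3)
# diagonal elements: total variance tau_b^2 + tau_w^2 + se^2
diag(S) <- tau_b^2 + tau_w^2 + se^2
S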
We can further re-parameterize the model by performing the substitution
$$
\begin{aligned}
\tau^2 &= \tau_b^2 + \tau_w^2,\\
\rho &= \frac{\tau_b^2}{\tau_b^2 + \tau_w^2},
\end{aligned}
$$
and specifying the covariance matrix using the inter-study correlation ρ (the correlation of true study effects within the same district), the total heterogeneity τ, and the standard errors se:
$$
\begin{aligned}
\text{S} &= \begin{bmatrix}
\tau^2 + \text{se}_1^2 & \rho\tau^2 & \dots & \rho\tau^2 \\
\rho\tau^2 & \tau^2 + \text{se}_2^2 & \dots & \rho\tau^2 \\
\dots & \dots & \dots & \dots \\
\rho\tau^2 & \rho\tau^2 & \dots & \tau^2 + \text{se}_{J_k}^2 \\
\end{bmatrix}.
\end{aligned}
$$
This specification corresponds to the compound symmetry covariance matrix of random effects, the default setting in the metafor::rma.mv() function. More importantly, it allows us to easily specify prior distributions on the correlation coefficient ρ and the total heterogeneity τ.
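Continuing the numerical sketch from above, we can verify that the re-parameterization produces the same covariance matrix:

tau <- sqrt(tau_b^2 + tau_w^2)   # total heterogeneity
rho <- tau_b^2 / tau^2           # correlation of true effects within a district

S2 <- matrix(rho * tau^2, nrow = 3, ncol = 3)
diag(S2) <- tau^2 + se^2
all.equal(S, S2)                 # TRUE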
RoBMA
Before we estimate the complete Hierarchical Bayesian Model-Averaged Meta-Analysis (hBMA) with the RoBMA package, we quickly re-estimate the simpler models fitted with the metafor package in the previous section.

First, we estimate a simple Bayesian random effects meta-analysis (corresponding to fit_metafor.0). We use the RoBMA() function and specify the effect sizes and sampling variances via the d = dat$yi and v = dat$vi arguments. We set the priors_effect_null, priors_heterogeneity_null, and priors_bias arguments to NULL, omitting the models that assume the absence of the effect, the models that assume the absence of heterogeneity, and the publication bias adjustment component.
fit.0 <- RoBMA(d = dat$yi, v = dat$vi,
priors_effect_null = NULL,
priors_heterogeneity_null = NULL,
priors_bias = NULL,
parallel = TRUE, seed = 1)
We generate a complete summary for the only estimated model by adding the type = "individual" argument to the summary() function.
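For example (using the fit.0 object estimated above):

summary(fit.0, type = "individual")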
We verify that the effect size, μ = 0.126 (95% CI [0.041, 0.211]), and heterogeneity, τ = 0.292 (95% CI [0.233, 0.364]), estimates closely correspond to the frequentist results (as we would expect from parameter estimates under weakly informative priors).
Second, we account for the clustering of effect size estimates within districts by extending the previous function call with the study_ids = dat$district argument. This allows us to estimate the hierarchical Bayesian random effects meta-analysis (corresponding to fit_metafor). We use the default prior distribution for the correlation parameter, ρ ~ Beta(1, 1), set via the priors_hierarchical argument, which restricts the correlation to be positive and uniformly distributed on the interval (0, 1).
fit <- RoBMA(d = dat$yi, v = dat$vi, study_ids = dat$district,
priors_effect_null = NULL,
priors_heterogeneity_null = NULL,
priors_bias = NULL,
parallel = TRUE, seed = 1)
Again, we generate the complete summary for the only estimated model and verify that our estimates correspond to their frequentist counterparts, with the estimated effect size μ = 0.181 (95% CI [0.017, 0.346]), heterogeneity τ = 0.308 (95% CI [0.223, 0.442]), and correlation ρ = 0.627 (95% CI [0.320, 0.864]).
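The corresponding call (a sketch):

summary(fit, type = "individual")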
We can further visualize the prior and posterior distribution of the ρ parameter using the plot() function.
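A sketch of the call, assuming plot() accepts a parameter argument for selecting ρ and a prior flag for overlaying the prior density:

# prior and posterior distribution of the correlation parameter
plot(fit, parameter = "rho", prior = TRUE)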
Third, we extend the previous model into a model ensemble that also includes models assuming the absence of the effect and/or heterogeneity (we do not incorporate models assuming the presence of publication bias due to the computational complexity explained in the summary at the end of this vignette). Including these additional models allows us to evaluate the evidence in favor of the effect and heterogeneity. Furthermore, specifying all these additional models allows us to incorporate the uncertainty about the specified models and weight the posterior distribution according to how well the models predicted the data. We estimate the remaining models by removing the priors_effect_null and priors_heterogeneity_null arguments from the previous function call, thereby including the previously omitted models of no effect and/or no heterogeneity.
fit_BMA <- RoBMA(d = dat$yi, v = dat$vi, study_ids = dat$district,
priors_bias = NULL,
parallel = TRUE, seed = 1)
Now we generate a summary for the complete model-averaged ensemble by calling the summary() function without any additional arguments.
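For example:

summary(fit_BMA)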
We find that the ensemble contains four models – the combinations of models assuming the presence/absence of the effect and of heterogeneity – each with an equal prior model probability. Importantly, the models assuming heterogeneity are also specified with the hierarchical structure and account for the clustering. A comparison of the specified models reveals weak evidence against the effect, BF_10 = 0.917, and extreme evidence for the presence of heterogeneity, BF_rf = 9.3 × 10^92.
Moreover, we find that the Hierarchical component summary shows the same values as the Heterogeneity component summary. The reason is that, under the default settings, all models assuming the presence of heterogeneity are also specified with the hierarchical structure.
We also obtain the model-averaged posterior estimates that combine the posterior estimates from all models according to the posterior model probabilities, the effect size, μ = 0.087 (95% CI [0.000, 0.314]), heterogeneity, τ = 0.326 (95% CI [0.231, 0.472]), and correlation, ρ = 0.659 (95% CI [0.354, 0.879]).
In the previous analyses, we assumed that the effect sizes are indeed clustered within the districts, and we only adjusted for the clustering. However, the effect sizes within the same cluster do not necessarily need to be more similar than effect sizes across different clusters. Now, we specify a model ensemble that allows us to test this assumption by specifying two sets of random effect meta-analytic models. The first set of models assumes that there is indeed clustering and that the correlation of random effects is uniformly distributed on the (0, 1) interval (as in the previous analyses). The second set of models assumes that there is no clustering, i.e., the correlation of random effects ρ = 0, which simplifies the structured covariance matrix to a diagonal matrix. Again, we model average across models assuming the presence and absence of the effect to account for the model uncertainty.
To specify this ‘special’ model ensemble with the RoBMA() function, we need to modify the previous model call in the following ways. We remove the fixed effect models by specifying the priors_heterogeneity_null = NULL argument.¹ Furthermore, we specify the prior distribution for models assuming the absence of the hierarchical structure by adding the priors_hierarchical_null = prior(distribution = "spike", parameters = list("location" = 0)) argument.
hierarchical_test <- RoBMA(d = dat$yi, v = dat$vi, study_ids = dat$district,
priors_heterogeneity_null = NULL,
priors_hierarchical_null = prior(distribution = "spike", parameters = list("location" = 0)),
priors_bias = NULL,
parallel = TRUE, seed = 1)
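We inspect the resulting model ensemble with a plain summary() call:

summary(hierarchical_test)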
The summary shows that the Hierarchical component is no longer equivalent to the Heterogeneity component – the new model specification allows us to compare the random effects models assuming the presence of the hierarchical structure to the random effects models assuming its absence. The resulting inclusion Bayes factor of the hierarchical structure shows extreme evidence in favor of clustering of the effect sizes, BF_ρρ̄ = 4624, i.e., there is extreme evidence that the intervention results in more similar effects within districts.
We illustrated how to estimate a hierarchical Bayesian model-averaged meta-analysis (hBMA) using the RoBMA package. The hBMA model allows us to test for the presence vs. absence of the effect and of heterogeneity while simultaneously adjusting for clustered effect size estimates. While the current implementation allows us to draw fully Bayesian inferences, incorporate prior information, and acknowledge model uncertainty, it has a few limitations in contrast to the metafor package. For example, the RoBMA package only allows simple nested random effects (i.e., estimates within studies, schools within districts, etc.). The simple nesting allows us to break the full covariance matrix into per-cluster block matrices, which speeds up the already demanding computation. Furthermore, the computational complexity increases significantly when considering selection models, as we need to compute an exponentially increasing number of multivariate normal probabilities with increasing cluster size (clusters with more than 4 studies make the current implementation impractical). These limitations are, however, not the end of the road, as we explore other approaches (e.g., specifying only PET-PEESE style publication bias adjustments, and other dependency adjustments) in a future vignette.
¹ We could also model-average across the hierarchical structure assuming fixed effect models, i.e., τ ∼ f(·) and ρ = 1. However, specifying such a model ensemble is beyond the scope of this vignette; see the Custom ensembles vignette for some hints.