--- title: "Reproducing Bayesian Model-Averaged Meta-Analysis" author: "František Bartoš" date: "`r Sys.Date()`" output: rmarkdown::html_vignette: self_contained: yes bibliography: ../inst/REFERENCES.bib csl: ../inst/apa.csl vignette: > %\VignetteIndexEntry{Reproducing Bayesian Model-Averaged Meta-Analysis} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} %\VignetteEngine{knitr::rmarkdown_notangle} --- ```{r setup, include = FALSE} is_check <- ("CheckExEnv" %in% search()) || any(c("_R_CHECK_TIMINGS_", "_R_CHECK_LICENSE_") %in% names(Sys.getenv())) || !file.exists("../models/ReproducingBMA/BMA_PowerPoseTest.RDS") knitr::opts_chunk$set( collapse = TRUE, comment = "#>", eval = !is_check, dev = "png") if(.Platform$OS.type == "windows"){ knitr::opts_chunk$set(dev.args = list(type = "cairo")) } ``` ```{r include = FALSE} library(RoBMA) # we pre-load the RoBMA models, the fitting time is around 2-5 minutes fit_BMA_test <- readRDS(file = "../models/ReproducingBMA/BMA_PowerPoseTest.RDS") fit_BMA_est <- readRDS(file = "../models/ReproducingBMA/BMA_PowerPoseEst.RDS") fit_RoBMA_test <- readRDS(file = "../models/ReproducingBMA/PowerPoseTest.RDS") fit_RoBMA_est <- readRDS(file = "../models/ReproducingBMA/PowerPoseEst.RDS") ``` ```{r include = FALSE, eval = FALSE} # R package version updating library(RoBMA) data("power_pose", package = "metaBMA") fit_RoBMA_test <- RoBMA(d = power_pose$effectSize, se = power_pose$SE, study_names = power_pose$study, priors_effect = prior( distribution = "cauchy", parameters = list(location = 0, scale = 1/sqrt(2)), truncation = list(0, Inf)), priors_heterogeneity = prior( distribution = "invgamma", parameters = list(shape = 1, scale = 0.15)), priors_bias = NULL, transformation = "cohens_d", seed = 1, parallel = TRUE) fit_RoBMA_est <- RoBMA(d = power_pose$effectSize, se = power_pose$SE, study_names = power_pose$study, priors_effect = prior( distribution = "cauchy", parameters = list(location = 0, scale = 1/sqrt(2))), priors_heterogeneity = prior( distribution = "invgamma", parameters = list(shape = 1, scale = 0.15)), priors_bias = NULL, priors_effect_null = NULL, transformation = "cohens_d", seed = 2, parallel = TRUE) saveRDS(fit_RoBMA_test, file = "../models/ReproducingBMA/PowerPoseTest.RDS") saveRDS(fit_RoBMA_est, file = "../models/ReproducingBMA/PowerPoseEst.RDS") ``` By default, the RoBMA package estimates an ensemble of 36 meta-analytic models and provides functions for convenient manipulation with the fitted object. However, the package has been designed so it can be used as a framework for estimating any combination of meta-analytic models (or a single model). Here, we illustrate how to build a custom ensemble of meta-analytic models - specifically the same ensemble that is used in 'classical' Bayesian Model-Averaged Meta-Analysis [@gronau2017bayesian; @gronau2020primer; @bartos2021bayesian]. See [this vignette](CustomEnsembles.html) if you are interested in building more customized ensembles or @bartos2020adjusting for a tutorial on fitting (custom) models in JASP. ### Reproducing Bayesian Model-Averaged Meta-Analysis (BMA) We illustrate how to fit a classical BMA (not adjusting for publication bias) using `RoBMA`. For this purpose, we reproduce a meta-analysis of registered reports on Power posing by @gronau2017bayesian. 
### Reproducing Bayesian Model-Averaged Meta-Analysis (BMA)

We illustrate how to fit a classical BMA (not adjusting for publication bias) using `RoBMA`. For this purpose, we reproduce the meta-analysis of registered reports on power posing by @gronau2017bayesian. We focus only on the analysis of all reported results, which uses a Cauchy prior distribution with scale $1/\sqrt{2}$ for effect size estimation (a half-Cauchy for testing) and an inverse-gamma distribution with shape = 1 and scale = 0.15 for the heterogeneity parameter. You can find the figure from the original publication [here](https://www.flickr.com/photos/149473606@N08/34086113581/in/dateposted-public/) and the paper's supplementary materials at https://osf.io/fxg32/.

First, we load the power posing data provided within the metaBMA package and reproduce the analysis performed by @gronau2017bayesian.

```{r}
data("power_pose", package = "metaBMA")
power_pose[,c("study", "effectSize", "SE")]
```

``` r
fit_BMA_test <- metaBMA::meta_bma(y   = power_pose$effectSize,
                                  SE  = power_pose$SE,
                                  d   = metaBMA::prior(family = "halfcauchy", param = 1/sqrt(2)),
                                  tau = metaBMA::prior(family = "invgamma",   param = c(1, .15)))

fit_BMA_est  <- metaBMA::meta_bma(y   = power_pose$effectSize,
                                  SE  = power_pose$SE,
                                  d   = metaBMA::prior(family = "cauchy",   param = c(0, 1/sqrt(2))),
                                  tau = metaBMA::prior(family = "invgamma", param = c(1, .15)))
```

```{r}
fit_BMA_test$inclusion

round(fit_BMA_est$estimates, 2)
```

From the output, we can see that the inclusion Bayes factor for the effect size was $\text{BF}_{10} = 33.14$ and the effect size estimate was 0.22, 95% HDI [0.09, 0.34], which matches the reported results. Please note that the `metaBMA` package model-averages only across the $H_{1}$ models, whereas the `RoBMA` package model-averages across all models.

### Using RoBMA

Now we reproduce the analysis with `RoBMA`. We set the corresponding prior distributions for the effect size ($\mu$) and heterogeneity ($\tau$), and we remove the alternative prior distributions for publication bias by setting `priors_bias = NULL`. To specify the half-Cauchy prior distribution with the `RoBMA::prior()` function, we use a regular Cauchy distribution and truncate it at zero (note that both `metaBMA` and `RoBMA` export their own `prior()` functions, which will clash when both packages are loaded simultaneously). The inverse-gamma prior distribution for the heterogeneity parameter is the default option (we specify it only for completeness), and we omit the specifications of the null prior distributions for the effect size and heterogeneity (both of which are set to a spike at 0 by default) and for publication bias (which is set to no publication bias by default). Since `metaBMA` model-averages the effect size estimates only across the models assuming the presence of the effect, we remove the models assuming the absence of the effect from the estimation ensemble with `priors_effect_null = NULL`. Finally, we set `transformation = "cohens_d"` to estimate the models on Cohen's *d* scale (RoBMA uses Fisher's *z* scale by default and transforms the estimated coefficients back to the scale that is used for specifying the prior distributions), we speed up the computation by setting `parallel = TRUE`, and we set a seed for reproducibility.
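Before fitting the ensembles, we can double-check the truncated Cauchy specification itself. The following optional sketch assumes that the prior objects returned by `RoBMA::prior()` can be printed and plotted directly; the object name is ours and is not used later.

``` r
# a quick sanity check of the half-Cauchy prior used for testing:
# a Cauchy(0, 1/sqrt(2)) distribution truncated to the positive numbers
p_effect_test <- RoBMA::prior(
  distribution = "cauchy",
  parameters   = list(location = 0, scale = 1/sqrt(2)),
  truncation   = list(0, Inf))

p_effect_test        # prints the prior specification
plot(p_effect_test)  # visualizes the truncated density (assumes a plot method for prior objects)
```

With the prior distributions in place, we fit both ensembles: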
``` r
library(RoBMA)

fit_RoBMA_test <- RoBMA(d = power_pose$effectSize, se = power_pose$SE,
                        study_names = power_pose$study,
                        priors_effect        = prior(
                          distribution = "cauchy",
                          parameters   = list(location = 0, scale = 1/sqrt(2)),
                          truncation   = list(0, Inf)),
                        priors_heterogeneity = prior(
                          distribution = "invgamma",
                          parameters   = list(shape = 1, scale = 0.15)),
                        priors_bias          = NULL,
                        transformation = "cohens_d",
                        seed = 1, parallel = TRUE)

fit_RoBMA_est <- RoBMA(d = power_pose$effectSize, se = power_pose$SE,
                       study_names = power_pose$study,
                       priors_effect        = prior(
                         distribution = "cauchy",
                         parameters   = list(location = 0, scale = 1/sqrt(2))),
                       priors_heterogeneity = prior(
                         distribution = "invgamma",
                         parameters   = list(shape = 1, scale = 0.15)),
                       priors_bias          = NULL,
                       priors_effect_null   = NULL,
                       transformation = "cohens_d",
                       seed = 2, parallel = TRUE)
```

```{r}
summary(fit_RoBMA_test)

summary(fit_RoBMA_est)
```

The output from the `summary.RoBMA()` function has two parts. The first part, under the "Robust Bayesian Meta-Analysis" heading, provides a basic summary of the fitted models by component type (presence of the Effect/Heterogeneity/Publication bias). We can see that there are no models correcting for publication bias (we disabled them by setting `priors_bias = NULL`). Furthermore, the table summarizes the prior and posterior probabilities and the inclusion Bayes factors of the individual components. The results for the half-Cauchy model specified for testing show that the inclusion Bayes factor is essentially identical to the one computed by the `metaBMA` package, $\text{BF}_{10} = 33.11$. The second part, under the 'Model-averaged estimates' heading, displays the parameter estimates. The results for the unrestricted Cauchy model specified for estimation show an effect size estimate of $\mu = 0.22$, 95% CI [0.10, 0.35], which also mirrors the one obtained from the `metaBMA` package.
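As noted above, `metaBMA` averages the effect size estimate only across the models assuming the presence of the effect. A comparable conditional summary can also be requested from the fitted `RoBMA` objects via the `conditional` argument of the summary method; a minimal sketch (output not shown):

``` r
# conditional estimates: averaged only across the models assuming the
# presence of the given component (closer to what metaBMA reports)
summary(fit_RoBMA_test, conditional = TRUE)
```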
### Visualizing the Results

RoBMA provides extensive options for visualizing the results. Here, we visualize the prior (grey) and posterior (black) distribution of the mean parameter.

```{r fig_mu_est, dpi = 300, fig.width = 4, fig.height = 4, out.width = "50%", fig.align = "center"}
plot(fit_RoBMA_est, parameter = "mu", prior = TRUE, xlim = c(-1, 1))
```

If we visualize the effect size from the model specified for testing, we notice a few more things. By default, the function plots the model-averaged estimates across all models (here, this also includes the models assuming the absence of the effect). The arrows stand for the probability of a spike, here at the value 0. The secondary y-axis (right) shows that the probability of the value 0 decreased from 0.50 to 0.03 (also obtainable from the "Robust Bayesian Meta-Analysis" field of the `summary.RoBMA()` output). Furthermore, the continuous prior distribution for the effect size under the alternative hypothesis is truncated to positive numbers, as specified when fitting the models.

```{r fig_mu_test, dpi = 300, fig.width = 4, fig.height = 4, out.width = "50%", fig.align = "center"}
plot(fit_RoBMA_test, parameter = "mu", prior = TRUE, xlim = c(-.5, 1))
```

We can also visualize the estimates from the individual models used in the ensemble. We do that with the `plot_models()` function, which visualizes the effect size estimates and 95% CIs of the specified models from the estimation ensemble (Model 1 corresponds to the fixed-effect model and Model 2 to the random-effects model). The size of the square representing the mean estimate reflects the posterior model probability of the model, which is also displayed in the right-hand panel. The bottom part of the figure shows the model-averaged estimate, which is a combination of the individual models' posterior distributions weighted by the posterior model probabilities.

```{r fig_models, dpi = 300, fig.height = 4.5, fig.width = 7, out.width = '75%', fig.align = "center"}
plot_models(fit_RoBMA_est)
```

The last type of visualization that we show here is the forest plot. It displays the original studies' effect sizes and the meta-analytic estimate within one figure and can be requested with the `forest()` function.

```{r fig_forest, dpi = 300, fig.height = 4.5, fig.width = 7, out.width = '75%', fig.align = "center"}
forest(fit_RoBMA_est)
```

For more options provided by the plotting functions, see their documentation using `?plot.RoBMA()`, `?plot_models()`, and `?forest()`.

### References