Title: | Alternative Meta-Analysis Methods |
---|---|
Description: | Provides alternative statistical methods for meta-analysis, including: - bivariate generalized linear mixed models for synthesizing odds ratios, relative risks, and risk differences (Chu et al., 2012 <doi:10.1177/0962280210393712>) - heterogeneity tests and measures and penalization methods that are robust to outliers (Lin et al., 2017 <doi:10.1111/biom.12543>; Wang et al., 2022 <doi:10.1002/sim.9261>); - measures, tests, and visualization tools for publication bias or small-study effects (Lin and Chu, 2018 <doi:10.1111/biom.12817>; Lin, 2019 <doi:10.1002/jrsm.1340>; Lin, 2020 <doi:10.1177/0962280220910172>; Shi et al., 2020 <doi:10.1002/jrsm.1415>); - meta-analysis of combining standardized mean differences and odds ratios (Jing et al., 2023 <doi:10.1080/10543406.2022.2105345>); - meta-analysis of diagnostic tests for synthesizing sensitivities, specificities, etc. (Reitsma et al., 2005 <doi:10.1016/j.jclinepi.2005.02.022>; Chu and Cole, 2006 <doi:10.1016/j.jclinepi.2006.06.011>); - meta-analysis methods for synthesizing proportions (Lin and Chu, 2020 <doi:10.1097/ede.0000000000001232>); - models for multivariate meta-analysis, measures of inconsistency degrees of freedom in Bayesian network meta-analysis, and predictive P-score (Lin and Chu, 2018 <doi:10.1002/jrsm.1293>; Lin, 2020 <doi:10.1080/10543406.2020.1852247>; Rosenberger et al., 2021 <doi:10.1186/s12874-021-01397-5>). |
Authors: | Lifeng Lin [aut, cre] , Yaqi Jing [ctb], Kristine J. Rosenberger [ctb], Linyu Shi [ctb], Yipeng Wang [ctb], Haitao Chu [aut] |
Maintainer: | Lifeng Lin <[email protected]> |
License: | GPL (>= 2) |
Version: | 4.2 |
Built: | 2024-11-07 06:43:10 UTC |
Source: | CRAN |
This meta-analysis serves as an example to illustrate the usage of the function meta.pen.
data("dat.ha")
data("dat.ha")
A data frame containing 32 studies.
y
the observed effect size (mean difference) for each study in the meta-analysis.
s2
the within-study variance for each study.
Adams SP, Sekhon SS, Tsang M, Wright JM (2018). "Fluvastatin for lowering lipids." Cochrane Database of Systematic Reviews, 7. Art. No.: CD012282. <doi:10.1002/14651858.CD012282.pub2>
This meta-analysis serves as an example to illustrate function usage in the package altmeta.
data("dat.aex")
data("dat.aex")
A data frame containing 29 studies with the observed effect sizes and their within-study variances.
y
the observed effect size for each collected study in the meta-analysis.
s2
the within-study variance for each study.
Ismail I, Keating SE, Baker MK, Johnson NA (2012). "A systematic review and meta-analysis of the effect of aerobic vs. resistance exercise training on visceral fat." Obesity Reviews, 13(1), 68–91. <doi:10.1111/j.1467-789X.2011.00931.x>
This dataset serves as an example of meta-analysis of mean differences.
data("dat.annane")
data("dat.annane")
A data frame with 12 studies with the following 5 variables within each study.
y
point estimates of mean differences.
s2
sample variances of mean differences.
n1
sample sizes in treatment group 1 (steroids).
n2
sample sizes in treatment group 2 (control).
n
total sample sizes.
Annane D, Bellissant E, Bollaert PE, Briegel J, Keh D, Kupfer Y (2015). "Corticosteroids for treating sepsis." Cochrane Database of Systematic Reviews, 12, Art. No.: CD002243. <doi:10.1002/14651858.CD002243.pub3>
This dataset serves as an example of network meta-analysis with binary outcomes.
data("dat.baker")
data("dat.baker")
A dataset of network meta-analysis with binary outcomes, containing 38 studies and 5 treatments.
sid
study IDs.
tid
treatment IDs.
r
event counts.
n
sample sizes.
Treatment IDs represent: 1) placebo; 2) inhaled corticosteroid; 3) inhaled corticosteroid + long-acting beta-agonist; 4) long-acting beta-agonist; and 5) tiotropium.
Baker WL, Baker EL, Coleman CI (2009). "Pharmacologic treatments for chronic obstructive pulmonary disease: a mixed-treatment comparison meta-analysis." Pharmacotherapy, 29(8), 891–905. <doi:10.1592/phco.29.8.891>
This dataset serves as an example of meta-analysis of standardized mean differences.
data("dat.barlow")
data("dat.barlow")
A data frame with 26 studies with the following 5 variables within each study.
y
point estimates of standardized mean differences.
s2
sample variances of standardized mean differences.
n1
sample sizes in treatment group 1 (parent training programs).
n2
sample sizes in treatment group 2 (control).
n
total sample sizes.
Barlow J, Smailagic N, Huband N, Roloff V, Bennett C (2014). "Group-based parent training programmes for improving parental psychosocial health." Cochrane Database of Systematic Reviews, 5, Art. No.: CD002020. <doi:10.1002/14651858.CD002020.pub4>
This dataset serves as an example of meta-analysis of proportions.
data("dat.beck17")
data("dat.beck17")
A data frame with 6 studies with the following 2 variables within each study.
e
event counts of samples with depression or depressive symptoms.
n
sample sizes.
The original article by Rotenstein et al. (2016) stratified all extracted studies based on various screening instruments and cutoff scores. This dataset focuses on the meta-analysis of 6 studies with a Beck Depression Inventory score ≥ 17.
Rotenstein LS, Ramos MA, Torre M, Segal JB, Peluso MJ, Guille C, Sen S, Mata DA (2016). "Prevalence of depression, depressive symptoms, and suicidal ideation among medical students: a systematic review and meta-analysis." JAMA, 316(21), 2214–2236. <doi:10.1001/jama.2016.17324>
This meta-analysis serves as an example of meta-analysis with binary outcomes.
data("dat.bellamy")
data("dat.bellamy")
A data frame containing 20 cohort studies with the following 4 variables.
sid
study IDs.
tid
treatment/exposure IDs (0: non-exposure; 1: exposure).
e
event counts.
n
sample sizes.
Bellamy L, Casas JP, Hingorani AD, Williams D (2009). "Type 2 diabetes mellitus after gestational diabetes: a systematic review and meta-analysis." Lancet, 373(9677), 1773–1779. <doi:10.1016/S0140-6736(09)60731-5>
This meta-analysis serves as an example to illustrate the usage of the function meta.pen.
data("dat.ha")
data("dat.ha")
A data frame containing 53 studies.
n1
the sample size in the treatment group in each study.
n2
the sample size in the control group in each study.
r1
the event count in the treatment group in each study.
r2
the event count in the control group in each study.
y
the observed effect size (log odds ratio) for each study in the meta-analysis.
s2
the within-study variance for each study.
Bjelakovic G, Gluud LL, Nikolova D, Whitfield K, Wetterslev J, Simonetti RG, Bjelakovic M, Gluud C (2014). "Vitamin D supplementation for prevention of mortality in adults." Cochrane Database of Systematic Reviews, 7. Art. No.: CD007470. <doi:10.1002/14651858.CD007470.pub3>
This meta-analysis serves as an example to illustrate the usage of the function meta.pen.
data("dat.ha")
data("dat.ha")
A data frame containing 21 studies.
n1
the sample size in the treatment group in each study.
n2
the sample size in the control group in each study.
r1
the event count in the treatment group in each study.
r2
the event count in the control group in each study.
y
the observed effect size (log odds ratio) for each study in the meta-analysis.
s2
the within-study variance for each study.
Bohren MA, Hofmeyr GJ, Sakala C, Fukuzawa RK, Cuthbert A (2017). "Continuous support for women during childbirth." Cochrane Database of Systematic Reviews, 7. Art. No.: CD003766. <doi:10.1002/14651858.CD003766.pub6>
This dataset serves as an example of meta-analysis of (log) odds ratios.
data("dat.butters")
data("dat.butters")
A data frame with 16 studies with the following 7 variables within each study.
y
point estimates of log odds ratios.
s2
sample variances of log odds ratios.
n1
sample sizes in treatment group 1 (addition of drug).
n2
sample sizes in treatment group 2 (control).
r1
event counts in treatment group 1.
r2
event counts in treatment group 2.
n
total sample sizes.
Butters DJ, Ghersi D, Wilcken N, Kirk SJ, Mallon PT (2010). "Addition of drug/s to a chemotherapy regimen for metastatic breast cancer." Cochrane Database of Systematic Reviews, 11, Art. No.: CD003368. <doi:10.1002/14651858.CD003368.pub3>
This meta-analysis serves as an example to illustrate the usage of the function meta.pen.
data("dat.ha")
data("dat.ha")
A data frame containing 20 studies.
n1
the sample size in the treatment group in each study.
n2
the sample size in the control group in each study.
r1
the event count in the treatment group in each study.
r2
the event count in the control group in each study.
y
the observed effect size (log odds ratio) for each study in the meta-analysis.
s2
the within-study variance for each study.
Carless PA, Rubens FD, Anthony DM, O'Connell D, Henry DA (2011). "Platelet-rich-plasmapheresis for minimising peri-operative allogeneic blood transfusion." Cochrane Database of Systematic Reviews, 3. Art. No.: CD004172. <doi:10.1002/14651858.CD004172.pub2>
This dataset serves as an example of meta-analysis of proportions.
data("dat.chor")
data("dat.chor")
A data frame with 21 studies with the following 2 variables within each study.
e
event counts of chorioamnionitis.
n
sample sizes.
Woodd SL, Montoya A, Barreix M, Pi L, Calvert C, Rehman AM, Chou D, Campbell OMR (2019). "Incidence of maternal peripartum infection: a systematic review and meta-analysis." PLOS Medicine, 16(12), e1002984. <doi:10.1371/journal.pmed.1002984>
This dataset serves as an example of meta-analysis of combining standardized mean differences and odds ratios.
data("dat.dep")
data("dat.dep")
A data frame with 6 studies with the following 15 variables within each study.
author
The first author of each study.
year
The publication year of each study.
treatment
The treatment group.
control
The control group.
y1
The sample mean in the treatment group for the continuous outcome.
sd1
The sample standard deviation in the treatment group for the continuous outcome.
n1
The sample size in the treatment group for the continuous outcome.
y0
The sample mean in the control group for the continuous outcome.
sd0
The sample standard deviation in the control group for the continuous outcome.
n0
The sample size in the control group for the continuous outcome.
r1
The event count in the treatment group for the binary outcome.
m1
The sample size in the treatment group for the binary outcome.
r0
The event count in the control group for the binary outcome.
m0
The sample size in the control group for the binary outcome.
id.bin
An indicator of whether the outcome is binary (1) or continuous (0).
This dataset is from Cipriani et al. (2016), comparing the efficacy and tolerability of antidepressants for major depressive disorders in children and adolescents. Our case study focuses on efficacy. The authors originally performed a network meta-analysis; however, here we restrict the comparison to fluoxetine and placebo. The continuous outcomes are measured by the mean overall changes in depressive symptoms from baseline to endpoint. For the binary outcomes, events are defined as whether patients' depression rating scores were reduced by at least a specified cutoff value.
Cipriani A, Zhou X, Del Giovane C, Hetrick SE, Qin B, Whittington C, Coghill D, Zhang Y, Hazell P, Leucht S, Cuijpers P, Pu J, Cohen D, Ravindran AV, Liu Y, Michael KD, Yang L, Liu L, Xie P (2016). "Comparative efficacy and tolerability of antidepressants for major depressive disorder in children and adolescents: a network meta-analysis." The Lancet, 388(10047), 881–890. <doi:10.1016/S0140-6736(16)30385-3>
This meta-analysis serves as an example of meta-analysis with binary outcomes.
data("dat.ducharme")
data("dat.ducharme")
A data frame containing 33 studies with the following 4 variables within each study.
n00
counts of non-events in treatment group 0 (placebo).
n01
counts of events in treatment group 0 (placebo).
n10
counts of non-events in treatment group 1 (beta2-agonists).
n11
counts of events in treatment group 1 (beta2-agonists).
The original review collected 35 studies; two studies with zero events in both groups are excluded from this dataset because their odds ratios are not estimable.
Ducharme FM, Ni Chroinin M, Greenstone I, Lasserson TJ (2010). "Addition of long-acting beta2-agonists to inhaled corticosteroids versus same dose inhaled corticosteroids for chronic asthma in adults and children." Cochrane Database of Systematic Reviews, 5, Art. No.: CD005535. <doi:10.1002/14651858.CD005535.pub2>
This multivariate meta-analysis serves as an example to illustrate function usage in the package altmeta. It consists of 31 studies with 4 outcomes.
data("dat.fib")
data("dat.fib")
A list containing three elements: y, S, and sd.
y
a 31 x 4 numeric matrix containing the observed effect sizes; the rows represent studies and the columns represent outcomes.
S
a list containing 31 elements; each element is the within-study covariance matrix of the corresponding study.
sd
a 31 x 4 numeric matrix containing the within-study standard deviations; the rows represent studies and the columns represent outcomes.
Fibrinogen Studies Collaboration (2004). "Collaborative meta-analysis of prospective studies of plasma fibrinogen and cardiovascular disease." European Journal of Cardiovascular Prevention and Rehabilitation, 11(1), 9–17. <doi:10.1097/01.hjr.0000114968.39211.01>
Fibrinogen Studies Collaboration (2005). "Plasma fibrinogen level and the risk of major cardiovascular diseases and nonvascular mortality: an individual participant meta-analysis." JAMA, 294(14), 1799–1809. <doi:10.1001/jama.294.14.1799>
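Because this dataset is a list rather than a data frame, a quick inspection may help; this is a minimal sketch assuming the altmeta package is installed and attached.
library(altmeta)
data("dat.fib")
dim(dat.fib$y)      # 31 studies x 4 outcomes of observed effect sizes
length(dat.fib$S)   # 31 within-study covariance matrices, one per study
dat.fib$S[[1]]      # 4 x 4 covariance matrix of the first study
dim(dat.fib$sd)     # 31 x 4 matrix of within-study standard deviations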
This meta-analysis serves as an example to illustrate function usage in the package altmeta.
data("dat.ha")
data("dat.ha")
A data frame containing 109 studies with the observed effect sizes and their within-study variances.
y
the observed effect size for each collected study in the meta-analysis.
s2
the within-study variance for each study.
Hrobjartsson A, Gotzsche PC (2010). "Placebo interventions for all clinical conditions." Cochrane Database of Systematic Reviews, 1. Art. No.: CD003974. <doi:10.1002/14651858.CD003974.pub3>
This meta-analysis serves as an example of meta-analysis with binary outcomes.
data("dat.henry")
data("dat.henry")
A data frame containing 26 studies with the following 4 variables within each study.
n00
counts of non-events in treatment group 0 (placebo).
n01
counts of events in treatment group 0 (placebo).
n10
counts of non-events in treatment group 1 (tranexamic acid).
n11
counts of events in treatment group 1 (tranexamic acid).
The original review collected 27 studies; one study with zero events in both groups is excluded from this dataset because its odds ratio is not estimable.
Henry DA, Carless PA, Moxey AJ, O'Connell D, Stokes BJ, Fergusson DA, Ker K (2011). "Anti-fibrinolytic use for minimising perioperative allogeneic blood transfusion." Cochrane Database of Systematic Reviews, 1, Art. No.: CD001886. <doi:10.1002/14651858.CD001886.pub3>
This meta-analysis serves as an example to illustrate function usage in the package altmeta.
data("dat.hipfrac")
data("dat.hipfrac")
A data frame containing 17 studies with the observed effect sizes and their within-study variances.
y
the observed effect size for each collected study in the meta-analysis.
s2
the within-study variance for each study.
Haentjens P, Magaziner J, Colon-Emeric CS, Vanderschueren D, Milisen K, Velkeniers B, Boonen S (2010). "Meta-analysis: excess mortality after hip fracture among older women and men". Annals of Internal Medicine, 152(6), 380–390. <doi:10.7326/0003-4819-152-6-201003160-00008>
This dataset serves as an example of meta-analysis of risk differences.
data("dat.kaner")
data("dat.kaner")
A data frame with 13 studies with the following 7 variables within each study.
y
point estimates of risk differences.
s2
sample variances of risk differences.
n1
sample sizes in treatment group 1 (brief alcohol interventions).
n2
sample sizes in treatment group 2 (control).
r1
event counts in treatment group 1.
r2
event counts in treatment group 2.
n
total sample sizes.
Kaner EF, Dickinson HO, Beyer FR, Campbell F, Schlesinger C, Heather N, Saunders JB, Burnand B, Pienaar ED (2007). "Effectiveness of brief alcohol interventions in primary care populations." Cochrane Database of Systematic Reviews, 2, Art. No.: CD004148. <doi:10.1002/14651858.CD004148.pub3>
This meta-analysis serves as an example to illustrate function usage in the package altmeta.
data("dat.lcj")
data("dat.lcj")
A data frame containing 33 studies with the observed effect sizes and their within-study variances.
y
the observed effect size for each collected study in the meta-analysis.
s2
the within-study variance for each study.
Liu CJ, Latham NK (2009). "Progressive resistance strength training for improving physical function in older adults." Cochrane Database of Systematic Reviews, 3. Art. No.: CD002759. <doi:10.1002/14651858.CD002759.pub2>
This dataset serves as an example of meta-analysis of standardized mean differences.
data("dat.paige")
data("dat.paige")
A data frame with 6 studies with the following 5 variables within each study.
y
point estimates of standardized mean differences.
s2
sample variances of standardized mean differences.
n1
sample sizes in treatment group 1 (spinal manipulation).
n2
sample sizes in treatment group 2 (comparator).
n
total sample sizes.
Paige NM, Miake-Lye IM, Booth MS, Beroes JM, Mardian AS, Dougherty P, Branson R, Tang B, Morton SC, Shekelle PG (2017). "Association of spinal manipulative therapy with clinical benefit and harm for acute low back pain: systematic review and meta-analysis." JAMA, 317(14), 1451–1460. <doi:10.1001/jama.2017.3086>
This dataset serves as an example of meta-analysis of mean differences.
data("dat.plourde")
data("dat.plourde")
A data frame with 19 studies with the following 5 variables within each study.
y
point estimates of mean differences.
s2
sample variances of mean differences.
n1
sample sizes in treatment group 1 (radial).
n2
sample sizes in treatment group 2 (femoral).
n
total sample sizes.
Plourde G, Pancholy SB, Nolan J, Jolly S, Rao SV, Amhed I, Bangalore S, Patel T, Dahm JB, Bertrand OF (2015). "Radiation exposure in relation to the arterial access site used for diagnostic coronary angiography and percutaneous coronary intervention: a systematic review and meta-analysis." Lancet, 386(10009), 2192–2203. <doi:10.1016/S0140-6736(15)00305-0>
This meta-analysis serves as an example of meta-analysis with binary outcomes.
data("dat.poole")
data("dat.poole")
A data frame containing 24 studies with the following 4 variables within each study.
n00
counts of non-events in treatment group 0 (placebo).
n01
counts of events in treatment group 0 (placebo).
n10
counts of non-events in treatment group 1 (mucolytic).
n11
counts of events in treatment group 1 (mucolytic).
Poole P, Chong J, Cates CJ (2015). "Mucolytic agents versus placebo for chronic bronchitis or chronic obstructive pulmonary disease." Cochrane Database of Systematic Reviews, 7, Art. No.: CD001287. <doi:10.1002/14651858.CD001287.pub5>
This dataset serves as an example to illustrate network meta-analysis of multiple factors. It consists of 29 studies on a total of 8 risk factors: area of residence (rural vs. urban); education attainment (low vs. high); latitude of residence (low vs. high); occupation type (outdoor vs. indoor); smoking status (yes vs. no); use of hat (yes vs. no); use of spectacles (yes vs. no); and use of sunglasses (yes vs. no). Each study only investigates a subset of the 8 risk factors, so the dataset contains many missing values.
data("dat.pte")
data("dat.pte")
A list containing two elements: y and se.
y
a 29 x 8 numeric matrix containing the observed effect sizes; the rows represent studies and the columns represent outcomes.
se
a 29 x 8 numeric matrix containing the within-study standard errors; the rows represent studies and the columns represent outcomes.
Serghiou S, Patel CJ, Tan YY, Koay P, Ioannidis JPA (2016). "Field-wide meta-analyses of observational associations can map selective availability of risk factors and the impact of model specifications." Journal of Clinical Epidemiology, 71, 58–67. <doi:10.1016/j.jclinepi.2015.09.004>
The dataset is extracted from Lu and Ades (2006); it was initially reported by Hasselblad (1998) (without performing a formal network meta-analysis).
data("dat.sc")
data("dat.sc")
A dataset of network meta-analysis with binary outcomes, containing 24 studies and 4 treatments.
sid
study IDs.
tid
treatment IDs.
r
event counts.
n
sample sizes.
Treatment IDs represent: 1) no contact; 2) self-help; 3) individual counseling; and 4) group counseling.
Hasselblad V (1998). "Meta-analysis of multitreatment studies." Medical Decision Making, 18(1), 37–43. <doi:10.1177/0272989X9801800110>
Lu G, Ades AE (2006). "Assessing evidence inconsistency in mixed treatment comparisons." Journal of the American Statistical Association, 101(474), 447–459. <doi:10.1198/016214505000001302>
This meta-analysis serves as an example of meta-analyses of diagnostic tests.
data("dat.scheidler")
data("dat.scheidler")
A data frame with 44 studies with the following 5 variables; each row represents a study.
dt
types of diagnostic tests; CT: computed tomography; LAG: lymphangiography; and MRI: magnetic resonance imaging.
tp
counts of true positives.
fp
counts of false positives.
fn
counts of false negatives.
tn
counts of true negatives.
Scheidler J, Hricak H, Yu KK, Subak L, Segal MR (1997). "Radiological evaluation of lymph node metastases in patients with cervical cancer: a meta-analysis." JAMA, 278(13), 1096–1101. <doi:10.1001/jama.1997.03550130070040>
This meta-analysis serves as an example to illustrate function usage in the package altmeta.
data("dat.slf")
data("dat.slf")
A data frame containing 56 studies with the observed effect sizes and their within-study variances.
y
the observed effect size for each collected study in the meta-analysis.
s2
the within-study variance for each study.
Stead LF, Perera R, Bullen C, Mant D, Hartmann-Boyce J, Cahill K, Lancaster T (2012). "Nicotine replacement therapy for smoking cessation." Cochrane Database of Systematic Reviews, 11. Art. No.: CD000146. <doi:10.1002/14651858.CD000146.pub4>
This meta-analysis serves as an example of meta-analyses of diagnostic tests.
data("dat.smith")
data("dat.smith")
A data frame with 30 studies with the following 4 variables; each row represents a study.
tp
counts of true positives.
fp
counts of false positives.
fn
counts of false negatives.
tn
counts of true negatives.
Smith TO, Back T, Toms AP, Hing CB (2011). "Diagnostic accuracy of ultrasound for rotator cuff tears in adults: a systematic review and meta-analysis." Clinical Radiology, 66(11), 1036–1048. <doi:10.1016/j.crad.2011.05.007>
This dataset serves as an example of meta-analysis of (log) odds ratios.
data("dat.whiting")
data("dat.whiting")
A data frame with 29 studies with the following 9 variables within each study.
y
point estimates of log odds ratios.
s2
sample variances of log odds ratios.
n00
counts of non-events in treatment group 0 (placebo).
n01
counts of events in treatment group 0.
n10
counts of non-events in treatment group 1 (cannabinoid).
n11
counts of events in treatment group 1 (cannabinoid).
n0
sample sizes in treatment group 0.
n1
sample sizes in treatment group 1.
n
total sample sizes.
Whiting PF, Wolff RF, Deshpande S, Di Nisio M, Duffy S, Hernandez AV, Keurentjes JC, Lang S, Misso K, Ryder S, Schmidlkofer S, Westwood M, Kleijnen J (2015). "Cannabinoids for medical use: a systematic review and meta-analysis." JAMA, 313(24), 2456–2473. <doi:10.1001/jama.2015.6358>
This dataset serves as an example of meta-analysis of (log) relative risks.
data("dat.williams")
data("dat.williams")
A data frame with 20 studies with the following 7 variables within each study.
y
point estimates of log relative risks.
s2
sample variances of log relative risks.
n1
sample sizes in treatment group 1 (medication).
n2
sample sizes in treatment group 2 (placebo).
r1
event counts in treatment group 1.
r2
event counts in treatment group 2.
n
total sample sizes.
Williams T, Hattingh CJ, Kariuki CM, Tromp SA, van Balkom AJ, Ipser JC, Stein DJ (2017). "Pharmacotherapy for social anxiety disorder (SAnD)." Cochrane Database of Systematic Reviews, 10, Art. No.: CD001206. <doi:10.1002/14651858.CD001206.pub3>
This network meta-analysis investigates the effects of seven immune checkpoint inhibitor (ICI) drugs on all-grade treatment-related adverse events (TrAEs). It aimed to provide a safety ranking of the ICI drugs for the treatment of cancer.
data("dat.xu")
data("dat.xu")
A dataset of network meta-analysis with binary outcomes, containing 23 studies and 7 treatments.
sid
study IDs.
tid
treatment IDs.
r
event counts.
n
sample sizes.
Treatment IDs represent: 1) conventional therapy; 2) nivolumab; 3) pembrolizumab; 4) two ICIs; 5) ICI and conventional therapy; 6) atezolizumab; and 7) ipilimumab.
Xu C, Chen YP, Du XJ, Liu JQ, Huang CL, Chen L, Zhou GQ, Li WF, Mao YP, Hsu C, Liu Q, Lin AH, Tang LL, Sun Y, Ma J (2018). "Comparative safety of immune checkpoint inhibitors in cancer: systematic review and network meta-analysis." BMJ, 363, k4226. <doi:10.1136/bmj.k4226>
Performs a meta-analysis of proportions using generalized linear mixed models (GLMMs) with various link functions.
maprop.glmm(e, n, data, link = "logit", alpha = 0.05, pop.avg = TRUE, int.approx = 10000, b.iter = 1000, seed = 1234, ...)
e |
a numeric vector specifying the event counts in the collected studies. |
n |
a numeric vector specifying the sample sizes in the collected studies. |
data |
an optional data frame containing the meta-analysis dataset. If data is specified, the previous arguments, e and n, should be specified as their corresponding column names in data. |
link |
a character string specifying the link function used in the GLMM, which can be one of "logit" (the default), "log", "probit", "cauchit", and "cloglog". |
alpha |
a numeric value specifying the statistical significance level. |
pop.avg |
a logical value indicating whether the population-averaged proportion and its confidence interval are to be produced. This quantity is the marginal mean of study-specific proportions, while the commonly-reported overall proportion usually represents the median (or interpreted as a conditional measure); see more details about this quantity in Section 13.2.3 in Agresti (2013), Chu et al. (2012), Lin and Chu (2020), and Zeger et al. (1988). If pop.avg = TRUE (the default), the population-averaged proportion and its bootstrap confidence interval are produced in addition to the commonly-reported (conditional) proportion. |
int.approx |
an integer specifying the number of independent standard normal samples for numerically approximating the integration involved in the calculation of the population-averaged proportion; see details in Lin and Chu (2020). It is only used when pop.avg = TRUE. |
b.iter |
an integer specifying the number of bootstrap iterations; it is only used when pop.avg = TRUE. |
seed |
an integer for specifying the seed of the random number generation for reproducibility during the bootstrap resampling (and numerical approximation for the population-averaged proportion); it is only used when pop.avg = TRUE. |
... |
other arguments that can be passed to the function glmer in the package lme4. |
This function returns a list containing the point and interval estimates of the overall proportion. Specifically, prop.c.est is the commonly-reported median (or conditional) proportion, and prop.c.ci is its confidence interval. It also returns information about AIC, BIC, log likelihood, deviance, and residual degrees of freedom. If pop.avg = TRUE, the following additional elements will also be in the produced list: prop.c.ci.b is the bootstrap confidence interval of the commonly-reported median (conditional) proportion, prop.m.est is the point estimate of the population-averaged (marginal) proportion, prop.m.ci.b is the bootstrap confidence interval of the population-averaged (marginal) proportion, and b.w.e is a vector of two numeric values giving the counts of warnings and errors that occurred during the bootstrap iterations.
This function implements the GLMM for the meta-analysis of proportions via the function glmer in the package lme4. The GLMM estimation algorithm may not converge for some bootstrapped meta-analyses when pop.avg = TRUE, and glmer may report warnings or errors about the convergence issue. The bootstrap iterations are continued until b.iter replicates without any warnings or errors are obtained; replicates with warnings or errors are discarded.
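For intuition, the logit-link random-intercept model described above can be written directly with lme4. This is a minimal sketch, not the package's internal code; the study-ID column sid is created here purely for illustration (it is not a column of dat.chor).
library(altmeta)
library(lme4)
data("dat.chor")
dat.chor$sid <- seq_len(nrow(dat.chor))  # illustrative study IDs
fit <- glmer(cbind(e, n - e) ~ 1 + (1 | sid),
             data = dat.chor, family = binomial(link = "logit"))
plogis(fixef(fit))  # commonly-reported (conditional/median) proportion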
Agresti A (2013). Categorical Data Analysis. Third edition. John Wiley & Sons, Hoboken, NJ.
Bakbergenuly I, Kulinskaya E (2018). "Meta-analysis of binary outcomes via generalized linear mixed models: a simulation study." BMC Medical Research Methodology, 18, 70. <doi:10.1186/s12874-018-0531-9>
Chu H, Nie L, Chen Y, Huang Y, Sun W (2012). "Bivariate random effects models for meta-analysis of comparative studies with binary outcomes: methods for the absolute risk difference and relative risk." Statistical Methods in Medical Research, 21(6), 621–633. <doi:10.1177/0962280210393712>
Hamza TH, van Houwelingen HC, Stijnen T (2008). "The binomial distribution of meta-analysis was preferred to model within-study variability." Journal of Clinical Epidemiology, 61(1), 41–51. <doi:10.1016/j.jclinepi.2007.03.016>
Lin L, Chu H (2020). "Meta-analysis of proportions using generalized linear mixed models." Epidemiology, 31(5), 713–717. <doi:10.1097/ede.0000000000001232>
Stijnen T, Hamza TH, Ozdemir P (2010). "Random effects meta-analysis of event outcome in the framework of the generalized linear mixed model with applications in sparse data." Statistics in Medicine, 29(29), 3046–3067. <doi:10.1002/sim.4040>
Zeger SL, Liang K-Y, Albert PS (1988). "Models for longitudinal data: a generalized estimating equation approach." Biometrics, 44(4), 1049–1060. <doi:10.2307/2531734>
# chorioamnionitis data data("dat.chor") # GLMM with the logit link with only 10 bootstrap iterations out.chor.glmm.logit <- maprop.glmm(e, n, data = dat.chor, link = "logit", b.iter = 10, seed = 1234) out.chor.glmm.logit # not calculating the population-averaged (marginal) proportion, # without bootstrap resampling out.chor.glmm.logit <- maprop.glmm(e, n, data = dat.chor, link = "logit", pop.avg = FALSE) out.chor.glmm.logit # increases the number of bootstrap iterations to 1000, # taking longer time out.chor.glmm.logit <- maprop.glmm(e, n, data = dat.chor, link = "logit", b.iter = 1000, seed = 1234) out.chor.glmm.logit # GLMM with the log link out.chor.glmm.log <- maprop.glmm(e, n, data = dat.chor, link = "log", b.iter = 10, seed = 1234) out.chor.glmm.log # GLMM with the probit link out.chor.glmm.probit <- maprop.glmm(e, n, data = dat.chor, link = "probit", b.iter = 10, seed = 1234) out.chor.glmm.probit # GLMM with the cauchit link out.chor.glmm.cauchit <- maprop.glmm(e, n, data = dat.chor, link = "cauchit", b.iter = 10, seed = 1234) out.chor.glmm.cauchit # GLMM with the cloglog link out.chor.glmm.cloglog <- maprop.glmm(e, n, data = dat.chor, link = "cloglog", b.iter = 10, seed = 1234) out.chor.glmm.cloglog # depression data data("dat.beck17") out.beck17.glmm.log <- maprop.glmm(e, n, data = dat.beck17, link = "log", b.iter = 10, seed = 1234) out.beck17.glmm.log out.beck17.glmm.logit <- maprop.glmm(e, n, data = dat.beck17, link = "logit", b.iter = 10, seed = 1234) out.beck17.glmm.logit out.beck17.glmm.probit <- maprop.glmm(e, n, data = dat.beck17, link = "probit", b.iter = 10, seed = 1234) out.beck17.glmm.probit out.beck17.glmm.cauchit <- maprop.glmm(e, n, data = dat.beck17, link = "cauchit", b.iter = 10, seed = 1234) out.beck17.glmm.cauchit out.beck17.glmm.cloglog<- maprop.glmm(e, n, data = dat.beck17, link = "cloglog", b.iter = 10, seed = 1234) out.beck17.glmm.cloglog
Performs a meta-analysis of proportions using conventional two-step methods with various data transformations.
maprop.twostep(e, n, data, link = "logit", method = "ML", alpha = 0.05, pop.avg = TRUE, int.approx = 10000, b.iter = 1000, seed = 1234)
e |
a numeric vector specifying the event counts in the collected studies. |
n |
a numeric vector specifying the sample sizes in the collected studies. |
data |
an optional data frame containing the meta-analysis dataset. If data is specified, the previous arguments, e and n, should be specified as their corresponding column names in data. |
link |
a character string specifying the data transformation for each study's proportion used in the two-step method, which can be one of "logit" (the default), "log", "arcsine", and "double.arcsine" (the Freeman–Tukey double-arcsine transformation). |
method |
a character string specifying the method to perform the meta-analysis, which is passed to the argument method of the function rma.uni in the package metafor. The default is "ML" (maximum likelihood). |
alpha |
a numeric value specifying the statistical significance level. |
pop.avg |
a logical value indicating whether the population-averaged proportion and its confidence interval are to be produced. This quantity is the marginal mean of study-specific proportions, while the commonly-reported overall proportion usually represents the median (or interpreted as a conditional measure); see more details about this quantity in Section 13.2.3 in Agresti (2013), Chu et al. (2012), Lin and Chu (2020), and Zeger et al. (1988). If pop.avg = TRUE (the default), the population-averaged proportion and its bootstrap confidence interval are produced in addition to the commonly-reported (conditional) proportion. |
int.approx |
an integer specifying the number of independent standard normal samples for numerically approximating the integration involved in the calculation of the population-averaged proportion; see details in Lin and Chu (2020). It is only used when pop.avg = TRUE. |
b.iter |
an integer specifying the number of bootstrap iterations; it is only used when pop.avg = TRUE. |
seed |
an integer for specifying the seed of the random number generation for reproducibility during the bootstrap resampling (and numerical approximation for the population-averaged proportion); it is only used when pop.avg = TRUE. |
This function returns a list containing the point and interval estimates of the overall proportion. Specifically, prop.c.est is the commonly-reported median (or conditional) proportion, and prop.c.ci is its confidence interval. If pop.avg = TRUE, the following additional elements will also be in the produced list: prop.c.ci.b is the bootstrap confidence interval of the commonly-reported median (conditional) proportion, prop.m.est is the point estimate of the population-averaged (marginal) proportion, prop.m.ci.b is the bootstrap confidence interval of the population-averaged (marginal) proportion, and b.w.e is a vector of two numeric values giving the counts of warnings and errors that occurred during the bootstrap iterations. Moreover, if the Freeman–Tukey double-arcsine transformation (link = "double.arcsine") is used, the back-transformation is implemented at four values of the overall sample size: the harmonic, geometric, and arithmetic means of the study-specific sample sizes, and the inverse of the synthesized result's variance. See details in Barendregt et al. (2013) and Schwarzer et al. (2019).
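As an illustration of the transformation itself, the following sketch codes the Freeman–Tukey double-arcsine transform and Miller's (1978) inverse by hand; the helper names ft and ft.inv are illustrative and are not functions of this package.
library(altmeta)
data("dat.chor")
ft <- function(e, n) 0.5 * (asin(sqrt(e / (n + 1))) + asin(sqrt((e + 1) / (n + 1))))
ft.inv <- function(t, n) {  # Miller (1978) back-transformation at overall sample size n
  0.5 * (1 - sign(cos(2 * t)) *
           sqrt(1 - (sin(2 * t) + (sin(2 * t) - 1 / sin(2 * t)) / n)^2))
}
t.i <- ft(dat.chor$e, dat.chor$n)
n.harm <- 1 / mean(1 / dat.chor$n)   # harmonic mean of the sample sizes
ft.inv(mean(t.i), n.harm)            # back-transformed unweighted mean, for illustration only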
This function implements the two-step method for the meta-analysis of proportions via the function rma.uni in the package metafor. The maximum likelihood or restricted maximum likelihood estimation may not converge for some bootstrapped meta-analyses when pop.avg = TRUE, and rma.uni may report warnings or errors about the convergence issue. The bootstrap iterations are continued until b.iter replicates without any warnings or errors are obtained; replicates with warnings or errors are discarded.
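For intuition, the two-step approach with the logit transformation can be reproduced directly with metafor; this is a sketch of the general idea rather than the exact internal call.
library(altmeta)
library(metafor)
data("dat.chor")
dat <- escalc(measure = "PLO", xi = e, ni = n, data = dat.chor)  # step 1: study-level logit proportions
fit <- rma.uni(yi, vi, data = dat, method = "ML")                # step 2: random-effects synthesis
transf.ilogit(c(fit$beta, fit$ci.lb, fit$ci.ub))                 # back to the proportion scale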
Agresti A (2013). Categorical Data Analysis. Third edition. John Wiley & Sons, Hoboken, NJ.
Barendregt JJ, Doi SA, Lee YY, Norman RE, Vos T (2013). "Meta-analysis of prevalence." Journal of Epidemiology and Community Health, 67(11), 974–978. <doi:10.1136/jech-2013-203104>
Freeman MF, Tukey JW (1950). "Transformations related to the angular and the square root." The Annals of Mathematical Statistics, 21(4), 607–611. <doi:10.1214/aoms/1177729756>
Lin L, Chu H (2020). "Meta-analysis of proportions using generalized linear mixed models." Epidemiology, 31(5), 713–717. <doi:10.1097/ede.0000000000001232>
Miller JJ (1978). "The inverse of the Freeman–Tukey double arcsine transformation." The American Statistician, 32(4), 138. <doi:10.1080/00031305.1978.10479283>
Schwarzer G, Chemaitelly H, Abu-Raddad LJ, Rucker G (2019). "Seriously misleading results using inverse of Freeman-Tukey double arcsine transformation in meta-analysis of single proportions." Research Synthesis Methods, 10(3), 476–483. <doi:10.1002/jrsm.1348>
Viechtbauer W (2010). "Conducting meta-analyses in R with the metafor package." Journal of Statistical Software, 36, 3. <doi:10.18637/jss.v036.i03>
Zeger SL, Liang K-Y, Albert PS (1988). "Models for longitudinal data: a generalized estimating equation approach." Biometrics, 44(4), 1049–1060. <doi:10.2307/2531734>
# chorioamnionitis data data("dat.chor") # two-step method with the logit transformation out.chor.twostep.logit <- maprop.twostep(e, n, data = dat.chor, link = "logit", b.iter = 10, seed = 1234) out.chor.twostep.logit # not calculating the population-averaged (marginal) proportion, # without bootstrap resampling out.chor.twostep.logit <- maprop.twostep(e, n, data = dat.chor, link = "logit", pop.avg = FALSE) out.chor.twostep.logit # increases the number of bootstrap iterations to 1000, # taking longer time out.chor.twostep.logit <- maprop.twostep(e, n, data = dat.chor, link = "logit", b.iter = 1000, seed = 1234) out.chor.twostep.logit # two-step method with the log transformation out.chor.twostep.log <- maprop.twostep(e, n, data = dat.chor, link = "log", b.iter = 10, seed = 1234) out.chor.twostep.log # two-step method with the arcsine transformation out.chor.twostep.arcsine <- maprop.twostep(e, n, data = dat.chor, link = "arcsine", b.iter = 10, seed = 1234) out.chor.twostep.arcsine # two-step method with the Freeman--Tukey double-arcsine transformation out.chor.twostep.double.arcsine <- maprop.twostep(e, n, data = dat.chor, link = "double.arcsine", b.iter = 10, seed = 1234) out.chor.twostep.double.arcsine # depression data data("dat.beck17") out.beck17.twostep.log <- maprop.twostep(e, n, data = dat.beck17, link = "log", b.iter = 10, seed = 1234) out.beck17.twostep.log out.beck17.twostep.logit <- maprop.twostep(e, n, data = dat.beck17, link = "logit", b.iter = 10, seed = 1234) out.beck17.twostep.logit out.beck17.twostep.arcsine <- maprop.twostep(e, n, data = dat.beck17, link = "arcsine", b.iter = 10, seed = 1234) out.beck17.twostep.arcsine out.beck17.twostep.double.arcsine <- maprop.twostep(e, n, data = dat.beck17, link = "double.arcsine", b.iter = 10, seed = 1234) out.beck17.twostep.double.arcsine
Performs a meta-analysis with a binary outcome using a bivariate generalized linear mixed model (GLMM) described in Chu et al. (2012).
meta.biv(sid, tid, e, n, data, link = "logit", alpha = 0.05, b.iter = 1000, seed = 1234, ...)
sid |
a vector specifying the study IDs. |
tid |
a vector of 0/1 specifying the treatment/exposure IDs (0: control/non-exposure; 1: treatment/exposure). |
e |
a numeric vector specifying the event counts. |
n |
a numeric vector specifying the sample sizes. |
data |
an optional data frame containing the meta-analysis dataset. If data is specified, the previous arguments, sid, tid, e, and n, should be specified as their corresponding column names in data. |
link |
a character string specifying the link function used in the GLMM, which can be either "logit" (the default) or "probit". |
alpha |
a numeric value specifying the statistical significance level. |
b.iter |
an integer specifying the number of bootstrap iterations, which are used to produce confidence intervals of marginal results. |
seed |
an integer for specifying the seed of the random number generation for reproducibility during the bootstrap resampling. |
... |
other arguments that can be passed to the function glmer in the package lme4 for fitting the bivariate GLMM. |
Suppose a meta-analysis with a binary outcome contains $N$ studies. Let $n_{i0}$ and $n_{i1}$ be the sample sizes in the control/non-exposure and treatment/exposure groups in study $i$, respectively, and let $e_{i0}$ and $e_{i1}$ be the corresponding event counts ($i = 1, \ldots, N$). The event counts are assumed to independently follow binomial distributions:
$$e_{i0} \sim \mathrm{Bin}(n_{i0}, p_{i0}), \qquad e_{i1} \sim \mathrm{Bin}(n_{i1}, p_{i1}),$$
where $p_{i0}$ and $p_{i1}$ represent the true event probabilities. They are modeled jointly as follows:
$$g(p_{i0}) = \mu_0 + \nu_{i0}, \qquad g(p_{i1}) = \mu_1 + \nu_{i1}, \qquad (\nu_{i0}, \nu_{i1})^\top \sim N(\mathbf{0}, \mathbf{\Sigma}).$$
Here, $g(\cdot)$ denotes the link function that transforms the event probabilities to linear forms. The fixed effects $\mu_0$ and $\mu_1$ represent the overall event probabilities on the transformed scale. The study-specific parameters $\nu_{i0}$ and $\nu_{i1}$ are random effects, which are assumed to follow the bivariate normal distribution with zero means and variance-covariance matrix $\mathbf{\Sigma}$. The diagonal elements of $\mathbf{\Sigma}$ are $\sigma_0^2$ and $\sigma_1^2$ (between-study variances due to heterogeneity), and the off-diagonal elements are $\rho \sigma_0 \sigma_1$, where $\rho$ is the correlation coefficient.
When using the logit link, $\mu_1 - \mu_0$ represents the log odds ratio (Van Houwelingen et al., 1993; Stijnen et al., 2010; Jackson et al., 2018); $\exp(\mu_1 - \mu_0)$ may be referred to as the conditional odds ratio (Agresti, 2013). Alternatively, we can obtain the marginal event probabilities (Chu et al., 2012):
$$p_k = E\!\left[\frac{\exp(\mu_k + \nu_{ik})}{1 + \exp(\mu_k + \nu_{ik})}\right] \approx \frac{\exp\!\left(\mu_k/\sqrt{1 + C^2 \sigma_k^2}\right)}{1 + \exp\!\left(\mu_k/\sqrt{1 + C^2 \sigma_k^2}\right)}$$
for $k = 0$ and 1, where $C = 16\sqrt{3}/(15\pi)$. The marginal odds ratio, relative risk, and risk difference are subsequently obtained as $[p_1/(1 - p_1)]/[p_0/(1 - p_0)]$, $p_1/p_0$, and $p_1 - p_0$, respectively.
When using the probit link, the model does not yield the conditional odds ratio. The marginal probabilities have closed-form solutions:
$$p_k = \Phi\!\left(\mu_k/\sqrt{1 + \sigma_k^2}\right)$$
for $k = 0$ and 1, where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution. They further lead to the marginal odds ratio, relative risk, and risk difference.
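The closed-form marginal quantities under the probit link are straightforward to compute once model estimates are available; the parameter values below are assumed purely for illustration.
mu0 <- -1.2; mu1 <- -0.8   # assumed fixed effects on the probit scale
sig0 <- 0.5; sig1 <- 0.6   # assumed between-study standard deviations
p0 <- pnorm(mu0 / sqrt(1 + sig0^2))   # marginal event probability, group 0
p1 <- pnorm(mu1 / sqrt(1 + sig1^2))   # marginal event probability, group 1
c(OR = (p1 / (1 - p1)) / (p0 / (1 - p0)), RR = p1 / p0, RD = p1 - p0)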
This function returns a list containing the point and interval estimates of the marginal event rates (p0.m, p0.m.ci, p1.m, and p1.m.ci), odds ratio (OR.m and OR.m.ci), relative risk (RR.m and RR.m.ci), risk difference (RD.m and RD.m.ci), and correlation coefficient between the two treatment/exposure groups (rho and rho.ci). These interval estimates are obtained using bootstrap resampling. During the bootstrap resampling, computational warnings or errors may occur when fitting the bivariate GLMM to some resampled meta-analyses; this function returns the counts of warnings and errors (b.w.e). The resampled meta-analyses that lead to warnings or errors are not used for producing the bootstrap confidence intervals; the bootstrap iterations stop after obtaining b.iter resampled meta-analyses without warnings or errors. If the logit link is used (link = "logit"), the function also returns the point and interval estimates of the conditional odds ratio (OR.c and OR.c.ci), which are more frequently reported in the current literature than the marginal odds ratios. Unlike the marginal results, whose confidence intervals are produced by bootstrap resampling, the Wald-type confidence interval is calculated for the log conditional odds ratio and then transformed to the odds ratio scale.
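The Wald-type interval for the conditional odds ratio mentioned above amounts to the following back-transformation; the estimate and standard error are assumed for illustration.
log.or <- 0.35; se.log.or <- 0.12; alpha <- 0.05            # assumed estimate, SE, and level
exp(log.or + c(-1, 1) * qnorm(1 - alpha / 2) * se.log.or)   # CI on the odds ratio scale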
Agresti A (2013). Categorical Data Analysis. Third edition. John Wiley & Sons, Hoboken, NJ.
Chu H, Nie L, Chen Y, Huang Y, Sun W (2012). "Bivariate random effects models for meta-analysis of comparative studies with binary outcomes: methods for the absolute risk difference and relative risk." Statistical Methods in Medical Research, 21(6), 621–633. <doi:10.1177/0962280210393712>
Jackson D, Law M, Stijnen T, Viechtbauer W, White IR (2018). "A comparison of seven random-effects models for meta-analyses that estimate the summary odds ratio." Statistics in Medicine, 37(7), 1059–1085. <doi:10.1002/sim.7588>
Stijnen T, Hamza TH, Ozdemir P (2010). "Random effects meta-analysis of event outcome in the framework of the generalized linear mixed model with applications in sparse data." Statistics in Medicine, 29(29), 3046–3067. <doi:10.1002/sim.4040>
Van Houwelingen HC, Zwinderman KH, Stijnen T (1993). "A bivariate approach to meta-analysis." Statistics in Medicine, 12(24), 2273–2284. <doi:10.1002/sim.4780122405>
data("dat.bellamy") out.bellamy.logit <- meta.biv(sid, tid, e, n, data = dat.bellamy, link = "logit", b.iter = 1000) out.bellamy.logit out.bellamy.probit <- meta.biv(sid, tid, e, n, data = dat.bellamy, link = "probit", b.iter = 1000) out.bellamy.probit
data("dat.bellamy") out.bellamy.logit <- meta.biv(sid, tid, e, n, data = dat.bellamy, link = "logit", b.iter = 1000) out.bellamy.logit out.bellamy.probit <- meta.biv(sid, tid, e, n, data = dat.bellamy, link = "probit", b.iter = 1000) out.bellamy.probit
Performs a meta-analysis of diagnostic tests using approaches described in Reitsma et al. (2005) and Chu and Cole (2006).
meta.dt(tp, fp, fn, tn, data, method = "biv.glmm", alpha = 0.05, ...)
tp |
counts of true positives. |
fp |
counts of false positives. |
fn |
counts of false negatives. |
tn |
counts of true negatives. |
data |
an optional data frame containing the meta-analysis dataset. If data is specified, the previous arguments, tp, fp, fn, and tn, should be specified as their corresponding column names in data. |
method |
a character string specifying the method used to implement the meta-analysis of diagnostic tests. It should be one of "s.roc" (the summary ROC approach), "biv.lmm" (the bivariate linear mixed model), and "biv.glmm" (the bivariate generalized linear mixed model, the default). |
alpha |
a numeric value specifying the statistical significance level. |
... |
other arguments that can be passed to the function rma.mv in the package metafor (when method = "biv.lmm") or the function glmer in the package lme4 (when method = "biv.glmm"). |
Suppose a meta-analysis of diagnostic tests contains $N$ studies. Each study reports the counts of true positives, false positives, false negatives, and true negatives, denoted by $TP_i$, $FP_i$, $FN_i$, and $TN_i$, respectively. The study-specific estimates of sensitivity and specificity are calculated as $\widehat{Se}_i = TP_i/(TP_i + FN_i)$ and $\widehat{Sp}_i = TN_i/(FP_i + TN_i)$ for $i = 1, \ldots, N$. They are analyzed on the logarithmic scale in the meta-analysis. When using the summary ROC (receiver operating characteristic) approach or the bivariate linear mixed model, 0.5 needs to be added to all four counts in a study when at least one count is zero.
The summary ROC approach first calculates
$$D_i = \mathrm{logit}(\widehat{Se}_i) + \mathrm{logit}(\widehat{Sp}_i), \qquad S_i = \mathrm{logit}(\widehat{Se}_i) - \mathrm{logit}(\widehat{Sp}_i),$$
where $D_i$ represents the log diagnostic odds ratio (DOR) in study $i$. A linear regression is then fitted:
$$D_i = \alpha + \beta \cdot S_i + \varepsilon_i.$$
The regression could be either unweighted or weighted; this function performs both versions. If weighted, the study-specific weights are the inverses of the variances of $D_i$, i.e., $(1/TP_i + 1/FP_i + 1/FN_i + 1/TN_i)^{-1}$. Based on the estimated regression intercept $\hat{\alpha}$ and slope $\hat{\beta}$, one may obtain the DOR at the mean of $S_i$, the Q point, the summary ROC curve, and the area under the curve (AUC). The Q point is the point on the summary ROC curve where sensitivity and specificity are equal. The summary ROC curve is given by
$$Se = \left[1 + e^{-\hat{\alpha}/(1-\hat{\beta})}\left(\frac{Sp}{1 - Sp}\right)^{(1+\hat{\beta})/(1-\hat{\beta})}\right]^{-1}.$$
See more details of the summary ROC approach in Moses et al. (1993) and Irwig et al. (1995).
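A minimal sketch of the unweighted summary ROC regression, written by hand for the MRI subset of dat.scheidler (not the package's internal code):
library(altmeta)
data("dat.scheidler")
d <- dat.scheidler[dat.scheidler$dt == "MRI", ]
has0 <- with(d, tp == 0 | fp == 0 | fn == 0 | tn == 0)
d[has0, c("tp", "fp", "fn", "tn")] <- d[has0, c("tp", "fp", "fn", "tn")] + 0.5  # zero-count correction
se <- d$tp / (d$tp + d$fn)
sp <- d$tn / (d$fp + d$tn)
D <- qlogis(se) + qlogis(sp)   # log diagnostic odds ratio
S <- qlogis(se) - qlogis(sp)
coef(lm(D ~ S))                # intercept and slope of the unweighted regression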
The bivariate linear mixed model described in Reitsma et al. (2005) assumes that the logit sensitivity and logit specificity independently follow normal distributions within each study: $g(\widehat{Se}_i) \sim N(\theta_{i1}, s_{i1}^2)$ and $g(\widehat{Sp}_i) \sim N(\theta_{i2}, s_{i2}^2)$, where $g(\cdot)$ denotes the logit function. The within-study variances are calculated as $s_{i1}^2 = 1/TP_i + 1/FN_i$ and $s_{i2}^2 = 1/FP_i + 1/TN_i$. The parameters $\theta_{i1}$ and $\theta_{i2}$ are the underlying true sensitivity and specificity (on the logit scale) in study $i$. They are assumed to be random effects, jointly following a bivariate normal distribution:
$$(\theta_{i1}, \theta_{i2})^\top \sim N\!\left((\mu_1, \mu_2)^\top, \mathbf{\Sigma}\right),$$
where $\mathbf{\Sigma}$ is the between-study variance-covariance matrix. The diagonal elements of $\mathbf{\Sigma}$ are $\sigma_1^2$ and $\sigma_2^2$, representing the heterogeneity variances of sensitivities and specificities (on the logit scale), respectively. The correlation coefficient is $\rho$.
The bivariate generalized linear mixed model described in Chu and Cole (2006) refines the bivariate linear mixed model by directly modeling the counts of true positives and true negatives. This approach does not require the assumption that the logit sensitivity and logit specificity approximately follow normal distributions within studies, which could be seriously violated in the presence of small data counts. It also avoids corrections for zero counts. Specifically, the counts of true positives and true negatives are modeled with binomial likelihoods:
$$TP_i \sim \mathrm{Bin}(TP_i + FN_i,\, Se_i), \qquad TN_i \sim \mathrm{Bin}(FP_i + TN_i,\, Sp_i), \qquad (g(Se_i), g(Sp_i))^\top \sim N\!\left((\mu_1, \mu_2)^\top, \mathbf{\Sigma}\right).$$
See more details in Chu and Cole (2006) and Ma et al. (2016).
For both the bivariate linear mixed model and the bivariate generalized linear mixed model, $\mu_1$ and $\mu_2$ represent the overall sensitivity and specificity (on the logit scale) across studies, respectively, and $\mu_1 + \mu_2$ represents the log DOR. The summary ROC curve may be constructed as
$$Se = g^{-1}\!\left(\mu_1 + \rho\,\frac{\sigma_1}{\sigma_2}\left[g(Sp) - \mu_2\right]\right).$$
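Given estimates of the bivariate-model parameters, the summary ROC curve above can be traced as follows; the parameter values are assumed only for illustration.
mu1 <- 1.8; mu2 <- 2.2; sig1 <- 0.6; sig2 <- 0.8; rho <- -0.4   # assumed estimates
fpr <- seq(0.001, 0.999, length.out = 200)                      # 1 - specificity
sens <- plogis(mu1 + rho * (sig1 / sig2) * (qlogis(1 - fpr) - mu2))
plot(fpr, sens, type = "l", xlab = "1 - specificity", ylab = "Sensitivity")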
This function returns a list of the meta-analysis results. When method = "s.roc", the list consists of the regression intercept (inter.unwtd), slope (slope.unwtd), their variance-covariance matrix (vcov.unwtd), the DOR at the mean of $S_i$ (DOR.meanS.unwtd) with its confidence interval (DOR.meanS.unwtd.ci), the Q point (Q.unwtd) with its confidence interval (Q.unwtd.ci), and the AUC (AUC.unwtd) for the unweighted regression; it also consists of the counterparts for the weighted regression. When method = "biv.lmm" or "biv.glmm", the list consists of the overall sensitivity (sens.overall) with its confidence interval (sens.overall.ci), the overall specificity (spec.overall) with its confidence interval (spec.overall.ci), the overall DOR (DOR.overall) with its confidence interval (DOR.overall.ci), the AUC (AUC), the estimated $\mu_1$ (mu.sens), $\mu_2$ (mu.spec), their variance-covariance matrix (mu.vcov), the estimated $\sigma_1$ (sig.sens), $\sigma_2$ (sig.spec), and $\rho$ (rho). In addition, the list includes the method used to perform the meta-analysis of diagnostic tests (method), the significance level (alpha), and the original data (data).
The original articles by Reitsma et al. (2005) and Chu and Cole (2006) used SAS to implement the (generalized) linear mixed models (specifically, PROC MIXED and PROC NLMIXED); this function imports rma.mv from the package metafor and glmer from the package lme4 for implementing these models. The estimation approaches adopted in SAS and in the R packages metafor and lme4 may differ, which may impact the results. See, for example, Zhang et al. (2011).
Lifeng Lin, Kristine J. Rosenberger
Chu H, Cole SR (2006). "Bivariate meta-analysis of sensitivity and specificity with sparse data: a generalized linear mixed model approach." Journal of Clinical Epidemiology, 59(12), 1331–1332. <doi:10.1016/j.jclinepi.2006.06.011>
Irwig L, Macaskill P, Glasziou P, Fahey M (1995). "Meta-analytic methods for diagnostic test accuracy." Journal of Clinical Epidemiology, 48(1), 119–130. <doi:10.1016/0895-4356(94)00099-C>
Ma X, Nie L, Cole SR, Chu H (2016). "Statistical methods for multivariate meta-analysis of diagnostic tests: an overview and tutorial." Statistical Methods in Medical Research, 25(4), 1596–1619. <doi:10.1177/0962280213492588>
Moses LE, Shapiro D, Littenberg B (1993). "Combining independent studies of a diagnostic test into a summary ROC curve: data-analytic approaches and some additional considerations." Statistics in Medicine, 12(14), 1293–1316. <doi:10.1002/sim.4780121403>
Reitsma JB, Glas AS, Rutjes AWS, Scholten RJPM, Bossuyt PM, Zwinderman AH (2005). "Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews." Journal of Clinical Epidemiology, 58(10), 982–990. <doi:10.1016/j.jclinepi.2005.02.022>
Zhang H, Lu N, Feng C, Thurston SW, Xia Y, Zhu L, Tu XM (2011). "On fitting generalized linear mixed-effects models for binary responses using different statistical packages." Statistics in Medicine, 30(20), 2562–2572. <doi:10.1002/sim.4265>
maprop.twostep, meta.biv, plot.meta.dt, print.meta.dt
data("dat.scheidler") out1 <- meta.dt(tp, fp, fn, tn, data = dat.scheidler[dat.scheidler$dt == "MRI",], method = "s.roc") out1 plot(out1) out2 <- meta.dt(tp, fp, fn, tn, data = dat.scheidler[dat.scheidler$dt == "MRI",], method = "biv.lmm") out2 plot(out2, predict = TRUE) out3 <- meta.dt(tp, fp, fn, tn, data = dat.scheidler[dat.scheidler$dt == "MRI",], method = "biv.glmm") out3 plot(out3, add = TRUE, studies = FALSE, col.roc = "blue", col.overall = "blue", col.confid = "blue", predict = TRUE,col.predict = "blue") data("dat.smith") out4 <- meta.dt(tp, fp, fn, tn, data = dat.smith, method = "biv.glmm") out4 plot(out4, predict = TRUE)
data("dat.scheidler") out1 <- meta.dt(tp, fp, fn, tn, data = dat.scheidler[dat.scheidler$dt == "MRI",], method = "s.roc") out1 plot(out1) out2 <- meta.dt(tp, fp, fn, tn, data = dat.scheidler[dat.scheidler$dt == "MRI",], method = "biv.lmm") out2 plot(out2, predict = TRUE) out3 <- meta.dt(tp, fp, fn, tn, data = dat.scheidler[dat.scheidler$dt == "MRI",], method = "biv.glmm") out3 plot(out3, add = TRUE, studies = FALSE, col.roc = "blue", col.overall = "blue", col.confid = "blue", predict = TRUE,col.predict = "blue") data("dat.smith") out4 <- meta.dt(tp, fp, fn, tn, data = dat.smith, method = "biv.glmm") out4 plot(out4, predict = TRUE)
Performs a Bayesian meta-analysis to synthesize standardized mean differences (SMDs) for a continuous outcome and odds ratios (ORs) for a binary outcome.
meta.or.smd(y1, sd1, n1, y0, sd0, n0, r1, m1, r0, m0, id.bin, data, n.adapt = 1000, n.chains = 3, n.burnin = 5000, n.iter = 20000, n.thin = 2, seed = 1234)
y1 |
a vector specifying the sample means in the treatment group for the continuous outcome. |
sd1 |
a vector specifying the sample standard deviations in the treatment group for the continuous outcome. |
n1 |
a vector specifying the sample sizes in the treatment group for the continuous outcome. |
y0 |
a vector specifying the sample means in the control group for the continuous outcome. |
sd0 |
a vector specifying the sample standard deviations in the control group for the continuous outcome. |
n0 |
a vector specifying the sample sizes in the control group for the continuous outcome. |
r1 |
a vector specifying the event counts in the treatment group for the binary outcome. |
m1 |
a vector specifying the sample sizes in the treatment group for the binary outcome. |
r0 |
a vector specifying the event counts in the control group for the binary outcome. |
m0 |
a vector specifying the sample sizes in the control group for the binary outcome. |
id.bin |
a vector indicating whether the outcome is binary (1) or continuous (0). |
data |
an optional data frame containing the meta-analysis dataset. If |
n.adapt |
the number of iterations for adaptation in the Markov chain Monte Carlo (MCMC) algorithm. The default is 1,000. This argument and the following |
n.chains |
the number of MCMC chains. The default is 3. |
n.burnin |
the number of iterations for burn-in period. The default is 5,000. |
n.iter |
the total number of iterations in each MCMC chain after the burn-in period. The default is 20,000. |
n.thin |
a positive integer specifying thinning rate. The default is 2. |
seed |
an integer for specifying the seed of the random number generation for reproducibility during the MCMC algorithm for performing the Bayesian meta-analysis model. |
The Bayesian meta-analysis model implemented by this function is detailed in Section 2.5 of Jing et al. (2023).
"This function returns a list of Bayesian estimates, including posterior medians and 95% credible intervals (comprising the 2.5% and 97.5% posterior quantiles) for the overall SMD (d
), the between-study standard deviation (tau
), and the individual studies' SMDs (theta
).
Yaqi Jing, Lifeng Lin
Jing Y, Murad MH, Lin L (2023). "A Bayesian model for combining standardized mean differences and odds ratios in the same meta-analysis." Journal of Biopharmaceutical Statistics, 33(2), 167–190. <doi:10.1080/10543406.2022.2105345>
data("dat.dep") out <- meta.or.smd(y1, sd1, n1, y0, sd0, n0, r1, m1, r0, m0, id.bin, data = dat.dep) out
data("dat.dep") out <- meta.or.smd(y1, sd1, n1, y0, sd0, n0, r1, m1, r0, m0, id.bin, data = dat.dep) out
Performs the penalization methods introduced in Wang et al. (2022) to achieve a compromise between the common-effect and random-effects models.
meta.pen(y, s2, data, tuning.para = "tau", upp = 1, n.cand = 100, tol = 1e-10)
y |
a numeric vector or the corresponding column name in the argument data, specifying the observed effect sizes in the collected studies. |
s2 |
a numeric vector or the corresponding column name in the argument data, specifying the within-study variances. |
data |
an optional data frame containing the meta-analysis dataset. If |
tuning.para |
a character string specifying the type of tuning parameter used in the penalization method. It should be one of "lambda" (use lambda as a tuning parameter) or "tau" (use the standard deviation as a tuning parameter). The default is "tau". |
upp |
a positive scalar used to control the upper bound of the range for the tuning parameter. Specifically, the candidate values of the tuning parameter are selected from the interval [0, T* x upp], where T* is the threshold described in "Details". The default is 1. |
n.cand |
the total number of candidate values of the tuning parameter within the specified range. The default is 100. |
tol |
the desired accuracy (convergence tolerance). The default is 1e-10. |
Suppose a meta-analysis collects $n$ independent studies. Let $\theta_i$ be the true effect size in study $i$ ($i = 1, \ldots, n$). Each study reports an estimate of the effect size and its sample variance, denoted by $y_i$ and $s_i^2$, respectively. These data are commonly modeled as $y_i$ drawn from $N(\theta_i, s_i^2)$. If the study-specific true effect sizes $\theta_i$ are assumed i.i.d. from $N(\mu, \tau^2)$, this is the random-effects (RE) model, where $\mu$ is the overall effect size and $\tau^2$ is the between-study variance. If $\tau^2 = 0$ and thus $\theta_i = \mu$ for all studies, the studies are homogeneous and the RE model reduces to the common-effect (CE) model.
Marginally, the RE model yields $y_i \sim N(\mu, s_i^2 + \tau^2)$, and its log-likelihood is
$$\ell(\mu, \tau^2) = -\frac{1}{2} \sum_{i=1}^{n} \left[ \log(s_i^2 + \tau^2) + \frac{(y_i - \mu)^2}{s_i^2 + \tau^2} \right] + c,$$
where $c$ is a constant. In the past two decades, penalization methods have been rapidly developed for variable selection in high-dimensional data analysis to control model complexity and reduce the variance of parameter estimates. Borrowing the idea from the penalization methods, we employ a penalty term on the between-study variance $\tau^2$ when the heterogeneity is overestimated. The penalty term increases with $\tau^2$. Specifically, we consider the following optimization problem:
$$\min_{\mu, \tau^2} \; \left\{ \sum_{i=1}^{n} \left[ \log(s_i^2 + \tau^2) + \frac{(y_i - \mu)^2}{s_i^2 + \tau^2} \right] + \lambda \, p(\tau^2) \right\},$$
where $\lambda \geq 0$ is a tuning parameter that controls the penalty strength and $p(\tau^2)$ is a penalty function that increases with $\tau^2$. Using the profile-likelihood technique, the target function is first minimized with respect to $\mu$ for a given $\tau^2$, so the bivariate optimization problem reduces to a univariate minimization problem. When $\lambda = 0$, the minimization problem is equivalent to maximizing the log-likelihood without the penalty, so the penalized-likelihood method is identical to the conventional RE model. By contrast, it can be shown that a sufficiently large $\lambda$ produces an estimated between-study variance of 0, leading to the conventional CE model.
As different tuning parameters lead to different estimates of $\mu$ and $\tau^2$, it is important to select the optimal $\lambda$ among a set of candidate values. We perform a cross-validation process and construct a loss function of $\lambda$ to measure the performance of specific $\lambda$ values. The $\lambda$ corresponding to the smallest loss is considered optimal. A threshold, denoted by $T^{*}$ and based on the penalty function, can be calculated; for all $\lambda$ beyond this threshold, the estimated between-study variance is 0. Consequently, we select a certain number of candidate values (e.g., 100) from the range $[0, T^{*}]$ for the tuning parameter. For a set of tuning parameters, the leave-one-study-out (i.e., $n$-fold) cross-validation is used to construct the loss function. Specifically, we use the following loss function for the penalization method by tuning $\lambda$:
$$\hat{L}(\lambda) = \sum_{i=1}^{n} \frac{\left[ y_i - \hat{\mu}_{(-i)}(\lambda) \right]^2}{s_i^2 + \hat{\tau}^2_{(-i), \mathrm{RE}}},$$
where the subscript $(-i)$ indicates that study $i$ is removed, $\hat{\tau}^2_{(-i)}(\lambda)$ is the estimated between-study variance for a given $\lambda$, $\hat{\tau}^2_{(-i), \mathrm{RE}}$ is the corresponding between-study variance estimate of the RE model, and $\hat{\mu}_{(-i)}(\lambda)$ is the corresponding estimate of the overall effect size.

The above procedure focuses on tuning the parameter $\lambda$ to control the penalty strength for the between-study variance. Alternatively, for the purpose of shrinking the potentially overestimated heterogeneity, we can directly treat the between-study standard deviation (SD), $\tau$, as the tuning parameter. A set of candidate values of $\tau$ is considered, and the value that produces the minimum loss function is selected. Compared with tuning $\lambda$ from the perspective of penalized likelihood, tuning $\tau$ is more straightforward and intuitive from the practical perspective. The candidate values of $\tau$ can be naturally chosen from $[0, \hat{\tau}_{\mathrm{RE}}]$, with the lower and upper bounds corresponding to the CE and RE models, respectively. Denoting the candidate SDs by $\tau_t$, the loss function with respect to $\tau_t$ is similarly defined as
$$\hat{L}(\tau_t) = \sum_{i=1}^{n} \frac{\left[ y_i - \hat{\mu}_{(-i)}(\tau_t) \right]^2}{s_i^2 + \hat{\tau}^2_{(-i), \mathrm{RE}}},$$
where the overall effect size estimate (excluding study $i$) is
$$\hat{\mu}_{(-i)}(\tau_t) = \frac{\sum_{j \neq i} y_j / (s_j^2 + \tau_t^2)}{\sum_{j \neq i} 1 / (s_j^2 + \tau_t^2)}.$$
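As a concrete illustration of the penalized estimation described above, the following minimal R sketch (not the package implementation) profiles out the overall effect for a given between-study variance and minimizes a penalized objective on a toy dataset; the simple penalty term lambda * tau2 and all object names here are assumptions made only for illustration.

pen.obj <- function(tau2, y, s2, lambda) {
  w <- 1 / (s2 + tau2)
  mu <- sum(w * y) / sum(w)                      # profile out mu for the given tau2
  sum(log(s2 + tau2) + w * (y - mu)^2) + lambda * tau2
}
y  <- c(0.2, 0.4, -0.1, 1.5, 0.3)                # toy effect sizes
s2 <- c(0.04, 0.09, 0.05, 0.06, 0.08)            # toy within-study variances
## a larger lambda shrinks the estimated between-study variance toward 0
optimize(pen.obj, interval = c(0, 5), y = y, s2 = s2, lambda = 2)$minimum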
This function returns a list containing estimates of the overall effect size and their 95% confidence intervals. Specifically, the components include:
n |
the number of studies in the meta-analysis. |
I2 |
the I^2 statistic for quantifying heterogeneity. |
tau2.re |
the maximum likelihood estimate of the between-study variance. |
mu.fe |
the estimated overall effect size of the fixed-effect (FE) model. |
se.fe |
the standard deviation of the overall effect size estimate of the fixed-effect (FE) model. |
mu.re |
the estimated overall effect size of the random-effects (RE) model. |
se.re |
the standard deviation of the overall effect size estimate of the random-effects (RE) model. |
loss |
the values of the loss function for candidate tuning parameters. |
tau.cand |
the candidate values of the tuning parameter when tuning.para = "tau". |
lambda.cand |
the candidate values of the tuning parameter when tuning.para = "lambda". |
tau.opt |
the estimated between-study standard deviation of the penalization method. |
mu.opt |
the estimated overall effect size of the penalization method. |
se.opt |
the standard error estimate of the estimated overall effect size of the penalization method. |
Yipeng Wang [email protected]
Wang Y, Lin L, Thompson CG, Chu H (2022). "A penalization approach to random-effects meta-analysis." Statistics in Medicine, 41(3), 500–516. <doi:10.1002/sim.9261>
data("dat.bohren") ## log odds ratio ## perform the penaliztion method by tuning tau out11 <- meta.pen(y, s2, dat.bohren) ## plot the loss function and candidate taus plot(out11$tau.cand, out11$loss, xlab = NA, ylab = NA, lwd = 1.5, type = "l", cex.lab = 0.8, cex.axis = 0.8) title(xlab = expression(paste(tau[t])), ylab = expression(paste("Loss function ", ~hat(L)(tau[t]))), cex.lab = 0.8, line = 2) idx <- which(out11$loss == min(out11$loss)) abline(v = out11$tau.cand[idx], col = "gray", lwd = 1.5, lty = 2) ## perform the penaliztion method by tuning lambda out12 <- meta.pen(y, s2, dat.bohren, tuning.para = "lambda") ## plot the loss function and candidate lambdas plot(log(out12$lambda.cand + 1), out12$loss, xlab = NA, ylab = NA, lwd=1.5, type = "l", cex.lab = 0.8, cex.axis = 0.8) title(xlab = expression(log(lambda + 1)), ylab = expression(paste("Loss function ", ~hat(L)(lambda))), cex.lab = 0.8, line = 2) idx <- which(out12$loss == min(out12$loss)) abline(v = log(out12$lambda.cand[idx] + 1), col = "gray", lwd = 1.5, lty = 2) data("dat.bjelakovic") ## log odds ratio ## perform the penaliztion method by tuning tau out21 <- meta.pen(y, s2, dat.bjelakovic) ## plot the loss function and candidate taus plot(out21$tau.cand, out21$loss, xlab = NA, ylab = NA, lwd=1.5, type = "l", cex.lab = 0.8, cex.axis = 0.8) title(xlab = expression(paste(tau[t])), ylab = expression(paste("Loss function ", ~hat(L)(tau[t]))), cex.lab = 0.8, line = 2) idx <- which(out21$loss == min(out21$loss)) abline(v = out21$tau.cand[idx], col = "gray", lwd = 1.5, lty = 2) out22 <- meta.pen(y, s2, dat.bjelakovic, tuning.para = "lambda") data("dat.carless") ## log odds ratio ## perform the penaliztion method by tuning tau out31 <- meta.pen(y, s2, dat.carless) ## plot the loss function and candidate taus plot(out31$tau.cand, out31$loss, xlab = NA, ylab = NA, lwd=1.5, type = "l", cex.lab = 0.8, cex.axis = 0.8) title(xlab = expression(paste(tau[t])), ylab = expression(paste("Loss function ", ~hat(L)(tau[t]))), cex.lab = 0.8, line = 2) idx <- which(out31$loss == min(out31$loss)) abline(v = out31$tau.cand[idx], col = "gray", lwd = 1.5, lty = 2) out32 <- meta.pen(y, s2, dat.carless, tuning.para = "lambda") data("dat.adams") ## mean difference out41 <- meta.pen(y, s2, dat.adams) ## plot the loss function and candidate taus plot(out41$tau.cand, out41$loss, xlab = NA, ylab = NA, lwd=1.5, type = "l", cex.lab = 0.8, cex.axis = 0.8) title(xlab = expression(paste(tau[t])), ylab = expression(paste("Loss function ", ~hat(L)(tau[t]))), cex.lab = 0.8, line = 2) idx <- which(out41$loss == min(out41$loss)) abline(v = out41$tau.cand[idx], col = "gray", lwd = 1.5, lty = 2) out42 <- meta.pen(y, s2, dat.adams, tuning.para = "lambda")
data("dat.bohren") ## log odds ratio ## perform the penaliztion method by tuning tau out11 <- meta.pen(y, s2, dat.bohren) ## plot the loss function and candidate taus plot(out11$tau.cand, out11$loss, xlab = NA, ylab = NA, lwd = 1.5, type = "l", cex.lab = 0.8, cex.axis = 0.8) title(xlab = expression(paste(tau[t])), ylab = expression(paste("Loss function ", ~hat(L)(tau[t]))), cex.lab = 0.8, line = 2) idx <- which(out11$loss == min(out11$loss)) abline(v = out11$tau.cand[idx], col = "gray", lwd = 1.5, lty = 2) ## perform the penaliztion method by tuning lambda out12 <- meta.pen(y, s2, dat.bohren, tuning.para = "lambda") ## plot the loss function and candidate lambdas plot(log(out12$lambda.cand + 1), out12$loss, xlab = NA, ylab = NA, lwd=1.5, type = "l", cex.lab = 0.8, cex.axis = 0.8) title(xlab = expression(log(lambda + 1)), ylab = expression(paste("Loss function ", ~hat(L)(lambda))), cex.lab = 0.8, line = 2) idx <- which(out12$loss == min(out12$loss)) abline(v = log(out12$lambda.cand[idx] + 1), col = "gray", lwd = 1.5, lty = 2) data("dat.bjelakovic") ## log odds ratio ## perform the penaliztion method by tuning tau out21 <- meta.pen(y, s2, dat.bjelakovic) ## plot the loss function and candidate taus plot(out21$tau.cand, out21$loss, xlab = NA, ylab = NA, lwd=1.5, type = "l", cex.lab = 0.8, cex.axis = 0.8) title(xlab = expression(paste(tau[t])), ylab = expression(paste("Loss function ", ~hat(L)(tau[t]))), cex.lab = 0.8, line = 2) idx <- which(out21$loss == min(out21$loss)) abline(v = out21$tau.cand[idx], col = "gray", lwd = 1.5, lty = 2) out22 <- meta.pen(y, s2, dat.bjelakovic, tuning.para = "lambda") data("dat.carless") ## log odds ratio ## perform the penaliztion method by tuning tau out31 <- meta.pen(y, s2, dat.carless) ## plot the loss function and candidate taus plot(out31$tau.cand, out31$loss, xlab = NA, ylab = NA, lwd=1.5, type = "l", cex.lab = 0.8, cex.axis = 0.8) title(xlab = expression(paste(tau[t])), ylab = expression(paste("Loss function ", ~hat(L)(tau[t]))), cex.lab = 0.8, line = 2) idx <- which(out31$loss == min(out31$loss)) abline(v = out31$tau.cand[idx], col = "gray", lwd = 1.5, lty = 2) out32 <- meta.pen(y, s2, dat.carless, tuning.para = "lambda") data("dat.adams") ## mean difference out41 <- meta.pen(y, s2, dat.adams) ## plot the loss function and candidate taus plot(out41$tau.cand, out41$loss, xlab = NA, ylab = NA, lwd=1.5, type = "l", cex.lab = 0.8, cex.axis = 0.8) title(xlab = expression(paste(tau[t])), ylab = expression(paste("Loss function ", ~hat(L)(tau[t]))), cex.lab = 0.8, line = 2) idx <- which(out41$loss == min(out41$loss)) abline(v = out41$tau.cand[idx], col = "gray", lwd = 1.5, lty = 2) out42 <- meta.pen(y, s2, dat.adams, tuning.para = "lambda")
Calculates various between-study heterogeneity measures in meta-analysis, including the conventional measures (e.g., the Q, H, and I^2 statistics) and alternative measures that are robust to outlying studies; p-values of various heterogeneity tests are also calculated.
metahet(y, s2, data, n.resam = 1000)
y |
a numeric vector specifying the observed effect sizes in the collected studies; they are assumed to be normally distributed. |
s2 |
a numeric vector specifying the within-study variances. |
data |
an optional data frame containing the meta-analysis dataset. If |
n.resam |
a positive integer specifying the number of resampling iterations for calculating p-values of test statistics and 95% confidence interval of heterogeneity measures. |
Suppose that a meta-analysis collects $n$ studies. The observed effect size in study $i$ is $y_i$ and its within-study variance is $s_i^2$. Also, the inverse-variance weight is $w_i = 1 / s_i^2$. The fixed-effect estimate of the overall effect size is $\bar{\mu} = \sum_{i=1}^{n} w_i y_i / \sum_{i=1}^{n} w_i$. The conventional test statistic for heterogeneity is
$$Q = \sum_{i=1}^{n} w_i (y_i - \bar{\mu})^2.$$
Based on the $Q$ statistic, the method-of-moments estimate of the between-study variance $\tau^2$ is (DerSimonian and Laird, 1986)
$$\hat{\tau}^2_{\mathrm{DL}} = \max \left\{ 0, \; \frac{Q - (n - 1)}{\sum_{i=1}^{n} w_i - \sum_{i=1}^{n} w_i^2 / \sum_{i=1}^{n} w_i} \right\}.$$
Also, the $H$ and $I^2$ statistics (Higgins and Thompson, 2002; Higgins et al., 2003) are widely used in practice because they do not depend on the number of collected studies $n$ or on the effect size scale; these two statistics are defined as
$$H = \sqrt{Q / (n - 1)}; \qquad I^2 = \frac{Q - (n - 1)}{Q}.$$
Specifically, the $H$ statistic reflects the ratio of the standard deviation of the estimated underlying mean from a random-effects meta-analysis to the standard deviation from a fixed-effect meta-analysis, and the $I^2$ statistic describes the proportion of total variance across studies that is due to heterogeneity rather than sampling error.

Outliers are frequently present in meta-analyses, and they may have a great impact on the above heterogeneity measures. Alternatively, to be more robust to outliers, the test statistic may be modified as (Lin et al., 2017)
$$Q_r = \sum_{i=1}^{n} \sqrt{w_i} \, | y_i - \bar{\mu} |.$$
Based on the $Q_r$ statistic, the method-of-moments estimate of the between-study variance $\hat{\tau}_r^2$ is defined as the solution to
$$\sqrt{2 / \pi} \sum_{i=1}^{n} \sqrt{1 + w_i \tau^2} = Q_r.$$
If no positive solution exists to the equation above, set $\hat{\tau}_r^2 = 0$. The counterparts of the $H$ and $I^2$ statistics, $H_r$ and $I_r^2$, are defined by analogy with $H$ and $I^2$ (see Lin et al., 2017, for their explicit expressions).

To further improve the robustness of heterogeneity assessment, the weighted mean $\bar{\mu}$ in the $Q_r$ statistic may be replaced by the weighted median $\hat{\mu}_m$ of the $y_i$'s with weights $w_i$. The new test statistic is
$$Q_m = \sum_{i=1}^{n} \sqrt{w_i} \, | y_i - \hat{\mu}_m |.$$
Based on $Q_m$, the new estimator of the between-study variance $\hat{\tau}_m^2$ is the solution to the analogous moment equation, and the counterparts of the $H$ and $I^2$ statistics, $H_m$ and $I_m^2$, are defined accordingly (Lin et al., 2017).
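As a quick numerical illustration of the measures sketched above, the following toy R code (not the package implementation) computes Q, H, I^2, and the outlier-robust Qr from a few made-up effect sizes; the formulas follow the reconstruction given in this section.

y  <- c(0.1, 0.3, 0.2, 2.0)      # toy effect sizes (the last study is an outlier)
s2 <- c(0.04, 0.05, 0.06, 0.05)  # toy within-study variances
w  <- 1 / s2
mu.bar <- sum(w * y) / sum(w)    # fixed-effect estimate
n  <- length(y)
Q  <- sum(w * (y - mu.bar)^2)
H  <- sqrt(Q / (n - 1))
I2 <- max(0, (Q - (n - 1)) / Q)
Qr <- sum(sqrt(w) * abs(y - mu.bar))  # absolute instead of squared deviations
c(Q = Q, H = H, I2 = I2, Qr = Qr)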
This function returns a list containing p-values of various heterogeneity tests and various heterogeneity measures with 95% confidence intervals. Specifically, the components include:
p.Q |
p-value of the Q test. |
p.Q.theo |
p-value of the Q test based on the theoretical chi-squared distribution of the Q statistic. |
p.Qr |
p-value of the Q_r test. |
p.Qm |
p-value of the Q_m test. |
Q |
the Q statistic. |
ci.Q |
95% CI of the Q statistic. |
tau2.DL |
DerSimonian–Laird estimate of the between-study variance. |
ci.tau2.DL |
95% CI of the between-study variance based on the DerSimonian–Laird method. |
H |
the H statistic. |
ci.H |
95% CI of the H statistic. |
I2 |
the I^2 statistic. |
ci.I2 |
95% CI of the I^2 statistic. |
Qr |
the Q_r statistic. |
ci.Qr |
95% CI of the Q_r statistic. |
tau2.r |
the between-study variance estimate based on the Q_r statistic. |
ci.tau2.r |
95% CI of the between-study variance based on the Q_r statistic. |
Hr |
the H_r statistic. |
ci.Hr |
95% CI of the H_r statistic. |
Ir2 |
the I_r^2 statistic. |
ci.Ir2 |
95% CI of the I_r^2 statistic. |
Qm |
the Q_m statistic. |
ci.Qm |
95% CI of the Q_m statistic. |
tau2.m |
the between-study variance estimate based on the Q_m statistic. |
ci.tau2.m |
95% CI of the between-study variance based on the Q_m statistic. |
Hm |
the H_m statistic. |
ci.Hm |
95% CI of the H_m statistic. |
Im2 |
the I_m^2 statistic. |
ci.Im2 |
95% CI of the I_m^2 statistic. |
DerSimonian R, Laird N (1986). "Meta-analysis in clinical trials." Controlled Clinical Trials, 7(3), 177–188. <doi:10.1016/0197-2456(86)90046-2>
Higgins JPT, Thompson SG (2002). "Quantifying heterogeneity in a meta-analysis." Statistics in Medicine, 21(11), 1539–1558. <doi:10.1002/sim.1186>
Higgins JPT, Thompson SG, Deeks JJ, Altman DG (2003). "Measuring inconsistency in meta-analyses." BMJ, 327(7414), 557–560. <doi:10.1136/bmj.327.7414.557>
Lin L, Chu H, Hodges JS (2017). "Alternative measures of between-study heterogeneity in meta-analysis: reducing the impact of outlying studies." Biometrics, 73(1), 156–166. <doi:10.1111/biom.12543>
data("dat.aex") set.seed(1234) metahet(y, s2, dat.aex, 100) metahet(y, s2, dat.aex, 1000) data("dat.hipfrac") set.seed(1234) metahet(y, s2, dat.hipfrac, 100) metahet(y, s2, dat.hipfrac, 1000)
data("dat.aex") set.seed(1234) metahet(y, s2, dat.aex, 100) metahet(y, s2, dat.aex, 1000) data("dat.hipfrac") set.seed(1234) metahet(y, s2, dat.hipfrac, 100) metahet(y, s2, dat.hipfrac, 1000)
Calculates the standardized residual for each study in a meta-analysis using the methods described in Chapter 12 of Hedges and Olkin (1985) and in Viechtbauer and Cheung (2010). A study is considered an outlier if its standardized residual is greater than 3 in absolute magnitude.
metaoutliers(y, s2, data, model)
y |
a numeric vector specifying the observed effect sizes in the collected studies; they are assumed to be normally distributed. |
s2 |
a numeric vector specifying the within-study variances. |
data |
an optional data frame containing the meta-analysis dataset. If |
model |
a character string specified as either "FE" (the fixed-effect model) or "RE" (the random-effects model). |
Suppose that a meta-analysis collects $n$ studies. The observed effect size in study $i$ is $y_i$ and its within-study variance is $s_i^2$. Also, the inverse-variance weight is $w_i = 1 / s_i^2$.

Chapter 12 of Hedges and Olkin (1985) describes the outlier detection procedure for the fixed-effect meta-analysis (model = "FE"). Using the studies except study $i$, the pooled estimate of the overall effect size is $\bar{\mu}_{(-i)} = \sum_{j \neq i} w_j y_j / \sum_{j \neq i} w_j$. The residual of study $i$ is $e_i = y_i - \bar{\mu}_{(-i)}$. The variance of $e_i$ is $v_i = 1 / w_i + (\sum_{j \neq i} w_j)^{-1}$, so the standardized residual of study $i$ is $\epsilon_i = e_i / \sqrt{v_i}$.
Viechtbauer and Cheung (2010) describe the outlier detection procedure for the random-effects meta-analysis (model = "RE"). Using the studies except study $i$, let the method-of-moments estimate of the between-study variance be $\hat{\tau}_{(-i)}^2$. The pooled estimate of the overall effect size is $\bar{\mu}_{(-i)} = \sum_{j \neq i} \tilde{w}_{(-i) j} y_j / \sum_{j \neq i} \tilde{w}_{(-i) j}$, where $\tilde{w}_{(-i) j} = 1 / (s_j^2 + \hat{\tau}_{(-i)}^2)$. The residual of study $i$ is $e_i = y_i - \bar{\mu}_{(-i)}$, and its variance is $v_i = s_i^2 + \hat{\tau}_{(-i)}^2 + (\sum_{j \neq i} \tilde{w}_{(-i) j})^{-1}$. Then, the standardized residual of study $i$ is $\epsilon_i = e_i / \sqrt{v_i}$.
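The fixed-effect procedure above can be illustrated with a short R sketch (made-up data, not the package code): each study's leave-one-out residual is divided by its standard deviation, and studies whose standardized residuals exceed 3 in absolute magnitude would be flagged.

std.resid.fe <- function(y, s2) {
  w <- 1 / s2
  sapply(seq_along(y), function(i) {
    mu.i <- sum(w[-i] * y[-i]) / sum(w[-i])   # pooled estimate without study i
    e.i  <- y[i] - mu.i                       # residual of study i
    v.i  <- 1 / w[i] + 1 / sum(w[-i])         # variance of the residual
    e.i / sqrt(v.i)
  })
}
y  <- c(0.1, 0.2, 0.15, 1.8)
s2 <- c(0.04, 0.05, 0.06, 0.05)
std.resid.fe(y, s2)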
This function returns a list containing the standardized residuals and the identified outliers. A study is considered an outlier if its standardized residual is greater than 3 in absolute magnitude.
Hedges LV, Olkin I (1985). Statistical Method for Meta-Analysis. Academic Press, Orlando, FL.
Viechtbauer W, Cheung MWL (2010). "Outlier and influence diagnostics for meta-analysis." Research Synthesis Methods, 1(2), 112–125. <doi:10.1002/jrsm.11>
data("dat.aex") metaoutliers(y, s2, dat.aex, model = "FE") metaoutliers(y, s2, dat.aex, model = "RE") data("dat.hipfrac") metaoutliers(y, s2, dat.hipfrac)
data("dat.aex") metaoutliers(y, s2, dat.aex, model = "FE") metaoutliers(y, s2, dat.aex, model = "RE") data("dat.hipfrac") metaoutliers(y, s2, dat.hipfrac)
Performs the regression test and calculates skewness for detecting and quantifying publication bias/small-study effects.
metapb(y, s2, data, model = "RE", n.resam = 1000)
y |
a numeric vector specifying the observed effect sizes in the collected studies; they are assumed to be normally distributed. |
s2 |
a numeric vector specifying the within-study variances. |
data |
an optional data frame containing the meta-analysis dataset. If |
model |
a character string specifying the fixed-effect ("FE") or random-effects ("RE") model. The default is "RE". |
n.resam |
a positive integer specifying the number of resampling iterations. |
This function derives the measures of publication bias introduced in Lin and Chu (2018).
This function returns a list containing measures of publication bias, their 95% confidence intervals, and p-values. Specifically, the components include:
n |
the number of studies in the meta-analysis. |
p.Q |
the p-value of the Q test for heterogeneity. |
I2 |
the I^2 statistic for quantifying heterogeneity. |
tau2 |
the DerSimonian–Laird estimate of the between-study variance. |
model |
the model setting ("FE" or "RE"). |
std.dev |
the standardized deviates of the studies. |
reg.int |
the estimate of the regression intercept for quantifying publication bias. |
reg.int.ci |
the 95% CI of the regression intercept. |
reg.int.ci.resam |
the 95% CI of the regression intercept based on the resampling method. |
reg.pval |
the p-value of the regression intercept. |
reg.pval.resam |
the p-value of the regression intercept based on the resampling method. |
skewness |
the estimate of the skewness for quantifying publication bias. |
skewness.ci |
the 95% CI of the skewness. |
skewness.ci.resam |
the 95% CI of the skewness based on the resampling method. |
skewness.pval |
the p-value of the skewness. |
skewness.pval.resam |
the p-value of the skewness based on the resampling method. |
combined.pval |
the p-value of the combined test that incorporates the regression intercept and the skewness. |
combined.pval.resam |
the p-value of the combined test that incorporates the regression intercept and the skewness based on the resampling method. |
Egger M, Davey Smith G, Schneider M, Minder C (1997). "Bias in meta-analysis detected by a simple, graphical test." BMJ, 315(7109), 629–634. <doi:10.1136/bmj.315.7109.629>
Lin L, Chu H (2018). "Quantifying publication bias in meta-analysis." Biometrics, 74(3), 785–794. <doi:10.1111/biom.12817>
data("dat.slf") set.seed(1234) metapb(y, s2, dat.slf) data("dat.ha") set.seed(1234) metapb(y, s2, dat.ha) data("dat.lcj") set.seed(1234) metapb(y, s2, dat.lcj)
data("dat.slf") set.seed(1234) metapb(y, s2, dat.slf) data("dat.ha") set.seed(1234) metapb(y, s2, dat.ha) data("dat.lcj") set.seed(1234) metapb(y, s2, dat.lcj)
Performs a multivariate meta-analysis when the within-study correlations are known.
mvma(ys, covs, data, method = "reml", tol = 1e-10)
ys |
an n x p numeric matrix containing the observed effect sizes. The n rows represent studies, and the p columns represent the multivariate endpoints. |
covs |
a numeric list with length n. Each element is the p x p within-study covariance matrix. |
data |
an optional data frame containing the multivariate meta-analysis dataset. If |
method |
a character string specifying the method for estimating the overall effect sizes. It should be |
tol |
a small number specifying the convergence tolerance for the estimates by maximizing the (restricted) likelihood. The default is 1e-10. |
Suppose $n$ studies are collected in a multivariate meta-analysis on a total of $p$ endpoints. Denote the $p$-dimensional vector of observed effect sizes in study $i$ as $\mathbf{y}_i$, and its within-study covariance matrix $\mathbf{S}_i$ is assumed to be known. Then, the random-effects model is as follows:
$$\mathbf{y}_i \sim N(\boldsymbol{\mu}_i, \mathbf{S}_i); \qquad \boldsymbol{\mu}_i \sim N(\boldsymbol{\mu}, \mathbf{T}).$$
Here, $\boldsymbol{\mu}_i$ represents the true underlying effect sizes in study $i$, $\boldsymbol{\mu}$ represents the overall effect sizes across studies, and $\mathbf{T}$ is the between-study covariance matrix due to heterogeneity. By setting $\mathbf{T} = \mathbf{0}$, this model becomes the fixed-effects model.
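Because the covs argument expects one within-study covariance matrix per study, the following small sketch (with made-up standard errors and an assumed known within-study correlation) shows one way such a list could be assembled before calling mvma.

se  <- rbind(c(0.10, 0.20), c(0.15, 0.25))  # n = 2 studies, p = 2 endpoints
rho <- 0.4                                   # assumed known within-study correlation
covs <- lapply(1:nrow(se), function(i) {
  R <- matrix(c(1, rho, rho, 1), 2, 2)
  diag(se[i, ]) %*% R %*% diag(se[i, ])      # p x p within-study covariance matrix
})
covs[[1]]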
This function returns a list containing the following elements:
mu.est |
The estimated overall effect sizes of the p endpoints. |
Tau.est |
The estimated between-study covariance matrix. |
mu.cov |
The covariance matrix of the estimated overall effect sizes. |
method |
The method used to produce the estimates. |
Jackson D, Riley R, White IR (2011). "Multivariate meta-analysis: potential and promise." Statistics in Medicine, 30(20), 2481–2498. <doi:10.1002/sim.4172>
mvma.bayesian, mvma.hybrid, mvma.hybrid.bayesian
data("dat.fib") mvma(ys = y, covs = S, data = dat.fib, method = "fe") mvma(ys = y, covs = S, data = dat.fib, method = "reml")
data("dat.fib") mvma(ys = y, covs = S, data = dat.fib, method = "fe") mvma(ys = y, covs = S, data = dat.fib, method = "reml")
Performs a Bayesian random-effects model for multivariate meta-analysis when the within-study correlations are known.
mvma.bayesian(ys, covs, data, n.adapt = 1000, n.chains = 3, n.burnin = 10000, n.iter = 10000, n.thin = 1, data.name = NULL, traceplot = FALSE, coda = FALSE)
ys |
an n x p numeric matrix containing the observed effect sizes. The n rows represent studies, and the p columns represent the multivariate endpoints. |
covs |
a numeric list with length n. Each element is the p x p within-study covariance matrix. |
data |
an optional data frame containing the multivariate meta-analysis dataset. If |
n.adapt |
the number of iterations for adaptation in the Markov chain Monte Carlo (MCMC) algorithm. The default is 1,000. This argument and the following |
n.chains |
the number of MCMC chains. The default is 3. |
n.burnin |
the number of iterations for burn-in period. The default is 10,000. |
n.iter |
the total number of iterations in each MCMC chain after the burn-in period. The default is 10,000. |
n.thin |
a positive integer specifying thinning rate. The default is 1. |
data.name |
a character string specifying the data name. This is used in the names of the generated files that contain results. The default is NULL. |
traceplot |
a logical value indicating whether to save trace plots for the overall effect sizes and between-study standard deviations. The default is FALSE. |
coda |
a logical value indicating whether to output MCMC posterior samples. The default is FALSE. |
Suppose $n$ studies are collected in a multivariate meta-analysis on a total of $p$ endpoints. Denote the $p$-dimensional vector of observed effect sizes in study $i$ as $\mathbf{y}_i$, and its within-study covariance matrix $\mathbf{S}_i$ is assumed to be known. Then, the random-effects model is as follows:
$$\mathbf{y}_i \sim N(\boldsymbol{\mu}_i, \mathbf{S}_i); \qquad \boldsymbol{\mu}_i \sim N(\boldsymbol{\mu}, \mathbf{T}).$$
Here, $\boldsymbol{\mu}_i$ represents the true underlying effect sizes in study $i$, $\boldsymbol{\mu}$ represents the overall effect sizes across studies, and $\mathbf{T}$ is the between-study covariance matrix due to heterogeneity.

Vague normal priors are specified for the fixed effects $\boldsymbol{\mu}$. Also, this function uses the separation strategy to specify vague priors for the variance and correlation components in $\mathbf{T}$ (Pinheiro and Bates, 1996); this technique is considered less sensitive to hyperparameters compared with specifying an inverse-Wishart prior (Lu and Ades, 2009; Wei and Higgins, 2013). Specifically, write the between-study covariance matrix as $\mathbf{T} = \mathbf{D}^{1/2} \mathbf{R} \mathbf{D}^{1/2}$, where the diagonal matrix $\mathbf{D} = \mathrm{diag}(\tau_1^2, \ldots, \tau_p^2)$ contains the between-study variances, and $\mathbf{R}$ is the correlation matrix. Uniform priors are specified for the between-study standard deviations $\tau_j$ ($j = 1, \ldots, p$). Further, the correlation matrix can be written as $\mathbf{R} = \mathbf{L} \mathbf{L}^{\prime}$, where $\mathbf{L}$ is a lower triangular matrix with nonnegative diagonal elements. Also, $L_{11} = 1$ and, for $m = 2, \ldots, p$, $L_{m j} = \cos \theta_{m j}$ if $j = 1$; $L_{m j} = \cos \theta_{m j} \prod_{k=1}^{j-1} \sin \theta_{m k}$ if $1 < j < m$; and $L_{m j} = \prod_{k=1}^{m-1} \sin \theta_{m k}$ if $j = m$. Here, the $\theta_{m j}$'s are angle parameters for $m = 2, \ldots, p$ and $j = 1, \ldots, m - 1$, with $\theta_{m j} \in (0, \pi)$. Uniform priors are specified for the angle parameters: $\theta_{m j} \sim U(0, \pi)$.
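The separation strategy above hinges on building a valid correlation matrix from angle parameters. The sketch below (assuming the spherical parameterization reconstructed here, with arbitrary angles) shows how R = LL' with unit diagonal entries is obtained; it is for illustration only and is not the JAGS code used internally.

angles.to.corr <- function(theta) {  # theta: p x p matrix of angles in (0, pi); lower triangle used
  p <- nrow(theta)
  L <- diag(p)
  for (m in 2:p) {
    L[m, 1] <- cos(theta[m, 1])
    if (m > 2) for (j in 2:(m - 1))
      L[m, j] <- cos(theta[m, j]) * prod(sin(theta[m, 1:(j - 1)]))
    L[m, m] <- prod(sin(theta[m, 1:(m - 1)]))
  }
  L %*% t(L)                         # a correlation matrix with unit diagonal
}
theta <- matrix(pi / 3, 3, 3)        # arbitrary angles for a 3 x 3 example
angles.to.corr(theta)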
This function produces posterior estimates and Gelman and Rubin's potential scale reduction factor, and it generates several files that contain trace plots (if traceplot = TRUE) and MCMC posterior samples (if coda = TRUE) in the user's working directory. In these results, mu represents the overall effect sizes, tau represents the between-study variances, R contains the elements of the correlation matrix, and theta represents the angle parameters (see "Details").
This function only implements the MCMC algorithm for the random-effects multivariate model, but not the fixed-effects model. Generally, the fixed-effects model can be easily implemented using the function mvma
. However, when using mvma
to fit the random-effects model, a large number of parameters need to be estimated, and the algorithm for maximizing (restricted) likelihood may not converge well. The Bayesian method in this function provides an alternative.
If a warning "adaptation incomplete" appears, users may increase n.adapt
.
Gelman A, Rubin DB (1992). "Inference from iterative simulation using multiple sequences." Statistical Science, 7(4), 457–472. <doi:10.1214/ss/1177011136>
Jackson D, Riley R, White IR (2011). "Multivariate meta-analysis: potential and promise." Statistics in Medicine, 30(20), 2481–2498. <doi:10.1002/sim.4172>
Lu G, Ades AE (2009). "Modeling between-trial variance structure in mixed treatment comparisons." Biostatistics, 10(4), 792–805. <doi:10.1093/biostatistics/kxp032>
Pinheiro JC, Bates DM (1996). "Unconstrained parametrizations for variance-covariance matrices." Statistics and Computing, 6(3), 289–296. <doi:10.1007/BF00140873>
Wei Y, Higgins JPT (2013). "Bayesian multivariate meta-analysis with multiple outcomes." Statistics in Medicine, 32(17), 2911–2934. <doi:10.1002/sim.5745>
mvma, mvma.hybrid, mvma.hybrid.bayesian
data("dat.fib") set.seed(12345) ## increase n.burnin and n.iter for better convergence of MCMC out <- mvma.bayesian(ys = y, covs = S, data = dat.fib, n.adapt = 1000, n.chains = 3, n.burnin = 100, n.iter = 100, n.thin = 1, data.name = "Fibrinogen") out
data("dat.fib") set.seed(12345) ## increase n.burnin and n.iter for better convergence of MCMC out <- mvma.bayesian(ys = y, covs = S, data = dat.fib, n.adapt = 1000, n.chains = 3, n.burnin = 100, n.iter = 100, n.thin = 1, data.name = "Fibrinogen") out
Performs a multivariate meta-analysis using the hybrid random-effects model when the within-study correlations are unknown.
mvma.hybrid(ys, vars, data, method = "reml", tol = 1e-10)
ys |
an n x p numeric matrix containing the observed effect sizes. The n rows represent studies, and the p columns represent the multivariate endpoints. |
vars |
an n x p numeric matrix containing the observed within-study variances. The n rows represent studies, and the p columns represent the multivariate endpoints. |
data |
an optional data frame containing the multivariate meta-analysis dataset. If |
method |
a character string specifying the method for estimating the overall effect sizes. It should be |
tol |
a small number specifying the convergence tolerance for the estimates by maximizing the (restricted) likelihood. The default is 1e-10. |
Suppose $n$ studies are collected in a multivariate meta-analysis on a total of $p$ endpoints. Denote the $p$-dimensional vector of observed effect sizes in study $i$ as $\mathbf{y}_i$, and their within-study variances form a diagonal matrix $\mathbf{D}_i$; however, the within-study correlations are unknown. Then, the random-effects hybrid model is as follows (Riley et al., 2008; Lin and Chu, 2018):
$$\mathbf{y}_i \sim N \left( \boldsymbol{\mu}, \; (\mathbf{D}_i + \mathbf{D})^{1/2} \mathbf{R} (\mathbf{D}_i + \mathbf{D})^{1/2} \right),$$
where $\boldsymbol{\mu}$ represents the overall effect sizes across studies, $\mathbf{D} = \mathrm{diag}(\tau_1^2, \ldots, \tau_p^2)$ consists of the between-study variances, and $\mathbf{R}$ is the marginal correlation matrix. Although the within-study correlations are unknown, this model accounts for both within- and between-study correlations by using the marginal correlation matrix.
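As a small numerical illustration of the marginal covariance implied by the hybrid model above (all numbers made up), the following sketch combines one study's within-study variances with the between-study variances and a marginal correlation matrix.

vars.i <- c(0.04, 0.09)                     # within-study variances of study i
tau2   <- c(0.02, 0.05)                     # between-study variances
R.mar  <- matrix(c(1, 0.3, 0.3, 1), 2, 2)   # marginal correlation matrix
D.half <- diag(sqrt(vars.i + tau2))
D.half %*% R.mar %*% D.half                 # marginal covariance of study i's estimates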
This function returns a list containing the following elements:
mu.est |
The estimated overall effect sizes of the p endpoints. |
tau2.est |
The estimated between-study variances of the p endpoints. |
mar.R |
The estimated marginal correlation matrix. |
mu.cov |
The covariance matrix of the estimated overall effect sizes. |
method |
The method used to produce the estimates. |
The algorithm for maximizing (restricted) likelihood may not converge when the dimension of endpoints is too high or the data are too sparse.
Lin L, Chu H (2018), "Bayesian multivariate meta-analysis of multiple factors." Research Synthesis Methods, 9(2), 261–272. <doi:10.1002/jrsm.1293>
Riley RD, Thompson JR, Abrams KR (2008), "An alternative model for bivariate random-effects meta-analysis when the within-study correlations are unknown." Biostatistics, 9(1), 172–186. <doi:10.1093/biostatistics/kxm023>
mvma, mvma.bayesian, mvma.hybrid.bayesian
data("dat.fib") y <- dat.fib$y sd <- dat.fib$sd mvma.hybrid(y = y, vars = sd^2)
data("dat.fib") y <- dat.fib$y sd <- dat.fib$sd mvma.hybrid(y = y, vars = sd^2)
Performs a multivariate meta-analysis using the Bayesian hybrid random-effects model when the within-study correlations are unknown.
mvma.hybrid.bayesian(ys, vars, data, n.adapt = 1000, n.chains = 3, n.burnin = 10000, n.iter = 10000, n.thin = 1, data.name = NULL, traceplot = FALSE, coda = FALSE)
ys |
an n x p numeric matrix containing the observed effect sizes. The n rows represent studies, and the p columns represent the multivariate endpoints. |
vars |
an n x p numeric matrix containing the observed within-study variances. The n rows represent studies, and the p columns represent the multivariate endpoints. |
data |
an optional data frame containing the multivariate meta-analysis dataset. If |
n.adapt |
the number of iterations for adaptation in the Markov chain Monte Carlo (MCMC) algorithm. The default is 1,000. This argument and the following |
n.chains |
the number of MCMC chains. The default is 3. |
n.burnin |
the number of iterations for burn-in period. The default is 10,000. |
n.iter |
the total number of iterations in each MCMC chain after the burn-in period. The default is 10,000. |
n.thin |
a positive integer specifying thinning rate. The default is 1. |
data.name |
a character string specifying the data name. This is used in the names of the generated files that contain results. The default is NULL. |
traceplot |
a logical value indicating whether to save trace plots for the overall effect sizes and between-study standard deviations. The default is FALSE. |
coda |
a logical value indicating whether to output MCMC posterior samples. The default is FALSE. |
Suppose $n$ studies are collected in a multivariate meta-analysis on a total of $p$ endpoints. Denote the $p$-dimensional vector of observed effect sizes in study $i$ as $\mathbf{y}_i$, and their within-study variances form a diagonal matrix $\mathbf{D}_i$; however, the within-study correlations are unknown. Then, the random-effects hybrid model is as follows (Riley et al., 2008; Lin and Chu, 2018):
$$\mathbf{y}_i \sim N \left( \boldsymbol{\mu}, \; (\mathbf{D}_i + \mathbf{D})^{1/2} \mathbf{R} (\mathbf{D}_i + \mathbf{D})^{1/2} \right),$$
where $\boldsymbol{\mu}$ represents the overall effect sizes across studies, $\mathbf{D} = \mathrm{diag}(\tau_1^2, \ldots, \tau_p^2)$ consists of the between-study variances, and $\mathbf{R}$ is the marginal correlation matrix. Although the within-study correlations are unknown, this model accounts for both within- and between-study correlations by using the marginal correlation matrix.

Uniform priors are specified for the between-study standard deviations $\tau_j$ ($j = 1, \ldots, p$). The correlation matrix can be written as $\mathbf{R} = \mathbf{L} \mathbf{L}^{\prime}$, where $\mathbf{L}$ is a lower triangular matrix with nonnegative diagonal elements. Also, $L_{11} = 1$ and, for $m = 2, \ldots, p$, $L_{m j} = \cos \theta_{m j}$ if $j = 1$; $L_{m j} = \cos \theta_{m j} \prod_{k=1}^{j-1} \sin \theta_{m k}$ if $1 < j < m$; and $L_{m j} = \prod_{k=1}^{m-1} \sin \theta_{m k}$ if $j = m$ (Lu and Ades, 2009; Wei and Higgins, 2013). Here, the $\theta_{m j}$'s are angle parameters for $m = 2, \ldots, p$ and $j = 1, \ldots, m - 1$, with $\theta_{m j} \in (0, \pi)$. Uniform priors are specified for the angle parameters: $\theta_{m j} \sim U(0, \pi)$.
This function produces posterior estimates and Gelman and Rubin's potential scale reduction factor, and it generates several files that contain trace plots (if traceplot = TRUE) and MCMC posterior samples (if coda = TRUE) in the user's working directory. In these results, mu represents the overall effect sizes, tau represents the between-study variances, R contains the elements of the correlation matrix, and theta represents the angle parameters (see "Details").
Lin L, Chu H (2018), "Bayesian multivariate meta-analysis of multiple factors." Research Synthesis Methods, 9(2), 261–272. <doi:10.1002/jrsm.1293>
Lu G, Ades AE (2009). "Modeling between-trial variance structure in mixed treatment comparisons." Biostatistics, 10(4), 792–805. <doi:10.1093/biostatistics/kxp032>
Riley RD, Thompson JR, Abrams KR (2008), "An alternative model for bivariate random-effects meta-analysis when the within-study correlations are unknown." Biostatistics, 9(1), 172–186. <doi:10.1093/biostatistics/kxm023>
Wei Y, Higgins JPT (2013). "Bayesian multivariate meta-analysis with multiple outcomes." Statistics in Medicine, 32(17), 2911–2934. <doi:10.1002/sim.5745>
mvma, mvma.bayesian, mvma.hybrid
data("dat.pte") set.seed(12345) ## increase n.burnin and n.iter for better convergence of MCMC out <- mvma.hybrid.bayesian(ys = dat.pte$y, vars = (dat.pte$se)^2, n.adapt = 1000, n.chains = 3, n.burnin = 100, n.iter = 100, n.thin = 1, data.name = "Pterygium") out
data("dat.pte") set.seed(12345) ## increase n.burnin and n.iter for better convergence of MCMC out <- mvma.hybrid.bayesian(ys = dat.pte$y, vars = (dat.pte$se)^2, n.adapt = 1000, n.chains = 3, n.burnin = 100, n.iter = 100, n.thin = 1, data.name = "Pterygium") out
Calculates evidence inconsistency degrees of freedom (ICDF) in Bayesian network meta-analysis with binary outcomes.
nma.icdf(sid, tid, r, n, data, type = c("la", "fe", "re"), n.adapt = 1000, n.chains = 3, n.burnin = 5000, n.iter = 20000, n.thin = 2, traceplot = FALSE, nma.name = NULL, seed = 1234)
sid |
a vector specifying the study IDs. |
tid |
a vector specifying the treatment IDs. |
r |
a numeric vector specifying the event counts. |
n |
a numeric vector specifying the sample sizes. |
data |
an optional data frame containing the network meta-analysis dataset. If |
type |
a character string or a vector of character strings specifying the ICDF measures. It can be chosen from "la" (the Lu–Ades measure), "fe" (the measure based on the fixed-effects consistency and inconsistency models), and "re" (the measure based on the random-effects consistency and inconsistency models). The default is c("la", "fe", "re"), i.e., all three measures. |
n.adapt |
the number of iterations for adaptation in the Markov chain Monte Carlo (MCMC) algorithm. The default is 1,000. This argument and the following |
n.chains |
the number of MCMC chains. The default is 3. |
n.burnin |
the number of iterations for burn-in period. The default is 5,000. |
n.iter |
the total number of iterations in each MCMC chain after the burn-in period. The default is 20,000. |
n.thin |
a positive integer specifying thinning rate. The default is 2. |
traceplot |
a logical value indicating whether to generate trace plots of network meta-analysis models for calculating the ICDF measures. It is only used when the argument |
nma.name |
a character string for specifying the name of the network meta-analysis, which will be used in the file names of the trace plots. It is only used when traceplot = TRUE. |
seed |
an integer for specifying the seed of the random number generation for reproducibility during the MCMC algorithm for performing Bayesian network meta-analysis models. |
Network meta-analysis frequently assumes that direct evidence is consistent with indirect evidence, but this consistency assumption may not hold in some cases. One may use the ICDF to assess the potential that a network meta-analysis might suffer from inconsistency. Suppose that a network meta-analysis compares a total of $K$ treatments. When it contains two-arm studies only, Lu and Ades (2006) propose to measure the ICDF as
$$\mathrm{ICDF} = T - K + 1,$$
where $T$ is the total number of treatment pairs that are directly compared in the treatment network. This measure is interpreted as the number of independent treatment loops. However, it may not be feasibly calculated when the network meta-analysis contains multi-arm studies. Multi-arm studies provide intrinsically consistent evidence and complicate the calculation of the ICDF.

Alternatively, the ICDF may be measured as the difference between the effective numbers of parameters of the consistency and inconsistency models for network meta-analysis. The consistency model assumes evidence consistency in all treatment loops, and the inconsistency model treats the overall effect sizes of all direct treatment comparisons as separate, unrelated parameters (Dias et al., 2013). The effective number of parameters is frequently used to assess the complexity of Bayesian hierarchical models (Spiegelhalter et al., 2002). The effect measure is the (log) odds ratio in the models used in this function. Let $p_{\mathrm{fe}}^{\mathrm{con}}$ and $p_{\mathrm{fe}}^{\mathrm{incon}}$ be the effective numbers of parameters of the consistency and inconsistency models under the fixed-effects setting, and $p_{\mathrm{re}}^{\mathrm{con}}$ and $p_{\mathrm{re}}^{\mathrm{incon}}$ be those under the random-effects setting. The ICDF measures under the fixed-effects and random-effects settings are
$$\mathrm{ICDF}_{\mathrm{fe}} = p_{\mathrm{fe}}^{\mathrm{incon}} - p_{\mathrm{fe}}^{\mathrm{con}}; \qquad \mathrm{ICDF}_{\mathrm{re}} = p_{\mathrm{re}}^{\mathrm{incon}} - p_{\mathrm{re}}^{\mathrm{con}},$$
respectively. See more details in Lin (2020).
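For a network consisting only of two-arm studies, the Lu–Ades measure can be computed by hand, as in the toy sketch below (a hypothetical four-treatment network with four directly compared treatment pairs).

K <- 4                                                      # number of treatments
direct.pairs <- rbind(c(1, 2), c(1, 3), c(1, 4), c(2, 3))   # directly compared pairs
T.pairs <- nrow(direct.pairs)
icdf.la <- T.pairs - (K - 1)                                # number of independent loops
icdf.la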
This function produces a list containing the following results: a table of the number of arms within studies and the corresponding counts of studies (nstudy.trtarm
); the number of multi-arm studies (nstudy.multi
); the set of treatments compared in each multi-arm study (multi.trtarm
); the Lu–Ades ICDF measure (icdf.la
); the ICDF measure based on the fixed-effects consistency and inconsistency models (icdf.fe
); and the ICDF measure based on the random-effects consistency and inconsistency models (icdf.re
). The Lu–Ades ICDF measure is NA
(not available) in the presence of multi-arm studies, because multi-arm studies complicate the counting of independent treatment loops in generic network meta-analyses. When traceplot
= TRUE
, the trace plots will be saved in users' working directory.
Dias S, Welton NJ, Sutton AJ, Caldwell DM, Lu G, Ades AE (2013). "Evidence synthesis for decision making 4: inconsistency in networks of evidence based on randomized controlled trials." Medical Decision Making, 33(5), 641–656. <doi:10.1177/0272989X12455847>
Lin L (2021). "Evidence inconsistency degrees of freedom in Bayesian network meta-analysis." Journal of Biopharmaceutical Statistics, 31(3), 317–330. <doi:10.1080/10543406.2020.1852247>
Lu G, Ades AE (2006). "Assessing evidence inconsistency in mixed treatment comparisons." Journal of the American Statistical Association, 101(474), 447–459. <doi:10.1198/016214505000001302>
Spiegelhalter DJ, Best NG, Carlin BP, Van Der Linde A (2002). "Bayesian measures of model complexity and fit." Journal of the Royal Statistical Society, Series B (Statistical Methodology), 64(4), 583–639. <doi:10.1111/1467-9868.00353>
data("dat.baker") ## increase n.burnin (e.g., to 50000) and n.iter (e.g., to 200000) ## for better convergence of MCMC out <- nma.icdf(sid, tid, r, n, data = dat.baker, type = c("la", "fe", "re"), n.adapt = 1000, n.chains = 3, n.burnin = 500, n.iter = 2000, n.thin = 2, traceplot = FALSE, seed = 1234) out
data("dat.baker") ## increase n.burnin (e.g., to 50000) and n.iter (e.g., to 200000) ## for better convergence of MCMC out <- nma.icdf(sid, tid, r, n, data = dat.baker, type = c("la", "fe", "re"), n.adapt = 1000, n.chains = 3, n.burnin = 500, n.iter = 2000, n.thin = 2, traceplot = FALSE, seed = 1234) out
Calculates the P-score and predictive P-score for a network meta-analysis in the Bayesian framework described in Rosenberger et al. (2021).
nma.predrank(sid, tid, r, n, data, n.adapt = 1000, n.chains = 3, n.burnin = 2000, n.iter = 5000, n.thin = 2, lowerbetter = TRUE, pred = TRUE, pred.samples = FALSE, trace = FALSE)
sid |
a vector specifying the study IDs, from 1 to the number of studies. |
tid |
a vector specifying the treatment IDs, from 1 to the number of treatments. |
r |
a numeric vector specifying the event counts. |
n |
a numeric vector specifying the sample sizes. |
data |
an optional data frame containing the network meta-analysis dataset. If |
n.adapt |
the number of iterations for adaptation in the Markov chain Monte Carlo (MCMC) algorithm. The default is 1,000. This argument and the following |
n.chains |
the number of MCMC chains. The default is 3. |
n.burnin |
the number of iterations for burn-in period. The default is 2,000. |
n.iter |
the total number of iterations in each MCMC chain after the burn-in period. The default is 5,000. |
n.thin |
a positive integer specifying thinning rate. The default is 2. |
lowerbetter |
a logical value indicating whether lower effect measures indicate better treatments. The default is TRUE. |
pred |
a logical value indicating whether the treatment ranking measures in a new study are to be derived. These measures are only derived when |
pred.samples |
a logical value indicating whether the posterior samples of expected scaled ranks in a new study are to be saved. |
trace |
a logical value indicating whether all posterior samples are to be saved. |
Under the frequentist setting, the P-score is built on the quantiles
$$P_{kh} = \Phi \left( \frac{\hat{\mu}_{k1} - \hat{\mu}_{h1}}{s_{kh}} \right),$$
where $\hat{\mu}_{k1}$ and $\hat{\mu}_{h1}$ are the point estimates of the treatment effects for $k$ vs. 1 and $h$ vs. 1, respectively, and $s_{kh}$ is the standard error of $\hat{\mu}_{k1} - \hat{\mu}_{h1}$ (Rucker and Schwarzer, 2015). Moreover, $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution. The quantity $P_{kh}$ can be interpreted as the extent of certainty that treatment $k$ is better than $h$. The frequentist P-score of treatment $k$ is $\frac{1}{K-1} \sum_{h \neq k} P_{kh}$.

Analogously to the frequentist P-score, conditional on $\mu_{k1}$ and $\mu_{h1}$, the quantities $P_{kh}$ from the Bayesian perspective can be considered as $I(\mu_{k1} > \mu_{h1})$, which are Bernoulli random variables. To quantify the overall performance of treatment $k$, we may similarly use
$$\bar{P}_k = \frac{1}{K-1} \sum_{h \neq k} I(\mu_{k1} > \mu_{h1}).$$
Note that $\bar{P}_k$ is a parameter under the Bayesian framework, while the frequentist P-score is a statistic. Moreover, $\sum_{h \neq k} I(\mu_{k1} > \mu_{h1})$ is equivalent to $K - R_k$, where $R_k$ is the true rank of treatment $k$. Thus, we may also write $\bar{P}_k = (K - R_k) / (K - 1)$; this corresponds to the findings by Rucker and Schwarzer (2015). Consequently, we call $\bar{P}_k$ the scaled rank in the network meta-analysis (NMA) for treatment $k$. It transforms the range of the original rank between 1 and $K$ to a range between 0 and 1. In addition, note that the posterior expectation of $\bar{P}_k$ is $\frac{1}{K-1} \sum_{h \neq k} \Pr(\mu_{k1} > \mu_{h1} \mid \mathrm{Data})$, which is analogous to the quantity of the P-score under the frequentist framework. Therefore, we use the posterior mean of the scaled rank $\bar{P}_k$ as the Bayesian P-score; it is a counterpart of the frequentist P-score.
The scaled ranks can be feasibly estimated via the MCMC algorithm. Let $\{\mu_{k1}^{(j)}\}$ be the posterior samples of the overall relative effects $\mu_{k1}$ of all treatments vs. the reference treatment 1 in a total of $J$ MCMC iterations after the burn-in period, where $j = 1, \ldots, J$ indexes the iterations. As $\mu_{11}$ is trivially 0, we set $\mu_{11}^{(j)}$ to 0 for all $j$. The $j$th posterior sample of treatment $k$'s scaled rank is
$$\bar{P}_k^{(j)} = \frac{1}{K-1} \sum_{h \neq k} I\left( \mu_{k1}^{(j)} > \mu_{h1}^{(j)} \right).$$
We can make inferences for the scaled ranks from the posterior samples $\{\bar{P}_k^{(j)}\}$, and use their posterior means as the Bayesian P-scores. We may also obtain the posterior medians as another set of point estimates, and the 2.5% and 97.5% posterior quantiles as the lower and upper bounds of 95% credible intervals (CrIs), respectively. Because the posterior samples of the scaled ranks take discrete values, the posterior medians and the CrI bounds are also discrete.
Based on the idea of the Bayesian P-score, we can similarly define the predictive P-score for a future study by accounting for the heterogeneity between the existing studies in the NMA and the new study. Specifically, we consider the probabilities in the new study
$$P_{kh,\mathrm{new}} = \Pr(\delta_{k1} > \delta_{h1}),$$
conditional on the population parameters $\mu_{k1}$, $\mu_{h1}$, and $\tau$ from the NMA. Here, $\delta_{k1}$ and $\delta_{h1}$ represent the treatment effects of $k$ vs. 1 and $h$ vs. 1 in the new study, respectively. The $P_{kh,\mathrm{new}}$ corresponds to the quantity $P_{kh}$ in the NMA; it represents the probability of treatment $k$ being better than $h$ in the new study. Due to heterogeneity, $\delta_{k1} \sim N(\mu_{k1}, \tau^2)$ and $\delta_{h1} \sim N(\mu_{h1}, \tau^2)$. The correlation coefficients between treatment comparisons are typically assumed to be 0.5; therefore, such probabilities in the new study can be explicitly calculated as $P_{kh,\mathrm{new}} = \Phi\left( (\mu_{k1} - \mu_{h1}) / \tau \right)$, which is a function of $\mu_{k1}$, $\mu_{h1}$, and $\tau$. Finally, we use
$$\bar{P}_{k,\mathrm{new}} = \frac{1}{K-1} \sum_{h \neq k} P_{kh,\mathrm{new}}$$
to quantify the performance of treatment $k$ in the new study. The posterior samples of $\bar{P}_{k,\mathrm{new}}$ can be derived from the posterior samples of $\mu_{k1}$, $\mu_{h1}$, and $\tau$ during the MCMC algorithm.

Note that the probabilities $P_{kh,\mathrm{new}}$ can be written as $E\left[ I(\delta_{k1} > \delta_{h1}) \mid \mu_{k1}, \mu_{h1}, \tau \right]$. Based on similar observations for the scaled ranks in the NMA, the $\bar{P}_{k,\mathrm{new}}$ in the new study subsequently becomes
$$\bar{P}_{k,\mathrm{new}} = E\left[ \left. \frac{K - R_{k,\mathrm{new}}}{K - 1} \, \right| \, \mu_{k1}, \mu_{h1}, \tau \right],$$
where $R_{k,\mathrm{new}}$ is the true rank of treatment $k$ in the new study. Thus, we call $\bar{P}_{k,\mathrm{new}}$ the expected scaled rank in the new study. Like the Bayesian P-score, we define the predictive P-score as the posterior mean of $\bar{P}_{k,\mathrm{new}}$. The posterior medians and 95% CrIs can also be obtained using the MCMC samples of $\bar{P}_{k,\mathrm{new}}$. See more details in Rosenberger et al. (2021).
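The following R sketch (not the package implementation) shows how Bayesian P-scores could be computed from posterior draws of the relative effects of each treatment vs. the reference; the matrix mu.samples and its dimensions are hypothetical, and larger effects are assumed to indicate better treatments.

set.seed(1)
J <- 1000; K <- 3
## hypothetical J x K matrix of posterior draws; the reference column is fixed at 0
mu.samples <- cbind(0, matrix(rnorm(J * (K - 1), mean = c(0.3, 0.6), sd = 0.2),
                              nrow = J, byrow = TRUE))
## scaled rank of each treatment in each MCMC iteration
scaled.rank <- t(apply(mu.samples, 1, function(mu) {
  sapply(1:K, function(k) sum(mu[k] > mu[-k]) / (K - 1))
}))
colMeans(scaled.rank)   # Bayesian P-scores of the K treatments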
This function estimates the P-score for all treatments in a Bayesian NMA, estimates the predictive P-score (if pred
= TRUE
), gives the posterior samples of expected scaled ranks in a new study (if pred.samples
= TRUE
), and outputs all MCMC posterior samples (if trace
= TRUE
).
Kristine J. Rosenberger, Lifeng Lin
Rosenberger KJ, Duan R, Chen Y, Lin L (2021). "Predictive P-score for treatment ranking in Bayesian network meta-analysis." BMC Medical Research Methodology, 21, 213. <doi:10.1186/s12874-021-01397-5>
Rucker G, Schwarzer G (2015). "Ranking treatments in frequentist network meta-analysis works without resampling methods." BMC Medical Research Methodology, 15, 58. <doi:10.1186/s12874-015-0060-8>
## increase n.burnin (e.g., to 50000) and n.iter (e.g., to 200000)
## for better convergence of MCMC
data("dat.sc")
set.seed(1234)
out1 <- nma.predrank(sid, tid, r, n, data = dat.sc, n.burnin = 500, n.iter = 2000,
  lowerbetter = FALSE, pred.samples = TRUE)
out1$P.score
out1$P.score.pred

cols <- c("red4", "plum4", "paleturquoise4", "palegreen4")
cols.hist <- adjustcolor(cols, alpha.f = 0.4)
trtnames <- c("1) No contact", "2) Self-help", "3) Individual counseling",
  "4) Group counseling")
brks <- seq(0, 1, 0.01)
hist(out1$P.pred[[1]], breaks = brks, freq = FALSE, xlim = c(0, 1), ylim = c(0, 5),
  col = cols.hist[1], border = cols[1],
  xlab = "Expected scaled rank in a new study", ylab = "Density", main = "")
hist(out1$P.pred[[2]], breaks = brks, freq = FALSE,
  col = cols.hist[2], border = cols[2], add = TRUE)
hist(out1$P.pred[[3]], breaks = brks, freq = FALSE,
  col = cols.hist[3], border = cols[3], add = TRUE)
hist(out1$P.pred[[4]], breaks = brks, freq = FALSE,
  col = cols.hist[4], border = cols[4], add = TRUE)
legend("topright", fill = cols.hist, border = cols, legend = trtnames)

data("dat.xu")
set.seed(1234)
out2 <- nma.predrank(sid, tid, r, n, data = dat.xu, n.burnin = 500, n.iter = 2000)
out2
Performs multiple methods introduced in Shi et al. (2020) to assess publication bias/small-study effects under the Bayesian framework in a meta-analysis of (log) odds ratios.
pb.bayesian.binary(n00, n01, n10, n11, p01 = NULL, p11 = NULL, data, sig.level = 0.1, method = "bay", het = "mul", sd.prior = "unif", n.adapt = 1000, n.chains = 3, n.burnin = 5000, n.iter = 10000, thin = 2, upp.het = 2, phi = 0.5, coda = FALSE, traceplot = FALSE, seed = 1234)
n00 |
a numeric vector or the corresponding column name in the argument |
n01 |
a numeric vector or the corresponding column name in the argument |
n10 |
a numeric vector or the corresponding column name in the argument |
n11 |
a numeric vector or the corresponding column name in the argument |
p01 |
an optional numeric vector specifying true event rates (e.g., from simulations) in the treatment group 0 across studies. |
p11 |
an optional numeric vector specifying true event rates (e.g., from simulations) in the treatment group 1 across studies. |
data |
an optional data frame containing the meta-analysis dataset. If |
sig.level |
a numeric value specifying the statistical significance level for testing publication bias. The default is 0.1. |
method |
a character string specifying the method for assessing publication bias via Bayesian hierarchical models. It can be one of |
het |
a character string specifying the type of heterogeneity assumption for the publication bias tests. It can be either |
sd.prior |
a character string specifying prior distributions for standard deviation parameters. It can be either |
n.adapt |
the number of iterations for adaptation in the Markov chain Monte Carlo (MCMC) algorithm. The default is 1,000. This argument and the following |
n.chains |
the number of MCMC chains. The default is 3. |
n.burnin |
the number of iterations for burn-in period. The default is 5,000. |
n.iter |
the total number of iterations in each MCMC chain after the burn-in period. The default is 10,000. |
thin |
a positive integer specifying thinning rate. The default is 2. |
upp.het |
a positive number for specifying the upper bound of uniform priors for standard deviation parameters (used when the uniform priors are chosen via sd.prior). The default is 2. |
phi |
a positive number for specifying the hyper-parameter of half-normal priors for standard deviation parameters (used when the half-normal priors are chosen via sd.prior). The default is 0.5. |
coda |
a logical value indicating whether to output MCMC posterior samples. The default is FALSE. |
traceplot |
a logical value indicating whether to draw trace plots for the regression slopes. The default is FALSE. |
seed |
an integer for specifying the seed value for reproducibility. |
The Bayesian models are specified in Shi et al. (2020). A vague normal prior (centered at 0 with a large variance) is used for the regression intercept and slope, and the uniform prior U(0, upp.het) or the half-normal prior HN(phi) is used for the standard deviation parameters. The half-normal priors may be preferred in meta-analyses with rare events or small sample sizes.
This function returns a list containing estimates of regression slopes and their credible intervals with the specified significance level (sig.level
) as well as MCMC posterior samples (if coda
= TRUE
). Each element name in this list is related to a certain publication bias method (e.g., est.bay
and ci.bay
represent the slope estimate and its credible interval based on the proposed Bayesian method). In addition, trace plots for the regression slope are drawn if traceplot
= TRUE
.
The current version does not support other effect measures such as relative risks or risk differences.
Linyu Shi [email protected]
Egger M, Davey Smith G, Schneider M, Minder C (1997). "Bias in meta-analysis detected by a simple, graphical test." BMJ, 315(7109), 629–634. <doi:10.1136/bmj.315.7109.629>
Jin Z-C, Wu C, Zhou X-H, He J (2014). "A modified regression method to test publication bias in meta-analyses with binary outcomes." BMC Medical Research Methodology, 14, 132. <doi:10.1186/1471-2288-14-132>
Shi L, Chu H, Lin L (2020). "A Bayesian approach to assessing small-study effects in meta-analysis of a binary outcome with controlled false positive rate". Research Synthesis Methods, 11(4), 535–552. <doi:10.1002/jrsm.1415>
Thompson SG, Sharp SJ (1999). "Explaining heterogeneity in meta-analysis: a comparison of methods." Statistics in Medicine, 18(20), 2693–2708. <doi:10.1002/(SICI)1097-0258(19991030)18:20<2693::AID-SIM235>3.0.CO;2-V>
pb.hybrid.binary, pb.hybrid.generic
data("dat.poole") set.seed(654321) ## increase n.burnin and n.iter for better convergence of MCMC rslt.poole <- pb.bayesian.binary(n00, n01, n10, n11, data = dat.poole, method = "bay", het = "mul", sd.prior = "unif", n.adapt = 1000, n.chains = 3, n.burnin = 500, n.iter = 2000, thin = 2, upp.het = 2) rslt.poole data("dat.ducharme") set.seed(654321) ## increase n.burnin and n.iter for better convergence of MCMC rslt.ducharme <- pb.bayesian.binary(n00, n01, n10, n11, data = dat.ducharme, method = "bay", het = "mul", sd.prior = "unif", n.adapt = 1000, n.chains = 3, n.burnin = 500, n.iter = 2000, thin = 2, upp.het = 2) rslt.ducharme data("dat.henry") set.seed(654321) ## increase n.burnin and n.iter for better convergence of MCMC rslt.henry <- pb.bayesian.binary(n00, n01, n10, n11, data = dat.henry, method = "bay", het = "mul", sd.prior = "unif", n.adapt = 1000, n.chains = 3, n.burnin = 500, n.iter = 2000, thin = 2, upp.het = 2) rslt.henry
data("dat.poole") set.seed(654321) ## increase n.burnin and n.iter for better convergence of MCMC rslt.poole <- pb.bayesian.binary(n00, n01, n10, n11, data = dat.poole, method = "bay", het = "mul", sd.prior = "unif", n.adapt = 1000, n.chains = 3, n.burnin = 500, n.iter = 2000, thin = 2, upp.het = 2) rslt.poole data("dat.ducharme") set.seed(654321) ## increase n.burnin and n.iter for better convergence of MCMC rslt.ducharme <- pb.bayesian.binary(n00, n01, n10, n11, data = dat.ducharme, method = "bay", het = "mul", sd.prior = "unif", n.adapt = 1000, n.chains = 3, n.burnin = 500, n.iter = 2000, thin = 2, upp.het = 2) rslt.ducharme data("dat.henry") set.seed(654321) ## increase n.burnin and n.iter for better convergence of MCMC rslt.henry <- pb.bayesian.binary(n00, n01, n10, n11, data = dat.henry, method = "bay", het = "mul", sd.prior = "unif", n.adapt = 1000, n.chains = 3, n.burnin = 500, n.iter = 2000, thin = 2, upp.het = 2) rslt.henry
Performs the hybrid test for publication bias/small-study effects introduced in Lin (2020), which synthesizes results from multiple popular publication bias tests, in a meta-analysis with binary outcomes.
pb.hybrid.binary(n00, n01, n10, n11, data, methods, iter.resam = 1000, theo.pval = TRUE)
n00 |
a numeric vector or the corresponding column name in the argument |
n01 |
a numeric vector or the corresponding column name in the argument |
n10 |
a numeric vector or the corresponding column name in the argument |
n11 |
a numeric vector or the corresponding column name in the argument |
data |
an optional data frame containing the meta-analysis dataset. If |
methods |
a vector of character strings specifying the publication bias tests to be included in the hybrid test. They can be a subset of |
iter.resam |
a positive integer specifying the number of resampling iterations for calculating the p-value of the hybrid test. |
theo.pval |
a logical value indicating whether to additionally calculate the p-values of the tests specified in |
The hybrid test statistic is defined as the minimum p-value among the publication bias tests in the set specified by the argument methods. Note that this minimum p-value is no longer a genuine p-value because it cannot control the type I error rate; the p-value of the hybrid test therefore needs to be calculated via a resampling approach. See more details in Lin (2020).
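The idea behind the resampling step can be illustrated with a small, self-contained sketch that is not the package's implementation: compute the minimum p-value over a couple of simplified asymmetry tests (rough versions of Egger's regression and Begg's rank correlation), regenerate the studies from a common-effect null model with the observed within-study variances, and take the proportion of resampled minima that are at least as extreme.

## conceptual sketch only (simplified tests, not the package's internal code)
min.p <- function(y, s2) {
  se <- sqrt(s2)
  p.egger <- summary(lm(I(y/se) ~ I(1/se)))$coefficients["(Intercept)", 4]
  p.begg <- cor.test(y, s2, method = "kendall", exact = FALSE)$p.value
  min(p.egger, p.begg)                 # the hybrid test statistic
}
hybrid.p <- function(y, s2, iter = 1000) {
  w <- 1/s2
  mu.hat <- sum(w * y)/sum(w)          # common-effect estimate under the null
  t.obs <- min.p(y, s2)
  t.null <- replicate(iter,
    min.p(rnorm(length(y), mean = mu.hat, sd = sqrt(s2)), s2))
  mean(t.null <= t.obs)                # resampling-based p-value
}

For a binary meta-analysis, y and s2 would be the study-specific log odds ratios and their variances; the tests actually combined by pb.hybrid.binary and its resampling scheme are more elaborate (see Lin, 2020).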
This function returns a list containing p-values of the publication bias tests specified in methods as well as of the hybrid test. Each element's name in this list has the format pval.x, where x stands for the character string of a certain publication bias test, such as rank, reg, or skew. The hybrid test's p-value has the name pval.hybrid. If theo.pval = TRUE, additional elements giving the p-values of the tests in methods based on theoretical null distributions are included in the produced list; their names have the format pval.x.theo. Another p-value of the hybrid test is also produced based on them; its corresponding element has the name pval.hybrid.theo.
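For instance, reusing the call from the examples below, the individual elements of the returned list can be accessed by name:

data("dat.whiting")
set.seed(1234)
out.whiting <- pb.hybrid.binary(n00 = n00, n01 = n01, n10 = n10, n11 = n11,
  data = dat.whiting, iter.resam = 10)
out.whiting$pval.hybrid        # resampling-based p-value of the hybrid test
out.whiting$pval.hybrid.theo   # hybrid p-value based on theoretical null distributions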
Begg CB, Mazumdar M (1994). "Operating characteristics of a rank correlation test for publication bias." Biometrics, 50(4), 1088–1101. <doi:10.2307/2533446>
Duval S, Tweedie R (2000). "A nonparametric ‘trim and fill’ method of accounting for publication bias in meta-analysis." Journal of the American Statistical Association, 95(449), 89–98. <doi:10.1080/01621459.2000.10473905>
Egger M, Davey Smith G, Schneider M, Minder C (1997). "Bias in meta-analysis detected by a simple, graphical test." BMJ, 315(7109), 629–634. <doi:10.1136/bmj.315.7109.629>
Harbord RM, Egger M, Sterne JAC (2006). "A modified test for small-study effects in meta-analyses of controlled trials with binary endpoints." Statistics in Medicine, 25(20), 3443–3457. <doi:10.1002/sim.2380>
Jin Z-C, Wu C, Zhou X-H, He J (2014). "A modified regression method to test publication bias in meta-analyses with binary outcomes." BMC Medical Research Methodology, 14, 132. <doi:10.1186/1471-2288-14-132>
Lin L (2020). "Hybrid test for publication bias in meta-analysis." Statistical Methods in Medical Research, 29(10), 2881–2899. <doi:10.1177/0962280220910172>
Lin L, Chu H (2018). "Quantifying publication bias in meta-analysis." Biometrics, 74(3), 785–794. <doi:10.1111/biom.12817>
Macaskill P, Walter SD, Irwig L (2001). "A comparison of methods to detect publication bias in meta-analysis." Statistics in Medicine, 20(4), 641–654. <doi:10.1002/sim.698>
Peters JL, Sutton AJ, Jones DR, Abrams KR, Rushton L (2006). "Comparison of two methods to detect publication bias in meta-analysis." JAMA, 295(6), 676–680. <doi:10.1001/jama.295.6.676>
Rucker G, Schwarzer G, Carpenter J (2008). "Arcsine test for publication bias in meta-analyses with binary outcomes." Statistics in Medicine, 27(5), 746–763. <doi:10.1002/sim.2971>
Schwarzer G, Antes G, Schumacher M (2007). "A test for publication bias in meta-analysis with sparse binary data." Statistics in Medicine, 26(4), 721–733. <doi:10.1002/sim.2588>
Tang J-L, Liu JLY (2000). "Misleading funnel plot for detection of bias in meta-analysis." Journal of Clinical Epidemiology, 53(5), 477–484. <doi:10.1016/S0895-4356(99)00204-8>
Thompson SG, Sharp SJ (1999). "Explaining heterogeneity in meta-analysis: a comparison of methods." Statistics in Medicine, 18(20), 2693–2708. <doi:10.1002/(SICI)1097-0258(19991030)18:20<2693::AID-SIM235>3.0.CO;2-V>
pb.bayesian.binary
, pb.hybrid.generic
## meta-analysis of (log) odds ratios
data("dat.whiting")
# based on only 10 resampling iterations
set.seed(1234)
out.whiting <- pb.hybrid.binary(n00 = n00, n01 = n01, n10 = n10, n11 = n11,
  data = dat.whiting, iter.resam = 10)
out.whiting
# increases the number of resampling iterations to 10000,
# taking longer time
Performs the hybrid test for publication bias/small-study effects introduced in Lin (2020), which synthesizes results from multiple popular publication bias tests, in a meta-analysis with generic outcomes.
pb.hybrid.generic(y, s2, n, data, methods, iter.resam = 1000, theo.pval = TRUE)
y |
a numeric vector or the corresponding column name in the argument |
s2 |
a numeric vector or the corresponding column name in the argument |
n |
an optional numeric vector or the corresponding column name in the argument |
data |
an optional data frame containing the meta-analysis dataset. If |
methods |
a vector of character strings specifying the publication bias tests to be included in the hybrid test. They can be a subset of |
iter.resam |
a positive integer specifying the number of resampling iterations for calculating the p-value of the hybrid test. |
theo.pval |
a logical value indicating whether to additionally calculate the p-values of the tests specified in |
The hybrid test statistic is defined as the minimum p-value among the publication bias tests in the set specified by the argument methods. Note that this minimum p-value is no longer a genuine p-value because it cannot control the type I error rate; the p-value of the hybrid test therefore needs to be calculated via a resampling approach. See more details in Lin (2020).
This function returns a list containing p-values of the publication bias tests specified in methods as well as of the hybrid test. Each element's name in this list has the format pval.x, where x stands for the character string of a certain publication bias test, such as rank, reg, or skew. The hybrid test's p-value has the name pval.hybrid. If theo.pval = TRUE, additional elements giving the p-values of the tests in methods based on theoretical null distributions are included in the produced list; their names have the format pval.x.theo. Another p-value of the hybrid test is also produced based on them; its corresponding element has the name pval.hybrid.theo.
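The hybrid test can also be restricted to a subset of component tests through methods. In the sketch below, the strings "rank", "reg", and "skew" are taken from the element names described above; since the full list of permitted values is not reproduced here, they should be verified against the methods argument documentation.

data("dat.plourde")
set.seed(1234)
## hybrid test restricted to three component tests
## ("rank", "reg", "skew" follow the element names described above)
pb.hybrid.generic(y = y, s2 = s2, n = n, data = dat.plourde,
  methods = c("rank", "reg", "skew"), iter.resam = 10)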
Begg CB, Mazumdar M (1994). "Operating characteristics of a rank correlation test for publication bias." Biometrics, 50(4), 1088–1101. <doi:10.2307/2533446>
Duval S, Tweedie R (2000). "A nonparametric ‘trim and fill’ method of accounting for publication bias in meta-analysis." Journal of the American Statistical Association, 95(449), 89–98. <doi:10.1080/01621459.2000.10473905>
Egger M, Davey Smith G, Schneider M, Minder C (1997). "Bias in meta-analysis detected by a simple, graphical test." BMJ, 315(7109), 629–634. <doi:10.1136/bmj.315.7109.629>
Lin L (2020). "Hybrid test for publication bias in meta-analysis." Statistical Methods in Medical Research, 29(10), 2881–2899. <doi:10.1177/0962280220910172>
Lin L, Chu H (2018). "Quantifying publication bias in meta-analysis." Biometrics, 74(3), 785–794. <doi:10.1111/biom.12817>
Tang J-L, Liu JLY (2000). "Misleading funnel plot for detection of bias in meta-analysis." Journal of Clinical Epidemiology, 53(5), 477–484. <doi:10.1016/S0895-4356(99)00204-8>
Thompson SG, Sharp SJ (1999). "Explaining heterogeneity in meta-analysis: a comparison of methods." Statistics in Medicine, 18(20), 2693–2708. <doi:10.1002/(SICI)1097-0258(19991030)18:20<2693::AID-SIM235>3.0.CO;2-V>
pb.bayesian.binary
, pb.hybrid.binary
## meta-analysis of mean differences
data("dat.plourde")
# based on only 10 resampling iterations
set.seed(1234)
out.plourde <- pb.hybrid.generic(y = y, s2 = s2, n = n, data = dat.plourde,
  iter.resam = 10)
out.plourde
# only produces resampling-based p-values
set.seed(1234)
pb.hybrid.generic(y = y, s2 = s2, n = n, data = dat.plourde,
  iter.resam = 10, theo.pval = FALSE)
# increases the number of resampling iterations to 10000,
# taking longer time

## meta-analysis of standardized mean differences
data("dat.paige")
# based on only 10 resampling iterations
set.seed(1234)
out.paige <- pb.hybrid.generic(y = y, s2 = s2, n = n, data = dat.paige,
  iter.resam = 10)
out.paige
# increases the number of resampling iterations to 10000,
# taking longer time
Visualizes a meta-analysis of diagnostic tests by presenting summary results, such as the ROC (receiver operating characteristic) curve, the overall sensitivity and overall specificity (shown as 1 - specificity on the x axis), and their confidence and prediction regions.
## S3 method for class 'meta.dt'
plot(x, add = FALSE, xlab, ylab, alpha,
  studies = TRUE, cex.studies, col.studies, pch.studies,
  roc, col.roc, lty.roc, lwd.roc, weight = FALSE,
  eqline, col.eqline, lty.eqline, lwd.eqline,
  overall = TRUE, cex.overall, col.overall, pch.overall,
  confid = TRUE, col.confid, lty.confid, lwd.confid,
  predict = FALSE, col.predict, lty.predict, lwd.predict, ...)
x |
an object of class |
add |
a logical value indicating if the plot is added to an already existing plot. |
xlab |
a label for the x axis; the default is "1 - Specificity". |
ylab |
a label for the y axis; the default is "Sensitivity". |
alpha |
a numeric value specifying the statistical significance level for the confidence and prediction regions. If not specified, the plot uses the significance level stored in |
studies |
a logical value indicating if the individual studies are presented in the plot. |
cex.studies |
the size of points representing individual studies (the default is 1). |
col.studies |
the color of points representing individual studies (the default is |
pch.studies |
the symbol of points representing individual studies (the default is 1, i.e., circle). |
roc |
a logical value indicating if the ROC curve is presented in the plot. The default is |
col.roc |
the color of the ROC curve (the default is |
lty.roc |
the line type of the ROC curve (the default is 1, i.e., solid line). |
lwd.roc |
the line width of the ROC curve (the default is 1). |
weight |
a logical value indicating if the weighted ( |
eqline |
a logical value indicating if the line where sensitivity equals specificity is presented in the plot. |
col.eqline |
the color of the equality line (the default is |
lty.eqline |
the type of the equality line (the default is 4, i.e., dot-dash line). |
lwd.eqline |
the width of the equality line (the default is 1). |
overall |
a logical value indicating if the overall sensitivity and overall specificity are presented in the plot. This and the following arguments are used for the bivariate (generalized) linear mixed model ( |
cex.overall |
the size of the point representing the overall sensitivity and overall specificity (the default is 1). |
col.overall |
the color of the point representing the overall sensitivity and overall specificity (the default is |
pch.overall |
the symbol of the point representing the overall sensitivity and overall specificity (the default is 15, i.e., filled square). |
confid |
a logical value indicating if the confidence region of the overall sensitivity and overall specificity is presented in the plot. |
col.confid |
the line color of the confidence region (the default is |
lty.confid |
the line type of the confidence region (the default is 2, i.e., dashed line). |
lwd.confid |
the line width of the confidence region (the default is 1). |
predict |
a logical value indicating if the prediction region of the overall sensitivity and overall specificity is presented in the plot. |
col.predict |
the line color of the prediction region (the default is |
lty.predict |
the line type of the prediction region (the default is 3, i.e., dotted line). |
lwd.predict |
the line width of the prediction region (the default is 1). |
... |
other arguments that can be passed to the function |
None.
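No example is given on this page; as a minimal, hedged usage sketch, suppose fit is an object of class meta.dt returned by the package's fitting function meta.dt (the fitting call itself is not shown here, and its arguments should be taken from the meta.dt documentation). The plotting arguments used below are those documented above.

## 'fit' is assumed to be a 'meta.dt' object returned by meta.dt()
plot(fit, roc = TRUE, overall = TRUE, confid = TRUE, predict = TRUE)
## a second fit could be overlaid on the same axes, e.g.:
## plot(fit2, add = TRUE, col.roc = "blue", col.overall = "blue")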
Draws a plot showing study-specific standardized residuals.
## S3 method for class 'metaoutliers'
plot(x, xtick.cex = 1, ytick.cex = 0.5, ...)
x |
an object created by the function |
xtick.cex |
a numeric value specifying the size of ticks on the x axis. |
ytick.cex |
a numeric value specifying the size of ticks on the y axis. |
... |
other arguments that can be passed to the function |
None.
data("dat.aex") attach(dat.aex) out.aex <- metaoutliers(y, s2, model = "FE") detach(dat.aex) plot(out.aex) data("dat.hipfrac") attach(dat.hipfrac) out.hipfrac <- metaoutliers(y, s2, model = "RE") detach(dat.hipfrac) plot(out.hipfrac)
data("dat.aex") attach(dat.aex) out.aex <- metaoutliers(y, s2, model = "FE") detach(dat.aex) plot(out.aex) data("dat.hipfrac") attach(dat.hipfrac) out.hipfrac <- metaoutliers(y, s2, model = "RE") detach(dat.hipfrac) plot(out.hipfrac)
Prints information about a meta-analysis of diagnostic tests.
## S3 method for class 'meta.dt'
print(x, digits = 3, ...)
x |
an object of class |
digits |
an integer specifying the number of decimal places to which the printed results should be rounded. |
... |
other arguments. |
None.
Generates contour-enhanced sample-size-based funnel plot for a meta-analysis of mean differences, standardized mean differences, (log) odds ratios, (log) relative risks, or risk differences.
ssfunnel(y, s2, n, data, type, alpha = c(0.1, 0.05, 0.01, 0.001),
  log.ss = FALSE, sigma, p0, xlim, ylim, xlab, ylab,
  cols.contour, col.mostsig, cex.pts, lwd.contour, pch,
  x.legend, y.legend, cex.legend, bg.legend, ...)
y |
a numeric vector or the corresponding column name in the argument |
s2 |
a numeric vector or the corresponding column name in the argument |
n |
a numeric vector or the corresponding column name in the argument |
data |
an optional data frame containing the meta-analysis dataset. If |
type |
a character string specifying the type of effect size, which should be one of |
alpha |
a numeric vector specifying the significance levels to be presented in the sample-size-based funnel plot. |
log.ss |
a logical value indicating whether sample sizes are plotted on a logarithmic scale ( |
sigma |
a positive numeric value that is required for the mean difference ( |
p0 |
an optional numeric value specifying a rough estimate of the common event rate in the control group across studies. It is only used for the (log) odds ratio, (log) relative risk, and risk difference. |
xlim |
the x limits |
ylim |
the y limits |
xlab |
a label for the x axis. |
ylab |
a label for the y axis. |
cols.contour |
a vector of character strings; they indicate colors of the contours to be presented in the sample-size-based funnel plot, and correspond to the significance levels specified in the argument |
col.mostsig |
a character string specifying the color for the most significant result among the studies in the meta-analysis. |
cex.pts |
the size of the points. |
lwd.contour |
the width of the contours. |
pch |
the symbol of the points. |
x.legend |
the x co-ordinate or a keyword, such as |
y.legend |
the y co-ordinate to be used to position the legend (the default is |
cex.legend |
the size of legend text. |
bg.legend |
the background color for the legend box. |
... |
other arguments that can be passed to |
A contour-enhanced sample-size-based funnel plot is generated; it presents the study-specific total sample sizes against the corresponding effect size estimates. This helps avoid the confounding caused by the intrinsic association between effect size estimates and standard errors in the conventional standard-error-based funnel plot. See Lin (2019) for the derivations of the contours.
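As a rough, hedged illustration of how such significance contours arise (not necessarily the exact formula used by ssfunnel), consider a mean difference with a common within-group standard deviation sigma and equal arm sizes n/2: the standard error is roughly 2 * sigma / sqrt(n), so a study of total size n reaches two-sided significance at level alpha when the absolute effect size is at least qnorm(1 - alpha/2) * 2 * sigma / sqrt(n). The sketch below draws these boundaries; sigma = 8 and the equal-allocation assumption are purely illustrative.

## sketch only (not the package's internal code): significance boundaries
## for a mean difference, assuming a common sigma and equal arm sizes n/2
sigma <- 8                               # assumed within-group standard deviation
alpha <- c(0.1, 0.05, 0.01, 0.001)
n <- seq(30, 500, length.out = 200)      # grid of total sample sizes
plot(NULL, xlim = c(-15, 15), ylim = range(n),
  xlab = "Mean difference", ylab = "Total sample size")
for (a in alpha) {
  bound <- qnorm(1 - a/2) * 2 * sigma / sqrt(n)  # |effect| needed for significance
  lines(bound, n, lty = 2)
  lines(-bound, n, lty = 2)
}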
None.
Lin L (2019). "Graphical augmentations to sample-size-based funnel plot in meta-analysis." Research Synthesis Methods, 10(3), 376–388. <doi:10.1002/jrsm.1340>
Peters JL, Sutton AJ, Jones DR, Abrams KR, Rushton L (2006). "Comparison of two methods to detect publication bias in meta-analysis." JAMA, 295(6), 676–680. <doi:10.1001/jama.295.6.676>
## mean difference data("dat.annane") # descriptive statistics for sigma (continuous outcomes' standard deviation) quantile(sqrt(dat.annane$s2/(1/dat.annane$n1 + 1/dat.annane$n2)), probs = c(0, 0.25, 0.5, 0.75, 1)) # based on sigma = 8 ssfunnel(y, s2, n, data = dat.annane, type = "md", alpha = c(0.1, 0.05, 0.01, 0.001), sigma = 8) # sample sizes presented on a logarithmic scale with plot title ssfunnel(y, s2, n, data = dat.annane, type = "md", alpha = c(0.1, 0.05, 0.01, 0.001), sigma = 8, log.ss = TRUE, main = "Contour-enhanced sample-size-based funnel plot") # based on sigma = 17, with specified x and y limits ssfunnel(y, s2, n, data = dat.annane, type = "md", xlim = c(-15, 15), ylim = c(30, 500), alpha = c(0.1, 0.05, 0.01, 0.001), sigma = 17, log.ss = TRUE) # based on sigma = 20 ssfunnel(y, s2, n, data = dat.annane, type = "md", xlim = c(-15, 15), ylim = c(30, 500), alpha = c(0.1, 0.05, 0.01, 0.001), sigma = 20, log.ss = TRUE) ## standardized mean difference data("dat.barlow") ssfunnel(y, s2, n, data = dat.barlow, type = "smd", alpha = c(0.1, 0.05, 0.01, 0.001), xlim = c(-1.5, 1)) ## log odds ratio data("dat.butters") ssfunnel(y, s2, n, data = dat.butters, type = "lor", alpha = c(0.1, 0.05, 0.01, 0.001), xlim = c(-1.5, 1.5)) # use different colors for contours ssfunnel(y, s2, n, data = dat.butters, type = "lor", alpha = c(0.1, 0.05, 0.01, 0.001), xlim = c(-1.5, 1.5), cols.contour = c("blue", "green", "yellow", "red"), col.mostsig = "black") # based on p0 = 0.3 (common event rate in the control group across studies) ssfunnel(y, s2, n, data = dat.butters, type = "lor", alpha = c(0.1, 0.05, 0.01, 0.001), xlim = c(-1.5, 1.5), p0 = 0.3) # based on p0 = 0.5 ssfunnel(y, s2, n, data = dat.butters, type = "lor", alpha = c(0.1, 0.05, 0.01, 0.001), xlim = c(-1.5, 1.5), p0 = 0.5) ## log relative risk data("dat.williams") ssfunnel(y, s2, n, data = dat.williams, type = "lrr", alpha = c(0.1, 0.05, 0.01, 0.001), xlim = c(-1.5, 2.5)) # based on p0 = 0.2 ssfunnel(y, s2, n, data = dat.williams, type = "lrr", alpha = c(0.1, 0.05, 0.01, 0.001), p0 = 0.2, xlim = c(-1.5, 2.5)) # based on p0 = 0.3 ssfunnel(y, s2, n, data = dat.williams, type = "lrr", alpha = c(0.1, 0.05, 0.01, 0.001), p0 = 0.3, xlim = c(-1.5, 2.5)) ## risk difference data("dat.kaner") ssfunnel(y, s2, n, data = dat.kaner, type = "rd", alpha = c(0.1, 0.05, 0.01, 0.001), xlim = c(-0.5, 0.5)) # based on p0 = 0.1 ssfunnel(y, s2, n, data = dat.kaner, type = "rd", alpha = c(0.1, 0.05, 0.01, 0.001), p0 = 0.1, xlim = c(-0.5, 0.5)) # based on p0 = 0.4 ssfunnel(y, s2, n, data = dat.kaner, type = "rd", alpha = c(0.1, 0.05, 0.01, 0.001), p0 = 0.4, xlim = c(-0.5, 0.5))
## mean difference data("dat.annane") # descriptive statistics for sigma (continuous outcomes' standard deviation) quantile(sqrt(dat.annane$s2/(1/dat.annane$n1 + 1/dat.annane$n2)), probs = c(0, 0.25, 0.5, 0.75, 1)) # based on sigma = 8 ssfunnel(y, s2, n, data = dat.annane, type = "md", alpha = c(0.1, 0.05, 0.01, 0.001), sigma = 8) # sample sizes presented on a logarithmic scale with plot title ssfunnel(y, s2, n, data = dat.annane, type = "md", alpha = c(0.1, 0.05, 0.01, 0.001), sigma = 8, log.ss = TRUE, main = "Contour-enhanced sample-size-based funnel plot") # based on sigma = 17, with specified x and y limits ssfunnel(y, s2, n, data = dat.annane, type = "md", xlim = c(-15, 15), ylim = c(30, 500), alpha = c(0.1, 0.05, 0.01, 0.001), sigma = 17, log.ss = TRUE) # based on sigma = 20 ssfunnel(y, s2, n, data = dat.annane, type = "md", xlim = c(-15, 15), ylim = c(30, 500), alpha = c(0.1, 0.05, 0.01, 0.001), sigma = 20, log.ss = TRUE) ## standardized mean difference data("dat.barlow") ssfunnel(y, s2, n, data = dat.barlow, type = "smd", alpha = c(0.1, 0.05, 0.01, 0.001), xlim = c(-1.5, 1)) ## log odds ratio data("dat.butters") ssfunnel(y, s2, n, data = dat.butters, type = "lor", alpha = c(0.1, 0.05, 0.01, 0.001), xlim = c(-1.5, 1.5)) # use different colors for contours ssfunnel(y, s2, n, data = dat.butters, type = "lor", alpha = c(0.1, 0.05, 0.01, 0.001), xlim = c(-1.5, 1.5), cols.contour = c("blue", "green", "yellow", "red"), col.mostsig = "black") # based on p0 = 0.3 (common event rate in the control group across studies) ssfunnel(y, s2, n, data = dat.butters, type = "lor", alpha = c(0.1, 0.05, 0.01, 0.001), xlim = c(-1.5, 1.5), p0 = 0.3) # based on p0 = 0.5 ssfunnel(y, s2, n, data = dat.butters, type = "lor", alpha = c(0.1, 0.05, 0.01, 0.001), xlim = c(-1.5, 1.5), p0 = 0.5) ## log relative risk data("dat.williams") ssfunnel(y, s2, n, data = dat.williams, type = "lrr", alpha = c(0.1, 0.05, 0.01, 0.001), xlim = c(-1.5, 2.5)) # based on p0 = 0.2 ssfunnel(y, s2, n, data = dat.williams, type = "lrr", alpha = c(0.1, 0.05, 0.01, 0.001), p0 = 0.2, xlim = c(-1.5, 2.5)) # based on p0 = 0.3 ssfunnel(y, s2, n, data = dat.williams, type = "lrr", alpha = c(0.1, 0.05, 0.01, 0.001), p0 = 0.3, xlim = c(-1.5, 2.5)) ## risk difference data("dat.kaner") ssfunnel(y, s2, n, data = dat.kaner, type = "rd", alpha = c(0.1, 0.05, 0.01, 0.001), xlim = c(-0.5, 0.5)) # based on p0 = 0.1 ssfunnel(y, s2, n, data = dat.kaner, type = "rd", alpha = c(0.1, 0.05, 0.01, 0.001), p0 = 0.1, xlim = c(-0.5, 0.5)) # based on p0 = 0.4 ssfunnel(y, s2, n, data = dat.kaner, type = "rd", alpha = c(0.1, 0.05, 0.01, 0.001), p0 = 0.4, xlim = c(-0.5, 0.5))