The rat tumor data consists of observations of endometrial stromal polyp incidence in k = 70 groups of rats. For each group i, yi is the number of rats with polyps and ni is the total number of rats in the experiment. Here we describe the analysis of the rat tumor data using Bayes-${\rm DS}(G,m)$ modeling.
Step 1. We begin by finding the starting parameter values for g ∼ Beta(α, β) by MLE:
library(BayesGOF)
set.seed(8697)
data(rat)
###Use MLE to determine starting values
rat.start <- gMLE.bb(rat$y, rat$n)$estimate
We use our starting parameter values to run the main function:
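The call below is a sketch of this step, patterned on the DS.prior usage shown for the galaxy data later in this vignette; max.m = 6 is our assumption.
rat.ds <- DS.prior(rat, max.m = 6, g.par = rat.start, family = "Binomial")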
Step 2. We display the U-function to quantify and characterize the uncertainty of the a priori selected g:
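Using the same plot interface shown later for the galaxy data:
plot(rat.ds, plot.type = "Ufunc", main = "U-function: Rat Data")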
The deviations from the uniform distribution (the red dashed line) indicate that our initial choice for g, Beta(α = 2.3, β = 14.1), is incompatible with the observed data and requires repair; in fact, the data show two distinct groups of polyp incidence among the rats.
Step 3a. Extract the parameters for the corrected prior π̂:
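One way to display both components, assuming they are stored under the names g.par and LP.coef (as the headers of the printed output suggest), is:
rat.ds[c("g.par", "LP.coef")]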
## $g.par
## alpha beta
## 2.304768 14.079707
##
## $LP.coef
## LP1 LP2 LP3
## 0.0000000 0.0000000 -0.5040361
Therefore, the DS prior π̂ given g is: π̂(θ) = g(θ; α, β)[1 − 0.50T3(θ; G)]
Step 3b. We can now plot the estimated DS prior π̂ along with the original parametric g:
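Again using the plot interface shown later for the galaxy data:
plot(rat.ds, plot.type = "DSg", main = "DS Prior vs g: Rat Data")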
Step 4. Here we are interested in the overall macro-level inference by combining the k = 70 parallel studies. The group-specific modes along with their standard errors are computed as follows:
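A sketch of the macroinference call; num.modes = 2 and method = "mode" are our assumptions, inferred from the two-row, mode-based output below.
rat.macro.md <- DS.macro.inf(rat.ds, num.modes = 2, method = "mode")
rat.macro.md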
## 1SD Lower Limit Mode 1SD Upper Limit
## [1,] 0.0126 0.0340 0.0554
## [2,] 0.1425 0.1562 0.1698
Step 5. Given an additional study θ71 where y71 = 4 and n71 = 14, the goal is to estimate the probability of a tumor for this new clinical study. The following code performs the desired microinference to find the posterior distribution πLP(θ71|y71, n71) along with its mean and mode:
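A sketch of the microinference call, using DS.micro.inf as in the child illness example later in this vignette; we assume the binomial case also takes the group size as n.0.
rat.micro <- DS.micro.inf(rat.ds, y.0 = 4, n.0 = 14)
rat.micro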
## Posterior Mean = 0.1904
## Posterior Mode = 0.1832
## Use plot(x) to generate posterior plot
The previous microinference step ignores the randomness in the estimated hyperparameters (see Step 3a). To account for this extra variability of the hyperparameter estimates in our DS(G,m) model, we perform a finite Bayes adjustment to the microinference for θ71. We also report the 90% Bayesian credible interval based on the finite-sample “corrected” posterior of θ71.
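A sketch of this step, assuming the package exposes the adjustment as DS.Finite.Bayes with the same y.0 and n.0 arguments and a cred.interval option for the 90% interval:
rat.fb <- DS.Finite.Bayes(rat.ds, y.0 = 4, n.0 = 14, cred.interval = 0.90)
rat.fb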
## Posterior Mean = 0.1938
## Posterior Mode = 0.1732
## Lower Bound Credible Interval = 0.0991
## Upper Bound Credible Interval = 0.3103
## Use plot(x) to generate finite Bayes plot
This demonstration compares the results using the default ℒ2 representation of the estimated DS prior π̂ with those from the maximum entropy representation π̆. We use the galaxy data, which consists of k = 324 observed rotation velocities yi (and their uncertainties) of Low Surface Brightness (LSB) galaxies.
Step 1. We begin by finding the starting parameter values for g ∼ Normal(μ, τ2) by MLE:
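The code below sketches this step, assuming gMLE.nn takes the observed values and their standard errors (the first two columns of galaxy) and returns the MLE in its estimate component, as gMLE.bb does above:
gal.start <- gMLE.nn(galaxy[, 1], galaxy[, 2])$estimate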
Step 2. Use our starting parameters to run the main function for two distinct cases:
LP.type = "L2" estimates the DS prior in the ℒ2 representation (the default).
LP.type = "MaxEnt" estimates the DS prior in the maximum entropy representation.
gal.ds.L2 <- DS.prior(galaxy[,c(1,2)], max.m = 6, g.par = gal.start, family = "Normal", LP.type = "L2")
gal.ds.ME <- DS.prior(galaxy[, c(1,2)], max.m = 6, g.par = gal.start, family = "Normal", LP.type = "MaxEnt")
Step 3. We plot the U-functions and DS priors for both cases for comparison.
plot(gal.ds.L2, plot.type = "Ufunc", main = "L2 U-function")
plot(gal.ds.L2, plot.type = "DSg", main = "L2 DS vs g")
plot(gal.ds.ME, plot.type = "Ufunc", main = "MaxEnt U-function")
plot(gal.ds.ME, plot.type = "DSg", main = "MaxEnt DS vs g")
The U-function plots share a similar structure but differ slightly at the two peaks: in the maximum entropy representation the first peak is narrower and the second peak is less pronounced than in the ℒ2 representation. The “DS vs g” plots show how these slight changes in the U-functions influence the resulting DS prior: the maximum entropy representation has a more prominent first peak, a smaller second peak, and smoother tails.
Step 4. Extract the parameters for the corrected priors: π̂ and π̆:
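As before, one way to display both fitted objects, assuming the component names g.par and LP.coef shown in the printed output:
gal.ds.L2[c("g.par", "LP.coef")]
gal.ds.ME[c("g.par", "LP.coef")]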
## $g.par
## mu tau^2
## 85.51316 3304.41491
##
## $LP.coef
## LP1 LP2 LP3 LP4 LP5
## 0.0000000 0.0000000 0.2131113 -0.1967060 0.4069104
## $g.par
## mu tau^2
## 85.51316 3304.41491
##
## $LP.coef
## LP(ME)0 LP(ME)1 LP(ME)2 LP(ME)3 LP(ME)4 LP(ME)5
## -0.1529718 0.0000000 0.0000000 0.2553139 -0.2765342 0.4560368
The DS prior in its ℒ2 representation is: π̂(θ) = g(θ; μ, τ2)[1 + 0.21T3(θ; G) − 0.20T4(θ; G) + 0.41T5(θ; G)].
The DS prior in its maximum entropy representation is: π̆(θ) = g(θ; μ, τ2)exp[−0.15 + 0.26T3(θ; G) − 0.28T4(θ; G) + 0.46T5(θ; G)].
For this example, we focus on the macroinference for the arsenic data set, which contains measurements of the level of arsenic in oyster tissue from k = 28 laboratories. We use the maximum entropy representation here.
Step 1. We begin by finding the starting parameter values for g ∼ Normal(μ, τ2) by MLE:
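A sketch of this step; the dataset name arsenic and its column order (measured value, then standard error) are our assumptions:
data(arsenic)
arsn.start <- gMLE.nn(arsenic[, 1], arsenic[, 2])$estimate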
We use our starting parameter values to run the main DS.prior function:
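Patterned on the galaxy example above; max.m = 6 is our assumption:
arsn.ds <- DS.prior(arsenic, max.m = 6, g.par = arsn.start, family = "Normal", LP.type = "MaxEnt")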
Step 2. We display the U-function to quantify and characterize the uncertainty of the a priori selected g:
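Using the same plot interface as in the galaxy example:
plot(arsn.ds, plot.type = "Ufunc", main = "U-function: Arsenic Data")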
Step 3. We now extract the parameters for the corrected prior π̆ and plot it, along with the original g:
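As before, one way to display the fitted components and plot the corrected prior against g (component names assumed from the printed output):
arsn.ds[c("g.par", "LP.coef")]
plot(arsn.ds, plot.type = "DSg", main = "DS Prior vs g: Arsenic Data")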
## $g.par
## mu tau^2
## 13.220522 3.407165
##
## $LP.coef
## LP(ME)0 LP(ME)1 LP(ME)2 LP(ME)3 LP(ME)4
## -0.4780663 0.0000000 -0.6503293 -0.7091040 0.4174869
The resulting equation for π̆(θ) is: π̆(θ) = g(θ; μ, τ2)exp[−0.48 − 0.65T2(θ; G) − 0.71T3(θ; G) + 0.42T4(θ; G)].
Step 4. We now execute the macroinference to find a global estimate to summarize the k = 28 studies.
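A sketch of the call; the object name arsn.macro matches the plotting code below, while num.modes = 2 and method = "mode" are assumptions inferred from the two-mode output:
arsn.macro <- DS.macro.inf(arsn.ds, num.modes = 2, method = "mode")
arsn.macro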
## 1SD Lower Limit Mode 1SD Upper Limit
## [1,] 8.6025 9.6674 10.7323
## [2,] 13.0553 13.6030 14.1507
Based on our results, we find two significant modes; the prior therefore shows structured heterogeneity and needs both modes to describe the distribution and its two groups. In this case, however, we want to estimate the consensus value of the measurand and its uncertainty, so we select the dominant mode, which we read from the macroinference results:
# plot(arsn.macro, main = "MacroInference: Arsenic Data")  # default plot method
par(mar = c(5, 5.2, 4, 2) + 0.3)  # widen the left margin so the large y-axis label fits
plot(arsn.macro$prior.fit$theta.vals, arsn.macro$prior.fit$ds.prior,
     type = "l", xlim = c(8, 18.5),
     lwd = 2, col = "red3",
     xlab = expression(theta), ylab = "", font.main = 1,
     cex.lab = 1.45, cex.axis = 1.5, cex.main = 1.75, cex.sub = 1.5,
     main = "MacroInference: Arsenic Data")
title(ylab = expression(paste(hat(pi)(theta))), line = 2.3, cex.lab = 1.45)
points(arsn.macro$model.modes[2], -0.02, col = "green", pch = 17, cex = 1.5)  # mark the dominant mode
# tick marks spanning +/- 1 SD around the dominant mode
axis(1, at = seq(arsn.macro$model.modes[2] - arsn.macro$mode.sd[2],
                 arsn.macro$model.modes[2] + arsn.macro$mode.sd[2],
                 length.out = 20),
     tick = TRUE, col = "goldenrod4", labels = FALSE, tck = -0.015)
The plot shows the dominant mode: our consensus value for the measurand is 13.6, with a standard error of 0.25 (computed via smooth bootstrap). This mode is resistant to extreme low measurements and achieves a slightly more accurate result than the standard parametric empirical Bayes (PEB) estimate of 13.22 ± 0.25.
The next example conducts microinference on the child illness data, which come from a study in which researchers followed k = 602 pre-school children in northeast Thailand, recording the number of times (yi) each child became sick during every two-week period for over three years. In particular, we want to compare the posterior distributions for children who became sick 1, 3, 5, and 10 times during a two-week period. We use the ℒ2 representation for this example.
Step 1. We begin by finding the starting parameter values for g ∼ Gamma(α, β) by MLE:
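A sketch of this step; the Poisson-gamma MLE helper gMLE.pg and the dataset name ChildIll are our assumptions, patterned on gMLE.bb and gMLE.nn above (we also assume the fit is returned in an estimate component):
data(ChildIll)
child.start <- gMLE.pg(ChildIll)$estimate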
We use our starting parameter values to run the main DS.prior function for the Poisson family:
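Patterned on the earlier DS.prior calls; max.m = 6 is our assumption, inferred from the six LP coefficients in the output below:
child.ds <- DS.prior(ChildIll, max.m = 6, g.par = child.start, family = "Poisson")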
Step 2. We display the U-function to quantify and characterize the uncertainty of the selected g:
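Using the same plot interface as before:
plot(child.ds, plot.type = "Ufunc", main = "U-function: Child Illness Data")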
Step 3. We now extract the parameters for the corrected prior π̂:
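As before (component names assumed from the printed output):
child.ds[c("g.par", "LP.coef")]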
## $g.par
## alpha beta
## 1.060878 4.193337
##
## $LP.coef
## LP1 LP2 LP3 LP4 LP5 LP6
## 0.0000000 0.0000000 -0.1259159 0.0000000 0.0000000 -0.2797667
The DS prior π̂ given g is: π̂(θ) = g(θ; α, β)[1 − 0.13T3(θ; G) − 0.28T6(θ; G)]. We can plot π̂, along with g:
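plot(child.ds, plot.type = "DSg", main = "DS Prior vs g: Child Illness Data")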
Step 4. The plot shows some very interesting behavior in π̂. We want to explore the posterior distributions for y = 1, 3, 5, 10. For those results, we use the microinference functions.
child.micro.1 <- DS.micro.inf(child.ds, y.0 = 1)
child.micro.3 <- DS.micro.inf(child.ds, y.0 = 3)
child.micro.5 <- DS.micro.inf(child.ds, y.0 = 5)
child.micro.10 <- DS.micro.inf(child.ds, y.0 = 10)
Plotting the posterior distributions shows how they change with the number of times a child was ill. The plots for each of the four microinferences are shown below.
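The plot calls below are a sketch: we assume the microinference objects have a plot method (the package message “Use plot(x) to generate posterior plot” suggests so) and that it accepts a main title, as the earlier plot calls do:
plot(child.micro.1, main = "Posterior for y = 1")
plot(child.micro.3, main = "Posterior for y = 3")
plot(child.micro.5, main = "Posterior for y = 5")
plot(child.micro.10, main = "Posterior for y = 10")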