Title: | Bayesian Power Prior Design for Survival Data |
---|---|
Description: | Bayesian power/type I error calculation and model fitting using the power prior and the normalized power prior for proportional hazards models with piecewise constant hazard. The methodology and examples of applying the package are detailed in <doi:10.48550/arXiv.2404.05118>. The Bayesian clinical trial design methodology is described in Chen et al. (2011) <doi:10.1111/j.1541-0420.2011.01561.x>, and Psioda and Ibrahim (2019) <doi:10.1093/biostatistics/kxy009>. The proportional hazards model with piecewise constant hazard is detailed in Ibrahim et al. (2001) <doi:10.1007/978-1-4757-3447-8>. |
Authors: | Yueqi Shen [aut, cre], Matthew A. Psioda [aut], Joseph G. Ibrahim [aut] |
Maintainer: | Yueqi Shen <[email protected]> |
License: | GPL (>= 3) |
Version: | 1.0.3 |
Built: | 2024-12-06 06:49:02 UTC |
Source: | CRAN |
The BayesPPDSurv (Bayesian Power Prior Design for Survival Data) package provides two categories of functions: functions for Bayesian power/type I error calculation and functions for model fitting.
Ibrahim, J. G., Chen, M.-H. and Sinha, D. (2001). Bayesian Survival Analysis. New York: Springer Science & Business Media.
Psioda, M. A. and Ibrahim, J. G. (2019). Bayesian clinical trial design using historical data that inform the treatment effect. Biostatistics 20, 400–415.
Shen, Y., Psioda, M. A., and Ibrahim, J. G. (2023). BayesPPD: an R package for Bayesian sample size determination using the power and normalized power prior for generalized linear models. The R Journal, 14(4).
Approximation of the normalized power prior for beta for the proportional hazards model with piecewise constant hazard and random a0.
The function returns discrete samples of beta from the normalized power prior, and the user can use any mixture of multivariate normal distributions as an approximation for the normalized power prior for beta.
This function is used to produce prior.beta.mvn in the function power.phm.random.a0.
approximate.prior.beta(
  historical,
  n.intervals,
  change.points = NULL,
  prior.a0.shape1 = rep(1, 10),
  prior.a0.shape2 = rep(1, 10),
  prior.beta.mean = rep(0, 50),
  prior.beta.sd = rep(1000, 50),
  prior.lambda0.hp1 = rep(10^(-5), 50),
  prior.lambda0.hp2 = rep(10^(-5), 50),
  lower.limits = rep(-100, 50),
  upper.limits = rep(100, 50),
  slice.widths = rep(0.1, 50),
  nMC = 10000,
  nBI = 250
)
historical |
List of historical dataset(s). Each historical dataset is stored in a list which contains four named elements: time, event, X and S. |
n.intervals |
Vector of integers, indicating the number of intervals for the baseline hazards for each stratum. The length of the vector should be equal to the total number of strata. |
change.points |
List of vectors. Each vector in the list contains the change points for the baseline hazards for each stratum. The length of the list should be equal to the total number of strata.
For a given stratum, if there is only one interval, then |
prior.a0.shape1 |
Vector of the first shape parameters of the independent beta priors for a0. The length of the vector should be equal to the number of historical datasets. |
prior.a0.shape2 |
Vector of the second shape parameters of the independent beta priors for a0. The length of the vector should be equal to the number of historical datasets. |
prior.beta.mean |
Vector of means of the normal initial prior on beta. The default value is zero for all the parameters. |
prior.beta.sd |
Vector of standard deviations of the normal initial prior on beta. The default value is 1000 for all the parameters. |
prior.lambda0.hp1 |
Vector of first hyperparameters of the Gamma prior on lambda0, the baseline hazards of the historical data. The default value is 10^(-5) for all the parameters. |
prior.lambda0.hp2 |
Vector of second hyperparameters of the Gamma prior on lambda0, the baseline hazards of the historical data. The default value is 10^(-5) for all the parameters. |
lower.limits |
Vector of lower limits for beta. The default is -100 for all the parameters. |
upper.limits |
Vector of upper limits for beta. The default is 100 for all the parameters. |
slice.widths |
Vector of initial slice widths for beta. The default is 0.1 for all the parameters. |
nMC |
Number of iterations (excluding burn-in samples) for the slice sampler. The default is 10,000. |
nBI |
Number of burn-in samples for the slice sampler. The default is 250. |
This function is used to produce prior.beta.mvn in the function power.phm.random.a0. It approximates the normalized power prior for beta when a0 is modeled as random.
The function returns discrete samples of beta from the normalized power prior, and the user can use any mixture of multivariate normal distributions as an approximation for the normalized power prior for beta.
Baseline hazard parameters for the current and historical data are NOT shared. The baseline hazards of the historical data are denoted by lambda0. We assume Gamma priors for lambda0 and independent normal initial priors for beta.
Posterior samples are obtained through slice sampling.
The default lower limits are -100 for beta. The default upper limits
for the parameters are 100. The default slice widths for the parameters are 0.1.
The defaults may not be appropriate for all situations, and the user can specify the appropriate limits
and slice width for each parameter.
Samples of beta (approximating the normalized power prior) are returned.
Ibrahim, J. G., Chen, M.-H. and Sinha, D. (2001). Bayesian Survival Analysis. New York: Springer Science & Business Media.
Psioda, M. A. and Ibrahim, J. G. (2019). Bayesian clinical trial design using historical data that inform the treatment effect. Biostatistics 20, 400–415.
Shen, Y., Psioda, M. A., and Ibrahim, J. G. (2023). BayesPPD: an R package for Bayesian sample size determination using the power and normalized power prior for generalized linear models. The R Journal, 14(4).
phm.random.a0
and power.phm.random.a0
# Simulate two historical datasets
n <- 100
P <- 4
time1 <- round(rexp(n, rate=0.5),1)
event1 <- rep(1,n)
X1 <- matrix(rbinom(n*P,prob=0.5,size=1), ncol=P)
S1 <- c(rep(1,n/2),rep(2,n/2))
time2 <- round(rexp(n, rate=0.7),1)
event2 <- rep(1,n)
X2 <- matrix(rbinom(n*P,prob=0.5,size=1), ncol=P)
S2 <- c(rep(1,n/2),rep(2,n/2))
historical <- list(list(time=time1, event=event1, X=X1, S=S1),
                   list(time=time2, event=event2, X=X2, S=S2))

# We choose three intervals for the first stratum and two intervals for the second stratum
n.intervals <- c(3,2)
change.points <- list(c(1,2), 2)

# Get samples from the approximate normalized power prior for beta
nMC <- 100 # nMC should be larger in practice
nBI <- 50
prior.beta <- approximate.prior.beta(historical, n.intervals, change.points=change.points,
                                     prior.a0.shape1=c(1,1), prior.a0.shape2=c(1,1),
                                     nMC=nMC, nBI=nBI)
prior_beta_mu <- colMeans(prior.beta)
prior_beta_sigma <- cov(prior.beta)

# Approximate the discrete samples with a single multivariate normal with weight one.
# The user can use any mixture of multivariate normal distributions as an
# approximation for the normalized power prior for beta.
prior.beta.mvn <- list(list(prior_beta_mu, prior_beta_sigma, 1))
# prior.beta.mvn is a parameter for phm.random.a0() and power.phm.random.a0()
A dataset for the E1684 clinical trial (1996) and the E1690 clinical trial (2000) conducted to assess the utility of interferon alfa-2b (INF) as an adjuvant therapy following surgery for deep primary or regionally metastatic melanoma. This dataset contains subjects in disease stage four only, i.e., regional lymph node recurrence at any interval after appropriate surgery for primary melanoma of any depth.
melanoma
A data frame with 381 rows and 5 variables:
study number (1684 or 1690)
time to relapse in years
censoring indicator for time to relapse (0 = did not relapse, 1 = relapsed)
treatment (1 if IFN and 0 if Observation)
stratum index (1 if number of positive nodes <= 2 and 2 if number of positive nodes >= 3)
Kirkwood, J. M., J. G. Ibrahim, V. K. Sondak, J. Richards, L. E. Flaherty, M. S. Ernstoff, T. J. Smith, U. Rao, M. Steele, and R. H. Blum (2000, Jun). High- and low-dose interferon alfa-2b in high-risk melanoma: first analysis of intergroup trial e1690/s9111/c9190. Journal of Clinical Oncology 18 (12), 2444–2458.
Kirkwood, J. M., M. H. Strawderman, M. S. Ernstoff, T. J. Smith, E. C. Borden, and R. H. Blum (1996). Interferon alfa-2b adjuvant therapy of high-risk resected cutaneous melanoma: the eastern cooperative oncology group trial est 1684. Journal of Clinical Oncology 14, 7–17.
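For illustration, a minimal sketch of formatting the melanoma data as a historical dataset for the model-fitting functions is given below. The column names used here (study, time, event, treatment, stratum) are assumptions, not taken from the package documentation; check names(melanoma) before running.
# Sketch: treat the E1684 study as a single historical dataset
# (column names are assumed; verify with names(melanoma))
data(melanoma, package="BayesPPDSurv")
d1684 <- melanoma[melanoma$study == 1684, ]
historical <- list(list(time  = d1684$time,
                        event = d1684$event,
                        X     = matrix(d1684$treatment, ncol=1),
                        S     = d1684$stratum))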
Model fitting using power priors for the proportional hazards model with piecewise constant hazard and fixed a0
phm.fixed.a0(
  time = NULL,
  event = NULL,
  X = NULL,
  S = NULL,
  historical,
  a0,
  n.intervals,
  change.points = NULL,
  shared.blh = FALSE,
  prior.beta = "Normal",
  prior.beta.mean = rep(0, 50),
  prior.beta.sd = rep(1000, 50),
  prior.lambda = "Gamma",
  prior.lambda.hp1 = rep(10^(-5), 50),
  prior.lambda.hp2 = rep(10^(-5), 50),
  prior.lambda0.hp1 = rep(10^(-5), 50),
  prior.lambda0.hp2 = rep(10^(-5), 50),
  lower.limits = NULL,
  upper.limits = rep(100, 50),
  slice.widths = rep(0.1, 50),
  current.data = TRUE,
  nMC = 10000,
  nBI = 250
)
time |
Vector of follow up times. |
event |
Vector of status indicators. Normally 0=alive and 1=dead. |
X |
Matrix of covariates. The first column must be the treatment indicator. |
S |
Vector of integers, where each integer represents the stratum that the subject belongs to. For example, if there are three strata, S can take values 1, 2 or 3. |
historical |
List of historical dataset(s). Each historical dataset is stored in a list which contains four named elements: time, event, X and S. |
a0 |
Vector containing numbers between 0 and 1 indicating the discounting parameter value for each historical dataset. The length of the vector should be equal to the length of historical. |
n.intervals |
Vector of integers, indicating the number of intervals for the baseline hazards for each stratum. The length of the vector should be equal to the total number of strata. |
change.points |
List of vectors. Each vector in the list contains the change points for the baseline hazards for each stratum. The length of the list should be equal to the total number of strata.
For a given stratum, if there is only one interval, then |
shared.blh |
Logical value indicating whether baseline hazard parameters are shared between the current and historical data. If TRUE, baseline hazard parameters are shared. The default value is FALSE. |
prior.beta |
Prior used for beta. The default choice is "Normal". |
prior.beta.mean |
(Only applies if prior.beta is "Normal") vector of means of the normal initial prior on beta. The default value is zero for all the parameters. |
prior.beta.sd |
(Only applies if prior.beta is "Normal") vector of standard deviations of the normal initial prior on beta. The default value is 1000 for all the parameters. |
prior.lambda |
Prior used for lambda, the baseline hazards of the current data. The default choice is "Gamma". If "Gamma" is used, the prior on lambda is Gamma(shape=prior.lambda.hp1, rate=prior.lambda.hp2). If "Log-normal" is used, the prior on lambda is Log-normal(mean=prior.lambda.hp1, sd=prior.lambda.hp2). If "Improper" is used, the prior on lambda is proportional to lambda^(-1). |
prior.lambda.hp1 |
(Only applies if prior.lambda is "Gamma" or "Log-normal") vector of first hyperparameters of the prior on lambda. The default value is 10^(-5) for all the parameters. |
prior.lambda.hp2 |
(Only applies if prior.lambda is "Gamma" or "Log-normal") vector of second hyperparameters of the prior on lambda. The default value is 10^(-5) for all the parameters. |
prior.lambda0.hp1 |
(Only applies if shared.blh is FALSE) vector of first hyperparameters of the Gamma prior on lambda0, the baseline hazards of the historical data. The default value is 10^(-5) for all the parameters. |
prior.lambda0.hp2 |
(Only applies if shared.blh is FALSE) vector of second hyperparameters of the Gamma prior on lambda0, the baseline hazards of the historical data. The default value is 10^(-5) for all the parameters. |
lower.limits |
Vector of lower limits for the parameters (beta, lambda and lambda0). The default is -100 for beta and 0 for lambda and lambda0. |
upper.limits |
Vector of upper limits for the parameters (beta, lambda and lambda0). The default is 100 for all the parameters. |
slice.widths |
Vector of initial slice widths for the parameters (beta, lambda and lambda0). The default is 0.1 for all the parameters. |
current.data |
Logical value indicating whether current data is included. The default is TRUE. If FALSE, only historical data is included in the analysis,
and the posterior samples can be used as a discrete approximation to the sampling prior in power.phm.fixed.a0 and power.phm.random.a0. |
nMC |
Number of iterations (excluding burn-in samples) for the slice sampler. The default is 10,000. |
nBI |
Number of burn-in samples for the slice sampler. The default is 250. |
The proportional hazards model with piecewise constant hazard is implemented.
We assume beta is the vector of regression coefficients. We assume the first column of the covariate matrix is the treatment indicator,
and the corresponding parameter is beta_1. The baseline hazards of the current data are denoted by lambda.
The baseline hazards of the historical data are denoted by lambda0.
If the baseline hazards are shared between the historical and current data, then lambda = lambda0.
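For reference, the piecewise constant hazard formulation described above can be written as follows (generic notation, not taken verbatim from the package documentation): for a subject in stratum j with covariate vector x, the hazard is
h_j(t \mid x) = \lambda_{jk} \exp(x^\top \beta), \qquad t \in (s_{j,k-1}, s_{j,k}],
where the intervals (s_{j,k-1}, s_{j,k}] are defined by the change points for stratum j, lambda_{jk} (lambda0_{jk} for the historical data) is the constant baseline hazard on that interval, and beta is shared across strata.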
Posterior samples are obtained through slice sampling.
The default lower limits are -100 for beta and 0 for lambda and lambda0. The default upper limits
for the parameters are 100. The default slice widths for the parameters are 0.1.
The defaults may not be appropriate for all situations, and the user can specify the appropriate limits
and slice width for each parameter.
Posterior samples of beta, lambda and lambda0 (if baseline hazards are not shared between the current and historical data) are returned.
Ibrahim, J. G., Chen, M.-H. and Sinha, D. (2001). Bayesian Survival Analysis. New York: Springer Science & Business Media.
Psioda, M. A. and Ibrahim, J. G. (2019). Bayesian clinical trial design using historical data that inform the treatment effect. Biostatistics 20, 400–415.
Shen, Y., Psioda, M. A., and Ibrahim, J. G. (2023). BayesPPD: an R package for Bayesian sample size determination using the power and normalized power prior for generalized linear models. The R Journal, 14(4).
set.seed(1)
# Simulate current data
n <- 50
P <- 4
time <- round(rexp(n, rate=0.5),1)
event <- rep(1,n)
X <- matrix(rbinom(n*P,prob=0.5,size=1), ncol=P)
S <- c(rep(1,n/2),rep(2,n/2))

# Simulate two historical datasets
n <- 100
time1 <- round(rexp(n, rate=0.5),1)
event1 <- rep(1,n)
X1 <- matrix(rbinom(n*P,prob=0.5,size=1), ncol=P)
S1 <- c(rep(1,n/2),rep(2,n/2))
time2 <- round(rexp(n, rate=0.7),1)
event2 <- rep(1,n)
X2 <- matrix(rbinom(n*P,prob=0.5,size=1), ncol=P)
S2 <- c(rep(1,n/2),rep(2,n/2))
historical <- list(list(time=time1, event=event1, X=X1, S=S1),
                   list(time=time2, event=event2, X=X2, S=S2))

# a0 is 0.3 for the first historical dataset and 0.6 for the second
a0 <- c(0.3, 0.6)

# We choose three intervals for the first stratum and two intervals for the second stratum
n.intervals <- c(3,2)
change.points <- list(c(1,2), 2)

nMC <- 1000 # nMC should be larger in practice
nBI <- 50

result <- phm.fixed.a0(time=time, event=event, X=X, S=S, historical=historical,
                       a0=a0, n.intervals=n.intervals, change.points=change.points,
                       nMC=nMC, nBI=nBI)

# posterior mean of beta
colMeans(result$beta_samples)
# posterior mean of baseline hazards for stratum 1
colMeans(result$lambda_samples[[1]])
# posterior mean of baseline hazards for stratum 2
colMeans(result$lambda_samples[[2]])
# posterior mean of historical baseline hazards for stratum 1
colMeans(result$lambda0_samples[[1]])
# posterior mean of historical baseline hazards for stratum 2
colMeans(result$lambda0_samples[[2]])
Model fitting using the normalized power prior for the proportional hazards model with piecewise constant hazard and random a0
phm.random.a0(
  time,
  event,
  X,
  S,
  historical,
  n.intervals,
  change.points = NULL,
  prior.beta.mvn = NULL,
  prior.beta.mean = rep(0, 50),
  prior.beta.sd = rep(1000, 50),
  prior.lambda0.hp1 = rep(10^(-5), 50),
  prior.lambda0.hp2 = rep(10^(-5), 50),
  prior.a0.shape1 = rep(1, 10),
  prior.a0.shape2 = rep(1, 10),
  prior.lambda.hp1 = rep(10^(-5), 50),
  prior.lambda.hp2 = rep(10^(-5), 50),
  lower.limits = NULL,
  upper.limits = rep(100, 50),
  slice.widths = rep(0.1, 50),
  nMC = 10000,
  nBI = 250
)
time |
Vector of follow up times. |
event |
Vector of status indicators. Normally 0=alive and 1=dead. |
X |
Matrix of covariates. The first column must be the treatment indicator. |
S |
Vector of integers, where each integer represents the stratum that the subject belongs to. For example, if there are three strata, S can take values 1, 2 or 3. |
historical |
List of historical dataset(s). Each historical dataset is stored in a list which contains four named elements: time, event, X and S. |
n.intervals |
Vector of integers, indicating the number of intervals for the baseline hazards for each stratum. The length of the vector should be equal to the total number of strata. |
change.points |
List of vectors. Each vector in the list contains the change points for the baseline hazards for each stratum. The length of the list should be equal to the total number of strata.
For a given stratum, if there is only one interval, then |
prior.beta.mvn |
List of multivariate normal approximations of the normalized power prior for beta. Each element of the list is a list with three components: the mean vector, the covariance matrix and the weight of the multivariate normal distribution. By default (NULL), a single multivariate normal approximation is used. |
prior.beta.mean |
(Only applies if prior.beta.mvn is NULL) vector of means of the normal initial prior on beta. The default value is zero for all the parameters. |
prior.beta.sd |
(Only applies if prior.beta.mvn is NULL) vector of standard deviations of the normal initial prior on beta. The default value is 1000 for all the parameters. |
prior.lambda0.hp1 |
(Only applies if prior.beta.mvn is NULL) vector of first hyperparameters of the Gamma prior on lambda0, the baseline hazards of the historical data. The default value is 10^(-5) for all the parameters. |
prior.lambda0.hp2 |
(Only applies if prior.beta.mvn is NULL) vector of second hyperparameters of the Gamma prior on lambda0, the baseline hazards of the historical data. The default value is 10^(-5) for all the parameters. |
prior.a0.shape1 |
(Only applies if prior.beta.mvn is NULL) vector of the first shape parameters of the independent beta priors for a0. The default is a vector of one's. |
prior.a0.shape2 |
(Only applies if prior.beta.mvn is NULL) vector of the second shape parameters of the independent beta priors for a0. The default is a vector of one's. |
prior.lambda.hp1 |
Vector of first hyperparameters of the Gamma prior on lambda, the baseline hazards of the current data. The default value is 10^(-5) for all the parameters. |
prior.lambda.hp2 |
Vector of second hyperparameters of the Gamma prior on lambda, the baseline hazards of the current data. The default value is 10^(-5) for all the parameters. |
lower.limits |
Vector of lower limits for the parameters (beta and lambda). The default is -100 for beta and 0 for lambda. |
upper.limits |
Vector of upper limits for the parameters (beta and lambda). The default is 100 for all the parameters. |
slice.widths |
Vector of initial slice widths for the parameters (beta and lambda). The default is 0.1 for all the parameters. |
nMC |
Number of iterations (excluding burn-in samples) for the slice sampler. The default is 10,000. |
nBI |
Number of burn-in samples for the slice sampler. The default is 250. |
The proportional hazards model with piecewise constant hazard is implemented.
We assume beta is the vector of regression coefficients. We assume the first column of the covariate matrix is the treatment indicator,
and the corresponding parameter is beta_1. Here a0 is modeled as random with a normalized power prior.
The normalized power prior for beta is approximated by a weighted mixture of multivariate normal distributions provided in prior.beta.mvn.
The user can use the approximate.prior.beta function to obtain samples of beta from the normalized power prior, and use any mixture of multivariate normals to approximate
the normalized power prior for beta. By default, a single multivariate normal distribution is assumed.
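As a sketch of a multi-component approximation (the examples below use a single component), one could fit a mixture to the output of approximate.prior.beta, for instance with the mclust package; mclust is not part of BayesPPDSurv, and the choice of two components here is arbitrary.
# Sketch: two-component multivariate normal approximation of the discrete
# samples in prior.beta (the matrix returned by approximate.prior.beta),
# using the mclust package (an assumption; any mixture-fitting tool works).
library(mclust)
fit <- Mclust(prior.beta, G=2, modelNames="VVV")
prior.beta.mvn <- lapply(seq_len(fit$G), function(g)
  list(fit$parameters$mean[, g],              # mean vector of component g
       fit$parameters$variance$sigma[, , g],  # covariance matrix of component g
       fit$parameters$pro[g]))                # mixture weight of component g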
Posterior samples are obtained through slice sampling.
The default lower limits are -100 for beta and 0 for lambda. The default upper limits
for the parameters are 100. The default slice widths for the parameters are 0.1.
The defaults may not be appropriate for all situations, and the user can specify the appropriate limits
and slice width for each parameter.
Posterior samples of beta and lambda are returned.
Ibrahim, J. G., Chen, M.-H. and Sinha, D. (2001). Bayesian Survival Analysis. New York: Springer Science & Business Media.
Psioda, M. A. and Ibrahim, J. G. (2019). Bayesian clinical trial design using historical data that inform the treatment effect. Biostatistics 20, 400–415.
Shen, Y., Psioda, M. A., and Ibrahim, J. G. (2023). BayesPPD: an R package for Bayesian sample size determination using the power and normalized power prior for generalized linear models. The R Journal, 14(4).
power.phm.random.a0
and approximate.prior.beta
set.seed(1)
# Simulate current data
n <- 50
P <- 4
time <- round(rexp(n, rate=0.5),1)
event <- rep(1,n)
X <- matrix(rbinom(n*P,prob=0.5,size=1), ncol=P)
S <- c(rep(1,n/2),rep(2,n/2))

# Simulate two historical datasets
n <- 100
time1 <- round(rexp(n, rate=0.5),1)
event1 <- rep(1,n)
X1 <- matrix(rbinom(n*P,prob=0.5,size=1), ncol=P)
S1 <- c(rep(1,n/2),rep(2,n/2))
time2 <- round(rexp(n, rate=0.7),1)
event2 <- rep(1,n)
X2 <- matrix(rbinom(n*P,prob=0.5,size=1), ncol=P)
S2 <- c(rep(1,n/2),rep(2,n/2))
historical <- list(list(time=time1, event=event1, X=X1, S=S1),
                   list(time=time2, event=event2, X=X2, S=S2))

# We choose three intervals for the first stratum and two intervals for the second stratum
n.intervals <- c(3,2)
change.points <- list(c(1,2), 2)

# Get samples from the approximate normalized power prior for beta
nMC <- 100 # nMC should be larger in practice
nBI <- 50
prior.beta <- approximate.prior.beta(historical, n.intervals, change.points=change.points,
                                     prior.a0.shape1=c(1,1), prior.a0.shape2=c(1,1),
                                     nMC=nMC, nBI=nBI)
prior_beta_mu <- colMeans(prior.beta)
prior_beta_sigma <- cov(prior.beta)

# Approximate the discrete samples with a single multivariate normal with weight one
prior.beta.mvn <- list(list(prior_beta_mu, prior_beta_sigma, 1))

result <- phm.random.a0(time=time, event=event, X=X, S=S, historical=historical,
                        n.intervals=n.intervals, change.points=change.points,
                        prior.beta.mvn=prior.beta.mvn, nMC=nMC, nBI=nBI)

# posterior mean of beta
colMeans(result$beta_samples)
# posterior mean of baseline hazards for stratum 1
colMeans(result$lambda_samples[[1]])
# posterior mean of baseline hazards for stratum 2
colMeans(result$lambda_samples[[2]])
Power/type I error calculation using power priors for the proportional hazards model with piecewise constant hazard and fixed a0
power.phm.fixed.a0(
  historical,
  a0,
  n.subjects,
  n.events,
  n.intervals,
  change.points = NULL,
  shared.blh = FALSE,
  samp.prior.beta,
  samp.prior.lambda,
  x.samples = matrix(),
  s.samples = NULL,
  dist.enroll,
  param.enroll,
  rand.prob = 0.5,
  prob.drop = 0,
  param.drop = 0,
  dist.csr = "Constant",
  param.csr = 10000,
  min.follow.up = 0,
  max.follow.up = 10000,
  prior.beta = "Normal",
  prior.beta.mean = rep(0, 50),
  prior.beta.sd = rep(1000, 50),
  prior.lambda = "Gamma",
  prior.lambda.hp1 = rep(10^(-5), 50),
  prior.lambda.hp2 = rep(10^(-5), 50),
  prior.lambda0.hp1 = rep(10^(-5), 50),
  prior.lambda0.hp2 = rep(10^(-5), 50),
  lower.limits = NULL,
  upper.limits = rep(100, 50),
  slice.widths = rep(0.1, 50),
  nMC = 10000,
  nBI = 250,
  delta = 0,
  nullspace.ineq = ">",
  gamma = 0.95,
  N = 10000
)
historical |
List of historical dataset(s). Each historical dataset is stored in a list which contains four named elements: time, event, X and S. |
a0 |
Vector containing numbers between 0 and 1 indicating the discounting parameter value for each historical dataset. The length of the vector should be equal to the length of historical. |
n.subjects |
Number of subjects enrolled. |
n.events |
Number of events at which the trial will stop. |
n.intervals |
Vector of integers, indicating the number of intervals for the baseline hazards for each stratum. The length of the vector should be equal to the total number of strata. |
change.points |
List of vectors. Each vector in the list contains the change points for the baseline hazards for each stratum. The length of the list should be equal to the total number of strata.
For a given stratum, if there is only one interval, then |
shared.blh |
Logical value indicating whether baseline hazard parameters are shared between the current and historical data. If TRUE, baseline hazard parameters are shared. The default value is FALSE. |
samp.prior.beta |
Matrix of possible values of beta to sample (with replacement) from. Each row is a possible beta vector, where the first column corresponds to the treatment parameter beta_1. |
samp.prior.lambda |
List of matrices, where each matrix represents the sampling prior for the baseline hazards for each stratum. The number of columns of each matrix should be equal to the number of intervals for that stratum. |
x.samples |
(Only applies when there is no historical dataset) matrix of possible values of covariates from which covariate vectors are sampled with replacement. |
s.samples |
(Only applies when there is no historical dataset) vector of possible values of the stratum index from which the stratum indices are sampled with replacement. |
dist.enroll |
Distribution for enrollment times. The choices are "Uniform" or "Exponential". |
param.enroll |
Parameter for the distribution of enrollment times. If |
rand.prob |
Randomization probability for the treated group. The default value is 0.5. |
prob.drop |
Probability of subjects dropping out of the study (non-administrative censoring). The default value is zero. |
param.drop |
Parameter for dropout time simulations. The dropout times follow Unif(0, param.drop). |
dist.csr |
Distribution for (administrative) censorship times. The choices are "Uniform", "Constant" and "Exponential". The default choice is "Constant". |
param.csr |
Parameter for the (administrative) censorship times. If |
min.follow.up |
Minimum amount of time for which subjects are followed up. The default value is zero. |
max.follow.up |
Maximum amount of time for which subjects are followed up. The default value is 10^4. |
prior.beta |
Prior used for beta. The default choice is "Normal". |
prior.beta.mean |
(Only applies if prior.beta is "Normal") vector of means of the normal initial prior on beta. The default value is zero for all the parameters. |
prior.beta.sd |
(Only applies if prior.beta is "Normal") vector of standard deviations of the normal initial prior on beta. The default value is 1000 for all the parameters. |
prior.lambda |
Prior used for lambda, the baseline hazards of the current data. The default choice is "Gamma". If "Gamma" is used, the prior on lambda is Gamma(shape=prior.lambda.hp1, rate=prior.lambda.hp2). If "Log-normal" is used, the prior on lambda is Log-normal(mean=prior.lambda.hp1, sd=prior.lambda.hp2). If "Improper" is used, the prior on lambda is proportional to lambda^(-1). |
prior.lambda.hp1 |
(Only applies if prior.lambda is "Gamma" or "Log-normal") vector of first hyperparameters of the prior on lambda. The default value is 10^(-5) for all the parameters. |
prior.lambda.hp2 |
(Only applies if prior.lambda is "Gamma" or "Log-normal") vector of second hyperparameters of the prior on lambda. The default value is 10^(-5) for all the parameters. |
prior.lambda0.hp1 |
(Only applies if shared.blh is FALSE) vector of first hyperparameters of the Gamma prior on lambda0, the baseline hazards of the historical data. The default value is 10^(-5) for all the parameters. |
prior.lambda0.hp2 |
(Only applies if shared.blh is FALSE) vector of second hyperparameters of the Gamma prior on lambda0, the baseline hazards of the historical data. The default value is 10^(-5) for all the parameters. |
lower.limits |
Vector of lower limits for the parameters (beta, lambda and lambda0). The default is -100 for beta and 0 for lambda and lambda0. |
upper.limits |
Vector of upper limits for the parameters (beta, lambda and lambda0). The default is 100 for all the parameters. |
slice.widths |
Vector of initial slice widths for the parameters (beta, lambda and lambda0). The default is 0.1 for all the parameters. |
nMC |
Number of iterations (excluding burn-in samples) for the slice sampler. The default is 10,000. |
nBI |
Number of burn-in samples for the slice sampler. The default is 250. |
delta |
Prespecified constant that defines the boundary of the null hypothesis. The default is zero. |
nullspace.ineq |
Character string specifying the inequality of the null hypothesis. The options are ">" and "<". If ">" is specified, the null hypothesis is H0: beta_1 >= delta. If "<" is specified, the null hypothesis is H0: beta_1 <= delta. The default choice is ">". |
gamma |
Posterior probability threshold for rejecting the null. The null hypothesis is rejected if the posterior probability is greater than or equal to gamma. The default is 0.95. |
N |
Number of simulated datasets to generate. The default is 10,000. |
The proportional hazards model with piecewise constant hazard is implemented.
We assume beta is the vector of regression coefficients. We assume the first column of the covariate matrix is the treatment indicator,
and the corresponding parameter is beta_1. The baseline hazards of the current data are denoted by lambda.
The baseline hazards of the historical data are denoted by lambda0.
If the baseline hazards are shared between the historical and current data, then lambda = lambda0.
To perform sample size determination, we test the hypotheses H0: beta_1 >= delta versus H1: beta_1 < delta (for the default nullspace.ineq of ">").
If historical datasets are provided, the algorithm samples with replacement from the historical covariates to construct the simulated datasets.
Otherwise, the algorithm samples with replacement from x.samples
. One of the arguments historical
and x.samples
must be provided.
The sampling prior for the treatment parameter can be generated from a normal distribution (see examples).
For example, suppose one wants to compute the power for the hypotheses H0: beta_1 >= 0 and H1: beta_1 < 0.
To approximate the sampling prior for beta_1, one can simply sample from a normal distribution with negative mean,
so that the mass of the prior falls in the alternative space. Conversely, to compute the type I error rate, one can
sample from a normal distribution with positive mean, so that the mass of the prior falls in the null space.
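As a sketch of the type I error case (mirroring the power example below, where P is the number of covariates and the default null is H0: beta_1 >= 0), the sampling prior for beta_1 can be centered in the null space:
# Sketch: sampling prior for beta_1 with its mass in the null space (beta_1 >= 0),
# so that the returned value is a Bayesian type I error rate.
samp.prior.beta1 <- rnorm(100, mean=1, sd=1)
# Alternatively, a point mass at the boundary of the null:
# samp.prior.beta1 <- rep(0, 100)
samp.prior.beta <- cbind(samp.prior.beta1, matrix(rnorm(100*(P-1)), 100, P-1))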
The sampling prior for the other parameters (beta_2, ..., beta_p, lambda and lambda0) can be generated from the posterior based on the historical data.
This can be achieved by the function phm.fixed.a0 with current.data set to FALSE (see the vignette).
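A minimal sketch of this step is given below. It assumes the objects (historical, a0, n.intervals, change.points, nMC, nBI) from the example further down, and it assumes that the lambda_samples component of the returned list is the relevant set of posterior draws; see the vignette for the authors' complete workflow.
# Sketch: fit the historical data only (current.data=FALSE) and use the
# posterior draws as a discrete approximation to the sampling prior for lambda.
fit.hist <- phm.fixed.a0(historical=historical, a0=a0,
                         n.intervals=n.intervals, change.points=change.points,
                         current.data=FALSE, nMC=nMC, nBI=nBI)
samp.prior.lambda <- fit.hist$lambda_samples  # list of matrices, one per stratum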
Posterior samples are obtained through slice sampling.
The default lower limits are -100 for beta and 0 for lambda and lambda0. The default upper limits
for the parameters are 100. The default slice widths for the parameters are 0.1.
The defaults may not be appropriate for all situations, and the user can specify the appropriate limits
and slice width for each parameter.
If a sampling prior with support in the null space is used, the value returned is a Bayesian type I error rate. If a sampling prior with support in the alternative space is used, the value returned is a Bayesian power.
Power or type I error is returned, depending on the sampling prior used.
The posterior probabilities of the alternative hypothesis are returned.
The average posterior means of beta, lambda and lambda0 (if the baseline hazard parameters are not shared) are also returned.
Ibrahim, J. G., Chen, M.-H. and Sinha, D. (2001). Bayesian Survival Analysis. New York: Springer Science & Business Media.
Psioda, M. A. and Ibrahim, J. G. (2019). Bayesian clinical trial design using historical data that inform the treatment effect. Biostatistics 20, 400–415.
Shen, Y., Psioda, M. A., and Ibrahim, J. G. (2023). BayesPPD: an R package for Bayesian sample size determination using the power and normalized power prior for generalized linear models. The R Journal, 14(4).
# Simulate two historical datasets
set.seed(1)
n <- 100
P <- 4
time1 <- round(rexp(n, rate=0.5),1)
event1 <- rep(1,n)
X1 <- matrix(rbinom(n*P,prob=0.5,size=1), ncol=P)
S1 <- c(rep(1,n/2),rep(2,n/2))
time2 <- round(rexp(n, rate=0.7),1)
event2 <- rep(1,n)
X2 <- matrix(rbinom(n*P,prob=0.5,size=1), ncol=P)
S2 <- c(rep(1,n/2),rep(2,n/2))
historical <- list(list(time=time1, event=event1, X=X1, S=S1),
                   list(time=time2, event=event2, X=X2, S=S2))

# a0 is 0.3 for the first historical dataset and 0.6 for the second
a0 <- c(0.3, 0.6)

n.subjects <- 100
n.events <- 30

# We choose three intervals for the first stratum and two intervals for the second stratum
n.intervals <- c(3,2)
change.points <- list(c(1,2),1)

# Generate sampling priors
# The null hypothesis here is H0: beta_1 >= 0. To calculate power,
# we can provide samples of beta_1 such that the mass of beta_1 < 0.
# To calculate type I error, we can provide samples of beta_1 such that
# the mass of beta_1 >= 0.
samp.prior.beta1 <- rnorm(100, mean=-1, sd=1)
# Here, mass is put on the alternative region, so power is calculated.
samp.prior.beta <- cbind(samp.prior.beta1, matrix(rnorm(100*(P-1)), 100, P-1))

# Point mass sampling priors are used for lambda
lambda_strat1 <- matrix(c(0.5, 0.5, 0.5), nrow=1)
lambda_strat2 <- matrix(c(0.7, 0.7), nrow=1)
samp.prior.lambda <- list(lambda_strat1, lambda_strat2)

nMC <- 100 # nMC should be larger in practice
nBI <- 50
N <- 5 # N should be larger in practice

result <- power.phm.fixed.a0(historical=historical, a0=a0, n.subjects=n.subjects,
                             n.events=n.events, n.intervals=n.intervals,
                             change.points=change.points,
                             samp.prior.beta=samp.prior.beta,
                             samp.prior.lambda=samp.prior.lambda,
                             dist.enroll="Uniform", param.enroll=0.5,
                             nMC=nMC, nBI=nBI, delta=0, nullspace.ineq=">", N=N)

result$`power/type I error`
result$`average posterior mean of beta`
result$`average posterior mean of lambda`
result$`average posterior mean of lambda0`
Power/type I error calculation using the normalized power prior for the proportional hazards model with piecewise constant hazard and random a0
power.phm.random.a0(
  historical,
  n.subjects,
  n.events,
  n.intervals,
  change.points = NULL,
  samp.prior.beta,
  samp.prior.lambda,
  dist.enroll,
  param.enroll,
  rand.prob = 0.5,
  prob.drop = 0,
  param.drop = 0,
  dist.csr = "Constant",
  param.csr = 10000,
  min.follow.up = 0,
  max.follow.up = 10000,
  prior.beta.mvn = NULL,
  prior.beta.mean = rep(0, 50),
  prior.beta.sd = rep(1000, 50),
  prior.lambda0.hp1 = rep(10^(-5), 50),
  prior.lambda0.hp2 = rep(10^(-5), 50),
  prior.a0.shape1 = rep(1, 10),
  prior.a0.shape2 = rep(1, 10),
  prior.lambda.hp1 = rep(10^(-5), 50),
  prior.lambda.hp2 = rep(10^(-5), 50),
  lower.limits = NULL,
  upper.limits = rep(100, 50),
  slice.widths = rep(0.1, 50),
  nMC = 10000,
  nBI = 250,
  delta = 0,
  nullspace.ineq = ">",
  gamma = 0.95,
  N = 10000
)
historical |
List of historical dataset(s). Each historical dataset is stored in a list which contains four named elements: time, event, X and S. |
n.subjects |
Number of subjects enrolled. |
n.events |
Number of events at which the trial will stop. |
n.intervals |
Vector of integers, indicating the number of intervals for the baseline hazards for each stratum. The length of the vector should be equal to the total number of strata. |
change.points |
List of vectors. Each vector in the list contains the change points for the baseline hazards for each stratum. The length of the list should be equal to the total number of strata.
For a given stratum, if there is only one interval, then |
samp.prior.beta |
Matrix of possible values of beta to sample (with replacement) from. Each row is a possible beta vector, where the first column corresponds to the treatment parameter beta_1. |
samp.prior.lambda |
List of matrices, where each matrix represents the sampling prior for the baseline hazards for each stratum. The number of columns of each matrix should be equal to the number of intervals for that stratum. |
dist.enroll |
Distribution for enrollment times. The choices are "Uniform" or "Exponential". |
param.enroll |
Parameter for the distribution of enrollment times. If |
rand.prob |
Randomization probability for the treated group. The default value is 0.5. |
prob.drop |
Probability of subjects dropping out of the study (non-administrative censoring). The default value is zero. |
param.drop |
Parameter for dropout time simulations. The dropout times follow Unif(0, param.drop). |
dist.csr |
Distribution for (administrative) censorship times. The choices are "Uniform", "Constant" and "Exponential". The default choice is "Constant". |
param.csr |
Parameter for the (administrative) censorship times. If |
min.follow.up |
Minimum amount of time for which subjects are followed up. The default value is zero. |
max.follow.up |
Maximum amount of time for which subjects are followed up. The default value is 10^4. |
prior.beta.mvn |
List of multivariate normal approximations of the normalized power prior for beta. Each element of the list is a list with three components: the mean vector, the covariance matrix and the weight of the multivariate normal distribution. By default (NULL), a single multivariate normal approximation is used. |
prior.beta.mean |
(Only applies if prior.beta.mvn is NULL) vector of means of the normal initial prior on beta. The default value is zero for all the parameters. |
prior.beta.sd |
(Only applies if prior.beta.mvn is NULL) vector of standard deviations of the normal initial prior on beta. The default value is 1000 for all the parameters. |
prior.lambda0.hp1 |
(Only applies if prior.beta.mvn is NULL) vector of first hyperparameters of the Gamma prior on lambda0, the baseline hazards of the historical data. The default value is 10^(-5) for all the parameters. |
prior.lambda0.hp2 |
(Only applies if prior.beta.mvn is NULL) vector of second hyperparameters of the Gamma prior on lambda0, the baseline hazards of the historical data. The default value is 10^(-5) for all the parameters. |
prior.a0.shape1 |
(Only applies if prior.beta.mvn is NULL) vector of the first shape parameters of the independent beta priors for a0. The default is a vector of one's. |
prior.a0.shape2 |
(Only applies if prior.beta.mvn is NULL) vector of the second shape parameters of the independent beta priors for a0. The default is a vector of one's. |
prior.lambda.hp1 |
Vector of first hyperparameters of the Gamma prior on lambda, the baseline hazards of the current data. The default value is 10^(-5) for all the parameters. |
prior.lambda.hp2 |
Vector of second hyperparameters of the Gamma prior on lambda, the baseline hazards of the current data. The default value is 10^(-5) for all the parameters. |
lower.limits |
Vector of lower limits for the parameters (beta and lambda). The default is -100 for beta and 0 for lambda. |
upper.limits |
Vector of upper limits for the parameters (beta and lambda). The default is 100 for all the parameters. |
slice.widths |
Vector of initial slice widths for the parameters (beta and lambda). The default is 0.1 for all the parameters. |
nMC |
Number of iterations (excluding burn-in samples) for the slice sampler. The default is 10,000. |
nBI |
Number of burn-in samples for the slice sampler. The default is 250. |
delta |
Prespecified constant that defines the boundary of the null hypothesis. The default is zero. |
nullspace.ineq |
Character string specifying the inequality of the null hypothesis. The options are ">" and "<". If ">" is specified, the null hypothesis is H0: beta_1 >= delta. If "<" is specified, the null hypothesis is H0: beta_1 <= delta. The default choice is ">". |
gamma |
Posterior probability threshold for rejecting the null. The null hypothesis is rejected if the posterior probability is greater than or equal to gamma. The default is 0.95. |
N |
Number of simulated datasets to generate. The default is 10,000. |
The proportional hazards model with piecewise constant hazard is implemented.
We assume beta is the vector of regression coefficients. We assume the first column of the covariate matrix is the treatment indicator,
and the corresponding parameter is beta_1. Here a0 is modeled as random with a normalized power prior.
The normalized power prior for beta is approximated by a weighted mixture of multivariate normal distributions provided in prior.beta.mvn.
The user can use the approximate.prior.beta function to obtain samples of beta from the normalized power prior, and use any mixture of multivariate normals to approximate
the normalized power prior for beta. By default, a single multivariate normal distribution is assumed.
Baseline hazard parameters for the current and historical data are NOT shared. The baseline hazards of the current data are denoted by lambda.
The baseline hazards of the historical data are denoted by lambda0. We assume Gamma priors for lambda and lambda0.
To perform sample size determination, we test the hypotheses H0: beta_1 >= delta versus H1: beta_1 < delta (for the default nullspace.ineq of ">").
The sampling prior for the treatment parameter can be generated from a normal distribution (see examples).
For example, suppose one wants to compute the power for the hypotheses H0: beta_1 >= 0 and H1: beta_1 < 0.
To approximate the sampling prior for beta_1, one can simply sample from a normal distribution with negative mean,
so that the mass of the prior falls in the alternative space. Conversely, to compute the type I error rate, one can
sample from a normal distribution with positive mean, so that the mass of the prior falls in the null space.
The sampling prior for the other parameters (beta_2, ..., beta_p and lambda) can be generated from the posterior based on the historical data.
This can be achieved by the function phm.fixed.a0 with current.data set to FALSE (see the vignette).
Posterior samples are obtained through slice sampling.
The default lower limits are -100 for beta and 0 for lambda. The default upper limits
for the parameters are 100. The default slice widths for the parameters are 0.1.
The defaults may not be appropriate for all situations, and the user can specify the appropriate limits
and slice width for each parameter.
If a sampling prior with support in the null space is used, the value returned is a Bayesian type I error rate. If a sampling prior with support in the alternative space is used, the value returned is a Bayesian power.
Power or type I error is returned, depending on the sampling prior used.
The posterior probabilities of the alternative hypothesis are returned.
The average posterior means of beta and lambda are also returned.
Ibrahim, J. G., Chen, M.-H. and Sinha, D. (2001). Bayesian Survival Analysis. New York: Springer Science & Business Media.
Psioda, M. A. and Ibrahim, J. G. (2019). Bayesian clinical trial design using historical data that inform the treatment effect. Biostatistics 20, 400–415.
Shen, Y., Psioda, M. A., and Ibrahim, J. G. (2023). BayesPPD: an R package for Bayesian sample size determination using the power and normalized power prior for generalized linear models. The R Journal, 14(4).
phm.random.a0
and approximate.prior.beta
# Simulate two historical datasets
set.seed(1)
n <- 100
P <- 4
time1 <- round(rexp(n, rate=0.5),1)
event1 <- rep(1,n)
X1 <- matrix(rbinom(n*P,prob=0.5,size=1), ncol=P)
S1 <- c(rep(1,n/2),rep(2,n/2))
time2 <- round(rexp(n, rate=0.7),1)
event2 <- rep(1,n)
X2 <- matrix(rbinom(n*P,prob=0.5,size=1), ncol=P)
S2 <- c(rep(1,n/2),rep(2,n/2))
historical <- list(list(time=time1, event=event1, X=X1, S=S1),
                   list(time=time2, event=event2, X=X2, S=S2))

n.subjects <- 100
n.events <- 30

# We choose three intervals for the first stratum and two intervals for the second stratum
n.intervals <- c(3,2)
change.points <- list(c(1,2),1)

# Generate sampling priors
# The null hypothesis here is H0: beta_1 >= 0. To calculate power,
# we can provide samples of beta_1 such that the mass of beta_1 < 0.
# To calculate type I error, we can provide samples of beta_1 such that
# the mass of beta_1 >= 0.
samp.prior.beta1 <- rnorm(100, mean=-1, sd=1)
# Here, mass is put on the alternative region, so power is calculated.
samp.prior.beta <- cbind(samp.prior.beta1, matrix(rnorm(100*(P-1)), 100, P-1))

# Point mass sampling priors are used for lambda
lambda_strat1 <- matrix(c(0.5, 0.5, 0.5), nrow=1)
lambda_strat2 <- matrix(c(0.7, 0.7), nrow=1)
samp.prior.lambda <- list(lambda_strat1, lambda_strat2)

nMC <- 50 # nMC should be larger in practice
nBI <- 50
N <- 5 # N should be larger in practice

result <- power.phm.random.a0(historical=historical, n.subjects=n.subjects,
                              n.events=n.events, n.intervals=n.intervals,
                              change.points=change.points,
                              samp.prior.beta=samp.prior.beta,
                              samp.prior.lambda=samp.prior.lambda,
                              prior.a0.shape1=c(1,1), prior.a0.shape2=c(1,1),
                              dist.enroll="Uniform", param.enroll=0.5,
                              nMC=nMC, nBI=nBI, delta=0, nullspace.ineq=">", N=N)

result$`power/type I error`
result$`average posterior mean of beta`
result$`average posterior mean of lambda`