Title: Bayesian Dynamic Borrowing with Flexible Baseline Hazard Function
Description: Allows Bayesian borrowing from a historical dataset for time-to-event data. A flexible baseline hazard function is achieved via a piecewise exponential likelihood with time-varying split points and a smoothing prior on the historical baseline hazards. The method is described in Scott and Lewin (2024) <doi:10.48550/arXiv.2401.06082>, and the software paper is Axillus et al. (2024) <doi:10.48550/arXiv.2408.04327>.
Authors: Darren Scott [aut, cre], Sophia Axillus [aut]
Maintainer: Darren Scott <[email protected]>
License: Apache License (>= 2)
Version: 2.0.2
Built: 2024-12-16 07:05:26 UTC
Source: CRAN
Propose beta with a Metropolis-Adjusted Langevin Algorithm (MALA) step (a sketch follows this entry)
.beta_MH_MALA(df, beta, bp, cprop_beta, beta_count)
df |
Data frame with indicators |
beta |
vector of parameters |
bp |
number of covariates |
cprop_beta |
standard deviation of the proposal distribution |
beta_count |
count of accepted proposals |
updated beta vector
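For orientation, a minimal sketch of a MALA update for a single coefficient is given below; it is not the package's internal code, and log_target() and grad_log_target() are assumed stand-ins for the log posterior of beta and its derivative.

# MALA sketch (illustrative only; log_target() and grad_log_target() are
# assumed stand-ins, not functions exported by BayesFBHborrow).
mala_step <- function(beta, cprop_beta, log_target, grad_log_target) {
  mu_fwd   <- beta + 0.5 * cprop_beta^2 * grad_log_target(beta)   # Langevin drift
  beta_new <- rnorm(1, mean = mu_fwd, sd = cprop_beta)
  mu_rev   <- beta_new + 0.5 * cprop_beta^2 * grad_log_target(beta_new)
  # Acceptance ratio includes the asymmetric forward/reverse proposal densities
  log_acc <- log_target(beta_new) - log_target(beta) +
    dnorm(beta, mean = mu_rev, sd = cprop_beta, log = TRUE) -
    dnorm(beta_new, mean = mu_fwd, sd = cprop_beta, log = TRUE)
  if (log(runif(1)) < log_acc) beta_new else beta
}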
Sample beta via a Metropolis-Hastings step with a Newton-Raphson proposal
.beta_MH_NR(df, beta, bp, cprop_beta, beta_count)
df |
Data frame with indicators |
beta |
vector of parameters |
bp |
number of covariates |
cprop_beta |
proposal scalar |
beta_count |
count of accepted proposals |
updated beta
Update beta via a Metropolis-Hastings Random Walk move
.beta_MH_RW(df, beta, bp, cprop_beta, beta_count)
df |
data.frame from dataframe_fun() |
beta |
beta values |
bp |
number of covariates |
cprop_beta |
hyperparameter for beta proposal standard deviation |
beta_count |
number of moves done for beta |
beta, either old or new move
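A minimal random-walk Metropolis-Hastings update has the shape sketched below; log_target() is an assumed stand-in for the log posterior of beta, and this is not the package's internal code.

# Random-walk MH sketch (illustrative; log_target() is an assumed helper).
rw_step <- function(beta, cprop_beta, log_target) {
  beta_new <- rnorm(1, mean = beta, sd = cprop_beta)    # symmetric proposal
  log_acc  <- log_target(beta_new) - log_target(beta)   # proposal terms cancel
  if (log(runif(1)) < log_acc) beta_new else beta       # accept or keep old value
}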
Compute the MALA proposal mean for beta using the derivative of the target
.beta_mom(df, k, beta, bp, cprop_beta)
df |
Data frame with indicators |
k |
index for beta |
beta |
vector of parameters |
bp |
number of covariates |
cprop_beta |
proposal standard deviation |
proposal mean
First and second derivatives of the target, used for the mode and variance of the proposal
.beta_mom.NR.fun(df, k, beta, bp, cprop_beta)
df |
Data frame with indicators |
k |
index |
beta |
vector of parameters |
bp |
number of covariates |
cprop_beta |
standard deviation of the proposal distribution |
mode and variance of the proposal, computed from the first and second derivatives
Sample beta from RW sampler
.beta.MH.RW.glm(df, beta, beta_count, cprop_beta)
df |
Data frame with indicators |
beta |
vector of parameters |
beta_count |
count number of accepted proposals |
cprop_beta |
proposal scalar |
beta, either old or new move
Calculates new values of x when proposing an additional split point, based on a weighted mean with x_new/x = (1-U)/U (a sketch follows this entry)
.birth_move(U, sj, s_star, sjm1, x, j)
U |
uniform random number |
sj |
upcoming split point location, j |
s_star |
new split point location, * |
sjm1 |
previous split point location, j-1 |
x |
vector of parameter values, length J + 1 |
j |
split point |
vector with adjusted parameter values after additional split point, length J + 2
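As a hedged sketch of the idea (assuming a Green-style split in which a weighted geometric mean of the two new values equals the old value and their ratio is set by (1-U)/U; the exact weighting used internally may differ):

# Illustrative birth-move split for one parameter value x_j on (sjm1, sj),
# adding split point s_star (assumed weighting; not the package's code).
birth_split <- function(U, s_star, sjm1, sj, x_j) {
  r  <- (1 - U) / U                    # ratio of the two new values
  w1 <- (s_star - sjm1) / (sj - sjm1)  # weight of the left sub-interval
  w2 <- (sj - s_star) / (sj - sjm1)    # weight of the right sub-interval
  x_left  <- x_j * r^(-w2)             # weighted geometric mean preserved:
  x_right <- x_j * r^(w1)              # w1*log(x_left) + w2*log(x_right) = log(x_j)
  c(x_left, x_right)
}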
Construct a split data.frame for updated split points
.dataframe_fun(Y, I, X, s, lambda, bp, J)
Y |
time-to-event |
I |
censor indicator |
X |
design matrix |
s |
split point locations, including start and end (length J + 2) |
lambda |
baseline hazards (length J + 1) |
bp |
number of covariates |
J |
number of split points |
data.frame with columns c(tstart, id, X1,..., Xp, Y, I, lambda)
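The same long format can be produced outside the package with survival::survSplit(), shown here purely as an analogy; the split points in s below are made up for illustration.

library(survival)
data(weibull_cc, package = "BayesFBHborrow")

# Split each subject's follow-up at interior split points so that every row is
# one (subject, interval) pair, analogous to the data.frame described above.
s <- c(0, 0.5, 1, max(weibull_cc$tte))              # illustrative split points
long <- survSplit(Surv(tte, event) ~ ., data = weibull_cc,
                  cut = s[-c(1, length(s))],        # interior cut points only
                  start = "tstart", episode = "interval")
head(long)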
Calculates new values of x when proposing the death of a split point
.death_move(sjp1, sj, sjm1, x, j)
sjp1 |
upcoming split point location, j + 1 |
sj |
split point location to be removed, j |
sjm1 |
previous split point location, j-1 |
x |
vector of parameter values, length J + 1 |
j |
split point |
vector with adjusted parameter values after removal of split point, length J
Compute the MLE for the piecewise exponential model (PEM)
.glmFit(df)
df |
Data frame with time-to-event, censoring indicator and covariates |
beta MLE and inverse of information matrix
Calculate covariance matrix in the MVN-ICAR
.ICAR_calc(s, J, clam)
s |
split points, J + 2 |
J |
number of split points |
clam |
controls neighbor interactions, in range (0, 1) |
Sigma_s = (I - W)^(-1) * Q, together with W and Q
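Given W and Q, the returned covariance follows directly from the stated formula; a minimal sketch, assuming both are (J + 1) x (J + 1) matrices:

# Assemble Sigma_s = (I - W)^(-1) * Q from the neighbor-weight matrix W and
# the matrix Q (sketch of the stated formula, not the internal code).
build_Sigma_s <- function(W, Q) {
  solve(diag(nrow(W)) - W) %*% Q
}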
Checks inputs before Gibbs sampler is run
.input_check( Y, Y_0, X, X_0, tuning_parameters, initial_values = NULL, hyperparameters )
Y |
current time-to-event data |
Y_0 |
historical time-to-event data |
X |
design matrix |
X_0 |
design matrix for historical data |
tuning_parameters |
list of tuning parameters |
initial_values |
list of initial values (optional) |
hyperparameters |
list of hyperparameters |
a print statement
Metropolis-Hastings Green Reversible Jump move, with Bayesian Borrowing
.J_RJMCMC( df_hist, df_curr, Y, Y_0, I, I_0, X, X_0, lambda, lambda_0, beta, beta_0, mu, sigma2, tau, s, J, Jmax, bp, bp_0, clam_smooth, a_tau = NULL, b_tau = NULL, c_tau = NULL, d_tau = NULL, type, p_0 = NULL, phi, pi_b, maxSj )
df_hist |
data.frame containing historical trial data. |
df_curr |
data.frame containing current trial data. |
Y |
current trial time-to-event data. |
Y_0 |
historical trial time-to-event data. |
I |
censoring indicator. |
I_0 |
historical trial censoring indicator. |
X |
design matrix. |
X_0 |
historical trial design matrix. |
lambda |
baseline hazard. |
lambda_0 |
historical trial baseline hazard. |
beta |
current trial parameters. |
beta_0 |
historical trial parameters. |
mu |
prior mean for baseline hazard. |
sigma2 |
prior variance hyperparameter for baseline hazard. |
tau |
borrowing parameter. |
s |
split point locations, J + 2. |
J |
number of split points. |
Jmax |
maximum number of split points. |
bp |
number of covariates in current trial. |
bp_0 |
number of covariates in historical trial. |
clam_smooth |
neighbor interactions, in range (0, 1), for ICAR update. |
a_tau |
tau hyperparameter. |
b_tau |
tau hyperparameter. |
c_tau |
tau hyperparameter. |
d_tau |
tau hyperparameter. |
type |
choice of borrowing, "mix", "uni", or any other string for borrowing on every baseline hazard without mixture. |
p_0 |
mixture ratio. |
phi |
J hyperparameter. |
pi_b |
probability of birth move. |
maxSj |
maximal time point, either current or historic. |
list of proposed J and s, with adjusted values of lambda, lambda_0, tau, Sigma_s, and data.frames for historical and current trial data.
Metropolis-Hastings Green Reversible Jump move, without Bayesian Borrowing
.J_RJMCMC_NoBorrow( df, Y_0, I_0, X_0, lambda_0, beta_0, mu, sigma2, s, J, Jmax, bp_0, clam_smooth, phi, pi_b )
df |
data.frame containing the trial data |
Y_0 |
time-to-event data |
I_0 |
censoring indicator |
X_0 |
design matrix |
lambda_0 |
baseline hazard |
beta_0 |
historical trial parameters |
mu |
prior mean for baseline hazard |
sigma2 |
prior variance hyperparameter for baseline hazard |
s |
split point locations, J + 2 |
J |
number of split points |
Jmax |
maximum number of split points |
bp_0 |
number of covariates in historical trial |
clam_smooth |
neighbor interactions, in range (0, 1), for ICAR update |
phi |
J hyperparameter |
pi_b |
probability of birth move |
list of proposed J and s, with adjusted values of lambda, lambda_0, tau, Sigma_s, and data.frames for historical and current trial data
Lambda_0 MH step, proposal from conditional conjugate posterior
.lambda_0_MH_cp( df_hist, Y_0, I_0, X_0 = NULL, s, beta_0 = NULL, mu, sigma2, lambda, lambda_0, tau, bp_0 = 0, J, clam, a_lam = 0.01, b_lam = 0.01, lambda_0_count = 0, lambda_0_move = 0 )
df_hist |
data.frame from dataframe_fun() |
Y_0 |
historical trial data |
I_0 |
historical trial censoring indicator |
X_0 |
historical trial design matrix |
s |
split point locations, (J+2) |
beta_0 |
parameter value for historical covariates |
mu |
prior mean for baseline hazard |
sigma2 |
prior variance hyperparameter for baseline hazard |
lambda |
baseline hazard |
lambda_0 |
historical baseline hazard |
tau |
borrowing parameter |
bp_0 |
number of covariates, length(beta_0) |
J |
number of split points |
clam |
controls neighbor interactions, in range (0, 1) |
a_lam |
lambda hyperparameter, default is 0.01 |
b_lam |
lambda hyperparameter, default is 0.01 |
lambda_0_count |
number of total moves for lambda_0 |
lambda_0_move |
number of accepted moves for lambda_0 |
list of updated (if accepted) lambda_0 and data.frames, as well as the number of accepted moves
Lambda_0 MH step, proposal from conditional conjugate posterior
.lambda_0_MH_cp_NoBorrow( df_hist, Y_0, I_0, X_0 = NULL, s, beta_0 = NULL, mu, sigma2, lambda_0, bp_0 = 0, J, clam, a_lam = 0.01, b_lam = 0.01, lambda_0_count = 0, lambda_0_move = 0 )
df_hist |
data.frame from dataframe_fun() |
Y_0 |
historical trial data |
I_0 |
historical trial censoring indicator |
X_0 |
historical trial design matrix |
s |
split point locations, (J+2) |
beta_0 |
parameter value for historical covariates |
mu |
prior mean for baseline hazard |
sigma2 |
prior variance hyperparameter for baseline hazard |
lambda_0 |
baseline hazard |
bp_0 |
number of covariates, length(beta_0) |
J |
number of split points |
clam |
controls neighbor interactions, in range (0, 1) |
a_lam |
lambda hyperparameter, default is 0.01 |
b_lam |
lambda hyperparameter, default is 0.01 |
lambda_0_count |
number of total moves for lambda_0 |
lambda_0_move |
number of accepted moves for lambda_0 |
list of updated (if accepted) lambda_0 and data.frames, as well as the number of accepted moves
Propose lambda from a gamma conditional conjugate posterior proposal
.lambda_conj_prop(df, beta, j, bp, alam = 0.01, blam = 0.01)
df |
data.frame from dataframe_fun() |
beta |
parameter value for beta |
j |
current split point |
bp |
number of covariates |
alam |
lambda hyperparameter, default set to 0.01 |
blam |
lambda hyperparameter, default set to 0.01 |
list containing proposed lambda, shape and rate parameters
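In a piecewise exponential model, the conjugate gamma proposal for lambda_j typically combines the event count and the covariate-weighted exposure in interval j. The sketch below assumes the column layout from .dataframe_fun() (tstart, Y, I, X1, ..., Xp) and a logical index idx selecting the rows of interval j; it is not the package's internal code.

# Conjugate gamma proposal for lambda_j in a PEM (illustrative; assumes the
# long-format columns from .dataframe_fun() and a logical index idx marking
# the rows that fall in interval j).
lambda_gamma_proposal <- function(df, beta, idx, alam = 0.01, blam = 0.01) {
  X     <- as.matrix(df[idx, grep("^X", names(df)), drop = FALSE])
  xb    <- as.vector(X %*% beta)
  shape <- alam + sum(df$I[idx])                               # events in interval j
  rate  <- blam + sum((df$Y[idx] - df$tstart[idx]) * exp(xb))  # weighted exposure
  list(lambda = rgamma(1, shape = shape, rate = rate), shape = shape, rate = rate)
}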
Lambda MH step, proposal from conditional conjugate posterior
.lambda_MH_cp( df_hist, df_curr, Y, I, X, s, beta, beta_0 = NULL, mu, sigma2, lambda, lambda_0, tau, bp, bp_0 = 0, J, a_lam = 0.01, b_lam = 0.01, lambda_move = 0, lambda_count = 0, alpha = 0.3 )
df_hist |
data.frame from dataframe_fun() |
df_curr |
data.frame from dataframe_fun() |
Y |
data |
I |
censoring indicator |
X |
design matrix |
s |
split point locations, J + 2 |
beta |
parameter value for covariates |
beta_0 |
parameter value for historical covariates |
mu |
prior mean for baseline hazard |
sigma2 |
prior variance hyperparameter for baseline hazard |
lambda |
baseline hazard |
lambda_0 |
historical baseline hazard |
tau |
borrowing parameter |
bp |
number of covariates, length(beta) |
bp_0 |
number of covariates, length(beta_0) |
J |
number of split points |
a_lam |
lambda hyperparameter |
b_lam |
lambda hyperparameter |
lambda_move |
number of accepted lambda moves |
lambda_count |
total number of lambda moves |
alpha |
power parameter |
list of updated (if accepted) lambda and data.frames, as well as the number of accepted moves
Calculate log gamma ratio for two different parameter values
.lgamma_ratio(x1, x2, shape, rate)
x1 |
old parameter value |
x2 |
proposed parameter value |
shape |
shape parameter |
rate |
rate parameter |
log gamma ratio
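Since the normalizing constants cancel, the same quantity can be written as a difference of log gamma densities; a one-line sketch (not necessarily how the package computes it):

# Log of the ratio of gamma densities evaluated at x2 (proposed) and x1 (old).
lgamma_ratio <- function(x1, x2, shape, rate) {
  dgamma(x2, shape = shape, rate = rate, log = TRUE) -
    dgamma(x1, shape = shape, rate = rate, log = TRUE)
}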
Compute the log likelihood ratio for the beta update
.llikelihood_ratio_beta(df, beta, beta_new)
df |
data.frame from dataframe_fun() |
beta |
beta values |
beta_new |
proposed beta values |
log likelihood ratio for beta
Log likelihood for lambda / lambda_0 update
.llikelihood_ratio_lambda(df, df_prop, beta)
df |
data.frame from dataframe_fun() |
df_prop |
proposal data.frame |
beta |
parameter value for beta |
log likelihood ratio for lambda
Log likelihood function
.log_likelihood(df, beta)
df |
data.frame containing data, time split points, and lambda |
beta |
coefficients for covariates |
log likelihood given lambdas and betas
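For a piecewise exponential model with proportional hazards, this log likelihood takes a Poisson-like form; the sketch below assumes the long-format columns from .dataframe_fun() (tstart, Y, I, lambda, X1, ..., Xp) and is an illustration rather than the internal implementation.

# Piecewise exponential log likelihood (illustrative; assumes long-format data
# with columns tstart, Y, I, lambda and covariates X1, ..., Xp).
pem_log_likelihood <- function(df, beta) {
  X        <- as.matrix(df[, grep("^X", names(df)), drop = FALSE])
  xb       <- as.vector(X %*% beta)
  exposure <- df$Y - df$tstart                  # time at risk within the interval
  sum(df$I * (log(df$lambda) + xb)) - sum(exposure * df$lambda * exp(xb))
}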
Computes the logarithm of a sum of exponentials (log-sum-exp)
.logsumexp(x)
x |
set of log probabilities |
the log of the sum of the exponentials of x
Log density of proposal for MALA
.lprop_density_beta(beta_prop, mu, cprop_beta)
beta_prop |
proposal beta |
mu |
mean of proposal distribution |
cprop_beta |
proposal standard deviation |
log density
Log Gaussian proposal density for the Newton-Raphson proposal
.lprop.dens.beta.NR(beta.prop, mu_old, var_old)
beta.prop |
beta proposal |
mu_old |
density mean |
var_old |
density variance |
log Gaussian density
Calculate log density tau prior
.ltau_dprior(tau, a_tau, b_tau, c_tau = NULL, d_tau = NULL, p_0 = NULL, type)
tau |
current value(s) of tau |
a_tau |
tau hyperparameter |
b_tau |
tau hyperparameter |
c_tau |
tau hyperparameter |
d_tau |
tau hyperparameter |
p_0 |
mixture ratio |
type |
choice of borrowing, "mix", "uni", or any other string for borrowing on every baseline hazard without mixture |
log density of tau
Calculate mu posterior update
.mu_update(Sigma_s, lambda_0, sigma2, J)
Sigma_s |
Covariance (VCV) matrix, (J + 1) x (J + 1). |
lambda_0 |
Baseline hazard. |
sigma2 |
Scale variance. |
J |
Number of split points. |
mu, drawn from its Normal conditional posterior.
Normalize a set of probabilities to one, using the log-sum-exp trick
.normalize_prob(x)
x |
set of log probabilities |
normalized set of probabilities
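A minimal, numerically stable sketch of .logsumexp() and .normalize_prob() (illustrative, not the package's internal code):

# Stable log-sum-exp: subtract the maximum before exponentiating.
logsumexp <- function(x) {
  m <- max(x)
  m + log(sum(exp(x - m)))
}

# Turn a vector of log probabilities into probabilities that sum to one.
normalize_prob <- function(x) exp(x - logsumexp(x))

normalize_prob(c(-1000, -1001, -1002))  # avoids underflow of exp(-1000)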
Calculates nu and sigma2 for the Gaussian Markov random field prior, for a given split point j
.nu_sigma_update(j, lambda_0, mu, sigma2, W, Q, J)
j |
current split point |
lambda_0 |
historical baseline hazard |
mu |
prior mean for baseline hazard |
sigma2 |
prior variance hyperparameter for baseline hazard |
W |
influence from right and left neighbors |
Q |
individual effect of neighborhood |
J |
number of split points |
nu and sigma2
Plots a histogram of the given discrete MCMC samples
.plot_hist( samples, title = "", xlab = "Values", ylab = "Frequency", color = "black", fill = "blue", binwidth = 0.05, scale_x = FALSE )
samples |
data.frame containing the discrete MCMC samples |
title |
title of the plot, default is none |
xlab |
x-label of the plot, default is "Values" |
ylab |
y-label of the plot, default is "Frequency" |
color |
outline color for the bars, default is "black" |
fill |
fill color, default is "blue" |
binwidth |
width of the histogram bins, default is 0.05 |
scale_x |
option to scale the x-axis, suitable for discrete samples, default is FALSE |
a ggplot2 object
Plot mean and given quantiles of a matrix. Can also be used to plot derivatives of the baseline hazard, such as estimated cumulative hazard and survival function.
.plot_matrix( x_lim, y, percentiles = c(0.05, 0.95), title = "", xlab = "", ylab = "", color = "blue", fill = "blue", linewidth = 1, alpha = 0.2, y2 = NULL, color2 = "red", fill2 = "red" )
x_lim |
time grid |
y |
samples |
percentiles |
percentiles to include in plot, default is c(0.05, 0.95) |
title |
optional, add title to plot |
xlab |
optional, add xlabel |
ylab |
optional, add ylabel |
color |
color of the mid line, default is blue |
fill |
color of the percentiles, default is blue |
linewidth |
thickness of the plotted line, default is 1 |
alpha |
opacity of the percentiles, default is 0.2 |
y2 |
(optional) second set of samples for comparison |
color2 |
(optional) color of the mid line, default is red |
fill2 |
(optional) color of the percentiles, default is red |
a ggplot2 object
Creates a trace plot of given MCMC samples.
.plot_trace( x_lim, samples, title = "", xlab = "", ylab = "", color = "black", linewidth = 1 )
x_lim |
x-axis of the plot |
samples |
samples from MCMC |
title |
optional, add title to plot |
xlab |
optional, add xlabel |
ylab |
optional, add ylabel |
color |
color of the mid line, default is black |
linewidth |
thickness of the plotted line, default is 1 |
a ggplot2 object
Predictive hazard from BayesFBHborrow object
.predictive_hazard(out_slam, x_pred, beta_samples)
out_slam |
samples from the smoothed baseline hazard |
x_pred |
set of predictors to be used for calculating the predictive hazard |
beta_samples |
samples of the covariates |
matrix of the predictive hazard
Predictive hazard ratio (HR) from BayesFBHborrow object
.predictive_hazard_ratio(x_pred, beta_samples)
x_pred |
set of predictors to be used for calculating the predictive HR |
beta_samples |
samples of the covariates |
posterior samples for expectation and credible intervals
Predictive survival from BayesFBHborrow object
.predictive_survival(grid_width, out_slam, x_pred, beta_samples)
grid_width |
size of time step |
out_slam |
samples from the smoothed baseline hazard |
x_pred |
set of predictors to be used for calculating the predictive survival |
beta_samples |
samples of the covariates |
matrix of the predictive survival
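Conceptually, the predictive hazard rescales the smoothed baseline hazard by exp(x'beta), and the predictive survival integrates it over the time grid. A hedged sketch under the assumption that out_slam is a draws-by-grid matrix and beta_samples a draws-by-covariate matrix (not the package's internal code):

# Per-draw predictive hazard h(t | x) = h0(t) * exp(x'beta) and the implied
# survival S(t) = exp(-cumulative hazard) on an equally spaced time grid
# (illustrative; the matrix layouts are assumptions).
predictive_curves <- function(grid_width, out_slam, x_pred, beta_samples) {
  scale  <- exp(as.matrix(beta_samples) %*% x_pred)        # one factor per draw
  hazard <- out_slam * as.vector(scale)                    # rows scaled draw-wise
  surv   <- exp(-t(apply(hazard, 1, cumsum)) * grid_width) # Riemann-sum integral
  list(hazard = hazard, survival = surv)
}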
Set hyperparameters
.set_hyperparameters(hyperparameters = NULL, model_choice)
hyperparameters |
list of hyperparameters, could contain any combination of the listed hyperparameters |
model_choice |
choice of model, could be either of 'mix', 'uni' or 'all' |
filled list of hyperparameters
Set tuning parameters
.set_tuning_parameters(tuning_parameters = NULL, borrow, X, X_0 = NULL)
tuning_parameters |
list of tuning_parameters, could contain any combination of the listed tuning parameters |
borrow |
choice of borrow, could be TRUE or FALSE |
X |
design matrix for concurrent trial |
X_0 |
design matrix for historical trial |
filled list of tuning_parameters
Metropolis Hastings step: shuffle the split point locations (with Bayesian borrowing)
.shuffle_split_point_location( df_hist, df_curr, Y_0, I_0, X_0, lambda_0, beta_0, Y, I, X, lambda, beta, s, J, bp_0, bp, clam_smooth, maxSj )
df_hist |
data.frame containing historical trial data and parameters |
df_curr |
data.frame containing current trial data and parameters |
Y_0 |
historical trial data |
I_0 |
historical trial censoring indicator |
X_0 |
historical trial design matrix |
lambda_0 |
historical baseline hazard |
beta_0 |
historical parameter vector |
Y |
data |
I |
censoring indicator |
X |
design matrix |
lambda |
baseline hazard |
beta |
parameter vector |
s |
split point locations, J + 2 |
J |
number of split points |
bp_0 |
number of covariates in historical trial |
bp |
number of covariates in current trial |
clam_smooth |
neighbor interactions, in range (0, 1), for ICAR update |
maxSj |
the smallest of the maximal time points, min(max(Y), max(Y_0)) |
list containing new split points, updated Sigma_s and data.frames for historic and current trial data
Metropolis Hastings step: shuffle the split point locations (without Bayesian borrowing)
.shuffle_split_point_location_NoBorrow( df, Y_0, I_0, X_0, lambda_0, beta_0, s, J, bp_0, clam_smooth )
df |
data.frame containing trial data and parameters |
Y_0 |
data |
I_0 |
censoring indicator |
X_0 |
design matrix |
lambda_0 |
baseline hazard |
beta_0 |
parameter vector |
s |
split point locations, J + 2 |
J |
number of split points |
bp_0 |
number of covariates in historical trial |
clam_smooth |
neighbor interactions, in range (0, 1), for ICAR update |
list containing new split points, updated Sigma_s and data.frames for historic and current trial data
Calculate sigma2 posterior update
.sigma2_update(mu, lambda_0, Sigma_s, J, a_sigma, b_sigma)
mu |
mean. |
lambda_0 |
Baseline hazard. |
Sigma_s |
Covariance (VCV) matrix, (J + 1) x (J + 1). |
J |
Number of split points. |
a_sigma |
Hyperparameter a. |
b_sigma |
Hyperparameter b. |
sigma2, drawn from its Inverse Gamma conditional posterior
Smoothed hazard function
.smooth_hazard(out_slam, beta_samples = NULL)
out_slam |
samples from GibbsMH of the baseline hazard |
beta_samples |
samples from GibbsMH of the treatment effect |
smoothed function for the baseline hazard
Smoothed survival curve
.smooth_survival(grid_width, out_slam, beta_samples = NULL)
grid_width |
step size |
out_slam |
samples from GibbsMH of the baseline hazard |
beta_samples |
samples from GibbsMH of the treatment effect |
smoothed survival function
Sample tau from posterior distribution
.tau_update( lambda_0, lambda, J, s, a_tau, b_tau, c_tau = NULL, d_tau = NULL, p_0 = NULL, type )
lambda_0 |
historical baseline hazard |
lambda |
baseline hazard |
J |
number of split points |
s |
split point locations, J + 2 |
a_tau |
Inverse Gamma hyperparameter |
b_tau |
Inverse Gamma hyperparameter |
c_tau |
Inverse Gamma hyperparameter |
d_tau |
Inverse Gamma hyperparameter |
p_0 |
mixture ratio |
type |
choice of borrowing, "mix", "uni", or any other string for borrowing on every baseline hazard without mixture |
list containing tau and new mixture ratio
Main function of the BayesFBHborrow package. This generic function calls the correct MCMC sampler for time-to-event Bayesian borrowing.
BayesFBHborrow( data, data_hist = NULL, borrow = TRUE, model_choice, tuning_parameters, hyperparameters, lambda_hyperparameters, iter, warmup_iter, refresh, verbose, max_grid )
data |
data.frame containing at least three columns: "tte" (time-to-event), "event" (event indicator), and covariates "X_i" (where i is a number/indicator of the covariate) |
data_hist |
data.frame containing at least two columns, "tte" (time-to-event) and "event" (event indicator), with the option of adding covariates named "X_0_i" (where i is a number/indicator of the covariate), for historical data |
borrow |
TRUE (default), will run the model with borrowing |
model_choice |
choice of which borrowing model to use out of "mix", "uni" or "all" |
tuning_parameters |
list of "cprop_beta" ("cprop_beta_0" for historical data), "alpha", "Jmax", and "pi_b". Default is list("Jmax" = 5, "clam_smooth" = 0.8, "cprop_beta" = 0.5, "cprop_beta_0" = 0.5, "pi_b" = 0.5, "alpha" = 0.4) |
hyperparameters |
list containing the hyperparameters ("a_tau", "b_tau", "c_tau", "d_tau","type", "p_0", "a_sigma", "b_sigma"). Default is list("a_tau" = 1, "b_tau" = 1,"c_tau" = 1, "d_tau" = 0.001, "type" = "mix", "p_0" = 0.5, "a_sigma" = 2, "b_sigma" = 2, "phi" = 3) |
lambda_hyperparameters |
contains two hyperparameters (a_lambda and b_lambda) used for the update of lambda and lambda_0. Default is c(0.01, 0.01) |
iter |
number of iterations for MCMC sampler |
warmup_iter |
number of warmup iterations (burn-in) for MCMC sampler. |
refresh |
number of iterations between printed screen updates |
verbose |
FALSE (default), choice of output, if TRUE will output intermittent results into console |
max_grid |
grid size for the smoothed baseline hazard |
a nested list of two items, 'out' and 'plots'. The list 'out' will contain all the samples of the MCMC chain, as well as acceptance ratios. The latter, 'plots', contains plots (and data) of the smoothed baseline hazard, smoothed survival, a histogram of the sampled number of split points, and the trace plot of the treatment effect beta_1
set.seed(123)
# Load the example data
data(piecewise_exp_cc, package = "BayesFBHborrow")
data(piecewise_exp_hist, package = "BayesFBHborrow")

# Set your tuning parameters
tuning_parameters <- list("Jmax" = 5,
                          "pi_b" = 0.5,
                          "cprop_beta" = 3.25,
                          "alpha" = 0.4)

# Set hyperparameters to default, with the borrowing model "mix"
out <- BayesFBHborrow(data = piecewise_exp_cc, data_hist = piecewise_exp_hist,
                      model_choice = 'mix', tuning_parameters = tuning_parameters,
                      iter = 2, warmup_iter = 0)

# Create a summary of the output
summary(out$out, estimator = "out_fixed")

# Plot the predictive curves for the treatment group
plots <- plot(out$out, out$out$time_grid, x_pred = c(1))
Main function of the BayesFBHborrow package. This generic function calls the correct MCMC sampler for time-to-event without Bayesian borrowing.
## S3 method for class 'NoBorrow' BayesFBHborrow( data, data_hist = NULL, borrow = FALSE, model_choice = "no_borrow", tuning_parameters = NULL, hyperparameters = NULL, lambda_hyperparameters = list(a_lambda = 0.01, b_lambda = 0.01), iter = 2000, warmup_iter = 2000, refresh = 0, verbose = FALSE, max_grid = 2000 )
data |
data.frame containing at least three columns: "tte" (time-to-event), "event" (event indicator), and covariates "X_i" (where i is a number/indicator of the covariate) |
data_hist |
NULL (not used) |
borrow |
FALSE (default), will run the model without borrowing |
model_choice |
'no_borrow' (default), for no borrowing |
tuning_parameters |
list of "cprop_beta", "Jmax", and "pi_b". Default is list("Jmax" = 5, "cprop_beta" = 0.5, "pi_b" = 0.5) |
hyperparameters |
list containing the hyperparameters c("a_sigma", "b_sigma", "phi", "clam_smooth"). Default is list("a_sigma" = 2, "b_sigma" = 2, "phi" = 3, "clam_smooth" = 0.8) |
lambda_hyperparameters |
contains two hyperparameters ("a_lambda" and "b_lambda") used for the update of lambda, default is c(0.01, 0.01) |
iter |
number of iterations for MCMC sampler. Default is 2000 |
warmup_iter |
number of warmup iterations (burn-in) for MCMC sampler. Default is 2000 |
refresh |
number of iterations between printed console updates. Default is 0 |
verbose |
FALSE (default), choice of output, if TRUE will output intermittent results into console |
max_grid |
grid size for the smoothed baseline hazard. Default is 2000 |
a nested list of two items, 'out' and 'plots'. The list 'out' will contain all the samples of the MCMC chain, as well as acceptance ratios. The latter, 'plots', contains plots (and data) of the smoothed baseline hazard, smoothed survival, a histogram of the sampled number of split points, and the trace plot of the treatment effect beta_1
set.seed(123)
# Load the example data
data(piecewise_exp_cc, package = "BayesFBHborrow")

# Set your tuning parameters
tuning_parameters <- list("Jmax" = 5, "cprop_beta" = 3.25)

# Set initial values to default
out <- BayesFBHborrow(piecewise_exp_cc, NULL, borrow = FALSE,
                      tuning_parameters = tuning_parameters,
                      iter = 2, warmup_iter = 0)
Main function of the BayesFBHborrow package. This generic function calls the correct MCMC sampler for time-to-event Bayesian borrowing.
## S3 method for class 'WBorrow' BayesFBHborrow( data, data_hist, borrow = TRUE, model_choice = "mix", tuning_parameters = NULL, hyperparameters = NULL, lambda_hyperparameters = list(a_lambda = 0.01, b_lambda = 0.01), iter = 2000, warmup_iter = 2000, refresh = 0, verbose = FALSE, max_grid = 2000 )
data |
data.frame containing at least three columns called "tte" (time-to-event), "event" (event indicator), and covariates "X_i" (where i is a number/indicator of the covariate) |
data_hist |
data.frame containing at least two columns called "tte" (time-to-event) and "event" (event indicator), with the option of adding covariates named "X_0_i" (where i is a number/indicator of the covariate) for the historical data |
borrow |
TRUE (default), will run the model with borrowing |
model_choice |
choice of which borrowing model to use out of 'mix', 'uni' or 'all' |
tuning_parameters |
list of "cprop_beta" ("cprop_beta_0" for historical data), "alpha", "Jmax", and "pi_b". Default is list("Jmax" = 5, "clam_smooth" = 0.8, "cprop_beta" = 0.5, "cprop_beta_0" = 0.5, "pi_b" = 0.5, "alpha" = 0.4) |
hyperparameters |
list containing the hyperparameters ("a_tau", "b_tau", "c_tau", "d_tau","type", "p_0", "a_sigma", "b_sigma"). Default is list("a_tau" = 1, "b_tau" = 1,"c_tau" = 1, "d_tau" = 0.001, "type" = "mix", "p_0" = 0.5, "a_sigma" = 2, "b_sigma" = 2, "phi" = 3) |
lambda_hyperparameters |
contains two hyperparameters (a_lambda and b_lambda) used for the update of lambda and lambda_0. Default is c(0.01, 0.01) |
iter |
number of iterations for MCMC sampler. Default is 2000 |
warmup_iter |
number of warmup iterations (burn-in) for MCMC sampler. Default is 2000 |
refresh |
number of iterations between printed console updates. Default is 0 |
verbose |
FALSE (default), choice of output, if TRUE will output intermittent results into console |
max_grid |
grid size for the smoothed baseline hazard. Default is 2000 |
a nested list of two items, 'out' and 'plots'. The list 'out' will contain all the samples of the MCMC chain, as well as acceptance ratios. The latter, 'plots', contains plots (and data) of the smoothed baseline hazard, smoothed survival, a histogram of the sampled number of split points, and the trace plot of the treatment effect beta_1
set.seed(123)
# Load the example data
data(piecewise_exp_cc, package = "BayesFBHborrow")
data(piecewise_exp_hist, package = "BayesFBHborrow")

# Set your tuning parameters
tuning_parameters <- list("Jmax" = 5,
                          "pi_b" = 0.5,
                          "cprop_beta" = 3.25,
                          "alpha" = 0.4)

# Set hyperparameters to default, with the borrowing model "mix"
out <- BayesFBHborrow(data = piecewise_exp_cc, data_hist = piecewise_exp_hist,
                      model_choice = 'mix', tuning_parameters = tuning_parameters,
                      iter = 2, warmup_iter = 0)

# Create a summary of the output
summary(out$out, estimator = "out_fixed")

# Plot the predictive curves for the treatment group
plots <- plot(out$out, out$out$time_grid, x_pred = c(1))
S3 method for class "BayesFBHborrow", returns the mean posterior values for the fixed parameters
## S3 method for class 'BayesFBHborrow' coef(object, ...)
object |
MCMC sample object from BayesFBHborrow() |
... |
other arguments, see coef.default() |
mean values of given samples
data(weibull_cc, package = "BayesFBHborrow")

# Set your tuning parameters
tuning_parameters <- list("Jmax" = 5,
                          "pi_b" = 0.5,
                          "cprop_beta" = 0.5)

# Run the MCMC sampler
out <- BayesFBHborrow(weibull_cc, NULL, tuning_parameters = tuning_parameters,
                      iter = 3, warmup_iter = 1)

# Print the posterior mean values of the fixed parameters
coef(out$out)
An MCMC sampler for Bayesian borrowing with time-to-event data. We obtain a flexible baseline hazard function by making the split points random within a piecewise exponential model and using a Gaussian Markov random field prior to smooth the baseline hazards. Only calls the sampler and does not run any input checks. Best practice is to call BayesFBHborrow(), if the user is not familiar with the model at hand.
GibbsMH( Y, I, X, Y_0 = NULL, I_0 = NULL, X_0 = NULL, tuning_parameters, hyperparameters, lambda_hyperparameters, iter, warmup_iter, refresh, max_grid )
Y |
data |
I |
event indicator |
X |
design matrix |
Y_0 |
historical data, default is NULL |
I_0 |
historical event indicator, default is NULL |
X_0 |
historical design matrix, default is NULL |
tuning_parameters |
list of "cprop_beta", "cprop_beta_0", "alpha", "Jmax", and "pi_b" |
hyperparameters |
list containing the hyperparameters c("a_tau", "b_tau", "c_tau", "d_tau","type", "p_0", "a_sigma", "b_sigma", "Jmax", "clam_smooth", "cprop_beta", "phi", "pi_b"). Default is list("a_tau" = 1,"b_tau" = 1,"c_tau" = 1, "d_tau" = 0.001, "type" = "mix", "p_0" = 0.5, "a_sigma" = 2, "b_sigma" = 2, "Jmax" = 20, "clam_smooth" = 0.8, "cprop_beta" = 0.5, "phi" = 3, "pi_b" = 0.5) |
lambda_hyperparameters |
contains two hyperparameters (a_lambda and b_lambda) used for the update of lambda and lambda_0 |
iter |
number of iterations for MCMC sampler, excluding warmup, default is 2000 |
warmup_iter |
number of warmup iterations (burn-in) for MCMC sampler, default is 2000 |
refresh |
number of iterations between printed screen updates, default is 500 |
max_grid |
grid size for the smoothed baseline hazard, default is 2000 |
Depending on whether borrowing is used, returns a list with values after each iteration for the parameters: out_fixed (J, mu, sigma2, beta), lambda, lambda_0, tau, and s, as well as tuning values for the total number of accepts: lambda_move, lambda_0_move, and beta_move. Also included is out_slam, which contains the shrunk estimate of the baseline hazard.
set.seed(123)
# Load example data and set your initial values and hyperparameters
data(weibull_cc, package = "BayesFBHborrow")
data(weibull_hist, package = "BayesFBHborrow")

# The datasets consist of 3 (2) columns named "tte", "event" and "X"
# (only for concurrent). To explicitly run the sampler, extract the samples as
# follows
Y <- weibull_cc$tte
I <- weibull_cc$event
X <- matrix(weibull_cc$X_trt)
Y_0 <- weibull_hist$tte
I_0 <- weibull_hist$event
X_0 <- NULL

# Specify hyperparameters and tuning parameters
hyper <- list("a_tau" = 1, "b_tau" = 0.001, "c_tau" = 1, "d_tau" = 1,
              "type" = 'all', "p_0" = 0.5, "a_sigma" = 2, "b_sigma" = 2,
              "clam_smooth" = 0.5, "phi" = 3)
tuning_parameters <- list("Jmax" = 5, "pi_b" = 0.5, "cprop_beta" = 0.5,
                          "alpha" = 0.4)
output <- GibbsMH(Y, I, X, Y_0, I_0, X_0,
                  tuning_parameters, hyper,
                  iter = 5, warmup_iter = 1)
An MCMC sampler for time-to-event data, without Bayesian Borrowing. We obtain a flexible baseline hazard function by making the split points random within a piecewise exponential model and using a Gaussian Markov random field prior to smooth the baseline hazards. Only calls the sampler and does not run any input checks. Best practice is to call BayesFBHborrow(), if the user is not familiar with the model at hand.
## S3 method for class 'NoBorrow' GibbsMH( Y, I, X = NULL, Y_0 = NULL, I_0 = NULL, X_0 = NULL, tuning_parameters, hyperparameters = list(a_sigma = 1, b_sigma = 1, phi = 3, clam_smooth = 0.8), lambda_hyperparameters = list(a_lambda = 0.01, b_lambda = 0.01), iter = 1500L, warmup_iter = 10L, refresh = 0, max_grid = 2000L )
Y |
data |
I |
event indicator |
X |
design matrix |
Y_0 |
historical data, default is NULL |
I_0 |
historical event indicator, default is NULL |
X_0 |
historical design matrix, default is NULL |
tuning_parameters |
list of "cprop_beta", "Jmax", and "pi_b" |
hyperparameters |
list containing the hyperparameters c("a_sigma", "b_sigma", "Jmax", "clam_smooth", "cprop_beta", "phi"). Default is list("a_sigma" = 2, "b_sigma" = 2, "Jmax" = 20, "clam_smooth" = 0.8, "cprop_beta" = 0.5, "phi" = 3) |
lambda_hyperparameters |
contains two hyperparameters ("a_lambda" and "b_lambda") used for the update of lambda, default is c(0.01, 0.01) |
iter |
number of iterations for MCMC sampler, excluding warmup, default is 2000 |
warmup_iter |
number of warmup iterations (burn-in) for MCMC sampler, default is 2000 |
refresh |
number of iterations between printed screen updates, default is 500 |
max_grid |
grid size for the smoothed baseline hazard, default is 2000 |
list with values after each iteration for parameters: out_fixed (J, mu, sigma2, beta), lambda, s, as well as tuning values of the total number of accepts: lambda_move and beta_move. Also included is the out_slam which contains the shrunk estimate of the baseline hazard.
set.seed(123)
# Load example data and set your hyperparameters
data(weibull_cc, package = "BayesFBHborrow")
data(weibull_hist, package = "BayesFBHborrow")

# The datasets consist of 3 (2) columns named "tte", "event" and "X".
# To explicitly run the sampler, extract the samples as follows
Y <- weibull_cc$tte
I <- weibull_cc$event
X <- matrix(weibull_cc$X_trt)

# Specify hyperparameters and tuning parameters
hyper <- list("a_sigma" = 2, "b_sigma" = 2, "clam_smooth" = 0.5, "phi" = 3)
tuning_parameters <- list("Jmax" = 5, "pi_b" = 0.5, "cprop_beta" = 0.5)

# Set initial values to 'NULL' for default settings
output <- GibbsMH(Y, I, X, NULL, NULL, NULL,
                  tuning_parameters = tuning_parameters,
                  hyperparameters = hyper,
                  iter = 5, warmup_iter = 1)
An MCMC sampler for Bayesian borrowing with time-to-event data. We obtain a flexible baseline hazard function by making the split points random within a piecewise exponential model and using a Gaussian Markov random field prior to smooth the baseline hazards. Only calls the sampler and does not run any input checks. Best practice is to call BayesFBHborrow(), if the user is not familiar with the model at hand.
## S3 method for class 'WBorrow' GibbsMH( Y, I, X, Y_0, I_0, X_0, tuning_parameters = NULL, hyperparameters = list(a_tau = 1, b_tau = 0.001, c_tau = 1, d_tau = 1, type = "mix", p_0 = 0.8, a_sigma = 1, b_sigma = 1, phi = 3, clam_smooth = 0.8), lambda_hyperparameters = list(a_lambda = 0.01, b_lambda = 0.01), iter = 150L, warmup_iter = 10L, refresh = 0, max_grid = 2000L )
Y |
data |
I |
event indicator |
X |
design matrix |
Y_0 |
historical data |
I_0 |
historical event indicator |
X_0 |
historical design matrix |
tuning_parameters |
list of "cprop_beta", "cprop_beta_0", "alpha", "Jmax", and "pi_b" |
hyperparameters |
list containing the hyperparameters c("a_tau", "b_tau", "c_tau", "d_tau","type", "p_0", "a_sigma", "b_sigma", "Jmax", "clam_smooth", "cprop_beta", "phi", "pi_b"). Default is list("a_tau" = 1,"b_tau" = 1,"c_tau" = 1, "d_tau" = 0.001, "type" = "mix", "p_0" = 0.5, "a_sigma" = 2, "b_sigma" = 2, "Jmax" = 20, "clam_smooth" = 0.8, "cprop_beta" = 0.5, "phi" = 3, "pi_b" = 0.5) |
lambda_hyperparameters |
contains two hyperparameters (a_lambda and b_lambda) used for the update of lambda and lambda_0. Default is c(0.01, 0.01) |
iter |
number of iterations for MCMC sampler, excluding warmup, default is 2000 |
warmup_iter |
number of warmup iterations (burn-in) for MCMC sampler, default is 2000 |
refresh |
number of iterations between printed screen updates, default is 500 |
max_grid |
grid size for the smoothed baseline hazard, default is 2000 |
list with values after each iteration for parameters: out_fixed (J, mu, sigma2, beta), lambda, lambda_0, tau, s, as well as tuning values of the total number of accepts: lambda_move, lambda_0_move and beta_move. Also included is the out_slam which contains the shrunk estimate of the baseline hazard.
set.seed(123)
# Load example data and set your initial values and hyperparameters
data(weibull_cc, package = "BayesFBHborrow")
data(weibull_hist, package = "BayesFBHborrow")

# The datasets consist of 3 (2) columns named "tte", "event" and "X"
# (only for concurrent). To explicitly run the sampler, extract the samples as
# follows
Y <- weibull_cc$tte
I <- weibull_cc$event
X <- matrix(weibull_cc$X_trt)
Y_0 <- weibull_hist$tte
I_0 <- weibull_hist$event
X_0 <- NULL

# Specify hyperparameters and tuning parameters
hyper <- list("a_tau" = 1, "b_tau" = 0.001, "c_tau" = 1, "d_tau" = 1,
              "type" = "all", "p_0" = 0.5, "a_sigma" = 2, "b_sigma" = 2,
              "clam_smooth" = 0.5, "phi" = 3)
tuning_parameters <- list("Jmax" = 5, "pi_b" = 0.5, "cprop_beta" = 0.5,
                          "alpha" = 0.4)
output <- GibbsMH(Y, I, X, Y_0, I_0, X_0,
                  tuning_parameters = tuning_parameters,
                  hyperparameters = hyper,
                  iter = 5, warmup_iter = 1)
Aggregate individual level data into group level data
group_summary(Y, I, X, s)
Y |
data |
I |
censoring indicator |
X |
design matrix |
s |
split points, J + 2 |
list of group level data
set.seed(111)
# Load example data and set your initial values and hyperparameters
data(weibull_cc, package = "BayesFBHborrow")
data(weibull_hist, package = "BayesFBHborrow")
Y <- weibull_cc$tte
I <- weibull_cc$event
X <- weibull_cc$X_trt

# Say we want to know the group level data for the following split points
s <- quantile(Y, c(0, 0.45, 0.65, 1), names = FALSE)
group_summary(Y, I, X, s)
Propose lambda hyperparameters for the choice of initial values for lambda
init_lambda_hyperparameters(group_data, s, w = 0.5)
group_data |
group level data |
s |
split points |
w |
weight |
shape and rate for the estimated lambda distribution
set.seed(111)
# Load example data and set your initial values and hyperparameters
data(weibull_cc, package = "BayesFBHborrow")
data(weibull_hist, package = "BayesFBHborrow")
Y <- weibull_cc$tte
I <- weibull_cc$event
X <- weibull_cc$X_trt

# Say we want to know the group level data for the following split points
s <- quantile(Y, c(0, 0.45, 0.65, 1), names = FALSE)
group_data <- group_summary(Y, I, NULL, s)
init_lambda_hyperparameters(group_data, s)
Data simulated for a concurrent trial, with three columns named "tte" (time-to-event), "event" (event indicator), and "X_trt" (treatment indicator). It was simulated from a piecewise exponential model.
data(piecewise_exp_cc)
An object of class tbl_df (inherits from tbl, data.frame) with 250 rows and 3 columns.
data(piecewise_exp_cc)
survival_model <- survival::survfit(survival::Surv(tte, event) ~ X_trt,
                                    data = piecewise_exp_cc)
line_colors <- c("blue", "red")  # Adjust colors as needed
line_types <- 1:length(unique(piecewise_exp_cc$X_trt))
plot(survival_model, col = line_colors, lty = line_types,
     xlab = "Time (tte)", ylab = "Survival Probability",
     main = "Kaplan-Meier Survival Curves by Treatment")
Data simulated for a historical trial, with two columns named "tte" (time-to-event) and "event" (event indicator). It was simulated from a piecewise exponential model.
data(piecewise_exp_hist)
An object of class tbl_df (inherits from tbl, data.frame) with 100 rows and 2 columns.
data(piecewise_exp_cc)
data(piecewise_exp_hist)
piecewise_exp_hist$X_trt <- 0
survival_model <- survival::survfit(survival::Surv(tte, event) ~ X_trt,
                                    data = rbind(piecewise_exp_cc, piecewise_exp_hist))
line_colors <- c("blue", "red", "green")  # Adjust colors as needed
line_types <- 1:length(unique(piecewise_exp_cc$X_trt))
plot(survival_model, col = line_colors, lty = line_types,
     xlab = "Time (tte)", ylab = "Survival Probability",
     main = "Kaplan-Meier Survival Curves by Treatment")
S3 object which produces predictive probabilities of the survival, hazard, and hazard ratio for a given set of predictors
## S3 method for class 'BayesFBHborrow' plot(x, x_lim, x_pred = NULL, ...)
x |
object of class "BayesFBHborrow" to be visualized |
x_lim |
x-axis to be used for plot, set to NULL to use default from MCMC sampling |
x_pred |
vector of chosen predictors |
... |
other plotting arguments, see .plot_matrix() for more information |
nested list of 'plots' (posterior predictive hazard, survival, and hazard ratio) as well as their samples.
data(weibull_cc, package = "BayesFBHborrow")

# Set your tuning parameters
tuning_parameters <- list("Jmax" = 5,
                          "pi_b" = 0.5,
                          "cprop_beta" = 0.5)

# Run the MCMC sampler
out <- BayesFBHborrow(weibull_cc, NULL, tuning_parameters = tuning_parameters,
                      iter = 3, warmup_iter = 1)

# For the treatment group
plots <- plot(out$out, out$out$time_grid, x_pred = c(1))
S3 summary method for class "BayesFBHborrow". Returns a summary of the mean, median, and given percentiles for the one-dimensional parameters.
## S3 method for class 'BayesFBHborrow' summary( object, estimator = NULL, percentiles = c(0.025, 0.25, 0.75, 0.975), ... )
object |
MCMC sample object from BayesFBHborrow() |
estimator |
The type of estimator to summarize, could be "fixed", "lambda", "lambda_0" or "s". The default is NULL and will print a summary of the output list. |
percentiles |
Given percentiles to output, default is c(0.025, 0.25, 0.75, 0.975) |
... |
other arguments, see summary.default |
summary of the given estimator
data(piecewise_exp_cc, package = "BayesFBHborrow")

# Set your tuning parameters
tuning_parameters <- list("Jmax" = 5,
                          "pi_b" = 0.5,
                          "cprop_beta" = 0.5)

# Run the MCMC sampler
out <- BayesFBHborrow(piecewise_exp_cc, NULL, tuning_parameters = tuning_parameters,
                      iter = 3, warmup_iter = 1)

# Create a summary of the output
summary(out$out, estimator = "out_fixed")
Data is simulated for a concurrent trial with three columns named "tte" (time-to-event), "event" (event indicator), and "X_trt" (treatment indicator). It was simulated by drawing samples from a Weibull with kappa = 1.5 (shape) and nu = 0.4 (scale)
data(weibull_cc)
An object of class tbl_df (inherits from tbl, data.frame) with 250 rows and 3 columns.
data(weibull_cc)
survival_model <- survival::survfit(survival::Surv(tte, event) ~ X_trt,
                                    data = weibull_cc)
line_colors <- c("blue", "red")  # Adjust colors as needed
line_types <- 1:length(unique(weibull_cc$X_trt))
plot(survival_model, col = line_colors, lty = line_types,
     xlab = "Time (tte)", ylab = "Survival Probability",
     main = "Kaplan-Meier Survival Curves by Treatment")
Data simulated for a historical trial, with two columns named "tte" (time-to-event) and "event" (event indicator). It was simulated by drawing samples from a Weibull distribution.
data(weibull_hist)
An object of class tbl_df (inherits from tbl, data.frame) with 100 rows and 2 columns.
data(weibull_cc)
data(weibull_hist)
weibull_hist$X_trt <- 0
survival_model <- survival::survfit(survival::Surv(tte, event) ~ X_trt,
                                    data = rbind(weibull_cc, weibull_hist))
line_colors <- c("blue", "red", "green")  # Adjust colors as needed
line_types <- 1:length(unique(weibull_cc$X_trt))
plot(survival_model, col = line_colors, lty = line_types,
     xlab = "Time (tte)", ylab = "Survival Probability",
     main = "Kaplan-Meier Survival Curves by Treatment")