Package: catalytic 0.1.0

Dongming Huang

catalytic: Tools for Applying Catalytic Priors in Statistical Modeling

To improve estimation accuracy and stability in statistical modeling, catalytic prior distributions integrate observed data with synthetic data generated from the predictive distribution of a simpler model. This approach enhances model robustness, stability, and flexibility in complex data scenarios. Catalytic prior distributions were introduced by Huang et al. (2020, <doi:10.1073/pnas.1920913117>) and Li and Huang (2023, <doi:10.48550/arXiv.2312.01411>).

Authors: Yitong Wu [aut], Dongming Huang [aut, cre], Weihao Li [aut], Ministry of Education, Singapore [fnd]


# Install 'catalytic' in R:
install.packages('catalytic', repos = 'https://cloud.r-project.org')
Datasets:
  • swim - Simulated SWIM Dataset with Binary Response

This package does not link to any GitHub/GitLab/R-Forge repository. No issue tracker or development information is available.

Score: 3.18 · 209 downloads · 15 exports · 65 dependencies

Last updated 4 months ago from 00e3494d40. Checks: 3 OK. Indexed: yes.

Target | Result | Latest binary
Doc / Vignettes | OK | Mar 18 2025
R-4.5-linux | OK | Mar 18 2025
R-4.4-linux | OK | Mar 18 2025

Exports: cat_cox, cat_cox_bayes, cat_cox_bayes_joint, cat_cox_initialization, cat_cox_tune, cat_glm, cat_glm_bayes, cat_glm_bayes_joint, cat_glm_bayes_joint_gibbs, cat_glm_initialization, cat_glm_tune, cat_lmm, cat_lmm_initialization, cat_lmm_tune, traceplot

Dependencies: abind, backports, BH, boot, callr, checkmate, cli, coda, colorspace, desc, distributional, fansi, farver, generics, ggplot2, glue, gridExtra, gtable, inline, invgamma, isoband, labeling, lattice, lifecycle, lme4, loo, magrittr, MASS, mathjaxr, Matrix, matrixStats, mgcv, minqa, munsell, nlme, nloptr, numDeriv, pillar, pkgbuild, pkgconfig, posterior, processx, ps, quadform, QuickJSR, R6, rbibutils, RColorBrewer, Rcpp, RcppEigen, RcppParallel, Rdpack, reformulas, rlang, rstan, scales, StanHeaders, survival, tensorA, tibble, truncnorm, utf8, vctrs, viridisLite, withr

catalytic_cox

Rendered from catalytic_cox.Rmd using knitr::rmarkdown on Mar 18 2025.

Last update: 2024-12-18
Started: 2024-12-18

catalytic_glm_binomial

Rendered from catalytic_glm_binomial.Rmd using knitr::rmarkdown on Mar 18 2025.

Last update: 2024-12-18
Started: 2024-12-18

catalytic_glm_gaussian

Rendered from catalytic_glm_gaussian.Rmd using knitr::rmarkdown on Mar 18 2025.

Last update: 2024-12-18
Started: 2024-12-18

Citation

Wu Y, Huang D, Li W, Ministry of Education, Singapore (2024). catalytic: Tools for Applying Catalytic Priors in Statistical Modeling. https://CRAN.R-project.org/package=catalytic.

Corresponding BibTeX entry:

  @Manual{,
    title = {catalytic: Tools for Applying Catalytic Priors in
      Statistical Modeling},
    author = {Yitong Wu and Dongming Huang and Weihao Li and {Ministry
      of Education, Singapore}},
    year = {2024},
    url = {https://CRAN.R-project.org/package=catalytic},
  }

Readme and manuals

catalytic

The catalytic package enhances the stability and accuracy of Generalized Linear Models (GLMs) and Cox proportional hazards models (COX) in R, especially in high-dimensional or sparse data settings where traditional methods struggle. By employing catalytic prior distributions, this package integrates observed data with synthetic data generated from simpler models, effectively reducing issues like overfitting. This approach provides more robust parameter estimation, making catalytic a valuable tool for complex and limited-data scenarios.
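The core idea can be illustrated with a small, self-contained sketch. This is not the package's implementation (which `cat_glm_initialization` and `cat_glm` handle for you); it is a toy illustration of the principle that, for a Gaussian GLM, the catalytic posterior mode coincides with a weighted fit in which each observed row gets weight 1 and each of the M synthetic rows gets weight tau/M. The resampled covariates and intercept-only synthetic response below are simplified assumptions for illustration:

```r
set.seed(1)
obs <- data.frame(X1 = rnorm(10), Y = rnorm(10))

# Synthetic data: covariates resampled from the observed covariates,
# response drawn from a simpler (intercept-only) model's fit.
syn <- data.frame(X1 = sample(obs$X1, 100, replace = TRUE))
syn$Y <- mean(obs$Y)

tau <- 10                      # total weight given to the synthetic data
M   <- nrow(syn)

# Weighted fit: observed rows weight 1, each synthetic row weight tau / M.
fit <- glm(Y ~ X1,
           data    = rbind(obs, syn),
           weights = c(rep(1, nrow(obs)), rep(tau / M, M)))
coef(fit)
```

As tau grows, the estimate is pulled toward the simpler model's fit; as tau shrinks toward 0, it approaches the ordinary GLM fit on the observed data alone.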

Installation

You can install the released version of catalytic from CRAN with:

install.packages("catalytic")


library(catalytic)

Example

set.seed(1)

data <- data.frame(
  X1 = stats::rnorm(10),
  X2 = stats::rnorm(10),
  Y = stats::rnorm(10)
)

# Step 1: Initialization
cat_init <- cat_glm_initialization(
  formula = Y ~ 1, # Formula for a simple model to generate synthetic response
  family = gaussian,
  data = data,
  syn_size = 100 # Synthetic data size
)
print(cat_init)

## cat_glm_initialization
##  formula:               Y ~ 1
##  model:                 Unknown Variance
##  custom variance:       NULL
##  family:                gaussian [identity]
##  covariates dimention:  10 (Observation) + 100 (Synthetic) = 110 rows with 2 column(s)
## ------
## Observation Data Information:
##  covariates dimention:  10 rows with  2 column(s)
##  head(data) :              
##  [only show the head of first 10 columns] 
## 
##           X1          X2           Y
## 1 -0.6264538  1.51178117  0.91897737
## 2  0.1836433  0.38984324  0.78213630
## 3 -0.8356286 -0.62124058  0.07456498
## 4  1.5952808 -2.21469989 -1.98935170
## 5  0.3295078  1.12493092  0.61982575
## 6 -0.8204684 -0.04493361 -0.05612874
## 
## ------
## Synthetic Data Information:
##  covariates dimention:  100 rows with  2  column(s)
##  head(data):              
##  [only show the head of first 10 columns] 
## 
##           X1         X2          Y
## 1 -0.8356286  0.5953590 -0.1336732
## 2 -0.3053884  0.6855034 -0.1336732
## 3 -0.8204684  0.3898432 -0.1336732
## 4  0.7383247  0.5939013 -0.1336732
## 5  0.1836433  0.7523157 -0.1336732
## 6  0.1836433 -0.7461351 -0.1336732
## 
##  data generation process:
##  [only show the first 10 columns] 
## 
##                Covariate                   Type                Process
## 1 X1                     Continuous             Coordinate            
## 2 X2                     Continuous             Coordinate->Deskewing 
## 
## ------
## * For help interpreting the printed output see ?print.cat_initialization

# Step 2: Choose Method(s)
## Method 1 - Estimation with fixed tau
cat_glm_model <- cat_glm(
  formula = Y ~ ., # Formula for model fitting
  cat_init = cat_init, # Output object from `cat_glm_initialization`
  tau = 10 # Down-weight factor for synthetic data
)
print(cat_glm_model)

## cat_glm
##  formula:                Y ~ .
##  covariates dimention:   10 (Observation) + 100 (Synthetic) = 110 rows with 2 column(s)
##  tau:                    10
##  family:                 gaussian [identity]
## ------
## coefficients' information:
## (Intercept)          X1          X2 
##      -0.199      -0.402       0.250 
## 
## ------
## * For help interpreting the printed output see ?print.cat

print(predict(cat_glm_model))

##           1           2           3           4           5           6 
##  0.43141012 -0.17506574 -0.01774890 -1.39427971 -0.04996112  0.12024636 
##           7           8           9          10 
## -0.39882013 -0.25973457 -0.22499036  0.07272498

## Method 2 - Estimation with selective tau
cat_glm_tune_model <- cat_glm_tune(
  formula = Y ~ .,
  cat_init = cat_init,
  tau_seq = seq(1, 10) # Vector of values for downweighting synthetic data
)
print(cat_glm_tune_model)

## cat_glm_tune
##  formula:                 Y ~ .
##  covariates dimention:    10 (Observation) + 100 (Synthetic) = 110 rows with 2 column(s)
##  tau sequnce:             1, 2, 3, 4, 5, 6, 7, 8, 9, 10
##  family:                  gaussian
##  risk estimate method:    parametric_bootstrap
##  discrepancy method:      mean_square_error
## 
##  optimal tau:             10
##  minimun risk estimate:   1.539
## ------
## coefficients' information:
## 
## (Intercept)          X1          X2 
##      -0.199      -0.402       0.250 
## 
## ------
## * For help interpreting the printed output see ?print.cat_tune

print(predict(cat_glm_tune_model))

##           1           2           3           4           5           6 
##  0.43141012 -0.17506574 -0.01774890 -1.39427971 -0.04996112  0.12024636 
##           7           8           9          10 
## -0.39882013 -0.25973457 -0.22499036  0.07272498
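The tuning step above selects tau by minimizing an estimated risk (here via parametric bootstrap). A toy sketch of the same idea, assuming a simple leave-one-out criterion rather than the package's risk estimators, is:

```r
set.seed(1)
obs <- data.frame(X1 = rnorm(10), Y = rnorm(10))
syn <- data.frame(X1 = sample(obs$X1, 100, replace = TRUE),
                  Y  = mean(obs$Y))   # intercept-only synthetic response

# Leave-one-out squared error of the weighted fit for a given tau.
loo_risk <- function(tau) {
  errs <- sapply(seq_len(nrow(obs)), function(i) {
    fit <- glm(Y ~ X1,
               data    = rbind(obs[-i, ], syn),
               weights = c(rep(1, nrow(obs) - 1),
                           rep(tau / nrow(syn), nrow(syn))))
    (obs$Y[i] - predict(fit, newdata = obs[i, ]))^2
  })
  mean(errs)
}

tau_seq <- 1:10
risks   <- sapply(tau_seq, loo_risk)
tau_seq[which.min(risks)]        # tau with the smallest estimated risk
```

`cat_glm_tune` does this selection for you, with `risk_estimate_method` and `discrepancy_method` controlling how the risk is estimated.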

plot(cat_glm_tune_model, text_pos = 2)

## Method 3 - Bayesian posterior sampling with fixed tau
cat_glm_bayes_model <- cat_glm_bayes(
  formula = Y ~ .,
  cat_init = cat_init,
  tau = 10
)

## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 2.5e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.25 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.046 seconds (Warm-up)
## Chain 1:                0.045 seconds (Sampling)
## Chain 1:                0.091 seconds (Total)
## Chain 1: 
## (Sampling output for chains 2-4 is analogous and omitted.)

print(cat_glm_bayes_model)

## cat_glm_bayes
##  formula:                Y ~ .
##  covariates dimention:   10 (Observation) + 100 (Synthetic) = 110 rows with 2 column(s)
##  tau:                    10
##  family:                 gaussian [identity]
##  stan algorithm:         NUTS
##  stan chain:             4
##  stan iter:              2000
##  stan warmup:            1000
## ------
## coefficients' information:
## 
##                mean se_mean    sd    2.5%     25%     50%     75%   97.5%
## (Intercept)  -0.196   0.003 0.156  -0.504  -0.298  -0.197  -0.093   0.118
## X1           -0.405   0.003 0.198  -0.813  -0.536  -0.405  -0.277  -0.017
## X2            0.247   0.003 0.153  -0.053   0.146   0.247   0.351   0.554
## sigma         0.648   0.002 0.103   0.483   0.575   0.636   0.703   0.888
## lp__        -18.429   0.039 1.550 -22.281 -19.198 -18.074 -17.319 -16.535
##                n_eff  Rhat
## (Intercept) 3547.056 1.003
## X1          3928.271 0.999
## X2          3583.399 1.001
## sigma       2780.931 1.000
## lp__        1547.193 1.000
## 
## ------
## * For help interpreting the printed output see ?print.cat_bayes

print(predict(cat_glm_bayes_model))

##  [1]  0.43217093 -0.17392884 -0.01089480 -1.39082785 -0.05115094  0.12558130
##  [7] -0.39757973 -0.26171856 -0.22616132  0.07484400

traceplot(cat_glm_bayes_model)

## Method 4 - Bayesian posterior sampling with adaptive tau
cat_glm_bayes_joint_model <- cat_glm_bayes_joint(
  formula = Y ~ .,
  cat_init = cat_init
)

## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0.000251 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 2.51 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.059 seconds (Warm-up)
## Chain 1:                0.057 seconds (Sampling)
## Chain 1:                0.116 seconds (Total)
## Chain 1: 
## (Sampling output for chains 2-4 is analogous and omitted.)

print(cat_glm_bayes_joint_model)

## cat_glm_bayes_joint
##  formula:                Y ~ .
##  covariates dimention:   10 (Observation) + 100 (Synthetic) = 110 rows with 2 column(s)
##  family:                 gaussian [identity]
##  stan algorithm:         NUTS
##  stan chain:             4
##  stan iter:              2000
##  stan warmup:            1000
## ------
## coefficients' information:
## 
##                mean se_mean    sd    2.5%     25%     50%     75%   97.5%
## (Intercept)  -0.141   0.004 0.259  -0.658  -0.308  -0.141   0.028   0.385
## X1           -0.647   0.006 0.348  -1.331  -0.876  -0.647  -0.422   0.053
## X2            0.334   0.005 0.257  -0.179   0.171   0.335   0.497   0.864
## tau           0.963   0.011 0.684   0.122   0.470   0.798   1.310   2.623
## sigma         0.777   0.003 0.167   0.527   0.658   0.752   0.867   1.167
## lp__        -14.090   0.045 1.801 -18.409 -15.022 -13.716 -12.755 -11.682
##                n_eff  Rhat
## (Intercept) 3607.756 1.000
## X1          3361.235 1.000
## X2          3098.601 1.001
## tau         4149.713 1.001
## sigma       2964.667 1.001
## lp__        1588.695 1.002
## 
## ------
## * For help interpreting the printed output see ?print.cat_bayes

print(predict(cat_glm_bayes_joint_model))

##  [1]  0.7690212 -0.1300890  0.1918019 -1.9138229  0.0210847  0.3745221
##  [7] -0.4623462 -0.3040014 -0.2397665  0.2545836

traceplot(cat_glm_bayes_joint_model)

Help Manual

Help page | Topics
Catalytic Cox Proportional Hazards Model (COX) Fitting Function with Fixed Tau | cat_cox
Bayesian Estimation for Catalytic Cox Proportional-Hazards Model (COX) with Fixed tau | cat_cox_bayes
Bayesian Estimation for Catalytic Cox Proportional-Hazards Model (COX) with adaptive tau | cat_cox_bayes_joint
Initialization for Catalytic Cox proportional hazards model (COX) | cat_cox_initialization
Catalytic Cox Proportional-Hazards Model (COX) Fitting Function by Tuning tau from a Sequence of tau Values | cat_cox_tune
Catalytic Generalized Linear Models (GLMs) Fitting Function with Fixed Tau | cat_glm
Bayesian Estimation for Catalytic Generalized Linear Models (GLMs) with Fixed tau | cat_glm_bayes
Bayesian Estimation for Catalytic Generalized Linear Models (GLMs) with adaptive tau | cat_glm_bayes_joint
Bayesian Estimation with Gibbs Sampling for Catalytic Generalized Linear Models (GLMs) Binomial Family for Coefficients and tau | cat_glm_bayes_joint_gibbs
Initialization for Catalytic Generalized Linear Models (GLMs) | cat_glm_initialization
Catalytic Generalized Linear Models (GLMs) Fitting Function by Tuning tau from a Sequence of tau Values | cat_glm_tune
Catalytic Linear Mixed Model (LMM) Fitting Function with fixed tau | cat_lmm
Initialization for Catalytic Linear Mixed Model (LMM) | cat_lmm_initialization
Catalytic Linear Mixed Model (LMM) Fitting Function by Tuning tau from a Sequence of tau Values | cat_lmm_tune
Perform Cross-Validation for Model Estimation | cross_validation
Perform Cross-Validation for Catalytic Cox Proportional-Hazards Model (COX) to Select Optimal tau | cross_validation_cox
Perform Cross-Validation for Catalytic Linear Mixed Model (LMM) to Select Optimal tau | cross_validation_lmm
Extract and Format Model Coefficients | extract_coefs
Extract Dimension Information from Model Initialization | extract_dim
Extract and Format Summary of Stan Model Results | extract_stan_summary
Extract and Format Sequence of Tau Values | extract_tau_seq
Adjusted Cat Initialization | get_adjusted_cat_init
Compute the Gradient for Cox Proportional Hazards Model | get_cox_gradient
Compute the Hessian Matrix for Cox Proportional Hazards Model | get_cox_hessian
Estimate the kappa value for the synthetic Cox proportional hazards model | get_cox_kappa
Compute the Partial Likelihood for the Cox Proportional Hazards Model | get_cox_partial_likelihood
Solve Linear System using QR Decomposition | get_cox_qr_solve
Calculate Risk and Failure Sets for Cox Proportional Hazards Model | get_cox_risk_and_failure_sets
Identify the risk set indices for Cox proportional hazards model | get_cox_risk_set_idx
Compute the gradient of the synthetic Cox proportional hazards model | get_cox_syn_gradient
Compute the Synthetic Hessian Matrix for Cox Proportional Hazards Model | get_cox_syn_hessian
Compute Discrepancy Measures | get_discrepancy
Extract Left-Hand Side of Formula as String | get_formula_lhs
Extract the Right-Hand Side of a Formula | get_formula_rhs
Convert Formula to String | get_formula_string
Get Custom Variance for Generalized Linear Model (GLM) | get_glm_custom_var
Compute Diagonal Approximate Covariance Matrix | get_glm_diag_approx_cov
Retrieve GLM Family Name or Name with Link Function | get_glm_family_string
Compute Lambda Based on Discrepancy Method | get_glm_lambda
Compute Log Density Based on GLM Family | get_glm_log_density
Compute Gradient of Log Density for GLM Families | get_glm_log_density_grad
Compute Mean Based on GLM Family | get_glm_mean
Generate Sample Data for GLM | get_glm_sample_data
Run Hamiltonian Monte Carlo to Get MCMC Sample Result | get_hmc_mcmc_result
Compute Linear Predictor | get_linear_predictor
Resampling Methods for Data Processing | get_resampled_df
Generate Stan Model Based on Specified Parameters | get_stan_model
Standardize Data | get_standardized_data
Hamiltonian Monte Carlo (HMC) Implementation | hmc_neal_2010
Check if a Variable is Continuous | is.continuous
Perform Mallowian Estimate for Model Risk (Only Applicable for Gaussian Family) | mallowian_estimate
Perform Parametric Bootstrap for Model Risk Estimation | parametric_bootstrap
Plot Likelihood or Risk Estimate vs. Tau for Tuning Model | plot.cat_tune
Predict Linear Predictor for New Data Using a Fitted Cox Model | predict.cat_cox
Predict Outcome for New Data Using a Fitted GLM Model | predict.cat_glm
Predict Linear Predictor for New Data Using a Fitted Linear Mixed Model | predict.cat_lmm
Print Data Frame with Head and Tail Rows | print_df_head_tail
Generate Suggestions for Bayesian Joint Binomial GLM Parameter Estimation | print_glm_bayes_joint_binomial_suggestion
Print Method for 'cat' Object | print.cat
Print Summary of 'cat_bayes' Model | print.cat_bayes
Print Summary of 'cat_gibbs' Model | print.cat_gibbs
Print Summary for Catalytic Initialization Model | print.cat_initialization
Print Method for 'cat_tune' Object | print.cat_tune
Perform Steinian Estimate for Model Risk (Only Applicable for Binomial Family) | steinian_estimate
Simulated SWIM Dataset with Binary Response | swim
Traceplot for Bayesian Model Sampling | traceplot
Traceplot for Bayesian Sampling Model | traceplot.cat_bayes
Traceplot for Gibbs Sampling Model | traceplot.cat_gibbs
Calculates the log-likelihood for linear mixed models (LMMs) by combining observed and synthetic log-likelihoods based on the variance parameters. | update_lmm_variance
Validate Inputs for Catalytic Cox proportional hazards model (COX) Initialization | validate_cox_initialization_input
Validate Inputs for Catalytic Cox Model | validate_cox_input
Validate Inputs for Catalytic Generalized Linear Models (GLMs) Initialization | validate_glm_initialization_input
Validate Inputs for Catalytic Generalized Linear Models (GLMs) | validate_glm_input
Validate Inputs for Catalytic Linear Mixed Model (LMM) Initialization | validate_lmm_initialization_input
Validate Inputs for Catalytic Linear Mixed Model (LMM) | validate_lmm_input
Validate Positive or Non-negative Parameter | validate_positive