Title: Bayesian Vector Heterogeneous Autoregressive Modeling
Description: Tools to model and forecast multivariate time series, including the Bayesian vector heterogeneous autoregressive (VHAR) model by Kim & Baek (2023) (doi:10.1080/00949655.2023.2281644). 'bvhar' can model Vector Autoregressive (VAR), VHAR, Bayesian VAR (BVAR), and Bayesian VHAR (BVHAR) models.
Authors: Young Geun Kim [aut, cre, cph], Changryong Baek [ctb]
Maintainer: Young Geun Kim <[email protected]>
License: GPL (>= 3)
Version: 2.1.2
Built: 2024-11-11 06:57:12 UTC
Source: CRAN
Draws a dynamic directional spillover plot.
## S3 method for class 'bvhardynsp'
autoplot(
  object,
  type = c("tot", "to", "from", "net"),
  hcol = "grey",
  hsize = 1.5,
  row_facet = NULL,
  col_facet = NULL,
  ...
)
object: A bvhardynsp object
type: Index to draw ("tot", "to", "from", or "net")
hcol: Color of the horizontal line at 0 (by default, grey)
hsize: Size of the horizontal line at 0 (by default, 1.5)
row_facet:
col_facet:
...: Additional arguments
Draws impulse responses of response ~ impulse in facets.
## S3 method for class 'bvharirf'
autoplot(object, ...)
object: A bvharirf object
...: Other arguments passed on to the plot
A ggplot object
Draw BVAR and BVHAR MCMC plots.
## S3 method for class 'bvharsp'
autoplot(
  object,
  type = c("coef", "trace", "dens", "area"),
  pars = character(),
  regex_pars = character(),
  ...
)
object: A bvharsp object
type: The type of the plot: posterior coefficient ("coef"), trace plot ("trace"), density plot ("dens"), or interval plot ("area")
pars: Parameter names to draw
regex_pars: Regular expression for parameter names to draw
...: Other options for each plot type
A ggplot object
This function draws a residual plot for the covariance matrix of a Minnesota-prior VAR model.
## S3 method for class 'normaliw'
autoplot(object, hcol = "grey", hsize = 1.5, ...)
object: A normaliw object
hcol: Color of the horizontal line at 0 (by default, grey)
hsize: Size of the horizontal line at 0 (by default, 1.5)
...: Additional options for geom_point
A ggplot object
Plots the forecasting result with forecast regions.
## S3 method for class 'predbvhar'
autoplot(
  object,
  type = c("grid", "wrap"),
  ci_alpha = 0.7,
  alpha_scale = 0.3,
  x_cut = 1,
  viridis = FALSE,
  viridis_option = "D",
  NROW = NULL,
  NCOL = NULL,
  ...
)

## S3 method for class 'predbvhar'
autolayer(object, ci_fill = "grey70", ci_alpha = 0.5, alpha_scale = 0.3, ...)
object: A predbvhar object
type: Divide variables using a "grid" or "wrap" facet
ci_alpha: Transparency of the CI
alpha_scale: Scale of the transparency parameter (alpha)
x_cut: Plot the x axis from this point
viridis: If TRUE, use the viridis color palette
viridis_option: Option string for viridis
NROW:
NCOL:
...: Additional options for the plot
ci_fill: Color of the CI
A ggplot object
A ggplot layer
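As an illustration of the forecast plot (a sketch assuming the bundled etf_vix dataset and the n_ahead argument of the package's predict() methods):

```r
library(bvhar)
# Fit a BVHAR model and forecast 10 steps ahead
fit <- bvhar_minnesota(y = etf_vix[, 1:3])
pred <- predict(fit, n_ahead = 10)  # n_ahead is assumed from the predict interface
# One facet per variable, with shaded credible regions
autoplot(pred, type = "wrap", ci_alpha = 0.7)
```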
Draws a heatmap of SSVS prior coefficients.
## S3 method for class 'summary.bvharsp'
autoplot(object, point = FALSE, ...)
object: A summary.bvharsp object
point: Use points for the sparsity representation
...: Other arguments passed on to the plot
A ggplot object
This function draws a density plot for the coefficient matrices of a Minnesota-prior VAR model.
## S3 method for class 'summary.normaliw'
autoplot(
  object,
  type = c("trace", "dens", "area"),
  pars = character(),
  regex_pars = character(),
  ...
)
object: A summary.normaliw object
type: The type of the plot: trace plot ("trace"), density plot ("dens"), or interval plot ("area")
pars: Parameter names to draw
regex_pars: Regular expression for parameter names to draw
...: Other options for each plot type
A ggplot object
This function sets lower and upper bounds for set_bvar(), set_bvhar(), or set_weight_bvhar().
bound_bvhar(
  init_spec = set_bvhar(),
  lower_spec = set_bvhar(),
  upper_spec = set_bvhar()
)

## S3 method for class 'boundbvharemp'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

is.boundbvharemp(x)

## S3 method for class 'boundbvharemp'
knit_print(x, ...)
init_spec: Initial Bayes model specification
lower_spec: Lower-bound Bayes model specification
upper_spec: Upper-bound Bayes model specification
x: A boundbvharemp object
digits: Digit option for printing
...: Not used
An object of class boundbvharemp.
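A minimal sketch of setting bounds around an initial specification; the set_bvhar() hyperparameter names (sigma, lambda, delta) are assumed here, and the numeric values are illustrative only:

```r
library(bvhar)
# Hyperparameter names (sigma, lambda, delta) are assumed from set_bvhar();
# the numbers below are illustrative bounds for a bivariate model.
bound <- bound_bvhar(
  init_spec  = set_bvhar(sigma = c(0.1, 0.1), lambda = 0.1, delta = c(0.1, 0.1)),
  lower_spec = set_bvhar(sigma = c(0.01, 0.01), lambda = 0.01, delta = c(0.01, 0.01)),
  upper_spec = set_bvhar(sigma = c(1, 1), lambda = 1, delta = c(1, 1))
)
is.boundbvharemp(bound)
```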
This function fits BVAR(p) with flat prior.
bvar_flat(
  y,
  p,
  num_chains = 1,
  num_iter = 1000,
  num_burn = floor(num_iter/2),
  thinning = 1,
  bayes_spec = set_bvar_flat(),
  include_mean = TRUE,
  verbose = FALSE,
  num_thread = 1
)

## S3 method for class 'bvarflat'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'bvarflat'
logLik(object, ...)

## S3 method for class 'bvarflat'
AIC(object, ...)

## S3 method for class 'bvarflat'
BIC(object, ...)

is.bvarflat(x)

## S3 method for class 'bvarflat'
knit_print(x, ...)
y: Time series data whose columns indicate the variables
p: VAR lag
num_chains: Number of MCMC chains
num_iter: Number of MCMC iterations
num_burn: Number of burn-in (warm-up) iterations; half of the iterations is the default
thinning: Keep every thinning-th iteration
bayes_spec: A BVAR model specification by set_bvar_flat()
include_mean: Add constant term (default: TRUE)
verbose: Print the progress bar in the console (default: FALSE)
num_thread: Number of threads
x: A bvarflat object
digits: Digit option for printing
...: Not used
object: A bvarflat object
Ghosh et al. (2018) give a flat prior for the residual matrix in BVAR. Under this setting many models are possible, hierarchical and non-hierarchical; this function uses the simplest non-hierarchical matrix normal (MN) prior from their Section 3.1, where U denotes the precision matrix.
bvar_flat() returns an object of class bvarflat. It is a list with the following components:
Posterior mean matrix of the matrix normal distribution
Fitted values
Residuals
Posterior precision matrix of the matrix normal distribution
Posterior scale matrix of the posterior inverse-Wishart distribution
Posterior shape of the inverse-Wishart distribution
Number of coefficients: mp + 1 or mp
Lag of VAR
Dimension of the time series
Sample size used when training: totobs - p
Total number of observations
Process string in the bayes_spec: BVAR_Flat
Model specification (bvharspec)
Include constant term (const) or not (none)
Matched call
Prior mean matrix of the matrix normal distribution: zero matrix
Prior precision matrix of the matrix normal distribution
Raw input (matrix)
Ghosh, S., Khare, K., & Michailidis, G. (2018). High-Dimensional Posterior Consistency in Bayesian Vector Autoregressive Models. Journal of the American Statistical Association, 114(526).
Litterman, R. B. (1986). Forecasting with Bayesian Vector Autoregressions: Five Years of Experience. Journal of Business & Economic Statistics, 4(1), 25.
set_bvar_flat() to specify the hyperparameters of the BVAR flat prior
coef.bvarflat(), residuals.bvarflat(), and fitted.bvarflat()
predict.bvarflat() to forecast the BVAR process
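Mirroring the bvar_minnesota() example elsewhere in this documentation (and assuming the bundled etf_vix dataset), a flat-prior fit might look like:

```r
library(bvhar)
# Flat-prior BVAR(2) on three VIX series
fit <- bvar_flat(y = etf_vix[, 1:3], p = 2)
class(fit)
# Extract estimates and an information criterion
coef(fit)
logLik(fit)
AIC(fit)
```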
This function fits BVAR(p) with horseshoe prior.
bvar_horseshoe(
  y,
  p,
  num_chains = 1,
  num_iter = 1000,
  num_burn = floor(num_iter/2),
  thinning = 1,
  bayes_spec = set_horseshoe(),
  include_mean = TRUE,
  minnesota = FALSE,
  algo = c("block", "gibbs"),
  verbose = FALSE,
  num_thread = 1
)

## S3 method for class 'bvarhs'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'bvarhs'
knit_print(x, ...)
y: Time series data whose columns indicate the variables
p: VAR lag
num_chains: Number of MCMC chains
num_iter: Number of MCMC iterations
num_burn: Number of burn-in (warm-up) iterations; half of the iterations is the default
thinning: Keep every thinning-th iteration
bayes_spec: Horseshoe initialization specification by set_horseshoe()
include_mean: Add constant term (default: TRUE)
minnesota: Minnesota type
algo: Blocked sampler ("block") or ordinary Gibbs sampling ("gibbs")
verbose: Print the progress bar in the console (default: FALSE)
num_thread: Number of threads
x: A bvarhs object
digits: Digit option for printing
...: Not used
bvar_horseshoe() returns an object of class bvarhs. It is a list with the following components:
Posterior mean of VAR coefficients.
Posterior mean of covariance matrix
Posterior mean of precision matrix
Posterior inclusion probabilities.
posterior::draws_df with every variable: alpha, lambda, tau, omega, and eta
Name of every parameter.
Number of coefficients: mp + 1 or mp
Lag of VAR
Dimension of the data
Sample size used when training: totobs - p
Total number of observations
Matched call
Description of the model, e.g. VAR_Horseshoe
Include constant term (const) or not (none)
Usual Gibbs sampling (gibbs) or fast sampling (fast)
Horseshoe specification defined by set_horseshoe()
The number of chains
Total iterations
Burn-in
Thinning
Indicators for group.
Number of groups.
Raw input
Carvalho, C. M., Polson, N. G., & Scott, J. G. (2010). The horseshoe estimator for sparse signals. Biometrika, 97(2), 465-480.
Makalic, E., & Schmidt, D. F. (2016). A Simple Sampler for the Horseshoe Estimator. IEEE Signal Processing Letters, 23(1), 179-182.
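Following the style of the package's other examples (the bundled etf_vix dataset is assumed), a short horseshoe run could be:

```r
library(bvhar)
# Horseshoe-prior BVAR(2); chains are kept short for illustration,
# so increase num_iter in practice
fit <- bvar_horseshoe(
  y = etf_vix[, 1:3],
  p = 2,
  num_chains = 2,
  num_iter = 500,
  bayes_spec = set_horseshoe()
)
fit
```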
This function fits BVAR(p) with Minnesota prior.
bvar_minnesota(
  y,
  p = 1,
  num_chains = 1,
  num_iter = 1000,
  num_burn = floor(num_iter/2),
  thinning = 1,
  bayes_spec = set_bvar(),
  scale_variance = 0.05,
  include_mean = TRUE,
  parallel = list(),
  verbose = FALSE,
  num_thread = 1
)

## S3 method for class 'bvarmn'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'bvarhm'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'bvarmn'
logLik(object, ...)

## S3 method for class 'bvarmn'
AIC(object, ...)

## S3 method for class 'bvarmn'
BIC(object, ...)

is.bvarmn(x)

## S3 method for class 'bvarmn'
knit_print(x, ...)

## S3 method for class 'bvarhm'
knit_print(x, ...)
y: Time series data whose columns indicate the variables
p: VAR lag (default: 1)
num_chains: Number of MCMC chains
num_iter: Number of MCMC iterations
num_burn: Number of burn-in (warm-up) iterations; half of the iterations is the default
thinning: Keep every thinning-th iteration
bayes_spec: A BVAR model specification by set_bvar()
scale_variance: Proposal distribution scaling constant to adjust the acceptance rate
include_mean: Add constant term (default: TRUE)
parallel: List the same argument of
verbose: Print the progress bar in the console (default: FALSE)
num_thread: Number of threads
x: A bvarmn object
digits: Digit option for printing
...: Not used
object: A bvarmn object
The Minnesota prior places a matrix normal (MN) prior on the VAR coefficient matrices and an inverse-Wishart (IW) prior on the residual covariance.
bvar_minnesota() returns an object of class bvarmn. It is a list with the following components:
Posterior mean
Fitted values
Residuals
Posterior mean matrix of the matrix normal distribution
Posterior precision matrix of the matrix normal distribution
Posterior scale matrix of the posterior inverse-Wishart distribution
Posterior shape of the inverse-Wishart distribution; the prior shape is nrow(dummy observation) - k
Number of coefficients: mp + 1 or mp
Dimension of the time series
Sample size used when training: totobs - p
Prior mean matrix of the matrix normal distribution
Prior precision matrix of the matrix normal distribution
Prior scale matrix of the inverse-Wishart distribution
Prior shape of the inverse-Wishart distribution
Lag of VAR
Total number of observations
Include constant term (const) or not (none)
Raw input (matrix)
Matched call
Process string in the bayes_spec: BVAR_Minnesota
Model specification (bvharspec)
It is also a normaliw and bvharmod class object.
Bańbura, M., Giannone, D., & Reichlin, L. (2010). Large Bayesian vector auto regressions. Journal of Applied Econometrics, 25(1).
Giannone, D., Lenza, M., & Primiceri, G. E. (2015). Prior Selection for Vector Autoregressions. Review of Economics and Statistics, 97(2).
Litterman, R. B. (1986). Forecasting with Bayesian Vector Autoregressions: Five Years of Experience. Journal of Business & Economic Statistics, 4(1), 25.
Kadiyala, K. R., & Karlsson, S. (1997). Numerical methods for estimation and inference in Bayesian VAR-models. Journal of Applied Econometrics, 12, 99-132.
Karlsson, S. (2013). Chapter 15 Forecasting with Bayesian Vector Autoregression. Handbook of Economic Forecasting, 2, 791-897.
Sims, C. A., & Zha, T. (1998). Bayesian Methods for Dynamic Multivariate Models. International Economic Review, 39(4), 949-968.
set_bvar() to specify the hyperparameters of the Minnesota prior
summary.normaliw() to summarize the BVAR model
# Fit a Minnesota-prior BVAR(2) using the etf_vix dataset
fit <- bvar_minnesota(y = etf_vix[, 1:3], p = 2)
class(fit)
# Extract coefficients, fitted values, and residuals
coef(fit)
head(residuals(fit))
head(fitted(fit))
This function fits BVAR(p) with stochastic search variable selection (SSVS) prior.
bvar_ssvs(
  y,
  p,
  num_chains = 1,
  num_iter = 1000,
  num_burn = floor(num_iter/2),
  thinning = 1,
  bayes_spec = choose_ssvs(
    y = y, ord = p, type = "VAR",
    param = c(0.1, 10), include_mean = include_mean,
    gamma_param = c(0.01, 0.01), mean_non = 0, sd_non = 0.1
  ),
  init_spec = init_ssvs(type = "auto"),
  include_mean = TRUE,
  minnesota = FALSE,
  verbose = FALSE,
  num_thread = 1
)

## S3 method for class 'bvarssvs'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'bvarssvs'
knit_print(x, ...)
y: Time series data whose columns indicate the variables
p: VAR lag
num_chains: Number of MCMC chains
num_iter: Number of MCMC iterations
num_burn: Number of burn-in (warm-up) iterations; half of the iterations is the default
thinning: Keep every thinning-th iteration
bayes_spec: An SSVS model specification by choose_ssvs()
init_spec: SSVS initialization specification by init_ssvs()
include_mean: Add constant term (default: TRUE)
minnesota: Apply cross-variable shrinkage structure (Minnesota-way); by default, FALSE
verbose: Print the progress bar in the console (default: FALSE)
num_thread: Number of threads
x: A bvarssvs object
digits: Digit option for printing
...: Not used
The SSVS prior places spike-and-slab priors on the VAR coefficients and on the residual covariance matrix through the elements of its upper triangular Cholesky factor.
bvar_ssvs() returns an object of class bvarssvs. It is a list with the following components:
Posterior mean of VAR coefficients.
Posterior mean of cholesky factor matrix
Posterior mean of covariance matrix
Posterior mean of omega
Posterior inclusion probability
posterior::draws_df with every variable: alpha, eta, psi, omega, and gamma
Name of every parameter.
Number of coefficients: mp + 1 or mp
Lag of VAR
Dimension of the data
Sample size used when training: totobs - p
Total number of observations
Matched call
Description of the model, e.g. VAR_SSVS
Include constant term (const) or not (none)
SSVS specification defined by set_ssvs()
Initial specification defined by init_ssvs()
The number of chains
Total iterations
Burn-in
Thinning
Indicators for group.
Number of groups.
Raw input
George, E. I., & McCulloch, R. E. (1993). Variable Selection via Gibbs Sampling. Journal of the American Statistical Association, 88(423), 881-889.
George, E. I., Sun, D., & Ni, S. (2008). Bayesian stochastic search for VAR model restrictions. Journal of Econometrics, 142(1), 553-580.
Koop, G., & Korobilis, D. (2009). Bayesian Multivariate Time Series Methods for Empirical Macroeconomics. Foundations and Trends® in Econometrics, 3(4), 267-358.
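A short SSVS run in the style of the other examples (the bundled etf_vix dataset is assumed; the bayes_spec and init_spec defaults come from the usage above):

```r
library(bvhar)
# SSVS-prior BVAR(2) with the default choose_ssvs()/init_ssvs() specifications;
# a short chain for illustration only
fit <- bvar_ssvs(
  y = etf_vix[, 1:3],
  p = 2,
  num_iter = 500,
  include_mean = TRUE
)
fit
```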
This function fits VAR-SV. It can have Minnesota, SSVS, or Horseshoe priors.
bvar_sv(
  y,
  p,
  num_chains = 1,
  num_iter = 1000,
  num_burn = floor(num_iter/2),
  thinning = 1,
  bayes_spec = set_bvar(),
  sv_spec = set_sv(),
  intercept = set_intercept(),
  include_mean = TRUE,
  minnesota = TRUE,
  save_init = FALSE,
  convergence = NULL,
  verbose = FALSE,
  num_thread = 1
)

## S3 method for class 'bvarsv'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'bvarsv'
knit_print(x, ...)
y: Time series data whose columns indicate the variables
p: VAR lag
num_chains: Number of MCMC chains
num_iter: Number of MCMC iterations
num_burn: Number of burn-in (warm-up) iterations; half of the iterations is the default
thinning: Keep every thinning-th iteration
bayes_spec: A BVAR model specification by set_bvar()
sv_spec: SV specification by set_sv()
intercept: Prior for the constant term by set_intercept()
include_mean: Add constant term (default: TRUE)
minnesota: Apply cross-variable shrinkage structure (Minnesota-way); by default, TRUE
save_init: Save every record starting from the initial values (default: FALSE)
convergence: Convergence threshold for rhat < convergence; by default, NULL
verbose: Print the progress bar in the console (default: FALSE)
num_thread: Number of threads
x: A bvarsv object
digits: Digit option for printing
...: Not used
Cholesky stochastic volatility modeling for VAR based on Carriero et al. (2022), implementing the corrected triangular algorithm for the Gibbs sampler.
bvar_sv() returns an object of class bvarsv. It is a list with the following components:
Posterior mean of coefficients.
Posterior mean of contemporaneous effects.
Every set of MCMC trace.
Name of every parameter.
Indicators for group.
Number of groups.
Number of coefficients: mp + 1 or mp
VAR lag
Dimension of the data
Sample size used when training: totobs - p
Total number of observations
Matched call
Description of the model, e.g. VAR_SSVS_SV, VAR_Horseshoe_SV, or VAR_minnesota-part_SV
Include constant term (const) or not (none)
Coefficients prior specification
log volatility prior specification
Intercept prior specification
Initial values
The number of chains
Total iterations
Burn-in
Thinning
Raw input
If it is SSVS or Horseshoe:
Posterior inclusion probabilities.
Carriero, A., Chan, J., Clark, T. E., & Marcellino, M. (2022). Corrigendum to “Large Bayesian vector autoregressions with stochastic volatility and non-conjugate priors” [J. Econometrics 212 (1)(2019) 137-154]. Journal of Econometrics, 227(2), 506-512.
Chan, J., Koop, G., Poirier, D., & Tobias, J. (2019). Bayesian Econometric Methods (2nd ed., Econometric Exercises). Cambridge: Cambridge University Press.
Cogley, T., & Sargent, T. J. (2005). Drifts and volatilities: monetary policies and outcomes in the post WWII US. Review of Economic Dynamics, 8(2), 262-302.
Gruber, L., & Kastner, G. (2022). Forecasting macroeconomic data with Bayesian VARs: Sparse or dense? It depends! arXiv.
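A sketch of a VAR-SV fit following the other examples (the bundled etf_vix dataset is assumed; prior choices mirror the defaults in the usage above):

```r
library(bvhar)
# VAR(2)-SV with Minnesota coefficient prior and default SV specification;
# short chains for illustration only
fit <- bvar_sv(
  y = etf_vix[, 1:3],
  p = 2,
  num_chains = 2,
  num_iter = 500,
  bayes_spec = set_bvar(),
  sv_spec = set_sv()
)
fit
```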
This function fits VHAR with horseshoe prior.
bvhar_horseshoe(
  y,
  har = c(5, 22),
  num_chains = 1,
  num_iter = 1000,
  num_burn = floor(num_iter/2),
  thinning = 1,
  bayes_spec = set_horseshoe(),
  include_mean = TRUE,
  minnesota = c("no", "short", "longrun"),
  algo = c("block", "gibbs"),
  verbose = FALSE,
  num_thread = 1
)

## S3 method for class 'bvharhs'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'bvharhs'
knit_print(x, ...)
y: Time series data whose columns indicate the variables
har: Numeric vector of weekly and monthly orders; by default, c(5, 22)
num_chains: Number of MCMC chains
num_iter: Number of MCMC iterations
num_burn: Number of burn-in (warm-up) iterations; half of the iterations is the default
thinning: Keep every thinning-th iteration
bayes_spec: Horseshoe initialization specification by set_horseshoe()
include_mean: Add constant term (default: TRUE)
minnesota: Minnesota type ("no", "short", or "longrun")
algo: Blocked sampler ("block") or ordinary Gibbs sampling ("gibbs")
verbose: Print the progress bar in the console (default: FALSE)
num_thread: Number of threads
x: A bvharhs object
digits: Digit option for printing
...: Not used
bvhar_horseshoe() returns an object of class bvharhs. It is a list with the following components:
Posterior mean of VHAR coefficients.
Posterior mean of covariance matrix
Posterior mean of precision matrix
posterior::draws_df with every variable: alpha, lambda, tau, omega, and eta
Name of every parameter.
Number of coefficients: 3m + 1 or 3m
3 (the number of HAR terms; included for use by other functions)
Order for weekly term
Order for monthly term
Dimension of the data
Sample size used when training: totobs - p
Total number of observations
Matched call
Description of the model, e.g. VHAR_Horseshoe
Include constant term (const) or not (none)
Usual Gibbs sampling (gibbs) or fast sampling (fast)
Horseshoe specification defined by set_horseshoe()
The number of chains
Total iterations
Burn-in
Thinning
Indicators for group.
Number of groups.
VHAR linear transformation matrix
Raw input
Kim, Y. G., and Baek, C. (n.d.). Working paper.
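A sketch of a horseshoe VHAR fit (the bundled etf_vix dataset is assumed; the default har = c(5, 22) orders are used):

```r
library(bvhar)
# Horseshoe-prior VHAR with the long-run Minnesota-type structure;
# a short chain for illustration only
fit <- bvhar_horseshoe(
  y = etf_vix[, 1:3],
  num_iter = 500,
  bayes_spec = set_horseshoe(),
  minnesota = "longrun"
)
fit
```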
This function fits BVHAR with Minnesota prior.
bvhar_minnesota(
  y,
  har = c(5, 22),
  num_chains = 1,
  num_iter = 1000,
  num_burn = floor(num_iter/2),
  thinning = 1,
  bayes_spec = set_bvhar(),
  scale_variance = 0.05,
  include_mean = TRUE,
  parallel = list(),
  verbose = FALSE,
  num_thread = 1
)

## S3 method for class 'bvharmn'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'bvharhm'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'bvharmn'
logLik(object, ...)

## S3 method for class 'bvharmn'
AIC(object, ...)

## S3 method for class 'bvharmn'
BIC(object, ...)

is.bvharmn(x)

## S3 method for class 'bvharmn'
knit_print(x, ...)

## S3 method for class 'bvharhm'
knit_print(x, ...)
y: Time series data whose columns indicate the variables
har: Numeric vector of weekly and monthly orders; by default, c(5, 22)
num_chains: Number of MCMC chains
num_iter: Number of MCMC iterations
num_burn: Number of burn-in (warm-up) iterations; half of the iterations is the default
thinning: Keep every thinning-th iteration
bayes_spec: A BVHAR model specification by set_bvhar()
scale_variance: Proposal distribution scaling constant to adjust the acceptance rate
include_mean: Add constant term (default: TRUE)
parallel: List the same argument of
verbose: Print the progress bar in the console (default: FALSE)
num_thread: Number of threads
x: A bvharmn object
digits: Digit option for printing
...: Not used
object: A bvharmn object
The Minnesota prior is applied to the vector HAR model: a matrix normal (MN) prior on the VHAR coefficient matrices and an inverse-Wishart (IW) prior on the residual covariance.
There are two types of Minnesota priors for BVHAR:
VAR-type Minnesota prior specified by set_bvhar(), the so-called BVHAR-S model
VHAR-type Minnesota prior specified by set_weight_bvhar(), the so-called BVHAR-L model
bvhar_minnesota() returns an object of class bvharmn. It is a list with the following components:
Posterior mean
Fitted values
Residuals
Posterior mean matrix of the matrix normal distribution
Posterior precision matrix of the matrix normal distribution
Posterior scale matrix of the posterior inverse-Wishart distribution
Posterior shape of the inverse-Wishart distribution; the prior shape is nrow(dummy observation) - k
Number of coefficients: 3m + 1 or 3m
Dimension of the time series
Sample size used when training: totobs - 22
Prior mean matrix of the matrix normal distribution
Prior precision matrix of the matrix normal distribution
Prior scale matrix of the inverse-Wishart distribution
Prior shape of the inverse-Wishart distribution
3 (the number of HAR terms; included for use by other functions)
Order for weekly term
Order for monthly term
Total number of observations
Include constant term (const) or not (none)
VHAR linear transformation matrix
Raw input (matrix)
Matched call
Process string in the bayes_spec: BVHAR_MN_VAR (BVHAR-S) or BVHAR_MN_VHAR (BVHAR-L)
Model specification (bvharspec)
It is also a normaliw and bvharmod class object.
Kim, Y. G., and Baek, C. (2024). Bayesian vector heterogeneous autoregressive modeling. Journal of Statistical Computation and Simulation, 94(6), 1139-1157.
set_bvhar() to specify the hyperparameters of BVHAR-S
set_weight_bvhar() to specify the hyperparameters of BVHAR-L
summary.normaliw() to summarize the BVHAR model
# Fit a Minnesota-prior BVHAR using the etf_vix dataset
fit <- bvhar_minnesota(y = etf_vix[, 1:3])
class(fit)
# Extract coefficients, fitted values, and residuals
coef(fit)
head(residuals(fit))
head(fitted(fit))
This function fits BVHAR with stochastic search variable selection (SSVS) prior.
bvhar_ssvs(
  y,
  har = c(5, 22),
  num_chains = 1,
  num_iter = 1000,
  num_burn = floor(num_iter/2),
  thinning = 1,
  bayes_spec = choose_ssvs(
    y = y, ord = har, type = "VHAR",
    param = c(0.1, 10), include_mean = include_mean,
    gamma_param = c(0.01, 0.01), mean_non = 0, sd_non = 0.1
  ),
  init_spec = init_ssvs(type = "auto"),
  include_mean = TRUE,
  minnesota = c("no", "short", "longrun"),
  verbose = FALSE,
  num_thread = 1
)

## S3 method for class 'bvharssvs'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'bvharssvs'
knit_print(x, ...)
y: Time series data whose columns indicate the variables
har: Numeric vector of weekly and monthly orders; by default, c(5, 22)
num_chains: Number of MCMC chains
num_iter: Number of MCMC iterations
num_burn: Number of warm-up (burn-in) iterations; half of the iterations is the default
thinning: Keep every thinning-th iteration
bayes_spec: An SSVS model specification by choose_ssvs()
init_spec: SSVS initialization specification by init_ssvs()
include_mean: Add constant term (default: TRUE)
minnesota: Apply cross-variable shrinkage structure (Minnesota-way): "no", "short", or "longrun"
verbose: Print the progress bar in the console (default: FALSE)
num_thread: Number of threads
x: A bvharssvs object
digits: Digit option for printing
...: Not used
The SSVS prior places spike-and-slab priors on the VHAR coefficients and on the residual covariance matrix through the elements of its upper triangular Cholesky factor. A Gibbs sampler is used for the estimation.
bvhar_ssvs() returns an object of class bvharssvs. It is a list with the following components:
Posterior mean of VAR coefficients.
Posterior mean of cholesky factor matrix
Posterior mean of covariance matrix
Posterior mean of omega
Posterior inclusion probability
posterior::draws_df with every variable: alpha, eta, psi, omega, and gamma
Name of every parameter.
Number of coefficients: 3m + 1 or 3m
3 (the number of HAR terms; included for use by other functions)
Order for weekly term
Order for monthly term
Dimension of the data
Sample size used when training: totobs - p
Total number of observations
Matched call
Description of the model, e.g. VHAR_SSVS
Include constant term (const) or not (none)
SSVS specification defined by set_ssvs()
Initial specification defined by init_ssvs()
The number of chains
Total iterations
Burn-in
Thinning
Indicators for group.
Number of groups.
VHAR linear transformation matrix
Raw input
Kim, Y. G., and Baek, C. (n.d.). Working paper.
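A sketch of an SSVS-prior VHAR fit (the bundled etf_vix dataset is assumed; bayes_spec and init_spec keep their defaults from the usage above):

```r
library(bvhar)
# SSVS-prior VHAR with long-run Minnesota-way shrinkage;
# a short chain for illustration only
fit <- bvhar_ssvs(
  y = etf_vix[, 1:3],
  num_iter = 500,
  minnesota = "longrun"
)
fit
```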
This function fits VHAR-SV. It can have Minnesota, SSVS, or Horseshoe priors. This function is deprecated; use vhar_bayes() with the cov_spec = set_sv() option instead.
bvhar_sv(
  y,
  har = c(5, 22),
  num_chains = 1,
  num_iter = 1000,
  num_burn = floor(num_iter/2),
  thinning = 1,
  bayes_spec = set_bvhar(),
  sv_spec = set_sv(),
  intercept = set_intercept(),
  include_mean = TRUE,
  minnesota = c("longrun", "short", "no"),
  save_init = FALSE,
  convergence = NULL,
  verbose = FALSE,
  num_thread = 1
)

## S3 method for class 'bvharsv'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'bvharsv'
knit_print(x, ...)
y |
Time series data of which columns indicate the variables |
har |
Numeric vector for weekly and monthly order. By default, |
num_chains |
Number of MCMC chains |
num_iter |
MCMC iteration number |
num_burn |
Number of burn-in (warm-up). Half of the iteration is the default choice. |
thinning |
Thinning every thinning-th iteration |
bayes_spec |
A BVHAR model specification by |
sv_spec |
SV specification by |
intercept |
Prior for the constant term by |
include_mean |
Add constant term (Default: |
minnesota |
Apply cross-variable shrinkage structure (Minnesota-way). Two type: |
save_init |
Save every record starting from the initial values ( |
convergence |
Convergence threshold for rhat < convergence. By default, |
verbose |
Print the progress bar in the console. By default, |
num_thread |
Number of threads |
x |
|
digits |
digit option to print |
... |
not used |
Cholesky stochastic volatility modeling for VHAR.
bvhar_sv() returns an object of bvharsv class. It is a list with the following components:
Posterior mean of coefficients.
Posterior mean of contemporaneous effects.
Every set of MCMC trace.
Name of every parameter.
Indicators for group.
Number of groups.
Number of coefficients: 3m + 1 or 3m
3 (The number of terms. It contains this element for usage in other functions.)
Order for weekly term
Order for monthly term
Dimension of the data
Sample size used when training: totobs - p
Total number of observations
Matched call
Description of the model, e.g. VHAR_SSVS_SV, VHAR_Horseshoe_SV, or VHAR_minnesota-part_SV
Include constant term (const) or not (none)
Coefficients prior specification
log volatility prior specification
Initial values
Intercept prior specification
The number of chains
Total iterations
Burn-in
Thinning
VHAR linear transformation matrix
Raw input
If it is SSVS or Horseshoe:
Posterior inclusion probabilities.
Kim, Y. G., and Baek, C. (n.d.). Working paper.
This function chooses the set of hyperparameters of a Bayesian model using the stats::optim() function.
choose_bayes( bayes_bound = bound_bvhar(), ..., eps = 1e-04, y, order = c(5, 22), include_mean = TRUE, parallel = list() )
bayes_bound |
Empirical Bayes optimization bound specification defined by |
... |
Additional arguments for |
eps |
Hyperparameter |
y |
Time series data |
order |
Order for BVAR or BVHAR. |
include_mean |
Add constant term (Default: |
parallel |
List the same argument of |
bvharemp
class is a list that has
Many components of stats::optim()
or optimParallel::optimParallel()
Corresponding bvharspec
Chosen Bayesian model
Marginal likelihood of the final model
Giannone, D., Lenza, M., & Primiceri, G. E. (2015). Prior Selection for Vector Autoregressions. Review of Economics and Statistics, 97(2).
Kim, Y. G., and Baek, C. (2024). Bayesian vector heterogeneous autoregressive modeling. Journal of Statistical Computation and Simulation, 94(6), 1139-1157.
bound_bvhar()
to define L-BFGS-B optimization bounds.
Individual functions: choose_bvar() and choose_bvhar().
Instead of these functions, you can use choose_bayes().
choose_bvar( bayes_spec = set_bvar(), lower = 0.01, upper = 10, ..., eps = 1e-04, y, p, include_mean = TRUE, parallel = list() ) choose_bvhar( bayes_spec = set_bvhar(), lower = 0.01, upper = 10, ..., eps = 1e-04, y, har = c(5, 22), include_mean = TRUE, parallel = list() ) ## S3 method for class 'bvharemp' print(x, digits = max(3L, getOption("digits") - 3L), ...) is.bvharemp(x) ## S3 method for class 'bvharemp' knit_print(x, ...)
bayes_spec |
Initial Bayes model specification. |
lower |
|
upper |
|
... |
not used |
eps |
Hyperparameter |
y |
Time series data |
p |
BVAR lag |
include_mean |
Add constant term (Default: |
parallel |
List the same argument of |
har |
Numeric vector for weekly and monthly order. By default, |
x |
|
digits |
digit option to print |
Empirical Bayes method maximizes the marginal likelihood and selects the set of hyperparameters.
These functions implement the L-BFGS-B method of stats::optim() to find the maximum of the marginal likelihood.
If you want to set the lower and upper options more carefully, handle them as in stats::optim(), following the order of the arguments of set_bvar(), set_bvhar(), or set_weight_bvhar() (except eps).
In other words, just arrange them in a vector.
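For example, assuming a bivariate series and the default set_bvhar() hyperparameters (sigma for each series, lambda, delta), the bounds could be arranged as vectors like this (a hypothetical sketch with illustrative values, not taken from the package manual):

```r
# Hypothetical sketch: lower/upper bounds arranged as vectors in the order
# of set_bvhar()'s hyperparameters for a bivariate series:
# sigma (x2), lambda, delta (x2). Values are illustrative only.
fit <- choose_bvhar(
  bayes_spec = set_bvhar(),
  lower = c(0.05, 0.05, 0.01, 0.01, 0.01),
  upper = c(2, 2, 2, 1, 1),
  y = etf_vix[, 1:2],
  har = c(5, 22)
)
```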
bvharemp
class is a list that has
chosen bvharspec
set
Bayesian model fit result with chosen specification
Many components of stats::optim()
or optimParallel::optimParallel()
Corresponding bvharspec
Chosen Bayesian model
Marginal likelihood of the final model
Byrd, R. H., Lu, P., Nocedal, J., & Zhu, C. (1995). A limited memory algorithm for bound constrained optimization. SIAM Journal on scientific computing, 16(5), 1190-1208.
Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2013). Bayesian data analysis. Chapman and Hall/CRC.
Giannone, D., Lenza, M., & Primiceri, G. E. (2015). Prior Selection for Vector Autoregressions. Review of Economics and Statistics, 97(2).
Kim, Y. G., and Baek, C. (2024). Bayesian vector heterogeneous autoregressive modeling. Journal of Statistical Computation and Simulation, 94(6), 1139-1157.
This function chooses the spike-and-slab standard deviations tau_0ij and tau_1ij using a default semiautomatic approach.
choose_ssvs( y, ord, type = c("VAR", "VHAR"), param = c(0.1, 10), include_mean = TRUE, gamma_param = c(0.01, 0.01), mean_non = 0, sd_non = 0.1 )
y |
Time series data of which columns indicate the variables. |
ord |
Order for VAR or VHAR. |
type |
Model type (Default: |
param |
Preselected constants |
include_mean |
Add constant term (Default: |
gamma_param |
Parameters (shape, rate) for Gamma distribution. This is for the output. |
mean_non |
Prior mean of unrestricted coefficients. This is for the output. |
sd_non |
Standard deviation of unrestricted coefficients. This is for the output. |
Instead of using subjective values of (tau_0ij, tau_1ij), we can use a semiautomatic approach: tau_0ij = c0 * se(coef_ij) and tau_1ij = c1 * se(coef_ij), where se(coef_ij) is the OLS standard error of each coefficient and (c0, c1) are the preselected constants (param).
It must be c0 < c1.
In case of VHAR, the scales are chosen similarly from the VHAR coefficient standard errors.
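A usage sketch with the default preselected constants param = c(0.1, 10), mirroring the Usage block above (etf_vix is the package's example dataset):

```r
# Semiautomatic choice of spike-and-slab scales for a VHAR model.
ssvs_spec <- choose_ssvs(
  y = etf_vix,
  ord = c(5, 22),
  type = "VHAR",
  param = c(0.1, 10)   # c0 (spike) and c1 (slab) preselected constants
)
```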
ssvsinput
object
George, E. I., & McCulloch, R. E. (1993). Variable Selection via Gibbs Sampling. Journal of the American Statistical Association, 88(423), 881-889.
George, E. I., Sun, D., & Ni, S. (2008). Bayesian stochastic search for VAR model restrictions. Journal of Econometrics, 142(1), 553-580.
Koop, G., & Korobilis, D. (2009). Bayesian Multivariate Time Series Methods for Empirical Macroeconomics. Foundations and Trends® in Econometrics, 3(4), 267-358.
This function computes AIC, FPE, BIC, and HQ up to p = lag_max
of VAR model.
choose_var(y, lag_max = 5, include_mean = TRUE, parallel = FALSE)
y |
Time series data of which columns indicate the variables |
lag_max |
Maximum VAR lag to explore (default = 5) |
include_mean |
Add constant term (Default: |
parallel |
Parallel computation using |
Minimum order and information criteria values
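A usage sketch on the package's example dataset:

```r
# Compare AIC, FPE, BIC, and HQ for VAR(1), ..., VAR(5).
choose_var(etf_vix, lag_max = 5, include_mean = TRUE)
```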
By defining stats::coef()
for each model, this function returns coefficient matrix estimates.
## S3 method for class 'varlse' coef(object, ...) ## S3 method for class 'vharlse' coef(object, ...) ## S3 method for class 'bvarmn' coef(object, ...) ## S3 method for class 'bvarflat' coef(object, ...) ## S3 method for class 'bvharmn' coef(object, ...) ## S3 method for class 'bvharsp' coef(object, ...) ## S3 method for class 'summary.bvharsp' coef(object, ...)
object |
Model object |
... |
not used |
matrix object with appropriate dimension.
Compute DIC of BVAR and BVHAR.
compute_dic(object, ...) ## S3 method for class 'bvarmn' compute_dic(object, n_iter = 100L, ...)
object |
Model fit |
... |
not used |
n_iter |
Number to sample |
Deviance information criterion (DIC) is
DIC = -2 log p(y | theta_hat_Bayes) + 2 p_DIC,
where p_DIC is the effective number of parameters defined by
p_DIC = 2 [log p(y | theta_hat_Bayes) - E_post log p(y | theta)].
Random sampling from the posterior distribution gives its computation.
DIC value.
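The computation can be sketched in base R from posterior draws, assuming a vector of log-likelihood values evaluated at each draw and at the posterior mean (all numbers below are made up for illustration; this is not the package internals):

```r
# DIC from posterior draws: DIC = -2 log p(y | theta_hat) + 2 * p_DIC.
loglik_draws <- c(-102.1, -101.7, -103.0, -101.9)  # log p(y | theta^(s))
loglik_at_mean <- -101.5                           # log p(y | posterior mean)
p_dic <- 2 * (loglik_at_mean - mean(loglik_draws)) # effective no. of parameters
dic <- -2 * loglik_at_mean + 2 * p_dic
```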
Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2013). Bayesian data analysis. Chapman and Hall/CRC.
Spiegelhalter, D.J., Best, N.G., Carlin, B.P. and Van Der Linde, A. (2002). Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64: 583-639.
Compute log of marginal likelihood of Bayesian Fit
compute_logml(object, ...) ## S3 method for class 'bvarmn' compute_logml(object, ...) ## S3 method for class 'bvharmn' compute_logml(object, ...)
object |
Model fit |
... |
not used |
The closed form of the marginal likelihood of BVAR can be derived from the conjugate Matrix-Normal inverse-Wishart prior; the closed form for BVHAR follows in the same way (Giannone et al. 2015).
Log marginal likelihood of the Minnesota prior model.
Giannone, D., Lenza, M., & Primiceri, G. E. (2015). Prior Selection for Vector Autoregressions. Review of Economics and Statistics, 97(2).
This function computes false discovery rate (FDR) for sparse element of the true coefficients given threshold.
conf_fdr(x, y, ...) ## S3 method for class 'summary.bvharsp' conf_fdr(x, y, truth_thr = 0, ...)
x |
|
y |
True inclusion variable. |
... |
not used |
truth_thr |
Threshold value when using non-sparse true coefficient matrix. By default, |
When using this function, the true coefficient matrix should be sparse.
False discovery rate (FDR) is computed by
FDR = FP / (TP + FP),
where TP is true positive, and FP is false positive.
FDR value in confusion table
Bai, R., & Ghosh, M. (2018). High-dimensional multivariate posterior consistency under global-local shrinkage priors. Journal of Multivariate Analysis, 167, 157-170.
This function computes false negative rate (FNR) for sparse element of the true coefficients given threshold.
conf_fnr(x, y, ...) ## S3 method for class 'summary.bvharsp' conf_fnr(x, y, truth_thr = 0, ...)
x |
|
y |
True inclusion variable. |
... |
not used |
truth_thr |
Threshold value when using non-sparse true coefficient matrix. By default, |
False negative rate (FNR) is computed by
FNR = FN / (TP + FN),
where TP is true positive, and FN is false negative.
FNR value in confusion table
Bai, R., & Ghosh, M. (2018). High-dimensional multivariate posterior consistency under global-local shrinkage priors. Journal of Multivariate Analysis, 167, 157-170.
This function computes F1 score for sparse element of the true coefficients given threshold.
conf_fscore(x, y, ...) ## S3 method for class 'summary.bvharsp' conf_fscore(x, y, truth_thr = 0, ...)
x |
|
y |
True inclusion variable. |
... |
not used |
truth_thr |
Threshold value when using non-sparse true coefficient matrix. By default, |
The F1 score is computed by F1 = 2 * precision * recall / (precision + recall).
F1 score in confusion table
This function computes precision for sparse element of the true coefficients given threshold.
conf_prec(x, y, ...) ## S3 method for class 'summary.bvharsp' conf_prec(x, y, truth_thr = 0, ...)
x |
|
y |
True inclusion variable. |
... |
not used |
truth_thr |
Threshold value when using non-sparse true coefficient matrix. By default, |
If an element of the estimate is smaller than some threshold, it is treated as zero.
Then the precision is computed by
precision = TP / (TP + FP),
where TP is true positive, and FP is false positive.
Precision value in confusion table
Bai, R., & Ghosh, M. (2018). High-dimensional multivariate posterior consistency under global-local shrinkage priors. Journal of Multivariate Analysis, 167, 157-170.
This function computes recall for sparse element of the true coefficients given threshold.
conf_recall(x, y, ...) ## S3 method for class 'summary.bvharsp' conf_recall(x, y, truth_thr = 0L, ...)
x |
|
y |
True inclusion variable. |
... |
not used |
truth_thr |
Threshold value when using non-sparse true coefficient matrix. By default, |
Recall is computed by
recall = TP / (TP + FN),
where TP is true positive, and FN is false negative.
Recall value in confusion table
Bai, R., & Ghosh, M. (2018). High-dimensional multivariate posterior consistency under global-local shrinkage priors. Journal of Multivariate Analysis, 167, 157-170.
This function computes FDR (false discovery rate) and FNR (false negative rate) for sparse element of the true coefficients given threshold.
confusion(x, y, ...) ## S3 method for class 'summary.bvharsp' confusion(x, y, truth_thr = 0, ...)
x |
|
y |
True inclusion variable. |
... |
not used |
truth_thr |
Threshold value when using non-sparse true coefficient matrix. By default, |
When using this function, the true coefficient matrix should be sparse.
In this confusion matrix, positive (0) means sparsity. TP is true positive, FP is false positive, FN is false negative, and TN is true negative.
Confusion table as following.
True-estimate | Positive (0) | Negative (1) |
Positive (0) | TP | FN |
Negative (1) | FP | TN |
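The table and the related measures (FDR, FNR, precision, recall, F1) can be computed by hand in base R. A self-contained sketch with toy matrices (not the package implementation; as above, the positive class is a zero entry):

```r
# Toy sparse truth vs. estimate; positive class = zero (sparsity).
est <- matrix(c(0.9, 0, 0.4, 0, 0, 0.7, 0, 0.2, 0), nrow = 3)
truth <- matrix(c(1, 0, 0.5, 0, 0, 1, 0, 0, 0), nrow = 3)
est_zero <- est == 0
true_zero <- truth == 0
tp <- sum(est_zero & true_zero)   # correctly found zeros
fp <- sum(est_zero & !true_zero)  # zeros that are actually nonzero
fn <- sum(!est_zero & true_zero)  # missed zeros
tn <- sum(!est_zero & !true_zero)
fdr <- fp / (tp + fp)
fnr <- fn / (tp + fn)
prec <- tp / (tp + fp)
rec <- tp / (tp + fn)
f1 <- 2 * prec * rec / (prec + rec)
```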
Bai, R., & Ghosh, M. (2018). High-dimensional multivariate posterior consistency under global-local shrinkage priors. Journal of Multivariate Analysis, 167, 157-170.
Split a given time series dataset into train and test set for evaluation.
divide_ts(y, n_ahead)
y |
Time series data of which columns indicate the variables |
n_ahead |
step to evaluate |
List of two datasets, train and test.
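A usage sketch holding out the last 19 observations (the train/test component names follow the Value description above):

```r
# Split the example dataset into train and test sets.
h <- 19
split <- divide_ts(etf_vix, n_ahead = h)
train <- split$train   # all but the last 19 rows
test <- split$test     # last 19 rows, for evaluation
```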
This function gives the connectedness table with the h-step-ahead normalized spillover index (a.k.a. variance shares) over time.
dynamic_spillover(object, n_ahead = 10L, ...) ## S3 method for class 'bvhardynsp' print(x, digits = max(3L, getOption("digits") - 3L), ...) ## S3 method for class 'bvhardynsp' knit_print(x, ...) ## S3 method for class 'olsmod' dynamic_spillover(object, n_ahead = 10L, window, num_thread = 1, ...) ## S3 method for class 'normaliw' dynamic_spillover( object, n_ahead = 10L, window, num_iter = 1000L, num_burn = floor(num_iter/2), thinning = 1, num_thread = 1, ... ) ## S3 method for class 'ldltmod' dynamic_spillover( object, n_ahead = 10L, window, sparse = FALSE, num_thread = 1, ... ) ## S3 method for class 'svmod' dynamic_spillover(object, n_ahead = 10L, sparse = FALSE, num_thread = 1, ...)
object |
Model object |
n_ahead |
step to forecast. By default, 10. |
... |
not used |
x |
|
digits |
digit option to print |
window |
Window size |
num_thread |
|
num_iter |
Number to sample MNIW distribution |
num_burn |
Number of burn-in |
thinning |
Thinning every thinning-th iteration |
sparse |
Diebold, F. X., & Yilmaz, K. (2012). Better to give than to receive: Predictive directional measurement of volatility spillovers. International Journal of forecasting, 28(1), 57-66.
Chicago Board Options Exchange (CBOE) Exchange Traded Funds (ETFs) volatility index from FRED.
etf_vix
etf_vix
A data frame of 1006 rows and 9 columns:
From 2012-01-09 to 2015-06-27, 33 missing observations were interpolated by stats::approx() with the linear method.
Gold ETF volatility index
China ETF volatility index
Crude Oil ETF volatility index
Emerging Markets ETF volatility index
EuroCurrency ETF volatility index
Silver ETF volatility index
Gold Miners ETF volatility index
Energy Sector ETF volatility index
Brazil ETF volatility index
Copyright, 2016, Chicago Board Options Exchange, Inc.
Note that in this data frame the dates column is removed.
36 missing observations (nontrading dates) were interpolated using imputeTS::na_interpolation().
Source: https://www.cboe.com
Release: https://www.cboe.com/us/options/market_statistics/daily/
Chicago Board Options Exchange, CBOE Gold ETF Volatility Index (GVZCLS), retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/GVZCLS, July 31, 2021.
Chicago Board Options Exchange, CBOE China ETF Volatility Index (VXFXICLS), retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/VXFXICLS, August 1, 2021.
Chicago Board Options Exchange, CBOE Crude Oil ETF Volatility Index (OVXCLS), retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/OVXCLS, August 1, 2021.
Chicago Board Options Exchange, CBOE Emerging Markets ETF Volatility Index (VXEEMCLS), retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/VXEEMCLS, August 1, 2021.
Chicago Board Options Exchange, CBOE EuroCurrency ETF Volatility Index (EVZCLS), retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/EVZCLS, August 2, 2021.
Chicago Board Options Exchange, CBOE Silver ETF Volatility Index (VXSLVCLS), retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/VXSLVCLS, August 1, 2021.
Chicago Board Options Exchange, CBOE Gold Miners ETF Volatility Index (VXGDXCLS), retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/VXGDXCLS, August 1, 2021.
Chicago Board Options Exchange, CBOE Energy Sector ETF Volatility Index (VXXLECLS), retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/VXXLECLS, August 1, 2021.
Chicago Board Options Exchange, CBOE Brazil ETF Volatility Index (VXEWZCLS), retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/VXEWZCLS, August 2, 2021.
By defining stats::fitted()
for each model, this function returns fitted matrix.
## S3 method for class 'varlse' fitted(object, ...) ## S3 method for class 'vharlse' fitted(object, ...) ## S3 method for class 'bvarmn' fitted(object, ...) ## S3 method for class 'bvarflat' fitted(object, ...) ## S3 method for class 'bvharmn' fitted(object, ...)
object |
Model object |
... |
not used |
matrix object.
This function conducts expanding window forecasting.
forecast_expand(object, n_ahead, y_test, num_thread = 1, ...) ## S3 method for class 'olsmod' forecast_expand(object, n_ahead, y_test, num_thread = 1, ...) ## S3 method for class 'normaliw' forecast_expand(object, n_ahead, y_test, num_thread = 1, use_fit = TRUE, ...) ## S3 method for class 'ldltmod' forecast_expand( object, n_ahead, y_test, num_thread = 1, level = 0.05, sparse = FALSE, lpl = FALSE, use_fit = TRUE, ... ) ## S3 method for class 'svmod' forecast_expand( object, n_ahead, y_test, num_thread = 1, level = 0.05, use_sv = TRUE, sparse = FALSE, lpl = FALSE, use_fit = TRUE, ... )
object |
Model object |
n_ahead |
Step to forecast in rolling window scheme |
y_test |
Test data to be compared. Use |
num_thread |
|
... |
Additional arguments. |
use_fit |
|
level |
Specify alpha of confidence interval level 100(1 - alpha) percentage. By default, .05. |
sparse |
|
lpl |
|
use_sv |
Use SV term |
Expanding window forecasting fixes the starting period.
It moves the window ahead and forecasts h steps ahead in the y_test set.
predbvhar_expand
class
Hyndman, R. J., & Athanasopoulos, G. (2021). Forecasting: Principles and practice (3rd ed.). OTexts. https://otexts.com/fpp3/
This function conducts rolling window forecasting.
forecast_roll(object, n_ahead, y_test, num_thread = 1, ...) ## S3 method for class 'bvharcv' print(x, digits = max(3L, getOption("digits") - 3L), ...) is.bvharcv(x) ## S3 method for class 'bvharcv' knit_print(x, ...) ## S3 method for class 'olsmod' forecast_roll(object, n_ahead, y_test, num_thread = 1, ...) ## S3 method for class 'normaliw' forecast_roll(object, n_ahead, y_test, num_thread = 1, use_fit = TRUE, ...) ## S3 method for class 'ldltmod' forecast_roll( object, n_ahead, y_test, num_thread = 1, level = 0.05, sparse = FALSE, lpl = FALSE, use_fit = TRUE, ... ) ## S3 method for class 'svmod' forecast_roll( object, n_ahead, y_test, num_thread = 1, level = 0.05, use_sv = TRUE, sparse = FALSE, lpl = FALSE, use_fit = TRUE, ... )
object |
Model object |
n_ahead |
Step to forecast in rolling window scheme |
y_test |
Test data to be compared. Use |
num_thread |
|
... |
not used |
x |
|
digits |
digit option to print |
use_fit |
|
level |
Specify alpha of confidence interval level 100(1 - alpha) percentage. By default, .05. |
sparse |
|
lpl |
|
use_sv |
Use SV term |
Rolling window forecasting fixes the window size.
It moves the window ahead and forecasts h steps ahead in the y_test set.
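A usage sketch combining divide_ts() and a VHAR fit (the mse() call assumes the loss helpers documented elsewhere on this page):

```r
# Rolling one-step-ahead forecasts over the held-out test set.
split <- divide_ts(etf_vix, n_ahead = 19)
fit <- vhar_lm(split$train)
cv <- forecast_roll(fit, n_ahead = 1, y_test = split$test)
mse(cv, split$test)   # out-of-sample loss per variable
```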
predbvhar_roll
class
Hyndman, R. J., & Athanasopoulos, G. (2021). Forecasting: Principles and practice (3rd ed.). OTexts.
Compute FPE of VAR(p) and VHAR
FPE(object, ...) ## S3 method for class 'varlse' FPE(object, ...) ## S3 method for class 'vharlse' FPE(object, ...)
object |
Model fit |
... |
not used |
Let Sigma_tilde be the MLE and let Sigma_hat be the unbiased estimator (covmat) for Sigma, where k is the number of coefficients per equation and n the sample size. Note that
Sigma_hat = n / (n - k) * Sigma_tilde.
Then
FPE = ((n + k) / (n - k))^m * det(Sigma_tilde).
FPE value.
Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer Publishing.
This function computes estimation error given estimated model and true coefficient.
fromse(x, y, ...) ## S3 method for class 'bvharsp' fromse(x, y, ...)
x |
Estimated model. |
y |
Coefficient matrix to be compared. |
... |
not used |
Consider the Frobenius norm ||.||_F.
Let Phi_hat be the nrow x k matrix of estimates, and let Phi be the true coefficient matrix.
Then the function computes the estimation error by the Frobenius norm of the difference, ||Phi_hat - Phi||_F.
Frobenius norm value
Bai, R., & Ghosh, M. (2018). High-dimensional multivariate posterior consistency under global-local shrinkage priors. Journal of Multivariate Analysis, 167, 157-170.
This function adds a layer of test dataset.
geom_eval(data, colour = "red", ...)
data |
Test data to draw, which has the same format with the train data. |
colour |
Color of the line (By default, |
... |
Other arguments passed on the |
A ggplot layer
Draw plot of test error for given models
gg_loss( mod_list, y, type = c("mse", "mae", "mape", "mase"), mean_line = FALSE, line_param = list(), mean_param = list(), viridis = FALSE, viridis_option = "D", NROW = NULL, NCOL = NULL, ... )
mod_list |
Lists of forecast results ( |
y |
Test data to be compared. Should be in the same format as the train data and predict$forecast. |
type |
Loss function to be used ( |
mean_line |
Whether to draw average loss. By default, |
line_param |
Parameter lists for |
mean_param |
Parameter lists for average loss with |
viridis |
If |
viridis_option |
Option for viridis string. See |
NROW |
|
NCOL |
|
... |
Additional options for |
A ggplot object
mse()
to compute MSE for given forecast result
mae()
to compute MAE for given forecast result
mape()
to compute MAPE for given forecast result
mase()
to compute MASE for given forecast result
Compute HQ of VAR(p), VHAR, BVAR(p), and BVHAR
HQ(object, ...) ## S3 method for class 'logLik' HQ(object, ...) ## S3 method for class 'varlse' HQ(object, ...) ## S3 method for class 'vharlse' HQ(object, ...) ## S3 method for class 'bvarmn' HQ(object, ...) ## S3 method for class 'bvarflat' HQ(object, ...) ## S3 method for class 'bvharmn' HQ(object, ...)
object |
A |
... |
not used |
The formula is
HQ = -2 log p(y | theta_hat) + 2 * k * log(log(n)),
where k is the number of estimated parameters, which can be computed by
AIC(object, ..., k = 2 * log(log(nobs(object))))
with stats::AIC().
Let Sigma_tilde be the MLE and let Sigma_hat be the unbiased estimator (covmat) for Sigma. Note that
Sigma_hat = n / (n - k) * Sigma_tilde.
Then
HQ(p) = log det(Sigma_tilde) + (2 log(log(n)) / n) * p * m^2,
where the number of freely estimated parameters is p * m^2.
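The stats::AIC() penalty trick in a usage sketch (the fit only needs logLik() and nobs() methods, e.g. a var_lm() fit):

```r
# HQ via AIC with penalty k = 2 * log(log(n)).
fit <- var_lm(etf_vix, p = 2)
AIC(fit, k = 2 * log(log(nobs(fit))))
```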
HQ value.
Hannan, E.J. and Quinn, B.G. (1979). The Determination of the Order of an Autoregression. Journal of the Royal Statistical Society: Series B (Methodological), 41: 190-195.
Hannan, E.J. and Quinn, B.G. (1979). The Determination of the Order of an Autoregression. Journal of the Royal Statistical Society: Series B (Methodological), 41: 190-195.
Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer Publishing.
Quinn, B.G. (1980). Order Determination for a Multivariate Autoregression. Journal of the Royal Statistical Society: Series B (Methodological), 42: 182-185.
Set initial parameters before starting Gibbs sampler for SSVS.
init_ssvs( init_coef, init_coef_dummy, init_chol, init_chol_dummy, type = c("user", "auto") ) ## S3 method for class 'ssvsinit' print(x, digits = max(3L, getOption("digits") - 3L), ...) is.ssvsinit(x) ## S3 method for class 'ssvsinit' knit_print(x, ...)
Set SSVS initialization for the VAR model.
init_coef
: (kp + 1) x m coefficient matrix.
init_coef_dummy
: kp x m dummy matrix to restrict the coefficients.
init_chol
: k x k upper triangular Cholesky factor Psi, which satisfies Sigma^(-1) = Psi * t(Psi).
init_chol_dummy
: k x k upper triangular dummy matrix to restrict the cholesky factor.
Note that init_chol and init_chol_dummy should be upper triangular, or the function gives an error.
For parallel chain initialization, assign three-dimensional array or three-length list.
ssvsinit
object
George, E. I., & McCulloch, R. E. (1993). Variable Selection via Gibbs Sampling. Journal of the American Statistical Association, 88(423), 881-889.
George, E. I., Sun, D., & Ni, S. (2008). Bayesian stochastic search for VAR model restrictions. Journal of Econometrics, 142(1), 553-580.
Koop, G., & Korobilis, D. (2009). Bayesian Multivariate Time Series Methods for Empirical Macroeconomics. Foundations and Trends® in Econometrics, 3(4), 267-358.
Computes responses to impulses or orthogonal impulses
## S3 method for class 'varlse' irf(object, lag_max = 10, orthogonal = TRUE, impulse_var, response_var, ...) ## S3 method for class 'vharlse' irf(object, lag_max = 10, orthogonal = TRUE, impulse_var, response_var, ...) ## S3 method for class 'bvharirf' print(x, digits = max(3L, getOption("digits") - 3L), ...) irf(object, lag_max, orthogonal, impulse_var, response_var, ...) is.bvharirf(x) ## S3 method for class 'bvharirf' knit_print(x, ...)
object |
Model object |
lag_max |
Maximum lag to investigate the impulse responses (By default, |
orthogonal |
Orthogonal impulses ( |
impulse_var |
Impulse variables character vector. If not specified, use every variable. |
response_var |
Response variables character vector. If not specified, use every variable. |
... |
not used |
x |
|
digits |
digit option to print |
bvharirf
class
If orthogonal = FALSE, the function gives the VMA representation of the process,
y_t = sum_{j >= 0} Phi_j eps_(t - j).
If orthogonal = TRUE, it gives the orthogonalized VMA representation. Based on the variance decomposition (Cholesky decomposition)
Sigma = P * t(P),
where P is a lower triangular matrix, impulse response analysis is performed under the MA representation
y_t = sum_{j >= 0} Theta_j u_(t - j), with Theta_j = Phi_j P.
Here, u_t = P^(-1) eps_t, and the components of u_t are orthogonal.
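A usage sketch for orthogonalized impulse responses from a VAR(2) fit, drawn with the autoplot() method documented above:

```r
# Orthogonalized impulse responses up to lag 10.
fit <- var_lm(etf_vix, p = 2)
irf_res <- irf(fit, lag_max = 10, orthogonal = TRUE)
autoplot(irf_res)   # facets of response ~ impulse
```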
Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer Publishing.
Check the stability condition of coefficient matrix.
is.stable(x, ...) ## S3 method for class 'varlse' is.stable(x, ...) ## S3 method for class 'vharlse' is.stable(x, ...) ## S3 method for class 'bvarmn' is.stable(x, ...) ## S3 method for class 'bvarflat' is.stable(x, ...) ## S3 method for class 'bvharmn' is.stable(x, ...)
x |
Model fit |
... |
not used |
VAR(p) is stable if
det(I_m - A_1 z - ... - A_p z^p) != 0
for |z| <= 1.
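The condition is equivalent to all eigenvalues of the companion matrix lying inside the unit circle, which can be checked directly in base R (a hypothetical sketch with toy coefficients, not the package implementation):

```r
# Stability via the companion matrix of a VAR(p).
A <- list(matrix(c(0.5, 0.1, 0, 0.4), nrow = 2),  # A_1 (toy values)
          matrix(c(0.2, 0, 0.1, 0.1), nrow = 2))  # A_2 (toy values)
m <- nrow(A[[1]])
p <- length(A)
companion <- rbind(do.call(cbind, A),
                   cbind(diag(m * (p - 1)), matrix(0, m * (p - 1), m)))
stable <- all(Mod(eigen(companion)$values) < 1)
```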
logical class
Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer Publishing.
This function computes MAE given prediction result versus evaluation set.
mae(x, y, ...)

## S3 method for class 'predbvhar'
mae(x, y, ...)

## S3 method for class 'bvharcv'
mae(x, y, ...)
x |
Forecasting object |
y |
Test data to be compared. It should have the same format as the training data. |
... |
not used |
Let e_t = y_t − ŷ_t be the forecast error.
MAE is defined by

MAE = mean(|e_t|).
Some researchers prefer MAE to MSE because it is less sensitive to outliers.
MAE vector corresponding to each variable.
Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), 679-688.
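The definition above can be sketched in base R for a multivariate forecast; mae_by_var is an illustrative helper, not the package's mae() method.

```r
# Column-wise MAE: mean absolute forecast error per variable.
# y_pred, y_test: h x m matrices of forecasts and held-out observations.
mae_by_var <- function(y_pred, y_test) {
  colMeans(abs(y_test - y_pred))
}
```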
This function computes MAPE given forecast results and an evaluation set.
mape(x, y, ...)

## S3 method for class 'predbvhar'
mape(x, y, ...)

## S3 method for class 'bvharcv'
mape(x, y, ...)
x |
Forecasting object |
y |
Test data to be compared. It should have the same format as the training data. |
... |
not used |
Let e_t = y_t − ŷ_t be the forecast error.
The percentage error is defined by

p_t = 100 · e_t / y_t

(the factor 100 can be omitted since comparison is the focus). Then

MAPE = mean(|p_t|).
MAPE vector corresponding to each variable.
Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), 679-688.
This function computes MASE given forecast results and an evaluation set.
mase(x, y, ...)

## S3 method for class 'predbvhar'
mase(x, y, ...)

## S3 method for class 'bvharcv'
mase(x, y, ...)
x |
Forecasting object |
y |
Test data to be compared. It should have the same format as the training data. |
... |
not used |
Let e_t = y_t − ŷ_t be the forecast error.
The scaled error is defined by

q_t = e_t / ( (1 / (n − 1)) Σ_{i=2}^{n} |y_i − y_{i−1}| ),

so that the error is free of the data scale. Then

MASE = mean(|q_t|).

Here, y_1, …, y_n are the in-sample points, i.e. the errors are scaled by the in-sample mean absolute error (MAE) from the naive random-walk forecast.
MASE vector corresponding to each variable.
Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), 679-688.
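The scaling step is what distinguishes MASE from MAE; it can be sketched in base R as follows. mase_by_var is an illustrative helper (not the package's mase() method), using the training data to form the naive random-walk scale.

```r
# Column-wise MASE: forecast errors scaled by the in-sample MAE of the
# naive (random-walk) forecast computed from the training data.
mase_by_var <- function(y_pred, y_test, y_train) {
  scale <- colMeans(abs(diff(y_train)))   # mean |y_i - y_{i-1}| per variable
  colMeans(abs(y_test - y_pred)) / scale
}
```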
This function computes MRAE given forecast results and an evaluation set.
mrae(x, pred_bench, y, ...)

## S3 method for class 'predbvhar'
mrae(x, pred_bench, y, ...)

## S3 method for class 'bvharcv'
mrae(x, pred_bench, y, ...)
x |
Forecasting object to use |
pred_bench |
The same forecasting object from benchmark model |
y |
Test data to be compared. It should have the same format as the training data. |
... |
not used |
Let e_t = y_t − ŷ_t be the forecast error.
MRAE uses a benchmark model for scaling.
The relative error is defined by

r_t = e_t / e_t^*,

where e_t^* is the error from the benchmark method. Then

MRAE = mean(|r_t|).
MRAE vector corresponding to each variable.
Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), 679-688.
This function computes MSE given forecast results and an evaluation set.
mse(x, y, ...)

## S3 method for class 'predbvhar'
mse(x, y, ...)

## S3 method for class 'bvharcv'
mse(x, y, ...)
x |
Forecasting object |
y |
Test data to be compared. It should have the same format as the training data. |
... |
not used |
Let e_t = y_t − ŷ_t be the forecast error. Then

MSE = mean(e_t²).

MSE is the most commonly used accuracy measure.
MSE vector corresponding to each variable.
Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), 679-688.
Forecasts multivariate time series using the given model.
## S3 method for class 'varlse'
predict(object, n_ahead, level = 0.05, ...)

## S3 method for class 'vharlse'
predict(object, n_ahead, level = 0.05, ...)

## S3 method for class 'bvarmn'
predict(object, n_ahead, n_iter = 100L, level = 0.05, num_thread = 1, ...)

## S3 method for class 'bvharmn'
predict(object, n_ahead, n_iter = 100L, level = 0.05, num_thread = 1, ...)

## S3 method for class 'bvarflat'
predict(object, n_ahead, n_iter = 100L, level = 0.05, num_thread = 1, ...)

## S3 method for class 'bvarssvs'
predict(object, n_ahead, level = 0.05, ...)

## S3 method for class 'bvharssvs'
predict(object, n_ahead, level = 0.05, ...)

## S3 method for class 'bvarhs'
predict(object, n_ahead, level = 0.05, ...)

## S3 method for class 'bvharhs'
predict(object, n_ahead, level = 0.05, ...)

## S3 method for class 'bvarldlt'
predict(object, n_ahead, level = 0.05, num_thread = 1, sparse = FALSE, warn = FALSE, ...)

## S3 method for class 'bvharldlt'
predict(object, n_ahead, level = 0.05, num_thread = 1, sparse = FALSE, warn = FALSE, ...)

## S3 method for class 'bvarsv'
predict(object, n_ahead, level = 0.05, num_thread = 1, use_sv = TRUE, sparse = FALSE, warn = FALSE, ...)

## S3 method for class 'bvharsv'
predict(object, n_ahead, level = 0.05, num_thread = 1, use_sv = TRUE, sparse = FALSE, warn = FALSE, ...)

## S3 method for class 'predbvhar'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

is.predbvhar(x)

## S3 method for class 'predbvhar'
knit_print(x, ...)
predbvhar
class with the following components:
object$process
forecast matrix
standard error matrix
lower confidence interval
upper confidence interval
lower CI adjusted (Bonferroni)
upper CI adjusted (Bonferroni)
object$y
See p. 35 of Lütkepohl (2007). Consider h-step ahead forecasting (e.g. n + 1, …, n + h).

The one-step ahead (point) forecast is

ŷ_{n+1|n} = c + A_1 y_n + A_2 y_{n−1} + ⋯ + A_p y_{n−p+1}.

Recursively, the two-step ahead (point) forecast is

ŷ_{n+2|n} = c + A_1 ŷ_{n+1|n} + A_2 y_n + ⋯ + A_p y_{n−p+2}.

Similarly, the h-step ahead (point) forecast is

ŷ_{n+h|n} = c + A_1 ŷ_{n+h−1|n} + ⋯ + A_p ŷ_{n+h−p|n},

where ŷ_{n+j|n} = y_{n+j} for j ≤ 0.

What about the confidence region? The confidence interval at horizon h for the k-th variable is

ŷ_{k,n+h|n} ± z_{α/2} σ_k(h),

where σ_k(h) is the square root of the k-th diagonal element of the forecast MSE matrix Σ_y(h). A joint forecast region at level 100(1 − α)% can be computed by a Bonferroni adjustment, replacing α/2 with α/(2h). See p. 41 of Lütkepohl (2007).

Computing this covariance matrix requires the VMA representation

y_t = μ + Σ_{j=0}^{∞} W_j ε_{t−j}, with W_0 = I_m.

Then

Σ_y(h) = Σ_{j=0}^{h−1} W_j Σ_ε W_jᵀ.

For VHAR, let T_HAR be the VHAR linear transformation matrix. Since VHAR is a linearly transformed VAR(22), the same recursion applies to its VAR(22) representation: the one-step, two-step, and h-step ahead (point) forecasts are computed exactly as above, with the VAR(22) coefficients obtained from the HAR coefficients through T_HAR.
Point forecasts are computed by posterior mean of the parameters. See Section 3 of Bańbura et al. (2010).
For BVAR and BVHAR with Minnesota priors, let the posterior matrix-normal (MN) mean and the posterior MN precision be given. The predictive posterior at each step is built from these posterior moments, and later horizons follow recursively by plugging earlier forecasts into the same recursion.
For the MCMC-based models, the point estimate is computed in the same way; the predictive intervals, however, are obtained from the draws of each Gibbs sampler.
Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer Publishing.
Corsi, F. (2008). A Simple Approximate Long-Memory Model of Realized Volatility. Journal of Financial Econometrics, 7(2), 174-196.
Baek, C. and Park, M. (2021). Sparse vector heterogeneous autoregressive modeling for realized volatility. J. Korean Stat. Soc. 50, 495-510.
Bańbura, M., Giannone, D., & Reichlin, L. (2010). Large Bayesian vector auto regressions. Journal of Applied Econometrics, 25(1).
Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2013). Bayesian data analysis. Chapman and Hall/CRC.
Karlsson, S. (2013). Chapter 15 Forecasting with Bayesian Vector Autoregression. Handbook of Economic Forecasting, 2, 791-897.
Litterman, R. B. (1986). Forecasting with Bayesian Vector Autoregressions: Five Years of Experience. Journal of Business & Economic Statistics, 4(1), 25.
Ghosh, S., Khare, K., & Michailidis, G. (2018). High-Dimensional Posterior Consistency in Bayesian Vector Autoregressive Models. Journal of the American Statistical Association, 114(526).
George, E. I., Sun, D., & Ni, S. (2008). Bayesian stochastic search for VAR model restrictions. Journal of Econometrics, 142(1), 553-580.
Korobilis, D. (2013). VAR FORECASTING USING BAYESIAN VARIABLE SELECTION. Journal of Applied Econometrics, 28(2).
Huber, F., Koop, G., & Onorante, L. (2021). Inducing Sparsity and Shrinkage in Time-Varying Parameter Models. Journal of Business & Economic Statistics, 39(3), 669-683.
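The recursive point-forecast scheme described above can be sketched in base R (constant term omitted for brevity). This is an illustrative textbook recursion, not the package's predict() implementation; forecast_var is a hypothetical name.

```r
# h-step ahead point forecasts of a VAR(p) by the recursion
# yhat_{n+j} = A_1 yhat_{n+j-1} + ... + A_p yhat_{n+j-p},
# where yhat_{n+j} = y_{n+j} for j <= 0 (observed values).
forecast_var <- function(A, y, h) {
  # A: list of m x m coefficient matrices; y: n x m data matrix
  m <- ncol(y)
  p <- length(A)
  n <- nrow(y)
  hist <- rbind(y, matrix(NA_real_, h, m))  # observed rows, then forecasts
  for (j in seq_len(h)) {
    pred <- numeric(m)
    for (i in seq_len(p)) {
      pred <- pred + A[[i]] %*% hist[n + j - i, ]
    }
    hist[n + j, ] <- pred
  }
  hist[n + seq_len(h), , drop = FALSE]
}
```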
Conduct variable selection.
## S3 method for class 'summary.bvharsp'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'summary.bvharsp'
knit_print(x, ...)

## S3 method for class 'ssvsmod'
summary(object, method = c("pip", "ci"), threshold = 0.5, level = 0.05, ...)

## S3 method for class 'hsmod'
summary(object, method = c("pip", "ci"), threshold = 0.5, level = 0.05, ...)

## S3 method for class 'ngmod'
summary(object, level = 0.05, ...)
x |
summary.bvharsp object |
digits |
digit option to print |
... |
not used |
object |
Model fit |
method |
Use PIP (pip) or credible interval (ci) to select coefficients |
threshold |
Threshold for posterior inclusion probability |
level |
Specify alpha of the credible interval level 100(1 - alpha) percentage. By default, 0.05 |
summary.ssvsmod
object
hsmod
object
ngmod
object
George, E. I., & McCulloch, R. E. (1993). Variable Selection via Gibbs Sampling. Journal of the American Statistical Association, 88(423), 881-889.
George, E. I., Sun, D., & Ni, S. (2008). Bayesian stochastic search for VAR model restrictions. Journal of Econometrics, 142(1), 553-580.
Koop, G., & Korobilis, D. (2009). Bayesian Multivariate Time Series Methods for Empirical Macroeconomics. Foundations and Trends® in Econometrics, 3(4), 267-358.
O’Hara, R. B., & Sillanpää, M. J. (2009). A review of Bayesian variable selection methods: what, how and which. Bayesian Analysis, 4(1), 85-117.
This function computes RelMAE given forecast results and an evaluation set.
relmae(x, pred_bench, y, ...)

## S3 method for class 'predbvhar'
relmae(x, pred_bench, y, ...)

## S3 method for class 'bvharcv'
relmae(x, pred_bench, y, ...)
x |
Forecasting object to use |
pred_bench |
The same forecasting object from benchmark model |
y |
Test data to be compared. It should have the same format as the training data. |
... |
not used |
Let e_t = y_t − ŷ_t be the forecast error.
RelMAE uses the MAE of a benchmark model as the relative measure.
Let MAE_b be the MAE of the benchmark model. Then

RelMAE = MAE / MAE_b,

where MAE is that of our model.
RelMAE vector corresponding to each variable.
Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), 679-688.
This function computes the relative estimation error given an estimated model and the true coefficients.
relspne(x, y, ...)

## S3 method for class 'bvharsp'
relspne(x, y, ...)
x |
Estimated model. |
y |
Coefficient matrix to be compared. |
... |
not used |
Let ‖·‖₂ denote the spectral norm of a matrix,
let Φ̂ be the estimated coefficients,
and let Φ be the true coefficient matrix.
Then the function computes the relative estimation error by

‖Φ̂ − Φ‖₂ / ‖Φ‖₂.

Relative spectral norm error value.
Ghosh, S., Khare, K., & Michailidis, G. (2018). High-Dimensional Posterior Consistency in Bayesian Vector Autoregressive Models. Journal of the American Statistical Association, 114(526).
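The spectral-norm ratio can be computed directly in base R, where norm(., type = "2") returns the largest singular value; relspne_mat is an illustrative helper operating on plain matrices, not the package's relspne() method.

```r
# Relative estimation error in spectral norm:
# ||Phi_hat - Phi||_2 / ||Phi||_2, with ||.||_2 the largest singular value.
relspne_mat <- function(coef_hat, coef_true) {
  norm(coef_hat - coef_true, type = "2") / norm(coef_true, type = "2")
}
```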
By defining stats::residuals()
for each model, this function returns the residuals.
## S3 method for class 'varlse'
residuals(object, ...)

## S3 method for class 'vharlse'
residuals(object, ...)

## S3 method for class 'bvarmn'
residuals(object, ...)

## S3 method for class 'bvarflat'
residuals(object, ...)

## S3 method for class 'bvharmn'
residuals(object, ...)
object |
Model object |
... |
not used |
matrix object.
This function computes RMAFE (Mean Absolute Forecast Error Relative to the Benchmark).
rmafe(x, pred_bench, y, ...)

## S3 method for class 'predbvhar'
rmafe(x, pred_bench, y, ...)

## S3 method for class 'bvharcv'
rmafe(x, pred_bench, y, ...)
x |
Forecasting object to use |
pred_bench |
The same forecasting object from benchmark model |
y |
Test data to be compared. It should have the same format as the training data. |
... |
not used |
Let e_t = y_t − ŷ_t be the forecast error.
RMAFE is the ratio of the L1 norms of the errors
from the forecasting object and from the benchmark model:

RMAFE = Σ_t |e_t| / Σ_t |e_t^*|,

where e_t^* is the error from the benchmark model.
RMAFE vector corresponding to each variable.
Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), 679-688.
Bańbura, M., Giannone, D., & Reichlin, L. (2010). Large Bayesian vector auto regressions. Journal of Applied Econometrics, 25(1).
Ghosh, S., Khare, K., & Michailidis, G. (2018). High-Dimensional Posterior Consistency in Bayesian Vector Autoregressive Models. Journal of the American Statistical Association, 114(526).
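The ratio of error norms can be sketched in base R; rmafe_by_var is an illustrative helper on forecast matrices (not the package's rmafe() method), and the same pattern with squared errors gives RMSFE.

```r
# RMAFE: ratio of summed absolute forecast errors, model vs benchmark.
# y_pred, y_bench, y_test: h x m matrices.
rmafe_by_var <- function(y_pred, y_bench, y_test) {
  colSums(abs(y_test - y_pred)) / colSums(abs(y_test - y_bench))
}
```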
This function computes RMAPE given forecast results and an evaluation set.
rmape(x, pred_bench, y, ...)

## S3 method for class 'predbvhar'
rmape(x, pred_bench, y, ...)

## S3 method for class 'bvharcv'
rmape(x, pred_bench, y, ...)
x |
Forecasting object to use |
pred_bench |
The same forecasting object from benchmark model |
y |
Test data to be compared. It should have the same format as the training data. |
... |
not used |
RMAPE is the ratio of the MAPE of the given model to that of the benchmark.
Let MAPE_b be the MAPE of the benchmark model. Then

RMAPE = MAPE / MAPE_b,

where MAPE is that of our model.
RMAPE vector corresponding to each variable.
Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), 679-688.
This function computes RMASE given forecast results and an evaluation set.
rmase(x, pred_bench, y, ...)

## S3 method for class 'predbvhar'
rmase(x, pred_bench, y, ...)

## S3 method for class 'bvharcv'
rmase(x, pred_bench, y, ...)
x |
Forecasting object to use |
pred_bench |
The same forecasting object from benchmark model |
y |
Test data to be compared. It should have the same format as the training data. |
... |
not used |
RMASE is the ratio of the MASE of the given model to that of the benchmark.
Let MASE_b be the MASE of the benchmark model. Then

RMASE = MASE / MASE_b,

where MASE is that of our model.
RMASE vector corresponding to each variable.
Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), 679-688.
This function computes RMSFE (Mean Squared Forecast Error Relative to the Benchmark).
rmsfe(x, pred_bench, y, ...)

## S3 method for class 'predbvhar'
rmsfe(x, pred_bench, y, ...)

## S3 method for class 'bvharcv'
rmsfe(x, pred_bench, y, ...)
x |
Forecasting object to use |
pred_bench |
The same forecasting object from benchmark model |
y |
Test data to be compared. It should have the same format as the training data. |
... |
not used |
Let e_t = y_t − ŷ_t be the forecast error.
RMSFE is the ratio of the squared L2 norms of the errors
from the forecasting object and from the benchmark model:

RMSFE = Σ_t e_t² / Σ_t (e_t^*)²,

where e_t^* is the error from the benchmark model.
RMSFE vector corresponding to each variable.
Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), 679-688.
Bańbura, M., Giannone, D., & Reichlin, L. (2010). Large Bayesian vector auto regressions. Journal of Applied Econometrics, 25(1).
Ghosh, S., Khare, K., & Michailidis, G. (2018). High-Dimensional Posterior Consistency in Bayesian Vector Autoregressive Models. Journal of the American Statistical Association, 114(526).
Set hyperparameters of Bayesian VAR and VHAR models.
set_bvar(sigma, lambda = 0.1, delta, eps = 1e-04)

set_bvar_flat(U)

set_bvhar(sigma, lambda = 0.1, delta, eps = 1e-04)

set_weight_bvhar(sigma, lambda = 0.1, eps = 1e-04, daily, weekly, monthly)

## S3 method for class 'bvharspec'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

is.bvharspec(x)

## S3 method for class 'bvharspec'
knit_print(x, ...)
sigma |
Standard error vector for each variable (Default: sd) |
lambda |
Tightness of the prior around a random walk or white noise (Default: .1) |
delta |
Persistence (Litterman sets 1 for the random-walk prior; 0 gives the white-noise prior) |
eps |
Very small number (Default: 1e-04) |
U |
Positive definite matrix. By default, identity matrix of dimension ncol(X0) |
daily |
Same as delta in VHAR type (Default: 1 as Litterman) |
weekly |
Fill the second part in the first block (Default: 1) |
monthly |
Fill the third part in the first block (Default: 1) |
x |
bvharspec object |
digits |
digit option to print |
... |
not used |
Missing arguments will be set to be default values in each model function mentioned above.
set_bvar()
sets hyperparameters for bvar_minnesota()
.
Each of delta
(vector), lambda
(scalar), sigma
(vector), and eps
corresponds to δ, λ, σ, and ε, respectively.

The δ_i express the prior belief in a random walk:

If δ_i = 1 for all i, the prior is a random-walk prior.

If δ_i = 0 for all i, the prior is a white-noise prior.

λ controls the overall tightness of the prior around these two beliefs:

If λ = 0, the posterior is equivalent to the prior and the data do not influence the estimates.

As λ → ∞, the posterior mean approaches the OLS estimates (VAR).

The σ_i² in the Minnesota moments account for the scales of the data.
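How these hyperparameters enter the prior scales can be sketched in base R under the common Minnesota convention that the prior standard deviation of the (i, j) coefficient at lag l is (λ/l) · σ_i/σ_j. Cross-equation scaling conventions vary across references, so this is an illustration of the standard form (see Litterman 1986; Bańbura et al. 2010), not necessarily the package's exact parameterization; minnesota_sd is a hypothetical name.

```r
# Minnesota prior standard deviations for VAR(p) coefficients
# (one common convention; exact scaling differs across implementations).
minnesota_sd <- function(sigma, lambda, p) {
  lapply(seq_len(p), function(l) {
    # (i, j) entry: lambda / l * sigma_i / sigma_j -- tighter at longer lags
    (lambda / l) * outer(sigma, sigma, "/")
  })
}
```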
set_bvar_flat
sets hyperparameters for bvar_flat()
.
set_bvhar()
sets hyperparameters for bvhar_minnesota()
with VAR-type Minnesota prior, i.e. BVHAR-S model.
set_weight_bvhar()
sets hyperparameters for bvhar_minnesota()
with VHAR-type Minnesota prior, i.e. BVHAR-L model.
Every function returns bvharspec
class.
It is a list whose components are the same as the arguments provided.
If the argument is not specified, NULL
is assigned here.
The default values mentioned above will be considered in each fitting function.
Model name: BVAR
, BVHAR
Prior name: Minnesota
(Minnesota prior for BVAR),
Hierarchical
(Hierarchical prior for BVAR),
MN_VAR
(BVHAR-S),
MN_VHAR
(BVHAR-L),
Flat
(Flat prior for BVAR)
Vector value (or bvharpriorspec
class) assigned for sigma
Value (or bvharpriorspec
class) assigned for lambda
Vector value assigned for delta
Value assigned for epsilon
set_weight_bvhar()
has different component with delta
due to its different construction.
Vector value assigned for daily weight
Vector value assigned for weekly weight
Vector value assigned for monthly weight
By using set_psi()
and set_lambda()
, hierarchical modeling is available.
Bańbura, M., Giannone, D., & Reichlin, L. (2010). Large Bayesian vector auto regressions. Journal of Applied Econometrics, 25(1).
Litterman, R. B. (1986). Forecasting with Bayesian Vector Autoregressions: Five Years of Experience. Journal of Business & Economic Statistics, 4(1), 25.
Ghosh, S., Khare, K., & Michailidis, G. (2018). High-Dimensional Posterior Consistency in Bayesian Vector Autoregressive Models. Journal of the American Statistical Association, 114(526).
Kim, Y. G., and Baek, C. (2024). Bayesian vector heterogeneous autoregressive modeling. Journal of Statistical Computation and Simulation, 94(6), 1139-1157.
lambda hyperprior specification set_lambda()
sigma hyperprior specification set_psi()
# Minnesota BVAR specification------------------------
bvar_spec <- set_bvar(
  sigma = c(.03, .02, .01),  # Sigma = diag(.03^2, .02^2, .01^2)
  lambda = .2,               # lambda = .2
  delta = rep(.1, 3),        # delta1 = .1, delta2 = .1, delta3 = .1
  eps = 1e-04                # eps = 1e-04
)
class(bvar_spec)
str(bvar_spec)

# Flat BVAR specification-------------------------
# 3-dim
# p = 5 with constant term
# U = 500 * I(mp + 1)
bvar_flat_spec <- set_bvar_flat(U = 500 * diag(16))
class(bvar_flat_spec)
str(bvar_flat_spec)

# BVHAR-S specification-----------------------
bvhar_var_spec <- set_bvhar(
  sigma = c(.03, .02, .01),  # Sigma = diag(.03^2, .02^2, .01^2)
  lambda = .2,               # lambda = .2
  delta = rep(.1, 3),        # delta1 = .1, delta2 = .1, delta3 = .1
  eps = 1e-04                # eps = 1e-04
)
class(bvhar_var_spec)
str(bvhar_var_spec)

# BVHAR-L specification---------------------------
bvhar_vhar_spec <- set_weight_bvhar(
  sigma = c(.03, .02, .01),  # Sigma = diag(.03^2, .02^2, .01^2)
  lambda = .2,               # lambda = .2
  eps = 1e-04,               # eps = 1e-04
  daily = rep(.2, 3),        # daily1 = .2, daily2 = .2, daily3 = .2
  weekly = rep(.1, 3),       # weekly1 = .1, weekly2 = .1, weekly3 = .1
  monthly = rep(.05, 3)      # monthly1 = .05, monthly2 = .05, monthly3 = .05
)
class(bvhar_vhar_spec)
str(bvhar_vhar_spec)
Set DL hyperparameters for VAR or VHAR coefficient and contemporaneous coefficient.
set_dl(dir_grid = 100L, shape = 0.01, rate = 0.01)

## S3 method for class 'dlspec'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

is.dlspec(x)
dir_grid |
Griddy Gibbs grid size for the Dirichlet hyperparameter |
shape |
Gamma shape |
rate |
Gamma rate |
x |
dlspec object |
digits |
digit option to print |
... |
not used |
dlspec
object
Bhattacharya, A., Pati, D., Pillai, N. S., & Dunson, D. B. (2015). Dirichlet-Laplace Priors for Optimal Shrinkage. Journal of the American Statistical Association, 110(512), 1479-1490.
Korobilis, D., & Shimizu, K. (2022). Bayesian Approaches to Shrinkage and Sparse Estimation. Foundations and Trends® in Econometrics, 11(4), 230-354.
Set initial hyperparameters and parameters before starting the Gibbs sampler for the Horseshoe prior.
set_horseshoe(local_sparsity = 1, group_sparsity = 1, global_sparsity = 1)

## S3 method for class 'horseshoespec'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

is.horseshoespec(x)

## S3 method for class 'horseshoespec'
knit_print(x, ...)
local_sparsity |
Initial local shrinkage hyperparameters |
group_sparsity |
Initial group shrinkage hyperparameters |
global_sparsity |
Initial global shrinkage hyperparameter |
x |
horseshoespec object |
digits |
digit option to print |
... |
not used |
Set horseshoe prior initialization for VAR family.
local_sparsity
: Initial local shrinkage
group_sparsity
: Initial group shrinkage
global_sparsity
: Initial global shrinkage
In this package, the horseshoe prior model is estimated by Gibbs sampling; "initial" refers to the initial values for that Gibbs sampler.
Carvalho, C. M., Polson, N. G., & Scott, J. G. (2010). The horseshoe estimator for sparse signals. Biometrika, 97(2), 465-480.
Makalic, E., & Schmidt, D. F. (2016). A Simple Sampler for the Horseshoe Estimator. IEEE Signal Processing Letters, 23(1), 179-182.
Set Normal prior hyperparameters for constant term
set_intercept(mean = 0, sd = 0.1)

## S3 method for class 'interceptspec'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

is.interceptspec(x)

## S3 method for class 'interceptspec'
knit_print(x, ...)
mean |
Normal mean of constant term |
sd |
Normal standard deviation for the constant term |
x |
interceptspec object |
digits |
digit option to print |
... |
not used |
Set hyperpriors of Bayesian VAR and VHAR models.
set_lambda(mode = 0.2, sd = 0.4, param = NULL, lower = 1e-05, upper = 3)

set_psi(shape = 4e-04, scale = 4e-04, lower = 1e-05, upper = 3)

## S3 method for class 'bvharpriorspec'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

is.bvharpriorspec(x)

## S3 method for class 'bvharpriorspec'
knit_print(x, ...)
mode |
Mode of Gamma distribution. By default, 0.2 |
sd |
Standard deviation of Gamma distribution. By default, 0.4 |
param |
Shape and rate of Gamma distribution, in the form of |
lower |
Lower bound (By default, 1e-05) |
upper |
Upper bound (By default, 3) |
shape |
Shape of Inverse Gamma distribution. By default, 4e-04 |
scale |
Scale of Inverse Gamma distribution. By default, 4e-04 |
x |
bvharpriorspec object |
digits |
digit option to print |
... |
not used |
In addition to the Normal-IW priors set_bvar()
, set_bvhar()
, and set_weight_bvhar()
,
these functions give a hierarchical structure to the model.

set_lambda()
specifies the hyperprior for λ (
lambda
), which is a Gamma distribution.

set_psi()
specifies the hyperprior for ψ (
sigma
), which is an Inverse Gamma distribution.
The following set of (mode, sd)
are recommended by Sims and Zha (1998) for set_lambda()
.
(mode = .2, sd = .4)
: default
(mode = 1, sd = 1)
Giannone et al. (2015) suggested data-based selection for set_psi()
.
They choose (0.02)^2 based on their empirical data set.
bvharpriorspec
object
Giannone, D., Lenza, M., & Primiceri, G. E. (2015). Prior Selection for Vector Autoregressions. Review of Economics and Statistics, 97(2).
# Hierarchical BVAR specification------------------------
set_bvar(
  sigma = set_psi(shape = 4e-4, scale = 4e-4),
  lambda = set_lambda(mode = .2, sd = .4),
  delta = rep(1, 3),
  eps = 1e-04  # eps = 1e-04
)
Set prior for covariance matrix.
set_ldlt(ig_shape = 3, ig_scl = 0.01)

set_sv(ig_shape = 3, ig_scl = 0.01, initial_mean = 1, initial_prec = 0.1)

## S3 method for class 'covspec'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

is.covspec(x)

is.svspec(x)

is.ldltspec(x)
ig_shape |
Inverse-Gamma shape of Cholesky diagonal vector.
For SV ( |
ig_scl |
Inverse-Gamma scale of Cholesky diagonal vector.
For SV ( |
initial_mean |
Prior mean of initial state. |
initial_prec |
Prior precision of initial state. |
x |
covspec object |
digits |
digit option to print |
... |
not used |
set_ldlt()
specifies the LDLT decomposition of the precision matrix, while
set_sv()
specifies a time-varying precision matrix under the stochastic volatility framework, based on:
Carriero, A., Chan, J., Clark, T. E., & Marcellino, M. (2022). Corrigendum to “Large Bayesian vector autoregressions with stochastic volatility and non-conjugate priors” [J. Econometrics 212 (1)(2019) 137-154]. Journal of Econometrics, 227(2), 506-512.
Chan, J., Koop, G., Poirier, D., & Tobias, J. (2019). Bayesian Econometric Methods (2nd ed., Econometric Exercises). Cambridge: Cambridge University Press.
Set NG hyperparameters for VAR or VHAR coefficient and contemporaneous coefficient.
set_ng(
  shape_sd = 0.01,
  group_shape = 0.01,
  group_scale = 0.01,
  global_shape = 0.01,
  global_scale = 0.01,
  contem_global_shape = 0.01,
  contem_global_scale = 0.01
)

## S3 method for class 'ngspec'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

is.ngspec(x)
shape_sd |
Standard deviation of the Metropolis-Hastings proposal for the Gamma shape |
group_shape |
Inverse gamma prior shape for coefficient group shrinkage |
group_scale |
Inverse gamma prior scale for coefficient group shrinkage |
global_shape |
Inverse gamma prior shape for coefficient global shrinkage |
global_scale |
Inverse gamma prior scale for coefficient global shrinkage |
contem_global_shape |
Inverse gamma prior shape for contemporaneous coefficient global shrinkage |
contem_global_scale |
Inverse gamma prior scale for contemporaneous coefficient global shrinkage |
x |
|
digits |
digit option to print |
... |
not used |
ngspec
object
Chan, J. C. C. (2021). Minnesota-type adaptive hierarchical priors for large Bayesian VARs. International Journal of Forecasting, 37(3), 1212-1226.
Huber, F., & Feldkircher, M. (2019). Adaptive Shrinkage in Bayesian Vector Autoregressive Models. Journal of Business & Economic Statistics, 37(1), 27-39.
Korobilis, D., & Shimizu, K. (2022). Bayesian Approaches to Shrinkage and Sparse Estimation. Foundations and Trends® in Econometrics, 11(4), 230-354.
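A hedged usage sketch for the NG specification, using only the documented arguments and their defaults (assuming the bvhar package is attached):

```r
library(bvhar)

# Normal-Gamma shrinkage specification with the documented defaults
ng_spec <- set_ng(
  shape_sd = 0.01,
  group_shape = 0.01, group_scale = 0.01,
  global_shape = 0.01, global_scale = 0.01,
  contem_global_shape = 0.01, contem_global_scale = 0.01
)
is.ngspec(ng_spec)
print(ng_spec)
```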
Set SSVS hyperparameters for VAR or VHAR coefficient matrix and Cholesky factor.
set_ssvs(
  coef_spike = 0.1,
  coef_slab = 5,
  coef_spike_scl = 0.01,
  coef_slab_shape = 0.01,
  coef_slab_scl = 0.01,
  coef_mixture = 0.5,
  coef_s1 = c(1, 1),
  coef_s2 = c(1, 1),
  mean_non = 0,
  sd_non = 0.1,
  shape = 0.01,
  rate = 0.01,
  chol_spike = 0.1,
  chol_slab = 5,
  chol_spike_scl = 0.01,
  chol_slab_shape = 0.01,
  chol_slab_scl = 0.01,
  chol_mixture = 0.5,
  chol_s1 = 1,
  chol_s2 = 1
)

## S3 method for class 'ssvsinput'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

is.ssvsinput(x)

## S3 method for class 'ssvsinput'
knit_print(x, ...)
coef_spike |
Standard deviation of the spike normal distribution.
Will be deleted when |
coef_slab |
Standard deviation of the slab normal distribution.
Will be deleted when |
coef_spike_scl |
Scaling factor (between 0 and 1) for the spike sd, where spike sd = c * slab sd |
coef_slab_shape |
Inverse gamma shape for slab sd |
coef_slab_scl |
Inverse gamma scale for slab sd |
coef_mixture |
Bernoulli parameter for sparsity proportion.
Will be deleted when |
coef_s1 |
First shape of coefficients prior beta distribution |
coef_s2 |
Second shape of coefficients prior beta distribution |
mean_non |
Prior mean of unrestricted coefficients
Will be deleted when |
sd_non |
Standard deviation of the unrestricted coefficients
Will be deleted when |
shape |
Gamma shape parameters for precision matrix (See Details). |
rate |
Gamma rate parameters for precision matrix (See Details). |
chol_spike |
Standard deviation of the spike normal distribution for the Cholesky factor.
Will be deleted when |
chol_slab |
Standard deviation of the slab normal distribution for the Cholesky factor.
Will be deleted when |
chol_spike_scl |
Scaling factor (between 0 and 1) for the spike sd in the Cholesky factor, where spike sd = c * slab sd |
chol_slab_shape |
Inverse gamma shape for slab sd in the cholesky factor |
chol_slab_scl |
Inverse gamma scale for slab sd in the cholesky factor |
chol_mixture |
Bernoulli parameter for sparsity proportion in the Cholesky factor (See Details).
Will be deleted when |
chol_s1 |
First shape of the Cholesky factor prior beta distribution |
chol_s2 |
Second shape of the Cholesky factor prior beta distribution |
x |
|
digits |
digit option to print |
... |
not used |
Let be the vectorized coefficient,
.
The spike-and-slab prior is specified using two normal distributions.
As the name suggests, set small (a point mass near zero: the spike distribution)
and set
large (symmetric about zero: the slab distribution).
is the proportion of the nonzero coefficients and it follows
coef_spike
:
coef_slab
:
coef_mixture
:
: vectorized format corresponding to coefficient matrix
If a single value is provided, the model function replicates it.
coef_non
: the vectorized constant term is given a Normal prior with variance . Here,
coef_non
is .
Next, for the precision matrix , SSVS applies the Cholesky decomposition.
where is upper triangular.
Diagonal components follow the gamma distribution.
For each row of off-diagonal (upper-triangular) components, we apply spike-slab prior again.
shape
:
rate
:
chol_spike
:
chol_slab
:
chol_mixture
:
: vectorized format corresponding to coefficient matrix
and
:
chol_
arguments accept a single value (replicated), a vector, or an upper triangular matrix.
ssvsinput
object
George, E. I., & McCulloch, R. E. (1993). Variable Selection via Gibbs Sampling. Journal of the American Statistical Association, 88(423), 881-889.
George, E. I., Sun, D., & Ni, S. (2008). Bayesian stochastic search for VAR model restrictions. Journal of Econometrics, 142(1), 553-580.
Ishwaran, H., & Rao, J. S. (2005). Spike and slab variable selection: Frequentist and Bayesian strategies. The Annals of Statistics, 33(2).
Koop, G., & Korobilis, D. (2009). Bayesian Multivariate Time Series Methods for Empirical Macroeconomics. Foundations and Trends® in Econometrics, 3(4), 267-358.
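The spike-and-slab setup above can be sketched as follows (assuming the bvhar package is attached; the values are illustrative, following the spike-small/slab-large guidance):

```r
library(bvhar)

# SSVS specification: small spike sd, large slab sd, and prior
# inclusion probability 1/2 for both coefficients and Cholesky factor
ssvs_spec <- set_ssvs(
  coef_spike = 0.1, coef_slab = 5, coef_mixture = 0.5,
  chol_spike = 0.1, chol_slab = 5, chol_mixture = 0.5
)
is.ssvsinput(ssvs_spec)
```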
This function samples generalized inverse Gaussian (GIG) random variates.
sim_gig(num_sim, lambda, psi, chi)
num_sim |
Number to generate |
lambda |
Index of modified Bessel function of third kind. |
psi |
Second parameter of GIG. Should be positive. |
chi |
Third parameter of GIG. Should be positive. |
The GIG density considered here is as follows.
where .
Hörmann, W., & Leydold, J. (2014). Generating generalized inverse Gaussian random variates. Statistics and Computing, 24, 547-557.
Leydold, J., & Hörmann, W. (2023). GIGrvg: Random Variate Generator for the GIG Distribution. R package version 0.8.
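A short sketch of the sampler (assuming the bvhar package is attached; the parameter values are illustrative):

```r
library(bvhar)

set.seed(1)
# 1000 GIG draws; psi and chi must be positive, and
# lambda indexes the modified Bessel function of the third kind
draws <- sim_gig(num_sim = 1000, lambda = 0.5, psi = 1, chi = 1)
length(draws)
summary(draws) # GIG support is the positive real line
```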
This function generates parameters of VAR with Horseshoe prior.
sim_horseshoe_var(
  p,
  dim_data = NULL,
  include_mean = TRUE,
  minnesota = FALSE,
  method = c("eigen", "chol")
)

sim_horseshoe_vhar(
  har = c(5, 22),
  dim_data = NULL,
  include_mean = TRUE,
  minnesota = c("no", "short", "longrun"),
  method = c("eigen", "chol")
)
p |
VAR lag |
dim_data |
Specify the dimension of the data if hyperparameters of |
include_mean |
Add constant term (Default: |
minnesota |
Only apply the restriction to the off-diagonal terms of each coefficient matrix.
In |
method |
Method to compute |
har |
Numeric vector for weekly and monthly order. By default, |
This function samples one matrix from the inverse-Wishart (IW) distribution.
sim_iw(mat_scale, shape)
mat_scale |
Scale matrix |
shape |
Shape |
Consider .
Upper triangular Bartlett decomposition: k x k matrix upper triangular with
with i < j (upper triangular)
Lower triangular Cholesky decomposition:
One k x k matrix following IW distribution
This function samples one matrix from the matrix normal (MN) distribution.
sim_matgaussian(mat_mean, mat_scale_u, mat_scale_v, u_prec)
mat_mean |
Mean matrix |
mat_scale_u |
First scale matrix |
mat_scale_v |
Second scale matrix |
u_prec |
If |
Consider an n x k matrix, where M is n x k, U is n x n, and V is k x k.
Lower triangular Cholesky decomposition: and
Standard normal generation: s x m matrix in row-wise direction.
This function only generates one matrix, i.e. .
One n x k matrix following MN distribution.
This function generates parameters of BVAR with Minnesota prior.
sim_mncoef(p, bayes_spec = set_bvar(), full = TRUE)
p |
VAR lag |
bayes_spec |
A BVAR model specification by |
full |
Generate variance matrix from IW (default: |
Using dummy-observation constructions, Bańbura et al. (2010) set a Normal-IW prior.
If full = FALSE
, the result of is the same as input (
diag(sigma)
).
List with the following component.
BVAR coefficient (MN)
BVAR variance (IW or diagonal matrix of sigma
of bayes_spec
)
Bańbura, M., Giannone, D., & Reichlin, L. (2010). Large Bayesian vector auto regressions. Journal of Applied Econometrics, 25(1).
Karlsson, S. (2013). Chapter 15 Forecasting with Bayesian Vector Autoregression. Handbook of Economic Forecasting, 2, 791-897.
Litterman, R. B. (1986). Forecasting with Bayesian Vector Autoregressions: Five Years of Experience. Journal of Business & Economic Statistics, 4(1), 25.
set_bvar()
to specify the hyperparameters of Minnesota prior.
# Generate (A, Sigma)
# BVAR(p = 2)
# sigma: 1, 1, 1
# lambda: .1
# delta: .1, .1, .1
# epsilon: 1e-04
set.seed(1)
sim_mncoef(
  p = 2,
  bayes_spec = set_bvar(
    sigma = rep(1, 3),
    lambda = .1,
    delta = rep(.1, 3),
    eps = 1e-04
  ),
  full = TRUE
)
This function samples Matrix Normal Inverse-Wishart (MNIW) matrices.
sim_mniw(num_sim, mat_mean, mat_scale_u, mat_scale, shape, u_prec = FALSE)
num_sim |
Number to generate |
mat_mean |
Mean matrix of MN |
mat_scale_u |
First scale matrix of MN |
mat_scale |
Scale matrix of IW |
shape |
Shape of IW |
u_prec |
If |
Consider .
Generate upper triangular factor of in the upper triangular Bartlett decomposition.
Standard normal generation: n x k matrix in row-wise direction.
Lower triangular Cholesky decomposition:
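The sampler can be sketched as below (assuming the bvhar package is attached; the identity hyperparameters are illustrative, and the exact return structure is not spelled out in this section, so we only inspect it):

```r
library(bvhar)

set.seed(1)
# Two MNIW draws with 2 x 2 identity hyperparameters;
# mat_scale_u is read as a scale (not a precision) since u_prec = FALSE
draws <- sim_mniw(
  num_sim = 2,
  mat_mean = matrix(0, 2, 2),
  mat_scale_u = diag(2),
  mat_scale = diag(2),
  shape = 4,
  u_prec = FALSE
)
str(draws)
```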
This function samples n multivariate normal random vectors, stacked row-wise into a matrix.
sim_mnormal(
  num_sim,
  mu = rep(0, 5),
  sig = diag(5),
  method = c("eigen", "chol")
)
num_sim |
Number to generate process |
mu |
Mean vector |
sig |
Variance matrix |
method |
Method to compute |
Consider .
Lower triangular Cholesky decomposition:
Standard normal generation:
T x k matrix
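A minimal sketch (assuming the bvhar package is attached):

```r
library(bvhar)

set.seed(1)
# 100 trivariate normal draws, stacked row-wise into a 100 x 3 matrix
x <- sim_mnormal(num_sim = 100, mu = rep(0, 3), sig = diag(3), method = "eigen")
dim(x)
colMeans(x) # close to the zero mean vector
```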
This function generates parameters of BVHAR with Minnesota prior.
sim_mnvhar_coef(bayes_spec = set_bvhar(), full = TRUE)
bayes_spec |
A BVHAR model specification by |
full |
Generate variance matrix from IW (default: |
Normal-IW family for vector HAR model:
List with the following component.
BVHAR coefficient (MN)
BVHAR variance (IW or diagonal matrix of sigma
of bayes_spec
)
Kim, Y. G., and Baek, C. (2024). Bayesian vector heterogeneous autoregressive modeling. Journal of Statistical Computation and Simulation, 94(6), 1139-1157.
set_bvhar()
to specify the hyperparameters of VAR-type Minnesota prior.
set_weight_bvhar()
to specify the hyperparameters of HAR-type Minnesota prior.
# Generate (Phi, Sigma)
# BVHAR-S
# sigma: 1, 1, 1
# lambda: .1
# delta: .1, .1, .1
# epsilon: 1e-04
set.seed(1)
sim_mnvhar_coef(
  bayes_spec = set_bvhar(
    sigma = rep(1, 3),
    lambda = .1,
    delta = rep(.1, 3),
    eps = 1e-04
  ),
  full = TRUE
)
This function samples n multivariate t random vectors, stacked row-wise into a matrix.
sim_mvt(num_sim, df, mu, sig, method = c("eigen", "chol"))
num_sim |
Number to generate process. |
df |
Degrees of freedom. |
mu |
Location vector |
sig |
Scale matrix. |
method |
Method to compute |
T x k matrix
This function generates parameters of VAR with SSVS prior.
sim_ssvs_var(
  bayes_spec,
  p,
  dim_data = NULL,
  include_mean = TRUE,
  minnesota = FALSE,
  mn_prob = 1,
  method = c("eigen", "chol")
)

sim_ssvs_vhar(
  bayes_spec,
  har = c(5, 22),
  dim_data = NULL,
  include_mean = TRUE,
  minnesota = c("no", "short", "longrun"),
  mn_prob = 1,
  method = c("eigen", "chol")
)
bayes_spec |
A SSVS model specification by |
p |
VAR lag |
dim_data |
Specify the dimension of the data if hyperparameters of |
include_mean |
Add constant term (Default: |
minnesota |
Only apply the restriction to the off-diagonal terms of each coefficient matrix.
In |
mn_prob |
Probability for own-lags. |
method |
Method to compute |
har |
Numeric vector for weekly and monthly order. By default, |
List including coefficients.
Let be the vectorized coefficient of VAR(p).
Let be the vectorized coefficient of VHAR.
George, E. I., & McCulloch, R. E. (1993). Variable Selection via Gibbs Sampling. Journal of the American Statistical Association, 88(423), 881-889.
George, E. I., Sun, D., & Ni, S. (2008). Bayesian stochastic search for VAR model restrictions. Journal of Econometrics, 142(1), 553-580.
Ghosh, S., Khare, K., & Michailidis, G. (2018). High-Dimensional Posterior Consistency in Bayesian Vector Autoregressive Models. Journal of the American Statistical Association, 114(526).
Koop, G., & Korobilis, D. (2009). Bayesian Multivariate Time Series Methods for Empirical Macroeconomics. Foundations and Trends® in Econometrics, 3(4), 267-358.
This function generates multivariate time series dataset that follows VAR(p).
sim_var(
  num_sim,
  num_burn,
  var_coef,
  var_lag,
  sig_error = diag(ncol(var_coef)),
  init = matrix(0L, nrow = var_lag, ncol = ncol(var_coef)),
  method = c("eigen", "chol"),
  process = c("gaussian", "student"),
  t_param = 5
)
num_sim |
Number of observations to generate |
num_burn |
Number of burn-in |
var_coef |
VAR coefficient. The format should be the same as the output of |
var_lag |
Lag of VAR |
sig_error |
Variance matrix of the error term. By default, |
init |
Initial y1, ..., yp matrix to simulate VAR model. Try |
method |
Method to compute |
process |
Process to generate error term.
|
t_param |
Generate
For i = 1, ... n,
Then the output is
Initial values might be set to be zero vector or .
T x k matrix
Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer Publishing.
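A hedged simulation sketch (assuming the bvhar package is attached; the 2 x 2 coefficient matrix is illustrative, and the argument documentation notes the coefficient format should match the output of var_lm()):

```r
library(bvhar)

# Illustrative stable bivariate VAR(1) coefficient (no constant row)
coef_mat <- matrix(c(.5, .1, .1, .5), nrow = 2)
set.seed(1)
y <- sim_var(
  num_sim = 500, num_burn = 100,
  var_coef = coef_mat, var_lag = 1,
  sig_error = diag(2)
)
dim(y) # expected 500 x 2 per the "T x k matrix" return value
```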
This function generates multivariate time series dataset that follows VHAR.
sim_vhar(
  num_sim,
  num_burn,
  vhar_coef,
  week = 5L,
  month = 22L,
  sig_error = diag(ncol(vhar_coef)),
  init = matrix(0L, nrow = month, ncol = ncol(vhar_coef)),
  method = c("eigen", "chol"),
  process = c("gaussian", "student"),
  t_param = 5
)
num_sim |
Number of observations to generate |
num_burn |
Number of burn-in |
vhar_coef |
VHAR coefficient. The format should be the same as the output of |
week |
Weekly order of VHAR. By default, |
month |
Monthly order of VHAR. By default, |
sig_error |
Variance matrix of the error term. By default, |
init |
Initial y1, ..., yp matrix to simulate VAR model. Try |
method |
Method to compute |
process |
Process to generate error term.
|
t_param |
Let be the month order, e.g.
.
Generate
For i = 1, ... n,
Then the output is
For i = 1, ... n,
Then the output is
Initial values might be set to be zero vector or .
T x k matrix
Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer Publishing.
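A hedged simulation sketch (assuming the bvhar package is attached; stacking the coefficient as daily, weekly, and monthly blocks is an assumption here, and the values are illustrative):

```r
library(bvhar)

# Illustrative bivariate VHAR coefficient: daily, weekly, monthly
# blocks stacked row-wise (3k x k, no constant row)
k <- 2
vhar_coef <- rbind(diag(.3, k), diag(.2, k), diag(.1, k))
set.seed(1)
y <- sim_vhar(
  num_sim = 500, num_burn = 100,
  vhar_coef = vhar_coef,
  sig_error = diag(k)
)
dim(y)
```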
This function computes the connectedness table with the h-step-ahead normalized spillover index (a.k.a. variance shares).
spillover(object, n_ahead = 10L, ...)

## S3 method for class 'bvharspillover'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'bvharspillover'
knit_print(x, ...)

## S3 method for class 'olsmod'
spillover(object, n_ahead = 10L, ...)

## S3 method for class 'normaliw'
spillover(
  object,
  n_ahead = 10L,
  num_iter = 5000L,
  num_burn = floor(num_iter/2),
  thinning = 1L,
  ...
)

## S3 method for class 'bvarldlt'
spillover(object, n_ahead = 10L, sparse = FALSE, ...)

## S3 method for class 'bvharldlt'
spillover(object, n_ahead = 10L, sparse = FALSE, ...)
object |
Model object |
n_ahead |
step to forecast. By default, 10. |
... |
not used |
x |
|
digits |
digit option to print |
num_iter |
Number of MNIW samples |
num_burn |
Number of burn-in |
thinning |
Thinning every thinning-th iteration |
sparse |
Diebold, F. X., & Yilmaz, K. (2012). Better to give than to receive: Predictive directional measurement of volatility spillovers. International Journal of forecasting, 28(1), 57-66.
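A minimal usage sketch (assuming the bvhar package is attached; etf_vix is the dataset used in the var_lm() example elsewhere in this manual):

```r
library(bvhar)

# 10-step-ahead connectedness table from an OLS VAR fit
fit <- var_lm(y = etf_vix, p = 2)
sp <- spillover(fit, n_ahead = 10L)
sp
```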
This function computes the estimation error given an estimated model and the true coefficient matrix.
spne(x, y, ...) ## S3 method for class 'bvharsp' spne(x, y, ...)
x |
Estimated model. |
y |
Coefficient matrix to be compared. |
... |
not used |
Let be the spectral norm of a matrix,
let
be the estimates,
and let
be the true coefficients matrix.
Then the function computes estimation error by
Spectral norm value
Ghosh, S., Khare, K., & Michailidis, G. (2018). High-Dimensional Posterior Consistency in Bayesian Vector Autoregressive Models. Journal of the American Statistical Association, 114(526).
Compute the roots of the characteristic polynomial of the coefficient matrix.
stableroot(x, ...)

## S3 method for class 'varlse'
stableroot(x, ...)

## S3 method for class 'vharlse'
stableroot(x, ...)

## S3 method for class 'bvarmn'
stableroot(x, ...)

## S3 method for class 'bvarflat'
stableroot(x, ...)

## S3 method for class 'bvharmn'
stableroot(x, ...)
x |
Model fit |
... |
not used |
To check whether the process is stable, form the characteristic polynomial.
where is VAR(1) coefficient matrix representation.
Numeric vector.
Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer Publishing.
summary
method for normaliw
class.
## S3 method for class 'normaliw'
summary(
  object,
  num_chains = 1,
  num_iter = 1000,
  num_burn = floor(num_iter/2),
  thinning = 1,
  verbose = FALSE,
  num_thread = 1,
  ...
)

## S3 method for class 'summary.normaliw'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'summary.normaliw'
knit_print(x, ...)
object |
A |
num_chains |
Number of MCMC chains |
num_iter |
MCMC iteration number |
num_burn |
Number of burn-in (warm-up). Half of the iteration is the default choice. |
thinning |
Thinning every thinning-th iteration |
verbose |
Print the progress bar in the console. By default, |
num_thread |
Number of threads |
... |
not used |
x |
|
digits |
digit option to print |
Under the Minnesota prior, the set of coefficient matrices and the residual covariance matrix jointly follow a matrix Normal Inverse-Wishart distribution.
BVAR:
where is the posterior precision of MN.
BVHAR:
where is the posterior precision of MN.
summary.normaliw
class has the following components:
Variable names
Total number of the observation
Sample size used when training = totobs - p
Lag of VAR
Dimension of the data
Matched call
Model specification (bvharspec
)
MN Mean of posterior distribution (MN-IW)
MN Precision of posterior distribution (MN-IW)
IW scale of posterior distribution (MN-IW)
IW df of posterior distribution (MN-IW)
Number of MCMC iterations
Number of MCMC burn-in
MCMC thinning
MCMC record of coefficients vector
MCMC record of the upper Cholesky factor
MCMC record of the diagonal of the Cholesky factor
MCMC record of the upper part of the Cholesky factor
MCMC record of every parameter
Posterior mean of coefficients
Posterior mean of covariance
Litterman, R. B. (1986). Forecasting with Bayesian Vector Autoregressions: Five Years of Experience. Journal of Business & Economic Statistics, 4(1), 25.
Bańbura, M., Giannone, D., & Reichlin, L. (2010). Large Bayesian vector auto regressions. Journal of Applied Econometrics, 25(1).
summary
method for varlse
class.
## S3 method for class 'varlse'
summary(object, ...)

## S3 method for class 'summary.varlse'
print(x, digits = max(3L, getOption("digits") - 3L), signif_code = TRUE, ...)

## S3 method for class 'summary.varlse'
knit_print(x, ...)
object |
A |
... |
not used |
x |
|
digits |
digit option to print |
signif_code |
Check significant rows (Default: |
summary.varlse
class additionally computes the following
names |
Variable names |
totobs |
Total number of the observation |
obs |
Sample size used when training = |
p |
Lag of VAR |
coefficients |
Coefficient Matrix |
call |
Matched call |
process |
Process: VAR |
covmat |
Covariance matrix of the residuals |
corrmat |
Correlation matrix of the residuals |
roots |
Roots of characteristic polynomials |
is_stable |
Whether the process is stable or not based on |
log_lik |
log-likelihood |
ic |
Information criteria vector |
AIC - Akaike information criterion
BIC - Bayesian information criterion
HQ - Hannan-Quinn criterion
FPE - final prediction error
Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer Publishing.
summary
method for vharlse
class.
## S3 method for class 'vharlse'
summary(object, ...)

## S3 method for class 'summary.vharlse'
print(x, digits = max(3L, getOption("digits") - 3L), signif_code = TRUE, ...)

## S3 method for class 'summary.vharlse'
knit_print(x, ...)
object |
A |
... |
not used |
x |
|
digits |
digit option to print |
signif_code |
Check significant rows (Default: |
summary.vharlse
class additionally computes the following
names |
Variable names |
totobs |
Total number of the observation |
obs |
Sample size used when training = |
p |
3 |
week |
Order for weekly term |
month |
Order for monthly term |
coefficients |
Coefficient Matrix |
call |
Matched call |
process |
Process: VHAR |
covmat |
Covariance matrix of the residuals |
corrmat |
Correlation matrix of the residuals |
roots |
Roots of characteristic polynomials |
is_stable |
Whether the process is stable or not based on |
log_lik |
log-likelihood |
ic |
Information criteria vector |
AIC - Akaike information criterion
BIC - Bayesian information criterion
HQ - Hannan-Quinn criterion
FPE - final prediction error
Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer Publishing.
Corsi, F. (2008). A Simple Approximate Long-Memory Model of Realized Volatility. Journal of Financial Econometrics, 7(2), 174-196.
Baek, C. and Park, M. (2021). Sparse vector heterogeneous autoregressive modeling for realized volatility. J. Korean Stat. Soc. 50, 495-510.
This function fits a BVAR. The covariance term can be homoskedastic or heteroskedastic (stochastic volatility), and the coefficients can have a Minnesota, SSVS, or Horseshoe prior.
var_bayes(
  y,
  p,
  num_chains = 1,
  num_iter = 1000,
  num_burn = floor(num_iter/2),
  thinning = 1,
  bayes_spec = set_bvar(),
  cov_spec = set_ldlt(),
  intercept = set_intercept(),
  include_mean = TRUE,
  minnesota = TRUE,
  save_init = FALSE,
  convergence = NULL,
  verbose = FALSE,
  num_thread = 1
)

## S3 method for class 'bvarldlt'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'bvarldlt'
knit_print(x, ...)
y |
Time series data of which columns indicate the variables |
p |
VAR lag |
num_chains |
Number of MCMC chains |
num_iter |
MCMC iteration number |
num_burn |
Number of burn-in (warm-up). Half of the iteration is the default choice. |
thinning |
Thinning every thinning-th iteration |
bayes_spec |
A BVAR model specification by |
cov_spec |
SV specification by |
intercept |
Prior for the constant term by |
include_mean |
Add constant term (Default: |
minnesota |
Apply cross-variable shrinkage structure (Minnesota-way). By default, |
save_init |
Save every record starting from the initial values ( |
convergence |
Convergence threshold for rhat < convergence. By default, |
verbose |
Print the progress bar in the console. By default, |
num_thread |
Number of threads |
x |
|
digits |
digit option to print |
... |
not used |
Cholesky stochastic volatility modeling for VAR based on
, and implements the corrected triangular algorithm for the Gibbs sampler.
var_bayes()
returns an object named bvarsv
class.
Posterior mean of coefficients.
Posterior mean of contemporaneous effects.
Every set of MCMC trace.
Name of every parameter.
Indicators for group.
Number of groups.
Number of coefficients: 3m + 1
or 3m
VAR lag
Dimension of the data
Sample size used when training = totobs - p
Total number of the observation
Matched call
Description of the model, e.g. VHAR_SSVS_SV
, VHAR_Horseshoe_SV
, or VHAR_minnesota-part_SV
include constant term (const
) or not (none
)
Coefficients prior specification
log volatility prior specification
Intercept prior specification
Initial values
The number of chains
Total iterations
Burn-in
Thinning
Raw input
If it is SSVS or Horseshoe:
Posterior inclusion probabilities.
Carriero, A., Chan, J., Clark, T. E., & Marcellino, M. (2022). Corrigendum to “Large Bayesian vector autoregressions with stochastic volatility and non-conjugate priors” [J. Econometrics 212 (1)(2019) 137-154]. Journal of Econometrics, 227(2), 506-512.
Chan, J., Koop, G., Poirier, D., & Tobias, J. (2019). Bayesian Econometric Methods (2nd ed., Econometric Exercises). Cambridge: Cambridge University Press.
Cogley, T., & Sargent, T. J. (2005). Drifts and volatilities: monetary policies and outcomes in the post WWII US. Review of Economic Dynamics, 8(2), 262-302.
Gruber, L., & Kastner, G. (2022). Forecasting macroeconomic data with Bayesian VARs: Sparse or dense? It depends! arXiv.
Huber, F., Koop, G., & Onorante, L. (2021). Inducing Sparsity and Shrinkage in Time-Varying Parameter Models. Journal of Business & Economic Statistics, 39(3), 669-683.
Korobilis, D., & Shimizu, K. (2022). Bayesian Approaches to Shrinkage and Sparse Estimation. Foundations and Trends® in Econometrics, 11(4), 230-354.
Ray, P., & Bhattacharya, A. (2018). Signal Adaptive Variable Selector for the Horseshoe Prior. arXiv.
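A minimal fitting sketch (assuming the bvhar package is attached; the short chain and default specifications are for illustration only, since MCMC settings should be tuned in practice):

```r
library(bvhar)

# BVAR(2) with Minnesota prior and homoskedastic (LDLT) covariance
set.seed(1)
fit <- var_bayes(
  y = etf_vix,
  p = 2,
  num_iter = 1000,
  bayes_spec = set_bvar(),
  cov_spec = set_ldlt(),
  include_mean = TRUE
)
fit
```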
This function fits VAR(p) using the OLS method.
var_lm(y, p = 1, include_mean = TRUE, method = c("nor", "chol", "qr"))

## S3 method for class 'varlse'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'varlse'
logLik(object, ...)

## S3 method for class 'varlse'
AIC(object, ...)

## S3 method for class 'varlse'
BIC(object, ...)

is.varlse(x)

is.bvharmod(x)

## S3 method for class 'varlse'
knit_print(x, ...)
y |
Time series data of which columns indicate the variables |
p |
Lag of VAR (Default: 1) |
include_mean |
Add constant term (Default: |
method |
Method to solve linear equation system.
( |
x |
A |
digits |
digit option to print |
... |
not used |
object |
A |
This package specifies VAR(p) model as
If include_mean = TRUE, a constant term is included.
The function estimates every coefficient matrix.
Consider the response matrix .
Let
be the total number of samples,
let
be the dimension of the time series,
let
be the order of the model,
and let
.
Likelihood of VAR(p) has
where is the design matrix,
and MN is matrix normal distribution.
Then log-likelihood of vector autoregressive model family is specified by
In addition, recall that the OLS estimator for the coefficient matrix is the same as the MLE under the Gaussian assumption.
MLE for has different denominator,
.
Let be the MLE
and let
be the unbiased estimator (
covmat
) for .
Note that
Then
where the number of freely estimated parameters is , i.e.
or
.
Let be the MLE
and let
be the unbiased estimator (
covmat
) for .
Note that
Then
where the number of freely estimated parameters is .
var_lm()
returns an object named varlse
class.
It is a list with the following components:
Coefficient Matrix
Fitted response values
Residuals
LS estimate for covariance matrix
Number of coefficients
Lag of VAR
Dimension of the data
Sample size used when training = totobs - p
Total number of the observation
Matched call
Process: VAR
include constant term (const
) or not (none
)
Design matrix
Raw input
Multivariate response matrix
Solving method
It is also a bvharmod
class.
Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer Publishing.
Akaike, H. (1969). Fitting autoregressive models for prediction. Ann Inst Stat Math 21, 243-247.
Akaike, H. (1971). Autoregressive model fitting for control. Ann Inst Stat Math 23, 163-180.
Akaike H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, vol. 19, no. 6, pp. 716-723.
Akaike H. (1998). Information Theory and an Extension of the Maximum Likelihood Principle. In: Parzen E., Tanabe K., Kitagawa G. (eds) Selected Papers of Hirotugu Akaike. Springer Series in Statistics (Perspectives in Statistics). Springer, New York, NY.
Gideon Schwarz. (1978). Estimating the Dimension of a Model. Ann. Statist. 6 (2) 461 - 464.
summary.varlse()
to summarize VAR model
# Perform the function using etf_vix dataset
fit <- var_lm(y = etf_vix, p = 2)
class(fit)
str(fit)

# Extract coef, fitted values, and residuals
coef(fit)
head(residuals(fit))
head(fitted(fit))
Convert VAR process to infinite vector MA process
VARtoVMA(object, lag_max)
object |
A |
lag_max |
Maximum lag for VMA |
Let VAR(p) be stable.
For VAR coefficient ,
Recursively,
VMA coefficient of k(lag-max + 1) x k dimension
Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer Publishing.
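The recursion above can be sketched numerically. This minimal Python illustration (coefficient matrices as nested lists; not bvhar's API or implementation) computes W_0, ..., W_{lag_max}:

```python
# Sketch of the VAR -> VMA recursion: W_0 = I_k and
# W_j = sum_{i=1}^{j} W_{j-i} A_i, with A_i = 0 for i > p.
# Illustration only; not bvhar's internal implementation.

def matmul(a, b):
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def matadd(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def var_to_vma(coefs, lag_max):
    """coefs: list of k x k matrices A_1, ..., A_p (nested lists)."""
    k = len(coefs[0])
    eye = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(k)]
    zero = [[0.0] * k for _ in range(k)]
    w = [eye]  # W_0 = I_k
    for j in range(1, lag_max + 1):
        acc = zero
        for i in range(1, j + 1):
            if i <= len(coefs):  # A_i = 0 beyond lag p
                acc = matadd(acc, matmul(w[j - i], coefs[i - 1]))
        w.append(acc)
    return w
```

In the scalar case with A_1 = 0.5 and A_2 = 0.25, the recursion gives W_1 = 0.5, W_2 = 0.5, W_3 = 0.375.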
This function fits BVHAR. The covariance term can be homoskedastic or heteroskedastic (stochastic volatility). The model can take a Minnesota, SSVS, or Horseshoe prior.
vhar_bayes(
  y,
  har = c(5, 22),
  num_chains = 1,
  num_iter = 1000,
  num_burn = floor(num_iter/2),
  thinning = 1,
  bayes_spec = set_bvhar(),
  cov_spec = set_ldlt(),
  intercept = set_intercept(),
  include_mean = TRUE,
  minnesota = c("longrun", "short", "no"),
  save_init = FALSE,
  convergence = NULL,
  verbose = FALSE,
  num_thread = 1
)

## S3 method for class 'bvharldlt'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'bvharldlt'
knit_print(x, ...)
y |
Time series data whose columns indicate the variables |
har |
Numeric vector for weekly and monthly order. By default, c(5, 22). |
num_chains |
Number of MCMC chains |
num_iter |
MCMC iteration number |
num_burn |
Number of burn-in (warm-up). Half of the iteration is the default choice. |
thinning |
Keep every thinning-th iteration |
bayes_spec |
A BVHAR model specification by set_bvhar() (default) |
cov_spec |
Covariance specification, e.g. set_ldlt() (default) |
intercept |
Prior for the constant term by set_intercept() (default) |
include_mean |
Add constant term (Default: TRUE) |
minnesota |
Apply cross-variable shrinkage structure (Minnesota-way). Two types: "longrun" (default) and "short"; use "no" to disable. |
save_init |
Save every record starting from the initial values (Default: FALSE) |
convergence |
Convergence threshold, checked as rhat < convergence. By default, NULL. |
verbose |
Print the progress bar in the console. By default, FALSE. |
num_thread |
Number of threads |
x |
A bvharldlt object |
digits |
digit option to print |
... |
not used |
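A small sketch of the sampler bookkeeping implied by these arguments (an illustration of the argument arithmetic only, not bvhar's internals): with the defaults, num_burn = floor(num_iter/2) draws are discarded and every thinning-th remaining draw is kept, so roughly floor((num_iter - num_burn)/thinning) draws are stored per chain.

```python
# Sketch: number of posterior draws retained per chain given the
# vhar_bayes() defaults. Illustration of the argument arithmetic only.

def retained_draws(num_iter=1000, num_burn=None, thinning=1):
    if num_burn is None:
        num_burn = num_iter // 2   # default: half of the iterations
    return (num_iter - num_burn) // thinning
```

For example, the defaults keep 500 draws, while num_iter = 2000, num_burn = 500, thinning = 3 keeps 500 as well.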
Cholesky stochastic volatility modeling for VHAR, based on Kim and Baek (2024).
vhar_bayes() returns an object of class bvharsv. It is a list with the following components:
Posterior mean of coefficients.
Posterior mean of contemporaneous effects.
Every set of MCMC trace.
Name of every parameter.
Indicators for group.
Number of groups.
Number of coefficients: 3m + 1 or 3m
3 (The number of terms. It contains this element for usage in other functions.)
Order for weekly term
Order for monthly term
Dimension of the data
Sample size used for training: totobs - p
Total number of observations
Matched call
Description of the model, e.g. VHAR_SSVS_SV, VHAR_Horseshoe_SV, or VHAR_minnesota-part_SV
Include constant term (const) or not (none)
Coefficients prior specification
log volatility prior specification
Initial values
Intercept prior specification
The number of chains
Total iterations
Burn-in
Thinning
VHAR linear transformation matrix
Raw input
If the prior is SSVS or Horseshoe:
Posterior inclusion probabilities.
Kim, Y. G., and Baek, C. (2024). Bayesian vector heterogeneous autoregressive modeling. Journal of Statistical Computation and Simulation, 94(6), 1139-1157.
Kim, Y. G., and Baek, C. (n.d.). Working paper.
This function fits VHAR using the OLS method.
vhar_lm(
  y,
  har = c(5, 22),
  include_mean = TRUE,
  method = c("nor", "chol", "qr")
)

## S3 method for class 'vharlse'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'vharlse'
logLik(object, ...)

## S3 method for class 'vharlse'
AIC(object, ...)

## S3 method for class 'vharlse'
BIC(object, ...)

is.vharlse(x)

## S3 method for class 'vharlse'
knit_print(x, ...)
y |
Time series data whose columns indicate the variables |
har |
Numeric vector for weekly and monthly order. By default, c(5, 22). |
include_mean |
Add constant term (Default: TRUE) |
method |
Method to solve the linear equation system:
"nor" (normal equations, default), "chol" (Cholesky decomposition), or "qr" (QR decomposition) |
x |
A vharlse object |
digits |
digit option to print |
... |
not used |
object |
A vharlse object |
For the VHAR model written in multivariate regression form \(Y_0 = X_1 \Phi + Z\),
the function gives the basic least squares estimates.
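The HAR design can be illustrated by constructing the daily, weekly, and monthly regressors for one series under the default har = c(5, 22). This Python sketch is an illustration only (bvhar instead applies the linear transformation matrix HARtrans to the VAR(22) design matrix): the weekly regressor is the 5-day average and the monthly regressor the 22-day average of past values.

```python
# Sketch of HAR regressor construction for one series:
# daily = y_{t-1}, weekly = mean(y_{t-1..t-5}), monthly = mean(y_{t-1..t-22}).
# Illustration of the HAR structure, not bvhar's internal code.

def har_regressors(y, week=5, month=22):
    rows = []
    for t in range(month, len(y)):
        past = y[:t]
        daily = past[-1]                       # most recent value
        weekly = sum(past[-week:]) / week      # 5-day average
        monthly = sum(past[-month:]) / month   # 22-day average
        rows.append((daily, weekly, monthly))
    return rows
```

For y = 1, 2, ..., 23, the single usable row has daily = 22, weekly = 20, and monthly = 11.5.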
vhar_lm() returns an object of class vharlse.
It is a list with the following components:
Coefficient Matrix
Fitted response values
Residuals
LS estimate for covariance matrix
Number of coefficients
Dimension of the data
Sample size used for training: totobs - month
Multivariate response matrix
3 (The number of terms. vharlse contains this element for usage in other functions.)
Order for weekly term
Order for monthly term
Total number of observations
Process: VHAR
Include constant term (const) or not (none)
VHAR linear transformation matrix
Design matrix of VAR(month)
Raw input
Solving method
Matched call
It is also a bvharmod class.
Baek, C. and Park, M. (2021). Sparse vector heterogeneous autoregressive modeling for realized volatility. J. Korean Stat. Soc. 50, 495-510.
Bubák, V., Kočenda, E., & Žikeš, F. (2011). Volatility transmission in emerging European foreign exchange markets. Journal of Banking & Finance, 35(11), 2829-2841.
Corsi, F. (2008). A Simple Approximate Long-Memory Model of Realized Volatility. Journal of Financial Econometrics, 7(2), 174-196.
summary.vharlse() to summarize the VHAR model
# Perform the function using etf_vix dataset
fit <- vhar_lm(y = etf_vix)
class(fit)
str(fit)

# Extract coef, fitted values, and residuals
coef(fit)
head(residuals(fit))
head(fitted(fit))
Convert VHAR process to infinite vector MA process
VHARtoVMA(object, lag_max)
object |
A vharlse object |
lag_max |
Maximum lag for VMA |
Let the VAR(p) process be stable.
A VHAR model is a VAR(22) whose coefficient matrices are determined by the daily, weekly, and monthly HAR coefficients \(\Phi^{(d)}, \Phi^{(w)}, \Phi^{(m)}\):
\(A_j = \Phi^{(d)} \mathbb{1}(j = 1) + \frac{1}{5} \Phi^{(w)} \mathbb{1}(j \le 5) + \frac{1}{22} \Phi^{(m)}\) for \(j = 1, \ldots, 22\).
Observe that applying the recursion of VARtoVMA() to these \(A_j\) yields the VMA coefficient matrices.
VMA coefficient matrices stacked into a k(lag_max + 1) x k matrix
Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer Publishing.
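The VHAR-as-VAR(22) expansion can be sketched in the scalar case. In this Python illustration (not bvhar's code), each lag-j coefficient combines the daily, weekly, and monthly HAR coefficients, and the 22 lag coefficients sum to phi_d + phi_w + phi_m:

```python
# Sketch: map scalar HAR coefficients (phi_d, phi_w, phi_m) to the
# 22 VAR lag coefficients via
# A_j = phi_d * 1(j == 1) + (phi_w / 5) * 1(j <= 5) + phi_m / 22.
# Illustration of the VHAR -> VAR(22) expansion, not bvhar's code.

def vhar_to_var22(phi_d, phi_w, phi_m, week=5, month=22):
    return [phi_d * (j == 1) + (phi_w / week) * (j <= week) + phi_m / month
            for j in range(1, month + 1)]
```

For example, with phi_d = 0.4, phi_w = 0.3, phi_m = 0.22, the first lag coefficient is 0.4 + 0.3/5 + 0.22/22 = 0.47 and the last is 0.22/22 = 0.01.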