This vignette¹ documents the estimation procedure fit_model() in {RprobitB}.
Bayesian estimation of the probit model builds upon the work of McCulloch and Rossi (1994), Nobile (1998), Allenby and Rossi (1998), and Imai and van Dyk (2005). A key ingredient is the concept of data augmentation, see Albert and Chib (1993): the idea is to treat the latent utilities $U$ in the model equation $U = X\beta + \epsilon$ as additional parameters. Conditional on $U$, the probit model then constitutes a standard Bayesian linear regression set-up. Its posterior distribution can be approximated by iteratively drawing and updating each model parameter conditional on the other parameters (the so-called Gibbs sampling approach).
A priori, we assume the following (conjugate) parameter distributions:
- $(s_1, \dots, s_C) \sim D_C(\delta)$, where $D_C(\delta)$ denotes the $C$-dimensional Dirichlet distribution with concentration parameter vector $\delta = (\delta_1, \dots, \delta_C)$,
- $\alpha \sim \text{MVN}_{P_f}(\psi, \Psi)$, where $\text{MVN}_{P_f}$ denotes the $P_f$-dimensional normal distribution with mean $\psi$ and covariance $\Psi$,
- $b_c \sim \text{MVN}_{P_r}(\xi, \Xi)$, independently for all $c$,
- $\Omega_c \sim W^{-1}_{P_r}(\nu, \Theta)$, independently for all $c$, where $W^{-1}_{P_r}(\nu, \Theta)$ denotes the $P_r$-dimensional inverse Wishart distribution with $\nu$ degrees of freedom and scale matrix $\Theta$,
- and $\Sigma \sim W^{-1}_{J-1}(\kappa, \Lambda)$.
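To make these distributions concrete, here is a minimal base R sketch that draws once from each prior; all dimensions and hyperparameter values are made-up for illustration, and the inverse Wishart draws are obtained by inverting Wishart draws with inverted scale matrices:

# One draw from each prior; all values below are made-up
set.seed(1)
C <- 2; P_f <- 2; P_r <- 2; J <- 3
# class weights: Dirichlet via normalized Gamma draws
delta <- rep(1, C)
g <- rgamma(C, shape = delta)
s <- g / sum(g)
# fixed coefficients: alpha ~ MVN(psi, Psi)
psi <- rep(0, P_f); Psi <- diag(P_f)
alpha <- psi + drop(t(chol(Psi)) %*% rnorm(P_f))
# class means: b_c ~ MVN(xi, Xi)
xi <- rep(0, P_r); Xi <- diag(P_r)
b_c <- xi + drop(t(chol(Xi)) %*% rnorm(P_r))
# class covariances: Omega_c ~ W^-1(nu, Theta)
nu <- P_r + 2; Theta <- diag(P_r)
Omega_c <- solve(stats::rWishart(1, df = nu, Sigma = solve(Theta))[, , 1])
# error covariance of dimension J - 1: Sigma ~ W^-1(kappa, Lambda)
kappa <- J + 1; Lambda <- diag(J - 1)
Sigma <- solve(stats::rWishart(1, df = kappa, Sigma = solve(Lambda))[, , 1])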
These prior distributions imply the following conditional posterior distributions:
The class weights are drawn from the Dirichlet distribution $$(s_1, \dots, s_C) \mid \delta, z \sim D_C(\delta_1 + m_1, \dots, \delta_C + m_C),$$ where for $c = 1, \dots, C$, $m_c = \#\{n : z_n = c\}$ denotes the current absolute class size.²
Independently for all $n$, we update the allocation variables $(z_n)_n$ from their conditional distribution $$\Pr(z_n = c \mid s, \beta_n, b, \Omega) \propto s_c \, \phi_{P_r}(\beta_n \mid b_c, \Omega_c), \quad c = 1, \dots, C,$$ where $\phi_{P_r}(\cdot \mid b_c, \Omega_c)$ denotes the density of $\text{MVN}_{P_r}(b_c, \Omega_c)$.
The class means $(b_c)_c$ are updated independently for all $c$ via $$b_c \mid \beta, z, \Omega \sim \text{MVN}_{P_r}(\mu_{b_c}, \Sigma_{b_c}),$$ where $\mu_{b_c} = (\Xi^{-1} + m_c\Omega_c^{-1})^{-1}(\Xi^{-1}\xi + m_c\Omega_c^{-1}\bar{b}_c)$, $\Sigma_{b_c} = (\Xi^{-1} + m_c\Omega_c^{-1})^{-1}$, and $\bar{b}_c = m_c^{-1}\sum_{n : z_n = c} \beta_n$.
The class covariance matrices $(\Omega_c)_c$ are updated independently for all $c$ via $$\Omega_c \mid \beta, z, b \sim W^{-1}_{P_r}(\mu_{\Omega_c}, \Sigma_{\Omega_c}),$$ where $\mu_{\Omega_c} = \nu + m_c$ and $\Sigma_{\Omega_c} = \Theta^{-1} + \sum_{n : z_n = c}(\beta_n - b_c)(\beta_n - b_c)'$.
Independently for all $n$ and $t$ and conditionally on the other components, the utility vectors $(U_{nt:})$ follow a $J-1$-dimensional truncated multivariate normal distribution, where the truncation points are determined by the choices $y_{nt}$. To sample from a truncated multivariate normal distribution, we apply a sub-Gibbs sampler, following the approach of Geweke (1998): $$U_{ntj} \mid U_{nt(-j)}, \alpha, \beta_n, \Sigma \sim \mathcal{N}(\mu_{U_{ntj}}, \Sigma_{U_{ntj}}), \quad \text{truncated according to } y_{nt},$$ where $U_{nt(-j)}$ denotes the vector $(U_{nt:})$ without the element $U_{ntj}$, $\mathcal{N}$ denotes the univariate normal distribution, $$\Sigma_{U_{ntj}} = 1/(\Sigma^{-1})_{jj}, \qquad \mu_{U_{ntj}} = (W_{nt}'\alpha + X_{nt}'\beta_n)_j - \Sigma_{U_{ntj}} (\Sigma^{-1})_{j(-j)} \big(U_{nt(-j)} - W_{nt(-j)}'\alpha - X_{nt(-j)}'\beta_n\big),$$ and where $(\Sigma^{-1})_{jj}$ denotes the $(j,j)$-th element of $\Sigma^{-1}$, $(\Sigma^{-1})_{j(-j)}$ the $j$-th row of $\Sigma^{-1}$ without the $j$-th entry, and $W_{nt(-j)}$ and $X_{nt(-j)}$ the coefficient matrices $W_{nt}$ and $X_{nt}$, respectively, without the $j$-th column.
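Each step of this sub-Gibbs sampler thus reduces to a draw from a univariate truncated normal distribution. A minimal sketch of such a draw via the inverse-CDF method (the helper name rtnorm is made-up and not part of {RprobitB}):

# Draw from N(mu, sd^2) truncated to the interval (lower, upper)
rtnorm <- function(mu, sd, lower = -Inf, upper = Inf) {
  u <- runif(1, pnorm(lower, mu, sd), pnorm(upper, mu, sd))
  qnorm(u, mu, sd)
}
rtnorm(mu = 0, sd = 1, lower = 0)  # a draw restricted to positive values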
Updating the fixed coefficient vector $\alpha$ is achieved by applying the formula for Bayesian linear regression of the regressors $W_{nt}$ on the regressands $(U_{nt:}) - X_{nt}'\beta_n$, i.e. $$\alpha \mid U, \beta, \Sigma \sim \text{MVN}_{P_f}(\mu_\alpha, \Sigma_\alpha),$$ where $\mu_\alpha = \Sigma_\alpha (\Psi^{-1}\psi + \sum_{n=1,t=1}^{N,T} W_{nt} \Sigma^{-1} ((U_{nt:}) - X_{nt}'\beta_n))$ and $\Sigma_\alpha = (\Psi^{-1} + \sum_{n=1,t=1}^{N,T} W_{nt}\Sigma^{-1} W_{nt}')^{-1}$.
Analogously to $\alpha$, the random coefficients $(\beta_n)_n$ are updated independently for all $n$ via $$\beta_n \mid U, \alpha, \Sigma, z, b, \Omega \sim \text{MVN}_{P_r}(\mu_{\beta_n}, \Sigma_{\beta_n}),$$ where $\mu_{\beta_n} = \Sigma_{\beta_n} (\Omega_{z_n}^{-1}b_{z_n} + \sum_{t=1}^{T} X_{nt} \Sigma^{-1} (U_{nt:} - W_{nt}'\alpha))$ and $\Sigma_{\beta_n} = (\Omega_{z_n}^{-1} + \sum_{t=1}^{T} X_{nt}\Sigma^{-1} X_{nt}')^{-1}$.
The error term covariance matrix $\Sigma$ is updated by means of $$\Sigma \mid U, \alpha, \beta \sim W^{-1}_{J-1}(\kappa + NT, \Lambda + S),$$ where $S = \sum_{n=1,t=1}^{N,T} \varepsilon_{nt} \varepsilon_{nt}'$ and $\varepsilon_{nt} = (U_{nt:}) - W_{nt}'\alpha - X_{nt}'\beta_n$.
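To illustrate one of these conditional draws in code, the update of $\Sigma$ can be sketched as follows; the residuals and dimensions are made-up toy values, and the inverse Wishart draw again inverts a Wishart draw with inverted scale:

# Toy sketch of the Sigma update given current residuals
N <- 5; Tt <- 3; J <- 3                # made-up dimensions
kappa <- J + 1; Lambda <- diag(J - 1)
eps <- matrix(rnorm(N * Tt * (J - 1)), ncol = J - 1)  # rows play the role of epsilon_nt'
S <- crossprod(eps)                    # equals the sum of epsilon_nt epsilon_nt'
Sigma <- solve(stats::rWishart(1, df = kappa + N * Tt, Sigma = solve(Lambda + S))[, , 1])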
Samples obtained from the updating scheme described above lack identification (except for the $s$ and $z$ draws), compare the vignette on the model definition. Therefore, subsequent to the sampling, the following normalizations are required for the draws of each iteration $i$:
- $\alpha^{(i)} \cdot \omega^{(i)}$,
- $b_c^{(i)} \cdot \omega^{(i)}$, $c = 1, \dots, C$,
- $U_{nt}^{(i)} \cdot \omega^{(i)}$, $n = 1, \dots, N$, $t = 1, \dots, T$,
- $\beta_n^{(i)} \cdot \omega^{(i)}$, $n = 1, \dots, N$,
- $\Omega_c^{(i)} \cdot (\omega^{(i)})^2$, $c = 1, \dots, C$, and
- $\Sigma^{(i)} \cdot (\omega^{(i)})^2$,
where either $\omega^{(i)} = \sqrt{\text{const} / (\Sigma^{(i)})_{jj}}$ with $(\Sigma^{(i)})_{jj}$ the $j$-th diagonal element of $\Sigma^{(i)}$, $1 \leq j \leq J-1$, or alternatively $\omega^{(i)} = \text{const}/\alpha_p^{(i)}$ for some coordinate $1 \leq p \leq P_f$ of the $i$-th draw of the coefficient vector $\alpha$. Here, const is any positive constant (typically 1). The preferences are flipped if $\omega^{(i)} < 0$, which is only the case if $\alpha_p^{(i)} < 0$.
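For example, with the first error-term variance fixed to $\text{const} = 1$, the normalization of a single iteration's $\alpha$ and $\Sigma$ draws might look as follows (made-up draws; the remaining parameters scale analogously):

# Made-up draws for J - 1 = 2 and P_f = 2
Sigma_draw <- matrix(c(2, 0.5, 0.5, 1), nrow = 2)
alpha_draw <- c(-0.2, 0.4)
# rescale such that the first error-term variance becomes 1
omega <- sqrt(1 / Sigma_draw[1, 1])
alpha_draw <- alpha_draw * omega
Sigma_draw <- Sigma_draw * omega^2
Sigma_draw[1, 1]  # now equals 1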
The theory behind Gibbs sampling guarantees that the sequence of samples produced by the updating scheme is a Markov chain whose stationary distribution equals the desired joint posterior distribution. It takes a certain number of iterations for that stationary distribution to be approximated reasonably well. Therefore, it is common practice to discard the first $B$ out of $R$ samples (the so-called burn-in period). Furthermore, correlation between nearby samples is to be expected. To obtain approximately independent samples, we keep only every $Q$-th sample when computing Gibbs sample statistics like the expectation and the standard deviation. The independence of the samples can be verified by computing the serial correlation, and the convergence of the Gibbs sampler can be checked by considering trace plots, see below.
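In code, discarding the burn-in and thinning a chain of draws amounts to a simple subset (a sketch with made-up values):

# Keep every Q-th draw after discarding the first B of R draws
R <- 10000; B <- R / 2; Q <- 10
draws <- rnorm(R)  # stand-in for a chain of Gibbs draws
draws_nbt <- draws[seq(B + 1, R, by = Q)]
length(draws_nbt)  # (R - B) / Q = 500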
The fit_model() function

The Gibbs sampling scheme described above can be executed by applying the function

fit_model(data = data)

where data must be an RprobitB_data object (see the vignette about choice data). The function has the following optional arguments:
- scale: A character which determines the utility scale. It is of the form "<parameter> := <value>", where <parameter> is either the name of a fixed effect or Sigma_<j>,<j> for the <j>-th diagonal element of Sigma, and <value> is the value of the fixed parameter (i.e. const introduced above). Per default, scale = "Sigma_1,1 := 1", i.e. the first error-term variance is fixed to 1.
- R: The number of iterations of the Gibbs sampler. The default is R = 10000.
- B: The length of the burn-in period, i.e. a non-negative number of samples to be discarded. The default is B = R/2.
- Q: The thinning factor for the Gibbs samples, i.e. only every Q-th sample is kept. The default is Q = 1.
- print_progress: A boolean, determining whether to print the Gibbs sampler progress.
- prior: A named list of parameters for the prior distributions (their default values are documented in the check_prior() function):
  - eta: The mean vector of length P_f of the normal prior for alpha.
  - Psi: The covariance matrix of dimension P_f x P_f of the normal prior for alpha.
  - delta: The concentration parameter of length 1 of the Dirichlet prior for s.
  - xi: The mean vector of length P_r of the normal prior for each b_c.
  - D: The covariance matrix of dimension P_r x P_r of the normal prior for each b_c.
  - nu: The degrees of freedom (a natural number greater than P_r) of the Inverse Wishart prior for each Omega_c.
  - Theta: The scale matrix of dimension P_r x P_r of the Inverse Wishart prior for each Omega_c.
  - kappa: The degrees of freedom (a natural number greater than J-1) of the Inverse Wishart prior for Sigma.
  - E: The scale matrix of dimension J-1 x J-1 of the Inverse Wishart prior for Sigma.
- latent_classes: A list of parameters specifying the number and the updating scheme of latent classes, see the vignette on modeling heterogeneity.
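For instance, a call that sets several of these arguments explicitly could look as follows; this is a sketch, the shown values are arbitrary, and data stands for an RprobitB_data object:

model <- fit_model(
  data = data,
  scale = "Sigma_1,1 := 1",  # fix the first error-term variance to 1
  R = 20000,                 # number of Gibbs sampler iterations
  B = 10000,                 # burn-in length
  Q = 10,                    # keep only every 10th sample
  print_progress = TRUE,     # report sampling progress
  prior = list(eta = rep(0, 4), Psi = diag(4))  # example prior for alpha (P_f = 4 assumed)
)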
In the previous vignette on choice data, we introduced the train_choice data set that contains 2922 choices between two fictional train route alternatives. The following lines fit a probit model that explains the chosen trip alternatives (choice) by their price, time, number of changes, and level of comfort (the lower this value, the higher the comfort). For normalization, the first linear coefficient, the price, was fixed to -1, which allows the other coefficients to be interpreted as monetary values:
form <- choice ~ price + time + change + comfort | 0
data <- prepare_data(form = form, choice_data = train_choice, id = "deciderID", idc = "occasionID")
model_train <- fit_model(
data = data,
scale = "price := -1"
)
The estimated coefficients (using the mean of the Gibbs samples as a point estimate) can be printed via
coef(model_train)
#> Estimate (sd)
#> 1 price -1.00 (0.00)
#> 2 time -25.90 (2.09)
#> 3 change -4.82 (0.84)
#> 4 comfort -14.49 (0.86)
and visualized via the corresponding plot method.
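Assuming that the return value of coef() comes with a plot method (this exact call is an assumption; the figure is omitted here):

plot(coef(model_train))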
The results indicate that the deciders value one hour of travel time at about 25€, an additional change at 5€, and a more comfortable class at 14€.³
The Gibbs samples are saved in list form in the RprobitB_fit object at the entry "gibbs_samples", i.e.
str(model_train$gibbs_samples, max.level = 2, give.attr = FALSE)
#> List of 2
#> $ gibbs_samples_raw:List of 2
#> ..$ alpha: num [1:1000, 1:4] -0.000713 -0.022961 -0.031988 -0.036215 -0.03571 ...
#> ..$ Sigma: num [1:1000, 1] 1.05 1.1 1.03 1.02 1.01 ...
#> $ gibbs_samples_nbt:List of 2
#> ..$ alpha: num [1:500, 1:4] -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 ...
#> ..$ Sigma: num [1:500, 1] 617 611 626 552 734 ...
This object contains two elements: gibbs_samples_raw is a list of the raw samples from the Gibbs sampler, and gibbs_samples_nbt contains the Gibbs samples used for parameter estimates, i.e. the normalized and thinned Gibbs samples after the burn-in.
Calling the summary function on the estimated RprobitB_fit object yields additional information about the Gibbs samples gibbs_samples_nbt. You can specify a list FUN of functions that compute any point estimate of the Gibbs samples⁴, for example mean for the arithmetic mean, stats::sd for the standard deviation, R_hat for the Gelman-Rubin statistic (Gelman and Rubin 1992)⁵, or custom statistics like the absolute difference between the median and the mean.
summary(model_train,
FUN = c(
"mean" = mean,
"sd" = stats::sd,
"R^" = R_hat,
"custom_stat" = function(x) abs(mean(x) - median(x))
)
)
#> Probit model
#> Formula: choice ~ price + time + change + comfort | 0
#> R: 1000, B: 500, Q: 1
#> Level: Utility differences with respect to alternative 'B'.
#> Scale: Coefficient of effect 'price' (alpha_1) fixed to -1.
#>
#> Gibbs sample statistics
#> mean sd R^ custom_stat
#> alpha
#>
#> 1 -1.00 0.00 1.00 0.00
#> 2 -25.90 2.09 1.04 0.07
#> 3 -4.82 0.84 1.00 0.02
#> 4 -14.49 0.86 1.00 0.02
#>
#> Sigma
#>
#> 1,1 661.69 59.21 1.03 7.30
Calling the plot method with the additional argument type = "trace" plots the trace of the Gibbs samples gibbs_samples_nbt:
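In code, following the method and argument just named (the trace plots themselves are omitted here):

plot(model_train, type = "trace")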
Additionally, we can visualize the serial correlation of the Gibbs samples via the argument type = "acf". The boxes in the top-right corner state the total sample size TSS (here R - B = 1000 - 500 = 500), the effective sample size ESS, and the factor by which TSS is larger than ESS. Here, the effective sample size is the value $\text{TSS} / (1 + 2\sum_{k \geq 1} \rho_k)$, where $\rho_k$ is the autocorrelation between samples that lie $k$ positions apart. The autocorrelations are estimated via the stats::acf() function.
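To make the ESS computation concrete, it can be approximated by hand from the saved samples, here for the second alpha coefficient; this is a sketch that truncates the autocorrelation sum at the default maximal lag of stats::acf():

draws <- model_train$gibbs_samples$gibbs_samples_nbt$alpha[, 2]
rho <- stats::acf(draws, plot = FALSE)$acf[-1]  # autocorrelations for lags 1, 2, ...
tss <- length(draws)
ess <- tss / (1 + 2 * sum(rho))
c(TSS = tss, ESS = ess)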
The transform method can be used to transform an RprobitB_fit object in three ways:
- change the length B of the burn-in period,
- change the thinning factor Q of the Gibbs samples, or
- change the utility scale.
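For example (a sketch; the argument names follow the descriptions above, and the shown values are arbitrary):

model_train <- transform(model_train, B = 250)                   # new burn-in length
model_train <- transform(model_train, Q = 10)                    # new thinning factor
model_train <- transform(model_train, scale = "Sigma_1,1 := 1")  # new utility scale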
¹ This vignette is built using R 4.4.2 with the {RprobitB} 1.1.4 package.
² Mind that the model is invariant to permutations of the class labels $1, \dots, C$. For that reason, we accept an update only if the ordering $s_1 > \dots > s_C$ holds, thereby ensuring a unique labeling of the classes.
³ These results are consistent with the ones presented in a vignette of the mlogit package, which analyzes the same data set using the logit model.
⁴ Use the function point_estimates() to access the Gibbs sample statistics as an RprobitB_parameter object.
⁵ A Gelman-Rubin statistic close to 1 indicates that the chain of Gibbs samples converged to the stationary distribution.