BLE_Reg

library(BayesSampling)

Bayes linear approach to finite population

(From Section 2.2 of Gonçalves, Moura and Migon: "Bayes linear estimation for finite population with emphasis on categorical data")

Consider $U = \{u_1, \ldots, u_N\}$ a finite population with $N$ units. Let $y = (y_1, \ldots, y_N)'$ be the vector with the values of interest of the units in $U$. The response vector $y$ is partitioned into the known observed sample vector $y_s$ of dimension $n$, and the non-observed vector $y_{\bar{s}}$ of dimension $N - n$. The general problem is to predict a function of the vector $y$, such as the total $T = \sum_{i=1}^{N}y_i = 1^{'}_{s} y_s + 1^{'}_{\bar{s}} y_{\bar{s}}$, where $1_s$ and $1_{\bar{s}}$ are the vectors of 1's of dimensions $n$ and $N - n$, respectively. In the model-based approach, this is usually done by assuming a parametric model for the population values $y_i$'s and then obtaining the Empirical Best Linear Unbiased Predictor (EBLUP) of the unknown vector $y_{\bar{s}}$ under this model. Usually, the mean square error of the EBLUP of $T$ is obtained by second-order approximation, as well as an unbiased estimator of it. See Valliant, Dorfman and Royall (2000), chapter 2, for details.

The Bayesian approach to finite population prediction often assumes a parametric model, but it aims to find the posterior distribution of $T$ given $y_s$. Point estimates can be obtained by setting a loss function, although in many practical problems the posterior mean is considered, with its associated variance given by the posterior variance, i.e.:

$$\hat{T} = E(T \mid y_s) \quad \text{and} \quad \hat{V}(T) = V(T \mid y_s). \qquad (2.3)$$

It is possible to obtain an approximation to the quantities in (2.3) by using a Bayes linear estimation approach. Here, we obtain the estimators by assuming a general two-stage hierarchical model for a finite population, specified only by its mean and variance-covariance matrix, presented in Bolfarine and Zacks (1992), page 76. Particular cases describing usual population structures found in practice are easily derived from (2.4). The general model can be written, in terms of first and second moments only, as:

$$y \mid \beta \sim [X\beta,\; V] \quad \text{and} \quad \beta \sim [a,\; R], \qquad (2.4)$$

where $X$ is a covariate matrix of dimension $N \times p$, with rows $X_i = (x_{i1}, \ldots, x_{ip})$, $i = 1, \ldots, N$; $\beta = (\beta_1, \ldots, \beta_p)'$ is a $p \times 1$ vector of unknown parameters; and $y$, given $\beta$, is a random vector with mean $X\beta$ and known covariance matrix $V$ of dimension $N \times N$. Analogously, $a$ and $R$ are the respective $p \times 1$ prior mean vector and $p \times p$ prior covariance matrix of $\beta$.
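
Although (2.4) fixes only the first two moments, simulating one concrete instance of the two-stage structure can help intuition. A minimal sketch, assuming Gaussian distributions at both stages purely for illustration (the model itself makes no such distributional assumption, and all values below are hypothetical):

set.seed(123)
N <- 10; p <- 2
X <- cbind(1, rnorm(N))                     # N x p covariate matrix (intercept + one covariate)
a <- c(0, 1)                                # p x 1 prior mean of beta
R <- diag(p)                                # p x p prior covariance of beta
V <- diag(N)                                # N x N known covariance of y given beta

beta <- a + t(chol(R)) %*% rnorm(p)         # second stage: beta ~ [a, R]
y <- X %*% beta + t(chol(V)) %*% rnorm(N)   # first stage: y | beta ~ [X beta, V]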

Since the response vector $y$ is partitioned into $y_s$ and $y_{\bar{s}}$, the matrix $X$, which is assumed to be known, is analogously partitioned into $X_s$ and $X_{\bar{s}}$, and $V$ is partitioned into $V_s$, $V_{\bar{s}}$, $V_{s\bar{s}}$ and $V_{\bar{s}s}$. The first aim is to predict $y_{\bar{s}}$ given the observed sample $y_s$, and then the total $T$. We did this in the following steps: first, we used a joint prior distribution that is only partially specified in terms of moments, as follows:

$$\begin{pmatrix} y_s \\ y_{\bar{s}} \end{pmatrix} \sim \left[ \begin{pmatrix} X_s \\ X_{\bar{s}} \end{pmatrix} a,\;\; \begin{pmatrix} X_s R X_s' + V_s & X_s R X_{\bar{s}}' + V_{s\bar{s}} \\ X_{\bar{s}} R X_s' + V_{\bar{s}s} & X_{\bar{s}} R X_{\bar{s}}' + V_{\bar{s}} \end{pmatrix} \right]$$
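
These blockwise moments follow from (2.4), since $E(y) = Xa$ and $Var(y) = XRX' + V$. A short sketch of the block computations in R, with hypothetical toy inputs and $V$ taken block diagonal so that $V_{s\bar{s}} = 0$:

Xs     <- matrix(c(1, 1, 2, 5), nrow = 2)    # sampled rows of X (toy values)
X_nots <- matrix(c(1, 1, 0, 4), nrow = 2)    # non-sampled rows of X (toy values)
a <- c(1.5, 6)
R <- matrix(c(10, 2, 2, 10), nrow = 2)
Vs <- diag(2); V_nots <- diag(2)             # block diagonal V: Cov of errors across parts is 0

mean_s     <- Xs %*% a                       # E(y_s) = X_s a
mean_nots  <- X_nots %*% a                   # E(y_nots) = X_nots a
var_s      <- Xs %*% R %*% t(Xs) + Vs        # X_s R X_s' + V_s
var_nots   <- X_nots %*% R %*% t(X_nots) + V_nots
cov_s_nots <- Xs %*% R %*% t(X_nots)         # X_s R X_nots' (the V cross-block is zero here)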

Application

The BLE_Reg() function applies this methodology to the given sample: it calculates the Bayes Linear Estimator (and its associated variance) of the parameter $\beta$ and of the response values of the individuals not in the sample, given the auxiliary variable values. In a simple model, the auxiliary variable has value 1 for every individual.
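
For instance, in that simple intercept-only case the design matrices reduce to columns of 1's; a hypothetical sketch of how they would be built (values chosen only for illustration):

n <- 4; N <- 10
xs     <- matrix(1, nrow = n, ncol = 1)       # sampled units: intercept only
x_nots <- matrix(1, nrow = N - n, ncol = 1)   # non-sampled units: intercept only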

Examples

  1. Example from the help page
xs <- matrix(c(1,1,1,1,2,3,5,0), nrow = 4, ncol = 2)   # design matrix of the sampled units (intercept + auxiliary variable)
ys <- c(12, 17, 28, 2)                                 # observed responses
x_nots <- matrix(c(1,1,1,0,1,4), nrow = 3, ncol = 2)   # design matrix of the non-sampled units
a <- c(1.5, 6)                                         # prior mean of beta
R <- matrix(c(10,2,2,10), nrow = 2, ncol = 2)          # prior covariance of beta
Vs <- diag(c(1,1,1,1))                                 # error covariance of the sampled units
V_nots <- diag(c(1,1,1))                               # error covariance of the non-sampled units

Estimator <- BLE_Reg(ys,xs,a,R,Vs,x_nots,V_nots)
Estimator
#> $est.beta
#>       Beta
#> 1 1.723363
#> 2 5.206675
#> 
#> $Vest.beta
#>           V1          V2
#> 1  0.6708234 -0.17568311
#> 2 -0.1756831  0.07225381
#> 
#> $est.mean
#>      y_nots
#> 1  1.723363
#> 2  6.930039
#> 3 22.550064
#> 
#> $Vest.mean
#>            V1         V2          V3
#> 1  1.67082340 0.49514029 -0.03190904
#> 2  0.49514029 1.39171098  0.08142307
#> 3 -0.03190904 0.08142307  1.42141940
#> 
#> $est.tot
#> [1] 90.20347
#> 
#> $Vest.tot
#> [1] 5.573262
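
As a sanity check, the estimates above can be reproduced with the standard Bayes linear closed forms, here written in precision form (a sketch of the underlying algebra; the package may organize the computation differently):

beta_var <- solve(solve(R) + t(xs) %*% solve(Vs) %*% xs)                # reproduces $Vest.beta
beta_hat <- beta_var %*% (solve(R) %*% a + t(xs) %*% solve(Vs) %*% ys)  # reproduces $est.beta

mean_nots <- x_nots %*% beta_hat                            # reproduces $est.mean
var_nots  <- x_nots %*% beta_var %*% t(x_nots) + V_nots     # reproduces $Vest.mean

sum(ys) + sum(mean_nots)   # total estimate, reproduces $est.tot
sum(var_nots)              # its variance, reproduces $Vest.tot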