This vignette illustrates the standard use of the PLNPCA function and the methods accompanying the R6 classes PLNPCAfamily and PLNPCAfit.
The packages required for the analysis are PLNmodels plus some others for data manipulation and representation:
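A typical setup is sketched below; the exact list of companion packages is an assumption, but ggplot2 and dplyr are used later in this vignette:

library(PLNmodels)
library(ggplot2)  # used for the fitted-vs-observed plot at the end of the vignette
library(dplyr)    # provides the pipe (%>%) used in the same plot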
The main function PLNPCA integrates some features of the future package to perform parallel computing: you can set your plan now to speed the fit by relying on 2 workers as follows:
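A minimal sketch, assuming the multisession strategy of future is available on your machine:

library(future)
plan(multisession, workers = 2)  # two parallel workers; adjust to your hardware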
We illustrate our point with the trichoptera data set, a full description of which can be found in the corresponding vignette. Data preparation is also detailed in the specific vignette.
The trichoptera data frame stores a matrix of counts (trichoptera$Abundance), a matrix of offsets (trichoptera$Offset) and some vectors of covariates (trichoptera$Wind, trichoptera$Temperature, etc.).
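Assuming the data ships with the package and is prepared as in the dedicated vignette (prepare_data() builds the abundance, offset and covariate components), loading it boils down to:

data(trichoptera)
# assumes the raw object has Abundance and Covariate components, as in the data vignette
trichoptera <- prepare_data(trichoptera$Abundance, trichoptera$Covariate)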
In the vein of Tipping and Bishop (1999), we introduce in Chiquet, Mariadassou, and Robin (2018) a probabilistic PCA model for multivariate count data which is a variant of the Poisson Lognormal model of Aitchison and Ho (1989) (see the PLN vignette as a reminder). Indeed, it can be viewed as a PLN model with an additional rank constraint on the covariance matrix Σ such that rank(Σ) = q.
This PLN-PCA model can be written in a hierarchical framework where a sample of p-dimensional observation vectors Yi is related to some q-dimensional vectors of latent variables Wi as follows:

$$W_i \sim \mathcal{N}(0, I_q), \qquad Z_i = \mu + C W_i, \qquad Y_i \mid Z_i \sim \mathcal{P}(\exp(Z_i)).$$
The parameter μ ∈ ℝp corresponds to the main effects, the p × q matrix C to the loadings in the parameter space and Wi to the scores of the i-th observation in the low-dimensional latent subspace of the parameter space. The dimension of the latent space q corresponds to the number of axes in the PCA or, in other words, to the rank of CC⊺. A hopefully more intuitive way of writing this model is the following:

$$Y_i \mid Z_i \sim \mathcal{P}(\exp(Z_i)), \qquad Z_i \sim \mathcal{N}(\mu, \Sigma), \qquad \mathrm{rank}(\Sigma) = q,$$

where the interpretation of PLN-PCA as a rank-constrained PLN model is more obvious.
Just like PLN, PLN-PCA generalizes to a formulation close to a multivariate generalized linear model, where the main effect is due to a linear combination of d covariates xi and to a vector oi of p offsets in sample i. The latent layer then reads

$$Z_i = o_i + x_i^\top B + C W_i,$$

where B is a d × p matrix of regression parameters.
Dimension reduction and visualization are the main objectives of (PLN-)PCA. To reach this goal, we need to first estimate the model parameters. Inference in PLN-PCA focuses on the regression parameters B and on the covariance matrix Σ. Technically speaking, we adopt a variational strategy to approximate the log-likelihood function and optimize the resulting variational surrogate of the log-likelihood with a gradient-ascent-based approach. To this end, we rely on the CCSA algorithm of Svanberg (2002) implemented in the C++ library NLopt (Johnson 2011), which we link to the package. Technical details can be found in Chiquet, Mariadassou, and Robin (2018).
In the package, the PLN-PCA model is adjusted with the function PLNPCA, which we review in this section. This function adjusts the model for a series of values of q and provides a collection of PLNPCAfit objects stored in an object with class PLNPCAfamily.
The class PLNPCAfit inherits from the class PLNfit, so we strongly recommend the reader to be comfortable with PLN and PLNfit before using PLNPCA (see the PLN vignette).
We fit a collection of models, one for each value of q, as follows:
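A call along the following lines does the job: the formula is the offset-only model (covariates are added later in this vignette) and ranks spans the values of q considered below:

PCA_models <- PLNPCA(
  Abundance ~ 1 + offset(log(Offset)),
  data  = trichoptera,
  ranks = 1:4
)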
##
## Initialization...
##
## Adjusting 4 PLN models for PCA analysis.
## Rank approximation = 1 Rank approximation = 4 Rank approximation = 3 Rank approximation = 2
## Post-treatments
## DONE!
Note the use of the formula object to specify the model, similar to the one used in the function PLN.
PLNPCAfamily
The PCA_models variable is an R6 object with class PLNPCAfamily, which comes with a couple of methods. The most basic is the show/print method, which sends a brief summary of the estimation process:
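For instance, simply evaluating the object at the prompt triggers this method:

PCA_models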
## --------------------------------------------------------
## COLLECTION OF 4 POISSON LOGNORMAL MODELS
## --------------------------------------------------------
## Task: Principal Component Analysis
## ========================================================
## - Ranks considered: from 1 to 4
## - Best model (greater BIC): rank = 4
## - Best model (greater ICL): rank = 3
One can also easily access the successive values of the criteria in the collection:
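These are stored in the $criteria field; something like the call below (the kable() formatting is only an assumption about how the table was rendered) produces the table that follows:

PCA_models$criteria %>% knitr::kable()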
param | nb_param | loglik | BIC | ICL |
---|---|---|---|---|
1 | 34 | -1042.0386 | -1108.1995 | -1120.3695 |
2 | 50 | -731.0336 | -828.3291 | -860.2024 |
3 | 65 | -640.9613 | -767.4455 | -823.0533 |
4 | 79 | -600.3276 | -754.0545 | -838.8941 |
A quick diagnostic of the optimization process is available via the convergence field:
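As above, a kable-formatted view of this field (the formatting call is an assumption):

PCA_models$convergence %>% knitr::kable()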
|       | param | nb_param | status | backend | iterations |
|-------|-------|----------|--------|---------|------------|
| out   | 1     | 34       | 3      | nlopt   | 669        |
| elt   | 2     | 50       | 3      | nlopt   | 1421       |
| elt.1 | 3     | 65       | 3      | nlopt   | 2088       |
| elt.2 | 4     | 79       | 3      | nlopt   | 1632       |
Comprehensive information about PLNPCAfamily is available via ?PLNPCAfamily.
The plot method of PLNPCAfamily displays the evolution of the criteria mentioned above, and is a good starting point for model selection:
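The call is simply:

plot(PCA_models)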
Note that we use the original definition of the BIC/ICL criterion ($\texttt{loglik} - \frac{1}{2}\texttt{pen}$), which is on the same scale as the log-likelihood. A popular alternative consists in using $-2\,\texttt{loglik} + \texttt{pen}$ instead. You can do so by specifying reverse = TRUE:
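For instance:

plot(PCA_models, reverse = TRUE)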
In this case, the variational lower bound of the log-likelihood is hopefully strictly increasing (or rather decreasing if using reverse = TRUE) with the number of axes (or subspace dimension). Also note the (approximated) R2 which is displayed for each value of q (see Chiquet, Mariadassou, and Robin (2018) for details on its computation).
From this plot, we can see that the best model in terms of BIC or ICL is obtained for a rank q = 4 or q = 3, respectively. We may extract the corresponding model with the method getBestModel("ICL"). A model with a specific rank can be extracted with the getModel() method:
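A sketch of both extractions (passing rank 4 to getModel() is our choice here, matching the BIC-best model above):

myPCA_ICL <- getBestModel(PCA_models, "ICL")  # rank 3 according to ICL
myPCA_BIC <- getModel(PCA_models, 4)          # model with a prescribed rank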
PLNPCAfit
Objects myPCA_ICL and myPCA_BIC are R6 objects of class PLNPCAfit which in turn own a couple of methods, some inherited from PLNfit and some others specific, mostly for visualization purposes. The plot method provides individual maps and correlation circles as in usual PCA. If an additional classification exists for the observations – which is the case here with the available classification of the trapping nights – it can be passed as an argument to the function.1
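For example, assuming (as in the covariate section below) that the trapping-night classification is stored in trichoptera$Group:

plot(myPCA_ICL, ind_cols = trichoptera$Group)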
Among other fields and methods (see ?PLNPCAfit for a comprehensive view), the most interesting for the end-user in the context of PCA are the matrix of regression coefficients (coef()), the rotation matrix ($rotation) and the principal component scores ($scores), displayed in turn below.
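They can be inspected as follows (head() merely truncates the output; the tables below show the corresponding results):

coef(myPCA_ICL)           # regression coefficients (intercepts only in this model)
head(myPCA_ICL$rotation)  # loadings of the variables on the principal axes
head(myPCA_ICL$scores)    # scores of the observations in the latent subspace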
|  | Che | Hyc | Hym | Hys | Psy | Aga | Glo | Ath | Cea | Ced | Set | All | Han | Hfo | Hsp | Hve | Sta |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| (Intercept) | -7.445124 | -8.074374 | -3.021566 | -6.853431 | -0.5357292 | -3.830667 | -6.398725 | -5.807173 | -7.321683 | -3.514909 | -4.116969 | -5.059037 | -4.347135 | -5.931362 | -3.960075 | -7.233629 | -2.583633 |
|  | PC1 | PC2 | PC3 |
|---|---|---|---|
| Che | -0.1934479 | 0.3304464 | -0.0040070 |
| Hyc | -0.4392512 | -0.2220790 | 0.1933944 |
| Hym | -0.1214687 | 0.1547335 | 0.3075570 |
| Hys | -0.4120006 | 0.3556770 | 0.3103185 |
| Psy | 0.0510868 | 0.0198343 | -0.0697180 |
| Aga | 0.0770385 | 0.2725331 | 0.2190086 |
PC1 | PC2 | PC3 |
---|---|---|
-1.738327 | -0.4540738 | 0.7261747 |
3.725218 | 0.7493489 | 2.1238981 |
7.359368 | -1.3047137 | 0.3875175 |
6.226308 | -1.4234568 | -1.8569670 |
4.622454 | -0.0761241 | -1.0909050 |
4.129785 | 0.6858140 | 0.4670117 |
PLNPCAfit also inherits from the methods of PLNfit (see the appropriate vignette). Most are recalled via the show method:
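For instance:

myPCA_ICL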
## Poisson Lognormal with rank constrained for PCA (rank = 3)
## ==================================================================
## nb_param loglik BIC ICL
## 65 -640.961 -767.445 -823.053
## ==================================================================
## * Useful fields
## $model_par, $latent, $latent_pos, $var_par, $optim_par
## $loglik, $BIC, $ICL, $loglik_vec, $nb_param, $criteria
## * Useful S3 methods
## print(), coef(), sigma(), vcov(), fitted()
## predict(), predict_cond(), standard_error()
## * Additional fields for PCA
## $percent_var, $corr_circle, $scores, $rotation, $eig, $var, $ind
## * Additional S3 methods for PCA
## plot.PLNPCAfit()
We provide simple plotting functions, but a wealth of plotting utilities is available for factorial analysis results. The following bindings allow you to use widely popular tools to make your own plots: $eig, $var and $ind.
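For instance, the $ind binding exposes the usual coordinates, squared cosines, contributions and distances of the individuals:

str(myPCA_ICL$ind)         # structure of the individual-level binding
head(myPCA_ICL$ind$coord)  # coordinates of the first individuals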
## List of 4
## $ coord : num [1:49, 1:3] -1.74 3.73 7.36 6.23 4.62 ...
## ..- attr(*, "dimnames")=List of 2
## .. ..$ : chr [1:49] "1" "2" "3" "4" ...
## .. ..$ : chr [1:3] "Dim.1" "Dim.2" "Dim.3"
## $ cos2 : num [1:49, 1:3] 0.805 0.732 0.967 0.876 0.947 ...
## ..- attr(*, "dimnames")=List of 2
## .. ..$ : chr [1:49] "1" "2" "3" "4" ...
## .. ..$ : chr [1:3] "Dim.1" "Dim.2" "Dim.3"
## $ contrib: num [1:49, 1:3] 0.601 2.758 10.766 7.706 4.247 ...
## ..- attr(*, "dimnames")=List of 2
## .. ..$ : chr [1:49] "1" "2" "3" "4" ...
## .. ..$ : chr [1:3] "Dim.1" "Dim.2" "Dim.3"
## $ dist : Named num [1:49] 1.94 4.35 7.48 6.65 4.75 ...
## ..- attr(*, "names")= chr [1:49] "1" "2" "3" "4" ...
## Dim.1 Dim.2 Dim.3
## 1 -1.738327 -0.45407383 0.7261747
## 2 3.725218 0.74934895 2.1238981
## 3 7.359368 -1.30471372 0.3875175
## 4 6.226308 -1.42345680 -1.8569670
## 5 4.622454 -0.07612411 -1.0909050
## 6 4.129785 0.68581402 0.4670117
You can also use high-level functions from the factoextra package to extract relevant information:
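For instance (assuming the bindings above are enough for factoextra's extractors to treat the fit as a standard PCA result):

factoextra::get_eigenvalue(myPCA_ICL)
factoextra::get_pca_var(myPCA_ICL)
factoextra::get_pca_ind(myPCA_ICL)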
## eigenvalue variance.percent cumulative.variance.percent
## Dim.1 503.0761 39.95441 39.95441
## Dim.2 284.0630 22.56034 62.51475
## Dim.3 224.9296 17.86396 80.37871
## Principal Component Analysis Results for variables
## ===================================================
## Name Description
## 1 "$coord" "Coordinates for the variables"
## 2 "$cor" "Correlations between variables and dimensions"
## 3 "$cos2" "Cos2 for the variables"
## 4 "$contrib" "contributions of the variables"
## Principal Component Analysis Results for individuals
## ===================================================
## Name Description
## 1 "$coord" "Coordinates for the individuals"
## 2 "$cos2" "Cos2 for the individuals"
## 3 "$contrib" "contributions of the individuals"
You can also use some of its very nice plotting methods, such as biplots, correlation circles and scatter plots of the scores.
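A sketch, assuming fviz_pca_biplot() accepts the fitted object through the same bindings:

factoextra::fviz_pca_biplot(myPCA_ICL)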
You can project new data into the PCA space, although it is slightly involved at the moment. We demonstrate this by projecting the original data on top of the original graph. As expected, the projections of the new data points (small red points) are superimposed on the original data points (large black points).
A contribution of PLN-PCA is the possibility of taking some covariates into account in the parameter space. Such a strategy often completely changes the interpretation of PCA. Indeed, the covariates are often responsible for some strong structure in the data. The effect of the covariates should be removed since it is often quite obvious to the analyst and may hide some more important and subtle effects.
In the case at hand, the covariates correspond to the meteorological variables. Let us try to introduce some of them in our model, for instance the temperature, the wind and the cloudiness. This can be done thanks to the model formula:
PCA_models_cov <-
PLNPCA(
Abundance ~ 1 + offset(log(Offset)) + Temperature + Wind + Cloudiness,
data = trichoptera,
ranks = 1:4
)
##
## Initialization...
##
## Adjusting 4 PLN models for PCA analysis.
## Rank approximation = 3 Rank approximation = 4 Rank approximation = 1 Rank approximation = 2
## Post-treatments
## DONE!
Again, the best model is obtained for a rank q = 3.
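We extract it as before (choosing the ICL criterion here is an assumption):

myPCA_cov <- getBestModel(PCA_models_cov, "ICL")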
Suppose that we want to have a closer look at the first two axes. This can be done thanks to the plot method:
gridExtra::grid.arrange(
plot(myPCA_cov, map = "individual", ind_cols = trichoptera$Group, plot = FALSE),
plot(myPCA_cov, map = "variable", plot = FALSE),
ncol = 2
)
We can check that the fitted values of the counts – even with this low-rank covariance matrix – are close to the observed ones:
data.frame(
fitted = as.vector(fitted(myPCA_cov)),
observed = as.vector(trichoptera$Abundance)
) %>%
ggplot(aes(x = observed, y = fitted)) +
  geom_point(size = .5, alpha = .25) +
scale_x_log10(limits = c(1,1000)) +
scale_y_log10(limits = c(1,1000)) +
theme_bw() + annotation_logticks()
When you are done, do not forget to get back to the standard sequential plan with future.
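That is:

future::plan("sequential")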
With our PLN-PCA (and any pPCA model for count data, where successive models are not nested), it is important to perform the selection of q prior to visualization, since the model with rank q = 3 is not nested in the model with rank q = 4. Hence, the percentage of variance must be interpreted with care: it sums to 100% but must be put in perspective with the model R2, which gives an approximation of the total percentage of variance explained by the current model.↩︎