The mvnfast R package provides computationally efficient tools related to the multivariate normal and Student’s t distributions. The tools are generally faster than those provided by other packages, thanks to the use of C++ code through the Rcpp and RcppArmadillo packages, and to parallelization through the OpenMP API. The most important functions are:

rmvn(): simulates multivariate normal random vectors.
rmvt(): simulates multivariate Student’s t random vectors.
dmvn(): evaluates the probability density function of a multivariate normal distribution.
dmvt(): evaluates the probability density function of a multivariate Student’s t distribution.
maha(): evaluates squared Mahalanobis distances.

In the following sections we benchmark each function against equivalent functions provided by other packages, while in the final section we provide an example application.
Simulating multivariate normal random variables is an essential step in many Monte Carlo algorithms (such as MCMC or particle filters), hence this operation has to be as fast as possible. Here we compare the rmvn function with the equivalent functions rmvnorm (from the mvtnorm package) and mvrnorm (from the MASS package). In particular, we simulate 10^4 twenty-dimensional random vectors:
# microbenchmark does not work on all platforms, hence we need this small wrapper
microwrapper <- function(..., times = 100L){
  ok <- "microbenchmark" %in% rownames(installed.packages())
  if( ok ){
    library("microbenchmark")
    microbenchmark(list = match.call(expand.dots = FALSE)$..., times = times)
  } else {
    message("microbenchmark package is not installed")
    return( invisible(NULL) )
  }
}
library("mvtnorm")
library("mvnfast")
library("MASS")
# We might also need to turn off BLAS parallelism
library("RhpcBLASctl")
blas_set_num_threads(1)
N <- 10000
d <- 20
# Creating mean and covariance matrix
mu <- 1:d
tmp <- matrix(rnorm(d^2), d, d)
mcov <- tcrossprod(tmp, tmp)
microwrapper(rmvn(N, mu, mcov, ncores = 2),
             rmvn(N, mu, mcov),
             rmvnorm(N, mu, mcov),
             mvrnorm(N, mu, mcov))
## Unit: milliseconds
##                          expr      min       lq     mean   median        uq       max neval
## rmvn(N, mu, mcov, ncores = 2) 2.204437 2.269308 2.604446 2.507797  2.652636  4.605981   100
##             rmvn(N, mu, mcov) 4.050896 4.088772 4.229811 4.106715  4.130665  5.930588   100
##          rmvnorm(N, mu, mcov) 8.949583 9.096486 9.677663 9.161562  9.777029 12.766874   100
##          mvrnorm(N, mu, mcov) 8.806776 8.900571 9.639566 8.955979 10.675597 14.080361   100
In this example rmvn cuts the computational time, relative to the alternatives, even when a single core is used. This gain is attributable to the use of C++ code and of efficient numerical algorithms to simulate the random variables. Parallelizing the computation on two cores gives a further appreciable speed-up. To be fair, it is necessary to point out that rmvnorm and mvrnorm perform many more safety checks on the user’s input than rmvn. This is true also for the functions described in the next sections.
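As a quick sanity check that the speed-up does not come at the expense of correctness, one can verify that the sample moments of the simulated vectors match mu and mcov (a simple check of our own, not part of the benchmark above):
x <- rmvn(1e5, mu, mcov)
max(abs(colMeans(x) - mu))   # should be close to zero
max(abs(cov(x) - mcov))      # should be small relative to the entries of mcov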
Notice that rmvn does not use one of the Random Number Generators (RNGs) provided by R, but one of the parallel cryptographic RNGs described by Salmon et al. (2011). This RNG can safely be used in parallel, without risk of collisions between parallel sequences of random numbers, as detailed in the above reference.
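Judging from the use of set.seed() before rmvn later in this document, the seeds of this parallel generator appear to be drawn through R's own RNG, so simulations should still be reproducible by setting R's seed. A minimal check of this assumption:
set.seed(414)
x1 <- rmvn(10, mu, mcov)
set.seed(414)
x2 <- rmvn(10, mu, mcov)
all.equal(x1, x2)   # expected TRUE if the seeding works as assumed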
We get similar performance gains when we simulate multivariate Student’s t random variables:
# Here we have a conflict between namespaces
microwrapper(mvnfast::rmvt(N, mu, mcov, df = 3, ncores = 2),
             mvnfast::rmvt(N, mu, mcov, df = 3),
             mvtnorm::rmvt(N, delta = mu, sigma = mcov, df = 3))
## Unit: milliseconds
##                                                expr       min        lq      mean    median        uq       max neval
##     mvnfast::rmvt(N, mu, mcov, df = 3, ncores = 2)  2.714157  2.761374  3.045308  2.988363  3.062916  5.188997   100
##                 mvnfast::rmvt(N, mu, mcov, df = 3)  5.042955  5.085985  5.278756  5.116075  5.147409  8.074803   100
##  mvtnorm::rmvt(N, delta = mu, sigma = mcov, df = 3) 10.958194 11.142822 12.445940 11.256265 12.937382 56.332572   100
When d and N are large, and rmvn or rmvt are called several times with the same arguments, it makes sense to create the matrix used to store the simulated random variables upfront. This can be done as follows:
A <- matrix(nrow = N, ncol = d)
class(A) <- "numeric" # This is important. We need the elements of A to be of class "numeric".
rmvn(N, mu, mcov, A = A)
Notice that here rmvn returns NULL, not the simulated random vectors! These can be found in the matrix provided by the user:
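A[1:2, 1:5]   # for example, the first two rows and first five columns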
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1.171328 3.6497979 -0.9109822 7.979682 5.379615
## [2,] 1.784426 -0.7664261 0.2546489 8.584938 3.651352
Pre-creating the matrix of random variables saves some more time:
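microwrapper(rmvn(N, mu, mcov, ncores = 2, A = A),
             rmvn(N, mu, mcov, ncores = 2),
             times = 200)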
## Unit: milliseconds
##                                  expr      min       lq     mean   median       uq      max neval
## rmvn(N, mu, mcov, ncores = 2, A = A) 1.973827 1.992311 2.006312 2.002842 2.014037 2.202674   200
##        rmvn(N, mu, mcov, ncores = 2) 2.191031 2.216199 2.426364 2.234233 2.260301 4.558742   200
Don’t look only at the median time here: the mean is much more affected by memory re-allocation, which is precisely the cost that pre-creating A avoids.
Here we compare the dmvn function, which evaluates the multivariate normal density, with the equivalent function dmvnorm (from the mvtnorm package). In particular we evaluate the log-density of 10^4 twenty-dimensional random vectors:
# Generating random vectors
N <- 10000
d <- 20
mu <- 1:d
tmp <- matrix(rnorm(d^2), d, d)
mcov <- tcrossprod(tmp, tmp)
X <- rmvn(N, mu, mcov)
microwrapper(dmvn(X, mu, mcov, ncores = 2, log = T),
             dmvn(X, mu, mcov, log = T),
             dmvnorm(X, mu, mcov, log = T), times = 500)
## Unit: milliseconds
##                                    expr      min       lq     mean   median       uq       max neval
## dmvn(X, mu, mcov, ncores = 2, log = T) 1.192041 1.227747 1.339153 1.248686 1.358200  5.604951   500
##             dmvn(X, mu, mcov, log = T) 2.214515 2.249736 2.329692 2.267970 2.367395  4.356846   500
##          dmvnorm(X, mu, mcov, log = T) 1.363570 1.414661 1.894148 1.442673 1.520122 49.699987   500
Again, we get some speed-up using C++ code and some more from the parallelization. We get similar results if we use a multivariate Student’s t density:
# We have a namespace conflict
microwrapper(mvnfast::dmvt(X, mu, mcov, df = 4, ncores = 2, log = T),
             mvnfast::dmvt(X, mu, mcov, df = 4, log = T),
             mvtnorm::dmvt(X, delta = mu, sigma = mcov, df = 4, log = T), times = 500)
## Unit: milliseconds
##                                                         expr      min       lq     mean   median       uq       max neval
##     mvnfast::dmvt(X, mu, mcov, df = 4, ncores = 2, log = T) 1.305532 1.359743 1.510147 1.409852 1.522301  6.178229   500
##                 mvnfast::dmvt(X, mu, mcov, df = 4, log = T) 2.348425 2.387929 2.504021 2.412454 2.528400  4.527163   500
##  mvtnorm::dmvt(X, delta = mu, sigma = mcov, df = 4, log = T) 1.524029 1.576928 2.057921 1.613806 1.697217 51.266335   500
Finally, we compare the maha function, which evaluates the squared Mahalanobis distance D²(x) = (x − μ)ᵀ Σ⁻¹ (x − μ), with the equivalent function mahalanobis (from the stats package). Also in this case we use 10^4 twenty-dimensional random vectors:
# Generating random vectors
N <- 10000
d <- 20
mu <- 1:d
tmp <- matrix(rnorm(d^2), d, d)
mcov <- tcrossprod(tmp, tmp)
X <- rmvn(N, mu, mcov)
microwrapper(maha(X, mu, mcov, ncores = 2),
             maha(X, mu, mcov),
             mahalanobis(X, mu, mcov))
## Unit: milliseconds
##                          expr      min       lq     mean   median       uq      max neval
## maha(X, mu, mcov, ncores = 2) 1.139273 1.158073 1.270088 1.213531 1.324377 3.565923   100
##             maha(X, mu, mcov) 2.163371 2.186674 2.270814 2.218659 2.269653 2.824061   100
##      mahalanobis(X, mu, mcov) 1.897354 1.947288 2.373108 1.977333 2.026886 5.798292   100
The acceleration is similar to that obtained in the previous sections.
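Speed aside, a quick sanity check confirms that the two functions compute the same quantity:
all.equal(maha(X, mu, mcov), mahalanobis(X, mu, mcov))   # expected TRUE, up to floating point error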
As an example application of the dmvn function, we implemented the mean-shift mode seeking algorithm. This procedure can be used to find the mode or maxima of a kernel density estimate, and it can be used to set up clustering algorithms. Here we simulate 10^4 two-dimensional random vectors from a mixture of two normal distributions:
set.seed(5135)
N <- 10000
d <- 2
mu1 <- c(0, 0); mu2 <- c(2, 3)
Cov1 <- matrix(c(1, 0, 0, 2), 2, 2)
Cov2 <- matrix(c(1, -0.9, -0.9, 1), 2, 2)
bin <- rbinom(N, 1, 0.5)
X <- bin * rmvn(N, mu1, Cov1) + (!bin) * rmvn(N, mu2, Cov2)
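The mean-shift algorithm repeatedly replaces the current position with a kernel-weighted average of the data, which moves it uphill towards a local mode of the kernel density estimate. A minimal sketch of a single iteration, using dmvn to compute the Gaussian kernel weights (meanShiftStep is an illustrative helper of our own, not the actual implementation of mvnfast's ms function):
meanShiftStep <- function(X, x0, H){
  w <- dmvn(X, mu = x0, sigma = H)   # Gaussian kernel weight of each datum around x0
  colSums(X * w) / sum(w)            # kernel-weighted average of the data
}
# Iterating the step moves x towards a local mode of the kernel density estimate
H <- 0.05 * cov(X)                   # same bandwidth as used by ms() below
x <- c(-2, 2)                        # one of the starting points used below
for(ii in 1:100) x <- meanShiftStep(X, x, H)
x                                    # should lie close to one of the two modes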
Finally, we plot the resulting probability density and, starting from 10 initial points, we use mean-shift to converge to the nearest mode:
# Plotting
np <- 100
xvals <- seq(min(X[ , 1]), max(X[ , 1]), length.out = np)
yvals <- seq(min(X[ , 2]), max(X[ , 2]), length.out = np)
theGrid <- expand.grid(xvals, yvals)
theGrid <- as.matrix(theGrid)
dens <- dmixn(theGrid,
              mu = rbind(mu1, mu2),
              sigma = list(Cov1, Cov2),
              w = rep(1, 2)/2)
plot(X[ , 1], X[ , 2], pch = '.', lwd = 0.01, col = 3)
contour(x = xvals, y = yvals, z = matrix(dens, np, np),
        levels = c(0.002, 0.01, 0.02, 0.04, 0.08, 0.15), add = TRUE, lwd = 2)
# Mean-shift
library(plyr)
inits <- matrix(c(-2, 2, 0, 3, 4, 3, 2, 5, 2, -3, 2, 2, 0, 2, 3, 0, 0, -4, -2, 6),
                10, 2, byrow = TRUE)
traj <- alply(inits,
              1,
              function(input)
                ms(X = X,
                   init = input,
                   H = 0.05 * cov(X),
                   ncores = 2,
                   store = TRUE)$traj
              )
invisible( lapply(traj,
                  function(input){
                    lines(input[ , 1], input[ , 2], col = 2, lwd = 1.5)
                    points(tail(input[ , 1], 1), tail(input[ , 2], 1))   # mark the final position
                  }))
As we can see from the plot, each initial point leads to one of two final positions, which are very close to the true modes. Notice that the bandwidth for the kernel density estimator was chosen by trial and error, and less arbitrary choices are certainly possible in real applications.
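For instance, a normal-reference (Silverman-type) rule of thumb could replace the hand-tuned 0.05 * cov(X) above; the formula below is one standard choice for Gaussian kernels, not the bandwidth actually used in this example:
H_rot <- (4 / (N * (d + 2)))^(2 / (d + 4)) * cov(X)   # rule-of-thumb bandwidth matrix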
Eddelbuettel, D. and Francois, R. (2011). Rcpp: Seamless R and C++ Integration. Journal of Statistical Software, 40(8), 1-18. URL https://www.jstatsoft.org/v40/i08/.
Eddelbuettel, D. (2013). Seamless R and C++ Integration with Rcpp. Springer, New York. ISBN 978-1-4614-6867-7.
Eddelbuettel, D. and Sanderson, C. (2014). RcppArmadillo: Accelerating R with high-performance C++ linear algebra. Computational Statistics and Data Analysis, 71, 1054-1063. URL https://dx.doi.org/10.1016/j.csda.2013.02.005.
Salmon, J. K., Moraes, M. A., Dror, R. O. and Shaw, D. E. (2011). Parallel Random Numbers: As Easy as 1, 2, 3. D. E. Shaw Research, New York, NY 10036, USA.