Title: Fast Numerical Maximum Likelihood Estimation for Latent Markov Models
Description: A variety of latent Markov models, including hidden Markov models, hidden semi-Markov models, state-space models and continuous-time variants can be formulated and estimated within the same framework via directly maximising the likelihood function using the so-called forward algorithm. Applied researchers often need custom models that standard software does not easily support. Writing tailored 'R' code offers flexibility but suffers from slow estimation speeds. We address these issues by providing easy-to-use functions (written in 'C++' for speed) for common tasks like the forward algorithm. These functions can be combined into custom models in a Lego-type approach, offering up to 10-20 times faster estimation via standard numerical optimisers. To aid in building fully custom likelihood functions, several vignettes are included that show how to simulate data from and estimate all the above model classes.
Authors: Jan-Ole Koslik [aut, cre]
Maintainer: Jan-Ole Koslik <[email protected]>
License: GPL-3
Version: 2.0.2
Built: 2024-11-21 07:13:58 UTC
Source: CRAN
This high-level function can be used to prepare objects needed to estimate mixture models of smooth densities using P-Splines.
buildSmoothDens(data, type = "real", par, k = 20, degree = 3, diff_order = 2)
data |
named data frame of different data streams |
type |
type of each data stream, either "real", "positive" or "circular" |
par |
nested named list of initial means and sds/concentrations for each data stream |
k |
number of basis functions for each data stream |
degree |
degree of the B-spline basis functions for each data stream, defaults to cubic B-splines |
diff_order |
order of differencing used for the P-Spline penalty matrix for each data stream. Defaults to second-order differences. |
Under the hood, make_matrices_dens
is used for the actual construction of the design and penalty matrices.
You can provide one or multiple data streams of different types (real, positive, circular) and specify initial means and standard deviations/ concentrations for each data stream. This information is then converted into suitable spline coefficients.
buildSmoothDens
then constructs the design and penalty matrices for standardised B-spline basis functions (each integrating to one) for each data stream.
For types "real"
and "circular"
the knots are placed equidistant in the range of the data, for type "positive"
the knots are placed using polynomial spacing.
a nested list containing the design matrices Z
, the penalty matrices S
, the initial coefficients coef
, the prediction design matrices Z_predict
, the prediction grids xseq
, and details for the basis expansion for each data stream.
## 3 data streams, each with one distribution
# normal data with mean 0 and sd 1
x1 = rnorm(100, mean = 0, sd = 1)
# gamma data with mean 5 and sd 3
x2 = rgamma2(100, mean = 5, sd = 3)
# circular data
x3 = rvm(100, mu = 0, kappa = 2)

data = data.frame(x1 = x1, x2 = x2, x3 = x3)

par = list(x1 = list(mean = 0, sd = 1),
           x2 = list(mean = 5, sd = 3),
           x3 = list(mean = 0, concentration = 2))

SmoothDens = buildSmoothDens(data,
                             type = c("real", "positive", "circular"),
                             par)

# extracting objects for x1
Z1 = SmoothDens$Z$x1
S1 = SmoothDens$S$x1
coefs1 = SmoothDens$coef$x1

## one data stream, but mixture of two distributions
# normal data with mean 0 and sd 1
x = rnorm(100, mean = 0, sd = 1)
data = data.frame(x = x)

# now parameters for mixture of two normals
par = list(x = list(mean = c(0, 5), sd = c(1,1)))

SmoothDens = buildSmoothDens(data, par = par)

# extracting objects
Z = SmoothDens$Z$x
S = SmoothDens$S$x
coefs = SmoothDens$coef$x
Function to conveniently calculate the trackInd variable that is needed internally when fitting a model to longitudinal data with multiple tracks.
calc_trackInd(ID)
ID |
ID variable of track IDs that is of the same length as the data to be analysed |
A vector of indices of the first observation of each track, which can be passed to forward and forward_g to sum the likelihood contributions of each track
uniqueID = c("Animal1", "Animal2", "Animal3")
ID = rep(uniqueID, c(100, 200, 300))
trackInd = calc_trackInd(ID)
Density function of the multivariate Gaussian distribution reparametrised in terms of its precision matrix (inverse variance).
This implementation is particularly useful for defining the joint log-likelihood with penalised splines or i.i.d. random effects that have a multivariate Gaussian distribution with fixed precision/penalty matrix S.
As S is fixed and only scaled by lambda, it is more efficient to precompute the determinant of S (for the normalisation constant) and only scale the quadratic form by lambda when multiple spline parameters/random effects with different lambdas but the same penalty matrix S are evaluated.
dgmrf2(x, mu = 0, S, lambda, logdetS = NULL, log = FALSE)
x |
density evaluation point, either a vector or a matrix |
mu |
mean parameter. Either scalar or vector |
S |
unscaled precision matrix |
lambda |
precision scaling parameter. Can be a vector if x is a matrix, in which case each row of x gets its own scaling parameter. |
logdetS |
Optional precomputed log determinant of the precision matrix |
log |
logical; if TRUE, the log-density is returned |
This implementation allows for automatic differentiation with RTMB
.
vector of density values
x = matrix(runif(30), nrow = 3)

# iid random effects
S = diag(10)
sigma = c(1, 2, 3) # random effect standard deviations
lambda = 1 / sigma^2
d = dgmrf2(x, 0, S, lambda)

# P-splines
L = diff(diag(10), diff = 2) # second-order difference matrix
S = t(L) %*% L
lambda = c(1,2,3)
d = dgmrf2(x, 0, S, lambda, log = TRUE)
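If the same penalty matrix is evaluated many times, the log determinant can be precomputed once and passed via logdetS. A minimal sketch of this reuse pattern (the determinant() call is base R; this is not part of the package examples):

x = matrix(runif(30), nrow = 3)
S = diag(10) # fixed (unscaled) precision matrix
logdetS = as.numeric(determinant(S, logarithm = TRUE)$modulus) # precompute once
# reuse for several evaluations with different penalty strengths
d1 = dgmrf2(x, 0, S, lambda = c(1, 2, 3), logdetS = logdetS, log = TRUE)
d2 = dgmrf2(x, 0, S, lambda = c(0.5, 1, 5), logdetS = logdetS, log = TRUE)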
Calculates the log-likelihood of a sequence of observations under a homogeneous hidden Markov model using the forward algorithm.
forward(delta, Gamma, allprobs, trackID = NULL, ad = NULL, report = TRUE)
delta |
initial or stationary distribution of length N, or matrix of dimension c(k,N) for k independent tracks, if trackID is provided |
Gamma |
transition probability matrix of dimension c(N,N), or array of k transition probability matrices of dimension c(N,N,k), if trackID is provided |
allprobs |
matrix of state-dependent probabilities/ density values of dimension c(n, N) |
trackID |
optional vector of length n containing track IDs. If provided, the total log-likelihood will be the sum of each track's likelihood contribution.
In this case, Gamma can alternatively be provided as an array with one transition probability matrix per track and delta as a matrix with one initial distribution per track (see above). |
ad |
optional logical, indicating whether automatic differentiation with |
report |
logical, indicating whether |
log-likelihood for given data and parameters
Other forward algorithms:
forward_g()
,
forward_hsmm()
,
forward_ihsmm()
,
forward_p()
,
forward_phsmm()
## negative log likelihood function
nll = function(par, step) {
  # parameter transformations for unconstrained optimisation
  Gamma = tpm(par[1:2]) # multinomial logit link
  delta = stationary(Gamma) # stationary HMM
  mu = exp(par[3:4])
  sigma = exp(par[5:6])
  # calculate all state-dependent probabilities
  allprobs = matrix(1, length(step), 2)
  ind = which(!is.na(step))
  for(j in 1:2) allprobs[ind,j] = dgamma2(step[ind], mu[j], sigma[j])
  # simple forward algorithm to calculate log-likelihood
  -forward(delta, Gamma, allprobs)
}

## fitting an HMM to the trex data
par = c(-2,-2,            # initial tpm params (logit-scale)
        log(c(0.3, 2.5)), # initial means for step length (log-transformed)
        log(c(0.2, 1.5))) # initial sds for step length (log-transformed)

mod = nlm(nll, par, step = trex$step[1:1000])
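For longitudinal data with several independent tracks, the likelihood contributions of the tracks can be summed by passing trackID. A minimal sketch extending the example above (the two-track split of the trex subset is purely illustrative):

## sketch: two independent tracks with pooled parameters
step = trex$step[1:1000]
trackID = rep(c("track1", "track2"), each = 500) # hypothetical track labels
nll2 = function(par, step, trackID) {
  Gamma = tpm(par[1:2])
  delta = stationary(Gamma) # same stationary start assumed for every track
  mu = exp(par[3:4])
  sigma = exp(par[5:6])
  allprobs = matrix(1, length(step), 2)
  ind = which(!is.na(step))
  for(j in 1:2) allprobs[ind,j] = dgamma2(step[ind], mu[j], sigma[j])
  -forward(delta, Gamma, allprobs, trackID = trackID) # sums one contribution per track
}
mod2 = nlm(nll2, par, step = step, trackID = trackID)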
Calculates the log-likelihood of a sequence of observations under a hidden Markov model with time-varying transition probabilities using the forward algorithm.
forward_g(delta, Gamma, allprobs, trackID = NULL, ad = NULL, report = TRUE)
delta |
initial or stationary distribution of length N, or matrix of dimension c(k,N) for k independent tracks, if trackID is provided |
Gamma |
array of transition probability matrices of dimension c(N,N,n-1), as in a time series of length n, there are only n-1 transitions. If an array of dimension c(N,N,n) for a single track is provided, the first slice will be ignored. If trackID is provided, Gamma needs to be an array of dimension c(N,N,n), matching the number of rows of allprobs. For each track, the transition matrix at the beginning of that track will be ignored.
Whether the parameters for Gamma are pooled across tracks or not depends on your calculation of Gamma. If pooled, you can use tpm_g(Z, beta) to calculate the entire array of transition matrices when Z is of dimension c(n,p). This function can also be used to fit continuous-time HMMs, where each array entry is the Markov semigroup exp(Q * dt) evaluated at the time difference dt between consecutive observations. |
allprobs |
matrix of state-dependent probabilities/ density values of dimension c(n, N) |
trackID |
optional vector of length n containing track IDs. If provided, the total log-likelihood will be the sum of each track's likelihood contribution.
In this case, Gamma needs to be an array of dimension c(N,N,n) (see above), and delta can be provided as a matrix with one initial distribution per track. |
ad |
optional logical, indicating whether automatic differentiation with |
report |
logical, indicating whether |
log-likelihood for given data and parameters
Other forward algorithms:
forward()
,
forward_hsmm()
,
forward_ihsmm()
,
forward_p()
,
forward_phsmm()
## negative log likelihood function
nll = function(par, step, Z) {
  # parameter transformations for unconstrained optimisation
  beta = matrix(par[1:6], nrow = 2)
  Gamma = tpm_g(Z, beta) # multinomial logit link for each time point
  delta = stationary(Gamma[,,1]) # stationary HMM
  mu = exp(par[7:8])
  sigma = exp(par[9:10])
  # calculate all state-dependent probabilities
  allprobs = matrix(1, length(step), 2)
  ind = which(!is.na(step))
  for(j in 1:2) allprobs[ind,j] = dgamma2(step[ind], mu[j], sigma[j])
  # simple forward algorithm to calculate log-likelihood
  -forward_g(delta, Gamma, allprobs)
}

## fitting an HMM to the trex data
par = c(-2,-2,            # initial tpm intercepts (logit-scale)
        rep(0, 4),        # initial tpm slopes
        log(c(0.3, 2.5)), # initial means for step length (log-transformed)
        log(c(0.2, 1.5))) # initial sds for step length (log-transformed)

mod = nlm(nll, par, step = trex$step[1:500], Z = trigBasisExp(trex$tod[1:500]))
Calculates the (approximate) log-likelihood of a sequence of observations under a homogeneous hidden semi-Markov model using a modified forward algorithm.
forward_hsmm( dm, omega, allprobs, trackID = NULL, delta = NULL, eps = 1e-10, report = TRUE )
dm |
list of length N containing vectors of dwell-time probability mass functions (PMFs) for each state. The vector lengths correspond to the approximating state aggregate sizes, hence there should be little probability mass not covered by these. |
omega |
matrix of dimension c(N,N) of conditional transition probabilities, also called embedded transition probability matrix. Contains the transition probabilities given that the current state is left. Hence, the diagonal elements need to be zero and the rows need to sum to one. Can be constructed using tpm_emb. |
allprobs |
matrix of state-dependent probabilities/ density values of dimension c(n, N) which will automatically be converted to the appropriate dimension. |
trackID |
optional vector of length n containing track IDs. If provided, the total log-likelihood will be the sum of each track's likelihood contribution.
In this case, |
delta |
optional vector of initial state probabilities of length N. By default, the stationary distribution is computed (which is typically recommended). |
eps |
small value to avoid numerical issues in the approximating transition matrix construction. Usually, this should not be changed. |
report |
logical, indicating whether initial distribution, approximating transition probability matrix and |
Hidden semi-Markov models (HSMMs) are a flexible extension of HMMs, where the state duration distribution is explicitly modelled by a distribution on the positive integers.
For direct numerical maximum likelihood estimation, HSMMs can be represented as HMMs on an enlarged state space (of size equal to the sum of the state aggregate sizes) and with structured transition probabilities.
This function is designed to be used with automatic differentiation based on the R
package RTMB
. It will be very slow without it!
log-likelihood for given data and parameters
Other forward algorithms:
forward()
,
forward_g()
,
forward_ihsmm()
,
forward_p()
,
forward_phsmm()
# currently no examples
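A minimal sketch (not from the package documentation) of how a homogeneous 2-state HSMM likelihood could be set up with forward_hsmm, assuming shifted-Poisson dwell times and illustrative aggregate sizes:

## sketch: homogeneous 2-state HSMM with shifted-Poisson dwell times
agsizes = c(20, 30) # state aggregate sizes (illustrative)
nllHSMM = function(par, x, agsizes) {
  lambda = exp(par[1:2]) # dwell-time means
  # dwell-time PMFs, one vector per state
  dm = list(dpois(0:(agsizes[1]-1), lambda[1]),
            dpois(0:(agsizes[2]-1), lambda[2]))
  omega = matrix(c(0,1,1,0), 2, 2) # embedded t.p.m., fixed for 2 states
  mu = par[3:4]
  sigma = exp(par[5:6])
  allprobs = matrix(1, length(x), 2)
  for(j in 1:2) allprobs[,j] = dnorm(x, mu[j], sigma[j])
  -forward_hsmm(dm, omega, allprobs) # stationary start by default
}
# intended for use with RTMB; plain optimisation will be slow, e.g.
# mod = nlm(nllHSMM, c(log(5), log(10), 1, 4, log(2), log(2)), x = x, agsizes = agsizes)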
Calculates the (approximate) log-likelihood of a sequence of observations under an inhomogeneous hidden semi-Markov model using a modified forward algorithm.
forward_ihsmm( dm, omega, allprobs, trackID = NULL, delta = NULL, startInd = NULL, eps = 1e-10, report = TRUE )
dm |
list of length N containing matrices (or vectors) of dwell-time probability mass functions (PMFs) for each state. If the dwell-time PMFs are constant, the vectors are the PMF of the dwell-time distribution fixed in time. The vector lengths correspond to the approximating state aggregate sizes, hence there should be little probability mass not covered by these. If the dwell-time PMFs are inhomogeneous, the matrices need to have n rows, where n is the number of observations. The number of columns again corresponds to the size of the approximating state aggregates. In the latter case, the first |
omega |
matrix of dimension c(N,N) or array of dimension c(N,N,n) of conditional transition probabilities, also called embedded transition probability matrix. It contains the transition probabilities given the current state is left. Hence, the diagonal elements need to be zero and the rows need to sum to one. Such a matrix can be constructed using tpm_emb or, for the covariate-dependent case, tpm_emb_g. |
allprobs |
matrix of state-dependent probabilities/ density values of dimension c(n, N) |
trackID |
optional vector of length n containing track IDs. If provided, the total log-likelihood will be the sum of each track's likelihood contribution.
Instead of a single vector |
delta |
optional vector of initial state probabilities of length N. By default, the stationary distribution corresponding to the first approximating transition probability matrix of each track is computed. Contrary to the homogeneous case, this is not theoretically motivated but just for convenience. |
startInd |
optional integer index at which the forward algorithm starts. When approximating inhomogeneous HSMMs by inhomogeneous HMMs, the first transition probability matrix that can be constructed is at time |
eps |
small value to avoid numerical issues in the approximating transition matrix construction. Usually, this should not be changed. |
report |
logical, indicating whether initial distribution, approximating transition probability matrix and |
Hidden semi-Markov models (HSMMs) are a flexible extension of HMMs, where the state duration distribution is explicitly modelled by a distribution on the positive integers. This function can be used to fit HSMMs where the state-duration distribution and/or the conditional transition probabilities vary with covariates.
For direct numerical maximum likelihood estimation, HSMMs can be represented as HMMs on an enlarged state space (of size equal to the sum of the state aggregate sizes) and with structured transition probabilities.
This function is designed to be used with automatic differentiation based on the R
package RTMB
. It will be very slow without it!
log-likelihood for given data and parameters
Other forward algorithms:
forward()
,
forward_g()
,
forward_hsmm()
,
forward_p()
,
forward_phsmm()
# currently no examples
Calculates the log-likelihood of a sequence of observations under a hidden Markov model with periodically varying transition probabilities using the forward algorithm.
forward_p( delta, Gamma, allprobs, tod, trackID = NULL, ad = NULL, report = TRUE )
delta |
initial or stationary distribution of length N, or matrix of dimension c(k,N) for k independent tracks, if trackID is provided |
Gamma |
array of transition probability matrices of dimension c(N,N,L). Here we use the definition |
allprobs |
matrix of state-dependent probabilities/ density values of dimension c(n, N) |
tod |
(Integer valued) variable for cycle indexing in 1, ..., L, mapping the data index to a generalised time of day (length n). For half-hourly data, L = 48. It could, however, also be day of year for daily data and L = 365. |
trackID |
optional vector of length n containing track IDs. If provided, the total log-likelihood will be the sum of each track's likelihood contribution.
Instead of a single vector delta, a matrix of initial distributions of dimension c(k,N) can then be provided, with one row per track. |
ad |
optional logical, indicating whether automatic differentiation with |
report |
logical, indicating whether |
When the transition probability matrix only varies periodically (e.g. as a function of time of day), there are only L unique matrices, where L is the period length (e.g. L = 24 for hourly data and time-of-day variation).
Thus, it is much more efficient to only calculate these L matrices and index them by a time variable (e.g. time of day or day of year) instead of calculating such a matrix for each index in the data set (which would be redundant).
This function allows for that by only expecting a transition probability matrix for each time point in a period and an integer-valued (in 1, ..., L) time variable that maps the data index to the according time.
log-likelihood for given data and parameters
Other forward algorithms:
forward()
,
forward_g()
,
forward_hsmm()
,
forward_ihsmm()
,
forward_phsmm()
## negative log likelihood function
nll = function(par, step, tod) {
  # parameter transformations for unconstrained optimisation
  beta = matrix(par[1:6], nrow = 2)
  Gamma = tpm_p(1:24, beta = beta) # multinomial logit link for each time point
  delta = stationary_p(Gamma, tod[1]) # stationary HMM
  mu = exp(par[7:8])
  sigma = exp(par[9:10])
  # calculate all state-dependent probabilities
  allprobs = matrix(1, length(step), 2)
  ind = which(!is.na(step))
  for(j in 1:2) allprobs[ind,j] = dgamma2(step[ind], mu[j], sigma[j])
  # simple forward algorithm to calculate log-likelihood
  -forward_p(delta, Gamma, allprobs, tod)
}

## fitting an HMM to the trex data
par = c(-2,-2,            # initial tpm intercepts (logit-scale)
        rep(0, 4),        # initial tpm slopes
        log(c(0.3, 2.5)), # initial means for step length (log-transformed)
        log(c(0.2, 1.5))) # initial sds for step length (log-transformed)

mod = nlm(nll, par, step = trex$step[1:500], tod = trex$tod[1:500])
Hidden semi-Markov models (HSMMs) are a flexible extension of HMMs, where the state duration distribution is explicitly modelled by a distribution on the positive integers.
For direct numerical maximum likelihood estimation, HSMMs can be represented as HMMs on an enlarged state space (of size equal to the sum of the state aggregate sizes) and with structured transition probabilities.
This function can be used to fit HSMMs where the state-duration distribution and/or the conditional transition probabilities vary periodically.
In the special case of periodic variation (as compared to arbitrary covariate influence), this version is to be preferred over forward_ihsmm
because it computes the correct periodically stationary distribution and no observations are lost for the approximation.
This function is designed to be used with automatic differentiation based on the R
package RTMB
. It will be very slow without it!
forward_phsmm( dm, omega, allprobs, tod, trackID = NULL, delta = NULL, eps = 1e-10, report = TRUE )
dm |
list of length N containing matrices (or vectors) of dwell-time probability mass functions (PMFs) for each state. If the dwell-time PMFs are constant, the vectors are the PMF of the dwell-time distribution fixed in time. The vector lengths correspond to the approximating state aggregate sizes, hence there should be little probability mass not covered by these. If the dwell-time PMFs are inhomogeneous, the matrices need to have L rows, where L is the cycle length. The number of columns again corresponds to the size of the approximating state aggregates. |
omega |
matrix of dimension c(N,N) or array of dimension c(N,N,L) of conditional transition probabilities, also called embedded transition probability matrix. It contains the transition probabilities given the current state is left. Hence, the diagonal elements need to be zero and the rows need to sum to one. Such a matrix can be constructed using tpm_emb or, for the periodically varying case, tpm_emb_g. |
allprobs |
matrix of state-dependent probabilities/ density values of dimension c(n, N) |
tod |
(Integer valued) variable for cycle indexing in 1, ..., L, mapping the data index to a generalised time of day (length n). For half-hourly data L = 48. It could, however, also be day of year for daily data and L = 365. |
trackID |
optional vector of length n containing track IDs. If provided, the total log-likelihood will be the sum of each track's likelihood contribution.
Instead of a single vector |
delta |
Optional vector of initial state probabilities of length N. By default, the stationary distribution corresponding to the first approximating t.p.m. of each track is computed. Contrary to the homogeneous case, this is not theoretically motivated but just for convenience. |
eps |
small value to avoid numerical issues in the approximating transition matrix construction. Usually, this should not be changed. |
report |
logical, indicating whether initial distribution, approximating transition probability matrix and |
Calculates the (approximate) log-likelihood of a sequence of observations under a periodically inhomogeneous hidden semi-Markov model using a modified forward algorithm.
log-likelihood for given data and parameters
Other forward algorithms:
forward()
,
forward_g()
,
forward_hsmm()
,
forward_ihsmm()
,
forward_p()
# currently no examples
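A minimal sketch (not from the package documentation) of a periodically varying 2-state HSMM likelihood using forward_phsmm, assuming time-of-day-dependent dwell-time means via a trigonometric basis and illustrative aggregate sizes:

## sketch: 2-state HSMM with time-of-day-varying dwell times
agsizes = c(15, 20) # state aggregate sizes (illustrative)
nllPHSMM = function(par, x, tod, agsizes) {
  beta = matrix(par[1:6], nrow = 2) # dwell-time mean parameters
  Lambda = exp(cbind(1, trigBasisExp(1:24, 24, 1)) %*% t(beta)) # L x 2 dwell-time means
  # dwell-time PMFs: one L x aggregate-size matrix per state
  dm = lapply(1:2, function(j) sapply(0:(agsizes[j]-1), dpois, lambda = Lambda[,j]))
  omega = matrix(c(0,1,1,0), 2, 2) # embedded t.p.m., fixed for 2 states
  mu = par[7:8]
  sigma = exp(par[9:10])
  allprobs = matrix(1, length(x), 2)
  for(j in 1:2) allprobs[,j] = dnorm(x, mu[j], sigma[j])
  -forward_phsmm(dm, omega, allprobs, tod) # periodically stationary start by default
}
# intended for use with RTMB; plain optimisation will be slow, e.g.
# mod = nlm(nllPHSMM, c(log(4), log(6), rep(0,4), 1, 4, log(2), log(2)),
#           x = x, tod = tod, agsizes = agsizes)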
Hidden semi-Markov models (HSMMs) are a flexible extension of HMMs that can be approximated by HMMs on an enlarged state space (of size equal to the sum of the state aggregate sizes) and with structured transition probabilities.
forward_s(delta, Gamma, allprobs, sizes)
delta |
initial or stationary distribution of length N, or matrix of dimension c(k,N) for k independent tracks, if |
Gamma |
transition probability matrix of dimension c(M,M) |
allprobs |
matrix of state-dependent probabilities/ density values of dimension c(n, N) which will automatically be converted to the appropriate dimension. |
sizes |
state aggregate sizes that are used for the approximation of the semi-Markov chain. |
log-likelihood for given data and parameters
## generating data from homogeneous 2-state HSMM
mu = c(0, 6)
lambda = c(6, 12)
omega = matrix(c(0,1,1,0), nrow = 2, byrow = TRUE)

# simulation
# for a 2-state HSMM the embedded chain always alternates between 1 and 2
s = rep(1:2, 100)
C = x = numeric(0)
for(t in 1:100){
  dt = rpois(1, lambda[s[t]])+1 # shifted Poisson
  C = c(C, rep(s[t], dt))
  x = c(x, rnorm(dt, mu[s[t]], 1.5)) # fixed sd 1.5 for both states
}

## negative log likelihood function
mllk = function(theta.star, x, sizes){
  # parameter transformations for unconstrained optimisation
  omega = matrix(c(0,1,1,0), nrow = 2, byrow = TRUE) # omega fixed (2-states)
  lambda = exp(theta.star[1:2]) # dwell time means
  dm = list(dpois(1:sizes[1]-1, lambda[1]), dpois(1:sizes[2]-1, lambda[2]))
  Gamma = tpm_hsmm2(omega, dm)
  delta = stationary(Gamma) # stationary
  mu = theta.star[3:4]
  sigma = exp(theta.star[5:6])
  # calculate all state-dependent probabilities
  allprobs = matrix(1, length(x), 2)
  for(j in 1:2){ allprobs[,j] = dnorm(x, mu[j], sigma[j]) }
  # return negative for minimization
  -forward_s(delta, Gamma, allprobs, sizes)
}

## fitting an HSMM to the data
theta.star = c(log(5), log(10), 1, 4, log(2), log(2))
mod = nlm(mllk, theta.star, x = x, sizes = c(20, 30), stepmax = 5)
Hidden semi-Markov models (HSMMs) are a flexible extension of HMMs that can be approximated by HMMs on an enlarged state space (of size equal to the sum of the state aggregate sizes) and with structured transition probabilities.
Recently, this inference procedure has been generalised to allow either the dwell-time distributions or the conditional transition probabilities to depend on external covariates such as the time of day. This special case is implemented here.
This function allows for that, by expecting a transition probability matrix for each time point in a period, and an integer-valued (in 1, ..., L) time variable that maps the data index to the according time.
forward_sp(delta, Gamma, allprobs, sizes, tod)
delta |
initial or stationary distribution of length N, or matrix of dimension c(k,N) for k independent tracks, if |
Gamma |
array of transition probability matrices of dimension c(M,M,L). Here we use the definition |
allprobs |
matrix of state-dependent probabilities/ density values of dimension c(n, N) which will automatically be converted to the appropriate dimension. |
sizes |
state aggregate sizes that are used for the approximation of the semi-Markov chain. |
tod |
(Integer valued) variable for cycle indexing in 1, ..., L, mapping the data index to a generalised time of day (length n). For half-hourly data L = 48. It could, however, also be day of year for daily data and L = 365. |
log-likelihood for given data and parameters
## generating data from a 2-state HSMM with time-varying dwell times
mu = c(0, 6)
beta = matrix(c(log(4),log(6),-0.2,0.2,-0.1,0.4), nrow = 2)
# time-varying mean dwell time
Lambda = exp(cbind(1, trigBasisExp(1:24, 24, 1)) %*% t(beta))
omega = matrix(c(0,1,1,0), nrow = 2, byrow = TRUE)

# simulation
# for a 2-state HSMM the embedded chain always alternates between 1 and 2
s = rep(1:2, 100)
C = x = numeric(0)
tod = rep(1:24, 50) # time of day variable
time = 1
for(t in 1:100){
  dt = rpois(1, Lambda[tod[time], s[t]])+1 # dwell time depending on time of day
  time = time + dt
  C = c(C, rep(s[t], dt))
  x = c(x, rnorm(dt, mu[s[t]], 1.5)) # fixed sd 1.5 for both states
}
tod = tod[1:length(x)]

## negative log likelihood function
mllk = function(theta.star, x, sizes, tod){
  # parameter transformations for unconstrained optimisation
  omega = matrix(c(0,1,1,0), nrow = 2, byrow = TRUE) # omega fixed (2-states)
  mu = theta.star[1:2]
  sigma = exp(theta.star[3:4])
  beta = matrix(theta.star[5:10], nrow = 2)
  # time-varying mean dwell time
  Lambda = exp(cbind(1, trigBasisExp(1:24, 24, 1)) %*% t(beta))
  dm = list()
  for(j in 1:2){ dm[[j]] = sapply(1:sizes[j]-1, dpois, lambda = Lambda[,j]) }
  Gamma = tpm_phsmm2(omega, dm)
  delta = stationary_p(Gamma, tod[1])
  # calculate all state-dependent probabilities
  allprobs = matrix(1, length(x), 2)
  for(j in 1:2){ allprobs[,j] = dnorm(x, mu[j], sigma[j]) }
  # return negative for minimization
  -forward_sp(delta, Gamma, allprobs, sizes, tod)
}

## fitting an HSMM to the data
theta.star = c(1, 4, log(2), log(2),     # state-dependent parameters
               log(4), log(6), rep(0,4)) # state process parameters
# mod = nlm(mllk, theta.star, x = x, sizes = c(10, 15), tod = tod, stepmax = 5)
Density, distribution function, quantile function and random generation for the gamma distribution reparametrised in terms of mean and standard deviation.
dgamma2(x, mean = 1, sd = 1, log = FALSE) pgamma2(q, mean = 1, sd = 1, lower.tail = TRUE, log.p = FALSE) qgamma2(p, mean = 1, sd = 1, lower.tail = TRUE, log.p = FALSE) rgamma2(n, mean = 1, sd = 1)
x , q
|
vector of quantiles |
mean |
mean parameter, must be positive scalar. |
sd |
standard deviation parameter, must be positive scalar. |
log , log.p
|
logical; if TRUE, probabilities/densities p are returned as log(p) |
lower.tail |
logical; if TRUE (default), probabilities are P[X <= x], otherwise P[X > x] |
p |
vector of probabilities |
n |
number of observations. If length(n) > 1, the length is taken to be the number required |
This implementation allows for automatic differentiation with RTMB
.
dgamma2
gives the density, pgamma2
gives the distribution function, qgamma2
gives the quantile function, and rgamma2
generates random deviates.
x = rgamma2(1)
d = dgamma2(x)
p = pgamma2(x)
q = qgamma2(p)
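The mean/sd parameterisation maps to the usual shape/scale parameterisation via shape = mean^2/sd^2 and scale = sd^2/mean, so the reparametrised functions can be checked against the base R gamma functions (a small sketch, assuming this standard mapping):

# sketch: relation to the usual shape/scale parameterisation
mean = 5; sd = 3
shape = mean^2 / sd^2
scale = sd^2 / mean
dgamma2(2, mean, sd)                    # reparametrised density
dgamma(2, shape = shape, scale = scale) # should agree with the line above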
This function builds the infinitesimal generator matrix for a continuous-time Markov chain from an unconstrained parameter vector.
generator(param, byrow = FALSE, report = TRUE)
param |
unconstrained parameter vector of length N*(N-1) where N is the number of states of the Markov chain |
byrow |
logical indicating if the transition probability matrix should be filled by row |
report |
logical, indicating whether the generator matrix Q should be reported from the fitted model. Defaults to |
infinitesimal generator matrix of dimension c(N,N)
Other transition probability matrix functions:
tpm()
,
tpm_cont()
,
tpm_emb()
,
tpm_emb_g()
,
tpm_g()
,
tpm_p()
# 2 states: 2 free off-diagonal elements
generator(rep(-1, 2))
# 3 states: 6 free off-diagonal elements
generator(rep(-2, 6))
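The generator can be combined with matrix exponentials to obtain transition probabilities over arbitrary time intervals, as needed for continuous-time HMMs. A minimal sketch, assuming tpm_cont() accepts a generator matrix and a vector of time differences (see the family of transition probability matrix functions above):

# sketch: transition probabilities for irregular sampling intervals
Q = generator(rep(-2, 2))     # 2-state generator from unconstrained parameters
timediff = c(0.5, 1, 2.5)     # hypothetical time differences between observations
Gamma = tpm_cont(Q, timediff) # assumed to return the matrix exponentials exp(Q * dt)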
Build the design matrix and the penalty matrix for models involving penalised splines based on a formula and a data set
make_matrices(formula, data, knots = NULL)
formula |
right side of a formula as used in mgcv |
data |
data frame containing the variables in the formula |
knots |
optional list containing user-specified knot values to be used for basis construction. For most bases the user simply supplies the knots to be used, which must match up with the k value supplied (note that the number of knots is not always just k).
See |
a list containing the design matrix Z
, the penalty matrix S
, the formula
, the data
and the knots
modmat = make_matrices(~ s(x), data.frame(x = 1:10))
This function builds the B-spline design matrix for a given data vector. Importantly, the B-spline basis functions are normalised such that the integral of each basis function is 1, hence this basis can be used for spline-based density estimation, when the basis functions are weighted by non-negative weights summing to one.
make_matrices_dens( x, k, type = "real", degree = 3, npoints = 10000, diff_order = 2, pow = 0.5 )
x |
data vector |
k |
number of basis functions |
type |
type of the data, either "real", "positive" or "circular" |
degree |
degree of the B-spline basis functions, defaults to cubic B-splines |
npoints |
number of points used in the numerical integration for normalizing the B-spline basis functions |
diff_order |
order of differencing used for the P-Spline penalty matrix for each data stream. Defaults to second-order differences. |
pow |
power for polynomial knot spacing. Such non-equidistant knot spacing is only used for type "positive". |
list containing the design matrix Z
, the penalty matrix S
, the prediction design matrix Z_predict
, the prediction grid xseq
, and details for the basis expansion.
modmat = make_matrices_dens(x = (-50):50, k = 20)
modmat = make_matrices_dens(x = 1:100, k = 20, type = "positive")
modmat = make_matrices_dens(x = seq(-pi,pi), k = 20, type = "circular")
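Because each basis function integrates to one, any non-negative weight vector summing to one turns the design matrix into a proper density estimate. A small sketch of this weighting (the uniform weights are purely illustrative):

# sketch: evaluating a spline-based density from the normalised basis
modmat = make_matrices_dens(x = (-50):50, k = 20)
w = rep(1/20, 20)                  # non-negative weights summing to one (illustrative)
dens = modmat$Z %*% w              # density values at the observed x
dens_grid = modmat$Z_predict %*% w # density on the prediction grid xseq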
A small group of researchers managed to put an accelerometer on the Loch Ness Monster and collected data for a few days. Now we have a data set of the overall dynamic body acceleration (ODBA) of the creature.
nessi
A data frame with 5,000 rows and 3 variables:
overall dynamic body acceleration
logarithm of overall dynamic body acceleration
hidden state variable
Generated for example purposes.
This function computes quadratic penalties of the form 0.5 * sum_i lambda_i * b_i' S_i b_i, with smoothing parameters lambda_i, coefficient vectors b_i, and fixed penalty matrices S_i.
It is intended to be used inside the penalised negative log-likelihood function when fitting models with penalised splines or simple random effects via quasi restricted maximum likelihood (qREML) with the qreml
function.
For qreml
to work, the likelihood function needs to be compatible with the RTMB
R package to enable automatic differentiation.
penalty(re_coef, S, lambda)
re_coef |
coefficient vector/matrix or list of coefficient vectors/matrices. Each list entry corresponds to a different smooth/random effect with its own associated penalty matrix in S |
S |
fixed penalty matrix or list of penalty matrices matching the structure of re_coef |
lambda |
penalty strength parameter vector that has a length corresponding to the total number of random effects/spline coefficients in re_coef. E.g. if re_coef contains a matrix with 2 rows (2 random effects of the same structure) and a vector (1 random effect), lambda needs to be of length 3. |
Caution: The formatting of re_coef
needs to match the structure of the parameter list in your penalised negative log-likelihood function,
i.e. you cannot have two random effect vectors of different names (different list elements in the parameter list), combine them into a matrix inside your likelihood and pass the matrix to penalty
.
If these are separate random effects, each with its own name, they need to be passed as a list to penalty
. Moreover, the ordering of re_coef
needs to match the character vector random
specified in qreml
.
returns the penalty value and reports to qreml
.
qreml
for the qREML algorithm
# Example with a single random effect
re = rep(0, 5)
S = diag(5)
lambda = 1
penalty(re, S, lambda)

# Example with two random effects,
# where one element contains two random effects of similar structure
re = list(matrix(0, 2, 5), rep(0, 4))
S = list(diag(5), diag(4))
lambda = c(1,1,2) # length = total number of random effects
penalty(re, S, lambda)

# Full model-fitting example
data = trex[1:1000,] # subset

# initial parameter list
par = list(logmu = log(c(0.3, 1)),      # step mean
           logsigma = log(c(0.2, 0.7)), # step sd
           beta0 = c(-2,2),             # state process intercept
           betaspline = matrix(rep(0, 18), nrow = 2)) # state process spline coefs

# data object with initial penalty strength lambda
dat = list(step = data$step,   # step length
           tod = data$tod,     # time of day covariate
           N = 2,              # number of states
           lambda = rep(10,2)) # initial penalty strength

# building model matrices
modmat = make_matrices(~ s(tod, bs = "cp"),
                       data = data.frame(tod = 1:24),
                       knots = list(tod = c(0,24))) # wrapping points
dat$Z = modmat$Z # spline design matrix
dat$S = modmat$S # penalty matrix

# penalised negative log-likelihood function
pnll = function(par) {
  getAll(par, dat) # makes everything contained available without $
  Gamma = tpm_g(Z, cbind(beta0, betaspline), ad = TRUE) # transition probabilities
  delta = stationary_p(Gamma, t = 1, ad = TRUE) # initial distribution
  mu = exp(logmu) # step mean
  sigma = exp(logsigma) # step sd
  # calculating all state-dependent densities
  allprobs = matrix(1, nrow = length(step), ncol = N)
  ind = which(!is.na(step)) # only for non-NA obs.
  for(j in 1:N) allprobs[ind,j] = dgamma2(step[ind], mu[j], sigma[j])
  -forward_g(delta, Gamma[,,tod], allprobs, ad = TRUE) +
    penalty(betaspline, S, lambda) # this does all the penalization work
}

# model fitting
mod = qreml(pnll, par, dat, random = "betaspline")
make_matrices
Build the prediction design matrix based on new data and model_matrices object created by make_matrices
pred_matrix(model_matrices, newdata)
model_matrices |
model_matrices object as returned from |
newdata |
data frame containing the variables in the formula and new data for which to evaluate the basis |
prediction design matrix for newdata
with the same basis as used for model_matrices
modmat = make_matrices(~ s(x), data.frame(x = 1:10))
Z_predict = pred_matrix(modmat, data.frame(x = 1:10 - 0.5))
For HMMs, pseudo-residuals are used to assess the goodness-of-fit of the model. These are based on the cumulative distribution function (CDF)
and can be used to quantify whether an observation is extreme relative to its model-implied distribution.
This function calculates such residuals via probability integral transform, based on the local state probabilities obtained by stateprobs
or stateprobs_g
and the respective parametric family.
pseudo_res( obs, dist, par, stateprobs = NULL, mod = NULL, normal = TRUE, discrete = NULL, randomise = TRUE, seed = NULL )
obs |
vector of continuous-valued observations (of length n) |
dist |
character string specifying which parametric CDF to use (e.g., "norm" or "pois") |
par |
named parameter list for the parametric CDF. Names need to correspond to the parameter names in the specified distribution (e.g. mean and sd for "norm") |
stateprobs |
matrix of local state probabilities for each observation (of dimension c(n,N), where N is the number of states) as computed by |
mod |
optional model object containing initial distribution If you are using automatic differentiation either with |
normal |
logical, if TRUE, the pseudo-residuals are transformed to the standard normal scale via the standard normal quantile function. These will be approximately standard normally distributed if the model is correct. |
discrete |
logical, if By default, will be determined using |
randomise |
for discrete pseudo residuals only. Logical, if |
seed |
for discrete pseudo residuals only. Integer, seed for random number generation |
When used for discrete pseudo-residuals, this function is just a wrapper for pseudo_res_discrete
.
vector of pseudo residuals
## continuous-valued observations
obs = rnorm(100)
stateprobs = matrix(0.5, nrow = 100, ncol = 2)
par = list(mean = c(1,2), sd = c(1,1))
pres = pseudo_res(obs, "norm", par, stateprobs)

## discrete-valued observations
obs = rpois(100, lambda = 1)
stateprobs = matrix(0.5, nrow = 100, ncol = 2)
par = list(lambda = c(1,2))
pres = pseudo_res(obs, "pois", par, stateprobs)
For HMMs, pseudo-residuals are used to assess the goodness-of-fit of the model. These are based on the cumulative distribution function (CDF)
and can be used to quantify whether an observation is extreme relative to its model-implied distribution.
This function calculates such residuals for discrete-valued observations, based on the local state probabilities obtained by stateprobs
or stateprobs_g
and the respective parametric family.
pseudo_res_discrete( obs, dist, par, stateprobs, normal = TRUE, randomise = TRUE, seed = NULL )
obs |
vector of discrete-valued observations (of length n) |
dist |
character string specifying which parametric CDF to use (e.g., "pois") |
par |
named parameter list for the parametric CDF. Names need to correspond to the parameter names in the specified distribution (e.g. lambda for "pois") |
stateprobs |
matrix of local state probabilities for each observation (of dimension c(n,N), where N is the number of states) |
normal |
logical, if TRUE, the pseudo-residuals are transformed to the standard normal scale via the standard normal quantile function. These will be approximately standard normally distributed if the model is correct. |
randomise |
logical, if |
seed |
integer, seed for random number generation |
For discrete observations, calculating pseudo residuals is slightly more involved, as the CDF is a step function.
Therefore, one can calculate the lower and upper CDF values for each observation.
By default, this function does exactly that and then randomly samples the interval in between to give approximately Gaussian pseudo-residuals.
If randomise
is set to FALSE
, the lower, upper and mean pseudo-residuals are returned.
vector of pseudo residuals
obs = rpois(100, lambda = 1)
stateprobs = matrix(0.5, nrow = 100, ncol = 2)
par = list(lambda = c(1,2))
pres = pseudo_res_discrete(obs, "pois", par, stateprobs)
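As described above, setting randomise = FALSE returns the lower, upper and mean pseudo-residuals instead of randomised ones, and a seed can be set for reproducible randomised residuals (a small sketch reusing the objects from the example above):

# non-randomised residuals: lower, upper and mean pseudo-residuals
pres_fixed = pseudo_res_discrete(obs, "pois", par, stateprobs, randomise = FALSE)
# reproducible randomised residuals
pres_seeded = pseudo_res_discrete(obs, "pois", par, stateprobs, seed = 1)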
This algorithm can be used very flexibly to fit statistical models that involve penalised splines or simple i.i.d. random effects, i.e. that have penalties of the form 0.5 * sum_i lambda_i * b_i' S_i b_i, with smoothing parameters lambda_i, coefficient vectors b_i, and fixed penalty matrices S_i.
The qREML algorithm is typically much faster than REML or marginal ML using the full Laplace approximation method, but may be slightly less accurate regarding the estimation of the penalty strength parameters.
Under the hood, qreml
uses the R package RTMB
for automatic differentiation in the inner optimisation.
The user has to specify the penalised negative log-likelihood function pnll
structured as dictated by RTMB
and use the penalty
function to compute the quadratic-form penalty inside the likelihood.
qreml( pnll, par, dat, random, psname = "lambda", alpha = 0, smoothing = 1, maxiter = 100, tol = 1e-04, control = list(reltol = 1e-10, maxit = 1000), silent = 1, joint_unc = TRUE, saveall = FALSE )
pnll |
penalised negative log-likelihood function that is structured as dictated by RTMB. Needs to be a function of the named list of initial parameters |
par |
named list of initial parameters. The random effects/spline coefficients can be vectors or matrices, the latter summarising several random effects of the same structure, each one being a row in the matrix. |
dat |
initial data list that contains the data used in the likelihood function, hyperparameters, and the initial penalty strength vector. If the initial penalty strength vector is not called lambda, the psname argument needs to be set accordingly. |
random |
vector of names of the random effects/penalised parameters in par. Caution: The ordering of |
psname |
optional name given to the penalty strength parameter in dat. Defaults to "lambda". |
alpha |
optional hyperparameter for exponential smoothing of the penalty strengths. For larger values, smoother convergence is to be expected but the algorithm may need more iterations. |
smoothing |
optional scaling factor for the final penalty strength parameters. Increasing this beyond one will lead to a smoother final model. Can be an integer or a vector of length equal to the length of the penalty strength parameter. |
maxiter |
maximum number of iterations in the outer optimisation over the penalty strength parameters. |
tol |
Convergence tolerance for the penalty strength parameters. |
control |
list of control parameters for the inner optimisation (passed to optim). We advise against changing the default values of |
silent |
integer silencing level: 0 corresponds to full printing of inner and outer iterations, 1 to printing of outer iterations only, and 2 to no printing. |
joint_unc |
logical, if |
saveall |
logical, if |
returns a model list influenced by the user's report statements in pnll
penalty
to compute the penalty inside the likelihood function
data = trex[1:1000,] # subset

# initial parameter list
par = list(logmu = log(c(0.3, 1)),      # step mean
           logsigma = log(c(0.2, 0.7)), # step sd
           beta0 = c(-2,2),             # state process intercept
           betaspline = matrix(rep(0, 18), nrow = 2)) # state process spline coefs

# data object with initial penalty strength lambda
dat = list(step = data$step,   # step length
           tod = data$tod,     # time of day covariate
           N = 2,              # number of states
           lambda = rep(10,2)) # initial penalty strength

# building model matrices
modmat = make_matrices(~ s(tod, bs = "cp"),
                       data = data.frame(tod = 1:24),
                       knots = list(tod = c(0,24))) # wrapping points
dat$Z = modmat$Z # spline design matrix
dat$S = modmat$S # penalty matrix

# penalised negative log-likelihood function
pnll = function(par) {
  getAll(par, dat) # makes everything contained available without $
  Gamma = tpm_g(Z, cbind(beta0, betaspline), ad = TRUE) # transition probabilities
  delta = stationary_p(Gamma, t = 1, ad = TRUE) # initial distribution
  mu = exp(logmu) # step mean
  sigma = exp(logsigma) # step sd
  # calculating all state-dependent densities
  allprobs = matrix(1, nrow = length(step), ncol = N)
  ind = which(!is.na(step)) # only for non-NA obs.
  for(j in 1:N) allprobs[ind,j] = dgamma2(step[ind], mu[j], sigma[j])
  -forward_g(delta, Gamma[,,tod], allprobs, ad = TRUE) +
    penalty(betaspline, S, lambda) # this does all the penalization work
}

# model fitting
mod = qreml(pnll, par, dat, random = "betaspline")
sdreport
After optimisation of an AD model, sdreportMC can be used to calculate samples and confidence intervals of all model parameters and REPORT()ed quantities, including nonlinear functions of random effects and parameters.
sdreportMC( obj, what, Hessian = NULL, CI = FALSE, n = 1000, probs = c(0.025, 0.975) )
obj |
object returned by |
what |
vector of strings with names of parameters and REPORT()ed quantities to include |
Hessian |
optional Hessian matrix. If not provided, it will be computed from the object |
CI |
logical. If TRUE, confidence intervals are returned instead of the raw samples |
n |
number of samples to draw from the multivariate normal distribution of the MLE |
probs |
vector of probabilities for the confidence intervals (ignored if no CIs are computed) |
Caution: Currently does not work for models with fixed parameters (i.e. that use the map
argument of MakeADFun
).
named list corresponding to the elements of what
. Each element has the structure of the corresponding quantity with an additional dimension added for the samples.
For example, if a quantity is a vector, the list contains a matrix. If a quantity is a matrix, the list contains an array.
# fitting an HMM to the trex data and running sdreportMC

## negative log-likelihood function
nll = function(par) {
  getAll(par, dat) # makes everything contained available without $
  Gamma = tpm(eta) # computes transition probability matrix from unconstrained eta
  delta = stationary(Gamma) # computes stationary distribution
  # exponentiating because all parameters strictly positive
  mu = exp(logmu)
  sigma = exp(logsigma)
  kappa = exp(logkappa)
  # reporting statements for sdreportMC
  REPORT(mu)
  REPORT(sigma)
  REPORT(kappa)
  # calculating all state-dependent densities
  allprobs = matrix(1, nrow = length(step), ncol = N)
  ind = which(!is.na(step) & !is.na(angle)) # only for non-NA obs.
  for(j in 1:N){
    allprobs[ind,j] = dgamma2(step[ind], mu[j], sigma[j]) * dvm(angle[ind], 0, kappa[j])
  }
  -forward(delta, Gamma, allprobs) # simple forward algorithm
}

## initial parameter list
par = list(
  logmu = log(c(0.3, 1)),      # initial means for step length (log-transformed)
  logsigma = log(c(0.2, 0.7)), # initial sds for step length (log-transformed)
  logkappa = log(c(0.2, 0.7)), # initial concentration for turning angle (log-transformed)
  eta = rep(-2, 2)             # initial t.p.m. parameters (on logit scale)
)

## data and hyperparameters
dat = list(
  step = trex$step[1:500],   # hourly step lengths
  angle = trex$angle[1:500], # hourly turning angles
  N = 2
)

## creating AD function
obj = MakeADFun(nll, par, silent = TRUE) # creating the objective function

## optimising
opt = nlminb(obj$par, obj$fn, obj$gr) # optimization

## running sdreportMC
# `mu` has report statement, `delta` is automatically reported by `forward()`
sdrMC = sdreportMC(obj, what = c("mu", "delta"), n = 50)
dim(sdrMC$delta) # now a matrix with 50 samples (rows)
Computes the conditional probabilities of the hidden states given all observations, Pr(S_t = j | X_1, ..., X_n), for homogeneous HMMs.
stateprobs(delta, Gamma, allprobs, trackID = NULL, mod = NULL)
delta |
initial or stationary distribution of length N, or matrix of dimension c(k,N) for k independent tracks, if |
Gamma |
transition probability matrix of dimension c(N,N), or array of k transition probability matrices of dimension c(N,N,k), if |
allprobs |
matrix of state-dependent probabilities/ density values of dimension c(n, N) |
trackID |
optional vector of length n containing IDs If provided, the total log-likelihood will be the sum of each track's likelihood contribution.
In this case, |
mod |
optional model object containing initial distribution If you are using automatic differentiation either with |
matrix of conditional state probabilities of dimension c(n,N)
Other decoding functions: stateprobs_g(), stateprobs_p(), viterbi(), viterbi_g(), viterbi_p()
Gamma = tpm(c(-1,-2))
delta = stationary(Gamma)
allprobs = matrix(runif(200), nrow = 100, ncol = 2)
probs = stateprobs(delta, Gamma, allprobs)
Computes the conditional probabilities of the hidden states given all observations, Pr(S_t = j | X_1, ..., X_n), for inhomogeneous HMMs.
stateprobs_g(delta, Gamma, allprobs, trackID = NULL, mod = NULL)
delta |
initial or stationary distribution of length N, or matrix of dimension c(k,N) for k independent tracks, if |
Gamma |
array of transition probability matrices of dimension c(N,N,n-1), as in a time series of length n, there are only n-1 transitions If an array of dimension c(N,N,n) for a single track is provided, the first slice will be ignored. If |
allprobs |
matrix of state-dependent probabilities/ density values of dimension c(n, N) |
trackID |
optional vector of k track IDs, if multiple tracks need to be decoded separately |
mod |
optional model object containing initial distribution If you are using automatic differentiation either with |
matrix of conditional state probabilities of dimension c(n,N)
Other decoding functions: stateprobs(), stateprobs_p(), viterbi(), viterbi_g(), viterbi_p()
Gamma = tpm_g(runif(99), matrix(c(-1,-1,1,-2), nrow = 2, byrow = TRUE))
delta = c(0.5, 0.5)
allprobs = matrix(runif(200), nrow = 100, ncol = 2)
probs = stateprobs_g(delta, Gamma, allprobs)
Computes the conditional probabilities of the hidden states given all observations, Pr(S_t = j | X_1, ..., X_n), for periodically inhomogeneous HMMs.
stateprobs_p(delta, Gamma, allprobs, tod, trackID = NULL, mod = NULL)
delta |
initial or stationary distribution of length N, or matrix of dimension c(k,N) for k independent tracks, if This could e.g. be the periodically stationary distribution (for each track) as computed by |
Gamma |
array of transition probability matrices for each time point in the cycle of dimension c(N,N,L), where L is the length of the cycle. |
allprobs |
matrix of state-dependent probabilities/ density values of dimension c(n, N) |
tod |
(Integer valued) variable for cycle indexing in 1, ..., L, mapping the data index to a generalised time of day (length n). For half-hourly data L = 48. It could, however, also be day of year for daily data and L = 365. |
trackID |
optional vector of k track IDs, if multiple tracks need to be decoded separately |
mod |
optional model object containing initial distribution If you are using automatic differentiation either with |
matrix of conditional state probabilities of dimension c(n,N)
Other decoding functions: stateprobs(), stateprobs_g(), viterbi(), viterbi_g(), viterbi_p()
L = 24
beta = matrix(c(-1, 2, -1, -2, 1, -1), nrow = 2, byrow = TRUE)
Gamma = tpm_p(1:L, L, beta, degree = 1)
delta = stationary_p(Gamma, 1)
allprobs = matrix(runif(200), nrow = 100, ncol = 2)
tod = rep(1:24, 5)[1:100]
probs = stateprobs_p(delta, Gamma, allprobs, tod)
A homogeneous, finite state Markov chain that is irreducible and aperiodic converges to a unique stationary distribution, here called delta. As it is stationary, this distribution satisfies delta Gamma = delta, subject to sum(delta) = 1, where Gamma is the transition probability matrix. This function solves the linear system of equations above.
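The following is a minimal sketch (not part of the original example) that numerically checks the defining property above; it only uses tpm() and stationary() from this package.

Gamma = tpm(c(-1, -2))      # 2-state transition probability matrix
delta = stationary(Gamma)   # stationary distribution
# delta should be invariant under Gamma and sum to one
all.equal(as.numeric(delta %*% Gamma), as.numeric(delta))
sum(delta)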
stationary(Gamma)
Gamma |
transition probability matrix of dimension c(N,N) |
stationary distribution of the Markov chain with the given transition probability matrix
tpm to create a transition probability matrix using the multinomial logistic link (softmax)
Other stationary distribution functions: stationary_cont(), stationary_p()
Gamma = tpm(c(rep(-2,3), rep(-3,3)))
delta = stationary(Gamma)
A well-behaved continuous-time Markov chain converges to a unique stationary distribution, here called pi. This distribution satisfies pi Q = 0, subject to sum(pi) = 1, where Q is the infinitesimal generator of the Markov chain. This function solves the linear system of equations above for a given generator matrix.
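As a quick sanity check (a sketch, not part of the original example), the result can be verified against the defining property using generator() from this package:

Q = generator(c(-2, -2))    # 2-state generator matrix
Pi = stationary_cont(Q)
max(abs(Pi %*% Q))          # should be numerically zero
sum(Pi)                     # should be one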
stationary_cont(Q)
Q |
infinitesimal generator matrix of dimension c(N,N) |
stationary distribution of the continuous-time Markov chain with given generator matrix
generator to create a generator matrix
Other stationary distribution functions: stationary(), stationary_p()
Q = generator(c(-2,-2))
Pi = stationary_cont(Q)
If the transition probability matrix of an inhomogeneous Markov chain varies only periodically (with period length L), it converges to a so-called periodically stationary distribution. This happens because the thinned Markov chain, which treats one full cycle as each time step, has the homogeneous transition probability matrix Gamma_t = Gamma(t) Gamma(t+1) ... Gamma(t+L-1) for all t = 1, ..., L. The stationary distribution for time t, here called delta(t), satisfies delta(t) Gamma_t = delta(t). This function calculates said periodically stationary distribution.
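A minimal sketch illustrating the connection to the thinned chain, using tpm_thinned() and stationary() from this package; both routes should agree up to numerical error:

beta = matrix(c(-1, 2, -1, -2, 1, -1), nrow = 2, byrow = TRUE)
Gamma = tpm_p(beta = beta, degree = 1)   # array of dimension c(2,2,24)
t = 4
GammaT = tpm_thinned(Gamma, t)           # t.p.m. of the thinned chain at time t
# the periodically stationary distribution at t is the ordinary stationary
# distribution of the thinned chain
all.equal(as.numeric(stationary(GammaT)), as.numeric(stationary_p(Gamma, t)))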
stationary_p(Gamma, t = NULL, ad = NULL)
Gamma |
array of transition probability matrices of dimension c(N,N,L) |
t |
integer index of the time point in the cycle, for which to calculate the stationary distribution If |
ad |
optional logical, indicating whether automatic differentiation with |
either the periodically stationary distribution at time t or all periodically stationary distributions.
tpm_p and tpm_g to create multiple transition matrices based on a cyclic variable or design matrix
Other stationary distribution functions: stationary(), stationary_cont()
# setting parameters for trigonometric link
beta = matrix(c(-1, 2, -1, -2, 1, -1), nrow = 2, byrow = TRUE)
Gamma = tpm_p(beta = beta, degree = 1)
# periodically stationary distribution for specific time point
delta = stationary_p(Gamma, 4)
# all periodically stationary distributions
Delta = stationary_p(Gamma)
Sparse version of stationary_p. This function computes the periodically stationary distribution of a Markov chain given a list of L sparse transition probability matrices. Compatible with automatic differentiation by RTMB
stationary_p_sparse(Gamma, t = NULL)
Gamma |
list of length L containing sparse transition probability matrices for one cycle. |
t |
integer index of the time point in the cycle, for which to calculate the stationary distribution If t is not provided, the function calculates all stationary distributions for each time point in the cycle. |
either the periodically stationary distribution at time t or all periodically stationary distributions.
## periodic HSMM example (here the approximating tpm is sparse)
N = 2 # number of states
L = 24 # cycle length
# time-varying mean dwell times
Z = trigBasisExp(1:L) # trigonometric basis functions design matrix
beta = matrix(c(2, 2, 0.1, -0.1, -0.2, 0.2), nrow = 2)
Lambda = exp(cbind(1, Z) %*% t(beta))
sizes = c(20, 20) # approximating chain with 40 states
# state dwell-time distributions
dm = lapply(1:N, function(i) sapply(1:sizes[i]-1, dpois, lambda = Lambda[,i]))
omega = matrix(c(0,1,1,0), nrow = N, byrow = TRUE) # embedded t.p.m.
# calculating extended-state-space t.p.m.s
Gamma = tpm_phsmm(omega, dm)
# periodically stationary distribution for specific time point
delta = stationary_p_sparse(Gamma, 4)
# all periodically stationary distributions
Delta = stationary_p_sparse(Gamma)
Sparse version of stationary. This function computes the stationary distribution of a Markov chain with a given sparse transition probability matrix. Compatible with automatic differentiation by RTMB
stationary_sparse(Gamma)
Gamma |
sparse transition probability matrix of dimension c(N,N) |
stationary distribution of the Markov chain with the given transition probability matrix
## HSMM example (here the approximating tpm is sparse)
# building the t.p.m. of the embedded Markov chain
omega = matrix(c(0,1,1,0), nrow = 2, byrow = TRUE)
# defining state aggregate sizes
sizes = c(20, 30)
# defining state dwell-time distributions
lambda = c(5, 11)
dm = list(dpois(1:sizes[1]-1, lambda[1]),
          dpois(1:sizes[2]-1, lambda[2]))
# calculating extended-state-space t.p.m.
Gamma = tpm_hsmm(omega, dm)
delta = stationary_sparse(Gamma)
Markov chains are parametrised in terms of a transition probability matrix Gamma, for which each row contains a conditional probability distribution of the next state given the current state. Hence, each row has entries between 0 and 1 that need to sum to one.
For numerical optimisation, we parametrise Gamma in terms of unconstrained parameters. This function computes Gamma from an unconstrained parameter vector via the inverse multinomial logistic link (also known as softmax) applied to each row.
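The following sketch spells out the row-wise softmax by hand; it assumes the default column-wise filling of the off-diagonal elements (byrow = FALSE) and is only meant to illustrate the link function:

par = c(-1, -2)                 # 2 states: 2 free off-diagonal elements
Gamma = tpm(par)
G = diag(2)                     # reference category: diagonal entries, linear predictor 0
G[!diag(2)] = exp(par)          # off-diagonal elements, filled column-wise (assumption)
G = G / rowSums(G)              # row-wise normalisation (softmax)
all.equal(Gamma, G, check.attributes = FALSE)  # should agree if the ordering assumption holds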
tpm(param, byrow = FALSE)
param |
unconstrained parameter vector of length N*(N-1) where N is the number of states of the Markov chain |
byrow |
logical indicating if the transition probability matrix should be filled by row Defaults to |
Transition probability matrix of dimension c(N,N)
Other transition probability matrix functions: generator(), tpm_cont(), tpm_emb(), tpm_emb_g(), tpm_g(), tpm_p()
# 2 states: 2 free off-diagonal elements
par1 = rep(-1, 2)
Gamma1 = tpm(par1)
# 3 states: 6 free off-diagonal elements
par2 = rep(-2, 6)
Gamma2 = tpm(par2)
A continuous-time Markov chain is described by an infinitesimal generator matrix Q.
When observing data at time points t_1, ..., t_n, the transition probabilities between t_i and t_{i+1} are calculated as Gamma(Delta t_i) = exp(Q Delta t_i), where exp() is the matrix exponential and Delta t_i = t_{i+1} - t_i. The mapping Delta t -> exp(Q Delta t) is also called the Markov semigroup.
This function calculates all transition matrices based on a given generator and time differences.
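A minimal sketch relating tpm_cont() to the matrix exponential; expm::expm() is assumed to be available here purely for the comparison (the expm package is not required by this package):

Q = generator(rep(-2, 6))              # generator of a 3-state chain
timediff = c(0.5, 2)                   # two time differences
Gamma = tpm_cont(Q, timediff)          # array of dimension c(3,3,2)
manual = expm::expm(Q * timediff[1])   # matrix exponential for the first interval
all.equal(Gamma[,,1], manual, check.attributes = FALSE)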
tpm_cont(Q, timediff, ad = NULL, report = TRUE)
Q |
infinitesimal generator matrix of the continuous-time Markov chain of dimension c(N,N) |
timediff |
time differences between observations of length n-1 when based on n observations |
ad |
optional logical, indicating whether automatic differentiation with |
report |
logical, indicating whether |
array of continuous-time transition matrices of dimension c(N,N,n-1)
Other transition probability matrix functions: generator(), tpm(), tpm_emb(), tpm_emb_g(), tpm_g(), tpm_p()
# building a Q matrix for a 3-state cont.-time Markov chain
Q = generator(rep(-2, 6))
# draw random time differences
timediff = rexp(100, 10)
# compute all transition matrices
Gamma = tpm_cont(Q, timediff)
Hidden semi-Markov models are defined in terms of state durations and an embedded transition probability matrix that contains the conditional transition probabilities given that the current state is left. This matrix necessarily has diagonal entries all equal to zero as self-transitions are impossible.
This function builds such an embedded/ conditional transition probability matrix from an unconstrained parameter vector. For each row of the matrix, the inverse multinomial logistic link is applied.
For a matrix of dimension c(N,N), the number of free off-diagonal elements is N*(N-2), hence also the length of param. This means that for 2 states the function needs to be called without any arguments, for 3 states with a vector of length 3, for 4 states with a vector of length 8, etc.
Compatible with automatic differentiation by RTMB
tpm_emb(param = NULL)
param |
unconstrained parameter vector of length N*(N-2) where N is the number of states of the Markov chain If the function is called without |
embedded/ conditional transition probability matrix of dimension c(N,N)
Other transition probability matrix functions: generator(), tpm(), tpm_cont(), tpm_emb_g(), tpm_g(), tpm_p()
# 2 states: no free off-diagonal elements
omega = tpm_emb()
# 3 states: 3 free off-diagonal elements
param = rep(0, 3)
omega = tpm_emb(param)
# 4 states: 8 free off-diagonal elements
param = rep(0, 8)
omega = tpm_emb(param)
Hidden semi-Markov models are defined in terms of state durations and an embedded transition probability matrix that contains the conditional transition probabilities given that the current state is left. This matrix necessarily has diagonal entries all equal to zero as self-transitions are impossible. We can allow this matrix to vary with covariates, which is the purpose of this function.
It builds all embedded/ conditional transition probability matrices based on a design and parameter matrix. For each row of the matrix, the inverse multinomial logistic link is applied.
For a matrix of dimension c(N,N), the number of free off-diagonal elements is N*(N-2) which determines the number of rows of the parameter matrix.
Compatible with automatic differentiation by RTMB
tpm_emb_g(Z, beta, report = TRUE)
Z |
covariate design matrix with or without intercept column, i.e. of dimension c(n, p) or c(n, p+1) If |
beta |
matrix of coefficients for the off-diagonal elements of the embedded transition probability matrix Needs to be of dimension c(N*(N-2), p+1), where the first column contains the intercepts. p can be 0, in which case the model is homogeneous. |
report |
logical, indicating whether the coefficient matrix beta should be reported from the fitted model. Defaults to |
array of embedded/ conditional transition probability matrices of dimension c(N,N,n)
Other transition probability matrix functions: generator(), tpm(), tpm_cont(), tpm_emb(), tpm_g(), tpm_p()
## parameter matrix for 3-state HSMM
beta = matrix(c(rep(0, 3), -0.2, 0.2, 0.1), nrow = 3)
# no intercept
Z = rnorm(100)
omega = tpm_emb_g(Z, beta)
# intercept
Z = cbind(1, Z)
omega = tpm_emb_g(Z, beta)
In an HMM, we often model the influence of covariates on the state process by linking them to the transition probability matrix. Most commonly, this is done by specifying a linear predictor eta_ij^(t) = beta_0^(ij) + beta_1^(ij) * z_t1 + ... + beta_p^(ij) * z_tp for each off-diagonal element (i != j) of the transition probability matrix and then applying the inverse multinomial logistic link (also known as softmax) to each row.
This function efficiently calculates all transition probability matrices for a given design matrix Z and parameter matrix beta.
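As a small consistency sketch (assuming tpm() and tpm_g() share the same parameter ordering), an intercept-only coefficient matrix reproduces the homogeneous matrix from tpm() for every observation:

Z = matrix(runif(100), ncol = 1)           # one covariate
beta0 = matrix(c(-1, -2, 0, 0), nrow = 2)  # intercepts c(-1,-2), slopes zero
Gamma = tpm_g(Z, beta0)                    # array of dimension c(2,2,100)
all.equal(Gamma[,,1], tpm(c(-1, -2)), check.attributes = FALSE)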
tpm_g(Z, beta, byrow = FALSE, ad = NULL, report = TRUE)
Z |
covariate design matrix with or without intercept column, i.e. of dimension c(n, p) or c(n, p+1) If |
beta |
matrix of coefficients for the off-diagonal elements of the transition probability matrix Needs to be of dimension c(N*(N-1), p+1), where the first column contains the intercepts. |
byrow |
logical indicating if each transition probability matrix should be filled by row Defaults to |
ad |
optional logical, indicating whether automatic differentiation with |
report |
logical, indicating whether the coefficient matrix |
array of transition probability matrices of dimension c(N,N,n)
Other transition probability matrix functions: generator(), tpm(), tpm_cont(), tpm_emb(), tpm_emb_g(), tpm_p()
Z = matrix(runif(200), ncol = 2)
beta = matrix(c(-1, 1, 2, -2, 1, -2), nrow = 2, byrow = TRUE)
Gamma = tpm_g(Z, beta)
Hidden semi-Markov models (HSMMs) are a flexible extension of HMMs, where the state duration distribution is explicitly modelled.
For direct numerical maximum likelihood estimation, HSMMs can be represented as HMMs on an enlarged state space (of size equal to the sum of the state aggregate sizes) and with structured transition probabilities.
This function computes the transition matrix to approximate a given HSMM by an HMM with a larger state space.
tpm_hsmm(omega, dm, Fm = NULL, sparse = TRUE, eps = 1e-10)
omega |
embedded transition probability matrix of dimension c(N,N) as computed by |
dm |
state dwell-time distributions arranged in a list of length(N). Each list element needs to be a vector of length N_i, where N_i is the state aggregate size. |
Fm |
optional list of length N containing the cumulative distribution functions of the dwell-time distributions. |
sparse |
logical, indicating whether the output should be a sparse matrix. Defaults to |
eps |
rounding value: if an entry of the transition probability matrix is smaller than eps, it is rounded to zero. Usually, this should not be changed. |
extended-state-space transition probability matrix of the approximating HMM
# building the t.p.m. of the embedded Markov chain
omega = matrix(c(0,1,1,0), nrow = 2, byrow = TRUE)
# defining state aggregate sizes
sizes = c(20, 30)
# defining state dwell-time distributions
lambda = c(5, 11)
dm = list(dpois(1:sizes[1]-1, lambda[1]),
          dpois(1:sizes[2]-1, lambda[2]))
# calculating extended-state-space t.p.m.
Gamma = tpm_hsmm(omega, dm)
Hidden semi-Markov models (HSMMs) are a flexible extension of HMMs.
For direct numerical maximum likelihood estimation, HSMMs can be represented as HMMs on an enlarged state space (of size equal to the sum of the state aggregate sizes) and with structured transition probabilities.
This function computes the transition matrix of such an approximating HMM.
tpm_hsmm2(omega, dm, eps = 1e-10)
omega |
embedded transition probability matrix of dimension c(N,N) |
dm |
state dwell-time distributions arranged in a list of length(N). Each list element needs to be a vector of length N_i, where N_i is the state aggregate size. |
eps |
rounding value: if an entry of the transition probability matrix is smaller than eps, it is rounded to zero. |
extended-state-space transition probability matrix of the approximating HMM
# building the t.p.m. of the embedded Markov chain
omega = matrix(c(0,1,1,0), nrow = 2, byrow = TRUE)
# defining state aggregate sizes
sizes = c(20, 30)
# defining state dwell-time distributions
lambda = c(5, 11)
dm = list(dpois(1:sizes[1]-1, lambda[1]),
          dpois(1:sizes[2]-1, lambda[2]))
# calculating extended-state-space t.p.m.
Gamma = tpm_hsmm(omega, dm)
Hidden semi-Markov models (HSMMs) are a flexible extension of HMMs. For direct numerical maximum likelihood estimation, HSMMs can be represented as HMMs on an enlarged state space (of size equal to the sum of the state aggregate sizes) and with structured transition probabilities.
This function computes the transition matrices of an inhomogeneous HSMM.
tpm_ihsmm(omega, dm, eps = 1e-10)
omega |
embedded transition probability matrix Either a matrix of dimension c(N,N) for homogeneous conditional transition probabilities (as computed by |
dm |
state dwell-time distributions arranged in a list of length N Each list element needs to be a matrix of dimension c(n, N_i), where each row t is the (approximate) probability mass function of state i at time t. |
eps |
rounding value: if an entry of the transition probability matrix is smaller than eps, it is rounded to zero. Usually, this should not be changed. |
list of length n - max(sapply(dm, ncol)), containing sparse extended-state-space transition probability matrices for each time point (except the first max(sapply(dm, ncol)) - 1).
N = 2
# time-varying mean dwell times
n = 100
z = runif(n)
beta = matrix(c(2, 2, 0.1, -0.1), nrow = 2)
Lambda = exp(cbind(1, z) %*% t(beta))
sizes = c(15, 15) # approximating chain with 30 states
# state dwell-time distributions
dm = lapply(1:N, function(i) sapply(1:sizes[i]-1, dpois, lambda = Lambda[,i]))
## homogeneous conditional transition probabilities
# diagonal elements are zero, rowsums are one
omega = matrix(c(0,1,1,0), nrow = N, byrow = TRUE)
# calculating extended-state-space t.p.m.s
Gamma = tpm_ihsmm(omega, dm)
## inhomogeneous conditional transition probabilities
# omega can be an array
omega = array(omega, dim = c(N,N,n))
# calculating extended-state-space t.p.m.s
Gamma = tpm_ihsmm(omega, dm)
Given a periodically varying variable such as time of day or day of year and the associated cycle length, this function calculates the transition probability matrices by applying the inverse multinomial logistic link (also known as softmax) to linear predictors of the form eta_ij^(t) = beta_0^(ij) + sum_{k=1}^{K} ( beta_{1k}^(ij) * sin(2*pi*k*t/L) + beta_{2k}^(ij) * cos(2*pi*k*t/L) ) for the off-diagonal elements (i != j) of the transition probability matrix.
This is relevant for modelling e.g. diurnal variation, and the flexibility can be increased by adding higher frequencies (i.e. increasing K).
tpm_p( tod = 1:24, L = 24, beta, degree = 1, Z = NULL, byrow = FALSE, ad = NULL, report = TRUE )
tod |
equidistant sequence of a cyclic variable For time of day and e.g. half-hourly data, this could be 1, ..., L and L = 48, or 0.5, 1, 1.5, ..., 24 and L = 24. |
L |
length of one full cycle, on the scale of tod |
beta |
matrix of coefficients for the off-diagonal elements of the transition probability matrix Needs to be of dimension c(N *(N-1), 2*degree+1), where the first column contains the intercepts. |
degree |
degree of the trigonometric link function For each additional degree, one sine and one cosine frequency are added. |
Z |
pre-calculated design matrix (excluding intercept column) Defaults to |
byrow |
logical indicating if each transition probability matrix should be filled by row Defaults to |
ad |
optional logical, indicating whether automatic differentiation with RTMB should be used. By default, the function determines this itself. |
report |
logical, indicating whether the coefficient matrix |
Note that using this function inside the negative log-likelihood function is convenient, but it performs the basis expansion into sine and cosine terms each time it is called. As these do not change during the optimisation, using tpm_g with a design matrix pre-calculated by trigBasisExp would be more efficient.
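A minimal sketch of this more efficient route, pre-computing the trigonometric design matrix once outside the likelihood function; both calls should produce the same array:

tod = 1:24
L = 24
beta = matrix(c(-1, 2, -1, -2, 1, -1), nrow = 2, byrow = TRUE)
Z = trigBasisExp(tod, L, degree = 1)     # computed once
Gamma1 = tpm_g(Z, beta)                  # re-used inside the likelihood
Gamma2 = tpm_p(tod, L, beta, degree = 1)
all.equal(Gamma1, Gamma2)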
array of transition probability matrices of dimension c(N,N,length(tod))
Other transition probability matrix functions: generator(), tpm(), tpm_cont(), tpm_emb(), tpm_emb_g(), tpm_g()
# hourly data
tod = seq(1, 24, by = 1)
L = 24
beta = matrix(c(-1, 2, -1, -2, 1, -1), nrow = 2, byrow = TRUE)
Gamma = tpm_p(tod, L, beta, degree = 1)

# half-hourly data
## integer tod sequence
tod = seq(1, 48, by = 1)
L = 48
beta = matrix(c(-1, 2, -1, -2, 1, -1), nrow = 2, byrow = TRUE)
Gamma1 = tpm_p(tod, L, beta, degree = 1)
## equivalent specification
tod = seq(0.5, 24, by = 0.5)
L = 24
beta = matrix(c(-1, 2, -1, -2, 1, -1), nrow = 2, byrow = TRUE)
Gamma2 = tpm_p(tod, L, beta, degree = 1)
all(Gamma1 == Gamma2) # same result
Hidden semi-Markov models (HSMMs) are a flexible extension of HMMs. For direct numerical maximum likelihood estimation, HSMMs can be represented as HMMs on an enlarged state space (of size equal to the sum of the state aggregate sizes) and with structured transition probabilities.
This function computes the transition matrices of a periodically inhomogeneous HSMM.
tpm_phsmm(omega, dm, eps = 1e-10)
omega |
embedded transition probability matrix Either a matrix of dimension c(N,N) for homogeneous conditional transition probabilities (as computed by |
dm |
state dwell-time distributions arranged in a list of length N Each list element needs to be a matrix of dimension c(L, N_i), where each row t is the (approximate) probability mass function of state i at time t. |
eps |
rounding value: if an entry of the transition probability matrix is smaller than eps, it is rounded to zero. Usually, this should not be changed. |
list of length L, containing sparse extended-state-space transition probability matrices of the approximating HMM for each time point of the cycle.
N = 2 # number of states
L = 24 # cycle length
# time-varying mean dwell times
Z = trigBasisExp(1:L) # trigonometric basis functions design matrix
beta = matrix(c(2, 2, 0.1, -0.1, -0.2, 0.2), nrow = 2)
Lambda = exp(cbind(1, Z) %*% t(beta))
sizes = c(20, 20) # approximating chain with 40 states
# state dwell-time distributions
dm = lapply(1:N, function(i) sapply(1:sizes[i]-1, dpois, lambda = Lambda[,i]))
## homogeneous conditional transition probabilities
# diagonal elements are zero, rowsums are one
omega = matrix(c(0,1,1,0), nrow = N, byrow = TRUE)
# calculating extended-state-space t.p.m.s
Gamma = tpm_phsmm(omega, dm)
## inhomogeneous conditional transition probabilities
# omega can be an array
omega = array(omega, dim = c(N,N,L))
# calculating extended-state-space t.p.m.s
Gamma = tpm_phsmm(omega, dm)
Hidden semi-Markov models (HSMMs) are a flexible extension of HMMs. For direct numerical maximum likelihood estimation, HSMMs can be represented as HMMs on an enlarged state space (of size equal to the sum of the state aggregate sizes) and with structured transition probabilities.
This function computes the transition matrices of a periodically inhomogeneous HSMM.
tpm_phsmm2(omega, dm, eps = 1e-10)
omega |
embedded transition probability matrix Either a matrix of dimension c(N,N) for homogeneous conditional transition probabilities, or an array of dimension c(N,N,L) for inhomogeneous conditional transition probabilities. |
dm |
state dwell-time distributions arranged in a list of length(N) Each list element needs to be a matrix of dimension c(L, N_i), where each row t is the (approximate) probability mass function of state i at time t. |
eps |
rounding value: if an entry of the transition probability matrix is smaller than eps, it is rounded to zero. |
array of dimension c(N,N,L), containing the extended-state-space transition probability matrices of the approximating HMM for each time point of the cycle.
N = 3
L = 24
# time-varying mean dwell times
Lambda = exp(matrix(rnorm(L*N, 2, 0.5), nrow = L))
sizes = c(25, 25, 25) # approximating chain with 75 states
# state dwell-time distributions
dm = list()
for(i in 1:3){
  dmi = matrix(nrow = L, ncol = sizes[i])
  for(t in 1:L){
    dmi[t,] = dpois(1:sizes[i]-1, Lambda[t,i])
  }
  dm[[i]] = dmi
}
## homogeneous conditional transition probabilities
# diagonal elements are zero, rowsums are one
omega = matrix(c(0,0.5,0.5,0.2,0,0.8,0.7,0.3,0), nrow = N, byrow = TRUE)
# calculating extended-state-space t.p.m.s
Gamma = tpm_phsmm(omega, dm)
## inhomogeneous conditional transition probabilities
# omega can be an array
omega = array(rep(omega,L), dim = c(N,N,L))
omega[1,,4] = c(0, 0.2, 0.8) # small change for inhomogeneity
# calculating extended-state-space t.p.m.s
Gamma = tpm_phsmm(omega, dm)
If the transition probability matrix of an inhomogeneous Markov chain varies only periodically (with period length L), it converges to a so-called periodically stationary distribution. This happens because the thinned Markov chain, which treats one full cycle as each time step, has the homogeneous transition probability matrix Gamma_t = Gamma(t) Gamma(t+1) ... Gamma(t+L-1) for all t = 1, ..., L.
This function calculates the matrix above efficiently as a preliminary step to calculating the periodically stationary distribution.
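A minimal sketch computing the same product by hand (the wrap-around of the cycle after L is an assumption about the indexing convention):

beta = matrix(c(-1, -2, 2, -1, 2, -4), nrow = 2, byrow = TRUE)
Gamma = tpm_p(beta = beta)               # array of dimension c(2,2,24)
t = 4; L = dim(Gamma)[3]
idx = c(t:L, seq_len(t - 1))             # one full cycle starting at t
GammaT = Reduce(`%*%`, lapply(idx, function(k) Gamma[,,k]))
all.equal(tpm_thinned(Gamma, t), GammaT, check.attributes = FALSE)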
tpm_thinned(Gamma, t)
Gamma |
array of transition probability matrices of dimension c(N,N,L). |
t |
integer index of the time point in the cycle, for which to calculate the thinned transition probability matrix |
thinned transition probability matrix of dimension c(N,N)
# setting parameters for trigonometric link
beta = matrix(c(-1, -2, 2, -1, 2, -4), nrow = 2, byrow = TRUE)
# calculating periodically varying t.p.m. array (of length 24 here)
Gamma = tpm_p(beta = beta)
# calculating t.p.m. of thinned Markov chain
tpm_thinned(Gamma, 4)
Hourly step lengths and turning angles of a Tyrannosaurus rex, living 66 million years ago.
trex
A data frame with 10,000 rows and 4 variables:
tod: time of day variable ranging from 1 to 24
step: hourly step lengths in kilometres
angle: hourly turning angles in radians
state: hidden state variable
Generated for example purposes.
Given a periodically varying variable such as time of day or day of year and the associated cycle length, this function performs a basis expansion to efficiently calculate a linear predictor of the form eta^(t) = beta_0 + sum_{k=1}^{K} ( beta_{1k} * sin(2*pi*k*t/L) + beta_{2k} * cos(2*pi*k*t/L) ).
This is relevant for modelling e.g. diurnal variation, and the flexibility can be increased by adding higher frequencies (i.e. increasing K).
trigBasisExp(tod, L = 24, degree = 1)
tod |
equidistant sequence of a cyclic variable For time of day and e.g. half-hourly data, this could be 1, ..., L and L = 48, or 0.5, 1, 1.5, ..., 24 and L = 24. |
L |
length of one cycle on the scale of the time variable. For time of day, this would be 24. |
degree |
degree K of the trigonometric link above. Increasing K increases the flexibility. |
design matrix (without intercept column), ordered as sin1, cos1, sin2, cos2, ...
## hourly data
tod = rep(1:24, 10)
Z = trigBasisExp(tod, L = 24, degree = 2)

## half-hourly data
tod = rep(1:48/2, 10) # in [0,24] -> L = 24
Z1 = trigBasisExp(tod, L = 24, degree = 3)
tod = rep(1:48, 10) # in [1,48] -> L = 48
Z2 = trigBasisExp(tod, L = 48, degree = 3)
all(Z1 == Z2) # The latter two are equivalent specifications!
The Viterbi algorithm allows one to decode the most probable state sequence of an HMM.
viterbi(delta, Gamma, allprobs, trackID = NULL, mod = NULL)
delta |
initial distribution of length N, or matrix of dimension c(k,N) for k independent tracks, if |
Gamma |
transition probability matrix of dimension c(N,N) or array of transition probability matrices of dimension c(N,N,k) if |
allprobs |
matrix of state-dependent probabilities/ density values of dimension c(n, N) |
trackID |
optional vector of k track IDs, if multiple tracks need to be decoded separately |
mod |
optional model object containing initial distribution If you are using automatic differentiation either with |
vector of decoded states of length n
Other decoding functions: stateprobs(), stateprobs_g(), stateprobs_p(), viterbi_g(), viterbi_p()
delta = c(0.5, 0.5)
Gamma = matrix(c(0.9, 0.1, 0.2, 0.8), nrow = 2, byrow = TRUE)
allprobs = matrix(runif(200), nrow = 100, ncol = 2)
states = viterbi(delta, Gamma, allprobs)
The Viterbi algorithm allows one to decode the most probable state sequence of an HMM.
viterbi_g(delta, Gamma, allprobs, trackID = NULL, mod = NULL)
delta |
initial distribution of length N, or matrix of dimension c(k,N) for k independent tracks, if |
Gamma |
array of transition probability matrices of dimension c(N,N,n-1), as in a time series of length n, there are only n-1 transitions If an array of dimension c(N,N,n) is provided for a single track, the first slice will be ignored. If |
allprobs |
matrix of state-dependent probabilities/ density values of dimension c(n, N) |
trackID |
optional vector of k track IDs, if multiple tracks need to be decoded separately |
mod |
optional model object containing initial distribution If you are using automatic differentiation either with |
vector of decoded states of length n
Other decoding functions: stateprobs(), stateprobs_g(), stateprobs_p(), viterbi(), viterbi_p()
delta = c(0.5, 0.5)
Gamma = array(dim = c(2,2,99))
for(t in 1:99){
  gammas = rbeta(2, shape1 = 0.4, shape2 = 1)
  Gamma[,,t] = matrix(c(1-gammas[1], gammas[1],
                        gammas[2], 1-gammas[2]), nrow = 2, byrow = TRUE)
}
allprobs = matrix(runif(200), nrow = 100, ncol = 2)
states = viterbi_g(delta, Gamma, allprobs)
The Viterbi algorithm allows one to decode the most probable state sequence of an HMM.
viterbi_p(delta, Gamma, allprobs, tod, trackID = NULL, mod = NULL)
delta |
initial distribution of length N, or matrix of dimension c(k,N) for k independent tracks, if This could e.g. be the periodically stationary distribution (for each track). |
Gamma |
array of transition probability matrices for each time point in the cycle of dimension c(N,N,L), where L is the length of the cycle |
allprobs |
matrix of state-dependent probabilities/ density values of dimension c(n, N) |
tod |
(Integer valued) variable for cycle indexing in 1, ..., L, mapping the data index to a generalised time of day (length n) For half-hourly data L = 48. It could, however, also be day of year for daily data and L = 365. |
trackID |
optional vector of k track IDs, if multiple tracks need to be decoded separately |
mod |
optional model object containing initial distribution If you are using automatic differentiation either with |
vector of decoded states of length n
Other decoding functions: stateprobs(), stateprobs_g(), stateprobs_p(), viterbi(), viterbi_g()
delta = c(0.5, 0.5)
beta = matrix(c(-2, 1, -1, -2, -1, 1), nrow = 2, byrow = TRUE)
Gamma = tpm_p(1:24, 24, beta)
tod = rep(1:24, 10)
n = length(tod)
allprobs = matrix(runif(2*n), nrow = n, ncol = 2)
states = viterbi_p(delta, Gamma, allprobs, tod)
Density, distribution function and random generation for the von Mises distribution.
dvm(x, mu = 0, kappa = 1, log = FALSE) pvm(q, mu = 0, kappa = 1, from = NULL, tol = 1e-20) rvm(n, mu = 0, kappa = 1, wrap = TRUE)
x, q |
vector of angles measured in radians at which to evaluate the density function. |
mu |
mean direction of the distribution measured in radians. |
kappa |
non-negative numeric value for the concentration parameter of the distribution. |
log |
logical; if |
from |
value from which the integration for CDF starts. If |
tol |
the precision in evaluating the distribution function |
n |
number of observations. If |
wrap |
logical; if |
The implementation of dvm allows for automatic differentiation with RTMB. rvm and pvm are imported from CircStats and circular respectively.
dvm gives the density, pvm gives the distribution function, and rvm generates random deviates.
set.seed(1)
x = rvm(1000, 0, 1)
d = dvm(x, 0, 1)
p = pvm(x, 0, 1)