Title: | Easy Handling Discrete Time Markov Chains |
---|---|
Description: | Functions and S4 methods to create and manage discrete time Markov chains more easily. In addition, functions to perform statistical (fitting and drawing random variates) and probabilistic (analysis of their structural properties) analysis are provided. See Spedicato (2017) <doi:10.32614/RJ-2017-036>. Some functions for continuous time Markov chains depend on the suggested ctmcd package. |
Authors: | Giorgio Alfredo Spedicato [aut, cre] , Tae Seung Kang [aut], Sai Bhargav Yalamanchi [aut], Mildenberger Thoralf [ctb] , Deepak Yadav [aut], Ignacio Cordón [aut] , Vandit Jain [ctb], Toni Giorgino [ctb] , Richèl J.C. Bilderbeek [ctb] , Daniel Ebbert [ctb] , Shreyash Maheshwari [ctb], Reinhold Koch [ctb] |
Maintainer: | Giorgio Alfredo Spedicato <[email protected]> |
License: | MIT + file LICENSE |
Version: | 0.10.0 |
Built: | 2024-11-25 15:26:25 UTC |
Source: | CRAN |
The package contains classes and methods to create and manage (plot, print, export, for example) discrete time Markov chains (DTMC). In addition, it provides functions to perform statistical (fitting and drawing random variates) and probabilistic (analysis of DTMC properties) analysis.
Giorgio Alfredo Spedicato Maintainer: Giorgio Alfredo Spedicato <[email protected]>
Discrete-Time Markov Models, Bremaud, Springer 1999
Useful links:
Report bugs at https://github.com/spedygiorgio/markovchain/issues
# create some markov chains
statesNames <- c("a", "b")
mcA <- new("markovchain", transitionMatrix = matrix(c(0.7, 0.3, 0.1, 0.9), byrow = TRUE, nrow = 2,
                                                    dimnames = list(statesNames, statesNames)))

statesNames <- c("a", "b", "c")
mcB <- new("markovchain", states = statesNames,
           transitionMatrix = matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1),
                                     nrow = 3, byrow = TRUE,
                                     dimnames = list(statesNames, statesNames)))

statesNames <- c("a", "b", "c", "d")
matrice <- matrix(c(0.25, 0.75, 0, 0, 0.4, 0.6, 0, 0, 0, 0, 0.1, 0.9, 0, 0, 0.7, 0.3),
                  nrow = 4, byrow = TRUE)
mcC <- new("markovchain", states = statesNames, transitionMatrix = matrice)
mcD <- new("markovchain", transitionMatrix = matrix(c(0, 1, 0, 1), nrow = 2, byrow = TRUE))

# operations with S4 methods
mcA^2
steadyStates(mcB)
absorbingStates(mcB)
markovchainSequence(n = 20, markovchain = mcC, include = TRUE)
Computes the absorption probability from each transient state to each recurrent one. The (i, j) entry (or the (j, i) entry, for a stochastic matrix by columns) represents the probability that the first non-transient state reached from the transient state i is j (and therefore that the chain is absorbed in the communicating recurrent class of j).
absorptionProbabilities(object)
object: the markovchain object
A named matrix with the absorption probabilities from each transient state to each of the recurrent ones
Ignacio Cordón
C. M. Grinstead and J. L. Snell. Introduction to Probability. American Mathematical Soc., 2012.
m <- matrix(c(1/2, 1/2, 0, 1/2, 1/2, 0, 0, 1/2, 1/2), ncol = 3, byrow = TRUE)
mc <- new("markovchain", states = letters[1:3], transitionMatrix = m)
absorptionProbabilities(mc)
This table shows mobility between income quartiles for fathers and sons for the cohort born in 1970.
data(blanden)
An object of class table
with 4 rows and 4 columns.
The rows represent fathers' income quartile when the son is aged 16, whilst the columns represent sons' income quartiles when he is aged 30 (in 2000).
Personal reworking
Jo Blanden, Paul Gregg and Stephen Machin, Intergenerational Mobility in Europe and North America, Centre for Economic Performance (2005)
data(blanden)
mobilityMc <- as(blanden, "markovchain")
Returns the probability of hitting states from set A before set B with different initial states
committorAB(object,A,B,p)
object |
a markovchain class object |
A |
a set of states |
B |
a set of states |
p |
initial state (default value : 1) |
The function solves a system of linear equations to calculate the probability that the process hits a state from set A before any state from set B
Returns a vector of probabilities if the initial state is not provided, else returns a single number
transMatr <- matrix(c(0, 0, 0, 1, 0.5,
                      0.5, 0, 0, 0, 0,
                      0.5, 0, 0, 0, 0,
                      0, 0.2, 0.4, 0, 0,
                      0, 0.8, 0.6, 0, 0.5), nrow = 5)
object <- new("markovchain", states = c("a", "b", "c", "d", "e"), transitionMatrix = transMatr)
committorAB(object, c(5), c(3))
conditionalDistribution of a Markov Chain
It extracts the conditional distribution of the subsequent state, given the current state.
conditionalDistribution(object, state)
object: A markovchain object
state: The current state, on which the distribution of the subsequent state is conditioned.
A named probability vector
Giorgio Spedicato, Deepak Yadav
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
# define a markov chain
statesNames <- c("a", "b", "c")
markovB <- new("markovchain", states = statesNames,
               transitionMatrix = matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1),
                                         nrow = 3, byrow = TRUE,
                                         dimnames = list(statesNames, statesNames)))
conditionalDistribution(markovB, "b")
This is the table from the Craig and Sendi paper showing CD4 cell counts at zero and six months, grouped into brackets
data(craigsendi)
The format is:
 table [1:3, 1:3] 682 154 19 33 64 19 25 47 43
 - attr(*, "dimnames")=List of 2
  ..$ : chr [1:3] "0-49" "50-74" "75-UP"
  ..$ : chr [1:3] "0-49" "50-74" "75-UP"
Rows represent counts at the beginning, cols represent counts after six months.
Estimation of the transition matrix of a discrete time Markov chain, Bruce A. Craig and Peter P. Sendi, Health Economics 11, 2002.
see source
data(craigsendi)
csMc <- as(craigsendi, "markovchain")
steadyStates(csMc)
Given a sequence of states arising from a stationary state, it fits the underlying Markov chain distribution using either MLE (optionally with a Laplacian smoother), bootstrap, or MAP (Bayesian) inference.
createSequenceMatrix(stringchar, toRowProbs = FALSE, sanitize = FALSE,
                     possibleStates = character())

markovchainFit(data, method = "mle", byrow = TRUE, nboot = 10L, laplacian = 0,
               name = "", parallel = FALSE, confidencelevel = 0.95,
               confint = TRUE, hyperparam = matrix(), sanitize = FALSE,
               possibleStates = character())
stringchar: It can be a matrix, a character vector or a list
toRowProbs: converts a sequence matrix into a probability matrix
sanitize: put 1 in all rows having rowSum equal to zero
possibleStates: Possible states which are not present in the given sequence
data: It can be a character vector, a matrix, a data frame or a list
method: Method used to estimate the Markov chain. Either "mle", "map", "bootstrap" or "laplace"
byrow: it tells whether the output Markov chain should show the transition probabilities by row
nboot: Number of bootstrap replicates in case "bootstrap" is used
laplacian: Laplacian smoothing parameter, default zero. It is only used when the "laplace" method is chosen
name: Optional character for the name slot
parallel: Use parallel processing when performing bootstrap estimates
confidencelevel: confidence level for the confidence interval width. Used only when confidence intervals are computed (confint = TRUE)
confint: a boolean to decide whether to compute confidence intervals or not
hyperparam: Hyperparameter matrix for the a priori distribution. If none is provided, a default value of 1 is assigned to each parameter. This must be of size k x k, where k is the number of states in the chain, and the values should typically be non-negative integers
Disabling confint would lower the computation time on large datasets. If data or stringchar contain NAs, the related NA-containing transitions will be ignored.
A list containing an estimate, the log-likelihood, and, when the "bootstrap" method is used, a matrix of standard deviations and the bootstrap samples. When the "mle", "bootstrap" or "map" method is used, the lower and upper confidence bounds are returned along with the standard error. The "map" method also returns the expected value of the parameters with respect to the posterior distribution.
This function has been rewritten in Rcpp. The bootstrap algorithm has been defined "heuristically". In addition, the parallel facility is not complete: it involves only a part of the bootstrap process.
When data is either a data.frame or a matrix object, only the MLE fit is currently available.
Giorgio Spedicato, Tae Seung Kang, Sai Bhargav Yalamanchi
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
Inferring Markov Chains: Bayesian Estimation, Model Comparison, Entropy Rate, and Out-of-Class Modeling, Christopher C. Strelioff, James P. Crutchfield, Alfred Hubler, Santa Fe Institute
Yalamanchi SB, Spedicato GA (2015). Bayesian Inference of First Order Markov Chains. R package version 0.2.5
markovchainSequence, markovchainListFit
sequence <- c("a", "b", "a", "a", "a", "a", "b", "a", "b", "a", "b", "a", "a", "b", "b", "b", "a") sequenceMatr <- createSequenceMatrix(sequence, sanitize = FALSE) mcFitMLE <- markovchainFit(data = sequence) mcFitBSP <- markovchainFit(data = sequence, method = "bootstrap", nboot = 5, name = "Bootstrap Mc") na.sequence <- c("a", NA, "a", "b") # There will be only a (a,b) transition na.sequenceMatr <- createSequenceMatrix(na.sequence, sanitize = FALSE) mcFitMLE <- markovchainFit(data = na.sequence) # data can be a list of character vectors sequences <- list(x = c("a", "b", "a"), y = c("b", "a", "b", "a", "c")) mcFitMap <- markovchainFit(sequences, method = "map") mcFitMle <- markovchainFit(sequences, method = "mle")
sequence <- c("a", "b", "a", "a", "a", "a", "b", "a", "b", "a", "b", "a", "a", "b", "b", "b", "a") sequenceMatr <- createSequenceMatrix(sequence, sanitize = FALSE) mcFitMLE <- markovchainFit(data = sequence) mcFitBSP <- markovchainFit(data = sequence, method = "bootstrap", nboot = 5, name = "Bootstrap Mc") na.sequence <- c("a", NA, "a", "b") # There will be only a (a,b) transition na.sequenceMatr <- createSequenceMatrix(na.sequence, sanitize = FALSE) mcFitMLE <- markovchainFit(data = na.sequence) # data can be a list of character vectors sequences <- list(x = c("a", "b", "a"), y = c("b", "a", "b", "a", "c")) mcFitMap <- markovchainFit(sequences, method = "map") mcFitMle <- markovchainFit(sequences, method = "mle")
The S4 class that describes ctmc (continuous time Markov chain) objects.
states: Name of the states. Must be the same as the row and column names of the generator matrix
byrow: TRUE or FALSE. Indicates whether the given matrix is stochastic by rows or by columns
generator: Square generator matrix
name: Optional character name of the Markov chain
signature(x = "ctmc")
: method to get the size
signature(.Object = "ctmc")
: initialize
method
signature(object = "ctmc")
: states method.
signature(object = "ctmc")
: method to get the
steady state vector.
signature(x = "ctmc", y = "missing")
: plot method
for ctmc
objects
ctmc classes are written using S4 classes. A validation method is used to assess whether either the columns or the rows total zero. Rounding is used up to the 5th decimal. If state names are not properly defined for a generator matrix, coercing to a ctmc object leads to overriding the state names with an artificial "s1", "s2", ... sequence.
Introduction to Stochastic Processes with Applications in the Biosciences (2013), David F. Anderson, University of Wisconsin at Madison.
Sai Bhargav Yalamanchi, Giorgio Spedicato
generatorToTransitionMatrix, rctmc
energyStates <- c("sigma", "sigma_star") byRow <- TRUE gen <- matrix(data = c(-3, 3, 1, -1), nrow = 2, byrow = byRow, dimnames = list(energyStates, energyStates)) molecularCTMC <- new("ctmc", states = energyStates, byrow = byRow, generator = gen, name = "Molecular Transition Model") steadyStates(molecularCTMC) ## Not run: plot(molecularCTMC)
energyStates <- c("sigma", "sigma_star") byRow <- TRUE gen <- matrix(data = c(-3, 3, 1, -1), nrow = 2, byrow = byRow, dimnames = list(energyStates, energyStates)) molecularCTMC <- new("ctmc", states = energyStates, byrow = byRow, generator = gen, name = "Molecular Transition Model") steadyStates(molecularCTMC) ## Not run: plot(molecularCTMC)
This function fits the underlying CTMC, given the state transition data and the transition times, using the maximum likelihood method (MLE)
ctmcFit(data, byrow = TRUE, name = "", confidencelevel = 0.95)
data: It is a list of two elements. The first element is a character vector denoting the states. The second is a numeric vector denoting the corresponding transition times.
byrow: Determines if the output transition probabilities of the underlying embedded DTMC are by row.
name: Optional name for the CTMC.
confidencelevel: Confidence level for the confidence interval construction.
Note that in data there must exist an element-wise correspondence between the two elements of the list, and that data[[2]][1] is always 0.
It returns a list containing the CTMC object and the confidence intervals.
Sai Bhargav Yalamanchi
Continuous Time Markov Chains (vignette), Sai Bhargav Yalamanchi, Giorgio Alfredo Spedicato 2015
data <- list(c("a", "b", "c", "a", "b", "a", "c", "b", "c"), c(0, 0.8, 2.1, 2.4, 4, 5, 5.9, 8.2, 9)) ctmcFit(data)
data <- list(c("a", "b", "c", "a", "b", "a", "c", "b", "c"), c(0, 0.8, 2.1, 2.4, 4, 5, 5.9, 8.2, 9)) ctmcFit(data)
Given a markovchain object and reward values for every state, the function calculates the expected reward value after n steps.
expectedRewards(markovchain,n,rewards)
markovchain: the markovchain-class object
n: number of steps of the process
rewards: vector of rewards corresponding to the states
The function uses a dynamic programming approach to solve a recursive equation described in the reference.
Returns a vector of expected rewards for different initial states
Vandit Jain
Stochastic Processes: Theory for Applications, Robert G. Gallager, Cambridge University Press
transMatr <- matrix(c(0.99, 0.01, 0.01, 0.99), nrow = 2, byrow = TRUE)
simpleMc <- new("markovchain", states = c("a", "b"), transitionMatrix = transMatr)
expectedRewards(simpleMc, 1, c(0, 1))
Given a markovchain object and reward values for every state, the function calculates the expected first passage reward for a set A of states after n steps.
expectedRewardsBeforeHittingA(markovchain, A, state, rewards, n)
markovchain: the markovchain-class object
A: set of states for the first passage expected reward
state: initial state
rewards: vector of rewards corresponding to the states
n: number of steps of the process
The function returns the value of the expected first passage rewards, given rewards corresponding to every state, an initial state and the number of steps.
Returns an expected reward (numerical value) as described above
Sai Bhargav Yalamanchi, Vandit Jain
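The quantity described above can be written down explicitly. The formalization below is inferred from the Details and Value text rather than taken from the package sources, so treat it as an assumption: write r(.) for the reward vector and T_A for the first hitting time of the set A; then, for an initial state i not in A and a horizon of n steps, the expected first passage reward reads

\mathbb{E}\Bigl[\, \sum_{k=0}^{n-1} r(X_k)\,\mathbf{1}\{T_A > k\} \;\Bigm|\; X_0 = i \Bigr], \qquad T_A = \min\{k \ge 0 : X_k \in A\}.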
Returns expected hitting time from state i to state j
ExpectedTime(C,i,j,useRCpp)
C: A CTMC S4 object
i: Initial state i
j: Final state j
useRCpp: logical whether to use Rcpp
According to the theorem, holding times for all states except j should be greater than 0.
A numerical value giving the expected hitting time from i to j
Vandit Jain
Markovchains, J. R. Norris, Cambridge University Press
states <- c("a","b","c","d") byRow <- TRUE gen <- matrix(data = c(-1, 1/2, 1/2, 0, 1/4, -1/2, 0, 1/4, 1/6, 0, -1/3, 1/6, 0, 0, 0, 0), nrow = 4,byrow = byRow, dimnames = list(states,states)) ctmc <- new("ctmc",states = states, byrow = byRow, generator = gen, name = "testctmc") ExpectedTime(ctmc,1,4,TRUE)
states <- c("a","b","c","d") byRow <- TRUE gen <- matrix(data = c(-1, 1/2, 1/2, 0, 1/4, -1/2, 0, 1/4, 1/6, 0, -1/3, 1/6, 0, 0, 0, 0), nrow = 4,byrow = byRow, dimnames = list(states,states)) ctmc <- new("ctmc",states = states, byrow = byRow, generator = gen, name = "testctmc") ExpectedTime(ctmc,1,4,TRUE)
This function computes the first passage probability in states
firstPassage(object, state, n)
object: A markovchain object
state: Initial state
n: Number of rows on which to compute the distribution
Based on Feres' Matlab listings
A matrix of size n x (number of states) showing the probability that the first passage time into each state is exactly the row number.
Giorgio Spedicato
Renaldo Feres, Notes for Math 450 Matlab listings for Markov chains
simpleMc <- new("markovchain", states = c("a", "b"), transitionMatrix = matrix(c(0.4, 0.6, .3, .7), nrow = 2, byrow = TRUE)) firstPassage(simpleMc, "b", 20)
simpleMc <- new("markovchain", states = c("a", "b"), transitionMatrix = matrix(c(0.4, 0.6, .3, .7), nrow = 2, byrow = TRUE)) firstPassage(simpleMc, "b", 20)
The function calculates first passage probability for a subset of states given an initial state.
firstPassageMultiple(object, state, set, n)
object: a markovchain-class object
state: initial state of the process (character vector)
set: set of states A, the first passage of which is to be calculated
n: Number of rows on which to compute the distribution
A vector of size n showing the first passage time probabilities
Vandit Jain
Renaldo Feres, Notes for Math 450 Matlab listings for Markov chains; MIT OCW, course - 6.262, Discrete Stochastic Processes, course-notes, chap -05
statesNames <- c("a", "b", "c") markovB <- new("markovchain", states = statesNames, transitionMatrix = matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1), nrow = 3, byrow = TRUE, dimnames = list(statesNames, statesNames) )) firstPassageMultiple(markovB,"a",c("b","c"),4)
statesNames <- c("a", "b", "c") markovB <- new("markovchain", states = statesNames, transitionMatrix = matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1), nrow = 3, byrow = TRUE, dimnames = list(statesNames, statesNames) )) firstPassageMultiple(markovB,"a",c("b","c"),4)
Given a sequence of states arising from a stationary state, it fits the underlying Markov chain distribution with higher order.
fitHigherOrder(sequence, order = 2)
seq2freqProb(sequence)
seq2matHigh(sequence, order)
sequence: A character list.
order: Markov chain order
A list containing lambda, Q, and X.
Giorgio Spedicato, Tae Seung Kang
Ching, W. K., Huang, X., Ng, M. K., & Siu, T. K. (2013). Higher-order markov chains. In Markov Chains (pp. 141-176). Springer US.
Ching, W. K., Ng, M. K., & Fung, E. S. (2008). Higher-order multivariate Markov chains and their applications. Linear Algebra and its Applications, 428(2), 492-507.
sequence<-c("a", "a", "b", "b", "a", "c", "b", "a", "b", "c", "a", "b", "c", "a", "b", "c", "a", "b", "a", "b") fitHigherOrder(sequence)
sequence<-c("a", "a", "b", "b", "a", "c", "b", "a", "b", "c", "a", "b", "c", "a", "b", "c", "a", "b", "a", "b") fitHigherOrder(sequence)
Given a matrix of categorical sequences it fits Higher Order Multivariate Markov chain.
fitHighOrderMultivarMC(seqMat, order = 2, Norm = 2)
seqMat: a matrix or a data frame where each column is a categorical sequence
order: Multivariate Markov chain order. Default is 2.
Norm: Norm to be used. Default is 2.
an hommc object
Giorgio Spedicato, Deepak Yadav
W.-K. Ching et al. / Linear Algebra and its Applications
data <- matrix(c('2', '1', '3', '3', '4', '3', '2', '1', '3', '3', '2', '1',
                 c('2', '4', '4', '4', '4', '2', '3', '3', '1', '4', '3', '3')),
               ncol = 2, byrow = FALSE)
fitHighOrderMultivarMC(data, order = 2, Norm = 2)
The function provides an interface to calculate the generator matrix corresponding to a frequency matrix and the time taken
freq2Generator(P, t = 1, method = "QO", logmethod = "Eigen")
P: relative frequency matrix
t: time taken (default value = 1)
method: one among "QO" (quasi optimisation), "WA" (weighted adjustment), "DA" (diagonal adjustment)
logmethod: method for computation of the matrix logarithm (by default: Eigen)
returns a generator matrix with the same dimnames
E. Kreinin and M. Sidelnikova: Regularization Algorithms for Transition Matrices. Algo Research Quarterly 4(1):23-40, 2001
sample <- matrix(c(150, 2, 1, 1, 1, 200, 2, 1, 2, 1, 175, 1, 1, 1, 1, 150), nrow = 4, byrow = TRUE)
sample_rel = rbind((sample / rowSums(sample))[1:dim(sample)[1] - 1, ],
                   c(rep(0, dim(sample)[1] - 1), 1))
freq2Generator(sample_rel, 1)

data(tm_abs)
tm_rel = rbind((tm_abs / rowSums(tm_abs))[1:7, ], c(rep(0, 7), 1))
## Derive quasi optimization generator matrix estimate
freq2Generator(tm_rel, 1)
The transition matrix of the embedded DTMC is inferred from the CTMC's generator
generatorToTransitionMatrix(gen, byrow = TRUE)
gen: The generator matrix
byrow: Flag to determine if rows (columns) sum to 0
Returns the transition matrix.
Sai Bhargav Yalamanchi
Introduction to Stochastic Processes with Applications in the Biosciences (2013), David F. Anderson, University of Wisconsin at Madison
energyStates <- c("sigma", "sigma_star") byRow <- TRUE gen <- matrix(data = c(-3, 3, 1, -1), nrow = 2, byrow = byRow, dimnames = list(energyStates, energyStates)) generatorToTransitionMatrix(gen)
energyStates <- c("sigma", "sigma_star") byRow <- TRUE gen <- matrix(data = c(-3, 3, 1, -1), nrow = 2, byrow = byRow, dimnames = list(energyStates, energyStates)) generatorToTransitionMatrix(gen)
The S4 class that describes HigherOrderMarkovChain objects.
Given a markovchain object, this function calculates the probability of ever reaching state j from state i
hittingProbabilities(object)
object: the markovchain-class object
a matrix of hitting probabilities
Ignacio Cordón
R. Vélez, T. Prieto, Procesos Estocásticos, Librería UNED, 2013
M <- markovchain:::zeros(5)
M[1,1] <- M[5,5] <- 1
M[2,1] <- M[2,3] <- 1/2
M[3,2] <- M[3,4] <- 1/2
M[4,2] <- M[4,5] <- 1/2
mc <- new("markovchain", transitionMatrix = M)
hittingProbabilities(mc)
A data set containing 1000 life history trajectories with a categorical status (1, 2, 3) observed at eleven evenly spaced time steps.
data(holson)
A data frame with 1000 observations on the following 12 variables.
id: unique id
time1, time2, ..., time11: observed status at the i-th time
The example can be used to fit a markovchain or a markovchainList object.
Private communications
Private communications
data(holson)
head(holson)
An S4 class for representing High Order Multivariate Markovchain (HOMMC)
order: an integer equal to the order of the Multivariate Markovchain
states: a vector of states present in the HOMMC model
P: array of transition matrices
Lambda: a vector which stores the weightage of each transition matrix in P
byrow: if FALSE each column sum of a transition matrix is 1, else each row sum is 1
name: a name given to the hommc
Giorgio Spedicato, Deepak Yadav
statesName <- c("a", "b") P <- array(0, dim = c(2, 2, 4), dimnames = list(statesName, statesName)) P[,,1] <- matrix(c(0, 1, 1/3, 2/3), byrow = FALSE, nrow = 2) P[,,2] <- matrix(c(1/4, 3/4, 0, 1), byrow = FALSE, nrow = 2) P[,,3] <- matrix(c(1, 0, 1/3, 2/3), byrow = FALSE, nrow = 2) P[,,4] <- matrix(c(3/4, 1/4, 0, 1), byrow = FALSE, nrow = 2) Lambda <- c(0.8, 0.2, 0.3, 0.7) ob <- new("hommc", order = 1, states = statesName, P = P, Lambda = Lambda, byrow = FALSE, name = "FOMMC")
statesName <- c("a", "b") P <- array(0, dim = c(2, 2, 4), dimnames = list(statesName, statesName)) P[,,1] <- matrix(c(0, 1, 1/3, 2/3), byrow = FALSE, nrow = 2) P[,,2] <- matrix(c(1/4, 3/4, 0, 1), byrow = FALSE, nrow = 2) P[,,3] <- matrix(c(1, 0, 1/3, 2/3), byrow = FALSE, nrow = 2) P[,,4] <- matrix(c(3/4, 1/4, 0, 1), byrow = FALSE, nrow = 2) Lambda <- c(0.8, 0.2, 0.3, 0.7) ob <- new("hommc", order = 1, states = statesName, P = P, Lambda = Lambda, byrow = FALSE, name = "FOMMC")
An S4 class for representing Imprecise Continuous Time Markovchains
states: a vector of states present in the ICTMC model
Q: matrix representing the generator demonstrated in the form of variables
range: a matrix that stores the range of values of the variables
name: name given to the ICTMC
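The class entry above lists the slots but carries no example; the construction below simply mirrors the ictmc object built in the impreciseProbabilityatT example later in this manual, so it only illustrates how the slots fit together.

library(markovchain)

states <- c("n", "y")
# generator expressed in the form of variables (see the Q slot above)
Q <- matrix(c(-1, 1, 1, -1), nrow = 2, byrow = TRUE, dimnames = list(states, states))
# range of values for those variables (see the range slot above)
range <- matrix(c(1/52, 3/52, 1/2, 2), nrow = 2, byrow = TRUE)
ictmc <- new("ictmc", states = states, Q = Q, range = range, name = "testictmc")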
This function calculates the full conditional probability at a given time s using the lower rate transition matrix
impreciseProbabilityatT(C,i,t,s,error,useRCpp)
C: an ictmc class object
i: initial state at time t
t: initial time t. Default value = 0
s: final time
error: error rate. Default value = 0.001
useRCpp: logical whether to use the RCpp implementation; by default TRUE
Vandit Jain
Imprecise Continuous-Time Markov Chains, Thomas Krak et al., 2016
states <- c("n","y") Q <- matrix(c(-1,1,1,-1),nrow = 2,byrow = TRUE,dimnames = list(states,states)) range <- matrix(c(1/52,3/52,1/2,2),nrow = 2,byrow = 2) name <- "testictmc" ictmc <- new("ictmc",states = states,Q = Q,range = range,name = name) impreciseProbabilityatT(ictmc,2,0,1,10^-3,TRUE)
states <- c("n","y") Q <- matrix(c(-1,1,1,-1),nrow = 2,byrow = TRUE,dimnames = list(states,states)) range <- matrix(c(1/52,3/52,1/2,2),nrow = 2,byrow = 2) name <- "testictmc" ictmc <- new("ictmc",states = states,Q = Q,range = range,name = name) impreciseProbabilityatT(ictmc,2,0,1,10^-3,TRUE)
Since the Bayesian inference approach implemented in the package is based on conjugate priors, hyperparameters must be provided to model the prior probability distribution of the chain parameters. The hyperparameters are inferred from a given a priori matrix under the assumption that the matrix provided corresponds to the mean (expected) values of the chain parameters. A scaling factor vector must be provided too. Alternatively, the hyperparameters can be inferred from a data set.
inferHyperparam(transMatr = matrix(), scale = numeric(), data = character())
transMatr: A valid transition matrix, with dimension names.
scale: A vector of scaling factors; each element corresponds to the row names of the provided transition matrix transMatr, in the same order.
data: A data set from which the hyperparameters are inferred.
transMatr and scale need not be provided if data is provided.
Returns the hyperparameter matrix in a list.
The hyperparameter matrix returned is such that the row and column names are sorted alphanumerically, and the elements in the matrix are correspondingly permuted.
Sai Bhargav Yalamanchi, Giorgio Spedicato
Yalamanchi SB, Spedicato GA (2015). Bayesian Inference of First Order Markov Chains. R package version 0.2.5
markovchainFit, predictiveDistribution
data(rain, package = "markovchain") inferHyperparam(data = rain$rain) weatherStates <- c("sunny", "cloudy", "rain") weatherMatrix <- matrix(data = c(0.7, 0.2, 0.1, 0.3, 0.4, 0.3, 0.2, 0.4, 0.4), byrow = TRUE, nrow = 3, dimnames = list(weatherStates, weatherStates)) inferHyperparam(transMatr = weatherMatrix, scale = c(10, 10, 10))
data(rain, package = "markovchain") inferHyperparam(data = rain$rain) weatherStates <- c("sunny", "cloudy", "rain") weatherMatrix <- matrix(data = c(0.7, 0.2, 0.1, 0.3, 0.4, 0.3, 0.2, 0.4, 0.4), byrow = TRUE, nrow = 3, dimnames = list(weatherStates, weatherStates)) inferHyperparam(transMatr = weatherMatrix, scale = c(10, 10, 10))
This function verifies if a state is reachable from another, i.e., if there exists a path that leads to state j leaving from state i with positive probability
is.accessible(object, from, to)
object: A markovchain object
from: The name of state "i" (beginning state).
to: The name of state "j" (ending state).
It wraps an internal function named reachabilityMatrix.
A boolean value.
Giorgio Spedicato, Ignacio Cordón
James Montgomery, University of Madison
is.irreducible
statesNames <- c("a", "b", "c") markovB <- new("markovchain", states = statesNames, transitionMatrix = matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1), nrow = 3, byrow = TRUE, dimnames = list(statesNames, statesNames) ) ) is.accessible(markovB, "a", "c")
statesNames <- c("a", "b", "c") markovB <- new("markovchain", states = statesNames, transitionMatrix = matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1), nrow = 3, byrow = TRUE, dimnames = list(statesNames, statesNames) ) ) is.accessible(markovB, "a", "c")
This function verifies whether a CTMC object is irreducible
is.CTMCirreducible(ctmc)
ctmc: a ctmc-class object
a boolean value as described above.
Vandit Jain
Continuous-Time Markov Chains, Karl Sigman, Columbia University
energyStates <- c("sigma", "sigma_star") byRow <- TRUE gen <- matrix(data = c(-3, 3, 1, -1), nrow = 2, byrow = byRow, dimnames = list(energyStates, energyStates)) molecularCTMC <- new("ctmc", states = energyStates, byrow = byRow, generator = gen, name = "Molecular Transition Model") is.CTMCirreducible(molecularCTMC)
energyStates <- c("sigma", "sigma_star") byRow <- TRUE gen <- matrix(data = c(-3, 3, 1, -1), nrow = 2, byrow = byRow, dimnames = list(energyStates, energyStates)) molecularCTMC <- new("ctmc", states = energyStates, byrow = byRow, generator = gen, name = "Molecular Transition Model") is.CTMCirreducible(molecularCTMC)
This function verifies whether a markovchain object transition matrix is composed of only one communicating class.
is.irreducible(object)
object: A markovchain object
It is based on the .communicatingClasses internal function.
A boolean value.
Giorgio Spedicato
Feres, Matlab listings for Markov Chains.
statesNames <- c("a", "b") mcA <- new("markovchain", transitionMatrix = matrix(c(0.7,0.3,0.1,0.9), byrow = TRUE, nrow = 2, dimnames = list(statesNames, statesNames) )) is.irreducible(mcA)
statesNames <- c("a", "b") mcA <- new("markovchain", transitionMatrix = matrix(c(0.7,0.3,0.1,0.9), byrow = TRUE, nrow = 2, dimnames = list(statesNames, statesNames) )) is.irreducible(mcA)
Function to check whether a DTMC is regular
is.regular(object)
object: a markovchain object
A Markov chain is regular if some power of its transition matrix has all elements strictly positive
A boolean value
Ignacio Cordón
Matrix Analysis. Roger A. Horn, Charles R. Johnson. 2nd edition. Corollary 8.5.8, Theorem 8.5.9
P <- matrix(c(0.5, 0.25, 0.25,
              0.5, 0, 0.5,
              0.25, 0.25, 0.5), nrow = 3)
colnames(P) <- rownames(P) <- c("R", "N", "S")
ciao <- as(P, "markovchain")
is.regular(ciao)
The function checks whether the provided CTMC is time reversible
is.TimeReversible(ctmc)
ctmc: a ctmc-class object
Returns a boolean value stating whether ctmc object is time reversible
a boolean value as described above
Vandit Jain
Introduction to Stochastic Processes with R, Robert P. Dobrow, Wiley
energyStates <- c("sigma", "sigma_star") byRow <- TRUE gen <- matrix(data = c(-3, 3, 1, -1), nrow = 2, byrow = byRow, dimnames = list(energyStates, energyStates)) molecularCTMC <- new("ctmc", states = energyStates, byrow = byRow, generator = gen, name = "Molecular Transition Model") is.TimeReversible(molecularCTMC)
energyStates <- c("sigma", "sigma_star") byRow <- TRUE gen <- matrix(data = c(-3, 3, 1, -1), nrow = 2, byrow = byRow, dimnames = list(energyStates, energyStates)) molecularCTMC <- new("ctmc", states = energyStates, byrow = byRow, generator = gen, name = "Molecular Transition Model") is.TimeReversible(molecularCTMC)
A list of two matrices representing raw transition counts between states
data(kullback)
A list containing two 6x6 non-negative integer matrices.
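No example accompanies this data entry; the lines below are a minimal sketch. The call to verifyHomogeneity is an assumption: that function belongs to the markovchain package but is not documented in this excerpt.

library(markovchain)
data(kullback)
# inspect the two raw transition count matrices
lapply(kullback, dim)
# assumed usage: test the homogeneity of the two transition count matrices
verifyHomogeneity(inputList = kullback, verbose = TRUE)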
The S4 class that describes markovchain objects.
states: Name of the states. Must be the same as the row and column names of the transition matrix
byrow: TRUE or FALSE indicating whether the supplied matrix is stochastic by rows or by columns
transitionMatrix: Square transition matrix
name: Optional character name of the Markov chain
Objects can be created by calls of the form new("markovchain", states, byrow, transitionMatrix, ...).
signature(e1 = "markovchain", e2 = "markovchain")
: multiply two markovchain
objects
signature(e1 = "markovchain", e2 = "matrix")
: markovchain by matrix multiplication
signature(e1 = "markovchain", e2 = "numeric")
: markovchain by numeric vector multiplication
signature(e1 = "matrix", e2 = "markovchain")
: matrix by markov chain
signature(e1 = "numeric", e2 = "markovchain")
: numeric vector by markovchain
multiplication
signature(x = "markovchain", i = "ANY", j = "ANY", drop = "ANY")
: ...
signature(e1 = "markovchain", e2 = "numeric")
: power of a markovchain
object
signature(e1 = "markovchain", e2 = "markovchain")
: equality of two markovchain
object
signature(e1 = "markovchain", e2 = "markovchain")
: non-equality of two markovchain
object
signature(object = "markovchain")
: method to get absorbing states
signature(object = "markovchain")
: return a markovchain
object into canonic form
signature(from = "markovchain", to = "data.frame")
: coerce method from markovchain to data.frame
signature(object = "markovchain")
: returns the conditional probability of subsequent states given a state
signature(from = "data.frame", to = "markovchain")
: coerce method from data.frame
to markovchain
signature(from = "table", to = "markovchain")
: coerce method from table
to markovchain
signature(from = "msm", to = "markovchain")
: coerce method from msm
to markovchain
signature(from = "msm.est", to = "markovchain")
: coerce method from msm.est
(but only from a Probability Matrix) to markovchain
signature(from = "etm", to = "markovchain")
: coerce method from etm
to markovchain
signature(from = "sparseMatrix", to = "markovchain")
: coerce method from sparseMatrix
to markovchain
signature(from = "markovchain", to = "igraph")
: coercing to igraph
objects
signature(from = "markovchain", to = "matrix")
: coercing to matrix
objects
signature(from = "markovchain", to = "sparseMatrix")
: coercing to sparseMatrix
objects
signature(from = "matrix", to = "markovchain")
: coercing to markovchain
objects from matrix
one
signature(x = "markovchain")
: method to get the size
signature(x = "markovchain")
: method to get the names of states
signature(x = "markovchain", value = "character")
: method to set the names of states
signature(.Object = "markovchain")
: initialize method
signature(x = "markovchain", y = "missing")
: plot method for markovchain
objects
signature(object = "markovchain")
: predict method
signature(x = "markovchain")
: print method.
signature(object = "markovchain")
: show method.
signature(x = "markovchain", decreasing=FALSE)
: sorting the transition matrix.
signature(object = "markovchain")
: returns the names of states (as names
.
signature(object = "markovchain")
: method to get the steady vector.
signature(object = "markovchain")
: method to summarize structure of the markov chain
signature(object = "markovchain")
: method to get the transient states.
signature(x = "markovchain")
: transpose matrix
signature(object = "markovchain")
: transition probability
markovchain objects are backed by S4 classes. A validation method is used to assess whether either the columns or the rows total one. Rounding is used up to .Machine$double.eps * 100. If state names are not properly defined for a probability matrix, coercing to a markovchain object leads to overriding the state names with an artificial "s1", "s2", ... sequence. In addition, operator overloading has been applied for operators.
Giorgio Spedicato
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
markovchainSequence, markovchainFit
# show markovchain definition
showClass("markovchain")

# create a simple Markov chain
transMatr <- matrix(c(0.4, 0.6, .3, .7), nrow = 2, byrow = TRUE)
simpleMc <- new("markovchain", states = c("a", "b"),
                transitionMatrix = transMatr, name = "simpleMc")

# power
simpleMc^4

# some methods
steadyStates(simpleMc)
absorbingStates(simpleMc)
simpleMc[2,1]
t(simpleMc)
is.irreducible(simpleMc)

# conditional distributions
conditionalDistribution(simpleMc, "b")

# example for predict method
sequence <- c("a", "b", "a", "a", "a", "a", "b", "a", "b", "a", "b", "a", "a",
              "b", "b", "b", "a")
mcFit <- markovchainFit(data = sequence)
predict(mcFit$estimate, newdata = "b", n.ahead = 3)

# direct conversion
myMc <- as(transMatr, "markovchain")

# example of summary
summary(simpleMc)

## Not run: plot(simpleMc)
A class to handle non-homogeneous discrete time Markov chains
markovchains: Object of class "list", containing markovchain objects
name: Object of class "character", the optional name of the markovchainList
A markovchainList is a list of markovchain objects. They can be used to model non-homogeneous discrete time Markov chains, when transition probabilities (and possibly the states) change over time.
signature(x = "markovchainList")
: extract the
i-th markovchain
signature(x = "markovchainList")
: number
of markovchain
underlying the matrix
signature(object = "markovchainList")
: predict
from a markovchainList
signature(x = "markovchainList")
: prints the list
of markovchains
signature(object = "markovchainList")
: same as print
The class consists of a list of markovchain objects. It is aimed at working with non-homogeneous Markov chains.
Giorgio Spedicato
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
showClass("markovchainList") #define a markovchainList statesNames=c("a","b") mcA<-new("markovchain",name="MCA", transitionMatrix=matrix(c(0.7,0.3,0.1,0.9), byrow=TRUE, nrow=2, dimnames=list(statesNames,statesNames)) ) mcB<-new("markovchain", states=c("a","b","c"), name="MCB", transitionMatrix=matrix(c(0.2,0.5,0.3,0,1,0,0.1,0.8,0.1), nrow=3, byrow=TRUE)) mcC<-new("markovchain", states=c("a","b","c","d"), name="MCC", transitionMatrix=matrix(c(0.25,0.75,0,0,0.4,0.6, 0,0,0,0,0.1,0.9,0,0,0.7,0.3), nrow=4, byrow=TRUE) ) mcList<-new("markovchainList",markovchains=list(mcA, mcB, mcC), name="Non - homogeneous Markov Chain")
showClass("markovchainList") #define a markovchainList statesNames=c("a","b") mcA<-new("markovchain",name="MCA", transitionMatrix=matrix(c(0.7,0.3,0.1,0.9), byrow=TRUE, nrow=2, dimnames=list(statesNames,statesNames)) ) mcB<-new("markovchain", states=c("a","b","c"), name="MCB", transitionMatrix=matrix(c(0.2,0.5,0.3,0,1,0,0.1,0.8,0.1), nrow=3, byrow=TRUE)) mcC<-new("markovchain", states=c("a","b","c","d"), name="MCC", transitionMatrix=matrix(c(0.25,0.75,0,0,0.4,0.6, 0,0,0,0,0.1,0.9,0,0,0.7,0.3), nrow=4, byrow=TRUE) ) mcList<-new("markovchainList",markovchains=list(mcA, mcB, mcC), name="Non - homogeneous Markov Chain")
Given a data frame or a matrix (rows are observations, columns the temporal sequence), it fits a non-homogeneous discrete time Markov chain process (by row). In particular, a markovchainList of size ncol - 1 is obtained, estimating transitions from the n samples given by consecutive column pairs.
markovchainListFit(data, byrow = TRUE, laplacian = 0, name)
data: Either a matrix, a data.frame or a list object.
byrow: Indicates whether distinct stochastic process trajectories are shown in distinct rows.
laplacian: Laplacian correction (default 0).
name: Optional name.
If data contains NAs, then the transitions containing NA will be ignored.
A list containing two slots: estimate (the estimate) and name.
# using holson dataset
data(holson)
# fitting a single markovchain
singleMc <- markovchainFit(data = holson[, 2:12])
# fitting a markovchainList
mclistFit <- markovchainListFit(data = holson[, 2:12], name = "holsonMcList")
Provided any markovchain object, it returns a sequence of states sampled from the underlying Markov chain.
markovchainSequence(n, markovchain, t0 = sample(markovchain@states, 1),
                    include.t0 = FALSE, useRCpp = TRUE)
n: Sample size
markovchain: The markovchain object
t0: The initial state
include.t0: Specify if the initial state shall be used
useRCpp: Boolean. Should the RCpp fast implementation be used? Default is yes.
A sequence of size n is sampled.
A Character Vector
Giorgio Spedicato
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
# define the markovchain object
statesNames <- c("a", "b", "c")
mcB <- new("markovchain", states = statesNames,
           transitionMatrix = matrix(c(0.2, 0.5, 0.3, 0, 0.2, 0.8, 0.1, 0.8, 0.1),
                                     nrow = 3, byrow = TRUE,
                                     dimnames = list(statesNames, statesNames)))
# show the sequence
outs <- markovchainSequence(n = 100, markovchain = mcB, t0 = "a")
Computes the expected number of steps to go from any of the transient states to any of the recurrent states. The Markov chain should have at least one transient state for this method to work
meanAbsorptionTime(object)
object: the markovchain object
A named vector with the expected number of steps to go from a transient state to any of the recurrent ones
Ignacio Cordón
C. M. Grinstead and J. L. Snell. Introduction to Probability. American Mathematical Soc., 2012.
m <- matrix(c(1/2, 1/2, 0, 1/2, 1/2, 0, 0, 1/2, 1/2), ncol = 3, byrow = TRUE)
mc <- new("markovchain", states = letters[1:3], transitionMatrix = m)
times <- meanAbsorptionTime(mc)
Given an irreducible (ergodic) markovchain object, this function calculates the expected number of steps to reach other states
meanFirstPassageTime(object, destination)
object: the markovchain object
destination: a character vector representing the states with respect to which we want to compute the mean first passage time. Empty by default
For an ergodic Markov chain it computes: if destination is empty, the average first time (in steps) that the chain takes to go from initial state i to j; the (i, j) entry represents that value in case the Markov chain is given row-wise, the (j, i) entry in case it is given col-wise. If destination is not empty, the average time it takes to reach the states in destination from each of the remaining states.
A matrix of the same size with the average first passage times if destination is empty, a vector if destination is not.
Toni Giorgino, Ignacio Cordón
C. M. Grinstead and J. L. Snell. Introduction to Probability. American Mathematical Soc., 2012.
m <- matrix(1 / 10 * c(6, 3, 1, 2, 3, 5, 4, 1, 5), ncol = 3, byrow = TRUE)
mc <- new("markovchain", states = c("s", "c", "r"), transitionMatrix = m)
meanFirstPassageTime(mc, "r")

# Grinstead and Snell's "Oz weather" worked out example
mOz <- matrix(c(2, 1, 1, 2, 0, 2, 1, 1, 2) / 4, ncol = 3, byrow = TRUE)
mcOz <- new("markovchain", states = c("s", "c", "r"), transitionMatrix = mOz)
meanFirstPassageTime(mcOz)
Given a markovchain object, this function calculates a matrix where the element (i, j) represents the expected number of visits to state j if the chain starts at i (in a Markov chain by columns it would be the element (j, i) instead).
meanNumVisits(object)
object: the markovchain-class object
a matrix with the expected number of visits to each state
Ignacio Cordón
R. Vélez, T. Prieto, Procesos Estocásticos, Librería UNED, 2013
M <- markovchain:::zeros(5)
M[1,1] <- M[5,5] <- 1
M[2,1] <- M[2,3] <- 1/2
M[3,2] <- M[3,4] <- 1/2
M[4,2] <- M[4,5] <- 1/2
mc <- new("markovchain", transitionMatrix = M)
meanNumVisits(mc)
Computes the expected time to return to a recurrent state in case the Markov chain starts there
meanRecurrenceTime(object)
object: the markovchain object
For a Markov chain the output is a named vector with the expected time to first return to a state when the chain starts there. Only the recurrent states are present in the vector. If the matrix is ergodic (i.e. irreducible), then all states are present in the output and their order is the same as the state order of the Markov chain.
Ignacio Cordón
C. M. Grinstead and J. L. Snell. Introduction to Probability. American Mathematical Soc., 2012.
m <- matrix(1 / 10 * c(6, 3, 1, 2, 3, 5, 4, 1, 5), ncol = 3, byrow = TRUE)
mc <- new("markovchain", states = c("s", "c", "r"), transitionMatrix = m)
meanRecurrenceTime(mc)
Returns confidence intervals for the estimated transition matrix, assuming a multinomial distribution
multinomialConfidenceIntervals(transitionMatrix, countsTransitionMatrix,
                               confidencelevel = 0.95)
transitionMatrix: An estimated transition matrix.
countsTransitionMatrix: Empirical (counts) transition matrix, on which the transitionMatrix estimate is based.
confidencelevel: confidence interval level.
Two matrices containing the confidence intervals.
Constructing two-sided simultaneous confidence intervals for multinomial proportions for small counts in a large number of cells. Journal of Statistical Software 5(6) (2000)
markovchainFit
seq<-c("a", "b", "a", "a", "a", "a", "b", "a", "b", "a", "b", "a", "a", "b", "b", "b", "a") mcfit<-markovchainFit(data=seq,byrow=TRUE) seqmat<-createSequenceMatrix(seq) multinomialConfidenceIntervals(mcfit$estimate@transitionMatrix, seqmat, 0.95)
seq<-c("a", "b", "a", "a", "a", "a", "b", "a", "b", "a", "b", "a", "a", "b", "b", "b", "a") mcfit<-markovchainFit(data=seq,byrow=TRUE) seqmat<-createSequenceMatrix(seq) multinomialConfidenceIntervals(mcfit$estimate@transitionMatrix, seqmat, 0.95)
This method returns the name of a markovchain object
name(object)

## S4 method for signature 'markovchain'
name(object)
object: A markovchain object
Giorgio Spedicato, Deepak Yadav
statesNames <- c("a", "b", "c") markovB <- new("markovchain", states = statesNames, transitionMatrix = matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1), nrow = 3, byrow = TRUE, dimnames=list(statesNames,statesNames)), name = "A markovchain Object" ) name(markovB)
statesNames <- c("a", "b", "c") markovB <- new("markovchain", states = statesNames, transitionMatrix = matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1), nrow = 3, byrow = TRUE, dimnames=list(statesNames,statesNames)), name = "A markovchain Object" ) name(markovB)
This method modifies the existing name of markovchain object
name(object) <- value

## S4 replacement method for signature 'markovchain'
name(object) <- value
object: A markovchain object
value: New name of the markovchain object
Giorgio Spedicato, Deepak Yadav
statesNames <- c("a", "b", "c") markovB <- new("markovchain", states = statesNames, transitionMatrix = matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1), nrow = 3, byrow = TRUE, dimnames=list(statesNames,statesNames)), name = "A markovchain Object" ) name(markovB) <- "dangerous mc"
statesNames <- c("a", "b", "c") markovB <- new("markovchain", states = statesNames, transitionMatrix = matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1), nrow = 3, byrow = TRUE, dimnames=list(statesNames,statesNames)), name = "A markovchain Object" ) name(markovB) <- "dangerous mc"
Returns the states for a Markov chain object
## S4 method for signature 'markovchain'
names(x)
x: object we want to return states for
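This entry carries no example; a minimal sketch of the method:

library(markovchain)
statesNames <- c("a", "b")
mc <- new("markovchain", states = statesNames,
          transitionMatrix = matrix(c(0.7, 0.3, 0.1, 0.9), byrow = TRUE, nrow = 2,
                                    dimnames = list(statesNames, statesNames)))
# names() on a markovchain object returns its state names
names(mc)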
This function returns the joint distribution of the number of visits to the various states of the DTMC during the first N steps.
noofVisitsDist(markovchain,N,state)
markovchain: a markovchain-class object
N: number of steps
state: the initial state
This function returns the joint distribution of the number of visits to the various states of the DTMC during the first N steps.
a numeric vector depicting the above described probability distribution
Vandit Jain
transMatr <- matrix(c(0.4, 0.6, .3, .7), nrow = 2, byrow = TRUE)
simpleMc <- new("markovchain", states = c("a", "b"),
                transitionMatrix = transMatr, name = "simpleMc")
noofVisitsDist(simpleMc, 5, "a")
Returns an Identity matrix
ones(n)
n: size of the matrix
an identity matrix
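A minimal sketch of a call to this helper; whether it is exported is not clear from this excerpt, so the ::: accessor is used defensively.

library(markovchain)
# build a 3 x 3 matrix with the package's ones() helper (see the entry above)
markovchain:::ones(3)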
These functions return the absorbing and transient states of markovchain objects.
period(object)
communicatingClasses(object)
recurrentClasses(object)
transientClasses(object)
transientStates(object)
recurrentStates(object)
absorbingStates(object)
canonicForm(object)
object: A markovchain object
period: returns an integer number corresponding to the periodicity of the Markov chain (if it is irreducible)
absorbingStates: returns a character vector with the names of the absorbing states in the Markov chain
communicatingClasses: returns a list in which each slot contains the names of the states that are in that communicating class
recurrentClasses: analogous to communicatingClasses, but with recurrent classes
transientClasses: analogous to communicatingClasses, but with transient classes
transientStates: returns a character vector with all the transient states for the Markov chain
recurrentStates: returns a character vector with all the recurrent states for the Markov chain
canonicForm: returns the Markov chain reordered by a permutation of states so that we have block submatrices for each of the recurrent classes and a collection of rows at the end for the transient states
Giorgio Alfredo Spedicato, Ignacio Cordón
Feres, Matlab listing for markov chain.
statesNames <- c("a", "b", "c") mc <- new("markovchain", states = statesNames, transitionMatrix = matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1), nrow = 3, byrow = TRUE, dimnames = list(statesNames, statesNames)) ) communicatingClasses(mc) recurrentClasses(mc) recurrentClasses(mc) absorbingStates(mc) transientStates(mc) recurrentStates(mc) canonicForm(mc) # periodicity analysis A <- matrix(c(0, 1, 0, 0, 0.5, 0, 0.5, 0, 0, 0.5, 0, 0.5, 0, 0, 1, 0), nrow = 4, ncol = 4, byrow = TRUE) mcA <- new("markovchain", states = c("a", "b", "c", "d"), transitionMatrix = A, name = "A") is.irreducible(mcA) #true period(mcA) #2 # periodicity analysis B <- matrix(c(0, 0, 1/2, 1/4, 1/4, 0, 0, 0, 0, 1/3, 0, 2/3, 0, 0, 0, 0, 0, 0, 0, 1/3, 2/3, 0, 0, 0, 0, 0, 1/2, 1/2, 0, 0, 0, 0, 0, 3/4, 1/4, 1/2, 1/2, 0, 0, 0, 0, 0, 1/4, 3/4, 0, 0, 0, 0, 0), byrow = TRUE, ncol = 7) mcB <- new("markovchain", transitionMatrix = B) period(mcB)
statesNames <- c("a", "b", "c") mc <- new("markovchain", states = statesNames, transitionMatrix = matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1), nrow = 3, byrow = TRUE, dimnames = list(statesNames, statesNames)) ) communicatingClasses(mc) recurrentClasses(mc) recurrentClasses(mc) absorbingStates(mc) transientStates(mc) recurrentStates(mc) canonicForm(mc) # periodicity analysis A <- matrix(c(0, 1, 0, 0, 0.5, 0, 0.5, 0, 0, 0.5, 0, 0.5, 0, 0, 1, 0), nrow = 4, ncol = 4, byrow = TRUE) mcA <- new("markovchain", states = c("a", "b", "c", "d"), transitionMatrix = A, name = "A") is.irreducible(mcA) #true period(mcA) #2 # periodicity analysis B <- matrix(c(0, 0, 1/2, 1/4, 1/4, 0, 0, 0, 0, 1/3, 0, 2/3, 0, 0, 0, 0, 0, 0, 0, 1/3, 2/3, 0, 0, 0, 0, 0, 1/2, 1/2, 0, 0, 0, 0, 0, 3/4, 1/4, 1/2, 1/2, 0, 0, 0, 0, 0, 1/4, 3/4, 0, 0, 0, 0, 0), byrow = TRUE, ncol = 7) mcB <- new("markovchain", transitionMatrix = B) period(mcB)
This function provides a prediction of states for a higher order multivariate markovchain object
predictHommc(hommc,t,init)
hommc: a hommc-class object
t: number of iterations to predict
init: matrix of previous states, the size of which depends on the hommc
The user is required to provide a matrix of giving n previous coressponding every categorical sequence. Dimensions of the init are s X n, where s is number of categorical sequences and n is order of the homc.
The function returns a matrix of size s x t, each row displaying the t predicted states for the corresponding categorical sequence.
Vandit Jain
The function computes the probability of observing a new data set, given a data set
predictiveDistribution(stringchar, newData, hyperparam = matrix())
stringchar |
This is the data using which the Bayesian inference is performed. |
newData |
This is the data whose predictive probability is computed. |
hyperparam |
This determines the shape of the prior distribution of the parameters. If none is provided, a default value of 1 is assigned to each parameter. It must be of size k x k, where k is the number of states in the chain, and the values should typically be non-negative integers. |
The underlying method is Bayesian inference. The probability is computed by averaging the likelihood of the new data with respect to the posterior. Since the method assumes conjugate priors, the result can be represented in a closed form (see the vignette for more details), which is what is returned.
The log of the probability is returned.
Sai Bhargav Yalamanchi
Inferring Markov Chains: Bayesian Estimation, Model Comparison, Entropy Rate, and Out-of-Class Modeling, Christopher C. Strelioff, James P. Crutchfield, Alfred Hubler, Santa Fe Institute
Yalamanchi SB, Spedicato GA (2015). Bayesian Inference of First Order Markov Chains. R package version 0.2.5
sequence <- c("a", "b", "a", "a", "a", "a", "b", "a", "b", "a", "b", "a", "a", "b", "b", "b", "a")
hyperMatrix <- matrix(c(1, 2, 1, 4), nrow = 2, dimnames = list(c("a", "b"), c("a", "b")))
predProb <- predictiveDistribution(sequence[1:10], sequence[11:17], hyperparam = hyperMatrix)
hyperMatrix2 <- hyperMatrix[c(2, 1), c(2, 1)]
predProb2 <- predictiveDistribution(sequence[1:10], sequence[11:17], hyperparam = hyperMatrix2)
predProb2 == predProb
Sequence of bases for preproglucacon DNA protein
data(preproglucacon)
A data frame with 1572 observations on the following 2 variables.
V1
a numeric vector, showing original coding
preproglucacon
a character vector, showing the initial letter of the DNA bases (Adenine, Cytosine, Guanine, Thymine)
Avery Henderson
Avery Henderson, Fitting Markov chain models on discrete time series such as DNA sequences
data(preproglucacon)
preproglucaconMc <- markovchainFit(data = preproglucacon$preproglucacon)
Function to evaluate the prior probability of a transition matrix. It is based on conjugate priors and therefore a Dirichlet distribution is used to model the transitions of each state.
priorDistribution(transMatr, hyperparam = matrix())
transMatr |
The transition matrix whose probability is the parameter of interest. |
hyperparam |
The hyperparam matrix (optional). If not provided, a default value of 1 is assumed for each entry, and therefore the resulting probability distribution is uniform. |
The states (dimnames) of the transition matrix and the hyperparam may be in any order.
The log of the probabilities for each state is returned in a numeric vector. Each number in the vector represents the (log) probability of having the transition probability vector specified in the corresponding row of the transition matrix.
This function can be used in conjunction with inferHyperparam. For example, a user with a prior data set and a prior transition matrix can infer the hyperparameters using inferHyperparam and then compute the probability of their prior matrix using the inferred hyperparameters with priorDistribution; a sketch of this workflow follows the example below.
Sai Bhargav Yalamanchi, Giorgio Spedicato
Yalamanchi SB, Spedicato GA (2015). Bayesian Inference of First Order Markov Chains. R package version 0.2.5
predictiveDistribution
, inferHyperparam
priorDistribution(matrix(c(0.5, 0.5, 0.5, 0.5), nrow = 2,
                         dimnames = list(c("a", "b"), c("a", "b"))),
                  matrix(c(2, 2, 2, 2), nrow = 2,
                         dimnames = list(c("a", "b"), c("a", "b"))))
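The workflow described in the details above can be sketched as follows. This is a minimal, hedged example: the toy priorSequence is invented for illustration, and the element name dataInference used to extract the inferred hyperparameter matrix from the inferHyperparam result is an assumption that should be checked against the installed version.

# toy prior data over the states "a" and "b" (invented for illustration)
priorSequence <- c("a", "b", "a", "a", "b", "b", "a", "b", "a", "a")
# infer hyperparameters from the prior data
hyperInferred <- inferHyperparam(data = priorSequence)
# prior transition matrix whose plausibility we want to assess
priorMatr <- matrix(c(0.5, 0.5,
                      0.4, 0.6), nrow = 2, byrow = TRUE,
                    dimnames = list(c("a", "b"), c("a", "b")))
# NOTE: accessing the result as $dataInference is an assumption about inferHyperparam's output
priorDistribution(transMatr = priorMatr, hyperparam = hyperInferred$dataInference)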
This function returns the probability of every state at time t under different conditions
probabilityatT(C, t, x0, useRCpp)
C |
A CTMC S4 object |
t |
final time t |
x0 |
initial state |
useRCpp |
logical; whether to use the RCpp implementation |
The initial state is not mandatory. If it is not provided, the function returns the matrix of the transition function at time t; otherwise it returns the vector of probabilities of transitioning to the different states, given that the initial state was x0.
Returns a vector or a matrix depending on whether x0 is provided or not, respectively.
Vandit Jain
Introduction to Stochastic Processes with R, Robert P. Dobrow, Wiley
states <- c("a", "b", "c", "d")
byRow <- TRUE
gen <- matrix(data = c(-1, 1/2, 1/2, 0,
                       1/4, -1/2, 0, 1/4,
                       1/6, 0, -1/3, 1/6,
                       0, 0, 0, 0),
              nrow = 4, byrow = byRow, dimnames = list(states, states))
ctmc <- new("ctmc", states = states, byrow = byRow, generator = gen, name = "testctmc")
probabilityatT(ctmc, 1, useRCpp = TRUE)
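Reusing the ctmc defined above, the sketch below illustrates the second return mode described in the details. Passing x0 as the index of the initial state is an assumption (the documentation only says "initial state"), so the expected form of x0 should be double checked.

# full transition function at time t = 1 (a 4 x 4 matrix), as in the example above
probabilityatT(ctmc, 1, useRCpp = TRUE)
# probabilities of the different states at time t = 1 when starting from the first state ("a")
# NOTE: giving x0 as an index is an assumption
probabilityatT(ctmc, 1, x0 = 1, useRCpp = TRUE)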
Rainfall measured in Alofi Island
data(rain)
A data frame with 1096 observations on the following 2 variables.
V1
a numeric vector, showing original coding
rain
a character vector, showing daily rainfall grouped into millilitre brackets
Avery Henderson
Avery Henderson, Fitting markov chain models on discrete time series such as DNA sequences
data(rain)
rainMc <- markovchainFit(data = rain$rain)
The function generates random CTMC transitions as per the provided generator matrix.
rctmc(n, ctmc, initDist = numeric(), T = 0, include.T0 = TRUE, out.type = "list")
n |
The number of samples to generate. |
ctmc |
The CTMC S4 object. |
initDist |
The initial distribution of states. |
T |
The time up to which the simulation runs (all transitions after time T are not returned). |
include.T0 |
Flag to determine if start state is to be included. |
out.type |
"list" or "df" |
In order to use the T argument, set n to Inf.
Based on out.type, a list or a data frame is returned. The returned list has two elements - a character vector (states) and a numeric vector (indicating time of transitions). The data frame is similarly structured.
Sai Bhargav Yalamanchi
Introduction to Stochastic Processes with Applications in the Biosciences (2013), David F. Anderson, University of Wisconsin at Madison
generatorToTransitionMatrix
,ctmc-class
energyStates <- c("sigma", "sigma_star")
byRow <- TRUE
gen <- matrix(data = c(-3, 3,
                       1, -1), nrow = 2, byrow = byRow,
              dimnames = list(energyStates, energyStates))
molecularCTMC <- new("ctmc", states = energyStates, byrow = byRow,
                     generator = gen, name = "Molecular Transition Model")
statesDist <- c(0.8, 0.2)
rctmc(n = Inf, ctmc = molecularCTMC, T = 1)
rctmc(n = 5, ctmc = molecularCTMC, initDist = statesDist, include.T0 = FALSE)
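As a small follow-up to the example above (reusing molecularCTMC and statesDist), the documented out.type argument can be used to obtain the same kind of simulation as a data frame instead of a list; this is only a usage sketch of that argument.

# same simulation as above, returned as a data frame
rctmc(n = 5, ctmc = molecularCTMC, initDist = statesDist,
      include.T0 = FALSE, out.type = "df")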
Provided any markovchain or markovchainList object, it returns a sequence of states sampled from the underlying Markov chain(s).
rmarkovchain(n, object, what = "data.frame", useRCpp = TRUE,
             parallel = FALSE, num.cores = NULL, ...)
n |
Sample size |
object |
Either a markovchain or a markovchainList object. |
what |
It specifies whether a data.frame, a matrix or a list is returned. |
useRCpp |
Boolean. Should the fast RCpp implementation be used? Default is yes (TRUE). |
parallel |
Boolean. Should the parallel implementation be used? Default is no (FALSE). |
num.cores |
Number of Cores to be used |
... |
additional parameters passed to the internal sampler |
When a homogeneous process is assumed (a markovchain object), a sequence of size n is sampled. When a non-homogeneous process is assumed (a markovchainList object), n samples are taken, each one spanning the whole length of the non-homogeneous Markov process.
Character Vector, data.frame, list or matrix
Check the type of input
Giorgio Spedicato
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
markovchainFit
, markovchainSequence
# define the markovchain object
statesNames <- c("a", "b", "c")
mcB <- new("markovchain", states = statesNames,
           transitionMatrix = matrix(c(0.2, 0.5, 0.3,
                                       0,   0.2, 0.8,
                                       0.1, 0.8, 0.1), nrow = 3, byrow = TRUE,
                                     dimnames = list(statesNames, statesNames)))

# show the sequence
outs <- rmarkovchain(n = 100, object = mcB, what = "list")

# define markovchainList object
statesNames <- c("a", "b", "c")
mcA <- new("markovchain", states = statesNames,
           transitionMatrix = matrix(c(0.2, 0.5, 0.3,
                                       0,   0.2, 0.8,
                                       0.1, 0.8, 0.1), nrow = 3, byrow = TRUE,
                                     dimnames = list(statesNames, statesNames)))
mcB <- new("markovchain", states = statesNames,
           transitionMatrix = matrix(c(0.2, 0.5, 0.3,
                                       0,   0.2, 0.8,
                                       0.1, 0.8, 0.1), nrow = 3, byrow = TRUE,
                                     dimnames = list(statesNames, statesNames)))
mcC <- new("markovchain", states = statesNames,
           transitionMatrix = matrix(c(0.2, 0.5, 0.3,
                                       0,   0.2, 0.8,
                                       0.1, 0.8, 0.1), nrow = 3, byrow = TRUE,
                                     dimnames = list(statesNames, statesNames)))
mclist <- new("markovchainList", markovchains = list(mcA, mcB, mcC))

# show the list of sequences
rmarkovchain(100, mclist, "list")
Sales demand sequences of five products (A, B, C, D, E). Each row corresponds to a sequence: the first row corresponds to sequence A, the second row to sequence B, and so on.
data("sales")
data("sales")
An object of class matrix
(inherits from array
) with 269 rows and 5 columns.
The example can be used to fit a high order multivariate Markov chain.
data("sales") # fitHighOrderMultivarMC(seqMat = sales, order = 2, Norm = 2)
data("sales") # fitHighOrderMultivarMC(seqMat = sales, order = 2, Norm = 2)
This is a convenience function to display the slots of a hommc object in a proper format.
## S4 method for signature 'hommc'
show(object)
object |
An object of class hommc |
This method returns the states of a transition matrix.
states(object)

## S4 method for signature 'markovchain'
states(object)
object |
A discrete markovchain object. |
The character vector corresponding to the states slot.
Giorgio Spedicato
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
statesNames <- c("a", "b", "c")
markovB <- new("markovchain", states = statesNames,
               transitionMatrix = matrix(c(0.2, 0.5, 0.3,
                                           0,   1,   0,
                                           0.1, 0.8, 0.1), nrow = 3, byrow = TRUE,
                                         dimnames = list(statesNames, statesNames)),
               name = "A markovchain Object")
states(markovB)
names(markovB)
Stationary states of a markovchain object
This method returns the stationary vector, in matrix form, of a markovchain object.
steadyStates(object)
object |
A discrete markovchain object. |
A matrix corresponding to the stationary states
The steady states are identified by finding the eigenvectors associated with the eigenvalue equal to one and then normalizing them so that they sum to unity. When negative values are found in the resulting matrix, the eigenvalue extraction is performed on the recurrent-classes submatrix instead.
Giorgio Spedicato
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
statesNames <- c("a", "b", "c")
markovB <- new("markovchain", states = statesNames,
               transitionMatrix = matrix(c(0.2, 0.5, 0.3,
                                           0,   1,   0,
                                           0.1, 0.8, 0.1), nrow = 3, byrow = TRUE,
                                         dimnames = list(statesNames, statesNames)),
               name = "A markovchain Object")
steadyStates(markovB)
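As a quick sanity check, the returned matrix can be verified against the stationarity condition (a minimal sketch reusing markovB from the example above):

# a stationary distribution satisfies ss %*% P = ss, up to numerical error
ss <- steadyStates(markovB)
all.equal(as.numeric(ss %*% markovB@transitionMatrix), as.numeric(ss))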
Matrix of Standard and Poor's Global Corporate Rating Transition Frequencies 2000 (NR Removed)
data(tm_abs)
The format is:
num [1:8, 1:8] 17 2 0 0 0 0 0 0 1 455 ...
- attr(*, "dimnames")=List of 2
..$ : chr [1:8] "AAA" "AA" "A" "BBB" ...
..$ : chr [1:8] "AAA" "AA" "A" "BBB" ...
European Securities and Markets Authority, 2016 https://cerep.esma.europa.eu/cerep-web/statistics/transitionMatrice.xhtml
data(tm_abs)
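Since tm_abs stores absolute transition counts, a common follow-up is to normalise its rows into transition probabilities and wrap them into a markovchain object. The sketch below assumes every rating class has at least one observed transition (a row with a zero total would need special handling) and that the normalised rows pass the markovchain validity check within numerical tolerance.

data(tm_abs)
# convert absolute frequencies into row-stochastic transition probabilities
tm_rel <- sweep(tm_abs, 1, rowSums(tm_abs), "/")
# wrap the relative matrix into a markovchain object
ratingMc <- new("markovchain", transitionMatrix = tm_rel,
                name = "S&P 2000 rating transitions")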
Calculate the generator matrix corresponding to a given transition matrix.
transition2Generator(P, t = 1, method = "logarithm")
P |
transition matrix between time 0 and t |
t |
time of observation |
method |
"logarithm" returns the Matrix logarithm of the transition matrix |
A matrix that represents the generator of P
mymatr <- matrix(c(.4, .6, .1, .9), nrow = 2, byrow = TRUE)
Q <- transition2Generator(P = mymatr)
expm::expm(Q)
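The role of the t argument can be illustrated with the sketch below; it assumes that the "logarithm" method rescales the matrix logarithm by t, so that exponentiating t times the returned generator recovers the original matrix.

# transition matrix observed over a time interval of length t = 2
mymatr <- matrix(c(.4, .6, .1, .9), nrow = 2, byrow = TRUE)
Q2 <- transition2Generator(P = mymatr, t = 2)
# under the assumption above, this should approximately give back mymatr
expm::expm(2 * Q2)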
This is a convenience function to get transition probabilities.
transitionProbability(object, t0, t1)

## S4 method for signature 'markovchain'
transitionProbability(object, t0, t1)
object |
A markovchain object. |
t0 |
Initial state. |
t1 |
Subsequent state. |
Numeric Vector
Giorgio Spedicato
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
statesNames <- c("a", "b", "c")
markovB <- new("markovchain", states = statesNames,
               transitionMatrix = matrix(c(0.2, 0.5, 0.3,
                                           0,   1,   0,
                                           0.1, 0.8, 0.1), nrow = 3, byrow = TRUE,
                                         dimnames = list(statesNames, statesNames)),
               name = "A markovchain Object")
transitionProbability(markovB, "b", "c")
These functions verify the Markov property, assess the order and stationarity of the Markov chain.
This function tests whether an empirical transition matrix is statistically compatible with a theoretical one. It is a chi-square based test. If a cell of the empirical transition matrix is greater than 0 while the corresponding cell of the theoretical transition matrix is 0, the null hypothesis is rejected.
Verifies that the elements in the input list belong to the same DTMC.
verifyMarkovProperty(sequence, verbose = TRUE)
assessOrder(sequence, verbose = TRUE)
assessStationarity(sequence, nblocks, verbose = TRUE)
verifyEmpiricalToTheoretical(data, object, verbose = TRUE)
verifyHomogeneity(inputList, verbose = TRUE)
sequence |
An empirical sequence. |
verbose |
Should test results be printed out? |
nblocks |
Number of blocks. |
data |
matrix, character or list to be converted into a raw transition matrix |
object |
a markovchain object |
inputList |
A list of items that can be coerced to transition matrices |
Verification result
a list with the following slots: statistic (the chi-square statistic), dof (degrees of freedom), and the corresponding p-value. If a cell of the empirical transition matrix is greater than 0 while the corresponding cell of the theoretical transition matrix is 0, the null hypothesis is rejected; in that case a p-value of 0 is returned, and statistic and dof are NA.
a list of transition matrices?
Tae Seung Kang, Giorgio Alfredo Spedicato
Anderson and Goodman.
markovchain
sequence <- c("a", "b", "a", "a", "a", "a", "b", "a", "b", "a", "b", "a", "a", "b", "b", "b", "a")
mcFit <- markovchainFit(data = sequence, byrow = FALSE)
verifyMarkovProperty(sequence)
assessOrder(sequence)
assessStationarity(sequence, 1)

# Example taken from Kullback Kupperman Tests for Contingency Tables and Markov Chains
sequence <- c(0,1,2,2,1,0,0,0,0,0,0,1,2,2,2,1,0,0,1,0,0,0,0,0,0,1,1,
              2,0,0,2,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,2,1,0,
              0,2,1,0,0,0,0,0,0,1,1,1,2,2,0,0,2,1,1,1,1,2,1,1,1,1,1,1,1,1,1,0,2,
              0,1,1,0,0,0,1,2,2,0,0,0,0,0,0,2,2,2,1,1,1,1,0,1,1,1,1,0,0,2,1,1,
              0,0,0,0,0,2,2,1,1,1,1,1,2,1,2,0,0,0,1,2,2,2,0,0,0,1,1)
mc <- matrix(c(5/8, 1/4, 1/8,
               1/4, 1/2, 1/4,
               1/4, 3/8, 3/8), byrow = TRUE, nrow = 3)
rownames(mc) <- colnames(mc) <- 0:2
theoreticalMc <- as(mc, "markovchain")
verifyEmpiricalToTheoretical(data = sequence, object = theoreticalMc)

data(kullback)
verifyHomogeneity(inputList = kullback, verbose = TRUE)
Create a matrix of zeros
zeros(n)
n |
size of the matrix |
a square matrix of zeros
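A minimal usage sketch:

# a 3 x 3 matrix filled with zeros
zeros(3)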