Formulae

Introduction

We report all the formulae used in the main computations carried out by the package, as a reference for users and developers.

Basic model for the HP filter with jumps

The basic model is based on the state-space representation
$$
\begin{aligned}
y_t &= \alpha_t^{(1)} + \varepsilon_t, & \varepsilon_t &\sim \mathrm{NID}(0, \sigma_\varepsilon^2) \\
\alpha_{t+1}^{(1)} &= \alpha_t^{(1)} + \alpha_t^{(2)} + \eta_t, & \eta_t &\sim \mathrm{NID}(0, \sigma_{\eta,t}^2) \\
\alpha_{t+1}^{(2)} &= \alpha_t^{(2)} + \zeta_t, & \zeta_t &\sim \mathrm{NID}(0, \sigma_{\zeta,t}^2)
\end{aligned}
$$
with initial conditions
$$
\begin{bmatrix} \alpha_1^{(1)} \\ \alpha_1^{(2)} \end{bmatrix}
\sim
N\!\left(
\begin{bmatrix} a_1^{(1)} \\ a_1^{(2)} \end{bmatrix},
\begin{bmatrix} p_1^{(11)} & p_1^{(12)} \\ p_1^{(12)} & p_1^{(22)} \end{bmatrix}
\right).
$$
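To fix ideas, the following minimal Python sketch simulates one path from this model. All names and variance values here are illustrative choices of ours, not part of the package; a large $\sigma_{\eta,t}^2$ at a single date allows a jump in the level at that date.

```python
import numpy as np

# Minimal simulation sketch of the model above (illustrative values, not package code).
rng = np.random.default_rng(0)

n = 200
sig_eps2 = np.full(n, 1.0)    # Var(eps_t)
sig_eta2 = np.zeros(n)        # Var(eta_t): zero except at the jump date
sig_zeta2 = np.full(n, 0.1)   # Var(zeta_t)
sig_eta2[100] = 25.0          # a single level jump around t = 100

y = np.empty(n)
alpha1, alpha2 = 0.0, 0.0     # alpha_t^(1), alpha_t^(2)
for t in range(n):
    y[t] = alpha1 + rng.normal(scale=np.sqrt(sig_eps2[t]))
    alpha1, alpha2 = (alpha1 + alpha2 + rng.normal(scale=np.sqrt(sig_eta2[t])),
                      alpha2 + rng.normal(scale=np.sqrt(sig_zeta2[t])))
```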

Scalar Kalman filtering recursions

The Kalman filtering recursions, written in scalar form (to gain computational speed and insight), are the following; for generality, we let the variance of the measurement error vary over time as well.

The initial innovation, its variance and the Kalman gains are
$$
i_1 = y_1 - a_1^{(1)}, \qquad
f_1 = p_1^{(11)} + \sigma_{\varepsilon,1}^2, \qquad
k_1^{(1)} = \bigl(p_1^{(11)} + p_1^{(12)}\bigr)/f_1, \qquad
k_1^{(2)} = p_1^{(12)}/f_1.
$$
For $t = 1, 2, \ldots, n-1$ the recursions are
$$
\begin{aligned}
a_{t+1}^{(1)} &= a_t^{(1)} + a_t^{(2)} + k_t^{(1)} i_t \\
a_{t+1}^{(2)} &= a_t^{(2)} + k_t^{(2)} i_t \\
p_{t+1}^{(11)} &= p_t^{(11)} + 2 p_t^{(12)} + p_t^{(22)} + \sigma_{\eta,t}^2 - k_t^{(1)} k_t^{(1)} f_t \\
p_{t+1}^{(12)} &= p_t^{(12)} + p_t^{(22)} - k_t^{(1)} k_t^{(2)} f_t \\
p_{t+1}^{(22)} &= p_t^{(22)} + \sigma_{\zeta,t}^2 - k_t^{(2)} k_t^{(2)} f_t \\
i_{t+1} &= y_{t+1} - a_{t+1}^{(1)} \\
f_{t+1} &= p_{t+1}^{(11)} + \sigma_{\varepsilon,t+1}^2 \\
k_{t+1}^{(1)} &= \bigl(p_{t+1}^{(11)} + p_{t+1}^{(12)}\bigr)/f_{t+1} \\
k_{t+1}^{(2)} &= p_{t+1}^{(12)}/f_{t+1}.
\end{aligned}
$$
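These recursions translate directly into code. The following Python sketch (the function name and array layout are ours, not the package's implementation) stores the filtered quantities that the smoother below consumes:

```python
import numpy as np

def hp_jump_filter(y, sig_eps2, sig_eta2, sig_zeta2, a1=(0.0, 0.0), p1=(0.0, 0.0, 0.0)):
    """Scalar-form Kalman filter for the model above (a sketch, not the package code).

    y: observations; sig_eps2, sig_eta2, sig_zeta2: length-n variance sequences;
    a1 = (a_1^(1), a_1^(2)); p1 = (p_1^(11), p_1^(12), p_1^(22)).
    Returns the predicted states a, their covariances p, the innovations i,
    their variances f and the Kalman gains k.
    """
    n = len(y)
    a = np.empty((n, 2)); p = np.empty((n, 3))
    i = np.empty(n); f = np.empty(n); k = np.empty((n, 2))
    a[0] = a1; p[0] = p1
    for t in range(n):
        i[t] = y[t] - a[t, 0]                        # innovation
        f[t] = p[t, 0] + sig_eps2[t]                 # innovation variance
        k[t, 0] = (p[t, 0] + p[t, 1]) / f[t]         # Kalman gains
        k[t, 1] = p[t, 1] / f[t]
        if t < n - 1:                                # predict the state at t+1
            a[t+1, 0] = a[t, 0] + a[t, 1] + k[t, 0] * i[t]
            a[t+1, 1] = a[t, 1] + k[t, 1] * i[t]
            p[t+1, 0] = p[t, 0] + 2*p[t, 1] + p[t, 2] + sig_eta2[t] - k[t, 0]*k[t, 0]*f[t]
            p[t+1, 1] = p[t, 1] + p[t, 2] - k[t, 0]*k[t, 1]*f[t]
            p[t+1, 2] = p[t, 2] + sig_zeta2[t] - k[t, 1]*k[t, 1]*f[t]
    return a, p, i, f, k
```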

Modifications when missing observations are present

When one or more values of $y_t$ are missing, the only modifications to the above recursions are the following: for each missing $y_{t+1}$, set
$$
i_{t+1} = 0, \qquad f_{t+1} = \infty, \qquad k_{t+1}^{(1)} = 0, \qquad k_{t+1}^{(2)} = 0,
$$
so that $1/f_{t+1} = 0$ and the products $k_{t+1}^{(\cdot)} k_{t+1}^{(\cdot)} f_{t+1}$ are treated as zero wherever they appear.
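In the sketch above this amounts to replacing the measurement-update lines inside the loop; marking missing values with `np.nan` is our convention, not the package's:

```python
        # inside the loop of hp_jump_filter: skip the measurement update when
        # y_t is missing (np.nan marks a missing value in this sketch)
        if np.isnan(y[t]):
            i[t] = 0.0
            f[t] = np.inf            # so that 1/f_t = 0 in the smoother
            k[t, 0] = k[t, 1] = 0.0
        else:
            i[t] = y[t] - a[t, 0]
            f[t] = p[t, 0] + sig_eps2[t]
            k[t, 0] = (p[t, 0] + p[t, 1]) / f[t]
            k[t, 1] = p[t, 1] / f[t]
        # in the prediction step, use ft in place of f[t] so that the products
        # k*k*f vanish for a missing observation (0 * inf is nan in floating point)
        ft = f[t] if np.isfinite(f[t]) else 0.0
```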

Diffuse initial conditions

Since the two state variables are nonstationary, their initialization should be diffuse:
$$
\begin{bmatrix} \alpha_1^{(1)} \\ \alpha_1^{(2)} \end{bmatrix}
\sim
N\!\left(
\begin{bmatrix} 0 \\ 0 \end{bmatrix},
\begin{bmatrix} v & 0 \\ 0 & v \end{bmatrix}
\right),
$$
with $v \to \infty$.

As will become clear from the computations below, when $v$ is infinite the mean squared errors of $a_t^{(1)}$ and $a_t^{(2)}$ and the variances of the innovations are infinite for $t = 1, 2$, while from $t = 3$ onwards they are finite.

Let us carry out the computations for t = 1, 2, 3 and then take the limit for v → ∞.

t = 1

$$
\begin{aligned}
a_1^{(1)} &= 0, & a_1^{(2)} &= 0, & p_1^{(11)} &= v, & p_1^{(12)} &= 0, & p_1^{(22)} &= v, \\
i_1 &= y_1, & f_1 &= v + \sigma_{\varepsilon,1}^2, & k_1^{(1)} &= \frac{v}{v + \sigma_{\varepsilon,1}^2}, & k_1^{(2)} &= 0.
\end{aligned}
$$

t = 2

$$
\begin{aligned}
a_2^{(1)} &= \frac{v}{v + \sigma_{\varepsilon,1}^2}\, y_1 \;\to\; y_1,
\qquad a_2^{(2)} = 0, \\
p_2^{(11)} &= 2v + \sigma_{\eta,1}^2 - \frac{v^2}{v + \sigma_{\varepsilon,1}^2},
\qquad p_2^{(12)} = v,
\qquad p_2^{(22)} = v + \sigma_{\zeta,1}^2, \\
i_2 &= y_2 - \frac{v}{v + \sigma_{\varepsilon,1}^2}\, y_1 \;\to\; y_2 - y_1, \\
f_2 &= 2v + \sigma_{\eta,1}^2 - \frac{v^2}{v + \sigma_{\varepsilon,1}^2} + \sigma_{\varepsilon,2}^2, \\
k_2^{(1)} &= \frac{3v + \sigma_{\eta,1}^2 - \dfrac{v^2}{v + \sigma_{\varepsilon,1}^2}}
{2v + \sigma_{\eta,1}^2 - \dfrac{v^2}{v + \sigma_{\varepsilon,1}^2} + \sigma_{\varepsilon,2}^2}
\;\to\; 2,
\qquad
k_2^{(2)} = \frac{v}{2v + \sigma_{\eta,1}^2 - \dfrac{v^2}{v + \sigma_{\varepsilon,1}^2} + \sigma_{\varepsilon,2}^2}
\;\to\; 1.
\end{aligned}
$$

t = 3

$$
\begin{aligned}
a_3^{(1)} &= \frac{v}{v + \sigma_{\varepsilon,1}^2}\, y_1
+ k_2^{(1)} \left( y_2 - \frac{v}{v + \sigma_{\varepsilon,1}^2}\, y_1 \right)
\;\to\; 2y_2 - y_1, \\
a_3^{(2)} &= k_2^{(2)} \left( y_2 - \frac{v}{v + \sigma_{\varepsilon,1}^2}\, y_1 \right)
\;\to\; y_2 - y_1, \\
p_3^{(11)} &= 5v + \sigma_{\eta,1}^2 - \frac{v^2}{v + \sigma_{\varepsilon,1}^2} + \sigma_{\zeta,1}^2 + \sigma_{\eta,2}^2
- \frac{\left( 3v + \sigma_{\eta,1}^2 - \dfrac{v^2}{v + \sigma_{\varepsilon,1}^2} \right)^2}
{2v + \sigma_{\eta,1}^2 - \dfrac{v^2}{v + \sigma_{\varepsilon,1}^2} + \sigma_{\varepsilon,2}^2}
\;\to\; \sigma_{\varepsilon,1}^2 + 4\sigma_{\varepsilon,2}^2 + \sigma_{\eta,1}^2 + \sigma_{\eta,2}^2 + \sigma_{\zeta,1}^2, \\
p_3^{(12)} &= 2v + \sigma_{\zeta,1}^2
- \frac{v \left( 3v + \sigma_{\eta,1}^2 - \dfrac{v^2}{v + \sigma_{\varepsilon,1}^2} \right)}
{2v + \sigma_{\eta,1}^2 - \dfrac{v^2}{v + \sigma_{\varepsilon,1}^2} + \sigma_{\varepsilon,2}^2}
\;\to\; \sigma_{\varepsilon,1}^2 + 2\sigma_{\varepsilon,2}^2 + \sigma_{\eta,1}^2 + \sigma_{\zeta,1}^2, \\
p_3^{(22)} &= v + \sigma_{\zeta,1}^2 + \sigma_{\zeta,2}^2
- \frac{v^2}{2v + \sigma_{\eta,1}^2 - \dfrac{v^2}{v + \sigma_{\varepsilon,1}^2} + \sigma_{\varepsilon,2}^2}
\;\to\; \sigma_{\varepsilon,1}^2 + \sigma_{\varepsilon,2}^2 + \sigma_{\eta,1}^2 + \sigma_{\zeta,1}^2 + \sigma_{\zeta,2}^2, \\
i_3 &= y_3 - a_3^{(1)} \;\to\; y_3 - 2y_2 + y_1, \\
f_3 &= p_3^{(11)} + \sigma_{\varepsilon,3}^2
\;\to\; \sigma_{\varepsilon,1}^2 + 4\sigma_{\varepsilon,2}^2 + \sigma_{\varepsilon,3}^2 + \sigma_{\eta,1}^2 + \sigma_{\eta,2}^2 + \sigma_{\zeta,1}^2, \\
k_3^{(1)} &\;\to\;
\frac{2\sigma_{\varepsilon,1}^2 + 6\sigma_{\varepsilon,2}^2 + 2\sigma_{\eta,1}^2 + \sigma_{\eta,2}^2 + 2\sigma_{\zeta,1}^2}
{\sigma_{\varepsilon,1}^2 + 4\sigma_{\varepsilon,2}^2 + \sigma_{\varepsilon,3}^2 + \sigma_{\eta,1}^2 + \sigma_{\eta,2}^2 + \sigma_{\zeta,1}^2}, \qquad
k_3^{(2)} \;\to\;
\frac{\sigma_{\varepsilon,1}^2 + 2\sigma_{\varepsilon,2}^2 + \sigma_{\eta,1}^2 + \sigma_{\zeta,1}^2}
{\sigma_{\varepsilon,1}^2 + 4\sigma_{\varepsilon,2}^2 + \sigma_{\varepsilon,3}^2 + \sigma_{\eta,1}^2 + \sigma_{\eta,2}^2 + \sigma_{\zeta,1}^2}.
\end{aligned}
$$
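These limits can be checked numerically by running the exact recursions of the previous section with a large $v$; a quick sketch with arbitrary test variances:

```python
import numpy as np

# Numerical check of the diffuse limits at t = 3 (arbitrary test variances).
v = 1e8
s1, s2, s3 = 1.0, 2.0, 3.0       # sigma_eps,t^2
h1, h2 = 0.5, 0.7                # sigma_eta,t^2
z1, z2 = 0.3, 0.4                # sigma_zeta,t^2

p11, p12, p22 = v, 0.0, v
f1 = p11 + s1
k11, k12 = (p11 + p12) / f1, p12 / f1
p11, p12, p22 = (p11 + 2*p12 + p22 + h1 - k11*k11*f1,
                 p12 + p22 - k11*k12*f1,
                 p22 + z1 - k12*k12*f1)
f2 = p11 + s2
k21, k22 = (p11 + p12) / f2, p12 / f2
p11, p12, p22 = (p11 + 2*p12 + p22 + h2 - k21*k21*f2,
                 p12 + p22 - k21*k22*f2,
                 p22 + z2 - k22*k22*f2)
f3 = p11 + s3

assert np.isclose(p11, s1 + 4*s2 + h1 + h2 + z1)           # p_3^(11)
assert np.isclose(p12, s1 + 2*s2 + h1 + z1)                # p_3^(12)
assert np.isclose(p22, s1 + s2 + h1 + z1 + z2)             # p_3^(22)
assert np.isclose(f3, s1 + 4*s2 + s3 + h1 + h2 + z1)       # f_3
```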

Smoothing

The smoothing recursions start from $t = n$ and work backwards down to $t = 1$. The following auxiliary quantities are needed to compute the smoothed values of $\alpha_t^{(1)}$ and their MSEs:
$$
r_{n+1}^{(1)} = 0, \quad r_{n+1}^{(2)} = 0, \quad
n_{n+1}^{(11)} = 0, \quad n_{n+1}^{(12)} = 0, \quad n_{n+1}^{(22)} = 0, \quad
e_n = i_n/f_n, \quad d_n = 1/f_n.
$$
For $t = n, n-1, \ldots, 1$, compute
$$
\begin{aligned}
r_t^{(1)} &= i_t/f_t + \bigl(1 - k_t^{(1)}\bigr) r_{t+1}^{(1)} - k_t^{(2)} r_{t+1}^{(2)} \\
r_t^{(2)} &= r_{t+1}^{(1)} + r_{t+1}^{(2)} \\
n_t^{(11)} &= \bigl(1 - k_t^{(1)}\bigr)^2 n_{t+1}^{(11)} - 2 \bigl(1 - k_t^{(1)}\bigr) k_t^{(2)} n_{t+1}^{(12)} + k_t^{(2)} k_t^{(2)} n_{t+1}^{(22)} + 1/f_t \\
n_t^{(12)} &= \bigl(1 - k_t^{(1)}\bigr) \bigl(n_{t+1}^{(11)} + n_{t+1}^{(12)}\bigr) - k_t^{(2)} \bigl(n_{t+1}^{(12)} + n_{t+1}^{(22)}\bigr) \\
n_t^{(22)} &= n_{t+1}^{(11)} + 2 n_{t+1}^{(12)} + n_{t+1}^{(22)} \\
e_{t-1} &= i_{t-1}/f_{t-1} - k_{t-1}^{(1)} r_t^{(1)} - k_{t-1}^{(2)} r_t^{(2)} \\
d_{t-1} &= 1/f_{t-1} + k_{t-1}^{(1)} k_{t-1}^{(1)} n_t^{(11)} + 2 k_{t-1}^{(1)} k_{t-1}^{(2)} n_t^{(12)} + k_{t-1}^{(2)} k_{t-1}^{(2)} n_t^{(22)}.
\end{aligned}
$$
The smoothed values of $\alpha_t^{(1)}$, that is, the Hodrick-Prescott filtered time series, and their mean squared errors are given by
$$
\begin{aligned}
a_{t|n}^{(1)} &= a_t^{(1)} + p_t^{(11)} r_t^{(1)} + p_t^{(12)} r_t^{(2)} \\
p_{t|n}^{(11)} &= p_t^{(11)} - p_t^{(11)} p_t^{(11)} n_t^{(11)} - 2 p_t^{(11)} p_t^{(12)} n_t^{(12)} - p_t^{(12)} p_t^{(12)} n_t^{(22)}.
\end{aligned}
$$
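A Python sketch of the backward pass, consuming the output of the filter sketch above (again with our own naming); it also stores $r_t$, $n_t$, $e_t$ and $d_t$, which are reused below for the weights and the analytical scores:

```python
import numpy as np

def hp_jump_smoother(a, p, i, f, k):
    """Backward smoothing pass for the output of hp_jump_filter (a sketch).

    Returns the smoothed level s (the HP-filtered series), its MSE ps, and the
    auxiliary arrays R = (r^(1), r^(2)), N = (n^(11), n^(12), n^(22)), e and d.
    """
    n = len(i)
    r1 = r2 = 0.0                      # r_{n+1}^(1), r_{n+1}^(2)
    n11 = n12 = n22 = 0.0              # n_{n+1}^(..)
    s = np.empty(n); ps = np.empty(n)
    R = np.empty((n, 2)); N = np.empty((n, 3))
    e = np.empty(n); d = np.empty(n)
    e[n-1] = i[n-1] / f[n-1]
    d[n-1] = 1.0 / f[n-1]
    for t in range(n - 1, -1, -1):
        # update r_t and n_t from r_{t+1} and n_{t+1} (simultaneous assignment)
        r1, r2 = (i[t]/f[t] + (1 - k[t, 0])*r1 - k[t, 1]*r2,
                  r1 + r2)
        n11, n12, n22 = ((1 - k[t, 0])**2 * n11 - 2*(1 - k[t, 0])*k[t, 1]*n12
                         + k[t, 1]*k[t, 1]*n22 + 1.0/f[t],
                         (1 - k[t, 0])*(n11 + n12) - k[t, 1]*(n12 + n22),
                         n11 + 2*n12 + n22)
        R[t] = r1, r2
        N[t] = n11, n12, n22
        if t > 0:                      # smoothing errors e_{t-1} and d_{t-1}
            e[t-1] = i[t-1]/f[t-1] - k[t-1, 0]*r1 - k[t-1, 1]*r2
            d[t-1] = (1.0/f[t-1] + k[t-1, 0]*k[t-1, 0]*n11
                      + 2*k[t-1, 0]*k[t-1, 1]*n12 + k[t-1, 1]*k[t-1, 1]*n22)
        # smoothed level and its MSE
        s[t] = a[t, 0] + p[t, 0]*r1 + p[t, 1]*r2
        ps[t] = (p[t, 0] - p[t, 0]*p[t, 0]*n11
                 - 2*p[t, 0]*p[t, 1]*n12 - p[t, 1]*p[t, 1]*n22)
    return s, ps, R, N, e, d
```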

Weights for computing the effective degrees of freedom

Since the smoother is linear in the observations, the vector of smoothed $\alpha_t^{(1)}$, say $\mathbf{s}$, is just a linear transformation of the vector of observations $\mathbf{y}$: $\mathbf{s} = W\mathbf{y}$. The number of effective degrees of freedom is the trace of the weighting matrix $W$ (cf. Hastie, Tibshirani and Friedman, 2009, The Elements of Statistical Learning, Section 5.4.1). The formulae for computing such weights in a general state-space form can be found in Koopman and Harvey (2003, Journal of Economic Dynamics and Control, vol. 27). In our framework, the diagonal elements of the matrix $W$ are given by
$$
w_{tt} = p_t^{(11)} \left( 1/f_t + k_t^{(1)} k_t^{(1)} n_t^{(11)} + 2 k_t^{(1)} k_t^{(2)} n_t^{(12)} + k_t^{(2)} k_t^{(2)} n_t^{(22)} - k_t^{(1)} n_t^{(11)} - k_t^{(2)} n_t^{(12)} \right)
- p_t^{(12)} \left( k_t^{(1)} \bigl(n_t^{(11)} + n_t^{(12)}\bigr) + k_t^{(2)} \bigl(n_t^{(12)} + n_t^{(22)}\bigr) \right).
$$
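Given the stored filter and smoother quantities, the diagonal weights and the effective degrees of freedom reduce to one vectorised expression; a sketch under the same naming conventions as above:

```python
import numpy as np

def effective_dof(p, f, k, N):
    """Diagonal weights w_tt of the smoothing matrix W and the effective
    degrees of freedom tr(W), i.e. the sum of the w_tt (a sketch)."""
    k1, k2 = k[:, 0], k[:, 1]
    n11, n12, n22 = N[:, 0], N[:, 1], N[:, 2]
    w = (p[:, 0] * (1.0/f + k1*k1*n11 + 2*k1*k2*n12 + k2*k2*n22
                    - k1*n11 - k2*n12)
         - p[:, 1] * (k1*(n11 + n12) + k2*(n12 + n22)))
    return w, w.sum()
```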

Analytical scores

The log-likelihood must be maximised with respect to a very large number of parameters (n + 3). Thus, providing the numerical optimiser with analytical scores is important for stability and speed. Since all of our parameters are related to quantities in the disturbance covariance matrices, we can adapt the results in Koopman and Shephard (1992, Biometrika vol. 79).

Recall that our (slightly re-parametrised) model is
$$
\begin{aligned}
y_t &= \alpha_t^{(1)} + \varepsilon_t, & \varepsilon_t &\sim \mathrm{NID}(0, \sigma_\varepsilon^2) \\
\alpha_{t+1}^{(1)} &= \alpha_t^{(1)} + \alpha_t^{(2)} + \eta_t, & \eta_t &\sim \mathrm{NID}(0, \sigma_t^2) \\
\alpha_{t+1}^{(2)} &= \alpha_t^{(2)} + \zeta_t, & \zeta_t &\sim \mathrm{NID}(0, \sigma^2 + \gamma^2 \sigma_t^2)
\end{aligned}
$$
where the parameters to estimate are $\sigma_\varepsilon$, $\sigma$, $\gamma$ and the sequence $\{\sigma_t\}_{t=1,\ldots,n}$, which are all non-negative. Notice that in this parametrisation $\lambda = \sigma_\varepsilon^2/\sigma^2$.
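In code, the re-parametrisation is just a mapping from $(\sigma_\varepsilon, \sigma, \gamma, \{\sigma_t\})$ to the variance sequences consumed by the filter sketch above (a hypothetical helper, not a package function):

```python
import numpy as np

def variance_sequences(n, sig_eps, sig, gam, sig_t):
    """Build the variance sequences of the re-parametrised model."""
    sig_eps2 = np.full(n, sig_eps**2)         # Var(eps_t)  = sigma_eps^2
    sig_eta2 = sig_t**2                       # Var(eta_t)  = sigma_t^2
    sig_zeta2 = sig**2 + gam**2 * sig_t**2    # Var(zeta_t) = sigma^2 + gamma^2 sigma_t^2
    return sig_eps2, sig_eta2, sig_zeta2
```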

λ free

If $\lambda$ is not fixed and $\ell(\boldsymbol{\theta})$ denotes the log-likelihood function, with $\boldsymbol{\theta}$ the vector of all the parameters, then
$$
\begin{aligned}
\frac{\partial \ell}{\partial \sigma_\varepsilon} &= \sigma_\varepsilon \sum_{t=1}^n \bigl(e_t e_t - d_t\bigr) \\
\frac{\partial \ell}{\partial \sigma} &= \sigma \sum_{t=1}^n \bigl(r_t^{(2)} r_t^{(2)} - n_t^{(22)}\bigr) \\
\frac{\partial \ell}{\partial \gamma} &= \gamma \sum_{t=1}^n \bigl(r_t^{(2)} r_t^{(2)} - n_t^{(22)}\bigr)\, \sigma_t^2 \\
\frac{\partial \ell}{\partial \sigma_t} &= \Bigl( r_t^{(1)} r_t^{(1)} - n_t^{(11)} + \bigl(r_t^{(2)} r_t^{(2)} - n_t^{(22)}\bigr) \gamma^2 \Bigr)\, \sigma_t.
\end{aligned}
$$
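A sketch of these scores, where `R`, `N`, `e`, `d` are the arrays produced by the smoother sketch above; the returned values would be handed to a gradient-based optimiser alongside the log-likelihood:

```python
import numpy as np

def scores_lambda_free(sig_eps, sig, gam, sig_t, R, N, e, d):
    """Analytical scores for (sigma_eps, sigma, gamma, sigma_1, ..., sigma_n)."""
    q1 = R[:, 0]*R[:, 0] - N[:, 0]        # r_t^(1) r_t^(1) - n_t^(11)
    q2 = R[:, 1]*R[:, 1] - N[:, 2]        # r_t^(2) r_t^(2) - n_t^(22)
    d_sig_eps = sig_eps * np.sum(e*e - d)
    d_sig = sig * np.sum(q2)
    d_gam = gam * np.sum(q2 * sig_t**2)
    d_sig_t = (q1 + q2 * gam**2) * sig_t  # one score per sigma_t
    return d_sig_eps, d_sig, d_gam, d_sig_t
```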

Generally, constrained optimisation problems also need the derivatives of the constraining function, which in our case is $g(\boldsymbol{\theta}) = \sum_{t=1}^n \sigma_t$: the solution to the regularised maximum likelihood problem must satisfy $g(\boldsymbol{\theta}) \le M$. The derivatives are trivial:
$$
\frac{\partial g}{\partial \sigma_\varepsilon} = 0, \qquad
\frac{\partial g}{\partial \sigma} = 0, \qquad
\frac{\partial g}{\partial \gamma} = 0, \qquad
\frac{\partial g}{\partial \sigma_t} = 1.
$$

λ fixed

If $\lambda$ is fixed, then $\sigma_\varepsilon^2 = \lambda \sigma^2$ and the parameter vector $\boldsymbol{\theta}$ of the log-likelihood $\ell(\boldsymbol{\theta})$ contains neither $\lambda$ nor $\sigma_\varepsilon^2$. The derivatives are now
$$
\begin{aligned}
\frac{\partial \ell}{\partial \sigma} &= \sigma \sum_{t=1}^n \bigl(r_t^{(2)} r_t^{(2)} - n_t^{(22)}\bigr) + \sigma \lambda \sum_{t=1}^n \bigl(e_t e_t - d_t\bigr) \\
\frac{\partial \ell}{\partial \gamma} &= \gamma \sum_{t=1}^n \bigl(r_t^{(2)} r_t^{(2)} - n_t^{(22)}\bigr)\, \sigma_t^2 \\
\frac{\partial \ell}{\partial \sigma_t} &= \Bigl( r_t^{(1)} r_t^{(1)} - n_t^{(11)} + \bigl(r_t^{(2)} r_t^{(2)} - n_t^{(22)}\bigr) \gamma^2 \Bigr)\, \sigma_t.
\end{aligned}
$$
The derivatives of the constraining function are
$$
\frac{\partial g}{\partial \sigma} = 0, \qquad
\frac{\partial g}{\partial \gamma} = 0, \qquad
\frac{\partial g}{\partial \sigma_t} = 1.
$$
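And the corresponding sketch for fixed $\lambda$, in the same conventions as above:

```python
import numpy as np

def scores_lambda_fixed(sig, gam, sig_t, lam, R, N, e, d):
    """Analytical scores when lambda is fixed, so sigma_eps^2 = lambda * sigma^2."""
    q1 = R[:, 0]*R[:, 0] - N[:, 0]
    q2 = R[:, 1]*R[:, 1] - N[:, 2]
    d_sig = sig * np.sum(q2) + sig * lam * np.sum(e*e - d)
    d_gam = gam * np.sum(q2 * sig_t**2)
    d_sig_t = (q1 + q2 * gam**2) * sig_t
    return d_sig, d_gam, d_sig_t
```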