Estimating the parameters of a Gaussian process model is often done through maximum likelihood estimation. In this vignette, the derivatives of the deviance with respect to the parameters are calculated to give the parameter gradient. This gradient can then be supplied to a numerical optimizer, such as BFGS, which should be much faster than optimizing without a gradient.
Following the lead of the GPfit paper, instead of maximizing the likelihood we find parameter estimates by minimizing the deviance. For correlation parameters $\theta$, the deviance is shown below.
$$ -2 \log L(\theta) \propto \log |R| + n \log \big[(Y - 1_n\hat{\mu}(\theta))^T R^{-1} (Y - 1_n\hat{\mu}(\theta))\big] = \mathcal{D} $$
For now I will assume that $\hat{\mu}$ does not depend on $\theta$, and will replace $Y - 1_n\hat{\mu}$ with $Z$, as shown below.
$$ \mathcal{D} = \log |R| + n \log \big[Z^T R^{-1} Z\big] $$
Thus the only dependence on the correlation parameters is through R.
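As a concrete reference for the numerical checks later in this vignette, here is a direct transcription of this deviance into R. This is an illustrative sketch; the helper name `deviance` is my own and does not come from GPfit or any other package.

```r
# D = log|R| + n log(Z' R^{-1} Z), transcribed from the formula above.
# `R` is the correlation matrix, `Z` the de-meaned response vector.
deviance <- function(R, Z) {
  logdetR <- c(determinant(R)$modulus)  # log|R|, computed without overflow
  logdetR + length(Z) * log(c(t(Z) %*% solve(R, Z)))
}
```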
Via Section A.3.1 of the GPML book (Rasmussen and Williams), we need the following matrix derivative identities:
$$ \frac{\partial }{\partial \theta} R^{-1} = -R^{-1} \frac{\partial R}{\partial \theta} R^{-1}$$
$$ \frac{\partial }{\partial \theta} \log |R| = \text{tr}(R ^ {-1}\frac{\partial R}{\partial \theta} ) $$
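Both identities are easy to verify numerically. The toy correlation matrix below is an arbitrary assumption for illustration; the comparison is against central finite differences.

```r
set.seed(1)
x <- runif(5)
D2 <- outer(x, x, "-")^2                       # squared distances for a toy R(theta)
Rmat <- function(theta) exp(-theta * D2) + 1e-8 * diag(5)
theta <- 0.7; eps <- 1e-6
Rinv <- solve(Rmat(theta))
dR <- (Rmat(theta + eps) - Rmat(theta - eps)) / (2 * eps)  # numerical dR/dtheta

# Identity 1: d(R^{-1})/dtheta = -R^{-1} (dR/dtheta) R^{-1}
dRinv_num <- (solve(Rmat(theta + eps)) - solve(Rmat(theta - eps))) / (2 * eps)
max(abs(dRinv_num - (-Rinv %*% dR %*% Rinv)))  # ~ 0

# Identity 2: d log|R| / dtheta = tr(R^{-1} dR/dtheta)
dlogdet_num <- c(determinant(Rmat(theta + eps))$modulus -
                 determinant(Rmat(theta - eps))$modulus) / (2 * eps)
c(dlogdet_num, sum(diag(Rinv %*% dR)))         # the two values should agree
```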
Now we can calculate the derivative of the deviance in terms of $\frac{\partial R}{\partial \theta}$.
$$ \frac{\partial \mathcal{D}}{\partial \theta} = \text{tr}(R ^ {-1}\frac{\partial R}{\partial \theta} ) - \frac{n}{Z^T R^{-1} Z} Z^T R^{-1} \frac{\partial R}{\partial \theta} R^{-1} Z $$
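In code this formula is only a few lines. The sketch below is my own helper, not a package function; it takes $R$, $\frac{\partial R}{\partial \theta}$, and $Z$ as already-computed inputs.

```r
# dD/dtheta = tr(R^{-1} dR) - n / (Z' R^{-1} Z) * Z' R^{-1} dR R^{-1} Z
deviance_grad <- function(R, dR, Z) {
  n <- length(Z)
  Rinv <- solve(R)
  RinvZ <- Rinv %*% Z                 # R^{-1} Z appears twice, compute it once
  sum(diag(Rinv %*% dR)) -
    n / c(t(Z) %*% RinvZ) * c(t(RinvZ) %*% dR %*% RinvZ)
}
```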
Now we just need to find $\frac{\partial R}{\partial \theta}$, which depends on the specific correlation function R(θ).
The correlation matrix is usually augmented by adding a nugget term $\delta$ to the diagonal:
$$ R = R^* + \delta I $$
The nugget accounts for noise in the responses, smoothing the predicted response function. It also helps with numerical stability, which becomes a serious problem when there is a lot of data. Often a small nugget is used even when the function is noiseless.
For the nugget,
$$\frac{\partial R}{\partial \delta} = I$$
Thus the derivative of the deviance with respect to the nugget is very simple. $$ \frac{\partial \mathcal{D}}{\partial \delta} = \text{tr}(R ^ {-1} ) - \frac{n}{Z^T R^{-1} Z} Z^T R^{-1} R^{-1} Z $$
This equation gives the derivative of the deviance with respect to the nugget regardless of the correlation function used.
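In code this is just the general gradient with $\frac{\partial R}{\partial \delta} = I$. The sketch below (same caveats as before: an illustrative helper, not package code) is equivalent to `deviance_grad(R, diag(nrow(R)), Z)` but avoids forming the identity matrix.

```r
# dD/ddelta = tr(R^{-1}) - n / (Z' R^{-1} Z) * Z' R^{-1} R^{-1} Z
nugget_grad <- function(R, Z) {
  n <- length(Z)
  Rinv <- solve(R)
  RinvZ <- Rinv %*% Z
  sum(diag(Rinv)) - n / c(t(Z) %*% RinvZ) * c(crossprod(RinvZ))
}
```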
The Gaussian correlation function has parameter vector $\theta = (\theta_1, \dots, \theta_d)$.
Ignoring the nugget term, the $i,j$ entry of the correlation matrix for points $x_i$ and $x_j$ is
$$ R_{ij} = \exp\bigg[-\sum_{k=1}^{d} \theta_k (x_{ik} - x_{jk})^2 \bigg] $$
$$ \frac{\partial}{\partial \theta_l} R_{ij} = -\exp\bigg[-\sum_{k=1}^{d} \theta_k (x_{ik} - x_{jk})^2 \bigg] (x_{il} - x_{jl})^2 = -R_{ij} \, (x_{il} - x_{jl})^2 $$
This will give the matrix $\frac{\partial R}{\partial \theta_l}$, which can be used with the previous equations to calculate $\frac{\partial \mathcal{D}}{\partial \theta_l}$.
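Putting the pieces together for the Gaussian correlation: the sketch below builds $R$ and $\frac{\partial R}{\partial \theta_l}$ on made-up data (all inputs are arbitrary assumptions for illustration) and checks the analytic gradient against a finite difference of the `deviance` helper defined earlier.

```r
set.seed(2)
n <- 8; d <- 2
X <- matrix(runif(n * d), ncol = d)   # toy design matrix
Z <- rnorm(n)                         # toy de-meaned responses
delta <- 1e-8                         # small nugget for numerical stability

corr_gauss <- function(X, theta) {    # R* (no nugget), entrywise product over dims
  R <- matrix(1, nrow(X), nrow(X))
  for (k in seq_len(ncol(X)))
    R <- R * exp(-theta[k] * outer(X[, k], X[, k], "-")^2)
  R
}

theta <- c(1.2, 0.4); l <- 1
Rstar <- corr_gauss(X, theta)
R <- Rstar + delta * diag(n)
dRl <- -Rstar * outer(X[, l], X[, l], "-")^2   # entrywise: -R*_ij (x_il - x_jl)^2

eps <- 1e-6
up <- theta; up[l] <- up[l] + eps
dn <- theta; dn[l] <- dn[l] - eps
c(analytic = deviance_grad(R, dRl, Z),
  numeric  = (deviance(corr_gauss(X, up) + delta * diag(n), Z) -
              deviance(corr_gauss(X, dn) + delta * diag(n), Z)) / (2 * eps))
```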
The lifted Brownian covariance function is
$$ c(x, x') = \psi(x) + \psi(x') - \psi(x - x') - 1, \qquad \text{where } \psi(h) = (1 + ||h||_a^{2\gamma})^{\beta} $$
and $$ ||h||_a^2 = \boldsymbol{h}^T \begin{bmatrix} a_1 & 0 & \dots & 0 \\ 0 & a_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & a_d \end{bmatrix} \boldsymbol{h} = \sum_{i=1}^d a_i h_i^2$$
This one is different from the others because it is not a correlation function, which ranges from 0 to 1, but the covariance function itself. Thus we have to use the deviance written in terms of the covariance matrix $\Sigma$ (derived at the end of this vignette), before $\hat{\sigma}^2$ is inserted.
$$ \frac{\partial}{\partial \beta} \psi(h) = (1 + ||h||_a^{2\gamma} ) ^{\beta} \log (1 + ||h||_a^{2\gamma} ) = \psi(h) \log (1 + ||h||_a^{2\gamma} ) $$
$$ \frac{\partial}{\partial \gamma} \psi(h) = \beta (1 + ||h||_a^{2\gamma} ) ^{\beta-1} ||h||_a^{2\gamma} \log(||h||_a^{2}) $$

$$ \frac{\partial}{\partial a_i} ||h||_a^{2} = h_i^2 $$
$$ \frac{\partial}{\partial a_i} ||h||_a^{2\gamma} = \gamma ||h||_a^{2(\gamma-1)} h_i^2 $$
$$ \frac{\partial}{\partial a_i} \psi(h) = \beta (1 + ||h||_a^{2\gamma} ) ^{\beta-1} \gamma ||h||_a^{2(\gamma-1)} h_i^2 $$
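These partials can also be checked numerically. The sketch below transcribes the formulas above; the function names and parameter values are my own, chosen arbitrarily for illustration.

```r
psi <- function(h, a, beta, gamma) {
  na2 <- sum(a * h^2)                 # ||h||_a^2
  (1 + na2^gamma)^beta
}
dpsi_dbeta <- function(h, a, beta, gamma)
  psi(h, a, beta, gamma) * log1p(sum(a * h^2)^gamma)
dpsi_dgamma <- function(h, a, beta, gamma) {
  na2 <- sum(a * h^2)
  beta * (1 + na2^gamma)^(beta - 1) * na2^gamma * log(na2)
}
dpsi_dai <- function(h, a, beta, gamma, i) {
  na2 <- sum(a * h^2)
  beta * (1 + na2^gamma)^(beta - 1) * gamma * na2^(gamma - 1) * h[i]^2
}

h <- c(0.3, -0.5); a <- c(1, 2); beta <- 0.4; gamma <- 0.8; eps <- 1e-6
c(analytic = dpsi_dgamma(h, a, beta, gamma),   # check one of the three partials
  numeric  = (psi(h, a, beta, gamma + eps) -
              psi(h, a, beta, gamma - eps)) / (2 * eps))
```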
For reference, here is how the deviance is derived from the likelihood. The likelihood for data from a Gaussian process follows the standard multivariate normal probability density function (pdf). $$ L = (2 \pi)^{-k/2} |\Sigma|^{-1/2} \exp\Big[\frac{-1}{2}(Y - \mu)^T \Sigma^{-1} (Y - \mu)\Big] $$ The log-likelihood is generally easier to work with.
$$ \log L = \frac{-k}{2} \log(2 \pi) + \frac{-1}{2}\log|\Sigma| + \frac{-1}{2}(Y - \mu)^T \Sigma^{-1} (Y - \mu) $$ To simplify, we can multiply it by $-2$ and call the result the deviance, denoted here as $\mathcal{D}$.
$$ \mathcal{D} = -2\log L = k\log (2\pi) + \log |\Sigma| + (Y - \mu)^T\Sigma^{-1}(Y - \mu) $$ The $k\log(2\pi)$ term can be ignored since it is constant while optimizing the parameters. $$ \mathcal{D} \propto \log |\Sigma| + (Y - \mu)^T\Sigma^{-1}(Y - \mu) $$
Differentiating with respect to a covariance parameter $\theta$, and writing $Z = Y - \mu$, gives
$$ \frac{\partial \mathcal{D}}{\partial \theta} = \text{tr}(\Sigma ^ {-1}\frac{\partial \Sigma}{\partial \theta} ) - Z^T \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta} \Sigma^{-1} Z $$
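As with the correlation form, this is easy to verify with a finite difference; the one-parameter toy covariance below is an arbitrary assumption for illustration.

```r
set.seed(3)
n <- 5
x <- runif(n)
D2 <- outer(x, x, "-")^2
Sigma <- function(theta) 2 * exp(-theta * D2) + 0.1 * diag(n)  # toy covariance
Z <- rnorm(n)                                   # Z = Y - mu
theta <- 0.9; eps <- 1e-6
S <- Sigma(theta); Sinv <- solve(S)
dS <- -2 * D2 * exp(-theta * D2)                # dSigma/dtheta for this toy model
Dev <- function(th)                             # log|Sigma| + Z' Sigma^{-1} Z
  c(determinant(Sigma(th))$modulus) + c(t(Z) %*% solve(Sigma(th), Z))
c(analytic = sum(diag(Sinv %*% dS)) - c(t(Z) %*% Sinv %*% dS %*% Sinv %*% Z),
  numeric  = (Dev(theta + eps) - Dev(theta - eps)) / (2 * eps))
```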