# Multivariate spatio-temporal models with separable covariance functions: the auto-regressive representation

It is convenient to model a spatio-temporal Gaussian field with a separable covariance function: all one needs to do is fit functions to the empirical spatial and temporal variograms, and the job is done. If we obtain valid spatial and temporal covariance functions, then their product is guaranteed to also be a valid spatio-temporal covariance function. Despite their simplicity, these models have significant drawbacks, most notably that they cannot capture spatio-temporal interaction: the cross-covariance between the time series at two locations has, up to a multiplicative constant, the same temporal shape regardless of the distance between those locations. See Cressie and Huang (1999), Gneiting et al. (2005), and references therein for more information on the limitations of this class of models.

A very convenient property of separability is its representation in dynamic, auto-regressive form. By Theorem 3 of Storvik et al. (2001), if $X(s,t) \sim \mathcal{GP}(0,k_1(\|s-r\|)\,k_2(|t-t'|))$ and if $k_2(|t-t'|) = \rho^{|t-t'|}$, then

$X(s,t+1) = \rho X(s,t) + e(s)$

where, independently at each time step,

$e(s) \sim \mathcal{GP}(0,(1-\rho^2)k_1(\cdot))$

Computational considerations may favour one form over the other, but I will not dwell on that today; suffice it to say that it is useful to have both representations at hand when carrying out parameter estimation.
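
The equivalence above can be checked numerically. The sketch below (all names and parameter values are illustrative choices, not from any library or from the paper) simulates one step of the AR recursion on a handful of spatial sites, starting from the stationary distribution, and compares the empirical lag-0 and lag-1 covariances with $k_1(\cdot)$ and $\rho\,k_1(\cdot)$:

```python
import numpy as np

rng = np.random.default_rng(0)

sites = np.linspace(0, 10, 5)                 # spatial locations (illustrative)
D = np.abs(sites[:, None] - sites[None, :])
k1 = np.exp(-0.5 * D**2)                      # squared-exponential spatial covariance
rho = 0.7                                     # temporal AR coefficient

n = len(sites)
jitter = 1e-10 * np.eye(n)                    # numerical stabiliser for Cholesky
L0 = np.linalg.cholesky(k1 + jitter)          # stationary distribution: GP(0, k1)
Le = np.linalg.cholesky((1 - rho**2) * k1 + jitter)   # innovation: GP(0, (1-rho^2) k1)

n_rep = 200_000                               # Monte Carlo replicates
X0 = L0 @ rng.standard_normal((n, n_rep))     # X(., t) from the stationary law
X1 = rho * X0 + Le @ rng.standard_normal((n, n_rep))  # X(., t+1) via the AR step

C0 = X0 @ X0.T / n_rep                        # empirical lag-0 covariance ~ k1
C1 = X1 @ X0.T / n_rep                        # empirical lag-1 covariance ~ rho * k1
print(np.max(np.abs(C0 - k1)), np.max(np.abs(C1 - rho * k1)))
```

Both printed errors should be small (Monte Carlo noise only), confirming that one AR step reproduces the separable covariance $k_1(\cdot)\rho^{|t-t'|}$ at lags 0 and 1.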

# Spatio-temporal co-regionalisation

In my work I use (very) large datasets and have to carry out inference over large areas. Separability helps here; however, I have multiple interacting spatio-temporal fields, and the interactions cannot be ignored. Finding the joint spatial covariance structure in a multivariate problem is not hard; see for example Zammit-Mangion et al. (2013) and Finley et al. (2007), which employ a co-regionalisation assumption. Under this assumption, interactions between the fields are assumed to be local, i.e. at a given point in space the interaction between the two fields is fully defined by a 2×2 numeric covariance matrix. This does not imply a lack of spatial correlation in the interaction terms; rather, the off-diagonal terms are fully determined by the covariance functions of the individual fields, which makes things considerably easier.

Co-regionalisation has been applied in the spatio-temporal setting in a model termed the spatiotemporal linear coregionalization model (STLM) by Choi et al. (2009). The same rules apply in the spatio-temporal setting as in the spatial one. Thus, under co-regionalisation, and under the assumption that the marginal spatio-temporal covariance functions of the two interacting fields are the same up to a numerical constant, the joint covariance function for the separable case described above is simply ${\bf R} \otimes (k_1(\|s-r\|)\,k_2(|t-t'|))$, where $\bf R$ is the 2×2 (numerical) covariance matrix describing the local covariances.
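
To make the construction concrete, here is a short sketch (parameter values and grid sizes are illustrative assumptions) that assembles the joint covariance ${\bf R} \otimes (k_1 k_2)$ on a small space-time grid via Kronecker products and verifies that it is a valid covariance matrix, i.e. symmetric and positive semi-definite:

```python
import numpy as np

sites = np.linspace(0, 5, 4)                  # 4 spatial locations (illustrative)
times = np.arange(3)                          # 3 time points
rho = 0.7
a, b = 0.5, 0.8                               # co-regionalisation coefficients

Ds = np.abs(sites[:, None] - sites[None, :])
K1 = np.exp(-0.5 * Ds**2)                     # spatial covariance k1
Dt = np.abs(times[:, None] - times[None, :])
K2 = rho**Dt                                  # temporal covariance k2 = rho^|dt|

Kst = np.kron(K2, K1)                         # separable space-time covariance
R = np.array([[1.0, a], [a, a**2 + b**2]])    # local 2x2 covariance matrix R
K = np.kron(R, Kst)                           # joint bivariate covariance R ⊗ (k1 k2)

# A valid covariance matrix is symmetric positive semi-definite
eigs = np.linalg.eigvalsh(K)
print(K.shape, eigs.min())
```

Since $\bf R$, $K_1$, and $K_2$ are each positive definite, the Kronecker product inherits positive definiteness, so the smallest eigenvalue is positive.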

What I am interested in here is the multivariate auto-regressive (AR) interpretation of bivariate spatio-temporal fields having this covariance function.

# The AR representation

Let us assume the presence of two fields $\tilde X_1(s,t)$ and $\tilde X_2(s,t)$ that are i.i.d. according to

$\tilde X_1, \tilde X_2 \sim \mathcal{GP}(0,k(\cdot))$

where $k(\cdot) = k_1(\cdot)k_2(\cdot)$ is the product of the spatial and temporal covariance functions. Then, without loss of generality, we can express the bivariate model as

$X_1(s,t) = \tilde X_1(s,t)$

$X_2(s,t) = a\tilde X_1(s,t) + b\tilde X_2(s,t)$

where

${\bf R} = \begin{bmatrix} 1 & a \\ a & a^2 + b^2 \end{bmatrix}$

Then

$(X_1(s,t), X_2(s,t)) \sim \mathcal{GP}(0, {\bf R} \otimes k(\cdot))$
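This can be verified by Monte Carlo. The sketch below (a toy two-point covariance stands in for $k(\cdot)$; all values are illustrative assumptions) builds $X_1 = \tilde X_1$ and $X_2 = a\tilde X_1 + b\tilde X_2$ from i.i.d. draws and checks that the joint empirical covariance matches ${\bf R} \otimes k(\cdot)$:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.5, 0.8                               # co-regionalisation coefficients
K = np.array([[1.0, 0.3], [0.3, 1.0]])        # stand-in for k(.) on two space-time points
Lc = np.linalg.cholesky(K)

n = 500_000                                   # Monte Carlo replicates
xt1 = Lc @ rng.standard_normal((2, n))        # x~1 ~ GP(0, k)
xt2 = Lc @ rng.standard_normal((2, n))        # x~2 ~ GP(0, k), independent of x~1
X = np.vstack([xt1, a * xt1 + b * xt2])       # stacked (X1, X2)

R = np.array([[1.0, a], [a, a**2 + b**2]])
emp = X @ X.T / n                             # empirical joint covariance
print(np.max(np.abs(emp - np.kron(R, K))))    # should be small (MC noise only)
```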

What I wish to show is that this joint space-time covariance function describes an auto-regressive process of the form

$\begin{bmatrix} X_1(s,t+1) \\ X_2(s,t+1) \end{bmatrix} = \begin{bmatrix} \rho & 0 \\ 0 & \rho \end{bmatrix}\begin{bmatrix} X_1(s,t) \\ X_2(s,t) \end{bmatrix} +\begin{bmatrix} e_1(s) \\ e_2(s) \end{bmatrix}$

where

$(e_1(s), e_2(s)) \sim \mathcal{GP}\left(0,(1-\rho^2)[{\bf R} \otimes k_1(\cdot)]\right)$

I do not have a formal proof of this; however, we can check that it is indeed the case by comparing moments under the Gaussian-field representation and the auto-regressive representation. There are six (co)variances we need to compare, namely

1. $\langle X_{1}(s,t), X_{1}(s,t+1)\rangle$
2. $\langle X_{1}(s,t), X_{1}(s,t)\rangle$
3. $\langle X_{2}(s,t), X_{2}(s,t+1)\rangle$
4. $\langle X_{2}(s,t), X_{2}(s,t)\rangle$
5. $\langle X_{2}(s,t+1), X_{1}(s,t)\rangle$
6. $\langle X_{1}(s,t), X_{2}(s,t)\rangle$

Using the spatio-temporal covariance structure we obtain the following

1. $\langle X_{1}(s,t), X_{1}(s,t+1)\rangle = \rho k_1(\cdot)$
2. $\langle X_{1}(s,t), X_{1}(s,t)\rangle = k_1(\cdot)$
3. $\langle X_{2}(s,t), X_{2}(s,t+1)\rangle = \rho (a^2 + b^2) k_1(\cdot)$
4. $\langle X_{2}(s,t), X_{2}(s,t)\rangle = (a^2 + b^2) k_1(\cdot)$
5. $\langle X_{2}(s,t+1), X_{1}(s,t)\rangle = a\rho k_1(\cdot)$
6. $\langle X_{1}(s,t), X_{2}(s,t)\rangle = ak_1(\cdot)$
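
These six quantities can also be checked by simulation. The sketch below (scalar $k_1(\cdot)$ at a single location, illustrative parameter values) runs one step of the bivariate AR recursion from the stationary distribution and compares the empirical lag-0 and lag-1 covariance matrices with $k_1{\bf R}$ and $\rho k_1{\bf R}$, which together encode all six moments:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, rho, k1 = 0.5, 0.8, 0.7, 1.0            # k1(.) evaluated at one location

R = np.array([[1.0, a], [a, a**2 + b**2]])    # local covariance matrix
L0 = np.linalg.cholesky(k1 * R)               # stationary distribution at time t
Le = np.linalg.cholesky((1 - rho**2) * k1 * R)  # innovation covariance

n = 500_000                                   # Monte Carlo replicates
Xt = L0 @ rng.standard_normal((2, n))         # (X1(s,t), X2(s,t))
Xt1 = rho * Xt + Le @ rng.standard_normal((2, n))  # (X1(s,t+1), X2(s,t+1))

C00 = Xt @ Xt.T / n                           # lag-0 covariances: moments 2, 4, 6
C10 = Xt1 @ Xt.T / n                          # lag-1 covariances: moments 1, 3, 5
print(np.max(np.abs(C00 - k1 * R)), np.max(np.abs(C10 - rho * k1 * R)))
```

Both errors should shrink to Monte Carlo noise, matching the six covariances listed above.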

From the auto-regressive form we obtain exactly the same covariances. Note that to extract them from the AR process we need to evaluate the moments of the stationary distribution, which can be found by noting the following

$var(X_{1}(s,t)) = \rho^2 var(X_1(s,t)) + (1-\rho^2)k_1(\cdot)$

$var(X_{2}(s,t)) = \rho^2 var(X_2(s,t)) + (1-\rho^2)(a^2 + b^2)k_1(\cdot)$

$cov(X_1(s,t),X_{2}(s,t)) = \rho^2 cov(X_1(s,t),X_2(s,t)) + cov(e_1(s),e_2(s))$

Solving these fixed-point equations, and noting that the innovation covariance ${\bf R} \otimes [(1-\rho^2)k_1(\cdot)]$ gives $cov(e_1(s),e_2(s)) = (1-\rho^2)\,a\,k_1(\cdot)$, yields the required result. We have therefore found the auto-regressive representation of a bivariate spatio-temporal process with this covariance function which, as expected, is somewhat simplistic, since the propagation matrix has no off-diagonal terms. It would be interesting to see whether this can be extended to more complex multivariate spatio-temporal models; a possible avenue is the integro-difference equation and its Fourier representation, as seen in Storvik's paper.