# Sampling sparse Gaussian Markov Random Fields in R with the Matrix package

Over the past few months I’ve been involved in a fun project with Andrew Zammit Mangion and Noel Cressie at the University of Wollongong. This project involves inference over a large spatial field using a model with a latent space distributed as a multivariate Gaussian with a large and sparse precision matrix (it also involves me learning a lot from Andrew and Noel!). This is my first time working with sparse precision matrices, so I’ve been discovering many new things: what working in precision-space rather than covariance-space means, and how to draw samples from such models even when the number of data points is large. In this post I share a little of what I’ve learned, along with R code. A lot of what follows is derived from the excellent book on this topic by Rue and Held.

Let’s write

\[
\tilde{y} \sim N\!\left(\tilde{\mu},\, Q^{-1}\right),
\]

where \(\tilde{y}\) and \(\tilde{\mu}\) are \(n\)-element column vectors, and \(Q\) is a sparse \(n \times n\) precision matrix. The \(ij\)th entry of \(Q\), \(Q_{ij}\), has a simple interpretation:

\[
\mathrm{Corr}\!\left(y_i, y_j \mid \tilde{y}_{-ij}\right) = -\frac{Q_{ij}}{\sqrt{Q_{ii} Q_{jj}}},
\]

that is, the correlation between \(y_i\) and \(y_j\), conditional on all the other entries in \(\tilde{y}\), is proportional to the negative of the \(ij\)th entry of the precision matrix. In particular, if \(Q_{ij} = 0\), then \(y_i\) and \(y_j\) are independent given all the other entries, and the converse is also true. This is what leads to the interpretation of these systems as Gaussian Markov Random Fields (GMRFs): the ‘Gaussian’ part is (hopefully) obvious, while the ‘MRF’ part arises by constructing a graphical model with nodes labelled from 1 to \(n\), interpreting each non-zero entry \(Q_{ij}\) as an edge between nodes \(i\) and \(j\) (so they are neighbours), and noting that, conditional on its neighbours, a node is independent of its non-neighbours: the Markov property.

A really simple example of a model that can be cast this way is an AR(1) process, where \(y_t = \rho y_{t - 1} + \epsilon_t\), with the \(\epsilon_t\) i.i.d. standard normal. For this model, the conditional correlations are non-zero for adjacent entries and zero otherwise. The precision matrix is

\[
Q = \begin{pmatrix}
1 & -\rho & & & \\
-\rho & 1 + \rho^2 & -\rho & & \\
 & -\rho & 1 + \rho^2 & \ddots & \\
 & & \ddots & \ddots & -\rho \\
 & & & -\rho & 1
\end{pmatrix},
\]

which is very sparse for large \(n\): it is tridiagonal, with fewer than \(3n\) non-zero entries. The covariance matrix, by contrast, is not at all sparse: its \(ij\)th entry is \(\rho^{|i - j|} / (1 - \rho^2)\), which is never zero.

So that’s the interpretation of \(Q\), and one reason why working in precision space is valuable. How to sample \(\tilde{y}\)? The usual way, the one I was taught by my PhD supervisor, is to construct the Cholesky decomposition, \(Q = L L^T\), draw \(\tilde{z} \sim N(0, I_n)\), and then set \(\tilde{y} = \tilde{\mu} + L^{-T} \tilde{z}\). This works because, as Wikipedia tells us, an affine transformation \(\tilde{c} + B\tilde{x}\) with \(\tilde{x} \sim N(\tilde{a}, \Sigma)\) has distribution \(N(\tilde{c} + B\tilde{a}, B \Sigma B^T)\), and in our case this means that \(\tilde{y}\) has mean \(\tilde{\mu}\), and covariance

\[
L^{-T} I_n \left(L^{-T}\right)^T = L^{-T} L^{-1} = \left(L L^T\right)^{-1} = Q^{-1}.
\]

It turns out that there are Cholesky decomposition algorithms that are efficient for sparse matrices, but there is a catch. Consider the following sparse \(100 \times 100\) precision matrix with just 442 non-zero entries:

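A matrix with this flavour can be built as follows (a sketch: this random construction via `rsparsematrix` is an arbitrary illustrative choice, so its non-zero count will differ from the 442 above):

```r
library(Matrix)

set.seed(1)
n <- 100
# Random sparse symmetric pattern; adding a dominant diagonal makes the
# matrix positive definite, hence a valid precision matrix.
A <- rsparsematrix(n, n, density = 0.02, symmetric = TRUE)
Q_100 <- forceSymmetric(A + Diagonal(n, rowSums(abs(A)) + 1))
length(Q_100@i)  # number of stored non-zero entries
```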
where the length of the `@i` entry gives the number of non-zero values. Here I am using the Matrix package, which is very well engineered and has tons of useful sparse matrix classes and functions. We can use `image` to visualise the sparsity pattern:

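A sketch, using an arbitrary sparse symmetric positive-definite stand-in matrix so the snippet runs on its own:

```r
library(Matrix)

set.seed(1)
# An arbitrary sparse symmetric positive-definite stand-in matrix.
A <- rsparsematrix(100, 100, density = 0.02, symmetric = TRUE)
Q_100 <- forceSymmetric(A + Diagonal(100, rowSums(abs(A)) + 1))
image(Q_100)  # lattice-based plot of the sparsity pattern
```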
The direct Cholesky decomposition of this matrix has 2403 non-zero entries, around six times as many as the original matrix. In general there is no guarantee that the Cholesky factor of a sparse matrix will be particularly sparse; the extra non-zero entries are known as fill-in.

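A sketch with a stand-in matrix (an arbitrary construction of mine, so the non-zero counts will differ from those quoted here):

```r
library(Matrix)

set.seed(1)
A <- rsparsematrix(100, 100, density = 0.02, symmetric = TRUE)
Q_100 <- forceSymmetric(A + Diagonal(100, rowSums(abs(A)) + 1))
# pivot = FALSE factorises Q in its given ordering, with no permutation.
L_direct <- chol(Q_100, pivot = FALSE)  # upper-triangular sparse factor
length(L_direct@x)  # non-zeros in the factor: note the fill-in
```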
However, all is not lost. If one permutes the indices of \(\tilde{y}\), the precision matrix of the permuted vector is just \(Q\) with its rows and columns permuted the same way. It turns out that this can often be done in such a way that the Cholesky decomposition of the permuted precision matrix is much sparser than that of the original matrix. Algorithms that search for such fill-reducing permutations include the minimum degree family, but the underlying problem is NP-hard, so finding an optimal permutation is infeasible in general. Still, fast approximate algorithms exist, work well in practice, and are available in the Matrix package:

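For example (a sketch with a stand-in matrix; `LDL = FALSE` requests a plain \(L L^T\) factorisation rather than CHOLMOD's default \(L D L^T\) form):

```r
library(Matrix)

set.seed(1)
A <- rsparsematrix(100, 100, density = 0.02, symmetric = TRUE)
Q_100 <- forceSymmetric(A + Diagonal(100, rowSums(abs(A)) + 1))
# perm = TRUE lets CHOLMOD pick a fill-reducing permutation.
chol_Q_100_permuted <- Cholesky(Q_100, perm = TRUE, LDL = FALSE)
L_perm <- expand(chol_Q_100_permuted)$L  # the sparse lower-triangular factor
length(L_perm@x)  # typically far less fill-in than without the permutation
```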
The Cholesky factor of the permuted system is only about twice as dense as the precision matrix, with 932 non-zero entries versus 442 in \(Q\). Mathematically, this permuted decomposition can be written as

\[
Q = P^T L L^T P,
\]
where \(P\) is a permutation matrix (for which, handily, \(P^{-1} = P^T\)). A simple rearrangement gives \( L L^T = P Q P^T \), showing that \(L\) factorises the permuted \(Q\). In the implementation of the `Cholesky` function in the Matrix package, the matrix \(P\) is found using heuristics in the CHOLMOD library, which seem to do a good job most of the time. Now, finally, returning to the problem of sampling using a sparse precision matrix, we can again draw \(\tilde{z} \sim N(0, I_n)\), and then set \(\tilde{y} = \tilde{\mu} + P^T L^{-T} \tilde{z}\), which works because the resulting samples have the covariance matrix

\[
P^T L^{-T} \left( P^T L^{-T} \right)^T = P^T \left( L L^T \right)^{-1} P = P^T P\, Q^{-1} P^T P = Q^{-1},
\]
exactly as desired. In R this can be implemented (assuming \(\tilde{\mu} = 0\)) as

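A sketch (the stand-in matrix is an arbitrary construction of mine; the two `solve` calls apply \(L^{-T}\) and then \(P^T\)):

```r
library(Matrix)

set.seed(1)
A <- rsparsematrix(100, 100, density = 0.02, symmetric = TRUE)
Q_100 <- forceSymmetric(A + Diagonal(100, rowSums(abs(A)) + 1))
chol_Q_100_permuted <- Cholesky(Q_100, perm = TRUE, LDL = FALSE)

z <- rnorm(100)
# y = P' L^{-T} z: first solve L' w = z, then apply P' (system = "Pt").
w <- solve(chol_Q_100_permuted, z, system = "Lt")
y <- solve(chol_Q_100_permuted, w, system = "Pt")
```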
and the sample can be visualised as

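One way to do this (a sketch: arranging the length-100 sample on a \(10 \times 10\) grid is purely an illustrative choice, as is the stand-in matrix):

```r
library(Matrix)

set.seed(1)
A <- rsparsematrix(100, 100, density = 0.02, symmetric = TRUE)
Q_100 <- forceSymmetric(A + Diagonal(100, rowSums(abs(A)) + 1))
ch <- Cholesky(Q_100, perm = TRUE, LDL = FALSE)
y <- solve(ch, solve(ch, rnorm(100), system = "Lt"), system = "Pt")
# Reshape into a 10 x 10 grid and plot with base graphics.
image(matrix(as.vector(y), nrow = 10))
```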
which shows a fair amount of correlated structure (at least it does to me).

As a programming aside, the `chol_Q_100_permuted` object produced by the `Cholesky` function is an S4 object of class `CHMfactor` that contains both the permutation matrix \(P\) and the decomposition \(L\). You can extract these like so:

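A sketch using `expand` (the stand-in matrix is arbitrary):

```r
library(Matrix)

set.seed(1)
A <- rsparsematrix(100, 100, density = 0.02, symmetric = TRUE)
Q_100 <- forceSymmetric(A + Diagonal(100, rowSums(abs(A)) + 1))
chol_Q_100_permuted <- Cholesky(Q_100, perm = TRUE, LDL = FALSE)

# expand() returns the pieces of the factorisation as sparse matrices.
pieces <- expand(chol_Q_100_permuted)
P <- pieces$P  # the permutation matrix
L <- pieces$L  # the lower-triangular Cholesky factor
```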
and manipulate them directly, but it’s generally more efficient to use the `solve` method associated with the `CHMfactor` class:

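For example (a sketch; the `system` argument selects which triangular or permutation system to solve):

```r
library(Matrix)

set.seed(1)
A <- rsparsematrix(100, 100, density = 0.02, symmetric = TRUE)
Q_100 <- forceSymmetric(A + Diagonal(100, rowSums(abs(A)) + 1))
ch <- Cholesky(Q_100, perm = TRUE, LDL = FALSE)
b <- rnorm(100)

x  <- solve(ch, b)                 # solves Q x = b (system = "A", the default)
u  <- solve(ch, b, system = "L")   # solves L u = b
v  <- solve(ch, b, system = "Lt")  # solves L' v = b
Pb <- solve(ch, b, system = "Pt")  # applies the permutation: P' b
```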
These call fast CHOLMOD routines and are generally quicker than extracting and multiplying the raw factors yourself; as a bonus, they avoid the extra copying associated with that extraction.