I’m trying to perform GP regression with linear operators, as described for example in this paper by Särkkä: https://users.aalto.fi/~ssarkka/pub/spde.pdf From equation (8) there we can see that I need a different kernel function for each of the four covariance blocks (between training and test data) in the complete covariance matrix.
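For readers without the paper at hand, the block structure in question looks schematically like this (my notation, not necessarily the paper’s): for observations made through a linear operator $\mathcal{L}$ applied to a GP prior $f \sim \mathcal{GP}(0, k)$, the joint distribution of the observations $\mathbf{y}$ and the test values $\mathbf{f}_*$ is

$$
\begin{pmatrix} \mathbf{y} \\ \mathbf{f}_* \end{pmatrix}
\sim \mathcal{N}\!\left(\mathbf{0},\;
\begin{pmatrix}
\mathcal{L}\, k(X, X)\, \mathcal{L}' + \sigma^2 I & \mathcal{L}\, k(X, X_*) \\
k(X_*, X)\, \mathcal{L}' & k(X_*, X_*)
\end{pmatrix}\right),
$$

where $\mathcal{L}$ acts on the first argument of $k$ and $\mathcal{L}'$ on the second, so each of the four blocks needs its own analytically derived covariance expression.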
This is definitely possible and valid, but I would like to include it in a kernel definition in (preferably) GPflow, or GPyTorch, GPy, or the like.
However, in the documentation for kernel design in GPflow, the only possibility is to define a covariance function that acts on all covariance blocks. In principle, the method above should be straightforward to add myself (the kernel function expressions can be derived analytically), but I don’t see any way of incorporating the ‘heterogeneous’ kernel functions into the regression or kernel classes. I tried consulting other packages such as GPyTorch and GPy, but again, their kernel design does not seem to allow this.
Maybe I’m missing something here, or maybe I’m not familiar enough with the underlying implementation to assess this, but if someone has done this before, or sees the (what should be reasonably straightforward?) implementation possibility, I would be happy to find out.
Thank you very much in advance for your answer!
Kind regards
Answer
This should be reasonably straightforward, though it requires building a custom kernel. Essentially, you need a kernel that knows, for each input, which linear operator produced the corresponding output (the identity operator for a plain function observation, an integral observation, a derivative observation, etc.). You can achieve this by including an extra column in your input matrix X that labels the operator type, similar to how it’s done for the `gpflow.kernels.Coregion` kernel (see this notebook). You would then need to define a new kernel with `K` and `K_diag` methods that, for each linear operator type, find the corresponding rows in the input matrix and pass them to the appropriate covariance function (using `tf.dynamic_partition` and `tf.dynamic_stitch`; these are used in a very similar way in GPflow’s `SwitchedLikelihood` class).
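To make this concrete, here is a minimal sketch in GPflow 2.x. For brevity it evaluates all four block covariances densely and selects entries with `tf.where`, instead of the more efficient `tf.dynamic_partition`/`tf.dynamic_stitch` route described above; the class and block names (`OperatorKernel`, `k_ff`, `k_fL`, `k_Lf`, `k_LL`) are my own, and the example blocks are the hand-derived covariances of a 1-D RBF kernel under the derivative operator L = d/dx. Treat it as a starting point, not a finished implementation:

```python
import numpy as np
import tensorflow as tf
import gpflow


class OperatorKernel(gpflow.kernels.Kernel):
    """Dispatch to one of four block covariances based on a label stored
    in the last input column: 0 = direct observation of f, 1 = observation
    of Lf through the linear operator."""

    def __init__(self, k_ff, k_fL, k_Lf, k_LL):
        super().__init__()
        # k_ff = Cov[f, f], k_fL = Cov[f, Lf],
        # k_Lf = Cov[Lf, f], k_LL = Cov[Lf, Lf]
        self.blocks = (k_ff, k_fL, k_Lf, k_LL)

    def K(self, X, X2=None):
        if X2 is None:
            X2 = X
        x, a = X[:, :-1], X[:, -1:] > 0.5       # inputs, row labels (N, 1)
        x2, b = X2[:, :-1], X2[:, -1:] > 0.5
        b = tf.transpose(b)                     # column labels (1, M)
        k_ff, k_fL, k_Lf, k_LL = self.blocks
        # Evaluate every block on all pairs, then pick the right entry per
        # label pair. Simple but wasteful: tf.dynamic_partition and
        # tf.dynamic_stitch (as in SwitchedLikelihood) avoid the redundancy.
        return tf.where(a,
                        tf.where(b, k_LL(x, x2), k_Lf(x, x2)),
                        tf.where(b, k_fL(x, x2), k_ff(x, x2)))

    def K_diag(self, X):
        x, a = X[:, :-1], X[:, -1] > 0.5
        k_ff, _, _, k_LL = self.blocks
        return tf.where(a,
                        tf.linalg.diag_part(k_LL(x, x)),
                        tf.linalg.diag_part(k_ff(x, x)))


# Example blocks: 1-D RBF kernel k(x, x') = v * exp(-(x - x')^2 / (2 l^2))
# and its derivatives under L = d/dx (derived by hand).
ell, var = 1.0, 1.0

def rbf(x, x2):
    r = x - tf.transpose(x2)                    # pairwise differences (N, M)
    return var * tf.exp(-0.5 * r ** 2 / ell ** 2)

k_ff = rbf                                                     # Cov[f, f]
k_fL = lambda x, x2: rbf(x, x2) * (x - tf.transpose(x2)) / ell ** 2
k_Lf = lambda x, x2: -k_fL(x, x2)                              # Cov[Lf, f]
k_LL = lambda x, x2: rbf(x, x2) * (
    1.0 / ell ** 2 - (x - tf.transpose(x2)) ** 2 / ell ** 4)   # Cov[Lf, Lf]

# Last column is the operator label: the third point is a derivative obs.
X = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 1.0]])
Y = np.array([[0.1], [0.4], [0.3]])
model = gpflow.models.GPR((X, Y),
                          kernel=OperatorKernel(k_ff, k_fL, k_Lf, k_LL))

# Test points also carry a label, so you can predict f or Lf directly.
mean, cov = model.predict_f(np.array([[0.25, 0.0]]))
```

A nice side effect of the label-column design is that the test inputs use the same mechanism, so predicting the latent function, its derivative, or a mix of both is just a matter of setting the labels on the prediction points.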
The full implementation would probably take half a day or so, which is beyond what I can do here, but I hope this is a useful starting point. You’re very welcome to join the GPflow Slack (invite link in the GPflow README) and discuss it in more detail there!