
Tag: gaussian-process

Gaussian Process Regression: tune hyperparameters based on validation set

In the standard scikit-learn implementation of Gaussian-Process Regression (GPR), the kernel hyperparameters are chosen based on the training set. Is there an easy-to-use Python implementation of GPR where the kernel hyperparameters are chosen based on a separate validation set? Cross-validation would also be a nice alternative for finding suitable hyperparameters (that are …
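One possible answer, as a minimal sketch (the toy data and grid values below are illustrative assumptions, not from the question): scikit-learn's `GridSearchCV` can choose among candidate kernels by cross-validated score rather than by marginal likelihood, provided the estimator's internal optimizer is disabled with `optimizer=None`.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import GridSearchCV

# Toy data as a stand-in for the asker's dataset.
rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, (100, 1))
y = np.sin(X).ravel() + 0.1 * rng.randn(100)

# Candidate kernels; the grid of length-scales and noise levels is arbitrary.
param_grid = {
    "kernel": [
        RBF(length_scale=l) + WhiteKernel(noise_level=n)
        for l in [0.1, 1.0, 10.0]
        for n in [1e-3, 1e-1]
    ],
}

# optimizer=None disables the internal marginal-likelihood fit, so the
# kernel hyperparameters are selected purely by cross-validated score.
gpr = GaussianProcessRegressor(optimizer=None)
search = GridSearchCV(gpr, param_grid, cv=5, scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_["kernel"])
```

For a single held-out validation set rather than CV folds, the same search should work with `sklearn.model_selection.PredefinedSplit` passed as `cv`.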

GaussianProcessRegressor ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size

I am running the following code. The shape of my input is (19142, 21), and the dtypes are all float64. (Added in edit: X and y are pandas DataFrames; after `.values` they are each NumPy arrays.) I get the error: ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size. I can't imagine a dataset of 20,000 …
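For scale, a quick back-of-the-envelope sketch (not from the question itself): `GaussianProcessRegressor` materializes an n × n covariance matrix, so 19,142 samples already need roughly 3 GB for a single float64 copy, more than a 32-bit interpreter can address. The subset size below is an arbitrary illustration of one workaround.

```python
import numpy as np

n = 19142
# GaussianProcessRegressor builds (and factorizes) an n x n covariance matrix.
bytes_needed = n * n * np.dtype(np.float64).itemsize
print(f"{bytes_needed / 1e9:.1f} GB")  # ~2.9 GB, beyond a 32-bit address space

# One simple workaround: fit on a random subset of the rows.
rng = np.random.RandomState(0)
idx = rng.choice(n, size=2000, replace=False)
# gpr.fit(X[idx], y[idx])  # X, y as in the question (hypothetical here)
```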

Is there a way to define a ‘heterogeneous’ kernel design to incorporate linear operators into the regression for GPflow (or GPytorch/GPy/…)?

I’m trying to perform a GP regression with linear operators, as described for example in this paper by Särkkä: https://users.aalto.fi/~ssarkka/pub/spde.pdf In this example we can see from equation (8) that I need a different kernel function for each of the four covariance blocks (of training and test data) in the complete covariance matrix. This is definitely possible and valid, but I would …
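A minimal NumPy sketch of the block-covariance idea, assuming the linear operator is differentiation of a 1-D GP with an RBF kernel (the kernel choice, length-scale, and data are illustrative, not from the question): each block gets its own closed-form kernel, obtained by pushing the operator through the base kernel on one or both arguments.

```python
import numpy as np

l = 1.0  # RBF length-scale (illustrative value)

def k(a, b):
    """Base RBF kernel k(a, b) = exp(-(a - b)^2 / (2 l^2))."""
    d = a[:, None] - b[None, :]
    return np.exp(-d**2 / (2 * l**2))

def k_dy(a, b):
    """Cov(f(a), f'(b)) = d/db k(a, b): operator applied on one side."""
    d = a[:, None] - b[None, :]
    return (d / l**2) * np.exp(-d**2 / (2 * l**2))

def k_dd(a, b):
    """Cov(f'(a), f'(b)) = d^2/(da db) k(a, b): operator on both sides."""
    d = a[:, None] - b[None, :]
    return (1 / l**2 - d**2 / l**4) * np.exp(-d**2 / (2 * l**2))

# Train on observations of the *derivative* f'(x); predict f itself.
X_train = np.linspace(-3, 3, 20)
y_train = np.cos(X_train)            # if f = sin, then f' = cos
X_test = np.linspace(-3, 3, 100)

noise = 1e-6
K = k_dd(X_train, X_train) + noise * np.eye(len(X_train))  # operator-operator block
K_star = k_dy(X_test, X_train)                             # function-operator block
mean = K_star @ np.linalg.solve(K, y_train)                # posterior mean of f
```

A GPflow/GPyTorch version of the same construction would presumably need a custom multi-output kernel that dispatches to a different block depending on which outputs are being compared.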
