GaussianProcess.jl

Kernels

GaussianProcess.exp_cov_fn — Function
exp_cov_fn(X; delta=0.0005, kwargs...)

Implementation of the exponential kernel, also known as the Ornstein–Uhlenbeck kernel.

Arguments:

  • X::Vector{Float} : position vector of the GP nodes.
  • delta::Float : strength of the white noise component added for numerical stability.
  • eta::Number : covariance matrix amplitude (passed through kwargs...).
  • l::Number : covariance matrix correlation length (passed through kwargs...).

Returns:

  • cov_mat::Matrix : GP's covariance matrix

Usage:

exp_cov_fn(x; eta=0.05, l=1.0)
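
As a usage sketch (assuming the kernel functions are exported by the package; otherwise qualify them as GaussianProcess.exp_cov_fn), a covariance matrix can be built from a 1D grid of node positions:

```julia
using GaussianProcess

# Hypothetical 1D grid of GP node positions.
x = collect(0.0:0.1:2.0)

# Exponential (Ornstein–Uhlenbeck) covariance matrix with amplitude eta and
# correlation length l; delta adds white noise on the diagonal for stability.
cov_mat = exp_cov_fn(x; eta=0.05, l=1.0)
size(cov_mat)   # (length(x), length(x))
```
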
GaussianProcess.sqexp_cov_fn — Function
sqexp_cov_fn(X; delta=0.0005, kwargs...)

Implementation of the squared exponential kernel.

Arguments:

  • X::Vector{Float} : position vector of the GP nodes.
  • delta::Float : strength of the white noise component added for numerical stability.
  • eta::Number : covariance matrix amplitude (passed through kwargs...).
  • l::Number : covariance matrix correlation length (passed through kwargs...).

Returns:

  • cov_mat::Matrix : GP's covariance matrix

Usage:

sqexp_cov_fn(x; eta=0.05, l=1.0)
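
A brief sketch of why the delta jitter matters: with the default delta the returned matrix should admit a Cholesky factorization, which downstream GP code typically relies on. The grid and hyperparameter values below are illustrative only.

```julia
using GaussianProcess, LinearAlgebra

# Hypothetical 1D grid of GP node positions.
x = collect(0.0:0.1:2.0)

# Squared exponential covariance; the delta term keeps the matrix numerically
# positive definite so that the Cholesky factorization below succeeds.
cov_mat = sqexp_cov_fn(x; delta=0.0005, eta=0.05, l=1.0)
L = cholesky(Symmetric(cov_mat)).L
```
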

GPs

GaussianProcess.marginal_lkl — Function
marginal_lkl(mean, kernel; data_cov=nothing)

Marginal likelihood implementation of a GP. This is equivalent to analytically marginalizing over the GP's nodes. Used when the GP is linearly related to the data.

Arguments:

  • mean::Vector{Number}: GP's mean.
  • kernel::Matrix{Number}: GP's covariance matrix.
  • data_cov::Matrix{Float}: data covariance matrix.

Returns:

  • GP::MvNormal: instance of MvNormal.
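
A hedged usage sketch: the grid, data vector, and diagonal data covariance below are made-up placeholders, and the kernel is built with sqexp_cov_fn from above.

```julia
using GaussianProcess, Distributions, LinearAlgebra

x = collect(0.0:0.2:2.0)
mean_vec = zeros(length(x))
kernel = sqexp_cov_fn(x; eta=0.05, l=1.0)

# Placeholder data vector and diagonal data covariance on the same grid.
data = 0.1 .* randn(length(x))
data_cov = Matrix(Diagonal(fill(0.01, length(x))))

# MvNormal with the GP nodes analytically marginalized out; its logpdf at the
# data is the marginal likelihood.
gp = marginal_lkl(mean_vec, kernel; data_cov=data_cov)
logL = logpdf(gp, data)
```
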
GaussianProcess.latent_GP — Function
latent_GP(mean, nodes, kernel)

Latent variable implementation of a GP. Rotates the GP nodes by the covariance matrix and adds them to the mean vector. Used when the GP is not linearly related to the data.

Arguments:

  • mean::Vector{Number}: GP's mean.
  • nodes::Vector{Number}: GP's nodes.
  • kernel::Matrix{Number}: GP's covariance matrix.

Returns:

  • GP::Vector{Number}: Gaussian process realization.
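
A minimal sketch of drawing one realization, assuming the nodes are standard-normal variates, as is usual for latent-variable GP parameterizations:

```julia
using GaussianProcess

x = collect(0.0:0.1:2.0)
mean_vec = zeros(length(x))
kernel = exp_cov_fn(x; eta=0.05, l=1.0)

# Standard-normal nodes; latent_GP rotates them by the covariance matrix and
# adds the mean, giving one GP realization on x.
nodes = randn(length(x))
gp_sample = latent_GP(mean_vec, nodes, kernel)
```
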
GaussianProcess.conditional — Function
conditional(old_X, new_X, latent_gp, cov_fn; kwargs...)

Given the GP's covariance matrix, applies a Wiener filter to transform the latent GP from its N-dimensional parameter space to the M-dimensional target parameter space.

Arguments:

  • old_X::Vector{Float}: position vector of the latent GP.
  • new_X::Vector{Float}: target position vector.
  • latent_gp::Vector{Number}: latent GP realization.
  • cov_fn::Function: covariance function used to generate the GP's kernel.
  • a=a, b=b... : covariance matrix hyperparameters forwarded to cov_fn.

Returns:

  • gp::Vector{Number}: GP in target space.
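
A sketch of moving a latent GP from a coarse grid onto a finer one. The grids are illustrative, and the keyword arguments are assumed to be forwarded to cov_fn, so they should match the hyperparameters used to build the latent GP.

```julia
using GaussianProcess

old_X = collect(0.0:0.25:2.0)   # coarse grid where the latent GP is defined
new_X = collect(0.0:0.05:2.0)   # finer target grid

kernel = sqexp_cov_fn(old_X; eta=0.05, l=1.0)
latent_gp = latent_GP(zeros(length(old_X)), randn(length(old_X)), kernel)

# Wiener-filter the latent realization onto the target positions.
gp_new = conditional(old_X, new_X, latent_gp, sqexp_cov_fn; eta=0.05, l=1.0)
```
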
GaussianProcess.posterior_predict — Function
posterior_predict(X_new, X_old, mean_new, mean_old, data, cov_fn;
                        data_cov=nothing)

Returns a function that transforms the GP from an old parameter space to a new parameter space, given a particular set of values for the GP's covariance matrix hyperparameters.

Arguments:

  • X_new::Vector{Float}: target position vector.
  • X_old::Vector{Float}: position vector of the latent GP.
  • mean_new::Vector{Number}: expected GP mean at the new position vector.
  • mean_old::Vector{Number}: GP's mean at the old position vector.
  • data::Vector{Float}: data vector.
  • cov_fn::Function: covariance function used to generate the GP's kernel.
  • data_cov::Matrix{Float}: data covariance matrix.

Returns:

  • predict(): function that predicts the GP in the target space. Arguments:
    • a=a, b=b... : covariance matrix hyperparameters forwarded to cov_fn.
    Returns:
    • GP::MvNormal: instance of MvNormal.
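
A sketch of building and evaluating the predictor. The data vector and data covariance are placeholders, and the hyperparameter names (eta, l) are assumed to be those accepted by the chosen cov_fn.

```julia
using GaussianProcess, Distributions, LinearAlgebra

X_old = collect(0.0:0.25:2.0)
X_new = collect(0.0:0.05:2.0)
mean_old = zeros(length(X_old))
mean_new = zeros(length(X_new))

# Placeholder data vector and diagonal data covariance on the old grid.
data = 0.1 .* randn(length(X_old))
data_cov = Matrix(Diagonal(fill(0.01, length(X_old))))

# Build the predictor once; each call with a set of covariance hyperparameters
# returns an MvNormal over the target positions.
predict = posterior_predict(X_new, X_old, mean_new, mean_old, data,
                            sqexp_cov_fn; data_cov=data_cov)
gp_new = predict(; eta=0.05, l=1.0)
mean(gp_new), cov(gp_new)
```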