The data module is designed to load and prepare arbitrary data sets for use in machine learning algorithms.
heron.data.Data(targets, labels, target_sigma=None, label_sigma=None, target_names=None, label_names=None, test_targets=None, test_labels=None, test_size=0.05)
Bases: object
The data class is designed to hold non-timeseries data, and is capable of automatically selecting test data from the provided dataset.
Future development will include the ability to add pre-selected test and verification data to the object.
Methods

- add_data: Add new rows into the data object.
- calculate_normalisation: Calculate the offsets for the normalisation.
- Return a copy of this data object.
- denormalise: Reverse the normalise() method's effect on the data, and return it to the correct scaling.
- get_starting: Attempt to guess sensible starting values for the hyperparameters.
- Convert the index of a column to a column name.
- Convert the name of a column to a column index.
- normalise: Normalise a given array of data so that its values have a minimum at 0 and a maximum at 1.
add_data(self, targets, labels, target_sigma=None, label_sigma=None)
Add new rows into the data object.

Parameters:
- targets: an array of training targets, the "x" values used to train a machine learning algorithm.
- labels: an array of training labels, the "y" values representing the observations made at the target locations of the data set.
- target_sigma: either an array giving the uncertainty for each target point, or a float giving the uncertainty for each column of the targets.
- label_sigma: either an array giving the uncertainty for each label point, or a float giving the uncertainty for each column of the labels.
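As a usage illustration, a minimal sketch built on the signatures documented above; the array shapes and the sine-wave data are illustrative, not part of heron:

    import numpy as np
    from heron import data

    # Training "x" values and the "y" observations made at those locations.
    targets = np.linspace(0, 1, 100).reshape(-1, 1)
    labels = np.sin(2 * np.pi * targets)

    training = data.Data(targets, labels, test_size=0.05)

    # Fold two further observations into the existing object.
    new_targets = np.array([[1.1], [1.2]])
    training.add_data(new_targets, np.sin(2 * np.pi * new_targets))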
calculate_normalisation(self, data, name)
Calculate the offsets for the normalisation. We normally want to normalise the training data, and then be able to normalise and denormalise new inputs according to that.

Parameters:
- data: the array of data from which to calculate the normalisation constants.
- name: the name with which to label the constants.
denormalise(self, data, name)
Reverse the normalise() method's effect on the data, and return it to the correct scaling.

Parameters:
- data: the normalised data.
- name: the name of the stored scale factors used to normalise the data.

Returns: the denormalised data.
get_starting(self)
Attempt to guess sensible starting values for the hyperparameters.

Returns: an array of values for the various hyperparameters.
normalise(self, data, name)
Normalise a given array of data so that its values have a minimum at 0 and a maximum at 1. This improves the numerical behaviour of most data sets.

Parameters:
- data: the array of data to be normalised.
- name: the name of the normalisation to be applied, e.g. "training" or "label".

Returns:
- an array of normalised data;
- an array of scale factors, of which the first is the DC offset and the second the multiplicative factor.

Notes
The normalisation takes two steps:
1) subtract the "DC offset", the minimum of the data;
2) divide by the range of the data.
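These two steps can be written directly in numpy; the following standalone sketch mirrors the documented behaviour without using heron itself (the function names here are illustrative):

    import numpy as np

    def normalise(data):
        # Step 1: subtract the "DC offset", the column-wise minimum.
        dc = data.min(axis=0)
        # Step 2: divide by the range of the data.
        scale = data.max(axis=0) - dc
        return (data - dc) / scale, (dc, scale)

    def denormalise(normed, factors):
        # Invert the two steps, restoring the original scaling.
        dc, scale = factors
        return normed * scale + dc

    x = np.array([[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]])
    normed, factors = normalise(x)
    assert np.allclose(denormalise(normed, factors), x)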
heron.data.Timeseries(targets, labels, target_names=None, label_names=None, test_size=0.05)
Bases: object
This is a class designed to hold timeseries data for machine learning algorithms.
Timeseries data needs to be handled differently from other datasets: it is rarely advantageous to select individual points from a timeseries as test or verification data. Instead, the Timeseries class selects whole timeseries as the test and verification data.
Matched filtering functions. This code is designed for performing matched filtering using a Gaussian process surrogate model.
heron.filtering.Filter(gp, data, times)
Bases: object
This class builds the filtering machinery from a provided surrogate model and noisy data.
Methods

- matched_likelihood: Calculate the simple match of some data, given a template, and return its log-likelihood.
matched_likelihood(self, theta, psd=None, srate=16834)
Calculate the simple match of some data, given a template, and return its log-likelihood.

Parameters:
- an array of data which is believed to contain a signal;
- an array containing the locations at which the template should be evaluated.
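A hedged sketch of driving the class, assuming a trained heron surrogate already exists; the interpretation of theta as an array of template parameters is an assumption, and gp, strain, and times are placeholders:

    from heron import filtering

    # `gp` is a trained heron surrogate, `strain` the noisy data, and
    # `times` the sample locations (all assumed to exist already).
    filt = filtering.Filter(gp, strain, times)

    # theta: assumed to be the template parameters at which the
    # surrogate is evaluated before matching against the data.
    log_likelihood = filt.matched_likelihood(theta)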
heron.filtering.inner_product_noise(x, y, sigma, psd=None, srate=16834)
Calculate the noise-weighted inner product of two random arrays.

Parameters:
- x: the first data array.
- y: the second data array.
- sigma: the uncertainty by which to weight the inner product.
- psd: the power spectral density by which to weight the inner product.
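For intuition, one common definition of a noise-weighted inner product, written as a standalone numpy sketch; heron's exact weighting (and its use of the PSD) may differ:

    import numpy as np

    def inner_product_noise(x, y, sigma):
        # Weight each term by the inverse variance of the noise.
        # One common convention; not necessarily heron's exact formula.
        return np.sum(x * y / sigma**2)

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)
    y = rng.normal(size=1000)
    print(inner_product_noise(x, y, sigma=0.1))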
Kernel functions for GPs.
heron.kernels.ExponentialSineSq(period=1, width=15, ax=0)
Bases: heron.kernels.Kernel
An implementation of the exponential sine-squared kernel.
Methods

- distance: Calculate the squared distance to the point in parameter space.
- function: The functional form of the kernel inside the exponential.
- matrix: Produce a Gram matrix based on this kernel.
- gradient
- set_hyperparameters
function(self, data1, data2, period)
The functional form of the kernel inside the exponential.

hyper = [1, 1]

matrix(self, data1, data2)
Produce a Gram matrix based on this kernel.

Parameters:
- data1, data2: arrays of data (x).

Returns: a covariance matrix.

name = 'Exponential sine-squared kernel'

heron.kernels.Kernel
Bases: object
A generic factory for Kernel classes.
Methods

- distance: Calculate the squared distance to the point in parameter space.
- matrix: Produce a Gram matrix based on this kernel.
- set_hyperparameters
distance(self, data1, data2, hypers=None)
Calculate the squared distance to the point in parameter space.

matrix(self, data1, data2)
Produce a Gram matrix based on this kernel.

Parameters:
- data1, data2: arrays of data (x).

Returns: a covariance matrix.
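To illustrate what matrix computes, here is a standalone numpy sketch using the squared-exponential form k(x, x') = a exp(-|x - x'|^2 / (2 w^2)); the defaults mirror the SquaredExponential kernel below, but this is not heron's internal implementation:

    import numpy as np

    def se_kernel(x1, x2, amplitude=100.0, width=15.0):
        # Squared-exponential covariance between two points.
        return amplitude * np.exp(-np.sum((x1 - x2) ** 2) / (2 * width**2))

    def gram_matrix(data1, data2):
        # Evaluate the kernel on every pair of points from the two arrays.
        return np.array([[se_kernel(a, b) for b in data2] for a in data1])

    x = np.linspace(0, 10, 5).reshape(-1, 1)
    K = gram_matrix(x, x)   # a 5x5 covariance matrix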
name = 'Generic kernel'

ndim = 1

heron.kernels.Matern(order=1.5, amplitude=100, width=15)
Bases: heron.kernels.Kernel
An implementation of the Matern Kernel.
Methods

- distance: Calculate the squared distance to the point in parameter space.
- matrix: Produce a Gram matrix based on this kernel.
- function
- set_hyperparameters
name = 'Matern'

order = 1.5

heron.kernels.SquaredExponential(ndim=1, amplitude=100, width=15)
Bases: heron.kernels.Kernel
An implementation of the squared-exponential kernel.
Methods

- distance: Calculate the squared distance to the point in parameter space.
- function: The functional form of the kernel.
- gradient: Calculate the gradient of the kernel.
- matrix: Produce a Gram matrix based on this kernel.
- set_hyperparameters
flat_hyper

hyper = [1.0]

name = 'Squared exponential kernel'

Prior distributions for GP hyperpriors.

heron.priors.Normal(mean, std)
Bases: heron.priors.Prior
A normal prior probability distribution.
Methods

- Transform from unit normalisation to this prior.
- logp
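The transform summary suggests the usual unit-cube convention: map a coordinate u in [0, 1] to the prior's support through the inverse CDF. A sketch of that convention (an assumption about heron's behaviour, not taken from its source):

    from scipy.stats import norm

    def transform(u, mean=0.0, std=1.0):
        # Map a unit-interval coordinate to the normal prior via the
        # inverse CDF (percent-point function); assumed convention.
        return norm.ppf(u, loc=mean, scale=std)

    print(transform(0.5))   # 0.0: the centre of the unit cube maps to the mean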
Functions and classes for constructing regression surrogate models.
heron.regression.MultiTaskGP(training_data, kernel, tikh=1e-06, solver=<class 'george.solvers.hodlr.HODLRSolver'>, hyperpriors=None)
Bases: heron.regression.SingleTaskGP
An implementation of a co-trained set of Gaussian processes which share the same hyperparameters, but which model differing data. The training of these models is described in RW pp115–116.
A multi-task GPR is capable of acting as a surrogate to a many-to-many function, and is trained by making the assumption that all of the outputs from the function share a common correlation structure.
The principal difference compared to a single-task GP is the presence of multiple Gaussian processes, one to model each dimension of the output data.
Notes
The MultiTask GPR implementation is still a work in progress, and not all of the methods available in the SingleTask GPR work correctly here yet.
Methods

- active_learn: Actively train the Gaussian process from a set of provided labels and targets using some acquisition function.
- Add data to the Gaussian process.
- correlation: Calculate the correlation between the model and the test data.
- entropy: Return the entropy of the Gaussian process distribution.
- expected_improvement: Return the expected improvement at the design vector x in the model.
- get_hyperparameters: Return the kernel hyperparameters.
- grad_neg_ln_likelihood: Return the negative of the gradient of the log-likelihood for the GP when its hyperparameters have some specified value.
- hyperpriortransform: Return the true value in the desired hyperprior space, given an input from a unit-hypercube prior space.
- ln_likelihood: Provide a wrapper to the ln_likelihood functions of each component Gaussian process in the multi-task system.
- loghyperpriors: Calculate the log of the hyperprior distributions at a given point.
- neg_ln_likelihood: Return the negative of the log-likelihood; designed for use with minimisation algorithms.
- Calculate the negative of the expected improvement at a point x.
- prediction: Produce a prediction at a new point, or set of points.
- rmse: Calculate the root mean squared error of the whole model.
- save: Save the Gaussian process to a file which can be reloaded later.
- Set the values of the B matrix from a vector.
- set_hyperparameters: Set the hyperparameters of the kernel function on each Gaussian process.
- Calculate the value of the GP at the test targets.
- train: Train the Gaussian process by finding the optimal values for the kernel hyperparameters.
- Update the stored matrices.
get_hyperparameters(self)
Return the kernel hyperparameters. This returns the hyperparameters of only the first GP in the network; the others should all be the same, though there might be something to be said for checking this.

Returns: a list of the kernel hyperparameters.
ln_likelihood(self, p)
Provides a wrapper to the ln_likelihood functions for each component Gaussian process in the multi-task system.
Notes
This is implemented in a separate function because of the mild peculiarities of how the pickle module needs to serialise functions, which means that instancemethods (which this would become) can’t be serialised.
prediction(self, new_datum)
Produce a prediction at a new point, or set of points.

Parameters:
- new_datum: the coordinates of the new point(s) at which the GPR model should be evaluated.

Returns:
- the mean values of the function drawn from the Gaussian process;
- the variance values for the function drawn from the GP.
set_hyperparameters(self, hypers)
Set the hyperparameters of the kernel function on each Gaussian process.
train(self, method='MCMC', metric='loglikelihood', sampler='ensemble', **kwargs)
Train the Gaussian process by finding the optimal values for the kernel hyperparameters.

Parameters:
- method: the method to be employed to calculate the hyperparameters.
- metric: the metric which should be used to assess the model.
- hyperpriors: the hyperprior distributions for the hyperparameters. Defaults to None, in which case the prior is uniform over all real numbers.
heron.regression.Regressor(training_data, kernel, tikh=1e-06, solver=<class 'george.solvers.hodlr.HODLRSolver'>, hyperpriors=None, **kwargs)
Bases: heron.regression.SingleTaskGP
Methods

The method summaries are identical to those of SingleTaskGP, documented below.
heron.regression.SingleTaskGP(training_data, kernel, tikh=1e-06, solver=<class 'george.solvers.hodlr.HODLRSolver'>, hyperpriors=None, **kwargs)
Bases: object

An implementation of a single-task Gaussian process regressor: a GPR which is capable of acting as a surrogate to a many-to-one function. The SingleTask GPR is the fundamental building block of the MultiTask GPR, which consists of multiple single-task GPs trained in tandem (but which do not share correlation information).
Methods

- active_learn: Actively train the Gaussian process from a set of provided labels and targets using some acquisition function.
- Add data to the Gaussian process.
- correlation: Calculate the correlation between the model and the test data.
- entropy: Return the entropy of the Gaussian process distribution.
- expected_improvement: Return the expected improvement at the design vector x in the model.
- get_hyperparameters: Return the kernel hyperparameters.
- grad_neg_ln_likelihood: Return the negative of the gradient of the log-likelihood for the GP when its hyperparameters have some specified value.
- hyperpriortransform: Return the true value in the desired hyperprior space, given an input from a unit-hypercube prior space.
- ln_likelihood: Provide a convenient wrapper to the log-likelihood function.
- loghyperpriors: Calculate the log of the hyperprior distributions at a given point.
- neg_ln_likelihood: Return the negative of the log-likelihood; designed for use with minimisation algorithms.
- Calculate the negative of the expected improvement at a point x.
- prediction: Produce a prediction at a new point, or set of points.
- rmse: Calculate the root mean squared error of the whole model.
- save: Save the Gaussian process to a file which can be reloaded later.
- Set the values of the B matrix from a vector.
- set_hyperparameters: Set the hyperparameters of the kernel function.
- Calculate the value of the GP at the test targets.
- train: Train the Gaussian process by finding the optimal values for the kernel hyperparameters.
- Update the stored matrices.
active_learn(self, afunction, x, y, iters=1, afunc_args={})
Actively train the Gaussian process from a set of provided labels and targets using some acquisition function; a usage sketch follows this entry.

Parameters:
- afunction: the acquisition function.
- x: the input labels of the data; this can be a multi-dimensional array.
- y: the input targets of the data; this can only be a single-dimensional array at present.
- iters: the number of times to iterate the learning process; equivalently, the number of training points to digest.
- afunc_args: a dictionary of arguments for the acquisition function. Optional.
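The usage sketch mentioned above pairs active_learn with the model's own expected_improvement; whether that method can be passed directly as afunction is an assumption, and model, pool_x, and pool_y are placeholders for an existing SingleTaskGP and a pool of candidate training data:

    # `model` is an existing SingleTaskGP; `pool_x` and `pool_y` are the
    # candidate training points the acquisition function chooses from.
    model.active_learn(afunction=model.expected_improvement,
                       x=pool_x, y=pool_y, iters=5)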
correlation(self)
Calculate the correlation between the model and the test data.

Returns: the correlation, squared.
entropy(self)
Return the entropy of the Gaussian process distribution. This can be calculated directly from the covariance matrix, making it a quick calculation to perform.

Returns: the differential entropy of the GP.
expected_improvement(self, x)
Return the expected improvement at the design vector x in the model.

Parameters:
- x: a design vector in real-world coordinates.

Returns: the expected improvement value at the point x in the model.
grad_neg_ln_likelihood(self, p)
Return the negative of the gradient of the log-likelihood for the GP when its hyperparameters have some specified value.

Parameters:
- p: an array of the hyperparameters at which the model is to be evaluated.

Returns: the gradient of the negative log-likelihood for the Gaussian process.
hyperpriortransform(self, p)
Return the true value in the desired hyperprior space, given an input from a unit-hypercube prior space.

Parameters:
- p: the point in the unit hypercube space.

km = None

ln_likelihood(self, p)
Provides a convenient wrapper to the log-likelihood function.
Notes
This is implemented in a separate function because of the mild peculiarities of how the pickle module needs to serialise functions, which means that instancemethods (which this would become) can’t be serialised.
loghyperpriors(self, p)
Calculate the log of the hyperprior distributions at a given point.

Parameters:
- p: the location to be tested.
neg_ln_likelihood(self, p)
Return the negative of the log-likelihood; designed for use with minimisation algorithms.

Parameters:
- p: an array of the hyperparameters at which the model is to be evaluated.

Returns: the negative of the log-likelihood for the Gaussian process.
prediction(self, new_datum, normalised=False)
Produce a prediction at a new point, or set of points.

Parameters:
- new_datum: the coordinates of the new point(s) at which the GPR model should be evaluated.
- normalised: a flag to indicate whether the input is already normalised (which might be the case if you are trying to sample the parameter space efficiently). If False, the input will be normalised to the same range as the training data.

Returns:
- the mean values of the function drawn from the Gaussian process;
- the variance values for the function drawn from the GP.
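An end-to-end sketch assembled purely from the interfaces documented on this page; the toy sine data is illustrative, and unpacking the result into a mean and a variance follows the two documented return values:

    import numpy as np
    from heron import data, kernels, regression

    x = np.linspace(0, 1, 50).reshape(-1, 1)
    y = np.sin(2 * np.pi * x)

    training = data.Data(x, y, test_size=0.05)
    kernel = kernels.SquaredExponential(ndim=1, amplitude=100, width=15)
    model = regression.SingleTaskGP(training, kernel)
    model.train()

    # The documented return values: a mean and a variance for the
    # function drawn from the GP.
    mean, variance = model.prediction(np.array([[0.5]]))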
rmse(self)
Calculate the root mean squared error of the whole model.

Returns: the root mean squared error.
save(self, filename)
Save the Gaussian process to a file which can be reloaded later.

Parameters:
- filename: the location at which the Gaussian process should be written.
Notes
In the current implementation the serialisation of the GP is performed by the python pickle library, which isn’t guaranteed to be binary-compatible with all machines.
train(self, method='MCMC', metric='loglikelihood', sampler='ensemble', **kwargs)
Train the Gaussian process by finding the optimal values for the kernel hyperparameters.

Parameters:
- method: the method to be employed to calculate the hyperparameters.
- metric: the metric which should be used to assess the model.
- hyperpriors: the hyperprior distributions for the hyperparameters. Defaults to None, in which case the prior is uniform over all real numbers.
Code to simplify sampling the Gaussian process.
heron.sampling.draw_samples(gp, **kwargs)
Construct an array to pass to the Gaussian process in order to pull a number of samples out of a high-dimensional GP.

Parameters:
- gp: the Gaussian process object.
- **kwargs: the ranges for each parameter. In the format parameter=0.9 the axis will be held constant; alternatively, a list or tuple can be passed to form a range (using the same arrangement of arguments as numpy's linspace function), so parameter=(0.0, 1.5, 100) will produce an axis 100 samples long.
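A sketch of the documented keyword convention; the parameter names mass and spin are hypothetical stand-ins for whatever the model was trained on, and model is an existing heron GP:

    from heron import sampling

    # Sweep `mass` over 100 linspace-style samples while holding
    # `spin` constant at 0.0.
    samples = sampling.draw_samples(model, mass=(0.0, 1.5, 100), spin=0.0)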
These are functions designed to be used for training a Gaussian process made using heron.
heron.training.cross_validation(p, gp)
Calculate the cross-validation factor between the training set and the test set.

Parameters:
- gp: the Gaussian process object.
- p: the hyperparameters for the Gaussian process kernel. Defaults to None, which causes the current values of the hyperparameters to be used.

Returns: the cross-validation of the test data and the model.
heron.training.ln_likelihood(p, gp)
Return the log-likelihood of the Gaussian process, which can be used to learn the hyperparameters of the GP.

Parameters:
- gp: the Gaussian process to be evaluated.
- p: an array of the hyperparameters at which the model is to be evaluated.

Returns: the log-likelihood for the Gaussian process.
Notes
TODO Add the ability to specify the priors on each hyperparameter.
heron.training.run_sampler(sampler, initial, iterations)
Run the MCMC sampler for some number of iterations, displaying a progress bar so that its progress can be tracked.
heron.training.run_training_map(gp, metric='loglikelihood', repeats=20, **kwargs)
Find the maximum a posteriori training values for the Gaussian process.

Parameters:
- gp: the Gaussian process object.
- metric: the metric to be used for training. Defaults to log-likelihood ('loglikelihood'), which is the more traditionally Bayesian choice, but cross-validation ('cv') is also available.
- repeats: the number of times the optimisation should be repeated, to reduce the chance of the optimiser settling on a local rather than the global maximum of the log-likelihood.
Notes
The current implementation has no way of specifying the optimisation algorithm.
TODO Add an option to change the optimisation algorithm.
heron.training.run_training_mcmc(gp, walkers=200, burn=500, samples=1000, metric='loglikelihood', samplertype='ensemble')
Train a Gaussian process using an MCMC process to find the maximum evidence.

Parameters:
- gp: the Gaussian process object.
- walkers: the number of MCMC walkers.
- burn: the number of samples to be used to evaluate the burn-in for the MCMC.
- samples: the number of samples to be used for the production sampling.
- metric: the metric to be used for training. Defaults to log-likelihood ('loglikelihood'), which is the more traditionally Bayesian choice, but cross-validation ('cv') is also available.
- samplertype: the sampler to be used on the model.

Returns:
- the log probabilities;
- the array of samples from the sampling chains.
Notes
At present the algorithm assigns the median of the samples to the value of the kernel vector; this may not ultimately be the best way to do this, and so it should be possible to specify the desired value to be used from the distribution.
TODO Add ability to change median to other statistics for training
heron.training.run_training_nested(gp, method='multi', maxiter=None, npoints=1000)
Train the Gaussian process model using nested sampling.

Parameters:
- gp: the Gaussian process object.
- method: the nested sampling method to be used.
- maxiter: the maximum number of iterations which should be carried out on the marginal likelihood. Optional.
- npoints: the number of live points to use in the optimisation.
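Finally, a hedged sketch of the three training entry points documented above, given a model gp such as the SingleTaskGP built earlier; the tuple unpacking for run_training_mcmc follows its two documented return values:

    from heron import training

    # Maximum a posteriori optimisation, repeated to dodge local maxima.
    training.run_training_map(gp, metric='loglikelihood', repeats=20)

    # MCMC training; returns the log probabilities and the sample chains.
    probs, chains = training.run_training_mcmc(gp, walkers=200, burn=500,
                                               samples=1000)

    # Nested sampling with 1000 live points.
    training.run_training_nested(gp, method='multi', npoints=1000)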