Commit c7eb61a9 authored by Konstantinos Chatzilygeroudis

Merge remote-tracking branch 'origin' into improve_doc

parents 28233702 4dec493f
limbo [![Build Status](https://travis-ci.org/resibots/limbo.svg?branch=master)](https://travis-ci.org/resibots/limbo)
=====
- A lightweight framework for Bayesian and model-based optimization of black-box functions (C++11).
+ A lightweight framework for Bayesian optimization of black-box functions (C++11) and, more generally, for data-efficient optimization. It is designed to be very fast and very flexible.
Documentation
-------------
Documentation is available here: http://www.resibots.eu/limbo
Authors
------
- - Antoine Cully (Imperial College): http://www.isir.upmc.fr/?op=view_profil&lang=fr&id=278
- - Jean-Baptiste Mouret (Inria): http://pages.isir.upmc.fr/~mouret/website/
+ - Antoine Cully (Imperial College): http://www.antoinecully.com
+ - Jean-Baptiste Mouret (Inria): http://members.loria.fr/JBMouret
+ - Konstantinos Chatzilygeroudis (Inria)
+ - Federico Allocati (Inria)
Limbo is partly funded by the ResiBots ERC Project (http://www.resibots.eu).
Main features
-------------
- - Bayesian optimization based on Gaussian processes
- - Generic framework (template-based), which allows easy customization for testing original ideas
- - Can exploit multicore computers
- - Experimental support for some multi-objective algorithms
- Main references
- ---------------
- - **General introduction:** Brochu, E., Cora, V. M., & De Freitas, N. (2010). A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. *arXiv preprint arXiv:1012.2599*.
- - **Gaussian processes (GP):** Rasmussen, C. E., & Williams, C. K. I. (2006). *Gaussian Processes for Machine Learning*. MIT Press.
- - **Optimizing hyperparameters:** Blum, M., & Riedmiller, M. (2013). Optimization of Gaussian Process Hyperparameters using Rprop. In *European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning*.
- - **ParEGO (multi-objective optimization):** Knowles, J. (2006). ParEGO: A hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. *IEEE Transactions on Evolutionary Computation*, 10(1), 50-66.
- - **CMA-ES (inner optimization):** Auger, A., & Hansen, N. (2005). A restart CMA evolution strategy with increasing population size. In *Proceedings of the 2005 IEEE Congress on Evolutionary Computation* (Vol. 2, pp. 1769-1776). IEEE.
- - **Expected hypervolume improvement (multi-objective optimization):** Hupkens, I., Emmerich, M. T. M., & Deutz, A. H. (2014). Faster Computation of Expected Hypervolume Improvement. *arXiv preprint arXiv:1408.7114*. http://arxiv.org/abs/1408.7114
- Other libraries
- ---------------
- Limbo is a framework for our research that is deliberately kept small. It is designed to be fast and flexible, but it does not aim to cover every possible use case of Bayesian optimization.
- If you need a more full-featured library, check:
- - BayesOpt: http://rmcantin.bitbucket.org/html/
- - libGP (no optimization): https://github.com/mblum/libgp
+ - Implementation of the classic algorithms (Bayesian optimization, many kernels, likelihood maximization, etc.)
+ - Modern C++11
+ - Generic framework (template-based / policy-based design), which allows for easy customization to test novel ideas
+ - Experimental framework that allows users to easily test variants of experiments, compare treatments, submit jobs to clusters (OAR scheduler), etc.
+ - High performance (in particular, Limbo can exploit multicore computers via Intel TBB and vectorize some operations via Eigen3)
+ - Purposely small, to be easily maintained and quickly understood
+ Scientific articles that use Limbo
+ ----------------------------------
+ - Cully, A., Clune, J., Tarapore, D., & Mouret, J.-B. (2015). Robots that can adapt like animals. *Nature*, 521(7553), 503-507.
+ Research projects that use Limbo
+ --------------------------------
+ - ResiBots. ERC Starting Grant: http://www.resibots.eu/
+ - PAL. H2020 EU project: http://www.pal4u.eu/
\ No newline at end of file
......@@ -414,7 +414,7 @@ Template
template <typename Params>
struct Samples : public StatBase<Params> {
template <typename BO, typename AggregatorFunction>
- void operator()(const BO& bo, const AggregatorFunction&, bool blacklisted)
+ void operator()(const BO& bo, const AggregatorFunction&)
{
// code
}
......
......@@ -23,19 +23,6 @@ Replacing :math:`\mathbf{P}_{1:t}` by :math:`\mathbf{P}_{1:t}-\mathcal{P}(\mathb
See :ref:`the Limbo implementation guide <mean-api>` for the available mean functions.
- Black lists
- -----------
- When running experiments, some solutions cannot be evaluated properly. This situation is common with a physical robot, typically because (1) the robot is outside the sensors' range (e.g., not visible from the camera's point of view), making it impossible to assess its performance, or (2) a sensor returns unusable values (infinity, NaN, ...).
- Several strategies exist for dealing with such missing data. The simplest is to redo the evaluation; this works only if the problem is not deterministic, otherwise the algorithm keeps repeating the same failing evaluation. A second strategy is to assign a very low performance value, as a punishment. This works with evolutionary algorithms, because the corresponding individual is very likely to be removed from the population at the next generation; by contrast, it is disastrous for algorithms that rely on a model of the reward function, like Bayesian optimization, because the artificial value completely distorts the model.
- Since neither strategy fits the Bayesian optimization framework, Limbo uses a different approach that preserves the model's stability: it encourages the algorithm to avoid the regions around solutions that could not be evaluated (which are likely to contain other non-evaluable solutions), without injecting any artificial performance value that would destabilize the model.
- To record that some solutions have already been tried, we define a blacklist of samples. Each time a solution cannot be evaluated properly, it is added to the blacklist instead of the pool of tested solutions. Since no performance value is available, only the solution's location in the search space is stored: the blacklist is a list of samples with missing performance data.
- Thanks to this distinction, the algorithm uses only the valid samples to compute the mean of the Gaussian process, but both the valid and the blacklisted samples to compute the variance. The mean therefore remains unchanged and free to move according to future observations, while the variance "marks" the blacklisted samples as already explored.
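For reference, the effect on the Gaussian process can be sketched in standard GP notation (this is our illustration, not a formula taken from the implementation; :math:`\mathbf{m}` denotes the prior mean vector): with valid samples :math:`X` (observations :math:`\mathbf{y}`) and blacklisted samples :math:`X_{bl}`,

.. math::

  \mu(\mathbf{x}) = \mathbf{k}(\mathbf{x})^\top K^{-1} (\mathbf{y} - \mathbf{m}), \qquad
  \sigma^2(\mathbf{x}) = k(\mathbf{x}, \mathbf{x}) - \tilde{\mathbf{k}}(\mathbf{x})^\top \tilde{K}^{-1} \tilde{\mathbf{k}}(\mathbf{x}),

where :math:`K` and :math:`\mathbf{k}` are built from :math:`X` only, whereas :math:`\tilde{K}` and :math:`\tilde{\mathbf{k}}` are built from :math:`X \cup X_{bl}`: the mean ignores the blacklist, while the variance shrinks around blacklisted points.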
.. _state-based-bo:
State-based optimization
......
......@@ -20,7 +20,6 @@ Limbo provides a few classes for common uses (see :ref:`statistics-stats` for de
- ``Samples``: records the position in the search space of the evaluated points [filename: ``samples.dat``]
- ``BestObservations``: records the best observation after each iteration [filename: ``best_observations.dat``]
- ``BestSamples``: records the position in the search space of the best observation after each iteration [filename: ``best_samples.dat``]
- - ``BlSamples``: records the candidate solutions that have been blacklisted [filename: ``bl_samples.dat``]
These statistics are for "advanced users":
......@@ -45,7 +44,7 @@ All the statistics functors follow the same template:
template <typename Params>
struct Samples : public limbo::stat::StatBase<Params> {
template <typename BO, typename AggregatorFunction>
- void operator()(const BO& bo, const AggregatorFunction&, bool blacklisted)
+ void operator()(const BO& bo, const AggregatorFunction&)
{
// code
}
......
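As an illustration, a user-defined statistics functor written against this two-argument signature could look like the following sketch (``IterationObservation`` is hypothetical; it only uses accessors that appear elsewhere in this commit — ``stats_enabled()``, ``current_iteration()``, ``observations()`` — and the header path for ``StatBase`` is assumed):

#include <iostream>
#include <limbo/stat/stat_base.hpp> // assumed header location for StatBase

template <typename Params>
struct IterationObservation : public limbo::stat::StatBase<Params> {
    template <typename BO, typename AggregatorFunction>
    void operator()(const BO& bo, const AggregatorFunction& afun)
    {
        if (!bo.stats_enabled() || bo.observations().empty())
            return;
        // reduce the latest (possibly multi-dimensional) observation to a scalar
        double value = afun(bo.observations().back());
        std::cout << "iter " << bo.current_iteration() << ": " << value << std::endl;
    }
};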
......@@ -75,7 +75,6 @@ int main()
using Stop_t = boost::fusion::vector<stop::MaxIterations<Params>>;
using Stat_t = boost::fusion::vector<stat::Samples<Params>,
- stat::BlSamples<Params>,
stat::BestObservations<Params>,
stat::AggregatedObservations<Params>>;
using Mean_t = mean::Constant<Params>;
......
......@@ -82,15 +82,14 @@ namespace limbo {
}
template <typename BO, typename AggregatorFunction>
struct RefreshStat_f {
- RefreshStat_f(BO& bo, const AggregatorFunction& afun, bool blacklisted)
-     : _bo(bo), _afun(afun), _blacklisted(blacklisted) {}
+ RefreshStat_f(BO& bo, const AggregatorFunction& afun)
+     : _bo(bo), _afun(afun) {}
BO& _bo;
const AggregatorFunction& _afun;
- bool _blacklisted;
template <typename T>
- void operator()(T& x) const { x(_bo, _afun, _blacklisted); }
+ void operator()(T& x) const { x(_bo, _afun); }
};
struct FirstElem {
......@@ -221,9 +220,6 @@ namespace limbo {
/// return the list of the points that have been evaluated so far (x)
const std::vector<Eigen::VectorXd>& samples() const { return _samples; }
- /// return the list of blacklisted points
- const std::vector<Eigen::VectorXd>& bl_samples() const { return _bl_samples; }
/// return the current iteration number
int current_iteration() const { return _current_iteration; }
......@@ -240,22 +236,11 @@ namespace limbo {
_observations.push_back(v);
}
- /// Add a new blacklisted sample
- void add_new_bl_sample(const Eigen::VectorXd& s) { _bl_samples.push_back(s); }
/// Evaluate a sample and add the result to the 'database' (sample / observation vectors) -- it does not update the model
template <typename StateFunction>
- bool eval_and_add(const StateFunction& seval, const Eigen::VectorXd& sample)
+ void eval_and_add(const StateFunction& seval, const Eigen::VectorXd& sample)
{
- try {
-     this->add_new_sample(sample, seval(sample));
- }
- catch (const EvaluationError& e) {
-     this->add_new_bl_sample(sample);
-     return false;
- }
- return true;
+ this->add_new_sample(sample, seval(sample));
}
protected:
......@@ -267,7 +252,6 @@ namespace limbo {
this->_total_iterations = 0;
this->_samples.clear();
this->_observations.clear();
- this->_bl_samples.clear();
}
if (this->_total_iterations == 0)
......@@ -282,10 +266,10 @@ namespace limbo {
}
template <typename BO, typename AggregatorFunction>
- void _update_stats(BO& bo, const AggregatorFunction& afun, bool blacklisted)
+ void _update_stats(BO& bo, const AggregatorFunction& afun)
{ // not const, because some stat classes
// modify the optimizer....
- boost::fusion::for_each(_stat, RefreshStat_f<BO, AggregatorFunction>(bo, afun, blacklisted));
+ boost::fusion::for_each(_stat, RefreshStat_f<BO, AggregatorFunction>(bo, afun));
}
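The pattern used here — one forwarding functor applied to every element of a ``boost::fusion::vector`` — can be illustrated in isolation (toy functors, nothing Limbo-specific):

#include <boost/fusion/include/for_each.hpp>
#include <boost/fusion/include/vector.hpp>
#include <iostream>

struct StatA { void operator()(int iter) const { std::cout << "A @ " << iter << "\n"; } };
struct StatB { void operator()(int iter) const { std::cout << "B @ " << iter << "\n"; } };

// counterpart of RefreshStat_f: forwards the shared arguments to each stat
struct Refresh {
    int iter;
    template <typename T>
    void operator()(T& x) const { x(iter); }
};

int main()
{
    boost::fusion::vector<StatA, StatB> stats;
    boost::fusion::for_each(stats, Refresh{42}); // calls StatA, then StatB
}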
void _make_res_dir()
......@@ -305,7 +289,6 @@ namespace limbo {
std::vector<Eigen::VectorXd> _observations;
std::vector<Eigen::VectorXd> _samples;
- std::vector<Eigen::VectorXd> _bl_samples;
};
}
}
......
......@@ -142,7 +142,7 @@ namespace limbo {
this->_init(sfun, afun, reset);
if (!this->_observations.empty())
- _model.compute(this->_samples, this->_observations, Eigen::VectorXd::Constant(this->_observations.size(), Params::bayes_opt_boptimizer::noise()), this->_bl_samples, Eigen::VectorXd::Constant(this->_bl_samples.size(), Params::bayes_opt_boptimizer::noise()));
+ _model.compute(this->_samples, this->_observations, Eigen::VectorXd::Constant(this->_observations.size(), Params::bayes_opt_boptimizer::noise()));
else
_model = model_t(StateFunction::dim_in, StateFunction::dim_out);
......@@ -155,16 +155,11 @@ namespace limbo {
[&](const Eigen::VectorXd& x, bool g) { return acqui(x,afun,g); };
Eigen::VectorXd starting_point = tools::random_vector(StateFunction::dim_in);
Eigen::VectorXd new_sample = acqui_optimizer(acqui_optimization, starting_point, true);
- bool blacklisted = !this->eval_and_add(sfun, new_sample);
+ this->eval_and_add(sfun, new_sample);
- this->_update_stats(*this, afun, blacklisted);
+ this->_update_stats(*this, afun);
- if (blacklisted) {
-     _model.add_bl_sample(this->_bl_samples.back(), Params::bayes_opt_boptimizer::noise());
- }
- else {
-     _model.add_sample(this->_samples.back(), this->_observations.back(), Params::bayes_opt_boptimizer::noise());
- }
+ _model.add_sample(this->_samples.back(), this->_observations.back(), Params::bayes_opt_boptimizer::noise());
if (Params::bayes_opt_boptimizer::hp_period() > 0
&& (this->_current_iteration + 1) % Params::bayes_opt_boptimizer::hp_period() == 0)
......
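For context, the ``Params::bayes_opt_boptimizer::noise()`` and ``hp_period()`` accessors used in this hunk come from the user-supplied parameter struct. A minimal hypothetical version is sketched below (real Limbo code usually generates such members with the ``BO_PARAM`` macro; the values are placeholders):

struct Params {
    struct bayes_opt_boptimizer {
        // observation noise added to the GP at each compute()/add_sample()
        static double noise() { return 0.01; }
        // re-optimize the hyper-parameters every hp_period iterations (<= 0: never)
        static int hp_period() { return -1; }
    };
    // ... parameter structs for the acquisition function, stopping criteria, etc.
};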
......@@ -155,9 +155,9 @@ namespace limbo {
if (!this->_observations.empty()) {
_split_observations();
- _model.compute(this->_samples, _obs[0], Eigen::VectorXd::Constant(_obs[0].size(), Params::cbayes_opt_boptimizer::noise()), this->_bl_samples, Eigen::VectorXd::Constant(this->_bl_samples.size(), Params::cbayes_opt_boptimizer::noise()));
+ _model.compute(this->_samples, _obs[0], Eigen::VectorXd::Constant(_obs[0].size(), Params::cbayes_opt_boptimizer::noise()));
if (_nb_constraints > 0)
-     _constraint_model.compute(this->_samples, _obs[1], Eigen::VectorXd::Constant(_obs[1].size(), Params::cbayes_opt_boptimizer::noise()), this->_bl_samples, Eigen::VectorXd::Constant(this->_bl_samples.size(), Params::cbayes_opt_boptimizer::noise()));
+     _constraint_model.compute(this->_samples, _obs[1], Eigen::VectorXd::Constant(_obs[1].size(), Params::cbayes_opt_boptimizer::noise()));
}
else {
_model = model_t(StateFunction::dim_in, StateFunction::dim_out);
......@@ -175,20 +175,13 @@ namespace limbo {
[&](const Eigen::VectorXd& x, bool g) { return opt::no_grad(acqui(x, afun)); };
Eigen::VectorXd starting_point = tools::random_vector(StateFunction::dim_in);
Eigen::VectorXd new_sample = acqui_optimizer(acqui_optimization, starting_point, true);
- bool blacklisted = !this->eval_and_add(sfun, new_sample);
+ this->eval_and_add(sfun, new_sample);
- this->_update_stats(*this, afun, blacklisted);
+ this->_update_stats(*this, afun);
- if (blacklisted) {
-     _model.add_bl_sample(this->_bl_samples.back(), Params::cbayes_opt_boptimizer::noise());
-     if (_nb_constraints > 0)
-         _constraint_model.add_bl_sample(this->_bl_samples.back(), Params::cbayes_opt_boptimizer::noise());
- }
- else {
-     _model.add_sample(this->_samples.back(), _obs[0].back(), Params::cbayes_opt_boptimizer::noise());
-     if (_nb_constraints > 0)
-         _constraint_model.add_sample(this->_samples.back(), _obs[1].back(), Params::cbayes_opt_boptimizer::noise());
- }
+ _model.add_sample(this->_samples.back(), _obs[0].back(), Params::cbayes_opt_boptimizer::noise());
+ if (_nb_constraints > 0)
+     _constraint_model.add_sample(this->_samples.back(), _obs[1].back(), Params::cbayes_opt_boptimizer::noise());
if (Params::cbayes_opt_boptimizer::hp_period() > 0
&& (this->_current_iteration + 1) % Params::cbayes_opt_boptimizer::hp_period() == 0) {
......
......@@ -154,7 +154,7 @@ namespace limbo {
// add sample
this->add_new_sample(new_sample, feval(new_sample));
- this->_update_stats(*this, FirstElem(), false);
+ this->_update_stats(*this, FirstElem());
this->_current_iteration++;
this->_total_iterations++;
}
......
......@@ -200,12 +200,12 @@ namespace limbo {
x_d(splitd) = (tmp_tree[h2].x_min[ii](splitd) + 5 * tmp_tree[h2].x_max[ii](splitd)) / 6.0;
// TO-DO: Properly handle bl_samples etc
- _model.compute(this->_samples, this->_observations, Eigen::VectorXd::Constant(this->_samples.size(), Params::bayes_opt_imgpo::noise()), this->_bl_samples);
+ _model.compute(this->_samples, this->_observations, Eigen::VectorXd::Constant(this->_samples.size(), Params::bayes_opt_imgpo::noise()));
acquisition_function_t acqui_g(_model, M2);
z_max = std::max(z_max, acqui_g(x_g, afun));
M2++;
- _model.compute(this->_samples, this->_observations, Eigen::VectorXd::Constant(this->_samples.size(), Params::bayes_opt_imgpo::noise()), this->_bl_samples);
+ _model.compute(this->_samples, this->_observations, Eigen::VectorXd::Constant(this->_samples.size(), Params::bayes_opt_imgpo::noise()));
acquisition_function_t acqui_d(_model, M2);
z_max = std::max(z_max, acqui_d(x_d, afun));
M2++;
......@@ -270,7 +270,7 @@ namespace limbo {
// left node
_tree[h + 1].x.push_back(x_g);
// TO-DO: Properly handle bl_samples etc
- _model.compute(this->_samples, this->_observations, Eigen::VectorXd::Constant(this->_samples.size(), Params::bayes_opt_imgpo::noise()), this->_bl_samples);
+ _model.compute(this->_samples, this->_observations, Eigen::VectorXd::Constant(this->_samples.size(), Params::bayes_opt_imgpo::noise()));
acquisition_function_t acqui_g(_model, M);
double UCB = acqui_g(x_g, afun);
Eigen::VectorXd fsample_g;
......@@ -309,7 +309,7 @@ namespace limbo {
// right node
_tree[h + 1].x.push_back(x_d);
// TO-DO: Properly handle bl_samples etc
- _model.compute(this->_samples, this->_observations, Eigen::VectorXd::Constant(this->_samples.size(), Params::bayes_opt_imgpo::noise()), this->_bl_samples);
+ _model.compute(this->_samples, this->_observations, Eigen::VectorXd::Constant(this->_samples.size(), Params::bayes_opt_imgpo::noise()));
acquisition_function_t acqui_d(_model, M);
double UCB2 = acqui_d(x_d, afun);
Eigen::VectorXd fsample_d;
......
......@@ -90,7 +90,7 @@ namespace limbo {
<< this->_models[1].mu(best_v) << ")"
<< " sigma:" << this->_models[0].sigma(best_v) << " "
<< this->_models[1].sigma(best_v) << std::endl;
- this->_update_stats(*this, FirstElem(), false);
+ this->_update_stats(*this, FirstElem());
this->_current_iteration++;
this->_total_iterations++;
}
......
......@@ -80,14 +80,12 @@ namespace limbo {
GPParego() {}
GPParego(int dim_in, int dim_out) : Model(dim_in, 1), _nb_objs(dim_out) {}
void compute(const std::vector<Eigen::VectorXd>& samples,
-     const std::vector<Eigen::VectorXd>& observations, const Eigen::VectorXd& noises,
-     const std::vector<Eigen::VectorXd>& bl_samples = std::vector<Eigen::VectorXd>(),
-     const Eigen::VectorXd& noises_bl = Eigen::VectorXd())
+     const std::vector<Eigen::VectorXd>& observations, const Eigen::VectorXd& noises)
{
_raw_observations = observations;
_nb_objs = observations[0].size();
auto new_observations = _scalarize_obs(observations);
- Model::compute(samples, new_observations, noises, bl_samples);
+ Model::compute(samples, new_observations, noises);
}
/// add sample will NOT be incremental (we call compute each time)
void add_sample(const Eigen::VectorXd& sample, const Eigen::VectorXd& observation, double noise)
......@@ -98,16 +96,7 @@ namespace limbo {
_raw_observations.push_back(observation);
this->compute(this->_samples,
-     _raw_observations, this->_noises,
-     this->_bl_samples);
- }
- /// WARNING: Parego does not really work with blacklisted samples
- void add_bl_sample(const Eigen::VectorXd& bl_sample, double noise)
- {
-     Model::add_bl_sample(bl_sample, noise);
-     this->compute(this->_samples,
-         _raw_observations, this->_noises,
-         this->_bl_samples);
+     _raw_observations, this->_noises);
}
protected:
......
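For context, ``_scalarize_obs`` turns each multi-objective observation into a single value before it is fed to the GP. ParEGO (Knowles, 2006, cited in the README above) uses a randomly weighted Tchebycheff scalarization; here is a sketch of that formula — the paper's definition, not necessarily Limbo's exact implementation (``rho = 0.05`` is the value suggested in the paper):

#include <Eigen/Dense>
#include <iostream>

// f_lambda(x) = max_j(lambda_j * f_j(x)) + rho * sum_j(lambda_j * f_j(x))
double scalarize(const Eigen::VectorXd& objs, const Eigen::VectorXd& lambda, double rho = 0.05)
{
    Eigen::VectorXd weighted = lambda.cwiseProduct(objs);
    return weighted.maxCoeff() + rho * weighted.sum();
}

int main()
{
    Eigen::VectorXd f(2); f << 0.3, 0.7; // two objective values
    Eigen::VectorXd l(2); l << 0.5, 0.5; // one weight vector, redrawn at each iteration in ParEGO
    std::cout << scalarize(f, l) << std::endl; // single scalar handed to the GP
}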
......@@ -61,7 +61,7 @@ namespace limbo {
template <typename Params>
struct HyperVolume : public limbo::stat::StatBase<Params> {
template <typename BO, typename AggregatorFunction>
- void operator()(const BO& bo, const AggregatorFunction&, bool blacklisted)
+ void operator()(const BO& bo, const AggregatorFunction&)
{
if (bo.observations().empty())
return;
......
......@@ -53,7 +53,7 @@ namespace limbo {
template <typename F>
struct ParetoBenchmark {
template <typename BO, typename AggregatorFunction>
- void operator()(BO& opt, const AggregatorFunction& afun, bool blacklisted)
+ void operator()(BO& opt, const AggregatorFunction& afun)
{
opt.update_pareto_data();
#ifndef NSBO // this is already done in NSBO
......
......@@ -58,7 +58,7 @@ namespace limbo {
typedef std::vector<pareto_point_t> pareto_t;
template <typename BO, typename AggregatorFunction>
- void operator()(const BO& bo, const AggregatorFunction&, bool blacklisted)
+ void operator()(const BO& bo, const AggregatorFunction&)
{
if (!bo.stats_enabled() || bo.observations().empty())
return;
......
......@@ -74,17 +74,14 @@ namespace limbo {
GP(int dim_in, int dim_out)
: _dim_in(dim_in), _dim_out(dim_out), _kernel_function(dim_in), _mean_function(dim_out) {}
- /// Compute the GP from samples, observations, noise. [optional: blacklisted samples]. This call needs to be explicit!
+ /// Compute the GP from samples, observations, noise. This call needs to be explicit!
void compute(const std::vector<Eigen::VectorXd>& samples,
const std::vector<Eigen::VectorXd>& observations,
-     const Eigen::VectorXd& noises,
-     const std::vector<Eigen::VectorXd>& bl_samples = std::vector<Eigen::VectorXd>(),
-     const Eigen::VectorXd& noises_bl = Eigen::VectorXd())
+     const Eigen::VectorXd& noises)
{
assert(samples.size() != 0);
assert(observations.size() != 0);
assert(samples.size() == observations.size());
- assert(bl_samples.size() == (unsigned int)noises_bl.size());
_dim_in = samples[0].size();
_kernel_function = KernelFunction(_dim_in); // the cost of building a functor should be relatively low
......@@ -101,15 +98,9 @@ namespace limbo {
_mean_observation = _observations.colwise().mean();
_noises = noises;
- _noises_bl = noises_bl;
- _bl_samples = bl_samples;
this->_compute_obs_mean();
this->_compute_full_kernel();
- if (!_bl_samples.empty())
-     this->_compute_bl_kernel();
}
/// Do not forget to call this if you use hyper-parameter optimization!
......@@ -123,13 +114,8 @@ namespace limbo {
void add_sample(const Eigen::VectorXd& sample, const Eigen::VectorXd& observation, double noise)
{
if (_samples.empty()) {
- if (_bl_samples.empty()) {
-     _dim_in = sample.size();
-     _kernel_function = KernelFunction(_dim_in); // the cost of building a functor should be relatively low
- }
- else {
-     assert(sample.size() == _dim_in);
- }
+ _dim_in = sample.size();
+ _kernel_function = KernelFunction(_dim_in); // the cost of building a functor should be relatively low
_dim_out = observation.size();
_mean_function = MeanFunction(_dim_out); // the cost of building a functor should be relatively low
......@@ -152,31 +138,6 @@ namespace limbo {
this->_compute_obs_mean();
this->_compute_incremental_kernel();
- if (!_bl_samples.empty())
-     this->_compute_bl_kernel();
}
- /// add blacklisted sample and update the GP
- void add_bl_sample(const Eigen::VectorXd& bl_sample, double noise)
- {
-     if (_samples.empty() && _bl_samples.empty()) {
-         _dim_in = bl_sample.size();
-         _kernel_function = KernelFunction(_dim_in); // the cost of building a functor should be relatively low
-     }
-     else {
-         assert(bl_sample.size() == _dim_in);
-     }
-     _bl_samples.push_back(bl_sample);
-     _noises_bl.conservativeResize(_noises_bl.size() + 1);
-     _noises_bl[_noises_bl.size() - 1] = noise;
-     //_noise = noise;
-     if (!_samples.empty()) {
-         this->_compute_bl_kernel();
-     }
- }
/**
......@@ -186,16 +147,12 @@ namespace limbo {
*/
std::tuple<Eigen::VectorXd, double> query(const Eigen::VectorXd& v) const
{
- if (_samples.size() == 0 && _bl_samples.size() == 0)
-     return std::make_tuple(_mean_function(v, *this),
-         _kernel_function(v, v));
if (_samples.size() == 0)
return std::make_tuple(_mean_function(v, *this),
-     _sigma(v, _compute_k_bl(v, _compute_k(v))));
+     _kernel_function(v, v));
Eigen::VectorXd k = _compute_k(v);
- return std::make_tuple(_mu(v, k), _sigma(v, _compute_k_bl(v, k)));
+ return std::make_tuple(_mu(v, k), _sigma(v, k));
}
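After this change, ``query()`` returns the textbook GP posterior computed on the valid samples alone: :math:`\mu(v) = \mathbf{k}_v^\top K^{-1}(\mathbf{y}-\mathbf{m})` and :math:`\sigma^2(v) = k(v,v) - \mathbf{k}_v^\top K^{-1} \mathbf{k}_v`. A self-contained sketch of the same computation (toy kernel and data; these are stand-ins for ``_mu``, ``_sigma`` and ``_compute_k``, not Limbo's actual internals):

#include <Eigen/Dense>
#include <cmath>
#include <iostream>
#include <vector>

// toy squared-exponential kernel
double kernel(const Eigen::VectorXd& a, const Eigen::VectorXd& b)
{
    return std::exp(-0.5 * (a - b).squaredNorm());
}

int main()
{
    std::vector<Eigen::VectorXd> X = {Eigen::VectorXd::Constant(1, 0.0), Eigen::VectorXd::Constant(1, 1.0)};
    Eigen::VectorXd y(2); y << 0.5, -0.2; // observations (zero prior mean)
    Eigen::MatrixXd K(2, 2);
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            K(i, j) = kernel(X[i], X[j]) + (i == j ? 1e-6 : 0.0); // jitter plays the role of the noise terms
    Eigen::VectorXd v = Eigen::VectorXd::Constant(1, 0.5); // query point
    Eigen::VectorXd k(2);
    for (int i = 0; i < 2; i++)
        k(i) = kernel(X[i], v);                              // counterpart of _compute_k(v)
    double mu = k.dot(K.ldlt().solve(y));                    // counterpart of _mu(v, k)
    double sigma2 = kernel(v, v) - k.dot(K.ldlt().solve(k)); // counterpart of _sigma(v, k)
    std::cout << "mu = " << mu << ", sigma^2 = " << sigma2 << std::endl;
}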
/**
......@@ -217,9 +174,9 @@ namespace limbo {
*/
double sigma(const Eigen::VectorXd& v) const
{
- if (_samples.size() == 0 && _bl_samples.size() == 0)
+ if (_samples.size() == 0)
return _kernel_function(v, v);
- return _sigma(v, _compute_k_bl(v, _compute_k(v)));
+ return _sigma(v, _compute_k(v));
}
/// return the number of dimensions of the input
......@@ -269,13 +226,6 @@ namespace limbo {
/// return the number of samples used to compute the GP
int nb_samples() const { return _samples.size(); }
- /** return the number of blacklisted samples used to compute the GP
-  \\rst
-  For the blacklist concept, see the Limbo-specific concept guide.
-  \\endrst
- */
- int nb_bl_samples() const { return _bl_samples.size(); }
/// recomputes the GP
void recompute(bool update_obs_mean = true)
{
......@@ -285,9 +235,6 @@ namespace limbo {
this->_compute_obs_mean();
this->_compute_full_kernel();
- if (!_bl_samples.empty())
-     this->_compute_bl_kernel();
}
/// return the likelihood (do not compute it!)
......@@ -314,11 +261,9 @@ namespace limbo {
std::vector<Eigen::VectorXd> _samples;
Eigen::MatrixXd _observations;
- std::vector<Eigen::VectorXd> _bl_samples; // blacklisted samples
Eigen::MatrixXd _mean_vector;
Eigen::MatrixXd _obs_mean;
//double _noise;
Eigen::VectorXd _noises;
- Eigen::VectorXd _noises_bl;
......@@ -327,10 +272,7 @@ namespace limbo {
Eigen::MatrixXd _kernel;
// Eigen::MatrixXd _inverted_kernel;
Eigen::MatrixXd _matrixL;
- Eigen::MatrixXd _inv_bl_kernel;
double _lik;
......@@ -401,46 +343,6 @@ namespace limbo {
triang.adjoint().solveInPlace(_alpha);
}
- void _compute_bl_kernel()
- {
-     Eigen::MatrixXd A1 = Eigen::MatrixXd::Identity(this->_samples.size(), this->_samples.size());
-     _matrixL.template triangularView<Eigen::Lower>().solveInPlace(A1);
-     _matrixL.template triangularView<Eigen::Lower>().transpose().solveInPlace(A1);
-     _inv_bl_kernel.resize(_samples.size() + _bl_samples.size(),
-         _samples.size() + _bl_samples.size());
-     Eigen::MatrixXd B(_samples.size(), _bl_samples.size());
-     for (size_t i = 0; i < _samples.size(); i++)
-         for (size_t j = 0; j < _bl_samples.size(); ++j)
-             B(i, j) = _kernel_function(_samples[i], _bl_samples[j]);
-     Eigen::MatrixXd D(_bl_samples.size(), _bl_samples.size());
-     for (size_t i = 0; i < _bl_samples.size(); i++)
-         for (size_t j = 0; j < _bl_samples.size(); ++j)
-             D(i, j) = _kernel_function(_bl_samples[i], _bl_samples[j]) + ((i == j) ? _noises_bl[i] : 0);
-     Eigen::MatrixXd comA = (D - B.transpose() * A1 * B);
-     Eigen::LLT<Eigen::MatrixXd> llt_bl(comA);
-     Eigen::MatrixXd comA1 = Eigen::MatrixXd::Identity(_bl_samples.size(), _bl_samples.size());
-     llt_bl.matrixL().solveInPlace(comA1);
-     llt_bl.matrixL().transpose().solveInPlace(comA1);
-     // fill the matrix block-wise
-     _inv_bl_kernel.block(0, 0, _samples.size(), _samples.size()) = A1 + A1 * B * comA1 * B.transpose() * A1;
-     _inv_bl_kernel.block(0, _samples.size(), _samples.size(), _bl_samples.size()) = -A1 * B * comA1;
-     _inv_bl_kernel.block(_samples.size(), 0, _bl_samples.size(), _samples.size())
-         // (the excerpt is cut off here; this assignment and the next are reconstructed from the symmetry of the partitioned inverse)
-         = _inv_bl_kernel.block(0, _samples.size(), _samples.size(), _bl_samples.size()).transpose();
-     _inv_bl_kernel.block(_samples.size(), _samples.size(), _bl_samples.size(), _bl_samples.size()) = comA1;
- }
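The block assembly in ``_compute_bl_kernel`` is the standard partitioned-inverse (Schur complement) identity for a symmetric matrix M = [[K, B], [B^T, D]], with ``A1`` = K^-1 and ``comA1`` = (D - B^T K^-1 B)^-1. A standalone Eigen check of that identity, independent of Limbo (toy sizes):

#include <Eigen/Dense>
#include <iostream>

int main()
{
    const int n = 3, m = 2; // n valid samples, m blacklisted samples
    // build a random symmetric positive-definite matrix, like a kernel matrix
    Eigen::MatrixXd R = Eigen::MatrixXd::Random(n + m, n + m);
    Eigen::MatrixXd M = R * R.transpose() + 1e-3 * Eigen::MatrixXd::Identity(n + m, n + m);
    Eigen::MatrixXd K = M.topLeftCorner(n, n), B = M.topRightCorner(n, m), D = M.bottomRightCorner(m, m);
    Eigen::MatrixXd A1 = K.inverse();                            // A1 in the code above
    Eigen::MatrixXd S1 = (D - B.transpose() * A1 * B).inverse(); // comA1 (inverse of the Schur complement)
    Eigen::MatrixXd Minv(n + m, n + m);
    Minv.topLeftCorner(n, n) = A1 + A1 * B * S1 * B.transpose() * A1;
    Minv.topRightCorner(n, m) = -A1 * B * S1;
    Minv.bottomLeftCorner(m, n) = Minv.topRightCorner(n, m).transpose();
    Minv.bottomRightCorner(m, m) = S1;
    std::cout << (Minv - M.inverse()).norm() << std::endl; // ~0: the block formula matches the direct inverse
}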