Limbo (LIbrary for Model-Based Optimization) is an open-source C++11 library for Gaussian Processes and data-efficient optimization (e.g., Bayesian optimization) that is designed to be both highly flexible and very fast. It can be used as a state-of-the-art optimization library or to experiment with novel algorithms with "plugin" components.
Documentation & Versions
------------------------
The development branch is the [master](https://github.com/resibots/limbo/tree/master) branch. For the latest stable release, check the [release-2.0](https://github.com/resibots/limbo/tree/release-2.0) branch.
Documentation is available at: http://www.resibots.eu/limbo
A short paper that introduces the library is available here: https://members.loria.fr/JBMouret/pdf/limbo_paper.pdf
Citing Limbo
------------
If you use Limbo in a scientific paper, please cite:
Cully, A., Chatzilygeroudis, K., Allocati, F., and Mouret J.-B., (2016). [Limbo: A Flexible High-performance Library for Gaussian Processes modeling and Data-Efficient Optimization](https://members.loria.fr/JBMouret/pdf/limbo_paper.pdf). *Preprint*.
In BibTeX:

    @article{cully_limbo_2016,
      title={Limbo: A Flexible High-performance Library for Gaussian Processes modeling and Data-Efficient Optimization},
      author={Cully, A. and Chatzilygeroudis, K. and Allocati, F. and Mouret, J.-B.},
      year={2016}
    }
Main features
-------------
- Purposely small to be easily maintained and quickly understood
Scientific articles that use Limbo
----------------------------------
- Chatzilygeroudis, K., & Mouret, J. B. (2018). [Using Parameterized Black-Box Priors to Scale Up Model-Based Policy Search for Robotics](https://arxiv.org/pdf/1709.06917). *Proceedings of the International Conference on Robotics and Automation (ICRA)*.
- Pautrat, R., Chatzilygeroudis, K., & Mouret, J.-B. (2018). [Bayesian Optimization with Automatic Prior Selection for Data-Efficient Direct Policy Search](https://arxiv.org/pdf/1709.06919). *Proceedings of the International Conference on Robotics and Automation (ICRA)*.
- Chatzilygeroudis, K., Vassiliades, V. and Mouret, J.-B. (2017). [Reset-free Trial-and-Error Learning for Robot Damage Recovery](https://arxiv.org/abs/1610.04213). *Robotics and Autonomous Systems*.
- Karban P., Pánek D., Mach F. and Doležel, I. (2017). [Calibration of numerical models based on advanced optimization and penalization techniques](https://www.degruyter.com/downloadpdf/j/jee.2017.68.issue-5/jee-2017-0073/jee-2017-0073.pdf). *Journal of Electrical Engineering, 68(5), 396-400*.
- Chatzilygeroudis K., Rama R., Kaushik, R., Goepp, D., Vassiliades, V. and Mouret, J.-B. (2017). [Black-Box Data-efficient Policy Search for Robotics](https://arxiv.org/abs/1703.07261). *Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)*.
- Tarapore, D., Clune, J., Cully, A., and Mouret, J.-B. (2016). [How Do Different Encodings Influence the Performance of the MAP-Elites Algorithm?](https://hal.inria.fr/hal-01302658/document). *In Proc. of Genetic and Evolutionary Computation Conference*.
- Cully, A., Clune, J., Tarapore, D., and Mouret, J.-B. (2015). [Robots that can adapt like animals](http://www.nature.com/nature/journal/v521/n7553/full/nature14422.html). *Nature*, 521(7553), 503-507.
- Chatzilygeroudis, K., Cully, A. and Mouret, J.-B. (2016). [Towards semi-episodic learning for robot damage recovery](https://arxiv.org/abs/1610.01407). *Workshop on AI for Long-Term Autonomy at the IEEE International Conference on Robotics and Automation 2016*.
- Papaspyros, V., Chatzilygeroudis, K., Vassiliades, V., and Mouret, J.-B. (2016). [Safety-Aware Robot Damage Recovery Using Constrained Bayesian Optimization and Simulated Priors](https://arxiv.org/pdf/1611.09419v3). *Workshop on Bayesian Optimization at the Annual Conference on Neural Information Processing Systems (NIPS) 2016.*
* `Intel TBB <https://www.threadingbuildingblocks.org>`_ is not mandatory, but highly recommended; TBB is used in Limbo to take advantage of multicore architectures.
* `NLOpt <http://ab-initio.mit.edu/wiki/index.php/NLopt>`_ [mirror: http://members.loria.fr/JBMouret/mirrors/nlopt-2.4.2.tar.gz] with C++ binding: ::
The Debian/Ubuntu NLOpt package does NOT come with C++ bindings, so you need to compile NLOpt yourself. The brew package (OSX) comes with C++ bindings (``brew install homebrew/science/nlopt``).
* `libcmaes <https://github.com/beniz/libcmaes>`_. We advise you to use our own `fork of libcmaes <https://github.com/resibots/libcmaes>`_ (branch **fix_flags_native**). Make sure that you install with **sudo** or configure the **LD_LIBRARY_PATH** accordingly. Be careful that gtest (which is a dependency of libcmaes) needs to be manually compiled **even if you install it with your package manager** (e.g. apt-get): ::
    sudo apt-get install libgtest-dev
    cd /usr/src/gtest
    ...
    sudo make
    sudo cp *.a /usr/lib
Follow the instructions below (you can also have a look `here <https://github.com/resibots/libcmaes#build>`_): ::
In addition, you should be careful to configure **libcmaes** to use the same Eigen3 version as what you intend to use with Limbo (configuring with Makefiles): ::
Additionally, you can enable the usage of TBB for parallelization (configuring with Makefiles): ::
We assume that our samples are in a vector called ``samples`` and that our observations are in a vector called ``observations``.
.. literalinclude:: ../../src/tutorials/gp.cpp
   :language: c++
   :linenos:
   :lines: 79-88
Basic usage
------------
...
We first create a basic GP with an Exponential kernel (``kernel::Exp<Params>``):
.. literalinclude:: ../../src/tutorials/gp.cpp
   :language: c++
   :linenos:
   :lines: 61-74
The type of the GP is defined by the following lines:
.. literalinclude:: ../../src/tutorials/gp.cpp
   :language: c++
   :linenos:
   :lines: 89-93
To use the GP, we need:
...
.. literalinclude:: ../../src/tutorials/gp.cpp
   :language: c++
   :linenos:
   :lines: 94-99
Here we assume that the noise is the same for all samples and that it is equal to 0.01.
...

To visualize the predictions of the GP, we can query it for many points and record the predictions:
.. literalinclude:: ../../src/tutorials/gp.cpp
   :language: c++
   :linenos:
   :lines: 101-112
Hyper-parameter optimization
----------------------------
...

A new GP type is defined as follows:
.. literalinclude:: ../../src/tutorials/gp.cpp
   :language: c++
   :linenos:
   :lines: 114-118
It uses the default values for the parameters of ``SquaredExpARD``:
.. literalinclude:: ../../src/tutorials/gp.cpp
   :language: c++
   :linenos:
   :lines: 66-69
After calling the ``compute()`` method, the hyper-parameters can be optimized by calling ``optimize_hyperparams()``. The GP does not need to be recomputed beforehand: we pass ``false`` as the last parameter of ``compute()`` because the kernel matrix does not need to be computed now (it will be recomputed during the hyper-parameter optimization).
.. literalinclude:: ../../src/tutorials/gp.cpp
   :language: c++
   :linenos:
   :lines: 121-123
We can have a look at the difference between the two GPs:
...

Here is the complete ``main.cpp`` file of this tutorial:
.. literalinclude:: ../../src/tutorials/gp.cpp
   :language: c++
   :lines: 46-
Saving and Loading
-------------------
We can also save our optimized GP model:
.. literalinclude:: ../../src/tutorials/gp.cpp
   :language: c++
   :linenos:
   :lines: 140-141
This will create a directory called ``myGP`` with several files (the GP data, the kernel hyper-parameters, etc.). If we want a more compact binary format, we can replace ``TextArchive`` with ``BinaryArchive``.
To load a saved model, we can do the following:
.. literalinclude:: ../../src/tutorials/gp.cpp
   :language: c++
   :linenos:
   :lines: 143-144
Note that we need to have the same kernel and mean function (i.e., the same GP type) as the one used for saving.