Unverified Commit 5148d6e9 authored by Konstantinos Chatzilygeroudis's avatar Konstantinos Chatzilygeroudis Committed by GitHub

Merge pull request #257 from resibots/fix_docs

Fix docs (post-review)
parents 1840bc73 e07e72a8
API
====

.. highlight:: c++

Limbo follows a `policy-based design <https://en.wikipedia.org/wiki/Policy-based_design>`_, which allows users to combine high flexibility (almost every part of Limbo can be substituted by a user-defined component) with high performance (the abstractions do not add any overhead, contrary to classic OOP designs). These two features are critical for researchers who want to experiment with new ideas in Bayesian optimization. In practice, changing a part of Limbo (e.g., the kernel function) usually corresponds to changing a template parameter of the optimizer.
...@@ -46,7 +46,7 @@ However, there is no need to inherit from a particular 'abstract' class.

Every class is parametrized by a :ref:`Params <params-guide>` class that contains all the parameters.
Sequence diagram
-----------------

.. figure:: pics/limbo_sequence_diagram.png
   :alt: Sequence diagram
   :target: _images/limbo_sequence_diagram.png
...@@ -56,7 +56,7 @@ Sequence diagram

File Structure
---------------

(see below for a short explanation of the concepts)

.. highlight:: none
...@@ -149,7 +149,7 @@ Template

    }
Available initializers
^^^^^^^^^^^^^^^^^^^^^^^

.. doxygengroup:: init
   :undoc-members:
...@@ -272,7 +272,7 @@ Not all the algorithms support bounded optimization and/or initial point:

Available optimizers
^^^^^^^^^^^^^^^^^^^^^

.. doxygengroup:: opt
   :undoc-members:
...@@ -283,7 +283,7 @@ Default parameters

Models / Gaussian processes (model)
------------------------------------

Currently, Limbo only includes Gaussian processes as models. More may come in the future.

.. doxygenclass:: limbo::model::GP
...@@ -304,7 +304,7 @@ Kernel functions (kernel)

.. _kernel-api:

Template
^^^^^^^^^

.. code-block:: cpp

    template <typename Params>
...@@ -329,14 +329,14 @@ Default parameters
Mean functions (mean)
----------------------

.. _mean-api:

Mean functions capture the prior about the function to be optimized.

Template
^^^^^^^^^

.. code-block:: cpp
...@@ -353,7 +353,7 @@ Template

    };
Available mean functions
^^^^^^^^^^^^^^^^^^^^^^^^^

.. doxygengroup:: mean
   :members:
...@@ -370,12 +370,12 @@ Internals

Stopping criteria (stop)
-------------------------

Stopping criteria are used to stop the Bayesian optimizer algorithm.

Template
^^^^^^^^^

.. code-block:: cpp

    template <typename Params>
...@@ -388,7 +388,7 @@ Template

    };
Available stopping criteria
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. doxygengroup:: stop
   :members:
...@@ -406,12 +406,12 @@ Internals

.. _statistics-stats:

Statistics (stats)
-------------------

Statistics are used to report information about the current state of the algorithm (e.g., the best observation for each iteration). They are typically chained in a ``boost::fusion::vector<>``.

Template
^^^^^^^^^

.. code-block:: cpp

    template <typename Params>
...@@ -427,7 +427,7 @@ Template

.. doxygenstruct:: limbo::stat::StatBase
Available statistics
^^^^^^^^^^^^^^^^^^^^^

.. doxygengroup:: stat
   :members:
...@@ -437,12 +437,12 @@ Default parameters
   :undoc-members:
Parallel tools (par)
---------------------

.. doxygengroup:: par_tools
   :members:

Misc tools (tools)
-------------------

.. doxygengroup:: tools
   :members:
This page presents benchmarks in which we compare the Bayesian optimization performance of **Limbo** against BayesOpt (https://github.com/rmcantin/bayesopt), a state-of-the-art Bayesian optimization library in C++.

Each library is given 200 evaluations (10 random samples + 190 function evaluations) to find the optimum of the hidden function. We compare both the accuracy of the obtained solution (difference with the actual optimum) and the wall-clock time required by the library to run the optimization process. The results show that while the libraries generate solutions with similar accuracy (they are based on the same algorithm), **Limbo** generates these solutions significantly faster than BayesOpt.

In addition to comparing the performance of the libraries with their default parameter values (and evaluating **Limbo** with the same parameters as BayesOpt, see variant: limbo/bench_bayes_def), we also evaluate the performance of multiple variants of **Limbo**, including different acquisition functions (UCB or EI), different inner optimizers (CMAES or DIRECT), and whether or not the hyper-parameters of the model are optimized. In all these comparisons, **Limbo** is faster than BayesOpt (for similar results), even when BayesOpt is not optimizing the hyper-parameters of the Gaussian processes.
Details
-------
- We compare to BayesOpt (https://github.com/rmcantin/bayesopt)
- Accuracy: lower is better (difference with the optimum)
- Wall time: lower is better
- In each replicate, 10 random samples + 190 function evaluations
- See ``src/benchmarks/limbo/bench.cpp`` and ``src/benchmarks/bayesopt/bench.cpp``
This page presents benchmarks in which we compare the performance of the Gaussian Process regression in **Limbo** against two other libraries: GPy (https://github.com/SheffieldML/GPy) and libGP (https://github.com/mblum/libgp).
The quality of the produced model is evaluated according to the Mean Squared Error (lower is better) with respect to the ground truth function. We also quantify the amount of time required by the different libraries to learn the model and to query it. In both cases, lower is better. The evaluations are replicated 30 times and for each replicate, all the variants (see below for the available variants) are using exactly the same data. The data are uniformly sampled and some noise is added (according to the variance of the data).
The comparison is done on 11 tasks to evaluate the performance of the libraries on functions of different complexity and input/output spaces. The results show that the query time of Limbo's Gaussian processes is several orders of magnitude lower than that of GPy, and around half that of libGP, for a similar accuracy. The learning time of Limbo, which highly depends on the optimization algorithm chosen to optimize the hyper-parameters, is either equivalent to or faster than that of the compared libraries.

It is important to note that raw performance is not necessarily the objective of the compared libraries; these benchmarks are provided as baselines so that users know what to expect from **Limbo** and how it compares to other GP libraries. For instance, GPy is a Python library with many more features that is designed to be easy to use. Moreover, GPy can achieve performance comparable to C++ libraries in the hyper-parameter optimization part because it relies on numpy and scipy, which essentially call C code with MKL bindings (almost identical to what we are doing in **Limbo**).
Variants
-------------------
- **GP-SE-Full-Rprop**: Limbo with Squared Exponential kernel where the signal noise, signal variance and kernel lengthscales are optimized via Maximum Likelihood Estimation with the Rprop optimizer (default for limbo)
- **GP-SE-Rprop**: Limbo with Squared Exponential kernel where the signal variance and kernel lengthscales are optimized via Maximum Likelihood Estimation with the Rprop optimizer (default for limbo) and where the signal noise is not optimized but set to a default value: 0.01
- **libGP-SE-Full**: libGP with Squared Exponential kernel where the signal noise, signal variance and kernel lengthscales are optimized via Maximum Likelihood Estimation with the Rprop optimizer (the only one that libGP has)
- **GPy**: GPy with Squared Exponential kernel where the signal noise, signal variance and kernel lengthscales are optimized via Maximum Likelihood Estimation (with the L-BFGS-B optimizer --- see `scipy.optimize.fmin_l_bfgs_b <https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_l_bfgs_b.html>`_)
.. _bayesian_optimization:
Introduction to Bayesian Optimization (BO)
==========================================

...@@ -57,3 +57,77 @@
pages = {503--507},
file = {Cully et al. - 2015 - Robots that can adapt like animals.pdf:/Users/jbm/Documents/zotero_bib/storage/WQ9SQZX3/Cully et al. - 2015 - Robots that can adapt like animals.pdf:application/pdf;Cully et al_2015_Robots that can adapt like animals.pdf:/Users/jbm/Documents/zotero_bib/storage/ADZPNDPM/Cully et al_2015_Robots that can adapt like animals.pdf:application/pdf}
}
@inproceedings{chatzilygeroudis2017,
TITLE = {{Black-Box Data-efficient Policy Search for Robotics}},
AUTHOR = {Chatzilygeroudis, Konstantinos and Rama, Roberto and Kaushik, Rituraj and Goepp, Dorian and Vassiliades, Vassilis and Mouret, Jean-Baptiste},
URL = {https://hal.inria.fr/hal-01576683},
BOOKTITLE = {{IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}},
ADDRESS = {Vancouver, Canada},
YEAR = {2017},
video={https://www.youtube.com/watch?v=kTEyYiIFGPM},
src={https://github.com/resibots/blackdrops},
MONTH = Sep,
KEYWORDS = {Data-Efficient Learning, learning, robotics, resilience},
PDF = {https://hal.inria.fr/hal-01576683/file/medrops-final.pdf},
HAL_ID = {hal-01576683},
HAL_VERSION = {v1},
}
@article{chatzilygeroudis2018resetfree,
title={{Reset-free Trial-and-Error Learning for Robot Damage Recovery}},
author={Konstantinos Chatzilygeroudis and Vassilis Vassiliades and Jean-Baptiste Mouret},
journal={{Robotics and Autonomous Systems}},
year={2018}
}
@inproceedings{tarapore2016,
TITLE = {{How Do Different Encodings Influence the Performance of the MAP-Elites Algorithm?}},
AUTHOR = {Tarapore, Danesh and Clune, Jeff and Cully, Antoine and Mouret, Jean-Baptiste},
BOOKTITLE = {{The 18th Annual Conference on Genetic and Evolutionary Computation ({GECCO'16})}},
YEAR = {2016},
publisher={{ACM}},
keywords={illumination, evolution, resilience, robotics, encodings},
DOI = {10.1145/2908812.2908875},
URL = {https://hal.inria.fr/hal-01302658},
PDF = {https://hal.inria.fr/hal-01302658/document},
SRC={https://github.com/resibots/tarapore_2016_gecco},
HAL_ID = {hal-01302658},
HAL_VERSION = {v1},
X-PROCEEDINGS = {yes},
X-INTERNATIONAL-AUDIENCE = {yes},
X-EDITORIAL-BOARD = {yes},
X-INVITED-CONFERENCE = {no},
X-SCIENTIFIC-POPULARIZATION = {no},
}
@inproceedings{chatzilygeroudis2018using,
title={Using Parameterized Black-Box Priors to Scale Up Model-Based Policy Search for Robotics},
author={Konstantinos Chatzilygeroudis and Jean-Baptiste Mouret},
year={2018},
booktitle={{International Conference on Robotics and Automation (ICRA)}}
}
@inproceedings{pautrat2018bayesian,
title={Bayesian Optimization with Automatic Prior Selection for Data-Efficient Direct Policy Search},
author={Rémi Pautrat and Konstantinos Chatzilygeroudis and Jean-Baptiste Mouret},
year={2018},
booktitle={{International Conference on Robotics and Automation (ICRA)}},
note={A short version of the paper was accepted at the non-archival track of the 1st Conference on Robot Learning (CoRL) 2017}
}
@book{alexandrescu2001modern,
title={Modern {C++} design: generic programming and design patterns applied},
author={Alexandrescu, Andrei},
year={2001},
publisher={Addison-Wesley}
}
@article{martinezcantin14a,
author = {Ruben Martinez-Cantin},
title = {{BayesOpt:} A {Bayesian} Optimization Library for Nonlinear Optimization, Experimental Design and Bandits},
journal = {Journal of Machine Learning Research},
year = {2014},
volume = {15},
pages = {3915--3919},
}
...@@ -8,21 +8,17 @@

Limbo's documentation
=================================
Limbo (LIbrary for Model-Based Optimization) is an open-source C++11 library for Gaussian Processes and data-efficient optimization (e.g., Bayesian optimization, see :cite:`b-brochu2010tutorial,b-Mockus2013`) that is designed to be both highly flexible and very fast. It can be used as a state-of-the-art optimization library or to experiment with novel algorithms with "plugin" components. Limbo is currently mostly used for data-efficient policy search in robot learning :cite:`b-lizotte2007automatic` and online adaptation, because computation time matters when using the low-power embedded computers of robots. For example, Limbo was the key library to develop a new algorithm that allows a legged robot to learn a new gait after a mechanical damage in about 10-15 trials (2 minutes) :cite:`b-cully_robots_2015`, and a 4-DOF manipulator to learn neural network policies for goal reaching in about 5 trials :cite:`b-chatzilygeroudis2017`.

The implementation of Limbo follows a policy-based design :cite:`b-alexandrescu2001modern` that leverages C++ templates: this allows it to be highly flexible without the cost induced by classic object-oriented designs (the cost of virtual functions). `The regression benchmarks <http://www.resibots.eu/limbo/reg_benchmarks.html>`_ show that the query time of Limbo's Gaussian processes is several orders of magnitude lower than that of GPy (a state-of-the-art `Python library for Gaussian processes <https://sheffieldml.github.io/GPy/>`_) for a similar accuracy (the learning time highly depends on the optimization algorithm chosen to optimize the hyper-parameters). The `black-box optimization benchmarks <http://www.resibots.eu/limbo/bo_benchmarks.html>`_ demonstrate that Limbo is about 2 times faster than BayesOpt (a C++ library for data-efficient optimization, :cite:`b-martinezcantin14a`) for similar accuracy and data-efficiency. In practice, changing one of the components of the algorithms in Limbo (e.g., changing the acquisition function) usually requires changing only a template definition in the source code. This design allows users to rapidly experiment and test new ideas while keeping the software as fast as specialized code.

Limbo takes advantage of multi-core architectures to parallelize the internal optimization processes (optimization of the acquisition function, optimization of the hyper-parameters of a Gaussian process) and it vectorizes many of the linear algebra operations (via the `Eigen 3 library <http://eigen.tuxfamily.org/>`_ and optional bindings to Intel's MKL).

The library is distributed under the `CeCILL-C license <http://www.cecill.info/index.en.html>`_ via a `GitHub repository <http://github.com/resibots/limbo>`_. The code is standard-compliant but it is currently mostly developed for GNU/Linux and Mac OS X with both the GCC and Clang compilers. New contributors can rely on a full API reference, while their developments are checked via a continuous integration platform (automatic unit-testing routines).

Limbo is currently used in the `ERC project ResiBots <http://www.resibots.eu>`_, which is focused on data-efficient trial-and-error learning for robot damage recovery, and in the `H2020 project PAL <http://www.pal4u.eu/>`_, which uses social robots to help people cope with diabetes. It has been instrumental in many scientific publications since 2015 :cite:`b-cully_robots_2015,b-chatzilygeroudis2018resetfree,b-tarapore2016,b-chatzilygeroudis2017,b-pautrat2018bayesian,b-chatzilygeroudis2018using`.

Limbo shares many ideas with `Sferes2 <http://github.com/sferes2>`_, a similar framework for evolutionary computation.
Contents:

...@@ -52,3 +48,10 @@ Contents:

.. * :ref:`genindex`
.. * :ref:`modindex`
.. * :ref:`search`
-----

.. bibliography:: guides/refs.bib
   :style: plain
   :cited:
   :keyprefix: b-
...@@ -37,9 +37,7 @@ The basic layout of your ``main.cpp`` file should look like this:

.. code-block:: c++

    #include <limbo/limbo.hpp>

    using namespace limbo;
...@@ -93,14 +91,14 @@ To compute the forward kinematics of our simple planar arm we use the following

.. literalinclude:: ../../src/tutorials/advanced_example.cpp
   :language: c++
   :linenos:
   :lines: 85-112
To make this forward kinematic model useful to our GP, we need to create a mean function:

.. literalinclude:: ../../src/tutorials/advanced_example.cpp
   :language: c++
   :linenos:
   :lines: 114-124
Using state-based Bayesian optimization
-----------------------------------------
...@@ -111,7 +109,7 @@ Creating an Aggregator:

.. literalinclude:: ../../src/tutorials/advanced_example.cpp
   :language: c++
   :linenos:
   :lines: 137-149
Here, we are using a very simple aggregator that computes the distance between the end-effector and the target position.
...@@ -125,7 +123,7 @@ When our Bayesian optimizer finds a solution that the end-effector of the arm is

.. literalinclude:: ../../src/tutorials/advanced_example.cpp
   :language: c++
   :linenos:
   :lines: 126-135
Creating the evaluation function
-----------------------------------------

...@@ -133,7 +131,7 @@ Creating the evaluation function

.. literalinclude:: ../../src/tutorials/advanced_example.cpp
   :language: c++
   :linenos:
   :lines: 151-166
Creating the experiment
-------------------------------------------------

...@@ -242,3 +240,10 @@ Then, an executable named ``arm_example`` should be produced under the folder ``
Using state-based Bayesian optimization, we can transfer what we learned during one task to achieve new tasks faster.

Full ``main.cpp``:

.. literalinclude:: ../../src/tutorials/advanced_example.cpp
   :language: c++
   :linenos:
   :lines: 47-
...@@ -2,6 +2,8 @@

Basic Example
=================================================
If you are not familiar with the main concepts of Bayesian Optimization, a quick introduction is available :ref:`here <bayesian_optimization>`.
In this tutorial, we will explain how to create a new experiment in which a simple function ( :math:`-{(5 * x - 2.5)}^2 + 5`) is maximized.
Let's say we want to create an experiment called "myExp". The first thing to do is to create the folder ``exp/myExp`` under the limbo root. Then add two files:

...@@ -22,6 +24,8 @@ Next, copy the following content to the ``wscript`` file:

.. code:: python

    from waflib.Configure import conf

    def options(opt):
        pass
...@@ -36,13 +40,19 @@ Next, copy the following content to the ``wscript`` file:

For this example, we will optimize a simple function: :math:`-{(5 * x - 2.5)}^2 + 5`, using all default values and settings. If you did not compile with libcmaes and/or nlopt, remove LIBCMAES and/or NLOPT from ``uselib``.

To begin, the ``main`` file has to include the necessary files:

.. literalinclude:: ../../src/tutorials/basic_example.cpp
   :language: c++
   :linenos:
   :lines: 48-53
We also need to declare the ``Parameter struct``:

.. literalinclude:: ../../src/tutorials/basic_example.cpp
   :language: c++
   :linenos:
   :lines: 55-97
Here we are stating that the samples are observed without noise (which makes sense, because we are going to evaluate the function), that we want to output the stats (by setting ``stats_enabled`` to ``true``), that the model has to be initialized with 10 samples (that will be selected randomly), and that the optimizer should run for 40 iterations. The rest of the values are taken from the defaults. **By default limbo optimizes in** :math:`[0,1]`, but you can optimize without bounds by setting ``BO_PARAM(bool, bounded, false)`` in the ``bayes_opt_bobase`` parameters. If you do so, limbo outputs random numbers, wherever needed, sampled from a Gaussian centered at zero with a standard deviation of :math:`10`, instead of uniform random numbers in :math:`[0,1]` (in the bounded case). Finally, **limbo always maximizes**; this means that you have to update your objective function if you want to minimize.
...@@ -64,16 +74,32 @@ With this, we can declare the main function:

   :linenos:
   :lines: 114-123
Finally, from the root of limbo, run a build command, with the additional switch ``--exp myExp``: ::

    ./waf build --exp myExp

Then, an executable named ``myExp`` should be produced under the folder ``build/exp/myExp``.
When running this executable, you should see something similar to this:
.. literalinclude:: ./example_run_basic_example/print_test.dat
These lines show the result of each sample evaluation of the :math:`40` iterations (after the random initialization). In particular, we can see that the algorithm progressively converges toward the maximum of the function (:math:`5`) and that the maximum found is located at :math:`x = 0.500014`.
Running the executable also created a folder with a name composed of YOURCOMPUTERHOSTNAME-DATE-HOUR-PID. This folder should contain two files: ::

    limbo
    |-- YOURCOMPUTERHOSTNAME-DATE-HOUR-PID
        +-- samples.dat
        +-- aggregated_observations.dat
If you want to display the different observations in a graph, you can use the Python script ``print_aggregated_observations.py`` (located in ``limbo_root/src/tutorials``).
For instance, from the root of limbo you can run ::

    python src/tutorials/print_aggregated_observations.py YOURCOMPUTERHOSTNAME-DATE-HOUR-PID/aggregated_observations.dat
Full ``main.cpp``:

.. literalinclude:: ../../src/tutorials/basic_example.cpp
   :language: c++
   :linenos:
   :lines: 48-
...@@ -28,7 +28,7 @@ Optional but highly recommended

.. caution::

   The Debian/Ubuntu NLOpt package does NOT come with C++ bindings; therefore you need to compile NLOpt yourself. The brew package (OSX) comes with C++ bindings (``brew install nlopt``).
* `libcmaes <https://github.com/beniz/libcmaes>`_. We advise you to use our own `fork of libcmaes <https://github.com/resibots/libcmaes>`_ (branch **fix_flags_native**). Make sure that you install with **sudo** or configure the **LD_LIBRARY_PATH** accordingly. Be careful that gtest (which is a dependency of libcmaes) needs to be manually compiled **even if you install it with your package manager** (e.g. apt-get): ::
#iteration aggregated_observation
-1 2.70602
-1 2.01091
-1 3.63208
-1 1.53741
-1 4.78237
-1 3.13115
-1 -1.21201
-1 4.44618
-1 -0.9999
-1 4.15864
0 4.99986
1 4.99977
2 4.99984
3 4.99984
4 4.99984
5 4.99983
6 4.99983
7 4.99983
8 4.99983
9 4.99982
10 4.99986
11 4.99987
12 4.99989