.. highlight:: c++
Limbo follows a `policy-based design <https://en.wikipedia.org/wiki/Policy-based_design>`_, which allows users to combine high flexibility (almost every part of Limbo can be substituted by a user-defined part) with high performance (the abstractions do not add any overhead, contrary to classic OOP designs). These two features are critical for researchers who want to experiment with new ideas in Bayesian optimization. In practice, changing a part of Limbo (e.g., changing the kernel function) usually corresponds to changing a template parameter of the optimizer.
However, there is no need to inherit from a particular 'abstract' class.
Every class is parametrized by a :ref:`Params <params-guide>` class that contains all the parameters.
Sequence diagram
.. figure:: pics/limbo_sequence_diagram.png
:alt: Sequence diagram
:target: _images/limbo_sequence_diagram.png
File Structure
(see below for a short explanation of the concepts)
.. highlight:: none
Available initializers
.. doxygengroup:: init
Not all of the algorithms support bounded optimization and/or an initial point.
Available optimizers
.. doxygengroup:: opt
Models / Gaussian processes (model)
Currently, Limbo only includes Gaussian processes as models. More may come in the future.
.. doxygenclass:: limbo::model::GP
Kernel functions (kernel)
.. _kernel-api:
.. code-block:: cpp

   template <typename Params>
   struct Kernel {
       Kernel(size_t dim = 1) {}
       // return k(v1, v2); this is a sketch, check the API reference
       // for the exact, up-to-date signature
       double operator()(const Eigen::VectorXd& v1, const Eigen::VectorXd& v2) const;
   };
Mean functions (mean)
.. _mean-api:
Mean functions capture the prior about the function to be optimized.
.. code-block:: cpp

   template <typename Params>
   struct MeanFnc {
       // return the prior mean at point x; this is a sketch, check the
       // API reference for the exact, up-to-date signature
       template <typename GP>
       Eigen::VectorXd operator()(const Eigen::VectorXd& x, const GP& gp) const;
   };
Available mean functions
.. doxygengroup:: mean
Stopping criteria (stop)
Stopping criteria are used to stop the Bayesian optimization algorithm.
.. code-block:: cpp

   template <typename Params>
   struct Stop {
       // return true to stop the optimization; this is a sketch, check
       // the API reference for the exact, up-to-date signature
       template <typename BO, typename AggregatorFunction>
       bool operator()(const BO& bo, const AggregatorFunction& afun);
   };
Available stopping criteria
.. doxygengroup:: stop
.. _statistics-stats:
Statistics (stats)
Statistics are used to report information about the current state of the algorithm (e.g., the best observation for each iteration). They are typically chained in a ``boost::fusion::vector<>``.
.. code-block:: cpp

   template <typename Params>
   struct Stat {
       // called at each iteration with the current state of the optimizer;
       // this is a sketch, check the API reference for the exact signature
       template <typename BO>
       void operator()(const BO& bo);
   };
.. doxygenstruct:: limbo::stat::StatBase
Available statistics
.. doxygengroup:: stat
Parallel tools (par)
.. doxygennamespace:: limbo::tools::par
.. doxygengroup:: par_tools
Misc tools (tools)
.. doxygennamespace:: limbo::tools
.. doxygengroup:: tools
This page presents benchmarks in which we compare the Bayesian optimization performance of **Limbo** against BayesOpt (https://github.com/rmcantin/bayesopt, a state-of-the-art Bayesian optimization library in C++).
Each library is given 200 evaluations (10 random samples + 190 function evaluations) to find the optimum of the hidden function. We compare both the accuracy of the obtained solution (difference with the actual optimum solution) and the time (wall clock time) required by the library to run the optimization process. The results show that while the libraries generate solutions with similar accuracy (they are based on the same algorithm), **Limbo** generates these solutions significantly faster than BayesOpt.
In addition to comparing the performance of the libraries with their default parameter values (and evaluating **Limbo** with the same parameters as BayesOpt, see variant: limbo/bench_bayes_def), we also evaluate the performance of multiple variants of **Limbo**, including different acquisition functions (UCB or EI), different inner optimizers (CMA-ES or DIRECT), and whether or not the hyper-parameters of the model are optimized. In all these comparisons, **Limbo** is faster than BayesOpt (for similar results), even when BayesOpt is not optimizing the hyper-parameters of the Gaussian processes.
- We compare to BayesOpt (https://github.com/rmcantin/bayesopt)
- Accuracy: lower is better (difference with the optimum)
- Wall time: lower is better
- In each replicate, 10 random samples + 190 function evaluations
- see `src/benchmarks/limbo/bench.cpp` and `src/benchmarks/bayesopt/bench.cpp`
This page presents benchmarks in which we compare the performance of the Gaussian Process regression in **Limbo** against two other libraries: GPy (https://github.com/SheffieldML/GPy) and libGP (https://github.com/mblum/libgp).
The quality of the produced model is evaluated according to the Mean Squared Error (lower is better) with respect to the ground-truth function. We also quantify the time required by the different libraries to learn the model and to query it; in both cases, lower is better. The evaluations are replicated 30 times and, for each replicate, all the variants (see below for the available variants) use exactly the same data. The data are uniformly sampled and some noise is added (according to the variance of the data).
The comparison is done on 11 tasks to evaluate the performance of the libraries on functions of different complexity and input/output spaces. The results show that, for a similar accuracy, the query time of Limbo's Gaussian processes is several orders of magnitude lower than that of GPy, and about half that of libGP. The learning time of Limbo, which highly depends on the optimization algorithm chosen to optimize the hyper-parameters, is either equivalent to or faster than that of the compared libraries.
It is important to note that the objective of these comparisons is not necessarily to rank the libraries by performance, but to provide baselines so that users know what to expect from **Limbo** and how it compares to other GP libraries. For instance, GPy is a Python library with many more features that is designed to be easy to use. Moreover, GPy can achieve performance comparable to that of C++ libraries in the hyper-parameter optimization part, because it relies on NumPy and SciPy, which essentially call C code with MKL bindings (very similar to what we do in **Limbo**).
- **GP-SE-Full-Rprop**: Limbo with Squared Exponential kernel where the signal noise, signal variance and kernel lengthscales are optimized via Maximum Likelihood Estimation with the Rprop optimizer (default for limbo)
- **GP-SE-Rprop**: Limbo with Squared Exponential kernel where the signal variance and kernel lengthscales are optimized via Maximum Likelihood Estimation with the Rprop optimizer (default for limbo) and where the signal noise is not optimized but set to a default value: 0.01
- **libGP-SE-Full**: libGP with Squared Exponential kernel where the signal noise, signal variance and kernel lengthscales are optimized via Maximum Likelihood Estimation with the Rprop optimizer (the only one that libGP has)
- **GPy**: GPy with Squared Exponential kernel where the signal noise, signal variance and kernel lengthscales are optimized via Maximum Likelihood Estimation (with the L-BFGS-B optimizer; see `scipy.optimize.fmin_l_bfgs_b <https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_l_bfgs_b.html>`_)
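All of the variants above use the Squared Exponential (SE) kernel. For reference, its standard form, with signal variance :math:`\sigma_f^2` and lengthscale :math:`\ell` (the automatic-relevance-determination form uses one lengthscale per input dimension), is:

.. math::

   k(\mathbf{x}, \mathbf{x}') = \sigma_f^2 \exp\left(-\frac{\lVert \mathbf{x} - \mathbf{x}' \rVert^2}{2 \ell^2}\right)

The "signal noise" that the ``Full`` variants additionally optimize enters as an additive :math:`\sigma_n^2` term on the diagonal of the covariance matrix.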
.. _bayesian_optimization:
Introduction to Bayesian Optimization (BO)
pages = {503--507},
}

@inproceedings{chatzilygeroudis2017,
  title = {{Black-Box Data-efficient Policy Search for Robotics}},
  author = {Chatzilygeroudis, Konstantinos and Rama, Roberto and Kaushik, Rituraj and Goepp, Dorian and Vassiliades, Vassilis and Mouret, Jean-Baptiste},
  booktitle = {{IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}},
  address = {Vancouver, Canada},
  year = {2017},
  month = Sep,
  url = {https://hal.inria.fr/hal-01576683},
  pdf = {https://hal.inria.fr/hal-01576683/file/medrops-final.pdf},
  keywords = {Data-Efficient Learning, learning, robotics, resilience},
  hal_id = {hal-01576683},
}

@article{chatzilygeroudis2018resetfree,
  title = {{Reset-free Trial-and-Error Learning for Robot Damage Recovery}},
  author = {Konstantinos Chatzilygeroudis and Vassilis Vassiliades and Jean-Baptiste Mouret},
  journal = {{Robotics and Autonomous Systems}},
  year = {2018},
}

@inproceedings{tarapore2016,
  title = {{How Do Different Encodings Influence the Performance of the MAP-Elites Algorithm?}},
  author = {Tarapore, Danesh and Clune, Jeff and Cully, Antoine and Mouret, Jean-Baptiste},
  booktitle = {{Genetic and Evolutionary Computation Conference (GECCO)}},
  year = {2016},
  doi = {10.1145/2908812.2908875},
  url = {https://hal.inria.fr/hal-01302658},
  pdf = {https://hal.inria.fr/hal-01302658/document},
  keywords = {illumination, evolution, resilience, robotics, encodings},
  hal_id = {hal-01302658},
}

@inproceedings{chatzilygeroudis2018using,
  title = {Using Parameterized Black-Box Priors to Scale Up Model-Based Policy Search for Robotics},
  author = {Konstantinos Chatzilygeroudis and Jean-Baptiste Mouret},
  booktitle = {{International Conference on Robotics and Automation (ICRA)}},
  year = {2018},
}

@inproceedings{pautrat2018bayesian,
  title = {Bayesian Optimization with Automatic Prior Selection for Data-Efficient Direct Policy Search},
  author = {Rémi Pautrat and Konstantinos Chatzilygeroudis and Jean-Baptiste Mouret},
  booktitle = {{International Conference on Robotics and Automation (ICRA)}},
  year = {2018},
  note = {A short version of the paper was accepted at the non-archival track of the 1st Conference on Robot Learning (CoRL) 2017},
}

@book{alexandrescu2001modern,
  title = {Modern {C++} design: generic programming and design patterns applied},
  author = {Alexandrescu, Andrei},
  publisher = {Addison-Wesley},
  year = {2001},
}

@article{martinezcantin14a,
  title = {{BayesOpt:} A {Bayesian} Optimization Library for Nonlinear Optimization, Experimental Design and Bandits},
  author = {Ruben Martinez-Cantin},
  journal = {Journal of Machine Learning Research},
  year = {2014},
  volume = {15},
  pages = {3915--3919},
}
Limbo's documentation
Limbo is a lightweight framework for Bayesian optimization, a powerful approach for the global optimization of expensive, non-convex functions. To report issues and/or to help us improve the library, see the `GitHub repository <http://github.com/resibots/limbo>`_.
Limbo (LIbrary for Model-Based Optimization) is an open-source C++11 library for Gaussian processes and data-efficient optimization (e.g., Bayesian optimization, see :cite:`b-brochu2010tutorial,b-Mockus2013`) that is designed to be both highly flexible and very fast. It can be used as a state-of-the-art optimization library or to experiment with novel algorithms using "plugin" components. Limbo is currently mostly used for data-efficient policy search in robot learning :cite:`b-lizotte2007automatic` and online adaptation, because computation time matters when using the low-power embedded computers of robots. For example, Limbo was the key library in developing a new algorithm that allows a legged robot to learn a new gait after mechanical damage in about 10-15 trials (2 minutes) :cite:`b-cully_robots_2015`, and a 4-DOF manipulator to learn neural network policies for goal reaching in about 5 trials :cite:`b-chatzilygeroudis2017`.
The development of Limbo is funded by the `ERC project ResiBots <http://www.resibots.eu>`_.
The implementation of Limbo follows a policy-based design :cite:`b-alexandrescu2001modern` that leverages C++ templates: this allows it to be highly flexible without the cost induced by classic object-oriented designs (the cost of virtual functions). `The regression benchmarks <http://www.resibots.eu/limbo/reg_benchmarks.html>`_ show that the query time of Limbo's Gaussian processes is several orders of magnitude lower than that of GPy (a state-of-the-art `Python library for Gaussian processes <https://sheffieldml.github.io/GPy/>`_) for a similar accuracy (the learning time highly depends on the optimization algorithm chosen to optimize the hyper-parameters). The `black-box optimization benchmarks <http://www.resibots.eu/limbo/bo_benchmarks.html>`_ demonstrate that Limbo is about 2 times faster than BayesOpt (a C++ library for data-efficient optimization :cite:`b-martinezcantin14a`) for a similar accuracy and data-efficiency. In practice, changing one of the components of the algorithms in Limbo (e.g., changing the acquisition function) usually requires changing only a template definition in the source code. This design allows users to rapidly experiment and test new ideas while keeping the software as fast as specialized code.
Limbo shares many ideas with `Sferes2 <http://github.com/sferes2>`_, a similar framework for evolutionary computation.
Limbo takes advantage of multi-core architectures to parallelize the internal optimization processes (optimization of the acquisition function, optimization of the hyper-parameters of a Gaussian process) and it vectorizes many of the linear algebra operations (via the `Eigen 3 library <http://eigen.tuxfamily.org/>`_ and optional bindings to Intel's MKL).
The library is distributed under the `CeCILL-C license <http://www.cecill.info/index.en.html>`_ via a `Github repository <http://github.com/resibots/limbo>`_. The code is standard-compliant but it is currently mostly developed for GNU/Linux and Mac OS X with both the GCC and Clang compilers. New contributors can rely on a full API reference, while their developments are checked via a continuous integration platform (automatic unit-testing routines).
Main features
Limbo is currently used in the `ERC project ResiBots <http://www.resibots.eu>`_, which is focused on data-efficient trial-and-error learning for robot damage recovery, and in the `H2020 project PAL <http://www.pal4u.eu/>`_, which uses social robots to help cope with diabetes. It has been instrumental in many scientific publications since 2015 :cite:`b-cully_robots_2015,b-chatzilygeroudis2018resetfree,b-tarapore2016,b-chatzilygeroudis2017,b-pautrat2018bayesian,b-chatzilygeroudis2018using`.
- Implementation of the classic algorithms (Bayesian optimization, many kernels, likelihood maximization, etc.)
- Modern C++11
- Generic framework (template-based / policy-based design), which allows for easy customization and testing of novel ideas
- Experimental framework that allows users to easily test variants of experiments, compare treatments, submit jobs to clusters (OAR scheduler), etc.
- High performance (in particular, Limbo can exploit multicore computers via Intel TBB and vectorize some operations via Eigen3)
- Purposely small, to be easily maintained and quickly understood
.. * :ref:`genindex`
.. * :ref:`modindex`
.. * :ref:`search`
.. bibliography:: guides/refs.bib
:style: plain
:keyprefix: b-
The basic layout of your ``main.cpp`` file should look like this:
.. code-block:: c++
#include <iostream>
// Here we have to include other needed limbo headers
#include <limbo/limbo.hpp>
using namespace limbo;
To compute the forward kinematics of our simple planar arm we use the following function:
.. literalinclude:: ../../src/tutorials/advanced_example.cpp
:language: c++
:lines: 85-112
To make this forward kinematic model useful to our GP, we need to create a mean function:
.. literalinclude:: ../../src/tutorials/advanced_example.cpp
:language: c++
:lines: 114-124
Using state-based Bayesian optimization
Creating an Aggregator:
.. literalinclude:: ../../src/tutorials/advanced_example.cpp
:language: c++
:lines: 137-149
Here, we are using a very simple aggregator that simply computes the distance between the end-effector and the target position.
When our Bayesian optimizer finds a solution in which the end-effector of the arm is close enough to the target:
.. literalinclude:: ../../src/tutorials/advanced_example.cpp
:language: c++
:lines: 126-135
Creating the evaluation function
Creating the evaluation function
.. literalinclude:: ../../src/tutorials/advanced_example.cpp
:language: c++
:lines: 151-166
Creating the experiment
Using state-based Bayesian optimization, we can transfer what we learned during one task to solve new tasks faster.
Full ``main.cpp``:
.. literalinclude:: ../../src/tutorials/advanced_example.cpp
:language: c++
:lines: 47-
Basic Example
If you are not familiar with the main concepts of Bayesian Optimization, a quick introduction is available :ref:`here <bayesian_optimization>`.
In this tutorial, we will explain how to create a new experiment in which a simple function ( :math:`-{(5 * x - 2.5)}^2 + 5`) is maximized.
Let's say we want to create an experiment called "myExp". The first thing to do is to create the folder ``exp/myExp`` under the limbo root. Then add two files:
Next, copy the following content to the ``wscript`` file:
.. code:: python
from waflib.Configure import conf
def options(opt):
For this example, we will optimize a simple function, :math:`-{(5 * x - 2.5)}^2 + 5`, using all default values and settings. If you did not compile with libcmaes and/or NLOpt, remove LIBCMAES and/or NLOPT from ``uselib``.
To begin, the ``main`` file has to include the necessary files:
.. literalinclude:: ../../src/tutorials/basic_example.cpp
:language: c++
:lines: 48-53
We also need to declare the ``Parameter struct``:
.. literalinclude:: ../../src/tutorials/basic_example.cpp
:language: c++
:lines: 55-97
Here we are stating that the samples are observed without noise (which makes sense, because we are going to evaluate the function), that we want to output the stats (by setting ``stats_enabled`` to ``true``), that the model has to be initialized with 10 samples (that will be selected randomly), and that the optimizer should run for 40 iterations. The rest of the values are taken from the defaults. **By default Limbo optimizes in** :math:`[0,1]`, but you can optimize without bounds by setting ``BO_PARAM(bool, bounded, false)`` in the ``bayes_opt_bobase`` parameters. If you do so, wherever random numbers are needed, Limbo samples them from a Gaussian centered at zero with a standard deviation of :math:`10`, instead of uniform random numbers in :math:`[0,1]` (the bounded case). Finally, **Limbo always maximizes**; this means that you have to update your objective function if you want to minimize.
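For reference, a parameter structure matching the description above might look like the following sketch. It assumes Limbo's ``BO_PARAM`` macros and default structs; the exact struct and parameter names should be checked against the tutorial's full source, as this is an illustration, not a verbatim copy:

```cpp
// Sketch only: assumes Limbo's headers and BO_PARAM macros; struct names
// must match the components you actually use (check the tutorial source).
struct Params {
    struct bayes_opt_bobase : public defaults::bayes_opt_bobase {
        BO_PARAM(bool, stats_enabled, true); // write the stats files
    };
    struct kernel : public defaults::kernel {
        BO_PARAM(double, noise, 1e-10);      // samples observed (almost) without noise
    };
    struct init_randomsampling {
        BO_PARAM(int, samples, 10);          // 10 random samples to initialize the model
    };
    struct stop_maxiterations {
        BO_PARAM(int, iterations, 40);       // run for 40 iterations
    };
};
```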
With this, we can declare the main function:

.. literalinclude:: ../../src/tutorials/basic_example.cpp
   :language: c++
   :lines: 114-123
The full ``main.cpp`` can be found `here <../../src/tutorials/basic_example.cpp>`_
Finally, from the root of limbo, run a build command, with the additional switch ``--exp myExp``: ::
./waf build --exp myExp
Then, an executable named ``myExp`` should be produced under the folder ``build/exp/myExp``.
When running this executable, you should see something similar to this:
.. literalinclude:: ./example_run_basic_example/print_test.dat
These lines show the result of each sample evaluation over the :math:`40` iterations (after the random initialization). In particular, we can see that the algorithm progressively converges toward the maximum of the function (:math:`5`) and that the maximum found is located at :math:`x = 0.500014`.
Running the executable also created a folder with a name composed of YOUCOMPUTERHOSTNAME-DATE-HOUR-PID. This folder should contain two files: ::
+-- samples.dat
+-- aggregated_observations.dat
The file ``samples.dat`` contains the coordinates of the samples that have been evaluated during each iteration, while the file ``aggregated_observations.dat`` contains the corresponding observed values.
If you want to display the different observations in a graph, you can use the python script ``print_aggregated_observations.py`` (located in ``limbo_root/src/tutorials``).
For instance, from the root of limbo you can run ::
python src/tutorials/print_aggregated_observations.py YOUCOMPUTERHOSTNAME-DATE-HOUR-PID/aggregated_observations.dat
Full ``main.cpp``:
.. literalinclude:: ../../src/tutorials/basic_example.cpp
:language: c++
:lines: 48-
Optional but highly recommended
.. caution::
   The Debian/Ubuntu NLOpt package does NOT come with C++ bindings; therefore you need to compile NLOpt yourself. The brew package (OSX) comes with C++ bindings (``brew install nlopt``).
* `libcmaes <https://github.com/beniz/libcmaes>`_. We advise you to use our own `fork of libcmaes <https://github.com/resibots/libcmaes>`_ (branch **fix_flags_native**). Make sure that you install with **sudo** or configure the **LD_LIBRARY_PATH** accordingly. Be careful that gtest (which is a dependency of libcmaes) needs to be manually compiled **even if you install it with your package manager** (e.g. apt-get): ::
Example data files produced by this tutorial (contents truncated here for brevity):

``print_test.dat`` (the console output)::

    0 new point: 0.502378 value: 4.99986 best:4.99986
    1 new point: 0.503035 value: 4.99977 best:4.99986
    ...
    39 new point: 0.500014 value: 5 best:5
    Best sample: 0.500014 - Best observation: 5

``aggregated_observations.dat``::

    #iteration aggregated_observation
    -1 2.70602
    -1 2.01091
    ...
    0 4.99986
    ...
    39 5

``samples.dat``::

    #iteration sample
    -1 0.197082
    -1 0.15422
    ...
    0 0.502378
    ...
    39 0.500014
Create a new experiment
./waf --create test
For more information about experiments in Limbo, see the :ref:`Framework guide <framework-guide>`
Edit the "Eval" function to define the function that you want to optimize
inline vec_t t_osz(const vec_t& x)
vec_t r = x;
for (size_t i = 0; i < static_cast<size_t>(x.size()); i++)
r(i) = sign(x(i)) * std::exp(hat(x(i)) + 0.049 * std::sin(c1(x(i)) * hat(x(i))) + std::sin(c2(x(i)) * hat(x(i))));
return r;
struct Rastrigin {
double operator()(const vec_t& xx) const