Commit 46d9713b authored by JB Mouret's avatar JB Mouret Committed by GitHub

Merge branch 'master' into doc_versioning

parents ac93dbe0 69bf9002
@@ -12,13 +12,16 @@
 *.log
 .lock-waf*
 .waf-*
+.waf3-*
 build
 waf_xcode.sh
 exp
 src/tests/combinations
 params_*
 benchmark_results
+__pycache__
 # Ignored folders for the documentation
 _build
 doxygen_doc
+docs/defaults.rst
@@ -16,28 +16,31 @@ env:
   global:
     - CI_HOME=`pwd`
   matrix:
-    - LIBCMAES=ON NLOPT=ON TBB=ON EXPERIMENTAL=OFF SFERES=OFF
-    - LIBCMAES=ON NLOPT=ON TBB=OFF EXPERIMENTAL=OFF SFERES=OFF
-    - LIBCMAES=ON NLOPT=OFF TBB=ON EXPERIMENTAL=OFF SFERES=OFF
-    - LIBCMAES=ON NLOPT=OFF TBB=OFF EXPERIMENTAL=OFF SFERES=OFF
-    - LIBCMAES=OFF NLOPT=ON TBB=ON EXPERIMENTAL=OFF SFERES=OFF
-    - LIBCMAES=OFF NLOPT=ON TBB=OFF EXPERIMENTAL=OFF SFERES=OFF
-    - LIBCMAES=OFF NLOPT=OFF TBB=ON EXPERIMENTAL=OFF SFERES=OFF
-    - LIBCMAES=OFF NLOPT=OFF TBB=OFF EXPERIMENTAL=OFF SFERES=OFF
-    - LIBCMAES=ON NLOPT=OFF TBB=OFF EXPERIMENTAL=ON SFERES=OFF
-    - LIBCMAES=OFF NLOPT=OFF TBB=OFF EXPERIMENTAL=ON SFERES=OFF
-    - LIBCMAES=ON NLOPT=OFF TBB=ON EXPERIMENTAL=ON SFERES=OFF
-    - LIBCMAES=OFF NLOPT=OFF TBB=ON EXPERIMENTAL=ON SFERES=OFF
-    - LIBCMAES=ON NLOPT=OFF TBB=OFF EXPERIMENTAL=ON SFERES=ON
-    - LIBCMAES=OFF NLOPT=OFF TBB=OFF EXPERIMENTAL=ON SFERES=ON
-    - LIBCMAES=ON NLOPT=OFF TBB=ON EXPERIMENTAL=ON SFERES=ON
-    - LIBCMAES=OFF NLOPT=OFF TBB=ON EXPERIMENTAL=ON SFERES=ON
+    - LIBCMAES=ON NLOPT=ON TBB=ON EXPERIMENTAL=OFF SFERES=OFF PYTHON=python3
+    - LIBCMAES=ON NLOPT=ON TBB=ON EXPERIMENTAL=OFF SFERES=OFF PYTHON=python2
+    - LIBCMAES=ON NLOPT=ON TBB=OFF EXPERIMENTAL=OFF SFERES=OFF PYTHON=python2
+    - LIBCMAES=ON NLOPT=OFF TBB=ON EXPERIMENTAL=OFF SFERES=OFF PYTHON=python2
+    - LIBCMAES=ON NLOPT=OFF TBB=OFF EXPERIMENTAL=OFF SFERES=OFF PYTHON=python2
+    - LIBCMAES=OFF NLOPT=ON TBB=ON EXPERIMENTAL=OFF SFERES=OFF PYTHON=python2
+    - LIBCMAES=OFF NLOPT=ON TBB=OFF EXPERIMENTAL=OFF SFERES=OFF PYTHON=python2
+    - LIBCMAES=OFF NLOPT=OFF TBB=ON EXPERIMENTAL=OFF SFERES=OFF PYTHON=python2
+    - LIBCMAES=OFF NLOPT=OFF TBB=OFF EXPERIMENTAL=OFF SFERES=OFF PYTHON=python2
+    - LIBCMAES=ON NLOPT=OFF TBB=OFF EXPERIMENTAL=ON SFERES=OFF PYTHON=python2
+    - LIBCMAES=OFF NLOPT=OFF TBB=OFF EXPERIMENTAL=ON SFERES=OFF PYTHON=python2
+    - LIBCMAES=ON NLOPT=OFF TBB=ON EXPERIMENTAL=ON SFERES=OFF PYTHON=python2
+    - LIBCMAES=OFF NLOPT=OFF TBB=ON EXPERIMENTAL=ON SFERES=OFF PYTHON=python2
+    - LIBCMAES=ON NLOPT=OFF TBB=OFF EXPERIMENTAL=ON SFERES=ON PYTHON=python2
+    - LIBCMAES=OFF NLOPT=OFF TBB=OFF EXPERIMENTAL=ON SFERES=ON PYTHON=python2
+    - LIBCMAES=ON NLOPT=OFF TBB=ON EXPERIMENTAL=ON SFERES=ON PYTHON=python2
+    - LIBCMAES=OFF NLOPT=OFF TBB=ON EXPERIMENTAL=ON SFERES=ON PYTHON=python2
 addons:
   apt:
     packages:
       - libboost1.55-all-dev
       - libeigen3-dev
+      - python
+      - python3
 before_install:
   - sudo sed -i -e 's/^Defaults\tsecure_path.*$//' /etc/sudoers
@@ -51,5 +54,5 @@ install:
 # Change this to your needs
 script:
-  - if [ "$SFERES" = "OFF" ]; then ./waf configure ; else ./waf configure --sferes=$CI_HOME/sferes2 ; fi
-  - if [ "$EXPERIMENTAL" = "OFF" ]; then ./waf --tests --alltests -v ; else ./waf --experimental ; fi
+  - if [ "$SFERES" = "OFF" ]; then $PYTHON ./waf configure ; else $PYTHON ./waf configure --sferes=$CI_HOME/sferes2 ; fi
+  - if [ "$EXPERIMENTAL" = "OFF" ]; then $PYTHON ./waf --tests --alltests -v ; else $PYTHON ./waf --experimental ; fi
## Coding style
We chose to abide by the following coding style rules.
1. Names representing classes must be in camel case:
`MyClass`
2. Variable and method names must be in lower case, using underscores to separate words:
`my_variable`, `my_method()`
3. Names of protected and private members must start with an underscore:
`_my_private_member`, `_my_private_method()`
4. File names must be in lower case, using underscores to separate words.
A file which contains a class `MyClass` should be named `my_class.hpp`
5. File structure mirrors namespace structure.
For instance `gen::MyClass` is in the file `gen/my_class.hpp`
6. Named constants (including enumeration values) must be placed in the `cst` namespace within the current namespace
```
namespace cst {
static constexpr int a_number = 3529;
}
```
7. Getters should have the name of the attribute:
`this->_objs` should be accessed using `this->objs()`
8. Setters should start with "set\_" followed by the name of the attribute:
`set_objs(const std::vector& ov)`
9. The public section should be the first section of a class.
10. Type names defined using typedefs/aliases should end with "\_t": `iterator_t`
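Taken together, the naming rules above combine as follows in a short hypothetical class (illustrative only, not Limbo code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

namespace gen {
    // rule 6: named constants live in a `cst` namespace
    namespace cst {
        static constexpr int default_size = 10;
    }

    // rule 1: camel-case class name; rule 5: would live in gen/my_class.hpp
    class MyClass {
    public:
        // rule 9: the public section comes first
        // rule 10: type aliases end with _t
        using index_t = std::size_t;

        MyClass() : _objs(cst::default_size) {}

        // rule 7: getter named after the attribute
        const std::vector<double>& objs() const { return _objs; }

        // rule 8: setter prefixed with set_
        void set_objs(const std::vector<double>& ov) { _objs = ov; }

    protected:
        // rule 3: protected/private members start with an underscore
        std::vector<double> _objs;
    };
} // namespace gen
```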
## Code formatting
We also follow the coding style rules enforced by `clang-format` with a custom configuration. See the [format_code](https://github.com/resibots/format_code) repository for the tooling that applies this standard.
@@ -54,11 +54,12 @@ Main features
 Scientific articles that use Limbo
 --------------------------------
-- Cully, A., Clune, J., Tarapore, D., & Mouret, J. B. (2015). [Robots that can adapt like animals](http://www.nature.com/nature/journal/v521/n7553/full/nature14422.html). *Nature*, 521(7553), 503-507.
-- Tarapore D, Clune J, Cully A, Mouret JB (2016). [How Do Different Encodings Influence the Performance of the MAP-Elites Algorithm?](https://hal.inria.fr/hal-01302658/document). *In Proc. of Genetic and Evolutionary Computation Conference*.
+- Cully, A., Clune, J., Tarapore, D., and Mouret, J.B. (2015). [Robots that can adapt like animals](http://www.nature.com/nature/journal/v521/n7553/full/nature14422.html). *Nature*, 521(7553), 503-507.
+- Tarapore, D., Clune, J., Cully, A., and Mouret, J.B. (2016). [How Do Different Encodings Influence the Performance of the MAP-Elites Algorithm?](https://hal.inria.fr/hal-01302658/document). *In Proc. of Genetic and Evolutionary Computation Conference*.
 - Chatzilygeroudis, K., Vassiliades, V. and Mouret, J.B. (2016). [Reset-free Trial-and-Error Learning for Data-Efficient Robot Damage Recovery](https://arxiv.org/abs/1610.04213). *arXiv preprint arXiv:1610.04213*.
 - Chatzilygeroudis, K., Cully, A. and Mouret, J.B. (2016). [Towards semi-episodic learning for robot damage recovery](https://arxiv.org/abs/1610.01407). *Workshop on AI for Long-Term Autonomy at the IEEE International Conference on Robotics and Automation 2016*.
-- Papaspyros V., Chatzilygeroudis K., Vassiliades V., and Mouret JB. (2016). [Safety-Aware Robot Damage Recovery Using Constrained Bayesian Optimization and Simulated Priors](https://arxiv.org/pdf/1611.09419v3). *Workshop on Bayesian Optimization at the Annual Conference on Neural Information Processing Systems (NIPS) 2016.*
+- Papaspyros, V., Chatzilygeroudis, K., Vassiliades, V., and Mouret, J.B. (2016). [Safety-Aware Robot Damage Recovery Using Constrained Bayesian Optimization and Simulated Priors](https://arxiv.org/pdf/1611.09419v3). *Workshop on Bayesian Optimization at the Annual Conference on Neural Information Processing Systems (NIPS) 2016.*
+- Chatzilygeroudis, K., Rama, R., Kaushik, R., Goepp, D., Vassiliades, V. and Mouret, J.B. (2017). [Black-Box Data-efficient Policy Search for Robotics](https://arxiv.org/abs/1703.07261). *arXiv preprint arXiv:1703.07261*.
 Research projects that use Limbo
 --------------------------------
......
Benchmarking with other Bayesian Optimization/Gaussian Processes Libraries
==========================================================================
In this section, we compare the performance (both in accuracy and speed) of our library with that of other Bayesian Optimization (BO) and Gaussian Processes (GP) libraries on several test functions.
.. _bench_bayes_opt:
BayesOpt C++ Bayesian Optimization Library
-------------------------------------------
**Last updated: 27/05/2017**
The BayesOpt library and information about it can be found `here <https://bitbucket.org/rmcantin/bayesopt>`_.
Parameters/Setup
~~~~~~~~~~~~~~~~~
Limbo was configured to replicate BayesOpt's default parameters:
+------------------------+---------------------------------------+
| **Kernel\***           | Matern5 (:math:`\sigma^2 = 1, l = 1`) |
+------------------------+---------------------------------------+
|**Acquisition Function**| UCB (:math:`\alpha = 0.125`)          |
+------------------------+---------------------------------------+
| **Initialization**     | RandomSampling (10 Samples)           |
+------------------------+---------------------------------------+
| **Mean function**      | Constant (value of 1)                 |
+------------------------+---------------------------------------+
| **Sample noise**       | 1e-10                                 |
+------------------------+---------------------------------------+
| **Max iterations**     | 190                                   |
+------------------------+---------------------------------------+
**\*** *When the hyperparameters are optimized, the kernel parameters are learnt.*
The acquisition function is optimized as follows: an outer optimization process uses **DIRECT** for :math:`225d` iterations (where :math:`d` is the input dimension) and the solution found is fed as the initial point to an inner optimization process that uses **BOBYQA** for :math:`25d` iterations.
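This outer/inner pattern (a global search whose best point seeds a local refinement) can be sketched without either library; the following 1-D toy uses hypothetical helpers (`coarse_scan`, `golden_refine`, a coarse scan standing in for DIRECT and a golden-section search standing in for BOBYQA), not the actual BayesOpt or Limbo code:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <functional>

// Outer stage: coarse scan of n + 1 evenly spaced points in [0, 1];
// returns the argmax among them (a stand-in for DIRECT's global search).
double coarse_scan(const std::function<double(double)>& f, int n)
{
    double best_x = 0., best_v = f(0.);
    for (int i = 1; i <= n; ++i) {
        double x = double(i) / n;
        double v = f(x);
        if (v > best_v) { best_v = v; best_x = x; }
    }
    return best_x;
}

// Inner stage: golden-section maximization around the seed point
// (a stand-in for BOBYQA's derivative-free local refinement).
double golden_refine(const std::function<double(double)>& f,
                     double x0, double radius, int iters)
{
    double a = std::max(0., x0 - radius), b = std::min(1., x0 + radius);
    const double phi = (std::sqrt(5.) - 1.) / 2.;
    for (int i = 0; i < iters; ++i) {
        double c = b - phi * (b - a);
        double d = a + phi * (b - a);
        if (f(c) > f(d)) b = d; else a = c; // keep the interval holding the max
    }
    return (a + b) / 2.;
}
```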
Results
~~~~~~~~
.. figure:: ./pics/benchmark_limbo_bayes_opt.png
:alt: Benchmarks vs BayesOpt C++ library
:target: ./_images/benchmark_limbo_bayes_opt.png
Two configurations are tested: with and without optimization of the hyper-parameters of the Gaussian Process. Each experiment was replicated 250 times. The median of the data is pictured with a thick dot, while the box represents the first and third quartiles. The most extreme data points are delimited by the whiskers and the outliers are individually depicted as smaller circles.

According to the benchmarks we performed (see figure above), Limbo finds solutions of the same quality as BayesOpt in significantly less time: for the same accuracy (less than :math:`2 \cdot 10^{-3}` between the optimized solutions found by Limbo and BayesOpt), Limbo is between 1.47 and 1.76 times faster (median values) than BayesOpt when the hyperparameters are not optimized, and between 2.05 and 2.54 times faster when they are.
@@ -55,7 +55,7 @@ A GP is fully specified by its mean function :math:`\mu(\mathbf{x})` and covariance
 .. math::
-  k_{SE}(\chi_1, \chi_2) = \sigma_f^2 \cdot \exp\left( \frac{\left|\left|\chi_1, \chi_2\right|\right|^2}{2 l^2} \right)
+  k_{SE}(\chi_1, \chi_2) = \sigma_f^2 \cdot \exp\left( -\frac{\left|\left|\chi_1 - \chi_2\right|\right|^2}{2 l^2} \right)
 For some datasets, it makes sense to hand-tune these parameters (e.g., when there are very few samples). Ideally, our objective should be to learn :math:`l^2` (characteristic length scale) and :math:`\sigma_f^2` (overall variance).
......
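The corrected squared exponential covariance can be checked numerically. Below is a minimal Eigen-free sketch (`kernel_se` is a hypothetical helper, not Limbo's API); note the minus sign in the exponent, without which the covariance would blow up with distance instead of decaying:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// k_SE(x1, x2) = sigma_f^2 * exp(-||x1 - x2||^2 / (2 l^2))
double kernel_se(const std::vector<double>& x1, const std::vector<double>& x2,
                 double sigma_sq = 1., double l = 1.)
{
    double sq_dist = 0.;
    for (std::size_t i = 0; i < x1.size(); ++i)
        sq_dist += (x1[i] - x2[i]) * (x1[i] - x2[i]);
    return sigma_sq * std::exp(-sq_dist / (2. * l * l));
}
```

Identical inputs give the full signal variance, and the covariance decays monotonically toward zero as the inputs move apart.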
@@ -40,6 +40,7 @@ Contents:
   guides/index
   api
   defaults
+  benchmarks
   faq
......
@@ -63,7 +63,7 @@ namespace limbo {
 Exponential kernel (see :cite:`brochu2010tutorial` p. 9).
 .. math::
-    k(v_1, v_2) = \sigma^2\exp \Big(-\frac{1}{l^2} ||v_1 - v_2||^2\Big)
+    k(v_1, v_2) = \sigma^2\exp \Big(-\frac{||v_1 - v_2||^2}{2l^2}\Big)
 Parameters:
   - ``double sigma_sq`` (signal variance)
......
@@ -61,6 +61,9 @@ namespace limbo {
         /// @ingroup opt_defaults
         /// number of replicates
         BO_PARAM(int, repeats, 10);
+        /// epsilon of deviation: init + [-epsilon,epsilon]
+        BO_PARAM(double, epsilon, 1e-2);
     };
 }
 namespace opt {
@@ -75,14 +78,18 @@ namespace limbo {
         template <typename F>
         Eigen::VectorXd operator()(const F& f, const Eigen::VectorXd& init, bool bounded) const
         {
+            assert(Params::opt_parallelrepeater::repeats() > 0);
+            assert(Params::opt_parallelrepeater::epsilon() > 0.);
             tools::par::init();
             using pair_t = std::pair<Eigen::VectorXd, double>;
             auto body = [&](int i) {
                 // clang-format off
-                Eigen::VectorXd r_init = tools::random_vector(init.size());
-                Eigen::VectorXd v = Optimizer()(f, init, bounded);
-                double lik = opt::eval(f, v);
-                return std::make_pair(v, lik);
+                Eigen::VectorXd r_deviation = tools::random_vector(init.size()).array() * 2. * Params::opt_parallelrepeater::epsilon() - Params::opt_parallelrepeater::epsilon();
+                Eigen::VectorXd v = Optimizer()(f, init + r_deviation, bounded);
+                double val = opt::eval(f, v);
+                return std::make_pair(v, val);
                 // clang-format on
             };
......
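The restart logic introduced here is easy to state on its own: each repeat perturbs the starting point by a uniform deviation in `[-epsilon, epsilon]` (computed as `u * 2 * epsilon - epsilon` from a uniform `u` in `[0, 1]`, mirroring how `r_deviation` is built from `tools::random_vector`), runs the inner optimizer from the perturbed point, and keeps the best result. A serial 1-D sketch with hypothetical names (not Limbo's templated, TBB-parallel implementation):

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <random>

// Run `local_opt` `repeats` times, each time from `init` perturbed by a
// uniform deviation in [-epsilon, epsilon], and return the best maximizer.
double repeat_with_deviation(
    const std::function<double(double)>& f,
    const std::function<double(const std::function<double(double)>&, double)>& local_opt,
    double init, double epsilon, int repeats, unsigned seed = 0)
{
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> u01(0., 1.);
    double best_x = init, best_v = f(init);
    for (int i = 0; i < repeats; ++i) {
        // u * 2 * epsilon - epsilon, with u in [0, 1] -> deviation in [-epsilon, epsilon]
        double deviation = u01(gen) * 2. * epsilon - epsilon;
        double x = local_opt(f, init + deviation);
        if (f(x) > best_v) { best_v = f(x); best_x = x; }
    }
    return best_x;
}
```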
@@ -58,7 +58,11 @@ namespace limbo {
 namespace defaults {
     struct opt_rprop {
         /// @ingroup opt_defaults
+        /// number of max iterations
         BO_PARAM(int, iterations, 300);
+        /// gradient norm epsilon for stopping
+        BO_PARAM(double, eps_stop, 0.0);
     };
 }
 namespace opt {
@@ -78,14 +82,15 @@ namespace limbo {
         template <typename F>
         Eigen::VectorXd operator()(const F& f, const Eigen::VectorXd& init, bool bounded) const
         {
-            // params
+            assert(Params::opt_rprop::eps_stop() >= 0.);
             size_t param_dim = init.size();
             double delta0 = 0.1;
             double deltamin = 1e-6;
             double deltamax = 50;
             double etaminus = 0.5;
             double etaplus = 1.2;
-            double eps_stop = 0.0;
+            double eps_stop = Params::opt_rprop::eps_stop();
             Eigen::VectorXd delta = Eigen::VectorXd::Ones(param_dim) * delta0;
             Eigen::VectorXd grad_old = Eigen::VectorXd::Zero(param_dim);
......
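For reference, here is a serial 1-D sketch of the Rprop loop above (a hypothetical `rprop_1d`, not Limbo's Eigen-based implementation) using the same constants (`delta0 = 0.1`, `etaminus = 0.5`, `etaplus = 1.2`) and the newly configurable `eps_stop` early exit; `eps_stop = 0` reproduces the old behaviour of always running all iterations:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <functional>

// Gradient-ascent Rprop in one dimension: the step size grows (etaplus)
// while the gradient keeps its sign, shrinks (etaminus) on a sign change,
// and the loop exits early once |gradient| < eps_stop.
double rprop_1d(const std::function<double(double)>& grad, double x,
                int iterations, double eps_stop)
{
    double delta = 0.1;  // delta0
    const double deltamin = 1e-6, deltamax = 50.;
    const double etaminus = 0.5, etaplus = 1.2;
    double grad_old = 0.;
    for (int i = 0; i < iterations; ++i) {
        double g = grad(x);
        if (std::abs(g) < eps_stop) // new stopping criterion
            break;
        if (g * grad_old > 0.)
            delta = std::min(delta * etaplus, deltamax);
        else if (g * grad_old < 0.)
            delta = std::max(delta * etaminus, deltamin);
        // step in the direction of the gradient (ascent)
        x += (g > 0. ? 1. : (g < 0. ? -1. : 0.)) * delta;
        grad_old = g;
    }
    return x;
}
```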
@@ -49,7 +49,9 @@
 #include <boost/test/unit_test.hpp>
 #include <limbo/opt/chained.hpp>
+#include <limbo/opt/parallel_repeater.hpp>
 #include <limbo/opt/cmaes.hpp>
+#include <limbo/opt/rprop.hpp>
 #include <limbo/opt/grid_search.hpp>
 #include <limbo/opt/random_point.hpp>
 #include <limbo/tools/macros.hpp>
@@ -60,6 +62,15 @@ struct Params {
     struct opt_gridsearch {
         BO_PARAM(int, bins, 20);
     };
+    struct opt_parallelrepeater {
+        BO_PARAM(int, repeats, 2);
+        BO_PARAM(double, epsilon, 0.1);
+    };
+    struct opt_rprop : public defaults::opt_rprop {
+        BO_PARAM(int, iterations, 150);
+    };
 };
 // test with a standard function
@@ -82,6 +93,18 @@ struct FakeAcquiBi {
     }
 };
+// test with gradient
+int simple_calls = 0;
+bool check_grad = false;
+std::vector<Eigen::VectorXd> starting_points;
+opt::eval_t simple_func(const Eigen::VectorXd& v, bool eval_grad)
+{
+    assert(!check_grad || eval_grad);
+    simple_calls++;
+    starting_points.push_back(v);
+    return {-(v(0) * v(0) + 2. * v(0)), limbo::tools::make_vector(-(2 * v(0) + 2.))};
+}
 BOOST_AUTO_TEST_CASE(test_random_mono_dim)
 {
     using namespace limbo;
@@ -144,6 +167,44 @@ BOOST_AUTO_TEST_CASE(test_grid_search_bi_dim)
     BOOST_CHECK_EQUAL(bidim_calls, (Params::opt_gridsearch::bins() + 1) * (Params::opt_gridsearch::bins() + 1) + 21);
 }
+BOOST_AUTO_TEST_CASE(test_gradient)
+{
+    using namespace limbo;
+    opt::Rprop<Params> optimizer;
+    simple_calls = 0;
+    check_grad = true;
+    Eigen::VectorXd best_point = optimizer(simple_func, Eigen::VectorXd::Constant(1, 2.0), false);
+    BOOST_CHECK_EQUAL(best_point.size(), 1);
+    BOOST_CHECK(std::abs(best_point(0) + 1.) < 1e-3);
+    BOOST_CHECK_EQUAL(simple_calls, Params::opt_rprop::iterations());
+}
+BOOST_AUTO_TEST_CASE(test_parallel_repeater)
+{
+#ifdef USE_TBB
+    static tbb::task_scheduler_init init(1);
+#endif
+    using namespace limbo;
+    opt::ParallelRepeater<Params, opt::Rprop<Params>> optimizer;
+    simple_calls = 0;
+    check_grad = false;
+    starting_points.clear();
+    Eigen::VectorXd best_point = optimizer(simple_func, Eigen::VectorXd::Constant(1, 2.0), false);
+    BOOST_CHECK_EQUAL(best_point.size(), 1);
+    BOOST_CHECK(std::abs(best_point(0) + 1.) < 1e-3);
+    BOOST_CHECK_EQUAL(simple_calls, Params::opt_parallelrepeater::repeats() * Params::opt_rprop::iterations() + Params::opt_parallelrepeater::repeats());
+    BOOST_CHECK_EQUAL(starting_points.size(), simple_calls);
+    BOOST_CHECK(starting_points[0](0) >= 2. - Params::opt_parallelrepeater::epsilon() && starting_points[0](0) <= 2. + Params::opt_parallelrepeater::epsilon());
+    BOOST_CHECK(starting_points[Params::opt_rprop::iterations() + 1](0) >= 2. - Params::opt_parallelrepeater::epsilon() && starting_points[Params::opt_rprop::iterations() + 1](0) <= 2. + Params::opt_parallelrepeater::epsilon());
+#ifdef USE_TBB
+    tools::par::init();
+#endif
+}
 BOOST_AUTO_TEST_CASE(test_chained)
 {
     using namespace limbo;
......
@@ -52,6 +52,7 @@ import time
 import threading
 import params
 import license
+from waflib import Logs
 from waflib.Tools import waf_unit_test
 json_ok = True
@@ -59,7 +60,7 @@ try:
     import simplejson
 except:
     json_ok = False
-    print "WARNING simplejson not found some function may not work"
+    Logs.pprint('YELLOW', 'WARNING: simplejson not found some function may not work')
 def add_create_options(opt):
     opt.add_option('--dim_in', type='int', dest='dim_in', help='Number of dimensions for the function to optimize [default: 1]')
@@ -99,7 +100,7 @@ def create_exp(name, opt):
     if not os.path.exists('exp'):
         os.makedirs('exp')
     if os.path.exists('exp/' + name):
-        print 'ERROR: experiment ' + name + ' already exists. Please remove it if you want to re-create it from scratch.'
+        Logs.pprint('RED', 'ERROR: experiment \'%s\' already exists. Please remove it if you want to re-create it from scratch.' % name)
         return
     os.mkdir('exp/' + name)
@@ -161,7 +162,7 @@ def _sub_script(tpl, conf_file):
         ld_lib_path = os.environ['LD_LIBRARY_PATH']
     else:
         ld_lib_path = "''"
-    print 'LD_LIBRARY_PATH=' + ld_lib_path
+    Logs.pprint('NORMAL', 'LD_LIBRARY_PATH=%s' % ld_lib_path)
     # parse conf
     list_exps = simplejson.load(open(conf_file))
     fnames = []
@@ -200,7 +201,7 @@ def _sub_script(tpl, conf_file):
         try:
             os.makedirs(directory)
         except:
-            print "WARNING, dir:" + directory + " not be created"
+            Logs.pprint('YELLOW', 'WARNING: directory \'%s\' could not be created' % directory)
         subprocess.call('cp ' + bin_dir + '/' + e + ' ' + directory, shell=True)
         src_dir = bin_dir.replace('build/', '')
         subprocess.call('cp ' + src_dir + '/params_*.txt ' + directory, shell=True)
@@ -225,7 +226,7 @@ def _sub_script_local(conf_file):
         ld_lib_path = os.environ['LD_LIBRARY_PATH']
     else:
         ld_lib_path = "''"
-    print 'LD_LIBRARY_PATH=' + ld_lib_path
+    Logs.pprint('NORMAL', 'LD_LIBRARY_PATH=%s' % ld_lib_path)
     # parse conf
     list_exps = simplejson.load(open(conf_file))
     fnames = []
@@ -264,7 +265,7 @@ def _sub_script_local(conf_file):
         try:
             os.makedirs(directory)
         except:
-            print "WARNING, dir:" + directory + " not be created"
+            Logs.pprint('YELLOW', 'WARNING: directory \'%s\' could not be created' % directory)
         subprocess.call('cp ' + bin_dir + '/' + e + ' ' + '"' + directory + '"', shell=True)
         src_dir = bin_dir.replace('build/', '')
         subprocess.call('cp ' + src_dir + '/params_*.txt ' + directory, shell=True)
@@ -282,7 +283,7 @@ def run_local(conf_file, serial = True):
     threads = []
     for (fname, directory) in fnames:
         s = "cd " + '"' + directory + '"' + " && " + "./" + fname + ' ' + arguments
-        print "Executing " + s
+        Logs.pprint('NORMAL', "Executing: %s" % s)
         if not serial:
             t = threading.Thread(target=run_local_one, args=(directory,s,))
             threads.append(t)
@@ -315,9 +316,9 @@ exec @exec
     fnames = _sub_script(tpl, conf_file)
     for (fname, directory) in fnames:
         s = "qsub -d " + directory + " " + fname
-        print "executing:" + s
+        Logs.pprint('NORMAL', 'executing: %s' % s)
         retcode = subprocess.call(s, shell=True, env=None)
-        print "qsub returned:" + str(retcode)
+        Logs.pprint('NORMAL', 'qsub returned: %s' % str(retcode))
 def oar(conf_file):
@@ -329,13 +330,13 @@ def oar(conf_file):
 export LD_LIBRARY_PATH=@ld_lib_path
 exec @exec
 """
-    print 'WARNING [oar]: MPI not supported yet'
+    Logs.pprint('YELLOW', 'WARNING [oar]: MPI not supported yet')
     fnames = _sub_script(tpl, conf_file)
     for (fname, directory) in fnames:
         s = "oarsub -d " + directory + " -S " + fname
-        print "executing:" + s
+        Logs.pprint('NORMAL', 'executing: %s' % s)
         retcode = subprocess.call(s, shell=True, env=None)
-        print "oarsub returned:" + str(retcode)
+        Logs.pprint('NORMAL', 'oarsub returned: %s' % str(retcode))
 def output_params(folder):
     files = [each for each in os.listdir(folder) if each.endswith('.cpp')]
......
@@ -101,7 +101,7 @@ def extract_params(fname):
     if '}' in line:
         level.pop(-1)
     if '#if' in line:
         ifdefs += [line]
     if '#else' in line:
         ifdefs[-1] = 'NOT ' + ifdefs[-1]
     if '#elif' in line:
......
@@ -53,7 +53,7 @@
 Usage:
 def options(opt):
     opt.load('xcode')
$ waf configure xcode