Matthias Mayr / limbo

Commit 26ff63e3, authored Apr 16, 2018 by Jean-Baptiste Mouret
[ci skip] minor changes
Parent: 4f1191dc
Changes: 2 files
docs/benchmark_res_bo.inc @ 26ff63e3
This page presents benchmarks in which we compare the Bayesian optimization performance of LIMBO against BayesOpt (https://github.com/rmcantin/bayesopt, a state-of-the-art Bayesian optimization library in C++). Each library is given 200 evaluations (10 random samples + 190 function evaluations) to find the optimum of the hidden function. We compare both the accuracy of the obtained solution (the difference with the actual optimum) and the wall-clock time required by the library to run the optimization process. The results show that while the libraries generate solutions with similar accuracy (they are based on the same algorithm), LIMBO generates these solutions significantly faster than BayesOpt.
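The protocol above can be sketched in Python. This is a minimal illustration, not LIMBO's or BayesOpt's actual code: plain random search stands in for the Bayesian optimizer, and the sphere function with its known optimum is an assumption chosen only for the example.

```python
import random
import time

def run_benchmark(f, bounds, n_random=10, n_bo=190, true_best=0.0):
    """Give the optimizer 200 evaluations (10 random samples + 190 more),
    then report accuracy (gap to the known optimum) and wall-clock time."""
    start = time.time()
    best = float("inf")
    for _ in range(n_random + n_bo):
        # A real run would pick the 190 later points by Bayesian
        # optimization; random sampling keeps this sketch short.
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        best = min(best, f(x))
    accuracy = abs(best - true_best)   # difference with the actual optimum
    wall_clock = time.time() - start   # time to run the whole process
    return accuracy, wall_clock

# Example: the sphere function, whose optimum is 0 at the origin.
acc, secs = run_benchmark(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```

Both reported quantities correspond to the two axes compared in the benchmark plots: solution accuracy and wall-clock time.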
In addition to comparing the performance of the libraries with their default parameter values (and evaluating LIMBO with the same parameters as BayesOpt, see variant: limbo/bench_bayes_def), we also evaluate the performance of multiple variants of LIMBO, including different acquisition functions (UCB or EI), different inner optimizers (CMAES or DIRECT), and whether or not the hyper-parameters of the model are optimized. In all these comparisons, LIMBO is faster than BayesOpt (for similar results), even when BayesOpt is not optimizing the hyper-parameters of the Gaussian processes.
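For reference, the two acquisition functions mentioned above can be written down directly. This is a generic textbook sketch for maximization, assuming a Gaussian-process posterior mean `mu` and standard deviation `sigma` at a candidate point; the `alpha` and `xi` defaults are illustrative, not the settings used by LIMBO or BayesOpt.

```python
import math

def ucb(mu, sigma, alpha=0.5):
    """Upper Confidence Bound: posterior mean plus an exploration bonus."""
    return mu + alpha * sigma

def ei(mu, sigma, best, xi=0.0):
    """Expected Improvement over the best observed value (maximization)."""
    if sigma == 0.0:
        return 0.0
    z = (mu - best - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # N(0,1) density
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # N(0,1) CDF
    return (mu - best - xi) * cdf + sigma * pdf
```

The inner optimizer (e.g. CMAES or DIRECT) is then used to maximize one of these functions over the search space to select the next point to evaluate.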
Details
...
...
waf_tools/plot_bo_benchmarks.py @ 26ff63e3
...
...
@@ -222,11 +222,11 @@ def plot(func_name, data, rst_file):
     ax.set_yticklabels([])
     ax.set_title("Wall clock time (s)")
-    notes = get_notes()
+    notes = get_notes()
     name = func_name.split('.')[0]
     fig.savefig("benchmark_results/fig_benchmarks/" + name + ".png")
-    rst_file.write(name + "\n")
+    rst_file.write(name.title() + " function\n")
     rst_file.write("-----------------\n\n")
     rst_file.write(notes[name] + "\n\n")
     rst_file.write(str(len(da_acc[0])) + " replicates\n\n")
...
...
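The changed `write` line builds a section heading from the benchmark file's name instead of writing the raw name. For instance (the concrete file name "ackley.dat" is only an assumption for illustration):

```python
# Mirror the new heading logic from plot(); "ackley.dat" is a made-up example.
func_name = "ackley.dat"
name = func_name.split('.')[0]          # "ackley"
heading = name.title() + " function\n"  # what the new code writes
print(heading)                          # Ackley function
```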