Commit 4971c83d authored by Konstantinos Chatzilygeroudis

Changed the GP tutorial to include save/load

parent 5d3bf0a3
@@ -13,7 +13,7 @@ We assume that our samples are in a vector called ``samples`` and that our obser
.. literalinclude:: ../../src/tutorials/gp.cpp
:language: c++
:linenos:
:lines: 77-86
:lines: 79-88
Basic usage
------------
@@ -23,14 +23,14 @@ We first create a basic GP with an Exponential kernel (``kernel::Exp<Params>``)
.. literalinclude:: ../../src/tutorials/gp.cpp
:language: c++
:linenos:
:lines: 59-72
:lines: 61-74
The type of the GP is defined by the following lines:
.. literalinclude:: ../../src/tutorials/gp.cpp
:language: c++
:linenos:
:lines: 87-91
:lines: 89-93
To use the GP, we need:
@@ -40,7 +40,7 @@ To use the GP, we need :
.. literalinclude:: ../../src/tutorials/gp.cpp
:language: c++
:linenos:
:lines: 92-97
:lines: 94-99
Here we assume that the noise is the same for all samples and that it is equal to 0.01.
@@ -57,7 +57,7 @@ To visualize the predictions of the GP, we can query it for many points and reco
.. literalinclude:: ../../src/tutorials/gp.cpp
:language: c++
:linenos:
:lines: 101-110
:lines: 101-112
Hyper-parameter optimization
@@ -71,21 +71,21 @@ A new GP type is defined as follows:
.. literalinclude:: ../../src/tutorials/gp.cpp
:language: c++
:linenos:
:lines: 112-116
:lines: 114-118
It uses the default values for the parameters of ``SquaredExpARD``:
.. literalinclude:: ../../src/tutorials/gp.cpp
:language: c++
:linenos:
:lines: 64-67
:lines: 66-69
After calling the ``compute()`` method, the hyper-parameters can be optimized by calling the ``optimize_hyperparams()`` function. The GP does not need to be recomputed; we pass ``false`` as the last parameter of ``compute()`` because the kernel matrix does not need to be computed at this point (it will be recomputed during the hyper-parameter optimization).
.. literalinclude:: ../../src/tutorials/gp.cpp
:language: c++
:linenos:
:lines: 119-121
:lines: 121-123
We can have a look at the difference between the two GPs:
@@ -105,4 +105,25 @@ Here is the complete ``main.cpp`` file of this tutorial:
.. literalinclude:: ../../src/tutorials/gp.cpp
:language: c++
:lines: 48-
:lines: 46-
Saving and Loading
-------------------
We can also save our optimized GP model:
.. literalinclude:: ../../src/tutorials/gp.cpp
:language: c++
:linenos:
:lines: 140-141
If we want a more compact, binary format, we can replace ``TextArchive`` with ``BinaryArchive``.
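As a minimal sketch (assuming the ``limbo/serialize/binary_archive.hpp`` header, which provides ``serialize::BinaryArchive``, is available; the directory name ``myGP_binary`` is illustrative), saving in binary would look like:

.. code-block:: c++

    #include <limbo/serialize/binary_archive.hpp>

    // save the optimized GP in a compact binary format
    gp_ard.save<serialize::BinaryArchive>("myGP_binary");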
To load a saved model, we can do the following:
.. literalinclude:: ../../src/tutorials/gp.cpp
:language: c++
:linenos:
:lines: 143-144
Note that we need to use the same kernel and mean function (i.e., the same GP type) as the one used for saving.
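For example, here is a sketch of loading into a freshly declared GP and querying it (the ``GP2_t`` alias and the query point are illustrative assumptions; the type definition must match the one used when saving):

.. code-block:: c++

    // re-declare the exact GP type that was saved
    using GP2_t = model::GP<Params, kernel::SquaredExpARD<Params>,
        mean::Data<Params>, model::gp::KernelLFOpt<Params>>;

    GP2_t gp_loaded;
    gp_loaded.load<serialize::TextArchive>("myGP");

    // the loaded model can be queried immediately
    Eigen::VectorXd mu;
    double sigma_sq;
    std::tie(mu, sigma_sq) = gp_loaded.query(tools::make_vector(0.5));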
\ No newline at end of file
@@ -52,6 +52,8 @@
#include <limbo/tools.hpp>
#include <limbo/tools/macros.hpp>
#include <limbo/serialize/text_archive.hpp>
// this tutorial shows how to use a Gaussian process for regression
using namespace limbo;
@@ -134,5 +136,11 @@ int main(int argc, char** argv)
std::ofstream ofs_data("data.dat");
for (size_t i = 0; i < samples.size(); ++i)
ofs_data << samples[i].transpose() << " " << observations[i].transpose() << std::endl;
// Sometimes it is useful to save an optimized GP
gp_ard.save<serialize::TextArchive>("myGP");
// Later we can load it -- the GP type must be identical to the one used for saving
gp_ard.load<serialize::TextArchive>("myGP");
return 0;
}