- 27 Oct, 2021 6 commits
-
-
Noric Couderc authored
The PAPICounter class checks that the counter works on the current machine every time you create one. That way, we don't need the list.
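A minimal Kotlin sketch of that idea (the class and probe names here are illustrative, not the project's actual PAPI binding):

```kotlin
// Hypothetical sketch: validate the counter at construction time instead
// of maintaining a hard-coded list of supported counters.
class PapiCounter(val name: String) {
    init {
        require(isAvailable(name)) {
            "PAPI counter $name is not available on this machine"
        }
    }

    companion object {
        // Stand-in for whatever availability probe the PAPI binding offers.
        private fun isAvailable(name: String): Boolean = name.startsWith("PAPI_")
    }
}
```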
-
Noric Couderc authored
This can replace the PapiRunner family of classes. It just gathers all your features nicely, in a lot less code! Added some tests too!
-
Noric Couderc authored
Sometimes we only trace the default counters, and then the tracer might not know we want to trace. So we set the flag.
-
Noric Couderc authored
When you create a PAPI counter, it will check that this counter is available on the machine! That probably means you shouldn't create too many counters, but you probably don't need to anyway.
-
Noric Couderc authored
BenchmarkInvocations -> TotalMethodInvocations
MethodIterationCounts -> MethodInvocations
-
Noric Couderc authored
Because we might need to iterate over them (and because the eventSet is actually a list), we want to make sure the counters are not mixed up.
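As a sketch of why ordering matters (illustrative names only): an event set returns its values positionally, so a list keeps each counter paired with its reading.

```kotlin
// Illustrative: pair counters with the positional values an event set returns.
data class CounterReading(val counter: String, val value: Long)

fun pairReadings(counters: List<String>, values: LongArray): List<CounterReading> {
    require(counters.size == values.size) { "one value per counter expected" }
    return counters.zip(values.toList()) { counter, value -> CounterReading(counter, value) }
}
```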
-
- 26 Oct, 2021 10 commits
-
-
Noric Couderc authored
We move this to the "papicounters" package, since it uses PAPI.
-
Noric Couderc authored
Takes a specification of features and builds the PAPI eventSet you'd need in order to use it.
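A hedged sketch of what such a builder might look like (the types and counter names are assumptions for illustration, not the project's actual API):

```kotlin
// Each feature specification declares the raw PAPI counters it needs;
// the builder returns them deduplicated, in a stable order.
interface FeatureSpec {
    val requiredCounters: List<String>
}

data class CacheMissRatioSpec(
    override val requiredCounters: List<String> = listOf("PAPI_L1_DCM", "PAPI_L1_DCA")
) : FeatureSpec

fun buildEventSet(specs: List<FeatureSpec>): List<String> =
    specs.flatMap { it.requiredCounters }.distinct()
```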
-
Noric Couderc authored
A FeatureSet is a group of features that we want to interpret in a certain way.
-
Noric Couderc authored
-
Noric Couderc authored
These are related to CSV printing.
-
Noric Couderc authored
We used PAPI counters before, and it's true these are the most relevant, but it makes sense to make this more general.
-
Noric Couderc authored
We'll probably use that for creating benchmarks and stuff
-
Noric Couderc authored
We use the PAPICounter type for specifying lists of counters and stuff, instead of the map of strings. This makes the code _MUCH BETTER_.
-
Noric Couderc authored
That required creating a data structure for representing the features that I want (in the Feature.kt file), which was something I really needed :) I added structures for different types of features, and some combinators for ratios of two different features!
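A sketch of the kind of structure this describes (names are illustrative, not necessarily the contents of Feature.kt): features form a small algebra, and a ratio combinator divides one feature by another.

```kotlin
sealed class Feature {
    abstract fun compute(counters: Map<String, Long>): Double
}

// A feature backed directly by one raw counter.
data class CounterValue(val counter: String) : Feature() {
    override fun compute(counters: Map<String, Long>): Double =
        counters.getValue(counter).toDouble()
}

// A combinator: the ratio of two other features.
data class Ratio(val numerator: Feature, val denominator: Feature) : Feature() {
    override fun compute(counters: Map<String, Long>): Double =
        numerator.compute(counters) / denominator.compute(counters)
}

// Example: an L1 miss ratio built from two raw counters.
val l1MissRatio = Ratio(CounterValue("PAPI_L1_DCM"), CounterValue("PAPI_L1_DCA"))
```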
-
Noric Couderc authored
-
- 25 Oct, 2021 1 commit
-
-
Noric Couderc authored
If the user has PAPI_LIBRARY_PATH set, we add it to the list of options.
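One way this could look in a Gradle Kotlin DSL build script (assuming the "option" is a JVM argument on exec-style tasks; the real build file may be Groovy and may add it elsewhere):

```kotlin
// Only add the option when the environment variable is actually set.
tasks.withType<JavaExec>().configureEach {
    System.getenv("PAPI_LIBRARY_PATH")?.let { papiPath ->
        jvmArgs("-Djava.library.path=$papiPath")
    }
}
```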
-
- 22 Oct, 2021 3 commits
-
-
Noric Couderc authored
-
Noric Couderc authored
We move the setting of java.library.path for brainy executables into the main build.gradle, inside a plugin. That way, we don't need to configure the sub-projects.
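A sketch of such a plugin, written in Kotlin (e.g. under buildSrc); the real project uses build.gradle, so the details are assumptions:

```kotlin
import org.gradle.api.Plugin
import org.gradle.api.Project
import org.gradle.api.tasks.JavaExec

// Applied from the root build script (e.g. subprojects { ... }) so each
// subproject no longer has to set java.library.path itself.
class PapiLibraryPathPlugin : Plugin<Project> {
    override fun apply(project: Project) {
        project.tasks.withType(JavaExec::class.java).configureEach { task ->
            task.jvmArgs("-Djava.library.path=/usr/local/lib")  // illustrative path
        }
    }
}
```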
-
Noric Couderc authored
Instead of just hardcoding one path, we have many, and we'll try all the paths until one works.
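A small sketch of the probing idea (the paths and library name are illustrative assumptions):

```kotlin
import java.io.File

// Try a handful of candidate directories and keep the first one that
// actually contains the PAPI shared library.
fun findPapiLibraryDir(): String? {
    val candidates = listOfNotNull(
        System.getenv("PAPI_LIBRARY_PATH"),
        "/usr/local/lib",
        "/usr/lib/x86_64-linux-gnu",
    )
    return candidates.firstOrNull { dir -> File(dir, "libpapi.so").exists() }
}
```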
-
- 21 Oct, 2021 4 commits
-
-
Noric Couderc authored
The function loadSeeds() is called during configuration, not when the task is executed. Therefore, if the file containing the seeds does not exist, ALL the other tasks would fail. This change prevents that.
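A hedged sketch of one way to guard against this (the actual fix may differ; the helper name is made up):

```kotlin
import java.io.File

// Return no seeds when the file is missing, so configuration of the other
// tasks is not broken just because this file has not been generated yet.
fun loadSeedsIfPresent(path: String): List<Long> {
    val file = File(path)
    if (!file.exists()) return emptyList()
    return file.readLines().mapNotNull { it.trim().toLongOrNull() }
}
```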
-
Noric Couderc authored
-
Noric Couderc authored
One important new addition is the reset() method, which lets us reset the counters without creating a brand new EventSet.
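A sketch of the shape of such a wrapper (hypothetical names; the real class would call into the PAPI binding):

```kotlin
// reset() clears the accumulated values without tearing down and
// re-creating the underlying event set.
class EventSetHandle(private val counters: List<String>) {
    private val values = LongArray(counters.size)

    fun reset() {
        values.fill(0L)  // the real implementation would reset PAPI's counters
    }

    fun read(): Map<String, Long> = counters.zip(values.toList()).toMap()
}
```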
-
Noric Couderc authored
We don't need UNIFORM for now.
-
- 18 Oct, 2021 6 commits
-
-
Noric Couderc authored
Because seeds were initialized to time() during sampling of interesting seeds, we couldn't load them from strings when running benchmarks. Now we use a Long instead, so that should work.
-
Noric Couderc authored
-
Noric Couderc authored
We can read the command-line arguments from STDIN instead of the args array. That should be helpful when we want to load a lot of seeds to benchmark in a row!
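A minimal sketch of that fallback (assuming whitespace-separated arguments; the details are guesses):

```kotlin
fun main(args: Array<String>) {
    // Fall back to STDIN when no arguments were passed on the command line.
    val effectiveArgs: List<String> =
        if (args.isNotEmpty()) args.toList()
        else generateSequence(::readLine)
            .flatMap { line -> line.split(' ', '\t').asSequence() }
            .filter { it.isNotBlank() }
            .toList()

    println("Received ${effectiveArgs.size} arguments")
}
```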
-
Noric Couderc authored
We can read the command-line arguments from STDIN instead of the args array. That should be helpful when we want to load a lot of seeds to benchmark in a row!
-
Noric Couderc authored
Loading from a big file was too slow; hopefully this makes it faster.
-
Noric Couderc authored
We use the file "seeds-benchmarks.csv" to load the data for the seeds we'd want to try, use the function loadSeeds to convert that into arguments JMH can take, and then pass those as arguments to the runner. (That can be a lot of arguments in some cases; I wonder if it can take them all.)
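A sketch of that flow, with assumptions: one seed per CSV line, benchmarks exposing a @Param named "seed", and an illustrative include pattern. None of these are confirmed by the commit.

```kotlin
import org.openjdk.jmh.runner.Runner
import org.openjdk.jmh.runner.options.OptionsBuilder
import java.io.File

fun main() {
    // Read one seed per non-empty line of the CSV.
    val seeds = File("seeds-benchmarks.csv")
        .readLines()
        .map { it.trim() }
        .filter { it.isNotEmpty() }

    val options = OptionsBuilder()
        .include(".*CollectionBenchmark.*")
        .param("seed", *seeds.toTypedArray())  // JMH runs the benchmark once per value
        .build()

    Runner(options).run()
}
```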
-
- 05 Oct, 2021 1 commit
-
-
Noric Couderc authored
Otherwise, the same seed could be created twice.
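A sketch of the deduplication idea (names are illustrative):

```kotlin
import kotlin.random.Random

// Remember the seeds handed out so far, so the sampler never produces
// the same seed twice.
fun sampleUniqueSeeds(count: Int, random: Random = Random.Default): List<Long> {
    val seen = LinkedHashSet<Long>()
    while (seen.size < count) {
        seen.add(random.nextLong())
    }
    return seen.toList()
}
```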
-
- 29 Sep, 2021 5 commits
-
-
Noric Couderc authored
I set it to the values I had on my laptop
-
Noric Couderc authored
Didn't work because we used a sequence instead of a list
-
Noric Couderc authored
We run it 30 times instead of 10.
-
Noric Couderc authored
It writes to a file, in a streamed fashion. That way, if the program is interrupted, you still get some data out.
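A sketch of the streaming write (column names and types are made up):

```kotlin
import java.io.File

// Append and flush one CSV row per result, so an interrupted run still
// leaves the rows produced so far on disk.
fun writeResultsStreaming(path: String, results: Sequence<Pair<Long, Double>>) {
    File(path).bufferedWriter().use { writer ->
        writer.appendLine("seed,score")
        for ((seed, score) in results) {
            writer.appendLine("$seed,$score")
            writer.flush()  // make each row durable as soon as it is produced
        }
    }
}
```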
-
Noric Couderc authored
I'm planning to deprecate it completely, so I think that makes sense. We use a simpler function that we already used for FixedIterationSeedSampler.
-
- 23 Sep, 2021 1 commit
-
-
Noric Couderc authored
This time, we tried to generate 100 seeds for each collection, with 30 runs per benchmark, and we limited the generation to 1000 seeds (as before). The main difference is that, to be picked this time, the winner had to be at least 5% better than the alternatives.
-
- 22 Sep, 2021 3 commits
-
-
Noric Couderc authored
-
Noric Couderc authored
We also add the exact number of wins for each collection, during the run.
-
Noric Couderc authored
When we use seed sampling, we use the rule that a collection is considered the "winner" if it's at least 5% better than all the other alternatives. If it isn't, the run is inconclusive and we don't keep that seed.
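A sketch of that 5% rule. It assumes lower scores are better (e.g. running time), which is an assumption, not something stated in the commit.

```kotlin
// Returns the winning collection, or null if the run is inconclusive.
fun pickWinner(scores: Map<String, Double>, margin: Double = 0.05): String? {
    val best = scores.entries.minByOrNull { it.value } ?: return null
    val others = scores.filterKeys { it != best.key }.values
    val clearWin = others.all { other -> best.value <= other * (1 - margin) }
    return if (clearWin) best.key else null
}
```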
-