- 20 Nov, 2021 2 commits
-
-
Noric Couderc authored
No need to manually call "accept" on the features; that was kinda ugly.
-
Noric Couderc authored
I used to create several eventSets that were not needed. Now I pass the FeatureSet, and we use the EventSetBuilder to make an eventSet based on it. The EventSetBuilder caches eventSets, so it should not build unnecessary ones; it should create only one for a full run (they don't change often).
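A minimal sketch of the caching idea, assuming hypothetical shapes for FeatureSet, EventSet, and EventSetBuilder (the message names them but does not show their definitions):

```kotlin
// Hypothetical sketch; FeatureSet, EventSet, and EventSetBuilder shapes
// are assumptions, not the project's real API.
data class FeatureSet(val counters: List<String>)

class EventSet(val counters: List<String>)

object EventSetBuilder {
    private val cache = mutableMapOf<FeatureSet, EventSet>()

    // Returns the cached EventSet when one was already built for this
    // FeatureSet, so a full run creates only one (feature sets rarely change).
    fun forFeatures(features: FeatureSet): EventSet =
        cache.getOrPut(features) { EventSet(features.counters) }
}

fun main() {
    val features = FeatureSet(listOf("PAPI_TOT_CYC", "PAPI_TOT_INS"))
    println(EventSetBuilder.forFeatures(features) === EventSetBuilder.forFeatures(features)) // true
}
```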
-
- 04 Nov, 2021 2 commits
-
-
Noric Couderc authored
I removed the numRuns and counters parameters from the constructor of PapiRunner; they are passed in the specs instead, and as parameters of the specific functions that need them. This way I can pass a PapiRunner without a number of runs to an Experiment and run it; otherwise you would have to check that the PapiRunner has the same number of runs as the Experiment.
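Roughly the shape of the change, as a sketch; only PapiRunner is named in the commit, everything else (runSpec's signature, the return type) is assumed:

```kotlin
// Before: the run count lived in the runner and could disagree with the
// Experiment's own run count.
class OldPapiRunner(private val numRuns: Int, private val counters: List<String>)

// After: the runner carries no run count; each call that needs one
// receives it through its spec/parameters.
class PapiRunner {
    fun runSpec(numRuns: Int, counters: List<String>, benchmark: () -> Unit): List<Map<String, Long>> =
        List(numRuns) {
            benchmark()
            counters.associateWith { 0L } // placeholder for actual PAPI reads
        }
}
```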
-
Noric Couderc authored
PapiBenchmarkAnalyzer.RunSpec no longer takes the number of iterations as a parameter; that is a parameter of the runner itself (the same value was always used anyway).
-
- 26 Oct, 2021 2 commits
-
-
Noric Couderc authored
We used PAPI counters before, and it's true that these are the most relevant, but it makes sense to make this more general.
-
Noric Couderc authored
We use the PAPICounter type for specifying lists of counters, instead of the map of strings. This makes the code _MUCH BETTER_.
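The commit doesn't show the type itself; the idea (a dedicated counter type instead of stringly-typed maps) might look like this sketch, with the enum entries taken from standard PAPI preset names:

```kotlin
// Hypothetical PAPICounter type; the real definition is not shown here.
enum class PAPICounter { PAPI_TOT_CYC, PAPI_TOT_INS, PAPI_L1_DCM, PAPI_BR_MSP }

// Before: stringly typed, so typos compile fine.
fun runWithStrings(counters: Map<String, String>) { println(counters) }

// After: the compiler rejects unknown counters.
fun runWithCounters(counters: List<PAPICounter>) { println(counters) }

fun main() {
    runWithCounters(listOf(PAPICounter.PAPI_TOT_CYC, PAPICounter.PAPI_TOT_INS))
}
```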
-
- 19 Sep, 2021 1 commit
-
-
Noric Couderc authored
We add some fields to BCBenchmarkPackage, plus a test to make sure we don't run a benchmark more than once per iteration.
-
- 18 Sep, 2021 2 commits
-
-
Noric Couderc authored
We removed the need for a StopWatchRunner by making sure we run the benchmark only once, even for getting the number of cycles per operation type.
-
Noric Couderc authored
The PapiRunner class now gathers all the counters in one single run. For that, we need to gather a limited number of counters: PAPI_TOT_CYC, PAPI_TOT_INS, and at most two more. Because we don't do any aggregation anymore either, I needed to change the functions so they return a list of values, one per iteration.
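A hedged sketch of that constraint, with invented function and parameter names; only the PAPI preset names come from the commit:

```kotlin
// PAPI_TOT_CYC and PAPI_TOT_INS plus at most two extra counters, with no
// aggregation, so the result holds one raw value per iteration and counter.
fun gatherAll(extraCounters: List<String>, iterations: Int, benchmark: () -> Unit): Map<String, List<Long>> {
    require(extraCounters.size <= 2) { "one run fits PAPI_TOT_CYC, PAPI_TOT_INS, and at most 2 more" }
    val counters = listOf("PAPI_TOT_CYC", "PAPI_TOT_INS") + extraCounters
    val raw = counters.associateWith { mutableListOf<Long>() }
    repeat(iterations) {
        benchmark()
        counters.forEach { c -> raw.getValue(c).add(0L) } // placeholder for a PAPI read
    }
    return raw
}
```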
-
- 13 Sep, 2021 1 commit
-
-
Noric Couderc authored
We used to run the benchmark many times and aggregate the counter values (median). Now, when we print the features, we don't do that anymore; we print the raw data. Doing this required adding an "iteration" number for each specific run.
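One plausible shape of the raw output, with an invented column layout (only the per-iteration idea comes from the commit):

```kotlin
// No median: one row per iteration, tagged with its iteration number.
fun printRawFeatures(benchmark: String, perIteration: Map<String, List<Long>>) {
    println("benchmark,iteration,counter,value")
    perIteration.forEach { (counter, values) ->
        values.forEachIndexed { iteration, value ->
            println("$benchmark,$iteration,$counter,$value")
        }
    }
}
```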
-
- 25 Aug, 2021 2 commits
-
-
Noric Couderc authored
map -> map + zip
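The message is terse; one possible reading, purely a guess, is that an index-based map was split into a zip followed by a map:

```kotlin
fun main() {
    val names = listOf("ArrayList", "LinkedList")
    val cycles = listOf(120L, 340L)

    // Before: a single index-based map.
    val before = names.indices.map { i -> "${names[i]}=${cycles[i]}" }

    // After: zip the lists, then map over the pairs.
    val after = names.zip(cycles).map { (name, c) -> "$name=$c" }

    println(before == after) // true
}
```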
-
Noric Couderc authored
Works similarly to how runSpec runs for a regular PapiRunner. Updated smrt.jar because we needed some methods to reset the tracer and set its counters dynamically.
-
- 23 Aug, 2021 2 commits
-
-
Noric Couderc authored
It would fail if there were no samples. Now we filter out the values for which there are zero samples.
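A sketch of that kind of guard, with invented names and an assumed median reduction:

```kotlin
// Drop counters with zero samples before reducing, so empty sample lists
// can no longer make the computation fail.
fun medianPerCounter(samples: Map<String, List<Long>>): Map<String, Long> =
    samples.filterValues { it.isNotEmpty() }
        .mapValues { (_, values) -> values.sorted()[values.size / 2] }
```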
-
Noric Couderc authored
This PapiTracerRunner shares similarities with PapiRunner, so we made an interface that contains what is common to the two classes.
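PapiRunner and PapiTracerRunner are named in the commit; what the shared interface actually contains is not shown, so the members here are assumptions:

```kotlin
// Hypothetical common interface for the two runner classes.
interface CounterRunner {
    val counters: List<String>
    fun run(benchmark: () -> Unit): Map<String, Long>
}

class PapiTracerRunner(override val counters: List<String>) : CounterRunner {
    override fun run(benchmark: () -> Unit): Map<String, Long> {
        benchmark()
        return counters.associateWith { 0L } // placeholder for tracer reads
    }
}
```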
-
- 24 Jun, 2021 1 commit
-
-
Noric Couderc authored
If you run something before calling start(), it is not recorded.
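An illustration of those semantics with an invented recorder class (the real class is not named here):

```kotlin
// Work done before start() leaves no trace in the recording.
class Recorder {
    private var recording = false
    val events = mutableListOf<String>()
    fun start() { recording = true }
    fun record(event: String) { if (recording) events.add(event) }
}

fun main() {
    val r = Recorder()
    r.record("warm-up")   // ignored: start() has not been called yet
    r.start()
    r.record("benchmark") // recorded
    println(r.events)     // [benchmark]
}
```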
-
- 19 May, 2021 2 commits
-
-
Noric Couderc authored
In debug mode, the printed PAPI features are deterministic values, which should be easy enough for classification. I will probably change what the value for each counter is, but it helps to make sure that the printed data is not complete garbage.
-
Noric Couderc authored
Since we'd like to check that the counters work in a deterministic way, we use a mock class for getting the counters.
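A sketch of the mock idea; CounterSource and the deterministic value scheme are assumptions:

```kotlin
// A counter source whose readings are a pure function of the counter name,
// so the printed features are deterministic.
interface CounterSource {
    fun read(counter: String): Long
}

class MockCounterSource : CounterSource {
    // Same counter name, same value, on every run.
    override fun read(counter: String): Long = counter.hashCode().toLong()
}
```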
-
- 18 May, 2021 1 commit
-
-
Noric Couderc authored
-
- 07 May, 2021 1 commit
-
-
Noric Couderc authored
-
- 05 May, 2021 4 commits
-
-
Noric Couderc authored
-
Noric Couderc authored
-
Noric Couderc authored
-
Noric Couderc authored
-
- 12 Apr, 2021 1 commit
-
-
Noric Couderc authored
The features go into two boxes: hardware and software. We print that next to the feature values, to make it much easier to sort out which one is which. This required some refactoring:
- Introduced a BenchmarkRunData class, which contains the data related to the PAPI counters; we use it instead of Triples.
- Extracted two methods: printSoftwareCounters and printHardwareCounters.
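A hedged reconstruction; BenchmarkRunData, printSoftwareCounters, and printHardwareCounters are named in the commit, but their shapes here are guesses:

```kotlin
enum class FeatureKind { HARDWARE, SOFTWARE }

// Replaces the old Triples with a named record for one benchmark run.
data class BenchmarkRunData(
    val benchmark: String,
    val kind: FeatureKind,
    val counters: Map<String, Long>,
)

fun printHardwareCounters(runs: List<BenchmarkRunData>) =
    runs.filter { it.kind == FeatureKind.HARDWARE }
        .forEach { println("hardware,${it.benchmark},${it.counters}") }

fun printSoftwareCounters(runs: List<BenchmarkRunData>) =
    runs.filter { it.kind == FeatureKind.SOFTWARE }
        .forEach { println("software,${it.benchmark},${it.counters}") }
```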
-
- 16 Feb, 2021 1 commit
-
-
Noric Couderc authored
The previous methods printed too many lines to be useful.
-
- 29 Jan, 2021 1 commit
-
-
Noric Couderc authored
If the trace contains methods that are not eligible, or methods that are not recognized, the benchmark is reported instead of having one report for every method.
-
- 18 Aug, 2020 1 commit
-
-
Noric Couderc authored
Black holes are not members of BCBenchmarkPackage anymore; they are handled by the class running the benchmarks and passed as an argument to runBenchmark().
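A sketch of the new ownership, where Blackhole is a stand-in class (not necessarily JMH's) and the runner's shape is assumed:

```kotlin
class Blackhole {
    @Volatile private var sink: Any? = null
    fun consume(value: Any?) { sink = value } // keeps results observably "used"
}

class BenchmarkRunner {
    private val blackhole = Blackhole() // no longer a member of each benchmark package

    // runBenchmark receives the black hole instead of each package storing one.
    fun runBenchmark(benchmark: (Blackhole) -> Unit) = benchmark(blackhole)
}
```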
-
- 15 Aug, 2020 1 commit
-
-
Noric Couderc authored
-
- 12 Aug, 2020 1 commit
-
-
Noric Couderc authored
This project will contain the main JBrainy libraries
-
- 06 Aug, 2020 1 commit
-
-
Noric Couderc authored
-
- 12 May, 2020 3 commits
-
-
Christoph Reichenbach authored
-
Christoph Reichenbach authored
-
Christoph Reichenbach authored
-
- 11 May, 2020 2 commits
-
-
Christoph Reichenbach authored
-
Christoph Reichenbach authored
-
- 19 Mar, 2020 2 commits
-
-
Noric Couderc authored
Erroneously ran the square of the needed number of benchmarks, which duplicated data.
-
Noric Couderc authored
Switched from exporting benchmark results as nested maps to exporting lists of triplets with benchmark, counter name, and values.
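A sketch of the flattening, assuming a list of values per counter; only the triple layout (benchmark, counter name, values) comes from the message:

```kotlin
// Before: nested maps, benchmark -> counter -> values.
// After: a flat list of triples, easier to dump as rows.
fun exportTriples(results: Map<String, Map<String, List<Long>>>): List<Triple<String, String, List<Long>>> =
    results.flatMap { (benchmark, counters) ->
        counters.map { (counter, values) -> Triple(benchmark, counter, values) }
    }
```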
-
- 18 Mar, 2020 2 commits
-
-
Noric Couderc authored
Instead of taking a boolean as a parameter, we added separate methods.
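An illustration of the pattern; the class, methods, and behaviors are invented:

```kotlin
class Exporter {
    // Before: fun export(aggregate: Boolean) steered the behavior with a flag.
    // After: one method per behavior, so call sites read unambiguously.
    fun exportAggregated() { /* median per counter */ }
    fun exportRaw() { /* one row per iteration */ }
}
```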
-
Noric Couderc authored
-
- 13 Mar, 2020 1 commit
-
-
Noric Couderc authored
The number of benchmarks used to be passed in a lot of places; now it's passed only to the class, and all methods use it.
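A sketch of the refactoring; the class and method names are invented:

```kotlin
// The count moves from every method signature into the class,
// and all methods read the field.
class BenchmarkSuite(private val numBenchmarks: Int) {
    fun warmUp() = repeat(numBenchmarks) { /* ... */ }
    fun measure() = repeat(numBenchmarks) { /* ... */ }
}
```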
-