# JBrainy issues

https://git.cs.lth.se/noricc/jbrainy/-/issues (updated 2021-06-30T08:47:11Z)

## #17 Is there an impact on performance when you use timed collections?
https://git.cs.lth.se/noricc/jbrainy/-/issues/17 (updated 2021-06-30T08:47:11Z, author Noric Couderc)

Compare the running times of benchmarks with normal and timed collections.

## #16 Add iteration to the list of timed operations on timed collections?
https://git.cs.lth.se/noricc/jbrainy/-/issues/16 (updated 2021-06-30T09:19:50Z, author Noric Couderc, assignee Noric Couderc)

We track how long some operations take (insertion, deletion, search); tracking iteration as well (for example in the case of `hashCode` and `toString`) might be useful.

## #15 Use time() to set the seed.
https://git.cs.lth.se/noricc/jbrainy/-/issues/15 (updated 2021-08-03T14:48:49Z, author Noric Couderc, assignee Noric Couderc)

Right now we use a constant seed for generating the benchmark. Brainy uses the current `time()`.

## #14 Select important features for classification
https://git.cs.lth.se/noricc/jbrainy/-/issues/14 (updated 2021-06-23T12:13:42Z, author Noric Couderc, assignee Noric Couderc)

Brainy only gathers a subset of the PAPI counters to do classification. They used genetic algorithms to pick the right subset.

## #13 Implement getting the cost of some methods as a feature
https://git.cs.lth.se/noricc/jbrainy/-/issues/13 (updated 2021-06-25T15:37:27Z, author Noric Couderc, assignee Noric Couderc)

The original Brainy gathered a feature that we don't: the time spent doing insertion, search, and deletion.

## #12 Implement adaptive generation of benchmarks
https://git.cs.lth.se/noricc/jbrainy/-/issues/12 (updated 2021-06-29T15:36:00Z, author Noric Couderc, assignee Noric Couderc)

The original Brainy generates benchmarks *until* it has at least *k* benchmarks where collection *c* wins, for each collection *c*. In Brainy, k = 10 000. JBrainy just uses a fixed number of seeds; it doesn't generate more of them if it doesn't have enough benchmarks where one of the collections wins.

## #11 Compare diversity of benchmarks for different generation schemes
https://git.cs.lth.se/noricc/jbrainy/-/issues/11 (updated 2021-06-23T11:52:30Z, author Noric Couderc, assignee Noric Couderc)

We implemented a new distribution for methods with the BRAINY mode. We would like to compare it with the other benchmark generation methods (Uniform, Polya, Markov). What we want to see is the number of winners for each collection and benchmark method.

## #10 Test Brainy benchmark generation
https://git.cs.lth.se/noricc/jbrainy/-/issues/10 (updated 2021-06-23T13:09:10Z, author Noric Couderc, assignee Noric Couderc)

We implemented a new distribution for the number of benchmarks; we should test that it works for many seeds.

## #9 Benchmark generation leads to invalid indexes
https://git.cs.lth.se/noricc/jbrainy/-/issues/9 (updated 2021-05-04T17:13:14Z, author Noric Couderc, assignee Christoph Reichenbach)

The following test fails:
```java
@Test
public void testBuggyBenchmark() {
    // Where we test a benchmark that triggers an IndexOutOfBoundsException.
    BCBenchmarkPackage b1 = BCBenchmarkPackage.LIST(3, 100, 100,
            MethodSelectionType.UNIFORM,
            new LinkedList<>());
    b1.reset();
    b1.runBenchmark(blackhole);
}
```
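For reference, the same exception can be reproduced with a plain `LinkedList`, independently of the generator, when a remove index that was valid earlier in the call sequence is reused after the list has shrunk (a hypothetical repro sketch, not the generated trace itself):

```java
import java.util.LinkedList;
import java.util.List;

public class InvalidIndexRepro {
    public static void main(String[] args) {
        List<Integer> list = new LinkedList<>(List.of(1, 2, 3));
        list.remove(2); // valid: size is 3, so indices 0..2 are in bounds
        try {
            list.remove(2); // invalid: size is now 2, so only 0..1 are in bounds
        } catch (IndexOutOfBoundsException e) {
            // Same shape of failure as the trace below: index >= current size.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

If the generator draws remove indices from the collection's initial size rather than its current size, this is exactly the pattern it would produce.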
When running the benchmark, we run into the following exception:
```
Index: 3, Size: 2
java.lang.IndexOutOfBoundsException: Index: 3, Size: 2
	at java.base/java.util.LinkedList.checkElementIndex(LinkedList.java:559)
	at java.base/java.util.LinkedList.remove(LinkedList.java:529)
	at __bcbench.TraceX_0.java.util.List.Bench_I3_D3_IL100_DL100.r_0(/Bench_3.java)
	at __bcbench.TraceX_0.java.util.List.Bench_I3_D3_IL100_DL100.run(/Bench_3.java)
	at se.lth.cs.bcgen.BCBenchStep.runBenchmark(BCBenchStep.java:67)
```Christoph ReichenbachChristoph Reichenbachhttps://git.cs.lth.se/noricc/jbrainy/-/issues/8Document how to run the training of JBrainy2021-04-09T11:41:16ZNoric CoudercDocument how to run the training of JBrainy@creichen needs it for running experiments on the Raspberry Pi.@creichen needs it for running experiments on the Raspberry Pi.Noric CoudercNoric Couderc2021-04-09https://git.cs.lth.se/noricc/jbrainy/-/issues/7jmhTest task fail2021-04-08T11:36:57ZNoric CoudercjmhTest task failWe get an error with a `NullPointerException` when running it. Something related to Markov chain based generation:
```
Benchmark had encountered error, and fail on error was requested
ERROR: org.openjdk.jmh.runner.RunnerException: Bench...We get an error with a `NullPointerException` when running it. Something related to Markov chain based generation:
```
Benchmark had encountered error, and fail on error was requested
ERROR: org.openjdk.jmh.runner.RunnerException: Benchmark caught the exception
	at org.openjdk.jmh.runner.Runner.runBenchmarks(Runner.java:585)
	at org.openjdk.jmh.runner.Runner.internalRun(Runner.java:320)
	at org.openjdk.jmh.runner.Runner.run(Runner.java:209)
	at org.openjdk.jmh.Main.main(Main.java:71)
Caused by: org.openjdk.jmh.runner.BenchmarkException: Benchmark error during the run
	at org.openjdk.jmh.runner.BenchmarkHandler.runIteration(BenchmarkHandler.java:428)
	at org.openjdk.jmh.runner.BaseRunner.runBenchmark(BaseRunner.java:261)
	at org.openjdk.jmh.runner.BaseRunner.runBenchmark(BaseRunner.java:233)
	at org.openjdk.jmh.runner.BaseRunner.doSingle(BaseRunner.java:138)
	at org.openjdk.jmh.runner.BaseRunner.runBenchmarksForked(BaseRunner.java:75)
	at org.openjdk.jmh.runner.ForkedRunner.run(ForkedRunner.java:72)
	at org.openjdk.jmh.runner.ForkedMain.main(ForkedMain.java:84)
	Suppressed: java.lang.NullPointerException
		at se.lth.cs.bcgen.MarkovMethodSelectionStrategy.selectNextMethod(MarkovMethodSelectionStrategy.java:28)
		at se.lth.cs.bcgen.BenchmarkGenerator.generateLineOfCode(BenchmarkGenerator.java:536)
		at se.lth.cs.bcgen.BenchmarkGenerator$MethodGenerator.writeNewExecutorMethod(BenchmarkGenerator.java:521)
		at se.lth.cs.bcgen.BenchmarkGenerator.gen(BenchmarkGenerator.java:354)
		at se.lth.cs.bcgen.BenchmarkGenerator.genBenchmark(BenchmarkGenerator.java:285)
		at se.lth.cs.jmh.ListApplicationBenchmark$ListBenchmarkState.doSetup(ListApplicationBenchmark.java:64)
		at se.lth.cs.jmh.generated.ListApplicationBenchmark_ListApplicationBenchmark_jmhTest.ListApplicationBenchmark_thrpt_jmhStub(ListApplicationBenchmark_ListApplicationBenchmark_jmhTest.java:124)
		at se.lth.cs.jmh.generated.ListApplicationBenchmark_ListApplicationBenchmark_jmhTest.ListApplicationBenchmark_Throughput(ListApplicationBenchmark_ListApplicationBenchmark_jmhTest.java:86)
		at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
		at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
		at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
		at java.lang.reflect.Method.invoke(Method.java:498)
		at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:453)
		at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:437)
		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
		at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
		at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
		at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
		at java.lang.Thread.run(Thread.java:748)
```

## #6 Test readCSVUnsafe against reference implementation
https://git.cs.lth.se/noricc/jbrainy/-/issues/6 (updated 2021-02-18T14:23:31Z, author Noric Couderc, assignee Noric Couderc)

I wrote a function called `readCSV` (which works), and I am now trying a function called `readCSVUnsafe`, which is optimized to take less memory while loading the data. I have tests for `readCSV`, and I assume they could be easily adapted to test `readCSVUnsafe` too.

## #5 Running generated benchmarks doesn't do anything.
https://git.cs.lth.se/noricc/jbrainy/-/issues/5 (updated 2021-02-15T16:16:13Z, author Noric Couderc, assignee Noric Couderc)

I wrote the following test:
```java
@Test
public void TestBenchmarkTrace() {
    int sz = 200000;
    BCBenchmarkPackage<Map<Object, Object>> pk = BCBenchmarkPackage.MAP(0, sz, 0,
            new HashMap<>());
    pk.reset();
    pk.runBenchmark(blackhole);
    blackhole.evaporate("Yes, I am Stephen Hawking, and know a thing or two about black holes.");
    BCBenchmarkPackage<Map<Object, Object>> copy = BCBenchmarkPackage.MAP("TRACE",
            pk.getTrace(),
            new HashMap<>());
    copy.reset();
    copy.runBenchmark(blackhole);
    blackhole.evaporate("Yes, I am Stephen Hawking, and know a thing or two about black holes.");
    Assert.assertEquals(sz, pk.getTrace().size());
    Assert.assertEquals(sz, copy.getTrace().size());
    Assert.assertEquals(pk.getTrace(), copy.getTrace());
    Assert.assertFalse(pk.getDataStructure().isEmpty());
    Assert.assertFalse(copy.getDataStructure().isEmpty());
    Assert.assertEquals(pk.getDataStructure(), copy.getDataStructure());
}
```
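The property this test encodes can be stated independently of JBrainy: replaying a recorded trace should rebuild an equal, non-empty structure. A minimal sketch of that property, using a hypothetical trace representation rather than `BCBenchmarkPackage`'s:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TraceReplay {
    // One recorded operation (hypothetical format, not JBrainy's trace type).
    record Put(String key, String value) {}

    // Replaying a trace applies each recorded operation in order.
    static Map<String, String> replay(List<Put> trace) {
        Map<String, String> m = new HashMap<>();
        for (Put op : trace) {
            m.put(op.key(), op.value());
        }
        return m;
    }

    public static void main(String[] args) {
        List<Put> trace = List.of(new Put("a", "1"), new Put("b", "2"));
        Map<String, String> original = replay(trace);
        Map<String, String> copy = replay(trace);
        // Two runs of the same trace must agree and must not be empty.
        System.out.println(original.equals(copy)); // true
        System.out.println(original.isEmpty());    // false
    }
}
```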
The data structure is empty for both benchmarks, which is very unlikely.

## #4 Low accuracy for ArrayList classifier
https://git.cs.lth.se/noricc/jbrainy/-/issues/4 (updated 2020-08-14T16:00:49Z, author Noric Couderc)

Running `train_model.py` yields very low accuracy for ArrayLists.

## #3 Low accuracy for Interface Specific Classification
https://git.cs.lth.se/noricc/jbrainy/-/issues/3 (updated 2021-06-23T11:47:32Z, author Noric Couderc)

If I split the classifier into several classifiers (one for each interface), the one for maps has a lower accuracy: ~0.75. Not certain why yet.

## #2 There was a bug with Map synthetic benchmarks.
https://git.cs.lth.se/noricc/jbrainy/-/issues/2 (updated 2020-03-12T17:59:22Z, author Noric Couderc)

The method `runPutAll` did not add any elements because the `argument` it uses was not filled when building the benchmark.

## #1 The methods "get(int)" and "get(Object)" collided in the function table
https://git.cs.lth.se/noricc/jbrainy/-/issues/1 (updated 2020-03-11T18:34:17Z, author Noric Couderc)

This should not be a big problem, since the `get()` method has different arguments for different interfaces. I changed the mapped method names: `runGet` for ints (for lists) and `runGetObject` for maps. That means, however, that the resulting training data will need a bit of processing before training.
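For context on the collision: `List.get(int)` looks up by position while `Map.get(Object)` looks up by key, so the two operations share the bare name `get` even though they are unrelated methods. A small illustration with plain JDK collections (not JBrainy's actual function table):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GetCollision {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(List.of("a", "b"));
        Map<Integer, String> map = new HashMap<>();
        map.put(5, "c");

        // Same simple name "get", two unrelated signatures:
        System.out.println(list.get(1)); // by position -> prints "b"
        System.out.println(map.get(5));  // by key      -> prints "c"

        // A table keyed only on the name "get" cannot tell these apart,
        // hence the split into runGet / runGetObject.
    }
}
```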