JPerf 2.0

November 09, 2014

For those who don't know, JPerf is a simple performance and scalability testing library for Java. It's like JUnit but for performance.

This morning, I re-implemented JPerf from scratch using Java 8, making use of lambda expressions to simplify usage and making better use of java.util.concurrent. Here is a graph showing the current performance of a "no-op" test for the 2.0.0 release compared to the 1.3.1 release.

The difference in performance is staggering. This is because I am no longer using the 'synchronized' or 'volatile' keywords to share state between the test threads and the main thread; both force a thread to synchronize with main memory, which is costly in terms of performance. Instead, each test thread now tracks its counter in an AtomicLong, and an AtomicBoolean provides the mechanism to stop the test threads.
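The per-thread counter and stop-flag pattern can be sketched roughly like this (class and method names are mine for illustration, not JPerf's actual internals):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of the lock-free pattern described above: each worker
// thread counts iterations in its own AtomicLong, and a shared
// AtomicBoolean tells all workers when to stop.
class AtomicCounterSketch {

    // Runs threadCount no-op workers for roughly millis ms and returns
    // the total number of iterations across all workers.
    static long run(int threadCount, long millis) {
        AtomicBoolean stop = new AtomicBoolean(false);
        AtomicLong[] counters = new AtomicLong[threadCount];
        Thread[] workers = new Thread[threadCount];

        for (int i = 0; i < threadCount; i++) {
            AtomicLong counter = counters[i] = new AtomicLong();
            workers[i] = new Thread(() -> {
                // spin until the main thread flips the stop flag
                while (!stop.get()) {
                    counter.incrementAndGet();
                }
            });
            workers[i].start();
        }

        try {
            Thread.sleep(millis);   // let the "test" run
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        stop.set(true);             // signal all workers to finish

        long total = 0;
        for (int i = 0; i < threadCount; i++) {
            try {
                workers[i].join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            total += counters[i].get();
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println("samples: " + run(4, 100));
    }
}
```

No lock is ever taken on the hot path; the only coordination cost is the atomic increment and the read of the stop flag.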

The programmatic use of JPerf now looks like this:

// create config
PerfTestConfig config = JPerf.newConfigBuilder()
        .testFactory(() -> new EmptyTest())
        .build(); // assuming the builder chain is terminated with build()

// run test

It is also possible to run JPerf from the command line. I used Apache Commons CLI to provide a professional-looking command-line interface.

usage: jperf [-class <arg>] [-duration <arg>] [-increment <arg>] [-max <arg>]
       [-min <arg>]
 -class <arg>       Name of a class that implements org.jperf.PerfTest
 -duration <arg>    The duration in milliseconds (per thread level)
 -increment <arg>   The number of threads to increment by
 -max <arg>         The maximum number of threads to test with
 -min <arg>         The number of threads to start testing with

All arguments are optional, except for 'class'.

By default, output is to stdout in this format:

Running on Nov 9, 2014 10:23:38 PM with config: PerfTestConfig{minThreads=1, maxThreads=8, threadIncrement=1, duration=100}
Threads: 1: Samples: 18,819,228; Duration: 100; Throughput: 188,192,272
Threads: 2: Samples: 39,376,010; Duration: 100; Throughput: 393,760,064
Threads: 3: Samples: 58,370,142; Duration: 100; Throughput: 583,701,440
Threads: 4: Samples: 77,196,871; Duration: 100; Throughput: 771,968,704
Threads: 5: Samples: 75,282,927; Duration: 100; Throughput: 752,829,312
Threads: 6: Samples: 73,362,323; Duration: 100; Throughput: 733,623,168
Threads: 7: Samples: 71,601,751; Duration: 100; Throughput: 716,017,472
Threads: 8: Samples: 69,649,895; Duration: 100; Throughput: 696,499,008

It is easy to specify custom output formatters as well. For example, to use the CSV formatter:

PerfTestConfig config = JPerf.newConfigBuilder()
        .testFactory(() -> new NoOpTest())
        .resultWriter(new ResultWriterCSV())
        .build(); // assuming the builder chain is terminated with build()

This produces CSV output that can easily be loaded into a spreadsheet. You could also provide your own implementation of the ResultWriter interface to write to any other format.
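As an illustration, a custom writer might look like the sketch below. The shape of the real ResultWriter interface is assumed here (one callback per measurement), so treat the method names and parameters as hypothetical:

```java
// Hypothetical stand-in for JPerf's ResultWriter interface; the real
// interface's method signatures may differ.
interface ResultWriter {
    void write(int threads, long samples, long durationMs);
}

// Writes one tab-separated line per measurement, including a derived
// samples-per-second throughput column.
class ResultWriterTSV implements ResultWriter {

    // Builds the TSV line: threads, samples, duration (ms), throughput/s.
    String format(int threads, long samples, long durationMs) {
        long throughputPerSec = samples * 1000 / durationMs;
        return threads + "\t" + samples + "\t" + durationMs + "\t" + throughputPerSec;
    }

    @Override
    public void write(int threads, long samples, long durationMs) {
        System.out.println(format(threads, samples, durationMs));
    }
}
```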


The source code is available on GitHub, here: