Using Perfidix is straightforward, and even easier if you are familiar with unit-testing frameworks such as JUnit. Perfidix aims to minimize the amount of dedicated benchmarking code by reusing existing test code: what is more convenient than benchmarking your unit tests?
Perfidix relies on the following two components:
The execution framework: either our Eclipse plugin Perclipse or a container class
Annotations for methods and classes to be benchmarked
While the first component relies on classes, the second consists of annotations applicable to any parameter-free void method. Similar to JUnit, setup and teardown methods can be applied to the benchmarks, although Perfidix offers finer granularity for these utility methods.
The following annotations are applied to classes and methods, together with suitable parameters:
- Has to be placed before the class declaration
- Every parameter-free void method is then benched, even without an explicit bench annotation
- Executed before every bench-method and after the BeforeBenchClass-annotated method
- Executed for all bench-methods but just once for all runs
- Executed before every bench-method and after the BeforeFirstBenchRun-annotated method
- Executed for all bench-methods before every run
- Annotates the method to bench
- Specific setUp method for this bench, executed once before the first run of this bench
- Specific setUp method for this bench, executed before every run of this bench
- Specific tearDown method for this bench, executed after every run of this bench
- Specific tearDown method for this bench, executed after the last run of this bench
- Executed after every bench-method and after the AfterEachBenchRun-annotated method
- Executed for all bench-methods after the last run
- Executed after the last bench-method and after the AfterLastBenchRun-annotated method
- Executed once per class
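To illustrate, the lifecycle described above might map onto a benchmark class as follows. Since the exact annotation names can vary between Perfidix versions, the annotation types are declared here as minimal stand-ins (their names are assumptions based on the references above); in a real project they would come from the Perfidix jar instead.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class BenchSketch {

    // Minimal stand-ins for the Perfidix annotations described above;
    // in a real project these come from the Perfidix library.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Bench { }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface BeforeEachBenchRun { }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface AfterEachBenchRun { }

    // Example benchmark: parameter-free void methods, as required.
    static class ListBenchmark {
        private List<Integer> list;

        @BeforeEachBenchRun
        public void setUp() { list = new ArrayList<>(); }

        @Bench
        public void benchAdd() {
            for (int i = 0; i < 1000; i++) list.add(i);
        }

        @AfterEachBenchRun
        public void tearDown() { list = null; }
    }

    public static void main(String[] args) {
        // List the methods that the framework would pick up for benching.
        for (Method m : ListBenchmark.class.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Bench.class)) {
                System.out.println("bench method: " + m.getName());
            }
        }
    }
}
```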
The methods marked by these annotations need to be executed by a framework that is aware, on the one hand, of the meters to benchmark and, on the other hand, of the outputs to generate. Execution takes place either through the provided Eclipse plugin or through a benchmarking object that is executable as a normal Java program.
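A container class in the spirit described above can be sketched with plain reflection: discover the annotated methods, execute each one the requested number of times, and meter the elapsed time. The annotation attribute and class names here are illustrative assumptions, not the Perfidix API.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class RunnerSketch {

    // Illustrative stand-in for Perfidix's bench annotation, with a
    // hypothetical "runs" attribute controlling the number of runs.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Bench { int runs() default 3; }

    // A tiny benchmark target with one parameter-free void method.
    static class StringConcat {
        @Bench(runs = 5)
        public void benchConcat() {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 10_000; i++) sb.append('x');
        }
    }

    // Discover @Bench methods, execute each the requested number of
    // times, and report the average wall-clock time per run.
    public static void main(String[] args) throws Exception {
        Object target = new StringConcat();
        for (Method m : target.getClass().getDeclaredMethods()) {
            Bench bench = m.getAnnotation(Bench.class);
            if (bench == null) continue;
            long total = 0;
            for (int run = 0; run < bench.runs(); run++) {
                long start = System.nanoTime();
                m.invoke(target);
                total += System.nanoTime() - start;
            }
            System.out.printf("%s: %d runs, avg %d ns%n",
                    m.getName(), bench.runs(), total / bench.runs());
        }
    }
}
```

A real execution framework would additionally wire in the setup/teardown methods and support pluggable meters and output formats, but the discovery-and-metering loop above is the core idea.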