
Introduction

Perfidix is a tool for developers to conveniently and consistently benchmark Java code. It can quickly answer which of several implementations is faster, without the need to launch a full-fledged profiling application or to rewrite the same Java code over and over just to measure its execution time.

Even though Perfidix is simple to use, it still provides sound statistical output for making a decision.

Perfidix was presented in 2007 as a Work in Progress at the Jazoon conference. The related paper can be found here.

JUnit-like handling

Perfidix is intentionally designed to be used like JUnit 4.x: it provides annotations to declare which methods should be benchmarked and how many times. Classes or methods annotated as Perfidix benchmarks are run and evaluated with flexible statistics, e.g., execution time, reported on the console or as CSV output.

The output includes the package, class, and method name together with the number of runs, minimum, maximum, average, standard deviation, and confidence intervals. The number of runs can be specified individually per package, class, or method. In addition, the number of runs can be drawn from a probability distribution function to simulate typical workloads.
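As an illustration only, the following sketch shows the annotation-driven style; the class, annotation, and package names follow the examples published with Perfidix and may differ slightly between versions:

    import org.perfidix.Benchmark;
    import org.perfidix.annotation.Bench;
    import org.perfidix.ouput.TabularSummaryOutput;
    import org.perfidix.result.BenchmarkResult;

    public class ListBenchmark {

        // Benchmark this method 100 times, analogous to a JUnit @Test method.
        @Bench(runs = 100)
        public void benchArrayListAdd() {
            final java.util.List<Integer> list = new java.util.ArrayList<>();
            for (int i = 0; i < 10_000; i++) {
                list.add(i);
            }
        }

        public static void main(final String[] args) {
            final Benchmark bench = new Benchmark();
            bench.add(ListBenchmark.class);
            final BenchmarkResult result = bench.run();
            // Prints the statistics table: runs, min, max, avg, stddev, confidence interval.
            new TabularSummaryOutput().visitBenchmark(result);
        }
    }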

Define your own meters

Even though Perfidix is already equipped with multiple meters to measure your source code (time, memory, threads), it also offers the ability to plug in your own meters by implementing a single interface. This enables Perfidix not only to act as an out-of-the-box tool, but also to be adapted to a specific program environment and its particular measurement problems.
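Because the exact meter contract depends on the Perfidix version, the sketch below only illustrates its general shape with a hypothetical Meter interface; when wiring a meter into Perfidix, the actual meter type shipped with the library (e.g. AbstractMeter) has to be used instead:

    // Hypothetical meter contract for illustration; Perfidix ships its own
    // meter type with a similar shape.
    interface Meter {
        double getValue();   // current reading of the meter
        String getUnit();    // short unit label, e.g. "ms" or "alloc"
        String getName();    // human-readable meter name
    }

    // A custom meter counting events signalled by the benchmarked code.
    final class AllocationMeter implements Meter {

        private long allocations;

        // Called from the benchmarked code whenever an allocation is made.
        public void tick() {
            allocations++;
        }

        @Override
        public double getValue() {
            return allocations;
        }

        @Override
        public String getUnit() {
            return "alloc";
        }

        @Override
        public String getName() {
            return "AllocationMeter";
        }
    }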

Do a quick benchmark, analyze later

Convenient outputs, ranging from common ASCII tables to plain CSV, offer not only a quick overview of a benchmark but also enable in-depth analysis with any third-party program. All benchmark results can therefore easily be persisted for later analysis and compilation into charts.
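A hypothetical sketch of this workflow, reusing the ListBenchmark class from the earlier sketch; the CSVOutput visitor and its constructor taking a target directory are assumptions based on the project's output package and may differ in your version:

    import java.io.File;

    import org.perfidix.Benchmark;
    import org.perfidix.ouput.CSVOutput;
    import org.perfidix.result.BenchmarkResult;

    public class PersistentBenchmark {

        public static void main(final String[] args) {
            final Benchmark bench = new Benchmark();
            bench.add(ListBenchmark.class);
            final BenchmarkResult result = bench.run();
            // Assumed behavior: write CSV files into the given folder,
            // ready to be imported into a spreadsheet or plotting tool.
            new CSVOutput(new File("benchmark-results")).visitBenchmark(result);
        }
    }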

Own needs, own requirements

Perfidix was created at the Distributed Systems Group of the University of Konstanz out of the necessity to provide an architecture for evaluating the group's own implementations.

It greatly assisted the research by allowing quick benchmarking of different algorithms and data structures. Although not always cited explicitly, Perfidix is the benchmarking framework behind multiple theses within our working group and has been maintained continuously since 2007.

Perfidix is hosted at https://github.com/disy/perfidix under the BSD License and continuously built with Travis CI.