PerfKit Benchmarker is an open effort to define a canonical set of benchmarks to measure and compare cloud offerings. Please review the Licensing section before continuing.
Before you can run the PerfKit Benchmarker, you need to establish account credentials on the cloud provider you want to benchmark. For ProfitBricks, make sure you have Signed Up.
You also need the software dependencies, which are mostly command line tools and credentials to access your accounts without a password. The following steps should help you get the CLI tool auth in place.
If you are running on Windows, you will need to install GitHub Windows since it includes tools like
openssl and an
ssh client. Alternatively, you can install Cygwin since it should include the same tools.
If you are running on Windows, get the latest version of Python 2.7. This should have
pip bundled with it. Make sure your
PATH environment variable is set so that you can use both
python and
pip on the command line (you can have the installer do it for you if you select the correct option).
Most Linux distributions and recent Mac OS X versions already have Python 2.7 installed. If Python is not installed, you can likely install it using your distribution's package manager, or see the Python Download page.
If you need to install
pip, see these instructions.
Make sure that
ssh, scp, and
ssh-keygen are on your path (you may need to update the
PATH environment variable if they are not).
Download PerfKit Benchmarker from GitHub. Support for ProfitBricks as a cloud provider was added in release 1.7.0, so please download release v1.7.0 or newer.
$ cd /path/to/PerfKitBenchmarker
$ sudo pip install -r requirements.txt
Then install the ProfitBricks provider dependencies by running:
$ sudo pip install -r perfkitbenchmarker/providers/profitbricks/requirements.txt
PerfKit Benchmarker uses the Requests module to interact with ProfitBricks' Cloud API. HTTP Basic authentication is used to authorize access to the API. Please set this up as follows:
Create a configuration file containing the email address and password associated with your ProfitBricks account, separated by a colon.
$ less ~/.config/profitbricks-auth.cfg
email:password
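One way to create this file from the shell is sketched below; the email and password values are placeholders for your own credentials, and restricting the file to the current user is optional but sensible:

```shell
# Illustrative sketch: write the credentials file and restrict it to the
# current user. Replace the placeholder values with your account details.
mkdir -p ~/.config
printf '%s:%s\n' 'email' 'password' > ~/.config/profitbricks-auth.cfg
chmod 600 ~/.config/profitbricks-auth.cfg
```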
The PerfKit Benchmarker will automatically base64 encode your credentials before making any calls to the Cloud API.
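To illustrate what that encoding looks like, the following sketch mimics the Basic authentication scheme with the standard base64 tool; it is not PerfKit Benchmarker's own code, and the credentials shown are placeholders:

```shell
# Illustrative: HTTP Basic auth sends base64("email:password") in the
# Authorization header. This mimics the encoding, not PerfKit's code.
token=$(printf '%s' 'email:password' | base64)
echo "Authorization: Basic $token"
```

Note that base64 is an encoding, not encryption, which is why the API is accessed over HTTPS.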
PerfKit Benchmarker uses the file location
~/.config/profitbricks-auth.cfg by default. You can use the
--profitbricks_config flag to override the path.
The following example shows how to run the iperf benchmark on ProfitBricks:
$ ./pkb.py --cloud=ProfitBricks --machine_type=Small --benchmarks=iperf
PerfKit Benchmarker will authenticate against the ProfitBricks Cloud API and provision a Virtual Data Center and the resources necessary to perform the benchmark run. The resources will automatically be removed when the benchmark run completes. Detailed information about the process is logged to the system in a location provided at the conclusion of the benchmark run.
You must be running on a Windows machine in order to run Windows benchmarks. Install all dependencies as above and set TrustedHosts to accept all hosts so that you can open PowerShell sessions with the VMs (both machines having each other in their TrustedHosts list is necessary, but not sufficient to issue remote commands; valid credentials are still required):
set-item wsman:\localhost\Client\TrustedHosts -value *
Now you can run Windows benchmarks by running with
--os_type=windows. Windows has a different set of benchmarks than Linux does. They can be found under
perfkitbenchmarker/windows_benchmarks/. The target VM OS is Windows Server 2012 R2.
Run without the
--benchmarks parameter and every benchmark in the standard set will run serially, which can take a couple of hours (alternatively, run with
--benchmarks="standard_set"). Additionally, if you don't specify
--cloud=..., all benchmarks will run on the Google Cloud Platform.
Named sets are groupings of one or more benchmarks in the benchmarking directory. This feature allows parallel innovation of what is important to measure in the Cloud; each set is defined by its owner. For example, the GoogleSet is maintained by Google, whereas the StanfordSet is managed by Stanford. Once a quarter a meeting is held to review all the sets to determine which benchmarks should be promoted to the
standard_set. The Standard Set is also reviewed to see if anything should be removed.
To run all benchmarks in a named set, specify the set name in the benchmarks parameter (e.g.,
--benchmarks="standard_set"). Sets can be combined with individual benchmarks or other named sets.
The following are some common flags used when configuring PerfKit Benchmarker.
|Flag||Description|
|--helpmatch=pkb||See all flags.|
|--benchmarks||A comma-separated list of benchmarks or benchmark sets to run, such as --benchmarks=iperf,ping.|
|--cloud||Cloud where the benchmarks are run. For ProfitBricks, use --cloud=ProfitBricks.|
|--machine_type||Type of machine to provision if pre-provisioned machines are not used. Most cloud providers accept the names of pre-defined provider-specific machine types.|
|--zone||This flag allows you to override the default zone. See the table below.|
|--data_disk_type||Type of disk to use. Names are provider-specific.|
The default cloud is 'GCP'; override it with the
--cloud flag. Each cloud has a default zone, which you can override with the
--zone flag. The flag supports the same values that the corresponding Cloud CLIs take:
|Cloud name||Default zone||Notes|
|ProfitBricks||ZONE_1||Additional zones: ZONE_2|
$ ./pkb.py --cloud=ProfitBricks --zone=ZONE_2 --benchmarks=iperf,ping
The following is important information regarding licensing of the benchmarks that PerfKit Benchmarker utilizes.
PerfKit Benchmarker provides wrappers and workload definitions around popular benchmark tools. It instantiates VMs on the Cloud provider of your choice, automatically installs benchmarks, and runs the workloads without user interaction.
Due to the level of automation you will not see prompts for software installed as part of a benchmark run. Therefore you must accept the license of each of the benchmarks individually, and take responsibility for using them before you use the PerfKit Benchmarker.
In its current release these are the benchmarks that are executed:
aerospike: Apache v2 for the client and GNU AGPL v3.0 for the server
bonnie++: GPL v2
cassandra_ycsb: Apache v2
cassandra_stress: Apache v2
cloudsuite3.0: CloudSuite 3.0 license
cluster_boot: MIT License
copy_throughput: Apache v2
fio: GPL v2
hadoop_terasort: Apache v2
hpcc: Original BSD license
iperf: BSD license
memtier_benchmark: GPL v2
mesh_network: HP license
mongodb: Deprecated. GNU AGPL v3.0
mongodb_ycsb: GNU AGPL v3.0
multichase: Apache v2
netperf: HP license
oldisim: Apache v2
object_storage_service: Apache v2
ping: No license needed.
silo: MIT License
scimark2: public domain
speccpu2006: SPEC CPU2006
sysbench_oltp: GPL v2
tomcat: Apache v2
unixbench: GPL v2
wrk: Modified Apache v2
ycsb (used by mongodb_ycsb, hbase_ycsb, and others): Apache v2
Some of the benchmarks invoked require Java. You must also agree with the following license:
openjdk-7-jre: GPL v2 with the Classpath Exception
CoreMark setup cannot be automated. EEMBC requires users to agree with their terms and conditions, and PerfKit Benchmarker users must manually download the CoreMark tarball from their website and save it under the
perfkitbenchmarker/data folder.
SPEC CPU2006 benchmark setup cannot be automated. SPEC requires that users purchase a license and agree with their terms and conditions. PerfKit Benchmarker users must manually download
cpu2006-1.2.iso from the SPEC website, save it under the
perfkitbenchmarker/data folder (e.g.
~/PerfKitBenchmarker/perfkitbenchmarker/data/cpu2006-1.2.iso), and also supply a runspec cfg file (e.g.
~/PerfKitBenchmarker/perfkitbenchmarker/data/linux64-x64-gcc47.cfg). Alternately, PerfKit Benchmarker can accept a tar file that can be generated with the following steps:
Extract the contents of
cpu2006-1.2.iso into a directory named
cpu2006, copy your runspec cfg file into the
cpu2006 directory, then create a tar file of that directory and place it under the
perfkitbenchmarker/data folder.
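The packaging step can be sketched as follows. The directory and tar file names here are illustrative; in practice the cpu2006 directory holds the extracted iso contents plus your cfg file rather than being empty:

```shell
# Illustrative sketch: package an extracted cpu2006 tree for PerfKit.
# An empty stand-in directory is used here; in practice it contains the
# extracted iso contents and your runspec cfg file.
mkdir -p cpu2006
tar czf cpu2006v1.2.tgz cpu2006
# Then copy the tar file into perfkitbenchmarker/data/.
```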
PerfKit Benchmarker will use the tar file if it is present. Otherwise, it will search for the iso and cfg files.
We have demonstrated how to get the PerfKit Benchmarker installed and how to use it. The project itself contains additional information in its included
README. The wiki also contains detailed information about the PerfKit Benchmarker project.
If you have questions or comments on using PerfKit Benchmarker with ProfitBricks resources, please post a question in the Community section of this site.
Issues can also be reported at GitHub PerfKit Benchmarker.