Automating the deployment and benchmarking of containerized apps over emulated WANs
- Ansible control node: A system from where one or more instances of BenchFaster are launched using Ansible.
- Tester node: A remote host from where BenchFaster deployment is launched and the benchmarks are run.
- Head node: A remote host where all the core components of BenchFaster are deployed.
- Worker node: A remote host where containerized tools are deployed.
Minimum prerequisites before installing dependencies are:

- `ansible` in the control node (requirements)
- Ubuntu Server 22.04 or Arch Linux (all machines)
- Passwordless sudo access (all machines)
- Clone the repo

  git clone https://github.com/fcarp10/benchfaster.git
  cd benchfaster/
- Related dependencies and configuration can be installed using the provided playbooks. Install requirements for each type of node with
ansible-playbook -i inventory/inventory_example.yml requirements/${REQ_FILE}.yml
where `REQ_FILE` is either `machine`, `tester` or `hypervisor`.
Two main categories of hosts are expected in the Ansible inventory file: `machines` and `testers`.
Common parameters

- `ansible_host`: Name of the host to connect from the Ansible control node
- `ansible_user`: User name to connect
- `interface`: Network interface
- `arch`: `amd64` or `arm64`
Machines

- `headnode`: true, when the machine is the head node
Testers

- `address_benchmark`: Name of the host to run the benchmarks against
See an example of an inventory file here.
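For illustration, a minimal inventory sketch following the parameters above might look like this (host names, addresses, users and interfaces are placeholders; see the repository's inventory_example.yml for the authoritative format):

```yaml
all:
  children:
    machines:
      hosts:
        head01:
          ansible_host: 192.168.10.10
          ansible_user: ubuntu
          interface: enp1s0
          arch: amd64
          headnode: true
        worker01:
          ansible_host: 192.168.10.11
          ansible_user: ubuntu
          interface: enp1s0
          arch: arm64
    testers:
      hosts:
        tester01:
          ansible_host: 192.168.10.20
          ansible_user: ubuntu
          interface: enp1s0
          arch: amd64
          address_benchmark: head01
```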
BenchFaster can also automate the deployment of VMs using Vagrant with libvirt/KVM. In that case, a `hypervisors` category is expected in the inventory file.
Hypervisors

- `vagrant.num_workers`: Number of workers (VMs) in the cluster
- `vagrant.image`: Vagrant box (search)
- `vagrant.memory`: RAM per VM
- `vagrant.cpu`: CPUs per VM
See an example of an inventory file with Hypervisors here.
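As a rough sketch, the `hypervisors` group could carry the Vagrant parameters like this (the nesting of the `vagrant.*` keys and all values, including the box name, are assumptions; refer to the repository's example inventory):

```yaml
all:
  children:
    hypervisors:
      hosts:
        hyper01:
          ansible_host: 192.168.10.30
          ansible_user: ubuntu
          interface: enp1s0
          arch: amd64
          vagrant:
            num_workers: 2
            image: generic/ubuntu2204   # placeholder Vagrant box
            memory: 4096
            cpu: 2
```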
Deploying a local container registry is optional, but recommended
ansible-playbook -i inventory/inventory_example.yml local_registry.yml
Playbooks run specific workflows to perform single or multiple tests using a specific benchmarking tool against an application. Playbooks also define a set of variables necessary to run the workflow.
Required vars

- `application`: Application to test (`knative`, `openfaas`, `mosquitto`, `rabbitmq`)
- `benchmark_tool`: Benchmarking tool (`jmeter`, `k6`, `hey`)
Depending on the specific benchmarking tool, additional variables are expected

- `${BENCHMARK_TOOL}.url`: URL used for the tests
- `${BENCHMARK_TOOL}.port`: Port used for the tests
- `${BENCHMARK_TOOL}.path`: Specific URL path
See examples of expected variables for each tool in jmeter and k6.
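As a hedged illustration, a playbook's vars section combining the required and tool-specific variables could look roughly like this (URL, port and path values are placeholders, and the nesting only approximates the jmeter/k6 examples linked above):

```yaml
vars:
  application: openfaas
  benchmark_tool: jmeter
  jmeter:
    url: http://10.0.0.100    # placeholder endpoint
    port: 8080
    path: /function/figlet    # hypothetical URL path
```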
Every playbook includes a set of roles that define the workflow. In BenchFaster, roles are divided into three categories by directory: `applications` and `benchmark-tools` contain all the roles related to apps and benchmarking tools, while the `benchfaster` directory contains all the roles related to BenchFaster's core components.
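For reference, this roughly corresponds to the following layout under `roles/` (comments list the currently supported apps and tools):

```
roles/
├── applications/      # roles for the apps under test (knative, openfaas, mosquitto, rabbitmq)
├── benchmark-tools/   # roles for the benchmarking tools (jmeter, k6, hey)
└── benchfaster/       # roles for BenchFaster's core components (init, start, stop, finish)
```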
Every playbook needs to include at least the following roles

- `benchfaster/init`: creates a new directory for result files
- `benchfaster/start`: deploys all core components
- `applications/{{ application }}`: deploys the specific `application`
- `benchmark-tools/{{ benchmark_tool }}`: runs tests using the specific `benchmark_tool`
- `benchfaster/stop`: retrieves result files to the control node
- `benchfaster/finish`: cleans up temporary directories
See an example of a simple workflow defined in generic_simple.yml, which is then run by, for instance, the openfaas_jmeter_example.yml playbook.
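For orientation, a minimal workflow playbook built from those roles might look like the sketch below; the hosts pattern and vars values are assumptions, so refer to generic_simple.yml and openfaas_jmeter_example.yml for the real definitions.

```yaml
# Hypothetical minimal workflow playbook; hosts pattern and vars are placeholders.
- hosts: testers
  vars:
    application: openfaas
    benchmark_tool: jmeter
  roles:
    - benchfaster/init
    - benchfaster/start
    - "applications/{{ application }}"
    - "benchmark-tools/{{ benchmark_tool }}"
    - benchfaster/stop
    - benchfaster/finish
```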
To run a playbook
ansible-playbook -i inventory/inventory_example.yml ${PLAYBOOK_FILE}.yml
BenchFaster is not limited to the apps and benchmarking tools currently supported; new ones can be added by creating new roles.
To add a new application
ansible-galaxy init roles/applications/rabbitmq
To add a new benchmarking tool
ansible-galaxy init roles/benchmark-tools/hey
In both cases, this will create a directory within roles
with the expected
structure. For more details, read the docs from
Ansible.
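As a loose sketch (not the project's actual implementation), the tasks file of a new benchmarking-tool role could follow the variable convention described above; the `results_dir` variable and the hey invocation are assumptions:

```yaml
# roles/benchmark-tools/hey/tasks/main.yml -- hypothetical sketch
- name: Run hey against the application under test
  ansible.builtin.command: >
    hey -z 60s {{ hey.url }}:{{ hey.port }}{{ hey.path }}
  register: hey_result

- name: Keep the raw output with the other result files
  ansible.builtin.copy:
    content: "{{ hey_result.stdout }}"
    dest: "{{ results_dir | default('/tmp/benchfaster-results') }}/hey.txt"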
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License. See LICENSE.txt
for more information.