
BenchFaster automates the deployment and benchmarking of containerized tools over emulated WANs.




Automating the deployment and benchmarking of containerized apps over emulated WANs
Explore the docs »

Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Contributing
  5. License

About The Project


Ansible control node: A system from which one or more instances of BenchFaster are launched using Ansible.

Tester node: A remote host from which the BenchFaster deployment is launched and the benchmarks are run.

Head node: A remote host where all the core components of BenchFaster are deployed.

Worker node: A remote host where containerized tools are deployed.

Built With

Core components

Ansible K3s Nebula Netem Vagrant

Benchmarking tools

JMeter k6 hey

Applications

knative OpenFaas Mosquitto RabbitMQ

(back to top)

Getting Started

Prerequisites

The minimum prerequisites before installing dependencies are:

  • ansible on the control node (requirements)
  • Ubuntu Server 22.04 or Arch Linux (all machines)
  • Passwordless sudo access (all machines)

Installation

  1. Clone the repo

    git clone https://github.com/fcarp10/benchfaster.git
    cd benchfaster/
  2. Dependencies and configuration can be installed using the provided playbooks. Install the requirements for each type of node with

    ansible-playbook -i inventory/inventory_example.yml requirements/${REQ_FILE}.yml

    where REQ_FILE is machine, tester, or hypervisor.

(back to top)

Usage

Inventory files

Two main categories of hosts are expected in the Ansible inventory file: machines and testers.

Common parameters

  • ansible_host: Name of the host to connect to from the Ansible control node
  • ansible_user: User name used to connect
  • interface: Network interface to use
  • arch: amd64 or arm64

Machines

  • headnode: set to true when the machine is the head node

Testers

  • address_benchmark: Name of the host to run the benchmarks against

See an example of an inventory file here.
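Putting the parameters above together, a minimal inventory might look like the following sketch. The key names follow the parameters described above, but the host names, addresses, and user are illustrative placeholders, not values from the repository:

```yaml
# Hypothetical inventory sketch; host names, IPs, and users are placeholders.
machines:
  hosts:
    head:
      ansible_host: 192.168.1.10
      ansible_user: ubuntu
      interface: eth0
      arch: amd64
      headnode: true        # marks this machine as the head node
    worker1:
      ansible_host: 192.168.1.11
      ansible_user: ubuntu
      interface: eth0
      arch: arm64
testers:
  hosts:
    tester1:
      ansible_host: 192.168.1.20
      ansible_user: ubuntu
      interface: eth0
      arch: amd64
      address_benchmark: head   # host the benchmarks are run against
```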

BenchFaster can also automate the deployment of VMs using Vagrant with libvirt/KVM. In that case, a hypervisors category is expected in the inventory file.

Hypervisors

  • vagrant.num_workers: Number of workers (VMs) in the cluster
  • vagrant.image: Vagrant box (search)
  • vagrant.memory: RAM per VM
  • vagrant.cpu: CPUs per VM

See an example of an inventory file with Hypervisors here.
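A hypervisors entry using the vagrant parameters above might look like this sketch (host name and values are illustrative placeholders):

```yaml
# Hypothetical hypervisors entry; all values are illustrative.
hypervisors:
  hosts:
    hv1:
      ansible_host: 192.168.1.30
      ansible_user: ubuntu
      interface: eth0
      arch: amd64
      vagrant:
        num_workers: 2            # number of worker VMs in the cluster
        image: generic/ubuntu2204 # Vagrant box to use
        memory: 4096              # RAM per VM
        cpu: 2                    # CPUs per VM
```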

(back to top)

Local registry

Deploying a local container registry is optional but recommended:

ansible-playbook -i inventory/inventory_example.yml local_registry.yml

(back to top)

Playbooks and roles

Playbooks run workflows that perform one or more tests using a given benchmarking tool against an application. Each playbook also defines the set of variables needed to run the workflow.

Required vars

  • application: Application to test (knative, openfaas, mosquitto, rabbitmq)
  • benchmark_tool: Benchmarking tool (jmeter, k6, hey)

Depending on the specific benchmarking tool, additional variables are expected:

  • ${BENCHMARK_TOOL}.url: URL used for the tests
  • ${BENCHMARK_TOOL}.port: Port used for the tests
  • ${BENCHMARK_TOOL}.path: Specific URL path

See examples of expected variables for each tool in jmeter and k6.
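As a sketch, the variables for a tool such as k6 might be declared like this in a playbook's vars section. The key layout follows the ${BENCHMARK_TOOL}.url/.port/.path pattern described above; the values are illustrative placeholders, not taken from the repository:

```yaml
# Hypothetical vars block; values are placeholders for illustration only.
vars:
  application: openfaas
  benchmark_tool: k6
  k6:
    url: http://head              # URL used for the tests
    port: 8080                    # port used for the tests
    path: /function/hello-world   # specific URL path
```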

Every playbook includes a set of roles that defines the workflow. In BenchFaster, roles are divided into three directories: applications and benchmark-tools contain the roles related to apps and benchmarking tools, while the benchfaster directory contains the roles related to BenchFaster's core components.

Every playbook needs to include at least the following roles:

  • benchfaster/init: creates a new directory for result files
  • benchfaster/start: deploys all core components
  • applications/{{ application }}: deploys the specific application
  • benchmark-tools/{{ benchmark_tool }}: runs tests using the specific benchmarking tool
  • benchfaster/stop: retrieves result files to the control node
  • benchfaster/finish: cleans up temporary directories

A simple workflow is defined in generic_simple.yml, which is then run by, for instance, the openfaas_jmeter_example.yml playbook.
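The role sequence above can be sketched as a playbook like the following (a minimal sketch following the listed role order; the vars values are illustrative):

```yaml
# Hypothetical playbook sketch; role names follow the list above,
# vars values are placeholders.
- hosts: all
  vars:
    application: openfaas
    benchmark_tool: jmeter
  roles:
    - benchfaster/init                        # create results directory
    - benchfaster/start                       # deploy core components
    - "applications/{{ application }}"        # deploy the application
    - "benchmark-tools/{{ benchmark_tool }}"  # run the tests
    - benchfaster/stop                        # retrieve result files
    - benchfaster/finish                      # clean up temporary directories
```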

To run a playbook:

ansible-playbook -i inventory/inventory_example.yml ${PLAYBOOK_FILE}.yml

(back to top)

Adding applications/benchmarking-tools

BenchFaster is not limited to the currently supported apps and benchmarking tools; new ones can be added by creating new roles.

To add a new application

ansible-galaxy init roles/applications/rabbitmq

To add a new benchmarking tool

ansible-galaxy init roles/benchmark-tools/hey 

In both cases, this creates a directory within roles with the expected structure. For more details, see the Ansible documentation.

(back to top)

Contributing

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)
