This document will help you learn about NBomber Cluster, which is available in the Enterprise version (including a free trial period).
NBomber Cluster is an additional runtime component that can run NBomber tests in a distributed way (on multiple nodes) with flexible orchestration and stats gathering.
Typical reasons to use NBomber Cluster:

- You have reached the point where the capacity of one node is not enough to create the relevant load, and you want to run scenarios in parallel on multiple nodes.
- You want to segregate multiple scenarios to different nodes. For example, to test a database by sending INSERT, READ and DELETE requests in parallel, you can spread your load granularly: one node sends INSERT requests while another sends READ or DELETE requests.
- You may need several nodes to simulate a production load, especially in geo-distributed mode. For example, one NBomber node publishes messages to Kafka while other NBomber nodes listen to PUSH messages from the Push servers, calculating latency and throughput.
NBomber Cluster consists of 3 main components:
- Coordinator is responsible for coordinating the execution of the entire test. It sends commands and executes scenarios.
- Agent is responsible for listening to the commands from the Coordinator, executing scenarios, and sending stats.
- Message Broker is a communication point in the cluster. All network communication goes via the Message Broker.
Both Coordinator and Agent are the same .NET application but with different JSON configs.
Message Broker is a communication point in the cluster. Its main goal is to provide reliable message delivery across the cluster. NBomber Cluster works with any message broker that supports the MQTT protocol. In the default setup, a single MQTT node with minimal resources (2 CPU, 2 GB RAM) will be enough; such a node can serve many concurrent NBomber Clusters with no problems. We recommend using the free version of the EMQX broker. In the following section, we will set up the EMQX broker using Docker.
Coordinator is the main component that contains registered Scenarios and is responsible for coordinating the execution of the entire test, including gathering all statistics from Agent(s). The coordination process is lightweight and doesn't take many resources; for this reason, you can use Coordinator not only for orchestration but also to execute Scenarios. There should be only one Coordinator per cluster, so if you have 10 clusters, you have 10 Coordinators. You can have an unlimited number of Agent(s) per cluster.
Here is a basic example of a Coordinator configuration. Pay attention to the TargetScenarios property: it plays a significant role and forms the topology of the test execution.
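A sketch of what such a config might look like follows. Only ClusterId, TargetScenarios, AgentGroup, and GlobalSettings are named in this document; the surrounding structure (ClusterSettings, Coordinator, MqttServer, MqttPort, Agents) is an assumption for illustration, so verify it against your NBomber Cluster version.

```json
{
  "ClusterSettings": {
    "Coordinator": {
      "ClusterId": "test_cluster",
      "MqttServer": "localhost",
      "MqttPort": 1883,
      "TargetScenarios": ["scenario_1"],
      "Agents": [
        { "AgentGroup": "1", "TargetScenarios": ["scenario_2"] }
      ]
    }
  }
}
```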
As you can see in the Coordinator config, we specified which scenarios will be executed on the Coordinator. We also specified the scenarios that will be executed on Agent(s); AgentGroup will be described in the section about Agent.
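In the sketch above, that mapping is the Agents section; each entry binds an AgentGroup to its TargetScenarios (the Agents property name itself is an assumption):

```json
"Agents": [
  { "AgentGroup": "1", "TargetScenarios": ["scenario_2"] }
]
```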
All cluster participants should have the same ClusterId, because it allows them to see each other.
Coordinator config can also contain GlobalSettings from the regular NBomber JSON configuration.
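For example, a minimal sketch (the ScenariosSettings shape follows the regular NBomber JSON config; exact property names can differ between NBomber versions):

```json
{
  "GlobalSettings": {
    "ScenariosSettings": [
      {
        "ScenarioName": "scenario_1",
        "WarmUpDuration": "00:00:05",
        "LoadSimulationsSettings": [
          { "KeepConstant": [10, "00:00:30"] }
        ]
      }
    ]
  }
}
```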
After Coordinator starts and reads the config file, it will send all settings to Agent(s). So if you want to change some settings, you can edit the config file and restart Coordinator; it will distribute the new settings to all Agent(s).
Agent acts as a worker that listens to commands from Coordinator and executes the TargetScenarios. Agent contains registered Scenarios (similar to Coordinator) to run them.
Another feature of Agent is the mandatory binding to AgentGroup. An AgentGroup provides a group of Agent(s) that execute the specified scenarios associated with this group. An AgentGroup can contain either one Agent or many. You can think of an AgentGroup as tagging for an Agent. You can have as many AgentGroups as you want, as they are virtual.
Here is an example of an Agent config file. In this example, we define an Agent that is bound to "AgentGroup": "1". As you can see, we don't specify TargetScenarios, since these options will be passed dynamically by Coordinator: Agent doesn't know which scenarios will be started until it receives the list of TargetScenarios from Coordinator.
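A sketch of such an Agent config (as above, everything except ClusterId and AgentGroup is an assumed field name):

```json
{
  "ClusterSettings": {
    "Agent": {
      "ClusterId": "test_cluster",
      "MqttServer": "localhost",
      "MqttPort": 1883,
      "AgentGroup": "1"
    }
  }
}
```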
All cluster participants should have the same ClusterId. This way, they will see each other.
This package is available in the Enterprise version only.
NBomber Cluster has the same API as a regular NBomber except:
- NBomber Cluster uses NBomberClusterRunner instead of NBomberRunner, but it has all the API functions that NBomberRunner contains.
- NBomber Cluster uses a slightly extended JSON configuration (it contains ClusterSettings) to set up Coordinator or Agent.
- NBomber Cluster requires setting a license key.
Let's first start with an empty hello world example. In this example, we will define one simple Step and Scenario that do nothing. After this, we will add Coordinator and Agent configs to run them in cluster mode.
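Here is a minimal C# sketch. It assumes the classic Step/ScenarioBuilder API and that NBomberClusterRunner exposes the same fluent methods as NBomberRunner (as stated above); exact namespaces and signatures depend on your NBomber version.

```csharp
using System;
using System.Threading.Tasks;
using NBomber.CSharp;

class Program
{
    static void Main(string[] args)
    {
        // A step that does nothing: it waits a bit and reports success.
        var step = Step.Create("hello_step", async context =>
        {
            await Task.Delay(TimeSpan.FromMilliseconds(100));
            return Response.Ok();
        });

        var scenario = ScenarioBuilder.CreateScenario("hello_scenario", step);

        // NBomberClusterRunner is assumed to mirror NBomberRunner's API.
        // Without any cluster config, this behaves like a regular NBomber test.
        NBomberClusterRunner
            .RegisterScenarios(scenario)
            .Run();
    }
}
```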
If we run this example, it will behave as a regular NBomber test, since we haven't defined any cluster yet. To build a cluster, we need to define configs for Agent and Coordinator, and then connect them to the Message Broker.
Remember that the MQTT message broker is a communication point in the cluster, and we need to create and run it to establish connections between cluster members. To do so, we will use Docker Compose and the EMQX Docker image. Here is an example of our docker-compose file (you can create this file in the current project folder).
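A minimal docker-compose.yaml sketch that exposes the default MQTT port and the EMQX dashboard (pin the image tag to a specific version in real use):

```yaml
version: '3'
services:
  emqx:
    image: emqx/emqx:latest
    ports:
      - "1883:1883"   # MQTT
      - "18083:18083" # EMQX admin dashboard
```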
Let's run it using the following command.
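```bash
docker-compose up -d
```

The -d flag runs the broker in the background; on newer Docker versions, `docker compose up -d` (without the hyphen) works as well.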
After starting, try to open the EMQX admin panel at http://localhost:18083. The default admin credentials are admin / public. You can use the admin panel for diagnostic purposes.
Let's start with Agent since it's simpler. It should start before Coordinator (as Agent listens to commands from Coordinator). The main thing for us is to define and load the Agent config (agent_config.json).
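For example, assuming NBomberClusterRunner supports LoadConfig the same way NBomberRunner does:

```csharp
// Register the same scenarios as everywhere else in the cluster,
// then load the Agent-specific config file.
NBomberClusterRunner
    .RegisterScenarios(scenario)
    .LoadConfig("agent_config.json")
    .Run();
```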
Here is an example of the Agent config that we load.
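A sketch of agent_config.json, the same shape as shown earlier (field names other than ClusterId and AgentGroup are assumptions):

```json
{
  "ClusterSettings": {
    "Agent": {
      "ClusterId": "test_cluster",
      "MqttServer": "localhost",
      "MqttPort": 1883,
      "AgentGroup": "1"
    }
  }
}
```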
It contains Agent settings with connection params to the MQTT broker. For dev purposes, we are going to use localhost and the default MQTT port 1883. All cluster participants should have the same ClusterId; this way, they will see each other. Another quite important option is AgentGroup, which should be treated like tagging. Also, you can see that we didn't specify any TargetScenarios to run: Agent will receive the list of Scenarios to run, plus all related settings, from Coordinator.
Coordinator contains the same list of scenarios as Agent but uses a different config file and should be started after all Agent(s).
Here is an example of the Coordinator config that we load.
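A sketch of coordinator_config.json (same caveat about assumed field names as before):

```json
{
  "ClusterSettings": {
    "Coordinator": {
      "ClusterId": "test_cluster",
      "MqttServer": "localhost",
      "MqttPort": 1883,
      "TargetScenarios": ["hello_scenario"],
      "Agents": [
        { "AgentGroup": "1", "TargetScenarios": ["hello_scenario"] }
      ]
    }
  }
}
```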
In the Coordinator config, we defined the TargetScenarios to run on the Coordinator, and we also defined TargetScenarios for Agent(s) (via AgentGroup).
Instead of a hardcoded file path and license key, you can use CLI arguments.
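For example, a hypothetical invocation (argument names like --config and --license depend on your NBomber version, so treat them as placeholders to verify):

```bash
dotnet MyLoadTest.dll --config=coordinator_config.json --license=YOUR_LICENSE_KEY
```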