When designing your system, you have a few options for how messages will be delivered to your consumers:

  1. messages are delivered in round-robin fashion – this is a great option for distributed, load-balanced processing;
  2. messages are delivered to all consumers – this is the option to use when there are different processors for a given message;
  3. a hybrid solution, where some consumers get messages in round-robin fashion and others get all messages.

Round-robin delivery

In round-robin delivery, messages are automatically distributed between all consumers. By manipulating the prefetchCount setting you can easily achieve load balancing. This scenario is great when there are many consumers doing the same type of processing. An example would be an order-fulfilment system with a few processors receiving orders. At times of heavy load you can spin up extra consumers to cope with the additional work, and the system will automatically deliver the next message to a free consumer in round-robin fashion.

using EasyNetQ;
using EasyNetQ.Loggers;
using System;
using System.Threading.Tasks;

namespace DeliveryMethods
{
    public class RoundRobin
    {
        private readonly IBus _bus;

        public RoundRobin()
        {
            // prefetchcount=1: each consumer holds at most one unacknowledged message, which gives even load balancing
            _bus = RabbitHutch.CreateBus("host=localhost;prefetchcount=1", x => x.Register<IEasyNetQLogger, NullLogger>());
        }

        public void Run()
        {
            // All three consumers use the same subscriptionId, so they share one queue and messages are load-balanced between them
            _bus.SubscribeAsync<string>("roundRobin", m => Task.Factory.StartNew(() =>
            {
                Console.WriteLine("Consumer 1: {0}", m);
            }));

            _bus.SubscribeAsync<string>("roundRobin", m => Task.Factory.StartNew(() =>
            {
                Console.WriteLine("Consumer 2: {0}", m);
                System.Threading.Thread.Sleep(TimeSpan.FromSeconds(5)); // simulate a slow consumer
            }));

            _bus.SubscribeAsync<string>("roundRobin", m => Task.Factory.StartNew(() =>
            {
                Console.WriteLine("Consumer 3: {0}", m);
            }));

            // Publish a new order every second
            int oId = 1;
            while (true)
            {
                _bus.PublishAsync(string.Format("New order {0}. Timestamp: {1}", oId++, DateTime.Now.TimeOfDay));
                System.Threading.Thread.Sleep(TimeSpan.FromSeconds(1));
            }
        }
    }
}

When subscribing to the bus you have to specify the same subscriptionId for all consumers. You also need to set prefetchCount[1] to a small value – 1, or a slightly larger value if some buffering is necessary.

The above example shows three consumers subscribed to the new-order message. One of the consumers (no. 2) processes messages slowly and, due to the load balancing, receives only every 7th message or so (whenever it is ready to consume the next one). The other consumers process messages at the same speed and receive them in round-robin fashion.

All-consumers delivery

This delivery mode delivers every message to each subscribed consumer. It is useful in scenarios where many different processors are interested in a given message. For example, in the order-fulfilment system there may be one processor for verifying stock availability, another for billing and another for auditing and logging. When a client places an order, a new message is generated and a copy of it is delivered to each processor. This allows for parallel processing, as each consumer may run on a separate thread or machine. It also keeps the system open for extension – each new processor simply registers interest in the message by subscribing to the bus.

using EasyNetQ;
using EasyNetQ.Loggers;
using System;
using System.Threading.Tasks;

namespace DeliveryMethods
{
    public class AllConsumers
    {
        private readonly IBus _bus;

        public AllConsumers()
        {
            _bus = RabbitHutch.CreateBus("host=localhost;prefetchcount=1", x => x.Register<IEasyNetQLogger, NullLogger>());
        }

        public void Run()
        {
            // Each processor subscribes with its own subscriptionId, so each gets its own queue and a copy of every message
            _bus.SubscribeAsync<string>("stockChecker", m => Task.Factory.StartNew(() =>
            {
                Console.WriteLine("Stock   [1]: {0}", m);
            }));

            _bus.SubscribeAsync<string>("billing", m => Task.Factory.StartNew(() =>
            {
                Console.WriteLine("Billing [2]: {0}", m);
                System.Threading.Thread.Sleep(TimeSpan.FromSeconds(5)); // simulate a slow billing processor
            }));

            _bus.SubscribeAsync<string>("audit", m => Task.Factory.StartNew(() =>
            {
                Console.WriteLine("Audit   [3]: {0}", m);
            }));

            int oId = 1;
            while (true)
            {
                _bus.PublishAsync(string.Format("New order {0}. Timestamp: {1}", oId++, DateTime.Now.TimeOfDay));
                System.Threading.Thread.Sleep(TimeSpan.FromSeconds(1));
            }
        }

    }
}

When subscribing to the bus, each processor uses a different subscriptionId. When deciding on prefetchCount[1] you may go with a higher value to get better performance.
As long as the consumer code for each processor runs on a different thread or a different machine, the processing happens in parallel. Each consumer receives messages independently, whenever it is ready.
The speed at which each processor handles messages doesn't matter to the other processors[2]; messages are simply buffered in its queue.
Extending the system is simple: you create a new consumer which subscribes to the bus with a new subscriptionId and run it. On the first run, EasyNetQ automatically creates and binds the queue, and messages start being delivered to it.
When designing the system you can also create a conditional consumer. This kind of consumer processes a message only when some condition is met, for example when the order value is over a certain amount. When the condition isn't met you simply return from the consuming method and the message is removed from the queue. A sketch of such a consumer is shown below.
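
As an illustration, here is a minimal sketch of such a conditional consumer added as an extra processor. It assumes the order is published as a small Order class rather than the plain strings used in the samples above; the Order type, the "largeOrderAlert" subscriptionId and the amount threshold are all made up for the example.

using EasyNetQ;
using System;
using System.Threading.Tasks;

namespace DeliveryMethods
{
    // Hypothetical message type used only for this sketch.
    public class Order
    {
        public int Id { get; set; }
        public decimal Amount { get; set; }
    }

    public class LargeOrderAlert
    {
        private readonly IBus _bus;

        public LargeOrderAlert(IBus bus)
        {
            _bus = bus;
        }

        public void Run()
        {
            // New subscriptionId ("largeOrderAlert"): EasyNetQ creates and binds a new queue on the first run,
            // so this processor receives its own copy of every Order message.
            _bus.SubscribeAsync<Order>("largeOrderAlert", o => Task.Factory.StartNew(() =>
            {
                // Conditional consumer: returning without doing anything acknowledges the message
                // and it is removed from the queue.
                if (o.Amount <= 1000m)
                    return;

                Console.WriteLine("Large order alert: order {0} for {1}", o.Id, o.Amount);
            }));
        }
    }
}

Because it uses its own subscriptionId, you can run it alongside the existing processors without affecting how messages are balanced between them.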

Hybrid solution

There is nothing to prevent you from using both delivery methods for the same message type. You can scale out each processor type by adding extra instances of it, so if you find, for example, that the stock processor and the billing processor are falling behind, you can scale just those processors out (see the sketch below).
If the system load varies, you can implement auto-scaling – a component which monitors queue sizes and automatically increases or decreases the number of consumers to cope with the workload.
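
Sticking to the string messages from the earlier samples, the rough sketch below combines both modes: two stock-checker consumers share the "stockChecker" subscriptionId and therefore load-balance orders between themselves in round-robin fashion, while the "audit" consumer has its own subscriptionId and still receives every message. The class name and consumer labels are only illustrative.

using EasyNetQ;
using EasyNetQ.Loggers;
using System;
using System.Threading.Tasks;

namespace DeliveryMethods
{
    public class Hybrid
    {
        private readonly IBus _bus;

        public Hybrid()
        {
            _bus = RabbitHutch.CreateBus("host=localhost;prefetchcount=1", x => x.Register<IEasyNetQLogger, NullLogger>());
        }

        public void Run()
        {
            // Two consumers share the "stockChecker" subscriptionId (one queue),
            // so orders are load-balanced between them.
            _bus.SubscribeAsync<string>("stockChecker", m => Task.Factory.StartNew(() =>
            {
                Console.WriteLine("Stock A: {0}", m);
            }));

            _bus.SubscribeAsync<string>("stockChecker", m => Task.Factory.StartNew(() =>
            {
                Console.WriteLine("Stock B: {0}", m);
            }));

            // The audit consumer has its own subscriptionId (its own queue),
            // so it still receives a copy of every order.
            _bus.SubscribeAsync<string>("audit", m => Task.Factory.StartNew(() =>
            {
                Console.WriteLine("Audit  : {0}", m);
            }));

            int oId = 1;
            while (true)
            {
                _bus.PublishAsync(string.Format("New order {0}. Timestamp: {1}", oId++, DateTime.Now.TimeOfDay));
                System.Threading.Thread.Sleep(TimeSpan.FromSeconds(1));
            }
        }
    }
}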


[1] prefetchCount is set when creating the bus. Its value determines the number of messages cached by a consumer for faster delivery.

[2] There may be some performance hit in delivering messages when one queue builds up a big backlog and RabbitMQ has to spend resources persisting those messages.