Hello and happy deploying!
In this article I will share how I set up a highly available (HA) message broker with RabbitMQ, HAProxy and Docker Swarm.
Our goal is a message broker architecture that is resilient and highly available. We also want to prevent message loss.
If you want to follow along on your machine, download the source code from: https://github.com/hermesmonteiro/rabbitmq_ha
This is our final result:

1 – Prepare HAProxy image
HAProxy is a free, open-source, lightweight high-availability load balancer and proxy server.
We will prepare the HAProxy image using a dockerfile (located in .\haproxy).
This is a very simple dockerfile to start with: we only use it to copy our custom configuration file haproxy.cfg (also located in .\haproxy).
Our haproxy.cfg is very simple too. We can talk about HAProxy options later.
The most important configuration is the mapping between the listening ports and the target servers. We map port 8082 to the RabbitMQ AMQP port 5672 on the three nodes, and port 8083 to the RabbitMQ management UI port 15672.
At the end we configure the stats endpoint so we can check the HAProxy statistics.
global
    maxconn 4096

defaults
    timeout connect 60s
    timeout client 60s
    timeout server 60s

listen rabbitmq
    bind *:8082
    balance roundrobin
    server rabbitmq1 rabbitmq1:5672 check inter 1000 fall 3
    server rabbitmq2 rabbitmq2:5672 check inter 1000 fall 3
    server rabbitmq3 rabbitmq3:5672 check inter 1000 fall 3

listen rabbitmq-ui
    bind *:8083
    mode tcp
    balance roundrobin
    server rabbitmq1 rabbitmq1:15672 check inter 1000 fall 3
    server rabbitmq2 rabbitmq2:15672 check inter 1000 fall 3
    server rabbitmq3 rabbitmq3:15672 check inter 1000 fall 3

listen stats
    bind *:1936
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
    stats auth admin:admin
In the dockerfile we create the image based on the official HAProxy image and just copy the config file to the expected configuration directory inside the container.
FROM haproxy
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
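If you want to build this image yourself rather than from the compose file, a plain docker build is enough. The tag haproxy-base is the image name the HAProxy compose file (shown later) expects, and the path assumes you run the command from the repository root:
docker image build -t haproxy-base .\haproxy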
2 – Prepare RabbitMQ image
RabbitMQ is a widely used open source message broker.
We will prepare the RabbitMQ image using a dockerfile (located in .\rabbitmq).
This dockerfile is also very simple. We only use it to copy the configuration files and enable the plugins we want.
We modify the main config file rabbitmq.config (located in .\rabbitmq) to tell RabbitMQ to load its "definitions" from a specific file: our definitions.json.
[
  {rabbit, [
    {loopback_users, []}
  ]},
  {rabbitmq_management, [
    {load_definitions, "/etc/rabbitmq/definitions.json"}
  ]}
].
Our definitions.json holds all the configuration we need; it is a bit more involved. For now, the most important parts are the queues, exchanges and bindings.
Alternatively, we can configure everything in the RabbitMQ UI and export the definitions from there (see the example after the file below).
{
  "rabbit_version": "3.8.19",
  "rabbitmq_version": "3.8.19",
  "product_name": "RabbitMQ",
  "product_version": "3.8.19",
  "users": [
    {
      "name": "guest",
      "password_hash": "+EeUEEI/0NQvMwPrp/cqpZ9nBE1V04Z0l4Z62Stxis6tmnBr",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator",
      "limits": {}
    }
  ],
  "vhosts": [
    {
      "name": "/"
    }
  ],
  "permissions": [
    {
      "user": "guest",
      "vhost": "/",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ],
  "topic_permissions": [],
  "parameters": [],
  "global_parameters": [
    {
      "name": "internal_cluster_id",
      "value": "rabbitmq-cluster-id-sCe03Vcr5buS4w-8iX6t_Q"
    }
  ],
  "policies": [],
  "queues": [
    {
      "name": "MyQueue",
      "vhost": "/",
      "durable": true,
      "auto_delete": false,
      "arguments": {
        "x-queue-type": "classic"
      }
    }
  ],
  "exchanges": [
    {
      "name": "MyTopicExchange",
      "vhost": "/",
      "type": "topic",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    }
  ],
  "bindings": [
    {
      "source": "MyTopicExchange",
      "vhost": "/",
      "destination": "MyQueue",
      "destination_type": "queue",
      "routing_key": "MyTag",
      "arguments": {}
    }
  ]
}
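As mentioned above, you don't need to write this file by hand: the management UI has an "Export definitions" option, and the management HTTP API returns the same JSON. For example, against a running broker with the default guest credentials (port 15672 directly on a node, or port 8083 through HAProxy once this setup is running):
curl -u guest:guest http://localhost:15672/api/definitions -o definitions.json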
In the dockerfile we create the image based on the official RabbitMQ image, add our custom files and enable the Prometheus plugin.
FROM rabbitmq:3-management
ADD rabbitmq.config /etc/rabbitmq/
ADD definitions.json /etc/rabbitmq/
RUN rabbitmq-plugins enable rabbitmq_prometheus
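The rabbitmq_prometheus plugin exposes metrics on port 15692 of each node. That port is not published anywhere in this setup, but if you map it (or let a Prometheus scraper reach the nodes over the shared network) you can check the endpoint with something like:
curl http://localhost:15692/metrics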
3 – Composing
I split the process into two compose files because I want a Swarm service for the HAProxy nodes but not for the RabbitMQ nodes.
RabbitMQ Compose
Relevant facts about this RabbitMQ compose file (rabbit_docker-compose.yml):
- The image build is commented out because I generate the image separately, but we could build it from the compose file as well.
- We create 3 RabbitMQ "nodes", but there could be more.
- All nodes share the same volume. This way we can use persistent messages and prevent message loss if a node goes down.
- We use an external network, shared with the HAProxy nodes.
- We do not map/expose any ports, because we will access RabbitMQ through HAProxy.
version: '3.8'
services:
  rabbitmq1:
    container_name: rabbitmq1
    image: rabbitmq-cluster-base
    #build:
    #  context: ./rabbitmq_base
    #  dockerfile: dockerfile
    restart: always
    environment:
      - TZ=UTC
    hostname: rabbitmq1
    volumes:
      - ./data:/var/lib/rabbitmq/mnesia
  rabbitmq2:
    container_name: rabbitmq2
    image: rabbitmq-cluster-base
    #build:
    #  context: ./rabbitmq_base
    #  dockerfile: dockerfile
    restart: always
    environment:
      - TZ=UTC
    hostname: rabbitmq2
    volumes:
      - ./data:/var/lib/rabbitmq/mnesia
  rabbitmq3:
    container_name: rabbitmq3
    image: rabbitmq-cluster-base
    #build:
    #  context: ./rabbitmq_base
    #  dockerfile: dockerfile
    restart: always
    environment:
      - TZ=UTC
    hostname: rabbitmq3
    volumes:
      - ./data:/var/lib/rabbitmq/mnesia
networks:
  default:
    external: true
    name: rabbitHA_network
volumes:
  data:
HAProxy Compose
Relevant facts about this HAProxy compose file (haproxy_docker-compose.yml):
- The image build is commented out because I generate the image separately, but we could build it from the compose file as well.
- We only configure 3 replicas for the Swarm service; there is much more configuration that could be added.
- We map 3 ports:
  - 1936 for statistics
  - 8083 for the RabbitMQ UI
  - 8082 for the RabbitMQ queues (for producers and consumers)
- We use an external network, shared with the RabbitMQ nodes.
Notice that the mapped ports match the listening ports from the HAProxy config file.
version: '3.8'
services:
  haproxy:
    image: haproxy-base
    #build:
    #  context: ./haproxy
    #  dockerfile: dockerfile
    hostname: haproxy
    volumes:
      - ./tmp/data:/data
    deploy:
      replicas: 3
    ports:
      - "1936:1936"
      - "8083:8083"
      - "8082:8082"
networks:
  default:
    external: true
    name: rabbitHA_network
4 – The execution script
I use Windows 10 with Docker, so I created a .BAT script; it could easily be converted into a bash script (a rough sketch is shown after the script).
- We initialize the Swarm.
  - If it is already initialized, no problem: you just join the existing swarm.
- We create the external network, with Swarm scope and the attachable option.
  - The attachable option allows the network to be shared between Swarm services and standalone containers.
- We build the RabbitMQ image
- We run the RabbitMQ compose
- We create the RabbitMQ cluster
- We deploy the HAProxy stack
docker swarm init
docker network create --scope=swarm --attachable rabbitHA_network
docker image build --no-cache -t rabbitmq-cluster-base C:\HM\rabbit_cluster\rabbitmq
docker compose -f C:\HM\rabbit_cluster\rabbit_docker-compose.yml up --force-recreate -d
TIMEOUT 3
docker exec rabbitmq1 sh -c "rabbitmqctl stop_app; rabbitmqctl reset; rabbitmqctl start_app"
TIMEOUT 3
docker exec rabbitmq2 sh -c "rabbitmqctl stop_app; rabbitmqctl reset; rabbitmqctl join_cluster rabbit@rabbitmq1; rabbitmqctl start_app"
docker exec rabbitmq3 sh -c "rabbitmqctl stop_app; rabbitmqctl reset; rabbitmqctl join_cluster rabbit@rabbitmq1; rabbitmqctl start_app"
docker stack deploy -c C:\HM\rabbit_cluster\haproxy_docker-compose.yml haproxyStack
pause
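For reference, here is a rough bash equivalent of the same script (a sketch only; it assumes you run it from the repository root on Linux/macOS, so the Windows paths are replaced with relative ones):
#!/bin/bash
# Same steps as the .BAT script above, with sleep instead of TIMEOUT
docker swarm init
docker network create --scope=swarm --attachable rabbitHA_network
docker image build --no-cache -t rabbitmq-cluster-base ./rabbitmq
docker compose -f rabbit_docker-compose.yml up --force-recreate -d
sleep 3
docker exec rabbitmq1 sh -c "rabbitmqctl stop_app; rabbitmqctl reset; rabbitmqctl start_app"
sleep 3
docker exec rabbitmq2 sh -c "rabbitmqctl stop_app; rabbitmqctl reset; rabbitmqctl join_cluster rabbit@rabbitmq1; rabbitmqctl start_app"
docker exec rabbitmq3 sh -c "rabbitmqctl stop_app; rabbitmqctl reset; rabbitmqctl join_cluster rabbit@rabbitmq1; rabbitmqctl start_app"
docker stack deploy -c haproxy_docker-compose.yml haproxyStack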
5 – The result
After the execution we should be able to open the RabbitMQ UI through the HAProxy port (8083).
We can see the RabbitMQ nodes configured in the cluster.
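We can also verify the cluster from the command line; running rabbitmqctl on the first node should list all three rabbit@rabbitmqN nodes as running:
docker exec rabbitmq1 rabbitmqctl cluster_status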
We can even see HAProxy at work: if we hit F5 repeatedly, the "Cluster" name in the upper-right corner will rotate between the three nodes.
The message broker is ready to receive messages.
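To try it out, here is a minimal producer/consumer sketch in Python using the pika client (pip install pika). It assumes the cluster is reachable on localhost through the HAProxy AMQP port 8082 and uses the guest user and the MyTopicExchange/MyQueue objects from definitions.json:
import pika

# Connect through HAProxy, which round-robins to one of the three nodes
credentials = pika.PlainCredentials("guest", "guest")
params = pika.ConnectionParameters(host="localhost", port=8082, credentials=credentials)
connection = pika.BlockingConnection(params)
channel = connection.channel()

# Publish a persistent message to the topic exchange with the bound routing key
channel.basic_publish(
    exchange="MyTopicExchange",
    routing_key="MyTag",
    body=b"hello from the HA cluster",
    properties=pika.BasicProperties(delivery_mode=2),  # 2 = persistent
)

# Fetch the message back from the queue the binding routes to
method, properties, body = channel.basic_get(queue="MyQueue", auto_ack=True)
print(body)

connection.close()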

If we open the stats port (1936) we can see the HAProxy statistics.
