RabbitMQ Cluster Setup (Docker)

1. RabbitMQ Cluster Setup

1.1 Images

  • docker pull rabbitmq:management
  • docker pull centos:7

1.2 docker-compose.yml

version: '2'
services:
  ha-rabbitmq01:
    image: rabbitmq:management
    container_name: ha-rabbitmq01
    restart: always
    hostname: ha-rabbitmq01
    privileged: true
    environment:
      - "RABBITMQ_NODENAME=ha-rabbitmq01"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      ha-rabbitmq-network:
        ipv4_address: 172.50.0.2
  ha-rabbitmq02:
    image: rabbitmq:management
    container_name: ha-rabbitmq02
    restart: always
    hostname: ha-rabbitmq02
    privileged: true
    environment:
      - "RABBITMQ_NODENAME=ha-rabbitmq02"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "5673:5672"
      - "15673:15672"
    networks:
      ha-rabbitmq-network:
        ipv4_address: 172.50.0.3
  ha-rabbitmq03:
    image: rabbitmq:management
    container_name: ha-rabbitmq03
    restart: always
    hostname: ha-rabbitmq03
    privileged: true
    environment:
      - "RABBITMQ_NODENAME=ha-rabbitmq03"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "5674:5672"
      - "15674:15672"
    networks:
      ha-rabbitmq-network:
        ipv4_address: 172.50.0.4
networks:
  ha-rabbitmq-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.50.0.0/16
          gateway: 172.50.0.1
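The compose file above leaves the Erlang cookie random, which is why section 1.3 syncs it by hand. As an alternative sketch (the official image honors the RABBITMQ_ERLANG_COOKIE environment variable, though newer RabbitMQ releases deprecate it), each service could pin the cookie up front and skip the manual sync:

    environment:
      - "RABBITMQ_NODENAME=ha-rabbitmq01"
      - "RABBITMQ_ERLANG_COOKIE=PKBRYYBVWAZHJCAVAHBJ"  # same value on all three services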

To run the containers individually with docker run instead, create the network first and then reference it in each command, as sketched below.
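A sketch of the network creation, matching the subnet in the compose file; append --network ha-rabbitmq-network and a fixed --ip (172.50.0.2 through 172.50.0.4) to the run commands below so the /etc/hosts entries in section 1.3 stay valid:

docker network create --driver bridge --subnet 172.50.0.0/16 --gateway 172.50.0.1 ha-rabbitmq-network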

docker run -it -d --privileged --name ha-rabbitmq01 -p 15672:15672 -p 25672:25672 -p 5672:5672 -e RABBITMQ_NODENAME=ha-rabbitmq01 -h ha-rabbitmq01 rabbitmq:management

docker run -it -d --privileged --name ha-rabbitmq02 -p 15673:15672 -p 25673:25672 -p 5673:5672  -e RABBITMQ_NODENAME=ha-rabbitmq02  -h ha-rabbitmq02 rabbitmq:management

docker run -it -d --privileged --name ha-rabbitmq03 -p 15674:15672 -p 25674:25672 -p 5674:5672 -e RABBITMQ_NODENAME=ha-rabbitmq03  -h ha-rabbitmq03 rabbitmq:management

1.3 Syncing .erlang.cookie

# Check the cookie value on the first node
more ~/.erlang.cookie
PKBRYYBVWAZHJCAVAHBJ

# Write the same value on the other two nodes
echo "PKBRYYBVWAZHJCAVAHBJ" > ~/.erlang.cookie

Remember to configure /etc/hosts inside each container as well:

echo "172.50.0.2      ha-rabbitmq01" >> /etc/hosts
echo "172.50.0.3      ha-rabbitmq02" >> /etc/hosts
echo "172.50.0.4      ha-rabbitmq03" >> /etc/hosts

1.4 Cluster Configuration

Enter the ha-rabbitmq02 and ha-rabbitmq03 containers in turn (docker exec -it <container id> /bin/bash) and run the following:

# Stop the RabbitMQ application (the Erlang VM keeps running)
rabbitmqctl stop_app
# Wipe this node's state so it can join another cluster
rabbitmqctl reset
# Join node 1 as a RAM node (drop --ram to join as a disc node)
rabbitmqctl join_cluster --ram ha-rabbitmq01@ha-rabbitmq01
rabbitmqctl start_app

# Check the cluster status
rabbitmqctl cluster_status
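The same join can be driven from the host in one pass (a sketch, assuming the container names from the compose file above):

for c in ha-rabbitmq02 ha-rabbitmq03; do
  docker exec $c rabbitmqctl stop_app
  docker exec $c rabbitmqctl reset
  docker exec $c rabbitmqctl join_cluster --ram ha-rabbitmq01@ha-rabbitmq01
  docker exec $c rabbitmqctl start_app
done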

If you see errors such as "Stopping rabbit application on node ha-rabbitmq02@ha-rabbitmq02 ... Error: unable to perform an operation on node 'ha-rabbitmq02@ha-rabbitmq02'. Please see diagnostics information and suggestions below.", the cause is usually an inconsistent Erlang cookie.


1.5 Mirrored Queues

# Mirror every queue ("^" matches all queue names) across all cluster nodes
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
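Mirroring every queue to every node is the simplest policy but also the heaviest. A common variant (a sketch using the classic mirrored-queue parameters) keeps exactly two copies of each queue and synchronizes new mirrors automatically:

rabbitmqctl set_policy ha-two "^" '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'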

Alternatively, configure the policy from the management UI:

[Screenshot: mirroring policy in the management UI]

2. HAProxy Setup

https://www.haproxy.org/#down

https://src.fedoraproject.org/repo/pkgs/haproxy/haproxy-2.4.4.tar.gz/

docker run -it -d --privileged --name haproxy-keepalived-v1 centos:7 /usr/sbin/init

2.1 Installing from Source

yum update
yum install -y gcc wget net-tools make vim initscripts ipvsadm tcpdump
cd /usr/local/src
wget https://src.fedoraproject.org/repo/pkgs/haproxy/haproxy-2.4.4.tar.gz/sha512/a8987e8342fdbec7e48de09a4391a67e77e05493260e0e561e8c185b6457b8e1086cc45ce04ebf3365699c008dff81667490e2fe99c33c0ac3c7513df8ae025c/haproxy-2.4.4.tar.gz
tar -zxvf haproxy-2.4.4.tar.gz
cd haproxy-2.4.4
make TARGET=linux-glibc PREFIX=/usr/local/haproxy-2.4.4
make install PREFIX=/usr/local/haproxy-2.4.4

HAProxy can also be installed directly with yum install haproxy, which is the route this walkthrough actually took.

2.2 Configure Environment Variables

vim /etc/profile

export HAPROXY_HOME=/usr/local/haproxy-2.4.4
export PATH=$PATH:$HAPROXY_HOME/sbin

Make the new environment variables take effect immediately:

source /etc/profile

2.3 Verify the Installation

[root@ha-rabbitmq-haproxy01 haproxy-2.4.4]# haproxy -v
HAProxy version 2.4.4-acb1d0b 2021/09/07 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2026.
Known bugs: http://www.haproxy.org/bugs/bugs-2.4.4.html
Running on: Linux 3.10.0-1160.36.2.el7.x86_64 #1 SMP Wed Jul 21 11:57:15 UTC 2021 x86_64

2.4 Load Balancing Configuration

Create a new configuration file named haproxy.cfg; here it is placed at /etc/haproxy/haproxy.cfg.

mkdir /etc/haproxy

vim /etc/haproxy/haproxy.cfg

global
  daemon
  maxconn 256

defaults
  mode http
  timeout connect 5000ms
  timeout client 5000ms
  timeout server 5000ms

# AMQP traffic: plain TCP, least-connections balancing across the three nodes
listen rabbitmq_cluster
  bind 0.0.0.0:5677
  option tcplog
  mode tcp
  timeout client  3h
  timeout server  3h
  balance leastconn
  server ha-rabbitmq01 ha-rabbitmq01:5672 check inter 2s rise 2 fall 3
  server ha-rabbitmq02 ha-rabbitmq02:5672 check inter 2s rise 2 fall 3
  server ha-rabbitmq03 ha-rabbitmq03:5672 check inter 2s rise 2 fall 3

# HAProxy's own statistics page
listen http_front
  bind 0.0.0.0:80
  stats uri /haproxy?stats

# Proxy for the RabbitMQ management UI
listen rabbitmq_admin
  bind 0.0.0.0:8001
  server ha-rabbitmq01 ha-rabbitmq01:15672 check inter 2s rise 2 fall 3
  server ha-rabbitmq02 ha-rabbitmq02:15672 check inter 2s rise 2 fall 3
  server ha-rabbitmq03 ha-rabbitmq03:15672 check inter 2s rise 2 fall 3
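Before starting HAProxy, the file can be validated in check mode:

haproxy -c -f /etc/haproxy/haproxy.cfg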

2.5 Start

# The backend hostnames in haproxy.cfg must resolve before HAProxy starts
echo "172.50.0.2      ha-rabbitmq01" >> /etc/hosts
echo "172.50.0.3      ha-rabbitmq02" >> /etc/hosts
echo "172.50.0.4      ha-rabbitmq03" >> /etc/hosts
haproxy -f /etc/haproxy/haproxy.cfg

2.6 Check the Running Process

ps aux|grep haproxy
killall haproxy   # stop it when needed

2.7 Verify the Setup

2.7.1 HAProxy Stats Page

[Screenshot: HAProxy stats page]

2.7.2 RabbitMQ Web Management UI

[Screenshot: RabbitMQ management UI via the proxy]

2.7.3 AMQP Service

[Screenshot: AMQP connection through the proxy]

Or visit http://106.12.203.184:8001/api/vhosts directly.
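A command-line check from inside the proxy container works too (a sketch, assuming the default guest/guest account, which the official RabbitMQ Docker image historically allows from non-localhost clients):

# Management API through the proxy on port 8001
curl -u guest:guest http://127.0.0.1:8001/api/vhosts
# Check that the AMQP port answers through the proxy (if netcat is available)
nc -zv 127.0.0.1 5677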

2.8 Commit as an Image

Once everything is installed, commit the container as an image and re-run it attached to the compose network (note that the network name carries the compose project prefix harabbitmq_):

docker commit -a 'gan' -m 'haproxy-keepalived-v1' haproxy-keepalived-v1 ha-haproxy-v1
docker rm -f haproxy-keepalived-v1

docker run -it -d --privileged --name ha-haproxy01 -p 5677:5677 -p 8181:80 -p 8001:8001 --network harabbitmq_ha-rabbitmq-network --ip 172.50.0.5 --add-host ha-rabbitmq01:172.50.0.2 --add-host ha-rabbitmq02:172.50.0.3 --add-host ha-rabbitmq03:172.50.0.4 ha-haproxy-v1 /usr/sbin/init

docker run -it -d --privileged --name ha-haproxy02 -p 5688:5677 -p 8191:80 -p 8011:8001 --network harabbitmq_ha-rabbitmq-network --ip 172.50.0.6 --add-host ha-rabbitmq01:172.50.0.2 --add-host ha-rabbitmq02:172.50.0.3 --add-host ha-rabbitmq03:172.50.0.4  ha-haproxy-v1 /usr/sbin/init
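With both proxy containers up, the stats pages should answer on the published host ports from the run commands above:

curl -s "http://127.0.0.1:8181/haproxy?stats" | head
curl -s "http://127.0.0.1:8191/haproxy?stats" | head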

3. Keepalived Setup

3.1 Install

yum -y install keepalived
echo "172.50.0.2      ha-rabbitmq01" >> /etc/hosts
echo "172.50.0.3      ha-rabbitmq02" >> /etc/hosts
echo "172.50.0.4      ha-rabbitmq03" >> /etc/hosts

3.2 Configuration and Startup

vim /etc/keepalived/haproxy_check.sh

#!/bin/bash
# If HAProxy has died, try to restart it; if it is still not running
# a few seconds later, stop keepalived so the VIP fails over to the peer.

A=`ps -C haproxy --no-header |wc -l`
if [ $A -eq 0 ] ; then
    systemctl start haproxy
    sleep 3
    if [ `ps -C haproxy --no-header |wc -l` -eq 0 ] ; then
        systemctl stop keepalived
    fi
fi
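Keepalived runs this script itself, so it must be executable (an easy step to miss):

chmod +x /etc/keepalived/haproxy_check.sh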

vim /etc/keepalived/keepalived.conf

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.50.131
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/haproxy_check.sh"
    interval 5
    weight 10
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_haproxy
    }
    unicast_src_ip 172.50.0.5
    unicast_peer {
        172.50.0.6
    }
    virtual_ipaddress {
        172.50.0.100
    }
}
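On the standby node (ha-haproxy02), keep the same global_defs and vrrp_script sections and mirror the vrrp_instance. A sketch, assuming a backup priority of 90:

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 100
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_haproxy
    }
    unicast_src_ip 172.50.0.6
    unicast_peer {
        172.50.0.5
    }
    virtual_ipaddress {
        172.50.0.100
    }
}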
Start keepalived on both nodes and confirm it is running:

keepalived -f /etc/keepalived/keepalived.conf
ps aux|grep keepalived

If errors show up here, the host machine needs the following installed and configured as well.

# Skip the install if ipvsadm is already there; just do the configuration
yum install -y ipvsadm
# Enable IP forwarding and non-local binds (the VIP needs these)
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf

sysctl -p
touch /etc/sysconfig/ipvsadm
systemctl start ipvsadm
systemctl enable ipvsadm

Ping the VIP:

[root@ha-haproxy01 /]# ping 172.50.0.100
PING 172.50.0.100 (172.50.0.100) 56(84) bytes of data.
64 bytes from 172.50.0.100: icmp_seq=1 ttl=64 time=0.016 ms
64 bytes from 172.50.0.100: icmp_seq=2 ttl=64 time=0.017 ms
^C
--- 172.50.0.100 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.016/0.016/0.017/0.004 ms

3.3 Verify Failover

Install the tcpdump package and watch the VRRP advertisements:

yum install tcpdump -y
tcpdump -i eth0 vrrp -n

The addresses on the master look like this; note the VIP 172.50.0.100 on eth0:

[root@ha-haproxy01 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
173: eth0@if174: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:32:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.50.0.5/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.50.0.100/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe32:5/64 scope link 
       valid_lft forever preferred_lft forever

Kill keepalived on the master:

pkill keepalived

At this point the VIP has moved to ha-haproxy02:

[root@ha-rabbitmq-haproxy01 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
173: eth0@if174: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:32:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.50.0.5/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe32:5/64 scope link 
       valid_lft forever preferred_lft forever
[root@ha-haproxy02 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
171: eth0@if172: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:32:00:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.50.0.6/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.50.0.100/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe32:6/64 scope link 
       valid_lft forever preferred_lft forever

Start keepalived on the master again and the VIP moves back:

[root@ha-haproxy01 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
173: eth0@if174: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:32:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.50.0.5/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.50.0.100/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe32:5/64 scope link 
       valid_lft forever preferred_lft forever

3.4 Common Issues

  1. Can't initialize ipvs: Protocol not available Are you sure that IP Virtual Server is built in the kernel or as module?

     The ipvsadm module needs to be installed.

  2. bash: ip: command not found

     The command is simply missing inside the container; yum -y install initscripts fixes it.

  3. Keepalived_healthcheckers exited with permanent error FATAL. Terminating

     This error almost always means keepalived.conf is misconfigured.

  4. Many people hit permission problems when running inside a container; usually the container was started without --privileged.

  5. Clients connecting through the proxy address sometimes time out or get disconnected. That is HAProxy at work: it actively closes idle TCP connections. Tune the connection timeouts to your workload (timeout client 3h / timeout server 3h), or have the client reconnect after a drop; see the snippet after this list.
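Another option, sketched below, is to let HAProxy send TCP keepalives on both sides of each proxied connection instead of relying only on very long timeouts; add these to the rabbitmq_cluster listener:

listen rabbitmq_cluster
  option clitcpka   # TCP keepalives toward clients
  option srvtcpka   # TCP keepalives toward the RabbitMQ nodes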

