HAProxy + Keepalived High-Availability Cluster

一、Compile and Install HAProxy

1、Set up the Lua environment (required for the USE_LUA build option)

yum install libtermcap-devel ncurses-devel libevent-devel readline-devel
wget http://www.lua.org/ftp/lua-5.3.6.tar.gz

tar -xf lua-5.3.6.tar.gz
mv lua-5.3.6 /usr/local/
cd /usr/local 
ln -s lua-5.3.6 lua
cd lua
make linux test

src/lua -v
Lua 5.3.6  Copyright (C) 1994-2020 Lua.org, PUC-Rio

2、Compile and install HAProxy
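The build below is run inside the extracted HAProxy source tree (version 2.2.10, as shown in the output further down). One way to fetch it, assuming the usual haproxy.org download layout:

wget https://www.haproxy.org/download/2.2/src/haproxy-2.2.10.tar.gz
tar -xf haproxy-2.2.10.tar.gz
cd haproxy-2.2.10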

make ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_OPENSSL=1 \
USE_ZLIB=1 USE_SYSTEMD=1 USE_CPU_AFFINITY=1 USE_LUA=1 \
LUA_INC=/usr/local/lua/src/ LUA_LIB=/usr/local/lua/src/ PREFIX=/usr/local/haproxy

make install PREFIX=/usr/local/haproxy
cp haproxy /usr/sbin/

[root@haproxy1 haproxy-2.2.10]# /usr/sbin/haproxy -v
HA-Proxy version 2.2.10-6a09215 2021/03/03 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2025.
Known bugs: http://www.haproxy.org/bugs/bugs-2.2.10.html
Running on: Linux 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64

3、Create the HAProxy configuration file

mkdir -p /etc/haproxy
mkdir /var/lib/haproxy
vim /etc/haproxy/haproxy.cfg

global
    maxconn 100000
    chroot /usr/local/haproxy
    stats socket /var/lib/haproxy/haproxy.sock1 mode 600 level admin process 1
    stats socket /var/lib/haproxy/haproxy.sock2 mode 600 level admin process 2
    uid 99
    gid 99
    daemon
    nbproc 2
    cpu-map 1 0
    cpu-map 2 1
    pidfile /var/lib/haproxy/haproxy.pid
    log 127.0.0.1 local3 info

defaults
    option http-keep-alive
    option forwardfor
    maxconn 100000
    mode http
    timeout connect 30000ms
    timeout client 30000ms
    timeout server 30000ms

listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri /haproxy-status
    stats auth haadmin:123456

listen web_server
    bind 10.0.0.201:80
    mode http
    log global
    balance roundrobin
    option forwardfor
    server web1 10.0.0.101:80 check inter 3s fall 2 rise 5
    server web2 10.0.0.102:80 check inter 3s fall 2 rise 5
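
Before wiring up the service, the configuration can be syntax-checked with the same flags the unit file's ExecStartPre uses below:

/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c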

4、Create the systemd unit file for HAProxy

mkdir -p /var/lib/haproxy
chown -R 99.99 /var/lib/haproxy/

vim /usr/lib/systemd/system/haproxy.service

[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target

[Service]
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID

[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl start haproxy
systemctl enable haproxy
systemctl status haproxy
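
The stats page defined in the listen stats section can then serve as a quick health check, for example:

curl -u haadmin:123456 http://127.0.0.1:9999/haproxy-status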

5、Compile and install Keepalived

The --disable-fwmark option can be used to disable iptables rules, which helps prevent the VIP from becoming unreachable; without this option, iptables rules are enabled by default.

yum install -y gcc curl openssl-devel libnl3-devel net-snmp-devel 
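
The configure step below is run inside the extracted Keepalived source tree (version 2.2.2, as shown in the output further down). One way to fetch it, assuming the usual keepalived.org download layout:

wget https://www.keepalived.org/software/keepalived-2.2.2.tar.gz
tar -xf keepalived-2.2.2.tar.gz
cd keepalived-2.2.2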

./configure --prefix=/usr/local/keepalived --disable-fwmark
make && make install

[root@haproxy1 keepalived]# sbin/keepalived -v
Keepalived v2.2.2 (03/05,2021)
Copyright(C) 2001-2021 Alexandre Cassen, <acassen@gmail.com>
...

6.1、Create the main configuration file (on both Keepalived nodes)

mkdir -p /etc/keepalived/conf.d
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id haproxy1.cn   # use a unique router_id on each node
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

include /etc/keepalived/conf.d/*.conf

6.2、Create the sub-configuration file under /etc/keepalived/conf.d/ (Keepalived node 1)

vrrp_instance web_1 {
    state MASTER
    interface eth0
    virtual_router_id 57
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.201/24 dev eth0 label eth0:1
    }
}

6.3、Create the sub-configuration file under /etc/keepalived/conf.d/ (Keepalived node 2)

vrrp_instance web_1 {
    state BACKUP
    interface eth0
    virtual_router_id 57
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.201/24 dev eth0 label eth0:1
    }
}

6.4、Start Keepalived and check the VIP (node 1)


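A minimal way to start Keepalived and confirm the VIP, assuming the binary installed under the prefix above:

/usr/local/keepalived/sbin/keepalived -f /etc/keepalived/keepalived.conf
ip addr show dev eth0    # on the MASTER node, 10.0.0.201/24 should appear labelled eth0:1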

6.5、Packet capture


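The VRRP advertisements sent to the multicast group configured above (224.0.0.18) can be captured with, for example:

tcpdump -i eth0 -nn host 224.0.0.18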

6.6、Dual-master configuration (node 1)

vrrp_instance web_1 {
    state MASTER
    interface eth0
    virtual_router_id 57
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.201/24 dev eth0 label eth0:1
    }
}

vrrp_instance web_2 {
    state BACKUP
    interface eth0
    virtual_router_id 99
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.202/24 dev eth0 label eth0:2
    }
}

The web_1 VIP 10.0.0.201 is active on node 1.



6.7、Dual-master configuration (node 2)

vrrp_instance web_1 {
    state BACKUP
    interface eth0
    virtual_router_id 57
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.201/24 dev eth0 label eth0:1
    }
}


vrrp_instance web_2 {
    state MASTER
    interface eth0
    virtual_router_id 99
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.202/24 dev eth0 label eth0:2
    }
}

The web_2 VIP 10.0.0.202 is active on node 2.



The packet capture shows that each of the two nodes is advertising its own VRRP instance.


二、Implementing a Tomcat Session Cluster

1、Download the JDK 8 binary package from the official site
https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html

tar xvf jdk-8u241-linux-x64.tar.gz -C /usr/local/
cd /usr/local/
ln -s jdk1.8.0_241/ jdk

2、Configure the environment variables

vim /etc/profile.d/jdk.sh

export JAVA_HOME=/usr/local/jdk
export PATH=$PATH:$JAVA_HOME/bin

source /etc/profile.d/jdk.sh

3、Download Tomcat 8.x.x from the official site and install it

wget https://mirrors.bfsu.edu.cn/apache/tomcat/tomcat-8/v8.5.63/bin/apache-tomcat-8.5.63.tar.gz
tar -xf apache-tomcat-8.5.63.tar.gz -C /usr/local
cd /usr/local
ln -sv apache-tomcat-8.5.63 tomcat
cd tomcat/bin
./catalina.sh version
./catalina.sh start
ss -tanlp        # port 8080 should now be listening

./startup.sh     # start Tomcat
./shutdown.sh    # stop Tomcat

3.1、Create the systemd service file for Tomcat

useradd -r -s /sbin/nologin tomcat
chown -R tomcat.tomcat /usr/local/tomcat/

# Prepare the environment file required by the service unit
vim /usr/local/tomcat/conf/tomcat.conf
JAVA_HOME=/usr/local/jdk

vim /lib/systemd/system/tomcat.service
[Unit]
Description=Tomcat
After=syslog.target network.target
[Service]
Type=forking
EnvironmentFile=/usr/local/tomcat/conf/tomcat.conf
ExecStart=/usr/local/tomcat/bin/startup.sh
ExecStop=/usr/local/tomcat/bin/shutdown.sh
PrivateTmp=true
User=tomcat
Group=tomcat
[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl start tomcat.service
systemctl enable tomcat.service

4、Edit conf/server.xml and change the default virtual host to tomcat1.org on node 1 and tomcat2.org on node 2

<Engine name="Catalina" defaultHost="tomcat1.org">
<Engine name="Catalina" defaultHost="tomcat2.org">
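
For these hostnames to resolve (both when testing directly and for the Nginx upstream below), /etc/hosts entries such as the following are assumed; the addresses here follow the node IPs used later in this article and are an assumption:

10.0.0.111 tomcat1.org
10.0.0.112 tomcat2.org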

5、Load-balancing the backend Tomcats with Nginx

ip_hash: source-address hash scheduling. The hash is computed from the client's remote_addr (the first 24 bits of an IPv4 address, or the entire IPv6 address) to keep each client on the same backend and so preserve its session.
Drawback: many clients reach the Internet through shared (NAT) addresses, which skews the load distribution, and if the chosen backend Tomcat goes down the session is lost.
hash $cookie_JSESSIONID; # hashes on the JSESSIONID cookie to bind each session to one backend.
Drawback: requests are pinned to a single backend Tomcat server; if it goes down, the session is lost.

Summary: IP- or cookie-based session binding is simple to deploy; the cookie-based sticky variant in particular is fine-grained and has little impact on load distribution, but sessions are lost whenever the backend server holding them fails.

Nginx configuration (inside the http block)

upstream tomcat-server {
    # ip_hash;
    # hash $cookie_JSESSIONID;
    server tomcat1.org:8080 weight=1 fail_timeout=5s max_fails=3;
    server tomcat2.org:8080 weight=2 fail_timeout=5s max_fails=3;
}

server {
    location ~* \.(jsp|do)$ {
        proxy_pass http://tomcat-server;
    }
}

Create a test JSP page at /data/webapps/ROOT/index.jsp (mkdir -p /data/webapps/ROOT first) and fix ownership with chown -R tomcat.tomcat /data/webapps

<%@ page import="java.util.*" %>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>tomcat test</title>
</head>
<body>
<div>On <%=request.getServerName() %></div>
<div><%=request.getLocalAddr() + ":" + request.getLocalPort() %></div>
<div>SessionID = <span style="color:blue"><%=session.getId() %></span></div>
<%=new Date()%>
</body>
</html>
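
Each node can also be tested directly before going through the proxy (this assumes the /etc/hosts entries above and that each node's Host serves /data/webapps, as in the server.xml excerpt in section 6.1):

curl http://tomcat1.org:8080/index.jsp
curl http://tomcat2.org:8080/index.jsp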

6、Implementing a Tomcat session cluster

Instead of handling session persistence on the load balancer, Tomcat officially provides a session replication cluster: every Tomcat replicates and synchronizes its sessions with all the others, so all Tomcats hold the same session data.
Drawback: with many backend Tomcat hosts, the duplicated sessions consume a large amount of memory, so this approach does not suit deployments with many backend servers.

6.1、Modify the server.xml configuration on both Tomcat nodes (the excerpt below is for node 1; on node 2 change the Host name and the Receiver address to that node's own values)

<Host name="tomcat1.org" appBase="/data/webapps" unpackWARs="true" autoDeploy="true">
      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
                 channelSendOptions="8">

          <Manager className="org.apache.catalina.ha.session.DeltaManager"
                   expireSessionsOnShutdown="false"
                   notifyListenersOnReplication="true"/>

          <Channel className="org.apache.catalina.tribes.group.GroupChannel">
            <!-- use a multicast address that does not conflict with other clusters -->
            <Membership className="org.apache.catalina.tribes.membership.McastService"
                        address="228.0.0.4"
                        port="45564"
                        frequency="500"
                        dropTime="3000"/>
            <!-- bind the Receiver to this node's own NIC address -->
            <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                      address="10.0.0.111"
                      port="4000"
                      autoBind="100"
                      selectorTimeout="5000"
                      maxThreads="6"/>

            <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
              <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
            </Sender>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
          </Channel>

          <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
                 filter=""/>
          <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>

          <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
                    tempDir="/tmp/war-temp/"
                    deployDir="/tmp/war-deploy/"
                    watchDir="/tmp/war-listen/"
                    watchEnabled="false"/>

          <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
    </Cluster>
</Host>

6.2、Enable distributed sessions in the application's web.xml (both Tomcat nodes)

cp -a /usr/local/tomcat/webapps/ROOT/WEB-INF/ /data/webapps/ROOT/
vim /data/webapps/ROOT/WEB-INF/web.xml

  <description>
     Welcome to Tomcat
  </description>
  <distributable/>
</web-app>

Restart all the Tomcats. When the load balancer dispatches requests to different nodes, the returned SessionID no longer changes.
Access the page in a browser and refresh several times: the SessionID stays the same while the backend hosts alternate in round-robin.
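
A quick command-line check (the front-end URL below is a placeholder; substitute your load balancer's address) is to reuse a cookie jar and compare the SessionID across repeated requests:

curl -s -c /tmp/cookies.txt -b /tmp/cookies.txt http://lb.example.org/index.jsp | grep SessionID   # hypothetical LB address
curl -s -c /tmp/cookies.txt -b /tmp/cookies.txt http://lb.example.org/index.jsp | grep SessionID   # same SessionID, possibly a different backend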

Summary: the session replication cluster uses Tomcat itself to share and synchronize all sessions across multiple servers. Any single backend server can fail while the remaining servers still hold every session, so the service is unaffected. However, membership heartbeats rely on multicast and replication on TCP unicast, so this mechanism is not a good solution when there are many nodes; and under heavy concurrency, the full set of sessions on each machine occupies a very large amount of memory and may even exhaust it.

7、Shared session server (msm)

msm (memcached-session-manager) stores Tomcat sessions in memcached or Redis, making the session data highly available.

Sticky mode: t1 and m1 can be deployed on one host, and t2 and m2 on another (t1/t2 are the Tomcat nodes, m1/m2 the memcached nodes). When a new user sends a request to Tomcat1, Tomcat1 generates the session, returns it to the user, and at the same time sends a copy to memcached2 as a backup. In other words, the Tomcat1 session is the primary session and the copy in memcached2 is the backup; memcached effectively keeps a spare copy of the session. If Tomcat1 finds that memcached2 has failed and the session cannot be backed up there, it stores the backup in memcached1 instead.


7.1、Download the jar packages required by msm
https://github.com/magro/memcached-session-manager/wiki/SetupAndConfiguration

7.2、Add the following inside the <Context> element of context.xml (Tomcat node 1)

<Context>
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
        memcachedNodes="n1:10.0.0.111:11211,n2:10.0.0.112:11211"
        failoverNodes="n1"
        requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
        transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
    />
</Context>

Configuration note: failoverNodes lists the failover node(s); here n1 is the backup node and n2 is the primary storage node. On the other Tomcat, change n1 to n2, so that its primary node is n1 and its backup node is n2.

7.3、Add the following inside the <Context> element of context.xml (Tomcat node 2)

<Context>
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
        memcachedNodes="n1:10.0.0.111:11211,n2:10.0.0.112:11211"
        failoverNodes="n2"
        requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
        transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
    />
</Context>

7.4、Install and configure memcached on both tomcat1 and tomcat2

yum -y install memcached
vim /etc/sysconfig/memcached

PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
# Comment out the following line so that memcached listens on all interfaces rather than only on localhost
#OPTIONS="-l 127.0.0.1,::1"
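
After editing the file, start and enable memcached on both nodes:

systemctl start memcached
systemctl enable memcached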

7.5、Copy the following jar packages into the Tomcat lib directory (/usr/local/tomcat/lib) on both nodes and restart Tomcat

asm-5.2.jar
kryo-3.0.3.jar
kryo-serializers-0.45.jar
memcached-session-manager-2.3.2.jar
memcached-session-manager-tc8-2.3.2.jar
minlog-1.3.1.jar
msm-kryo-serializer-2.3.2.jar
objenesis-2.6.jar
reflectasm-1.11.9.jar
spymemcached-2.12.3.jar

7.6、After the configuration is complete, check tail -f tomcat/logs/catalina.out; output like the following indicates success

[root@tomcat1 local]# tail -f tomcat/logs/catalina.out 
- sticky: true
- operation timeout: 1000
- node ids: [n2]
- failover node ids: [n1]
- storage key prefix: null
- locking mode: null (expiration: 5s)
--------
[root@tomcat2 tomcat]# tail -f logs/catalina.out
- sticky: true
- operation timeout: 1000
- node ids: [n1]
- failover node ids: [n2]
- storage key prefix: null
- locking mode: null (expiration: 5s)
--------

Access the site in a browser and refresh the page repeatedly: the load balancer dispatches requests to different machines while the sessionid never changes.

8、msm non-sticky mode

Non-sticky mode: the front-end Tomcats have no fixed (sticky) association with the back-end memcached nodes; msm supports non-sticky mode from version 1.4.0 onward. The Tomcat session is only a transit copy: for each SessionID, one memcached node, n1 (or n2), is chosen at random as the primary session store and the other, n2 (or n1), as the backup. A newly created session is sent to both the primary and the backup memcached and then removed locally. For a single session one memcached is the primary and the other the backup, but across all sessions each memcached acts as both primary and backup. If n1 goes offline, n2 is promoted to primary; when n1 comes back online, n2 remains the primary session storage node.

8.1、Key point of the non-sticky configuration: compared with 7.2/7.3 above, add sticky="false" in conf/context.xml.
Tomcat node 1 configuration

<Context>
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
        memcachedNodes="n1:10.0.0.111:11211,n2:10.0.0.112:11211"
        sticky="false"
        failoverNodes="n1"
        requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
        transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
    />
</Context>

Tomcat node 2 configuration

<Context>
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
        memcachedNodes="n1:10.0.0.111:11211,n2:10.0.0.112:11211"
        sticky="false"
        failoverNodes="n2"
        requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
        transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
    />
</Context>

Summary: a session server stores all sessions in a shared external store, with multiple redundant nodes holding the session data, so session storage itself is highly available while using little memory on the application servers. This is the better solution for persisting sessions.
