This section tests LVS high availability, removing nginx itself as a single point of failure and load balancing across the nginx tier; how nginx in turn load-balances the applications is not covered here. In the diagram below this corresponds to the first layer, the LVS load balancer; the second layer, nginx load balancing, is out of scope.
Architecture diagram
1. Prepare the environment
Adjust these to your own internal network IPs as needed.
LVS server1 (Master): 192.168.2.18, virtual IP: 192.168.2.25
LVS server2 (Backup): 192.168.2.125, virtual IP: 192.168.2.25
Web server1 (nginx): 192.168.2.13
Web server2 (nginx): 192.168.2.15
Note: the clocks of all cluster servers must be kept in sync.
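The clock note matters because failover behavior and log correlation both assume consistent time. One way to sync the nodes (a sketch, assuming Debian/Ubuntu since the rest of this guide uses apt):

```shell
# Sketch: keep every cluster node's clock in sync with chrony
# (Debian/Ubuntu assumed; ntpd or systemd-timesyncd work equally well).
sudo apt-get install -y chrony
sudo systemctl enable --now chrony
chronyc tracking   # the "System time" offset should be near zero on every node
```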
2. Configure the two nginx servers
1. Install nginx on both web servers
nginx installation and configuration are omitted here.
2. Configure the loopback VIP and ARP suppression on both web servers
vim /etc/init.d/realserver
#!/bin/bash
# description: bring the VIP up/down on a real server (LVS-DR mode)
SNS_VIP=192.168.2.25   # the VIP; it must be on the same subnet as the real servers
case "$1" in
start)
    echo "start LVS of REALServer"
    # bind the VIP to the loopback alias lo:0 with a /32 netmask
    ifconfig lo:0 $SNS_VIP broadcast $SNS_VIP netmask 255.255.255.255 up
    # suppress ARP for the VIP so only the director answers ARP requests for it
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    ;;
stop)
    /sbin/ifconfig lo:0 down
    echo "close LVS RealServer"
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
Make the script executable
chmod +x /etc/init.d/realserver
Run the script
/etc/init.d/realserver start
Check the result with ifconfig
root@node05:/opt/nginx/conf# ifconfig lo:0
lo:0: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 192.168.2.25 netmask 255.255.255.255
loop txqueuelen 1000 (Local Loopback)
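To double-check the four ARP kernel settings on each web server after running the script, a small helper can compare them against the expected values (the helper is hypothetical, not part of the original setup; the optional base-directory argument exists only so the logic is easy to test):

```shell
# Hypothetical helper: verify the four ARP sysctls that the realserver
# script sets. Defaults to the live /proc tree; pass another base
# directory to exercise the logic without touching the system.
check_arp_sysctls() {
  base=${1:-/proc/sys/net/ipv4/conf}
  ok=1
  for spec in lo/arp_ignore:1 lo/arp_announce:2 all/arp_ignore:1 all/arp_announce:2; do
    path="$base/${spec%:*}"
    want=${spec#*:}
    got=$(cat "$path" 2>/dev/null)
    if [ "$got" != "$want" ]; then
      echo "MISMATCH $path=$got (want $want)"
      ok=0
    fi
  done
  [ "$ok" = 1 ] && echo "OK" || echo "ARP settings not applied"
}
# On a configured web server this should print OK:
# check_arp_sysctls
```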
3. Install LVS on the master and backup
Check whether the ip_vs kernel module is already loaded
root@v:/etc/keepalived# lsmod | grep ip_vs
ip_vs 151552 0
nf_defrag_ipv6 20480 1 ip_vs
nf_conntrack 135168 1 ip_vs
libcrc32c 16384 4 nf_conntrack,xfs,raid456,ip_vs
If it is not, install the pieces:
# install kernel headers
sudo apt-get install linux-kernel-headers kernel-package
# install the LVS admin tool
apt install ipvsadm
# list the virtual server table
ipvsadm -Ln
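For orientation, the virtual service that keepalived will create automatically from its config in the next section can also be set up by hand with ipvsadm. A sketch, run as root on the director; keepalived makes this unnecessary, but it shows what the tool manages:

```shell
# Manual LVS-DR setup equivalent to the keepalived virtual_server block
# (sketch; run as root, not needed once keepalived is in charge):
ipvsadm -A -t 192.168.2.25:80 -s wrr                      # add virtual service, weighted round robin
ipvsadm -a -t 192.168.2.25:80 -r 192.168.2.13:80 -g -w 1  # -g = direct routing (DR)
ipvsadm -a -t 192.168.2.25:80 -r 192.168.2.15:80 -g -w 2
ipvsadm -Ln                                               # verify the table
ipvsadm -C                                                # clear it again before handing over to keepalived
```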
4. Install keepalived on the LVS master and backup
1. Install keepalived on both LVS nodes
apt-get install gcc libssl-dev
wget http://www.keepalived.org/software/keepalived-1.2.24.tar.gz
tar xf keepalived-1.2.24.tar.gz
cd keepalived-1.2.24
./configure --prefix=/usr/local/keepalived
make && make install
# copy the config files into the system directories:
mkdir /etc/sysconfig
mkdir /etc/keepalived
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
cp /usr/local/keepalived/lib/systemd/system/keepalived.service /lib/systemd/system/
# enable start on boot
sudo systemctl enable keepalived.service
# disable start on boot
sudo systemctl disable keepalived.service
# start
service keepalived start
# stop
service keepalived stop
Keepalived log configuration
# to view keepalived logs, append the line below to the end of /etc/rsyslog.conf
vim /etc/rsyslog.conf
local0.* /var/log/keepalived.log
vim /etc/sysconfig/keepalived
Change KEEPALIVED_OPTIONS="-D" to
KEEPALIVED_OPTIONS="-D -d -S 0"
# restart the logging service
systemctl restart rsyslog
systemctl status rsyslog
# restart keepalived
systemctl restart keepalived
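After the restart it is worth confirming that log lines actually arrive at the file configured above:

```shell
# Confirm keepalived now logs through rsyslog:
tail -n 20 /var/log/keepalived.log
# VRRP state transitions are the interesting lines during failover tests:
grep -i vrrp /var/log/keepalived.log | tail -n 5
```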
2. Keepalived configuration on the LVS master and backup
1. Keepalived configuration on the LVS master
global_defs {
#    notification_email {
#    }
#    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER              # this node starts as the VRRP master
    interface ens160          # the NIC that carries VRRP traffic
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.2.25          # the virtual IP (VIP)
    }
}
virtual_server 192.168.2.25 80 {
    delay_loop 6
    lb_algo wrr               # weighted round robin
    lb_kind DR                # LVS direct-routing (DR) mode
    nat_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 192.168.2.13 80 {   # real server IP
        weight 1                    # weight used by wrr
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.2.15 80 {
        weight 2
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
2. Keepalived configuration on the LVS backup
global_defs {
#    notification_email {
#    }
#    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP              # this node starts as the VRRP backup
    interface ens160          # the NIC that carries VRRP traffic
    virtual_router_id 51
    priority 90               # must be lower than the master's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.2.25          # the virtual IP (VIP)
    }
}
virtual_server 192.168.2.25 80 {
    delay_loop 6
    lb_algo wrr               # weighted round robin
    lb_kind DR                # LVS direct-routing (DR) mode
    nat_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 192.168.2.13 80 {   # real server IP
        weight 1                    # weight used by wrr
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.2.15 80 {
        weight 2
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
Start only the master LVS first for testing; once LVS itself checks out, move on to the master/backup failover test.
3. Start keepalived on the master LVS
systemctl restart keepalived
Check the LVS table
root@v:/etc/keepalived# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP v:http wrr
-> 192.168.2.13:http Route 1 0 0
-> 192.168.2.15:http Route 2 0 0
3. LVS testing
1. Access the virtual IP. Note that an in-flight connection sticks to one real server for a while before traffic switches over.
http://192.168.2.25
Refreshing shows the page alternating between 2.15 and 2.13.
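The alternation follows the wrr weights: per scheduling cycle, each real server receives a number of requests proportional to its weight, so with weights 1 and 2, out of every 3 requests one goes to 192.168.2.13 and two to 192.168.2.15 (the exact interleaving IPVS produces may differ). A plain-shell illustration of the per-cycle share, no LVS needed:

```shell
# Compute one wrr cycle for the weights configured above:
# weight 1 for 192.168.2.13, weight 2 for 192.168.2.15.
schedule=""
for entry in 192.168.2.13:1 192.168.2.15:2; do
  server=${entry%%:*}
  weight=${entry##*:}
  i=0
  while [ "$i" -lt "$weight" ]; do
    schedule="${schedule}${server} "
    i=$((i + 1))
  done
done
echo "$schedule"   # prints 192.168.2.13 once and 192.168.2.15 twice
```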
1. Check the LVS status
root@v:/etc/keepalived# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP v:http wrr
-> 192.168.2.13:http Route 1 0 0
-> 192.168.2.15:http Route 1 0 0
2. Removing a failed node
Simulate a failure by stopping one of the nginx services, as if that node had gone down
/etc/init.d/nginx stop
1. Check the LVS status
root@v:/etc/keepalived# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP v:http wrr
-> 192.168.2.15:http Route 1 0 0
root@v:/etc/keepalived# ipvsadm -Lnc
IPVS connection entries
pro expire state source virtual destination
TCP 13:20 ESTABLISHED 192.168.2.201:54031 192.168.2.25:80 192.168.2.15:80
TCP 06:05 ESTABLISHED 192.168.2.201:64717 192.168.2.25:80 192.168.2.15:80
3. Automatic re-add on recovery
/etc/init.d/nginx start
1. Check the LVS status
root@v:/etc/keepalived# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP v:http wrr
-> 192.168.2.13:http Route 1 1 0
-> 192.168.2.15:http Route 1 2 0
4. LVS master/backup failover test
1. Simulate the master LVS going down
service keepalived stop
2. The test shows the master and backup switch over automatically, so the LVS tier is highly available
3. Start the master LVS again; it automatically reclaims the master role
service keepalived start
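During these failover tests, a quick way to see which node currently owns the VIP is to inspect the interface (ens160, as configured above) on each LVS node:

```shell
# Run on each LVS node; only the node currently in MASTER state shows the VIP:
ip addr show ens160 | grep 192.168.2.25
# With the rsyslog setup from earlier, the state transitions are visible too:
grep -i 'entering master state' /var/log/keepalived.log | tail -n 3
```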