2020-08-15 Architect Course, Week 6 Assignment

▲ For students on the architecture-only track (days 11-12 of the assignments from Jie's video course):

1. Implement high availability for the OpenStack control plane

1.1 Environment

172.31.7.101: ctrl-01, the existing OpenStack controller node, with haproxy already configured

172.31.7.102: ctrl-02, the new OpenStack controller node to be added, with haproxy already configured
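
The haproxy configuration itself is not shown in this assignment. As a rough sketch only (the listener name is made up here, and the VIP 172.31.7.248 / openstack-vip.linux.local is taken from the hosts entry used later), each OpenStack API port would be fronted roughly like this, with one server line per controller:

listen openstack_keystone_5000
    bind 172.31.7.248:5000
    balance source
    server ctrl-01 172.31.7.101:5000 check inter 3s fall 3 rise 3
    server ctrl-02 172.31.7.102:5000 check inter 3s fall 3 rise 3

The same pattern would repeat for 9292 (glance), 8778 (placement), 8774/6080 (nova), 9696 (neutron) and 80 (dashboard), which are the ports used in the tests below.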

1.2 Base environment preparation on the newly added controller node

1. Install the Train release yum repository

[ctrl-02]# yum install centos-release-openstack-train.noarch -y 

2. Install the OpenStack client and openstack-selinux

[ctrl-02]# yum install python-openstackclient openstack-selinux -y  

3. Install the Python module needed to connect to MySQL

[ctrl-02]# yum install python2-PyMySQL -y  

4. Install the Python module needed to connect to memcached

[ctrl-02]# yum install python-memcached -y  

5. Copy the environment variable files for the admin and myuser users from controller1 to the current controller2 node

[ctrl-02]# scp 172.31.7.101:/root/{admin-openrc.sh,demo-openrc.sh} /root/
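
For reference, an admin-openrc.sh on this kind of Train deployment typically looks roughly like the following; the password is a placeholder and the auth URL assumes clients go through the VIP hostname defined later in /etc/hosts:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS                               # placeholder, not the real password
export OS_AUTH_URL=http://openstack-vip.linux.local:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2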

1.3 Controller: install keystone

1. Install the keystone service

[ctrl-02] # yum install openstack-keystone httpd mod_wsgi -y 

2. On controller1, archive the already deployed keystone configuration directory

[ctrl-01] # cd /etc/keystone/

[ctrl-01] # tar czvf keystone-controller1.tar.gz ./*

3. Copy it to the current controller2 node and modify the relevant configuration files

[ctrl-01] # scp keystone-controller1.tar.gz 172.31.7.102:/etc/keystone/

[ctrl-02] # cd /etc/keystone/

[ctrl-02] # tar xvf keystone-controller1.tar.gz

[ctrl-02] # vim /etc/httpd/conf/httpd.conf

             ServerName 172.31.7.102:80    # make ServerName listen on this node's own address (main site)

[ctrl-02] # vim /etc/hosts

              172.31.7.248 openstack-vip.linux.local

[ctrl-02] # ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

[ctrl-02] # systemctl start httpd.service

[ctrl-02] # systemctl enable httpd.service

4. Test from the controller2 node: disable controller1's port 5000 backend in haproxy, then test through controller2's port 5000 (a sketch of this cutover is shown after the verification note below)

[ctrl-02] # . admin-openrc.sh

[ctrl-02] # neutron agent-list   

# If the command authenticates through keystone and returns data from MySQL, the configuration is correct
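
One hedged way to do this kind of per-service cutover test (the haproxy host label and the exact listener name are assumptions, and hand-editing the config is only one option):

[ha-01] # vim /etc/haproxy/haproxy.cfg      # comment out the "server ctrl-01 ..." line in the 5000 listener
[ha-01] # systemctl reload haproxy
[ctrl-02] # openstack token issue           # should still succeed, now served only by controller2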

1.4 Controller: install glance

1. Install the glance service

[ctrl-02] # yum install openstack-glance -y

2. On controller1, archive the already deployed glance configuration directory

[ctrl-01] # cd /etc/glance/

[ctrl-01] # tar czvf glance-controller1.tar.gz ./*

3. Copy it to the current controller2 node and modify the relevant configuration files

[ctrl-01] # scp glance-controller1.tar.gz 172.31.7.102:/etc/glance/

[ctrl-02] # cd /etc/glance/

[ctrl-02] # tar xvf glance-controller1.tar.gz

[ctrl-02] # systemctl start openstack-glance-api.service

[ctrl-02] # systemctl enable openstack-glance-api.service

[ctrl-02] # vim /etc/fstab   

172.31.7.105:/data/glance/ /var/lib/glance/images/ nfs defaults,_netdev 0 0

[ctrl-02] # mount -a           

# Check the ownership of /var/lib/glance/images/; if it is wrong, run: chown glance.glance /var/lib/glance/images/ -R
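
A quick sanity check of the shared image store (a sketch; the NFS server 172.31.7.105 comes from the fstab entry above):

[ctrl-02] # df -h /var/lib/glance/images/      # the NFS export should show up as mounted here
[ctrl-02] # ls -ld /var/lib/glance/images/     # owner and group should both be glance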

4. Disable controller1's port 9292 backend in haproxy and test through controller2's port 9292

[ctrl-02] # openstack image list         # after authenticating through keystone, check whether the images can be listed

1.5 Controller: install placement

1. Install the placement service

[ctrl-02] # yum install openstack-placement-api -y

2. On controller1, archive the already deployed placement configuration directory

[ctrl-01] # cd /etc/placement/

[ctrl-01] # tar czvf placement-controller1.tar.gz ./*

3. Copy it to the current controller2 node and unpack it

[ctrl-01] # scp placement-controller1.tar.gz 172.31.7.102:/etc/placement/

[ctrl-02] #  cd /etc/placement/

[ctrl-02] #  tar xvf placement-controller1.tar.gz

4. Modify the placement (httpd) configuration and restart the service

[ctrl-02] #  vim /etc/httpd/conf.d/00-placement-api.conf     

# append the following to the end of the configuration file

<Directory /usr/bin>

  <IfVersion >= 2.4>

      Require all granted

  </IfVersion>

  <IfVersion < 2.4>

      Order allow,deny

      Allow from all

  </IfVersion>

</Directory>

[ctrl-02] #  systemctl restart httpd

5. Disable controller1's port 8778 backend in haproxy and test through controller2's port 8778

[ctrl-02] #  placement-status upgrade check                # all checks should report Success
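
As an extra unauthenticated probe (a sketch; the placement root URL returns a small version document without a token):

[ctrl-02] # curl http://172.31.7.102:8778/        # should return a JSON list of supported placement API versions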

1.6 Controller: install nova

1. Install the nova services

[ctrl-02] # yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y

2. On controller1, archive the already deployed nova configuration directory

[ctrl-01] # cd /etc/nova/

[ctrl-01] # tar czvf nova-controller1.tar.gz ./*

3. Copy it to the current controller2 node and modify the relevant configuration files

[ctrl-01] # scp nova-controller1.tar.gz 172.31.7.102:/etc/nova/

[ctrl-02] # cd /etc/nova/

[ctrl-02] # tar xvf nova-controller1.tar.gz

[ctrl-02] # grep "172" ./* -R                           # find which settings still point at controller1 and need changing

./nova.conf:server_listen = 172.31.7.101

./nova.conf:server_proxyclient_address = 172.31.7.101

[ctrl-02] # vim nova.conf

[vnc]

server_listen = 172.31.7.102           # listen address of the VNC proxy on this node

server_proxyclient_address = 172.31.7.102

[ctrl-02] # systemctl start \

    openstack-nova-api.service \

    openstack-nova-scheduler.service \

    openstack-nova-conductor.service \

    openstack-nova-novncproxy.service

[ctrl-02] # systemctl enable \

    openstack-nova-api.service \

    openstack-nova-scheduler.service \

    openstack-nova-conductor.service \

    openstack-nova-novncproxy.service

[ctrl-02] # tail -f /var/log/nova/*.log                     # the logs must not contain any errors

4. Disable controller1's port 8774 and 6080 backends in haproxy and test through controller2's ports 8774 and 6080

[ctrl-02] # nova service-list                   # list all nova services; every one of them must be up
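
The same check with the unified client (a sketch; both commands read the same data):

[ctrl-02] # openstack compute service list      # nova-scheduler, nova-conductor and the compute agents should all be up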

1.7 Controller: install neutron

1. Install the neutron services

[ctrl-02] # yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

2. On controller1, archive the already deployed neutron configuration directory

[ctrl-01] # cd /etc/neutron/

[ctrl-01] # tar czvf neutron-controller1.tar.gz ./*

3. Copy it to the current controller2 node and modify the relevant configuration files

[ctrl-01] # scp neutron-controller1.tar.gz 172.31.7.102:/etc/neutron/

[ctrl-02] # cd /etc/neutron/

[ctrl-02] # tar xvf neutron-controller1.tar.gz

[ctrl-02] # vim /etc/sysctl.conf                 # add kernel parameters

             net.bridge.bridge-nf-call-iptables = 1

             net.bridge.bridge-nf-call-ip6tables = 1
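
These two keys only exist once the br_netfilter module is loaded; if sysctl -p later complains that they are unknown, loading the module first usually helps (a sketch, assuming a stock CentOS 7 kernel):

[ctrl-02] # modprobe br_netfilter
[ctrl-02] # echo br_netfilter > /etc/modules-load.d/br_netfilter.conf      # reload it automatically after reboots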

[ctrl-02] # vim /usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py

    metric = 100              # around line 400

    #if 'metric' in gateway:  # comment out these two lines, otherwise the brq bridge device cannot automatically bind the eth0 NIC

    #    metric = gateway['metric'] - 1

[ctrl-02] # systemctl start neutron-server.service \

  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \

  neutron-metadata-agent.service

[ctrl-02] # sysctl -p

[ctrl-02] # systemctl enable neutron-server.service \

  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \

  neutron-metadata-agent.service 

[ctrl-02] # tail -f /var/log/neutron/*.log                   # the logs must not contain any errors

4. Disable controller1's port 9696 backend in haproxy and test through controller2's port 9696

[ctrl-02] # neutron agent-list                     # list all neutron agents; every agent must be alive and admin_state_up must be True
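
Related to the agent patch above: once a provider network exists, one hedged way to confirm the brq bridge really picked up the physical NIC (assuming the provider interface is eth0) is:

[ctrl-02] # ip link show eth0 | grep master      # should print a line containing "master brq..." once the bridge is created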

1.8 Controller: install the dashboard

1. Install the dashboard service

[ctrl-02] # yum install openstack-dashboard -y

2. On controller1, archive the already deployed dashboard configuration directory

[ctrl-01] # cd /etc/openstack-dashboard/

[ctrl-01] # tar zcvf openstack-dashboard-controller1.tar.gz ./*

3. Copy it to the current controller2 node and modify the relevant configuration files

[ctrl-01] # scp openstack-dashboard-controller1.tar.gz 172.31.7.102:/etc/openstack-dashboard/

[ctrl-02] # cd /etc/openstack-dashboard/

[ctrl-02] # tar xvf openstack-dashboard-controller1.tar.gz

[ctrl-02] # grep "172" ./* -R

./local_settings:ALLOWED_HOSTS = ['172.31.7.101', 'openstack-vip.linux.local']

./local_settings:OPENSTACK_HOST = "172.31.7.101"

[ctrl-02] # vim local_settings

ALLOWED_HOSTS = ['172.31.7.102', 'openstack-vip.linux.local']

OPENSTACK_HOST = "172.31.7.102"

[ctrl-02] # systemctl restart httpd

[ctrl-02] # tail -f /var/log/httpd/*.log                       # the logs must not contain any errors

4. Disable controller1's port 80 backend in haproxy and test through controller2's port 80

---  http://172.31.7.102/dashboard             # open it in a browser; both the username and the password can be admin
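
A quick command-line check before opening the browser (a sketch; a redirect to the login page is the expected healthy response):

[ctrl-02] # curl -I http://172.31.7.102/dashboard      # expect 200 OK or a 30x redirect to the dashboard login page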


2. Implement load balancing for OpenStack with LVS

2.1 Environment

1. Two keepalived servers

[KP-A] [KP-B]

2. Two web servers

[WEB-A]:192.168.10.21

[WEB-B]:192.168.10.50

3. OpenStack controller

[OPEN-CTRL]

4. VIP: 192.168.10.88 (registration of the OpenStack service instances against this VIP is already complete)

2.2 Install the web service on the web servers

1. Operations on WEB-A

[WEB-A]# yum install httpd 

[WEB-A]# vim /var/www/html/index.html

<h1>192.168.10.21</h1>

[WEB-A]# systemctl restart httpd

2. Operations on WEB-B

[WEB-B]# yum install httpd

[WEB-B]# vim /var/www/html/index.html

<h1>192.168.10.50</h1>

[WEB-B]# systemctl restart httpd
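
Before putting the two pages behind LVS, it is worth confirming each real server answers directly (a sketch, run from any host that can reach them):

# curl http://192.168.10.21/      # expect <h1>192.168.10.21</h1>
# curl http://192.168.10.50/      # expect <h1>192.168.10.50</h1>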

2.3 Install keepalived

[KP-A]# yum install keepalived ipvsadm -y

[KP-B]# yum install keepalived ipvsadm -y

2.4 Configure keepalived, start it and verify

1. Configuration on [KP-A]

[KP-A]# cat /etc/keepalived/keepalived.conf

vrrp_sync_group VIP-1 {
    group {
        VIP-1
    }
}

vrrp_instance VIP-1 {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 100
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.10.88 dev eth0 label eth0:1
    }
}

# web service virtual_server
virtual_server 192.168.10.88 80 {
    delay_loop 6                      # health-check interval in seconds
    lb_algo wrr                       # weighted round-robin scheduling
    lb_kind DR                        # LVS direct-routing mode
    protocol TCP
    real_server 192.168.10.21 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.10.50 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

[KP-A]# systemctl restart keepalived

[KP-A]# systemctl enable keepalived

[KP-A]# ipvsadm -L -n

2. Configuration on [KP-B]

[KP-B]# cat /etc/keepalived/keepalived.conf 

vrrp_sync_group VIP-1 {
    group {
        VIP-1
    }
}

vrrp_instance VIP-1 {
    state BACKUP
    interface eth0
    virtual_router_id 1
    priority 80
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.10.88 dev eth0 label eth0:1
    }
}

# web service virtual_server
virtual_server 192.168.10.88 80 {
    delay_loop 6                      # health-check interval in seconds
    lb_algo wrr                       # weighted round-robin scheduling
    lb_kind DR                        # LVS direct-routing mode
    protocol TCP
    real_server 192.168.10.21 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.10.50 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

[KP-B]# systemctl restart keepalived

[KP-B]# systemctl enable keepalived

[KP-B]# ipvsadm -L -n

2.5 Bind the VIP on each real server

1. Bind the VIP on both WEB servers and run the script

[WEB]# chmod a+x lvs-dr.sh

[WEB]# cat lvs-dr.sh

#!/bin/sh
# LVS DR mode real-server initialization script
LVS_VIP=192.168.10.88
source /etc/rc.d/init.d/functions
case "$1" in
start)
    # bind the VIP to lo:0 and stop this host from answering ARP for it,
    # so that only the LVS director owns the VIP on the network
    /sbin/ifconfig lo:0 $LVS_VIP netmask 255.255.255.255 broadcast $LVS_VIP
    /sbin/route add -host $LVS_VIP dev lo:0
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p >/dev/null 2>&1
    echo "RealServer Start OK"
    ;;
stop)
    # remove the VIP and restore the default ARP behaviour
    /sbin/ifconfig lo:0 down
    /sbin/route del $LVS_VIP >/dev/null 2>&1
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer Stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
exit 0

[WEB]# bash lvs-dr.sh start

[WEB]# ipvsadm -L -n --stats
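
With the VIP bound on both real servers and keepalived already running on KP-A, the DR setup can be tested from any other host on the segment (a sketch; "client" is just a placeholder for a test machine):

[client]# curl http://192.168.10.88/      # repeated requests should alternate between the 192.168.10.21 and 192.168.10.50 pages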


2. Bind the VIP on the OpenStack controller

[OPEN-CTRL]# chmod a+x lvs-dr.sh

[OPEN-CTRL]# cat lvs-dr.sh

#!/bin/sh
# LVS DR mode real-server initialization script
LVS_VIP=192.168.10.88
source /etc/rc.d/init.d/functions
case "$1" in
start)
    # bind the VIP to lo:0 and stop this host from answering ARP for it,
    # so that only the LVS director owns the VIP on the network
    /sbin/ifconfig lo:0 $LVS_VIP netmask 255.255.255.255 broadcast $LVS_VIP
    /sbin/route add -host $LVS_VIP dev lo:0
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p >/dev/null 2>&1
    echo "RealServer Start OK"
    ;;
stop)
    # remove the VIP and restore the default ARP behaviour
    /sbin/ifconfig lo:0 down
    /sbin/route del $LVS_VIP >/dev/null 2>&1
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer Stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
exit 0

[OPEN-CTRL]# bash lvs-dr.sh start 

[OPEN-CTRL]# ipvsadm -L -n --stats
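
Finally, a simple way to confirm the VIP really is bound on the loopback alias of each real server (the same check applies on the WEB nodes):

[OPEN-CTRL]# ifconfig lo:0      # should show inet 192.168.10.88 with netmask 255.255.255.255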
