As the management hub for containerized applications, Kubernetes provides application-level high availability by monitoring the number of Pods and rescheduling new Pods onto other Nodes when a host or container fails. For the Kubernetes cluster itself, high availability must also be considered at two further levels: high availability of the etcd data store and high availability of the Kubernetes Master components.
*Architecture*
K8S Master   192.168.81.11   Etcd, Flannel, Kube-apiserver, Kube-controller-manager, Kube-scheduler
K8S Minion1  192.168.81.12   Flannel, Kubelet, Kube-proxy
K8S Minion2  192.168.81.60   Flannel, Kubelet, Kube-proxy
*Preparation*
Enable IP forwarding and relax reverse-path filtering
vim /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
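To apply these kernel parameters without rebooting, reload the configuration:
# reload /etc/sysctl.conf
sysctl -p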
Stop and disable NetworkManager
systemctl stop NetworkManager.service
systemctl disable NetworkManager.service
firewalld and iptables (this is a test environment, so both are simply turned off; for production you can use the following instead)
# Stop firewalld
systemctl stop firewalld.service
# Prevent firewalld from starting on boot
systemctl disable firewalld.service
# Install iptables-services
yum -y install iptables-services
# Add rules
vim /etc/sysconfig/iptables
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2379 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2380 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10250 -j ACCEPT
[All nodes]
# Comment out this line
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
# Add this line
-A FORWARD -j ACCEPT
# Comment out this line
-A INPUT -j REJECT --reject-with icmp-host-prohibited
# Add this line
-A INPUT -j ACCEPT
# Restart the firewall so the rules take effect
systemctl restart iptables.service
# Enable the firewall on boot
systemctl enable iptables.service
Install Docker
# Update yum
yum update
# Configure the yum repository
vim /etc/yum.repos.d/docker.repo
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
# Install
yum install docker-engine
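Docker must be running before images can be pulled, so start and enable the service first (standard systemd commands for the docker-engine package):
# start Docker now and on every boot
systemctl start docker
systemctl enable docker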
# Pull images
docker pull google/pause
docker tag google/pause gcr.io/google_containers/pause-amd64:3.0
docker pull siriuszg/kubernetes-dashboard-amd64:v1.4.0
docker tag siriuszg/kubernetes-dashboard-amd64:v1.4.0 10.2.3.223:5000/kubernetes-dashboard-amd64:v1.4.0
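As an optional check, confirm that both images are now present locally:
# the pause and dashboard images should both be listed
docker images | grep -E "pause|dashboard"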
1. Deploying the etcd cluster
etcd sits at the heart of the Kubernetes cluster as its central database, so ensuring the high availability of Kubernetes starts with making sure this database is not a single point of failure.
On the one hand, etcd must be deployed as a cluster so that its data is redundant, backed up, and highly available;
on the other hand, the data stored in etcd should itself reside on reliable storage devices.
First, plan an etcd cluster of at least three servers (nodes) and install etcd on each of them.
etcd1  192.168.81.11
etcd2  192.168.81.12
etcd3  192.168.81.60
yum -y install etcd
# etcd instance name                                 ETCD_NAME
# etcd data directory                                ETCD_DATA_DIR
# URL for peer communication inside the cluster      ETCD_LISTEN_PEER_URLS
# URL for external clients                           ETCD_LISTEN_CLIENT_URLS
# peer URL advertised to the other cluster members   ETCD_INITIAL_ADVERTISE_PEER_URLS
# initial list of cluster members                    ETCD_INITIAL_CLUSTER
# initial cluster state                              ETCD_INITIAL_CLUSTER_STATE
# cluster token (name)                               ETCD_INITIAL_CLUSTER_TOKEN
# client URL advertised to external clients          ETCD_ADVERTISE_CLIENT_URLS
Edit the etcd configuration file /etc/etcd/etcd.conf on each server.
[etcd2]
vim /etc/etcd/etcd.conf
ETCD_NAME=etcd2
ETCD_DATA_DIR="/var/lib/etcd/etcd2.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.81.12:2380"
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://192.168.81.12:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.81.12:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.81.11:2380,etcd2=http://192.168.81.12:2380,etcd3=http://192.168.81.60:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.81.12:2379"systemctl restart etcd[etcd3]
vim /etc/etcd/etcd.conf
ETCD_NAME=etcd3
ETCD_DATA_DIR="/var/lib/etcd/etcd3.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.81.60:2380"
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://192.168.81.60:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.81.60:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.81.11:2380,etcd2=http://192.168.81.12:2380,etcd3=http://192.168.81.60:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.81.60:2379"systemctl restart etcd[etcd1]
vim /etc/etcd/etcd.conf
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd/etcd1.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.81.11:2380"
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://192.168.81.11:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.81.11:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.81.11:2380,etcd2=http://192.168.81.12:2380,etcd3=http://192.168.81.60:2380"
ETCD_INITIAL_CLUSTER_STATE="new"ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.81.11:2379"
systemctl restart etcd
Once all members are up, run etcdctl cluster-health on any etcd node to check the cluster's health:
etcdctl cluster-health
member 618d69366dd8cee3 is healthy: got healthy result from http://192.168.81.12:2379
member acd2ba924953b1ec is healthy: got healthy result from http://192.168.81.60:2379
member f56676081999649a is healthy: got healthy result from http://192.168.81.11:2379
cluster is healthy
Run etcdctl member list on any etcd node to list the cluster members:
etcdctl member list
618d69366dd8cee3: name=etcd2 peerURLs=http://192.168.81.12:2380 clientURLs=http://192.168.81.12:2379 isLeader=true
acd2ba924953b1ec: name=etcd3 peerURLs=http://192.168.81.60:2380 clientURLs=http://192.168.81.60:2379 isLeader=false
f56676081999649a: name=etcd1 peerURLs=http://192.168.81.11:2380 clientURLs=http://192.168.81.11:2379 isLeader=false
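As an optional sanity check that writes replicate across members, set a key on one member and read it back on another (the key name below is arbitrary and used only for this test):
# on etcd1
etcdctl set /sanity-check "ok"
# on etcd2 or etcd3
etcdctl get /sanity-check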
Edit the etcd startup script (systemd unit) so that the cluster options from /etc/etcd/etcd.conf are passed to etcd:
vim /usr/lib/systemd/system/etcd.service
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
Start the etcd service
systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
Create the network configuration for Flannel in etcd
etcdctl mkdir /k8s/network
etcdctl set /k8s/network/config '{"Network":"172.100.0.0/16"}'
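Optionally read the key back to verify the Flannel network configuration was stored as expected:
# should print {"Network":"172.100.0.0/16"}
etcdctl get /k8s/network/config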
2. Installing and configuring Flannel
yum -y install flannel
Create the log directory
mkdir -p /var/log/k8s/flannel/
Configure flanneld
vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://192.168.81.11:2379"
FLANNEL_ETCD_PREFIX="/k8s/network"
FLANNEL_OPTIONS="--logtostderr=false --log_dir=/var/log/k8s/flannel/ --etcd-endpoints=http://192.168.81.11:2379 --iface=eno16777736"
(If the Kubernetes master is itself a cluster, the only difference in this configuration is: FLANNEL_ETCD_ENDPOINTS="http://k8s_master_ip1:2379,http://k8s_master_ip2:2379,http://k8s_master_ip3:2379")
Start flanneld and enable it on boot
systemctl start flanneld
systemctl enable flanneld.service
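Once flanneld is running on every node, each node registers a subnet lease under the configured etcd prefix; listing the leases is a quick way to confirm all nodes have joined the overlay:
# one entry per node should appear
etcdctl ls /k8s/network/subnets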
Generate the Docker environment variables
/usr/libexec/flannel/mk-docker-opts.sh -i
Check the generated environment variables
cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.100.0.0/16
FLANNEL_SUBNET=172.100.9.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
cat /run/docker_opts.env
DOCKER_OPT_BIP="--bip=172.100.9.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
Edit the Docker systemd unit (e.g. /usr/lib/systemd/system/docker.service) so that it loads these options:
EnvironmentFile=/run/docker_opts.env
ExecStart=/usr/bin/dockerd ${DOCKER_OPT_BIP} ${DOCKER_OPT_IPMASQ} ${DOCKER_OPT_MTU}
Comment out the original line:
#ExecStart=/usr/bin/dockerd-current \
Reload systemd for the change to take effect
systemctl daemon-reload
Restart the services
systemctl stop docker
systemctl restart flanneld
systemctl start docker
Check the network interfaces
ip a | grep flannel
4: flannel0: mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    inet 172.100.9.0/16 scope global flannel0
ip a | grep docker
5: docker0:mtu 1500 qdisc noqueue state DOWN
inet 172.100.9.1/24 scope global docker0
Start a container on 192.168.81.11 and on 192.168.81.12:
docker run -ti --net=bridge centos:7 /bin/bash
From the container started on 192.168.81.11, ping the IP of the container started on 192.168.81.12 (172.100.64.2):
[root@8e7cf36a1fb2 /]# ping 172.100.64.2
PING 172.100.64.2 (172.100.64.2) 56(84) bytes of data.
64 bytes from 172.100.64.2: icmp_seq=1 ttl=60 time=8.46 ms
64 bytes from 172.100.64.2: icmp_seq=2 ttl=60 time=0.794 ms
64 bytes from 172.100.64.2: icmp_seq=3 ttl=60 time=0.584 ms
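If the cross-host ping fails, a common cause is a missing overlay route; each host should have a route for the Flannel network via flannel0 (the exact output depends on your subnet assignments):
# expect a route such as 172.100.0.0/16 dev flannel0
ip route | grep flannel0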
3. Deploying Kubernetes
Add the master and node entries to the hosts file (because of limited host resources, they are co-located with the etcd cluster):
echo "192.168.81.11 centos-master
> 192.168.81.12 centos-minion
> 192.168.81.60 centos-minion2" >> /etc/hosts
Edit /etc/kubernetes/config
vim /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.81.11:8080"
Configure the API server on the master
vim /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.100.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
Start the appropriate services on the master
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
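After the services are up, a quick health check of the control-plane components can be run on the master (in this insecure-port setup kubectl talks to the local API server on port 8080 by default):
# scheduler, controller-manager and the etcd members should report Healthy
kubectl get componentstatuses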
Configure the Kubernetes services on the nodes
minion
vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=centos-minion"
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
minion2
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=centos-minion2"
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
Start the appropriate services on the nodes
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
Check that the master can now see the nodes
kubectl get nodes
NAME STATUS AGE
192.168.81.12 NotReady 4m
centos-minion Ready 14s
centos-minion2 Ready 45s
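The NotReady entry registered under its IP address is typically a stale registration left from before hostname-override was configured; if that is the case in your cluster, the duplicate node object can be removed:
# delete the stale node entry (only if it is indeed a duplicate)
kubectl delete node 192.168.81.12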
Deploying kubernetes-dashboard
Pull the image
docker pull siriuszg/kubernetes-dashboard-amd64:v1.4.0
docker tag siriuszg/kubernetes-dashboard-amd64:v1.4.0 10.2.3.223:5000/kubernetes-dashboard-amd64:v1.4.0
You can download the kubernetes-dashboard.yaml provided by Google and modify it, or create your own:
https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
vim kubernetes-dashboard.yaml
-------------------------------------------------------------------------------------
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 10.2.3.223:5000/kubernetes-dashboard-amd64:v1.4.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          - --apiserver-host=http://192.168.81.11:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
--------------------------------------------------------------------------------
Create the dashboard
kubectl create -f kubernetes-dashboard.yaml
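Optionally confirm that the dashboard pod and service were created in the kube-system namespace:
kubectl get pods --namespace=kube-system
kubectl get svc --namespace=kube-system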
Access the dashboard
http://192.168.81.11:8080/ui
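Because the Service is of type NodePort, the dashboard can also be reached directly on the allocated node port of any node; look up that port with:
# shows the NodePort allocated to the dashboard service
kubectl describe svc kubernetes-dashboard --namespace=kube-system | grep NodePort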