Installing Kubernetes on CentOS 7.6 (HA with kubeadm)

I. Environment Preparation

1. Host Planning

Host IP          Hostname        Components
172.22.204.106   T-k8sMaster01   etcd, apiserver, controller-manager, scheduler, docker, proxy
172.22.204.107   T-k8sMaster02   etcd, apiserver, controller-manager, scheduler, docker, proxy
172.22.204.108   T-k8sMaster03   etcd, apiserver, controller-manager, scheduler, docker, proxy
172.22.204.111   T-k8sWorker01   kubelet, docker, proxy
172.22.204.112   T-k8sWorker02   kubelet, docker, proxy
172.22.204.110   VIP             virtual IP for the API server

2. HA Architecture

(HA architecture diagram)

II. Installation Preparation

1. Configure the Hosts

1.1 Change the hostname

Change the hostname on each of the five machines and update the hosts file at the same time.

[root@172.22.204.106-Template01:~]$hostnamectl set-hostname T-k8sMaster01
[root@172.22.204.106-t-k8smaster01:~]$cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.22.204.106  T-k8sMaster01
172.22.204.107  T-k8sMaster02
172.22.204.108  T-k8sMaster03
172.22.204.111  T-k8sWorker01
172.22.204.112  T-k8sWorker02

1.1.2 Change the hostnames of the other machines

On each of the remaining hosts, append the same entries to /etc/hosts, then set that host's own name (run the matching hostnamectl command on its respective machine):

[root@172.22.204.106-t-k8smaster01:~]# cat >> /etc/hosts << EOF
172.22.204.106  T-k8sMaster01
172.22.204.107  T-k8sMaster02
172.22.204.108  T-k8sMaster03
172.22.204.111  T-k8sWorker01
172.22.204.112  T-k8sWorker02
EOF
hostnamectl set-hostname T-k8sMaster02
hostnamectl set-hostname T-k8sMaster03
hostnamectl set-hostname T-k8sWorker01
hostnamectl set-hostname T-k8sWorker02

1.2 Tune the Hosts

Run the following on all hosts.

1.2.1 Disable swap

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab 
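
Optionally, a quick check that swap is really off before continuing (standard CentOS 7 tools):

free -m | grep -i swap    # the Swap line should show 0 total
swapon -s                 # prints nothing when no swap device is active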

1.2.2 Disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld
sed -i 's/=enforcing/=disabled/g' /etc/selinux/config
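
The sed command above only takes effect after a reboot. To put SELinux into permissive mode right away (optional, avoids a reboot at this point):

setenforce 0    # may report an error if SELinux is already disabled, which is harmless
getenforce      # should now print Permissive (or Disabled after a reboot)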

1.3 Kernel Parameters

This guide uses flannel for the pod network, which requires the kernel parameter bridge-nf-call-iptables=1; setting that parameter requires the br_netfilter kernel module.

1.3.1 Load the br_netfilter module

Check whether the br_netfilter module is loaded:

[root@172.22.204.106-t-k8smaster01:~]$lsmod |grep br_netfilter

If the module is not loaded, run the commands below; otherwise skip this step.

Load br_netfilter temporarily:

[root@172.22.204.106-t-k8smaster01:~]# modprobe br_netfilter
This does not survive a reboot.

Load br_netfilter permanently:

[root@172.22.204.106-t-k8smaster01:~]# cat > /etc/rc.sysinit << EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
EOF
[root@172.22.204.106-t-k8smaster01:~]# cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
[root@172.22.204.106-t-k8smaster01:~]# chmod 755 /etc/sysconfig/modules/br_netfilter.modules

1.3.2 Set the kernel parameters

Persistent change:

[root@172.22.204.106-t-k8smaster01:~]$cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@172.22.204.106-t-k8smaster01:~]$sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply and verify the setting immediately:

modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1
sysctl -a | grep net.bridge.bridge-nf-call-iptables
sysctl -p /etc/sysctl.d/k8s.conf

1.4 Add Yum Repositories

Add the following repositories on all machines.

1.4.1 Add the Kubernetes repository

cd /etc/yum.repos.d/
cat <<EOF > kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
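
Optionally, refresh the yum cache to confirm the new repository is reachable before installing anything:

yum clean all
yum makecache fast
yum repolist | grep -i kubernetes    # the kubernetes repo should appear in the list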

1.4.2 Add the Docker repository

If a docker-ce.repo already exists, back it up first, then download the Aliyun repo file:

cd /etc/yum.repos.d/
mv docker-ce.repo docker-ce.repo_bak
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

1.5 Install Docker

Install Docker on all servers.

1.5.1 Install dependencies

yum install -y yum-utils device-mapper-persistent-data lvm2

1.5.2 Install Docker

List the available Docker versions:

yum list docker-ce --showduplicates | sort -r

Install Docker:

yum install docker-ce -y

1.5.3 Start Docker

Start Docker and enable it on boot:

[root@172.22.204.106-t-k8smaster01:yum.repos.d]$systemctl start docker
[root@172.22.204.106-t-k8smaster01:yum.repos.d]$systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

1.5.4 Configure daemon.json

[root@172.22.204.107-t-k8smaster02:yum.repos.d]$cat <<EOF >/etc/docker/daemon.json
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

1.5.5 Reload Docker

[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
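
After the restart it is worth confirming that Docker picked up the systemd cgroup driver, since kubelet is expected to use the same driver; a minimal check:

docker info 2>/dev/null | grep -i "cgroup driver"    # expected output: Cgroup Driver: systemd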

III. Install and Configure keepalived

keepalived must be installed on all three master hosts.

3.1 Install keepalived

yum -y install keepalived

3.2 Configure keepalived

3.2.1 keepalived configuration on master01

[root@172.22.204.106-t-k8smaster01:yum.repos.d]$cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id t-k8smaster01
}
vrrp_instance VI_1 {
    state MASTER 
    interface ens192
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.22.204.110
    }
}

3.2.2 keepalived configuration on master02

[root@172.22.204.107-t-k8smaster02:~]$more /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id t-k8smaster02
}
vrrp_instance VI_1 {
    state BACKUP 
    interface ens192
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.22.204.110
    }
}

3.2.3 keepalived configuration on master03

[root@172.22.204.108-t-k8smaster03:~]$more /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id t-k8smaster03
}
vrrp_instance VI_1 {
    state BACKUP 
    interface ens192
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.22.204.110
    }
}

3.2.4 Start keepalived

Start keepalived and enable it on boot (on all three masters):

[root@172.22.204.106-t-k8smaster01:yum.repos.d]$service keepalived start
Redirecting to /bin/systemctl start keepalived.service
[root@172.22.204.106-t-k8smaster01:yum.repos.d]$systemctl enable keepalived

3.2.5 Verify the VIP

(screenshot: the VIP 172.22.204.110 bound to master01's ens192 interface)
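
The VIP can also be checked from the command line instead of a screenshot; while all three masters are healthy it should be bound on master01 (the highest priority) and on no other node:

ip addr show ens192 | grep 172.22.204.110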

IV. Install Kubernetes Components

4.1 Install kubelet, kubeadm, and kubectl

Install these on all machines.
kubelet: runs on every node and is responsible for starting Pods and containers.
kubeadm: the command-line tool used to initialize and bootstrap the cluster.
kubectl: the command-line client for the cluster; use it to deploy and manage applications, inspect resources, and create, delete, and update components.
The version installed here is 1.22.2, the latest at the time of writing; you can also install whichever version you need.

List the available versions:

yum list kubelet --showduplicates | sort -r

Install a specific version:

yum install -y kubelet-1.22.2 kubeadm-1.22.2 kubectl-1.22.2

Or install the latest version:

yum install -y kubelet kubeadm kubectl
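
A quick way to confirm that all three components were installed at the expected version:

kubeadm version -o short
kubelet --version
kubectl version --client --short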

4.2 Start kubelet

Enable kubelet on boot and start it:

[root@172.22.204.106-t-k8smaster01:~]$systemctl enable kubelet && systemctl start kubelet
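
At this point kubelet will keep restarting because it has no cluster configuration yet; that is expected and resolves itself once kubeadm init (or kubeadm join) runs. It can be observed with:

systemctl status kubelet --no-pager
journalctl -u kubelet --no-pager | tail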

V. Initialize the Cluster

5.1 Prepare the kubeadm configuration on master01

Create kubeadm-config.yaml with the following content:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.22.2
apiServer:
  certSANs:    # list the hostname and IP of every kube-apiserver node, plus the VIP
  - t-k8smaster01
  - t-k8smaster02
  - t-k8smaster03
  - T-k8sWorker01
  - T-k8sWorker02
  - 172.22.204.106
  - 172.22.204.107
  - 172.22.204.108
  - 172.22.204.111
  - 172.22.204.112
  - 172.22.204.110
controlPlaneEndpoint: "172.22.204.110:6443"
networking:
  podSubnet: "10.244.0.0/16"
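
Optionally, pull the control-plane images ahead of time (the init output below also suggests this), which shortens the actual init step:

kubeadm config images pull --config kubeadm-config.yaml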

5.2 Initialize master01

kubeadm init --config=kubeadm-config.yaml

5.3 Output of the initialization

[root@172.22.204.106-t-k8smaster01:~]$kubeadm init --config=kubeadm-config.yaml 
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local t-k8smaster01 t-k8smaster02 t-k8smaster03 t-k8sworker01 t-k8sworker02] and IPs [10.96.0.1 172.22.204.106 172.22.204.110 172.22.204.107 172.22.204.108 172.22.204.111 172.22.204.112]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost t-k8smaster01] and IPs [172.22.204.106 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost t-k8smaster01] and IPs [172.22.204.106 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.037482 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node t-k8smaster01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node t-k8smaster01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: mkpnzt.he3sxvnr1igi0xxm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.22.204.110:6443 --token mkpnzt.he3sxvnr1igi0xxm \
    --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.22.204.110:6443 --token mkpnzt.he3sxvnr1igi0xxm \
    --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 

5.4 Verify

[root@172.22.204.106-t-k8smaster01:~]$mkdir -p $HOME/.kube
[root@172.22.204.106-t-k8smaster01:~]$sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@172.22.204.106-t-k8smaster01:~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@172.22.204.106-t-k8smaster01:~]$export KUBECONFIG=/etc/kubernetes/admin.conf
[root@172.22.204.106-t-k8smaster01:~]$kubectl get nodes;
NAME            STATUS     ROLES                  AGE    VERSION
t-k8smaster01   NotReady   control-plane,master   2m4s   v1.22.2

If the initialization fails, or the following error appears, you can re-initialize:

accepts at most 1 arg(s), received 3
To see the stack trace of this error execute with --v=5 or higher

If initialization failed, run kubeadm reset and then initialize again:

[root@172.22.204.106-t-k8smaster01:~]# kubeadm reset
[root@172.22.204.106-t-k8smaster01:~]# rm -rf $HOME/.kube/config
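
kubeadm reset does not remove everything; if a re-initialization still complains, it can help to also clear leftover CNI configuration and iptables rules (a cautious extra cleanup, adjust to your environment):

rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X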

5.5 Add the Remaining Nodes

5.5.1 Distribute the SSH public key from master01 to the other masters

[root@172.22.204.106-t-k8smaster01:~]$ssh-keygen -t rsa
[root@172.22.204.106-t-k8smaster01:~]$ssh-copy-id -i t-k8smaster02
[root@172.22.204.106-t-k8smaster01:~]$ssh-copy-id -i t-k8smaster03

Record the kubeadm join commands from the init output above; they will be needed to join the worker nodes and the remaining master nodes to the cluster.
Distribute the certificates from master01:
On master01, run the script cert-main-master.sh (below) to copy the certificates to master02 and master03.

USER=root
CONTROL_PLANE_IPS="172.22.204.107 172.22.204.108"
for host in ${CONTROL_PLANE_IPS}; do
    ssh ${USER}@${host} "mkdir -p /etc/kubernetes/pki/"
    ssh ${USER}@${host} "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:/etc/kubernetes/pki/ca.crt
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:/etc/kubernetes/pki/ca.key
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:/etc/kubernetes/pki/sa.key
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:/etc/kubernetes/pki/sa.pub
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:/etc/kubernetes/pki/front-proxy-ca.crt
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:/etc/kubernetes/pki/front-proxy-ca.key
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.crt
    # Quote this line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.key
done
[root@172.22.204.106-t-k8smaster01:~]$sh cert-main-master.sh 
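
A quick check that the certificates actually arrived on the other masters (using the SSH keys distributed in 5.5.1):

ssh root@172.22.204.107 "ls /etc/kubernetes/pki /etc/kubernetes/pki/etcd"
ssh root@172.22.204.108 "ls /etc/kubernetes/pki /etc/kubernetes/pki/etcd"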

5.5.2 Join master02 to the cluster

kubeadm join 172.22.204.110:6443 --token mkpnzt.he3sxvnr1igi0xxm \
    --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 \
    --control-plane 

Then run the following on master02:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

5.5.3 Join master03 to the cluster

kubeadm join 172.22.204.110:6443 --token mkpnzt.he3sxvnr1igi0xxm \
    --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 \
    --control-plane 

Then run the following on master03:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

5.5.4 Join worker01 and worker02 to the cluster

kubeadm join 172.22.204.110:6443 --token mkpnzt.he3sxvnr1igi0xxm \
    --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 
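
The bootstrap token printed by kubeadm init is only valid for 24 hours by default; if it has expired by the time a node joins, generate a fresh join command on any master:

kubeadm token create --print-join-command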

Joining worker01:

[root@172.22.204.111-t-k8sworker01:yum.repos.d]$kubeadm join 172.22.204.110:6443 --token mkpnzt.he3sxvnr1igi0xxm \
>     --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 
[preflight] Running pre-flight checks
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Joining worker02:

[root@172.22.204.112-t-k8sworker02:yum.repos.d]$kubeadm join 172.22.204.110:6443 --token mkpnzt.he3sxvnr1igi0xxm \
>     --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 
[preflight] Running pre-flight checks
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

VI. Verify the Kubernetes Cluster

[root@172.22.204.106-t-k8smaster01:~]$kubectl get nodes;
NAME            STATUS     ROLES                  AGE   VERSION
t-k8smaster01   NotReady   control-plane,master   48m   v1.22.2
t-k8smaster02   NotReady   control-plane,master   18m   v1.22.2
t-k8smaster03   NotReady   control-plane,master   13m   v1.22.2
t-k8sworker01   NotReady   <none>                 12m   v1.22.2
t-k8sworker02   NotReady   <none>                 11s   v1.22.2

6.1 Troubleshooting

If a worker node was joined before its hostname was changed and later joins fail, clean up the following files on that worker and join it again:

[root@172.22.204.112-t-k8sworker02:kubernetes]$rm -rf /var/lib/kubelet/*
[root@172.22.204.112-t-k8sworker02:kubernetes]$ls
kubelet.conf  manifests  pki
[root@172.22.204.112-t-k8sworker02:kubernetes]$rm -rf kubelet.conf 
[root@172.22.204.112-t-k8sworker02:kubernetes]$rm -rf pki/ca.crt 

6.2 Deploy the Network Add-on

Apply the flannel CNI plugin:

kubectl apply -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
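
Before checking the nodes, it is worth making sure the flannel DaemonSet pods start on every node; a simple check:

kubectl -n kube-system get pods -o wide | grep flannel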

6.3 Verify Node Status

[root@172.22.204.106-t-k8smaster01:~]$kubectl get nodes;
NAME            STATUS   ROLES                  AGE     VERSION
t-k8smaster01   Ready    control-plane,master   53m     v1.22.2
t-k8smaster02   Ready    control-plane,master   23m     v1.22.2
t-k8smaster03   Ready    control-plane,master   18m     v1.22.2
t-k8sworker01   Ready    <none>                 17m     v1.22.2
t-k8sworker02   Ready    <none>                 5m31s   v1.22.2

6.4 Check the Pods

[root@172.22.204.106-t-k8smaster01:~]$kubectl get po -o wide -A
NAMESPACE     NAME                                    READY   STATUS    RESTARTS      AGE     IP               NODE            NOMINATED NODE   READINESS GATES
kube-system   coredns-7d89d9b6b8-cqpjv                1/1     Running   0             55m     10.244.3.2       t-k8sworker01   <none>           <none>
kube-system   coredns-7d89d9b6b8-swpcb                1/1     Running   0             55m     10.244.3.3       t-k8sworker01   <none>           <none>
kube-system   etcd-t-k8smaster01                      1/1     Running   0             55m     172.22.204.106   t-k8smaster01   <none>           <none>
kube-system   etcd-t-k8smaster02                      1/1     Running   0             25m     172.22.204.107   t-k8smaster02   <none>           <none>
kube-system   etcd-t-k8smaster03                      1/1     Running   0             19m     172.22.204.108   t-k8smaster03   <none>           <none>
kube-system   kube-apiserver-t-k8smaster01            1/1     Running   0             55m     172.22.204.106   t-k8smaster01   <none>           <none>
kube-system   kube-apiserver-t-k8smaster02            1/1     Running   0             25m     172.22.204.107   t-k8smaster02   <none>           <none>
kube-system   kube-apiserver-t-k8smaster03            1/1     Running   0             20m     172.22.204.108   t-k8smaster03   <none>           <none>
kube-system   kube-controller-manager-t-k8smaster01   1/1     Running   1 (24m ago)   55m     172.22.204.106   t-k8smaster01   <none>           <none>
kube-system   kube-controller-manager-t-k8smaster02   1/1     Running   0             25m     172.22.204.107   t-k8smaster02   <none>           <none>
kube-system   kube-controller-manager-t-k8smaster03   1/1     Running   0             20m     172.22.204.108   t-k8smaster03   <none>           <none>
kube-system   kube-flannel-ds-5pjbf                   1/1     Running   0             4m8s    172.22.204.112   t-k8sworker02   <none>           <none>
kube-system   kube-flannel-ds-bs4t8                   1/1     Running   0             4m8s    172.22.204.111   t-k8sworker01   <none>           <none>
kube-system   kube-flannel-ds-jn698                   1/1     Running   0             4m8s    172.22.204.107   t-k8smaster02   <none>           <none>
kube-system   kube-flannel-ds-r4ktd                   1/1     Running   0             4m8s    172.22.204.108   t-k8smaster03   <none>           <none>
kube-system   kube-flannel-ds-tckjr                   1/1     Running   0             4m8s    172.22.204.106   t-k8smaster01   <none>           <none>
kube-system   kube-proxy-469lj                        1/1     Running   0             25m     172.22.204.107   t-k8smaster02   <none>           <none>
kube-system   kube-proxy-k47ww                        1/1     Running   0             18m     172.22.204.111   t-k8sworker01   <none>           <none>
kube-system   kube-proxy-msk5s                        1/1     Running   0             20m     172.22.204.108   t-k8smaster03   <none>           <none>
kube-system   kube-proxy-tjqhc                        1/1     Running   0             6m55s   172.22.204.112   t-k8sworker02   <none>           <none>
kube-system   kube-proxy-vch97                        1/1     Running   0             55m     172.22.204.106   t-k8smaster01   <none>           <none>
kube-system   kube-scheduler-t-k8smaster01            1/1     Running   1 (24m ago)   55m     172.22.204.106   t-k8smaster01   <none>           <none>
kube-system   kube-scheduler-t-k8smaster02            1/1     Running   0             25m     172.22.204.107   t-k8smaster02   <none>           <none>
kube-system   kube-scheduler-t-k8smaster03            1/1     Running   0             20m     172.22.204.108   t-k8smaster03   <none>           <none>

6.5 Verify Master Failover

To test failover, first check master01, which currently holds the VIP, then shut it down:

[root@172.22.204.106-t-k8smaster01:~]$ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:b9:9e:bd brd ff:ff:ff:ff:ff:ff
    inet 172.22.204.106/24 brd 172.22.204.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 172.22.204.110/32 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:feb9:9ebd/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:7a:8a:7f:36 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 02:f0:d4:fe:9c:b7 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.0/32 brd 10.244.0.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::f0:d4ff:fefe:9cb7/64 scope link 
       valid_lft forever preferred_lft forever

After shutting down master01, check master02: the VIP has failed over to it:

[root@172.22.204.107-t-k8smaster02:pki]$ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:b9:eb:71 brd ff:ff:ff:ff:ff:ff
    inet 172.22.204.107/24 brd 172.22.204.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 172.22.204.110/32 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:feb9:eb71/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:4c:13:ce:33 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 02:ba:d4:f9:46:65 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 brd 10.244.1.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::ba:d4ff:fef9:4665/64 scope link 
       valid_lft forever preferred_lft forever
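
With master01 down, it is also worth confirming that the control plane is still reachable through the VIP, for example from master02:

kubectl get nodes    # t-k8smaster01 will eventually show NotReady, but the API server still answers via 172.22.204.110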

The VIP failed over as expected.
The Kubernetes HA deployment is complete.

VII. References

https://www.kubernetes.org.cn/6632.html