Deploying a Highly Available Kubernetes (k8s) Cluster

1. Cluster topology

(image: cluster topology diagram)

2. Environment: at least three master nodes are required

vip:      192.168.0.162 (keepalived virtual IP)
master01: 192.168.0.163 CentOS 7
master02: 192.168.0.164 CentOS 7
master03: 192.168.0.165 CentOS 7
node01:   192.168.0.166 CentOS 7

3. Configure hosts resolution between all the machines

(image: /etc/hosts configuration screenshot)
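The screenshot is not reproduced here; based on the addresses in step 2, the entries appended to /etc/hosts on every node would look roughly like this (the hostnames are assumptions, use whatever names your environment actually uses):

192.168.0.163 master01
192.168.0.164 master02
192.168.0.165 master03
192.168.0.166 node01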

4. Prepare the base environment, following https://www.jianshu.com/p/feda1f429526 (up to, but not including, the master initialization step)

5. Configure the docker daemon options

cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://av0eyibf.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
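Once docker has been restarted (step 11), it is worth confirming that the cgroup driver really is systemd, since kubelet is configured to expect it:

# docker info | grep -i 'cgroup driver'
 Cgroup Driver: systemd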

6. Enable passwordless SSH login between all machines; there are plenty of tutorials online, so the details are omitted here
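For reference, a minimal way to do it from master01 (assuming root SSH is allowed on every node):

# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# for ip in 192.168.0.163 192.168.0.164 192.168.0.165 192.168.0.166; do ssh-copy-id root@$ip; done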

7. Install keepalived and related packages on the three master nodes

# yum install -y socat keepalived ipvsadm conntrack

8. Create the following keepalived configuration file

# cat /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER                # role; the other two masters are also set to MASTER
    interface ens33             # use your actual NIC name here
    virtual_router_id 80        # must be identical on all three nodes
    priority 100                # the VIP is elected by priority; each node must use a different value
    advert_int 1
    authentication {            # authentication; the password must match on all nodes
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.0.162           # the virtual IP
    }
}

virtual_server 192.168.0.162 6443 {     # virtual address the k8s masters register against
    delay_loop 6
    lb_algo rr                          # "loadbalance" is not a valid scheduler; rr (round-robin) is
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP

    real_server 192.168.0.163 6443 {    # the backend real servers (all three must sit inside this block)
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.0.164 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.0.165 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
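On master02 and master03 the configuration is identical except for the priority (and the interface name, if it differs), for example:

    priority 90          # master02
    priority 80          # master03

Since preemption is enabled by default, whichever surviving node has the highest priority holds the VIP.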

9. Create the kubeadm cluster initialization configuration file

cat /etc/kubernetes/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2      # v1beta2 is the kubeadm config version for k8s v1.19
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
controlPlaneEndpoint: "192.168.0.162:6443"      # must be the keepalived virtual IP
apiServer:
  certSANs:
  - 192.168.0.162
  - 192.168.0.163
  - 192.168.0.164
  - 192.168.0.165
networking:
  podSubnet: 10.244.0.0/16
imageRepository: "registry.aliyuncs.com/google_containers"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

10. Start the keepalived service (on all three masters)

# systemctl enable keepalived
# systemctl start keepalived
# systemctl status keepalived
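The VIP should now be bound on master01, the highest-priority node:

# ip addr show ens33 | grep 192.168.0.162
    inet 192.168.0.162/32 scope global ens33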

If everything checks out, continue.

11. Enable docker and kubelet

# systemctl enable docker && systemctl enable kubelet
# systemctl daemon-reload
# systemctl restart docker
# systemctl status docker  && systemctl status kubelet

If everything looks good, continue.

12. Initialize the k8s cluster (on master01)

# kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
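On success kubeadm prints the join commands; before kubectl can be used on master01, put the admin kubeconfig in place, as the init output instructs:

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config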

Install the flannel network plugin:

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If the node cannot fetch it online, download kube-flannel.yml to the server first and apply it locally.
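For the offline case, fetch the manifest on a machine that has access and apply the local copy:

# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml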

13. Check the cluster status

# kubectl get cs

(image: kubectl get cs output showing healthy components)

If everything is healthy, continue.

14. Copy the certificates to the other master nodes; after copying, the script joins them to the cluster automatically. Passwordless SSH (step 6) is a prerequisite. The script:

# cat k8s-cluster-other-init.sh
#!/bin/bash
IPS=(192.168.0.164 192.168.0.165)     # the other two masters (the original listed .164 twice)
JOIN_CMD=`kubeadm token create --print-join-command 2> /dev/null`

for index in 0 1; do
  ip=${IPS[${index}]}
  ssh $ip "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
  scp /etc/kubernetes/pki/ca.crt $ip:/etc/kubernetes/pki/ca.crt
  scp /etc/kubernetes/pki/ca.key $ip:/etc/kubernetes/pki/ca.key
  scp /etc/kubernetes/pki/sa.key $ip:/etc/kubernetes/pki/sa.key
  scp /etc/kubernetes/pki/sa.pub $ip:/etc/kubernetes/pki/sa.pub
  scp /etc/kubernetes/pki/front-proxy-ca.crt $ip:/etc/kubernetes/pki/front-proxy-ca.crt
  scp /etc/kubernetes/pki/front-proxy-ca.key $ip:/etc/kubernetes/pki/front-proxy-ca.key
  scp /etc/kubernetes/pki/etcd/ca.crt $ip:/etc/kubernetes/pki/etcd/ca.crt      # stacked etcd also needs the etcd CA pair
  scp /etc/kubernetes/pki/etcd/ca.key $ip:/etc/kubernetes/pki/etcd/ca.key
  scp /etc/kubernetes/admin.conf $ip:/etc/kubernetes/admin.conf
  scp /etc/kubernetes/admin.conf $ip:~/.kube/config

  ssh ${ip} "${JOIN_CMD} --control-plane"
done

After they join, check:


(image: kubectl get nodes showing all three masters)

Both masters joined successfully. Now join node01 to the cluster as a worker:

 kubeadm join 192.168.0.162:6443 --token 0omn7n.03r4ogczlsqey2u1     --discovery-token-ca-cert-hash sha256:3caf6f90feeb1933e91c9a07abeac4f7d01132634fe5ae131cfb226bd45926d0

Checking the cluster nodes from node01 then fails:

# kubectl get node
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

This is because the kubeconfig has not been copied over; copy it from a master (make sure ~/.kube exists on node01 first):

# ssh root@node01 "mkdir -p ~/.kube"
# scp $HOME/.kube/config root@node01:$HOME/.kube/config

Check again:


(image: kubectl get node now succeeds on node01)

All good.

15. Next, create an nginx test deployment

# vim nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-ingress-test
  template:
    metadata:
      labels:
        app: nginx-ingress-test
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30080        # must fall in the default NodePort range (30000-32767); 80 would be rejected
  selector:
    app: nginx-ingress-test

Apply it:

# kubectl apply -f nginx-deployment.yaml
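To verify, look up the NodePort that nginx-svc exposes and curl it through any node's IP (or through the VIP):

# kubectl get svc nginx-svc
# curl -I http://192.168.0.166:<nodePort>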

(image: nginx pods and service running)

16. Test master failover; the VIP currently sits on master01

(image: ip addr on master01 showing the VIP)

Take master01 down and observe:
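Stopping keepalived on master01 (or simply powering the node off) is enough to trigger the failover:

# systemctl stop keepalived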


(image: the VIP now bound on master02)

The VIP has failed over to master02. Verify that the cluster still works:

(image: cluster still healthy via the VIP)

Checks on all nodes pass. After master01 is brought back up, the VIP moves back to it, because master01 has the highest priority.

This completes the highly available master deployment.
