Building a Highly Available Kubernetes Cluster with Kubespray (Using Containerd)

Demo environment

We use three CentOS 7.3 virtual machines, described in the table below:

OS           IP Address       Role     CPU   Memory   Hostname
centos-7.3   192.168.56.128   master   >=2   >=4G     k8s-node1
centos-7.3   192.168.56.129   master   >=2   >=4G     k8s-node2
centos-7.3   192.168.56.130   worker   >=2   >=4G     k8s-node3

1. System setup

1.1 Hostname

$ hostnamectl set-hostname <your_hostname>
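
For example, matching the table above (run the corresponding line on each machine):

# on 192.168.56.128
$ hostnamectl set-hostname k8s-node1
# on 192.168.56.129
$ hostnamectl set-hostname k8s-node2
# on 192.168.56.130
$ hostnamectl set-hostname k8s-node3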

1.2 Firewall, SELinux, swap, and iptables reset

# Disable SELinux
$ setenforce 0
$ sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
# Stop and disable the firewall
$ systemctl stop firewalld && systemctl disable firewalld

# Reset iptables rules
$ iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# Disable swap
$ swapoff -a && free -h

# Stop dnsmasq (otherwise containers may fail to resolve domain names)
$ service dnsmasq stop && systemctl disable dnsmasq
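
Note that swapoff -a only disables swap until the next reboot. To keep swap off permanently, a common approach is to also comment out the swap entry in /etc/fstab, for example:

# comment out the swap line so swap stays off after reboot
$ sed -i '/ swap / s/^/#/' /etc/fstab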

1.3 Kubernetes parameter settings

# Create the config file
$ cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 1
EOF
# Apply the settings
$ sysctl -p /etc/sysctl.d/kubernetes.conf
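
The two bridge-nf settings require the br_netfilter kernel module. If sysctl complains about missing keys, load the module first (a sketch; persisting modules via /etc/modules-load.d is standard on CentOS 7):

# load br_netfilter now and on every boot
$ modprobe br_netfilter
$ echo br_netfilter > /etc/modules-load.d/br_netfilter.conf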

1.4 Remove Docker-related packages

$ yum remove -y docker*
$ rm -f /etc/docker/daemon.json

2. Deploying the cluster with kubespray

This part only needs to run on a single "operator" node. It can be a node inside the cluster, a machine outside it, or even your own laptop; here we take the more common approach of using any one of the cluster's Linux nodes.

2.1 Passwordless SSH

# 1. Generate a key pair (run ssh-keygen and press Enter through the prompts)
$ ssh-keygen
# 2. View and copy the generated pubkey
$ cat /root/.ssh/id_rsa.pub
# 3. Log in to each node and append the pubkey to /root/.ssh/authorized_keys
$ mkdir -p /root/.ssh
$ echo "<pubkey copied in the previous step>" >> /root/.ssh/authorized_keys
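
Alternatively, while password login still works, ssh-copy-id automates steps 2 and 3 (a one-liner using the IPs from our table):

$ for ip in 192.168.56.128 192.168.56.129 192.168.56.130; do ssh-copy-id root@$ip; done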

2.2 Downloading and installing dependencies

# Install base packages
$ yum install -y epel-release python36 python36-pip git
# Download the kubespray source
$ wget https://github.com/kubernetes-sigs/kubespray/archive/v2.15.0.tar.gz
# Unpack
$ tar -xvf v2.15.0.tar.gz && cd kubespray-2.15.0
# Install the requirements
$ cat requirements.txt
$ pip3.6 install -r requirements.txt

## If the install runs into problems, try upgrading pip first
## $ pip3.6 install --upgrade pip
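
## If downloads are slow, you can also point pip at a nearby mirror, e.g. (the mirror URL is just an example):
## $ pip3.6 install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple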

2.3 Generating the configuration

# Copy the sample configuration as the basis for our own
$ cp -rpf inventory/sample inventory/mycluster

kubespray ships a Python script that builds the inventory file from environment variables, so all we have to do is set those variables:

# Keep the real hostnames (otherwise your hosts get renamed to node1/node2/...)
$ export USE_REAL_HOSTNAME=true
# Point at the inventory file to generate
$ export CONFIG_FILE=inventory/mycluster/hosts.yaml
# Define the IP list (your servers' internal IPs, 3 or more; the first two become master nodes by default)
$ declare -a IPS=(192.168.56.128 192.168.56.129 192.168.56.130)
# Generate the inventory file
$ python3 contrib/inventory_builder/inventory.py ${IPS[@]}
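
If USE_REAL_HOSTNAME took effect, the generated hosts.yaml should look roughly like the sketch below (group names as generated by kubespray 2.15's inventory builder; compare against your actual file):

all:
  hosts:
    k8s-node1:
      ansible_host: 192.168.56.128
      ip: 192.168.56.128
      access_ip: 192.168.56.128
    k8s-node2:
      ansible_host: 192.168.56.129
      ip: 192.168.56.129
      access_ip: 192.168.56.129
    k8s-node3:
      ansible_host: 192.168.56.130
      ip: 192.168.56.130
      access_ip: 192.168.56.130
  children:
    kube-master:
      hosts:
        k8s-node1:
        k8s-node2:
    kube-node:
      hosts:
        k8s-node1:
        k8s-node2:
        k8s-node3:
    etcd:
      hosts:
        k8s-node1:
        k8s-node2:
        k8s-node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}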

2.4 Customization

The generated files work as-is, but they won't satisfy everyone's needs: docker or containerd? Keep docker's default work directory /var/lib/docker or change it? Also note that by default kubespray downloads images and binaries from Google's official repositories. The files to review are listed below, with a sketch of the key settings right after the list:

# Customizing the configuration files
# 1. Node layout (adjust each node's role here)
$ vi inventory/mycluster/hosts.yaml
# 2. containerd settings (this tutorial uses containerd as the container engine)
$ vi inventory/mycluster/group_vars/all/containerd.yml
# 3. Global settings (configure an http(s) proxy here if you need external access)
$ vi inventory/mycluster/group_vars/all/all.yml
# 4. Cluster settings (container runtime, service CIDR, pod CIDR, and more)
$ vi inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# 5. Change the etcd deployment type to host (the default is docker)
$ vi inventory/mycluster/group_vars/etcd.yml
# 6. Add-ons (ingress, dashboard, and so on)
$ vi inventory/mycluster/group_vars/k8s-cluster/addons.yml
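
As mentioned above, here is a sketch of the two settings this tutorial depends on (variable names as found in kubespray 2.15; double-check the exact keys in your copies of the files):

# in inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
container_manager: containerd

# in inventory/mycluster/group_vars/etcd.yml
etcd_deployment_type: host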

2.5 One-command deployment

# -vvvv prints the most detailed logs and is recommended; -b makes ansible become root on the targets
$ ansible-playbook -i inventory/mycluster/hosts.yaml -b cluster.yml -vvvv
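
If a run fails partway and you want a clean slate before retrying, kubespray also ships a reset playbook (destructive: it removes Kubernetes from all nodes):

$ ansible-playbook -i inventory/mycluster/hosts.yaml -b reset.yml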

Pre-pulling images (optional)

To cut down the wait during the one-command deployment, you can pre-pull some of the required images while the playbook runs. The script pulls each image from an Aliyun mirror and re-tags it to the name kubespray expects. Save it as kubespray.sh:

$ vim kubespray.sh

#!/bin/bash
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kubernetesui_metrics-scraper:v1.0.6 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kubernetesui_metrics-scraper:v1.0.6 docker.io/kubernetesui/metrics-scraper:v1.0.6

crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/library_nginx:1.19 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/library_nginx:1.19 docker.io/library/nginx:1.19

crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/coredns:1.7.0 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0

crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/dns_k8s-dns-node-cache:1.16.0 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/dns_k8s-dns-node-cache:1.16.0 k8s.gcr.io/dns/k8s-dns-node-cache:1.16.0

crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/ingress-nginx_controller:v0.41.2 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/ingress-nginx_controller:v0.41.2 k8s.gcr.io/ingress-nginx/controller:v0.41.2

crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-apiserver:v1.19.7 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-apiserver:v1.19.7 k8s.gcr.io/kube-apiserver:v1.19.7

crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-controller-manager:v1.19.7 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-controller-manager:v1.19.7 k8s.gcr.io/kube-controller-manager:v1.19.7

crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-proxy:v1.19.7 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-proxy:v1.19.7 k8s.gcr.io/kube-proxy:v1.19.7

crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-scheduler:v1.19.7 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-scheduler:v1.19.7 k8s.gcr.io/kube-scheduler:v1.19.7

crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/pause:3.2 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/pause:3.2 k8s.gcr.io/pause:3.2

crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/pause:3.3 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/pause:3.3 k8s.gcr.io/pause:3.3

crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kubernetesui_dashboard-amd64:v2.1.0 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kubernetesui_dashboard-amd64:v2.1.0 docker.io/kubernetesui/dashboard-amd64:v2.1.0

crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/cpa_cluster-proportional-autoscaler-amd64:1.8.3 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/cpa_cluster-proportional-autoscaler-amd64:1.8.3 k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3

crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_cni:v3.16.5 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_cni:v3.16.5 quay.io/calico/cni:v3.16.5

crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_kube-controllers:v3.16.5 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_kube-controllers:v3.16.5 quay.io/calico/kube-controllers:v3.16.5

crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_node:v3.16.5 || exit 1
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_node:v3.16.5 quay.io/calico/node:v3.16.5

ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kubernetesui_metrics-scraper:v1.0.6
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/library_nginx:1.19
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/coredns:1.7.0
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/dns_k8s-dns-node-cache:1.16.0
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/ingress-nginx_controller:v0.41.2
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-apiserver:v1.19.7
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-controller-manager:v1.19.7
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-proxy:v1.19.7
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-scheduler:v1.19.7
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/pause:3.2
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/pause:3.3
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kubernetesui_dashboard-amd64:v2.1.0
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/cpa_cluster-proportional-autoscaler-amd64:1.8.3
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_cni:v3.16.5
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_kube-controllers:v3.16.5
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_node:v3.16.5
# Run the script
$ sh kubespray.sh
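
Since crictl pulls images into the local containerd, the script only benefits the node it runs on. A sketch of fanning it out from the operator node (relies on the passwordless SSH from 2.1):

$ for ip in 192.168.56.128 192.168.56.129 192.168.56.130; do
>   scp kubespray.sh root@$ip:/root/ && ssh root@$ip 'sh /root/kubespray.sh'
> done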

3. Verifying high availability

[root@k8s-node1 kubespray-2.15.0]# kubectl get no
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    master   17h   v1.19.7
k8s-node2   Ready    master   17h   v1.19.7
k8s-node3   Ready    <none>   17h   v1.19.7
# node1 and node2 are masters; node3 is the worker

Run on any master node:

[root@k8s-node1 kubespray-2.15.0]# kubectl get pods -n kube-system
NAME                                          READY   STATUS    RESTARTS   AGE
calico-kube-controllers-8b5ff5d58-vlqnn       1/1     Running   0          17h
calico-node-fcdb2                             1/1     Running   0          17h
calico-node-k4j87                             1/1     Running   0          17h
calico-node-v4nkd                             1/1     Running   0          17h
coredns-85967d65-fxs5n                        1/1     Running   1          17h
coredns-85967d65-z88jg                        1/1     Running   0          17h
dns-autoscaler-5b7b5c9b6f-ntkjr               1/1     Running   0          17h
kube-apiserver-k8s-node1                      1/1     Running   1          18h
kube-apiserver-k8s-node2                      1/1     Running   1          17h
kube-controller-manager-k8s-node1             1/1     Running   1          18h
kube-controller-manager-k8s-node2             1/1     Running   3          17h
kube-proxy-8lzs6                              1/1     Running   0          17h
kube-proxy-jzf56                              1/1     Running   0          17h
kube-proxy-x9nkw                              1/1     Running   0          17h
kube-scheduler-k8s-node1                      1/1     Running   1          18h
kube-scheduler-k8s-node2                      1/1     Running   2          17h
kubernetes-dashboard-86c6f9df5b-5tfnd         1/1     Running   8          17h
kubernetes-metrics-scraper-678c97765c-jjtmc   1/1     Running   1          17h
nginx-proxy-k8s-node3                         1/1     Running   1          17h
nodelocaldns-42g4d                            1/1     Running   1          17h
nodelocaldns-5pjqs                            1/1     Running   0          17h
nodelocaldns-d4djt                            1/1     Running   0          17h
Note the nginx-proxy-k8s-node3 pod.

Let's take a closer look on node3:

[root@k8s-node3 ~]# cat /etc/kubernetes/manifests/nginx-proxy.yml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx-proxy
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-nginx
  annotations:
    nginx-cfg-checksum: "5f4777c5a1defa2db8704eb486b5796696d70c82"
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  nodeSelector:
    kubernetes.io/os: linux
  priorityClassName: system-node-critical
  containers:
  - name: nginx-proxy
    image: docker.io/library/nginx:1.19
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 25m
        memory: 32M
    securityContext:
      privileged: true
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8081
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8081
    volumeMounts:
    - mountPath: /etc/nginx
      name: etc-nginx
      readOnly: true
  volumes:
  - name: etc-nginx
    hostPath:
      path: /etc/nginx
This is nginx-proxy's static pod manifest. The volume mount shows that its nginx configuration lives on the host under /etc/nginx.

Check that configuration on node3:

[root@k8s-node3 ~]# cat /etc/nginx/nginx.conf
error_log stderr notice;

worker_processes 2;
worker_rlimit_nofile 130048;
worker_shutdown_timeout 10s;

events {
  multi_accept on;
  use epoll;
  worker_connections 16384;
}

stream {
  upstream kube_apiserver {
    least_conn;
    server 192.168.56.128:6443;
    server 192.168.56.129:6443;
  }

  server {
    listen        127.0.0.1:6443;
    proxy_pass    kube_apiserver;
    proxy_timeout 10m;
    proxy_connect_timeout 1s;
  }
}

http {
  aio threads;
  aio_write on;
  tcp_nopush on;
  tcp_nodelay on;

  keepalive_timeout 5m;
  keepalive_requests 100;
  reset_timedout_connection on;
  server_tokens off;
  autoindex off;

  server {
    listen 8081;
    location /healthz {
      access_log off;
      return 200;
    }
    location /stub_status {
      stub_status on;
      access_log off;
    }
  }
}
This confirms that node3 proxies and load-balances the apiservers of both master nodes: local components connect to 127.0.0.1:6443, and nginx forwards each connection to one of the two masters, so the worker stays functional even if a single master fails.
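
As a quick sanity check, you can confirm the local proxy answers on node3, and that it keeps answering after you stop one master's kube-apiserver (a sketch; /healthz is the apiserver's standard health endpoint):

[root@k8s-node3 ~]# curl -k https://127.0.0.1:6443/healthz
ok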