Demo Environment
We use three CentOS 7.3 virtual machines; the details are listed in the table below:
OS | IP Address | Role | CPU | Memory | Hostname
---|---|---|---|---|---
CentOS 7.3 | 192.168.56.128 | master | >=2 | >=4G | k8s-node1
CentOS 7.3 | 192.168.56.129 | master | >=2 | >=4G | k8s-node2
CentOS 7.3 | 192.168.56.130 | worker | >=2 | >=4G | k8s-node3
1. System Setup
1.1 Hostname
$ hostnamectl set-hostname <your_hostname>
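Per the table above, that means running, for example:
$ hostnamectl set-hostname k8s-node1   # on 192.168.56.128
$ hostnamectl set-hostname k8s-node2   # on 192.168.56.129
$ hostnamectl set-hostname k8s-node3   # on 192.168.56.130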
1.2 Firewall, SELinux, Swap, dnsmasq, and iptables Reset
# Disable SELinux
$ setenforce 0
$ sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
# Stop and disable the firewall
$ systemctl stop firewalld && systemctl disable firewalld
# Reset iptables rules
$ iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# Disable swap
$ swapoff -a && free -h
# Stop dnsmasq (otherwise containers may fail to resolve domain names)
$ service dnsmasq stop && systemctl disable dnsmasq
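Note that swapoff -a only lasts until the next reboot. To keep swap off permanently, also comment out the swap entry in /etc/fstab; a minimal sketch:
# Comment out any swap line so swap is not re-enabled at boot
$ sed -i '/ swap / s/^/#/' /etc/fstab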
1.3 Kernel Parameters for Kubernetes
# Create the config file
$ cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 1
EOF
# Apply the settings
$ sysctl -p /etc/sysctl.d/kubernetes.conf
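The two bridge-nf-call sysctls only exist once the br_netfilter kernel module is loaded, so if sysctl -p complains about missing keys, load the module first and persist it across reboots:
$ modprobe br_netfilter
$ echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf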
1.4 Remove Existing Docker Packages
$ yum remove -y docker*
$ rm -f /etc/docker/daemon.json
2. Deploying the Cluster with Kubespray
This part only needs to run on a single operator node: it can be one of the cluster nodes, a machine outside the cluster, or even your own laptop. Here we take the most common approach and use one of the cluster's Linux nodes.
2.1 Passwordless SSH
# 1. Generate a key pair (run ssh-keygen and press Enter through all the prompts)
$ ssh-keygen
# 2. View and copy the generated pubkey
$ cat /root/.ssh/id_rsa.pub
# 3. Log in to each node and append the pubkey to /root/.ssh/authorized_keys
$ mkdir -p /root/.ssh
$ echo "<pubkey copied in the previous step>" >> /root/.ssh/authorized_keys
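Alternatively, ssh-copy-id performs steps 2 and 3 in one command per node:
$ ssh-copy-id root@192.168.56.129
$ ssh-copy-id root@192.168.56.130   # repeat for every node in the table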
2.2 Download and Install Dependencies
# Install base packages
$ yum install -y epel-release python36 python36-pip git
# Download the kubespray source
$ wget https://github.com/kubernetes-sigs/kubespray/archive/v2.15.0.tar.gz
# Extract
$ tar -xvf v2.15.0.tar.gz && cd kubespray-2.15.0
# Install the requirements
$ cat requirements.txt
$ pip3.6 install -r requirements.txt
## If the install runs into problems, try upgrading pip first
## $ pip3.6 install --upgrade pip
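A quick sanity check that the pinned dependencies actually installed (netaddr is one of them and is used by the inventory builder):
$ ansible --version
$ python3 -c "import netaddr"   # should exit silently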
2.3 Generate the Configuration
# Copy the sample config as a base for customization
$ cp -rpf inventory/sample inventory/mycluster
Kubespray ships a Python script that generates the config files from environment variables, so all we need to do now is set those variables:
# Use real hostnames (otherwise your hostnames get rewritten to node1/node2/...)
$ export USE_REAL_HOSTNAME=true
# Point to the config file location
$ export CONFIG_FILE=inventory/mycluster/hosts.yaml
# Define the IP list (internal IPs of your servers, 3 or more; the first two default to master nodes)
$ declare -a IPS=(192.168.56.128 192.168.56.129 192.168.56.130)
# Generate the config file
$ python3 contrib/inventory_builder/inventory.py ${IPS[@]}
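The generated inventory/mycluster/hosts.yaml should look roughly like the sketch below (kubespray 2.15 still uses the kube-master/kube-node group names; exact contents depend on your IPs):
all:
  hosts:
    k8s-node1:
      ansible_host: 192.168.56.128
      ip: 192.168.56.128
      access_ip: 192.168.56.128
    # ... k8s-node2 and k8s-node3 entries follow the same pattern
  children:
    kube-master:
      hosts:
        k8s-node1:
        k8s-node2:
    kube-node:
      hosts:
        k8s-node1:
        k8s-node2:
        k8s-node3:
    etcd:
      hosts:
        k8s-node1:
        k8s-node2:
        k8s-node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}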
2.4 Customization
The config files are all generated. They work as-is, but they won't fully match everyone's needs: Docker or containerd? Keep Docker's default data directory /var/lib/docker? And so on. Also note that by default kubespray downloads images and binaries from Google's official repositories. The files below are the ones to edit; a sketch of the key variables for this tutorial's choices follows the list.
# Files to customize
# 1. Node layout (adjust each node's role here)
$ vi inventory/mycluster/hosts.yaml
# 2. containerd config (this tutorial uses containerd as the container engine)
$ vi inventory/mycluster/group_vars/all/containerd.yml
# 3. Global config (you can configure an http(s) proxy here for outbound access)
$ vi inventory/mycluster/group_vars/all/all.yml
# 4. Cluster config (container runtime, service CIDR, pod CIDR, etc.)
$ vi inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# 5. Switch the etcd deployment type to host (the default is docker)
$ vi ./inventory/mycluster/group_vars/etcd.yml
# 6. Add-ons (ingress, dashboard, etc.)
$ vi ./inventory/mycluster/group_vars/k8s-cluster/addons.yml
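For this tutorial's choices (containerd as the runtime, host-deployed etcd), the key variables would look like the sketch below; the two subnets shown are kubespray's defaults, so change them only if they clash with your network:
# inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
container_manager: containerd
kube_network_plugin: calico
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18

# inventory/mycluster/group_vars/etcd.yml
etcd_deployment_type: host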
2.5 One-Command Deployment
# -vvvv prints the most verbose logs and is recommended
$ ansible-playbook -i inventory/mycluster/hosts.yaml -b cluster.yml -vvvv
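If the run fails immediately, first confirm that Ansible can reach every node over the passwordless SSH configured in 2.1:
$ ansible all -i inventory/mycluster/hosts.yaml -m ping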
Pre-pulling Images (Optional)
To shorten the wait for the "one-command deployment", you can pre-pull some images on each node while the playbook runs. Save the following script as kubespray.sh:
$ vim kubespray.sh
#!/bin/bash
# For each image: pull it from the Aliyun mirror, re-tag it to the canonical
# name kubespray expects (docker.io / k8s.gcr.io / quay.io), then remove the
# mirror tags at the end.
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kubernetesui_metrics-scraper:v1.0.6 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kubernetesui_metrics-scraper:v1.0.6 docker.io/kubernetesui/metrics-scraper:v1.0.6
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/library_nginx:1.19 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/library_nginx:1.19 docker.io/library/nginx:1.19
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/coredns:1.7.0 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/dns_k8s-dns-node-cache:1.16.0 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/dns_k8s-dns-node-cache:1.16.0 k8s.gcr.io/dns/k8s-dns-node-cache:1.16.0
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/ingress-nginx_controller:v0.41.2 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/ingress-nginx_controller:v0.41.2 k8s.gcr.io/ingress-nginx/controller:v0.41.2
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-apiserver:v1.19.7 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-apiserver:v1.19.7 k8s.gcr.io/kube-apiserver:v1.19.7
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-controller-manager:v1.19.7 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-controller-manager:v1.19.7 k8s.gcr.io/kube-controller-manager:v1.19.7
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-proxy:v1.19.7 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-proxy:v1.19.7 k8s.gcr.io/kube-proxy:v1.19.7
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-scheduler:v1.19.7 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-scheduler:v1.19.7 k8s.gcr.io/kube-scheduler:v1.19.7
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/pause:3.2 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/pause:3.2 k8s.gcr.io/pause:3.2
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/pause:3.3 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/pause:3.3 k8s.gcr.io/pause:3.3
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kubernetesui_dashboard-amd64:v2.1.0 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kubernetesui_dashboard-amd64:v2.1.0 docker.io/kubernetesui/dashboard-amd64:v2.1.0
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/cpa_cluster-proportional-autoscaler-amd64:1.8.3 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/cpa_cluster-proportional-autoscaler-amd64:1.8.3 k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_cni:v3.16.5 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_cni:v3.16.5 quay.io/calico/cni:v3.16.5
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_kube-controllers:v3.16.5 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_kube-controllers:v3.16.5 quay.io/calico/kube-controllers:v3.16.5
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_node:v3.16.5 || exit 1
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_node:v3.16.5 quay.io/calico/node:v3.16.5
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kubernetesui_metrics-scraper:v1.0.6
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/library_nginx:1.19
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/coredns:1.7.0
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/dns_k8s-dns-node-cache:1.16.0
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/ingress-nginx_controller:v0.41.2
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-apiserver:v1.19.7
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-controller-manager:v1.19.7
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-proxy:v1.19.7
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kube-scheduler:v1.19.7
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/pause:3.2
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/pause:3.3
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/kubernetesui_dashboard-amd64:v2.1.0
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/cpa_cluster-proportional-autoscaler-amd64:1.8.3
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_cni:v3.16.5
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_kube-controllers:v3.16.5
ctr -n k8s.io i rm registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/calico_node:v3.16.5
$ sh kubespray.sh
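crictl and ctr operate on the local containerd, so run the script on every node. Afterwards, a quick check that the images landed under their canonical names:
$ crictl images | grep -E 'k8s.gcr.io|calico|nginx'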
3. Verifying High Availability
[root@k8s-node1 kubespray-2.15.0]# kubectl get no
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    master   17h   v1.19.7
k8s-node2   Ready    master   17h   v1.19.7
k8s-node3   Ready    <none>   17h   v1.19.7
# node1 and node2 are masters; node3 is the worker
Run on any master node:
[root@k8s-node1 kubespray-2.15.0]# kubectl get pods -n kube-system
NAME                                          READY   STATUS    RESTARTS   AGE
calico-kube-controllers-8b5ff5d58-vlqnn       1/1     Running   0          17h
calico-node-fcdb2                             1/1     Running   0          17h
calico-node-k4j87                             1/1     Running   0          17h
calico-node-v4nkd                             1/1     Running   0          17h
coredns-85967d65-fxs5n                        1/1     Running   1          17h
coredns-85967d65-z88jg                        1/1     Running   0          17h
dns-autoscaler-5b7b5c9b6f-ntkjr               1/1     Running   0          17h
kube-apiserver-k8s-node1                      1/1     Running   1          18h
kube-apiserver-k8s-node2                      1/1     Running   1          17h
kube-controller-manager-k8s-node1             1/1     Running   1          18h
kube-controller-manager-k8s-node2             1/1     Running   3          17h
kube-proxy-8lzs6                              1/1     Running   0          17h
kube-proxy-jzf56                              1/1     Running   0          17h
kube-proxy-x9nkw                              1/1     Running   0          17h
kube-scheduler-k8s-node1                      1/1     Running   1          18h
kube-scheduler-k8s-node2                      1/1     Running   2          17h
kubernetes-dashboard-86c6f9df5b-5tfnd         1/1     Running   8          17h
kubernetes-metrics-scraper-678c97765c-jjtmc   1/1     Running   1          17h
nginx-proxy-k8s-node3                         1/1     Running   1          17h
nodelocaldns-42g4d                            1/1     Running   1          17h
nodelocaldns-5pjqs                            1/1     Running   0          17h
nodelocaldns-d4djt                            1/1     Running   0          17h
Note the nginx-proxy-k8s-node3 pod, which runs only on the worker node. Let's inspect it on node3:
[root@k8s-node3 ~]# cat /etc/kubernetes/manifests/nginx-proxy.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-proxy
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-nginx
  annotations:
    nginx-cfg-checksum: "5f4777c5a1defa2db8704eb486b5796696d70c82"
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  nodeSelector:
    kubernetes.io/os: linux
  priorityClassName: system-node-critical
  containers:
  - name: nginx-proxy
    image: docker.io/library/nginx:1.19
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 25m
        memory: 32M
    securityContext:
      privileged: true
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8081
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8081
    volumeMounts:
    - mountPath: /etc/nginx
      name: etc-nginx
      readOnly: true
  volumes:
  - name: etc-nginx
    hostPath:
      path: /etc/nginx
This is nginx-proxy's static pod manifest; the hostPath volume shows that its Nginx configuration lives on the host under /etc/nginx. Now view that config on node3:
[root@k8s-node3 ~]# cat /etc/nginx/nginx.conf
error_log stderr notice;
worker_processes 2;
worker_rlimit_nofile 130048;
worker_shutdown_timeout 10s;

events {
  multi_accept on;
  use epoll;
  worker_connections 16384;
}

stream {
  upstream kube_apiserver {
    least_conn;
    server 192.168.56.128:6443;
    server 192.168.56.129:6443;
  }

  server {
    listen 127.0.0.1:6443;
    proxy_pass kube_apiserver;
    proxy_timeout 10m;
    proxy_connect_timeout 1s;
  }
}

http {
  aio threads;
  aio_write on;
  tcp_nopush on;
  tcp_nodelay on;

  keepalive_timeout 5m;
  keepalive_requests 100;
  reset_timedout_connection on;
  server_tokens off;
  autoindex off;

  server {
    listen 8081;
    location /healthz {
      access_log off;
      return 200;
    }
    location /stub_status {
      stub_status on;
      access_log off;
    }
  }
}
The stream block shows that node3 exposes a local endpoint, 127.0.0.1:6443, and load-balances it (least_conn) across the apiservers on both master nodes, so worker-side components keep a highly available path to the control plane.
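A quick hands-on check of the proxy, assuming the apiserver's default anonymous access to /healthz is enabled: run this on node3 and it should print ok, and it should keep doing so even if one of the two masters is down, because nginx fails over to the other upstream.
$ curl -k https://127.0.0.1:6443/healthz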