[TOC]
1. Prerequisites
1.1 Two Ways to Deploy a Production Kubernetes Cluster
1. Kubeadm
Kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster. Official docs: `https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/`.
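For reference, the basic kubeadm flow is just two commands; the sketch below is illustrative only (the pod CIDR, IP, token, and hash are placeholders):
# On the first control-plane node:
kubeadm init --pod-network-cidr=10.244.0.0/16
# On each worker node, using the token and hash printed by kubeadm init:
kubeadm join <MASTER_IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>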
2. Binary packages
Download the release binaries from GitHub and deploy each component by hand to assemble a Kubernetes cluster. Kubeadm lowers the barrier to entry but hides many details, which makes problems hard to troubleshoot. For more control, deploying from binary packages is recommended: it is more work up front, but you learn a lot about how the pieces fit together along the way, which also helps with later maintenance.
1.2 Installation Requirements
Before starting, the machines for the Kubernetes cluster must meet the following requirements:
- One or more machines running CentOS 7.x (x86_64)
- Hardware: 2 GB of RAM or more, 2 or more CPUs, 30 GB of disk or more
- Internet access for pulling images; if the servers cannot reach the internet, download the images in advance and import them onto the nodes
- Swap disabled
1.3 Environment Preparation
1. Software environment:
Software | Version |
---|---|
OS | CentOS7.6_x64 (mini) |
Docker | 19-ce |
Kubernetes | 1.18 |
Download link for all required installation files:
Link: https://pan.baidu.com/s/1k5KcqH1s4c1GG9S-QQpkbQ Password: nk30
2. Server plan:
Role | IP | Components |
---|---|---|
k8s-master1 | 10.1.1.204 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
k8s-master2 | 10.1.1.230 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
k8s-worker1 | 10.1.1.151 | kubelet, kube-proxy, docker, etcd |
k8s-worker2 | 10.1.1.186 | kubelet, kube-proxy, docker, etcd |
k8s-lb1 | 10.1.1.220, 10.1.1.170(VIP) | Nginx L4 |
k8s-lb2 | 10.1.1.169 | Nginx L4 |
Note: since some readers' machines cannot run this many VMs, this HA cluster is built in two stages: first deploy a single-Master architecture (10.1.1.204/151/186), then scale it out to the multi-Master architecture planned above, which also walks through the Master scale-out procedure.
Single-Master architecture diagram:
Single-Master server plan:
Role | IP | Components |
---|---|---|
k8s-master1 | 10.1.1.204 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
k8s-worker1 | 10.1.1.151 | kubelet, kube-proxy, docker, etcd |
k8s-worker2 | 10.1.1.186 | kubelet, kube-proxy, docker, etcd |
1.4 Operating System Initialization
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary
# Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
# Set the hostname according to the plan
hostnamectl set-hostname <hostname>
# Add hosts entries on all machines
cat >> /etc/hosts << EOF
10.1.1.204 k8s-master1
10.1.1.230 k8s-master2
10.1.1.151 k8s-worker1
10.1.1.186 k8s-worker2
10.1.1.220 k8s-lb1
10.1.1.169 k8s-lb2
EOF
# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply
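If sysctl --system does not pick up the two bridge keys (common on a minimal install where the bridge module is not yet loaded), loading br_netfilter first usually helps; this extra step is an assumption, not part of the original procedure:
modprobe br_netfilter
sysctl --system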
# Synchronize time
yum install ntpdate -y
ntpdate time.windows.com
2. Deploying the Etcd Cluster
2.1 Cluster Plan
Etcd is a distributed key-value store that Kubernetes uses for all of its data, so an Etcd database must be prepared first. To avoid a single point of failure, Etcd should be deployed as a cluster: this guide uses three nodes, which tolerates one machine failure; five nodes would tolerate two.
Node name | IP |
---|---|
etcd1(k8s-master1) | 10.1.1.204 |
etcd2(k8s-worker1) | 10.1.1.151 |
etcd3(k8s-worker2) | 10.1.1.186 |
Note: to save machines, etcd is co-located here with the K8s nodes. It can also be deployed independently of the K8s cluster, as long as the apiserver can reach it.
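For background, etcd commits a write only after a quorum of floor(N/2)+1 members acknowledge it, so an N-member cluster tolerates N minus quorum failures; a quick shell loop illustrates the arithmetic:
for n in 3 5 7; do q=$(( n / 2 + 1 )); echo "members=$n quorum=$q tolerated_failures=$(( n - q ))"; done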
2.2 Prepare the cfssl Certificate Tool
cfssl is an open-source certificate management tool that generates certificates from JSON files and is more convenient to use than openssl.
Run this on any server; the Master node is used here.
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
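A quick sanity check that the tools landed on the PATH:
cfssl version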
2.3 Generate Etcd Certificates
1. Self-sign a certificate authority (CA)
Create a working directory:
mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd
Self-sign the CA:
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json << EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
Generate the certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls *pem
ca-key.pem ca.pem
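Optionally, cfssl-certinfo (installed earlier) can inspect the new CA to confirm its subject and the 10-year expiry:
cfssl-certinfo -cert ca.pem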
2. Use the self-signed CA to issue the Etcd HTTPS certificate
Create the certificate signing request file:
cat > server-csr.json << EOF
{
"CN": "etcd",
"hosts": [
"10.1.1.204",
"10.1.1.151",
"10.1.1.186"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
Note: the IPs in the hosts field above are the internal communication IPs of all etcd nodes; not a single one may be missing! To make later scale-out easier, you can list a few spare IPs in advance.
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
ls server*pem
server-key.pem server.pem
3. Download the binaries from GitHub
Download link: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
4. Deploy the Etcd cluster
The following is done on node 1; to keep things simple, all the files generated on node 1 will be copied to nodes 2 and 3 afterwards.
1. Create the working directory and unpack the binary package
mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
2. Create the etcd configuration file
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.1.1.204:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.1.1.204:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.1.1.204:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.1.1.204:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://10.1.1.204:2380,etcd-2=https://10.1.1.151:2380,etcd-3=https://10.1.1.186:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
- ETCD_NAME: node name, unique within the cluster
- ETCD_DATA_DIR: data directory
- ETCD_LISTEN_PEER_URLS: cluster (peer) listen address
- ETCD_LISTEN_CLIENT_URLS: client listen address
- ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
- ETCD_ADVERTISE_CLIENT_URLS: advertised client address
- ETCD_INITIAL_CLUSTER: addresses of the cluster members
- ETCD_INITIAL_CLUSTER_TOKEN: cluster token
- ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" to join an existing one
3. Manage etcd with systemd
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
4. Copy the certificates generated earlier
Copy the certificates to the paths referenced in the configuration file:
cp ~/TLS/etcd/ca*.pem ~/TLS/etcd/server*.pem /opt/etcd/ssl/
5. Copy all the files generated on node 1 to nodes 2 and 3
scp -r /opt/etcd/ root@10.1.1.151:/opt/
scp /usr/lib/systemd/system/etcd.service root@10.1.1.151:/usr/lib/systemd/system/
scp -r /opt/etcd/ root@10.1.1.186:/opt/
scp /usr/lib/systemd/system/etcd.service root@10.1.1.186:/usr/lib/systemd/system/
Then on nodes 2 and 3, edit the node name and the current server's IP in etcd.conf:
vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1" # change this: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.1.1.204:2380" # change to the current server's IP
ETCD_LISTEN_CLIENT_URLS="https://10.1.1.204:2379" # change to the current server's IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.1.1.204:2380" # change to the current server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://10.1.1.204:2379" # change to the current server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://10.1.1.204:2380,etcd-2=https://10.1.1.151:2380,etcd-3=https://10.1.1.186:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
6. Start and enable at boot (run on all etcd nodes)
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
7. Check the cluster status
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://10.1.1.204:2379,https://10.1.1.151:2379,https://10.1.1.186:2379" endpoint health
https://10.1.1.204:2379 is healthy: successfully committed proposal: took = 12.415974ms
https://10.1.1.151:2379 is healthy: successfully committed proposal: took = 14.409434ms
https://10.1.1.186:2379 is healthy: successfully committed proposal: took = 14.447958ms
If you see the output above, the cluster was deployed successfully. If there is a problem, check the logs first: /var/log/messages or journalctl -u etcd.
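Cluster membership can also be listed with the same TLS flags; this is a supplementary check, not a required step:
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://10.1.1.204:2379" member list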
3. Installing Docker
Download link: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
The following is done on all nodes. Binary installation is used here; installing with yum works just as well.
3.1 Unpack the binary package
tar zxf docker-19.03.9.tgz
mv docker/* /usr/bin
3.2 Manage docker with systemd
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
3.3 Create the configuration file
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
- registry-mirrors: Alibaba Cloud registry mirror (image pull accelerator)
3.4 Start and enable at boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker
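As a quick verification that the daemon is up and the mirror from daemon.json took effect:
docker version
docker info | grep -A1 "Registry Mirrors"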
4. Deploying the Master Node
4.1 Generate the kube-apiserver Certificate
1. Self-sign a certificate authority (CA)
cd ~/TLS/k8s/
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
Generate the certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls *pem
ca-key.pem ca.pem
2. Use the self-signed CA to issue the kube-apiserver HTTPS certificate
Create the certificate signing request file:
cat > server-csr.json << EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"10.1.1.204",
"10.1.1.230",
"10.1.1.151",
"10.1.1.186",
"10.1.1.220",
"10.1.1.169",
"10.1.1.170",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
Note: the IPs in the hosts field above are all of the Master/LB/VIP IPs; not a single one may be missing! To make later scale-out easier, you can list a few spare IPs in advance.
Do not remove 10.0.0.1 (the first IP of the Service CIDR), 127.0.0.1, or the kubernetes* names.
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
ls server*pem
server-key.pem server.pem
4.2 Download the Binaries from GitHub
Download link: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183
Note: the page lists many packages; downloading a single server package is enough, as it contains the binaries for both the Master and the Worker Node.
For example: https://dl.k8s.io/v1.18.3/kubernetes-server-linux-amd64.tar.gz
4.3 Unpack the binary package
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar zxf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/
4.4 Deploy kube-apiserver
1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://10.1.1.204:2379,https://10.1.1.151:2379,https://10.1.1.186:2379 \\
--bind-address=10.1.1.204 \\
--secure-port=6443 \\
--advertise-address=10.1.1.204 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
Note: in the "\\" sequences above, the first backslash is an escape character and the second is the line-continuation character; the escape is needed so that the heredoc (EOF) preserves the continuation characters.
- --logtostderr: enable logging to files
- --v: log level
- --log-dir: log directory
- --etcd-servers: etcd cluster addresses
- --bind-address: listen address
- --secure-port: HTTPS secure port
- --advertise-address: advertised cluster address
- --allow-privileged: allow privileged containers
- --service-cluster-ip-range: Service virtual IP range
- --enable-admission-plugins: admission control plugins
- --authorization-mode: authentication/authorization modes; enables RBAC authorization and Node self-management
- --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
- --token-auth-file: bootstrap token file
- --service-node-port-range: default port range for NodePort Services
- --kubelet-client-xxx: client certificates for the apiserver to access the kubelet
- --tls-xxx-file: apiserver HTTPS certificates
- --etcd-xxxfile: certificates for connecting to the etcd cluster
- --audit-log-xxx: audit logging
2. Copy the certificates generated earlier
Copy the certificates to the paths referenced in the configuration file:
cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/
3. Enable the TLS Bootstrapping mechanism
TLS Bootstrapping: once the Master apiserver enables TLS authentication, the kubelet and kube-proxy on each Node must use valid CA-signed certificates to communicate with kube-apiserver. With many Nodes, issuing these client certificates by hand is a lot of work and complicates cluster scale-out. To simplify the process, Kubernetes introduced the TLS bootstrapping mechanism to issue client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically. This approach is strongly recommended on Nodes; it is currently used mainly for the kubelet, while kube-proxy still uses a certificate that we issue centrally.
TLS bootstrapping workflow:
Create the token file referenced in the configuration above:
cat > /opt/kubernetes/cfg/token.csv << EOF
8054b7219e601b121e8d2b4f73d255ad,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
Format: token,username,UID,group
The token can also be generated yourself and substituted in:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
4. Manage apiserver with systemd
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
5. Start and enable at boot
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
6. Authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
4.5 Deploy kube-controller-manager
1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
- --master: connect to the apiserver through the local insecure port 8080
- --leader-elect: automatic leader election when multiple instances of the component run (HA)
- --cluster-signing-cert-file / --cluster-signing-key-file: the CA that automatically issues certificates for kubelets; must match the apiserver's CA
2. Manage controller-manager with systemd
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
3. Start and enable at boot
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
4.6 Deploy kube-scheduler
1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF
- --master: connect to the apiserver through the local insecure port 8080
- --leader-elect: automatic leader election when multiple instances of the component run (HA)
2. Manage scheduler with systemd
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
3. Start and enable at boot
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
4. Check the cluster status
All components have started; check the current status of the cluster components with kubectl:
kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
Output like the above means the Master node components are running normally.
5. Deploying Worker Nodes
The steps below are still performed on the Master Node, which doubles as a Worker Node here.
5.1 Create the working directory and copy binaries
Create the working directory on all worker nodes:
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
Copy from the master node:
cd ~/kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin # local copy
5.2 Deploy kubelet
1. Create the configuration file
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
- --hostname-override: display name, unique within the cluster
- --network-plugin: enable CNI
- --kubeconfig: empty path; the file is generated automatically and later used to connect to the apiserver
- --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
- --config: configuration parameter file
- --cert-dir: directory where kubelet certificates are generated
- --pod-infra-container-image: image of the container that manages the Pod network
2. Configuration parameter file
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
3. Generate the bootstrap.kubeconfig file
KUBE_APISERVER="https://10.1.1.204:6443" # apiserver IP:PORT
TOKEN="8054b7219e601b121e8d2b4f73d255ad" # must match token.csv
# Generate the kubelet bootstrap kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Copy it to the configuration path:
cp bootstrap.kubeconfig /opt/kubernetes/cfg
4. Manage kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5. Start and enable at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
5.3 Approve the kubelet Certificate Request and Join the Cluster
# View kubelet certificate requests
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-ngC06Pj7kmQUvmzBnbfLRxUVG1J90dlT2lWMkCNbnBA 26s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# Approve the request (the argument is the node-csr-* name from the previous step)
kubectl certificate approve node-csr-ngC06Pj7kmQUvmzBnbfLRxUVG1J90dlT2lWMkCNbnBA
# View nodes
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady <none> 6s v1.18.3
Note: the node shows NotReady because the network plugin has not been deployed yet.
5.4 Deploy kube-proxy
1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
2. Configuration parameter file
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 10.0.0.0/24
EOF
3. Generate the kube-proxy.kubeconfig file
Generate the kube-proxy certificate:
# Switch to the working directory
cd ~/TLS/k8s
# Create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*pem
kube-proxy-key.pem kube-proxy.pem
Generate the kubeconfig file:
KUBE_APISERVER="https://10.1.1.204:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Copy it to the path referenced in the configuration file:
cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
4. Manage kube-proxy with systemd
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5. Start and enable at boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
5.5 Deploy the CNI Network
First prepare the CNI binaries.
Download link: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
Unpack the package and move it into the default working directory:
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
Deploy the CNI network:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml
# A mirror image is needed here; the upstream registry is not reachable from inside China
The default image registry is unreachable, so it is replaced with a Docker Hub repository.
kubectl apply -f kube-flannel.yml
kubectl apply -f kube-flannel-rbac.yml
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-amd64-jj98k 0/1 Init:0/1 0 14s
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 14m v1.18.3
With the network plugin deployed, the Node becomes Ready.
5.6 Authorize apiserver Access to kubelet
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml
5.7 Add New Worker Nodes
1. Copy the deployed Node files to the new nodes (run on k8s-master1)
On the Master node, copy the Worker Node files to the new nodes 10.1.1.151/186:
scp -r /opt/kubernetes root@10.1.1.151:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@10.1.1.151:/usr/lib/systemd/system
scp -r /opt/cni/ root@10.1.1.151:/opt/
scp -r /opt/kubernetes/ssl/ca.pem root@10.1.1.151:/opt/kubernetes/ssl
scp -r /opt/kubernetes root@10.1.1.186:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@10.1.1.186:/usr/lib/systemd/system
scp -r /opt/cni/ root@10.1.1.186:/opt/
scp -r /opt/kubernetes/ssl/ca.pem root@10.1.1.186:/opt/kubernetes/ssl
2. Delete the kubelet certificate and kubeconfig file (run on k8s-worker1/2)
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*
Note: these files are generated automatically once the certificate request is approved and differ on every Node, so they must be deleted and regenerated.
3. Change the hostname (run on k8s-worker1/2)
# On k8s-worker1:
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-worker1
vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-worker1
# On k8s-worker2:
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-worker2
vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-worker2
4. Start and enable at boot (run on k8s-worker1/2)
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy
5. On the Master, approve the new Nodes' kubelet certificate requests
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-mZ1Shcds5I90zfqQupi-GaI_4MeKl7PizK5dgOF2wC8 14s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
node-csr-x-lWfrl2pFrMUblecU5NYgpn3zI6j_iiTmcKZ1JRefY 20s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
kubectl certificate approve node-csr-mZ1Shcds5I90zfqQupi-GaI_4MeKl7PizK5dgOF2wC8
kubectl certificate approve node-csr-x-lWfrl2pFrMUblecU5NYgpn3zI6j_iiTmcKZ1JRefY
6. Check Node status
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 65m v1.18.3
k8s-worker1 Ready <none> 12m v1.18.3
k8s-worker2 Ready <none> 81s v1.18.3
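With all nodes Ready, a simple smoke test confirms that scheduling and NodePort routing work end to end (the deployment name nginx-test is arbitrary):
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get pods,svc
# then access http://NodeIP:<assigned NodePort> with curl or a browser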
6. Deploying the Dashboard and CoreDNS
6.1 Deploy the Dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
By default the Dashboard is only reachable from inside the cluster; change the Service to type NodePort to expose it externally:
vi recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
kubectl apply -f recommended.yaml # note: wait a while until the STATUS below shows Running
kubectl get pods,svc -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-694557449d-dhlrq 1/1 Running 0 27s
pod/kubernetes-dashboard-9774cc786-7d6qn 1/1 Running 0 27s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.0.0.36 <none> 8000/TCP 27s
service/kubernetes-dashboard NodePort 10.0.0.154 <none> 443:30001/TCP 27s
Access URL: https://NodeIP:30001
Create a service account and bind it to the default cluster-admin cluster role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Retrieve the token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IlVZczd4TDN3VHRpLWdNUnRlc1BOY2E0T3Z1Q3UzQmlwUEZpaEZHclJ0MWcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNWRod2YiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMTMzZmVkODItM2Q3OC00ODk5LWJiMzYtY2YyZjU4NzgzZTRlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.gTuj33hMUmvgdPU499ifrn4V8AjinlRhzkw2ItdNRVdUmU6YLbSjZIp8WlPOkdtmDM2c3LCkLmnBqIabkIiIwb1HkNOtmglnFghzV8Td5GIGnoRqAmKPXO0aY4Y97w_X0sVC3DB34EY6soPlcI_v6_-nSqRC1rPOSJ9xd8yU774YHYucYqiU-ViiIgqHELo7BT6CVD1iLw4K_C5wfWs4o6htplhGpJ0edLPsZpwnTrW-4qn4d-EdKXcVJhpTUxxLoKL7HrerTwxeUMM1SL0vGnD5z2io9gybylubbgV7xRsSQQCwuXWqlzI6A_YWzl3-wmhwty4x9cWjQHMGw87YhA
Log in to the Dashboard with the token from the output.
6.2 Deploy CoreDNS
CoreDNS resolves Service names inside the cluster.
curl -o ./coredns.yaml https://raw.githubusercontent.com/kubernetes/kubernetes/e176e477195f92919724bd02f81e2b18d4477af7/cluster/addons/dns/coredns/coredns.yaml.sed
kubectl apply -f coredns.yaml # edit the file first: remove the CLUSTER_DNS_IP variable
kubectl get pods -n kube-system # wait until the first STATUS below is Running
NAME READY STATUS RESTARTS AGE
coredns-5ffbfd976d-j6shb 1/1 Running 0 32s
kube-flannel-ds-amd64-2pc95 1/1 Running 0 38m
kube-flannel-ds-amd64-7qhdx 1/1 Running 0 15m
kube-flannel-ds-amd64-99cr8 1/1 Running 0 26m
DNS resolution test:
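The session below assumes a temporary busybox test pod started as follows (the usual approach; the exact image tag is an assumption):
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh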
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
Resolution works.
7. High-Availability Architecture (Scaling Out to Multi-Master)
As a container cluster system, Kubernetes provides self-healing for Pods through health checks plus restart policies, spreads Pods across Nodes via its scheduling algorithm while maintaining the desired replica count, and automatically re-creates Pods on other Nodes when a Node fails, achieving high availability at the application layer.
For the Kubernetes cluster itself, high availability also involves two further layers: the Etcd database and the Kubernetes Master components. Etcd is already highly available thanks to its three-node cluster; this chapter explains and implements high availability for the Master nodes.
The Master node plays the role of control center, keeping the whole cluster healthy by constantly communicating with the kubelet and kube-proxy on the worker nodes. If the Master node fails, no cluster management is possible through kubectl or the API.
The Master node runs three main services: kube-apiserver, kube-controller-manager, and kube-scheduler. The kube-controller-manager and kube-scheduler already achieve high availability through their own leader-election mechanism, so Master HA is mainly about kube-apiserver. Since that component serves an HTTP API, making it highly available is similar to any web server: put a load balancer in front of it, and it can also be scaled horizontally.
Multi-Master architecture diagram:
7.1 Install Docker
Same as above; not repeated here.
7.2 Deploy the Master2 Node (10.1.1.230)
All operations on Master2 are identical to the already-deployed Master1, so we only need to copy over all of Master1's K8s files, adjust the server IP and hostname, and start the services.
1. Create the etcd certificate directory
Create the etcd certificate directory on Master2:
mkdir -p /opt/etcd/ssl
2. Copy files (run on Master1)
Copy all of Master1's K8s files and etcd certificates to Master2:
scp -r /opt/kubernetes root@10.1.1.230:/opt
scp -r /opt/cni/ root@10.1.1.230:/opt
scp -r /opt/etcd/ssl root@10.1.1.230:/opt/etcd
scp /usr/lib/systemd/system/kube* root@10.1.1.230:/usr/lib/systemd/system
scp /usr/bin/kubectl root@10.1.1.230:/usr/bin
3. Delete certificate files
Delete the kubelet certificate and kubeconfig file:
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*
4. Change the IP and hostname in the configuration files
Point the apiserver, kubelet, and kube-proxy configuration files at the local IP:
vi /opt/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=10.1.1.230 \
--advertise-address=10.1.1.230 \
...
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2
vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master2
5. Start and enable at boot
systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable kubelet
systemctl enable kube-proxy
6. Check the cluster status
kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
7. Approve the kubelet certificate request
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU 85m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
kubectl certificate approve node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 34h v1.18.3
k8s-master2 Ready <none> 83m v1.18.3
k8s-worker1 Ready <none> 33h v1.18.3
k8s-worker2 Ready <none> 33h v1.18.3
7.3 Deploy the Nginx Load Balancer
kube-apiserver high-availability architecture diagram:
- Nginx is a popular web server and reverse proxy; here it load-balances the apiserver at Layer 4.
- Keepalived is a popular high-availability tool that implements active/standby failover by binding a VIP. In this topology, Keepalived decides whether to fail over (move the VIP) based on the state of Nginx: if the Nginx master node dies, the VIP is automatically bound on the Nginx backup node, so the VIP stays reachable and Nginx remains highly available.
1. Install packages (master/backup)
yum install epel-release -y
yum install nginx keepalived -y
2. Nginx configuration file (identical on master and backup)
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
# Layer 4 load balancing for the apiserver components of the two Masters
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 10.1.1.204:6443; # Master1 APISERVER IP:PORT
server 10.1.1.230:6443; # Master2 APISERVER IP:PORT
}
server {
listen 6443;
proxy_pass k8s-apiserver;
}
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
server {
listen 80 default_server;
server_name _;
location / {
}
}
}
EOF
3. keepalived configuration file (Nginx master)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 51 # VRRP route ID; must be unique per instance
priority 100 # priority; set 90 on the backup server
advert_int 1 # VRRP heartbeat advertisement interval; default 1 second
authentication {
auth_type PASS
auth_pass 1111
}
# Virtual IP
virtual_ipaddress {
10.1.1.170/24
}
track_script {
check_nginx
}
}
EOF
- vrrp_script: the script that checks nginx's state (used to decide whether to fail over)
- virtual_ipaddress: the virtual IP (VIP)
The nginx status check script:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
4. keepalived configuration file (Nginx backup)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_BACKUP
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 51 # VRRP route ID; must be unique per instance
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.1.1.170/24
}
track_script {
check_nginx
}
}
EOF
The nginx status check script referenced in the configuration above:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
Note: keepalived decides whether to fail over based on the script's exit code (0 means healthy, non-zero means unhealthy).
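The script can be exercised by hand to confirm it returns the exit codes keepalived expects:
bash /etc/keepalived/check_nginx.sh && echo "nginx running (exit 0)" || echo "nginx down (exit 1)"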
5. Start and enable at boot
systemctl daemon-reload
systemctl start nginx
systemctl start keepalived
systemctl enable nginx
systemctl enable keepalived
6. Check keepalived status
ip a
You should see the virtual IP 10.1.1.170 bound on the ens33 interface, which means keepalived is working correctly.
7. Nginx + Keepalived failover test
Shut down Nginx on the master node and check whether the VIP fails over to the backup server.
On the Nginx master, run: pkill nginx
On the Nginx backup, confirm with ip addr that the VIP is now bound there.
8. Test access through the load balancer
From any node in the K8s cluster, use curl to query the K8s version through the VIP:
curl -k https://10.1.1.170:6443/version
{
"major": "1",
"minor": "18",
"gitVersion": "v1.18.3",
"gitCommit": "2e7996e3e2712684bc73f0dec0200d64eec7fe40",
"gitTreeState": "clean",
"buildDate": "2020-05-20T12:43:34Z",
"goVersion": "go1.13.9",
"compiler": "gc",
"platform": "linux/amd64"
}
The K8s version information is returned correctly, so the load balancer is set up properly. Request flow: curl -> VIP (nginx) -> apiserver
The Nginx access log also shows which apiserver IP the requests were forwarded to:
tail /var/log/nginx/k8s-access.log -f
10.1.1.170 10.1.1.204:6443 - [30/May/2020:11:15:10 +0800] 200 422
10.1.1.170 10.1.1.230:6443 - [30/May/2020:11:15:26 +0800] 200 422
This is not the end yet; the most critical step follows.
7.4 Point All Worker Nodes at the LB VIP
Think about it: although Master2 and a load balancer were added, we scaled out from a single-Master architecture, so all Node components still connect to Master1. If they are not switched to the VIP behind the load balancer, the Master is still a single point of failure.
So the next step is to change the configuration files of the components on all Worker Nodes from the original 10.1.1.204 to 10.1.1.170 (the VIP):
Role | IP |
---|---|
k8s-master1 | 10.1.1.204 |
k8s-master2 | 10.1.1.230 |
k8s-worker1 | 10.1.1.151 |
k8s-worker2 | 10.1.1.186 |
sed -i 's#10.1.1.204:6443#10.1.1.170:6443#' /opt/kubernetes/cfg/*
systemctl restart kubelet
systemctl restart kube-proxy
Check node status: kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 34h v1.18.3
k8s-master2 Ready <none> 101m v1.18.3
k8s-worker1 Ready <none> 33h v1.18.3
k8s-worker2 Ready <none> 33h v1.18.3
With that, a complete highly available Kubernetes cluster is deployed!
8. KUBECONFIG Configuration
8.1 Generate an Administrator Certificate [run on master]
[root@k8s-master1 ~]# cd /root/TLS/k8s/
[root@k8s-master1 k8s]# vim admin-csr.json
Write the following content:
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
Issue the administrator (admin) certificate:
[root@master1 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@master1 k8s]# ls admin*
admin.csr admin-csr.json admin-key.pem admin.pem
8.2 Create the kubeconfig File [run on master]
- Set the cluster parameters:
[root@k8s-master1 k8s]# kubectl config set-cluster kubernetes \
  --server=https://10.1.1.170:6443 \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --kubeconfig=config
TIPS: the --server value is the master node IP, or the virtual IP (VIP) of the master cluster (10.1.1.170 here).
- Set the client authentication parameters:
[root@k8s-master1 k8s]# kubectl config set-credentials cluster-admin \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --client-key=admin-key.pem \
  --client-certificate=admin.pem \
  --kubeconfig=config
- Set the context parameters:
# Set the context
[root@k8s-master1 k8s]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=config
# Set the default context
[root@k8s-master1 k8s]# kubectl config use-context default --kubeconfig=config
TIPS
This ultimately produces a config file holding the credentials for accessing the K8s cluster.
It is usually stored at ~/.kube/config.
It is typically used for user authentication and remote management of the K8s cluster (running kubectl on other hosts), or as the access configuration for plugins such as helm.
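Typical usage of the generated file, for example on a remote management host (paths assumed):
mkdir -p ~/.kube
cp config ~/.kube/config
kubectl get node # kubectl now authenticates with the admin certificate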
9. Installing Helm (Optional)
9.1 Binary Installation
Download link for the official v3.2.4 release: https://github.com/helm/helm/releases/tag/v3.2.4
[root@K8s-master ~]# tar -zxvf helm-v3.2.4-linux-amd64.tar.gz
[root@k8s-master src]# mv linux-amd64/helm /usr/local/bin
[root@k8s-master src]# helm version
version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}
9.2 Common Chart Repositories
Add chart repositories:
# Official helm chart repository (stable)
[root@k8s-master ~]# helm repo add stable https://kubernetes-charts.storage.googleapis.com
# Alibaba Cloud chart repository (fastest)
[root@k8s-master ~]# helm repo add aliyuncs https://apphub.aliyuncs.com
Check which chart repositories the cluster currently has:
[root@k8s-master ~]# helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com/
aliyuncs https://apphub.aliyuncs.com
List the installable charts in a given repository:
[root@k8s-master ~]# helm search repo aliyuncs
NAME CHART VERSION APP VERSION DESCRIPTION
aliyuncs/admin-mongo 0.1.0 1 MongoDB admin tool (web GUI)
aliyuncs/aerospike 0.3.2 v4.5.0.5 A Helm chart for Aerospike in Kubernetes
9.3 Deploy a Helm Application
[root@k8s-master ~]# helm install nginx aliyuncs/nginx
NAME: nginx
LAST DEPLOYED: Thu Jun 25 17:29:46 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Get the NGINX URL:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace default -w nginx'
export SERVICE_IP=$(kubectl get svc --namespace default nginx --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
echo "NGINX URL: http://$SERVICE_IP/"
9.4 List and Uninstall Helm Applications
List the helm applications in the cluster:
[root@k8s-master1 ~]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
nginx default 1 2020-07-02 20:03:36.008528631 -0400 EDT deployed nginx-5.1.5 1.16.1
Uninstall an application:
# helm uninstall <release name>
[root@k8s-master1 ~]# helm uninstall nginx
release "nginx" uninstalled
Note the differences between Helm 3 and Helm 2:
Helm 2 has a client/server architecture, split into the `helm` client and the `Tiller` server. Like earlier versions, Helm 3 provides prebuilt binaries on its Releases page; the difference is that the Helm 2 package contains both `helm` and `tiller`, whereas Helm 3 ships only `helm`.
`Tiller` was responsible for managing the release versions of applications in the Kubernetes cluster. Helm 3 removes `Tiller` and stores release data directly in Kubernetes. Previously, because of RBAC, you had to create a ServiceAccount before you could start using Helm; now Helm simply has the permissions of the user configured in the current `KUBECONFIG` (the `~/.kube/config` file by default), which is much easier to control.
10. Troubleshooting
10.1 kubectl logs pod-xxx -n namespace cannot show logs
[root@test-master-113 kube-proxy]# kubectl logs prometheus-2159841327-pgfsv -n kube-system prometheus
Error from server (Forbidden): Forbidden (user=kubernetes, verb=get, resource=nodes, subresource=proxy) ( pods/log prometheus-2159841327-pgfsv)
Run the following command:
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
10.2 Troubleshooting a node stuck in NotReady state
journalctl -f -u kubelet
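Beyond the kubelet journal, a couple of follow-up checks usually narrow the cause down quickly (the node name is a placeholder):
kubectl describe node <node-name> # check the Conditions and Events sections
systemctl status kubelet # confirm the kubelet service itself is running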