Manually Deploying a Highly Available Kubernetes Cluster from Binaries

1. Planning

Cluster planning:

Hostname      IP
master01      192.168.50.10
master02      192.168.50.11
master03      192.168.50.12
etcd01        192.168.50.13
etcd02        192.168.50.14
etcd03        192.168.50.15
node01        192.168.50.16
node02        192.168.50.17
node03        192.168.50.18
harbor        192.168.50.19
kube-lb01     192.168.50.20
kube-lb02     192.168.50.21

2. Basic environment configuration

2.1 Configure /etc/hosts on all nodes

root@master01:~# cat /etc/hosts
127.0.0.1 localhost
192.168.50.10 master01
192.168.50.11 master02
192.168.50.12 master03
192.168.50.13 etcd01
192.168.50.14 etcd02
192.168.50.15 etcd03
192.168.50.16 node01
192.168.50.17 node02
192.168.50.18 node03
192.168.50.19 harbor.whyxx.net
192.168.50.20 lb01
192.168.50.21 lb02

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

2.2 Disable the firewall, SELinux, and swap on all nodes
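A minimal sketch for Ubuntu hosts (the rest of this guide uses apt); SELinux is normally not present on Ubuntu, so that step only applies to distributions that ship it:

# Disable the firewall
systemctl disable --now ufw
# Disable swap now and keep it disabled across reboots
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Only on SELinux-enabled distributions:
# setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config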

2.3 Configure time synchronization

# Install ntpdate (make sure the apt sources are configured)
apt install ntpdate -y
# Run a one-off sync; use your own NTP server here if you have one
ntpdate time2.aliyun.com
# Add a cron job
crontab -e
*/5 * * * * ntpdate time2.aliyun.com
Alternatively, chrony can be used; a sketch follows.
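A minimal chrony sketch, assuming Ubuntu and the same upstream time source (any reachable NTP server works):

apt install chrony -y
# Point chrony at an upstream server
echo "server time2.aliyun.com iburst" >> /etc/chrony/chrony.conf
systemctl restart chrony
# Check the time sources
chronyc sources -v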

2.4 Set up SSH key authentication from master01 to the other nodes

master01 needs passwordless SSH access to the other nodes. All configuration files and certificates are generated on master01 during installation, and the cluster is also managed from master01. On Alibaba Cloud or AWS a dedicated kubectl server would be used instead.

ssh-keygen -t rsa
for i in master01 master02 master03 node01 node02 node03 etcd01 etcd02 etcd03;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

2.5 Tune kernel parameters on all nodes

# Adjust kernel parameters
cat >/etc/sysctl.conf<<EOF
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.conf.all.rp_filter=0 # default is 1; strict reverse-path validation can drop packets
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce=2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
net.ipv4.ip_local_port_range= 45001 65000
net.ipv4.ip_forward=1
net.ipv4.tcp_max_tw_buckets=6000
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_synack_retries=2
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.netfilter.nf_conntrack_max=2310720
net.ipv6.neigh.default.gc_thresh1=8192
net.ipv6.neigh.default.gc_thresh2=32768
net.ipv6.neigh.default.gc_thresh3=65536
net.core.netdev_max_backlog=16384 # per-CPU backlog queue length for network devices
net.core.rmem_max = 16777216 # maximum socket read buffer size (wmem_max below is the write side)
net.core.wmem_max = 16777216
net.ipv4.tcp_max_syn_backlog = 8096 # SYN (first) backlog queue length
net.core.somaxconn = 32768 # accept (second) backlog queue length
fs.inotify.max_user_instances=8192 # maximum number of inotify instances per real user ID; default 128
fs.inotify.max_user_watches=524288 # maximum number of inotify watches per user; default 8192
fs.file-max=52706963
fs.nr_open=52706963
kernel.pid_max = 4194303
net.bridge.bridge-nf-call-arptables=1
vm.swappiness=0 # avoid swap; only fall back to it when the system is out of memory
vm.overcommit_memory=1 # do not check whether enough physical memory is available
vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer handle it
vm.max_map_count = 262144
EOF

# Load the IPVS and related kernel modules on boot
cat >/etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
# br_netfilter is required for the net.bridge.* sysctls above to take effect
br_netfilter
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
EOF

systemctl enable --now systemd-modules-load.service
# Reboot
reboot
# After the reboot, verify that the modules are loaded
lsmod | grep -e ip_vs -e nf

2.6 Install basic packages

apt install -y ntpdate git vim curl wget jq psmisc net-tools telnet lvm2

3. Prepare the packages

1. Download the Kubernetes 1.24.x server binaries
GitHub download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#downloads-for-v1242

2. Download the etcd (and etcdctl) binaries
GitHub download page: https://github.com/etcd-io/etcd/releases

3. Download the docker-ce static binaries
Download page: https://download.docker.com/linux/static/stable/x86_64/
Use a 20.10.x release.

4. Download cri-dockerd
Download page: https://github.com/Mirantis/cri-dockerd/releases

5. Download containerd
GitHub download page: https://github.com/containerd/containerd/releases
Choose the containerd release bundle that includes the CNI plugins.

6. Download the cfssl binaries
GitHub download page: https://github.com/cloudflare/cfssl/releases

7. Download the CNI plugins
GitHub download page: https://github.com/containernetworking/plugins/releases

8. Download the crictl client binaries
GitHub download page: https://github.com/kubernetes-sigs/cri-tools/releases
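As an example, the versions used later in this guide could be fetched roughly as follows (URLs and asset names are illustrative; verify them against the release pages before downloading):

wget https://dl.k8s.io/v1.24.7/kubernetes-server-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.5.3/etcd-v3.5.3-linux-amd64.tar.gz
wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.15.tgz
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.3/cri-dockerd-0.2.3.amd64.tgz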

4. Install a container runtime

Choose either docker-ce or containerd; only one of the two is needed, and it must be installed on every node that runs kubelet. Since Kubernetes 1.24, using docker-ce as the container runtime additionally requires cri-dockerd.

4.1 Install docker-ce (master and node hosts)

# Extract
tar xf docker-20.10.15.tgz
# Copy the binaries
cp docker/* /usr/bin/
# Create the containerd service unit and start it
cat >/etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
systemctl enable --now containerd.service
# Prepare the docker service unit
cat > /etc/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
EOF
# Prepare the docker socket unit
cat > /etc/systemd/system/docker.socket <<EOF
[Unit]
Description=Docker Socket for the API

[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF
# Create the docker group
groupadd docker
# Start docker
systemctl enable --now docker.socket  && systemctl enable --now docker.service
# Verify
docker info
# Create the docker daemon configuration
# Newer kubelet versions recommend the systemd cgroup driver, so switch docker's CgroupDriver to systemd
cat >/etc/docker/daemon.json <<EOF
{
   "insecure-registries": ["harbor.whyxx.net"],
   "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker

4.2 Install cri-dockerd

# Extract the package
tar xf cri-dockerd-0.2.3.amd64.tgz
# Copy the binaries
cp cri-dockerd/* /usr/bin/
# Create the socket and service units
cat >/etc/systemd/system/cri-docker.socket<<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF
cat >/etc/systemd/system/cri-docker.service<<EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=harbor.whyxx.net/baseimages/pause:3.7
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
# Start
systemctl enable --now cri-docker.socket
systemctl enable --now cri-docker
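Optionally, crictl (downloaded in section 3) can be pointed at the cri-dockerd socket for troubleshooting; a minimal sketch, assuming crictl has already been copied to /usr/bin:

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
timeout: 10
EOF
crictl info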

5. Generate the certificates for the cluster

5.1 Install the cfssl certificate tools (master01)

Download the cfssl, cfssl-certinfo and cfssljson binaries, copy them to /usr/bin, and make them executable with chmod +x. A sketch follows.
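Example download commands, assuming cfssl v1.6.1 (check the releases page for the current version and exact asset names):

wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64 -O /usr/bin/cfssl
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64 -O /usr/bin/cfssljson
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64 -O /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl /usr/bin/cfssljson /usr/bin/cfssl-certinfo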
# Create the directories that will hold the certificates
mkdir /opt/pki/{etcd,kubernetes} -p

5.2 Generate the etcd certificates

Create the CA for the etcd certificates

mkdir /opt/pki/etcd/ -p
cd /opt/pki/etcd/
# Create the etcd CA directory
mkdir ca
# Generate the etcd CA config file and CSR file
cd ca/
# Generate the config file
cat > ca-config.json <<EOF
{
    "signing": {
         "default": {
             "expiry": "87600h"
        },
         "profiles": {
             "etcd": {
                 "expiry": "87600h",
                 "usages": [
                     "signing",
                     "key encipherment",
                     "server auth",
                     "client auth"
                 ]
             }
         }
     }
}
EOF
# Generate the CSR file
cat > ca-csr.json <<EOF
{
  "CA":{"expiry":"87600h"},
  "CN": "etcd-cluster",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "etcd-cluster",
      "OU": "System"
    }
  ]
}
EOF
# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Generate the etcd server certificate

# Generate the etcd server CSR file
cd /opt/pki/etcd/
cat > etcd-server-csr.json << EOF
{
  "CN": "etcd-server",
  "hosts": [
     "192.168.50.13",
     "192.168.50.14",
     "192.168.50.15",
     "127.0.0.1"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "etcd-server",
      "OU": "System"
    }
  ]
}
EOF
# Generate the certificate
cfssl gencert \
  -ca=ca/ca.pem \
  -ca-key=ca/ca-key.pem \
  -config=ca/ca-config.json \
  -profile=etcd \
  etcd-server-csr.json | cfssljson -bare etcd-server

Generate the etcd client certificate

# Generate the etcd client CSR file
cd /opt/pki/etcd/
cat > etcd-client-csr.json << EOF
{
  "CN": "etcd-client",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "etcd-client",
      "OU": "System"
    }
  ]
}
EOF
# Generate the certificate
cfssl gencert \
  -ca=ca/ca.pem \
  -ca-key=ca/ca-key.pem \
  -config=ca/ca-config.json \
  -profile=etcd \
  etcd-client-csr.json | cfssljson -bare etcd-client

Verify

pwd
/opt/pki/etcd
# Use tree to check the generated files
tree
.
├── ca   # etcd CA files
│   ├── ca-config.json
│   ├── ca.csr
│   ├── ca-csr.json
│   ├── ca-key.pem
│   └── ca.pem
├── etcd-client.csr
├── etcd-client-csr.json
├── etcd-client-key.pem  # client private key
├── etcd-client.pem      # client certificate
├── etcd-server.csr
├── etcd-server-csr.json
├── etcd-server-key.pem  # server private key
└── etcd-server.pem      # server certificate

Copy the certificates to the master and etcd nodes

master_etcd="master01 master02 master03 etcd01 etcd02 etcd03"
for i in $master_etcd;do
  ssh $i "mkdir /etc/etcd/ssl -p"
  scp /opt/pki/etcd/ca/ca.pem /opt/pki/etcd/{etcd-server.pem,etcd-server-key.pem,etcd-client.pem,etcd-client-key.pem} $i:/etc/etcd/ssl/
done
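Optionally, before distributing the certificates, double-check the SANs on the server certificate (openssl is assumed to be installed):

openssl x509 -in /opt/pki/etcd/etcd-server.pem -noout -text | grep -A1 "Subject Alternative Name"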

5.3 Create the certificates for the Kubernetes components

Create the Kubernetes CA

# Create the directory
mkdir /opt/pki/kubernetes/ -p
cd /opt/pki/kubernetes/
mkdir ca
cd ca
# Create the CA config and CSR files
cat > ca-config.json <<EOF
{
    "signing": {
         "default": {
             "expiry": "87600h"
        },
         "profiles": {
             "kubernetes": {
                 "expiry": "87600h",
                 "usages": [
                     "signing",
                     "key encipherment",
                     "server auth",
                     "client auth"
                 ]
             }
         }
     }
}
EOF
# Generate the CSR file
cat > ca-csr.json <<EOF
{
  "CA":{"expiry":"87600h"},
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "kubernetes",
      "OU": "System"
    }
  ]
}
EOF
# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Create the kube-apiserver certificate

# Create the directory
mkdir /opt/pki/kubernetes/kube-apiserver -p
cd /opt/pki/kubernetes/kube-apiserver
# Generate the CSR file
cat > kube-apiserver-csr.json <<EOF
{
  "CN": "kube-apiserver",
  "hosts": [
    "127.0.0.1",
    "192.168.50.10",
    "192.168.50.11",
    "192.168.50.12",
    "192.168.50.251",
    "10.200.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
   ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "kube-apiserver",
      "OU": "System"
    }
  ]
}
EOF
# Generate the certificate
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  kube-apiserver-csr.json | cfssljson -bare  kube-apiserver

Copy the kube-apiserver certificates to the master nodes

master="master01 master02 master03"
for i in $master;do
   ssh $i "mkdir /etc/kubernetes/pki -p"
   scp /opt/pki/kubernetes/ca/{ca.pem,ca-key.pem} /opt/pki/kubernetes/kube-apiserver/{kube-apiserver-key.pem,kube-apiserver.pem} $i:/etc/kubernetes/pki
done

Copy the CA certificate to the node hosts

node="node01 node02 node03"
for i in $node;do
   ssh $i "mkdir /etc/kubernetes/pki -p"
   scp /opt/pki/kubernetes/ca/ca.pem $i:/etc/kubernetes/pki
done

Create the front-proxy CA and proxy-client certificate

# Create the directory
mkdir /opt/pki/proxy-client
cd /opt/pki/proxy-client
# Generate the front-proxy CA CSR file
cat > front-proxy-ca-csr.json <<EOF
{
  "CA":{"expiry":"87600h"},
  "CN": "kubernetes",
  "key": {
     "algo": "rsa",
     "size": 2048
  }
}
EOF
# Generate the CA certificate
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare front-proxy-ca
# Generate the client CSR file
cat > front-proxy-client-csr.json <<EOF
{
  "CN": "front-proxy-client",
  "key": {
     "algo": "rsa",
     "size": 2048
  }
}
EOF
# Generate the certificate
cfssl gencert \
-ca=front-proxy-ca.pem \
-ca-key=front-proxy-ca-key.pem  \
-config=../kubernetes/ca/ca-config.json   \
-profile=kubernetes front-proxy-client-csr.json | cfssljson -bare front-proxy-client

Copy the certificates to the nodes

master="master01 master02 master03"
node="node01 node02"
for i in $master;do
   scp /opt/pki/proxy-client/{front-proxy-ca.pem,front-proxy-client.pem,front-proxy-client-key.pem} $i:/etc/kubernetes/pki
done
for i in $node;do    
  scp /opt/pki/proxy-client/front-proxy-ca.pem $i:/etc/kubernetes/pki
done

Create the kube-controller-manager certificate and kubeconfig

# Create the directory
mkdir /opt/pki/kubernetes/kube-controller-manager
cd /opt/pki/kubernetes/kube-controller-manager
# Generate the CSR file
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
EOF
# Generate the certificate
cfssl gencert \
   -ca=../ca/ca.pem \
   -ca-key=../ca/ca-key.pem \
   -config=../ca/ca-config.json \
   -profile=kubernetes \
   kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
# Generate the kubeconfig file
kubectl config set-cluster kubernetes \
    --certificate-authority=../ca/ca.pem \
    --embed-certs=true \
    --server=https://192.168.50.251:8443 \
    --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.pem \
    --client-key=kube-controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig

Copy the kubeconfig to the master nodes

master="master01 master02 master03"
for i in $master;do
   scp /opt/pki/kubernetes/kube-controller-manager/kube-controller-manager.kubeconfig $i:/etc/kubernetes
done

Generate the kube-scheduler certificate and kubeconfig

# Create the directory
mkdir /opt/pki/kubernetes/kube-scheduler
cd /opt/pki/kubernetes/kube-scheduler
# Generate the CSR file
cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF
# Generate the certificate
cfssl gencert \
   -ca=../ca/ca.pem \
   -ca-key=../ca/ca-key.pem \
   -config=../ca/ca-config.json \
   -profile=kubernetes \
   kube-scheduler-csr.json | cfssljson -bare kube-scheduler
# Generate the kubeconfig file
kubectl config set-cluster kubernetes \
    --certificate-authority=../ca/ca.pem \
    --embed-certs=true \
    --server=https://192.168.50.251:8443 \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.pem \
    --client-key=kube-scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

Copy the kubeconfig to the master nodes

master="master01 master02 master03"
for i in $master;do
   scp /opt/pki/kubernetes/kube-scheduler/kube-scheduler.kubeconfig $i:/etc/kubernetes
done

Generate the cluster administrator certificate and kubeconfig

# Create the directory
mkdir /opt/pki/kubernetes/admin
cd /opt/pki/kubernetes/admin
# Generate the CSR file
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
# Generate the certificate
cfssl gencert \
   -ca=../ca/ca.pem \
   -ca-key=../ca/ca-key.pem \
   -config=../ca/ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare admin
# Generate the kubeconfig file
kubectl config set-cluster kubernetes \
    --certificate-authority=../ca/ca.pem \
    --embed-certs=true \
    --server=https://192.168.50.251:8443 \
    --kubeconfig=admin.kubeconfig

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes \
    --user=admin \
    --kubeconfig=admin.kubeconfig

  kubectl config use-context default --kubeconfig=admin.kubeconfig

6. Deploy the etcd cluster

6.1 Install etcd

Install etcd on all etcd nodes.

tar xf etcd-v3.5.3-linux-amd64.tar.gz
cp etcd-v3.5.3-linux-amd64/etcd* /usr/bin/

etcd configuration files

# Create the configuration file
#etcd-1
cat > /etc/etcd/etcd.config.yml <<EOF
name: 'etcd-1'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.50.13:2380'
listen-client-urls: 'https://192.168.50.13:2379,https://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.50.13:2380'
advertise-client-urls: 'https://192.168.50.13:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd-1=https://192.168.50.13:2380,etcd-2=https://192.168.50.14:2380,etcd-3=https://192.168.50.15:2380'
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/etcd/ssl/etcd-server.pem'
  key-file: '/etc/etcd/ssl/etcd-server-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/etcd/ssl/etcd-server.pem'
  key-file: '/etc/etcd/ssl/etcd-server-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

#etcd-2
cat > /etc/etcd/etcd.config.yml <<EOF
name: 'etcd-2'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.50.14:2380'
listen-client-urls: 'https://192.168.50.14:2379,https://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.50.14:2380'
advertise-client-urls: 'https://192.168.50.14:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd-1=https://192.168.50.13:2380,etcd-2=https://192.168.50.14:2380,etcd-3=https://192.168.50.15:2380'
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/etcd/ssl/etcd-server.pem'
  key-file: '/etc/etcd/ssl/etcd-server-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/etcd/ssl/etcd-server.pem'
  key-file: '/etc/etcd/ssl/etcd-server-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

#etcd-3
cat > /etc/etcd/etcd.config.yml <<EOF
name: 'etcd-3'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.50.15:2380'
listen-client-urls: 'https://192.168.50.15:2379,https://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.50.15:2380'
advertise-client-urls: 'https://192.168.50.15:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd-1=https://192.168.50.13:2380,etcd-2=https://192.168.50.14:2380,etcd-3=https://192.168.50.15:2380'
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/etcd/ssl/etcd-server.pem'
  key-file: '/etc/etcd/ssl/etcd-server-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/etcd/ssl/etcd-server.pem'
  key-file: '/etc/etcd/ssl/etcd-server-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

# Create the service unit
cat > /etc/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

# Start the service
systemctl daemon-reload
systemctl enable --now etcd

Configure the etcdctl client

# Set global environment variables
cat > /etc/profile.d/etcdctl.sh <<EOF
#!/bin/bash
export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS=https://127.0.0.1:2379
export ETCDCTL_CACERT=/etc/etcd/ssl/ca.pem
export ETCDCTL_CERT=/etc/etcd/ssl/etcd-client.pem
export ETCDCTL_KEY=/etc/etcd/ssl/etcd-client-key.pem
EOF
# Apply the variables
source /etc/profile
# Verify the cluster membership
etcdctl member list
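Beyond the member list, endpoint health and status can be checked as well (these etcdctl subcommands use the environment variables exported above):

etcdctl endpoint health --cluster
etcdctl endpoint status --cluster -w table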

7. Deploy Kubernetes

7.1 Distribute the binaries

master="master01 master02 master03"
node="node01 node02 node03"
# Distribute the master components (kubectl is added here so the cluster can be managed from the masters)
for i in $master;do
  scp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kube-proxy,kubelet,kubectl} $i:/usr/bin
done
# Distribute the node components
for i in $node;do
  scp kubernetes/server/bin/{kube-proxy,kubelet} $i:/usr/bin
done

7.2 Install kube-apiserver on all master nodes

# Create the ServiceAccount key pair (run on master01)
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
master="master01 master02 master03"
# Distribute the keys to all master nodes
for i in $master;do
  scp /etc/kubernetes/pki/{sa.pub,sa.key} $i:/etc/kubernetes/pki/
done
# Create the service unit (on every master node)
a=`ifconfig eth0 | awk -rn 'NR==2{print $2}'`
cat > /etc/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/kube-apiserver \\
      --v=2  \\
      --logtostderr=true  \\
      --allow-privileged=true  \\
      --bind-address=$a  \\
      --secure-port=6443  \\
      --advertise-address=$a \\
      --service-cluster-ip-range=10.200.0.0/16  \\
      --service-node-port-range=30000-42767  \\
      --etcd-servers=https://192.168.50.13:2379,https://192.168.50.14:2379,https://192.168.50.15:2379 \\
      --etcd-cafile=/etc/etcd/ssl/ca.pem  \\
      --etcd-certfile=/etc/etcd/ssl/etcd-client.pem  \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-client-key.pem  \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
      --tls-cert-file=/etc/kubernetes/pki/kube-apiserver.pem  \\
      --tls-private-key-file=/etc/kubernetes/pki/kube-apiserver-key.pem  \\
      --kubelet-client-certificate=/etc/kubernetes/pki/kube-apiserver.pem  \\
      --kubelet-client-key=/etc/kubernetes/pki/kube-apiserver-key.pem  \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User  

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF
# Start the service
systemctl enable --now kube-apiserver.service
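A quick sanity check against one API server instance; kube-apiserver binds to the node IP set via --bind-address, and /healthz should be readable without a client certificate under the default RBAC bootstrap rules:

# Example on master01
curl -k https://192.168.50.10:6443/healthz
# Expected output: ok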

7.3 Install kube-controller-manager

# Create the service unit
cat > /etc/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=10.100.0.0/16 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
# Start the service
systemctl enable --now kube-controller-manager.service

7.4 Install kube-scheduler

# Create the service unit
cat > /etc/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
# Start the service
systemctl enable --now kube-scheduler.service

7.5 High availability configuration

Install keepalived and haproxy on the LB nodes:

apt install keepalived haproxy -y

keepalived configuration files:

# lb01
root@kube-lb01:/etc/keepalived# cat keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 56  # use the same, unique router id on both LB nodes
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.251 dev eth0 label eth0:0 # the VIP
    }
     track_script {
      chk_apiserver 
  } 
}


# lb02
root@kube-lb02:/etc/keepalived# cat keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 56  # use the same, unique router id on both LB nodes
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.251 dev eth0 label eth0:0 # the VIP
    }
     track_script {
      chk_apiserver 
  } 
}

# Script that checks the haproxy process
root@kube-lb01:/etc/keepalived# cat check_apiserver.sh 
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Configure haproxy:

# lb01,lb02:
root@kube-lb01:/etc/haproxy# cat haproxy.cfg 
global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
    log global
    mode    http
    option  httplog
    option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
listen k8s-6443       # listener for the Kubernetes API VIP
        bind 192.168.50.251:8443
        mode tcp
        server k8s1 192.168.50.10:6443 check inter 3s fall 3 rise 5  # health check every 3 seconds
        server k8s2 192.168.50.11:6443 check inter 3s fall 3 rise 5
        server k8s3 192.168.50.12:6443 check inter 3s fall 3 rise 5

Start the services:

systemctl restart keepalived.service haproxy.service

Note: haproxy may fail to start on lb02 because the VIP is not currently on that node, and haproxy cannot bind to an address the node does not own.
To allow binding to the non-local VIP, add the following kernel setting and start haproxy again:

vim /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1
sysctl -p
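To confirm the load balancer works, check that the VIP sits on the active node and that the 8443 frontend is listening (run on the LB nodes):

ip addr show eth0 | grep 192.168.50.251
ss -tnlp | grep 8443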

7.6 Configure kubectl on master01

# Copy admin.kubeconfig to ~/.kube/config
mkdir /root/.kube/ -p
cp /opt/pki/kubernetes/admin/admin.kubeconfig  /root/.kube/config
# Verify cluster status; the output below means all control-plane components are healthy
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}

7.7 Deploy kubelet

Create the TLS bootstrapping credentials (master01)

# Create the directory
mkdir /opt/pki/kubernetes/kubelet -p
cd /opt/pki/kubernetes/kubelet
# Generate a random token id and secret
a=`head -c 16 /dev/urandom | od -An -t x | tr -d ' ' | head -c6`
b=`head -c 16 /dev/urandom | od -An -t x | tr -d ' ' | head -c16`
# Generate the bootstrap token secret and RBAC bindings
cat > bootstrap.secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-$a
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: $a
  token-secret: $b
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups:  system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF
# Generate the bootstrap kubeconfig
kubectl config set-cluster kubernetes  \
--certificate-authority=../ca/ca.pem   \
--embed-certs=true   \
--server=https://192.168.50.251:8443   \
--kubeconfig=bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user  \
--token=$a.$b \
--kubeconfig=bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes   \
--user=tls-bootstrap-token-user  \
--kubeconfig=bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes  \
--kubeconfig=bootstrap-kubelet.kubeconfig
# Apply the token secret and RBAC objects
kubectl apply -f bootstrap.secret.yaml

Distribute the bootstrap kubeconfig (master and node hosts)

node="master01 master02 master03 node01 node02 node03"
for i in $node;do
  ssh $i "mkdir /etc/kubernetes -p"
  scp  /opt/pki/kubernetes/kubelet/bootstrap-kubelet.kubeconfig $i:/etc/kubernetes
done

Deploy the kubelet component using the docker (cri-dockerd) runtime (required on all master and node hosts)

a=`ifconfig eth0 | awk -rn 'NR==2{print $2}'`
mkdir /etc/systemd/system/kubelet.service.d/ -p
mkdir /etc/kubernetes/manifests/ -p
# Create the service unit
cat > /etc/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/kubelet

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
# Create the drop-in configuration for the service
cat > /etc/systemd/system/kubelet.service.d/10-kubelet.conf <<EOF
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--hostname-override=$a"
Environment="KUBELET_RINTIME=--container-runtime=remote --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node=''"
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_SYSTEM_ARGS \$KUBELET_EXTRA_ARGS \$KUBELET_RUNTIME
EOF

Generate the kubelet configuration file (master and node hosts)

a=`ifconfig eth0 | awk -rn 'NR==2{print $2}'`
# Generate the configuration file
cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: $a
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.200.0.2
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

Start kubelet (master and node hosts)

systemctl enable --now kubelet.service
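If the TLS bootstrap succeeds, each node's certificate request is auto-approved and the node registers itself; a quick check (standard systemctl/kubectl commands, shown as a sketch):

# On each host
systemctl status kubelet --no-pager
# On master01: the bootstrap CSRs should show Approved,Issued and the nodes should appear
kubectl get csr
kubectl get node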

7.8 Deploy kube-proxy

Generate the kube-proxy kubeconfig

# Run on a master node (master01)
# Create the directory
mkdir /opt/pki/kubernetes/kube-proxy/ -p
cd  /opt/pki/kubernetes/kube-proxy/
# Generate the kubeconfig
kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding  system:kube-proxy  --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
cat >kube-proxy-secret.yml<<EOF
apiVersion: v1
kind: Secret
metadata:
  name: kube-proxy
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "kube-proxy"
type: kubernetes.io/service-account-token
EOF
kubectl apply -f kube-proxy-secret.yml
JWT_TOKEN=$(kubectl -n kube-system get secret/kube-proxy \
--output=jsonpath='{.data.token}' | base64 -d)
kubectl config set-cluster kubernetes   \
--certificate-authority=/etc/kubernetes/pki/ca.pem    \
--embed-certs=true    \
--server=https://192.168.50.251:8443    \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kubernetes    \
--token=${JWT_TOKEN}   \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context kubernetes    \
--cluster=kubernetes   \
--user=kubernetes   \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context kubernetes   \
--kubeconfig=kube-proxy.kubeconfig

Copy the kubeconfig to all master and node hosts

node="master01 master02 master03 node01 node02 node03"
for i in $node;do
  scp  /opt/pki/kubernetes/kube-proxy/kube-proxy.kubeconfig $i:/etc/kubernetes
done

Create the service unit (master and node hosts)

cat > /etc/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.conf \
  --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

Generate the configuration file

a=`ifconfig eth0 | awk -rn 'NR==2{print $2}'`
cat > /etc/kubernetes/kube-proxy.conf <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: $a
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.100.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: "$a"
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

Start the service

systemctl enable --now kube-proxy.service

Check:

root@master01:~/apps/k8s-ha-install/calico# kubectl get node
NAME            STATUS     ROLES    AGE   VERSION
192.168.50.10   NotReady   <none>   81m   v1.24.7
192.168.50.11   NotReady   <none>   81m   v1.24.7
192.168.50.12   NotReady   <none>   81m   v1.24.7
192.168.50.16   NotReady   <none>   55m   v1.24.7
192.168.50.17   NotReady   <none>   55m   v1.24.7
192.168.50.18   NotReady   <none>   55m   v1.24.7
# Note: the nodes stay NotReady because the network plugin has not been deployed yet.
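Since kube-proxy runs in IPVS mode here, the generated virtual servers can also be inspected; ipvsadm is an extra package and is assumed not to be installed yet:

apt install ipvsadm -y
ipvsadm -Ln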

8. Install additional components

8.1 Install the Calico network plugin

Manifest download: https://docs.projectcalico.org/v3.19/manifests/calico-typha.yaml

mkdir /opt/k8s/calico -p
# Change the image addresses in the manifest (harbor must already be deployed and its certificate distributed to the nodes)
root@master01:/opt/k8s/calico# cat calico.yaml |grep image
          image: harbor.whyxx.net/baseimages/cni:v3.24.5
          imagePullPolicy: IfNotPresent
          image: harbor.whyxx.net/baseimages/cni:v3.24.5
          imagePullPolicy: IfNotPresent
          image: harbor.whyxx.net/baseimages/node:v3.24.5
          imagePullPolicy: IfNotPresent
          image: harbor.whyxx.net/baseimages/node:v3.24.5
          imagePullPolicy: IfNotPresent
          image: harbor.whyxx.net/baseimages/kube-controllers:v3.24.5
          imagePullPolicy: IfNotPresent
# Change the IPv4 pool to the pod CIDR
            - name: CALICO_IPV4POOL_CIDR
              value: "10.100.0.0/16"

# Deploy
kubectl apply -f calico.yaml
# Verify
root@master01:/opt/k8s/calico# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS             RESTARTS   AGE
kube-system   calico-kube-controllers-77d64755cf-bszr8   1/1     Running            0          20h
kube-system   calico-node-4bd2m                          1/1     Running            0          20h
kube-system   calico-node-9dztx                          1/1     Running            0          20h
kube-system   calico-node-9vrvt                          1/1     Running            0          20h
kube-system   calico-node-fdc8t                          1/1     Running            0          20h
kube-system   calico-node-kt5ms                          1/1     Running            0          20h
kube-system   calico-node-zg5pt                          1/1     Running            0          20h

root@master01:/opt/k8s/calico# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.50.10   Ready    <none>   42h   v1.24.7
192.168.50.11   Ready    <none>   42h   v1.24.7
192.168.50.12   Ready    <none>   42h   v1.24.7
192.168.50.16   Ready    <none>   42h   v1.24.7
192.168.50.17   Ready    <none>   42h   v1.24.7
192.168.50.18   Ready    <none>   42h   v1.24.7

9. Install the calicoctl client tool

Download page: https://github.com/projectcalico/calicoctl/releases

# Create the configuration file
mkdir /etc/calico -p
cat >/etc/calico/calicoctl.cfg <<EOF
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"
EOF
# Verify
root@master01:~/apps/k8s-ha-install/metrics-server# calicoctl node status
Calico process is running.

IPv4 BGP status
+---------------+-------------------+-------+------------+-------------+
| PEER ADDRESS  |     PEER TYPE     | STATE |   SINCE    |    INFO     |
+---------------+-------------------+-------+------------+-------------+
| 192.168.50.12 | node-to-node mesh | up    | 2022-12-06 | Established |
| 192.168.50.11 | node-to-node mesh | up    | 2022-12-06 | Established |
| 192.168.50.16 | node-to-node mesh | up    | 2022-12-06 | Established |
| 192.168.50.17 | node-to-node mesh | up    | 2022-12-06 | Established |
+---------------+-------------------+-------+------------+-------------+

IPv6 BGP status
No IPv6 peers found.

10. Install CoreDNS

Manifest download: https://github.com/coredns/deployment/blob/master/kubernetes/coredns.yaml.sed

# Create the directory
mkdir /opt/k8s/coredns -p
# Change the image address
root@master01:/opt/k8s/coredns# cat coredns.yaml |grep image
        image: harbor.whyxx.net/baseimages/coredns:1.9.4
        imagePullPolicy: IfNotPresent
# Adjust the Corefile and Service configuration
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {  # set the cluster domain here
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 114.114.114.114 {   # upstream DNS resolver
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
  clusterIP: 10.200.0.2  # change the clusterIP to this address (matches the kubelet clusterDNS setting)
  
# Deploy
kubectl apply -f coredns.yaml
# Verify
root@master01:~/apps/k8s-ha-install/metrics-server# kubectl get pod -n kube-system | grep coredns
coredns-577db899cb-mgxhb                   1/1     Running   0          28h

11. Install metrics-server

Download page: https://github.com/kubernetes-sigs/metrics-server/

mkdir /opt/k8s/metrics-server
cd /opt/k8s/metrics-server
# Copy the front-proxy CA certificate to the nodes
node="master01 master02 master03 node01 node02"
for i in $node;do
   scp /opt/pki/proxy-client/front-proxy-ca.pem $i:/etc/kubernetes/pki/
done
# Change the image address
root@master01:/opt/k8s/metrics-server# cat components.yaml |grep image
        image: harbor.whyxx.net/baseimages/metrics-server:v0.6.1
        imagePullPolicy: IfNotPresent
# Configuration that needs to be adjusted
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem 
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra- 
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - mountPath: /etc/kubernetes/pki
          name: ca-ssl
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki
# Deploy
kubectl apply -f components.yaml
# Verify
root@master01:/opt/k8s/metrics-server# kubectl top node
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
192.168.50.10   439m         21%    1163Mi          62%       
192.168.50.11   426m         21%    1116Mi          60%       
192.168.50.12   421m         21%    1136Mi          61%       
192.168.50.16   328m         16%    935Mi           50%       
192.168.50.17   380m         19%    899Mi           48%       
192.168.50.18   375m         18%    796Mi           42% 

12. Label the nodes with their roles

root@master01:/opt/k8s/metrics-server# kubectl label node 192.168.50.10  node-role.kubernetes.io/master=
node/192.168.50.10 labeled
root@master01:/opt/k8s/metrics-server# kubectl label node 192.168.50.11  node-role.kubernetes.io/master=
node/192.168.50.11 labeled
root@master01:/opt/k8s/metrics-server# kubectl label node 192.168.50.12  node-role.kubernetes.io/master=
node/192.168.50.12 labeled
root@master01:/opt/k8s/metrics-server# kubectl label node 192.168.50.16  node-role.kubernetes.io/node=
node/192.168.50.16 labeled
root@master01:/opt/k8s/metrics-server# kubectl label node 192.168.50.17  node-role.kubernetes.io/node=
node/192.168.50.17 labeled
root@master01:/opt/k8s/metrics-server# kubectl label node 192.168.50.18  node-role.kubernetes.io/node=
node/192.168.50.18 labeled
# Check
root@master01:/opt/k8s/metrics-server# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.50.10   Ready    master   44h   v1.24.7
192.168.50.11   Ready    master   44h   v1.24.7
192.168.50.12   Ready    master   44h   v1.24.7
192.168.50.16   Ready    node     44h   v1.24.7
192.168.50.17   Ready    node     43h   v1.24.7
192.168.50.18   Ready    node     44h   v1.24.7
# Show the existing labels on the nodes
root@master01:/opt/k8s/metrics-server# kubectl get nodes --show-labels
NAME            STATUS   ROLES    AGE   VERSION   LABELS
192.168.50.10   Ready    master   44h   v1.24.7   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.50.10,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.kubernetes.io/node=
192.168.50.11   Ready    master   44h   v1.24.7   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.50.11,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.kubernetes.io/node=
192.168.50.12   Ready    master   44h   v1.24.7   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.50.12,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.kubernetes.io/node=
192.168.50.16   Ready    node     44h   v1.24.7   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.50.16,kubernetes.io/os=linux,node-role.kubernetes.io/node=,node.kubernetes.io/node=
192.168.50.17   Ready    node     43h   v1.24.7   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.50.17,kubernetes.io/os=linux,node-role.kubernetes.io/node=,node.kubernetes.io/node=
192.168.50.18   Ready    node     44h   v1.24.7   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.50.18,kubernetes.io/os=linux,node-role.kubernetes.io/node=,node.kubernetes.io/node=

13. Deploy Kuboard

Official documentation: https://www.kuboard.cn/install/v3/install.html#%E5%AE%89%E8%A3%85%E6%96%B9%E5%BC%8F
Here Kuboard runs as a static pod, and the cluster is imported with a kubeconfig.
Installation:
On a Kubernetes master node, run the following two commands and follow the prompts to complete the Kuboard installation.

# It is best to pull the required images in advance
curl -fsSL https://addons.kuboard.cn/kuboard/kuboard-static-pod.sh -o kuboard.sh
sh kuboard.sh
# Verify the pod is running
root@master01:~/apps# kubectl get pod -n kuboard
NAME                       READY   STATUS    RESTARTS   AGE
kuboard-v3-192.168.50.10   1/1     Running   0          31s

Access Kuboard v3
Open http://your-host-ip:80 in a browser to reach the Kuboard v3.x UI. Log in with:
Username: admin
Password: Kuboard123

References:
https://www.zhangzhuo.ltd/articles/2022/01/09/1641717241819.html
https://feliks.cn/730/
