Before you begin
Host machine
Machine | OS | Chip | Architecture |
---|---|---|---|
Mac Pro | macOS 13.4 | M2 Pro | arm64 |
Virtual machines
Hypervisor | OS | Architecture |
---|---|---|
VMware Fusion | Ubuntu 22.04 | AArch64 |
- Note: every piece of software installed on the Ubuntu VMs must be the Linux arm64 build.
Before starting, enable root login on all the Ubuntu VMs; it makes installing software more convenient.
I. Environment preparation
1. Node layout
Hostname | IP address | Spec | Components |
---|---|---|---|
k8s-master01 | 192.168.91.136 | 2c4g | etcd, api-server, controller-manager, scheduler, kubelet, kube-proxy, haproxy, keepalived |
k8s-master02 | 192.168.91.137 | 1c4g | etcd, api-server, controller-manager, scheduler, kubelet, kube-proxy, haproxy, keepalived |
k8s-master03 | 192.168.91.138 | 1c4g | etcd, api-server, controller-manager, scheduler, kubelet, kube-proxy, haproxy, keepalived |
k8s-node01 | 192.168.91.139 | 1c4g | kubelet, kube-proxy |
k8s-node02 | 192.168.91.140 | 1c4g | kubelet, kube-proxy |
2. Software versions and Pod/Service CIDRs
Adjust the Pod and Service CIDRs below to match your environment.
Item | Version / Value |
---|---|
Ubuntu | 22.04 |
Kubernetes | v1.27.4 |
containerd | v1.7.2 |
Host network CIDR | 192.168.91.0/24 |
Pod CIDR | 172.16.0.0/16 |
Service CIDR | 10.96.0.0/16 |
Virtual IP (VIP) | 192.168.91.100 |
3. System settings
- Run on every node.
Define the environment variables:
Append the IP addresses to ~/.bashrc so they persist across sessions and you cannot mistype an address mid-deployment. Run this block only once, since each command appends to the file.
cd ~
echo "K8S_MASTER01='192.168.91.136'" >> ~/.bashrc
echo "K8S_MASTER02='192.168.91.137'" >> ~/.bashrc
echo "K8S_MASTER03='192.168.91.138'" >> ~/.bashrc
echo "K8S_NODE01='192.168.91.139'" >> ~/.bashrc
echo "K8S_NODE02='192.168.91.140'" >> ~/.bashrc
echo "LOCALHOST=`hostname -I |awk '{print $1}'`" >> ~/.bashrc
echo "K8S_VIP='192.168.91.100'" >> ~/.bashrc
source ~/.bashrc
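A quick sanity check that the variables resolved to the values planned above:
echo "master01=$K8S_MASTER01 master02=$K8S_MASTER02 master03=$K8S_MASTER03"
echo "node01=$K8S_NODE01 node02=$K8S_NODE02 vip=$K8S_VIP local=$LOCALHOST"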
4. Set the hostname (run the matching command on each node)
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
5. Install the essential tools
apt install net-tools nfs-kernel-server curl vim git lvm2 telnet htop jq lrzsz tree bash-completion wget make -y
vim ~/.bashrc
# uncomment this block
if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
. /etc/bash_completion
fi
source ~/.bashrc
6. Hostname resolution
cat >> /etc/hosts << EOF
$K8S_MASTER01 k8s-master01
$K8S_MASTER02 k8s-master02
$K8S_MASTER03 k8s-master03
$K8S_NODE01 k8s-node01
$K8S_NODE02 k8s-node02
EOF
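getent reads /etc/hosts directly, so it is a simple way to confirm the new entries resolve:
getent hosts k8s-master01 k8s-node01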
7. Disable swap on all nodes
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0
cat /etc/fstab | grep swap
Check that the swap entry is now commented out:
cat /etc/fstab
Result
root@k8s-master01:~/Desktop# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/nvme0n1p2 during installation
UUID=6a62dbb4-12bc-4f75-a233-66240703c49f / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/nvme0n1p1 during installation
UUID=F903-3FFB /boot/efi vfat umask=0077 0 1
#/swapfile none swap sw 0 0
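To confirm swap is also off at runtime, not just in fstab, either of these standard checks works; swapon prints nothing when no swap is active:
swapon --show
free -h | grep -i swap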
8. Time synchronization on all nodes
This is optional here; the timezone was already selected during the Ubuntu 22.04 install.
sudo timedatectl set-timezone Asia/Shanghai
<img src="img/time_zone.png">
9. Configure ulimits on all nodes
ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
10. Configure passwordless SSH
Only k8s-master01 needs this. Replace mypassword below with the real root password.
apt install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02"
export SSHPASS=mypassword
for HOST in $IP;do
sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
echo $HOST
done
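Before continuing, verify the passwordless login actually works; BatchMode makes ssh fail instead of prompting for a password:
for HOST in $IP;do ssh -o BatchMode=yes $HOST hostname;done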
11. Install IPVS
apt install ipvsadm ipset sysstat conntrack -y
cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl restart systemd-modules-load.service
systemctl enable ipvsadm
lsmod | grep -e ip_vs -e nf_conntrack
Expected output:
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 180224 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 176128 1 ip_vs
nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 2 nf_conntrack,ip_vs
12. Tune kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 0
EOF
sysctl --system
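Spot-check a couple of the values to confirm they took effect:
sysctl net.ipv4.ip_forward net.core.somaxconn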
13. DNS resolvers
apt install -y resolvconf
systemctl enable resolvconf.service
echo -e "nameserver 8.8.8.8\nnameserver 8.8.4.4" >> /etc/resolvconf/resolv.conf.d/head
systemctl start resolvconf.service
sudo systemctl restart resolvconf.service
sudo systemctl restart systemd-resolved.service
sudo systemctl status resolvconf.service
echo -e "nameserver 8.8.8.8\nnameserver 8.8.4.4" >> /etc/resolv.conf
II. Installing the Kubernetes components
1. Install containerd as the runtime on all nodes
1.1 Prepare the module-load directory
chmod 777 /etc/modules-load.d/
1.2 Declare the kernel modules containerd needs
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
1.3 Load the kernel modules
sudo modprobe overlay
sudo modprobe br_netfilter
1.4 Set the sysctls containerd needs
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# apply the sysctls
sysctl --system
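Confirm both modules are actually loaded before moving on:
lsmod | grep -e overlay -e br_netfilter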
1.5 Download containerd
The Ubuntu 22.04 VMs are arm64, so choose the arm64 build of containerd; other builds are listed on the releases page.
Docs:
https://kubernetes.io/zh-cn/docs/setup/production-environment/container-runtimes/
https://github.com/containerd/containerd/blob/main/docs/getting-started.md
Download the cri-containerd bundle (v1.7.2 was the latest at the time of writing); it ships the systemd unit plus the containerd, ctr, and crictl binaries.
wget https://github.com/containerd/containerd/releases/download/v1.7.2/cri-containerd-1.7.2-linux-arm64.tar.gz
#unpack over /
tar -xzf cri-containerd-*-linux-arm64.tar.gz -C /
#create the config directory
sudo mkdir -p /etc/containerd && sudo chmod 777 /etc/containerd
1.6 Generate the containerd configuration
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
# switch the cgroup driver to systemd
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup
# point the sandbox image at the Aliyun mirror
sed -i "s#registry.k8s.io#registry.aliyuncs.com/google_containers#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox_image
1.7 Configure the crictl client connection
sudo bash -c "cat > /etc/crictl.yaml" <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
#adjust the file permissions
chmod 777 /etc/crictl.yaml
1.8 Start containerd
Note: if the image mirror is unreachable, image pulls later on will fail.
systemctl daemon-reload && systemctl enable --now containerd
#systemctl restart containerd
# check containerd's status
systemctl status containerd
Result
root@k8s-master01:~/Desktop# systemctl status containerd
● containerd.service - containerd container runtime
Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2023-07-27 13:10:29 CST; 4h 14min ago
Docs: https://containerd.io
Process: 31533 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 31534 (containerd)
Tasks: 62
Memory: 314.5M
CPU: 2min 53.857s
CGroup: /system.slice/containerd.service
├─31322 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 1b79ec9562212a2e469393e97e7ce77f43bfee7486f7b7b81f1e86d166bc5e3c -address /run/container>
├─31534 /usr/local/bin/containerd
├─39501 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id ec70b71d21c9e26db021c98a0f08e16c4f780890442b48c33e7b47d3939c5232 -address /run/container>
├─52477 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3a69ef7cc5a88a3dac30d00a28c594e526484d0de7e92574a41c15a6c0d18d6b -address /run/container>
└─75016 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5883e5432a9fa6270f659461702256ca4624b544be0bef833f14a53a8767b8ce -address /run/container>
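With /etc/crictl.yaml from step 1.7 in place, crictl and ctr can confirm the CRI endpoint answers (a quick sanity check; the versions printed are whatever you installed):
crictl version
ctr version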
2. Installing Kubernetes and etcd
etcd release: https://github.com/etcd-io/etcd/releases/download/v3.5.9/etcd-v3.5.9-linux-arm64.tar.gz
2.1 Download and install the Kubernetes binaries
# Kubernetes binary components
K8S_VERSION=v1.27.4
K8s_COMPONENT='kubelet kubectl kube-apiserver kube-controller-manager kube-scheduler kube-proxy'
# loop over the component list, downloading each binary and its checksum file
for component in $K8s_COMPONENT;
do
echo "downloading ${component}";
curl -LO "https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/arm64/${component}"
# download the checksum file
curl -LO "https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/arm64/${component}.sha256"
done
# verify each checksum
for component in $K8s_COMPONENT;
do
# a valid file prints "<component>: OK"
echo "$(cat ${component}.sha256) ${component}" | sha256sum --check
done
# install the binaries into /usr/local/bin
sudo install -o root -g root -m 0755 kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} /usr/local/bin
2.2 Download and install etcd
#etcd binary package
ETCD_VER=v3.5.9
# choose either URL
HUAWEI_URL=https://mirrors.huaweicloud.com/etcd
GITHUB_URL=https://github.com/etcd-io/etcd/releases/download
DOWNLOAD_URL=${HUAWEI_URL}
rm -f /tmp/etcd-${ETCD_VER}-linux-arm64.tar.gz
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-arm64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-arm64.tar.gz
tar xzvf /tmp/etcd-${ETCD_VER}-linux-arm64.tar.gz -C /usr/local/bin --strip-components=1
rm -f /tmp/etcd-${ETCD_VER}-linux-arm64.tar.gz
# check the versions
kubelet --version
# Kubernetes v1.27.4
etcdctl version
# etcdctl version: 3.5.9
# API version: 3.5
2.3 Copy the binaries to the other nodes
etcd is deployed only on the master nodes.
Master='k8s-master02 k8s-master03'
Work='k8s-node01 k8s-node02'
for NODE in $Master;
do
echo "正在传输的主机地址:$NODE ...";
scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/;
scp /usr/local/bin/etcd* $NODE:/usr/local/bin/;
done
for NODE in $Work;
do
echo "正在传输的主机地址:$NODE ...";
scp /usr/local/bin/kube{let,-proxy,ctl} $NODE:/usr/local/bin/ ;
done
# enable kubectl tab completion
apt install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
3. Create the certificate request files
mkdir -p /root/k8s/ssl && cd /root/k8s/ssl
cat > admin-csr.json << EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Shanghai",
"L": "Shanghai",
"O": "system:masters",
"OU": "Kubernetes-manual"
}
]
}
EOF
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "876000h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "876000h"
}
}
}
}
EOF
cat > etcd-ca-csr.json << EOF
{
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Shanghai",
"L": "Shanghai",
"O": "etcd",
"OU": "Etcd Security"
}
],
"ca": {
"expiry": "876000h"
}
}
EOF
cat > front-proxy-ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"ca": {
"expiry": "876000h"
}
}
EOF
cat > kubelet-csr.json << EOF
{
"CN": "system:node:\$NODE",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shanghai",
"ST": "Shanghai",
"O": "system:nodes",
"OU": "Kubernetes-manual"
}
]
}
EOF
cat > manager-csr.json << EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Shanghai",
"L": "Shanghai",
"O": "system:kube-controller-manager",
"OU": "Kubernetes-manual"
}
]
}
EOF
cat > apiserver-csr.json << EOF
{
"CN": "kube-apiserver",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Shanghai",
"L": "Shanghai",
"O": "Kubernetes",
"OU": "Kubernetes-manual"
}
]
}
EOF
cat > ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Shanghai",
"L": "Shanghai",
"O": "Kubernetes",
"OU": "Kubernetes-manual"
}
],
"ca": {
"expiry": "876000h"
}
}
EOF
cat > etcd-csr.json << EOF
{
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Shanghai",
"L": "Shanghai",
"O": "etcd",
"OU": "Etcd Security"
}
]
}
EOF
cat > front-proxy-client-csr.json << EOF
{
"CN": "front-proxy-client",
"key": {
"algo": "rsa",
"size": 2048
}
}
EOF
cat > kube-proxy-csr.json << EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Shanghai",
"L": "Shanghai",
"O": "system:kube-proxy",
"OU": "Kubernetes-manual"
}
]
}
EOF
cat > scheduler-csr.json << EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Shanghai",
"L": "Shanghai",
"O": "system:kube-scheduler",
"OU": "Kubernetes-manual"
}
]
}
EOF
cd .. ; mkdir -p bootstrap ; cd bootstrap
cat > bootstrap.secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
name: bootstrap-token-c8ad9c
namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
description: "The default bootstrap token generated by 'kubelet '."
token-id: c8ad9c
token-secret: 2e4d610cf3e7426e
usage-bootstrap-authentication: "true"
usage-bootstrap-signing: "true"
auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubelet-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-autoapprove-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-autoapprove-certificate-rotation
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
- ""
resources:
- nodes/proxy
- nodes/stats
- nodes/log
- nodes/spec
- nodes/metrics
verbs:
- "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:kube-apiserver
namespace: ""
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: kube-apiserver
EOF
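If you prefer a token other than the default c8ad9c.2e4d610cf3e7426e, the token-id must match [a-z0-9]{6} and the token-secret [a-z0-9]{16}. A sketch for generating a fresh pair (remember to update both this file and the --token used in section VII):
TOKEN_ID=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6)    # 6-char id, assumption: replace defaults consistently
TOKEN_SECRET=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)
echo "$TOKEN_ID.$TOKEN_SECRET"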
cd .. ; mkdir -p coredns && cd coredns
cat > coredns.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/name: "CoreDNS"
spec:
# replicas: not specified here:
# 1. Default is 1.
# 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
spec:
priorityClassName: system-cluster-critical
serviceAccountName: coredns
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
nodeSelector:
kubernetes.io/os: linux
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values: ["kube-dns"]
topologyKey: kubernetes.io/hostname
containers:
- name: coredns
        image: registry.cn-shanghai.aliyuncs.com/dotbalo/coredns:1.8.6
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
scheme: HTTP
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.96.0.10
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
protocol: TCP
EOF
III. Generating the certificates
Download the cfssl certificate tooling on master01:
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl_1.6.4_linux_arm64" -O /usr/local/bin/cfssl
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssljson_1.6.4_linux_arm64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl*
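Verify the tools are on PATH:
cfssl version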
1. Generate the etcd certificates
- Run on the master nodes.
1.1 Create the certificate directory on all master nodes
mkdir /etc/etcd/ssl -p
1.2 Generate the etcd certificates on master01
mkdir -p /root/k8s/ssl && cd /root/k8s/ssl
# generate the etcd CA plus the etcd server certificate and key (if you may scale out later, reserve extra IPs in -hostname below)
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
cfssl gencert \
-ca=/etc/etcd/ssl/etcd-ca.pem \
-ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,k8s-node01,k8s-node02,$K8S_MASTER01,$K8S_MASTER02,$K8S_MASTER03,$K8S_NODE01,$K8S_NODE02 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
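The SANs baked into the certificate can be inspected with openssl, which is useful if you later need to confirm a reserved IP made it in:
openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'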
1.3 Copy the certificates to the other etcd nodes
Master='k8s-master02 k8s-master03'
for NODE in $Master;
do
ssh $NODE "mkdir -p /etc/etcd/ssl";
for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem;
do
scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE};
done;
done
2. Generate the Kubernetes certificates
- Run on the master nodes.
2.1 Create the certificate directory on all k8s nodes
mkdir -p /etc/kubernetes/pki
2.2 Generate the Kubernetes certificates on master01
cd /root/k8s/ssl
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
# generate the apiserver certificate; the extra IPs are spares reserved for future nodes
# 10.96.0.1 is the first address of the Service CIDR, and 192.168.91.100 is the HA VIP
# the spare entries are optional: keep or remove them as needed
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-hostname=10.96.0.1,$K8S_VIP,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,$K8S_MASTER01,$K8S_MASTER02,$K8S_MASTER03,$K8S_NODE01,$K8S_NODE02,192.168.91.50,192.168.91.51,192.168.91.52,192.168.91.53,192.168.91.54,10.96.0.40,10.96.0.41 \
-profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
ls /etc/kubernetes/pki/apiserver*
2.3 Generate the apiserver aggregation certificate on master01
cd /root/k8s/ssl
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
# this prints a warning, which can be ignored
cfssl gencert \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
2.4 Generate the kube-proxy certificate on master01
cd /root/k8s/ssl
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://$K8S_VIP:8443 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
--client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context kube-proxy@kubernetes \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context kube-proxy@kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
2.5 Generate the controller-manager, scheduler, and admin certificates on master01
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
# set the cluster entry
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://$K8S_VIP:8443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# set the context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# set the user entry
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# set the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://$K8S_VIP:8443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://$K8S_VIP:8443 \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-credentials kubernetes-admin \
--client-certificate=/etc/kubernetes/pki/admin.pem \
--client-key=/etc/kubernetes/pki/admin-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
2.6 Create the ServiceAccount key pair on master01
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
2.7 Copy the certificates from master01 to the other master nodes
for NODE in k8s-master02 k8s-master03;
do
for FILE in $(ls /etc/kubernetes/pki | grep -v etcd);
do
ssh $NODE "mkdir -p /etc/kubernetes/pki"
scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
done;
for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig kube-proxy.kubeconfig;
do
ssh $NODE "mkdir -p /etc/kubernetes"
scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
done;
done
2.8 Check the certificates
ls /etc/kubernetes/pki/
ls /etc/kubernetes/pki/ | wc -l
Result: 26 files in total
root@k8s-master01:~/k8s/ssl# ls /etc/kubernetes/pki/
admin.csr ca-key.pem front-proxy-ca.pem sa.key
admin-key.pem ca.pem front-proxy-client.csr sa.pub
admin.pem controller-manager.csr front-proxy-client-key.pem scheduler.csr
apiserver.csr controller-manager-key.pem front-proxy-client.pem scheduler-key.pem
apiserver-key.pem controller-manager.pem kube-proxy.csr scheduler.pem
apiserver.pem front-proxy-ca.csr kube-proxy-key.pem
ca.csr front-proxy-ca-key.pem kube-proxy.pem
root@k8s-master01:~/k8s/ssl# ls /etc/kubernetes/pki/ | wc -l
26
IV. System component configuration (etcd)
1. Configuration files
- Run on all master nodes.
1.1 Set the node-specific environment variable:
cd ~
if [ "$LOCALHOST" == "$K8S_MASTER01" ];then
echo "etcd='k8s-etcd01'" >> ~/.bashrc
elif [ "$LOCALHOST" == "$K8S_MASTER02" ];then
echo "etcd='k8s-etcd02'" >> ~/.bashrc
elif [ $LOCALHOST == $K8S_MASTER03 ];then
echo "etcd='k8s-etcd03'" >> ~/.bashrc
fi
source ~/.bashrc
1.2 Write the etcd configuration file:
cat > /etc/etcd/etcd.config.yml << EOF
name: $etcd
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: "https://$LOCALHOST:2380"
listen-client-urls: "https://$LOCALHOST:2379,http://127.0.0.1:2379"
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: "https://$LOCALHOST:2380"
advertise-client-urls: "https://$LOCALHOST:2379"
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: "k8s-etcd01=https://$K8S_MASTER01:2380,k8s-etcd02=https://$K8S_MASTER02:2380,k8s-etcd03=https://$K8S_MASTER03:2380"
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
2. Create the services (run on all master nodes)
2.1 Create etcd.service and start it
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF
2.2 Create the etcd certificate directory
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd
systemctl status etcd.service
2.3 Check the etcd cluster status
export ETCDCTL_API=3
etcdctl --endpoints="$K8S_MASTER01:2379,$K8S_MASTER02:2379,$K8S_MASTER03:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
Result
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.91.136:2379 | 4b1a15de75e46030 | 3.5.9 | 20 kB | true | false | 2 | 9 | 9 | |
| 192.168.91.137:2379 | 7396ece7a1371606 | 3.5.9 | 25 kB | false | false | 2 | 9 | 9 | |
| 192.168.91.138:2379 | bfa20dc0da3b04ab | 3.5.9 | 25 kB | false | false | 2 | 9 | 9 | |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
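Besides endpoint status, endpoint health is a lighter-weight liveness check:
etcdctl --endpoints="$K8S_MASTER01:2379,$K8S_MASTER02:2379,$K8S_MASTER03:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health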
V. High availability
1. Run on master01, master02, and master03
1.1 Install keepalived and haproxy
apt -y install keepalived haproxy
1.2 Update the haproxy configuration (identical on all three nodes)
cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
cat > /etc/haproxy/haproxy.cfg << EOF
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s
defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s
frontend monitor-in
bind *:33305
mode http
option httplog
monitor-uri /monitor
frontend k8s-master
bind 0.0.0.0:8443
bind 127.0.0.1:8443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-master
backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server k8s-master01 $K8S_MASTER01:6443 check
server k8s-master02 $K8S_MASTER02:6443 check
server k8s-master03 $K8S_MASTER03:6443 check
EOF
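haproxy can validate the file before you restart anything:
haproxy -c -f /etc/haproxy/haproxy.cfg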
1.3 Configure keepalived
- Run on all master nodes.
Note: keepalived runs in non-preemptive mode here; the PRIORITY variable set below gives each node its priority.
if [ $LOCALHOST == $K8S_MASTER01 ];then
export PRIORITY="100"
elif [ $LOCALHOST == $K8S_MASTER02 ];then
export PRIORITY="80"
elif [ $LOCALHOST == $K8S_MASTER03 ];then
export PRIORITY="50"
fi
#cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
    state MASTER #master01 is the primary node
    interface ens160 #replace with your own NIC name
    mcast_src_ip $LOCALHOST #this node's IP
virtual_router_id 51
priority $PRIORITY
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
    virtual_ipaddress {
        $K8S_VIP #the virtual IP
    }
    track_script {
        chk_apiserver
    }
}
EOF
1.4 Health-check script
cat > /etc/keepalived/check_apiserver.sh << EOF
#!/bin/bash
err=0
for k in \$(seq 1 3)
do
check_code=\$(pgrep haproxy)
if [[ \$check_code == "" ]]; then
err=\$(expr \$err + 1)
sleep 1
continue
else
err=0
break
fi
done
if [[ \$err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
EOF
# make the script executable
chmod +x /etc/keepalived/check_apiserver.sh
1.5 Start the services
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
systemctl restart haproxy.service && netstat -lntup | grep 443
1.6 Test failover
ping 192.168.91.100
# shut down the active node and watch the VIP move to a standby
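To see which node currently holds the VIP (assuming the ens160 interface name used in the keepalived config):
ip addr show ens160 | grep 192.168.91.100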
VI. Configuring and starting the Kubernetes services
1. kube-apiserver
1.1 Prepare directories
Create the working directories on all k8s nodes, as sketched below.
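A reasonable set, covering the paths used later in this guide (kubelet's staticPodPath, its data directory, and a log directory); this list is an assumption, adjust to your layout:
mkdir -p /etc/kubernetes/manifests /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes  # assumed directory set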
1.2 Configure all master nodes
--service-cluster-ip-range=10.96.0.0/16 is the Service CIDR; change it if yours differs.
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--v=2 \\
--allow-privileged=true \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=$LOCALHOST \\
--service-cluster-ip-range=10.96.0.0/16 \\
--service-node-port-range=30000-32767 \\
--etcd-servers=https://$K8S_MASTER01:2379,https://$K8S_MASTER02:2379,https://$K8S_MASTER03:2379 \\
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
--authorization-mode=Node,RBAC \\
--enable-bootstrap-token-auth=true \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
--requestheader-allowed-names=aggregator,front-proxy-client \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
1.3 Start the apiserver
- On all master nodes.
systemctl daemon-reload && systemctl enable --now kube-apiserver
#systemctl restart kube-apiserver
systemctl status kube-apiserver
netstat -lntup | grep 6443
#view logs: journalctl -u kube-apiserver
Test haproxy's 8443 frontend:
telnet 192.168.91.100 8443
Result
Trying 192.168.91.100...
Connected to 192.168.91.100.
Escape character is '^]'.
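The apiserver also serves /healthz to unauthenticated clients (via the built-in system:public-info-viewer role), so curl through the VIP gives an end-to-end check; -k skips verification of the serving certificate:
curl -k https://$K8S_VIP:8443/healthz
# expected output: ok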
2. Configure the kube-controller-manager service
- Identical configuration on all master nodes.
2.1 Add the service file
172.16.0.0/16 is the Pod CIDR; set your own network as needed.
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--v=2 \\
--bind-address=127.0.0.1 \\
--root-ca-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
--leader-elect=true \\
--use-service-account-credentials=true \\
--node-monitor-grace-period=40s \\
--node-monitor-period=5s \\
--controllers=*,bootstrapsigner,tokencleaner \\
--allocate-node-cidrs=true \\
--cluster-cidr=172.16.0.0/16 \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--node-cidr-mask-size=24
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
2.2 Start kube-controller-manager and check its status
systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl restart kube-controller-manager
systemctl status kube-controller-manager
3. Configure the kube-scheduler service
- Identical configuration on all master nodes.
3.1 Add the service file
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--v=2 \\
--bind-address=127.0.0.1 \\
--leader-elect=true \\
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
3.2 Start the service and check its status
systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler
VII. TLS Bootstrapping
1. Configure on master01
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://$K8S_VIP:8443 \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes \
--user=tls-bootstrap-token-user \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
# the token comes from bootstrap.secret.yaml; change it there if you need a different one
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02;
do
ssh $NODE "mkdir -p /root/.kube"
scp /root/.kube/config $NODE:/root/.kube
done
2. Check the cluster status; continue only if everything is healthy
kubectl get cs
Result
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy
etcd-1 Healthy
etcd-2 Healthy
cd /root/k8s/bootstrap
kubectl create -f bootstrap.secret.yaml
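Confirm the bootstrap token secret landed:
kubectl get secret -n kube-system bootstrap-token-c8ad9c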
VIII. Node configuration
1. Copy the certificates from master01 to the other nodes
cd /etc/kubernetes/
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02;
do
ssh $NODE mkdir -p /etc/kubernetes/pki;
for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig;
do
scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE};
done;
done
2. kubelet configuration
2.1 kubelet service
containerd is used as the runtime.
- Run on all k8s nodes (the masters run kubelet too).
# configure the kubelet service on all k8s nodes
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet-conf.yml \\
--runtime-request-timeout=15m \\
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
--cgroup-driver=systemd \\
--node-labels=node.kubernetes.io/node=
# --feature-gates=IPv6DualStack=true
[Install]
WantedBy=multi-user.target
EOF
2.2 Create the kubelet configuration file on all k8s nodes
cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
imagefs.available: 15%
memory.available: 100Mi
nodefs.available: 10%
nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF
2.3 Start and check kubelet
systemctl daemon-reload
systemctl enable --now kubelet
#systemctl restart kubelet
systemctl status kubelet.service
# view logs:
# journalctl -u kubelet -n 150 --no-pager
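From master01 the nodes should register within a minute or so; they will stay NotReady until the CNI plugin (Calico, section IX) is installed:
kubectl get nodes -o wide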
3. kube-proxy configuration
3.1 Add the kube-proxy service file on all k8s nodes
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/etc/kubernetes/kube-proxy.yaml \\
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
3.2 Add the kube-proxy configuration on all k8s nodes
cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
acceptContentTypes: ""
burst: 10
contentType: application/vnd.kubernetes.protobuf
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
qps: 5
clusterCIDR: 172.16.0.0/16
configSyncPeriod: 15m0s
conntrack:
max: null
maxPerCore: 32768
min: 131072
tcpCloseWaitTimeout: 1h0m0s
tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
masqueradeAll: false
masqueradeBit: 14
minSyncPeriod: 0s
syncPeriod: 30s
ipvs:
masqueradeAll: true
minSyncPeriod: 5s
scheduler: "rr"
syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF
3.3 Start and check kube-proxy
systemctl daemon-reload
systemctl enable --now kube-proxy
#systemctl restart kube-proxy
systemctl status kube-proxy
netstat -lntup | grep 10249
# check whether the active proxy mode is iptables or ipvs
curl localhost:10249/proxyMode
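If the mode is ipvs, the virtual-server table should already list the kubernetes Service:
ipvsadm -Ln | head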
IX. Install Calico
1. Install Calico
curl -L https://github.com/projectcalico/calico/releases/download/v3.26.1/calicoctl-linux-arm64 -o calicoctl
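The calico.yaml edited below also has to be fetched first; assuming the standard manifest location for the v3.26.1 release:
wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml  # assumed manifest URL for v3.26.1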
vim calico.yaml
Find CALICO_IPV4POOL_CIDR (uncomment it if needed):
- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/16" #set to your own Pod CIDR
kubectl apply -f calico.yaml
kubectl get po -n kube-system
Result
root@k8s-master01:~/Desktop# kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-85578c44bf-g258t 1/1 Running 1 (49s ago) 3d17h
calico-node-555nz 1/1 Running 0 6h21m
calico-node-5q9hf 1/1 Running 0 6h21m
calico-node-7jk4v 1/1 Running 0 7h11m
calico-node-fc4zb 1/1 Running 1 (49s ago) 6h21m
calico-node-n8qqc 1/1 Running 1 (28s ago) 6h21m
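Once all the Calico pods are Running, the nodes should flip to Ready:
kubectl get nodes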
X. Install Metrics Server on k8s-master01
Releases: https://github.com/kubernetes-sigs/metrics-server/releases
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.4/components.yaml
vim components.yaml
# args section
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-insecure-tls
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
- --requestheader-username-headers=X-Remote-User
- --requestheader-group-headers=X-Remote-Group
- --requestheader-extra-headers-prefix=X-Remote-Extra-
image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.4
imagePullPolicy: IfNotPresent
# volumeMounts section
volumeMounts:
- mountPath: /tmp
name: tmp-dir
- name: ca-ssl
mountPath: /etc/kubernetes/pki
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
- name: ca-ssl
hostPath:
path: /etc/kubernetes/pki
kubectl apply -f components.yaml
Check node resource usage:
kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master01 104m 5% 2088Mi 54%
k8s-master02 75m 7% 2031Mi 53%
k8s-master03 83m 8% 2291Mi 60%
k8s-node01 36m 3% 1832Mi 48%
k8s-node02 41m 4% 1732Mi 45%
XI. Install CoreDNS on k8s-master01
1. Download
Source: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/coredns
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/coredns/coredns.yaml.sed
mv coredns.yaml.sed coredns.yaml
vim coredns.yaml
The complete coredns.yaml follows; four edits are needed, marked "(edit 1)" through "(edit 4)" in the file.
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: Reconcile
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: EnsureExists
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
        # kubernetes $DNS_DOMAIN in-addr.arpa ip6.arpa {  (edit 1)
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
# replicas: not specified here:
# 1. In order to make Addon Manager do not reconcile this replicas parameter.
# 2. Default is 1.
# 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
priorityClassName: system-cluster-critical
serviceAccountName: coredns
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values: ["kube-dns"]
topologyKey: kubernetes.io/hostname
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
nodeSelector:
kubernetes.io/os: linux
containers:
- name: coredns
        # image: registry.k8s.io/coredns/coredns:v1.10.1  (edit 2)
image: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1
imagePullPolicy: IfNotPresent
resources:
limits:
            #memory: $DNS_MEMORY_LIMIT  (edit 3)
memory: 200Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
scheme: HTTP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: kube-dns
  #clusterIP: 10.0.0.10  (edit 4)
  clusterIP: 10.96.0.10 # set to the 10th IP of your own Service CIDR
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
protocol: TCP
2. Deploy
kubectl apply -f coredns.yaml
kubectl get po -A
Result
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-85578c44bf-g258t 1/1 Running 0 3d14h
kube-system calico-node-555nz 1/1 Running 0 3h19m
kube-system calico-node-5q9hf 1/1 Running 0 3h19m
kube-system calico-node-7jk4v 1/1 Running 0 4h9m
kube-system calico-node-fc4zb 1/1 Running 0 3h19m
kube-system calico-node-n8qqc 1/1 Running 0 3h19m
kube-system coredns-6f659bf4cb-8bp5x 1/1 Running 0 7s
kube-system metrics-server-79ff9bd764-vd928 1/1 Running 0 22m
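A throwaway busybox pod is a common way to confirm cluster DNS end to end (busybox:1.28 is the image usually recommended for nslookup tests):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default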
XII. Install Dashboard on k8s-master01
1. Download and edit the manifest
Docs: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
Download: wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Edit recommended.yaml:
vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
Replace it with:
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
nodePort: 30001
selector:
k8s-app: kubernetes-dashboard
2. Apply and check
kubectl apply -f recommended.yaml
kubectl get po -n kubernetes-dashboard
3. Create an admin user
Official guide: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
vim dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
name: admin-user-token
namespace: kubernetes-dashboard
annotations:
kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token
kubectl apply -f dashboard-adminuser.yaml
Fetch the token used to log in:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
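On kubectl 1.24 and later, a short-lived token can also be issued directly instead of reading the Secret:
kubectl -n kubernetes-dashboard create token admin-user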
Log in at https://192.168.91.136:30001 (a NodePort, so any node IP works).