Create the virtual machine cluster
VMware tutorial
VirtualBox tutorial
Alternatively, you can buy a few servers from a major cloud provider, which is less hassle than configuring everything yourself.
It is recommended to create one VM first, install docker, k8s, the required tools (curl, vim, etc.) and images, and only then clone it into several machines; this saves repeating the installation steps on every machine.
Install docker
Install a K8s v1.23 high-availability cluster
If you need to deploy a k8s cluster on Alibaba Cloud ECS on its own, see here
If you need an arm64 solution, see here
The only catch is that since Ubuntu 20.x, keyrings have replaced the old apt-key, so the sudo curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
approach found in many online tutorials no longer works. Docker's official documentation already shows the correct way:
$ sudo apt-get update
$ sudo apt-get install ca-certificates curl gnupg lsb-release
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
$ sudo apt-get update
$ sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
kubernetes' default cgroup driver (cgroupdriver) is "systemd", while the docker service defaults to "cgroupfs"; it is recommended to switch docker to "systemd" so it matches kubernetes.
While you are at it, you can also add the docker registry mirror and log configuration.
$ docker info # check the current cgroup driver
$ sudo vim /etc/docker/daemon.json
# add the following
# {
#   "registry-mirrors": ["https://ch72w18w.mirror.aliyuncs.com"],
#   "exec-opts": ["native.cgroupdriver=systemd"],
#   "log-driver": "json-file",
#   "log-opts": {"max-size":"100m", "max-file":"3"}
# }
Remember to restart docker after changing the configuration
sudo systemctl daemon-reload
sudo systemctl restart docker
Install k8s
$ curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
$ echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install -y kubelet=1.21.14-00 kubeadm=1.21.14-00 kubectl=1.21.14-00
k8s dropped dockershim starting with v1.24, so the workflow for v1.24+ differs substantially from earlier versions; it is best to pin an explicit version when installing.
Clone the VM into several machines
Right click -> Clone
Change the new machine's hostname and IP according to the network plan recorded earlier (see the sketch after these steps)
Generate a new key pair on the new machine and distribute it to the other machines with ssh-copy-id -i /home/<username1>/.ssh/id_rsa.pub <username2>@192.168.X.XXX
If everything runs as the root user, ssh-copy-id 192.168.X.XXX is enough
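For reference, the per-clone steps might look roughly like this (a sketch assuming an Ubuntu guest with netplan; the hostname, netplan file name, and addresses are placeholders, adjust them to your own plan):
sudo hostnamectl set-hostname spinq-worker1
sudo vim /etc/netplan/00-installer-config.yaml   # change the static address, e.g. to 192.168.1.112/24
sudo netplan apply
ssh-keygen -t rsa                                # new key pair for the clone
ssh-copy-id -i ~/.ssh/id_rsa.pub <username>@192.168.1.111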
At this point each new worker node should have:
- the network configured
- the ssh key added
- docker and the three k8s packages (kubelet/kubeadm/kubectl) installed
Start the cluster [master node]
k8s dropped dockershim in v1.24, so the workflow for v1.24 differs substantially from earlier versions.
Start k8s v1.21
The images k8s needs are hard to pull from inside China, so first list the required images
kubeadm config images list
then pull them from the Alibaba Cloud registry and retag them.
Note that for coredns, the official k8s image path has one more repo level than the Alibaba Cloud one.
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.14
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.14
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.14
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.21.14
docker pull registry.aliyuncs.com/google_containers/pause:3.4.1
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.0
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.14 k8s.gcr.io/kube-apiserver:v1.21.14
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.14 k8s.gcr.io/kube-controller-manager:v1.21.14
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.14 k8s.gcr.io/kube-scheduler:v1.21.14
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.21.14 k8s.gcr.io/kube-proxy:v1.21.14
docker tag registry.aliyuncs.com/google_containers/pause:3.4.1 k8s.gcr.io/pause:3.4.1
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
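If you would rather not type these one by one, a small shell loop can do the pull-and-retag (a sketch; the image list simply mirrors the commands above):
ALIYUN=registry.aliyuncs.com/google_containers
for img in kube-apiserver:v1.21.14 kube-controller-manager:v1.21.14 kube-scheduler:v1.21.14 kube-proxy:v1.21.14 pause:3.4.1 etcd:3.4.13-0; do
  docker pull $ALIYUN/$img
  docker tag $ALIYUN/$img k8s.gcr.io/$img
done
# coredns has an extra repo level in the official path
docker pull $ALIYUN/coredns:v1.8.0
docker tag $ALIYUN/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0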
Do a dry run of the cluster startup; it reports errors
$ kubeadm init phase preflight
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[ERROR CRI]: container runtime is not running
The first error means /proc/sys/net/bridge/bridge-nf-call-iptables does not exist, so iptables cannot process bridged traffic; fix it as follows
$ sudo modprobe br_netfilter
$ sudo sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
$ cat /proc/sys/net/bridge/bridge-nf-call-iptables
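To make these settings survive a reboot, they can be persisted (a common approach; the file names under /etc/modules-load.d and /etc/sysctl.d are just conventions):
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system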
For the second error, adjust the VM's CPU count yourself; for the third, do the following and retry
rm /etc/containerd/config.toml
systemctl restart containerd
Once all preflight issues are resolved, run the real init
kubeadm init --kubernetes-version=v1.21.14 --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.1.111 --pod-network-cidr=10.244.0.0/12 --ignore-preflight-errors=Swap
Besides command-line flags, upstream k8s now leans towards configuring the parameters through a config file.
Templates with the default parameters can be generated with
kubeadm config print init-defaults > init-config.yml # default parameters for initializing a cluster
kubeadm config print join-defaults > join-config.yml # default parameters for joining a cluster
# the config file can have no extension, or be named .conf or .yaml
A detailed explanation of the parameters can be found here
Use the config file when initializing
kubeadm init --config init-config.yml
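For reference, a minimal init-config.yml for this setup might look like the following (a sketch; kubeadm v1.21 uses the kubeadm.k8s.io/v1beta2 schema, and the address/subnet values below are the ones used elsewhere in this guide, adjust them to your own plan):
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.111       # master IP
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.14
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16              # keep consistent with the CNI config (see the flannel section)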
If you use Alibaba Cloud ECS and the instance has no public IP configured, init will time out; see [here](https://blog.csdn.net/weixin_47678667/article/details/121680938) for a solution
Bug
When init is run with a config file the following errors appear, but they do not appear when using flags
Feb 27 08:25:51 spinq-master kubelet[147373]: I0227 08:25:51.884106 147373 eviction_manager.go:339] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Feb 27 08:25:51 spinq-master kubelet[147373]: I0227 08:25:51.884248 147373 container_gc.go:85] "Attempting to delete unused containers"
Feb 27 08:25:51 spinq-master kubelet[147373]: I0227 08:25:51.913385 147373 image_gc_manager.go:321] "Attempting to delete unused images"
Feb 27 08:25:51 spinq-master kubelet[147373]: I0227 08:25:51.944040 147373 eviction_manager.go:350] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Feb 27 08:25:51 spinq-master kubelet[147373]: I0227 08:25:51.944706 147373 eviction_manager.go:368] "Eviction manager: pods ranked for eviction" pods=[kube-system/kube-apiserver-spinq-master kube-system/etcd-spinq-master kube-system/kube-controller-manager-spinq-master kube-system/kube-scheduler-spinq-master kube-system/kube-proxy-kk5qc kube-system/kube-flannel-ds-84l5f]
Feb 27 08:25:51 spinq-master kubelet[147373]: E0227 08:25:51.945299 147373 eviction_manager.go:560] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-spinq-master"
Feb 27 08:25:51 spinq-master kubelet[147373]: E0227 08:25:51.945719 147373 eviction_manager.go:560] "Eviction manager: cannot evict a critical pod" pod="kube-system/etcd-spinq-master"
Feb 27 08:25:51 spinq-master kubelet[147373]: E0227 08:25:51.946068 147373 eviction_manager.go:560] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-spinq-master"
Feb 27 08:25:51 spinq-master kubelet[147373]: E0227 08:25:51.946436 147373 eviction_manager.go:560] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-spinq-master"
Feb 27 08:25:51 spinq-master kubelet[147373]: E0227 08:25:51.946787 147373 eviction_manager.go:560] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kk5qc"
Feb 27 08:25:51 spinq-master kubelet[147373]: E0227 08:25:51.947147 147373 eviction_manager.go:560] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-flannel-ds-84l5f"
Feb 27 08:25:51 spinq-master kubelet[147373]: I0227 08:25:51.947488 147373 eviction_manager.go:391] "Eviction manager: unable to evict any pods from the node"
Online discussions say this is caused by pods taking up too much disk space; I haven't looked into it in detail yet. References:
Kubernetes eviction manager evicting control plane pods to reclaim ephemeral storage
[Node-pressure Eviction](https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/)
On success you get output like this
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.2.15:6443 --token fb4x67.zk0lses0315xvzla \
--discovery-token-ca-cert-hash sha256:17167b1f9f4294d12766b1977681f0aa3575b9d978d371aa774fc5b9d978d371aa774fcadc707ff51d
Configure kubeconfig as instructed; only then can kubectl see the cluster information.
Regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
root user
export KUBECONFIG=/etc/kubernetes/admin.conf
Note for the root user: if you choose to set KUBECONFIG this way, the environment variable is only temporary; after the machine reboots, kubectl cannot find KUBECONFIG and reports the following error
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This can be fixed by writing the export into /etc/profile.
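A minimal sketch (appending to ~/.bashrc would work as well):
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' | sudo tee -a /etc/profile
source /etc/profile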
Once kubeconfig is configured you can inspect the cluster.
At this point the master node's status is NotReady; as instructed, you need to install a cni addon.
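For example:
kubectl get nodes      # the master shows STATUS NotReady until a cni addon is installed
kubectl cluster-info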
Reset the token
The token kubeadm generates for workers to join the cluster expires after 24 hours, so when you pick this up again the next day you need to look it up and set it again.
List the tokens
kubeadm token list
If the token has not expired, you can also look up the cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
and assemble them into the command the workers will run
kubeadm join 192.168.1.111:6443 --token <token> --discovery-token-ca-cert-hash sha256:<cert-hash>
If the token has already expired, generate a new one
kubeadm token create --print-join-command
flannel
Because the resource tends to be blocked in China, download the config file first
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
then decide, based on the pod cidr you planned, whether the net-conf.json parameters need to be modified
net-conf.json: |
{
"Network": "10.96.0.0/12",
"Backend": {
"Type": "vxlan"
}
}
then apply the config file to the cluster
$ kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
The flannel pod may fail to come up; in that case check the failing pod's log
kubectl logs kube-flannel-ds-spw9v -n kube-system
If you see Error registering network: failed to acquire lease: node "spinq-master" pod cidr not assigned, it means either:
- the pod cidr was not configured, or
- the pod cidr does not match the Network parameter in net-conf.json in kube-flannel.yml
Fix (see the sketch below):
- If the cluster was initialized with a config file, make sure the podSubnet parameter under networking in the kind: ClusterConfiguration part of kube-init-config.yml matches the Network parameter in net-conf.json in kube-flannel.yml.
- If the cluster was initialized with command-line flags, make sure --pod-network-cidr matches the Network parameter in net-conf.json in kube-flannel.yml.
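A sketch of the two places that must agree (using 10.244.0.0/16 as an example value):
# kube-init-config.yml (kind: ClusterConfiguration)
networking:
  podSubnet: 10.244.0.0/16
# kube-flannel.yml (ConfigMap kube-flannel-cfg, key net-conf.json)
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }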
After the first successful start, flannel creates two virtual interfaces, cni0 and flannel.1, along with /etc/cni/net.d/10-flannel.conflist and /run/flannel/subnet.env
When I experimented, after kubeadm reset the cni files under /etc/cni/net.d and /run were not removed automatically: the virtual interfaces do not disappear, and /etc/cni/net.d/10-flannel.conflist and /run/flannel/subnet.env remain as well. If you restart the cluster later without cleaning them up, the stale network configuration can break cluster communication and keep pods on the node from starting.
A bug this can lead to:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "722424eeb5901d17fb90d720180f92a2683873110daccf82dc9c7e82f4ac665b" network for pod "coredns-59d64cd4d4-c7sm7": networkPlugin cni failed to set up pod "coredns-59d64cd4d4-c7sm7_kube-system" network: failed to delegate add: failed to set bridge addr: "cni0" already has an IP address different from 172.244.0.1/24
Workers get the same interfaces and config files once they connect to the master. Carelessly deleting the interfaces or config files there can also leave the worker unable to reach the master's port 6443 (curl https://<master-ip>:6443 failed); the exact cause is unclear, and it recovered after a reboot.
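When a node really is being torn down for a fresh kubeadm init (not a worker that is still joined, as noted above), a cleanup along these lines is commonly used (a sketch; double-check the paths and interface names on your own machines):
sudo rm -rf /etc/cni/net.d /run/flannel/subnet.env
sudo ip link delete cni0
sudo ip link delete flannel.1
sudo systemctl restart kubelet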
Calico
Calico's quickstart does not go through a config file, which is somewhat confusing; if you would rather use the familiar config-file approach, see here
Likewise, download the config file first
https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises
One thing to note about Calico's configuration: by default calico-node talks to the kubernetes SVC on port 443, which can make the apiserver unreachable; you need to add the apiserver's IP and port to the yaml file.
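A sketch of that change (the values are placeholders; use your master's IP and apiserver port), added to the env of the calico-node container in calico.yaml:
- name: KUBERNETES_SERVICE_HOST
  value: "192.168.1.111"
- name: KUBERNETES_SERVICE_PORT
  value: "6443"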
Apply the config file
kubectl apply -f calico.yaml
Once the cni addon has been installed successfully, kubectl get nodes will report Ready
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
spinq-master Ready control-plane,master 108m v1.21.14
PS: calico starts very slowly for me, both on local VMs and on Alibaba Cloud ECS. Tutorials usually say about 20s, but I had to wait around 20 minutes; I don't know whether this is due to kernel resource scheduling or to the network while downloading resources.
PS: if you reset the cluster with kubeadm reset, it tells you to delete the cni configuration in /etc/cni/net.d. If you delete that folder, the next kubeadm init reports container network is not ready: cni config uninitialized, and even re-running kubectl apply -f calico.yaml does not always fix it; you need to systemctl restart kubelet. It feels like a problem with the automatic reload mechanism.
Its status, and the status of every other pod, can be checked with kubectl get pods -A.
You can also use docker ps to check whether the various containers started correctly.
PS: in kubectl get cs, the scheduler and controller-manager can show up as unhealthy under all kinds of circumstances, and the command has been deprecated since v1.19, so there is no need to worry too much about it. Details
Deploy a dockershim-based k8s cluster - 1
Deploy a dockershim-based k8s cluster - 2
Start k8s v1.24
Pull the images in advance
kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers
k8s v1.24 has dropped dockershim and uses containerd by default; the images actually in use can only be seen with crictl img, and they are retagged with ctr:
ctr -n k8s.io image tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.2 k8s.gcr.io/kube-apiserver:v1.24.2
ctr -n k8s.io image tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.2 k8s.gcr.io/kube-controller-manager:v1.24.2
ctr -n k8s.io image tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.2 k8s.gcr.io/kube-scheduler:v1.24.2
ctr -n k8s.io image tag registry.aliyuncs.com/google_containers/kube-proxy:v1.24.2 k8s.gcr.io/kube-proxy:v1.24.2
ctr -n k8s.io image tag registry.aliyuncs.com/google_containers/etcd:3.5.3-0 k8s.gcr.io/etcd:3.5.3-0
ctr -n k8s.io image tag registry.aliyuncs.com/google_containers/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6
Install containerd and k8s v1.24 - 1
Install containerd and k8s v1.24 - 2
kubeadm init --kubernetes-version=v1.24.2 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=Swap
kubelet starts running, but the cluster still reports errors.
Check the logs with systemctl status kubelet and journalctl -xeu kubelet; locating the problem takes some patience here.
It turns out to be an image version problem, which shows that the image list obtained earlier with kubeadm config images list is not entirely reliable either.
Jun 28 09:35:57 spinq-master kubelet[10593]: E0628 09:35:57.819736 10593 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"k8s.gcr.io/pause:3.6\": failed to pull image \"k8s.gcr.io/pause:3.6\": failed to pull and unpack image \"k8s.gcr.io/pause:3.6\": failed to resolve reference \"k8s.gcr.io/pause:3.6\": failed to do request: Head \"https://k8s.gcr.io/v2/pause/manifests/3.6\": dial tcp 64.233.189.82:443: i/o timeout"
Add the missing image; for containerd operations see here
ctr -n k8s.io image pull registry.aliyuncs.com/google_containers/pause:3.6
ctr -n k8s.io image tag registry.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6
Add nodes [worker nodes]
Join the worker nodes to the cluster using the command produced at the end of init
$ kubeadm join 10.0.2.15:6443 --token fb4x67.zk0lses0315xvzla --discovery-token-ca-cert-hash sha256:17167b1f9f4294d12766b1977681f0aa3575b9d978d371aa774fc5b9d978d371aa774fcadc707ff51d
...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Likewise, systemctl status kubelet and journalctl -xeu kubelet can be used to check whether the node started successfully. If it does not, most errors still have to be diagnosed from the master with kubectl describe on the node or the pod.
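For example, from the master (the names are placeholders):
kubectl get nodes
kubectl describe node spinq-worker1
kubectl describe pod <pod-name> -n kube-system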
Deploy a service
Take deploying an nginx service as an example
Basic architecture of a k8s service - 1
Basic architecture of a k8s service - 2
Basic architecture of a k8s service - 3
Write the following configuration into nginx-conf.yml on the master node
apiVersion: v1
kind: Namespace
metadata:
name: nginx-demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-demo-deploy
namespace: nginx-demo
spec:
replicas: 2
selector:
matchLabels:
app: nginx-tag
template:
metadata:
labels:
app: nginx-tag
spec:
containers:
- name: nginx-ct
image: nginx:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx-demo-service
namespace: nginx-demo
spec:
type: NodePort
selector:
app: nginx-tag # match the template metadata label in Deployment
ports:
- protocol: TCP
port: 3088 # match for service access port
targetPort: 80 # match for pod access port
nodePort: 30088 # match for external access port
Apply the config file
$ kubectl apply -f nginx-conf.yml
namespace/nginx-demo unchanged
deployment.apps/nginx-demo-deploy created
service/nginx-demo-service configured
Check the generated pods, deployment, and service.
Note that without a namespace, kubectl get svc only shows services in the default namespace, so be sure to pass the -n flag
$ kubectl get pods -n nginx-demo -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-demo-deploy-784fb48fbc-kcjpl 1/1 Running 0 18s 10.240.1.11 spinq-worker1 <none> <none>
nginx-demo-deploy-784fb48fbc-njw9n 1/1 Running 0 18s 10.240.1.10 spinq-worker1 <none> <none>
$ kubectl get deploy -n nginx-demo
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-demo-deploy 1/1 1 1 19s
$ kubectl get svc nginx-demo-service -n nginx-demo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-demo-service NodePort 10.109.6.66 <none> 3088:30088/TCP 23s
Visit http://192.168.1.111:30088 and you should see the nginx default welcome page.
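You can also check from the command line:
curl http://192.168.1.111:30088   # should return the "Welcome to nginx!" HTML page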