Problem Description
- Installing Kubernetes with Ansible fails at the final stage with the error shown below, indicating that kubelet did not start (in the output, the task names 轮询等待kube-proxy启动 / 轮询等待kubelet启动 mean "poll waiting for kube-proxy/kubelet to start")
TASK [kube-node : 轮询等待kube-proxy启动] ****************************************************************************
changed: [172.16.255.136]
changed: [172.16.255.132]
FAILED - RETRYING: 轮询等待kubelet启动 (4 retries left).
FAILED - RETRYING: 轮询等待kubelet启动 (4 retries left).
FAILED - RETRYING: 轮询等待kubelet启动 (3 retries left).
FAILED - RETRYING: 轮询等待kubelet启动 (3 retries left).
FAILED - RETRYING: 轮询等待kubelet启动 (2 retries left).
FAILED - RETRYING: 轮询等待kubelet启动 (2 retries left).
FAILED - RETRYING: 轮询等待kubelet启动 (1 retries left).
FAILED - RETRYING: 轮询等待kubelet启动 (1 retries left).
TASK [kube-node : 轮询等待kubelet启动] *******************************************************************************
fatal: [172.16.255.136]: FAILED! => {"attempts": 4, "changed": true, "cmd": "systemctl is-active kubelet.service", "delta": "0:00:00.007244", "end": "2022-12-07 23:09:47.327728", "msg": "non-zero return code", "rc": 3, "start": "2022-12-07 23:09:47.320484", "stderr": "", "stderr_lines": [], "stdout": "activating", "stdout_lines": ["activating"]}
fatal: [172.16.255.132]: FAILED! => {"attempts": 4, "changed": true, "cmd": "systemctl is-active kubelet.service", "delta": "0:00:00.003806", "end": "2022-12-07 23:09:47.492355", "msg": "non-zero return code", "rc": 3, "start": "2022-12-07 23:09:47.488549", "stderr": "", "stderr_lines": [], "stdout": "activating", "stdout_lines": ["activating"]}
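- The failing task simply polls systemctl is-active kubelet.service (visible in the "cmd" field above), which exits non-zero for any state other than active. A minimal way to reproduce the check by hand on an affected node:
# rc 0 means "active"; here it returns 3 with state "activating",
# i.e. systemd is stuck in a restart loop for the unit.
systemctl is-active kubelet.service
echo $?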
Troubleshooting
- Checking the kubelet status shows it has not started successfully:
root@iZ2vc2h2j9l2p8zqnwy6zoZ:~# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Wed 2022-12-07 23:14:26 CST; 3s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 30668 ExecStart=/usr/bin/kubelet --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=172.16.255.132 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --root-dir=/var/lib/kubelet --v=2 (code=exited, status=1/FAILURE)
Main PID: 30668 (code=exited, status=1/FAILURE)
- Use journalctl -u kubelet --no-pager to view the kubelet startup error logs:
Dec 07 23:50:21 iZ2vc2h2j9l2p8zqnwy6zoZ kubelet[24786]: E1207 23:50:21.347929 24786 remote_runtime.go:168] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
Dec 07 23:50:21 iZ2vc2h2j9l2p8zqnwy6zoZ kubelet[24786]: E1207 23:50:21.348041 24786 kuberuntime_manager.go:225] "Get runtime version failed" err="get remote runtime typed version failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
Dec 07 23:50:21 iZ2vc2h2j9l2p8zqnwy6zoZ kubelet[24786]: Error: failed to run Kubelet: failed to create kubelet: get remote runtime typed version failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
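- The "unknown service runtime.v1alpha2.RuntimeService" error means containerd is not serving the CRI gRPC API that kubelet depends on. This can be confirmed independently of kubelet with crictl, if it is installed on the node (a quick sketch):
# Query the runtime version over the same CRI socket kubelet uses;
# with the CRI plugin disabled this fails with a similar Unimplemented error.
crictl --runtime-endpoint unix:///run/containerd/containerd.sock version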
- Searching the error online turned up reports pointing at containerd, so check containerd's status:
root@iZ2vc2h2j9l2p8zqnwy6zoZ:~# systemctl status containerd
● containerd.service - containerd container runtime
Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2022-12-06 01:18:16 CST; 1 day 22h ago
Docs: https://containerd.io
Main PID: 3917 (containerd)
Tasks: 34
CGroup: /system.slice/containerd.service
├─ 3917 /usr/bin/containerd
├─11401 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 9e95bf63c2a1e18bf6ddc2b79b840931e443ebda1d6a03b0f3d2c4c4f3b16ecd -address /run/containerd/containerd.sock
└─14318 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 1add85e11ac29505b1155c558b118d5cc2e3fa1e347414e1c4816bdc43bb7427 -address /run/containerd/containerd.sock
Dec 07 22:43:38 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[3917]: time="2022-12-07T22:43:38.155622905+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 07 22:43:38 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[3917]: time="2022-12-07T22:43:38.155722663+08:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/d8021addb602a9977f6115ae6d305c30fb23690f68553e5cca512a3fc63a0bd2 pid=12434 runtime=io.containerd.runc.v2
Dec 07 22:43:38 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[3917]: time="2022-12-07T22:43:38.600183434+08:00" level=info msg="shim disconnected" id=d8021addb602a9977f6115ae6d305c30fb23690f68553e5cca512a3fc63a0bd2
Dec 07 22:43:38 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[3917]: time="2022-12-07T22:43:38.600230978+08:00" level=warning msg="cleaning up after shim disconnected" id=d8021addb602a9977f6115ae6d305c30fb23690f68553e5cca512a3fc63a0bd2 namespace=moby
Dec 07 22:43:38 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[3917]: time="2022-12-07T22:43:38.600245849+08:00" level=info msg="cleaning up dead shim"
Dec 07 22:43:38 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[3917]: time="2022-12-07T22:43:38.605221254+08:00" level=warning msg="cleanup warnings time=\"2022-12-07T22:43:38+08:00\" level=info msg=\"starting signal loop\" namespace=moby pid=12610 runtime=io.containerd.runc.v2\n"
Dec 07 22:52:39 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[3917]: time="2022-12-07T22:52:39.669074691+08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 07 22:52:39 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[3917]: time="2022-12-07T22:52:39.669118722+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 07 22:52:39 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[3917]: time="2022-12-07T22:52:39.669135217+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 07 22:52:39 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[3917]: time="2022-12-07T22:52:39.669236032+08:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/1add85e11ac29505b1155c558b118d5cc2e3fa1e347414e1c4816bdc43bb7427 pid=14318 runtime=io.containerd.runc.v2
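- containerd itself is healthy (the moby-namespace shims above belong to Docker containers), so the real question is whether its CRI plugin is loaded. ctr, which ships with containerd, can list plugin states; a sketch:
# On a working node the io.containerd.grpc.v1.cri plugin reports STATUS "ok";
# any other status here (or no output at all) means CRI is not being served.
ctr plugins ls | grep cri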
- Others reported that this is caused by disabled_plugins = ["cri"] in the /etc/containerd/config.toml configuration file; see https://github.com/containerd/containerd/issues/4581 for details
- Move the /etc/containerd/config.toml configuration file out of the way:
root@iZ2vc2h2j9l2p8zqnwy6zoZ:~# grep "disabled_plugins" /etc/containerd/config.toml
disabled_plugins = ["cri"]
root@iZ2vc2h2j9l2p8zqnwy6zoZ:~#
root@iZ2vc2h2j9l2p8zqnwy6zoZ:~# mv /etc/containerd/config.toml /tmp/
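- Moving the file aside makes containerd fall back to its built-in defaults, which enable the CRI plugin. If other custom settings in the file must be kept, two alternatives (a sketch; the sed edit assumes the line appears exactly as in the grep output above):
# Option 1: stop disabling the CRI plugin but keep the rest of the file.
sed -i 's/^disabled_plugins = \["cri"\]/disabled_plugins = []/' /etc/containerd/config.toml
# Option 2: regenerate a complete default config, then re-apply any customizations.
containerd config default > /etc/containerd/config.toml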
- Restart containerd and kubelet; kubelet now starts successfully:
root@iZ2vc2h2j9l2p8zqnwy6zoZ:~# systemctl restart containerd
root@iZ2vc2h2j9l2p8zqnwy6zoZ:~# systemctl status containerd
● containerd.service - containerd container runtime
Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2022-12-08 00:23:37 CST; 10s ago
Docs: https://containerd.io
Process: 16220 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 16226 (containerd)
Tasks: 33
CGroup: /system.slice/containerd.service
├─16226 /usr/bin/containerd
├─16419 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 1add85e11ac29505b1155c558b118d5cc2e3fa1e347414e1c4816bdc43bb7427 -address /run/containe
└─16428 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 9e95bf63c2a1e18bf6ddc2b79b840931e443ebda1d6a03b0f3d2c4c4f3b16ecd -address /run/containe
Dec 08 00:23:37 iZ2vc2h2j9l2p8zqnwy6zoZ systemd[1]: Started containerd container runtime.
Dec 08 00:23:37 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[16226]: time="2022-12-08T00:23:37.930877956+08:00" level=info msg="loading plugin \"io.containerd.event.v1.p
Dec 08 00:23:37 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[16226]: time="2022-12-08T00:23:37.930929990+08:00" level=info msg="loading plugin \"io.containerd.internal.v
Dec 08 00:23:37 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[16226]: time="2022-12-08T00:23:37.930942089+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.t
Dec 08 00:23:37 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[16226]: time="2022-12-08T00:23:37.931105240+08:00" level=info msg="starting signal loop" namespace=moby path
Dec 08 00:23:37 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[16226]: time="2022-12-08T00:23:37.932520723+08:00" level=info msg="loading plugin \"io.containerd.event.v1.p
Dec 08 00:23:37 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[16226]: time="2022-12-08T00:23:37.932563830+08:00" level=info msg="loading plugin \"io.containerd.internal.v
Dec 08 00:23:37 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[16226]: time="2022-12-08T00:23:37.932582235+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.t
Dec 08 00:23:37 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[16226]: time="2022-12-08T00:23:37.932692186+08:00" level=info msg="starting signal loop" namespace=moby path
Dec 08 00:23:41 iZ2vc2h2j9l2p8zqnwy6zoZ containerd[16226]: time="2022-12-08T00:23:41.811347380+08:00" level=info msg="No cni config template is specified, wait
root@iZ2vc2h2j9l2p8zqnwy6zoZ:~#
root@iZ2vc2h2j9l2p8zqnwy6zoZ:~# systemctl restart kubelet
root@iZ2vc2h2j9l2p8zqnwy6zoZ:~# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2022-12-08 00:24:06 CST; 10s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 16852 (kubelet)
Tasks: 15 (limit: 4915)
CGroup: /system.slice/kubelet.service
└─16852 /usr/bin/kubelet --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-overri
Dec 08 00:24:06 iZ2vc2h2j9l2p8zqnwy6zoZ kubelet[16852]: I1208 00:24:06.371586 16852 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR=
Dec 08 00:24:06 iZ2vc2h2j9l2p8zqnwy6zoZ kubelet[16852]: I1208 00:24:06.371664 16852 kubelet_node_status.go:563] "Recording event message for node" node="172.
Dec 08 00:24:06 iZ2vc2h2j9l2p8zqnwy6zoZ kubelet[16852]: I1208 00:24:06.371756 16852 kubelet_node_status.go:563] "Recording event message for node" node="172.
Dec 08 00:24:06 iZ2vc2h2j9l2p8zqnwy6zoZ kubelet[16852]: I1208 00:24:06.371770 16852 kubelet_node_status.go:563] "Recording event message for node" node="172.
Dec 08 00:24:06 iZ2vc2h2j9l2p8zqnwy6zoZ kubelet[16852]: I1208 00:24:06.371788 16852 kubelet_node_status.go:70] "Attempting to register node" node="172.16.255
Dec 08 00:24:06 iZ2vc2h2j9l2p8zqnwy6zoZ kubelet[16852]: I1208 00:24:06.378384 16852 kubelet_node_status.go:108] "Node was previously registered" node="172.16
Dec 08 00:24:06 iZ2vc2h2j9l2p8zqnwy6zoZ kubelet[16852]: I1208 00:24:06.378430 16852 kubelet_node_status.go:73] "Successfully registered node" node="172.16.25
Dec 08 00:24:07 iZ2vc2h2j9l2p8zqnwy6zoZ kubelet[16852]: I1208 00:24:07.266364 16852 apiserver.go:52] "Watching apiserver"
Dec 08 00:24:07 iZ2vc2h2j9l2p8zqnwy6zoZ kubelet[16852]: I1208 00:24:07.269182 16852 kubelet.go:2072] "SyncLoop ADD" source="api" pods=[]
Dec 08 00:24:07 iZ2vc2h2j9l2p8zqnwy6zoZ kubelet[16852]: I1208 00:24:07.274953 16852 reconciler.go:157] "Reconciler: start to sync state"
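- With kubelet registered, the node should go Ready once the CNI plugin comes up. A quick verification from a machine with cluster credentials (assuming kubectl is configured there):
# The repaired nodes should be listed and eventually reach Ready.
kubectl get nodes -o wide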
Root Cause
The containerd.io package shipped in Docker's repositories installs a default /etc/containerd/config.toml that sets disabled_plugins = ["cri"], since Docker itself does not use containerd's CRI service. kubelet, however, talks to containerd exclusively over CRI, so its gRPC calls fail with "unknown service runtime.v1alpha2.RuntimeService" and the service can never come up. See https://github.com/containerd/containerd/issues/4581 for details.
Solution
- mv /etc/containerd/config.toml /tmp
- systemctl restart containerd
- systemctl restart kubelet (or re-run kubeadm init)
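- Since the install is driven by Ansible, the same fix can be pushed to all affected nodes with one ad-hoc command (a sketch, assuming the inventory group is kube-node as in the playbook output above):
# Move the offending config aside and restart both services on every node.
ansible kube-node -m shell -a 'mv /etc/containerd/config.toml /tmp/ && systemctl restart containerd kubelet'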