Deploying k8s on ARM Ubuntu

Environment

Huawei TaiShan server, IP 10.203.1.19

Node information

root@horatio:~# hostnamectl
   Static hostname: horatio
         Icon name: computer-server
           Chassis: server
        Machine ID: c9c709c6a0f04fe3a93b1368c361083a
           Boot ID: d76bd870c49a43649c90e4669bba79e6
  Operating System: Ubuntu 18.04.4 LTS
            Kernel: Linux 4.15.0-76-generic
      Architecture: arm64

Preparation

Disable swap

Edit /etc/fstab and comment out the line beginning with /swapfile, then reboot.
After the reboot, run top; if every value in the Swap row is 0, the change has taken effect. (A non-interactive equivalent is sketched after the top output below.)

root@horatio:~# top
top - 11:19:41 up 5 min,  1 user,  load average: 0.00, 0.03, 0.00
Tasks: 905 total,   1 running, 438 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.2 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 65689596 total, 64585336 free,   892840 used,   211420 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 64368672 avail Mem 
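
As a sketch, the manual edit and reboot above can be done in place, assuming the swap entry in /etc/fstab really does start with /swapfile as described:

# Turn swap off immediately, no reboot needed
swapoff -a
# Comment out the /swapfile line so the change survives reboots
sed -ri 's|^(/swapfile.*)|#\1|' /etc/fstab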

Install software

Docker

Command

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

Result

root@horatio:~# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
# Executing docker install script, commit: 3d8fe77c2c46c5b7571f94b42793905e5b3e42e4
+ sh -c 'apt-get update -qq >/dev/null'
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic/InRelease  Could not connect to 10.203.1.225:80 (10.203.1.225). - connect (113: No route to host)
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-security/InRelease  Unable to connect to 10.203.1.225:http:
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-updates/InRelease  Unable to connect to 10.203.1.225:http:
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-proposed/InRelease  Unable to connect to 10.203.1.225:http:
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-backports/InRelease  Unable to connect to 10.203.1.225:http:
W: Some index files failed to download. They have been ignored, or old ones used instead.
+ sh -c 'DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null'
+ sh -c 'curl -fsSL "https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg" | apt-key add -qq - >/dev/null'
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sh -c 'echo "deb [arch=arm64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic stable" > /etc/apt/sources.list.d/docker.list'
+ sh -c 'apt-get update -qq >/dev/null'
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic/InRelease  Could not connect to 10.203.1.225:80 (10.203.1.225). - connect (113: No route to host)
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-security/InRelease  Unable to connect to 10.203.1.225:http:
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-updates/InRelease  Unable to connect to 10.203.1.225:http:
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-proposed/InRelease  Unable to connect to 10.203.1.225:http:
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-backports/InRelease  Unable to connect to 10.203.1.225:http:
W: Some index files failed to download. They have been ignored, or old ones used instead.
+ '[' -n '' ']'
+ sh -c 'apt-get install -y -qq --no-install-recommends docker-ce >/dev/null'
+ sh -c 'docker version'
Client: Docker Engine - Community
 Version:           20.10.1
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        831ebea
 Built:             Tue Dec 15 04:34:49 2020
 OS/Arch:           linux/arm64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.1
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       f001486
  Built:            Tue Dec 15 04:32:48 2020
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker your-user

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.
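
The kubeadm preflight checks later in this note warn that Docker uses the cgroupfs cgroup driver while systemd is recommended. The warning was left as-is in this walkthrough, but as a sketch, a common way to switch the driver before initializing is:

cat <<EOF >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker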

kubeadm, kubectl, kubelet

A specific version must be installed. Without pinning, apt installs the latest release (1.20.0 at the time), but the image repository has no ARM-architecture images for that version. After testing, version 1.18.0 was used.
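
To see which versions are available to pin (a similar check appears again in the Miscellaneous section at the end):

apt-cache madison kubeadm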

Add the apt key

Command

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

Result

root@horatio:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1974  100  1974    0     0  11963      0 --:--:-- --:--:-- --:--:-- 11891
OK

Add the apt source

Command

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
> EOF

Result

root@horatio:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
> EOF
root@horatio:~#

Update the package index

root@horatio:~# apt update
Hit:1 https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic InRelease
Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease [9,383 B]                                                                                               
Hit:3 http://cn.ports.ubuntu.com/ubuntu-ports bionic InRelease                                                                                                                      
Hit:4 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease                          
Ign:5 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 Packages         
Hit:6 http://cn.ports.ubuntu.com/ubuntu-ports bionic-updates InRelease   
Get:5 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 Packages [41.9 kB]
Hit:7 http://cn.ports.ubuntu.com/ubuntu-ports bionic-backports InRelease            
Fetched 51.3 kB in 1s (49.3 kB/s)                  
Reading package lists... Done
Building dependency tree       
Reading state information... Done
173 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@horatio:~#

Install the packages

Command

apt install kubeadm=1.18.0-00 kubectl=1.18.0-00 kubelet=1.18.0-00 

Result

root@horatio:~# apt install kubeadm=1.18.0-00 kubectl=1.18.0-00 kubelet=1.18.0-00  
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  kubernetes-cni
The following NEW packages will be installed:
  kubeadm kubectl kubelet kubernetes-cni
0 upgraded, 4 newly installed, 0 to remove and 173 not upgraded.
Need to get 55.0 MB of archives.
After this operation, 258 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubernetes-cni arm64 0.8.7-00 [23.1 MB]
Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubelet arm64 1.18.0-00 [17.2 MB]
Get:3 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubectl arm64 1.18.0-00 [7,622 kB]
Get:4 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubeadm arm64 1.18.0-00 [7,073 kB]
Fetched 55.0 MB in 4s (14.5 MB/s)  
Selecting previously unselected package kubernetes-cni.
(Reading database ... 67551 files and directories currently installed.)
Preparing to unpack .../kubernetes-cni_0.8.7-00_arm64.deb ...
Unpacking kubernetes-cni (0.8.7-00) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../kubelet_1.18.0-00_arm64.deb ...
Unpacking kubelet (1.18.0-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../kubectl_1.18.0-00_arm64.deb ...
Unpacking kubectl (1.18.0-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../kubeadm_1.18.0-00_arm64.deb ...
Unpacking kubeadm (1.18.0-00) ...
Setting up kubernetes-cni (0.8.7-00) ...
Setting up kubelet (1.18.0-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubectl (1.18.0-00) ...
Setting up kubeadm (1.18.0-00) ...
root@horatio:~#

Initialize the cluster

Prepare the images

The ARM-version images must be pulled in advance: kubeadm does not pull ARM images during initialization, and the x86 images it pulls instead prevent the cluster from initializing successfully.
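
To see exactly which images kubeadm needs for the target version (the same command appears later in this note for the default version), one can run:

kubeadm config images list --kubernetes-version v1.18.1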

Pull images

Command

docker pull mirrorgcrio/kube-apiserver-arm64:v1.18.1
docker pull mirrorgcrio/kube-controller-manager-arm64:v1.18.1
docker pull mirrorgcrio/kube-scheduler-arm64:v1.18.1
docker pull mirrorgcrio/kube-proxy-arm64:v1.18.1
docker pull mirrorgcrio/etcd-arm64:3.4.3-0
docker pull mirrorgcrio/pause-arm64:3.2
docker pull tkestack/coredns-arm64:1.6.9

Result

root@horatio:~# docker pull mirrorgcrio/kube-apiserver-arm64:v1.18.1
docker pull mirrorgcrio/kube-controller-manager-arm64:v1.18.1
docker pull mirrorgcrio/kube-scheduler-arm64:v1.18.1
docker pull mirrorgcrio/kube-proxy-arm64:v1.18.1
docker pull mirrorgcrio/etcd-arm64:3.4.3-0
docker pull mirrorgcrio/pause-arm64:3.2
v1.18.1: Pulling from mirrorgcrio/kube-apiserver-arm64
ed2e7fd67416: Pull complete 
6df437f7efad: Pull complete 
Digest: sha256:29165d4e875c996bce3790226ac90cc8f7db50b2c952929522d81106a85f3226
Status: Downloaded newer image for mirrorgcrio/kube-apiserver-arm64:v1.18.1
docker.io/mirrorgcrio/kube-apiserver-arm64:v1.18.1
root@horatio:~# docker pull mirrorgcrio/kube-controller-manager-arm64:v1.18.1
v1.18.1: Pulling from mirrorgcrio/kube-controller-manager-arm64
ed2e7fd67416: Already exists 
8e08af3f3336: Pull complete 
Digest: sha256:a2150210ea0b5a62fbcae903467e4c20992c03e5a484ff3b9230f41a6507f39b
Status: Downloaded newer image for mirrorgcrio/kube-controller-manager-arm64:v1.18.1
docker.io/mirrorgcrio/kube-controller-manager-arm64:v1.18.1
root@horatio:~# docker pull mirrorgcrio/kube-scheduler-arm64:v1.18.1
v1.18.1: Pulling from mirrorgcrio/kube-scheduler-arm64
ed2e7fd67416: Already exists 
79c79f4c4434: Pull complete 
Digest: sha256:1aebd94ad45b5204a89f05313838352c4fc2861da7a9ab97f3c41a37aaaa7119
Status: Downloaded newer image for mirrorgcrio/kube-scheduler-arm64:v1.18.1
docker.io/mirrorgcrio/kube-scheduler-arm64:v1.18.1
root@horatio:~# docker pull mirrorgcrio/kube-proxy-arm64:v1.18.1
v1.18.1: Pulling from mirrorgcrio/kube-proxy-arm64
ed2e7fd67416: Already exists 
d033d9855b96: Pull complete 
7bd91d4a9747: Pull complete 
6c3c2821ac4d: Pull complete 
b8ac04191d92: Pull complete 
355857a7a906: Pull complete 
ea9711a0e51a: Pull complete 
Digest: sha256:1cd85e909859001b68022f269c6ce223370cdb7889d79debd9cb87626a8280fb
Status: Downloaded newer image for mirrorgcrio/kube-proxy-arm64:v1.18.1
docker.io/mirrorgcrio/kube-proxy-arm64:v1.18.1
root@horatio:~# docker pull mirrorgcrio/etcd-arm64:3.4.3-0
3.4.3-0: Pulling from mirrorgcrio/etcd-arm64
9f9ba9541db2: Pull complete 
6feb97f21dc3: Pull complete 
de473e163c10: Pull complete 
Digest: sha256:fbc0f8b4861d23c9989edf877df7ae2533083e98c05687eb22b00422b9825c2f
Status: Downloaded newer image for mirrorgcrio/etcd-arm64:3.4.3-0
docker.io/mirrorgcrio/etcd-arm64:3.4.3-0
root@horatio:~# docker pull mirrorgcrio/pause-arm64:3.2
3.2: Pulling from mirrorgcrio/pause-arm64
84f9968a3238: Pull complete 
Digest: sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636
Status: Downloaded newer image for mirrorgcrio/pause-arm64:3.2
docker.io/mirrorgcrio/pause-arm64:3.2
root@horatio:~# docker pull tkestack/coredns-arm64:1.6.9
1.6.9: Pulling from tkestack/coredns-arm64
c6568d217a00: Pull complete 
9ee498572cc0: Pull complete 
Digest: sha256:0b24ee66a96fb4142d4d0d7014f78507dda2a8da28567e858461eef5a0734402
Status: Downloaded newer image for tkestack/coredns-arm64:1.6.9
docker.io/tkestack/coredns-arm64:1.6.9

Retag the images

Command

docker tag mirrorgcrio/kube-apiserver-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.1
docker tag mirrorgcrio/kube-scheduler-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.1
docker tag mirrorgcrio/kube-controller-manager-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.1
docker tag mirrorgcrio/kube-proxy-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-proxy:v1.18.1
docker tag mirrorgcrio/etcd-arm64:3.4.3-0 registry.aliyuncs.com/google_containers/etcd:3.4.3-0
docker tag mirrorgcrio/pause-arm64:3.2 registry.aliyuncs.com/google_containers/pause:3.2
docker tag tkestack/coredns-arm64:1.6.9 registry.aliyuncs.com/google_containers/coredns-arm64:1.6.9

Result

root@horatio:~# docker tag mirrorgcrio/kube-apiserver-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.1
root@horatio:~# docker tag mirrorgcrio/kube-scheduler-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.1
root@horatio:~# docker tag mirrorgcrio/kube-controller-manager-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.1
root@horatio:~# docker tag mirrorgcrio/kube-proxy-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-proxy:v1.18.1
root@horatio:~# docker tag mirrorgcrio/etcd-arm64:3.4.3-0 registry.aliyuncs.com/google_containers/etcd:3.4.3-0
root@horatio:~# docker tag mirrorgcrio/pause-arm64:3.2 registry.aliyuncs.com/google_containers/pause:3.2
root@horatio:~# docker tag tkestack/coredns-arm64:1.6.9 registry.aliyuncs.com/google_containers/coredns-arm64:1.6.9
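
The pull and retag steps can also be scripted. A minimal sketch covering the same image list as the manual commands above:

#!/bin/bash
# Pull each arm64 image and retag it to the name kubeadm expects.
set -e
for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
  docker pull mirrorgcrio/${img}-arm64:v1.18.1
  docker tag mirrorgcrio/${img}-arm64:v1.18.1 registry.aliyuncs.com/google_containers/${img}:v1.18.1
done
docker pull mirrorgcrio/etcd-arm64:3.4.3-0
docker tag mirrorgcrio/etcd-arm64:3.4.3-0 registry.aliyuncs.com/google_containers/etcd:3.4.3-0
docker pull mirrorgcrio/pause-arm64:3.2
docker tag mirrorgcrio/pause-arm64:3.2 registry.aliyuncs.com/google_containers/pause:3.2
docker pull tkestack/coredns-arm64:1.6.9
docker tag tkestack/coredns-arm64:1.6.9 registry.aliyuncs.com/google_containers/coredns-arm64:1.6.9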

Initialization

The init must be given the version that matches the images pulled above.

Run kubeadm init

Command

kubeadm init --apiserver-advertise-address=10.203.1.19 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.1

Result

root@horatio:~# kubeadm init --apiserver-advertise-address=10.203.1.19 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.1
W1223 16:41:24.406091   13803 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [horatio kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.203.1.19]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [horatio localhost] and IPs [10.203.1.19 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [horatio localhost] and IPs [10.203.1.19 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1223 16:41:32.111535   13803 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1223 16:41:32.112957   13803 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.503593 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node horatio as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node horatio as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: igj7rd.xlm267318e42bjt5
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.203.1.19:6443 --token igj7rd.xlm267318e42bjt5 \
    --discovery-token-ca-cert-hash sha256:0d7a42c18ddbe1a0cb1d97e9758904551cf2d5d546fb8f1175391173309865ac
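
The bootstrap token in the join command expires after 24 hours by default; if workers are added later, a fresh join command can be printed on the control-plane node:

kubeadm token create --print-join-command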

Configure kubectl

Command

 mkdir -p /root/.kube && cp /etc/kubernetes/admin.conf /root/.kube/config

Result

root@horatio:~# mkdir -p /root/.kube && \
> cp /etc/kubernetes/admin.conf /root/.kube/config
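
For a root shell, an alternative to copying the file is to point kubectl at the admin kubeconfig directly:

export KUBECONFIG=/etc/kubernetes/admin.conf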

Deploy the flannel network

Create a kube-flannel.yaml file with the following content:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
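
On an arm64 node it is worth confirming that the flannel image resolves to an arm64 variant before applying the manifest (the DaemonSet ran successfully here, so the tag evidently covers arm64); a quick check:

docker pull quay.io/coreos/flannel:v0.13.1-rc1
docker image inspect --format '{{.Os}}/{{.Architecture}}' quay.io/coreos/flannel:v0.13.1-rc1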

Run kubectl apply

root@horatio:~# kubectl apply -f kube-flannel.yaml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check the status

root@horatio:~# kubectl get all -n kube-system                
NAME                                  READY   STATUS    RESTARTS   AGE
pod/coredns-7c98c5f7b9-n8l9g          1/1     Running   0          10m
pod/coredns-7c98c5f7b9-vqcqd          1/1     Running   0          10m
pod/etcd-horatio                      1/1     Running   0          32m
pod/kube-apiserver-horatio            1/1     Running   0          32m
pod/kube-controller-manager-horatio   1/1     Running   0          32m
pod/kube-flannel-ds-bjhzf             1/1     Running   0          26m
pod/kube-proxy-8rfxt                  1/1     Running   0          32m
pod/kube-scheduler-horatio            1/1     Running   0          32m

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   32m

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/kube-flannel-ds   1         1         1       1            1           <none>                   26m
daemonset.apps/kube-proxy        1         1         1       1            1           kubernetes.io/os=linux   32m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           32m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-7c98c5f7b9   2         2         2       10m
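
Since this is a single-node cluster, the control-plane node still carries the node-role.kubernetes.io/master:NoSchedule taint set during init; to let ordinary workloads schedule onto it, the taint can optionally be removed:

kubectl taint nodes --all node-role.kubernetes.io/master-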

Problems

"kubeadm init" does not pull ARM-version images when deploying the cluster

Description

Initialization kept failing with the error below.

root@horatio:~# kubeadm init --apiserver-advertise-address=10.203.1.19 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16  
[init] Using Kubernetes version: v1.20.1
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [horatio kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.203.1.19]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [horatio localhost] and IPs [10.203.1.19 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [horatio localhost] and IPs [10.203.1.19 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Solution

Check the kubelet status

It is normal:

root@horatio:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2020-12-23 13:13:50 CST; 4min 42s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 20359 (kubelet)
    Tasks: 62 (limit: 14745)

Check the container logs

An error is reported:

root@horatio:~# docker logs 858c5d4a664c
standard_init_linux.go:219: exec user process caused: exec format error

After some research: the images that had been downloaded were the x86 version. The ARM version must be pulled first and then tagged to the names the initialization expects. (A quick architecture check is sketched below.)
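
The "exec format error" can be confirmed by inspecting the architecture of a local image; on this node it should report arm64:

docker image inspect --format '{{.Architecture}}' registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.1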

The kubeadm version is too new to initialize a lower cluster version

Description

Use "kubeadm config images list" to see the images the current kubeadm needs to initialize a k8s cluster:

root@horatio:~# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.20.1
k8s.gcr.io/kube-controller-manager:v1.20.1
k8s.gcr.io/kube-scheduler:v1.20.1
k8s.gcr.io/kube-proxy:v1.20.1
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

However, the repository had no ARM-version images for 1.20.1 at that point, so after comparison version 1.18.x was chosen for deployment.
After pulling the images, init had to be run with the 1.18 version specified,
but this fails: this kubeadm only accepts control-plane versions >= 1.19.0.

root@horatio:~# kubeadm init --apiserver-advertise-address=10.203.1.19 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.1
this version of kubeadm only supports deploying clusters with the control plane version >= 1.19.0. Current version: v1.18.1
To see the stack trace of this error execute with --v=5 or higher

Solution

A given kubeadm release can only initialize clusters within a limited version range, so the kubeadm version matching the target cluster must be installed. (The installed version can be checked as sketched below.)
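
Before re-running init, the installed kubeadm version can be confirmed with:

kubeadm version -o short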

Remove the current kubeadm and the related packages

Command

apt --purge -y remove kubeadm kubectl kubelet

Result

root@horatio:~# apt --purge remove kubeadm
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following package was automatically installed and is no longer required:
  cri-tools
Use 'apt autoremove' to remove it.
The following packages will be REMOVED:
  kubeadm*
0 upgraded, 0 newly installed, 1 to remove and 173 not upgraded.
After this operation, 36.2 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 67576 files and directories currently installed.)
Removing kubeadm (1.20.1-00) ...
(Reading database ... 67575 files and directories currently installed.)
Purging configuration files for kubeadm (1.20.1-00) ...
root@horatio:~# apt --purge remove kubectl
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following package was automatically installed and is no longer required:
  cri-tools
Use 'apt autoremove' to remove it.
The following packages will be REMOVED:
  kubectl*
0 upgraded, 0 newly installed, 1 to remove and 173 not upgraded.
After this operation, 37.2 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 67573 files and directories currently installed.)
Removing kubectl (1.20.1-00) ...
root@horatio:~# apt --purge remove kubernetes-cni kubelet
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  conntrack cri-tools socat
Use 'apt autoremove' to remove them.
The following packages will be REMOVED:
  kubelet* kubernetes-cni*
0 upgraded, 0 newly installed, 2 to remove and 173 not upgraded.
After this operation, 177 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 67572 files and directories currently installed.)
Removing kubelet (1.20.1-00) ...
Warning: The unit file, source configuration file or drop-ins of kubelet.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Removing kubernetes-cni (0.8.7-00) ...
dpkg: warning: while removing kubernetes-cni, directory '/opt' not empty so not removed
(Reading database ... 67551 files and directories currently installed.)
Purging configuration files for kubelet (1.20.1-00) ...

Install the pinned version of kubeadm and the related packages

Command

 apt install kubeadm=1.18.0-00 kubectl=1.18.0-00 kubelet=1.18.0-00

Result

root@horatio:~# apt install kubeadm=1.18.0-00 kubectl=1.18.0-00 kubelet=1.18.0-00  
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  kubernetes-cni
The following NEW packages will be installed:
  kubeadm kubectl kubelet kubernetes-cni
0 upgraded, 4 newly installed, 0 to remove and 173 not upgraded.
Need to get 55.0 MB of archives.
After this operation, 258 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubernetes-cni arm64 0.8.7-00 [23.1 MB]
Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubelet arm64 1.18.0-00 [17.2 MB]
Get:3 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubectl arm64 1.18.0-00 [7,622 kB]
Get:4 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubeadm arm64 1.18.0-00 [7,073 kB]
Fetched 55.0 MB in 4s (14.5 MB/s)  
Selecting previously unselected package kubernetes-cni.
(Reading database ... 67551 files and directories currently installed.)
Preparing to unpack .../kubernetes-cni_0.8.7-00_arm64.deb ...
Unpacking kubernetes-cni (0.8.7-00) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../kubelet_1.18.0-00_arm64.deb ...
Unpacking kubelet (1.18.0-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../kubectl_1.18.0-00_arm64.deb ...
Unpacking kubectl (1.18.0-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../kubeadm_1.18.0-00_arm64.deb ...
Unpacking kubeadm (1.18.0-00) ...
Setting up kubernetes-cni (0.8.7-00) ...
Setting up kubelet (1.18.0-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubectl (1.18.0-00) ...
Setting up kubeadm (1.18.0-00) ...

Miscellaneous

List installed packages

dpkg --list

List available kubeadm versions

apt-cache show kubeadm | grep Version
