Installing Kubernetes 1.19 with Static Pods

1 Overview

Previous documents covered installing Kubernetes from binaries. The binary approach is tedious, labor-intensive, and hard to manage, so this article installs Kubernetes with static Pods instead: apart from the kubelet itself, every component is started as a container. Before we begin, a word on what a static Pod is. Normally Pods are created by Kubernetes, but before the cluster is built there is no Kubernetes to create them with. Fortunately there is a way to create Pods without a cluster: the kubelet itself can create them, and Pods created this way are called static Pods.
Since such a Pod is not created through Kubernetes, it is not managed by Kubernetes either. The kubelet does, however, create a mirror Pod for it so that Kubernetes can see it; the Pod is then visible from inside the cluster via kubectl, and when the static Pod is deleted its mirror Pod is deleted as well.
On startup the kubelet scans its manifest directory for YAML files and creates the corresponding Pods. The directory is set by staticPodPath in the kubelet configuration (or the --pod-manifest-path flag); the default is /etc/kubernetes/manifests.
That concludes the introduction to static Pods.
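The mechanism above can be sketched in a few lines of shell: a static Pod is nothing more than a Pod manifest dropped into the kubelet's manifest directory. This sketch writes into a temporary directory (and names a hypothetical nginx image) so it is safe to run anywhere; on a real node the target would be /etc/kubernetes/manifests.

```shell
# Sketch: "creating" a static Pod is just writing a manifest file.
# A temp dir stands in for /etc/kubernetes/manifests here.
manifests=$(mktemp -d)
cat >"$manifests/nginx-static.yaml" <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-static
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.19   # hypothetical image, for illustration only
EOF
echo "static Pod manifest written to $manifests/nginx-static.yaml"
```

Once a kubelet runs with staticPodPath pointed at such a directory, it creates the Pod on its own and registers a mirror Pod with the apiserver under the name `nginx-static-<node-name>`.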

2 Architecture Planning

The machines used in this article are listed below. etcd runs on a single node here (a cluster is recommended), and the Kubernetes master in this article is also a single point.


[image: table of hosts and their roles]

3 Initialization

3.1 Network Planning

master-ip: 192.168.20.90
cluster-cidr: 10.244.0.0/16
service-ip-range: 10.96.0.0/12
dns: 10.96.0.10
bootstrap token
token-id: head -c 6 /dev/urandom | md5sum | head -c 6 (generated with this command)
token-secret: head -c 16 /dev/urandom | md5sum | head -c 16 (generated with this command)
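The two commands above can be wired into a single snippet; `token_id` and `token_secret` are hypothetical variable names used only for this sketch:

```shell
# Sketch: generate the bootstrap token halves.
# A valid bootstrap token has the form [a-z0-9]{6}.[a-z0-9]{16};
# md5sum output is hex, so these pieces satisfy that pattern.
token_id=$(head -c 6 /dev/urandom | md5sum | head -c 6)
token_secret=$(head -c 16 /dev/urandom | md5sum | head -c 16)
echo "bootstrap token: ${token_id}.${token_secret}"
```

The joined form `<token-id>.<token-secret>` is what later goes into bootstrap-kubelet.conf and the bootstrap-token secret.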

3.2 Required Packages

Update the Ubuntu package sources; the Aliyun mirror is recommended.

apt-get update && apt-get install -y apt-transport-https curl
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat  >/etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y docker.io kubelet kubernetes-cni kubectl
systemctl enable docker
systemctl enable kubelet

3.3 Disable Swap

Temporary: swapoff -a
Permanent: comment out the swap line in /etc/fstab
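The permanent step can be scripted with sed. This sketch operates on a throwaway copy of fstab so it is safe to run as-is; on a real node you would target /etc/fstab itself (and also run swapoff -a, which needs root):

```shell
# Sketch: comment out the swap entry in (a copy of) /etc/fstab.
fstab=$(mktemp)
cat >"$fstab" <<'EOF'
UUID=abcd-1234 /    ext4 errors=remount-ro 0 1
/swapfile      none swap sw                0 0
EOF
# Prefix any uncommented line whose filesystem type is swap with '#'.
sed -i.bak -E '/^[^#].*[[:space:]]swap[[:space:]]/s/^/#/' "$fstab"
grep swap "$fstab"
```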

3.4 Required Directories

[ ! -d "/etc/cni/net.d" ] && mkdir -p /etc/cni/net.d
[ ! -d "/tmp/cert" ] && mkdir /tmp/cert
[ ! -d "/etc/kubernetes/pki/etcd" ] && mkdir -p /etc/kubernetes/pki/etcd
[ ! -d "/etc/kubernetes/manifests" ] && mkdir -p /etc/kubernetes/manifests
[ ! -d /var/lib/kubelet ] && mkdir -p /var/lib/kubelet

4 Self-Signed Certificates

4.1 Tooling

We use the cfssl toolchain to generate the self-signed certificates.

curl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl*

4.2 Required Certificates

Signing configuration; every certificate below is issued with this config file.

cat >/tmp/cert/config.json<<EOF
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF

4.2.1 Root CA
This article uses a single root CA for everything.

cat >/tmp/cert/ca.json<<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "SH",
            "ST": "SH",
            "O": "k8s",
            "OU": "System"
        }
    ],
    "ca": {
        "expiry": "87600h"
    }
}
EOF
cd /tmp/cert
cfssl gencert -initca ca.json|cfssljson -bare ca - && mv ca.pem ca.crt && mv ca-key.pem ca.key
rm -f ca.csr ca.json

Note the expiry field, the certificate lifetime: the cfssl default is 8760h, i.e. one year (this article uses 87600h, ten years).
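To verify the lifetime actually encoded in the generated CA, openssl can print the notAfter field. So that this sketch is self-contained it first creates a throwaway self-signed certificate with a comparable ten-year lifetime; on the real host you would point it at /tmp/cert/ca.crt instead:

```shell
# Sketch: check a certificate's expiry date.
dir=$(mktemp -d)
# Throwaway self-signed cert standing in for ca.crt (3650 days ~ 87600h).
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=kubernetes" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
openssl x509 -in "$dir/ca.crt" -noout -subject -enddate
```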

4.2.2 etcd server certificate
List all etcd nodes in hosts; this article has only one node.

cat >/tmp/cert/etcd-server.json<<EOF
{
    "CN": "etcd",
    "hosts": [
      "127.0.0.1",
      "192.168.20.87"
      ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "SH",
            "ST": "SH",
            "O": "k8s",
            "OU": "System"
        }
   ]
}
EOF
cd /tmp/cert
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=config.json -profile=kubernetes etcd-server.json | cfssljson -bare etcd-server && mv etcd-server.pem etcd-server.crt && mv etcd-server-key.pem etcd-server.key

4.2.3 etcd peer certificate
Used for authentication between members of the etcd cluster.
List all etcd nodes in hosts; this article has only one node.

cat >/tmp/cert/etcd-peer.json<<EOF
{
    "CN": "peer",
    "hosts": [
      "127.0.0.1",
      "192.168.20.87"
      ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "SH",
            "ST": "SH",
            "O": "k8s",
            "OU": "System"
        }
   ]
}
EOF
cd /tmp/cert
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=config.json -profile=kubernetes etcd-peer.json | cfssljson -bare etcd-peer && mv etcd-peer.pem etcd-peer.crt && mv etcd-peer-key.pem etcd-peer.key

4.2.4 etcd client certificate
Used by the apiserver to access etcd; the apiserver acts as an etcd client.

cat >/tmp/cert/etcd-client.json<<EOF
{
    "CN": "client",
    "hosts": [""],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "SH",
            "ST": "SH",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
cd /tmp/cert
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=config.json -profile=kubernetes etcd-client.json | cfssljson -bare  etcd-client && mv  etcd-client.pem  etcd-client.crt && mv etcd-client-key.pem  etcd-client.key

4.2.5 Cluster admin certificate
Used by administrators to manage the Kubernetes cluster.

cat > /tmp/cert/admin.json<<EOF
{
    "CN": "kubernetes-admin",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "SH",
            "ST": "SH",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF
cd /tmp/cert
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=config.json -profile=kubernetes admin.json | cfssljson -bare admin && mv admin.pem admin.crt && mv admin-key.pem admin.key

4.2.6 apiserver certificate
hosts must list the apiserver node IP, the first IP of the cluster-cidr range, the first IP of the service-ip-range, and the DNS IP.

cat >/tmp/cert/apiserver.json<<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "192.168.20.90",
      "10.244.0.1",
      "10.96.0.1",
      "10.96.0.10",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "SH",
            "ST": "SH",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cd /tmp/cert
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=config.json -profile=kubernetes apiserver.json | cfssljson -bare apiserver && mv apiserver.pem apiserver.crt && mv apiserver-key.pem apiserver.key

4.2.7 apiserver-kubelet-client
Used for apiserver-to-kubelet communication. O is the "system:masters" group, which by default has full permissions on every Kubernetes resource.

cat >/tmp/cert/apiserver-kubelet-client.json<<EOF
{
    "CN": "apiserver-kubelet-client",
    "hosts": [""],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "SH",
            "ST": "SH",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF
cd /tmp/cert
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=config.json -profile=kubernetes apiserver-kubelet-client.json | cfssljson -bare apiserver-kubelet-client && mv apiserver-kubelet-client.pem apiserver-kubelet-client.crt && mv apiserver-kubelet-client-key.pem apiserver-kubelet-client.key

4.2.8 controller-manager
Used by the controller-manager to access the apiserver.

cat >/tmp/cert/controller-manager.json<<EOF
{
    "CN": "system:kube-controller-manager",
    "hosts": [""],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "SH",
            "ST": "SH",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
cd /tmp/cert
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=config.json -profile=kubernetes controller-manager.json | cfssljson -bare controller-manager && mv controller-manager.pem controller-manager.crt && mv controller-manager-key.pem controller-manager.key 

4.2.9 scheduler
Used by the scheduler to talk to the apiserver.

cat >/tmp/cert/scheduler.json<<EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [""],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "SH",
            "ST": "SH",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
cd /tmp/cert
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=config.json -profile=kubernetes scheduler.json | cfssljson -bare scheduler && mv scheduler.pem scheduler.crt && mv scheduler-key.pem scheduler.key

4.2.10 front-proxy-ca
Used by the apiserver aggregation layer.

cat >/tmp/cert/front-proxy-ca.json<<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "SH",
            "ST": "SH",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
cd /tmp/cert
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=config.json -profile=kubernetes front-proxy-ca.json | cfssljson -bare front-proxy-ca && mv front-proxy-ca.pem front-proxy-ca.crt && mv front-proxy-ca-key.pem front-proxy-ca.key

4.2.11 front-proxy-client
Used by the apiserver aggregation layer.

cat >/tmp/cert/front-proxy-client.json<<EOF
{
    "CN": "front-proxy-client",
    "hosts": [""],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "SH",
            "ST": "SH",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
cd /tmp/cert
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=config.json -profile=kubernetes front-proxy-client.json | cfssljson -bare front-proxy-client && mv front-proxy-client.pem front-proxy-client.crt && mv front-proxy-client-key.pem front-proxy-client.key

4.2.12 serviceaccount keys
Used for Pod-to-apiserver authentication.

cd /tmp/cert
openssl genrsa -out sa.key 1024  &&   openssl rsa -in sa.key -pubout -out sa.pub
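To confirm that sa.key and sa.pub really form a pair, derive the public key from the private one and compare. The sketch below regenerates the pair in a temp directory so it can be run anywhere (the 1024-bit size mirrors the command above; 2048 bits would be the safer choice today):

```shell
# Sketch: generate the serviceaccount key pair and verify it matches.
dir=$(mktemp -d)
openssl genrsa -out "$dir/sa.key" 1024 2>/dev/null
openssl rsa -in "$dir/sa.key" -pubout -out "$dir/sa.pub" 2>/dev/null
# The public key derived from sa.key must equal sa.pub byte for byte.
openssl rsa -in "$dir/sa.key" -pubout 2>/dev/null | diff - "$dir/sa.pub" \
  && echo "serviceaccount key pair matches"
```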

In total, some twenty certificates and keys are issued.


[image: list of generated certificate files]

4.3 Distributing the Certificates

Copy the certificates shown above into /etc/kubernetes/pki and /etc/kubernetes/pki/etcd. The CA certificate in pki and in pki/etcd is the same.
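A copy script for this step might look like the sketch below. The paths are parameterized so the same script can be dry-run against temporary directories, as it is here with stand-in files; on the master you would set SRC=/tmp/cert and PKI=/etc/kubernetes/pki (SRC, PKI, and the exact file list are assumptions for this sketch):

```shell
# Sketch (dry run): lay the issued certificates out under pki/ and pki/etcd/.
SRC=${SRC:-$(mktemp -d)}
PKI=${PKI:-$(mktemp -d)/pki}
mkdir -p "$PKI/etcd"
# Stand-in files so the dry run has something to copy.
for f in ca.crt ca.key apiserver.crt apiserver.key sa.key sa.pub; do
  touch "$SRC/$f"
done
for f in etcd-server etcd-peer etcd-client; do
  touch "$SRC/$f.crt" "$SRC/$f.key"
done
cp "$SRC"/ca.* "$SRC"/apiserver.* "$SRC"/sa.* "$PKI/"
cp "$SRC"/ca.crt "$SRC"/etcd-* "$PKI/etcd/"   # pki and pki/etcd share the same CA
ls "$PKI" "$PKI/etcd"
```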

5 Generating kubeconfig Files

Using the certificates issued above, create a kubeconfig for each component; each component then uses its kubeconfig to communicate with the apiserver. Remember to replace master_ip with the actual master IP.

5.1 controller-manager.conf

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --server=https://${master_ip}:6443 --kubeconfig=/etc/kubernetes/controller-manager.conf
kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/pki/controller-manager.crt --embed-certs=true --client-key=/etc/kubernetes/pki/controller-manager.key --kubeconfig=/etc/kubernetes/controller-manager.conf
kubectl config set-context system:kube-controller-manager@kubernetes --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=/etc/kubernetes/controller-manager.conf
kubectl config use-context system:kube-controller-manager@kubernetes --kubeconfig=/etc/kubernetes/controller-manager.conf

5.2 scheduler.conf

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --server=https://${master_ip}:6443 --kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config set-credentials system:kube-scheduler --client-certificate=/etc/kubernetes/pki/scheduler.crt --embed-certs=true --client-key=/etc/kubernetes/pki/scheduler.key --kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config set-context system:kube-scheduler@kubernetes --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config use-context system:kube-scheduler@kubernetes  --kubeconfig=/etc/kubernetes/scheduler.conf

5.3 cluster-admin.conf

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --server=https://${master_ip}:6443 --kubeconfig=$HOME/.kube/config
kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.crt --embed-certs=true --client-key=/etc/kubernetes/pki/admin.key --kubeconfig=$HOME/.kube/config
kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=$HOME/.kube/config
kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=$HOME/.kube/config

5.4 kubelet-bootstrap.conf

Note the --token: its format is <6 random characters>.<16 random characters>. This is a temporary token that serves as the kubelet's bootstrap credential for its first contact with the apiserver; the underlying mechanism is covered in "What is Kubelet TLS Bootstrap".

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --server=https://${master_ip}:6443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
kubectl config set-credentials kubelet-bootstrap --token=${token-id}.${token-secret} --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
kubectl config set-context kubelet-bootstrap@kubernetes --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
kubectl config use-context kubelet-bootstrap@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf

6 Starting the Component Containers from YAML Files

6.1 apiserver.yml

cat >/etc/kubernetes/manifests/apiserver.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: ${master_ip}:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=${master_ip}
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/etcd/etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-client.key
    - --etcd-servers=https://${etcd_ip}:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --runtime-config=api/all=true
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=${service_ip_range}
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.19.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: ${master_ip}
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: ${master_ip}
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: ${master_ip}
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
EOF

6.2 controller-manager.yml

cat >/etc/kubernetes/manifests/controller-manager.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --horizontal-pod-autoscaler-sync-period=10s
    - --horizontal-pod-autoscaler-use-rest-clients=true
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-monitor-grace-period=10s
    - --port=10252
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --use-service-account-credentials=true
    - --allocate-node-cidrs=true
    - --cluster-cidr=${cluster_cidr}
    - --feature-gates=RotateKubeletServerCertificate=true
    image: k8s.gcr.io/kube-controller-manager:v1.19.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
EOF

6.3 scheduler.yml

cat >/etc/kubernetes/manifests/scheduler.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --port=10251
    image: k8s.gcr.io/kube-scheduler:v1.19.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
EOF

6.4 kubelet.yml

cat >/var/lib/kubelet/config.yaml<<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: cgroupfs
clusterDNS:
- ${dns}
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
logging: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
featureGates:
  RotateKubeletClientCertificate: true
  RotateKubeletServerCertificate: true
runtimeRequestTimeout: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
# Enable kubelet server TLS bootstrap. For security reasons, automatic approval of
# server-side CSRs was removed after v1.12, so serving-certificate CSRs must be approved manually.
serverTLSBootstrap: true
EOF

7 The kubelet Service

7.1 The kubelet.service File

cat >/lib/systemd/system/kubelet.service <<'EOF'
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --config=/var/lib/kubelet/config.yaml \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause:3.2

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

7.2 Restart the kubelet Service

systemctl daemon-reload
systemctl restart kubelet
At this point the cluster comes up, but the master node is in the NotReady state because it has not yet registered with the apiserver.

8 Creating the Bootstrap Secret

cat <<EOF |kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  # Name MUST be of form "bootstrap-token-<token id>"
  name: bootstrap-token-${token-id}
  namespace: kube-system

# Type MUST be 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
  # Human readable description. Optional.
  description: "The default bootstrap token generated by 'kubeadm init'."

  # Token ID and secret. Required.
  token-id: '${token-id}'
  token-secret: '${token-secret}'

  # Expiration. Optional.
  expiration: "2020-12-10T00:00:11Z"

  # Allowed usages.
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"

  # Extra groups to authenticate the token as. Must start with "system:bootstrappers:"
  auth-extra-groups: system:bootstrappers:kubelet-bootstrap
EOF

9 CSR Authorization

cat <<EOF |kubectl apply -f -
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/nodeclient"]
  verbs: ["create"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-auto-approve-csr
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient 
  apiGroup: rbac.authorization.k8s.io

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata: 
  name: allow-system:bootstrappers-create-csr
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io


---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-auto-renew-crt
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-auto-renew-crt
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
  apiGroup: rbac.authorization.k8s.io
EOF

The node should now join the cluster.

10 Pitfalls

Problem 1

User "system:anonymous" cannot get resource "nodes" in API group "" at the cluster scope
Solution:
Check that the token in the bootstrap-token secret matches the token in bootstrap-kubelet.conf.

Problem 2

Unable to register node "k8s-master" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
Solution:
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
kubectl create clusterrolebinding node-client-auto-approve-csr --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers

Problem 3

invalid bearer token
The token has expired or is invalid.

Problem 4

You may hit `10.96.0.1:443 i/o timeout`.
Solution:
In the kube-proxy ConfigMap, set
data.kubeconfig.conf.clusters.cluster.server: https://192.168.20.104:6443 (the master IP must be written correctly; otherwise kube-proxy starts with errors, the iptables rules do not take effect, and there is no forwarding rule for 10.96.0.1)
