Bone-Forging Stage, Level 3: Setting Up a Highly Available k8s Master for a Development Environment

Know how it works, and know why it works.

That is a realm of its own — the highest realm of learning.

This article covers a development environment only; it cannot be used as-is in production, because every component here runs as a container. The official recommendation is to run etcd as an external cluster — in production at the very least.
See the next level: a production HA master cluster with the etcd cluster hosted externally.

How k8s master high availability works

First, let's understand how high availability is implemented under the hood, starting with an architecture diagram.


(Architecture diagram: a load-balancer IP in front of the master nodes, with an etcd cluster as the shared store.)
  • How it works
  1. In the diagram above, a load-balancer IP fronts the master services; this can be an SLB, MetalLB, or Nginx.
  2. Indeed, high availability must preserve data consistency — that is the crux. So etcd must first of all be a cluster: it is the core service and acts as the cluster store. In general, etcd is not deployed in containers.
  3. The k8s management-layer components, kube-scheduler and kube-controller-manager, are the key to the "one active master, several standbys" model. k8s implements leader election for these two components through etcd: the masters run an election algorithm and elect a leader, and each component only carries out its work after learning who the leader is.
  4. The endpoints of kube-scheduler and kube-controller-manager are also kept in etcd, i.e. the endpoint of the active node plus information about the standby masters. Every node periodically checks this endpoint record — effectively a heartbeat — to learn whether the active node is still available; if the leader goes away, each node tries to update the record with itself as the leader, i.e. a new election is triggered.
  5. The kube-scheduler and kube-controller-manager instances never talk to each other directly; they rely on etcd's data consistency to guarantee that, even under distributed, highly concurrent access, the leader is globally unique. You can inspect the current leader record as shown right below.
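In this release (v1.13) the leader-election record for each of these components is kept as an annotation on an Endpoints object in kube-system. A minimal way to peek at it, assuming kubectl is already configured against the cluster:

# Dump the kube-scheduler Endpoints object; the
# control-plane.alpha.kubernetes.io/leader annotation names the current leader
# and shows its lease renew times.
kubectl -n kube-system get endpoints kube-scheduler -o yaml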

Building the master cluster

  • Preparation
  1. Install docker and friends first, and turn off the firewall, swap, and so on (a command sketch follows this list).
  2. Load the required component images into docker on this node.
  3. Install the k8s node-side components, kubelet and kubectl, keeping their versions consistent with each other.
  4. kubeadm is needed, along with a configuration file that we will prepare.
  5. With multiple nodes a load-balancer service is required; here an nginx is used. In my setup nginx sits on the same VM as master1; normally nginx itself should be a highly available cluster.
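A minimal sketch of that node preparation, assuming CentOS-style tooling (the image tarball name below is only a placeholder):

# Turn off swap now and keep it off across reboots (kubelet requires this)
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Stop the firewall; SELinux is usually set to permissive as well
systemctl stop firewalld && systemctl disable firewalld
setenforce 0

# Load the pre-downloaded component images into docker
docker load -i k8s-v1.13.0-images.tar   # placeholder filename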

A worked example follows:

  1. We use nginx as the load balancer in front of the masters, to achieve high availability.
    nginx 1.9.11 and later support the stream block (a sibling of http): the http module handles layer-7 load balancing, while the stream module does layer-4 forwarding, proxying, and load balancing. With stream in play, nginx effectively takes on the role of HAProxy. Nginx is impressive!
    Add the following to nginx.conf:

stream {
    # Layer-4 (TCP) front end for the kube-apiservers
    server {
        listen 8443;                 # the port kubeadm and kubectl will talk to
        proxy_pass kube_apiserver;
    }

    upstream kube_apiserver {
        # the real apiservers on master1 and master2
        server 192.168.10.133:6443 weight=50 max_fails=3 fail_timeout=5s;
        server 192.168.10.134:6443 weight=50 max_fails=3 fail_timeout=5s;
    }
}
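It may be worth confirming that this nginx binary was actually built with the stream module and that the new configuration parses, before pointing kubeadm at it:

# Was nginx compiled with --with-stream?
./nginx -V 2>&1 | grep -o with-stream

# Validate the configuration, then reload it without dropping connections
./nginx -t
./nginx -s reload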

Nginx version info:

[root@k8s-master nginx]# ./nginx  -v
nginx version: nginx/1.14.1
[root@k8s-master nginx]# 
[root@k8s-master nginx]# 
[root@k8s-master nginx]# ps  -ef |grep nginx 
root       7144  24180  0 20:36 pts/0    00:00:00 grep --color=auto nginx
root     105442      1  0 19:35 ?        00:00:00 nginx: master process ./nginx
www      105443 105442  0 19:35 ?        00:00:03 nginx: worker process

  2. kubeadm, kubelet, kubectl and the other components are assumed to be installed already.
    If this machine previously ran a single-node cluster, it can be reset with the following command.
[root@k8s-master k8s]# kubeadm reset 
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: 
  3. Create a kubeadm-config.yaml file
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
apiServer:
  certSANs:
  - "192.168.10.133"
controlPlaneEndpoint: "192.168.10.133:8443"
networking:
  podSubnet: 10.244.0.0/16


Note: podSubnet: 10.244.0.0/16 is the pod subnet used by the flannel network plugin, and 192.168.10.133:8443 is the address of the Nginx load balancer configured above.
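Optionally, the control-plane images required by this configuration can be pulled ahead of time (the kubeadm init output also hints at this), which speeds up the init step and surfaces image-pull problems early:

kubeadm config images pull --config=kubeadm-config.yaml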

  4. Run kubeadm init:

[root@k8s-master k8s]# kubeadm init --config=kubeadm-config.yaml 
[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.133 192.168.10.133 192.168.10.133]
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.10.133 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.10.133 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 27.174431 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7xttna.zdgzyhbrodvre1p9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.10.133:8443 --token 7xttna.zdgzyhbrodvre1p9 --discovery-token-ca-cert-hash sha256:09560c6c62952f7f69ddb170e241fb9d9688b77ba9c060af90626f0da6f0a26e


Finally, save the join command:

  kubeadm join 192.168.10.133:8443 --token 7xttna.zdgzyhbrodvre1p9 --discovery-token-ca-cert-hash sha256:09560c6c62952f7f69ddb170e241fb9d9688b77ba9c060af90626f0da6f0a26e
  5. Set up the kubeconfig and check the result

[root@k8s-master k8s]# kubectl get nodes 
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
[root@k8s-master k8s]# 
[root@k8s-master k8s]# 
[root@k8s-master k8s]#  mkdir -p $HOME/.kube
[root@k8s-master k8s]#  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite '/root/.kube/config'? y
[root@k8s-master k8s]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master k8s]# kubectl get no 
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   9m    v1.13.0

  6. Install flannel
    See kube-flannel.yml; the file contents are:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg


Install it:


[root@k8s-master k8s]# vim flannel.yaml
[root@k8s-master k8s]# 
[root@k8s-master k8s]# kubectl  apply -f flannel.yaml   
clusterrole.rbac.authorization.k8s.io/flannel created

clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created

Check the result:

[root@k8s-master k8s]# kubectl  get po --all-namespaces 
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-d9p4n             1/1     Running   0          12m
kube-system   coredns-86c58d9df4-p2xbq             1/1     Running   0          12m
kube-system   etcd-k8s-master                      1/1     Running   0          11m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          11m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          11m
kube-system   kube-flannel-ds-amd64-kcpxg          1/1     Running   0          67s
kube-system   kube-proxy-6krck                     1/1     Running   0          12m
kube-system   kube-scheduler-k8s-master            1/1     Running   0          11m

flannel is working properly. The master1 node is fully set up.

7. Copy the certificates generated on master1 to master2

The script (it targets a single master2; if you also want a master3, just append its IP to CONTROL_PLANE_IPS, e.g. CONTROL_PLANE_IPS="172.18.0.82 172.18.0.83"):

[root@k8s-master k8s]# cat master-cluster-certs.sh 
USER=root
CONTROL_PLANE_IPS="192.168.10.134"
for host in ${CONTROL_PLANE_IPS}; do
     scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
     scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
     scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
     scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
     scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
     scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
     scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
     scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
     scp /etc/kubernetes/admin.conf "${USER}"@$host:
done

Make the script executable, then run it:

[root@k8s-master k8s]# ./master-cluster-certs.sh  
root@192.168.10.134's password: 
ca.crt                                                                                                                                   100% 1025   678.5KB/s   00:00    
root@192.168.10.134's password: 
ca.key                                                                                                                                   100% 1675   230.8KB/s   00:00    
root@192.168.10.134's password: 
sa.key                                                                                                                                   100% 1675   901.2KB/s   00:00    
root@192.168.10.134's password: 
sa.pub                                                                                                                                   100%  451   400.4KB/s   00:00    
root@192.168.10.134's password: 
front-proxy-ca.crt                                                                                                                       100% 1038   611.1KB/s   00:00    
root@192.168.10.134's password: 
front-proxy-ca.key                                                                                                                       100% 1675   645.9KB/s   00:00    
root@192.168.10.134's password: 
ca.crt                                                                                                                                   100% 1017   789.7KB/s   00:00    
root@192.168.10.134's password: 
Permission denied, please try again.
root@192.168.10.134's password: 
ca.key                                                                                                                                   100% 1675     1.2MB/s   00:00    
root@192.168.10.134's password: 
admin.conf                                                                                                                               100% 5454     3.7MB/s   00:00    
[root@k8s-master k8s]# 
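Each scp in the loop prompts for the root password of 192.168.10.134, as the output above shows. An optional convenience is to push an SSH key to the target master first (assumes OpenSSH's ssh-keygen and ssh-copy-id are available):

# Create a key pair if none exists yet, then install it on master2
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@192.168.10.134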

8. Log in to 192.168.10.134 and move the copied cert files into their proper location, i.e. /etc/kubernetes/pki:

[root@k8s-node1 ~]# cat mv-certs.sh 
 mkdir -p /etc/kubernetes/pki/etcd
 mv ca.crt /etc/kubernetes/pki/
 mv ca.key /etc/kubernetes/pki/
 mv sa.pub /etc/kubernetes/pki/
 mv sa.key /etc/kubernetes/pki/
 mv front-proxy-ca.crt /etc/kubernetes/pki/
 mv front-proxy-ca.key /etc/kubernetes/pki/
 mv etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
 mv etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
 mv admin.conf /etc/kubernetes/admin.conf

Run it:

[root@k8s-node1 ~]# 
[root@k8s-node1 ~]# chmod 755  mv-certs.sh 
[root@k8s-node1 ~]# ./mv-certs.sh 

Check:

[root@k8s-node1 ~]# cd /etc/kubernetes/pki/
[root@k8s-node1 pki]# ls
ca.crt  ca.key  etcd  front-proxy-ca.crt  front-proxy-ca.key  sa.key  sa.pub
[root@k8s-node1 pki]# 
[root@k8s-node1 pki]# 
[root@k8s-node1 pki]# cd etcd/
[root@k8s-node1 etcd]# ls -l
total 8
-rw-r--r-- 1 root root 1017 Oct 28 19:59 ca.crt
-rw------- 1 root root 1675 Oct 28 19:59 ca.key

9. Initialize the new node master2

The kubeadm command from the official docs:

kubeadm join vip:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --experimental-control-plane

Note that this command carries one extra flag that the master1 init did not: --experimental-control-plane.

--experimental-control-plane means an additional control-plane instance: the master components such as kube-apiserver and etcd will also be deployed on this node, so it joins as a control-plane node, i.e. a master.

So in my case the command is:

 kubeadm join 192.168.10.133:8443 --token 7xttna.zdgzyhbrodvre1p9 --discovery-token-ca-cert-hash sha256:09560c6c62952f7f69ddb170e241fb9d9688b77ba9c060af90626f0da6f0a26e  --experimental-control-plane

The result is shown below (the first attempt used a stale token and was rejected; the join with the saved token then succeeded):

[root@k8s-node1 etcd]#  kubeadm join 192.168.10.133:8443 --token wipo2g.wl0is1y9zm7fe7je --discovery-token-ca-cert-hash sha256:15c3869d81037dba2eec8456b9ff7722848586b9df3c16afeac1ac04fe3f3026  --experimental-control-plane
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.10.133:8443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.10.133:8443"
[discovery] Failed to connect to API Server "192.168.10.133:8443": token id "wipo2g" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "192.168.10.133:8443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.10.133:8443"
[discovery] Failed to connect to API Server "192.168.10.133:8443": token id "wipo2g" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "192.168.10.133:8443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.10.133:8443"
[discovery] Failed to connect to API Server "192.168.10.133:8443": token id "wipo2g" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
^C
[root@k8s-node1 etcd]#  kubeadm join 192.168.10.133:8443 --token 7xttna.zdgzyhbrodvre1p9 --discovery-token-ca-cert-hash sha256:09560c6c62952f7f69ddb170e241fb9d9688b77ba9c060af90626f0da6f0a26e  --experimental-control-plane
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.10.133:8443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.10.133:8443"
[discovery] Requesting info from "https://192.168.10.133:8443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.10.133:8443"
[discovery] Successfully established connection with API Server "192.168.10.133:8443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[join] Running pre-flight checks before initializing the new control plane instance
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node1 localhost] and IPs [192.168.10.134 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node1 localhost] and IPs [192.168.10.134 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.134 192.168.10.133 192.168.10.133]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing up-to-date kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Checking Etcd cluster health
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node1" as an annotation
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Master label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Configure the kubeconfig on master2:

    # mkdir -p $HOME/.kube
    # sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    # sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the result:

[root@k8s-node1 etcd]# kubectl get no 
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   31m     v1.13.0
k8s-node1    Ready    master   2m58s   v1.13.0

With that, a two-master setup is complete. If you want to add a master3, just repeat the master2 steps, because everything hinges on the cert files ending up under the corresponding /etc/kubernetes/pki directory.

Verify the result on master2, i.e. on .134:

[root@k8s-node1 etcd]# kubectl  get cm -n kube-system 
NAME                                 DATA   AGE
coredns                              1      33m
extension-apiserver-authentication   6      33m
kube-flannel-cfg                     2      22m
kube-proxy                           2      33m
kubeadm-config                       2      33m
kubelet-config-1.13                  1      33m

The kubeadm-config ConfigMap is visible in the kube-system namespace even though the kubeadm-config.yaml file was submitted on master1, which shows the two masters are working off the same cluster data.

[root@k8s-node1 etcd]# kubectl  get  po -n kube-system  -o wide 
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
coredns-86c58d9df4-d9p4n             1/1     Running   0          40m   10.244.0.78      k8s-master   <none>           <none>
coredns-86c58d9df4-p2xbq             1/1     Running   0          40m   10.244.0.79      k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   0          39m   192.168.10.133   k8s-master   <none>           <none>
etcd-k8s-node1                       1/1     Running   0          12m   192.168.10.134   k8s-node1    <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0          39m   192.168.10.133   k8s-master   <none>           <none>
kube-apiserver-k8s-node1             1/1     Running   0          12m   192.168.10.134   k8s-node1    <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   1          39m   192.168.10.133   k8s-master   <none>           <none>
kube-controller-manager-k8s-node1    1/1     Running   0          12m   192.168.10.134   k8s-node1    <none>           <none>
kube-flannel-ds-amd64-kcpxg          1/1     Running   0          29m   192.168.10.133   k8s-master   <none>           <none>
kube-flannel-ds-amd64-qmpln          1/1     Running   0          12m   192.168.10.134   k8s-node1    <none>           <none>
kube-proxy-6krck                     1/1     Running   0          40m   192.168.10.133   k8s-master   <none>           <none>
kube-proxy-jb726                     1/1     Running   0          12m   192.168.10.134   k8s-node1    <none>           <none>
kube-scheduler-k8s-master            1/1     Running   1          39m   192.168.10.133   k8s-master   <none>           <none>
kube-scheduler-k8s-node1             1/1     Running   0          12m   192.168.10.134   k8s-node1    <none>           <none>

So my former worker node k8s-node1 has now effectively become a master node.
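As one more sanity check of the HA behavior, you can look at which node currently holds the kube-controller-manager leader lease; as noted earlier, in this release the record lives in an Endpoints annotation (run from either master):

# Print the current kube-controller-manager leader record
kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'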

Appendix:
If the join command was forgotten or never saved, that is not a problem — just generate a fresh one with kubeadm:

[root@k8s-node1 etcd]#  kubeadm token  create --print-join-command
kubeadm join 192.168.10.133:8443 --token 1eldhg.u9lhj81e6gwepz7a --discovery-token-ca-cert-hash sha256:09560c6c62952f7f69ddb170e241fb9d9688b77ba9c060af90626f0da6f0a26e
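If only the --discovery-token-ca-cert-hash is missing, it can also be recomputed on master1 from the cluster CA certificate with the standard openssl pipeline from the kubeadm documentation:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'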

Next level: a production HA master cluster with an external etcd cluster
