Installing Kubernetes v1.19.4 on CentOS 7

1. Deployment Environment

Host list:

Hostname   OS        IP               Host spec    k8s version
master     CentOS 7  192.168.214.128  2 CPU, 2 GB  v1.19.4
node1      CentOS 7  192.168.214.129  2 CPU, 2 GB  v1.19.4
node2      CentOS 7  192.168.214.130  2 CPU, 2 GB  v1.19.4

The cluster consists of 3 servers: 1 master and 2 nodes.

The master requires at least 2 CPUs and 2 GB of RAM (more of both is better).
Each node requires at least 1 CPU and 2 GB of RAM.

Kubernetes cluster components:

etcd                     a highly available key/value store and service-discovery system
flannel                  provides cross-host container networking
kube-apiserver           exposes the Kubernetes cluster API
kube-controller-manager  keeps cluster resources in their desired state
kube-scheduler           schedules containers and assigns them to nodes
kubelet                  starts containers on each node according to the pod specs it is given
kube-proxy               provides network proxying on each node

Advantages of Kubernetes:

1. Service discovery and load balancing
Kubernetes can expose containers using DNS names or their own IP addresses. If traffic to a container is high, Kubernetes can load-balance and distribute the network traffic so that the deployment stays stable.

2. Storage orchestration
Kubernetes lets you automatically mount a storage system of your choice, such as local storage or a public cloud provider.

3. Automated rollouts and rollbacks
You describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, it can automate creating new containers for a deployment, removing existing containers, and adopting all their resources into the new containers.

4. Automatic bin packing
You tell Kubernetes how much CPU and memory (RAM) each container needs. With these resource requests, Kubernetes can make better decisions about placing and managing containers.

5. Self-healing
Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to user-defined health checks, and does not advertise them to clients until they are ready to serve.

6. Secret and configuration management
Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding container images and without exposing secrets in your stack configuration.

Kubernetes official site: https://kubernetes.io
Kubernetes project on GitHub: https://github.com/kubernetes/kubernetes

2. Preparation

Set the hostnames

On the master: [root@centos7 ~] hostnamectl set-hostname master
On node 1:     [root@centos7 ~] hostnamectl set-hostname node1
On node 2:     [root@centos7 ~] hostnamectl set-hostname node2

Log out and SSH back in; the prompt will show the new hostnames master, node1 and node2.

Disable the firewall and SELinux

Run on all three servers.
Disabling the firewall is for test environments only; do not do this in production.
On cloud servers (e.g. Alibaba Cloud ECS), open the required ports in the server security group instead.
SELinux is disabled so that containers are allowed to access the host filesystem.

Stop the firewall and disable it at boot

[root@centos7 ~] systemctl stop firewalld && systemctl disable firewalld

Permanently disable SELinux

[root@centos7 ~] sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

Temporarily disable SELinux

[root@centos7 ~] setenforce 0

Make sure the yum repositories are usable (switch to a domestic mirror; run on all nodes)

[root@localhost ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@localhost ~]# yum clean all
[root@localhost ~]# yum makecache fast

Add local name resolution (all nodes), matching the host list above:

[root@master ~]# cat >> /etc/hosts <<EOF
192.168.214.128 master
192.168.214.129 node1
192.168.214.130 node2
EOF
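
A quick sanity check that the entries resolve:

[root@master ~] ping -c 1 node1
[root@master ~] ping -c 1 node2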

Disable swap (all nodes)

Run this on the master and on every node.
Swap can make Docker misbehave and degrades Kubernetes performance; kubelet also refuses to start by default while swap is enabled.
Developer discussion: https://github.com/kubernetes/kubernetes/issues/53533

Temporarily disable:

[root@master ~] swapoff -a

Permanently disable:

[root@master ~] sed -i.bak '/swap/s/^/#/' /etc/fstab

Then reboot the machine.
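
After the reboot, confirm swap is fully off (the Swap line should show 0 everywhere):

[root@master ~] free -h | grep -i swap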

Configure kernel parameters

[root@master ~] cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

Load the br_netfilter module first; without it the two bridge-nf-call settings above cannot be applied:

[root@master ~] modprobe br_netfilter
[root@master ~] sysctl --system
[root@master ~] sysctl -p /etc/sysctl.d/k8s.conf
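
Verify the settings took effect:

[root@master ~] sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables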

Load the ipvs-related kernel modules
They must be reloaded after every reboot; this can go in /etc/rc.local, or see the sketch after the check below.

[root@master ~] modprobe ip_vs
[root@master ~] modprobe ip_vs_rr
[root@master ~] modprobe ip_vs_wrr
[root@master ~] modprobe ip_vs_sh
[root@master ~] modprobe nf_conntrack_ipv4

Check that they loaded:

[root@master ~] lsmod | grep ip_vs
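
As an alternative to /etc/rc.local, a minimal sketch using systemd's modules-load mechanism (standard on CentOS 7; the file name ipvs.conf is arbitrary) makes the modules load automatically at boot:

[root@master ~] cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF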

Install Docker on all machines

Put the following in a script (vim docker.sh) and run it with sh docker.sh:

yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
yum -y install docker-ce docker-ce-cli
systemctl start docker
systemctl enable docker
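
Once the script has run, confirm the daemon is up before continuing:

[root@master ~] systemctl is-active docker
[root@master ~] docker version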

Install kubeadm and kubelet on all machines

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

  • The id in [ ] is the repository id; it must be unique and identifies the repo
  • name is a human-readable repository name
  • baseurl is the repository URL
  • enabled controls whether the repo is used; 1 (the default) means enabled
  • gpgcheck controls whether packages from this repo are signature-checked; 1 means check
  • repo_gpgcheck controls whether the repo metadata (the package lists) is signature-checked; 1 means check
  • gpgkey is the URL of the public key used for signature checks; it is required when gpgcheck is 1 and can be omitted when gpgcheck is 0

Refresh the cache

[root@master ~] yum clean all
[root@master ~] yum -y makecache
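
You can confirm the new repository is registered and enabled:

[root@master ~] yum repolist enabled | grep kubernetes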

Command completion: install bash-completion

# Install bash-completion
[root@master ~] yum -y install bash-completion
# Load bash-completion
[root@master ~] source /etc/profile.d/bash_completion.sh

List the available kubelet versions

[root@master ~] yum list kubelet --showduplicates | sort -r

Install kubelet, kubeadm and kubectl:

[root@master ~] yum install -y kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4 ipvsadm

  • kubelet runs on every node in the cluster and starts Pods, containers and other objects
  • kubeadm initializes and bootstraps the cluster
  • kubectl is the command-line client for talking to the cluster: deploying and managing applications, inspecting resources, and creating, deleting and updating components
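
To confirm all three components landed at the matching v1.19.4 version:

[root@master ~] kubeadm version -o short
[root@master ~] kubectl version --client
[root@master ~] kubelet --version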

Start kubelet

Start kubelet and enable it at boot:

[root@master ~] systemctl enable kubelet && systemctl start kubelet

kubectl command completion

[root@master ~] echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@master ~] source ~/.bash_profile

Download the images

Nearly all of the Kubernetes components and Docker images are hosted on Google's own registry, which may be unreachable directly. The workaround here is to pull the images from the Aliyun mirror registry and then re-tag them locally with the default k8s.gcr.io names. This article pulls the images by running the image.sh script.

[root@master ~] vim image.sh

url=registry.aliyuncs.com/google_containers
version=v1.19.4
# List the images this Kubernetes version needs, keeping only the image name
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
  docker pull $url/$imagename                       # pull from the Aliyun mirror
  docker tag $url/$imagename k8s.gcr.io/$imagename  # re-tag with the default name
  docker rmi -f $url/$imagename                     # drop the mirror tag
done

Make the script executable and run it to download the images for the specified version:

[root@master ~] chmod 775 image.sh
[root@master ~] ./image.sh

List the downloaded images:

[root@master ~] docker images

Note: everything up to this point is also executed on the worker nodes. kubeadm init in the next section runs only on the master; the workers join the cluster later with kubeadm join.

3. Initialize the Master

Run this part on the master node.

Set kubelet's default cgroup driver to systemd:

[root@master ~] mkdir -p /var/lib/kubelet/
[root@master ~] cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

Restart kubelet to pick up the change:

[root@master ~] systemctl daemon-reload
[root@master ~] systemctl enable kubelet && systemctl restart kubelet
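
Until kubeadm init has generated the rest of kubelet's bootstrap configuration, kubelet typically restarts in a loop; an activating/auto-restart state here is expected and not an error:

[root@master ~] systemctl status kubelet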

Check that the environment is ready:

[root@master ~] kubeadm init phase preflight

Initialize the master. 10.244.0.0/16 is the pod network CIDR that flannel uses by default; the value to pass depends on your network plugin's requirements.

[root@master ~] kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.19.4

If initialization fails, run kubeadm reset to clean up the environment and reinstall:

[root@master ~] kubeadm reset
[root@master ~] rm -rf $HOME/.kube/config

3.1 Configure master credentials

[root@master ~] echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
[root@master ~] source /etc/profile

Configure kubectl on the master node:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
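
kubectl should now reach the API server. Until the network plugin is installed in the next section, it is normal for the master to report NotReady:

[root@master ~] kubectl get nodes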

4. Configure the Network Plugin

Download the YAML manifest on the master node:

[root@master ~] cd ~ && mkdir flannel && cd flannel

Fetch the manifest from GitHub (search for "flannel" on https://github.com): https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml (open the page and copy its contents into the file):

[root@master ~] vi kube-flannel.yml

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Pre-pull the flannel image:
docker pull quay.io/coreos/flannel:v0.13.1-rc1

Modify the manifest

If quay.io is slow or unreachable, replace the image with a domestic mirror everywhere it appears in the manifest (both the initContainers and the containers sections), keeping the tag consistent with the version you pulled. Also pin flannel to the correct host network interface with --iface: use the interface name that actually exists on your hosts (e.g. ens33 on VMware, eth0 on many clouds); a single --iface line is enough.

containers:
  - name: kube-flannel
    image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64  # replace everywhere the image appears
    command:
    - /opt/bin/flanneld
    args:
    - --ip-masq
    - --kube-subnet-mgr
    - --iface=ens33   # add one line naming your NIC
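
If you are not sure which interface name a host uses, list its IPv4 interfaces:

[root@master ~] ip -o -4 addr show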

Apply the manifest

# kubectl apply -f ~/flannel/kube-flannel.yml

Check the results:
kubectl get pods --namespace kube-system
kubectl get service
kubectl get svc --namespace kube-system

5. Join the Nodes to the Cluster

5.1 View the cluster nodes

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   Ready      master   22h   v1.19.4
node1    NotReady   <none>   22h   v1.19.4
node2    NotReady   <none>   22h   v1.19.4

5.2 Join a node to the cluster

Root privileges are required (e.g. sudo su -).

First obtain the token and the discovery-token-ca-cert-hash value on the master node (see below), then run the join command on each node:

kubeadm join <control-plane-host>:<control-plane-port> --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Example
kubeadm join 192.168.214.128:6443 --token 4xpmwx.nw6psmvn9qi4d3cj \
    --discovery-token-ca-cert-hash sha256:c7cbe95a66092c58b4da3ad20874f0fe2b6d6842d28b2762ffc8d36227d7a0a7

To list the bootstrap tokens, run the following on the master:

[root@master ~] kubeadm token list
# Output:
TOKEN                     TTL  EXPIRES               USAGES                  DESCRIPTION                                                EXTRA GROUPS
8ewj1p.9r9hcjoqgajrj4gi   23h  2018-06-12T02:51:28Z  authentication,signing  The default bootstrap token generated by 'kubeadm init'.  system:bootstrappers:kubeadm:default-node-token

By default, tokens expire after 24 hours. To join a node after the current token has expired, create a new one on the control-plane node:

[root@master ~] kubeadm token create
# Output:
5didvk.d09sbcov8ph2amjw

Get the value for --discovery-token-ca-cert-hash:

[root@master ~] openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'
# Output:
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
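
Alternatively, kubeadm can create a fresh token and print the complete join command in one step:

[root@master ~] kubeadm token create --print-join-command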

If kubeadm join reports the warning: [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

Cause: Docker's cgroup driver conflicts with the systemd driver that kubelet expects.

[root@node1 ~]# docker info | grep Cgroup
 Cgroup Driver: cgroupfs

The command above shows the current cgroup driver is cgroupfs; change it to systemd:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
[root@node1 ~]# docker info | grep Cgroup    # the cgroup driver is now systemd
 Cgroup Driver: systemd

# Run the join command again
[root@node1 ~]# kubeadm join 192.168.191.133:6443 --token xvnp3x.pl6i8ikcdoixkaf0 \
    --discovery-token-ca-cert-hash sha256:9f90161043001c0c75fac7d61590734f844ee507526e948f3647d7b9cfc1362d

6. Join all remaining nodes to the cluster

Run on every node; use the exact command that kubeadm init printed when the master initialization succeeded:

# kubeadm join 192.168.1.200:6443 --token ccxrk8.myui0xu4syp99gxu --discovery-token-ca-cert-hash sha256:e3c90ace969aa4d62143e7da6202f548662866dfe33c140095b020031bff2986

7. Cluster Health Check

Check the pods

Note: after nodes join the cluster, wait a few minutes before checking.

# kubectl get pods -n kube-system
NAME                            READY   STATUS             RESTARTS   AGE
coredns-6c66ffc55b-l76bq        1/1     Running            0          16m
coredns-6c66ffc55b-zlsvh        1/1     Running            0          16m
etcd-node1                      1/1     Running            0          16m
kube-apiserver-node1            1/1     Running            0          16m
kube-controller-manager-node1   1/1     Running            0          15m
kube-flannel-ds-sr6tq           0/1     CrashLoopBackOff   6          7m12s
kube-flannel-ds-ttzhv           1/1     Running            0          9m24s
kube-proxy-nfbg2                1/1     Running            0          7m12s
kube-proxy-r4g7b                1/1     Running            0          16m
kube-scheduler-node1            1/1     Running            0          16m

If a pod is stuck in an abnormal 0/1 state for a long time, delete it and wait for the cluster to create a replacement:

# kubectl delete pod kube-flannel-ds-sr6tq -n kube-system
pod "kube-flannel-ds-sr6tq" deleted

Check again after deleting; the status is now normal:

[root@master flannel]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-6955765f44-g767b         1/1     Running   0          18m
coredns-6955765f44-l8zzs         1/1     Running   0          18m
etcd-master                      1/1     Running   0          18m
kube-apiserver-master            1/1     Running   0          18m
kube-controller-manager-master   1/1     Running   0          18m
kube-flannel-ds-amd64-bsdcr      1/1     Running   0          60s
kube-flannel-ds-amd64-g8d7x      1/1     Running   0          2m33s
kube-flannel-ds-amd64-qjpzg      1/1     Running   0          5m9s
kube-proxy-5pmgv                 1/1     Running   0          2m33s
kube-proxy-r962v                 1/1     Running   0          60s
kube-proxy-zklq2                 1/1     Running   0          18m
kube-scheduler-master            1/1     Running   0          18m

Check the node status again:

[root@master flannel]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   19m     v1.19.4
node1    Ready    <none>   3m16s   v1.19.4
node2    Ready    <none>   103s    v1.19.4

At this point the cluster configuration is complete.
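
As an optional smoke test (not part of the original steps; the nginx deployment name is arbitrary), you can run a throwaway nginx, expose it on a NodePort, confirm the pod gets scheduled onto a worker, then clean up:

# kubectl create deployment nginx --image=nginx
# kubectl expose deployment nginx --port=80 --type=NodePort
# kubectl get pods -o wide
# kubectl get svc nginx     # note the NodePort, then curl <any-node-ip>:<nodeport>
# kubectl delete svc,deployment nginx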

8. Install the Kubernetes Dashboard

The Dashboard is installed on the master only.

Kubernetes Dashboard is the official web UI for Kubernetes. It can be used to:

  • Deploy containerized applications to the cluster
  • Troubleshoot containerized applications
  • Manage cluster resources
  • View the applications running on the cluster
  • Create and modify Kubernetes resources (e.g. Deployments, Jobs, DaemonSets)
  • Surface errors occurring in the cluster

A web UI developed in China, Kuboard, is also worth a look: https://kuboard.cn/install/install-dashboard.html

Download the YAML

Fetch the manifest from https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml and save it locally:

vi recommended.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.4
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Modify the manifest to expose the Dashboard Service as a NodePort:

# vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # added line
  ports:
    - port: 443
      nodePort: 30001 # added line
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

Apply the manifest:
kubectl apply -f recommended.yaml

Check the pod and service:
kubectl get pod -o wide -n kubernetes-dashboard
kubectl get svc -o wide -n kubernetes-dashboard

Open the Dashboard in Firefox (which makes it easy to accept the self-signed certificate) at https://<any-node-ip>:30001, for example:
https://192.168.214.128:30001

Create a Dashboard admin user

# vim create-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
  
# kubectl apply -f create-admin.yaml

Get the token

[root@master dashboard1]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-z4jp6
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 349285ce-741d-4dc1-a600-1843a6ec9751

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InY5M1pSc3RpejBVZ0x6LTNSbWlCc2t5b01ualNZWnpYMVB5YzUwNmZ3ZmsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXo0anA2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzNDkyODVjZS03NDFkLTRkYzEtYTYwMC0xODQzYTZlYzk3NTEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.JtCa0VC7tYtIGLWlwSKUwqSL0T8eRvZ8jk_AUxB4Atmi5PjF9IjAHNNwGS3HaTL3Q86fCI8MvYGf3Eplk9X-n-g9WsrFIxXxa0wGJxZp0d8R78A6vuN7I7Zd5CeQm_O2ycTUuQhYnSZlNplF8X033QOfjOoFnKKevbn2094XXWWZuAsT9haGnZ8BX92DmYzsaMyLesfv7ZziJD80KgSQ8_jtb0n55zw5cedYTsRCZgofJ_o9U5SUW3I0AXG-vVhI28m0sMBjZkuMppfB4eMLnSDH-XAw3Gvwe_2NOLfS4hBTkYu7gJket-gif9Cs8Ybkzvf2qXdZW5fydZUuSylafg
ca.crt:     1025 bytes
namespace:  20 bytes

Paste the token into the Dashboard login page to sign in.
