1. Installing and Deploying Kubernetes

Environment: CentOS 7.9 + Kubernetes 1.23.8 + Docker 20.10.17 + VirtualBox 6.1
Written: 2022-06-30
Deployment method: kubeadm
Components: Calico (networking), Dashboard

I. Prerequisites and Notes

  • We use CentOS 7.9 here. The OS version has a big impact on Kubernetes, so adapt to your actual environment; some setups also need a kernel upgrade.

  • We prepared three VMs with networking configured (SSH ports forwarded). The VMs (named after their hostnames, see below) are:

    k8s-master01  192.168.56.105
    k8s-slave02   192.168.56.106
    k8s-slave03   192.168.56.107
    

    hosts entries (set on every machine)

    vim /etc/hosts
    
    192.168.56.105 k8s-master01
    192.168.56.106 k8s-slave02
    192.168.56.107 k8s-slave03
    

    hostname configuration (set on every machine; taking 192.168.56.105 as the example, set its hostname to "k8s-master01" so it matches the hosts entry)

    If you do not set a hostname, it defaults to localhost.localdomain, and Kubernetes fails at runtime with: Error getting node" err="node \"localhost.localdomain\" not found

    # set this machine's hostname
    hostnamectl set-hostname k8s-master01
    # check the current hostname
    hostname
    
  • System requirements: at least 2 CPUs, 2 GB RAM, and 20 GB disk. kubeadm init fails with fewer than 2 CPU cores; 4 GB RAM is recommended for the master node.

  • There are several ways to install Kubernetes:

    1. minikube: a single-node cluster, for testing
    2. kubeadm: the method we use here (dev environments with relatively few machines, up to a few dozen)
    3. kubespray: an Ansible-based tool maintained by the Kubernetes community (kubernetes-sigs)
    4. fully manual: binary installation (ops teams)
    5. fully automated: Rancher or KubeSphere (large production environments, hundreds to tens of thousands of machines)
  • Kubernetes health checks depend on a few ports. To avoid network problems, open the following ports on the master VM (if you keep firewalld enabled, see the sketch after this list):

    • 6443 (Kubernetes API server, the main one)

    • 2379 (etcd client)

    • 2380 (etcd peer)

  • The CA certificates kubeadm installs are valid for one year, so kubeadm as-is is not recommended for production; there you need to configure the CA certificates manually.
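
If you would rather keep firewalld running than disable it (in the steps below we simply disable it), a sketch of opening those ports with firewall-cmd; note that 10250, the kubelet port, is also commonly required, an addition beyond the list above:

# open the control-plane ports in firewalld (alternative to disabling firewalld)
firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client/peer
firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
firewall-cmd --reload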

II. Installation

1. Initial preparation

The actual installation steps and flow:

# install base dependencies
yum -y install wget vim net-tools ntpdate bash-completion
 
# set this machine's hostname
hostnamectl set-hostname k8s-master01
# check the current hostname
hostname

1. Sync the system clock

# sync time from an Alibaba Cloud NTP server
ntpdate time1.aliyun.com
# remove the local time file and set the timezone to Shanghai
rm -rf /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# check the time
date -R || date

2. Disable the firewall and SELinux

 systemctl stop firewalld
 systemctl disable firewalld
 sed -i 's/enforcing/disabled/' /etc/selinux/config
 setenforce 0

3. Disable swap

# turn swap off for the current boot
swapoff -a
# comment out (or delete) the swap mount in /etc/fstab to disable swap permanently
sed -i '/swap/s/^/#/' /etc/fstab
# verify: the Swap row should be all zeros
free -m
#----------------start----------------------
              total        used        free      shared  buff/cache   available
Mem:           1837         721          69          10        1046         944
Swap:             0           0           0
#-----------------end---------------------

4. Bridge filtering

# bridge filtering (the net.bridge.* keys below require the br_netfilter kernel module)
modprobe br_netfilter
vim /etc/sysctl.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward=1
net.ipv4.ip_forward_use_pmtu = 0

# apply the settings
sysctl --system
# verify
sysctl -a | grep "ip_forward"

5. Enable IPVS (since Kubernetes 1.8, kube-proxy supports IPVS, which performs better and is easier to troubleshoot than iptables). This step is optional; if you skip it, iptables is used by default.

# install IPVS tooling
yum -y install ipset ipvsadm
# create the ipvs.modules file
vi /etc/sysconfig/modules/ipvs.modules
# file contents:
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
# make it executable, run it, and check that the modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# reboot and confirm the modules are loaded on boot
reboot
lsmod | grep ip_vs_rr
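
Loading the kernel modules alone does not switch kube-proxy to IPVS; once the cluster is initialized with kubeadm init (step 3 below), the proxy mode also has to be set. A sketch against the default kubeadm-managed kube-proxy ConfigMap:

# after kubeadm init: switch kube-proxy to ipvs mode
kubectl -n kube-system edit configmap kube-proxy    # change mode: "" to mode: "ipvs"
# recreate the kube-proxy pods so they pick up the new mode
kubectl -n kube-system delete pod -l k8s-app=kube-proxy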

Edit the hosts file to add name resolution

vim /etc/hosts

192.168.56.105 k8s-master01
192.168.56.106 k8s-slave02
192.168.56.107 k8s-slave03

2. Installing Docker

Install Docker after switching the package source

# install yum-utils
yum install -y yum-utils
# point yum at the docker-ce repo
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# install docker
yum -y install docker-ce docker-ce-cli containerd.io
# start docker; enable is required because kubernetes checks docker.service
systemctl enable docker && systemctl start docker

Configure a Docker registry mirror

# create the docker config directory
mkdir -p /etc/docker
# set the registry mirror; exec-opts must be set, or kubernetes fails to start (cgroupfs vs systemd cgroup driver mismatch)
tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF
# restart docker to apply the mirror settings
systemctl daemon-reload && systemctl restart docker
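
To confirm the cgroup driver really switched to systemd (it must match what kubelet expects), a quick check:

# should print: systemd
docker info --format '{{.CgroupDriver}}'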

3. Installing Kubernetes

Configure the Kubernetes package repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Note: the Alibaba mirror does not sync GPG data with the official repo (upstream offers no way to sync it), so the repo GPG check may fail with an index error; in that case install with the command below.

Install Kubernetes, pinning the version; otherwise the latest is used (Kubernetes changes significantly between releases, and we use 1.23.8 here).

# list available versions (--nogpgcheck skips the GPG check)
yum list --nogpgcheck --showduplicates kubeadm --disableexcludes=kubernetes
# pick the version to install -- we use 1.23.8; the latest is currently 1.24.x, which removed dockershim and therefore has problems when used with Docker
# install the kubelet, kubeadm, and kubectl components -- note that the Docker and Kubernetes versions are related; stay inside the supported range
yum -y install --nogpgcheck kubelet-1.23.8 kubeadm-1.23.8 kubectl-1.23.8

After installation, verify the versions

# check the kubectl version
kubectl version
##########show start############
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.8", GitCommit:"5575935422cc1cf5169dfc8847cb587aa47bac5a", GitTreeState:"clean", BuildDate:"2021-06-16T13:00:45Z", GoVersion:"go1.15.13", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
###########show end###########
# (the "connection refused" message is expected at this point -- the cluster has not been initialized yet)
# check the kubeadm version
kubeadm version
##########show start############
kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.8", GitCommit:"5575935422cc1cf5169dfc8847cb587aa47bac5a", GitTreeState:"clean", BuildDate:"2021-06-16T12:58:46Z", GoVersion:"go1.15.13", Compiler:"gc", Platform:"linux/amd64"}
##########show end############

Enable the kubelet service

# enable and start the kubelet service
systemctl enable kubelet && systemctl start kubelet
# check the service status (kubelet restarts in a loop until kubeadm init runs; that is expected)
systemctl status kubelet

# If you initialize without pinning a version, the latest Kubernetes is used and you may hit the error below, which needs a manual fix first
# Possible error when running init:
#########error output start###########
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: E0627 16:44:11.772277   16359 remote_runtime.go:925] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2022-06-27T16:44:11+08:00" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
#########error output end###########
# Fix:
vim /etc/containerd/config.toml
# remove "cri" from the disabled_plugins array
# restart containerd
systemctl restart containerd
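
The same edit can be done non-interactively; a sketch, assuming the stock config.toml shipped by the containerd.io package, whose line reads disabled_plugins = ["cri"]:

# non-interactive version of the fix above (assumes the stock containerd.io config line)
sed -i 's/disabled_plugins = \["cri"\]/disabled_plugins = []/' /etc/containerd/config.toml
systemctl restart containerd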

Master node initialization (run this on the master node only; we use a single master with multiple slaves here)

# initialize
kubeadm init  \
--image-repository registry.aliyuncs.com/google_containers  \
--apiserver-advertise-address=192.168.56.105  \
--service-cidr=10.222.0.0/16 \
--pod-network-cidr=10.244.0.0/16

# initialization takes a while; it has to download several images

#---------------output start---------------------
I0628 15:14:20.293469    5655 version.go:255] remote version is much newer: v1.24.2; falling back to: stable-1.23
[init] Using Kubernetes version: v1.23.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost.localdomain] and IPs [10.222.0.1 192.168.56.105]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [192.168.56.105 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [192.168.56.105 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.502756 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 4yipfl.er9r8aqnq0hpd8a4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.105:6443 --token 4yipfl.er9r8aqnq0hpd8a4 \
        --discovery-token-ca-cert-hash sha256:afa76da3ced528e667374693fb4b0edd160530c251471ae11ece13c65d3d162a

#---------------output end---------------------

Init parameter breakdown

Parameter                       Example value                             Meaning
--kubernetes-version            v1.23.8                                   Kubernetes version
--apiserver-advertise-address   192.168.56.105                            IP of the current node
--image-repository              registry.aliyuncs.com/google_containers  image registry
--service-cidr                  10.222.0.0/16                             Service network CIDR
--pod-network-cidr              10.244.0.0/16                             internal Pod network CIDR; must not overlap with --service-cidr
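
For reference, the same init pinned to an explicit version, combining the table's --kubernetes-version with the command we ran above:

kubeadm init \
--kubernetes-version=v1.23.8 \
--image-repository registry.aliyuncs.com/google_containers \
--apiserver-advertise-address=192.168.56.105 \
--service-cidr=10.222.0.0/16 \
--pod-network-cidr=10.244.0.0/16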

With that, kubeadm init on the master node is complete.

Some follow-up work remains; per the kubeadm init log output, run the following:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Now we can query the cluster nodes

# list the nodes
kubectl get nodes

#------------output start--------------
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   6m21s   v1.23.8
#------------output end----------------

At this point STATUS is NotReady, because the network plugin is not installed yet and Pods cannot communicate with each other

Check the Pods across all namespaces

kubectl get pods --all-namespaces

#------------output start--------------

NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-5bjmr                0/1     Pending   0          12m
kube-system   coredns-6d8c4cb4d-7w72l                0/1     Pending   0          12m
kube-system   etcd-k8s-master01                      1/1     Running   0          12m
kube-system   kube-apiserver-k8s-master01            1/1     Running   0          12m
kube-system   kube-controller-manager-k8s-master01   1/1     Running   0          12m
kube-system   kube-proxy-rcsfg                       1/1     Running   0          12m
kube-system   kube-scheduler-k8s-master01            1/1     Running   0          12m
#------------output end----------------

You can see the DNS service (coredns) Pods are still Pending, again because the network plugin is not installed

III. Installing the Network Plugin

Next we install the network plugin.

1. Common network plugins

Only a brief comparison here; we recommend calico.

flannel and calico are the most commonly used network plugins.

calico performs better and covers a wider range of use cases.

flannel has no NetworkPolicy support, so it cannot restrict which Pods may talk to each other.

We use calico here.

2. Plugin installation

# install the calico plugin
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# the installation takes a while; be patient
# afterwards, watch the pods until every STATUS is Running
kubectl get pod --all-namespaces

#----------output start-------------
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7bc6547ffb-2bnbh   1/1     Running   0          5m57s
kube-system   calico-node-rnhcv                          1/1     Running   0          5m57s
kube-system   coredns-6d8c4cb4d-5bjmr                    1/1     Running   0          90m
kube-system   coredns-6d8c4cb4d-7w72l                    1/1     Running   0          90m
kube-system   etcd-k8s-master01                          1/1     Running   0          91m
kube-system   kube-apiserver-k8s-master01                1/1     Running   0          91m
kube-system   kube-controller-manager-k8s-master01       1/1     Running   0          91m
kube-system   kube-proxy-rcsfg                           1/1     Running   0          90m
kube-system   kube-scheduler-k8s-master01                1/1     Running   0          91m
#----------output end---------------

# check the node status
kubectl get nodes

#----------output start-------------
k8s-master01   Ready    control-plane,master   99m   v1.23.8
#----------output end---------------
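
One caveat: the stock calico.yaml ships with its IP pool commented out, defaulting to 192.168.0.0/16. Recent Calico releases usually auto-detect the cluster's pod CIDR, but if Pod networking misbehaves, pin the pool to the --pod-network-cidr we passed to kubeadm init (10.244.0.0/16). A sketch:

# download the manifest, pin the IP pool to our pod CIDR, then apply
curl -LO https://docs.projectcalico.org/manifests/calico.yaml
# in calico.yaml, un-comment CALICO_IPV4POOL_CIDR and set its value to "10.244.0.0/16"
vim calico.yaml
kubectl apply -f calico.yaml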

IV. Installing the Dashboard

1. Notes

The dashboard is open source on GitHub: https://github.com/kubernetes/dashboard

Dashboard versions are tied to Kubernetes versions; since each Kubernetes release changes a lot, pick a dashboard version that is compatible, or some features may not work correctly.

How to pick the version:

  1. On GitHub, open the releases page

  2. Check each release's Kubernetes compatibility notes and pick a matching version

  3. Copy the install command for that compatible version and run it

2. Installation

# install the dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml

#----------output start-------------
namespace/kubernetes-dashboard configured
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
#----------output end---------------

# configure the dashboard access port and related settings
vim k8s-dashboard.yaml

#----------file contents start-------------
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      # externally exposed port (NodePort range 30000~32767)
      nodePort: 30443
  selector:
    k8s-app: kubernetes-dashboard

#----------file contents end---------------

# apply the yaml
kubectl apply -f k8s-dashboard.yaml

#----------output start-------------
service/kubernetes-dashboard created
#----------output end---------------
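
Equivalently, instead of keeping a separate yaml, you could patch the Service that the dashboard manifest created in place; a sketch under the same NodePort-30443 assumption:

# switch the dashboard Service to NodePort 30443 in place
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30443}]}}'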

# the dashboard is now deployed; let's verify it
# list the pods -- both should show STATUS `Running`
kubectl get pods -n kubernetes-dashboard

#----------output start-------------
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-799d786dbf-j85zw   1/1     Running   0          18m
kubernetes-dashboard-fb8648fd9-qcrzk         1/1     Running   0          18m
#----------output end---------------

# check the services
kubectl get svc -n kubernetes-dashboard -o wide
#----------output start-------------
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE   SELECTOR
dashboard-metrics-scraper   ClusterIP   10.222.211.134   <none>        8000/TCP        19m   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard        NodePort    10.222.156.236   <none>        443:30443/TCP   19m   k8s-app=kubernetes-dashboard
#----------output end---------------

As noted above, the dashboard is exposed on port 30443, so the VM must also expose that port externally (in VirtualBox, add a port-forwarding rule, as we did for SSH).

Once that is set up, visit https://192.168.56.105:30443 (note: it must be https).

3. Permissions and login

The dashboard is installed, but logging in requires a Token; here is how to generate and configure it.

# keep the config files in a dedicated path
mkdir -p /opt/kube-dashboard/conf && cd /opt/kube-dashboard/conf
# create the rbac config file
vim admin-user-dashboard.yaml

#----------file contents start-------------
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-view
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - '*'
  verbs:
  - get
  - list
  - watch
 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: view-user
  namespace: kubernetes-dashboard
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-view
subjects:
- kind: ServiceAccount
  name: view-user
  namespace: kubernetes-dashboard
#----------file contents end---------------

# apply the rbac config
kubectl apply -f admin-user-dashboard.yaml

# generate the login token
# admin-user is the service account name
kubectl describe secret admin-user -n kubernetes-dashboard
# or (the command above is simpler and recommended)
# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

# the output looks like this; copy the token value

#----------------------------------------------
Name:         admin-user-token-2gt2w
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: ce38f197-f395-45ed-9385-104707df07c7

Type:  kubernetes.io/service-account-token

Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImgzM0taa283elhJbjFvc2NFNTJULXVDTGNURjZJaV9zSzV0X3U1VkgwUFEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTJndDJ3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjZTM4ZjE5Ny1mMzk1LTQ1ZWQtOTM4NS0xMDQ3MDdkZjA3YzciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.RoYrf2VCgGn8RiHVgBvMS7El4DWa6XmAT_Prrjs_Kk2nOFOjG2z3i4I_1Db9S6Jq0ZC-L2lCeGDQSdMmOBW1eYMNw6-vSwteKR_Un7GhshPLK4AJML3CK8uHsgYnhM64EinyTcdbBj9ade6OdJ3ypFi_Dw_oms4CUnuD57zLynZnh_JGMj-HJEMmtjBDV_FE-yJUn7_Y626e5Uw92p_xcW9up68TPEMuOSTedlxHJ61jpGf0H8ZGdinslvgpEbp7jUJeXoU_caLHhKGc28pQzJgjtHkatHJS7HmYdcPmSSON-2HZztmNlNfHI0luEfEg2KCAU3hxQeDKMw89jye1eg
ca.crt:     1099 bytes

#----------------------------------------------
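
Side note: from Kubernetes 1.24 on, a ServiceAccount no longer gets a long-lived token Secret created automatically, so the describe command above would come up empty; there you would mint a token instead:

# Kubernetes >= 1.24 only: request a token for the admin-user service account
kubectl -n kubernetes-dashboard create token admin-user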

Copy the login token and sign in to the dashboard; the overview page should appear.

V. Joining Worker Nodes

On each worker node, repeat all of the setup steps above except kubeadm init, then run kubeadm join.

The join command is printed in the kubeadm init output from section II, step 3 (Installing Kubernetes):

kubeadm join 192.168.56.105:6443 --token 4yipfl.er9r8aqnq0hpd8a4 \
        --discovery-token-ca-cert-hash sha256:afa76da3ced528e667374693fb4b0edd160530c251471ae11ece13c65d3d162a

The condensed sequence for a worker node:

# install base dependencies
yum -y install wget vim net-tools ntpdate bash-completion
# set this machine's hostname
hostnamectl set-hostname k8s-slave02
# or: hostnamectl set-hostname k8s-slave03

# edit the hosts file
vim /etc/hosts

192.168.56.105 k8s-master01
192.168.56.106 k8s-slave02
192.168.56.107 k8s-slave03

# sync the system clock and set the timezone
ntpdate time1.aliyun.com
rm -rf /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
date -R || date

# disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

# disable swap
swapoff -a
sed -i '/swap/s/^/#/' /etc/fstab
free -m

# bridge filtering (requires the br_netfilter module, as above)
modprobe br_netfilter
vim /etc/sysctl.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward=1
net.ipv4.ip_forward_use_pmtu = 0

# apply the settings and verify
sysctl --system
sysctl -a|grep "ip_forward"

# install docker
yum install -y yum-utils
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum -y install docker-ce docker-ce-cli containerd.io
systemctl enable docker && systemctl start docker

# docker registry mirror and cgroup driver config
mkdir -p /etc/docker

tee /etc/docker/daemon.json <<-'EOF'
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF

systemctl daemon-reload && systemctl restart docker

# install kubernetes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum -y install --nogpgcheck kubelet-1.23.8 kubeadm-1.23.8 kubectl-1.23.8

# enable the kubelet service
systemctl enable kubelet && systemctl start kubelet

# join the cluster
kubeadm join 192.168.56.105:6443 --token 4yipfl.er9r8aqnq0hpd8a4 \
        --discovery-token-ca-cert-hash sha256:afa76da3ced528e667374693fb4b0edd160530c251471ae11ece13c65d3d162a
# note: the token is valid for 24 hours; after that it must be regenerated
# on the master node, generate a fresh join command
kubeadm token create --print-join-command
# which prints something like:
kubeadm join 192.168.56.105:6443 --token o26r8i.zg2t9ade0tyuh4tp --discovery-token-ca-cert-hash sha256:de2d35d81f3740e93aca8a461713ca4ab1fcb9e7e881dc0f8836dd06d8a40229
# run the printed `kubeadm join` command on the slave node
# on success, the output looks like this:

#------------success output start-----------------
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
#------------success output end-------------------

# on the master node, list all nodes
kubectl get nodes


NAME           STATUS     ROLES                  AGE   VERSION
k8s-master01   Ready      control-plane,master   18h   v1.23.8
k8s-slave02    NotReady   <none>                 46s   v1.23.8

# `k8s-slave02` has joined the cluster, but its `STATUS` is still `NotReady`
# this step takes a while; wait until `NotReady` becomes `Ready`

NAME           STATUS   ROLES                  AGE     VERSION
k8s-master01   Ready    control-plane,master   18h     v1.23.8
k8s-slave02    Ready    <none>                 3m16s   v1.23.8

# the worker node has now joined successfully
# to be safe, once the master shows the node flipping from `NotReady` to `Ready`, restart kubelet on the worker
systemctl restart kubelet

Now try running kubectl on a worker node

# list the nodes
kubectl get nodes
# output:
The connection to the server localhost:8080 was refused - did you specify the right host or port?

We find that kubectl does not work on the worker nodes.

Reason: kubectl needs the kubernetes-admin credentials (a kubeconfig) to reach the API server.

Fix: copy admin.conf from the master node to the worker nodes

# on the master: copy admin.conf to the worker nodes
scp /etc/kubernetes/admin.conf root@192.168.56.106:/etc/kubernetes/
# scp /etc/kubernetes/admin.conf root@192.168.56.107:/etc/kubernetes/

# on the worker: point KUBECONFIG at it
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

# then run kubectl on the worker
kubectl get nodes

NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   19h   v1.23.8
k8s-slave02    Ready    <none>                 49m   v1.23.8
k8s-slave03    Ready    <none>                 14m   v1.23.8


VI. Useful Information

1. Common commands

# tail the kubelet logs -- very useful for debugging startup, kubeadm init, and kubeadm join problems
journalctl -xefu kubelet
# show the cgroup driver kubelet is using
systemctl show --property=Environment kubelet | cat
# restart kubelet
systemctl restart kubelet
# start kubelet
systemctl start kubelet
# stop kubelet
systemctl stop kubelet
# enable kubelet at boot
systemctl enable kubelet

# get the dashboard token
kubectl describe secret admin-user -n kubernetes-dashboard

# reset kubeadm -- sometimes kubeadm init fails; after fixing the reported problem you must init again, which requires a reset first
kubeadm reset
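
Note that kubeadm reset itself warns that it does not clean up CNI configuration, iptables rules, or kubeconfig files; a cleanup sketch to run afterwards if you want a truly blank slate (destructive, double-check before running):

# optional post-reset cleanup (destructive)
rm -rf /etc/cni/net.d $HOME/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X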

2. Environment information

# kubernetes install directory
/etc/kubernetes/

total 32
-rw-------. 1 root root 5642 Jun 28 15:19 admin.conf
-rw-------. 1 root root 5674 Jun 28 15:19 controller-manager.conf
-rw-------. 1 root root 1986 Jun 28 15:19 kubelet.conf
drwxr-xr-x. 2 root root  113 Jun 28 15:19 manifests
drwxr-xr-x. 3 root root 4096 Jun 28 15:19 pki
-rw-------. 1 root root 5618 Jun 28 15:19 scheduler.conf

# component (static pod) manifest directory
/etc/kubernetes/manifests/

total 16
-rw-------. 1 root root 2310 Jun 28 15:19 etcd.yaml
-rw-------. 1 root root 3378 Jun 28 15:19 kube-apiserver.yaml
-rw-------. 1 root root 2879 Jun 28 15:19 kube-controller-manager.yaml
-rw-------. 1 root root 1464 Jun 28 15:19 kube-scheduler.yaml

# custom dashboard yaml directory
/opt/kube-dashboard/conf/

total 8
-rw-r--r--. 1 root root 1124 Jun 29 08:41 admin-user-dashboard.yaml
-rw-r--r--. 1 root root  285 Jun 29 08:25 k8s-dashboard.yaml
