

Deploying a Highly Available Kubernetes 1.21 Cluster with kubeadm

1. Kubernetes 1.21 Release




1.1 Overview

Kubernetes 1.21 was released in April 2021, the first release of 2021. It contains 51 enhancements: 13 graduated to stable, 16 moved to beta, 20 entered alpha, and 2 features were deprecated.

1.2 Major Themes

CronJobs graduate to stable!

CronJobs have been a beta feature since Kubernetes 1.8. In 1.21, this widely used API finally graduates to stable.

CronJobs are used for performing regularly scheduled actions such as backups and report generation. Each task is configured to recur indefinitely (for example, once a day/week/month), and you define the point in time within that interval when the job should start.
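For illustration only, a minimal CronJob manifest might look like the sketch below (the name, schedule, and image are hypothetical examples and are not part of the deployment described later):

# cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"            # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: busybox:1.35
            command: ["sh", "-c", "date; echo generating report"]
          restartPolicy: OnFailure
EOF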

Immutable Secrets and ConfigMaps

Immutable Secrets and ConfigMaps add a new field to these resource types that, when set, rejects any changes to the objects. By default, Secrets and ConfigMaps are mutable, which is useful for pods that can consume updated values, but mutable Secrets and ConfigMaps can also cause problems if a bad configuration is pushed to the pods that use them.

By marking Secrets and ConfigMaps as immutable, you can be sure that your application configuration will not change. If you do want to make a change, you create a new, uniquely named Secret or ConfigMap and deploy new pods that consume that resource. Immutable resources also have a scaling benefit, because controllers no longer need to poll the API server to watch for changes.

This feature has graduated to stable in Kubernetes 1.21.
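A minimal sketch (the ConfigMap name and data are hypothetical): immutability is enabled with the top-level immutable field, and a later change then requires creating a new, uniquely named object.

# cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v1
data:
  log.level: "info"
immutable: true                    # any further update to this object will be rejected
EOF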

IPv4/IPv6 dual-stack support

IP addresses are a consumable resource that cluster operators and administrators need to make sure does not run out. In particular, public IPv4 addresses are now scarce. Dual-stack support enables native IPv6 routing to pods and services while still allowing the cluster to use IPv4 where needed. Dual-stack cluster networking also improves the possible scale limits for workloads.

Dual-stack support in Kubernetes means that pods, services, and nodes can get both an IPv4 address and an IPv6 address. In Kubernetes 1.21, dual-stack networking has graduated from alpha to beta and is enabled by default.
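For reference, a Service can request both address families through the ipFamilyPolicy and ipFamilies fields (a sketch only; it assumes the cluster was initialized with both IPv4 and IPv6 pod/service CIDRs, which the cluster built later in this guide is not):

# cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-dual-stack
spec:
  ipFamilyPolicy: PreferDualStack  # SingleStack and RequireDualStack are the other options
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: demo
  ports:
  - port: 80
EOF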

Graceful node shutdown

Graceful node shutdown also graduates to beta in this release (and is now available to a much larger group of users). This is a hugely beneficial feature that lets the kubelet become aware of an impending node shutdown and gracefully terminate the pods scheduled to that node.

Currently, when a node shuts down, pods do not follow the expected termination lifecycle and are not shut down gracefully, which can cause problems for many different workloads. Going forward, the kubelet can detect an imminent system shutdown through systemd and notify the running pods so that they can terminate as gracefully as possible.
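The behaviour is controlled by the kubelet. A sketch of the relevant KubeletConfiguration fields (the durations here are illustrative and this is not part of the deployment below; /var/lib/kubelet/config.yaml is the kubelet configuration file that kubeadm generates):

# vim /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GracefulNodeShutdown: true         # beta and enabled by default in 1.21
shutdownGracePeriod: 30s             # total time the node waits before shutting down
shutdownGracePeriodCriticalPods: 10s # part of that time reserved for critical pods

# systemctl restart kubelet           # reload the kubelet after editing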

PersistentVolume health monitor

Persistent Volumes (PVs) are commonly used in applications to obtain local, file-based storage. They can be used in many different ways and help users migrate applications without having to rewrite the storage backend.

Kubernetes 1.21 has a new alpha feature that allows PVs to be monitored for volume health and marked accordingly when a volume becomes unhealthy. Workloads will be able to react to the health state to protect data from being written to or read from an unhealthy volume.

Reduced Kubernetes build maintenance

Previously, Kubernetes maintained multiple build systems, which was often a source of friction and complexity for new and existing contributors.

Over the last release cycle, significant work went into simplifying the build process and standardizing on the native Golang build tools. This should empower broader community maintenance and lower the barrier to entry for new contributors.

1.3 Major Changes

PodSecurityPolicy deprecation

PodSecurityPolicy is deprecated in Kubernetes 1.21. As with all deprecated Kubernetes features, PodSecurityPolicy will remain available and fully functional for several more releases. PodSecurityPolicy, previously in beta, is planned for removal in Kubernetes 1.25.

What comes next? A new built-in mechanism to help limit Pod privileges is being developed, with the working title "PSP replacement policy". The plan is for this new mechanism to cover the key PodSecurityPolicy use cases while greatly improving usability and maintainability.

TopologyKeys deprecation

The Service field topologyKeys is now deprecated; all of the component features that used this field were previously alpha and are now also deprecated. topologyKeys has been replaced by an approach to topology-aware routing called topology-aware hints, which is an alpha feature in Kubernetes 1.21. You can read more details about the replacement feature in the topology-aware hints documentation; the associated KEP explains the background for the replacement.

2. Kubernetes 1.21.0 Deployment Tooling

What is kubeadm?

Kubeadm is a tool built to provide best-practice "fast paths" for creating Kubernetes clusters. It performs the actions necessary to get a minimum viable, secure cluster up and running in a user friendly way. Kubeadm's scope is limited to the local node filesystem and the Kubernetes API, and it is intended to be a composable building block of higher level tools.


Common kubeadm commands

kubeadm init: bootstraps the initial Kubernetes control-plane node.

kubeadm join: bootstraps a Kubernetes worker node or an additional control-plane node and joins it to the cluster.

kubeadm upgrade: upgrades a Kubernetes cluster to a newer version.

kubeadm reset: reverts any changes made to the host by kubeadm init or kubeadm join.

3. Kubernetes 1.21.0 Environment Preparation

Three master nodes and two worker nodes.

3.1 Host Operating System

No.  Operating system and version  Notes
1    CentOS 7u6                    -

3.2 Host Hardware and Network Plan

CPU  Memory  Disk    Role           Hostname
4C   8G      100GB   master         master01
4C   8G      100GB   master         master02
4C   8G      100GB   master         master03
4C   8G      100GB   worker (node)  worker01
4C   8G      100GB   worker (node)  worker02

No.  Hostname   IP address       Notes
1    master01   192.168.10.11    master
2    master02   192.168.10.12    master
3    master03   192.168.10.13    master
4    worker01   192.168.10.14    node
5    worker02   192.168.10.15    node
6    master01   192.168.10.100   VIP (managed by keepalived, initially held by master01)

No.  Hostname   Software              Notes
1    master01   haproxy, keepalived   keepalived MASTER node
2    master02   haproxy, keepalived   keepalived BACKUP node

3.3 Host Configuration

3.3.1 Hostname Configuration

Five hosts are used for this Kubernetes deployment: three master nodes named master01, master02, and master03, and two worker nodes named worker01 and worker02.

master node master01:

# hostnamectl set-hostname master01

master node master02:

# hostnamectl set-hostname master02

master node master03:

# hostnamectl set-hostname master03

worker node worker01:

# hostnamectl set-hostname worker01

worker node worker02:

# hostnamectl set-hostname worker02

3.3.2 Host IP Address Configuration

master01 node IP address: 192.168.10.11/24

# vim /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE="Ethernet"

PROXY_METHOD="none"

BROWSER_ONLY="no"

BOOTPROTO="none"

DEFROUTE="yes"

IPV4_FAILURE_FATAL="no"

IPV6INIT="yes"

IPV6_AUTOCONF="yes"

IPV6_DEFROUTE="yes"

IPV6_FAILURE_FATAL="no"

IPV6_ADDR_GEN_MODE="stable-privacy"

NAME="ens33"

DEVICE="ens33"

ONBOOT="yes"

IPADDR="192.168.10.11"

PREFIX="24"

GATEWAY="192.168.10.2"

DNS1="119.29.29.29"

master02 node IP address: 192.168.10.12/24

# vim /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE="Ethernet"

PROXY_METHOD="none"

BROWSER_ONLY="no"

BOOTPROTO="none"

DEFROUTE="yes"

IPV4_FAILURE_FATAL="no"

IPV6INIT="yes"

IPV6_AUTOCONF="yes"

IPV6_DEFROUTE="yes"

IPV6_FAILURE_FATAL="no"

IPV6_ADDR_GEN_MODE="stable-privacy"

NAME="ens33"

DEVICE="ens33"

ONBOOT="yes"

IPADDR="192.168.10.12"

PREFIX="24"

GATEWAY="192.168.10.2"

DNS1="119.29.29.29"

master03 node IP address: 192.168.10.13/24

# vim /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE="Ethernet"

PROXY_METHOD="none"

BROWSER_ONLY="no"

BOOTPROTO="static"

DEFROUTE="yes"

IPV4_FAILURE_FATAL="no"

IPV6INIT="yes"

IPV6_AUTOCONF="yes"

IPV6_DEFROUTE="yes"

IPV6_FAILURE_FATAL="no"

IPV6_ADDR_GEN_MODE="stable-privacy"

NAME="ens33"

DEVICE="ens33"

ONBOOT="yes"

IPADDR="192.168.10.13"

PREFIX="24"

GATEWAY="192.168.10.2"

DNS1="119.29.29.29"

worker01 node IP address: 192.168.10.14/24

# vim /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE="Ethernet"

PROXY_METHOD="none"

BROWSER_ONLY="no"

BOOTPROTO="none"

DEFROUTE="yes"

IPV4_FAILURE_FATAL="no"

IPV6INIT="yes"

IPV6_AUTOCONF="yes"

IPV6_DEFROUTE="yes"

IPV6_FAILURE_FATAL="no"

IPV6_ADDR_GEN_MODE="stable-privacy"

NAME="ens33"

DEVICE="ens33"

ONBOOT="yes"

IPADDR="192.168.10.14"

PREFIX="24"

GATEWAY="192.168.10.2"

DNS1="119.29.29.29"

worker02 node IP address: 192.168.10.15/24

# vim /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE="Ethernet"

PROXY_METHOD="none"

BROWSER_ONLY="no"

BOOTPROTO="none"

DEFROUTE="yes"

IPV4_FAILURE_FATAL="no"

IPV6INIT="yes"

IPV6_AUTOCONF="yes"

IPV6_DEFROUTE="yes"

IPV6_FAILURE_FATAL="no"

IPV6_ADDR_GEN_MODE="stable-privacy"

NAME="ens33"

DEVICE="ens33"

ONBOOT="yes"

IPADDR="192.168.10.15"

PREFIX="24"

GATEWAY="192.168.10.2"

DNS1="119.29.29.29"

# systemctl restart network

3.3.3 Hostname and IP Address Resolution

All cluster hosts must be configured.

# cat /etc/hosts

......

192.168.10.11 master01

192.168.10.12 master02

192.168.10.13 master03

192.168.10.14 worker01

192.168.10.15 worker02

If your environment uses a different subnet (the author's lab environment also appears as 192.168.161.0/24 in some of the captured output later in this document), adjust the entries accordingly:

# vim /etc/hosts

......

192.168.161.90 master01

192.168.161.91 master02

192.168.161.92 master03

192.168.161.93 worker01

192.168.161.94 worker02

3.3.4 Firewall Configuration

All hosts must be configured.

Disable the existing firewalld service:

# systemctl disable firewalld

# systemctl stop firewalld

# firewall-cmd --state

not running

3.3.5 SELinux Configuration

All hosts must be configured. The SELinux change takes effect only after the operating system is rebooted.

# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

3.3.6 Time Synchronization

All hosts must be configured. On a minimal installation, the ntpdate package has to be installed first.
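Install ntpdate first (assuming the default CentOS repositories are reachable):

# yum -y install ntpdate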

# crontab -l

# crontab -e

0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com

3.3.7 SSH Trust Between Hosts

Generate a key pair on the master node and copy it to the other nodes; once copied, the hosts can log in to each other for testing.

# ssh-keygen

Press Enter at every prompt to accept the defaults.

# cd /root/.ssh

[root@master01 .ssh]# ls

id_rsa  id_rsa.pub  known_hosts

[root@master01 .ssh]# cp id_rsa.pub authorized_keys

cd ~

# for i in 12 13 14 15; do scp -r /root/.ssh 192.168.10.$i:/root/; done

Answer "yes" to the host-key prompt and enter the root password when asked.

ssh master01

ssh master02

ssh master03
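Note that copying the whole /root/.ssh directory, as above, also distributes the private key to every node. If you prefer to distribute only the public key, a sketch of the alternative (using the hostnames configured in 3.3.3) is:

# for host in master02 master03 worker01 worker02; do ssh-copy-id root@$host; done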

3.3.8 Upgrade the Operating System Kernel

All hosts must be configured.

Import the elrepo GPG key:

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

Install the elrepo YUM repository:

# yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm

Install the kernel-ml package (kernel-ml is the latest mainline kernel; kernel-lt is the long-term maintenance kernel):

# yum --enablerepo="elrepo-kernel" -y install kernel-ml.x86_64

Set the default GRUB2 boot entry to 0:

# grub2-set-default 0

Regenerate the GRUB2 configuration:

# grub2-mkconfig -o /boot/grub2/grub.cfg

After the update a reboot is required for the upgraded kernel to take effect; this can also wait until the next reboot.

# reboot

After rebooting, verify that the running kernel matches the upgraded version:

# uname -r

3.3.9 Kernel Forwarding and Bridge Filtering

All hosts must be configured.

Create the bridge filtering and kernel forwarding configuration file:

# cat /etc/sysctl.d/k8s.conf

vim /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

vm.swappiness = 0

Load the br_netfilter module:

# modprobe br_netfilter

Check that it is loaded:

# lsmod | grep br_netfilter

br_netfilter          22256  0

bridge                151336  1 br_netfilter

Apply the bridge filtering and kernel forwarding settings:

# sysctl -p /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

vm.swappiness = 0
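The modprobe above does not persist across reboots. To have br_netfilter loaded automatically at boot, a configuration file for systemd-modules-load can be added (a sketch; the file name is arbitrary):

# cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF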

3.3.10 Install ipset and ipvsadm

All hosts must be configured. These packages are used for Service traffic forwarding (IPVS).

Install ipset and ipvsadm:

# yum -y install ipset ipvsadm

Configure how the IPVS kernel modules are loaded.

Add the modules that need to be loaded:

# cat > /etc/sysconfig/modules/ipvs.modules <<EOF

#!/bin/bash

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

modprobe -- nf_conntrack

EOF

Make the script executable, run it, and check that the modules are loaded (all in one command):

# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

3.3.11 Disable the Swap Partition

The change takes effect after a reboot; if you do not want to reboot immediately, swap can be disabled temporarily with: swapoff -a

To disable the swap partition permanently, edit /etc/fstab (a reboot is required):

# cat /etc/fstab

vim /etc/fstab

......

# /dev/mapper/centos-swap swap                    swap    defaults        0 0

Comment out the swap line above by adding a # at the beginning of the line.
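Equivalently, the swap entry can be commented out non-interactively and swap turned off immediately (a sketch; check /etc/fstab afterwards):

# sed -ri 's/.*swap.*/#&/' /etc/fstab

# swapoff -a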

Perform one final reboot of the operating system:

# reboot

3.4 Docker Preparation

All cluster hosts must be configured.

3.4.1 Add the YUM Repository

Use the Alibaba Cloud open source mirror site.

# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

3.4.2 List the Installable Versions

# yum list docker-ce.x86_64 --showduplicates | sort -r

3.4.3 Install a Specific Version and Enable It at Boot

# yum -y install --setopt=obsoletes=0 docker-ce-20.10.9-3.el7

# systemctl enable docker ; systemctl start docker

3.4.4 Set the cgroup Driver

Add the following to /etc/docker/daemon.json:

# cat /etc/docker/daemon.json

vim /etc/docker/daemon.json

{

        "exec-opts": ["native.cgroupdriver=systemd"]

}

3.4.5 Restart Docker

# systemctl restart docker
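After the restart, the cgroup driver can be verified; the expected output is "Cgroup Driver: systemd":

# docker info | grep -i "cgroup driver"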

4. HAProxy and Keepalived Deployment

4.1 Install HAProxy and Keepalived  # two nodes in total (master01 and master02)

[root@master01 ~]# yum -y install haproxy keepalived

[root@master02 ~]# yum -y install haproxy keepalived

4.2 HAProxy Configuration and Startup

How to work with multiple lines in vim

1. Commenting multiple lines:

1) Press Esc to enter command mode, then press Ctrl+v to enter visual block (column) mode.

2) Use the up/down keys, starting at the beginning of a line, to select the lines to comment.

3) Press Shift+I (capital I) to enter insert mode.

4) Type the comment character ("//", "#", etc.).

5) Press Esc.

Note: after pressing Esc it takes a moment before the comment characters appear on every line; just wait briefly.

2. Removing comments from multiple lines:

1) Press Esc to enter command mode, then press Ctrl+v to enter visual block mode.

2) Select the commented lines.

3) Press "x" or "d".

Note: for "//" comments the operation has to be performed twice; for "#" comments once is enough.

3. Deleting multiple lines:

1) In command mode, type ":set nu" to display line numbers.

2) Use the line numbers to find the range you want to delete.

3) Type ":32,65d" and press Enter; lines 32 through 65 are deleted.

If you delete the wrong lines, press "u" in command mode to undo.

[root@master01 ~]# vim /etc/haproxy/haproxy.cfg

[root@master01 ~]# cat /etc/haproxy/haproxy.cfg  # the existing file contents can be deleted first with the shortcut dG

#---------------------------------------------------------------------

# Example configuration for a possible web application.  See the

# full configuration options online.

#

#

#---------------------------------------------------------------------

#---------------------------------------------------------------------

# Global settings  # the part above stays the same; replace everything from "global" downward with the following content

#---------------------------------------------------------------------

global

  maxconn  2000

  ulimit-n  16384

  log  127.0.0.1 local0 err

  stats timeout 30s

defaults

  log global

  mode  http

  option  httplog

  timeout connect 5000

  timeout client  50000

  timeout server  50000

  timeout http-request 15s

  timeout http-keep-alive 15s

frontend monitor-in

  bind *:33305

  mode http

  option httplog

  monitor-uri /monitor

frontend k8s-master

  bind 0.0.0.0:16443

  bind 127.0.0.1:16443

  mode tcp

  option tcplog

  tcp-request inspect-delay 5s

  default_backend k8s-master

backend k8s-master

  mode tcp

  option tcplog

  option tcp-check

  balance roundrobin

  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100

  # use the actual IP addresses of your master nodes (the author's lab environment used 192.168.161.90-92)

  server master01  192.168.10.11:6443  check

  server master02  192.168.10.12:6443  check

  server master03  192.168.10.13:6443  check

[root@master01 ~]# systemctl enable haproxy;systemctl start haproxy

[root@master01 ~]# systemctl status haproxy



[root@master01 ~]# scp /etc/haproxy/haproxy.cfg master02:/etc/haproxy/haproxy.cfg

[root@master02 ~]# systemctl enable haproxy;systemctl start haproxy

[root@master02 ~]# systemctl status haproxy



4.3 Keepalived Configuration and Startup

dG  # delete all of the original contents of the file before editing

[root@master01 ~]# vim /etc/keepalived/keepalived.conf

[root@master01 ~]# cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {

    router_id LVS_DEVEL

script_user root

    enable_script_security

}

vrrp_script chk_apiserver {

    script "/etc/keepalived/check_apiserver.sh" #此脚本需要单独定义,并要调用。

    interval 5

    weight -5

    fall 2

rise 1

}

vrrp_instance VI_1 {

    state MASTER # MASTER on this node; set to BACKUP on the standby node

    interface ens33 # change to the network interface actually in use

    mcast_src_ip 192.168.10.11 # the IP address of this master host

    virtual_router_id 51

    priority 101 # a higher number means a higher priority

    advert_int 2

    authentication {

        auth_type PASS

        auth_pass abc123

    }

    virtual_ipaddress {

        192.168.10.100 # the VIP address

    }

    track_script {

      chk_apiserver # run the apiserver check script defined above

    }

}

(The same configuration for the author's 192.168.161.0/24 lab environment:)

! Configuration File for keepalived

global_defs {

    router_id LVS_DEVEL

script_user root

    enable_script_security

}

vrrp_script chk_apiserver {

    script "/etc/keepalived/check_apiserver.sh"

    interval 5

    weight -5

    fall 2

rise 1

}

vrrp_instance VI_1 {

    state MASTER

    interface ens33

    mcast_src_ip 192.168.161.90

    virtual_router_id 51

    priority 101

    advert_int 2

    authentication {

        auth_type PASS

        auth_pass abc123

    }

    virtual_ipaddress {

        192.168.161.100

    }

    track_script {

      chk_apiserver

    }

}

[root@master01 ~]# vim /etc/keepalived/check_apiserver.sh

[root@master01 ~]# cat /etc/keepalived/check_apiserver.sh

#!/bin/bash

err=0

for k in $(seq 1 3)

do

    check_code=$(pgrep haproxy)

    if [[ $check_code == "" ]]; then

        err=$(expr $err + 1)

        sleep 1

        continue

    else

        err=0

        break

    fi

done

if [[ $err != "0" ]]; then

    echo "systemctl stop keepalived"

    /usr/bin/systemctl stop keepalived

    exit 1

else

    exit 0

fi

[root@master01 ~]# chmod +x /etc/keepalived/check_apiserver.sh

[root@master01 ~]# scp /etc/keepalived/keepalived.conf master02:/etc/keepalived/

[root@master01 ~]# scp /etc/keepalived/check_apiserver.sh master02:/etc/keepalived/

[root@master02 ~]# vim /etc/keepalived/keepalived.conf

[root@master02 ~]# cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {

    router_id LVS_DEVEL

script_user root

    enable_script_security

}

vrrp_script chk_apiserver {

    script "/etc/keepalived/check_apiserver.sh" #此脚本需要单独定义,并要调用。

    interval 5

    weight -5

    fall 2

rise 1

}

vrrp_instance VI_1 {

    state BACKUP  # BACKUP

    interface ens33 # change to the network interface actually in use

    mcast_src_ip 192.168.10.12 # the IP address of this master host

    virtual_router_id 51

    priority 99 # set to 99, lower than the MASTER node

    advert_int 2

    authentication {

        auth_type PASS

        auth_pass abc123

    }

    virtual_ipaddress {

        192.168.10.100 # the VIP address

    }

    track_script {

      chk_apiserver # run the apiserver check script defined above

    }

}

(The same configuration for the author's 192.168.161.0/24 lab environment:)

! Configuration File for keepalived

global_defs {

    router_id LVS_DEVEL

script_user root

    enable_script_security

}

vrrp_script chk_apiserver {

    script "/etc/keepalived/check_apiserver.sh"

    interval 5

    weight -5

    fall 2

rise 1

}

vrrp_instance VI_1 {

    state BACKUP

    interface ens33

    mcast_src_ip 192.168.161.91

    virtual_router_id 51

    priority 99

    advert_int 2

    authentication {

        auth_type PASS

        auth_pass abc123

    }

    virtual_ipaddress {

        192.168.161.100

    }

    track_script {

      chk_apiserver

    }

}

[root@master01 ~]# systemctl enable keepalived;systemctl start keepalived

[root@master02 ~]# systemctl enable keepalived;systemctl start keepalived

4.4 Verify the High-Availability Setup

[root@master01 ~]# ip a s ens33

2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

    link/ether 00:0c:29:50:f9:5f brd ff:ff:ff:ff:ff:ff

    inet 192.168.10.11/24 brd 192.168.10.255 scope global noprefixroute ens33

      valid_lft forever preferred_lft forever

    inet 192.168.10.100/32 scope global ens33

      valid_lft forever preferred_lft forever

    inet6 fe80::adf4:a8bc:a1c:a9f7/64 scope link tentative noprefixroute dadfailed

      valid_lft forever preferred_lft forever

    inet6 fe80::2b33:40ed:9311:8812/64 scope link tentative noprefixroute dadfailed

      valid_lft forever preferred_lft forever

    inet6 fe80::8508:20d8:7240:32b2/64 scope link tentative noprefixroute dadfailed

      valid_lft forever preferred_lft forever

[root@master01 ~]# ss -anput | grep ":16443"

tcp    LISTEN    0      2000  127.0.0.1:16443                *:*                  users:(("haproxy",pid=2983,fd=6))

tcp    LISTEN    0      2000      *:16443                *:*                  users:(("haproxy",pid=2983,fd=5))

[root@master02 ~]# ss -anput | grep ":16443"

tcp    LISTEN    0      2000  127.0.0.1:16443                *:*                  users:(("haproxy",pid=2974,fd=6))

tcp    LISTEN    0      2000      *:16443                *:*                  users:(("haproxy",pid=2974,fd=5))
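The haproxy.cfg above also defines a monitor frontend on port 33305, so a quick health check is possible against the VIP (a sketch using the address plan from section 3.2); it should return an HTTP 200 response while haproxy is up:

# curl http://192.168.10.100:33305/monitor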

5. Kubernetes 1.21.0 Cluster Deployment

5.1 Cluster Software Versions

Component  Version  Installed on        Purpose
kubeadm    1.21.0   all cluster hosts   initializes and manages the cluster
kubelet    1.21.0   all cluster hosts   receives instructions from the api-server and manages the pod lifecycle
kubectl    1.21.0   all cluster hosts   command-line tool for managing cluster applications

5.2 Kubernetes YUM Repository

Create a k8s.repo file in the /etc/yum.repos.d/ directory and copy one of the following blocks into it.

5.2.1 Google YUM Repository

[kubernetes]

name=Kubernetes

baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg

        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

5.2.2 Alibaba Cloud YUM Repository

vim /etc/yum.repos.d/k8s.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=0

repo_gpgcheck=0

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

# gpgcheck and repo_gpgcheck are set from 1 to 0 to work around "yum repolist" showing the Kubernetes repo with status 0.

# Verify that the repository is usable:

# yum repolist

5.3 Install the Cluster Software

List the available versions:

# yum list kubeadm.x86_64 --showduplicates | sort -r

# yum list kubelet.x86_64 --showduplicates | sort -r

# yum list kubectl.x86_64 --showduplicates | sort -r

Install the specified version:

# yum -y install --setopt=obsoletes=0 kubeadm-1.21.0-0  kubelet-1.21.0-0 kubectl-1.21.0-0

5.4 Configure kubelet

To keep the cgroup driver used by kubelet consistent with the one used by Docker, change the following file:

# vim /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

Only enable kubelet at boot; do not start it yet. No configuration file has been generated at this point, and it will start automatically after the cluster is initialized.

# systemctl enable kubelet

5.5 Prepare the Cluster Images

A VPN (or other direct access to k8s.gcr.io) can be used to download the images; the Alibaba Cloud mirror script further below is an alternative.

# kubeadm config images list --kubernetes-version=v1.21.0

k8s.gcr.io/kube-apiserver:v1.21.0

k8s.gcr.io/kube-controller-manager:v1.21.0

k8s.gcr.io/kube-scheduler:v1.21.0

k8s.gcr.io/kube-proxy:v1.21.0

k8s.gcr.io/pause:3.4.1

k8s.gcr.io/etcd:3.4.13-0

k8s.gcr.io/coredns/coredns:v1.8.0

# cat image_download.sh

vim image_download.sh

#!/bin/bash

images_list='

k8s.gcr.io/kube-apiserver:v1.21.0

k8s.gcr.io/kube-controller-manager:v1.21.0

k8s.gcr.io/kube-scheduler:v1.21.0

k8s.gcr.io/kube-proxy:v1.21.0

k8s.gcr.io/pause:3.4.1

k8s.gcr.io/etcd:3.4.13-0

k8s.gcr.io/coredns/coredns:v1.8.0'

for i in $images_list

do

        docker pull $i

done

docker save -o k8s-1-21-0.tar $images_list

# vim aLiYun_k8s_images_download.sh

#!/bin/bash

images_list='

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.0

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.0

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.0

registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1

registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0

registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0'

for i in $images_list

do

        docker pull $i

done

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.0 k8s.gcr.io/kube-apiserver:v1.21.0

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0 k8s.gcr.io/kube-controller-manager:v1.21.0

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.0 k8s.gcr.io/kube-scheduler:v1.21.0

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.0 k8s.gcr.io/kube-proxy:v1.21.0

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1 k8s.gcr.io/pause:3.4.1

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0

sh aLiYun_k8s_images_download.sh

# Debugging note: the author's own script, not required to run (coredns:v1.8.0 -> coredns:v1.8.6)

# vim aLiYun_k8s_images_download_tar.sh

#!/bin/bash

images_list_gcr='

k8s.gcr.io/kube-apiserver:v1.21.0

k8s.gcr.io/kube-controller-manager:v1.21.0

k8s.gcr.io/kube-scheduler:v1.21.0

k8s.gcr.io/kube-proxy:v1.21.0

k8s.gcr.io/pause:3.4.1

k8s.gcr.io/etcd:3.4.13-0

k8s.gcr.io/coredns/coredns:v1.8.0'

docker save -o k8s-1-21-0.tar $images_list_gcr

5.6 Initialize the Cluster

Note: downloading the CoreDNS image from the Alibaba Cloud mirror repository fails with an error (see the note in 5.5).

[root@master01 ~]# vim kubeadm-config.yaml

[root@master01 ~]# cat kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2

bootstrapTokens:

- groups:

  - system:bootstrappers:kubeadm:default-node-token

  token: 7t2weq.bjbawausm0jaxury

  ttl: 24h0m0s

  usages:

  - signing

  - authentication

kind: InitConfiguration

localAPIEndpoint:

  advertiseAddress: 192.168.10.11

  bindPort: 6443

nodeRegistration:

  criSocket: /var/run/dockershim.sock

  name: master01

  taints:

  - effect: NoSchedule

    key: node-role.kubernetes.io/master

---

apiServer:

  certSANs:

  - 192.168.10.100 # the virtual IP that fronts the three master nodes; listing it here lets all three masters serve the API on it

  timeoutForControlPlane: 4m0s

apiVersion: kubeadm.k8s.io/v1beta2

certificatesDir: /etc/kubernetes/pki

clusterName: kubernetes

controlPlaneEndpoint: 192.168.10.100:16443

controllerManager: {}

dns:

  type: CoreDNS

etcd:

  local:

    dataDir: /var/lib/etcd

imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # use the mirror; when pulling via VPN directly from k8s.gcr.io, leave this at its default (empty)

kind: ClusterConfiguration

kubernetesVersion: v1.21.0

networking:

  dnsDomain: cluster.local

  podSubnet: 10.244.0.0/16

  serviceSubnet: 10.96.0.0/12

scheduler: {}

(The same configuration for the author's 192.168.161.0/24 lab environment:)

[root@master01 ~]# vim kubeadm-config.yaml

[root@master01 ~]# cat kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2

bootstrapTokens:

- groups:

  - system:bootstrappers:kubeadm:default-node-token

  token: 7t2weq.bjbawausm0jaxury

  ttl: 24h0m0s

  usages:

  - signing

  - authentication

kind: InitConfiguration

localAPIEndpoint:

  advertiseAddress: 192.168.161.90

  bindPort: 6443

nodeRegistration:

  criSocket: /var/run/dockershim.sock

  name: master01

  taints:

  - effect: NoSchedule

    key: node-role.kubernetes.io/master

---

apiServer:

  certSANs:

  - 192.168.161.100

  timeoutForControlPlane: 4m0s

apiVersion: kubeadm.k8s.io/v1beta2

certificatesDir: /etc/kubernetes/pki

clusterName: kubernetes

controlPlaneEndpoint: 192.168.161.100:16443

controllerManager: {}

dns:

  type: CoreDNS

etcd:

  local:

    dataDir: /var/lib/etcd

imageRepository: k8s.gcr.io  # default registry; replace with a mirror such as registry.cn-hangzhou.aliyuncs.com/google_containers if needed

kind: ClusterConfiguration

kubernetesVersion: v1.21.0

networking:

  dnsDomain: cluster.local

  podSubnet: 10.244.0.0/16

  serviceSubnet: 10.96.0.0/12

scheduler: {}

# kubeadm config images list --config kubeadm-config.yaml

[root@master01 ~]# kubeadm init --config /root/kubeadm-config.yaml --upload-certs

Keep the full output below; it is needed for the subsequent steps.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.10.100:16443 --token 7t2weq.bjbawausm0jaxury \

        --discovery-token-ca-cert-hash sha256:085fc221ad8b5baffdaa567768a10d21eca2fc1f939fe73578ff725feea70ba4 \

        --control-plane --certificate-key 9f74fd2c73a16a79fb9f458cd5874a860564070fd93c3912d910ba2b9c11a2b1

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!

As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use

"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.100:16443 --token 7t2weq.bjbawausm0jaxury \

        --discovery-token-ca-cert-hash sha256:085fc221ad8b5baffdaa567768a10d21eca2fc1f939fe73578ff725feea70ba4

5.7 Prepare the kubectl Client Configuration for Managing the Cluster

[root@master01 ~]# mkdir -p $HOME/.kube

[root@master01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@master01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

[root@master01 ~]# ls /root/.kube/

config

[root@master01 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf

5.8 Cluster Network

Deploy the cluster network with Calico.

Installation reference: https://projectcalico.docs.tigera.io/about/about-calico

5.8.1 Install Calico









Option 1: create the operator directly from the online manifest:

kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml

Or:

Option 2: download the operator manifest first, then create the operator from it.

If wget fails certificate validation (for example in a VMware lab), install ca-certificates and/or append --no-check-certificate:

# yum install -y ca-certificates

# wget https://docs.projectcalico.org/manifests/tigera-operator.yaml --no-check-certificate

Apply the manifest to create the operator:

# kubectl apply -f tigera-operator.yaml

If kubectl apply reports an error (the manifest is very large), delete the resources and re-create them instead:

# kubectl delete -f tigera-operator.yaml

# kubectl create -f tigera-operator.yaml

mkdir calicodir

cd calicodir

Install using the custom resources manifest:

# wget https://docs.projectcalico.org/manifests/custom-resources.yaml

Edit line 13 of the file and set cidr to the pod network range used when initializing the cluster (the podSubnet / --pod-network-cidr value, 10.244.0.0/16 here):

# vim custom-resources.yaml

......

11    ipPools:

12    - blockSize: 26

13      cidr: 10.244.0.0/16

14      encapsulation: VXLANCrossSubnet

......

Apply the custom resources manifest:

# kubectl apply -f custom-resources.yaml

Watch the pods in the calico-system namespace:

# watch kubectl get pods -n calico-system

Wait until each pod has the STATUS of Running.

Remove the taint on the master nodes so that pods can also be scheduled there:

# kubectl taint nodes --all node-role.kubernetes.io/master-

All pods are now running:

# kubectl get pods -n calico-system

NAME                                      READY  STATUS    RESTARTS  AGE

calico-kube-controllers-666bb9949-dzp68  1/1    Running  0          11m

calico-node-jhcf4                        1/1    Running  4          11m

calico-typha-68b96d8d9c-7qfq7            1/1    Running  2          11m

Check the coredns pods in the kube-system namespace; a Running status indicates the pod network is working.

# kubectl get pods -n kube-system

NAME                              READY  STATUS    RESTARTS  AGE

coredns-558bd4d5db-4jbdv          1/1    Running  0          113m

coredns-558bd4d5db-pw5x5          1/1    Running  0          113m

etcd-master01                      1/1    Running  0          113m

kube-apiserver-master01            1/1    Running  0          113m

kube-controller-manager-master01  1/1    Running  4          113m

kube-proxy-kbx4z                  1/1    Running  0          113m

kube-scheduler-master01            1/1    Running  3          113m

5.8.2 Install the Calico Client (calicoctl)





Download the binary:

# curl -L https://github.com/projectcalico/calico/releases/download/v3.24.1/calicoctl-linux-amd64 -o calicoctl

Install calicoctl:

# mv calicoctl /usr/bin/

Make calicoctl executable:

# chmod +x /usr/bin/calicoctl

Check the file after adding the permission:

# ls /usr/bin/calicoctl

/usr/bin/calicoctl

Check the calicoctl version (the client version should match the Calico version running in the cluster; the output below is from the author's environment):

# calicoctl  version

Client Version:    v3.21.4

Git commit:        220d04c94

Cluster Version:  v3.21.4

Cluster Type:      typha,kdd,k8s,operator,bgp,kubeadm

Connect to the Kubernetes cluster via ~/.kube/config and list the nodes that are already running:

# DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl get nodes

NAME

master01

5.9 Join the Other Master Nodes to the Cluster

[root@master02 ~]# kubeadm join 192.168.10.100:16443 --token 7t2weq.bjbawausm0jaxury \

>        --discovery-token-ca-cert-hash sha256:085fc221ad8b5baffdaa567768a10d21eca2fc1f939fe73578ff725feea70ba4 \

>        --control-plane --certificate-key 9f74fd2c73a16a79fb9f458cd5874a860564070fd93c3912d910ba2b9c11a2b1

[root@master03 ~]# kubeadm join 192.168.10.100:16443 --token 7t2weq.bjbawausm0jaxury \

>        --discovery-token-ca-cert-hash sha256:085fc221ad8b5baffdaa567768a10d21eca2fc1f939fe73578ff725feea70ba4 \

>        --control-plane --certificate-key 9f74fd2c73a16a79fb9f458cd5874a860564070fd93c3912d910ba2b9c11a2b1

5.10 Join the Worker Nodes to the Cluster

Because the container images can be slow to download, you may see errors reporting that the CNI (cluster network plugin) is not ready; as long as the network works, simply be patient and wait.

[root@worker01 ~]# kubeadm join 192.168.10.100:16443 --token 7t2weq.bjbawausm0jaxury \

>        --discovery-token-ca-cert-hash sha256:085fc221ad8b5baffdaa567768a10d21eca2fc1f939fe73578ff725feea70ba4

[root@worker02 ~]# kubeadm join 192.168.10.100:16443 --token 7t2weq.bjbawausm0jaxury \

>        --discovery-token-ca-cert-hash sha256:085fc221ad8b5baffdaa567768a10d21eca2fc1f939fe73578ff725feea70ba4

5.11 Verify the Cluster

List all the nodes:

[root@master01 ~]# kubectl get nodes

NAME      STATUS  ROLES                  AGE    VERSION

master01  Ready    control-plane,master  13m    v1.21.0

master02  Ready    control-plane,master  2m25s  v1.21.0

master03  Ready    control-plane,master  87s    v1.21.0

worker01  Ready    <none>                3m13s  v1.21.0

worker02  Ready    <none>                2m50s  v1.21.0

Check the cluster component health. The ideal state:

[root@master01 ~]# kubectl get cs

NAME                STATUS    MESSAGE            ERROR

controller-manager  Healthy  ok

scheduler            Healthy  ok

etcd-0              Healthy  {"health":"true"}

What you will actually see:

# kubectl get cs

Warning: v1 ComponentStatus is deprecated in v1.19+

NAME                STATUS      MESSAGE                                                                                      ERROR

scheduler            Unhealthy  Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused

controller-manager  Unhealthy  Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused

etcd-0              Healthy    {"health":"true"}
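The Unhealthy status is usually cosmetic: the default kubeadm manifests start kube-scheduler and kube-controller-manager with --port=0, which disables the insecure health ports 10251/10252 that kubectl get cs probes. If you want the check to pass, a common workaround (a sketch; back up the manifests first and repeat on every master node) is to comment that flag out; the kubelet then recreates the static pods automatically:

# sed -i 's/^\(\s*\)- --port=0/\1# - --port=0/' /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/manifests/kube-controller-manager.yaml

# systemctl restart kubelet    # optional; the kubelet also picks up manifest changes on its own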

Check the pods running in the Kubernetes cluster:

[root@master01 ~]# kubectl get pods -n kube-system

NAME                              READY  STATUS    RESTARTS  AGE

coredns-558bd4d5db-smp62          1/1    Running  0          13m

coredns-558bd4d5db-zcmp5          1/1    Running  0          13m

etcd-master01                      1/1    Running  0          14m

etcd-master02                      1/1    Running  0          3m10s

etcd-master03                      1/1    Running  0          115s

kube-apiserver-master01            1/1    Running  0          14m

kube-apiserver-master02            1/1    Running  0          3m13s

kube-apiserver-master03            1/1    Running  0          116s

kube-controller-manager-master01  1/1    Running  1          13m

kube-controller-manager-master02  1/1    Running  0          3m13s

kube-controller-manager-master03  1/1    Running  0          116s

kube-proxy-629zl                  1/1    Running  0          2m17s

kube-proxy-85qn8                  1/1    Running  0          3m15s

kube-proxy-fhqzt                  1/1    Running  0          13m

kube-proxy-jdxbd                  1/1    Running  0          3m40s

kube-proxy-ks97x                  1/1    Running  0          4m3s

kube-scheduler-master01            1/1    Running  1          13m

kube-scheduler-master02            1/1    Running  0          3m13s

kube-scheduler-master03            1/1    Running  0          115s

Check the pods in the calico-system namespace again.

[root@master01 ~]# kubectl get pod -n calico-system

NAME                                      READY  STATUS    RESTARTS  AGE

calico-kube-controllers-666bb9949-4z77k  1/1    Running  0          10m

calico-node-b5wjv                        1/1    Running  0          10m

calico-node-d427l                        1/1    Running  0          4m45s

calico-node-jkq7f                        1/1    Running  0          2m59s

calico-node-wtjnm                        1/1    Running  0          4m22s

calico-node-xxh2p                        1/1    Running  0          3m57s

calico-typha-7cd9d6445b-5zcg5            1/1    Running  0          2m54s

calico-typha-7cd9d6445b-b5d4j            1/1    Running  0          10m

calico-typha-7cd9d6445b-z44kp            1/1    Running  1          4m17s

On a master node, check whether the network nodes have been added:

[root@master01 ~]# DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl get nodes

NAME

master01

master02

master03

worker01

worker02

A Simple Nginx Deployment on Kubernetes

Step 1: run the command on a master node to create the nginx deployment.

Step 2: expose the port.

Step 3: check the service status.

Note: 32517 is the port exposed externally for this nginx service; to access it, use the master node's IP address followed by the exposed port.

Step 4: access the service. (A sketch of the typical commands follows below.)
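The commands themselves do not appear above; a minimal sketch of the usual steps follows (the deployment name and image tag are illustrative, and the NodePort assigned in your cluster will differ from 32517):

[root@master01 ~]# kubectl create deployment nginx --image=nginx:1.21

[root@master01 ~]# kubectl expose deployment nginx --port=80 --type=NodePort

[root@master01 ~]# kubectl get pods,svc

[root@master01 ~]# curl http://192.168.10.11:32517    # use the NodePort shown by "kubectl get svc"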

Clearing the buff/cache on CentOS 7

free -h

Procedure:

1. Flush dirty data to disk:

[root@localhost ~]# sync

2. Drop the caches as needed:

[root@localhost ~]# echo 3 > /proc/sys/vm/drop_caches

Parameter meanings:

0  // the default

1 - drop the page cache

2 - drop the inode and dentry (directory tree) caches

3 - drop all caches

After rebooting the virtual machines: joining the other master nodes to the cluster

[root@master02 ~]# kubeadm join 192.168.161.100:16443 --token 7t2weq.bjbawausm0jaxury \

    --discovery-token-ca-cert-hash sha256:ca97b57caee598b6165eb266a9f0e01cc6cd23513f952a751b58621cd8ed6cd3 \

    --control-plane --certificate-key f9a98816a7304ae2a90cd650453a4d1bc450074156d7405bc973cebd3a8e7a13

    [preflight] Running pre-flight checks

    [preflight] Reading configuration from the cluster...

    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

    [preflight] Running pre-flight checks before initializing the new control plane instance

    [preflight] Pulling images required for setting up a Kubernetes cluster

    [preflight] This might take a minute or two, depending on the speed of your internet connection

    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

    [download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace

    error execution phase control-plane-prepare/download-certs: error downloading certs: error downloading the secret: Secret "kubeadm-certs" was not found in the "kube-system" Namespace. This Secret might have expired. Please, run `kubeadm init phase upload-certs --upload-certs` on a control plane to generate a new one

    To see the stack trace of this error execute with --v=5 or higher

    [root@master02 ~]# kubeadm join 192.168.161.100:16443 --token 7t2weq.bjbawausm0jaxury \

    --discovery-token-ca-cert-hash sha256:ca97b57caee598b6165eb266a9f0e01cc6cd23513f952a751b58621cd8ed6cd3 \

    --control-plane --certificate-key 4b8212d32253c0bb55609df4ec5cd38304db31318f2075e380f126ccc0d829ef

    [preflight] Running pre-flight checks

    [preflight] Reading configuration from the cluster...

    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

    [preflight] Running pre-flight checks before initializing the new control plane instance

    [preflight] Pulling images required for setting up a Kubernetes cluster

    [preflight] This might take a minute or two, depending on the speed of your internet connection

    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

    [download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace

    [certs] Using certificateDir folder "/etc/kubernetes/pki"

    [certs] Generating "etcd/server" certificate and key

    [certs] etcd/server serving cert is signed for DNS names [localhost master02] and IPs [192.168.161.91 127.0.0.1 ::1]

    [certs] Generating "etcd/peer" certificate and key

    [certs] etcd/peer serving cert is signed for DNS names [localhost master02] and IPs [192.168.161.91 127.0.0.1 ::1]

    [certs] Generating "etcd/healthcheck-client" certificate and key

    [certs] Generating "apiserver-etcd-client" certificate and key

    [certs] Generating "apiserver" certificate and key

    [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master02] and IPs [10.96.0.1 192.168.161.91 192.168.161.100]

    [certs] Generating "apiserver-kubelet-client" certificate and key

    [certs] Generating "front-proxy-client" certificate and key

    [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"

    [certs] Using the existing "sa" key

    [kubeconfig] Generating kubeconfig files

    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"

    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address

    [kubeconfig] Writing "admin.conf" kubeconfig file

    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address

    [kubeconfig] Writing "controller-manager.conf" kubeconfig file

    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address

    [kubeconfig] Writing "scheduler.conf" kubeconfig file

    [control-plane] Using manifest folder "/etc/kubernetes/manifests"

    [control-plane] Creating static Pod manifest for "kube-apiserver"

    [control-plane] Creating static Pod manifest for "kube-controller-manager"

    [control-plane] Creating static Pod manifest for "kube-scheduler"

    [check-etcd] Checking that the etcd cluster is healthy

    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

    [kubelet-start] Starting the kubelet

    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

    [etcd] Announced new etcd member joining to the existing etcd cluster

    [etcd] Creating static Pod manifest for "etcd"

    [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s

    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

    [mark-control-plane] Marking the node master02 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]

    [mark-control-plane] Marking the node master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

Certificate signing request was sent to apiserver and approval was received.

The Kubelet was informed of the new secure connection details.

Control plane (master) label and taint were applied to the new node.

The Kubernetes control plane instances scaled up.

A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube

    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@master02 ~]# mkdir -p $HOME/.kube

[root@master02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@master02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@master02 ~]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

master01 Ready control-plane,master 11h v1.21.0

master02 Ready control-plane,master 12m v1.21.0

[root@master02 ~]#

Last login: Fri Sep 16 18:40:39 2022 from 192.168.161.32

[root@master01 ~]# system status kubelet

bash: system: command not found...

[root@master01 ~]# systemctl status kubelet

● kubelet.service - kubelet: The Kubernetes Node Agent

   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)

  Drop-In: /usr/lib/systemd/system/kubelet.service.d

           └─10-kubeadm.conf

   Active: active (running) since Sat 2022-09-17 05:58:35 PDT; 11min ago

     Docs: https://kubernetes.io/docs/

 Main PID: 1146 (kubelet)

    Tasks: 23

   Memory: 257.6M

   CGroup: /system.slice/kubelet.service

           └─1146 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/confi...

Sep 17 06:01:11 master01 kubelet[1146]: ... (kubelet journal log lines elided) ...

Hint: Some lines were ellipsized, use -l to show in full.

[root@master01 ~]# kubeadm init phase upload-certs --upload-certs

I0917 06:26:32.875593 41753 version.go:254] remote version is much newer: v1.25.1; falling back to: stable-1.21

[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace

[upload-certs] Using certificate key:

5c83b15986964d0c72df7782b989c7f86539fe1abd6bf45f885820c5485053e4

[root@master01 ~]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

master01 Ready control-plane,master 11h v1.21.0

master02 Ready control-plane,master 12m v1.21.0

[root@master01 ~]#

The first issue is that, with the HA setup, requests to the Kubernetes master may be routed to a node that has been shut down, which is why access sometimes works and sometimes does not.

The second issue, judging from the error message, is that /root/.kube/config was not found; this file has to be created after initialization.
