Deployment Methods
(1) kubeadm
kubeadm is a Kubernetes deployment tool that provides `kubeadm init` to bootstrap a cluster and `kubeadm join` to add nodes, letting you stand up a cluster quickly.
Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/
(2) Binary packages
Download the release binaries from GitHub and deploy every component by hand to assemble the cluster. (With a binary install the components are managed directly by systemd, which makes debugging easier.)
Releases: https://github.com/kubernetes/kubernetes/releases
(3) Differences
kubeadm lowers the barrier to entry but hides many details, which makes troubleshooting harder. If you want more control, deploying from binary packages is recommended: it takes more manual work, but you learn how the pieces fit together along the way, and the cluster is easier to maintain later.
Prerequisites
- Several machines running CentOS 7.x x86_64 (minimum: 2 GB RAM, 2 CPU cores, 30 GB disk)
- Network connectivity between all cluster machines
- Internet access on each machine so images can be pulled
- Swap disabled. (Kubernetes schedules pods against fixed CPU/memory requests and limits and tries to pack nodes as close to full as possible; once the scheduler places a pod on a node, that pod should not be swapped out. Swap is disabled mainly for performance and scheduling predictability.)
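A quick way to sanity-check these prerequisites on every node is a short shell snippet like the one below (a minimal sketch; the master IP 192.168.127.200 is the one used in the node plan that follows):
# check resources, swap, SELinux and connectivity on each node
nproc && free -m | awk '/Mem/ {print "mem(MB):", $2} /Swap/ {print "swap(MB):", $2}'   # swap should be 0 once disabled
getenforce                                    # should print Disabled (or Permissive) after the SELinux step
ping -c 1 192.168.127.200 >/dev/null && echo "master reachable"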
Cluster Setup
Node Plan
Node | IP | Hardware
--- | --- | ---
k8s-master | 192.168.127.200 | 2 CPU cores, 2 GB RAM, 30 GB disk
k8s-node1 | 192.168.127.201 | 4 CPU cores, 4 GB RAM, 50 GB disk
k8s-node2 | 192.168.127.202 | 4 CPU cores, 4 GB RAM, 50 GB disk
System Configuration
1. After installing the three machines, configure a static IP on each one; the master is shown below, and the two worker nodes are configured the same way.
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static # static IP configuration
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=c9acc2a5-0995-4011-8c6c-5c164116fdad
DEVICE=ens33
ONBOOT=yes # bring the interface up at boot
IPADDR=192.168.127.200 # IP address
GATEWAY=192.168.127.2 # gateway
NETMASK=255.255.255.0 # subnet mask
DNS1=8.8.8.8 # Google public DNS server
DNS2=8.8.4.4 # Google public DNS server
2. Configure public DNS servers by adding the following to /etc/resolv.conf:
[root@localhost ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 8.8.4.4
3. Stop the firewall and disable it at boot
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]#
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~]#
[root@localhost ~]# service network restart
Restarting network (via systemctl): [ 确定 ]
[root@localhost ~]#
4. Disable SELinux
- setenforce 0 (temporary, until the next reboot)
- sed -i 's/enforcing/disabled/' /etc/selinux/config (permanent, takes effect after reboot)
[root@localhost ~]# setenforce 0
[root@localhost ~]#
[root@localhost ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@localhost ~]#
[root@localhost ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# disabled - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of disabled.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
5. Disable the swap partition
- swapoff -a (temporary, until the next reboot)
- edit /etc/fstab (permanent: comment out or delete the swap line)
(1) Turn swap off
[root@localhost ~]# swapoff -a
(2) Remove the swap mount permanently by commenting out or deleting the swap line in /etc/fstab
[root@localhost ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Nov 2 19:17:25 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=cf61bc77-78a5-4848-bf55-6a43f44eeaff /boot xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
(3) Verify with `free -h`: every value in the Swap row should be 0
[root@localhost ~]# free -h
total used free shared buff/cache available
Mem: 1.8G 118M 1.4G 8.6M 255M 1.5G
Swap: 0B 0B 0B
6. Set the hostname with hostnamectl set-hostname <hostname> (k8s-master here; use k8s-node1 and k8s-node2 on the other two machines)
[root@localhost ~]# hostnamectl set-hostname k8s-master
[root@localhost ~]# hostname
k8s-master
7. Add hosts entries on every node
[root@localhost ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
[root@localhost ~]#
[root@localhost ~]# cat >> /etc/hosts << EOF
> 192.168.127.200 k8s-master
> 192.168.127.201 k8s-node1
> 192.168.127.202 k8s-node2
> EOF
[root@localhost ~]#
[root@localhost ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.127.200 k8s-master
192.168.127.201 k8s-node1
192.168.127.202 k8s-node2
[root@localhost ~]#
8. Pass bridged IPv4 traffic to the iptables chains
[root@localhost ~]# cat > /etc/sysctl.d/k8s.conf << EOF # write the configuration
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@localhost ~]#
[root@localhost ~]# cat /etc/sysctl.d/k8s.conf # verify
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# sysctl --system # apply
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
9. Synchronize the system time
[root@localhost ~]# yum install ntpdate -y
已加载插件:fastestmirror
base | 3.6 kB 00:00:00
extras | 2.9 kB 00:00:00
updates | 2.9 kB 00:00:00
(1/4): extras/7/x86_64/primary_db | 249 kB 00:00:00
(2/4): base/7/x86_64/group_gz | 153 kB 00:00:01
(3/4): updates/7/x86_64/primary_db | 17 MB 00:00:02
(4/4): base/7/x86_64/primary_db | 6.1 MB 00:00:52
Determining fastest mirrors
* base: mirrors.bfsu.edu.cn
* extras: mirrors.bfsu.edu.cn
......
已安装:
ntpdate.x86_64 0:4.2.6p5-29.el7.centos.2
完毕!
[root@localhost ~]# date
2022年 11月 02日 星期三 14:40:10 CST
[root@localhost ~]# ntpdate time.windows.com
2 Nov 14:41:36 ntpdate[19589]: adjust time server 52.231.114.183 offset -0.001893 sec
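ntpdate only performs a one-off adjustment. To keep the clocks in sync afterwards you can, for example, add a cron entry (an illustrative sketch; the time server and the interval are arbitrary choices):
# append a root cron job that re-syncs the clock every 30 minutes
(crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1') | crontab -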
Cluster Deployment
In the Kubernetes version used here, the default container runtime is Docker, so install Docker first.
1. Install Docker
Refer to "Installing and Configuring Docker on CentOS 7", or use the following steps:
(1) Install
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
docker --version
[root@localhost ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
--2022-11-02 14:58:32-- https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
正在解析主机 mirrors.aliyun.com (mirrors.aliyun.com)... 117.161.156.242, 117.161.156.241, 117.161.156.239, ...
正在连接 mirrors.aliyun.com (mirrors.aliyun.com)|117.161.156.242|:443... 已连接。
已发出 HTTP 请求,正在等待回应... 200 OK
长度:2081 (2.0K) [application/octet-stream]
正在保存至: “/etc/yum.repos.d/docker-ce.repo”
100%[=====================================================================================================================================================================================>] 2,081 --.-K/s 用时 0.01s
2022-11-02 14:58:32 (173 KB/s) - 已保存 “/etc/yum.repos.d/docker-ce.repo” [2081/2081])
[root@localhost ~]#
[root@localhost ~]# yum -y install docker-ce-18.06.1.ce-3.el7
已加载插件:fastestmirror
docker-ce-stable | 3.5 kB 00:00:00
(1/2): docker-ce-stable/7/x86_64/updateinfo | 55 B 00:00:00
(2/2): docker-ce-stable/7/x86_64/primary_db | 87 kB 00:00:00
Loading mirror speeds from cached hostfile
* base: mirrors.bfsu.edu.cn
* extras: mirrors.bfsu.edu.cn
* updates: mirrors.bfsu.edu.cn
正在解决依赖关系
--> 正在检查事务
---> 软件包 docker-ce.x86_64.0.18.06.1.ce-3.el7 将被 安装
--> 正在处理依赖关系 container-selinux >= 2.9,它被软件包 docker-ce-18.06.1.ce-3.el7.x86_64 需要
--> 正在处理依赖关系 libcgroup,它被软件包 docker-ce-18.06.1.ce-3.el7.x86_64 需要
--> 正在检查事务
---> 软件包 container-selinux.noarch.2.2.119.2-1.911c772.el7_8 将被 安装
--> 正在处理依赖关系 selinux-policy-targeted >= 3.13.1-216.el7,它被软件包 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 需要
--> 正在处理依赖关系 selinux-policy-base >= 3.13.1-216.el7,它被软件包 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 需要
--> 正在处理依赖关系 selinux-policy >= 3.13.1-216.el7,它被软件包 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 需要
--> 正在处理依赖关系 policycoreutils-python,它被软件包 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 需要
---> 软件包 libcgroup.x86_64.0.0.41-21.el7 将被 安装
--> 正在检查事务
---> 软件包 policycoreutils-python.x86_64.0.2.5-34.el7 将被 安装
--> 正在处理依赖关系 policycoreutils = 2.5-34.el7,它被软件包 policycoreutils-python-2.5-34.el7.x86_64 需要
--> 正在处理依赖关系 setools-libs >= 3.3.8-4,它被软件包 policycoreutils-python-2.5-34.el7.x86_64 需要
--> 正在处理依赖关系 libsemanage-python >= 2.5-14,它被软件包 policycoreutils-python-2.5-34.el7.x86_64 需要
--> 正在处理依赖关系 audit-libs-python >= 2.1.3-4,它被软件包 policycoreutils-python-2.5-34.el7.x86_64 需要
--> 正在处理依赖关系 python-IPy,它被软件包 policycoreutils-python-2.5-34.el7.x86_64 需要
--> 正在处理依赖关系 libqpol.so.1(VERS_1.4)(64bit),它被软件包 policycoreutils-python-2.5-34.el7.x86_64 需要
--> 正在处理依赖关系 libqpol.so.1(VERS_1.2)(64bit),它被软件包 policycoreutils-python-2.5-34.el7.x86_64 需要
--> 正在处理依赖关系 libapol.so.4(VERS_4.0)(64bit),它被软件包 policycoreutils-python-2.5-34.el7.x86_64 需要
--> 正在处理依赖关系 checkpolicy,它被软件包 policycoreutils-python-2.5-34.el7.x86_64 需要
--> 正在处理依赖关系 libqpol.so.1()(64bit),它被软件包 policycoreutils-python-2.5-34.el7.x86_64 需要
--> 正在处理依赖关系 libapol.so.4()(64bit),它被软件包 policycoreutils-python-2.5-34.el7.x86_64 需要
---> 软件包 selinux-policy.noarch.0.3.13.1-166.el7 将被 升级
---> 软件包 selinux-policy.noarch.0.3.13.1-268.el7_9.2 将被 更新
--> 正在处理依赖关系 libsemanage >= 2.5-13,它被软件包 selinux-policy-3.13.1-268.el7_9.2.noarch 需要
---> 软件包 selinux-policy-targeted.noarch.0.3.13.1-166.el7 将被 升级
---> 软件包 selinux-policy-targeted.noarch.0.3.13.1-268.el7_9.2 将被 更新
--> 正在检查事务
---> 软件包 audit-libs-python.x86_64.0.2.8.5-4.el7 将被 安装
--> 正在处理依赖关系 audit-libs(x86-64) = 2.8.5-4.el7,它被软件包 audit-libs-python-2.8.5-4.el7.x86_64 需要
---> 软件包 checkpolicy.x86_64.0.2.5-8.el7 将被 安装
---> 软件包 libsemanage.x86_64.0.2.5-8.el7 将被 升级
---> 软件包 libsemanage.x86_64.0.2.5-14.el7 将被 更新
--> 正在处理依赖关系 libsepol >= 2.5-10,它被软件包 libsemanage-2.5-14.el7.x86_64 需要
--> 正在处理依赖关系 libselinux >= 2.5-14,它被软件包 libsemanage-2.5-14.el7.x86_64 需要
---> 软件包 libsemanage-python.x86_64.0.2.5-14.el7 将被 安装
---> 软件包 policycoreutils.x86_64.0.2.5-17.1.el7 将被 升级
---> 软件包 policycoreutils.x86_64.0.2.5-34.el7 将被 更新
--> 正在处理依赖关系 libselinux-utils >= 2.5-14,它被软件包 policycoreutils-2.5-34.el7.x86_64 需要
---> 软件包 python-IPy.noarch.0.0.75-6.el7 将被 安装
---> 软件包 setools-libs.x86_64.0.3.3.8-4.el7 将被 安装
--> 正在检查事务
---> 软件包 audit-libs.x86_64.0.2.7.6-3.el7 将被 升级
--> 正在处理依赖关系 audit-libs(x86-64) = 2.7.6-3.el7,它被软件包 audit-2.7.6-3.el7.x86_64 需要
---> 软件包 audit-libs.x86_64.0.2.8.5-4.el7 将被 更新
---> 软件包 libselinux.x86_64.0.2.5-11.el7 将被 升级
--> 正在处理依赖关系 libselinux(x86-64) = 2.5-11.el7,它被软件包 libselinux-python-2.5-11.el7.x86_64 需要
---> 软件包 libselinux.x86_64.0.2.5-15.el7 将被 更新
---> 软件包 libselinux-utils.x86_64.0.2.5-11.el7 将被 升级
---> 软件包 libselinux-utils.x86_64.0.2.5-15.el7 将被 更新
---> 软件包 libsepol.x86_64.0.2.5-6.el7 将被 升级
---> 软件包 libsepol.x86_64.0.2.5-10.el7 将被 更新
--> 正在检查事务
---> 软件包 audit.x86_64.0.2.7.6-3.el7 将被 升级
---> 软件包 audit.x86_64.0.2.8.5-4.el7 将被 更新
---> 软件包 libselinux-python.x86_64.0.2.5-11.el7 将被 升级
---> 软件包 libselinux-python.x86_64.0.2.5-15.el7 将被 更新
--> 解决依赖关系完成
依赖关系解决
===============================================================================================================================================================================================================================
Package 架构 版本 源 大小
===============================================================================================================================================================================================================================
正在安装:
docker-ce x86_64 18.06.1.ce-3.el7 docker-ce-stable 41 M
为依赖而安装:
audit-libs-python x86_64 2.8.5-4.el7 base 76 k
checkpolicy x86_64 2.5-8.el7 base 295 k
container-selinux noarch 2:2.119.2-1.911c772.el7_8 extras 40 k
libcgroup x86_64 0.41-21.el7 base 66 k
libsemanage-python x86_64 2.5-14.el7 base 113 k
policycoreutils-python x86_64 2.5-34.el7 base 457 k
python-IPy noarch 0.75-6.el7 base 32 k
setools-libs x86_64 3.3.8-4.el7 base 620 k
为依赖而更新:
audit x86_64 2.8.5-4.el7 base 256 k
audit-libs x86_64 2.8.5-4.el7 base 102 k
libselinux x86_64 2.5-15.el7 base 162 k
libselinux-python x86_64 2.5-15.el7 base 236 k
libselinux-utils x86_64 2.5-15.el7 base 151 k
libsemanage x86_64 2.5-14.el7 base 151 k
libsepol x86_64 2.5-10.el7 base 297 k
policycoreutils x86_64 2.5-34.el7 base 917 k
selinux-policy noarch 3.13.1-268.el7_9.2 updates 498 k
selinux-policy-targeted noarch 3.13.1-268.el7_9.2 updates 7.0 M
事务概要
===============================================================================================================================================================================================================================
安装 1 软件包 (+ 8 依赖软件包)
升级 ( 10 依赖软件包)
总下载量:52 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
(1/19): audit-libs-2.8.5-4.el7.x86_64.rpm | 102 kB 00:00:00
(2/19): audit-libs-python-2.8.5-4.el7.x86_64.rpm | 76 kB 00:00:00
(3/19): audit-2.8.5-4.el7.x86_64.rpm | 256 kB 00:00:00
(4/19): libcgroup-0.41-21.el7.x86_64.rpm | 66 kB 00:00:00
(5/19): checkpolicy-2.5-8.el7.x86_64.rpm | 295 kB 00:00:00
(6/19): libselinux-2.5-15.el7.x86_64.rpm | 162 kB 00:00:00
(7/19): libselinux-utils-2.5-15.el7.x86_64.rpm | 151 kB 00:00:00
(8/19): libselinux-python-2.5-15.el7.x86_64.rpm | 236 kB 00:00:00
(9/19): container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm | 40 kB 00:00:00
(10/19): libsemanage-python-2.5-14.el7.x86_64.rpm | 113 kB 00:00:00
(11/19): libsepol-2.5-10.el7.x86_64.rpm | 297 kB 00:00:00
(12/19): libsemanage-2.5-14.el7.x86_64.rpm | 151 kB 00:00:00
(13/19): policycoreutils-2.5-34.el7.x86_64.rpm | 917 kB 00:00:00
(14/19): python-IPy-0.75-6.el7.noarch.rpm | 32 kB 00:00:00
(15/19): setools-libs-3.3.8-4.el7.x86_64.rpm | 620 kB 00:00:00
(16/19): selinux-policy-3.13.1-268.el7_9.2.noarch.rpm | 498 kB 00:00:00
(17/19): selinux-policy-targeted-3.13.1-268.el7_9.2.noarch.rpm | 7.0 MB 00:00:01
(18/19): policycoreutils-python-2.5-34.el7.x86_64.rpm | 457 kB 00:00:01
warning: /var/cache/yum/x86_64/7/docker-ce-stable/packages/docker-ce-18.06.1.ce-3.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY====================================-] 115 kB/s | 52 MB 00:00:00 ETA
docker-ce-18.06.1.ce-3.el7.x86_64.rpm 的公钥尚未安装
(19/19): docker-ce-18.06.1.ce-3.el7.x86_64.rpm | 41 MB 00:06:10
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
总计 144 kB/s | 52 MB 00:06:10
从 https://mirrors.aliyun.com/docker-ce/linux/centos/gpg 检索密钥
导入 GPG key 0x621E9F35:
用户ID : "Docker Release (CE rpm) <docker@docker.com>"
指纹 : 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
来自 : https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
正在更新 : libsepol-2.5-10.el7.x86_64 1/29
正在更新 : libselinux-2.5-15.el7.x86_64 2/29
正在更新 : audit-libs-2.8.5-4.el7.x86_64 3/29
正在更新 : libsemanage-2.5-14.el7.x86_64 4/29
正在更新 : libselinux-utils-2.5-15.el7.x86_64 5/29
正在更新 : policycoreutils-2.5-34.el7.x86_64 6/29
正在更新 : selinux-policy-3.13.1-268.el7_9.2.noarch 7/29
正在安装 : libcgroup-0.41-21.el7.x86_64 8/29
正在更新 : selinux-policy-targeted-3.13.1-268.el7_9.2.noarch 9/29
正在安装 : libsemanage-python-2.5-14.el7.x86_64 10/29
正在安装 : audit-libs-python-2.8.5-4.el7.x86_64 11/29
正在安装 : setools-libs-3.3.8-4.el7.x86_64 12/29
正在更新 : libselinux-python-2.5-15.el7.x86_64 13/29
正在安装 : python-IPy-0.75-6.el7.noarch 14/29
正在安装 : checkpolicy-2.5-8.el7.x86_64 15/29
正在安装 : policycoreutils-python-2.5-34.el7.x86_64 16/29
正在安装 : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 17/29
正在安装 : docker-ce-18.06.1.ce-3.el7.x86_64 18/29
正在更新 : audit-2.8.5-4.el7.x86_64 19/29
清理 : selinux-policy-targeted-3.13.1-166.el7.noarch 20/29
清理 : selinux-policy-3.13.1-166.el7.noarch 21/29
清理 : policycoreutils-2.5-17.1.el7.x86_64 22/29
清理 : libsemanage-2.5-8.el7.x86_64 23/29
清理 : libselinux-utils-2.5-11.el7.x86_64 24/29
清理 : libselinux-python-2.5-11.el7.x86_64 25/29
清理 : libselinux-2.5-11.el7.x86_64 26/29
清理 : audit-2.7.6-3.el7.x86_64 27/29
清理 : audit-libs-2.7.6-3.el7.x86_64 28/29
清理 : libsepol-2.5-6.el7.x86_64 29/29
验证中 : libselinux-2.5-15.el7.x86_64 1/29
验证中 : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 2/29
验证中 : selinux-policy-targeted-3.13.1-268.el7_9.2.noarch 3/29
验证中 : audit-libs-2.8.5-4.el7.x86_64 4/29
验证中 : checkpolicy-2.5-8.el7.x86_64 5/29
验证中 : policycoreutils-2.5-34.el7.x86_64 6/29
验证中 : python-IPy-0.75-6.el7.noarch 7/29
验证中 : libselinux-utils-2.5-15.el7.x86_64 8/29
验证中 : policycoreutils-python-2.5-34.el7.x86_64 9/29
验证中 : setools-libs-3.3.8-4.el7.x86_64 10/29
验证中 : audit-2.8.5-4.el7.x86_64 11/29
验证中 : docker-ce-18.06.1.ce-3.el7.x86_64 12/29
验证中 : libsemanage-python-2.5-14.el7.x86_64 13/29
验证中 : libsemanage-2.5-14.el7.x86_64 14/29
验证中 : libselinux-python-2.5-15.el7.x86_64 15/29
验证中 : libsepol-2.5-10.el7.x86_64 16/29
验证中 : selinux-policy-3.13.1-268.el7_9.2.noarch 17/29
验证中 : audit-libs-python-2.8.5-4.el7.x86_64 18/29
验证中 : libcgroup-0.41-21.el7.x86_64 19/29
验证中 : libselinux-utils-2.5-11.el7.x86_64 20/29
验证中 : libselinux-2.5-11.el7.x86_64 21/29
验证中 : libsepol-2.5-6.el7.x86_64 22/29
验证中 : selinux-policy-3.13.1-166.el7.noarch 23/29
验证中 : audit-libs-2.7.6-3.el7.x86_64 24/29
验证中 : audit-2.7.6-3.el7.x86_64 25/29
验证中 : policycoreutils-2.5-17.1.el7.x86_64 26/29
验证中 : libsemanage-2.5-8.el7.x86_64 27/29
验证中 : libselinux-python-2.5-11.el7.x86_64 28/29
验证中 : selinux-policy-targeted-3.13.1-166.el7.noarch 29/29
已安装:
docker-ce.x86_64 0:18.06.1.ce-3.el7
作为依赖被安装:
audit-libs-python.x86_64 0:2.8.5-4.el7 checkpolicy.x86_64 0:2.5-8.el7 container-selinux.noarch 2:2.119.2-1.911c772.el7_8 libcgroup.x86_64 0:0.41-21.el7 libsemanage-python.x86_64 0:2.5-14.el7
policycoreutils-python.x86_64 0:2.5-34.el7 python-IPy.noarch 0:0.75-6.el7 setools-libs.x86_64 0:3.3.8-4.el7
作为依赖被升级:
audit.x86_64 0:2.8.5-4.el7 audit-libs.x86_64 0:2.8.5-4.el7 libselinux.x86_64 0:2.5-15.el7 libselinux-python.x86_64 0:2.5-15.el7 libselinux-utils.x86_64 0:2.5-15.el7
libsemanage.x86_64 0:2.5-14.el7 libsepol.x86_64 0:2.5-10.el7 policycoreutils.x86_64 0:2.5-34.el7 selinux-policy.noarch 0:3.13.1-268.el7_9.2 selinux-policy-targeted.noarch 0:3.13.1-268.el7_9.2
完毕!
[root@localhost ~]# docker --version
Docker version 18.06.1-ce, build e68fc7a
(2) Configure a registry mirror by adding the following to /etc/docker/daemon.json (obtain your own accelerator address from the Aliyun console)
# cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://xxxxxxxx.mirror.aliyuncs.com"]
}
EOF
(3) Start Docker and enable it at boot
[root@localhost docker]# systemctl enable docker && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@localhost docker]#
[root@localhost docker]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@localhost docker]#
2. Install kubeadm, kubelet and kubectl
(1) Configure the Kubernetes yum repository
# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
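To see which versions the repository provides before deciding whether to pin one, yum can list them:
yum list --showduplicates kubeadm kubelet kubectl | tail -20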
(2) Install kubeadm, kubelet and kubectl: yum install -y kubelet kubeadm kubectl
(Without a version suffix this installs the latest release; use kubelet-<version> and so on to pin a version.)
Uninstall: yum remove kubelet kubectl kubeadm
Pin a version: yum install -y kubelet-1.18.8 kubeadm-1.18.8 kubectl-1.18.8
Note: the kubeadm version needs to be compatible with the --kubernetes-version passed to kubeadm init later (v1.18.8 in this guide), so pinning the 1.18.8 packages is the safer choice; the transcript below shows an unpinned install pulling 1.25.3.
[root@localhost ~]# yum install -y kubelet kubeadm kubectl
已加载插件:fastestmirror
kubernetes | 1.4 kB 00:00:00
kubernetes/primary | 119 kB 00:00:01
Loading mirror speeds from cached hostfile
* base: mirrors.bfsu.edu.cn
* extras: mirrors.bfsu.edu.cn
* updates: mirrors.bfsu.edu.cn
kubernetes 879/879
正在解决依赖关系
--> 正在检查事务
---> 软件包 kubeadm.x86_64.0.1.25.3-0 将被 安装
--> 正在处理依赖关系 kubernetes-cni >= 0.8.6,它被软件包 kubeadm-1.25.3-0.x86_64 需要
--> 正在处理依赖关系 cri-tools >= 1.19.0,它被软件包 kubeadm-1.25.3-0.x86_64 需要
---> 软件包 kubectl.x86_64.0.1.25.3-0 将被 安装
---> 软件包 kubelet.x86_64.0.1.25.3-0 将被 安装
--> 正在处理依赖关系 socat,它被软件包 kubelet-1.25.3-0.x86_64 需要
--> 正在处理依赖关系 conntrack,它被软件包 kubelet-1.25.3-0.x86_64 需要
--> 正在检查事务
---> 软件包 conntrack-tools.x86_64.0.1.4.4-7.el7 将被 安装
--> 正在处理依赖关系 libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在处理依赖关系 libnetfilter_queue.so.1()(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cttimeout.so.1()(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cthelper.so.0()(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
---> 软件包 cri-tools.x86_64.0.1.25.0-0 将被 安装
---> 软件包 kubernetes-cni.x86_64.0.1.1.1-0 将被 安装
---> 软件包 socat.x86_64.0.1.7.3.2-2.el7 将被 安装
--> 正在检查事务
---> 软件包 libnetfilter_cthelper.x86_64.0.1.0.0-11.el7 将被 安装
---> 软件包 libnetfilter_cttimeout.x86_64.0.1.0.0-7.el7 将被 安装
---> 软件包 libnetfilter_queue.x86_64.0.1.0.2-2.el7_2 将被 安装
--> 解决依赖关系完成
依赖关系解决
===============================================================================================================================================================================================================================
Package 架构 版本 源 大小
===============================================================================================================================================================================================================================
正在安装:
kubeadm x86_64 1.25.3-0 kubernetes 9.8 M
kubectl x86_64 1.25.3-0 kubernetes 10 M
kubelet x86_64 1.25.3-0 kubernetes 21 M
为依赖而安装:
conntrack-tools x86_64 1.4.4-7.el7 base 187 k
cri-tools x86_64 1.25.0-0 kubernetes 8.2 M
kubernetes-cni x86_64 1.1.1-0 kubernetes 15 M
libnetfilter_cthelper x86_64 1.0.0-11.el7 base 18 k
libnetfilter_cttimeout x86_64 1.0.0-7.el7 base 18 k
libnetfilter_queue x86_64 1.0.2-2.el7_2 base 23 k
socat x86_64 1.7.3.2-2.el7 base 290 k
事务概要
===============================================================================================================================================================================================================================
安装 3 软件包 (+7 依赖软件包)
总下载量:65 M
安装大小:279 M
Downloading packages:
(1/10): conntrack-tools-1.4.4-7.el7.x86_64.rpm | 187 kB 00:00:00
(2/10): e382ead81273ab8ebcddf14cc15bf977e44e1fd541a2cfda6ebe5741c255e59f-cri-tools-1.25.0-0.x86_64.rpm | 8.2 MB 00:01:24
(3/10): 6560dbdf3bac5d951cc7e765966becc9ec3faf7658f66dfba2fbda6bf661292b-kubeadm-1.25.3-0.x86_64.rpm | 9.8 MB 00:01:42
(4/10): 97eac08749f6d218d667aa74f8b711a904a3031d8bb1f38da510514d4fde1d39-kubectl-1.25.3-0.x86_64.rpm | 10 MB 00:01:46
(5/10): libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm | 18 kB 00:00:00
(6/10): libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm | 23 kB 00:00:00
(7/10): socat-1.7.3.2-2.el7.x86_64.rpm | 290 kB 00:00:00
(8/10): libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm | 18 kB 00:00:00
(9/10): e56f02805479cbf60d39ec4a0df95c8945b5998ea2bd1a9b560bd03f800febbf-kubelet-1.25.3-0.x86_64.rpm | 21 MB 00:03:41
(10/10): 14083ac8b11792469524dae98ebb6905b3921923937d6d733b8abb58113082b7-kubernetes-cni-1.1.1-0.x86_64.rpm | 15 MB 00:02:44
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
总计 186 kB/s | 65 MB 00:05:56
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
正在安装 : libnetfilter_cthelper-1.0.0-11.el7.x86_64 1/10
正在安装 : socat-1.7.3.2-2.el7.x86_64 2/10
正在安装 : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 3/10
正在安装 : libnetfilter_queue-1.0.2-2.el7_2.x86_64 4/10
正在安装 : conntrack-tools-1.4.4-7.el7.x86_64 5/10
正在安装 : kubelet-1.25.3-0.x86_64 6/10
正在安装 : kubernetes-cni-1.1.1-0.x86_64 7/10
正在安装 : cri-tools-1.25.0-0.x86_64 8/10
正在安装 : kubectl-1.25.3-0.x86_64 9/10
正在安装 : kubeadm-1.25.3-0.x86_64 10/10
验证中 : kubectl-1.25.3-0.x86_64 1/10
验证中 : cri-tools-1.25.0-0.x86_64 2/10
验证中 : kubernetes-cni-1.1.1-0.x86_64 3/10
验证中 : libnetfilter_queue-1.0.2-2.el7_2.x86_64 4/10
验证中 : kubelet-1.25.3-0.x86_64 5/10
验证中 : conntrack-tools-1.4.4-7.el7.x86_64 6/10
验证中 : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 7/10
验证中 : socat-1.7.3.2-2.el7.x86_64 8/10
验证中 : libnetfilter_cthelper-1.0.0-11.el7.x86_64 9/10
验证中 : kubeadm-1.25.3-0.x86_64 10/10
已安装:
kubeadm.x86_64 0:1.25.3-0 kubectl.x86_64 0:1.25.3-0 kubelet.x86_64 0:1.25.3-0
作为依赖被安装:
conntrack-tools.x86_64 0:1.4.4-7.el7 cri-tools.x86_64 0:1.25.0-0 kubernetes-cni.x86_64 0:1.1.1-0 libnetfilter_cthelper.x86_64 0:1.0.0-11.el7 libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7
libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 socat.x86_64 0:1.7.3.2-2.el7
完毕!
[root@localhost ~]#
(3) Enable kubelet to start at boot
[root@localhost ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
3. Deploy the Kubernetes master node
(1) Run the initialization on the master node (192.168.127.200)
--apiserver-advertise-address: the address the API Server advertises and listens on (the master's IP)
--image-repository: the registry to pull the control-plane images from
--kubernetes-version: the Kubernetes version; it must match the installed packages
--service-cidr: the cluster-internal virtual network for Services, the unified entry point for reaching pods
--pod-network-cidr: the pod network CIDR; it must match the CIDR in the CNI plugin's YAML deployed later
--ignore-preflight-errors: a list of checks whose failures are shown as warnings; the value 'all' ignores errors from every check
More init flags are documented at: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
Because the default image registry k8s.gcr.io is unreachable from mainland China, the Aliyun mirror registry is specified here.
$ kubeadm init \
--apiserver-advertise-address=192.168.127.200 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.8 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all
(2) What kubeadm init does:
- [preflight] environment checks and image pulls
- [certs] certificate generation
- [kubeconfig] kubeconfig file generation
- [kubelet-start] write the kubelet configuration and start the kubelet
- [control-plane] start the master components (including etcd) as static pods
- [mark-control-plane] add the control-plane role label and taint to the master node
- [bootstrap-token] bootstrap tokens so kubelets can obtain certificates
- [addons] install CoreDNS and kube-proxy
If the installation fails with an error like the following:
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
The cause is a cgroup driver mismatch: Docker installed from yum uses the cgroupfs cgroup driver by default, while the kubelet here is configured to use the systemd driver. Make the two consistent by switching Docker to systemd, adding "exec-opts": ["native.cgroupdriver=systemd"] to the Docker config:
[root@localhost ~]# cat /etc/docker/daemon.json
{
"exec-opts":["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://xxxxxxxx.mirror.aliyuncs.com"]
}
# restart docker
systemctl restart docker
# reset the failed kubeadm init and restart kubelet
kubeadm reset
systemctl restart kubelet
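Before re-running the initialization it is worth confirming that Docker has picked up the new driver:
docker info 2>/dev/null | grep -i 'cgroup driver'   # expect: Cgroup Driver: systemd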
(3) Re-run the initialization
[root@localhost ~]# kubeadm init \
> --apiserver-advertise-address=192.168.127.200 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.18.8 \
> --service-cidr=10.96.0.0/12 \
> --pod-network-cidr=10.244.0.0/16 \
> --ignore-preflight-errors=all
W1102 18:21:54.043251 31653 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.127.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.127.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.127.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1102 18:23:29.170725 31653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1102 18:23:29.173761 31653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.006499 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: de7o8q.tvk35fon3u25p774
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.127.200:6443 --token de7o8q.tvk35fon3u25p774 \
--discovery-token-ca-cert-hash sha256:7f5bb7409392f467b2b2524816d66732b4c335cfea6ddda60a48eb9fcf89af85
(1) After the init finishes, copy the kubeconfig file so that kubectl can operate the cluster directly
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
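Alternatively, when working as root you can simply point kubectl at the admin kubeconfig:
export KUBECONFIG=/etc/kubernetes/admin.conf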
(2) Deploy a pod network add-on to the cluster (done in step 4 below)
kubectl apply -f [podnetwork].yaml
(3) Save the kubeadm join command printed at the end; the other nodes use it to join the cluster
kubeadm join 192.168.127.200:6443 --token de7o8q.tvk35fon3u25p774 \
    --discovery-token-ca-cert-hash sha256:7f5bb7409392f467b2b2524816d66732b4c335cfea6ddda60a48eb9fcf89af85
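The bootstrap token is only valid for 24 hours by default. If it has expired by the time another node needs to join, generate a fresh join command on the master:
kubeadm token create --print-join-command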
4. Install a network plugin (CNI)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Or download the manifest locally and apply it:
[root@localhost ~]# kubectl apply -f kube-flannel.yaml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
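Note that the pod CIDR configured in kube-flannel.yml must match the --pod-network-cidr passed to kubeadm init; the public flannel manifest defaults to 10.244.0.0/16, which is why that CIDR was used above. You can check the file with grep (the expected output below reflects the default manifest and may differ in newer releases):
grep -A 5 'net-conf.json' kube-flannel.yml
# expected (default):
#   net-conf.json: |
#     {
#       "Network": "10.244.0.0/16",
#       "Backend": { "Type": "vxlan" }
#     }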
5. Verify the installation
(1) Check the kubelet status with systemctl status kubelet
[root@localhost ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since 三 2022-11-02 19:25:47 CST; 27s ago
Docs: https://kubernetes.io/docs/
Main PID: 21349 (kubelet)
Memory: 30.0M
CGroup: /system.slice/kubelet.service
└─21349 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=c...
11月 02 19:25:54 k8s-node2 kubelet[21349]: E1102 19:25:54.092455 21349 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugi...g uninitialized
11月 02 19:25:58 k8s-node2 kubelet[21349]: I1102 19:25:58.627163 21349 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials
11月 02 19:25:58 k8s-node2 kubelet[21349]: W1102 19:25:58.685577 21349 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
11月 02 19:25:59 k8s-node2 kubelet[21349]: E1102 19:25:59.093987 21349 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugi...g uninitialized
11月 02 19:26:03 k8s-node2 kubelet[21349]: W1102 19:26:03.685911 21349 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
11月 02 19:26:04 k8s-node2 kubelet[21349]: E1102 19:26:04.095797 21349 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugi...g uninitialized
11月 02 19:26:08 k8s-node2 kubelet[21349]: W1102 19:26:08.686502 21349 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
11月 02 19:26:09 k8s-node2 kubelet[21349]: E1102 19:26:09.097876 21349 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugi...g uninitialized
11月 02 19:26:13 k8s-node2 kubelet[21349]: W1102 19:26:13.686855 21349 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
11月 02 19:26:14 k8s-node2 kubelet[21349]: E1102 19:26:14.099667 21349 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugi...g uninitialized
Hint: Some lines were ellipsized, use -l to show in full.
(2) Check that the kube-flannel pod is running
[root@localhost ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-624px 1/1 Running 0 70s
kube-system coredns-7ff77c879f-5g94v 1/1 Running 0 9m15s
kube-system coredns-7ff77c879f-m9m8f 1/1 Running 0 9m15s
kube-system etcd-k8s-master 1/1 Running 0 9m31s
kube-system kube-apiserver-k8s-master 1/1 Running 0 9m31s
kube-system kube-controller-manager-k8s-master 1/1 Running 0 9m31s
kube-system kube-proxy-4gh8x 1/1 Running 0 9m15s
kube-system kube-scheduler-k8s-master 1/1 Running 0 9m31s
6. Join the worker nodes to the cluster
To add a node, run the kubeadm join command that kubeadm init printed when it completed.
(1) Join node1 (192.168.127.201) to the cluster
[root@localhost ~]# kubeadm join 192.168.127.200:6443 --token de7o8q.tvk35fon3u25p774 \
> --discovery-token-ca-cert-hash sha256:7f5bb7409392f467b2b2524816d66732b4c335cfea6ddda60a48eb9fcf89af85
W1102 19:23:45.672188 21889 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
(2) Join node2 (192.168.127.202) to the cluster in the same way
kubeadm join 192.168.127.200:6443 --token de7o8q.tvk35fon3u25p774 \
--discovery-token-ca-cert-hash sha256:7f5bb7409392f467b2b2524816d66732b4c335cfea6ddda60a48eb9fcf89af85
(3) Check that the nodes joined successfully
[root@localhost ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready master 64m v1.18.8 192.168.127.200 <none> CentOS Linux 7 (Core) 3.10.0-693.el7.x86_64 docker://18.6.1
k8s-node1 Ready <none> 4m38s v1.18.8 192.168.127.201 <none> CentOS Linux 7 (Core) 3.10.0-693.el7.x86_64 docker://18.6.1
k8s-node2 Ready <none> 2m36s v1.18.8 192.168.127.202 <none> CentOS Linux 7 (Core) 3.10.0-693.el7.x86_64 docker://18.6.1
(4) Check cluster health with kubectl get cs and kubectl cluster-info
[root@localhost ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0 Healthy {"health":"true"}
[root@localhost ~]#
[root@localhost ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.127.200:6443
KubeDNS is running at https://192.168.127.200:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@localhost ~]#
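The Unhealthy status for scheduler and controller-manager above is a known cosmetic issue in kubeadm clusters of this era: the static-pod manifests start both components with --port=0, which disables the insecure health endpoints on 127.0.0.1:10251/10252 that kubectl get cs probes. As long as the corresponding pods are Running, the cluster is healthy. If you want get cs to report Healthy, one common workaround (assuming your manifests do contain the flag) is to comment it out and let the kubelet re-create the static pods:
# on the master: check for the flag, then comment it out in both manifests
grep -n 'port=0' /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i '/- --port=0/ s/^/#/' /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/manifests/kube-controller-manager.yaml
# the kubelet watches /etc/kubernetes/manifests and restarts the static pods automatically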
Cluster Test
Create a pod in the Kubernetes cluster to verify that everything works, using nginx as an example.
# create a deployment
kubectl create deployment nginx --image=nginx
# expose the deployment as a Service (the Service must exist before it can be edited)
kubectl expose deployment nginx --port=80
# change the Service type to NodePort so it can be reached from outside the cluster; editing works like vim
kubectl edit svc nginx
...
spec:
  clusterIP: 10.106.212.113
  externalTrafficPolicy: Cluster
  ports:
  # nodePort: the port exposed on every node for external access, 32627 (allowed range 30000-32767)
  - nodePort: 32627
    # port: the Service's own port inside the cluster
    port: 80
    protocol: TCP
    # targetPort: the container port the traffic is forwarded to
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  # change type to NodePort
  type: NodePort
...
Visit http://192.168.127.200:32627 and you should see the nginx welcome page. At this point we have successfully deployed an nginx Deployment: the Deployment manages the lifecycle of its pods, and the Service exposes them to the outside world.
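You can also verify from the command line; a minimal check (assuming the NodePort 32627 assigned above, and any node IP will work):
kubectl get deployment,pod,svc -o wide | grep nginx     # the pod should be Running and the Service of type NodePort
curl -s http://192.168.127.200:32627 | grep -i '<title>'   # expect: <title>Welcome to nginx!</title>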