Deploying Kubernetes with kubeadm

Before you begin, the machines in the Kubernetes cluster must meet the following requirements:

  • Three servers running CentOS 7.x (x86_64)
  • Hardware: 2 GB of RAM or more, 2 CPUs or more, 30 GB of disk or more
  • Full network connectivity between all machines in the cluster
  • Internet access (needed to pull images)
  • Swap disabled
  • Time synchronized across the three servers
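
The hardware and swap requirements above can be checked with a short script. This is a minimal sketch that only reads standard /proc files and reports the values; it does not enforce them:

```shell
# precheck.sh - report whether this host meets the stated minimums:
# 2 CPUs, 2 GB RAM, swap disabled. Informational only.
cpus=$(grep -c ^processor /proc/cpuinfo)
mem_kb=$(awk '/^MemTotal/{print $2}' /proc/meminfo)
swap_count=$(tail -n +2 /proc/swaps 2>/dev/null | wc -l)
echo "CPUs: $cpus (need >= 2)"
echo "RAM:  $((mem_kb / 1024)) MB (need >= 2048)"
if [ "$swap_count" -eq 0 ]; then
  echo "swap: off"
else
  echo "swap: STILL ENABLED - run swapoff -a and edit /etc/fstab"
fi
```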

1. Prepare the environment (all three servers)

Hostnames and /etc/hosts entries must match.

# Set the hostnames
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
# Stop and disable the firewall
[root@k8s-master ~]# systemctl stop firewalld
[root@k8s-master ~]# systemctl disable firewalld
# Disable SELinux
[root@k8s-master ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config    # required
[root@k8s-master ~]# setenforce 0
# Turn off swap
[root@k8s-master ~]# swapoff -a   # temporary, until reboot
[root@k8s-master ~]# vim /etc/fstab     # permanent: comment out the swap line, e.g.
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
# Assign static IPs
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33   # static IP configuration
TYPE="Ethernet"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
BOOTPROTO="static"
IPADDR=192.168.0.150
GATEWAY=192.168.0.1
NETMASK=255.255.255.0
DNS1=223.5.5.5
DNS2=223.6.6.6
[root@localhost ~]# systemctl restart network   # restart networking
# Make the hosts resolve each other
[root@k8s-master ~]# vim /etc/hosts
192.168.0.150  k8s-master
192.168.0.151  k8s-node1
192.168.0.152  k8s-node2
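
The three entries can be appended on every machine with an idempotent loop. A sketch; HOSTS_FILE is a hypothetical override so the same loop can be pointed at a scratch file for testing:

```shell
# Append each cluster entry to /etc/hosts only if the hostname is not
# already present, so re-running is safe on any of the three machines.
HOSTS_FILE=${HOSTS_FILE:-/etc/hosts}
while read -r ip name; do
  grep -qw "$name" "$HOSTS_FILE" || echo "$ip  $name" >> "$HOSTS_FILE"
done <<'EOF'
192.168.0.150 k8s-master
192.168.0.151 k8s-node1
192.168.0.152 k8s-node2
EOF
```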

# Pass bridged IPv4 traffic to iptables chains:
[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master ~]# sysctl --system
# If sysctl errors on net.bridge.bridge-nf-call-iptables, load the br_netfilter module:
   # modprobe br_netfilter
   # sysctl -p /etc/sysctl.d/k8s.conf
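
A quick way to confirm the two settings took effect is to read the flags straight from /proc. A sketch; the files only exist once br_netfilter is loaded:

```shell
# Read the bridge-nf flags directly; print a reminder if they are missing.
bridge_status=$(
  for f in /proc/sys/net/bridge/bridge-nf-call-iptables \
           /proc/sys/net/bridge/bridge-nf-call-ip6tables; do
    if [ -r "$f" ]; then
      echo "$f = $(cat "$f")"
    else
      echo "$f missing - run: modprobe br_netfilter"
    fi
  done
)
echo "$bridge_status"
```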

2. Install Docker/kubeadm/kubelet (all three servers)

Kubernetes uses Docker as its default container runtime (CRI), so install Docker first.

2.1 Install Docker

[root@k8s-node1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-node1 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-node1 ~]# yum makecache fast
[root@k8s-node1 ~]# yum -y install docker-ce
[root@k8s-node1 ~]# mkdir -p /etc/docker
[root@k8s-node1 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://2qu17v71.mirror.aliyuncs.com"]
}
EOF
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl start docker
[root@k8s-node1 ~]# systemctl enable docker
# Pull the images
[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.4
[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.4
[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.4
[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.4
[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5
[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1

# Pull the flannel network plugin image
[root@k8s-master ~]# docker pull quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64

# Retag the images
After downloading, retag every image pulled from Aliyun into the k8s.gcr.io namespace, e.g. k8s.gcr.io/kube-controller-manager:v1.17.4:
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.4 k8s.gcr.io/kube-controller-manager:v1.17.4
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.4 k8s.gcr.io/kube-proxy:v1.17.4
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.4 k8s.gcr.io/kube-apiserver:v1.17.4
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.4 k8s.gcr.io/kube-scheduler:v1.17.4
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5 k8s.gcr.io/coredns:1.6.5
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
Keep the version numbers unchanged.
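
The seven pull-and-tag pairs above can be generated with one loop. This sketch only prints the commands so they can be reviewed first; pipe the output to sh to execute them:

```shell
# Build the docker pull/tag command list for every image used by v1.17.4.
SRC=registry.cn-hangzhou.aliyuncs.com/google_containers
cmds=$(for img in kube-apiserver:v1.17.4 kube-controller-manager:v1.17.4 \
                  kube-scheduler:v1.17.4 kube-proxy:v1.17.4 \
                  coredns:1.6.5 etcd:3.4.3-0 pause:3.1; do
  echo "docker pull $SRC/$img"
  echo "docker tag $SRC/$img k8s.gcr.io/$img"
done)
# Review, then re-run the loop piped into sh to actually pull and tag.
echo "$cmds"
```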

2.2 Add the Aliyun YUM repository

[root@k8s-node2 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2.3 Install kubeadm, kubelet, and kubectl

[root@k8s-node2 ~]# yum install -y kubelet-1.17.4-0.x86_64 kubeadm-1.17.4-0.x86_64 kubectl-1.17.4-0.x86_64 ipvsadm
[root@k8s-node2 ~]# systemctl enable kubelet

Load the IPVS kernel modules:

[root@k8s-node2 ~]# modprobe ip_vs
[root@k8s-node2 ~]# modprobe ip_vs_rr
[root@k8s-node2 ~]# modprobe ip_vs_wrr
[root@k8s-node2 ~]# modprobe ip_vs_sh
[root@k8s-node2 ~]# modprobe nf_conntrack_ipv4
# Load the modules at boot
[root@k8s-master ~]# vim /etc/rc.local   # add the following lines
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
[root@k8s-master ~]# chmod +x /etc/rc.local
# Verify the modules loaded
[root@k8s-master ~]# lsmod |grep ip_vs
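
The five modprobe commands and the rc.local entries can be combined into one idempotent loop. A sketch; RC_LOCAL is a hypothetical override so it can be tested against a scratch file:

```shell
# Load each IPVS module now and persist it in rc.local if not already there.
RC_LOCAL=${RC_LOCAL:-/etc/rc.local}
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  modprobe "$m" 2>/dev/null || echo "warn: could not load $m"
  grep -qs "modprobe $m" "$RC_LOCAL" || echo "modprobe $m" >> "$RC_LOCAL"
done
chmod +x "$RC_LOCAL"
```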

2.4 Configure and start kubelet (all nodes)

# Get Docker's cgroup driver
[root@k8s-master ~]# DOCKER_CGROUPS=`docker info |grep 'Cgroup' | awk '{print $3}'`
[root@k8s-master ~]# echo $DOCKER_CGROUPS
cgroupfs
# Configure kubelet to use the same cgroup driver
[root@k8s-master ~]# cat >/etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=k8s.gcr.io/pause:3.1"
EOF
# Start kubelet
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable kubelet && systemctl restart kubelet

3. Deploy the Kubernetes master

Run on k8s-master:

[root@k8s-master ~]# vim master_init.sh
kubeadm init \
  --apiserver-advertise-address=192.168.0.150 \
  --ignore-preflight-errors=Swap \
  --kubernetes-version v1.17.4 \
  --pod-network-cidr=10.244.0.0/16
[root@k8s-master ~]# sh master_init.sh

If you see the kubeadm join line in the output, initialization succeeded. Save that line; it is needed later to join the worker nodes.

[root@k8s-master ~]# vim join.txt
kubeadm join 192.168.0.150:6443 --token jnsqav.36xokeop20dpr3ee \
    --discovery-token-ca-cert-hash sha256:92466332f2934251c3e79b232e7a70599e1cef444ecec63748d972c654d643c8
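
Tokens expire after 24 hours by default; `kubeadm token create` issues a fresh one on the master. If the hash is also lost, it can be recomputed from the CA certificate. This helper is a sketch of the standard openssl derivation of the discovery hash:

```shell
# ca_cert_hash CERT - print the sha256 discovery hash kubeadm expects,
# derived from the CA certificate's DER-encoded public key.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# On the master:
#   kubeadm token create                      # prints a fresh token
#   ca_cert_hash /etc/kubernetes/pki/ca.crt   # prints the hash after "sha256:"
```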

Set up the kubectl tool:

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   4m40s   v1.17.4

4. Install a pod network add-on (CNI) (master node)

[root@k8s-master ~]# cd ~ && mkdir flannel && cd flannel
[root@k8s-master flannel]# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Note that kube-flannel.yml references the flannel image quay.io/coreos/flannel:v0.12.0-amd64. Pull it in advance, or load it from a saved tarball:
[root@k8s-master ~]# docker load < flannel-0.12.0-amd64.tar

# Deploy
[root@k8s-master flannel]# kubectl apply -f kube-flannel.yml
# Check that all kube-system pods reach Running
[root@k8s-master flannel]# kubectl get pods -n kube-system

5. Join the Kubernetes nodes (run on the worker nodes)

To add the worker nodes to the cluster, run the kubeadm join command printed by kubeadm init:

[root@k8s-node1 ~]# kubeadm join 192.168.0.150:6443 --token 3wgsi4.rvfgh84cwkelnfbr \
>     --discovery-token-ca-cert-hash sha256:dc40499eae85cf52abcf28975f48f04587c4fdcbcfe7a8bbf421e74d47e5a927
[root@k8s-node2 ~]# kubeadm join 192.168.0.150:6443 --token 3wgsi4.rvfgh84cwkelnfbr \
>     --discovery-token-ca-cert-hash sha256:dc40499eae85cf52abcf28975f48f04587c4fdcbcfe7a8bbf421e74d47e5a927

6. Test the Kubernetes cluster

Create a pod in the cluster and verify it runs:

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@k8s-master ~]# kubectl get pod,svc
[root@k8s-master ~]# kubectl get pod,svc -o wide
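
To reach nginx from outside the cluster, look up the NodePort the Service was assigned. A hypothetical helper; the function name and the curl target below are assumptions, not part of the original steps:

```shell
# nginx_nodeport - print the NodePort assigned to the nginx Service.
# ${KUBECTL:-kubectl} lets the kubectl command be overridden, e.g. in tests.
nginx_nodeport() {
  ${KUBECTL:-kubectl} get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'
}
# Usage on the master (any node IP works for a NodePort service):
#   curl -s "http://192.168.0.150:$(nginx_nodeport)" | grep -i title
```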

7. Deploy the Dashboard

[root@k8s-master ~]# vim kubernetes-dashboard-1.10.1.yml
The file contents come from kubernetes-dashboard-1.10.1.yml. Because the default image is hard to pull over the network, change the image: line in both places it appears to a mirror:
image: tigerfive/kubernetes-dashboard-amd64:v1.10.1

[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard-1.10.1.yml

Open https://IP:30001/ in Firefox (note: https, not http).

Create a service account and bind it to the default cluster-admin cluster role:

[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard with the token from the output.

