Deploying Kubernetes 1.18 on CentOS 8.1


  • Host plan

OS          Role     IP
CentOS8.1   Master   10.5.20.200
CentOS8.1   Node     10.5.20.201

1 Environment preparation

1.1 Install the OS

CentOS 8.1 download link

This is a lab environment running on VMware virtual machines. OS installation walkthroughs are easy to find online, so the install steps are not repeated here.

1.2 Configure the system

Disable swap

# swapoff -a
# cat /etc/fstab
....
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=0a1ffc69-640b-4f72-8638-7573b2ced3c8 /boot                   ext4    defaults        1 2
UUID=68C4-DD2E          /boot/efi               vfat    umask=0077,shortname=winnt 0 2
/dev/mapper/cl-home     /home                   xfs     defaults        0 0
#/dev/mapper/cl-swap     swap                    swap    defaults        0 0  # this line commented out

# mount -a 
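Commenting the swap line in /etc/fstab can also be scripted instead of done by hand. A minimal sketch on a throwaway copy of the file; on the real host, back up /etc/fstab and point the sed command at it:

```shell
# Demo fstab; on a real host operate on /etc/fstab (back it up first).
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/cl-root /    xfs  defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
EOF
# Comment out any uncommented line whose filesystem type is "swap"
sed -i 's|^\([^#].*[[:space:]]swap[[:space:]].*\)$|#\1|' "$fstab"
grep swap "$fstab"   # -> #/dev/mapper/cl-swap swap swap defaults 0 0
```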

Disable firewalld

# systemctl stop firewalld && systemctl disable firewalld

Configure the NIC

# cat /etc/sysconfig/network-scripts/ifcfg-ens192
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens192
UUID=94e2eeb1-50e5-4ea2-b1ec-1258332f01d0
DEVICE=ens192
ONBOOT=yes
IPADDR=10.5.20.200
PREFIX=16
GATEWAY=10.5.0.1

# nmcli c up ens192  # reapply the connection; takes effect immediately

# ip a  # confirm the new address is applied

Configure kernel parameters to pass bridged IPv4 traffic to iptables

# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system
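One caveat: the two net.bridge sysctls above only exist while the br_netfilter kernel module is loaded, so load it and persist it across reboots. The sketch writes the modules-load file to a temp path so it is safe to run anywhere; on the real host the target is /etc/modules-load.d/k8s.conf:

```shell
# On the real host (as root):
#   modprobe br_netfilter
#   echo br_netfilter > /etc/modules-load.d/k8s.conf
modconf=$(mktemp)            # stand-in for /etc/modules-load.d/k8s.conf
echo br_netfilter > "$modconf"
cat "$modconf"               # -> br_netfilter
```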

Add the Aliyun mirror repo

# cd /etc/yum.repos.d

# mkdir repobak && mv *.repo repobak

# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo

# dnf clean all && dnf makecache

Add local hostname resolution

# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.5.20.200 master01.k8s.ucloud.cn master01
10.5.20.201 node01.k8s.ucloud.cn node01
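If the setup steps may be re-run, it helps to append the entries idempotently; a sketch on a temp copy standing in for /etc/hosts:

```shell
hosts=$(mktemp)              # stand-in for /etc/hosts
printf '127.0.0.1 localhost\n' > "$hosts"
for entry in '10.5.20.200 master01.k8s.ucloud.cn master01' \
             '10.5.20.201 node01.k8s.ucloud.cn node01'; do
  # -F: fixed-string match, so each line is only ever added once
  grep -qF "$entry" "$hosts" || printf '%s\n' "$entry" >> "$hosts"
done
cat "$hosts"
```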

2 Install base packages

# dnf remove podman -y  # remove the podman stack shipped with CentOS 8; it conflicts with docker-ce

# dnf  install -y yum-utils device-mapper-persistent-data lvm2  vim bash-completion net-tools gcc 

# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# dnf install  https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm

# dnf -y install docker-ce

Configure the Aliyun registry mirror for docker

# mkdir -p /etc/docker

# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF

# systemctl daemon-reload && systemctl restart docker&&  systemctl enable docker
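A malformed daemon.json (a stray comma is enough) prevents dockerd from starting at all, so it is worth validating the file before the restart. A sketch using python3's json.tool on a temp copy:

```shell
f=$(mktemp)                  # stand-in for /etc/docker/daemon.json
cat > "$f" <<'EOF'
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF
# json.tool exits non-zero on invalid JSON
python3 -m json.tool "$f" > /dev/null && echo "daemon.json OK"
```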

3 Install kubectl, kubeadm and kubelet

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# dnf install -y kubectl kubelet kubeadm

# systemctl enable kubelet

4 Initialize the cluster

# kubeadm init --kubernetes-version=1.18.0  \
--apiserver-advertise-address=10.5.20.200   \
--image-repository registry.aliyuncs.com/google_containers  \
--service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16
  • The pod network is 10.122.0.0/16, and the API server address is the master's own IP.
    This step is critical: by default kubeadm pulls its images from k8s.gcr.io, which is unreachable from mainland China, so --image-repository points it at the Aliyun mirror instead.

Output like the following means the control plane initialized successfully:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.122.21:6443 --token v2r5a4.veazy2xhzetpktfz \
    --discovery-token-ca-cert-hash sha256:daded8514c8350f7c238204979039ff9884d5b595ca950ba8bbce80724fd65d4

Follow the printed instructions:

#  mkdir -p $HOME/.kube
#  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run the following to enable kubectl bash completion:

# source <(kubectl completion bash)
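`source <(...)` only lasts for the current shell; to make completion permanent, append the line to the shell profile. A sketch on a temp file standing in for ~/.bashrc:

```shell
rc=$(mktemp)                 # stand-in for ~/.bashrc
line='source <(kubectl completion bash)'
# Append only if not already present, so repeated runs stay clean
grep -qF "$line" "$rc" || printf '%s\n' "$line" >> "$rc"
cat "$rc"
```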

Check the node:

# kubectl  get nodes
NAME                     STATUS   ROLES    AGE   VERSION
master01.k8s.ucloud.cn   NotReady    master   44h   v1.18.3

#  kubectl get pod --all-namespaces
NAMESPACE              NAME                                             READY   STATUS    RESTARTS   AGE
kube-system            coredns-7ff77c879f-mzxz6                         0/1     Pending   0          44h
kube-system            coredns-7ff77c879f-vzq5g                         0/1     Pending   0          44h
kube-system            etcd-master01.k8s.ucloud.cn                      1/1     Running   0          44h
kube-system            kube-apiserver-master01.k8s.ucloud.cn            1/1     Running   0          44h
kube-system            kube-controller-manager-master01.k8s.ucloud.cn   1/1     Running   0          44h
kube-system            kube-proxy-pfhrm                                 1/1     Running   0          23h
kube-system            kube-proxy-xhtnt                                 1/1     Running   0          44h
kube-system            kube-scheduler-master01.k8s.ucloud.cn            1/1     Running   0          44h

The node is NotReady and the coredns pods stay Pending because no CNI network plugin has been installed yet.

5 Install the Calico network

# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

Check the pods and nodes:

#  kubectl get pod --all-namespaces

NAMESPACE              NAME                                             READY   STATUS    RESTARTS   AGE
kube-system            calico-kube-controllers-789f6df884-rx6gx         1/1     Running   0          44h
kube-system            calico-node-hwbh9                                1/1     Running   0          23h
kube-system            calico-node-l45nh                                1/1     Running   0          44h
kube-system            coredns-7ff77c879f-mzxz6                         1/1     Running   0          44h
kube-system            coredns-7ff77c879f-vzq5g                         1/1     Running   0          44h
kube-system            etcd-master01.k8s.ucloud.cn                      1/1     Running   0          44h
kube-system            kube-apiserver-master01.k8s.ucloud.cn            1/1     Running   0          44h
kube-system            kube-controller-manager-master01.k8s.ucloud.cn   1/1     Running   0          44h
kube-system            kube-proxy-pfhrm                                 1/1     Running   0          23h
kube-system            kube-proxy-xhtnt                                 1/1     Running   0          44h
kube-system            kube-scheduler-master01.k8s.ucloud.cn            1/1     Running   0          44h
kube-system            traefik-ingress-controller-slj44                 1/1     Running   0          71m
kubernetes-dashboard   dashboard-metrics-scraper-dc6947fbf-dcfz9        1/1     Running   0          44h
kubernetes-dashboard   kubernetes-dashboard-5d4dc8b976-5vnpc            1/1     Running   0          44h
monitoring             prometheus-alertmanager-57c654d958-lm2bd         2/2     Running   0          110m
monitoring             prometheus-kube-state-metrics-55c864f698-cb2pb   1/1     Running   0          110m
monitoring             prometheus-node-exporter-rlzjt                   1/1     Running   0          110m
monitoring             prometheus-pushgateway-f56979ffc-z2mxd           1/1     Running   0          110m
monitoring             prometheus-server-8f95bd494-cdvqk                2/2     Running   6          99m

# kubectl  get nodes
NAME                     STATUS   ROLES    AGE   VERSION
master01.k8s.ucloud.cn   Ready    master   44h   v1.18.3

The cluster is now healthy.

6 Install kubernetes-dashboard

The official dashboard manifest does not expose the service with a NodePort, so download the yaml and add one to the Service:

# wget  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml

# cat recommended.yaml  # Service section after editing
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # change the service type to NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000 # expose the dashboard on this port
  selector:
    k8s-app: kubernetes-dashboard
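The manual edit above can also be scripted, which helps when the manifest is re-downloaded. A sed sketch (GNU sed assumed) on a minimal copy of the Service section; note that on the full manifest this naive `^spec:` pattern would hit every top-level spec, so scope it to the Service document first:

```shell
y=$(mktemp)                  # stand-in for the Service section of recommended.yaml
cat > "$y" <<'EOF'
spec:
  ports:
    - port: 443
      targetPort: 8443
EOF
sed -i \
  -e 's/^spec:/spec:\n  type: NodePort/' \
  -e 's/targetPort: 8443/targetPort: 8443\n      nodePort: 30000/' "$y"
cat "$y"
```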
    
# kubectl create -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Check the pod and service:

# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.10.14.107   <none>        8000/TCP        44h
kubernetes-dashboard        NodePort    10.10.199.68   <none>        443:30000/TCP   44h

Open the dashboard in a browser:


k8s-login.png

Get a login token (from the cluster-admin-dashboard-sa service account created in the step below):

# kubectl  describe secrets cluster-admin-dashboard-sa-token-|grep token:|awk '{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6IlNQYjk4UHZ3ajNGYnU5MmF5X3hIX2RPZ0tXbWpseXNzcnVnYk9peGRsRGsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNsdXN0ZXItYWRtaW4tZGFzaGJvYXJkLXNhLXRva2VuLXE0OXQ0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXItYWRtaW4tZGFzaGJvYXJkLXNhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYmU0MGE4NTMtMTFhNC00ZjVhLWEzYTktZWZmZjkxNWRmM2U5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y2x1c3Rlci1hZG1pbi1kYXNoYm9hcmQtc2EifQ.FeGGM3Sqk9IAt2rNjxKlRFLqpFFBq_W5j4nu2A0v9LH-zdT1Sx53Y94MnZsm-YrRsUkksX30Zf9jJJVSqqopheS8sTTztyGikl7rQFdF30bO6Vxe6-4AB8VW9j0Z5J1OgbQsVrW6dAknzd3Ge9n9aMLJa9Huan2TcTW6BiAPV5GjC0coJ-4LbZgkRaYoR8knCeHG3drstM1dGNaIfE-NhaRHfVvpM9RL2Nv7pVox2hAN02E5P0Rzudn72OOSamhOsMUpWsC5lxk4GeoXw3jIZl7z4ZcMdeMH9B0GbcSdacIn3VPgiGDUFA0Vhy8KYxA_God92KdBEn8zjhR4MipR9A
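The token is a standard JWT, so its payload can be base64-decoded to confirm which service account it belongs to before pasting it into the login page. A sketch that builds a toy token of the same header.payload.signature shape and decodes the second field; the same two decoding commands work on a real token:

```shell
payload='{"sub":"system:serviceaccount:default:cluster-admin-dashboard-sa"}'
# base64url-encode without padding, as JWTs do
b64() { printf '%s' "$1" | base64 | tr -d '=\n' | tr '+/' '-_'; }
token="$(b64 '{"alg":"RS256"}').$(b64 "$payload").signature"
# Take the payload segment and restore standard base64 alphabet + padding
p=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done
printf '%s' "$p" | base64 -d; echo
```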

If the account behind the token lacks privileges, the dashboard shows no data after login and its logs report permission errors. Create a service account bound to cluster-admin and use its token:

# kubectl create serviceaccount cluster-admin-dashboard-sa
# kubectl create clusterrolebinding cluster-admin-dashboard-sa   --clusterrole=cluster-admin   --serviceaccount=default:cluster-admin-dashboard-sa
# kubectl get secret | grep cluster-admin-dashboard-sa

Refresh the page and the data displays correctly.

k8s-login-ok.png

7 Add a worker node

Run steps 1, 2 and 3 above on the new node, then log in to the master and generate the join command:

# kubeadm token create --print-join-command
W0527 14:03:58.614554   11840 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 10.5.20.200:6443 --token 0pd51f.dlmiwulnd5yc449j     --discovery-token-ca-cert-hash sha256:a8bc410843415f788885e431f0e56d8a4e019586c7ff08f80f81397b80707b25 # run this on the new worker node

# ssh root@10.5.20.201

# kubeadm join 10.5.20.200:6443 --token 0pd51f.dlmiwulnd5yc449j     --discovery-token-ca-cert-hash sha256:a8bc410843415f788885e431f0e56d8a4e019586c7ff08f80f81397b80707b25
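If the join command is lost, the --discovery-token-ca-cert-hash value can be recomputed: as I understand it, it is the SHA-256 of the cluster CA's public key in DER form, taken from /etc/kubernetes/pki/ca.crt on the master. The sketch below demonstrates the pipeline on a throwaway self-signed certificate instead of the real CA:

```shell
dir=$(mktemp -d)
# Throwaway CA standing in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null
# Extract the public key, convert to DER, hash it
hash=$(openssl x509 -pubkey -noout -in "$dir/ca.crt" \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```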

# kubectl get nodes
NAME                     STATUS   ROLES    AGE   VERSION
master01.k8s.ucloud.cn   Ready    master   46h   v1.18.3
node01.k8s.ucloud.cn     Ready    node     25h   v1.18.3

  • The whole cluster is now deployed.