Kubernetes Part 4: Setting up a k8s cluster (rpm + macvlan + ipam)

The previous post, Kubernetes Part 3: Setting up a k8s cluster, installed everything from binary files. In an offline environment you need to install from packages, so this post installs Kubernetes from RPMs, with macvlan as the pod network and an etcd-backed IPAM plugin managing the IP addresses.

Preparation

1). Version information
Component    Version                                      Notes
docker       18.03.0-ce
kubernetes   1.20.4
etcd         3.4.7                                        API VERSION 3.4
linux        CentOS 7, kernel 3.10.0-1127.8.2.el7.x86_64
2). Choose the nodes

Resources are limited, so only three machines are used here; besides the Kubernetes components, the etcd cluster shares the same machines.

IP address       Role    Components deployed
173.119.126.200  master  kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kubelet, etcd
173.119.126.199  node    kube-proxy, kubelet, etcd
173.119.126.198  node    kube-proxy, kubelet, etcd
3). Set the hostname and /etc/hosts on all 3 machines
#on the 200 machine, set the hostname (likewise on the other two)
hostnamectl set-hostname k8s-master-126-200
#then add the mappings to /etc/hosts on every machine
vim /etc/hosts
173.119.126.200 k8s-master-126-200
173.119.126.199 k8s-worker-126-199
173.119.126.198 k8s-worker-126-198
4). Verify that the MAC address and product_uuid are unique on every node
ifconfig -a
cat /sys/class/dmi/id/product_uuid
5). Disable the firewall
systemctl stop firewalld       # stop the service
systemctl disable firewalld    # keep it from starting on boot
6). Disable SELinux
sestatus    # check SELinux status
vi /etc/sysconfig/selinux
SELINUX=disabled
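
The change in /etc/sysconfig/selinux only takes effect after a reboot; to switch to permissive mode immediately as well:

setenforce 0    # permissive until the next boot
getenforce      # should now print Permissive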
7). Disable the swap partition
vim /etc/fstab
#comment out the following line
/dev/mapper/rhel-swap   swap    swap    defaults        0 0
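
Commenting out the fstab entry only keeps swap off after a reboot; also turn it off on the running system:

swapoff -a    # disable swap immediately
free -h       # the Swap line should now show 0B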
8). Install etcd

Please refer to other documentation for this step.
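
Once etcd is running, it is worth a quick health check. A sketch, assuming etcdctl is on the PATH and using the TLS certificate paths that the IPAM plugin config later in this article uses:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://173.119.126.199:2379 \
  --cacert=/tools/etcd/ssl/ca.pem \
  --cert=/tools/etcd/ssl/server.pem \
  --key=/tools/etcd/ssl/server-key.pem \
  endpoint health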

9). Configure the VLAN sub-interface on every machine
#enable promiscuous mode on the NIC
ip link set ens160 promisc on    # enable promiscuous mode
ip link show ens160 | grep PROMISC    # verify

#configure the vlan
ip link add link ens160 name ens160.125 type vlan id 125    # tag the NIC with vlan 125
ip link set ens160.125 up    # bring up ens160.125
If the hosts are in the same vlan, you can temporarily give ens160.125 an IP to verify connectivity; this step is not needed for real use:
#ip addr add 173.16.125.250/24 dev ens160.125 brd +    # give ens160.125 a test IP
#ip -f inet addr delete 173.16.125.250/24 dev ens160.125    # remove the IP from ens160.125
#ip link delete ens160.125 type vlan    # delete the vlan interface

#create the macvlan network
docker network create -d macvlan --subnet=173.16.125.0/24 --gateway=173.16.125.254 -o parent=ens160.125 macvlan-125    # create the macvlan network
docker network ls | grep macvlan-125    # verify
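
To sanity-check the macvlan network before Kubernetes uses the same VLAN, you can start a throwaway container on it. A sketch, assuming 173.16.125.200 is unused in the subnet:

docker run --rm --network macvlan-125 --ip 173.16.125.200 busybox ping -c 3 173.16.125.254    # ping the gateway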

Docker installation

Kubernetes installation

kubeadm, kubelet, and kubectl must be installed on every node, and the versions must match. Download the packages on a machine that has internet access.
Add the Kubernetes yum repo

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
EOF
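
With the repo in place, you can list the package versions it offers before downloading:

yum list kubeadm --showduplicates | sort -r | head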

Download
Download kubeadm-1.20.4-0 (its dependencies come along automatically)
yum install --downloadonly --downloaddir /tools/kubernetes/rpm kubeadm-1.20.4-0
The downloaded package files:
cri-tools-1.13.0-0.x86_64.rpm
kubeadm-1.20.4-0.x86_64.rpm
kubectl-1.20.4-0.x86_64.rpm
kubelet-1.20.4-0.x86_64.rpm
kubernetes-cni-0.8.7-0.x86_64.rpm
libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
socat-1.7.3.2-2.el7.x86_64.rpm
Download the matching versions of kubelet and kubectl

yum install --downloadonly --downloaddir /tools/kubernetes/rpm kubelet-1.20.4-0
yum install --downloadonly --downloaddir /tools/kubernetes/rpm kubectl-1.20.4-0

Distribute the packages above to the other server nodes with scp:

scp /tools/kubernetes/rpm/* root@173.119.126.199:/tools/kubernetes/rpm
scp /tools/kubernetes/rpm/* root@173.119.126.198:/tools/kubernetes/rpm

Install the Kubernetes components
The install order matters here (dependencies first); kubernetes-cni is installed with --nodeps because it and kubelet depend on each other.

rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm
rpm -ivh libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
rpm -ivh libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
rpm -ivh libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
rpm -ivh cri-tools-1.13.0-0.x86_64.rpm
rpm -ivh kubernetes-cni-0.8.7-0.x86_64.rpm --nodeps
rpm -ivh kubelet-1.20.4-0.x86_64.rpm
rpm -ivh kubectl-1.20.4-0.x86_64.rpm
rpm -ivh kubeadm-1.20.4-0.x86_64.rpm
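
After installing, confirm the versions on each node:

kubeadm version -o short    # v1.20.4
kubelet --version           # Kubernetes v1.20.4
kubectl version --client --short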

Enable kubelet

systemctl enable kubelet
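
At this point kubelet will start, fail, and restart in a loop every few seconds; this is expected, since it is waiting for kubeadm to generate its configuration. You can watch it with:

systemctl status kubelet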

Generate the kubeadm-config.yaml file

cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.4
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "173.119.126.200:6443"
networking:
  dnsDomain: "cluster.local"
EOF
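
Note that the init command below passes flags instead of this file. To use the file, run the following instead (kubeadm refuses to mix --config with most other flags, so the pod CIDR would then go into the file under networking.podSubnet):

kubeadm init --config kubeadm-config.yaml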

Initialize the master node
kubeadm init --apiserver-advertise-address=173.119.126.200 --pod-network-cidr=173.16.0.0/16

After it runs, you should see output like the following:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 173.119.126.200:6443 --token i2swzj.mbljm7wwsw3tfffs \
    --discovery-token-ca-cert-hash sha256:ae87b8259873818d048f9e096552b91cf61c6cc227456edf2f6c4169baa4ff35 \
    --control-plane --certificate-key 796b87ea14db4cf8c6e9de1dbcc899552add98dfb37975fd82b3087c51e57906

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:
(note the line below)
kubeadm join 173.119.126.200:6443 --token i2swzj.mbljm7wwsw3tfffs \
    --discovery-token-ca-cert-hash sha256:ae87b8259873818d048f9e096552b91cf61c6cc227456edf2f6c4169baa4ff35    
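
The join token expires after 24 hours by default; if it has expired, generate a fresh join command on the master:

kubeadm token create --print-join-command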

Following the prompt, first run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After installing kubelet, kubectl, and kubeadm on the other nodes, run the following on each of them:

kubeadm join 173.119.126.200:6443 --token i2swzj.mbljm7wwsw3tfffs \
    --discovery-token-ca-cert-hash sha256:ae87b8259873818d048f9e096552b91cf61c6cc227456edf2f6c4169baa4ff35

Once that completes, run kubectl get nodes on the master node; you should see:

NAME                 STATUS   ROLES                  AGE    VERSION
k8s-master-126-200   Ready    control-plane,master   7d1h   v1.20.4
k8s-worker-126-198   Ready    <none>                 5d2h   v1.20.4
k8s-worker-126-199   Ready    <none>                 5d3h   v1.20.4
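
If a node stays NotReady, the kubelet log on that node is the first place to look:

journalctl -u kubelet -f                     # on the affected node
kubectl describe node k8s-worker-126-199    # on the master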

Next comes the macvlan configuration. We use a scheme that manages IP addresses centrally across multiple k8s clusters, kindly provided by the author of
cni-ipam-etcd. After compiling it, name the binary ipam-etcd and upload it to /opt/cni/bin/ on every node,
then create the file 00-macvlan.conflist under /etc/cni/net.d/ with the following content:

{
    "name": "myetcd-ipam",
    "cniVersion": "0.3.1",
    "plugins": [
        {
            "name": "mymacvlan",
            "type": "macvlan",
            "master": "ens160",
            "ipam": {
                "name": "myetcd-ipam",
                "type": "ipam-etcd",
                "etcdConfig": {
                    "etcdURL": "https://173.119.126.199:2379",
                    "etcdCertFile": "/tools/etcd/ssl/server.pem",
                    "etcdKeyFile": "/tools/etcd/ssl/server-key.pem",
                    "etcdTrustedCAFileFile": "/tools/etcd/ssl/ca.pem"
                },
                "subnet": "173.16.125.0/24",
                "rangeStart": "173.16.125.10",
                "rangeEnd": "173.16.125.100",
                "gateway": "173.16.125.254",
                "routes": [{
                    "dst": "0.0.0.0/0"
                }]
            }
        }
    ]
}
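
Before testing with a pod, confirm that the plugin binary and the config are in place on every node; the kubelet picks up /etc/cni/net.d/ automatically:

ls -l /opt/cni/bin/ipam-etcd /opt/cni/bin/macvlan    # both binaries must exist
python -m json.tool < /etc/cni/net.d/00-macvlan.conflist    # validates the JSON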

With macvlan configured, we need an application to test it. Create a deployment file busybox.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  replicas: 2
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: busybox:latest # if the nodes cannot reach the internet, point this at an image in a local harbor registry
        args:
        - /bin/sh
        - -c
        - sleep 10; touch /tmp/healthy; sleep 30000

Run

kubectl apply -f busybox.yaml

then check with

kubectl get pods -o wide

Result:

NAME                   READY   STATUS              RESTARTS   AGE    IP              NODE                 NOMINATED NODE   READINESS GATES
app-5f997ff969-77rzl   1/1     Running             14         5d1h   173.16.125.10   k8s-worker-126-199   <none>           <none>
app-5f997ff969-cl8wg   1/1     Running             14         5d1h   173.16.125.11   k8s-worker-126-198   <none>           <none>
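
Both pods got addresses from the configured range (173.16.125.10-100) on different workers. As a final check, ping one pod from the other, using the pod names from the output above:

kubectl exec -it app-5f997ff969-77rzl -- ping -c 3 173.16.125.11

One macvlan caveat to keep in mind: a pod and its own host generally cannot reach each other directly through the parent interface, so always test connectivity from another node or pod.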