Notes on single-master k8s deployment and public network access from a single node

一、Basic configuration changes


# 1. Three 4C/8G machines: one as master, two as worker nodes

# 2. Disable the firewall

systemctl stop firewalld

systemctl disable firewalld

# 3. Add hosts entries

vim /etc/hosts

192.168.11.21 master

192.168.11.23 node-01

192.168.11.26 node-02

# 4. Disable SELinux

sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent

setenforce 0  # temporary

# 5. Disable swap

swapoff -a  # temporary

sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent
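The per-node steps above can be collected into one script. A minimal sketch (the `run` helper and the `APPLY` switch are my additions, not from the original post; with `APPLY` unset it only prints the commands, so it can be reviewed before being run for real on each node):

```shell
#!/bin/sh
# prep-node.sh -- apply the base settings from section one on a node.
# Safe by default: each command is printed, not executed, unless APPLY=1.

run() {
    if [ "${APPLY:-0}" = "1" ]; then
        "$@"          # execute for real
    else
        echo "$*"     # dry run: show what would be executed
    fi
}

run systemctl stop firewalld
run systemctl disable firewalld
run setenforce 0
run sed -i 's/enforcing/disabled/' /etc/selinux/config
run swapoff -a
run sed -ri 's/.*swap.*/#&/' /etc/fstab
```

Running it once without `APPLY=1` prints the full command list; `APPLY=1 sh prep-node.sh` then applies it.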

二、Upgrade the kernel


1. Check the current kernel version

[root@master ~]# uname -r

3.10.0-1127.el7.x86_64

2. Install the ELRepo repository

# Import the public key

[root@master ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

# Install the ELRepo yum repository

[root@node-01 ~]# rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

Retrieving https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

Preparing...                          ################################# [100%]

Updating / installing...

   1:elrepo-release-7.0-3.el7.elrepo  ################################# [100%]

3. Install the kernel with yum

[root@master ~]# yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml

Loaded plugins: fastestmirror, langpacks

Loading mirror speeds from cached hostfile

Resolving Dependencies

---> Package kernel-ml.x86_64.0.5.17.3-1.el7.elrepo will be installed

---> Package kernel-ml-devel.x86_64.0.5.17.3-1.el7.elrepo will be installed

--> Finished Dependency Resolution

Installing:

 kernel-ml          x86_64    5.17.3-1.el7.elrepo    elrepo-kernel    56 M

 kernel-ml-devel    x86_64    5.17.3-1.el7.elrepo    elrepo-kernel    14 M

Transaction Summary

Install  2 Packages

Total download size: 70 M

Installed size: 311 M

Is this ok [y/d/N]: y

Downloading packages:

Running transaction

  Installing : kernel-ml-5.17.3-1.el7.elrepo.x86_64                          1/2

  Installing : kernel-ml-devel-5.17.3-1.el7.elrepo.x86_64                    2/2

  Verifying  : kernel-ml-devel-5.17.3-1.el7.elrepo.x86_64                    1/2

  Verifying  : kernel-ml-5.17.3-1.el7.elrepo.x86_64                          2/2

Installed:

  kernel-ml.x86_64 0:5.17.3-1.el7.elrepo        kernel-ml-devel.x86_64 0:5.17.3-1.el7.elrepo

Complete!

4. Verify the installed kernels

[root@master ~]# rpm -qa kernel*

kernel-tools-libs-3.10.0-1127.el7.x86_64

kernel-ml-devel-5.17.3-1.el7.elrepo.x86_64

kernel-tools-3.10.0-1127.el7.x86_64

kernel-ml-5.17.3-1.el7.elrepo.x86_64

kernel-3.10.0-1127.el7.x86_64

5. Switch the default kernel

(1) grub2-set-default 0

The newly installed kernel is inserted at the top of the boot menu, so its index is 0 (the previous default sat at index 1).

[root@node-01 ~]# grub2-set-default 0

(2) Alternatively, set it by menu entry name and verify:

[root@master ~]# grub2-set-default 'CentOS Linux (5.17.3-1.el7.elrepo.x86_64) 7 (Core)'

[root@master ~]# grub2-editenv list

saved_entry=CentOS Linux (5.17.3-1.el7.elrepo.x86_64) 7 (Core)

6. Remove the old kernel packages

[root@master ~]# rpm -e kernel-tools-3.10.0-1127.el7.x86_64

[root@master ~]# rpm -e kernel-tools-libs-3.10.0-1127.el7.x86_64

[root@master ~]# rpm -e  kernel-3.10.0-1127.el7.x86_64 --nodeps

7. Reboot and confirm the kernel upgrade

[root@master ~]# reboot

[root@master ~]# uname -r

5.17.3-1.el7.elrepo.x86_64

Enable the IPVS prerequisites


cat > /etc/sysconfig/modules/ipvs.modules <<EOF

#!/bin/bash

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

# Note: on kernels >= 4.19 (we upgraded to 5.17 above), nf_conntrack_ipv4 has been merged into nf_conntrack

modprobe -- nf_conntrack

EOF

# Use lsmod | grep -e ip_vs -e nf_conntrack to verify that the required kernel modules are loaded.

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
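The modules file can equally be generated from a list, which makes it easy to extend (for example to add another scheduler such as ip_vs_lc). A sketch of my own, writing to a scratch path; on a real node the target is /etc/sysconfig/modules/ipvs.modules. Note that on kernels >= 4.19 the conntrack module is nf_conntrack (nf_conntrack_ipv4 no longer exists):

```shell
#!/bin/sh
# Generate the ipvs modules file from a list of module names.
MODFILE=/tmp/ipvs.modules    # use /etc/sysconfig/modules/ipvs.modules on a real node

{
    echo '#!/bin/bash'
    for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
        echo "modprobe -- $m"
    done
} > "$MODFILE"

chmod 755 "$MODFILE"
cat "$MODFILE"
```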


三、Install the k8s yum repository and required dependencies


br_netfilter (bridge netfilter) makes bridged traffic visible to iptables, a prerequisite for kube-proxy and the IPVS setup above.

1. Load the br_netfilter module

[root@master ~]# modprobe br_netfilter

cat <<EOF >  /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF

sysctl -p /etc/sysctl.d/k8s.conf

2. Configure the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=0

repo_gpgcheck=0

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

3. Install base dependency packages

[root@master ~]# yum install -y kmod-kvdo.x86_64

[root@master ~]# yum install -y epel-release.noarch

[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl

4. Enable time synchronization

[root@master ~]# systemctl start ntpdate.service && systemctl enable ntpdate.service

Created symlink from /etc/systemd/system/multi-user.target.wants/ntpdate.service to /usr/lib/systemd/system/ntpdate.service.

Add a cron job to keep cluster clocks in sync:

[root@master ~]# echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/kubernetes_crontab.tmp

[root@master ~]# crontab /tmp/kubernetes_crontab.tmp

5. Raise the relevant limits

# Note: do not prefix these lines with terminal color escapes such as \033[32m;
# echo -e would write the escape bytes into limits.conf and corrupt the entries.

echo "* soft nofile 65536" >> /etc/security/limits.conf

echo "* hard nofile 65536" >> /etc/security/limits.conf

echo "* soft nproc 65536"  >> /etc/security/limits.conf

echo "* hard nproc 65536"  >> /etc/security/limits.conf

echo "* soft memlock unlimited" >> /etc/security/limits.conf

echo "* hard memlock unlimited" >> /etc/security/limits.conf

四、Install kubeadm


1. List the available versions; this guide pins 1.23.5

[root@master ~]# yum list kubelet kubeadm kubectl --showduplicates | sort -r | grep 1.23.5-0

2. Install matching versions of all three

[root@master ~]# yum install -y kubectl-1.23.5 kubelet-1.23.5 kubeadm-1.23.5

# Enable kubelet so it comes up on boot (kubeadm starts it during init/join)

[root@master ~]# systemctl enable kubelet

五、Install etcd


1. Install cfssl, used to issue certificates

[root@master ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

[root@master ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

[root@master ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

[root@master ~]# chmod +x cfssl-certinfo_linux-amd64 cfssljson_linux-amd64 cfssl_linux-amd64

[root@master ~]# mv cfssl_linux-amd64 /usr/local/bin/cfssl

[root@master ~]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

[root@master ~]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

2. Initialize the CA

[root@master ssl]# mkdir /opt/ssl && cd /opt/ssl

[root@master ssl]# vim ca-config.json

{

    "signing":{

        "default":{

            "expiry":"87600h"

        },

        "profiles":{

            "harbor":{

                "usages":[

                    "signing",

                    "key encipherment",

                    "server auth",

                    "client auth"

                ],

                "expiry":"87600h"

            }

        }

    }

}

[root@master ssl]# vim ca-csr.json

{

    "CN":"wjbrain.cluster.local",

    "key":{

        "algo":"rsa",

        "size":2048

    },

    "names":[

        {

            "C":"CN",

            "ST":"beijing",

            "L":"beijing",

            "O":"Wjbrain",

            "OU":"wjbrain"

        }

    ]

}

[root@master ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca

2022/04/14 18:17:25 [INFO] generating a new CA key and certificate from CSR

2022/04/14 18:17:25 [INFO] generate received request

2022/04/14 18:17:25 [INFO] received CSR

2022/04/14 18:17:25 [INFO] generating key: rsa-2048

2022/04/14 18:17:25 [INFO] encoded CSR

2022/04/14 18:17:25 [INFO] signed certificate with serial number 209267530779730413644508566847819874501840455877

[root@master ssl]# vim etcd-csr.json

{

  "CN": "etcd",

  "hosts": [

    "127.0.0.1",

    "172.28.254.252"

  ],

  "key":{

    "algo" : "rsa",

    "size" : 2048

  },

  "names": [

    {

        "C": "CN",

        "ST": "beijing",

        "L": "beijing",

        "O": "Wjbrain",

        "OU": "wjbrain"

    }

  ]

}

Note: the name after --profile= must match one of the entries under "profiles" in ca-config.json (here: harbor); using any other name makes the command fail.

[root@master ssl]# cfssl gencert -ca=ca.pem  -ca-key=ca-key.pem  -config=ca-config.json  -profile=harbor etcd-csr.json | cfssljson -bare etcd

2022/04/14 18:20:52 [INFO] generate received request

2022/04/14 18:20:52 [INFO] received CSR

2022/04/14 18:20:52 [INFO] generating key: rsa-2048

2022/04/14 18:20:52 [INFO] encoded CSR

2022/04/14 18:20:52 [INFO] signed certificate with serial number 5486867488289307979618905130875388013409079190

2022/04/14 18:20:52 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for

websites. For more information see the Baseline Requirements for the Issuance and Management

of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);

specifically, section 10.2.3 ("Information Requirements").
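After issuing the etcd certificate, it is worth confirming the subject, validity period, and SANs, since every IP clients use to reach etcd must appear in the SAN list. A minimal sketch of my own, demonstrated on a throwaway self-signed certificate (requires OpenSSL 1.1.1+ for -addext/-ext); on the real host, run the same x509 inspection against /opt/ssl/etcd.pem instead:

```shell
# Generate a throwaway cert with SANs, then inspect it the same way
# you would inspect the cfssl-issued etcd.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/demo-key.pem -out /tmp/demo.pem \
    -subj "/CN=etcd" \
    -addext "subjectAltName=IP:127.0.0.1,IP:172.28.254.252" 2>/dev/null

# Subject and validity window
openssl x509 -in /tmp/demo.pem -noout -subject -dates

# SANs: every IP/hostname used to reach etcd must be listed here
openssl x509 -in /tmp/demo.pem -noout -ext subjectAltName
```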

3. Install etcd

[root@master ssl]# yum install etcd -y

3.1 Adjust the systemd unit

# Copy the certificates issued above to the paths the unit references; also make sure
# every IP in the listen/advertise URLs below appears in the "hosts" list of etcd-csr.json

[root@master ssl]# mkdir -p /etc/etcd/ssl && cp /opt/ssl/ca.pem /opt/ssl/etcd.pem /opt/ssl/etcd-key.pem /etc/etcd/ssl/

[root@master ssl]# vim /usr/lib/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

#After=network-online.target

Wants=network-online.target

Documentation=https://github.com/coreos

[Service]

Type=notify

WorkingDirectory=/var/lib/etcd/

ExecStart=/usr/bin/etcd \
  --name k8s-master-1.k8s.com \
  --peer-client-cert-auth \
  --client-cert-auth \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.11.21:2380 \
  --listen-peer-urls https://192.168.11.21:2380 \
  --listen-client-urls https://192.168.11.21:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.11.21:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster k8s-master-1.k8s.com=https://192.168.11.21:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd

Restart=on-failure

RestartSec=5

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

[root@master ~]# systemctl enable etcd.service

[root@master ~]# systemctl start etcd.service

六、Install containerd (Docker support via dockershim is deprecated upstream, so containerd is used as the runtime instead)


1. Install

[root@node-01 ~]# yum install -y yum-utils

[root@node-01 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# The docker-ce repository ships the runtime as containerd.io

[root@node-01 ~]# yum install -y containerd.io

2. Adjust the configuration

# Regenerate the default configuration

[root@node-01 ~]# containerd config default > /etc/containerd/config.toml

# Point the sandbox (pause) image at the Aliyun mirror

# before:

sandbox_image = "k8s.gcr.io/pause:3.5"

# after:

sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.2"
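The edit above can also be scripted instead of done in an editor. A sketch of my own, shown against a scratch copy of the file; on a real node, point `cfg` at /etc/containerd/config.toml:

```shell
#!/bin/sh
# Patch sandbox_image in a containerd config.toml non-interactively.
cfg=/tmp/config.toml    # use /etc/containerd/config.toml on the node

# Scratch copy standing in for the generated default config
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "k8s.gcr.io/pause:3.5"
EOF

# Rewrite the sandbox image to the Aliyun mirror
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.2"#' "$cfg"

grep sandbox_image "$cfg"
```

Restart containerd afterwards (`systemctl restart containerd`) so the change takes effect.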

七、Configure the k8s worker nodes


1. Point kubelet at containerd and restart

[root@node-01 ~]# vim /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock"

# If kubelet was not installed in the earlier step, install it first

[root@node-01 ~]# systemctl restart containerd kubelet

2. Use crictl in place of the docker CLI

[root@node-01 ~]# vim /etc/crictl.yaml

runtime-endpoint: unix:///run/containerd/containerd.sock

image-endpoint: unix:///run/containerd/containerd.sock

timeout: 10

debug: false

八、Initialize the cluster with kubeadm


1. Write the init configuration; kubernetesVersion must match the installed kubeadm version

[root@master ~]# vim kubeadm-config.yaml

---

apiVersion: kubeadm.k8s.io/v1beta2

kind: ClusterConfiguration

kubernetesVersion: v1.23.5

imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

controlPlaneEndpoint: "172.28.254.252:6443"

networking:

  serviceSubnet: "10.31.48.0/22"

  podSubnet: "10.31.32.0/19"

  dnsDomain: "cluster.local"

dns:

  type: CoreDNS

etcd:

  external:

    caFile: /opt/ssl/ca.pem

    certFile: /opt/ssl/etcd.pem

    endpoints:

    - https://172.28.254.252:2379

    keyFile: /opt/ssl/etcd-key.pem

---

apiVersion: kubelet.config.k8s.io/v1beta1

kind: KubeletConfiguration
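The `KubeletConfiguration` document above is left without any fields. One field that commonly needs setting when kubelet runs against containerd is the cgroup driver; this is an assumption of mine, not something the original sets, and it must match the containerd side (`SystemdCgroup = true` under the runc options in config.toml):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Assumed addition: keep kubelet's cgroup driver in step with containerd's
cgroupDriver: systemd
```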

# If a previous attempt failed partway, reset before re-running init

kubeadm reset

kubeadm init --config=kubeadm-config.yaml

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities

and service account keys on each node and then running the following as root:

  kubeadm join 172.28.254.252:6443 --token yvnt5i.mgoa3vjiaracx28n \

        --discovery-token-ca-cert-hash sha256:b6865fea40f240a9697922da2532a7f18202d74fba4459dc443caa01b004f8e3 \

        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.28.254.252:6443 --token yvnt5i.mgoa3vjiaracx28n \

        --discovery-token-ca-cert-hash sha256:b6865fea40f240a9697922da2532a7f18202d74fba4459dc443caa01b004f8e3


Recovering the token if you forget it


kubeadm token list

# Recompute the CA certificate hash

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

# Output

3ec96d8717d0d521386fb5895da80548dc2ae48b538fcca2d8afea0ec8c38fdb
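The pipeline above hashes the DER-encoded public key of the cluster CA, which is exactly what `--discovery-token-ca-cert-hash` expects. A self-contained sketch of the same computation against a throwaway CA certificate of my own (substitute /etc/kubernetes/pki/ca.crt on the master):

```shell
#!/bin/sh
# Generate a scratch CA cert, then compute its discovery hash:
# sha256 over the DER-encoded SubjectPublicKeyInfo of the CA key.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/ca.key -out /tmp/ca.crt -subj "/CN=kubernetes" 2>/dev/null

hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //')

# Usable as: --discovery-token-ca-cert-hash sha256:$hash
echo "sha256:$hash"
```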

九、Deploy ingress-nginx


kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/baremetal/deploy.yaml

## Required change: set hostNetwork: true on the controller Deployment, otherwise the domain cannot be reached from outside the cluster. Edit as shown below.

apiVersion: apps/v1

kind: Deployment

metadata:

  labels:

    app.kubernetes.io/component: controller

    app.kubernetes.io/instance: ingress-nginx

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

    app.kubernetes.io/version: 1.8.0

  name: ingress-nginx-controller

  namespace: ingress-nginx

spec:

  minReadySeconds: 0

  revisionHistoryLimit: 10

  selector:

    matchLabels:

      app.kubernetes.io/component: controller

      app.kubernetes.io/instance: ingress-nginx

      app.kubernetes.io/name: ingress-nginx

  template:

    metadata:

      labels:

        app.kubernetes.io/component: controller

        app.kubernetes.io/instance: ingress-nginx

        app.kubernetes.io/name: ingress-nginx

        app.kubernetes.io/part-of: ingress-nginx

        app.kubernetes.io/version: 1.8.0

    spec:

      hostNetwork: true

Apply the manifest; note that the images are pulled from registries outside China, so a mirror or proxy may be needed:

kubectl apply -f deploy.yaml

Proxying by domain name without a port through the ingress: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/

Official documentation: https://kubernetes.github.io/ingress-nginx/deploy/
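With the controller on hostNetwork, a plain Ingress object then routes a domain to a backend Service. A minimal example of mine (the host `demo.example.com` and Service `demo-svc` are placeholders, not from the original):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com          # placeholder domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-svc          # placeholder backend Service
            port:
              number: 80
```

Point the domain's DNS (or a local hosts entry) at the node running the controller and the site is reachable on port 80/443 without a NodePort.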

Getting an external address without a cloud load balancer (MetalLB)

Official documentation: https://metallb.universe.tf/installation/


# Install MetalLB v0.13.10. Pick ONE manifest: metallb-native.yaml (the usual choice)
# or metallb-frr.yaml (FRRouting-based BGP); do not apply both.

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.10/config/manifests/metallb-native.yaml

# Since v0.13, MetalLB is configured through CRDs; the old ConfigMap format shown in
# many older guides is ignored by this version.

[root@master ~]# vim metallb-config.yaml

apiVersion: metallb.io/v1beta1

kind: IPAddressPool

metadata:

  name: default

  namespace: metallb-system

spec:

  addresses:

  - 39.107.205.172-39.107.205.172  # replace with an IP range available to you

---

apiVersion: metallb.io/v1beta1

kind: L2Advertisement

metadata:

  name: default

  namespace: metallb-system

spec:

  ipAddressPools:

  - default

[root@master ~]# kubectl apply -f metallb-config.yaml

Add flannel, otherwise there is no pod network


# The Network value below must match the cluster's podSubnet

# vim kube-flannel.yml

---

kind: Namespace

apiVersion: v1

metadata:

  name: kube-flannel

  labels:

    k8s-app: flannel

    pod-security.kubernetes.io/enforce: privileged

---

kind: ClusterRole

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  labels:

    k8s-app: flannel

  name: flannel

rules:

- apiGroups:

  - ""

  resources:

  - pods

  verbs:

  - get

- apiGroups:

  - ""

  resources:

  - nodes

  verbs:

  - get

  - list

  - watch

- apiGroups:

  - ""

  resources:

  - nodes/status

  verbs:

  - patch

- apiGroups:

  - networking.k8s.io

  resources:

  - clustercidrs

  verbs:

  - list

  - watch

---

kind: ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  labels:

    k8s-app: flannel

  name: flannel

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: flannel

subjects:

- kind: ServiceAccount

  name: flannel

  namespace: kube-flannel

---

apiVersion: v1

kind: ServiceAccount

metadata:

  labels:

    k8s-app: flannel

  name: flannel

  namespace: kube-flannel

---

kind: ConfigMap

apiVersion: v1

metadata:

  name: kube-flannel-cfg

  namespace: kube-flannel

  labels:

    tier: node

    k8s-app: flannel

    app: flannel

data:

  cni-conf.json: |

    {

      "name": "cbr0",

      "cniVersion": "0.3.1",

      "plugins": [

        {

          "type": "flannel",

          "delegate": {

            "hairpinMode": true,

            "isDefaultGateway": true

          }

        },

        {

          "type": "portmap",

          "capabilities": {

            "portMappings": true

          }

        }

      ]

    }

  net-conf.json: |

    {

      "Network": "10.31.32.0/19",

      "Backend": {

        "Type": "vxlan"

      }

    }

---

apiVersion: apps/v1

kind: DaemonSet

metadata:

  name: kube-flannel-ds

  namespace: kube-flannel

  labels:

    tier: node

    app: flannel

    k8s-app: flannel

spec:

  selector:

    matchLabels:

      app: flannel

  template:

    metadata:

      labels:

        tier: node

        app: flannel

    spec:

      affinity:

        nodeAffinity:

          requiredDuringSchedulingIgnoredDuringExecution:

            nodeSelectorTerms:

            - matchExpressions:

              - key: kubernetes.io/os

                operator: In

                values:

                - linux

      hostNetwork: true

      priorityClassName: system-node-critical

      tolerations:

      - operator: Exists

        effect: NoSchedule

      serviceAccountName: flannel

      initContainers:

      - name: install-cni-plugin

        image: docker.io/flannel/flannel-cni-plugin:v1.1.2

      #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2

        command:

        - cp

        args:

        - -f

        - /flannel

        - /opt/cni/bin/flannel

        volumeMounts:

        - name: cni-plugin

          mountPath: /opt/cni/bin

      - name: install-cni

        image: docker.io/flannel/flannel:v0.21.5

      #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5

        command:

        - cp

        args:

        - -f

        - /etc/kube-flannel/cni-conf.json

        - /etc/cni/net.d/10-flannel.conflist

        volumeMounts:

        - name: cni

          mountPath: /etc/cni/net.d

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      containers:

      - name: kube-flannel

        image: docker.io/flannel/flannel:v0.21.5

      #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5

        command:

        - /opt/bin/flanneld

        args:

        - --ip-masq

        - --kube-subnet-mgr

        resources:

          requests:

            cpu: "100m"

            memory: "50Mi"

        securityContext:

          privileged: false

          capabilities:

            add: ["NET_ADMIN", "NET_RAW"]

        env:

        - name: POD_NAME

          valueFrom:

            fieldRef:

              fieldPath: metadata.name

        - name: POD_NAMESPACE

          valueFrom:

            fieldRef:

              fieldPath: metadata.namespace

        - name: EVENT_QUEUE_DEPTH

          value: "5000"

        volumeMounts:

        - name: run

          mountPath: /run/flannel

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

        - name: xtables-lock

          mountPath: /run/xtables.lock

      volumes:

      - name: run

        hostPath:

          path: /run/flannel

      - name: cni-plugin

        hostPath:

          path: /opt/cni/bin

      - name: cni

        hostPath:

          path: /etc/cni/net.d

      - name: flannel-cfg

        configMap:

          name: kube-flannel-cfg

      - name: xtables-lock

        hostPath:

          path: /run/xtables.lock

          type: FileOrCreate

十、Join the worker nodes


[root@node-02 yum.repos.d]# kubeadm join 192.168.11.21:6443 --token 6r704t.puwgzyh9tx5pc6b8  --discovery-token-ca-cert-hash sha256:3ec96d8717d0d521386fb5895da80548dc2ae48b538fcca2d8afea0ec8c38fdb

[preflight] Running pre-flight checks

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Starting the kubelet

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

[kubelet-check] Initial timeout of 40s passed.

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Verify the nodes joined

[root@master ~]# kubectl get nodes

NAME      STATUS     ROLES                  AGE   VERSION

master    NotReady   control-plane,master   89m   v1.23.5

node-01   NotReady   <none>                 6s    v1.23.5

node-02   NotReady   <none>                 10s   v1.23.5

十一、Connecting the office network to k8s


# Add a source NAT rule so traffic from the office network into the pod network is masqueraded

iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -d 10.31.0.0/17 -j MASQUERADE

Notes:

The key is MASQUERADE: outgoing packets are rewritten to carry the IP of the interface they leave through (-o).

Unlike a static SNAT rule, MASQUERADE reads the interface's current address each time, so it keeps working even when eth0 holds a dynamic IP.

# On the office core router, add a static route sending the k8s pod subnet to node-1:

ip route 10.31.0.0/17 192.168.11.104

# Re-apply the rule at boot (e.g. from rc.local or via iptables-save/restore):

iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -d 10.31.0.0/17 -j MASQUERADE
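Re-running the plain `-A` rule at every boot stacks duplicate entries. A small sketch of mine that checks before appending (`DRY_RUN` is my addition; it defaults to printing the command, set `DRY_RUN=0` and run as root to apply):

```shell
#!/bin/sh
# Install the office-to-pod MASQUERADE rule only if it is not already present.
RULE="POSTROUTING -s 192.168.0.0/16 -d 10.31.0.0/17 -j MASQUERADE"
CMD="iptables -t nat -C $RULE 2>/dev/null || iptables -t nat -A $RULE"

if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$CMD"     # review first; run with DRY_RUN=0 as root to apply
else
    eval "$CMD"
fi
```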

Allow the master node to run workloads as well


kubectl taint node k8s-master1 node-role.kubernetes.io/master-
