K8s Cluster Setup

  1. Machine configuration
    3 CentOS machines:
    master 2c4g
    node1 8c16g
    node2 8c16g
  2. Common configuration (all nodes)
  • Configure yum repositories
    yum install -y yum-utils
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
    enabled=1
    gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    exclude=kubelet kubeadm kubectl
    EOF
  • Disable SELinux and swap
    setenforce 0
    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    swapoff -a  # also comment out any swap entry in /etc/fstab so it stays off after reboot
  • Load the IPVS kernel modules
    modprobe br_netfilter
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4  # on kernels >= 4.19 this module is named nf_conntrack
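The modprobe calls above only last until reboot. A minimal sketch (assuming a systemd-based distribution such as CentOS 7) to reload the same modules on every boot:

```shell
# Persist the IPVS-related modules across reboots via systemd-modules-load.
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
```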
  • Install containerd.io and the Kubernetes components
    yum install -y containerd.io
    rm -f /etc/containerd/config.toml  # the packaged default disables the CRI plugin; removing it restores built-in defaults
    yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
    systemctl enable --now kubelet containerd
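Instead of deleting config.toml, an alternative sketch (an assumption on my part, not part of the original steps) is to regenerate the full default config and enable the systemd cgroup driver, which matches kubelet's default in recent releases:

```shell
# Regenerate containerd's default config and switch to the systemd cgroup driver.
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```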
  • Kernel parameter tuning
    echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
    echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf
    sysctl -p
  3. Install the first master
    3.1 Install the k8s cluster
    kubeadm init --config=kubeadm-config.yaml

    kubeadm-config.yaml:
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 192.168.200.21
      bindPort: 6443
    nodeRegistration:
      criSocket: unix:///var/run/containerd/containerd.sock
      imagePullPolicy: IfNotPresent
      taints: null

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.27.2
clusterName: kubernetes
controlPlaneEndpoint: 192.168.200.241:8443
imageRepository: registry.k8s.io
certificatesDir: /etc/kubernetes/pki
apiServer:
  extraArgs:
    service-node-port-range: "80-32767"
  timeoutForControlPlane: 4m0s
controllerManager: {}
scheduler: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
networking:
  dnsDomain: cluster.local
  serviceSubnet: 172.25.0.0/17
  podSubnet: 192.170.0.0/16


---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
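Once kubeadm init completes it prints follow-up steps; the usual kubeconfig setup so kubectl works for the current user is:

```shell
# Copy the admin kubeconfig so kubectl can talk to the new cluster.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```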
3.2 Install the Calico network plugin

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml
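Before joining more nodes, it is worth confirming the CNI came up; a quick check (namespace name per the Tigera operator defaults):

```shell
# Wait for Calico pods to reach Running, then verify the node reports Ready.
kubectl get pods -n calico-system
kubectl get nodes -o wide
```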

  4. Regenerate a join token on the master
    kubeadm token create --print-join-command  # generate a join command; the token is valid for 24 hours by default
    kubeadm token create --ttl 0 --print-join-command  # generate a token that never expires
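If the printed join command has been lost, the --discovery-token-ca-cert-hash value can also be recomputed from the cluster CA certificate. A small helper wrapping the standard openssl pipeline from the kubeadm documentation (the function name is mine):

```shell
# Compute the sha256 hash of the CA's public key, as kubeadm expects
# for --discovery-token-ca-cert-hash (prefix the result with "sha256:").
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | sed 's/^.* //'
}
# Usage: ca_cert_hash /etc/kubernetes/pki/ca.crt
```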

  5. Install additional masters

Check the token on the master node and substitute your own values below:

kubeadm token list
kubeadm join 192.168.200.241:8443 --token lgvtmy.lt0vww6srga8sprm --discovery-token-ca-cert-hash sha256:f69f229bffcd44bcac83785a716f38c4d2139e4eb6b64ccc5ab27a519589188e --control-plane --certificate-key 38b423e09cb357da1aaee0435a2d273dcbf000300b463827c3389c53c05c687e
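The certificate key embedded in a control-plane join expires after two hours; if it has lapsed, a fresh one can be generated from an existing master:

```shell
# Re-upload control-plane certificates; prints a fresh --certificate-key value.
sudo kubeadm init phase upload-certs --upload-certs
```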

  6. Install workers

Check the token on the master node and substitute your own values below. Note that --certificate-key is only needed for control-plane joins, so it is omitted here:

kubeadm token list
kubeadm join 192.168.200.241:8443 --token lgvtmy.lt0vww6srga8sprm --discovery-token-ca-cert-hash sha256:f69f229bffcd44bcac83785a716f38c4d2139e4eb6b64ccc5ab27a519589188e
